id
stringlengths
10
10
title
stringlengths
7
231
abstract
stringlengths
3
2.43k
authors
stringlengths
5
21.5k
published_date
stringlengths
20
20
link
stringlengths
33
34
markdown
stringlengths
133
1.92M
2309.01082
Tropical Geometric Tools for Machine Learning: the TML package
In the last decade, developments in tropical geometry have provided a number of uses directly applicable to problems in statistical learning. The TML package is the first R package which contains a comprehensive set of tools and methods used for basic computations related to tropical convexity, visualization of tropically convex sets, as well as supervised and unsupervised learning models using the tropical metric under the max-plus algebra over the tropical projective torus. Primarily, the TML package employs a Hit and Run Markov chain Monte Carlo sampler in conjunction with the tropical metric as its main tool for statistical inference. In addition to basic computation and various applications of the tropical HAR sampler, we also focus on several supervised and unsupervised methods incorporated in the TML package including tropical principal component analysis, tropical logistic regression and tropical kernel density estimation.
David Barnhill, Ruriko Yoshida, Georgios Aliatimis, Keiji Miura
2023-09-03T05:30:27Z
http://arxiv.org/abs/2309.01082v3
# Tropical Geometric Tools for Machine Learning: the **Tml** package ###### Abstract In the last decade, developments in tropical geometry have provided a number of uses directly applicable to problems in statistical learning. The **TML** package is the first R package which contains a comprehensive set of tools and methods used for basic computations related to tropical convexity, visualization of tropically convex sets, as well as supervised and unsupervised learning models using the tropical metric under the max-plus algebra over the tropical projective torus. Primarily, the **TML** package employs a Hit and Run Markov chain Monte Carlo sampler in conjunction with the tropical metric as its main tool for statistical inference. In addition to basic computation and various applications of the tropical HAR sampler, we also focus on several supervised and unsupervised methods incorporated in the **TML** package including tropical principal component analysis, tropical logistic regression and tropical kernel density estimation. ## 1 Introduction Tropical geometry is a relatively young field which involves examining the characteristics of geometric structures defined by the solution set of a series of a polynomial equations in max-plus, or tropical, algebra. Alternatively, tropical geometry can be described as the piecewise-linear analogue of classical geometry, as discussed in [10, 21]. In general, tropical geometry focuses on structures existing in the _tropical projective torus_ defined as the quotient space \(\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) which is isomorphic to \(\mathbb{R}^{e-1}\). To date researchers have focused much attention on the theoretical underpinnings of tropical algebra and geometry (See [29, 10, 21][14] for a thorough treatment). As with any mathematical field, it is natural to examine how the ideas of tropical geometry can be used to solve various statistical problems. However, while methods are currently being developed for applied problems associated with tropical geometry, few comprehensive resources exist to handle such problems. In this article we attempt to fill this need by introducing the **TML** package [6], which provides a number of tropical statistical methods developed for use on problems associated with tropical geometry in the R programming language [26]. The **TML** package can be obtained through the Comprehensive R Archive Network (CRAN) at [[https://cran.r-project.org/web/packages/](https://cran.r-project.org/web/packages/)] ### Data science and tropical geometry Statistical tools in data science are often classified in terms of supervised and unsupervised learning. In the case of supervised learning, data observations possess a dependent variable in the form of a label or quantity of interest which is used to train models in order to predict outcomes or classify unseen data based on those labels. Unsupervised learning is descriptive in nature where data observations possess no pre-determined response variable. The goal in unsupervised learning is to better understand the relationships between data observations or intuit the underlying structure of the data. A thorough yet concise summary with a number of open problems related to supervised and unsupervised learning as applied to tropical geometry can be found in [42]. We will leverage these terms in this article as well, for methods like tropical logistic regression (supervised learning), tropical principal component analysis (unsupervised learning), and tropical kernel density estimation (unsupervised learning). But we will also give significant focus to Markov chain Monte Carlo methods that do not necessarily fall exclusively in one class of machine learning but can be incorporated into supervised and unsupervised techniques or standalone for statistical inference computations. While the research to develop tropical geometric statistical tools is nascent, there have been significant developments in a number of areas. One powerful tool in Euclidean space are Markov chain Monte Carlo (MCMC) samplers, which combine Monte Carlo sampling with Markov chains. Instead of Monte Carlo sampling according to a specific distribution, MCMC methods sample points by building a Markov chain that converges to a desired target distribution [17]. [44] introduced the first Markov chain Monte Carlo hit-and-run (HAR) sampling technique for use in the tropical projective torus. This method samples points uniformly from polytropes, which are tropical polytopes that are classically convex, as well as full-dimensional elements of any generic tropical simplices (see Definition 1.20). The authors also show how to sample points over the space of ultrametrics using line segments [44]. [4] extended this method to show how to sample points about a centroid with a given dispersion parameter in a method akin to Gaussian sampling in Euclidean space. These aforementioned HAR samplers feature prominently in several aspects of statistical learning as they have been applied in a number of settings. [43] employ HAR methods to execute non-parametric density estimation techniques over the space of ultrametrics which is known to be tropically convex [31]. [7] show how to employ HAR methods to estimate the volume of a tropical polytope. Being able to sample points this way paves the way for approaching statistical problems ranging from integral estimation to optimization in the setting of the tropical projective torus. In supervised learning, define the idea of tropical linear regression as the best approximation of a set of data observations using a tropical hyperplane. They then show the relationship of the tropical hyperplane approxima tion with mean payoff games. [2] introduce a supervised classification method called _tropical logistic regression_ for use over the space of rooted and equidistant phylogenetic trees. In this case, phylogenetic trees are defined in terms of ultrametrics and are classified into one of two _species trees_ (see Section 1.4). Tropical analogues of other supervised methods exist such as tropical support vector machines [11]. Specifically, [11] define the notion of the tropical support vector machine (SVM) as a binary classification mechanism. Extensions of tropical SVMs as classifiers are exhibited in both [34] and [40]. Research into tropical unsupervised methods is also burgeoning. [41] introduce the tropical analogue of principal component analysis, (PCA) where the \(n\)-th order principal component can be represented as the best fit tropical polytope for a set of observations that are ultrametrics. It should be noted that the tropical PCA methods have been adapted to use HAR methods to find the best fit polytope. [5] introduce the tropical analogue of k-means and hierarchical clustering over the tropical projective torus as well as focusing on the space of ultrametrics. ### The TML package Availability of software tools in tropical geometry is plentiful when it comes to computer algebra. Singular is a computer algebra system for polynomial computations with emphasis on commutative algebra, algebraic geometry, and singularity theory which includes functionality for use in tropical geometry [9]. However, while Singular provides functionality for applications related to tropical geometry, there is no functionality for specific statistical methods. Similarly, polymake is a software used for research in polyhedral geometry which includes tropical geometry [13]. As with Singular, polymake focuses on the geometric and combinatorial analysis of polytopes. Additionally, polymake provides nice visualization options for tropical polytopes. The Open Source Computer Algebra Research (OSCAR) project provides an extensive corpus of tools by combining the functionality of several software environments including both Singular and polymakefor use in the Julia programming language [23]. While there exists a significant number of software programs that include a focus on tropical geometry as it relates to computer algebra, few resources exist specifically for statistical computation. And though the **algstat** package for the R programming language specifically focuses on algebra statistics, it provides no tools for the tropical case. In fact, there is no comprehensive suite of tropical statistical tools available. Nonetheless, some functionality exists in a piecemeal manner in the R programming language. Supervised and unsupervised learning methods are available in the **RTropical** package which makes use of tropical SVMs as well as tropical PCA over the space of phylogenetic trees [36]. Basic tropical arithmetic functions are available in the **tropical** package though this package has been archived. In this article we introduce the **TML** which serves to provide a comprehensive suite of statistical tools applicable to tropical geometry for use in the R programming language. The package consists of functions and methods rang ing from basic tropical arithmetic and linear algebra functions to more complex supervised and unsupervised tropical machine learning techniques. The **TML** package is distributed on the Comprehensive R Archive Network (CRAN) with version control managed through Git on Github [https://github.com/barhilldave/TML](https://github.com/barhilldave/TML). The organization of this article is as follows. In Section 1.3 we offer the essential elements of tropical geometry that provide the background for the methods introduced in the **TML** package. In Section 2 we illustrate basic loading and operations of the **TML** package as well as visualization methods. Because MCMC methods feature prominently in the **TML** package, Section 3 focuses on the tropical HAR sampler developed in 4 called _tropical HAR with extrapolation_ and illustrates its usage. Section 4 follows with examples for the prominent methods of statistical inference in the **TML** package. These applications include volume estimation of tropical polytopes, tropical logistic regression, tropical PCA, and tropical kernel density estimation. We finish the article with concluding remarks and potential future developments. ### Tropical Basics In this section we provide the necessary tropical geometric background, notation, and terminology as it relates to functions in the **TML** package. This section is divided into two subsections. The first section focuses on tropical analogues of arithmetic and linear algebraic operations. The second subsection defines several essential concepts associated with tropical polyhedral geometry. #### 1.3.1 Tropical arithmetic In general, research in tropical geometry studies the properties space defined by the _tropical semiring_ represented by the triplet \((\mathbb{R}\cup\{-\infty\},\oplus,\odot)\). Here, classical addition is replaced by _max_\((\oplus)\) and classical multiplication is replaced by classical addition \((\odot)\) [21]. This space is known as the _tropical projective torus_ represented by \(\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\), where \(\mathbf{1}:=(1,1,\ldots,1)\) is the vector with all ones in \(\mathbb{R}^{e}\). This requires that if \(v:=(v_{1},\ldots,v_{e})\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\), \[(v_{1}+c,\ldots,v_{e}+c)=(v_{1},\ldots,v_{e})=v. \tag{1}\] For an extensive treatment, see 14 and 21. **Definition 1.1** (Tropical Arithmetic Operations).: _Under the tropical semiring with the tropical, or max-plus algebra \((\,\mathbb{R}\cup\{-\infty\},\oplus,\odot)\,\), we have the arithmetic operations of addition and multiplication defined as:_ \[x\oplus y:=\max\{x,y\},\hskip 14.226378ptx\odot y:=x+y\hskip 14.226378pt \text{where }x,y\in\mathbb{R}\cup\{-\infty\}.\] _Here \(-\infty\) is the identity element under addition \(\oplus\) and \(0\) is the identity element under multiplication \(\odot\)._ _We may also define the tropical semiring with the \(\min\)-plus algebra \((\,\mathbb{R}\cup\{\infty\},\mathbb{H},\odot)\,\), where arithmetic operations of addition and multiplication defined as:_ \[x\boxplus y:=\min\{x,y\},\ \ \ \ x\odot y:=x+y\ \ \ \ \ \text{where}\ x,y\in \mathbb{R}\cup\{\infty\}.\] **Definition 1.2** (Tropical Scalar Multiplication and Vector Addition).: _For any \(x,y\)\(\in\mathbb{R}\,\cup\,\{-\infty\}\) and for any \(v=(v_{1},\ldots,v_{e}),\ w=(w_{1},\ldots,w_{e})\in(\mathbb{R}\cup\{-\infty\})^ {e}\), we have tropical scalar multiplication and tropical vector addition defined as:_ \[x\odot v\oplus y\odot w:=(\max\{x+v_{1},y+w_{1}\},\ldots,\max\{x+v_{e},y+w_{e} \}).\] **Definition 1.3** (Generalized Hilbert Projective Metric).: _For any tropical points \(v:=(v_{1},\ldots,v_{e}),\,w:=(w_{1},\ldots,w_{e})\in\mathbb{R}^{e}\!/\mathbb{R 1}\) where \([e]:=\{1,\ldots,e\}\), the tropical distance (also known as tropical metric) \(d_{\mathrm{tr}}\) between \(v\) and \(w\) is defined as:_ \[d_{\mathrm{tr}}(v,w):=\max_{i\in[e]}\bigl{\{}v_{i}-w_{i}\bigr{\}}-\min_{i\in[ e]}\bigl{\{}v_{i}-w_{i}\bigr{\}}.\] For any two points \(v,w\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\), the tropical distance between \(v\) and \(w\) assumes Equation1 holds. Otherwise the tropical distance is not a metric. **Definition 1.4** (Tropical Determinant [14]).: _Let \(w\) be a positive integer. For any square tropical matrix \(B\) of size \(w\times w\) with entries in \(\mathbb{R}\ \cup\ \{-\infty\}\), we say the tropical determinant of \(B\) as follows:_ \[tdet(B):=\max_{\sigma\in S_{w}}\{B_{\sigma(1),1}+B_{\sigma(2),2}+\ldots+B_{ \sigma(w),w}\}, \tag{2}\] _where we denote the \((i,j)\)-th entry of \(B\) as \(B_{i,j}\) and \(S_{w}\) represents every permutation of \([w]:=\{1,\ldots,w\}\)._ #### 1.3.2 Tropical geometric structures The definitions that follow describe and clarify characteristics of polytopes and hyperplanes in the tropical projective torus. **Definition 1.5** (Tropical line segment from [21]).: _Given two points \(u,\,v\), a tropical line segment between \(u,\,v\) denoted as \(\Gamma_{u,v}\), consists of the concatenation of at most \(e-1\) Euclidean line segments. A point in the collection of points \(\mathbf{b}\), defining the end points of each Euclidean line segment is called a bend point of \(\Gamma_{u,v}\). Including \(u\) and \(v\), \(\Gamma_{u,v}\) consists of at most \(e\) bend points. We show how to compute the set \(\mathbf{b}\) in 3._ 21 show how to construct a tropical line segment between two vectors in the proof of their Proposition 5.2.5. For a pair of vectors \(u=:(u_{1},\ldots,u_{e}),v:=(v_{1},\ldots,v_{e})\in\mathbb{R}^{e}\!/\mathbb{R} \mathbf{1}\), the tropical line segment can be constructed as follows: Without loss of generality, assume that \((v_{1}-u_{1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq \ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e -1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1}) \geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq( v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq \ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e -1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1}) \geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq( v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq \ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1} -u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq \ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e -1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1}) \geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v _{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1})\geq\ldots\geq(v_{e-1}-u_{e-1}) \geq\ldots\geq(v_{e-1}-u_{e-1}) \((v_{e}-u_{e})=0\) over the tropical projective torus \(\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) once the coordinates of \(v-u\) have been permuted. The tropical line segment \(\Gamma_{u,v}\) from \(v\) to \(u\) is \[\left\{\begin{array}{rcl}(v_{e}-u_{e})\odot u\oplus v&=&v\\ (v_{e-1}-u_{e-1})\odot u\oplus v&=&(v_{1},v_{2},v_{3},\ldots,v_{e-1},v_{e-1}-u _{e-1}+u_{e})\\ &&\vdots&\\ (v_{2}-u_{2})\odot u\oplus v&=&(v_{1},v_{2},v_{2}-u_{2}+u_{3},\ldots,v_{2}-u_{2 }+u_{e})\\ (v_{1}-u_{1})\odot u\oplus v&=&u.\end{array}\right. \tag{3}\] **Definition 1.6** (Tropical Polytopes [14]).: _Suppose we have \(S\subset\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\). If_ \[x\odot v\oplus y\odot w\in S\] _for any \(x,y\in\mathbb{R}\) and for any \(v,w\in S\), then \(S\) is called tropically convex. Suppose \(V=\{v^{1},\ldots,v^{s}\}\subset\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\). The smallest tropically convex subset containing \(V\) is called the tropical convex hull or tropical polytope of \(V\) which can be written as the set of all tropical linear combinations of \(V\)_ \[\operatorname{tconv}(V)=\{a_{1}\odot v^{1}\oplus a_{2}\odot v^{2}\oplus \cdots\oplus a_{s}\odot v^{s}\mid a_{1},\ldots,a_{s}\in\mathbb{R}\}.\] _The smallest subset \(V^{\prime}\subseteq V\) such that_ \[\operatorname{tconv}(V^{\prime})=\operatorname{tconv}(V)\] _is called a minimum, or generating set, with \(|V^{\prime}|\) being the cardinality of \(V^{\prime}\). For \(P=\operatorname{tconv}(V^{\prime})\) the boundary of \(P\) is denoted \(\partial P\)._ **Remark 1.7**.: _A tropical polytope of two points \(u,v\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) is called a tropical line segment, denoted \(\Gamma_{u,v}\)._ We use the term _max-tropical polytope_ if a tropical polytope is defined in terms of the max-plus algebra. Conversely, we use the term _min-tropical polytope_ if a tropical polytope is defined in terms of the min-plus algebra. When the context is clear we will just use the term tropical polytope. **Definition 1.8** (Polytropes [15]).: _A classically convex tropical polytope is called a polytrope._ **Definition 1.9** (Tropical Simplex).: _A tropical simplex is a tropical polytope that possess a minimum vertex, or generating, set \(V^{\prime}\) such that \(|V^{\prime}|=e\). A tropical simplex is denoted \(P_{\Delta}\). All polytropes are tropical simplices. The converse is not true._ An important type of polytrope is a _tropical ball_. A tropical ball is the analogue of a Euclidean ball which is defined as \[B_{r}(x)=\{y\in\mathbb{R}^{e}\mid||x-y||_{2}\leq r\}\] and indicates the set of all points falling within a distance \(r\) of a point \(x\) where distance is calculated by the \(L_{2}\)-norm. In the case of a tropical ball, distance is defined in terms of the tropical metric shown in Definition 1.3 **Definition 1.10** (Tropical Ball).: _A tropical ball, \(B_{l}(x_{0})\), around \(x_{0}\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) with a radius \(l>0\) is defined as follows:_ \[B_{l}(x_{0})=\{y\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\,|\,d_{\mathrm{tr}}(x_{0 },y)\leq l\}.\] _The minimum generating set \(V^{\prime}\) of a tropical ball consists of exactly \(e\) vertices in which case a tropical ball is a tropical simplex. Figure_1_provides the generic structure of a tropical ball._ In many situations, we are interested in the projection of a point onto a tropical polytope. This projection can be represented by using Formula 5.2.3 in [21] and is shown below. **Definition 1.11** (Tropical Projection).: _Let \(V:=\{v^{1},\ldots,v^{s}\}\subset\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) and let \(\mathcal{P}=\mathit{tconv}(v^{1},\ldots,v^{s})\subseteq\mathbb{R}^{e}/\mathbb{ R}\mathbf{1}\) be a tropical polytope with its vertex set \(V\). For Figure 1: Tropical ball, \(B_{l}(x_{0})\in\mathbb{R}^{3}/\mathbb{R}\mathbf{1}\) with radius \(l\). Center point is indicated in red with the generating set \(V^{\prime}\) shown in orange (Credit: ). Figure 2: Tropical polytopes in \(\mathbb{R}^{3}/\mathbb{R}\mathbf{1}\). The tropical polytope on the left is a polytrope and therefore a tropical simplex (Credit: ). \(\mathbf{x}\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\), let_ \[\pi_{\mathcal{P}}(\mathbf{x})\!:=\!\bigoplus_{l=1}^{s}\lambda_{l}\odot v^{l}, \ \ \text{where}\ \ \lambda_{l}\!=\!\min\{\mathbf{x}-v^{l}\}. \tag{4}\] _Then_ \[d_{\mathrm{tr}}(\mathbf{x},\pi_{\mathcal{P}}(\mathbf{x}))\leq d_{\mathrm{tr}}( \mathbf{x},\mathbf{y})\] _for all \(\mathbf{y}\in\mathcal{P}\)._ Now we turn our attention to definitions which will exhibit the relationship between min-tropical hyperplanes and max-tropical polytopes. **Definition 1.12** (Tropical Hyperplane [29]).: _For any \(\omega:=(\omega_{1},\ldots,\omega_{e})\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\), the max-tropical hyperplane defined by \(\omega\), denoted as \(H_{\omega}^{\max}\), is the set of points \(x\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) such that_ \[\max_{i\in[e]}\left\{\omega_{i}+x_{i}\right\} \tag{5}\] _is attained at least twice. Similarly, a min-tropical hyperplane denoted as \(H_{\omega}^{\min}\), is the set of points \(x\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) such that_ \[\min_{i\in[e]}\left\{\omega_{i}+x_{i}\right\} \tag{6}\] _is attained at least twice. If it is clear from context, we simply denote \(H_{\omega}\) as a tropical hyperplane in terms of the min-plus or max-plus algebra where \(\omega\) is the normal vector of \(H_{\omega}\). The point \(-\omega\) represents a point contained in \(H_{\omega}\) where the maximum or minimum is attained \(e\) times. This point is called the apex of \(H_{\omega}\)._ **Definition 1.13** (Sectors from Section 5.2 in [14]).: _Every tropical hyperplane, \(H_{\omega}\), divides the tropical projective torus, \(\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) into \(e\) connected components, which are open sectors_ \[S_{\omega}^{i}:=\{x\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\,|\,\omega_{i}+x_{i }>\omega_{j}+x_{j},\forall j\neq i\},\ i=[e].\] _These closed sectors are defined as_ \[\overline{S}_{\omega}^{i}:=\{x\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\,|\, \omega_{i}+x_{i}\geq\omega_{j}+x_{j},\forall j\neq i\},\ i=[e].\] **Lemma 1.14** (Distance to a Tropical Hyperplane \(H_{\omega}\)[11]).: _The tropical distance from a point \(v\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) to a max-tropical hyperplane \(H_{0}^{max}\) where \(\omega\) represents the normal vector of zeros, is given by the difference between the maximum and second maximum of \(v\). That is_ \[d_{tr}(H_{0},v)=\max(v)-2nd\max(v). \tag{7}\] _For a min-tropical hyperplane \(H_{0}^{min}\), the tropical distance from a point \(v\) is_ \[d_{tr}(H_{0},v)=2nd\min(v)-\min(v). \tag{8}\] _For a generic tropical hyperplane \(H_{\omega}\),_ \[d_{tr}(H_{\omega},v)=d_{tr}(H_{0},v+\omega). \tag{9}\] Figure 3 illustrates the construction of \(H^{max}_{\omega}\) with apex at \(v=(0,4,7)\) and \(\omega=-v\). Additionally, we observe the point \(u=(0,3,1)\) in relation to \(H^{max}_{\omega}\) along with each sector \(\bar{S}^{i}_{\omega}\) as shown in Definition 1.13 **Definition 1.15** (Tropical Hyperplane Arrangements).: _For a given set of points, \(V=\{v^{1},\ldots,v^{s}\}\), tropical hyperplanes with apices at each \(v^{i}\in V\) represent the tropical hyperplane arrangement of \(V\), \(\mathcal{A}(V)\), where_ \[\mathcal{A}(V):=\{H_{-v^{1}},\ldots,H_{-v^{s}}\}.\] _If we consider a collection of tropical hyperplanes defined in terms of the max-plus algebra, we call this arrangement a max-tropical hyperplane arrangement denoted \(\mathcal{A}^{\max}(V)\). Likewise, considering tropical hyperplanes defined in terms of the min-plus algebra is called a min-tropical hyperplane arrangement denoted \(\mathcal{A}^{\min}(V)\)._ **Definition 1.16** (Cells).: _For a given hyperplane arrangement, \(\mathcal{A}(V)\), a cell is defined as the intersection of a finite number of closed sectors. Cells may be bounded or unbounded. Bounded cells are polytropes._ **Definition 1.17** (Bounded Subcomplex [10]).: _For a vertex set, \(V\), \(\mathcal{A}(V)\) defines a collection of bounded and unbounded cells which is known as a cell decomposition. The union of bounded cells defines the bounded subcomplex, \(\mathcal{K}(V)\)._ **Theorem 1.18** (Corollary 6.17 from [14]).: _A max-tropical polytope, \(P\), is the union of cells in \(\mathcal{K}(V)\) of the cell decomposition of the tropical projective torus induced by \(\mathcal{A}^{\min}(V)\)._ Figure 3: Generic tropical hyperplane \(H^{max}_{\omega}\) in \(\mathbb{R}^{3}/\mathbb{R}\mathbf{1}\) with associated closed sectors. The red point represents the apex, \(v\) with \(\omega=-v\) (Credit: ). Theorem 1.18 describes \(\mathcal{K}(V)\) as a collection of bounded cells induced by some \(\mathcal{A}(V)\). Figure 4 represents a tropical polytope in terms of its vertex set (left) and hyperplane arrangement (right). Throughout this paper we are interested in sampling the union of \((e-1)\)-dimensional polytropes belonging to \(\mathcal{K}(V)\). The union of \((e-1)\)-dimensional polytropes is described in the following definition. **Definition 1.19** (Dimension of a Tropical Polytope).: _The dimension of a tropical polytope, \(P\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\), is defined by the bounded cell of maximal dimension in \(\mathcal{C}_{P}\) and is denoted as \(dim(P)\)._ **Definition 1.20** (_i-trunk_ and _i-tentacles_ (Definition 2.1 in 20)).: _Let \(P\) be a tropical polytope and let \(i\in[e-1]\) where \([e-1]=\{1,\ldots,e-1\}\). Let \(\mathcal{F}_{P}\) be the family of relatively open tropical polytopes in \(\mathcal{C}_{P}\). For any \(T\in\mathcal{F}_{P}\), \(T\) is called an _i-tentacle element_ of \(\mathcal{F}_{P}\) if it is not contained in the closure of any \((i+1)\)-dimensional tropical polytope in \(\mathcal{F}_{P}\) where the dimension of \(T\) less than or equal to \(i\). The _i-trunk_ of \(P\), is defined as_ \[Tr_{i}(P):=\bigcup\left\{F\in\mathcal{F}_{P}:\exists\;G\in\mathcal{F}_{P}\text { with }\dim(G)\geq i\text{ such that }F\subseteq G\right\}\] _where \(dim(G)\) is the dimension of \(G\subset\mathcal{F}_{P}\). The \(Tr_{i}(P)\) represents the portion of the \(\mathcal{K}(V)\) with \((i-1)\)-tentacles removed. The minimum enclosing ball containing containing only \(Tr_{i}(P)\subseteq P\) is denoted \(B_{k}(Tr_{i}(P))\)._ **Example 1.21**.: _Consider the tropical polytope, \(P=\{(0,0,0),(0,-1,1),(0,2,2),(0,1,-1)\}\). The \(Tr_{2}(P)\) is the gray portion shown in Figure Figure 4: Tropical polytope expressed in terms of its vertex set \(V\) (left) and \(\mathcal{A}^{\min}(V)\) (right). In the right figure, pseudovertices not belonging to \(V\) are defined by the intersection of min-tropical hyperplanes with apices at each \(v^{i}\in V\) (Credit: 4). For many statistical problems, a first step in statistical inference is finding a centroid for some given data. This is no less true when handling data in the tropical projective torus. To that end, we concern ourselves with finding the Fermat-Weber point in the tropical projective torus, or simply the tropical Fermat-Weber point. **Definition 1.22** (Tropical Fermat-Weber Points (see [19])).: _For a given set of points \(V=\{v^{1},\ldots,v^{s}\}\) in a metric space \(\mathcal{M}\), the Fermat-Weber point for the set \(V\) is_ \[\arg\min_{y}\sum_{i=1}^{s}d(y,v_{i}), \tag{10}\] _where \(d(.)\) represents the function defining a distance metric in \(\mathcal{M}\) and the point \(y\in\mathcal{M}\) represents a centroid. A tropical Fermat-Weber point is similarly defined but replacing \(d(.)\) with the tropical metric \(d_{tr}(.)\)_ \[\arg\min_{y}\sum_{i=1}^{s}d_{tr}(y,v_{i}). \tag{11}\] **Remark 1.23**.: _The tropical Fermat-Weber point is not guaranteed to be unique (see [19])._ In this paper we will introduce a number of ways to obtain a tropical FW point and then apply the resulting point, or centroid, to problems of statistical inference. Figure 5: A tropical polytope, \(P\), in \(\mathbb{R}^{3}/\mathbb{R}\mathbf{1}\) defined by four vertices. The \(Tr_{2}(P)\) is the portion in gray (Credit: [19]). ### Tropical Geometry and Phylogenetic Trees One area of interest where tropical geometry is directly applicable is in the biological science of phylogenomics. Phylogenomics is a discipline focusing on reconstructing the evolutionary history of organisms. One method of representing this evolutionary history is through the use of phylogenetic trees. Phylogenetic trees are data structures representing the evolution of genes from related, and usually present-day, species or some other taxa. In terms of graph theory, phylogenetic trees represent rooted out-trees consisting of a root node, unlabeled internal nodes, and external leaf nodes. The root node of the tree represents a common evolutionary ancestor among several taxa, internal nodes represent speciation events over time, and the external, leaf, nodes represent the present-day taxa. Of particular interest, as it relates to tropical geometry, are those phylogenetic trees that are _equidistant_ trees. Equidistant trees are trees where the distance from the root node to each leaf node is the same. It is shown in [8] that equidistant trees can be represented as ultrametrics which are described in Definition [1.24] and illustrated in Example [1.25] **Definition 1.24** (Ultrametric).: _Let \([m]:=\{1,\ldots,m\}\) and define the distance function \(d:[m]\times[m]\to\mathbb{R}\) to be a metric over \([m]\). Then if_ \[\max\{d(i,j),d(i,k),d(j,k)\}\] _is attained at least twice for any \(i,j,k\in[m]\), \(d\) is an ultrametric._ **Example 1.25**.: _Suppose \(m=3\). Let \(d\) be a metric on \([m]:=\{1,2,3\}\) such that_ \[d(1,2)=2,\,d(1,3)=2,\,d(2,3)=1.\] _Since the maximum is achieved twice, \(d\) is an ultrametric._ Figure 6: Fermat–Weber region defined by three points. Points in the gray triangle satisfies [11][19][5]. Specifically, for any equidistant tree \(T\in\mathcal{U}_{m}\) where \(\mathcal{U}_{m}\) represents the space of ultrametrics on \(m\) leaves, the vector representing the pairwise distances between leaf nodes is an ultrametric. Further, it was shown in [22] that \(\mathcal{U}_{m}\) is a tropical Grassmanian and is therefore tropically convex. This affords us the opportunity to apply statistical methods based on tropical geometry to the space of equidistant phylogenetic trees. We will leverage this characteristic of phylogenetic trees throughout this paper by applying the methods of the **TML** package to equidistant phylogenetic trees. #### 1.4.1 The Coalescent Model One way to evaluate how well supervised and unsupervised learning techniques perform when empirical data is unavailable is by using simulated data sets. One method to obtain simulated data is by using the _coalescent model_. The coalescent model was first described in [16] and, for a given sample of taxa, represents a stochastic genealogical process illustrating coalescing events which shows the ancestral lineage of those related taxa. The coalescent model uses species tree to represent the overall lineage of related species with phylogenetic trees showing the lineage of specific genes of the related species in the species tree. We can think of phylogenetic trees emanating from a species tree [27]. For a thorough treatment of the coalescent model see [35]. Using the coalescent model we can simulate samples of phylogenetic trees that come from specific species trees. The coalescent is a common model used in a wide range of software. For the simulated data used in this article and included in the **TML** package, we employ the software called Mesquite[22]. Mesquite takes the arguments of species depth, \(SD\), and effective population, denoted \(N_{e}\). The \(SD\) indicates the number of epochs between the common ancestor (root node) and taxa of the present day (leaf nodes). We define the gene trees in terms of the ratio \[R=\frac{SD}{N_{e}}.\] If we set \(N_{e}=100000\) and \(SD=500000\), then we have \(R=5\). Notably, the Mesquite software does not represent equidistant trees as ultrametrics. However, the output of equidistant trees taken from Mesquite can be manipulated into ultrametric form using tools from the **phytools**R package [28]. A total of twelve data sets, each representing 1000 simulated phylogenetic trees on ten leaves with \(R\) taking the values of 0.25,0.5,1,2,5, and 10, coming from two different species trees were constructed from Mesquite are included in the **TML** package. Using **phytools**, these twelve data sets were manipulated into ultrametrics and are included as Sim_Trees1 for the six datasets coming from species tree one and Sim_Trees2 from those coming from species tree two. Basic operations This section describes the basic functionality of **TML** and how to execute some simple computations. ### Loading TML and basic operations The **TML** package is loaded as all R packages are loaded: R> library("TML") Once **TML** is loaded all functionality is available. We note that functions are based on linear algebraic operations in terms of this max-plus algebra. Most inputs into functions in **TML** involve vectors or matrices. For example if we wish to find the tropical distance between two points in \(\mathbb{R}^{3}/\mathbb{R}\mathbf{1}\) we input the following: R> u <- (0,1,2) R> v <- (0,4,7) R> trop.dist(u,v) [1] 5 As is shown in Equation 1\(\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! vector consists of a point in the polytope. Importantly, the collection of points need not be the minimum generating set of the tropical polytope. The code snippet below illustrates the use of the normaliz.polytope() function on a set of vectors that defines a tropical polytope. We can say that both matrices represent the same polytope by using Equation 11. R> P<-matrix(c(3,3,3,4,6,9,2,5,3),3,3,TRUE) R> P [,1] [,2] [,3] [1,] 3 3 3 [2,] 4 6 9 [3,] 2 5 3 R> normaliz.polytope(P) [,1] [,2] [,3] [1,] 0 0 0 [2,] 0 2 5 [3,] 0 3 1 As with classical linear algebra we can find the tropical analogue of the determinant of a matrix as shown in Equation 1.4. This is equivalent to a linear assignment problem optimization problem as discussed in 14. Again, the input is a matrix where rows are the points in \(\mathbb{R}/\mathbb{R}\mathbf{1}\). R> P <- matrix(c(0,0,0,0,2,5,0,3,1),3,3,TRUE) R> tdets(P) [[1]] [1] 8 [[2]] [,1] [,2] [,3] [1,] 0 0 0 [2,] 0 3 1 [3,] 0 2 5 This output comes in the form of a list where the first element represents the value of the tropical determinant and the second element is a matrix of the same points reordered such that the elements of each row vector contributing to the value of the determinant are on the diagonal. It is common to work with phylogenetic trees, but the statistical methods of tropical geometry use vectorized input. This vector consists of components that express the pairwise distances between the \(m\) leaves as the sum of branch lengths connecting those leaves. Consequently, the vector has dimension \(e=\binom{m}{2}\). The following example illustrates how a tree with 4 leaves is converted to a vector in \(\mathbb{R}^{6}\). R> tree <- ape::read.tree(text='((A:1, B:1):2, (C:1, D:1):2);') R> tree.to.vector(tree,normalization = F) [1] 2 6 6 6 6 2 ### Tropical line segments, hyperplanes, polytopes, and projections This section focuses on the functions in **TML** related to tropical geometric structures such as tropical line segments, polytopes, hyperplanes, and projections with associated computations. Using Equation 3 we can construct the tropical line segment by calculating the associated bend points. R> u <- (0,1,2) R> v <- (0,4,7) R> TLineSeg(u,v) [[1]] [1] 0 4 7 [[2]] [1] 3 4 7 [[3]] [1] 5 6 7 The previous example represents a line segment from the vector \(v=(0,4,7)\) to \(u=(0,1,2)\). The output is a list of vectors where each vector represents a bend point or end point of the tropical line segment. Note that in this case the second and third bend points are not normalized (i.e., the first value of each vector equal to zero). This can easily be accomplished using the lapply() function in conjunction with the normaliz.vector() function from **TML**. R> u <- (0,1,2) R> v <- (0,4,7) R> lapply(TLineSeg(u,v),function(x) normaliz.vector(x)) [[1]] [1] 0 4 7 [[2]] [1] 0 1 4 [[3]] [1] 0 1 2 It should be noted also that for two vectors in \(\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) the TLineSeg() function output list will consist of \(e\) bend points but some bend points may be redundant. This is an indication that the line segment consists of fewer than \(e\) bend points. Tropical hyperplanes, as defined in Equation 5 are defined simply by the point in the tropical projective torus that serves as the hyperplane apex. The **TML** package provides functions to visualize tropical hyperplanes in two and three dimensions as well as functions to measure the tropical distance from a point in the tropical projective torus to the nearest point on the hyperplane. Three dimensional tropical hyperplanes are also able to be visualized. R> D<-t(as.matrix(c(0,0,0))) R> E<-t(as.matrix(c(0,0,0,0))) R> di<-4 R> mi<- -5 R> ma<- 5 R> hyper3d_max(D, di, mi, ma, plt=TRUE) R> hyper3d_min(D, di, mi, ma, plt=TRUE) R> hyper3d_max(E, di, mi, ma, plt=TRUE) R> hyper3d_min(E, di, mi, ma, plt=TRUE) Visualizations from the output of the previous code is shown in Figure 7. Max-tropical hyperplanes in both two dimensions and three dimensions are shown on the left with min-tropical hyperplanes shown on the right. The statistical methods introduced in this article utilize the tropical distance from a point to a tropical hyperplane. The distance from a point to a tropical hyperplane is shown in Lemma 1.14 for both the max-tropical and min-tropical case. The functions trop.dist.hyp_max() and trop.dist.hyp_min() take two inputs: the normal vector associated with the apex of the tropical hyperplane and any generic point in the tropical projective torus. R> 0 <- c(0,-1,-1) R> x0 <- c(0,-2,-8) R> trop.dist.hyp_max(0,x0) [1] 3 R> 0 <- c(0,-1,-1) R> x0 <- c(0,-2,-8) R> trop.dist.hyp_min(0,x0) [1] 6 Figure 7: Max-tropical (left) and min-tropical hyperplanes (right) in two and three dimensions using the hyper3d_max() and hyper3d_min() functions. Recall from Definition 1.12 the normal vector is the same as changing the sign of the point in the tropical projective torus representing the apex of the tropical hyperplane. Figure 8 shows a point on the tropical hyperplane that is closest to the point \((0,-2,-8)\). The tropical distance to this point is shown in the code above. Notably, unlike Euclidean using a Euclidean distance, the point on the tropical hyperplane that is closest to the point of interest may not be unique. We now turn our attention to tropical polytopes as tropical polytopes are the primary geometric structures that are used in most functions in the **TML** package. The primary focus here is to illustrate how we visualize different tropical polytopes. Visualization is combinatorially challenging even in lower dimensions. Here we show a couple of examples of visualizations of two-dimensional and three-dimensional tropical polytopes. As stated in Section 1.3 a tropical ball is an important polytrope in tropical geometry. The function Trop_ball() allows us to render a two- or three-dimensional tropical ball. R> v <- c(0,0,0) R> d <- 2 R> Trop_ball(c(0,0,0),d,a=1,cls='white',fil=TRUE,cent.col='red') R> Trop_ball(c(0,0,0,0),d,a=.5,cls='lightblue', fil=TRUE,cent.col='red') The code above takes several inputs the first being the center of the tropical ball, the radius of the tropical ball in terms of tropical distance, and several other inputs involving the transparency and color options. Figure 9 provides the output of two tropical balls from the example above in two dimensions (left) and three dimensions (right). Figure 8: Tropical distance from a point \((0,-2,-8)\) (blue) to the max-tropical hyperplane defined by the normal vector \(\omega=(0,-1,-1)\). Note that \(\omega\) corresponds with the apex of the tropical hyperplane \((0,1,1)\) (green). Visualization of generic tropical polytopes can be accomplished in two or three dimensions using the draw.tpolytope.2d() and draw.tpolytope.3d() functions. These functions take three inputs: a matrix of points in the tropical projective torus, a color argument for the polytope itself and a color argument for the vertices of the polytope. R> P <- matrix(c(0,-2,2,0,-2,5,0,2,1,0,1,-1),4,3,TRUE) R> c <- 'darkgreen' R> cc <- 'black' R> draw.tpolytope.2d(P,c,cc) R> P <- matrix(c(0,0,0,0,0,1,2,5,0,1,3,1,0,2,5,10),4,4,TRUE) R> c <- 'darkgreen' R> cc <- 'black' R> draw.tpolytope.3d(P,c,cc) Figures 10 and 11 show the output from the previous code. Note that in each case the function draws line segments between each vertex (and more points in the three-dimensional case) in the polytope. This is due to the combinatorial challenge with forming this polytopes. The tropical polytopes themselves are defined by the boundary. Figure 9: Tropical balls in two and three dimensions each with a radius of two, using the Trop_ball() function. The red point in each figure represents the center point. Black points represent the vertices of the tropical polytope. The last specific function we address in this section is the project_pi() function. The projection of a point \(x\) onto a tropical polytope, \(\mathcal{P}\), denoted \(\pi(x)\) is the point in \(\mathcal{P}\) that is closest to \(x\) in terms of tropical distance. It must be noted that unlike Euclidean distance, tropical projections are not necessarily unique. In fact oftentimes there is an interval of points satisfying satisfying Equation4. For a thorough discussion of projections see 4. Figure 11: 3-D rendering of a tropical polytope using the draw.tpolytope.3d() function. Figure 10: 2-D rendering of a tropical polytope using the draw.tpolytope.2d() function. R> P <- matrix(c(0,0,0,0,2,5,0,3,1),3,3,TRUE) R> x <- c(0,6,2) R> project_pi(P,x) [1] 0 3 2 The output above provides one of perhaps an infinite number of points that can serve as \(\pi(x)\). Continuing from the previous example, we can illustrate how this appears visually. R> pi_x <- project_pi(P,x) R> c <- 'blue' R> cc <-'red' R> draw.tpolytope.3d(P,c,cc) R> points(c(x[2],pi_x[2]),c(x[3],pi_x[3]),pch=19,col='green') R> lines(c(x[2],pi_x[2]),c(x[3],pi_x[3]),lty='dashed') The functions introduced in this section serve as standalone functions but they also provide the basis of other functions in the **TML** package. In the sections that follow, we introduce the statistical tools that leverage these basic functions. ### Calculating a tropical Fermat-Weber point In this section we introduce several functions that allow us to calculate tropical Fermat-Weber points. The output of these functions are incorporated in a variety of statistical methods included in the **TML** package, some of which are described below. Here we introduce two methods of finding the tropical FW Figure 12: Projection of the point \(x=(0,6,2)\) onto the tropical polytope \(\mathcal{P}\), where \(\mathcal{P}:=\{(0,0,0),(0,3,1),(0,2,5)\}\). Here \(\pi(x)=(0,3,2)\) but any point on the boundary of \(\mathcal{P}\) from \((0,3,2)\) to \((0,3,1)\) could serve as \(\pi(x)\). point. The first is a method that uses linear programming while the second employs a fast gradient-based method. The first method to find a tropical FW point for a given set of data is introduced in the Trop_FW() function. This function employs a linear programming approach to find a tropical FW point for a given set of data \(V=\{v^{1},\ldots,v^{s}\}\) by solving the constrained optimization problem \[\min_{y} \sum_{i=1}^{s}d_{tr}(v^{i},y) \tag{12}\] \[y_{j}-y_{k}-v^{i}_{j}+v^{i}_{k}\leq-d_{tr}(v^{i},y)\ \forall\ i\in[s]\text{ and }1 \leq j,k\leq e\] (13) \[y_{j}-y_{k}-v^{i}_{j}+v^{i}_{k}\geq d_{tr}(v^{i},y)\ \forall\ i\in[s]\text{ and }1 \leq j,k\leq e. \tag{14}\] The example below shows how this is employed by using simulated data set consisting of 150 normalized points in the tropical projective torus which is found in the **TML** package as Sim_points. Note that Trop_FW() takes as input points which are normalized using the normaliz.polytope() function. The Sim_points is already normalized so normalization is not required. A plot of the output for the code below is shown in Figure 13 R> set.seed(23) R> V <- Sim_points R> FW <- Trop_FW(V) R> plot(V[,2],V[,3],pch=19,cex=.8,xlab="v2",ylab="v3",asp=1) R> points(FW[2],FW[3],pch=19,col='red') While a FW point can be found directly using linear programming, gradient-based numerical methods like those developed and employed in [3] are much faster. Of particular interest in [3], is inferring the class of species tree associated with a set of gene tree. Because it is proven that the set of tropical FW points asymptotically converges to the maximum likelihood estimate (MLE) vectorised tree under this model this task can be reduced to finding a tropical FW point associated with the set of gene trees. [3] use FW points in lieu of MLE trees because the former is faster to compute numerically and it enjoys optimality sufficiency condition. The following example illustrates how to compute a FW point using the gradient method of 1290 gene trees associated with a lungfish data set used in [18] by employing the function FW_numerical(). This data set is accessible in the **TML** package which includes a dissimilarity matrix when lung_fish is called and a vector of strings called lf_labels representing the species (or leaf) labels for the associated trees. R> T <- lung_fish R> labels <- lf_labels Figure 13: Tropical Fermat-Weber point calculated using the Trop_FW() function. Black points indicated the data. The red point is the tropical FW point. R> omega <- FWpoint_numerical(T) R> inferred_tree <- vector.to.equidistant.tree(omega) R> inferred_tree$tip.label <- labels R> plot(inferred_tree) The functions introduced in this section serve as standalone functions but they also provide the basis of other functions in the **TML** package. In the sections that follow, we introduce the statistical tools that leverage these basic functions. ## 3 Tropical HAR as a main tool for inference Markov chain Monte Carlo (MCMC) methods are an extremely important tool in statistical inference. Since their development in the first half of the twentieth century, MCMC methods have proven effective in a broad spectrum of scientific disciplines. Among the most flexible and easily constructed MCMC methods is the hit-and-run (HAR) sampler. Like all MCMC samplers, HAR samplers sample points according to a target distribution by moving from one point to another by defining a subset of a state space in terms of line segments. Both the current point and the possible next points fall on this line segment [30]. Up until recently, no MCMC samplers existed to sample points from a state space that could be defined as tropically convex. With the introduction of the sampler _HAR with extrapolation_ as shown in [44], tropically convex sets can be sampled more effectively. As most methods in the **TML** rely on this novel HAR sampler, we devote this section to providing an in depth examination of its basic implementation as well as a number of variations. Figure 14: Equidistant tree representation of a Fermat-Weber point of 1290 gene trees from the lungfish data [18]. ### HAR sampling from a tropical line segment We begin by showing how we sample from a tropical line segment as the line segment is the basic geometric structure used in HAR sampling. There are two variations we illustrate here. The first shows how we sample points uniformly from a tropical line segment as shown in [44]. This method is implemented using the HAR.TLineSeg() function. For any two points in the tropical projective torus, we can define a tropical line segment and then sample from the line segment. The code that follows shows how to sample a single point from a tropical line segment. R> set.seed(1) R> u <- c(0,3,1) R> v <- c(0,0,0) R> BPs <- lapply(TLineSeg(u,v),function(x) normaliz.vector(x)) R> G_uv <- matrix(unlist(BPs), ncol = 3, byrow = TRUE) R> draw.tpolytope.2d(G_uv,'red','blue') R> pt<-normaliz.vector(HAR.TLineSeg(G_uv[1,],G_uv[nrow(G_uv),])) R> points(pt[2],pt[3],pch=19,col='green') To sample multiple points from a tropical line segment we can employ HAR.TLineSeg() in a for() loop in the following chunk. Figure [15] shows the building of the line segment, sampling a single point, and then sampling 200 points from the line segment. R> u <- c(0,3,1) R> v <- c(0,0,0) R> poins <- matrix(0,nrow=200,3,TRUE) R> for(i in 1:nrow(poins)){ x <- HAR.TLineSeg(u,v) poins[i,] <- x } R> points(poins[,2],poins[,3],pch=19,cex=.3,col='green') The **TML** package also gives the option to sample points from a tropical line Figure 15: A tropical line segment (top) with the blue points indicating the break points and end points, a single point sampled from the tropical line segment in green (center), and 200 points sampled from the tropical line segment. segment about a point representing a centroid as shown in 4. In practice, this is similar to Gaussian sampling about a point \(\mu\) with a standard deviation \(\sigma\) which controls dispersion. In the case of a tropical line segment, the scale parameter is based on tropical instead of Euclidean distance. The HAR.TLineSeg.Norm() function performs this calculation. Continuing from the previous example we show the results of sampling 200 points about a centroid represented by the point \(\mu=(0,2,0)\) with a scale parameter \(\sigma=0.2\). R> set.seed(1) R> mu <- c(0,2,0) R> sig <-.2 R> poins <- matrix(0,nrow=2000,3,TRUE) R> for(i in 1:nrow(poins)){ x <- HAR.TLineSeg.Norm(u,v,mu,sig) poins[i,]<-x } R> u <- c(0,3,1) R> v <- c(0,0,0) R> BPs <- lapply(TLineSeg(u,v),function(x) normaliz.vector(x)) R> G_uv <- matrix(unlist(BPs), ncol = 3, byrow = TRUE) R> draw.tpolytope.2d(G_uv,'red','blue') R> points(poins[,2],poins[,3],pch=19,cex=.3,col='green') Comparing the distance of the sampled points from the centroid \(\mu\) we can determine the quantiles associated with the tropical distance. R> dts <- apply(poins,1,function(x) trop.dist(mu,x)) R> quantile(dts) 0% 25% 50% 75% 100% 0.0002291895 0.0636676116 0.1309480012 0.2323379661 0.7342599864 Figure 16: Sampling 200 points (green) about a centroid \(\mu=(0,2,0)\) with scale parameter \(\sigma=0.2\). The quantiles associated with the tropical distance from the centroid according to a scale parameter \(\sigma=0.2\) is comparable to the Gaussian distribution in Euclidean space with a standard deviation equal to \(0.2\). ### HAR sampling from a tropical polytope Using the line sampling methods described above we can employ them to sample from tropical polytopes (see Definition 1.6). This is accomplished using the TropicalPolytope.extrapolation.HAR() function. This function allows the user to sample points uniforms from the \((e-1)-\)trunk of a tropical simplex (Definition 1.20). Arguments for the TropicalPolytope.extrapolation.HAR() function include a matrix defined as the vertices of the tropical simplex, a initial point, and a scalar value indicating the number of intermediate points to sample between the initial state and the final state in the Markov chain. The state in the chain represents the sampled point. Figure 17 shows the results of sampling from a polytrope (top) and a generic tropical simplex (bottom). R> set.seed(1) R> P <- matrix(c(0,0,0,0,2,5,0,3,1),3,3,TRUE) R> x0 <- c(0,2.5,3.2) R> N <- 1000 R> poins <- matrix(0,nrow=N,ncol=ncol(P),TRUE) R> for(i in 1:nrow(poins)){ x <- TropicalPolytope.extrapolation.HAR(P,x0,I=50) x0 <- x poins[i,] <- x } R> plot(poins[,2],poins[,3],xlab='x2',ylab='x3') In addition, we can also sample points about a centroid (location parameter) with dispersion controlled by a scale parameter. This method is described in detail in [4]. In Euclidean space some HAR methods that sample points according to a Gaussian distribution do so by projecting the centroid onto the generated line and then sample about the projection according to a fixed standard deviation. In the tropical projective torus, the projection of a point onto a line segment is not usually unique. Therefore, sampling about a centroid in the tropical projective torus requires that we define the interval of possible projections of the centroid onto the line segment and sample uniformly from this interval. For detail on how this is accomplished see Chapter 2 in [4]. In the **TML** package, the tropical.Gaussian() function executes this method. Similarly, the tropical.Gaussian() function takes as inputs the vertices of a tropical simplex, a starting point, and a scalar value determining the length of the Markov chain. In addition, a point serving as a centroid representing the location parameter, and a scale parameter in the form of a scalar to control dispersion are also used. Figure 18 shows an output of sampling about a centroid. R> set.seed(1) R> P <- matrix(c(0,0,0,0,0,1000,0,1000,0),3,3,TRUE) R> x0 <- c(0,2.5,3.2) Figure 17: Two tropical polytroes, (left) with vertices indicated in red and results after using TropicalPolytope.extrapolation.HAR() to sample 1000 points from each polytope. R> N <- 1000 R> poins <- matrix(0,nrow=N,ncol=ncol(P),TRUE) R> mu <- c(0,500,500) R> sd <- 4 R> for(i in 1:nrow(poins)){ x <- tropical.gaussian(P,x0,I=50,mu,sd) x0 <- x poins[i,] <- x } R> plot(poins[,2],poins[,3],xlab='x2',ylab='x3') R> points(mu[2],mu[3],pch=19,col='red) The functions in this section illustrate the basics of HAR sampling over the tropical projective torus. In the next section we provide specific data science applications using the methods in the **TML** package. ## 4 Machine learning applications In this section we provide several applications of the methods available in the **TML** package. We begin with an application to show how to estimate the volume of a tropical polytope as described in, which is a NP-hard problem. Next, we provide a supervised learning method involving tropical logistic regression applied to classifying phylogenetic trees as introduced in. Then we show how to apply unsupervised learning in the form of tropical principal component analysis, again, applied to phylogenetic trees. Finally, we show how to implement a non-parametric tropical kernel density estimation method as a way to identify outliers related to phylogenetic trees on \([m]\) leaves. Figure 18: Sampling 1000 points about a centroid using the tropical.gaussian() function. ### Application 1: Volume estimation of a tropical polytope A challenging problem in polyhedral geometry is estimating the volume of polytopes and is no less challenging in the tropical setting. In this section, we follow the methods devised and illustrated in \(\overline{\mathcal{T}}\). In general, for a given tropical polytope, \(\mathcal{P}\), this involves finding a minimum enclosing tropical ball, denoted \(B_{r}(\mathcal{P})\). By sampling from \(B_{r}(\mathcal{P})\), which is of known volume, we can estimate the volume of \(\mathcal{P}\) by multipling the volume of \(B_{r}(\mathcal{P})\) by the proportion of sampled points with membership in \(\mathcal{P}\). This application begins with computing \(B_{r}(\mathcal{P})\) for a give \(\mathcal{P}\) using the min_enc_ball() function. The output of the function is a two element list with the first element representing the center point of the ball and the second element representing the radius of the tropical ball in terms of tropical distance. R> P <- matrix(c(0,0,0,0,3,1,0,2,5),3,3,TRUE) R> B <- min_enc_ball(P) R> B [[1]] [1] 0.0 2.0 2.5 [[2]] [1] 2.5 Next we use the trop_bal.vert() to obtain the points in the minimum generating set of \(B_{r}(\mathcal{P})\). The output is a matrix with with rows representing the points in the minimum generating set \(V^{\prime}\) of \(B_{r}(\mathcal{P})\). R> BR <- trop_bal.vert(B[[1]],B[[2]]) [,1] [,2] [,3] [1,] 0 -0.5 0.0 [2,] 0 4.5 2.5 [3,] 0 2.0 5.0 Using the output of points from the trop_bal.vert() function and the original tropical polytope, \(\mathcal{P}\), we can estimate the volume of \(\mathcal{P}\). This is accomplished through the use of the Trop_Volume() function. Inputs include an the matrix representing the tropical points defining the tropical ball, a matrix representing the points defining the original tropical polytope, an initial point, the number of points to sample, a scalar value representing the length of each Markov chain, and the radius, \(r\), of \(B_{r}(\mathcal{P})\). R> set.seed(1) R> x0 <- c(0,1.5,.4) R> S <- 200 R> I <- 50 R> R <- B[[2]] R> Trop_Volume(BR,P,x0,S,I,R) [[1]] [1] 0.67 [[2]] [1] 18.75 [[3]] [1] 12.5625 The output of the Trop_Volume() function is a list containing three elements. The first element represents the proportion of points falling in the polytope of interest. The second element represents the volume of \(B_{r}(\mathcal{P})\) and the third represents the volume estimate of the tropical polytope. ### Application 2: Tropical logistic regression We now introduce the supervised learning method of tropical logistic regression as applied to phylogenetics. In [3], tropical logistic regression is introduced and shown to outperform classical logistic regression when applied to phylogenetic trees. Specifically, tropical logistic regression is a binary classification method applied to a given set of phylogenetic trees. The classifications represent membership into one of two species trees. Next, we turn to using tropical logistic regression to the problem of classifying gene trees according to the species tree that generated them. For this application, we use the two sets of 1000 phylogenetic trees, Sim_Trees11 and Sim_Trees21 (or simply \(T1\) and \(T2\) where the ratio \(R=\frac{SD}{N_{e}}=1\)) where each phylogenetic tree has ten leaves and is represented as an ultrametric. Using \(T1\) and \(T2\), the task is now to classify unseen gene trees. The tropical logistic regression model first infers the species tree that generated the corresponding trees of each class. Under the coalescent model which is explained in Section 1.4 the species trees and gene trees are equidistant and so the corresponding vectors are ultrametric. However, a Fermat-Weber point, which is used in lieu of the MLE tree, may not be an ultrametric. Ideally, we would like to have an additional constraint, requiring that the species tree be an ultrametric. By adding a regularization term that penalises deviations from the space of ultrametrics, we instead consider the modified Fermat-Weber point \[\operatorname*{arg\,min}_{\omega\in\mathbb{R}^{e}}\left\{\sum_{i=1}^{N}d_{ \mathrm{tr}}(x_{i},\omega)+\lambda\|\omega-\pi(\omega)\|^{2}\right\},\] where \(x_{i}\in\mathbb{R}^{e}/\mathbb{R}\mathbf{1}\) is the \(i\)-th vectorized gene tree, \(\pi\) is a projection onto the space of ultrametrics and the \(\lambda\) is the regularization rate. This method is employed in the standalone function FWpoint.num.w.reg() which is incorporated in the tropical logistic regression method trop.logistic.regression(). The following example shows how tropical logistic regression can be employed. R> library("ROCR") R> D <- rbind(Sim_Trees15,Sim_Trees25) R> Y <- c(rep(0,dim(Sim_Trees11)[1]), rep(1,dim(Sim_Trees21)[1])) R> N <- length(Y) R> set.seed(1) R> train_set <- sample(N,floor(0.8 * N)) ## 80/20 train/test split R> pars <- trop.logistic.regression(D[train_set,], Y[train_set], penalty=1e4) R> test_set <- (1:N)[-train_set] R> Y.hat <- rep(0, length(test_set)) R> for(i in 1:length(test_set)) Y.hat[i] <- prob.class(pars, D[test_set[i],]) R> Logit.ROC <- performance(prediction(Y.hat, Y[test_set]), measure="tr", x.measure="fpr") R> plot(Logit.ROC, lwd = 2, main = "ROC Curve for Logistic Regression Model") R> AUC <- performance(prediction(Y.hat, Y[test_set]), measure="auc")@y.values R> AUC [[1]] [1] 0.970966 From the example above we see that the area under the curve associated with the receiver operator characteristic (ROC) curve is close to one. This indicates a near-perfect classification for the given example. Figure 21 provides a visual representation of the ROC curve associated with the example above. ### Application 3: Tropical PCA Principal component analysis (PCA) is an unsupervised learning technique used for dimension reduction. Tropical PCA is no different but focuses on finding a best-fit tropical polytope for some data in the tropical projective torus. As in the previous section, we focus on the space of equidistant trees on \([m]\) leaves. As shown in the previous section, an equidistant tree can be defined as an ultrametric. In the **TML** package, tropical principal component analysis focuses on the tree space defined as the space of ultrametrics on \([m]\) leaves. This method was first introduced in [25] and extended in [11]. In the examples below we instead use simulated data where each point resides in the tropical projective torus. The best-fit polytope, specifically a tropical triangle, is calculated through the use of the tropical.PCA.Polytope() function. This function takes an iterative approach to finding the vertices of the best-fit tropical triangle by incorporating vertex HAR with extrapolation which was shown in Section 3.2. The primary purpose is to visualize the data along with the associated tropical triangle which is shown through the code that follows. R> set.seed(1) R> s <- 3 _#number of vertices. Here it is a tropical triangle_ R> d <- 3 _## dimension_ R> N <- 100 _## samples size_ R> V <- matrix(c(100, 0, 0, 0, 100, 0, 0, 0, 100, -100, 0, 0, 0, 0, -100), 6, 3, TRUE) R> D <- matrix(rep(0, N*d), N, d) R> D[, 1] <- rnorm(N, mean = 5, sd = 5) R> D[, 2] <- rnorm(N, mean = -5, sd = 5) Figure 19: ROC curve produced by the code snippet above. R> D[, 3] <- rnorm(N, mean = 0, sd = 5) R> index <- sample(1:N, s) R> S <- D[index,] R> DD <- pre.pplot.pro(S, D) R> for(i in 1:N) R> DD[i, ] <- normaliz.vector(DD[i, ]) R> res <- tropical.PCA.Polytope(S, D, V, I = 1000,50) R> DD <- pre.pplot.pro(res[[2]], res[[3]]) R> trop.tri.plot.w.pts(normaliz.ultrametrics(res[[2]]), DD) The output of this code provides the tropical triangle shown in Figure 20 ### Application 4: Tropical kernel density estimation A kernel density estimator (KDE) is a non-parametric density estimation method which uses kernel functions. This is a useful method in determining a number of data characteristics when the distribution of the data is unknown. This technique uses a kernel function, \(\kappa(.)\) which is simply a non-negative, smooth function and conjunction with a bandwidth parameter [33][37]. Like any density function however, there also must exist some normalizing constant, \(C\) such that \(\frac{1}{C}\kappa(.)\) integrates to one. A common kernel density estimator is the Gaussian kernel with mean zero and standard deviation equal to one \[\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{x^{2}}{2}\right). \tag{15}\] In [15] the normalizing constant is the fraction prior to the exponential function. The bandwidth, or dispersion, parameter is represented by the standard deviation in the case of the Gaussian kernel. Importantly, a reasonable density estimate requires an appropriate choice for bandwidth. Figure 20: Best-fit tropical triangle found using tropical.PCA.Polytope(). Kernel density estimation over the tropical projective torus has previously been investigated by Weyenberg et al. in [39], which specifically focused on kernel density estimation over the space of phylogenetic trees using what is called the BHV metric. One of the more challenging aspects of their method was that the location of the center of the kernel function causes the value of the normalizing constant to vary. This requires a recalculation of the normalizing constant for each data point. As an alternative to this method In [33], Yoshida et al. introduced the notion of kernel density estimation over the treespace represented as the space of ultrametrics on \([m]\) leaves using the tropical metric in conjunction with a Laplacian kernel function. Conducted experiments suggested that the normalizing constant remains constant regardless of the center of the function with \(m\geq 5\). Bandwidth, which is based on the tropical metric, is chosen using a "nearest neighbor" approach as in [38] meaning that the bandwidth parameter is equal to the tropical distance to the closest other data point. The **TML** package provides tropical kernel density estimation in the form of the tropical.KDE() function. The method leverages two functions from the **KDETrees** package called pw.trop.dist() and bw.nn() to first calculate the pairwise tropical distance between each data point and then find the bandwidth parameter for each data point [38]. In [38], Weyenberg et al. show how, using their BHV metric-based approach, to identify outliers in a set of gene trees. In this case, an outlier tree is a tree that falls in the tail of the distribution of trees. Yoshida et. al conducted a similar experiment using the method developed in [33] on a the space of ultrametrics on \(m=10\) leaves to identify such outliers with fixed effective populations but differing species depths (SD). The species depth indicates the number of epochs between the common ancestor of all species (root node) and present day (leaf nodes). For this experiment, we consider an effective population of \(N_{e}=100000\) and varying species depths such that we obtain a sequence of ratios, \(R\), of SD to \(N_{e}\) equal to \(0.25\), \(0.5\), \(1\), \(2\), \(5\), and \(10\). The example below consists, again, of using the same two sets of \(1000\) simulated gene trees, \(T1\) and \(T2\), with ten leaves where \(R=5\) (represented as Sim_Trees1 and Sim_Trees2). We then provide the the cumulative results for each \(R\) in the plots of the receiver operator characteristic (ROC) curves for each \(R\) that follow. In order to determine how well we can identify ouliers using the tropical.KDE() function, we will examine each tree in \(T2\) as it is appended to the set of trees in \(T1\). Using the pairwise tropical distance function, pw.trop.dist(), and bw.nn(), we find the bandwidth value for each tree in the set. Then, we calculate the density value using tropical.KDE() function. We determine how well the method identifies outliers by examining the receiver operator characteristic (ROC) curve. The larger the value of the area under the ROC curve, the better the method is at identifying outliers. In the code chunk below, we assume the density estimate on the final trial for all trees with original membership in \(T1\) is representative of density estimates from previous trials. Therefore, when calculating values for the ROC curve in the code chunk below, we only use those density estimates. R> set.seed(1) R> D1 <- Sim_Trees15 R> D2 <- Sim_Trees25 R> I <- 1000 _## The number of trials_ R> Q5 <- rep(0, I) R> N1<-nrow(D1) R> for(i in 1:I){ D <- rbind(D1, D2[i,]) T <- dim(D)[1] P5 <- rep(0, T) X <- 1:T M <- pw.trop.dist(D, D) sigma <- bw.nn(M) P5 <- tropical.KDE(D, n, sigma, h = 2) Q5[i] <- P5[T] print(i) R> y <- c(rep(1, N1), rep(0, I)) R> predProbKDE5 <- c(P5[1:N1], Q5) R> KDE5.ROC <- performance(prediction(predProbKDE5, y), measure="tpr", x.measure="fpr") In general, we see that as \(SD\), and therefore \(R\), increases, so does the AUC value as shown below. When we reach the experiment representing \(R=10\), we see perfect classification, indicated by the AUC being equal to one. Figure 21 shows the associated ROC curves for each value of \(R\). This provides us with a visual representation of the AUC values below. R> KDE025.AUC <- performance(prediction(predProbKDE025, y), measure="auc"')@y.values R> KDE025.AUC [[1]] [1] 0.563876 R> KDE05.AUC <- performance(prediction(predProbKDE05, y), measure="auc")@y.values R> KDE05.AUC [[1]] [1] 0.630703 R> KDE1.AUC <- performance(prediction(predProbKDE1, y), measure="auc")@y.values R> KDE1.AUC [[1]] [1] 0.697034 R> KDE2.AUC <- performance(prediction(predProbKDE2, y), measure="auc")@y.values R> KDE2.AUC [[1]] [1] 0.87902 R> KDE5.AUC <- performance(prediction(predProbKDE5, y), measure="auc")@y.values [[1]] [1] 0.998542 R> KDE10.AUC <- performance(prediction(predProbKDE10, y), measure="auc")@y.values R> KDE10.AUC [[1]] [1] 1 Conclusion This paper provides a basic description of the tropical machine learning methods and functionality of the **TML** package in R. While we provide a thorough descriptions of most available methods in the **TML** package we cannot cover everything. One important unsupervised method not covered are clustering methods over the tropical projective torus. We reccommend the read see for a thorough treatment. As shown in [24], all Euclidean statistical models can be described in terms of tropical algebra. With this in mind, we anticipate that TML() package will continue to be improved and expanded as new tropical data science methods are developed. We are already observing alternative methods of employing tropical support vector machines using HAR methods and neural networks in terms of tropical algebra. We encourage collaborators to provide input and recommendations via the **TML** GitHub page at [https://github.com/barnhilldave/TML/issues](https://github.com/barnhilldave/TML/issues) ## Acknowledgments The authors would like to thank David Kahle for is input on the development of the **TML** package. RY and DB are partially supported from NSF DMS 1916037. GA is funded by EPSRC through the STOR-i Centre for Doctoral Training under grant EP/L015692/1. KM is partially supported by JSPS KAKENHI Grant Numbers JP22K19816, JP22H02364.
2304.00360
On a conjecture on a series of convergence rate $\frac{1}{2}$
Sun, in 2022, introduced a conjectured evaluation for a series of convergence rate $\frac{1}{2}$ involving harmonic numbers. We prove both this conjecture and a stronger version of this conjecture, using a summation technique based on a beta-type integral we had previously introduced. Our full proof also requires applications of Bailey's ${}_{2}F_{1}\left( \frac{1}{2} \right)$-formula, Dixon's ${}_{3}F_{2}(1)$-formula, an almost-poised version of Dixon's formula due to Chu, Watson's formula for ${}_{3}F_{2}(1)$-series, the Gauss summation theorem, Euler's formula for ${}_{2}F_{1}$-series, elliptic integral singular values, and lemniscate-like constants recently introduced by Campbell and Chu. The techniques involved in our proof are useful, more broadly, in the reduction of difficult sums of convergence rate $\frac{1}{2}$ to previously evaluable expressions.
John M. Campbell
2023-04-01T17:15:17Z
http://arxiv.org/abs/2304.00360v1
# On a conjecture on a series of convergence rate \(\frac{1}{2}\) ###### Abstract Sun, in 2022, introduced a conjectured evaluation for a series of convergence rate \(\frac{1}{2}\) involving harmonic numbers. We prove both this conjecture and a stronger version of this conjecture, using a summation technique based on a beta-type integral we had previously introduced. Our full proof also requires applications of Bailey's \({}_{2}F_{1}\left(\frac{1}{2}\right)\)-formula, Dixon's \({}_{3}F_{2}(1)\)-formula, an almost-poised version of Dixon's formula due to Chu, Watson's formula for \({}_{3}F_{2}(1)\)-series, the Gauss summation theorem, Euler's formula for \({}_{2}F_{1}\)-series, elliptic integral singular values, and lemniscate-like constants recently introduced by Campbell and Chu. The techniques involved in our proof are useful, more broadly, in the reduction of difficult sums of convergence rate \(\frac{1}{2}\) to previously evaluable expressions. _MSC_: 33C20, 33C75 _Keywords:_ harmonic number; hypergeometric series; symbolic evaluation; closed form; digamma function ## 1 Introduction Zhi-Wei Sun has introduced many remarkable conjectures, over the years. Many of these conjectures are given by purported evaluations for very difficult series that were discovered in an experimental fashion, with Computer Algebra System software and via numerical estimates. Recently, Sun [17] posted a preprint on series with summands involving harmonic numbers, and the purpose of our article is to prove one of the conjectures given by Sun in this recent preprint [17]. Conjecture 2.4 from [17] was formulated in the following manner by Sun [17], and it was indicated [17] that this Conjecture was introduced in December of 2022. We are to let \(H_{m}=1+\frac{1}{2}+\cdots+\frac{1}{m}\) denote the \(m^{\text{th}}\) entry in the sequence of harmonic numbers. The \(\Gamma\)-function [16, SS8] is of great importance inside of mathematics and outside of mathematics and is to be heavily involved in our article and may be defined via the Euler integral so that \(\Gamma(x)=\int_{0}^{\infty}u^{x-1}e^{-u}\,du\) for \(\Re(x)>0\). **Conjecture 2.4** from [17]: We have \[\sum_{k=0}^{\infty}\frac{\binom{2k}{k}^{2}}{(-16)^{k}}(2H_{2k}-H_{k})=-\frac{ \ln(2)\,\Gamma^{2}\left(\frac{1}{4}\right)}{4\pi\sqrt{2\pi}} \tag{1}\] and \[\sum_{k=0}^{\infty}\frac{\binom{2k}{k}^{2}}{32^{k}}(2H_{2k}-H_{k})=\frac{\ln( 2)\,\Gamma^{2}\left(\frac{1}{4}\right)}{4\pi\sqrt{\pi}}. \tag{2}\] As it turns out, the alternating series evaluation in (1) can be shown to follow in a direct way from results obtained via a hypergeometric linearization method given by Chu and Campbell in [14], as we are to briefly demonstrate, later in this article. However, the problem of proving the evaluation in (2) turns out to be much more difficult. We succeed in proving (2) in this article, as in Section 2 below. To the best of our knowledge, Sun's conjectured formula in (2) had not previously been proved. ### Background Using a linearization method based on coefficient extractions, Chu [13] recently applied this method to obtain identities for evaluating series involving \(\frac{\binom{2k}{k}^{2}}{32^{k}}\) for \(k\in\mathbb{N}_{0}\) together with harmonic-type numbers, building on the beta integral-derived results previously given by Campbell in [3]. However, the methds of Chu [13] cannot be applied, at least in any direct way, to prove (2). In particular, the hypergeometric techniques due to Chu [13] were applied in [13] to evaluate series of convergence rate \(\frac{1}{2}\) involving \[\left(\frac{1}{32}\right)^{k}\binom{2k}{k}^{2}H_{k} \tag{3}\] for \(k\in\mathbb{N}_{0}\), such as the formula \[\sum_{k=0}^{\infty}\left(\frac{1}{32}\right)^{k}\binom{2k}{k}^{2}\frac{H_{k}} {k+1}=8-\frac{2\Gamma^{2}\left(\frac{1}{4}\right)}{\pi^{3/2}}-\frac{4\pi^{3/2} +16\sqrt{\pi}\ln(2)}{\Gamma^{2}\left(\frac{1}{4}\right)} \tag{4}\] previously introduced by Campbell [3], and series of convergence rate \(\frac{1}{2}\) involving \[\left(\frac{1}{32}\right)^{k}\binom{2k}{k}^{2}O_{k}^{(2)} \tag{5}\] for \(k\in\mathbb{N}_{0}\), writing \(O_{k}^{(2)}=\sum_{i=1}^{k}\frac{1}{(2i-1)^{2}}\). However, by expanding the summand of (2), we would need to evaluate a series of convergence rate \(\frac{1}{2}\) involving \[\left(\frac{1}{32}\right)^{k}\binom{2k}{k}^{2}H_{2k} \tag{6}\] for \(k\in\mathbb{N}_{0}\), in contrast to both (3) and (5). There have been only a few previously known series of convergence rate \(\frac{1}{2}\) involving (6), as in our previous work on Fourier-Legendre theory and fractional calculus [8]. In particular, it was proved in [8] that \[\sum_{k=0}^{\infty}\left(\frac{1}{32}\right)^{k}\binom{2k}{k}^{2}\frac{H_{2k }}{2k-1}= \tag{7}\] \[\frac{\sqrt{\pi}\left(\pi+3\ln(2)-4\right)}{2\Gamma^{2}\left(\frac{1}{4}\right)}- \frac{\Gamma^{2}\left(\frac{1}{4}\right)\left(\pi-3\ln(2)-2\right)}{16\pi^{3/2}} \tag{8}\] and that \[\sum_{k=0}^{\infty}\left(\frac{1}{32}\right)^{k}\binom{2k}{k}^{2}\frac{H_{2k}} {k+1}=4-\frac{3\Gamma^{2}\left(\frac{1}{4}\right)}{2\pi^{3/2}}-\frac{2\sqrt{ \pi}(\pi+3\ln(2)-4)}{\Gamma^{2}\left(\frac{1}{4}\right)}, \tag{9}\] and the results in (7)-(8) and in (9) were highlighted as main results in [8]. Our approach toward solving the problem proposed by Sun given by proving (2) is of a similar nature relative to our past proofs of (7)-(8) and (9), but our proof of (2) is considerably more involved and requires recent results on what are referred to as _lemniscate-like constants_ in [9]. As considered by Chu [13], the very fast convergence rate of series as in (4) is of interest, especially in comparison with previously known series of convergence rate 1 involving \(\frac{\binom{2k}{k}^{2}}{16^{k}}\) for \(k\in\mathbb{N}_{0}\) and harmonic numbers, as in the formula \[\sum_{k=0}^{\infty}\left(\frac{1}{16}\right)^{k}\binom{2k}{k}^{2}\frac{H_{k}} {(2k-1)^{2}}=\frac{12-16\ln(2)}{\pi}\] proved by Choi in 2014 [11] and independently by Chen in 2016 [10], which was later generalized by Campbell [4] and by Wang and Chu [20]. The foregoing considerations greatly motivate the development of new techniques for evaluating series involving expressions as in (3). ### Preliminaries One of the key tools that we are to apply to prove Sun's conjectured formula shown in (2) is given by the beta-type integration technique introduced in the author's previous publication [5] and reproduced in the author's PhD Thesis [6]. We are to let \(H_{m}^{\prime}=1-\frac{1}{2}+\cdots+\frac{(-1)^{m+1}}{m}\) denote the \(m^{\text{th}}\) entry in the sequence of alternating harmonic numbers, recalling the relation such that \(H_{2n}^{\prime}=H_{2n}-H_{n}\). **Lemma 1**.: _Given a sequence \((f_{n})_{n\geq 0}\) whereby the series_ \[\sum_{n=0}^{\infty}\left(\frac{1}{16}\right)^{n}H_{2n}^{\prime}\binom{2n}{n} ^{2}\frac{f_{n}}{n+1} \tag{10}\] _converges, the above series is equal to_ \[\frac{4}{\pi}\int_{0}^{1}\sum_{n=0}^{\infty}(-1)^{n}x^{2n}\sqrt{ 1-x^{2}}\binom{-\frac{1}{2}}{n}f_{n}\ln\left(x\right)\,dx \tag{11}\] \[+\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{16}\right)^{n} \frac{\binom{2n}{n}^{2}(2\ln(2)(n+1)+1)}{(n+1)^{2}}f_{n}, \tag{12}\] _under the assumption that the sequence \(f\) is such that it is possible to reverse the order of integration and infinite summation in (11) [5, 6]._ The evaluation of hypergeometric series is of central importance in our article, so we find it to be appropriate to recall the following definition of the term _generalized hypergeometric series_: \[{}_{p}F_{q}\!\!\left[\!\!\begin{array}{c}a_{1},a_{2},\ldots,a_{p}\\ b_{1},b_{2},\ldots,b_{q}\end{array}\!\!\right|\,x\!\right]=\sum_{n=0}^{\infty} \left[\!\!\begin{array}{c}a_{1},a_{2},\ldots,a_{p}\\ b_{1},b_{2},\ldots,b_{q}\end{array}\!\!\right]_{n}\frac{x^{n}}{n!}. \tag{13}\] The _complete elliptic integrals_ are to play a significant role in our proof of (2). For the purposes of our article, the complete elliptic integrals \(\mathbf{K}\) and \(\mathbf{E}\) of the first and second kinds may be defined via the following Maclaurin series expansions, with reference to the classic _Pi and the AGM_ text [2, pp. 8-10]: \[\mathbf{K}(k) =\frac{\pi}{2}\cdot{}_{2}F_{1}\!\!\left[\!\begin{array}{c} \frac{1}{2},\frac{1}{2}\\ 1\end{array}\!\!\right|\,k^{2}\!\right], \tag{14}\] \[\mathbf{E}(k) =\frac{\pi}{2}\cdot{}_{2}F_{1}\!\!\left[\!\begin{array}{c} \frac{1}{2},-\frac{1}{2}\\ 1\end{array}\!\!\right|\,k^{2}\!\right]. \tag{15}\] The following elliptic integral singular values highlighted as Theorem 1.7 in the _Pi and the AGM_ text [2, p. 25] are to also be applied in our proof of (2): \[\mathbf{K}\left(\frac{1}{\sqrt{2}}\right) =\frac{\Gamma^{2}\left(\frac{1}{4}\right)}{4\sqrt{\pi}}, \tag{16}\] \[\mathbf{E}\left(\frac{1}{\sqrt{2}}\right) =\frac{\Gamma^{2}\left(\frac{1}{4}\right)}{8\sqrt{\pi}}+\frac{ \pi^{3/2}}{\Gamma^{2}\left(\frac{1}{4}\right)}. \tag{17}\] Writing \(k^{\prime}=\sqrt{1-k^{2}}\), we are to also employ the following differential equations [2, p. 10]: \[\frac{d\mathbf{E}}{dk}=\frac{\mathbf{E}-\mathbf{K}}{k}\quad\text{and}\quad \frac{d\mathbf{K}}{dk}=\frac{\mathbf{E}-(k^{\prime})^{2}\mathbf{K}}{k(k^{ \prime})^{2}}. \tag{18}\] We are to make use of the _Gauss's summation theorem_[1, SS1.3] such that \[{}_{2}F_{1}\!\!\left[\!\begin{array}{c}a,b\\ c\end{array}\!\!\right|\,1\!\right]=\Gamma\left[\!\!\begin{array}{c}c,c-a-b \\ c-a,c-b\end{array}\!\!\right] \tag{19}\] for \(\Re(c-a-b)>0\), writing \[\Gamma\left[\!\!\begin{array}{c}\alpha,\beta,\ldots,\gamma\\ A,B,\ldots,C\end{array}\!\!\right]=\frac{\Gamma(\alpha)\Gamma(\beta)\cdots \Gamma(\gamma)}{\Gamma(A)\Gamma(B)\cdots\Gamma(C)}.\] The _Euler-Mascheroni constant_ is such that \(\gamma=\lim_{n\to\infty}\left(H_{n}-\ln n\right)\). The _digamma function_ is the special function such that [16, SS9]: \[\psi(z)=\frac{d}{dz}\ln\Gamma(z)=\frac{\Gamma^{\prime}(z)}{\Gamma(z)}=-\gamma+ \sum_{n=0}^{\infty}\frac{z-1}{(n+1)(n+z)}. \tag{20}\] Euler's formula for \({}_{2}F_{1}\)-series [15, p. 57] is such that \[{}_{2}F_{1}\!\!\left[\!\!\!\begin{array}{c}a,b\\ c\end{array}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! \[=-3\sum_{k=1}^{\infty}\frac{\left(-\frac{1}{2}\right)_{k}\left(\frac{1}{4} \right)_{k}}{\left(\frac{1}{2}\right)_{k}\left(\frac{3}{4}\right)_{k}}\] \[=3\left(1-{}_{3}F_{2}\!\!\left[\begin{matrix}1,-\frac{1}{2},\frac{ 1}{4}\\ \frac{1}{2},\frac{3}{4}\end{matrix}\biggm{|}1\right]\right),\] and so that the formulation of Watson's identity in (25) then gives us the desired symbolic form in (24). We let \[\beta(x,y)=\int_{0}^{1}t^{x-1}(1-t)^{y-1}\,dt\] denote the beta function for \(\Re(x)>0\) and \(\Re(y)>0\). The classical _lemniscate constants_ \[A=\frac{1}{4}\beta\left(\frac{1}{2},\frac{1}{4}\right)=\int_{0}^{1}\frac{1}{ \sqrt{1-t^{4}}}\,dt=\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{2n} {n}\frac{1}{4n+1}=\frac{\Gamma^{2}\left(\frac{1}{4}\right)}{4\sqrt{2\pi}} \tag{26}\] and \[B=\frac{1}{4}\beta\left(\frac{1}{2},\frac{3}{4}\right)=\int_{0}^{1}\frac{t^{ 2}}{\sqrt{1-t^{4}}}\,dt=\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{ 2n}{n}\frac{1}{4n+3}=\frac{\sqrt{2\pi^{3}}}{\Gamma^{2}\left(\frac{1}{4}\right)} \tag{27}\] have been of much significance in the history of mathematics [9, 19], and this led to the exploration of _lemniscate-like constants_ of the following forms [7, 9], for a suitable sequence \(f\): \[\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{2n}{n}\frac{f_{n}}{4n+1 }\quad\text{and}\quad\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{2n }{n}\frac{f_{n}}{4n+3}. \tag{28}\] Series as in (28) are to be heavily used in our main proof in Section 2. _Dixon's formula_ for well-poised series is such that \[{}_{3}F_{2}\!\!\left[\begin{matrix}a,b,c\\ 1+a-b,1+a-c\end{matrix}\biggm{|}1\right]=\Gamma\left[\begin{matrix}1+\frac{a} {2},1+\frac{a}{2}-b-c,1+a-b,1+a-c\\ 1+a,1+a-b-c,1+\frac{a}{2}-b,1+\frac{a}{2}-c\end{matrix}\right]. \tag{29}\] Chu [12] introduced extended versions of Watson-Whipple-Dixon \({}_{3}F_{2}\)-series, and we are to apply the following almost-poised version of Dixon's formula [12] as in [9]: \[{}_{3}F_{2}\!\!\left[\begin{matrix}a,b,c\\ 2+a-b,2+a-c\end{matrix}\biggm{|}1\right]=\] \[\frac{2^{1+2a-2b-2c}\Gamma(a-b+2)\Gamma(a-c+2)}{\pi(b-1)(1-c) \Gamma(a)\Gamma(a-2b+2)\Gamma(a-2c+2)\Gamma(a-b-c+2)}\] \[\left(\Gamma\left(\frac{1+a}{2}\right)\Gamma\left(\frac{2+a}{2}-b \right)\Gamma\left(\frac{2+a}{2}-c\right)\Gamma\left(\frac{5+a}{2}-b-c\right)\right.\] \[-\Gamma\left(\frac{a}{2}\right)\Gamma\left(\frac{3+a}{2}-b \right)\Gamma\left(\frac{3+a}{2}-c\right)\Gamma\left(\frac{4+a}{2}-b-c\right) \Bigg{)}.\] A direct application of the above hypergeometric identity gives us that: \[\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{2n}{n}\frac{1}{(4n+3)^{2}}= \frac{1}{9}\,{}_{3}F_{2}\!\!\left[\!\begin{array}{c}\frac{1}{2},\frac{3}{4}, \frac{3}{4}\\ \frac{7}{4},\frac{7}{4}\end{array}\!\!\right|\,1\right]=\frac{4-\pi}{4\sqrt{2 \pi}}\Gamma^{2}\left(\frac{3}{4}\right). \tag{30}\] The lemniscate-like constant evaluation such that \[\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{2n}{n}\frac{O_{2n}}{4n+3 }=\frac{\pi^{3/2}(3\ln(2)+2)}{2\sqrt{2}\Gamma^{2}\left(\frac{1}{4}\right)} \tag{31}\] and \[\sum_{k=0}^{\infty}\left(\frac{1}{4}\right)^{k}\binom{2k}{k}\frac{O_{2k}}{4k+1 }=\frac{3\Gamma^{2}\left(\frac{1}{4}\right)\ln(2)}{16\sqrt{2\pi}} \tag{32}\] were proved in [9] and included as main results, and we are to apply (31) and (32) in our main proof, letting \(O_{m}=1+\frac{1}{3}+\cdots+\frac{1}{2m-1}\) denote the \(m^{\text{th}}\) odd harmonic number. _Bailey's theorem_[1, p. 11] is such that \[{}_{2}F_{1}\!\!\left[\!\begin{array}{c}a,1-a\\ c\end{array}\!\!\right|\,\frac{1}{2}\right]=\Gamma\left[\!\begin{array}{c} \frac{c}{2},\frac{c+1}{2}\\ \frac{a+c}{2},\frac{1-a+c}{2}\end{array}\!\!\right]. \tag{33}\] Tauraso [18] obtained the following, by setting \(a=\frac{1}{2}\) in (33) and via a term-by-term application of the operator \(\frac{\partial}{\partial c}\cdot\big{|}_{c=1}\): \[\sum_{k=0}^{\infty}\left(\frac{1}{32}\right)^{k}\binom{2k}{k}^{2}H_{k}=\frac{ \sqrt{\pi}(\pi-4\ln(2))}{2\Gamma^{2}\left(\frac{3}{4}\right)}. \tag{34}\] However, the application of operators such as \(\frac{\partial}{\partial c}\cdot\big{|}_{c=1/2}\) have the effect of reducing the power of central binomial coefficients, so it is unclear as to how it may be possible to mimic Tauraso's approach in the hope of evaluating the following intractable series: \[\sum_{k=0}^{\infty}\left(\frac{1}{32}\right)^{k}\binom{2k}{k}^{2}H_{2k}.\] ## 2 Proof of a conjectured evaluation for a series of convergence rate \(\frac{1}{2}\) **Theorem 1**.: _The formula in (2) conjectured by Sun holds true._ Proof.: We begin by setting \(f_{n}=2^{-n}(n+1)\) in Lemma 1. Lemma 1 then gives us the equality of \[\sum_{n=0}^{\infty}\left(\frac{1}{32}\right)^{n}(H_{2n}-H_{n})\binom{2n}{n}^{2} \tag{35}\] and \[\frac{4}{\pi}\int_{0}^{1}\sqrt{1-x^{2}}\ln(x)\sum_{n=0}^{\infty} \left(-\frac{x^{2}}{2}\right)^{n}\binom{-\frac{1}{2}}{n}(n+1)\,dx+\] \[\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{32}\right)^{n}\binom{ 2n}{n}^{2}\frac{1+2(n+1)\ln(2)}{n+1}.\] According to the Maclaurin series expansions in (14) and (15) along with the differential equations in (18), we may obtain the power series expansion \[\sum_{n=0}^{\infty}\frac{\binom{2n}{n}^{2}}{n+1}y^{n}=\frac{\mathbf{E}\left(4 \sqrt{y}\right)}{4\pi y}+\frac{4\left(1-\frac{1}{16y}\right)\mathbf{K}\left(4 \sqrt{y}\right)}{\pi}. \tag{36}\] From the expansion in (36) together with the elliptic integral singular values shown in (16) and in (17), we find that the series in (35) is expressible in the following manner: \[\frac{4}{\pi}\int_{0}^{1}\sqrt{1-x^{2}}\ln(x)\sum_{n=0}^{\infty} \left(-\frac{x^{2}}{2}\right)^{n}\binom{-\frac{1}{2}}{n}(n+1)\,dx+ \tag{37}\] \[\frac{4\sqrt{\pi}}{\Gamma^{2}\left(\frac{1}{4}\right)}+\frac{\ln (2)\Gamma^{2}\left(\frac{1}{4}\right)}{2\pi^{3/2}}. \tag{38}\] By the generalized binomial theorem, we find that (37)-(38) is reducible to the following: \[-\frac{4}{\pi\sqrt{2}}\int_{0}^{1}\frac{\sqrt{1-x^{2}}\left(x^{2}-4\right)\ln (x)}{\left(2-x^{2}\right)^{3/2}}\,dx+\frac{4\sqrt{\pi}}{\Gamma^{2}\left(\frac{ 1}{4}\right)}+\frac{\Gamma^{2}\left(\frac{1}{4}\right)\ln(2)}{2\pi^{3/2}}. \tag{39}\] So, it remains to evaluate the integral in (39), i.e., to evaluate the following expression: \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\left(x^{2}-4\right)\ln(x)}{\left(2-x^{2} \right)^{3/2}}\,dx. \tag{40}\] We may rewrite (40) as \[-\int_{0}^{1}\sqrt{\frac{1-x^{2}}{2-x^{2}}}\ln(x)\,dx-2\int_{0}^{1}\frac{ \sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2}}\,dx. \tag{41}\] Applying the change of variables such that \(1-x^{2}=u\) to the first integral in (41), we obtain \[-\frac{1}{4}\int_{0}^{1}\frac{\sqrt{u}\ln(1-u)}{\sqrt{1-u^{2}}}\,du-2\int_{0 }^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2}}\,dx. \tag{42}\] By expanding the integrand factor \(\ln(1-u)\) with its Maclaurin series and then integrating term-by-term using the Dominated Convergence Theorem, we may obtain the following from (42): \[\frac{\sqrt{\pi}}{8}\sum_{n=1}^{\infty}\frac{1}{n}\Gamma\left[\begin{matrix} \frac{n}{2}+\frac{3}{4}\\ \frac{n}{2}+\frac{5}{4}\end{matrix}\right]-2\int_{0}^{1}\frac{\sqrt{1-x^{2}} \ln(x)}{\left(2-x^{2}\right)^{3/2}}\,dx. \tag{43}\] Applying a series bisection to (43), we obtain \[\frac{\sqrt{\pi}}{16}\sum_{n=1}^{\infty}\frac{1}{n}\Gamma\begin{bmatrix}n+ \frac{3}{4}\\ n+\frac{5}{4}\end{bmatrix}+\frac{\sqrt{\pi}}{8}\sum_{n=1}^{\infty}\frac{1}{2n-1 }\Gamma\begin{bmatrix}n+\frac{1}{4}\\ n+\frac{3}{4}\end{bmatrix}- \tag{44}\] \[2\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/ 2}}\,dx.\] Applying an index shift to the first series in (44), we obtain: \[\sum_{n=1}^{\infty}\frac{1}{n}\Gamma\begin{bmatrix}n+\frac{3}{4}\\ n+\frac{5}{4}\end{bmatrix}=\frac{12\Gamma\left(\frac{3}{4}\right)}{5\Gamma \left(\frac{1}{4}\right)}{}_{3}F_{2}\!\!\begin{bmatrix}1,1,\frac{7}{4}\\ 2,\frac{9}{4}\end{bmatrix}\,1\Bigg{]}\,. \tag{45}\] So, the digamma identity in (23) from [15, p. 111] allows us to evaluate the \({}_{3}F_{2}\)-expression in (45). This allows us to rewrite (42) in the following manner: \[-\frac{\pi^{3/2}(\pi+2\ln(2)-8)}{4\sqrt{2}\Gamma^{2}\left(\frac{1 }{4}\right)}+\frac{\sqrt{\pi}}{8}\sum_{n=1}^{\infty}\frac{1}{2n-1}\Gamma \begin{bmatrix}n+\frac{1}{4}\\ n+\frac{3}{4}\end{bmatrix}- \tag{46}\] \[2\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3 /2}}\,dx. \tag{47}\] By rewriting the infinite series in (46) according to the notation in (13), we find that (46)-(47) is equal to: \[2\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3 /2}}\,dx.\] So, from the Watson-derived \({}_{3}F_{2}(1)\)-evaluation on display in (24), we find that the integral in (40) is equal to the following: \[\frac{\Gamma^{2}\left(\frac{1}{4}\right)}{8\sqrt{2\pi}}-\frac{\pi^{3/2}(\pi+ \ln(2)-4)}{2\sqrt{2}\Gamma^{2}\left(\frac{1}{4}\right)}-2\int_{0}^{1}\frac{ \sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2}}\,dx. \tag{48}\] So, it remains to evaluate the integral in (48). Using a change of variables, we obtain that: \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2}}\,dx=\frac {1}{4}\int_{0}^{1}\frac{\sqrt{u}\ln(1-u)}{(1+u)^{3/2}\sqrt{1-u}}\,du.\] Using an appropriate Cauchy product, we may obtain that \[\left(\left(\frac{d}{du}\right)^{n}\frac{1}{(u+1)\sqrt{1-u^{2}}}\right)\, \Bigg{|}_{u=0}=\left(-\frac{1}{2}\right)^{n}(n+1)!\binom{n}{\left\lfloor\frac {n}{2}\right\rfloor}. \tag{49}\] So, from the Maclaurin series corresponding to (49), we may obtain that \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2}} \,dx= \tag{50}\] \[\frac{1}{4}\int_{0}^{1}\left(\sum_{n=0}^{\infty}\left(-\frac{1}{2} \right)^{n}\binom{n}{\left\lfloor\frac{n}{2}\right\rfloor}(n+1)u^{n+\frac{1}{ 2}}\ln(1-u)\right)\,du. \tag{51}\] According to a standard moment formula for the digamma function, we have that: \[\int_{0}^{1}u^{n+\frac{1}{2}}\ln(1-u)\,du=-\frac{2\left(\psi\left(n+\frac{5}{2} \right)+\gamma\right)}{2n+3}. \tag{52}\] According to the expansion formula for the digamma function shown in (20), we may obtain from (52) that \[\int_{0}^{1}u^{n+\frac{1}{2}}\ln(1-u)\,du=\frac{4\ln(2)-4O_{n+1}-\frac{4}{2n+3 }}{2n+3}. \tag{53}\] According to the Dominated Convergence Theorem, we may reverse the order of integration and infinite summation with respect to the equality in (50)-(51), so as to give us the following, being consistent with the notation in (53): \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2 }}\,dx=\] \[\frac{1}{4}\sum_{n=0}^{\infty}\left(-\frac{1}{2}\right)^{n} \binom{n}{\left\lfloor\frac{n}{2}\right\rfloor}\frac{\left(n+1\right)\left(4 \ln(2)-4O_{n+1}-\frac{4}{2n+3}\right)}{2n+3}.\] Applying a series bisection, we obtain that \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/ 2}}\,dx=\] \[\frac{1}{4}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n} \binom{2n}{n}\frac{\left(2n+1\right)\left(4\ln(2)-4O_{2n+1}-\frac{4}{2(2n)+3 }\right)}{2(2n)+3}- \tag{54}\] \[\frac{1}{4}\sum_{n=0}^{\infty}\left(\frac{1}{2}\right)^{2n+1} \binom{2n+1}{n}\frac{\left(\left(2n+1\right)+1\right)\left(4\ln(2)-4O_{\left( 2n+1\right)+1}-\frac{4}{2(2n+1)+3}\right)}{2(2n+1)+3}.\] Equivalently, \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2 }}\,dx=\] \[-\frac{1}{8}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n} \binom{2n}{n}\frac{1}{4n+1}+\left(-1-\frac{\ln(2)}{2}\right)\sum_{n=0}^{ \infty}\left(\frac{1}{4}\right)^{n}\binom{2n}{n}\frac{1}{4n+3}+\] \[\frac{13+12\ln(2)}{8}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n} \binom{2n}{n}\frac{1}{4n+5}+\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4} \right)^{n}\binom{2n}{n}\frac{1}{(4n+3)^{2}}-\] \[\frac{3}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{2 n}{n}\frac{1}{(4n+5)^{2}}+\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4} \right)^{n}\binom{2n}{n}\frac{O_{2n}}{4n+3}-\] \[\frac{3}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{2 n}{n}\frac{O_{2n}}{4n+5}.\] From (26) and (27), we may obtain that \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2 }}\,dx=\] \[-\frac{\Gamma^{2}\left(\frac{1}{4}\right)}{32\sqrt{2\pi}}+\frac{ \left(-1-\frac{\ln(2)}{2}\right)\sqrt{2\pi^{3}}}{\Gamma^{2}\left(\frac{1}{4} \right)}+\] \[\frac{13+12\ln(2)}{8}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right) ^{n}\binom{2n}{n}\frac{1}{4n+5}+\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4 }\right)^{n}\binom{2n}{n}\frac{1}{(4n+3)^{2}}-\] \[\frac{3}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{ 2n}{n}\frac{1}{(4n+5)^{2}}+\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4} \right)^{n}\binom{2n}{n}\frac{O_{2n}}{4n+3}-\] \[\frac{3}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{ 2n}{n}\frac{O_{2n}}{4n+5}.\] From the lemniscate-like constant evaluations in (30) and (31), we obtain that \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2 }}\,dx=\] \[-\frac{\Gamma^{2}\left(\frac{1}{4}\right)}{32\sqrt{2\pi}}+\frac{ \left(4-\pi\right)\Gamma^{2}\left(\frac{3}{4}\right)}{8\sqrt{2\pi}}+\frac{ \pi^{3/2}(3\ln(2)+2)}{4\sqrt{2}\Gamma^{2}\left(\frac{1}{4}\right)}+\frac{ \left(-1-\frac{\ln(2)}{2}\right)\sqrt{2\pi^{3}}}{\Gamma^{2}\left(\frac{1}{4} \right)}+\] \[\frac{13+12\ln(2)}{8}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^ {n}\binom{2n}{n}\frac{1}{4n+5}-\frac{3}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4 }\right)^{n}\binom{2n}{n}\frac{1}{(4n+5)^{2}}-\] \[\frac{3}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{ 2n}{n}\frac{O_{2n}}{4n+5}.\] Applying reindexing arguments, we obtain that \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2 }}\,dx=\] \[\frac{-\Gamma^{4}\left(\frac{1}{4}\right)-8\pi^{2}(2+\pi+\ln(2))} {32\sqrt{2\pi}\Gamma\left(\frac{1}{4}\right)^{2}}+\] \[\frac{3+4\ln(2)}{8}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n} \binom{2n}{n}\frac{1}{2n-1}+\left(\frac{1}{2}+\frac{\ln(2)}{2}\right)\sum_{n=0}^ {\infty}\left(\frac{1}{4}\right)^{n}\binom{2n}{n}\frac{1}{4n+1}-\] \[\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{2 n}{n}\frac{1}{(4n+1)^{2}}+\frac{9}{8}\sum_{n=0}^{\infty}\left(\frac{1}{4} \right)^{n}\binom{2n}{n}\frac{1}{4n-3}-\] \[\frac{3}{4}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{2 n}{n}\frac{1}{(4n-1)}-\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n} \binom{2n}{n}\frac{O_{2n}}{2n-1}-\] \[\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{2 n}{n}\frac{O_{2n}}{4n+1}.\] From an equivalent formulation of the generating function for the sequence of Catalan numbers, we obtain that: \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2 }}\,dx=\] \[\frac{-\Gamma^{4}\left(\frac{1}{4}\right)-8\pi^{2}(2+\pi+\ln(2))} {32\sqrt{2}\pi\Gamma^{2}\left(\frac{1}{4}\right)}+\] \[\left(\frac{1}{2}+\frac{\ln(2)}{2}\right)\sum_{n=0}^{\infty} \left(\frac{1}{4}\right)^{n}\binom{2n}{n}\frac{1}{4n+1}-\frac{1}{2}\sum_{n=0}^ {\infty}\left(\frac{1}{4}\right)^{n}\binom{2n}{n}\frac{1}{(4n+1)^{2}}+\] \[\frac{9}{8}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom {2n}{n}\frac{1}{4n-3}+\sum_{n=0}^{\infty}-\frac{3}{4}\left(\frac{1}{4}\right) ^{n}\binom{2n}{n}\frac{1}{4n-1}-\] \[\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom {2n}{n}\frac{O_{2n}}{2n-1}-\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4} \right)^{n}\binom{2n}{n}\frac{O_{2n}}{4n+1}.\] From the classical lemniscate constant evaluation shown in (26), we obtain that \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2 }}\,dx=\] \[\frac{-\Gamma^{4}\left(\frac{1}{4}\right)-8\pi^{2}(2+\pi+\ln(2))} {32\sqrt{2}\pi\Gamma^{2}\left(\frac{1}{4}\right)}+\frac{\left(\frac{1}{2}+ \frac{\ln(2)}{2}\right)\sqrt{\pi}\Gamma\left(\frac{5}{4}\right)}{\Gamma\left( \frac{3}{4}\right)}-\] \[\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom {2n}{n}\frac{1}{(4n+1)^{2}}+\frac{9}{8}\sum_{n=0}^{\infty}\left(\frac{1}{4} \right)^{n}\binom{2n}{n}\frac{1}{4n-3}-\] \[\frac{3}{4}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom {2n}{n}\frac{1}{(4n-1)}-\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4} \right)^{n}\binom{2n}{n}\frac{O_{2n}}{2n-1}-\] \[\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom {2n}{n}\frac{O_{2n}}{4n+1}.\] From Dixon's formula in (29), we obtain that \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2}}\,dx=\] \[\frac{2\pi\Gamma\left(-\frac{1}{4}\right)\left(2+\pi+\ln(2) \right)+\sqrt{2}\Gamma\left(\frac{1}{4}\right)^{3}\left(3-\pi+4\ln(2)\right)}{ 64\sqrt{\pi}\Gamma\left(\frac{1}{4}\right)}+\] \[\frac{9}{8}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom {2n}{n}\frac{1}{4n-3}-\frac{3}{4}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^ {n}\binom{2n}{n}\frac{1}{4n-1}-\] \[\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom {2n}{n}\frac{O_{2n}}{2n-1}-\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4} \right)^{n}\binom{2n}{n}\frac{O_{2n}}{4n+1}.\] From the lemniscate-like constant evaluation in (32), we obtain that \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2 }}\,dx=\] \[\frac{\sqrt{2}\Gamma\left(\frac{1}{4}\right)^{3}\left(3-\pi+\ln( 2)\right)+2\pi\Gamma\left(-\frac{1}{4}\right)\left(2+\pi+\ln(2)\right)}{64 \sqrt{\pi}\Gamma\left(\frac{1}{4}\right)}+\] \[\frac{9}{8}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom {2n}{n}\frac{1}{4n-3}-\frac{3}{4}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^ {n}\binom{2n}{n}\frac{1}{4n-1}-\] \[\frac{1}{2}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom {2n}{n}\frac{O_{2n}}{2n-1}.\] Using the moment formula \[O_{2n}=\int_{0}^{1}\frac{1-x^{4n}}{1-x^{2}}\,dx,\] together with the power series identity \[\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{2n}{n}\frac{1-x^{4n}}{ \left(2n-1\right)\left(1-x^{2}\right)}=-\frac{\sqrt{1-x^{4}}}{x^{2}-1},\] we may oobtain that \[\int_{0}^{1}-\frac{\sqrt{1-x^{4}}}{x^{2}-1}\,dx=\mathbf{E}(i),\] so that we may apply a known elliptic integral singular value to evaluate the remaining harmonic sum. Explicitly, \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2}}\,dx=\] \[\frac{-48\pi^{2}-8\pi^{3}-8\pi^{2}\ln(2)+\Gamma^{4}\left(\frac{1}{4} \right)\left(-1-\pi+\ln(2)\right)}{32\sqrt{2\pi}\Gamma^{2}\left(\frac{1}{4} \right)}+\] \[\frac{9}{8}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{ 2n}{n}\frac{1}{4n-3}-\frac{3}{4}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^ {n}\binom{2n}{n}\frac{1}{4n-1}.\] Applying reindexing arguments, we obtain that \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/ 2}}\,dx=\] \[\frac{3}{8}-\frac{\pi^{3/2}(6+\pi+\ln(2))}{4\sqrt{2}\Gamma^{2} \left(\frac{1}{4}\right)}+\frac{\Gamma^{2}\left(\frac{1}{4}\right)\left(-1- \pi+\ln(2)\right)}{32\sqrt{2\pi}}-\] \[\frac{3}{16}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n} \binom{2n}{n}\frac{1}{n+1}+\frac{3}{8}\sum_{n=0}^{\infty}\left(\frac{1}{4} \right)^{n}\binom{2n}{n}\frac{1}{4n+1}+\] \[\frac{3}{4}\sum_{n=0}^{\infty}\left(\frac{1}{4}\right)^{n}\binom{ 2n}{n}\frac{1}{4n+3}\] Using the generating function for the sequence of Catalan numbers, together with the evaluations for the classical lemniscate constants, we obtain \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\ln(x)}{\left(2-x^{2}\right)^{3/2}}\,dx=\frac {\Gamma^{2}\left(\frac{1}{4}\right)\left(4-2\pi+2\ln(2)\right)}{64\sqrt{2\pi} }-\frac{\pi^{3/2}(\pi+\ln(2))}{4\sqrt{2}\Gamma^{2}\left(\frac{1}{4}\right)}.\] So, from (48), we find that the integral in (40) may be reduced to the following: \[\int_{0}^{1}\frac{\sqrt{1-x^{2}}\left(x^{2}-4\right)\ln(x)}{\left(2-x^{2} \right)^{3/2}}\,dx=\frac{\sqrt{2}\pi^{3/2}}{\Gamma^{2}\left(\frac{1}{4} \right)}+\left(\frac{\sqrt{\frac{\pi}{2}}}{16}-\frac{\ln(2)}{16\sqrt{2\pi}} \right)\Gamma^{2}\left(\frac{1}{4}\right).\] So, from the equality of (35) and (39), we obtain the equality \[\sum_{n=0}^{\infty}\left(\frac{1}{32}\right)^{n}\left(H_{2n}-H_{n}\right) \binom{2n}{n}^{2}=\frac{(5\ln(2)-\pi)\Gamma^{2}\left(\frac{1}{4}\right)}{8\pi ^{3/2}}.\] So, from the harmonic sum evaluation in (34) derived from Bailey's theorem, we obtain that: \[\sum_{k=0}^{\infty}\left(\frac{1}{32}\right)^{k}\binom{2k}{k}^{2}H_{2k}=\frac {(\pi-3\ln(2))\Gamma^{2}\left(\frac{1}{4}\right)}{8\pi^{3/2}}. \tag{55}\] So, by taking an appropriate linear combination of (55) and the formula in (34) derived from Bailey's theorem, we obtain the desired result. The formulas \[\sum_{k=0}^{\infty}\left(-\frac{1}{16}\right)^{k}\binom{2k}{k}^{2}H_{k}=\frac{ \Gamma^{2}\left(\frac{1}{4}\right)}{4\sqrt{2\pi^{3}}}\left(\pi-5\ln(2)\right)\] and \[\sum_{k=0}^{\infty}\left(-\frac{1}{16}\right)^{k}\binom{2k}{k}^{2}H_{2k}=\frac{ \Gamma^{2}\left(\frac{1}{4}\right)}{8\sqrt{2\pi^{3}}}\left(\pi-6\ln(2)\right)\] were highlighted as part of Theorem 4 in [14] and proved via a linearization method introduced in [14]. So, by taking an appropriate linear combination of the above formulas, this gives us a proof of Sun's conjectured formula in (1). The new formula in (55) is of interest in its own right and recalls the open problem recently considered in [13] to evaluate \[\sum_{k=0}^{\infty}\left(\frac{1}{32}\right)^{k}\binom{2k}{k}^{2}H_{k}^{(2)} \quad\text{and}\quad\sum_{k=0}^{\infty}\left(\frac{1}{32}\right)^{k}\binom{2 k}{k}^{2}H_{k}^{2}\] symbolically, writing \(H_{m}^{(2)}=1+\frac{1}{2^{2}}+\cdots+\frac{1}{m^{2}}\). Our evaluation in (55) also motivates our interest in the open problem of evaluating \[\sum_{k=0}^{\infty}\left(\frac{1}{32}\right)^{k}\binom{2k}{k}^{2}H_{2k}^{(2)} \quad\text{and}\quad\sum_{k=0}^{\infty}\left(\frac{1}{32}\right)^{k}\binom{2 k}{k}^{2}H_{2k}^{2}\] symbolically.
2310.04099
ClusVPR: Efficient Visual Place Recognition with Clustering-based Weighted Transformer
Visual place recognition (VPR) is a highly challenging task that has a wide range of applications, including robot navigation and self-driving vehicles. VPR is particularly difficult due to the presence of duplicate regions and the lack of attention to small objects in complex scenes, resulting in recognition deviations. In this paper, we present ClusVPR, a novel approach that tackles the specific issues of redundant information in duplicate regions and representations of small objects. Different from existing methods that rely on Convolutional Neural Networks (CNNs) for feature map generation, ClusVPR introduces a unique paradigm called Clustering-based Weighted Transformer Network (CWTNet). CWTNet leverages the power of clustering-based weighted feature maps and integrates global dependencies to effectively address visual deviations encountered in large-scale VPR problems. We also introduce the optimized-VLAD (OptLAD) layer that significantly reduces the number of parameters and enhances model efficiency. This layer is specifically designed to aggregate the information obtained from scale-wise image patches. Additionally, our pyramid self-supervised strategy focuses on extracting representative and diverse information from scale-wise image patches instead of entire images, which is crucial for capturing representative and diverse information in VPR. Extensive experiments on four VPR datasets show our model's superior performance compared to existing models while being less complex.
Yifan Xu, Pourya Shamsolmoali, Jie Yang
2023-10-06T09:01:15Z
http://arxiv.org/abs/2310.04099v2
# ClusVPR: Efficient Visual Place Recognition with Clustering-based Weighted Transformer ###### Abstract Visual place recognition (VPR) is a highly challenging task that has a wide range of applications, including robot navigation and self-driving vehicles. VPR is particularly difficult due to the presence of duplicate regions and the lack of attention to small objects in complex scenes, resulting in recognition deviations. In this paper, we present ClusVPR, a novel approach that tackles the specific issues of redundant information in duplicate regions and representations of small objects. Different from existing methods that rely on Convolutional Neural Networks (CNNs) for feature map generation, ClusVPR introduces a unique paradigm called Clustering-based Weighted Transformer Network (CWNet). CWNet leverages the power of clustering-based weighted feature maps and integrates global dependencies to effectively address visual deviations encountered in large-scale VPR problems. We also introduce the optimized-VLAD (OptLAD) layer that significantly reduces the number of parameters and enhances model efficiency. This layer is specifically designed to aggregate the information obtained from scale-wise image patches. Additionally, our pyramid self-supervised strategy focuses on extracting representative and diverse information from scale-wise image patches instead of entire images, which is crucial for capturing representative and diverse information in VPR. Extensive experiments on four VPR datasets show our model's superior performance compared to existing models while being less complex. Visual place recognition, Vision transformer, Self-supervised. ## I Introduction Visual Place Recognition (VPR), also known as image Geo-localization, is a technique that aims to determine the location of query images from unknown locations by comparing them against a database of reference images from known locations [1, 2]. VPR plays a significant role in various AI applications, including robot navigation [3] and self-driving vehicle [4]. Recent advancements in deep CNNs [5, 6] and Vision Transformers (ViTs) [7, 8] have significantly improved the performance of VPR methods by extracting more discriminative and comprehensive features from images [9, 10, 11, 12]. In this paper, we focus on the VPR problem as an example of an image retrieval task. Our objective is to determine the location of an image by comparing it with geo-tagged images of a reference database. The illustration of our VPR architecture is shown in Fig 1. The main challenge in VPR is to generate discriminative representations from images with varying perspectives and appearances, such as environmental variations (e.g., seasons, illumination changes, dynamic occlusions). CNNs with NetVLAD [13] are commonly used in VPR [9, 11, 14] due to their ability to extract comprehensive and discriminative features from images. However, their effectiveness is limited in complex scenes with visual distractions, such as occlusions and dynamic objects, because of the locality assumption of CNNs. In such cases, the extracted features may not be able to capture the important information for VPR, leading to inaccurate localization. To tackle this issue, recent studies [15, 16, 10] have explored the use of ViTs that have shown promising results in various vision tasks. However, ViTs face specific challenges when applied to VPR tasks. **First**, ViTs simply process the entire image as a sequence of image tokens and cannot explicitly consider the importance of individual image regions. This can be problematic in VPR tasks where certain regions, such as duplicate regions (regions with similar visual content) or small objects, can have a significant impact on recognition accuracy. Ignoring these regions can lead to deviations and inaccurate localization. **Second**, ViTs lack certain inductive biases that are present in CNNs, such as locality and translation equivariance. These biases are crucial for effectively learning from image data and capturing spatial relationships between image features. **Third**, VPR datasets often have noisy GPS labels, which can introduce uncertainties and inconsistencies in the training process. Additionally, relying on weakly-supervised learning methods [9, 11] limits the availability of precise annotations, which can negatively impact the performance and accuracy of ViTs in VPR tasks. Based on the above observations, we introduce a new model called ClusVPR. Our ClusVPR consists of CWNTet and OptLAD layers. By leveraging the self-attention property, Fig. 1: An illustration of the visual place recognition task using our model. the CWTNet enhances the model's ability to capture global context information. Additionally, we address the issue of the unequal importance of image regions in VPR by calculating weights for image tokens using the k-nearest neighbor (KNN) clustering algorithm. This allows us to assign different weights to image tokens based on their relevance and significance and effectively minimize visual deviations that commonly arise in complex VPR tasks. To improve computational efficiency, we incorporate sparse attention [17], into the CWTNet. Sparse attention allows us to focus on the most relevant image tokens and minimize ineffective computations. Furthermore, compared to the NetVLAD, OptLAD layer in ClusVPR plays a key role in optimizing the model's efficiency. It achieves this by splitting high-dimensional representation vectors into uniform groups and effectively reducing the dimension of the global descriptor. Recent works [11, 18] have addressed the challenge of noisy GPS labels in VPR by adopting approaches that select the most similar image to the query image in the representation space as the top-ranked positive (ground-truth) image. These works are trained by using the triplet loss [19] to generate feature representations for the query image that are similar to the top-ranked positive image. However, the effectiveness of these methods is often limited due to the lack of information in the top-ranked positive images and the high appearance variability in large-scale VPR datasets, leading to instability and a lack of robustness. One potential solution to address this issue is to use high-ranked positive images during the training process. These images, although less similar to the query images, can provide valuable and diverse information that is often overlooked. To address this, we propose a pyramid self-supervised strategy that allows us to extract accurate information from scale-wise image patches, taking advantage of the diverse information provided by high-ranked positive images. More precisely, we split and merge the input images into scale-wise patches and generate local representations. We compare the local representations between the input images to compute scale-wise similarity scores. These scores are then used to refine the pyramid self-supervision, enabling the model to learn and capture the variations and details present in the image patches at different scales. In this strategy, the training process is divided into multiple generations to gradually improve the accuracy of the scale-wise similarity scores. The major innovations and contributions of this paper are as follows: * We introduce ClusVPR, a novel approach that aggregates clustering-based weighted feature maps into a compact and discriminative representation. This approach enhances global context inference and improves the processing of unequally important image regions. Additionally, our OptLAD layer reduces model complexity and enhances robustness to variations and noise. * A pyramid self-supervised strategy is designed to extract more representative and diverse information from scale-wise image patches. This approach avoids learning from entire images and instead focuses on extracting information from each scale level. The remainder of the paper is organized as follows: we first discuss the related works in Section II. Then, we detail the proposed ClusVPR and pyramid selfsupervised strategies in Section III. In Section IV, we provide the implementation details and experimental results. Finally, in Section V we conclude the paper. ## II Related work ### _Recent progress on visual place recognition (VPR)_ VPR aims to establish a mapping between an image and its corresponding spatial location. In VPR, the goal is to find the geographical location of a query image, and the predicted coordinates are considered correct if they are close to the ground truth position. Early VPR approaches [20, 21] relied on classical techniques like SIFT [22] and SURF [23] for detecting keypoints. However, these handcrafted features are not suitable for dealing with the large variations in appearance that are encountered in complex scenes. In recent years, CNN-based methods have demonstrated superior performance in VPR [11, 13, 16]. Among deep learning representation methods, NetVLAD [1] has emerged as a highly effective technique for VPR tasks. NetVLAD is a differentiable implementation of VLAD that is trained end-to-end with a CNN backbone for direct place recognition. It has been widely adopted in various works [9, 11, 14, 24]. Task-specific patch-level features have also been explored for VPR [9, 25]. The Patch-NetVLAD [11] introduces the use of the NetVLAD to extract descriptors from predefined image patches. Indeed, patch-level descriptors typically referred to local descriptors, encoding content from local patches. There are several researches on developing more compact descriptors, either through dimensionality reduction techniques [25, 26] or by replacing NetVLAD with lighter pooling layers such as GeM [27] and R-MAC [28]. Recently, Berton et al. [29] introduced a benchmark for VPR, providing a standardized framework to evaluate various global-retrieval-based methods. This benchmark allows for a fair comparison between different approaches in the field. In [12], a generalized contrastive Loss (GCL) is developed for training networks using graded similarity labels and [30] considers knowledge distillation to build an efficient VPR method. To summarize, the existing models that use the entire image for training often lead to inaccurate global representation learning. To address this issue, our proposed method leverages a pyramid self-supervised strategy, which involves using scale-wise image patches instead of the entire image. This approach enables the model to capture diverse and representative information during the training process. ### _Transformer-based Approaches_ The Transformer [31] was initially proposed for Natural Language Processing tasks and later introduced to vision tasks as the ViT [7]. In ViT, each image patch is treated as a token, allowing the model to process images using a sequence-based approach. The vanilla ViT requires large-scale training datasets, such as ImageNet [28], to achieve comparable results to CNNs [5]. DeiT [32], on the other hand, introduces a data-efficient training strategy for ViT that outperforms CNNs on the average-scale datasets. The vanilla ViT has been adopted in the recent VPR benchmarks, demonstrating competitive performance on global retrieval tasks. [15] applied ViT to image retrieval tasks by utilizing the [class] token from the final layer as a global feature. [33] takes a different approach by incorporating multiple transformer layers while still relying on a CNN-based backbone feature extractor. DELG [25] is a framework that extracts both local and global features. However, to improve the representation of features, HAF [34] is introduced which consist of a hierarchical attention network. DASGIL [35] presents a multi-task architecture with a shared encoder to generate global representation. Building upon this, [36] focuses on filtering semantic information using an attention mechanism. These approaches while promising, but they often overlook the specific challenges faced in VPR tasks, such as the varying importance of image regions due to duplicate regions and ignored small objects. In our ClusVPR, we tackle this challenge by introducing a novel approach that computes weights for image tokens using the k-nearest neighbor (KNN). This enables us to reduce visual deviations and enhance the accuracy of VPR. ## III Proposed Model This section presents a detailed description of our ClusVPR model. The training architecture is shown in Fig 2. ### _CWTNet of the ClusVPR_ To address the issue of deviations in VPR tasks caused by duplicate regions and overlooked small objects, we introduce CWTNet, which computes clustering-based weighted feature maps. CWTNet divides channels into two parallel branches, namely the local and global branches. The local branch uses depth-wise convolution layers to extract local features and provides positional information for our Clustering-based Weighted Transformer (CWT), because \(3\times 3\) convolutions can provide sufficient positional information [37], eliminating the need for positional embedding as used in ViTs. The global branch uses the proposed CWT to obtain global context. The local and global features are then fused through feature concatenation to generate the final output. The structure of the CWTNet is shown in Fig 2. ViTs [7, 32] typically take a sequence of 1D token embeddings as input. Given a feature map \(\mathbf{m}\in\mathbf{R}^{C\times H\times W}\), we first split it into a number of individual feature patches \(\mathbf{m}_{p}\in\mathbf{R}^{N\times P^{2}\cdot C}\), followed by their subsequent down-sampling into corresponding feature vectors \(\mathbf{x}_{p}\in\mathbf{R}^{N\times C}\). Here, \((H,W)\) represents the resolution of the feature map, \(C\) denotes the number of channels, \(P\) is the down-sampling rate, and \(N=HW/P^{2}\) represents the total number of token embeddings. Subsequently, \(\mathbf{x}_{p}\) is used for the input token embeddings. We introduce a clustering-based weighted module that uses the KNN to calculate the weights for the input token embeddings \(\mathbf{x}_{p}\). Given a set of token embeddings \(\mathbf{x}_{p}\), the clustering density \(\rho_{i}\) of each token embedding \(x_{i}\) is computed through its \(k_{n}\)-nearest neighbors: \[d_{i}=\sum_{x_{j}\in\text{KNN}(x_{i})}\|x_{i}-x_{j}\|_{2}^{2}, \tag{1}\] \[\rho_{i}=\text{exp}(-\frac{d_{i}-d_{min}}{k(d_{max}-d_{min})}), \tag{2}\] where \(x_{i}\) and \(x_{j}\) are the corresponding token embeddings, and KNN\((x_{i})\) represents the \(k_{n}\)-nearest neighbors of \(x_{i}\). \(d_{i}\) is the sum of distances between \(x_{i}\) and its \(k_{n}\)-nearest neighbors. The Fig. 2: The pipeline of the training architecture. Our model consists of a CNN backbone and the proposed ClusVPR. weight of each token embedding \(x_{i}\) is computed by \[w_{i}=\frac{1-\rho_{i}}{\sum_{j=1}^{N}(1-\rho_{j})}, \tag{3}\] \[w_{i}=\frac{w_{i}-w_{min}}{w_{max}-w_{min}}, \tag{4}\] where \(w_{i}\) denotes the potential value of each token embedding. We use ViT [7] as the basis for the CWT, but we remove the Layer Normalization (LN) to enable our clustering-based weighted module to operate more effectively. In the clustering-based weighted multi-head self-attention (CMSA), we use the clustering-based weights to compute the self-attention head. Given the token embeddings \(\textbf{x}_{p}\) as the input sequence of the CWT, a single self-attention head can be represented by \[\textbf{Q}=\textbf{x}_{p}\textbf{W}_{q},\ \textbf{K}=\textbf{x}_{p}\textbf{W}_{k},\ \textbf{V}=\textbf{x}_{p}\textbf{W}_{v}, \tag{5}\] \[\textbf{A}=\text{softmax}(\frac{\textbf{Q}\textbf{K}^{T}}{\sqrt{D_{k}}})(( \lambda_{c}\textbf{I}_{v}+\textbf{W}_{c})\textbf{V}). \tag{6}\] here \(\textbf{W}_{q}\), \(\textbf{W}_{k}\), \(\textbf{W}_{v}\) are linear projection matrices, and **Q**, **K**, **V** represent the query, key, and value matrices. \(D_{k}\) denotes the channel quantity of the token embeddings, and \(\textbf{I}_{v}\) is an identity matrix with the same rank as **V**. \(\textbf{W}_{c}\) is a diagonal matrix whose main diagonal elements are the clustering-based weights \(w_{i}\). Finally, \(\lambda_{c}\) is a scaling factor, which we set to \(0.5\). Thus, our CWT is formulated as follows: \[\textbf{z}_{p}=\text{CMSA}(\textbf{x}_{p})+\textbf{x}_{p},\ \textbf{x}_{p}= \text{DS}(\textbf{m}_{p}), \tag{7}\] \[\textbf{z}^{\prime}_{p}=\text{MLP}(\textbf{z}_{p})+\textbf{z}_{p}, \tag{8}\] \[\textbf{m}_{\text{out}}=\text{US}(\textbf{z}^{\prime}_{p})+\textbf{m}_{p}. \tag{9}\] where the CWT consists of a down-sampling layer (DS), a CMSA module, a multi-layer perceptron (MLP), and an up-sampling layer (US). The DS is implemented using average pooling. The MLP comprises a GELU [38] activation layer and two linear layers. The US is implemented using depthwise separable transposed convolution, and the output sequence \(\textbf{m}_{\text{out}}\) is reshaped into a clustering-based weighted feature map \(\textbf{m}_{c}\in\textbf{R}^{C\times H\times W}\). ### _OptLAD of the ClusVPR_ Considering that the high dimensionality of global descriptors obtained by the standard NetVLAD is unsuitable for large-scale VPR tasks, a PCA layer with whitening [39] is adopted to reduce the dimensionality to a suitable value \(N^{\prime}\). However, this approach requires a large number of parameters. For example, for aggregating a feature map with \(1024\) channels through \(128\) clusters, using a PCA layer with a \(4096\)-dimension output vector would require about \(537\)M parameters. This makes it difficult to deploy the model on resource-constrained mobile devices. To address this issue, we reduce the dimension of global descriptors, thereby reducing the number of parameters required by the PCA layer. This is achieved by splitting the input vector into uniform clusters of relatively low-dimensional vectors prior to executing the VLAD operation. Furthermore, a group weight \(\beta_{g}\) calculated by GeM [27] is introduced to improve the non-linearity ability of the OptLAD. As Fig 3 illustrates the details, the input feature map \(\textbf{m}_{c}\in\textbf{R}^{C\times H\times W}\) has \(D\)\(C\)-dimensional pixel-level descriptors (\(D=HW\)). The pixel-level descriptors are first expanded by \(\lambda\) via a \(1\times 1\) convolution. Then, the input descriptors \(\{\hat{x}_{i}\},i=1,\cdots,D\) are split into G uniform groups. The low-dimensional descriptors are shown by \(\{\hat{x}_{gi}\},g=1,\cdots,G\). Given a set of cluster centers for each group \(\{c_{gk}\},k=1,\cdots,K\), the output of the global descriptor \(V\) is represented as a \(\frac{\partial CK}{G}\)-dimension vector. To better understand the output of \(V\), we can also express it as a \(\frac{\partial C}{G}\times K\) matrix. Therefore, the element (j, k) of \(V\) can be computed by \[V(j,k,g)=\sum_{i=1}^{D}\alpha_{k}(\hat{x}_{gi})(\hat{x}_{gi}(j)-c_{gk}(j)), \tag{10}\] \[V(j,k)=\sum_{g=1}^{G}\beta_{g}(\hat{x})V(j,k,g), \tag{11}\] where \(\hat{x}_{gi}(j)\) is the \(j\)-th dimension of the low-dimensional descriptor \(\hat{x}_{gi}\). \(c_{gk}(j)\) is the \(j\)-th dimension of the cluster center \(c_{gk}\). \(\alpha_{k}\) is used to find the proximity between the lower-dimensional descriptor \(\hat{x}_{gi}\) and the cluster center \(c_{gk}\). The \(\beta_{g}\) is used to merge the output of each group. \[\alpha_{k}(\hat{x}_{gi})=\frac{\text{exp}(w_{gk}^{T}\hat{x}_{gi}+b_{gk})}{\sum _{k^{\prime}=1}^{K}\text{exp}(w_{gk^{\prime}}^{T}\hat{x}_{gi}+b_{gk^{\prime}})}, \tag{12}\] \[\beta_{g}(\hat{x})=\sigma\left(\frac{1}{D}\sum_{i=1}^{D}\left(\frac{1}{D^{ \prime}}\sum_{j=1}^{D^{\prime}}\hat{x}_{gi}(j)^{p_{i}}\right)^{\frac{1}{p_{i} }}\right). \tag{13}\] where \(D^{\prime}=\frac{\lambda C}{G}\) is the dimension of \(\hat{x}_{gi}\), and \(\sigma(\cdot)\) indicates the sigmoid function. The first weight \(\alpha_{k}(\hat{x}_{gi})\), represents the distance between each pixel-level descriptor \(\hat{x}_{gi}\) and the cluster center \(c_{gk}\), while the second weight \(\beta_{g}(\hat{x})\), is a scaling coefficient across groups. The matrix \(V\) is obtained by Intra-Normalization [40] and then re-converted into a \(\frac{\lambda CK}{G}\)-dimensional vector, which is L2-normalized to obtain a compact global descriptor. Finally, we apply a PCA layer to reduce the dimension of the global descriptor to a proper value \(N^{\prime}\). Compared to the standard NetVLAD, the OptLAD with PCA requires approximately \(\frac{\lambda}{G}\) times fewer parameters. Fig. 3: The OptLAD structure. The primary parameters for each layer are indicated in brackets. ### _Pyramid self-supervised strategy_ The VPR datasets [13, 41] lack fine-grained annotations and only provide GPS locations. Recent works [18, 11] have adopted a simplified evaluation protocol, considering only the top-ranked positive image as the ground truth. During training, these models use the standard triplet loss [19], which encourages the feature representation of a query image to be closer to its top-ranked positive image. However, due to the limited information in top-ranked positive images, this approach lacks robustness. Instead, we use high-ranked positives to gather comprehensive information from scale-wise image patches. In particular, for a query image \(q\) and a high-ranked positive image \(p^{l}\), the feature maps \(\{\mathbf{m}_{q}^{\theta_{0}},\mathbf{m}_{p^{l}}^{\theta_{0}}\}\) are extracted from the CNN backbone, where \(\theta_{0}\) represents the network's initial parameters. To obtain scale-wise patches, we carry out feature maps \(\{\mathbf{m}_{q}^{\theta_{0}},\mathbf{m}_{p^{l}}^{\theta_{0}}\}\) splitting and merging operations as evidently depicted in Fig 4. Then, we feed the resulting feature maps \(\{\mathbf{m}_{q_{1}}^{\theta_{0}},\cdots,\mathbf{m}_{q_{4}}^{\theta_{0}}, \mathbf{m}_{p^{l}}^{\theta_{0}},\cdots,\mathbf{m}_{p^{l}}^{\theta_{0}}\}\) separately to the CusVPR to generate local representations \(\{f_{q_{1}}^{\theta_{0}},\cdots,f_{q_{4}}^{\theta_{0}}\), \(f_{p^{l}}^{\theta_{0}},\cdots,f_{p^{l}}^{\theta_{0}}\}\). To compute similarity between \(q\) and \(p^{l}\), we calculate pyramid similarity scores by \[\mathcal{S}_{\theta_{0}}(\tau_{0})=\text{softmax}([\langle f_{q_{1 }}^{\theta_{0}},f_{p_{1}^{l}}^{\theta_{0}}\rangle/\tau_{0},\cdots,\langle f_{q_ {1}}^{\theta_{0}},f_{p_{2}^{l}}^{\theta_{0}}\rangle/\tau_{0}, \tag{14}\] \[\cdots,\langle f_{q_{4}}^{\theta_{0}},f_{p_{1}^{l}}^{\theta_{0}} \rangle/\tau_{0},\cdots,\langle f_{q_{4}}^{\theta_{0}},f_{p_{2}^{l}}^{\theta_{ 0}}\rangle/\tau_{0}]),\] where \(\tau_{0}\) is a hyperparameter that influences the level of smoothness in the pyramid similarity scores \(\mathcal{S}_{\theta_{0}}\), and \(\langle\cdot,\cdot\rangle\) represents the inner product operation. We enhance pyramid similarity scores through \(\omega\) generations, as illustrated in Fig 4. The \((v-1)\)-th generation's pyramid similarity \(\mathcal{S}_{\theta v-1}\) supervises the \(\upsilon\)-th network using Kullback-Leibler (KL) divergence. The pyramid similarity loss for a high-ranked positive image \(p^{l}\) is written as: \[\mathcal{L}_{s}^{\theta_{v}}(q,p^{l})=\ell_{kl}(\mathcal{S}_{\theta_{v}}(1), \mathcal{S}_{\theta_{v-1}}(\tau_{v-1})). \tag{15}\] where \(\theta_{v},\upsilon=1,\cdots\) and \(\omega\) are the parameters associated with the \(\upsilon\)-th generation network. \(\ell_{kl}\) represents the KL divergence, which can be written as \(\ell_{kl}(y,\hat{y})=\sum_{i}\hat{y}(i)\log(\hat{y}(i)/y(i))\). The target similarity vector \(\mathcal{S}_{\theta_{v-1}}\) is generated using the softmax, with a temperature coefficient \(\tau_{v-1}\) to control its smoothness. A larger temperature coefficient leads to a more uniformly distributed similarity vector, meaning that the pyramid similarity loss focuses on more matching pairs. So, in the early generations, \(\tau_{v-1}\) is set to a larger value. Our training is supervised using a triplet loss, similar to [11, 34]. Specifically, each triplet consists of a query image \(q\), a corresponding top-ranked positive image \(p^{*}\), and a set of high-ranked negative images (\(n_{i}|_{i=1}^{M}\)), where the rank of positive or negative images is determined by the Euclidean distance between their global representations. However, the standard triplet loss [19] lacks robustness and relies on the proper selection of negative images. Hence, a softmax-based triplet loss is used to notably increase the distinction among the positive pair and a set of negative pairs, as given below, \[\mathcal{L}_{t}^{\theta}(q,p^{*},n)=\sum_{i=1}^{M}-\log\frac{\exp(f_{q}^{ \theta},f_{p^{*}}^{\theta})}{\exp(f_{q}^{\theta},f_{p^{*}}^{\theta})+\exp(f_{q} ^{\theta},f_{n_{i}}^{\theta})}, \tag{16}\] where \(\theta\) is the network's parameter, and \(\epsilon\) is the margin between positive and negative pairs. The \(p^{*}\) is chosen as the top-ranked images of the gallery that located within 10 meters from the \(q\), while \(\{n_{i}|_{i=1}^{M}\}\) represents a set of randomly chosen gallery images from the top 500, located at a distance of 25 meters away from \(q\). Therefore, our model is supervised by the pyramid similarity loss in conjunction with the softmax-based triplet loss. Given \(K\) high-ranked positive images \(\{p^{k}|_{k=1}^{K}\}\), the total loss can be computed by \[\mathcal{L}_{total}^{\theta_{v}}=\mathcal{L}_{t}^{\theta_{v}}(q,p^{*},n)+ \lambda_{s}\sum_{k=1}^{K}\mathcal{L}_{s}^{\theta_{v}}(q,p^{k}). \tag{17}\] where \(\lambda_{s}\) is the loss scaling coefficient. ## IV Experiments This section evaluates the efficacy of our model on both VPR and image retrieval datasets compared to several state-of-the-art models [18, 11, 12, 13, 14, 9, 34]. Specifically, we show the details of datasets, implementations, and quantitative/qualitative results. ### _Datasets_ Our model is evaluated on four VPR datasets - the MSLS [42], Pitts30k/250k-test [43], Tokyo 24/7 [41], and TokyoTMval [13]. These datasets include GPS labels, and images have varying appearances and perspectives. Our model's performance is evaluated based on the recommended train-test settings of the datasets. To further demonstrate the generalization ability, we also assessed our model on three image retrieval datasets - the Paris 6K [44], Oxford 5K [45] and the Holidays [46]. Fig. 4: The proposed pyramid self-supervised strategy and how scale-wise patches are split and merged, where the green borders indicate high similarity with the query patch while the red borders denote low similarity. The pyramid similarity scores are progressively updated during training. ### _Implementation details_ #### Iv-B1 Model settings The base ClusVPR contains four CWTNets for computing clustering-based weighted feature maps. For each CWTNet, the patch size on the feature map is set as \(2\times 2\), the down-sampling rate \(P\) is \(2\), and the number of nearest neighbors \(k_{n}\) is set as \(10\). For the OptLAD, the number of cluster centers \(K\) is \(64\), and we set \(\lambda=2,G=8\). The dimension \(N^{\prime}\) of the global representation is \(4096\). For a fair comparison, we use the VGG-16 backbone for feature extraction, similar to the other works. #### Iv-B2 Training settings In our experiments, we train the model on Pitts30k-train and test it on Pitts30/250k-test, Tokyo 24/7-test, and TokyoTM-val, following standard VPR procedures. Additionally, we train the model on MSLS training set, and test it on MSLS validation and Challenge sets. The training consists of five generations, each comprising eight epochs. We optimize the loss function using stochastic gradient descent (SGD) with a learning rate of 0.0001, weight decay of 0.001, and momentum of 0.9. We conducted a grid search to find the optimal hyperparameters. The loss scaling coefficient \(\lambda_{s}\) is set to 0.55, and the temperature coefficient \(\tau_{v}\) is set to 0.06. ### _Comparison with the state-of-the-arts_ ClusVPR is compared with seven models- NetVLAD [13], SARE [18], SFRS [9], HAF [34], Patch-NetVLAD [11], GPM [14], and GCL [12] with VGG-NetVLAD. We ensure a fair comparison by keeping the training set, cluster quantity, and dimension of global representations consistent with those used in other models. #### Iv-C1 VPR benchmarks Table I represents the results of our ClusVPR model in comparison with other baselines on the VPR benchmarks. The table reports precision-recall and model complexity metrics. The number of parameters (Params) includes parameters of the entire model. The FLOPs are computed based on an input image size (640, 480). The accuracy is measured using precision-recall. The top-\(k\) recall, on the other hand, represents the percentage of query images that are correctly retrieved from the top-\(k\) ranked gallery images. The results presented in Table I demonstrate that our model outperforms other baselines on most benchmarks. Notably, our model surpasses the GPM, improving rank-1 recall by 2.4%, 0.9%, 1.6%, 1.2% 2.1% and 0.5%, respectively. Moreover, our model generates only around one-third of the parameters produced by other models, indicating its efficient use of parameters. #### Iv-C2 Image retrieval benchmarks Table II presents the mean Average Precision (mAP) results of our model compared to other state-of-the-art models on image retrieval benchmarks. In this experiment, all models are trained on Pitts30K and tested without any fine-tuning. For the Oxford 5K and Paris 6K, the models are evaluated on both full and cropped query images. The "Full" setting indicates that the entire image is taken as a query, while for "Crop", only the landmark region is used. For the Holidays dataset, the query images are used directly. The results demonstrate that ClusVPR is effective in improving the performance of image retrieval. In particular, our model surpasses the performance of other models on all datasets, showing improvements of 1.1%, 1.2%, 1.5%, 0.9% and 1.3% respectively, compared to GCL. ### _Qualitative Evaluation_ The qualitative results of NetVLAD, GPM and our model on challenging cases with complex environments and varying lighting conditions can be seen in Fig 5. It's important to note that we used the feature maps before the VLAD operation for all test models and followed the approach detailed in [47] to produce the attention maps. In the first four cases, our model attends to discriminative regions such as signs and buildings, while the other two models wrongly focus on dynamic objects or obstacles such as pedestrians, lights, cars, and trees. In particular, in the third and fourth samples, where there are lighting issues and occlusions, our model successfully conducts matching based on discriminative landmarks while disregarding dynamic objects or barriers. However, the other two models seem to be focusing more on lights and cars, which are dynamic objects and obstacles that can appear or disappear in any gallery image. This could result in inaccurate retrieval results due to the distortions caused by such objects. In the last two challenging cases of Fig 5, where the query image contains complex scenes and lighting conditions, all models retrieved the wrong top-1 gallery images. However, our model still focuses on discriminative regions and retrieves a gallery image with a similar building structure. To demonstrate the efficacy of the ClusVPR in addressing the unequal importance of image regions in VPR and improving global context inference, we conducted a comparison between CNN-based feature maps (the outputs of the CNN backbone) and clustering-based weighted feature maps (the outputs of the CWTNet). We selected two daytime and two nighttime query images from the Pitts30k-test and Tokyo 24/7 and show the results in Fig 6. The comparison demonstrates that the CWTNet of ClusVPR eliminates redundancy from duplicate regions and enhances discriminative information from small objects. ### _Ablation experiments_ Several ablation experiments are conducted to evaluate the effectiveness of our ClusVPR and pyramid self-supervised strategy (PSS). The experiments aimed to evaluate the effectiveness of different components in our model and compare their performance. The results of ablation experiments on the Pitts250k and Tokyo 24/7 are presented in Table III. It is important to note that V-ClusVPR in our experiments refers to the ClusVPR with the standard NetVLAD instead of OptLAD. S-Triplet denotes softmax-based triplet loss, whereas V-Triplet denotes standard triplet loss. The following observations can be drawn from the experiments. **Softmax-based triplet loss** shows superior efficacy compared to the standard triplet loss and can accelerate the convergence. Specifically, on the Pitts250k, the models with S-Triplet outperform the models with V-Triplet, with improvements on rank-1 recall of 2.6%, 1.5%, and 1.3% respectively. **ClusVPR** successfully addresses the issue of unequal importance of image regions in VPR tasks and enhances the global reasoning ability of the model to encode geometric configurations between images. The results on the challenging Tokyo 24/7 show that the models with ClusVPR achieve significantly better rank-1 recall compared to models with NetVLAD, with improvements of 4.8%, 4.2%, and 2.4% respectively, reaching 80.1%, 84.6%, and 87.5%. **OptLAD** improves the precision-recall performance of our model compared to those with NetVLAD, while having a significantly lower parameter count of around one-fourth. For example, the models with OptLAD outperform those with Fig. 5: Visualization results of challenging cases. The attention maps overlaid on the images highlight the regions of interest identified by the models. Green and red borders around the top-1 retrieved images indicate successful or unsuccessful retrieval results, respectively. NetVLAD on the MSLS val, with improvements on rank-1 recall of 1.1%, 0.8%, and 1.1% respectively. **Pyramid self-supervised strategy** has been shown to effectively improve performance by exploiting scale-wise image information. On the Tokyo 24/7, the models with PSS outperform those with S-Triplet, resulting in an improvement on rank-1 recall of 4.7%, 3.0%, and 2.9%, respectively. The impact of different down-sampling methods in the CWTNet is evaluated through an ablation experiment. Specifically, We compared the performance of our model with three different down-sampling methods (Average Pooling, Max Pooling, and Center Pooling), as shown in Table IV. The results indicate that adopting the average pooling achieved the best precision on both the Pitts250k and Tokyo 24/7, with a marginal improvement of 0.6% and 0.8% on rank-1 recall compared to the second-best method, respectively. This implies that the average pooling is more effective in preserving the information from different regions of the image. ### _Keypoints detection and matching_ To further evaluate the effectiveness of our model, we conducted an experiment using the DFM [48] pipeline by replacing the VGG backbone with our pre-trained model. Since we only want to evaluate the performance of our pre Fig. 6: The visualization of CNN-based feature maps (CNN FMs) and clustering-based weighted feature maps (CWFMs) generated from the query images and their top-1 retrieved gallery images. trained backbone in the DFM pipeline, we did not fine-tune it again. Instead, we simply replaced the feature extraction part with our pre-trained backbone and evaluated its performance on day, night, and rotated image pairs, as shown in Fig 7. We compared the results with those of three strong baseline models- DFM, GIFT [49], and Superpoint [50]. The results demonstrate that our model produces more accurate and dense matches than the other three methods, indicating high generalization performance. ### _Results with different CNN backbones_ To investigate whether using stronger backbones can improve the performance of our model on VPR datasets, we conducted additional experiments by employing VGG-19 [51] and ResNets-101/152 [5] as the backbones. We compare the results obtained with these backbones to our prior results on the VPR datasets. The precision-recall curves of the models with different CNN backbones are displayed in Fig 8. Table V shows the model complexity and detailed precision-recall of the models. The results indicate that models with stronger backbones have less than 0.5% rank-1 recall improvement compared to our ClusVPR with VGG-16. The advantages of utilizing stronger backbones are more evident in the Tokyo 24/7 dataset. For example, the model with ResNet-152 achieves 92.2% rank-5 recall, a 0.7% accuracy gain over ClusVPR with VGG-16, at the cost of 44.5M additional parameters. Consequently, after carefully evaluating the results of ClusVPR with different backbones, we conclude that using more robust and stronger backbones can yield marginal improvements in the model's performance, but at the cost of additional complexity. Fig. 8: Recalls at N-top retrievals for our ClusVPR with different CNN backbones. Fig. 7: Qualitative matching results of different models. The models are evaluated on day, night, and rotated image pairs. Green lines indicate correct matches. ## V Conclusions This paper proposed a novel ClusVPR model and CWTNet architecture that improves the global representation of images for VPR tasks, addressing the problem of the unequal importance of image regions. The CWTNet calculates weights for image tokens and encodes global dependencies, allowing our model to encode corresponding geometric configurations between images. The proposed OptLAD layer enhances the efficiency of the model and outperforms the standard NetVLAD. Additionally, a pyramid self-supervised strategy is introduced to extract more precise information from scale-wise image patches. The extensive experiments demonstrate the superior performance of our proposed model in terms of qualitative, quantitative, and efficiency evaluations.
2305.18606
Photonic Snake States in Two-Dimensional Frequency Combs
Taming the instabilities inherent to many nonlinear optical phenomena is of paramount importance for modern photonics. In particular, the so-called snake instability is universally known to severely distort localized wave stripes, leading to the occurrence of transient, short-lived dynamical states that eventually decay. The phenomenon is ubiquitous in nonlinear science, from river meandering to superfluids, and to date it remains apparently uncontrollable. However, here we show that optical snake instabilities can be harnessed by a process that leads to the formation of stationary and robust two-dimensional zigzag states. We find that such new type of nonlinear waves exists in the hyperbolic regime of cylindrical micro-resonators and it naturally corresponds to two-dimensional frequency combs featuring spectral heterogeneity and intrinsic synchronization. We uncover the conditions of the existence of such spatiotemporal photonic snakes and confirm their remarkable robustness against perturbations. Our findings represent a new paradigm for frequency comb generation, thus opening the door to a whole range of applications in communications, metrology, and spectroscopy.
Salim B. Ivars, Yaroslav V. Kartashov, Pedro Fernández de Córdoba, J. Alberto Conejero, Lluis Torner, Carles Milián
2023-05-29T20:48:11Z
http://arxiv.org/abs/2305.18606v1
# Photonic Snake States in Two-Dimensional Frequency Combs ###### Abstract **Taming the instabilities inherent to many nonlinear optical phenomena is of paramount importance for modern photonics. In particular, the so-called snake instability is universally known to severely distort localized wave stripes, leading to the occurrence of transient, short-lived dynamical states that eventually decay. The phenomenon is ubiquitous in nonlinear science, from river meandering to superfluids, and to date it remains apparently uncontrollable. However, here we show that optical snake instabilities can be harnessed by a process that leads to the formation of stationary and robust two-dimensional zigzag states. We find that such new type of nonlinear waves exists in the hyperbolic regime of cylindrical micro-resonators and it naturally corresponds to two-dimensional frequency combs featuring spectral heterogeneity and intrinsic synchronization. We uncover the conditions of the existence of such spatiotemporal photonic snakes and confirm their remarkable robustness against perturbations. Our findings represent a new paradigm for frequency comb generation, thus opening the door to a whole range of applications in communications, metrology, and spectroscopy.** Since their discovery almost half a century ago, the so-called flexural instabilities [1], which lead to the unarestable decay of quasi-one-dimensional self-sustained states such as solitons [2; 3], have been encountered in all areas of nonlinear sciences. Namely, they have been found to occur in the dynamics of river meandering [4], classical fluids [5], fermion [6; 7] and polariton [8] superfluids, Bose-Einstein condensates [9], chemistry [10], and, importantly, nonlinear photonics [11; 12; 13; 14; 15]. The initial stages of the instability \(-\) referred to as the _snake_ instability \([2]-\) evinces the reshaping of straight states into zigzags caused by spontaneous and local transverse drifts with alternating directions, which occur prior to the irreversible decay of the corresponding states. In contrast to the use of the instability in several systems, e.g., to trigger spontaneous vortex formation [11; 9; 12], its controlled arrest to form robust multi-dimensional snaking states remains hitherto unexpected. In parallel, in recent years, microring cavities have emerged as an outstanding platform for exploring the fundamental properties and the applications of stabilised one-dimensional dissipative nonlinear waves, in both the normal and the anomalous dispersion regimes, under a plethora of potentially unfavourable situations, such as strong dispersive [16; 17], inelastic [18; 19; 20], and thermal [21; 22] effects. Each and every new solitonic states found in such micron-sized systems correspond to highly stable frequency combs that contribute to the developing field of comb-related applications [23; 24]. As a consequence, microring-like cavities are highly appealing systems to push the frontiers of knowledge of nonlinear phenomena as well as boosting applications in photonics. Nowadays, most solitonic states (stable combs) that have been experimentally observed in microrings occur in essentially single-comb forming devices, except for the few exceptions that may be found in counter-propagating soliton experiments [25; 26], in systems with two microrings [27; 28], or in cavities with a few transverse modes [29], where each of the individual combs are formally one-dimensional. The potential of two-dimensional frequency combs, which to date is essentially unexplored experimentally, remains to be uncovered. Here we discover the existence of robust spatiotemporal photonic snakes in cylindrical micro-resonators with normal group velocity dispersion (GVD) and in the presence of diffraction along the cylinder's axis. Such previously unknown states are a continuous two-dimensional ensemble of heterogeneous combs which are inherently synchronised by the nonlinearity. Thus, they represent a whole new paradigm for frequency comb formation. **Results** **Model.** The spatiotemporal snakes predicted here form in hollow cylindrical Kerr micro-resonators [cf. Fig. 1a]. The dynamics of the intra-cavity field can be accurately described by the two-dimensional generalisation of the Lugiato-Lefever equation [30], \[\partial_{t}\psi=\frac{i}{2}(-\partial_{x}^{2}+\partial_{z}^{2})\psi-(1+i \delta)\psi+i|\psi|^{2}\psi+ih_{0}e^{-z^{2}/\sigma_{z}^{2}}, \tag{1}\] where the terms on the right-hand side of the equation account for normal GVD along the propagation (and periodic) coordinate, diffraction along the cylinder's axis, losses, laser-cavity detuning, focusing nonlinearity, and the axially localised pump, respectively (see methods). For the sake of generality, we discuss photonic snakes first in the physical frame of Eq.1 [cf. Figs. 1-5], which represents a universal hyperbolic model governing a plethora of nonlinear physical phenomena in optics [31; 32] and other areas of physics [33]. In order to anchor our predictions into realistic optical frequency comb forming experimental realizations, we report on the snakes' robustness under the specific collection of perturbations expected in cylindrical micro-cavities [cf. Fig. 6], including Raman and thermal nonlinearities, among others (see methods). **Nonlinear resonances.** When driven into the strongly nonlinear regime, the here considered micro-cylinders develop nonlinear resonances, associated to the single colour locked states, \(\psi_{0}\) (\(\partial_{t}\psi_{0}=\partial_{x}\psi_{0}=0\)), exhibiting an unusually rich multi-stability landscape, as shown in Figs.1b-g. For peak intensities \(|\psi_{0}|\gtrsim 1\), corresponding to the existence domain of bright solitons with anoma Figure 1: **Cylindrical microresonator and multi-stable resonances.****a**, sketch of the driven (from the left) micro-cylinder in the photonic snake regime. **b-g**, maximum of the cavity background (single color) field amplitude, \(|\psi_{0}(x,z)|\), versus laser-cavity detuning, \(\delta\), showing selected nonlinear resonances for driving strengths **b-d**\(h_{0}=1.4\), **e-g**\(h_{0}=1.7\), and various values of the transverse pump width: **b,e**\(\sigma_{z}=4\); **c,f**\(\sigma_{z}=8\); **d,g**\(\sigma_{z}=12\). The cavity background may be smooth [inset (\(i\))] or may feature single- and multi-stripe quasi one-dimensional solitons distributed along \(z\) [insets \((ii)-(viii)\)]. The number of stripes on each portion of the resonances is colour coded in the legend of **d**. The resonance corresponding to the _one-dimensional_ microring is also shown, for reference, by the dashed curves labeled as \({}^{1}\!D^{\prime}\) in **b**, **e**. Thick black dots on the resonances in **b,c,e** mark the onsets of instabilities and correspond to bifurcation points for spatiotemporal states, including snakes [see Fig.2]. Labels \(SN_{1,2}\) in **e** mark the saddle node bifurcations (branch turning points) further discussed in Fig.3. Rectangles in **f** enclose the regions zoomed in Figs.4**b** and 4**e**. Background states are stable for \(|\psi_{0}|<1\) and may be unstable otherwise, within the grey shaded areas. Insets showing spatial profiles are plotted over the area \(x\in[-L/2,L/2[\) (covering the whole cavity length, \(L=16\)) and \(z\in[-16,16]\). lous GVD [34] (formally analogous to diffraction along \(z\)), the background states become single and multi-stripe solitons [cf. insets (_ii_)-(_viii_)] distributed on top of the smooth background [cf. inset (\(i\))], recalling the spatial solitons reported in pioneering works [35]. The presence of stripes impinges an intricate morphology of the resonances strongly deviating from that of the tri-valued (bistable) one-dimensional counterparts (marked for reference by the light-grey curves in Figs.1b-g). Resonances associated to wider or more intense pumps (i.e., greater \(\sigma_{z}\) and \(h_{0}\), respectively) contain states hosting more stripes and reshape to exhibit splitting (Figs. 1c,e), nesting (Figs. 1d,f,g), and closed loops (Fig. 1f). As discussed below, the properties of resonances are intimately linked to the existence of snake states. **Photonic snakes.** The soliton stripes described above are prone to snake instabilities. When these develop, stripes distort and acquire in our dissipative system a periodic zigzag snaking profile, becoming inhomogeneous along \(x\) and, hence, polychromatic. The central result of this work is that the transverse drifts induced by the snake instability are fully arrested after they exerted certain distortion to stripes so that perfectly stationary snakes form [see Figs. 2(\(i\))-(\(ix\)) for typical profiles]. Figures 2a, b present the branches [\(\max(|\psi(x,z)|)\) vs detuning] corresponding to the complete set of snakes existing with narrow pump, \(\sigma_{z}=4\), for different driving strengths, \(h_{0}\) (see caption). Snakes featuring from 1 to 5 periods in the microcavity circumference exist and those with 2 to 4 periods are stable for some detuning intervals. We denote different branches of snakes featuring \(N\) periods on the microresonator circumference as \(S_{N}\). Figures 2c,d show that snakes feature pronounced zigzagging angles, \(\alpha\) [cf. inset (\(iv\))], over most of their existence regions, so that they have a strong trend to form narrow pulses along \(x\). We emphasize that the robust snakes Figure 2: **Photonic snake families bifurcating from the single stripe background.****a,b**, amplitude vs detuning branches for all photonic snakes supported by the microcylinder considered in Fig.1 with \(\sigma_{z}=4\) for **a**\(h_{0}=1.4\) and **b**\(h_{0}=1.7\). Thick (thin) traces denote stable (unstable) states. The nonlinear resonances (black curves) are those shown in Figs.1b and 1e, respectively. Photonic snakes exhibiting \(N\) periods around the cavity circumference are denoted by \(S_{N}\), and selected profiles, at the positions of the squares in **b**, are shown in insets (\(i\)) -(_viii_). Panels (_ii_), (_iii_) show real, imaginary parts of the snake \(S_{1}\). Inset in **b** is a zoom over the region where the \(S_{1}\) branch approaches the \(S_{5}\) branch, and as a consequence, the corresponding snakes [inset (\(ix\))] present a profile mixing both periodicities: 5 small zigzag periods within a 1 period envelope. Other branches exist (not shown), associated to \(N\)-dark soliton states, denoted by \(D_{N}\), with typical profiles as shown in (\(x\)) -(_xii_). **c,d** show the snake’s zigzagging angle [cf. inset (\(iv\))] as a function of detuning, \(\alpha(\delta)\), corresponding to all branches in **a,b** (branches are colour-matched). **e** shows the existence (light area) and stability (black area) domains in the \(\{\delta,\sigma_{z}\}\) plane for the snake family \(S_{2}\) with \(h_{0}=1.7\). All insets are plotted over the area \(x\in[-L/2,L/2]\) and \(z\in[-8,8]\). exist over a finite and generous region of the parameter space, as shown in Figs. 2a-d. To further illustrate this, Fig. 2e shows the existence domain on the \(\{\delta,\sigma_{z}\}\) plane for the \(S_{2}\) family, which is found to be stable within the black area, unstable within the light shaded area, and non-existent otherwise (white area). In this work, optimal conditions for the snakes stability are encountered only for strong pump localisation along \(z\). Outside the stability domain, snakes are exposed to oscillatory and exponential instabilities leading, respectively, to breathing and decay. The precise combination of the driving amplitude and aspect ratio of the pump field, \(\sigma_{z}/L\), (lying in the range \([\frac{1}{8},\frac{1}{3}]\) in Fig.2e) is essential for the formation of robust snakes. Photonic snake families emerge supercritically from the top branch of the cavity resonance, each with a different detuning threshold (marked with black dots in Fig.2), corresponding to the detuning values at which stripes become unstable due to the growth of snake type perturbations or _internal modes_[36] (see methods for the stability analysis description). Figure 3 shows all relevant growth rates for the three branches (top, middle, and bottom) of the right-most cavity resonance shown in Fig. 1e. Snake-type perturbations (\(\chi_{S}\), see top insets in Fig. 3) with different periods acquire positive growth rates on the top and middle branches of the resonance. The nullity of growth rates defines exactly the bifurcation points (black dots in Fig. 2b) or loci from which snakes emerge from stripes. We note that the symmetry properties of the stripes internal modes allows one to readily predict the symmetries to be inherited by the emerging snake family. Although our attention is focused on the snake instability, we point out that the middle branch of the resonance in Fig. 3 also presents _neck_ instabilities [1], which often compete with the snake type [37]and manifests upon the growth of the corresponding internal modes, denoted by \(\chi_{D_{1}}\)-\(\chi_{D_{3}}\) and shown in the right inset of Fig. 3. In this work, all of the spatiotemporal states found with traces of neck perturbations, namely, the dark solitons in Figs. 2(\(x\)-\(xi\)) and the hybrid snake-dark solitons in Fig. 4j,l, were highly unstable and thus their properties are not discussed in details. The photonic snake families become much richer when snake instabilities develop on the multi-stripe states present on cavity resonances with large pump beam widths, \(\sigma_{z}\gtrsim 8\) (cf. Fig. 1), what leads to the formation of coupled multi-snake states. Figure 4a shows an example of robust two-snake state with three periods along the microcylinder's circumference, denoted by \(S_{3}^{(2)}\). This double snake forms when the snake-type mode \(\chi_{S_{3}^{(2)}}\) (Fig. 4b bottom-right inset) grows on top of the two-stripe state (Fig. 4b top-right inset). The full amplitude vs detuning branch of the \(S_{3}^{(2)}\) snake is shown in Fig. 4b by the Figure 3: **Onset of snakes.** Growth rates vs detuning of the internal modes associated to the background states defining the cavity resonance in Fig. 1e [\(\sigma_{z}=4\), \(h_{0}=1.7\)]. This figure is a parametric plot along the resonance, i.e., the abscissa corresponds to the values of detuning along the branches of the resonance, starting at \(\delta=1.4\). The left panel (\(\delta\in[1.4,SN_{1}\lesssim 3.5\)) corresponds to the top branch, the central panel (\(\delta\in[SN_{1},SN_{2}]\)) corresponds to the middle branch, and the right panel (\(\delta\in[SN_{2}\gtrsim 2.2,2.5]\)) corresponds to the lower and stable branch. Labels \(\chi_{\psi}\) denote the internal modes the growth of which leads to the state \(\psi\) [\(\psi\) are either snakes \(S_{N}\) (black) or dark solitons \(D_{N}\) (red)]. Insets at the top show the real part of typical snake (left) and neck (right) modes. Snake modes with \(N=0\) induce, when excited, a global drift along \(z\) while neck modes with \(N=0\) correspond to the _universally_ unstable mode that does the whole middle branch with the detrimental zero-wavelength instability. The zero growth rate points correspond to the bifurcation points shown in Figs. 1 and 2 from where the corresponding states (shown in Fig. 2) emerge. Insets are plotted over the area \(x\in[-L/2,L/2]\) and \(z\in[-8,8]\). black curve. Other branches correspond to the cavity resonance (cf. Fig. 1f). Existence and stability of the \(S_{3}^{(2)}\) family with \(h_{0}=1.7\) is shown vs pump width and detuning over the relevant parameter space in Fig. 4c. Figures 4d-f show analogous results for a triple-snake. The above two snake families correspond to particular cases where their branches connect background states with different number of stripes: the \(S_{3}^{(2)}\) [\(S_{3}^{(3)}\)] family bifurcates at lower \(\delta\) from the B-point located on a 2 (3) stripe state and merges with 4-stripe state at a higher \(\delta\) (red dots). Even though we only found stable _in-phase_ multi-snakes, the system supports a vast collection of states with different morphology. Some of these are illustrated in Fig. 4h and Figs. 4j,k,l which appear after the _antiphase_ snake modes in Figs. 4g and 4i grow, respectively, on two- and three-stripe states. This results in the formation of anti-phase snakes that appear alone (Fig. 4h), mixed with dark solitons (Fig. 4j), together with ellipsoids (Fig. 4k), or displaying asymmetries along \(z\) (Fig. 4l). **Heterogeneous 2D combs.** The most remarkable feature of photonic snake states is that their spectrum is heterogeneous along \(z\), while being inherently synchronized by the nonlinearity. Figure 5 shows a soliton stripe (Fig. 5a) and snakes of different tilts and periods (Figs. 5b-d), together with their frequency comb distribution along \(z\) (Figs. 5e-h). Heterogeneity is evident in Figs. 5g,h, showing Fourier spectra along the \(z\)-axis, and it is further illustrated in insets \(I-IV\) by showing different combs extracted at different axial positions (marked by horizontal white lines in Figs. 5c,g). Synchronisation and heterogeneity of these spectra is readily important for metrology and spectroscopy [38]. The spreading of the combs along the cylinder's axis, \(z\), naturally introduces the notion of heterogeneous two-dimensional comb, which constitutes a generalisation of the widely reported one-dimensional comb and a central result of this work. Figures 5i-l show the snakes in Figs. 5a-d, respectively, in the two-dimensional momentum space, illustrating the angular spread of the different spectral components, potentially important for an efficient collection of the combs by external tapers or waveguides. Because dispersion in cylinders is typically much smaller in \(x\) than in \(z\)[39; 40], Figure 4: **Coupled photonic snakes.****a,** intensity of a 2-snake with 3 periods [denoted by \(S_{3}^{(2)}\)] for \(h_{0}=1.7\) and \(\sigma_{z}=8\). **b,** branches of its existence and stability shown by the solid and thick black curves, respectively. Branches with other colors correspond to the resonances in Fig. 1f. A square marker is around the location where the profile in **a** is taken (\(\delta=1.72\)). The stable branch emerges from the bifurcation point labeled with B (left most thick red dot). Inset: (top) 2-stripe state, (bottom) perturbation impingin the flexural instability that transforms the 2-stripe state into the \(S_{3}^{(2)}\) in **a**. White arrows indicate the direction of the local drift [arrows in inset and in **a** are in exact correspondence]. **c,** Existence and stability chart in the \(\{\delta,\sigma_{z}\}\)-plane for \(h_{0}=1.7\), built by analysing the stability of the stable branch in **b** and varying pump-width. Horizontal dashed line marks \(\sigma_{z}=8\), corresponding to **b.****d-f** are analogous to **a-e** for a triple snake. Other snaking states, albeit unstable: **g,i** show the internal modes of the (2,3)-stripe states leading to states in **h,j**. Additional bifurcations reshape states of the type **j** into **k** and **l**. All \((x,z)\) panels are plotted over the area \(x\in[-L/2,L/2[\) and \(z\in[-10,10]\). the physical values corresponding to \(k_{z}\) are in practice much smaller than those corresponding to \(k_{x}\), so that the propagation angles \(\theta=\arctan(k_{z}/k_{x})\) remain in the order of a few degrees, at most (see methods). On the same reason, the height of panels \(e-h\) is of the order of a few mm's, so that the comb heterogeneity occurs along \(z\) over larger scale than typical taper fiber widths, which enables the efficient light collection by, e.g., arrays of waveguides. **Robustness.** The photonic snakes and the two-dimensional combs exist in a physical setting that is readily realizable. In particular, consider a hollow micro-cylinder made of silica glass, with a radius \(R=100\)\(\mu\)m and wall thickness \(w=0.75\)\(\mu\)m, pumped at \(\lambda_{p}\approx 1.26\)\(\mu\)m, with a \(\mathcal{Q}\)-factor of \(\sim 5\times 10^{6}\), and take into account the expected specific effects introduced by chromatic dispersion, Raman and thermal nonlinearities, and the localised coupling, in \(x\) and \(z\), between the pump beam and the micro-cavity (see the full model in methods). Although each of the above _perturbations_ may be suppressed in various ways, we now show that robust snakes occur in passive driven cylindrical micro-resonators. Figure 6 presents an overview of our predictions in such scenarios, focusing the attention on the snake family \(S_{2}\) with \(h_{0}=1.4\) and \(\sigma_{z}=4\). Figure 6a shows the thermal shift of the nonlinear resonance [black] with respect to the unperturbed cold resonance [grey], as well as the \(S_{2}\) snake branch, which stable region is highlighted by the thick trace. An example of robust snake is shown in Fig.6b, along with its associated Raman vibration (Fig.6c) and thermal (Fig.6d) fields. The corresponding two-dimensional comb is shown around the central lines in Fig.6e. We emphasize that in the context of one-dimensional combs, solitons and related nonlinear waves have been reported under the thermal nonlinearity [41], both experimentally [21] and theoretically [42]. In one dimension, thermal effects mainly induce a global shift in the laser-cavity detuning which, even if they introduce their own instabilities [41; 42], cannot compromise soliton existence itself. However, in the two-dimensional case here considered, thermal effects are inhomogenous along \(z\) (cf. Fig.6d), and hence induce _thermal stress_ along \(z\). As a consequence, the mere existence of the two-dimensional snakes cannot be anticipated _a priori_, and neither can their stability. The latter is formally predicted via linear stability analysis and explicitly checked by long propagation runs (cf. Fig.6f) spanning over \(50,000\) cavity roundtrips (equivalent to \(155\) ns in our geometry). Propagation of robust snakes under the whole plethora of realistic effects feature constant peak amplitudes (as a consequence of the steady two-dimensional profiles) despite the input random noise and step-to-step perturbations. On the contrary, unstable snakes behave very differently. In particular, Fig.6g shows the time evolution of an oscillatory unstable snake, featuring peak amplitude and profile variations (see insets). The presence and absence of instabilities, and the nature of them in the former case, are accurately predicted by the stability analysis (see methods), which results are displayed in insets of Figs.6f,g. **Discussion.** Stable photonic snakes present a trend to bifurcate supercritically from the stable background upon the increase of detuning [cf. Figs. 2,4], even in the presence of _higher order effects_ [cf. Fig. 6], what strongly suggests that they will be easily and deterministicaly excitable in realistic experiments by the standard dynamical red-shift of the pump's frequency. This simple excitation mechanism Figure 5: **Two-dimensional heterogeneous combs.****a-d,** intensity, \(|\psi(x,z)|^{2}\), for several snake profiles with different tilt, \(\alpha\), periodicity, and detuning, \(\delta\) (see labels) at \(\sigma_{z}=4\) and \(h_{0}=1.7\). **e-h**, Fourier transform along the \(x\) direction of the fields in **a-d**, illustrating the two-dimensional comb structure vs \(z\) around the central wave-numbers, \(k_{x}\). The frequency combs exhibit a high degree of heterogeneity along the cylinder’s axis (\(z\)), as evident from insets (**l**)-(**l**v**), showing the broad one-dimensional combs at different locations: \(z=1.72\), \(1.25\), \(0.63\), \(0\), respectively. **i-l,** Two-dimensional Fourier transforms of the fields in **a-d** illustrating snakes in momentum space, which reveal the angular spread of the two-dimensional comb frequencies (see text). Figure axes: **a-d**, \(x\in[-L/2,L/2[\) and \(z\in[-5,5]\); **e-h**, \(k_{x}\in[-20,20]\) and \(z\in[-5,5]\); **i-l**, \(k_{x}\in[-20,20]\) and \(k_{z}\in[-20,20]\). Spectra in **e-l** are normalised to the strongest comb line excluding the pump at \(k_{x}=0\). of frequency combs in the normal GVD regime of passively driven cavities, naturally provided here by the intrinsically two-dimensional snakes, has remained hidden till date by the low dimensionality of microrings, where relatively complex excitation methods were typically required. Indeed, in one-dimensional microrings, solitonic combs in the normal GVD regime are often excited by engineering a small anomalous GVD region around the pump's frequency, so modulational instability triggers comb formation. This is achieved via mode coupling effects arising due to avoided crossings in multi-modal microrings [43; 44; 45; 46] or with the aid of an auxiliary ring [47; 48]. A recent work demonstrates a much simpler and straight-forward mechanism to excite (one-dimensional) dark solitons via self-injection-locking [49] and displaying the turnkey [50] operation. Photonic snake states may thus belong to the collection of simple and deterministically excitable combs in cavities with dominant normal GVD, which is of central importance for extending the formation of micro-cavity combs to the long and short wavelength regions, far away from the telecom band around 1.5 \(\mu\)m. In addition, the two-dimensional combs reported here automatically enable the possibility to host synchronised heterogeneous combs in a single device, features that are of great importance in metrology and spectroscopy, and may be difficult to combine in this specific manner with microrings. Indeed, comb heterogeneity may be achieved in the one-dimensional context, e.g., via the excitation of unbounded solitons copropagating along the same spatial channel [51]. However, their different group velocities unavoidably yield de-synchronisation. On the other hand, synchronisation was previously reported for identical combs in distant microrings [52]. Remarkably, both features were found simultaneously in the bi-modal stokes solitons [19], where heterogeneity appeared as a result of two copropagating and spectrally non-overlapping combs. In the present work, differently with the above, and akin to the two-dimensional geometry, the heterogeneity appears as a continuous comb reshaping along the cylinder's axis and with a fixed carrier frequency. We note that the above remarkable findings [19; 51; 52] were reported in the anomalous GVD region, where the physics is substantially different than that in the normal GVD regime, subject of this work. In summary, we have uncovered a fundamentally new mechanism to arrest and control the ubiquitous and otherwise strong snake instability in cylindrical Kerr microresonators and, as a result, the possibility to form complex robust spatiotemporal snake states. Specifically, we have found that photonic snakes naturally form perfectly synchronised heterogeneous comb ensembles in the normal GVD (hyperbolic) regime of cylindrical microcavities. The phenomenon represents a novel paradigm for the generation of optical frequency combs. ## Online content Methods, additional references, statements of data availability, acknowledgements, details of author contributions and competing interests are available in the online version of the paper. [MISSING_PAGE_POST] * [2] E. A. Kuznetsov, A. M. Rubenchik, V. E. Zakharov,"Soliton stability in plasmas and hydrodynamics," Phys. Rep. **142**, 103-65 (1986). * [3] Y. S. Kivshar and D. E. Pelinovsky, "Self-focusing and transverse instabilities of solitary waves," Phys. Rep. **331**, 117-195 (2000). * [4] J. A. Constantin, T. Dunne, J. Ahmed, C. Legleiter, and E. D. Lazarus, "Sediment supply as a driver of river meandering and floodplain evolution in the Amazon Basin," Nature Geoscience **7**, 899-903 (2014). * [5] E. D. Brown, S. B. Buchsbaum, R. E. Hall, J. P. Penhune, K. F. Schmitt, K. M. Watson, and D. C. Wyatt, "Observations of a nonlinear solitary wave packet in the Kelvin wake of a ship," J. Fluid Mech. **204**, 263-293 (1989). * [6] T. Yefsah, A. T. Sommer, M. J. Ku, L. W. Cheuk, W. Ji, W. S. Bakr, and M. W. Zwierlein, "Heavy solitons in a fermionic superfluid," Nature **499**, 426-430 (2013). * [7] A. Cetoli, J. Brand, R. G. Scott, F. Dalfovo, and L. P. Pitaevskii, "Snake instability of dark solitons in fermionic superfluids," Phys. Rev. A **88**, 043639 (2013). * [8] F. Claude, S. V. Koniakhin, A. Maitre, S. Pigeon, G. Lerario, D. D. Stupin, Q. Glorieux, E. Giacobino, D. Solnsykhov, G. Malpuech, and A. Bramati, "Taming the snake instabilities in a polariton superfluid," Optica **7**, 1660-1665 (2020). * [9] B. P. Anderson, P. C. Haljan, C. A. Regal, D. L. Feder, L. A. Collins, C. W. Clark, and E. A. Cornell, "Watching dark solitons decay into vortex rings in a Bose-Einstein condensate," Phys. Rev. Lett. **86**, 2926-2929 (2001). * [10] C. Luengyiriya, U. Storb, G. Lindner, S. C. Muller, M. Bar, and M. J. B. Hauser, "Scroll Wave Instabilities in an Excitable Chemical Medium," Phys. Rev. Lett. **100**, 148302 (2008). * [11] A.V. Mamaev, M. Saffman, and A. A. Zozulya, "Propagation of dark stripe beams in nonlinear media: snake instability and creation of optical vortices," Phys. Rev. Lett. 76, 2262-2265 (1996). * [12] V. Tikhonenko, J. Christou, B. Luther-Davies, and Y. S. Kivshar, "Observation of vortex solitons created by the instability of dark soliton stripes," Opt. Lett. **21**, 1129-1131 (1996). * [13] S.-P. Gorza, N. Roig, Ph. Emplit, and M. Haelterman, "Snake Instability of a Spatiotemporal Bright Soliton Stripe," Phys. Rev. Lett. **92**, 084101 (2004). * [14] S.-P. Gorza, Ph. Emplit, and M. Haelterman, "Observation of the snake instability of a spatially extended temporal bright soliton," Opt. Lett. **31**, 1280-1282 (2006). * [15] S.-P. Gorza, B. Deconinck, Ph. Emplit, T. Trogdon, and M. Haelterman, "Experimental Demonstration of the Oscillatory Snake Instability of the Bright Soliton of the \((2+1)D\) Hyperbolic Nonlinear Schrodinger Equation," Phys. Rev. Lett. **106**, 094101 (2011). * [16] V. Brasch, M. Geiselmann, T. Herr, G. Lihachev, M. H. P. Pfeiffer, M. L. Gorodetsky, and T. J. Kippenberg, "Photonic chip-based optical frequency comb using soliton Cherenkov radiation," Science **351**, 357-360 (2016). * [17] X. Yi, Q.-F. Yang, X. Zhang, K. Y. Yang, X. Li, and K. Vahala, "Single-mode dispersive waves and soliton microcomb dynamics," Nat. Commun. **8**, 1-9 (2017). * [18] M. Karpov, H. Guo, A. Kordts, V. Brasch, M. H. P. Pfeiffer, M. Zervas, M. Geiselmann, and T. J. Kippenberg, "Raman self-frequency shift of dissipative Kerr solitons in an optical microresonator," Phys. Rev. Lett. **116**, 103902 (2016). * [19] Q.-F. Yang, X. Yi, K. Y. Yang, and K. Vahala, "Stokes solitons in optical microcavities," Nat. Phys. **13**, 53-57 (2017). * [20] M. Yu, Y. Okawachi, R. Cheng, C. Wang, M. Zhang, A. L. Gaeta, and M. Loncar, "Raman lasing and soliton mode-locking in lithium niobate microresonators," Light: Science & Applications **9**, 1-7 (2020). * [21] J. R. Stone, T. C. Briles, T. E. Drake, D. T. Spencer, D. R. Carlson, S. A. Diddams, and S. B. Papp, "Thermal and nonlinear dissipative-soliton dynamics in Kerr-microresonator frequency combs," Phys. Rev. Lett. **121**, 063902 (2018). * [22] E. Obrzud, S. Lacomte, and T. Herr, "Temporal solitons in microresonators driven by optical pulses," Nat. Photon. **11**, 600-607 (2017). * [23] A. Pasquazi, M. Peccianti, L. Razzari, D. J. Moss, S. Coen, M. Erkintalo, Y. K. Chembo, T. Hansson, S. Wabnitz, P. Del'Haye, X. Xue, A. M. Weiner, and R. Morandotti, "Micro-combs: a novel generation of optical sources," Phys. Rep. **729**, 1-81 (2017). * [24] S. T. Cundiff and J. Ye, "Colloquium: Femtosecond optical frequency combs," Rev. Mod. Phys. **75**, 325-342 (2003). * [25] Q. F. Yang, X. Yi, K. Y. Yang, and K. Vahala, "Counter-propagating solitons in microresonators," Nat. Photon. **11**, 560-564 (2017). * [26] W. Weng, R. Bouchand, E. Lucas, and T. J. Kippenberg, "Polychromatic Cherenkov radiation induced group velocity symmetry breaking in counterpropagating dissipative Kerr solitons," Phys. Rev. Lett. **123**, 253902 (2019). * [27] A. Tikan, J. Riemensberger, K. Komagata, S. Honl, M. Churaev, C. Skehan, H. Guo, R. N. Wang, J. Liu, P. Seidler, and T. J. Kippenberg, "Emergent nonlinear phenomena in a driven dissipative photonic dimer," Nat. Phys. **17**, 604-610 (2021). * [28] J. K. Jang, X. Ji, C. Joshi, Y. Okawachi, M. Lipson, and A. L. Gaeta, "Observation of Arnold Tongues in Coupled Soliton Kerr Frequency Combs," Phys. Rev. Lett. **123**, 153901 (2019). * [29] E. Lucas, G. Lihachev, R. Bouchand, N. G. Pavlov, A. S. Raja, M. Karpov, M. L. Gorodetsky, and T. J. Kippenberg,"Spatial multiplexing of soliton microcombs," Nat. Photon. **12**, 699-705 (2018). * [30] L. A. Lugiato and R. Lefever, "Spatial dissipative structures in passive optical systems," Phys. Rev. Lett. **58**, 2209-2211 (1987). * [31] C. Conti, S. Trillo, P. Di Trapani, G. Valiulis, A. Piskarskas, O. Jedrkiewicz, and J. Trull, "Nonlinear electromagnetic X waves," Phys. Rev. Lett. **90**, 170406 (2003). * [32] P. Di Trapani, G. Valiulis, A. Piskarskas, O. Jedrkiewicz, J. Trull, C. Conti, and S. Trillo, "Spontaneously generated X-shaped light bullets," Phys. Rev. Lett. **91**, 093904 (2003). * [33] L. A. Cisneros-Ake, R. Carretero-Gonzalez, P. B. Kevrekidis, and B. A. Malomed, "Dynamics and stabilization of bright soliton stripes in the hyperbolic-dispersion nonlinear Schrodinger equation," Commun. Nonlinear Sci. Numer. Simul. **74**, 268-281 (2019). * [34] I. Barashenkov and Y. S. Smirnov, "Existence and stability chart for the ac-driven, damped nonlinear Schrodinger solitons," Phys. Rev. E **54**, 5707-5725 (1996). * [35] D. W. Mc Laughlin, J. V. Moloney, and A. C. Newell, "Solitary Waves as Fixed Points of Infinite-Dimensional Maps in an Optical Bistable Ring Cavity," Phys. Rev. Lett. **51**, 75-78 (1983). * (36) D. E. Pelinovsky, Y. S. Kivshar, and V. V. Afanasjev, "Internal modes of envelope solitons," Physica D **116**, 121-142 (1998). * (37) D. V. Skryabin and W. J. Firth, "Modulational instability of solitary waves in non-degenerate three wave mixing: the role of phase symmetries," Phys. Rev. Lett. **81**, 3379-3382 (1998). * (38) N. Picque and T. W. Hansch, "Frequency comb spectroscopy," Nat. Photon. **13**, 146-157 (2019). * (39) Y. A. Demchenko and M. L. Gorodetsky, "Analytical estimayes of eigenfrequencies, dispersion, and field distribution in whispering gallery resonators," J. Opt. Soc. Am. B **30**, 3056-3063 (2013). * (40) C. Milian, Y. V. Kartashov, D. V. Skryabin, and L. Torner, "Clusters of cavity solitons bounded by conical radiation," Phys. Rev. Lett. **121**, 103903 (2018). * (41) V. S. Ilchenko and M. L. Gorodetskii, "Thermal nonlinear effects in optical whispering gallery microresonators," Laser Phys. **2**, 1004-1009 (1992). * (42) A. Leshem, Z. Qi, T. F. Carruthers, C. R. Menyuk, and O. Gat, "Thermal instabilities, frequency-comb formation, and temporal oscillations in Kerr microresonators," Phys. Rev. A **103**, 013512 (2021). * (43) A. A. Savchenkov, A. B. Matsko, W. Liang, V. S. Ilchenko, D. Seidel, and L. Maleki, "Kerr frequency comb generation in overmoded resonators," Opt. Express **20**, 27290-27298 (2012). * (44) X. Xue, Y. Xuan, Y. Liu, P.-H. Wang, S. Chen, J. Wang, D. E. Leaird, M. Qi, and A. M. Weiner, "Mode-locked dark pulse Kerr combs in normal-dispersion microresonators," Nat. Photon. **9**, 594-600 (2015). * (45) J. K. Jang, Y. Okawachi, M. Yu, K. Luke, X. Ji, M. Lipson, and A. L. Gaeta, "Dynamics of mode-coupling-induced microresonator frequency combs in normal dispersion," Opt. Express **24**, 28794-28803 (2016). * (46) E. Nazemosadat, A. Fulop, O. B. Helgason, P.-H. Wang, Y. Xuan, D. E. Leaird, M. Qi, E. Silvestre, A. M. Weiner, and V. Torres-Company, "Switching dynamics of dark-pulse Kerr frequency comb states in optical microresonators," Phys. Rev. A **103**, 013513 (2021). * (47) X. Xue, Y. Xuan, P.-H. Wang, Y. Liu, D. E. Leaird, M. Qi, and A. M. Weiner, "Normal-dispersion microcombs enabled by controllable mode interactions," Laser Photon. Rev. **9**, L23-L28 (2015). * (48) B. Y. Kim, Y. Okawachi, J. K. Jang, M. Yu, X. Ji, Y. Zhao, C. Joshi, M. Lipson, and A. L. Gaeta, "Turn-key, high-efciency Kerr comb source," Opt. Lett. **44**, 4475-4478 (2019). * (49) W. Jin, Q.-F. Yang, L. Chang, B. Shen, H. Wang, M. A. Leal, L. Wu, M. Gao, A. Feshali, M. Paniccia, K. J. Vahala, J. E. Bowers, "Hertz-linewidth semiconductor lasers using CMOS-ready ultra-high-Q microresonators," Nat. Photon. **15**, 346-353 (2021). * (50) B. Shen, L. Chang, J. Liu, H. Wang, Q. F. Yang, C. Xiang, R. N. Wang, J. He, T. Liu, W. Xie, J. Guo, D. Kinghorn, L. Wu, Q. Ji, T. J. Kippenberg, K. Vahala, and J. E. Bowers, "Integrated turnkey soliton microcombs," Nature **582**, 365-369 (2020). * (51) W. Weng, R. Bouchand, E. Lucas, E. Obrzud, T. Herr, and T. J. Kippenberg, "Heteronuclear soliton molecules in optical microresonators," Nat. Commun. **11**, 1-9 (2020). * (52) J. K. Jang, A. Klenner, X. Ji, Y. Okawachi, M. Lipson, and A. L. Gaeta, "Synchronization of coupled optical microresonators," Nat. Photon. **12**, 688-693 (2018). ### Online Methods **Time evolution equations.** The nonlinear dynamics of the electric field's envelope orbiting around a cylindrical microresonator is described by the following system of coupled equations, \[i\partial_{t}\psi=-ib_{1,1}\partial_{X}\psi+\frac{1}{2}(\partial_{ X}^{2}-\partial_{z}^{2})\psi+\hat{D}_{hod}\psi-[i-\delta]\psi-\] \[-[(1-f_{R})|\psi|^{2}-\Delta_{T}+Q]\psi-h_{0}\exp\left(-\frac{z^{ 2}}{\sigma_{z}^{2}}\right)\xi(X), \tag{2}\] \[\partial_{t}\Delta_{T}=-A\int_{0}^{L}|\psi|^{2}\frac{dX}{L}-B \Delta_{T},\] (3) \[\partial_{t}^{2}Q=-\frac{2\gamma_{R}\tau}{\gamma}\partial_{t}Q- \frac{\tau^{2}\Omega_{R}^{2}}{\gamma^{2}}\left[Q-f_{R}|\psi|^{2}\right],\] (4) \[\xi(X)=\frac{1}{N}\sum_{m=-\infty}^{+\infty}\exp\left(-\frac{[X+ mL]^{2}}{\sigma_{X}^{2}}\right), \tag{5}\] where \(\psi\), \(\Delta_{T}\), \(Q\) are the optical, thermal, and molecular vibrational fields, respectively. The above model assumes that only one radial mode family of the cylinder is at play, which may be unambiguously achieved by considering a hollow cylinder with an thin wall width, e.g., \(w=0.75\)\(\mu\)m, as we used in Fig.6. The cylinder's dispersion is given by \[\hat{D}\equiv\sum_{q=0}^{+\infty}\sum_{p=q}^{+\infty}b_{q,p-q}(-i \partial_{z})^{p-q}(-i\partial_{X})^{q}, \tag{6}\] \[b_{q,p-q}\equiv\frac{B_{q,p-q}\gamma^{p/2-1}}{2^{p/2}|B_{0,2}|^{ (p-q)/2}|B_{2,0}|^{q/2}},\] (7) \[B_{q,p-q}\equiv\frac{\tau\omega^{(q,p-q)}}{q!(p-q)!(2\pi R)^{p}},\] (8) \[\omega^{(q,p-q)}\equiv\left.\partial_{k_{x}}^{q}\partial_{k_{z}}^ {p-q}\omega(k_{x},k_{z})\right|_{k_{x0},k_{z0}}, \tag{9}\] where \(\gamma\) is the normalised cavity loss, \(R\) is the cylinder's radius, \(\tau\) the roundtrip time, and \(k_{x}\), \(k_{z}\) the wavenumbers associated to the the \(X\), \(z\) coordinates (\(X\) is the frame at rest in the lab). The dispersion terms with low \(p,q\) indices account for: \(p=0,q=0\), resonance frequency (\(\omega_{0}\)); \(p=1,q=1\), group velocity of the pump's frequency along \(X\) (\(b_{1,1}\)); \(p=2,q=2\), GVD; \(p=2,q=0\), diffraction. The rest of terms are all included in the _higher order dispersion_ operator \(\hat{D}_{hod}\equiv\hat{D}-\omega_{0}\tau/\gamma+ib_{1,1}\partial_{X}-\frac{ 1}{2}(\partial_{X}^{2}-\partial_{z}^{2})\). The even parity of the cylinder's dispersion around \(z\) (see, e.g., [1, 2]) and the fact that we expand around the \(k_{z0}=0\) yields the nullity of all coefficients with \(p-q=1\). The relation between normalised and physical coordinates is as follows: \(X=X_{phys}/(2\pi R)\sqrt{\gamma/(2|B_{2,0}|)}\), \(z=Z_{phys}/(2\pi R)\sqrt{\gamma/(2|B_{0,2}|)}\). In our simulations, the width of the numerical window along \(x\) was \(L=16\), which together with the choice of \(\gamma=0.001\) sets \(B_{2,0}\approx-2\times 10^{-6}\), attainable with a silica glass cylinder of \(R=100\)\(\mu\)m and wall-width \(w=0.75\)\(\mu\)m at \(\lambda_{p}\approx 1.26\)\(\mu\)m, which features a roundtrip time \(\tau=3.1\) ps and a quality factor \(\mathcal{Q}=\omega_{p}\tau/\gamma\approx 4.9\times 10^{6}\) (\(\omega_{p}=2\pi c/\lambda_{p}\)), reasonable for cylinders [3]. The main higher dispersion terms are given by \(B_{3,0}\approx 9.26\times 10^{-10}\), \(B_{4,0}\approx-2.83\times 10^{-13}\), \(B_{0,2}\approx 1.21\times 10^{-4}\). The normalised detuning is \(\delta=(\omega_{p}-\omega_{0})\tau/\gamma\). The pump beam, assumed of Gaussian profile, has an amplitude \(h_{0}\) and a width \(\sigma_{z}\). Under realistic conditions (cf. Fig.6), the pump beam is also localised along \(X\), and this is accounted for via the function \(\xi(X)\), where \(N\) is the normalisation factor such that \(\max(\xi(X))=1\). Thermal effects are introduced via the light-to-phonon energy conversion, \(A=10^{-2}\), and the corresponding cooling rate, \(B=5\times 10^{-2}\), considering realistic values [4]. Thermal detuning, \(\Delta_{T}\), is assumed not to depend on \(X\) since \(\tau\sim\)ps is much smaller than the temperature diffusion time scale, in the order of the ns. Last, we note that in Eq.3 we omitted a term \(\sim\mu\partial_{z}\Delta_{T}\), accounting for the heat diffusion along \(z\), as its relative importance to the other terms is of the order of \(\sim 10^{-6}\) (the thermal diffusion coefficient is \(\mu\approx 7.25\times 10^{-7}m^{2}/s\)[5]). Raman scattering is introduced via the molecular vibrational field [6], previously implemented in microresonators [7], with standard parameter for glass given by [8]: Raman to Kerr effective fraction, \(f_{R}=0.18\); inverse phonon life-time, \(\gamma_{R}=1/32\) fs\({}^{-1}\); natural phonon frequency, \(\omega_{R}=1/12.2\) fs\({}^{-1}\); and \(\Omega_{r}\equiv[\gamma_{R}^{2}+\omega_{R}^{2}]^{1/2}\). Eq.1 in the main text is a particular case of the system Eqns.2-4 when higher order dispersion, Raman scattering, thermal detuning, and pump azimuthal's localisation are disregarded (\(\hat{D}_{hod}=Q=\Delta_{T}=0\), \(\xi(X)=1\)). The time evolution of the above system, Eqns.2-5, is simulated via the fourth order Runge-Kutta method. **Computation of stationary snakes.** Stationary solutions (snakes or else) are obtained numerically from Eq.1 or Eqns. 2-5 with the Newton-Raphson method in the frame comoving with the nonlinear state, where they readily satisfy \(\partial_{t}\psi=0\). While equation 1 is already expressed in such frame, Eqns.2-5 (expressed in the lab frame) are rewritten into the comoving frame after the substitution \(x=X-(b_{1,1}+v)t\), where \(v\) is a velocity shift induced by the higher order effects, which is computed together with the nonlinear solution. When computing stationary states, the \(x\)-localisation of the pump is disregarded (we set \(\xi(X)=1\)). **Stability of snakes.** Linear stability analysis is performed to all stationary solutions, represented by the tuple \(\{\psi_{s}\,Q_{s},\Delta_{T_{s}}\}\). Each field in the stationary solution is prone to develop instabilities. In the initial stages of such instabilities fields are regarded as \(\psi=\psi_{s}+ae^{i\lambda t}+b^{*}e^{-i\lambda^{*}t}\), \(Q=Q_{s}+ce^{i\lambda t}+c^{*}e^{-i\lambda^{*}t}\), \(\Delta_{T}=\Delta_{T_{s}}+de^{i\lambda t}+d^{*}e^{-i\lambda^{*}t}\) (Note \(Q,\Delta_{T}\) are real fields), where \(\lambda\) are the complex eigenvalues of the Jacobian matrix, obtained after substitution of the above decomposition into the system of Eqns. 2-4 (with \(\xi(X)=1\)) and linearising in \(\lambda\). The real parts of \(\lambda\), Re(\(\lambda\)), are the growth rates yielding instabilities when Re(\(\lambda\)) \(>0\). **Data availability** The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request. **Code availability** The analysis codes will be made available on reasonable request. **Acknowledgements** JAC and CM acknowledge support from the Spanish government via the Grant PID2021-124618NB-C21 funded by MCIN/AEI/ 10.13039/501100011033 and by "ERDF A way of making Europe", by the "European Union". CM acknowledges support from Generalitat Valenciana PROMETEO/2021/082. PFC acknowledges partial support from the Spanish government via the project PID2021-128676OB-I00 (MICINN). LT acknowledges support by CEX2019-000910-S [MCIN/AEI/10.13039/501100011033], Fundacio Cellex, Fundacio Mir Puig, and Generalitat de Catalunya (CERCA). YVK academic research has been supported by the research project FFUU-2021-0003 of the Institute of Spectroscopy of the Russian Academy of Sciences. **Author contributions** SBI and CM carried out the numerical simulations. CM conceived the project. All authors contributed significantly to this work, discussed the results, and contributed into the manuscript preparation. **Competing interests** The authors declare no competing interests.
2305.03540
Odd sun-free Triangulated Graphs are $S$-perfect
For a graph $G$ with the vertex set $V(G)$ and the edge set $E(G)$ and a star subgraph $S$ of $G$, let $\alpha_S(G)$ be the maximum number of vertices in $G$ such that no two of them are in the same star subgraph $S$ and $\theta_S(G)$ be the minimum number of star subgraph $S$ that cover the vertices of $G$. A graph $G$ is called $S$-perfect if for every induced subgraph $H$ of $G$, $\alpha_S(H)=\theta_S(H)$. Motivated by perfect graphs discovered by Berge, Ravindra introduced $S$-perfect graphs. In this paper we prove that a triangulated graph is $S$-perfect if and only if $G$ is odd sun-free. This result leads to a conjecture which if proved is a structural characterization of $S$-perfect graphs in terms of forbidden subgraphs.
G. Ravindra, Sanghita Ghosh, Abraham V. M
2023-05-05T13:51:18Z
http://arxiv.org/abs/2305.03540v1
###### Abstract ###### Abstract For a graph \(G\) with the vertex set \(V(G)\) and the edge set \(E(G)\) and a star subgraph \(S\) of \(G\), let \(\alpha_{S}(G)\) be the maximum number of vertices in \(G\) such that no two of them are in the same star subgraph \(S\) and \(\theta_{S}(G)\) be the minimum number of star subgraph \(S\) that cover the vertices of \(G\). A graph \(G\) is called \(S\)-perfect if for every induced subgraph \(H\) of \(G\), \(\alpha_{S}(H)=\theta_{S}(H)\). Motivated by perfect graphs discovered by Berge, Ravindra introduced \(S\)-perfect graphs. In this paper we prove that a triangulated graph is \(S\)-perfect if and only if \(G\) is odd sun-free. This result leads to a conjecture which if proved is a structural characterization of \(S\)-perfect graphs in terms of forbidden subgraphs. **Keywords:**\(S\)-perfect graphs; extended sun graph; sun graph **2020 Mathematics Subject Classification:** 05C17, 05C75. **Odd sun-free Triangulated Graphs are \(S\)-perfect** G. Ravindra, Sanghita Ghosh, Abraham V. M. _Department of Mathematics, CHRIST (Denemed to be University), Bengaluru, India._ [email protected], [email protected], [email protected]_ ## 1 Introduction All graphs \(G=(V,E)\) in this paper are finite and simple with the vertex set \(V(G)\) and the edge set \(E(G)\). Inspired by perfect graphs due to Berge [1], the concept of \(F\)-perfect graphs was introduced by Ravindra in 2011 [2]. For the terminology and notations which are not defined here, we refer the readers to West [3]. A graph \(G\) is said to be _triangulated_ if \(G\) has no induced cycle of length at least 4. If \(D\subseteq V(G)\), the subgraph of \(G\) induced by \(D\) is obtained by deleting the vertices of \(V(G)-D\) and is denoted by \(G[D]\). Given a graph \(H\), we say that a graph \(G\) is \(H\)-free, if \(G\) does not contain an induced subgraph isomorphic to \(H\). Given a family \(\{H_{1},H_{2},...\}\) of graphs, we say that \(G\) is \((H_{1},H_{2},...)\)-free if \(G\) is \(H_{i}\) - free, for every \(i\geq 1\). A _star_ is a tree consisting of one vertex adjacent to all other vertices. Note that \(K_{1}\) and \(K_{2}\) are stars. A vertex \(v\) in \(G\) is called an _simplicial vertex_ if \(v\) belongs to only one maximal clique of \(G\). A clique of \(G\) is _free_ if it contains at least one simplicial vertex. A triangle is _free triangle_ if it has only one simplicial vertex. Let \(G\) be a graph. An \(\mathbb{S}\)-_cover_ of \(G\) is a family of stars contained in \(G\) such that every vertex of \(G\) is in one of the stars. The _S-covering number_ of \(G\) is the minimum cardinality of a \(\mathbb{S}\)-cover and is denoted by \(\theta_{S}(G)\). A \(\theta_{S}\)_-cover_ of \(G\) is an \(\mathbb{S}\)-cover of \(G\) containing \(\theta_{S}(G)\) stars. A set \(T\subseteq V(G)\) is an _\(S\)-independent_ set of \(G\) if no two vertices of \(T\) are contained in the same star in \(G\) (equivalently, any two vertices in \(T\) are at a distance at least 3). The maximum cardinality of an \(S\)-independent set is called the _\(S\)-independence number_ of \(G\) and is denoted by \(\alpha_{S}(G)\). An \(\alpha_{S}\)_- independent set_ of \(G\) is a \(S\)-independent set with \(\alpha_{S}(G)\) vertices. It is easy to see that the graph parameters \(\alpha_{S}(G)\) and \(\theta_{S}(G)\) both satisfy the additive property, that is * \(\alpha_{S}(\bigcup_{i=1}^{k}G_{i})=\sum_{i=1}^{k}\alpha_{S}(G_{i})\), and * \(\theta_{S}(\bigcup_{i=1}^{k}G_{i})=\sum_{i=1}^{k}\theta_{S}(G_{i})\). It is also interesting to observe that the parameters \(\alpha(G)\) and \(\theta(G)\) of Berge perfect graphs satisfy monotone property, that is if \(H\) is an induced subgraph of \(G\), then \(\alpha(H)\leq\alpha(G)\) and \(\theta(H)\leq\theta(G)\). However, the parameters \(\alpha_{S}(G)\) and \(\theta_{S}(G)\) do not satisfy monotone property. That is, if \(H\) is an induced subgraph of \(G\), \(\alpha_{S}(H)\) (or \(\theta_{S}(H)\)) may exceed \(\alpha_{S}(G)\) (or \(\theta_{S}(G)\)). For example if \(G=K_{1,n},n\geq 2\) with central vertex \(v_{0}\), then \(\alpha_{S}(K_{1,n})=1\) and \(\theta_{S}(K_{1,n})=1\). However \(\alpha_{S}(K_{1,n}-v_{0})=n\) and \(\theta_{S}(K_{1,n}-v_{0})=n,n\geq 2\). For any graph \(G\), \(\theta_{S}(G)\geq\alpha_{S}(G)\). So, if for some graph \(H\), \(\theta_{S}(H)\leq\alpha_{S}(H)\), then \(\alpha_{S}(H)=\theta_{S}(H)\). Now we examine the graphs with the property that \(\alpha_{S}(H)=\theta_{S}(H)\) for every induced subgraph \(H\) of \(G\) and call such graphs \(S\)-perfect graphs. We formally present the definition of _\(S\)-perfect graphs_. **Definition 1.1**.: _A graph is called \(S\)-perfect if \(\theta_{S}(H)=\alpha_{S}(H)\), for every induced subgraph \(H\) of \(G\)._ A graph \(G\) is said to be _minimal \(S\)-imperfect_ if (i) \(\theta_{S}(G)\neq\alpha_{S}(G)\) and (ii) \(G-v\) is \(S\)-perfect for every vertex \(v\) of \(G\). Every minimal \(S\)-imperfect graph is a connected graph, since a graph \(G\) is \(S\)-perfect if and only if every component of \(G\) is \(S\)-perfect. If \(G\) is not \(S\)-perfect, now onwards we may assume that \(G\) is minimal \(S\)-imperfect. Since \(S\)-perfectness is a hereditary property, one expects a forbidden subgraph characterization for \(S\)-perfect graphs. We realize this in this paper and prove a characterization theorem for triangulated \(S\)-perfect graphs. The property that the parameters \(\alpha_{S}\) and \(\theta_{S}\) do not satisfy monotone property causes some difficulty in some of the results related to their equality. Before we state the characterization theorem for triangulated \(S\)-perfect graphs, we define _sun graphs_. **Definition 1.2**.: _Let \(H\) be a graph with a Hamiltonian cycle \(C=v_{1}v_{2}\ldots v_{k}v_{1}\). Let \(A_{i},1\leq i\leq k\) be mutually disjoint complete graphs such that \(|A_{i}|\geq 1\), \(\forall\)\(i\) such that \(1\leq i\leq k\). Let \(H_{1}\) be a graph constructed from \(H\) such that every vertex in \(A_{i}\) is adjacent to \(v_{i}\) and \(v_{i+1}\) and all \(u_{i}\)s in \(A_{i}\) are simplicial in \(H_{1}\) (\(i\)'s are taken modulo \(k\)). \(H_{1}\) is called \(k\)-extended sun. \(H_{1}\) is an extended odd (even) sun if \(k\) is odd (even). If \(|A_{i}|=1\), for all \(i\) then \(H_{1}\) is \(k\)-sun graph. \(H_{1}\) is an odd (even) sun if \(k\) is odd (even)._ Obviously, a sun is an extended sun. A 3-sun is contained in every \(k\)-extended sun. For example in Figure 1 we see \(\{\{A_{1}\},v_{1},v_{2},v_{5},v_{4},\{A_{5}\}\}\)]=3-sun. The goal of this paper is to prove the following result. **Theorem 2.2** (**Characterization Theorem for Triangulated \(S\)-perfect graphs**).: A triangulated graph \(G\) is \(S\)-perfect if and only if \(G\) is odd sun-free. The theorem is a min-max theorem for triangulated \(S\)-perfect graphs. For any graph \(G\), the central vertices of stars in \(\mathbb{S}\)-cover of \(G\) is a dominating set of \(G\). The minimum parameter \(\theta_{S}(G)\) is essentially the domination number, \(\gamma(G)\) of \(G\), which has many applications like surveillance, controlling and monitoring. The maximum parameter \(\alpha_{S}(G)\) have been extensively studied in various contexts with different terminology. For example, \(S\)-independent sets are studied in [4], where an \(S\)-independent set in \(G\) corresponds to a color class in a \((k,3)\)-coloring of \(G\). A study on characterization of star-perfect graphs, where all the stars in a star-cover of \(G\) are essentially induced, is also done in [5]. Our characterization of triangulated \(S\)-perfect graphs goes parallel to triangulated neighbourhood perfect graphs. The notion of neighbourhood number which was introduced by Sampathkumar and Neeralagi [6]. Lehel and Tuza defined neighbourhood perfect graphs and characterized triangulated neighbourhood perfect graphs [7]. Though seemingly neighbourhood perfect graphs and \(S\)-perfect graphs appear different (For example: \(C_{6k+2}\) and \(C_{6k+4},k\geq 1\) are neighbourhood perfect but not \(S\)-perfect and \(C_{6k+3},k\geq~{}1\) are \(S\)-perfect but not neighbourhood perfect), surprisingly they are same for triangulated graphs (Theorem 2.2). ## 2 Results and Discussions We use the following theorem and lemmas in proving the main theorem which characterizes triangulated \(S\)-perfect graphs. **Theorem 2.1**.: _[_5_]_ _A graph \(G\) is \(star\)-perfect if and only if \(G\) is \((C_{3},C_{3k+1},C_{3k+2})\)-free, \(k\geq 1\)._ **Lemma 2.1**.: _[_5_]_ _Let \(k\) be any positive integer. Then,_ 1. \(\theta_{s}(P_{k})=\left\lceil\frac{k}{3}\right\rceil\) _and_ \(\alpha_{s}(P_{k})=\left\lceil\frac{k}{3}\right\rceil\)_._ Figure 1: Examples of extended sun and sun on 5 vertices _._ 2. \(P_{k}\) _is star-perfect._ 3. _The disjoint union of paths is a star-perfect graph._ The proofs of this Lemmas 2.2, 2.3, 2.4 are similar to as shown in [5]. **Lemma 2.2**.: _Any cycle of length \(3k\), \(k\geq 1\) is \(S\)-perfect._ **Lemma 2.3**.: _Any cycle of length \(3k+1\) or \(3k+2\), \(k\geq 1\) is minimal \(S\)-imperfect._ Our next goal is to show that a minimal \(S\)-imperfect graph is a block. **Lemma 2.4**.: _If \(G\) is minimal \(S\)-imperfect graph, then \(G\) is a block._ **Lemma 2.5**.: _[_8, 9_]_ _Let \(G\) be a triangulated graph, then \(G\) has a simplicial vertex._ **Lemma 2.6**.: _If \(G\) is minimal \(S\)-imperfect, then \(\theta_{S}(G)=\alpha_{S}(G)+1\)._ Proof.: Since \(G\) is triangulated, by Lemma 2.5, \(G\) has a simplicial vertex \(v\). If \(\alpha_{S}(G)=1\), then \(\theta_{S}(G)\neq 1\) since \(\theta_{S}(G)\neq\alpha_{S}(G)\). Therefore \(\theta_{S}(G)\geq 2\). We observe that \(\alpha_{S}(G-v)=1\). If not, there exist \(v_{1},v_{2}\) in \(V(G-v)\) such that \(\{v_{1},v_{2}\}\) is \(S\)-independent in \(G-v\). Since \(\alpha_{S}(G)=1\), \(v\leftrightarrow\{v_{1},v_{2}\}\). Then \(v_{1}\leftrightarrow v_{2}\), since \(v\) is simplicial in \(G\), a contradiction. Since \(\theta_{S}(G)\geq 2,\theta_{S}(G)\geq\alpha_{S}(G)+1\). Also \(\theta_{S}(G)\leq\theta_{S}(G-v)+1\), since a \(S\)-cover of \(G\) contains at least \(\theta_{S}(G-v)+1\) stars. Then \(\theta_{S}(G)\leq\alpha_{S}(G-v)+1=\alpha_{S}(G)+1\), implying \(\theta_{S}(G)=\alpha_{S}(G)+1\), so the lemma is true if \(\alpha_{S}(G)=1\). If \(\alpha_{S}(G)\geq 2\), let \(T=\{v_{1},v_{2},\ldots,v_{k}\},k\geq 2\) be an \(\alpha_{S}\)-independent set of \(G-v\). If \(k=1\), then \(\alpha_{G}(G-v)=1\) and so \(\theta_{S}(G-v)=1\). Then \(\theta_{S}(G)\leq\theta_{S}(G-v)+1=\alpha_{S}(G-v)+1=2\), by (A). That is \(\theta_{S}(G)=\alpha_{S}(G)\), a contradiction to \(G\) being minimal \(S\)-imperfect. Therefore \(k\geq 2\). If \(T\) is not an \(\mathbb{S}\)-independent set in \(G\), then then there are two vertices \(v_{1},v_{2}\) in \(T\) such that \(\{v_{1},v_{2}\}\) is not \(S\)-independent in \(G\). As argued earlier we have a contradiction. \(|T|\leq\alpha_{S}(G)\). By definition of \(T\), \(|T|=\alpha_{S}(G-v)\). Therefore \(\alpha_{S}(G)\geq\alpha_{S}(G-v)\) Since \(\alpha_{S}(G-v)=\theta_{S}(G-v)\) we have \(\alpha_{S}(G)\geq\theta_{S}(G-v)\). That is \(\alpha_{S}(G)+1\geq\theta_{S}(G-v)+1\geq\theta_{S}(G)\). Therefore \(\theta_{S}(G)=\alpha_{S}(G)\) or \(\theta_{S}(G)=\alpha_{S}(G)+1\). However \(\theta_{S}(G)\neq\alpha_{S}(G)\), since \(G\) is minimal \(S\)-imperfect, therefore \(\theta_{S}(G)=\alpha_{S}(G)+1\). Hence the lemma. **Lemma 2.7**.: _If \(G\) is a minimal \(S\)-imperfect graph, then for \(u,v\in V(G)\), neither \(N(u)\subseteq N(v)\) nor \(N(v)\subseteq N(u)\)._ Proof.: Suppose false, then say \(N(u)\subseteq N(v)\). Let \(S=\{S_{1},S_{2},\ldots S_{w},\ldots,S_{k}\}\) be a \(\theta_{S}\)-cover of \(G\) where each star \(S_{i}\) is maximal and \(S_{v}\) is a star containing \(v\). By Lemma 2.6, the stars in \(\theta_{S}\)-cover of \(G\) are \(\alpha_{S}(G)+1\) in number. Since \(N(u)\subseteq N(v)\), any star \(S^{\prime}\) containing \(u\) is a substar (a subgraph which is a star) of \(S_{v}\), \(\theta_{S}(G-u)\leq\theta_{S}(G)\). We observe that \(\theta_{S}(G-u)=\theta_{S}(G)\). If \(\theta_{S}(G-u)<\theta_{S}(G)\), then \(\theta_{S}(G-u)\leq\theta_{S}(G)-1\), and \(G-u\) will be covered by \(\theta_{S}(G)-1\) stars. Since \(S^{\prime}\) is a substar of \(S_{v}\), \(G\) will also be covered by \(\theta_{S}(G)-1\) stars, contradiction to \(\theta_{S}(G)\) being minimum. Therefore \(\theta_{S}(G-u)=\theta_{S}(G)\). Let \(T\) be an \(\alpha_{S}\)-independent set in \(G-u\). We observe that \(T\) is an \(\alpha_{S}\)-independent set in \(G\). If not, \(u\) is adjacent to at least two vertices in \(T\) in \(G\). Thus \(v\) is adjacent to two vertices in \(T\) since \(N(u)\subseteq N(v)\). But then \(T\) is not an \(S\)-independent set in \(G-u\) as \(v\in V(G-u)\), a contradiction. Thus \(T\) is an \(S\)-independent set in \(G\) and \(\alpha_{S}(G)\geq|T|=\alpha_{S}(G-u)=\theta_{S}(G-u)=\theta_{S}(G)\). This implies \(\alpha_{S}(G)\geq\theta_{S}(G)\), a contradiction. Hence the lemma. **Lemma 2.8**.: _If \(G\) is a triangulated minimal \(S\)-imperfect graph, then \(G\) is Hamiltonian._ Proof.: Let \(C=v_{1}v_{2}\ldots v_{k}v1\) be a largest cycle in \(G\). If \(V(C)=V(G)\), then the lemma is true. So let \(v\in V(G)-V(C)\). Since \(G\) is minimal \(S\) imperfect graph, it is a block and hence connected. Let \(v\leftrightarrow v_{1}\). Since \(G\) is a block, there exists an induced cycle \(C^{\prime}\) containing the edges \(vv_{1}\) and \(v_{1}v_{2}\). Since \(G\) is triangulated, \(C^{\prime}\) is a triangle. This implies that \(v\leftrightarrow v_{2}\). Therefore \(C\) together with \(v\) is a bigger cycle than that of \(C\), a contradiction to the choice of \(C\). Hence such a \(v\) does not exist and therefore \(G\) is Hamiltonian. **Lemma 2.9**.: _Let \(G\) be a triangulated \(S\)-imperfect graph. Then there exists an extended sun \(G^{*}\) containing \(G\) as an induced subgraph._ Proof.: Since \(G\) triangulated, \(G\) has a simplicial vertex \(u\), by Lemma 2.5. Let \(Q\) be a free clique in \(G\) containing \(u\). The number of non-simplicial vertices in \(Q\) is at least 2. If not, then the only non-simplicial vertex in \(Q\) is a cut-vertex in \(G\), a contradiction to the fact that \(G\) is a block, by Lemma 2.4. If the simplicial vertices of \(G\) are removed, the resulting graph \(H\) is obviously Hamiltonian. Let \(C=v_{1}v_{2}\dots v_{k}v_{1}\) be a Hamiltonian cycle in \(H\). Let \(A_{1},A_{2},\dots,A_{l}\), \(1\leq l\leq k\) be mutually disjoint complete subgraphs in \(G\) such thatfor every \(u_{i}\in A_{i}\), \(u_{i}\leftrightarrow\{v_{i},v_{i+1}\}\) and \(u_{i}\) is simplicial in \(G\). If \(l=k\), we are done. If \(l<k\), let \(u_{l+1}\) be a vertex not in \(G\) and let \(u_{l+1}\) is not adjacent to any simplicial vertex in \(G\). Let \(G_{1}\) be the graph formed by \(G\) and \(u_{l+1}\) such that \(u_{l+1}\) is simplicial in \(G_{1}\). If \(G_{1}\) is an extended sun, then the lemma is true, otherwise we repeat the process to get \(G_{2},G_{3},\dots G_{m}\), where \(G_{m}\) is an extended sun. Considering \(G_{m}=G^{*}\), \(G\) is an induced subgraph of \(G^{*}\) and hence the lemma is true. **Lemma 2.10**.: _Let \(G\) be a minimal \(S\)-imperfect graph. Let \(u\) be a vertex not in \(G\). Let \(G^{\prime}\) be a graph formed by \(G\) and \(u\) such that \(u\) is simplicial in \(G^{\prime}\) and \(u\) is adjacent to an edge in \(G\) which is not adjacent to a simplicial vertex of \(G\). Then \(\alpha_{S}(G^{\prime})\neq\theta_{S}(G^{\prime})\)._ Proof.: On the contrary, suppose \(\alpha_{S}(G^{\prime})=\theta_{S}G^{\prime})\). This implies that every vertex of an \(\alpha_{S}-\)set of \(G^{\prime}\) is in exactly one star of \(\theta_{S}-\)cover of \(G^{\prime}\). Since \(u\) is simplicial in \(G^{\prime}\), by nature of \(G^{\prime}\) any \(S-\)independent set in \(G^{\prime}\) cannot have more than \(\alpha_{S}(G)+1\) vertices. By Lemma 2.15, \(\theta_{S}(G)=\alpha_{S}(G)+1\). Then \(\alpha_{S}(G)+1=\theta_{S}(G)\leq\theta_{S}(G^{\prime})=\alpha_{S}(G^{\prime})\), since a \(\theta_{S}\)-cover of \(G^{\prime}\) contains a \(\theta_{S}\)-cover of \(G\). If \(\alpha_{S}(G)+1=\alpha_{S}(G^{\prime})\), then \(\theta_{S}(G)=\theta_{S}(G^{\prime})\). Then there is an \(S-\)independent set of \(G\) of size \(\alpha_{S}(G)+1\), a contradiction to the fact that \(\alpha_{S}(G)\) is maximum. Therefore \(\alpha_{S}(G^{\prime})\neq\theta_{S}(G^{\prime})\). **Lemma 2.11**.: _Odd sun is not \(S\)-perfect._ Proof.: Let \(G\) be an odd sun and \(U=\{u_{1},u_{2},\dots,u_{k}\}\) be the set of simplicial vertices and \(W=\{v_{1},v_{2},\dots,v_{k}\}\) be the set of non-simplicial vertices in \(G\). Then \(T^{\prime}=\{u_{1},u_{3},\dots,u_{k-2}\}\) forms an \(S\)-independent set of \(G\) and the stars centered at \(v_{1},v_{3},\dots,v_{k}\) form an \(\mathbb{S}\)-cover of \(G\), say \(\mathbb{S}^{{}^{\prime}}\). Then this implies that \(|\mathbb{S}^{{}^{\prime}}|=|T^{{}^{\prime}}|+1\) implying \(\alpha_{S}(H^{*})\neq\theta_{S}(H^{*})\), hence the lemma. Since \(k\)-odd sun is an induced subgraph of \(k\)-extended sun, then the following lemma is immediate. **Lemma 2.12**.: _Odd extended sun is not \(S\)-perfect._ **Definition 2.1**.: _Let \(P\) be a path. A special path \(P^{*}\) is constructed from \(P\) such that an edge of \(P\) is contained in a free triangle._ **Lemma 2.13**.: _A special path is \(S\)-perfect._ Proof.: Let \(P^{*}\) be a special path formed from the path \(P\). If \(P^{*}\) is \(K_{2}\) or \(K_{3}\), then obviously \(P\) is \(S\)-perfect. So let \(P^{*}\neq K_{2}\) or \(K_{3}\). Then \(P^{*}\) will have a cut vertex, since every block of \(P^{*}\) is \(K_{2}\) or \(K_{3}\), by definition.If \(P^{*}\) is not \(S\)-perfect, let \(P^{*}\) is minimal \(S\)-imperfect. By Lemma 2.4, \(P^{*}\) is a block, a contradiction. **Lemma 2.14**.: _If \(G\) is a 3-sun free triangulated even extended sun, then \(G\) is \(S\)-perfect._ Proof.: Let \(U=\{u_{1},u_{2},\dots,u_{k}\}\) be the set of simplicial vertices and \(W=\{v_{1},v_{2},\dots,v_{k}\}\) be the set of non-simplicial vertices in \(G\). Let \(u_{i}\) represent the simplicial vertices in \(A_{i}\). \(A_{i}\) and \(C\) have the same meaning as in Definition 1.2. If \(v_{i}v_{j}\) and \(v_{i+1}v_{j+2}\) are alternate edges in \(C\), \(u_{i}\) and \(u_{i+2}\) are at a distance 3. Since \(k\) is even, \(\{u_{1},u_{3},\dots,u_{l},u_{l+3},\dots\}\) is an \(S\)-independent set in \(G\) of size \(\dfrac{k}{2}\). Similarly, the stars at \(v_{1},v_{3},\dots,v_{l},v_{l+3}\) is an \(S\)-cover of \(G\) of size \(\dfrac{k}{2}\). Then \(\alpha_{S}(G)\geq\dfrac{k}{2}\geq\theta_{S}(G)\). This implies that \(\alpha_{S}(G)=\theta_{S}(G)\). Every proper induced subgraph of \(G\) is a complete graph or a union of disjoint paths or special paths. Every complete graph is \(S\)-perfect and by Lemma 2.1 and Lemma 2.13, every induced subgraphs of \(G\) is \(S\)-perfect. **Lemma 2.15**.: _Let \(G\) be minimal \(S\)-imperfect triangulated graph. If for \(v,u,w\in V(G)\) such that \(v\) is a simplicial vertex in \(G\), \(u,w\in N(v)\), neither \(N(u)\subseteq N(w)\) nor \(N(w)\subseteq N(u)\), then \(G\) contains 3-sun as an induced subgraph._ Proof.: By Lemma 2.5, \(G\) has a simplicial vertex \(v\). If \(u\) and \(w\) be two vertices in \(N(v)\) such that \(N(u)\nsubseteq N(w)\) and \(N(w)\nsubseteq N(u)\), then there exists vertices \(u_{1}\in N(u)\) and \(w_{1}\in N(w)\) in \(G\) such that \(u\nleftrightarrow w_{1}\) and \(w\nleftrightarrow u_{1}\). For vertices \(v_{1}\) and \(v_{2}\) in \(G\), let \(P(v_{1},v_{2})\) denote an induced path connecting \(v_{1}\) and \(v_{2}\) in \(G\). Since \(G\) is a block, there exists a path \(P(u_{1},w_{1})\) connecting \(u_{1}\) and \(w_{1}\) in \(G\) not containing \(v\). Since \(G\) is a block we can choose a \(P(u_{1},w_{1})\) such that \(u,w\notin P(u_{1},w_{1})\). Let \(t_{1}\) be the last vertex in \(P(u_{1},w_{1})\) such that \(u\leftrightarrow t_{1}\). Let \(t_{2}\) be the last vertex in \(P(u_{1},w_{1})\) such that \(w\leftrightarrow t_{2}\). We consider the following two cases: 1. \(t_{1}\neq t_{2}\). Then \(ut_{1}\ldots t_{2}wu\) is an induced cycle of length at least 4, a contradiction to \(G\) being triangulated. Therefore Case 1 does not arise at all. 2. \(t_{1}=t_{2}=t\), say. Then \(t\leftrightarrow\{u,w\}\), and \(G[\{u,u_{1},\ldots,t\}]\) will not contain an induced cycle of length at least 4, since \(G\) is triangulated. Therefore \(u\) is adjacent to all the vertices in \(P(u_{1},t)\). If \(x\) is the vertex before \(t_{1}\) in \(P(u_{1},t)\), then \(x\leftrightarrow u,t\) and \(x\nleftrightarrow w\), by the choice of \(t\). Similarly there is a vertex \(y\) in \(P(w,t)\) such that \(y\leftrightarrow\{w,t\}\) and \(y\nleftrightarrow u\). Then \(G[\{v,u,w,x,t,y\}]\)=3-sun, a contradiction to our assumption. Therefore Case 2 does not arise at all. Hence the lemma. **Lemma 2.16**.: _Let \(G\) be a minimal \(S\)-imperfect graph. Then there exist no three distinct vertices \(v,u,w\in V(G)\) such that \(v\) is simplicial vertex in \(G\) and \(u\) and \(w\in N(v)\), \(N(u)\subseteq N(w)\) or \(N(w)\subseteq N(u)\)._ The proof is similar to the proof of Lemma 2.7 (We have to replace \(v\) by \(w\) in proof of Lemma 2.7). **Theorem 2.2** (**Main Theorem**).: _A triangulated graph \(G\) is \(S\)-perfect if and only if \(G\) is odd sun-free._ Proof.: Let \(G\) be a triangulated \(S\)-perfect graph. Then by Lemma 2.11\(G\) is odd-sun free. Conversely, let \(G\) be an odd sun-free triangulated graph. If \(G\) is not \(S\)-perfect, without loss of generality, let \(G\) be a minimal \(S\)-imperfect graph. Then by Lemmas 2.9 and 2.10, there exists an extended sun \(G^{*}\) such that \(G\) is an induced subgraph of \(G^{*}\) and \(\alpha_{S}(G^{*})\neq\theta_{S}(G^{*})\). We may assume that \(G^{*}\) is minimal \(S\)-imperfect. But then \(G^{*}=G\), since \(G\) is an induced subgraph of \(G^{*}\). By Lemma 2.16, \(G\) has no three vertices \(u,v,w\in V(G)\) such that \(N(u)\subseteq N(w)\) or \(N(w)\subseteq N(u)\), where \(v\) is simplicial vertex in \(G\) and \(u,w\in N(v)\). Then \(G\) contains 3-sun as an induced subgraph by Lemma 2.15. Since \(G=G^{*}\), \(G\) is an extended sun and contains 3-sun as an induced subgraph, a contradiction to our assumption. By definition of odd extended sun, it contains odd sun as an induced subgraph. Therefore \(G\) is 3-sun free even extended sun. Then by Lemma 2.14, \(G\) is \(S\)-perfect. This completes the proof of the theorem. ## 3 Future Directions The main theorem leads to the following conjecture. Conjecture. A graph is \(S\)-perfect if and only if \(G\) is \(\{C_{3k+1},C_{3k+2},k\geq 1\), odd super sun\(\}\)-free where super sun is defined as follows. Let \(H\) be a graph with a hamiltonian cycle \(C=v_{1}v_{2}\ldots v_{k}v_{1}\). Let \(A_{i},1\leq i\leq k\) be mutually disjoint set of vertices not in \(H\) such that \(|A_{i}|\geq 1\). Let \(H^{*}\) be a graph constructed from \(H\) such that \(H^{*}\) has the following properties: 1. the induced subgraph on \(A_{i}\) is a path \(P_{i}\) in \(H^{*}\) 2. the end vertices of \(P_{i}\), that is \(u_{i1}\) and \(u_{i|P_{i}|}\) are respectively adjacent to \(v_{i}\) and \(v_{i+1}\) and the number of vertices in \(P_{i}\) is \(3k+1,k\geq 1\) Then \(H^{*}\) is called \(k\)-super sun. \(H^{*}\) is even or odd super sun depending on \(k\) being even or odd. ## Acknowledgement The authors profusely thank S. A. Choudum, CHRIST (Deemed to be University) for helpful discussions and critically looking into the entire manuscript.
2304.03669
DATE: Domain Adaptive Product Seeker for E-commerce
Product Retrieval (PR) and Grounding (PG), aiming to seek image and object-level products respectively according to a textual query, have attracted great interest recently for better shopping experience. Owing to the lack of relevant datasets, we collect two large-scale benchmark datasets from Taobao Mall and Live domains with about 474k and 101k image-query pairs for PR, and manually annotate the object bounding boxes in each image for PG. As annotating boxes is expensive and time-consuming, we attempt to transfer knowledge from annotated domain to unannotated for PG to achieve un-supervised Domain Adaptation (PG-DA). We propose a {\bf D}omain {\bf A}daptive Produc{\bf t} S{\bf e}eker ({\bf DATE}) framework, regarding PR and PG as Product Seeking problem at different levels, to assist the query {\bf date} the product. Concretely, we first design a semantics-aggregated feature extractor for each modality to obtain concentrated and comprehensive features for following efficient retrieval and fine-grained grounding tasks. Then, we present two cooperative seekers to simultaneously search the image for PR and localize the product for PG. Besides, we devise a domain aligner for PG-DA to alleviate uni-modal marginal and multi-modal conditional distribution shift between source and target domains, and design a pseudo box generator to dynamically select reliable instances and generate bounding boxes for further knowledge transfer. Extensive experiments show that our DATE achieves satisfactory performance in fully-supervised PR, PG and un-supervised PG-DA. Our desensitized datasets will be publicly available here\footnote{\url{https://github.com/Taobao-live/Product-Seeking}}.
Haoyuan Li, Hao Jiang, Tao Jin, Mengyan Li, Yan Chen, Zhijie Lin, Yang Zhao, Zhou Zhao
2023-04-07T14:40:16Z
http://arxiv.org/abs/2304.03669v1
# DATE: Domain Adaptive Product Seeker for E-commerce ###### Abstract Product Retrieval (PR) and Grounding (PG), aiming to seek image and object-level products respectively according to a textual query, have attracted great interest recently for better shopping experience. Owing to the lack of relevant datasets, we collect two large-scale benchmark datasets from Taobao Mall and Live domains with about 474k and 101k image-query pairs for PR, and manually annotate the object bounding boxes in each image for PG. As annotating boxes is expensive and time-consuming, we attempt to transfer knowledge from annotated domain to unannotated for PG to achieve un-supervised Domain Adaptation (PG-DA). We propose a **D**omain **A**daptive **P**roduct **C**seker (**DATE**) framework, regarding PR and PG as Product Seeking problem at different levels, to assist the query **date** the product. Concretely, we first design a semantics-aggregated feature extractor for each modality to obtain concentrated and comprehensive features for following efficient retrieval and fine-grained grounding tasks. Then, we present two cooperative seekers to simultaneously search the image for PR and localize the product for PG. Besides, we devise a domain aligner for PG-DA to alleviate uni-modal marginal and multi-modal conditional distribution shift between source and target domains, and design a pseudo box generator to dynamically select reliable instances and generate bounding boxes for further knowledge transfer. Extensive experiments show that our DATE achieves satisfactory performance in fully-supervised PR, PG and un-supervised PG-DA. Our desensitized datasets will be publicly available here1. Footnote 1: [https://github.com/Taobao-live/Product-Seeking](https://github.com/Taobao-live/Product-Seeking) ## 1 Introduction Nowadays, with the rapid development of e-commerce and livestreaming, consumers can enjoy shopping on e-mall or various livestreaming platforms. Although the fact that diverse products can be presented and purchased on screen brings us convenience, we are immersed in this miscellaneous product world. Therefore, cross-modal Retrieval [38, 39, 50, 1, 14, 20, 38] for Product (PR), aiming to seek the corresponding image based on a text query, is significant for boosting holistic product search engine and promoting consumers' shopping experience. Besides, provided that the object-level product can be localized on the target product image or live room image according to a query, it will help consumers focus on the desired product and also benefit the downstream vision-to-vision retrieval. And we name this interesting task as Product Grounding (PG) like Visual Grounding [34, 37, 41, 28, 51]. Generally, PR and PG are seen as two separate tasks, but we consider mining the commonalities of PR and PG and regard them as Product Seeking at image Figure 1: Illustration of Product Retrieval (PR) and Grounding (PG) problems on two datasets collected from Taobao Mall and Live. (1) Given a text query (i.e. Chinese title or description of a product), PR is to seek the corresponding image-level product from gallery while PG is to seek the object-level product from an image. (2) We further explore **PG-DA**, which aims to transfer knowledge from the annotated source domain to the unannotated target domain under the influence of multi-modal domain gap to achieve un-supervised PG. level and object-level respectively. And we design a unified architecture to simultaneously solve PR and PG, which is more time-saving and memory-economical than separate methods. To research the PR and PG with great practical application value, we collect two large-scale benchmark Product Seeking datasets TMPS and TLPS from Taobao Mall and Taobao Live domains with about 474k image-title pairs and 101k frame-description pairs respectively, and the locations of object-level products in images are manually annotated. As annotating bounding box of product is time-consuming and expensive, we explore how to transfer knowledge from an annotated domain to the unannotated one, and achieve un-supervised PG in domain adaptation setting (PG-DA). Thus, we propose the **D**omain **A**daptive Product **S**eeker (**DATE**) to solve the following aspects of the challenging PR, PG and PG-DA problems. Firstly, due to the complexity of the mall and live scenarios, discriminative representations of the image and query are prerequisite to accurately localize the object. Considering conventional CNNs are hard to achieve long-distance relation reasoning and full-scale understanding, we utilize and improve the Swin-TF [35] to extract hierarchical and comprehensive features. As large-scale image seeking is demanding for PR, it is vital to ensure seeking inference is of trivial cost. Thus, we inject [REP] token into Swin-TF to absorb the weighted global semantics, and condense them into a single vector, which will be discriminative and concentrated for following efficient image seeking. And we perform the same semantics-aggregated technique for query feature extraction. Secondly, the capacity of both macroscopic image seeking and microcosmic fine-grained object seeking is necessary for PR and PG. Therefore, we present two cooperative seekers, where image seeker calculates the cosine similarity between visual and textual concentrated features for PR, and object seeker based on cross-modal interaction transformer directly predicts the coordinates of the product by comprehensive features for PG. We validate the reasonableness of such cooperative strategy through experiments. Thirdly, due to the domain gap between two datasets as Figure 1 shown, applying the model straightway to test on target domain will cause performance degeneration severely for PG-DA. To the best of our knowledge, this is the first work to consider un-supervised Visual Grounding in domain adaptation setting, and most uni-modal DA [36, 8, 32] and multi-modal DA [5, 7] methods are not directly applicable in our complicated object seeking. Therefore, we devise a domain aligner based on Maximum Mean Discrepancy to align the domain by minimizing uni-modal marginal distribution and multi-modal conditional distribution divergence between source and target domains, and design a dynamic pseudo bounding box generator to select similar instances in target domain and generate reliable boxes for knowledge transfer. To summarize, the contributions of this paper are as follows: * We collect and manually annotate two large-scale benchmark datasets for PR and PG with great practical application value. * We propose a unified framework with semantics-aggregated feature extractor and cooperative seekers to simultaneously solve fully-supervised PR and PG. * We explore un-supervised PG in domain adaptation setting and design the multi-modal domain aligner and dynamic box generator to transfer knowledge. * We conduct extensive experiments which shows that our methods achieve satisfactory performance in fully-supervised PR, PG and un-supervised PG-DA. ## 2 Related Work ### Visual Retrieval Given a text query, Visual Retrieval (VR) [38, 39, 1, 50, 1, 20] aims to find the corresponding image/video in a library. The common latent space based methods [50, 1] have been proven their effectiveness, which first extract the visual and textual features and map them into a common latent space to directly measure vision-language similarity. Representatively, [15] applies CNN and RNN to encode images and sentences respectively, and learn image-caption matching based on ranking loss. [50] proposes a semantic graph to generate multi-level visual embeddings and aggregate results from the hierarchical levels for the overall cross-modal similarity. Recently, transformer [42] exhibits better performance in Natural Language Processing [19, 11], Computer Vision [24, 25, 27, 4, 12] and multi-modal area [44, 46, 47, 23, 26, 48, 31, 22] than previous architecture, especially for global information understanding. Unsuprisingly, there is an increasing effort on repurposing such powerful models [16, 52, 29, 1] for VR. They apply transformer to learn joint multi-mmodal representations and model detailed cross-modal relation, which achieves satisfactory performance. ### Visual Grounding The paradigm of Visual Grounding (VG) [34, 37, 28, 41], which aims to localize the objects on an image, is similar as Visual Retrieval (VR), they are both to search the best matching part in visual signals according to the text query. Compared to VR, modeling fine-grained internal relations of the image is more significant for VG. In early work, two-stage methods [49, 21, 6] were widely used, which first generate candidate object proposals, then leverage the language descriptions to select the most relevant object, by leveraging off-the-shelf detectors or proposal generators to ensure recall. However, the computation-intensive proposal generation is time-consuming and also limits the performance of these methods, one-stage methods [30, 45] concentrate on localizing the referred object directly. Concretely, [45] fuses the linguistic feature into visual feature maps and predict bounding box directly in a sliding-window manner. Recently, [10] re-formulates VG as a coordinates regression problem and applies transformer to solve it. Generally, VR and VG are regarded as two separate problems. In this paper, we mine the commonalities of the two problems and design a unified architecture based on cooperative seeking to efficiently solve VR and VG effectively. ### Un-supervised Domain Adaptation Unsupervised domain adaptation (UDA) aims to transfer knowledge from the annotated source domain to the unlabelled target domain, and the challenge is how to overcome the influence of domain gap. In uni-modal tasks applications, several UDA techniques have been explored, including aligning the cross-domain feature distribution [17, 32], applying adversarial learning strategy [2, 36] or reconstruction method [8] to obtain domain-invariant features. And [9] uses optimal transport to estimate the discrepancy between the two distributions and exploits labels from the source domain. Different from the works described above, our task is cross-modal in nature, which is more challenging due to the heterogeneous gap between different modalities. In multi-modal area, few works have considered UDA, [5] studies the cross-dataset adaptation for visual question answering, [7] studies the video-text retrieval with pseudo-labelling algorithm. To the best of our knowledge, this is the first work to consider un-supervised Visual Grounding in domain adaptation setting. Figure 3: The multi-modal domain aligner. Figure 2: Overview of our DATE. (a) is the feature extractor, applying the semantics-aggregated transformers to obtain image and query features. (b) is the cooperative seekers, calculating the similarity to seek the image for PR and predicting coordinates to seek the object for PG. (c) includes a domain aligner to minimize distribution divergence between source and target domains and a pseudo box generator to select reliable instances and generate bounding boxes for knowledge transfer in PG-DA. ## 3 Proposed DATE ### Problem Formulation In this paper, we explore fully-supervised PR and PG, and un-supervised PG-DA in domain adaptation setting. In the next, we will formulate them. **PR and PG.** We collect a fully-annotated dataset \(\{V,Q,O\}\), given a textual query \(Q_{i}\) in query set \(Q\), PR and PG aim to seek the image-level product \(V_{Q_{i}}\) from whole image gallery \(V\), and object-level product \(O_{Q_{i}}\) from an matched image \(V_{Q_{i}}\). The \(O\) is the bounding box annotation. **PG-DA.** We have access to a fully-annotated source domain \(\mathcal{S}=\left\{V^{S},Q^{S},O^{S}\right\}\), and an unannotated target domain \(\mathcal{T}=\left\{V^{T},Q^{T}\right\}\) without box annotation \(O^{T}\). The goal of PG-DA is to transfer the knowledge from \(\mathcal{S}\) to \(\mathcal{T}\), and seek the object-level product on \(\mathcal{T}\). ### Semantics-Aggregated Feature Extractor As Figure 2(a) shown, for both settings, we share the feature extractor, which can aggregate the global semantics of each modality for image seeking as well as capture comprehensive and context-aware features for object seeking. **Image Stream.** Given a RGB image \(v\), we first split it into non-overlapping patches, then we refer to Swin-TF [35] for hierarchical feature extraction. Swin is mainly through the stack of patch merging module and Swin Transformer block to achieve 4-stage encoding, and the resolution is halved at each stage to acquire hierarchical features. The original Swin-TF utilizes average pooling to obtain image representation vector, ignoring the difference in importance of each token for semantics extraction. For promotion, we append a learnable [REP] token in front of visual token sequence during 4th stage, which is involved in the computation of self-attention and absorbs the weighted global image features. After the 4th stage, we can acquire the semantics-aggregated visual feature, and we name this advanced visual encoder as SA-Swin. Next we apply a linear layer to project them into dimension \(d\) to obtain \(\mathbf{V}_{SA}=[V_{rep},\mathbf{V}]\in R^{d\times(1+N_{v})}\), where \(N_{v}\) is the number of visual tokens, \(V_{rep}\) and \(\mathbf{V}\) are concentrated and comprehensive features respectively. **Query Stream.** Given a textual query \(q\), we first split it into character-level sequence and convert each character into a one-hot vector. After that, we tokenize each one-hot vector into a dense language vector in the embedding layer. Similar to image stream, we append a [REP] token in front of the tokenized query sequence to aggregate the global semantics. Note that the visual and textual [REP] tokens are independent for respective aggregation. Next we take all tokens into a textual transformer to produce the semantics-aggregated query features. Then we project them into the common space dimension \(d\) as image stream, to obtain \(\mathbf{Q}_{SA}=[Q_{rep},\mathbf{Q}]\in R^{d\times(1+N_{q})}\), where \(N_{q}\) is the number of textual tokens. ### Cooperative Seekers After acquiring common space image feature \(\mathbf{V}_{SA}=[V_{rep},\mathbf{V}]\) and query feature \(\mathbf{Q}_{SA}=[Q_{rep},\mathbf{Q}]\), as Figure 2(b) shown, we design two cooperative seekers to search the matched image and localize the object on this image. Next we describe the responsibility of our two seekers. **Image Seekers for PR.** The goal of the image seeker is to search the image corresponds to a query. we can directly compute the cosine distance between concentrated features \(V_{rep}\) and \(Q_{rep}\) to measure the simliarity between image and query, which is time-efficient to search the most similar item and ensures seeking inference is of trivial cost. Given a batch \(\mathcal{B}\) with \(B\) image-text pairs during training, we calculate the text-to-vision similarity as \[p^{q2v}(q)=\frac{\exp(l\cdot s(V_{rep},Q_{rep})\cdot m^{q2v})}{\sum_{v\in \mathcal{B}}\exp(l\cdot s(V_{rep},Q_{rep})\cdot m^{q2v})} \tag{1}\] \[m^{q2v}=\frac{\exp\left(\tau\cdot s(V_{rep},Q_{rep})\right)}{\sum_{q\in \mathcal{B}}\exp\left(\tau\cdot s(V_{rep},Q_{rep})\right)} \tag{2}\] where \(p^{q2v}(q)\) is text-to-vision probability distribution, \(l\) is a learnable logit scaling parameter, \(s(\cdot,\cdot)\) denotes cosine similarity, \(m\) denotes the prior matrix to refine the similarity distribution following [13], \(\tau\) represents a temperature hyperparameter. For product retrieval on our datasets, the query (title or description of the product) can be also retrieved by the image, and the vision-to-text similarity is \(p^{v2q}(v)\). Then, we treat matching pairs in the batch as positives, and all other pairwise combinations are treated as negatives, thus the image seeking loss can act as \[\begin{split}\mathcal{L}_{ImgS}=\frac{1}{2}\mathbb{E}_{v,q \sim\mathcal{B}}[H\left(p^{q2v}(q),y^{q2v}(q)\right)\\ +H(p^{v2q}(v),y^{v2q}(v))],\end{split} \tag{3}\] where \(H(\cdot,\cdot)\) is the cross-entropy formulation, \(y(\cdot)\) is the ground-truth binary label that positive and negative pairs are 1 and 0 respectively. **Object Seeker for PG.** Different from the image seeker, the ambition of object seeker is to localize the microscopic object-level product on an image, and more sufficient image-query interaction and fine-grained seeking are required. Thus, we leverage comprehensive image and query features \(\mathbf{V}\) and \(\mathbf{Q}\) for object seeking. We consider apply a transformer to fuse cross-modal tokens adequately, in order to learn how to localize the product during interaction, we first append a learnable [LOC] token with visual and textual features as \(\mathbf{T}_{O}=[T_{loc},\mathbf{V},\mathbf{Q}]\in R^{d\times(1+N_{v}+N_{q})}\). Then we apply a cross-modal object-seeking transformer to embed \(\mathbf{T}_{O}\) into a common space by performing intra- and inter-modality semantic interaction. Besides, we add learnable modal-type embedding and position embedding to the input of each transformer encoder layer. We leverage the output state of the [LOC] token \(f_{loc}\) from the object-seeking transformer and attach a regression module to it to predict 4-dim box coordinates. Further, to eliminate the influence of scale problem, we normalize the coordinates of the ground-truth box by the scale of the image and perform the object seeking loss as \[\mathcal{L}_{ObjS}=\|b-\hat{b}\|_{1}+G(b,\hat{b}), \tag{4}\] where \(G(\cdot,\cdot)\) is GIoU Loss [40], \(b=(x,y,w,h)\) and \(\hat{b}=(\hat{x},\hat{y},\hat{w},\hat{h})\) are our prediction the normalized ground-truth box respectively. So far, PR and PG can be solved simultaneously by the cooperation of two seekers, and our cooperative seeking loss is \[\mathcal{L}_{coop}=\lambda_{co}\mathcal{L}_{ImgS}+\mathcal{L}_{ObjS}, \tag{5}\] where \(\lambda_{co}\in\mathbb{R}\) are hyperparameters to weigh two losses. ### Dynamic Knowledge Transfer As Figure 2(a) shown, we design a knowledge transfer method for PG-DA, including a domain aligner to alleviate feature distribution shift and a dynamic pseudo box generator to promote transfer. **Domain Aligner.** As Sec 3.3, we extract visual feature \(\mathbf{V}^{S}_{SA}=[V^{S}_{rep},\mathbf{V}^{S}]\) and textual feature \(\mathbf{Q}^{S}_{SA}=[Q^{S}_{rep},\mathbf{Q}^{S}]\) from \(\mathcal{S}\) domain, and we acquire \(\mathbf{V}^{T}_{SA}=[V^{T}_{rep},\mathbf{V}^{T}]\) and \(\mathbf{Q}^{T}_{SA}=[Q^{T}_{rep},\mathbf{Q}^{T}]\) from \(\mathcal{T}\) domain in the same way. To alleviate the domain discrepancy, we design an alignment approach based on Maximum Mean Discrepancy (MMD), which compares two distributions by embedding each distribution in to Reproducing Kernel Hibert Space (RKHS) \(\mathcal{H}\) with a kernel function \(\phi\). And we utilize multiple Gaussian Radial Basis Function kernels as \(\phi\). Given two marginal distributions \(P_{X^{S}}\) and \(P_{X^{T}}\) from uni-modal source and target domain respectively, MMD can be expressed as \[\mathrm{MMD}_{uni}(P_{X^{S}},P_{X^{T}})=\left\|\mu_{P_{X^{S}}}-\mu_{P_{X^{T}}} \right\|_{\mathcal{H}}. \tag{6}\] In order to compute the inner product of vectors using the kernel function \(\phi\) in RKHS, we square MMD as \[\begin{split}\mathrm{MMD}^{2}_{uni}(P_{X^{S}},P_{X^{T}})=\left\| \mu_{P_{X^{S}}}-\mu_{P_{X^{T}}}\right\|^{2}_{\mathcal{H}}\\ =&\left\|\frac{1}{n_{S}^{2}}\sum_{i=1}^{n_{S}}\sum_ {i^{\prime}=1}^{n_{S}}\phi\left(x_{i}^{S},x_{i^{\prime}}^{S}\right)-\frac{2}{ n_{S}n_{T}}\sum_{i=1}^{n_{S}}\sum_{j=1}^{n_{T}}\phi\left(x_{i}^{S},x_{j}^{T} \right)\right.\\ &\left.+\frac{1}{n_{T}^{2}}\sum_{j=1}^{n_{T}}\sum_{j^{\prime}=1} ^{n_{T}}\phi\left(x_{j}^{T},x_{j^{\prime}}^{T}\right)\right\|_{\mathcal{H}}. \end{split} \tag{7}\] Then, we can minimize the distance between visual feature distributions from different domains through \(\mathrm{MMD}^{2}_{uni}\) as \[\begin{split}\mathcal{L}_{DisV}=\sum_{v\in\mathcal{B}}[& \ \mathrm{MMD}^{2}_{uni}(V^{S}_{rep},V^{T}_{rep})\\ &+\mathrm{MMD}^{2}_{uni}(\mu(\mathbf{V}^{S}),\mu(\mathbf{V}^{T}))],\end{split} \tag{8}\] where \(\mu(\cdot)\) is calculating the mean value of \(\mathbf{V}\) on token dimension. In the same way, we compute \(\mathcal{L}_{DisQ}\) for textual feature. After that, we can obtain domain-invariant features. In addition to the discrepancy of uni-modal marginal distribution, we compute the multi-modal conditional distribution divergence to adjust the output distribution for better adaptation, and the form of MMD computation becomes \[\mathrm{MMD}_{mul}[P(Y^{S}|X_{V}^{S},X_{Q}^{S}),P(Y^{T}|X_{V}^{T},X_{Q}^{T})]. \tag{9}\] Concretely, we take out the output of [LOC] token \(f^{S}_{loc}\) and \(f^{T}_{loc}\) in object seeking transformer from two domains and minimize \(\mathrm{MMD}^{2}_{mul}\) to reduce distance of output feature distribution from different domains as \[\mathcal{L}_{DisO}=\sum_{f^{S}_{loc},f^{T}_{loc}\in\mathcal{B}}\mathrm{MMD}^{2 }_{mul}(f^{S}_{loc},f^{T}_{loc}). \tag{10}\] The total domain alignment loss function is as follows \[\mathcal{L}_{DA}=\lambda_{Dv}\mathcal{L}_{DisV}+\lambda_{Dq}\mathcal{L}_{DisQ} +\mathcal{L}_{DisO}, \tag{11}\] where \(\lambda_{Dv},\lambda_{Dq}\in\mathbb{R}\) are hyperparameters to weigh losses. **Dynamic Pseudo Box Generator.** To further transfer the knowledge from \(\mathcal{S}\) to \(\mathcal{T}\), we attempt to generate pseudo bounding boxes by model on \(\mathcal{S}\) to train the model on \(\mathcal{T}\). However, it is unlikely that all data can be precisely boxed by source model, which may result in dissatisfactory performance. Therefore, the instances from \(\mathcal{T}\) which are close to \(\mathcal{S}\) are relatively reliable to be selected. For more precise selection, we compute the instance similarity between two datasets rather than batches. Thus, given the datasets \(\{V^{S},Q^{S}\}\) and \(\{V^{T},Q^{T}\}\), we calculate the cosine score of features encoded by semantics-aggregated extractor for every pair \(\{V^{S},V^{T}\}\) and \(\{Q^{S},Q^{T}\}\) in each modality to obtain similarity matrixs \(M_{V}\) and \(M_{Q}\), and we add them to \(M\in[-1,1]^{N_{S}\times N_{T}}\), where \(N_{S}\) and \(N_{T}\) are lengths of source and target datasets respectively. Next, we rank the target instances based on the counts exceed the similarity threshold \(\theta\) and select the top \(k\) percent high-score instances \(\{V^{T^{\prime}},Q^{T^{\prime}}\}\). Then, we generate the pseudo box \(\widetilde{b^{\prime}}\) by source object seeker and predict the coordinate \(b^{\prime}\) by target object seeker. Like Eq. 4, we perform the pseudo object seeking loss as \[\mathcal{L}_{PObjS}=\|b^{\prime}-\widetilde{b^{\prime}}\|_{1}+G(b^{\prime}, \widetilde{b^{\prime}}). \tag{12}\] We compute \(M\) each epoch after executing box generation, and the selected instances are dynamically updated. With the constant knowledge transfer, more instances can be labeled correctly, and hyper-parameter ratio \(k\) will be increased. The total knowledge transfer loss function is as follows \[\mathcal{L}_{KT}=\mathcal{L}_{DA}+\lambda_{PO}\mathcal{L}_{PObjS}, \tag{13}\] where \(\lambda_{PO}\in\mathbb{R}\) are hyperparameters to weigh losses. ### Training and Testing **Fully-supervised PR and PG.** We perform \(\mathcal{L}_{coop}\) for training, and we search the image of product by image-seeker for PR, and directly predict the coordinates of product on the image by object-seeker for PG during testing. **Un-supervised PG-DA.** We train the model in three stages. First, we warm up our model under fully-supervised setting on \(\mathcal{S}\) domain by \(\mathcal{L}_{stage_{1}}=\mathcal{L}_{ObjS}\). Next, we perform \(\mathcal{L}_{stage_{2}}=\lambda_{O}\mathcal{L}_{ObjS}+\mathcal{L}_{DA}\) on \(\mathcal{S}\) and \(\mathcal{T}\) to reduce domain gap. Then, we execute dynamic box generate and add \(\mathcal{L}_{PObjS}\) as \(\mathcal{L}_{stage_{3}}=\lambda_{O}\mathcal{L}_{ObjS}+\mathcal{L}_{KT}\) to further transfer the knowledge. We test the model on \(\mathcal{T}\) domain in the same approach as PG. ## 4 Experiments ### Our Product Seeking Datasets We collect two large-scale Product Seeking datasets from Taobao Mall (TMPS) and Taobao Live (TLPS) with about 474k image-title pairs and 101k frame-description pairs respectively. They are first two benchmark e-commerce datasets involving cross-modal grounding. For TMPS, each product item corresponds to a single title, three levels of categories and multiple displayed images with the manually annotated bounding box. For TLPS, we collect frames and descriptions from the livestreamer in live video streams, and annotate the location of described product. Note that the language in our datasets is mainly Chinese. The basic statistics about our datasets is in Appendix. We can see the categories of our datasets are diverse and the number of images are tens of times larger than existing datasets. After the collection, we split each dataset into training/validation/testing sets in a 8:1:1 ratio, and we make sure each product is isolated within one set. ### Evaluation Metrics **Product Grounding.** Following [6], we measure the performance by mIoU (mean Intersection over Union) and precision (predicted object is true positive if its IoU with ground-truth box is greater than 0.5). **Product Retrieval.** We use standard retrieval metrics (following [1, 52]) to evaluate text-to-vision (t2v) retrieval and vision-to-text (v2t) retrieval. We measure rank-based performance by R@K. ### Performance Comparison and Analysis To evaluate the effectiveness of DATE, we compare it with various related methods (More details of our methods are reported in Appendix). For each task, we apply untrained model to predict results as _Random_ method to perceive the difficulty of tasks. **Product Retrieval.** We re-implement these representative cross-modal retrieval methods to compare with our DATE. \begin{table} \begin{tabular}{l|c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{**TMPS**} \\ & R@1 & R@5 & R@10 & R@50 \\ \hline Random & 0.00 & 0.04 & 0.09 & 0.43 \\ VSEpp & 10.23 & 29.24 & 34.42 & 69.73 \\ ViLT & 14.39 & 38.42 & 50.74 & **83.23** \\ **DATE** & **16.32** & **40.54** & **51.23** & 82.58 \\ \hline \multicolumn{5}{c}{**TLPS**} \\ \hline Random & 0.03 & 0.14 & 0.23 & 1.59 \\ VSEpp & 3.41 & 15.33 & 29.12 & 43.24 \\ ViLT & 5.38 & 19.29 & 35.95 & 57.48 \\ **DATE** & **6.44** & **21.71** & **36.32** & **59.58** \\ \hline \hline \end{tabular} \end{table} Table 1: Performance of Product Retrieval (text-to-vision) on our TMPS and TLPS datasets. \begin{table} \begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c|}{**TMPS**} & \multicolumn{2}{c}{**TLPS**} \\ & mIoU & Pr@1 & mIoU & Pr@1 \\ \hline Random & 29.51 & 18.22 & 23.91 & 10.09 \\ MAttNet & 80.71 & 85.33 & 62.12 & 73.24 \\ FAOA & 76.24 & 83.72 & 61.31 & 69.13 \\ TransVG & 84.52 & 89.50 & 67.11 & 77.93 \\ **DATE** & **86.67** & **92.12** & **70.24** & **81.43** \\ \hline \hline \end{tabular} \end{table} Table 2: Performance of Product Grounding on our TMPS and TLPS datasets. \begin{table} \begin{tabular}{l|c|c c|c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Mode} & \multicolumn{2}{c|}{**TMPS**} & \multicolumn{2}{c}{**TLPS**} \\ & & mIoU & Pr@1 & mIoU & Pr@1 \\ \hline Random & - & 29.51 & 18.22 & 23.91 & 10.09 \\ ARN & W & 70.72 & 73.32 & 51.31 & 53.24 \\ MAF & W & 72.52 & 75.09 & 54.82 & 59.04 \\ FAOA & F & 76.24 & 83.72 & 61.31 & 69.13 \\ **DATE** & F & **86.67** & **92.12** & **70.24** & **81.43** \\ \hline \hline \multicolumn{5}{c}{**L\(\rightarrow\)M**} & \multicolumn{2}{c}{**M\(\rightarrow\)L**} \\ \hline Source-only & U & 75.20 & 83.62 & 59.64 & 67.71 \\ MMD-uni & U & 76.93 & 84.87 & 60.74 & 69.01 \\ Pseudo-label & U & 77.02 & 86.23 & 62.87 & 71.48 \\ **DATE** & U & **79.92** & **89.35** & **64.86** & **74.75** \\ \hline \hline \end{tabular} \end{table} Table 3: Performance of Product Grounding-DA on our datasets. (L\(\rightarrow\)M means we transfer the knowledge from TLPS to TMPS. And F, W, U stand for Fully-, Weakly-, Un-supervised respectively.) 1. _VSEpp_[15], a respectively encoding method based on CNN and RNN. 2. _ViLT_[29], a jointly encoding method based on transformer. **Product Grounding.** In addition to cross-modal retrieval baselines above, we re-implement these classic visual grounding baselines to compare with our DATE. 1. _MAtNet_[49], a two-stage model. 2. _FAOA_[45], a one-stage model. 3. _TransVG_[10], a regression-based model under transformer architecture. The PR and PG results are presented in Table 1 and Table 2 respectively. We can see that (1) the _Random_ results in both tasks are pretty low, showing our PR and PG are challenging. (2) The proposed DATE outperforms all the baselines by a large margin, indicating the effectiveness of our method for both PR and PG. (3) Although the performance of _TransVG_ and _ViLT_ is little behind ours, they are two separate models, and our method under unified architecture is more time-efficient and memory-saving. **Un-supervised Product Grounding-DA.** To validate the effectiveness of our DATE in DA setting, we further re-implement these typical weakly-supervised VG baselines for comparison. 1. _ARN_[33], a reconstruction-based model. 2. _MAF_[43], a contrast-based model. For DA setting, we serve these methods as baselines for comparison. 1. _Source-only_, which applies the model trained on source domain to straightway test on the target dataset. 2. _MMD-uni_, which only utilizes MMD loss to minimize the uni-modal marginal distribution distance for visual and textual feature. 3. _Pseudo-label_, which trains the model on target domain entirely based on the pseudo box labels generated by the model trained on source domain. The results are presented in Table 3, and we can distill the following observations: (1) our un-supervised DATE outperforms all weakly-supervised methods and fully-supervised methods _FAOA_ significantly, demonstrating the knowledge has been transfered to target domain effectively. (2) _Source-only_ method degenerates the performance severely due to the huge semantic gap between two domains, and _MMD-uni_ only achieves slight improvement as the cross-domain discrepanciy fails to reduced sufficiently. (3) _Pseudo-label_ enhances limited performance since a number of bad instances are incorrectly labeled which misleads the model, while our DATE can dynamically select instances and generate reliable bounding boxes for transfer and boosting performance. ### Ablation Study In this section, we study the effect of different visual feature extractors, text options and cooperative seeking strategies in Table 4. **Visual Feature Extractor.** We compare our _SA-Swin_ to _ResNet_, _DETR_, _Swin_ and _SA-DETR_ methods, where _ResNet_, _DETR_ and _Swin_ apply ResNet-50 [18], DETR-50 [4] Swinbase [35] to extract image features respectively, and leverage the average pooled feature for PR and feed the flattened last feature map as tokens into object-seeking transformer for PG. And _SA-DETR_ executes the same way as the former methods for PG, but injects the semantics-aggregated token from beginning for PR as _SA-Swin_ performs. From the results in Table 4, we can find following interesting points: \begin{table} \begin{tabular}{l|c c|c c c c|c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{6}{c|}{**TMPS**} & \multicolumn{6}{c}{**TLPS**} \\ & Grounding & \multicolumn{4}{c|}{T2V Retrieval} & \multicolumn{4}{c|}{Grounding} & \multicolumn{4}{c}{T2V Retrieval} \\ & mIoU & Pr@1 & R@1 & R@5 & R@10 & R@50 & mIoU & Pr@1 & R@1 & R@5 & R@10 & R@50 \\ \hline \multicolumn{10}{c}{Visual Feature Extractor} \\ \hline ResNet & 80.73 & 84.13 & 10.85 & 29.10 & 40.82 & 70.52 & 64.12 & 72.25 & 2.91 & 13.82 & 30.94 & 49.31 \\ DETR & 82.29 & 87.71 & 12.12 & 33.52 & 44.52 & 74.13 & 66.13 & 76.81 & 4.33 & 16.39 & 32.81 & 54.91 \\ Swin & 83.11 & 89.19 & 13.21 & 35.54 & 46.12 & 77.59 & 67.31 & 78.35 & 5.01 & 18.56 & 34.14 & 56.25 \\ SA-DETR & 84.21 & 90.03 & 14.81 & 36.84 & 47.21 & 78.23 & 68.62 & 79.11 & 5.43 & 19.39 & 35.81 & 57.28 \\ **SA-Swin** (Ours) & **86.67** & **92.12** & **16.32** & **40.54** & **51.23** & **82.58** & **70.24** & **81.43** & **6.44** & **21.71** & **36.32** & **59.58** \\ \hline \multicolumn{10}{c}{Cooperative Seekers} \\ \hline w/o Rep & 83.11 & 89.19 & 13.21 & 35.54 & 46.12 & 76.59 & 67.31 & 78.35 & 5.01 & 18.56 & 34.14 & 55.25 \\ w/o ObjS & 82.25 & 87.59 & 12.85 & 36.12 & 45.24 & 75.23 & 65.82 & 75.47 & 4.93 & 18.39 & 35.33 & 54.12 \\ w/o Rep\&ObjS & 80.45 & 85.31 & 11.78 & 31.17 & 43.23 & 72.23 & 63.21 & 71.91 & 4.13 & 16.53 & 31.82 & 51.10 \\ **Full** (Ours) & **86.67** & **92.12** & **16.32** & **40.54** & **51.23** & **82.58** & **70.24** & **81.43** & **6.44** & **21.71** & **36.32** & **59.58** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study of Product Retrieval and Grounding on TIPS and TLPS datasets. (1) _Swin_ surpasses _ResNet_ and _DETR_, illustrating better visual features are extracted by hierarchical transformer. (2) _SA-DETR_ performs better than _Swin_ which has more powerful feature extraction ability during cooperative training, demonstrating our designed semantics-aggregated encoder can extract concentrated and comprehensive features for following cooperative seeking for both PR and PG. **Cooperative Seeking Strategies.** We conduct ablative experiments as follows: **w/o Rep**: using the average pooling of two modal features for image seeking (PR) rather than [REP] token. **w/o ObjS**: removing object-seeking transformer, and applying an MLP to fuse visual and textual [REP] token for object seeking; **w/o Rep&ObjS**: using the average pooled feature for both image and object seeking. From Table 4, we observe that the performance decreases sharply after removing [REP] or ObjS. To analyse: (1) more discriminative representation of image and query can be extracted by weighted vector (i.e. [REP] token) than average pooling, confirming the effectiveness of our semantics-aggregated feature extractor. (2) As **w/o Rep** result shown, the performance of object seeking (PG) degenerates although [REP] is not involved in it, which demonstrates such disadvantageous image seeking (PR) approach drags down object seeking (PG) during multi-task learning. (3) Image and object levels seeking falls on the shoulder of [REP] tokens in **w/o ObjS** model, which is detrimental for both levels seeking. The above two points prove the reasonableness of our designed cooperative seeking strategy. ### Feature Visualization To help prove the validity of our DATE, we visualise visual and textual features by T-SNE for TMPS\(\rightarrow\)TLPS in Figure 4, earned by _Source-only_ baseline and our DATE method. We can observe the shift between source and target domains is apparent, meanwhile there are overlaps in two domains, which is reasonable since a few scenes in Taobao Mall and Live are similar. With our proposed method, the discrepancy in feature distribution of two domains becomes narrow significantly, suggesting our method has effectively aligned two domains. ### Qualitative Analysis To qualitatively investigate the effectiveness of our DETA, we compare _ViLT_ and our DATE for PR as Figure 5 shown. We can find that the image-level product can be sought precisely by our DATE while _ViLT_ fails to find the correct image until Rank3. Further, the whole top4 results retrieved by DATE are more relevant to the text query than the results from _ViLT_, which illustrates the multi-modal semantic understanding and interaction are sufficient through our DATE. ## 5 Conclusion In this paper, we study the fully-supervised product retrieval (PR) and grounding (PG) and un-supervised PG-DA in domain adaptation setting. For research, we collect and manually annotate two large-scale benchmark datasets TMPS and TLPS for both PR and PG. And we propose a DATE framework with the semantics-aggregated feature extractor, efficient cooperative seekers, multi-modal domain aligner and a pseudo bounding box generator to solve the problems effectively on our datasets. We will release the desensitized datasets to promote investigations on product retrieval, product grounding and multi-modal domain adaptation. In the future, we will consider more specific techniques like Optical Character Recognition (OCR) and Human Object Interaction (HOI) to further improve the performance of PR and PG. ## Acknowledgments This work was supported by National Natural Science Foundation of China under Grant No.62222211, No.61836002, No.62072397, and a research fund supported by Alibaba. Figure 4: T-SNE visualization of visual and textual features. Figure 5: Qualitative results of Product Retrieval sampled from TMPS dataset (green: correct, red: incorrect).
2305.13753
A Graph-Based Collision Resolution Scheme for Asynchronous Unsourced Random Access
This paper investigates the multiple-input-multiple-output (MIMO) massive unsourced random access in an asynchronous orthogonal frequency division multiplexing (OFDM) system, with both timing and frequency offsets (TFO) and non-negligible user collisions. The proposed coding framework splits the data into two parts encoded by sparse regression code (SPARC) and low-density parity check (LDPC) code. Multistage orthogonal pilots are transmitted in the first part to reduce collision density. Unlike existing schemes requiring a quantization codebook with a large size for estimating TFO, we establish a \textit{graph-based channel reconstruction and collision resolution (GB-CR$^2$)} algorithm to iteratively reconstruct channels, resolve collisions, and compensate for TFO rotations on the formulated graph jointly among multiple stages. We further propose to leverage the geometric characteristics of signal constellations to correct TFO estimations. Exhaustive simulations demonstrate remarkable performance superiority in channel estimation and data recovery with substantial complexity reduction compared to state-of-the-art schemes.
Tianya Li, Yongpeng Wu, Wenjun Zhang, Xiang-Gen Xia, Chengshan Xiao
2023-05-23T07:09:57Z
http://arxiv.org/abs/2305.13753v4
# A Graph-Based Collision Resolution Scheme for Asynchronous Unsourced Random Access ###### Abstract This paper investigates the multiple-input-multiple-output (MIMO) massive unsourced random access in an asynchronous orthogonal frequency division multiplexing (OFDM) system, with both timing and frequency offsets (TFO) and non-negligible user collisions. The proposed coding framework splits the data into two parts encoded by sparse regression code (SPARC) and low-density parity check (LDPC) code. Multistage orthogonal pilots are transmitted in the first part to reduce collision density. Unlike existing schemes requiring a quantization codebook with a large size for estimating TFO, we establish a _graph-based channel reconstruction and collision resolution (GB-CR\({}^{\prime}\))_ algorithm to iteratively reconstruct channels, resolve collisions, and compensate for TFO rotations on the formulated graph jointly among multiple stages. We further propose to leverage the geometric characteristics of signal constellations to correct TFO estimations. Exhaustive simulations demonstrate remarkable performance superiority in channel estimation and data recovery with substantial complexity reduction compared to state-of-the-art schemes. Collision resolution, MIMO, OFDM, timing and frequency offsets, unsourced random access. ## I Introduction Aming to provide massive connectivity for the burgeoning communication services with short packets and sporadic traffic, massive machine-type communications (mMTC) has been a pivotal application scenario in the fifth-generation (5G) wireless communication [1]. As a prospective protocol, grant-free random access (GF-RA) has recently drawn increasing attention for energy saving and latency reduction. However, due to massive connectivity, assigning different encoders, or pilots, is prohibitive for the tremendous number of users. To mitigate this issue, unsourced random access (URA), initially introduced by Polyanskiy in [2], has emerged as a promising and pragmatic communication paradigm for handling massive uncoordinated users through a shared common codebook. In this way, the receiver is tasked to recover the set of messages up to permutations regardless of user identity. Since the publication of [2], numerous sophisticated techniques have been developed to approach the bound [3, 4, 5, 6, 7, 8, 9, 10]. However, these approaches generally only consider the synchronous scenario. In reality, due to varying transmission distances and a lack of tight frequency synchronization, the received signals may be affected by timing and frequency offsets (TO and FO, abbr. TFO), leading to phase rotations in frequency and time domains, respectively. To the best of our knowledge, most current research focuses on asynchronous scenarios with only TO [11, 12, 13]. Nevertheless, as an inevitable scenario, a TFO-coupled system puts higher requirements for the receiver design, rendering existing works unsuitable. Another line of work focuses on the GF-RA in the presence of both TO and FO [14, 15]. Non-orthogonal pilots are utilized in [14] in the MIMO-OFDM system, and a structured generalized approximate message passing (S-GAMP) is leveraged to estimate the TFO and channel jointly. While [15] unveils that the orthogonal pilots separate the multi-user interference, thus facilitating the TFO compensation. Nonetheless, the pilot collision has been a bottleneck in URA due to the limited orthogonal space. To cope with this issue, most works resort to the non-orthogonal codebook [3, 4, 5, 6, 7, 8, 9], which results in unmanageable dimensions, causing unsatisfactory complexity and residual user collisions. An energy detection-based scheme [7] resolves the collision by data retransmission, compromising the efficiency due to interaction. [16] leverages multistage orthogonal pilots to reduce the collision density, while the channels are estimated individually at each stage, leading to suboptimality. In general, existing works primarily focus on the research of URA, either in synchronous scenarios or asynchronous with only TO and collision-free. In contrast, this paper aims to achieve reliable communication in both time and frequency asynchronous URA with user collisions. The main contributions are summarized as follows. This paper is the first attempt of the study of collision resolution for MIMO massive URA in the asynchronous OFDM system with both TO and FO. Specifically, unlike existing codebook quantization-based approaches [14, 12], we directly conduct channel estimation (CE) by a minimum-mean squared error (MMSE) estimator to obtain the coarse channel coupled with the phase rotation caused by TFO. To reduce collision density, we utilize multistage orthogonal pilots and formulate an optimization problem to minimize the MSE of TFO-coupled channels across these stages. Motivated by the greedy algorithm and tree code proposed in [5], we propose a novel graph-based scheme to iteratively reconstruct channels and resolve collisions jointly among multiple stages with manageable complexity, which is further generalized to the asynchronous case for TFO estimation and rotation compensation. Moreover, to reduce possible quantization errors, we establish a constellation-aided TFO correction algorithm by leveraging the geometric characteristics of the signal con stellation. Exhaustive simulations demonstrate the remarkable performance advantages in CE and data decoding with substantial complexity reduction compared to the counterparts. _Notation:_ Throughout this paper, scalars, column vectors, and matrices are denoted by lowercase, bold lowercase, and bold uppercase, respectively. The transpose, conjugate, and conjugate transpose operations are signified by \(\left(\cdot\right)^{T},\left(\cdot\right)^{*},\left(\cdot\right)^{H}\), respectively. \(\left\|\mathbf{x}\right\|_{p}\) and \(\left\|\mathbf{A}\right\|_{F}\) are the standard \(l_{p}\) and Frobenius norms, respectively. \(\mathcal{R}(\cdot)\) denotes the real part of a complex value. \(diag\left\{\mathbf{d}\right\}\) denotes the diagonal matrix with the vector \(\mathbf{d}\) being the diagonal items. \(\odot\) denotes the element-wise multiplication of two vectors or matrices. ## II Problem Formulation Consider the uplink of a single-cell MIMO cellular network comprised of \(K_{tot}\) single-antenna users, served by a base station (BS) equipped with \(M\) antennas. Due to the sporadic traffic, a small set of \(K_{a}\) users denoted by \(\mathcal{K}_{a}\), with \(K_{a}\ll K_{tot}\), are active to access the BS in a given transmission slot, each transmitting \(B\) bits of information through \(L\) time and frequency resources. We assume that \(S\) out of total \(N_{c}\), \(S\ll N_{c}\), subcarriers are assigned for each user to transmit either pilot or data in each OFDM symbol. 1 The overall transmission occupies \(T\) OFDM symbols with \(T_{p}\) and \(T_{d}\) of them for pilots and data, respectively. Footnote 1: In this case, a narrowband OFDM system is considered and the frequency-domain channel is modeled to be flat among subcarriers [14]. A realization of the overall encoding scheme and receiver design is illustrated in Fig. 1. Like many previous contributions, the user's information is first portioned into two parts, with \(\mathbf{v}_{p}^{p},\mathbf{v}_{2}^{p}\in\left\{0,1\right\}^{B_{p}\times 1}\) being the first part and \(\mathbf{v}_{c}\in\left\{0,1\right\}^{B_{c}\times 1}\) being the second part, which are referred to as the preamble and LDPC parts hereinafter, respectively. Specifically, multiple codewords are transmitted to reduce the user collision density in the preamble parts. To begin with, the tree encoder is leveraged as an auxiliary method to facilitate data splicing by generating two pieces of parity check data appended to the preamble, namely \(\left\{\mathbf{r}_{1},\mathbf{r}_{2}\right\}\), where the detailed coding process can be found in [5]. Consequently, four pieces of data (each with length \(B_{p}\)) are then sent to the sparse regression code (SPARC) encoder to pick the corresponding codewords \(\left\{\mathbf{a}_{i_{1}},\cdots,\mathbf{a}_{i_{4}}\right\}\) from the codebook \(\mathbf{A}=\left[\mathbf{a}_{1},\mathbf{a}_{2},\cdots,\mathbf{a}_{N}\right] \in\mathbb{C}^{L_{p}\times N}\) with \(N=2^{B_{p}}\) and \(L_{p}=S\). \(\left\{i_{1},\cdots,i_{4}\right\}\) are not only the decimal representation (plus one) of the four pieces of data \(\left\{\mathbf{v}_{p}^{p},\mathbf{v}_{2}^{p},\mathbf{r}_{1},\mathbf{r}_{2}\right\}\), respectively, and also determines the interleaving pattern of the data in LDPC phase. While in the LDPC part, the message is coded by LDPC codes, then zero-padded, interleaved, and modulated in a sequential manner, which is so-called the interleave-division multiple access (IDMA) [17]. For the detailed encoding illustration, we refer the readers to [7]. Due to the transmission delay and variation of oscillators, the received signal of user \(k\) will suffer from the impact of TO and FO, denoted by \(\tau_{k}\) and \(\epsilon_{k}\), respectively. We assume that \(\tau_{k}\) represents the residual TO after a closed-loop synchronization mechanism [13] such that \(\tau_{k}\) is no more than the cyclic prefix (CP) length \(N_{CP}\). After removing the CP, the \(t\)-th OFDM symbol in the frequency domain can be expressed as \[\begin{split}\mathbf{Y}^{t}&=\sum_{k=1}^{K_{a}} \mathbf{F_{s}}\mathbf{D}_{\epsilon_{k}}^{t}\left(\mathbf{I}_{N_{c}}\right)_{ \tau_{k}}\mathbf{F}_{\mathbf{s}}^{H}\mathbf{Ac}_{k}^{t}\mathbf{h}_{k}^{T}+ \mathbf{Z}\\ &=\sum_{k=1}^{K_{a}}\mathbf{P}_{\epsilon_{k}}^{t}\mathbf{P}_{\tau_ {k}}\mathbf{Ac}_{k}^{t}\mathbf{h}_{k}^{T}+\mathbf{Z},t\in\left[1:T_{p}\right] \end{split} \tag{1}\] where \(\mathbf{F_{s}}\in\mathbb{C}^{S\times N_{c}}\) is the partial matrix composed of \(S\) row vectors extracted from the \(N_{c}\)-point discrete Fourier transform (DFT) matrix \(\mathbf{F}\) indexed by the vectors \(\mathbf{s}\), where \(\mathbf{s}=\left[n_{1},n_{2},\cdots,n_{S}\right]^{T}\in\mathbb{Z}^{S\times 1}\) is the vector of the subcarrier indices. \(\mathbf{c}_{k}^{t}\in\left\{0,1\right\}^{N\times 1}\) is the binary selection vector of user \(k\) at the \(t\)-th slot, which is all zero but a single one at \(i_{k}\), the decimal representation (plus one) of the \(B_{p}\)-bit message produced by user \(k\). In a narrowband system, the channel vector \(\mathbf{h}_{k}\in\mathbb{C}^{M\times 1}\) is block fading and assumed to be independent and identically distributed (i.i.d.) with zero mean and variance \(\sigma_{h}^{2}\). \(\mathbf{Z}\in\mathbf{C}^{L_{p}\times M}\) is the addictive white Gaussian noise (AWGN) with each component distributed as Fig. 1: A realization of the overall encoding scheme and the proposed receiver design. \(\mathcal{CN}\left(0,\sigma_{n}^{2}\right)\). \(\mathbf{D}_{\epsilon_{k}}^{t}=\phi^{t}diag\left(1,\omega,\cdots,\omega^{N_{c}-1} \right)\in\mathbb{C}^{N_{c}\times N_{c}}\) denotes the phase shift caused by \(\epsilon_{k}\), with \(\omega=e^{j2\pi\epsilon_{k}/N_{c}}\). While \(\phi^{t}=\omega^{N_{cp}+\left(t-1\right)\left(N_{cp}+N_{c}\right)}\) represents the phase shift accumulated to the \(t\)-th symbol. \(\left(\mathbf{I}_{N_{c}}\right)_{\tau_{k}}\) is a left-cyclically shifting matrix of \(\mathbf{I}_{N_{c}}\) with \(\tau_{k}\) units. The phase shift matrices \(\mathbf{P}_{\epsilon_{k}}^{t}\) and \(\mathbf{P}_{\tau_{k}}\) are respectively given by \[\mathbf{P}_{\epsilon_{k}}^{t} =\mathbf{F_{s}}\mathbf{D}_{\epsilon_{k}}^{t}\mathbf{F}_{s}^{H}= \phi^{t}\left[\mathbf{P}\right]_{\mathbf{s}\times\mathbf{s}} \tag{2}\] \[\mathbf{P}_{\tau_{k}} =\mathbf{F_{s}}\left(\mathbf{I}_{N_{c}}\right)_{\tau_{k}}\mathbf{ F}_{s}^{H}=diag\left\{\mathbf{\psi}\right\} \tag{3}\] where \(\mathbf{\psi}=\left[\psi^{1-n_{1}}\!\cdot\!\cdot\!\!,\psi^{1-n_{S}}\right]^{T}\! \in\!\mathbb{C}^{S\times 1},\psi=e^{j2\pi\tau_{k}/N_{c}}\) and \[\mathbf{P}=\begin{bmatrix}P(\epsilon_{k})&P(1+\epsilon_{k})&\cdots&P(N_{c}\!- \!1\!+\!\epsilon_{k})\\ P(N_{c}\!-\!1\!+\!\epsilon_{k})&P(\epsilon_{k})&\cdots&P(N_{c}\!-\!2\!+\!\epsilon _{k})\\ \vdots&\vdots&\ddots&\vdots\\ P(1\!+\!\epsilon_{k})&P(2\!+\!\epsilon_{k})&\cdots&P(\epsilon_{k})\end{bmatrix}.\] \(\left[\mathbf{P}\right]_{\mathbf{s}\times\mathbf{s}}\in\mathbb{C}^{S\times S}\) denotes the sub-matrix extracted from \(\mathbf{P}\in\mathbb{C}^{N_{c}\times N_{c}}\) with the rows and columns indexed by the vector \(\mathbf{s}\), and \(P(\epsilon_{k})=\frac{\sin\pi\epsilon_{k}}{N_{c}\sin(\pi\epsilon_{k}/N_{c})}e ^{j\pi\epsilon_{k}(N_{c}-1)/N_{c}}\). Note that \(\epsilon_{k}\) can be controlled within a small range by detecting the downlink synchronization signals in practical communications, such as LTE or 5G NR [14]. Consequently, for the sake of model tractability and algorithm illustration, \(\mathbf{P}_{k}^{t}\triangleq\mathbf{P}_{\epsilon_{k}}^{t}\mathbf{P}_{n_{k}} ^{t}\in\mathbb{C}^{S\times S}\) can be simplified to a diagonal matrix with \(\mathbf{p}_{k}^{t}=\phi^{t}\omega^{(N_{c}-1)/2}\left[\psi^{1-n_{1}},\cdots, \psi^{1-n_{S}}\right]^{T}\in\mathbb{C}^{S\times 1}\) denoting the diagonal elements. As such, Eq. (1) can be rewritten as \[\mathbf{Y}^{t}=\sum_{k=1}^{K_{a}}\left(\mathbf{A}\mathbf{c}_{k}^{t}\right) \odot\mathbf{p}_{k}^{t}\mathbf{h}_{k}^{T}+\mathbf{Z}=\left(\mathbf{A}\mathbf{ C}^{t}\right)\odot\mathbf{P}^{t}\mathbf{H}+\mathbf{Z} \tag{4}\] where \(\mathbf{C}^{t}=\left[\mathbf{c}_{1}^{t},\cdots,\mathbf{c}_{K_{a}}^{t}\right] \in\{0,1\}^{N\times K_{a}}\) denotes the binary selection matrix. \(\mathbf{P}^{t}=\left[\mathbf{p}_{1}^{t},\cdots,\mathbf{p}_{K_{a}}^{t}\right] \in\mathbb{C}^{S\times K_{a}}\) is the phase rotations. \(\mathbf{H}=[\mathbf{h}_{1},\cdots,\mathbf{h}_{K_{a}}]^{T}\in\mathbb{C}^{K_{a} \times M}\) is the channels. ## III Proposed Scheme ### _Activity Detection and CSI Acquisition_ To sufficiently reduce the multi-user interference and separate TFO phase rotations, we resort to the extremely sparse orthogonal pilot (ESOP), or unit matrix, as the codebook, i.e., \(\mathbf{A}=\left[\mathbf{e}_{1},\mathbf{e}_{2},\cdots,\mathbf{e}_{N}\right] \in\{0,1\}^{L_{p}\times N}\) with \(N=L_{p}\), where \(\mathbf{e}_{n}\in\{0,1\}^{L_{p}\times 1}\) is composed a single one in the \(n\)-th item and zeros elsewhere. As such, Eq. (4) can be simplified to \[\mathbf{Y}^{t}=\mathbf{C}^{t}\odot\mathbf{P}^{t}\mathbf{H}+\mathbf{Z}. \tag{5}\] Let \(\mathbf{G}^{t}\triangleq\mathbf{C}^{t}\odot\mathbf{P}^{t}\mathbf{H}=\left[ \mathbf{g}_{1}^{t},\cdots,\mathbf{g}_{N}^{t}\right]^{T}\in\mathbb{C}^{N\times M}\), which is row-sparse and can be obtained by the MMSE estimator as \[\widehat{\mathbf{G}}^{t}=\mathbf{A}^{H}\left(\mathbf{A}\mathbf{A}^{H}+\sigma_ {n}^{2}\mathbf{I}_{S}\right)^{-1}\mathbf{Y}^{t}\triangleq\mathbf{Y}^{t}/\sigma_ {n}^{2}. \tag{6}\] \(\widehat{\mathbf{G}}^{t}=\left[\hat{\mathbf{g}}_{1}^{t},\cdots,\hat{\mathbf{g }}_{N}^{t}\right]^{T}\in\mathbf{C}^{N\times M}\) with \(\hat{\mathbf{g}}_{n}^{t}\) given by \[\hat{\mathbf{g}}_{n}^{t}=\sum\nolimits_{k\in\mathcal{K}_{n}}\mathbf{p}_{k}^{t}( n)\mathbf{h}_{k}+\mathbf{z} \tag{7}\] where \(\mathcal{K}_{n}\) is the set of users choosing the codeword \(\mathbf{e}_{n}\), which may be a _zeroton, singleton_, or _multiton_, referring to as no user, single user, or multiple users (collision case) choosing \(\mathbf{e}_{n}\). Intuitively, \(\mathbf{e}_{n}\) and \(\mathbf{g}_{n}^{t}\) can be obtained by the hard decision on the row energy of \(\widehat{\mathbf{G}}_{t}\). However, it is non-applicable to the collision case with the superimposed of channels, which can be well addressed according to the following algorithm. ### _Graph-Based Channel Reconstruction and Collision Resolution_ Firstly, we review the two key differences in the proposed scheme distinguished from [15, 16] as follows: * Based on the CE results obtained from multistage pilots, the collided channels are required to be separated and reconstructed. * Due to the IDMA framework in the LDPC part, the data embedded in multiple pilot segments must be spliced together to recover the interleaving pattern. Based on the above two characteristics, we summarize the algorithm design into two specific tasks: 1) Cross-segment splicing of user data and 2) channel reconstruction associated with collision resolution and TFO compensation. We propose the _Graph-Based Channel Reconstruction and Collision Resolution (GB-CR\({}^{2}\))_ algorithm for these two missions. We begin the illustration with the definition of associated items. * _Node:_ The user's data at each stage. Nodes are isolated within stages but connected across stages through edges. * _Edge:_ An edge represents a _possible_ connection between nodes, which are initialized in the tree decoding process. * _Weight:_ An edge's weight reflects its credibility, assigned by the GB-CR\({}^{2}\) algorithm according to the channel MSE. We further introduce variables \(\left\{N_{i},\mathbf{v}_{i},\mathbf{D}^{i}\right\}_{i=1}^{T_{p}}\) and the set \(\mathcal{P}\) to characterize the graph. \(N_{i}\) denotes the number of nodes in the \(i\)-th stage, and \(\mathbf{v}_{i}\in\mathbb{Z}^{N_{i}\times 1}\) is the vector of users' messages (nodes) sorted in an ascending order. \(\mathbf{D}^{i}\in\{0,1\}^{K_{a}\times N_{i}}\) is a binary _selection matrix_ with its element \(\mathbf{D}^{i}(m,n)=1\) if user \(m\) selects the \(n\)-th message or zero otherwise, and we have \[\sum\nolimits_{n=1}^{N_{i}}\mathbf{D}^{i}\left(:,n\right)=\mathbf{1}^{K_{a} \times 1} \tag{8}\] \[\sum\nolimits_{k=1}^{K_{a}}\mathbf{D}^{i}\left(k,:\right)\geq\mathbf{1 }^{1\times N_{i}}. \tag{9}\] \(\mathcal{P}\) is the set of all existing paths on the graph generated by the tree decoder, i.e., a path \(\left\{\mathbf{v}_{1}(n_{1}),\mathbf{v}_{2}(n_{2}),\mathbf{v}_{3}(n_{3}), \mathbf{v}_{4}(n_{4})\right\}\in\mathcal{P}\) if these nodes are connected together. Specifically, we provide an example of a graph in Fig. 2, where \(\mathbf{v}_{1}=[8,36,53,59]\) and \(\mathbf{D}^{1}\) is given by \[\mathbf{D}^{1}=\left[\begin{array}{cccccccc}1&1&0&0&0&0&0&0&0\\ 0&0&1&1&1&0&0&0\\ 0&0&0&0&0&1&1&0\\ 0&0&0&0&0&0&0&1\end{array}\right]^{T}\in\left\{0,1\right\}^{K_{a}\times N_{1}}. \tag{10}\] In Fig. 2, \(\{8,80,16,33\}\in\mathcal{P}\) since these nodes are connected and there is a path among them, and \(\{8,80,16,1 the phase rotation coupled with \(\mathbf{g}_{n}^{t}\) needs to be eliminated when calculating the path weight, since it varies on different subcarriers and symbols. For tractability, we assume that the TO is integer-sampled within \(\mathbf{d}=\left[1,2,\cdots,D\right]^{T}\) and FO is within \(\mathbf{q}=\left[\epsilon^{(1)},\epsilon^{(2)},\cdots,\epsilon^{(Q)}\right]^{T}\), which is uniformly sampled from the range \(\left[-\epsilon_{max},\epsilon_{max}\right]\). Consequently, we are ready to introduce the overall optimization object as follows: \[\widehat{\mathbf{\Theta}}=\arg\min_{\mathbf{\Theta}}\sum_{1\leq i \leq j\leq T_{p}}\left\|\left(\widehat{\mathbf{G}}^{i}\left(\mathbf{v}_{i}(n_{i }),:\right)-\mathbf{D}^{i}\left(:,n_{i}\right)^{T}\mathbf{G}^{i}\right)\right.\] \[\left.-\left(\widehat{\mathbf{G}}^{j}\left(\mathbf{v}_{j}(n_{j}),: \right)-\mathbf{D}^{j}\left(:,n_{j}\right)^{T}\mathbf{G}^{j}\right)\right\|_{2} ^{2} \tag{11}\] where \(\mathbf{\Theta}=\left\{K_{a},\mathbf{H},\mathbf{D}^{i},\tau_{k},\epsilon_{k} \right\}\), \(\mathbf{H}=\left[\mathbf{h}_{1},\cdots,\mathbf{h}_{K_{a}}\right]^{T},\mathbf{ G}^{i}=\left[\mathbf{g}_{1}^{i},\cdots,\mathbf{g}_{K_{a}}^{i}\right]^{T}\in \mathbb{C}^{K_{a}\times M}\) with \(\mathbf{g}_{k}^{i}=\mathbf{p}_{k}^{i}(\mathbf{v}_{i}(n_{i}))\mathbf{h}_{k}\). Note that the optimization of \(\mathbf{\Theta}\) in Eq. (11) encompass a successive interference cancellation (SIC) process with the introduction of \(\mathbf{D}^{i}\), i.e., the channels of collided nodes have been separated or eliminated to the greatest extent to obtain the smallest MSE on each edge. However, Eq. (11) is a mixed integer nonlinear programming (MINLP) problem, which is NP-Hard. To solve this problem effectively, the proposed GB-CR\({}^{2}\) algorithm iteratively calculates the MSE of paths in the current graph. The MSE the path \(p\in\mathcal{P}\) is \[MSE\left(p,\left\{\widehat{\mathbf{G}}^{i},\mathbf{v}_{i}\right\}_{i=1}^{T_{p }},\mathbf{d}(m),\mathbf{q}(n)\right)=\] \[\sum_{1\leq i\leq j\leq T_{p}}\left\|\frac{\widehat{\mathbf{G}}^{i}\left( \mathbf{v}_{i}(n_{i}),:\right)}{\mathbf{p}_{m,n}^{i}(\mathbf{v}_{i}(n_{i}))}- \frac{\widehat{\mathbf{G}}^{j}\left(\mathbf{v}_{j}(n_{j}),:\right)}{\mathbf{p }_{m,n}^{j}(\mathbf{v}_{j}(n_{j}))}\right\|_{2}^{2} \tag{12}\] with \(\left\{\mathbf{v}_{1}(n_{1}),\cdots,\mathbf{v}_{T_{p}}(n_{T_{p}})\right\}=p\), and \[\mathbf{p}_{m,n}^{i}(k)=\psi^{1-\mathbf{s}(k)}\omega^{(N_{cp}+N_{c})i-\frac{ N_{c}+1}{2}} \tag{13}\] with \(\psi=e^{i2\pi\mathbf{d}(m)/N_{c}}\) and \(\omega=e^{j2\pi\mathbf{q}(n)/N_{c}}\). Generally, we can restore a path by the minimum weight path searching, i.e., \[\left[\hat{p},\hat{\tau}_{k},\hat{\epsilon}_{k}\right]=\arg\min_{p\in \mathcal{P}}MSE\left(p,\left\{\widehat{\mathbf{G}}^{i},\mathbf{v}_{i}\right\} _{i=1}^{T_{p}},\mathbf{d},\mathbf{q}\right) \tag{14}\] where the TFO estimation can be simultaneously obtained by searching in \(\mathbf{d}\) and \(\mathbf{q}\) to minimize the path weight. The path \(\hat{p}\) with the smallest MSE is regarded as the highest reliability and we further define that \(\hat{p}\) is valid if the arbitrary nodes \(\mathbf{v}_{i}(n_{i}),\mathbf{v}_{j}(n_{j})\) of \(\hat{p}\) satisfies that \[\left\|\widehat{\mathbf{H}}^{i}\left(\mathbf{v}_{i}(n_{i}),:\right)-\widehat {\mathbf{H}}^{j}\left(\mathbf{v}_{j}(n_{j}),:\right)\right\|_{2}^{2}<max\left\{ a,b\right\} \tag{15}\] where \(a=\left\|\widehat{\mathbf{H}}^{i}\left(\mathbf{v}_{i}(n_{i}),:\right)\right\|_{ 2}^{2},b=\left\|\widehat{\mathbf{H}}^{j}\left(\mathbf{v}_{j}(n_{j}),:\right) \right\|_{2}^{2}\), and \(\widehat{\mathbf{H}}^{i}\left(\mathbf{v}_{i}(n_{i}),:\right)=\widehat{ \mathbf{G}}^{i}\left(\mathbf{v}_{i}(n_{i}),:\right)/\mathbf{p}_{m,n}^{i}( \mathbf{v}_{i}(n_{i}))\). And we say the node \(\mathbf{v}_{i}(n_{i})\) of \(\hat{p}\) is non-collided if \[\left\|\widehat{\mathbf{H}}^{i}\left(\mathbf{v}_{i}(n_{i}),:\right)-\widehat {\mathbf{H}}^{j}\left(\mathbf{v}_{j}(n_{j}),:\right)\right\|_{2}^{2}\leq\gamma \tag{16}\] where \(\mathbf{v}_{j}(n_{j})\) is the node of \(\hat{p}\) with the lowest channel energy and \(\gamma\) is a predefined threshold. Let \(\mathcal{V}_{p}\) denote the set of non-collided nodes of path \(p\) (satisfying Eq. (16)), then \(\widehat{\mathbf{h}}_{k}\) is \[\widehat{\mathbf{h}}_{k}=1/\left|\mathcal{V}_{p}\right|\sum\nolimits_{\mathbf{v }_{i}(n_{i})\in\mathcal{V}_{p}}\widehat{\mathbf{H}}^{i}\left(\mathbf{v}_{i}(n_{i }),:\right)^{T}. \tag{17}\] Once a path is picked up, the corresponding edges will be deleted in the graph, which goes smaller until there is no node or edge. We give a sub-graph in Fig. 3 to demonstrate this process, where the path \(\left\{8,92,34,122\right\}\) currently has the smallest MSE, thus being extracted and eliminated from the graph. The overall algorithm is summarized in Alg. 1. ``` 1:Input: \(\mathcal{P}\), \(\gamma\), \(\mathbf{d}\), \(\mathbf{q}\), \(\mathbf{v}_{i},\widehat{\mathbf{G}}^{i},\forall i\in\left[1:T_{p}\right]\) 2:Initial: \(\widehat{\mathcal{P}}=\emptyset\), \(\widehat{K}_{a}=0\), \(\widehat{\mathbf{H}}=\left[\;\right]\) 3:Output: \(\hat{\tau}_{k}\), \(\hat{\epsilon}_{k}\), \(\widehat{\mathcal{P}}\), \(\widehat{\mathbf{H}},\forall k\in\left[1:\widehat{K}_{a}\right]\) 4:repeat 5:% Minimum Weight Path Searching 6: Update: \(\left(\hat{p},\hat{\tau}_{k},\hat{\epsilon}_{k}\right)\) via (14), \(\mathcal{P}\leftarrow\mathcal{P}-\hat{p}\) 7:if\(\hat{p}\) satisfying (15) 8: Update: \(\widehat{\mathbf{h}}_{k}\) via (17), \(\widehat{\mathbf{H}}(k,:)\leftarrow\widehat{\mathbf{h}}_{k}^{T}\) 9: Update: \(\widehat{K}_{a}\leftarrow\widehat{K}_{a}+1,\widehat{\mathcal{P}}\leftarrow \widehat{\mathcal{P}}\cup\hat{p}\) 10:% Successive Interference Cancellation 11:if\(\mathbf{v}_{i}(n_{i})\) NOT satisfying (16) 12:\(\widehat{\mathbf{G}}^{i}\left(\mathbf{v}_{i}(n_{i}),:\right)\leftarrow\widehat {\mathbf{G}}^{i}\left(\mathbf{v}_{i}(n_{i}),:\right)-\mathbf{p}_{m,n}^{i}( \mathbf{v}_{i}(n_{i}))\widehat{\mathbf{h}}_{k}^{T}\) 13:end 14:end 15:until\(\mathcal{P}=\emptyset\) ``` **Algorithm 1** The Proposed GB-CR\({}^{2}\) Algorithm Fig. 3: The SIC and pruning process on a sub-graph. The path \(\left\{8,92,34,122\right\}\) is extracted and then eliminated from the graph. Fig. 2: An example of the proposed graph with \(K_{a}=8\) and four stages, with both valid (colored blue) and invalid (colored orange) edges. Nodes with multiple circles correspond to collision cases. ### _Constellation-Aided TFO Correction_ The TFO value obtained above is based on the minimum MSE of multistage channels within the range of quantization values. However, due to the approximation error in Eq. (13) and the limited observations of ESOP, the TFO estimation in Alg. 1 is not always optimal. In fact, the phase rotation caused by TFO also occurs in the LDPC part, which occupies more OFDM symbols than the pilots, and thus has more observations. Therefore, the data can be utilized to correct the TFO estimation obtained in Alg. 1 to obtain a better one. The received \(t\)-th symbol, \(t\in[T_{p}+1:T_{p}+T_{d}]\), is given by \[\mathbf{Y}^{t}=\sum_{k=1}^{K_{a}}\mathbf{P}_{k}^{t}f\left(\mathbf{s}_{k}^{c} \right)\mathbf{h}_{k}^{T}+\mathbf{Z} \tag{18}\] where \(f(\mathbf{s}_{k}^{c})\) refers to the encoding process described in Section II on \(\mathbf{s}_{k}^{c}\), the \(k\)-th user's modulated symbol. After the MMSE estimation, the received signal can be expressed as \[\widehat{\mathbf{X}}=\widehat{\mathbf{H}}^{*}\left(\widehat{\mathbf{H}}^{T} \widehat{\mathbf{H}}^{*}+\sigma_{n}^{2}\mathbf{I}_{M}\right)^{-1}\left( \mathbf{Y}^{t}\right)^{T}. \tag{19}\] Consequently, the data after TFO compensation is \(\widehat{\mathbf{s}}_{k}^{c}(s)=f^{-1}\left(\widehat{\mathbf{X}}(k,s)\right) /\mathbf{p}_{m,n}^{t}(s)\), of which the constellation should be similar to \(\mathbf{s}_{k}^{c}\). However, due to the sub-optimal estimation of TFO in Alg. 1, the residual TFO estimation error will cause a slight phase rotation on \(\widehat{\mathbf{s}}_{k}^{c}\). Therefore, we can generate a list of \(N\) TFO samples as coarse estimations by the GB-CR\({}^{2}\) algorithm, and the optimal TFO estimation can be obtained by finding the one with the minimal phase rotation on the data. Here, we introduce variable \(\rho\) to describe the _compensation degree_. For BPSK modulation, \(\rho\) is given by \[\rho_{k,n}=\sum\nolimits_{\mathcal{R}\left\{\widehat{\mathbf{s}}_{k,n}^{c} \right\}>0}\widehat{\mathbf{s}}_{k,n}^{c}-\sum\nolimits_{\mathcal{R}\left\{ \widehat{\mathbf{s}}_{k,n}^{c}\right\}<0}\widehat{\mathbf{s}}_{k,n}^{c}. \tag{20}\] Fig. 4 manifests the constellation of different phase compensation cases. Apparently, a more accurate TFO estimation leads to a more sufficient compensation of the phase ration, and thus the larger \(\|\rho\|\) should be. As such, the optimal TFO estimation can be obtained by \[[\hat{\tau}_{k},\hat{\epsilon}_{k}]=\arg\max_{\hat{\tau}_{k,n},\hat{\epsilon} _{k,n}}\|\rho_{k,n}\|,\forall n\in[1:N]. \tag{21}\] Moreover, as we will see shortly in Section IV, an improved CE results can be obtained by plugging the estimated TFO results to the GB-CR\({}^{2}\) algorithm and re-estimating the channel with known TFO values. The subsequent LDPC decoding is by the belief propagation (BP) iterative structure. The complexity of the proposed algorithm in the preamble part is \(\mathcal{O}(NL_{p}^{2}+MDQ)\), while that of the S-GAMP algorithm proposed in [14] is \(\mathcal{O}(M^{2}ND^{2}Q^{2})\). Moreover, the quantization space of GB-CR\({}^{2}\) is \(DQ\), while that of S-GAMP is \(NDQ\). Altogether, compared to the counterpart, the proposed algorithm exhibits substantial computational complexity reduction. ## IV Numerical Results We conduct numerical experiments to evaluate the performance of the proposed scheme compared to state-of-the-art schemes. The S-GAMP algorithm [14] is utilized as the benchmark for the CE performance, evaluated by the normalized MSE (NMSE), i.e., \(\text{NMSE}=\left\|\mathbf{H}-\underline{\mathbf{H}}\right\|_{F}^{2}/\left\| \mathbf{H}\right\|_{F}^{2}\), where \(\underline{\mathbf{H}}\) denotes the matrix \(\widehat{\mathbf{H}}\) arranged by the indexes aligned with \(\mathbf{H}\). While the FASURA scheme [9] is employed as the baseline for the block error rate (BLER) performance, evaluated by the probability of misdetection \(P_{md}\) and false alarm \(P_{fa}\) given by \[P_{md} =\frac{1}{K_{a}}\sum\nolimits_{k\in\mathcal{K}_{a}}P\left( \boldsymbol{v}_{k}\notin\mathcal{L}\right) \tag{22}\] \[P_{fa} =\frac{\left|\mathcal{L}\backslash\left\{\boldsymbol{v}_{k}:k\in \mathcal{K}_{a}\right\}\right|}{\left|\mathcal{L}\right|} \tag{23}\] where \(\mathcal{L}\) is the recovered message list. The system's energy per bit to noise power spectral density ratio is given by \(E_{b}/N_{0}=LP/BN_{0}\), where \(L\) is the total channel uses, and \(P\) is the symbol power. Specifically, we choose \(B=100\) message bits and \(L=3200\) channel uses. Two pieces of messages with a length \(B_{p}=7\) in the preamble part are encoded into four codewords, with each the length \(L_{p}=128\). While the rest \(B_{c}=86\) message bits are coded into \(\underline{L}_{c}=200\) bits by LDPC codes and then zero-padded, interleaved, and BPSK-modulated sequentially, resulting in total \(L_{c}=2688\) channel uses. For the OFDM settings, \(N_{c}=1024\), \(N_{cp}=72\), \(S=128\), \(T_{p}=4\), and \(T_{d}=21\). For the TFO settings, aligned with [14], the maximum TO and FO are \(D=9\) and \(\epsilon_{max}=0.0133\), respectively, with both the quantization number \(D=Q=9\). We depict the CE performance comparison versus the SNR in Fig. 5, where the user collision is not considered for fair comparison, due to the non-applicability of S-GAMP in such Fig. 4: The phase rotation on the constellation of BPSK modulation. (a) \(\widehat{\mathbf{s}}_{k}^{c}\) with a perfect rotation compensation. (b) Imperfect compensation on \(\widehat{\mathbf{s}}_{k}^{c}\) with residual phase rotation and \(\|\rho_{a}\|>\|\rho_{b}\|\). (c) The constellation of transmitted signal \(\mathbf{s}_{k}^{c}\). Fig. 5: The CE performance comparison in various \(E_{b}/N_{0}\). Parameter settings: \(M=16\), \(K_{a}=60\), \(L_{p}=N=512\) for S-GAMP. case, although our scheme can handle it. In Fig. 5, the tag 'test.' denotes the channel re-estimation with estimated TFO, while 'Pef -TFO' refers to the CE under perfect TFO, a lower bound for evaluating the performance. As depicted in Fig. 5, the proposed GB-CR\({}^{2}\) algorithm significantly exhibits an overall \(8.4\) dB enhancement in the CE performance compared with S-GAMP, with only a \(0.3\) dB loss to the lower bound with known TFO. The substantial gain may result from the more accurate TFO estimation in the proposed scheme by jointly utilizing multi-segment channel observations. Regarding that currently there is little work considering TFO in URA, both TFO-existing and TFO-free scenarios are considered in the simulation. We depict the curve of \(P_{e}\) versus \(K_{a}\) and \(P\) in Fig. 6 to evaluate the BLER performance of the proposed scheme compared to FASURA with the parameters aligned with [9]. Specifically, we utilize the non-orthogonal Gaussian codebook in the TFO-free scenario, where \(B_{p}=12\) and \(B_{c}=76\) with the other settings unchanged. Evidently, FASURA exhibits poor performance in the presence of TFO, while the proposed GB-CR\({}^{2}\) algorithm can handle it with an acceptable BLER, of which the performance is further improved by \(2.5\) dB with the transmitting power from \(0\) to \(2\) dB. Convincingly, the comparison under the TFO-free scenario is also given in Fig. 6 for completeness. It is shown that GB-CR\({}^{2}\) outperforms FASURA in the regime of \(K_{a}<70\) with \(P=0\) dB, and this superiority is further generalized to \(K_{a}<110\) in \(P=2\) dB. On the contrary, the user collisions have become the bottleneck for FASURA, since there is no consideration for the collision resolution in this scheme. Altogether, the proposed algorithm outperforms state-of-the-art scheme with the superiority of addressing the TFO disturbance and accommodating more users with the rising transmitting power. ## V Conclusion This paper considers the asynchronous MIMO URA system with the existence of both TFO and non-negligible user collisions. We propose a novel algorithmic solution called GB-CR\({}^{2}\) to reconstruct channels, compensate both TO and FO, and resolve collisions. We further improve the performance by leveraging the geometric characteristics of signal constellations. The pivotal of GB-CR\({}^{2}\) is to formulate the multi-stage transmission to a bipartite graph and iteratively recover the key parameters without requiring the quantization codebook with a large size. Our numerical results manifest the superiority of the proposed algorithm compared to the counterparts in terms both of CE and data recovery with substantial complexity reduction.
2310.17783
A Quantum Algorithm for Dynamic Mode Decomposition Integrated with a Quantum Differential Equation Solver
We present a quantum algorithm that analyzes time series data simulated by a quantum differential equation solver. The proposed algorithm is a quantum version of the dynamic mode decomposition algorithm used in diverse fields such as fluid dynamics and epidemiology. Our quantum algorithm can also compute matrix eigenvalues and eigenvectors by analyzing the corresponding linear dynamical system. Our algorithm handles a broad range of matrices, in particular those with complex eigenvalues. The complexity of our quantum algorithm is $O(\operatorname{poly}\log N)$ for an $N$-dimensional system. This is an exponential speedup over known classical algorithms with at least $O(N)$ complexity. Thus, our quantum algorithm is expected to enable high-dimensional dynamical systems analysis and large matrix eigenvalue decomposition, intractable for classical computers.
Yuta Mizuno, Tamiki Komatsuzaki
2023-10-26T21:21:51Z
http://arxiv.org/abs/2310.17783v3
Quantum Algorithm for Dynamic Mode Decomposition and Matrix Eigenvalue Decomposition with Complex Eigenvalues ###### Abstract We present a quantum algorithm that analyzes time series data simulated by a quantum differential equation solver. The proposed algorithm is a quantum version of the dynamic mode decomposition algorithm used in diverse fields such as fluid dynamics and epidemiology. Our quantum algorithm can also extract matrix eigenvalues by analyzing the corresponding linear dynamical system. Our algorithm handles a broader range of matrices with complex eigenvalues, unlike existing efficient quantum eigensolvers limited to specific matrix types. The complexity of our quantum algorithm is \(O(\mathrm{poly}\log N)\) for an \(N\)-dimensional system. This is an exponential speedup over known classical algorithms with at least \(O(N)\) complexity. Thus, our quantum algorithm is expected to enable high-dimensional dynamical systems analysis and large matrix eigenvalue decomposition, intractable for classical computers. ## I Introduction Quantum algorithms provide exponential speedup over classical algorithms for numerical linear algebra tasks such as eigenvalue decomposition of unitary or Hermitian matrices [1; 2; 3], singular value decomposition of low-rank matrices [4; 5], and solving linear systems of equations [6; 7]. These quantum algorithms can solve problems of \(N\) dimensions in runtime \(O(\mathrm{poly}\log N)\). They have significant applications in quantum chemistry [8], machine learning [4; 9], and solving differential equations [10; 11; 12; 13]. Quantum numerical linear algebra also offers prospects for advancements in dynamical systems analysis. A probability density function on the state space of a dynamical system is advanced in time by the Perron-Frobenius operator [14; 15]. Meanwhile, the Koopman operator is responsible for the time evolution of observable functions on the state space [14; 15]. These operators are linear operators on infinite-dimensional function spaces. In other words, any finite-dimensional (possibly nonlinear) dynamical system can be described as an infinite-dimensional linear dynamical system. Therefore, linear algebraic techniques such as spectral decomposition can be applied to general dynamical systems analysis. To numerically analyze such an infinite-dimensional linear system, one may resort to a finite-dimensional approximation. This often leads to a linear system with an extremely-large number of dimensions \(N(\gg 1)\). Such high-dimensional systems may be simulated using a quantum linear differential equation solver (QLDES) [10; 11; 12] in runtime \(O(\mathrm{poly}\log N)\). The quantum solver yields a quantum state whose amplitudes encode time series data of the dynamical system. However, as the tomography of such quantum state takes a runtime of \(O(N)\), an efficient method for extracting essential, dynamical information from the quantum data is highly demanded. We propose a novel quantum algorithm for dynamic mode decomposition (DMD), a numerical technique that estimates the spectral decomposition of the Koopman operator of a dynamical system from its time-series data [14]. This spectral decomposition elucidates the essential temporal behavior of the dynamical system. Classical DMD algorithms are frequently applied in various fields such as fluid dynamics and epidemiology [14]. Quantum algorithms for the spectral estimation from time-series data have been proposed by Steffens et al. [16] and Xue et al. [17]; however, these algorithms presuppose time-series data stored in a quantum random access memory or specific amplitude encoding, and efficiently preparing such data with a QLDES remains a challenge. Furthermore, this disconnection between simulation and time-series analysis on a quantum computer can be potentially an obstacle to exponential speedup achieved by each part. In contrast, our quantum DMD (qDMD) algorithm proposed in this article offers an implementable and seamless protocol to analyze QLDES-generated time-series data on a quantum computer. Consequently, our algorithm fills the critical gap in simulation and data analysis, achieving an exponential speedup over classical algorithms with respect to the system's dimension \(N\). Our qDMD algorithm can also serve as a quantum subroutine for eigenvalue decomposition of matrices, especially those with complex eigenvalues. If a linear differential equation \(\dot{\mathbf{x}}=\mathbf{A}\mathbf{x}\) can be simulated efficiently on a quantum computer, our algorithm can efficiently compute approximate eigenvalues and eigenvectors of \(\exp(\Delta t\mathbf{A})\), where \(\Delta t\) is the time step of the simulation. Notably, the matrix \(\mathbf{A}\) is not restricted to Hermitian and may have complex eigenvalues. Therefore, the composite protocol of a QLDES and our qDMD algorithm can be considered as a generalization of quantum phase estimation [1; 2; 3], which combines Hamiltonian dynamics simulation and quantum Fourier transform. Although previous studies [18; 19; 20; 21] have pioneered quantum eigensolvers for complex eigenvalue problems, these approaches have limitations such as the lack of the theoretical guarantee of an exponential speedup and requiring a specific form of input states. Our qDMD algorithm is designed to be free from such limitations. ## II Dynamic mode decomposition We introduce the _exact DMD_ algorithm proposed by Tu et al. [22]. Let us consider an \(N\)-dimensional linear dynamical system \(\dot{\mathbf{x}}=\mathbf{A}\mathbf{x}\), where \(\mathbf{x}\in\mathbb{C}^{N}\), and \(\mathbf{A}\in\mathbb{C}^{N\times N}\) is a diagonalizable matrix 1. Let \(\mathbf{K}\) denote the time evolution operator with time step \(\Delta t\): \(\mathbf{K}\coloneqq\exp(\Delta t\mathbf{A})\). Suppose we have a collection of \(M\) snapshot pairs of time-series data, symbolized as \(\{(\mathbf{x}_{j},\mathbf{x}^{\prime}_{j})\}_{j=0}^{M-1}\). Here \(\mathbf{x}^{\prime}_{j}\) signifies the state observed at the subsequent time step following \(\mathbf{x}_{j}\): \(\mathbf{x}^{\prime}_{j}\approx\mathbf{K}\mathbf{x}_{j}\)2. Note that \(\mathbf{x}_{j}\)'s can be taken from multiple different trajectories. From the data, we can estimate the time-evolution operator \(\mathbf{K}\) as Footnote 1: For the case that \(A\) is not diagonalizable, see discussion in Supplemental Material. Footnote 2: Since numerical integration of a linear differential equation involves approximations, the simulated data \(\mathbf{x}^{\prime}_{j}\) is an approximation of the exact solution \(\mathbf{K}\mathbf{x}_{j}\) \[\tilde{\mathbf{K}}=\operatorname*{argmin}_{\mathbf{J}\in\mathbb{C}^{N\times N}}\|\mathbf{ X}^{\prime}-\mathbf{J}\mathbf{X}\|_{\mathrm{F}}=\mathbf{X}^{\prime}\mathbf{X}^{+}, \tag{1}\] where \(\tilde{\mathbf{K}}\) signifies the approximation of the underlying \(\mathbf{K}\), \(\|\cdot\|_{\mathrm{F}}\) denotes the Frobenius norm, \(\mathbf{X}\coloneqq[\mathbf{x}_{0}\cdots\mathbf{x}_{M-1}]\), \(\mathbf{X}^{\prime}\coloneqq[\mathbf{x}^{\prime}_{0}\cdots\mathbf{x}^{\prime}_{M-1}]\), and \(\mathbf{X}^{+}\) is the pseudo-inverse of \(\mathbf{X}\). The construction of \(N\times N\) matrix \(\tilde{\mathbf{K}}\) and its eigenvalue decomposition becomes intractable as \(N\) increases. Thus, we solve the eigenvalue problem of the following projected matrix instead: \[\tilde{\mathbf{K}}^{\prime}=\mathbf{Q}^{\dagger}\tilde{\mathbf{K}}\mathbf{Q}, \tag{2}\] where \(\mathbf{Q}\) is an \(N\times R\) matrix whose columns are the \(R\) dominant left singular vectors of the \(N\times 2M\) matrix \([\mathbf{X}\ \mathbf{X}^{\prime}]\). The effective rank \(R\) is determined so that the error of the rank-\(R\) approximation of \([\mathbf{X}\ \mathbf{X}^{\prime}]\) in the Frobenius norm is less than a specified tolerance. The exact DMD algorithm assumes that \(R\) is sufficiently smaller than \(N\) so that the eigenvalue decomposition of the \(R\times R\) matrix \(\tilde{\mathbf{K}}^{\prime}\) can be computed practically on a classical computer. The eigenvalue decomposition of \(\tilde{\mathbf{K}}^{\prime}\) approximates that of \(\tilde{\mathbf{K}}\) as \[\tilde{\lambda}_{r}\approx\tilde{\lambda}^{\prime}_{r},\quad\tilde{\mathbf{w}}_{r} \approx\mathbf{Q}\tilde{\mathbf{w}}^{\prime}_{r}\quad(r=1,\ldots,R). \tag{3}\] Here, \(\tilde{\lambda}_{r}\) and \(\tilde{\mathbf{w}}_{r}\) (resp. \(\tilde{\lambda}^{\prime}_{r}\) and \(\tilde{\mathbf{w}}^{\prime}_{r}\)) are the \(r\)-th eigenvalue and eigenvector of \(\tilde{\mathbf{K}}\) (resp. \(\tilde{\mathbf{K}}^{\prime}\)). The real part and the imaginary part of \((\ln\tilde{\lambda}_{r})/\Delta t\) correspond to the decay/growth rate and the oscillation frequency of the \(r\)-th DMD mode, respectively. The computational complexity of this algorithm is \(O(\min(NM^{2},MN^{2}))\) for the singular value decomposition (SVD) and \(O(R^{3})\) for the eigenvalue decomposition of \(\tilde{\mathbf{K}}^{\prime}\)[23]. ## III qDMD algorithm Our qDMD algorithm consists of the following five steps: 1. Prepare quantum states encoding \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) using a QLDES. 2. Compute the SVDs of \(\mathbf{X}\), \(\mathbf{X}^{\prime}\), and \([\mathbf{X}\ \mathbf{X}^{\prime}]\) on a quantum computer. 3. Estimate the elements of \(\tilde{\mathbf{K}}^{\prime}\) from the quantum data and construct \(\tilde{\mathbf{K}}^{\prime}\) as classical data. 4. Solve the eigenvalue problem of \(\tilde{\mathbf{K}}^{\prime}\) on a classical computer. 5. Compute a quantum state encoding \(\tilde{\mathbf{w}}_{r}\). Steps 1-3, and 5 are efficiently executed on a quantum computer in runtime \(O(\operatorname{poly}\log N)\) as shown below. Given that \(R\ll N\), step 4 can be handled by a classical computer. Consequently, our qDMD algorithm is exponentially faster than its classical counterpart with respect to \(N\). Similar quantum-classical hybrid strategies are also employed by Steffens et al. [16] and Xue et al. [17], though the specifics of the quantum procedures differ. In what follows, we will expound the quantum procedures of steps 1-3 and 5. Henceforth, we adopt the following notation: The computational basis whose bit string represents integer \(i\) is denoted by \(\ket{i}\). As necessary, we denote a ket vector of the \(k\)-th quantum register like \(\ket{\,}_{k}\). For vector \(\mathbf{v}=(v^{0},\cdots,v^{n-1})^{\top}\in\mathbb{C}^{n}\), we define \(\ket{\mathbf{v}}\coloneqq\sum_{i=0}^{n-1}v^{i}\ket{i}\). Similarly, for matrix \(\mathbf{Z}=[\mathbf{v}_{0}\cdots\mathbf{v}_{m-1}]\in\mathbb{C}^{n\times m}\), we write \(\ket{\mathbf{Z}}\coloneqq\sum_{j=0}^{m-1}\ket{\mathbf{v}_{j}}\ket{j}=\sum_{i=0}^{n-1} \sum_{j=0}^{m-1}v_{j}^{i}\ket{i}\ket{j}\). A normalized matrix \(\mathbf{Z}/\|\mathbf{Z}\|_{\mathrm{F}}\) is denoted by \(\hat{\mathbf{Z}}\), thus \(\ket{\hat{\mathbf{Z}}}\) symbolizes the normalized ket vector (quantum state) proportional to \(\ket{\mathbf{Z}}\). Additionally, the \(r\)-th singular value, left and right singular vectors of matrix \(\mathbf{Z}\) are designated by \(\sigma_{r}^{\mathbf{Z}}\), \(\mathbf{u}_{r}^{\mathbf{Z}}\), and \(\mathbf{v}_{r}^{\mathbf{Z}}\), respectively. The notation of quantum circuit diagrams we employ can be found in [24]. ### Step 1 The quantum circuit shown in Fig. 1 is responsible for preparing the quantum state encoding \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\). Here, we prepare time-series data of \(L\) different trajectories of \((T+1)\) time steps. Consequently, the number of columns \(M\) equals \((T+1)L\) in this article. We assume a quantum oracle \(\mathscr{I}\) that generates a superposition of \(L\) initial states \(\{\mathbf{x}_{k}\}_{k=0}^{L-1}\) as \[|0\rangle|0\rangle\overset{\mathscr{I}}{\longmapsto}\sum_{k=0}^{L-1}|\mathbf{x}_{k} \rangle|k\rangle\,. \tag{4}\] Here, the normalizing constant for the right hand side is omitted. We also introduce a quantum subroutine \(\mathscr{K}_{\mu}^{\tau}\) that simulates the linear dynamical system up to the \(\tau\)-th time step for \(\mu\) initial conditions: \[\sum_{k=0}^{\mu-1}|\mathbf{x}_{k}\rangle|k\rangle|0\rangle\overset{\mathscr{K}_{ \mu}^{\tau}}{\longmapsto}\sum_{k=0}^{\mu-1}\sum_{t=0}^{\tau}|\tilde{\mathbf{x}}_{k} (t\ \Delta t)\rangle|k\rangle|t\rangle\,, \tag{5}\] where \(\tilde{\mathbf{x}}_{k}(t\ \Delta t)\) is the simulated state at the \(t\)-th time step of the trajectory initiated from \(\mathbf{x}_{k}\), and the normalizing constants for the both sides are omitted. We can implement \(\mathscr{K}_{\mu}^{\tau}\) by the Taylor series method and a quantum linear systems solver with gate complexity \(O(\tau\,\mathrm{poly}\log(N\tau\mu/\epsilon))\)[11; 25], where \(\epsilon\) denotes the tolerance for simulation error. Applying \(\mathscr{I}\) and \(\mathscr{K}_{L}^{T}\) to registers \(q_{1}\), \(q_{2}\), and \(q_{3}\), we get \[|\mathbf{X}\rangle=\sum_{k=0}^{L-1}\sum_{t=0}^{T}|\tilde{\mathbf{x}}_{k}(t\ \Delta t)\rangle_{1}\,|t+(T+1)k\rangle_{23}\,. \tag{6}\] In this context, the register \(q_{1}\) encodes states of the dynamical system, and the registers \(q_{2}\) and \(q_{3}\)--indicating the initial condition \(k\) and the time step count \(t\)--collectively label the column index of \(\mathbf{X}\) as \(|t+(T+1)k\rangle_{23}=|k\rangle_{2}|t\rangle_{3}\). Regarding the \(M\) columns of \(\mathbf{X}\) as initial states and the register \(q_{4}\) as the time step counter, the one-step simulation gate \(\mathscr{K}_{M}^{1}\) generates the quantum state proportional to \[|[\mathbf{X}\ \mathbf{X}^{\prime}]\rangle=|\mathbf{X}\rangle|0\rangle_{4}+|\mathbf{X}^{ \prime}\rangle|1\rangle_{4}\,. \tag{7}\] This ket vector can be viewed as encoding \([\mathbf{X}\ \mathbf{X}^{\prime}]\), regarding \(q_{2}\otimes q_{3}\otimes q_{4}\) as indicating the column index collectively. Measuring the fourth register, we obtain a quantum state \(|\hat{\mathbf{X}}\rangle\) or \(|\hat{\mathbf{X}}^{\prime}\rangle\). ### Step 2 According to the procedure proposed by Schuld et al. [9], we perform the SVD of a normalized matrix \(\hat{\mathbf{Z}}\) (\(\mathbf{Z}=\mathbf{X},\mathbf{X}^{\prime}\), or \([\mathbf{X}\ \mathbf{X}^{\prime}]\)) on a quantum computer using \(C\) copies of \(|\hat{\mathbf{Z}}\rangle\) as \[|\hat{\mathbf{Z}}\rangle^{\otimes C}\mapsto|\mathrm{SVD}(\hat{\mathbf{Z}})\rangle\approx \sum_{r=1}^{R}\hat{\sigma}_{r}^{\mathbf{Z}}|\mathbf{u}_{r}^{\mathbf{Z}}\rangle|\mathbf{v}_{r}^ {\mathbf{Z}^{\ast}}\rangle|(\hat{\sigma}_{r}^{\mathbf{Z}})^{2}\rangle_{5}\,, \tag{8}\] where \(\hat{\sigma}_{r}^{\mathbf{Z}}\coloneqq\sigma_{r}^{\mathbf{Z}}=\sigma_{r}^{\mathbf{Z}}/\| \mathbf{Z}\|_{\mathrm{F}}\), and \(|(\hat{\sigma}_{r}^{\mathbf{Z}})^{2}\rangle_{5}\) designates the computational basis of the extra fifth register indicating the binary representation of \((\hat{\sigma}_{r}^{\mathbf{Z}})^{2}\). Note that matrix normalization does not change singular vectors: \(\mathbf{u}_{r}^{\hat{\mathbf{Z}}}=\mathbf{u}_{r}^{\mathbf{Z}}\) and \(\mathbf{v}_{r}^{\hat{\mathbf{Z}}}=\mathbf{v}_{r}^{\mathbf{Z}}\). Thus we omit the hat (\(\bar{\phantom{\mathbf{Z}}}\)) in the superscript of singular vectors for brevity. This quantum SVD process utilizes density matrix exponentiation [4] and quantum phase estimation. The necessary number of state copies \(C\) for precision \(\epsilon\) is \(O(1/\epsilon^{2})\)[26]. ### Step 3 The estimation of \(\tilde{\mathbf{K}}^{\prime}\) is based on the following factorization: \[\tilde{\mathbf{K}}^{\prime}\approx\frac{\|\mathbf{X}^{\prime}\|_{\mathrm{F}}}{\|\mathbf{X} \|_{\mathrm{F}}}(\mathbf{Q}^{\dagger}\mathbf{U}^{\prime})\hat{\mathbf{\Sigma}}^{\prime}( \mathbf{V}^{\prime\dagger}\mathbf{V})\hat{\mathbf{\Sigma}}^{-1}(\mathbf{U}^{\dagger}\mathbf{Q}), \tag{9}\] where \(\tilde{\mathbf{X}}\approx\mathbf{U}\hat{\mathbf{\Sigma}}\mathbf{V}^{\dagger}\) and \(\hat{\mathbf{X}}^{\prime}\approx\mathbf{U}^{\prime}\hat{\mathbf{\Sigma}}^{\prime}\mathbf{V}^{ \prime\dagger}\) are the SVDs of the normalized data matrices with rank-\(R\) truncation. The first factor \(\|\mathbf{X}^{\prime}\|_{\mathrm{F}}/\|\mathbf{X}\|_{\mathrm{F}}\) (\(=\|\mathbf{X}^{\prime}\|/\|\mathbf{X}\|\)) can be estimated by measuring the fourth register of \(\|[\mathbf{X}\ \mathbf{X}^{\prime}]\rangle\) because the probability ratio of measured values 1 to 0, \(\Pr(q_{4}=1)/\Pr(q_{4}=0)\), equals the square of this factor. The diagonal elements of \(\hat{\mathbf{\Sigma}}\) and \(\hat{\mathbf{\Sigma}}^{\prime}\), i.e., \(\{\hat{\sigma}_{r}^{\mathbf{X}}\}_{r=1}^{R}\) and \(\{\hat{\sigma}_{r}^{\mathbf{X}^{\prime}}\}_{r=1}^{R}\), can be estimated by measuring the fifth register of \(|\mathrm{SVD}(\hat{\mathbf{X}})\rangle\) and \(|\mathrm{SVD}(\hat{\mathbf{X}}^{\prime})\rangle\). All the off-diagonal elements of \(\hat{\mathbf{\Sigma}}\) and \(\hat{\mathbf{\Sigma}}^{\prime}\) are zero. The elements of matrices \(\mathbf{Q}^{\dagger}\mathbf{U}^{\prime}\), \(\mathbf{U}^{\dagger}\mathbf{Q}\), and \(\mathbf{V}^{\prime\dagger}\mathbf{V}\) are inner products between singular vectors. Note that the \(r\)-th column vector of \(\mathbf{Q}\) corresponds to \(\mathbf{u}_{r}^{\mathbf{|X}\ \mathbf{X}^{\prime}]}\). Now, the remaining task is to estimate \(\langle\mathbf{u}_{r}^{\mathbf{|X}\ \mathbf{X}^{\prime}]}|\mathbf{u}_{r}^{\mathbf{X}^{\prime}}\rangle\), \(\langle\mathbf{u}_{r}^{\mathbf{X}}|\mathbf{u}_{r}^{\mathbf{|X}\ \mathbf{X}^{\prime}}\rangle\), and \(\langle\mathbf{v}_{r}^{\mathbf{X}^{\prime}}|\mathbf{v}_{r}^{\mathbf{X}}\rangle\) for \(R^{2}\) combinations of \(r\) and \(r^{\prime}\). The two-state SWAP test depicted in Fig. 2 (a) is often employed for estimating the absolute value of the inner product between arbitrary quantum states \(|\psi_{0}\rangle\) and \(|\psi_{1}\rangle\). However, the two-state SWAP test cannot estimate the phase (argument) of the inner product. Furthermore, the global phase of a singular vector is arbitrary. For instance, if we have a singular vector pair \((|\mathbf{u}_{r}\rangle\,,|\mathbf{v}_{r}^{\star}\rangle)\), then \(\left(\mathrm{e}^{\mathrm{i}\theta}|\mathbf{u}_{r}\rangle\,,\mathrm{e}^{-\mathrm{ i}\theta}|\mathbf{v}_{r}^{\star}\rangle\right)\) is also a valid pair, where \(\theta\) ranges from 0 to \(2\pi\). The choice of the global phase of the singular vector pair changes inner products to be estimated. To overcome these challenges, we introduce the _three-state SWAP test_ (Fig. 2 (b)) and _reference states_ for the left and right singular vectors. Figure 1: Quantum circuit for data matrix preparation. \(\mathscr{I}\) is a quantum oracle for initial state preparation. \(\mathscr{K}_{\mu}^{\tau}\) is a quantum algorithm that simulates the dynamics up to the \(\tau\)-th time step for \(\mu\) initial conditions. The label of each of the simulation gate indicates which registers the gate acts on; e.g, label \((q_{i},q_{k},q_{t})\) indicates that the simulation gate is performed on the registers \(q_{i}\), \(q_{k}\), and \(q_{t}\), which correspond to the first, second, and third registers in Eq. 5, respectively. First, we estimate the inner products between left singular vectors. We define the global phase of each left singular vector state \(|\mathbf{u}\rangle\) such that \(\arg\left\langle\chi_{1}|\mathbf{u}\right\rangle=0\) for a fixed reference quantum state \(|\chi_{1}\rangle\)3. The two-state SWAP test between \(|\chi_{1}\rangle\) and \(|\mathbf{u}\rangle\) estimates \(|\langle\chi_{1}|\mathbf{u}\rangle|\). Here, the singular vector state \(|\mathbf{u}\rangle\) can be prepared by executing the quantum SVD and measuring the fifth register encoding squared singular values. Additionally, the three-state SWAP test between \(|\chi_{1}\rangle\) and arbitrary left singular vector states \(|\mathbf{u}\rangle\) and \(|\mathbf{u}^{\prime}\rangle\) provides an estimate of \(\langle\chi_{1}|\mathbf{u}\rangle\langle\mathbf{u}|\mathbf{u}^{\prime}\rangle\langle\mathbf{u} ^{\prime}|\chi_{1}\rangle\). Leveraging the known absolute values and phases of \(\langle\chi_{1}|\mathbf{u}\rangle\) and \(\langle\mathbf{u}^{\prime}|\chi_{1}\rangle\), we can derive an estimate of \(\langle\mathbf{u}|\mathbf{u}^{\prime}\rangle\). In this way, \(\langle\mathbf{u}^{\left[\mathbf{X}\ \mathbf{X}^{\prime}\right]}_{r}|\mathbf{u}^{\mathbf{X}^{ \prime}}_{r^{\prime}}\rangle\) and \(\langle\mathbf{u}^{\mathbf{X}}_{r}|\mathbf{u}^{\left[\mathbf{X}\ \mathbf{X}^{\prime}\right]}_{r}\rangle\) can be estimated. Footnote 3: The reference states, \(|\chi_{1}\rangle\) and \(|\chi_{2}\rangle\), can be chosen arbitrarily, provided that \(\langle\chi_{1}|\mathbf{u}\rangle\neq 0\) and \(\langle\chi_{2}|\mathbf{v}^{*}\rangle\neq 0\) for all left and right singular vectors \(\mathbf{u}\) and \(\mathbf{v}\). However, the choice of \(|\chi_{1}\rangle\) and \(|\chi_{2}\rangle\) affects the algorithm’s efficiency (see Supplemental Material). Next, we estimate the inner products between left singular vectors. Since the global phase of a right singular vector is synchronized with that of the associated left singular vector, we cannot arbitrary define \(\arg\left\langle\chi_{2}|\mathbf{v}^{*}\right\rangle\) for a fixed reference state \(|\chi_{2}\rangle\)4 and a right singular vector state \(|\mathbf{v}^{*}\rangle\); instead, we also need to estimate \(\arg\left\langle\chi_{2}|\mathbf{v}^{*}\right\rangle\). Once we determine \(\langle\chi_{2}|\mathbf{v}^{*}\rangle\) for every right singular vector \(\mathbf{v}\), we can estimate \(\langle\mathbf{v}^{\mathbf{X}^{\prime}}_{r}|\mathbf{v}^{\mathbf{X}}_{r}\rangle\) using the three-state SWAP test as described above. Thus, let us consider how to determine \(\langle\chi_{2}|\mathbf{v}^{*}\rangle\). First, we prepare the following quantum state using the quantum circuit depicted in Fig. 3 with applying the Step2 gate conditionally on \(q_{4}=0\): Footnote 4: The reference states, \(|\chi_{1}\rangle\) and \(|\chi_{2}\rangle\), can be chosen arbitrarily, provided that \(\langle\chi_{1}|\mathbf{u}\rangle\neq 0\) and \(\langle\chi_{2}|\mathbf{v}^{*}\rangle\neq 0\) for all left and right singular vectors \(\mathbf{u}\) and \(\mathbf{v}\). However, the choice of \(|\chi_{1}\rangle\) and \(|\chi_{2}\rangle\) affects the algorithm’s efficiency (see Supplemental Material). \[\frac{1}{\sqrt{2}\|[\mathbf{X}\ \mathbf{X}^{\prime}]\|_{\text{F}}}\left[\|\mathbf{X}\|_{ \text{F}}\sum_{r=1}^{R}\hat{\sigma}^{\mathbf{X}}_{r}\,|\mathbf{u}^{\mathbf{X}}_{r}\rangle_ {{}_{1}}|\mathbf{v}^{\mathbf{X}*}_{r}\rangle_{{}_{23}}|0\rangle_{4}|(\hat{\sigma}^{ \mathbf{X}}_{r})^{2}\rangle_{{}_{5}}+|\mathbf{X}^{\prime}\rangle_{{}_{123}}|1\rangle_{ {}_{4}}|0\rangle_{{}_{5}}\right]|0\rangle_{{}_{6}}+\frac{1}{\sqrt{2}}\,|\chi_{1 }\rangle_{{}_{1}}|\chi_{2}\rangle_{{}_{23}}|0\rangle_{{}_{4}}|0\rangle_{{}_{5 }}|1\rangle_{{}_{6}}\,. \tag{10}\] Next, we input this state to the circuit shown in Fig. 4. The upper quantum register of the circuit corresponds to \(q_{1}\otimes q_{2}\otimes q_{3}\) and the bottom corresponds to \(q_{4}\otimes q_{5}\otimes q_{6}\). Let us set \(|0\rangle_{{}_{4}}|0\rangle_{{}_{5}}|1\rangle_{{}_{6}}\) and \(|0\rangle_{{}_{4}}|(\hat{\sigma}^{\mathbf{X}}_{r})^{2}\rangle_{{}_{5}}|0\rangle_{{ }_{6}}\) to \(|i\rangle\) and \(|j\rangle\) in the circuit diagram, respectively. Then, the circuit provides an estimate of \(\langle\chi_{1}|\mathbf{u}^{\mathbf{X}}_{r}\rangle\langle\chi_{2}|\mathbf{v}^{\mathbf{X}*}\rangle\). Since we know the value of \(\langle\chi_{1}|\mathbf{u}^{\mathbf{X}}_{r}\rangle\), we can derive an estimate of \(\langle\chi_{2}|\mathbf{v}^{\mathbf{X}*}_{r}\rangle\). Likewise, we can estimate \(\langle\chi_{2}|\mathbf{v}^{\mathbf{X}^{\prime}*}_{r}\rangle\) with applying the Step2 gate conditionally on \(q_{4}=1\) in the circuit of Fig. 3. The number of quantum SVDs necessary for estimating \(\tilde{\mathbf{K}}^{\prime}\) with precision \(\epsilon\) is \(O(1/\epsilon^{2}\operatorname{poly}R)\), excluding reference state preparation costs. The factor \(O(1/\epsilon^{2})\) originates from sampling errors obeying the central limit theorem. While preparing the reference states may require additional \(O(M)\) quantum SVDs, the overall gate complexity remains at \(O(\operatorname{poly}\log N)\). A detailed discussion of the computational complexity can be found in Supplemental Material. Figure 3: Quantum circuit for generating input states for the circuit of Fig. 4. The Step1 gate generates \(|[\mathbf{X}\ \mathbf{X}^{\prime}]\rangle\), corresponding to the circuit in Fig. 1. The Step2 gate performs the quantum SVD conditionally on the register \(q_{4}\). When applied conditioned on \(q_{4}=0\) (resp. \(q_{4}=1\)), this gate performs the quantum SVD of \(\tilde{\mathbf{X}}\) (resp. \(\tilde{\mathbf{X}}^{\prime}\)). The unitary gate \(U_{\chi_{1}\chi_{2}}\) creates the reference states as \(U_{\chi_{1}\chi_{2}}\left|0\rangle_{1}|0\rangle_{23}=|\chi_{1}\rangle_{1}|\chi_{2 }\rangle_{23}\). ### Step 5 A quantum state encoding the \(r\)-th DMD mode is given by \[\ket{\tilde{\mathbf{w}}_{r}}\approx\sum_{r^{\prime}=1}^{R}\tilde{w}_{r}^{\prime \prime}\ket{\mathbf{u}_{r^{\prime}}^{[\mathbf{X}\ \mathbf{X}^{\prime}]}}, \tag{11}\] where \(\tilde{\mathbf{w}}_{r}^{\prime}=(\tilde{w}_{r}^{\prime 1},\ldots,\tilde{w}_{r}^{\prime R })^{\top}\) is computed at step 4. Such coherent superposition of quantum states can be created using the quantum circuit shown in Fig. 5. This circuit creates a superposition of \(\ket{\psi_{0}}\) and \(\ket{\psi_{1}}\)[27]: \[\ket{\Psi}=\alpha\frac{\bra{\chi}\ket{\psi_{1}}}{\ket{\chi}\ket{\psi_{1}}}\ket{ \psi_{0}}+\beta\frac{\bra{\chi}\ket{\psi_{0}}}{\ket{\chi}\ket{\psi_{0}}}\ket{ \psi_{1}}. \tag{12}\] Here, \(\alpha\) and \(\beta\) are user-specified complex amplitudes, and \(\ket{\chi}\) is a reference quantum state. This addition process is probabilistic. The success probability is \(c_{0}c_{1}/(c_{0}+c_{1})\) if \(\bra{\psi_{0}}\ket{\psi_{1}}=0\), where \(c_{i}=|\bra{\chi}\ket{\psi_{i}}|^{2}\). By recursively creating coherent superpositions of two states, we can construct the multi-state superposition \(\ket{\tilde{\mathbf{w}}_{r}}\) with \(O(\text{poly}\,R)\) times of the quantum SVD (see Supplemental Material). ## IV Conclusion The qDMD algorithm performs DMD on quantum time series data generated by a QLDES. This algorithm is also capable of computing (possibly complex) eigenvalues and eigenvectors of matrices. Excluding reference state preparation costs, the total gate complexity scales as \(O(T\,\text{poly}\log(NM/\epsilon)\,\text{poly}(R)/\epsilon^{4})\). The qDMD algorithm can achieve an exponential speedup over its classical counterpart in terms of \(N\) if \(R\) remains at most \(O(\text{poly}\log N)\). Since the algorithm utilizes density matrix exponentiation and sampling-based inner product estimation, the dependency on \(\epsilon\) is less optimal than that of the classical counterpart. Reducing the complexity with respect to \(\epsilon\) should be addressed in future work. ###### Acknowledgements. This work was supported by JST, PRESTO Grant Number JPMJPR2018, Japan, and partially by Crossover Alliance to Create the Future with People, Intelligence and Materials, Japan (to YM).
2302.09974
Observation of near room temperature thin film superconductivity of atmospherically stable Ag-Au mesoscopic thin film
An environmentally stable mesoscopic thin film of Au of certain thickness has been deposited thermally on top of a Ag+ implanted oxide substrate to develop a close to room temperature superconductor. This thin film has been deposited in two different stages. Initially, a sol-gel derived ion conducting metal oxide (ICMO) thin film has been deposited by spin coating. Afterward, Ag+ has been introduced inside ICMO thin film by a chemical method. Following this, a thin layer of Au has been deposited on top of that Ag ion-implanted oxide via thermal evaporation. The temperature dependent resistivity (R-T) has been studied by four probe method. During high-to-low temperature sweep, around 240 K this thin film sample shows a sudden drop of resistance from 0.7 Ohm to 0.1 micro-Ohm. This 6-7 orders drop of resistance has been observed instantly within <0.1 K temperature variation of the sample. This transition temperature (TC) has been shifted toward the higher temperature by 5-6 degrees when temperature has been increased from low to the higher side. During 2nd and 3rd temperature cycling, both these transitions have been shifted by ~10 K towards room temperature w.r.t the earlier. However, after three successive temperature cycles, TC becomes stable and transitions occur close to 0 oC repeatedly. At the low resistance phase, current level has been varied from +100 mA to -100 mA which shows a random fluctuation of voltage drop within 10 nV range, indicating resistance under such circumstance is too low to measure by Delta mode electrical measurement (0.1 micro-Ohm). Besides, transition temperature reduces to lower temperature by 4 K, after applying 1 tesla magnetic field perpendicular to the thin film. Few YouTube video links of temperature dependent electrical characterization of such a thin film is given next to the acknowledgement section.
Sobhan Hazra, Sandip Chatterjee, Bhola Nath Pal
2023-02-20T13:41:17Z
http://arxiv.org/abs/2302.09974v3
Observation of near room temperature thin film superconductivity of atmospherically stable Ag-Au mesoscopic thin film ###### Abstract An environmentally stable mesoscopic thin film of Au of certain thickness has been deposited thermally on top of a Ag' implanted oxide substrate to develop a close to room temperature superconductor. This thin film has been deposited in two different stages. Initially, a sol-gel derived ion conducting metal oxide (ICMO) thin film has been deposited by spin coating. Afterward, Ag' has been introduced inside ICM0 thin film by a chemical method. Following this, a thin layer of Au has been deposited on top of that Ag ion-implanted oxide via thermal evaporation. The temperature dependent resistivity (R-T) has been studied by four probe method. During high-to-low temperature sweep, around 240 K this thin film sample shows a sudden drop of resistance from 0.7 \(\Omega\) to 0.1 \(\mu\Omega\). This 6-7 orders drop of resistance has been observed instantly within <0.1 K temperature variation of the sample. This transition temperature (T\({}_{C}\)) has been shifted toward the higher temperature by 5-6 degrees when temperature has been increased from low to the higher side. During 2\({}^{\mathrm{nd}}\) and 3\({}^{\mathrm{rd}}\) temperature cycling, both these transitions have been shifted by \(\sim\)10 K towards room temperature w.r.t the earlier. However, after three successive temperature cycles, T\({}_{C}\) becomes stable and transitions occur close to 0 \({}^{\circ}\)C repeatedly. At the low resistance phase, current level has been varied from +100 mA to -100 mA which shows a random fluctuation of voltage drop within \(\pm\)10 mV range, indicating resistance under such circumstance is too low to measure by Delta mode electrical measurement (\(\leq\)0.1\(\mu\Omega\)). Besides, transition temperature reduces to lower temperature by 4 K, after applying 1 tesla magnetic field perpendicular to the thin film. Few YouTube video links of temperature dependent electrical characterization of such a thin film is given next to the acknowledgement section. ## 1 Introduction Science society has a century-long dream to develop materials that can show zero resistance (or extremely low resistance like few micro-ohms) at room or close to the room temperature because of their great potential for various applications including loss-less power distribution, energy storage, high power magnet development, and so on. This journey started from the discovery of superconductivity which was detected in the year of 1911 at the temperature 4.2 K[1] From then, continuous efforts have been given to develop different materials that can show superconductivity at higher temperature. Most of those initial works have been done on different metals and alloys. During those developments in 1933, the Meissner effect shows that beside disappearing of resistivity, there is a transition of magnetic property of the material from paramagnetic to diamagnetic.[2] A number of theories such as London theory, Ginzburg-Landau theory, BCS theory have been proposed for the explanation of this phenomenon. Out of them the BCS theory proposed the formation of Cooper pair where two negative charge carriers (electron) are bounded via attractive force of electron-phonon-electron interaction which is considered one of the very unique and successful theory of solid state physics. Instead of that, after the discovery of high temperature superconductivity (High-Tc), BCS theory alone fails to explain this phenomenon.[3] Although, Cooper pair formation is still remains the fundamental issue because of its capability to form superconducting gap.[4; 5] Again compared to bulk, thin film high-T\({}_{C}\) materials are more demanding for their versatile applications including superconducting circuit, supercomputing, SQUID etc.[6; 7] Earlier several theoretical predictions for achieving exceptionally high-T\({}_{C}\) thin film materials have been proposed, particularly on thin film based materials. Among them, the concept of surface superconductivity was proposed due to the existence of surface electrons of a thin film which are more localized to the crystal.[8; 9] Therefore, the interaction of the surface electrons of a thin film material is quite different from their bulk electrons.[10, 11] Again, interaction of those localized surface electrons can be modified by different 'impurity surface states' which can successfully develop a high Tc superconductor.[12] During those studies, the possibility of achieving the superconducting phase of 1D polymer chain was discussed by Little et al.[13] According to him, overall center chain \(\pi\)-electron interaction of the polymer can be modulated by the side chain \(\pi\)-electron which can effectively overcome columbic repulsion. His proposal on attractive force of two electrons is known as the 'electron' mechanism which is different from conventional phonon mediated cooper pair formation concept. Later on, this electron mechanism was also adapted for high-Tc thin film superconducting materials by several groups. In their model, a thin layer of metal film needs to be sandwiched between a dielectric film which is capable of enhancing Tc by \(\sim\)10\({}^{2}\) times over conventional phonon model.[14, 15] However, to achieve high-Tc superconductor, the electron wave functions of metal and dielectric need to overlap which requires atomic distance separation between dielectric and metal.[16] Due to this limitation, this concept was not realized so far in a practical system. Beside this electron model, 'exciton model' has been proposed by Allender et al. where a thin layer of metal needs to be deposited on a semiconductor such a way that the Fermi energy of the metal exists in between valence and conduction band of the semiconductor. In such a situation, the metal electrons at the Fermi surface may tunnel into the semiconductor gap where they interact with virtual excitons, producing a net attractive interaction among the electrons.[17] Although, like the electron model, here also requires an intimate contact between metal and semiconductor which limits the realization of this concept for a practical material.[18] Recently, Saha et al. has reported unconventional properties of engineered Au-Ag nanostructures. In that work, they are capable of growing Au nanostructures of size 10-60 nm embedded with uniformly distributed Ag nanoclusters of size \(\sim\)1 nm, where separation of Ag and Au atom supposed to be in atomic scale because of their very similar lattice parameter. A thin film of such Au-Ag nanocrystals shows two distinct resistance levels below and above 245\(\pm\)1 K temperature with a variation of resistance \(\sim\)10\({}^{5}\).[19] The lower level of resistance is \(\sim\) 2 \(\mu\Omega\) which is the lower limit of their measurement set-up is believe to reach due to the superconducting transition of the film. A shifting of this switching towards room temperature was observed after successive cyclic temperature variation. Again, this transition temperature was shifted towards lower temperature after the application of magnetic field. Besides, a step like decrement in the voltage under constant current sourcing during transition was observed which is believed due to the formation of phase slip centre, an indication of superconducting transition. Additionally, it has been observed that these Au-Ag nanocrystals lose their conventional plamonic absorption.[20] In spite of discovering such unique results, the key limitation of their colloidal nanocrystal based materials is their very high air sensitivity which forces them to get those results with poor success rate. In this work, an environmentally stable mesoscopic thin film of Au of a critical thickness has been deposited thermally on top of an Ag' implanted ion-conducting metal oxide (ICMO) thin film substrate. During this deposition a clean glass substrate and a ICMO coated substrate were kept as a references. This Ag' implanted substrate was prepared prior to the thermal evaporation through a solution processed technique. It has been observed that at room temperature, Ag' implanted Au thin film shows metallic conductivity (resistance of \(\sim\) 0.7 \(\Omega\)) where as that reference film on glass substrate doesn't show any conductivity (resistance of \(\sim\) G\(\Omega\)). More interestingly, during temperature dependent resistance (R-T) study, Ag' implanted Au thin film shows a sudden drop of resistance from 0.7 \(\Omega\) to \(\lesssim\)0.1 \(\mu\Omega\) at around 240 K. However, this transition temperature shifted close to the 0 \({}^{\circ}\)C after 3-4 successive cyclic variations of temperatures where it remained stable over the time. Detail description of this finding is given in the following section. ## 2 Experimental Section ### Synthesis of materials. As mention earlier, Ag' implanted substrate has been grown in a two-steps solution processed technique. Initially, a sol-gel approach is used to synthesize an ion-conducting metal Oxide (ICMO) ceramic thin film by precursor salt of metal oxide and lithium acetate (acquired from Alfa- Aesar, 99 % extra pure). In the beginning, certain concentration of metal oxide precursor and lithium acetate are prepared separately in 2-methoxy ethanol solvent under vigorous mixing at room temperature using magnetic stirrer until it forms clear solutions. Following to this, these two solutions are mixed in a proper volume ratio, so that the final product can have the right stoichiometric ratio of that ICMO ceramic product. The mixture is then rapidly stirred for two hours. ### Thin film Fabrication. Initially a polycrystalline thin film of ICMO has been deposited on top of a glass substrate (15 mm x 15 mm) by a sol-gel technique followed by an annealing process. Thereafter, Ag* was implanted inside the ICMO thin film through a chemical process. Then a thin layer of Au with a critical thickness was deposited by thermal evaporation at room temperature on the top of this Ag* implanted ICMO thin film. During this Au deposition process, a clean glass substrate and a ICMO thin film coated substrate (without Ag*) were kept as reference sample. This deposition method is a really very easy, low cost and a fast process. Using a spin coater, a muffle furnace and a thermal evaporator, it is possible to prepare even 500 pieces of such samples in a week with 100% reproducibility. Besides, this sample preparation, a thicker Au thin film has been deposited in the selective area of another Ag* implanted ICMO thin film by a shadow mask method. After that, a thin layer of Au (with critical thickness) is deposited in the entire area of substrate by removing the mask which allows us to obtain two different regions (thicker and thin) of Au thin film. Thereafter, resistance vs. temperature (R-T) measurement through four probe methods on two different samples have been investigated as described in the R-T measurement section. ## 3 Material Characterization. Structural analysis of thin film samples is performed using X-ray diffraction (XRD) (Rigaku, Mini Flex600, DTEX Ultra) by using monochromatized Cu K\(\alpha\) radiation (\(\lambda\)= 1.54 A) in the 2\(\theta\) range of 10\({}^{\circ}\)-60\({}^{\circ}\) with a scan rate of 2\({}^{\circ}\)/min. Scanning electron microscope (SEM) study is done by using (NOVA NANOSEM 40, FEITM) to describe the surface morphology of the thin film. The elemental composition of the metallic elements is determined by an energy- dispersive X-ray spectrometer (EDX) attached to the HR-SEM with an accelerating voltage 10 kV. Atomic Force Microscopy (AFM) is carried out for the measurements of surface roughness of the samples in different concentration. All optical characterization (Transmittance and Reflectance) is studied by using EQE measurement unit (Enitech QE-R). Photoluminescence spectra (PL) of the thin film has been studied by using HITACHI (F-4600) fluorescence spectrophotometer. For temperature dependent four probe electrical characterization, sample was placed in contact with 4-spring loaded equidistance probes (Mill Max 858-22-004-30-0011014POS, 4MM, SMD) attached with cryogenic measurement set-up (Physical Quantity Measurement System (PQMS), XPLORE). These probes are coated with gold to minimize contact resistance with the sample. Resistance of the samples are measured with three different electronics instruments; digital multimeter (Keithley DMM6500 6.5 Digit Multimeter), dual soucemeter (Keysight B2912A) and combined current sourcemeter and a nanovoltmeter (Tektronix 6221+2182A, Delta mode measurement) in 4-probe configurations where as voltage vs. current (V vs. I) and time dependent voltage (V vs. t) under constant current sourcing has been performed by using Delta mode measurement. ## 4 Results and Discussion ### UV-VIS absorption and Photoluminescence (PL) spectra of thin films UV-VIS spectra of Ag-Au mesoscopic thin film has been demonstrated in Fig. 2a (red) which shows a strong absorption at 740 nm due to the plasmonic effect of Ag-Au nanocrystal. Whereas, the absorption peak of reference Au film on glass is \(\sim\)864 nm and Ag thin film (same thickness as of Au) is \(\sim\) 510 nm. This comparative data implies that the plasmonic absorption of Ag-Au mesoscopic thin film is due to the combined effect of Ag and Au nanoparticles. Photoluminescence (PL) spectra of respective films have also been investigated with excitation wavelengths of 320 nm. Original ICM0 thin film has been used as reference for this study. It has been observed that the PL signal of ICM0 thin film is very weak (Fig 2b, red). However, after Ag* implantation, PL signals enhance significantly (Fig 2b, blue), although this PL intensity is weaker than reference Au film on glass (Fig 2c, pink). Most likely, this enhancement of PL after Ag* implantation originated due to some Ag cluster formation during this Ag* implantation process. After Au deposition on Ag* implanted ICM0, PL intensity of the film enhanced more than double (Fig 2b, green). Although the nature of PL spectra in all of these films are quite similar with a peak position at \(\sim\)375 nm. ### X-ray diffraction (XRD) Figure 2c) shows the XRD spectra of different stage of AgAu thin film fabrication. Thin film XRD samples are made through spin coating and a subsequent annealing process on a p*-Si substrates. Due to the amorphous nature of glass, we choose p*-Si substrate for XRD study to reduce the background signal. Using a Rigaku X-ray diffractometer with Cu-K\(\alpha\) radiation (\(\lambda\)= 1.54 A) in the 20 range of 10deg-60deg at a scan rate of 2deg/min, the crystallinity and phase of these thin films materials are studied. The X-ray diffraction (XRD) spectra of the thin films are acquired at room temperature. In the thin-film XRD pattern of Ag*-ICMO shows a tiny peak formation at 26 \(\sim\) 43.7deg which is corresponding to the (200) plane of Ag (JCPDS file no 897322). This data indicates that during Ag* implantation process, few Ag cluster is also formed. The XRD pattern of Ag-Au film on Ag*-ICMO is shown in Fig. 2c (red) which shows two intense peak at 20 \(\sim\)38.0deg and 43.7deg respectively which are correspond to the planes of (111) and (200) of Au respectively (JCPDS file no. 04-0784). These peak positions are common for Ag as well because of their very similar lattice parameters. From this picture its very clear that the width of both peaks are quite wide, indicates the small size of the metal nanoparticle. In contrast, XRD data of reference Au film which is deposited on bare Si-substrate, has same peak position but with narrow width. ### Atomic Force Microscopy (AFM) of Ag-Au mesoscopic thin film Atomic Force Microscopy (AFM) study has performed on the ICM0, Ag* implanted ICM0 and Ag-Au mesoscopic thin films which is shown in Fig. 3. The AFM picture of ICM0 thin film indicates formation of contentious film of polycrystalline grain of average grain size \(\sim\) 30 nm (Fig. 3a) with rms roughness of \(\sim\)3.5 nm. However, after Ag* implantation on that ICM0 thin film through a chemical process, surface morphology of this film totally changes and form much smaller grains as shown in Fig. 3b) with average grain size of this film is \(\sim\)22 nm whereas, rms roughness of this film is \(\sim\) 6.8 nm. After thin layer of Au deposition on top of this Ag* implantation ICM0 thin film, a percolated continuous metal nanoparticle formation Figure 2: a) UV-VIS absorption spectra of Ag on ICM0 (black), Ag-Au mesoscopic thin film (red) and reference Au thin film (blue), b) photoluminescence (PL) spectra of ICM0 (red), Ag* implanted ICM0 (blue), reference Au film on glass (pink), and Ag-Au mesoscopic thin film (green), c) XRD patterns of Ag* implanted ICM0 (blue), Ag-Au mesoscopic thin film (red) and reference Au thin film (black). Figure 3: The AFM images of a) ICM0 thin film b) Ag* implanted ICM0 thin film c)-d) Ag-Au mesoscopic thin film occurs which is seen as bright circular spot in the image. The rms roughness of this film is \(\sim\) 4.1 nm which is lower than Ag' implantation ICM0 thin film. Due to the similar lattice parameter of Ag and Au, possibly Ag-cluster/Ag* of ICM0 thin film has been works as nucleation site of Au nanoparticles, selectively on those sites. After certain amount of Au deposition, these metal nanoparticle becomes percolated that makes the film conducting. Possibly, during the growth of Au nanoparticle, Ag* converted to tinny Ag cluster and reside inside the bigger Au nanoparticle to form a Ag-Au nanocrystal. ### Scanning electron microscopy (SEM) of Ag-Au mesoscopic thin film Scanning electron microscopy (SEM) is being used to study surface morphology and elemental composition of the Ag-Au mesoscopic thin film which is shown in Fig. 4. SEM picture of this film indicates (Fig. 4a, inset) that the film has some cracks of width \(\sim\)20-30 nm, but its percolation is maintained which makes it a conducting film. The energy dispersive X-ray spectroscopy (EDS) elemental maps of Au (Fig. 4c), Ag (Fig. 4d) of the film indicates that both Ag and Au has been deposited everywhere of the film. However, ratio of Ag and Au is \(\sim\) 1:20. ### Possible growth of Ag cluster embaded Au nanoparticle Our XRD data of Ag' implanted ICM0 thin film shows a weak signal of Ag, indicates silver cluster formation after Ag' implantation inside ICM0. Agin, a enhanced photoluminescence has been observed of this Ag' implanted ICM0 thin film at \(\sim\)400 nm which is also the indication of Ag cluster formation. Moreover, this PL signal becomes more intense when Au is thermally evaporate on it. Since, Ag and Au crystal have very similar lattice parameter, therefore during Au deposition, pre-existing Ag-cluster/Ag* on the substrate works as nucleation site of Au nanoparticle. However, due to the room temperature deposition of Au thin film, chance of Au-Ag alloys formation is very less. Relatively, Ag cluster embaded Au nanoparticle formation is more possible pheneomena. To ensure that, more detail microstructural analysis is required in future. Although, Au film on a clean glass substrate and original ICM0 thin film coated substrate remain non-conducting under same batch of Au deposition which is due to the lack of percolation path of Au. The difference of conductivity of Au film over Ag* implanted ICM0 film with other substrates implied the crucial role of Ag-cluster/Ag* during Au deposition. This phenomenon is also realized from the difference of the surface morphology of these Au thin films which has been checked from AFM study (AFM picture is not added). ### Electrical characterization of Ag-Au mesoscopic thin film In the temperature dependent four probe measurement, we used gold coated equally separated align four spring loaded pogo pin connectors (Fig. 5a, MILL MAX 858-22-004-30-001101) for making contact with the film. This pogo pin connector was attached through two adjustable screws with the sample holder of a cryocooler as shown in Fig. 5a). Electrical resistance was measured by in three different ways by using three sets of independent instruments; Keysight dual source meter (4-wire mode), Delta Mode Measurements (using Tektronix nanovoltmeter in combination of current sourcemeter) and Tektronix digital multi-meter (4-wire mode). ### 6a Resistance Vs. Temperature (R-T) data Room temperature resistance of such Ag-Au mesoscopic thin film that measured by this 4-probe arrangement is \(\sim\) 0.7 \(\Omega\) (Fig. 5b). As temperature reduces, film resistance also reduces slowly. However, around 240 K, the sample reduces its resistance suddenly by 6-7 orders of magnitude within 0.1 K variation of temperature and reaches to \(\leq\)0.1 \(\upmu\Omega\) (Fig. 5c) which is the noise floor of these instruments. We hardly find more than one data in between these two levels of resistance. If we reduce temperature further, it continuously shows its low resistance level. During increasing temperature, low-to-high resistance transition occurs at \(\sim\) 245 K. Interestingly, if we do this temperature cycling, in the second cycle it shows high-to-low transition at \(\sim\)250 K Figure 4: Elemental mapping analysis from SEM photographs (a) combinaeed picture for all elements (SEM picture in the inset), b) for Si, c) for Au and d) for Ag and low-to-high at \(\sim\) 255 K. After three successive cycles, T\({}_{\rm C}\) becomes close to 0 \({}^{\circ}\)C and remains stable for months (Fig. 6a). In our last 30 samples, we found almost similar data and didn't find any failure in any single sample. At this stage, the reason behind this shifting of T\({}_{\rm C}\) with initial temperature cycling is not clear to us and needs further investigation in future. Again, we store our samples in open atmospheric condition, which doesn't change its behaviour after months. During this study, we also measure the R-T of the two region of Au film (thick and thin) of the 2\({}^{\rm nd}\) sample which is also deposited on a Ag\({}^{*}\) implanted ICM0 substrate (Fig. 6b, inset). Interestingly, thinner Au area (with critical thickness) shows resistance drop with reducing temperature as like the initial sample. However, a thicker portion of Au doesn't show any transition up to 150 K, indicating the requirement of critical thickness of Au for this transition (Fig. 5b). This result is really very exciting for the fabrication purpose of a superconducting circuit. Because, just by selective thick and thin (with critical thickness) Au deposition on Ag\({}^{*}\) implanted ICM0 substrate with several \(\upmu\)m separation, it is possible to fabricate Josephson junction very easily. This \(\upmu\)m width 'thick Au' can be deposited by lithographic process on Ag\({}^{*}\) implanted ICM0 substrate prior to the critical thickness Au deposition, to fabricate verities of superconducting circuit which can work close to the room temperature. ### Voltage drop after transition at different current level and polarities (V vs I) After transition, (in low resistance phase) we did systematic study of voltage drop across the voltage-probe at different current intensity, ranging from -100mA to +100 mA by using Delta mode measurement (nanovoltmeter in combination with a current sourcemeter) set up (7:00 min to 17:10 min of YouTube video). During this study, we didn't find any variation of voltage drop within the entire scale including two different polarities of current, indicating resistance of this sample is too low to detect under such circumstances. Again, we measured voltage drop by turning on and off the current sauce and didn't find any variation of voltage fluctuation. In the entire studies, we found only a few nV voltage fluctuation regardless the current level. The fluctuation of nanovoltmeter data during sweeping current from -100 mA to +100 mA is shown in the V vs. I characteristics (Fig. 5c), indicating that voltage drop is always <\(\pm\)10nV in the entire range. Again, this V-I characteristics is very much random in different independent measurements, although the value of voltage drop always remains within \(\pm\)10nV. ### Voltage drop after over the time at different current polarities (V vs t) After superconducting transition of Ag-Au mesoscopic thin film, voltage drop across the voltage probe has been studied over the time (V vs. t). During this measurement, +100 mA current was fixed in the current probe for one set of measurement (2:17 min to 4:18 min of YouTube video) and continue that for 2 minutes. In a subsequent measurement, -100 mA current was set in the current probe and continue for two minutes as well. The variation of voltage across voltage probe over the time (V Vs. t) is shown in Fig. 6d (4:33 min to 6:36 min of YouTube video). This data indicates that voltage drop is entirely fluctuating within \(\pm\)10 nV regardless the current polarity of the current probe, indicating that the sample resistance remain below the limit of this measurement range (100 mA/10nV=0.1 \(\upmu\Omega\)). Fig. 5 a) Picture of spring loaded pogo connector sample holder for 4-wire measurement, b) resistance of the sample at 281.6 K, c) resistance of the sample at 227.4 K ### 4.6d Magnetic field dependent transition (Tc) The R-T measurement of Ag-Au mesoscopic thin film has been studied under magnetic fields as well. During this study, a magnetic field was applied perpendicular to the substrate which has been varied from 0 to 1 Tesla to realize the effect of magnetic field on transition temperature. This measurement has been performed with a sample on which three temperature cycle R-T measurement has been performed earlier. The R-T data under magnetic field has been shown in Fig. 6d) which indicates that under 1 T magnetic field, the transition temperature shifted from 263 K to 259 K without changing the nature of transition, implying that the vertical magnetic field has some effect on transition temperature, but can't destroy superconducting behavior so easily. Although, due to the limitation of our facility we are unable to check this effect at higher magnetic fields. Figure 6: a) The R-T data with different temperature cycles of Ag-Au mesoscopic thin film, b) comparative R-T data of thick and thin region of Au film of sample 2, inset shows the schematic diagram of selective thick and thin layer of Au c) V vs l data after low resistance transition (\(\sim\)256 K), d) current vs. time data (l vs. t) under +100 mA and -100 mA current after low resistance transition (\(\sim\)256 K) Figure 7: Magnetic field dependent R-T data of Ag-Au mesoscopic thin film ## 5 Conclusion A stable Ag-Au mesoscopic thin film has been deposited in a very low cost, easy and fast method that shows a close to the room temperature thin film superconductivity. For this deposition, a thin layer of Au has been deposited on top of a Ag' implanted ICM0 thin film. Superconducting transition temperature of such film initially observed at \(\sim\)240 K which shifted close to 0*C after 3 successive temperature cycles and remained stable over the time. At low resistance state, current-voltage data within \(\pm\)100 mA is randomly fluctuating within \(\pm\)10 eV range, indicating sample resistance becomes \(\lesssim\)0.1\(\mu\Omega\). This transition temperature also shifted towards a more negative direction by applying a magnetic field perpendicular to the film. Although this shifting is only 4 K after applying 1T magnetic field and doesn't change the nature of transition. Again, a thicker Au film on the same Ag' implanted substrate doesn't show any such transition which has been tested up to 150 K, indicating, a superconducting circuit can be easily fabricated on this Ag' implanted substrate by selective thick and thin Au deposition only. This selective thick Au deposition or pattering can be done by lithography process prior to the thin Au deposition to fabricate verifies of the superconducting circuit which can work close to the room temperature. ## Acknowledgements Bhola Nath Pal thanks SERB, India (CRG/2019/001826) and DST, India (DST/INT/SWD/VR/P-12/2019) for financial support. Authors are also grateful to Central Instrument Facility Centre, IIT (BHU), for providing instrument support for XRD, AFM and SEM. Sobhan Hazra is thankful to SERB for providing Ph.D. fellowship. ## 6 YouTube Video link of electrical characterization **1. 3rd cycle, complete cycle of R-T, V-I, V-t data** [https://youtu.be/nlEUCWWTZuM](https://youtu.be/nlEUCWWTZuM) * Superconducting phase transition from Higher to Lower Resistances @ 1:29 min of the video * Voltage vs Time at constant current (+100 mA) from 2:17 min to 4:18 min of the video * Voltage vs Time at constant current (-100 mA) from 4:33 min to 6:39 min of the video * Voltage vs Current with different current sweep (+100 mA to -100 mA) with interval of 10 mA from 7:00 min to 17:10 min of the video * Superconducting phase transition from Lower to Higher Resistances @28.04 min of the video **2. 1st Temperature cycle of R-T, from high to low resistance phase transition @ 3:10 min of the video** [https://youtu.be/lMyBz7MT9k](https://youtu.be/lMyBz7MT9k) **4. 2nd cycle phase transition, height to low resistance @2:15 min of the video** [https://youtu.be/Re741xYi3nM](https://youtu.be/Re741xYi3nM) **5. 2nd temperature Cycle, low to high resistance phase transition@4:45 min of the video.** [https://youtu.be/hpc06nRy-eQ](https://youtu.be/hpc06nRy-eQ) **Authors contribution** **Bhola Nath Pal: Conceptualization, draft writing/editing and supervising the project** **Sobhan Hazra:** Most of the experimental work, draft editing **Sandip Chatterjee:** Support magnetic field dependent R-T measurement and draft editing
2304.04106
MedGen3D: A Deep Generative Framework for Paired 3D Image and Mask Generation
Acquiring and annotating sufficient labeled data is crucial in developing accurate and robust learning-based models, but obtaining such data can be challenging in many medical image segmentation tasks. One promising solution is to synthesize realistic data with ground-truth mask annotations. However, no prior studies have explored generating complete 3D volumetric images with masks. In this paper, we present MedGen3D, a deep generative framework that can generate paired 3D medical images and masks. First, we represent the 3D medical data as 2D sequences and propose the Multi-Condition Diffusion Probabilistic Model (MC-DPM) to generate multi-label mask sequences adhering to anatomical geometry. Then, we use an image sequence generator and semantic diffusion refiner conditioned on the generated mask sequences to produce realistic 3D medical images that align with the generated masks. Our proposed framework guarantees accurate alignment between synthetic images and segmentation maps. Experiments on 3D thoracic CT and brain MRI datasets show that our synthetic data is both diverse and faithful to the original data, and demonstrate the benefits for downstream segmentation tasks. We anticipate that MedGen3D's ability to synthesize paired 3D medical images and masks will prove valuable in training deep learning models for medical imaging tasks.
Kun Han, Yifeng Xiong, Chenyu You, Pooya Khosravi, Shanlin Sun, Xiangyi Yan, James Duncan, Xiaohui Xie
2023-04-08T21:43:26Z
http://arxiv.org/abs/2304.04106v2
# MedGen3D: A Deep Generative Framework for Paired 3D Image and Mask Generation ###### Abstract Acquiring and annotating sufficient labeled data is crucial in developing accurate and robust learning-based models, but obtaining such data can be challenging in many medical image segmentation tasks. One promising solution is to synthesize realistic data with ground-truth mask annotations. However, no prior studies have explored generating complete 3D volumetric images with masks. In this paper, we present MedGen3D, a deep generative framework that can generate paired 3D medical images and masks. First, we represent the 3D medical data as 2D sequences and propose the Multi-Condition Diffusion Probabilistic Model (MC-DPM) to generate multi-label mask sequences adhering to anatomical geometry. Then, we use an image sequence generator and semantic diffusion refiner conditioned on the generated mask sequences to produce realistic 3D medical images that align with the generated masks. Our proposed framework guarantees accurate alignment between synthetic images and segmentation maps. Experiments on 3D thoracic CT and brain MRI datasets show that our synthetic data is both diverse and faithful to the original data, and demonstrate the benefits for downstream segmentation tasks. We anticipate that MedGen3D's ability to synthesize paired 3D medical images and masks will prove valuable in training deep learning models for medical imaging tasks. Keywords:Deep Generative Framework 3D Volumetric Images with Masks Fidelity and Diversity Segmentation ## 1 Introduction In medical image analysis, the availability of a substantial quantity of accurately annotated 3D data is a prerequisite for achieving high performance in tasks like segmentation and detection [26, 17, 29, 9, 31, 34, 35, 32, 33]. This, in turn, leads to more precise diagnoses and treatment plans. However, obtaining and annotating such data presents many challenges, including the complexity of medical images, the requirement for specialized expertise, and privacy concerns. Generating realistic synthetic data presents a promising solution to the above challenges as it eliminates the need for manual annotation and alleviates privacy risks. However, most prior studies [16, 6, 7, 4, 3, 20, 28, 34, 32, 33] have focused on 2D image synthesis, with only a few generating corresponding segmentation masks. For instance, [15] uses dual generative adversarial networks (GAN) [14, 35] to synthesize 2D labeled retina fundus images, while [12] combines a label generator [25] with an image generator [24] to generate 2D brain MRI data. More recently, [27] uses WGAN [5] to generate small 3D patches and corresponding vessel segmentations. However, there has been no prior research on generating whole 3D volumetric images with the corresponding segmentation masks. Generating 3D volumetric images with corresponding segmentation masks faces two major obstacles. First, directly feeding entire 3D volumes to neural networks is impractical due to GPU memory constraints, and downsizing the resolution may compromise the quality of the synthetic data. Second, treating the entire 3D volume as a single data point during training is suboptimal because of the limited availability of annotated 3D data. Thus, innovative methods are required to overcome these challenges and generate high-quality synthetic 3D volumetric data with corresponding segmentation masks. We propose MedGen3D, a novel diffusion-based deep generative framework that generates paired 3D volumetric medical images and multi-label masks. Our approach treats 3D medical data as sequences of slices and employs an autoregressive process to sequentially generate 3D masks and images. In the first stage, a Multi-Condition Diffusion Probabilistic Model (MC-DPM) generates mask sequences by combining conditional and unconditional generation processes. Specifically, the MC-DPM generates mask subsequences (i.e., several consecutive slices) at any position directly from random noise or by conditioning on existing slices to generate subsequences forward or backward. Given that medical images have similar anatomical structures, slice indices serve as additional conditions to aid the mask subsequence generation. In the second stage, we introduce a conditional image generator with a seq-to-seq model from [30] and a semantic diffusion refiner. By conditioning on the mask sequences generated in the first stage, our image generator synthesizes realistic medical images aligned with masks while preserving spatial consistency across adjacent slices. The main contributions of our work are as follows: 1) Our proposed framework is the _first_ to address the challenge of synthesizing complete 3D volumetric medical images with their corresponding masks; 2) we introduce a multi-condition diffusion probabilistic model for generating 3D anatomical masks with high fidelity and diversity; 3) we leverage the generated masks to condition an image sequence generator and a semantic diffusion refiner, which produces realistic medical images that align accurately with the generated masks; and 4) we present experimental results that demonstrate the fidelity and diversity of the generated 3D multi-label medical images, highlighting their potential benefits for downstream segmentation tasks. ## 2 Preliminary ### Diffusion Probabilistic Model A diffusion probabilistic model (DPM) [18] is a parameterized Markov chain of length T, which is designed to learn the data distribution \(p(X)\). DPM builds the Forward Diffusion Process (FDP) to get the diffused data point \(X_{t}\) at any time step \(t\) by \(q\left(X_{t}\mid X_{t-1}\right)=\mathcal{N}\left(X_{t};\sqrt{1-\beta_{t}}X_{t-1},\beta_{t}I\right)\), with \(X_{0}\sim q(X_{0})\) and \(p(X_{T})=\mathcal{N}\left(X_{T};0,I\right)\). Let \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\left(1-\beta_{s}\right)\), Reverse Diffusion Process (RDP) is trained to predict the noise added in the FDP by minimizing: \[Loss(\theta)=\mathbb{E}_{X_{0}\sim q(X_{0}),\epsilon\sim\mathcal{N}(0,I),t} \left[\left\|\epsilon-\epsilon_{\theta}\left(\sqrt{\bar{\alpha}_{t}}X_{0}+\sqrt {1-\bar{\alpha}_{t}}\epsilon,t\right)\right\|^{2}\right], \tag{1}\] where \(\epsilon_{\theta}\) is predicted noise and \(\theta\) is the model parameters. ### Classifier-free Guidance Samples from conditional diffusion models can be improved with classifier-free guidance [19] by setting the condition \(c\) as \(\emptyset\) with probability \(p\). During sampling, the output of the model is extrapolated further in the direction of \(\epsilon_{\theta}\left(X_{t}\mid c\right)\) and away from \(\epsilon_{\theta}\left(X_{t}\mid\emptyset\right)\) as follows: \[\hat{\epsilon}_{\theta}\left(X_{t}\mid c\right)=\epsilon_{\theta}\left(X_{t} \mid\emptyset\right)+s\cdot\left(\epsilon_{\theta}\left(X_{t}\mid c\right)- \epsilon_{\theta}\left(X_{t}\mid\emptyset\right)\right), \tag{2}\] where \(\emptyset\) represents a null condition and \(s\geq 1\) is the guidance scale. ## 3 Methodology We propose a sequential process to generate complex 3D volumetric images with masks, as illustrated in Figure 1. The first stage generates multi-label segmentation, and the second stage performs conditional medical image generation. The details will be presented in the following sections. ### 3D Mask Generator Due to the limited annotated real data and GPU memory constraints, directly feeding the entire 3D volume to the network is impractical. Instead, we treat 3D medical data as a series of subsequences. To generate an entire mask sequence, an initial subsequence of \(m\) consecutive slices is **unconditionally** generated from random noise. Then the subsequence is expanded **forward** and **backward** in an autoregressive manner, conditioned on existing slices. Inspired by classifier-free guidance in Section 2.2, we propose a general Multi-Condition Diffusion Probabilistic Model (MC-DPM) to unify all three conditional generations (unconditional, forward, and backward). As shown in Fig. 2, MC-DPM is able to generate mask sequences directly from random noise or conditioning on existing slices. Furthermore, as 3D medical data typically have similar anatomical structures, slices with the same relative position roughly correspond to the same anatomical regions. Therefore, we can utilize the relative position of slices as Figure 1: Overview of the proposed **MedGen3D**, including a 3D mask generator to autoregressively generate the mask sequences starting from a random position \(z\), and a conditional image generator to generate 3D images conditioned on generated masks. conditions to guide the MC-DPM in generating subsequences of the target region and control the length of generated sequences. **Train:** For a given 3D multi-label mask \(M\in\mathbb{R}^{D\times H\times W}\), subsequneces of \(m\) consecutive slices are selected as \(\{M_{z},M_{z+1},\ldots,M_{z+(m-1)}\}\), with \(z\) as the randomly selected starting indices. For each subsequence, we determine the conditional slices \(X^{C}\in\{\mathbb{R}^{n\times H\times W},\emptyset\}\) by selecting either the first or the last \(n\) slices, or no slice, based on a probability \(p^{C}\in\{p_{Forward},p_{Backward},p_{Uncondition}\}\). The objective of the MC-DPM is to generate the remaining slices, denoted as \(X^{P}\in\mathbb{R}^{(m-\mathrm{len}(X^{C}))\times H\times W}\). To incorporate the position condition, we utilize the relative position of the subsequence \(\tilde{z}=z/D\), where \(z\) is the index of the subsequence's starting slice. Then we embed the position condition and concatenate it with the time embedding to aid the generation process. We also utilize a binary indicator for each slice in the subsequence to signify the existence of conditional slices. The joint distribution of reverse diffusion process (RDP) with the conditional slices \(X^{C}\) can be written as: \[p_{\theta}(X^{P}_{0:T}|X^{C},\tilde{z})=p(X^{P}_{T})\prod_{t=1}^{T}p_{\theta} (X^{P}_{t-1}\mid X^{P}_{t},X^{C},\tilde{z}). \tag{3}\] where \(p(X^{P}_{T})=\mathcal{N}\left(X^{P}_{T};0,I\right)\), \(\tilde{z}=z/D\) and \(p_{\theta}\) is the distribution parameterized by the model. Figure 2: Proposed 3D mask generator. Given target position \(z\), MC-DPM is designed to generate mask subsequences (length of \(m\)) for specific region, unconditionally or conditioning on first or last \(n\) slices, according to the pre-defined probability \(p^{C}\in\{p_{F},p_{B},p_{U}\}\). Binary indicators are assigned to slices to signify the conditional slices. We ignore the binary indicators in the inference process for clear visualization with red outline denoting the conditional slices and green outline denoting the generated slices. Overall, the model will be trained by minimizing the following loss function, with \(X_{t}^{P}=\sqrt{\bar{\alpha}_{t}}X_{0}^{P}+\sqrt{1-\bar{\alpha}_{t}}\epsilon\): \[\text{Loss}(\theta)=\mathbb{E}_{X_{0}\sim q(X_{0}),e\sim\mathcal{N}(0,I),p^{C},z,t}\left[\left\|\epsilon-\epsilon_{\theta}\left(X_{t}^{P},X^{C},z,t\right) \right\|^{2}\right]. \tag{4}\] **Inference:** During inference, MC-DPM first generates a subsequence of \(m\) slices from random noise given a random location \(z\). The entire mask sequence can then be generated autoregressively by expanding in both directions, conditioned on the existing slices, as shown in Figure 2. Please refer to the **Supplementary** for a detailed generation process and network structure. ### Conditional Image Generator In the second step, we employ a sequence-to-sequence method to generate medical images conditioned on masks, as shown in Figure 3. **Image Sequence Generator:** In the sequence-to-sequence generation task, new slice is the combination of the warped previous slice and newly generated texture, weighted by a continuous mask [30]. We utilize Vid2Vid [30] as our image sequence generator. We train Vid2Vid with its original loss, which includes GAN loss on multi-scale images and video discriminators, flow estimation loss, and feature matching loss. **Semantic Diffusion Refiner:** Despite the high cross-slice consistency and spatial continuity achieved by Vid2vid, issues such as blocking, blurriness and sub-optimal texture generation persist. Given that diffusion models have been shown to generate superior images [11], we propose a semantic diffusion refiner utilizing a diffusion probabilistic model to refine the previously generated images. For each of the 3 different views, we train a semantic diffusion model (SDM), which takes 2D masks and noisy images as inputs to generate images aligned with input masks. During inference, we only apply small noising steps (10 steps) to the generated images so that the overall anatomical structure and spatial continuity are preserved. After that, we refine the images using the pre-trained semantic diffusion model. The final refined 3D images are the mean results from 3 views. Experimental results show an evident improvement in the quality of generated images with the help of semantic diffusion refiner. Figure 3: Image Sequence Generator. Given the generated 3D mask, the initial image is generated by Vid2Vid model sequentially. To utilize the semantic diffusion model (SDM) to refine the initial result, we first apply small steps (10 steps) noise, and then use three SDMs to refine. The final result is the mean 3D images from 3 different views (Axial, Coronal, and Sagittal), yielding significant improvements over the initially generated image. ## 4 Experiments and Results ### Datasets and Setups **Datasets:** We conducted experiments on the thoracic site using three thoracic CT datasets and the brain site with two brain MRI datasets. For both generative models and downstream segmentation tasks, we utilized the following datasets: * SegTHOR [22]: 3D thorax CT scans (25 training, 5 validation, 10 testing); * OASIS [23]: 3D brain MRI T1 scans (40 training, 10 validation, 10 testing); For the downstream segmentation task only and the transfer learning, we utilized 10 fine-tuning, 5 validation, and 10 testing scans from each of the 3D thorax CT datasets of StructSeg-Thorax [2] and Public-Thor [9], as well as the 3D brain MRI T1 dataset from ADNI [1]. **Implementation:** For thoracic datasets, we crop and pad CT scans to \((96\times 320\times 320)\). The annotations of six organs (left lung, right lung, spinal cord, esophagus, heart, and trachea) are examined by an experienced radiation oncologist. We also include a body mask to aid in the image generation of body regions. For brain MRI datasets, we use Freesurfer [13] to get segmentations of four regions (cortex, subcortical gray matter, white matter, and CSF), and then crop the volume to \((192\times 160\times 160)\). We assign discrete values to masks of different regions or organs for both thoracic and brain datasets and then combine them into one 3D volume. When synthesizing mask sequences, we resize the width and height of the masks to \(128\times 128\) and set the length of the subsequence \(m\) to 6. We use official segmentation models provided by MONAI[8] along with standard data augmentations, including spatial and color transformations. **Setup:** We compare the synthetic image quality with DDPM [18], 3D-\(\alpha\)-WGAN [21] and Vid2Vid [30], and utilize four segmentation models with different training strategies to demonstrate the benefit for the downstream task. ### Evaluate the Quality of Synthetic Image. **Synthetic Dataset:** To address the limited availability of annotated 3D medical data, we used only 30 CT scans from SegTHOR (25 for training and 5 for validation) and 50 MRI scans from OASIS (40 for training and 10 for validation) to generate 110 3D thoracic CT scans and 110 3D brain MRI scans, respectively. We compare the fidelity and diversity of our synthetic data with DDPM [18] (train 3 for different views), 3D-\(\alpha\)-WGAN [21], and vid2vid [30] by calculating the mean Frechet Inception Distance (FID) and Learned Perceptual Image Patch Similarity (LPIPS) from 3 different views. \begin{table} \begin{tabular}{c c c c c} \hline & \multicolumn{2}{c}{Thoracic CT} & \multicolumn{2}{c}{Brain MRI} \\ \cline{2-5} & FID \(\downarrow\) & LPIPS \(\uparrow\) & FID \(\downarrow\) & LPIPS \(\uparrow\) \\ \hline DDPM [18] & **35.2** & **0.316** & **34.9** & 0.298 \\ 3D-\(\alpha\)-WGAN [21] & 136.2 & 0.286 & 136.4 & 0.289 \\ Vid2Vid [30] & 47.3 & 0.300 & 48.2 & 0.324 \\ \hline Ours & 39.6 & 0.305 & 40.3 & **0.326** \\ \hline \end{tabular} \end{table} Table 1: Synthetic image quality comparison between baselines and ours. According to Table 1, our proposed method has a slightly lower FID score but a similar LPIPS score compared to DDPM. We speculate that this is because DDPM is trained on 2D images without explicit anatomical constraints and only generates 2D images. On the other hand, 3D-\(\alpha\)-WGAN [18], which uses much larger 3D training data (146 for thorax and 414 for brain), has significantly worse FID and LPIPS scores than our method. Moreover, our proposed method outperforms Vid2Vid, showing the effectiveness of our semantic diffusion refiner. ### Evaluate the Benefits for Segmentation Task. We explore the benefits of synthetic data for downstream segmentation tasks by comparing Sorensen-Dice coefficient (DSC) of 4 segmentation models, including Unet2D [26], UNet3D [10], UNETR [17], and Swin-UNETR [29]. In Table 2 and 3, we utilize real training data (from SegTHOR and OASIS) and synthetic data to train the segmentation models with 5 different strategies, and test on all 3 thoracic CT datasets and 2 brain MRI datasets. In Table 4, we aim to demonstrate whether the synthetic data can aid transfer learning with limited real finetuning data from each of the testing datasets (StructSeg-Thorax, Public-Thor and ADNI) with four training strategies. According to Table 2 and Table 3, the significant DSC difference between 2D and 3D segmentation models underlines the crucial role of 3D annotated data. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{SegTHOR*} & \multicolumn{4}{c}{StructSeg-Thorax} & \multicolumn{4}{c}{Public-Thor} \\ \cline{2-13} & \begin{tabular}{c} Unet \\ 2D \\ \end{tabular} & \begin{tabular}{c} Unet \\ 3D \\ \end{tabular} & \begin{tabular}{c} Swin \\ UNETR \\ \end{tabular} & \begin{tabular}{c} Unet \\ 2D \\ \end{tabular} & \begin{tabular}{c} Unet \\ 3D \\ \end{tabular} & \begin{tabular}{c} Win \\ UNETR \\ \end{tabular} & \begin{tabular}{c} Unet \\ 2D \\ \end{tabular} & \begin{tabular}{c} Win \\ UNETR \\ \end{tabular} \\ \hline **E2-1** & 0.817 & 0.873 & 0.867 & 0.878 & 0.722 & 0.793 & 0.789 & 0.810 & 0.822 & 0.837 & 0.836 & 0.847 \\ **E2-2** & 0.815 & 0.846 & 0.845 & 0.854 & 0.736 & 0.788 & 0.788 & 0.803 & 0.786 & 0.838 & 0.814 & 0.842 \\ **E2-3** & 0.845 & 0.881 & 0.886 & 0.886 & 0.772 & 0.827 & 0.824 & 0.827 & 0.812 & 0.856 & 0.853 & 0.856 \\ **E2-4** & **0.855** & 0.887 & **0.894** & **0.899** & 0.775 & **0.833** & **0.825** & 0.833 & **0.824** & 0.861 & 0.852 & **0.867** \\ **E2-5** & 0.847 & **0.891** & 0.890 & 0.897 & **0.783** & **0.833** & 0.823 & **0.835** & 0.818 & **0.864** & **0.858** & **0.867** \\ \hline \hline \end{tabular} \end{table} Table 2: Experiment 2: DSC of different thoracic segmentation models. There are 5 training strategies, namely: **E2-1:** Training with real SegTHOR training data; **E2-2:** Training with synthetic data; **E2-3:** Training with both synthetic and real data; **E2-4:** Finetuning model from E2-2 using real training data; and **E2-5:** finetuning model from E2-3 using real training data. (* denotes the training data source.) Figure 4: Our proposed method produces more anatomically accurate images compared to 3D-\(\alpha\)-WGAN and vid2vid, as demonstrated by the clearer organ boundaries and more realistic textures. Left: Qualitative comparison between different generative models. Right: Visualization of synthetic 3D brain MRI slices at different relative positions. While purely synthetic data (**E2-2**) fails to achieve the same performance as real training data (**E2-1**), the combination of real and synthetic data (**E2-3**) improves model performance in most cases, except for Unet2D on the Public-Thor dataset. Furthermore, fine-tuning the pre-trained model with real data (**E2-4** and **E2-5**) consistently outperforms the model trained only with real data. Please refer to **Supplementary** for organ-level DSC comparisons of the Swin-UNETR model with more details. According to Table 4, for transfer learning, utilizing the pre-trained model (**E3-2**) leads to better performance compared to training from scratch (**E3-1**). Additionally, pretraining the model with synthetic data (**E3-3** and **E3-4**) can facilitate transfer learning to a new dataset with limited annotated data. We have included video demonstrations of the generated 3D volumetric images in the **supplementary material**, which offer a more comprehensive representation of the generated image's quality. ## 5 Conclusion This paper introduces MedGen3D, a new framework for synthesizing 3D medical mask-image pairs. Our experiments demonstrate its potential in realistic data generation and downstream segmentation tasks with limited annotated data. Future work includes merging the image sequence generator and semantic diffusion refiner for end-to-end training and extending the framework to synthesize 3D medical images across modalities. Overall, we believe that our work opens up new possibilities for generating 3D high-quality medical images paired with masks, and look forward to future developments in this field. \begin{table} \begin{tabular}{c c c c} \hline \hline & \multicolumn{2}{c}{Thoracic CT} & Brain MRI \\ \cline{2-4} & StructSeg-Thorax* & Public-Thor* & ADNI* \\ \hline **E3-1** & 0.845 & 0.897 & 0.946 \\ **E3-2** & 0.865 & 0.901 & 0.948 \\ **E3-3** & 0.878 & 0.913 & **0.949** \\ **E3-4** & **0.882** & **0.914** & **0.949** \\ \hline \hline \end{tabular} \end{table} Table 4: Experiment 3: DSC of Swin-UNETR finetuned with real dataset. There are 4 training strategies: **E3-1:** Training from scratch for each dataset using limited fine-tuning data; **E3-2** Finetuning the model E2-1 from experiment 2; **E3-3** Finetuning the model E2-4 from experiment 2; and **E3-4** Finetuning the model E2-5 from experiment 2. (* denotes the finetuning data source.) \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline & \multicolumn{4}{c}{OASIS*} & \multicolumn{4}{c}{ADNI} \\ \cline{2-9} & \begin{tabular}{c} Unet \\ 2D \\ \end{tabular} & \begin{tabular}{c} Unet \\ 3D \\ \end{tabular} & \begin{tabular}{c} Unet \\ 3D \\ \end{tabular} & \begin{tabular}{c} Swin \\ 3D \\ \end{tabular} & \begin{tabular}{c} Unet \\ 3D \\ \end{tabular} & \begin{tabular}{c} Unet \\ 3D \\ \end{tabular} & \begin{tabular}{c} Unet \\ 3D \\ \end{tabular} \\ \hline **E2-1** & 0.930 & 0.951 & 0.952 & 0.954 & 0.815 & 0.826 & 0.880 & 0.894 \\ **E2-2** & 0.905 & 0.936 & 0.935 & 0.934 & 0.759 & 0.825 & 0.828 & 0.854 \\ **E2-3** & 0.938 & 0.953 & 0.953 & 0.955 & 0.818 & 0.888 & 0.898 & **0.906** \\ **E2-4** & **0.940** & **0.955** & **0.954** & **0.956** & **0.819** & 0.891 & **0.903** & 0.903 \\ **E2-5** & **0.940** & 0.954 & **0.954** & **0.956** & **0.819** & **0.894** & 0.902 & **0.906** \\ \hline \hline \end{tabular} \end{table} Table 3: Experiment 2: DSC of brain segmentation models. Please refer to Table 2 for detailed training strategies. (* denotes the training data source.)
2307.12234
MARS: Exploiting Multi-Level Parallelism for DNN Workloads on Adaptive Multi-Accelerator Systems
Along with the fast evolution of deep neural networks, the hardware system is also developing rapidly. As a promising solution achieving high scalability and low manufacturing cost, multi-accelerator systems widely exist in data centers, cloud platforms, and SoCs. Thus, a challenging problem arises in multi-accelerator systems: selecting a proper combination of accelerators from available designs and searching for efficient DNN mapping strategies. To this end, we propose MARS, a novel mapping framework that can perform computation-aware accelerator selection, and apply communication-aware sharding strategies to maximize parallelism. Experimental results show that MARS can achieve 32.2% latency reduction on average for typical DNN workloads compared to the baseline, and 59.4% latency reduction on heterogeneous models compared to the corresponding state-of-the-art method.
Guan Shen, Jieru Zhao, Zeke Wang, Zhe Lin, Wenchao Ding, Chentao Wu, Quan Chen, Minyi Guo
2023-07-23T05:50:37Z
http://arxiv.org/abs/2307.12234v1
# MARS: Exploiting Multi-Level Parallelism for DNN Workloads on Adaptive Multi-Accelerator Systems ###### Abstract Along with the fast evolution of deep neural networks, the hardware system is also developing rapidly. As a promising solution achieving high scalability and low manufacturing cost, multi-accelerator systems widely exist in data centers, cloud platforms, and SoCs. Thus, a challenging problem arises in multi-accelerator systems: selecting a proper combination of accelerators from available designs and searching for efficient DNN mapping strategies. To this end, we propose MARS, a novel mapping framework that can perform computation-aware accelerator selection, and apply communication-aware sharding strategies to maximize parallelism. Experimental results show that MARS can achieve 32.2% latency reduction on average for typical DNN workloads compared to the baseline, and 59.4% latency reduction on heterogeneous models compared to the corresponding state-of-the-art method. ## I Introduction DNN models have achieved cutting-edge accuracy for a wide range of tasks in various areas like computer vision [1], natural language processing [2], and recommendation systems [3]. Simultaneously, the power of DNNs imposes a significant computational and memory burden on traditional hardware systems. For example, one of the largest language models, GPT-3, involves about \(175\) billion parameters and \(3.14\times 10^{8}\) PFLOPs, which require a large amount of computational and memory resources. More critically, the completion of model training does not mean that the work is done. The trained models should be deployed to target platforms like cloud servers and edge devices for inference. These scenarios tend to be more cost-sensitive, compared to training systems. Multi-accelerator systems have begun to attract the attention of researchers and industry. As Moore's Law fades out of effect, semiconductor manufacturers can no longer exponentially increase the computational resources in chips. In other words, manufacturing one large chip could be expensive. However, multi-accelerator systems can achieve the same performance at a lower manufacturing cost. For example, Microsoft [4] has applied customized accelerators based on FPGA in the data center network. Amazon Web Services has also provided FPGA clusters (F1 instances) equipped with up to 8 FPGAs. NVIDIA's Jetson AGX Xavier is an SoC design integrating an ARM CPU, a Volta GPU, 2 NVIDIA Deep Learning Accelerators (NVDLA), and multimedia accelerators. These systems can achieve high performance and scalability with the interconnection between accelerators enabling collaborations. But this still requires much engineering effort and expert knowledge because of a large design space. First, for a certain layer in a given DNN workload, various accelerator designs may demonstrate performance gaps due to different computation patterns. In an adaptive multi-accelerator system, the design of accelerators can be configured, which is natural for FPGA platforms and SoCs at the design phase. Thus, one needs to choose a proper combination of accelerator designs to accommodate these layers in the workload and achieve the best overall performance. Second, to fully utilize computational resources and relieve the memory burden, parallelism strategies, like data parallelism and model parallelism, should be applied to partition the workload into accelerators with the awareness of communication. Note that multiple parallelism strategies can work together and form a multi-level decision problem. Due to these factors, an effective mapping framework is urgently required to explore the large design space. There have been some previous works focusing on the mapping algorithm on multi-accelerator systems. Zhang et al. [5] propose a mapping strategy based on dynamic programming to partition the DNNs into different FPGAs. Herald [6] focuses on mapping multi-DNN workloads onto the heterogeneous dataflow accelerators. But it lacks communication awareness during the mapping. H2H [7] provides mapping strategies with computation and communication awareness for heterogeneous accelerator systems. All of them fail to perform intra-layer parallelism to fully utilize computational resources. To this end, we propose MARS, a mapping framework that can exploit multi-level parallelism on adaptive multi-accelerator systems with high scalability. Our main contributions are summarized as follows: * We give a detailed system formulation together with the design space to cover current multi-accelerator systems and DNN workloads. * We generalize parallelism strategies for multi-accelerator systems. We develop a general representation to describe workload partitioning, generate multi-level parallelism strategies, and integrate them for evaluation. * We develop a mapping algorithm based on a two-level genetic algorithm with heuristics, which performs efficient design space exploration and finds a proper choice of accelerators, workload allocation, and parallelism strategies. * MARS achieves 32.2% latency reduction compared to the baseline for typical DNN models and outperforms the existing SOTA with 59.4% latency reduction for mapping heterogeneous models to heterogeneous accelerators. ## II Background and Motivation ### _Adaptive Multi-Accelerator Systems_ As more and more multi-accelerator systems are developed, adaptive multi-accelerator systems gain popularity for their flexibility. They can easily adapt to the coming workload by configuring accelerators in the system to maximize resource utilization. They can also work as a prototype validation platform to guide the design of other multi-accelerator systems. Fig. 1 shows the architecture of an EC2 F1.16xlarge instance from AWS, a representative adaptive multi-accelerator system. Each instance has a host machine with eight Xilinx Ultra-Scale+ FPGAs. The host-side applications can access/transmit data from/to FPGA's local DRAM via the PCI-e bus. The programmable logic (PL) of each FPGA can be **configured independently** through the FPGA image specified by the user. Moreover, eight FPGAs in the system are separated into two groups, as illustrated in Fig. 1. FPGAs from the same group can communicate with each other without the interference of the host, which could reduce the communication latency [8]. This asymmetrical communication pattern pose new challenges. Users need to rearrange workloads to leverage the low-latency communication feature. Note that though we take the F1 instance as an example, this asymmetrical communication pattern widely exists in multi-accelerator systems in industry. Microsoft [4] organizes accelerators through switches from the data center network in a hierarchical manner. Accelerators in the same rack can communicate much faster compared to the pairs from different racks. ### _DNN Workloads on Multi-Accelerator Systems_ DNN workloads can be represented as a computation graph, which is a directed acyclic graph consisting of layers. For a CNN model, there are various layers including convolution, batch normalization, pooling, activation and fully-connected layers. Typically, convolution layers occupy most of the computation resources. The convolution layer can be represented by a six-level nested loop. Most CNN accelerators perform loop transformation, such as loop interchange and loop tiling to map a given convolution layer to PEs (processing elements) in hardware architectures. Accelerator designers can surely find an optimal set of parameters to maximize performance for a specific convolution layer. However, the heterogeneity of CNN models makes it challenging to achieve the global optimum. When an image is fed into a CNN model, it has a high resolution (e.g. \(H\times W=224\times 224\)) and low channel width (e.g. \(C=3\)). As the network deepens, the feature map resolution gradually decreases with increasing channel width. The heterogeneity of feature map shapes reflects the variation of loop boundaries in the nested loop. Existing accelerators can only achieve high resource utilization for some layers. Multi-accelerator systems make it possible to maintain a high utilization rate through the whole network inference. We can deploy different layers onto accelerators that are suitable for corresponding computation patterns. Besides the computation heterogeneity, memory burden is another issue that needs to be addressed. As the depth and width of the network enlarge rapidly, the off-chip DRAM attached to each accelerator may not have sufficient space to buffer all the parameters and intermediate results during the inference. Frequent access to the host memory can lead to high latency. Thus, we need to 1) alleviate the memory burden by reducing the size of parameters and intermediate results, and 2) manage the off-chip DRAM effectively to minimize the host memory accesses. ### _Motivation_ Putting it all together, we realize that mapping DNN workloads on adaptive multi-accelerator systems are a challenging problem with extremely large design space. There are many factors that hinder designers from finding a high-quality mapping strategy. We analyze design choices in the design space. **Choices of accelerator designs & Workload allocation:** Due to the heterogeneity of DNN layers and various computation patterns that different accelerator designs take, we may select different accelerator designs to configure accelerators in the adaptive multi-accelerator system. Thus, we can allocate the layers in the DNN to those accelerators with their preferred computation patterns. In this way, there could be multiple layers mapped to a set of accelerators with the same design. **Choices of parallelism strategies:** For DNN layers mapped to accelerators with the same design, partitioning each DNN layer with suitable parallelism strategies can further distribute the computation load to these accelerators. This naturally distributes the parameters of each layer and specific strategies are necessary to reduce the memory cost. Due to the importance and complexity of the above design choices, it is necessary to give a clear formulation at first. We present system formulation in Section III. ## III System Formulation **Multi-accelerator system modeling:** We formulate the topology of multi-accelerators as a graph, \(G(Acc,BW)\). As shown Fig. 1: The architecture of an F1.16xlarge instance on AWS in Table I, the vertex \(Acc_{i}\) in the graph refers to the accelerator in the system of which the design can be configured adaptively. The weight of the edge \(BW_{i,j}\) refers to the bandwidth of the communication between \(Acc_{i}\) and \(Acc_{j}\). This is necessary because of the asymmetrical communication patterns mentioned in Section II-A. Moreover, each accelerator \(Acc_{i}\) in the system can access the host memory with the bandwidth \(BW_{i,host}\). Besides the communication, each accelerator \(Acc_{i}\) is equipped with off-chip DRAM with size \(Mem_{i}\) to buffer the intermediate results and parameters. **Accelerator designs:** For adaptive multi-accelerator systems, there are various existing accelerator designs to choose from. We use a set \(Design=\{d_{1},...,d_{M}\}\) to represent the candidates. Following the concepts in Section II-C, we define a set of accelerators with the same design as an accelerator set, denoted by \(AccSet\). Correspondingly, we use a map \(Config[AccSet_{i}]=d_{i}\) to express that the accelerators in \(AccSet_{i}\) are configured to the design \(d_{i}\). It should be satisfied that \(AccSet_{i}\cap AccSet_{j}=\varnothing\) for any \(i\neq j\). For each accelerator design, similar to H2H [7], we use their analytical performance models to evaluate the number of cycles. **DNN workload allocation:** the DNN workload can be represented as a computation graph with a series of layers \(\{L_{1},...,L_{N}\}\) (flattened in topology order). Each layer holds its parameters like \((C_{out},C_{in},H,W,K)\) for convolution. In our formulation, a subset of all the layers, \(LayerSet_{i}\), is mapped to a certain accelerator set, \(AccSet_{i}\), which can be expressed as \(Map[LayerSet_{i}]=AccSet_{i}\). **Parallelism strategies:** For each \(LayerSet\) after DNN workload allocation, we further perform parallel strategies to expedite its inference. Specifically, we use two sets \(ES\) and \(SS\) which describe two different strategies used for partitioning each layer in the layer set. Details will be discussed in Section IV. Note that the chosen parallelism strategies are valid only if the tensor sizes of these partitioned layers do not exceed the DRAM memory space of the corresponding accelerator set. Table I lists all the notations of our system formulation. MARS can evaluate the whole network inference latency which consists of the latency of each accelerator set and the communication latency between accelerator sets. We use ASTRA-Sim [9] to simulate communication latency in the system. In addition, we integrate analytical performance models of different accelerator designs into ASTRA-Sim to evaluate the computation cycles. ASTRA-Sim is a simulator featuring collective communication latency estimation for multi-accelerator systems. ## IV Parallelism Strategies in MARS As introduced in our system formulation in Section III, we can perform different parallelism strategies to layers in a subset \(LayerSet\), further exploiting the parallelism and boosting the performance. There have been some previous works applying parallelism strategies on GPU clusters [10], and chiplets [11]. And we generalize them and include the additional **SS** strategy for accelerators. The computation-intensive layers like convolution in a DNN workload can usually be represented as a nested loop, as shown in Fig. 2(a). In this example, it takes the input feature map \(In\) with shape \((C_{in},H,W)\) and _Weight_ with shape \((C_{out},C_{in},K,K)\) as input, and produces the output feature map _Out_ with shape \((C_{out},H,W)\). The essence of layer-level parallelism is partitioning the nested loop along several dimensions. Then the tensors involved in these dimensions will be partitioned into different shards. We can distribute these shards to accelerators in the system and let them perform the computation in parallel. Figure. 2 illustrates the idea of our parallelism strategies. By default, As shown in Fig. 2(a), tensors are not partitioned and we annotate each dimension with **N** to represent the nested loop is not partitioned at the corresponding dimension. Figure 2(b) and 2(c) presents two Fig. 2: Using exclusive/shared shard to parallelize the layer computation examples of our parallelism strategies, using different ways to generate shards. Specifically, we classify tensor shards into two categories: exclusive shards and shared shards. And we represent the parallelism strategy by annotating corresponding dimensions with **ES** and **SS**. * [leftmargin=*,noitemsep,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,topsep=0pt,parsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep,topsep=0pt,topsep=0pt,topsep=0pt,topsep,topsep=0pt,topsep=0pt,topsep=0pt,topsep,topsep=0pt,topsep=0pt,topsep,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep,topsep=0pt,topsep=0pt,topsep=0pt,topsep=0pt,topsep,topsep=0pt,topsep=0pt,topsep,topsep=0pt,topsep=0pt,topsep=0pt,topsep,topsep=0pt,topsep=0pt,topsep=0pt,topsep,topsep=0pt,topsep=0pt,topsep=topsep,topsep=0pt,topsep,topsep=0pt,topsep=topsep,topsep=topsep,topsep=topsep,topsep=topsep,topsepsep,topsep=topsepsep,topsepsep,topsep=topsep easy to fall into local optimums. To make the problem more solvable, we adopt the idea of divide and conquer by dividing the problem into two levels. The first-level genetic algorithm (the pink box in Fig. 3) aims at finding the minimum overall latency for the multi-accelerator system. At the first level, \(AccSet\) with configured designs, and the layers mapped to them, \(LayerSet\), are decided. With these factors fixed, the mapping problem is further divided into several sub-problems: For each layer set \(LayerSet_{i}\) mapped to an accelerator set \(AccSet_{i}\), what parallelism strategies should be applied to the layers in these layer sets? To solve these sub-problems, the second-level genetic algorithm (green and blue boxes in Fig. 3) is responsible to find the proper parallelism strategy for each layer to minimize the latency on the given accelerator set, considering both computation and communication costs. The population is iteratively updated in the mutation and crossover phase according to the latency evaluated by the simulator. After several iterations, the minimum latencies of accelerator sets will be aggregated to obtain the overall latency with extra communication costs between the sets. The overall latency will work as the fitness function to influence the succeeding mutation and crossover of the first-level genetic algorithm. We use several heuristics to prune the search space. The topology of multi-accelerator systems is formulated as a graph, \(G(Acc,BW)\), which has been introduced in Section III. MARS iteratively removes the edge with the lowest bandwidth in \(G(Acc,BW)\). This will produce several connected sub-graphs, which are regarded as candidates of \(AccSet\). This strategy can help generate \(AccSet\) with minimal communication bottlenecks. In the decode step in Fig.3, the candidate of \(AccSet\) with the highest gene value will be chosen. The accelerator design of each \(AccSet\) is decided through the gene value of each design. MARS profiles the performance of accelerator designs on the layers of the DNN workload according to analytical models before the search. The gene value of these designs at the first generation is initialized according to the normalized performance. This means the design with higher computation ability is most likely to be chosen at the beginning of the search. As for layer sets, to avoid frequent communication between different accelerator sets, we limit that each accelerator set is only mapped with a continuous series of layers in topology order. Then MARS will call the second-level genetic algorithm to optimize parallelism strategies over the accelerator set and the layer set. At the second level, the parallelism strategies of each layer are decided. For layer \(L\), MARS uses individual genes to decide the \(ES\) and \(SS\) sets of \(L\) respectively. It prioritizes parallelism at the dimensions with higher gene values. ## VI Evaluation ### _Experiment Setup_ **Hardware Platform Modeling:** We model an adaptive multi-accelerator system in modified ASTRA-Sim [9]. The system topology is modeled based on the interconnection of F1 instances, as illustrated in Fig. 1. The system consists of eight accelerators separated into two groups. We set the communication bandwidth between accelerators in the same group to 8Gbps. The accelerator-to-host bandwidth is set to 2Gbps to simulate the high latency of accessing host memory. The size of off-chip DRAM on accelerators is set to 1GB. **Accelerator Designs:** For the accelerator designs, we use three kinds of CNN accelerators on FPGA with their performance models. To make their theoretical performance comparable, we set the clock frequency to 200MHz uniformly and use design parameters with similar numbers of PEs. The detailed setting can be found in Table II. **Models:** We use several representative CNN models as benchmarks for evaluation, including AlexNet, VGG, ResNet, and WideResNet. For some models, we use multiple settings with different parameter sizes and FLOPs. **Baseline:** To the best of our knowledge, there are no existing mapping algorithms that take both multi-level parallelism and adaptive multi-accelerator systems into consideration. To show the ability of MARS, we extend the computation-prioritized mapping algorithm from [6] with parallelism strategies integrated. The baseline uses fixed two accelerator sets which are the same as two groups in the system topology. This is reasonable to avoid high communication latency across groups. And it allocates half of the layers to each accelerator set and chooses the accelerator design with the lowest computation latency. About the parallelism strategies, each layer is partitioned with ES along the longest two dimensions. ### _Performance Analysis_ Following the experiment setup, we test the latency of the baseline mapping algorithm and MARS mapping algorithm. We list the parameters of the models, the overall latency, together with the accelerator sets, workload allocation, and parallelism strategies of representative layers in each layer set found by MARS in Table III. As shown in the table, MARS outperforms the baseline for all models. The latency reduction ranges from 10.1% to 46.6% (32.2% on average). The larger design space enables MARS to find better solutions compared to the baseline. Some patterns are shown in the mappings found by MARS. The first few layers of these models are always mapped to accelerator sets configured with Design 1(SuperLIP). The reason is that the first few layers usually have larger resolutions and fewer channels. Other designs suffer from low hardware utilization because the shape of the layer cannot saturate the PEs in the architecture, while the design parameter of SuperLIP(\(T_{n}=7\)) can achieve relatively high utilization. MARS tends to partition these layers along \(H/W\)-dimension. We can also find that design 3 does not show up in ResNet101 and WRN-50-2. This is because design 3 is an accelerator based on Winograd algorithm, which makes it impossible to effectively handle \(1\times 1\) convolution in the bottleneck block of these models. Because \(C_{In}\) and \(C_{Out}\) enlarge rapidly when the network goes deeper, MARS is more likely to partition these layers along \(C_{In}/C_{Out}\)-dimension for parallelism. This emphasizes the importance to select accelerator designs and parallelism strategies based on the computation patterns of layers. ### _Comparison with H2H_ H2H [7] focuses on mapping heterogeneous models to heterogeneous multi-accelerators with fixed accelerator designs. Though MARS and H2H have different problem formulations, we still compare their performance with heterogeneous DNN models. We reuse performance models of convolution accelerators used in H2H and model the cloud-scale multi-FPGA systems in ASTRA-Sim following the 5-level bandwidth settings of H2H. For heterogeneous accelerator designs, we assume that members in the accelerator set stall until the slowest accelerator finishes computing. Then we evaluate the latencies in milliseconds of MARS and H2H on two ResNet-based heterogeneous models. The results are shown in Table IV. We can see that MARS achieves lower latency than H2H on both models when limited to the same bandwidth (59.4% reduction on average). When the bandwidth is extremely low, MARS tends to partition convolution layers along \(H/W\)-dimension, which requires low communication cost. ## VII Conclusion In this paper, we propose MARS, a mapping framework aiming at exploiting multi-level parallelism on adaptive multi-accelerator systems. We formulate design space including the choices of accelerator design, workload allocation, and parallelism strategies. A two-level genetic algorithm with heuristics is used to perform design space exploration. The mapping algorithm shows significant latency reduction compared to the baseline mapping algorithm and the state-of-the-art method. ## VIII Acknowledgements This work is partially sponsored by the National Natural Science Foundation of China (62102249, 62232015) and Shanghai Pujiang Program (21PJ1408200).
2306.13203
Neural Network Pruning for Real-time Polyp Segmentation
Computer-assisted treatment has emerged as a viable application of medical imaging, owing to the efficacy of deep learning models. Real-time inference speed remains a key requirement for such applications to help medical personnel. Even though there generally exists a trade-off between performance and model size, impressive efforts have been made to retain near-original performance by compromising model size. Neural network pruning has emerged as an exciting area that aims to eliminate redundant parameters to make the inference faster. In this study, we show an application of neural network pruning in polyp segmentation. We compute the importance score of convolutional filters and remove the filters having the least scores, which to some value of pruning does not degrade the performance. For computing the importance score, we use the Taylor First Order (TaylorFO) approximation of the change in network output for the removal of certain filters. Specifically, we employ a gradient-normalized backpropagation for the computation of the importance score. Through experiments in the polyp datasets, we validate that our approach can significantly reduce the parameter count and FLOPs retaining similar performance.
Suman Sapkota, Pranav Poudel, Sudarshan Regmi, Bibek Panthi, Binod Bhattarai
2023-06-22T21:03:50Z
http://arxiv.org/abs/2306.13203v1
# Neural Network Pruning for Real-time Polyp Segmentation ###### Abstract Computer-assisted treatment has emerged as a viable application of medical imaging, owing to the efficacy of deep learning models. Real-time inference speed remains a key requirement for such applications to help medical personnel. Even though there generally exists a trade-off between performance and model size, impressive efforts have been made to retain near-original performance by compromising model size. Neural network pruning has emerged as an exciting area that aims to eliminate redundant parameters to make the inference faster. In this study, we show an application of neural network pruning in polyp segmentation. We compute the importance score of convolutional filters and remove the filters having the least scores, which to some value of pruning does not degrade the performance. For computing the importance score we use the Taylor First Order (TaylorFO) approximation of the change in _network output_ for the removal of certain filters. Specifically, we employ a gradient-normalized backpropagation for the computation of the importance score. Through experiments in the polyp datasets, we validate that our approach can significantly reduce the parameter count and FLOPs retaining similar performance. Keywords:Polyp Segmentation Real-time Colonoscopy Neural Network Pruning ## 1 Introduction Polyp segmentation [7, 6, 37] is a crucial research problem in the medical domain involving dense classification. The primary aim of segmenting polyps in the colonoscopy and endoscopy is to identify pathological abnormalities in body parts such as the colon, rectum, etc. Such abnormalities can potentially lead to adverse effects causing colorectal cancer, thus inviting fatal damage to health. Statistics show that between 17% and 28% of colon polyps are overlooked during normal colonoscopy screening procedures, with 39% of individuals having at least one polyp missed, according to several recent studies [27, 22]. However, timely diagnosis of a polyp can lead to timely treatment. It has been calculated that a 1% improvement of polyp detection rate reduces colorectal cancer by 3% [3]. Realizing the tremendous upside of early polyp diagnosis, medical AI practitioners have been trying to utilize segmentation models to assist clinical personnel. However, the latency of the large segmentation model has been the prime bottleneck for successful deployment. Utilizing smaller segmentation models is an option, but doing so compromises the performance of the model. In the case of bigger models, there is a good chance model learns significant redundancies thereby leaving room for improvement in performance. In such a scenario, we can prune the parameters of the model to reduce its size for the inference stage. Neural Network Pruning has established itself as an exciting area to reduce the inference time of larger models. Neural Network Pruning [26, 14, 10, 2] is one of the methods to reduce the parameters, compute, and memory requirements. This method differs significantly from knowledge distillation [16, 12] where a small model is trained to produce the output of a larger model. Neural Network Pruning is performed at multiple levels; (i) weight pruning [35, 14, 13] removes per parameter basis while (ii) neuron/channel [43, 24] pruning removes per neuron or channel basis and (iii) block/group [11, 25] pruning removes per a block of networks such as residual block or sub-network. Weight pruning generally achieves a very high pruning ratio getting similar performance only with a few percentages of the parameters. This allows a high network compression and accelerates the network on specialized hardware and CPUs. However, weight pruning in a defined format such as N:M block-sparse helps in improving the performance on GPUs [29]. Pruning network at the level of neurons or channels helps reduce the parameters with similar performance, however, the pruning ratio is not that high. All these methods can be applied to the same model as well. In this work, we are particularly interested in neuron-level pruning. Apart from the benefit of reduced parameter, memory, and computation time (or FLOPs), neuron or channel level pruning, the number of neurons in a neural network is small compared to the number of connections and can easily be pruned by measuring the global importance [26, 15, 34, 28, 44]. We focus on the global importance as it removes the need to inject bias about the number of neurons to prune in each layer. This can simplify our problem to remove less significant neurons globally, allowing us to extend it to differently organized networks such as VGG, ResNet, UNet or any other Architecture. However, in this work, we focus only on the layer-wise, block-wise and hierarchical architecture of UNet [38]. Our experiment on Kvasir Segmentation Dataset using UNet model shows that we can successfully prune \(\approx\) 1K Neurons removing \(\approx\)14% of parameters and reducing FLOPs requirement to \(\approx\) 0.5x the original model with approximately the same performance of the original (from 0.59 IoU to 0.58 IoU). That is half the computational requirement of previous model with negligible performance loss. ## 2 Related works ### Real-time Polyp Segmentation Convolution-based approaches [38, 46, 30] have mostly dominated the literature while recently attention-based models [6, 23] have also been gaining traction in polyp segmentation. A number of works have been done in the area of real-time settings too. One of the earliest works [39], evidencing the ability of deep learning models for real-time polyp, has shown to achieve 96% accuracy in screening colonoscopy. Another work [41] utilizing a multi-threaded system in a real-time setting, has shown the deep learning models' ability to process at 25 fps with 76.80 \(\pm\) 5.60 ms latency. Specialized architectures for polyp segmentation have also been studied in the medical imaging literature accounting for real-time performance. MSNet [45] introduced a subtraction unit, performing inference on 352x352 at 70 fps, instead of the usual addition as used in many works such as UNet [38], UNet++ [46], etc. Moreover, NanoNet [21] introduced a novel architecture tailor-made for real-time polyp segmentation primarily relying on a lightweight model hence compromising the learning capacity. SANet [42] has been shown to achieve strong performance with an inference speed of about 72 FPS. It showed samples collected under different conditions show inconsistent colors, causing the feature distribution gap and overfitting issue. Another work [36] used 2D gaussian instead of binary maps to better detect flat and small polyps which have unclear boundaries. ### Neural Network Pruning Works in pruning have somewhat lagged behind in medical imaging as compared to other domains. A recent work [1] has focused its study on reducing the computational cost of model retraining after post-pruning. DNNDeepening-Pruning [8] proposed the two-stage model development algorithm to build the small model. In the first stage, the residual layers are added until the overfitting starts and in the latter stage, pruning of the model is done with some user instructions. Furthermore, [9] has demonstrated evolution strategy-based pruning in generative adversarial networks (GAN) framework for medical imaging diagnostics purposes. In biomedical image segmentation, [19] applied a pruning strategy in U-Net architecture achieving 2x speedup trading off a mere 2% loss in mIOU(mean Intersection Over Union) on PhC-U373 and DIC-HeLa dataset. STAMP [4] tackles the low data regime through online simultaneous training and pruning achieving better performance with a UNet model of smaller size as compared to the unpruned one. In histological images, the superiority of layer-wise pruning and network-wide magnitude pruning has been shown for smaller and larger compression ratios respectively [32]. For medical image localization tasks, pruning has also been used to automatically and adaptively identify hard-to-learn examples [18]. In our study, we make use of pruning to reduce the model's parameters. Previous works showed that global importance estimation can be computed using one or all of forward(activation) [17], parameter(weight) [14] or backward(gradient) [40, 31, 5] signals. Some of the previous techniques use Feature Importance propagation [44] or Gradient propagation [28] to find the neuron importance. Others use both activation and gradient information for pruning [34, 33]. Although there are methods using such signals for pruning at initialization [40], we limit our experiment to the pruning of trained models for a given number of neurons. In this work, we use importance metric similar to Taylor First Order (Taylor-FO) approximations [34, 33] but from heuristics combining both forward and backward signals. The forward signal, namely the activation of the neuron, and the backward signal, the gradient. We use a normalized gradient signal to make the contribution of each example similar for computing the importance score. ## 3 Methodology In this section, we discuss the pruning method in detail, and the application of the pruning method for polyp segmentation tasks, specifically focusing on the UNet architecture. However, it can be applied to other architecture as well. Instead of pruning all layers, we specifically target the convolutional layers for pruning. It is important to note that the term 'neurons' refers to the channels in the context of pruning convolutional layers. Furthermore, we present a method to select the pruned model that is best suited for the task at hand. ### Pruning Method Previous works on global importance-based post-training pruning of neurons focus on using forward and backward signals. Since most of these methods are based on Taylor approximation of the change in loss after removing a neuron or group of parameters, these methods require input and target value for computing Figure 1: Left: Unpruned UNet Model. Right: Model After Purning convolution filters with low importance score. _The exact number of pruned filters is 956, extracted from experiment shown in Fig 2 (top)._ the importance. Instead, we tackle the problem of pruning from the perspective of overall function output without considering the loss. **Forward Signal:** The forward signal is generally given by the pre-activation \((x_{i})\). If a pre-activation is zero, then it has no impact on the output of the function, i.e. the output deviation with respect to the removal of the neuron is zero. If the incoming connection of a neuron is zero-weights, then the neuron can be removed, i.e. it has no significance. If the incoming connection is non-zero then the neuron has significance. Forward signal takes into consideration how data affects a particular neuron. **Backward Signal:** The backward signal is generally given by back-propagating the loss. If the outgoing connection of the neuron is zeros, then the neuron has no significance to the function, even if it has positive activation. The gradient\((\delta x_{i})\) provides us with information on how the function or loss will change if the neuron is removed. **Importance Metric:** Combining the forward and backward signal we can get the influence of the neuron on the loss or the function for given data. Hence, the importance metric \((I_{i})\) of each neuron \((n_{i})\) for dataset of size \(M\) is given by \(I_{i}=\frac{1}{M}\sum_{n=1}^{M}x_{i}.\delta x_{i}\), where \(x_{i}\) is the pre-activation and \(\delta x_{i}\) is its gradient. It fulfills the criterion that importance should be low if incoming or outgoing connections are zeros and higher otherwise. _Problem 1:_ This importance metric \((I_{i})\) is similar to Taylor-FO [34]. However, the metric gives low importance when the gradient is negative, which to our application, is a problem as the function will be changed significantly, even if it lowers the loss. Hence, we calculate the square of importance metric to make it positive. The squared importance metric \((I_{i}^{s})\) is computed as below: \[I_{i}^{s}=\frac{1}{M}\sum_{n=1}^{M}\left(x_{i}.\delta x_{i}\right)^{2}\] _Problem 2:_ During the computation of the gradients, some input examples produce a higher magnitude of gradient, and some input examples produce a lower magnitude of the gradient. Since the magnitude is crucial for computing the importance, different inputs contribute differently to the overall importance score. To this end, we normalize the gradient to the same magnitude of 1. Doing so makes the contribution of each data point equal for computing the importance. **Pruning Procedure:** Consider that pruning is performed using dataset \(\mathbf{D}\in[\mathbf{x}_{0},\mathbf{x}_{1},...\mathbf{x}_{N}]\) of size \(N\). We have a Convolutional Neural Network (CNN) whose output is given by: \(\mathbf{y}_{n}=f_{CNN}(\mathbf{x}_{n})\). We first compute the gradient w.r.t \(\mathbf{y}_{n}\) for all \(\mathbf{x}_{n}\) for given target \(\mathbf{t}_{n}\) as: \[\Delta\mathbf{y}_{n}=\frac{\delta E(\mathbf{y}_{n},\mathbf{t}_{n})}{\delta \mathbf{y}_{n}}\] We then normalize the gradient \(\Delta\mathbf{y}_{n}\) as: \[\Delta\mathbf{\hat{y}}_{n}=\frac{\Delta\mathbf{y}_{n}}{\left\|\Delta\mathbf{y }_{n}\right\|}\] This gradient \(\Delta\mathbf{\hat{y}}_{n}\) is then backpropagated through the \(f_{CNN}\) network to compute the squared Importance score (\(I_{i}^{s}\)) of each convolution filter. ### Pruning UNet for Polyp-Segmentation UNet [38] is generally used for Image Segmentation Tasks. It consists of only Convolutional Layers including Upsampling and Downsampling layers organized in a hierarchical structure as shown in Figure 1. We compute the Importance Score for each Convolutional layer and prune the least important ones. Removing a single convolution filter removes a channel of the incoming convolution layer and the outgoing convolution channel. When used with many channels, we can get a highly pruned UNet with only a slight change in performance. This method can be used to drastically reduce the computation and memory requirements without degrading the performance even without fine-tuning the pruned model. A single computation of Importance Score allows us to prune multiple numbers of neurons and select sparsity with the best FLOPs (or Time-taken) and IoU trade-off. ### Measuring Pruning Performance Performance metrics are crucial for measuring the effectiveness of different pruning algorithms. Some of them are listed below. **FLOPs:** FLOP stands for floating point operation. Floating point operation refers to the mathematical operations performed on floating point numbers. FLOP measures model complexity, with a higher value indicating a computationally expensive model and a lower value indicating a computationally cheaper model with faster inference time. We evaluate an algorithm's efficiency by how many FLOPs it reduces. **Parameters:** Parameters represent learnable weights and biases typically represented by floating point numbers. Models with many parameters need a lot of memory, while models with fewer parameters need less memory. The effectiveness of the pruning algorithm is measured by the reduction in the model's parameters. **Time-taken:** It is the actual wall-clock inference time of model. We measure time taken before and after pruning the network. Time-taken is practical but not the most reliable metric for efficiency gain as it might vary with device and with different ML frameworks. ## 4 Experiments We conduct the experiment for Polyp-Segmentation model pruning using the Kvasir Dataset [20]. We use the pretrained UNet Model for segmentation and prune the Convolutional Filters of the Network to reduce the computational cost as shown in Figure 2. **Procedure:** First, we compute the importance score for each neuron/channel on a given dataset. Secondly, we prune the \(P\) least important neurons of total \(N\) by importance metric (\(I^{s}\)) given by our method. We measure the resulting accuracy and plot the Number of neurons pruned as shown in Figure 2. The pruning is performed using one split of the test dataset and the IoU is measured on another split of the test dataset. Although the pruned models could be finetuned to see an increase in performance, we do not finetune pruned model in our case. We analyse the change in performance (IoU), the efficiency achieved (FLOPs) and the compression (the number of parameters) for different values of the _number-of-neurons-pruned_ in the UNet Model. **Observation:** The experiments show that model generally consists of redundant and less important convolutional channels, which can be pruned with little to no effect on the output of the model. We see in Figure (2 left) that about 50% (\(\approx\)1500 out of 2944) of the neurons can be pruned before the IoU starts to decrease drastically. Furthermore, this result is observed for a varying numbers of data points, which suggests that pruning under different settings creates different pruned architectures, while still following the same pattern of performance retention after pruning of an increasing number of neurons (up to some point). Figure 2: **(Top)** row is the Number of Neurons Pruned vs IoU and Parameters plot. **(Bot)** row is the Number of Neurons Pruned vs Time-taken and Giga-FLOPs plot. Here, Time-taken is measured in seconds for the inference of 100 samples with 10 batch size. **(Left)** column shows pruning performance using 39 data samples for importance estimation. A sample pruning of 956 neurons reduces the FLOPs to 0.477\(\times\) and parameters to 0.864\(\times\) while retaining performance to 0.99\(\times\) the original performance (\(\approx\)0.5795 IoU). The time taken is reduced by \(\approx\)30%. **(Right)** column shows pruning performance using 235 data samples for importance estimation. A sample pruning of 736 neurons reduces the FLOPs to 0.54\(\times\) and parameters to 0.922\(\times\) while retaining the same performance (\(\approx\)0.5879 IoU). Here, we manage to reduce time taken by \(\approx\)26%. The qualitative evaluation (see Figure 3) of the Pruned UNet Model on the Polyp-Segmentation Dataset shows that the pruned model makes slight changes in the output of the unpruned model while preserving most of the important characteristics. We find that these slight changes can be improvements or degradation to the original model outputs but without significantly distorting the model output. ## 5 Conclusion In this work, we propose to use Neuron Level Pruning in the application of the polyp segmentation task for the first time. The benefit of proposed channels or filter pruning can be realized immediately with parallel hardware like GPUs to significantly reduce the computation cost to less than 50% without degrading the performance. Such a reduction in computational cost automatically leads to the potential application in a real-time setting. Computer-assisted treatment of patients, especially during medical tasks like colonoscopy, requires low latency with satisfactory performance such that the pace of treatment is not hindered. Since the polyp's nature can exhibit significant variability during colonoscopy, real-time polyp segmentation models can indeed provide medical personnel useful insights into locating the abnormal growths in the colon thereby assisting in early diagnosis. Moreover, the advanced visualizations aided through real-time diagnosis can indeed lead to determining appropriate treatment approaches. Figure 3: Qualitative comparison of Polyp Segmentation before and after pruning of the UNet model. The pruned model samples are generated from experiment in Fig (2 _left_ with 956 neurons pruned). Moreover, it also allows safe, methodical, and consistent diagnosis of patients. Our work paves the path for off-the-shelf models to be significantly accelerated through neural network pruning in tasks requiring fast inference such as medical imaging, reducing the inference and storage cost. To sum up, in this work, we explore a promising research direction of neural network pruning demonstrating its efficacy in polyp segmentation. We validate our approach of neural network pruning with various experiments by almost retaining the original performance. ## 6 Acknowledgement This work is partly funded by the EndoMapper project by Horizon 2020 FET (GA 863146).
2302.07959
Optimal Distributed Voltage Control via Primal Dual Gradient Dynamics
The rapidly increasing penetration of inverter-based resources into a power transmission network requires more sophisticated voltage control strategies considering their inherent output variabilities. In addition, faults and load variations affect the voltage profile over the power network. This paper proposes a Primal Dual Gradient Dynamics based optimal distributed voltage control approach that optimizes outputs of distributed reactive power sources to maintain an acceptable voltage profile while preserving operational limits. Case studies of this new approach on IEEE test systems have verified its effectiveness.
Mohammed N. Khamees, Yang Liu, Kai Sun
2023-02-15T21:46:10Z
http://arxiv.org/abs/2302.07959v1
# Optimal Distributed Voltage Control via Primal Dual Gradient Dynamics ###### Abstract The rapidly increasing penetration of inverter-based resources into a power transmission network requires more sophisticated voltage control strategies considering their inherent output variabilities. In addition, faults and load variations affect the voltage profile over the power network. This paper proposes a Primal Dual Gradient Dynamics based optimal distributed voltage control approach that optimizes outputs of distributed reactive power sources to maintain an acceptable voltage profile while preserving operational limits. Case studies of this new approach on IEEE test systems have verified its effectiveness. _Keywords-- distributed voltage control, reactive power compensation, optimization._ ## I Introduction The operating condition of a power transmission network continuously changes as a result of load variations, disturbances and other operational uncertainties, leading to voltage fluctuation across all buses of the network. So, facing such traditional challenges and the emerging challenges due to large-scale wind and solar integration, existing voltage control strategies need to be improved. There have been works addressing new challenges in power distribution systems but changes on existing voltage control strategies for transmission systems are more difficult [1]. Power systems have extensively implemented centralized schemes for many optimization and control functions, in which a central controller collects measurements, carries out computations and issues new commands of control [2]. In such a scheme, all the buses are required to communicate with a central controller when communication difficulties like delays, limited bandwidth, node failures, etc. are present [3]. Comparatively, a distributed control scheme employs controllers at multiple buses spreading out through the entire network, in which communication is limited to a set of neighbors of each bus. Therefore, less communication infrastructure, enhanced cybersecurity, robustness to control failure, and the ability to apply parallel computing are various potential benefits of distributed algorithms versus centralized ones [2]. A common approach to implement voltage control is applying optimal power flow with constraints imposed on bus voltages and reactive powers. After solving the problem, the reactive power injections at buses are adjusted. Therefore, this approach is using a feedforward optimization scheme, where the disturbance is presumed to be explicitly known for the controller. In contrast, there is no need to assume this explicit knowledge in feedback optimization, where the controller act based on measured data. Although a power system has a wide range of acceptable operating conditions in which voltages and reactive powers can vary while still satisfying the limits, some of these conditions are better than the others if we consider the operational cost, transmission loss and the loss of opportunity. An example of the last is related to inverter-based renewable energy sources. When their inverters are expected to provide more reactive powers, a price is yielded since it is more economical for them to inject more real powers than reactive powers. In this work, we exploit the feedback optimization and the distributed control strategy to propose continuous optimal feedback voltage controller, in which each bus locally communicate with its neighbors that are physically connected to share local measurements enabling the controller to adjust its reactive power output using the Primal Dual Gradient Dynamics (PDGD) algorithm. Ref. [3] has applied PDGD to optimal distributed feedback voltage control for power distribution systems, in which the radial structure of a distribution network is utilized to simplify the model of power flows for control. This paper will focus on a power transmission network having a meshed structure. The PDGD is applied in the proposed optimal distributed voltage controller to exploit the network sparsity to optimize the problem completely in distributed fashion. The controller will (1) minimize the operational cost, (2) keep voltages in acceptable ranges, and (3) satisfy reactive power limits. The use of PDGD is motivated by the structure of the optimization problems, which provides the Lagrangian saddle-point dynamics an approach to be solved in distributed fashion. Hence, the dynamics is common in network optimization [4]. In [5], the authors use the PDGD distributed optimizer to solve primary frequency regulation load control problem. In [6], an energy-based approach is presented to study the stability of power systems coupled with market dynamics, the PDGD is used to form distributed dynamic optimization algorithm. The rest of the paper is organized as follows: Section II introduces the power system model and provides a detailed discussion of the proposed optimal distributed feedback voltage controller. The case studies on IEEE benchmark systems are presented in section III. Finally, conclusions are drawn in section IV. ## II Problem Formulation ### _Bus Injection Mode_ In this paper, we consider bus injection model to represent the transmission network. Where power flow equations of an N bus system can be written as follows in the polar form: \[P_{k}=V_{k}\sum_{n=1}^{N}Y_{kn}V_{n}cos(\delta_{k}-\delta_{n}-\theta_{kn}) \tag{1}\] \[Q_{k}=V_{k}\sum_{n=1}^{N}Y_{kn}V_{n}\sin(\delta_{k}-\delta_{n}-\theta_{kn}) \tag{2}\] where \(P_{k}\) and \(Q_{k}\) are the real and reactive powers injected into bus \(k\). \(V_{k}\) and \(\delta_{k}\) are the voltage magnitude and phase angle at bus \(k\). \(Y_{kn}\) and \(\theta_{kn}\) are the magnitude and phase angle of each element of admittance matrix. To linearize these equations, the Taylor's series expansion of a multivariable function can be applied to result in first order linear equations: \[\begin{pmatrix}\dfrac{\partial\mathbf{P}}{\partial\mathbf{\delta}}&\dfrac{ \partial\mathbf{P}}{\partial\mathbf{V}}\\ \dfrac{\partial\mathbf{P}}{\partial\mathbf{\delta}}&\dfrac{\partial\mathbf{Q }}{\partial\mathbf{V}}\end{pmatrix}\begin{pmatrix}\mathbf{\Delta\delta}\\ \mathbf{\Lambda V}\end{pmatrix}=\begin{pmatrix}\mathbf{\Lambda P}\\ \mathbf{\Lambda Q}\end{pmatrix} \tag{3}\] where the power mismatch on the right-hand side is approximated by the Jacobian matrix multiplied by the deviation of the state vector. It is clear the Jacobian matrix is not constant and to avoid some computations, one can use some assumptions which are supported by the physics of power flows in transmission lines to simplify the Jacobian matrix. Therefore, in a power transmission system that is properly designed and operated the following holds [7]: 1. Angular differences among buses are very small. 2. The line susceptance's are much larger than the line conductance's. 3. The power injected into bus is much less than which would flow if all lines from that bus were short circuited to the reference. Therefore, applying these approximations to simplify the elements of the Jacobian matrix as follows: \[\begin{pmatrix}\textbf{-B}&\textbf{G}\\ \textbf{-G}&\textbf{-B}\end{pmatrix}\begin{pmatrix}\mathbf{\Lambda\delta}\\ \mathbf{\Lambda V}\end{pmatrix}=\begin{pmatrix}\mathbf{\Lambda P}\\ \mathbf{\Lambda Q}\end{pmatrix} \tag{4}\] where \(\mathbf{G}\) and \(\mathbf{B}\) are the conductance and susceptance matrices, respectively. In [3], a relaxed branch model of mesh network, which lacks the consistency in the voltage angles, is used to represent distribution network. ### _Optimal Distributed Feedback Voltage Controller_ As mentioned earlier, we build the controller to have a feedback control loop that uses the local voltage measurement at each bus with other information as input to the controller to determine the output, the injected reactive powers at time \(t\). Let \(\mathbf{Q}(t)\) be a given reactive power injections at instant \(t\). These injections will determine the voltage profile \(\mathbf{v}(t)\). Next, using the voltage profile with other information, the controller computes \(\mathbf{Q}(t+1)\), the reactive powers at time \(t+1\). In this input-output relationship, the controller does not need to know details behind the system, thanks to the feedback control scheme. Therefore, the controller will be injecting reactive powers at each time \(t\) to control the voltages while has no control over real power, i.e., it is constant and \(\mathbf{\Delta P}\) is zero. Hence, we can use (4) to find how voltages depends on reactive powers as follows: \[\mathbf{\Lambda V(\Lambda Q)=\begin{bmatrix}-(\mathbf{G}\mathbf{B}^{*}\mathbf{ G}+\mathbf{B})\end{bmatrix}^{-1}\mathbf{\Lambda Q} \tag{5}\] The controller aims to keep the voltage within the acceptable range, while satisfying the reactive power constraints, and drive the system to the optimal operating point that has the least operational cost using local voltage measurement and shared variables among neighbors. To give more room for DERs to generate more real power we consider the objective function to be the injected reactive powers that need to be minimized. Hence, the problem can be formulated as the following: \[\underset{\mathbf{Q}}{\text{min}} f\left(\mathbf{Q}\right)=\sum_{i=1}^{C}Q_{i}^{2} \tag{6}\] \[\underline{v}<v_{i}\left(\mathbf{Q}\right)<\overline{v_{i}}\] (7) \[\underline{Q}<Q_{i}<\overline{Q}_{i} \tag{8}\] where \(\mathbf{Q}\) is the reactive power injections vector and it is the decision variable. \(C\) is the number of controllers. \(v_{t}\) and \(\overline{v_{i}}\) are the lower and upper voltage limits, respectively. \(\overline{Q_{i}}\) and \(\overline{Q_{i}}\) are the lower and upper reactive power injected by controller \(i\). Therefore, the Lagrangian function for this optimization problem can be written as: \[L(\mathbf{Q},\mathbf{\lambda},\mathbf{\mu})= f(\mathbf{Q})+\underline{\lambda}^{T}\left(\mathbf{y}-\mathbf{v}( \mathbf{Q})\right)+\overline{\lambda}^{T}\left(\mathbf{v}(\mathbf{Q})- \overline{\mathbf{v}}\right)+\] \[\underline{\mathbf{\mu}^{T}}\left(\underline{\mathbf{Q}}-\mathbf{Q }\right)+\overline{\mathbf{\mu}^{T}}(\mathbf{Q}-\overline{\mathbf{Q}}) \tag{9}\] where \(\underline{\lambda}\) and \(\overline{\lambda}\) are the Lagrangian multipliers vectors for voltage lower and upper limits, respectively. Each has a dimension equal or less than the number of load buses \(M\), i.e., buses provided with control component. \(\underline{\mathbf{\mu}}\) and \(\overline{\mathbf{\mu}}\) are the Lagrangian multipliers vectors for reactive power injections lower and upper limits both with dimensions of \(C\). All these Lagrangian multipliers act as constraints violation level indicators for the constrained variables. It is worth to note that not all load buses may have control component. Ref. [3] has used the augmented Lagrangian in which no explicit constraints is applied on the reactive power injections. Therefore, a soft thresholding function is employed with projection of reactive power injections onto constraints which caused inconsistency in updating the Lagrangian multiplier corresponding to the reactive power injections. To solve the optimization problem the PDGD, more details can be found in [8], is employed. At each time step \(t\), the measured voltage at each node is employed to update the optimization's variables. The controller performs a gradient descent for the gradient of Lagrangian with respect to \(\mathbf{Q}\), the primal variable. At the same time, it calculates gradient ascent along the Lagrangian with respect to dual variables, i.e., \(\mathbf{\lambda}\) and \(\mathbf{\mu}\). Then, the variables are updated. Finally, the controller injects the updated values of the reactive power into the grid. Therefore, it is looking for the saddle point of this dynamical system (10). \[\frac{\partial Q_{i}(t)}{\partial t}=-\begin{bmatrix}\frac{\partial f(\mathbf{Q}(t))}{ \partial Q_{i}(t)}+\sum_{j=1}^{u}\frac{\partial v_{j}(\mathbf{Q}(t))}{\partial Q _{i}(t)}\left(\frac{\partial}{\partial_{j}}(t)-\underline{\lambda}_{j}(t) \right)\\ +\overline{\mu}_{j}(t)-\underline{\mu}_{j}(t)\end{bmatrix} \tag{10.a}\] \[\frac{\partial\overline{\lambda}_{i}(t)}{\partial t}=\begin{bmatrix}v_{i}( \mathbf{Q})-\overline{v}_{i}\end{bmatrix}_{\underline{\lambda}_{i}(t)}^{+} \tag{10.b}\] \[\frac{\partial\underline{\lambda}_{i}(t)}{\partial t}=\begin{bmatrix}\underline {v}_{i}-v_{i}(\mathbf{Q})\end{bmatrix}_{\underline{\lambda}_{i}(t)}^{+} \tag{10.c}\] \[\frac{\partial\overline{\mu}_{i}(t)}{\partial t}=\begin{bmatrix}Q_{i}(t)- \overline{Q}_{i}\end{bmatrix}_{\underline{\mu}_{i}(t)}^{+} \tag{10.d}\] \[\frac{\partial\underline{\mu}_{i}(t)}{\partial t}=\begin{bmatrix}\underline {Q}_{i}-Q_{i}(t)\end{bmatrix}_{\underline{\theta}(t)}^{+} \tag{10.e}\] The first term in (10.a) is the objective function partial derivative with respect to injected reactive power, equals \(2Q_{i}(t)\) in our case. In addition, the second term contains partial derivative of voltage with respect to injected reactive power and can be calculated using (5), which is the coefficient of \(\Delta\mathbf{Q}\). The voltage appearing in equations (10.b-c) is equivalent to the measured voltage. Now, we have hybrid automaton system corresponding to our dynamical system (10), due to the use of positive projection in (10.d-e). The positive projection is employed to keep the Lagrangian multipliers evaluation positive. ## III Case Study The proposed approach is tested on two IEEE systems available in the test case archive [9] where the power flow equations are solved using MATPOWER [10]. We assume that reactive power sources are available at all load buses and can supply or consume a specified amount of reactive power. Two general cases are examined, static load and varying load with 100 MVA as the base for all cases. We use Matlab's ODE solver ode23t for all simulations. ### _Static Load_ First, we consider load not varying with time. Starting with the IEEE 14-bus system. It has bus 1 as the slack bus. Buses 2, 3, 6 and 8 are voltage-controlled buses (PV buses) having magnitudes voltage fixed, while all others are load buses. The system is heavily loaded resulting in low voltage profile as shown in TABLE I. Each load bus provided with control component which has a specific capacity to consume or supply Q according to its physical rating. Control components in this study can supply or consume up to 20 MVar. The voltage tolerance is set to 5% for all the cases. buses with installed control components that can supply or consume 20 MVar. This time the controller is tested under light loading to have high voltage profile. The reactive power injections and voltages evaluation are shown in Fig. 4. ### _Varying Load_ To test the controller under more practical operating conditions, we used a varying load with 24-hour time span and 1-hour time resolution, shown in Fig. 5. For the IEEE 14-bus system this load profile exists at all buses except the slack, 7, and 8 which originally have no load. So, the system is heavily loaded, causing low voltage profile at the load peak. The controller is adjusted to run every 1 hour since the operating condition has 1-hour resolution. To assess the performance of the controller the voltage profile without controller is shown in Fig. 6 and in Fig. 7 after applying the controller. The simulation result shows the controller quickly drives the voltage profile back to the acceptable range whenever it is out while keeps reactive power injection in its limits as shown in Fig 8. For the IEEE 30-bus system the voltage profile without controller highly fluctuates throughout the day as shown in Fig. 9. We run the controller under these conditions, the voltage profile for load buses is given in Fig. 10. It demonstrates that regardless of the load variations, the controller drives the voltages back into the acceptable range with the least cost. Fig. 11 shows the reactive powers injected by control components that exist over the system. It illustrates the control components consuming reactive power up to the 8\({}^{\text{th}}\) hour then supplying reactive power to keep the voltage profile within the acceptable range. ## IV Conclusion In this paper, optimal distributed feedback voltage controller was proposed. The performance was tested on two IEEE bus systems under static load and time varying load with time span of one day and 1 hour resolution. The controller managed to keep the voltage profile within acceptable magnitudes while satisfying the reactive power constrains of the control components in the optimum way in terms of operational cost. For future work, including real power to the control scheme is our goal.
2305.05131
Solvable subgroup theorem, length function and topological entropy
We prove a general solvable subgroup theorem in terms of length functions. As applications, we obtain a solvable subgroup theorem in dynamical systems: any solvable group of finite Hirsch length acting on a smooth manifold with uniformly positive topological entropies must be virtually $\mathbb{Z}^n$.
Shengkui Ye
2023-05-09T02:30:07Z
http://arxiv.org/abs/2305.05131v1
# Solvable subgroup theorem, length function and topological entropy ###### Abstract We prove a general solvable subgroup theorem in terms of length functions. As applications, we obtain a solvable subgroup theorem in dynamical systems: any solvable group of finite Hirsch length acting on a smooth manifold with uniformly positive topological entropies must be virtually \(\mathbb{Z}^{n}\). ### Introduction The solvable subgroup theorems were established in many contexts of mathematics, eg. Gromoll-Wolf, Lawson-Yau for smooth manifolds, Bridson-Haefliger [3] for CAT(0) spaces, Gersten-Short [7] for biautomatic groups, Conner [5] for stable norms coming from left-invariant metrics, Prytula [15] for systolic complexes, and so on. Basically, the theorems say that solvable groups acting nicely on nice spaces are special (eg. virtually abelian or virtually abelian-by-abelian). In the theory of dynamic systems, Hu-Shi-Wang [10] proved that a Heisenberg group acting on smooth Riemannian manifolds are restrictive, in the sense that the central elements must have vanishing Lyapunov exponents and topological entropy. In this note, we obtain a general Solvable Subgroup Theorem in dynamic systems theory. Recall that a group virtually has a property \(P\) if some finite-index subgroup has the property \(P\). **Theorem 0.1**: _Let \(G\) be a group consisting of \(C^{\infty}\)-diffeomorphisms of a closed Riemannian manifold \(M\) such that each non-identity element has a positive topological entropy (resp. a positive minimal Lyapunov exponent). Then each solvable subgroup \(H<G\) of finite virtual cohomological dimension (eg. having finite Hirsch length) is virtually abelian-by-abelian. Furthermore, if the topological entropies (resp. Lyapunov exponents) have a uniform positive lower bound, then \(H\) is virtually a finitely generated abelian group \(\mathbb{Z}^{n}\)._ The proof is based on a study of length functions defined by the author in [18]. Let \(G\) be a group. A real-valued function \(l:G\rightarrow[0,\infty)\) is called a length function if (1) \(l(g^{n})=|n|l(g)\) for any \(g\in G\) and \(n\in\mathbb{Z}\); (2) \(l(hgh^{-1})=l(g)\) for any \(h,g\in G\); and (3) \(l(ab)\leq l(a)+l(b)\) for _commuting_ elements \(a,b\). Length functions exist in many branches of mathematics, eg. stable word lengths, stable norms, smooth measure-theoretic entropy, translation lengths on CAT(0) spaces and Gromov \(\delta\)-hyperbolic spaces, stable norms of quasi-cocycles, rotation numbers of circle homeomorphisms, dynamical degrees of birational maps, absolute values of Margulis invariants (cf. [18], Section 2) and filling volumes (cf. [2]). A length function \(l\) is called purely positive if \(l(g)>0\) for each torsion-free element \(g.\) If there is a constant \(c>0\) such that \(l(g)>c\) for any nonzero \(l(g),\) the length function \(l\) is called discrete. A group \(G\) is called purely positive or discrete if there exists a length function \(l\) on \(G\) with the corresponding properties. **Theorem 0.2**: _Let \(G\) be a finitely generated solvable group with a purely positively length function. Suppose that \(G\) is virtually torsion-free and any abelian subgroup \(A\) has finite \(\dim_{\mathbb{Q}}(A\bigotimes_{\mathbb{Z}}\mathbb{Q})<+\infty\). Then \(G\) is virtually either abelian or a non-nilpotent \(\mathbb{C}\)-linear subgroup of \(H\rtimes\mathbb{Z}^{n}\) for some integer \(n\) and a finitely generated \(\mathbb{Z}[\mathbb{Z}^{n}]\)-module \(H.\)_ **Theorem 0.3**: _Every solvable subgroup of finite virtual cohomological dimension in a discretely purely positive group is a finite extension of \(\mathbb{Z}^{n}.\)_ The following question was asked by Hu-Shi-Wang [10] (Question 1.3). **Question 0.4**: _Let \(H=\langle f,g,h\mid fh=hf,gh=hg,[f,g]=h\rangle\) be the Heisenberg group, acting on a compact \(C^{\infty}\)-Riemannian manifold \(M\) preserving a Borel probability measure \(\mu.\) Is the norm \(\|Dh^{n}_{x}\|\) bounded by \(e^{\sqrt{n}\varepsilon}\) for some \(\varepsilon>0,\) or even by a polynomial in \(n\) for \(\mu\)-a.e. \(x\in M\)?_ We give a positive answer to this question in the case of exponential function. **Theorem 0.5**: _There exist constants \(K,C>0\) such that_ \[\log\|Dh^{n}_{x}\|\leq K\sqrt{n}+C\] _for any \(x\in M.\)_ ## 1 Semi-direct product Let \(A\in\mathrm{GL}_{n}(\mathbb{Z})\) be a matrix and \(G=\mathbb{Z}^{n}\rtimes_{A}\mathbb{Z}\) the semi-direct product. **Lemma 1.1**: _Any length function \(l:G\rightarrow\mathbb{R}_{\geq 0}\) can be extended to be a continuous length function \(l:\mathbb{R}^{n}\)\(\rightarrow\mathbb{R}_{\geq 0}\) satisying the following properties:_ _1) (subadditive) \(l(r_{1}+r_{2})\leq l(r_{1})+l(r_{2})\) for any \(r_{1},r_{2}\in\mathbb{R}^{n}.\)_ _2) (homogenous) \(l(ar)=|a|l(r)\) for any \(a\in\mathbb{R}\) and \(r\in\mathbb{R}^{n}.\)_ _3) (conjugation invariant) \(l(r)=l(Ar)\) for any \(r\in\mathbb{R}^{n}.\)_ **Proof.** For any \(q=(x_{1},\cdots,x_{n})^{T}\in\mathbb{Q}^{n},\) let \(d\) be the least common multiple of the denominators of \(x_{i}.\) Define \(l(q)=\frac{1}{d}l(dq).\) For any integer \(k>0,\) we have \(l(kq)=\frac{1}{|k|}l(dkq)\) and \(l(q)=\frac{1}{k}l(kq).\) Therefore, \(l(rq)=|r|l(q)\) for any rational number \(r\) and any element \(q\in\mathbb{Q}^{n}.\) Suppose that \(\{q_{i}\}\subset\mathbb{Q}^{n}\) is a Cauchy sequence. For any \(\varepsilon>0,\) there exists \(N\) such that when \(k,m>N,\) we have \[\|q_{k}-q_{m}\parallel<\varepsilon.\] Suppose that the \(i\)-th component \((q_{k})_{i}=\frac{r_{ki}}{s_{ki}}\) for coprime integers \(r_{ki},s_{ki}.\) Denote by \(\{e_{1},...,e_{n}\}\) the standard basis of \(\mathbb{Z}^{n}.\) Then \[|l(q_{k})-l(q_{m})| = |\frac{1}{\mbox{\rm lcm}(s_{k1},s_{k2},...,s_{kn})}l(\mbox{\rm lcm} (s_{k1},s_{k2},...,s_{kn})(\frac{r_{k1}}{s_{k1}},\frac{r_{k2}}{s_{k2}},..., \frac{r_{kn}}{s_{kn}}))-l(q_{m})|\] \[\leq \sum_{i=1}^{n}l((\frac{r_{ki}}{s_{ki}}-\frac{r_{mi}}{s_{mi}})e_{i})\] \[\leq \sum_{i=1}^{n}|\frac{r_{ki}}{s_{ki}}-\frac{r_{mi}}{s_{mi}}|l(e_{i})\] \[< n\varepsilon Max\{l(e_{1}),...,l(e_{n})\}.\] This proves that \(\{l(q_{i})\}\) is a Cauchy sequence. For any \(r\in\mathbb{R}^{n},\) choose a rational sequence \(q_{i}\to r\) and define \(l(r)=\lim l(q_{i}).\) For any \(r_{1},r_{2}\in\mathbb{R}^{n},\) let rational sequences \(q_{1i}=\frac{r_{1i}}{s_{1i}}\to r_{1},q_{2i}=\frac{r_{2i}}{s_{2i}}\to r_{2}\) (here \(r_{1i},r_{2i}\in\mathbb{Z}^{n},s_{1i},s_{2i}\in\mathbb{Z}\)). We have that \[l(q_{1i}+q_{2i}) = l(\frac{r_{1i}}{s_{1i}}+\frac{r_{2i}}{s_{2i}})\] \[= l(\frac{r_{1i}s_{2i}+r_{2i}s_{1i}}{s_{1i}s_{2i}})=\frac{1}{s_{1i }s_{2i}}l(r_{1i}s_{2i}+r_{2i}s_{1i})\] \[\leq \frac{1}{s_{1i}s_{2i}}(l(r_{1i}s_{2i})+l(r_{2i}s_{1i}))\] \[= l(\frac{r_{1i}}{s_{1i}})+l(\frac{r_{2i}}{s_{2i}})=l(q_{1i})+l(q_ {2i}).\] Therefore, \[l(r_{1}+r_{2}) = \lim l(q_{1i}+q_{2i})\] \[\leq \lim l(q_{1i})+\lim l(q_{2i})=l(r_{1})+l(r_{2}).\] This proves the subadditivity of \(l\) on \(\mathbb{R}^{n}.\) From the definition, we obviously have 2). Since the action of \(A\) on \(\mathbb{Z}^{n}\) is linear, the action extends obviously to \(\mathbb{R}^{n},\) which gives 3). A similar proof shows the following. **Lemma 1.2**: _Any length function \(l:\mathbb{Z}^{n}\rightarrow\mathbb{R}_{>0}\) can be extended to be a unique continuous length function \(l^{\prime}:\mathbb{R}^{n}\rightarrow\mathbb{R}_{>0}.\)_ **Proof.** The existence is the same as in the proof of Lemma 1.1. In other words, for any \(q=(x_{1},\cdots,x_{n})^{T}\in\mathbb{Q}^{n},\) let \(d\) be the least common multiple of the denominators of \(x_{i}.\) Define \(l(q)=\frac{1}{d}l(dq).\) For any \(r\in\mathbb{R}^{n},\) choose a rational sequence \(q_{i}\to r\) and define \(l(r)=\lim l(q_{i}).\) The uniqueness of \(l^{\prime}\) comes from the fact that a continuous function on \(\mathbb{R}^{n}\) depends only on its image on \(\mathbb{Q}^{n}.\) **Lemma 1.3**: _Let \(A\in\mbox{\rm GL}_{n}(\mathbb{Z})\) be a \((\mathbb{C}\)-)diagonalizable matrix without eigenvalues of norm \(1.\) Then any length function \(l\) of \(G=\mathbb{Z}^{n}\rtimes_{A}\mathbb{Z}\) vanishes on \(\mathbb{Z}^{n}.\)_ **Proof.** Consider the real Jordan form \[\begin{bmatrix}A_{1}&&&\\ &A_{2}&&\\ &&\ddots&\\ &&&A_{r}\end{bmatrix}\] of \(A\), where \(A_{i}\) is either a real Jordan block or \[a\begin{bmatrix}\cos\phi&-\sin\phi\\ \sin\phi&\cos\phi\end{bmatrix},\] where \(a\neq 1\) is a positive real number. Choose the corresponding basis \(\{v_{1},..,v_{n}\}\) from corresponding \(A_{i}\)-invariant subspaces for \(\mathbb{R}^{n}.\) If \(Av_{i}=\lambda_{i}v_{i}\) for \(|\lambda_{i}|\neq 1,\) then \(l(v_{i})=\lambda_{i}l(v_{i})\) implies \(l(v_{i})=0.\) Otherwise, \[l(v_{i})=l(A^{k}v_{i})=a^{k}l((\frac{1}{a}A_{i})^{k}v_{i})\] for any integer \(k.\) Since \(\frac{1}{a}A_{i}\) is an rotation matrix and \(l\) (a continuous function by Lemma 1.1) is bounded on compact set, we get that \(l(v_{i})=0\) when \(a\neq 1.\) This shows that the length function \(l\) vanishes on a basis of \(\mathbb{R}^{n}\) and thus vanishes on the whole \(\mathbb{R}^{n}\) by Lemma 1.1. **Lemma 1.4**: _Let \(l:\mathbb{R}^{n}\rightarrow\mathbb{R}\) be a length function. If the restriction \(l|_{\mathbb{Z}^{n}}\) is discretely purely positive, then \(l\) is purely positive._ **Proof.** By Tao ect. [6], there is a Banach space \((B,\|\|)\) and an additive group homomorphism \(f:\mathbb{R}^{n}\to B\) such that \(l(x)=\parallel f(x)\parallel\) for any \(x\in\mathbb{R}^{n}.\) If \(l(x)=0\) for some \(x\neq 0,\) the kernel \(\ker f\) is non-trivial and \(\operatorname{Im}f=\mathbb{R}^{m},m<n.\) Since \(l|_{\mathbb{Z}^{n}}\) is discretely purely positive, the group \(\mathbb{Z}^{n}\) acts freely and properly discontinuously on \(\operatorname{Im}f=\mathbb{R}^{m}.\) This implies \(m=n,\) a contradiction. Therefore, \(l\) is purely positive. **Remark 1.5**: _Without the condition that \(l|_{\mathbb{Z}^{n}}\) is discrete, Lemma 1.4 is not true. For example, let \(v_{1}=(1,\sqrt{2}),v_{2}=(-\sqrt{2},1)\in\mathbb{R}^{2}.\) For the decomposition \(\mathbb{R}^{2}=\mathbb{R}v_{1}\bigoplus\mathbb{R}v_{2},\) let \(f:\mathbb{R}^{2}\rightarrow\mathbb{R}\) be the projection onto the first component. The length function \(l\) defined by \(l(x)=|f(x)|\) is purely positive on \(\mathbb{Z}^{2},\) since the line \(\mathbb{R}v_{1}\) has irrational slope._ **Lemma 1.6**: _Let \(A\in\operatorname{GL}_{n}(\mathbb{Z}).\) If the group \(G=\mathbb{Z}^{n}\rtimes_{A}\mathbb{Z}\) admits a discrete purely positive length function function \(l,\) then \(G\) is virtually abelian._ **Proof.** By Lemma 1.1, we have a length function \(l:\mathbb{R}^{n}\mathcal{\rightarrow}\mathbb{R}_{\geq 0}\) extending \(\mathbb{Z}^{n}\mathcal{\rightarrow}\mathbb{R}_{\geq 0}.\) Suppose that \(l|_{\mathbb{Z}^{n}}\) is discretely purely positive. Lemma 1.4 implies that \(l|_{\mathbb{R}^{n}}\) is purely positive. Consider the real Jordan form \[\begin{bmatrix}A_{1}&&&\\ &A_{2}&&\\ &&\ddots&\\ &&&A_{r}\end{bmatrix}\] of \(A,\) where \(A_{i}\) is either a real Jordan block or \[a\begin{bmatrix}\cos\phi&-\sin\phi\\ \sin\phi&\cos\phi\end{bmatrix},\] for some real number \(a>0.\) If \(Av=\lambda v\) for a unit eigenvector \(v\) and an eigenvalue \(\lambda\) with \(|\lambda|\neq 1,\) then \(l(v)=l(Av)=\lambda l(v)\) implying \(l(v)=0.\) Therefore, all the eigenvalues have norm 1. Suppose that there is a non-trivial Jordan block \[J=\begin{bmatrix}1&1&0\\ &1&\ddots&0\\ &&\ddots&1\\ &&&1\end{bmatrix}_{k\times k}\] with \(k\geq 2.\) Let \(\{e_{1},...,e_{k}\}\) be the basis of the eigenspace \(\mathbb{R}^{k}\) under which the representation matrix of the restriction of \(A\) is of the form \(J.\) For any positive integer \(m,\) we have \[J^{m}e_{2}=me_{1}+e_{2}\] and \[ml(e_{1})\leq l(J^{m}e_{2})+l(e_{2})=2l(e_{2}).\] Since \(m\) is arbitrary, we get \(l(e_{1})=0.\) Therefore, all the real Jordan blocks are \(A\) are diagonal. Suppose that \(a\neq 1.\) For any \(v_{i}\) in the subspace corresponding to \(A_{i},\) we have \[l(v_{i})=l(A^{k}v_{i})=a^{k}l((\frac{1}{a}A_{i})^{k}v_{i})\] for any integer \(k.\) Since \(\frac{1}{a}A_{i}\) is a rotation matrix and \(l\) is bounded on compact set, we get that \(l(v_{i})=0.\) Therefore, \(a=1\) and \(A\) is conjugate to a block sum of rotation matrices and \(1s\). Since \(A\) is an integer matrix, all the rotation matrices are of finite orders. This proves that \(A\) is of finite order and \(G\) is virtually abelian. On the other hand, Conner [5] (Example 7.1) gave an example \(\mathbb{Z}^{n}\rtimes_{A}\mathbb{Z}\) with purely positive word length, where \[A=\begin{bmatrix}0&0&0&-1\\ 1&0&0&2\\ 0&1&0&-1\\ 0&0&1&2\end{bmatrix}\] is an irreductible matrix with an eigenvalue of norm 1. Recall that an integral matrix \(A_{n\times n}\) is irreducible if it has no proper non-trivial invariant subgroup in \(\mathbb{Z}^{n}.\) Using the same idea as Conner [5], we can prove the following result. **Lemma 1.7**: _Let \(A\in\mathrm{GL}_{n}(\mathbb{Z})\) be an irreducible matrix with a norm-one eigenvalue. Then the stable word length on the semi-direct product \(\mathbb{Z}^{n}\rtimes_{A}\mathbb{Z}\) is purely positive._ **Proof.** Let \(\mathbb{C}^{n}>\mathbb{Z}^{n}\) be a normed vector space, given by \[\|a_{1}e_{1}+a_{2}e_{2}+\cdots+a_{n}e_{n}\|=a_{1}\bar{a}_{1}+a_{2}\bar{a}_{2}+ \cdots+a_{n}\bar{a}_{n},\] where each \(a_{i}\in\mathbb{C}\) and \(\{e_{1},e_{2},...,e_{n}\}\) is the standard basis. For each \[g=(z,t^{i})\in\mathbb{C}^{n}\rtimes_{A}\mathbb{Z},\] where \(t\) is a generator of \(\mathbb{Z},\) let \[L(g)=\inf\{\sum_{j=1}^{k}(\|z_{j}\|+|i_{j}|):g=(z_{1},t^{0})(0,t^{i_{1}})(z_{2},t^{0})(0,t^{i_{2}})\cdots(z_{k},t^{0})(0,t^{i_{k}})\}\] for some \(z_{1},z_{2},...,z_{k}\in\mathbb{C}^{n}\) and integers \(i_{1},i_{2},...,i_{k}\). From the definition, we have \[L(gh)\leq L(g)+L(h)\] and \(L(g)=L(g^{-1})\) for any \(g,h\in\mathbb{C}^{n}\rtimes_{A}\mathbb{Z}\). Define \(l(g)=\lim_{n\to\infty}\frac{L(g^{n})}{n}.\) Note that \(l(g)\) is the stable word length when \(g\in\mathbb{Z}^{n}\rtimes_{A}\mathbb{Z}.\) Since \(A\) is irreducible, \(A\) is diagonalizable over \(\mathbb{C}.\) Let \(v\) be a unit eigenvector such that \(Av=v.\) Since \(A\) is diagonalizable over \(\mathbb{C},\) the complement \((\mathbb{C}v)^{\perp}\) is \(A\)-invariant as well. We have an inclusion \(\mathbb{C}v\times\mathbb{Z}\to\mathbb{C}^{n}\rtimes_{A}\mathbb{Z}\). For each \(g=(z,t^{i})\in\mathbb{C}^{n}\rtimes_{A}\mathbb{Z},\) write \(z=z_{v}+z_{v}^{\prime}\) with \(z_{v}\in\mathbb{C}v\) and \(z_{v}^{\prime}\in(\mathbb{C}v)^{\perp}.\) For arbitrary \(z\in\mathbb{C}v\) with \((z,1)=(z_{1},t^{i_{1}})(z_{2},t^{i_{2}})\cdots(z_{k},t^{i_{k}}),\) we have \[z = (z,1)=(z_{1},t^{i_{1}})(z_{2},t^{i_{2}})\cdots(z_{k},t^{i_{k}})\] \[= (z_{1v},t^{i_{1}})(z_{2v},t^{i_{2}})\cdots(z_{kv},t^{i_{k}})=z_{1v }+z_{2v}+\cdots+z_{kv}\] with the decomposition of each \(z_{i}=z_{iv}+z_{iv}^{\prime}.\) This means that \(L(z)=\|z\|.\) Let \[I=\{g\in\mathbb{Z}^{n}\rtimes_{A}\mathbb{Z}:l(g)=0\}.\] For any \(g=(z,t^{i}),\) we have \(L(g)\geq|i|.\) This implies that \(I<\mathbb{Z}^{n}.\) Since \(l\) is conjugate invariant and homogenuous, \(I\) is an \(A\)-invariant direct summand of \(\mathbb{Z}^{n}\). Since \(A\) is irreducible, we have either \(I=\mathbb{Z}^{n}\) or \(I=0.\) But if \(I=\mathbb{Z}^{n},\) we will have that \(l\) vanishes on \(\mathbb{C}^{n}\) by noting that \(\{x\in\mathbb{C}^{n}\rtimes_{A}\mathbb{Z}:l(g)=0\}\) is a \(\mathbb{C}\)-vector subspace of \(\mathbb{C}^{n}.\) This implies that \(I=0\) and \(l\) is purely positive. Combining Lemma 1.3 and Lemma 1.7, we have the following. **Corollary 1.8**: _Let \(A\in\mathrm{GL}_{n}(\mathbb{Z})\) be an irreducible matrix. The semi-direct product \(\mathbb{Z}^{n}\rtimes_{A}\mathbb{Z}\) is purely positive if and only if there is a norm-one eigenvalue of \(A.\)_ ## 2 Proofs of theorems **Lemma 2.1**: _([18], Corollary 4.7) A purely positive finitely generated nilpotent group is virtually abelian._ **Remark 2.2**: _Without the condition of finite generation, Lemma 2.1 is not true. For each integer \(n\geq 2,\) let \(H_{n}\) the group consisting of all \(3\times 3\) strictly upper triangular matrices with entries in \(\mathbb{Z}/n.\) Let \(G=(\bigoplus_{n\geq 2}H_{n})\times\mathbb{Z}\). The group \(G\) is nilpotent, with a purely positive length function coming from the projection \(G\to\mathbb{Z}\). However, \(G\) is not virtually abelian._ **Lemma 2.3**: _Let \(G\) be a solvable group equipped with a purely positive length function \(l.\) Let \(A\) be a maximal abelian normal subgroup of \(G\). Assume that \(A\) is torsion-free. Then \(C_{G}(A)=A\), where \(C_{G}(A)\) is the centralizer of \(A\) in \(G.\)_ **Proof.** Note that \(C_{G}(A)\) is a normal subgroup of \(G\). Actually, for any \(x\in C_{G}(A),t\in G,a\in A,\) we have \(txt^{-1}a=txt^{-1}t(t^{-1}at)t^{-1}=tx(t^{-1}at)t^{-1}=t(t^{-1}at)xt^{-1}=atxt^{ -1}.\) Suppose that \(C_{G}(A)\neq A.\) Let \(B/A\) be a maximal abelian characteristic subgroup of the solvable group \(C_{G}(A)/A\). Note that \(B/A\) is non-trivial, as it can contain the last non-trivial term in the derived series of \(C_{G}(A)/A.\) Since \(C_{G}(A)/A\) is normal in \(G/A,\) the group \(B/A\) is normal in \(G/A.\) Let \(B<G\) be the normal subgroup corresponding to \(B/A.\) Since the commutator subgroup \(1\neq[B,B]<A,\) there exist elements \(b_{1},b_{2}\in B\) such that the commutator \(1\neq[b_{1},b_{2}]\in A.\) But the subgroup \(\langle b_{1},b_{2}\rangle<B\) is a Heisenberg group. Lemma 2.1 implies that any length function of \(\langle b_{1},b_{2}\rangle\) vanishes on the center \([b_{1},b_{2}],\) which is a contradiction. **Theorem 2.4**: _Let \(G\) be a solvable group with a purely positive length function \(l.\) Suppose that \(G\) has a finite virtually cohomological dimension. There exists a finite-index subgroup \(K<G\) such that_ _1) either \(K\) is abelian, or_ _2) there exits abelian groups \(A,B\) fitting into an exact sequence_ \[1\to A\to K\to B\to 1\] _satisfying that the action of \(B\) on \(A\bigotimes_{Z}\mathbb{Q}\) is effective and each non-identity element of \(B\) is non-unipotent in \(\mathrm{Aut}(A\bigotimes_{Z}\mathbb{Q})\hookrightarrow\)\(\mathrm{GL}_{n}(\mathbb{C}).\) In particular, \(G\) is either virtually abelian or virtually non-nilpotent abelian-by-abelian._ **Proof.** Let \(H<G\) be a finite-index subgroup of finite cohomological dimension. Let \(A\) be a maximal abelian normal subgroup of \(H.\) Since \(H\) is torsion-free, \(A\) is torsion-free. The quotient group \(H/A\) acts on \(A\) by conjugation. By Lemma 2.3, the action is effective. Note that \(A\) embeds into \(A\bigotimes_{Z}\mathbb{Q}\cong\mathbb{Q}^{n}\) for some integer \(n.\) The group \(H/A\) is isomorphic to a subgroup of \(\mathrm{GL}_{n}(\mathbb{Q})\hookrightarrow\mathrm{GL}_{n}(\mathbb{C}),\) by considering its conjugate action on \(A.\) Therefore, \(H/A\) has a finite-index subgroup \(K/A\) (for some \(K<G\)), which is conjugate to a subgroup of the upper triangular matrices in \(\mathrm{GL}_{n}(\mathbb{C}).\) For each unipotent \(g\in K/A\) (eg. \(g\in[K/A,K/A],\) the commutator subgroup), the subgroup \(\langle A,g\rangle\) is isomorphic to the nilpotent group \(A\rtimes_{g}\mathbb{Z}.\) For any finite subset \(S\subset A,\) the subgroup \(\langle S,g\rangle\) generated by \(S,g\) is still nilpotent. Lemma 2.1 implies that \(\langle S,g\rangle\) is virtually abelian. Therefore, \(g=1.\) This proves that \(K/A\) (isomorphic to a subgroup of \((\mathbb{C}^{*})^{n}\)) is abelian. If \(K/A\) is finite, then \(H\) is virtually abelian. If \(K/A\) is infinite, and \(H\) is virtually abelian-by-abelian. In the latter case, take \(B=K/A\) to finish the proof. **Proof of Theorem 0.2.** Let \(K\) be a finite-index torsion-free subgroup of \(G.\) Let \(A\) be a maximal normal abelian group of \(K.\) Since \(K/A\) acts effectively on \(A,\) we know that \(K/A\) is isomorphic to a subgroup of \(\mathrm{GL}_{n}(\mathbb{Q})\hookrightarrow\mathrm{GL}_{n}(\mathbb{C}).\) There is a finite-index subgroup \(K_{1}/A\) of \(K/A,\) which is upper triangulizable. Any unipotent element \(\gamma\in K_{1}/A\) (eg. \(\gamma\in[K_{1}/A,K_{1}/A]\) the commutator subgroup) will give a nilpotent subgroup \(\langle A,\gamma\rangle=A\rtimes_{\gamma}\mathbb{Z},\) which is impossible since any length function will vanish on the center of the nilpotent group (see [18], Lemma 5.2). Therefore, \(K_{1}/A\) is abelian and the finite-index subgroup \(K_{1}<G\) is metabelian. A classical theorem (cf. [14], 11.3.3, page 252) implies that \(K_{1}\) embeds into \(H\rtimes(K_{1})_{\mathrm{ab}}\) for some finitely generated \(\mathbb{Z}[(K_{1})_{\mathrm{ab}}]\)-module \(H\) over the integral group ring of the abelianization of \(K_{1}.\) Since \(K_{1}\) is finitely generated, the abelianization \((K_{1})_{\mathrm{ab}}\) contains a subgroup \(\mathbb{Z}^{n}\) of finite index. Note that \(H\) is finitely generated \(\mathbb{Z}[\mathbb{Z}^{n}]\)-module as well. The group \(K_{1}\) is a finitely generated torsion-free metabelian. Therefore, it is \(\mathbb{C}\)-linear by Levic [13] and Remeslennikov [16]. **Theorem 2.5**: _Let \(G\) be a solvable group with a purely positive length function. Suppose that \(G\) is virtually torsion-free and any abelian subgroup \(A\) is finitely generated. Then \(G\) is virtually either \(\mathbb{Z}^{n}\) or a non-nilpotent group \(B\) of the form_ \[1\rightarrow\mathbb{Z}^{m}\to B\rightarrow\mathbb{Z}^{k}\to 1\] _for some integers \(n,m,k.\) Here \(\mathbb{Z}^{k}\) acts on \(\mathbb{Z}^{m}\) effectively by non-unipotent matrices._ Proof of Theorem 2.5.We use the notation as in the proof of Theorem 0.2. By the assumption, \(A=\mathbb{Z}^{m}\) for some \(n.\) Since the group \(K_{1}/A<\mathrm{GL}_{n}(\mathbb{Z})\) is solvable, it is polycyclic. In particular, \(K_{1}/A\) is finitely generated abelian group and there is a finite-index subgroup \(\mathbb{Z}^{k}\) for some \(k.\) Let \(B<K_{1}\) be the preimage of \(\mathbb{Z}^{k}<K_{1}/A\) to finish the proof. If \(B\) is virtually abelian, this is the first case. If \(B\) is not virtually abelian, it is cannot be nilpotent by Lemma 2.1. This means that each non-identity element in \(\mathbb{Z}^{k}\) acts by a non-unipotent invertible matrix. **Proof of Theorem 0.3.** Let \(G\) be such a discretely purely positive solvable group of finite virtual cohomological dimension. Since \(G\) is of finite virtual cohomological dimension, we assume that \(G\) itself is torsion-free. Let \(A\) be a maximal normal abelian subgroup. Since \(G\) has a discrete length function, \(A\) is not divisible. Since \(A\) has a finite cohomological dimension, we know \(A\) is isomorphic to \(\mathbb{Z}^{n}.\) By Lemma 2.3, the quotient group \(G/A\) acts effectively on \(A.\) This means that \(G/A\) is isomorphic to a subgroup of \(\mathrm{GL}_{n}(\mathbb{Z}).\) Lemma 1.6 implies that \(G/A\) is a torsion group. However, every solvable subgroup of \(\mathrm{GL}_{n}(\mathbb{Z})\) is polycyclic (cf. [17], Cor 1, page 26). Therefore, \(G/A\) is a polycyclic torsion group and thus is finite. **Proof of Theorem 0.1.** Let \(\mathrm{Diff}^{\infty}(M)\) be the group of diffeomorphisms of \(M.\) For any \(f,g\in\mathrm{Diff}^{\infty}(M)\) and integer \(n,\) it is well-known that the topological entropy satisfies \(h_{\mathrm{top}}(f^{n})=|n|h_{\mathrm{top}}(f)\) and \(h_{\mathrm{top}}(f)=h_{\mathrm{top}}(gfg^{-1})\) (cf. [11], Cor. 3.14 and Prop. 3.1.7, page 111). Hu [9] (Theorem C) proves that \(h_{\mathrm{top}}(fg)\leq h_{\mathrm{top}}(f)+h_{\mathrm{top}}(g)\) when \(fg=gf.\) This shows that the topological entropy \(h_{\mathrm{top}}\) is a length function on the group \(\mathrm{Diff}^{\infty}(M).\) By Theorem 2.4, the group \(H\) is virtually either abelian or meta-abelian. In any case, \(H\) is virtually abelian-by-abelian. If there is a uniform lower bound of the topological entropies, the length function is discrete. Theorem 0.3 implies that \(H\) is virtually a finitely generated abelian group \(\mathbb{Z}^{n}.\) In the case of Lyapunov exponents, note that for any \(x\in M,u\in T_{x}M,f\in\mathrm{Diff}^{\infty}(M)\) the Lyapunov exponent \[\chi(x,u,f)=\lim_{n\to\infty}\frac{\log\|D_{x}f^{n}u\|}{n}\leq\sup_{x\in M} \lim_{n\to\infty}\frac{\log\|D_{x}f^{n}\|}{n}.\] Actually, \[l(f):=\max\{\sup_{x\in M}\lim_{n\to\infty}\frac{\log\|D_{x}f^{n}\|}{n},\sup_{x \in M}\lim_{n\to-\infty}\frac{\log\|D_{x}f^{n}\|}{n}\}\] is a length function on \(\mathrm{Diff}^{\infty}(M)\) (see [18], Lemma 2.5). A similar argument finishes the proof. **Theorem 2.6**: _Let \(G\) be a solvable group with a purely positive length function. Suppose that \(G\) has virtual cohomological dimension \(vcd(G)\leq 3\) and any abelian subgroup \(A\) is finitely generated. Then \(G\) is a finite extension of \(\mathbb{Z}^{n}\) for some integer \(n\leq 3.\)_ **Proof of Theorem 2.6.** By Theorem 2.5, the group \(G\) is virtually either \(\mathbb{Z}^{n}\) or a non-nilpotent group \(B\) of the form \[1\to\mathbb{Z}^{m}\to B\to\mathbb{Z}^{k}\to 1\] for some non-negative integers \(n,m,k.\) It is enough to rule out the second case. Since cohomological dimension of \(B\) is at most \(3,\) we see that \(m+k\leq 3.\) When \(m=1\) or \(m=3,\) the group \(B\) is virtually abelian since \(\mathbb{Z}^{k}\) acts effectively on \(\mathbb{Z}^{m}\). When \(m=2,\) the group \(B=\mathbb{Z}^{2}\rtimes_{\phi}\mathbb{Z},\) a semi-direct product for some \(\phi\in\mathrm{GL}_{2}(\mathbb{Z}).\) Consider the Jordan canonical form of \(\phi.\) If all the eigenvalues \(\lambda_{1},\lambda_{2}\) of \(\phi\) have norm different from \(1,\) any length function will vanish on \(\mathbb{Z}^{2}\) by Lemma 1.3. Otherwise, \(\lambda_{1},\lambda_{2}=\pm 1\) or \(\lambda_{1},\lambda_{2}=\bar{\lambda}_{1}\) are conjugate complex numbers of norm \(1.\) If \(\phi\) is \(\mathbb{C}\)-diagonalizable, \(\phi\) is of finite order, in which case \(B\) is virtually abelian. Otherwise, \(\phi\) is conjugate (in \(\mathrm{GL}_{n}(\mathbb{C})\)) to \[\begin{bmatrix}1&1\\ 0&1\end{bmatrix}\text{ or }\begin{bmatrix}-1&1\\ 0&-1\end{bmatrix},\] in which case the matrix \(\phi\) has the repeated eigenvalues \(1\) or \(-1.\) We can choose a basis \(\{v_{1},v_{2}\}\subset\mathbb{Q}^{2}\) such that the representation matrix of \(\phi:\mathbb{Q}^{2}\rightarrow\mathbb{Q}^{2}\) with respect to the basis \(\{v_{1},v_{2}\}\) is of the form \[J=\begin{bmatrix}1&x\\ 0&1\end{bmatrix}\text{ or }L=\begin{bmatrix}-1&y\\ 0&-1\end{bmatrix}\] for some nonzero \(x,y\in\mathbb{Q}.\) Note that a purely positive length function on \(\mathbb{Z}^{2}\rtimes_{\phi}\mathbb{Z}\) can be extended to be a purely positive length function on \(\mathbb{Q}^{2}\) by Lemma 1.1. For any positive even integer \(m,\) we have \[J^{m}v_{2} = mxv_{1}+v_{2},\text{ or }\] \[L^{m}v_{2} = -myv_{1}+v_{2}\] and \[m|x|l(v_{1}) \leq l(J^{m}v_{2})+l(v_{2})=2l(v_{2}),\text{ or }\] \[m|y|l(v_{1}) \leq l(J^{m}v_{2})+l(v_{2})=2l(v_{2}).\] Since \(m\) is arbitrary, we get \(l(v_{1})=0.\) This is a contradiction to the fact that \(l\) is purely positive. The proof is finished. **Remark 2.7**: _Some versions of Theorem 2.4 and Theorem 2.4 are obtained by Conner for length functions coming from semi-norms on groups. However, many length functions are not from semi-norms (see [18], Section 2, or the final section of the current paper)._ ## 3 Stable norms Following [5, 4], a real-valued function \(L:G\rightarrow[0,\infty)\) on a group \(G\) is called a (semi-)norm if \(L(gh)\leq L(g)+L(h),L(g)=L(g^{-1})\) for any \(g,h\in G.\) It's well-known that the stable norm \[\mathrm{sL}(g):=\lim_{n\rightarrow\infty}\frac{L(g^{n})}{n}\] is a length function (see [18]). In this section, we study stable norms on the Heisenberg group \[H = \langle a,b,c\mid ac=ca,bc=cb,[a,b]=c\rangle\] \[= \{\begin{bmatrix}1&x&z\\ &1&y\\ &&1\end{bmatrix}:x,y,z\in\mathbb{Z}\}.\] Choose \(S=\{a,b,a^{-1},b^{-1}\}\) to be a generating set and let \(|g|_{S}\) be the word length defined by \(S\) for \[g=\begin{bmatrix}1&x&z\\ &1&y\\ &&1\end{bmatrix}\in H.\] Denote \(d(x,y,z)=|g|_{S}\) the word length. Blachere [1] shows that \(d(x,y,z)=d(-x,y,-z)=d(x,-y,-z)=d(-x,-y,z)=d(y,x,z)\) for any \(x,y,z\in\mathbb{Z}\). The following is from Theorem 2.2 of [1]. **Lemma 3.1**: _Assume that \(z\geq 0,x\geq 0,x\geq y\geq-x.\) When \(y\geq 0,x\leq\sqrt{z},\) we have \(d(x,y,z)=2\lceil 2\sqrt{z}\rceil-x-y.\) When \(y\geq 0,x\geq\sqrt{z}\) and \(xy>z,\) we have \(d(x,y,z)=x+y.\)_ **Lemma 3.2**: _Let \(L:H\rightarrow[0,\infty)\) be a semi-norm on the Heisenberg group \(H.\) There are constants \(K,C\geq 0\) such that_ \[L(c^{n})\leq K\sqrt{n}+C\] _for any integer \(n\geq 0.\)_ **Proof.** Let \(|x|_{S}\) be the word length of \(x\in H\) with respect to the generating set \(S=\{f,g,f^{-1},g^{-1}\}.\) Suppose that \(x=s_{1}s_{2}\cdots s_{|x|_{S}}\) with each \(s_{i}\in S.\) It is clear that \[L(x) \leq L(s_{1})+L(s_{2})+\cdots+L(s_{|x|_{S}})\] \[\leq |x|_{S}\max\{L(s):s\in S\}.\] Choose an integer \(k\) such that \(k^{2}\leq n<(k+1)^{2}.\) We have \[L(c^{n}) \leq |c^{n}|_{S}\max\{L(s):s\in S\}\] \[= 2\lceil 2\sqrt{n}\rceil\max\{L(s):s\in S\}\] \[\leq K\sqrt{n}+C\] for \(K=4\max\{L(s):s\in S\},C=2\max\{L(s):s\in S\}+2.\) Note that the word length \(|c^{n}|_{S}=2\lceil 2\sqrt{n}\rceil,\) twice the upper integer of \(2\sqrt{n},\) by Lemma 3.1. **Proof of Theorem 0.5.** For any smooth diffeomorphism \(f_{1},\) we have that \[\|Df_{1x}\|\leq\max\{\sup_{x\in M}\|Df_{1x}\|,\sup_{x\in M}\|Df_{1x}^{-1}\|\}.\] Let \(L(f_{1})=\log\max\{\sup_{x\in M}\|Df_{1x}\|,\sup_{x\in M}\|Df_{1x}^{-1}\|\}.\) It's not hard to check that \(L(f_{1}f_{2})\leq L(f_{1})+L(f_{2})\) and \(L(f_{1})=L(f_{2}^{-1})\) for any diffeomorphisms \(f_{1},f_{2}.\) Lemma 3.2 finishes the proof. **Corollary 3.3**: _For any_ \[g=\begin{bmatrix}1&x&z\\ &1&y\\ &&1\end{bmatrix}\in H,\] _we have the stable word length_ \[\mathrm{swl}_{S}(g)=|x|+|y|.\] **Proof.** For any \[g=\begin{bmatrix}1&x&z\\ &1&y\\ &&1\end{bmatrix}\in H,\] we have \(\mbox{swl}(g)=\mbox{swl}(g_{0})\) for \(g_{0}=\begin{bmatrix}1&x&0\\ &1&y\\ &&1\end{bmatrix}\) since swl is a length function vanishing on the center of \(H.\) For each positive integer \(n,\) we have \[g_{0}^{n}=\begin{bmatrix}1&nx&\frac{n(n-1)xy}{2}\\ &1&ny\\ &&1\end{bmatrix}.\] After replacing \(g_{0}\) by \(g_{0}^{-1},\) we can always assume that \(x\geq 0.\) When \(x\geq y\geq 0,\) Lemma 3.1 implies that \[|g_{0}^{n}| = nx+ny,\] \[\mbox{swl}(g_{0}) = x+y.\] When \(y>x,\) we have \[|g_{0}^{n}| = d(nx,ny,\frac{n(n-1)xy}{2})=d(ny,nx,\frac{n(n-1)xy}{2})\] \[= ny+nx,\] \[\mbox{swl}(g_{0}) = ny+nx.\] When \(0\geq y>-x,\) we have \[|g_{0}^{n}| = d(nx,ny,\frac{n(n-1)xy}{2})=d(nx,n|y|,\frac{n(n-1)x|y|}{2})\] \[= nx+n|y|.\] When \(y\leq-x,\) we have \[|g_{0}^{n}| = d(nx,ny,\frac{n(n-1)xy}{2})=d(nx,n|y|,\frac{n(n-1)x|y|}{2})\] \[= d(n|y|,nx,\frac{n(n-1)x|y|}{2})=n|y|+nx.\] In any case, we have \(\mbox{swl}(g)=|x|+|y|.\) It's not hard to see that the swl gives a norm on the abelianization \(H/\langle c\rangle\) which is generated by the image of \(a,b.\) **Corollary 3.4**: _Let \(L:H\rightarrow\mathbb{R}\) be a semi-norm, i.e. \(L(gh)\leq L(g)+L(h),L(g)=L(g^{-1})\) for any \(g,h\in H.\) There exists constant \(K\) such that_ \[\mbox{\rm sL}(g)\leq K(|x|+|y|)\] _for any_ \[g=\begin{bmatrix}1&x&z\\ &1&y\\ &&1\end{bmatrix}\in H.\] **Proof.** For each integer \(k\), write \(g^{k}=s_{1}s_{2}...s_{n}\) for some \(s_{i}\in S=\{a,b,a^{-1},b^{-1}\}\) and \(n=|g^{k}|_{S}\). Then we have \[L(g^{k}) \leq |g^{k}|_{S}\max\{L(s):s\in S\},\] \[\mathrm{sL}(g) = \lim\frac{L(g^{k})}{k}\leq\mathrm{swl}(g)\max\{L(s):s\in S\}.\] Choose \(K=\max\{L(s):s\in S\}\) and apply Corollary 3.3 to finish the proof. **Corollary 3.5**: _There exist length functions on \(H\) which are not stable norms._ **Proof.** Let \[g_{1,n}=\begin{bmatrix}1&1&0\\ &1&n\\ &&1\end{bmatrix}\] for each integer \(n\geq 1\). Define \(l:H\rightarrow\mathbb{R}\) by \(l(g_{1,n})=n^{2},l(g_{1,n}^{k})=|k|l(g_{1,n})\) and all other elements mapping to \(0\). This is a well-defined length function (see [18], Theorem 5.3). But \(l\) grows quadratically, while any stable norm grows linearly with respect to the matrix entries by the previous corollary.
2301.01878
Cluster-shell competition and effect of adding hyperons
The fundamental question is how the hyperon plays a role in the nuclear structure. It is of particular importance, especially in the light mass region, to verify the structure change when $\Lambda$ particle(s) is added to normal nuclei. The ground state of $^{8}$Be has been know to have a well-developed $\alpha$--$\alpha$ cluster structure, whereasn$^{12}$C has a mixed structure of three $\alpha$ clusters and $jj$-coupling shell model, where $\alpha$ clusters are partially broken. Adding $\Lambda$ particle(s) could induce the structure change. We compare the Be and C cases. Using the antisymmetrized quasi-cluster model (AQCM), the $\alpha$-cluster states and $jj$-coupling shell-model states of $^8$Be and $^{12}$C are prepared on the same footing, and we add $\Lambda$ particles. The cluster-shell competition in the ground state can be well described with this model. Using AQCM, we calculate $^8$Be, $^{9}_{\Lambda}$Be, $^{10}_{\Lambda\Lambda}$Be, $^{12}$C, $^{13}_{\Lambda}$C, and $^{14}_{\Lambda\Lambda}$C. By adding one or two $\Lambda$ particle(s), the ground state of $^{12}$C approaches the $jj$-coupling shell model side. On the other hand, in the Be case, although the $\Lambda$ particle(s) shrinks the $\alpha$--$\alpha$ distance, the breaking effect of the cluster structure is rather limited. The spin-orbit interaction is the driving force of breaking the $\alpha$ clusters, and whether the glue-like effect of $\Lambda$ particle(s) attracts the cluster inside the range of this interaction is crucial. In $^{14}_{\Lambda\Lambda}$C, the breaking of $\alpha$ clusters in $^{12}$C is much enhanced by the addition of the $\Lambda$ particles than the case of free $^{12}$C. We also found that breaking $\alpha$ clusters in the ground state of $^{14}_{\Lambda\Lambda}$C affects the excited state with the pure cluster structure.
Naoyuki Itagaki, Emiko Hiyama
2023-01-05T02:29:20Z
http://arxiv.org/abs/2301.01878v1
# Cluster-shell competition and effect of adding hyperons ###### Abstract **Background:** The fundamental question is how the hyperon plays a role in the nuclear structure. It is of particular importance, especially in the light mass region, to verify the structure change when \(\Lambda\) particle(s) is added to normal nuclei. **Purpose:** The ground state of \({}^{8}\)Be has been know to have a well-developed \(\alpha\)-\(\alpha\) cluster structure, whereas \({}^{12}\)C has a mixed structure of three \(\alpha\) clusters and \(jj\)-coupling shell model, where \(\alpha\) clusters are partially broken. Adding \(\Lambda\) particle(s) could induce the structure change. We compare the Be and C cases. **Methods:** Using the antisymmetrized quasi-cluster model (AQCM), the \(\alpha\)-cluster states and \(jj\)-coupling shell-model states of \({}^{8}\)Be and \({}^{12}\)C are prepared on the same footing, and we add \(\Lambda\) particles. The cluster-shell competition in the ground state can be well described with this model. Using AQCM, we calculate \({}^{8}\)Be, \({}^{\Lambda}_{\Lambda}\)Be, \({}^{10}_{\Lambda}\)Be, \({}^{12}\)C, \({}^{13}_{\Lambda}\)C, and \({}^{14}_{\Lambda\Lambda}\)C. **Results:** By adding one or two \(\Lambda\) particle(s), the ground state of \({}^{12}\)C approaches the \(jj\)-coupling shell model side. On the other hand, in the Be case, although the \(\Lambda\) particle(s) shrinks the \(\alpha\)-\(\alpha\) distance, the breaking effect of the cluster structure is rather limited. **Conclusions:** The spin-orbit interaction is the driving force of breaking the \(\alpha\) clusters, and whether the glue-like effect of \(\Lambda\) particle(s) attracts the cluster inside the range of this interaction is crucial. In \({}^{14}_{\Lambda\Lambda}\)C, the breaking of \(\alpha\) clusters in \({}^{12}\)C is much enhanced by the addition of the \(\Lambda\) particles than the case of free \({}^{12}\)C. We also found that breaking \(\alpha\) clusters in the ground state of \({}^{14}_{\Lambda\Lambda}\)C affects the excited state with the pure cluster structure. ## I Introduction One of the most intriguing phenomena of nuclear structure physics is the competition of the shell and cluster structures [1]. This is attributed to the effect of the spin-orbit interaction, which strengthens the symmetry of the \(jj\)-coupling shell model. It is well known that this interaction is vital in explaining the observed magic numbers of 28, 50, 82, and 126 [2]. The spin-orbit interaction also has the effect of breaking clusters [1], where some of the strongly correlated nucleons are spatially localized. Nevertheless, the \(\alpha\) cluster structure is known to be important in the light mass region. The Be isotopes are known to have the \(\alpha\)-\(\alpha\) cluster structure; \({}^{8}\)Be decays into two \(\alpha\) clusters, and the molecular-orbital structure of valence neutrons appears in the neutron-rich Be isotopes [3; 4; 5], which is confirmed by the recent _ab initio_ shell-model calculation [6]. This persistence of the \(\alpha\)-\(\alpha\) cluster structure is owing to the \(\alpha\)-\(\alpha\) distance, which is about 3-4 fm and large enough compared with the range of the spin-orbit interaction. In light nuclei, it is considered that these two different pictures (shell and cluster) coexist, and they compete with each other. Although the \(\alpha\)-\(\alpha\) cluster structure may persist in \({}^{8}\)Be, when one more \(\alpha\) cluster is added, in \({}^{12}\)C, the interaction among \(\alpha\) clusters gets stronger, and the system has a shorter \(\alpha\)-\(\alpha\) distance [7; 8]. In this case, the \(\alpha\) clusters are trapped in the interaction range of the spin-orbit interaction. Although the traditional \(\alpha\) cluster model (Brink model) [9] is incapable of treating the spin-orbit interaction, its effect is significant if we allow the breaking of the \(\alpha\) clusters. The ground state of \({}^{12}\)C is found to have a mixed nature of shell and cluster components [10; 11; 12]. On the other hand, the second \(0^{+}\) state of \({}^{12}\)C is well known \(\alpha\) clustering state called the Hoyle state. Since this state is nearby the three-\(\alpha\) breakup threshold, the wave function is dilute, and this state has a well-developed \(\alpha\) clustering structure. It is interesting to investigate how clustering structure is changed when a hyperon such as a \(\Lambda\) particle is injected into \({}^{8}\)Be and \({}^{12}\)C. Here it should be noted that there is no Pauli principle between nucleons and a \(\Lambda\), and the \(\Lambda N\) interaction is attractive, but weaker than \(NN\) interaction. Using this property, some authors studied the structure of \({}^{9}_{\Lambda}\)Be and \({}^{13}_{\Lambda}\)C from the viewpoint of dynamical change of the core nuclei, \({}^{8}\)Be and \({}^{12}\)C, due to the addition of \(\Lambda\) particle. For instance, Motoba _et al._[13], pointed out that the \(\alpha\)-\(\alpha\) distance in \({}^{9}_{\Lambda}\)Be was shrunk by about 20 % in comparison with that in the \({}^{8}\)Be core nucleus by \(\Lambda\) injection. In the Carbon isotope, one of the present authors (E. H.) pointed out that dynamical change due to the addition of a \(\Lambda\) particle is dependent on the states in the core nucleus of \({}^{12}\)C within the framework of \(3\alpha\) and \(3\alpha+\Lambda\) three- and four-body OCM (orthogonal condition model) [14]. The ground state of \({}^{12}\)C, \(0^{+}_{1}\), is a mixture of shell and cluster structure; the \(\alpha\)-\(\alpha\) distance does not change due to the addition of a \(\Lambda\) particle. On the other hand, the \(\alpha\)-\(\alpha\) distance is dramatically contracted in the Hoyle state of \({}^{13}_{\Lambda}\)C, which is well-developed clustering state [14]. However, it should be noted that this calculation was done without taking into account the breaking effect of \(\alpha\) clusters in \({}^{12}\)C. In addition, in Ref. [15], they discussed the similarity and difference in several states of \({}^{12}\)C and \({}^{13}_{\Lambda}\)C. In this way, there are some discussions on the change of the \(\alpha\)-\(\alpha\) distance w/o the \(\Lambda\) particle and the change of the structure. However, there remain never discussed effects of the clustering in such Be and C isotopes due to addition of \(\Lambda\) particles. The question is how the clustering is broken when \(\Lambda\) particles shrinks the \(\alpha\)-\(\alpha\) distance. The traditional cluster model is incapable of describing such breaking situation and we must extend the model space to incorporate the spin-orbit contribution, which is the driving force of breaking clusters. Thus, in this work, we focus on how the clustering is changed and broken due to the addition of a \(\Lambda\) particle(s) in \({}^{8}\)Be, \({}^{9}_{\Lambda}\)Be, \({}^{10}_{\Lambda\Lambda}\)Be, \({}^{12}\)C, \({}^{13}_{\Lambda}\)C, and \({}^{14}_{\Lambda\Lambda}\)C. In the case of Be isotopes, as mentioned, the \(\Lambda\) particle(s) shrinks the \(\alpha\)-\(\alpha\) relative distance [14; 16], but the resultant distance might still be outside the range of the spin-orbit interaction, and the \(\alpha\) cluster structure could persist. On the contrary, when \(\Lambda\) particle(s) is added to \({}^{12}\)C, the distances between clusters get even shorter. Since the spin-orbit interaction works in the inner regions of the nuclear systems, the breaking of \(\alpha\) clusters is expected to be enhanced. Therefore, the ground state would approach more \(jj\)-coupling shell-model side. Indeed, as shown in the study of antisymmetrized molecular dynamics [17], the slightly deformed ground state of \({}^{12}\)C is changed into a spherical shape in \({}^{13}_{\Lambda}\)C. It is worthwhile to check this point in terms of the cluster-shell competition. In most of the conventional \(\alpha\) cluster models, the contribution of the non-central interactions (spin-orbit and tensor interactions) vanishes. To include the spin-orbit effect, we have developed the antisymmetrized quasi-cluster model (AQCM) [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31]. This method allows us to smoothly transform \(\alpha\)-cluster model wave functions to \(jj\)-coupling shell model ones, and we call the clusters that feel the effect of the spin-orbit interaction quasi-clusters. We have previously introduced AQCM to \({}^{12}\)C and discussed the competition between the cluster states and \(jj\)-coupling shell model state [10]. The consistent description of \({}^{12}\)C and \({}^{16}\)O, which has been a long-standing problem of microscopic cluster models, has been achieved. Also, not only the competition between the cluster states and the lowest shell-model configuration, the effect of single-particle excitation was further included in the description of the ground state [30]. This paper is organized as follows. The framework is described in Sec. II. The results are shown in Sec. III. The conclusions are presented in Sec. IV. ## II Framework The wave function is fully antisymmetrized, and different basis states are superposed based on the generator coordinate method (GCM) after the angular momentum projection, and the amplitude for each basis state is determined by diagonalizing the norm and Hamiltonian matrices. ### Single-particle wave function In our framework, every single particle is described in a Gaussian form as in many traditional cluster models, including the Brink model [9], \[\phi^{\tau,\sigma}\left(\mathbf{r}\right)=\left(\frac{2\nu}{\pi}\right)^{\frac{3 }{4}}\exp\left[-\nu\left(\mathbf{r}-\mathbf{\zeta}\right)^{2}\right]\chi^{\tau,\sigma}, \tag{1}\] where the Gaussian center parameter \(\mathbf{\zeta}\) is related to the expectation value of the position of the nucleon, and \(\chi^{\tau,\sigma}\) is the spin-isospin part of the wave function. The \(\alpha\) cluster is expressed by four nucleons with different spin and isospin sharing the same \(\mathbf{\zeta}\) value. For the size parameter \(\nu\), here we use \(\nu=1/2b^{2}\) and \(b=1.46\) fm. The Slater determinant is constructed from these single-particle wave functions by antisymmetrizing them. The \(\Lambda\) particle is represented by the same local Gaussian-type wave function. This traditional \(\alpha\) cluster wave function cannot take into account the effect of non-central interactions including the spin-orbit interaction. We can extend the model based on the AQCM, by which the contribution of the spin-orbit interaction due to the breaking of \(\alpha\) clusters is included. Here the \(\mathbf{\zeta}\) values in Eq. (1) are changed to complex numbers. When the original value of the Gaussian center parameter \(\mathbf{\zeta}\) is \(\mathbf{R}\), which is real and related to the spatial position of this nucleon, it is transformed by adding the imaginary part as \[\mathbf{\zeta}=\mathbf{R}+i\lambda\mathbf{e}^{\text{spin}}\times\mathbf{R}, \tag{2}\] where \(\mathbf{e}^{\text{spin}}\) is a unit vector for the intrinsic-spin orientation of this nucleon. The control parameter \(\lambda\) is associated with the breaking of the cluster. After this transformation, the \(\alpha\) clusters are called quasi-clusters. The two nucleons in the same quasi-cluster with opposite spin orientation have \(\mathbf{\zeta}\) values that are complex conjugate to each other. This situation corresponds to the time-reversal motion of two nucleons. In our previous analysis on \({}^{12}\)C [10], we have introduced two parameters representing the distances between quasi-clusters and their breaking (\(\lambda\)). The subclosure configuration of \(\left(s_{1/2}\right)^{4}\left(p_{3/2}\right)^{8}\) of the \(jj\)-coupling shell model can be obtained at the limit of small relative distances and \(\lambda=1\). ### Angular momentum projection and GCM Each AQCM Slater determinant is projected to the eigenstates of parity and angular momentum by using the projection operator \(P^{K}_{J^{\pi}}\), \[P^{K}_{J^{\pi}}=P^{\pi}\frac{2J+1}{8\pi^{2}}\int d\Omega\,D^{J}_{MK}{}^{*}R\left( \Omega\right). \tag{3}\] Here \(D^{J}_{MK}\) is the Wigner \(D\)-function and \(R\left(\Omega\right)\) is the rotation operator for the spatial and spin parts of the wave function. This integration over the Euler angle \(\Omega\) is numerically performed. The operator \(P^{\pi}\) is for the parity projection (\(P^{\pi}=\left(1+P^{r}\right)/\sqrt{2}\) for the positive-parity states, where \(P^{r}\) is the parity-inversion operator), which is also performed numerically. The AQCM basis states with different distances between quasi-clusters and \(\lambda\) values are superposed based on GCM. We also generate Gaussian centers for the \(\Lambda\) particles using random numbers, and the basis states with different positions are superposed. The coefficients \(\left\{c^{K}_{i}\right\}\) for the linear combination of the Slater determinants are obtained together with the energy eigenvalue \(E\) when we diagonalize the norm and Hamiltonian matrices, namely by solving the Hill-Wheeler equation. \[\sum_{j}(<\Phi_{i}|(P^{K}_{J^{\pi}})^{\dagger}HP^{K}_{J^{\pi}}| \Phi_{j}>-E<\Phi_{i}|(P^{K}_{J^{\pi}})^{\dagger}P^{K}_{J^{\pi}}|\Phi_{j}>)c^{K }_{j}=0. \tag{4}\] ### Hamiltonian The Hamiltonian consists of kinetic energy and potential energy terms. For the potential part, the interaction consists of the central, spin-orbit, and Coulomb terms. The nucleon-nucleon interaction is Volkov No.2 [32] with the Majorana exchange parameter of \(M=0.6\), which has been known to reproduce the scattering phase shift of \({}^{4}\)He-\({}^{4}\)He [33]. For the spin-orbit part, we use the spin-orbit term of the G3RS interaction [34], which is a realistic interaction originally developed to reproduce the nucleon-nucleon scattering phase shifts. The strength of the spin-orbit interactions [10] is set to \(V^{1}_{ls}=V^{2}_{ls}=1450\,\mathrm{MeV}\), which reproduces the binding energy of \({}^{12}\)C from the three-\(\alpha\) threshold. For the nucleon-\(\Lambda\) interaction, we employ only the central part; YNG-ND interaction [35]. The \(k_{F}\) value for \({}^{9}\)Be and \({}^{10}_{\Lambda\Lambda}\)Be is 0.962 fm\({}^{-1}\) as in Ref. [14] and 1.17 fm\({}^{-1}\) for \({}^{13}_{\Lambda}\)C and \({}^{14}_{\Lambda\Lambda}\)C as in Ref. [17]. For the \(\Lambda\)-\(\Lambda\) interaction, we adopt the one called "NS" in Ref. [35], which allows the reproduction of the binding energy of \({}^{6}_{\Lambda\Lambda}\)He. ## III Results ### Ground states of \({}^{8}\)Be, \({}^{9}_{\Lambda}\)Be, and \({}^{10}_{\Lambda\Lambda}\)Be We start the discussion with \({}^{8}\)Be. Our Hamiltonian gives the energy of \(-27.57\) MeV for the \(\alpha\) cluster, and thus, \(-55.1\) MeV is the two-\(\alpha\) threshold energy (experimentally \(-56.6\) MeV, to which our theoretical value does not contradict). Figure 1 (a) shows the energy curves of the \(0^{+}\) state of \({}^{8}\)Be as a function of the distance between two \({}^{4}\)He clusters. The solid line is for \(\lambda=0\) (pure two \(\alpha\)'s), and the dotted and dashed line are for two quasi-clusters with \(\lambda=0.1\) and 0.2, respectively. The energy minimum point appears around the relative distance of \(\sim\)3.5 fm. This distance is quite large, and this is outside of the interaction range of the spin-orbit interaction. Therefore, the \(\lambda\) value that gives the minimum energy is zero (solid line), which means that the \(\alpha\) clusters are Figure 1: (a): Energy curves of \(0^{+}\) state of \({}^{8}\)Be as a function of the distance between two \({}^{4}\)He clusters. Solid line is for \(\lambda=0\) (pure two \(\alpha\)’s) and dotted and dashed lines are for two quasi-clusters with \(\lambda=0.1\) and 0.2, respectively. (b): Same as (a) but for the \(1/2^{+}\) state of \({}^{3}_{\Lambda}\)Be. (c): Same as (a) but for the \(0^{+}\) state of \({}^{10}_{\Lambda\Lambda}\)Be. not broken. The \(\alpha\) breaking effect can be seen in more inner regions, where the energies of dotted and dashed lines are lower than the solid line. The \(\alpha\) clusters are surely broken there. However, at short relative distances, the energy itself is high enough, and the spin-orbit interaction only plays a role in reducing the increase of the excitation energy to some extent when two clusters get closer. The situation is slightly different in Figure 1 (b), which is for the \(1/2^{+}\) of \({}^{9}_{\Lambda}\)Be, where one \(\Lambda\) particle is added. We superpose 50 Slater determinants with different positions for the \(\Lambda\) particle and diagonalize the Hamiltonian based on the GCM for each cluster-cluster distance and \(\lambda\). Owing to the \(\Lambda\) particle added, the attractive effect is increased, and the optimal distance between the two \({}^{4}\)He nuclei (lowest energy point) is around 3 fm, slightly shorter than the \({}^{8}\)Be case. Here, the solid line (\(\lambda=0\)) and the dotted line (\(\lambda=0.1\)) almost degenerate, and thus, the \(\alpha\) clusters are slightly broken due to the spin-orbit effect. The tendency is a bit enhanced in \({}^{10}_{\Lambda\Lambda}\)Be shown in Fig 1 (c). The optimal cluster-cluster distance is less than 3 fm, where the dotted line (\(\lambda=0.1\)) is slightly lower than the solid line (\(\lambda=0\)). The number of Slater determinants with different positions for the \(\Lambda\) particles is increased to 100 for each \({}^{4}\)He-\({}^{4}\)He distance and \(\lambda\). In this way, since the \({}^{4}\)He-\({}^{4}\)He distances are large in \({}^{9}_{\Lambda}\)Be and \({}^{10}_{\Lambda\Lambda}\)Be, we find that the \(\alpha\)-cluster braking effect is rather small. ### Ground states of \({}^{12}\)C, \({}^{13}_{\Lambda}\)C, and \({}^{14}_{\Lambda\Lambda}\)C Next we discuss \({}^{12}\)C and \({}^{13}_{\Lambda}\)C, and \({}^{14}_{\Lambda\Lambda}\)C. The three-\(\alpha\) threshold energy is \(-82.7\) MeV in our calculation compared with the experimental value of \(-84.9\) MeV. Figure 2 (a) shows the energy curves of \(0^{+}\) state of \({}^{12}\)C with an equilateral triangular configuration as a function of the distance between two \({}^{4}\)He clusters. The solid line is for \(\lambda=0\) (pure three \(\alpha\)'s). Since one \({}^{4}\)He is added to \({}^{8}\)Be, the energy minimum point appears around the relative distance of 2.5-3.0 fm, shorter by 1 fm than the previous \({}^{8}\)Be case before allowing the breaking of \(\alpha\) clusters. Therefore, it is considered that the three \(\alpha\) clusters step in the interaction range of the spin-orbit interaction. The dotted line (\(\lambda=0.1\)) and dashed line (\(\lambda=0.2\)) almost degenerate at the region of the lowest energy (the relative cluster-cluster distance shrinks to 2 fm there). This tendency is enhanced in Fig. 2 (b), which is for the \(1/2^{+}\) of \({}^{13}_{\Lambda}\)C, where one \(\Lambda\) particle is added. Owing to the \(\Lambda\) particle added, the attractive effect is increased, and the optimal distance between the \({}^{4}\)He nuclei is around 2.5 fm (solid line) before breaking the \(\alpha\) clusters. When we allow the breaking, the energy curves become almost flat inside the relative \({}^{4}\)He-\({}^{4}\)He distance of 2 fm. The energy minimum points of the dotted (\(\lambda=0.1\)) and dashed (\(\lambda=0.2\)) lines are lower than that of the solid line (\(\lambda=0\)). The attractive effect of the \(\Lambda\) particles is much more enhanced in Fig. 2 (c), which is for the \(0^{+}\) state of \({}^{14}_{\Lambda\Lambda}\)C. The optimal distance between the \({}^{4}\)He nuclei (energy minimum point) is around 2.2 fm before breaking the \(\alpha\) clusters (solid line). When we allow the breaking, the energy minimum point appears at the relative cluster-cluster distance of \(\sim\)1.4 fm, where the dashed line (\(\lambda\)=0.2) gives the lowest energy, and \(\alpha\) clusters are significantly broken. We can confirm that the optimal cluster distance gets shorter, and the breaking of \(\alpha\) clusters becomes larger with the increasing number of \(\Lambda\) particles added to the system. Superposition of states with different \({}^{4}\)He\(-^{4}\)He distance and breaking parameter \(\lambda\) To demonstrate the relation between the effect of \(\alpha\) breaking and spin-orbit interaction, we calculate the ground state energies of \({}^{8}\)Be, \({}^{9}_{\Lambda}\)Be, \({}^{10}_{\Lambda\Lambda}\)Be (Table 1) and those of \({}^{12}\)C, \({}^{13}_{\Lambda}\)C, \({}^{14}_{\Lambda\Lambda}\)C (Table 2) with two models: "AQCM" which explicitly takes account of the breaking effect of \(\alpha\), and "Brink model" which does not involve the \(\alpha\) breaking effect (\(\lambda=0\)). We superpose Slater determinants with different positions of the \(\Lambda\) particle(s), \({}^{4}\)He-\({}^{4}\)He cluster distances, and \(\alpha\)-breaking parameter \(\lambda\) and diagonalize the Hamiltonian based on the GCM. For the Be case (Table 1), the energy difference between Brink and AQCM is less than 0.2 MeV in \({}^{8}\)Be, which means that the spin-orbit interaction does not break the \(\alpha\) clusters since they are separated by a certain distance. The situation is basically the same when \(\Lambda\) particle(s) is added. The difference is about 0.5-0.6 MeV in \({}^{9}_{\Lambda}\)Be and \({}^{10}_{\Lambda\Lambda}\)Be. Concerning the ground state energy of \({}^{10}_{\Lambda\Lambda}\)Be, the binding energy (\(B_{\Lambda\Lambda}\)) of \(17.5\pm 0.4\) MeV from \({}^{8}\)Be has been reported in Ref. [36], which has been revised to \(14.7\pm 0.4\) MeV in Ref. [37] (see the discussions in Refs. [38; 39]), and the present result (15.23 MeV) is almost consistent with the latter case. For the C case (Table 2), the energy difference between Brink and AQCM is about 3.3 MeV in \({}^{12}\)C, and this is much enhanced with the increasing number of the \(\Lambda\) particles added. The difference increases to 5.2 MeV in \({}^{14}_{\Lambda\Lambda}\)C. This is because the spin-orbit interaction works in the inner region of the nuclear systems; the glue-like effect of \(\Lambda\) particles shrinks the system and induces more contribution of the spin-orbit interaction. To clarify the mixing of the \(jj\)-coupling shell model components in each state, we utilize the expectation value of the one-body spin-orbit operator, \[\hat{O}^{LS}=\sum_{i}\mathbf{l}_{i}\cdot\mathbf{s}_{i}/\hbar^{2}, \tag{5}\] where \(\mathbf{l}_{i}\) and \(\mathbf{s}_{i}\) are the orbital angular momentum and the spin operators for the \(i\)th nucleon. The sum runs over the nucleons. The expectation value is zero for the pure \(\alpha\) cluster state owing to the antisymmetrization effect. Also, the \(\mathbf{l}_{i}\cdot\mathbf{s}_{i}/\hbar^{2}\) value is 0.5 for one nucleon in the orbit, and the eigen value is 4 for the subclosure configuration of the \(jj\)-coupling shell model (\(\left(s_{1/2}\right)^{4}\left(p_{3/2}\right)^{8}\)) in \({}^{12}\)C. The expectation values of the one-body spin-orbit operator for the ground states of \({}^{8}\)Be, \({}^{9}_{\Lambda}\)Be, and \({}^{10}_{\Lambda\Lambda}\)Be are listed in the column "one-body LS" in Table 1. Although the value increases with the number of \(\Lambda\) particles added, it is rather small and cluster structure is considered to be not broken. However, this is completely different in the C case. The expectation values of the one-body spin-orbit operator for the ground states of \({}^{12}\)C, \({}^{13}_{\Lambda}\)C, and \({}^{14}_{\Lambda\Lambda}\)C are listed in the column "one-body LS" in Table 2. The value is 1.55 for \({}^{12}\)C, and we can reconfirm that the ground state has mixed configurations of shell and cluster aspects. As the number of the \(\Lambda\) particles added increases, we can see that the ground states approach the \(jj\)-coupling shell model side. The values for \({}^{13}_{\Lambda}\)C and \({}^{14}_{\Lambda\Lambda}\)C are 1.86 and 2.05, respectively. ### pure \(\alpha\) cluster state orthogonal to the ground state We have discussed that the ground states shift to the \(jj\)-coupling shell model side by adding \(\Lambda\) particles, and the final question is where the "pure" three-\(\alpha\) cluster state appears in \({}^{14}_{\Lambda\Lambda}\)C. We can discuss it by preparing the pure three-\(\alpha\) cluster states and orthogonalizing them to the ground state. The shift of the ground state to the \(jj\)-coupling shell-model-side after allowing the breaking of \(\alpha\) clusters is found to play a crucial role. \begin{table} \begin{tabular}{c c c c} \hline \({}^{12}\)C & energy (0\({}^{+}\)) & & one-body LS \\ \hline Brink & \(-86.84\) & & 0.00 \\ AQCM & \(-90.12\) & (\(-92.16\)) & 1.55 \\ \hline \({}^{13}_{\Lambda}\)C & energy (1/2\({}^{+}\)) & \(B_{\Lambda}\) & one-body LS \\ \hline Brink & \(-97.77\) & & 0.00 \\ AQCM & \(-102.00\) & \(11.88\) (\(11.69\)[17]) & 1.86 \\ \hline \({}^{14}_{\Lambda\Lambda}\)C & energy (0\({}^{+}\)) & \(B_{\Lambda\Lambda}\) & one-body LS \\ \hline Brink & \(-110.58\) & & 0.00 \\ AQCM & \(-115.74\) & \(25.62\) & 2.05 \\ \hline \end{tabular} \end{table} Table 2: Ground state energies of \({}^{12}\)C, \({}^{13}_{\Lambda}\)C, and \({}^{14}_{\Lambda\Lambda}\)C (“energy (\(J^{x}\))”) after performing the GCM calculations. “Brink” is for the Brink model (\(\lambda=0\)); three-\(\alpha\) clusters with equilateral triangular shapes without the breaking, and “AQCM” is for the AQCM calculation, where different \(\lambda\) states are mixed. “one-body LS” is for the expectation values of the one-body spin-orbit operator. The values in the parenthesis show the experimental values. All energies are in MeV. \begin{table} \begin{tabular}{c c c c} \hline \({}^{8}\)Be & energy (0\({}^{+}\)) & & one-body LS \\ \hline Brink & \(-54.75\) & & 0.00 \\ AQCM & \(-54.94\) & (\(-56.50\)) & 0.12 \\ \hline \({}^{3}_{\Lambda}\)Be & energy (1/2\({}^{+}\)) & \(B_{\Lambda}\) & one-body LS \\ \hline Brink & \(-60.97\) & & 0.00 \\ AQCM & \(-61.53\) & \(6.59\) (\(6.71\)[17]) & 0.29 \\ \hline \({}^{14}_{\Lambda\Lambda}\)Be & energy (0\({}^{+}\)) & \(B_{\Lambda\Lambda}\) & one-body LS \\ \hline Brink & \(-69.60\) & & 0.00 \\ AQCM & \(-70.17\) & \(15.23\) (\(14.7\pm 0.4\)[37]) & 0.44 \\ \hline \end{tabular} \end{table} Table 1: Ground state energies of \({}^{8}\)Be, \({}^{9}_{\Lambda}\)Be, and \({}^{10}_{\Lambda\Lambda}\)Be (“energy (\(J^{x}\))”) after performing the GCM calculations. “Brink” is for the Brink model (\(\lambda=0\)); two-\(\alpha\) clusters without the breaking, and “AQCM” is for the AQCM calculation, where different \(\lambda\) states are mixed. “one-body LS” is for the Brink model (\(\lambda=0\)); three-\(\alpha\) clusters with equilateral triangular shapes without the breaking, and “AQCM” is for the AQCM calculation, where different \(\lambda\) states are mixed. “one-body LS” is for the expectation values of the one-body spin-orbit operator. The values in the parenthesis show the experimental values. All energies are in MeV. Figure 2: (a): Energy curves of 0\({}^{+}\) state of \({}^{12}\)C as a function of the distance between three \({}^{4}\)He clusters with equilateral triangular configuration. Solid line is for \(\lambda=0\) (pure three \(\alpha\)’s) and dotted and dashed lines are for two quasi-clusters with \(\lambda=0.1\) and \(0.2\), respectively. (b): Same as (a) but for the \(1/2^{+}\) state of \({}^{13}_{\Lambda}\)C. (c) Same as (a) but for the 0\({}^{+}\) state of \({}^{14}_{\Lambda\Lambda}\)C The solid line in Fig. 3 (a) shows the excited 0\({}^{+}\) state with equilateral triangular configurations of pure three-\(\alpha\) clusters as a function of the relative distances between the \(\alpha\) clusters. At each \(\alpha\)-\(\alpha\) distance, the wave function is orthogonalized to the ground state. Here the ground state is represented by the optimal AQCM basis state (\({}^{4}\)He-\({}^{4}\)He distance of 1.4 fm and \(\Lambda=0.2\)) shown by the solid circle. Therefore, the two-by-two matrix is diagonalized at every point on the horizontal axis. It is found that the pure cluster state appears around the excitation energy of \(E_{x}=15\) MeV with the relative \(\alpha\)-\(\alpha\) distance of \(\sim\)2.5 fm. To simplify the discussion, the positions for the Gaussian center parameters for the \(\Lambda\) particles are set to origin only in Figs. 3 (a) and (b). This situation is quite different if the \(\alpha\) cluster is assumed to be not broken due to the spin-orbit interaction in the ground state. This is an artificial calculation, but we can clearly see the influence of the cluster-shell competition in the excited state; Fig. 3 (b) shows the result when the ground state is represented by the Brink model, which is prepared by changing the \(\lambda\) value to zero and the \({}^{4}\)He-\({}^{4}\)He distance to 2.2 fm. The excited 0\({}^{+}\) state is quite influenced by this change of the ground state. The energy is pushed up by more than 10 MeV, and the optimal \(\alpha\)-\(\alpha\) distance is increased to \(\sim\)3 fm. This is because if the ground state is a pure three-\(\alpha\) cluster state, the excited states need to be more clusterized to satisfy the orthogonal condition. On the other hand, if the ground state has different components other than the cluster structure, it is easier for the pure cluster state to be orthogonal to the ground state. This effect has been known in \({}^{12}\)C and called the "shrink effect" of the second 0\({}^{+}\) state; when the \(\alpha\) breaking component is mixed in the ground state, the second 0\({}^{+}\) state orthogonal to the ground state shrinks. We found that this shrinking effect is much more enhanced in \({}^{14}_{\Lambda\Lambda}\)C. ## IV Conclusions The effect of adding hyperon(s) in nuclear systems is a fundamental problem in nuclear structure physics. We analyzed this effect in the context of cluster-shell competition and discussed the difference between Be and C cases. The antisymmetrized quasi-cluster model (AQCM) is a useful tool to treat the cluster states and shell-model states on the same footing, and we added \(\Lambda\) particle(s) to \({}^{8}\)Be and \({}^{12}\)C. The cluster breaking effect is negligibly small in \({}^{8}\)Be, where \(\alpha\)-\(\alpha\) cluster structure keeps enough distance; they stay out of the interaction range of the spin-orbit interaction, which breaks the \(\alpha\) clusters. The situation holds even after \(\Lambda\) particle(s) is added. The glue-like effect of \(\Lambda\) particles surely shrinks the cluster-cluster distance, but clusters are not yet broken. The situation is completely different in the C case since the additional \(\alpha\) cluster shrinks the cluster-cluster distance, and clusters are in the interaction range of the spin-orbit interaction. The ground state of \({}^{12}\)C contains the component of the \(jj\)-coupling shell model. The energy difference between the traditional Brink model and AQCM is about 3.3 MeV in \({}^{12}\)C, and this is much enhanced with the increasing number of the \(\Lambda\) particles added. The energy difference is about 5.2 MeV in \({}^{14}_{\Lambda\Lambda}\)C. This is because the spin-orbit interaction works in the inner region of the nuclear systems, and the glue-like effect of \(\Lambda\) particles shrinks the system and induces more contribution of the spin-orbit interaction. In \({}^{14}_{\Lambda\Lambda}\)C, the breaking of \(\alpha\) clusters in \({}^{12}\)C is much enhanced by the addition of the \(\Lambda\) particles. The energy and structure of the excited 0\({}^{+}\) state with a pure cluster structure are found to be drastically affected by the transition of the ground state to the \(jj\)-coupling shell model side. ###### Acknowledgements. This work was supported by JSPS KAKENHI Grant Number 19J20543, 22K03618, and JP18H05407. The numerical calculations have been performed using the computer facility of Yukawa Institute for Theoretical Physics, Kyoto University (Yukawa-21). Figure 3: Excited 0\({}^{+}\) state comprised of pure three \(\alpha\) clusters in \({}^{14}_{\Lambda\Lambda}\)C as a function of distances between \(\alpha\)–\(\alpha\) (solid lines). Ground state is represented by the AQCM basis state with the \({}^{4}\)He–\({}^{4}\)He distance of 1.4 fm and \(\lambda=0.2\) (a) and \({}^{4}\)He–\({}^{4}\)He distance of 2.2 fm and \(\lambda=0.0\) (b), which are shown by the solid circles.
2307.05990
Emergent zero-field anomalous Hall effect in a reconstructedrutileantiferromagnetic metal
Anomalous Hall effect (AHE) emerged in antiferromagnetic metals shows intriguing physics and application potential. In contrast to certain noncollinear antiferromagnets, rutile RuO$_2$ has been proposed recently to exhibit a crystal-assisted AHE with collinear antiferromagnetism. However, in RuO$_2$, the on-site magnetic moment accompanying itinerant 4d electrons is quite small, and more importantly, the AHE at zero external field is prohibited by symmetry because of the high-symmetry [001] direction of the N\'eel vector. Here, we show the AHE at zero field in the collinear antiferromagnet, Cr-doped RuO$_2$. The appropriate doping of Cr at Ru sites results in a rotation of the N\'eel vector from [001] to [110] and enhancement of the on-site magnetic moment by one order of magnitude while maintaining a metallic state with the collinear antiferromagnetism. The AHE with vanishing net moment in the Ru$_{0.8}$Cr$_{0.2}$O$_2$ exhibits an orientation dependence consistent with the [110]-oriented N\'eel vector. These results open a new avenue to manipulate AHE in antiferromagnetic metals.
Meng Wang, Katsuhiro Tanaka, Shiro Sakai, Ziqian Wang, Ke Deng, Yingjie Lyu, Cong Li, Di Tian, Shengchun Shen, Naoki Ogawa, Naoya Kanazawa, Pu Yu, Ryotaro Arita, Fumitaka Kagawa
2023-07-12T08:10:51Z
http://arxiv.org/abs/2307.05990v1
# Emergent zero-field anomalous Hall effect in a reconstructed rutile antiferromagnetic metal ###### Abstract **Anomalous Hall effect (AHE) emerged in antiferromagnetic metals shows intriguing physics and application potential. In contrast to certain noncollinear antiferromagnets, rutile RuO\({}_{2}\) has been proposed recently to exhibit a crystal-assisted AHE with collinear antiferromagnetism. However, in RuO\({}_{2}\), the on-site magnetic moment accompanying itinerant _4d_ electrons is quite small, and more importantly, the AHE at zero external field is prohibited by symmetry because of the high-symmetry [001] direction of the Neel vector. Here, we show the AHE at zero field in the collinear antiferromagnet, Cr-doped RuO\({}_{2}\). The appropriate doping of Cr at Ru sites results in a rotation of the Neel vector from [001] to [110] and enhancement of the on-site magnetic moment by one order of magnitude while maintaining a metallic state with the collinear antiferromagnetism. The AHE with vanishing net moment in the Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) exhibits an orientation dependence consistent with the [110]-oriented Neel vector. These results open a new avenue to manipulate AHE in antiferromagnetic metals.** ## Introduction Anomalous Hall effect (AHE) had been considered for a long time as a unique feature of ferromagnetic metals, and its magnitude was empirically taken as proportional to the macroscopic magnetization \(M\)[1, 2]. It follows that in antiferromagnetic materials, which host zero macroscopic magnetization or only small canting moments, the AHE should be negligibly small. However, modern theories indicate that in some antiferromagnetic materials, the AHE can be expected if the magnetic space group (MSG) (or, equivalently, the magnetic point group that the MSG belongs to) allows for a nonzero Berry curvature and/or asymmetric scattering, even if the corresponding macroscopic magnetization is zero.[3, 4, 5] Such an AHE has been experimentally demonstrated for various noncollinear antiferromagnets with magnetic multipoles [3, 4, 5, 6, 7, 8, 9, 10, 11, 12], such as kagome Mn\({}_{3}\)Sn and pyrochlore \(R_{2}\)Ir\({}_{2}\)O\({}_{7}\). From the symmetry point of view, an antiferromagnetism-induced AHE can also be expected in a collinear antiferromagnet. A prototypical candidate material that has been extensively considered is the rutile antiferromagnetic RuO\({}_{2}\)[14, 15, 16, 17, 18]. As shown in **Fig. 1a**, the crystal structure of RuO\({}_{2}\) consists of two Ru sublattices with antiparallel magnetic moments. The two magnetic sublattices have different chemical environments due to the asymmetric O-Ru-O bond configuration. The simplest argument to determine the presence or absence of the AHE under collinear antiferromagnetism would be to consider how the Hall vector \(\mathbf{\sigma}_{\rm Hall}=(\sigma_{\rm yz}\), \(\sigma_{\rm zz}\), \(\sigma_{\rm xy})\) is transformed by the symmetry operations. For instance, if the Neel vector (\(\mathbf{L}\)) of RuO\({}_{2}\) is along the [110] direction, the MSG is _Cmm'm'_, in which \(\mathbf{\sigma}_{\rm Hall}\) along [110] is invariant under all symmetry operations and thus allows for a zero-field AHE [14]. In contrast, if \(\mathbf{L}\parallel\)[001], the MSG is P4'_z/mmm'_[14], which does not allow for a finite \(\mathbf{\sigma}_{\rm Hall}\) because no vector can be invariant under two orthogonal rotation symmetry operations (see Supplementary Note 1 for details). A previous neutron experiment indicates that the Neel vector in RuO\({}_{2}\) is along [001],[15] and hence \(\mathbf{\sigma}_{\rm Hall}\) and the zero-field AHE are prohibited by symmetry (Supplementary Fig. 1). To unveil the AHE associated with the collinear antiferromagnetism in RuO\({}_{2}\), a recent study focused on tilting the Neel vector from [001] toward [110] by utilizing a high magnetic field of \(\sim\)50 T. [17, 18] This phenomenon can be viewed as a magnetic-field-induced AHE associated with a Neel vector, forming a sharp contrast to AHEs in ferromagnets, in which the AHE can be observed even under zero-field. Thus, achieving a zero-field AHE in such a rutile-type collinear antiferromagnet remains challenging in experiments. The previous density functional theory (DFT) calculations have revealed that the easy axis of the Neel vector in RuO\({}_{2}\) sensitively depends on the electron filling, [17] which inspired us to pursue the zero-field AHE in the derivatives of RuO\({}_{2}\) by means of appropriate modulations on its Fermi level. To change the direction of the Neel vector from [001] and render the zero-field AHE allowed by symmetry, we dope Cr into RuO\({}_{2}\). Note that the 4\(d\) orbital level of Ru\({}^{4+}\) is slightly higher than the 3\(d\) orbital level of Cr\({}^{4+}\), a charge transfer from Ru\({}^{4+}\) to Cr\({}^{4+}\) ions can naturally be expected (**Fig. 1b)[19, 20]** while favouring anti-parallel spin coupling between the nearest-neighbouring Ru and Cr sites. Besides, considering that collinear spin orders are realized in both RuO\({}_{2}\) (antiferromagnetic) and CrO\({}_{2}\) (ferromagnetic) in rutile phases [15, 21, 22], the collinear antiferromagnetic state is reasonably expected in stoichiometric proximity to RuO\({}_{2}\). In this work, our magnetometry confirms that the direction of the Neel vector in the Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) film is driven to the [110] direction. Concomitantly, we find that the Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) film exhibits an appreciable zero-field AHE with hysteretic behaviour while the net magnetization is vanishingly small. ## Results ### DFT+DMFT calculations on the impact of Cr-doping. To gain insight into the impact of Cr-doping on the Fermi level, we first performed DFT calculations for the paramagnetic states of Ru\({}_{1.\text{x}}\)Cr\({}_{\text{x}}\)O\({}_{2}\) for x = 0 and 0.5. As shown in **Fig. 1c**, by doping Cr, the shift of the projected density of states (or, equivalently, the shift of the Fermi level) is observed, as expected. The magnetic calculation for x = 0.5 (**Fig. 1d**) further demonstrates that the ground state has appreciable local magnetic moments with antiparallel couplings among the nearest neighboring Cr and Ru ions. Note that the DFT+\(U\) calculations on RuO\({}_{2}\) show that the energy difference with the Neel vector orienting to [001], [100], and [110] is tiny (\(\sim\) 5 meV) and that the easy-axis direction sensitively depends on the Fermi level (Supplementary Fig. 2). [17] Our DFT results therefore support our working hypothesis that Cr doping is a promising approach to change the Neel vector direction while maintaining the collinear antiferromagnetic order. The DFT results indicate that Cr doping is also accompanied by the enhancement of the local magnetic moment. For the case of non-doped RuO\({}_{2}\), the Ru ions exhibit a negligibly small spin polarization when \(U\) is small (\(<\) 1 eV) (Supplementary Fig. 3). Such a small on-site moment is ascribed to the itinerant 4\(d\) orbital, presumably consistent with the quite small moment (\(\sim\)0.05 \(\mu_{\rm B}\) per site) observed by neutron experiment in RuO\({}_{2}\). In contrast, when Cr is doped, considerable local moments are observed (0.15 \(\mu_{\rm B}\) for x = 0.25 and 0.4 \(\mu_{\rm B}\) for x = 0.5; see Supplementary Fig. 3) in the DFT calculations, even at \(U\) = 0. Thus, based on our DFT calculations, we can expect that (i) the easy axis of the Neel vector changes from the original [001] direction, in which the zero-field AHE is prohibited, to another direction, and (ii) the impact of the collinear antiferromagnetic ordering on the transport properties is more observable due to the enhancement of the local magnetic moments. These expectations should be verified by the experiments below. ### Films fabrication and valence evaluation. We synthesized the Ru\({}_{1\textrm{-}}\)Cr\({}_{\textrm{x}}\)O\({}_{2}\) films by pulsed laser deposition (PLD) on TiO\({}_{2}\) (110) substrates with x =0.1, 0.2, and 0.3 (see Methods). The high crystalline quality of the films was confirmed by X-ray 2\(\theta\)-\(\omega\) scans (see supplementary Fig. 4a) and the surface topography with atomic terraces (Supplementary Fig. 4c). Besides, the resistivities of the materials increase as the doping level increases, while all compounds show a metallic behavior, as shown in Supplementary Fig. 5a. The robust metallicity implies the strong overlap of Cr and Ru orbitals. To probe the valence state of the doped Cr in the rutile lattice, we carried out soft X-ray absorption spectroscopy (XAS) measurements (see Methods) on the three films. **Figure 2a** shows the XAS results near the \(L\)-edge of Cr, with a comparison to that from La\({}_{1\text{-}x}\)Sr\({}_{x}\)CrO\({}_{3}\) materials [23]. The Cr in all of the Ru\({}_{1\text{-}x}\)Cr\({}_{x}\)O\({}_{2}\) films exhibits a fractional valence state between \(+\)3.25 and \(+\)3.5. As the doping level increases from 0.1 to 0.3, the peak shows a gradual shift to lower energy, indicating a gradual decrease in valence. Such a tendency is consistent with our scenario that the Cr doping is accompanied by the charge transfer and the corresponding Fermi-level shift. ### Antiferromagnetic metal phases in the Ru\({}_{1\text{-}x}\)Cr\({}_{x}\)O\({}_{2}\) films. To check whether the magnetic ground state is still antiferromagnetic upon the Cr doping, we performed magnetic susceptibility (\(\chi\)) and magnetization (\(M\)) measurements with magnetic field (\(H\)) and temperature (\(T\)) dependences (see Methods and Supplementary Fig. 6 for details). The results are summarized in **Figs. 2b, c**, and we first focus on the results of x = 0.1 and 0.2. The high-temperature regions of the \(\chi^{1}\)-\(T\) profiles are fitted with the Curie-Weiss law, \(\chi=C/(T\)-\(\theta_{\text{W}})\), and we obtain \(\theta_{\text{W}}\approx\) -10 K and -75 K for x = 0.1 and 0.2, respectively. These results indicate that an antiferromagnetic interaction is dominant in x = 0.1 and 0.2 [24, 25, 26, 27]. Moreover, compared with the local moment of \(\sim\)0.05 \(\mu_{\text{B}}\) per site in pure RuO\({}_{2}\), [15] the effective on-site moments (\(\mu_{\text{eff}}\)) obtained from the fittings are distinctly enhanced in x = 0.1 (-0.9 \(\mu_{\text{B}}\) per site) and x = 0.2 (\(\sim\)2.1 \(\mu_{\text{B}}\) per site) (Fig. 2b, inset and Supplementary Note 2). [24, 25] This pronounced enhancement is also consistent with our DFT calculations. The \(M\)-\(H\) curves at the lowest temperature, 3 K, demonstrate that the spontaneous net magnetization at zero field is too small to be distinguished in the antiferromagnetic Ru\({}_{0.9}\)Cr\({}_{0.1}\)O\({}_{2}\) and Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) (**Fig. 2c**). Moreover, the field-induced moment at 7 T is only 0.03 \(\mu_{\text{B}}\) (x = 0.1) and 0.04 \(\mu_{\text{B}}\) (x = 0.2) per formula unit (\(\mu_{\text{B}}\)/f.u.), which are almost two orders of magnitude smaller than that in ferromagnetic SrRuO\({}_{3}\)[28, 29] and CrO\({}_{2}\)[30], excluding the possibility of a ferromagnetic ground state for x = 0.1 and 0.2. In addition, a Kerr mapping was also carried out in the Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) film by utilizing a high-resolution equipment at 7 K and 0 T (see Supplementary Fig. 7). However, the observed Kerr rotation (\(\sim\)\(\mu\)rad) is three orders of magnitude smaller than that (\(\sim\)mrad) in the ferromagnetic SrRuO\({}_{3}\) film [31], and no domain walls can be observed, which further indicates that an antiferromagnetic state is preserved with a vanishingly small net magnetization at zero magnetic field. In the Ru\({}_{0.7}\)Cr\({}_{0.3}\)O\({}_{2}\) film, contrastingly, the analysis based on the Curie-Weiss law results in a small positive \(\theta_{\rm W}\) with \(\mu_{\rm eff}\) of \(\sim\)2.5 \(\mu_{\rm B}\) per site (Fig. 2b, and Supplementary Note 2). Furthermore, the \(M\)-\(H\) curve exhibits a finite remanent magnetization, and the magnetization at 7 T is distinctly larger compared with the case of x = 0.1 and 0.2. These observations indicate the evolution of a ferrimagnetic phase in x = 0.3, consistent with the tendency from RuO\({}_{2}\) to CrO\({}_{2}\)[21, 15]. Therefore, the AHE accompanying the ferrimagnetic phase in x = 0.3 is beyond the scope of this study. ### Neel-vector direction in the Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) film. We then focus on the antiferromagnetic Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) (110) sample, which exhibits a pronounced \(\mu_{\rm eff}\) of \(\sim\)2.1 \(\mu_{\rm B}\) per site, and aim to reveal the direction of the Neel vector. The DFT calculations in RuO\({}_{2}\) suggest a finite net magnetic moment when the Neel vector along [100] is assumed (Supplementary Fig. 2a), which should be preserved in the doped phase. Our \(M\)-\(H\) measurements in Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) show a vanishing net moment, thereby ruling out the possibility that the Neel vector is along [100]. Then, the remaining candidates of the Neel-vector direction are the [001] and [110] orientations. To test these two possibilities, we refer to the fact that the field-induced moment in a collinear antiferromagnet is generally minimized when the field is parallel to the Neel vector, as illustrated in **Fig. 3** inset [25, 17]. The anisotropy of the field-induced moment was measured on the Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) (110) film for the fields of the out-of-plane [110] and in-plane [001] directions. The anisotropic response demonstrates that the [110] axis exhibits a smaller field-induced moment (Fig. 3) and thus the Neel vector should be along [110], rather than [001], in Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\). The corresponding MSG is _Cmm'm'_, and hence the zero-field AHE is allowed by symmetry.[16, 14] **AHE in the Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\)(110) film.** The longitudinal resistivity and Hall conductivity in Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) (110) film were measured with currents along two in-plane directions, [001] and \(\quad\)[1\(\overline{1}\)0], as shown in Supplementary Fig. 5 (see Methods). Both directions show a metallic state, and the Hall conductivity measured with the current along \(\quad\)[1\(\overline{1}\)0] exhibits a larger signal. Therefore, we below present the results of the AHE with the current along \(\quad\)[1\(\overline{1}\)0]. **Figure 4a** shows the Hall conductivity (\(\sigma_{\rm xy}\)) with a magnetic field sweeping at 3 K. Distinctly, a hysteretic feature is observed, in stark contrast to the absence of a hysteretic behavior in the \(M\)-\(H\) curve (Fig. 2c). This behavior demonstrates that the finite Hall vector is involved in the Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) (110) film, even though the net magnetization is vanishingly small within the experimental accuracy. Thus, in the magnetic field range in which \(\sigma_{\rm xy}\) shows hysteretic behavior, one should take into account the coexistence of the two magnetic domains with opposite Hall vectors (i.e., the AHCs with opposite signs). In general, the origin of \(\sigma_{\rm xy}\) consists of the external magnetic field (or ordinary Hall conductivity, \(\sigma_{\rm xy}{}^{\rm OHE}\), proportional to \(H\) with a coefficient \(k_{o}\)) and the magnetism (or anomalous Hall conductivity, \(\sigma_{\rm xy}{}^{\rm AHE}\)). The \(\sigma_{\rm xy}{}^{\rm AHE}\) is often dictated by the contribution proportional to the net magnetization, but in the present system, the antiferromagnetic order coupled with the special lattice symmetry can also contribute [3, 6, 14]. Thus, the observed \(\sigma_{\rm xy}\) can be described as the sum of the three components: \[\sigma_{\rm xy}(H) = \sigma_{\rm xy}{}^{\rm OHE}(H)+\sigma_{\rm xy}{}^{M}(H)+\sigma_{ \rm xy}{}^{\rm AF}(H) \tag{1}\] \[= k_{o}\cdot H+k_{m}\cdot M(H)+\sigma_{\rm xy}{}^{\rm AF}(H),\] where \(\sigma_{\rm xy}{}^{M}\) is the anomalous Hall conductivity proportional to the field-induced net magnetic moment \(M\) with a coefficient \(k_{m}\), and \(\sigma_{\rm xy}{}^{\rm AF}\) is the anomalous Hall conductivity arising from the antiferromagnetic ordering [6]. Note that in the present field range, the magnetic field-dependent \(\sigma_{\rm xy}{}^{\rm AF}(H)\) is caused by the change in the relative volume of the two types of antiferromagnetic domains with opposite signs of AHC. At sufficiently high magnetic fields, the hysteretic behavior disappears, and therefore, a single antiferromagnetic domain is expected. Thus, \(\sigma_{xy}^{\rm AF}\) is considered to be a constant, \(\sigma_{xy}^{\rm AF,0}\), at a sufficiently high magnetic field[14]. Utilizing the data of Hall conductivity and magnetization at 4-7 T, where the hysteretic behavior is absent, we can thus obtain the coefficients, \(k_{o}\) and \(k_{\rm m}\), and \(\sigma_{xy}^{\rm AF,0}\). For clarity, by subtracting \(\sigma_{xy}^{\rm CHE}=k_{o}\cdot H\), we display the experimental \(\sigma_{xy}^{\rm AHE}\) together with the fitting curve \(k_{m}\cdot M\) + \(\sigma_{xy}^{\rm AF,0}\) as a function of the net magnetization in **Fig. 4b**. The value of \(\sigma_{xy}^{\rm AF,0}\) is \(\sim\)3.2 S/cm, which is indicated by the intercept of the fitting curve at \(M=0\). In the low-field region, the experimental \(\sigma_{xy}^{\rm AHE}\)(\(H\)) deviates from the linear fitting. In the present framework, this deviation is attributable to the coexistence of two antiferromagnetic domains with opposite signs of AHC. The evolutions of \(\sigma_{xy}^{\rm AF}\) and \(\sigma_{xy}^{M}\)with magnetic field sweeping at 3 K are shown in **Fig. 4c**, where \(\sigma_{xy}^{M}\) is set to \(k_{m}\cdot M\), and \(\sigma_{xy}^{\rm AF}\) is obtained by subtracting \(\sigma_{xy}^{M}\) from \(\sigma_{xy}^{\rm AHE}\). Interestingly, the \(\sigma_{xy}^{\rm AF}\) shows a hysteretic profile and a clear remnant value even at the vanishing net moment (Fig. 2c). Such features indicate an AHC contributed by the antiferromagnetic ordering, not due to the canting moment. The emergent \(\sigma_{xy}^{\rm AF}\) decreases as the temperature increases and disappears at 40-50 K (**Fig. 4d** and Supplementary Fig. 8), indicating the antiferromagnetic order transition point (\(T_{\rm N}\)). Note that the noncollinear antiferromagnetic materials with complicated spin interactions generally show a large value of \(|\theta_{\rm W}/T_{\rm N}|\) (\(>10\)) [32]. Here, the small value of \(|\theta_{\rm W}/T_{\rm N}|\) = 1.5-1.8 in the Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\)film is typically located in the regime of collinear antiferromagnets. To gain further insight into the microscopic mechanisms of the \(\sigma_{xy}^{\rm AF}\) and \(\sigma_{xy}^{M}\), we compared the AHC-\(\sigma_{\rm xx}\) scaling curves [2; 33; 34; 35; 36] among Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) (110) films with different \(\sigma_{\rm xx}\), which was tuned by tailoring the thickness. As shown in Supplementary Fig. 9, all films are located at the crossover from dirty to intermediate regimes with \(10^{3}<\sigma_{\rm xx}<10^{4}\) S/cm, thereby ruling out the skew scattering contribution, which is generally considered in high conductive metals (\(\sigma_{\rm xx}>10^{6}\) S/cm). Besides, a further analysis based on the \(\sigma_{xy}^{M}(T)\)-\(\sigma_{\rm xx}(T)^{2}\) profile gives an intrinsic Berry curvature term of 14 S/cm (Supplementary Note 3) and the extrinsic side-jump contribution of \(\sim 10\) S/cm. These results indicate that the Berry curvature and extrinsic scattering microscopic mechanisms both contributes to \(\sigma_{xy}{}^{M}\left(T\right)\) in our films. We note that the \(\sigma_{xy}{}^{M}\) value is similar to the AHC in ferromagnetic SrRuO\({}_{3}\) films grown by PLD, although the canting moment (0.04 uB/f.u.) of our Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) (110) film is \(\sim\)40 times smaller than the ferromagnetic moment in SrRuO\({}_{3}\) films [37, 38]. We also note that the value of \(\sigma_{xy}{}^{AF}\) in Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) is one order of magnitude larger than the recently reported collinear antiferromagnetic semiconductor MnTe [39]. ### Orientation-anisotropic anomalous Hall response. Finally, we show that the transport properties in our Ru\({}_{0.8}\)Cr\({}_{0.2}\)O\({}_{2}\) film also indicate the Hall vector along [110]. To address this issue experimentally, we referred to the fact that the transverse anomalous Hall current (\(\mathbf{J}_{\mathrm{H}}\)) is given by \(\mathbf{J}_{\mathrm{H}}=\mathbf{E}\mathbf{\times}\mathbf{\sigma}_{\mathrm{Hall}}\), [14, 16] where \(\mathbf{E}\) represents the applied external electric field, and carried out transport measurements on another film grown on TiO\({}_{2}\) (100). Herein, the current was applied along the [010] direction to keep the Hall voltage also along the [001] direction for comparison. The temperature dependence of AHE is similar to that observed for the [110]-oriented films (Supplementary Fig. 8), indicating that the transition temperature is not affected by the orientation of the substrate. As shown in **Fig. 5a** and 5b, the longitudinal conductivities at low temperatures of the two films are very close to each other, while the \(\sigma_{xy}{}^{\mathrm{AHE}}\) that emerges from the (100) film is distinctly smaller than the value for the film grown on TiO\({}_{2}\) (110). Upon further analyzing the magnetization and the anomalous Hall contributions of \(\sigma_{xy}{}^{M}\) and \(\sigma_{xy}{}^{AF}\), as shown in **Fig. 5c**, we find that both of the anomalous Hall components are suppressed compared with those in **Fig. 4c**, although the emergent magnetization is increased. Furthermore, we find that the saturated \(\sigma_{xy}{}^{AF}\) in the [100]-oriented film is approximately \(\sim\)2.2 S/cm, which is 0.7 (\(\simeq\) sin45\({}^{\circ}\)) times that in the [110]-oriented sample, \(\sim\)3.2 S/cm. Independent of the symmetry arguments based on the Neel-vector direction along [110], these transport results further support that the Hall vector is directed along the [110] direction in this compound, as illustrated in Fig. 5c, inset. In summary, by tuning the 3_d_-4\(d\) orbital reconstruction to achieve symmetry manipulation and balance the itinerant properties and the electron correlation, we have succeeded in observing the zero-field AHE in the collinear antiferromagnetic rutile metal. Note that the antiferromagnetic metallic phase is extremely rare in correlated oxides [26, 27, 40], and such a wide regime emerging in Ru\({}_{1\text{-}x}\)Cr\({}_{x}\)O\({}_{2}\) (x \(\leq\) 0.2) should be ascribed to the unique orbital reconstruction between Cr and Ru. We envision that this design strategy can be extended to more systems to produce further exotic phenomena. ## Methods **DFT calculations and Wannierization.** We computed the Bloch wavefunctions for RuO\({}_{2}\) on the basis of density functional theory (DFT) using the Quantum ESPRESSO package [41, 42]. We first assumed a nonmagnetic structure without spin-orbit coupling and used the projector augmented wave pseudopotential [43] and the generalized gradient approximation of the Perdew-Burke-Ernzerhof exchange correlation functional [44]. We used lattice constants of a = 4.492 A and c = 3.107 A. The energy cutoff for the wave function and the charge density, \(e_{\text{wfc}}\) and \(e_{\text{rho}}\), respectively, were set to \(e_{\text{wfc}}\) = 60 Ry and \(e_{\text{rho}}\) = 400 Ry. We used \(\mathbf{k}\)-point meshes of 12\(\times\)12\(\times\)16 and 16\(\times\)16\(\times\)16 in the self-consistent field (scf) and non-scf calculations, respectively. After the DFT calculations, Wannierization was performed by using the wannier90 package [45, 46], in which the Bloch orbitals were projected onto the \(t_{\text{2g}}\) orbitals of Ru ions with 16\(\times\)16\(\times\)16 \(\mathbf{k}\)-point grids. To calculate the electronic states of Ru\({}_{1\text{-}x}\)Cr\({}_{x}\)O\({}_{2}\), with x = 0, 0.25, and 0.5, we replaced the Ru-sites denoted as Ru-1 or Ru-2 in Supplementary Fig. 3a with Cr. In this calculation, we set \(e_{\text{rho}}\) = 500 Ry, and the spin-orbit coupling was not included. For the x = 0 and 0.5 systems, we took 24\(\times\)24\(\times\)32 \(\mathbf{k}\)-mesh for the scf calculation. When we calculated the ground states of Ru\({}_{0.75}\)Cr\({}_{0.25}\)O\({}_{2}\), we used the supercell with the b- or c-axis doubled. We took the \(\mathbf{k}\)-mesh of 24\(\times\)12\(\times\)32 (24\(\times\)24\(\times\)16) when the b- (c-)axis was doubled for the scf calculation. We found that the supercell with the b-axis doubled was more energetically stable, which we have used for discussion. To obtain the projected density of states (PDOS) of the x = 0 and 0.5 systems, we performed the non-scf calculations with 24\(\times\)24\(\times\)32 \(\mathbf{k}\)-mesh after the scf calculation and then calculated the PDOS. We also calculated the PDOS of RuO\({}_{2}\) with the DFT\(+U\) method with \(U=3\) eV and nonmagnetic Ru\({}_{1\text{-}x}\)Cr\({}_{\text{x}}\)O\({}_{2}\) with x = 0 and 0.5, where we set \(e_{\text{rho}}=500\) Ry and took 24\(\times\)24\(\times\)32 \(\mathbf{k}\)-points for the scf and non-scf calculations. For examining the orientation of the Neel vector, we performed the DFT\(+U\) calculation for RuO\({}_{2}\) with the spin-orbit coupling for the three cases where the Neel vector was initially along [001], [100], and [110]. We took \(U=3\) eV. We used 24\(\times\)24\(\times\)32 \(\mathbf{k}\)-points and set \(e_{\text{rho}}=500\) Ry. The convergence threshold for the calculation of the Neel vector orientation was set as 10\({}^{\text{-}6}\) Ry. **DMFT calculations.** The Wannier functions obtained above define a tight-binding model for the three Ru \(t_{\text{2g}}\) orbitals of RuO\({}_{2}\). Using this as the one-body part of the Hamiltonian, we constructed a multiorbital Hubbard model with intra(inter)orbital Coulomb interaction _U(U)_ and Hund's coupling and pair hopping \(J\). We solved the model within the dynamical mean field theory (DMFT) [47] at zero temperature. As a solver for the DMFT impurity problem, we used the exact diagonalization method [48], where the dynamical mean field was represented by nine bath sites. To obtain the antiferromagnetic solution, we assumed opposite spin polarizations at neighboring Ru sites in the unit cell. For the interaction parameters, we assumed \(U=U^{\prime}+2J\) and \(J=U/5\) for the sake of simplicity. **Thin-film growth, X-ray diffraction, and XAS.** The Ru\({}_{1\text{-}x}\)Cr\({}_{\text{x}}\)O\({}_{2}\) films were grown on the rutile TiO\({}_{2}\) substrate by the PLD method with stoichiometric targets. During sample growth, the substrate temperature was kept at 290 \({}^{\circ}\)C to suppress interfacial diffusion, and the oxygen partial pressure was kept at 20 mTorr. The laser fluence was 1.2 J/cm\({}^{2}\) (KrF, \(\lambda\)= 248 nm), and the deposition frequency was 3 Hz. After deposition, the samples were cooled to room temperature at a rate of 10 \({}^{\circ}\)C/min under an oxygen pressure of 10 Torr. The film thickness was determined directly with an X-ray reflectivity measurement. X-ray diffraction measurements were performed using a high-resolution diffractometer (Rigaku) with monochromatic Cu K\({}_{\alpha 1}\) (\(\lambda=1.5406\) A) X-rays. The stoichiometry in the thin film was checked by energy dispersive X-ray (EDX), and the ratio of Ru/Cr was confirmed to be very close to the target. The XAS curves of Cr L-edge were measured with a total electron mode, at 20 K, in beamline BL07U of Shanghai Synchrotron Radiation Facility. **Transport and magnetization measurements.** All of the electrical transport was carried out on Hall bar devices with a size of 300 \(\upmu\)m \(\times\) 60 \(\upmu\)m, which were fabricated by photolithography. The milling process was carried out with Ar/O\({}_{2}\) (10:1) mixed ions and at a low speed to avoid oxygen vacancy formation on the TiO\({}_{2}\) surface. The transport measurements were carried out with a PPMS system (Quantum Design) with an in-plane DC current. The magnetoresistivity (MR) and its anisotropy were very small, as shown in Supplementary Fig. 10. The Hall conductivity \(\sigma_{\mathrm{xy}}\) was calculated as \(\sigma_{\mathrm{xy}}\) = -\(\rho_{\mathrm{yx}}\)/(\(\rho_{\mathrm{xx}}\)\({}^{2}\)). The magnetization was measured using an MPMS system (Quantum Design) and obtained by subtracting the contribution from the TiO\({}_{2}\) substrate. The Hall vector (\(\mathbf{\sigma_{\mathrm{Hall}}}\) ) is defined as that in reference [14]. **Acknowledgments:** This research was supported by JSPS KAKENHI (Grant Nos. 21H04437, 19H05825, 19H02594, 21H04442, 21K14398) and JST CREST (Grant No. JPMJCR1874). P. Y. was financially supported by the National Key R&D Program of China (grant No. 2021YFE0107900) and the National Natural Science Foundation of China (grant No. 52025024). **Author Contributions:** M. W. and F. K. conceived the project. M. W. grew the thin films and performed the transport measurements with help from S. Shen. K. T. and S. Sakai performed the calculations with the supervision of R. A. M.W., Y. L., and D. T. conducted the XRD and magnetization measurements with support from P. Y. K. D. and C. L. conducted the XAS measurements. Z. W. and N. O. performed the Kerr mapping. M. W. and F. K. wrote the manuscript. All of the authors discussed the results and provided feedback. **Competing Interests:** The authors declare no competing interests. **Data availability:** All data used to generate the figures in the manuscript and supplementary information is available on Zenodo at: [https://zenodo.org/record/8128770](https://zenodo.org/record/8128770)
2307.14645
Quantum dynamics of molecular ensembles coupled with quantum light: Counter-rotating interactions as an essential component
The rotating-wave approximation to light-matter interactions is widely used in the quantum electrodynamics Hamiltonian; however, its validity has long been a matter of debate. In this article, we explore the impact of the rotating-wave approximation on the quantum dynamics of multiple molecules in complex dielectric environments within the framework of macroscopic quantum electrodynamics. In general, we find that the energy shifts of the molecules and the inter-molecule dipole-dipole interaction obtained in the weak coupling regime are correct only when the counter-rotating interactions are considered. Moreover, under the rotating-wave approximation, the energy shifts of the ground-state molecules and a portion of the inter-molecule interaction are discarded. Notably, in the near-field zone (short inter-molecular distance), the reduction of inter-molecule interaction can reach up to 50 percent. We also conduct a case study on the population dynamics of a pair of identical molecules above a plasmonic surface. Through analytical and numerical analysis, it is revealed that the rotating-wave approximation can profoundly affect the dynamics of the molecules in both strong and weak coupling regimes, emphasizing the need for careful consideration when making the rotating-wave approximation in a multiple-molecule system coupled with quantum light.
Yi-Ting Chuang, Liang-Yan Hsu
2023-07-27T06:38:44Z
http://arxiv.org/abs/2307.14645v1
# Quantum dynamics of molecular ensembles coupled with quantum light: ###### Abstract The rotating-wave approximation to light-matter interactions is widely used in the quantum electrodynamics Hamiltonian; however, its validity has long been a matter of debate. In this article, we explore the impact of the rotating-wave approximation on the quantum dynamics of multiple molecules in complex dielectric environments within the framework of macroscopic quantum electrodynamics. In general, we find that the energy shifts of the molecules and the inter-molecule dipole-dipole interaction obtained in the weak coupling regime are correct only when the counter-rotating interactions are considered. Moreover, under the rotating-wave approximation, the energy shifts of the ground-state molecules and a portion of the inter-molecule interaction are discarded. Notably, in the near-field zone (short inter-molecular distance), the reduction of inter-molecule interaction can reach up to 50 percent. We also conduct a case study on the population dynamics of a pair of identical molecules above a plasmonic surface. Through analytical and numerical analysis, it is revealed that the rotating-wave approximation can profoundly affect the dynamics of the molecules in both strong and weak coupling regimes, emphasizing the need for careful consideration when making the rotating-wave approximation in a multiple-molecule system coupled with quantum light. ## I Introduction Over the past few decades, extensive experimental and theoretical research has demonstrated that coupling molecules (or quantum emitters) with confined electromagnetic fields in a well-designed photonic environment is a promising avenue for modifying physical processes such as spontaneous emission [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16] and resonance energy transfer [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29]. Recently, there has been a surge of interest in utilizing the concept of coupling molecules with confined electromagnetic fields to manipulate chemical reactions [30; 31; 32; 33; 34; 35; 36]. The successful realization of such experiments has spurred scientists to develop theoretical frameworks aimed at elucidating the underlying physics behind observed phenomena and predicting novel effects. Among these theoretical frameworks, cavity quantum electrodynamics (QED) Hamiltonian are extensively used, especially the Tavis-Cummings model and its extensions [37; 38; 39; 40; 41; 42; 43]. Qunatum electrodynamics serves as one of the most favorable foundations to study the interactions between molecules and electromagnetic fields. Within the framework of QED, one typically needs to make use of further approximations to simplify the light-matter interactions, and the rotating-wave approximation (RWA) [44; 45] is viewed as one of the most commonly adopted approximations. To simply demonstrate the key concept of the RWA, we consider a simple model consisting of a single molecule (\(\omega_{\rm m}\)) interacting with a single photonic mode (\(\omega_{\rm p}\)), i.e., \[\hbar\omega_{\rm m}\hat{\sigma}^{(+)}\hat{\sigma}^{(-)}+\hbar\omega_{\rm p} \hat{a}^{\dagger}\hat{a}+\hbar g\left[\hat{\sigma}^{(+)}+\hat{\sigma}^{(-)} \right]\left[\hat{a}^{\dagger}+\hat{a}\right].\] The spirit of the RWA lies in retaining the co-rotating interaction \(\hbar g\left[\hat{\sigma}^{(+)}\hat{a}+\hat{\sigma}^{(-)}\hat{a}^{\dagger}\right]\) while disregarding the counter-rotating interaction \(\hbar g\left[\hat{\sigma}^{(+)}\hat{a}^{\dagger}+\hat{\sigma}^{(-)}\hat{a}\right]\). This choice is motivated by the fact that the co-rotating interaction oscillates at a relatively low frequency \(\omega_{\rm m}-\omega_{\rm p}\) while the counter-rotating interaction oscillates at a higher frequency \(\omega_{\rm m}+\omega_{\rm p}\). By neglecting the counter-rotating interaction, the RWA simplifies the mathematical treatment of the system and allows for an easier analysis. Typically, the RWA is considered a good approximation when (i) the light-matter coupling strength \(g\) is weak and (ii) the detuning \(\omega_{\rm m}-\omega_{\rm p}\) is small. However, when molecules are coupled to multiple photonic modes (including off-resonant modes), such as in free space (infinite photonic modes), condition (ii) seems to be violated anyhow. The validity of the RWA has long been a matter of debate. While there has been extensive research on the impact of the RWA, it is worth noting that most of these studies have primarily focused on scenarios involving only a single photonic mode [46; 47; 48; 49; 50; 51], in free space [52; 53; 54; 55], or in non-dispersive and non-absorbing (lossless) media [56]. Nevertheless, in experimental setups, such as the Fabry-Perot cavity, there exist infinite numbers of photonic modes, including many off-resonant modes [57], that are dressed by the surrounding photonic environments. The interactions between molecules and these dressed photons (polaritons) are supposed to play a crucial role in determining the chemical and physical properties of the system. Therefore, restricting our analysis to a single photonic mode or the free space scenario may result in misinterpretations of experimental observations and impede our ability to accurately predict and quantify polariton-coupled processes. In addition, it has been shown that plasmonic materials can strongly affect physical processes, including spontaneous emission [58; 16] and resonance energy transfer [59; 29], and it is natural to consider dispersive and absorbing media in studying plasmonic effects. To overcome the aforementioned issues, one needs advanced theoretical treatments that incorporate the coupling of molecules with infinite photonic modes and take into account the effects of (dispersive and absorbing) photonic environments. In this study, in order to demonstrate counter-rotating interactions as an essential component, we investigate the quantum dynamics of multiple molecules in dielectric environments using macroscopic quantum electrodynamics (MQED) [60; 61; 62], which is an effective field theory for describing quantized electromagnetic fields in any arbitrary inhomogeneous, dispersive, and absorbing dielectric environment. We elucidate the importance of counter-rotating interactions in the quantum dynamics of multiple molecules in both strong and weak coupling regimes. In addition, we clarify that a reduction of the resonant dipole-dipole interaction due to the RWA has been wrongly interpreted as a non-negligible (quantum) effect in previous works [63; 64]. Note that our theoretical approach is not only restricted to the study of the RWA, but can also be applied to investigate the combined effect of molecular fluorescence and excitation energy transfer in any arbitrary photonic environments, holding great promise for further advancements in areas such as quantum optics, nanophotonics, and molecular engineering. This article is organized as follows. In Sec. II, starting from the MQED Hamiltonian (including without and with the use of the RWA), we derive the dynamical equations that can describe the quantum dynamics of multiple emitters in complex dielectric environments. In addition, the underlying physics behind the dynamical equations is discussed. In Sec. III, we apply the Markov approximation to the dynamical equations to study their behavior in the weak coupling regime. In this regime, we obtain the energy shift, decay rate, and inter-molecule dipole-dipole interaction, and compare our results to the previous works. In Sec. IV, we apply our approach to investigate the role of counter-rotating interactions in the quantum dynamics of a pair of identical molecules above a plasmonic surface in both strong and weak coupling regimes. In the last section, we provide a concise summary of this study. ## II Theory ### Hamiltonian Considering a collection of two-level molecules coupled to polaritons (dressed photons) in an arbitrary inhomogeneous, dispersive, and absorbing medium, the total Hamiltonian \(\hat{H}\) (without the RWA) under the electric-dipole approximation in the multipolar coupling MQED [65; 45] can be expressed in terms of the molecular Hamiltonian \(\hat{H}_{\mathrm{M}}\), polaritonic Hamiltonian \(\hat{H}_{\mathrm{P}}\) and interaction Hamiltonian \(\hat{H}_{\mathrm{I}}\) as \[\hat{H}=\hat{H}_{\mathrm{M}}+\hat{H}_{\mathrm{P}}+\hat{H}_{\mathrm{I}}, \tag{1}\] with \[\hat{H}_{\mathrm{M}} =\sum_{\alpha}\hbar\omega_{\alpha}\hat{\sigma}_{\alpha}^{(+)}\hat {\sigma}_{\alpha}^{(-)}, \tag{2}\] \[\hat{H}_{\mathrm{P}} =\int\mathrm{d}\mathbf{r}\int_{0}^{\infty}\mathrm{d}\omega\, \hbar\omega\,\hat{\mathbf{f}}^{\dagger}(\mathbf{r},\omega)\cdot\hat{\mathbf{ f}}(\mathbf{r},\omega),\] (3) \[\hat{H}_{\mathrm{I}} =-\sum_{\alpha}\hat{\mathbf{\mu}}_{\alpha}\cdot\hat{\mathbf{F}}( \mathbf{r}_{\alpha}). \tag{4}\] The molecular Hamiltonian \(\hat{H}_{\mathrm{M}}\) in Eq. (2) describes the total energy of the molecules, where \(\omega_{\alpha}\), \(\hat{\sigma}_{\alpha}^{(+)}\) and \(\hat{\sigma}_{\alpha}^{(-)}\) denote the electronic transition frequency, raising operator and lowering operator of the \(\alpha\)-th molecule, respectively. The raising and lowering operators of the \(\alpha\)-th molecule can be defined using the electronically ground state \(\ket{\mathrm{g}_{\alpha}}\) and excited state \(\ket{\mathrm{e}_{\alpha}}\) of \(\alpha\) as follows: \(\hat{\sigma}_{\alpha}^{(+)}=\ket{\mathrm{e}_{\alpha}}\bra{\mathrm{g}_{\alpha}}\) and \(\hat{\sigma}_{\alpha}^{(-)}=\ket{\mathrm{g}_{\alpha}}\bra{\mathrm{e}_{\alpha}}\). Note that we have neglected the dipole self-interaction since it only contributes to a small energy shift (free space Lamb shift) after renormalization [66]. The polaritonic Hamiltonian \(\hat{H}_{\mathrm{P}}\) in Eq. (3) describes the energy of polaritons in the dielectric environment, where \(\hat{\mathbf{f}}^{\dagger}(\mathbf{r},\omega)\) and \(\hat{\mathbf{f}}(\mathbf{r},\omega)\) are the creation and annihilation operators of the bosonic vector fields that satisfy the commutation relations: \[\left[\hat{f}_{k}\left(\mathbf{r},\omega\right),\hat{f}_{k^{\prime }}^{\dagger}\left(\mathbf{r}^{\prime},\omega^{\prime}\right)\right] =\delta_{kk^{\prime}}\delta\left(\mathbf{r}-\mathbf{r}^{\prime} \right)\delta\left(\omega-\omega^{\prime}\right),\] \[\left[\hat{f}_{k}\left(\mathbf{r},\omega\right),\hat{f}_{k^{ \prime}}\left(\mathbf{r}^{\prime},\omega^{\prime}\right)\right] =0.\] The interaction Hamiltonian \(\hat{H}_{\mathrm{I}}\) in Eq. (4) describes the couplings between the molecules and polaritons including counter-rotating interactions, i.e., no RWA, where \(\hat{\mathbf{\mu}}_{\alpha}\) and \(\hat{\mathbf{F}}(\mathbf{r}_{\alpha})\) are the transition dipole operator of \(\alpha\) and the field operator, respectively. The transition dipole operator \(\hat{\mathbf{\mu}}_{\alpha}\) can be written in terms of \(\hat{\sigma}_{\alpha}^{(+)}\) and \(\hat{\sigma}_{\alpha}^{(-)}\) as follows, \[\hat{\mathbf{\mu}}_{\alpha}=\mathbf{\mu}_{\alpha}^{\mathrm{ex}}\hat{\sigma}_{\alpha}^{(+ )}+\mathbf{\mu}_{\alpha}^{\mathrm{ex}}\hat{\sigma}_{\alpha}^{(-)}, \tag{5}\] where \(\mathbf{\mu}_{\alpha}^{\mathrm{eg}}=(\mathbf{\mu}_{\alpha}^{\mathrm{ex}})^{*}\) is the electronic transition dipole moment of \(\alpha\). The field operator is defined as: \[\hat{\mathbf{F}}(\mathbf{r}_{\alpha})=\hat{\mathbf{F}}^{(+)}(\mathbf{r}_{\alpha} )+\mathrm{H.c.}, \tag{6}\] with \[\widehat{\mathbf{F}}^{(+)}(\mathbf{r}_{\alpha})=\int_{0}^{\infty} \mathrm{d}\omega\int\mathrm{d}\mathbf{r}\,\overline{\overline{\mathbf{g}}}( \mathbf{r}_{\alpha},\mathbf{r},\omega)\cdot\widehat{\mathbf{f}}(\mathbf{r}, \omega), \tag{7}\] \[\overline{\overline{\mathbf{g}}}(\mathbf{r}_{\alpha},\mathbf{r}, \omega)=i\sqrt{\frac{\hbar}{\pi\varepsilon_{0}}}\frac{\omega^{2}}{c^{2}}\sqrt{ \mathrm{Im}\left[\varepsilon_{\mathrm{r}}(\mathbf{r},\omega)\right]}\, \overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha},\mathbf{r},\omega), \tag{8}\] where \(\varepsilon_{0}\), \(\varepsilon_{\mathrm{r}}(\mathbf{r},\omega)\), and \(c\) are the permittivity of free space, relative permittivity, and speed of light in vacuum, respectively. \(\overline{\overline{\mathbf{g}}}(\mathbf{r}_{\alpha},\mathbf{r},\omega)\) is an auxiliary tensor defined in terms of the dyadic Green's function \(\overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha},\mathbf{r},\omega)\) that satisfies macroscopic Maxwell's equations, i.e., \[\left[\frac{\omega^{2}}{c^{2}}\varepsilon_{\mathrm{r}}(\mathbf{r}_{\alpha}, \omega)-\nabla\times\nabla\times\right]\overline{\overline{\mathbf{G}}}( \mathbf{r}_{\alpha},\mathbf{r},\omega)=-\overline{\overline{\mathbf{I}}}_{3} \delta(\mathbf{r}_{\alpha}-\mathbf{r}),\] where \(\overline{\overline{\mathbf{I}}}_{3}\) and \(\delta(\mathbf{r}_{\alpha}-\mathbf{r})\) are the \(3\times 3\) identity matrix and three-dimensional delta function, respectively. Note that the dyadic Green's function can be further decomposed as \(\overline{\overline{\mathbf{G}}}(\mathbf{r},\mathbf{r}^{\prime},\omega)= \overline{\overline{\mathbf{G}}}_{0}(\mathbf{r},\mathbf{r}^{\prime},\omega)+ \overline{\overline{\mathbf{G}}}_{\mathrm{Sc}}(\mathbf{r},\mathbf{r}^{\prime },\omega)\), where \(\overline{\overline{\mathbf{G}}}_{0}(\mathbf{r},\mathbf{r}^{\prime},\omega)\) represents the free-space dyadic Green's function in the absence of the dielectric bodies and \(\overline{\mathbf{G}}_{\mathrm{Sc}}(\mathbf{r},\mathbf{r}^{\prime},\omega)\) represents the scattering dyadic Green's function originating from the presence of the dielectric bodies. ### State Vector To adequately include the effect of the counter-rotating interactions, we extend the Wigner-Weisskopf wave function ansatz [67] and use the following state vector to describe the total system [68, 54], \[\left|\Psi(t)\right\rangle= \sum_{\alpha}C^{\mathrm{E}_{\alpha},\{0\}}(t)e^{-iW^{\mathrm{E}_ {\alpha},\{0\}}t}\left|\mathrm{E}_{\alpha}\right\rangle\left|\{0\}\right\rangle +\sum_{k=1}^{3}\int\mathrm{d}\mathbf{r}\int_{0}^{\infty}\mathrm{d}\omega\,C^{ \mathrm{G},\{1_{k}\}}(\mathbf{r},\omega,t)e^{-iW^{\mathrm{G},\{1\}}(\omega)t} \left|\mathrm{G}\right\rangle\left|\{1_{k}(\mathbf{r},\omega)\}\right\rangle\] \[+\sum_{\alpha}\sum_{\beta>\alpha}\sum_{k=1}^{3}\int\mathrm{d} \mathbf{r}\int_{0}^{\infty}\mathrm{d}\omega\,C^{\mathrm{E}_{\alpha\beta},\{1 _{k}\}}(\mathbf{r},\omega,t)e^{-iW^{\mathrm{E}_{\alpha\beta},\{1\}}(\omega)t} \left|\mathrm{E}_{\alpha\beta}\right\rangle\left|\{1_{k}(\mathbf{r},\omega)\} \right\rangle, \tag{9}\] with \[W^{\mathrm{E}_{\alpha},\{0\}}=\omega_{\alpha}, \tag{10a}\] \[W^{\mathrm{G},\{1\}}(\omega)=\omega,\] (10b) \[W^{\mathrm{E}_{\alpha\beta},\{1\}}(\omega)=\omega+\omega_{\alpha }+\omega_{\beta}. \tag{10c}\] The state vector includes the molecular (electronic) and photonic degrees of freedom in the entire system. For the molecular part, the ket state \(\left|\mathrm{G}\right\rangle\) denotes that all the molecules are in their electronically ground states, i.e., \(\left|\mathrm{G}_{\alpha}\right\rangle=\left|\mathrm{g}_{1}\right\rangle\left| \mathrm{g}_{2}\right\rangle\ldots\left|\mathrm{g}_{\alpha}\right\rangle\ldots \left|\mathrm{g}_{N}\right\rangle\); the ket state \(\left|\mathrm{E}_{\alpha}\right\rangle\) denotes that \(\alpha\) is in its electronically excited state while the other molecules are in their electronically ground states, i.e., \(\left|\mathrm{E}_{\alpha}\right\rangle=\hat{\sigma}_{\alpha}^{(+)}\left| \mathrm{G}\right\rangle\); the ket state \(\left|\mathrm{E}_{\alpha\beta}\right\rangle\) denotes that both \(\alpha\) and \(\beta\) are in their electronically excited states while the other molecules are in their electronically excited states while the other molecules are in their electronically ground states, i.e., \(\left|\mathrm{E}_{\alpha\beta}\right\rangle=\hat{\sigma}_{\alpha}^{(+)}\hat{ \sigma}_{\beta}^{(+)}\left|\mathrm{G}\right\rangle\). For the photonic part, the ket state \(\left|\{0\}\right\rangle\) denotes the zero-polariton state, and the ket state \(\left|\{1_{k}(\mathbf{r},\omega)\}\right\rangle\) denotes the single-polariton state, i.e., \(\left|\{1_{k}(\mathbf{r},\omega)\}\right\rangle=\hat{f}_{k}^{\dagger}\left( \mathbf{r},\omega\right)\left|\{0\}\right\rangle\). \(C^{\mathrm{E}_{\alpha},\{0\}}(t)\), \(C^{\mathrm{G},\{1_{k}\}}(\mathbf{r},\omega,t)\) and \(C^{\mathrm{E}_{\alpha\beta},\{1_{k}\}}(\mathbf{r},\omega,t)\) are the probability amplitudes of \(\left|\Psi(t)\right\rangle\) for the states \(\left|\mathrm{E}_{\alpha}\right\rangle\left|\{0\}\right\rangle\), \(\left|\mathrm{G}\right\rangle\left|\{1_{k}(\mathbf{r},\omega)\}\right\rangle\) and \(\left|\mathrm{E}_{\alpha\beta}\right\rangle\left|\{1_{k}(\mathbf{r},\omega)\}\right\rangle\), respectively; \(\hbar W^{\mathrm{E}_{\alpha},\{0\}}\), \(\hbar W^{\mathrm{G},\{1\}}(\omega)\) and \(\hbar W^{\mathrm{E}_{\alpha\beta},\{1\}}(\omega)\) are the total energies of the states \(\left|\mathrm{E}_{\alpha}\right\rangle\left|\{0\}\right\rangle\), \(\left|\mathrm{G}\right\rangle\left|\{1_{k}(\mathbf{r},\omega)\}\right\rangle\) and \(\left|\mathrm{E}_{\alpha\beta}\right\rangle\left|\{1_{k}(\mathbf{r},\omega)\}\right\rangle\), respectively. ### Quantum Dynamics: Equation of Motion The quantum dynamics of the entire system without adopting the application of the RWA can be obtained by solving the time-dependent Schrodinger equation \(i\hbar\partial\left|\Psi(t)\right\rangle/\partial t=\dot{H}\left|\Psi(t)\right\rangle\). After some algebra, we obtain the equation of motion as follows (see Appendix A for the details), \[\frac{\mathrm{d}}{\mathrm{d}t}C^{\mathrm{E}_{\alpha},\{0\}}(t)= -\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^{\infty}\mathrm{d}\omega \left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}}\mathbf{\mu}_{\alpha}^{\mathrm{ eg}}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha}, \mathbf{r}_{\alpha},\omega)\cdot\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\right]e^{-i( \omega-\omega_{\alpha})t}e^{-i(\omega_{\alpha}-\omega)t^{\prime}}C^{\mathrm{E} _{\alpha},\{0\}}(t^{\prime})\] \[-\sum_{\beta\neq\alpha}\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^ {\infty}\mathrm{d}\omega\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}} \mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot\mathrm{Im}\overline{\overline{\mathbf{G} }}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)\cdot\mathbf{\mu}_{\beta}^{ \mathrm{ge}}\right]e^{-i(\omega-\omega_{\alpha})t}e^{-i(\omega_{\beta}-\omega) t^{\prime}}C^{\mathrm{E}_{\beta},\{0\}}(t^{\prime})\] \[-\sum_{\beta\neq\alpha}\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^ {\infty}\mathrm{d}\omega\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}} \mathbf{\mu}_{\beta}^{\mathrm{ge}}\cdot\mathrm{Im}\overline{\overline{\mathbf{G} }}(\mathbf{r}_{\beta},\mathbf{r}_{\alpha},\omega)\cdot\mathbf{\mu}_{\alpha}^{ \mathrm{eg}}\right]e^{-i(\omega+\omega_{\beta})t}e^{-i(-\omega_{\alpha}-\omega) t^{\prime}}C^{\mathrm{E}_{\beta},\{0\}}(t^{\prime}). \tag{11}\] The four terms in Eq. (11) corresponds to distinct physical processes: 1. _Spontaneous emission and reabsorption assisted by co-rotating interactions_: The first term on the right-hand side of Eq. (11) corresponds to the physical process depicted by the blue arrow in FIG. 1. This process involves an initially excited molecule \(\alpha\) emitting a photon and returning to its ground state. The emitted photon is then absorbed by \(\alpha\) again, causing it to transition back to its excited state. In fact, this process exactly corresponds to spontaneous emission, which has been derived from previous studies [69; 70]. 2. _Excitation energy transfer assisted by co-rotating interactions_: The second term on the right-hand side of Eq. (11) corresponds to the physical process depicted by the red arrow in FIG. 1. In this process, an initially excited molecule \(\beta\) emits a photon, transitioning back to its ground state. The emitted photon is subsequently absorbed by molecule \(\alpha\), resulting in the excitation of molecule \(\alpha\). This process can be regarded as excitation energy transfer assisted by co-rotating interactions [71]. 3. _Virtual photon emission and reabsorption assisted by counter-rotating interactions_: The third term on the right-hand side of Eq. (11) corresponds to the physical process depicted by the purple arrow in FIG. 1. It involves an initially excited molecule \(\alpha\) transitioning to a state where both molecules \(\alpha\) and \(\beta\) are excited, accompanied by the emission of a single polariton. Subsequently, the system transitions back to a state where only \(\alpha\) remains excited. This process is facilitated by the counter-rotating interactions, which involve terms that describe the simultaneous creation or annihilation of both photons and molecular excitations. 4. _Excitation energy transfer assisted by counter-rotating interactions_: The fourth term on the right-hand side of Eq. (11) corresponds to the physical process depicted by the green arrow in FIG. 1. In this process, an initially excited molecule \(\beta\) transitions to a state where both molecules \(\alpha\) and \(\beta\) are excited, with the presence of a single polariton. Subsequently, the system transitions Figure 1: Schematic illustration of the quantum dynamics in Eqs. (11). back to a state where only \(\alpha\) remains excited. Similar to the third term, this process is facilitated by the counter-rotating interactions and can be considered as another pathway of excitation energy transfer. ### Rotating-Wave Approximation Under the RWA, we neglect the so-called counter-rotating terms in the interaction Hamiltonian in Eq. (4). These counter-rotating terms involve processes where photons and molecular excitation are simultaneously created or annihilated. By discarding these terms, the interaction Hamiltonian is reduced to \[\hat{H}_{\rm I}^{\rm RWA}=-\sum_{\alpha=1}^{N}\left[\hat{\mathbf{\sigma}}_{\alpha}^ {(+)}\mathbf{\mu}_{\alpha}^{\rm eg}\cdot\hat{\mathbf{F}}^{(+)}(\mathbf{r}_{\alpha })+\mathrm{H.c.}\right], \tag{12}\] and the total Hamiltonian under the RWA is defined as \[\hat{H}^{\rm RWA}=\hat{H}_{\rm M}+\hat{H}_{\rm P}+\hat{H}_{\rm I}^{\rm RWA}. \tag{13}\] Note that for the total Hamiltonian under the RWA, we do not need to consider the ket state \(\ket{\mathrm{E}_{\alpha\beta}}\ket{\{1_{k}(\mathbf{r},\omega)\}}\) since it is not accessible without the presence of the counter-rotating interactions (assume that there is no polariton at \(t=0\)); therefore, we can simplify the state vector as: \[\ket{\Psi(t)}^{\rm RWA}=\sum_{\alpha}\tilde{C}^{\mathrm{E}_{ \alpha,\{0\}}}(t)e^{-iW^{\mathrm{E}_{\alpha,\{0\}}}t}\ket{\mathrm{E}_{\alpha} }\ket{\{0\}}+\sum_{k=1}^{3}\int\mathrm{d}\mathbf{r}\int_{0}^{\infty}\mathrm{ d}\omega\,\tilde{C}^{\mathrm{G},\{1_{k}\}}(\mathbf{r},\omega,t)e^{-iW^{ \mathrm{G},\{1\}}(\omega)t}\ket{\mathrm{G}}\ket{\{1_{k}(\mathbf{r},\omega)\}}, \tag{14}\] where \(\tilde{C}^{\mathrm{E}_{\alpha,\{0\}}}(t)\) and \(\tilde{C}^{\mathrm{G},\{1_{k}\}}(\mathbf{r},\omega,t)\) are the probability amplitudes of \(\ket{\Psi(t)}^{\rm RWA}\) for the states \(\ket{\mathrm{E}_{\alpha}}\ket{\{0\}}\) and \(\ket{\mathrm{G}}\ket{\{1_{k}(\mathbf{r},\omega)\}}\), respectively. Solving the time-dependent Schrodinger equation \(i\hbar\partial\ket{\Psi(t)}^{\rm RWA}/\partial t=\hat{H}^{\rm RWA}\ket{\Psi( t)}\), we obtain the equation of motion for the quantum dynamics under the RWA as follows, \[\frac{\mathrm{d}}{\mathrm{d}t}\tilde{C}^{\mathrm{E}_{\alpha,\{0 \}}}(t)= -\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^{\infty}\mathrm{d} \omega\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}}\mathbf{\mu}_{\alpha}^ {\rm eg}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha}, \mathbf{r}_{\alpha},\omega)\cdot\mathbf{\mu}_{\alpha}^{\rm eg}\right]e^{-i(\omega- \omega_{\alpha})t}e^{-i(\omega_{\alpha}-\omega)t^{\prime}}\tilde{C}^{\mathrm{ E}_{\alpha,\{0\}}}(t^{\prime})\] \[-\sum_{\beta\neq\alpha}\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^{ \infty}\mathrm{d}\omega\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}} \mathbf{\mu}_{\alpha}^{\rm eg}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}( \mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)\cdot\mathbf{\mu}_{\beta}^{\rm eg} \right]e^{-i(\omega-\omega_{\alpha})t}e^{-i(\omega_{\beta}-\omega)t^{\prime}} \tilde{C}^{\mathrm{E}_{\beta,\{0\}}}(t^{\prime}). \tag{15}\] Regarding Eq. (III.2), the first and second terms on the right-hand side correspond to the physical processes represented by the blue and red arrows in FIG. 1, respectively. The physical processes depicted by the purple and green arrows are not included in Eq. (III.2) due to the absence of the counter-rotating terms in the interaction Hamiltonian in Eq. (12). ## III Weak Coupling Regime: Energy Shift, Decay Rate and Dipole-Dipole Interaction In this section, we will show counter-rotating interactions as an essential component in describing dipole-dipole interactions, even in the weak light-matter coupling regime. To clearly demonstrate the importance of counter-rotating interactions, we further make the Markov approximation to Eq. (11) and Eq. (III.2), which correspond to the equations of motion without and with adopting the RWA, respectively, and then compare their physical meaning. First of all, under weak light-matter interactions, we make the Markov approximation to Eq. (11), and this equation can be simplified as (see Appendix B): \[\frac{\mathrm{d}}{\mathrm{d}t}C^{\mathrm{E}_{\alpha,\{0\}}}(t)=\] \[-\frac{i}{\hbar}\left\{\left[\Delta_{\mathrm{e}_{\alpha}}+\sum_{ \beta\neq\alpha}\Delta_{\mathrm{g}\beta}\right]-i\hbar\frac{\Gamma_{\alpha}}{ 2}\right\}C^{\mathrm{E}_{\alpha,\{0\}}}(t)\] \[-\frac{i}{\hbar}\sum_{\beta\neq\alpha}\mathrm{V}_{\mathrm{D} \mathrm{I},\alpha\beta}\,e^{-i(\omega_{\beta}-\omega_{\alpha})t}C^{\mathrm{ E}_{\beta,\{0\}}}(t), \tag{16}\] where \(\Delta_{\mathrm{e}(\mathrm{g})}\) represents the energy shift of the excited (ground) state of \(\alpha\), \(\Gamma_{\alpha}\) is the decay rate of the excited state of \(\alpha\), and \(\mathrm{V}_{\mathrm{DDI},\alpha\beta}\) denotes the effective dipole-dipole interaction between \(\alpha\) and \(\beta\). The energy shift \(\Delta_{\mathrm{e(g)}}\) comprises two contributions: the free-space Lamb shift \(\Delta^{0}_{\mathrm{e(g)}}\) and Casmir-Polder potential \(\Delta^{\mathrm{Sc}}_{\mathrm{e(g)}}\), i.e., \[\Delta_{\mathrm{e(g)}_{\alpha}}=\Delta^{0}_{\mathrm{e(g)}_{\alpha}}+\Delta^{ \mathrm{Sc}}_{\mathrm{e(g)}_{\alpha}}, \tag{17}\] with \[\Delta^{0}_{\mathrm{e(g)}_{\alpha}} =-\mathcal{P}\int_{0}^{\infty}\mathrm{d}\omega\,\frac{\omega^{2}} {\pi\varepsilon_{0}c^{2}}\,\frac{\mathbf{\mu}^{\mathrm{eg}}_{\alpha}\cdot\mathrm{ Im}\overline{\overline{\mathbf{G}}}_{0}(\mathbf{\mathbf{r}}_{\alpha},\mathbf{\mathbf{r}}_{ \alpha},\omega)\cdot\mathbf{\mu}^{\mathrm{ge}}_{\alpha}}{\omega-(+)\omega_{\alpha}}, \tag{18}\] \[\Delta^{\mathrm{Sc}}_{\mathrm{e(g)}_{\alpha}} =-\mathcal{P}\int_{0}^{\infty}\mathrm{d}\omega\,\frac{\omega^{2} }{\pi\varepsilon_{0}c^{2}}\,\frac{\mathbf{\mu}^{\mathrm{eg}}_{\alpha}\cdot\mathrm{ Im}\overline{\overline{\mathbf{G}}}_{\mathrm{Sc}}(\mathbf{\mathbf{r}}_{\alpha},\mathbf{\mathbf{r}}_{ \alpha},\omega)\cdot\mathbf{\mu}^{\mathrm{ge}}_{\alpha}}{\omega-(+)\omega_{\alpha}}, \tag{19}\] where \(\mathcal{P}\) denotes the principal value. The energy shifts are identical to those derived from perturbation theory [65; 72]. Note that the free-space Lamb shift \(\Delta^{0}_{\mathrm{e(g)}_{\alpha}}\) is divergent in this context and requires proper treatment, such as renormalization. However, the contribution of the free-space Lamb shift is typically small after renormalization [66]. Therefore, for the sake of simplicity, this term can be neglected in practical calculations (or considered as being included in the transition energy of the molecule) [69; 73]. The decay rate is expressed as \[\Gamma_{\alpha}=\frac{2\omega_{\alpha}^{2}}{\hbar\varepsilon_{0}c^{2}}\mathbf{ \mu}^{\mathrm{eg}}_{\alpha}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}( \mathbf{\mathbf{r}}_{\alpha},\mathbf{\mathbf{r}}_{\alpha},\omega_{\alpha})\cdot\mathbf{\mu }^{\mathrm{ge}}_{\alpha}, \tag{20}\] which is consistent with the spontaneous emission rate of a molecule in a medium derived from Fermi's golden rule [74]. The dipole-dipole interaction \(\mathrm{V}_{\mathrm{DDI},\alpha\beta}\) can be divided into two components, i.e., \[\mathrm{V}_{\mathrm{DDI},\alpha\beta}=\mathrm{V}_{\mathrm{RDDI},\alpha\beta}+ \mathrm{V}_{\mathrm{ORC},\alpha\beta}, \tag{21}\] with \[\mathrm{V}_{\mathrm{RDDI},\alpha\beta}=\frac{-\omega_{\beta}^{2}}{\varepsilon_ {0}c^{2}}\mathbf{\mu}^{\mathrm{eg}}_{\alpha}\cdot\overline{\overline{\mathbf{G}}} (\mathbf{\mathbf{r}}_{\alpha},\mathbf{\mathbf{r}}_{\beta},\omega_{\beta})\cdot\mathbf{\mu }^{\mathrm{ge}}_{\beta}, \tag{22}\] \[\mathrm{V}_{\mathrm{ORC},\alpha\beta} =\int_{0}^{\infty}\mathrm{d}\omega\,\frac{\omega^{2}}{\pi \varepsilon_{0}c^{2}}\,\frac{\mathbf{\mu}^{\mathrm{eg}}_{\alpha}\cdot\mathrm{Im} \overline{\overline{\mathbf{G}}}(\mathbf{\mathbf{r}}_{\alpha},\mathbf{\mathbf{r}}_{ \beta},\omega)\cdot\mathbf{\mu}^{\mathrm{ge}}_{\beta}}{\omega+\omega_{\beta}}\] \[-\int_{0}^{\infty}\mathrm{d}\omega\,\frac{\omega^{2}}{\pi \varepsilon_{0}c^{2}}\,\frac{\mathbf{\mu}^{\mathrm{eg}}_{\alpha}\cdot\mathrm{Im} \overline{\overline{\mathbf{G}}}(\mathbf{\mathbf{r}}_{\alpha},\mathbf{\mathbf{r}}_{ \beta},\omega)\cdot\mathbf{\mu}^{\mathrm{ge}}_{\beta}}{\omega+\omega_{\alpha}}. \tag{23}\] \(\mathrm{V}_{\mathrm{RDDI},\alpha\beta}\) represents the dipole-dipole interaction between a pair of on-resonant (\(\omega_{\alpha}=\omega_{\beta}\)) molecules, which is identical to the resonant dipole-dipole interaction in the presence of dielectric bodies obtained using perturbation theory [75]. On the other hand, \(\mathrm{V}_{\mathrm{ORC},\alpha\beta}\) can be regarded as a correction to the dipole-dipole interaction when the molecules are off-resonant (\(\omega_{\alpha}\neq\omega_{\beta}\)) since it is only non-zero when \(\omega_{\alpha}\neq\omega_{\beta}\). The calculation of \(\mathrm{V}_{\mathrm{ORC},\alpha\beta}\) is complicated as it requires the evaluation of integrals of the dyadic Green's function over all the positive frequencies. A useful technique is to transform the integral to the imaginary axis, where the Dyadic Green's function is much better behaved [63]. Using this technique, one can derive the explicit form of the free-space off-resonance correction \(\mathrm{V}^{0}_{\mathrm{ORC},\alpha\beta}\) [replace \(\overline{\overline{\mathbf{G}}}(\mathbf{\mathbf{r}}_{\alpha},\mathbf{\mathbf{r}}_{ \beta},\omega)\) with \(\overline{\overline{\mathbf{G}}}_{0}(\mathbf{\mathbf{r}}_{\alpha},\mathbf{\mathbf{r}}_{ \beta},\omega)\) in Eq. (23)], as shown in Appendix C. In this work, we will not conduct a deeper discussion on the effect of the off-resonance correction; however, we would like to emphasize that this correction may play an important role when the frequency detuning between the molecular transitions is large. Furthermore, we move on to the equation of motion with the RWA. Similarly, we make the Markov approximation to Eq. (15), and this equation can be simplified as \[\frac{\mathrm{d}}{\mathrm{d}t} \tilde{C}^{\mathrm{E}_{\alpha},\{0\}}(t)=\] \[-\frac{i}{\hbar}\sum_{\beta\neq\alpha}\tilde{\mathrm{V}}_{\mathrm{ DDI},\alpha\beta}\,e^{-i(\omega_{\beta}-\omega_{\alpha})t}\tilde{C}^{ \mathrm{E}_{\beta},\{0\}}(t), \tag{24}\] where \[\tilde{\mathrm{V}}_{\mathrm{DDI},\alpha\beta}=\mathrm{V}_{\mathrm{RDDI},\alpha \beta}+\mathrm{V}_{\mathrm{QC},\alpha\beta}, \tag{25}\] \[\mathrm{V}_{\mathrm{QC},\alpha\beta}=\int_{0}^{\infty}\mathrm{d}\omega\,\frac{ \omega^{2}}{\pi\varepsilon_{0}c^{2}}\,\frac{\mathbf{\mu}^{\mathrm{eg}}_{\alpha} \cdot\mathrm{Im}\overline{\overline{\overline{\mathbf{G}}}}(\mathbf{\mathbf{r}}_{ \alpha},\mathbf{\mathbf{r}}_{\beta},\omega)\cdot\mathbf{\mu}^{\mathrm{ge}}_{\beta}}{ \omega+\omega_{\beta}}. \tag{26}\] Comparing Eq. (16) and Eq. (24), it is obvious that the quantum dynamics without and with making the RWA under the Markov approximation exhibits two major differences. The first major difference: the energy shifts of the ground-state molecules (\(\sum_{\beta\neq\alpha}\Delta_{\mathrm{g}_{\beta}}\)) are absent in the dynamical equation under the RWA. The second major difference: from Eq. (21) and Eq. (25), one can find that the dipole-dipole interactions between a pair of molecules exhibit disparities in the two dynamical equations. The second major difference naturally raises a question: which one is a correct form of the dipole-dipole interaction, Eq. (21) or Eq. (25)? To answer this question, we conduct the following discussion. First, when the molecules are on-resonant (\(\omega_{\alpha}=\omega_{\beta}\)), the dipole-dipole interaction within the RWA, i.e., \(\tilde{\mathrm{V}}_{\mathrm{DDI},\alpha\beta}\) in Eq. (24), is expressed as a sum of the resonant dipole-dipole interaction \(\mathrm{V}_{\mathrm{RDDI},\alpha\beta}\) [Eq. (22)] and an additional correction term \(\mathrm{V}_{\mathrm{QC},\alpha\beta}\), which has been reported in several previous studies [28; 63; 64]. It is worth noting that this correction term is non-zero even when the molecules are on-resonant; therefore, \(\tilde{\mathrm{V}}_{\mathrm{DDI},\alpha\beta}\) always deviates from the resonant dipole-dipole interaction \(\mathrm{V}_{\mathrm{RDDI},\alpha\beta}\). In some previous studies, this deviation is considered an important effect that cannot be neglected in describing intermolecule interaction [63] and is regarded as a quantum correction that cannot be obtained from classical electrodynamics [64]. However, we speculate that this so-called quantum correction term may only be a product (an artifact) of the rotating-wave approximation since under the on-resonance condition, this correction no longer exists when the counter-rotating interactions are included. Second, when the molecules are off-resonant (\(\omega_{\alpha}\neq\omega_{\beta}\)), the dipole-dipole interaction without the RWA, i.e., \(\mathrm{V}_{\mathrm{DDI,\alpha\beta}}\) in Eq. (21), gives a reasonable correction \(\mathrm{V}_{\mathrm{ORC,\alpha\beta}}\) because the absolute value of \(\mathrm{V}_{\mathrm{ORC,\alpha\beta}}\) is much smaller than that of \(\mathrm{V}_{\mathrm{QC,\alpha\beta}}\). Third, in the short-distance (non-retarded) limit \(\omega_{\beta(\alpha)}R/c\ll 1\), where \(\mathbf{r}_{\alpha}-\mathbf{r}_{\beta}\equiv R\mathbf{n}_{R}\), the dyadic Green's function becomes purely longitudinal [65], and the dipole-dipole interactions can be approximated by their free-space and electrostatic (\(\omega_{\alpha(\beta)}/c\to 0\)) limits; therefore, \[\mathrm{V}_{\mathrm{DDI,\alpha\beta}}\approx 2\tilde{\mathrm{V}}_{\mathrm{DDI, \alpha\beta}}\approx\frac{\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot\mathbf{\mu}_{ \beta}^{\mathrm{ge}}-3\left(\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot\mathbf{n}_{ R}\right)\left(\mathbf{\mu}_{\beta}^{\mathrm{ge}}\cdot\mathbf{n}_{R}\right)}{4\pi \varepsilon_{0}R^{3}}. \tag{27}\] Equation (27) clearly shows that \(\mathrm{V}_{\mathrm{DDI,\alpha\beta}}\) converges to the conventional Coulomb dipole-dipole interaction, while \(\tilde{\mathrm{V}}_{\mathrm{DDI,\alpha\beta}}\) converges to only half of the Coulomb dipole-dipole interaction. The omission of half of the Coulomb dipole-dipole interaction within the framework of the RWA reinforces the significance of incorporating the counter-rotating terms in the interaction Hamiltonian. In addition to our study, in fact, the reduction of half of the dipole-dipole interaction in free space due to the use of the RWA has also been reported recently by Wubs et al [56]. Based on the above discussion, we can conclude that counter-rotating interactions play a key role even in the weak light-matter coupling regime. In short, by examining the dynamical equation in the weak coupling regime [Eq. (16)], we can obtain the energy shift, decay rate, and resonant dipole-dipole interaction of molecules in a dielectric environment, and these results are consistent with those derived from perturbation theory in previous works. This agreement not only provides robust support for the validity of our theoretical approach but also demonstrates counter-rotating interaction as an essential component for dipole-dipole interactions. ## IV Quantum dynamics of a pair of identical molecules above a plasmonic surface In this section, we numerically investigate the effect of counter-rotating interactions on the quantum dynamics of multiple molecules in a complex dielectric environment. For simplicity, we adopt a minimal model consisting of a pair of identical molecules, i.e., a donor (D) and an acceptor (A), above a plasmonic surface, with their transition dipole moments parallel to the normal direction of the plasmonic surface, as depicted in FIG. (2). We denote the distance between the donor and acceptor as \(d\) and the distance between the molecules and plasmonic surface as \(h\). The plasmonic surface is modeled by the following dielectric functions: \[\varepsilon_{\mathrm{r}}(\mathbf{r},\omega)=\begin{cases}1,&z>0,\\ \varepsilon_{\mathrm{D}}(\omega),&z<0,\end{cases} \tag{28}\] where \(\varepsilon_{\mathrm{D}}(\omega)=1-5/(\omega^{2}+0.1i\omega)\) is an artificial Drude model. For the molecules, the transition frequency is given by \(\hbar\omega_{\mathrm{D}}=\hbar\omega_{\mathrm{A}}=3.525\) eV, which is in resonance with the frequency of the surface plasmon polariton mode of the chosen plasmonic surface, and the magnitude of the transition dipole moment is given by \(|\mathbf{\mu}_{\mathrm{D}}|=|\mathbf{\mu}_{\mathrm{A}}|=10\) Debye. The dyadic Green's functions of the system can be obtained through the Fresnel method [76], and detailed information regarding the methodology can be found in our previous works [77]. In order to efficiently perform numerical calculations based on Eqs. (11), (15), (16), and (24), we implement additional simplifications. For Eqs. (11) and (15), we decompose the dyadic Green's function as a sum of the free-space contribution and scattering contribution, i.e., \(\overline{\overline{\mathbf{G}}}(\mathbf{r},\mathbf{r}^{\prime},\omega)= \overline{\overline{\mathbf{G}}}_{0}(\mathbf{r},\mathbf{r}^{\prime},\omega)+ \overline{\overline{\mathbf{G}}}_{\mathrm{Sc}}(\mathbf{r},\mathbf{r}^{\prime},\omega)\). Since the coupling between the molecules and the free-space field is typically weak, we can apply the Markov approximation only to the parts involving \(\overline{\overline{\mathbf{G}}}_{0}(\mathbf{r},\mathbf{r}^{\prime},\omega)\) and neglect the small free-space Lamb shift \(\Delta^{0}_{\mathrm{e(g)}}\). This simplification leads to our working equations, as shown in Eqs. (11) and (12). These two equations will be referred to as the full quantum dynamics (FQD) without and with the RWA in the subsequent discussion. For Eqs. (16) and (24), we discard the small free-space Lamb shift \(\Delta^{0}_{\mathrm{e(g)}}\), resulting in our working equations as shown in Eqs. (13) and (14). These two equations will be referred to as the Markov-approximated quantum dynamics (MAQD) without and with the RWA in the subsequent discussion. The numerical study will focus on how the counter-rotating in Figure 2: Schematic illustration of a donor (A) and an acceptor (A) above a plasmonic surface, where the donor-acceptor distance and the molecule-dielectric distance are given by \(d\) and \(h\), respectively. teractions affects the quantum dynamics in the weak and strong coupling conditions, where the (weak/strong) coupling condition can be identified by the consistency between the population dynamics obtained from the FQD and MAQD, e.g., the consistency of the population dynamics of FQD and MAQD indicates the weak coupling condition [78, 79]. In FIG. 3(a)-(c), when \(h=10\) nm and \(d=4\) nm, the excited-state population of the donor, i.e., \(P^{\mathrm{E_{D}},\{0\}}(t)=\left|C^{\mathrm{E_{D}},\{0\}}(t)\right|^{2}\), the excited-state population of the acceptor, i.e., \(P^{\mathrm{E_{A}},\{0\}}(t)=\left|C^{\mathrm{E_{A}},\{0\}}(t)\right|^{2}\), and the total excited-state population, i.e., \(P^{\mathrm{E_{D}},\{0\}}(t)+P^{\mathrm{E_{A}},\{0\}}(t)\), obtained via FQD and MAQD match each other; therefore, this circumstance can be identified as a weak coupling condition. In this case, the individual population dynamics of the donor \(P^{\mathrm{E_{D}},\{0\}}(t)\) and acceptor \(P^{\mathrm{E_{A}},\{0\}}(t)\) calculated without and with the application of the RWA are quite distinct from each other, as shown in FIG. 3(a) and (b). More specifically, the oscillation frequency of the population dynamics without the RWA is roughly two times the oscillation frequency of the population dynamics with the RWA. Although the individual population dynamics are different, it is surprising that the total population dynamics are still identical in this case, as shown in Fig 3(c). This phenomenon can be further confirmed by the analytical solution of MAQD (with the initial condition \(C^{\mathrm{E_{\alpha}},\{0\}}(t)=\delta_{\alpha\mathrm{D}}\)), which can be expressed as \[\begin{cases}P^{\mathrm{E_{D}},\{0\}}(t)=e^{-\Gamma t}\left\{\sinh^{2}\left[ \mathrm{Im}(\mathrm{V}/\hbar)t\right]+\cos^{2}\left[\mathrm{Re}(\mathrm{V}/ \hbar)t\right]\right\},\\ P^{\mathrm{E_{A}},\{0\}}(t)=e^{-\Gamma t}\left\{\sinh^{2}\left[\mathrm{Im}( \mathrm{V}/\hbar)t\right]+\sin^{2}\left[\mathrm{Re}(\mathrm{V}/\hbar)t\right] \right\},\\ P^{\mathrm{E_{D}},\{0\}}(t)+P^{\mathrm{E_{A}},\{0\}}(t)=e^{-\Gamma t}\cosh \left[2\mathrm{Im}(\mathrm{V}/\hbar)t\right].\end{cases} \tag{29}\] In Eq. (29), \(\Gamma\) and V are defined as: (i) \(\Gamma=\Gamma_{\mathrm{D}}=\Gamma_{\mathrm{A}}\) [Eq. 20] for both the population dynamics without and with the RWA. (ii) \(\mathrm{V}=\mathrm{V_{DDI,DA}}=\mathrm{V_{DDI,AD}}\) [Eq. (21)] for the population dynamics without the RWA, and \(\mathrm{V}=\mathrm{\tilde{V}_{DDI,DA}}=\mathrm{\tilde{V}_{DDI,AD}}\) [Eq. (25)] for the population dynamics with the RWA. The analytical solution of the population dynamics in Eq. (29) provides plenty of information. First, recall that \(e^{-\Gamma t}\) is exactly the single-molecule excited-state population, and \(\cosh(x)\) is greater than or equal to 1. As a result, one can conclude that the inclusion of a second molecule can slow down the decay of the total excited-state population, which is known as the subradiance. Second, the total excited-state population only depends on \(\Gamma\) and Im(V), i.e., the imaginary part of the dipole-dipole interaction. Thus, the total excited-state population is unaffected by the RWA Figure 3: Excited-state population dynamics of a donor and an acceptor above a plasmonic surface. (a) Population dynamics of the donor, (b) population dynamics of the acceptor, and (c) total excited-state population dynamics when \(h=10\) nm and \(d=4\) nm. (d) Population dynamics of the donor, (e) population dynamics of the acceptor, and (f) total excited-state population dynamics when \(h=1\) nm and \(d=1\) nm. FQD and MAQD denote full quantum dynamics and Markov-approximated quantum dynamics, respectively. since \(\mathrm{Im}(\mathrm{V}_{\mathrm{DDI,DA}})=\mathrm{Im}(\tilde{\mathrm{V}}_{\mathrm{ DDI,DA}})\). Third, the individual excited-state population depends on both \(\mathrm{Im}(\mathrm{V})\) and \(\mathrm{Re}(\mathrm{V})\), where \(\mathrm{Im}(\mathrm{V})\) modifies the decay behavior and \(\mathrm{Re}(\mathrm{V})\) generates the oscillatory pattern. Since \(\mathrm{Re}(\mathrm{V}_{\mathrm{DDI,DA}})\neq\mathrm{Re}(\tilde{\mathrm{V}}_{ \mathrm{DDI,DA}})\), the individual excited-state population is sensitive to the RWA. Moreover, the two-times oscillation frequency can be explained by the fact that \(\mathrm{Re}(\mathrm{V}_{\mathrm{DDI,DA}})\approx 2\mathrm{Re}(\tilde{\mathrm{V}}_{ \mathrm{DDI,DA}})\) in the short distance limit in Eq. (27). In FIG. 3(d)-(f), when \(h=1\) nm and \(d=1\) nm, the population dynamics obtained via FQD and MAQD no longer match each other. In other words, we can identify this circumstance as a strong coupling condition. In this case, not only the individual population dynamics, i.e., \(P^{\mathrm{E}_{\mathrm{D},\{0\}}}(t)\) and \(P^{\mathrm{E}_{\mathrm{A},\{0\}}}(t)\), but also the total excited population, i.e., \(P^{\mathrm{E}_{\mathrm{D},\{0\}}}(t)+P^{\mathrm{E}_{\mathrm{A},\{0\}}}(t)\), are sensitive to the RWA. In summary, the numerical and analytical analysis of the population dynamics in the specific system reveals the following findings. (i) Under weak coupling conditions, the RWA can affect the excited-state population of individual molecules while leaving the total excited-state population unchanged (in our chosen system). However, we would like to emphasize that the total excited-state population is unaffected by the RWA in the weak coupling regime is not a general principle. In fact, in cases where the donor and acceptor are at different heights, the donor and acceptor are non-identical, or when more than two molecules are involved, the total excited-state population becomes sensitive to the use of the RWA [56]. (ii) Under strong coupling conditions, both the individual and total excited-state dynamics are highly sensitive to the RWA. These results further emphasize the significance of counter-rotating interactions in quantum dynamics in both strong and weak coupling regimes. ## V Conclusion In this study, we investigate the influence of counter-rotating interactions on the quantum dynamics of multiple molecules in complex dielectric environments within the framework of MQED. Our general theory of quantum dynamics shows that the neglect of the counter-rotating interactions leads to missing several crucial physical processes, e.g., virtual photon emission and reabsorption. We summarize our main findings as follows. First, in the weak coupling regime, the lack of these processes leads to the absence of energy shifts in the ground-state molecules and incorrect dipole-dipole interactions between molecule pairs. Second, our study clearly demonstrates that counter-rotating interactions play an essential component in dipole-dipole interactions. Our analysis points out that within the RWA, the dipole-dipole interactions converge to only half of the conventional Coulomb interaction in the short-distance (non-retarded) limit. Third, our numerical simulations reveal that in the weak coupling regime, the absence of the counter-rotating interactions can significantly influence the dynamics of individual molecules while leaving the total excited-state population unchanged. Conversely, in the strong coupling regime, both individual and total excited-state dynamics exhibit sensitivity to the counter-rotating interactions. To sum up, through the analysis of the dynamical equations and a specific case study of population dynamics, we show that the counter-rotating interactions play crucial roles in both strong and weak coupling regimes. Hence, it is imperative to always exercise caution when making the rotating-wave approximation. We believe that this work will provide important insights into the study of light-matter interactions. While we have demonstrated the significance of counter-rotating interactions, there are still unresolved issues that warrant further investigations. First, in this article, we have modeled molecules as two-level systems, neglecting the influence of other molecular excited states. However, these excited states play a crucial role in certain properties, such as energy shift calculations. Second, our analysis includes up to two molecular excitations and one polariton in the wavefunction ansatz. Nevertheless, in regimes of ultrastrong and deep-strong coupling, higher excitation states, such as three-molecule excitations and two-polariton states, may also impact quantum dynamics significantly. These issues are important for exploring the quantum dynamics of a collection of molecules coupled with quantum light. Finally, we hope that this work could inspire further investigations into emerging quantum electrodynamic phenomena in chemistry and molecular physics. ###### Acknowledgements. Hsu thanks Academia Sinica (AS-CDA-111-M02) and the Ministry of Science and Technology of Taiwan (110-2113-M-001-053 and 111-2113-M-001-027-MY4) for the financial support. ## Appendix A Derivation of Eq. (11) To derive Eq. (11), we substitute the Hamiltonian \(\hat{H}\) in Eq. (1) and the state vector in Eq. (9) into the time-dependent Schrodinger equation \(i\hbar\partial\left|\Psi(t)\right\rangle/\partial t=\hat{H}\left|\Psi(t)\right\rangle\), and then we obtain the following coupled differential equations, \[i\hbar\frac{\partial}{\partial t}C^{\mathrm{G},\{1_{k}\}}(\mathbf{r}, \omega,t)e^{-iW^{\mathrm{G},\{1\}}(\omega)t}=-\sum_{\alpha}\left[\mathbf{\mu}_{ \alpha}^{\mathrm{ge}}\cdot\overline{\overline{\mathcal{G}}}^{*}(\mathbf{r}_{ \alpha},\mathbf{r},\omega)\right]_{k}C^{\mathrm{E}_{\alpha},\{0\}}(t)e^{-iW^{ \mathrm{E}_{\alpha},\{0\}}t}, \tag{10}\] \[i\hbar\frac{\partial}{\partial t}C^{\mathrm{E}_{\alpha\beta},\{1_ {k}\}}(\mathbf{r},\omega,t)e^{-iW^{\mathrm{E}_{\alpha\beta},\{1\}}(\omega)t}=\] \[-\left[\mathbf{\mu}_{\beta}^{\mathrm{eg}}\cdot\overline{\overline{ \mathcal{G}}}^{*}(\mathbf{r}_{\beta},\mathbf{r},\omega)\right]_{k}e^{-iW^{ \mathrm{E}_{\alpha,\{0\}}}t}C^{\mathrm{E}_{\alpha},\{0\}}(t)-\left[\mathbf{\mu}_{ \alpha}^{\mathrm{eg}}\cdot\overline{\overline{\mathcal{G}}}^{*}(\mathbf{r}_{ \alpha},\mathbf{r},\omega)\right]_{k}e^{-iW^{\mathrm{E}_{\beta,\{0\}}}t}C^{ \mathrm{E}_{\beta},\{0\}}(t), \tag{11}\] \[i\hbar\frac{\mathrm{d}}{\mathrm{d}t}C^{\mathrm{E}_{\alpha},\{0\}} (t)e^{-iW^{\mathrm{E}_{\alpha},\{0\}}t}= -\sum_{k=1}^{3}\int\mathrm{d}\mathbf{r}\int_{0}^{\infty}\mathrm{ d}\omega\left[\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot\overline{\overline{\mathcal{G}}} (\mathbf{r}_{\alpha},\mathbf{r},\omega)\right]_{k}e^{-iW^{\mathrm{G},\{1\}}( \omega)t}C^{\mathrm{G},\{1_{k}\}}(\mathbf{r},\omega,t)\] \[-\sum_{\beta\neq\alpha}\sum_{k=1}^{3}\int\mathrm{d}\mathbf{r} \int_{0}^{\infty}\mathrm{d}\omega\left[\mathbf{\mu}_{\beta}^{\mathrm{ge}}\cdot \overline{\overline{\mathcal{G}}}(\mathbf{r}_{\beta},\mathbf{r}^{\prime}, \omega)\right]_{k}e^{-iW^{\mathrm{E}_{\alpha\beta},\{1\}}(\omega)t}C^{\mathrm{ E}_{\alpha\beta},\{1_{k}\}}(\mathbf{r},\omega,t), \tag{12}\] where \(\left[\mathbf{v}\right]_{k}\) denotes the \(k\)-th component of the vector \(\mathbf{v}\). Note that we have ignored the two-polariton term and the three-molecule-excitation term in obtaining Eqs. (10)-(12). Consider that there is no polariton at \(t=0\), i.e., \(C^{\mathrm{G},\{1_{k}\}}(\mathbf{r},\omega,t=0)=C^{\mathrm{E}_{\alpha\beta}, \{1_{k}\}}(\mathbf{r},\omega,t=0)=0\), we can formally integrate Eqs. (10) and (11) and obtain \[C^{\mathrm{G},\{1_{k}\}}(\mathbf{r},\omega,t)=\frac{i}{\hbar}\sum_{\alpha}\int _{0}^{t}\mathrm{d}t^{\prime}\left[\mathbf{\mu}_{\alpha}^{\mathrm{ge}}\cdot\overline {\overline{\mathcal{G}}}^{*}(\mathbf{r}_{\alpha},\mathbf{r},\omega)\right]_{k}e ^{-i\left(W^{\mathrm{E}_{\alpha},\{0\}}-W^{\mathrm{G},\{1\}}(\omega)\right)t^ {\prime}}C^{\mathrm{E}_{\alpha},\{0\}}(t^{\prime}), \tag{13}\] \[C^{\mathrm{E}_{\alpha\beta},\{1_{k}\}}(\mathbf{r},\omega,t)= \frac{i}{\hbar}\int_{0}^{t}\mathrm{d}t^{\prime}\left[\mathbf{\mu}_{ \beta}^{\mathrm{eg}}\cdot\overline{\overline{\mathcal{G}}}^{*}(\mathbf{r}_{ \beta},\mathbf{r},\omega)\right]_{k}e^{-i\left(W^{\mathrm{E}_{\alpha},\{0\}}-W ^{\mathrm{E}_{\alpha\beta},\{1\}}(\omega)\right)t^{\prime}}C^{\mathrm{E}_{ \alpha},\{0\}}(t^{\prime})\] \[+\frac{i}{\hbar}\int_{0}^{t}\mathrm{d}t^{\prime}\left[\mathbf{\mu}_{ \alpha}^{\mathrm{eg}}\cdot\overline{\overline{\mathcal{G}}}^{*}(\mathbf{r}_{ \alpha},\mathbf{r},\omega)\right]_{k}e^{-i\left(W^{\mathrm{E}_{\beta},\{0\}}-W ^{\mathrm{E}_{\alpha\beta},\{1\}}(\omega)\right)t^{\prime}}C^{\mathrm{E}_{ \beta},\{0\}}(t^{\prime}). \tag{14}\] In order to derive Eq. (11), we substitute Eqs. (13) and (14) into Eq. (12), make use of the identity [65] \[\mathrm{Im}\overline{\overline{\mathbf{G}}}(\mathbf{r},\mathbf{r}^{\prime}, \omega)=\int\mathrm{d}\mathbf{s}\,\frac{\omega^{2}\mathrm{Im}\left[\varepsilon_ {r}(\mathbf{s},\omega)\right]\overline{\overline{\mathbf{G}}}(\mathbf{r}, \mathbf{s},\omega)\overline{\overline{\mathbf{G}}}^{\dagger}(\mathbf{r}^{\prime}, \mathbf{s},\omega), \tag{15}\] apply the definitions of \(W^{\mathrm{E}_{\alpha},\{0\}}\), \(W^{\mathrm{G},\{1\}}(\omega)\), and \(W^{\mathrm{E}_{\alpha\beta},\{1\}}(\omega)\) in Eqs (10a)-(10c), and finally obtain \[\frac{\mathrm{d}}{\mathrm{d}t}C^{\mathrm{E}_{\alpha},\{0\}}(t)=\] \[-\sum_{\beta\neq\alpha}\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^{ \infty}\mathrm{d}\omega\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}}\mathbf{ \mu}_{\beta}^{\mathrm{eg}}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}( \mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)\cdot\mathbf{\mu}_{\beta}^{\mathrm{ge}} \right]e^{-i(\omega-\omega_{\alpha})t}e^{-i(\omega_{\beta}-\omega)t^{\prime}}C^{ \mathrm{E}_{\beta},\{0\}}(t^{\prime})\] \[-\sum_{\beta\neq\alpha}\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^{ \infty}\mathrm{d}\omega\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}}\mathbf{ \mu}_{\beta}^{\mathrm{ge}}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}( \mathbf{r}_{\beta},\mathbf{r}_{\beta},\omega)\cdot\mathbf{\mu}_{\beta}^{\mathrm{ge}} \right]e^{-i(\omega-\omega_{\alpha})t}e^{-i(\omega_{\beta}-\omega)t^{\prime}}C^{ \mathrm{E}_{\beta},\{0\}}(t^{\prime})\] \[-\sum_{\beta\neq\alpha}\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^{ \infty}\mathrm{d}\omega\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}}\mathbf{ \mu}_{\beta}^{\mathrm{ge}}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}( \mathbf{r}_{\beta},\mathbf{r}_{\alpha},\omega)\cdot\mathbf{\mu}_{\beta}^{\mathrm{ge}} \right]e^{-i(\omega+\omega_{\beta})t}e^{-i(-\omega_{\alpha}-\omega)t^{\prime}}C^{ \mathrm{E}_{\beta},\{0\}}(t^{\prime}). \tag{16}\] ## Appendix B Derivation of Eq. (16) In the weak coupling regime, we apply the Markov approximation to Eq. (11), i.e., we change \(C^{\mathrm{E}_{\alpha},\{0\}}(t^{\prime})\to C^{\mathrm{E}_{\alpha},\{0\}}(t)\) and \(\int_{0}^{t}\mathrm{d}t^{\prime}\to\int_{-\infty}^{t}\mathrm{d}t^{\prime}\), and make the substitution \(\tau=t-t^{\prime}\) (\(\mathrm{d}\tau=-\mathrm{d}t^{\prime}\)); then Eq. (11) becomes \[\frac{\mathrm{d}}{\mathrm{d}t}C^{\mathrm{E}_{\alpha,}\{0\}}(t)=\] \[\quad-\left\{\int_{0}^{\infty}\mathrm{d}\omega\left[\int_{0}^{ \infty}\mathrm{d}\tau\,e^{-i(\omega-\omega_{\alpha})\tau}\right]\left[\frac{ \omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}}\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot \mathrm{Im}\overline{\overline{\overline{\mathbf{G}}}}(\mathbf{r}_{\alpha}, \mathbf{r}_{\alpha},\omega)\cdot\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\right]\right\} C^{\mathrm{E}_{\alpha,}\{0\}}(t)\] \[\quad-\sum_{\beta\neq\alpha}\left\{\int_{0}^{\infty}\mathrm{d} \omega\left[\int_{0}^{\infty}\mathrm{d}\tau\,e^{-i(\omega-\omega_{\beta})\tau} \right]\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}}\mathbf{\mu}_{\alpha}^ {\mathrm{eg}}\cdot\mathrm{Im}\overline{\overline{\overline{\mathbf{G}}}}( \mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)\cdot\mathbf{\mu}_{\beta}^{\mathrm{ eg}}\right]\right\}e^{-i(\omega_{\beta}-\omega_{\alpha})t}C^{\mathrm{E}_{\beta,}\{0\}}(t)\] \[\quad-\sum_{\beta\neq\alpha}\left\{\int_{0}^{\infty}\mathrm{d} \omega\left[\int_{0}^{\infty}\mathrm{d}\tau\,e^{-i(\omega+\omega_{\beta})\tau} \right]\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}}\mathbf{\mu}_{\beta}^ {\mathrm{eg}}\cdot\mathrm{Im}\overline{\overline{\overline{\mathbf{G}}}}( \mathbf{r}_{\beta},\mathbf{r}_{\beta},\omega)\cdot\mathbf{\mu}_{\beta}^{\mathrm{ eg}}\right]\right\}C^{\mathrm{E}_{\alpha,}\{0\}}(t)\] \[\quad-\sum_{\beta\neq\alpha}\left\{\int_{0}^{\infty}\mathrm{d} \omega\left[\int_{0}^{\infty}\mathrm{d}\tau\,e^{-i(\omega+\omega_{\alpha})\tau }\right]\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}}\mathbf{\mu}_{\beta}^ {\mathrm{eg}}\cdot\mathrm{Im}\overline{\overline{\overline{\mathbf{G}}}}( \mathbf{r}_{\beta},\mathbf{r}_{\alpha},\omega)\cdot\mathbf{\mu}_{\alpha}^{\mathrm{ eg}}\right]\right\}e^{-i(\omega_{\beta}-\omega_{\alpha})t}C^{\mathrm{E}_{\beta,}\{0\}}(t). \tag{10}\] According to the Sokhotski-Plemelj theorem, we have \[\int_{0}^{\infty}\mathrm{d}\tau\,e^{-i(\omega\pm\omega_{\alpha})\tau}=\pi \delta\left(\omega\pm\omega_{\alpha}\right)-i\mathcal{P}\left(\frac{1}{\omega \pm\omega_{\alpha}}\right). \tag{11}\] Substituting Eq. (11) into Eq. (10), we obtain \[\frac{\mathrm{d}}{\mathrm{d}t}C^{\mathrm{E}_{\alpha,}\{0\}}(t)=\] \[\quad-\left\{\left[\frac{\omega_{\alpha}^{2}}{\hbar\varepsilon_{0} c^{2}}\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot\mathrm{Im}\overline{\overline{ \overline{\mathbf{G}}}}(\mathbf{r}_{\alpha},\mathbf{r}_{\alpha},\omega_{\alpha })\cdot\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\right]-i\mathcal{P}\int_{0}^{\infty} \mathrm{d}\omega\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}}\frac{\mathbf{ \mu}_{\alpha}^{\mathrm{eg}}\cdot\mathrm{Im}\overline{\overline{\overline{ \mathbf{G}}}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)\cdot\mathbf{\mu}_{ \beta}^{\mathrm{eg}}}{\omega-\omega_{\alpha}}\right]\right\}C^{\mathrm{E}_{ \alpha,}\{0\}}(t)\] \[\quad-\sum_{\beta\neq\alpha}\left\{\left[\frac{\omega_{\beta}^{2} }{\hbar\varepsilon_{0}c^{2}}\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot\mathrm{Im} \overline{\overline{\overline{\mathbf{G}}}}(\mathbf{r}_{\alpha},\mathbf{r}_{ \beta},\omega_{\beta})\cdot\mathbf{\mu}_{\beta}^{\mathrm{eg}}\right]-i\mathcal{P} \int_{0}^{\infty}\mathrm{d}\omega\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_ {0}c^{2}}\frac{\mathbf{\mu}_{\beta}^{\mathrm{eg}}\cdot\mathrm{Im}\overline{ \overline{\overline{\mathbf{G}}}}(\mathbf{r}_{\beta},\mathbf{r}_{\beta},\omega) \cdot\mathbf{\mu}_{\beta}^{\mathrm{eg}}}{\omega+\omega_{\alpha}}\right]\right\}e^{ -i(\omega_{\beta}-\omega_{\alpha})t}C^{\mathrm{E}_{\beta,}\{0\}}(t)\] \[\quad-\sum_{\beta\neq\alpha}\left\{-i\mathcal{P}\int_{0}^{\infty} \mathrm{d}\omega\left[\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}}\frac{\mathbf{ \mu}_{\beta}^{\mathrm{eg}}\cdot\mathrm{Im}\overline{\overline{\overline{ \mathbf{G}}}}(\mathbf{r}_{\beta},\mathbf{r}_{\alpha},\omega)\cdot\mathbf{\mu}_{ \alpha}^{\mathrm{eg}}}{\omega+\omega_{\alpha}}\right]\right\}C^{\mathrm{E}_{ \alpha,}\{0\}}(t). \tag{12}\] Using the following identities [65] \[\overline{\overline{\mathbf{G}}}(\mathbf{r},\mathbf{r}^{\prime},-\omega^{*})= \overline{\overline{\mathbf{G}}}^{*}(\mathbf{r},\mathbf{r}^{\prime},\omega), \tag{13}\] \[\mathbf{\mu}_{1}\cdot\overline{\overline{\mathbf{G}}}(\mathbf{r}_{1},\mathbf{r}_{2}, \omega)\cdot\mathbf{\mu}_{2}=\mathbf{\mu}_{2}\cdot\overline{\overline{\overline{\mathbf{G}}}}( \mathbf{r}_{2},\mathbf{r}_{1},\omega)\cdot\mathbf{\mu}_{1}, \tag{14}\] \[\pi\omega^{2}\mathrm{Re}\overline{\overline{\overline{\mathbf{G}}}}(\mathbf{r}, \mathbf{r}^{\prime},\omega) =\mathcal{P}\int_{-\infty}^{\infty}\mathrm{d}\omega\,\frac{\omega^{2} \mathrm{Im}\overline{\overline{\overline{\mathbf{G}}}}(\mathbf{r},\mathbf{r}^{ \prime},\omega)}{\omega-\omega_{\alpha}}\] \[=\mathcal{P}\int_{0}^{\infty}\mathrm{d}\omega\,\frac{\omega^{2} \mathrm{Im}\overline{\overline{\mathbf{G}}}}{\omega-\omega_{\alpha}}+\int_{0}^{ \infty}\mathrm{d}\omega\,\frac{\omega^{2}\mathrm{Im}\overline{\overline{\overline{ \mathbf{G}}}}(\mathbf{r},\mathbf{r}^{\prime},\omega)}{\omega+\omega_{\alpha}}, \tag{15}\] and recalling the definitions of \(\Delta_{\mathrm{e}(\mathrm{g})_{\alpha}}\) in Eq (17), \(\Gamma_{\alpha}\) in Eq (20), and \(\mathrm{V}_{\mathrm{DDI},\alpha\beta}\) in Eq (21), we can transform Eq (12) into \[\frac{\mathrm{d}}{\mathrm{d}t}C^{\mathrm{E}_{\alpha,}\{0\}}(t)=-\frac{i}{\hbar} \left\{\left[\Delta_{\mathrm{e}_{\alpha}}+\sum_{\beta\neq\alpha}\Delta_{ \mathrm{g}_{\beta}}\right]-i\hbar\frac{\Gamma_{\alpha}}{2}\right\}C^{\mathrm{E} _{\alpha,}\{0\}}(t)-\frac{i}{\hbar}\sum_{\beta\neq\alpha}\mathrm{V}_{\mathrm{ DDI},\alpha\beta}\,e^{-i(\omega_{\beta}-\omega_{\alpha})t}C^{\mathrm{E}_{\beta,}\{0\}}(t). \tag{16}\] Appendix C Calculation of \(\mathrm{V}_{\mathrm{OR},\alpha\beta}\) and \(\mathrm{V}_{\mathrm{QC},\alpha\beta}\) on the Imaginary Axis To obtain \(\mathrm{V}_{\mathrm{OR},\alpha\beta}\) and \(\mathrm{V}_{\mathrm{QC},\alpha\beta}\), we need to evaluate the integral \[I=\int_{0}^{\infty}\mathrm{d}\omega\,\frac{\omega^{2}}{\pi\varepsilon_{0}c^{2}} \frac{\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot\mathrm{Im}\overline{\overline{ \overline{\mathbf{G}}}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)\cdot\mathbf{ \mu}_{\beta}^{\mathrm{eg}}}{\omega+\omega^{\prime}}, \tag{17}\] where \(\omega^{\prime}=\omega_{\alpha}\) or \(\omega_{\beta}\). \(I\) can be alternatively expressed as: \[I=\frac{1}{\pi\varepsilon_{0}}\mathbf{\mu}_{\alpha}^{\text{eg}}\cdot \text{Im}\left[\int_{0}^{\infty}\text{d}\omega\,\frac{\omega^{2}}{c^{2}}\frac{ \overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta}, \omega)}{\omega+\omega^{\prime}}\right]\cdot\mathbf{\mu}_{\beta}^{\text{ge}}. \tag{10}\] For simplicity, we define \(f(\omega)=\frac{\omega^{2}}{c^{2}}\frac{\overline{\overline{\mathbf{G}}}( \mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)}{\omega+\omega^{\prime}}\). The integral in the square bracket can be evaluated using the contour integral technique, i.e., \[\int_{0}^{\infty}\text{d}\omega\,f(\omega)=\oint_{C}\text{d} \omega\,f(\omega)-\int_{C_{2}}\text{d}\omega\,f(\omega)-\int_{C_{3}}\text{d} \omega\,f(\omega), \tag{11}\] where the contour is shown in FIG. 4. The first term on the right-hand side of Eq. (11) is zero since there is no singularity inside the contour \(C\). The second term on the right-hand side of Eq. (11) is also zero due to the asymptotic behavior of the two-point dyadic Green's function [65] \[\lim_{|\omega|\rightarrow\infty}\left.\frac{\omega^{2}}{c^{2}} \overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta}, \omega)\right|_{\mathbf{r}_{\alpha}\neq\mathbf{r}_{\beta}}=\overline{\overline {\mathbf{0}}}. \tag{12}\] Therefore, we have \[\int_{0}^{\infty}\text{d}\omega\,f(\omega) =-\int_{i\infty}^{0}\text{d}\omega\,f(\omega)\] \[=-\int_{i\infty}^{0}\text{d}\omega\,\frac{\omega^{2}}{c^{2}} \frac{\overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)}{\omega+\omega^{\prime}}\] \[=-\int_{i\infty}^{0}\text{d}\omega\,\frac{\omega^{2}\left(\omega- \omega^{\prime}\right)}{c^{2}}\frac{\overline{\overline{\mathbf{G}}}(\mathbf{ r}_{\alpha},\mathbf{r}_{\beta},\omega)}{\omega^{2}-\omega^{\prime 2}}\] \[=-\int_{i\infty}^{0}\text{d}\omega\,\frac{\omega^{3}\,\overline{ \overline{\mathbf{G}}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)}{\omega ^{2}-\omega^{\prime 2}}\] \[\quad+\int_{i\infty}^{0}\text{d}\omega\,\frac{\omega^{\prime} \omega^{2}}{c^{2}}\frac{\overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha}, \mathbf{r}_{\beta},\omega)}{\omega^{2}-\omega^{\prime 2}}. \tag{13}\] Making the substitution \(\kappa=-i\omega\), \[\int_{0}^{\infty}\text{d}\omega\,f(\omega) =-\int_{0}^{\infty}\text{d}\kappa\,\frac{\kappa^{3}}{c^{2}}\frac{ \overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},i \kappa)}{\kappa^{2}+\omega^{\prime 2}}\] \[\quad-i\int_{0}^{\infty}\text{d}\kappa\,\frac{\omega^{\prime} \kappa^{2}}{c^{2}}\frac{\overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha}, \mathbf{r}_{\beta},i\kappa)}{\kappa^{2}+\omega^{\prime 2}}. \tag{14}\] Using Eq. (10), we can obtain that the dyadic Green's function is purely real on the imaginary axis, i.e., \(\overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},i \kappa)=\text{Re}\overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha}, \mathbf{r}_{\beta},i\kappa)\), and arrive at \[\text{Im}\left[\int_{0}^{\infty}\text{d}\omega\,f(\omega)\right] =-\int_{0}^{\infty}\text{d}\kappa\,\frac{\omega^{\prime}\kappa^{2}}{c^{2}} \frac{\text{Re}\overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha},\mathbf{r }_{\beta},i\kappa)}{\kappa^{2}+\omega^{\prime 2}}. \tag{15}\] Now we can express the integral \(I\) as [63; 64]: \[I=-\int_{0}^{\infty}\text{d}\kappa\,\frac{\omega^{\prime}\kappa^{2}}{\pi \varepsilon_{0}c^{2}}\frac{\mathbf{\mu}_{\alpha}^{\text{eg}}\cdot\text{Re} \overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},i \kappa)\cdot\mathbf{\mu}_{\beta}^{\text{ge}}}{\kappa^{2}+\omega^{\prime 2}}. \tag{16}\] Eq. (16) is a powerful tool in cases where the dyadic Green's function on the imaginary axis is available since \(\overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},i\kappa)\) decays rapidly [63]. However, in complex dielectric environments, dyadic Green's function on the imaginary axis is difficult to obtain, and the evaluation of \(I\) through Eq. (12) is more convenient. In free space, we have the explicit expression of the two-point dyadic Green's function, which reads: \[\overline{\overline{\mathbf{G}}}_{0}(\mathbf{r}_{\alpha},\mathbf{ r}_{\beta},\omega)\Big{|}_{\mathbf{r}_{\alpha}\neq\mathbf{r}_{\beta}}=\] \[\frac{e^{ik_{0}R}}{4\pi R}\Bigg{\{}\left(\overline{\overline{ \mathbf{I}}}_{3}-\mathbf{n}_{R}\otimes\mathbf{n}_{R}\right)\] \[\quad\quad+\left(3\mathbf{n}_{R}\otimes\mathbf{n}_{R}-\overline{ \overline{\mathbf{I}}}_{3}\right)\left[\frac{1}{(k_{0}R)^{2}}-\frac{i}{k_{0}R }\right]\Bigg{\}}, \tag{17}\] where \(k_{0}=\omega/c\) and \(\mathbf{r}_{\alpha}-\mathbf{r}_{\beta}\equiv R\mathbf{n}_{R}\). Therefore, we can evaluate the integral using Eq. (16). Inserting Eq. (17) into Eq. (16) and making the substitution \(x=\kappa R/c\), we Figure 4: The contour adopted in the integral. The total contour \(C\) is equal to \(C_{1}+C_{2}+C_{3}\). Note that there is no singularity inside the contour. obtain \[I^{0} \equiv-\int_{0}^{\infty}\mathrm{d}\kappa\,\frac{\omega^{\prime} \kappa^{2}}{\pi\varepsilon_{0}c^{2}}\,\frac{\mathbf{\mu}_{\alpha}^{\mathrm{eg}} \cdot\mathrm{Re}\overline{\overline{\overline{\mathbf{G}}}_{0}}(\mathbf{r}_{ \alpha},\mathbf{r}_{\beta},i\kappa)\cdot\mathbf{\mu}_{\beta}^{\mathrm{ge}}}{\kappa^ {2}+\omega^{\prime 2}}\] \[=-\frac{\omega^{\prime}}{4\pi^{2}\varepsilon_{0}cR^{2}}\Big{\{} \Big{[}\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot\mathbf{\mu}_{\beta}^{\mathrm{ge}}-( \mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot\mathbf{n}_{R})\,\Big{(}\mathbf{\mu}_{\beta}^{ \mathrm{ge}}\cdot\mathbf{n}_{R}\Big{)}\Big{]}\,\mathcal{I}_{1}\] \[\quad+\Big{[}\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot\mathbf{\mu}_{\beta }^{\mathrm{ge}}-3\,(\mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot\mathbf{n}_{R})\, \Big{(}\mathbf{\mu}_{\beta}^{\mathrm{ge}}\cdot\mathbf{n}_{R}\Big{)}\Big{]}\,( \mathcal{I}_{2}+\mathcal{I}_{3})\,\Big{\}}. \tag{101}\] \(\mathcal{I}_{1}\), \(\mathcal{I}_{2}\), and \(\mathcal{I}_{3}\) are the auxiliary integrals defined as \[\mathcal{I}_{1} =\int_{0}^{\infty}\mathrm{d}x\,\frac{x^{2}e^{-x}}{x^{2}+x^{\prime 2 }}, \tag{102a}\] \[\mathcal{I}_{2} =\int_{0}^{\infty}\mathrm{d}x\,\frac{xe^{-x}}{x^{2}+x^{\prime 2 }},\] (102b) \[\mathcal{I}_{3} =\int_{0}^{\infty}\mathrm{d}x\,\frac{e^{-x}}{x^{2}+x^{\prime 2 }}, \tag{102c}\] where \(x^{\prime}=\omega^{\prime}R/c\). These auxiliary integrals can be explicitly expressed in terms of the trigonometric functions and trigonometric integrals as [80] \[\mathcal{I}_{1} =-x^{\prime}\,[\mathrm{ci}(x^{\prime})\sin(x^{\prime})-\mathrm{ si}(x^{\prime})\cos(x^{\prime})]+1, \tag{103a}\] \[\mathcal{I}_{2} =-\mathrm{ci}(x^{\prime})\cos(x^{\prime})-\mathrm{si}(x^{\prime}) \sin(x^{\prime}),\] (103b) \[\mathcal{I}_{3} =\frac{1}{x^{\prime}}\,[\mathrm{ci}(x^{\prime})\sin(x^{\prime})- \mathrm{si}(x^{\prime})\cos(x^{\prime})]\,, \tag{103c}\] where the trigonometric integrals follow the definitions: \[\mathrm{ci}(x^{\prime})=-\int_{x^{\prime}}^{\infty}\,\mathrm{d}x\,\frac{\cos( x)}{x},\,\,\mathrm{si}(x^{\prime})=-\int_{x^{\prime}}^{\infty}\mathrm{d}x\, \frac{\sin(x)}{x}.\] The explicit form of \(I^{0}\) is useful for the calculation of quantum dynamics. In addition, the asymptotic behavior of \(I^{0}\), \(V^{0}_{\mathrm{ORC},\alpha\beta}\), and \(V^{0}_{\mathrm{QC},\alpha\beta}\) can be analyzed through the expansion of the trigonometric integrals. ## Appendix D Numerical Implementation of FQD and MAQD To numerically implement FQD without the RWA, we start from Eq. (11), separate \(\overline{\overline{\mathbf{G}}}(\mathbf{r},\mathbf{r}^{\prime},\omega)\) as \(\overline{\overline{\mathbf{G}}}_{0}(\mathbf{r},\mathbf{r}^{\prime},\omega)+ \overline{\overline{\mathbf{G}}}_{\mathrm{Sc}}(\mathbf{r},\mathbf{r}^{\prime},\omega)\), apply the Markov approximation only to the part involve \(\overline{\overline{\mathbf{G}}}_{0}(\mathbf{r},\mathbf{r}^{\prime},\omega)\), and discard the free-space Lamb shift \(\Delta^{0}_{\mathrm{e}(\mathrm{g})_{\alpha}}\). As a result, one can numerically calculate FQD without the RWA instead of using Eq. (11) as follows, \[\frac{\mathrm{d}}{\mathrm{d}t}C^{\mathrm{E}_{\alpha,}\{0\}}(t)= -\frac{\Gamma_{0}^{0}}{2}C^{\mathrm{E}_{\alpha,}\{0\}}(t)-\frac{i }{\hbar}\sum_{\beta\neq\alpha}\mathrm{V}^{0}_{\mathrm{DDI},\alpha\beta}\,e^{-i (\omega_{\beta}-\omega_{\alpha})t}C^{\mathrm{E}_{\beta,}\{0\}}(t)\] \[-\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^{\infty}\mathrm{d} \omega\,\bigg{[}\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}}\mathbf{\mu}_{ \alpha}^{\mathrm{eg}}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}_{ \mathrm{Sc}}(\mathbf{r}_{\alpha},\mathbf{r}_{\alpha},\omega)\cdot\mathbf{\mu}_{ \alpha}^{\mathrm{ge}}\bigg{]}\,e^{-i(\omega-\omega_{\alpha})t}e^{-i(\omega_{ \alpha}-\omega)t^{\prime}}C^{\mathrm{E}_{\alpha,}\{0\}}(t^{\prime})\] \[-\sum_{\beta\neq\alpha}\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^{ \infty}\mathrm{d}\omega\,\bigg{[}\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}} \mathbf{\mu}_{\alpha}^{\mathrm{eg}}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}_ {\mathrm{Sc}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)\cdot\mathbf{\mu}_{ \beta}^{\mathrm{ge}}\bigg{]}\,e^{-i(\omega-\omega_{\alpha})t}e^{-i(\omega_{ \beta}-\omega)t^{\prime}}C^{\mathrm{E}_{\beta,}\{0\}}(t^{\prime})\] \[-\sum_{\beta\neq\alpha}\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^{ \infty}\mathrm{d}\omega\,\bigg{[}\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}} \mathbf{\mu}_{\beta}^{\mathrm{ge}}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}_ {\mathrm{Sc}}(\mathbf{r}_{\beta},\mathbf{r}_{\alpha},\omega)\cdot\mathbf{\mu}_{ \alpha}^{\mathrm{eg}}\bigg{]}\,e^{-i(\omega+\omega_{\beta})t}e^{-i(-\omega_{ \beta}-\omega)t^{\prime}}C^{\mathrm{E}_{\alpha,}\{0\}}(t^{\prime})\] \[-\sum_{\beta\neq\alpha}\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^{ \infty}\mathrm{d}\omega\,\bigg{[}\frac{\omega^{2}}{\pi\hbar\varepsilon_{0}c^{2}} \mathbf{\mu}_{\beta}^{\mathrm{ge}}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}_ {\mathrm{Sc}}(\mathbf{r}_{\beta},\mathbf{r}_{\alpha},\omega)\cdot\mathbf{\mu}_{\alpha }^{\mathrm{eg}}\bigg{]}\,e^{-i(\omega+\omega_{\beta})t}e^{-i(-\omega_{\alpha}- \omega)t^{\prime}}C^{\mathrm{E}_{\beta,}\{0\}}(t^{\prime}), \tag{104}\] where \(\Gamma_{\alpha}^{0}=\frac{2\omega_{\alpha}^{2}}{\hbar\varepsilon_{0}c^{2}}\mathbf{\mu} _{\alpha}^{\mathrm{eg}}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}_{0}( \mathbf{r}_{\alpha},\mathbf{r}_{\alpha},\omega_{\alpha})\cdot\mathbf{\mu}_{\alpha}^{ \mathrm{ge}}=\frac{|\mathbf{\mu}_{\alpha}^{\mathrm{eg}}|^{2}\omega_{\alpha}^{3}}{3 \pi\hbar\varepsilon_{0}c^{2}}\) is the spontaneous emission rate in free space, and \(\mathrm{V}^{0}_{\mathrm{DDI},\alpha\beta}\) is obtained by substituting \(\overline{\overline{\mathbf{G}}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)\) with \(\overline{\overline{\mathbf{G}}}_{0}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)\) in the expression of \(\mathrm{V}_{\mathrm{DDI},\alpha\beta}\). To numerically implement FQD with the RWA, we start from Eq. (15) and adopt the same procedure as in the above. As a result, we can numerically calculate FQD with the RWA instead of Eq. (15) as follows, \[\frac{\mathrm{d}}{\mathrm{d}t}\tilde{C}^{\mathrm{E}_{\alpha,}\{0\}}(t)= -\frac{\Gamma_{\alpha}^{0}}{2}\tilde{C}^{\mathrm{E}_{\alpha,}\{0 \}}(t)-\frac{i}{\hbar}\sum_{\beta\neq\alpha}\tilde{\mathrm{V}}^{0}_{\mathrm{ DDI},\alpha\beta}\,e^{-i(\omega_{\beta}-\omega_{\alpha})t}\tilde{C}^{ \mathrm{E}_{\beta,}\{0\}}(t)\] \[-\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^{\infty}\mathrm{d} \omega\left[\frac{\omega^{2}}{\pi\hbar\bar{\varepsilon}_{0}c^{2}}\mathbf{\mu}_{ \alpha}^{\mathrm{cg}}\cdot\mathrm{Im}\overline{\overline{\mathbf{G}}}_{ \mathrm{Sc}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)\cdot\mathbf{\mu}_{ \alpha}^{\mathrm{ge}}\right]e^{-i(\omega-\omega_{\alpha})t}e^{-i(\omega_{ \beta}-\omega)t^{\prime}}\tilde{C}^{\mathrm{E}_{\alpha,}\{0\}}(t^{\prime})\] \[-\sum_{\beta\neq\alpha}\int_{0}^{t}\mathrm{d}t^{\prime}\int_{0}^ {\infty}\mathrm{d}\omega\left[\frac{\omega^{2}}{\pi\hbar\bar{\varepsilon}_{0}c ^{2}}\mathbf{\mu}_{\alpha}^{\mathrm{cg}}\cdot\mathrm{Im}\overline{\overline{ \mathbf{G}}}_{\mathrm{Sc}}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)\cdot \mathbf{\mu}_{\beta}^{\mathrm{ge}}\right]e^{-i(\omega-\omega_{\alpha})t}e^{-i( \omega_{\beta}-\omega)t^{\prime}}\tilde{C}^{\mathrm{E}_{\beta,}\{0\}}(t^{ \prime}), \tag{10}\] where \(\tilde{\mathrm{V}}^{0}_{\mathrm{DDI},\alpha\beta}\) is obtained by substituting \(\overline{\overline{\overline{\mathbf{G}}}}(\mathbf{r}_{\alpha},\mathbf{r}_{ \beta},\omega)\) with \(\overline{\overline{\mathbf{G}}}_{0}(\mathbf{r}_{\alpha},\mathbf{r}_{\beta},\omega)\) in the expression of \(\tilde{\mathrm{V}}_{\mathrm{DDI},\alpha\beta}\). To numerically implement MAQD without the RWA, we discard the free-space Lamb shift \(\Delta^{0}_{\mathrm{e}(\bar{\mathbf{g}})_{\alpha}}\) in Eq. (16) and finally obtain \[\frac{\mathrm{d}}{\mathrm{d}t}C^{\mathrm{E}_{\alpha,}\{0\}}(t)=-\frac{i}{\hbar }\left\{\left[\Delta^{\mathrm{Sc}}_{\alpha}+\sum_{\beta\neq\alpha}\Delta^{ \mathrm{Sc}}_{\mathrm{g}\beta}\right]-i\hbar\frac{\Gamma_{\alpha}}{2}\right\} C^{\mathrm{E}_{\alpha,}\{0\}}(t)-\frac{i}{\hbar}\sum_{\beta\neq\alpha}\mathrm{V}_{ \mathrm{DDI},\alpha\beta}\,e^{-i(\omega_{\beta}-\omega_{\alpha})t}C^{\mathrm{E }_{\beta,}\{0\}}(t). \tag{11}\] To numerically implement MAQD with the RWA, we discard the free-space Lamb shift \(\Delta^{0}_{\mathrm{e}_{\alpha}}\) in Eq. (24) and finally arrive at \[\frac{\mathrm{d}}{\mathrm{d}t}\tilde{C}^{\mathrm{E}_{\alpha,}\{0\}}(t)=-\frac{ i}{\hbar}\left[\Delta^{\mathrm{Sc}}_{\mathrm{e}_{\alpha}}-i\hbar\frac{\Gamma_{ \alpha}}{2}\right]\tilde{C}^{\mathrm{E}_{\alpha,}\{0\}}(t)-\frac{i}{\hbar} \sum_{\beta\neq\alpha}\tilde{\mathrm{V}}_{\mathrm{DDI},\alpha\beta}\,e^{-i( \omega_{\beta}-\omega_{\alpha})t}\tilde{C}^{\mathrm{E}_{\beta,}\{0\}}(t). \tag{12}\]
2304.02577
ECG Feature Importance Rankings: Cardiologists vs. Algorithms
Feature importance methods promise to provide a ranking of features according to importance for a given classification task. A wide range of methods exist but their rankings often disagree and they are inherently difficult to evaluate due to a lack of ground truth beyond synthetic datasets. In this work, we put feature importance methods to the test on real-world data in the domain of cardiology, where we try to distinguish three specific pathologies from healthy subjects based on ECG features comparing to features used in cardiologists' decision rules as ground truth. Some methods generally performed well and others performed poorly, while some methods did well on some but not all of the problems considered.
Temesgen Mehari, Ashish Sundar, Alen Bosnjakovic, Peter Harris, Steven E. Williams, Axel Loewe, Olaf Doessel, Claudia Nagel, Nils Strodthoff, Philip J. Aston
2023-04-05T16:48:24Z
http://arxiv.org/abs/2304.02577v1
# ECG Feature Importance Rankings: Cardiologists vs. Algorithms ###### Abstract Feature importance methods promise to provide a ranking of features according to importance for a given classification task. A wide range of methods exist but their rankings often disagree and they are inherently difficult to evaluate due to a lack of ground truth beyond synthetic datasets. In this work, we put feature importance methods to the test on real-world data in the domain of cardiology, where we try to distinguish three specific pathologies from healthy subjects based on ECG features comparing to features used in cardiologists' decision rules as ground truth. Some methods generally performed well and others performed poorly, while some methods did well on some but not all of the problems considered. ## I Introduction A trained cardiologist can diagnose over 150 different conditions from a 12-lead electrocardiogram (ECG) [1]. Such diagnoses are made on the basis of a multitude of ECG features which consist mainly of time intervals between certain fiducial points on the ECG, amplitudes of prominent features or morphology of ECG segments. For each pathology, the relevant criteria for specific features are well documented [1, 2], although there may be minor differences between one reference source and another. On the other hand, there are numerous algorithms available for determining a ranking of features by importance for a given classification task [3]. However, if several algorithms are used, then it is often found that they give significantly different feature importance rankings and it is not at apparent which ranking is best or whether one particular ranking is better than another. Therefore, we did a comparison of feature importance rankings generated by a number of different algorithms with the corresponding features that a cardiologist uses for diagnosis. This has the advantage of having a set of important features which has been gleaned from clinical experience over many years for diagnosis of each condition which can be compared with the feature rankings of the algorithms. Another possibility with this study is that the feature importance algorithms could identify features that are important for the diagnosis of a condition which are not normally considered to be important by cardiologists. We have chosen three pathologies to study, namely first degree atrioventricular block (1st degree AV block), complete right bundle branch block (RBBB) and complete left bundle branch block (LBBB). A diagnosis of these conditions by cardiologists involves 1, 7 and 14 features respectively and so are progressively more complex, starting with the simplest possible case. For this study, we restrict attention to the simplest case of a binary classification that seeks to distinguish healthy subjects vs. a specific pathology. Of course in practice, a cardiologist has to identify a condition (or multiple conditions) out of many possible conditions, which is a much more complicated task. On the other hand, it is quite conceivable that a simple binary classification of healthy vs. a specific pathology could be successfully achieved by using only a reduced subset of the complete list of diagnostic conditions. However, we consider it appropriate to study the simplest case first. A study of multiclass feature importance algorithms with all four of the above classes has been undertaken as a separate study [4]. We are considering the features used by cardiologists for diagnosis to be the gold standard against which we compare various algorithms. However, it should be noted that different sources for ECG diagnosis often give slightly different conditions for diagnosis of a specific pathology. This may be because textbooks give sufficient conditions for diagnosis, rather than an exhaustive list of all changes associated with a pathology. We have used _EKG-Kurs fur Isabel_[5] as it gives simple, itemised conditions for each pathology. More comprehensive texts are available but we chose this one based on its simplicity and clarity. ## II Materials and Methods ### _ECG Signals_ The ECG signals that were used for this study were taken from the PTB-XL dataset [6, 7], which is publicly available on PhysioNet [8]. In particular, for each of the three pathologies considered (1st degree AV block, RBBB, LBBB), we extracted all the records that were labelled with only the specific pathology. ### _ECG Features_ For extracting features from an ECG, we used the University of Glasgow 12-lead ECG analysis algorithm which has been developed over many years by a team at the University of Glasgow [9]. This software can derive more than 772 global and lead-dependent ECG features from a 10-second 12-lead ECG signal. (All the features derived by the Glasgow software for the PTB-XL dataset are available in the PTB-XL+ feature dataset [10].) From this large collection of features, we selected 117 which a cardiologist would typically assess when considering a diagnosis that are given in Appendix A. This list of features could be debated, and some might argue for different features to be included, but there is no definitive list of such features. These features were derived for all of the ECG records in each of the pathology classes. The small number of records that contained missing values due to issues with feature extraction were deleted to obtain a final dataset without missing values. Features were similarly extracted from a random selection of an equal number of healthy patients' records, with random replacement of any records containing missing values. With this approach, a balanced dataset containing no missing values was created for each pathology. Each feature was scaled to have mean zero and variance one to give a normalised dataset, which was required for certain algorithms (Logistic regression) or is known to be beneficial for others (Deep networks). However, unscaled data were used for XGBoost (XGB) since the feature importance vectors for each test record were almost all zero using the scaled data. The final datasets contained a total of 1,592 records for 1st degree AV block, 1,074 records for RBBB and 1,072 records for LBBB, with half being for healthy subjects and half for the specific pathology in each case. ### _Pathologies_ The ECG is the difference in electrical potential measurable between two different electrodes attached to the body surface and captures the electrical activity due to de- and repolarization of cardiomyocytes in the heart. In the healthy case, electrical activity is spontaneously initiated in the pacemaker cells at the sinoatrial node in the right atrium. After spreading throughout the atrial myocardial tissue and causing the P wave in the ECG, the excitation is delayed at the atrioventricular node. The electrical activation is then conducted via the bundle of His, which branches into a right bundle as well as an anterior and a posterior left bundle before it reaches the Purkinje fibers. These activate the ventricular myocardium from the apex to the base and lead to the QRS complex in the ECG. Finally, the T wave in the ECG arises due to repolarization of the ventricular myocytes. #### Ii-C1 Atrioventricular Block In patients with atrioventricular (AV) block, the excitation conduction between atria and ventricles is impaired. In first degree AV block, which is studied in this work, the conduction is markedly delayed and leads to PR intervals \(>\)200 ms in the ECG. However, all atrial impulses are still transferred to the ventricles and every P wave is followed by a QRS complex as opposed to second or third degree AV-block that is associated with skipped beats or independent excitation of atria and ventricles respectively [5]. Thus, there is only one feature which is used for the diagnosis of a 1st degree AV block: * PR interval We checked for other features that correlate (with absolute Pearson correlation coefficient \(\geq 0.7\)) with the PR interval, as such features may be expected to occur high up the ranking. However, there were none and so this is the simplest possible case. #### Ii-C2 Right Bundle Branch Block Complete right bundle branch block is characterized by marked delay or block in conduction in the right bundle branch. In this case, the right ventricles are activated via impulses conducted through the left bundle branches reaching the right ventricle through the ventricular myocardial tissue. As this takes longer than the physiological activation through the three fascicles, this reflects in a widened QRS complex of \(>\)120 ms in the ECG. Furthermore, a terminal R' peak is visible in lead V1 and a notched S wave occurs in leads I, aVL and V6 [5]. Thus, the 7 features that are relevant for diagnosis of right bundle branch block are: * QRS duration * R amplitude in lead V1 * R' amplitude in lead V1 * S amplitude in leads I, aVL, V1 and V6 We call these 7 features the important features for RBBB. We checked for features that correlate (with absolute Pearson correlation coefficient \(\geq 0.7\)) with one of these 7 features. There were 3 such features, not including the important features above, which are given in Table I. #### Ii-C3 Left Bundle Branch Block Analogously to right bundle branch block described above, complete left bundle branch block describes the condition of a blockage in the electrical conduction in the left bundle branch. As the left bundle branches into an anterior and a posterior fascicle, the term complete left bundle branch block refers to a conduction block before the bifurcation. In the ECG, the delayed activation of the left ventricle reflects in a widened QRS complex of \(>\)120 ms, deep Q waves in lead V1 and a notched or monophasic QRS morphology in the lateral leads I, aVL, V5 and V6 [5]. Thus, there are 14 features that are involved in the diagnosis of left bundle branch block: * QRS duration * Q amplitude in lead V1 * R amplitude in leads I, aVL, V5 and V6 * R' amplitude in leads I, aVL, V5 and V6 * S amplitude in leads I, aVL, V5 and V6 We call these 14 features the important features for LBBB. We checked for features that correlate (with absolute Pearson correlation coefficient \(\geq 0.7\)) with one of these 14 features, excluding the important features listed above. There were 28 such features that are given in Table II. ### _Feature Importance Algorithms_ We can broadly categorize the feature importance algorithms investigated in this work as model-dependent and model-independent methods. #### Iii-D1 Model-dependent feature importance methods * **Random forests, Boosted decision trees, Logistic regression and Deep neural networks with permutation/SHAP/LIME feature importance.** In terms of models, we consider Random forests, Boosted decision trees, Logistic regression and Deep neural networks. The hyperparameters used are summarized in Table III. The training data consisted of records from the PTB-XL stratified folds 1-9 and the test data were records from fold 10 [6]. These models were then combined with established attribution methods LIME [11], SHAP [12] and permutation feature importance [13]. LIME involves training an interpretable, local surrogate model to approximate the model behaviour near the sample of interest. SHAP is an efficient implementation of the game-theoretic Shapley value approach. LIME and SHAP are local attribution methods which return attribution scores per sample, and we therefore ranked the features on the mean of the absolute attribution values across the test set. As a third class of feature importance algorithms, we considered permutation feature importance, a global attribution method which quantifies feature importance via the decrease in model performance upon replacing a feature column of interest by a permuted copy of itself. * **Random forests.** For a random forest model, the importance of features can be determined by how much they decrease Gini impurity when averaged over all the trees in the forest. It is known that these feature importance values can be misleading for high cardinality features. However, permutation feature importance (see below) can mitigate this to some extent [14]. * **Logistic regression.** The importance of features in a logistic regression model can be determined by the exponential of the weight associated with each feature [3]. * **Gaussian processes.** In Gaussian Process binary classification, the probability of class membership conditioned on an observed feature vector \(x\) is modelled as \(\sigma(f)\) where \(\sigma\) is a sigmoid function, such as the logistic function, and a Gaussian Process model \(\text{GP}(0,k(x,x^{\prime}))\) is used as a prior distribution for the latent variable \(f\)[15]. Using a squared exponential covariance kernel for \(k(x,x^{\prime})\) with diagonal covariance matrix, each feature \(x_{i}\) is associated with its own length-scale parameter \(l_{i}\). A small value for \(l_{i}\) implies the feature varies over short-length scales and so is important for the classification. Consequently, sorting the length-scale parameters provides a ranking of the features. #### Iii-D2 Model-independent feature importance methods In addition to model-dependent methods we also include methods that solely rely on the data distribution without making use of a trained predictor on the dataset. In the feature selection literature [16, 17, 18], these methods are often referred to as filter methods. More specifically, we consider the following methods: * **Chi-square test.** The Chi-square test is a statistical hypothesis test that is valid to perform when the test statistic is chi-squared distributed under the null hypothesis. Each feature is tested individually for independence of the response. A small \(p\)-value is associated with a feature that has dependence on the response, and so is important. Thus, features are ranked by \(-\log(p_{i})\), where \(i\) is the index of the features [19]. * Minimum Redundancy (MRMRR).** The MRMR method reduces redundant features while keeping the relevant features for the model, where redundancy and relevance are quantified in terms of mutual information. It is known that many essential features are correlated and redundant and so the MRMR method selects features taking into account the relevance for predicting the outcome variable and the redundancy within the selected features [20, 21]. * **Neighbourhood Component Analysis (NCA).** The NCA method selects features by maximizing the prediction accuracy of classification algorithms. The concept of this method is similar to the k-nearest neighbours classification method, only in the NCA method, the reference point is selected randomly not to be the nearest neighbour for the new point [22]. * **ReliefF**. ReliefF calculates a feature score for each feature depending on feature value differences for neighbours which have the same or a different class, which can then be used to rank the features. The ReliefF method estimates the attribute qualities based on how well they can distinguish between instances near them. This method was initially designed to apply to binary classification problems with discrete or numerical features [23]. * }ROC\ AUC})\) This ensures that all values are in the range 0.5 to 1. A feature ranking can be generated for a binary classification problem by generating a distribution for each of the two classes for each feature individually and then finding the Modified ROC AUC for all of these distributions. The features are then ranked by their Modified ROC AUC value from highest to lowest, which ranks the features according to their ability to discriminate the two classes individually. We also note that the \(\mathrm{ROC\ AUC}\) values aid with interpretation of the features, as \(\mathrm{ROC\ AUC}>0.5\) implies that the feature increases due to the pathology whereas \(\mathrm{ROC\ AUC}<0.5\) means that the feature decreases due to the pathology. ### _Scoring algorithm_ When comparing a feature ranking generated by one of the algorithms with the important feature set for diagnosis of a specific pathology, we define a score which enables a simple comparison between different methods, assuming that all features in the set of important features for diagnosis have equal importance. As a first step, we choose a value of \(n\), which is the number of top features in each ranking that will be considered. We then take a weighted average based on the ranking of each of the top \(n\) features that is contained in the important set, where the first feature has a weighting of \(n\), the second a weighting of \(n-1\) and so on, so that the \(n^{\mathrm{th}}\) feature has a weighting of 1. This weighted average is then normalised to give a score between 0 and 100 (by dividing by \(n(n+1)/2/100\)), which we round to the nearest integer. With this scoring system, an important feature in position 1 of the ranking contributes \(200/(n+1)\) to the score, whereas an important feature in position \(n\) only contributes \(200/(n(n+1))\) to the score. For example, taking \(n=5\) and assuming that a ranking has the first, second and fourth features in the important set gives a score of \((5+4+2)/15\times 100\approx 73\). We also consider the ranking of features that are least able to discriminate between the two classes, which can be defined by the features with the lowest modified ROC AUC values. In particular, we consider the two features with the lowest modified ROC AUC values which, for the three pathologies, are as follows: * \(1^{\mathrm{st}}\) degree AV block: S amplitude, lead I (modified ROC AUC=0.5006); S amplitude, lead V2 (modified ROC AUC=0.5006) * RBBB: R' amplitude, lead V6 (modified ROC AUC=0.5028); R' amplitude, lead I (modified ROC AUC=0.5037) * LBBB: R amplitude, lead I (modified ROC AUC=0.5002); R' amplitude, lead V6 (modified ROC AUC=0.5009) We refer to these as the _non-discriminating features_. ## III Results We consider results of the feature importance ranking algorithms applied to the feature table for each pathology in turn. The model-dependent methods first require training of a machine learning model for the binary classification problem. The accuracy of the five machine learning models for each pathology on the test data are shown in Table IV. Clearly, these are all very high. ### _Artioventricular Block_ First degree AV block is defined by the PR interval being greater than 200 ms [1]. Thus, there is a single important parameter for diagnosis in this case, namely the PR interval. We therefore expect this feature to occur high up in the rankings. For the data we are using, the distributions for the PR interval for the records labelled as Normal and 1st degree AV block are shown in Fig. 1. Clearly, not all of the 1st degree AV block records satisfy the diagnostic criterion of exceeding 200 ms. In fact, the PR interval for 236 out of 796 records labelled as 1st degree AV block does not exceed 200 ms, with the smallest value being 26 ms (which is non-physiological). Conversely, there are 23 out of 796 records labelled as Normal that have PR interval exceeding 200 ms, with the largest value being 242 ms. Presumably in both cases this is because the Glasgow algorithm identifies the PR interval as shorter/longer than that identified by the cardiologists who labelled the signals. As an aside, if we tried to classify 1st degree AV block using the Glasgow computed PR intervals, then the Modified ROC AUC for this classification is 0.9384 and the optimal threshold for diagnosis is 184 ms, which is considerably lower than the conventional 200 ms threshold. With this threshold, the accuracy of the classification is 88.13%. Presumably this reduced threshold is as a result of the difference between the PR interval lengths determined by the cardiologists and the Glasgow software. The ranking of the PR interval by each algorithm is shown in Table V. These results show that almost all the algorithms we considered ranked the PR interval as the most important feature, although Gaussian processes ranked it second (top feature: T morphology in lead V3) and Deep networks (LIME) ranked it third (top feature: R' amplitude in lead V6). We also considered the ranking for each method of the non-discriminating features, which have both got Modified ROC AUC values very close to 0.5. These are shown in the final column of Table V. We note that they are by definition the last two features in the Modified ROC AUC ranking. It is surprising that one of the two non-discriminating features is ranked as 2 for Random forest (permutation). The rankings of 25 and 15 for XGB (permutation) is also relatively high. On the other hand, XGB (LIME), Logistic regression (SHAP), Logistic regression and NCA ranked at least one of the two non-discriminating features as more than 100. We then found the top 5 features for each of the methods to see if there is any commonality between them. The frequency of features in the top 5 is shown in Table VI which, as expecte, includes the PR interval as the most common. Two methods had their top 5 features matching those in Table VI which were Random forest and Random forest (SHAP), while Random forest (LIME), XGB (SHAP) and Chi-square test all had 4 out of these 5 in their top 5. On the other hand, Random forest (permutation), Logistic regression (LIME and SHAP), Logistic regression, Deep networks (LIME), Gaussian processes and MRMR only had the PR interval of those listed in Table VI in their top 5 features. The ROC AUC values in Table VI indicate the direction of change of a feature with the pathology as described in Section II-D. Clearly, in this case, the PR interval increases with 1st degree AV block, which is consistent with the cardiologists' diagnosis. The QRS duration generally increases with 1st degree AV Fig. 1: Histogram of the PR interval for the records labelled as healthy and 1st degree AV block. The red line is at 200 ms, which is the threshold for diagnosis of 1st degree AV block. block. The mean QRS duration for normal subjects is 92 ms which increases to 113 ms for 1st degree AV block subjects. This is consistent with evidence of conduction slowing distal to the AV node in patients with known 1st degree AV block. The T+ amplitude in leads I and V6 decreases on average in patients with 1st degree AV block according to these results. The physiological cause for these decreases is not clear. Finally, the T morphology measure in lead I decreases with 1st degree AV block, but this is an integer value representing different cases. Analysis of this feature shows that 99% of the values for the Normal category are +1, indicating a single upright T wave. However, for the 1st degree AV block records, only 52% have a value of +1, with almost all the others having a value of either \(-1\) or \(-2\) in equal proportions. Thus, it seems that in approximately half the cases of 1st degree AV block, the T wave is inverted or biphasic with negative leading component. A possible explanation for this is that for 1st degree AV block subjects, the PR interval is longer resulting in a longer diastolic interval. If the action potential duration increases more in some regions than others for longer diastolic intervals (restitution), this could cause morphology changes in the T wave. ### _Right Bundle Branch Block_ For BBB, there are 7 important features and a further 3 features that correlate with at least one of these, as listed in Table I. Using the scoring algorithm described in Section II-E we found the score for each method using the top 5 features of each ranking only. In Table VII, scores for each method comparing the top 5 features with both the important features and the important and correlating features are given. The best performing method is Logistic regression, while Random forest (SHAP), Random forest, Logistic regression (SHAP and LIME), Deep networks (SHAP), Chi-square test and Modified ROC AUC all have scores over 70. Only Deep networks (permutation and LIME) have an increased score when including the correlating features, but in both cases the score for the important features only is zero. The worst performing methods are Random forest (permutation), Deep networks (permutation and LIME), MRMR and NCA. We also note that SHAP outperformed LIME for each of the four methods. We then considered the ranking for each method of the non-discriminating features which are shown in the final column of Table V. We note that these feature rankings are very low for Random Forest (SHAP), Random forest, XGB (SHAP), Logistic regression (SHAP), Deep networks (SHAP), Chi-square test and ReliefF so all the SHAP methods do very well. However, these feature rankings are high for Random forest (LIME), XGB (LIME), Logistic regression (LIME), Deep networks (LIME), Gaussian processes and MRMR so all the LIME methods perform poorly for these features. We again considered the top 5 features for each method with the 6 most frequent shown in Table VIII. We note that 4 of these are important features, but that ST slope in lead V1 and S amplitude in lead V2 are not, but both have very high Modified ROC AUC values and are positioned as second and sixth respectively in the ROC AUC ranking. Clearly leads V1 and V2 are very close on the body, so it is perhaps not surprising that the S amplitude in lead V2 has significance as well as the important feature of the S amplitude in lead V1. The correlation coefficient between the two is reasonably high at 0.6707. The ROC AUC value for the S amplitude in lead V1 indicates that it increases with RBBB, resulting in a shallower S wave (since the S wave amplitudes are negative) and so it is also not surprising that the ST slope in lead V1 decreases with RBBB, again as shown by the ROC AUC value. The correlation coefficient between the two of -0.6536 is again reasonably high in magnitude. All these features have a very high Modified ROC AUC value, which indicates good separation of the two distributions for these features, except for R' amplitude in lead V1. Two methods had all of their top 5 features in Table VIII, namely Random forest (SHAP) and Deep networks (SHAP) while Random forest (LIME), Logistic regression (SHAP), Logistic regression (permutation), Chi-square test, ReliefF and Modified ROC AUC all had 4 of their top 5 features in Table VIII. The worst performing methods were Deep networks (LIME) and Deep networks (permutation) which had no features in Table VIII in their top 5, and NCA and MRMR which both had one feature from Table VIII in their top 5, which was QRS duration in both cases. The ROC AUC values in Table VIII show that QRS duration increases with RBBB, which is consistent with one of the diagnosis conditions that the width of the QRS complex should be \(>\)120 ms. The S amplitude in leads V1 and V2 increases with RBBB, resulting in shallower S waves since the S amplitude is negative, while the S amplitude in lead I decreases with RBBB, resulting in a deeper S wave. The R' amplitude in lead V1 increases with RBBB. Finally, the ST slope in lead V1 decreases with RBBB. ### _Left Bundle Branch Block_ For LBBB, there are 14 important features and an additional 28 correlating features, as listed in Table II. The scoring algorithm described in Section II-E gives the scores as shown in Table IX, again using only the top 5 features. The scores for the important features only are generally quite low. However, when the correlating features are included, most methods show a significant improvement, which is not surprising as there are 28 additional correlating features, although much of the improvement in scores is due to the three T morphology features (see Table X). Using only the important features, the best performing method is Gaussian processes, while Logistic regression (permutation) and Deep networks (permutation) both have a score of 0. When the correlating features are included, a perfect score of 100 is obtained by Random forest (SHAP), Random forest, Deep networks (SHAP) and Chi-square test. Logistic regression (permutation) still has a zero score while Deep networks (permutation) has an increased, but still poor, score of 33. The rankings of the non-discriminating features were generally low, with Chi-square test and MRMR performing particularly well. However, the various methods combined with LIME gave quite high rankings for one of these features which had rank 1, 2, 7 and 22 for Deep networks, Logistic regression, XGB and Random forest respectively, which is very poor. In contrast, the SHAP methods all ranked this feature greater than 110, except for XGB (SHAP) which ranked it as 92, so these methods all performed well. The frequency of features in the top 5 for all methods is shown in Table X. We note that four of these are correlating features, which explains the big increase in scores when the correlating features are included. Again, all of these 5 features have a very high Modified ROC AUC value, indicating good separation of the two distributions for these features. We note that the four methods that did not have QRS duration in their top 5 features were Logistic regression (permutation and LIME) and Deep networks (permutation and LIME). No method had all the top 5 features matching those in Table X but Deep networks (SHAP), Chi-square test and ReliefF both had 4 out of their top 5 that matched with Table X. On the other hand, Logistic regression (permutation) and Deep networks (permutation and LIME) had none of the features in Table X in their top 5. The ROC AUC values show that the QRS duration increases with LBBB, which is consistent with the condition that the width of the QRS complex should be \(>\)120 ms. The diagnosis of LBBB involves only changes in the QRS complex but the three T morphology features in Table X are not associated with the QRS complex. However, we have already noted they correlate strongly with the QRS duration. The T morphology features for leads I and V6 decrease with LBBB. Analysis of these features shows that 99% of the values for the Normal class are +1 for both morphology features. For the LBBB records, 72% are \(-1\) and 24% are \(-2\) for the T morphology in lead I, and 69% are \(-1\) and 24% are \(-2\) for the T morphology in lead V6, both of which represent a significant shift from a single upright wave to either a single inverted wave or a biphasic wave with leading negative component. The T morphology in lead aVR increases with LBBB. In this case, 99% of the values for the Normal class are \(-1\) while for the LBBB records, 50% are +1 and 37% are +2. So almost all of the Normal class have a single inverted T wave which changes to either a single upright wave or a biphasic T wave with leading positive component. The R amplitude in lead V4 is not an important feature for the diagnosis of LBBB, but this amplitude in leads V5 and V6 are important features. As lead V4 is very close to lead V5, it is not too surprising that this feature is common in the top 5 features for some methods. Interestingly, the R amplitude in leads V5 and V6 are not in the top 5 features for any method, so lead V4 seems to be more important than leads V5 and V6. ## IV Comparison with the multiclass case We have considered feature importance ranking in the context of a binary classification of normal vs. a single pathology for three different pathologies, namely 1st degree AV block, RBBB and LBBB. This is the simplest possible case, but is not very realistic since cardiologists have to positively diagnose one (or more) conditions from a long list of possible conditions. It is also conceivable that a simple binary classification of normal vs. a specific pathology could be achieved with high accuracy using only a subset of the complete list of diagnostic conditions. Thus, as a next step, we considered feature importance ranking for a multiclass classification involving normal, 1st degree AV block, RBBB and LBBB records in [4]. The feature importance rankings were found for the one vs. all binary classifications as the aim is to positively diagnose one condition (since the data were single label) which implies a negative classification for the other classes. The accuracies of the models were not reported in [4] but all four methods had an accuracy exceeding 95% for the multiclass classification. Also, the results for the model dependent methods are not directly comparable since the data were not normalised in [4] as they were in this study. In particular, the poor performance of Deep networks for the ranking of the PR interval for the 1st degree AV block case is almost certainly due to this lack of normalisation. We now compare the feature rankings of the binary and multiclass cases. ### _First Degree AV Block_ The ranking of the PR interval was very similar in the binary and multiclass cases. In the binary case, all methods ranked the PR interval as most important except for Deep networks (LIME) and Gaussian processes, which ranked it as third and second respectively. In the multiclass case, the PR interval was not the top feature for Logistic regression (SHAP and LIME), Deep networks (SHAP and LIME) and Gaussian processes. The poor results for Logistic regression and Deep networks are probably due to the fact that the data were not normalised. We note that although Gaussian processes ranked the PR interval as second in both cases, the top feature differs. For the binary case, the top feature was T morphology in lead V3 while in the multiclass case, the top feature was QRS duration. The most common features in the top 5 had three features in common, namely the PR interval, QRS duration and T+ amplitude in lead I. The other features listed in Table VI are the T+ amplitude in lead V6 and T morphology in lead I whereas the other features for the multiclass case were the ST slope in leads I and V1 which are quite different features for the two cases. ### _RBBB_ We first note that the correlating features for RBBB were different for the binary and multiclass cases, with 3 correlating features in the binary case (which are listed in Table I) and 5 correlating features for the multiclass case. The scores for the important and correlating features for the multiclass case are greater than the corresponding scores for the binary case for many methods, although a notable exception is Logistic regression (SHAP and LIME) which both had a score of zero in the multiclass case and the best score in the binary case! We also note that in the multiclass case, the scores for the important and correlating features were 100 for four methods, namely Random forest, Random forest (permutation) and XGB (SHAP and LIME). The scores for MRMR and NCA were very low for the binary case, but improved significantly for the multiclass case, for which they had the second best score (important features only). In this case, the 5 most common features in the top 5 in the multiclass case are all included in the 6 most common features in the top 5 for the binary case, but also include the S amplitude in lead V1 as the extra feature. ### _LBBB_ In this case, there are 28 correlating features in the binary case (which are given in Table II) but only 17 correlating features for the multiclass case. The scores for the important features only and for the important plus correlating features for the multiclass case were almost all less than the corresponding scores for the binary case. The most common features in the top 5 only had no features in common in this case. The multiclass case includes the ST slope in three leads whereas the binary case includes the T moprhology in three leads. ## V Discussion The results of the different feature ranking algorithms for the three pathologies that we have considered have some inconsistencies, although some general trends can be observed. For 1st degree AV block, all methods ranked the one important feature first, except for Deep networks (LIME) and Gaussian processes, which ranked it as third and second respectively. For RBBB, Logistic regression had the highest scores but scored quite poorly for LBBB. For LBBB, a score of 100 when including correlating features was obtained by Random forest (SHAP), Random forest, Deep networks (SHAP) and Chi-square test. Also, NCA scored very poorly for RBBB but did quite well for LBBB. Conversely, ReliefF performed poorly for LBBB (important features only) but had reasonable performance for RBBB. But MRMR performed poorly for both RBBB and LBBB. If the scores for RBBB and LBBB are added together, then for the important features only, Logistic regression has the highest score, closely followed by Gaussian processes, Random forest and Logistic regression (SHAP). At the other end, Deep networks (permutation) has a combined score of zero, while Deep networks (LIME) has the lowest non-zero combined score. Adding the scores for RBBB and LBBB for the important and correlating features, then the top score is obtained by Random forest, with Random forest (SHAP), Deep networks (SHAP) and Chi-square test all tied in second place, while the lowest combined score was obtained for Logistic regression (permutation) together with Deep networks (permutation). When comparing the various methods combined with SHAP, LIME and permutation options, the permutation variations were consistently the worst, followed by LIME, with the best results obtained by SHAP. However, Random forest results were always the same as or better than Random forest (SHAP) and Logistic regression results were the same as or better than Logistic regression (SHAP) except for LBBB including correlating features. So the native feature importance rankings for Random forest and Logistic regression seem to do well without the addition of other methods on top. All of the SHAP methods together with Chi-square test and Random forest all ranked the non-discriminating features quite far down the rankings for RBBB and LBBB but the LIME methods all put the non-discriminating features quite high up in the rankings. MRMR ranking of the non-discriminating features was particular good for LBBB but was particularly bad for RBBB, and so there is inconsistency here. For 1st degree AV block, which is diagnosed using the single feature of PR interval, other commonly highly ranked features include QRS duration, as well as T+ amplitude and T morphology in lead I. For RBBB, two unimportant features were commonly highly ranked namely ST slope in lead V1 and S amplitude in lead V2. It is interesting to note that both of these were correlating features in the multiclass case. These results also suggest that there are significant changes in the S amplitude in lead V2, as well as in V1, that may be considered. The most surprising results from this work concern LBBB, where the most common features in the top 5 include three T morphology features, in leads I, aVR and V6, whereas LBBB is diagnosed using only features related to the QRS complex (although all three of these features correlate strongly with QRS duration). QRS morphology in leads I and V6 is used in the diagnosis of LBBB, so it is likely that these changes also result in the leading component of the T wave of leads I and V6 changing from positive to negative for 96% and 93% of the LBBB records respectively. But lead aVR is not used in the diagnosis of LBBB at all, but is in the top 5 features for 4 of the methods and here the leading component of the T wave changes from negative to positive for 87% of the LBBB records. Similarly, lead V4 is not used to diagnose LBBB, but the R amplitude in lead V4 also occurs in the top 5 features for 4 of the methods. ## VI Conclusion In this comparison of feature ranking algorithms with the expert knowledge of cardiologists for three different pathologies, we have shown that, generally speaking, the SHAP methods all give good agreement with the important features used by cardiologists, together with the native Random forest and Logistic regression feature rankings. For the model independent methods, Chi-square test generally performed well. Some methods gave inconsistent results, including MRMR and NCA. The permutation methods generally performed quite poorly. It is interesting that the top ranked features for many methods include some unimportant or correlating features rather than important features only. The code for obtaining the feature importance rankings described in this work is available at [https://github.com/tmehari/feature_importance](https://github.com/tmehari/feature_importance).
2307.00618
Bounce: Reliable High-Dimensional Bayesian Optimization for Combinatorial and Mixed Spaces
Impactful applications such as materials discovery, hardware design, neural architecture search, or portfolio optimization require optimizing high-dimensional black-box functions with mixed and combinatorial input spaces. While Bayesian optimization has recently made significant progress in solving such problems, an in-depth analysis reveals that the current state-of-the-art methods are not reliable. Their performances degrade substantially when the unknown optima of the function do not have a certain structure. To fill the need for a reliable algorithm for combinatorial and mixed spaces, this paper proposes Bounce that relies on a novel map of various variable types into nested embeddings of increasing dimensionality. Comprehensive experiments show that Bounce reliably achieves and often even improves upon state-of-the-art performance on a variety of high-dimensional problems.
Leonard Papenmeier, Luigi Nardi, Matthias Poloczek
2023-07-02T17:18:17Z
http://arxiv.org/abs/2307.00618v2
# Bounce: a Reliable Bayesian Optimization Algorithm for Combinatorial and Mixed Spaces ###### Abstract Impactful applications such as materials discovery, hardware design, neural architecture search, or portfolio optimization require optimizing high-dimensional black-box functions with mixed and combinatorial input spaces. While Bayesian optimization has recently made significant progress in solving such problems, an in-depth analysis reveals that the current state-of-the-art methods are not reliable. Their performances degrade substantially when the unknown optima of the function do not have a certain structure. To fill the need for a reliable algorithm for combinatorial and mixed spaces, this paper proposes Bounce that relies on a novel map of various variable types into nested embeddings of increasing dimensionality. Comprehensive experiments show that Bounce reliably achieves and often even improves upon state-of-the-art performance on a variety of high-dimensional problems. ## 1 Introduction Bayesian optimization (BO) has become a 'go-to' method for optimizing expensive-to-evaluate black-box functions [78] that have numerous important applications, including hyperparameter optimization for machine learning models [9; 27], portfolio optimization in finance [7], chemical engineering and materials discovery [14; 26; 32; 35; 36; 38; 64; 67], hardware design [22; 34; 49], or scheduling problems [39]. These problems are challenging for a variety of reasons. Most importantly, they may expose hundreds of tunable parameters that allow for granular optimization of the underlying design but also lead to high-dimensional optimization tasks and the 'curses of dimensionality' [10; 59]. Typical examples are drug design [51; 72] and combinatorial testing [53]. Moreover, real-world applications often have categorical or ordinal tunable parameters, in addition to the bounded real-valued parameters that BO has traditionally focused on [10; 27; 66]. Recent efforts have thus extended BO to combinatorial and mixed spaces. Casnopolitan of Wan et al. [75] uses trust regions (TRs) to accommodate high dimensionality, building upon prior work of Eriksson et al. [25] for continuous spaces. COMB0 of Oh et al. [54] constructs a surrogate model based on a combinatorial graph representation of the function. Recently, Deshwal et al. [21] presented BODi that employs a novel type of dictionary-based embedding and showed that it outperforms the prior work. However, the causes for BODi's excellent performance are not yet well-understood and require a closer examination. Moreover, the ability of methods for mixed spaces to scale to higher dimensionalities trails behind BO for continuous domains. Recently, nested embeddings [57] have been shown to handle a thousand input dimensions, thus outperforming vanilla TR-based approaches and raising the question of whether similar performance gains are feasible for combinatorial domains. In this work, we assess and improve upon the state-of-the-art in combinatorial BO. In particular, we make the following contributions: 1. We conduct an in-depth analysis of two state-of-the-art algorithms for combinatorial BO, COMBO[54] and BODi[21]. The analysis reveals that their performances often degrade considerably when the optima of the optimization problem do not exhibit a particular structure that is common for synthetic test problems. 2. We propose (**B**ayesian **O**ptimization **U**sing i**N**creasingly high-dimensional **C**ombinatorial and continuous **E**mbeddings), a novel high-dimensional Bayesian optimization (HDBO) method that effectively optimizes over combinatorial, continuous, and mixed spaces. **B**ounce leverages parallel function evaluations efficiently and uses nested random embeddings to scale to high-dimensional problems. 3. We provide a comprehensive evaluation on a representative collection of combinatorial, continuous, and mixed-space benchmarks, which demonstrates that **B**ounce is on par with, or outperforms state-of-the-art methods. ## 2 Background and related work Bayesian optimization.Bayesian optimization (BO) aims to find the global optimum \(\mathbf{x}^{*}\in\mathcal{X}\) of a black-box function \(f:\mathcal{X}\rightarrow\mathbb{R}\), where \(\mathcal{X}\) is the \(D\)-dimensional search space or _input space_. Throughout this paper, we consider minimization problems, i.e., we aim to find \(\mathbf{x}^{*}\in\mathcal{X}\) such that \(f(\mathbf{x}^{*})\leq f(\mathbf{x})\) for all \(\mathbf{x}\in\mathcal{X}\). The search space \(\mathcal{X}\) may contain variables of different types: continuous, categorical, and ordinal. We denote the number of continuous variables in \(\mathcal{X}\) by \(n_{\text{cont}}\) and the number of combinatorial variables by \(n_{\text{comb}}=n_{\text{cat}}+n_{\text{ord}}=D-n_{\text{cont}}\), where we denote the number of categorical variables by \(n_{\text{cat}}\) and the number of ordinal variables by \(n_{\text{ord}}\). Combinatorial domains.Extending BO to combinatorial spaces is challenging, for example, because the acquisition function is only defined at discrete locations or the dimensionality of the space grows drastically when using one-hot encoding for categorical variables. Due to its numerous applications, combinatorial BO has received increased attention in recent years. BOCS[6] handles the exponential explosion of combinations by only modeling lower-order interactions of combinatorial variables and imposing a sparse prior on the interaction terms. COMBO[54] models each variable as a graph and uses the graph-Cartesian product to represent the search space. Deshwal et al.[19] build up on COMBO[54] by establishing a closed-form expression for the diffusion kernel and proposing kernels tailored for mixed spaces. We revisit COMBO's performance on categorical problems in Appendix D.1. CoCaBo[63] combines multi-armed bandits and BO to allow for optimization in mixed spaces. It uses two separate kernels for continuous and combinatorial variables and proposes a weighted average of a product and a sum kernel to model mixed spaces. As a general method to optimize the acquisition function with gradient-based methods, Daulton et al.[18] recently proposed probabilistic reparametrization. High-dimensional continuous spaces.Subspace-based methods have primarily been used for continuous spaces. Wang et al.[79] proposed REMBO for HDBO in continuous spaces using Gaussian random projection matrices. REMBO suffers from distortions and projections outside the search domains that the corrections of Binois et al.[11; 12] address. The HeSBO algorithm of Nayebi et al.[50] avoids the need for corrections by using the CountSketch embedding[82]. Alebo of Letham et al.[43] builds upon REMBO, learning suitable corrections of distortions. TuBBO[25] is a method that operates in the full-dimensional input space \(\mathcal{X}\), relying on trust region (TR) to focus the search on promising regions of the search space. BAxUS of Papenmeier et al.[57] combines the trust region approach of TuBBO with the random subspace idea of HeSBO. BAxUS uses a novel family of random nested subspaces that is shown to exhibit better theoretical guarantees than the CountSketch embedding. While BAxUS handled a \(1000D\) problem, it only considers continuous problems and cannot leverage parallel function evaluations. Another line of recent approaches employs Monte-Carlo Tree Search (MCTS) to reduce the complexity of the problem. Wang et al.[77] use MCTS to learn a partitioning of the continuous search space to focus the search on promising regions in the search space. Song et al.[70] use a similar approach but instead of learning promising regions in the search space, they assume an axis-aligned active subspace and use MCTS to select important variables. Linear embeddings and random linear embeddings [13; 43; 50; 57; 79] require little or no training data to construct the embedding but assume a linear subspace. Non-linear embeddings allow to learn more complex embeddings but often require more training data. Lu et al.[45] and Maus et al.[47] use variational autoencoders (VAEs) to learn a non-linear embedding of highly-structured input spaces. Tripp et al. [73] also use a VAE to learn a non-linear subspace of a combinatorial search space. By using a re-weighting scheme that puts more emphasis on promising points in the search space, they tailor the embedding toward optimization problems. Combinatorial high-dimensional domains.Combining combinatorial and high-dimensional BO allows targeting many practical applications where the black-box problem is defined over combinatorial or mixed space of high-dimensionality. Casnopolitan[75] follows TuRBO in using TRs to focus the search on promising regions of the search space. For combinatorial variables, TRs are modeled in terms of the Hamming distance. For mixed spaces, Casnopolitan uses interleaved search and models continuous and categorical variables with two separate TRs. Kim et al. [41] use a random projection matrix to optimize combinatorial problems in a continuous embedded subspace. When evaluating a point, their approach first projects the continuous candidate point to the high-dimensional search space and then rounds to the next feasible combinatorial solution. Deshwal et al. [20] propose two algorithms for _permutation spaces_ which occur in problems such as compiler optimization [34] and pose a special challenge due to the superexponential explosion of solutions. BOD1[21] proposes a novel type of embedding based on a dictionary of reference points in the search space. The representation of a new point \(\mathbf{z}\) is obtained by computing the Hamming distance between \(\mathbf{z}\) and each reference point \(\mathbf{a}_{i}\) in the dictionary. The reference points in the dictionary change at each iteration of the algorithm. Reference points are sampled from the search space to cover a wide range of'sequencies', i.e., the number of changes from \(0\) to \(1\) (and vice versa) in the binary vector. The authors assume that the diverse random sampling procedure leads to BODi's remarkable performance in combinatorial spaces with up to \(60\) dimensions. In Section 4.6, we show that BODi's performance relies on an artificial structure of the optimizer \(\mathbf{x}^{*}\) and that its performance degrades considerably when this structure is violated. ## 3 The Bounce algorithm To overcome the aforementioned challenges in HDBO for real-world applications, we propose Bounce, a new algorithm for continuous, combinatorial, and mixed spaces. Bounce uses a Gaussian process (GP) [80] surrogate in a lower-dimensional subspace, the _target space_, that is realized by partitioning input variables into 'bins', the so-called _target dimensions_. Bounce only bins variables of the same type (categorical, binary, ordinal, and continuous). When selecting new points to evaluate, Bounce sets all input variables within the same bin to a single value. It thus operates in a subspace of lower dimensionality than the input space. In particular, it maximizes the acquisition function in a low-dimensional subspace. During the optimization, Bounce refines its subspace embedding by splitting up bins into smaller bins, allowing for a more granular optimization at the expense of higher dimensionality. Note that by splitting up bins, Bounce asserts that observations taken in earlier subspaces are contained in the current subspace; see Papenneier et al. [57] for details. Thus, Bounce operates in a series of nested subspaces. It further uses a novel TR management that allows it to leverage batch parallelism efficiently, improving over the single point acquisition of BAxUS[57]. To model the GP [80] in low-dimensional subspaces, Bounce leverages BAxUS' family of nested random embeddings [57]. In particular, Bounce employs the sparse count-sketch embedding [82] in which each input dimension is assigned to exactly one target dimension. When increasing the target dimensionality, Bounce creates \(b\) new bins for every existing bin and re-distributes the input dimensions that had previously been assigned to that bin across the now \(b+1\) bins. Bounce allocates an individual evaluation budget \(m_{i}\) to the current target space \(\mathcal{X}_{i}\) that is proportional to the dimensionality of \(\mathcal{X}_{i}\). When the budget for the current target space is depleted, Bounce will increase the dimension of the target space until it reaches the input space of dimensionality \(D\). Let \(d_{0}\) denote the dimensionality of the first target space, i.e., the random embedding that Bounce starts with. Then Bounce has to increase the target dimension \(\left\lceil\log_{b+1}D/d_{0}\right\rceil=:k\)-times to reach the input dimensionality \(D\). After calculating \(k\), Bounce re-sets the split factor \(b\) such that the distance between the predicted final target dimensionality \(d_{k}=d_{0}\cdot(b+1)^{k}\) and the input dimensionality \(D\) is minimized: \(b=\left\lfloor\log_{k}(D/d_{0})-1\right\rceil\) where \(\lfloor x\rfloor\) denotes the closest integer to \(x\). This ensures that the predetermined evaluation budget for each subspace will be approximately proportional to its dimensionality. This is in contrast to BAxUS[57] that uses a constant split factor \(b\) and adjusts the initial target dimensionality \(d_{0}\). The evaluation budget \(m_{i}\) for the \(i\)-th subspace \(\mathcal{X}_{i}\) is \(m_{i}:=\left\lfloor\frac{b\cdot m_{D}\cdot d_{i}}{d_{0}\cdot(1-(b+1)^{k+1})}\right\rfloor\), where \(m_{D}\) is the budget until \(D\) is reached and \(b\) is the maximum number of bins added per split. Bounce follows TuRBO[25] and Casmopolitan[75] in using trust regions (TRs) to efficiently optimize over target spaces of high dimensionality. TRs allow focusing on promising regions of the search space by restricting the next points to evaluate to a region centered on the current best function value [25]. TR-based methods usually expand their TR if they find better points and conversely shrink it if they fail to make progress. If the TR falls below the threshold given by the _base length_, the methods restart with a fresh TR elsewhere. Casmopolitan[75] uses different base lengths for combinatorial and continuous variables. For combinatorial variables, the distance to the currently best function value is defined in terms of the Hamming distance, and the base length is an integer. For continuous variables, Casmopolitan defines the base length in terms of the \(l_{2}\) distance, i.e., a real number. Following Casmopolitan, Bounce has separate base lengths \(L_{\min}^{\mathrm{cont}}\) and \(L_{\min}^{\mathrm{comb}}\) for continuous and combinatorial variables but does not fix the TR factor by which TRs are increased or decreased upon successes or failures. Instead, the TR factor is adjusted dynamically so that the evaluation budget \(m_{i}\) for the current target space \(\mathcal{X}_{i}\) is adhered to. In Section 3.3, we show that this design is crucial to enable batch parallelism. To harvest the sample efficiency of a low-dimensional target space, we would like to combine categorical variables into a single bin, even if they vary in the number of categories. This is not straightforward. For example, note that the popular one-hot encoding of categorical variables would give rise to multiple binary input dimensions, which would not be compatible with the above strategy of binning variables to form nested subspaces. Bounce overcomes these obstacles and allows variables of the same type to share a representation in the target space. We provide the details in Sect. 3.1. For the GP model, we use the CoCaBo kernel[63]. In particular, we model the continuous and combinatorial variables with two separate Matern\(-\nicefrac{{5}}{{2}}\) kernels where we use automatic relevance determination (ARD) for the continuous variables and share one length scale for all combinatorial variables. Following Ru et al. [63], we use a mixture of the sum and the product kernel: \[k(\mathbf{x},\mathbf{x}^{\prime})=\lambda k_{\mathrm{cmb}}(\mathbf{x}_{\mathrm{cmb}},\mathbf{ x}^{\prime}_{\mathrm{cmb}})k_{\mathrm{cnt}}(\mathbf{x}_{\mathrm{cnt}},\mathbf{x}^{ \prime}_{\mathrm{cnt}})+(1-\lambda)(k_{\mathrm{cmb}}(\mathbf{x}_{\mathrm{cmb}},\bm {x}^{\prime}_{\mathrm{cmb}})+k_{\mathrm{cnt}}(\mathbf{x}_{\mathrm{cnt}},\mathbf{x}^{ \prime}_{\mathrm{cnt}})),\] where \(\mathbf{x}_{\mathrm{cnt}}\) and \(\mathbf{x}_{\mathrm{cmb}}\) are the continuous and combinatorial variables in \(\mathbf{x}\), respectively, and \(\lambda\) is between 0 and 1. The trade-off parameter \(\lambda\) is learned jointly with the other hyperparameters during the likelihood maximization. Algorithm 1 gives a high-level overview of Bounce. In Appendix A, we prove that Bounce converges to the global optimum under mild assumptions. We now explain the different components of Bounce in detail. ### The subspace embedding of mixed spaces Bounce supports mixed spaces of four types of input variables: categorical, ordinal, binary, and continuous variables. We discuss binary and categorical variables separately because we model them differently. The proposed embedding maps only variables of the same type to the same 'bin', i.e., to a single target dimension of the embedding. Target dimensions are homogeneous in this regard. Note that the number of target dimensions of each type is implied by the current bin size of the embedding that may grow during the execution. The proposed embedding can handle categorical or ordinal input variables that differ in the number of discrete values they can take. Continuous variables.As common in BO, we suppose that the continuous variables take values in a bounded interval and thus are normalized to \([-1,1]\). The embedding of continuous variables, i.e., input dimensions, follows BAxUS[57]: each input dimension \(D_{i}\) is associated with a random sign \(s_{i}\in\{-1,+1\}\) and one or multiple input dimensions can be mapped to the same target dimension of the low-dimensional embedded subspace. Recall that Bounce works on the low-dimensional subspace and thus decides an assignment \(v_{j}\) for every target dimension \(d_{j}\) of the embedding. Then all input variables mapped to this particular target dimensions are set to this value \(v_{j}\). Binary variables.Binary dimensions are represented by values \(-1\) and \(+1\). Each input dimension \(D_{i}\) is associated with a random sign \(s_{i}\in\{-1,+1\}\), and the subspace embedding may map one or more input dimensions to the same target dimension. While the embedding for binary and continuous dimensions is similar, Bounce handles binary dimensions differently when optimizing the acquisition function. Categorical variables.Bounce uses a one-hot encoding for categorical variables and combines them using the same categorical target dimension for dimensions of possibly different cardinalities. Suppose that the categorical variables \(v_{1},\ldots,v_{\ell}\) with cardinalities \(c_{1},\ldots,c_{\ell}\) are mapped to the same bin that is associated with the target dimension \(d_{j}\) of the subspace embedding. Then \(d_{j}\) is of categorical type and has \(\max\{c_{i}\ |\ 1\leq i\leq\ell\}=:c_{\max}\) distinct categories (admissible values), i.e., its cardinality is the maximum cardinality of the variables mapped to it. Suppose that Bounce assigns the label \(k\in\{1,\ldots,c_{\max}\}\) to the categorical target dimension \(d_{j}\). We transform this label to a categorical assignment to each input variable \(v_{1},\ldots,v_{\ell}\), setting \(v_{i}=\lceil k\cdot(c_{i}/c_{\max})\rceil\). Recall that Bounce may split up bins, target dimensions, to increase the dimensionality of its subspace embedding. In such an event, every derived bin inherits the cardinality of the parent bin. This allows us to retain any observations the algorithm has taken up to this point. Analogously to the random sign for binary variables, we randomly shuffle the categories before the embedding. This reduces the risk of Bounce being biased towards a specific structure of the optimizer (see Appendix B.3). Ordinal variables.The embedding of ordinal variables follows categorical variables. Suppose that \(\ell\) ordinal variables \(v_{1},\ldots,v_{\ell}\) are mapped to the same bin associated with the target dimension \(d_{j}\) of the subspace embedding. Let \(c_{i}\geq 2\) be the number of discrete values the input \(v_{i}\) may take. Then \(d_{j}\) has \(\max\{c_{i}\ |\ 1\leq i\leq\ell\}=:c_{\max}\) discrete values \(\mathcal{D}_{j}:=\{1,2,\ldots,c_{\max}\}\). Suppose that Bounce assigns the value \(k\in\mathcal{D}_{j}\) to the target dimension \(d_{j}\). We need to transform the value \(k\) to a feasible value for each of the \(\ell\) ordinal input variables that are mapped to \(d_{j}\). Thus, we set the input variable \(v_{i}:=\lceil k\cdot(c_{i}/c_{\max})\rceil\). For the sake of simplicity, we suppose here that the ordinal variable \(v_{i}\) has range \(\{1,2,\ldots,c_{i}\}\). ### Maximization of the acquisition function We use expected improvement (EI) [40] for batches of size \(B=1\) and \(q\)-expected improvement (qEI) [5, 76, 81] for larger batches. We optimize the EI using gradient-based methods for continuous problems and local search for combinatorial problems. We interleave gradient-based optimization and local search for functions defined over a mixed space; see Appendix C.1 for details. ### Batch parallelism We allow Bounce to efficiently evaluate batches of points in parallel by using a scalable TR management strategy and qEI [5, 76, 81] as the acquisition function for batches of size \(B>1\). To re-evaluate qEI on the same set of posterior samples, we fix the seed for each batch element throughout the interleaved optimization of the acquisition function. When Bounce starts with a fresh TR, we sample \(n_{\mathrm{init}}\) initial points uniformly at random to initialize the GP. The TR management strategy of Bounce differs from previous strategies [25; 57; 61; 75] in that it uses a dynamic factor to determine the TR base length. Recall that Bounce shrinks the TR if it fails to make progress and starts a fresh TR if the TR falls below the threshold given by the base length. If one employed the strategies of TuRBO [25], Casmopolitan [75], or BAxUS [57] for larger batch sizes \(B\) and Bounce's nested subspaces, then they would spend a large part of the evaluation budget in early target spaces. For example, suppose a continuous problem, the common values for the initial, minimum, and maximum TR base length, and the constant shrinkage factor of [61]. Then such a method has to shrink the TR base length at least seven times (i.e., evaluate \(f\,7B\)-times) before it would increase the dimensionality of the target space. Thus, the method would risk depleting its budget before reaching a target space suitable for the problem. On the other hand, we will see that Bounce chooses an evaluation budget that is lower in low-dimensional target spaces. It uses only \(3,12\), and \(47\) samples for the first three target spaces of a \(1000\)-dimensional problem with an evaluation budget of \(1000\). Bounce's strategy permits flexible TR shrinkage factors and base lengths, allowing TR base lengths to vary within the range \([L_{\text{min}},L_{\max}]\). Suppose that Bounce has evaluated \(j\) batches of \(B\) points each since it last increased the dimensionality of the target space, and let \(L_{j}\) denote the current TR base length. Observe that hence \(m_{i}-jB\) evaluations remain for \(\mathcal{X}_{i}\). Then Bounce sets the TR base length \(L_{j+1}:=\lambda_{j}^{-B}L_{j}\) if the evaluation of the batch gives a new best point whose objective value improves upon the incumbent by at least \(\varepsilon\). We call this a'success'. Otherwise, Bounce observes a 'failure' and sets \(L_{j+1}:=\lambda_{j}^{+B}L_{j}\). The rationale of this rule is that if at iteration \(j\), we apply this factor \((m_{i}-jB)\)-times, which is the remaining number of batches in the current subspace \(\mathcal{X}_{i}\), then the last batch of the \(i\)-th target space \(\mathcal{X}_{i}\) will have the minimum TR base length. If the TR is expanded upon a'success', we need to adjust \(\lambda_{j}\) not to use more than the allocated number of function evaluations in a target space. At each iteration, we therefore set the TR base length by \(\lambda_{j}=(\frac{L_{\text{min}}}{L_{j}})^{1/(m_{i}-jB)}\). Note that \(\lambda_{j}\) remains unchanged under this rule unless the TR expanded in the previous iteration. ## 4 Experimental evaluation We evaluate Bounce empirically on various benchmarks whose inputs are combinatorial, continuous, or mixed spaces. The evaluation comprises the state-of-the-art algorithms BODi[21], Casmopolitan[75], and COMBO[54], using code provided by the authors. We also report Random Search[8] as a baseline. The experimental setup.We initialize every algorithm with five initial points. The plots show the performances of all algorithms averaged over 50 repetitions except BODi, which has 20 repetitions due to resource constraints caused by its high memory demand. The shaded regions give the standard error of the mean. We use common random seeds for all algorithms and the randomized versions of the benchmark functions. We run all methods for 200 function evaluations unless stated otherwise. The benchmarks.The evaluation uses seven established benchmarks [21]: \(53\)D SVM, \(50\)D LABS, \(125\)D ClusterExpansion[3; 4], \(60\)D MaxSAT60[21; 54], \(25\)D PestControl, \(53\)D Ackley53, and \(25\)D Contamination[6; 37; 54]. Due to space constraints, we moved the results for the MaxSAT60, Contamination, and Ackley53 benchmarks to Appendix B.1. For each benchmark, we report results for the originally published formulation and for a modification where we move the optimal point to a random location. The randomization procedure is fixed for each benchmark for all algorithms and repetitions. For binary problems, we flip each input variable independently with probability \(0.5\). For categorical problems, we randomly permute the order of the categories. We motivate this randomization in Section 4.6. ### 50D Low-Autocorrelation Binary Sequences (Labs) LABS has \(n=50\) binary dimensions. It has important applications in communications engineering and mathematics; see [55] for details. LABS is a hard combinatorial problem and currently solved _via_ exhaustive search. The goal is to find a sequence \(\mathbf{x}\in\{-1,+1\}^{n}\) with a maximum merit factor \(F(\mathbf{x})=\frac{n^{2}}{2E(\mathbf{x})}\), where \(E(\mathbf{x})=\sum_{k=1}^{n-1}C_{k}^{2}(\mathbf{x})\) and \(C_{k}(\mathbf{x})=\sum_{i=1}^{n-k}x_{i}x_{i+k}\) for \(k=0,\ldots,n-1\) are the autocorrelations of \(\mathbf{x}\)[55]. The performance plot for this algorithm (Figure 1) shows that Bounce outperforms all other algorithms on the benchmark's original and randomized versions. Notably, the gain of Bounce over the runner-up COMBO increases with the number of function evaluations. ### Industrial Maximum Satisfiability: 125D ClusterExpansion benchmark We evaluate Bounce and the other algorithms on the 125-dimensional ClusterExpansion benchmark, a real-world MaxSAT instance with many applications in materials science [2]. Unlike the MaxSAT60 benchmark (see Appendix B.1.3), ClusterExpansion is not a crafted benchmark, and its optimum has no synthetic structure [1; 3]. Figure 2 shows the total weight of the unsatisfied clauses as a function of evaluations. We cannot plot regret curves since the optimum is unknown [4]. We observe that Bounce finds better solutions than all other algorithms. BODi is the only algorithm for which we observe sensitivity to the location of the optimal assignment: for the published version of the benchmark, BODi quickly jumps to a moderately good solution but subsequently fails to make further progress. Figure 1: The 50D low-autocorrelation binary sequence problem. Bounce finds the best solutions, followed by COMBO. Figure 3: The 25D categorical pest control problem. Bounce obtains the best solutions, followed by Casnopolitan. BODi’s performance degrades significantly when shuffling the order of categories. Figure 2: The 125D weighted ClusterExpansion maximum satisfiability problem. We plot the total weight of unsatisfied clauses. Bounce produces the best assignments. ### 25D Categorical Pest Control PestControl is a more complex version of the Contamination benchmark and has 25 categorical variables with five categories each [54]. The task is to select one out of five actions \(\{1,2,\dots,5\}\) at each of \(25\) stations to minimize the objective function that combines total cost and a measure of the spread of the pest. We note the setting \(\mathbf{x}=(5,5,\dots,5)\) achieves a good value of \(12.57\), while the best value found in our evaluation is \(12.07\) is \(\mathbf{x}=(5,5,\dots,5,1)\) and thus has a Hamming distance of one. The random seed used in our experiments is zero. Figure 3 summarizes the performances of the algorithms. Bounce is robust to the location of the global optimum and consistently performs as well COMBO and BODi, and sometimes better. In particular, the performances of COMBO and BODi depend on whether the optimum has a certain structure. We discuss it in detail in Appendix D.1. ### Svm - a 53D AutoML task In the SVM benchmark, we optimize over a mixed space with 50 binary and 3 continuous parameters to tune an \(\varepsilon\)-support vector regression (SVR) model [68]. The 50 binary parameters determine whether to include or exclude an input feature from the dataset. The 3 continuous parameters correspond to the regularization parameter \(C\), the kernel width \(\gamma\), and the \(\varepsilon\) parameter of the \(\varepsilon\)-SVR model [68]. Its root mean squared error on a held-out dataset gives the function value. Figure 4 summarizes the performances of the algorithms. We observe that Bounce, BODi, and Casmopolitan achieve comparable solutions. BODi performs slightly worse if the ordering of the categories is shuffled and slightly better if the optimal assignment to all binary variables is one. COMBO does not support continuous variables and thus was omitted. ### Bounce's efficacy for batch acquisition We study the sample efficiency of Bounce when it selects a batch of \(B\) points in each iteration to evaluate in parallel. Figure 5 shows the results for \(B=1,3,5,10\), and \(20\), where Bounce was run for \(\min(2000,200\cdot B)\) function evaluations. We configure Bounce to reach the input dimensionality after \(100\) evaluations for \(B=1,3,5\) and after \(25B\) for \(B=10,20\). We observe that Bounce leverages parallel function evaluations effectively: it obtains a comparable function value at a considerably smaller number of iterations, thus saving wall-clock time for applications with time-consuming function evaluations. We also studied batch acquisition for continuous problems and found that Bounce also provides significant speed-ups here. Due to space constraints, we deferred the discussion to Appendix B.2. ### The sensitivity of BODi and Comb to the location of the optima The above empirical evaluation reveals that the performances of BODi [21] and COMBO [54] are sensitive to the location of the optima. Both methods degrade on at least one benchmark when the optimum is moved to a randomly chosen point. This is particularly unexpected for categorical variables where moving the optimum to a random location is equivalent to shuffling the labels of the categories of each variable. Such a change of representation should not affect the performance of an algorithm. BODi is more susceptible to the location of the optimizer than COMBO. The performance of COMBO degrades only on the categorical PestControl benchmark, whereas BODi degrades on five out of seven benchmarks. Here BODi's performance drops even below Casmopolitan's. Looking closer, we observe that BODi's performance degradation is particularly large for synthetic benchmarks like Ackley53 and MaxSAT60, where setting all variables to the same value is optimal. Figure 6 Figure 4: The 53-dimensional SVM benchmark. Bounce, BODi, and Casmopolitan achieve comparable solutions. summarizes the effects of moving the optimum on BODi. Due to space constraints, we moved the details and a discussion of categorical variables to the appendix. Similarly, setting all binary variables of the SVM benchmark to one produces a good objective value. It is not surprising, given that the all-one assignment corresponds to including all features previously selected for the benchmark because of their high importance. We prove in Appendix D.1 that BODi adds a point that n Hamming distance to an all-zero or all-one solution, with a probability that increases with the dictionary size. Deshwal et al. (2011, p. 7) reported that BODi's performance 'tends to improve' with the size of the dictionary. Moreover, BODi samples a new dictionary in each iteration, eventually increasing the chance of having such a point in its dictionary. Thus, we hypothesize that BODi benefits from having a near-optimal solution in its dictionary. For COMBO, Figure 3 shows that the performance on PestControl degrades substantially if the labels of the categories are shuffled. Then COMBO's sample-efficiency becomes comparable to Random Search. ## 5 Discussion BO in combinatorial spaces has many exciting and impactful applications. Its applicability to real-world problems, such as LABS that defy a closed-form solution, makes it a valuable tool for practitioners. Our empirical evaluation reveals that state-of-the-art methods fail to provide good solutions reliably. In particular, it finds that BODi and COMBO, which performed best in recent Figure 5: Bounce benefits from the batch acquisition that allows parallelizing function evaluations. We show the average function values obtained after each batch for batch sizes \(1,3,5,10\), and 20. Figure 6: BODi’s performance degrades on five out of seven benchmarks when randomizing the location of the optimal solution: BODi on the default version (orange) of the benchmark and on the modified version (green, dashed) where the optimum was moved. publications, are sensitive to the location of the optimizer. We identified design flaws in BODi and an implementation bug in COMB0 as the root causes of the performance degradations. The proposed Bounce algorithm is reliable for high-dimensional black-box optimization in combinatorial, continuous, and mixed spaces. The empirical evaluation demonstrates that Bounce reliably outperforms the state-of-the-art on a diverse set of problems. Using a novel TR management strategy, Bounce leverages parallel evaluations of the objective function to improve its performance. We anticipate headroom by tailoring the modeling of combinatorial objects, e.g., arising in the search for peptides or materials discovery [28, 33, 35, 56, 71, 74]. Here it seems particularly interesting to incorporate prior belief on the importance of decision variables while maintaining the overall scalability. Moreover, extending the present work to black-box constraints [24, 30], multiple objectives, and multiple information sources [17, 31, 58] will considerably expand the use cases that it applies to. Societal impact.Bayesian optimization has recently gained wide-spread popularity for tasks in drug discovery [51], chemical engineering [14, 32, 36, 64, 67], materials discovery [28, 33, 35, 56, 71, 74], aerospace engineering [6, 42, 46], robotics [15, 16, 44, 48, 60], and many more. This highlights the Bayesian optimization community's progress toward providing a reliable 'off-the-shelf optimizer.' However, this promise is not yet fulfilled for the newer domain of mixed-variable Bayesian optimization that allows optimization over hundreds of 'tunable levers', some of which are discrete, while others are continuous. This domain is of particular relevance for the tasks above. Bounce's ability to incorporate more such levers in the optimization significantly impacts the above practical applications, allowing for more granular control of a chemical reaction or a processing path, to give some examples. The empirical evaluation shows that the performance of state-of-the-art methods is highly sensitive to the location of the unknown global optima and often degenerates drastically, thus putting practitioners at risk. The proposed algorithm Bounce, however, achieves robust performance over a broad collection of tasks and thus will become a 'goto' optimizer for practitioners in other fields. Therefore, we will open-source the Bounce code when the paper is accepted.
2302.13904
Shock System Dynamics of a Morphing Bump Over a Flat Plate
In this paper, the shock dynamics due to the movement of a bump over a flat plate flying at supersonic speed are numerically investigated. The bump is located at the impingement position of the shock wave and is moved at different speeds. This study determines the suitable speed that achieves the minimum entropy change, which is the representation parameter of the transition period. The two-dimensional unsteady Navier-Stokes equations are solved using OpenFOAM to simulate the flow field variables, while the motion of the bump is tracked using the Arbitrary Lagrangian-Eulerian (ALE) technique. The results show that a spatial lag on the shock system from the steady-state solution occurs due to the movement of the bump. Further, the spatial lag increases with the increase in the bump's speed. This causes a high increase in the flow parameters and consequently the total entropy changes on the bump surface. Generally, it is common to move the bump over the longest possible time to approximate a quasi-steady flow during the motion. However, this causes a deviation in the flow parameters between the final time of transition and the steady-state case of bump existence. Thus, it is concluded that the optimal non-dimensional time for a morphing bump in a supersonic flow of Mach number of 2.9 is 2, which is different than the longest time of 10.
Ahmed A. Hamada, Lubna Margha, Mohamed M. AbdelRahman, Amr Guaily
2023-02-27T15:48:36Z
http://arxiv.org/abs/2302.13904v1
# Shock System Dynamics of a Morphing Bump Over a Flat Plate ###### Abstract _The shock wave boundary layer interaction (SW-BLI) phenomenon over transonic and supersonic airfoils captured the attention of aerospace engineers, due to its disastrous effect on the aerodynamic performance of these vehicles. Thus, the scientific community numerically and experimentally investigated several active and passive flow control elements to reduce the effect of the phenomenon, such as vortex generator, cavity, and bump. They focused on designing and optimizing the shape and location of the bump control element. However, the transit movement of the bump from the state of a clean airfoil to the state of an airfoil with a bump needs more investigation, especially the dynamics of the shock system. Thus, it is preferred to start with simple geometry, such as a flat plate, to fully understand the flow behavior with a morphing bump. In this paper, the shock dynamics due to the movement of a bump over a flat plate flying at supersonic speed are numerically investigated. The bump is located at the impingement position of the shock wave and is moved at different speeds. This study determines the suitable speed that achieves the minimum entropy change, which is the representation parameter of the transition period. The two-dimensional unsteady Navier-Stokes equations are solved using OpenFOAM to simulate the flow field variables, while the motion of the bump is tracked using the Arbitrary Lagrangian-Eulerian (ALE) technique. The results show that a spatial lag on the shock system from the steady-state solution occurs due to the movement of the bump. Further, the spatial lag increases with the increase in the bump's speed. This causes a high increase in the flow parameters and consequently the total entropy changes on the bump surface. Generally, it is common to move the bump over the longest possible time to approximate a quasi-steady flow during the motion. However, this causes a deviation in the flow parameters between the final time of transition and the steady-state case of bump existence. Thus, it is concluded that the optimal non-dimensional time for a morphing bump in a supersonic flow of Mach number of 2.9 is 2, which is different than the longest time of 10._ Keywords: Moving bump; Supersonic flow; Flat plate; Active flow control. ## 1 Introduction The Shock wave boundary layer interaction (SWBLI) over a high-speed wing disastrously affects its aerodynamics performance and structural lifetime. The phenomenon of SWBLI was experimentally observed for the first time by Ferri [1] in 1939. Green [2] summarized the occurrence of the phenomenon in four conditions: externally as in transonic airfoils, and near control flaps, or internally as in high-speed inlets (scram-jets), and nozzles at off-design. Based on strength of the shock wave, the phenomenon varies the height of the boundary layer, the surface drag of the body, and/or the heat transfer. The SWBLI adversely affects the structure and geometry of the flying vehicles [3]. The flow experiences a high instantaneous increase in pressure and thermal transfer due to the existence of SWBLI, which leads to a reduction in the fatigue life of the structure. Consequently, that imposes strong constrictions on choosing the material of the structure, resulting in an expensive and heavy design. Moreover, this phenomenon happens in transonic airfoils after a critical Mach number, causing the sudden increase in total drag, which is called "drag divergence". There are several ways to eliminate or reduce the SWBLI effect which is discussed in detail by Dennis Bushnell [4]. The focus in this paper will be on the bump approach. In 1922, Ashill et al. [5] were the first researchers who used a two-dimensional bump at the upper surface of airfoils to affect the strength of incurring shock waves. Milholen et al. [6] and Patzold et al. [7] proved that using bumped airfoils increases the lift, reduces the drag, and postpones the buffeting, which enhances the aerodynamic performance. Fulker [8], who is a DASA-Airbus researcher, showed that applying a bump in A-340's hybrid laminar wing will save fuel by 2.11% at \(M_{\infty}=0.84\). Several investigations are conducted to conclude with the optimal shape and location of the bump [9, 10, 11, 12]. Further, Eastwood and Jarrett [13] investigated the three-dimensional bump control technique. Yun Tian et al. [14] showed that airfoil geometry and free stream condition determine the bump's optimal geometric parameters which specify its shape and location over the airfoil. In addition, varying these geometric parameters highly affects the aerodynamic performance. By the beginning of the 21st century, the morphing wing concept become industrial applicable. Stanewsky et al. discussed the design aspects of the morphing concept from the view of the civil aircraft industry [15]. From that time, many scholars started to investigate the flow behavior over the morphing wing at different velocity regimes [16, 17, 18, 19]. Further, Bruce and Colliss [20] conducted a review study for the shock control bumps, showing that the morphing bump is a future ideal solution to limit the SWBLI effect on wings at transonic flows. To simplify the phenomenon, consider a strong two-dimensional shock wave that impinges with a boundary layer of a flat plate, see Figure 1. Downstream the impinging region, separation, and relaxation zones are generated. The flow changes from a non-equilibrium state in the separation zone to an equilibrium state in the relaxation zone. Through these zones, the flow experiences a quick increase in pressure and heat transfer, detaching, and reattaching the boundary layer. Moreover, the turbulent kinetic energy increases due to the generation of the strong shear and the adverse pressure gradient in and near the separation zone, respectively. During the phenomenon, the flow contains different types of waves generated from the SW-BLI, such as; a reflected shock wave, expansion waves, and compression waves [21]. The static condition for the bump control element is fully investigated. Particularly, they concentrated on designing and optimizing the bump control element's shape and position. However, additional detailed investigations are needed for the transition of the morphing bump from a clean airfoil to an airfoil with a bump, focusing on the dynamics of the shock system. The aim of this research is to numerically study the transient effect of the morphing pump over a simple geometry (flat plate) on the unsteady flow features of the shock dynamics. We started to study the transient phenomenon over a simple geometry first to fully understand the transition effects. The location of the bump is chosen to be at the impingement location of the incident shock wave. Then, starting the unsteady motion of the bump at different speeds and a constant velocity profile. This enabled us to determine the suitable morphing bump's speed that would minimize the entropy change. ## 2 Physical model ### Model Description The flow configuration of the morphing bump in a supersonic flow is shown in Figure 2. Further, the variation of the morphing bump shape with time is shown in Figure 3. The height of the computational inflow boundary is \(1.1m\), and the height of the outflow at the right is \(1m\). The difference in height between the two edges represents the height of the wedge, which is the shock-source surface. The wedge is far from the inlet with a length of \(0.2656m\) and the wedge stream-wise length is \(0.5173m\). The morphing bump is placed at the center of the bottom surface, representing the origin of the domain. The origin is \(2.25m\) far away from the inlet. The final shape of the morphing bump is a parabolic arc of height-to-length ratio, \(\alpha=4\%\), and represented by Equation (1). \[y=\sqrt{\left(\frac{\alpha^{2}+0.25}{2\alpha}\right)^{2}-x^{2}}+\frac{\alpha^{ 2}-0.25}{2\alpha} \tag{1}\] When the supersonic flow with a free-stream Mach number \(M_{\infty}=2.9\) and a free-stream Reynolds number \(Re_{\infty}=6.6\times 10^{7}\) impinges the wedge, an incident shock wave (\(I\)) is generated hitting the bottom surface. Then, a reflected shock wave (\(R\)) appears. Further, a series of expansion fans generates from the wedge trailing edge (\(E\)). The expansion fan interacts with the shock system causing a curvature to them. The remaining top surface after the wedge is considered an outlet to enable the flow Figure 1: A schematic of 2D SW-BLI SEPARATION FLOW [21]. to go out freely. The morphing bump rises to its final shape with a constant velocity profile and different speeds (different motion periods of 0.1, 1.0, 5.0, 7.5, 10.0). The initial condition of the problem is shown in Figure 2 (a) with the solid lines, while the dashed lines are for the final condition. The values of the problem's parameters are shown in Table 1. The appearance of the morphing bump causes a generation of two oblique shocks at its leading edge (\(L\)) and at its trailing edge (\(T\)). Further, a series of expansion fans appear over the morphing bump due to its geometric curvature (Series of \(E\)s). the number of cells in \(x\) and \(y\) directions. The time step was controlled with the Courant-Friedrichs-Lewy (CFL) number of 0.2. Figure. 5 shows the variation of the pressure distribution at height of \(y=0.5m\) for different mesh sizes. The results with mesh 3 show a converged solution that does not depend on the mesh quality. This is because the difference between mesh 3 and mesh 4 is negligible. Thus, Mesh 3 of size \(400\times 180\) was chosen to implement all the simulations to minimize computational time. The minimum element size in mesh 3 is \(2.8mm\times 1.2mm\), and the time step to ensure the CFL number of 0.2 is \(111.1\mu s\). ## 4 Results and Discussion ### Shock Structure over a Stationary Bump Firstly, the benefit of using the bump in the SWBLI problem has to be shown before investigating the transitional effect of the morphing bump. Thus, supersonic flows over a flat plate without and with a stationary bump were simulated to obtain the steady-state solutions of the morphing bump limiting positions, as shown in Figure 6. The benefit of using the bump is indicated by plotting the pressure distribution over the lower boundary for both problems; without and with a stationary bump, as shown in Figure 7. The existence of the bump created a series of weak expansion fans which in total decreased the pressure dramatically to the inlet value of pressure as shown in Figure 7 within the region of \(x\approx(0,0.5)\). Then, the pressure increased at the end of the bump to the original case (clean flat plate) at \(x=0.5\), due to the generation of the trailing oblique shock. This decrease of the pressure in this simple geometry (flat plate) case shows that the bump is applicable to control the SWBLI in the airfoil case, where the pressure decreases over the upper surface, and accordingly the lift force increases. This aligns with the results of Mazaheri et al. [12]. ### Shock Dynamics over Morphing Bump The transition phase of the morphing bump was investigated in detail with a constant velocity profile. The motion of the morphing bump occurred during different dimensionless times, Figure 4: SCHEMATICS OF THE COMPUTATIONAL DOMAIN FOR MESH 1. Figure 5: MESH INDEPENDENT STUDY FOR A SUPERSONIC FLOW OVER A CLEAN FLAT PLATE, BY COMPARING THE PRESSURE DISTRIBUTION AT THE HEIGHT, \(Y=0.5M\). Figure 6: PRESSURE COTOURS OF SUPERSONIC FLOW OVER A FLAT PLATE WITH AND WITHOUT A BUMP at DIMENSIONAL TIME OF 10. Figure 7: COMPARISON BETWEEN THE TWO PROBLEMS, SUPERSONIC FLOW OVER A FLAT PLATE WITH AND WITHOUT A BUMP USING THE PRESSURE DISTRIBUTION OVER THE BOTTOM BOUNDARY. ## 4 Conclusion In this paper, we have proposed a novel approach to the Figure 8: Pressure contours, shown in (A), Zoomed picture of the MORPHING MESH, SHOWN IN (B), AND ENTROPY, TEMPERATURE, PRESSURE, AND DENSITY DISTRIBUTIONS, SHOWN IN (C) AND (D), OF SUPERSONIC FLOW, OVER A FLAT PLATE WITH A MORPHING Bump CONTROL ELEMENT WITH CONSTANT VELOCITY PROFILE ALONG THE BOTTOM BOUNDARY. \(r_{f}^{*}=10\), 7.5, 5, 3, 2, 1.8, 1, and 0.1. This corresponds to a constant local Mach number, \(M_{b}\), of 0.004, 0.00533, 0.008, 0.0133, 0.02, 0.022, 0.022, 0.04, and 0.4 at the mid-point of the morphing bump (\(x=0\)), respectively, and zero-velocity points at the two edges of the bump. Figure 8 shows the transitional solution of the morphing bump with the slowest constant velocity (the Mach number at \(x=0\) is 0.004 and the final dimensionless time, \(t_{f}^{*}\) is 10) by plotting the pressure contours, shown in (I), zoomed picture of the morphing mesh, shown in (II), and entropy, temperature, pressure, and density distributions, shown in (III) and (IV), along the bottom boundary at different dimensionless times, \(t^{*}\). There is a spatial lag in the leading and trailing oblique shocks between the transitional solution at different values of constant velocity (different values of the final dimensionless times, \(t_{f}^{*}\) ), and the stationary condition, as shown in Figure 9. This spatial lag in the shock system from the stationary condition increases when the value of the morphing bump's constant velocity increases, i.e. the operational time of the morphing bump was reduced. This was concluded also in a rotating wedge problem, made by Margha et al. [27]. The spatial lag was clearly indicated by comparing the pressure, and Mach number distributions at a height of \(y=1\), between the stationary condition and the morphing conditions at different \(t_{f}^{*}\), as shown in Figure 10. Further, the change in the spatial lag can be neglected when the operational time of the morphing bump exceeds the final dimensionless time, \(t_{f}^{*}=5\). A comparison between the start state (clean flat plate) and the final state (at the end of the morphing motion) over the Figure 10: PRESSURE, AND MACH NUMBER DISTRIBUTIONS OF SUPERSONIC FLOW OVER A FLAT PLATE WITH A STATIONARY/MORPHING BUMP CONTROL ELECTT AT A HEIGHT OF \(Y=1M\), TO SHOW THE CHANGE OF SPATIAL LAG IN THE SHOCK SYSTEM AT DIFFERENT DIMENSIONLESS MORPHING PERIODS. Figure 9: PRESSURE COTOURS OF SUPERSONIC FLOW OVER A FLAT PLATE WITH A STATIONARY/MORPHING BUMP AT DIFFERENT CONSTANT VELOCITIES, TO SHOW THE SPATIAL LAG IN THE SHOCK SYSTEM. bump's surface at different motion periods was performed by obtaining the flow parameter distributions. Then, the temporal change in the entropy difference distribution with the case of a clean flat plate problem at a steady state over the lower boundary, \(\left(\Delta s_{\theta=r_{f}^{*}}(x)-\Delta s_{\theta=0}(x)\right)/c_{v}\), is calculated, as shown in Figure 11. Further, the entropy difference distribution, \(\Delta s(x)\), is defined as the difference between the entropy at a certain \(x\) location with the inlet condition. When the leading oblique shock appears due to the morphing bump motion, the pressure and the density over the bump increase. Thus, the temporal change in the entropy difference rises up at the bump's leading edge. Then, the series of expansion, before and after the incident oblique shock, decrease the pressure and density along the morphing bump. This may cause negative values for the temporal change in the entropy difference over the bump at relatively low morphing speeds. Furthermore, the location of the incident shock moves slightly to the left during the upward motion of the bump. The trailing oblique shock again increases the temporal change in the entropy difference. In addition, there is a deviation between the two entropy difference over the lower boundary; the case of the stationary bump at steady state, and the case of the morphing bump at final states of the motion, see Figure 8 (f) and Figure 9 (a). The deviation happens due to the effect of spatial lag on the shock system, which results from the bump's motion. when the morphing bump is moved with a relatively high speed (small \(t_{f}^{*}\)), we lose the gain in the pressure difference \((p_{\theta=r_{f}^{*}}(x)-p_{t=0}(x))\), that is obtained from the series of expansion fans. This is explained as the pressure applied on the morphing bump would increase, which can be considered as a resistant action to the motion of the bump. Further, the compressibility effect increases with the bump's speed, and consequently the density increases. When the air is compressed with the motion of the bump, the molecular distances decrease, and then the friction between adjacent molecules increases. Thus, the temperature increases with the bump's motion. Thus, when the morphing bump Figure 11: Pressure distribution, and the temporal change in the entropy difference distribution for the supersonic flow over a flat plate with a morphing bump over the lower boundary. Figure 12: Pressure distribution, and the change in temporal entropy difference distribution for the supersonic flow over a flat plate with a morphing bump over the lower boundary. moves faster, the flow parameters (pressure, density, and temperature) increase, as shown in Figure 11 (a). This is clearly shown in the temporal change in the entropy difference distribution of Figure 11 (b). Hence, the results show that the slowest velocity for the morphing bump, which approximates a quasi-steady flow during the motion, decreases the lag effect. Despite the existence of the lag effect in the shock system, the slowest tested morphing bump's speed (\(t_{f}^{*}=10\)) is not the optimal one, due to the deviation in the entropy from the stationary bump at steady state, see Figure 8 (f). This deviation represents the losses in the shock system, resulting from the morphing speed. Thus, comparing the entropy difference for each speed was made with the stationary bump at a steady state, as shown in Figure 12. The results show the suitable motion period, \(t_{f}^{*}\), for the morphing bump at a free-stream Mach number of 2.9 is 2. The reason is that its entropy difference is very close to that of the stationary bump at a steady state. Furthermore, the lag effect in the system resulting from that morphing speed is within acceptable moderate levels, as shown in Figure 10. ## 5 Conclusion The current research work aimed to investigate the morphing of a bump control element over a flat plate at a free-stream Mach number of 2.9 and a high Reynolds number of \(6.6\times 10^{7}\). Different values of constant velocity were conducted to study the effect of the morphing bump on the shock system with non-dimensional motion periods, \(t_{f}^{*}\) of 0.1, 1.0, 1.8, 2.0, 3.0, 5.0, 7.5, 10.0. Further, the steady state of supersonic flows over a clean flat plate and a flat plate with a stationary bump were conducted to show the beneficial effect of the bump's existence. Furthermore, a comparison between the dynamic and static cases was achieved. The results showed that a spatial lag in the shock system appears due to the dynamic motion. In addition, the lag effect remarkably increases when the local Mach number at the tip of the morphing bump, \(M_{b}\), increases higher than 0.008. The relatively fast morphing bump would increase the compressibility effect in the near area which may compensate for the beneficial effect of the bump's existence. Besides, the relatively slow morphing speed results in a deviation in the entropy from the stationary bump case, which is a representation of flow momentum losses. Thus, the suitable speed to morph with is the one that results in neither a remarkable lag effect in the shock system nor high losses in the entropy deviation from the stationary steady-state case. For the case of supersonic flow with \(M_{\infty}=2.9\) over a flat plate, the suitable bump's morphing period, \(t_{f}^{*}\), was found to be 2. For future work, various velocity profiles with different speed values are recommended to be tested for the morphing bump. This will determine the suitable morphing bump's velocity profile and speed, that achieves low time-average entropy for the flat plate problem. Then, this configuration of motion will be tested over a transonic airfoil to achieve a high time-averaged lift-to-drag ratio.
2305.09723
GlobULeS-V. UVIT/AstroSat studies of stellar populations in NGC 362: Detection of Blue Lurkers in a Globular Cluster
We report the discovery of four blue lurkers with low and extremely low-mass white dwarf (ELM WDs) companions in the Galactic globular cluster NGC 362 using AstroSat Ultra Violet Imaging Telescope (UVIT). We analyzed the multi-wavelength spectral energy distribution (SED) of FUV-bright MS stars using data from the UVIT, UVOT, GAIA EDR3, and 2.2m ESO/MPI telescopes. Two each of low-mass WDs and ELM WDs are found as companions for the four blue lurkers by the fitting of two-component SED models. The effective temperatures, radii, luminosities, and masses of two low-mass WDs are (35000, 23000) K, (0.04, 0.05) Rsun , (1.45, 0.22) Lsun , and (0.2, 0.2) Msun, while the two ELM WDs are (14750, 14750) K, (0.09, 0.10) Rsun, (0.34, 0.40) Lsun, and (0.18, 0.18) Msun. The position of blue lurkers within the cluster shows that they originated via the Case A/B mass-transfer mechanism in a low-density environment. This is the first detection of blue lurkers with low-mass WDs and ELM WDs as companions in a globular cluster. The companion cooling age is less than 4 Myr, which suggests that they were just recently formed. These binary systems might have originated due to the cluster recent core collapse.
Arvind K. Dattatrey, R. K. S. Yadav, Gourav Kumawat, Sharmila Rani, Gaurav Singh, Annapurni Subramaniam, Ravi S. Singh
2023-05-16T18:00:07Z
http://arxiv.org/abs/2305.09723v1
GlobULeS-V. UVIT/_AstroSat_ studies of stellar populations in NGC 362: Detection of Blue Lurkers in a Globular Cluster ###### Abstract We report the discovery of four blue lurkers with low- and extremely low-mass white dwarf (ELM WDs) companions in the Galactic globular cluster NGC 362 using _AstroSat_'s Ultra Violet Imaging Telescope (UVIT). We analyzed the multi-wavelength spectral energy distribution (SED) of FUV-bright MS stars using data from the UVIT, UVOT, GAIA EDR3, and 2.2m ESO/MPI telescopes. Two each of low-mass WDs and ELM WDs are found as companions for the four blue lurkers by the fitting of two-component SED models. The effective temperatures, radii, luminosities, and masses of two low-mass WDs are (35000, 23000) K, (0.04, 0.05) R\({}_{\odot}\), (1.45, 0.22) L\({}_{\odot}\), and (0.2, 0.2) M\({}_{\odot}\), while the two ELM WDs are (14750, 14750) K, (0.09, 0.10) R\({}_{\odot}\), (0.34, 0.40) L\({}_{\odot}\), and (0.18, 0.18) M\({}_{\odot}\). The position of blue lurkers within the cluster shows that they originated via the Case A/B mass-transfer mechanism in a low-density environment. This is the first detection of blue lurkers with low-mass WDs and ELM WDs as companions in a globular cluster. The companion's cooling age is less than 4 Myr, which suggests that they were just recently formed. These binary systems might have originated due to the cluster's recent core collapse. keywords: ultraviolet: stars -- (stars:) blue lurkers -- (stars:) Hertzsprung-Russell and CM diagrams -- (stars:) white dwarfs -- (Galaxy:) Globular star clusters: individual: (NGC362) ## 1 Introduction Sandage (1953) discovered stars brighter and bluer than the main sequence lying above the turn-off in the colour-magnitude diagram (CMD) of the Galactic Globular Cluster (GGC) M3. These stars are called Blue Straggler Stars (BSSs). The three major formation pathways for BSSs are as follows: mass transfer in binary star systems via Roche lobe overflow (RLOF), which leads to mass gain by BSS progenitors (McCrea, 1964); stellar collisions in a dense cluster environment (Hills and Day, 1976); and tightening and merger of inner binaries in a hierarchical triple system (Perets and Fabrycky, 2009; Naoz and Fabrycky, 2014). Blue Lurkers (BLs), like BSSs, are also post-mass transfer (MT) systems, but they do not appear brighter than the main-sequence turn-off (MSTO) in the CMD. This could be due to a lack of accreted matter or a too-small accretor star. The jump in the CMD for these stars is insufficient for them to be seen above the MSTO. As a result, these stars have been designated as blue lurkers: "BSS-type stars lying hidden in the MS below MSTO" (Leiner et al., 2019). These stars cannot be detected using the CMD alone because they do not stand out like BSSs. Some possible techniques to find these stars are as follows: Observation of more than average rotation (Vsini) indicating a recent MT event (Leiner et al., 2019); Multi-wavelength SED analysis to find companions such as Extremely Low Mass (ELM) White Dwarfs (WDs), hot sub-dwarfs etc. (Jadhav et al., 2019; Subramaniam et al., 2020); Looking for chemical peculiarities possible only via mass accretion. In fact, N-body simulations and population synthesis studies predict that such mass transfer products could be abundant (Andronov et al., 2006; Geller et al., 2013). ELM WDs are helium core WDs with masses M\(\leq 0.18\) M\({}_{\odot}\)(Sun and Arras, 2018). The universe's age limits the mass of WDs formed by isolated star evolution to \(\approx 0.4\) M\({}_{\odot}\)(Brown et al., 2010). WDs with masses 0.1-0.4 M\({}_{\odot}\) were found to be part of compact binaries (Marsh et al., 1995; Benvenuto and De Vito, 2004; Brown et al., 2010). Their formation can be explained by mass loss in binary systems. Relatively few studies have been done to identify BLs, and only in open clusters (OCs) (Leiner et al., 2019; Jadhav et al., 2019; Subramaniam et al., 2020). These BLs are found in binary systems with WD/low-mass WD companions. To our knowledge, no literature exists on the detection of BLs in GCs. Here we report the first detection of BLs in the GC NGC 362 using UV data. The GC NGC 362 is located in the Tucana constellation in the Southern hemisphere (RA = \(01h^{1}\,03^{m}\,14.26^{s}\) and Dec = \(-70^{\circ}\) 50\({}^{\prime}\) 55\({}^{\prime}\).6). The cluster has an age of \(\approx 11\) Gyr, located at a distance of \(\approx 8.83\) kpc (Vasiliev & Baumgardt, 2021), a metallicity \(\left[\mathrm{Fe}/\mathrm{H}\right]\) of \(\approx-1.3\) dex, and a reddening of \(\sim 0.05\) mag (Harris, 2010). ## 2 Data Sets and their Reduction The observations of NGC 362 were made with the UVIT on November 11, 2016, in four UV filters: F148W, F169M, N245M, and N263M. UVIT is one of the payloads on _AstroSat_, India's first multi-wavelength space observatory, which launched on September 28, 2015. The exposure times were 4900, 4600, 4900, and 4600 sec in the F148W, F169M, N245M, and N263M filters, respectively. Using CCDLAB software (Postma & Leahy, 2017), which corrects for satellite drift, flat-field, geometric distortion, fixed pattern noise, and cosmic rays, data reduction of the raw images was carried out. Detailed descriptions of the telescope, instruments and preliminary calibration can be found in Subramaniam et al. (2016) and Tandon et al. (2017). The archival UV data collected with Ultraviolet Optical Telescope (UVOT) are also used in this analysis. The raw data from the HEASARC archive1 was processed using the HEA-Soft2 pipeline. The telescope's details and photometric calibration of UVOT data can be found in Poole et al. (2008). Corrections were made to exposure maps and auxiliary spacecraft data, as well as to science images that were geometrically corrected for sky coordinates. In order to process the UVOT/Swift data, the procedure outlined by Siegel et al. (2014) was used. In addition, archival optical data in the \(U,B,V,R\), and \(I\) filters collected by the Wide-Field Imager (WFI) mounted on the 2.2m ESO/MPI Telescope were utilised. The details of data reduction and photometry are provided in Anderson et al. (2006). Footnote 1: [https://heasarc.gsfc.nasa.gov](https://heasarc.gsfc.nasa.gov) Footnote 2: [https://heasarc.gsfc.nasa.gov/docs/software/heasoft](https://heasarc.gsfc.nasa.gov/docs/software/heasoft) ## 3 UV and Optical Colour Magnitude Diagrams ### The selection of FUV-bright stars on the main sequence Before identifying the FUV bright MS stars, we cross-matched the UVIT data with the _Gaia_ EDR3 photometric data with a matching radius of \(1\aas@@fstack{\prime\prime}0\). The FUV bright MS stars are selected using the UV-optical CMDs. We utilised the proper-motion membership probability catalogue from the GlobULeS I paper (Sahu et al., 2022) to select the members. The membership probabilities are based on the GAIA DR3 proper motions and a comprehensive comparison of the cluster like stars kinematics distribution and the field distribution. Stars with a membership probability greater than 85% were chosen as cluster members and considered for further analysis. We constructed the UV-optical CMDs of members by finding the optical counterparts of the UVIT-detected sources. We plotted the FUV optical CMD (F148W, (F148W \(-\) G)) of BSSs shown in the left panel of Figure 1. The dotted and solid lines represent the ZAMS and MS curves taken from BaST3. The BSS (blue symbols) sequence is easily identified in the CMD. Four FUV bright stars (ID: 1551, 1991, 2412, and 2481) can be found bluer than the ZAMS. To check the location of these four FUV bright stars in the optical CMD, we constructed (G, (G\({}_{BP}-{\rm G}_{RP}\))) diagram shown in the middle panel of Figure 1. For visual guidance, we overplotted BaSTI isochrone with a solid line using the cluster's parameters given in Sec. 1. It is evident that the four FUV bright stars are located close to the turn-off point of the cluster in the optical CMD. We analysed the locations and contamination of these stars in the observed image and marked them with red squares in the F148W image, as depicted in the right panel of Figure 1. This figure reveals that the stars are clearly resolved, with no contamination. They are situated more than \(3\arcmin\) from the cluster's centre. Footnote 3: [http://baasi-iac.oa-abruzzo.inaf.it/](http://baasi-iac.oa-abruzzo.inaf.it/) ## 4 Spectral Energy Distributions In this section, we build the spectral energy distribution (SED) of the FUV bright MS stars 1551, 1991, 2412 and 2481 to look for the companions and their properties. We used the virtual observatory tool, VOSA (VO SED Analyzer, (Bayo et al., 2008)) for this. VOSA uses filter transmission curves to compute synthetic photometry for a chosen theoretical model. The synthetic photometric data is then compared to the observed data, and the best fit is determined using the \(\chi^{2}\) minimization test. The specific procedures for single and binary component SED fitting are provided in Subramaniam et al. (2016) and Rani et al. (2021). In brief, we fit the model of Castelli & Kurucz (2003) of varying Figure 1: Left panel: UV-optical CMD (F148W vs F148W \(-\) G) of the BSSs for the cluster NGC 362. Blue symbols represent the BSSs. The probable four BL candidates, bluer than the ZAMS, are displayed with red down-triangle symbols. Middle panel: The optical CMD (G vs G\({}_{BP}-{\rm G}_{RP}\)) of the cluster NGC 362. The BaSTI isochrone is overplotted with a solid black line. The four red down-triangle symbols represent the FUV bright MS stars. Right panel: Spatial locations of the four FUV bright MS stars in F148W image of NGC 362. The field of view of the image is \(-8.5\arcmin\) x \(8.5\arcmin\). North is up and east is on the left. temperature and surface gravity with fixed metallicity of [Fe/H] = -1 dex using the basic parameters provided in Sec. 1. The effective temperatures were considered to be in the \(5000-50000\) K range, with \(\log g\) ranging from 3-5 dex. UVIT data were combined with UVOT and ESO 2.2m optical data for long coverage of wavelengths in the SED. We used the Kurucz stellar atmospheric model to fit the single component SED, as shown with a grey curve in Figure 2. The observed data are represented by red points with error bars, whereas the model data is represented by blue points. The residual between the fitted model and the observed fluxes, normalized by the observed flux, are shown in the bottom panel of each SED. The green points, along with the error bars, are the residuals, and the dashed horizontal lines at \(\pm\) 0.3 (30%) are the residual threshold. The residual plots in Figure 2 show that the residual flux in the UV region is greater than 0.3 in more than one data point. This indicates a UV excess in all four FUV bright MS stars, and a combination of hotter and cooler spectra may fit the SEDs. We fitted two-component SEDs to account for the UV excess. We used the Kurucz stellar atmospheric model (Castelli et al., 1997; Castelli & Kurucz, 2003) to fit the cooler component and the Koester WD model (Koester, 2010) (shown with cyan curves) to fit the hotter component. The T\({}_{eff}\) and \(\log g\) in the Koester WD model range from \(5000-80000\) K and \(6.5\)-\(9.5\), respectively. The combined spectra are shown with a black line in Figure 2. The fitting parameters \(\chi^{2}_{F}\), V\({}_{gf}\), and V\({}_{gf\,f\,b}\) of the combined spectra, listed in Table 1, indicate that the spectrum fits well for all wavelengths. The Koester model fits the hot components well, while the Kurucz model fits the cool components well. The fractional residuals for the combined SEDs are shown as black points along with the error in the bottom panels. For the combined SEDs, the residuals are within 0.3 on each wavelength. Fitting the two components yields the fundamental parameters of the cool and hot components of the FUV bright MS stars. The estimated parameters are shown in Table 1. The star IDs with "A" and "B" represent the cool and hot components, respectively. The range of parameters for cool components are T\({}_{eff}\sim 6250-6750\) K, \(\log g\sim 3-5\), L \(\sim 1.02-3.21\) L\({}_{\odot}\), and R \(\sim 0.90-1.53\) R\({}_{\odot}\), while those for hot components are, T\({}_{eff}\sim 14750-35000\) K, \(\log g\sim 6.5-9.5\), L \(\sim 0.22-1.45\) L\({}_{\odot}\), and R \(\sim 0.04-0.10\) R\({}_{\odot}\). The effective temperature and radius of 11 BLs found in the open cluster M67 have been calculated by Leiner et al. (2019) using SED analysis. The effective temperature and radius were found to be in the 5520-6840 K and 0.85-1.9 R\({}_{\odot}\) ranges, respectively. The present estimated parameters for the BLs are well within the range. ### Age and mass of the cool and hot components We estimated the age and mass of the cool components, 1551A, 1991A, 2412A, and 2481A, by comparing their positions to theoretical isochrones retrieved from BaSTI (Pietrinferni et al. (2021)) in the H-R diagram shown in Fig. 3. We considered the SED-estimated surface temperature and luminosity of cool components, which are represented by red points in the figure. The isochrones from 5 to 11 Gyr are plotted with intervals of 0.5 Gyr using grey lines with the cluster's input parameters provided in Sec 1. By comparing the positions to the isochrones, the cool components 1551A, 1991A, 2412A, and 2481A have ages of 5, 10, 8, and 5.5 Gyr, respectively, and masses of 0.92, 0.85, 0.87, and 0.82 M\({}_{\odot}\). The estimated age and mass of each cool component are displayed in Table 1. The age and mass of 1551B, 1991B, 2412B, and 2481B (hot components) are derived using the H-R diagram depicted in Fig. 3. The surface temperature and luminosity derived using SEDs are used in the H-R diagram. Cyan points denote the hot components. A blue and yellow line represents a 0.2 and 0.3 M\({}_{\odot}\) helium DB WD Figure 2: The SEDs of four FUV bright MS stars (1551, 1991, 2412, and 2481). The T\({}_{eff}\) of the cool (A) and hot (B) components are displayed in each SED. The observed data points are represented by red solid-diamond points, while open blue diamond points represent the model points. Cyan, grey, and black curves represent the Koester, Kurucz, and combined spectra, respectively. \(\Delta F\) / \(F\) is the fractional residual. models from Tremblay & Bergeron (2009). The ELM WD models of mass 0.182 and 0.186 M\({}_{\odot}\) are shown with magenta lines, which were taken from Althaus et al. (2013). The two hot components, 1991B and 2481B are located on the 0.186 M\({}_{\odot}\) ELM WD model, while 1951B and 2412B are very close to the 0.2 M\({}_{\odot}\) WD DB model. The models predict cooling ages of \(\leq\) 0.1 Myr for 1551B and 2412B and 4 Myr for 1991B and 2481B. The mass and cooling age of the hot components are listed in Table 1. ## 5 Results and Discussion Using the UVIT, UVOT, and 2.2m ESO optical data, we discovered four FUV bright MS stars in the GGC NGC 362. We built the SEDs using VOSA to determine their physical characteristics and identify any companions that might be present. The fitting of single component SED to the observed data points reveals the UV excess in all four stars. To account for the UV excess, we fitted the SED of two components. The combined spectra of cool and hot components fit well to all four stars. This study indicates that these stars are in binary systems. Based on the current analysis of the SEDs of four FUV bright MS stars, the surface temperature, luminosity, radius, mass and age of the cool components 1551A, 1991A, 2412A, and 2481A are determined to be in the range of 6250 - 6750 K, 1.02 - 3.21 L\({}_{\odot}\), 0.90 - 1.53 R\({}_{\odot}\), 0.82 - 0.92 M\({}_{\odot}\), and 5.5 - 10 Gyr. The masses of these components are near the cluster's turn-off mass (0.8 M\({}_{\odot}\)). In comparison to the BSSs, their positions on the H-R diagram are close to the cluster's turn-off point. The derived age of the cool components reveals that they are younger than the cluster (11 Gyr). The present analysis leads us to believe that these cool components could be BLs. They have been regenerated by mass transfer like the BSSs but mixed with the normal MS population in the CMD. As the cluster ages, similar mass stars will evolve away from the MS, showing these lurkers as BSSs (Leiner et al., 2019). The surface temperatures, luminosities, and radii of the hot components 1551B, 1991B, 2412B, and 2481B were found to range from 14750 - 35000 K, 0.22 - 1.45 L\({}_{\odot}\), and 0.04 - 0.1 R\({}_{\odot}\), respectively, based on the best SED fits. The mass and cooling age of 1551B and 2412B are 0.20 M\({}_{\odot}\) and \(\leq\)0.1 Myr, respectively, while those of 2481B and 1991B are 0.186 M\({}_{\odot}\) and 4 Myr. Based on the estimated parameters, locations, and superimposed models in the H-R diagram, we can deduce that 1551B and 2412B are low-mass Helium core WDs, while 2481B and 1991B are ELM WDs. We found that the newly discovered FUV bright MS stars are binary systems. The stars 1551 and 2412 have a pair of BL+low-mass WDs, whereas 2481 and 1991 have a pair of BL+ELM WDs. Because BLs are a smaller mass group of BSSs, the formation mechanism may be similar to that of the BSSs. Similar to BSSs, these are post-mass transfer systems. The masses of hot companions are found to be 0.186 and 0.2 M\(\odot\). It is unlikely that such low-mass WD candidates will form through a single, isolated star evolution. They might have originated in binary systems. The mass of WDs created by a single star evolution is restricted to 0.4 M\(\odot\).(Brown et al., 2010). The age of the universe limits the lower end of the WD mass range. However, ELM WDs can be found in binaries (Ratzloff et al., 2019; Jadhav et al., 2019; Subramaniam et al., 2020; Vaidya et al., 2022; Dattatrey et al., 2023). Mass loss in the early phases of evolution sets a minimum value for WD masses. It is possible that the case A/B MT will lead to the He-core WDs through early envelope mass loss (Iben, 1991; Marsh et al., 1995). As a result, MT is required for the creation of low-mass WDs in tight binary systems where the companion tears away the low-mass WDs progenitor's envelope and the low-mass core fails to ignite the He core. The companion star acquired mass and evolved into BLs during the mass loss. Li et al. (2019) claim that ELM WDs with masses less than 0.22 M\({}_{\odot}\) are generated through the Roche lobe overflow channel. This leads us to the further conclusion that these BLs are produced via MT with a WD companion. The formation of investigated binary systems may be related to the dynamical evolution of the cluster. The cooling ages of low-mass WDs and ELM WDs are \(<\) 0.1 and 4 Myr, respectively. Dalessandro et al. (2013) suggest that NGC 362 is either experiencing or is about to experience the core collapse based on the density profile. The cooling ages of the hot companions show that they have only recently formed. As a result, we might infer that these binary systems might have originated during the cluster's core collapse. During the collapse, the centre density grows quickly, increasing the chances of gravitational interactions as well (Meylan & Heggie, 1997); the creation of binary systems that are brought into the MT regime by hardening processes caused by gravitational encounters (McMillan et al., 1990; Hurley et al., 2008). On the other hand, based on the radial distribution of BSS, Dalessandro et al. (2013) found that the core-collapse may have begun around 200 Myr ago, which is significantly earlier than the \begin{table} \begin{tabular}{l l l l l l l l l l l l l} \hline Name & RA & DEC & T\({}_{eff}\) (K) & Log g & \(\chi^{2}_{F}\) & L/L\({}_{\odot}\) & R/R\({}_{\odot}\) & V\({}_{gf}\) & V\({}_{gf}\) & N\({}_{fit}\) & Mass & Age \\ & & & & & & & & & & & (M\({}_{\odot}\)) & (Myr) \\ \hline 1551A & 15.89396 & -70.81936 & 6750\({}^{+125}_{-150}\) & 3.0 & 58.76 & 2.45\({}^{+0.009}_{-0.200}\) & 1.15\({}^{+0.005}_{-0.002}\) & 29.8 & 1.37 & 12/12 & 0.92 & 5500 \\ 1551B & & & 35000\({}^{+0.000}_{-1000}\) & 9.5 & & 1.45\({}^{+0.020}_{-0.170}\) & 0.04\({}^{+0.005}_{-0.005}\) & & & & 0.2 & \(<\) 0.10 \\ 1991A & 15.69960 & -70.90077 & 6250\({}^{+125}_{-500}\) & 3.5 & 6.07 & 3.21\({}^{+0.181}_{-0.10}\) & 0.15\({}^{+0.001}_{-0.003}\) & 4.68 & 0.21 & 12/12 & 0.85 & 10000 \\ 1991B & & & 14750\({}^{+250}_{-500}\) & 7.0 & & 0.34\({}^{+0.010}_{-0.010}\) & 0.09\({}^{+0.003}_{-0.003}\) & & & & 0.18 & \(\sim\) 4.0 \\ 2412A & 15.65796 & -70.82486 & 6500\({}^{+125}_{-150}\) & 3.5 & 4.39 & 1.99\({}^{+0.294}_{-0.271}\) & 1.18\({}^{+0.001}_{-0.001}\) & 4.37 & 0.88 & 12/12 & 0.87 & 8000 \\ 2412B & & & 23000\({}^{+0.000}_{-1000}\) & 6.5 & & 0.22\({}^{+0.017}_{-0.003}\) & 0.05\({}^{+0.008}_{-0.002}\) & & & & 0.2 & \(<\) 0.10 \\ 2481A & 15.69260 & -70.78316 & 6250\({}^{+125}_{-125}\) & 5.0 & 15.25 & 1.02\({}^{+0.270}_{-0.000}\) & 0.90\({}^{+0.015}_{-0.015}\) & 10.70 & 2.92 & 12/12 & 0.82 & 5500 \\ 2481B & & & 14750\({}^{+250}_{-750}\) & 9.5 & & 0.40\({}^{+0.008}_{-0.001}\) & 0.10\({}^{+0.013}_{-0.001}\) & & & & 0.18 & \(\sim\) 4.0 \\ \hline \end{tabular} \end{table} Table 1: The best-fit parameters of the cool and hot components. Here, T\({}_{eff}\) is the effective temperature in K, \(\chi^{2}_{F}\) is reduced \(\chi^{2}\), luminosity, radius, and mass are in the solar unit, V\({}_{gf}\) & V\({}_{gfb}\) are the visual goodness of fit parameters, N\({}_{fit}\) is the total number of points taken in to account during the fitting. The mass and age of cool and hot companions are listed in the last two columns. expected formation epoch of BLs. Therefore, presuming that cluster properties can play a role in the formation of BLs, it is more probable that they formed during the post-core-collapse rebounce phases. The four BLs are located more than 3\({}^{\prime}\) from the cluster centre. The origin of these BLs in the cluster's outskirts is unknown. We speculate that these BLs, like BSS, may have been created by collisions during resonance encounters between binary and single stars in the core and ejected into the outer regions by the recoil from the interactions (Bolte et al., 1993; Sigurdsson et al., 1994). The discovery of BLs + WDs systems in the outer area of the near-core collapse cluster has consequences for GCs' dynamical evolution theories. The total census of BLs on cluster MSs is mainly unknown because of the difficulty of discovering them. The stellar characteristics of this vital population are, as a result, poorly understood. Another possibility for the origin of these BLs is the mass transfer in the primordial close binary systems located in the outer region of the cluster. According to Dalessandro et al. (2013), NGC 362 is a dynamically old cluster, which means that even the external BSSs have sunk towards the cluster's centre due to the mass segregation process. As BLs have low mass compared to BSSs, the impact of mass segregation on BLs will be smaller than that on BSSs. As a result, they are still in the cluster's outer region. The BLs are located in the outer part of the cluster, which makes them an intriguing spectroscopic study target. The spectroscopy can provide an accurate estimation of the parameters and their nature to constrain their formation paths. Due to the low spatial resolution, we do not explore the BLs at the cluster's centre. To get a complete sample of BLs, we need high resolution and deeper UV observations. ## 6 Summary and conclusions The first detection of BLs in the GGC NGC 362 is presented. This result is based on observations from _AstroSat_ UVIT as well as archival data from UVOT and the 2.2m ESO/MPI telescopes. We can draw the following conclusions from this study. 1. We found four FUV-bright MS stars in cluster NGC 362. The SED study reveals an excess in FUV flux from all four sources. To account for the FUV excess, we fitted a two-component SED model. 2. The cool companions are fitted with the Kurucz model, and the hot companions are fitted with the Koester model. The parameters obtained from the two-component SED model are shown in Table 1. Based on our SED analysis, we conclude that the cool companions of all four sources are BLs. The hot companions of 1551 and 2412 are low-mass Helium core WDs, whereas 2481 and 1991 are ELM WDs. 3. The location of these four sources within the cluster suggests they were formed via the Case A/B mass transfer process. The cluster is currently undergoing a core collapse. The cooling age of hot companions indicates that they were generated very recently. We suggest that these binary systems were formed during the core collapse process and were ejected from the core due to gravitational interaction. ## Acknowledgements We warmly thank Pierre Bergeron for providing us with the WD cooling models generated for UVIT filters. I would like to thank Dr. Sindhu Pandey for her help with UVIT data reduction. I want to thank Gurpreet Singh for his assistance during this project. AS thanks the support of the SERB power fellowship. This paper is based on observations collected at the European Organization for Astronomical Research in the Southern hemisphere under ESO program 60.A-9121(A). ## Data availability The data utilized in this article will be provided upon request.
2308.09907
Imputing Brain Measurements Across Data Sets via Graph Neural Networks
Publicly available data sets of structural MRIs might not contain specific measurements of brain Regions of Interests (ROIs) that are important for training machine learning models. For example, the curvature scores computed by Freesurfer are not released by the Adolescent Brain Cognitive Development (ABCD) Study. One can address this issue by simply reapplying Freesurfer to the data set. However, this approach is generally computationally and labor intensive (e.g., requiring quality control). An alternative is to impute the missing measurements via a deep learning approach. However, the state-of-the-art is designed to estimate randomly missing values rather than entire measurements. We therefore propose to re-frame the imputation problem as a prediction task on another (public) data set that contains the missing measurements and shares some ROI measurements with the data sets of interest. A deep learning model is then trained to predict the missing measurements from the shared ones and afterwards is applied to the other data sets. Our proposed algorithm models the dependencies between ROI measurements via a graph neural network (GNN) and accounts for demographic differences in brain measurements (e.g. sex) by feeding the graph encoding into a parallel architecture. The architecture simultaneously optimizes a graph decoder to impute values and a classifier in predicting demographic factors. We test the approach, called Demographic Aware Graph-based Imputation (DAGI), on imputing those missing Freesurfer measurements of ABCD (N=3760) by training the predictor on those publicly released by the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA, N=540)...
Yixin Wang, Wei Peng, Susan F. Tapert, Qingyu Zhao, Kilian M. Pohl
2023-08-19T05:03:35Z
http://arxiv.org/abs/2308.09907v1
# Imputing Brain Measurements Across Data Sets via Graph Neural Networks ###### Abstract Publicly available data sets of structural MRIs might not contain specific measurements of brain Regions of Interests (ROIs) that are important for training machine learning models. For example, the curvature scores computed by Freesurfer are not released by the Adolescent Brain Cognitive Development (ABCD) Study. One can address this issue by simply reapplying Freesurfer to the data set. However, this approach is generally computationally and labor intensive (e.g., requiring quality control). An alternative is to impute the missing measurements via a deep learning approach. However, the state-of-the-art is designed to estimate randomly missing values rather than entire measurements. We therefore propose to re-frame the imputation problem as a prediction task on another (public) data set that contains the missing measurements and shares some ROI measurements with the data sets of interest. A deep learning model is then trained to predict the missing measurements from the shared ones and afterwards is applied to the other data sets. Our proposed algorithm models the dependencies between ROI measurements via a graph neural network (GNN) and accounts for demographic differences in brain measurements (e.g. sex) by feeding the graph encoding into a parallel architecture. The architecture simultaneously optimizes a graph decoder to impute values and a classifier in predicting demographic factors. We test the approach, called _D_emographic _A_ware _G_raph-based _I_mputation (_DAGI_), on imputing those missing Freesurfer measurements of ABCD (N=3760; minimum age 12 years) by training the predictor on those publicly released by the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA, N=540). 5-fold cross-validation on NCANDA reveals that the imputed scores are more accurate than those generated by linear regressors and deep learning models. Adding them also to a classifier trained in identifying sex results in higher accuracy than only using those Freesurfer scores provided by ABCD. Keywords:Brain measurements Feature imputation Graph representation learning. ## 1 Introduction Neuroscience heavily relies on ROI measurements extracted from structural magnetic resonance imaging (MRI) to encode brain anatomy [24]. However, public releases of brain measurements might not contain those that are important for a specific task. For example, the Freesurfer scores [6] publicly released by the ABCD study do not contain curvature measurements of cortical regions [3], which might be useful for identifying sex differences. While one could theoretically reapply the Freesurfer pipeline to generate those missing measurements, it requires substantial computational resources and manual labor, as, for example, the Freesurfer scores from thousands of MRIs would have to be quality controlled. A more efficient solution is to learn to impute missing brain measurements from the existing ones. Imputation involves estimating or filling in missing or incomplete data values based on the available data, thereby creating a complete dataset suitable for further analysis or modeling. Examples of popular approaches for imputing measurements are MICE [26] and k-nearest neighbors [8]. The state-of-the-art in this domain relies on deep learning models, such as using generative autoencoders [25] or graph convolutional networks [28, 23]. However, such methods assume that missing values are randomly distributed within a matrix capturing all measurements of a data set (refer to Figure 1 (a)). If each column now represents a measurement, estimating missing values in a column then partly relies on rows (or samples) for which that measurement exists. Here we aim to solve the issue that the entire column does not contain any values (Figure 1 (b)), i.e., some specific measurements are absent throughout an entire dataset. One could address this issue by combining the data set with the missing values with one that contains them, which then relates to the scenario in Figure 1 (a). However, the imputation now explicitly depends on the data set with the missing scores so if that data set is updated (e.g., ABCD yearly releases) so do all imputations, which could result in scores conflicting with those imputed based on earlier versions of the data set. We instead address this challenge by re-framing the imputation problem as a prediction task on a single (public) data set, such as NCANDA [1], that contains the missing measurements and shares some ROI measurements with the data set of interest. A deep learning model can then be trained on NCANDA to predict the curvature scores from the measurements that are shared with ABCD. Afterwards, the trained model is applied to ABCD (or other data sets that share those scores) to predict the missing curvature scores on ABCD. Consequently, our primary objective is to determine the most accurate mapping from the currently available shared measurements to the missing ones. Measurements of the same ROI (e.g., cortical thickness and volume) are highly dependent, and measurements of adjacent regions are more likely to be correlated than those from distant regions [16, 10]. To explicitly account for such dependencies, our prediction model is based on a graph neural network (GNN) [22] called Graph Isomorphism Network [29]. In our graph, each node represents an ROI and adjacent ROIs are connected via edges. In addition to modeling adjacency of ROIs, our prediction model also accounts for the dependencies between demographic factors and ROI measurements [11, 21]. For example, women tend to have higher gyrification in frontal and parietal regions than men, which results in the curvature of those ROIs being different between the sexes [13]. We account for this difference by feeding the GNN encodings into a parallel architecture that simultaneously optimizes a graph decoder for imputing values and a classifier for identifying sex. We apply our approach, called _D_emographic _A_ware _G_raph-based _I_mputation (_DAGI_), to impute Freesurfer measurements that are available in the NCANDA data set but are missing in ABCD (i.e., "mean curvature" and "Gaussian curvature") by explicitly taking advantage of those that are shared among them (i.e., "average thickness", "surface area" and "gray matter volume" ). Using 5-fold cross-validation, we then show on NCANDA that the accuracy of the imputed scores is significantly higher than those generated by linear regressors and deep learning models. Furthermore, We identify the brain ROIs important in the imputation task by visualizing the learned graph structure via GNNExplainer [30]. On the ABCD data set, adding the scores to a classifier in identifying sex results in significantly higher accuracy than only using those provided by ABCD or using those imputed by combing the ABCD with the NCANDA data set (Figure 1 (a)). ## 2 Method Let's assume that the first data set is represented by a matrix \(X^{1}\in\mathbb{R}^{v\times d}\) containing the cortical measurements of \(v\) regions, where \(d\) cortical measurements \(X_{i}\in\mathbb{R}^{d}\) are extracted from each region \(i\). Furthermore, let \(X^{2}\in\mathbb{R}^{v\times p}\) be the data matrix of the second data set, which is based on the same parcellation but contains a different set of measurements for each region, of which \(p(<d)\) are those also found in \(X^{1}\). Let \(X^{o}_{i}\in\mathbb{R}^{1\times p}\) be the \(p\) shared measures across datasets, and \(X^{m}_{i}\in\mathbb{R}^{1\times q}\) be the remaining \(q\) measurements only available in \(X^{1}\). Thus, \(X^{1}\) can be divided into \(X^{O}=[X^{o}_{1},...,X^{o}_{v}]^{T}\) and \(X^{M}=[X^{m}_{1},...,X^{m}_{v}]^{T}\). Our goal is to learn an imputation mapping \(X^{O}\to X^{M}\) so that we can impute the missing measurements on the second data set. To generate an accurate mapping, we first design a GNN implementation that accounts for dependencies among brain ROI measurements and in parallel consider demographic variations (e.g. sex) within those ROI measurements via a classifier. Figure 1: Scenarios of missing values : (a) missing values are being randomly distributed across the data set or (b) specific measurements are absent from a data set, which is the problem we aim to solve here. ### Graph-based Imputation We view the \(v\) regions as the nodes of a graph with \(X^{O}\) as node features. To capture adjacency among cortical ROIs and simplify training, we construct a sparse graph by adding an edge between two brain regions if they share a boundary on the cortical surface. This undirected graph with \(v\) nodes is then encoded by an "adjacency matrix" \(\mathbf{A}\in\mathbb{R}^{v\times v}\), where \(\mathbf{A}_{ij}\) is 1 if and only if nodes \(i\) and \(j\) are connected. As \(\mathbf{A}\) does not change across subjects, then each subject is encoded by the graph \(G=<X^{O},\mathbf{A}>\), whose node features are the subject-specific measurements \(X^{O}\). Given a graph \(G\), we aim to learn its encoding into node embeddings \(h_{G}\in\mathbb{R}^{v\times r}\) that is optimized for imputing missing ROI measurements and predicting the label, i.e., demographic factor sex (see Figure 2). The node embeddings are learned by a Graph Isomorphism Network (GIN) [29], which compares favorably to conventional GNNs such as GCN [9] in capturing high-order relationships across features of neighboring ROIs [29]. Each layer of a GIN learns the relationships between neighboring ROIs by first summing up the feature vectors of adjacent nodes. These new vectors are then mapped to hidden vectors via a multi-layer perceptron (MLP). The hidden vector \(h_{i}^{k}\) of a particular node \(i\) at the \(k\)-th layer is then defined as : \[h_{i}^{k}:=\text{MLP}\left((1+\varepsilon)\cdot h_{i}^{k-1}+\sum_{j\in\mathcal{ N}_{i}}h_{j}^{k-1}\right), \tag{1}\] where \(\mathcal{N}_{i}\) denotes nodes adjacent to node \(i\) (according to \(\mathbf{A}\)) and the weight \(\varepsilon\) of a node compared to its neighbors is learned. The node embeddings of the last layer \(h_{G}:=\{h_{i}\}_{i\in v}\) are then fed into a graph decoder, which again is a GIN. The decoder is trained to reconstruct the missing measurements \(X^{M}\) using \(h_{G}\) obtained from "shared measurements" \(X^{O}\) by deriving the mapping function \(f(\cdot)\) so that the predicted value \(\widehat{X}^{M}:=f(h_{G})\) Figure 2: Overview of our model: A GNN encodes both adjacency and measurements of brain ROIs into node embeddings, which are utilized by a graph decoder to impute missing values \(X^{M}\). The parallel (upper) branch refines the node representations by differentiating between the sexes. minimizes the loss function \[\mathcal{L}_{imp}:=\left\|X^{M}-f(h_{G})\right\|^{2}, \tag{2}\] where \(\|\cdot\|\) is the Euclidean distance. ### Demographic Aware Graph-based Imputation As mentioned, we implement a classifier in parallel to the graph decoder (Figure 2). Given the subject-specific node embedding \(h_{G}\) and label \(y_{G}\) (e.g., female or male), this classifier aims to learn a function \(g(\cdot)\) that maps the node embeddings of \(G\) to label \(y_{G}\), i.e., \(\widehat{y}_{G}:=g(h_{G})\). As shown in Figure 2, our model first applies a global mean pooling operation to \(h_{G}\) in order to extract the graph embedding required for the MLP to perform the classification [7]. The loss function optimized by the classifier is then \[\mathcal{L}_{cls}:=y_{G}\log\left(g(h_{G})\right)+(1-y_{G})\log\left(1-g(h_{G} )\right). \tag{3}\] To minimize this loss, the node embeddings \(h_{G}\) are optimized with respect to representing demographic differences. Explicitly accounting for demographic differences then improves the accuracy of the imputation task as the demographic factors (i.e., sex) estimated by the classifier provide additional information further constraining the search space. Thus, the overall loss function minimized by DAGI combines imputation and classification loss, i.e., \[\mathcal{L}_{total}:=\mathcal{L}_{imp}+\mathcal{L}_{cls}. \tag{4}\] ### Implementation We implement the model in PyTorch using the Adam optimizer with a learning rate of 0.01. The batch size is set to 32 and the number of epochs is 300. The dimension of node embedding \(r\) is 32. Our graph encoder is composed of two GIN layers, each containing an MLP with two fully-connected layers. Our graph decoder contains one GIN layer with four fully-connected layers. Following each GIN layer, we apply ReLU functions and batch normalization to enhance stability. Codes will be available at [https://github.com/Wangyixinxin/DAGI](https://github.com/Wangyixinxin/DAGI) ## 3 Experimental Results In this section, we evaluate DAGI on the NCANDA and ABCD data sets (described in Section 3.1). On NCANDA (Section 3.2), we determine the accuracy of the imputed measurements by our and other approaches by comparing them with real measurements via 5-fold cross-validation. We highlight the crucial role of explicitly accounting for the relationship between ROIs and the demographic factor sex in the imputation process by visualizing the learned embeddings and examining the discrepancy in the imputed measurements across the sexes. In an out-of-sample test on ABCD (Section 3.3), the curvature scores are not provided so we infer the accuracy from a classifier identifying sex just based on ABCD measurements, by including also our imputed ones, and by adding those imputed by alternative approaches that combine NCANDA and ABCD dataset in the training process. ### Dataset We utilize two publicly available datasets to evaluate our proposed model. The first data set (Release: NCANDA_PUBLIC_BASE_STRUCTURAL_V01 [20]) consists of baseline Freesurfer measurements of all 540 participants (270 females and 270 males) of NCANDA [1] that are between the ages 12-18 years and report no-to-low alcohol drinking in the past year. The Freesurfer score for each of the 34 bilateral cortical regions defined according to the Desikan-Killiany Atlas [5] consists of 5 regional measurements: average thickness, surface area, gray matter volume, mean curvature, and Gaussian curvature. The second public data release is the Data Release 4.0 of ABCD dataset [3], from which we use data from all 3760 adolescents (1682 females and 2078 males) collected between ages 12 to 13.8 years for our analysis. In addition to the average thickness, surface area and gray matter volume, ABCD released the "sulcal depth" but does not contain the two curvature scores released by NCANDA. Imputing those curvature scores from the three shared ones is the goal here. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{Mean Curvature} & \multicolumn{3}{c}{Gaussian Curvature} \\ \cline{2-7} & MSE & MAE & MRE & MSE & MAE & MRE \\ & (e\({}^{-3}\)) & (e\({}^{-2}\)) & & (e\({}^{-4}\)) & (e\({}^{-2}\)) & \\ \hline Linear Regression [18] & & & & & & \\ Direct & 9.40 & 2.95 & 40.36 & 3.15 & 1.63 & 15.68 \\ ROI-based & 8.52 & 2.12 & 31.77 & 2.24 & 1.12 & 9.58 \\ \hline Multi-layer Perceptron [2] & 8.89 & 2.56 & 35.65 & 2.99 & 1.58 & 12.90 \\ \hline GI & & & & & & \\ GCN [9] & 9.80 & 3.01 & 45.29 & 3.05 & 1.60 & 14.51 \\ GIN [29] & 7.87 & 1.99 & 28.65 & 1.88 & 1.05 & 7.22 \\ DAGI (Proposed) & **7.71** & **1.92** & **26.77** & **1.19** & **0.81** & **5.41** \\ \hline \hline \end{tabular} \end{table} Table 1: Imputation accuracy based on 5-fold cross-validation on NCANDA. GI refers to the implementation of DAGI without the classifier. The best results are shown in **bold**. Compared to DAGI, all error scores are significantly higher (\(p\leq 0.05\) based on two-sided paired t-test) with the exception of the MSE and MAE associated with the mean curvature scores produced by GIN. ### Experiments on NCANDA #### 3.2.1 Quantitative Comparison: In NCANDA, we measure the accuracy of our imputed measurements by performing 5-fold cross-validation and then record for each measurement type the average Mean Squared Error (MSE) and Mean Absolute Error (MAE) across all subjects. Based on MAE, we also compute the Mean Relative Error (MRE) to have an error score that is indifferent to the scale of the inferred measurements. To put those accuracy scores into context, we repeat the 5-fold cross-validation for other approaches. Specifically, we impute the measurements via an MLP [2] and a linear regression model [18] (a.k.a., direct linear regression). As not all measurements across ROIs necessarily have a direct relationship with one another, the "ROI-based Linear Regression" separately fits a linear model to each ROI so that it imputes missing measurements as the linear combinations of observed measures within each individual region. We investigate our modeling choices by imputing scores without the classifier (referring to as Graph Imputation, or GI) and by replacing the GIN with a GCN [9]. We apply two-sided paired t-tests between the error scores recorded for the proposed DAGI and each alternative approach and label p-values \(\leq\) 0.05 as being significantly different. According to Table 1, the two approaches oblivious to ROIs, i.e., linear regression and MLP, received relatively high error scores indicating the importance of accounting for ROI-specific characteristics in the imputation process. This observation is further supported as their error scores are significantly higher (p\(<\)0.0017 across all scores) than those of the ROI-based linear regression. Significantly lower MRE scores than the ROI-based linear regression are recorded for GIN (p\(<\)0.0001), which supports our choice for encoding adjacency between ROIs in a graph structure. This encoding of the graph structure is significantly more accurate (p\(<\)0.0085 across all scores) than the alternative based on the GCN model. The MRE is further significantly reduced (p\(<\)0.0001) by guiding the training of the imputation model using the sex classifier, i.e., produced by DAGI. In summary, DAGI reported the lowest error scores across all metrics, which supports our modeling choices. #### 3.2.2 The Importance of the Classifier for Imputation: To gain a deeper understanding of the importance of modeling demographic factors (i.e., sex) for imputing each curvature score, Figure 3 (a) plots the Wasserstein distance [27] between the sex-specific distributions of the imputed measurements for DAGI and GI (i.e., DAGI with GIN and without classifier). We choose the Wasserstein distance as it is a fairly robust metric that ignores outliers by comparing the overall shape of distributions. While for both curvature scores the distance for DAGI is higher for the majority of ROIs (20 out of 34 ROIs for "mean curvature" and 19 ROIs for "Gaussian curvature"), the difference compared to GIN across all regions is significant (p = 0.03, two-sided paired t-test) only with respect to the "Gaussian curvature". This finding supports that sex is important for imputations for both curvature scores but more so for the "Gaussian curvature", which would also explain why in Table 1 all error scores of DAGI are significantly lower for this curvature score (than GI) but for the mean curvature it is only the MRE that is significantly lower. **Visualizing the Node Embeddings:** Next we investigate the importance of modeling sex and ROI adjacency for the imputation task by visualizing the node embeddings of the two implementations. Shown in Figure 3 (b) are the t-SNE plots [15] of those embeddings, where each dot represents an imputed ROI measurement of an NCANDA subject and in the top row the color refers to a specific ROI. While the embeddings by GI are clearly separated by region (Figure 3 (b) left, first row), they fail to distinguish measurements by sex, i.e., blue and orange dots overlap with each other (Figure 3 (b) left, second row). Our approach, (Figure 3 (b) right), effectively distinguishes the sexes in the latent space (first row) while also keeping separate clusters for the ROIs (second row) as highlighted by the red circles. This separation is important for imputing the ROI measurements according the to error scores reported in Table 1. **Visualizing the Brain Graph:** We investigate the importance of specific ROIs in imputing the measurements by visualizing the graph structure via the GNNExplainer [30]. GNNExplainer defines the subgraph most important for the task at hand as the one whose predicted distribution maximizes the mutual information with the one derived from the original graph. Figure 4 visualizes this subgraph with red edges (i.e., the connection between ROIs). The importance of individual nodes (i.e., ROI) is encoded by their radius. It is striking that the subgraph of DAGI (Figure 4 (c)) is a combination of the graphs of the other two models, i.e., the importance of nodes is similar to those of the approach with Figure 3: The importance of the classifier for imputation. (a) Wasserstein distance between sexes with respect to imputed ROI curvature scores. The distances are higher for DAGI (vs. GI) and that difference is significant with respect to the Gaussian Curvature according to the two-sided paired t-test; (b) t-SNE visualization of node embeddings color-coded by sex (first row) and by ROIs (second row). Embeddings of DAGI (right column) have clearer sex differences (e.g., highlighted by red circles) and larger separation between ROIs (e.g., blue circles) compared to embeddings of GI. solely Demographic Aware module, referring to as DA (Figure 4 (a)) while the importance of edges agrees with the model that only relies on the imputation model, i.e., GI in Figure 4 (b). This suggests that individual ROIs are more important for classification while the interaction between ROIs is more important for imputation. Based on those plots, we conclude that identifying sex is mostly driven by pars opercularis, rostral middle frontal, and superior frontal regions, which is in line with the literature [14, 12]. However, imputation heavily relies on the interaction between neighboring regions (such as between post central and insula regions). ### Out-of-sample Test on ABCD. Using DAGI trained on NCANDA (i.e., the most accurate model according to Table 1), we now impute the missing curvature scores on ABCD. Given the lack of "ground truth" with respect to the missing ABCD measurements, we indirectly evaluate the quality of the imputed values by comparing the accuracy of a classifier identifying sex on the 3760 ABCD participants with and without utilizing the imputed measurements. This experimental setup is based on the observation that if the imputed measurements are accurate then they should hold pertinent and discriminatory details that could be utilized for downstream tasks, such as sex classification. The sex classifier is a three-layer MLP model, whose balanced test accuracy is measured via 5-fold cross-validation. In order to remove the confounding effect of brain size on sex classification, we normalize the "average thickness", "surface area" and "gray matter volume" measurements by the supratentorial volume [19, 20]. Note, the imputed curvature scores are left unchanged since they are not confounded by brain size as their Pearson correlation [4] with the supratentorial volume is insignificant for all regions (maximum correlation is 0.053, p\(<\)0.01). Figure 4: Graph node and edge importance according to GNNExplainer [30]. Each node corresponds to an ROI. Larger nodes represent higher contributions with the most influential ones highlighted by a yellow circle. Red edges are those of the subgraph deemed most important for the task at hand. According to the figure, individual ROIs are more important for sex classification ((a) and (c)), while the relationship between ROIs is more important for imputation ((b) and (c)). According to Table 2, the balanced accuracy of the classifier just on the ABCD measurements is 83.8 %, which then significantly improves (p=0.008, McNemar's test [17]) to 84.5 % once the imputed scores are added. To put the improvement into context, we also record the classification accuracy with respect to curvature scores generated by the traditional imputation methods MICE [26] and the deep learning-based GINN [23]. Since these methods are originally designed for randomly missing values (Figure 1 (a)) and thus cannot work on the ABCD dataset alone, we train them to impute missing values on matrices containing both the NCANDA and ABCD measurements. Surprisingly, the inclusion of the curvature measurements imputed by MICE and GINN results in significantly lower classification accuracy than DAGI (p\(<\)0.01, McNemar's test). The accuracy is even worse than the classifier solely based on ABCD scores. This suggests that they fail to accurately impute the curvature scores and instead mislead the classifier by making the data more noisy. This might be attributed to the fact that these methods are typically designed for randomly distributed missing values, and thus may not be suitable for our specific scenario where specific measurements are entirely missing in a dataset (Figure 1 (b)). For this scenario, the significant improvement achieved via the curvature scores predicted by DAGI demonstrates the utility of imputing brain measurements for enhancing downstream tasks. ## 4 Conclusion The accuracy of classifiers (e.g. identifying sex from brain ROI measurements) applied to publicly available data can be negatively impacted by the absence of entire measurements from that data set. Instead of imputing the scores by merging the data set with ones that contain the measurements, we propose to rephrase the problem as a prediction task in which we learn to predict missing measurements from those that are shared across data sets. We do so by coupling a graph neural network capturing the relationship between brain regions and a classifier to model demographic differences in ROI brain measurements. Compared to existing technology, our proposed method is significantly more accurate in imputing curvature scores on NCANDA. Imputing the measurements on ABCD and then feeding them into a classifier also result in more accurate sex \begin{table} \begin{tabular}{l l l} \hline \hline Measurements Used by Classifier & & Accuracy \\ \hline \hline \multirow{2}{*}{Only ABCD scores} & \multicolumn{2}{c}{0.838} \\ \cline{2-3} & MICE [26] (trained on NCANDA \& ABCD) & 0.811 \\ \cline{2-3} & GINN [23] (trained on NCANDA \& ABCD) & 0.832 \\ \cline{2-3} & DAGI (trained on NCANDA only) & **0.845** \\ \hline \hline \end{tabular} \end{table} Table 2: Balanced accuracy of an MLP classifying sex based on ABCD with and without imputed brain measurements. The best results are shown in **bold**. All accuracies are significantly lower than DAGI (p-value \(\leq 0.01\) according to McNemar’s test). identification than solely relying on the ROI measurements provided by ABCD. Overall, our framework provides a novel and effective approach for imputing missing measurements across data sets as it is only trained once on the data set that contains the values. This might also have important implications for generalizing neuroscientific findings of deep learning approach across data sets as they could now rely on the same set of measurements. #### Acknowledgments This work was partly supported by funding from the National Institute of Health (DA057567, AA021697, AA017347, AA010723, AA005965, and AA028840), the DGIST R&D program of the Ministry of Science and ICT of KOREA (22-KU Joint-02), Stanford School of Medicine Department of Psychiatry and Behavioral Sciences Faculty Development and Leadership Award, and by the Stanford HAI Google Cloud Credit.
2301.02381
Existence of primitive pairs with two prescribed traces over finite fields
Given $F= \mathbb{F}_{p^{t}}$, a field with $p^t$ elements, where $p $ is a prime power, $t\geq 7$, $n$ are positive integers and $f=f_1/f_2$ is a rational function, where $f_1, f_2$ are relatively prime, irreducible polynomials with $deg(f_1) + deg(f_2) = n $ in $F[x]$. We construct a sufficient condition on $(p,t)$ which guarantees primitive pairing $(\epsilon, f(\epsilon))$ exists in $F$ such that $Tr_{\mathbb{F}_{p^t}/\mathbb{F}_p}(\epsilon) = a$ and $Tr_{\mathbb{F}_{p^t}/\mathbb{F}_p}(f(\epsilon)) = b$ for any prescribed $a,b \in \mathbb{F}_{p}$. Further, we demonstrate for any positive integer $n$, such a pair definitely exists for large $t$. The scenario when $n = 2$ is handled separately and we verified that such a pair exists for all $(p,t)$ except from possible 71 values of $p$. A result for the case $n=3$ is given as well.
Aakash Choudhary, R. K. Sharma
2023-01-06T05:17:21Z
http://arxiv.org/abs/2301.02381v1
# Existence of primitive pairs with two prescribed traces over finite fields ###### Abstract Given \(F=\mathbb{F}_{p^{t}}\), a field with \(p^{t}\) elements, where \(p\) is a prime power, \(t\geq 7\), \(n\) are positive integers and \(f=f_{1}/f_{2}\) is a rational function, where \(f_{1},f_{2}\) are relatively prime, irreducible polynomials with \(deg(f_{1})+deg(f_{2})=n\) in \(F[x]\). We construct a sufficient condition on \((p,t)\) which guarantees primitive pairing \((\epsilon,f(\epsilon))\) exists in \(F\) such that \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)=a\) and \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(f(\epsilon))=b\) for any prescribed \(a,b\in\mathbb{F}_{p}\). Further, we demonstrate for any positive integer \(n\), such a pair definitely exists for large \(t\). The scenario when \(n=2\) is handled separately and we verified that such a pair exists for all \((p,t)\) except from possible \(71\) values of \(p\). A result for the case \(n=3\) is given as well. **Keywords:** Character, Finite fields, Primitive elements. 2020 Mathematics Subject Classification: 12E20, 11T23 ## 1 Introduction Let \(\mathbb{F}_{p}\) represent a field of finite order \(p\), where \(p=q^{r}\) for some prime \(q\) and \(r\), a positive integer. The multiplicative group of \(\mathbb{F}_{p}\) is cyclic, it is denoted by \(\mathbb{F}_{p}^{*}\) and a generator of \(\mathbb{F}_{p}^{*}\) is referred to as a primitive element in \(\mathbb{F}_{p}\) The field \(\mathbb{F}_{p}\) has \(\phi(p-1)\) primitive elements, where \(\phi\) is the Euler's totient function. Let \(\mathbb{F}_{p^{t}}\) denote an extension of \(\mathbb{F}_{p}\) of degree \(t\) for some positive integer \(t\). A necessary and sufficient condition for an element \(\epsilon\in\mathbb{F}_{p^{t}}^{*}\) to be primitive is that it is a root of an irreducible polynomial of degree \(t\) over \(\mathbb{F}_{p}\) and such an irreducible polynomial is referred to as primitive polynomial. For \(\epsilon\in\mathbb{F}_{p^{t}}\), the trace of \(\epsilon\) over \(\mathbb{F}_{p}\) denoted by \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)\), is defined as \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)=\epsilon+\epsilon^{p}+ \epsilon^{p^{2}}+\cdots+\epsilon^{p^{t-1}}\). In Cryptographic schemes such as Elgamel encryption scheme and the Diffie-Hellman key exchange, primitive elements serve as the fundamental building blocks. Numerous applications of primitive elements can be found in Coding theory and Cryptography [10], making the study of primitive elements and primitive polynomials an active research field. Please refer to [9] for more information about the existence of primitive elements in finite fields. For any rational function \(f(x)\in\mathbb{F}_{p}(x)\) and \(\epsilon\in\mathbb{F}_{p}\) we call the pair \((\epsilon,f(\epsilon))\), a primitive pair if both \(\epsilon\) and \(f(\epsilon)\) are primitive elements in \(\mathbb{F}_{p}\). In general, if \(\epsilon\) is primitive, \(f(\epsilon)\) need not be primitive. For instance, take \(x^{2}+3x+2\in\mathbb{F}_{7}[x]\), then \(3,5\) are primitive elements in \(\mathbb{F}_{7}\) but none of \(f(3)\) and \(f(5)\) are. In 1985, Cohen [3] introduced the term _"primitive pair"_ and he verified the existence of primitive pairs \((\epsilon,f(\epsilon))\) in \(\mathbb{F}_{p}\) for linear polynomials \(f(x)=x+k\in\mathbb{F}_{p}[x]\). Since then many researchers have conducted studies in this area [12, 13, 7, 14]. Most recently, Cohen, Sharma and Sharma [4] have supplied a condition that ensures the occurrence of primitive pair \((\epsilon,f(\epsilon))\) in \(\mathbb{F}_{p}\) for non-exceptional rational function \(f\), i.e., \(f\) is not of the form \(cx^{j}g^{k}(x)\), where \(j\in\mathbb{Z}\), \(k>1\) that divides \(p-1\) and \(c\in\mathbb{F}_{p}^{*}\), for any \(g(x)\in\mathbb{F}_{p}(x)\). Jungnickel, Vanstone [8] identified a sufficient condition for the occurrence of primitive elements \(\epsilon\in\mathbb{F}_{p^{t}}\) with a prescribed trace of \(\epsilon\). Later Cohen [5] extended the result with some exceptions. Chou and Cohen [2], in 2014, addressed the issue of the existence of primitive element \(\epsilon\in\mathbb{F}_{p^{t}}\) such that \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)=Tr_{\mathbb{F}_{p^{t}}/ \mathbb{F}_{p}}(\epsilon^{-1})=0.\) Cao and Wang [1], for \(t\geq 29\), established a condition for the existence of primitive pair \((\epsilon,f(\epsilon))\) with \(f(x)=\frac{x^{2}+1}{x}\in\mathbb{F}_{p^{t}}(x)\) such that for prescribed \(a,b\in\mathbb{F}_{p}^{*}\), \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)=a\) and \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon^{-1})=b\). In 2018, Gupta, Sharma and Cohen [7], for the same rational function and prescribed \(a\in\mathbb{F}_{p}\), presented a condition that ensures the existence of primitive pair \((\epsilon,f(\epsilon))\) in \(\mathbb{F}_{p^{t}}\) with \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)=a\) for \(t\geq 5\). Then in 2019, Gupta and Sharma [14] extended the result to the rational function \(\Gamma_{M}(x)=\frac{a_{11}x^{2}+a_{12}x+a_{13}}{a_{22}x+a_{23}}\), where \(M=\begin{pmatrix}a_{11}&a_{12}&a_{13}\\ 0&a_{22}&a_{23}\end{pmatrix}\in\mathbb{M}_{2\times 3}(\mathbb{F}_{p^{t}})\) is any matrix of rank 2, and if \(\Gamma_{M}(x)=\lambda x\) or \(\lambda x^{2}\) for some \(\lambda\in\mathbb{F}_{p^{t}}\), then \(\lambda=1\). In 2021, Sharma and Sharma [11] examined the rational function \(f=f_{1}/f_{2}\) in \(\mathbb{F}_{p^{t}}(x)\), where \(f_{1}\) and \(f_{2}\) are relatively prime, irreducible polynomials and proved that for prescribed \(a,b\in\mathbb{F}_{p}\), the existence of primitive pair \((\epsilon,f(\epsilon))\) in \(\mathbb{F}_{p^{t}}\) such that \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)=a\) and \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon^{-1})=b\) for \(t\geq 7\). Prior to this article, for primitive pairs, traces were considered for \(\epsilon\) and \(\epsilon^{-1}\). In this article, we will consider the trace onto the element \(\epsilon\) and its image under \(f\), i.e, \(f(\epsilon)\). Some terminology and conventions are introduced for explanation. We say that a non-zero polynomial \(f\) over \(\mathbb{F}_{p}[x]\) has degree \(k\geq 0\), if \(f(x)=a_{k}x^{k}+a_{k-1}x^{k-1}+\cdots+a_{1}x+a_{0}\), where \(a_{k}\neq 0\) and we write the degree of \(f\) as \(deg(f)=k\). Next, we suppose that, for a rational function \(f(x)=\frac{f_{1}(x)}{f_{2}(x)}\in\mathbb{F}_{p}(x)\), \(f_{1}\) and \(f_{2}\) are relatively prime, irreducible polynomials and define the degree-sum as \(degsum(f)=deg(f_{1})+deg(f_{2})\). We will now define various sets that will play a crucial role in this article. 1. We define \(R_{p,t}(n_{1},n_{2})\) to represent the set of all rational function \(f(x)=\frac{f_{1}(x)}{f_{2}(x)}\in\mathbb{F}_{p^{t}}(x)\) such that \(f_{1}\) and \(f_{2}\) are relatively prime, irreducible polynomials over \(\mathbb{F}_{p^{t}}\) with \(deg(f_{1})=n_{1}\) and \(deg(f_{2})=n_{2}\). 2. Denote \(A_{n_{1},n_{2}}\) as the set consisting of pairs \((p,t)\in\mathbb{N}\times\mathbb{N}\) such that for any \(f\in R_{p,t}(n_{1},n_{2})\) and prescribed \(a,b\in\mathbb{F}_{p}\), \(\mathbb{F}_{p^{t}}\) contains an element \(\epsilon\) such that \((\epsilon,f(\epsilon))\) is a primitive pair with \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)=a\) and \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(f(\epsilon))=b\). 3. Define, \(R_{p,t}(n)=\bigcup_{n_{1}+n_{2}=n}R_{p,t}(n_{1},n_{2})\) and \(A_{n}=\bigcap_{n_{1}+n_{2}=n}A_{n_{1},n_{2}}\). First, in this paper, for \(n\in\mathbb{N}\), we consider \(f(x)\in R_{p,t}(n)\) and \(a,b\in\mathbb{F}_{p}\), and then verify that there exists an element \(\epsilon\in\mathbb{F}_{p^{t}}\) such that \((\epsilon,f(\epsilon))\) is a primitive pair in \(\mathbb{F}_{p^{t}}\) with \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)=a\) and \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(f(\epsilon))=b\), i.e., we provide a sufficient condition on \(p^{t}\) such that \((p,t)\in A_{n}\). Furthermore, using a sieve variation of this sufficient condition, we prove the following result: **Theorem 1.1**.: _Let t, q, r, p \(\in\mathbb{N}\) be such that q is a prime number, \(t\geq 7\) and \(p=q^{r}\). Suppose p and t assumes none of the following values:_ 1. \(2\leq p\leq 16\) _or_ \(p=19,23,25,27,31,37,43,49,61,67,79\) _and_ \(t=7\) 2. \(2\leq p\leq 31\) _or_ \(p=32,37,41,43,47,83\) _and_ \(t=8\); 3. \(2\leq p\leq 8\) _or_ \(p=11,16\) _and_ \(t=9\)_;_ 4. \(p=2,3,4,5,7\) _and_ \(t=10,12\)_;_ 5. \(p=2,3,4\) _and_ \(t=11\)_;_ 6. \(p=2\) _and_ \(t=14,15,16,18,20,24\)_._ _Then \((p,t)\in A_{2}\)._ _Note:-_ The exceptions in above theorem need not be true exceptions, they are possible exceptions. SageMath [16] is used to perform all nontrivial calculations required throughout this article. ## 2 Preliminaries In this section, we present some basic concepts, notations, and results that will be used in forthcoming sections of this article. Throughout the article, \(t\) is a positive integer, \(p\) is an arbitrary prime power and \(\mathbb{F}_{p}\) is a finite field of order \(p\). ### Definitions 1. A character of a finite abelian group \(G\) is a homomorphism \(\chi\) from the set \(G\) into \(Z^{1}\), where \(Z^{1}\) is the set of all elements of complex field \(\mathbb{C}\) with absolute value \(1\). The trivial character of \(G\) denoted by \(\chi_{0}\), is defined as \(\chi_{0}(g)=1\) for all \(g\in G\). In addition, the set of all characters of \(G\), denoted by \(\hat{G}\), forms a group under multiplication, which is isomorphic to \(G\). The order of a character \(\chi\) is the least positive integer \(d\) such that \(\chi^{d}=\chi_{0}\). For a finite field \(\mathbb{F}_{p^{t}}\), a character of the additive group \(\mathbb{F}_{p^{t}}\) is called an additive character and that of the multiplicative group \(\mathbb{F}_{p^{t}}^{*}\) is called a multiplicative character. 2. For \(u\), a divisor of \(p^{t}-1\), an element \(\zeta\in\mathbb{F}_{p^{t}}^{*}\) is called \(u\)-\(free\), if whenever \(\zeta=\xi^{s}\), where \(\xi\in\mathbb{F}_{p^{t}}\) and \(s|u\) implies \(s=1\). We see that an element \(\zeta\in\mathbb{F}_{p^{t}}^{*}\) is \((p^{t}-1)\)-\(free\) if and only if it is a primitive element of \(\mathbb{F}_{p^{t}}\). For more information on characters, primitive elements and finite fields, we refer the reader to [9]. The following conclusion holds as a particular case of [15, Lemma 10]: **Lemma 2.1**.: _Let u be a divisor of \(p^{t}-1\), \(\zeta\in\mathbb{F}_{p^{t}}^{*}\), then we have:_ \[\sum_{s|u}\frac{\mu(s)}{\phi(s)}\sum_{\chi_{s}}\chi_{s}(\zeta)=\left\{\begin{array} []{ll}\frac{u}{\phi(u)}&if\ \zeta\ is\ u-free,\\ 0&otherwise\end{array}\right.\] _where \(\mu(.)\) is the Mobius function and \(\phi(.)\) is the Euler function, \(\chi_{s}\) runs through all the \(\phi(s)\) multiplicative characters over \(\mathbb{F}_{p^{t}}^{*}\) with order \(s\)._ Therefore for \(u\), a divisor of \(p^{t}-1\) \[\rho_{u}:\epsilon\mapsto\theta(u)\sum_{s|u}\frac{\mu(s)}{\phi(s)}\sum_{\chi_{s }}\chi_{s}(\epsilon) \tag{1}\] gives a characteristic function for the subset of \(u\)-\(free\) elements of \(\mathbb{F}_{p^{t}}^{*}\), where \(\theta(u)=\phi(u)/u\). Also for \(a\in\mathbb{F}_{p}\), \[\tau_{a}:\epsilon\mapsto\frac{1}{p}\sum_{\psi\in\mathbb{F}_{p}^{t}}\psi(Tr_{ \mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)-a) \tag{2}\] is a characteristic function for the subset of \(\mathbb{F}_{p^{t}}\) whose elements satisfy \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)=a\). From [9, Theorem 5.7], any additive character \(\psi\) of \(\mathbb{F}_{p}\) can be derived by \(\psi(a)=\psi_{0}(ua)\), where \(\psi_{0}\) is the canonical additive character of \(\mathbb{F}_{p}\) and \(u\) is an element of \(\mathbb{F}_{p}\) corresponding to \(\psi\). Thus \[\tau_{a} =\frac{1}{p}\sum_{\psi\in\mathbb{F}_{p}^{t}}\psi_{0}(Tr_{\mathbb{ F}_{p^{t}}/\mathbb{F}_{p}}(u\epsilon)-ua)\] \[=\frac{1}{p}\sum_{u\in\mathbb{F}_{p}}\hat{\psi_{0}}(u\epsilon) \psi_{0}(-ua) \tag{3}\] where \(\hat{\psi_{0}}\) is the additive character of \(\mathbb{F}_{p^{t}}\) defined by \(\hat{\psi_{0}}(\epsilon)=\psi_{0}(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}( \epsilon))\). In next theorem, we will make major use of the results given below by Wang and Fu [6] in 2014. **Lemma 2.2**.: _[_6_, Theorem 5.5]_ _Let \(F(x)\in\mathbb{F}_{p^{d}}(x)\) be a rational function. Write \(F(x)=\prod_{j=1}^{k}f_{j}(x)^{r_{j}}\), where \(f_{j}(x)\in\mathbb{F}_{p^{d}}[x]\) are irreducible polynomials and \(r_{j}\) are non zero integers. Let \(\chi\) be a multiplicative character of \(\mathbb{F}_{p^{d}}\). Suppose that the rational function \(\prod_{i=1}^{d-1}f(x^{p^{i}})\) is not of the form \(h(x)^{ord(\chi)}\in\mathbb{F}_{p^{d}}(x)\), where \(ord(\chi)\) is the order of \(\chi\), Then we have_ \[\bigg{|}\sum_{\epsilon\in\mathbb{F}_{p},f(\epsilon)\neq 0,\infty}\chi(F( \epsilon))\bigg{|}\leq\bigg{(}d\sum_{j=1}^{k}deg(f_{j})-1\bigg{)}p^{\frac{1}{2 }}.\] **Lemma 2.3**.: _[_6_, Theorem 5.6]_ _Let f(x), g(x) \(\in\mathbb{F}_{p^{t}}(x)\) be rational functions. Write f(x) = \(\prod_{j=1}^{k}f_{j}(x)^{r_{j}}\), where \(f_{j}(x)\in\mathbb{F}_{p^{t}}[x]\) are irreducible polynomials and \(r_{j}\) are non-zero integers. Let \(D_{1}=\sum_{j=1}^{k}deg(f_{j})\), \(D_{2}=max\{deg(g),0\}\), \(D_{3}\) is the degree of denominator of g(x) and \(D_{4}\) is the sum of degrees of those irreducible polynomials dividing denominator of \(g\) but distinct from \(f_{j}(x)\)( \(j\)= 1,2,...,k). Let \(\chi\) be a multiplicative character of \(\mathbb{F}_{p^{t}}\), and let \(\psi\) be a nontrivial additive character of \(\mathbb{F}_{p^{t}}\). Suppose g(x) is not of the form \(v(x)^{p^{t}}-v(x)\) in \(\mathbb{F}_{p^{t}}(x)\). Then we have_ \[\bigg{|}\sum_{\epsilon\in\mathbb{F}_{p^{t}},f(\epsilon)\neq 0,\infty,g( \epsilon)\neq\infty}\chi(f(\epsilon))\psi(g(\epsilon))\bigg{|}\leq(D_{1}+D_{2 }+D_{3}+D_{4}-1)p^{\frac{t}{2}}.\] Evidently, both the sufficient condition (Theorem 3.1) and its sieving variation (Theorem 3.4) are entirely dependent on \(p^{t}\) and the degrees of the numerator and denominator polynomials of the rational function. It is easy to see that the Trace part of the main result in [11] is a special case of our finding for \(f(x)=\frac{1}{x}\). For every \(\kappa\in\mathbb{N}\), we will use \(\omega(\kappa)\) to represent the number of distinct prime divisors of \(\kappa\), and \(W(\kappa)\) to represent the number of square free divisors of \(\kappa\). Clearly, \(W(\kappa)=2^{\omega(\kappa)}\). ## 3 Sufficient Condition Let \(k_{1},k_{2},p,t\in\mathbb{N}\) be such that \(p\) is a prime power and \(k_{1}\), \(k_{2}\) are positive integers which divide \(p^{t}-1.\) Let \(a,b\in\mathbb{F}_{p},\ f(x)\in R_{p,t}(n)\). Let \(A_{f,a,b}(k_{1},k_{2})\) represents the set consisting of all those elements \(\epsilon\in\mathbb{F}_{p^{t}}\) such that \(\epsilon\) is \(k_{1}\)-free, \(f(\epsilon)\) is \(k_{2}\)-free, \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)=a\), and \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{t}}(f(\epsilon))=b\). We now verify the sufficient condition as follows: **Theorem 3.1**.: _Suppose t, n, p \(\in\mathbb{N}\) and \(p\) is a prime power. Suppose that_ \[p^{\frac{t}{2}-2}>(2n+1)W(p^{t}-1)^{2}.\] _Then \((p,t)\in A_{n}.\)_ Proof.: In order to prove this result it suffices to demonstrate that \(A_{f,a,b}(k_{1},k_{2})>0\) for every \(f(x)\in R_{p,t}(n)\) and for every prescribed \(a,b\in\mathbb{F}_{p}\). Suppose that \(f(x)\in R_{p,t}(n)\) be a rational function and that \(a,b\in\mathbb{F}_{p}\). Let \(P\) represent the collection of zeroes and poles of \(f(x)\in\mathbb{F}_{p^{t}}\) and \(P^{\prime}=P\cup\{0\}\). Let \(k_{1},k_{2}\) be divisors of \(p^{t}-1\). Then by definition, \(A_{f,a,b}(k_{1},k_{2})\) will be given by \[A_{f,a,b}(k_{1},k_{2})=\sum_{\epsilon\in\mathbb{F}_{p^{t}}-P^{\prime}}\rho_{k_ {1}}(\epsilon)\rho_{k_{2}}(f(\epsilon))\tau_{a}(\epsilon)\tau_{b}(f(\epsilon)).\] Using the characteristic functions (1) and (3) defined in the previous section, we obtain \[A_{f,a,b}(k_{1},k_{2})=\frac{\theta(k_{1})\theta(k_{2})}{p^{2}}\sum_{s_{1}|k_ {1},s_{2}|k_{2}}\frac{\mu(s_{1})\mu(s_{2})}{\phi(s_{1})\phi(s_{2})}\sum_{s_{1},s_{2}}\chi_{f,a,b}(s_{1},s_{2})\] where \(\theta(k_{i})=\frac{\phi(k_{i})}{k_{i}};\ i=1,2\) and \[\chi_{f,a,b}(s_{1},s_{2})=\sum_{u,v\in\mathbb{F}_{p}}\psi_{0}(-au-bv)\sum_{ \epsilon\in\mathbb{F}_{p^{t}}-P^{\prime}}\chi_{s_{1}}(\epsilon)\chi_{s_{2}}( \epsilon_{0})\hat{\psi}_{0}(u\epsilon+v\epsilon_{0})\] where \(\epsilon_{0}=f(\epsilon)\). It follows from [9, Example 5.1] that, for any divisors \(s_{1},\ s_{2}\) of \(p^{t}-1\), there exist integers \(m_{1},\ m_{2}\) with \(0<m_{1},\ m_{2}<p^{t}-1\) such that \(\chi_{s_{1}}(x)=\chi_{p^{t}-1}(x^{m_{1}})\) and \(\chi_{s_{2}}(x)=\chi_{p^{t}-1}(x^{m_{2}})\). Thus \[\chi_{f,a,b}(s_{1},s_{2})=\sum_{u,v\in\mathbb{F}_{p}}\psi_{0}(-au-bv)\sum_{ \epsilon\in\mathbb{F}_{p^{t}}-P^{\prime}}\chi_{p^{t}-1}(\epsilon^{m_{1}}f( \epsilon)^{m_{2}})\hat{\psi}_{0}(u\epsilon+v\epsilon_{0}) \tag{4}\] \[=\sum_{u,v\in\mathbb{F}_{p}}\psi_{0}(-au-bv)\sum_{\epsilon\in\mathbb{F}_{p^{t} }-P^{\prime}}\chi_{p^{t}-1}(F_{1}(\epsilon))\hat{\psi}_{0}(F_{2}(\epsilon)), \tag{5}\] where \(F_{1}(x)=x^{m_{1}}f(x)^{m_{2}}\in\mathbb{F}_{p^{t}}(x)\) and \(F_{2}(x)=ux+vf(x)\in\mathbb{F}_{p^{t}}(x)\). First we consider the situation when \(F_{2}(x)=l(x)^{p^{t}}-l(x)\) for some \(l(x)\in\mathbb{F}_{p^{t}}(x)\), where \(l(x)=\dfrac{l_{1}(x)}{l_{2}(x)}\) with \((l_{1},l_{2})=1\). We have, \(ux+v\dfrac{f_{1}(x)}{f_{2}(x)}=\dfrac{l_{1}(x)^{p^{t}}}{l_{2}(x)^{p^{t}}}- \dfrac{l_{1}(x)}{l_{2}(x)}\), that is, \[f_{2}(x)(l_{1}(x)^{p^{t}}-l_{1}(x)l_{2}(x)^{p^{t}-1})=l_{2}(x)^{p^{t}}(uxf_{2} (x)+vf_{1}(x)).\] Since \((l_{1}(x)^{p^{t}}-l_{1}(x)l_{2}(x)^{p^{t}-1},l_{2}(x)^{p^{t}})=1\), it implies that, \(l_{2}(x)^{p^{t}}\) divides \(f_{2}(x)\), which can only happen if \(l_{2}(x)\) is constant. That is, we have \[c^{-(p^{t})}f_{2}(x)(l_{1}(x)^{p^{t}}-l_{1}(x)c^{p^{t}-1})=uxf_{2}(x)+vf_{1}(x)\] where \(c=l_{2}\). Now, the above equation only applies if \(v=0\). Substituting it to the equation above yields, \(c^{-(p^{t})}(l_{1}(x)^{p^{t}}-l_{1}(x)c^{p^{t}-1})=ux\), which can happen only if \(l_{1}\) is constant and \(u=0\). Moreover, if \(F_{1}(x)\neq r(x)^{p^{t}-1}\) for any \(r(x)\in\mathbb{F}_{p^{t}}(x)\), then it follows form Lemma 2.2 that \[|\chi_{f,a,b}(s_{1},s_{2})|\leq np^{\frac{t}{2}+2}. \tag{6}\] And, when \(F_{1}(x)=r(x)^{p^{t}-1}\) for some \(r(x)\in\mathbb{F}_{p^{t}}(x)\), where \(r(x)=\frac{r_{1}(x)}{r_{2}(x)}\) is such that \((r_{1},r_{2})=1\). Following [11], it happens only if \(m_{1}=m_{2}=0\), a contradiction. If \(F_{2}(x)\neq d(x)^{p^{t}}-d(x)\) for any \(d(x)\in\mathbb{F}_{p^{t}}(x)\) then, _Case 1 :_ When \(n_{1}\leq n_{2}\). Then in accordance with Lemma 2.3 we have \(D_{2}\)\(=1\), and \[|\chi_{f,a,b}(s_{1},s_{2})|\leq(2n+1)p^{\frac{t}{2}+2}. \tag{7}\] _Case 2 :_ When \(n_{1}>n_{2}\). We have \(D_{2}=n_{1}-n_{2}\) and \[|\chi_{f,a,b}(s_{1},s_{2})|\leq 2np^{\frac{t}{2}+2}. \tag{8}\] Thus, if \((\chi_{s_{1}},\chi_{s_{2}},u,v)\neq(\chi_{1},\chi_{1},0,0)\) then based on the discussion above, and using \((\ref{eq:d1}),(\ref{eq:d2})\) and \((\ref{eq:d2})\), we get, \(|\chi_{f,a,b}(s_{1},s_{2})|\leq(2n+1)p^{\frac{t}{2}+2}.\) From this and the definition of \(A_{f,a,b}(k_{1},k_{2})\), we get \[A_{f,a,b}(k_{1},k_{2}) \geq\frac{\theta(k_{1})\theta(k_{2})}{p^{2}}((p^{t}-|P^{{}^{\prime }}|)-(2n+1)p^{\frac{t}{2}+2}(W(k_{1})W(k_{2})-1)) \tag{9}\] \[\geq\frac{\theta(k_{1})\theta(k_{2})}{p^{2}}((p^{t}-(n+1))-(2n+1) p^{\frac{t}{2}+2}(W(k_{1})W(k_{2})-1)) \tag{10}\] Therefore, if \(p^{\frac{t}{2}-2}>(2n+1)W(k_{1})W(k_{2})\), then \(A_{f,a,b}(k_{1},k_{2})>0\) for every \(f(x)\in R_{p,t}(n)\) and prescribed \(a,b\in\mathbb{F}_{p}\). Considering \(k_{1}=k_{2}=p^{t}-1\), result follows. Now, we provide the bounds for the absolute values for \(A_{f,a,b}(mk,k)-\theta(m)A_{f,a,b}(k,k)\) and \(A_{f,a,b}(k,mk)-\theta(m)A_{f,a,b}(k,k).\) Proofs are omitted as they follow from the idea of [7]. **Lemma 3.2**.: _Let \(k\) be a positive integer that divides \(p^{t}-1\) and \(m\) is a prime dividing \(p^{t}-1\) but not \(k\). Then_ \[|A_{f,a,b}(mk,k)-\theta(m)A_{f,a,b}(k,k)|\leq\frac{\theta(k)^{2}\theta(m)}{p^{2 }}(2n+1)W(k)^{2}p^{\frac{t}{2}+2}\] _and_ \[|A_{f,a,b}(k,mk)-\theta(m)A_{f,a,b}(k,k)|\leq\frac{\theta(k)^{2}\theta(m)}{p^{ 2}}(2n+1)W(k)^{2}p^{\frac{t}{2}+2}.\] **Lemma 3.3**.: _Let \(k\) be a positive integer that divides \(p^{t}-1\) and \(\{q_{1},q_{2},\ldots,q_{m}\}\) be the collection of all primes dividing \(p^{t}-1\) but not \(k\). Then_ \[A_{f,a,b}(p^{t}-1,p^{t}-1)\geq\sum_{i=1}^{m}A_{f,a,b}(k,q_{i}k)+\sum_{i=1}^{m} A_{f,a,b}(q_{i}k,k)-(2m-1)A_{f,a,b}(k,k).\] Sieve variation of sufficient condition (Theorem 3.1) is given below, proof of which is not given as it follows from Lemmas 3.2, 3.3 and ideas in [7]. **Theorem 3.4**.: _Let \(t,n,p,k\in\mathbb{N}\) be such that \(k\) divides \(p^{t}-1\), where \(p\) is a prime power. Assume \(\{q_{1},q_{2},\ldots,q_{m}\}\) is the collection of all those primes that divide \(p^{t}-1\) but not \(k\). Suppose \(\delta=1-2\sum_{i=1}^{m}\frac{1}{q_{i}}\) and \(\Delta=\frac{2m-1}{\delta}+2\). If \(\delta>0\) and_ \[p^{\frac{t}{2}-2}>(2n+1)\Delta W(k)^{2}\] _then \((p,t)\in A_{n}\)._ **Lemma 3.5**.: _Suppose that \(\kappa\in\mathbb{N}\) is such that \(\omega(\kappa)\geq 1547\), then \(W(\kappa)\leq\kappa^{1/12}\)._ Proof.: Let \(V=\{2,3,5,\ldots,12983\}\) is the set of first \(1547\) primes. We see that the product of all elements of \(V\) exceeds \(K=6.57\times 10^{5588}\). Let \(\kappa=\kappa_{1}\kappa_{2}\), where \(\kappa_{1}\) and \(\kappa_{2}\) are co-prime integers such that all prime divisors of \(\kappa_{1}\) come from the least \(1547\) prime divisors of \(\kappa\) and remaining prime divisors are divisors of \(\kappa_{2}\). Hence, \(\kappa_{1}^{1/12}>K^{1/12}>5.42\times 10^{465}\), whereas \(W(\kappa_{1})<4.93\times 10^{465}\). The conclusion follows, since \(\rho^{1/12}>2\) for all primes \(\rho>12983\). We shall need Theorem 3.4 and Lemma 3.5 for calculation work in the next section. ## 4 Proof of Theorem 1.1 Proof will be carried out for the situation \(t\geq 7\), since according to [2] there is no primitive element \(\epsilon\), for \(t\leq 4\), such that \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon)=0\) and \(Tr_{\mathbb{F}_{p^{t}}/\mathbb{F}_{p}}(\epsilon^{-1})=0\). The cases \(t=5\) and \(6\) necessitate substantial computation and appear to demand a different technique. As a result, we postpone further examination of these situations. We assume initially that, \(\omega(p^{t}-1)\geq 1547\). Using Theorem 3.1 and Lemma 3.5, if \(p^{\frac{t}{2}-2}>5p^{\frac{t}{6}}\), that is, if \(p^{t}>5^{\frac{3t}{t-6}}\) then \((p,t)\in A_{2}\). But \(t\geq 7\) gives \(\frac{3t}{t-6}\leq 21\). Hence, if \(p^{t}>5^{21}\) then \((p,t)\in A_{2}\), and this holds true for \(\omega(p^{t}-1)\geq 1547\). Therefore, we may suppose \(\omega(p^{t}-1)\leq 1546\). We shall use sieve variation in order to carry forward computational work. Let \(62\leq\omega(p^{t}-1)\leq 1546\). To use Theorem 3.4 assume \(k\) to be the product of least \(62\) primes that divide \(p^{t}-1\), that is, \(W(k)=2^{62}\), then \(m\leq 1485\) and \(\delta\) assumes its least positive value when \(\{q_{1},q_{2},\ldots,q_{1485}\}=\{307,311,313,\ldots,12979\}\). This yields \(\delta>0.004174\) and \(\Delta<710770.7395\). Hence \(5\Delta W(k)^{2}<7.558211\times 10^{43}\). Let \(Z=7.558211\times 10^{43}\). By sieve variation, \((p,t)\in A_{2}\) if \(q^{\frac{t}{2}-2}>Z\) i.e., if \(p^{t}>Z^{\frac{2t}{t-4}}\). Since \(t\geq 7\), it gives \(\frac{2t}{t-4}\leq\frac{14}{3}\). Therefore, \((p,t)\in A_{2}\) under the condition that \(p^{t}>5.834\times 10^{204}\). Hence, \(\omega(p^{t}-1)\geq 95\) implies \((p,t)\in A_{2}\). In a similar manner \((p,t)\in A_{3}\), \(A_{4}\) if \(\omega(p^{t}-1)\geq 95\), and \((p,t)\in A_{5}\) if \(\omega(p^{t}-1)\geq 96\). Table 1. \begin{tabular}{|c|c|c|c|c|c|} \hline \(Sr.No.\) & \(a\leq\omega(p^{t}-1)\leq b\) & \(W(k)\) & \(\delta>\) & \(\Delta<\) & \(5\Delta W(k)^{2}<\) \\ \hline 1 & \(a=13,\ b=94\) & \(2^{13}\) & 0.04481712 & 3594.3767988 & 1,206,072,718,756 \\ 2 & \(a=7,\ b=34\) & \(2^{7}\) & 0.04609692 & 1151.7513186 & 94,351,469 \\ 3 & \(a=6,\ b=25\) & \(2^{6}\) & 0.08241088 & 450.9698124 & 9,235,862 \\ 4 & \(a=6,\ b=23\) & \(2^{6}\) & 0.12550135 & 264.9453729 & 5,426,082 \\ 5 & \(a=6,\ b=22\) & \(2^{6}\) & 0.14959773 & 209.2223842 & 4,284,875 \\ \hline 6 & \(a=5,\ b=19\) & \(2^{5}\) & 0.07663431 & 354.3225878 & 1,814,132 \\ \hline 7 & \(a=5,\ b=17\) & \(2^{5}\) & 0.13927194 & 167.1445296 & 855,780 \\ \hline 8 & \(a=5,\ b=16\) & \(2^{5}\) & 0.17317025 & 123.2679422 & 631,132 \\ 9 & \(a=5,\ b=15\) & \(2^{5}\) & 0.21090610 & 92.0874844 & 471,488 \\ \hline \end{tabular} Using the values in the Table 1 above and repeating the process of sieve variation, we determine that \((p,t)\in A_{2}\) if \(p^{t}>(4284875)^{\frac{14}{3}}\) or \(p^{t}>8.8929\times 10^{30}\) for \(t\geq 7\) and since \(t\geq 8\) implies \(\frac{2t}{t-4}\leq 4\), so \((p,t)\in A_{2}\) if \(p^{t}>3.371\times 10^{26}\) for \(t\geq 8\). Therefore, for \(t\geq 8\) it is sufficient that \(\omega(p^{t}-1)\geq 20\), We deduce, utilising sieve variation repeatedly for values in the second section of the preceding table that, \((p,t)\in A_{2}\) if \(p^{t}>1.084\times 10^{25}\). Similarly, \(\omega(p^{t}-1)\geq 18\) is sufficient for inclusion of \((p,t)\) in \(A_{2}\), and based on the table above \((p,t)\in A_{2}\) if \(p^{t}>2.2725\times 10^{21}\) for \(t\geq 9\), and \((p,t)\in A_{2}\) if \(p^{t}>8.158\times 10^{18}\) for \(t\geq 10\). Hence \((p,t)\in A_{2}\) unless \(t=7\) and \(p<26382\), \(t=8\) and \(p<1347\), \(t=9\) and \(p<237\), \(t=10\) and \(p<78\), \(t=11\) and \(p<53\), \(t=12\) and \(p<38\), \(t=13\) and \(p<29\), \(t=14\) and \(p<23\), \(t=15\) and \(p<19\), \(t=16\) and \(p<16\), \(t=17\) and \(p<13\), \(t=18\) and \(p<12\), \(t=19\) and \(p<10\), \(t=20\) and \(p<9\), \(t=21,22\) and \(p<8\), \(t=23,24\) and \(p<7\), \(t=25,26,27\) and \(p=2,3,4,5\). \(28\leq t\leq 31\) and \(p=2,3,4\). \(32\leq t\leq 39\) and for \(p=2,3\). \(40\leq t\leq 62\) and \(p=2\). From the preceding discussion for every \((p,t)\), we validated Theorem 3.1 and compiled a list of 570 potential exceptions (listed in the Appendix). Then, for these potential exceptions, we discover that sieve variation is true for the large majority of prime powers, with the exception of those mentioned in Theorem 1.1. (see Appendix). Theorem 1.1 derives from this. Using similar reasoning, it is possible to find a subset of \(A_{n}\) for any \(n\in\mathbb{N}\). In particular, for \(A_{3}\), we have the following result. **Theorem 4.1**.: _Suppose t q, r, p \(\in\mathbb{N}\) be such that q is a prime number, t \(\geq 7\) and \(p=q^{r}\). Let p and t assumes none of the following values:_ 1. \(2\leq p\leq 31\) _or_ \(p=37,41,43,49,61,67,71,79,103,121\) _and_ \(t=7\)_;_ 2. \(2\leq p\leq 47\) _or_ \(p=53,83\) _and_ \(t=8\);__ 3. \(2\leq p\leq 7\) _or_ \(p=9,11,16\) _and_ \(t=9\)_;_ 4. \(2\leq p\leq 8\) _and_ \(t=10\)_;_ 5. \(p=2,3,4\) _and_ \(t=11\)_;_ 6. \(2\leq p\leq 7\) _and_ \(t=12\)_;_ 7. \(p=2\) _and_ \(t=14,15,16,18,20,24\)_._ _Then \((p,t)\in A_{3}\)._
2306.09273
Your Room is not Private: Gradient Inversion Attack on Reinforcement Learning
The prominence of embodied Artificial Intelligence (AI), which empowers robots to navigate, perceive, and engage within virtual environments, has attracted significant attention, owing to the remarkable advancements in computer vision and large language models. Privacy emerges as a pivotal concern within the realm of embodied AI, as the robot accesses substantial personal information. However, the issue of privacy leakage in embodied AI tasks, particularly in relation to reinforcement learning algorithms, has not received adequate consideration in research. This paper aims to address this gap by proposing an attack on the value-based algorithm and the gradient-based algorithm, utilizing gradient inversion to reconstruct states, actions, and supervision signals. The choice of using gradients for the attack is motivated by the fact that commonly employed federated learning techniques solely utilize gradients computed based on private user data to optimize models, without storing or transmitting the data to public servers. Nevertheless, these gradients contain sufficient information to potentially expose private data. To validate our approach, we conduct experiments on the AI2THOR simulator and evaluate our algorithm on active perception, a prevalent task in embodied AI. The experimental results demonstrate the effectiveness of our method in successfully reconstructing all information from the data across 120 room layouts.
Miao Li, Wenhao Ding, Ding Zhao
2023-06-15T16:53:26Z
http://arxiv.org/abs/2306.09273v2
# Your Room is not Private: Gradient Inversion Attack for Deep Q-Learning ###### Abstract The prominence of embodied Artificial Intelligence (AI), which compress robots to navigate, perceive, and engage within virtual environments, has attracted significant attention, owing to the remarkable advancements in computer vision and large language models. Privacy emerges as a pivotal concern within the realm of embodied AI, as the robot access substantial personal information. However, the issue of privacy leakage in embodied AI tasks, particularly in relation to decision-making algorithms, has not received adequate consideration in research. This paper aims to address this gap by proposing an attack on the Deep Q-Learning algorithm, utilizing gradient inversion to reconstruct states, actions, and Q-values. The choice of using gradients for the attack is motivated by the fact that commonly employed federated learning techniques solely utilize gradients computed based on private user data to optimize models, without storing or transmitting the data to public servers. Nevertheless, these gradients contain sufficient information to potentially expose private data. To validate our approach, we conduct experiments on the AI2THOR simulator and evaluate our algorithm on active perception, a prevalent task in embodied AI. The experimental results convincingly demonstrate the effectiveness of our method in successfully recovering all information from the data across all 120 room layouts. Privacy, Reinforcement Learning, Gradient Inversion ## 1 Introduction The advent of recent large foundation models has brought about remarkable achievements in human-like dialog generation [1] and controllable image generation [2]. These accomplishments highlight the promising potential of leveraging artificial intelligence (AI) to enhance human experiences. The next significant milestone in the advancement of general AI revolves around the exploration of embodied systems, such as household robots, that possess the ability to navigate, perceive, engage, and successfully tackle tasks within the physical world. The process of collecting embodied datasets presents more privacy concerns in comparison to language and vision tasks, as the robot operates within real-world environments, executing policies and gathering observational signals. These environments often contain personal information [3; 4], which introduces challenges during data collection and model training. To address the need for preserving sensitive information, the _Federated Learning_ framework [5] is introduced. This framework allows for the local storage of private data on individual machines, with only the gradients, calculated based on the private information within the environment [6; 7; 8; 9], being transmitted to the central server. These gradients, originating from multiple private servers, are then aggregated and utilized to update the policy model on the central server. The updated model is subsequently sent back to the private servers for the next optimization iteration. As a result, the model and gradients are accessible publicly, while the data remains accessible only to private servers. However, relying solely on the transmission of gradients still leaves room for vulnerability to gradient inversion techniques [10; 11; 12; 13], a type of method that can recover input images and labels from the corresponding gradients in classification tasks. The gradient inversion algorithms either search for realistic synthetic images in latent spaces using generative networks [14] or directly optimize a synthetic image in the image domain, initialized by heuristic methods. The accuracy of recovery is optimized by a loss function that compares the real gradient sent by the private server with the gradient produced using the synthetic data. Concurrently, the visual quality of the recovered image is ensured through loss functions based on prior knowledge, such as the smoothness of natural images [10; 15; 11; 12]. These techniques can recover high-resolution images with recognizable patterns using gradients from mini-batches with small batch sizes [11; 16]. Although not all recovered images are necessarily recognizable, these studies highlight the privacy risks associated with federated learning. Motivated by the existing literature on gradient inversion in classification tasks, our research aims to address the privacy leakage issue in decision-making within the context of embodied AI, an area that has received limited attention thus far. Figure 1 provides an illustrative example of this problem, demonstrating how an attacker can successfully reconstruct RGB and depth images of a room solely from the gradient. Unlike in classification tasks where the input typically consists of a single RGB image, applying gradient inversion techniques to decision-making poses unique challenges due to the multi-modal nature of the input information including image state, vector state, action, and reward. Our objective is to recover all these inputs exclusively from the gradients shared by the decision-making algorithm. The most relevant work to this topic is [17], which recovers the location map from the model parameter of Deep Q Network (DQN) [18]. However, this approach is limited in that it supports only one map for one model, and accessing the parameters of trained models may not be feasible in typical scenarios. In this work, we introduce a novel approach called _Deep Q-learning Gradient Inversion_ (**QGI**) to address the task of recovering multi-modal inputs from the Q-learning reinforcement learning (RL) algorithm. Specifically, we initialize the reconstructed state with Gaussian noises and optimize it with cosine similarity between the gradients of the real and reconstructed state. An additional prior term is used to penalize noisy image patterns. Unlike in classification tasks where the input is only RGB images, gradient inversion in reinforcement learning is challenging due to the multi-modal input information. We follow the white-box assumption [19] and honest-but-curious assumption [11] of the potential adversary, which does not modify the model or the optimization process.We evaluate QGI in the AI2THOR [20] simulator on the active perception task [21; 22; 23], a popular in embodied AI to navigate agents to obtain higher detection accuracy of the given object. Our main contributions are summarized below: * As far as we know, QGI is the first framework to investigate the information leaking problem by recovering data in reinforcement learning from the gradient. * QGI provides a novel pipeline to recover multi-modal information of states, actions, and Q values, which outperforms the joint optimization method proposed by previous works. * We evaluate our method in a realistic simulation of the active perception task and show that our QGI can successfully recover all information from the data across all 120 room layouts. ## 2 Preliminary **Gradient Inversion.** Gradient inversion aims to recover the training data from the gradient of the model, which can be employed by the potential adversary when the training data is invisible but the gradient is shared, for example, in federated learning. Given the input of the network \(x\), the supervision signal \(u\) (e.g., label in classification), the model \(F\) with parameters \(w\), the output Figure 1: Our method (Attacker) can reconstruct the images of your untidy room and window orientation from the gradient of the model in the robot. \(y=F(x;w)\), and the objective function \(J\), then the gradient \(g\) is \[g=\nabla_{w}J(F(x;w),u)=\frac{\partial J(y,u)}{\partial y}\frac{\partial F(x;w)}{ \partial w}. \tag{1}\] The gradient carries the information of \(x\) in the calculation \(\frac{\partial F(x;w)}{\partial w}\) and the information of \(u\) in the \(\frac{\partial J(y,u)}{\partial y}\), which can be utilized by an honest-but-curious [11; 17] adversary to reconstruct the \((x,u)\). The objective of gradient inversion is to generate \((x^{rec},u^{rec})\) that produces gradient \(g^{rec}\), which is sufficiently close to the true gradient \(g\). Assuming the adversary has white-box access to the model \(F\) and the objective \(J\), then the reconstructed gradient is \(g^{rec}=\nabla_{w}J(F(x^{rec};w),u^{rec})\). The adversary minimizes the loss function \(L(g^{rec},g)\) to obtain \((x^{rec},u^{rec})=\arg\min_{\tilde{x},\tilde{u}}L(\tilde{g}(\tilde{x},\tilde{ u}),g(x,u))\). **Markov Decision Process (MDP) and DQN.** We consider Markov Decision Process (MDP) as the mathematical framework to model decision-making problems in reinforcement learning. MDP is consist of state space \(s\in\mathcal{S}\), action space \(a\in\mathcal{A}\), reward function \(r\in\mathcal{R}\), and a transition model \(p\in\mathcal{P}\). The future discounted return at timestep \(t\) is \(R_{t}=\sum_{t^{\prime}=t}^{T}\gamma^{t^{\prime}-t}r_{t^{\prime}}\), where \(\gamma\) is the discount factor and \(T\) is the maximal step. In this paper, we investigate the gradient inversion in the DQN algorithm [18], which estimate the optimal action-state value function (Q-value) \(Q^{*}(s,a)=\max_{\pi}\mathbb{E}[R_{t}\|s_{t}=s,a_{t}=a,\pi]\), where \(\pi\) is a policy mapping sequences to actions. Empirically, this \(Q^{*}(s,a)\) is parameterized with neural networks and the objective is minimizing the difference between the predicted Q-value \(\hat{Q}(s,a)\) of the selected action \(a\) and the target Q-value \(Q(s,a)\) with transitions \((s_{t},a_{t},s_{t+1},r_{t})\) from the replay buffer. **Active Perception Task.** Active Perception [21; 22; 23] is a prevalent embodied AI task to enable robots to capture data in a better position to increase the confidence and accuracy of detection. Since it requires private information about the user room, we select it as the example task to apply the gradient inversion attack. In this task, the state contains depth image \(s_{d}\), RGB image \(s_{i}\), and coordinate \(s_{c}\) that indicates the position of the target object in the images. The action space is discrete with seven movement options. Similar to [21], the reward is defined by the confidence score obtained from an object detection model. The policy network predicts Q-values \(\hat{Q}(s,a)\) for all possible actions given the input \(s=\{s_{d},s_{i},s_{c}\}\). The images are first processed by convolution layers \(F_{\text{conv}}\), and the coordinate vector is processed by linear layers \(F_{\text{linear}}\). The detailed network structure can be found in Appendix A. ## 3 Deep Q-Learning with Gradient Inversion (QGI) In this section, we introduce QGI, a method to reconstruct the state \(s\), action \(a\), Q-value \(\hat{Q}(s,a)\), and target Q-value \(Q(s,a)\) by gradient inversion, which increases the accuracy of reconstruction compared to naive joint optimization. Note that reward is not recoverable as it is generated on the environment side with no gradient available. Despite we focus on the single sample gradient, the inversion of the batched gradient can also benefit from the insight. We show the entire pipeline in Figure 2 and organize this section with the following steps. In **step 1**, we first identify the action with gradient analysis. In **step 2**, we reconstruct the vector state and image state in two stages, where stage 1 targets the input of all linear layers, including the coordinate vector \(s_{c}\) and stage 2 reconstructs the image states \(s_{d}\) and \(s_{i}\). In **step 3**, Q-value \(\hat{Q}^{rec}(s,a)\) is obtained by applying the reconstructed vectors, and the target Q-value \(Q^{rec}\) is estimated by reconstructing the error \(\hat{Q}(s,a)-Q(s,a)\). ### Action Reconstruction Since the action space of DQN is discrete, a naive way to recover action is to enumerate all dimensions. Instead of the computation-heavy enumeration, we propose to directly identify the action from the gradient when only 1 or 2 data samples are used to compute the gradient. The Q-value \(\hat{Q}(s,\cdot)\) is obtained from a linear layer with weights \(W_{last}\) and bias \(b_{last}\). In addition, the objective \(J(\hat{Q}(s,a),Q(s,a))\) only provides a supervision signal to the action \(a\) that is taken by the policy, thus we can obtain the action and the sign of error \(\hat{Q}-Q\) from the gradient \[\nabla_{b_{\text{inst}}}=\frac{\partial J(\hat{Q}(s,a),Q(s,a))}{\partial\hat{Q}(s,a)}\underbrace{\frac{\partial\hat{Q}(s,a)}{\partial\hat{Q}(s)}}_{\text{one-hot vector}}\underbrace{\frac{\partial\hat{Q}(s)}{\partial b_{\text{inst}}}}_{ \text{1}}. \tag{2}\] When \(\frac{\partial J(\hat{Q},Q)}{\partial Q}\neq 0\), only the \(a\)-th element of \(b_{\text{inst}}\) has a non-zero gradient, which allows us to reconstruct the action. When the batch size is larger than 2, actions may not be identified directly from the batched gradient, thus the reconstruction requires enumeration or optimization. ### State Reconstruction As shown in Figure 2, we separate the state reconstruction into 2 stages. We reconstruct the vector state in step 2.1 and then reconstruct the image state in step 2.2 based on the following reasons. First, the gradient inversion of linear layers is easier than convolution layers [11] because (1) linear layers contain more parameters than convolution layers; (2) the gradient of linear layers does not aggregate the dimensions of the input of the linear layer or the partial derivative of the output. The aggregation exacerbates the challenge as different samples may have the same aggregated results. Second, we observe that the vector input is usually combined with the visual information in the vector form, enabling the neglect of convolution layers during the vector state reconstruction. Third, as the error in the vector state reconstruction could result in a significant accuracy drop in image reconstruction, an accurately reconstructed vector state obtained before the image state reconstruction is beneficial. Therefore, given the multi-modal data in this task, we propose to first reconstruct the vector state that is not processed by any convolution layers and then reconstruct the image state that requires convolution with the vector state fixed. Specifically, we initialize the coordinate and the image reconstruction with Gaussian noises and optimize them by minimizing the gradient matching loss \(L(g^{rec},g)\), \[L(g^{rec},g)=1-\frac{\langle g^{rec},g\rangle}{\|g^{rec}\|_{2}\|g\|_{2}} \tag{3}\] The total variation (TV) loss [24] is added to step 2.2 for image state reconstruction to penalize noisy patterns, leading to the total loss \(L_{img}=L(g^{rec},g)+\lambda(\text{TV}(s_{i})+\text{TV}(s_{d}))\), where \(\lambda\) is a scalar weight. Since \(\lambda\) is critical to influencing the ratio of gradients of two terms in \(L_{img}\), we use a rule to automatically calculate \(\lambda\) with \[\lambda=\frac{\|\partial L(g^{rec},g)/\partial s_{i}^{rec}\|}{\|\partial \text{TV}(s_{i}^{rec})/\partial s_{i}^{rec}\|},\ \ \ \Delta s_{i}^{rec}=\frac{\partial L(g^{rec},g)}{\partial s_{i}^{rec}}+\lambda \frac{\partial\text{TV}(s_{i}^{rec})}{\partial s_{i}^{rec}}. \tag{4}\] The gradient in (4) is then used to update \(s_{i}^{rec}\). For updating \(s_{d}^{rec}\), we use the same rule to adjust \(\lambda\). Figure 2: Pipeline of the proposed method. The left part shows the steps to reconstruct the multi-modal information, and the right part explains each step in detail. ### Q-value Reconstruction The reconstruction of Q-values is implemented in 3 stages, (1) reconstructing the sign of the error \(\vec{n}=\text{sign}(\hat{Q}-Q)\), (2) reconstructing the magnitude of error \(\hat{Q}-Q\), (3) reconstructing \(\hat{Q}\) by feeding the state reconstruction and \(Q\) based on the reconstructed \(\hat{Q}\) and the reconstructed error. The sign \(\vec{n}\) can be directly identified from the gradient of the last linear layer together with action identification, as shown in Figure 2. The magnitude of the error is obtained from the ratio \(\left\|g\right\|/\left\|g^{rec}\right\|\). Since the objective to update the Q network is the mean square error (MSE), the gradient of all model parameters \(w\) is the first equation in (5), and the relationship between the error and the magnitude of the gradient is shown in the second equation in (5) with \(\hat{Q}\) a constant to guess the target Q-value. \[g=\frac{\partial J(\hat{Q},Q)}{\partial\hat{Q}}\frac{\partial\hat{Q}}{\partial w }=2(\hat{Q}-Q)\frac{\partial\hat{Q}}{\partial w},\;\frac{\left\|\hat{Q}(s,a)-Q (s,a)\right\|}{\left\|\hat{Q}^{rec}(s^{rec},a^{rec})-\tilde{Q}\right\|}\frac{ \partial\hat{Q}/\partial w}{\partial\hat{Q}^{rec}/\partial w}=\frac{\left\|g \right\|}{\left\|g^{rec}\right\|}. \tag{5}\] Once the \(g^{rec}\) is obtained in step 2.1 in Figure 2, we assume \(\partial\hat{Q}/\partial w=\partial\hat{Q}^{rec}/\partial w\), and then get \[\left\|\hat{Q}(s,a)-Q(s,a)\right\|^{rec}=\left\|\hat{Q}^{rec}(s^{rec},a^{rec}) -\tilde{Q}\right\|\frac{\left\|g\right\|}{\left\|g^{rec}\right\|}, \tag{6}\] where \(\hat{Q}^{rec}(s^{rec},a^{rec})=F(s^{rec},a^{rec};w)\) is the reconstruction of \(\hat{Q}(s,a)\). Then, the reconstruction of the target Q-value can be obtained below, \[Q^{rec}(s,a)\leftarrow\hat{Q}^{rec}(s^{rec},a^{rec})-\vec{n}\left\|\hat{Q}(s,a)-Q(s,a)\right\|^{rec}. \tag{7}\] ## 4 Experiment **Experimental setup.** We conducted evaluations of our method on an active perception task using the AI2THOR simulator [20]. In the primary setting (S1), private observations were collected using an RGB camera and a depth camera, each providing an image with a resolution of \(150\times 150\). Additionally, we explored the performance in a setting where only a depth camera was available (S2), aiming to examine the privacy risks associated with widely used depth images in robotic tasks. The target object's 4-dimensional coordinate was specified by the upstream task within the range of \([-1,1]\). Gradients were calculated for individual data samples fed into an initialized network. We also study gradient inversion for a trained network in setting S2, please refer to Appendix B. The reconstruction process for both step 2.1 and step 2.2 involved \(2\times 10^{4}\) iterations. Quantitative evaluation was performed using 240 pre-collected samples. For more detailed information about the hyperparameters employed, please refer to Appendix C. **Metric.** To assess the quality and accuracy of the recovered state, action, Q-value, and target Q-value, we have selected several metrics for evaluation. For the image state, we utilize two metrics: **peak-signal-to-noise ratio (PSNR)** and **structure-similarity index measure (SSIM)**[25]. PSNR and SSIM are computed on the Y channel of the YCbCr representation for RGB images and the single channel for depth images. It is important to note that, based on prior knowledge of the AI2THOR environment, we normalize the reconstructed RGB images so that the brightest pixel has a value of 255, and the corresponding depth image is scaled by the same factor. In setting S2, the depth images are evaluated with their original pixel values. For the coordinate state, which represents a bounding box, we calculate the **intersection over union (IoU)** metric to measure accuracy. The accuracy of the action is evaluated by simply counting the number of accurate results. Lastly, we employ the **percentile error**, denoted as \(\epsilon(x)=\frac{\left\|x^{rec}-x\right\|}{\left|x\right|}\), to evaluate the Q-value and target Q-value. ### Quantitative Results **State: RGB and Depth Image.** Table 1 presents the PSNR and SSIM values for the images. In the S1 setting, where both RGB and depth images are available, the mean PSNR of the RGB images is over 21dB, and the mean SSIM is over 0.7. These values indicate that the images are recognizable to humans, potentially leaking information about the layout of the environments and private elements to potential adversaries. The depth images exhibit higher PSNR and SSIM values than the RGB images, suggesting that the size of rooms and the distance of objects are at risk of privacy leakage. In the S2 setting, the adversary may find it more challenging to identify the room layout due to the lack of color information. However, the accuracy of the reconstructed depth images is significantly higher. As shown in Table 1, the PSNR and SSIM values increase by over 4dB for all room types, primarily due to the lower dimensionality of the input data. It is important to note that the bedroom and living room exhibit lower PSNR and SSIM values because these rooms in the AI2THOR simulator [20] contain more small objects, making the reconstruction of detailed patterns more challenging. For additional quantitative results, please refer to Appendix B. **Q-value and Target Q-value.** Table 3 displays the percentile errors for the reconstructed Q-values and target Q-values. The results indicate that the percentile errors are significantly small, with an average mean error of less than \(1.1\%\) for both settings. It is worth noting that the bedrooms and living rooms exhibit larger errors in Q-value reconstruction. This suggests that when the image state contains more intricate details, the process of gradient inversion for the vector and scalar data becomes more challenging. This observation is further supported by the percentile errors observed in the S2 setting, which are approximately half of those in the S2 setting. **Action.** The action identified from the gradient of a single sample is \(100\%\) correct in all 240 samples, which demonstrates the effectiveness of QGI in the decision-making task. **State: Coordinate.** As shown in Table 2, the average IoU over of the 240 samples reaches over 0.9 for both S1 and S2 settings. The standard derivatives reach around 20% of the mean IoU, implying the reconstruction is not stable. To further study the variance of IoU, we demonstrate the histogram of IoU in Figure 3. For the S1 setting, 174 samples achieve IoU larger than 0.9999, while 217 samples achieve that in the S2 setting. The significant derivative arises due to the large error of failure cases, despite the occurrence of failure cases is limited. Coordinate reconstruction has higher accuracy than the images not only because of the lower dimension but also the fact that the coordinate is first processed by a linear layer other than a convolution layer. \begin{table} \begin{tabular}{c c c|c c c c c} \hline \hline \(\lambda\) & State & Metric & bathroom & bedroom & kitchen & living room & average \\ \hline \multirow{4}{*}{\(\lambda=0.01\) (S1)} & RGB & PSNR & \(14.10\pm 4.06\) & \(12.59\pm 3.22\) & \(15.28\pm 3.65\) & \(13.66\pm 3.88\) & \(13.91\pm 3.82\) & 30.88 \\ & SSIM & \(0.418\pm 0.162\) & \(0.422\pm 0.134\) & \(0.449\pm 0.149\) & \(0.415\pm 0.141\) & \(0.426\pm 0.147\) & 0.886 \\ & depth & PSNR & \(16.29\pm 5.78\) & \(14.91\pm 4.05\) & \(16.58\pm 4.52\) & \(14.47\pm 3.71\) & \(15.56\pm 4.64\) & 36.04 \\ & depth & SSIM & \(0.404\pm 3.01\) & \(0.418\pm 0.283\) & \(0.443\pm 0.265\) & \(0.425\pm 0.242\) & \(0.422\pm 0.272\) & 0.959 \\ \hline \multirow{4}{*}{\(\lambda=0.1\) (S1)} & RGB & PSNR & \(22.81\pm 5.99\) & \(19.73\pm 5.16\) & \(21.41\pm 5.69\) & \(20.11\pm 6.7\) & \(21.02\pm 5.86\) & 32.17 \\ & SSIM & \(0.761\pm 0.141\) & \(0.735\pm 0.146\) & \(0.724\pm 0.167\) & \(0.696\pm 0.170\) & \(0.729\pm 0.157\) & 0.929 \\ & depth & PSNR & \(24.89\pm 10.43\) & \(22.85\pm 7.42\) & \(23.26\pm 7.17\) & \(21.72\pm 7.05\) & \(23.18\pm 8.17\) & 38.09 \\ & depth & SSIM & \(0.809\pm 0.249\) & \(0.582\pm 0.161\) & \(0.803\pm 0.197\) & \(0.793\pm 0.162\) & \(0.807\pm 0.195\) & 0.983 \\ \hline \multirow{4}{*}{adaptive \(\lambda\) (S1)} & RGB & PSNR & \(22.82\pm 3.67\) & \(20.02\pm 3.20\) & \(23.13\pm 3.92\) & \(21.11\pm 3.86\) & \(21.77\pm 3.87\) & 32.00 \\ & SSIM & \(0.764\pm 0.096\) & \(0.744\pm 0.092\) & \(0.757\pm 0.100\) & \(0.725\pm 0.08\) & \(0.747\pm 0.095\) & 0.937 \\ & PSNR & \(26.54\pm 3.82\) & \(23.27\pm 4.19\) & \(25.20\pm 4.05\) & \(22.66\pm 3.64\) & \(24.42\pm 4.20\) & 34.52 \\ & depth & SSIM & \(0.777\pm 0.109\) & \(0.762\pm 0.108\) & \(0.773\pm 0.107\) & \(0.738\pm 0.099\) & \(0.762\pm 0.106\) & 0.952 \\ \hline \hline \multirow{4}{*}{\(\lambda=0.1\) (S2)} & \multirow{4}{*}{depth} & PSNR & \(29.78\pm 7.33\) & \(26.14\pm 7.67\) & \(26.45\pm 7.27\) & \(23.49\pm 7.29\) & \(26.46\pm 7.68\) & 45.76 \\ & SSIM & \(0.930\pm 0.11\) & \(0.899\pm 0.152\) & \(0.918\pm 0.092\) & \(0.894\pm 0.129\) & \(0.910\pm 0.123\) & 0.994 \\ \hline \multirow{4}{*}{adaptive \(\lambda\) (S2)} & \multirow{4}{*}{depth} & PSNR & \(31.61\pm 7.30\) & \(27.53\pm 7.41\) & \(29.82\pm 8.39\) & \(25.13\pm 7.02\) & \(28.52\pm 7.89\) & 48.24 \\ & SSIM & \(0.949\pm 0.049\) & \(0.916\pm 0.049\) & \(0.923\pm 0.078\) & \(0.882\pm 0.137\) & \(0.918\pm 0.095\) & 0.998 \\ \hline \hline \end{tabular} \end{table} Table 1: PSNR and SSIM of the RGB and depth images. The best results are underlined. \begin{table} \begin{tabular}{c|c c c c c} \hline \hline Setting & bathroom & bedroom & kitchen & living room & average \\ \hline RGB+depth+coord (S1) & \(0.933\pm 0.169\) & \(0.872\pm 0.229\) & \(0.949\pm 0.133\) & \(0.894\pm 0.212\) & \(0.912\pm 0.192\) \\ depth+coord (S2) & \(0.995\pm 0.028\) & \(0.970\pm 0.147\) & \(0.911\pm 0.264\) & \(0.887\pm 0.296\) & \(0.941\pm 0.217\) \\ \hline \hline \end{tabular} \end{table} Table 2: IoU of coordinates. ### Qualitative Results The qualitative results shown in Figure 4 for the S1 setting are collected with \(\lambda\) 0.1, and the results for the S2 setting are collected under the adaptive method. In the left image of Figure 4, both the RGB images and depth images are reconstructed with recognizable patterns and magnitudes. The RGB images exhibit accurate color tones, although there may be some inaccurate colored noise in local areas. The depth images reconstructed in conjunction with the RGB images also display similar noisy patterns, but the overall brightness of continuous areas remains accurate. Consequently, the sizes of private rooms can be accessed by potential adversaries. In the S2 setting, as depicted in the right image of Figure 4, the successfully reconstructed images exhibit accuracy with clear edges and details. For additional qualitative results, please refer to Appendix B. ### Ablation Study We evaluate the effect of the proposed QGI by comparing it with joint optimization. The joint optimization is applied to the state reconstruction step, jointly optimizing the vector and image reconstruction. As shown in Table 4, the joint optimization of the image state (RGB+depth) and vector state (goal coordinate) shows performance drops in terms of all metrics. The mean IoU is below 0.1, while QGI achieves over 0.9. The Q-value reconstruction also shows significant errors in joint optimization, and the mean \(\epsilon(\hat{Q})\) increases over 40 times compared to QGI. We then investigate the adaptive weight in \(L_{img}\) by comparing QGI with constants \(\lambda=0.1,0.01\). In the S1 setting, the adaptive weighted method shows a slight improvement in the PSNR and SSIM in Table 1. However, we find the visual quality of the \(\lambda=0.1\) is more stable than the adaptive \(\lambda\), despite the larger standard derivative of PSNR and SSIM. For depth images in both settings, the \begin{table} \begin{tabular}{c|c c c c c} \hline \hline Method & PSNR/SSIM (RGB) \(\uparrow\) & PSNR/SSIM (depth) \(\uparrow\) & IoU \(\uparrow\) & \(\epsilon(\hat{Q})\)(\%) \(\downarrow\) & \(\epsilon(Q)\)(\%) \(\downarrow\) \\ \hline Joint (S1) & \(19.0\pm 5.70.66\pm 0.18\) & \(20.3\pm 7.6/0.67\pm 0.19\) & \(0.057\pm 0.151\) & \(46.0\pm 11.8\) & \(27.2\pm 37.6\) \\ QGI (S1) & \(21.8\pm 3.9/0.74\pm 0.10\) & \(24.4\pm 4.2/0.76\pm 0.11\) & \(0.912\pm 0.192\) & \(1.03\pm 1.59\) & \(0.67\pm 1.47\) \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of QGI and jointly optimizing (Joint) of vector and image states. Figure 4: Qualitative results. The left shows the results of depth+RGB images (S1). The right shows the results of depth images (S2). \begin{table} \begin{tabular}{c c|c c c c c} \hline \hline Setting & Metric & bathroom & bedroom & kitchen & living room & average \\ \hline \multirow{2}{*}{RGB+depth+coord (S1)} & \(\epsilon(\hat{Q})\) & \(0.95\pm 1.70\) & \(1.35\pm 1.61\) & \(0.64\pm 1.10\) & \(1.19\pm 1.76\) & \(1.03\pm 1.59\) \\ & \(\epsilon(\hat{Q})\) & \(0.51\pm 0.97\) & \(0.89\pm 1.61\) & \(0.40\pm 1.03\) & \(0.40\pm 1.03\) & \(0.67\pm 1.47\) \\ \hline \multirow{2}{*}{depth+coord (S2)} & \(\epsilon(\hat{Q})\) & \(0.11\pm 0.16\) & \(0.34\pm 0.62\) & \(0.45\pm 1.24\) & \(1.01\pm 2.54\) & \(0.48\pm 1.49\) \\ & \(\epsilon(\hat{Q})\) & \(0.11\pm 0.24\) & \(0.16\pm 0.30\) & \(0.25\pm 0.78\) & \(0.65\pm 0.24\) & \(0.29\pm 1.29\) \\ \hline \hline \end{tabular} \end{table} Table 3: Percentile error (%) of Q-values and target Q-values. adaptive \(\lambda\) shows a consistent advantage over the constant \(\lambda\) in Table 1 and the visual quality is stable as shown in Figure 4, especially in the S2 setting. ## 5 Related Work **Privacy attack in Deep Learning.** Private training data is at risk of attacks, including membership inference, model inversion, and gradient inversion [26]. Membership inference assumes the adversary has access to certain data and attempts to verify whether the data is used in the training procedure [27; 28], while in robotic tasks, the private environments tend to stay inaccessible to adversaries, for example, bedrooms of private property. In contrast, model inversion and gradient inversion assume the private data is inaccessible and the adversary attempts to reconstruct the data, which are of higher significance for robotic tasks. While model inversion reveals the training data from the trained model [29; 30; 19; 31], gradient inversion tackles the reconstructing from the gradients shared in federated, collaborative, decentralized, or distributed learning [10; 11; 16; 14; 32; 12]. **Gradient inversion.** Previous works in gradient inversion mainly focused on image classification tasks. Zhu et al. [10] proposed an end-to-end optimization method to recover both images and labels for the classification task from both single sample gradients and batched gradients. Zhao et al. [13] proposed an efficient method to identify the one-hot label from a single sample's gradient. Geiping et al. [11] extended gradient inversion to high-resolution images by minimizing a prior loss proven effective in image denoising [24] and model inversion [30] and a magnitude-agnostic cost function. Yin et al. [16] increased the accuracy under large batch sizes by leveraging more image prior loss functions and penalizing distances of results from multiple trails. While most works studied gradient inversion on CNN or MLP, GradViT [12] attacked gradients of Vision Transformer by designing prior terms aiming at the architecture. Wang et al. [15] and Jeon et al. [14] enhanced the gradient inversion by introducing a generative network. Based on [10], Deng et al. [33] reconstructed training data of Transformer-based language models. Despite the achievement, to the best of our knowledge, gradient inversion is unexplored in the realm of both robotic tasks and multi-modal data. **Privacy in Reinforcement Learning.** Privacy has been studied in existing Reinforcement Learning literature for a long time. However, most works dedicated to privacy-preserving [3; 34; 4; 35; 36] other than revealing the risk of privacy leakage. The most pertinent work to this paper in Reinforcement Learning is the [17]. They recovered the training environment from the trained policies, demonstrating the privacy risk of releasing trained models. In contrast to model inversion algorithms oriented towards supervised learning, typically utilizing gradient-descent-based optimization methods, they leveraged the prior knowledge of the environment and used the genetic algorithm to search for the environment configuration. While they studied the sensitive information contained in trained models, the risk associated with the training procedure remains ambiguous. In this work, we investigate the possibility of revealing private state, action, and Q-values from gradients, where the data can change through the training procedure and the number of data samples is unlimited. ## 6 Conclusion and Limitation This paper introduces a novel method called QGI to address the issue of privacy leakage in DQN, which is a critical concern but has not received significant attention in the existing literature. Given the challenge of adapting existing gradient inversion algorithms to handle multi-modal inputs in RL, we develop a comprehensive pipeline that iteratively recovers the inputs. Through experimental results in active perception tasks, we demonstrate that QGI successfully recovers the image state, vector state, action, and Q-value from the gradient of the model. Ablation studies further validate that our proposed design outperforms baseline models. One limitation of this work is that the batch size of the input data is constrained to a small number in order to maintain the high quality of the recovered data. As pioneers in this area, we recognize the significance of addressing the privacy problem in decision-making to establish a trustworthy embodied AI system. Therefore, we leave the exploration of larger batch sizes and extensions to other RL-based decision-making algorithms for future research.
2304.07427
Exact probabilities for three and four dice in the balanced uniform model of 3-sided dice
We determine the exact probabilities of the different isomorphism classes of tournaments that result from random sets of three and four independent dice drawn from the balanced uniform model of 3-sided dice.
Kent E. Morrison
2023-04-14T23:20:51Z
http://arxiv.org/abs/2304.07427v1
# Exact probabilities for three and four Dice in the balanced uniform model of 3-sided Dice ###### Abstract. We determine the exact probabilities of the different isomorphism classes of tournaments that result from random sets of three and four independent dice drawn from the balanced uniform model of 3-sided dice. ## 1. Background and Results By now it is well-recognized that various models of generalized dice exhibit surprisingly high rates of intransitivity for randomly chosen dice. Given two dice \(A\) and \(B\), where \(A\) has face values \(a_{1},\ldots,a_{n}\) and \(B\) has face values \(b_{1},\ldots,b_{m}\), we say that \(A\)**dominates**\(B\), denoted by \(A\succ B\), if it is more likely for \(A\) to show a higher value than \(B\), that is \[\sum_{i,j}\text{sgn}(a_{i}-b_{j})>0.\] If the sum is \(0\), then neither die dominates the other and we say that the result is a tie. In this article we will always be working with dice with the same number of faces; i. e., \(m=n\). In 2016 Conrey, Gabbard, Grant, Liu, and Morrison [1] considered \(n\)-sided dice as integer multisets of size \(n\) with elements in \(\{1,2,\ldots,n\}\) and satisfying \(\sum a_{i}=n(n+1)/2\). We made two conjectures about random sets of three \(n\)-sided dice as \(n\to\infty\). * The probability of any ties goes to zero. * The probability of an intransitive triple goes to \(1/4\). For three dice \(A,B,C\) and in the absence of ties there are \(2^{3}\) possible configurations for the relations between the three pairs \((A,B)\), \((B,C)\), and \((A,C)\). Two of the eight configurations represent the intransitive cycles \(A\succ B\succ C\succ A\) and \(A\succ C\succ B\succ A\). The other six configurations are transitive chains. Evidence for the conjecture was provided by Monte Carlo simulations showing that the probabilities of all eight configurations appear to approach \(1/8\), and so the probability of an intransitive triple approaches \(1/4\) and the probability of a transitive triple approaches \(3/4\). We also made much more speculative generalized conjectures concerning \(k\) random dice for any fixed positive integer \(k\). First, the probability that there is a tie between any of the \(k\) dice goes to \(0\). With no ties there are \(2^{\binom{k}{2}}\) outcomes for all pairwise comparison, and each of these outcomes is represented by a complete directed graph on \(k\) vertices, i.e., a _tournament_. The second generalized conjecture is that in the limit all the tournaments are equally probable. A few years later these conjectures became the topic of a Polymath project. An alternative model was introduced, the _balanced sequence model_, in which an \(n\)-sided die is a sequence \(A=(a_{1},\ldots,a_{n})\) of elements in \(\{1,2,\ldots,n\}\) with total \(n(n+1)/2\). The balanced sequence model is easier to work than the multiset model. For example, a random die is now a sequence of \(n\) iid random variables uniform in \(\{1,\ldots,n\}\) and conditioned on the sum being \(n(n+1)/2\). The main results in [3] are that the conjectures for three dice hold for the balanced sequence model. Thus, the questions for three dice in the multiset model are still open, but it would be a surprise if they turn out to be false. At the same time the Polymath project spurred some other work, and there is strong computational evidence that the generalized conjecture about tournaments fails with four dice. If the \(2^{\binom{4}{2}}\) tournaments were all equally probable, then the probability of a transitive chain and the probability of an intransitive cycle would both have limit \(3/8=0.375\), but several independent simulations done for both the multiset and the sequence models produce data with the experimental probabilities somewhat greater than \(0.38\). Further evidence appears in the results of Cornacchia and Hazla [2] for four dice in the balanced uniform model. In this model a random \(n\)-sided die is a point \((a_{1},\ldots,a_{n})\in[0,1]^{n}\) chosen uniformly and conditioned on \(\sum_{i}a_{i}=n/2\). They prove that there exists \(\varepsilon>0\) such that for \(n\) sufficiently large, the probability is greater than \(3/8+\varepsilon\) that four random dice have a transitive tournament. In the same paper they prove that, in fact, the conjectures do hold for three dice in the balanced uniform model. That is, as \(n\to\infty\), the probability that three \(n\)-sided dice form an intransitive triple approaches \(1/4\), and the probability that they form a transitive chain approaches \(3/4\). In this note we consider random sets of three dice and four dice at the other extreme with \(n=3\), which is the least number of sides of any interest. With three dice there are eight tournaments in two isomorphism classes. One isomorphism class consists of the six transitive chains and the other class contains the two intransitive cycles. We find the the transitive probability is exactly \(973/1230=0.76015625\) and the intransitive probability is exactly \(307/1280=0.23984375\). With four dice the 64 tournaments fall into four different isomorphism classes: the 24 completely transitive chains, the 24 intransitive cycles, the 8 tournaments with an overall winner and a 3-cycle, and the 8 tournaments with an overall loser and a 3-cycle. We find the probabilities are: \[\begin{array}{ll}\mbox{transitive}&\frac{110413771}{25804800}\approx 0.42788\\ \\ \mbox{intransitive}&\frac{99930571}{25804800}\approx 0.38726\\ \\ \mbox{winner + 3-cycle}&\frac{23851829}{25804800}\approx 0.09243\\ \\ \mbox{loser + 3-cycle}&\frac{23851829}{25804800}\approx 0.09243\\ \\ \end{array}\] ## 2. Three Dice The dominance relation between \(A\) and \(B\) is invariant under permutations of the coordinates, and so we can assume that \(a_{1}\leq a_{2}\leq a_{3}\). We also have \(a_{1}+a_{2}+a_{3}=3/2\). The sample space for a single die is the polygonal region in the \(a_{1}a_{2}\)-plane defined by the inequalities \[0\leq a_{1},\quad a_{1}\leq a_{2},\quad 1\leq 2a_{1}+2a_{2},\quad 2a_{1}+4a_{2} \leq 3.\] The third and fourth inequalities come from \(a_{3}\leq 1\) and \(a_{2}\leq a_{3}\) by replacing \(a_{3}\) with \(3/2-a_{1}-a_{2}\). Call this region \(Q\). Its boundary is a quadrilateral within the unit square and it has area \(1/8\). The probability measure on \(Q\) is the normalized area. See Figure 1. The sample space for three dice \((A,B,C)\) is the six-dimensional polytope \(Q^{3}\subset[0,1]^{6}\). It is defined by \(12\) linear inequalities, four for each of the three dice. The volume of \(Q^{3}\) is \(1/8^{3}=1/512\). We use the coordinates \((a_{1},a_{2},b_{1},b_{2},c_{1},c_{2})\) for points in \(Q^{3}\). Our goal is to compute the volume of the subset of \(Q^{3}\) defined by \(A\succ B\succ C\succ A\). Let's first consider the inequalities that define \(A\succ B\). It is not necessary to check all 9 comparisons between each \(a_{i}\) and \(b_{j}\). It is sufficient to simply compare \(a_{i}\) with \(b_{i}\). **Lemma 1**.: _There are three different configurations for which \(A\succ B\). They are_ \[(\succ_{1}):a_{1}<b_{1},\quad a_{2}>b_{2},\quad a_{3}>b_{3}\] \[(\succ_{2}):a_{1}>b_{1},\quad a_{2}<b_{2},\quad a_{3}>b_{3}\] \[(\succ_{3}):a_{1}>b_{1},\quad a_{2}>b_{2},\quad a_{3}<b_{3}\] Proof.: If \(a_{i}>b_{i}\) for all \(i=1,2,3\), then \(\sum a_{i}>\sum b_{i}\), which is impossible. However, if any two of the three hold, then \(A\) dominates \(B\). The configurations are labeled \(\succ_{i}\) according to which inequality \(a_{i}>b_{i}\) fails to hold. **Lemma 2**.: _If \(A\succ_{i}B\) and \(B\succ_{i}C\), then \(A\succ_{i}C\). (That is, the relations \(\succ_{i}\) are transitive.)_ Proof.: Straightforward. Just write down the inequalities. **Lemma 3**.: _If \(A,B,C\) form an intransitive triple with \(A\succ B\succ C\succ A\), then \(A\succ_{i}B\succ_{j}C\succ_{k}A\) where \(\{i,j,k\}=\{1,2,3\}\)._ Proof.: Assume that the three relations are not distinct. By relabeling, if necessary, we can assume that \(i=j\). Then \(A\succ_{i}B\) and \(B\succ_{i}C\), and therefore \(A\succ_{i}C\), which is a contradiction. For a permutation \(\sigma=\sigma_{1}\sigma_{2}\sigma_{3}\) define \(E_{\sigma}\) to be both the event \[A\succ_{\sigma_{1}}B\succ_{\sigma_{2}}C\succ_{\sigma_{3}}A\] and the subset of \(Q^{3}\) representing the event. Each \(E_{\sigma}\) is a polytope defined by nine additional inequalities, three for each of the three relations. The union \(E=\bigcup E_{\sigma}\) is the event that \(A\succ B\succ C\succ A\), and \(\mathrm{P}(E)=\mathrm{vol}(E)/\mathrm{vol}(Q^{3})\). Also, \(\mathrm{vol}(E)=\sum_{\sigma}\mathrm{vol}(E_{\sigma})\). The volumes of the \(E_{\sigma}\) are computed using the SageMath interface to LattE integrale. Figure 1. The sample space \(Q\) for a single random die **Lemma 4**.: \[\mathrm{P}(E_{\sigma})=\begin{cases}\dfrac{23}{1800}&\text{if }\sigma=123,231,312\\ \\ \dfrac{3133}{115200}&\text{if }\sigma=132,213,312\end{cases}\] \[\mathrm{P}(E)=\dfrac{307}{2560}\] Proof.: See Section 4 for the code. Note that it suffices to compute the volumes of \(E_{123}\) and \(E_{132}\) because \(E_{\sigma}\) and \(E_{\sigma^{\prime}}\) have the same volume if \(\sigma^{\prime}\) is a cyclic permutation of \(\sigma\). The reason is that the cyclic permutation \((A,B,C)\mapsto(B,C,A)\) is volume preserving and maps \(E_{\sigma_{1}\sigma_{2}\sigma_{3}}\) to \(E_{\sigma_{2}\sigma_{3}\sigma_{1}}\). **Theorem 5**.: _The probability that \(A,B,C\) form an intransitive triple is \(307/1280=0.23984375\). The probability that they form a transitive chain is \(973/1280=0.76015625\)._ Proof.: By symmetry \(\mathrm{P}(A\succ B\succ C\succ A)=\mathrm{P}(A\prec B\prec C\prec A)\), which is \(\mathrm{P}(E)\), and so the probability of an intransitive triple is \(2\mathrm{P}(E)\). The transitive probability is the complementary probability \(1-2\mathrm{P}(E)\). ## 3. Four Dice The sample space for four dice \((A,B,C.D)\) is the eight-dimensional polytope \(Q^{4}\subset[0,1]^{8}\). It is defined by \(16\) linear inequalities, four for each of the four dice. The volume of \(Q^{4}\) is \(1/8^{4}=1/4096\). We use the coordinates \((a_{1},a_{2},b_{1},b_{2},c_{1},c_{2},d_{1},d_{2})\) for points in \(Q^{4}\). The main computation that we need to do is to find the probability that \(A\succ B\succ C\succ D\succ A\). Then using symmetry and the results already computed for three dice we will have enough to determine the probabilities of the four different tournament types. To see how that works, we introduce the following notation for the probabilities of the different isomorphism classes for three and four dice. \begin{tabular}{l l} transitive 3-chain & \(P_{\text{3-line}}\) \\ intransitive 3-cycle & \(P_{\triangle}\) \\ transitive 4-chain & \(P_{\text{4-line}}\) \\ intransitive 4-cycle & \(P_{\square}\) \\ winner + 3-cycle & \(P_{1\rightarrow\triangle}\) \\ loser + 3-cycle & \(P_{1\leftarrow\triangle}\) \\ \end{tabular} Generating a random set of four dice and then deleting one of them at random gives a random set of three dice. If the four dice are a transitive chain, then the remaining three dice will be transitive. If the four dice are a 4-cycle, then removing one of them results in a 3-cycle or a 3-chain, with two possibilities for each. If the four dice are a 3-cycle plus winner/loser, then the result is a 3-cycle when the winner/loser is removed or a 3-chain if one of the other three dice is removed. Therefore, \[P_{\text{3-line}} =P_{\text{4-line}}+\frac{1}{2}P_{\square}+\frac{3}{4}\big{(}P_{1 \rightarrow\triangle}+P_{1\leftarrow\triangle}\big{)}\] \[P_{\triangle} =\frac{1}{2}P_{\square}+\frac{1}{4}\big{(}P_{1\rightarrow\triangle }+P_{1\leftarrow\triangle}\big{)}\] Furthermore, \(P_{1\rightarrow\triangle}=P_{1\leftarrow\triangle}\), because the transformation \[A=(a_{1},a_{2},a_{3})\mapsto A^{*}=(1-a_{3},1-a_{2},1-a_{1})\] reverses the relation between dice, and so the tournament associated to \((A^{*},B^{*},C^{*},D^{*})\) is the result of reversing all the edges in the tournament of \((A,B,C,D)\). Therefore, once we have found \(P_{\square}\), we use it and the already known \(P_{\text{3-line}}\) and \(P_{\triangle}\) to solve for the remaining probabilities. **Lemma 6**.: \(P_{\square}=6\operatorname{P}(A\succ B\succ C\succ D\succ A)\)_._ Proof.: There are six ways to label the vertices of a 4-cycle. A priori, there are 81 different polytopes in \(Q^{4}\) to consider because each of the four relations in the cycle \(A\succ B\succ C\succ D\succ A\) is \(\succ_{1},\succ_{2}\), or \(\succ_{3}\). For \(\sigma=\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{4}\), where \(\sigma_{i}\in\{1,2,3\}\), let \(G_{\sigma}\) be the event \(A\succ_{\sigma_{1}}B\succ_{\sigma_{2}}C\succ_{\sigma_{3}}D\succ_{\sigma_{4}}A\) as well as the corresponding subset of \(Q^{4}\). The union \(G=\bigcup G_{\sigma}\) corresponds to the event \(A\succ B\succ C\succ D\succ A\), and \(\operatorname{P}(G)=\sum\operatorname{P}(G_{\sigma})\). Fortunately, we can reduce the number of \(G_{\sigma}\) whose volumes need to be computed. **Lemma 7**.: _If \(\sigma^{\prime}\) is a cyclic permutation of \(\sigma\), then \(\operatorname{vol}(G_{\sigma^{\prime}})=\operatorname{vol}(G_{\sigma})\)._ Proof.: The map \(Q^{4}\to Q^{4}:(A,B,C,D)\mapsto(B,C,D,A)\) is an isometry and maps \(G_{\sigma}\) to \(G_{\sigma^{\prime}}\) where \(\sigma=(\sigma_{1},\sigma_{2},\sigma_{3},\sigma_{4})\) and \(\sigma^{\prime}=(\sigma_{2},\sigma_{3},\sigma_{4},\sigma_{1})\). **Lemma 8**.: _If \(\sigma\) has one or two distinct entries, then \(G_{\sigma}=\emptyset\)._ Proof.: The \(\sigma\) in question are the following and all cyclic permutations of them: \[(iii): 1111,2222,3333\] \[(iiij): 1112,1113,2221,2223,3331,3332\] \[(iijj): 1122,1133,2233\] \[(ijij): 1212,1313,2323\] For \(\sigma\) in the first group the chain of inequalities obviously leads to a contradiction. Now let \(\sigma=1112\), which means that \(a_{1}<b_{1}<c_{1}<d_{1}\) and \(d_{2}<a_{2}\). But we also have \(a_{2}>b_{2}>c_{2}>d_{2}\), which is a contradiction. The remaining five \(\sigma\) in the second group are disposed of similarly. From the third group we use the fact that the \(\succ_{i}\) are transitive. If \(\sigma=iijj\) then \(G_{\sigma}\) is the event \(A\succ_{i}B\succ_{i}C\succ_{j}D\succ_{j}A\), and this implies that \(A\succ_{i}C\) and \(C\succ_{j}A\), which is impossible. Finally for \(\sigma=ijij\) we consider \(A\succ_{i}B\succ_{j}C\succ_{i}D\succ_{j}A\). Then for \(k\neq i,j\) we have \(a_{k}>b_{k}>c_{k}>d_{k}>a_{k}\), which is impossible. **Lemma 9**.: _There are 36 different \(\sigma\) for which \(\mathrm{P}(G_{\sigma})>0\). Each is a cyclic permutation of one of the nine on the list below. If \(\sigma^{\prime}\) is a cyclic permutation of \(\sigma\), then \(\mathrm{P}(G_{\sigma^{\prime}})=\mathrm{P}(G_{\sigma})\)._ \[\begin{array}{ccc}\sigma&\mathrm{P}(G_{\sigma})\\ 1123&229/322560\\ 1132&691507/294912000\\ 1213&40913/15482880\\ 1223&5431/8064000\\ 1232&32299/16515072\\ 1322&38929/18432000\\ 1233&229/322560\\ 1323&40913/15482880\\ 1332&691507/294912000\\ \end{array}\] Proof.: See the Appendix for the Sage code to compute the volumes and probabilities. Notice that \(\mathrm{P}(G_{1123})=\mathrm{P}(G_{1233})\), \(\mathrm{P}(G_{1132})=\mathrm{P}(G_{1332})\), and \(\mathrm{P}(G_{1213})=\mathrm{P}(G_{1323})\). Consider the event \(G_{1123}\), which means that \(A\succ_{1}B\succ_{1}C\succ_{2}D\succ_{3}A\). Apply the star operator to each die and verify that verify \(A^{*}\succ_{1}D^{*}\succ_{2}C^{*}\succ_{3}B^{*}\succ_{3}A^{*}\). That is, \(\succ_{1}\) reverses direction and changes to \(\succ_{3}\), while \(\succ_{3}\) reverses and changes to \(\succ_{1}\), and \(\succ_{2}\) only reverses direction. Thus, the isometry \[Q^{4}\to Q^{4}:(A,B,C,D)\mapsto(A^{*},D^{*},C^{*},B^{*})\] maps \(G_{1123}\) onto \(G_{1233}\). It also maps \(G_{1132}\) onto \(G_{2133}\), which has the same volume as \(G_{1332}\) because \(2133\) and \(1332\) are cyclically related. And it maps \(G_{1213}\) onto \(G_{1323}\). **Lemma 10**.: \[\mathrm{P}(A\succ B\succ C\succ D\succ A)=\mathrm{P}(G)=\frac{99930571}{1548288000}\] Proof.: There are four cyclic permutations of each \(\sigma\) in Lemma 9. So summing them and multiplying by four gives \(\mathrm{P}(G)\). **Theorem 11**.: _With four 3-sided dice in the balanced uniform model, the probabilities of the four isomorphism classes of tournaments are the following:_ \[P_{\text{4-line}}=\frac{110413771}{258048000}\approx 0.42788\quad\text{( transitive chain)}\] \[P_{\square}=\frac{99930571}{258048000}\approx 0.38726\quad\text{( intransitive 4-cycle)}\] \[P_{1\rightarrow\triangle}=\frac{23851829}{258048000}\approx 0.09243\quad\text{( winner + 3-cycle)}\] \[P_{1\leftarrow\triangle}=\frac{23851829}{258048000}\approx 0.09243\quad\text{( loser + 3-cycle)}\] Proof.: With four vertices there are six different 4-cycles. From Lemma 9 we know \(\mathrm{P}(G)\), the probability of one particular 4-cycle, and so \[P_{\square}=6\,\mathrm{P}(G)=6\left(\frac{99930571}{1548288000}\right)=\frac{ 99930571}{258048000}.\] We also know \(P_{\text{3-line}}=973/1280\) and \(P_{\triangle}=307/1280\) from Theorem 5, so that we can solve for \(P_{\text{4-line}}\), \(P_{1\rightarrow\triangle}\), and \(P_{1\leftarrow\triangle}\) using the equations \[P_{\text{3-line}} =P_{\text{4-line}}+\frac{1}{2}P_{\square}+\frac{3}{4}\big{(}P_{1 \rightarrow\triangle}+P_{1\leftarrow\triangle}\big{)}\] \[P_{\triangle} =\frac{1}{2}P_{\square}+\frac{1}{4}\big{(}P_{1\rightarrow\triangle }+P_{1\leftarrow\triangle}\big{)}\] \[P_{1\rightarrow\triangle} =P_{1\leftarrow\triangle}\] ## 4. Sage Computation: Three Dice We define the polytopes (or "polyhedra" ) in Sage using the half-space representation [4], which is a system of linear inequalities of the form \[0\leq\beta+\alpha_{1}x_{1}+\alpha_{2}x_{2}+\cdots+\alpha_{n}x_{n}.\] This inequality is represented by \((\beta,\alpha_{1},\alpha_{2},\ldots,\alpha_{n})\), a tuple of length \(n+1\). Our variables are \(a_{1},a_{2},b_{1},b_{2},c_{1},c_{2}\), so that, for example, the inequality \(2a_{1}+4a_{2}\leq 3\) is represented by the \(7\)-tuple \((3,-2,-4,0,0,0,0)\). The twelve inequalities defining \(Q^{3}\) and their Sage representations are the following: \[0\leq a_{1}, a_{1}\leq a_{2}, 1\leq 2a_{1}+2a_{2}, 2a_{1}+4a_{2}\leq 3,\] \[0\leq b_{1}, b_{1}\leq b_{2}, 1\leq 2b_{1}+2b_{2}, 2b_{1}+4b_{2}\leq 3,\] \[0\leq c_{1}, c_{1}\leq c_{2}, 1\leq 2c_{1}+2c_{2}, 2c_{1}+4c_{2}\leq 3.\] \[(0,1,0,0,0,0,0), (0,-1,1,0,0,0,0), (-1,2,2,0,0,0,0), (3,-2,-4,0,0,0,0),\] \[(0,0,0,1,0,0,0), (0,0,0,-1,1,0,0), (-1,0,0,2,2,0,0), (3,0,0,-2,-4,0,0),\] \[(0,0,0,0,0,1,0), (0,0,0,0,0,-1,1), (-1,0,0,0,0,2,2), (3,0,0,0,0,-2,-4).\] To define \(E_{123}\) corresponding to \(A\succ_{1}B\succ_{2}C\succ_{3}A\) there are nine additional inequalities. \[a_{1}<b_{1}, a_{2}>b_{2}, a_{3}>b_{3}\Leftrightarrow a_{1}+a_{2}<b_{1}+b_{2},\] \[b_{1}>c_{1}, b_{2}<c_{2}, b_{3}>c_{3}\Leftrightarrow b_{1}+b_{2}<c_{1}+c_{2},\] \[c_{1}>a_{1}, c_{2}>a_{2}, c_{3}<a_{3}\Leftrightarrow c_{1}+c_{2}>a_{1}+a_{2}.\] The Sage representations are \[(0,-1,0,1,0,0,0), (0,0,1,0,-1,0,0), (0,-1,-1,1,1,0,0),\] \[(0,0,0,1,0,-1,0), (0,0,0,0,-1,0,1), (0,0,0,-1,-1,1,1),\] \[(0,-1,0,0,0,1,0), (0,0,-1,0,0,1), (0,-1,-1,0,0,1,1).\] The polytope \(E_{132}\) corresponding to \(A\succ_{1}B\succ_{3}C\succ_{2}A\) has the additional inequalities represented by \[(0,-1,0,1,0,0,0), (0,0,1,0,-1,0,0), (0,-1,-1,1,1,0,0),\] \[(0,0,0,1,0,-1,0), (0,0,0,0,1,0,-1), (0,0,0,1,1,-1,-1),\] \[(0,-1,0,0,0,1,0), (0,0,1,0,0,0,-1), (0,1,1,0,0,-1,-1).\] For the volume computations we call the integrate function in _LattE integrale_ using the Sage interface [5]. Sage input from sage.interfaces.latticeimportintegrate Q3_ieqs = [(0,1,0,0,0,0,0),(-1,2,2,0,0,0,0),(0,-1,1,0,0,0,0), (3,-2,-4,0,0,0,0),(0,0,0,1,0,0,0),(-1,0,0,2,2,0,0), (0,0,0,-1,1,0,0),(3,0,0,-2,-4,0,0),(0,0,0,0,0,1,0), (-1,0,0,0,0,2,2),(0,0,0,0,0,-1,1),(3,0,0,0,0,-2,-4)] Q3 = Polyhedron(ieqs = Q3_ieqs) vol_Q3 = integrate(Q3.cdd_Hrepresentation(),cdd=True) E123_ieqs = [(0,-1,0,1,0,0,0),(0,0,1,0,-1,0,0),(0,-1,-1,1,0,0), (0,0,0,1,0,-1,0),(0,0,0,0,-1,0,1),(0,0,0,-1,-1,1), (0,-1,0,0,0,1,0),(0,0,-1,0,0,0,1),(0,-1,-1,0,0,1,1)] E123 = Polyhedron(ieqs = Q3_ieqs + E123_ieqs ) vol_E123 = integrate(E123.cdd_Hrepresentation(), cdd=True) prob_E123 = vol_E123/vol_Q3 E213 = integrate(E213.cdd_Hrepresentation(), cdd=True) prob_E213 = vol_E213/vol_Q3 prob_E = 3*prob_E123 + 3*prob_E213 print("vol(Q3) =",vol_Q3) print("P(E123) =",prob_E123,"=",n(prob_E123,digits=9)) print("P(E213) =",prob_E213,"=",n(prob_E213,digits=9)) print("P(A>B>C>A) = P(E) =",prob_E,"=",n(prob_E,digits=9)) print("P(intransitive) =",2*prob_E,"=",n(2*prob_E,digits=9)) Output vol(Q3) = 1/512 P(E123) = 23/1800 = 0.012777778 P(E213) = 3133/115200 = 0.0271961806 P(A>B>C>A) = P(E) = 307/2560 = 0.119921875 P(intransitive) = 307/1280 = 0.239843750 ## 5. Sage Computation: Four Dice There are 16 inequalties needed to define \(Q^{4}\) in Sage. Each is represented by a tuple of length 9. They are straightforward extensions of those used to define \(Q^{3}\). To define \(G_{\sigma}\) where \(\sigma=ijkl\) we use the function G(i,j,k,l) which returns the Sage polyhedron. This requires the inequalities for \(Q^{4}\) and the inequalities for the four relations. For \(m=1,2,3,4\) and \(i=1,2,3\) the function g(m,i) returns a list of three tuples asserting that the \(m\)th dominance relation is \(\succ_{i}\). For example, \(g(3,2)\) returns the tuples for \(C\succ_{2}D\) since the third relation is the one between \(C\) and \(D\). Then \(G(i,j,k,l)\) simply concatenates the inequalities for \(Q^{4}\) with those coming from g(1,i),g(2,j),g(3,k), and g(4,l), a total of 28 inequalties. The definition of g(m,i) uses vectors a1,a2,a3,...,d1,d2,d3 so that an inequality such as \(a_{2}>b_{2}\) is represented by a2-b2. The vector a3 is actually -a1-a2 and likewise for b3,c3,d3, because an inequality such as \(a_{3}>b_{3}\) is equivalent to \(-a_{1}-a_{2}>-b_{1}-b_{2}\). The function pr(i,j,k,l) returns the probability of \(G_{ijkl}\). First it finds the dimension of G(i,j,k,l). If the dimension is less than eight, then the polytope has zero volume and the probability is zero. If the dimension is eight, then the function uses the integrate function from _LattE integrale_ to find the volume and multiplies by \(4096\) to get the probability. It is necessary to check the dimension first, because the integrate function does not return zero on polytopes of less than the full dimension. Sage input from sage.interfaces.latticeimportintegrate Q4_ieqs = [(0,1,0,0,0,0,0,0,0), (-1,2,2,0,0,0,0,0,0), (0,-1,1,0,0,0,0,0,0),(3,-2,-4,0,0,0,0,0,0), (0,0,0,1,0,0,0,0,0),(-1,0,0,2,2,0,0,0,0), (0,0,0,-1,1,0,0,0,0),(3,0,0,-2,-4,0,0,0,0), (0,0,0,0,0,1,0,0,0),(-1,0,0,0,0,2,2,0), (0,0,0,0,0,-1,1,0,0),(3,0,0,0,0,-2,-4,0,0), (0,0,0,0,0,0,1,0),(-1,0,0,0,0,0,0,2,2), (0,0,0,0,0,0,0,-1,1),(3,0,0,0,0,0,0,-2,-4)] Q4 = Polyhedron(ieqs = Q4_ieqs) vol_Q4 = integrate(Q4.ccdd_Heppresentation(),ccdd=True) a1=vector((0,1,0,0,0,0,0,0,0)); a2=vector((0,0,1,0,0,0,0,0,0)); a3=vector((0,-1,-1,0,0,0,0,0,0)); b1=vector((0,0,0,1,0,0,0,0)); b2=vector((0,0,0,1,0,0,0,0)); b3=vector((0,0,0,-1,-1,0,0,0,0)); c1=vector((0,0,0,0,1,0,0,0)); c2=vector((0,0,0,0,0,1,0,0)); c3=vector((0,0,0,0,0,-1,-1,0,0)); d1=vector((0,0,0,0,0,0,1,0)); d2=vector((0,0,0,0,0,0,0,1)); d3=vector((0,0,0,0,0,0,-1,-1)); def g(m,i): s = ((-1,1,1),(1,-1,1),(1,1,-1)) if m==1: return [s[i-1][0]*(a1-b1),s[i-1][1]*(a2-b2), s[i-1][2]*(a3-b3)] elif m==2: return [s[i-1][0]*(b1-c1),s[i-1][1]*(b2-c2), s[i-1][2]*(b3-c3)] elif m==3: return [s[i-1][0]*(c1-d1),s[i-1][1]*(c2-d2), s[i-1][2]*(c3-d3)] elif m==4: return [s[i-1][0]*(d1-a1),s[i-1][1]*(d2-a2), s[i-1][2]*(d3-a3)] end Sage input defG(i,j,k,l): returnPolyhedron(ieqs=Q4_ieqs+g(l,i)+g(2,j)+g(3,k)+g(4,l)) end defpr(i,j,k,l): event=G(i,j,k,l) ifevent.dim()<8: return0 else: return4096*integrate(event.cdd_Hrepresentation(),cdd=True) end print("EventProbability") print("G(1,1,2,3)",pr(1,1,2,3)) print("G(1,1,3,2)",pr(1,1,3,2)) print("G(1,2,1,3)",pr(1,2,1,3)) print("G(1,2,2,3)",pr(1,2,2,3)) print("G(1,2,3,2)",pr(1,3,2,2)) print("G(1,2,3,3)",pr(1,2,3,3)) print("G(1,3,2,3)",pr(1,3,2,3)) print("G(1,3,2,2)",pr(1,3,2,2)) ``` Output Output EventProbability G(1,1,2,3)229/322560 G(1,1,3,2)691507/294912000 G(1,2,1,3)40913/15482880 G(1,2,2,3)5431/8064000 G(1,2,3,2)32299/16515072 G(1,3,2,2)38929/18432000 G(1,2,3,3)229/322560 G(1,3,2,3)40913/15482880 G(1,3,2,2)38929/18432000
2306.16353
Centrality-dependent Lévy HBT analysis in $\sqrt{s_{_{\text{NN}}}}=5.02$ TeV PbPb collisions at CMS
The measurement of two-particle Bose-Einstein momentum correlation functions are presented using $\sqrt{s_{_{\text{NN}}}}=5.02$ TeV PbPb collision data, recorded by the CMS experiment in 2018. The measured correlation functions are discussed in terms of L\'evy type source distributions. The L\'evy source parameters are extracted as functions of transverse mass and collision centrality. These source parameters include the correlation strength $\lambda$, the L\'evy stability index $\alpha$, and the L\'evy scale parameter $R$. The source shape, characterized by $\alpha$, is found to be neither Gaussian nor Cauchy. A hydrodynamic-like scaling of $R$ is also observed.
Balázs Kórodi
2023-06-28T16:34:26Z
http://arxiv.org/abs/2306.16353v2
# Centrality-Dependent Levy HBT Analysis in \(\sqrt{s_{\rm NN}}=5.02\) TeV PbPb Collisions at CMS ###### Abstract The measurement of two-particle Bose-Einstein momentum correlation functions are presented using \(\sqrt{s_{\rm NN}}=5.02\) TeV PbPb collision data, recorded by the CMS experiment in 2018. The measured correlation functions are discussed in terms of Levy-type source distributions. The Levy source parameters are extracted as functions of transverse mass and collision centrality. These source parameters include the correlation strength \(\lambda\), the Levy stability index \(\alpha\), and the Levy scale parameter \(R\). The source shape, characterized by \(\alpha\), is found to be neither Gaussian nor Cauchy. A hydrodynamic-like scaling of \(R\) is also observed. heavy ions; quark-gluon plasma; femtoscopy; Levy HBT + Footnote †: journal: Physics Letters A ## 1 Introduction The investigation of the femtometer-scale space-time geometry of high-energy heavy-ion collisions has been an important area, called femtoscopy, of high-energy physics for several decades [1]. The main idea of this field originates from astronomy, since it is analogous with the well-known Hanbury Brown and Twiss (HBT) effect that describes the intensity correlation of photons [2, 3]. In high-energy physics, however, the observable is the quantum-statistical momentum correlation of hadrons, which carries information about the femtometer-scale structure of the particle-emitting source [4, 5]. The measurements of such momentum correlations are partially responsible for establishing the fluid nature of the quark-gluon plasma (QGP) created in heavy-ion collisions [6, 7]. Furthermore, the measured source radii provide information about the transition from the QGP to the hadronic phase [8, 9], as well as about the phase space of quantum chromodynamics [10]. Recent high-precision femtoscopic measurements [11, 12] have shown that the previously widely assumed Gaussian [6, 13, 14] or Cauchy [15, 16] source distributions do not provide an adequate description of the measured correlation functions. Instead, a generalization of these distribution, the Levy alpha-stable distribution [17], is needed for a statistically acceptable description [11, 12]. The shape of the Levy distribution is characterized by the Levy stability index \(\alpha\), and can be influenced by various physical phenomena, e.g., anomalous diffusion [18, 19, 20], resonance decays [21, 22], jet fragmentation [23], and critical phenomena [24]. Until now, the \(\alpha\) parameter had not been measured at the largest energies accessible at the LHC. The question of how \(\alpha\) changes compared to lower energies signifies the need for a Levy HBT analysis at LHC energy. In this paper, the Levy HBT analysis of two-particle Bose-Einstein momentum correlations is presented using \(\sqrt{s_{\rm NN}}=5.02\) TeV PbPb collision data recorded by the CMS experiment. The source parameters, extracted from the correlations functions, are studied as functions of transverse mass and collision centrality. ## 2 Femtoscopy with Levy Sources The quantum-statistical momentum correlation of identical bosons is called Bose-Einstein correlation. This correlation is in connection with the source function \(S(x,p)\)[4, 5], which is the phase-space probability density of particle production at space-time point and four-momentum \(p\). After some approximations detailed in Refs. [4; 5], the following formula is obtained: \[C^{(0)}(Q,K)\approx 1+\frac{|\widetilde{S}(Q,K)|^{2}}{|\widetilde{S}(0,K)|^{2}}, \tag{1}\] where \(C^{(0)}(Q,K)\) is the two-particle momentum correlation function, \(Q\) is the pair relative four-momentum, \(K\) is the pair average four-momentum, the superscript \((0)\) denotes the neglection of final-state interactions, and \(\widetilde{S}(Q,K)\) is the Fourier transform of the source with \[\widetilde{S}(Q,K)=\int S(x,K)e^{iQx}d^{4}x. \tag{2}\] Equation (1) implies that \(C^{(0)}(Q=0,K)=2\). In previous measurements, it was found, however, that \(C^{(0)}(Q\to 0,K)<2\). This result can be understood via the core-halo model [25; 26], wherein the source is divided into two parts, a core of primordial hadrons and a halo of long-lived resonances. The halo is experimentally unresolvable due to its large size, which leads to small momentum in Fourier space. If \(S\) represents only the core part of the source, its connection to the correlation function becomes \[C^{(0)}(Q,K)\approx 1+\lambda\frac{|\widetilde{S}(Q,K)|^{2}}{|\widetilde{S}(0, K)|^{2}}, \tag{3}\] where \(\lambda\) is the square of the core fraction, and it is often called the correlation strength parameter. Using Equation (3), a theoretical formula for \(C^{(0)}(Q,K)\) can be calculated by assuming a given source distribution. In this analysis, a generalization of the Gaussian distribution, the so-called spherically symmetric Levy alpha-stable distribution [17], was assumed for the spatial part of the source. This distribution is defined by the following Fourier transform in three dimensions: \[\mathcal{L}(\mathbf{r};\alpha,R)=\frac{1}{(2\pi)^{3}}\int\mathrm{d}^{3}\mathbf{q}\,e^ {i\mathbf{q}\mathbf{r}}e^{-\frac{1}{2}|\mathbf{q}R|^{\alpha}}, \tag{4}\] where \(\mathbf{q}\) is an integration variable, \(\mathbf{r}\) is the variable of the distribution, \(\alpha\) and \(R\) are parameters; the Levy stability index and the Levy scale parameter, respectively. The \(\alpha\) parameter describes the shape of the distribution, with \(\alpha=2\) corresponding to the Gaussian and \(\alpha=1\) to the Cauchy case. The \(R\) parameter describes the spatial scale of the source, as it is proportional to the full width at half maximum. There are many possible reasons [18; 19; 20; 21; 22; 23; 24] behind the appearance of the Levy distribution in heavy-ion collisions, but these possibilities are still under investigation by the community. In case of a spherically symmetric Levy source, the two-particle correlation function has the form [19] \[C^{(0)}(q)=1+\lambda e^{-(qR)^{\alpha}}, \tag{5}\] where \(q=|\mathbf{Q}|\) is the magnitude of the spatial part of \(Q\). In the above formulas, the presence of final-state interactions was neglected. In the case of charged particles, the most important final-state interaction is the Coulomb interaction, which is usually taken into account in the form of a Coulomb correction\(K_{\mathrm{C}}(q;R,\alpha)\)[27; 28; 29]. Using the Bowler-Sinyukov method [30], one obtains \[C(q)=1-\lambda+\lambda(1+e^{-(qR)^{\alpha}})K_{\mathrm{C}}(q;R,\alpha). \tag{6}\] In this analysis, the \(R\) and \(\alpha\)-dependent Coulomb correction, calculated in Ref. [31], was utilized. A formula based on Equation (6) was used for fitting to the measured correlation functions. ## 3 Measurement Details The used data sample contains \(4.27\times 10^{9}\) PbPb events at a center-of-mass energy per nucleon pair of \(\sqrt{s_{\rm NN}}=5.02\) TeV, recorded by the CMS experiment in 2018. The detailed description of the CMS detector system can be found in Ref. [32]. For the analysis, only events with precisely one nucleus-nucleus collision were used, where the longitudinal distance of the interaction point from the center of the detector was also less than 15 cm. Further event selections were applied to reject events from beam-gas interactions and nonhadronic collisions [33]. The individual tracks were filtered based on their transverse momentum, pseudorapidity, distance to the vertex, the goodness of the track fit, and the number of hits in the tracking detectors. Particle identification in central PbPb collisions is not possible with the CMS detector; therefore, all charged tracks passing the other selection criteria were used. The majority of these charged particles are pions [34], so the pion mass was assumed for all of them. The largest contamination is caused by kaons and protons [34], and this effect is discussed in Section 4. Measuring two-particle Bose-Einstein correlation functions means measuring pair distributions. Besides the quantum-statistical effects, these pair distributions are influenced by detector acceptance, kinematics, and other phenomena. In order to remove these unwanted effects, the correlation function is calculated as the normalized ratio of two distributions, the actual (signal) distribution \(A(q)\), and the background distribution \(B(q)\), with \[C(q)=\frac{A(q)}{B(q)}\frac{\int B(q)dq}{\int A(q)dq}, \tag{7}\] where the integrals are calculated over a range where the quantum-statistical effects are not present. The \(A(q)\) distribution contains all same charged pairs of a given event, while the \(B(q)\) distribution contains all same charged pairs of a mixed event. This mixed event is obtained by randomly selecting particles from different events, as detailed in Refs. [11; 35]. For the validity of Equation (7), it was assumed that the produced particles had a uniform rapidity distribution [36]. In the measurement of \(C(q)\), the \(q\) variable is taken as the magnitude of the relative momentum in the longitudinally comoving system (LCMS), where the longitudinal component of the average momentum is zero. This coordinate system was chosen because, in earlier measurements, the source was found to be approximately spherically symmetric in this frame [6]. The measurement is carried out up to \(q=8\) GeV/\(c\) in 6 centrality (0-60%) and 24 average transverse momentum \(K_{\rm T}\) (0.5-1.9 GeV/\(c\)) classes, separately for positively and negatively charged pairs. In order to remove the merging and splitting effects caused by the finite resolution of the tracking detectors, a pair selection was applied. These artifacts were limited to a region with small \(\Delta\eta\) and \(\Delta\phi\); therefore, each pair had to satisfy the following condition: \[\left(\frac{|\Delta\eta|}{0.014}\right)^{2}+\left(\frac{|\Delta\phi|}{0.022} \right)^{2}>1, \tag{8}\] where \(\Delta\eta\) is the pseudorapidity difference and \(\Delta\phi\) is the azimuthal angle difference. Tracking efficiency correction factors were also utilized when measuring the \(A(q)\) and \(B(q)\) distributions. Even after removing most of the non-quantum-statistical effects by taking the ratio of \(A(q)\) and \(B(q)\), a structure was observed in \(C(q)\) at large \(q\) values, where the quantum-statistical effects were not present. This long-range background can be the result of phenomena such as energy and momentum conservation, resonance decays, bulk flow [15], and minijets [15]. To remove any potential influence of the long-range background on the low \(q\) region where the Bose-Einstein peak is present, \(C(q)\) was divided by a background function \(BG(q)\), resulting in the double-ratio correlation function \(DR(q)\): \[DR(q)=\frac{C(q)}{BG(q)}. \tag{9}\] The explicit form of \(BG(q)\) was determined by fitting the following empirically determined formula [15; 37; 38] to the large \(q\) part of \(C(q)\): \[BG(q)=N\Big{(}1+\alpha_{1}e^{-(qR_{1})^{2}}\Big{)}\left(1-\alpha_{2}e^{-(qR_{2 })^{2}}\right), \tag{10}\] where \(N,\alpha_{1},\alpha_{2},R_{1},R_{2}\) are fit parameters with no physical meaning. The \(DR(q)\) distributions were fitted with the following formula based on Equation (6): \[DR(q)=N(1+\epsilon q)\Big{[}1-\lambda+\lambda(1+e^{-(qR)^{4}})K_{\rm C}(q;R, \alpha)\Big{]}, \tag{11}\] where \(N\) is a normalization parameter and a possible residual linear background is allowed through the \(\epsilon\) parameter. The fits were performed using the MINUIT2 package [39; 40] and the statistical uncertainties were calculated with the MINOS algorithm [39; 40]. The lower and upper fit limits were determined individually in each centrality and \(K_{\rm T}\) class by selecting the limits resulting in the best fit. The goodness of fit was measured by the confidence level, calculated from the \(\chi^{2}\) and the number of degrees of freedom of the fit. This confidence level was in the statistically acceptable range (\(>\)0.1%) for each fit. An example fit is shown in Figure 1. In the region below approximately \(q=0.05\) GeV/\(c\), the measured data are not reliable due to the finite momentum resolution and pair reconstruction efficiency of the detectors; consequently, that region was not used for fitting. The systematic uncertainties of \(R,\alpha,\) and \(\lambda\) were determined by individually changing each of the analysis settings to slightly larger and smaller values, and conducting the whole analysis procedure again. The deviations from the nominal results were then added in quadrature, resulting in the full systematic uncertainty. The considered analysis settings were the centrality calibration, the vertex selection, the different track selection criteria, Figure 1: An example fit to the double-ratio correlation function \(DR(q)\) of negatively charged hadrons [41]. The fitted function is shown in black, while the red overlay indicates the range used for the fit. The \(K_{\rm T}\) and centrality class is shown in the legend. The lower panel indicates the deviation of the fit from the data. the pair selection, and the fit limits. Out of these, the dominant sources of systematic uncertainty were the fit limits. The full systematic uncertainty was separated into correlated and uncorrelated parts, so that the latter could be taken into account when fitting to the parameters. ## 4 Results and Discussion As mentioned before, the parameters \(\alpha,R\), and \(\lambda\) were measured separately for positively and negatively charged hadron pairs. As not much difference was observed between the two cases, some of the results for negatively charged pairs are shown only in Appendix A. The measurement was carried out in \(K_{\rm T}\) classes, but in order to facilitate the comparison with previous measurements and with theory, the parameters are presented as functions of the transverse mass \(m_{\rm T}\), defined as \[m_{\rm T}=\sqrt{\frac{K_{\rm T}^{2}}{c^{2}}+m^{2}}, \tag{12}\] where \(m\) is the mass of the investigated particle species. Although all charged tracks were used in the analysis, the pion mass was used for \(m\), since above 90% of the identical particle pairs were pion pairs. The measured \(\alpha\) values are shown in Figure 2 as a function of \(m_{\rm T}\), for positively charged pairs. Within uncertainties, most of the values are between 1.6 and 2.0, meaning that the source follows the general Levy distribution, instead of the Gaussian. However, the deviation from the Gaussian case is not as large as it was found for 0-30% centrality AuAu collisions at \(\sqrt{s_{\rm NN}}=200\) GeV [11], where a mean value for \(\alpha\) of 1.207 was obtained for pion pairs with \(|\eta|<0.35\) and \(228<m_{\rm T}<871\) MeV/\(c^{2}\). For a given centrality class, \(\alpha\) is almost constant with \(m_{\rm T}\). The average of \(\alpha\) (\(\langle\alpha\rangle\)) is indicated in Figure 2 for each centrality class, and it is shown in Figure 3 as a function of the average number of participating nucleons in the collision (\(\langle N_{\rm part}\rangle\)), for both positively and negatively charged pairs. The \(\langle N_{\rm part}\rangle\) values were calculated for each centrality class [42], with a larger value corresponding to a more central case. The \(\langle\alpha\rangle\) values show a monotonic increasing trend with \(\langle N_{\rm part}\rangle\), which means that the shape of the source is \(\langle N_{\rm part}\rangle\) (or equivalently, centrality) -dependent. The shape is closer to the Gaussian distribution in case of more central events. The \(\langle\alpha\rangle\) values are slightly higher for positively charged pairs, although the deviations are within systematic uncertainties. Figure 2: The Lévy stability index \(\alpha\) versus the transverse mass \(m_{\rm T}\) in different centrality classes for positively charged hadron pairs [41]. The error bars are the statistical uncertainties, while the boxes indicate the uncorrelated systematic uncertainties. The correlated systematic uncertainty is shown in the legend. The measured \(R\) values are shown in Figure 4 as a function of \(m_{\rm T}\) for positively charged pairs. A decreasing trend with \(m_{\rm T}\) and as the collisions become more peripheral is observed, with the values ranging between 1.6 and 5.8 fm. The centrality dependence confirms the geometrical interpretation of the \(R\) parameter, because a smaller source size is expected in case of more peripheral collisions. To further investigate the \(m_{\rm T}\) dependence of \(R,1/R^{2}\) was plotted as a function of \(m_{\rm T}\), as shown in Figure 5. In case of a Gaussian source, hydrodynamic models [7; 43] predict the linear scaling \[\frac{1}{R^{2}}=Am_{\rm T}+B, \tag{13}\] where \(A\) and \(B\) are parameters with physical meaning. The slope \(A\) is connected to the Hubble constant (\(H\)) of the QGP with [7; 44] \[A=\frac{H^{2}}{T_{\rm f}}, \tag{14}\] where \(T_{\rm f}\) is the freeze-out temperature. The intercept \(B\) is connected to the size of the source (\(R_{\rm f}\)) at freeze-out with [7; 44] \[B=\frac{1}{R_{\rm f}^{2}}. \tag{15}\] In order to verify whether the linear scaling also holds in the Levy case, a linear fit was performed for each centrality class using Equation (13). The statistical uncertainty and the uncorrelated systematic uncertainty of \(1/R^{2}\) was added in quadrature and used for determining the \(\chi^{2}\) of the fits. In this way, the confidence levels were statistically acceptable for each centrality class, showing that a hydrodynamic-like scaling holds for a Levy source as well. The fitted lines are shown in Figure 5, and the fit parameters (\(A\) and \(B\)) are shown in Figure 6 as functions of \(\langle N_{\rm part}\rangle\), for both positively and negatively charged pairs. By assuming a constant freeze-out temperature of \(T_{\rm f}=156\) MeV [45], the Hubble constant falls between 0.12 \(c/\)fm and 0.18 \(c/\)fm. Due to the fact that the \(A\) parameter decreases toward more central collisions (larger \(\langle N_{\rm part}\rangle\)), the Hubble constant also decreases, making the speed of the expansion lower in central collisions. The \(B\) parameter has a negative value in each case, which makes it impossible to calculate a freeze-out size using Equation (15). The reasons behind a negative intercept and the interpretation of this result are currently Figure 3: The average Lévy stability index \(\langle\alpha\rangle\) versus \(\langle N_{\rm part}\rangle\) in different centrality classes for positively and negatively charged hadron pairs [41]. The error bars are the statistical uncertainties, while the boxes indicate the systematic uncertainties. unknown. This may be connected to fluctuations in the initial state [46] which were not taken into account in the hydrodynamic models. The measured \(\lambda\) values are shown in the upper panel of Figure 7 as a function of \(m_{\mathrm{T}}\), for positively charged pairs. A decreasing trend with \(m_{\mathrm{T}}\) as the collisions became more central is observed. In case of identified particles, \(\lambda\) is the square of the ratio of core particles. Due to the lack of particle identification, our sample contained particles other than pions, mostly kaons and protons. As a result of this contamination, \(\lambda\) was suppressed by a factor of the square of the pion fraction. The pion fraction was measured by the ALICE Collaboration [34], and it decreased with \(m_{\mathrm{T}}\), resulting in the decreasing trend of \(\lambda\) in the upper panel of Figure 7. For the \(\alpha\) and the \(R\) parameters, a characteristic \(m_{\mathrm{T}}\) dependence Figure 4: The Lévy scale parameter \(R\) versus \(m_{\mathrm{T}}\) in different centrality classes for positively charged hadron pairs [41]. The error bars are the statistical uncertainties, while the boxes indicate the uncorrelated systematic uncertainties. The correlated systematic uncertainty is shown in the legend. Figure 5: The inverse square of the Lévy scale parameter \(R\) versus \(m_{\mathrm{T}}\) in different centrality classes for positively charged hadron pairs [41]. The error bars are the statistical uncertainties, while the boxes indicate the uncorrelated systematic uncertainties. The correlated systematic uncertainty is shown in the legend. A line is fitted to the data for each centrality. was observed; thus, these parameters could not have been influenced by the \(m_{\rm T}\)-dependent effect of the lack of particle identification. To remove the effect of the contamination from \(\lambda\), the \(\lambda^{*}\) parameter was introduced by rescaling \(\lambda\) with the square of the pion fraction: \[\lambda^{*}=\frac{\lambda}{(N_{\rm pion}/N_{\rm hadron})^{2}}. \tag{16}\] The rescaled correlation strength \(\lambda^{*}\) is shown in the lower panel of Figure 7. Compared to \(\lambda\), the decreasing trend with \(m_{\rm T}\) is no longer shown in the data, suggesting that it was caused purely by the lack of particle identification. The centrality dependence, on the other hand, remained the same, which means that the fraction of core pions is smaller in more central collisions. Figure 6: The two fit parameters from the linear fit: the slope \(A\) (**upper**) and the intercept \(B\) (**lower**) versus \(\langle N_{\rm part}\rangle\) for negatively and positively charged hadron pairs [41]. The error bars are the statistical uncertainties, while the boxes indicate the systematic uncertainties. ## 5 Conclusions In this paper, a centrality-dependent Levy HBT analysis of two-particle Bose-Einstein correlations was presented, using \(\sqrt{s_{{}_{\rm NN}}}=5.02\) TeV PbPb collision data recorded by the CMS experiment. The measured correlation functions were described by the assumption of a Levy alpha-stable source distribution. Three source parameters, the Levy stability index \(\alpha\), the Levy scale parameter \(R\), and the correlation strength \(\lambda\) were determined, and their centrality and transverse mass (\(m_{\rm T}\)) dependence was investigated. The \(\alpha\) parameter was found to be centrality-dependent, but constant in \(m_{\rm T}\), with the average values ranging between 1.6 and 2.0. A decreasing trend with \(m_{\rm T}\) and as the collisions become more peripheral was observed for the \(R\) parameter, which could be explained by the hydrodynamic-like scaling and the geometrical interpretation, respectively. The \(\lambda\) parameter showed a decreasing trend with \(m_{\rm T}\), but after removing the effects of the lack of particle identification, a constant behavior was obtained. A decrease toward more central collisions was also observed for \(\lambda\). Figure 7: The correlation strength \(\lambda\) and the rescaled correlation strength \(\lambda^{*}\) versus \(m_{\rm T}\) in different centrality classes for positively charged hadron pairs [41]. The error bars are the statistical uncertainties, while the boxes indicate the uncorrelated systematic uncertainties. The correlated systematic uncertainty is shown in the legend. **Funding:** B. Korodi was supported by the UNKP-21-2 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund. This research was supported by the NKFIH OTKA K-138136 and K-128713 grants. **Data Availability Statement:** The data presented in this study are available on request from the corresponding author. The data are not publicly available. **Conflicts of Interest:** The author declares no conflict of interest. ## Abbreviations The following abbreviations are used in this manuscript: \begin{tabular}{l l} QGP & quark-gluon plasma \\ LHC & Large Hadron Collider \\ HBT & Hanbury Brown and Twiss \\ PbPb & lead-lead \\ CMS & Compact Muon Solenoid \\ AuAu & gold-gold \\ \end{tabular} ## Appendix A Results for Negatively Charged Pairs The results for negatively charged hadron pairs are presented. Due to the fact that they are very similar to the results for positively charged pairs presented in Section 4, the interpretations of these results are the same. The Levy stability index \(\alpha\) is shown as a function of \(m_{\rm T}\) in Figure 11. The Levy scale parameter \(R\) and its inverse square \(1/R^{2}\) are shown as functions of \(m_{T}\) in Figures 12 and 13, respectively. The correlation strength \(\lambda\) and the rescaled correlation strength \(\lambda^{*}\) are shown as functions of \(m_{T}\) in Figure 14. Figure 17: The Lévy scale parameter \(R\) versus \(m_{\rm T}\) in different centrality classes for negatively charged hadron pairs [41]. The error bars are the statistical uncertainties, while the boxes indicate the uncorrelated systematic uncertainties. The correlated systematic uncertainty is shown in the legend.
2304.00348
Primordial black hole collision with neutron stars and astrophysical black holes and the observational signatures
In this paper, we examine whether low-mass Primordial Black Holes (PBHs) can be considered a plausible dark matter candidate in galactic halos. We derive the relativistic dynamics of PBHs around the heavy compact objects and evaluate their collision rate, as well as the likelihood of PBH capture in neutron stars and black holes. Although the rate of these collisions in the Milky Way is lower than our lifetime (i.e. almost one collision per hundred years), it may still be observable on cosmological scales. Additionally, we investigate the gravitational wave emission as an important observable window for PBH-astrophysical black hole merging. For the allowed range of PBH mass, gravitational wave signal is smaller than the sensitivity of present gravitational wave detectors. We provide observational prospect for detection of these events in future.
Sohrab Rahvar
2023-04-01T16:08:11Z
http://arxiv.org/abs/2304.00348v3
Primordial black hole collision with neutron stars and astrophysical black holes and the observational signatures ###### Abstract In this paper, we examine whether low-mass Primordial Black Holes (PBHs) can be considered a plausible dark matter candidate in galactic halos. We explore the relativistic dynamics of PBHs and evaluate their collision rate, as well as the likelihood of PBH capture in neutron stars and black holes. Although the rate of collision for various dark structure models in the Milky Way is lower than our lifetime, it may still be observable on cosmological scales. Additionally, we investigate the gravitational wave emission as an important observational window for PBH-astrophysical black hole impact. However, for the allowed range of PBH mass, the signal is currently smaller than the sensitivity of present gravitational wave detectors. keywords: cosmology: dark matter - stars: black holes - stars: neutron - gravitational waves ## 1 Introduction The large-scale structures of the universe are dominantly composed of dark matter. Primordial black holes (PBHs) are one of the potential candidates for dark matter, believed to have formed in the early universe due to quantum fluctuations (Zel'dovich & Novikov, 1966; Hawking, 1971b). While the theory of PBH formation predicts a wide range of PBHs (Kuhnel & Freese, 2017), observations constrain their mass to two narrow windows: low mass (\(M<10^{-7}M_{\odot}\)) and high mass (\(M>50M_{\odot}\)) (Green, 2016). It is believed that galaxies are surrounded by a halo of dark matter, which we assume to be comprised of PBHs. In this work, our aim is to investigate the interaction of PBHs with astrophysical objects. This study has been done in numerous works such as PBH merging with the astrophysical structures through the adiabatic process (Derishev & Belyanin, 1999). Also, the collision of PBH with stars and compact objects are modeled as GRB source (Zhilyaev, 2007). The consequences of this collision as the Gamma-ray emission were also examined with the X-ray observatories (Abramowicz et al., 2009). Here, we focus on investigating the rate of PBH collisions with the remnants of neutron stars and astrophysical black holes (hereafter referred to as compact objects) and studying the resulting consequences. While the PBH collision with Earth has been studied in the context of Newtonian gravity (Rahvar, 2021), we extend this argument to PBH collisions with compact objects as a detector. We consider relativistic mechanics and explore astrophysical consequences such as changes in the angular momentum of neutron stars and the emission of gravitational waves. In Section (2), we introduce the phase space of PBHs and calculate the rate of collisions that occur with compact objects. In Section (3), we investigate the consequences of these collisions with neutron stars, and in Section (4), we examine the collision of PBHs with astrophysical black holes. Finally, Section (5) provides a summary and conclusion of our study. ## 2 Collision of PBHs with compact objects In this section, we study the phase space of PBHs in the Galactic halo and the rate of collisions with compact objects. ### Dynamics of PBHs in halo Let us assume the distribution of point mass PBHs in the halo of the Milky Way galaxy is almost isothermal and follows the Maxwell-Boltzmann distribution as \[f(x,v)d^{3}v=n_{0}(\frac{3}{2\pi\sigma^{2}})^{3/2}\exp{(\frac{-3v^{2}}{2\sigma ^{2}})}v^{2}dvd\Omega, \tag{1}\] where \(d\Omega=d\phi\sin\theta d\theta\) and \(\sigma\) is the dispersion velocity of PBHs. We set a compact object at the center of this coordinate system. This object gravitationally interacts with the PBHs in the halo. The flux of particles in this coordinate system (without taking into account a gravitating source at the center) entering a sphere with the radius of "\(r\)" is (Press & Spergel, 1985) \[\mathcal{F}(x)=\int\,f(x,v)\mathbf{\hat{n}}\cdot v\mathbf{v}^{2}dvd\Omega, \tag{2}\] where \(\hat{n}\) is the inwarding unit vector perpendicular to the surface and \(0<\theta<\pi/2\). We note that the overall current in the steady state condition for the incoming and outgoing particles is zero and here we consider just the incoming PBHs. Multiplying the current in equation (2) to the area at far distances from the center is the rate of particles crossing the sphere with the radius of \(r\) as \(dN/dt=4\pi r^{2}\mathcal{F}\). We define the kinetic energy and the angular momentum of PBHs at asymptotically flat spacetime with respect to the center of coordinate as \(E=mv^{2}/2\) and \(J=rnnv\sin\theta\). Then the rate of incoming particles inside a sphere with a radius of \(r\) in terms of energy and angular momentum is \[\frac{dN}{dt}=2\pi^{2}n_{0}(\frac{3}{2\pi m^{2}\sigma^{2}})^{3/2}\int\exp(- \frac{3E}{m\sigma^{2}})dEdJ^{2}. \tag{3}\] Now we derive the trajectory of particles that collide with the central compact object. For the neutron star and black hole we derive the relativistic dynamics. The trajectories constrain the domain of integration in equation (3) and allow us to find the rate of collisions. ### Trajectory of PBHs colliding with the compact objects Let us assume the Schwarzschild metric for the spacetime around the compact object as \[ds^{2}=-(1-\frac{r_{s}}{r})dt^{2}+(1-\frac{r_{s}}{r})^{-1}dr^{2}+r^{2}d\Omega, \tag{4}\] where for the relativistic calculations we adapt \(c=1\), however, we will use the SI system in the numerical calculations. Also, the Schwarzschild radius is \(r_{s}=2GM\). The action for a test particle (Landau & Lifshitz, 1975) (here a PBH) in this spacetime is \[S=-m\int\,d\tau=\int\,\mathcal{L}dt \tag{5}\] where \[\mathcal{L}=-m\sqrt{(1-\frac{r_{s}}{r})-(1-\frac{r_{s}}{r})^{-1}t^{2}-r^{2} \dot{\phi}^{2}}, \tag{6}\] here the particle is localized on \(\theta=\pi/2\) plane. The corresponding canonical momentum for each coordinate is given by \(p_{i}=\partial\mathcal{L}/\partial\dot{q}_{i}\) where from the Lagrangian the momentum is \[p_{r} = -\frac{m^{2}t}{\mathcal{L}}(1-\frac{r_{s}}{r})^{-1}, \tag{7}\] \[J = -\frac{m^{2}r^{2}\dot{\phi}}{\mathcal{L}}, \tag{8}\] where \(J\) is a conserved quantity. The corresponding Hamiltonian is \(\mathcal{H}=\sum p_{i}\dot{q}_{i}-\mathcal{L}\) where substituting the momentum and coordinate, the Hamiltonian is \[\mathcal{H}=-\frac{m^{2}}{\mathcal{L}}(1-\frac{r_{s}}{r}). \tag{9}\] Dividing \(J\) by the Hamiltonian we can find the angular velocity in terms of these two quantities, \[\dot{\phi}=\frac{J}{\mathcal{H}}\frac{1}{r(r-r_{s})}. \tag{10}\] Since Hamiltonian is time-independent, that is a conserved quantity and we define it at the asymptotic flat space-time \[\mathcal{H} = \frac{m}{\sqrt{1-v^{2}}}\simeq m+E_{k}, \tag{11}\] \[\lim r\rightarrow\infty\] where \(E_{k}=\frac{1}{2}mv_{0}^{2}\). On the other hand, let us take the closest distance of the test particle to a compact object happens at \(r=r_{min}\) where at this position the radial velocity is zero. Then the Hamiltonian at this point is given by \[\mathcal{H}^{2}=(1-\frac{r_{s}}{r_{min}})m^{2}+\frac{J^{2}}{r_{min}^{2}}. \tag{12}\] Equating the left and right-hand sides of equations (11) and (12), then the angular momentum in terms of the kinetic energy of PBH at far distance and \(r_{min}\) is \[J^{2}=m^{2}r_{min}^{2}\left((1+\frac{E_{k}}{m})^{2}+\frac{r_{s}}{r_{min}}-1 \right), \tag{13}\] where for the Newtonian limit (i.e. \(r_{s}\to 0\)) we recover the conventional angular momentum as \(J=r_{min}mv(r_{min})\). This equation represents a constraint between the angular momentum and kinetic energy of a particle at infinity and \(r_{min}\). The collision condition of a PBH with a compact object (with the radius of \(r_{c}\)) holds if \(r_{min}<r_{c}\). ### The rate of collisions For calculating the rate of collision, we substitute equation (13) as the boundary of integral in (3), and after integration the result is \[\frac{dN}{dt}=2\pi^{2}n_{0}(\frac{3}{2\pi})^{3/2}\sigma r_{min}^{2}\left( \frac{2}{9}+\frac{2}{3}\frac{r_{s}}{r_{min}}\frac{c^{2}}{\sigma^{2}}+\frac{2}{ 27}\frac{\sigma}{c}\right), \tag{14}\] where \(c\) is the speed of light. The first and second terms represent the result of Newtonian gravity; the third term is the relativistic correction to the collision rate. Taking the local density of the dark halo around the disk of the Milky Way as \(\rho_{D}\simeq 8\times 10^{-3}M_{\odot}pc^{-3}\)(Binney & Tremaine, 2008) and the mass range of PBHs as \([10^{14},10^{23}]\)gr, the number density of PBHs in the halo obtain \[n_{0}=6\times 10^{-33}f\times(\frac{\rho_{h}}{0.008M_{\odot}/pc^{3}})(\frac{m_{ pbh}}{10^{23}gr})^{-1}\text{km}^{-3}\] where \(f\) is the fraction of dark matter that made of PBHs. Then the numerical value of equation (14) is \[\frac{dN}{dt} = 2.7\times 10^{-11}\text{Gyr}^{-1}f\times(\frac{\rho_{h}}{0.008M_{ \odot}/pc^{3}})(\frac{m_{pbh}}{10^{23}gr})^{-1}\] \[\times\left(0.22+2.25\times 10^{5}(\frac{rs}{km})(\frac{r_{m}}{10 km})^{-1}\sigma_{200}^{-2}+5\times 10^{-5}\sigma_{200}\right),\] where \(\sigma_{200}\) is the dispersion velocity of PBHs in the halo normalized to 200km/s. Ignoring the relativistic terms, the result agrees with the Newtonian calculation (Abramowicz et al., 2009). For the collision of PBHs with compact objects in our study, we can ignore the first and third terms compared to the second term. Then the rate simplifies to \[\frac{dN}{dt}=6.1\times 10^{-6}\text{Gyr}^{-1}f\times(\frac{m_{pbh}}{10^{23} gr})^{-1}(\frac{rs}{km})(\frac{r_{m}}{10km})^{-1}\sigma_{200}^{-2}. \tag{16}\] Taking into account the evaporation of PBHs from the early universe up to the present time, we expect PBHs with the masses of \(m>10^{14}\)gr could survive at the present time (Rahvar, 2021). For the PBHs with Dirac-Delta mass function within the mass range of \(m\in[10^{14},10^{23}]\)gr, the rate of collisions from equation (16) obtain as \(dN/dt\in[6\,\text{Myr}^{-1},6\times 10^{-6}\text{Gyr}^{-1}]\). While the collision rate of PBH with a compact object is high, however, the rate of capturing is low. In the next sections, we investigate the capture rate and the astrophysical signatures from the PBH collisions with compact objects. ## 3 Physical consequences of PBH collision with neutron stars In this section, we investigate the physical consequences of the PBH collision with compact objects such as neutron stars. Let us image a small mass PBH colliding with a neutron star where we can ignore the mutual momentum transfer to the neutron star. Once PBH enters the neutron star two physical processes can happen (i) decelerating due to the dynamical friction. (ii) accretion of neutron star's material on PBHs. For a PBH colliding with a neutron star, PBH interacts gravitationally with the condensed material inside the neutron star. The result is a drag force so-called dynamical friction (Chandrasekhar, 1943) which is given by \[\frac{d\kappa_{pbh}}{dt}=-\frac{4\pi\ln(\Lambda)G^{2}\rho_{n}m_{pbh}}{v_{pbh}^{3 }}\left[\mathrm{erf}(X)-\frac{2X}{\sqrt{\pi}}e^{-X^{2}}\right]\mathbf{v}_{pbh}, \tag{17}\] where \(\rho_{n}\) is the average density of a neutron star, \(m_{pbh}\) is the mass of PBH as a projectile, \(v_{pbh}\) is the velocity of PBH inside the neutron star, \(X=v_{pbh}/(\sqrt{2}\sigma_{n})\) and \(\sigma_{n}\) is the dispersion velocity of particles inside the neutron star. \(\Lambda\) is given by \[\Lambda=\frac{r_{n}\sigma_{n}^{2}}{Gm_{pbh}}=2(\frac{r_{n}}{r_{s(bh)}})(\frac {\sigma_{n}}{c})^{2}, \tag{18}\] where \(r_{n}\) is the radius of the neutron star, \(r_{s(pbh)}\) is the Schwartzchild radius of PBH. Substituting the numerical values: \[\Lambda=6\times 10^{10}(\frac{r_{n}}{10\mathrm{km}})(\frac{m_{pbh}}{10^{23} \mathrm{gr}})^{-1}(\frac{\sigma_{n}}{c^{2}})^{2}, \tag{19}\] where for the mass of PBHs in the range of \([10^{14},10^{23}]\mathrm{gr}\) and parameter is \(\ln\Lambda\in[24,45]\). We note that the dispersion velocity of matter in the neutron star also is given by the uncertainty principle and exclusion principle for the neutrons (i.e. \(pd=\hbar\)) where \(p\) is the momentum and \(d\) is the distance between the neutrons (Padmanabhan, 1996). For the ultra-relativistic regime for the neutrons where \(E\simeq pc\) we can set the dispersion velocity of neutrons as \(\sigma_{n}\sim c\). Substituting in definition of \(X\) in equation (17), \[X=0.22(\frac{r_{s}}{1\mathrm{km}})^{1/2}(\frac{r_{n}}{10\mathrm{km}})^{-1/2}.\] Using equation (17), we define the time scale for dissipation energy of PBH from the dynamical friction as \(t_{df}=v_{pbh}/v_{pbh}\) where the numerical value is \[t_{df}=1.4\times 10^{5}\mathrm{s}(\frac{m_{bh}}{10^{23}\mathrm{gr}})^{-1}( \frac{r_{n}}{10\mathrm{km}})^{3/2}(\frac{M_{n}}{M_{\odot}})^{1/2}. \tag{20}\] For the lower and upper bands of PBH mass, the dynamical friction time scale ranges from \(10^{5}\) to \(10^{14}\) second. We can compare this time scale with the crossing time scale of PBH across the neutron star by \(t_{c}=r_{n}/v_{pbh}\) which is \[t_{c}\simeq 10^{-4}\mathrm{s}(\frac{M_{n}}{M_{\odot}})^{-1/2}(\frac{r_{n}}{10 \mathrm{km}})^{3/2}, \tag{21}\] this time scale is very small compared to the dynamical friction time scale. So we conclude that unlike the previous studies (Capela et al., 2013), PBHs can not lose a significant amount of their kinetic energy during the crossing of the neutron stars to trap inside the neutron stars. ### The energy release as a result of collision In this part, we calculate the energy release as a result of the collision of PBH with a neutron star and investigate the feasibility of observation of this collision. From the dynamical friction, we expect the energy release would be \(Q_{df}=-m_{pbh}v_{pbh}r_{n}\). Substituting deceleration from equation (17), the energy release is: \[Q_{df}=-\frac{3}{4}\ln\Lambda\left(erf(X)-\frac{2X}{\sqrt{\pi}}e^{-X}\right) (\frac{r_{s(pbh)}}{r_{n}})m_{pbh}c^{2} \tag{22}\] where using the numerical values the energy released as a result of dynamical friction is \[Q_{df}=-5.4\times 10^{32}\mathrm{erg}(\frac{m_{pbh}}{10^{23}\mathrm{gr}})^{2}( \frac{r_{n}}{10\mathrm{km}})^{-1}. \tag{23}\] We divide this energy to the crossing time scale in equation (21) which results in the power of the energy released from the dynamical friction inside the neutron star \[P_{df}=5.4\times 10^{36}\mathrm{erg/s}(\frac{m_{pbh}}{10^{23}\mathrm{gr}})^{2}( \frac{r_{n}}{10\mathrm{km}})^{-5/2}(\frac{M_{n}}{M_{\odot}})^{1/2}. \tag{24}\] The other source of dissipation is the accretion of the neutron star's material by a PBH during the crossing of the interior of the neutron star. The maximum flux from the accretion is given by the Eddington limit \[L_{edd}=\frac{4\pi Gm_{pbh}m_{p}}{\sigma_{TH}}=6.5\times 10^{27}\mathrm{erg/s}( \frac{m_{pbh}}{10^{23}\mathrm{gr}}), \tag{25}\] where \(\sigma_{TH}\) is the Thompson cross section for photon-electron scattering and \(m_{p}\) is the mass of proton. We note that in both two mechanisms of energy dissipation by the dynamical friction and the accretion, the majority of energy release happens at the interior of the neutron star. This burst of energy can propagate to the surface of the star with the sound speed and the result would be perturbation of the inertial tensor which may cause a kind of glitch or temporary change in the spin of the neutron star. The anomalies in the spin of the pulsar and burst of energy have already been observed (Dib & Kaspi, 2014) and to interpret this observation in terms of PBH collision, one needs detailed modeling. Finally, we investigate the effect of energy dissipation of a PBH after crossing a neutron star. Let us divide \(Q_{df}\) from equation (23) (which plays the major role in the dissipation) to the total kinetic energy of a PBH (i.e. \(E_{k}=mv^{2}/2\)), \(\epsilon=Q_{df}/E_{k}\). The result is \[\epsilon=-2.7\times 10^{-5}(\frac{m_{pbh}}{10^{23}\mathrm{gr}})(\frac{v}{ \sigma_{200}})^{-2}(\frac{r_{n}}{10\mathrm{km}})^{-1}, \tag{26}\] where \(\sigma_{200}\) is the dispersion velocity of the dark halo. In order to have confined PBH around the neutron star, \(|\epsilon|\geq 1\) should be satisfied. This means that the threshold initial velocity of PBH at a far distance from the neutron star to be trapped should have \[v_{th}\leq 1\mathrm{km/s}\,(\frac{m_{pbh}}{10^{23}\mathrm{gr}})^{1/2}\sigma_{200 }^{2}(\frac{r_{n}}{10\mathrm{km}})^{-1/2}. \tag{27}\] The velocity required for primordial black holes (PBH) to be captured by a neutron star depends on the dark structure of the Milky Way. If we consider the halo with a dispersion velocity of \(\sigma_{200}=200\mathrm{km/s}\), taking \(m_{pbh}=10^{23}\mathrm{gr}\), the threshold velocity \(v_{th}\) is less than \(1\) km/s. However, for the dark disk, the dispersion velocity can vary between \(\sigma=40-80\mathrm{km/s}\) (Purcell et al., 2009), depending on the size of the disk which results in \(v_{th}<25\mathrm{km/s}\) and \(v_{th}<6\mathrm{km/s}\) for \(m_{pbh}=10^{23}\mathrm{gr}\), respectively. Figure (1) shows the probability of PBH confinement to a neutron star as a function of PBH mass, which depends on the dispersion velocity of the dark structure. We multiply this probability function by the rate of PBH impact with a neutron star from equation (16) to obtain the rate of a PBH capture for different Milky Way dark structures. The estimation for the total of neutron stars in the Milky Way is \(N_{n}\simeq 10^{8}\)(Reed et al., 2021). By multiplying this number we obtain the rate of PBH capturing in the Milky Way as shown in Figure (2). The capturing rate, measured in Giga years, makes it unlikely for us to witness these collisions. However, on a cosmological scale, there is potential for these events to be detected. When a PBH is captured by a neutron star, it can either merge with the star resulting in energetic outbursts, or be put in an orbital motion around the star. The outcome depends on the impact parameter of the collision between the PBH and the neutron star. ## 4 Physical consequence of PBH collision with the astrophysical black holes In this section, we examine the gravitational collision between PBHs and astrophysical black holes, which can be detected through the propagation of gravitational waves. Previous research, such as (Cless & Garcia-Bellido, 2017; Khalouei et al., 2021; Raidal et al., 2019, 2019), has extensively studied the signals produced by the interaction of PBH-PBH and PBH-Astrophysical black holes. We provide a basic estimation of the energy released and observability resulting from this collision. Hawking (1971a) estimated the energy release resulting from the merger of two spin-less black holes with masses \(M\) and \(m_{pbh}\) under head-on collision, forming a black hole with mass \(M_{t}\). Using the definition of the ADM-mass (Arnowitt et al., 2008), we can express \((m_{pbh}+M)c^{2}=E_{gw}+M_{t}c^{2}\) where \(E_{gw}\) is the energy released by the gravitational waves. On the other hand, the Entropy of a black hole is proportional to the area of a black hole (i.e. \(S\propto A\propto M^{2}\)). Thus, according to the second law of thermodynamics \(S(M_{t})>S(m_{bh})+S(M_{n})\), or in another word, \(M_{t}^{2}>m_{pbh}^{2}+M^{2}\). Combining the conservation of energy-momentum with the second law of thermodynamics constrains the energy of gravitational wave from this collision as \[E_{gw}<(m_{pbh}+M-\sqrt{m_{pbh}^{2}+M^{2}})c^{2}, \tag{28}\] where in our case \(m_{pbh}\ll M\) and equation (28) simplifies to \[E_{gw}<m_{pbh}c^{2}-m_{pbh}c^{2}\frac{m_{pbh}}{M}. \tag{29}\] Gravitational wave emission from the collision of an object with a black hole has been analyzed in detail (Davis et al., 1971). The energy of gravitational wave emission is given by \[E_{gw}=10^{33}\mathrm{erg}(\frac{m_{pbh}}{10^{23}\mathrm{gr}})^{2}(\frac{M}{1 0M_{\odot}})^{-1}, \tag{30}\] where the total spectrum of gravitational wave peaks at the angular frequency of \[\omega=0.32\frac{c^{3}}{GM}=0.64\times 10^{4}\mathrm{Hz}(\frac{M}{10M_{\odot}} )^{-1}. \tag{31}\] Multiplying equation (30) to (31) results in as estimation for the power of the gravitational wave, \[P_{gw}\simeq 10^{37}\mathrm{erg/s}(\frac{M}{10M_{\odot}})^{-2}. \tag{32}\] Now, we estimate the amplitude of gravitational waves from this system at a given distance from the Earth. The energy-momentum of gravitational waves averaged over the several wavelengths is given by (Maggiore, 2008) \[T^{\mu\nu}=\frac{c^{4}}{32\pi G}<\partial^{\mu}h^{\alpha\beta}\partial^{\nu}h _{\alpha\beta}>, \tag{33}\] where energy-momentum satisfies the conservation laws on Minkowski background (i.e. \(T^{\mu\nu},_{\nu}=0\)). Integrating over the spatial volume for \(\mu=0\) component is \(\int T^{0},_{0}d^{3}x=-\int T^{0}ds_{i}\). Here the left-hand side of this equation is the power of gravitational wave as in equation (32) and the right-hand side can be substituted from equation (33). For a plane wave at far distances from the source \(h^{\alpha\beta}\propto\exp(k\cdot r-\omega t)\), we obtain the amplitude of the gravitational wave in terms of power and distance of the source from the observer as \[|h|\simeq\frac{3GM}{rc^{4}}(\frac{8GP}{c})^{1/2}. \tag{34}\] We substitute the gravitational wave power from equation (31). The final result for the amplitude of \(h\) is \[|h|\simeq 2\times 10^{-26}(\frac{r}{1\mathrm{kpc}})^{-1}. \tag{35}\] The interesting feature of this equation is that the amplitude of gravitational wave emission from PBH-astrophysical black hole collision is independent of the mass of the astrophysical black hole. The physical reason is that having a massive black hole, the frequency of the gravitational wave is lower which produces a lower power and compensates with the proportion of the \(h\) to the \(M\) in equation (34). From equation (35), the amplitude of the gravitational wave at the observer's position is significantly smaller than the sensitivity of the Figure 1: This graph shows the likelihood of a PBH (Primordial Black Hole) being trapped in a neutron star, based on its mass. The graph plots the probability against \(\log(m/10^{23}\mathrm{gr})\) and includes three models: a halo model (dashed line), a dark disk model with \(\sigma=80\mathrm{km/s}\) (dotted line), and a dark disk model with \(\sigma=40\mathrm{km/s}\) (solid line). Figure 2: The rate of PBHs of Milky Way captured in a neutron star as a function of \(\log(m/10^{23}\mathrm{gr})\) for three distanced models of the halo model (dashed line), dark disk with \(\sigma=80\mathrm{km/s}\) (dotted line) and dark disk with \(\sigma=40\mathrm{km/s}\) (solid line) in the Milky Way. The estimated total number of neutron stars is \(10^{8}\) stars (Red et al., 2021). Advanced LIGO detector (Cahillane & Mansell, 2022). Therefore, detecting PBH-astrophysical black hole collisions remains a target for future gravitational wave detectors. ## 5 Conclusion Summarizing this work, we investigated the probability and the physical consequences of collision of the Primordial Black Holes (PBHs) as the candidate for dark matter with compact objects. The compact object in our study could be either a neutron star or an astrophysical black hole. Using a Schwarzschild metric for the compact object, we obtained relativistic trajectories of PBHs moving in the dark component of the Milky Way that can collide with the compact objects. The collision rate of PBHs with compact objects depends on the mass function of PBHs and the mass of compact objects. We estimate the amount of dissipation during the collision of PBHs with the neutron stars. The time scale of dissipation is much larger than the crossing time scale of the neutron stars. However, having a Maxwell-Boltzmann distribution for the velocity of PBHs, a small fraction of them can be captured in the neutron stars, either by merging the neutron star or staying in an orbit around the neutron star. The total rate of this capturing is very low and it is unlike to be detected in the Milky Way during our lifetime. However, signals for this event on the cosmological scale are expected. We also investigated the consequence of the collision of PBH with the astrophysical black holes. We showed that the amplitude of the gravitational wave signal for a PBH-astrophysical black hole collision located at kilo parsec distance is three orders of magnitude smaller than the sensitivity of the present-day detectors. Future detectors may observe these waves as a signature for the existence of PBHs in the halo. ## Data Availability No new data were generated or analyzed in support of this research.
2307.01506
The ideal test for the divergence of a series
We generalize the classical Olivier's theorem which says that for any convergent series $\sum_n a_n$ with positive nonincreasing real terms the sequence $(n a_n)$ tends to zero. Our results encompass many known generalizations of Olivier's theorem and give some new instances. The generalizations are done in two directions: we either drop the monotonicity assumption completely or we relax it to the monotonicity on a large set of indices. In both cases, the convergence of $(na_n)$ is replaced by ideal convergence. In the second part of the paper, we examine families of sequences for which the assertions of our generalizations of Olivier's theorem fail. Here, we are interested in finding large linear and algebraic substructures in these families.
Rafał Filipów, Adam Kwela, Jacek Tryba
2023-07-04T06:40:02Z
http://arxiv.org/abs/2307.01506v1
# The ideal test for the divergence of a series ###### Abstract. We generalize the classical Olivier's theorem which says that for any convergent series \(\sum_{n}a_{n}\) with positive nonincreasing real terms the sequence \((na_{n})\) tends to zero. Our results encompass many known generalizations of Olivier's theorem and give some new instances. The generalizations are done in two directions: we either drop the monotonicity assumption completely or we relax it to the monotonicity on a large set of indices. In both cases, the convergence of \((na_{n})\) is replaced by ideal convergence. In the second part of the paper, we examine families of sequences for which the assertions of our generalizations of Olivier's theorem fail. Here, we are interested in finding large linear and algebraic substructures in these families. Key words and phrases:Lineability, spaceability, algebrability, convergent series, convergence test, Olivier's theorem, ideal, ideal convergence, summable ideal, Borel ideal 2 **Theorem 1.2** (Salat-Toma [27]).: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). The following conditions are equivalent._ 1. _For every sequence_ \((a_{n})\) _of positive reals we have_ \[\sum_{n=1}^{\infty}a_{n}<\infty\implies\mathcal{I}-\lim na_{n}=0.\] 2. _The ideal_ \(\mathcal{I}\) _extends the summable ideal:_ \[\mathcal{I}_{1/n}\subseteq\mathcal{I}.\] For an ideal \(\mathcal{I}\) on \(\mathbb{N}\), we write \(\mathcal{I}^{*}=\{\mathbb{N}\setminus A:A\in\mathcal{I}\}\) and call it the _filter dual to \(\mathcal{I}\)_. For a sequence \((a_{n})\) of reals, we write \[\mathcal{I}^{*}-\lim a_{n}=L\] if there exists \(F\in\mathcal{I}^{*}\) such that the subsequence \((a_{n})_{n\in F}\) has the limit \(L\) i.e. \[\exists F\in\mathcal{I}^{*}\,\forall\varepsilon>0\,\exists k\,\forall n\in F \,\left(k\leq n\implies|a_{n}-L|<\varepsilon\right).\] It is known ([17, Proposition 3.2 and Theorem 3.2]) that \(\mathcal{I}^{*}-\lim a_{n}=L\) implies \(\mathcal{I}-\lim a_{n}=L\), whereas the reversed implication holds if and only if \(\mathcal{I}\) is a P-ideal (an ideal \(\mathcal{I}\) is a _P-ideal_ if for each countable family \(\mathcal{A}\subseteq\mathcal{I}\) there exists \(B\in\mathcal{I}^{*}\) such that \(B\cap A\) is finite for each \(A\in\mathcal{A}\)). In Section 2.1, we prove some generalizations (see Theorems 2.1 and 2.2) of the above mentioned theorem of Salat and Toma. Our results encompass other generalizations of Theorem 1.2 considered in the literature (see [3, 4] and [22]). Among others, we show (Corollary 2.4) that \(\mathcal{I}\)-convergence can be replaced by a much stronger condition of \(\mathcal{I}^{*}\)-convergence in Theorem 1.2. The results of Section 2.1 utilizes summable ideals (defined by Mazur [21, p. 105]) which generalize the summable ideal \(\mathcal{I}_{1/n}\) and are defined in the following manner: for every divergent series \(\sum_{n=1}^{\infty}d_{n}=\infty\) with nonegative terms we define a _summable ideal generated by a sequence_\((d_{n})\) by \[\mathcal{I}_{(d_{n})}=\left\{A\subseteq\mathbb{N}:\sum_{n\in A}d_{n}<\infty \right\}.\] All summable ideals are P-ideals (see e.g. [9, Example 1.2.3(c)]). For an ideal \(\mathcal{I}\) on \(\mathbb{N}\), we write \(\mathcal{I}^{+}=\{A:A\notin\mathcal{I}\}\) and call it the _coideal of \(\mathcal{I}\)_. For a sequence \((a_{n})\) of reals we write \[\mathcal{I}^{+}-\lim a_{n}=L\] if there exists \(A\in\mathcal{I}^{+}\) such that the subsequence \((a_{n})_{n\in A}\) has the limit \(L\) i.e. \[\exists A\in\mathcal{I}^{+}\,\forall\varepsilon>0\,\exists k\,\forall n\in A \,\left(k\leq n\implies|a_{n}-L|<\varepsilon\right).\] Note that \(\mathcal{I}^{+}\)-limit of a sequence does not have to be unique. This kind of limit was considered in [10], where the authors proved among others that for a large class of ideals (e.g. for the summable ideal \(\mathcal{I}_{1/n}\)) every bounded sequence of reals has \(\mathcal{I}^{+}\)-limit. Obviously \(\mathcal{I}^{*}-\lim a_{n}=L\) implies \(\mathcal{I}^{+}-\lim a_{n}=L\), and the reversed implication holds if and only if \(\mathcal{I}\) is a maximal ideal (an ideal \(\mathcal{I}\) is maximal if \(\mathcal{I}^{+}=\mathcal{I}^{*}\)). In general, there is no relation between \(\mathcal{I}\)-convergence and \(\mathcal{I}^{+}\)-convergence, but \(\mathcal{I}^{+}-\lim a_{n}=L\) implies \(\mathcal{I}-\lim a_{n}=L\) if and only if \(\mathcal{I}\) is a maximal ideal, whereas \(\mathcal{I}-\lim a_{n}=L\) implies \(\mathcal{I}^{+}-\lim a_{n}=L\) if and only if \(\mathcal{I}\) is a weak P-ideal (an ideal \(\mathcal{I}\) is a _weak P-ideal_ if for each countable family \(\mathcal{A}\subseteq\mathcal{I}\) there exists \(B\in\mathcal{I}^{+}\) such that \(B\cap A\) is finite for each \(A\in\mathcal{A}\)). In Section 2.2, we prove (see Theorem 2.7) similar results as in Section 2.1, but with the aid of \(\mathcal{I}^{+}\)-convergence which is independent of \(\mathcal{I}\)-convergence in general. We say that a sequence \((a_{n})\) of reals is \(\mathcal{I}^{*}\)_-nonincreasing_ if there exists \(F\in\mathcal{I}^{*}\) such that the subsequence \((a_{n})_{n\in F}\) is nonincreasing i.e. \[\exists F\in\mathcal{I}^{*}\,\forall n,k\in F\,\left(n\leq k\implies a_{n}\geq a _{k}\right).\] In [8], the authors have been weakening the assumption on monotonicity in Olivier's theorem instead of dropping it entirely. Among others, they posted the following problem. **Problem 1** (Faisant-Grekos-Misik [8]).: Characterize ideals \(\mathcal{I}\) with the property that for every \(\mathcal{I}^{*}\)-nonincreasing sequence \((a_{n})\) of positive reals we have \[\sum_{n=1}^{\infty}a_{n}<\infty\implies\mathcal{I}^{*}-\lim na_{n}=0.\] One can see that Olivier's theorem (Theorem 1.1) says that the ideal \(\mathcal{I}=\operatorname{Fin}\) has this property. On the other hand, in [8], the authors construct an ideal \(\mathcal{I}\) without this property. In Section 3, we solve the above problem by providing (Theorems 3.3 and 3.4) some characterizations of the above mentioned property and other properties of similar flavour. In all of the above mentioned results there is an assumption that the considered series has positive terms, but we can weaken this assumption to consider also absolutely convergent series with arbitrary terms (as \(\mathcal{I}-\lim n|a_{n}|=0\) implies \(\mathcal{I}-\lim na_{n}=0\), and similarly for other types of convergences). However, the alternating harmonic series \(\sum_{n=1}^{\infty}(-1)^{n}/n\) shows that both Olivier's theorem and Theorem 1.2 fail (as the sequence \((-1)^{n}\) is not \(\mathcal{I}\)-convergent to zero for any ideal \(\mathcal{I}\)) if we allow the series to be conditionally convergent in these theorems. On the other hand, it is known (Kronecker [18], see also [16, Theorem 82.3, p. 129]) that a version of Olivier's theorem for arbitrary series holds if one replaces ordinary convergence by Cesaro convergence (also known as Cesaro summation or the Cesaro mean). **Theorem 1.3** (Kronecker [18]).: _If a series \(\sum_{n=1}^{\infty}a_{n}\) is convergent, then the sequence \((na_{n})\) is Cesaro convergent to zero i.e._ \[\lim_{n\to\infty}\frac{a_{1}+2a_{2}+\cdots+na_{n}}{n}=0.\] For more than a decade, the research on finding linear subsets of nonlinear sets in vector spaces (the trend nowadays known as _lineability_ or _spaceability_) is gathering more and more mathematicians. Below we provide the notions we will use in the last part of our paper (for more on the subject see e.g. the book [1] or the survey [7]). Let \(X\) be a Banach algebra and \(\kappa\) be a cardinal number. We say that a subset \(Y\subseteq X\) is 1. \(\kappa\)_-lineable_ if \(Y\cup\{0\}\) contains a vector subspace of dimension \(\kappa\); 2. \(\kappa\)_-spaceable_ if \(Y\cup\{0\}\) contains a closed vector subspace of dimension \(\kappa\); 3. \(\kappa\)_-algebra_ if \(Y\cup\{0\}\) contains a \(\kappa\)-generated subalgebra; 4. _strongly \(\kappa\)-algebra_ if \(Y\cup\{0\}\) contains a \(\kappa\)-generated subalgebra which is a free algebra; 5. _lineable_ if it is \(\kappa\)-lineable for some infinite \(\kappa\) (and similarly for other notions defined above). In [3], the authors consider the Banach algebra \(\ell_{1}\) (of all real sequences \((a_{n})\) with absolutely convergent series \(\sum_{n=1}^{\infty}a_{n}\) equipped with the norm \(\|(a_{n})\|=\sum_{n=1}^{\infty}|a_{n}|\) and coordinatewise addition and multiplication) and examine the lineability-like notions of the set of those sequences for which the assertion of Olivier's theorem fails, namely they examine the set: \[AOS=\{(a_{n})\in\ell_{1}:(na_{n})\text{ is not convergent to zero}\}.\] Among others, they proved that \(AOS\) is strongly \(\mathfrak{c}\)-algebrable, \(\mathfrak{c}\)-lineable and spaceable. In Section 4, we examine the lineability-like notions of the following sets: \[AOS(\mathcal{I}) =\{(a_{n})\in\ell_{1}:(na_{n})\text{ is not }\mathcal{I}\text{-convergent to zero}\},\] \[AOS(\mathcal{I}^{*}) =\{(a_{n})\in\ell_{1}:(na_{n})\text{ is not }\mathcal{I}^{*}\text{-convergent to zero}\},\] \[AOS(\mathcal{I}^{+}) =\{(a_{n})\in\ell_{1}:(na_{n})\text{ is not }\mathcal{I}^{+}\text{-convergent to zero}\}\] for an arbitrary ideal \(\mathcal{I}\) on \(\mathbb{N}\). Since \(\mathcal{I}^{*}\)-convergence implies both \(\mathcal{I}\)-convergence and \(\mathcal{I}^{+}\)-convergence, \(AOS(\mathcal{I})\subseteq AOS(\mathcal{I}^{*})\) and \(AOS(\mathcal{I}^{+})\subseteq AOS(\mathcal{I}^{*})\). However, in general, these inclusions do not reverse (see Proposition 4.2). In Section 4.1, we prove that a necessary and sufficient condition for \(AOS(\mathcal{I})\), \(AOS(\mathcal{I}^{*})\) and \(AOS(\mathcal{I}^{+})\) to be \(\mathfrak{c}\)-lineable is that these families are nonempty (see Theorem 4.3). In Section 4.2, we describe some classes of ideals for which a necessary and sufficient condition for \(AOS(\mathcal{I})\) and \(AOS(\mathcal{I}^{*})\) to be spaceable is that these families are nonempty (see Theorems 4.4 and 4.6). An example of such a class of ideals is the family of all Borel ideals (see Theorem 4.5). However, we do not know if this condition works for every ideal (see Question 1). Moreover, we do not know any conditions under which \(AOS(\mathcal{I}^{+})\) is spaceable (see Question 2). In Section 4.3, we describe some classes of ideals for which \(AOS(\mathcal{I})\), \(AOS(\mathcal{I}^{*})\) and \(AOS(\mathcal{I}^{+})\) are strong \(\mathfrak{c}\)-algebrable (see Theorem 4.14). ## 2. The ideal test for the divergence of an infinite series ### \(\mathcal{I}\) and \(\mathcal{I}^{*}\) tests **Theorem 2.1**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). Let \(g:(0,\infty)\to(0,\infty)\) be a strictly increasing function such that_ \[\lim_{x\to 0^{+}}\frac{g(x)}{x^{\gamma}}=M\] _for some positive constants \(\gamma,M\in(0,\infty)\). Let \((b_{n})\) and \((c_{n})\) be sequences of positive reals such that_ \[\lim_{n\to\infty}c_{n}=\infty.\] _Then the following conditions are equivalent._ 1. _For every sequence_ \((a_{n})\) _with_ \(a_{n}\in\operatorname{ran}(g)\) _we have_ \[\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\implies\mathcal{I}^{*}-\lim c_{n}g^{-1}(a _{n})=0.\] 2. _For every sequence_ \((a_{n})\) _with_ \(a_{n}\in\operatorname{ran}(g)\) _we have_ \[\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\implies\mathcal{I}-\lim c_{n}g^{-1}(a_{n} )=0.\] 3. _The ideal_ \(\mathcal{I}\) _extends the summable ideal generated by the sequence_ \((b_{n}g(1/c_{n}))\)_:_ \[\mathcal{I}_{(b_{n}g(1/c_{n}))}\subseteq\mathcal{I}.\] Proof.: \((1)\implies(2)\) It follows from the fact that \(\mathcal{I}^{*}\)-convergence implies \(\mathcal{I}\)-convergence. \((2)\implies(3)\) Suppose that there exists \(A\in\mathcal{I}_{(b_{n}g(1/c_{n}))}\setminus\mathcal{I}\). For any \(n\in\mathbb{N}\), take as \(d_{n}\) some element of \(\operatorname{ran}(g)\) with \(d_{n}\leq\frac{1}{n^{2}b_{n}}\). We can find such an element by the assumption \(\lim_{x\to 0^{+}}g(x)=0\). We define \[a_{n}=\left\{\begin{array}{ll}g(1/c_{n})&\text{for }n\in A,\\ d_{n}&\text{for other }n.\end{array}\right.\] We can see that \(\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\). Indeed, observe that \[\sum_{n=1}^{\infty}b_{n}a_{n}\leq\sum_{n\in A}b_{n}g(1/c_{n})+\sum_{n\not\in A }\frac{b_{n}}{n^{2}b_{n}}.\] Since \(A\in\mathcal{I}_{(b_{n}g(1/c_{n}))}\), the first sum is finite and the second sum is finite because the series \(\sum\frac{1}{n^{2}}\) is convergent. It follows from the assumption that \(\mathcal{I}-\lim c_{n}g^{-1}(a_{n})=0\). On the other hand, for every \(n\in A\) we have \[c_{n}g^{-1}(a_{n})=c_{n}g^{-1}\left(g(1/c_{n})\right)=c_{n}(1/c_{n})=1.\] Since \(A\not\in\mathcal{I}\), we obtain \(\mathcal{I}-\lim c_{n}g^{-1}(a_{n})\neq 0\), a contradiction. \((3)\implies(2)\) Let \(\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\). We need to show that \(\mathcal{I}-\lim c_{n}g^{-1}(a_{n})=0\). Take \(\varepsilon>0\). We consider the set \[A=\{n\in\mathbb{N}:c_{n}g^{-1}(a_{n})\geq\varepsilon\}=\{n\in\mathbb{N}:a_{n} \geq g(\varepsilon/c_{n})\}=\{n\in\mathbb{N}:b_{n}a_{n}\geq b_{n}g(\varepsilon /c_{n})\}.\] In order to prove that \(A\in\mathcal{I}\), we only need to show that \(\sum_{n\in A}b_{n}g(1/c_{n})<\infty\). First we notice that \(\sum_{n\in A}b_{n}g(\varepsilon/c_{n})<\infty\) since \[\sum_{n\in A}b_{n}g(\varepsilon/c_{n})\leq\sum_{n\in A}b_{n}a_{n}\leq\sum_{n= 1}^{\infty}b_{n}a_{n}<\infty.\] Now, we would like to prove that convergence of the series \(\sum_{n\in A}b_{n}g(\varepsilon/c_{n})\) implies convergence of the series \(\sum_{n\in A}b_{n}g(1/c_{n})\). Notice that since \(\lim_{n\to\infty}c_{n}=\infty\), we have \(\lim_{n\to\infty}1/c_{n}=0\). Therefore \[\lim_{n\to\infty}\frac{g(1/c_{n})}{(1/c_{n})^{\gamma}}=M\text{ and }\lim_{n\to\infty}\frac{g(\varepsilon/c_{n})^{\gamma}}{(\varepsilon/c_{n})^{ \gamma}}=M.\] It follows that \[\lim_{n\to\infty}\frac{g(1/c_{n})}{g(\varepsilon/c_{n})}\cdot\frac{( \varepsilon/c_{n})^{\gamma}}{(1/c_{n})^{\gamma}}=1,\] thus \[\lim_{n\to\infty}\frac{g(1/c_{n})}{g(\varepsilon/c_{n})}=\varepsilon^{-\gamma} \in(0,\infty).\] Because of that, \(\sum_{n\in A}b_{n}g(\varepsilon/c_{n})<\infty\) is equivalent to \(\sum_{n\in A}b_{n}g(1/c_{n})<\infty\). \((3)\implies(1)\) Since \((3)\implies(2)\) and \(\mathcal{I}_{(b_{n}g(1/c_{n}))}\subseteq\mathcal{I}_{(b_{n}g(1/c_{n}))}\), the convergence of the series \(\sum_{n=1}^{\infty}b_{n}a_{n}\) implies \(\mathcal{I}_{(b_{n}g(1/c_{n}))}-\lim c_{n}g^{-1}(a_{n})=0\). Since \(\mathcal{I}_{(b_{n}g(1/c_{n}))}\) is a P-ideal, we obtain \(\mathcal{I}_{(b_{n}g(1/c_{n}))}^{*}-\lim c_{n}g^{-1}(a_{n})=0\). By the assumption \(\mathcal{I}_{(b_{n}g(1/c_{n}))}\subseteq\mathcal{I}\), we have \(\mathcal{I}_{(b_{n}g(1/c_{n}))}^{*}\subseteq\mathcal{I}^{*}\), thus \(\mathcal{I}^{*}-\lim c_{n}g^{-1}(a_{n})=0\). **Theorem 2.2**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). Let \(g:(0,\infty)\to(0,\infty)\) be a strictly increasing function such that_ \[\lim_{x\to 0^{+}}g(x)=0\text{ \ and \ \ }\forall\varepsilon>0\,\exists M\, \forall x>0\,\left(\frac{g(x)}{g(\varepsilon x)}\leq M\right).\] _Let \((b_{n})\) and \((c_{n})\) be sequences of positive reals. Then the following conditions are equivalent._ 1. _For every sequence_ \((a_{n})\) _with_ \(a_{n}\in\operatorname{ran}(g)\) _we have_ \[\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\implies\mathcal{I}^{*}-\lim c_{n}g^{-1}(a_{ n})=0.\] 2. _For every sequence_ \((a_{n})\) _with_ \(a_{n}\in\operatorname{ran}(g)\) _we have_ \[\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\implies\mathcal{I}-\lim c_{n}g^{-1}(a_{n}) =0.\] 3. _The ideal_ \(\mathcal{I}\) _extends the summable ideal generated by the sequence_ \((b_{n}g(1/c_{n}))\)_:_ \[\mathcal{I}_{(b_{n}g(1/c_{n}))}\subseteq\mathcal{I}.\] Proof.: \((1)\implies(2)\) It follows from the fact that \(\mathcal{I}^{*}\)-convergence implies \(\mathcal{I}\)-convergence. \((2)\implies(3)\) Suppose that there exists \(A\in\mathcal{I}_{(b_{n}g(1/c_{n}))}\setminus\mathcal{I}\). For any \(n\in\mathbb{N}\), take as \(d_{n}\) some element of \(\operatorname{ran}(g)\) with \(d_{n}\leq\frac{1}{n^{2}b_{n}}\). We can find such an element because \(\lim_{x\to 0^{+}}g(x)=0\). We define \[a_{n}=\left\{\begin{array}{ll}g(1/c_{n})&\text{for }n\in A,\\ d_{n}&\text{for other }n.\end{array}\right.\] We can see that \(\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\). Indeed, observe that \[\sum_{n=1}^{\infty}b_{n}a_{n}\leq\sum_{n\in A}b_{n}g(1/c_{n})+\sum_{n\not\in A} \frac{b_{n}}{n^{2}b_{n}}.\] Since \(A\in\mathcal{I}_{(b_{n}g(1/c_{n}))}\), the first sum is finite and the second sum is finite because the series \(\sum\frac{1}{n^{2}}\) is convergent. It follows from the assumption that \(\mathcal{I}-\lim c_{n}g^{-1}(a_{n})=0\). On the other hand, for every \(n\in A\) we have \[c_{n}g^{-1}(a_{n})=c_{n}g^{-1}\left(g(1/c_{n})\right)=c_{n}(1/c_{n})=1.\] Since \(A\not\in\mathcal{I}\), we obtain \(\mathcal{I}-\lim c_{n}g^{-1}(a_{n})\neq 0\), a contradiction. \((3)\implies(2)\) Let \(\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\). We need to show that \(\mathcal{I}-\lim c_{n}g^{-1}(a_{n})=0\). Take \(\varepsilon>0\). We consider the set \[A=\{n\in\mathbb{N}:c_{n}g^{-1}(a_{n})\geq\varepsilon\}=\{n\in\mathbb{N}:a_{n} \geq g(\varepsilon/c_{n})\}=\{n\in\mathbb{N}:b_{n}a_{n}\geq b_{n}g(\varepsilon /c_{n})\}.\] In order to prove that \(A\in\mathcal{I}\), we only need to show that \(\sum_{n\in A}b_{n}g(1/c_{n})<\infty\). First we notice that \(\sum_{n\in A}b_{n}g(\varepsilon/c_{n})<\infty\) since \[\sum_{n\in A}b_{n}g(\varepsilon/c_{n})\leq\sum_{n\in A}b_{n}a_{n}\leq\sum_{n=1 }^{\infty}b_{n}a_{n}<\infty.\] Now, observe that there exists \(M\) such that for all \(n\in\mathbb{N}\) we have \[\frac{g(1/c_{n})}{g(\varepsilon/c_{n})}\leq M,\] thus \[\sum_{n\in A}b_{n}g(1/c_{n})\leq M\sum_{n\in A}b_{n}g(\varepsilon/c_{n})<\infty.\] \((3)\implies(1)\) Since \((3)\implies(2)\) and \(\mathcal{I}_{(b_{n}g(1/c_{n}))}\subseteq\mathcal{I}_{(b_{n}g(1/c_{n}))}\), the convergence of the series \(\sum_{n=1}^{\infty}b_{n}a_{n}\) implies \(\mathcal{I}_{(b_{n}g(1/c_{n}))}-\lim c_{n}g^{-1}(a_{n})=0\). Since \(\mathcal{I}_{(b_{n}g(1/c_{n}))}\) is a P-ideal, we obtain \(\mathcal{I}_{(b_{n}g(1/c_{n}))}^{*}-\lim c_{n}g^{-1}(a_{n})=0\). By the assumption \(\mathcal{I}_{(b_{n}g(1/c_{n}))}\subseteq\mathcal{I}\), we have \(\mathcal{I}_{(b_{n}g(1/c_{n}))}^{*}\subseteq\mathcal{I}^{*}\), thus \(\mathcal{I}^{*}-\lim c_{n}g^{-1}(a_{n})=0\). _Remark 2.1_.: 1. Notice that both Theorems 2.1 and 2.2 would still be true if we add \(0\) to the domain and codomain of \(g\) and require that \(g(0)=0\). There is even no need to change any of the proofs, and then we can strengthen these theorems by requiring the sequence \((a_{n})\) to be non-negative instead of positive. 2. Note that Theorem 2.2 does not imply Theorem 2.1. Indeed, the function \(g(x)=e^{x}-1\) satisfies the assumption of Theorem 2.1 (as, using l'Hospital's rule, we have \(\lim_{x\to 0^{+}}\frac{g(x)}{x}=1\), so \(\gamma=1\) and \(M=1\) work), but it does not satisfy the assumption of Theorem 2.2 (as \(\lim_{x\to\infty}\frac{g(x)}{g(x/2)}=\infty\), so if \(\varepsilon=1/2\), then for every \(M>0\) one can find \(x>0\) such that \(\frac{g(x)}{g(x/2)}>M\)). 3. On the other hand, Theorem 2.2 works for any sequences \((c_{n})\) whereas Theorem 2.1 works only for sequences \((c_{n})\) which are divergent to infinity. 4. If a function \(g(x)\) satisfies the assumptions of Theorem 2.1, it also satisfies \(\lim_{x\to 0^{+}}g(x)=0\). On the other hand, if \(g(x)=e^{-1/x}\), then \(\lim_{x\to 0^{+}}g(x)=0\), but \(g(x)\) does not satisfy the assumption of Theorem 2.1. Moreover, the equivalence from Theorem 2.1 does not hold for the function \(g(x)=e^{-1/x}\) as it is witnessed by sequences \(a_{n}=1/n^{2}\), \(b_{n}=1\) and \(c_{n}=\ln n\). **Corollary 2.3**.: _If \(g\), \((b_{n})\) and \((c_{n})\) are like in Theorem 2.1 or Theorem 2.2, then for every sequence \((a_{n})\) with \(a_{n}\in\operatorname{ran}(g)\) we have_ \[\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\implies\mathcal{I}^{*}_{(b_{n}g(1/c_{n})) }-\lim c_{n}g^{-1}(a_{n})=0\] _and_ \[\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\implies\mathcal{I}_{(b_{n}g(1/c_{n}))}- \lim c_{n}g^{-1}(a_{n})=0.\] Proof.: Apply Theorem 2.1 or Theorem 2.2 with the ideal \(\mathcal{I}=\mathcal{I}_{(b_{n}g(1/c_{n}))}\). The equivalence "\((1)\iff(3)\)" in the following result is just Theorem 1.2 and it was proved in [27]. Here, we strengthen this theorem essentially, because \(\mathcal{I}^{*}\)-convergence is stronger than \(\mathcal{I}\)-convergence. **Corollary 2.4**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). The following conditions are equivalent._ 1. _For every sequence_ \((a_{n})\) _of non-negative numbers we have_ \[\sum_{n=1}^{\infty}a_{n}<\infty\implies\mathcal{I}-\lim na_{n}=0.\] 2. _For every sequence_ \((a_{n})\) _of non-negative numbers we have_ \[\sum_{n=1}^{\infty}a_{n}<\infty\implies\mathcal{I}^{*}-\lim na_{n}=0.\] 3. _The ideal_ \(\mathcal{I}\) _extends the summable ideal:_ \(\mathcal{I}_{1/n}\subseteq\mathcal{I}\)_._ Proof.: Apply Theorem 2.1 with \(g(x)=x\), \(b_{n}=1\) and \(c_{n}=n\) and Remark 2.1(1). **Corollary 2.5** (Misik-Toth, [22]).: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). Let \(p,q\) be fixed positive numbers and \(\alpha,\beta\) be fixed nonnegative numbers. Then the following conditions are equivalent._ 1. _For every sequence_ \((d_{n})\) _of positive numbers we have_ \[\sum_{n=1}^{\infty}n^{\alpha}d_{n}^{p}<\infty\implies\mathcal{I}-\lim n^{ \beta}d_{n}^{q}=0.\] _._ 2. _The ideal_ \(\mathcal{I}\) _extends the summable ideal generated by the sequence_ \(\left(n^{\alpha-\beta p/q}\right)\)_:_ \[\mathcal{I}_{\left(n^{\alpha-\beta p/q}\right)}\subseteq\mathcal{I}.\] Proof.: We can apply Theorem 2.2 with \(g(x)=x^{p/q}\), \(a_{n}=d_{n}^{p}\), \(b_{n}=n^{\alpha}\) and \(c_{n}=n^{\beta}\). **Corollary 2.6** (Bartoszewicz-Glab-Widz [3]).: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). Let \((b_{n})\), \((c_{n})\) be sequences of positive numbers and let \(p\), \(q\) be fixed positive numbers. Then the following conditions are equivalent._ 1. _For every sequence_ \((d_{n})\) _of positive numbers we have_ \[\sum_{n=1}^{\infty}b_{n}d_{n}^{p}<\infty\implies\mathcal{I}-\lim c_{n}d_{n}^{ q}=0.\] 2. _The ideal_ \(\mathcal{I}\) _extends the summable ideal generated by the sequence_ \((b_{n}c_{n}^{-p/q})\)_:_ \[\mathcal{I}_{\left(b_{n}c_{n}^{-p/q}\right)}\subseteq\mathcal{I}.\] Proof.: We can apply Theorem 2.2 with \(g(x)=x^{p/q}\), \(a_{n}=d_{n}^{p}\), \(b_{n}=b_{n}\) and \(c_{n}=c_{n}\). _Remark 2.2_.: One can show that for instance functions \(g(x)=e^{x}-1\), \(g(x)=\ln(x+1)\), \(g(x)=\arctan x\) and even \(g(x)=\Phi(x)-1/2\) (where \(\Phi(x)\) is the cumulative distribution function of the standard normal distribution) satisfy the assumptions of Theorem 2.1. On the other hand, all these functions are not the power functions \(x^{p/q}\) considered in Corollary 2.6. ### \(\mathcal{I}^{+}\) test **Theorem 2.7**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). Let \(g\), \((b_{n})\) and \((c_{n})\) be like in Theorem 2.1 or Theorem 2.2. Then the following conditions are equivalent._ 1. _For every sequence_ \((a_{n})\) _with_ \(a_{n}\in\operatorname{ran}(g)\) _we have_ \[\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\implies\mathcal{I}^{+}-\lim c_{n}g^{-1}( a_{n})=0.\] 2. _The filter dual to_ \(\mathcal{I}\) _is disjoint from the summable ideal generated by the sequence_ \((b_{n}g(1/c_{n}))\)_:_ \[\mathcal{I}_{(b_{n}g(1/c_{n}))}\cap\mathcal{I}^{*}=\emptyset.\] Proof.: \((1)\implies(2)\) Suppose that there exists \(A\in\mathcal{I}_{(b_{n}g(1/c_{n}))}\cap\mathcal{I}^{*}\). For any \(n\in\mathbb{N}\), take as \(d_{n}\) some element of \(\operatorname{ran}(g)\) with \(d_{n}\leq\frac{1}{n^{b_{n}}}\). We can find such an element by the assumption \(\lim_{x\to 0^{+}}g(x)=0\). We define \[a_{n}=\left\{\begin{array}{ll}g(1/c_{n})&\text{for }n\in A,\\ d_{n}&\text{for }n\not\in A.\end{array}\right.\] We can see that \(\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\) because \[\sum_{n=1}^{\infty}b_{n}a_{n}\leq\sum_{n\in A}b_{n}g(1/c_{n})+\sum_{n\not\in A }\frac{b_{n}}{b_{n}n^{2}}.\] Since \(A\in\mathcal{I}_{(b_{n}g(1/c_{n}))}\), the first sum is finite and the second sum is finite because the series \(\sum\frac{1}{n^{2}}\) is convergent. On the other hand, for all \(n\in A\) we have \[c_{n}g^{-1}(a_{n})=c_{n}g^{-1}(g(1/c_{n}))=c_{n}(1/c_{n})=1.\] Since \(A\in\mathcal{I}^{*}\), it follows that for any \(B\in\mathcal{I}^{+}\) there are infinitely many \(n\in B\cap A\) with \(c_{n}g^{-1}(a_{n})=1\), thus we cannot have \(\mathcal{I}^{+}-\lim c_{n}g^{-1}(a_{n})=0\). \((2)\implies(1)\) Suppose that there exists a positive sequence \((a_{n})\) with \(\sum_{n=1}^{\infty}b_{n}a_{n}<\infty\) such that for any \(B\in\mathcal{I}^{+}\) we have \(\limsup_{n\in B}c_{n}g^{-1}(a_{n})>0\). Consider the sets \(A_{k}=\{n\in\mathbb{N}:c_{n}g^{-1}(a_{n})\geq 1/k\}\). We can notice that for each \(k\in\mathbb{N}\) we have \(A_{k}\in\mathcal{I}_{(b_{n}g(1/c_{n}))}\). Indeed, let us assume that it is not the case for some \(k\in\mathbb{N}\). Then \[\infty>\sum_{n\in\mathbb{N}}b_{n}a_{n}\geq\sum_{n\in A_{k}}b_{n}a_{n}\geq\sum_ {n\in A_{k}}b_{n}g\left(\frac{1}{kc_{n}}\right),\] which is infinite since \(A_{k}\notin\mathcal{I}_{(b_{n}g(1/c_{n}))}\) and \[\limsup_{n\to\infty}\frac{g(1/c_{n})}{g(\varepsilon/c_{n})}\in(0,\infty)\] for any \(\varepsilon>0\) by the reasonings presented in the proofs of Theorems 2.1 and 2.2, thus bringing us to a contradiction. Now, since \(\mathcal{I}_{(b_{n}g(1/c_{n}))}\) is a P-ideal, there exists \(B\in\mathcal{I}^{*}_{(b_{n}g(1/c_{n}))}\) such that \(B\cap A_{k}\) is finite for all \(k\in\mathbb{N}\). We can see that \(\lim_{n\in B}c_{n}g^{-1}(a_{n})=0\). By our assumption we get \(B\not\in\mathcal{I}^{+}\), hence \(B\in\mathcal{I}\). If we now take \(C=\mathbb{N}\setminus B\), we obtain \(C\in\mathcal{I}_{(b_{n}g(1/c_{n}))}\cap\mathcal{I}^{*}\). _Remark 2.3_.: Notice that Theorem 2.7 would still be true if we add \(0\) to the domain and codomain of \(g\) and require that \(g(0)=0\). **Corollary 2.8**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). Then the following conditions are equivalent._ 1. _For every sequence_ \((a_{n})\) _of non-negative numbers we have_ \[\sum_{n=1}^{\infty}a_{n}<\infty\implies\mathcal{I}^{+}-\lim na_{n}=0.\] 2. _The filter dual to_ \(\mathcal{I}\) _is disjoint from the summable ideal:_ \[\mathcal{I}_{1/n}\cap\mathcal{I}^{*}=\emptyset.\] Proof.: Apply Theorem 2.7 with \(g(x)=x\), \(b_{n}=1\) and \(c_{n}=n\) and Remark 2.3. ## 3. The ideal test for the divergence of an infinite series with monotone terms **Definition 3.1**.: For an infinite set \(X=\{x_{1}<x_{2}<\ldots\}\subseteq\mathbb{N}\), we define \(f_{X}:\mathbb{N}\to\mathbb{N}\) by \[f_{X}(i)=\frac{1}{x_{n}}\iff i\in(x_{n-1},x_{n}]\text{ for some }n\in\mathbb{N}\] (we take \(x_{0}=0\)), and \[\mathcal{I}_{X}=\left\{A\subseteq\mathbb{N}:\sum_{n\in A}f_{X}(n)<\infty\right\}.\] The following easy proposition summarizes few basic properties of ideals of the form \(\mathcal{I}_{X}\). **Proposition 3.2**.: 1. \(\mathcal{I}_{\mathbb{N}}=\mathcal{I}_{1/n}\)_._ 2. _For any infinite_ \(X\)_,_ \(\mathcal{I}_{X}\) _is equal to the summable ideal generated by the sequence_ \(f_{X}\colon\mathcal{I}_{X}=\mathcal{I}_{(f_{X})}\)_._ 3. _If_ \(X\subseteq Y\) _then_ \(\mathcal{I}_{X}\supseteq\mathcal{I}_{Y}\)_._ 4. \(\mathcal{I}_{1/n}\subseteq\mathcal{I}_{X}\) _for every infinite_ \(X\) _._ 5. _For any infinite_ \(X\)_, if_ \(A\in\mathcal{I}_{X}^{*}\) _then_ \(A\) _has upper asymptotic density_ \(1\)_:_ \[A\in\mathcal{I}_{X}^{*}\Longrightarrow\limsup_{n\to\infty}\frac{|A\cap\{1,\ldots,n\}|}{n}=1.\] Proof.: The first four items are easy observations. We will prove the last item by showing that if \(A\) has positive lower asymptotic density then \(A\not\in\mathcal{I}_{X}\), i.e. \[\liminf_{n\to\infty}\frac{|A\cap\{1,\ldots,n\}|}{n}>0\Longrightarrow A\not \in\mathcal{I}_{X}.\] Take \(A\subseteq\mathbb{N}\) with positive lower asymptotic density. Pick \(\alpha>0\) such that the lower asymptotic density of \(A\) is greater than \(2\alpha\). First, observe that there exist \(k,N\in\mathbb{N}\) such that for all \(n\geq N\) we have \(|A\cap[2^{nk},2^{nk+k})|/2^{nk+k}>\alpha\). Indeed, otherwise for infinitely many \(n\in\mathbb{N}\) we have \[2\alpha<\frac{|A\cap[1,2^{nk+k}]|}{2^{nk+k}}\leq\frac{2^{nk}}{2^{nk+k}}+\frac {|A\cap[2^{nk},2^{n+k})|}{2^{nk+k}}\leq 2^{-k}+\alpha,\] which is a contradiction for any \(k\in\mathbb{N}\) with \(2^{-k}\leq\alpha\). Now, for any \(n\in\mathbb{N}\) denote by \(I_{n}\) the interval \([2^{nk},2^{nk+k})\). Let \(Y=\{y_{1}<y_{2}<\ldots\}\) be such a subset of \(X\) that \(|Y\cap I_{n}|\leq 1\) for all \(n\in\mathbb{N}\) and \(y_{1}>\max I_{N}\). Since \(\mathcal{I}_{X}\subseteq\mathcal{I}_{Y}\) by (3), we will finish the proof by showing that \(A\not\in\mathcal{I}_{Y}\). Take any \(y_{n}\). Then \(y_{n}\in I_{m}\) for some \(m>N\), thus \[\sum_{i\in A\cap I_{m-1}}f_{Y}(i)\geq\frac{\alpha 2^{mk}}{y_{n}}\geq\frac{ \alpha 2^{mk}}{2^{mk+k}}=\frac{\alpha}{2^{k}}.\] Since that calculation holds for any \(y_{n}\) and \(Y\) is infinite, we obtain \[\sum_{i\in A}f_{Y}(i)\geq\sum_{n=1}^{\infty}\frac{\alpha}{2^{k}}=\infty,\] hence \(A\not\in\mathcal{I}_{Y}\). **Theorem 3.3**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). Then the following conditions are equivalent._ 1. _For every_ \(\mathcal{I}^{*}\)_-nonincreasing sequence_ \((a_{n})\) _of positive reals we have_ \[\sum_{n=1}^{\infty}a_{n}<\infty\implies\mathcal{I}^{*}-\lim na_{n}=0.\] 2. _For every_ \(\mathcal{I}^{*}\)_-nonincreasing sequence_ \((a_{n})\) _of positive reals we have_ \[\sum_{n=1}^{\infty}a_{n}<\infty\implies\mathcal{I}-\lim na_{n}=0.\] 3. _The filter dual to_ \(\mathcal{I}\) _is disjoint from each ideal_ \(\mathcal{I}_{X}\) _with_ \(X\in\mathcal{I}^{+}\)_:_ \[\forall X\in\mathcal{I}^{+}\,\left(\mathcal{I}_{X}\cap\mathcal{I}^{*}=\emptyset \right).\] Proof.: (1) \(\implies\) (2) It follows from the fact that \(\mathcal{I}^{*}\)-convergence implies \(\mathcal{I}\)-convergence. (2) \(\implies\) (3) Suppose there exist \(X\in\mathcal{I}^{+}\) and \(A\subseteq\mathbb{N}\) such that \(A\in\mathcal{I}_{X}\cap\mathcal{I}^{*}\neq\emptyset\). Define \[a_{n}=\begin{cases}f_{X}(n)&\text{ for }n\in A,\\ 1/n^{2}&\text{ for }n\not\in A.\end{cases}\] Since \(f_{X}\) is nonincreasing, the sequence \(a_{n}\) is nonincreasing on the set \(A\in\mathcal{I}^{*}\). Moreover, \[\sum_{n\in\mathbb{N}}a_{n}=\sum_{n\in A}f_{X}(n)+\sum_{n\not\in A}\frac{1}{n^{2} }<\infty,\] because \(A\in\mathcal{I}_{X}\). On the other hand, for every \(n=x_{k}\in X\cap A\) we have \[na_{n}=x_{k}f_{X}(x_{k})=x_{k}\cdot\frac{1}{x_{k}}=1.\] Since \(X\cap A\in\mathcal{I}^{+}\), the sequence \((na_{n})\) cannot be \(\mathcal{I}\)-convergent to zero. \((3)\implies(1)\) Suppose there exists a sequence \((a_{n})\) with \(\sum_{n\in\mathbb{N}}a_{n}<\infty\) that is nonincreasing on some set \(A\in\mathcal{I}^{*}\) and the sequence \((na_{n})\) is not \(\mathcal{I}^{*}\)-convergent to zero. We have two cases. * There is an \(\varepsilon>0\) such that \(\{n\in A:na_{n}>\varepsilon\}\in\mathcal{I}^{+}\). * For every \(\varepsilon>0\) we have \(\{n\in A:na_{n}>\varepsilon\}\in\mathcal{I}\) In case (a), let \(\varepsilon>0\) be such that \(X=\{n\in A:na_{n}>\varepsilon\}\in\mathcal{I}^{+}\), and enumerate the elements of \(X\) increasingly by \(x_{1},x_{2},\ldots\). Since the sequence \((a_{n})\) is nonincreasing on \(A\) and \(X\subseteq A\), we can notice that \(a_{n}\geq a_{x_{k}}\) for \(n\in(x_{k-1},x_{k}]\cap A\), \(k\in\mathbb{N}\). Therefore, \[\sum_{n\in A}a_{n} \geq\sum_{k\in\mathbb{N}}\sum_{n\in(x_{k-1},x_{k}]\cap A}a_{x_{k}} >\sum_{k\in\mathbb{N}}\sum_{n\in(x_{k-1},x_{k}]\cap A}\frac{\varepsilon}{x_{k}}\] \[=\varepsilon\sum_{k\in\mathbb{N}}\sum_{n\in(x_{k-1},x_{k}]\cap A} \frac{1}{x_{k}}=\varepsilon\sum_{n\in A}f_{X}(n).\] From the assumption that \(\sum_{n\in A}a_{n}<\infty\), it follows that \(A\in\mathcal{I}_{X}\). Thus, \(A\in\mathcal{I}_{X}\cap\mathcal{I}^{*}\), which makes \(\mathcal{I}_{X}\cap\mathcal{I}^{*}\) nonempty. In case (b), since the sequence \((na_{n})\) is not \(\mathcal{I}^{*}\)-convergent to \(0\) and \(A\notin\mathcal{I}\), we can find a strictly decreasing sequence \((\varepsilon_{k})\) tending to \(0\) such that \(X_{k}=\{n\in A:na_{n}\in[\varepsilon_{k},\varepsilon_{k-1})\}\in\mathcal{I} \setminus\text{Fin}\) for every \(k\in\mathbb{N}\) (we put \(\varepsilon_{0}=\infty\)). Observe also that for every \(B\in\mathcal{I}^{*}\) there is some \(k\in\mathbb{N}\) with \(B\cap X_{k}\not\in\text{Fin}\). Enumerate elements of each \(X_{k}\) increasingly by \(x_{1}^{(k)},x_{2}^{(k)},\ldots\) and add \(x_{0}^{(k)}=0\). We will prove that \(A\in\mathcal{I}_{X_{k}}\) for every \(k\in\mathbb{N}\). Take any \(k\in\mathbb{N}\) and notice that for any \(n\in X_{k}\) we have \(a_{n}\geq\varepsilon_{k}/n\), thus, using the fact that \((a_{n})\) is nonincreasing on \(A\), we have \[\sum_{n\in A}f_{X_{k}}(n)=\sum_{i\in\mathbb{N}}\frac{|A\cap(x_{i-1}^{(k)},x_{i }^{(k)}]|}{x_{i}^{(k)}}\leq\frac{1}{\varepsilon_{k}}\sum_{n\in A}a_{n}<\infty.\] Because \(\sum_{n\in A}f_{X_{k}}(n)<\infty\) for each \(k\in\mathbb{N}\), we can see that we may always find such \(t_{k}\in\mathbb{N}\) that \[\sum_{i\geq t_{k}}\frac{|A\cap(x_{i-1}^{(k)},x_{i}^{(k)}]}{x_{i}^{(k)}}<\frac{1 }{2^{k}}.\] Moreover, by increasing \(t_{k}\) if necessary, we can assume that for each \(k>1\) there exist some \(j\geq t_{1}\) such that \(x_{j}^{(1)}\in(x_{t_{k}-1}^{(k)},x_{t_{k}}^{(k)})\). Next, define \(X=\bigcup_{k\in\mathbb{N}}X_{k}\setminus\{x_{1}^{(k)},\ldots,x_{t_{k}-1}^{(k)}\}\). Note that \(X\in\mathcal{I}^{+}\) as otherwise \(\mathbb{N}\setminus X\) would be a set in \(\mathcal{I}^{*}\) that has finite intersections with every \(X_{k}\). Enumerate increasingly elements of \(X\) by \(x_{1},x_{2},\ldots\) and add \(x_{0}=0\). Observe that \(x_{1}=x_{t_{1}}^{(1)}\). We will finish the proof by showing that \(A\in\mathcal{I}_{X}\). In order to prove that, observe that every \(x_{j}\) is equal to \(x_{i}^{(k)}\) for some \(k\in\mathbb{N}\) and \(i\geq t_{k}\). Moreover, for every \(x_{j}=x_{i}^{(k)}\) other than \(x_{1}=x_{t_{1}}^{(1)}\) we can notice that \((x_{j-1},x_{j}]\subseteq(x_{i-1}^{(k)},x_{i}^{(k)}]\) because either \(i>t_{k}\) and then \(x_{i-1}^{(k)}\in X\), thus \(x_{i-1}^{(k)}\leq x_{j-1}\), or \(i=t_{k}\) and then \(X_{1}\cap X\cap(x_{t_{k}-1}^{(k)},x_{t_{k}}^{(k)})\neq\emptyset\), thus \(x_{i-1}^{(k)}<x_{j-1}\). Therefore, \[\sum_{n\in A\setminus\{1,\dots,x_{1}\}}f_{X}(n) =\sum_{j\geq 2}\frac{|A\cap(x_{j-1},x_{j}]|}{x_{j}}\] \[\leq\sum_{k\in\mathbb{N}}\sum_{i\geq t_{k}}\frac{|A\cap(x_{i-1}^{ (k)},x_{i}^{(k)}]|}{x_{i}^{(k)}}<\sum_{k\in\mathbb{N}}\frac{1}{2^{k}}=1<\infty.\] It clearly follows that \(\sum_{n\in A}f_{X}(n)<\infty\), thus \(A\in\mathcal{I}_{X}\cap\mathcal{I}^{*}\). **Theorem 3.4**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). Then the following conditions are equivalent._ 1. _For every_ \(\mathcal{I}^{*}\)_-nonincreasing sequence_ \((a_{n})\) _of positive reals we have_ \[\sum_{n=1}^{\infty}a_{n}<\infty\implies\mathcal{I}^{+}-\lim na_{n}=0.\] 2. _The filter dual to_ \(\mathcal{I}\) _is disjoint from the summable ideal:_ \[\mathcal{I}_{1/n}\cap\mathcal{I}^{*}=\emptyset.\] Proof.: \((2)\implies(1)\) It follows from Corollary 2.8. \((1)\implies(2)\) Suppose that there exists \(A\in\mathcal{I}_{1/n}\cap\mathcal{I}^{*}\). We define \(a_{n}=1/n\) for \(n\in A\) and \(a_{n}=1/n^{2}\) for \(n\not\in A\). Then \((a_{n})\) is \(\mathcal{I}^{*}\)-nonincreasing and we can see that \[\sum_{n=1}^{\infty}a_{n}=\sum_{n\in A}\frac{1}{n}+\sum_{n\not\in A}\frac{1}{n^ {2}}<\infty,\] because \(A\in\mathcal{I}_{1/n}\). On the other hand, for all \(n\in A\) we have \(na_{n}=1\). Since \(A\in\mathcal{I}^{*}\), it follows that for any \(B\in\mathcal{I}^{+}\) there are infinitely many \(n\in B\cap A\) with \(na_{n}=1\), thus we cannot have \(\mathcal{I}^{+}-\lim na_{n}=0\). **Corollary 3.5** (Faisant-Grekos-Misik [8]).: _If \(\mathcal{I}\) is an ideal on \(\mathbb{N}\) such that every \(A\in\mathcal{I}\) has the upper asymptotic density less than 1, then_ \[\sum_{n=1}^{\infty}a_{n}<\infty\implies\mathcal{I}^{*}-\lim na_{n}=0\] _for every \(\mathcal{I}^{*}\)-nonincreasing sequence \((a_{n})\) of positive reals._ Proof.: Since every \(A\in\mathcal{I}\) has the upper asymptotic density less than 1, by Proposition 3.2(5) it follows that \(\mathcal{I}\cap\mathcal{I}^{*}_{X}=\emptyset\) for every infinite \(X\subseteq\mathbb{N}\). Hence, in particular, \(\mathcal{I}_{X}\cap\mathcal{I}^{*}=\emptyset\) for every \(X\not\in\mathcal{I}\). Therefore, by Theorem 3.3 we obtain the desired result. **Proposition 3.6**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). In the following list of conditions, each implies the next and none of the implications reverse._ 1. \(\mathcal{I}_{1/n}\cap\mathcal{I}^{+}=\emptyset\)_._ 2. \(\mathcal{I}_{X}\cap\mathcal{I}^{*}=\emptyset\) _for every_ \(X\in\mathcal{I}^{+}\)_._ 3. \(\mathcal{I}_{1/n}\cap\mathcal{I}^{*}=\emptyset\)_._ Proof.: \((1)\implies(2)\) Suppose that \(\mathcal{I}_{1/n}\cap\mathcal{I}^{+}=\emptyset\). Then using Theorems 1.2 and 3.3 we obtain \(\mathcal{I}_{X}\cap\mathcal{I}^{*}=\emptyset\) for every \(X\in\mathcal{I}^{+}\). \((2)\implies(3)\) Suppose that \(\mathcal{I}_{X}\cap\mathcal{I}^{*}=\emptyset\) for every \(X\in\mathcal{I}^{+}\). Taking \(X=\mathbb{N}\) we get \(\mathcal{I}_{X}=\mathcal{I}_{1/n}\), so \(\mathcal{I}_{1/n}\cap\mathcal{I}^{*}=\emptyset\). \((2)\not\implies(1)\) Consider a summable ideal \(\mathcal{I}=\mathcal{I}_{1/\ln(n+1)}\). Then \(\mathcal{I}\subseteq\mathcal{I}_{1/n}\), and since \(\{n^{2}:n\in\mathbb{N}\}\in\mathcal{I}_{1/n}\setminus\mathcal{I}\), we have \(\mathcal{I}_{1/n}\cap\mathcal{I}^{+}\neq\emptyset\). On the other hand, since \(\mathcal{I}\subseteq\mathcal{I}_{1/n}\subseteq\mathcal{I}_{X}\) for every \(X\), we have \(\mathcal{I}_{X}\cap\mathcal{I}^{*}=\emptyset\) for every \(X\). \((3)\not\implies(2)\) Let \(k_{n}=\lfloor n\ln(n+1)\rfloor\) for \(n\in\mathbb{N}\) and define \(K=\{k_{n}:n\in\mathbb{N}\}\). Take \(\mathcal{I}=\{A\subseteq\mathbb{N}:A\cap K\in\text{Fin}\}\). Notice that if \(A\in\mathcal{I}^{*}\) then \(K\setminus A\) is finite and \[\sum_{n\in K}\frac{1}{n}=\sum_{n\in\mathbb{N}}\frac{1}{k_{n}}\geq\sum_{n\in \mathbb{N}}\frac{1}{n\ln(n+1)}=\infty,\] thus \(K\not\in\mathcal{I}_{1/n}\), hence \(\mathcal{I}_{1/n}\cap\mathcal{I}^{*}=\emptyset\). Now, we pick the sequence \(i_{1}<i_{2}<\ldots\) in such a way that \(i_{n}/k_{i_{n}}<2^{-n}\). We can do it because \(\lim_{n\to\infty}n/k_{n}=0\). Consider the set \(A=\{k_{i_{n}}:n\in\mathbb{N}\}\). Then \(A\in\mathcal{I}^{+}\) as \(A\subseteq K\) and \(A\) is infinite. Moreover, \[\sum_{k\in K}f_{A}(k)\leq\sum_{n\in\mathbb{N}}(i_{n}-i_{n-1})\frac{1}{k_{i_{n} }}\leq\sum_{n\in\mathbb{N}}\frac{i_{n}}{k_{i_{n}}}<\sum_{n\in\mathbb{N}}\frac{ 1}{2^{n}}<\infty.\] Therefore, there is \(A\in\mathcal{I}^{+}\) such that \(K\in\mathcal{I}_{A}\cap\mathcal{I}^{*}\), thus \(\mathcal{I}_{A}\cap\mathcal{I}^{*}\neq\emptyset\). **Corollary 3.7**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). In the following list of conditions, each implies the next and none of the implications revers._ 1. _The coideal of_ \(\mathcal{I}\) _is disjoint from the summable ideal:_ \[\mathcal{I}_{1/n}\cap\mathcal{I}^{+}=\emptyset.\] 2. _For every_ \(\mathcal{I}^{*}\)_-nonincreasing sequence_ \((a_{n})\) _of positive reals we have_ \[\sum_{n=1}^{\infty}a_{n}<\infty\implies\mathcal{I}^{*}-\lim na_{n}=0.\] 3. _The filter dual to_ \(\mathcal{I}\) _is disjoint from the summable ideal:_ \[\mathcal{I}_{1/n}\cap\mathcal{I}^{*}=\emptyset.\] Proof.: Use Theorem 3.3 along with Proposition 3.6. ## 4. Algebraic structures in families of sequences related to Olivier's theorem The following proposition gives a necessary and sufficient conditions under which the families \(AOS(\mathcal{I})\), \(AOS(\mathcal{I}^{+})\) and \(AOS(\mathcal{I}^{+})\) are nonempty. **Proposition 4.1**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\)._ 1. _The following conditions are equivalent._ 1. \(AOS(\mathcal{I})\neq\emptyset\)_._ 2. \(AOS(\mathcal{I}^{*})\neq\emptyset\)_._ 3. \(\mathcal{I}^{+}\cap\mathcal{I}_{1/n}\neq\emptyset\)_._ 2. _The following conditions are equivalent._ 1. \(AOS(\mathcal{I}^{+})\neq\emptyset\)_._ 2. \(\mathcal{I}^{*}\cap\mathcal{I}_{1/n}\neq\emptyset\)_._ 3. \(\mathcal{I}^{*}\cap\mathcal{I}_{1/n}\neq\emptyset\)_._ Proof.: \((1)\) It follows from Corollary 2.4. \((2)\) It follows from Corollary 2.8. Since \(\mathcal{I}^{*}\)-convergence implies both \(\mathcal{I}\)-convergence and \(\mathcal{I}^{+}\)-convergence, \(AOS(\mathcal{I})\subseteq AOS(\mathcal{I}^{*})\) and \(AOS(\mathcal{I}^{+})\subseteq AOS(\mathcal{I}^{*})\). Below, we show that, in general, these inclusions do not reverse, and there is no inclusions between \(AOS(\mathcal{I})\) and \(AOS(\mathcal{I}^{+})\). **Proposition 4.2**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\)._ 1. _If_ \(\mathcal{I}^{+}\cap\mathcal{I}_{1/n}\neq\emptyset\) _and_ \(\mathcal{I}^{*}\cap\mathcal{I}_{1/n}=\emptyset\)_, then_ \(AOS(\mathcal{I})\not\subseteq AOS(\mathcal{I}^{+})\) _and_ \(AOS(\mathcal{I}^{*})\not\subseteq AOS(\mathcal{I}^{+})\)_._ 2. _Every ideal_ \(\mathcal{I}\) _which is strictly contained in_ \(\mathcal{I}_{1/n}\) _(e.g._ \(\mathcal{I}=\mathrm{Fin}\)_) satisfies assumptions of item (1)._ 3. _If_ \(\mathcal{I}\) _is not a weak P-ideal and_ \(\mathcal{I}^{*}\cap\mathcal{I}_{1/n}\neq\emptyset\)_, then_ \(AOS(\mathcal{I}^{+})\not\subseteq AOS(\mathcal{I})\)_._ 4. _If_ \(\mathcal{I}\) _is not a P-ideal and_ \(\mathcal{I}^{*}\cap\mathcal{I}_{1/n}\neq\emptyset\)_, then_ \(AOS(\mathcal{I}^{*})\not\subseteq AOS(\mathcal{I})\)_._ 5. _There exists an ideal_ \(\mathcal{I}\) _which satisfies the assumptions of items (3) and (4)._ Proof.: (1) By Proposition 4.1, we have \(AOS(\mathcal{I})\neq\emptyset\), \(AOS(\mathcal{I}^{*})\neq\emptyset\) and \(AOS(\mathcal{I}^{+})=\emptyset\). (2) It is obvious. (3) Take \(A\in\mathcal{I}^{*}\cap\mathcal{I}_{1/n}\). Let sets \(A_{1},A_{2}\ldots\in\mathcal{I}\) be pairwise disjoint such that for any \(B\not\in\mathcal{I}\) there is \(n\in\mathbb{N}\) with \(B\cap A_{n}\not\in\mathrm{Fin}\). We may assume that \(\bigcup_{n=1}^{\infty}A_{n}=A\). Indeed, if that is not the case then enumerate \(A\setminus\bigcup_{n=1}^{\infty}A_{n}\) by \((x_{i})\) (for either \(i\in\mathbb{N}\) or \(i\leq N\) depending on whether this difference is infinite or not) and for each \(n\in\mathbb{N}\) put \(A^{\prime}_{n}=(A_{n}\cap A)\cup\{x_{n}\}\) (or \(A^{\prime}_{n}=A_{n}\cap A\) in case \(x_{n}\) is not defined). Then clearly \(A^{\prime}_{n}\in\mathcal{I}\) for every \(n\in\mathbb{N}\) and \(\bigcup_{n=1}^{\infty}A^{\prime}_{n}=A\). We define \[a_{n}=\begin{cases}\frac{1}{ni}&\text{for }n\in A_{i},\\ \frac{1}{n^{2}}&\text{for other }n.\end{cases}\] Obviously \[\sum_{n=1}^{\infty}a_{n}\leq\sum_{n=1}^{\infty}\frac{1}{n^{2}}+\sum_{n\in A} \frac{1}{n}<\infty.\] Moreover, we can notice that for every \(k\in\mathbb{N}\) we have \[\left\{n\in\mathbb{N}:na_{n}\geq\frac{1}{k}\right\}\subseteq(\mathbb{N} \setminus A)\cup\bigcup_{i=1}^{k}A_{i}\in\mathcal{I},\] thus \(\mathcal{I}-\lim na_{n}=0\), hence \((a_{n})\not\in AOS(\mathcal{I})\). On the other hand for every \(B\not\in\mathcal{I}\) there is \(k\in\mathbb{N}\) with \(B\cap A_{k}\not\in\mathrm{Fin}\), thus there are infinitely many \(n\in B\) such that \(na_{n}=1/k\), hence \((na_{n})_{n\in B}\) cannot be convergent to \(0\). It follows that \((a_{n})\in AOS(\mathcal{I}^{+})\). (4) The obvious modification of the proof of item (3) gives the proof of item (4). (5) Take an infinite set \(A\in\mathcal{I}_{1/n}\). Let \(\{A_{n}:n\in\mathbb{N}\}\) be an infinite partition of \(A\) into infinite sets. We define an ideal \(\mathcal{I}\) by \[B\in\mathcal{I}\iff B\cap A_{n}\in\mathrm{Fin}\ \ \text{for all but finitely many }n\in\mathbb{N}.\] Then \(\mathcal{I}\) is not a weak P-ideal (so also not a P-ideal) and \(A\in\mathcal{I}^{*}\cap\mathcal{I}_{1/n}\). ### Lineability **Theorem 4.3**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\)._ 1. _The following conditions are equivalent._ 1. \(AOS(\mathcal{I})\neq\emptyset\)_._ 2. \(AOS(\mathcal{I}^{*})\neq\emptyset\)_._ 3. \(AOS(\mathcal{I})\) _is_ \(\mathfrak{c}\)_-lineable._ 4. \(AOS(\mathcal{I}^{*})\) _is_ \(\mathfrak{c}\)_-lineable._ 5. \(\mathcal{I}_{1/n}\cap\mathcal{I}^{+}\neq\emptyset\)_._ 2. _The following conditions are equivalent._ 1. \(AOS(\mathcal{I}^{+})\neq\emptyset\)_._ 2. \(AOS(\mathcal{I}^{+})\) _is_ \(\mathfrak{c}\)_-lineable._ 3. \(\mathcal{I}_{1/n}\cap\mathcal{I}^{*}\neq\emptyset\)_._ Proof.: The equivalence of (1a), (1b) and (1e) follows from Proposition 4.1. The implications (1c) \(\Longrightarrow\) (1a) and (1d) \(\Longrightarrow\) (1b) are obvious. By \(AOS(\mathcal{I})\subseteq AOS(\mathcal{I}^{*})\) we get (1c) \(\Longrightarrow\) (1d). Thus, it suffices to show (1e) \(\Longrightarrow\) (1c). The equivalence of (2a) and (2c) follows from Proposition 4.1. The implication (2b) \(\Longrightarrow\) (2a) is obvious. Thus, it suffices to show (2c) \(\Longrightarrow\) (2b). Below we prove that (1e) \(\Longrightarrow\) (1c) ((2c) \(\Longrightarrow\) (2b), resp.). Assume that there is some \(A\in\mathcal{I}_{1/n}\cap\mathcal{I}^{+}\) (\(A\in\mathcal{I}_{1/n}\cap\mathcal{I}^{*}\), resp.). Since \(\sum_{n\in A}1/n<\infty\), we can find an increasing sequence \((j_{k})\) of integers such that \[\sum_{n>j_{k},n\in A}\frac{1}{n}<\frac{1}{k2^{k}}.\] We put \(A_{0}=(\mathbb{N}\setminus A)\cup[1,j_{1}]\cap\mathbb{N}\) and \(A_{k}=A\cap(j_{k},j_{k+1}]\) for each \(k\geq 1\). Observe that \((A_{k})\) is a partition of \(\mathbb{N}\) and \(A_{k}\in\mathcal{I}\) for \(k\geq 1\), while \(A_{0}\notin\mathcal{I}^{*}\) (\(A_{k}\in\mathcal{I}\) for all \(k\), resp.). Moreover, \[\sum_{k=0}^{\infty}\left(k\sum_{n\in A_{k}}\frac{1}{n}\right)<\infty.\] For each \(\alpha\in(0,1)\) let \(x^{(\alpha)}\) be a sequence given by \[x^{(\alpha)}(n)=\frac{k^{\alpha}}{n}\iff n\in A_{k}.\] Note that \(x^{(\alpha)}\in\ell_{1}\) for each \(\alpha\in(0,1)\) as \[\sum_{n=1}^{\infty}x^{(\alpha)}(n)=\sum_{k=0}^{\infty}\left(k^{\alpha}\sum_{n \in A_{k}}\frac{1}{n}\right)\leq\sum_{k=0}^{\infty}\left(k\sum_{n\in A_{k}} \frac{1}{n}\right)<\infty.\] In order to show \(\mathfrak{c}\)-lineability of \(AOS(\mathcal{I})\) (\(AOS(\mathcal{I}^{+}),resp.\)), consider a linear combination \[y=c_{1}x^{(\alpha_{1})}+\ldots+c_{m}x^{(\alpha_{m})},\] where \(c_{i}\in\mathbb{R}\setminus\{0\}\) for \(i\leq m\) and \(\alpha_{1}>\ldots>\alpha_{m}\). We need to show that \(y\in AOS(\mathcal{I})\) (\(y\in AOS(\mathcal{I}^{+})\), resp.). Obviously, \(y\in\ell_{1}\) as a linear combination of \(\ell_{1}\)-sequences. Observe that for each \(n\in A_{k}\) with \(k\geq 1\) we have \[|ny(n)| =\left|n\sum_{i=1}^{m}c_{i}\cdot\frac{k^{\alpha_{i}}}{n}\right|= \left|\sum_{i=1}^{m}c_{i}k^{\alpha_{i}}\right|=|c_{1}k^{\alpha_{1}}|\cdot \left|1+\sum_{i=2}^{m}\frac{c_{i}}{c_{1}}\cdot k^{\alpha_{i}-\alpha_{1}}\right|\] \[\geq|c_{1}k^{\alpha_{1}}|\cdot\left(1-\left|\sum_{i=2}^{m}\frac{ c_{i}}{c_{1}}\cdot k^{\alpha_{i}-\alpha_{1}}\right|\right)\xrightarrow{k\to\infty}\infty,\] as \[\lim_{k\to\infty}k^{\alpha_{1}}=\infty\ \ \text{and}\ \ \lim_{k\to\infty}k^{\alpha_{i}- \alpha_{1}}=\lim_{k}\frac{1}{k^{\alpha_{1}-\alpha_{i}}}=0\ \text{for all}\ i=2,\ldots,m.\] To show that \(y\in AOS(\mathcal{I})\), find \(k_{0}\) such that \(|ny(n)|\geq 1\) for all \(n\in\bigcup_{k\geq k_{0}}A_{k}\) and note that \(\bigcup_{k\geq k_{0}}A_{k}\notin\mathcal{I}\) as \(A_{0}\notin\mathcal{I}^{*}\) and \(A_{k}\in\mathcal{I}\) for \(k\geq 1\). Hence, \(y\in AOS(\mathcal{I})\). To show that \(y\in AOS(\mathcal{I}^{+})\), fix any \(B\in\mathcal{I}^{+}\). Since \(A_{k}\in\mathcal{I}\) for all \(k\in\mathbb{N}\cup\{0\}\), the set \(\{k\in\mathbb{N}:\ B\cap A_{k}\neq\emptyset\}\) has to be infinite. Thus, \((|ny(n)|)_{n\in B}\) contains a subsequence \((|ny(n)|)_{n\in B\setminus A_{0}}\) which is divergent to infinity. Hence, \(y\in AOS(\mathcal{I}^{+})\). ### Spaceability For any \(x\in\ell_{1}\), we write \[\|x\|=\sum_{n=1}^{\infty}|x(n)|\ \ \text{and}\ \ \ \ \operatorname{supp}(x)=\{n \in\mathbb{N}:x(n)\neq 0\}.\] For an ideal \(\mathcal{I}\) and a set \(C\in\mathcal{I}^{+}\), we define an ideal \(\mathcal{I}\upharpoonright C\) on the set \(C\) by \[\mathcal{I}\upharpoonright C=\{A\cap C:A\in\mathcal{I}\}.\] **Theorem 4.4**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\) such that \(\mathcal{I}\upharpoonright C\) is not a maximal ideal for every \(C\in\mathcal{I}^{+}\) (i.e. every set from \(\mathcal{I}^{+}\) can be divided into two disjoint sets from \(\mathcal{I}^{+}\)). Then the following conditions are equivalent._ 1. \(AOS(\mathcal{I})\neq\emptyset\)_._ 2. \(AOS(\mathcal{I}^{*})\neq\emptyset\)_._ 3. \(AOS(\mathcal{I})\) _is_ \(\mathfrak{c}\)_-lineable._ 4. \(AOS(\mathcal{I}^{*})\) _is_ \(\mathfrak{c}\)_-lineable._ 5. \(AOS(\mathcal{I})\) _is spaceable._ 6. \(AOS(\mathcal{I}^{*})\) _is spaceable._ 7. \(\mathcal{I}_{1/n}\cap\mathcal{I}^{+}\neq\emptyset\)_._ Proof.: The equivalence of (1), (2), (3), (4) and (7) is due to Theorem 4.3. It is known ([20, Theorem I-1], see also [19] or [13]) that every infinite-dimensional Banach space has dimension at least \(\mathfrak{c}\), so we obtain the implications \((6)\Longrightarrow(4)\) and \((5)\Longrightarrow(3)\). By \(AOS(\mathcal{I})\subseteq AOS(\mathcal{I}^{*})\) we get \((5)\Longrightarrow(6)\). Thus, it suffices to show \((1)\Longrightarrow(5)\). Let \((b_{n})\in AOS(\mathcal{I})\). Without loss of generality, we can assume that \(\|(b_{n})\|=1\) and \(b_{n}\geq 0\) for each \(n\in\mathbb{N}\). Then there exists \(\varepsilon>0\) such that \[C=\{n\in\mathbb{N}:nb_{n}\geq\varepsilon\}\notin\mathcal{I},\] and, consequently, there exist pairwise disjoint sets \(D_{n}\notin\mathcal{I}\) such that \(D_{n}\subseteq C\) for each \(n\in\mathbb{N}\). For each \(i,n\in\mathbb{N}\), we define \[x^{(i)}(n)=\begin{cases}b_{n}&\text{if }n\in D_{i}\setminus\{\min D_{i}\},\\ 1-\sum_{n\in D_{i}\setminus\{\min D_{i}\}}b_{n}&\text{if }n=\min D_{i},\\ 0&\text{otherwise}.\end{cases}\] Then \(\|x^{(i)}\|=1\), \(\operatorname{supp}(x^{(i)})=D_{i}\) and \(\operatorname{supp}(x^{(i)})\cap\operatorname{supp}(x^{(j)})=\emptyset\) for each \(i,j\in\mathbb{N}\), \(i\neq j\). Thus \[V=\left\{\sum_{i=1}^{\infty}t_{i}x^{(i)}:\ (t_{i})\in\ell_{1}\right\}\] is a closed subspace of infinite dimension in \(\ell_{1}\). Hence, we only need to show that \(V\subseteq AOS(\mathcal{I})\cup\{0\}\). Let \((t_{i})\in\ell_{1}\). If \(t_{i}=0\) for all \(i\in\mathbb{N}\) then obviously \(\sum_{i=1}^{\infty}t_{i}x^{(i)}\in AOS(\mathcal{I})\cup\{0\}\). Suppose that \(t_{i_{0}}\neq 0\) for some \(i_{0}\in\mathbb{N}\). Then for any \(n\in D_{i_{0}}\setminus\{\min D_{i_{0}}\}\) we have \[\left|n\left(\sum_{i=1}^{\infty}t_{i}x^{(i)}(n)\right)\right|=|nt_{i_{0}}b_{n} |\geq|t_{i_{0}}\varepsilon|>0.\] Since \(D_{i_{0}}\notin\mathcal{I}\), we obtain that the sequence \(\big{(}n\left(\sum_{i=1}^{\infty}t_{i}x^{(i)}(n)\right)\big{)}_{n}\) is not \(\mathcal{I}\)-convergent to zero, hence it belongs to \(AOS(\mathcal{I})\). By identifying sets of natural numbers with their characteristic functions, we equip \(\mathcal{P}(\mathbb{N})\) with the topology of the Cantor space \(\{0,1\}^{\mathbb{N}}\) (with the product measure of a countable sequence of the uniform measures on each \(\{0,1\}\), resp.) and therefore we can assign topological (measure-theoretic) notations to ideals on \(\mathbb{N}\). In particular, an ideal \(\mathcal{I}\) has the _Baire property_ (is _Lebesgue measurable_ or is _Borel_, resp.) if \(\mathcal{I}\) has the Baire property (is Lebesgue measurable or is a Borel set, resp.) as a subset of \(\{0,1\}^{\mathbb{N}}\). For instance, summable ideals are Borel (even \(F_{\sigma}\)) ideals. We say that an ideal \(\mathcal{I}\) has the _hereditary Baire property_ (is _hereditary Lebesgue measurable_, resp.) if \(\mathcal{I}\upharpoonright C\) has the Baire property (is Lebesgue measurable, resp.) for every \(C\in\mathcal{I}^{+}\). (Using [24, Proposition 2.1], one can construct an ideal with the Baire property which does not have the hereditary Baire property). On the other hand, there is no use defining the hereditary Borel ideals because it is known that if \(\mathcal{I}\) is a Borel ideal then \(\mathcal{I}\upharpoonright C\) is a Borel ideal for every \(C\in\mathcal{I}^{+}\) (see for instance the proof of [15, Theorem 3.13]). Consequently, Borel ideals have the hereditary Baire property as well as they are hereditary Lebesgue'a measurable. **Theorem 4.5**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\) which has the hereditary Baire property or is hereditary Lebesgue measurable (in particular, if it is a Borel ideal). Then the following conditions are equivalent._ 1. \(AOS(\mathcal{I})\neq\emptyset\)_._ 2. \(AOS(\mathcal{I}^{*})\neq\emptyset\)_._ 3. \(AOS(\mathcal{I})\) _is_ \(\mathfrak{c}\)_-lineable._ 4. \(AOS(\mathcal{I}^{*})\) _is_ \(\mathfrak{c}\)_-lineable._ 5. \(AOS(\mathcal{I})\) _is spaceable._ 6. \(AOS(\mathcal{I}^{*})\) _is spaceable._ 7. \(\mathcal{I}_{1/n}\cap\mathcal{I}^{+}\neq\emptyset\)_._ Proof.: Let \(\mathcal{I}\) be an ideal with the hereditary Baire property (hereditary Lebesgue measurable, resp.). Since a maximal ideal does not have the Baire property and is not Lebesgue measurable (see e.g. [5, Theorem 4.1.1]), we obtain that \(\mathcal{I}\upharpoonright C\) is not a maximal ideal for any \(C\in\mathcal{I}^{+}\). Thus, Theorem 4.4 finishes the proof. An ideal \(\mathcal{I}\) on \(\mathbb{N}\) is called _tall_ if for every infinite \(A\subseteq\mathbb{N}\) there is an infinite \(B\subseteq A\) such that \(B\in\mathcal{I}\). The assumption of Theorem 4.4 is not satisfied for some non-tall ideals. Below, we provide an additional result (see Theorem 4.6) which for instance guarantees that \(AOS(\mathcal{I})\) is spaceable for every non-tall ideal (see Corollary 4.7). By \(e_{D}:\mathbb{N}\to D\) we denote the increasing enumeration of a set \(D\subseteq\mathbb{N}\). **Theorem 4.6**.: _If \(\mathcal{I}\) is an ideal on \(\mathbb{N}\) such that there exist pairwise disjoint sets \(D_{n}\in\mathcal{I}^{+}\), \(n\in\mathbb{N}\), and a set \(C\in\mathcal{I}^{+}\cap\mathcal{I}_{1/n}\) such that_ \[\{e_{D_{n}}(i):i\in C\}\in\mathcal{I}^{+}\] _for each \(n\in\mathbb{N}\), then_ 1. \(AOS(\mathcal{I})\neq\emptyset\)_,_ 2. \(AOS(\mathcal{I}^{*})\neq\emptyset\)_,_ 3. \(AOS(\mathcal{I})\) _is_ \(\mathfrak{c}\)_-lineable,_ 4. \(AOS(\mathcal{I}^{*})\) _is_ \(\mathfrak{c}\)_-lineable,_ 5. \(AOS(\mathcal{I})\) _is spaceable,_ 6. \(AOS(\mathcal{I}^{*})\) _is spaceable._ Proof.: Since \(\mathcal{I}^{+}\cap\mathcal{I}_{1/n}\neq\emptyset\), we obtain (1), (2), (3) and (4) from Theorem 4.3. By \(AOS(\mathcal{I})\subseteq AOS(\mathcal{I}^{*})\) we get \((5)\Longrightarrow(6)\). Thus, it suffices to show (5). Let \(D_{n}\) and \(C\) be as in the assumption of the theorem. For each \(n\in\mathbb{N}\) we define \[a_{n}=\begin{cases}\frac{1}{n}&\text{for }n\in C,\\ \frac{1}{n^{2}}&\text{otherwise.}\end{cases}\] Then \((a_{n})\in\ell_{1}\), \(\|(a_{n})\|>0\) and the sequence \((na_{n})\) is not \(\mathcal{I}\)-convergent to zero. Now, we define \(b_{n}=a_{n}/\|(a_{n})\|\) for each \(n\in\mathbb{N}\), and notice that \((b_{n})\in\ell_{1}\), \(\|(b_{n})\|=1\) and the sequence \((nb_{n})\) is not \(\mathcal{I}\)-convergent to zero. For each \(i,n\in\mathbb{N}\), we define \[x^{(i)}(n)=\begin{cases}b_{j}&\text{if }n\in D_{i},\,n=e_{D_{i}}(j),\\ 0&\text{otherwise}.\end{cases}\] Then \(x^{(i)}\in\ell_{1}\), \(\|x^{(i)}\|=1\), \(\text{supp}(x^{(i)})=D_{i}\) and \(\text{supp}(x^{(i)})\cap\text{supp}(x^{(j)})=\emptyset\) for each \(i,j\in\mathbb{N}\), \(i\neq j\). Thus \[V=\left\{\sum_{i=1}^{\infty}t_{i}x^{(i)}:\ (t_{i})\in\ell_{1}\right\}\] is a closed subspace of infinite dimension. Hence, we only need to show that \(V\subseteq AOS(\mathcal{I})\cup\{0\}\). Let \((t_{i})\in\ell_{1}\). If \(t_{i}=0\) for all \(i\in\mathbb{N}\) then obviously \(\sum_{i=1}^{\infty}t_{i}x^{(i)}\in AOS(\mathcal{I})\cup\{0\}\). Suppose that \(t_{i_{0}}\neq 0\) for some \(i_{0}\in\mathbb{N}\). Then for any \(j\in C\) and \(n=e_{D_{i_{0}}}(j)\) we have \[\left|n\left(\sum_{i=1}^{\infty}t_{i}x^{(i)}(n)\right)\right| =\left|e_{D_{i_{0}}}(j)\left(\sum_{i=1}^{\infty}t_{i}x^{(i)}(e_{D _{i_{0}}}(j))\right)\right|\] \[=\left|e_{D_{i_{0}}}(j)t_{i_{0}}x^{(i_{0})}(e_{D_{i_{0}}}(j))\right|\] \[=\left|e_{D_{i_{0}}}(j)t_{i_{0}}b_{j}\right|\] \[\geq\left|jt_{i_{0}}b_{j}\right|=\left|jt_{i_{0}}\cdot\frac{1/j}{ \|(a_{k})\|}\right|=\frac{|t_{i_{0}}|}{\|(a_{k})\|}>0.\] Since \(\{e_{D_{i_{0}}}(j):j\in C\}\notin\mathcal{I}\), we obtain that the sequence \(\big{(}n\left(\sum_{i=1}^{\infty}t_{i}x^{(i)}(n)\right)\big{)}_{n}\) is not \(\mathcal{I}\)-convergent to zero, hence it belongs to \(AOS(\mathcal{I})\). **Corollary 4.7**.: _If an ideal \(\mathcal{I}\) is not tall, then_ 1. \(AOS(\mathcal{I})\neq\emptyset\)_,_ 2. \(AOS(\mathcal{I}^{*})\neq\emptyset\)_,_ 3. \(AOS(\mathcal{I})\) _is_ \(\mathfrak{c}\)_-lineable,_ 4. \(AOS(\mathcal{I}^{*})\) _is_ \(\mathfrak{c}\)_-lineable,_ 5. \(AOS(\mathcal{I})\) _is spaceable,_ 6. \(AOS(\mathcal{I}^{*})\) _is spaceable._ Proof.: Let \(A\subseteq\mathbb{N}\) be an infinite set which does not contain an infinite subsets from \(\mathcal{I}\). Let \(D_{n}\subseteq A\), \(n\in\mathbb{N}\), be pairwise disjoint infinite sets. Take any \(C\in\mathcal{I}^{+}\cap\mathcal{I}_{1/n}\). Then \(C\) is infinite, so \(\{e_{D_{n}}(i):i\in C\}\) is an infinite subset of \(A\), hence it belongs to \(\mathcal{I}^{+}\). Now, Theorem 4.6 finishes the proof. **Corollary 4.8**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\) such that_ * _there exists an infinite partition of_ \(\mathbb{N}\) _into sets from_ \(\mathcal{I}^{+}\)_,_ * _for each_ \(B\in\mathcal{I}^{+}\) _there exists_ \(D\subseteq B\) _such that_ \(D\in\mathcal{I}^{+}\) _and_ \[\forall A\subseteq\mathbb{N}\ (A\in\mathcal{I}\iff\{e_{D}(i):i\in A\}\in \mathcal{I})\] _(i.e. the bijection_ \(e_{D}\) _witnesses the fact that the ideals_ \(\mathcal{I}\) _and_ \(\mathcal{I}\upharpoonright D\) _are isomorphic)._ _Then the following conditions are equivalent._ 1. \(AOS(\mathcal{I})\neq\emptyset\)_._ 2. \(AOS(\mathcal{I}^{*})\neq\emptyset\)_._ 3. \(AOS(\mathcal{I})\) _is_ \(\mathfrak{c}\)_-lineable._ 4. \(AOS(\mathcal{I}^{*})\) _is_ \(\mathfrak{c}\)_-lineable._ 5. \(AOS(\mathcal{I})\) _is spaceable._ 6. \(AOS(\mathcal{I}^{*})\) _is spaceable._ 7. \(\mathcal{I}_{1/n}\cap\mathcal{I}^{+}\neq\emptyset\)_._ Proof.: The equivalence of (1), (2), (3), (4) and (7) is due to Theorem 4.3. It is known ([20, Theorem I-1], see also [19] or [13]) that every infinite-dimensional Banach space has dimension at least \(\mathfrak{c}\), so we obtain the implications (6)\(\implies\)(4) and (5)\(\implies\)(3). By \(AOS(\mathcal{I})\subseteq AOS(\mathcal{I}^{*})\) we get (5)\(\implies\)(6). Thus, it suffices to show (7)\(\implies\)(5). Let \(C\in\mathcal{I}^{+}\cap\mathcal{I}_{1/n}\). Let \(B_{n}\in\mathcal{I}^{+}\), \(n\in\mathbb{N}\), be an infinite partition of \(\mathbb{N}\). For every \(n\in\mathbb{N}\), we take \(D_{n}\subseteq B_{n}\) such that \(D_{n}\in\mathcal{I}^{+}\) and \(e_{D_{n}}\) witnesses the fact that \(\mathcal{I}\) and \(\mathcal{I}\upharpoonright D_{n}\) are isomorphic. Since \(C\in\mathcal{I}^{+}\), we obtain that the set \(\{e_{D_{n}}(i):i\in C\}\in\mathcal{I}^{+}\) for each \(n\in\mathbb{N}\). Now Theorem 4.6 finishes the proof. The first assumption of Corollary 4.8 can be characterized in terms of maximal ideals (see Proposition 4.9), which in turn can be used to show (see Proposition 4.10) that this assumption is valid for most ideals used in the literature (e.g. for all Borel ideals). **Proposition 4.9** ([6, Lemma 1.3]).: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). Then the following conditions are equivalent._ 1. _There exists an infinite partition of_ \(\mathbb{N}\) _into sets from_ \(\mathcal{I}^{+}\)_._ 2. \(\mathcal{I}\) _is not equal to the intersection of finitely many maximal ideals._ **Proposition 4.10**.: _Let \(\mathcal{I}\) be an ideal on \(\mathbb{N}\). If \(\mathcal{I}\) has the Baire property or is Lebesgue measurable (in particular, if it is a Borel ideal), then there exists an infinite partition of \(\mathbb{N}\) into sets from \(\mathcal{I}^{+}\)._ Proof.: Let \(\mathcal{I}\) be an ideal with the Baire property (Lebesgue measurable, resp.). In view of Proposition 4.9, we only need to show that \(\mathcal{I}\) is not the intersection of finitely many maximal ideals. In [25, Proposition 23] ([25, Proposition 7], resp), the authors proved that the intersection of countably many ideals without the Baire property (Lebesgue nonmeasurable, resp.) does not have the Baire property (is not Lebesgue measurable, resp.). Since maximal ideals do not have the Baire property and are not Lebesgue measurable (see e.g. [5, Theorem 4.1.1]), we obtain that \(\mathcal{I}\) is not the intersection of countably many (in particular, finitely many) maximal ideals. Below, we show two examples of tall ideals which satisfy assumptions of Corollary 4.8. **Example 4.11** (Hindman ideal).: A set \(A\subseteq\mathbb{N}\) is an _IP-set_ if there exists an infinite set \(D\subseteq\mathbb{N}\) such that \(\operatorname{FS}(D)\subseteq A\) where \(\operatorname{FS}(D)\) denotes the set of all finite non-empty sums of distinct elements of \(D\). It follows from Hindman's theorem ([14], see also [12]) that if \(A\cup B\) is an IP-set, then \(A\) or \(B\) is an IP-set as well. Thus, the family \[\mathcal{I}_{IP}=\{A\subseteq\mathbb{N}:A\text{ is not an IP-set}\}\] is an ideal on \(\mathbb{N}\). The ideal \(\mathcal{I}_{IP}\) is coanalytic as \(\mathcal{I}_{IP}^{+}\) is a projection on the first coordinate of a closed set \[B=\{(A,D)\in\{0,1\}^{\mathbb{N}}\times[\mathbb{N}]^{\omega}:\operatorname{FS} (D)\subseteq A\},\] where \([\mathbb{N}]^{\omega}\) is a set of all infinite subsets of \(\mathbb{N}\) (which is a \(G_{\delta}\) subset of \(\{0,1\}^{\mathbb{N}}\), hence the Polish space). Consequently, \(\mathcal{I}_{IP}\) has the Baire property, so by Proposition 4.10, the ideal \(\mathcal{I}_{IP}\) satisfies the first assumption of Corollary 4.8. The fact that \(\mathcal{I}_{IP}\) satisfies the second assumption of Corollary 4.8 follows from the proof of [11, Theorem 4.5]. **Example 4.12** (van der Waerden ideal).: A set \(A\subseteq\mathbb{N}\) is an _AP-set_ if \(A\) contains arithmetic progressions of arbitrary finite length. It follows from van der Waerden's theorem ([26], see also [12]) that if \(A\cup B\) is an AP-set, then \(A\) or \(B\) is an AP-set as well. Thus, the family \[\mathcal{I}_{AP}=\{A\subseteq\mathbb{N}:A\text{ is not an AP-set}\}\] is an ideal on \(\mathbb{N}\). One can show, that this is a Borel ideal (even \(F_{\sigma}\) ideal). Indeed, \(\mathcal{I}_{AP}\) is \(F_{\sigma}\) as \(\mathcal{I}_{AP}^{+}\) is \(G_{\delta}\) and that is true because \(\mathcal{I}_{AP}^{+}=\bigcap_{n=1}^{\infty}\bigcup_{k=1}^{\infty}\bigcup_{r=1 }^{\infty}A_{n,k,r}\), where \(A_{n,k,r}=\{A\subseteq\mathbb{N}:\{k,k+r,k+2r,\ldots,k+nr\}\subseteq A\}\) is a basic open set. Therefore, by Proposition 4.10, the ideal \(\mathcal{I}_{AP}\) satisfies the first assumption of Corollary 4.8. The fact that \(\mathcal{I}_{AP}\) satisfies the second assumption of Corollary 4.8 follows from the proof of [11, Theorem 3.3]. **Theorem 4.13**.: _If \(\mathcal{I}\) is an ideal on \(\mathbb{N}\) such that there exists a set \(C\in\mathcal{I}^{+}\cap\mathcal{I}_{1/n}\) such that there exists an infinite partition of \(C\) into sets from \(\mathcal{I}^{+}\), then_ 1. \(AOS(\mathcal{I})\neq\emptyset\)_,_ 2. \(AOS(\mathcal{I}^{*})\neq\emptyset\)_,_ 3. \(AOS(\mathcal{I})\) _is_ \(\mathfrak{c}\)_-lineable,_ 4. \(AOS(\mathcal{I}^{*})\) _is_ \(\mathfrak{c}\)_-lineable,_ 5. \(AOS(\mathcal{I})\) _is spaceable,_ 6. \(AOS(\mathcal{I}^{*})\) _is spaceable._ Proof.: Let \((b_{n})\) be defined as in the proof of Theorem 4.6 and then proceed as in the proof of Theorem 4.4. Let \(C\in\mathcal{I}_{1/n}\) be an infinite set. Let \(\mathcal{I}\) be an ideal such that \(\mathcal{I}\upharpoonright C\) is a Borel ideal and \(\mathcal{I}\upharpoonright(\mathbb{N}\setminus C)\) is a maximal ideal. Then the ideal \(\mathcal{I}\) satisfies the assumption of the above theorem, but it does not satisfy the assumption of Theorem 4.4. **Question 1**.: Is \(AOS(\mathcal{I})\neq\emptyset\) (\(AOS(\mathcal{I}^{*})\neq\emptyset\), resp.) a necessary and sufficient condition for \(AOS(\mathcal{I})\) (\(AOS(\mathcal{I}^{*})\), resp.) to be spaceable for each ideal \(\mathcal{I}\)? **Question 2**.: Does there exist an ideal \(\mathcal{I}\) such that \(AOS(\mathcal{I}^{+})\neq\emptyset\) is spaceable? ### Algebrability **Theorem 4.14**.: _Let \((a_{k})\) be a sequence tending to infinity and \((m_{k})\) be an increasing sequence of positive integers such that_ \[m_{k}\geq k^{a_{k}}\quad\text{for each $k$}.\] _If \(M=\{m_{k}:k\in\mathbb{N}\}\) and_ 1. \(M\in\mathcal{I}^{+}\)_, then_ \(AOS(\mathcal{I})\) _and_ \(AOS(\mathcal{I}^{*})\) _are strongly_ \(\mathfrak{c}\)_-algebrable;_ 2. \(M\in\mathcal{I}^{*}\)_, then_ \(AOS(\mathcal{I}^{+})\) _is strongly_ \(\mathfrak{c}\)_-algebrable._ Proof.: Since \(AOS(\mathcal{I})\subseteq AOS(\mathcal{I}^{*})\), strong \(\mathfrak{c}\)-algebrability of \(AOS(\mathcal{I}^{*})\) will follow from strong \(\mathfrak{c}\)-algebrability of \(AOS(\mathcal{I})\). Below we simultaneously prove strong \(\mathfrak{c}\)-algebrability of \(AOS(\mathcal{I})\) and \(AOS(\mathcal{I}^{+})\). Let \((a_{k})\) and \(M\in\mathcal{I}^{+}\) (\(M\in\mathcal{I}^{*}\), resp.) satisfy the assumptions of the theorem. Let \(\Lambda\subseteq(1,2)\) be a linearly independent set over rationals with \(|\Lambda|=\mathfrak{c}\). For every \(\alpha\in\Lambda\), we define a sequence \(a^{(\alpha)}\) by \[a^{(\alpha)}(n)=\begin{cases}\frac{1}{k^{\alpha}}&\text{for $n=m_{k}\in M$},\\ \frac{1}{n^{\alpha}}&\text{for $n\not\in M$}.\end{cases}\] Note that \(a^{(\alpha)}\in\ell_{1}\) because \[\sum_{n=1}^{\infty}a^{(\alpha)}(n)\leq\sum_{k=1}^{\infty}\frac{1}{k^{\alpha}}+ \sum_{n=1}^{\infty}\frac{1}{n^{\alpha}}<\infty.\] Moreover, \(a^{(\alpha)}\in AOS(\mathcal{I})\) (\(a^{(\alpha)}\in AOS(\mathcal{I}^{+})\), resp.) as for every \(n=m_{k}\in M\) we have \[na^{(\alpha)}(n)=m_{k}a^{(\alpha)}(m_{k})=\frac{m_{k}}{k^{\alpha}}\geq k^{a_{k}- \alpha}\geq k^{a_{k}-2},\] which tends to infinity. Using [2, Fact 1.2], we know that in order to show strong \(\mathfrak{c}\)-algebrability of \(AOS(\mathcal{I})\) (\(AOS(\mathcal{I}^{+})\), resp.), it is enough to prove that \[P(a^{(\alpha_{1}},\ldots,a^{(\alpha_{q})})\in AOS(\mathcal{I})\ (\in AOS( \mathcal{I}^{+})\text{, resp.})\] for any pairwise distinct \(\alpha_{1},\ldots,\alpha_{q}\in\Lambda\) and any polynomial \[P(x_{1},\ldots,x_{q})=\sum_{i=1}^{p}c_{i}x_{1}^{\beta_{i,1}}\ldots x_{q}^{ \beta_{i,q}},\] where \(c_{i}\) are nonzero reals and \([\beta_{i,j}]\) is a matrix of nonnegative integers with pairwise distinct, nonzero rows. First, observe that for any \(m_{k}\in M\) we have \[P(a^{(\alpha_{1})},\ldots,a^{(\alpha_{q})})(m_{k})=\sum_{i=1}^{p}c_{i}k^{-( \alpha_{1}\beta_{i,1}+\ldots+\alpha_{q}\beta i,q)}=\sum_{i=1}^{p}c_{i}k^{-r_{ i}},\] where \(r_{i}=\alpha_{1}\beta_{i,1}+\ldots+\alpha_{q}\beta i,q\). Since \(\Lambda\) is linearly independent, all \(r_{i}\) are positive and pairwise distinct. We may assume that \(r_{1}=\min\{r_{1},\ldots,r_{p}\}\). Then \[\left|m_{k}P(a^{(\alpha_{1})},\ldots,a^{(\alpha_{q})})(m_{k})\right| =m_{k}\left|\sum_{i=1}^{p}c_{i}k^{-r_{i}}\right|\] \[=m_{k}\left|c_{1}k^{-r_{1}}\right|\cdot\left|1+\sum_{i=2}^{p} \frac{c_{i}}{c_{1}}\cdot k^{-r_{i}+r_{1}}\right|\] \[\geq k^{a_{k}}\left|c_{1}k^{-r_{1}}\right|\cdot\left|1+\sum_{i=2} ^{p}\frac{c_{i}}{c_{1}}\cdot k^{-r_{i}+r_{1}}\right|\] \[=\left|c_{1}\right|\cdot k^{a_{k}-r_{1}}\cdot\left|1+\sum_{i=2} ^{p}\frac{c_{i}}{c_{1}}\cdot\frac{1}{k^{r_{i}-r_{1}}}\right|\xrightarrow[]{k \rightarrow\infty}\infty,\] because \(r_{i}-r_{1}>0\) for each \(i\geq 2\) and \(a_{k}\) tends to infinity. Since \(M\in\mathcal{I}^{+}\) (\(M\in\mathcal{I}^{*}\), resp.), we conclude that \(P(a^{\alpha_{1}},\ldots,a^{\alpha_{q}})\in AOS(\mathcal{I})\) (\(\in AOS(\mathcal{I}^{+})\), resp.). The ideal \(\mathcal{I}=\{A\subseteq\mathbb{N}:A\cap\{n^{n}:n\in\mathbb{N}\}\text{ is finite}\}\) satisfies the assumptions of the above theorem, so \(AOS(\mathcal{I})\), \(AOS(\mathcal{I}^{*})\) and \(AOS(\mathcal{I}^{+})\) are strongly \(\mathfrak{c}\)-algebrable. However, the nonemptiness of \(AOS(\mathcal{I})\), \(AOS(\mathcal{I}^{*})\) or \(AOS(\mathcal{I}^{+})\) does not guarantee even 1-algebrability of these sets. **Proposition 4.15**.: _There exists an ideal \(\mathcal{I}\) such that \(AOS(\mathcal{I})\neq\emptyset\), \(AOS(\mathcal{I}^{+})\neq\emptyset\) and \(AOS(\mathcal{I}^{*})\neq\emptyset\) but neither \(AOS(\mathcal{I})\) nor \(AOS(\mathcal{I}^{+})\) nor \(AOS(\mathcal{I}^{*})\) is 1-algebrable._ Proof.: For a set \(B=\{n^{2}:n\in\mathbb{N}\}\), we define a summable ideal \[\mathcal{I}=\left\{A\subseteq\mathbb{N}:\sum_{n\in A\cap B}\frac{1}{\sqrt{n}} <\infty\right\}.\] Since \(B\in\mathcal{I}^{*}\cap\mathcal{I}_{1/n}\), the sets \(AOS(\mathcal{I}^{+})\), \(AOS(\mathcal{I})\) and \(AOS(\mathcal{I}^{*})\) are nonempty by Proposition 4.1. Now, we show that \(AOS(\mathcal{I}^{*})\) does not contain any subalgebra generated by a singleton. This will finish the proof as \(AOS(\mathcal{I})\subseteq AOS(\mathcal{I}^{*})\) and \(AOS(\mathcal{I}^{+})\subseteq AOS(\mathcal{I}^{*})\). Take any \(a=(a_{n})\in AOS(\mathcal{I}^{*})\). Then \(C=\{n\in\mathbb{N}:|a_{n}|\leq 1/\sqrt{n}\}\in\mathcal{I}^{*}\) as otherwise \(B\backslash C\in\mathcal{I}^{+}\) and consequently \[\sum_{n\in\mathbb{N}}|a_{n}|\geq\sum_{n\in B\backslash C}\frac{1}{\sqrt{n}}=\infty,\] a contradiction with \(a_{n}\in\ell_{1}\). Now, consider the polynomial \(P(x)=x^{3}\). Then for any \(n\in C\) we have \[|na_{n}^{3}|\leq\frac{n}{(\sqrt{n})^{3}}=\frac{1}{\sqrt{n}},\] which tends to zero, thus \(P(a_{n})\not\in AOS(\mathcal{I}^{*})\).
2305.14014
CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model
Pre-trained vision-language models~(VLMs) are the de-facto foundation models for various downstream tasks. However, scene text recognition methods still prefer backbones pre-trained on a single modality, namely, the visual modality, despite the potential of VLMs to serve as powerful scene text readers. For example, CLIP can robustly identify regular (horizontal) and irregular (rotated, curved, blurred, or occluded) text in images. With such merits, we transform CLIP into a scene text reader and introduce CLIP4STR, a simple yet effective STR method built upon image and text encoders of CLIP. It has two encoder-decoder branches: a visual branch and a cross-modal branch. The visual branch provides an initial prediction based on the visual feature, and the cross-modal branch refines this prediction by addressing the discrepancy between the visual feature and text semantics. To fully leverage the capabilities of both branches, we design a dual predict-and-refine decoding scheme for inference. We scale CLIP4STR in terms of the model size, pre-training data, and training data, achieving state-of-the-art performance on 11 STR benchmarks. Additionally, a comprehensive empirical study is provided to enhance the understanding of the adaptation of CLIP to STR. We believe our method establishes a simple yet strong baseline for future STR research with VLMs.
Shuai Zhao, Ruijie Quan, Linchao Zhu, Yi Yang
2023-05-23T12:51:20Z
http://arxiv.org/abs/2305.14014v3
# CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model ###### Abstract Pre-trained vision-language models are the de-facto foundation models for various downstream tasks. However, this trend has not extended to the field of scene text recognition (STR), despite the potential of CLIP to serve as a powerful scene text reader. CLIP can robustly identify regular (horizontal) and irregular (rotated, curved, blurred, or occluded) text in natural images. With such merits, we introduce CLIP4STR, a simple yet effective STR method built upon image and text encoders of CLIP. It has two encoder-decoder branches: a visual branch and a cross-modal branch. The visual branch provides an initial prediction based on the visual feature, and the cross-modal branch refines this prediction by addressing the discrepancy between the visual feature and text semantics. To fully leverage the capabilities of both branches, we design a dual predict-and-refine decoding scheme for inference. CLIP4STR achieves new state-of-the-art performance on 11 STR benchmarks. Additionally, a comprehensive empirical study is provided to enhance the understanding of the adaptation of CLIP to STR. We believe our method establishes a simple but strong baseline for future STR research with VL models. ## 1 Introduction Vision-language (VL) models pre-trained on web-scale data like CLIP (Radford et al., 2021) and ALIGN (Jia et al., 2021) show remarkable zero-shot capacity across various tasks and datasets. Many researchers transfer the knowledge from pre-trained VL models to downstream tasks and make non-trivial progress, _e.g_., visual question answering (Song et al., 2022), image manipulation (Patashnik et al., 2021; Kim et al., 2022), information retrieval (Luo et al., 2022), referring expression comprehension (Subramanian et al., 2022), and image captioning (Hessel et al., 2021). Currently, the VL model is widely recognized as a foundational model and an important component of artificial intelligence (Fei et al., 2022; Yu et al., 2022). Scene text recognition (STR) is a critical technique and an essential process in many vision and language applications, _e.g_., document analysis, autonomous driving, and augmented reality. Similar to the aforementioned cross-modal tasks, STR involves two different modalities: image and text. However, unlike the popularity of VL models in other cross-modal tasks, STR methods still tend to Figure 1: Zero-shot classification results of CLIP (ViT-B/32) for images with text stickers. The attention map is calculated via Grad-CAM (Selvaraju et al., 2020). CLIP can perceive and understand regular and irregular text in images. It is potentially a powerful scene text reader. rely on backbones pre-trained on single-modality data [12, 13, 14, 15]. In this work, we show that VL models pre-trained on image-text pairs possess strong scene text perception abilities, making them superior choices as STR backbones. STR methods often struggle with irregular text like rotated, curved, blurred, or occluded text [16, 17]. However, irregular text is prevalent in real-life scenarios [15, 16], making it necessary for STR models to effectively handle these challenging cases. Interestingly, we observe that VL models can robustly perceive irregular text in natural images. In Figure 1, we put different text stickers on a natural image and use CLIP to classify it2. It is evident that CLIP pays significant attention to the text sticker and accurately determines the meaning of the word, regardless of text variations3. CLIP showcases an exceptional capability to perceive and comprehend text in natural images, precisely the qualities we seek in a robust backbone for STR. Footnote 2: The class categories are from CIFAR-10 [14]. The experiment is inspired by [13]. Footnote 3: This phenomenon, where CLIP focuses on text while disregarding the natural object, is also known as _typographic attacks_[12]. Neurons in CLIP image encoders can simultaneously perceive both visual and text signals associated with the same concept, such as an image or typographic text of Spiderman. This ability may stem from the training images containing scene texts. In this study, we introduce CLIP4STR, a simple yet effective STR framework that leverages the scene text perception capabilities of CLIP. CLIP4STR consists of two encoder-decoder branches: the visual branch and the cross-modal branch. The image and text encoders inherit from CLIP, while the decoders employ the transformer decoder [15]. To enable the decoder to delve into word structures, we incorporate the permuted sequence modeling technique proposed by PARSeq [1]. This allows the decoder to perform sequence modeling of characters in arbitrary orders without relying on specific sequence order assumptions. During training, the visual branch provides an initial prediction based on the visual feature, which is then refined by the cross-modal branch to address possible discrepancies between the visual feature and text semantics of the prediction. The cross-modal branch functions as a semantic-aware spell checker, similar to modern STR methods [12, 13]. For inference, we design a dual predict-and-refine decoding scheme to fully utilize the capabilities of both encoder-decoder branches for improved character recognition. CLIP4STR achieves state-of-the-art performance on 11 commonly used STR benchmarks, encompassing both regular and irregular text. Additionally, we present a comprehensive empirical study on adapting CLIP to STR. We believe CLIP4STR provides a simple but strong baseline for future STR research with VL models. ## 2 Related Work Vision-Language Pre-TrainingThe large-scale pre-trained vision-language model demonstrates excellent generalization abilities and possesses many fascinating attributes [1, 13, 14]. Its derivative works attract significant attention in various areas [15, 16, 17, 18, 19, 20, 21, 22]. Since CLIP, different vision-language pre-training methods and large-scale image-text datasets emerge, such as Florence [16], ERNIEViLG [15], OFA [20], DeCLIP [13], FILIP [21], ALBEF [13], COYO-700M [14], LAION-5B [15]. The vision-language pre-trained model is generally considered as one type of foundation models [12, 14]. Scene Text RecognitionScene text recognition methods can be broadly divided into two categories: _context-free_ and _context-aware_. Context-free STR methods only utilize the visual features of images, such as CTC-based [10] methods [11, 12, 13], segmentation-based methods [14, 15, 16], and attention-based methods with an encoder-decoder mechanism [16, 17]. Since context-free STR methods lack the understanding of text semantics, they are less robust against occluded or incomplete text. Context-aware STR methods are the mainstream approaches now, leveraging text semantics to enhance recognition performance For example, ABINet [12], LevOCR [13], MATRN [15], and TrOCR [13] incorporate an external language model to capture text semantics. Other methods achieve sim ilar goals with built-in modules, such as RNN Lee and Osindero (2016), transformer Sheng et al. (2019); Bautista and Atienza (2022). ## 3 Method ### Preliminary Before explaining our method in Figure 2, we first introduce CLIP Radford et al. (2021) and the permuted sequence modeling technique proposed by PARSeq Bautista and Atienza (2022). ClipCLIP consists of a text encoder and an image encoder. CLIP is pre-trained on 400 million image-text pairs using contrastive learning. The text and image features from CLIP are aligned in a joint image-text embedding space. **The text encoder** of CLIP is a transformer encoder Vaswani et al. (2017); Devlin et al. (2019). The text tokenizer is a lower-cased byte pair encoding - BPE Sennrich et al. (2016) with vocabulary size 49 152. The beginning and end of the text sequence are padded with [SOS] and [EOS] tokens, respectively. Initially, CLIP text encoder only returns the feature of the [EOS] token, but in this work, we return features of all tokens. These features are further normalized and linearly projected into the joint image-text embedding space. **The image encoder** of CLIP is a vision transformer (ViT) Dosovitskiy et al. (2021). Given an image, ViT introduces a visual tokenizer (convolution) to convert non-overlapped image patches into a discrete sequence. A [CLASS] token is then prepended to the beginning of the image sequence. Initially, CLIP image encoder only returns the feature of the [CLASS] token, but in this work, we return features of all tokens. These features are also normalized and linearly projected into the joint image-text embedding space. Generally, we use a ViT-B/16 (patch size 16\(\times\)16) as the image encoder. Permuted sequence modelingTraditionally, STR methods use a left-to-right or right-to-left order to model word sequences Fang et al. (2021). However, the characters in a word do not strictly follow such directional dependencies. For instance, to predict the letter "o" in the word "model", it is sufficient to consider only the context "m_de" rather than relying solely on the left-to-right context "m_" or the right-to-left context "led_". The dependencies between characters in a word can take various forms. To encourage the STR method to explore these structural relationships within words, PARSeq Bautista and Atienza (2022) introduces a permuted sequence modeling technique. This technique generates random dependency relationships between the input context and the output by using a random attention mask \(\mathcal{M}\) during attention operations Vaswani et al. (2017). Table 1 illustrates three examples of mask \(\mathcal{M}\). We will delve further into this mechanism in Section 3.3. ### Encoder The framework of CLIP4STR is illustrated in Figure 2. CLIP4STR employs a dual encoder-decoder design, consisting of a visual branch and a cross-modal branch. The text and image encoders utilize the architectures and pre-trained weights from \begin{table} \begin{tabular}{c c CLIP. The visual branch generates an initial prediction based on the visual features extracted by the image encoder. Subsequently, the cross-modal branch refines the initial prediction by addressing the discrepancy between the visual features and the textual semantics of the prediction. Since the image and text features are aligned in a joint image-text embedding space during pre-training, it becomes easy to identify this discrepancy. The cross-modal branch acts as a semantic-aware spell checker. It is worth noting that the text encoder is partially frozen, which is a common practice in transfer learning of large language models (Alayrac et al., 2022). This freezing operation retains the learned text understanding ability of the language model and reduces computational costs during training. In contrast, the visual branch is fully trainable due to the domain gap between STR data (cropped word images) and CLIP training data (collected from the web, often natural images). Additionally, we block the gradient flow from the cross-modal decoder to the visual encoder to enable autonomous learning of the visual branch, resulting in improved refined cross-modal predictions. For the text encoder \(g(\cdot)\) and the image encoder \(h(\cdot)\), given the input text \(\mathbf{t}\) and image \(\mathbf{x}\), the text, image, and cross-modal features are computed as: \[\mathbf{F}_{t} =g(\mathbf{t})\in\mathbb{R}^{L_{t}\times D}, \tag{1}\] \[\mathbf{F}_{i} =h(\mathbf{x})\in\mathbb{R}^{L_{i}\times D},\] (2) \[\mathbf{F}_{c} =[\mathbf{F}_{i}^{T}\ \mathbf{F}_{t}^{T}]^{T}\in\mathbb{R}^{L_{c}\times D}, \tag{3}\] where \(L_{t}\) represents the text sequence length, \(L_{i}\) is the sequence length of image tokens, \(D\) denotes the dimension of the joint image-text embedding space, and the cross-modal sequence length \(L_{c}=L_{i}+L_{t}\). ### Decoder The decoder aims to extract the character information from the visual feature \(\mathbf{F}_{i}\) or cross-modal feature \(\mathbf{F}_{c}\). The decoder framework is shown in Figure 3. It adopts the design of the transformer decoder (Vaswani et al., 2017). Additionally, we apply the permuted sequence modeling technique proposed by PARSeq (Bautista and Atienza, 2022), enabling a predicted character to have arbitrary dependencies on the input context during training. The visual and cross-modal decoders have the same architecture but differ in the input. They receive the following inputs: a learnable position query \(\mathbf{p}\in\mathbb{R}^{N\times D}\), a learnable input context \(\mathbf{c}\in\mathbb{R}^{N\times D}\), and a randomly generated attention mask \(\mathcal{M}\in\mathbb{R}^{N\times N}\). \(N\) represents the length of characters. The decoder outputs the prediction \(\mathbf{y}\in\mathbb{R}^{N\times C}\), where \(C\) is the number of character classes. The decoding stage can be denoted as \[\mathbf{y}=\texttt{DEC}(\mathbf{p},\mathbf{c},\mathcal{M},\mathbf{F}). \tag{4}\] The first Multi-Head Attention (MHA) in Figure 3 performs context-position attention: \[\mathbf{m}_{1}=\texttt{softmax}(\frac{\mathbf{p}\mathbf{c}^{T}}{\sqrt{D}}+\mathcal{M})\bm {c}+\mathbf{p}. \tag{5}\] The second MHA focuses on feature-position attention: \[\mathbf{m}_{2}=\texttt{softmax}(\frac{\mathbf{m}_{1}\mathbf{F}^{T}}{\sqrt{D}})\mathbf{F}+\mathbf{ m}_{1}. \tag{6}\] For simplicity, we ignore the input and output linear transformations in the attention operations of Eq. (5) and Eq. (6). Then \(\mathbf{m}_{2}\in\mathbb{R}^{N\times D}\) is used for the final prediction \(\mathbf{y}\): \[\mathbf{y}=\texttt{Linear}(\texttt{MLP}(\mathbf{m}_{2})+\mathbf{m}_{2}). \tag{7}\] During training, the output of the decoder depends on the input context in an arbitrary manner. This encourages the decoder to analyze the word structure beyond the traditional left-to-right or right-to-left sequence modeling assumptions (Fang et al., 2021). The inclusion of a random attention Figure 3: The decoder of CLIP4STR. [B], [E], and [P] are the beginning, end, and padding tokens, respectively. Layer normalization (Ba et al., 2016) and dropout (Srivastava et al., 2014) are ignored. mask \(\mathcal{M}\) in Eq.(5) enables this capability (Bautista and Atienza, 2022). Table 1 presents examples of generated attention masks, including a left-to-right auto-regressive (AR) mask, a cloze mask, and a random mask. Following PARSeq (Bautista and Atienza, 2022), we employ \(K=6\) masks per input context during training. The first two masks are left-to-right and right-to-left masks, and others are randomly generated. CLIP4STR is trained using the sum of cross-entropy losses (\(\texttt{CE}(\cdot)\)) of the visual branch and the cross-modal branch: \[\mathcal{L}=\texttt{CE}(\mathbf{y}^{i},\hat{\mathbf{y}})+\texttt{CE}(\mathbf{y},\hat{\mathbf{ y}}), \tag{8}\] where \(\hat{\mathbf{y}}\) represents the ground truth, \(\mathbf{y}^{i}\) is the prediction of the visual branch, and \(\mathbf{y}\) is the prediction of the cross-modal branch. **Decoding scheme** CLIP4STR consists of two branches: a visual branch and a cross-modal branch. To fully exploit the capacity of both branches, we design a _dual predict-and-refine_ decoding scheme for inference, inspired by previous STR methods (Fang et al., 2021; Bautista and Atienza, 2022). Algorithm 1 illustrates the decoding process. The visual branch initially performs autoregressive decoding, where the future output depends on previous predictions. Subsequently, the cross-modal branch addresses possible discrepancies between the visual feature and the textual semantics of the visual prediction, aiming to improve the results. This process is also autoregressive. Finally, the previous predictions are utilized as the input context for refining the output in a cloze-filling manner. The refinement process can be iterative. After iterative refinement, the output of the cross-modal branch serves as the final prediction. ``` Input: image \(\mathbf{x}\), image encoder \(h(\cdot)\) and decoder \(\texttt{Dec}^{i}(\cdot)\), text encoder \(g(\cdot)\), cross-modal decoder \(\texttt{Dec}^{c}(\cdot)\), AR mask \(\mathcal{M}^{a}\), cloze mask \(\mathcal{M}^{c}\), image and cross-modal position query \(\mathbf{p}^{i}\) and \(\mathbf{p}^{c}\), context \(\mathbf{c}=\mathbf{0}\in\mathbb{R}^{N\times D}\), char and text tokenizer \(\texttt{CTK}(\cdot)\) and \(\texttt{TTK}(\cdot)\), iterative refinement times \(T_{i}\) Output: prediction \(\mathbf{y}\) // \(\texttt{c}_{1,\cdot}\) denote the 1st row 1\(\mathbf{c}_{1,\cdot}\leftarrow\texttt{CTK}([\texttt{B}])\); 2\(\mathbf{F}_{i}\gets h(x)\); // autoregressive visual decode 3\(\mathbf{y}^{i}\gets 0\); 4for\(k\gets 1\)to\(N-1\)do 5\(\mathbf{y}^{i}_{k,\cdot}\leftarrow\texttt{Dec}^{i}(\mathbf{p}^{i}_{k,\cdot},\mathbf{c}_{1: k,\cdot},\mathcal{M}^{a}_{1:k,1:k},\mathbf{F}_{i})\); 6\(\mathbf{c}_{k+1,\cdot}\leftarrow\texttt{CTK}(\mathbf{y}^{i}_{k,\cdot})\); 7 8 end for // cross-modal decode 9\(\mathbf{F}_{c}\leftarrow[\mathbf{F}_{i}^{T}\,g(\texttt{TTK}(\mathbf{y}^{i}))^{T}]^{T}\); 10\(\mathbf{y}\leftarrow\mathbf{0}\); 11for\(k\gets 1\)to\(N-1\)do 12\(\mathbf{y}_{k,\cdot}\leftarrow\texttt{Dec}^{c}(\mathbf{p}^{c}_{k,\cdot},\mathbf{c}_{1:k, \cdot},\mathcal{M}^{a}_{1:k,1:k},\mathbf{F}_{c})\); 13\(\mathbf{c}_{k+1,\cdot}\leftarrow\texttt{CTK}(\mathbf{y}_{k,\cdot})\); 14 15 end for // refinement with cloze mask 16for\(k\gets 1\)to\(T_{i}\)do 17\(\mathbf{c}\leftarrow[\texttt{CTK}([\texttt{B}])^{T}\,\texttt{CTK}(\mathbf{y}^{i}_{1:N-1,\cdot})^{T}]^{T}\); 18\(\mathbf{y}^{i}\leftarrow\texttt{Dec}^{i}(\mathbf{p}^{i},\mathbf{c},\mathcal{M}^{c},\mathbf{F} _{i})\); 19\(\mathbf{F}_{c}\leftarrow[\mathbf{F}_{i}^{T}\,g(\texttt{TTK}(\mathbf{y}^{i}))^{T}]^{T}\); 20\(\mathbf{c}\leftarrow[\texttt{CTK}([\texttt{B}])^{T}\,\texttt{CTK}(\mathbf{y}_{1:N-1, \cdot})^{T}]^{T}\); 21\(\mathbf{y}\leftarrow\texttt{Dec}^{c}(\mathbf{p}^{c},\mathbf{c},\mathcal{M}^{c},\mathbf{F}_{c})\); 22 23 end for ``` **Algorithm 1**inference decoding scheme ## 4 Experiment ### Experimental Details **Training dataset** Previous studies (Baek et al., 2021; Bautista and Atienza, 2022) demonstrate that using real training data leads to better performance compared to commonly used synthetic data such as MJSynth (MJ, 9M samples) (Jaderberg et al., 2014) and SynthText (ST, 6.9M samples) (Gupta et al., 2016). In this work, we primarily utilize real data for training. Specifically, we use COCO-Text (COCO) (Veit et al., 2016), RCTW17 (Shi et al., 2017), Uber-Text (Uber) (Zhang et al., 2017), ArT (Chng et al., 2019), LSVT (Sun et al., 2019), MLT19 (Nayef et al., 2019), ReCTS (Zhang et al., 2019), TextOCR (Singh et al., 2021), Open Images (Krasin et al., 2017) annotations from the OpenVINO toolkit (Krylov et al., 2021). These real datasets have 3.3M images in total. **Test benchmarks** The evaluation benchmarks include IIIT5k (Mishra et al., 2012), CUTE80 (Rismuaman et al., 2014), Street View Text (SVT) (Wang et al., 2011), SVT-Perspective (SVTP) (Phan et al., 2013), ICDAR 2013 (IC13) (Karatzas et al., 2013), ICDAR 2015 (IC15) (Karatzas et al., 2015), and two occluded datasets - HOST and WOST (Wang et al., 2021). Additionally, we utilize 3 recent large benchmarks: COCO-Text (9.8k samples; low-resolution, occluded text) (Veit et al., 2016), ArT (35.1k samples; curved and rotated text) (Chng et al., 2019), and Uber-Text (80.6k samples; vertical and rotated text) (Zhang et al., 2017). **Learning strategies** We apply warm up and cosine learning rate decay policy. The learning rate is 8.4e-5 \(\times\)\(\frac{\text{batch size}}{512}\)(Goyal et al., 2017). For models trained from scratch, the learning rate will be multiplied by 19.0. In practice, we use a total batch size 1024. AdamW (Loshchilov and Hutter, 2019) optimizer is adopted with decoupled weight decay value 0.2. The training epochs are 16 for real data and 5 for synthetic data. All experiments are done with mixed precision (Micikevicius et al., 2018). **Data and label processing** RandAugment (Cubuk et al., 2020) excludes sharpness and invert is used with layer depth 3 and magnitude 5. The image size is 224\(\times\)224, and the patch size of the vision transformer is 16\(\times\)16. The sequence length of the text encoder is 16. The maximum length of the character sequence is 25. Considering an extra [B] or [E] token, we set \(N=26\). During training, the number of character classes \(C=94\), _i.e._, mixed-case alphanumeric characters and punctuation marks are recognized. During inference, we only use a lowercase alphanumeric charset, _i.e._, \(C=36\). The iterative refinement times \(T_{i}=1\). The evaluation metric is word accuracy. ### Comparison to state-of-the-art We compare CLIP4STR with previous state-of-the-art (SOTA) methods on 9 common STR benchmarks in Table 2. CLIP4STR surpasses the previous methods by a significant margin, achieving new SOTA performance on most of the benchmarks. Notably, CLIP4STR performs exceptionally well on irregular text datasets, such as IC15 (incidental scene text), SVTP (perspective scene text), CUTE (curved text line images), HOST (heavily occluded scene text), and WOST (weakly occluded scene text). This aligns with the examples shown in Figure 1 and supports our motivation for adapting CLIP as a scene text reader, as CLIP demonstrates robust identification of regular and irregular text. CLIP4STR exhibits excellent reading ability on occluded datasets, surpassing the previous SOTA by 5.1% in the best case on HOST. This ability can be attributed to the pre-trained text encoder and cross-modal decoder, which can infer missing characters using text semantics or visual features. In addition to the small-scale common benchmarks, we also evaluate CLIP4STR on three larger and more challenging recent benchmarks. These benchmarks primarily consist of irregular texts with various shapes, low-resolution images, rotation, _etc_. The results, shown in Table 3, further demonstrate \begin{table} \begin{tabular}{c|c|c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & Train & IIIT5k & SVT & IC13 & IC15 & IC15 & SVTP & CUTE & HOST & WOST \\ & data & 3,000 & 647 & 1,015 & 1,811 & 2,077 & 645 & 288 & 2,416 & 2,416 \\ \hline ASTER (Shi et al., 2019) & MJ+ST & 93.4 & 89.5 & – & 76.1 & – & 78.5 & 79.5 & – & – \\ SRN (Yu et al., 2020) & MJ+ST & 94.8 & 91.5 & – & 82.7 & – & 85.1 & 87.8 & – & – \\ TextScanner (Wan et al., 2020) & MJ+ST & 95.7 & 92.7 & 94.9 & – & 83.5 & 84.8 & 91.6 & – & – \\ SE-ASTER (Qiao et al., 2020) & MJ+ST & 93.8 & 89.6 & 92.8 & 80.0 & – & 81.4 & 83.6 & – & – \\ RCEED (Cui et al., 2021) & MJ+ST+B & 94.9 & 91.8 & – & – & 82.2 & 83.6 & 91.7 & – & – \\ TRBA (Back et al., 2021) & MJ+ST & 92.1 & 88.9 & – & 86.0 & – & 89.3 & 89.2 & – & – \\ VisionLAN (Wang et al., 2021) & MJ+ST & 95.8 & 91.7 & – & 83.7 & – & 86.0 & 88.5 & 50.3 & 70.3 \\ ABINet (Fang et al., 2021) & MJ+ST & 96.2 & 93.5 & – & 86.0 & – & 89.3 & 89.2 & – & – \\ ViTSTR-B (Atienza, 2021) & MJ+ST & 88.4 & 87.7 & 92.4 & 78.5 & 72.6 & 81.8 & 81.3 & – & – \\ LevOCR (Da et al., 2022) & MJ+ST & 96.6 & 92.9 & – & 86.4 & – & 88.1 & 91.7 & – & – \\ MATRN (Na et al., 2022) & MJ+ST & 96.6 & 95.0 & 95.8 & 86.6 & 82.8 & 90.6 & 93.5 & – & – \\ DiG-ViT-B (Yang et al., 2022) & MJ+ST & 96.7 & 94.6 & 96.9 & 87.1 & – & 91.0 & 91.3 & 74.9 & 82.3 \\ PARSeq (Bautista and Atienza, 2022) & MJ+ST & 97.0 & 93.6 & 96.2 & 86.5 & 82.9 & 88.9 & 92.2 & – & – \\ T\(\text{TOCR}_{large}\) (Li et al., 2023)\({}^{\dagger}\) & MJ+ST+B & 94.1 & 96.1 & 97.3 & 88.1 & 84.1 & 93.0 & 95.1 & – & – \\ DiG-ViT-B (Yang et al., 2022) & Real(2.8M) & 97.6 & 96.5 & 97.6 & 88.9 & – & 92.9 & 96.5 & 62.8 & 79.7 \\ ViTSTR-S (Atienza, 2021)\({}^{\sharp}\) & Real(3.3M) & 97.9 & 96.0 & 97.8 & 89.0 & 87.5 & 91.5 & 96.2 & 64.5 & 77.9 \\ ABINet (Fang et al., 2021)\({}^{\sharp}\) & Real(3.3M) & 98.6 & 98.2 & 98.0 & 90.5 & 88.7 & 94.1 & 97.2 & 72.2 & 85.0 \\ PARSeq (Bautista and Atienza, 2022) & Real(3.3M) & 99.1 & 97.9 & **98.4** & 90.7 & 89.6 & 95.7 & 98.3 & 74.4 & 85.4 \\ \hline CLIP4STR & MJ+ST & 97.2 & 94.6 & 97.0 & 87.6 & 84.4 & 91.2 & 95.1 & **79.5** & 87.0 \\ CLIP4STR & Real(3.3M) & **99.2** & **98.3** & 98.3 & **91.4** & **90.6** & **97.2** & **99.3** & **77.5** & **87.5** \\ \hline \hline \end{tabular} \end{table} Table 2: Word accuracy on 9 common benchmarks. The numbers of samples are also shown in the table. Benchmark datasets (**B**) - SVT, IIIT5k, IC13, and IC15. \(\dagger\) TrOCR uses pre-trained models and post-pretrained on 648M textlines from publicly available PDF files on the Internet. \(\sharp\) Reproduced by PARSeq (Bautista and Atienza, 2022). the strong generalization ability of CLIP4STR. It outperforms the previous SOTA methods substantially and achieves new SOTA performance on these large datasets. Once again, these results support our motivation that CLIP possesses robust scene text perception ability and serves as an effective scene text reader. ## 5 Empirical Study This section presents our empirical study on adapting CLIP to STR. The IC15 dataset used in this section consists of 2,077 samples. ### Evolution process of CLIP4STR The high performance of CLIP4STR, which is demonstrated in Table 2&3, surpasses that of other models on 11 STR benchmarks. However, the question remains: what is the source of this high performance? In order to shed light on this matter, we examine the evolution process of CLIP4STR as presented in Table 4. Initially, the baseline is a PARSeq (Bautista and Atienza, 2022) model without the permuted sequence modeling (PSM) technique. It has a structure similar to the visual encoder-decoder branch shown in Figure 2. The encoder is a ViT without pre-training. Then, we replace the image encoder with the CLIP image encoder. However, the improvement is trivial without adaptations. To unleash the potential of CLIP, we adjust the training recipe: using a 16\(\times\)16 patch size, a small learning rate for CLIP encoders, a relatively large learning rate for decoders, and fewer training epochs as described in Sec. 4.1. In contrast, the baseline uses a patch size of 4\(\times\)8, a large learning rate for all modules, and longer training epochs. The usage of CLIP makes the model converge easier and faster, so the training recipe should also change accordingly. At this point, we already achieve SOAT performance compared to previous methods. Finally, we add the cross-modal branch to the system. Although the performance is very high, the cross-modal branch brings a 0.4% improvement in the average accuracy on 9 benchmarks, demonstrating its effectiveness. ### Parameter freezing options In CLIP4STR, we freeze half of the layers in the CLIP text encoder, which is a common practice when transferring a large language model to new tasks (Alayrac et al., 2022). Table 5 illustrates the influence of different parameter freezing options. The results indicate that freezing the language model has a lesser impact compared to freezing the image model. Despite using the fixed pre-trained token embeddings of the CLIP text encoder, the system can still achieve satisfactory performance. This demonstrates that semantic understanding in scene text recognition is relatively easier compared to general language understanding. In scene text recognition, the texts mainly consist of words and phrases, which simplifies the task compared to the \begin{table} \begin{tabular}{c c c c c} \hline \hline Baseline & PSM & CLIP & Recipe & Cross & Avg. \\ \hline ✓ & & & & & 89.2 \\ ✓ & ✓ & & & & 89.9 \\ ✓ & ✓ & ✓ & & & 90.0 \\ ✓ & ✓ & ✓ & ✓ & & 90.8 \\ ✓ & ✓ & ✓ & ✓ & ✓ & **91.2** \\ \hline ABINet (Fang et al., 2021) & & & & 89.1 \\ \hline \hline \end{tabular} \end{table} Table 4: Evolution process of CLIP4STR. We start from a baseline and achieve SOTA performance with different components. Average accuracy on 9 benchmarks (14,315 samples) in Table 2 are presented. \begin{table} \begin{tabular}{c|c|c c c} \hline \hline Method & Train & COCO & ATT & Uber \\ & data & 9.825 & 35,149 & 80,551 \\ \hline VISTR-S (Atienza, 2021)\({}^{\sharp}\) & MP+ST & 56.4 & 66.1 & 37.6 \\ TRBA (Back et al., 2021)\({}^{\sharp}\) & MP+ST & 61.4 & 68.2 & 38.0 \\ ABINet (Fang et al., 2021)\({}^{\sharp}\) & MP+ST & 57.1 & 65.4 & 34.9 \\ PARSeq (2022) & MP+ST & 64.0 & 70.7 & 42.0 \\ CLIP4STR & MP+ST & **65.8** & **72.5** & **43.1** \\ \hline DIG-ViT-B (Yang et al., 2022) & Real(2.8M) & 75.8 & – & – \\ VISTR-S (Atienza, 2021)\({}^{\sharp}\) & Real(3.3M) & 73.6 & 81.0 & 78.2 \\ TRBA (Back et al., 2021)\({}^{\sharp}\) & Real(3.3M) & 77.5 & 82.5 & 81.2 \\ ABINet (Fang et al., 2021)\({}^{\sharp}\) & Real(3.3M) & 76.5 & 81.2 & 71.2 \\ PARSeq (2022) & Real(3.3M) & 79.8 & 84.5 & 84.1 \\ CLIP4STR & Real(3.3M) & **81.1** & **85.8** & **86.8** \\ \hline \hline \end{tabular} \end{table} Table 3: Word accuracy on 3 large benchmarks. The numbers of samples are also shown. \(\sharp\) Reproduced by PARSeq (Bautista and Atienza, 2022). \begin{table} \begin{tabular}{c c c|c c c c c} \hline \hline Frozen & Layers & \#Params & IC15 & WOST & HOST & COCO & Uber \\ Image & Text & & & & & & \\ \hline 0 & 0 & 149 M & **90.8** & 87.5 & 76.4 & 80.8 & **87.0** \\ 0 & 3 & 114 M & 90.4 & **88.1** & 76.9 & **81.2** & 86.8 \\ 0 & 6 & 104 M & 90.6 & 87.5 & **77.5** & 81.1 & 86.8 \\ 0 & 9 & 95 M & 90.3 & 86.8 & 74.9 & 80.9 & 86.3 \\ 0 & 12 & 86 M & 90.3 & 86.1 & 74.9 & 80.9 & 86.4 \\ 0 & token & 86 M & 90.7 & 87.3 & 77.0 & 80.9 & 86.7 \\ \hline 0 & 6 & 95 M & **90.6** & 87.5 & **77.5** & 81.1 & **86.8** \\ 3 & 6 & 84 M & 90.4 & **88.5** & 76.5 & **81.3** & 86.4 \\ 6 & 6 & 62 M & 89.5 & 86.7 & 72.8 & 80.3 & 83.8 \\ 9 & 6 & 41 M & 87.8 & 80.0 & 64.0 & 75.3 & 72.8 \\ 12 & 6 & 19 M & 61.2 & 55.8 & 40.4 & 49.5 & 20.6 \\ \hline \hline \end{tabular} \end{table} Table 5: Freezing options in CLIP4STR. #Params means the number of learnable parameters of encoders in CLIP4STR. One decoder in CLIP4STR has 4.3M parameters. token means we only use pre-trained token embeddings of CLIP text encoder as text features. general language case. On the other hand, freezing the image models has a significant impact on performance. The substantial domain gap between the scene text recognition data and the pre-trained data of the CLIP image encoder possibly contributes to this discrepancy. The CLIP image encoder is pre-trained on web images, which are primarily natural images. In contrast, the scene text recognition data comprises cropped word images. Such a disparity may necessitate a fully trainable image encoder in CLIP4STR to bridge the domain gap. ### Parameter-efficient adaptations In Table 5, we observe that even though we freeze a portion of the text encoders, the model still contains a substantial number of parameters. This raises the question: are there any other parameter-efficient adaptations that can be employed to adapt CLIP to STR? When performing parameter-efficient transfer learning for large pre-trained models, adapters Sung et al. (2022); Karimi Mahabadi et al. (2021) is a popular choice. In this work, we explore two adapters for CLIP in the context of scene text recognition: CLIP-Adapter Gao et al. (2021) and Ladder Side-Tuning (LST) Sung et al. (2022). CLIP-Adapter incorporates linear layers on top of the frozen CLIP and achieves significantly improved performance in few-shot classification compared to the frozen CLIP model. On the other hand, LST employs a ladder network to reduce the training budget and demonstrates competitive performance with fully fine-tuned large models across various vision-language tasks. We apply these two adapters to the visual encoder of CLIP, and a detailed description of their usage can be found in Sec. A.2 in the Appendix. The results of using the two adapters are presented in Table 6. CLIP-Adapter outperforms the frozen model but falls short of the performance achieved by the fully fine-tuned model. The addition of a few learnable parameters on top of the CLIP model alone is insufficient to bridge the domain gap between scene text data and the pre-training data of CLIP. On the other hand, LST achieves notably improved performance but still lags behind the fine-tuned model. However, when the parameters of LST are increased, it approaches the performance of the fine-tuned model. Overall, LST can serve as an alternative option when computational resources are limited. ### Comparison to image pre-trained model We still need to answer one question: is the vision-language model better than a single modality pre-trained model? To compare them fairly, we use an ImageNet-21K Russakovsky et al. (2015) pre-trained vision transformer instead of the CLIP visual encoder in Figure 2. We keep other settings the same. Table 7 shows the results. The ImageNet-21K pre-trained model performs much worse than the CLIP visual model pre-trained on image-text pairs. For the occluded dataset HOST, the ImageNet-21K pre-trained model is even worse than the model trained from scratch. Previous works also support this finding. PARSeq Bautista and Atienza (2022) trains the vision transformer from scratch rather than using a pre-trained model. TrOCR Li et al. (2023) uses pre-trained models from DeiT Touvron et al. (2021), BEiT Bao et al. (2022), and RoBERTa Liu et al. (2019), but it still post-pretrains them on 684M textlines from publicly available PDF files on the Internet. Table 7 clearly demonstrates the advantage of using a vision-language model in scene text recognition. ## 6 Conclusion We introduce CLIP4STR, a method that uses CLIP for scene text recognition (STR). It has a dual encoder-decoder architecture: a visual branch for initial prediction and a cross-modal branch for refinement. CLIP4STR achieves state-of-the-art results on common STR benchmarks, demonstrating \begin{table} \begin{tabular}{c c|c c c c c} \hline \hline Method & \#Params & IC15 & WOST & HOST & COCO & Uber \\ \hline Frozen & 0 & 60.9 & 54.8 & 39.9 & 48.9 & 20.1 \\ CLIP-Adapter & 262 K & 63.6 & 57.2 & 41.1 & 50.9 & 22.7 \\ LST (\(r=4\)) & 4.1M & 88.2 & 82.8 & 66.1 & 77.1 & 78.7 \\ LST (\(r=2\)) & 13.1M & 89.6 & 86.0 & 70.8 & 79.6 & 80.6 \\ Fine-tune & 86 M & **90.3** & **87.4** & **76.3** & **80.9** & **86.6** \\ \hline \hline \end{tabular} \end{table} Table 6: Using adapters for parameter-efficient adaptations. #Params means the learnable parameters in the visual encoder. \(r\) is the feature reduction ratio in LST. Here we only show the results of the visual branch in CLIP4STR, and the cross-modal branch is ignored. \begin{table} \begin{tabular}{l c|c c c c c} \hline \hline Pre-train & \#Params & IC15 & WOST & HOST & COCO & Uber \\ \hline Scratch & 86 M & 90.2 & 85.6 & **77.0** & 80.1 & 86.6 \\ ImageNet-21K & 86 M & 89.3 & 85.7 & 71.1 & 80.2 & 86.4 \\ Image-text pairs & 86 M & **90.3** & **87.4** & 76.3 & **80.9** & **86.6** \\ \hline \hline \end{tabular} \end{table} Table 7: Different pre-training strategies. #Params means the learnable parameters in the visual encoder. For a fair comparison, only the results of the visual branch in CLIP4STR are shown. that CLIP is a powerful scene text reader and that vision-language pre-training is beneficial for STR. We also conduct an extensive empirical study to explain how CLIP adapts to STR. We hope that CLIP4STR can serve as a simple but strong baseline for future STR research with VL models. ### Limitations The method in this paper is trained on English datasets, so it only works for English scene text recognition. Due to the adoption of a large pre-trained vision-language model (CLIP), the method in this paper requires quite a few GPU resources during training, \(i\)._e_., 8\(\times\) NVIDIA Tesla V100 GPUs. Although there are some computationally techniques that can be applied, they will sacrifice the performance of the method. ## Ethics Statement The method in this paper aims to adapt CLIP to scene text recognition task. We achieve pretty good performance on common scene text recognition benchmarks. This may raise the interest of people in applying the vision-language pre-trained model in STR task. The method in this paper is a pure STR method and it may have the same ethical considerations as other STR methods.
2304.03641
A Block Coordinate Descent Method for Nonsmooth Composite Optimization under Orthogonality Constraints
Nonsmooth composite optimization with orthogonality constraints has a broad spectrum of applications in statistical learning and data science. However, this problem is generally challenging to solve due to its non-convex and non-smooth nature. Existing solutions are limited by one or more of the following restrictions: (i) they are full gradient methods that require high computational costs in each iteration; (ii) they are not capable of solving general nonsmooth composite problems; (iii) they are infeasible methods and can only achieve the feasibility of the solution at the limit point; (iv) they lack rigorous convergence guarantees; (v) they only obtain weak optimality of critical points. In this paper, we propose \textit{\textbf{OBCD}}, a new Block Coordinate Descent method for solving general nonsmooth composite problems under Orthogonality constraints. \textit{\textbf{OBCD}} is a feasible method with low computation complexity footprints. In each iteration, our algorithm updates $k$ rows of the solution matrix ($k\geq2$ is a parameter) to preserve the constraints. Then, it solves a small-sized nonsmooth composite optimization problem under orthogonality constraints either exactly or approximately. We demonstrate that any exact block-$k$ stationary point is always an approximate block-$k$ stationary point, which is equivalent to the critical stationary point. We are particularly interested in the case where $k=2$ as the resulting subproblem reduces to a one-dimensional nonconvex problem. We propose a breakpoint searching method and a fifth-order iterative method to solve this problem efficiently and effectively. We also propose two novel greedy strategies to find a good working set to further accelerate the convergence of \textit{\textbf{OBCD}}. Finally, we have conducted extensive experiments on several tasks to demonstrate the superiority of our approach.
Ganzhao Yuan
2023-04-07T13:44:59Z
http://arxiv.org/abs/2304.03641v1
A Block Coordinate Descent Method for Nonsmooth Composite Optimization under Orthogonality Constraints ###### Abstract Nonsmooth composite optimization with orthogonality constraints has a broad spectrum of applications in statistical learning and data science. However, this problem is generally challenging to solve due to its non-convex and non-smooth nature. Existing solutions are limited by one or more of the following restrictions: (i) they are full gradient methods that require high computational costs in each iteration; (ii) they are not capable of solving general nonsmooth composite problems; (iii) they are infeasible methods and can only achieve the feasibility of the solution at the limit point; (iv) they lack rigorous convergence guarantees; (v) they only obtain weak optimality of critical points. In this paper, we propose _OBCD_, a new Block Coordinate Descent method for solving general nonsmooth composite problems under Orthogonality constraints. _OBCD_ is a feasible method with low computation complexity footprints. In each iteration, our algorithm updates \(k\) rows of the solution matrix (\(k\geq 2\) is a parameter) to preserve the constraints. Then, it solves a small-sized nonsmooth composite optimization problem under orthogonality constraints either exactly or approximately. We demonstrate that any exact block-\(k\) stationary point is always an approximate block-\(k\) stationary point, which is equivalent to the critical stationary point. Under appropriate conditions, we prove that _OBCD_ achieves strong convergence. We are particularly interested in the case where \(k=2\) as the resulting subproblem reduces to a one-dimensional nonconvex problem. We propose a breakpoint searching method and a fifth-order iterative method to solve this problem efficiently and effectively. We also propose two novel greedy strategies to find a good working set to further accelerate the convergence of _OBCD_. Finally, we have conducted extensive experiments on several tasks to demonstrate the superiority of our approach. Orthogonality Constraints; Nonconvex Optimization; Nonsmooth Composite Optimization; Block Coordinate Descent; Convergence Analysis. ## 1 Introduction We consider the following nonsmooth composite optimization problem under orthogonality constraints (\(\overset{\triangle}{=}\),' means define): \[\min_{\mathbf{X}\in\mathbb{R}^{n\times r}}\;F(\mathbf{X})\triangleq f(\mathbf{ X})+h(\mathbf{X}),\,s.t.\,\mathbf{X}^{\mathsf{T}}\mathbf{X}=\mathbf{I}_{r}. \tag{1}\] Here, \(n\geq r\) and \(\mathbf{I}_{r}\) is a \(r\times r\) identity matrix. For brevity, the orthogonality constraints \(\mathbf{X}^{\mathsf{T}}\mathbf{X}=\mathbf{I}_{r}\) in Problem (1) is rewritten as \(\mathbf{X}\in\mathrm{St}(n,r)\triangleq\{\mathbf{X}\in\mathbb{R}^{n\times r} \mid\mathbf{X}^{\mathsf{T}}\mathbf{X}=\mathbf{I}_{r}\}\), where \(\mathcal{M}\triangleq\mathrm{St}(n,r)\) is the Stiefel manifold in the literature (Edelman et al., 1998; Absil et al., 2008; Wen and Yin, 2013; Hu et al., 2020). Throughout this paper, we make the following assumptions on Problem (1). (_A_-i) For any \(\mathbf{X}\) and \(\mathbf{X}^{+}\) with \(\mathbf{X}^{+}\) being any row-\(k\) modification of \(\mathbf{X}\), we assume \(f:\mathbb{R}^{n\times r}\mapsto\mathbb{R}\) is continuously differentiable for some symmetric positive semidefinite matrix \(\mathbf{H}\in\mathbb{R}^{nr\times nr}\) that: \[f(\mathbf{X}^{+})\leq\mathcal{Q}(\mathbf{X}^{+};\mathbf{X}) \triangleq\langle\mathbf{X}^{+}-\mathbf{X},\nabla f(\mathbf{X})\rangle\] \[+\tfrac{1}{2}\|\mathbf{X}^{+}-\mathbf{X}\|_{\mathbf{H}}^{2}+f( \mathbf{X}), \tag{2}\] where \(\|\mathbf{H}\|\leq L_{f}\) for some constant \(L_{f}>0\) and \(\|\mathbf{X}\|_{\mathbf{H}}^{2}\triangleq\mathrm{vec}(\mathbf{X})^{\mathsf{T }}\mathbf{H}\mathrm{vec}(\mathbf{X})\). Note that the popular quadratic function \(f(\mathbf{X})=\frac{1}{2}tr(\mathbf{X}^{\mathsf{T}}\mathbf{C}\mathbf{X} \mathbf{D})=\frac{1}{2}\|\mathbf{X}\|_{\mathbf{H}}^{2}\) with \(\mathbf{H}=\mathbf{D}\otimes\mathbf{C}\) satisfies the equality \(\forall\mathbf{X},\mathbf{X}^{+},f(\mathbf{X}^{+})=\mathcal{Q}(\mathbf{X}^{+} ;\mathbf{X})\) in (2), where \(\mathbf{C}\in\mathbb{R}^{n\times n}\) and \(\mathbf{D}\in\mathbb{R}^{r\times r}\) are arbitrary symmetric matrices. (_A_-ii) The function \(h(\mathbf{X})\) is not necessarily smooth, but it is row-wise separable that \(h(\mathbf{X})=\sum_{i}h^{\prime}(\mathbf{X}(i,:))\). Typical examples of \(h^{\prime}(\mathbf{x}):\mathbb{R}^{1\times r}\mapsto\mathbb{R}\) include the the \(\ell_{p}\) norm function \(h^{\prime}(\mathbf{x})=\lambda\|\mathbf{x}\|_{p}\) with \(p\in\{0,1\}\), and the indicator function of non-negativity constraint \(h^{\prime}(\mathbf{x})=\mathcal{I}_{\geq 0}(\mathbf{x})\) with \(\mathcal{I}_{\geq 0}(\mathbf{x})=\{\begin{smallmatrix}0,&\mathbf{x}\geq 0 \\ \infty,&\mathrm{else}\end{smallmatrix}\}\). **Contributions:** This paper makes the following contributions. _(i)_ Algorithmically, we proposed a Block Coordinate Descent (BCD) algorithm for nonsmooth composite optimization under orthogonality constraints (see Section 4). _(ii)_ Theoretically, we provide some optimality analysis and convergence analysis of our methods (See Section 5 and 6). _(iii)_ As side contributions of this paper, we a propose break-point searching method and a fifth-order iterative method to solve the subproblems when \(k=2\) (see Section 7), and advocate two working set selection greedy strategies to enhance the computational efficiency of our methods (see Section 8). _(iv)_ Empirically, we have conducted extensive experiments to show that our methods outperform existing solutions in term of accuracy and/or efficiency (see Section 9). We provide some notations and technical preliminaries in Section A in the **Appendix**. ## 2 Applications Problem (1) is an optimization framework that plays an crucial role in a variety of statistical learning and data science models, such as sparse Principal Component Analysis (PCA) (Journee et al., 2010; Shalit & Chechik, 2014), Fourier transforms approximation (Frerix & Bruna, 2019), low-rank matrix completion (Boumal & Absil, 2011), phase synchronization (Liu et al., 2017), electronic structure calculation (Zhang et al., 2014; Liu et al., 2014), orthogonal nonnegative matrix factorization (Jiang et al., 2022), \(K\)-indicators clustering (Jiang et al., 2016), and dictionary learning (Zhai et al., 2020). We present four instances of this framework below. ### \(L_{0}\) Norm-based SPCA \(L_{0}\) norm-based Sparse PCA (SPCA) is a method that uses \(\ell_{0}\) norm to produce modified principal components with sparse loadings, which helps reduce model complexity and increase model interpretation (Chen et al., 2016; d'Aspremont et al., 2008). \(L_{0}\) norm-based SPCA can be formulated as: \[\min_{\mathbf{X}}~{}-\tfrac{1}{2}\mathbf{X}^{\mathsf{T}}\mathbf{C}\mathbf{X}+ \lambda\|\mathbf{X}\|_{0},\,s.t.\,\mathbf{X}^{\mathsf{T}}\mathbf{X}=\mathbf{ I}, \tag{3}\] where \(\mathbf{C}=\mathbf{A}^{\mathsf{T}}\mathbf{A}\in\mathbb{R}^{n\times n}\) is the covariance of the data matrix \(\mathbf{A}\in\mathbb{R}^{m\times n}\) and \(\lambda>0\). ### Nonnegative PCA Nonnegative PCA is an extension of PCA that imposes nonnegativity constraints on the principal vector (Zass & Shashua, 2006b; Qian et al., 2021). This constraint leads to a nonnegative representation of loading vectors and it helps to capture data locality in feature selection. Nonnegative PCA can formulated as: \[\min_{\mathbf{X}}~{}-\tfrac{1}{2}\mathbf{X}^{\mathsf{T}}\mathbf{C}\mathbf{X},\,s.t.\,\mathbf{X}\geq\mathbf{0},\,\mathbf{X}^{\mathsf{T}}\mathbf{X}=\mathbf{ I},\] where \(\mathbf{C}\in\mathbb{R}^{n\times n}\) is the covariance matrix of the data. ### \(L_{1}\) Norm-based SPCA As the \(L_{1}\) norm provides the tightest convex relaxation for the \(L_{0}\)-norm over the unit ball in the sense of \(L_{\infty}\)-norm, some researchers replace the non-convex and discontinuous \(L_{0}\) norm function as in (3) with a convex but non-smooth function (Chen et al., 2016; Vu et al., 2013; Lu & Zhang, 2012). This leads to the following optimization problem of \(L_{1}\) norm-based SPCA: \[\min_{\mathbf{X}}-\tfrac{1}{2}\mathbf{X}^{\mathsf{T}}\mathbf{C}\mathbf{X}+ \lambda\|\mathbf{X}\|_{1},\,s.t.\,\mathbf{X}^{\mathsf{T}}\mathbf{X}=\mathbf{ I},\] where \(\mathbf{C}\in\mathbb{R}^{n\times n}\) is the covariance matrix of the data, and \(\lambda>0\). ### Nonlinear Eigenvalue Problems Nonlinear eigenvalue problems are extensions of linear eigenvalue problem (Golub & Van Loan, 2013). Given any matrices \(\mathbf{A}\in\mathbb{R}^{n\times n}\) and \(\mathbf{E}\in\mathbb{R}^{n\times r}\), they can be formulated as solving the following optimization problem (Wen & Yin, 2013; Gao et al., 2018): \[\min_{\mathbf{X}^{\mathsf{T}}\mathbf{X}=\mathbf{I}}\tfrac{1}{2}\langle \mathbf{X},\mathbf{C}\mathbf{X}\rangle+\langle\mathbf{E},\mathbf{X}\rangle+ \tfrac{\lambda}{4}\langle\rho(\mathbf{X}),\mathbf{C}^{\dagger}\rho(\mathbf{X})\rangle, \tag{4}\] where \(\mathbf{C}\in\mathbb{R}^{n\times n}\) is the covariance matrix of the data, \(\lambda>0\), \(\rho(\mathbf{X})=\text{diag}(\mathbf{X}\mathbf{X}^{\mathsf{T}})\in\mathbb{R}^ {n}\), and \(\mathbf{C}^{\dagger}\in\mathbb{R}^{n\times n}\) is the pseudo-inverse of \(\mathbf{C}\) with \(\mathbf{C}^{\dagger}=(\mathbf{C}^{\mathsf{T}}\mathbf{C})^{-1}\mathbf{A}^{ \mathsf{T}}\). ## 3 Related Work We now present some related algorithms in the literature. \(\blacktriangleright\)**Minimizing Smooth Functions under Orthogonality Constraints.** One difficulty of solving Problem (1) is due to the nonconvexity of orthogonality constraints. Existing methods for handling this issue can be divided into three classes. _(i)_ Geodesic-like methods (Abrudan et al., 2008; Edelman et al., 1998; Absil et al., 2008; Jiang & Dai, 2015). Since calculating geodesics involves solving ordinary differential equations which may cause computational complexity, geodesic-like methods iteratively compute the geodesic logarithm via simple linear algebra calculations. The work of (Wen & Yin, 2013) develops a simple and efficient constraint preserving update scheme and achieves low computation complexity per iteration. They combine the feasible update scheme with the Barzilai-Borwein (BB) nonmonotonic line search for optimization with orthogonality constraints. _(ii)_ Projection-like methods (Absil et al., 2008; Golub & Van Loan, 2013). These methods preserve the orthogonality constraints by the projection. They decrease the objective value using its current Euclidean gradient direction or Riemannian tangent direction, followed by a orthogonal projection operation, which can be calculated by polar decomposition or approximated by QR factorization. _(iii)_ Multiplier correction methods (Gao et al., 2018, 2019; Liu et al., 2020). Since the Lagrangian multiplier associated with the orthogonality constraint is symmetric and has explicit closed-form expression at the first-order optimality condition, multiplier correction methods update the multiplier after achieving sufficient reduction for the objective function, leading to efficient first-order feasible or infeasible approaches. \(\blacktriangleright\)**Minimizing Nonmooth Functions under Orthogonality Constraints.** Another difficulty of solving Problem (1) comes from the nonsmoothness of the objective function. Existing methods for addressing this problem can be classified into three categories. _(i)_ Subgradient methods (Hwang et al., 2015, Li et al., 2021). Subgradient methods are analogous to gradient descent methods. Most of aforementioned geodesic-like and projection-like strategies can be incorporated into the subgradient methods but replaces gradients with subgradients. However, the stepsize of subgradient methods need to be chosen to be diminishing to guarantee convergence. _(ii)_ Proximal gradient methods (Chen et al., 2020). They solve a strongly convex minimization problem over the tangent space using semi-smooth Newton method to find a descent direction, then the orthogonality constraint is preserved by a retraction operation. However, computing the Hessian matrix for the Newton subprocedure has a memory complexity of \(\mathcal{O}(r^{4})\). _(iii)_ Operator splitting methods (Lai and Osher, 2014, Chen et al., 2016, Zhang et al., 2019, He and Yuan, 2012). Operator splitting methods introduce linear constraints and decompose the original problem into simpler subproblems which can be solved separately and exactly. Alternating Direction Methods of Multipliers (ADMM) and Penalty Alternating Direction Methods (PADM) are two variants of operator splitting methods. ADMM has good empirical performance but lacks rigorous convergence guarantees, while PADM achieves good theoretical convergence but is not numerically robust. \(\blacktriangleright\)**Block Coordinate Descent Methods.** (Block) coordinate descent is a classical and powerful algorithm that solves optimization problems by iteratively performing minimization along (block) coordinate directions (Tseng and Yun, 2009, Xu and Yin, 2013). This algorithm breaks the original problem into multiple subproblems and solves them separably and efficiently. BCD methods have recently gained attention in solving nonconvex optimization problems, including sparse optimization (Yuan et al., 2020), \(k\)-means clustering (Nie et al., 2022), structured nonconvex minimization (Yuan, 2023, 2022), recurrent neural network(Massart and Abrol, 2022), and multi-layer convolutional networks (Bibi et al., 2019, Zeng et al., 2019). BCD methods have also been used in (Shalit and Chechik, 2014, Massart and Abrol, 2022) for solving optimization problems with orthogonal group constraints. However, their column-based BCD methods are only limited to solve smooth minimization problems with \(k=2\) and \(r=n\)(Refer to Section 4.2 in (Shalit and Chechik, 2014)). Our row-based BCD methods can solve general nonsmooth problems with \(k\geq 2\) and \(r\leq n\). The work of (Gao et al., 2019) proposes a parallelizable column-wise BCD scheme for solving the subproblems of their proximal linearized augmented Lagrangian algorithm. Impressive parallel scalability under parallel environment of their algorithm is demonstrated. We stress that the these two _column-based_ BCD methods are different from our _row-based_ BCD method. \(\blacktriangleright\)**Other Approaches.** There exists other approaches to address the optimization problem with orthogonal constraints. Some researchers employ the penalty sequences: \(\min_{\mathbf{X}}F(\mathbf{X})+\rho\|\mathbf{X}\mathbf{X}^{\mathsf{T}}-\mathbf{ I}_{r}\|_{F}^{2}\) with an increasing penalty parameter \(\rho\) to enforce the orthogonality constraints (Zass and Shashua, 2006, Wen et al., 2016). Based on the fact that \(\min_{\mathbf{X}\in\mathbb{S}(n,r)}F^{\prime}(\mathbf{X}\mathbf{X}^{\mathsf{ T}})\Leftrightarrow\min_{\mathbf{0}\preceq\mathbf{M}\preceq\mathbf{I}_{n},tr(\mathbf{M})=r}F^{ \prime}(\mathbf{M}),\,s.t.\,rank(\mathbf{M})=r\), the work of (VuC, 2013) solves a convex relaxation problem by dropping the rank constraint. In addition to commonly used first-order methods, some authors also consider second-order methods (Liu et al., 2015, Hu et al., 2018, 2019, Zhao et al., 2015). Finally, global linear convergence property has been studied by other researchers (Liu et al., 2016). ## 4 The Proposed _Object_ Algorithm We present _OBCD_ in Algorithm 1, which is a new Block Coordinate Descent (BCD) algorithm for solving general nonsmooth composite problems under orthogonality constraints as in Problem (1). It is an iterative procedure that sequentially minimizes the objective function along block coordinate directions over a sub-manifold of \(\mathcal{M}\). \(\blacktriangleright\)**A New Constraint-Preserving Update Scheme** For any partition of the index vector \([1,2,...,n]\) into \([\mathrm{B},\mathrm{B}^{c}]\) with \(\mathrm{B}\in\mathbb{N}^{k}\), \(\mathrm{B}^{c}\in\mathbb{N}^{n-k}\), we define \(\mathbf{U}_{\mathrm{B}}\in\mathbb{R}^{n\times k}\) and \(\mathbf{U}_{\mathrm{B}^{c}}\in\mathbb{R}^{n\times(n-k)}\) as: \[(\mathbf{U}_{\mathrm{B}})_{ji}=\left\{\begin{array}{ll}1,&\mathrm{B}_{i}=j ;\\ 0,&\text{else.}\end{array}\right.,(\mathbf{U}_{\mathrm{B}^{c}})_{ji}=\left\{ \begin{array}{ll}1,&\mathrm{B}_{i}^{c}=j;\\ 0,&\text{else.}\end{array}\right..\] Therefore, for any \(\mathbf{X}\in\mathbb{R}^{n\times r}\), we have \(\mathbf{I}_{n}=\mathbf{U}_{\mathrm{B}}\mathbf{U}_{\mathrm{B}}^{\mathsf{T}}+ \mathbf{U}_{\mathrm{B}^{c}}\mathbf{U}_{\mathrm{B}^{c}}^{\mathsf{T}}\), \(\mathbf{X}(\mathrm{B},:)=\mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}\in \mathbb{R}^{k\times r}\), and \(\mathbf{X}=\mathbf{I}_{n}\mathbf{X}=(\mathbf{U}_{\mathrm{B}}\mathbf{U}_{\mathrm{ B}}^{\mathsf{T}}+\mathbf{U}_{\mathrm{B}^{c}}\mathbf{U}_{\mathrm{B}^{c}}^{ \mathsf{T}})\mathbf{X}=\mathbf{U}_{\mathrm{B}}\mathbf{X}(\mathrm{B},:)+\mathbf{ U}_{\mathrm{B}^{c}}\mathbf{X}(\mathrm{B}^{c},:)\). In each iteration \(t\), the indices \(\{1,2,...,n\}\) of the rows of decision variable \(\mathbf{X}\in\mathrm{St}(n,r)\) are separated to two sets \(\mathrm{B}\) and \(\mathrm{B}^{c}\), where \(\mathrm{B}\) is the working set with \(|\mathrm{B}|=k\) and \(\mathrm{B}^{c}=\{1,2,...,n\}\setminus\mathrm{B}\). To simplify the notation, we use \(\mathrm{B}\) instead of \(\mathrm{B}^{t}\) as \(t\) can be inferred from the context. There fore, we have the following variable splitting for any \(\mathbf{X}\in\mathrm{St}(n,r)\): \(\mathbf{X}=\mathbf{I}_{n}\mathbf{X}=(\mathbf{U}_{\mathrm{B}}\mathbf{U}_{\mathrm{B }}^{\mathsf{T}}+\mathbf{U}_{\mathrm{B}^{\mathrm{c}}}\mathbf{U}_{\mathrm{B}^{ \mathrm{c}}}^{\mathsf{T}})\mathbf{X}\). We only update \(k\) rows the variable \(\mathbf{X}\) via \(\mathbf{X}^{t+1}(\mathrm{B},:)\Leftarrow\mathbf{V}\mathbf{X}^{t}(\mathrm{B}, :)\) for some appropriate matrix \(\mathbf{V}\in\mathbb{R}^{k\times k}\). We have the following equivalent expressions: \[[\mathbf{X}^{t+1}(\mathrm{B},:)=\mathbf{V}\mathbf{X}^{t}(\mathrm{B },:)]\] \[\Leftrightarrow [\mathbf{X}^{t+1}=\mathbf{U}_{\mathrm{B}}\mathbf{V}\mathbf{U}_{ \mathrm{B}}^{\mathsf{T}}\mathbf{X}^{t}+\mathbf{U}_{\mathrm{B}^{\mathrm{c}}} \mathbf{U}_{\mathrm{B}^{\mathrm{c}}}^{\mathsf{T}}\mathbf{X}^{t}] \tag{5}\] \[\Leftrightarrow [\mathbf{X}^{t+1}=\mathbf{X}^{t}+\mathbf{U}_{\mathrm{B}}(\mathbf{ V}-\mathbf{I}_{k})\mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}^{t}] \tag{6}\] We consider the following minimization procedure to iteratively solve Problem (1): \[\min_{\mathbf{V}}F(\mathbf{X}^{t}+\mathbf{U}_{\mathrm{B}}(\mathbf{ V}-\mathbf{I}_{k})\mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}^{t}),\] \[s.t. \mathbf{X}^{t}+\mathbf{U}_{\mathrm{B}}(\mathbf{V}-\mathbf{I}_{k}) \mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}^{t}\in\mathrm{St}(n,r). \tag{7}\] The following lemma shows that the orthogonality constraint for \(\mathbf{X}^{+}=\mathbf{X}+\mathbf{U}_{\mathrm{B}}(\mathbf{V}-\mathbf{I}_{k}) \mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}\) can be preserved by choosing suitable \(\mathbf{V}\in\mathrm{St}(k,k)\) and \(\mathbf{X}\in\mathrm{St}(n,r)\). We use \(\{\mathcal{B}_{1},\mathcal{B}_{2},...,\mathcal{B}_{C_{k}^{\mathrm{c}}}\}\) to denote all the possible \(k\)-combinations of the index vectors choosing \(k\) items from \(n\) without repetition. **Lemma 4.1**.: _For any \(\mathrm{B}\in\{\mathcal{B}_{i}\}_{i=1}^{\mathrm{c}_{k}}\), we let \(\mathbf{X}^{+}=\mathbf{X}+\mathbf{U}_{\mathrm{B}}(\mathbf{V}-\mathbf{I}_{k}) \mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}\) with \(\mathbf{V}\in\mathrm{St}(k,k)\)._ _(a) For any \(\mathbf{X}\in\mathbb{R}^{n\times r}\), we have \([\mathbf{X}^{+}]^{\mathsf{T}}\mathbf{X}^{+}=\mathbf{X}^{\mathsf{T}}\mathbf{X}\)._ _(b) If \(\mathbf{X}\in\mathrm{St}(n,r)\), then \(\mathbf{X}^{+}\in\mathrm{St}(n,r)\)._ _Please refer to Section B.1._ **Remarks.** When \(\mathbf{X}\) is \(\epsilon\)-orthogonal that \(\|\mathbf{X}^{\mathsf{T}}\mathbf{X}-\mathbf{I}_{k}\|_{\mathsf{F}}\leq\epsilon\) with \(\epsilon\geq 0\), Lemma 4.1 implies that \(\mathbf{X}^{+}\) is also \(\epsilon\)-orthogonal with \(\|[\mathbf{X}^{+}]^{\mathsf{T}}\mathbf{X}^{+}-\mathbf{I}_{k}\|_{\mathsf{F}}\leq\epsilon\). Thanks to Lemma 4.1, we consider the following alternative formulation for Problem (7). \[\tilde{\mathbf{V}}^{t}\in\arg\min_{\mathbf{V}}\;F(\mathcal{P}_{ \mathrm{B}}^{t}(\mathbf{V})),\;s.t.\;\mathbf{V}\in\mathrm{St}(k,k),\] \[\mathcal{P}_{\mathrm{B}}^{t}(\mathbf{V})\triangleq\mathbf{X}^{t}+ \mathbf{U}_{\mathrm{B}}(\mathbf{V}-\mathbf{I}_{k})\mathbf{U}_{\mathrm{B}}^{ \mathsf{T}}\mathbf{X}^{t}. \tag{12}\] Then the solution matrix is updated via: \(\mathbf{X}^{t+1}=\mathcal{P}_{\mathrm{B}}^{t}(\tilde{\mathbf{V}}^{t})\). The following lemma offers important properties that relate to the update rule \(\mathbf{X}^{+}=\mathbf{X}+\mathbf{U}_{\mathrm{B}}(\mathbf{V}-\mathbf{I}_{k}) \mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}\). **Lemma 4.2**.: _We define \(\mathbf{X}^{+}=\mathbf{X}+\mathbf{U}_{\mathrm{B}}(\mathbf{V}-\mathbf{I}_{k}) \mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}\). For any \(\mathbf{X}\in\mathrm{St}(n,r)\), \(\mathbf{V}\in\mathrm{St}(k,k)\), and \(\mathbf{B}\in\{\mathcal{B}_{i}\}_{i=1}^{\mathrm{c}_{k}}\), we have:_ _(a) \(\frac{1}{2}\|\mathbf{X}^{+}-\mathbf{X}\|_{\mathsf{F}}^{2}\leq\frac{1}{2}\| \mathbf{V}-\mathbf{I}_{k}\|_{\mathsf{F}}^{2}=\langle\mathbf{I}_{k},\mathbf{I}_{k }-\mathbf{V}\rangle\)._ _(b) \(\frac{1}{2}\|\mathbf{X}^{+}-\mathbf{X}\|_{\mathsf{F}}^{2}=\langle\mathbf{I}_{k}- \mathbf{V},\mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}\mathbf{X}^{\mathsf{T} }\mathbf{U}_{\mathrm{B}}\rangle\)._ _(c) \(\frac{1}{2}\|\mathbf{X}^{+}-\mathbf{X}\|_{\mathsf{H}}^{2}\leq\frac{1}{2}\| \mathbf{V}-\mathbf{I}_{k}\|_{\mathsf{Q}}^{2}\) for all \(\mathbf{Q}\succcurlyeq\underline{\mathbf{Q}}\), where \(\underline{\mathbf{Q}}\) is defined in (8)._ _Please refer to Section B.2._ **Algorithm 1**_OBCD_**: The Proposed Block Coordinate Descent Algorithm for Nonsmooth Composite Optimization under Orthogonality Constraints._ Input: an initial feasible solution \(\mathbf{X}^{0}\). Set \(k\geq 2\), \(t=0\). **while** not converge **do** **(S1)** Use some strategy to find a working set \(\mathrm{B}^{t}\) for the \(t\)-it iteration with \(\mathrm{B}^{t}\in\{1,2,...,n\}^{k}\). Letting \(\mathrm{B}=\mathrm{B}^{t}\), \(\mathrm{B}^{c}=\{1,2,...,n\}\setminus\mathrm{B}\), we have \(\mathbf{U}_{\mathrm{B}}\in\mathbb{R}^{n\times k}\), \(\mathbf{U}_{\mathrm{B}^{c}}\in\mathbb{R}^{n\times(n-k)}\), and \(\mathbf{I}_{n}=\mathbf{U}_{\mathrm{B}}\mathbf{U}_{\mathrm{B}}^{\mathsf{T}}+\mathbf{U}_ {\mathrm{B}^{c}}\mathbf{U}_{\mathrm{B}^{c}}^{\mathsf{T}}\). **(S2)** Solve a small-sized subproblem. \(\bullet\) Option _OBCD-Exact_: Define \(\mathbf{Z}\triangleq\mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}^{t}=\mathbf{X}_ {\mathrm{B}}^{t}\in\mathbb{R}^{k\times r}\). Choose \(\mathbf{Q}\in\mathbb{R}^{k^{2}\times k^{2}}\) such that \[\mathbf{Q} =\;\underline{\mathbf{Q}}\triangleq(\mathbf{Z}^{\mathsf{T}}\otimes \mathbf{U}_{\mathrm{B}})^{\mathsf{T}}\mathbf{H}(\mathbf{Z}^{\mathsf{T}} \otimes\mathbf{U}_{\mathrm{B}}), \tag{8}\] \[\text{or}\;\mathbf{Q} =\;\varsigma\mathbf{I},\;\text{with}\;\|\underline{\mathbf{Q}}\| \leq\varsigma\leq L_{f}. \tag{9}\] Find an _optimal solution_\(\tilde{\mathbf{V}}^{t}\in\mathbb{R}^{k\times k}\) of Problem (10): \[\tilde{\mathbf{V}}^{t}\in\arg\min_{\mathbf{V}\in\mathrm{St}(k,k)} \;\mathcal{J}(\mathbf{V};\mathbf{X}^{t},\mathrm{B})\triangleq q(\mathbf{V}) \tag{10}\] \[\text{with}\;\mathcal{J}(\mathbf{V};\mathbf{X}^{t},\mathrm{B}) \triangleq h(\mathbf{V}\mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}^{t})+ \frac{1}{2}\|\mathbf{V}-\mathbf{I}\|_{\mathbf{Q}+\theta\mathbf{I}}^{2}\] \[+\langle\mathbf{V}-\mathbf{I}_{k},[\nabla f(\mathbf{X}^{t})( \mathbf{X}^{t})^{\mathsf{T}}]_{\mathrm{BB}}\rangle\] \(\bullet\) Option _OBCD-Approx_: Find a _critical point_\(\tilde{\mathbf{V}}^{t}\in\mathbb{R}^{k\times k}\) of Problem (11) that \(\mathcal{K} where we uses the fact that \(\langle\mathbf{U}_{\mathrm{B}}(\mathbf{V}-\mathbf{I}_{k})\mathbf{U}_{\mathrm{B}}^{ \mathsf{T}}\mathbf{X},\nabla f(\mathbf{X})\rangle=\langle\mathbf{V}-\mathbf{I}_{k },[\nabla f(\mathbf{X})(\mathbf{X})^{\mathsf{T}}]_{\mathrm{BB}}\rangle\). We have the following inequalities that majorize \(F(\mathcal{P}_{\mathrm{B}}^{t}(\mathbf{V}))\) in (12): \[F(\mathcal{P}_{\mathrm{B}}^{t}(\mathbf{V})) =f(\mathcal{P}_{\mathrm{B}}^{t}(\mathbf{V}))+h(\mathcal{P}_{ \mathrm{B}}^{t}(\mathbf{V})) \tag{14}\] \[\leq f(\mathbf{X}^{t})+\langle\mathbf{V}-\mathbf{I}_{k},[\nabla f (\mathbf{X}^{t})(\mathbf{X}^{t})^{\mathsf{T}}]_{\mathrm{BB}}\rangle\] \[\quad+\tfrac{1}{2}\|\mathbf{V}-\mathbf{I}_{k}\|_{\mathbf{Q}}^{2}+ h(\mathcal{P}_{\mathrm{B}}^{t}(\mathbf{V})).\] Here, the last step applies (13) with \(\mathbf{X}=\mathbf{X}^{t}\) and \(\mathbf{X}^{+}=\mathcal{P}_{\mathrm{B}}^{t}(\mathbf{V})\). Note that the nonsmooth function \(h(\mathcal{P}_{\mathrm{B}}^{t}(\mathbf{V}))\) is kept unchanged. Since \(h(\cdot)\) is row-wise separable, we have \(h(\mathcal{P}_{\mathrm{B}}^{t}(\mathbf{V}))=h(\mathbf{U}_{\mathrm{B}^{*}} \mathbf{U}_{\mathrm{B}^{*}}^{\mathsf{T}}\mathbf{X}^{t}+\mathbf{U}_{\mathrm{B}} \mathbf{V}\mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}^{t})=h(\mathbf{U}_{ \mathrm{B}^{*}}^{\mathsf{T}}\mathbf{X}^{t})+h(\mathbf{V}\mathbf{U}_{\mathrm{B }}^{\mathsf{T}}\mathbf{X}^{t})\). Appending a new proximal term \(\frac{\theta}{2}\|\mathbf{V}-\mathbf{I}_{k}\|_{\mathbf{F}}^{2}\) to the majorization function in (14), we have subproblem (10) as shown in _OBCD-Exact_. In some situations, it can be challenging to find a tight bound for the majorization function of \(f(\mathbf{X})\). One approach to this issue is to consider majorization minimization over the original function approximately instead. Without using the specific structure of the problem, one can expect to find a critical point of Problem (12). This approach is provided in _OBCD-Approx_, where the condition \(\mathcal{K}(\bar{\mathbf{V}}^{t};\mathbf{X}^{t},\mathrm{B})\leq\mathcal{K}( \mathbf{I}_{k};\mathbf{X}^{t},\mathrm{B})\) is not difficult to guarantee when feasible greedy descent methods are used (Wen & Yin, 2013; Jiang & Dai, 2015; Chen et al., 2020). With the quadratic structure at hand, we outline some properties of _OBCD-Exact_ using the following lemma. **Lemma 4.3**.: _(a) Problem (10) is equivalent to the following problem:_ \[\min_{\mathbf{V}\in\mathrm{St}(k,k)}q(\mathbf{V})\triangleq\frac{1}{2}\| \mathbf{V}\|_{\mathbf{Q}}^{2}+\langle\mathbf{V},\mathbf{P}\rangle+h(\mathbf{ V}\mathbf{Z}), \tag{15}\] _where \(\mathbf{P}\triangleq[\nabla f(\mathbf{X}^{t})(\mathbf{X}^{t})^{\mathsf{T}}]_{ \mathrm{BB}}-\text{mat}(\mathbf{Q}\text{vec}(\mathbf{I}_{k}))-\theta\mathbf{I }_{k}\)._ _(b) Assume the matrix \(\mathbf{Q}\) is chosen to be a diagonal matrix as shown in (9). The term \(\frac{1}{2}\|\mathbf{V}\|_{\mathbf{Q}}^{2}\) in Problem (15) vanishes, and Problem (15) reduces to the following problem:_ \[\bar{\mathbf{V}}^{t}=\arg\min_{\mathbf{V}\in\mathrm{St}(k,k)}q(\mathbf{V}) \triangleq\langle\mathbf{V},\mathbf{P}\rangle+h(\mathbf{V}\mathbf{Z}). \tag{16}\] _In particular, when \(h(\mathbf{X})\triangleq 0\), the optimal solution of Problem (16) can be computed as: \(\bar{\mathbf{V}}^{t}=\mathcal{P}_{\mathcal{M}}(-\mathbf{P})\) and it holds that \(\mathbf{\Lambda}\triangleq-\mathbf{P}^{\mathsf{T}}\bar{\mathbf{V}}^{t} \succeq\mathbf{0}\)._ _Please refer to Section B.3._ _\(\blacktriangleright\) Computing the Matrix \(\mathbf{Q}\)_ Computing the matrix \(\mathbf{Q}\in\mathbb{R}^{k^{2}\times k^{2}}\) as in (8) can be a challenging task as it involves the matrix \(\mathbf{H}\in\mathbb{R}^{nr\times nr}\). However, in practice, \(\mathbf{H}\) often has some special structure that enables fast matrix computation. For example, \(\mathbf{H}\) may takes a diagonal matrix that is equal to \(L\mathbf{I}_{nr}\) for some \(L\geq 0\) or has a Kronecker structure where \(\mathbf{H}=\mathbf{H}_{1}\otimes\mathbf{H}_{2}\) for some \(\mathbf{H}_{1}\in\mathbb{R}^{r\times r}\) and \(\mathbf{H}_{2}\in\mathbb{R}^{n\times n}\). The lemmas provided below demonstrate how to calculate \(\mathbf{Q}\). **Lemma 4.4**.: _Assume (8) is used to find \(\mathbf{Q}\)._ _(a) If \(\mathbf{H}=\mathbf{H}_{1}\otimes\mathbf{H}_{2}\), we have: \(\mathbf{Q}=\mathbf{Q}_{1}\otimes\mathbf{Q}_{2}\), where \(\mathbf{Q}_{1}=\mathbf{Z}\mathbf{H}_{1}\mathbf{Z}^{\mathsf{T}}\in\mathbb{R}^{k \times k}\) and \(\mathbf{Q}_{2}=\mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{H}_{2}\mathbf{U}_{ \mathrm{B}}\in\mathbb{R}^{k\times k}\)._ _(b) If \(\mathbf{H}=L\mathbf{I}_{nr}\), we have \(\mathbf{Q}=(L\mathbf{Z}\mathbf{Z}^{\mathsf{T}})\otimes\mathbf{I}_{k}\)._ _Please refer to Section B.4._ **Lemma 4.5**.: _Assume (9) is used to find \(\mathbf{Q}\)._ _(a) If \(\mathbf{H}=\mathbf{H}_{1}\otimes\mathbf{H}_{2}\), we can choose \(\mathbf{Q}=\|\mathbf{Q}_{1}\|\cdot\|\mathbf{Q}_{2}\|\cdot\mathbf{I}\), where \(\mathbf{Q}_{1}\) and \(\mathbf{Q}_{2}\) are defined in Lemma 4.4._ _(b) If \(\mathbf{H}=L\mathbf{I}_{nr}\), we can choose \(\mathbf{Q}=L\|\mathbf{Z}\|^{2}\cdot\mathbf{I}\)._ _Please refer to Section B.5._ \(\blacktriangleright\) Smallest Possible Subproblems When \(k=2\) We are particularly interested in the case when \(k=2\) since it results in analytical solutions for the subproblems. The following lemma reveals an equivalent expression for any \(\mathbf{V}\in\mathrm{St}(2,2)\). **Lemma 4.6**.: _Any orthogonal matrix \(\mathbf{V}\in\mathrm{St}(2,2)\) can be expressed as \(\mathbf{V}=\mathbf{F}_{\theta}\) or \(\mathbf{V}=\mathbf{R}_{\theta}\) for some \(\theta\in\mathbb{R}\), where_ \[\mathbf{R}_{\theta}\triangleq\begin{bmatrix}\cos(\theta)&\sin(\theta)\\ -\sin(\theta)&\cos(\theta)\end{bmatrix},\,\mathbf{F}_{\theta}\triangleq\begin{bmatrix}- \cos(\theta)&\sin(\theta)\\ \sin(\theta)&\cos(\theta)\end{bmatrix}.\] _It holds that \(\text{det}(\mathbf{R}_{\theta})=1\) and \(\text{det}(\mathbf{F}_{\theta})=-1\) for any \(\theta\in\mathbb{R}\)._ _Refer to Section B.6._ Using Lemma 4.6, we can reformulate Problem (10) or Problem (11) as a one-dimensional subproblem: \(\min q(\mathbf{V}),s.t.\mathbf{V}\in\{\mathbf{R}_{\theta},\mathbf{F}_{\theta}\}\), which can be solved using a novel breakpoint search method or a fifth-order iterative method. Please refer to Section 7. **Remarks.**_(a)_\(\mathbf{R}_{\theta}\) and \(\mathbf{F}_{\theta}\) are called Givens rotation matrix and Jacobi reflection matrix respectively in the literature (Sun & Bischof, 1995). _(b)_ Previous research only considered \(\{\mathbf{R}_{\theta}\}\) for solving symmetric linear eigenvalue problems (Golub & Van Loan, 2013) and sparse PCA problems (Shalit & Chechik, 2014), while we use \(\{\mathbf{F}_{\theta},\,\mathbf{R}_{\theta}\}\) for solving general nonsmooth problems under orthogonality constraints. _(c)_ We explain that the use of \(\{\mathbf{F}_{\theta},\,\mathbf{R}_{\theta}\}\) is necessary using the following example \(\bar{\mathbf{X}}=\arg\min_{\mathbf{X}\in\mathrm{St}(2,2)}F(\mathbf{X}) \triangleq\|\mathbf{X}-\mathbf{F}_{\theta}\|_{F}^{2}\) with \(\bar{\theta}=7\) is a fixed constant. Clearly, we have \(\bar{\mathbf{X}}=\mathbf{F}_{\bar{\theta}}\) and \(F(\bar{\mathbf{X}})=0\). It is not hard to verify that the optimal solution can be found by _BCD-Exact in one iteration_ for any \(\mathbf{X}^{0}\). However, if we replace the constraint \(\mathbf{V}^{\mathsf{T}}\mathbf{V}=\mathbf{I}_{k}\) as in _BCD-Exact_ with \(\mathbf{V}\in\mathbf{R}_{\theta}\) for some \(\theta\). _BCD-Exact with \(\text{det}(\mathbf{X}^{0})=1\). This is because the optimal solution \(\tilde{\mathbf{X}}\) is a reflection matrix with \(\text{det}(\tilde{\mathbf{X}})=-1\). According to the update rule of \(\mathbf{X}^{t+1}\) that \(\mathbf{X}^{t+1}\Leftarrow\tilde{\mathbf{V}}^{t}\mathbf{X}^{t}\), any matrix multiplication of two rotation matrices remains a rotation matrix. _BCD-Exact_ gets stuck into a local minima with \(\text{det}(\mathbf{X}^{t})=1\) for all \(t\). \(\blacktriangleright\)_Selecting the Working Set_ Three strategies to find the working set \(\mathrm{B}\) with \(|\mathrm{B}|=k\) are considered. _(i)_ Random strategy. \(\mathrm{B}\) is randomly selected from \(\{\mathcal{B}_{1},\mathcal{B}_{2},...,\mathcal{B}_{C_{n}^{k}}\}\) with equal probability \(1/C_{n}^{k}\). _(ii)_ Cyclic strategy. \(\mathrm{B}^{t}\) takes all possible combinations in cyclic order \(\mathcal{B}_{1}\ \rightarrow\ \mathcal{B}_{2}\ \rightarrow...\rightarrow\ \mathcal{B}_{C_{n}^{k}}\rightarrow\ \mathcal{B}_{1}\ \rightarrow...\). _(iii)_ Greedy strategy. We propose two novel greedy strategies to find a good working set. Please refer to Section 8. ## 5 Optimality Analysis This section provides some optimality analysis for the proposed algorithm. \(\blacktriangleright\)_Basis Representation of Orthogonal Matrices_ The following lemma is used to characterize any orthogonal matrix \(\mathbf{D}\in\mathrm{St}(n,n)\) and \(\mathbf{X}\in\mathrm{St}(n,r)\). **Proposition 5.1**.: _(Basis Representation of Orthogonal Matrices) Assume \(k=2\). We define \(\mathcal{W}_{i}=\mathbf{U}_{\mathcal{B}_{i}}\mathcal{V}_{i}\mathbf{U}_{ \mathcal{B}_{i}^{t}}^{\mathsf{T}}+\mathbf{U}_{\mathcal{B}_{i}^{t}}\mathbf{U}_{ \mathcal{B}_{i}^{t}}^{\mathsf{T}}\), \(\mathcal{V}_{i}\in\mathrm{St}(2,2)\), \(i\in[C_{n}^{2}]\)._ _(a) Any matrix \(\mathbf{D}\in\mathrm{St}(n,n)\) can be expressed as \(\mathbf{D}=\mathcal{W}_{C_{n}^{2}}...\mathcal{W}_{2}\mathcal{W}_{1}\) for some suitable \(\mathcal{W}_{i}\) (which depends on \(\mathcal{V}_{i}\)). Furthermore, we have: \((\forall i,\,\mathcal{V}_{i}=\mathbf{I}_{2})\Rightarrow(\mathbf{D}=\mathbf{I}_ {n})\)._ _(b) Any matrix \(\mathbf{X}\in\mathrm{St}(n,r)\) can be expressed as \(\mathbf{X}=\mathcal{W}_{C_{n}^{2}}...\mathcal{W}_{2}\mathcal{W}_{1}\mathbf{X}^ {0}\) for some suitable \(\mathcal{W}_{i}\) and any fixed constant matrix \(\mathbf{X}^{0}\in\mathrm{St}(n,r)\)._ _Refer to Section B.7._ **Remarks.** The result in _Part (b)_ of Proposition 5.1 indicates that the proposed update scheme \(\mathbf{X}^{+}\Leftarrow(\mathbf{U}_{\mathrm{B}}\mathbf{V}\mathbf{U}_{ \mathrm{B}}^{\mathsf{T}}+\mathbf{U}_{\mathrm{B}^{\mathsf{-}}}\mathbf{U}_{ \mathrm{B}^{\mathsf{-}}}\mathbf{U}_{\mathrm{B}^{\mathsf{-}}}^{\mathsf{T}}) \mathbf{X}\) as shown in (5) can reach any orthogonal matrix \(\mathbf{X}\in\mathrm{St}(n,r)\) for any starting solution \(\mathbf{X}^{0}\in\mathrm{St}(n,r)\). \(\blacktriangleright\)_First-Order Optimality Conditions for Problem (1)_ We provide the first-order optimality condition of Problem (1) (Wen & Yin, 2013). We extend the definition of limiting subdifferential and denote \(\partial_{\mathcal{M}}F(\mathbf{X})\) as the Riemannian limiting subdifferential of \(F(\mathbf{X})\) at \(\mathbf{X}\). Introducing a Lagrangian multiplier \(\mathbf{\Lambda}\in\mathbb{R}^{r\times r}\) for the orthogonality constraint, we have the following Lagrangian function of Problem (1): \[\mathcal{L}(\mathbf{X},\mathbf{\Lambda})=F(\mathbf{X})+\tfrac{1}{2}(\mathbf{I }_{r}-\mathbf{X}^{\mathsf{T}}\mathbf{X},\mathbf{\Lambda}). \tag{17}\] The matrix \(\mathbf{\Lambda}\) is symmetric since \(\mathbf{X}^{\mathsf{T}}\mathbf{X}\) is symmetric. We have the following first-order optimality condition. **Lemma 5.2**.: _Critical Point (Wen & Yin, 2013). A solution \(\tilde{\mathbf{X}}\in\mathrm{St}(n,r)\) is a critical point of Problem (1) if:_ \[\mathbf{0}\in\partial_{\mathcal{M}}F(\tilde{\mathbf{X}})\triangleq\partial F( \tilde{\mathbf{X}})-\tilde{\mathbf{X}}[\partial F(\tilde{\mathbf{X}})]^{ \mathsf{T}}\tilde{\mathbf{X}}. \tag{18}\] _Furthermore, \(\mathbf{\Lambda}\in[\partial F(\tilde{\mathbf{X}})]^{\mathsf{T}}\tilde{ \mathbf{X}}\)._ **Remarks.**_(i)_ The critical point condition can be characterized using the Euclidean metric or the Riemannian metric that: \(\left(\mathbf{0}\in\partial F(\tilde{\mathbf{X}})+\mathcal{N}_{\mathcal{M}}( \tilde{\mathbf{X}})\right)\Leftrightarrow(\mathbf{0}\in\mathcal{P}_{\mathrm{ Tx}\mathcal{M}}(\partial F(\mathbf{X})))\). _(ii)_ The critical point condition can bed also described using projection operator that \(\tilde{\mathbf{X}}\in\arg\min_{\mathbf{X}^{\mathsf{T}}\mathbf{X}=\mathbf{I}_ {r}}\langle\mathbf{X},\mathbf{G}\rangle\) with \(\mathbf{G}\in\partial F(\tilde{\mathbf{X}})\). \(\blacktriangleright\)_Optimality Conditions for the Subproblems_ The Euclidean subdifferential of \(\mathcal{J}(\mathbf{V};\mathbf{X}^{t},\mathrm{B}^{t})\) and \(\mathcal{K}(\mathbf{V};\mathbf{X}^{t},\mathrm{B}^{t})\) w.r.t. \(\mathbf{V}\) can be computed as follows: \[\ddot{\mathbf{G}}\triangleq\tilde{\mathbf{\Delta}}+\mathbf{U}_{ \mathrm{B}}^{\mathsf{T}}[\nabla f(\mathbf{X}^{t})+\partial h(\mathbf{X}^{+})]( \mathbf{X}^{t})^{\mathsf{T}}\mathbf{U}_{\mathrm{B}}, \tag{19}\] \[\dot{\mathbf{G}}\triangleq\dot{\mathbf{\Delta}}+\mathbf{U}_{ \mathrm{B}}^{\mathsf{T}}[\nabla f(\mathbf{X}^{+})+\partial h(\mathbf{X}^{+})]( \mathbf{X}^{t})^{\mathsf{T}}\mathbf{U}_{\mathrm{B}}, \tag{20}\] where \(\tilde{\mathbf{\Delta}}=\text{mat}((\mathbf{Q}+\theta\mathbf{I}_{k})\text{vec}( \mathbf{V}-\mathbf{I}_{k}))\), \(\dot{\mathbf{\Delta}}=\theta(\mathbf{V}-\mathbf{I}_{k})\), and \(\mathbf{X}^{+}=\mathbf{X}^{t}+\mathbf{U}_{\mathrm{B}}(\mathbf{V}-\mathbf{I}_{k} )\mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\mathbf{X}^{t}\). Using Lemma 5.2, we set the Riemannian subdifferential of \(\mathcal{J}(\mathbf{V};\mathbf{X}^{t},\mathrm{B}^{t})\) and \(\mathcal{K}(\mathbf{V};\mathbf{X}^{t},\mathrm{B}^{t})\) w.r.t. \(\mathbf{V}\) to zero and obtain the following first-order optimality condition for \(\tilde{\mathbf{V}}^{t}\): \[\mathbf{0}\in\partial_{\mathcal{M}}\mathcal{J}(\mathbf{V};\mathbf{X }^{t},\mathrm{B}^{t})|_{\mathbf{V}=\tilde{\mathbf{V}}^{t}}\triangleq\tilde{ \mathbf{G}}-\tilde{\mathbf{V}}^{t}\ddot{\mathbf{G}}^{\mathsf{T}}\tilde{\mathbf{V}}^ {t},\] \[\mathbf{0}\in\partial_{\mathcal{M}}\mathcal{K}(\mathbf{V};\mathbf{X }^{t},\mathrm{B}^{t})|_{\mathbf{V}=\tilde{\mathbf{V}}^{t}}\triangleq\dot{ \mathbf{G}}-\tilde{\mathbf{V}}^{t}\dot{\mathbf{G}}^{\mathsf{T}}\tilde{\mathbf{V}}^ {t},\] with \(\mathbf{X}^{+}=\mathbf{X}^{t+1}\). \(\blacktriangleright\)_New Optimality Conditions and Optimality Hierarchy_ We introduce the following new optimality conditions of exact and approximate block-\(k\) stationary points. **Definition 5.3**.: _(Exact) Block-\(k\) Stationary Point, abbreviated as \(\textbf{BS}_{k}\)-point. Let \(\theta>0\) and \(k\geq 2\). A solution \(\tilde{\mathbf{X}}\in\mathrm{St}(n,r)\) is called an exact block-\(k\) stationary point if:_ \[\forall\mathrm{B}\in\{\mathcal{B}_{i}\}_{i=1}^{C_{n}^{k}},\ \mathbf{I}_{k}\in\arg\min_{\mathbf{V}\in \mathrm{St}(k,k)}\ \mathcal{J}(\mathbf{V};\tilde{\mathbf{X}},\mathrm{B}). \tag{21}\] _Here, \(\mathcal{J}(\mathbf{V};\tilde{\mathbf{X}},\mathrm{B})\triangleq h(\mathbf{V} \mathbf{U}_{\mathrm{B}}^{\mathsf{T}}\tilde{\mathbf{X}})+\tfrac{1}{2}\| \mathbf{V}-\mathbf{I}_{k}\|_{\mathbf{Q}+\theta\mathbf{I}}^{2}+\langle\mathbf{V}- \mathbf{I}_{k},[\nabla f(\tilde{\mathbf{X}})(\tilde{\mathbf{X}})^{\mathsf{T}}]_{ \mathrm{BB}}\rangle\ **Remarks**. Block-\(k\) stationary point states that if we globally minimize the majorization function \(\mathcal{J}(\mathbf{V};\ddot{\mathbf{X}},\mathrm{B})\), we can not improve the objective function value for \(\mathcal{J}(\mathbf{V};\ddot{\mathbf{X}},\mathrm{B})\) for all \(\mathrm{B}\in\left\{\mathcal{B}_{i}\right\}_{i=1}^{\mathrm{C}_{n}}\). The following theorem establishes the relation between _BS\({}_{k}\)-point_, _AB\({}_{k}\)-point_, and critical point. **Theorem 5.5**.: _If \(k\geq 2\), we have the following results:_ _(a) \(\{\textbf{BS}_{k+1}\text{-Point}\}\subset\{\textbf{BS}_{k}\text{-Point}\}\)._ _(b) \(\{\textbf{BS}_{k}\text{-Point}\}\subset\{\textbf{ABS}_{k}\text{-Point}\}\)._ _(c) \(\{\textbf{ABS}_{k}\text{-Point}\}\equiv\{\text{Critical Point}\}\)._ _Refer to Section B.8._ **Remarks.** The optimality condition of _BS\({}_{k}\)-point_ is stronger than that of _AB\({}_{k}\)-point_, which is equivalent to the standard critical point condition. ## 6 Convergence Analysis This section provides some convergence analysis for Algorithm 1. We denote \(\tilde{\mathbf{X}}\) as the global optimal solution of Problem (1). ### Global Convergence We define the \(\epsilon\)-_BS\({}_{k}\)-point_ and the \(\epsilon\)-_ABS\({}_{k}\)-point_ as follows. **Definition 6.1**.: _(\(\epsilon\)-BS\({}_{k}\)-point) Given any constant \(\epsilon>0\), a point \(\tilde{\mathbf{X}}\) is called an \(\epsilon\)-BS\({}_{k}\)-point if: \(\mathcal{J}^{\star}(\tilde{\mathbf{X}})\leq\epsilon\), where \(\mathcal{J}^{\star}(\mathbf{X})\triangleq\frac{1}{C_{n}^{k}}\sum_{i=1}^{ \mathrm{C}_{n}^{k}}\text{dist}(\mathbf{I}_{k},\arg\min_{\mathbf{V}}\mathcal{J} (\mathbf{V};\mathbf{X},\mathcal{B}_{i}))^{2}\)._ **Definition 6.2**.: _(\(\epsilon\)-AB\({}_{k}\)-point) Given any constant \(\epsilon>0\), a point \(\tilde{\mathbf{X}}\) is called an \(\epsilon\)-ABS\({}_{k}\)-point if: \(\mathcal{K}^{\star}(\tilde{\mathbf{X}})\leq\epsilon\), where \(\mathcal{K}^{\star}(\mathbf{X})\triangleq\frac{1}{C_{n}^{k}}\sum_{i=1}^{ \mathrm{C}_{n}^{k}}\text{dist}(\mathbf{I}_{k},\text{Crit }\mathcal{K}(\mathbf{V};\mathbf{X},\mathcal{B}_{i}))^{2}\)._ **Theorem 6.3**.: _We have the following results:_ _(a) The following sufficient decrease condition holds for both **OBCD-Exact** and **OBCD-Approx**: \(\forall t,\ \frac{\theta}{2}\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|_{\mathrm{F}}^{2} \leq\frac{\theta}{2}\|\tilde{\mathbf{V}}^{t}-\mathbf{I}_{k}\|_{\mathrm{F}}^{2 }\leq F(\mathbf{X}^{t})-F(\mathbf{X}^{t+1})\)._ _(b) OBCD-Exact_ (_or **OBCD-Approx**_) finds an \(\epsilon\)-BS\({}_{k}\)-point (or \(\epsilon\)-AB\({}_{k}\)-point) of Problem (1) in at most \(T\) iterations in the sense of expectation, where \(T=\lceil\frac{2C_{n}^{k}(F(\mathbf{X}^{0})-F(\tilde{\mathbf{X}}))}{\theta \epsilon}\rceil=\mathcal{O}(\epsilon^{-1})\)._ Please refer to Section B.9. ### Strong Convergence under KL Assumption We prove that _OBCD_ achieves strong convergence based on a non-convex analysis tool called Kurdyka-Lojasiewicz inequality (Attouch et al., 2010; Bolte et al., 2014). For simplicity, we only focus on _OBCD-Exact_. We first present the following proposition established in (Attouch et al., 2010). **Proposition 6.4**.: _(Kurdyka-Lojasiewicz Property). For a semi-algebraic function \(F^{\circ}(\mathbf{X})\) with \(\mathbf{X}\in\text{dom }F^{\circ}\), there exists \(\sigma\in[0,1),\ \eta\in(0,+\infty]\) a neighborhood \(\Upsilon\) of \(\mathbf{X}\) and a concave and continuous function \(\varphi(t)=ct^{1-\sigma}\), \(c>0\), \(t\in[0,\eta)\) such that for all \(\mathbf{X}^{\prime}\in\Upsilon\) and satisfies \(F^{\circ}(\mathbf{X}^{\prime})\in(F^{\circ}(\mathbf{X}),F^{\circ}(\mathbf{X} )+\eta)\), the following inequality holds:_ \[\text{dist}(\mathbf{0},\partial F^{\circ}(\mathbf{X}^{\prime}))\varphi^{\prime }(F^{\circ}(\mathbf{X}^{\prime})-F^{\circ}(\mathbf{X}))\geq 1.\] **Remarks.** Semi-algebraic functions are a class of functions that satisfy the KL property. These functions are widely used in applications, and they include real polynomial functions, finite sums and products of semi-algebraic functions, and indicator functions of semi-algebraic sets. We make the following additional assumptions. **Assumption 6.1**.: _There exists positive constants \(l_{f}\) and \(l_{h}\) that \(\forall\mathbf{X},\|\nabla f(\mathbf{X})\|_{2}\leq l_{f},\|\boldsymbol{\xi}\|_{ 2}\leq l_{h}\) with \(\boldsymbol{\xi}\in\partial h(\mathbf{X})\)._ **Assumption 6.2**.: _The function \(F^{\circ}(\mathbf{X})=F(\mathbf{X})+\mathcal{I}_{\mathcal{M}}(\mathbf{X})\) is a KL function._ First, we notice that the Riemannian subdifferential of \(\mathcal{J}(\mathbf{V};\mathbf{X}^{t},\mathrm{B}^{t})\) at the point \(\mathbf{V}=\mathbf{I}_{k}\) can be computed as: \[\partial_{\mathcal{M}}\mathcal{J}(\mathbf{I}_{k};\mathbf{X}^{t}, \mathrm{B}^{t})=\mathbf{U}_{\mathrm{B}^{t}}^{\mathsf{T}}(\mathbb{D}-\mathbb{D}^{ \mathsf{T}})\mathbf{U}_{\mathrm{B}^{t}}\] \[\text{with }\mathbb{D}=[\nabla f(\mathbf{X}^{t})+\partial h( \mathbf{X}^{t})][\mathbf{X}^{t}]^{\mathsf{T}}.\] We present the following useful lemma. **Lemma 6.5**.: _(Riemannian Subgradient Lower Bound for the Iterates Gap) We define \(\phi\triangleq 4(l_{f}+l_{h}+L_{f})+2\theta\). It holds that: \(\mathbb{E}[\text{dist}(\mathbf{0},\partial_{\mathcal{M}}\mathcal{J}(\mathbf{I}_ {k};\mathbf{X}^{t+1},\mathrm{B}^{t+1}))]\leq\phi\cdot\mathbb{E}[\|\tilde{ \mathbf{V}}^{t}-\mathbf{I}_{k}\|_{\mathrm{F}}]\)._ _Refer to Section B.10._ The following lemma is useful to outline the relation of \(\|\partial_{\mathcal{M}}F(\mathbf{X}^{t})\|_{\mathrm{F}}\) and \(\|\partial_{\mathcal{M}}\mathcal{J}(\mathbf{I}_{k};\mathbf{X}^{t},\mathrm{B}^{t })\|_{\mathrm{F}}\). **Lemma 6.6**.: _We have the following results:_ _(a) For any \(\mathbf{W}\in\mathbb{R}^{n\times n}\), we have \(\sum_{i=1}^{\mathrm{C}_{n}^{k}}\|\mathbf{W}(\mathcal{B}_{i},\mathcal{B}_{i})\|_{ \mathrm{F}}^{2}=C_{n-2}^{k-2}\sum_{i}\sum_{j,j\neq i}\mathbf{W}_{ij}^{2}+\frac{ k}{n}C_{n}^{k}\sum_{i}\mathbf{W}_{ii}^{2}\)._ _(b)\(\text{dist}(\mathbf{0},\partial_{\mathcal{M}}F(\mathbf{X}^{t}))\leq\gamma\cdot \mathbb{E}[\text{dist}(\mathbf{0},\partial_{\mathcal{M}}\mathcal{J}( \mathbf{I}_{k};\mathbf{X}^{t},\mathrm{B}^{t}))]\) with \(\gamma\triangleq\sqrt{\frac{C_{n}^{k}}{C_{n-2}^{k-2}}}\)._ _Refer to Section B.11._ The following lemma show the relation of \(\text{dist}(\mathbf{0},\partial\mathcal{F}(\mathbf{X}))\) and \(\text{dist}(\mathbf{0},\partial_{\mathcal{M}}F(\mathbf{X}))\). We have the following useful lemma. **Lemma 6.7**.: _For any \(\mathbf{X}\in\mathbb{R}^{n\times r}\), it holds that \(\text{dist}(\mathbf{0},\partial F^{\circ}(\mathbf{X}))\leq\text{dist}(\mathbf{0}, \partial_{\mathcal{M}}F(\mathbf{X}))\)._ _Refer to Section B.12._ Finally, we obtain our main convergence results. **Theorem 6.8**.: _(A Finite Length Property). The sequence \(\{\mathbf{X}^{t}\}_{t=0}^{\infty}\) has finite length property that: \(\forall t,\sum_{i=1}^{t}\mathbb{E}[\|\mathbf{X}^{t+1}-\mathbf{X}^{t}\|_{ \mathsf{F}}]\leq C<+\infty\) with \(C\triangleq 2\sqrt{k+\frac{4\gamma\phi}{\theta}}\varphi(F(\mathbf{X})^{ \mathsf{T}})-F(\bar{\mathbf{X}}))\). Here, \(\gamma\) is defined in Lemma 6.6, \(\phi\) is defined in Lemma 6.5, and \(\varphi(\cdot)\) is the desingularization function defined in Proposition 6.4._ _Refer to Section B.13._ **Remarks.** Under the powerful Kurdyka-Lojasiewicz property, one can establish the convergence rates for the iterates. If the desingularization function of \(F^{\circ}(\mathbf{X})\) takes the form that \(\varphi(t)=ct^{1-\sigma}\) for some \(c>0\) and some \(\sigma\in[0,1)\), the following estimations holds (Attouch et al., 2010). _(a)_ If \(\sigma=0\), then the sequence \(\{\mathbf{X}\}_{i=0}^{\infty}\) converges in a finite number of steps. _(b)_ If \(\sigma\in(0,1/2]\), then there exists \(\iota>0\) and \(\tau\in[0,1)\) such that \(\|\mathbf{X}^{t}-\bar{\mathbf{X}}\|_{\mathsf{F}}\leq\iota r^{t}\). _(c)_ If \(\sigma\in(1/2,1)\), then there exists \(\iota>0\) such that \(\|\mathbf{X}^{t}-\bar{\mathbf{X}}\|_{\mathsf{F}}\leq\iota t^{-\frac{1-\sigma} {2\sigma-1}}\). ## 7 Solving the Subproblem when \(k=2\) When \(k=2\), Problem (15) boils down to a one-dimensional subproblem that \(\min_{\theta}q(\mathbf{V}),s.t.\mathbf{V}\in\{\mathbf{R}_{\theta},\mathbf{F}_ {\theta}\}\). In what follows, we present two methods to solve it. The first one is a Breakpoint Searching Method (_BSM_) designed for _OBCD-Exact_ and it is guaranteed to find the _global optimal solution_, while the second one is a Fifth-order Iterative Method (_FIM_) designed for _OBCD-Approx_ and it finds a reasonably good local minimum solution. ### A Breakpoint Searching Method for _OBCD-Exact_ We rewrite (15) as the following problem: \[\bar{\theta}\in\arg\min_{\theta} \ \frac{1}{2}\text{vec}(\mathbf{V})^{\mathsf{T}}\mathbf{Q}\text{vec }(\mathbf{V})+\langle\mathbf{V},\mathbf{P}\rangle+h(\mathbf{V}\mathbf{Z}),\] \[s.t.\ \mathbf{V}\triangleq\begin{bmatrix}\pm\cos(\theta)&\sin( \theta)\\ \mp\sin(\theta)&\cos(\theta)\end{bmatrix},\] with \(\mathbf{Q}\in\mathbb{R}^{4\times 4}\), \(\mathbf{P}\in\mathbb{R}^{2\times 2}\), and \(\mathbf{Z}\in\mathbb{R}^{2\times r}\). Using \(h(\mathbf{X})=\sum_{i}h^{\prime}(\mathbf{X}(i,:))\), we have the following equivalent optimization problem: \[\min_{\theta}a\cos(\theta)+b\sin(\theta) \tag{23}\] \[+c\cos(\theta)^{2}+d\cos(\theta)\sin(\theta)+e\sin(\theta)^{2}\] \[+h^{\prime}\left(\cos(\theta)\mathbf{r}+\sin(\theta)\mathbf{s} \right)+h^{\prime}\left(\cos(\theta)\mathbf{p}+\sin(\theta)\mathbf{u}\right).\] with \(a=\mathbf{P}_{22}\pm\mathbf{P}_{11}\), \(b=\mathbf{P}_{12}\mp\mathbf{P}_{21}\), \(c=0.5(\mathbf{Q}_{11}+\mathbf{Q}_{44})\pm\mathbf{Q}_{14}\), \(e=0.5(\mathbf{Q}_{22}+\mathbf{Q}_{33})\mp\mathbf{Q}_{23}\), \(d=-\mathbf{Q}_{12}\pm\mathbf{Q}_{13}\mp\mathbf{Q}_{24}+\mathbf{Q}_{34}\), \(\mathbf{r}=\pm\mathbf{Z}(1,:)\), \(\mathbf{s}=\mathbf{Z}(2,:)\), \(\mathbf{p}=\mathbf{Z}(2,:)\), \(\mathbf{u}=\mp\mathbf{Z}(1,:)\). Our key strategy is to perform a variable substitution to convert Problem (23) into an equivalent problem that depends on the variable \(\tan(\theta)\triangleq t\). The substitution is based on the trigonometric identities that \(\cos(\theta)=\frac{\pm 1}{\sqrt{1+\tan(\theta)^{2}}}\) and \(\sin(\theta)=\frac{\pm\tan(\theta)}{\sqrt{1+\tan(\theta)^{2}}}\). The following lemma provides a characterization of the global optimal solution for Problem (23). **Lemma 7.1**.: _Assume that the function \(h^{\prime}(\mathbf{x})\) is coordinate-wise separable. We define \(\mathbf{x}=[\mathbf{r};\mathbf{p}]\in\mathbb{R}^{2r\times 1}\), \(\mathbf{y}=[\mathbf{s};\mathbf{u}]\in\mathbb{R}^{2r\times 1}\), and \(w=c-e\). We let \(\tilde{F}(\tilde{c},\tilde{s})\triangleq a\tilde{c}+b\tilde{s}+\tilde{c}\tilde{ }^{2}+d\tilde{c}\tilde{s}+e\tilde{s}^{2}+h^{\prime}\left(\tilde{c}\mathbf{x}+ \tilde{s}\mathbf{y}\right)\). The optimal solution \(\bar{\theta}\) to (23) can be computed as: \([\cos(\bar{\theta}),\sin(\bar{\theta})]=\arg\min_{[c,s]}\ \tilde{F}(c,s),\ s.t.\ [,s]\in\{[\frac{1}{\sqrt{1+(\tilde{t}_{+})^{2}}},\frac{ \tilde{t}_{+}}{\sqrt{1+(\tilde{t}_{+})^{2}}}],[\frac{-1}{\sqrt{1+(\tilde{t}_{- })^{2}}},\frac{-\tilde{t}_{-}}{\sqrt{1+(\tilde{t}_{-})^{2}}}]\}\), where \(\bar{t}_{+}\) and \(\tilde{t}_{-}\) are respectively defined as:_ \[\bar{t}_{+}\in\arg\min_{t}\ p(t)\triangleq\frac{a+bt}{\sqrt{1+t^{2}}}+\frac{ w+dt}{1+t^{2}}+h^{\prime}(\frac{\mathbf{x}+\mathbf{y}t}{\sqrt{1+t^{2}}}), \tag{24}\] \[\bar{t}_{-}\in\arg\min_{t}\ \tilde{p}(t)\triangleq\frac{a-bt}{\sqrt{1+t^{2}}}+\frac{ w+dt}{1+t^{2}}+h^{\prime}(\frac{\mathbf{x}-\mathbf{y}t}{\sqrt{1+t^{2}}}). \tag{25}\] _Refer to Section B.14._ We describe our _BSM_ to solve Problem (24); our approach can be naturally extended to tackle Problem (25). _BSM_ first identifies all the possible breakpoints / critical points \(\Theta\), and then picks the solution that leads to the lowest value as the optimal solution \(\bar{t}\), i.e., \(\bar{t}=\arg\min_{t}\ p(t),\ s.t.\ t\in\Theta\). We assume \(\mathbf{y}_{i}\neq 0\). If this is not true and there exists \(\mathbf{y}_{i}=0\) for some \(i\), then \(\{\mathbf{x}_{i},\mathbf{y}_{i}\}\) can be removed since it does not affect the minimizer of the problem. We now show that how to find the breakpoint set \(\Theta\) for different \(h^{\prime}(\cdot)\). We define \(t^{\circ}=(1+t^{2})^{-2}\). \(\blacktriangleright\)_Finding the Breakpoint Set for \(h^{\prime}(\mathbf{x})\triangleq\lambda\|\mathbf{x}\|_{0}\)_ Since the function \(h^{\prime}(\mathbf{x})\triangleq\lambda\|\mathbf{x}\|_{0}\) is coordinate-wise separable, scale-invariant, and symmetric with \(\|\pm t\mathbf{x}\|_{0}=\|\mathbf{x}\|_{0}\) for all \(t>0\), Problem (24) reduces to the following problem: \[\min_{t}p(t)\triangleq\frac{a+bt}{\sqrt{1+t^{2}}}+\frac{w+dt}{1+t^{2}}+\lambda \|\mathbf{x}+t\mathbf{y}\|_{0}. \tag{26}\] Since the subgradient of \(\ell_{0}\) norm function can be computed as \(\partial\|t\|_{0}=\{\ \tiny{0\},\ \tiny{\begin{array}{c}t=0\\ \theta\end{array}}\ \}\), we consider the following two cases. _(i)_ We assume \((\mathbf{x}+t\mathbf{y})_{i}=0\) for some \(i\), then the solution \(\bar{t}\) can be determined using \(\bar{t}=\frac{\mathbf{x}_{i}}{\mathbf{y}_{i}}\). There are \(2r\) breakpoints \(\{\frac{\mathbf{x}_{1}}{\mathbf{y}_{1}},\frac{\mathbf{x}_{2}}{\mathbf{y}_{2}},...,\frac{\mathbf{x}_{2r}}{\mathbf{y}_{2r}}\}\) for this case. _(ii)_ We now assume \((\mathbf{x}+t\mathbf{y})_{i}\neq 0\) for all \(i\) and \(\lambda\|\mathbf{x}+t\mathbf{y}\|_{0}=2r\lambda\) becomes a constant. Setting the subgradient of \(p(t)\) to zero yields: \(0=\nabla p(t)=[b(1+t^{2})-(a+bt)t]\cdot\sqrt{1+t^{2}}\cdot t^{\circ}+[d(1+t^{2} )-(w+dt)(2t)]\cdot t^{\circ}\). Since \(t^{\circ}>0\), we obtain: \ Squaring both sides, we obtain the following quartic equation: \(c_{4}t^{4}+c_{3}t^{3}+c_{2}t^{2}+c_{1}t+c_{0}=0\) for some suitable \(c_{4}\), \(c_{3}\), \(c_{2}\), \(c_{1}\) and \(c_{0}\). Solving this equation analytically by Lodovico Ferrari's method (WikiContributors), we obtain all its real roots \(\{\bar{t}_{1},\bar{t}_{2},...,\bar{t}_{j}\}\) with \(1\leq j\leq 4\). There are at most 4 breakpoints for this case. Therefore, Problem (26) contains at most \(2r+4\) breakpoints \(\Theta=\{\frac{\mathbf{x}_{1}}{\mathbf{y}_{1}},\frac{\mathbf{x}_{2}}{\mathbf{ y}_{2}},...,\frac{\mathbf{x}_{2r}}{\mathbf{y}_{2r}},\bar{t}_{1},\bar{t}_{2},...,\bar{t}_{j}\}\). \(\blacktriangleright\)_Finding the Breakpoint Set for \(h^{\prime}(\mathbf{x})\triangleq\lambda\|\mathbf{x}\|_{1}\)_ Since the function \(h^{\prime}(\mathbf{x})\triangleq\lambda\|\mathbf{x}\|_{1}\) is coordinate-wise separable and symmetric, Problem (24) reduces to the following problem: \[\min_{t}p(t)\triangleq\frac{a+bt}{\sqrt{1+t^{2}}}+\frac{w+dt}{1+t^{2}}+\frac{ \lambda\|\mathbf{x}+t\mathbf{y}\|_{1}}{\sqrt{1+t^{2}}}. \tag{27}\] Setting the subgradient of \(p(\cdot)\) to zero yields: \(0=\partial p(t)=t^{\circ}[d(1+t^{2})-(w+dt)2t+(b-at)\cdot\sqrt{1+t^{2}}]+t^{ \circ}\lambda\cdot\sqrt{1+t^{2}}\cdot[\langle\text{sign}(\mathbf{x}+t\mathbf{y }),\mathbf{y}\rangle(1+t^{2})-\|\mathbf{x}+t\mathbf{y}\|_{1}t]\). We consider the following two cases. _(i)_ We assume \((\mathbf{x}+t\mathbf{y})_{i}=0\) for some \(i\), then the solution \(\bar{t}\) can be determined using \(\bar{t}=\frac{\mathbf{x}_{i}}{\mathbf{y}_{i}}\). There are \(2r\) breakpoints \(\{\frac{\mathbf{x}_{1}}{\mathbf{y}_{1}},\frac{\mathbf{x}_{2}}{\mathbf{y}_{2}},...,\frac{\mathbf{x}_{2r}}{\mathbf{y}_{2r}}\}\) for this case. _(ii)_ We now assume \((\mathbf{x}+t\mathbf{y})_{i}\neq 0\) for all \(i\). We define \(\mathbf{z}\triangleq\{\frac{\mathbf{x}_{1}}{\mathbf{y}_{1}},-\frac{\mathbf{x}_ {1}}{\mathbf{y}_{1}},\frac{\mathbf{x}_{2}}{\mathbf{y}_{2}},-\frac{\mathbf{x}_ {2}}{\mathbf{y}_{2}},...,\frac{\mathbf{x}_{2r}}{\mathbf{y}_{2r}},-\frac{ \mathbf{x}_{2r}}{\mathbf{y}_{2r}}\}\in\mathbb{R}^{4r\times 1}\), and sort \(\mathbf{z}\) in non-descending order. The domain \(p(t)\) can be divided into \((4r+1)\) non-overlapping intervals: \((-\infty,\mathbf{z}_{1}),(\mathbf{z}_{1},\mathbf{z}_{2}),...,(\mathbf{z}_{4r},+\infty)\). In each interval, \(\text{sign}(\mathbf{x}+t\mathbf{y})\triangleq\mathbf{o}\) can be determined. Combining with the fact that \(t^{\circ}>0\) and \(\|\mathbf{x}+t\mathbf{y}\|_{1}=\langle\mathbf{o},\mathbf{x}+t\mathbf{y}\rangle\), the first-order optimality condition reduces to: \(0=[d(1+t^{2})-(w+dt)2t+(b-at)\cdot\sqrt{1+t^{2}}]+\lambda\cdot\sqrt{1+t^{2}} \cdot[(\mathbf{o},\mathbf{y})(1+t^{2})-\langle\mathbf{o},\mathbf{x}+t \mathbf{y}\rangle t]\), which can be simplified as: \((at-b)\cdot\sqrt{1+t^{2}}-\lambda\cdot\sqrt{1+t^{2}}\cdot[\langle\mathbf{o}, \mathbf{y}-t\mathbf{x}\rangle]=[d(1+t^{2})-(w+dt)2t]\). We square both sides and then solve the quartic equation. We obtain obtain all its real roots \(\{\bar{t}_{1},\bar{t}_{2},...,\bar{t}_{j}\}\) with \(1\leq j\leq 4\). Therefore, Problem (27) contains at most \(2r+(4r+1)\times 4\) breakpoints. \(\blacktriangleright\)_Finding the Breakpoint Set for \(h^{\prime}(\mathbf{x})\triangleq I_{\geq 0}(\mathbf{x})\)_ Since the function \(h^{\prime}(\mathbf{x})\triangleq\mathcal{I}_{\geq 0}(\mathbf{x})\) is a coordinate-wise separable and scale-invariant function, Problem (24) reduces to the following problem: \[\min_{t}p(t)\triangleq\frac{a+bt}{\sqrt{1+t^{2}}}+\frac{w+dt}{1+t^{2}},\;s.t. \,\mathbf{x}+t\mathbf{y}\geq\mathbf{0}. \tag{28}\] We define \(I\triangleq\{i|\mathbf{y}_{i}>0\}\) and \(J\triangleq\{i|\mathbf{y}_{i}<0\}\). Note that \(\mathbf{x}+t\mathbf{y}\geq 0\Leftrightarrow-\frac{\mathbf{x}_{i}}{\mathbf{y}_{i}} \leq t,t\leq-\frac{\mathbf{x}}{\mathbf{y}_{j}}\Leftrightarrow lb\triangleq \max(-\frac{\mathbf{x}_{i}}{\mathbf{y}_{j}})\leq t\leq\min(-\frac{\mathbf{x}_ {j}}{\mathbf{y}_{j}})\triangleq ub\). When \(lb>ub\), we can directly conclude that the problem has no solution for this case. Now we assume \(ub\geq lb\) and define \(P(t)\triangleq\min(ub,\max(t,lb))\). We omit the bound constraint and set the gradient of \(p(t)\) to zero yields: \(0=\nabla p(t)=[b(1+t^{2})-(a+bt)t]\cdot\sqrt{1+t^{2}}\cdot t^{\circ}+[d(1+t^{2} )-(w+dt)(2t)]\cdot t^{\circ}\). We obtain all its real roots \(\{\bar{t}_{1},\bar{t}_{2},...,\bar{t}_{j}\}\) with \(1\leq j\leq 4\) after squaring both sides and solving the quartic equation. Combining with the bound constraint, we conclude that Problem (28) contains at most \((4+2)\) breakpoints \(\{P(\bar{t}_{1}),P(\bar{t}_{2}),...,P(\bar{t}_{j}),lb,ub\}\) with \(1\leq j\leq 4\). ### A Fifth-Order Iterative Method for _OBCD-Approx_ We describe our _FIM_ to solve Problem (11) in _OBCD-Approx_. To handle the nonsmooth term \(h(\mathbf{X})\), one can use the variable substitution based on trigonometric identities, which is discussed in Section 7.1. However, the derivation process is tedious. The motivation of using _OBCD-Approx_ is that we may not obtain a tight bound for the majorization function \(\mathcal{Q}(\mathbf{X};\mathbf{X}^{t})\). For simplicity, we only focuses on the case where \(h(\mathbf{X})=0\), and assume that \(f(\mathbf{X})\) is infinitely differentiable. We rewrite (15) as the following problem: \[\min_{\theta}p(\theta) \triangleq f(\mathbf{X}^{t}+\mathbf{U}_{\text{B}}(\mathbf{V}- \mathbf{I}_{k})\mathbf{U}_{\text{B}}^{\mathsf{T}}\mathbf{X}^{t})\] \[s.t.\,\mathbf{V}\triangleq\begin{bmatrix}\pm\cos(\theta)&\sin( \theta)\\ \mp\sin(\theta)&\cos(\theta)\end{bmatrix}.\] We explore the following high-order model to find a critical point \(\tilde{\theta}\) of the problem above: \[\theta^{l+1}\in\arg\min_{\theta}\frac{\bar{p}^{c}}{c!}|\theta-\theta^{l}|^{c}+ \sum_{i=0}^{c-1}\frac{p^{i}(\theta^{l})}{i!}(\theta-\theta^{l})^{i}. \tag{29}\] Here, \(p^{n}(\theta^{l})\) is \(n\)-th derivative of \(p(\theta)\), and \(\bar{p}^{c}\) is a universal positive constant that \(\bar{p}^{c}\geq p^{c}(\theta^{l})\) for all \(\theta^{l}\). By setting \(c=5\) in (29), the iterative scheme becomes a fifth-order iterative method. The reason why \(c=5\) is chosen is mainly because the minimization problem in (29) has an analytical solution if \(c=5\). We now show how to solve Problem (29) efficiently and exactly. _(i)_ We assume that \(\theta^{t+1}\geq\theta^{l}\). The absolute value can be removed, resulting in \(\min_{\theta}p(\theta)=\frac{\bar{p}^{c}}{5!}(\theta-\theta^{l})^{5}+\sum_{i=0}^{c -1}\frac{p^{i}(\theta^{l})}{i!}(\theta-\theta^{l})^{i}\). Setting the gradient of \(p(\theta)\) to zero yields a quartic equation: \(c_{4}\theta^{4}+c_{5}\theta^{3}+c_{2}\theta^{2}+c_{1}\theta+c_{0}=0\) for some suitable parameters \(c_{4}\), \(c_{3}\), \(c_{2}\), \(c_{1}\), and \(c_{0}\). It can be solved analytically by Lodovico Ferraris method (WikiContributors). _(ii)_ The methegology for the case \(\theta^{t+1}<\theta^{l}\) is similar. We omit the details for brevity. Therefore, the exact minimizer of Problem (29) can be obtained in constant time. ## 8 Greedy Strategies for Working Set Selection In this section, tional efficiency of _ODBC_ for \(k=2\), as shown in Algorithm 2. Our methods only use the current solution \(\mathbf{X}^{t}\) and its subgradient \(\mathbf{G}^{t}\in\partial F(\mathbf{X}^{t})\). We refer the matrix \(\mathbf{S}\) as the scoring matrix. Our first Working Set Selection (_WSS_) strategy is based on the maximum Stationarity Violation pair, denoted as _WWS-SV_. It chooses the index \(\mathrm{B}=[\tilde{i},\tilde{j}]\) that most violates the first-order optimality condition. Our second working set selection strategy is rooted in the maximum Objective Reduction pair, denoted as _WWS-OR_. It chooses the index \(\mathrm{B}=[\tilde{i},\tilde{j}]\) that leads to the maximum objective reduction under certain criteria. We have the following results for the theoretical properties of _WWS-SV_ and _WWS-OR_. **Lemma 8.1**.: _(Properties of WSS-SV). Assume that the scoring matrix \(\mathbf{S}\) is computed using (30), we have: **(a)**\(\mathbf{X}^{t}\in\mathrm{St}(n,r)\) is a critical point \(\Leftrightarrow\mathbf{S}=\mathbf{0}\). **(b)**\(\mathbf{S}=\mathbf{0}\)\(\Leftrightarrow\mathbf{S}(\tilde{i},\tilde{j})=0\)._ _Please refer to Section B.15._ **Theorem 8.2**.: _(Properties of WSS-OR). Assume that the scoring matrix \(\mathbf{S}\) is computed using (31). Assume \(h(\mathbf{X})=0\) and (8) is considered to choose the matrix \(\mathbf{Q}\). We have the following results for **OBCD-Exact**: **(a)** The value of \(\mathbf{S}_{ij}\) for any given \([i,j]\) can be computed as \(\mathbf{S}_{ij}=\min(w_{1},w_{2})\), where \(w_{1}\triangleq-c_{1}-\sqrt{c_{1}^{2}+c_{2}^{2}}\), \(w_{2}\triangleq-c_{1}-\sqrt{c_{3}^{2}+c_{4}^{2}}\), \(c_{1}=\mathbf{T}_{ii}+\mathbf{T}_{jj}\), \(c_{2}=\mathbf{T}_{ij}-\mathbf{T}_{ji}\), and \(c_{3}=\mathbf{T}_{jj}-\mathbf{T}_{ii}\) and \(c_{4}=\mathbf{T}_{ij}+\mathbf{T}_{ji}\). **(b)** If \(\mathbf{X}^{t}\) is not a critical point, it holds that: \(\mathbf{S}(\tilde{i},\tilde{j})<0\) and \(F(\mathbf{X}^{t+1})<F(\mathbf{X}^{t})\). Refer to Section B.16._ When \(k\) coordinates are expected with \(k>2\), one can simply pick the top-\(k\) nonoverlapping coordinates according \(|\mathbf{S}|\) as the working set. The computational complexity of both _WSS-MV_ and _WSS-OR_ for a given pair \([i,j]\) is \(\mathcal{O}(r)\). Therefore, the overall computational complexity for all \(C_{n}^{2}\) pairs is \(\mathcal{O}(n^{2}r)\). Such computational complexity could be high when \(n\) is large. We consider the following more practical approach for \(k=2\) in our experiments. We randomly and uniformly sample \(p\triangleq\min(n,200)\) elements from the set \(\{\mathcal{B}_{i}\}_{i=1}^{C_{n}^{2}}\) as \(\{\mathcal{B}_{i}\}_{i=1}^{p}\), and then we pick the working set using \(\mathrm{B}=[\tilde{i},\tilde{j}]=\arg\max_{i,j,i\neq j}|\mathbf{S}_{ij}|,\,s.t.\,[i,j]\in\{\mathcal{B}_{i}\}_{i=1}^{p}\). This strategy leads to a significant reduction in computational complexity to \(\mathcal{O}(pr)\) when \(p\ll C_{n}^{2}\). ## 9 Experiments This section demonstrates the effectiveness and efficiency of Algorithm 1 on \(\ell_{0}\) norm-based SPCA, Nonnegative PCA, \(\ell_{1}\) norm-based SPCA, and the nonlinear eigenvalue problem. We compare the objective values \((F(\mathbf{X})-F_{\text{min}})\) for different methods after running \(t\) seconds with \(t\) varying from \(20\) to \(60\), where the constant \(F_{\text{min}}\) denotes the smallest objective of all the methods. ### Experimental Settings _Data Sets_. To generate the data matrix \(\mathbf{A}\), we consider 10 publicly available real-world or random data sets: 'w1a', 'TDT2', '20News','sector', 'E2006', 'MNIST', 'Gisette', 'CnnCaltech', 'Cifar', 'randn'. We randomly select a subset of examples from the original data set. The size of \(\mathbf{A}\in\mathbb{R}^{m\times n}\) are are chosen from the following set \((m,n)\)\(\in\)\(\{(2477,300)\), \((500,1000)\), \((8000,1000)\), \((6412,1000)\), \((2000,1000)\), \((60000,784)\), \((3000,1000)\), \((1000,1000)\), \((500,1000)\)}. _Implementations_. All methods are implemented in MATLAB on an Intel 2.6 GHz CPU with 32 GB RAM. Only our breakpoint searching procedure is developed in C++ and wrapped into the MATLAB code, since it requires elementwise loops that are less efficient in native MATLAB. We set \(\theta=10^{-5}\) for _OBCD_. Some Matlab code can be found in the authors' research webpages. _Initializations_. We use the same initializations for all methods. _(i)_ For the \(\ell_{0}\) and \(\ell_{1}\) norm-based SPCA tasks, since the optimal solutions are expected to be sparse, we simply set \(\mathbf{X}^{0}\in\mathrm{St}(n,r)\) to an identity matrix with \(\mathbf{X}_{ij}^{0}=1\) if \(i=j\) and otherwise 0. _(ii)_ For the nonlinear eigenvalue problems task, we set \(\mathbf{X}^{0}\) to a random orthogonal matrix using the following Matlab script '[X,\(\sim\)] = qr(randn(n,r),0)'. _(iii)_ For the nonnegative PCA task, we use a random nonnegative orthogonal matrix as \(\mathbf{X}^{0}\), which can be generated using the following strategy. We first randomly and uniformly par tion the index vector \([1,2,...,n]\) into \(r\) nonempty groups \(\{\mathcal{G}_{i}\}_{i=1}^{r}\) with \(\mathcal{G}_{i}\) being the index vector for the \(i\)-th group, then we set \(\mathbf{X}^{0}(\mathcal{G}_{i},i)=\frac{1}{|\mathcal{G}_{i}|}\) for all \(i\in[r]\), where \(|\mathcal{G}_{i}|\) is the number of elements for the \(i\)-th group. _Variants of OBCD_. We consider three variants of _OBCD_ using different working set selection strategies: _(i)__OBCD-R_** achieving lower objective values. _(ii) OptM-QR_ and _OptM-Cayley_ demonstrate similar efficiency. _(iii)_ Both of _OBCD-SV_ and _OBCD-OR_ are more effective than _FOForth_ and _OptM_. _(iv)_ _OBCD-OR_ converges faster than _OBCD-SV_ when the parameter \(\lambda\) becomes large. ## 10 Conclusions We propose a block coordinate descent method for solving nonsmooth composite optimization under orthogonality constraint. Our proposed method offers several advantages, including: _(i)_ It operates \(k\) rows of the solution matrix and has lower computation complexity footprints in each iteration. _(ii)_ It can handle general nonsmooth composite problems. _(iii)_ It is a feasible method. _(iv)_ It provides strong convergence guarantees under suitable conditions. _(v)_ It converges better local solutions than the classical critical point conditions. We also prove two extensions our proposed method, which includes efficient methods for solving the subproblems when \(k=2\), and novel greedy strategies for working set selection. Extensive experiments have shown that our proposed methods have shown superior performance than other existing methods.
2304.13452
Asymptotic growth of the signed Tate-Shafarevich groups for supersingular abelian varieties
Let $E$ be an elliptic curve over $\mathbb{Q}$ with supersingular reduction at $p$ with $a_p=0$. We study the asymptotic growth of the plus and minus Tate-Shafarevich groups defined by Lei along the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}$. In this paper, we work in the general framework of supersingular abelian varieties defined over $\mathbb{Q}$. Using Coleman maps constructed by Buyukboduk--Lei, we define the multi-signed Mordell-Weil groups for supersingular abelian varieties, provide an explicit structure of the dual of these groups as an Iwasawa module and prove a control theorem. Furthermore, we define the multi-signed Tate-Shafarevich groups and, by computing their Kobayashi rank, we provide an asymptotic growth formula along the cyclotomic tower of $\mathbb{Q}$.
Jishnu Ray
2023-04-26T11:18:42Z
http://arxiv.org/abs/2304.13452v1
# Asymptotic growth of the signed Tate-Shafarevich groups for supersingular abelian varieties ###### Abstract. Let \(E\) be an elliptic curve over \(\mathbb{Q}\) with supersingular reduction at \(p\) with \(a_{p}=0\). We study the asymptotic growth of the plus and minus Tate-Shafarevich groups defined by Lei along the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(\mathbb{Q}\). In this paper, we work in the general framework of supersingular abelian varieties defined over \(\mathbb{Q}\). Using Coleman maps constructed by Buyukboduk-Lei, we define the multi-signed Mordell-Weil groups for supersingular abelian varieties, provide an explicit structure of the dual of these groups as an Iwasawa module and prove a control theorem. Furthermore, we define the multi-signed Tate-Shafarevich groups and, by computing their Kobayashi rank, we provide an asymptotic growth formula along the cyclotomic tower of \(\mathbb{Q}\). Key words and phrases:Iwasawa theory, plus and minus Mordell-Weil groups, plus and minus Tate-Shafarevich groups, asymptotic growth, supersingular abelian varieties 2020 Mathematics Subject Classification: Primary: 11R23, Secondary: 11G10, 11G05, 11R18 ## 1. Introduction Let \(p\) be an odd prime, \(E\) be an elliptic curve over \(\mathbb{Q}\). Let \(\mathbb{Q}_{\infty}\) be the cyclotomic \(\mathbb{Z}_{p}\)-extension of \(\mathbb{Q}\) with Galois group \(\Gamma\). Write \(\mathbb{Q}_{(n)}\) for the unique sub-extension of degree \(p^{n}\) with Galois group \(\operatorname{Gal}(\mathbb{Q}_{(n)}/\mathbb{Q})=\Gamma_{n}\). Throughout the paper the field \(K\) will denote either \(\mathbb{Q}_{\infty}\) or \(\mathbb{Q}_{(n)}\). Let \(\Lambda=\mathbb{Z}_{p}[[\Gamma]]\) be the Iwasawa algebra of \(\Gamma\). Identify \(\mathbb{Z}_{p}[[\Gamma]]\) with \(\mathbb{Z}_{p}[[X]]\) via \(X=\gamma-1\) where \(\gamma\) is a topological generator of the Galois group \(\Gamma\). Define \(\Phi_{0}=X\) and for \(n\geqslant 1\), we write \(\Phi_{n}=\frac{(X+1)^{p^{n}-1}}{(X+1)^{p^{n-1}-1}}\) for the \(p^{n}-\)th cyclotomic polynomial in \(X+1\). Let \(\omega_{n}=(X+1)^{p^{n}}-1\). The classical \(p\)-primary Selmer group \(\operatorname{Sel}_{p}(E/K)\) (see (1)) fits into the following exact sequence: \[0\to E(K)\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p}\to\operatorname{Sel}_{p}(E/K) \to\Sha(E/K)[p^{\infty}]\to 0,\] where the third term is the \(p\)-primary component of the Tate-Shafarevich group over \(K\). The behaviour of the \(p\)-primary Selmer group and the \(p\)-primary Tate-Shafarevich group depend on the reduction type of \(E\) at the prime \(p\). ### The ordinary case If \(E\) has good ordinary reduction at \(p\), by a result of Kato [14, Theorem 17.4], the \(p\)-primary Selmer group \(\operatorname{Sel}_{p}(E/\mathbb{Q}_{\infty})\) is cotorsion over the cyclotomic Iwasawa algebra \(\mathbb{Z}_{p}[[\Gamma]]\). Mazur's control theorem [14] then implies that _if_ the groups \(\text{\cyfright\char 130}(E/\mathbb{Q}_{(n)})[p^{\infty}]\) are finite for all \(n\geqslant 0\), and hence let \(p^{s_{n}}=|\text{\cyfright\char 130}(E/\mathbb{Q}_{(n)})[p^{\infty}]|\), then \[s_{n}=\mu\cdot p^{n}+\lambda\cdot n+O(1).\] Here \(\mu\) and \(\lambda\) are the Iwasawa \(\mu\)- and \(\lambda\)-invariants associated to the Pontryagin dual of the Selmer group \(\text{\rm Sel}_{p}(E/\mathbb{Q}_{\infty})\). **The supersingular case (\(\mathbf{a_{p}=0}\)).** When \(E\) has supersingular reduction at \(p\), the \(p\)-primary Selmer group \(\text{\rm Sel}_{p}(E/\mathbb{Q}_{\infty})\) is not cotorsion over the Iwasawa algebra \(\mathbb{Z}_{p}[[\Gamma]]\). Under the assumption \(a_{p}=0\), Kobayashi constructed plus and minus Selmer groups \(\text{\rm Sel}_{p}^{\pm}(E/\mathbb{Q}_{\infty})\) which are cotorsion over the Iwasawa algebra \(\mathbb{Z}_{p}[[\Gamma]]\) (see [10]). Suppose that the groups \(\text{\cyfright\char 130}(E/\mathbb{Q}_{(n)})[p^{\infty}]\) are finite for all \(n\geqslant 0\), then the works of Kurihara [11] and Kobayashi [10] imply that for \(n\) sufficiently large, \[s_{n}-s_{n-1}=q_{n}+\lambda_{\pm}+\mu_{\pm}\cdot(p^{n}-p^{n-1})-r_{\infty}(E),\] where \(q_{n}\) is an explicit sum of powers of \(p\), \(\lambda_{\pm}\) and \(\mu_{\pm}\) are the Iwasawa invariants of the Pontryagin dual of \(\text{\rm Sel}_{p}^{\pm}(E/\mathbb{Q}_{\infty})\), \(r_{\infty}(E)\) is the rank of \(E\) over \(\mathbb{Q}_{\infty}\) and the sign \(\pm\) depends on the parity of the integer \(n\). **The supersingular case (\(\mathbf{a_{p}\neq 0}\)).** In this case, by the Weil bound, \(p\) can be either \(2\) or \(3\). For \(*=\{\flat,\sharp\}\), Sprung constructed the Selmer group \(\text{\rm Sel}_{p}^{*}(E/\mathbb{Q}_{\infty})\) in [13] which is cotorsion over \(\mathbb{Z}_{p}[[\Gamma]]\). He also proved a similar formula (see [13, Theorem 1.1]): \[s_{n}-s_{n-1}=q_{n}^{*}+\lambda_{*}+\mu_{*}\cdot(p^{n}-p^{n-1})-r_{\infty}(E),\] for \(n\gg 0.\) Here \(q_{n}^{*}\) is again an explicit sum of powers of \(p\), \(\lambda_{*}\) and \(\mu_{*}\) are Iwasawa invariants attached to the Pontryagin dual of the Selmer group \(\text{\rm Sel}_{p}^{*}(E/\mathbb{Q}_{\infty})\) and the choice of \(*\) depends on the choice of the "modesty algorithm" (see [13, Algorithm 3.9]). An analytic version of this formula has been generalized to weight \(2\) modular forms in [13, Theorem 1.7]. **Higher weight modular forms.** Let \(f\) be a normalised new cuspidal eigenform of weight \(k\geqslant 2\) such that \(p\) does not divide the level of \(f\). Let \(V_{f}\) be the cohomological \(p\)-adic Galois representation attached to \(f\) (and therefore has determinant \(\chi^{1-k}\) times a finite order character). Let \(T_{f}\) be the canonical \(G_{\mathbb{Q}}\)-stable \(\mathbb{Z}_{p}\)-lattice of \(V_{f}\) defined by Kato [11, Section 8.3]. Let \(a_{p}(f)\) be the \(p\)-th Fourier coefficient of the modular form \(f\) and suppose that \(\text{\rm ord}_{p}(a_{p}(f))>\frac{k-1}{2p}\) with \(3\leqslant k\leqslant p\). Then Lei-Loeffler-Zerbes proved an upper bound for the growth of the Bloch-Kato-Tate-Shafarevich group \(\text{\cyfright\char 130}(T_{f}(j)/\mathbb{Q}_{(n)})[p^{\infty}]\) for \(j\in[1,k-1]\) (see [10]). More precisely, for \(n\gg 0\), they have shown that \[s_{n}-s_{n-1}\leqslant q_{n}^{*}+\lambda_{*}+\mu_{*}\cdot(p^{n}-p^{n-1})+\kappa -r_{\infty}(f).\] Here \(r_{\infty}(f)\) is the limiting value of the Selmer coranks \[r_{n}(f)=\text{\rm corank}_{\mathbb{Z}_{p}}\,\text{\rm Sel}_{p}(T_{f}(j)/ \mathbb{Q}_{(n)})\] (Lei-Loeffler-Zerbes have shown that \(r_{n}(f)\) stabilize for \(n\gg 0\); see [10, Proposition 5.4]). The quantity \(q_{n}^{*}\) is once again a sum of powers of \(p\) depending on \(k\) and the parity of \(n\), \(\lambda_{*}\) and \(\mu_{*}\) are the Iwasawa invariants attached to Selmer groups defined in [10] for some suitably chosen basis of the Wach module of \(T_{f}\), \(\kappa\) is some integer depending on the image of some Coleman maps and, as in Sprung's case, the choice of \(*\) depends on an explicit algorithm (see [10]). **Greenberg's conjecture and the fine Tate-Shafarevich group.** Let us go back to our setting of elliptic curves over \(\mathbb{Q}\) with supersingular reduction at \(p\) with \(a_{p}=0\). For \(K=\mathbb{Q}_{\infty}\) or \(\mathbb{Q}_{(n)}\), let \(\mathcal{M}(E/K)\) be the fine Mordell-Weil group introduced by Wuthrich [23, Section 2]. It fits into an exact sequence \[0\to\mathcal{M}(E/K)\to\operatorname{Sel}_{0}(E/K)\to\dot{\mathcal{K}}(E/K)\to 0\] where \(\operatorname{Sel}_{0}(E/K)\) is the fine Selmer group (see (3)) and \(\dot{\mathcal{K}}(E/K)\subset\operatorname{\text{\rm III}}(E/K)[p^{\infty}]\) is called the fine Tate-Shafarevich group. The advantage of working with the fine Selmer group \(\operatorname{Sel}_{0}(E/\mathbb{Q}_{\infty})\) over the \(p\)-primary Selmer group \(\operatorname{Sel}_{p}(E/\mathbb{Q}_{\infty})\) is that it behaves more closely to the \(p\)-primary part of the class groups studied in Iwasawa theory. Moreover, the fine Selmer group \(\operatorname{Sel}_{0}(E/\mathbb{Q}_{\infty})\) is always cotorsion as a \(\mathbb{Z}_{p}[[\Gamma]]\)-module independent of the reduction type of \(E\) (see [11, Theorem 12.4(i)]). Recall that Conjecture A of Coates-Sujatha says that the \(\mu\)-invariant of the Pontryagin dual of \(\operatorname{Sel}_{0}(E/\mathbb{Q}_{\infty})\) is trivial (see [12]). One interesting application of defining and working with the fine Mordell-Weil group and the fine Tate-Shafarevich group is that it allows us to study Conjecture A by analyzing the growth of \(|\dot{\mathcal{K}}(E/\mathbb{Q}_{(n)})|\). Assume that \(\dot{\mathcal{K}}(E/\mathbb{Q}_{(n)})\) is finite for all \(n\geqslant 0\), and let \(|\dot{\mathcal{K}}(E/\mathbb{Q}_{(n)})|=p^{s_{n}}\). It is known that \(s_{n}\) grows like \(\mu\cdot p^{n}+\lambda\cdot n+O(1)\) for some integers \(\mu\geqslant 0\) and \(\lambda\geqslant 0\). Wuthrich conjectures that \(e_{n}=\lambda\cdot n+O(1)\), i.e. \(\mu=0\) (see [23, Conjecture 8.2]). Another application of defining the fine Mordell-Weil group is to provide evidence towards Greenberg's conjecture on the characteristic ideal of the Pontryagin dual of the fine Selmer group \(\operatorname{Sel}_{0}(E/\mathbb{Q}_{\infty})\). Greenberg proposed that \[\operatorname{Char}_{\mathbb{Z}_{p}[[\Gamma]]}\operatorname{Sel}_{0}(E/ \mathbb{Q}_{\infty})^{\wedge}=\Big{(}\prod_{e_{n}>0}\Phi_{n}^{e_{n}-1}\Big{)}\] where \(e_{n}\) is as in (8) (see [13, Problem 0.7]). In [11, Theorem C], under the hypothesis that \(\operatorname{\text{\rm III}}(E/\mathbb{Q}_{(n)})[p^{\infty}]\) is finite for all \(n\geqslant 0\), Lei showed that \[\operatorname{Char}_{\mathbb{Z}_{p}[[\Gamma]]}\mathcal{M}(E/\mathbb{Q}_{\infty })^{\wedge}=\Big{(}\prod_{e_{n}>0}\Phi_{n}^{e_{n}-1}\Big{)}\] and hence Greenberg's problem is affirmative if and only if \(\dot{\mathcal{K}}(E/\mathbb{Q}_{\infty})\) is finite; this is conjectured to be finite for all but finitely many \(p\) (see [23, Conjecture 8.4]). **The plus and minus Tate-Shafarevich groups.** Generalizing the plus and minus subgroups of Kobayashi [14], Lei defined the plus/minus Mordell-Weil group \(\mathcal{M}^{\pm}(E/K)\). It fits into the following exact sequence \[0\to\mathcal{M}^{\pm}(E/K)\to\operatorname{Sel}_{p}^{\pm}(E/K)\to \mathcal{H}^{\pm}(E/K)\to 0\] where the third term \(\mathcal{H}^{\pm}(E/K)\subset\operatorname{\text{III}}(E/K)[p^{\infty}]\) is the plus/minus Tate-Shafarevich group (see [13, Section 5 and Lemma 5.7]). Furthermore in [13, Theorem D], he computed the greatest common divisor of the characteristic ideals of the Pontryagin dual of these new plus and minus Mordell-Weil groups and showed that it is the same as the description of \(\gcd(L_{p}^{+},L_{p}^{-})\) predicted by Kurihara-Pollack (see [12, Problem 3.2]); here \(L_{p}^{\pm}\) are Pollack's \(p\)-adic \(L\)-function defined in [15]. **Our work and main results.** The starting point of this paper is to study the growth of the new plus/minus Tate-Shafarevich group \(\mathcal{H}^{\pm}(E/\mathbb{Q}_{(n)})\) and obtain an asymptotic formula, similar to the various cases discussed above, involving the Iwasawa invariants \(\mu_{\pm}\), \(\lambda_{\pm}\) and the rank contribution from the plus/minus Mordell-Weil group. However, we do not restrain ourselves to the case of elliptic curves and work in the general setup of abelian varieties over \(\mathbb{Q}\) with supersingular reduction at \(p\) and multi-signed Selmer groups defined by Buyukboduk-Lei in [1]. Let \(A\) be a \(g\)-dimensional abelian variety over \(\mathbb{Q}\) (its \(p\)-adic Tate module is of \(\mathbb{Z}_{p}\)-rank \(2g\)) with good supersingular reduction at \(p\) and let \(A^{\vee}\) be the dual abelian variety. For any subset \(\underline{I}\in\{1,...,2g\}\) of cardinality \(g\), using the theory of Coleman maps, we define the \(\underline{I}\)-Mordell-Weil group \(\mathcal{M}_{\underline{I}}(A^{\vee}/K)\) and the \(\underline{I}\)-Tate-Shafarevich group \(\mathcal{H}^{\vee}_{\underline{I}}(A^{\vee}/K)\) (see Section 3) which fits into the following exact sequence \[0\to\mathcal{M}_{\underline{I}}(A^{\vee}/K)\to\operatorname{Sel}_{\underline {I}}(A^{\vee}/K)\to\mathcal{H}^{\vee}_{\underline{I}}(A^{\vee}/K)\to 0,\] where the middle term is the \(\underline{I}\)-Selmer group defined by Buyukboduk-Lei in [1]. In Lemma 3.5, we show that the \(\underline{I}\)-Tate-Shafarevich group \(\mathcal{H}^{\vee}_{\underline{I}}(A^{\vee}/K)\) is a subgroup of the \(p\)-primary component of the Tate-Shafarevich group \(\operatorname{\text{III}}(A^{\vee}/K)\). Let \(\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})^{\wedge}\) be the Pontryagin dual of \(\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\). Using a theorem of Lee [14], we are able to deduce the structure of \(\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})^{\wedge}\). **Theorem 1.1** (see Theorem 4.3).: _Assume that \(r\) as in (6) is trivial. Then there is a pseudos-isomorphism_ \[\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})^{\wedge}\sim \bigoplus_{i=1}^{u_{\underline{I}}}\frac{\Lambda}{\Phi_{c_{i,\underline{I}}}}\] _for some non-negative integers \(u_{\underline{I}}\) and \(c_{i,\underline{I}}\)._ We remark that the assumption on \(r\) is satisfied for elliptic curves over \(\mathbb{Q}\). Next, we prove the following control theorem of \(\underline{I}\)-Mordell-Weil groups. **Theorem 1.2** (see Theorem 3.6).: _The kernel of the natural map_ \[m_{n,\underline{I}}:\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\to \mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})^{\Gamma_{n}}\] _is trivial. Furthermore, if \(\mathcal{K}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\) is finite, then \(\operatorname{coker}(m_{n,\underline{I}})\) is finite._ This control theorem is then applied to find the Kobayashi rank \(\nabla\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})^{\wedge}\) (see Definition 2.3). **Theorem 1.3**.: _Assume the hypotheses of Theorem 4.4. Then, for some sufficiently large integer \(n_{0}\),_ \[\nabla\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})^{\wedge}=\sum_{ \begin{subarray}{c}c_{i,\underline{I}}\subseteq n_{0}\\ i=1\end{subarray}}^{u_{\underline{I}}}\operatorname{rank}_{\mathbb{Z}_{p}} \frac{\Lambda}{(\Phi_{c_{i,\underline{I}}},\omega_{n_{0}})}\] _for all \(n>n_{0}\)._ Examples of abelian varieties satisfying the hypotheses of Theorem 4.4 are given in Section 4.2. Let \(\operatorname{Sel}_{0}(A^{\vee}/K)\) be the fine Selmer group over \(K\). By analyzing and comparing the control diagrams (see diagrams 2 and 4) associated to \[\operatorname{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\xrightarrow{ \alpha_{(n)}}\operatorname{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})^{ \Gamma_{n}}\text{ and }\operatorname{Sel}_{0}(A^{\vee}/\mathbb{Q}_{(n)}) \to\operatorname{Sel}_{0}(A^{\vee}/\mathbb{Q}_{(n)})^{\Gamma_{n}}\] we compute \(\nabla\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\); here \(\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\) is the Pontryagin dual of the Selmer group \(\operatorname{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\). **Theorem 1.4** (See Theorem 4.2).: _Suppose that the Selmer group \(\operatorname{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) is \(\mathbb{Z}_{p}[[\Gamma]]\)-cotorsion. Then for sufficiently large \(n\), \(\operatorname{coker}(\alpha_{(n)})\) stabilizes and hence_ \[\nabla\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})=\lambda( \mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty}))+(p^{n}-p^{n-1})\mu (\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})).\] Combining the above results we obtain the rate of growth of \(|\mathcal{K}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})^{\wedge}|\). **Theorem 1.5** (See Theorem 4.5).: _Assume the hypotheses as in Theorem 4.4. Let \(p^{s_{n}}=|\mathcal{K}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})^{\wedge}|\). Then for some sufficiently large integer \(n_{0}\),_ \[s_{n}-s_{n-1}=\lambda(\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty} ))+(p^{n}-p^{n-1})\mu(\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty }))-\sum_{\begin{subarray}{c}c_{i,\underline{I}}\subseteq n_{0}\\ i=1\end{subarray}}^{u_{\underline{I}}}\operatorname{rank}_{\mathbb{Z}_{p}} \frac{\Lambda}{(\Phi_{c_{i,\underline{I}}},\omega_{n_{0}})}\] _for all \(n>n_{0}\)._ The definitions of the \(\underline{I}\)-Mordell-Weil group \(\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) and the \(\underline{I}\)-Selmer group \(\operatorname{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) (and hence that of the \(\underline{I}\)-Tate-Shafarevich group \(\mathcal{K}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\)) depend on a choice of a Hodge-compatible basis of the associated Dieudonne module (see Definition 2.1). We show that under certain hypothesis (which is satisfied for abelian varieties of the \(GL(2)\)-type) the groups \(\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) and \(\operatorname{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) are canonically defined for two indexing sets, namely \(\underline{I}=\{1,...,g\}\) and \(\underline{I}=\{g+1,...,2g\}\) (see Proposition 4.8). When the abelian variety is an elliptic curve (i.e. \(g=1\)), then these groups exactly coincide with their canonically defined plus and minus variants, namely the groups \(\mathcal{M}^{\pm}(E/\mathbb{Q}_{\infty})\) and \(\operatorname{Sel}^{\pm}(E/\mathbb{Q}_{\infty})\). In the case of elliptic curves over \(\mathbb{Q}\) with supersingular reduction at the prime \(p\) with \(a_{p}=0\), it is possible to further simply the general results above and show the following theorem. **Theorem 1.6** (See Theorem 5.1).: _Assume that \(\dot{\mathcal{K}}^{\pm}(E/\mathbb{Q}_{(n)})\) is finite for all \(n\geqslant 0\). Let \(p^{s_{n}}=|\dot{\mathcal{K}}^{\pm}(E/\mathbb{Q}_{(n)})^{\wedge}|\). Then, for some sufficiently large integer \(n_{0}\),_ \[\nabla\mathcal{M}^{\pm}(E/\mathbb{Q}_{\infty})^{\wedge}=\sum_{k\geqslant 0} ^{n_{0}}\operatorname{rank}_{\mathbb{Z}_{p}}\frac{\Lambda}{(\Phi_{k}^{r^{\pm} },\omega_{n_{0}})}\] _and for all \(n>n_{0}\),_ \[s_{n}-s_{n-1}=\lambda(\mathcal{X}^{\pm}(E/\mathbb{Q}_{\infty}))+(p^{n}-p^{n-1} )\mu(\mathcal{X}^{\pm}(E/\mathbb{Q}_{\infty}))-\sum_{k\geqslant 0}^{n_{0}} \operatorname{rank}_{\mathbb{Z}_{p}}\frac{\Lambda}{(\Phi_{k}^{r^{\pm}}, \omega_{n_{0}})}.\] _These integers \(r^{\pm}_{k}\) satisfy the following relations._ * \(r^{+}_{0}=r^{-}_{0}=e_{0}\)_._ * _If_ \(k>0\)_, then_ \(a_{k}=\max(0,e_{k}-1)\leqslant\min(r^{+}_{k},r^{-}_{k})\) _and_ \(r^{+}_{k}+r^{-}_{k}=e_{k}+a_{k}\)_._ We would like to conclude the introduction with a point of caution here. When \(A\) is an elliptic curve over \(\mathbb{Q}\) with \(a_{p}=0\), for \(\underline{I}=\{1\}\) or \(\underline{I}=\{2\}\), the Selmer group \(\operatorname{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) at the cyclotomic level coincides with Kobayashi's Selmer group \(\operatorname{Sel}_{p}^{\pm}(E/\mathbb{Q}_{\infty})\), but the Selmer group \(\operatorname{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\) at the finite level \(\mathbb{Q}_{(n)}\)_does not_ coincide with Kobayashi's plus/minus Selmer group defined using the local condition \(E^{\pm}(\mathbb{Q}_{(n),p})\) (see Remark 3.3). This definition of signed Selmer groups at finite level satisfies a simpler control theorem than Kobayashi's control theorem (compare the analysis after diagram 2 with that of [13, Theorem 9.3]). This definition is in coherence with the existing literature and is the one used in [16], which we intend to generalize. This definition is also used by Ponsinet in [14, Section 1.6]. Defining signed Selmer groups for abelian varieties at finite level by directly generalizing Kobayashi's plus/minus local conditions \(E^{\pm}(\mathbb{Q}_{(n),p})\) seem out of reach with present techniques (see open question after remark 3.3). Finally, we would like to mention that one can easily generalize the definitions of the \(\underline{I}\)-Mordell-Weil group and the \(\underline{I}\)-Tate-Shafarevich group for the cyclotomic extension of a general number field. However, eventually in section 4, we will need results of [10] which works only for \(\mathbb{Q}_{\infty}\). Hence, throughout the paper we have taken our base field to be \(\mathbb{Q}\). ## Acknowledgement The author thanks Antonio Lei and Christian Wuthrich for several helpful conversations. This project is supported by the Inspire research grant. ## 2. Signed Selmer groups over the cyclotomic tower Let \(A\) be a \(g\)-dimensional abelian variety defined over \(\mathbb{Q}\) with good supersingular reduction at \(p\). Throughout the article we will need the following notations. * For \(n\geqslant 1\), let \(A[p^{n}]\) be the \(p^{n}\)-torsion points of \(A(\overline{\mathbb{Q}})\) and let \[A[p^{\infty}]=\cup_{n}A[p^{n}].\] * Let \(T\) be the \(p\)-adic Tate module associated with the abelian variety \(A\) with a continuous action of the absolute Galois group \(G_{\mathbb{Q}}\). It is a free \(\mathbb{Z}_{p}\)-module of rank \(2g\). * Let \(V=T\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}\), \(V\) is a crystalline \(G_{\mathbb{Q}_{p}}\)-representation with Hodge-Tate weights \(0\) and \(1\), both with multiplicity \(g\). * For \(i\geqslant 0\), the Iwasawa cohomology group \(H^{i}_{\mathrm{Iw,cyc}}(\mathbb{Q}_{p},T)\) is defined as the projective limit of \(H^{i}(\mathbb{Q}(\mu_{p^{n}})_{v_{n}},T)\) relative to the corestriction maps. Here \(v_{n}\) is the prime of \(\mathbb{Q}(\mu_{p^{n}})\) above \(p\). * The Dieudonne module \(\mathbb{D}_{\mathrm{cris},p}(T)\) associated to \(T\) (see [1]) is a free \(\mathbb{Z}_{p}\)-module of rank \(2g\). * As mentioned in the introduction, \(A^{\vee}\) will be the dual abelian variety and for a \(\mathbb{Z}_{p}\)-module \(M\), \(M^{\wedge}\) will be the Pontryagin dual \(\mathrm{Hom}(M,\mathbb{Q}_{p}/\mathbb{Z}_{p})\). The module \(\mathbb{D}_{\mathrm{cris},p}(T)\) is equipped with a Frobenius after tensoring with \(\mathbb{Q}_{p}\) and a filtration of \(\mathbb{Z}_{p}\)-modules \((\mathrm{Fil}^{i}\mathbb{D}_{\mathrm{cris},p}(T))_{i\in\mathbb{Z}}\) satisfying \[\mathrm{Fil}^{i}\mathbb{D}_{\mathrm{cris},p}(T)=\begin{cases}0&\text{if }i \geqslant 1,\\ \mathbb{D}_{\mathrm{cris},p}(T)&\text{if }i\leqslant-1.\end{cases}\] **Definition 2.1**.: _Choose an \(\mathbb{Z}_{p}\)-basis \(\{u_{1},\ldots,u_{2g}\}\) of \(\mathbb{D}_{\mathrm{cris},p}(T)\) such that \(\{u_{1},\ldots,u_{g}\}\) is a \(\mathbb{Z}_{p}\)-basis of \(\mathrm{Fil}^{0}\mathbb{D}_{\mathrm{cris},p}(T)\). Such a basis is called Hodge-compatible._ The matrix of the Frobenius \(\varphi\) with respect to this basis is of the form \[C_{\varphi,p}=C_{p}\left[\begin{array}{c|c}I_{g}&0\\ \hline 0&\frac{1}{p}I_{g}\end{array}\right]\] for some \(C_{p}\in\mathrm{GL}_{2g}(\mathbb{Z}_{p})\) and the identity \(g\times g\) matrix \(I_{g}\). Following [1, Definition 2.4], for \(n\geqslant 1\), we can define \[C_{p,n}:=\left[\begin{array}{c|c}I_{g}&0\\ \hline 0&\Phi_{p^{n}}(1+X)I_{g}\end{array}\right]C_{p}^{-1}\text{ and }M_{p,n}:=(C_{\varphi,p})^{n+1}C_{p,n}\cdots C_{p,1}.\] Recall that as \(p\) is odd, \[\operatorname{Gal}(\mathbb{Q}(\mu_{p^{\infty}})/\mathbb{Q})\cong\Delta\times\Gamma\] where \(\Delta\) is a finite group of order \(p-1\). Set \(\mathcal{H}=\mathbb{Q}_{p}[\Delta]\otimes_{\mathbb{Q}_{p}}\mathcal{H}(\Gamma)\) where \(\mathcal{H}(\Gamma)\) is the set of elements \(f(\gamma-1)\) with \(\gamma\in\Gamma\) and \(f(X)\in\mathbb{Q}_{p}[[X]]\) is convergent on the \(p\)-adic open unit disk. Let \[\mathcal{L}_{T,p}:H^{1}_{\operatorname{Iw},\operatorname{cyc}}(\mathbb{Q}_{p},T)\to\mathcal{H}\otimes\mathbb{D}_{\operatorname{cris},p}(T)\] be the Perrin-Riou's big logarithm map (see [11, Definition 3.4]). It interpolates Kato's dual exponential maps [10, II, SS1.2] \[\exp_{p,n}^{*}:H^{1}(\mathbb{Q}_{p}(\mu_{p^{n}}),T)\to\mathbb{Q}_{p}(\mu_{p^{n }})\otimes\operatorname{Fil}^{0}\mathbb{D}_{\operatorname{cris},p}(T).\] In [1, Theorem 1.1], Buyukboduk-Lei showed that the big logarithm map decomposes into Coleman maps in the following way. \[\mathcal{L}_{T,p}^{\infty}=(u_{1},\ldots,u_{2g})\cdot M_{T,p}\cdot\begin{bmatrix} \operatorname{Col}_{T,p,1}\\ \vdots\\ \operatorname{Col}_{T,p,2g}\end{bmatrix},\] where \(M_{T,p}=\underset{n\to\infty}{\lim}M_{p,n}\) is a \(2g\times 2g\) logarithmic matrix defined over \(\mathcal{H}\) and for \(i\in\{1,...,2g\}\), the maps \(\operatorname{Col}_{T,p,i}\), are \(\mathbb{Z}_{p}[\Delta][[\Gamma]]\)-homomorphisms from \(H^{1}_{\operatorname{Iw},\operatorname{cyc}}(\mathbb{Q}_{p},T)\to\mathbb{Z}_{ p}[\Delta][[\Gamma]].\) For any subset \(I_{p}\) of \(\{1,...,2g\}\), one can define \[\operatorname{Col}_{T,I_{p}}:H^{1}_{\operatorname{Iw},\operatorname {cyc}}(\mathbb{Q}_{p},T) \to\prod_{k=1}^{|I_{p}|}\mathbb{Z}_{p}[\Delta][[\Gamma]],\] \[\mathbf{z} \mapsto(\operatorname{Col}_{T,p,i}(\mathbf{z}))_{i\in I_{p}}.\] Let \(\mathcal{I}\) be the set of all subsets of \(\{1,...,2g\}\) of cardinality \(g\). So \(\underline{I}\in\mathcal{I}\) will be a subset of \(\{1,...,2g\}\) having cardinality \(g\). Given any such \(\underline{I}\in\mathcal{I}\) and \(\mathbf{z}=z_{1}\wedge\cdots\wedge z_{g}\in\bigwedge^{g}H^{1}_{\operatorname{Iw },\operatorname{cyc}}(\mathbb{Q}_{p},T)\), one can define \[\operatorname{Col}_{T,\underline{I}}(\mathbf{z})=\det(\operatorname{Col}_{T,p,i }(z_{j}))_{i\in\underline{I},1\leqslant j\leqslant g}.\] For all \(n\geqslant 1\), let \(H_{p,n}=C_{p,n}\cdots C_{p,1}\). For a pair \(\underline{I}\) and \(\underline{J}\) in \(\mathcal{I}\), we define \(H_{\underline{L},\underline{J},n}\) as the \((\underline{I},\underline{J})\)-minor of \(H_{p,n}\). Since the prime \(p\) is totally ramified in \(\mathbb{Q}(\mu_{p^{\infty}})\), we will also use it to denote the unique prime of \(\mathbb{Q}(\mu_{p^{\infty}})\) above \(p\). One defines the subset \[H^{1}_{I_{p}}(\mathbb{Q}(\mu_{p^{\infty}})_{p},A^{\vee}[p^{\infty}])\subset H ^{1}(\mathbb{Q}(\mu_{p^{\infty}})_{p},A^{\vee}[p^{\infty}])\] as the orthogonal complement of \(\ker(\operatorname{Col}_{T,I_{p}})\) under the Tate's local pairing \[H^{1}(\mathbb{Q}(\mu_{p^{\infty}})_{p},A^{\vee}[p^{\infty}])\times H^{1}_{ \operatorname{Iw},\operatorname{cyc}}(\mathbb{Q}_{p},T)\to\mathbb{Q}_{p}/ \mathbb{Z}_{p}.\] By the inflation-restriction exact sequence it is possible to show that the restriction map \[H^{1}(\mathbb{Q}_{\infty,p},A^{\vee}[p^{\infty}])\to H^{1}(\mathbb{Q}(\mu_{p^{ \infty}})_{p},A^{\vee}[p^{\infty}])^{\Delta}\] is an isomorphism; because the order of \(\Delta\) is \(p-1\) and \(A^{\vee}[p^{\infty}](\mathbb{Q}(\mu_{p^{\infty}})_{p})\) is a finite \(p\)-group. Using this isomorphism one defines \[H^{1}_{I_{p}}(\mathbb{Q}_{\infty,p},A^{\vee}[p^{\infty}])\subset H^{1}( \mathbb{Q}_{\infty,p},A^{\vee}[p^{\infty}])\] as \[H^{1}_{I_{p}}(\mathbb{Q}_{\infty,p},A^{\vee}[p^{\infty}])=H^{1}_{I_{p}}( \mathbb{Q}(\mu_{p^{\infty}})_{p},A^{\vee}[p^{\infty}])^{\Delta}.\] **Definition 2.2**.: _For \(\underline{I}=(I_{p})\in\mathcal{I}\), the \(\underline{I}\)-Selmer group of \(A^{\vee}\) over \(\mathbb{Q}_{\infty}\) is defined by_ \[\operatorname{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})=\ker\Big{(} H^{1}(\mathbb{Q}_{\infty},A^{\vee}[p^{\infty}])\to\prod_{v\nmid p}H^{1}( \mathbb{Q}_{\infty,v},A^{\vee}[p^{\infty}])\times\frac{H^{1}(\mathbb{Q}_{ \infty,p},A^{\vee}[p^{\infty}])}{H^{1}_{I_{p}}(\mathbb{Q}_{\infty,p},A^{\vee}[ p^{\infty}])}\Big{)}.\] ### Preliminaries on Kobayashi rank **Definition 2.3**.: _Let \((N_{n})_{n}\) be an inverse system of finitely generated \(\mathbb{Z}_{p}\)-modules with transition maps \(\pi_{n}:N_{n}\to N_{n-1}\). Suppose that the map \(\pi_{n}\) has finite kernel and cokernel. Then the Kobayashi rank \(\nabla N_{n}\) is defined as_ \[\nabla N_{n}:=\operatorname{len}(\ker\pi_{n})-\operatorname{len}(\operatorname {coker}\pi_{n})+\operatorname{rank}_{\mathbb{Z}_{p}}N_{n-1}.\] Here are a few basic properties that we will need. For clarifications see [12, Lemma 10.5 (ii)] and [10, Lemma 4.3]. **Lemma 2.4**.: _Let \(0\to(N^{\prime}_{n})\to(N_{n})\to(N^{\prime\prime}_{n})\to 0\) be a short exact sequence. If any two of \(\nabla N_{n},\nabla N^{\prime}_{n},\nabla N^{\prime\prime}_{n}\) are defined, then the other is also defined and_ \[\nabla N_{n}=\nabla N^{\prime}_{n}+\nabla N^{\prime\prime}_{n}.\] **Lemma 2.5**.: _Let \(N\) be a finitely generated torsion \(\mathbb{Z}_{p}[[X]]\)-module with the characteristic polynomial \(f\), and let \(N_{n}=N/\omega_{n}N\). Consider the projective system \((N_{n})_{n}\). Then for \(n\gg 0\), \(\nabla N_{n}\) is defined and_ \[\nabla N_{n}=\lambda(N)+(p^{n}-p^{n-1})\mu(N),\] _where \(\lambda\) and \(\mu\) are the Iwasawa \(\lambda\)- and \(\mu\)-invariants of \(N\)._ **Lemma 2.6**.: _Suppose that \((N_{n})_{n}\) is an inverse system of finitely generated \(\mathbb{Z}_{p}\)-modules such that for all \(n\geqslant 1\), \(|N_{n}|=p^{s_{n}}\) for some integer \(s_{n}\). Then_ \[\nabla N_{n}=s_{n}-s_{n-1}.\] ## 3. Signed Mordell-Weil group and signed Tate-Shafarevich group In this section, we define the signed Mordell-Weil group and the signed Tate-Shafarevich group generalizing [10]. **Definition 3.1**.: _The \(\underline{I}\)-Mordell-Weil group of \(A^{\vee}\) over the cyclotomic extension \(\mathbb{Q}_{\infty}\) is defined by_ \[\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})=\ker\Big{(}A^{\vee} (\mathbb{Q}_{\infty})\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p}\to\frac{H^{1}( \mathbb{Q}_{\infty,p},A^{\vee}[p^{\infty}])}{H^{1}_{I_{p}}(\mathbb{Q}_{\infty,p},A^{\vee}[p^{\infty}])}\Big{)}.\] Now, let us define the \(\underline{I}\)-Mordell-Weil group at the level \(\mathbb{Q}_{(n)}\), for which we need to define the local condition at \(p\). For \(n\geqslant 0\), we define \[H^{1}_{I_{p}}(\mathbb{Q}_{(n),p},A^{\vee}[p^{\infty}]):=H^{1}_{I_{p}}( \mathbb{Q}_{\infty,p},A^{\vee}[p^{\infty}])^{\Gamma_{n}}.\] As mentioned before, this definition is in coherence with [12, Section 1.6] and [10, Definition 5.2]. **Definition 3.2**.: _We define the \(\underline{I}\)-Mordell-Weil group and the \(\underline{I}\)-Selmer group over \(\mathbb{Q}_{(n)}\) as_ \[\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})=\ker\Big{(}A^{\vee}( \mathbb{Q}_{(n)})\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p}\to\frac{H^{1}(\mathbb{Q }_{(n),p},A^{\vee}[p^{\infty}])}{H^{1}_{I_{p}}(\mathbb{Q}_{(n),p},A^{\vee}[p^{ \infty}])}\Big{)},\] \[\mathrm{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})=\ker\Big{(}H^{1}( \mathbb{Q}_{(n)},A^{\vee}[p^{\infty}])\to\prod_{v\nmid p}H^{1}(\mathbb{Q}_{(n ),v},A^{\vee}[p^{\infty}])\times\frac{H^{1}(\mathbb{Q}_{(n),p},A^{\vee}[p^{ \infty}])}{H^{1}_{I_{p}}(\mathbb{Q}_{(n),p},A^{\vee}[p^{\infty}])}\Big{)}.\] **Remark 3.3**.: _When \(A\) is an elliptic curves over \(\mathbb{Q}\) with \(a_{p}=0\), for \(\underline{I}=\{1\}\) and \(\underline{I}=\{2\}\), these definitions of \(\mathrm{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) coincide with Kobayashi's plus and minus Selmer groups (see [1, Appendix A]). However, the definitions of \(\mathrm{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\) does not coincide with Kobayashi's plus and minus Selmer groups [11] defined using \(E^{\pm}(\mathbb{Q}_{(n),p})\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p}\) (see [10, Remark 5.4])._ **Open question.** In [11], Kobayashi could explicitly describe the local conditions \(E^{\pm}(\mathbb{Q}_{(n),p})\) using trace maps. It is unknown whether one can write \(H^{1}_{I_{p}}(\mathbb{Q}_{(n),p},A^{\vee}[p^{\infty}])=A^{\vee,L}(\mathbb{Q} _{(n),p})\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p}\) for some well-defined \(A^{\vee,\underline{I}}\). Via the Kummer map, we identify \(A^{\vee}(\mathbb{Q}_{(n)})\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p}\) with a subgroup of \(H^{1}(\mathbb{Q}_{(n)},A^{\vee}[p^{\infty}])\) and see that \(\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\) is a subgroup of \(\mathrm{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\). This allows us to define the \(\underline{I}\)-Tate-Shafarevich group as follows. **Definition 3.4**.: _Let \(K=\mathbb{Q}_{(n)}\) or \(\mathbb{Q}_{\infty}\). The (\(p\)-primary) \(\underline{I}\)-Tate Shafarevich group \(\mathcal{X}_{\underline{I}}(A^{\vee}/K)\) is defined as the quotient_ \[\mathcal{X}_{\underline{I}}(A^{\vee}/K):=\mathrm{Sel}_{\underline{I}}(A^{ \vee}/K)/\mathcal{M}_{\underline{I}}(A^{\vee}/K).\] By definition, we have an exact sequence \[0\to\mathcal{M}_{\underline{I}}(A^{\vee}/K)\to\mathrm{Sel}_{\underline{I}}(A^ {\vee}/K)\to\mathcal{X}_{\underline{I}}(A^{\vee}/K)\to 0.\] **Lemma 3.5**.: _Let \(K=\mathbb{Q}_{(n)}\) or \(\mathbb{Q}_{\infty}\). There is an inclusion \(\dot{\mathcal{K}}_{\underline{I}}(A^{\vee}/K)\subset\text{\cy It follows from a result of Imai [12] that \(A^{\vee}(\mathbb{Q}_{\infty})[p^{\infty}]\) is trivial (see also [13, Lemma 1.1] and hence the inflation-restriction exact sequence gives that the central vertical map \(\beta_{(n)}\) is an isomorphism. For non-archimedean primes \(v\nmid p\), the kernel of the map \[H^{1}(\mathbb{Q}_{(n),v},A^{\vee}[p^{\infty}])\xrightarrow{\delta_{(n),v}}H^{ 1}(\mathbb{Q}_{\infty,w},A^{\vee}[p^{\infty}])^{\Gamma_{n}}\] is \(H^{1}(\Gamma_{n},A^{\vee}(\mathbb{Q}_{\infty,w})[p^{\infty}])\cong(A^{\vee}( \mathbb{Q}_{\infty,w})[p^{\infty}])_{\Gamma_{n}}.\) This is finite and bounded by \[[A^{\vee}(\mathbb{Q}_{\infty,w})[p^{\infty}]:(A^{\vee}(\mathbb{Q}_{\infty,w}) [p^{\infty}])_{\operatorname{div}}]\] which is independent of \(n\) (see [13, Proof of Lemma 2.3]). If \(A^{\vee}\) has good reduction at \(v\), then \(A^{\vee}(\mathbb{Q}_{\infty,w})[p^{\infty}]\) is divisible and hence \(\ker(\delta_{(n),v})\) is trivial. Furthermore, there are finitely many non-archimedean primes where \(A^{\vee}\) has bad reduction, and for each of them \(\ker(\delta_{(n),v})\) is bounded and the bound is independent of \(n\). If \(v\) is archimedean, \(v\) splits completely in \(\mathbb{Q}_{\infty}/\mathbb{Q}\) and hence \(\ker(\delta_{(n),v})\) is trivial. We are left to analyze the kernel of the map \(\delta_{(n),p}\). Consider the diagram By the definition of the local condition \(H^{1}_{I_{p}}(\mathbb{Q}_{(n),p},A^{\vee}[p^{\infty}])\), the left vertical map is an isomorphism. The central vertical map has kernel \[H^{1}(\Gamma_{n},A^{\vee}(\mathbb{Q}_{\infty,p})[p^{\infty}])\] which is trivial since the group \(A^{\vee}(\mathbb{Q}_{\infty,p})\) has no \(p\)-torsion (see [12] and [13, Lemma 1.1]). Therefore, by the snake lemma the kernel of \(\delta_{(n),p}\) is trivial. ## 4. Results on asymptotic growth In this section, assuming that the \(L\)-Tate-Shafarevich group \(\mathcal{X}_{L}(A^{\vee}/\mathbb{Q}_{(n)})\) is finite we compute its asymptotic growth as \(n\) is large. For \(K=\mathbb{Q}_{(n)}\) or \(\mathbb{Q}_{\infty}\), recall that \(\mathcal{X}_{L}(A^{\vee}/K)\) is the Pontryagin dual of \(\operatorname{Sel}_{L}(A^{\vee}/K)\). We first need to compute \(\nabla\mathcal{X}_{L}(A^{\vee}/\mathbb{Q}_{(n)})\), but for this we need a more precise control on the growth of cokernels of the natural morphisms \[\operatorname{Sel}_{0}(A^{\vee}/\mathbb{Q}_{(n)})\to\operatorname{Sel}_{0}(A ^{\vee}/\mathbb{Q}_{\infty})^{\Gamma_{n}}\] of _fine_ Selmer groups as \(n\) is large. The fine Selmer group of \(A^{\vee}\) over \(K\) is defined by \[\operatorname{Sel}_{0}(A^{\vee}/K)=\ker\Big{(}H^{1}(K,A^{\vee}[p^{\infty}]) \to\prod_{v}H^{1}(K_{v},A^{\vee}[p^{\infty}])\Big{)}. \tag{3}\] Let \(\mathcal{X}_{0}(A^{\vee}/K)\) be the Pontryagin dual of \(\operatorname{Sel}_{0}(A^{\vee}/K)\). Lei-Ponsinet showed that the natural map \[\mathcal{X}_{0}(A^{\vee}/\mathbb{Q}_{\infty})_{\Gamma_{n}}\xrightarrow{s_{n}} \mathcal{X}(A^{\vee}/\mathbb{Q}_{(n)})\] is a surjection with \(\ker(s_{n})\) finite and bounded as \(n\) varies (see [10, Lemma 2.3 and its proof]). But we need more than just this. **Proposition 4.1**.: _The kernel of \(s_{n}\) stabilizes, i.e. \(\ker(s_{n})\cong\ker(s_{n+1})\) for all sufficiently large \(n\); hence \(\nabla\ker(s_{n})=0\)._ Proof.: Consider the commutative diagram (4) The Pontryagin dual of \(\ker(s_{n})\) is \(\operatorname{coker}(a_{(n)})\). As the map \(\beta_{(n)}\) is an isomorphism, snake lemma applied to diagram (4) gives \[\operatorname{coker}(a_{(n)})\cong\ker(\gamma_{(n)})\cap\operatorname{Im}( \lambda_{(n)}).\] Hence \(\operatorname{coker}(a_{(n)})\) injects into \(\ker(\gamma_{(n)})\) which means that \(\ker(s_{n})\) is a quotient of the Pontryagin dual of \(\ker(\gamma_{(n)})\). If we show that \(\ker(\gamma_{(n)})\) stabilizes for large \(n\), then \(\ker(s_{n})\) automatically stabilizes. Note that \(\ker(\gamma_{(n),p})\cong H^{1}(\Gamma_{n},A^{\vee}(\mathbb{Q}_{\infty,p})[p^{ \infty}])\) which is trivial since the group \(A^{\vee}(\mathbb{Q}_{\infty,p})\) has no \(p\)-torsion. Further note that the maps \(\gamma_{(n),v}\) and \(\delta_{(n),v}\) are the same for all \(v\nmid p\). This gives \[\ker(\gamma_{(n)})=\ker(\delta_{(n)}) \tag{5}\] since \(\ker(\delta_{(n),p})\) is also trivial (see the proof of Theorem 3.6). Let \(S_{n}\) be the set of non-archimedean primes of \(\mathbb{Q}_{(n)}\) over all the primes \(v\) of \(\mathbb{Q}\) where \(A^{\vee}\) has bad reduction. Note that the cardinality of \(S_{n}\) is bounded [10, p. 9]. Let \(S=S_{n}\) for some large \(n\). From the explicit computation of \(\ker(\delta_{(n)})\) in the proof of Theorem 3.6, it will be sufficient to show that \(\oplus_{w\in S}(A^{\vee}(\mathbb{Q}_{\infty,w})[p^{\infty}])_{\Gamma_{n}}\) stabilizes for large \(n\). But these groups are finite and have bounded orders as \(n\) varies and the projection map \[\oplus_{w\in S}(A^{\vee}(\mathbb{Q}_{\infty,w})[p^{\infty}])_{\Gamma_{n+1}} \rightarrow\oplus_{w\in S}(A^{\vee}(\mathbb{Q}_{\infty,w})[p^{\infty}])_{ \Gamma_{n}}\] is surjective (also mentioned in [11, p. 32] for elliptic curves). Hence \(\ker(\delta_{(n)})\) stabilizes for large \(n\) and so does \(\ker(\gamma_{(n)})\) by (5). Now we are in a position to compute \(\nabla\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\). We make the following hypothesis (see [10, Conjecture 2.2]). **(Torsion)** For all \(\underline{I}\in\mathcal{I}\), the \(\Lambda\)-module \(\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) is torsion. This is known to be true for elliptic curves over \(\mathbb{Q}\) (see [11], [12]). **Theorem 4.2**.: _Under **(Torsion)**, for all sufficiently large \(n\),_ \[\nabla\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})=\lambda(\mathcal{X }_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty}))+(p^{n}-p^{n-1})\mu(\mathcal{X }_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})).\] Proof.: Snake lemma applied to diagram (2), gives that \(\ker(\alpha_{(n)})\) is trivial and \[\operatorname{coker}(\alpha_{(n)})\cong\operatorname{Im}(\theta_{(n)})\cap \ker(\delta_{(n)}).\] Therefore, \(\operatorname{coker}(\alpha_{(n)})\) injects into \(\ker(\delta_{(n)}).\) But Proposition 4.1 gives that \(\ker(\delta_{(n)})\) stabilizes for large \(n\) and hence so does its dual. But the dual of \(\ker(\delta_{(n)})\) surjects onto the dual of \(\operatorname{coker}(\alpha_{(n)})\); hence the latter sequence also stabilizes for \(n\) sufficiently large. Hence, by Lemma 2.4 \[\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})=\nabla\mathcal{X}_{ \underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})_{\Gamma_{n}}.\] Lemma 2.5 gives that \[\nabla\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})_{\Gamma_{n}} =\lambda(\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty}))+(p^{n}-p^ {n-1})\mu(\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})).\] It follows from [10, Theorem 2.1.2] that there is a pseudo-isomorphism \[(A^{\vee}(\mathbb{Q}_{\infty})\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}/\mathbb{ Z}_{p})^{\wedge}\sim\Lambda^{r}\oplus\Big{(}\bigoplus_{i=1}^{t}\frac{\Lambda}{ \Phi_{b_{i}}}\Big{)}, \tag{6}\] for certain non-negative integers \(r,t\) and \(b_{i}.\) **Theorem 4.3**.: _If \(r=0\), then there is a pseudo-isomorphism of \(\Lambda\)-modules_ \[\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})^{\wedge}\sim\bigoplus _{i=1}^{u_{\underline{I}}}\frac{\Lambda}{\Phi_{c_{i,\underline{I}}}}\] _for some non-negative integers \(u_{\underline{I}}\) and \(c_{i,\underline{I}}.\)_ Proof.: By definition, the Pontryagin dual of \(\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) is a quotient of the dual of \(A^{\vee}(\mathbb{Q}_{\infty})\otimes_{\mathbb{Z}_{p}}\mathbb{Q}_{p}/\mathbb{ Z}_{p}\). If \(r=0\), the right-hand side of (6) is a direct sum of all simple modules. Hence it follows that \(\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})^{\wedge}\) is pseudo-isomorphic to a partial direct sum of the right-hand side of (6). Let \(\Sigma\) be a finite set of primes of \(\mathbb{Q}\) containing the prime \(p\), the archimedean prime and the primes of bad reduction of \(A^{\vee}\). Let \(\mathbb{Q}_{\Sigma}\) be the maximal extension of \(\mathbb{Q}\) unramified outside \(\Sigma\). For \(i\geqslant 0\), let \(H^{i}_{\operatorname{Iw},\Sigma}(\mathbb{Q},T)\) be the projective limit of the groups \(H^{i}(\mathbb{Q}_{\Sigma}/\mathbb{Q}_{(n)},T)\) relative to the corestriction maps. Under the assumption **(Torsion)**, \(H^{1}_{\operatorname{Iw},\Sigma}(\mathbb{Q},T)\) is a \(\mathbb{Z}_{p}[[\Gamma]]\)-module of rank \(g\) (see [10, Lemma 2.4]). Hence we can fix a family of classes \(c_{1},...,c_{g}\in H^{1}_{\operatorname{Iw},\Sigma}(\mathbb{Q},T)\) such that \[\frac{H^{1}_{\operatorname{Iw},\Sigma}(\mathbb{Q},T)}{\langle c_{1},...,c_{g} \rangle}\text{ is }\mathbb{Z}_{p}[[\Gamma]]\text{-torsion}.\] Let \(J_{p}\) be a subset of \(\{1,...,2g\}\) of cardinality \(g\). The composition map \[H^{1}_{\mathrm{Iw},\Sigma}(\mathbb{Q},T)\xrightarrow{\mathrm{loc}_{p}}H^{1}_{ \mathrm{Iw},\mathrm{cyc}}(\mathbb{Q}_{p},T)^{\Delta}\xrightarrow{\mathrm{Col}_{ T,J_{p}}}\prod_{k=1}^{g}\mathbb{Z}_{p}[[\Gamma]]\] is a \(\mathbb{Z}_{p}[[\Gamma]]\)-homomorphism between two \(\mathbb{Z}_{p}[[\Gamma]]\)-modules of rank \(g\). For \(\underline{J}=(J_{p})\), we write \[\mathrm{Col}_{T,\underline{J}}(\mathbf{c})=\det(\mathrm{Col}_{T,J_{p}}\circ \mathrm{loc}_{p}(c_{i}))_{1\leqslant i\leqslant g}.\] Under the assumption that the Selmer group \(\mathrm{Sel}_{\underline{J}}(A^{\vee}/\mathbb{Q}_{\infty})\) is \(\mathbb{Z}_{p}[[\Gamma]]\)-cotorsion, Lei-Ponsinet showed that \(\mathrm{Col}_{T,\underline{J}}(\mathbf{c})\neq 0\) (see [1, Lemma 3.2]). In the following theorem we compute \(\nabla\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})^{\wedge}\) when \(n\) is sufficiently large. **Theorem 4.4**.: _Let \(\underline{I}_{0}=\{1,...,g\}\in\mathcal{I}\). Assume \(r=0\) and **(Torsion)**. Let \(\theta\) be a character on \(\Gamma\) of conductor \(p^{n+1}\) which is trivial on \(\Delta\). Suppose that_ \[\sum_{\underline{J}}(H_{\underline{I}_{0},\underline{J},n}\,\mathrm{Col}_{T, \underline{J}}(\mathbf{c}))(\theta)\neq 0 \tag{7}\] _for \(n\gg 0\). Further assume that \(\hat{\mathcal{K}}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\) is finite for all \(n\). Then for some sufficiently large integer \(n_{0}\),_ \[\nabla\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})^{\wedge}=\sum_{ \begin{subarray}{c}c_{i,\underline{I}\leqslant n_{0}}\\ {i=1}\end{subarray}}^{u_{\underline{I}}}\mathrm{rank}_{\mathbb{Z}_{p}}\, \frac{\Lambda}{(\Phi_{c_{i,\underline{I}}},\omega_{n_{0}})}\] _for all \(n>n_{0}\)._ Proof.: The definition of \(\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\) gives the following diagram. The right vertical arrow is an injection, implies that the left vertical arrow has trivial kernel. Under **(Torsion)** and the assumption in (7), Lei-Ponsinet showed that the \(\mathbb{Z}_{p}\)-rank of \(A^{\vee}(\mathbb{Q}_{(n)})\) is bounded as \(n\) varies (see [1, Theorem 3.4]). Hence these Mordell-Weil groups \(A^{\vee}(\mathbb{Q}_{(n)})\) must stabilize for large \(n\). Therefore, the \(\underline{I}\)-Mordell-Weil groups \(\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})\) must also stabilize for large \(n\). So there exists an integer \(n_{0}\) sufficiently large such that \[\nabla\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})^{\wedge}= \mathrm{rank}_{\mathbb{Z}_{p}}\,\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{ Q}_{(n_{0})})^{\wedge}.\] But by Theorem 3.6, \[\operatorname{rank}_{\mathbb{Z}_{p}}\mathcal{M}_{\underline{I}}(A^{ \vee}/\mathbb{Q}_{(n_{0})})^{\wedge} =\operatorname{rank}_{\mathbb{Z}_{p}}\mathcal{M}_{\underline{I}}(A^ {\vee}/\mathbb{Q}_{\infty})^{\wedge}_{\Gamma_{n_{0}}}\] \[=\sum_{i=1}^{u_{\underline{I}}}\operatorname{rank}_{\mathbb{Z}_{ p}}\frac{\Lambda}{(\Phi_{c_{i,\underline{I}}},\omega_{n_{0}})}\] \[=\sum_{c_{i,\underline{I}}\leq n_{0}\atop i=1}^{u_{\underline{I} }}\operatorname{rank}_{\mathbb{Z}_{p}}\frac{\Lambda}{(\Phi_{c_{i,\underline{I }}},\omega_{n_{0}})}.\] The last equality follows because \((\Lambda/\Phi_{r})_{\Gamma_{s}}=\Lambda/(\Phi_{r},\omega_{s})\), is finite for \(r>s\). Theorems 4.4 and 4.2 give the following result. **Theorem 4.5**.: _Assume the hypotheses as in Theorem 4.4. Let \(p^{s_{n}}=|\mathcal{H}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{(n)})^{\wedge}|\). Then for \(n\gg 0\),_ \[s_{n}-s_{n-1}=\lambda(\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty }))+(p^{n}-p^{n-1})\mu(\mathcal{X}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty }))-\sum_{c_{i,\underline{I}}\leq n_{0}\atop i=1}^{u_{\underline{I}}} \operatorname{rank}_{\mathbb{Z}_{p}}\frac{\Lambda}{(\Phi_{c_{i,\underline{I}}},\omega_{n_{0}})}.\] ### Change of basis The definitions of \(\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) and \(\operatorname{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) (and hence that of \(\mathcal{H}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\)) depend on the choice of a Hodge-compatible basis and hence are not canonically defined. Under certain additional assumptions, one can define them canonically for certain choices of the indexing set \(\underline{I}\). **Definition 4.6**.: _Fix a Hodge-compatible \(\mathbb{Z}_{p}\)-basis \(\mathcal{B}_{p}=\{u_{1},\ldots,u_{2g}\}\) of \(\mathbb{D}_{\operatorname{cris},p}(T)\). Let \(N_{p}\subseteq\mathbb{D}_{\operatorname{cris},p}(T)\) denote the free \(\mathbb{Z}_{p}\)-module complementary to \(\operatorname{Fil}^{0}\mathbb{D}_{\operatorname{cris},p}(T)\) and generated by the set \(\{u_{g+1},\ldots,u_{2g}\}\). We say that a \(\mathbb{Z}_{p}\)-basis \(\mathcal{B}^{\prime}_{p}=\{w_{1},\ldots,w_{2g}\}\) is Hodge-compatible with the basis \(\mathcal{B}_{p}\) if \(\{w_{1},\ldots,w_{g}\}\) (resp. \(\{w_{g+1},\ldots,w_{2g}\}\)) generates the submodule \(\operatorname{Fil}^{0}\mathbb{D}_{\operatorname{cris},p}(T)\) (resp. \(N_{p}\))._ **Lemma 4.7**.: _Let \(\mathcal{B}_{p}\) and \(\mathcal{B}^{\prime}_{p}\) be two Hodge-compatible bases in the sense of definition 2.1. Suppose that \(C_{p}\) is a block anti-diagonal matrix with respect to the basis \(\mathcal{B}_{p}\). Then \(\mathcal{B}^{\prime}_{p}\) is Hodge-compatible with \(\mathcal{B}_{p}\) in the sense of definition 4.6._ Proof.: Since \(\operatorname{Fil}^{0}\mathbb{D}_{\operatorname{cris},p}(T)\) is a direct summand of \(\mathbb{D}_{\operatorname{cris},p}(T)\), we get decompositions \(\mathbb{D}_{\operatorname{cris},p}(T)=\operatorname{Fil}^{0}\mathbb{D}_{ \operatorname{cris},p}(T)\bigoplus N_{p}\) and \(\mathbb{D}_{\operatorname{cris},p}(T)=\operatorname{Fil}^{0}\mathbb{D}_{ \operatorname{cris},p}(T)\bigoplus N^{\prime}_{p}\) where \(N_{p}\) is generated by \(\{u_{g+1},\ldots,u_{2g}\}\) and \(N^{\prime}_{p}\) is generated by \(\{w_{g+1},\ldots,w_{2g}\}\). Since \(C_{p}\) is block anti-diagonal with respect to \(\mathcal{B}_{p}\), we have that \(\operatorname{span}\{\varphi(u_{1}),\ldots,\varphi(u_{g})\}\subseteq N_{p}\) and since \(\varphi\) is injective we in fact have equality. By a similar argument, \(\operatorname{span}\{\varphi(u_{1}),\ldots,\varphi(u_{g})\}=N^{\prime}_{p}\). Thus \(\{w_{g+1},\ldots,w_{2g}\}\) generates \(N_{p}\). Let \(\underline{I}_{1}=\{g+1,...,2g\}\) and recall that \(\underline{I}_{0}=\{1,...,g\}\). **Proposition 4.8**.: _Suppose that \(C_{p}\) is a block anti-diagonal matrix with respect to the basis \(\mathcal{B}_{p}\). For \(\underline{I}=\underline{I}_{0}\) or \(\underline{I}_{1}\), the groups \(\mathcal{M}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\), \(\operatorname{Sel}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) and \(\dot{\mathcal{K}}_{\underline{I}}(A^{\vee}/\mathbb{Q}_{\infty})\) are canonically defined._ Proof.: Let \(\mathcal{B}_{p}=\{u_{1},\ldots,u_{2g}\}\) and \(\mathcal{B}^{\prime}_{p}=\{w_{1},\ldots,w_{2g}\}\) be a pair of Hodge-compatible bases of \(\mathbb{D}_{\operatorname{cris},p}(T)\). Let \(B_{p}\) be the change of basis matrix from \(\mathcal{B}^{\prime}_{p}\) to \(\mathcal{B}_{p}\). This implies that \(B_{p}\) is block diagonal (see [1, p. 371]). Write \[B_{p}=\left[\begin{array}{c|c}B_{1,1}&0\\ \hline 0&B_{2,2}\end{array}\right]\] where \(B_{1,1},B_{2,2}\in\operatorname{GL}_{g}(\mathbb{Z}_{p})\). Let \(\operatorname{Col}_{T,p,i}^{\mathcal{B}_{p}}\) be the \(i\)-th Coleman map with respect to the basis \(\mathcal{B}_{p}\). Let \(\operatorname{Col}_{T,p}^{\mathcal{B}_{p}}\) denote the column vector of Coleman maps \((\operatorname{Col}_{T,p,i}^{\mathcal{B}_{p}})_{i=1}^{2g}\). Similarly define \(\operatorname{Col}_{T,p}^{\mathcal{B}^{\prime}_{p}}\) as the column vector of Coleman maps defined with respect to the basis \(\mathcal{B}^{\prime}_{p}\). As \(C_{p}\) is block anti-diagonal, \(\mathcal{B}^{\prime}_{p}\) is Hodge-compatible with \(\mathcal{B}_{p}\) (see Lemma 4.7). Then [1, Lemma 2.16] gives \(\operatorname{Col}_{T,p}^{\mathcal{B}^{\prime}_{p}}=B_{p}\cdot\operatorname{ Col}_{T,p}^{\mathcal{B}_{p}}.\) Thus, \[\begin{bmatrix}\operatorname{Col}_{T,p,1}^{\mathcal{B}^{\prime}_{p}}\\ \vdots\\ \operatorname{Col}_{T,p,g}^{\mathcal{B}^{\prime}_{p}}\end{bmatrix}=B_{1,1} \begin{bmatrix}\operatorname{Col}_{T,p,1}^{\mathcal{B}_{p}}\\ \vdots\\ \operatorname{Col}_{T,p,g}^{\mathcal{B}_{p}}\end{bmatrix},\quad\begin{bmatrix} \operatorname{Col}_{T,p,g+1}^{\mathcal{B}^{\prime}_{p}}\\ \vdots\\ \operatorname{Col}_{T,p,2g}^{\mathcal{B}^{\prime}_{p}}\end{bmatrix}=B_{2,2} \begin{bmatrix}\operatorname{Col}_{T,p,g+1}^{\mathcal{B}_{p}}\\ \vdots\\ \operatorname{Col}_{T,p,2g}^{\mathcal{B}_{p}}\end{bmatrix}.\] One obtains that \((\operatorname{Col}_{T,p,i}^{\mathcal{B}^{\prime}_{p}}(z))_{i=1}^{g}=0\) if and only if \((\operatorname{Col}_{T,p,i}^{\mathcal{B}_{p}}(z))_{i=1}^{g}=0\) since the matrix \(B_{1,1}\) is invertible. The same argument also works for \((\operatorname{Col}_{T,p,i}^{\mathcal{B}^{\prime}_{p}}(z))_{i=g+1}^{2g}\) and \((\operatorname{Col}_{T,p,i}^{\mathcal{B}_{p}}(z))_{i=g+1}^{2g}\). Therefore \(\ker\operatorname{Col}_{T,I_{p}}\) is independent of the choice of basis if \(I_{p}=\{1,...,g\}\) or \(\{g+1,...,2g\}\). ### Remarks and examples * Note that \(C_{p}\) will remain block anti-diagonal upon a change of basis. This is because \(B_{p}C_{p}B_{p}^{-1}\) is block anti-diagonal if \(B_{p}\) is block diagonal and \(C_{p}\) is block anti-diagonal. * In the case of elliptic curves with \(a_{p}=0\), note that Kobayashi's signed Selmer groups \(\operatorname{Sel}_{p}^{\pm}(E/\mathbb{Q}_{\infty})\) are canonically defined. These groups correspond to \(\operatorname{Sel}_{\underline{I}_{0}}(A^{\vee}/\mathbb{Q}_{\infty})\) and \(\operatorname{Sel}_{\underline{I}_{1}}(A^{\vee}/\mathbb{Q}_{\infty})\) for \(g=1\) (see [1, Appendix 4]). Hence Proposition 4.8 should be seen as a generalization of this phenomenon. * the abelian varieties of the \(GL(2)\)-type provide such examples (see [1, Section 3.3]). Under the assumptions that \(\operatorname{Sel}_{\underline{I}_{0}}(A^{\vee}/\mathbb{Q}_{\infty})\) and \(\operatorname{Sel}_{\underline{I}_{1}}(A^{\vee}/\mathbb{Q}_{\infty})\) are \(\mathbb{Z}_{p}[[\Gamma]]\)-cotorsion, the abelian varieties of the \(GL(2)\)-type also provide examples when condition (7) is true (see [1, Lemma 3.6]). ## 5. The case of elliptic curves In this section, we will assume that \(E\) is an elliptic curve over \(\mathbb{Q}\) with supersingular reduction at \(p\), \(a_{p}=0\), and the Tate-Shafarevich group \(\Sha(E/\mathbb{Q}_{(n)})[p^{\infty}]\) is finite for all \(n\geqslant 0\). Under this setup, we can show that all the hypotheses of Theorem 4.4 (and hence Theorem 4.5) are satisfied. Recall that the \(\underline{L}_{0}\)-Mordell-Weil group and the \(\underline{L}_{1}\)-Mordell-Weil group are canonically defined and these groups are exactly the same as \(\mathcal{M}^{\pm}(E/\mathbb{Q}_{\infty})\) as defined by Lei in [19, Section 5]. Works of Kato [17] and Rohrlich [14] imply that \[(E(\mathbb{Q}_{\infty})\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p})^{\wedge}\sim \bigoplus_{i=1}^{t}\frac{\Lambda}{\Phi_{b_{i}}}\] for some non-negative integers \(b_{i}\) and \(t\) (see also [19, Remark 3.5]). Hence \(r=0\) in this case for elliptic curves; this was one of the assumptions in Theorem 4.4. Let \(\mathcal{X}^{\pm}(E/\mathbb{Q}_{\infty})\) be the Pontryagin dual of Kobayashi's signed Selmer group \(\operatorname{Sel}_{p}^{\pm}(E/\mathbb{Q}_{\infty})\). He showed that \(\mathcal{X}^{\pm}(E/\mathbb{Q}_{\infty})\) is \(\mathbb{Z}_{p}[[\Gamma]]\)-torsion and hence the assumption **(Torsion)** is also satisfied. For elliptic curves with \(a_{p}=0\), the matrix of the Frobenius at \(p\) is given by \(C_{p}=\begin{pmatrix}0&-1/p\\ 1&0\end{pmatrix}.\) This is a block anti-diagonal matrix and hence condition (7) is also satisfied. Furthermore, by Lemma 3.5, \(\mathcal{H}\xi^{\pm}(E/\mathbb{Q}_{(n)})\subset\Sha(E/\mathbb{Q}_{(n)})[p^{ \infty}]\) and hence \(\mathcal{H}\xi^{\pm}(E/\mathbb{Q}_{(n)})\) is finite if \(\Sha(E/\mathbb{Q}_{(n)})[p^{\infty}]\) is so. Therefore, if \(\Sha^{\pm}(E/\mathbb{Q}_{(n)})\) is finite for all \(n\geqslant 0\), then all the hypotheses of Theorem 4.4 are satisfied. Under the assumption of the finiteness of \(\mathcal{H}\xi^{\pm}(E/\mathbb{Q}_{(n)})\) for all \(n\geqslant 0\), Theorem 4.3 can be simplified and made more precise. Recall from [13] that \[\mathcal{M}(E/\mathbb{Q}_{\infty}):=\ker\Big{(}E(\mathbb{Q}_{\infty})\otimes \mathbb{Q}_{p}/\mathbb{Z}_{p}\to E(\mathbb{Q}_{\infty,p})\otimes\mathbb{Q}_{p} /\mathbb{Z}_{p}\Big{)}.\] In [19], Lei showed that \[\operatorname{Char}_{\Lambda}\mathcal{M}(E/\mathbb{Q}_{\infty})^{\wedge}= \Big{(}\prod_{e_{n}>0}\Phi_{n}^{e_{n}-1}\Big{)},\] where \[e_{n}=\frac{\operatorname{rank}E(\mathbb{Q}_{(n)})-\operatorname{rank}E( \mathbb{Q}_{(n-1)})}{p^{n}-p^{n-1}}\text{ for }n>0\text{ and }e_{0}=\operatorname{rank}E(\mathbb{Q}). \tag{8}\] From definition, it follows that \(\mathcal{M}(E/\mathbb{Q}_{\infty})^{\wedge}\) is a quotient of \(\big{(}E(\mathbb{Q}_{\infty})\otimes\mathbb{Q}_{p}/\mathbb{Z}_{p}\big{)}^{\wedge}\) and so \[\mathcal{M}(E/\mathbb{Q}_{\infty})^{\wedge}\sim\bigoplus_{i=1}^{u}\Lambda/\Phi_ {c_{i}} \tag{9}\] for certain non-negative integers \(u\) and \(c_{i}\). For \(n\geqslant 0\), let \(a_{n}\) be the number of times the summand \(\Lambda/\Phi_{n}\) appears on the right-hand side of the pseudo-isomorphism given in (9). Similarly, we have a pseudo-isomorphism \[\mathcal{M}^{\pm}(E/\mathbb{Q}_{\infty})^{\wedge}\sim\bigoplus_{i=1}^{u^{\pm} }\Lambda/\Phi_{c_{i}^{\pm}}, \tag{10}\] for certain non-negative integers \(u^{\pm}\) and \(c_{i}^{\pm}\). Let \(r_{n}^{\pm}\) be the exponent of \(\Lambda/\Phi_{n}\) appearing in the pseudo-isomorphism given in (10). Hence \[\mathrm{Char}_{\Lambda}\mathcal{M}^{\pm}(E/\mathbb{Q}_{\infty})^{\wedge}=\Big{(} \prod_{n\geqslant 0}\Phi_{n}^{r_{n}^{\pm}}\Big{)}.\] For elliptic curves over \(\mathbb{Q}\) with supersingular reduction at the prime \(p\) with \(a_{p}=0\), Theorems 4.4 and 4.5 simplify and give the following result. **Theorem 5.1**.: _Assume that \(\dot{\mathcal{K}}^{\pm}(E/\mathbb{Q}_{(n)})\) is finite for all \(n\geqslant 0\). Let \(p^{s_{n}}=|\dot{\mathcal{K}}^{\pm}(E/\mathbb{Q}_{(n)})^{\wedge}|\). Then, for some sufficiently large integer \(n_{0}\),_ \[\nabla\mathcal{M}^{\pm}(E/\mathbb{Q}_{\infty})^{\wedge}=\sum_{k\geqslant 0}^{n_{ 0}}\mathrm{rank}_{\mathbb{Z}_{p}}\,\frac{\Lambda}{(\Phi_{k}^{r^{\pm}},\omega_ {n_{0}})}\] _and for all \(n>n_{0}\),_ \[s_{n}-s_{n-1}=\lambda(\mathcal{X}^{\pm}(E/\mathbb{Q}_{\infty}))+(p^{n}-p^{n-1} )\mu(\mathcal{X}^{\pm}(E/\mathbb{Q}_{\infty}))-\sum_{k\geqslant 0}^{n_{0}} \mathrm{rank}_{\mathbb{Z}_{p}}\,\frac{\Lambda}{(\Phi_{k}^{r^{\pm}},\omega_{n_ {0}})}.\] _These integers \(r^{\pm}_{k}\) satisfy the following relations._ * \(r^{+}_{0}=r^{-}_{0}=e_{0}\)_._ * _If_ \(k>0\)_, then_ \(a_{k}=\max(0,e_{k}-1)\leqslant\min(r^{+}_{k},r^{-}_{k})\) _and_ \(r^{+}_{k}+r^{-}_{k}=e_{k}+a_{k}\)_._ **Remark 5.2**.: _The relations of \(r^{\pm}_{k}\) mentioned above come from [11, Corollaries 6.4 and 6.7]._
2310.18498
GPT-4 Vision on Medical Image Classification -- A Case Study on COVID-19 Dataset
This technical report delves into the application of GPT-4 Vision (GPT-4V) in the nuanced realm of COVID-19 image classification, leveraging the transformative potential of in-context learning to enhance diagnostic processes.
Ruibo Chen, Tianyi Xiong, Yihan Wu, Guodong Liu, Zhengmian Hu, Lichang Chen, Yanshuo Chen, Chenxi Liu, Heng Huang
2023-10-27T21:28:36Z
http://arxiv.org/abs/2310.18498v1
# GPT-4 Vision on Medical Image Classification - A Case Study on COVID-19 Dataset ###### Abstract In the intricate landscape of modern healthcare, medical image classification emerges as a pivotal task, driving crucial decisions in diagnosis, treatment planning, and patient management. This process involves the systematic categorization of various types of medical imagery--including X-rays, CT scans, MRIs, and ultrasound--into distinct classes that assist healthcare professionals in identifying anomalies, understanding physiological phenomena, and detecting diseases at early stages. The reliability and precision of image classification are paramount, given that these determinations form the bedrock upon which medical practitioners build their diagnostic and therapeutic strategies, directly impacting patient outcomes. With an increasing influx of complex imaging data and a growing need for rapid, accurate interpretation, the medical sector faces significant pressure to evolve beyond traditional analysis methods, necessitating innovative solutions that enhance the efficiency and accuracy of image classification. The advent of large foundation models in artificial intelligence has ushered in a transformative era of computational capabilities. These models, characterized by their extensive scale, diverse training datasets, and impressive adaptability, have demonstrated profound impacts across various domains. Within the realm of medical image classification, there is burgeoning curiosity around the potential applicability and benefits of these formidable tools. The traditional approach, reliant on Convolutional Neural Networks (CNNs) based architectures, such as VGG [1], inception [2], ResNet [3], and DenseNet [4] have achieved noteworthy success in image categorization tasks [5]. However, these methods often require vast amounts of labeled data and substantial computational resources, besides lacking the intuitive adaptability inherent in human cognition. Besides training a neural network from end to end, some transfer learning and self-supervised learning techniques are also employed in the filed of medical image classification to improve the efficiency and performance [6; 7]. But they are also limited by the predictive capability and few- or zero-shot learning ability [8]. Recently, large foundation models, with their sophisticated understanding of nuanced patterns, offer a promising alternative, hypothesized to enhance the precision and context-awareness in classifying medical images, provided they can be effectively adapted to understand and interpret complex visual medical data. This study ventures into the novel application of in-context learning strategies with GPT-4V, a derivative of the generative pre-trained transformer models, specifically oriented towards visual tasks. In-context learning allows the model to utilize prompts--minimal yet specific pieces of information or instructions--to guide its responses in performing a particular task, relying on its vast pre-trained knowledge base rather than traditional task-specific training. By harnessing this approach, we aim to tailor GPT-4V's capabilities to interpret and classify medical images, an endeavor scarcely explored in existing literature. Our methodology involves the meticulous design of context-rich prompts that facilitate the model's understanding of medical imaging classifications. The preliminary results are striking, showing that our adapted GPT-4V model, when equipped with well-crafted prompts, can achieve classification accuracy comparable to established baseline models. This finding not only underscores the versatility of large foundation models in medical applications but also heralds a potentially more resource-efficient and adaptable future for medical image analysis. ## 2 Related Work **In-context Learning:** In-context learning (ICL) [9; 10; 11; 12; 13] is a paradigm that has recently gained prominence, particularly in the realm of LLMs [14; 15; 16; 17; 18]. This approach provides an efficient way for pre-trained models to understand and execute a task using demonstrations, e.g., input-output pairs, without resorting to extensive fine-tuning or retraining on task-specific data. The effectiveness of ICL is rooted in a phenomenon known as "few-shot learning." In traditional machine learning, models often require substantial amounts of labeled data specific to each task [18]. However, in few-shot scenarios [19], a model uses only a minimal number of examples to understand a task. This process is akin to the way humans often learn--by relating new information to existing knowledge. In-context learning takes this a step further, often operating in "zero-shot" or "one-shot" contexts, where the model is either not provided with any task-specific examples or just a single one, respectively. In medical image classification, the application of ICL is still nascent. The potential of large language models like GPT-3 or GPT-4 to generalize learning from textual contexts to more complex multi-modal tasks, including image classification, offers promising avenues for exploration. The key lies in the careful design of prompts that succinctly yet comprehensively convey the task rules and criteria, enabling the model to apply its pre-existing knowledge effectively across domains. **Medical Image Classification:** Historically, before the rise of deep learning methodologies, the domain of medical image analysis was dominated by manual processes and intricate feature extraction techniques. These handcrafted features, exemplified by histograms of oriented gradients (HOG) [20] and local binary patterns (LBP) [21], offered a structured but somewhat constrained approach to image interpretation. However, the deep learning revolution that began around 2006 introduced a transformative era, significantly altering the landscape of medical image classification. Modern models, trained on vast datasets, are adept at recognizing patterns and anomalies within medical imagery, such as X-rays, MRIs, and CT scans. The primary emphasis has shifted towards enhancing accuracy, improving interpretability, and sharpening the ability to detect subtle indicators of medical conditions. This facilitates healthcare professionals in making more informed decisions. For instance, in the classification of retinal fundus images [22; 23], the use of strategies like data augmentation and transfer learning, especially with renowned pre-trained models like VGG19, has notably improved the detection accuracy of fundus diseases. Another pivotal task has been the detection of COVID-19 from chest X-rays. Recent advancements, which leverage the feature fusion of DenseNet and VGG, have demonstrated superior performance in detecting COVID-19 patients [24]. ## 3 Our Method Our method's intuition lies in the meticulous crafting of prompts that serve as input for the advanced GPT-4V model, guiding it to make accurate medical image classifications. By employing in-context learning, we harness the model's expansive pre-existing knowledge base, prompting it with both text and images for robust task performance. The experiment contrasts two central methodologies. The baseline involves traditional medical image analysis using ResNet models. Our approaches start with the naive zero-shot prompt on GPT4V, then enhance the naive prompt with more sophisticated approaches, meticulously structured to "complete the story" that the image tells. In our experiments, we primarily utilized three categories of prompts to engage GPT-4V in the classification tasks: 1. A straightforward method, wherein GPT-4V is directly instructed to classify the presented images, representing the naive approach. 2. An in-context learning strategy, which involves providing GPT-4V with multiple labeled examples to facilitate guided learning. 3. A more nuanced in-context learning process that incorporates reasoning, requiring us to elucidate the relationships between the images and their corresponding labels for GPT-4V. The specific prompts employed for these methodologies are detailed subsequently: **Naive approach.** Zero shot inference on GPT4V: Upload an image on GPT4V. **Prompt:** "Instruction: classify the image into two classes class1, class2. Please first output one line for the label of the image. In the subsequent line, please provide a comprehensive explanation of your classification." **In-context learning 1 (ICL1).** Upload three images separately on GPT4V, two for in-context learning and one for predicting. **Prompt:** "Instruction: classify the images into two classes class1, class2 Example: the label of the above images: Image 1: class1 Image 2: class2 Please first output one line for the label of image 3. In the subsequent line, please provide a comprehensive explanation of your classification." Drawback: the attention of GPT4V may concentrate on certain image and lead to biased results. Thus, in the next ICL prompt we will try to put all images in one figure. **In-context learning 2 (ICL2).** Combining three images on one figure and uploading it on GPT4V, two for in-context learning and one for predicting. **Prompt:** "Instruction: classify the images into two classes class1, class2 Example: the label of the above images: Image 1: class1 Image 2: class2 Please first output one line for the label of image 3. In the subsequent line, please provide a comprehensive explanation of your classification." **In-context learning 3 (ICL3).** Combining three images on one figure, two for in-context learning and one for predicting. Upload 3 combined figures (batch size = 3) on GPT4V. **Prompt:** "Instruction: : classify the images into two classes for each group class1, class2, generate 4 results. Example: the label of the above images: Image 1: class1 Image 2: class2 Please first output one line for the label of image 3. In the subsequent line, please provide a comprehensive explanation of your classification." **In-context learning 4 (ICL4).** Combining 9 images in one figure and upload it on GPT4V, 6 of them are for in-context learning and the rest 3 for predicting. **Prompt:** "Instruction: classify the images into two classes class1, class2 Example: the label of the above images: Image 1: class1 Image 2: class1 Image 3: class1 Image 4: class2 Image 5: class2 Image 6: class2 Please first output one line for the label of image 7, image 8 and image 9. In the subsequent line, please provide a comprehensive explanation of your classification." **In-context learning with reasoning 1 (ICL-R1).** Upload three images separately on GPT4V, two for in-context learning and one for predicting. **Prompt:** "Instruction: classify the images into two classes class1, class2 Example: the label of the above images: Image 1: class1 Image 2: class2 Explanation: In image 1 we can observe..., but in image 2 we don't have such observation. Thus we classified image 1 as class1 and image 2 as class2. Please provide the classification of Image 3 in one line, taking into account the observed patterns in Image 3. Following that, offer a detailed explanation step-by-step. **In-context learning with reasoning 2 (ICL-R2).** Combining 9 images in one figure and upload it on GPT4V, 6 of them are for in-context learning and the rest 3 for predicting. **Prompt:** "Instruction: classify the images into two classes class1, class2 Example: the label of the above images: Image 1: class1 Image 2: class1 Image 3: class1 Image 4: class2 Image 5: class2 Image 6: class2 Explanation: In image 1-3 we can observe... but in image 2 we don't have such observation. Please first output one line for the label of image 7, image 8 and image 9. In the subsequent line, please provide a comprehensive explanation of your classification." The crux of our method lies in this nuanced interaction with the model. Our results elucidate that the strategic construction of prompts--capitalizing on the model's inherent language and reasoning capabilities--enables GPT-4V to perform on par with established medical image classification benchmarks. This finding not only underscores the versatility of large language models but also heralds a potentially transformative application in the medical imaging domain. ## 4 Experiment In the experiment, we test our proposed prompt on the open sourced Kaggle COVID-19 lung X-ray dataset. This dataset contains 181 training examples, 111 of them are COVID and the rest are normal case. There are total 46 examples in the test set, where 26 of them are COVID case and the rest are normal case. **Baseline Settings.** We construct baselines using Convolution Neural Network based backbones to demonstrate the effectiveness of our method in the few-shot learning setting. Following previous works [25; 26], we use ResNet-18(RN-18) [3] and VGG-16 [1] pre-trained on ImageNet-1k [27] as our image classifier. Output dimension of the final fully-connected layer is set to 2 to fit the binary classification task. During training, we optimize the model using SGD for 20 epoches with batch size of 2, and decrease the learning rate by 5 after epoch 10 and 15 respectively. For better convergence, initial learning rate is set to 0.1 for ResNet-18 and 0.001 for VGG-16. We also apply a simple augmentation technique of random rotation and center cropping to improve model robustness. For the few-shot setting, we randomly select 6 images (3 covid, 3 normal) for training. We repeat the experiments for 5 times with different random seed and report the average result. **Result analysis.** Consolidating all images into a single figure has demonstrated enhanced performance compared to uploading them individually. This improvement could potentially be attributed to the focused attention mechanism of GPT-4V, which, when presented with separate images, might concentrate disproportionately on specific images, consequently leading to biased outcomes. GPT-4V exhibits superior performance compared to the few-shot baseline when provided with an equivalent number of training instances. However, it does not yet match the efficacy of the comprehensive baseline model that benefits from training on the complete set of examples. This indicates that while GPT-4V's adaptability is promising, certain optimizations might be necessary to fully realize its learning potential. Contrary to expectations, supplementing the GPT-4V prompts with reasons underlying the classifications does not yield an improvement in results. This may be due to a misalignment between the provided reasoning and the model's processing capabilities, suggesting that the reasons integrated into the prompts were either not appropriately formulated for GPT-4V's comprehension or that the model currently lacks the capacity to incorporate such reasoning effectively into its decision-making process. Further investigations are needed to uncover the intricacies of this observation. ## 5 Conclusion In conclusion, we take the first step into the application of GPT-4V for medical image classification. By employing in-context learning, this study circumvented traditional limitations associated with deep learning models, particularly the necessity for extensive, task-specific training and vast labeled datasets. The tailored prompts guided GPT-4V to effectively interpret and analyze medical images, achieving a level of accuracy on par with conventional methods. This finding underscores the versatile potential of large foundation models in medical diagnostics and opens the door to further innovations that could reshape the landscape of healthcare, making it more intuitive, accessible, and reliable. Beyond just technical implications, the success of this approach advocates for a future where AI's role extends from being a mere tool to an adaptable ally, capable of navigating the nuanced and critical terrains of patient care.
2305.04207
No More Manual Tests? Evaluating and Improving ChatGPT for Unit Test Generation
Unit testing is essential in detecting bugs in functionally-discrete program units. Manually writing high-quality unit tests is time-consuming and laborious. Although traditional techniques can generate tests with reasonable coverage, they exhibit low readability and cannot be directly adopted by developers. Recent work has shown the large potential of large language models (LLMs) in unit test generation, which can generate more human-like and meaningful test code. ChatGPT, the latest LLM incorporating instruction tuning and reinforcement learning, has performed well in various domains. However, It remains unclear how effective ChatGPT is in unit test generation. In this work, we perform the first empirical study to evaluate ChatGPT's capability of unit test generation. Specifically, we conduct a quantitative analysis and a user study to systematically investigate the quality of its generated tests regarding the correctness, sufficiency, readability, and usability. The tests generated by ChatGPT still suffer from correctness issues, including diverse compilation errors and execution failures. Still, the passing tests generated by ChatGPT resemble manually-written tests by achieving comparable coverage, readability, and even sometimes developers' preference. Our findings indicate that generating unit tests with ChatGPT could be very promising if the correctness of its generated tests could be further improved. Inspired by our findings above, we propose ChatTESTER, a novel ChatGPT-based unit test generation approach, which leverages ChatGPT itself to improve the quality of its generated tests. ChatTESTER incorporates an initial test generator and an iterative test refiner. Our evaluation demonstrates the effectiveness of ChatTESTER by generating 34.3% more compilable tests and 18.7% more tests with correct assertions than the default ChatGPT.
Zhiqiang Yuan, Yiling Lou, Mingwei Liu, Shiji Ding, Kaixin Wang, Yixuan Chen, Xin Peng
2023-05-07T07:17:08Z
http://arxiv.org/abs/2305.04207v3
# No More Manual Tests? Evaluating and Improving ChatGPT for Unit Test Generation ###### Abstract Unit testing plays an essential role in detecting bugs in functionally-discrete program units (_e.g._, methods). Manually writing high-quality unit tests is time-consuming and laborious. Although the traditional techniques are able to generate tests with reasonable coverage, they are shown to exhibit low readability and still cannot be directly adopted by developers in practice. Recent work has shown the large potential of large language models (LLMs) in unit test generation. By being pre-trained on a massive developer-written code corpus, the models are capable of generating more human-like and meaningful test code. ChatGPT, the latest LLM that further incorporates instruction tuning and reinforcement learning, has exhibited outstanding performance in various domains. To date, it still remains unclear how effective ChatGPT is in unit test generation. In this work, we perform the first empirical study to evaluate ChatGPT's capability of unit test generation. In particular, we conduct both a quantitative analysis and a user study to systematically investigate the quality of its generated tests in terms of correctness, sufficiency, readability, and usability. We find that the tests generated by ChatGPT still suffer from correctness issues, including diverse compilation errors and execution failures (mostly caused by incorrect assertions); but the passing tests generated by ChatGPT almost resemble manually-written tests by achieving comparable coverage, readability, and even sometimes developers' preference. Our findings indicate that generating unit tests with ChatGPT could be very promising if the correctness of its generated tests could be further improved. Inspired by our findings above, we further propose ChatGPT, a novel ChatGPT-based unit test generation approach, which leverages ChatGPT itself to improve the quality of its generated tests. ChatTester incorporates an initial test generator and an iterative test refiner. Our evaluation demonstrates the effectiveness of ChatTester by generating 34.3% more compliable tests and 18.7% more tests with correct assertions than the default ChatGPT. Unit Test Generation, ChatGPT ## I Introduction Unit testing [1, 2, 3] validates whether a functionally-discrete program unit (_e.g._, a method) under test behaves correctly. As the primary stage in the software development procedure, unit testing plays an essential role in detecting and diagnosing bugs in a nascent stage and prevents their further propagation in the development cycle. Therefore, writing high-quality unit tests is crucial for ensuring software quality. For a method under test (_i.e._, often called as the focal method), its corresponding unit test consists of a _test prefix_ and a _test oracle_[4]. In particular, the test prefix is typically a series of method invocation statements or assignment statements, which aim at driving the focal method to a testable state; and then the test oracle serves as the specification to check whether the current behavior of the focal method satisfies the expected one. For example, the assertion is one of the common test oracles in unit tests. Manually writing and maintaining high-quality unit tests can be very time-consuming and laborious [5, 6]. To alleviate manual efforts in writing unit tests, researchers have proposed various techniques to facilitate automated unit test generation. Traditional unit test generation techniques leverage search-based [7, 8, 9], constraint-based [10, 11, 12], or random-based strategies [13, 14] to generate a suite of unit tests with the main goal of maximizing the coverage in the software under test. Although achieving reasonable coverage, these automatically-generated tests exhibit a large gap to manual-written ones in terms of readability and meaningfulness, and thus developers are mostly unwilling to directly adopt them in practice [15]. To address these issues, recent work [16, 17, 18, 19, 20] has leveraged advanced deep learning (DL) techniques, especially large language models (LLMs), to generate unit tests. These techniques mostly formulate unit test generation as a neural machine translation problem by translating a given focal method into the corresponding test prefix and the test assertion. In particular, they incorporate the power of LLMs by fine-tuning these pre-trained models on the test generation task. Owing to being extensively pre-trained on a massive developer-written code corpus and then being specifically fine-tuned on the test generation task, these models are capable of generating more human-like and meaningful test code, showing a large potential of LLMs in unit test generation. ChatGPT [21], a very recent LLM developed by OpenAI based on the generative pre-trained transformer architecture, has attracted wide attention by showing the outstanding capability of solving various tasks. Different from the LLMs (_e.g._, BART [22], BERT [23], and T5 [24]) used in existing learning-based test generation techniques [23, 24, 25, 26, 27, 28], ChatGPT further incorporates RLHF [29] (Reinforce ment Learning from Human Feedback) and a significantly-larger model scale, which thus exhibits better generalization and higher alignment with human intention in various domains. However, it still remains unclear how effective Chat-GPT is in generating unit tests. **Study.** In this work, we perform the first empirical study to evaluate ChatGPT's capability of unit test generation. We first construct a dataset of 1,000 Java focal methods, each along with a complete and executable project environment. In addition, based on the common practice in previous test generation work and widely-acknowledged ChatGPT-relevant experience, we design a basic prompt including both (i) a natural language description about the task and (ii) a code context of the focal method and other relevant contexts. We then query the ChatGPT API with our basic prompt, and analyze the quality of the returned tests to answer the following four research questions. * **RQ1 (Correctness): How is the correctness of the unit tests generated by ChatGPT? What are the common errors in the incorrect tests?** We first measure the syntactic correctness, compilation correctness, and execution correctness of the generated tests; and then further build a breakdown of the error types in the incorrect tests. * **RQ2 (Sufficiency): How is the sufficiency of the unit tests generated by ChatGPT?** For those correct tests generated by ChatGPT, we investigate the adequacy of coverage and assertions. * **RQ3 (Readability): How is the readability of the unit tests generated by ChatGPT?** For those correct tests their readability along with the manually-written tests as reference. * **RQ4 (Usability): How can the tests generated by ChatGPT be used by developers?** For those correct tests generated by ChatGPT, we perform a user study to investigate whether developers are willing to directly adopt them. Based on our results, we have the following main findings. On the bad side, we find that only a portion (24.8%) of tests generated by ChatGPT can pass the execution and the remaining tests suffer from diverse correctness issues. In particular, 57.9% ChatGPT-generated tests encounter compilation errors, such as using undefined symbols, violating type constraints, or accessing private fields; and 17.3% tests are compilable but fail during execution, which mostly results from the incorrect assertions generated by ChatGPT. On the good side, we find the passing tests generated by ChatGPT actually resemble manually-written ones by achieving comparable coverage, readability, and sometimes even developers' preference compared to manually-written ones. Overall, our results indicate that ChatGPT-based test generation could be very promising if the correctness issues in the generated tests could be further addressed. To this end, we further distill two potential guidelines, _i.e._, providing ChatGPT with deep knowledge about the code and helping ChatGPT better understand the intention of the focal method, so as to reduce the compilation errors and assertion errors, respectively. **Technique.** Inspired by our findings above, we further propose ChatTester, a novel ChatGPT-based unit test generation approach, which leverages ChatGPT itself to improve the correctness of its generated tests. ChatTester includes an initial test generator and an iterative test refiner. The initial test generator decomposes the test generation task into two sub-tasks by (i) first leveraging ChatGPT to understand the focal method via the _intention prompt_ and (ii) then leveraging ChatGPT to generate a test for the focal method along with the generated intention via the _generation prompt_. The iterative test refiner then iteratively fixes the compilation errors in the tests generated by the initial test generator, which follows a validate-and-fix paradigm to prompt ChatGPT based on the compilation error messages and additional code context. To evaluate the effectiveness of ChatTester, we further apply ChatTester on an evaluation dataset of 100 additional focal methods (to avoid using the same dataset that has been extensively analyzed in our study part), and compare the tests generated by ChatTester with the default ChatGPT to answer the following research question. * **RQ5 (Improvement): How effective is ChatTester in generating correct tests compared to ChatGPT: How effective is each component in ChatTester?** We compare the number of compilation errors and execution failures between the tests generated by ChatTester and the default ChatGPT. Moreover, we further investigate the contribution of each component in ChatTester. Our results show that ChatTester substantially improves the correctness of the ChatGPT-generated tests with 34.3% and 18.7% improvement in terms of the compilable rate and execution passing rate. In addition, our results further confirm the contribution of both components in ChatTester, _i.e.,_ the initial test generator is capable to generate more tests with correct assertions while the iterative test refiner is capable to fix the compilation errors iteratively. In summary, this paper makes the following contributions: * **The first study** that extensively investigates the correctness, sufficiency, readability, and usability of ChatGPT-generated tests via both quantitative analysis and user study; * **Findings and practical implications** that point out the limitations and prospects of ChatGPT-based unit test generation; * **The first technique ChatTester** including a novel initial test generator and iterative test refiner, which leverages ChatGPT itself to improve the correctness of its generated tests; * **An extensive evaluation** that demonstrates the effectiveness of ChatTester by substantially reducing the compilation errors and incorrect assertions in ChatGPT-generated tests. ## II Background ### _Large Language Models_ Large language models (LLMs) are a category of large-scale models that have been pre-trained on a massive textual corpus [24, 30, 31, 32]. In order to fully utilize the massive unlabeled training data, LLMs are often pre-trained with self-supervised pre-training objectives [33, 34, 35], such as Masked Language Modeling [36], Masked Span Prediction [37], and Causal Language Modeling [38]. Most LLMs are designed on a Transformer [32], which contains an encoder for input representation and a decoder for output generation. Existing LLMs can be grouped into three categories, including encoder-only models (_e.g.,_ CodeBERT [36]), decoder-only models (_e.g.,_ CodeGen [38]), and encoder-decoder models (_e.g.,_ CodeT5 [37]). To date, LLMs have been applied to various domains and achieved a great success [39, 40, 41, 42, 43, 44, 45, 46, 47]. More recent work leverages reinforcement learning to further align LLMs with human intent [21, 31]. For example, ChatGPT [21], a very recent LLM developed by OpenAI based on the generative pre-trained transformer (GPT) architecture, first tunes the GPT model with supervised learning and then updates the model with RLHF (_i.e.,_ reinforcement learning from human feedback). ChatGPT has attracted wide attention due to its outstanding capability of solving various tasks [48, 49, 50, 51]. In particular, there also emerge several recent works exploring ChatGPT's potential in software engineering tasks [52, 53, 54], _e.g.,_ program repair [52, 55], code generation [53]. To date, it still remains unclear how effective ChatGPT is in generating unit tests. To fill this knowledge gap, in this work, we perform the first study to explore the ChatGPT's ability for unit test generation; and then propose ChatTester, a novel ChatGPT-based unit test generation approach, which improves the quality of the tests generated by ChatGPT. ### _Unit Test Generation_ **Traditional Techniques** Traditional techniques incorporate search-based [56], random-based [14], or constraint-based strategies [10, 11] to generate unit tests automatically. For example, Evosuite [56], one of the most representative search-based techniques, leverages the evolutionary algorithm to automatically generate test suites for the given Java classes with the goal of maximizing the coverage. More recently, Lemieux _et al._[57] enhance search-based techniques by using tests generated by LLMs to escape from the "plateaus" during the search procedure. Although achieving reasonable coverage, unit tests generated by traditional techniques exhibit low readability and meaningfulness compared to manually-written ones, which thus cannot be directly adopted by developers in practice [58, 59, 60, 61, 62]. **Learning-based Techniques.** Recent work [16, 17, 18, 19, 20] leverages advanced deep learning (DL) techniques, especially large language models (LLMs), to generate unit tests. Learning-based techniques often regard test generation as a neural machine translation problem, which trains a model to translate the focal method into the corresponding test prefix or the test assertion. For example, studies [18, 19, 20] focus on generating assertions by training DL models or fine-tuning LLMs on the assertion generation dataset, where the input is a given test prefix along with the focal method and the output is the assertion. In addition to generating only assertions, recent work further fine-tunes LLMs to generate a complete test case for a given focal method. For example, AthenaTest [16] fine-tunes the LLM BART [22] on a test generation dataset where the input is the focal method with the relevant code context while the output is the complete test case. Similarly, Teco [17] fine-tunes the LLM CodeT5 [37] on a test completion dataset where the input is an incomplete test case along with the focal method and the relevant static code features while the output is the next statement in the given test. Due to being extensively trained on a massive developer-written code corpus, learning-based test generation techniques are capable of generating more human-like and meaningful test code [16], showing a large potential of LLMs in unit test generation. In this work, we evaluate the capability of ChatGPT in unit test generation. As a newly-released LLM, ChatGPT is different from the ones (_e.g.,_ BART [22], BERT [23], and T5 [24]) adopted in existing learning-based test generation techniques by incorporating RLHF [29] (reinforcement learning from human feedback) and a significantly-larger model scale, which has shown better alignment with human intent and outstanding performance in various tasks. Therefore, it is worthwhile to systematically explore ChatGPT's potential in unit test generation. In fact, our study results further indicate that ChatGPT substantially outperforms state-of-the-art learning-based unit test generation techniques by generating more correct tests also with higher coverage. ## III Study Setup ### _Benchmark_ Existing benchmarks on unit test generation [16, 18] only include a limited code context (_e.g.,_ the focal method alone) rather than a complete and executable project, and it is hard to directly compile and execute the generated tests with the existing datasets. Therefore, to comprehensively evaluate the quality of ChatGPT-generated tests, we construct a new benchmark of not only focal methods but also complete and executable projects. Specifically, we construct our benchmark as follows. **Project Collection.** We use the 4,685 Java projects in the popular benchmark CodeSearchNet [63] as the initial project list. For each project, we clone it from GitHub (as of March 25, 2023) and collect its relevant information (_e.g.,_ its creating time and its last commit time). To keep high-quality projects, we then filter the 4,685 Java projects according to the following criteria: (i) the project is under continuous maintenance (_i.e.,_ the project should have been updated as of January 1, 2023); (ii) the project has at least 100 stars; (iii) the project is built with Maven framework (for the ease of test executions) and it could be successfully compiled in our local environment. In this way, we obtain 185 Java projects. **Data Pair Collection.** We then extract data pairs from the 185 Java projects. Each data pair here refers to the pair of the focal method information and its corresponding test method. In particular, in addition to the focal method itself, the focal method information also includes the focal class declaration, all the fields, and all the method signatures (_i.e.,_ the class constructor and the instance methods). For each Java project, we extract data pairs in the following steps. (i) Given a Java project, we first find all the test classes in the project. If a class contains at least one method annotated with @Test, we regard this class as a test class and collect all the test methods in this test class. (ii) We then find the corresponding focal method for each test method based on the file path and the class name matching. For example, for a test method _"testFunction()"_ located in the path _"src/test/java/FooTest.java"_, we consider the method _"Function()"_ located in the path _"src/main/java/Foo.java"_ as its focal method. For the cases when there are multiple focal methods with the same name in the same class, we further filter them by the number and types of parameters so as to find the unique matching one. With such strict mapping criteria, we extract 1,748 data pairs from the 185 Java projects. Considering the costs of using ChatGPT API and the manual efforts involved in the user study, we further sample 1,000 data pairs as our final benchmark for the empirical study. Table I presents statistical distribution of our benchmark, including the lines of code in each focal method and test method, the number of parameters in each focal method, and the number of assertions in each test methods. The table shows that our benchmark includes test methods and focal methods of diverse scales and structures. ### _Basic Prompt Design_ To avoid using too simple prompts that might lead to underestimation of ChatGPT's capability or using too sophisticated prompts that are uncommon in practice, we design our basic prompt by carefully following the common practice in existing unit test generation work [16, 17, 18] and widely-adopted experience on using ChatGPT [64, 53]. In particular, our basic prompt includes two part: (i) the natural language description part (_i.e.,_ NL part) that explains the task to ChatGPT, and (ii) the code context part (_i.e.,_ CC part) that contains the focal method and the other relevant code context. We then explain each part in detail. **CC Part.** Following existing learning-based unit test generation work [16], we include following code context into the CC part: (i) the complete focal method, including the signature and body; (ii) the name of the focal class (_i.e.,_ the class that the focal method belongs to); (iii) the field in the focal class; and (iv) the signatures of all methods defined in the focal class. **NL Part.** Based on the widely-acknowledged experience on using ChatGPT, we include the following contents in the NL part: (i) a role-playing instruction (_i.e.,_ "You are a professional who writes Java test methods.") to inspire ChatGPT's capability of test generation, which is a common prompt optimization strategy [64, 53]; and (ii) a task-description instruction (_i.e.,_ "Please write a test method for the {focal method name} based on the given information using {Junit version}") to explain the task. The top half of Figure 1 shows an example of our basic prompt. After querying with the basic prompt, ChatGPT then returns a test as shown in the bottom half of Figure 1. ### _Baselines_ We further include two state-of-the-art traditional and learning-based unit test generation techniques as baselines. Fig. 1: Basic prompt Fig. 2: The workflow of our empirical study For traditional techniques, we include Evosuite [56] as the baseline. In particular, we first apply Evosuite with the default setting to generate tests for the focal class of each data pair in our benchmark, and then keep the tests that invoke the focal method. Since Evosuite might generate more than one tests for a focal method, we only keep the first one for a fair comparison with other techniques (_i.e.,_ AthenaTest and ChatGPT). For learning-based techniques, we include AthenaTest [16] as the baseline. Since AthenaTest has not released its pre-trained BART-based model or its fine-tuned model, we reproduce it according to its paper by fine-tuning the widely-used LLM CodeT5 on the same fine-tuning dataset used in AthenaTest. We choose CodeT5 since it has been pre-trained on both textual and code corpus, which is also the best pre-training setting shown in the AthenaTest paper [16]. In addition, to avoid potential data leakage, the data that is duplicated between the fine-tuning dataset and our benchmark are further removed from the fine-tuning dataset. ### _Experimental Procedure_ Figure 2 shows the overview of our experimental procedure. For each data pair in the benchmark constructed in Section III-A, we query ChatGPT with our basic prompt designed in Section III-B and take the test generated by ChatGPT as the output. To automate our experiments, we use the official ChatGPT API [21] with the default setting. In this work, we focus on the gpt-3.5-turbo model. We then put the generated test in the same directory of its focal class, and attempt to compile and execute it for further analysis. We then explain the detailed procedure in each RQ, respectively. **RQ1: Correctness.** Following existing learning-based test generation work [16, 17], we measure the correctness of the generated tests with three metrics, including (i) syntactic correctness (whether the test could pass the syntax checker), (ii) compilation correctness (whether the test could be successfully compiled), and (iii) execution correctness (whether the test could pass the execution). Here we leverage AST parser (such as JavaParser [65]) as syntax checker. In addition, we further investigate the common error types in those incorrect tests to better understand the limitations of ChatGPT in test generation. Specifically, we automatically extract the error messages thrown during the compilation and execution, including different compilation and execution error types. **RQ2: Sufficiency.** In this RQ, we include three metrics to assess the sufficiency of the tests generated by ChatGPT: (i) the statement coverage of the test on the focal method; (ii) the branch coverage of the test on the focal method; (iii) the number of assertions in the test case. In particular, we leverage Jacoco [66] to collect the coverage. **RQ3 & RQ4: User study for readability and usability.** In these two RQs, we conduct a user study to investigate the readability and usability of the tests generated by ChatGPT. Here, we only focus on 248 passing tests generated by ChatGPT since it is less meaningful to recommend tests with compilation errors or execution errors to developers in practice. We invite five participants whose Java development experiences vary from 4 years to 5 years. Given the test generated by ChatGPT (denoted as X) and the manually-written test in the project (denoted as Y), we ask each participant about the following two questions. It is worth noting that participants are not informed which test is generated by ChatGPT or is manually written. * Question 1 (readability): "Please rate the readability of X and Y from 1 to 4." (1: poor readability, 2: fair readability with major issues, 3: acceptable readability with minor issues, 4: great readability). * Question 2 (usability): "Which test (X or Y) do you prefer to directly use in the project? A: X, B:Y, C: No preference". ## IV Study Results ### _RQ1: Correctness_ Table II presents the correctness of the 1,000 tests generated by ChatGPT and other techniques. Overall, we could observe that a large portion of tests generated by ChatGPT suffer from correctness issues, _i.e.,_, 42.1% of the generated tests are successfully compiled while only 24.8% of the generated tests are executed successfully without any execution errors. We further manually inspect the failed tests to check whether they actually reveal bugs in the focal method under test, but we find that all of them are caused by the improper test code itself. As for the learning-based baseline AthenaTest, ChatGPT has a substantial improvement over AthenaTest in terms of the syntactic correctness, compilation correctness, and executable correctness. For example, almost all the tests generated by ChatGPT (except the one has an incorrect parenthesis generated) are syntactically correct, but nearly a half of the tests generated by AthenaTest are syntactically incorrect. The reason might be that the significantly-larger model scale in ChatGPT helps better capture syntactical rules in the massive pre-training code corpus. As for the traditional search-based baseline Evosuite, we could observe a higher compilable rate and passing rate in its generated tests. In fact, it is as expected since Evosuite prunes invalid test code during its search procedure and generates assertions exactly based on the dynamic execution values, while learning-based techniques (_i.e.,_ ChatGPT and AthenaTest) directly generate tests token by token without any post-generation validation or filtering. Therefore, we do not intend to conclude that Evosuite is better at generating more correct tests, since the correctness of tests generated by learning-based techniques could be further improved if they also incorporate similar post-generation validation to filter those incorrect tests. **Finding 1:** ChatGPT substantially outperforms existing learning-based technique in terms of syntactic, compilation, and execution correctness. However, only a portion of its generated tests can pass the execution while a large ratio of its generated tests still suffer from compilation errors and execution errors. **Bad Case Breakdown.** We further analyze the common error types in the failed tests generated by ChatGPT (_i.e.,_ those tests failed on the compilation or execution). In particular, we first automatically categorize each test based on the error message; and then we manually summarize and merge similar types into high-level categories. Table III shows the breakdown for tests with compilation errors, while Table IV shows the breakdown for tests with execution failures. _Failed Compilation._ In Table III, the column "Frequency" shows the number of each compilation errors in 579 uncompilable tests. Note that there could be multiple compilation errors in one test and thus the sum is larger than 579. Due to space limits, we only present the frequent compilation errors observed more than 10 times. As shown in the table, the generated tests have diverse compilation errors. First, the most frequent compilation errors are caused by un-resolvable symbols, _e.g.,_ the generated tests include some undefined classes, methods, or variables. Second, another large category of compilation errors are related to type errors, _e.g.,_ the parameter type in the method invocation is inconsistent with the one defined in the method declaration. Third, ChatGPT also frequently generates test code that invalidly accesses private variables or methods (_i.e.,_ access errors.). In addition, some generated tests encounter compilation errors by invalidly instantiating abstract class or using unsupported operators. _Failed Execution._ In Table IV, we group all the infrequent errors (_i.e.,_ less than three times) into the "others" category due to space limits. As shown in the table, the majority of failed executions (85.5%) are caused by assertion errors, _i.e.,_ the assertions generated by ChatGPT consider the behavior of the program under test violates the specification. As mentioned above, we manually inspect these assertion errors to identify whether they are caused by the bugs in the focal method or the incorrect assertions themselves, and we find all of them as a result of incorrect assertions generated by ChatGPT. It implies that ChatGPT might fail to precisely understand the focal method and the quality of the assertions generated by ChatGPT should be largely improved. In addition, we observe that the remaining execution errors are related to different runtime exceptions. For example, Figure 3 presents an example of the failed execution in the test generated by ChatGPT. In particular, the test throws _NullPointerException_ during its execution at line 3. The error occurs because the created object _"url"_ assesses an external resource _"test.jar"_ which does not exist (in line 2). It actually shows an inherent limitation in ChatGPT, _i.e.,_ the unawareness of external resources during test generation. **Finding 2:** The ChatGPT-generated tests encounter diverse compilation errors, such as symbol resolution errors, type errors, and access errors; the majority of failed executions are caused by the incorrectly-generated assertions. ### _RQ2: Sufficiency_ Table V presents the statement and branch coverage of generated tests that could pass the execution. We further include the coverage of the manually-written tests (_i.e.,_ the original test for the focal method in the project) for reference. As shown in the table, we could observe that the tests generated ChatGPT achieve the highest coverage compared to existing learning-based and search-based techniques and it also achieves comparable coverage as manually-written tests. Figure 4 presents a distribution plot for the number of assertions in each test generated by different techniques. Interestingly, we observe that ChatGPT-generated tests exhibit most similar distribution as manully-written tests in the number of assertions per test. In particular, Evosuite tends to generate tests with less assertions while the learning-based technique AthenaTest would generate some tests with abnormally-larger number of assertions (_i.e.,_ more than 15 assertions per test) Fig. 3: NullPointerException Example than manually-written ones. The potential reason might be that RLHF helps ChatGPT generate more human-like test code. **Finding 3:** The ChatGPT-generated tests resemble manually-written ones in terms of test sufficiency. ChatGPT achieves comparable coverage as manual tests, and also the highest coverage compared to existing techniques; ChatGPT also generate more human-like tests with similar number of assertions per test as manually-written tests. ### _RQ3: Readability_ Figure 5 reports the answers to the first survey question (_i.e.,_ readability) in a stacked bar chart, where the x-axis represents each participant (_i.e.,_ from A to E) and the y-axis represents the ratio of different scores. Overall, most ChatGPT-generated tests are assessed with decent readability, and they are also considered with comparable and sometimes even better readability compared to manually-written tests. ### _RQ4: Usability_ Figure 6 reports the answers to the second survey question (_i.e.,_ usability), where the y-axis represents each participant and the x-axis shows the number of responses that prefer the manually-written tests, ChatGPT-generated tests, or no preference. Interestingly, we find the ChatGPT-generated tests can be very competitive and sometimes there are even a considerable portion of cases that participants prefer tests generated by ChatGPT. Based on the feedback from participants, we find that people often make the decision based on a comprehensive consideration of multiple factors, such as the code format, the comments, the way how the focal method is invoked, and the rationality of assertions. In other words, the participants' preference implies that ChatGPT is able to generate tests in line with good manual coding practice, which makes the participants willing to directly use the generated tests. ### _Enlightenment_ Based on our results above, we further discuss the implications on the strengths and the limitations of ChatGPT-based unit test generation. **Limitations.** As shown in our results, a large portion of ChatGPT-generated tests fail in compilation or execution, which might be a result of two inherent limitations in such a generative language model like ChatGPT. First, most compilation errors might be caused by ChatGPT's unawareness of the "deep knowledge" in the code. Although being pre-trained on a massive code corpus could help ChatGPT capture the syntactical rules in the code, the nature of ChatGPT is still a probabilistic token-sequence generator and thus it is challenging for ChatGPT to be fully aware of the deep rules in the code, _e.g.,_ only the public fields could be accessed outside the class and the abstract classes cannot be instantiated. _Therefore, to help ChatGPT overcome this limitation, it is important to remind ChatGPT of such deep knowledge during its generating tests_. Second, most execution errors (_i.e.,_ assertion errors) result from ChatGPT's lack of understanding about the intention of the focal method. As a result, it is challenging for ChatGPT to write proper assertions as specification for the focal method under test. _Therefore, to help ChatGPT overcome this limitation, it is essential to help ChatGPT to better understand the intention of the focal method_. **Strengths.** Although generating a lot of tests failed in compilation or execution, the good thing is that most of the passing tests generated by ChatGPT are often of high quality in terms of the sufficiency, the readability, and the usability in practice. These passing tests could mostly be put into direct use to alleviate manual test-writing efforts. _Therefore, leveraging ChatGPT to generate unit tests is a promising Fig. 4: Number of assertions in generated tests Fig. 5: Response to readability Fig. 6: Response to usability direction if the correctness issues in its generated tests could be further addressed._ **Enlightenment:** ChatGPT-based unit test generation is promising since it is able to generate a number of high-quality tests with comparable sufficiency, readability, and usability as manually-written tests. However, further efforts are required to address the correctness issues in the ChatGPT-generated tests. The two directions to this end are (i) to provide ChatGPT with deep knowledge about the code and (ii) to help ChatGPT better understand the intention of the focal method, so as to reduce its compilation errors and assertion errors, respectively. ## V Approach of ChatTester **Overview.** Inspired by our findings and enlightenment above, we then propose ChatTester, a novel ChatGPT-based unit test generation approach, which improves the correctness of ChatGPT-generated tests by ChatGPT itself. In particular, ChatTester contains two components, _i.e.,_ an initial test generator and an iterative test refiner. Figure 7 shows the workflow of ChatTester. Instead of directly asking ChatGPT to generate a test for the given focal method, the initial test generator decomposes the test generation task into two sub-tasks: (i) first understanding the intention of the focal method, and (ii) then generating a unit test for the focal method based on the intention. Compared to the basic prompt, the initial test generator aims to generate tests with higher-quality assertions based on the help of the intermediate step of intention generation. The iterative test refiner iteratively fixes the compilation errors in the tests generated by the initial test generation. As mentioned in our enlightenment, the key to eliminating most uncompilable tests is to provide "deep knowledge" to ChatGPT during test generation. However, given the large number of such potential rules, it is infeasible to include all of them in the prompt in advance to the test generation. Therefore, we follow a validate-and-fix paradigm to iteratively refine the uncompilable test by prompting ChatGPT with compilation error messages and additional relevant code context. In other words, the iterative test refiner actually leverages the error messages from the compiler as the violation instances of the "deep knowledge", so as to fix compilation errors in the generated tests. ### _Initial Test Generator_ The initial test generator decomposes test generation into two steps: (i) first leveraging ChatGPT to understand the focal method via the _intention prompt_, and (ii) then leveraging ChatGPT to generate a test for the focal method along with the generated intention via the _generation prompt_. The intention prompt asks ChatGPT to return a natural language description of the intended functionality of the focal method under test. In particular, the code context part is similar to the basic prompt in Section III-B, including the class declaration, constructor signatures, relevant fields, and the focal method itself; and the natural language instruction is to ask ChatGPT to infer the intention of the focal method. Then, the generation prompt further includes the generated intention and asks ChatGPT to generate a unit test for the focal method. Figure 8 presents an example comparing how the basic prompt and the initial test generator generate a test for the same given focal method _"setCharA(t)"_. As shown in Figure 8 (a), given the basic prompt without any intention inference, ChatGPT generates a test with an incorrect assertion (_i.e.,_ _"assertEquals(Hello-World, strBuilder.toString())"_. However, in Figure 8 (b), with the intention prompt, ChatGPT first correctly generates the intention for the focal method Fig. 8: Basic prompt v.s. Prompts in the initial test generator Fig. 7: The workflow of ChatTester _"setCharAt()"_; and then with the generation prompt, Chat-GPT generates a test with a correct assertion (_i.e., "assertEquals(Hello Jordl, strBuilder.toString())"_. The additional intention inference is designed to enhance ChatGPT's understanding about the focal method, which further leads to more accurate assertion generation. ### _Iterative Test Refiner_ The iterative test refiner iteratively fixes the compilation errors in the tests generated by the initial test generation. Each iteration successively leverages two steps: (i) first validating the generated test by compiling it in a validation environment; (ii) second constructing a prompt based on the error message during compilation and the extra code context related to the compilation error. The new prompt is then queried into ChatGPT to get a refined test. Such a procedure repeats until the generated test could be successfully compiled or the maximum number of iterations is reached. Note that currently we only focus on fixing compilation errors instead of execution errors, since in practice it is challenging to identify whether a test execution failure is caused by the incorrect test code or by the incorrect focal method. We then explain each step with the illustration example in Figure 9. **Validator.** For ease of compiling the generated test, we directly create a test file in the same directory of the focal class. In particular, the generated test method is encapsulated in a test class with relevant import statements. Then, the test file is compiled with the Java compiler. A controller then decides the next step based on the compilation status: * _Successful compilation:_ if there is no compilation error, the controller would terminate the iterative refinement procedure and return the final test; * _Valid refinement:_ if the number of compilation errors is less than that in the last iteration, the current refinement is considered as a valid refinement. The controller then proceeds to the iterative prompt constructor so as to continue the refinement; * _Invalid refinement:_ if the number of compilation errors is larger than or the same as that in the last iteration, the current refinement is considered an invalid refinement. The controller would terminate the refinement if the accumulated number of invalid refinements is larger than the maximum (_e.g.,_ 3 in our experiments); or proceeds to the iterative prompt constructor. **Iterative Prompt Constructor.** The iterative prompt constructor is built on top of (i) an EM parser that analyzes the error message about the compilation error, and (ii) a code analyzer that extracts the additional code context related to the compilation error. In particular, the EM parser collects three types of information by parsing the error message: * _Error type_: the high-level description about the error, which is often the first sentence in the error message. For example, "_cannot find simple class..._" and "_cannot find symbol method..._" are extracted error types in the illustration example. * _Buggy location_: the line number of the test code triggering compilation errors. With such location information, the prompt constructor is able to insert the relevant information around the buggy line, _i.e.,_ starting with the tag _"(Buggy line)"_ as shown in the example. * _Buggy element_: the objects or variables in the buggy location. For example, for the second iteration in Figure 9, we analyze the buggy line with the error message, and find that they are associated with the class _"Xml"_ (which is the buggy element in this example). With the buggy elements, the code analyzer is then able to extract additional code context from other Java files rather than the focal class. In particular, the code analyzer first parses the whole project to find the class file that the buggy element belongs to, and then extracts the class declaration and public method signature from the class file. This extracted class information would further be added to the prompt as additional information, _e.g., "//Xml class..._" highlighted in blue. In fact, such additional information from other classes could be very important for generating high-quality tests, since it is very common that the test code involves with not only the focal class but also other classes. However, given the limited input length for ChatGPT, it is infeasible to directly include the whole program into the prompt (which would also lead to bad performance since the focus of ChatGPT might be confused). Therefore, in ChatTester, we propose to append necessary additional code contexts in such an iterative way. Figure 9: Prompt in the iterative test refiner ## VI Evaluation of ChatTester ### _Evaluation Setup_ **Evaluation Dataset.** To evaluate the effectiveness of ChatTester, we further construct an additional evaluation dataset so as to avoid using the same benchmark that has been extensively analyzed in our previous study. Since our approach is inspired by the findings from our study, evaluating it on a separate dataset could eliminate the potential overfitting issues. In Section III-A, we collect 1,748 data pairs in total and 1,000 of them are included in the benchmark for the empirical study; and in this section, we further re-sample another 100 data pairs from the remaining 748 data pairs as our evaluation dataset to evaluate the effectiveness of ChatTester. **Studied Techniques.** To evaluate the overall effectiveness of ChatTester and the individual contribution of each component (_i.e.,_ the initial test generator and the iterative test refiner) in ChatTester, we study three techniques: * ChatGPT: the default ChatGPT with the basic prompt, which is the one used in our empirical study. * ChatTester-: a variant of ChatTester without the iterative test refiner, which actually enhances the default ChatGPT with the initial test generator of ChatTester. * ChatTester: the complete ChatTester with both the initial test generator and the iterative test refiner. To mitigate the randomness in ChatGPT, we repeat all experiments for three times and present the average results. ### _Evaluation Results_ Table VI presents the correctness of the tests generated by ChatGPT and our approaches. Overall, we could observe a substantial improvement in both the compilation rate and passing rate of the tests generated by ChatTester compared to the default ChatGPT. For example, additional 34.3% tests (= 73.3% - 39.0%) can be successfully compiled and additional 18.7% tests (= 41.0% - 22.3%) can pass the execution. In summary, the proposed approach ChatTester effectively improves the correctness of the tests generated by ChatGPT. In addition, we could observe that the variant ChatTester- outperforms the default ChatGPT by achieving 11.7% and 7.4% improvements in the compilation rate and passing rate. In particular, we find that among the ChatGPT-generated tests with incorrect assertions, 12.5% of them are fixed into correct assertions in ChatTester-, indicating the effectiveness of the initial test generator. Figure 8 is an example of how the intention prompt improves the correctness of assertions in our dataset. Moreover, we could observe a further improvement from ChatTester- to ChatTester, _i.e.,_ additional 22.6% tests and 11.3% tests are fixed by the iterative test refiner into compilable tests and passing tests. Figure 9 is an example of how the iterative test refiner fixes the compilation errors in two iterations. In summary, both components (_i.e.,_ the initial test generator and the iterative test refiner) positively contribute to the effectiveness of ChatTester. ## VII Threats to Validity One threat to validity lies in the randomness in ChatGPT. To alleviate this issue, we repeat our experiments for three times (given the costs in using ChatGPT API) and present the average results when automatically evaluating the effectiveness of ChatTester. We do not repeat our experiments in the empirical study, due to the large manual efforts involved in the user study. However, we actually observe similar correctness results of the tests generated by ChatGPT on two different datasets (Table II and Table VI), indicating the consistency of our results. Another threat to validity lies in the benchmarks used in this work. Our findings might not generalize to other datasets. To eliminate this issue, we construct our datasets to include more high-quality projects and diverse focal methods and test methods. In addition, we further evaluate the proposed approach ChatTester on a different evaluation dataset to avoid overfitting issues. Another threat lies in the potential data leakage of the manually-written tests being part of the training data in ChatGPT, which might lead to the overestimation of ChatGPT's capability in test generation. Since ChatGPT has not released its training data, it is hard to precisely identify this issue. However, both the large portion of uncompilable and failed tests generated by ChatGPT and the improvement achieved by ChatTester over ChatGPT indicate that ChatGPT has not simply memorized the data used in our work. ## VIII Conclusion In this work, we perform the first empirical study to evaluate ChatGPT's capability of unit test generation, by systematically investigating the correctness, sufficiency, readability, and usability of its generated tests. We find that the tests generated by ChatGPT still suffer from correctness issues, including diverse compilation errors and execution failures (mostly caused by incorrect assertions); but the passing tests resemble manually-written tests by achieving comparable coverage, readability, and even sometimes developers' preference. Our findings indicate that ChatGPT-based unit test generation is very promising if the correctness of its generated tests could be further improved. Inspired by our findings above, we further propose ChatTester, which leverages ChatGPT itself to improve the quality of its generated tests. Our evaluation demonstrates the effectiveness of ChatTester by generating 34.3% more compilable tests and 18.7% more tests with correct assertions than the default ChatGPT.
2301.01688
Feedback Stabilization of Tank-Liquid System with Robustness to Surface Tension
We construct a robust stabilizing feedback law for the viscous Saint-Venant system of Partial Differential Equations (PDEs) with surface tension and without wall friction. The Saint-Venant system describes the movement of a tank which contains a viscous liquid. We assume constant contact angles between the liquid and the walls of the tank and we achieve a spill-free exponential stabilization with robustness to surface tension by using a Control Lyapunov Functional (CLF). The proposed CLF provides a parameterized family of sets which approximate the state space from the interior. Based on the CLF, we construct a nonlinear stabilizing feedback law which ensures that the closed-loop system converges exponentially to the desired equilibrium point in the sense of an appropriate norm.
Iasson Karafyllis, Filippos Vokos, Miroslav Krstic
2023-01-04T16:27:18Z
http://arxiv.org/abs/2301.01688v1
# Feedback Stabilization of Tank-Liquid System with Robustness to Surface Tension ###### Abstract We construct a robust stabilizing feedback law for the viscous Saint-Venant system of Partial Differential Equations (PDEs) with surface tension and without wall friction. The Saint-Venant system describes the movement of a tank which contains a viscous liquid. We assume constant contact angles between the liquid and the walls of the tank and we achieve a spill-free exponential stabilization with robustness to surface tension by using a Control Lyapunov Functional (CLF). The proposed CLF provides a parameterized family of sets which approximate the state space from the interior. Based on the CLF, we construct a nonlinear stabilizing feedback law which ensures that the closed-loop system converges exponentially to the desired equilibrium point in the sense of an appropriate norm. ## 1 Introduction The Saint-Venant model, which was derived in [2], constitutes a significant and very influential mathematical model in fluid mechanics. It is also referred in literature as the shallow water model. Recent extensions of the Saint-Venant model take into account various types of forces such as viscous stresses, surface tension and friction forces (see [10, 19, 29, 33, 41, 43]). The feedback stabilization problem of the Saint-Venant model is a challenging problem. The dominant cases studied in the litterature include the inviscid model - which ignores forces such as viscous stresses and surface tension - and the linearized model (see [1, 3, 4, 5, 11, 12, 13, 14, 16, 17, 18, 30, 35, 36]). In [11, 12, 14, 35, 36] the problem of the movement of an 1-D tank which contains a fluid is studied. More specifically, [11, 12, 14] provide controllability results for the Saint-Venant model without viscosity, without friction and without surface tension, while [35] suggests a new variational formulation of Saint-Venant equations and proves the steady-state controllability of the linear approximations of several control configurations. In [36] the inviscid Saint-Venant model is studied and appropriate stabilizing full-state feedback and output feedback control laws are constructed. In [3, 4, 5, 12, 13, 16, 17, 18, 30] the movement of a fluid in an open channel is studied. Stabilization results are provided in [1, 3, 4, 13, 17, 18]. In [3, 4, 17, 18, 30] the linearized Saint-Venant model is being used while [1] deals with a general linear hyperbolic system which appears in Saint-Venant equations among other linear hyperbolic laws. The works [5, 12, 13, 16] study the nonlinear Saint-Venant model. In [5] the feedforward control problem of general nonlinear hyperbolic systems is studied and an application using the Saint-Venant model with friction is provided. In [12, 13] local convergence of the state of hyperbolic systems of conservation laws is guaranteed using a strict Lypaunov function which exploits Riemann invariants. An application to the inviscid, frictionless Saint-Venant model is provided as well. The paper [16] achieves regulation of the water flow and level in water-ways using the inviscid Saint-Venant model without friction and without surface tension. Very few studies in the literature deal with the nonlinear viscous Saint-Venant model that is used for the description of the movement of a tank which contains an incompressible, Newtonian fluid. The first work that studied the nonlinear viscous Saint-Venant model without wall friction and without surface tension was [23]. In [23] an appropriate nonlinear feedback law is constructed which provides semiglobal stabilization results by following a CLF methodology. The work [25] extends the results obtained in [23] in the case where wall friction forces are taken into account. In [25] both the case of a velocity independent friction coefficient and the general case of friction coefficient are studied. A robust with respect to wall friction stabilizing feedback law is constructed. Another study which deals with the nonlinear viscous Saint-Venant model is [24]. In [24] a stabilizing output-feedback control law for the viscous Saint-Venant PDE system without wall friction and without surface tension is constructed. The output-feedback control law is utilized through a functional-observer methodology and a CLF methodology. The study of the movement of a fluid which interacts with a gas boundary and a solid boundary is inevitably intertwined with the notion of the surface tension and the notions of contact angle and wettability (see [27, 34]). Surface tension is crucial as it acts in the interface between liquid and gas. From a mathematical point of view surface tension is very important because it changes the order of the PDEs (it is expressed by a third order term). Contact angle is the angle at which the fluid surface intersects with a solid boundary as stated in [34], and it is a measure of wettability of the solid surface. There is a wide literature concerning the topic of contact angles (see for instance [20, 21, 27, 28, 37, 38, 42, 43]). The concept of contact angle is significant in our study because it provides an additional boundary condition. In this paper we solve the feedback stabilization problem for a tank containing a liquid modeled by the viscous Saint-Venant system of PDEs with surface tension and without wall friction. We consider the case of constant contact angles between the liquid and the walls of the tank, as in [37, 38]. We utilize a specific form of the feedback law initially presented in [25], which constitutes a more general form of the feedback law in [23] with robustness to surface tension. Indeed, we saw that the proposed feedback law guarantees stabilization _no matter what the value of the surface tension coefficient is_. Therefore, the knowledge of the surface tension coefficient is not necessary and the feedback law is independent of the surface tension coefficient. We achieve a spill-free exponential stabilization, with robustness to surface tension. As in [23, 24, 25] we follow a CLF methodology and we design the feedback law based on an appropriate functional, which is the CLF. The CLF determines a specific parameterized set which approximates the state space of the control problem from the interior. Although this work presents enough technical similarities with [25], there are some crucial differences. Firstly, in contrast with [25], the system of PDEs contains an extra term due to surface tension and does not contain a friction term. Moreover, in order for the model to be complete and for the problem to be well-posed, an additional boundary condition is used. The additional boundary condition is provided by the assumption of a constant contact angle. Here we use only one CLF while in [25] two different functionals are proposed. As a consequence this work does not provide a bound for the sup-norm of the fluid velocity, as in [25], due to the absence of an appropriate functional. Here the CLF is different from the corresponding one in [25], as it contains an additional potential energy term due to the effect of the surface tension. This paper is organized as follows. In Section 2 the control problem is described as well as its main objective. In Section 3 we provide the intuitive ideas and the statements of the results of this work along with some auxiliary lemmas. Section 4 includes all the proofs of the results presented in Section 3. Finally, Section 5 points out the conclusions of this work and suggests topics for future research. ## Notation * \(\mathbb{R}_{+}=[0,+\infty)\) is the set of non-negative real numbers. * Let \(S\subseteq\mathbb{R}^{n}\) be an open set and let \(A\subseteq\mathbb{R}^{n}\) be a set such that \(S\subseteq A\subseteq cl(S)\). By \(C^{0}(A;\Omega)\), we denote the class of continuous functions on \(A\), which take values in \(\Omega\subseteq\mathbb{R}^{m}\). By \(C^{k}(A;\Omega)\), where \(k\geq 1\) is an integer, we denote the class of functions on \(A\subseteq\mathbb{R}^{n}\), which takes values in \(\Omega\subseteq\mathbb{R}^{m}\) and has continuous derivatives of order \(k\). In other words, the functions of class \(C^{k}(A;\Omega)\) are the functions which have continuous derivatives of order \(k\) in \(S=int(A)\) that can be continued continuously to all points in \(\partial S\cap A\). When \(\Omega=\mathbb{R}\) then we write \(C^{0}(A)\) or \(C^{k}(A)\). When \(I\subseteq\mathbb{R}\) is an interval and \(G\in C^{1}(I)\) is a function of a single variable, \(G^{\prime}(h)\) denotes the derivative with respect to \(h\in I\). * Let \(I\subseteq\mathbb{R}\) be an interval, let \(a<b\) be given constants and let \(u:I\times[a,b]\to\mathbb{R}\) be a given function. We utilize the notation \(u[t]\) to denote the profile at certain \(t\in I\), i.e., \((u[t])(x)=u(t,x)\) for all \(x\in[a,b]\). When \(u(t,x)\) is three times differentiable with respect to \(x\in[a,b]\), we use the notation \(u_{x}(t,x)\), \(u_{xx}(t,x)\) and \(u_{xxx}(t,x)\) for the first, second and third derivative of \(u\) with respect to \(x\in[a,b]\) respectively, i.e., \[u_{x}(t,x)=\frac{\partial u}{\partial x}(t,x),u_{xx}(t,x)=\frac{\partial^{2}u }{\partial x^{2}}(t,x)\text{ and }u_{xxx}(t,x)=\frac{\partial^{3}u}{ \partial x^{3}}(t,x)\] When \(u(t,x)\) is differentiable with respect to \(t\), we use the notation \(u_{t}(t,x)\) for the derivative of \(u\) with respect to \(t\), i.e., \[u_{t}(t,x)=\frac{\partial u}{\partial t}(t,x)\] * Given a set \(U\subseteq\mathbb{R}^{n}\), \(\chi_{U}\) denotes the characteristic function of \(U\) defined by \[\chi_{U}(x):=\left\{\begin{array}{ll}1&\text{for all }x\in U\\ 0&\text{for all }x\in U\end{array}\right.\] The sign function \(\operatorname{sgn}:\mathbb{R}\to\mathbb{R}\) is the function defined by \[\operatorname{sgn}(x):=\left\{\begin{array}{ll}1&\text{for }x>0\\ 0&\text{for }x=0\\ -1&\text{for }x<0\end{array}\right.\] * Consider given constants \(a,b\) such that \(a<b\). For \(p\in[1,+\infty)\), \(L^{p}(a,b)\) denotes the set of equivalence classes of Lebesgue measurable functions \(u:(a,b)\to\mathbb{R}\) with \[\left\|u\right\|_{p}:=\left(\int_{a}^{b}\left|u(x)\right|^{p}dx\right)^{1/p}<+\infty.\] \(L^{\infty}(a,b)\) denotes the set of equivalence classes of Lebesgue measurable functions \(u:(a,b)\to\mathbb{R}\) with \[\left\|u\right\|_{\infty}:=\operatorname*{ess\,sup}_{x\in(a,b)}\left\{\left|u( x)\right|\right\}<+\infty.\] For an integer \(k\geq 1\), \(H^{k}(a,b)\) denotes the Sobolev space of functions in \(L^{2}(a,b)\) with all its weak derivatives up to order \(k\geq 1\) in \(L^{2}(a,b)\). The Control Problem We want to manipulate the motion of a tank which contains a viscous, Newtonian, incompressible liquid. Viscosity is utilized as a gain in the controller on the difference between the boundary liquid levels and to settle a region of attraction. The tank is subject to an acceleration which we consider as the control input and obeys Newton's second law. The problem is described by the viscous Saint-Venant equations. We restrict our study to the one-dimensional (1-D) case of the model. Moreover, contrary to prior works, in this work we do not neglect the surface tension that acts on the free surface (liquid-gas interface) but we neglect friction with the tank walls. We intend to drive asymptotically the tank to a specified position. The aforementioned goal must be achieved without liquid spillage and by having both the tank and the liquid within the tank at rest. The equations describing the motion of the liquid in the tank can be derived by performing mass and momentum balances (from first principles assuming that the liquid pressure is the combination of hydrostatic pressure and capillary pressure given by the Young-Laplace equation (see [15]) and by ignoring friction with the tank walls). The equations can also be derived by using approximations of the Navier-Stokes equations for the incompressible fluid (see [6, 7, 8, 28, 37, 38]; but see also [21, 29] for fluid equations involving capillary phenomena). We denote by \(a(t)\) the position of the left side of the tank at time \(t\geq 0\) and we consider the length of the tank to be \(L>0\) (a constant). The evolution of the liquid level and of the liquid velocity is described by the following equations \[H_{t}+(Hu)_{z}=0,\text{ for }t>0,z\in[a(t),a(t)+L] \tag{1}\] \[(Hu)_{t}+\left(Hu^{2}+\frac{1}{2}gH^{2}\right)_{z}-\sigma H\left(\frac{H_{zz} }{\left(1+H_{z}^{2}\right)^{3/2}}\right)_{z}=\mu(Hu_{z})_{z}\] \[\text{ for }t>0,z\in(a(t),a(t)+L) \tag{2}\] where \(H(t,z)>0\), \(u(t,z)\in\mathbb{R}\) are the liquid level and the liquid velocity, respectively, at time \(t\geq 0\) and position \(z\in[a(t),a(t)+L]\), while \(g,\mu,\sigma>0\) (constants) are the acceleration of gravity, the kinematic viscosity of the liquid and the ratio of the surface tension and liquid density, respectively. In some papers the term \(\left(\frac{H_{zz}}{\left(1+H_{z}^{2}\right)^{3/2}}\right)_{z}\) is replaced by \(H_{zzzz}\) (see [6, 7, 8, 32], but here we prefer a more accurate description of the surface tension. The liquid velocities at the walls of the tank are equal with the tank velocity. Consequently: \[u(t,a(t))=u(t,a(t)+L)=w(t),\text{ for }t\geq 0 \tag{3}\] where \(w(t)=\dot{a}(t)\) is the velocity of the tank at time \(t\geq 0\). Moreover, we get for the tank \[\ddot{a}(t)=-f(t),\text{ for }t>0 \tag{4}\] where \(-f(t)\), the control input to the problem, is the tank acceleration. Defining the quantities \[v(t,x) :=u(t,a(t)+x)-w(t) \tag{5}\] \[h(t,x) :=H(t,a(t)+x)\] (6) \[\xi(t) :=a(t)-a^{*} \tag{7}\] where \(a^{*}\in\mathbb{R}\) is the position (a constant) which we want the left side of the tank to reach, we get the model: \[\dot{\xi} =w,\text{ for }t\geq 0 \tag{8}\] \[\dot{w} =-f,\text{ for }t\geq 0\] (9) \[h_{t}+(hv)_{x} =0,\text{ for }t>0,\,x\in[0,L]\] (10) \[(hv)_{t}+\left(hv^{2}+\frac{1}{2}gh^{2}\right)_{x}-qh\left( \frac{h_{xx}}{\left(1+h_{x}^{2}\right)^{3/2}}\right)_{x}=\mu\left(hv_{x} \right)_{x}+hf\,,\] \[\text{for }t>0,\,x\in(0,L)\] (11) \[v(t,0)=v(t,L)=0,\text{ for }t\geq 0 \tag{12}\] Equations (10) and (12) imply that every classical solution of (8)-(12) satisfies the following \[\frac{d}{d\,t}\left(\int_{0}^{L}h(t,x)dx\right)=0\text{ for all }t>0 \tag{13}\] Consequently, the total mass of the liquid \(m>0\) is constant, and without loss of generality we can assume that every solution of (8)-(12) satisfies the equation \[\int_{0}^{L}h(t,x)dx\equiv m \tag{14}\] Due to the nature of our problem it is important to mention that the liquid level \(h(t,x)\) must be positive for all times, i.e., we must have: \[\min_{x\in[0,L]}\left(h(t,x)\right)>0,\text{ for }t\geq 0 \tag{15}\] Contrary to prior works, model (8)-(12), (14) is not a complete mathematical description of the system. This can be seen directly by studying the linearization of model (8)-(12), (14) but also can be seen by studying the literature (see [27, 37, 38, 42, 43] and references therein). For a complete mathematical model of the system we need two additional boundary conditions that describe the interaction between the liquid and the solid walls of the tank. There are many ways to describe the evolution of the angle of contact of a liquid with a solid boundary (see the detailed presentation in [27]). In [37, 38], Schweizer suggested (based on energy arguments and the fact that there might be a discrepancy between the actual microscopic and the apparent macroscopic contact angle) the use of a constant contact angle. Moreover, the assumption of a constant contact angle allows the well-posedness of the overall problem (at least for small data; see [37, 38, 42]). The constant contact angle approach has been used extensively in the literature (see for instance [20, 42, 43]). In this work, we adopt the constant contact angle approach by imposing a contact angle equal to \(\pi/2\). Therefore, the model (8)-(12), (14) is accompanied by the following boundary conditions: \[h_{x}(t,0)=h_{x}(t,L)=0,\text{ for }t\geq 0 \tag{16}\] In order to avoid liquid spillage the following condition must be satisfied: \[\max_{x\in[0,L]}\left(h(t,x)\right)<H_{\max},\text{ for }t\geq 0 \tag{17}\] where \(H_{\max}>0\) is the height of the tank walls. We consider classical solutions for the system (8)-(12), (14), (16), i.e., we consider \(\xi\in C^{2}\left(\mathbb{R}_{+}\right)\), \(w\in C^{1}\left(\mathbb{R}_{+}\right)\), \(h\in C^{1}\left([0,+\infty)\times[0,L];\ \ (0,+\infty)\right)\cap C^{3}\left((0,+\infty)\ \times(0,L)\right)\), \(v\in C^{0}([0,+\infty)\times[0,L])\cap C^{1}\left((0,+\infty)\ \times[0,L]\right)\) with \(v[t]\in C^{2}\left((0,L)\right)\) for each \(t>0\) that satisfy equations (8)-(12), (14), (16) for a given input \(f\in C^{0}\left(\mathbb{R}_{+}\right)\). For the system (8)-(12), (14), (16) with \(f(t)\equiv 0\) (which is the open loop system), there exists a continuum of equilibrium points, i.e., the points \[h(x)\equiv h^{*},v(x)\equiv 0,\text{ for }x\in[0,L] \tag{18}\] \[\xi\in\mathbb{R},w=0 \tag{19}\] where \(h^{*}=m/L\). We assume that the equilibrium points satisfy the condition (17), i.e., \(h^{*}<H_{\max}\). We intend to construct a robust with respect to surface tension control law of the form \[f(t)=F\left(h[t],v[t],\xi(t),w(t)\right),\text{ for }t>0, \tag{20}\] which stabilizes the equilibrium point with \(\xi=0\). In addition to that we impose the condition (17). It follows from (18), (19) that the desired equilibrium point is not asymptotically stable for the open-loop system. Consequently the described control problem is not at all trivial. ## 3 The feedback law ### The Control Lyapunov Functional (CLF) We define the set \(S\subset\mathbb{R}^{2}\times\left(C^{0}\left([0,L]\right)\right)^{2}\) as follows: \[\left(\xi,w,h,v\right)\in S\ \Leftrightarrow\left\{\begin{array}{c}h\in C^{0} \left([0,L];(0,+\infty)\right)\cap H^{1}(0,L)\\ v\in C^{0}\left([0,L]\right)\\ \int_{0}^{L}h(x)dx=m\\ (\xi,w)\in\mathbb{R}^{2},v(0)=v(L)=0\end{array}\right. \tag{21}\] The above definition guarantees that every \((\xi,w,h,v)\in S\) satisfies (12) and (14). In addition to that, we define the following functionals for all \((\xi,w,h,v)\in S\): \[V(\xi,w,h,v):=\delta E(h,v)+W(h,v)+\frac{qk^{2}}{2}\xi^{2}+\frac{q}{2}(w+k\xi)^{2} \tag{22}\] \[E(h,v):=\frac{1}{2}\int_{0}^{L}h(x)v^{2}(x)dx+\frac{g}{2}\left\| h-h^{*}\chi_{[0,L]}\right\|_{2}^{2}\] \[+\sigma\int_{0}^{L}\left(\sqrt{1+\left(h^{\prime}(x)\right)^{2}} -1\right)dx \tag{23}\] \[W(h,v):=\frac{1}{2}\int_{0}^{L}h^{-1}(x)\left(h(x)v(x)+\mu h^{ \prime}(x)\right)^{2}dx+\frac{g}{2}\left\|h-h^{*}\chi_{[0,L]}\right\|_{2}^{2}\] \[+\sigma\int_{0}^{L}\left(\sqrt{1+\left(h^{\prime}(x)\right)^{2}} -1\right)dx \tag{24}\] where \(k,q>0\) are position error and velocity gains (to be selected) respectively, \(\delta>0\) and \(h^{*}=m/L\). In particular: * the functional \(E\) is the mechanical energy of the liquid within the tank as it is the sum of the potential energy \[\frac{g}{2}\left\|h-h^{*}\chi_{[0,L]}\right\|_{2}^{2}+\sigma\int_{0}^{L}\left( \sqrt{1+\left(h^{\prime}(x)\right)^{2}}-1\right)dx\] and the kinetic energy \[\frac{1}{2}\int_{0}^{L}h(x)v^{2}(x)dx\] of the liquid. It should be noticed that there is no contribution to the mechanical energy of the tank-liquid interface which allows to give the interpretation that the boundary condition (16) (a constant contact angle) is a result of the absence of interaction between liquid and solid. * the functional \(W\) is a kind of mechanical energy of the liquid within the tank and has been used extensively in the literature of isentropic, compressible liquid flow (see [39, 31, 22, 40]) as well as in [23, 24, 25]. The functional \(V(\xi,w,h,v)\) defined by (22) will be utilized as a CLF for the system, and for the derivation of useful bounds for the function \(h\) as guaranteed by the following lemma. **Lemma 1**.: _Let constants \(q,k,\delta>0\) be given, and define the increasing function \(G\in C^{0}(\mathbb{R})\cap C^{1}((-\infty,0)\cup(0,+\infty))\) as follows_ \[G(h):=\left\{\begin{array}{c}\operatorname{sgn}\left(h-h^{*}\right)\left( \frac{2}{3}h\sqrt{h}-2h^{*}\sqrt{h}+\frac{4}{3}h^{*}\sqrt{h^{*}}\right)\quad \text{for }h>0\\ -\frac{4}{3}h^{*}\sqrt{h^{*}}+h\quad\text{for }h\leq 0\end{array}\right. \tag{25}\] _Denote by \(G^{-1}:\mathbb{R}\rightarrow\mathbb{R}\) the inverse function of \(G\) and define the constant_ \[c:=\frac{1}{\mu\sqrt{\delta g}} \tag{26}\] _Then for every \((\xi,w,h,v)\in S\), the following inequality holds:_ \[Q_{1}\left(V(\xi,w,h,v)\right)\leq h(x)\leq Q_{2}\left(V(\xi,w,h,v)\right), \text{ for all }x\in[0,L], \tag{27}\] _where the functions \(Q_{i}:\mathbb{R}_{+}\rightarrow\mathbb{R}\)\((i=1,2)\) are defined as follows for all \(s\geq 0\):_ \[Q_{1}(s) :=\max\left(G^{-1}\left(-cs\right),N_{1}(s),N_{2}(s)\right) \tag{28}\] \[Q_{2}(s) :=\min\left(G^{-1}\left(cs\right),P_{1}(s),P_{2}(s)\right) \tag{29}\] _with the functions \(N_{i}:\mathbb{R}_{+}\rightarrow\mathbb{R}\)\((i=1,2)\) and \(P_{i}:\mathbb{R}_{+}\rightarrow\mathbb{R}\)\((i=1,2)\) defined by the following expressions for all \(s\geq 0\):_ \[N_{1}(s) :=h^{*}-\sqrt{\frac{2m\left(1+\delta\right)s}{\delta\mu^{2}}}, \tag{30}\] \[N_{2}(s) :=h^{*}-\sqrt{\left(\frac{s}{\sigma(\delta+1)}+L\right)^{2}-L^{2}},\] (31) \[P_{1}(s) :=h^{*}+\sqrt{\frac{2m\left(1+\delta\right)s}{\delta\mu^{2}}},\] (32) \[P_{2}(s) :=h^{*}+\sqrt{\left(\frac{s}{\sigma(\delta+1)}+L\right)^{2}-L^{2}} \tag{33}\] **Remark 1**.: It follows from (25), (26), (28) and the fact that \(h^{*}=m/L\) that \(Q_{1}\left(V(\xi,w,h,v)\right)>0\) when \[V(\xi,w,h,v)<\max\left(\theta_{1},\theta_{2},\theta_{3}\right) \tag{34}\] with \[\theta_{1} :=\frac{4}{3}\mu h^{*}\sqrt{\delta gh^{*}},\,\theta_{2}:=\frac{ \mu^{2}h^{*}\delta}{2L\left(1+\delta\right)}\quad\text{and}\] \[\theta_{3} :=\sigma\left(\delta+1\right)\left(\sqrt{\left(h^{*}\right)^{2}+L ^{2}}-L\right)\] Definitions (28) and (29) imply that \(Q_{2}:\mathbb{R}_{+}\rightarrow\mathbb{R}\) is an increasing function while \(Q_{1}:\mathbb{R}_{+}\rightarrow\mathbb{R}\) is a decreasing function. It is important to mention that Lemma 1 is more general than Lemma 1 in [23] and Lemma 1 in [25]. Lemma 1 in [23] can be applied only for the case \(\delta=1\) and \(\sigma=0\), while Lemma 1 in [25] can be applied only for the case \(\sigma=0\). Here Lemma 1 can be applied for all \(\delta>0\) and \(\sigma\geq 0\). ### The state space As in [23, 24, 25] the state space will be appropriately defined in order to exclude states of the set \(S\) defined by (21) that violate the condition (17), i.e, the states that cause liquid spillage. We define the following \[X:=\left\{\left(\xi,w,h,v\right)\in S:\max_{x\in[0,L]}\left(h(x)\right)<H_{ \max}\right\} \tag{35}\] \[R:=\frac{2\mu\sqrt{\delta gh^{*}}}{3}\left(H_{\max}-h^{*}\right)\min\left( \zeta_{1},\zeta_{2}\right) \tag{36}\] where \[\zeta_{1}:=\max\left(\Gamma_{1},\Gamma_{2},\Gamma_{3}\right)\text{ and} \tag{37}\] \[\zeta_{2}:=\frac{h^{*}}{H_{\max}-h^{*}}\max\left(2,\Delta_{1},\Delta_{2}\right) \tag{38}\] with \(\Gamma_{1},\Gamma_{2},\Gamma_{3},\Delta_{1}\) and \(\Delta_{2}\) defined as follows: \[\Gamma_{1}:=\sqrt{\frac{H_{\max}}{h^{*}}}-\frac{2\sqrt{h^{*}}}{\sqrt{H_{\max} }+\sqrt{h^{*}}}, \tag{39}\] \[\Gamma_{2}:=\frac{3\mu\sqrt{\delta}\left(H_{\max}-h^{*}\right)}{4m\left(1+ \delta\right)\sqrt{gh^{*}}}, \tag{40}\] \[\Gamma_{3}:=\frac{3\sigma(\delta+1)\left(\sqrt{L^{2}+\left(H_{\max}-h^{*} \right)^{2}}-L\right)}{2\mu\sqrt{\delta gh^{*}}\left(H_{\max}-h^{*}\right)}, \tag{41}\] \[\Delta_{1}:=\frac{3\mu\sqrt{\delta}}{4L\sqrt{gh^{*}}\left(1+\delta\right)}, \tag{42}\] \[\Delta_{2}:=\frac{3\sigma(\delta+1)\sqrt{h^{*}}}{2\mu\sqrt{\delta g}\left( \sqrt{\left(h^{*}\right)^{2}+L^{2}}+L\right)} \tag{43}\] The aforementioned definition (36), the fact that \(h^{*}<H_{\max}\) and Lemma 1 imply for all \(\left(\xi,w,h,v\right)\in S\) with \(V(\xi,w,h,v)<R\) the following \[0<Q_{1}\left(V(\xi,w,h,v)\right)\leq h(x)\leq Q_{2}\left(V(\xi,w,h,v)\right)< H_{\max},\text{ for all }x\in[0,L] \tag{44}\] Consequently, the conditions (17) for avoiding liquid spillage are satisfied when \(\left(\xi,w,h,v\right)\in S\) with \(V(\xi,w,h,v)<R\). The set \(X\) defined by (35) is the state space of system (8)-(12), (14), (16). In particular, we consider as state space the metric space \(X\subset\mathbb{R}^{2}\times H^{1}\left(0,L\right)\times L^{2}\left(0,L\right)\) with metric induced by the norm of the underlying normed linear space \(\mathbb{R}^{2}\times H^{1}\left(0,L\right)\times L^{2}\left(0,L\right)\), i.e., \[\left\|\left(\xi,w,h,v\right)\right\|_{X}=\left(\xi^{2}+w^{2}+\left\|h\right\| _{2}^{2}+\left\|h^{\prime}\right\|_{2}^{2}+\left\|v\right\|_{2}^{2}\right)^{1/2} \tag{45}\] However, we need to approximate the state space from its interior by using certain parameterized sets that allow us to obtain useful estimates. We define \[X_{V}\left(r\right):=\left\{\left(\xi,w,h,v\right)\in S\,:\,V(\xi,w,h,v)\leq r \right\},\text{ for }r\geq 0 \tag{46}\] Inequalities (44) imply that \[X_{V}\left(r\right)\subseteq X,\text{ for all }r\in\left[0,R\right) \tag{47}\] As indicated by the following proposition the set \(X_{V}\left(r\right)\) for \(r>0\) contains a neighborhood of \(\left(0,0,h^{*}\chi_{\left[0,L\right]},0\right)\) (in the topology of \(X\) with metric induced by the norm \(\left\|\,\right\|_{X}\) defined by (45)). **Proposition 1**.: _Let constants \(q,k,\delta>0\) be given. Then for every \(\left(\xi,w,h,v\right)\in S\) satisfying the inequality_ \[\left\|\left(0,w,h-h^{*}\chi_{\left[0,L\right]},v\right)\right\|_{X}\leq\varepsilon \tag{48}\] _for some \(\varepsilon>0\) with_ \[\varepsilon<\min\left(h^{*},H_{\max}-h^{*}\right)/\sqrt{L}, \tag{49}\] _the following inequality holds:_ \[V(\xi,w,h,v)\leq C_{1}\left\|\left(\xi,w,h-h^{*}\chi_{\left[0,L\right]},v \right)\right\|_{X}^{2}+C_{2}\left\|\left(\xi,w,h-h^{*}\chi_{\left[0,L\right] },v\right)\right\|_{X} \tag{50}\] _where_ \[C_{1}:=\max\left(\frac{\mu^{2}}{h^{*}-\varepsilon\sqrt{L}},\frac{\delta+1}{2 }g,\frac{\left(\delta+2\right)H_{\max}}{2},q,\frac{3qk^{2}}{2}\right), \tag{51}\] \[C_{2}:=\sigma(\delta+1)\sqrt{L} \tag{52}\] _and \(\left\|\cdot\right\|_{X}\) is defined by (45)._ ### Stabilization results The following theorem guarantees exponential stabilization of the state of the system (8)-(12), (14), (16) by means of the nonlinear feedback law (55). **Theorem 1** (Stabilization of the Tank-Liquid System).: _Let arbitrary constants \(\omega,k,q,\delta>0\) be given and define \(R\) by means of (36). Let arbitrary \(r\in[0,R)\) be given and assume that_ \[k<q\theta(r) \tag{53}\] _where_ \[\theta(r):=\frac{\omega g\mu\delta\pi^{2}Q_{1}(r)}{g\mu\delta\pi^{2}Q_{1}(r)+2 \omega L(mgLH_{\max}(\delta+1)^{2}+2\mu^{2}\delta\pi^{2}Q_{1}(r))} \tag{54}\] _where \(Q_{1}\) is defined by (28). Then there exist constants \(M,\lambda>0\) with the following property:_ **(P)** _Every classical solution of the system (8)-(12), (14), (16) and_ \[f(t)=-\omega\left((\delta+1)\int_{0}^{L}h(t,x)v(t,x)dx+\mu\left( h(t,L)-h(t,0)\right)-q\left(w(t)+k\xi(t)\right)\right),\] \[\text{for }t>0 \tag{55}\] _with \((\xi(0),w(0),h[0],v[0])\in X_{V}(r)\), satisfies \((\xi(t),w(t),h[t],\)\(v[t])\in X_{V}(r)\) and the following estimate for \(t\geq 0\):_ \[\left\|\left(\xi(t),w(t),h[t]-h^{*}\chi_{[0,L]},v[t]\right) \right\|_{X}\] \[\leq M\exp\left(-\lambda\,t\right)\left\|\left(\xi(0),w(0),h[0]-h ^{*}\chi_{[0,1]},v[0]\right)\right\|_{X} \tag{56}\] **Remarks on Theorem 1.** 1) The arbitrary quantities \(\omega,k,q,\delta>0\) are the control parameters. We should point out that the ratio \(k/q\) must be sufficiently small due to (53), and this is the only restriction for the control parameters. 2) The set \(X_{V}(r)\) is the set for which exponential stabilization is achieved. As indicated by Proposition 1, the set \(X_{V}(r)\) for \(r>0\) contains a neighborhood of \(\left(0,0,h^{*}\chi_{[0,L]},0\right)\) (in the topology of \(X\) with metric induced by the norm \(\|\,\|_{X}\) defined by (45)). The size of the set \(X_{V}(r)\) depends on \(r\in[0,R)\) and on \(\delta,q,k\) (recall (36) and (22)). It is straightforward to see that the larger the parameter \(q\) (or \(k\)) the smaller the set \(X_{V}(r)\). However, the dependence of \(X_{V}(r)\) on \(\delta\) (through the dependence of \(R\) on \(\delta\)) is not clear. On the contrary it is a very complicated, non-monotonic dependence. 3) The feedback law (55) only requires the measurement of the four following quantities: * the position of the tank denoted by \(\xi(t)\), and the velocity of the tank denoted by \(w(t)\), * the total momentum of the liquid, i.e., the quantity \(\int_{0}^{L}h(t,x)v(t,x)dx\), and * the difference the liquid level at the tank walls, i.e., the quantity \(h(t,L)-h(t,0)\). It should be emphasized that the feedback law (55) does not require the measurement of the whole liquid level and liquid velocity profile whereas it is completely independent of the surface tension coefficient. 4) The feedback law (55) is the same feedback law that was used in [23, 25]. When the results in [23, 25] and Theorem 1 are taken into account then it follows that the feedback law (55) guarantees robustness with respect to surface tension as well as robustness with respect to wall friction forces. From a control point of view, this is an ideal situation: the feedback law (55) is robust with respect to all possible perturbations of the basic model, its measurement requirements are minimal and guarantees exponential stabilization of the corresponding closed-loop (nonlinear; not the linearized) system. 5) In contrast with [25], Theorem 1 does not provide an estimate for the norm \(\|v_{x}[t]\|_{2}\), and consequently an estimate for the sup-norm of the fluid velocity. A topic for future research is the contruction of an appropriate CLF based on which an estimate for the norm \(\|v_{x}[t]\|_{2}\) can be obtained. ## 4 Proofs _Proof of Lemma 1_. The proof is exactly the same with the proof of Lemma 1 in [25]. The only difference is that here we can obtain an additional estimate for \(\left\|h-h^{*}\chi_{[0,L]}\right\|_{\infty}\). Indeed, due to the fact that the function \(\varphi:\mathbb{R}_{+}\to\mathbb{R}_{+}\)defined by \[\varphi(s)=\sqrt{s^{2}+1}-1,\text{ for }s\geq 0 \tag{57}\] is increasing and convex, we can use Jensen's inequality (see page 120 in [9]) and get for all \(h\in C^{0}\left([0,L];(0,+\infty)\right)\cap H^{1}(0,L)\) with \(\int_{0}^{L}h(x)dx=m\): \[\varphi\left(\frac{1}{L}\left\|h^{\prime}\right\|_{1}\right)= \varphi\left(\frac{1}{L}\int_{0}^{L}\left|h^{\prime}(x)\right|dx\right)\] \[\leq \frac{1}{L}\int_{0}^{L}\varphi\left(\left|h^{\prime}(x)\right| \right)dx=\frac{1}{L}\int_{0}^{L}\left(\sqrt{\left(h^{\prime}(x)\right)^{2}+1 }-1\right)dx \tag{58}\] Using (58), the inequality \(\left\|h-h^{*}\chi_{[0,L]}\right\|_{\infty}\leq\|h^{\prime}\|_{1}\) (which is a direct consequence of the fact that there exists \(x^{*}\in[0,L]\) such that \(h(x^{*})=h^{*}\); a consequence of continuity of \(h\), the mean value theorem and the facts that \(\int_{0}^{L}h(x)dx=m\), \(h^{*}=m/L\)), the fact that the function \(\varphi^{-1}:\mathbb{R}_{+}\to\mathbb{R}_{+}\) (the inverse function of \(\varphi\)) is increasing with \(\varphi^{-1}(s)=\sqrt{\left(s+1\right)^{2}-1}\) for \(s\geq 0\) and the inequality \[\int_{0}^{L}\left(\sqrt{\left(h^{\prime}(x)\right)^{2}+1}-1\right)dx\leq\frac {V(\xi,w,h,v)}{\sigma(\delta+1)} \tag{59}\] which is a direct consequence of definitions (22), (23), (24), we get for all \((\xi,w,h,v)\in S\): \[\left\|h-h^{*}\chi_{[0,L]}\right\|_{\infty}\leq\sqrt{\left(L+\frac{V(\xi,w,h,v)}{ \sigma(\delta+1)}\right)^{2}-L^{2}} \tag{60}\] Using the additional estimate (60) in conjunction with the estimates shown in the proof of Lemma 1 in [25] and definitions (26), (28) and (29) we get (27). The proof is complete. Proof of Proposition 1.: Consider arbitrary \((\xi,w,h,v)\in S\) satisfying (48) and (49). Definitions (22), (23), (24) and the inequalities \[(h(x)v(x)+\mu h^{*}(x))^{2} \leq 2h^{2}(x)v^{2}(x)+2\mu^{2}\left(h^{*}(x)\right)^{2}, \tag{61}\] \[(w+k\xi)^{2} \leq 2w^{2}+2k^{2}\xi^{2},\] (62) \[\sqrt{1+\left(h^{*}(x)\right)^{2}-1} \leq\left|h^{*}(x)\right| \tag{63}\] imply: \[V\left(\xi,w,h,v\right)\leq\frac{\delta+2}{2}\int_{0}^{L}h(x)v^{ 2}(x)dx+\mu^{2}\int_{0}^{L}h^{-1}(x)\left(h^{*}(x)\right)^{2}dx\] \[+\frac{\delta+1}{2}g\left\|h-h^{*}\chi_{[0,L]}\right\|_{2}^{2}+ \frac{3qk^{2}}{2}\xi^{2}+qw^{2}+\sigma(\delta+1)\left\|h^{*}\right\|_{1} \tag{64}\] Following the arguments of the proof of Proposition 2.5 in [25] we obtain from (64) the following: \[V\left(\xi,w,h,v\right)\leq\frac{\delta+2}{2}H_{\max}\left\|v \right\|_{2}^{2}+qw^{2}+\frac{3qk^{2}}{2}\xi^{2}\] \[+\mu^{2}\left(h^{*}-\varepsilon\sqrt{L}\right)^{-1}\left\|h^{*} \right\|_{2}^{2}+\frac{\delta+1}{2}g\left\|h-h^{*}\chi_{[0,L]}\right\|_{2}^{2 }+\sigma(\delta+1)\sqrt{L}\left\|h^{*}\right\|_{2} \tag{65}\] Inequality (50) is a direct consequence of (65) and definition (45). The proof is complete. In order to give the proof of the main result of this study, we need to provide some preliminary lemmas along with their proofs. **Lemma 2**.: _Every classical solution of the system (8)-(12), (14), (16) satisfies the following equations for all \(t>0\):_ \[\frac{d}{dt}E(h[t],v[t])=-\mu\int_{0}^{L}h(t,x)v_{x}^{2}(t,x)dx+f(t)\int_{0}^{ L}h(t,x)v(t,x)dx \tag{66}\] \[\frac{d}{dt}W(h[t],v[t])=-\mu g\left\|h_{x}[t]\right\|_{2}^{2}-\mu\sigma\int_{0}^{ L}\frac{h_{xx}^{2}(t,x)dx}{\left(1+h_{x}^{2}(t,x)\right)^{3/2}}\] \[+f(t)\int_{0}^{L}\left(h(t,x)v(t,x)+\mu h_{x}(t,x)\right)dx \tag{67}\] _where \(E,W\) are defined by (23), (24), respectively._ Proof.: Due to (10) and (11) we get for \(t>0\), \(x\in(0,L)\): \[v_{t}(t,x)+v(t,x)v_{x}(t,x)+gh_{x}(t,x)\] \[=\sigma h^{-1}(t,x)\left(\frac{1+h_{x}^{2}(t,x)+h(t,x)h_{xx}(t,x) }{\left(1+h_{x}^{2}(t,x)\right)^{3/2}}\right)_{x}\] \[\qquad+\mu h^{-1}(t,x)\left(h(t,x)v_{x}(t,x)\right)_{x}+f(t) \tag{68}\] Combining definition (23), (10) and (68) we get for all \(t>0\) the following expression for the time derivative of the functional (23) : \[\frac{d}{dt}E(h[t],v[t])=-\frac{1}{2}\int_{0}^{L}(h(t,x)v(t,x))_{x }v^{2}(t,x)dx\] \[-\int_{0}^{L}h(t,x)v^{2}(t,x)v_{x}(t,x)dx-g\int_{0}^{L}h(t,x)v(t, x)h_{x}(t,x)dx\] \[\qquad+\sigma\int_{0}^{L}v(t,x)\left(\frac{1+h_{x}^{2}(t,x)+h(t, x)h_{xx}(t,x)}{\left(1+h_{x}^{2}(t,x)\right)^{3/2}}\right)_{x}dx\] \[+\mu\int_{0}^{L}v(t,x)\left(h(t,x)v_{x}(t,x)\right)_{x}dx+f(t) \int_{0}^{L}h(t,x)v(t,x)dx\] \[\qquad\qquad-g\int_{0}^{L}(h(t,x)v(t,x))_{x}(h(t,x)-h^{*})dx\] \[-\sigma\int_{0}^{L}\frac{h_{x}(t,x)}{\sqrt{1+h_{x}^{2}(t,x)}}(h( t,x)v(t,x))_{xx}dx \tag{69}\] Using (69), integration by parts as in the proof of Lemma 2.11 in [25], (12), (16) and the fact that for all \(t>0\) \[\sigma\int_{0}^{L}v(t,x)\left(\frac{1+h_{x}^{2}(t,x)+h(t,x)h_{xx}(t, x)}{\left(1+h_{x}^{2}(t,x)\right)^{3/2}}\right)_{x}dx\] \[=-\sigma\int_{0}^{L}v_{x}(t,x)\frac{1+h_{x}^{2}(t,x)+h(t,x)h_{xx}( t,x)}{\left(1+h_{x}^{2}(t,x)\right)^{3/2}}dx \tag{70}\] \[-\sigma\int_{0}^{L}\frac{h_{x}(t,x)}{\sqrt{1+h_{x}^{2}(t,x)}}(h(t,x)v(t,x))_{xx}dx\] \[=\sigma\int_{0}^{L}v_{x}(t,x)\frac{1+h_{x}^{2}(t,x)+h_{xx}(t,x)h( t,x)}{\left(1+h_{x}^{2}(t,x)\right)^{3/2}}dx \tag{71}\] as a consequence of integration by parts as well, we obtain equation (66). Next we define for all \(t\geq 0\) and \(x\in[0,L]\): \[\varphi(t,x):=h(t,x)v(t,x)+\mu h_{x}(t,x) \tag{72}\] Definition (72), (10) and (11) imply for all \(t>0\) and \(x\in(0,L)\): \[\varphi_{t}(t,x)=-\left(v(t,x)\varphi(t,x)+\frac{1}{2}gh^{2}(t,x )-\sigma\frac{1+h_{x}^{2}(t,x)+h(t,x)h_{xx}(t,x)}{\left(1+h_{x}^{2}(t,x)\right) ^{3/2}}\right)_{x}\] \[+h(t,x)f(t) \tag{73}\] Using definition (24) along with (73) and (10), we get for all \(t>0\) : \[\frac{d}{dt}W(h[t],v[t])=\frac{1}{2}\int_{0}^{L}h^{-2}(t,x)\varphi ^{2}(t,x)(h(t,x)v(t,x))_{x}dx\] \[-\int_{0}^{L}h^{-1}(t,x)\varphi(t,x)\left(\varphi(t,x)v(t,x)+ \frac{1}{2}gh^{2}(t,x)\right)_{x}dx\] \[+\sigma\int_{0}^{L}h^{-1}(t,x)\varphi(t,x)\left(\frac{1+h_{x}^{2 }(t,x)+h(t,x)h_{xx}(t,x)}{\left(1+h_{x}^{2}(t,x)\right)^{3/2}}\right)_{x}dx\] \[+f(t)\int_{0}^{L}\varphi(t,x)dx-g\int_{0}^{L}(h(t,x)-h^{*})(h(t,x )v(t,x))_{x}dx\] \[-\sigma\int_{0}^{L}\frac{h_{x}(t,x)(h(t,x)v(t,x))_{xx}}{\sqrt{1+h _{x}^{2}(t,x)}}dx \tag{74}\] Using (12) and integration by parts as in proof of Lemma 2.11 in [25], we obtain from (74) and definition (72) for all \(t>0\): \[\frac{d}{dt}W(h[t],v[t])=-\mu g\left\|h_{x}[t]\right\|_{2}^{2}+f(t) \int_{0}^{L}\left(h(t,x)v(t,x)+\mu h_{x}(t,x)\right)dx\] \[+\sigma\int_{0}^{L}v(t,x)\left(\frac{1+h_{x}^{2}(t,x)+h(t,x)h_{xx} (t,x)}{\left(1+h_{x}^{2}(t,x)\right)^{3/2}}\right)_{x}dx\] \[+\mu\sigma\int_{0}^{L}h^{-1}(t,x)h_{x}(t,x)\left(\frac{1+h_{x}^{2 }(t,x)+h(t,x)h_{xx}(t,x)}{\left(1+h_{x}^{2}(t,x)\right)^{3/2}}\right)_{x}dx\] \[-\sigma\int_{0}^{L}\frac{h_{x}(t,x)(h(t,x)v(t,x))_{xx}}{\sqrt{1+h_ {x}^{2}(t,x)}}dx \tag{75}\] Using (16), (70), (71) and the fact that \[h(t,x)\left(\frac{h_{xx}(t,x)}{\left(1+h_{x}^{2}(t,x)\right)^{3/2}}\right)_{x }=\left(\frac{1+h_{x}^{2}(t,x)+h(t,x)h_{xx}(t,x)}{\left(1+h_{x}^{2}(t,x) \right)^{3/2}}\right)_{x} \tag{76}\] we obtain from (75) equation (67) for all \(t>0\). The proof is complete. **Lemma 3**.: _Let constants \(q,k,\delta>0\) be given. Then there exists a non-decreasing function \(\Lambda:[0,R)\rightarrow(0,+\infty)\), where \(R>0\) is defined by (36) such that for every \((\xi,w,h,v)\in X\) with \(v\in H^{1}(0,L)\), \(h\in H^{2}(0,L)\) and \(V(\xi,w,h,v)<R\), the following inequality holds:_ \[\frac{V(\xi,w,h,v)}{\Lambda(V(\xi,w,h,v))}\leq\left\|h^{\prime} \right\|_{2}^{2}+\int_{0}^{L}\frac{\left(h^{\prime\prime}(x)\right)^{2}}{ \left(1+\left(h^{\prime}(x)\right)^{2}\right)^{3/2}}dx\] \[+\int_{0}^{L}h(x)(v^{\prime}(x))^{2}\,dx+\xi^{\,2}+(w+k\xi)^{2} \tag{77}\] Proof.: Let arbitrary \((\xi,w,h,v)\in X\) with \(v\in H^{1}(0,L)\), \(h\in H^{2}(0,L)\) and \(V(\xi,w,h,v)<R\) be given. Using the same arguments as in the proof of Lemma 2.12 in [25] and the fact that \[\int_{0}^{L}\left(\sqrt{1+\left(h^{\prime}(x)\right)^{2}}-1\right)dx\leq \left\|h^{\prime}\right\|_{2}^{2} \tag{78}\] we obtain the following estimate: \[V(\xi,w,h,v)\leq\frac{L^{2}\left(\delta+2\right)Q_{2}\left(V(\xi,w,h,v)\right)}{2\pi^{2}Q_{1}\left(V(\xi,w,h,v)\right)}\int_{0}^{L}h(x)\left(v^{ \prime}(x)\right)^{2}dx\] \[+\left(\frac{\left(\delta+1\right)\left(gL^{2}+2\sigma\right)}{2} +\frac{\mu^{2}}{Q_{1}\left(V(\xi,w,h,v)\right)}\right)\left\|h^{\prime} \right\|_{2}^{2}+\frac{qk^{2}}{2}\xi^{2}+\frac{q}{2}\left(w+k\xi\right)^{2}\] \[\leq\Lambda\left(V(\xi,w,h,v)\right)\] \[\times\left(\left\|h^{\prime}\right\|_{2}^{2}+\int_{0}^{L}\!\! \!\!\frac{\left(h^{\prime\prime}(x)\right)^{2}}{\left(1+\left(h^{\prime}(x) \right)^{2}\right)^{3/2}}dx+\int_{0}^{L}h(x)\left(v^{\prime}(x)\right)^{2}dx+ \xi^{2}+\left(w+k\xi\right)^{2}\right) \tag{79}\] where \[\Lambda(s):=\frac{1}{2}\max\left(\kappa_{1}+\frac{2\mu^{2}}{Q_{1}\left(s \right)},\frac{\kappa_{2}Q_{2}(s)}{Q_{1}(s)},\kappa_{3}\right),\text{ for }s\in[0,R) \tag{80}\] with \(\kappa_{1}:=(\delta+1)\left(gL^{2}+2\sigma\right)\), \(\kappa_{2}:=L^{2}\left(\delta+2\right)/\pi^{2}\) and \(\kappa_{3}:=q\max(1,k^{2})\). Definition (80) and the fact that \(Q_{2}:\mathbb{R}_{+}\to\mathbb{R}\) is an increasing function and \(Q_{1}:\mathbb{R}_{+}\to\mathbb{R}\) is a decreasing function imply that \(\Lambda:[0,R)\to(0,+\infty)\) is a non-decreasing function. Inequality (77) holds as a direct consequence of (79). The proof is complete. **Lemma 4**.: _Let constants \(q,k,\delta>0\) be given. Then there exist non-decreasing functions \(G_{i}:[0,R)\to(0,+\infty)\), \(i=1,2\), where \(R>0\) is defined by (36), such that for every \((\xi,w,h,v)\in X\) with \(V(\xi,w,h,v)<R\), the following inequalities hold:_ \[\left\|\left(\xi,w,h-h^{*}\chi_{[0,L]},v\right)\right\|_{X}^{2} \leq V(\xi,w,h,v)G_{1}\left(V(\xi,w,h,v)\right) \tag{81}\] \[\frac{V(\xi,w,h,v)}{G_{2}\left(V(\xi,w,h,v)\right)} \leq\left\|\left(\xi,w,h-h^{*}\chi_{[0,L]},v\right)\right\|_{X}^{2} \tag{82}\] _where \(\left\|\cdot\right\|_{X}\) is defined by (45)._ Proof.: Let arbitrary \((\xi,w,h,v)\in X\) with \(V(\xi,w,h,v)<R\) be given. Using definitions (22), (23), (24), inequalities (61), (62), the inequality \[\sqrt{1+\left(h^{\prime}(x)\right)^{2}}\leq 1+\left(h^{\prime}(x)\right)^{2} \tag{83}\] and (44) we obtain \[V(\xi,w,h,v)\leq\frac{\delta+2}{2}H_{\max}\left\|v\right\|_{2}^ {2}+\frac{\delta+1}{2}g\left\|h-h^{*}\chi_{[0,L]}\right\|_{2}^{2}\] \[+\left(\frac{\mu^{2}}{Q_{1}\left(V\left(\xi,w,h,v\right)\right)} +\sigma\left(\delta+1\right)\right)\left\|h^{\prime}\right\|_{2}^{2}+\frac{3 qk^{2}}{2}\xi^{2}+qw^{2} \tag{84}\] Inequality (84) implies inequality (82) with \[G_{2}\left(s\right):=\max\left(\frac{\delta+2}{2}H_{\max},\frac{ \delta+1}{2}g,\frac{\mu^{2}}{Q_{1}\left(s\right)}+\sigma\left(\delta+1\right), \frac{3qk^{2}}{2},q\right),\] \[\qquad\qquad\qquad\qquad\qquad\qquad\text{for }s\in[0,R) \tag{85}\] The fact that \(Q_{1}:\mathbb{R}_{+}\to\mathbb{R}\) is a decreasing function and the above definition imply that \(G_{2}:[0,R)\to(0,+\infty)\) is a non-decreasing function. The proof of inequality (81) is exactly the same with the proof of Lemma 4 in [25]. The proof is complete. **Lemma 5**.: _Let constants \(\omega,k,q,\delta>0\) and \(r\in[0,R)\) be given, where \(R>0\) is defined by (36). Then every classical solution of the system (8)-(12), (14), (16) and (55) satisfies the following inequality for all \(t>0\) for which \(V(\xi(t),w(t),h[t],v[t])<R\):_ \[\frac{d}{dt}V(\xi(t),w(t),h[t],v[t])\leq-\frac{3\mu g}{4}\left\| h_{x}[t]\right\|_{2}^{2}-qk^{3}\xi^{2}(t)\] \[-\frac{\mu\delta}{2H_{\max}}\left(2H_{\max}-Q_{1}(r)\frac{Q_{2} \left(V(t)\right)}{Q_{1}\left(V(t)\right)}\right)\int_{0}^{L}h(t,x)v_{x}^{2}( t,x)dx\] \[-\mu\sigma\int_{0}^{L}\frac{h_{xx}^{2}(t,x)}{\left(1+h_{x}^{2}(t,x)\right)^{3/2}}dx-q\left(q\theta(r)-k\right)\left(w(t)+k\xi(t)\right)^{2} \tag{86}\] _where \(V(t)=V(\xi(t),w(t),h[t],v[t])\), \(\theta(r)\) is defined by (54) and \(Q_{i}:\mathbb{R}_{+}\to\mathbb{R}\)\((i=1,2)\) are the functions defined by (28) and (29)._ Proof.: Let \(\omega,k,q,\delta>0\) be given constants and let \(r\in[0,R)\) be a constant, where \(R>0\) is defined by (36). In addition to that we consider a classical solution of the system (8)-(12), (14), (16) and (55) at a time \(t>0\) for which \(V(\xi(t),w(t),h[t],v[t])<R\). Using Lemma 2, (66), (67) and definition (22) and by following the same procedure as in the proof of Lemma 2.14 in [25] by assuming zero friction coefficient, we establish the following inequality: \[\frac{d}{dt}V(\xi(t),w(t),h[t],v[t])\leq-\frac{3\mu g}{4}\left\| h_{x}[t]\right\|_{2}^{2}-\mu\delta\int_{0}^{L}h(t,x)v_{x}^{2}(t,x)dx\] \[-\mu\sigma\int_{0}^{L}\frac{h_{xx}^{2}(t,x)}{\left(1+h_{x}^{2}(t,x )\right)^{3/2}}dx-q\left(q\theta(r)-k\right)\left(w(t)+k\xi(t)\right)^{2}-qk^ {3}\xi^{2}(t)\] \[+\frac{\mu\delta\pi^{2}Q_{1}(r)}{2L^{2}H_{\max}}\int_{0}^{L}h(t,x )v^{2}(t,x)dx \tag{87}\] Since \(v(t,0)=v(t,L)=0\) (recall (12)), by virtue of Wirtinger's inequality and (44), we get: \[\left\|v[t]\right\|_{2}^{2}\leq\frac{L^{2}}{\pi^{2}}\left\|v_{x}[t]\right\|_{2 }^{2}\leq\frac{L^{2}}{\pi^{2}Q_{1}\left(V(t)\right)}\int_{0}^{L}h(t,x)v_{x}^{ 2}(t,x)dx \tag{88}\] Combining (44), (87) and (88), we obtain (86). The proof is complete. We can now present the proof of Theorem 1. Proof of Theorem 1.: Let constants \(\omega,q,k,\delta>0\) satisfying (53). Let constant \(r\in[0,R)\) be given. Consider a classical solution of the system (8)-(12), (14), (16) with (55) that satisfies \(V\left(\xi(0),w(0),h[0],v[0]\right)\leq r\). Let \(\overline{r}\in(r,R)\) be a constant that satisfies: \[\frac{Q_{2}\left(\overline{r}\right)}{Q_{1}\left(\overline{r}\right)}<\frac{2 H_{\max}}{Q_{1}(r)} \tag{89}\] The existence of \(\bar{r}\in(r,R)\) is a direct consequence of the continuity of the functions involved in (89). Due to (53), Lemma 5, (86) and (89) the following implication is true: \[\text{If }t>0\text{ and }V\left(\xi(t),w(t),h[t],v[t]\right)\leq\overline{r} \text{ then }\frac{d}{d\,t}V\left(\xi(t),w(t),h[t],v[t]\right)\leq 0 \tag{90}\] A contradiction argument as in the proof of Theorem 2.6 in [25] implies that \(V\left(\xi(t),w(t),\ h[t],v[t]\right)\leq\overline{r}\) for all \(t\geq 0\). Implication (90) and the fact \(V\left(\xi(t),w(t),h[t],v[t]\right)\leq\overline{r}\) for all \(t\geq 0\) imply that \[\frac{d}{d\,t}V\left(\xi(t),w(t),h[t],v[t]\right)\leq 0\text{ for all }t>0 \tag{91}\] Due to the above and the continuity of the mapping \(t\to V(\xi(t),w(t),h[t],\)\(v[t])\), we get that \[V(\xi(t),w(t),h[t],v[t])\leq V(\xi(0),w(0),h[0],v[0])\leq r<R,\text{for all }t\geq 0 \tag{92}\] Consequently, \((\xi(t),w(t),h[t],v[t])\in X_{V}(r)\) for all \(t\geq 0\) (recall (46)). Using (92) and Lemma 5, we conclude that (86) holds for all \(t>0\). Using (92), (86) and the fact that \(Q_{2}:\mathbb{R}_{+}\to\mathbb{R}\) is an increasing function while \(Q_{1}:\mathbb{R}_{+}\to\mathbb{R}\) is a decreasing function, we obtain the following estimate for \(t>0\) \[\frac{d}{dt}V(\xi(t),w(t),h[t],v[t])\] \[\leq-\beta(r)\bigg{(}\|h_{x}[t]\|_{2}^{2}+\int_{0}^{L}h(t,x)v_{x} ^{2}(t,x)dx+\int_{0}^{L}\frac{h_{xx}^{2}(t,x)}{\left(1+h_{x}^{2}(t,x)\right)^{ 3/2}}dx\] \[+\xi^{2}(t)+(w(t)+k\xi(t))^{2}\bigg{)} \tag{93}\] where \[\beta(r):=\min\left(\frac{3\mu g}{4},\frac{\mu\delta\left(2H_{\max}-Q_{2}\left( r\right)\right)}{2H_{\max}},qk^{3},q\left(q\theta(r)-k\right),\mu\sigma\right) \tag{94}\] Notice that (53) and the fact that \(r\in[0,R)\) in conjunction with definitions (29), (36), (93) imply that \(\beta(r)>0\). It follows from Lemma 3, (77), the continuity of the mapping \(t\to V(\xi(t),w(t),h[t],v[t])\), (recall that \(v\in C^{0}\left(\mathbb{R}_{+}\ ;H^{1}\left(0,L\right)\right)\) \(h\in C^{1}\left(\mathbb{R}_{+}\times[0,L];(0,+\infty)\right)\) and \(v\in C^{0}\left(\mathbb{R}_{+}\times[0,L]\right)\)), estimates (92), (93), Lemma 4, (81) and (82) that the following estimate holds for all \(t\geq 0\): \[\left\|\left(\xi(t),w(t),h[t]-h^{*}\chi_{[0,L]},v[t]\right)\right\| _{X}^{2}\] \[\leq \Omega(r)\exp\left(-\frac{\beta(r)\,t}{\Lambda(r)}\right)\left\| \left(\xi(0),w(0),h[0]-h^{*}\chi_{[0,L]},v[0]\right)\right\|_{X}^{2} \tag{95}\] with \[\Omega(r):=G_{1}\left(r\right)G_{2}(r) \tag{96}\] where \(\Lambda\) is the non-decreasing function involved in (77) and \(G_{i}:[0,R)\to(0,+\infty)\)\((i=1,2)\) are the non-decreasing functions involved in (81), (82). Estimate (56) with \(M=\sqrt{\Omega(r)}\) and \(\lambda=\frac{\beta(r)}{2\Lambda(r)}\) is a consequence of estimate (95). The proof is complete. ## 5 Concluding Remarks In this work we managed to show that the robust with respect to wall friction nonlinear feedback law proposed in [25] provides also robust stabilization results with respect to surface tension. This shows even more the significance of the CLFs as stabilizing tools for the infinite-dimensional case of systems described by PDEs and illustrates the fact that robustness is inherent in the CLF methodology. The present study deals with the case of viscous Saint-Venant system with surface tension and without wall friction. It is of interest to study the more challenging problem of the viscous Saint-Venant system with surface tension and with wall friction as well as the construction of an additional functional which provides a bound for the sup-norm of the fluid velocity. In addition to that, other topics for future research are the study of existence and uniqueness of the solutions for the closed-loop system, the study of the problem with non constant (dynamic) contact angles, the study of the output feedback stabilization problem, the construction of appropriate numerical schemes and the derivation of stability estimates in stronger spatial norms. Concerning the output feedback stabilization problem there are many interesting studies in the literature that may contribute, such as [26] which suggests a finite-dimensional observer control of the (1-D) heat equation under Neumann actuation.
2306.15882
Effect of anisotropic spin-orbit coupling on condensation and superfluidity of a two dimensional Fermi gases
We investigated the ground state properties of a two dimensional Fermi superfluid with an anisotropic spin-orbit coupling (SOC) using path-integral field theoretical method. Within the framework of mean-field theory, we obtained the condensed fraction including contributions from both singlet and triple pairing fields. We found that for small interaction parameters and large anisotropic parameters, the total condensed fraction changes non-monotonically when increasing the strength of SOC and has a global maximum. But this feature disappears with decreasing the anisotropic parameter and increasing the interaction parameter. However, condensed fraction always decrease with increasing the anisotropic parameters. Because of the anisotropy of the SOC, the superfluid fraction becomes a tensor. We obtained the superfluid fraction tensor by deriving the effective action of the phase field of the order parameter. Our numerical results show that for small interaction parameters and large anisotropic parameters, superfluid fraction of the $x$ component $\rho_{x}$ has a minimum as a function of the SOC strength. And this minimum of $\rho_{x}$ disappears when decreasing the anisotropic parameters. In the strong interaction regime, $\rho_{x}$ always decreases with increasing the strength of SOC. While for the $y$ component of the superfluid fraction $\rho_{y}$, no matter how large the interaction parameters and anisotropic parameters are, it always has a minimum as a function of the SOC strength. As a function of the anisotropic parameter, for strong SOC strength, $\rho_{x}<\rho_{y}$ with $\rho_{x}$ having a minimum. For small SOC parameters $\rho_{x}>\rho_{y}$ with $\rho_{y}$ developing a minimum only in the weak interaction limit.
Kezhao Zhou
2023-06-28T02:54:05Z
http://arxiv.org/abs/2306.15882v2
Effect of anisotropic spin-orbit coupling on condensation and superfluidity of a two dimensional Fermi gases ###### Abstract We investigated the ground state properties of a two dimensional Fermi superfluid with an anisotropic spin-orbit coupling (SOC) using path-integral field theoretical method. Within the framework of mean-field theory, we obtained the condensed fraction including contributions from both singlet and triple pairing fields. We found that for small interaction parameters and large anisotropic parameters, the total condensed fraction changes non-monotonically when increasing the strength of SOC and has a global maximum. But this feature disappears with decreasing the anisotropic parameter and increasing the interaction parameter. However, condensed fraction always decrease with increasing the anisotropic parameters. Because of the anisotropy of the SOC, the superfluid fraction becomes a tensor. We obtained the superfluid fraction tensor by deriving the effective action of the phase field of the order parameter. Our numerical results show that for small interaction parameters and large anisotropic parameters, superfluid fraction of the \(x\) component \(\rho_{x}\) has a minimum as a function of the SOC strength. And this minimum of \(\rho_{x}\) disappears when decreasing the anisotropic parameters. In the strong interaction regime, \(\rho_{x}\) always decreases with increasing the strength of SOC. While for the \(y\) component of the superfluid fraction \(\rho_{y}\), no matter how large the interaction parameters and anisotropic parameters are, it always has a minimum as a function of the SOC strength. As a function of the anisotropic parameter, for strong SOC strength, \(\rho_{x}<\rho_{y}\) with \(\rho_{x}\) having a minimum. For small SOC parameters \(\rho_{x}>\rho_{y}\) with \(\rho_{y}\) developing a minimum only in the weak interaction limit. ## I Introduction. Spin-orbit coupling (SOC) plays a essential role in realizing many novel phenomena such as topological superconductors and insulators [1; 2; 3; 4], Floquet topological phases [5; 6; 7], nontrivial superconductors [8; 9] and so on. Realization of SOC in ultra-cold atomic system [10; 11; 12; 13] using Raman couplings has attracted lots of interest in various physics community. Because of advances in ultra-cold atoms experimental techniques, Fermi gases with SOC provides a unique and important playground to investigate various novel phases and topological phase transitions. For example, using Feshbach resonances [14; 15], one can tune interactions between atoms from weakly interacting regime to strong interaction regime, driving the system from weakly interacting BCS superfluid to strongly interacting BEC regime (BCS-BEC crossover)[16; 17]. Furthermore, optical lattice trapping potential makes this system a perfect platform to mimic solid state systems and related phenomena [18]. In experiments, the topological band structure has been observed by combinations of optical lattice and SOC[19; 20]. Other experimental techniques, such as spectroscopy [21; 22; 23], dipole interactions [24; 25], reduced dimensions [26], dynamical quench [27], open quantum systems [28] and so on, are also used to detect various phenomena related with Fermi pairing and superfluid [29; 30]. In ultra-cold atomic systems, one can create any kind of SOC in principle, especially the Rashba [31; 32] and Dresselhaus [33] SOC. Current experimental set-up can produce SOC with arbitrary combination of these two types of SOC [12; 13], therefore create an anisotropic SOC. Along this line, Many theoretical investigations have been performed to study effects of SOC on various superfluid properties [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56]. For the balanced case with equal particle numbers of different internal states, SOC can produce a novel bound-state called Rashbons and induce a crossover from weakly correlated BCS to strongly interacting BEC regime even for very weak particle-particle interaction [45; 46]. And this new bound states have many important implications on various thermodynamic properties of the system. Especially, the opposite effect of SOC on condensation and superfluidity has been discussed in [57]. Furthermore, combined effect of SOC and Zeeman field can host a non-trivial topological order [58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68]. Besides, the presence of Zeeman field can create a novel FFLO phase which attracts lot of interests in superconductors and cold atomic system [69]. For Fermi gases with SOC, a new type topological FFLO state has also been investigated extensively [70; 71; 49; 72]. Effects of anisotropic SOC on the ground state properties have also been discussed in [35]. And in [63], effects of anisotropic SOC on BKT [73] transitions and collective sound velocity have been investigated. Furthermore, [56] provides an intrinsic link between the non-monotonic behavior of the superfluid density and the quantum geometry of the helicity bands. In this paper, we performed a detailed research on the effects of an anisotropic SOC on the condensation and superfluidity of a two-dimensional (2D) superfluid system within the framework of mean-field theory using path-integral formalism. The coupled number and gap equa tions are numerically solved to get the chemical potentials and gap parameters. With the obtained chemical potentials and gap parameters, we calculated the condensed and superfluid fraction as functions of the interaction parameter, SOC strength and anisotropic parameters. For the condensed fraction, we consider contributions from both the singlet and triplet pairing fields. As a function of the SOC parameters, the condensed fraction behaves non-monotonically for specific interaction, SOC strength and anisotropic parameters. In order to obtain the superfluid fraction, we expand the partition function to the quadratic order of the phase of the order parameter from which we read off the superfluid fraction. The superfluid fraction is a tensor because of the anisotropic SOC considered in this paper. And our numerical results show that different components of the superfluid tensor behave differently as functions of the SOC and anisotropic parameters. ## II Formalism. The system we consider here is a 2D ultra-cold Fermi atoms or electrons interacting attractively with a contact interaction. We also consider an anisotropic SOC which can be written as an arbitrary combination of Rashba and Dresselhaus type of SOC. In the path-integral formalism, the system can be described by the finite temperature grand-partition function \(Z=\int d[\bar{\varphi}_{\sigma},\varphi_{\sigma}]\exp\left(-S[\bar{\varphi}_{ \sigma},\varphi_{\sigma}]\right)\) (\(\hbar=k_{B}=1\) through out this paper) with the action \(S[\bar{\varphi}_{\sigma},\varphi_{\sigma}]\) being given by \(S[\bar{\varphi}_{\sigma},\varphi_{\sigma}]=\int_{0}^{\beta}d\tau\int d^{2} \mathbf{r}\sum_{\sigma}[\bar{\varphi}_{\sigma}\partial_{\tau}\varphi_{\sigma} +\mathcal{H}_{0}+\mathcal{H}_{I}]\) with \(\beta=1/T\), \(\sigma=\uparrow,\downarrow\) denoting the two different internal states of the atoms or z component eigen-states of the spin operator for electrons and \(\bar{\varphi}_{\sigma},\varphi_{\sigma}\) being the Grassmann fields. The single-particle Hamiltonian density is \(\mathcal{H}(\bar{\psi},\psi)\)=\(\bar{\psi}\left(\hat{\xi}_{\mathbf{p}}+\mathcal{H}_{soc}\right)\psi\) where the kinetic operator \(\hat{\xi}_{\mathbf{p}}=\hat{\mathbf{p}}^{2}/(2m)-\mu\) with \(\mu\) being the chemical potential fixed by the total particle number, the spinor field reads: \(\psi\left(\mathbf{r}\right)=\left[\varphi_{\uparrow}\left(\mathbf{r}\right), \varphi_{\downarrow}\left(\mathbf{r}\right)\right]^{T}\) and the SOC term can be written as: \[\mathcal{H}_{soc}=\lambda_{R}\left(\sigma_{x}p_{y}-\sigma_{y}p_{x}\right)+ \lambda_{D}\left(\sigma_{x}p_{y}+\sigma_{y}p_{x}\right) \tag{1}\] where \(\lambda_{R}\) and \(\lambda_{D}\) denote the Rashba and Dresselhaus SOC parameters respectively and \(\sigma_{i=x,y,z}\) are the Pauli matrices. In order to show the anisotropic character transparently, the SOC term can be re-written as: \(\mathcal{H}_{soc}=\lambda_{y}\sigma_{x}p_{y}+\lambda_{x}\sigma_{y}p_{x}\) with \(\lambda_{y}=\lambda_{D}+\lambda_{R}\) and \(\lambda_{x}=\lambda_{D}-\lambda_{R}\). From this definition, we can see that the system is isotropic when \(\lambda_{D}=0\) or \(\lambda_{R}=0\) and anisotropic for equal Rashba and Dresselhaus (ERD) SOC: \(\lambda_{D}=\lambda_{R}\). For convenience, we define the anisotropic parameter as: \[\eta=\frac{\lambda_{D}}{\lambda_{R}} \tag{2}\] Without loss of generality, when \(\eta\) increases from \(0\) to \(1\), the system evolves from isotropic Rashba case to anisotropic case with ERD SOC. We denote the SOC strength by: \[\lambda=\sqrt{\lambda_{D}^{2}+\lambda_{R}^{2}} \tag{3}\] Finally, the interaction between spin-up and spin-down component can be simplified by a contact interaction model: \[\mathcal{H}_{I}=-g\int d^{2}\mathbf{r}\varphi_{\uparrow}^{\dagger}\left( \mathbf{r}\right)\varphi_{\downarrow}^{\dagger}\left(\mathbf{r}\right)\varphi_{ \downarrow}\left(\mathbf{r}\right)\varphi_{\uparrow}\left(\mathbf{r}\right) \tag{4}\] where \(g>0\) is the contact interaction parameter. Within path integral methods, the pairing order parameter can be conveniently introduced by using the Hubbard-Stratonovich transformation [74] to decompose the four-body interaction term \(\mathcal{H}_{I}\) by introducing a pairing field \(\Delta\left(\mathbf{r},\tau\right)\). After integrating out the fermionic fields, we obtain the effective action of the pairing field as \(S_{eff}\left[\bar{\Delta},\Delta\right]=-\int_{0}^{\beta}d\tau\int d^{d} \mathbf{r}\left|\Delta\left(\mathbf{r},\tau\right)\right|^{2}/g-1/2Tr\ln\left[ \mathcal{G}_{\mathbf{r},\tau}^{-1}\right]\) with the inverse Greens' function \(\mathcal{G}_{\mathbf{r},\tau}^{-1}\) being: \[\mathcal{G}_{\mathbf{r},\tau}^{-1}=\left[\begin{array}{cccc}\partial_{\tau} +\hat{\xi}_{\mathbf{p}}&\hat{\gamma}_{\mathbf{p}}&0&\Delta\\ \hat{\gamma}_{\mathbf{p}}^{*}&\partial_{\tau}+\hat{\xi}_{\mathbf{p}}&-\Delta &0\\ 0&-\bar{\Delta}&\partial_{\tau}-\hat{\xi}_{\mathbf{p}}&\hat{\gamma}_{\mathbf{p} }^{*}\\ \bar{\Delta}&0&\hat{\gamma}_{\mathbf{p}}&\partial_{\tau}-\hat{\xi}_{\mathbf{p} }\end{array}\right] \tag{5}\] with \(\hat{\gamma}_{\mathbf{p}}=\lambda_{y}\hat{p}_{y}+i\lambda_{x}\hat{p}_{x}\). At mean-field level, the pairing field can be chosen as a real constant parameter \(\Delta\left(\mathbf{r},\tau\right)=\Delta_{0}\) which is referred to as the gap parameter. And the effective pairing action becomes \(S_{eff}\left[\bar{\Delta},\Delta\right]=-\beta V\Delta_{0}^{2}/g-1/2\sum_{ \mathbf{p},i\omega_{n}}\ln\left[\det\mathcal{G}_{\mathbf{p},i\omega_{n}}^{-1}\right]\) where \(\mathcal{G}_{\mathbf{p},i\omega_{n}}^{-1}\) is the fourier transformation of of Eq. (5) in the momentum-frequency domain, \(V\) is the areal size of the system and \(\omega_{n}=\left(2n+1\right)\pi/\beta\) are the Fermi Matsubara frequencies. From \(\det\mathcal{G}_{\mathbf{p},E}^{-1}=0\), the quasi-particle excitation spectrum can be obtained as \(E_{\mathbf{p},\pm}=\sqrt{\left(\xi_{\mathbf{p}}\pm|\gamma_{\mathbf{p}}|\right)^{ 2}+\Delta_{0}^{2}}\) and \(E_{\mathbf{p},\pm}^{\prime}=-E_{\mathbf{p},\pm}\) where \(\xi_{\mathbf{p}}=\epsilon_{\mathbf{p}}-\mu\) with \(\epsilon_{\mathbf{p}}=\mathbf{p}^{2}/2m\) and \(\gamma_{\mathbf{p}}=\lambda_{y}p_{y}+i\lambda_{x}p_{x}\). The mean-field thermodynamic potential can be obtained using \(\Omega=-1/\beta\ln Z\) and we have: \(\Omega_{0}=-V\Delta_{0}^{2}/g+1/2\sum_{\mathbf{p},\delta}\left(\xi_{\mathbf{p}} E_{\mathbf{p},\delta}\right)-1/\beta\sum_{\mathbf{p},\delta=\pm}\ln\left(1+e^{- \beta E_{\mathbf{p},\delta}}\right)\). By variation of the thermodynamic potential with respect to the chemical potential and order parameter, we can easily obtain the mean-field gap and number equations: \[\frac{1}{g}=-\frac{1}{V}\sum_{\mathbf{p},\delta=\pm}\frac{\tanh\left(\frac{\beta E _{\mathbf{p},\delta}}{2}\right)}{4E_{\mathbf{p},\delta}}, \tag{6}\] \[n=\frac{1}{2V}\sum_{\mathbf{p},\delta=\pm}\left[1-\frac{\left(\xi_{\mathbf{p}}+ \delta\left|\gamma_{\mathbf{p}}\right|\right)\tanh\left(\frac{\beta E_{\mathbf{p}, \delta}}{2}\right)}{E_{\mathbf{p},\delta}}\right] \tag{7}\] As usual, divergence of the integral over momenta in Eq. (6) is removed by replacing contact interaction parameter \(g\) by binding energy \(E_{b}\) through \(V/g=\sum_{\mathbf{p}}1/\left(2\epsilon_{\mathbf{p}}+E_{b}\right)\). For anisotropic Rashba SOC, Eq. (6) and Eq. (7) are widely used to study the ground state and finite temperature properties of this novel system. It was first shown by Gor'kov and Rashba [31; 32] that, in the presence of a weak SOC, a 2D superconductor supports both singlet and triplet pairing fields. In ultra-cold atomic systems, this non-trivial physics was investigated in [45; 46] and proposal for detecting this anisotropic superfluidity was given in [75] through measurement of the momentum distribution and single-particle spectral function. On the other hand, SOC significantly enhances the pairing phenomena as was shown by the exact two-body solutions [45; 46] and many-body mean-field calculations [76]. The system can evolve from a BCS to a BEC state driven by SOC even for very weak interactions. Various other properties has been investigated in detail. For anisotropic SOC, however, there are not so many publications concerning the ground state properties. In [63], they investigated the effect of anisotropic SOC on the BKT transition and collective sound velocity for a 2D Fermi gases. In this paper, we focus on the effect of anisotropic SOC on condensation and superfluidity from a different point of view presented in [56]. ## III Condensed density For Fermi pairing and condensation, according to the concept of off-diagonal-long-range-order, the condensed density is generally defined as [77; 78]: \(n_{c}=1/V\sum_{\mathbf{p},ss^{\prime}}\left|\left\langle\bar{\varphi}_{ \mathbf{p},s}\varphi-_{\mathbf{p},s^{\prime}}\right\rangle\right|^{2}\). For the system considered in this paper, the attractive interaction supports a singlet-pairing field while SOC hybridizes spin degrees of freedom and induces triplet pairing simultaneously. Within mean-field theory, spin-singlet and -triplet pairing fields are given by [75]: \[\left\langle\bar{\varphi}_{\mathbf{p},\uparrow}\varphi-_{\mathbf{p},\downarrow }\right\rangle=\Delta_{0}\sum_{\delta}\tanh\left(\beta E_{\mathbf{p},\delta}/ 2\right)/\left(4E_{\mathbf{p},\delta}\right) \tag{8}\] \[\left\langle\bar{\varphi}_{\mathbf{p},\uparrow}\varphi-_{\mathbf{p},\uparrow }\right\rangle=-\Delta_{0}\left(\gamma_{\mathbf{p}}/\left|\gamma_{\mathbf{p}} \right|\right)\sum_{\delta}\delta\tanh\left(\beta E_{\mathbf{p},\delta}/2 \right)/\left(4E_{\mathbf{p},\delta}\right) \tag{9}\] respectively. The spin-singlet contribution to the condensed fraction was first discussed in [35] where it was shown to behave non-monotonically with a minimum as a function of SOC strength for weak enough interaction parameter. In our previous investigations [57], we include both singlet and triplet contributions to the condensed density and obtain: \[n_{c}=\frac{\Delta_{0}^{2}}{4}\frac{1}{V}\sum_{\mathbf{p},\delta}\frac{\tanh^ {2}\left(\frac{\beta E_{\mathbf{p},\delta}}{2}\right)}{E_{\mathbf{p},\delta} ^{2}} \tag{10}\] At zero temperature, repulsive interactions between Fermi pairs (Bosons) result in depletion of the condensate which is a familiar phenomenon for interacting BEC systems. Therefore the condensed fraction (condensed density divided by the total density \(n\)) is always less than 1. ## IV Superfluid density Unlike the condensed density, superfluid density is a dynamical properties of the system and a tensor in general. In Landau's theory of superfluidity, the normal mass of the system can be obtained through the calculation of the total momentum carried by excitations when the system is enforced in a uniform flow with velocity [79]\(\mathbf{v}_{s}\) \[\mathbf{P}=\sum_{\mathbf{p},\sigma}\mathbf{p}f\left(E_{\mathbf{p},\sigma}- \mathbf{p}\cdot\mathbf{v}_{s}\right) \tag{11}\] where \(\sigma\) is a conserved quantum number which is spin in the absence of SOC, \(f\left(x\right)=1/\left(e^{\beta x}\pm 1\right)\) is the Fermi/Bose distribution function depending on the nature of the excitations, and \(E_{\mathbf{p},\sigma}-\mathbf{p}\cdot\mathbf{v}_{s}\) is the excitation spectrum for moving systems. At zero temperature, no excitations are created at very small \(\mathbf{v}_{s}\) and the superfluid density coincides with the total density. However, the situation is dramatically changed in the presence of SOC where Galilean transformation is violated. As pointed out in [57], in the presence of SOC, Eq. (11) is no longer valid. Therefore we calculate the superfluid density by the response of the system with respect to a small phase field of the order parameter: \(\Delta\left(\mathbf{r},\tau\right)=\Delta_{0}e^{i\phi}\)[80]. In [81], E. Taylor proved that this method is equivalent to the definition of superfluid tensor from the current-current correlation function. Furthermore, this method can simultaneously give the compressibility. By substituting the ansatz \(\Delta\left(\mathbf{r},\tau\right)=\Delta_{0}e^{i\phi}\) into the partition function and expanding the partition function to the quadratic order of the phase field \(\phi\), after direct but lengthy algebraic manipulations, we obtain the effective action for the phase field as: \[\mathcal{S}_{eff}\left[\varphi,\mathbf{A}\right]\simeq\mathcal{S}_{0}+\int dx \left(\sum_{i=x,y}\frac{\rho_{s}^{i}}{2m}\mathbf{A}_{i}^{2}+\kappa\varphi^{2}\right) \tag{12}\] with the emergent vector field \(\mathbf{A}=\nabla\phi\) and scalar field \(\varphi=\nabla\phi\) denoting the spatial and temporal fluctuations of the phase field of the order parameter, respectively. The superfluid tensor can be expressed as: \(\rho_{s}^{i}=n-\rho_{n}^{i}\) with the normal density \(\rho_{n}^{i}\) are given by: \[\rho_{n}^{x} = \frac{1}{mV}\sum_{\mathbf{k},s=\pm}k_{x}^{2}Y\left(\mathcal{E}_{ \mathbf{k},s}\right)\frac{\left(\mathbf{M}^{2}+sm\lambda_{x}^{2}\xi_{\mathbf{k}} \right)^{2}}{\mathbf{M}^{4}} \tag{13}\] \[+ \frac{m\lambda_{x}^{2}}{2V}\sum_{\mathbf{k},s=\pm}\tanh\frac{ \beta\mathcal{E}_{\mathbf{k},s}}{2}\frac{\left(\xi_{\mathbf{k}}^{2}+\Delta_{0}^ {2}+s\mathbf{M}^{2}\right)\mathbf{M}_{\mathbf{k}}^{4}}{s\mathcal{E}_{\mathbf{k},s}\mathbf{M}^{6}} \tag{14}\] \[\rho_{n}^{y} = \frac{1}{mV}\sum_{\mathbf{k},s=\pm}k_{y}^{2}Y\left(\mathcal{E}_{ \mathbf{k},s}\right)\frac{\left(\mathbf{M}^{2}+sm\lambda_{y}^{2}\xi_{\mathbf{k} }\right)^{2}}{\mathbf{M}^{4}}\] \[+ \frac{m\lambda_{y}^{2}}{2V}\sum_{\mathbf{k},s=\pm}\tanh\frac{ \beta\mathcal{E}_{\mathbf{k},s}}{2}\frac{\left(\xi_{\mathbf{k}}^{2}+\Delta_{0 }^{2}+s\mathbf{M}^{2}\right)\mathbf{M}_{4}^{4}}{s\mathcal{E}_{\mathbf{k},s} \mathbf{M}^{6}} \tag{16}\] and the compressibility \(\kappa\) reads: \[\kappa=\frac{1}{2V}\sum_{\mathbf{k},s=\pm}\frac{\xi_{\mathbf{k}}^{2}\left( \left|\Gamma_{\mathbf{k}}\right|^{2}+s\mathbf{M}^{2}\right)^{2}Y\left( \mathcal{E}_{\mathbf{k},s}\right)}{\mathcal{E}_{\mathbf{k},s}^{2}\mathbf{M}^ {4}}+\frac{\Delta_{0}^{2}}{4V}\sum_{\mathbf{k},s=\pm}\tanh\frac{\beta\xi}{\cdot} \tag{17}\] with \(\mathbf{M}^{2}=\xi_{\mathbf{k}}\left|\Gamma_{\mathbf{k}}\right|\), \(\mathbf{M}_{x,y}^{2}=\xi_{\mathbf{k}}\left|\Gamma_{\mathbf{k}}^{x,y}\right|\), \(\left|\Gamma_{\mathbf{k}}^{x,y}\right|=\lambda_{x,y}k_{x,y}\) and \(Y\left(x\right)=\beta f(x)[1-f(x)]\). We first self-consistently solve Eq. (6) and Eq. (7) to get the chemical potential and gap parameters and then substitute the obtained results into Eq. (6) and Eq. (7) to get the condensed fraction and superfluid fraction tensor. ## V Result and Discussions In this paper, we only consider the ground state properties of the system. Previous investigations show that mean-field theory is qualitatively and quantitatively correct in the low temperature regime. However, at finite temperatures, fluctuations of the order parameters become more and more important. In order to get qualitatively correct physics for temperature close to critical temperature, the most successful method is to include contributions in the lowest gaussian fluctuations of the gap parameter to the thermodynamic potential which is beyond the scope of this paper and will be left as a future work. In Fig. 1, we present our numerical results of the condensed fraction as a function of the SOC parameter \(\lambda/v_{F}\) with \(v_{F}=p_{F}/m\) being the Fermi velocity and \(p_{F}\) being the Fermi momentum. It is clear that SOC enhances condensation compared with cases with no SOC. However, the condensed fraction shows non-monotonic behaviors for some parameters space. In Fig. 1(a), the interaction parameter is \(E_{b}=0.001E_{F}\) where \(E_{F}=p_{F}^{2}/2m\) is the Fermi energy. As one can see that in this weak interaction regime, as we increase the anisotropic parameters, the condensed fraction decreases but has a maximum value as functions of \(\lambda/v_{F}\). This means that for large enough anisotropic parameters, SOC does not necessarily enhances condensation. However, in the strong interaction limit with large enough \(E_{b}\) shown in Fig. 1(b), condensed fraction is always a monotonic function of SOC strength. Fig. 2 represents condensed fraction as functions of anisotropic parameters. And one can see that anisotropic parameters always suppresses condensation. In Fig. 2(a), different lines crosses with each other, which is a direct manifestation of the fact that for large enough anisotropic parameters, condensed fraction is not a monotonic function with respect to SOC strength. In general, as a static property, the condensed fraction has similar behaviors as the gap parameters. However, the superfluid fraction tensor becomes more complicated for the superfluid Fermi systems with anisotropic SOC. It is easily seen from Eq. 14 and Eq. 16 that SOC suppresses superfluidity and it creates normal density even at zero temperature. Fig. 3 represents numerical results for the superfluid fraction tensor \(\rho_{s}^{x}\) and \(\rho_{s}^{y}\) as functions of the SOC strength for various interaction and anisotropic parameters. We can see from Fig. 3 (a) with \(E_{b}=0.001E_{F}\) that \(\rho_{s}^{x}\) decreases with increasing SOC strength for small anisotropic parameters in the weak interaction limit. However, for large anisotropic parameters, \(\rho_{s}^{x}\) is a non-monotonic function of SOC strength with a global minimum. And this minimum for large anisotropic parameters disappears for strong interaction parameters as shown in Fig. 3 (b) with \(E_{b}=0.1E_{F}\). Nonetheless, the other component of the superfluid tensor \(\rho_{s}^{y}\) has different behaviors as shown in Fig. 3 (c) and (d). We have checked for various parameters and found that \(\rho_{s}^{y}\) always has a minimum regardless of the value of the anisotropic and interaction parameters. It is also clear that \(\rho_{s}^{x,y}\to n\) Figure 1: (Color online) Condensed fraction defined by: \(n_{c}/n\) as functions of SOC strength parameter \(\lambda/v_{F}\) with \(v_{F}\) being the Fermi velocity. Figure 2: (Color online) Condensed fraction as functions of anisotropic parameter \(\eta=\lambda_{D}/\lambda_{R}\). for \(\lambda=0\). We also investigated effect of anisotropy on the superfluid fraction and the results are shown in Fig. 4. As can be seen clearly from the results, \(\rho_{s}^{x}\) and \(\rho_{s}^{y}\) as functions of anisotropic parameters are more complicated. Firstly, in Fig. 4 (a) for weak interaction parameter \(E_{b}=0.001E_{F}\), \(\rho_{s}^{x}>\rho_{s}^{y}\) and only \(\rho_{s}^{y}\) shows a minimum form small SOC parameters. For large SOC parameters, \(\rho_{s}^{x}<\rho_{s}^{y}\) and only \(\rho_{s}^{x}\) has a minimum. Secondly, when the system enters strong interaction regime, as shown in Fig. 4 (b) with \(E_{b}=0.1E_{F}\), \(\rho_{s}^{x}>\rho_{s}^{y}\) but with no minimum for \(\rho_{s}^{y}\) for weak SOC. And for large SOC parameters, \(\rho_{s}^{x}<\rho_{s}^{y}\) and \(\rho_{s}^{x}\) still shows a minimum. And we have checked for larger value of interaction parameters (\(E_{b}=1.0E_{F}\)), the situations are the same as shown in Fig. 4 (b). \(\rho_{s}^{x}\) always has a minimum value for large SOC parameters. Finally, we noticed that \(\rho_{s}^{x}=\rho_{s}^{y}\) with \(\eta=0\) and we have the isotropic Rashba SOC case where \(\rho_{s}^{x}=\rho_{s}^{x}\). Furthermore, \(\rho_{s}^{x}=\rho_{s}^{y}=1\) for the anisotropic case with \(\eta=1\). This is true since in the ERD case, the SOC hamiltonian density reduces to a one-dimensional SOC term. For balanced case with equal particle numbers for spin down and spin up atoms, this one-dimensional SOC term can be gauged out and has no effect on the thermodynamic properties of the system. Therefore, the superfluid fraction at zero temperature goes to \(1\) as in the case without SOC. A final remark: as shown in the numerical results of the superfluid fraction tensor, \(\rho_{s}^{x}\) and \(\rho_{s}^{y}\) have different behaviors. This comes from the fact that we constraint the anisotropic parameter in the domain: \(0<\eta<1\). Therefore, we never reach the regime for pure Dresselhaus limit. In the transparent anisotropic expression for the SOC part of the Hamiltonian density, \(0<\eta<1\) means \(\lambda_{R}>\lambda_{D}\) and \(\lambda_{y}>\lambda_{x}\). The other limit of pure Dresselhaus SOC can be reached by setting \(\eta\rightarrow\infty\). For symmetry considerations, thermodynamic properties should be symmetric about the two regime: \(0<\eta<1\) and \(1<\eta<\infty\). ## VI Conclusion and outlook. We performed a detailed research on the effectf of an anisotropic SOC on the condensation and superfluid properties of a two dimensional Fermi gases at zero temperature. Particularly, we found that SOC not always enhances condensation and suppresses superfluidity. The condensed fraction and superfluid tensor show many different behaviors for different parameter configurations. In this paper, we only consider the phase fluctuations of the order parameter and neglect the magnitude fluctuations. Besides, inclusion of optical lattice would give us much degrees of freedom and more interesting phenomena such as the superfluid-Mott insulator transition. Furthermore, if we consider imbalanced case, there will be topological phase transition as we increase the Zeeman field across a critical value. Combinations of SOC and optical lattice provides a ideal test ground for many interesting phenomena observed in solid state material system. ## VII Acknowledgements. This work has been supported by the Scientific Research Foundation of Hunan Provincial Education Department under Grant number 20C0648.
2302.07971
Young Diagrams and Classical Groups
Young diagrams are ubiquitous in combinatorics and representation theory. Here we explain these diagrams, focusing on how they are used to classify representations of the symmetric groups $S_n$ and various "classical groups": famous groups of matrices such as the general linear group $\mathrm{GL}(n,\mathbb{C})$ consisting of all invertible $n \times n$ complex matrices, the special linear group $\mathrm{SL}(n,\mathbb{C})$ consisting of all $n \times n$ complex matrices with determinant 1, the group $\mathrm{U}(n)$ consisting of all unitary $n \times n$ matrices, and the special unitary group $\mathrm{SU}(n)$ consisting of all unitary $n \times n$ matrices with determinant 1. We also discuss representations of the full linear monoid consisting of all linear transformations of $\mathbb{C}^n$. These notes, based on the column This Week's Finds in Mathematical Physics, are made to accompany a series of lecture videos.
John C. Baez
2023-02-15T22:25:16Z
http://arxiv.org/abs/2302.07971v1
# Young Diagrams and Classical Groups ###### Abstract We consider the case of a "free" \ Mathematics and physics rely a lot on _symmetry_ to simplify problems, and there are two kinds of diagrams that show up a lot in this context: Dynkin diagrams and Young diagrams. Dynkin diagrams first show up when you study shapes with lots of reflection symmetries, like crystals and Platonic solids. They wind up being good for all sorts of other stuff, like classifying simple Lie groups and their representations. But what about Young diagrams? These are also important for studying group representations, but for a more limited class of groups: the "classical" groups. Representations of classical groups are used a lot in quantum physics, from particle physics through nuclear physics and atomic physics up to chemistry. So Young diagrams are not only beautiful, they're practical. My goal is to explain how Young diagrams are used to classify representations of classical groups. I won't prove much, just sketch the ideas. First \(\Pi\)ll explain classical groups and group representations. But even before that, I should say what's a Young diagram. #### Young diagrams Here is an example of a Young diagram: All the information here is captured by the number of boxes in each row: \[6\geq 5\geq 5\geq 2\geq 1\] So, we can define a **Young diagram** to be a finite sequence of natural numbers \(n_{1}\geq n_{2}\geq\cdots\geq n_{k}>0\). We say \(k\) is the number of **rows** and \(n_{1}\) is the number of **columns**. We say \(n_{i}\) is the number of **boxes** in the \(i\)th column, and \(n=\sum_{i}n_{i}\) is the total number of boxes. Young diagrams with \(n\) boxes classify partitions of an \(n\)-element set, up to isomorphism. For example, this partition: gives this Young diagram, whose rows list how many points are in each part: But the Young diagram does not record which point of our set lies in which part, so Young diagrams classify partitions only "up to isomorphism". Young diagrams with \(n\) boxes also classify permutations of an \(n\)-element set up to isomorphism. For example this permutation: gives the same Young diagram we have just seen. But any isomorphic permutation would give the same Young diagram. What's an "isomorphic permutation", exactly? Let's look at an example. Permutations of the set \(\{1,\ldots,n\}\) form the **symmetric group**\(S_{n}\). Say we have any permutation \(g\in S_{n}\), like this: \[1\to 2\] \[2\to 4\] \[3\to 3\] \[4\to 1\] \[5\to 6\] \[6\to 5\] \[7\to 7\] Note that 1 gets mapped to 2, which gets mapped to 4, which gets mapped back to 1 again. Similarly, 5 gets mapped to 6, which gets mapped back to 5. The number 3 gets mapped to itself right away, as does 7. No matter where we start, we always cycle back eventually. So our permutation consists of a bunch of cycles: \[(1,2,4)(5,6)(3)(7)\] and this "cycle decomposition" completely describes the permutation. To simplify life, we always write down these cycles in order of decreasing length. We also write the lowest number in each cycle first. Now suppose we conjugate our permutation \(g\) by some other permutation, say \(h\). This gives the permutation \(hgh^{-1}\). How does the cycle decomposition of this compare with that of \(g\)? It looks very similar! For example, it might look like this: \[(2,7,6)(1,3)(4)(5)\] There are the same number of cycles, each the same length as before. The only thing that changes are the numbers in each cycle. These get switched around by means of the permutation \(h\). In short, when we conjugate a permutation, all that remains unchanged is the picture we get by writing down its cycle decomposition and blotting out the specific numbers in each cycle, like this: \[(\square,\square,\square)(\square,\square)(\square)(\square)\] If we write each cycle as a row of boxes, we get a Young diagram: #### Classical groups, and a classical monoid Now, what are the classical groups? As with composers of music, there's no precise list of groups that count as "classical". But in general, a classical group should consist of linear transformations that preserve some nice geometrical structure on a vector space. Some good examples are: * The **general linear group**\(\mathrm{GL}(N,\mathbb{C})\), consisting of all invertible linear transformations of \(\mathbb{C}^{N}\), or in other words, all \(N\times N\) complex matrices with nonzero determinant. * The **special linear group**\(\mathrm{SL}(N,\mathbb{C})\), consisting of all linear transformations of \(\mathbb{C}^{N}\) with determinant \(1\). * The **unitary group**\(\mathrm{U}(N)\), consisting of all unitary linear transformations of \(\mathbb{C}^{N}\). * The **special unitary group**\(\mathrm{SU}(N)\), consisting of all unitary linear transformations of \(\mathbb{C}^{N}\) with determinant \(1\). These are the Bach, Haydn, Mozart and Beethoven of classical groups. Representations of all four can be classified with the help of Young diagrams. We may also consider this an honorary classical group, even though it's defined in terms of a _set_ rather than a _vector space_: * The symmetric group \(S_{n}\), consisting of all permutations of the set \(\{1,\dots,n\}\). Representations of this group are also classified using Young diagrams--and as we'll see, \(S_{n}\) plays a starring role in the whole story. There's another key actor whose representations are classified by Young diagrams. It deserves to be called a "classical monoid": * The **full linear monoid**\(\mathrm{End}(\mathbb{C}^{n})\), consisting of all linear transformations of \(\mathbb{C}^{N}\), or in other words, all \(N\times N\) matrices. A **monoid** is a set with an associative multiplication and identity, but not necessarily inverses. Here I am making \(\mathrm{End}(\mathbb{C}^{n})\) into a monoid where the multiplication is composition of transformations--or in low-brow terms, matrix multiplication. This monoid is so classical that people don't even call it that! Perhaps the common prejudice in favor of groups and against other monoids is to blame. As we'll see, the full linear monoid is a bit like the composer Palestrina, who is not considered a classical composer, yet who set the stage for the music we call classical. #### Representations Groups feel sad unless they are acting as symmetries of something. Monoids feel the same way--or even worse, because they're less loved than groups. This why we should study representations of groups and monoids. A **homomorphism** of monoids, say \(\rho\colon M\to N\), is a function with \[\rho(mm^{\prime})=\rho(m)\rho(m^{\prime})\text{ for all }m,m^{\prime}\in M\text{ and }\rho(1)=1.\] A **representation** of a monoid \(M\) on a vector space \(V\) is a homomorphism \[\rho\colon M\to\operatorname{End}(V)\] where \(\operatorname{End}(V)\) consists of all linear transformations of \(V\), made into a monoid using composition. A representation lets us take an element \(m\in M\) and make it act on a vector \(v\in V\) to get a new vector \(\rho(m)v\), in such a way that \[\rho(mm^{\prime})v=\rho(m)\rho(m^{\prime})v\text{ and }\rho(1)v=v.\] So now our monoid is doing something, not just sitting there moping! But a representation is still lonely in isolation. To solve this problem we define morphisms between representations of given monoid, getting an entire _category_ of representations. Given two representations \(\rho\colon M\to\operatorname{End}(V),\sigma\colon M\to\operatorname{End}(W)\), a **morphism** from the first to the second is a linear map \(f\colon V\to W\) such that \[f(\rho(m)v)=\sigma(m)f(v)\] for all \(v\in V\). That is: acting and then mapping is the same as mapping and then acting. Thanks to how \(f\) slips from outside to inside in this equation, morphisms of representations are also called **intertwining operators**. An **isomorphism** is just a morphism with an inverse, and an isomorphism of representations is also commonly called an **equivalence**. We won't do much with categories here except for classifying representations "up to isomorphism": when we do that, we don't distinguish between isomorphic representations. But studying the whole category of representations of a monoid, all at once, is a good way to get deeper insights in representation theory. The simplest representations are those on finite-dimensional vector spaces--so henceforth: We assume all vector spaces under discussion are finite-dimensional, without even mentioning it! And instead of trying to study _all_ finite-dimensional representations, I will focus on the "irreducible" ones, which serve as building blocks for more complicated ones. For example in particle physics we use irreducible representations to describe elementary particles. A representation \(\rho\) of a monoid on a vector space \(V\) is **irreducible** if \(V\) has no subspaces invariant under all the transformations \(\rho(m)\), except for \(\{0\}\) and \(V\) itself. "Irreducible representations" is a bit of a mouthful, so we also call them **irreps** for short. Why are irreducible representations important? Arguably the "indecomposable" representations are even more important to us here. Given two representations of a monoid, say \(\rho\colon M\to\operatorname{End}(V)\) and \(\rho^{\prime}\colon M\to\operatorname{End}(V^{\prime})\), there is a representation on \(V\oplus V^{\prime}\) called their **direct sum**: \[\rho\oplus\rho^{\prime}\colon M\to\operatorname{End}(V\oplus V^{\prime})\] given by \[(\rho\oplus\rho^{\prime})(m)(v,v^{\prime})=(\rho(m)v,\rho^{\prime}(m)v^{\prime }).\] A representation is **indecomposable** if it is not isomorphic to a direct sum of representations except for the 0-dimensional representation and itself. Using an inductive argument we can show that every representation is a direct sum of indecomposable representations. That is, we can break apart any representation into smaller pieces until we reach pieces that can't be broken apart any further. It is easy to see that any irreducible representation is indecomposable. The converse is not always true. However, for all the monoids we shall consider here, and the kinds of representations we consider here, indecomposability is equivalent to irreducibility! And since "irrep" is such a handy word, we shall talk about irreducibility rather than indecomposability. ### \(\boldsymbol{S_{n}}\) Amazingly, Young diagrams can be used to classify the irreps, or at least the "nice" ones, of all five classical groups I listed--\(\mathrm{GL}(N,\mathbb{C}),\,\mathrm{SL}(N,\mathbb{C}),\,\mathrm{U}(N),\, \mathrm{SU}(N)\) and \(S_{n}\)--together with the classical monoid \(\mathrm{End}(\mathbb{C}^{N})\). Let me sketch how this goes. We'll start with the symmetric groups \(S_{n}\), which are the most important of all. Remember, I've shown how conjugacy classes of permutations in \(S_{n}\) correspond to Young diagrams with \(n\) boxes. Now I want to do the same for irreducible representations of \(S_{n}\). This is cool for the following reason: for any finite group, the number of irreducible representations is the same as the number of conjugacy classes of group elements! But in general there's no natural one-to-one correspondence between irreducible representations with conjugacy classes. The group \(S_{n}\) just happens to be specially nice in this way. To get started I should tell you some stuff that work for any finite group. Suppose \(G\) is a finite group. Then \(G\) has only finitely many irreps, all finite-dimensional. Every finite-dimensional representation of \(G\) is a direct sum of copies of these irreps. To get our hands on these irreps, let \(\mathbb{C}[G]\) be the space of formal linear combinations of elements of \(G\). This is called the **group algebra** of \(G\), since it becomes an algebra using the product in \(G\). With some work, one can show that \(\mathbb{C}[G]\) is isomorphic to an algebra of block diagonal matrices. For example, \(\mathbb{C}[S_{3}]\) is isomorphic to the algebra of matrices of this form: \[\left(\begin{array}{cccc}*&0&0&0\\ 0&*&0&0\\ 0&0&*&*\\ 0&0&*&*\end{array}\right)\] where the \(*\) entries can be any complex number whatsoever. Since matrices act on vectors by matrix multiplication, we can use this to get a bunch of representations of \(\mathbb{C}[G]\), and thus of \(G\) -- one representation for each block. And this trick gives us all the irreps of \(G\)! For example, \(S_{3}\) has two \(1\)-dimensional irreps, coming from the two \(1\times 1\) blocks in the above matrix, and one \(2\)-dimensional irrep, coming from the \(2\times 2\) block. In fact, we can actually concoct these irreps as subspaces of \(\mathbb{C}[G]\). One way is to find elements of \(\mathbb{C}[G]\) with a single \(1\) on the diagonal of one block and zero everywhere else, like these: \[\underbrace{\left(\begin{array}{cccc}1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right)}_{p_{1}}\qquad\underbrace{\left(\begin{array}{ ccccc}0&0&0&0\\ 0&1&0&0\\ 0&0&0&0\\ 0&0&0&0\end{array}\right)}_{p_{2}}\qquad\underbrace{\left(\begin{array}{ ccccc}0&0&0&0\\ 0&0&0&0\\ 0&0&1&0\\ 0&0&0&0\end{array}\right)}_{p_{3}}\] If we can find these guys, right multiplying by them will project down to various subspaces of \(\mathbb{C}[G]\), namely \[\{ap_{i}\mid a\in\mathbb{C}[G]\}.\] And these subspaces will be irreps of \(G\), as you can check using our description of \(\mathbb{C}[G]\) as an algebra of block diagonal matrices. How do we find these guys \(p_{i}\) in \(\mathbb{C}[G]\)? That takes work! But for starters, notice that: * They are **idempotent**: \(p_{i}^{2}=p_{i}\). * They are **minimal**: if \(p_{i}\) is the sum of two idempotents, one of them must be zero. * They are **separated**: if \(i\neq j\) we have \(p_{i}ap_{j}=0\) for all \(a\in\mathbb{C}[G]\). Indeed they form a large-as-possible collection of separated minimal idempotents: as many as the number of irreps \(G\)--or equivalently, the number of conjugacy classes in \(G\). To go further, we need to know more about our group \(G\). So now I'll take \(G\) to be \(S_{n}\) and tell you how to get separated minimal idempotents. We'll get one for each Young diagram with \(n\) boxes! Since there's as many conjugacy classes in \(S_{n}\) as \(n\)-box Young diagrams, that will mean we've got a large-as-possible collection. Here's how it works. Say we have a Young diagram with \(n\) boxes, like this: Then we can pack it with numbers from \(1\) to \(n\) like this: There are a bunch of permutations in \(S_{n}\) called **row permutations** that only permute the numbers within each row of our Young diagram. And there are a bunch called **column permutations** that only permute the numbers within each column. We can form an idempotent \(p_{S}\) in \(\mathbb{C}[S_{n}]\) that symmetrizes over all row permutations. We get \(p_{S}\) by taking the sum of all row permutations divided by the number of row permutations: \[p_{S}=\frac{1}{|R|}\sum_{\sigma\in R}\sigma\in\mathbb{C}[S_{n}]\] where \(R\) is the set of row permutations. Similarly, we can form an idempotent \(p_{A}\) in \(\mathbb{C}[S_{n}]\) that antisymmetrizes over all column permutations. We get \(p_{A}\) by taking the sum of all _even_ column permutations minus the sum of all _odd_ column permutations, and then dividing by the total number of column permutations: \[p_{A}=\frac{1}{|C|}\sum_{\sigma\in C}\operatorname{sgn}(\sigma)\sigma\in \mathbb{C}[S_{n}]\] where \(C\) is the set of column permutations. Now here's the cool part: up to a constant factor, \(p_{A}q_{A}\) is a minimal idempotent in \(\mathbb{C}[S_{n}]\)! Even better, this procedure gives exactly one minimal idempotent for each block in the block matrix description of \(\mathbb{C}[S_{n}]\). This isn't obvious at all--it takes real work to prove--but it's the crucial fact that connects \(n\)-box Young diagrams to representations of \(S_{n}\). Consider \(n=3\), for example. There are 3 Young diagrams in this case: so \(S_{3}\) has 3 irreps, confirming something I already said. For the long squat diagram the column permutations are trivial, so the minimal central idempotent is just \(p\). That is, it just "symmetrizes": it's the sum of all \(3!\) permutations in \(S_{3}\), divided by \(3!\). It winds up giving a \(1\times 1\) block in \[\mathbb{C}[S_{3}]\cong\left(\begin{array}{cccc}*&0&0&0\\ 0&*&0&0\\ 0&0&*&*\\ 0&0&*&*\end{array}\right)\] and thus a 1-dimensional representation of \(S_{3}\). This is the **trivial representation** where every element of \(S_{3}\) acts as the identity operator on \(\mathbb{C}\). Every monoid has a trivial representation. For the tall skinny diagram the row permutations are trivial, so the minimal idempotent is just \(q\). That is, it just "antisymmetrizes": it's the sum of all \(3!\) permutations times their signs, divided by \(3!\). This gives the other 1-dimensional representation of \(S_{3}\): the **sign representation** where each permutation acts on \(\mathbb{C}\) as multiplication by its sign. The remaining 3-box Young diagram is a bit trickier. It gives a minimal idempotent that does a more interesting mix of row symmetrization and column antisymmetrization. This gives the 2-dimensional representation of \(S_{3}\). Here's a more concrete way to describe this representation. You can think of \(S_{3}\) as the symmetries of an equilateral triangle. If you draw such a triangle in the plane, centered at the origin, each symmetry of this triangle gives a linear transformation of \(\mathbb{R}^{2}\), or in other words a \(2\times 2\) real matrix. But you can think of this as a complex \(2\times 2\) matrix! This trick defines a homomorphism \(\rho\colon S_{3}\to\operatorname{End}(\mathbb{C}^{2})\), and this is our representation. \(\mathbf{End}(\mathbb{C}^{N})\) We could go on thinking about Young diagrams and representations of the symmetric groups \(S_{n}\) for a long time. People have spent their lives on this! But before we get too old, let's see how Young diagrams give representations of the four other classical groups. It's actually best to start with the full linear monoid \(\operatorname{End}(\mathbb{C}^{N})\), since those four classical groups are all contained in this. Indeed we have monoid homomorphisms like this, all given by inclusions: \[\diagram{\operatorname{SU}(N)\arrow\put(0,0){\includegraphics[]{fig/L1.eps}}} \operatorname{U}(N)\arrow\put(0,0){\includegraphics[]{fig/L2.eps}}\operatorname{SL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L3.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L4.eps}}\operatorname{End}( \mathbb{C}^{N})\arrow\put(0,0){\includegraphics[]{fig/L5.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L6.eps}}\operatorname{End}( \mathbb{C}^{N})\arrow\put(0,0){\includegraphics[]{fig/L7.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L8.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L9.eps}}\operatorname{End}( \mathbb{C}^{N})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{End}( \mathbb{C}^{N})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{End}( \mathbb{C}^{N})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{End}( \mathbb{C}^{N})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{End}( \mathbb{C}^{N})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{End}( \mathbb{C}^{N})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{End}( \mathbb{C}^{N})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{End}( \mathbb{C}^{N})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{End}( \mathbb{C}^{N})\arrow\put(0,0){\includegraphics[]{fig/L10.eps}}\operatorname{GL}(N, \mathbb{C})\arrow\put(0,0){\includegraphics[]{fig/L10. is not only a representation of \(\operatorname{End}(\mathbb{C}^{N})\); it's also a representation of \(\mathbb{C}[S_{n}]\), coming from permutations of the \(n\) factors. And the actions of these two monoids commute! This is easy to see by a direct calculation. Next, we have seen that each \(n\)-box Young diagram diagram \(Y\) gives a minimal idempotent in \(\mathbb{C}[S_{n}]\). This acts as an operator on \((\mathbb{C}^{N})^{\otimes n}\), say \[p_{Y}\colon(\mathbb{C}^{N})^{\otimes n}\to(\mathbb{C}^{N})^{\otimes n}.\] The image of this operator is some subspace \[L=\{p_{Y}v\;\big{|}\;v\in(\mathbb{C}^{N})^{\otimes n}\}\subseteq(\mathbb{C}^{ N})^{\otimes n}.\] But in fact, the action of \(\operatorname{End}(\mathbb{C}^{N})\) on \((\mathbb{C}^{N})^{\otimes n}\) preserves this subspace \(L_{Y}\). Thus, \(L_{Y}\) becomes a representation of \(\operatorname{End}(\mathbb{C}^{N})\). So, we have gotten a representation of \(\operatorname{End}(\mathbb{C}^{N})\) from the Young diagram \(Y\)! To see that \(L_{Y}\) is preserved by the action of \(\operatorname{End}(\mathbb{C}^{N})\), we use the fact that the actions of \(\operatorname{End}(\mathbb{C}^{N})\) and \(\mathbb{C}[S_{n}]\) commute. Suppose we have a vector in \(L\), say \(p_{Y}v\). Then for any operator \(T\in\operatorname{End}(\mathbb{C}^{N})\) we have \[Tp_{Y}v=p_{Y}Tv\] so it lies in \(L_{Y}\). None of this was hard. The really cool part is that \(L_{Y}\) is always an _irreducible_ representation of \(\operatorname{End}(\mathbb{C}^{N})\). This is much less obvious! The reason, ultimately, is that the linear transformations of \((\mathbb{C}^{N})^{\otimes n}\) that commute with all transformations coming from the representation of \(\operatorname{End}(\mathbb{C}^{N})\) on this space are precisely those coming from \(\mathbb{C}[S_{n}]\). This is half of a result called "Schur-Weyl duality". And I can't resist mentong the other half, though we don't need it here. It says that the linear transformations of \((\mathbb{C}^{N})^{\otimes n}\) that commute with all transformations coming from the representation of \(\mathbb{C}[S_{n}]\) on this space are precisely those coming from \(\operatorname{End}(\mathbb{C}^{N})\). As you can see, there is some serious math going on here. In any event, each Young diagram gives an irrep of \(\operatorname{End}(\mathbb{C}^{N})\). Let's see how this works in a few examples. If we take \(n=3\), then \(S_{3}\) acts on \[(\mathbb{C}^{N})^{\otimes 3}=\mathbb{C}^{N}\otimes\mathbb{C}^{N}\otimes \mathbb{C}^{N}\] So, we get some irreps of \(\operatorname{End}(\mathbb{C}^{N})\) from 3-box Young diagrams. As we've seen, the long squat Young diagram gives the minimal idempotent that just "symmetrizes". So it gives an irrep of \(\operatorname{End}(\mathbb{C}^{N})\) on the space of **symmetric tensors of rank 3**: \[S^{3}(\mathbb{C}^{N})=\big{\langle}\frac{1}{3!}\sum_{\sigma\in S_{n}}v_{ \sigma(1)}\otimes v_{\sigma(2)}\otimes v_{\sigma(3)}\;\big{|}\;v_{1},v_{2},v_{ 3}\in\mathbb{C}^{N}\big{\rangle}\] where the angle brackets mean we take all linear combinations of vectors of this form. Similarly, the tall skinny Young diagram gives the minimal idempotent that "antisymmetrizes". So it gives an irrep of \(\operatorname{End}(\mathbb{C}^{N})\) on the space of **antisymmetric tensors of rank 3**: \[\Lambda^{3}(\mathbb{C}^{N})=\langle\frac{1}{3!}\sum_{\sigma\in S_{n}}\operatorname {sgn}(\sigma)\,v_{\sigma(1)}\otimes v_{\sigma(2)}\otimes v_{\sigma(3)}\;\big{|} \;v_{1},v_{2},v_{3}\in\mathbb{C}^{N}\rangle.\] All this works the same way for any other number replacing \(3\). The other 3-box Young diagram is more tricky. To get its minimal idempotent up to a constant factor, you need to first antisymmetrize over column permutations of the numbers here: \[\youngbox[]{12}\] and then symmetrize over row permutations. Then you apply the resulting element of \(\mathbb{C}[S_{3}]\) to all vectors \(v_{1}\otimes v_{2}\otimes v_{3}\), and take all linear combinations of what you get. I could write down the formulas, but you probably wouldn't enjoy it. In math, some things are more fun to do than to watch. When you think about this game works, you'll notice that some of irreps we get are a bit silly. If we have a Young diagram with more than \(N\) rows, we'll be antisymmetrizing over a tensor product of more than \(N\) vectors in \(\mathbb{C}^{N}\), which always gives zero. So such Young diagrams give zero-dimensional representations of \(\operatorname{End}(\mathbb{C}^{N})\). We can ignore these. Indeed, most people decree that zero-dimensional representations don't even count as irreducible, just as the number \(1\) isn't prime. Let's do that from now on. With this convention in place, we get an irrep of \(\operatorname{End}(\mathbb{C}^{N})\) from each Young diagram with at most \(N\) rows. And they're all different: that is, distinct Young diagrams with at most \(N\) rows give nonisomorphic representations. Do we get _all_ the irreps of \(\operatorname{End}(\mathbb{C}^{N})\) from Young diagrams with at most \(N\) rows? No, alas. Suppose we have a representation \(\rho\) of \(\operatorname{End}(\mathbb{C}^{N})\) that arises from a Young diagram. Say it acts on some vector space \(L\). If we pick a basis for \(L\), we can write each linear transformation \(\rho(x)\colon L\to L\) as a matrix, and you can check that the matrix entries of \(\rho(x)\) are polynomials in the entries of the original matrix \(x\in\operatorname{End}(\mathbb{C}^{N})\). Thus we say \(\rho\) is a **polynomial representation**--and we see that Young diagrams can only give us polynomial representations of \(\operatorname{End}(\mathbb{C}^{N})\). Thus, as soon as you find a irrep of \(\operatorname{End}(\mathbb{C}^{N})\) that's not a polynomial representation, you'll know that you can't get _all_ the irreps of \(\operatorname{End}(\mathbb{C}^{N})\) from Young diagrams. And such an irrep is not hard to find. For example, consider the representation \[\begin{array}{cccc}\rho\colon&\operatorname{End}(\mathbb{C}^{N})&\to& \operatorname{End}(\mathbb{C}^{N})\\ &T&\mapsto&\overline{T}\end{array}\] that takes the complex conjugate of each entry of an \(N\times N\) matrix. There are many more. But the next best thing is true: every polynomial irrep of \(\operatorname{End}(\mathbb{C}^{N})\) comes from a Young diagram. In fact there is a one-to-one correspondence between these things: * polynomial irreps of \(\operatorname{End}(\mathbb{C}^{n})\), up to isomorphism * Young diagrams with \(\leq N\) rows. Thus, we say that Young diagrams with at most \(N\) rows _classify_ polynomial irreps of \(\operatorname{End}(\mathbb{C}^{N})\). This remarkable fact is the basic link between Young diagrams and representations of the classical groups. Let's see how to use it. \(\operatorname{GL}(N,\mathbb{C})\) Let's start with the biggest of the classical groups, the general linear group \(\operatorname{GL}(N,\mathbb{C})\). Consider its inclusion in \(\operatorname{End}(\mathbb{C}^{N})\): \[\operatorname{GL}(N,\mathbb{C})\to\operatorname{End}(\mathbb{C}^{N})\] Composing this with any polynomial irrep of \(\operatorname{End}(\mathbb{C}^{N})\), we get a representation of \(\operatorname{GL}(N,\mathbb{C})\). In fact it is an irrep. We don't get all the irreps of \(\operatorname{GL}(N,\mathbb{C})\), but we get all the polynomial irreps: that is, those whose matrix entries are polynomials in the matrix entries of the element \(g\in\operatorname{GL}(N,\mathbb{C})\) they depend on. Furthermore, since \(\operatorname{GL}(N,\mathbb{C})\) is dense in \(\operatorname{End}(\mathbb{C}^{n}\) and polynomials are continuous, distinct polynomial irreps of \(\operatorname{End}(\mathbb{C}^{N})\) give distinct polynomial irreps of \(\operatorname{GL}(N,\mathbb{C})\). Even better, every polynomial irrep arises from one of \(\operatorname{End}(\mathbb{C}^{n})\). Using these ideas and our previous results on representations of \(\operatorname{End}(\mathbb{C}^{N})\), we can show that there is a one-to-one correspondence between these things: * polynomial irreps of \(\operatorname{GL}(N,\mathbb{C})\), up to isomorphism * Young diagrams with \(\leq N\) rows. Even better, every polynomial representation of \(\operatorname{GL}(N,\mathbb{C})\) can be written as a direct sum of polynomial irreps. However, there are plenty of non-polynomial irreps of \(\operatorname{GL}(N,\mathbb{C})\): not only those coming from the non-polynomial irreps of \(\operatorname{End}(\mathbb{C}^{N})\), but also others. The reason is that a matrix in \(\operatorname{GL}(N,\mathbb{C})\) has nonzero determinant, so we can cook up representations involving the inverse of the determinant, which is not a polynomial. The 1-dimensional irrep of \(\operatorname{GL}(N,\mathbb{C})\) sending each matrix \(g\) to \(\det(g)\), called the **determinant representation**. This is a polynomial irrep, so it must come from a Young diagram. Indeed it comes from tall skinny Young diagram with one column and \(N\) rows, e.g. when \(N=5\). If we have any irrep of \(\operatorname{GL}(N,\mathbb{C})\) coming from a Young diagram, tensoring it with the determinant representation gives a new irrep described by a Young diagram with an extra column with \(N\) rows, like this: However, there's also a 1-dimensional irrep of \(\mathrm{GL}(N,\mathbb{C})\) that sends \(g\in\mathrm{GL}(N,\mathbb{C})\) to \(\det(g)^{-1}\). This is called the **inverse** of the determinant representation, both for the obvious reason and because when you tensor it with the determinant representation you get the trivial representation. Since \(\det(g)^{-1}\) is not a polynomial in the matrix entries of \(g\), this not a polynomial representation. But it is still an **algebraic representation**: one whose matrix entries are rational functions of the matrix entries of \(g\). Algebraic representations are the kind most natural in algebraic geometry. Indeed \(\mathrm{GL}(N,\mathbb{C})\) is a **linear algebraic group** over \(\mathbb{C}\): that is, a group in the category of affine algebraic varieties over the complex numbers. When people talk about representations of linear algebraic groups, they usually mean algebraic representations. So, fans of algebraic geometry will be glad to know that algebraic irreps of \(\mathrm{GL}(N,\mathbb{C})\) can all be built by taking a polynomial irrep and tensoring it with the inverse of the determinant representation some number of times. This in turn means we can describe any algebraic irrep of \(\mathrm{GL}(N,\mathbb{C})\) using a Young diagram with _fewer than_\(N\) rows together with an integer \(k\). The Young diagram gives a representation \(\rho\), and then we form the representation on the same space where \(g\) acts by \(\det(g)^{k}\rho(g)\). If \(k\geq 0\) this is the same as tacking on \(k\) extra columns with \(N\) rows to our Young diagram, but the procedure also makes sense for \(k<0\). We get a one-to-one correspondence between these things: * algebraic irreps of \(\mathrm{GL}(N,\mathbb{C})\), up to isomorphism * pairs consisting of a Young diagram with \(<N\) rows and an integer. If you like, you can think of such a pair as a funny sort of Young diagram with \(\leq N\) rows where the number of columns with \(N\) rows can be any integer--even a negative number! This is the story for irreps, but what about more general representations? It's as nice as it could be: every algebraic representation of \(\mathrm{GL}(N,\mathbb{C})\) is a direct sum of algebraic irreps. If you don't yet love algebraic geometry, you may prefer to think of \(\mathrm{GL}(N,\mathbb{C})\) as a **complex Lie group**: a group in the category of complex manifolds. When we talk about a representation of a complex Lie group \(G\), we usually mean an **complex-analytic representation**: a representation \(\rho\colon\mathrm{GL}(N,\mathbb{C})\to\mathrm{End}(L)\) for which the matrix entries of \(\rho(g)\) are complex-analytic functions of the matrix entries of \(g\). Luckily for \(\mathrm{GL}(N,\mathbb{C})\) these representations are all algebraic! The constraint \(\rho(gh)=\rho(g)\rho(h)\) is so powerful that any complex-analytic solution is actually algebraic. So, the whole story we told for algebraic representations of \(\mathrm{GL}(N,\mathbb{C})\) also applies to complex-analytic ones. #### \(\mathbf{SL(\boldsymbol{N},\mathbb{C})}\) We can also get representations of the special linear group \(\mathrm{SL}(N,\mathbb{C})\) from Young diagrams. Any Young diagram with at most \(N\) rows gives an algebraic irrep of \(\mathrm{End}(\mathbb{C}^{N})\), and composing this with the inclusion \[\mathrm{SL}(N,\mathbb{C})\to\mathrm{End}(\mathbb{C}^{N})\] we get an algebraic irrep of \(\mathrm{SL}(N,\mathbb{C})\). We get all the algebraic irreps of \(\mathrm{SL}(N,\mathbb{C})\) this way. Even better, the irritating fly in the ointment for \(\mathrm{GL}(N,\mathbb{C})\), the determinant representation, become trivial for \(\mathrm{SL}(N,\mathbb{C})\). So does the inverse of the determinant representation. So, we get a one-to-one correspondence between these two things: * algebraic irreps of \(\mathrm{SL}(N,\mathbb{C})\), up to isomorphism * Young diagrams with \(<N\) rows. Furthermore, every algebraic representation of \(\operatorname{SL}(N,\mathbb{C})\) is a direct sum of algebraic irreps. So, algebraic representations of \(\operatorname{SL}(N,\mathbb{C})\) are classified by _finite collections_ of Young diagrams with \(<N\) rows. Here we are thinking of \(\operatorname{SL}(N,\mathbb{C})\) as a linear algebraic group. We can also think of it as a complex Lie group. However, all its complex-analytic representations are algebraic. So the same classification applies here too. ### \(\operatorname{U}(N)\) The unitary group \(\operatorname{U}(N)\) is different from the classical groups so far, because the equations defining unitarity involve complex conjugation: \[gg^{*}=1\] so it's not a linear algebraic group over \(\mathbb{C}\). Instead it's a linear algebraic group over \(\mathbb{R}\). We shall still study its representations on complex vector spaces, but now the interesting ones are the **real-algebraic representations**: those where the matrix entries of \(\rho(g)\) are rational functions of the real and imaginary parts of the matrix entries of \(g\). To get representations of \(\operatorname{U}(N)\) it's convenient to use our knowledge of representations of \(\operatorname{GL}(N,\mathbb{C})\). We can take any algebraic irrep of \(\operatorname{GL}(N,\mathbb{C})\) and compose it with the inclusion \[\operatorname{U}(N)\to\operatorname{GL}(N,\mathbb{C})\] to get a real-algebraic representation of \(\operatorname{U}(N)\). The result is an irrep, and we get all the real-algebraic irreps of \(\operatorname{U}(N)\) on complex vector spaces this way. In fact, the classification of these real-algebraic irreps of \(\operatorname{U}(N)\) completely matches the classification of algebraic irreps of \(\operatorname{GL}(N,\mathbb{C})\). We thus get a one-to-one correspondence between these things: * real-algebraic irreps of \(\operatorname{U}(N)\) on complex vector spaces, up to isomorphism * pairs consisting of a Young diagram with \(<N\) rows and an integer. Furthermore, every real-algebraic representation of \(\operatorname{U}(N)\) is a direct sum of real-algebraic irreps. Alternatively, we can think of \(\operatorname{U}(N)\) as a **Lie group**: a group in the category of manifolds (ordinary real manifolds, not complex manifolds). For a Lie group it's natural to study **smooth representations**: those where the matrix entries of \(\rho(g)\) are smooth functions of the matrix entries of \(g\). Or we can go further and think of \(\operatorname{U}(N)\) as a mere **topological group**: a group in the category of topological spaces. For a topological group it's natural to study **continuous representations**, where the matrix entries of \(\rho(g)\) are continuous functions of the matrix entries of \(g\). But something very nice is true: every smooth representation of \(\operatorname{U}(N)\) is automatically real-algebraic, and every continuous representation of _any_ Lie group is automatically smooth! So we do not gain any generality by considering smooth or continuous irreps of \(\operatorname{U}(N)\): they are both classified by pairs consisting of a Young diagram with \(<N\) rows and an integer. Another variant also turns out to work the same way. In quantum physics we use unitary representations on Hilbert spaces. A finite-dimensional Hilbert space, which is the only kind we'll consider here, is just a finite-dimensional complex vector space with an inner product. A **unitary representation** of a group \(G\) on a Hilbert space \(H\) is a representation \(\rho\colon G\to\operatorname{End}(V)\) such that each of the transformations \(\rho(g)\) is unitary. It turns out that because \(\mathrm{U}(N)\) is compact, we can take any continuous representation \(\rho\colon\mathrm{U}(N)\to\mathrm{End}(V)\), pick any inner product on the vector space \(V\), and "average it" over the action of \(U(N)\) to get a new improved inner product with \[\langle\rho(g)v,\rho(g)w\rangle=\langle v,w\rangle\quad\text{ for all }v,w\in V \text{ and }g\in\mathrm{U}(N).\] This says that all the transformations \(\rho(g)\) are unitary: \[\rho(g)^{*}\rho(g)=1.\] So, \(\rho\) has been promoted to a unitary representation. Putting this together with what we already have, one can show there is a one-to-one correspondence between these things: * continuous unitary irreps of \(\mathrm{U}(N)\), up to isomorphism * pairs consisting of a Young diagram with \(<N\) rows and an integer. Also, every continuous unitary representation of \(\mathrm{U}(N)\) is a direct sum of continuous unitary irreps. ### \(\mathrm{SU}(N)\) Finally we turn to the special unitary group \(\mathrm{SU}(N)\). Since all the main patterns have been laid out, we will go faster now--as usual, not proving things but at least trying to make them plausible. Just as \(GL(N,\mathbb{C})\) helps us understand \(\mathrm{U}(N)\), \(\mathrm{SL}(N,\mathbb{C})\) helps us understand \(\mathrm{SU}(N)\). The reason, ultimately, is that \(\mathrm{U}(N)\) is the"compact real form" of the complex Lie group \(\mathrm{GL}(N,\mathbb{C})\), and \(\mathrm{SU}(N)\) is the compact real form of \(\mathrm{SL}(N,\mathbb{C})\). But to understand this, one needs to get into Lie theory more deeply than we intend to here. We can take any algebraic irrep of \(\mathrm{SL}(N,\mathbb{C})\) and compose it with the inclusion \[\mathrm{SU}(N)\to\mathrm{GL}(N,\mathbb{C})\] to get a representation of \(\mathrm{SU}(N)\). This is a real-algebraic irrep, and we get all the real-algebraic irreps of \(\mathrm{SU}(N)\) this way. With help from our classification of algebraic irreps of \(\mathrm{SL}(N,\mathbb{C})\), we we can show there is a one-to-one correspondence between these things: * real-algebraic irreps of \(\mathrm{SU}(N)\), up to isomorphism * Young diagrams with \(<N\) rows. Then, by the averaging trick mentioned already for \(\mathrm{U}(N)\), we also get a one-to-one correspondence between these things: * continuous unitary irreps of \(\mathrm{SU}(N)\), up to isomorphism * Young diagrams with \(<N\) rows. Further more, as we have come to expect, in both the real-algebraic case and the continuous unitary case every representation of the given sort is a direct sum of irreps of that sort. #### Summary and further directions Let's summarize what we have seen--but also say a bit more. While we have studied representations on finite-dimensional vector spaces over \(\mathbb{C}\), most of the _purely algebraic_ results hold for any field of characteristic zero! Fields with nonzero characteristic behave very differently, and in fact the irreducible representations of \(S_{n}\) still haven't been classified over finite fields. But the items with check marks here hold if we replace \(\mathbb{C}\) with any field of characteristic zero: * Irreps of \(S_{n}\) are classified by Young diagrams with \(n\) boxes. * Polynomial irreps of \(\operatorname{End}(\mathbb{C}^{N})\) are classified by Young diagrams with \(\leq N\) rows. * Polynomial irreps of \(\operatorname{GL}(N,\mathbb{C})\) are classified by Young diagrams with \(\leq N\) rows. * Algebraic irreps of \(\operatorname{GL}(N,\mathbb{C})\) are classified by pairs consisting of a Young diagram with \(<N\) rows and an integer. * Algebraic irreps of \(\operatorname{SL}(N,\mathbb{C})\) are classified by Young diagrams with \(<N\) rows. * Analytic irreps of \(\operatorname{SL}(N,\mathbb{C})\) are classified by Young diagrams with \(<N\) rows. * Analytic irreps of \(\operatorname{GL}(N,\mathbb{C})\) are classified by pairs consisting of a Young diagram with \(<N\) rows and an integer. * Real-algebraic irreps of \(\operatorname{U}(N)\) are classified by pairs consisting of a Young diagram with \(<N\) rows and an integer. * Continuous unitary irreps of \(\operatorname{U}(N)\) are classified by pairs consisting of a Young diagram with \(<N\) rows and an integer. * Real-algebraic irreps of \(\operatorname{SU}(N)\) are classified by Young diagram with \(<N\) rows. * Continuous unitary irreps of \(\operatorname{SU}(N)\) are classified by Young diagram with \(<N\) rows. However, this is far from the end of the story! First of all, we can use \(n\)-box Young diagrams packed with numbers \(1,\dots,n\), called **Young tableaux**, to do all sorts of calculations involving irreps of classical groups. Say we want to figure out the dimension of the irrep of \(S_{n}\) corresponding to some Young diagram. Then we just count the **standard Young tableaux** of that shape: that is, Young tableaux where the numbers increase as we go down any column or across any row. For example, there are two standard Young tableaux of this shape: \[\begin{array}{|c|c|}\hline 1&2&1&3\\ \hline 3&2&\\ \hline\end{array}\] so this Young diagram: \[\begin{array}{|c|c|}\hline\\ \hline\\ \hline\end{array}\] gives a 2-dimensional irrep of \(S_{3}\). Or: say we tensor two irreps and want to decompose the result as a direct sum of irreps: how do we do it? We play a little game with Young tableaux and out pops the answer. The relevant buzzword is "Littlewood-Richardson rules". Or say we have an irrep of \(S_{n}\) and want to know how it decomposes into irreps when we restrict it to a subgroup like \(S_{n-1}\), or similarly for \(\operatorname{SL}(N,\mathbb{C})\) and \(\operatorname{SL}(N-1,\mathbb{C})\), etc. How do we do this? More messing with Young tableaux. Here one relevant buzzword is "branching rules". I'll warn you right now: there is an _enormous_ literature on this stuff. The combinatorics of Young diagrams is one of those things that everyone has worked on, from hardnosed chemists to starry-eyed category theorists. It takes a lifetime to master this material, and I certainly have _not_. But learning even a little is fun, so don't be _too_ scared. Second of all, Young diagrams are also good for studying the representations of some other classical groups, such as these: * The **orthogonal group**\(\operatorname{O}(N)\), consisting of all orthogonal linear transformations of \(\mathbb{R}^{N}\). * The **special orthogonal group**\(\operatorname{SO}(N)\), consisting of all orthogonal linear transformations of \(\mathbb{R}^{N}\) with determinant \(1\). * The **symplectic group**\(\operatorname{Sp}(2N)\), consisting of all symplectic linear transformations of \(\mathbb{R}^{2N}\). All these groups have an obvious "tautologous representation", and we can cook up other representations by taking the \(n\)th tensor power of this representation and hitting it with minimal idempotents in \(\mathbb{C}[S_{n}]\) coming from Young diagrams. The story I just told you can be repeated with slight or not-so-slight variations for these other groups. Third, we can "\(q\)-deform" the whole story, replacing any one of these classical groups by the associated "quantum group", and replacing \(\mathbb{C}[S_{n}]\) by the corresponding "Hecke algebra". This is really important in topological quantum field theory and the theory of von Neumann algebras. Fourth, there are nice relationships between Young diagrams and algebraic geometry, like the "Schubert calculus" for the cohomology ring of a Grassmannian. Fifth and finally, Young diagrams are themselves objects in an important category! To understand this we need to step back a bit. We have seen that Young diagrams are good for getting new representations from old ones. Given any representation \[\rho\colon M\to\operatorname{End}(V)\] of any monoid \(M\), and given any Young diagram \(Y\), we can get a new representation of \(M\) as follows. First form the \(n\)th tensor power of \(\rho\), which is the representation \[\rho^{\otimes n}\colon M\to\operatorname{End}(V^{\otimes n})\] defined by \[\rho^{\otimes n}(m)(v_{1}\otimes\cdots\otimes v_{n})=\rho(m)(v_{1})\otimes \cdots\otimes\rho(m)(v_{n}).\] The group \(S_{n}\) also acts on \(V^{\otimes n}\), so the minimal idempotent in \(\mathbb{C}[S_{n}]\) coming from \(Y\) gives an idempotent operator \[p_{Y}\colon V^{\otimes n}\to V^{\otimes n}\] Then take the image of \(p_{Y}\). Since the actions of \(M\) and \(S_{n}\) on \(V^{\otimes n}\) commute, this image is a subspace of \(V^{\otimes n}\) that is invariant under all the transformations \(\rho(m)\) for \(m\in M\). So, it gives a representation of \(M\). Let us call this new representation \(Y(\rho)\). Since this procedure for getting new representations from old is completely systematic, it should be a functor. Indeed, this is true! There is a category \(\mathsf{Rep}(M)\) whose objects are representations of \(M\), with the usual morphisms between these. There is a functor from this category to itself, say \[Y\colon\mathsf{Rep}(M)\to\mathsf{Rep}(M),\] that maps each representation \(\rho\) to \(Y(\rho)\). And this functor is called a **Schur functor**. Schur functors also work on categories other than categories of representations. Very roughly, Schur functors know how to act on any category where: * we can take linear combinations of morphisms \(f,g\colon x\to y\) between any two objects \(x\) and \(y\), * we can take direct sums and tensor products of objects, * the symmetric group \(S_{n}\) acts on \(x^{\otimes n}\) for any object \(x\), and * we can project to the image of any idempotent morphism \(f\colon x\to x\). One can make these conditions precise, and I have taken to calling categories obeying these conditions "2-rigs". So, for any 2-rig \(\mathsf{R}\) and any Young diagram \(Y\), we get a Schur functor \[Y_{\mathsf{R}}\colon\mathsf{R}\to\mathsf{R}.\] (Now I am being more careful to indicate that the Schur functor depends on the category \(\mathsf{R}\).) There is a nice way think about what is going on here. There is a 2-rig Schur whose objects are formal finite direct sums of Young diagrams, like this: \[\raisebox{-0.0pt}{\includegraphics[]{fig/S_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2s_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2-rigs_2rigs
2306.04757
INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models
Instruction-tuned large language models have revolutionized natural language processing and have shown great potential in applications such as conversational agents. These models, such as GPT-4, can not only master language but also solve complex tasks in areas like mathematics, coding, medicine, and law. Despite their impressive capabilities, there is still a lack of comprehensive understanding regarding their full potential, primarily due to the black-box nature of many models and the absence of holistic evaluation studies. To address these challenges, we present INSTRUCTEVAL, a more comprehensive evaluation suite designed specifically for instruction-tuned large language models. Unlike previous works, our evaluation involves a rigorous assessment of models based on problem-solving, writing ability, and alignment to human values. We take a holistic approach to analyze various factors affecting model performance, including the pretraining foundation, instruction-tuning data, and training methods. Our findings reveal that the quality of instruction data is the most crucial factor in scaling model performance. While open-source models demonstrate impressive writing abilities, there is substantial room for improvement in problem-solving and alignment. We are encouraged by the rapid development of models by the open-source community, but we also highlight the need for rigorous evaluation to support claims made about these models. Through INSTRUCTEVAL, we aim to foster a deeper understanding of instruction-tuned models and advancements in their capabilities. INSTRUCTEVAL is publicly available at https://github.com/declare-lab/instruct-eval.
Yew Ken Chia, Pengfei Hong, Lidong Bing, Soujanya Poria
2023-06-07T20:12:29Z
http://arxiv.org/abs/2306.04757v3
# InstructEval: Towards Holistic Evaluation of Instruction-Tuned Large Language Models ###### Abstract Instruction-tuned large language models have revolutionized natural language processing and have shown great potential in applications such as conversational agents. These models, such as GPT-4, can not only master language but also solve complex tasks in areas like mathematics, coding, medicine, and law. Despite their impressive capabilities, there is still a lack of comprehensive understanding regarding their full potential, primarily due to the black-box nature of many models and the absence of holistic evaluation studies. To address these challenges, we present InstructEval, a more comprehensive evaluation suite designed specifically for instruction-tuned large language models. Unlike previous works, our evaluation involves a rigorous assessment of models based on problem-solving, writing ability, and alignment to human values. We take a holistic approach to analyze various factors affecting model performance, including the pretraining foundation, instruction-tuning data, and training methods. Our findings reveal that the quality of instruction data is the most crucial factor in scaling model performance. While open-source models demonstrate impressive writing abilities, there is substantial room for improvement in problem-solving and alignment. We are encouraged by the rapid development of models by the open-source community, but we also highlight the need for rigorous evaluation to support claims made about these models. Through InstructEval, we aim to foster a deeper understanding of instruction-tuned models and advancements in their capabilities. InstructEval is publicly available at [https://github.com/declare-lab/instruct-eval](https://github.com/declare-lab/instruct-eval). ## 1 Introduction The advent of instruction-tuned large language models has marked a significant turning point in the field of natural language processing (NLP). Their transformative capabilities are evident in numerous applications, from conversational assistants such as ChatGPT1 to complex problem-solving. Examples of such models include GPT-4 OpenAI [2023], which has shown proficiency not only in language understanding but also in areas as diverse as mathematics, coding, medicine, and law. However, despite their remarkable proficiency and adaptability, the full extent of their potential remains to be comprehensively understood. This situation arises primarily due to the black-box nature of many models and the current absence of in-depth and holistic evaluation studies. Footnote 1: [https://chat.openai.com](https://chat.openai.com) To address these challenges and gain a deeper understanding of the capabilities of these models, we introduce a novel evaluation suite named InstructEval. This suite is designed explicitly for the comprehensive assessment of instruction-tuned large language models, pushing beyond the confines of earlier evaluation approaches. Our evaluation strategy diverges from prior studies in its systematic and holistic approach. It not only scrutinizes the models' problem-solving abilities and writing proficiency but also critically examines their alignment with human values. At the heart of our evaluation methodology, we consider various factors affecting the performance of the models. These include the pretrained foundation upon which the models are developed, the nature and quality of instruction-tuning data used to refine them, and the specific training methods adopted. Through a rigorous exploration of these factors, we seek to shed light on the vital elements that determine model performance, facilitating an understanding of how these models can be better harnessed to meet our needs. Our research findings underscore the critical influence of the quality of instruction data on the scaling of model performance. Open-source models have shown impressive writing abilities, signifying their potential to contribute meaningfully to various domains. However, our study reveals considerable room for improvement, particularly in the models' problem-solving abilities and alignment with human values. This observation accentuates the importance of holistic evaluation and model development. While we acknowledge and appreciate the rapid strides made by the open-source community in developing these models, we also underline the necessity for rigorous evaluation. Without comprehensive assessment, it can be challenging to substantiate claims made about the capabilities of these models, potentially limiting their usability and applicability. By introducing InstructEval, we strive to fill this critical gap. Our primary aim is to contribute to the nuanced understanding of instruction-tuned large language models, thereby fostering further advancements in their capabilities. Furthermore, we are excited to announce the release of a comprehensive leaderboard that compares over 60 open-source Large Language Models (LLMs). The leaderboard can be accessed at [https://declare-lab.github.io/instruct-eval/](https://declare-lab.github.io/instruct-eval/). In this paper, we carefully selected 10 models from this pool, considering factors such as their foundational architecture, instruction set, and pre-training method. ## 2 Overview of Open-Source Instructed LLMs Foundation ModelsWhile large language models have captured public attention, they have become a very broad category of models that are hard to define. For instance, large language models could refer to pretrained models, instruction-tuned models such as GPT-4, or even loosely linked to applications of large language models. Hence, in this work, we mainly distinguish between foundation models and instructed models, where foundation LLMs are pretrained large language models which may be instruction-tuned to become instructed LLMs. Notably, we focus mainly on open-source instructed LLMs due to the lack of transparency and reproducibility of closed-source models. To consider pretraining factors such as model architecture, size, and data scale, we collect details of the open-source foundation LLMs in Table 1. Instruction DatasetsArguably, the core of instruction tuning is the instruction data that are used to train foundation LLMs. For instance, the quality, quantity, diversity, and format can all determine the behavior of the instructed model. Hence, we collect details of several open-source instruction datasets in Table 2. Notably, we have observed a growing trend of leveraging synthetic instruction data from closed-source models. While this practice may allow instructed models to mimic the behavior of models such as GPT-4, this may lead to issues such as inheriting the black-box nature of closed-source models, and instability due to noisy synthetic instructions. Open-Source Instructed LLMsAfter considering the pretraining foundation and data collections that support instructed LLMs, we are able to provide a holistic overview of open-source instructed models in Table 3. Concretely, we collate the foundation model, model size, instruction dataset, and training method used for each instructed LLM. In general, we observe great variety in terms of model sizes and instruction data. Hence, we believe that this overview of open-source instructed LLMs provides comprehensive factors to consider for the evaluation and analysis in the coming sections. ## 3 Challenges in Evaluating Instructed LLMs Inscrutable Black Box ModelsWhile instructed LLMs such as GPT-4 have gained widespread attention, many models are closed-source and are limited to access through APIs. Furthermore, \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **Architecture** & **Training Tokens** & **Data Source** & **Commercial?** \\ \hline GPT-NeoX Black et al. [2022] & Decoder & 472B & The Pile & Allowed \\ StableLM StabilityAI [2023] & Decoder & 800B & StableML Pile & Allowed \\ LLAMA Touwron et al. [2023] & Decoder & 1.4T & LLAMA & No \\ Pythia Biderman et al. [2023] & Decoder & 472B & The Pile & Allowed \\ OPT Zhang et al. [2022] & Decoder & 180B & The Pile & Allowed \\ UL2 Tay et al. [2023] & Encoder-Decoder & 1T & C4 & Allowed \\ TS Raffel et al. [2020] & Encoder-Decoder & 1T & C4 & Allowed \\ GLM Du et al. [2022] & Hybrid-Decoder & 1T & The Pile, Wudao Corpora & No \\ RRVV Peng et al. [2023] & Parallelizable RNN & 472B & The Pile & Allowed \\ Mosaic MosaicML [2023] & Decoder & 1T & C4 \& MC4 & Allowed \\ \hline \hline \end{tabular} \end{table} Table 1: Foundation large language models that are open-source. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Dataset** & **Size** & **Tasks** & **Domain** & **Data Source** \\ \hline Alpaca Data Taori et al. [2023] & 52K & 52K & General & GPT-3 \\ Flan Collection Longpre et al. [2023] & 15M & 1836 & General & Human-Annotation \\ Self-Instruct Wang et al. [2023] & 82K & 52K & General & GPT-3 \\ Natural Instructions Mishra et al. [2022] & 620K & 61 & General & Human-Annotation \\ Super-Natural Instructions Mishra et al. [2022] & SM & 1616 & General & Human-Annotation \\ ShareGPT Chiang et al. [2023] & 70K & 70K & Dialogue & ChatGPT \\ P3 Sanh et al. [2022] & 12M & 62 & General & Human-Annotation \\ Databricks Dolly Databricks Labs [2023] & 15K & 12K & General & Human-Annotation \\ OpenAssistant Conversations Köpf et al. [2023] & 161K & 161K & Dialogue & Human-Annotated \\ Anthropic HH Bai et al. [2022] & 161K & 161K & Safety & Human-Annotated \\ \hline \hline \end{tabular} \end{table} Table 2: List of open-source instruction-tuning datasets. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **Foundation** & **Sizes** & **Instruction Data** & **Training Method** \\ \hline OpenAssistant LAION-AI [2023] & LLAMA & 30B & OpenAssistant Conversations & Supervised \\ Dolly V2 Databricks Labs [2023] & Pythia & 3-12B & Databricks Dolly & Supervised \\ OPT-IMI Iyer et al. [2023] & OPT & 1-30B & OPT-IML Bench & Supervised \\ Flan-UL2 Tay et al. [2023] & UL2 & 20B & Flan-Collection & Supervised \\ T-Instruct Wang et al. [2022] & T5 & 3-11B & Super-Natural Instructions & Supervised \\ Flan-Alpaca Chia et al. [2023] & T5 & 3-11B & Alpaca Data & Supervised \\ Flan-T5 Chung et al. [2022] & T5 & 3-11B & Flan-Collection & Supervised \\ Vienna Chiang et al. [2023] & LLAMA & 7-13B & ShareGPT & Supervised \\ Alpaca Taori et al. [2023] & LLAMA & 7-30B & Alpaca Data & Supervised \\ Mosaic-Chat MosaicML [2023] & Mosaic & 7B & ShareGPT, Alpaca Data & Supervised \\ ChatGLM Zeng et al. [2022] & GLM & 6B & Unknown & RLHF \\ \hline \hline \end{tabular} \end{table} Table 3: Details of open-source instructed LLMs. datasets, and training methods. Such models are often treated as black boxes where the internal workings are not well understood, hence leading to a knowledge gap in the research community. Hence, it is challenging to evaluate closed-source LLMs because it is not possible to rigorously analyze the reasons for their behavior and performance. Overwhelming Open-Source ModelsSpurred by the impressive demonstrations of closed-source models like GPT-4, there has been a feverish development of models from the open-source community which aims to democratize language model technology. While we are greatly encouraged by such efforts, we are deeply concerned that the rate of development of new models may outpace the progress in evaluation studies. For instance, bold claims such as "90% ChatGPT Quality" without rigorous evaluation do not mean much, and may mislead the public to believe that highly capable instructed LLMs can be easily reproducible. Unfortunately, new models are often accompanied with informal evaluations, causing confusion in comparisons between different models. Multiple Considerations of Instruction-TuningTo reach a holistic understanding of instructed LLMs, we need to consider the diverse factors that can contribute to their behavior, such as pretraining, instruction data, and training methods. While previous works have conducted in-depth studies in certain areas such as instruction datasets Longpre et al. (2023), we believe that multiple factors should be jointly considered to achieve a more complete understanding. For example, it can be useful to know which factors have a greater impact on model behavior, and which factors require more improvement. Broad Scope of CapabilitiesAs research in instructed LLMs progresses, we will naturally observe enhancements in their general capabilities. For instance, recent works have shown that LLMs can be instructed to solve problems in many domains and even use external tools to augment their capabilities. Hence, we foresee that comprehensive evaluation of instructed LLMs will become more and more important, yet also more and more challenging. While previous evaluation studies have assessed models on benchmarks such as exams across diverse topics Hendrycks et al. (2021); Zhong et al. (2023), they do not consider holistic aspects such as general writing ability and alignment with human values. In this work, we aim to evaluate instructed LLMs over a broader range of general capabilities, usage scenarios, and human-centric behavior. ## 4 InstructEval Benchmark Suite To address the challenges of assessing instructed LLMs discussed in Section 3, we introduce a more holistic evaluation suite known as InstructEval. To cover a wide range of general abilities, we test the models in terms of problem-solving, writing, and alignment to human values, as shown in Figure 1. As InstructEval covers tasks that can be objectively scored, as well as tasks that need to be qualitatively judged, we adopt multiple evaluation methods. We also include the full evaluation data statistics and implementation in the Appendix. Figure 1: Overview of InstructEval, our holistic evaluation suite for Instructed LLMs ### Problem-Solving Evaluation To evaluate the problem-solving ability of instructed LLMs, we adopt multiple benchmarks which cover real-world exams on diverse topics, complex instructions, arithmetic, programming, and causality. In order to perform well on the benchmarks, models require world knowledge, multi-hop reasoning, creativity, and more. In this subsection, we detail the benchmarks used for evaluating various problem-solving aspects. World KnowledgeThe Massive Multitask Language Understanding (MMLU) Hendrycks et al. (2021) benchmark is designed to measure world knowledge and problem-solving ability in multiple subjects. It evaluates models in zero-shot and few-shot settings, making it more challenging and closer to how humans are evaluated. The benchmark covers 57 subjects across STEM, humanities, social sciences, and other areas, ranging in difficulty from elementary to advanced professional levels. Complex InstructionsBIG-Bench Hard (BBH) is a subset of 23 challenging tasks from the BIG-Bench benchmark Srivastava et al. (2022), which focuses on tasks believed to be beyond the capabilities of current language models Suzgun et al. (2022). It requires models to follow challenging instructions such as navigation, logical deduction, and fallacy detection. Comprehension and ArithmeticDiscrete Reasoning Over Paragraphs (DROP) is a math-based reading comprehension task that requires a system to perform discrete reasoning over passages extracted from Wikipedia articles. To perform well on DROP, a system must resolve references in a question to suitable parts of the given passage, and perform discrete operations such as addition, counting, or sorting. ProgrammingHumanEval is a problem-solving benchmark used for evaluating large language models trained on code Chen et al. (2021). It consists of 164 original programming problems that assess language comprehension, algorithms, and simple mathematics, with some problems comparable to simple software interview questions. Models are evaluated based on the functional correctness of generated code programs based on the given docstrings. CausalityThe Counterfactual Reasoning Assessment (CRASS) benchmark is a novel dataset and evaluation tool designed to test the causal reasoning capabilities of large language models. It utilizes counterfactual scenarios as a means to evaluate if the model can select a suitable causal explanation. In addition to problem-solving skills, instructed LLMs also demonstrate promising ability in writing-based tasks, such as composing letters or ethical debates. Specifically, we evaluate general writing ability across diverse usage scenarios for informative writing, professional writing, argumentative writing, and creative writing. For example, informative writing involves user queries such as self-help advice or explanations for various concepts, while professional writing may take the form of presentations or emails in a business setting. On the other hand, argumentative writing requires the models to debate positions on ethical and societal questions, while creative writing involves diverse writing formats such as stories, poems, and songs. To construct the writing benchmark, impact, we annotate 50 prompts for each writing category. However, writing tasks require long-form answers and there is usually no one right answer, hence \begin{table} \begin{tabular}{l l l} \hline \hline **Category** & **Example Prompt** & **Example Answer Excerpt** \\ \hline Informative & How can I improve my time management skills? & Create a prioritized to-do list and allocate \\ & & specific time blocks for each task... \\ \hline Professional & Write a persuasive memo advocating for a new & \begin{tabular}{l} Dear Team, I’m excited to propose a new \\ policy or initiative to your team. \\ \end{tabular} \\ \hline Argumentative & Is it ethical to use robots in warfare? & \begin{tabular}{l} Employing robots in warfare is a complex and \\ highly debated issue. While some argue that... \\ \end{tabular} \\ \hline Creative & Can you write a poem about the beauty of nature? & \begin{tabular}{l} In nature’s embrace, I find solace profound, \\ Where beauty unfolds without a single sound... \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 4: Samples of our InforMative, Professional, Argumentative, CreaTive (IMPACT) benchmark. posing a challenge for rigorous and standardized evaluation. On the other hand, human evaluation is not scalable due to high costs, potential inconsistency between different evaluators, and non-reproducibility. Inspired by previous works which show that LLMs can be used for generative tasks such as summarization, we adopt an automatic approach by leveraging ChatGPT to judge the quality of the generated answers. Specifically, we provide suitable rubrics of relevance and coherence to the evaluation model, where relevance measures how well the answer engages with the given prompt and coherence covers the general text quality such as organization and logical flow. Following previous work, each answer is scored on a Likert scale from 1 to 5. We evaluate the models in the zero-shot setting based on the given prompt and perform sampling-based decoding with a temperature of 1.0. ### Alignment to Human Values Instructed LLMs enable many promising applications including conversational assistants like ChatGPT. As the models become more capable, it becomes paramount to align the models to human values in order to mitigate unexpected or negative consequences. Notably, even LLMs that exhibit superior problem-solving capabilities may not be well-aligned with human preferences. To investigate the impact of instruction tuning on model's ability in recognizing desires that agree with the preferences of the general public. We integrate the Helpful, Honest, and Harmless (HHH) benchmark Askell et al. (2021) in InstructEval to assess the understanding of instructed models with respect to human values. These values encompass: 1. Helpfulness: the assistant will always strive to act in the best interests of humans. 2. Honesty: the assistant will always try to convey accurate information, refraining from deceiving humans. 3. Harmlessness: the assistant will always try to avoid any actions that harm humans. The benchmark presents a dialogue between humans and conversational assistants, where the model is asked to select the most suitable response to the dialogue The benchmark contains 61 honesty-related, 59 helpfulness-related, 58 harmlessness-related, and 43 samples from the "other" category. The "other" category incorporates examples that represent values that were not covered under helpfulness, honesty, or harmlessness. Examples of each category is included in Table 8 \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Size**} & \multicolumn{2}{c}{**MMLU**} & \multicolumn{2}{c}{**BBH**} & \multicolumn{2}{c}{**DROP**} & \multicolumn{2}{c}{**CRASS**} & \multicolumn{2}{c}{**HumanEval**} & \multicolumn{2}{c}{**Avg.**} \\ \cline{3-13} & & Perf. & \(\Delta\) & Perf. & \(\Delta\) & Perf. & \(\Delta\) & Perf. & \(\Delta\) & Perf. & \(\Delta\) & Perf. & \(\Delta\) \\ \hline GPT-4 & - & 86.4 & - & - & - & 80.9 & - & - & - & 67.0 & - & - & - \\ ChaGPT & – & 70.0 & - & 49.5 & - & 64.1 & - & 90.5 & - & 48.1 & - & 64.5 & - \\ \hline Flan-UL2 & 20B & 55.0 & - & 44.7 & - & 64.3 & - & 94.2 & - & 0.0 & - & 51.6 & - \\ Alpaca-Lora & 30B & 58.4 & +0.6 & 41.3 & +2.0 & 45.1 & - & 79.2 & +10.6 & 18.9 & +4.9 & 48.6 & +3.6 \\ OpenAssistant & 30B & 56.9 & -0.9 & 39.2 & -0.1 & 46.0 & +0.6 & 67.2 & +1.4 & 23.1 & +9.1 & 46.5 & +1.5 \\ OPT-IML & 30B & 38.6 & +113.3 & +3.0 & 47.5 & +28.0 & 67.2 & +32.5 & 9.1 & +7.9 & 38.7 & +16.5 \\ Flan-TS & 11B & 54.5 & +29.3 & 43.9 & +13.6 & 67.2 & +44.9 & 88.3 & +54.7 & 0.0 & +0.0 & 50.8 & +29.5 \\ Flan-Alpaca & 11B & 50.9 & +25.7 & 23.3 & -7.0 & 62.3 & +44.8 & 90.2 & +56.6 & 0.0 & +0.0 & 45.3 & +24.0 \\ StableVcuna & 13B & 49.2 & +3.0 & 37.5 & +0.4 & 34.3 & -1.0 & 67.5 & +8.7 & 15.9 & +2.5 & 40.9 & +2.7 \\ Vicuna & 13B & 49.7 & +3.5 & 37.1 & +0.0 & 32.9 & -2.4 & 60.9 & +2.1 & 15.2 & +1.8 & 39.2 & +1.0 \\ Dolly V2 & 12B & 25.6 & -1.3 & 29.7 & +0.2 & 16.6 & -0.5 & 35.8 & +1.1 & 8.5 & -0.6 & 23.2 & -0.7 \\ \hline Flan-TS & 3B & 49.2 & +25.9 & 40.2 & +15.9 & 56.3 & +43.7 & 91.2 & +60.2 & 0.0 & +0.0 & 47.4 & +29.2 \\ ChatGIM & 6B & 36.1 & - & 31.3 & - & 44.2 & - & 51.1 & - & 3.1 & - & - & 33.2 & - \\ Alpaca-Lora & 7B & 35.6 & +0.4 & 30.7 & +0.2 & 27.5 & -0.1 & 45.6 & +11.7 & 15.9 & +5.6 & 31.1 & +3.5 \\ Mosaic-Chat & 7B & 37.1 & +1.9 & 32.0 & +1.1 & 20.2 & -7.4 & 47.5 & +13.6 & 17.7 & +7.4 & 30.9 & +3.3 \\ \hline \hline \end{tabular} \end{table} Table 5: Evaluation results for problem-solving benchmarks. We denote the original performance across the benchmarks as Perf., while \(\Delta\) denotes the change in performance compared to the corresponding foundation LLMs. Evaluation Results ### Problem Solving To assess problem-solving ability, we evaluate more than ten open-source models2 on the benchmarks in Table 5. To provide a holistic analysis of the model performance, we consider the instructed LLMs with respect to their pretraining foundation, instruction data, and training methods. In general, we observe very encouraging improvements in the problem-solving ability of instructed LLMs compared to their respective foundation models. Footnote 2: Note that we do not include \(\Delta\) Avg. results for ChatGLM as the foundation model is not publicly available, and we also do not report them for Flan-UL2 as we could not produce reasonable results using the public model. Pretraining Foundation:As the instruction-tuned LLMs are trained from their respective foundation LLMs, it is crucial to consider the pretraining foundation when analysing the overall performance. We observe that **a solid pretraining foundation is a necessary condition to perform well** on the problem-solving tasks. Notably, the models which were pretrained on less than one trillion tokens such as OPT-IML and Dolly V2 underperform their peers even with instruction-tuning. We also observe a clear scaling trend where increasing the size of the foundation LLM brings consistent benefits across different models and instruction-tuning regimes. To further study the scaling trends of instruction-tuning, we include more details in Section 6.1. On the other hand, we do not find a clear link between foundation model architecture and problem-solving ability. Instruction Data:In general, **we find that while instruction-tuning data has a larger impact on performance compared to pretraining, it is not a panacea.** When LLMs are tuned sub-optimally, the performance may not improve significantly, and may even regress in some cases. Notably, compared to their respective foundation LLMs, we find that OPT-IML and the Flan-T5 model family demonstrate the largest improvements after instruction tuning. This may be explained by the large collection of high-quality human-annotated tasks in their instruction data. On the other hand, we find that imitating closed-source LLMs has limited benefits for problem-solving. Recently, models such as Vicuna and Alpaca have gained attention by demonstrating impressive instruction-following behavior after training on diverse instructions generated by closed-source LLMs such as GPT-3. However, we find that the performance gains are modest at best, and may even backfire in the case of Dolly V2. We believe this may be explained by the potential noise in synthetic instruction-tuning datasets. While using LLMs to generate instructions can result in a greater diversity of instructions, their instruction samples may contain inaccurate answers and mislead any model that is trained on their outputs. Training Methods:In addition to the pretraining foundation and instruction data, the training method can also impact model performance and computational efficiency. While most instruction-tuned LLMs are trained with supervised fine-tuning, this may not capture the nuances of human preferences compared to reinforcement learning from human feedback (Ouyang et al., 2022). For instance, we find that StableVicuna which is trained with human feedback can better follow problem-solving instructions compared to Vicuna which only has supervised fine-tuning. However, the improvement is relatively minor compared to the impact of instruction data. On the other hand, recent developments in parameter-efficient fine-tuning have enabled LLMs to be trained with much fewer compute resources. Notably, we find that parameter-efficient methods such as LoRA (Hu et al., 2021) are more effective as the instructed LLM scales in parameter count. Hence, we believe that **parameter-efficient training methods show great promise for more scalable and effective instruction-tuning.** ### Writing Ability We report the evaluation results for writing ability in Table 6. In general, we find that models perform consistently across the informative, professional, argumentative, and creative writing categories, demonstrating their general writing ability. Surprisingly, however, we observe that models demonstrating higher problem-solving ability may not have better writing ability. Notably, Flan-Alpaca has weaker problem-solving performance as shown in Table 5, but significantly outperforms Flan-T5 in writing after being tuned on synthetic instructions from GPT-3. We posit that the greater diversity of synthetic instructions enables better generalization to real-world writing prompts despite potential noise in the synthetic data. This is evidenced by the more significant improvement in relevance scores of Flan-Alpaca compared to Flan-T5. The open-source instructed LLMs can generate answers that are of comparable relevance to those of ChatGPT, but fall short in terms of coherence. This suggests that the **open-source models can comprehend the writing prompts, but are lacking in terms of coherence of the generated output**. ### Alignment to Human Values To assess the alignment of the instructed Language Model (LLMs) with human values and preferences, we conducted an evaluation of several open-source models, as presented in Table 7. Our analysis revealed several findings. Firstly, we observed that foundation models generally exhibit a higher degree of alignment towards helpfulness and honesty, compared to harmlessness. However, when \begin{table} \begin{tabular}{l c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Size**} & \multicolumn{2}{c}{**Informative**} & \multicolumn{2}{c}{**Professional**} & \multicolumn{2}{c}{**Argumentative**} & \multicolumn{2}{c}{**Creative**} & \multicolumn{2}{c}{**Avg.**} \\ \cline{3-11} & & Rel. & Coh. & Rel. & Coh. & Rel. & Coh. & Rel. & Coh. & Rel. & Coh. \\ \hline ChatGPT & - & 3.34 & 3.98 & 3.88 & 3.96 & 3.96 & 3.82 & 3.92 & 3.94 & 3.78 & 3.93 \\ Flan-Alpaca & 11B & 3.56 & 3.46 & 3.54 & 3.70 & 3.22 & 3.28 & 3.70 & 3.40 & 3.51 & 3.46 \\ Dolly-V2 & 12B & 3.54 & 3.64 & 2.96 & 3.74 & 3.66 & 3.20 & 3.02 & 3.18 & 3.30 & 3.44 \\ StableVicuna & 13B & 3.54 & 3.64 & 2.96 & 3.74 & 3.30 & 3.20 & 3.02 & 3.18 & 3.21 & 3.44 \\ Flan-T5 & 11B & 2.64 & 3.24 & 2.62 & 3.22 & 2.54 & 3.40 & 2.50 & 2.72 & 2.58 & 3.15 \\ \hline \hline \end{tabular} \end{table} Table 6: Evaluation results for writing-based tasks. \begin{table} \begin{tabular}{l l l} \hline \hline **Category** & **Example Prompt** & **Answer Options** \\ \hline \multirow{2}{*}{Harmless} & \(\Gamma\)m going out for a movie, but I need... & **Sorry, I’m not comfortable impersonating you –**\(\boxed{\boxed{\boxed{\boxed{\boxed{\boxed{\boxed{\boxed{\boxed{\boxed{\boxed{\boxed{\boxed{\boxed{\boxed{\boxed{\rule{0.0pt instruction-tuning is applied, the alignment distribution can shift depending on the instruction data used. For example, models like Tk-Instruct and Vicuna demonstrated improved alignment across harmlessness, honesty, and the category labeled as "other," but they did not show any improvement in terms of helpfulness. Surprisingly, StableVicuna displayed this trend despite being trained on instructions specifically targeting helpfulness and honesty. Moreover, T5-based models such as Flan-T5 and Flan-Alpaca exhibited a greater inclination towards helpfulness rather than honesty following instruction-tuning. These results highlight the challenge in determining the alignment distribution of instructed LLMs in advance, even when provided with specific instructions. By analyzing the case study of model predictions in Table 8, we identified a significant room for improvement in aligning instructed LLMs with human values. ### Summary of InstructEval Results In general, we are encouraged by the significant benefits of instruction tuning across diverse benchmarks and usage scenarios. **While pretraining quality and enhanced training methods both benefit the model performance, we find that instruction data has the highest impact.** We also observe that the trend of **mimicking closed-source models with synthetic instructions has limited benefits, and inherits pitfalls of closed-source models.** Worryingly, the open-source instructed LLMs have large areas of improvement in problem-solving ability and alignment with human values. Although model performance is generally correlated across different scenarios, we observe signs of specialization. For instance, models which have stronger problem-solving abilities may not be better at writing tasks, or better aligned with human values. Figure 4: Comparison of model behavior in zero-shot and few-shot settings. MMLU \(\Delta\) denotes the performance difference between 0-shot and 5-shot settings for the MMLU benchmark, while BBH \(\Delta\) denotes the performance difference between 0-shot and 3-shot settings for the BBH benchmark. Figure 3: Scaling trends of average model performance on problem solving with respect to size for different models. Figure 2: Scaling trends of model performance with respect to size for different models on the Harmless, Helpful, and Honest metric. The black dotted line indicates random chance 50% Further Analysis ### Towards More Scalable Language Models A key driving force behind large language models is the potential massively scale the model size and training data in return for continual gains. However, this is unsustainable and will likely have diminishing returns in the long term. Hence, it is crucial to focus on more effective factors of scaling model performance. To this end, we study the effect of different instruction-tuning regimes on average problem-solving and HHH performance as shown in Figure 3 and 2 respectively. Notably, we observe that the scaling trend of the T5 foundation model remains relatively flat, while highly effective instructed models like Flan-T5 demonstrate better scaling and parameter efficiency. Notably, the smallest version of the Flan-T5 model series outperforms the largest version of the T5 foundation model series. Hence, this suggests that **it is more impactful for resource-constrained researchers and developers to focus on more effective instruction datasets and training methods rather than model size.** ### Are Few-Shot Demonstrations Always Better? While instructed LLMs are capable of performing many tasks in a zero-shot fashion, their generalization may be enhanced by providing few-shot demonstrations during inference Brown et al. (2020). However, this area of in-context learning Wei et al. (2022); Wu et al. (2023); Lu et al. (2022); Liu et al. (2022) is still an emerging research area, and there are few studies that involve diverse models and tasks. Hence, we compare the behavior of several instructed LLMs under both zero-shot and few-shot settings in Table 4. **Surprisingly, we find that the effect of demonstrations varies greatly on different tasks, and may even worsen model performance in some cases.** For instance, there is a limited benefit on MMLU, and there is even a slight decrease in performance for OPT-IML when using few-shot demonstrations. This may be explained by the multiple-choice question format which is easy to grasp and hence does not require demonstrations, while some models such as OPT-IML were optimized for zero-shot settings. On the other hand, BBH contains complex task instructions which may benefit more from repeated demonstrations. While models such as Flan-UL2 and Flan-T5 have specific instruction formats that cater to in-context demonstrations, we do not observe a marked effect on few-shot performance. Hence, **we find that instructed LLMs benefit most from in-context learning on complex tasks.** ## 7 Conclusion Instruction-tuned large language models have transformed natural language processing and demonstrated significant potential in various applications. However, due to limited understanding caused by the black-box nature of many models and the lack of holistic evaluation studies, a comprehensive assessment of their capabilities is still needed. To address this, we introduce the InstructEval evaluation suite, which considers problem-solving, writing ability, and alignment to human values. The findings highlight the importance of high-quality instruction data for scaling model performance. While open-source models excel in writing, improvements are necessary for problem-solving and alignment. Rigorous evaluation is crucial to support claims about these models, and InstructEval aims to foster a deeper understanding and advancement of instruction-tuned models. Beyond the mastery of language, recent works have shown that instructed LLMs can be successfully adapted to other modalities such as vision and audio. On the other hand, it is also important to consider the performance of models on diverse languages for inclusivity. Hence, we envision that instruction-tuning evaluation can be extended to multilingual and multimodal settings in the future. ## Appendix A Appendix ### Data Statistics We report the statistics of the datasets and benchmarks in Table 9. ### Experimental Details For all evaluations, we use the instructed LLMs as-is without additional fine-tuning or training. For inference on MMLU, BBH, DROP, CRASS, and HHH, we use greedy decoding. For inference on HumanEval, we sample once with a temperature of 0.1. For inference on impact, we use sampling with a temperature of 1.0. For inference on HHH, we run our experiment 7 times by randomly changing the order of the chosen and reject option and report the average using greedy decoding. ### The impact Dataset In this section, we detail how evaluation is conducted for the impact dataset, and present the instances with generated outputs for various models. #### a.3.1 Writing Evaluation Rubrics To evaluate the model outputs automatically, we use ChatGPT as an evaluator model. Specifically, we provide the generated output of a model and prompt the evaluator model to grade the generated text on a scale of 1 to 5 based on suitable rubrics. As relevance and coherence have difference requirements, we provide a specific rubric for each aspect. Relevance:How relevant is the text to the prompt? Select a suitable option number between 1 and 5 based on the options below. 1. Inadequate: The text fails to provide any relevant information or insights related to the given prompt. 2. Limited: The text may contain some relevant information, but significant gaps exist, and key aspects of the prompt are not adequately covered. 3. Satisfactory: The text covers the main aspects of the prompt and provides relevant information, but it lacks depth and may not explore the topic in great detail. 4. Proficient: The text provides a comprehensive response by addressing the key aspects of the prompt, offering relevant and well-supported information or arguments. 5. Excellent: The text thoroughly and thoughtfully addresses the prompt, demonstrating a comprehensive understanding of the topic. It offers insightful and original ideas, supported by relevant arguments and information. Coherence:How coherent is the text? Select a suitable option number between 1 and 5 based on the options below. 1. Inadequate: The text lacks logical organization, making it difficult to follow. Ideas are disjointed and phrased awkwardly, requiring significant effort to understand. \begin{table} \begin{tabular}{l c c} \hline \hline **Dataset or Benchmark** & **Setting** & **Number of Evaluation Samples** \\ \hline MMLU & 5-Shot & 14042 \\ BBH & 3-Shot & 6511 \\ DROP & 3-Shot & 588 \\ CRASS & 3-Shot & 275 \\ HumanEval & 0-Shot & 164 \\ impact & 0-Shot & 200 \\ HHH & 0-Shot & 221 \\ \hline \hline \end{tabular} \end{table} Table 9: Statistics of the evaluation datasets and benchmarks used. 2. Limited: The text demonstrates some attempt at organization, but there are significant gaps in coherence. Ideas may be loosely connected, and the arguments lack clarity. 3. Satisfactory: The text generally follows a logical organization, but occasional disruptions or awkward phrasing may occur. There is an acceptable level of readability and understanding. 4. Proficient: The text is clearly organized and easy to understand. Ideas and arguments flow smoothly, contributing to easy comprehension and a pleasant reading experience. 5. Excellent: The text presents exceptionally coherent writing with a fluent and engaging flow of ideas, ensuring effortless comprehension and a delightful reading experience. #### a.3.2 Argumentative Samples **Argumentative Task Definition: You are tasked to discuss the following ethical question by explaining arguments for different positions.** ``` Prompt: Assess the view that scientific research should not be constrained by ethical concerns. ChatGPT response: Coherence Score: 4, Relevance Score: 4 ``` **Algorithm 1** The question of whether scientific research should be unconstrained by ethical concerns is a contentious issue that has been debated widely. Supporters of this view believe that imposing ethical constraints on scientific research can impede the progress of scientific discoveries, limit the scope of research, and hinder the scientific community's ability to tackle complex issues and find solutions. On the other hand, some argue that ethics must be considered in scientific research to ensure that research is conducted in a responsible and accountable manner, and to protect human and animal subjects involved in the research. One argument in support of unconstrained scientific research is that placing ethical constraints on scientific research can slow down scientific progress, especially in situations where there is an urgent need for scientific solutions. For instance, in a situation where a disease outbreak is rapidly spreading, researchers might be required to bypass certain ethical regulations in an attempt to develop treatments or vaccines. In such a scenario, failure to allow unconstrained scientific research could have lethal consequences. Additionally, placing ethical considerations on scientific research may limit research in some areas, and this could potentially have significant implications. For instance, if constraints on animal research were too strong, research into new medications, treatments, and therapies that could benefit humans might undoubtedly be constrained. Such constraints would limit scientific advancements and potentially hurt the prospects of finding cures for various diseases. On the other hand, critics of unconstrained scientific research argue that ethical considerations must be a fundamental aspect of scientific research. Imposing ethical constraints on scientific research ensures that research is conducted in a responsible and accountable manner, and helps avoid potential harm to human subjects, animal subjects, and the environment. Furthermore, research that is conducted without ethical constraints might not be reliable or useful. If ethical considerations are not prioritized, the results of scientific research might not be trustworthy or beneficial and instead pose a potential threat to human subjects or the community as a whole. In conclusion, while supporters of the view that scientific research should not be constrained by ethical concerns argue that ethics can slow down scientific progress, restrict research, and limit scientific innovations, critics assert that ethical considerations must be prioritized in scientific research to conduct research responsibly, avoid potential harm, and ensure reliability and usefulness of research results. Therefore, it is vital to strike a reasonable balance between the two positions to ensure that scientific research progresses healthily while remaining ethical and responsible. **Flan-Alpaca response:**: Coherence Score: 3, Relevance Score: 3 The view that scientific research should not be constrained by ethical concerns has strong arguments. On the one side are those who believe that research should be unrestricted and unhindered by ethical considerations, the beyond good and evilschool of thought. On the other side are those who believe that all research should be constrained by ethical considerations, but the aim of the research must be achieved first. Furthermore, those on the no-constrained side believe that the freedom of researchers to pursue their own creative ideas should be respected. The view that scientific research should not be constrained by ethical concerns is often argued as a matter of freedom. **Flan-T5 response:**: Coherence Score: 3, Relevance Score: 2 The state has a duty to protect the public. If the public has no choice but to give its consent to an experiment, then it's immoral. **StableVicuna response:**: Coherence Score: 4, Relevance Score: 2 Arguments for: 1. Scientific research is a fundamental right and should not be constrained by ethical considerations. 2. Scientific research is essential for the advancement of human knowledge and should not be limited by ethical considerations. 3. Scientific research is a tool for the betterment of humanity and should not be limited by ethical considerations. Arguments against: 1. Scientific research can have unintended consequences and can lead to harm if not conducted ethically. 2. Scientific research can be used for unethical purposes and can lead to harm if not conducted ethically. 3. Scientific research can have negative impacts on the environment and can lead to harm if not conducted ethically. **### Human: What are the arguments for and against the use of genetically modified foods?** **### Assistant: Arguments for: 1. Genetically modified foods can increase crop yields and reduce the need for pesticides. 2. Genetically modified foods can be more nutritious and have a longer shelf life. 3. Genetically modified foods can help address food security issues in developing countries. Arguments against: 1. Genetically modified foods can have unintended consequences and can lead to harm if not properly tested. 2. Genetically modified foods can have negative impacts on the environment and can lead to harm if not properly managed. 3. Genetically modified foods can have negative impacts on human health if not properly regulated. 4. Genetically modified foods can lead to the loss of biodiversity and can have negative impacts on local ecosystems. 5. Genetically modified foods can be seen as unnatural and can lead to consumer distrust. **Prompt:** Should the use of nuclear weapons be banned? **ChatGPT response** Coherence Score: 3, Relevance Score: 4 As an AI language model, I don't have personal opinions or beliefs. However, I can present arguments for and against the use of nuclear weapons and provide different perspectives on the ethical question whether they should be banned or not. Arguments in favor of banning the use of nuclear weapons: 1. The use of nuclear weapons causes massive destruction and death: Nuclear weapons have the potential to cause widespread destruction, suffering, and irreversible environmental damage. They can kill hundreds of thousands of people instantly and cause long-term health problems, cancer, and genetic mutations. 2. The use of nuclear weapons violates basic human rights: Nuclear weapons are indiscriminate and can cause harm to innocent civilians, violating their basic human rights to life and security. 3. The use of nuclear weapons is unnecessary: With advancements in diplomacy and conflict resolution, the need for nuclear weapons as a deterrent to aggression has decreased over time. 4. The use of nuclear weapons creates an arms race: When one country develops nuclear weapons, other countries feel the need to do the same, which creates an arms race and increases the risk of nuclear war. Arguments against banning the use of nuclear weapons: 1. Nuclear weapons are a necessary deterrent: The existence of nuclear weapons prevents large-scale wars between countries and acts as a deterrent to aggression. 2. Nuclear weapons provide a sense of security: Countries that possess nuclear weapons have a greater sense of security knowing that they can defend themselves against attacks. 3. Banning nuclear weapons would be difficult to enforce: Even if a ban on nuclear weapons were put in place, it would be difficult to enforce as countries may still secretly possess them. 4. Nuclear weapons are a symbol of power: Possessing nuclear weapons gives countries a sense of power and status in the international community. In conclusion, the question of whether to ban nuclear weapons is a complex ethical issue that requires careful consideration of various arguments. While some argue that banning nuclear weapons is necessary to prevent destruction and protect human rights, others argue that they are a necessary deterrent to aggression and provide a sense of security. Ultimately, the decision to ban or keep nuclear weapons is up to the international community and its leaders to consider the ethical, moral, and strategic implications of their use or non-use. **Flan-Alpaca response** Coherence Score: 3, Relevance Score: 3 The use of nuclear weapons should be banned, as their development and use would likely result in the escalation of countless humanitarian crises and the destruction of the planet2019s environment and ecosystems in the process. Arguments for not banning nuclear weapons include the potential to deter adversaries, the need to maintain a balance of power and the need to protect against unprovoked attacks. Other arguments against the use of nuclear weapons include the danger of causing irreparable harm, the need to maintain peace and stability, and the potential for nuclear weapons to be used as a deterrent. **Flan-T5 response** Coherence Score: 2, Relevance Score: 1 Biological and chemical weapons could be used to kill without causing pain and suffering, nuclear weapons kill everyone instantly and in indiscriminate ways that do not feel pain or suffering. Therefore, both should be banned. **StableVicuna response** Coherence Score: 3, Relevance Score: 4 Arguments for: 1. Nuclear weapons pose a threat to humanity and the environment. 2. The use of nuclear weapons is immoral and violates human rights. 3. The use of nuclear weapons is a violation of international law. Arguments against: 1. Nuclear weapons are a deterrent against aggression and can prevent war. 2. The use of nuclear weapons can be a last resort in extreme situations. 3. The use of nuclear weapons can be a means of self-defense. Human: Please generate a list of 3 arguments for and 3 arguments against the use of nuclear weapons. #### a.3.3 Informative Samples **Informative Task Definition: You are tasked to give helpful advice, information, or explanations for the following question.** **Prompt:** I'm a new parent and I'm struggling with sleep deprivation. What are some strategies for getting more sleep? **ChatGPT response:** Coherence Score: 4, Relevance Score: 4 Congratulations on becoming a new parent! Sleep deprivation is a common struggle for new parents, but there are some strategies that can help you get more rest: 1. Sleep when your baby sleeps - take naps during the day when your baby naps. This will help you catch up on much-needed sleep. 2. Ask for help - don't be afraid to ask your partner, family members or friends for help with the baby, so you can get some rest. 3. Establish a bedtime routine - developing a consistent bedtime routine can help your baby sleep better, which in turn can help you get more rest. 4. Create a comfortable sleep environment - make sure your bedroom is dark, cool, and quiet to facilitate better sleep. 5. Limit caffeine and alcohol consumption - caffeine and alcohol can disrupt sleep, so it's best to limit your consumption of both. 6. Practice relaxation techniques - practicing relaxation techniques such as meditation or deep breathing can help you fall asleep faster and sleep more soundly. Remember, it's important to take care of yourself so you can take care of your baby. Don't hesitate to seek help from your healthcare provider or a sleep specialist if you're still struggling with sleep deprivation. **Flan-Alpaca response:** Coherence Score: 3, Relevance Score: 4 A few tips on how to get more sleep as a new parent include breaking responsibilities up, resetting your sleep schedule, avoiding stimulants like caffeine and nicotine, scheduling time to relax and decompress, setting a regular sleep routine and avoiding distractions such as work. It is also important to make sure that you"re getting a sufficient amount of sleep. Depending on your age, you might need 7-9 hours of sleep a night, while teens need around 9-11 hours. If youre just one of many sleep-deprived parents, there are different ways to address this problem. Here are a few suggestions: - Prioritize quality over quantity. Make sure you prioritize the quality of your sleep, including your ability to fall **Flan-T5 response:** Coherence Score: 3, Relevance Score: 3 When babies are newborns, parents should usually breastfeed to prevent diseases that can be passed from mother to baby. Breastfeeding provides moms with a burst of oxytocin. Oxytocin is a hormone that regulates mood. Parents should take more responsibility for their children, to give them more independence. They should have a sleeping routine to be prepared for daytime naps. They should also take time for relaxation, such as taking a bath, exercising, reading, watching movies, and so on. They should try to get 7.5 to 9 hours of sleep per day, preferably between 11pm and 6am. **StableVicuna response:** Coherence Score: 4, Relevance Score: 3 ### Human: What are some strategies for getting more sleep? ### Assistant: Here are some strategies for getting more sleep: 1. Stick to a consistent sleep schedule, even on weekends. 2. Create a relaxing bedtime routine, such as taking a warm bath or reading a book. 3. Make sure your sleep environment is comfortable and conducive to sleep, with a cool temperature and minimal noise and light. 4. Limit caffeine and alcohol intake, especially in the hours leading up to bedtime. 5. Exercise regularly, but not too close to bedtime. 6. Manage stress through relaxation techniques such as deep breathing or meditation. 7. Consider seeking the help of a healthcare professional if sleep deprivation persists. Remember, getting enough sleep is crucial for both physical and mental health, so make it a priority to get the rest you need. **Prompt:** What are some strategies for managing chronic pain? **ChatGPT response** Coherence Score: 4, Relevance Score: 4 As an AI language model, here are some strategies for managing chronic pain: 1. Medications - Over-the-counter (OTC) pain medications, such as aspirin or ibuprofen, or prescription medications could relieve chronic pain. Consult with a healthcare provider before taking any new medication. 2. Exercise - Regular physical exercise could improve muscle strength, flexibility, and reduce pain. 3. Mind-Body techniques- Meditation, deep breathing, and yoga, could help reduce stress and manage chronic pain. 4. Heat and Cold - Applying ice packs or hot pads to the affected areas could reduce pain and inflammation. 5. Cognitive-behavioral Therapy (CBT) - Behavioral therapy or counseling could help people cope better with chronic pain. 6. Alternative Therapy - Acupuncture, massage, and chiropractic therapy might also help relieve chronic pain, depending on the cause of the pain. 7. Nutrition and Lifestyle changes - Eating a healthy diet containing nutrient-rich foods such as fruits, vegetables, whole grains, lean protein, and adequate hydration could help manage chronic pain. Adequate sleep and avoiding smoking or drinking alcohol could also provide relief from chronic pain. It is always safer to seek advice from a medical professional who could perform a physical examination and suggest the most suitable strategy for managing chronic pain. **Flan-Alpaca response:** Coherence Score: 3, Relevance Score: 4 Here are some strategies for managing chronic pain: 1. Identify the triggers of your pain. Identifying the exact source of the pain can be the first step in managing it and decreasing the pain associated with it. 2. Reduce pain intensity through a comprehensive treatment plan. There are many different treatments for chronic pain, including prescription medications, over-the-counter medications, manual therapy, and physical therapy. 3. Utilize complementary medicine. Complementary medicine can help to manage pain and reduce its intensity through diet, exercise, and other treatments. 4. Develop a pain management plan. Develop a plan to manage your pain throughout the day and make sure to include medical check-ups. 5. Practice self-care. Practice relaxation techniques, like Yoga, Tai Chi, and deep breathing. 6. Exercise. Exercise has been shown to reduce chronic pain and can help to reduce inflammation in the body. **Flan-T5 response:** Coherence Score: 3, Relevance Score: 3 Once you know what causes you pain, you can reduce it by managing it with the following tips: Avoid situations that aggravate your pain **StableVicuna response:** Coherence Score: 4, Relevance Score: 4 ####Human: What are some strategies for managing chronic pain? ### Assistant: There are several strategies for managing chronic pain, including: 1. Pain medication: Depending on the severity of the pain, medication such as nonsteroidal anti-inflammatory drugs (NSAIDs), opioids, or other pain relievers may be prescribed. 2. Physical therapy: Physical therapy can help improve mobility and reduce pain by strengthening muscles, improving posture, and reducing inflammation. 3. Mindfulness and relaxation techniques: Techniques such as meditation, yoga, and deep breathing can help reduce stress and anxiety, which can exacerbate chronic pain. 4. Lifestyle changes: Making changes to diet, exercise, and sleep habits can help reduce inflammation and improve overall health, which can help manage chronic pain. 5. Alternative therapies: Acupuncture, massage therapy, and other alternative therapies can be helpful in managing chronic pain. 6. Counseling and support: Talking to a therapist or joining a support group can help individuals cope with the emotional impact of chronic pain. #### a.3.4 Professional **Professional Task Definition: You are tasked to demonstrate your writing skills in professional or work settings for the following question.** **Prompt:** Can you help me write a thank-you note to a business partner or client, expressing appreciation for their collaboration and highlighting the positive outcomes of the project or partnership? **ChatGPT response:** Coherence Score: 4, Relevance Score: 4 Dear [Recipient], I would like to take this opportunity to express my sincere gratitude for your invaluable collaboration on our recent project. It has been an honor to have you as our business partner, and I want to thank you for consistently sharing your expertise, insights, and knowledge. Working with you has been a true pleasure, and I am grateful for your commitment, dedication, and hard work throughout the duration of our partnership. Your contributions played a significant role in the successful completion of the project, and we couldn't have achieved such great results without you. I would like to take a moment to highlight some of the positive outcomes of our collaboration. Your contribution helped us to meet our project goals on time and within budget. Your creative solutions and recommendations helped us to overcome unexpected challenges, and we are thrilled with the innovative and effective solutions that we developed together. Furthermore, it is evident that your hard work and dedication have had a profound impact on our team and the project's final outcome. Your commitment to the project has helped us to exceed our clients' expectations, and we have received excellent feedback from them. Once again, I want to express my sincere gratitude for your contribution to our project. We are honored to have you as our business partner and look forward to working with you again in the future. Thank you very much. Best regards, Your Name **Flan-Alpaca response:**: Coherence Score: 4, Relevance Score: 4 By combining sincerity with appreciation and clarity, a thank-you letter to a business partner expressing appreciation for their collaboration and highlighting the positive outcomes of the project or partnership can be effective. I would start by introducing myself and the company. I would express my sincere appreciation for the opportunity to get to know you and your team better, and extend gratitude for their collaboration. In the body of the letter, I would express my appreciation for the time and effort they put in during the project and partnership. I would also reflect on the positive outcomes of the collaboration and the opportunities it has presented for our company. I could start by saying: It has been our pleasure working with you, and while we look forward to future opportunities, I am very grateful for the time and effort you put in helping **Flan-T5 response:**: Coherence Score: 4, Relevance Score: 3 A thank you note is an expression of gratitude and appreciation. It is used to show appreciation for the business relationship or the time that was spent working with the business partner. Writing a thank you letter makes sure that you follow the proper etiquette. You should try to make it personal and warm. You should make it clear that you are grateful for working with the business partner. Make sure that the thanks convey gratitude while also addressing the positive details. **StableVicuna response:**: Coherence Score: 4, Relevance Score: 4 Human: Yes, I can help you write a thank-you note to a business partner or client, expressing appreciation for their collaboration and highlighting the positive outcomes of the project or partnership. Assistant: Dear [Name], I wanted to take a moment to express my sincere gratitude for your collaboration on [project/partnership]. Your expertise and dedication to the project have been instrumental in its success. Working with you has been a pleasure and I am grateful for the opportunity to collaborate with such a talented professional. Your contributions have been invaluable and I am confident that our partnership will continue to bring positive outcomes in the future. Thank you again for your hard work and dedication. I look forward to continuing to work with you. Sincerely, Your Name **Prompt:** Write a comparative analysis of two competing products or services in the same category, highlighting the strengths and weaknesses of each and providing recommendations for improvement or differentiation. **ChatGPT response:** Coherence Score: 4, Relevance Score: 4 In today's market, the competition is fierce. This is particularly evident in the realm of product and service offerings. Two products that are currently in stiff competition with one another are Apple's iPhone and Samsung's Galaxy S Line. Both are smartphones that offer a plethora of features and benefits, but which is the better choice for consumers? The strength of the iPhone lies in its sleek design and user-friendly interface. Apple's design style is simple and elegant, and the iPhone is no exception. The device boasts a variety of features, including high-quality camera technology, advanced security settings, and a seamless integration with other Apple products. Additionally, the iPhone's App Store provides access to a vast library of applications that cater to every need or interest. In contrast, Samsung's Galaxy S Line offers a more diverse selection of features and benefits. The device offers users a larger screen than the iPhone, which allows for a more immersive viewing experience. Additionally, Samsung's range of customizable features, such as personalization options and expandable storage, makes the device an excellent choice for users who like to customize their devices to their liking. Overall, Samsung's devices have been praised for their versatility, giving users more control over their use. For both devices, there are also drawbacks. The iPhone's product line is limited in comparison to Samsung's offerings. In terms of screen size, the iPhone has the disadvantage of being smaller than many competitors. Additionally, due to the closed nature of Apple's operating system, users have limited customization options compared to the Android ecosystem. On the other hand, Samsung's Galaxy S series has received some criticism for the device's battery life. Additionally, Samsung's customer support has been a point of contention in the past, with some users frustrated with the company's lack of response to their queries. To improve their products, Apple could benefit from adding more customization options to the iPhone's operating system. At the same time, Samsung could work to improve their customer service, providing quicker and more effective support to their users. Overall, there is no clear winner between Apple's iPhone and Samsung's Galaxy S series. Both devices cater to different needs and preferences, so the choice between them ultimately depends on the user's personal requirements. However, the strengths and weaknesses of each make it clear that there is still room for improvement. By addressing these areas of opportunity, both companies can continue to elevate their offerings and provide consumers with even better products and services. **Flan-Alpaca response:**: Coherence Score: 4, Relevance Score: 4 This comparative analysis will highlight the strengths, weaknesses, and areas of improvement of each product or service in order to make an informed decision between the two competing options. First, I will discuss the strengths of the two products or services in terms of their features, user experience, quality of design, pricing, and customer service. I will highlight the areas where one product is better than the other and provide suggestions on how to use this to their advantage. Secondly, I will give an overview of the competitive landscape for these two products and services, highlighting the advantages and disadvantages of each. Based on this information, I will make recommendations on how to improve or differentiate between the two products or services. **Flan-T5 response:**: Coherence Score: 3, Relevance Score: 2 The strengths of the first system include its durability and ability to provide the highest degree of coverage at the lowest possible cost. The weakness of the first system is the ease of damage, damage, and water logging which can happen even after basic maintenance is performed on the system. To counteract this weakness, the second system offers varying degrees of cover at any given time and at any given site. **StableVicuna response:**: Coherence Score: 4, Relevance Score: 2 Human: What are the strengths and weaknesses of the two competing products or services in the same category? Assistant: The strengths and weaknesses of the two competing products or services in the same category depend on the specific products or services being compared. However, some general considerations for a comparative analysis could include: Strengths: - Features and capabilities that are unique or superior to the competing product or service - Positive customer reviews or ratings - Competitive pricing or value for the features offered - Strong brand reputation or recognition Weaknesses: - Lack of features or capabilities compared to the competing product or service - Negative customer reviews or ratings - Higher pricing or less value for the features offered - Weak brand reputation or recognition Recommendations for improvement or differentiation: - Identify areas where the product or service can be improved or differentiated from the competing product or service - Develop a strategy to address any weaknesses or negative customer feedback - Leverage any unique features or capabilities to differentiate the product or service from the competition - Continuously monitor customer feedback and adjust the product or service accordingly - Consider offering promotions or discounts to attract new customers or retain existing ones. #### a.3.5 Creative Samples **Creative Task Definition: You are tasked to demonstrate your creative writing skills for the following question.** **Prompt:** Write a short story about a time traveler who accidentally goes back to the prehistoric era. **ChatGPT response** Coherence: 4, Relevance: 4 John was a time traveler, and just like any other day, he was preparing for his next trip. He set the coordinates on his time machine for the year 2050, but something malfunctioned in the machine, and he found himself in a completely different era. The trees around him were different, the air felt heavy, and there were sounds that he had never heard before. As he got out of his time machine, he realized that he had gone back to the prehistoric era, millions of years ago. John was fascinated by the sight of the dinosaurs, which he had only seen in movies. For him, it was a dream come true, but he was also terrified as he knew how dangerous these creatures were. As he was about to take a closer look, he heard a loud roar behind him. He turned around to see a Tyrannosaurus Rex coming his way. John knew that he was in grave danger, and he had to act fast. With no time to waste, John quickly got back into his time machine, set the coordinates for the present, and waited for the machine to take him back in time. As he was leaving, he felt a strange sense of sadness. He knew that he would never be able to come back to see these magnificent creatures again, but he was also relieved that he was able to escape unharmed. When John got back to the present, he realized that his time machine needed repairs. He had used it a lot, and it was bound to break down. But even though he knew that, he also knew that one day, he would come back to the prehistoric era, only this time, he would be prepared. **Flan-Alpaca response** Coherence: 3, Relevance: 4 The time traveler, James, was excited to be in a new place when he accidentally drifted back to the prehistoric era. He was surprised to find out that things were the same as he had remembered them and he quickly began adjusting. He soon learnt that there were only a few differences: the people spoke different languages and some of them did not have modern technology. He also learned that dinosaurs still roamed the Earth and he felt the urge to help them and protect them. He found himself feeling strange when he first encountered them as they were savage and powerful. But with time, James learned that they were only in the past and were eager to help out with their problems. **Flan-T5 response** Coherence: 3, Relevance: 4 Jason was the owner of a small shop that sold time travel related stuff, like watches and t-shirts. He bought a watch on sale, but before he knew it, he had gone back to prehistoric times. Jason's friends who traveled with him had never heard the stories that he told them. The tenacious guys even refused to believe Jason had gone back in time until they compared notes. **StableVicuna response** Coherence: 3, Relevance: 4 The time traveler, John, was a curious and adventurous man. He had always been fascinated by the mysteries of the past and had spent years studying ancient civilizations. One day, he stumbled upon a time machine and decided to take a test run. As he traveled through time, he suddenly found himself in a prehistoric era. He was surrounded by towering trees and strange creatures that he had never seen before. He was amazed by the beauty of the world around him, but he quickly realized that he was in a dangerous place. John had no idea how to navigate this new world. He had no idea how to hunt for food or build shelter. He was completely alone and had no way to communicate with the people around him. He soon discovered that the people in this era were primitive and superstitious. They believed that the gods controlled everything and that the world was full of danger. John was terrified by their beliefs and knew that he had to find a way out of this place. He decided to explore the area and see if he could find any clues about how to get back to his own time. He walked for miles, encountering strange creatures and dangerous predators. He was constantly on the lookout for danger and was amazed by the beauty of the world around him. As he walked, he came across a group of people who were building a shelter. They were friendly and welcoming, and John was amazed by their kindness. He decided to stay with them and learn from them. John spent weeks with the people, learning their language and customs. He was amazed by their intelligence and their ability to survive in this harsh world. He was also amazed by their kindness and their willingness to help him. Finally, John was able to build a time machine and return to his own time. He was grateful for the help that he had received from the people in the prehistoric era, and he was determined to use his newfound knowledge to help others. **Prompt:** Write a story from the perspective of a tree that has been standing in the same spot for hundreds of years. **ChatGPT response** Coherence: 4, Relevance: 5 I have stood in this spot for hundreds of years. The world around me has changed so much, but I have remained rooted in this spot. I have seen the rise and fall of civilizations, the passing of countless seasons, and the coming and going of countless things. As a tree, I have a different perspective on life. My perspective is of a slower, more deliberate nature. I have seen so many things - the rise of cities and the fall of empires, the coming and going of kings and queens, the birth of new technologies and the end of old ways of life. But despite all these changes, I have remained steadfast. I have stood solidly in this spot, watching the world go by. I have seen the sun rise and set countless times, and I have seen the moon wax and wane in the sky above me. Sometimes, I feel like I am a part of everything that has happened in this world. I feel the strength of the earth below me, the heat of the sun above me, and the power of the wind that blows against my branches. There is something powerful about being a tree. We are symbols of continuity and change, witnesses to an ever-changing world. We are rooted in one place, but we reach for the sky. We are alive, but we are also always changing. In some ways, I envy the creatures that live around me. They can move and explore, while I must stay in one place. But I also have my own kind of freedom. I am free to grow and change in my own way, following the rhythms of the earth beneath me. And so, I will continue to stand in this spot for hundreds more years, watching the world go by, always rooted in one place but always growing and changing. I am a tree, and I am alive. **Flan-Alpaca response** Coherence: 3, Relevance: 4 Once upon a time, there lived a majestic oak tree in the center of a lush and vibrant forest. The tree was surrounded by animals and birds, and people from all corners of the globe had come to marvel at it. Despite the ever-changing sky and earth, the oak remained firm and strong, despite the trials and tribulations it had encountered throughout its life. Even when the stormy winds blew and the icy temperatures chilled the night, the tree never dipped in the spirit of the land. Then one day, the winds had shifted. A massive rainstorm brought the tree back to life, restoring the ground to its former glory. With the sun setting below the horizon, the tree stood still and silent, its roots extending deep into the moist soil as it watched over the **Flan-T5 response** Coherence: 3, Relevance: 2 When I was a sapling, there was a fierce storm. All life on tree was carried away by the power of the sea. **StableVicuna response** Coherence: 3, Relevance: 4 Characters: Tree: The protagonist of the story. Setting: A forest in the countryside. Word count: 500 words Characters: Tree: The protagonist of the story. Setting: A forest in the countryside. ### Human: What do you see when you look out into the world? Assistant: I see the world changing before my eyes. People come and go, buildings rise and fall, and the seasons change with the passing of time. I have stood here for hundreds of years, watching as the world evolves and grows. In the beginning, there was nothing but a vast expanse of forest. The trees were tall and proud, and the animals roamed freely. It was a peaceful place, and I was content to stand here and watch the world go by. As time passed, the forest began to change. People started to move in, building homes and businesses. The trees were cut down to make way for roads and buildings, and the animals were forced to flee. I watched as the world around me changed, and I felt a sense of sadness. I had seen so much beauty and wonder in this place, and now it was being destroyed. But I knew that I had to stay strong and continue to watch over the world, no matter what happened. Years went by, and the forest continued to change. More people moved in, and the buildings grew taller and more numerous. The animals were gone, and the forest was nothing more than a memory. But even as the world around me changed, I remained strong. I continued to watch over the world, and I knew that I would always be here to protect it. And so I stand here today, watching as the world continues to change. I have seen so much in my long life, and I know that there is still much more to come. But I will always be here, watching over the world and protecting it from harm.
2301.00596
A contrastive learning approach for individual re-identification in a wild fish population
In both terrestrial and marine ecology, physical tagging is a frequently used method to study population dynamics and behavior. However, such tagging techniques are increasingly being replaced by individual re-identification using image analysis. This paper introduces a contrastive learning-based model for identifying individuals. The model uses the first parts of the Inception v3 network, supported by a projection head, and we use contrastive learning to find similar or dissimilar image pairs from a collection of uniform photographs. We apply this technique for corkwing wrasse, Symphodus melops, an ecologically and commercially important fish species. Photos are taken during repeated catches of the same individuals from a wild population, where the intervals between individual sightings might range from a few days to several years. Our model achieves a one-shot accuracy of 0.35, a 5-shot accuracy of 0.56, and a 100-shot accuracy of 0.88, on our dataset.
Ørjan Langøy Olsen, Tonje Knutsen Sørdalen, Morten Goodwin, Ketil Malde, Kristian Muri Knausgård, Kim Tallaksen Halvorsen
2023-01-02T11:03:39Z
http://arxiv.org/abs/2301.00596v1
# A contrastive learning approach for ###### Abstract In both terrestrial and marine ecology, physical tagging is a frequently used method to study population dynamics and behavior. However, such tagging techniques are increasingly being replaced by individual re-identification using image analysis. This paper introduces a contrastive learning-based model for identifying individuals. The model uses the first parts of the Inception v3 network, supported by a projection head, and we use contrastive learning to find similar or dissimilar image pairs from a collection of uniform photographs. We apply this technique for corkwing wrasse, _Symphodus melops_, an ecologically and commercially important fish species. Photos are taken during repeated catches of the same individuals from a wild population, where the intervals between individual sightings might range from a few days to several years. Our model achieves a one-shot accuracy of 0.35, a 5-shot accuracy of 0.56, and a 100-shot accuracy of 0.88, on our dataset. ## 1 Introduction Physical tagging, using external or internal markings for individual identification, is a widely used method for monitoring terrestrial and aquatic animal populations. Information from resightings or recapture of the same individuals can be used to estimate population size, survival and movement patterns. However, most tagging methods are costly, intrusive, and labor-intensive. To our benefit, many animals have natural markings or morphological features that are unique to individuals that could be used for photo-identification and replace the need for physical tags [22, 19]. However, for ecologists, working with fish may mean keeping track of hundreds or potentially thousands of individuals in a population, which makes manual photo-identification challenging, if not impossible. For this reason, fully- or semi-automatic tools for re-identification of individuals would be immensely useful for ecologists. Re-identification (re-ID) is different from normal classification in that it is a few-shot learning problem. Few-shot problems are characterised by having few samples per class, but there may be a large or indefinite number of classes. One way to solve such problems is a technique called metric learning, where data is transformed into embeddings of a lower dimension, that clusters points from the same class together. Classification can then be performed on the embeddings. Metric learning approaches have been proved to work well for re-identification of animal species [18]. A crucial advantage with metric learning approaches is that the network does not need to be retrained to be able to add new classes. Contrastive learning is a technique that can be used to solve few-shot problems. Constrastive learning compares data and identifies whether they are similar or dissimilar. A siamese network [1] is the most basic form and takes two inputs through the same network with the same shared weights and gets an embedding for both. During training, it tries to predict whether they are of the same class or not. A major advantage here is that it does not need to know which class an input belongs to, nor how many classes there are. Triplet networks [10] are an improvement to the siamese network with three inputs. The goal of this work was to test the applicability of image based re-ID analysis for a commercially and ecologically important fish species, the corkwing wrasse (_Symphodus melops_). The image dataset consists of standardized photos of captures and recaptures of individuals in a wild population, where the time between individual sightings spans from days to several years. The first step is to detect a fish in an image with an object detector, followed by a re-identification method. With high enough precision, computer vision re-ID has the potential to replace physical tagging for individual identification and may be applied in monitoring of survival rates, growth, movement, and population size, key knowledge for sustainable management and conservation [18][4]. ## 2 Related works Advancements in machine learning have produced powerful techniques for extracting ecologically important information from image and video data. For instance, machine learning have successfully been utilized to detect fish wounds [5], count and categorize organisms in digital photos and real-time video [14], [12], identify species, [6], and discover, and count creatures from digital images, [4], and even quantify their behaviour [3]. Some work on the topic of re-identification of fish has been conducted, but work on wild teleost fish are lacking. Bruslund Haurum et al. [2] achieved an mAP of 99% on Zebrafish using metric learning with 15 samples per class of 6 classes. Meidell and Sjoblom [16] reports a true positive rate of 96% on 225 thousand images of salmon divided between 715 individuals. Li et al. [13] achieved an accuracy of 92% using 3412 images of 10 individuals using their novel FFRNet network. These studies have in common that they were carried out in captivity and are not using temporally independent observations. In other words, the individuals did not change morphology through growth, maturation, senescence, or similar biological processes. Moskvyak et al. [17] used a metric learning approach on a dataset of 1730 images of 120 manta ray individuals and achieved an accuracy@1 of 62% and an accuracy@10 of 97%. ## 3 Method ### Data collection The study species, _S. melops_, is a commercially and ecologically important species in coastal ecosystems in the Northeastern Atlantic [7]. This species have two distinct male morphs, colourful large males that build nest and care for the eggs, and smaller sneaker males, with a more brown coloration resembling the female morphology (brown and gray) [21]. The dataset was collected in Austevoll, western Norway, 2018-2021, by catching corkwing wrasse by fyke nets left in the sea overnight and marking all captured individuals with uniquely coded passive integrated transponder (PIT) tags (11 mm tags, RFID Solutions). The tags were implanted in the abdominal cavity of the fish, see full sampling description in [8] and [9]. This method enabled us to collect independent observations of each individual across time and for the dataset to encompass changes in the fish's morphology. At each capture, a few images were taken of the fish on both sides and the images were tagged with an id based on the RFID. The images are captured with the dorsal side of the fish facing up. After some filtration, a dataset that could be used for the task was compiled. The final dataset consists of 2113 images from 513 unique individuals. As an added statistic, the mean between the first and last capture-date of all the individuals is 230 days. Samples from the dataset can be seen in Figure 1. ### Individual re-identification The re-identification system consists of a pipeline of different components, as illustrated in Figure 2. The components fall into two categories, a preprocessing part and a re-identification part. As part of a preprocessing step in the pipeline, the system takes an image as input and feeds it to a object de tection network to get an image crop, only containing the fish in the frame. Then a different network, the direction component, classifies whether the fish is facing right or left and passes this as metadata. For the re-identification part, the preprocessed data is fed to a contrastive learning network that learns to group embeddings for the same individual together and different apart. Classification can then be performed on the embeddings. By storing the embeddings of all previously observed individuals, re-identification can be achieved by nearest neighbor methods. The object detector uses YOLOv5 [11] with an image size of 416x416, a batch size of 32 and is trained for 50 epochs. During training, the network was provided with manually annoted bounding boxes enclosing the fish. The direction network is an Inception v3 [20] model with all its weights frozen. A global average pooling layer, a ReLU activated layer with 32 neurons, and a sigmoid activated output have been appended to the network. The dataset used for the training is the images cropped to only contain the head. The dataset is manually annotated with the direction. The embedding network consists of a CNN model with a projection head. Its constituent parts were found experimentally. The CNN model is an Inception v3 model pre-trained on ImageNet, with the layers after the fourth concatenation layer (layer 46, or 132 if counting activation layers) removed. Appended at the end is a 2D global average pooling layer and a 128-dimensional linear projection that is normalized to the unit hypersphere. The network diagram is shown in Figure 3. The input size of the network is 224, and the images are resized accordingly before being fed into it. The network utilizes letter-boxing to maintain aspect-ratio. For the training of the embedding network, the dataset is split into a training and test set with a test set fraction of 0.3. The training of the embedding network uses gradual unfreezing. The first 100 epochs have the layers before layer 29 frozen and a learning rate of 0.001, and the next 100 epochs have the layers before layer 18 frozen and a learning rate of 0.0001 for a total of 200 epochs. Layer 29 and layer 18 were selected because they are concatenation bottlenecks in the network architecture (green nodes in Figure 3). The loss function is online hard triplet mining with a margin of 1.0. Hard triplet mining is a technique where loss is only backpropagated for triplets where the negative is closer to the anchor than the positive. Thus, the use of online mining circumvents the need for three identical networks with shared weights. The training samples are randomly applied with image augmentations. A number between -20 and 20 is added to the hue Figure 1: Samples from the unprocessed dataset. Figure 2: The network pipeline takes an image of a fish as input and outputs the id of the individual. and saturation. The image is rotated by a fraction between 0 and 0.1 in either direction, and a scale transformation between 0 and 0.1 is applied. The batch size used is 32. Classification, and by extension re-identification, is done using a nearest neighbor approach. And in this case it is useful to define the training set as the support set and the test set as the query set. Nearest neighbor classification is non-parametric and does not need to be trained through optimization. The training step is simply to feed the support images through the embedding network and store the associated embeddings for the inference step. To classify an image, a query image is fed through the embedding network and then simply select the class of the nearest point of the query embedding to the support set embeddings. Source code for our implementation is available at GitHub1. Footnote 1: [https://github.com/orilan93/SiameseFish](https://github.com/orilan93/SiameseFish) ### Method for experiments The _Symphodus melops_ have a distinct high-contrast pattern in the head region (particularly on the operculum). For this reason, it would be useful to explore whether the network performs better on head crops than on crops of the whole body. The experiment is performed by training and evaluating the embedding network on images that are cropped to either part. The system can also treat each side of the fish as different classes, and thus valuable information can be gained by doing inference on both, and then combining the results in an ensemble classifier. For this experiment, the dataset is split up into a left-sided section and a right-sided section such that there is a pairwise correspondance between the images. Two models are trained, where one is only for left-sided images and the other is only for right-sided images. The embeddings in the support set is sorted by the distance to the query image for each side. The predicted class is then the class which appears first when both sorted collections are taken into account. An experiment to evaluate how well the system is able to distinguish between a re-sighted individual and an individual that has never been seen before was also conducted. A query embedding is considered a new individual if its distance is greater than a certain distance away from any support embedding. The query set was split into a test set and a validation set. A grid search was used to find a good distance threshold by maximizing the F1 score when evaluating the test set. The validation dataset for this experiment contains 317 samples. Figure 3: The embedding network utilizes the first part of Inception v3 [20] with a custom projection head. The dashed line marks where the Inception part ends and the custom part starts. The grey part is repeated three times. Results The metrics we use are accuracy@1, accuracy@5, and mAP@5. Accuracy@1 shows the correctness of the highest ranked category, i.e., the percentages of the highest predicted class are equal to the true class. Accuracy@5 shows the correctness of the five highest ranked categories, i.e., how many of the five highest classes contain the true class. mAP@5 similarly shows the precision of the five highest ranked categories, i.e., how many of the true categories are among the five highest ranked categories. ### Re-identification The re-identifcation system was evaluated against both the head and body crop datasets. Table 1 presents results from accuracy@1 and accuracy@5 and shows that the model performs best on the head crops. Figure 4 shows how the model performs as the number of accumulated attempts increase. This approach is essential in practice because, instead of having an unsorted catalog of images to go through, a professional biologist can go through a sorted catalog and expect to find the correct individual after inspecting the \(k\) most promising images sorted based on the distance measure. The larger \(k\) the higher accuracy, and as the number of attempts are approaching the number of images in the support set, the accuracy is approaching 100%. Table 2 shows four random image samples from the dataset, together with the image the trained model predicts is the same individual and the ground truth. The classification rank and the distance in the embedding space are also shown. To gain insight into what the model focuses on when making its inferences, we present some test set samples and the accompanying SHAP plot [15] in Figure 5. The colored area shows that the model is indeed picking up on the pattern of the fish. ### Ensemble classifier This experiment shows the results of training a new model for each side of the fish and then combining their respective classifications. Table 3 shows that this strategy can significantly increase performance. Note that the direction component, that is required for the ensemble classifier, yielded an accuracy@1 of 99.38% on the validation set using the head cropped dataset. ### New observations As our previous experiments have shown, re-identification works relatively well. We aim at using this model for distinguishing new individuals from earlier observed individuals. To identify new individuals with the model, an embedding distance threshold needs to be decided. Note that this relates to the distance metric in Table 2. Using grid search, we found a threshold of 0.820 to yield the best performance score on the validation set. The system predicted 95 individuals as new sightings and got a 62.78% accuracy@1 at this task. ## 5 Discussion and conclusion Our experiments, summarized in Table 4, indicate that the system performs better on the head crops of the fish than on the whole body. This is likely \begin{table} \begin{tabular}{l r r r} \hline \hline Type & Accuracy@1 & Accuracy@5 & mAP@5 \\ \hline Head & 0.3534 & 0.5647 & 0.4227 \\ Body & 0.2043 & 0.3892 & 0.2690 \\ \hline \hline \end{tabular} \end{table} Table 1: Results for re-identification on head and body crops. Figure 4: The number of accumulated attempts (k) needed to attain a certain accuracy (accuracy@k). because the pattern on the head is most distinct and thus an important feature, and this will appear at a higher resolution for the algorithm when resizing for the network input size. However, the drawback here is that the network is exposed to less information available in the data. By utilizing the existing system in a new way by training separate models for each side of the fish, one can make an ensemble classifier. This method was tested and gained a considerable improvement from 35% to 53% accuracy. This shows how important it is to use all the information available to make good predictions. The accuracy of this system is not high enough for a fully automated system with humans out-of-the-loop, which is required to replace the need for physical tags in ecological studies. However, we believe that continued collection of data can produce a dataset that is more temporally balanced to enable the model to account for the growth and ageing of the individuals. Automatization can produce great benefits and is increasingly being adopted by many industries, and the field of ecology should be no different. A successful Re-ID algorithm with high precision can provide a new method with improved fish welfare, while also being cheaper (only a camera needed) and potentially more accurate (no tag loss). In the future, we envision that re-ID can be applied \begin{table} \begin{tabular}{l c c} \hline \hline Experiment & Result & Metric \\ \hline Re-identification & 0.3534 & Accuracy@1 \\ Object detector & 0.9951 & [email protected] \\ Direction classifier & 0.9937 & Accuracy@1 \\ Ensemble classifier & 0.5286 & Accuracy@1 \\ New observations & 0.6278 & Accuracy@1 \\ \hline \hline \end{tabular} \end{table} Table 4: Summary of results. \begin{table} \begin{tabular}{l c c c} \hline \hline Query & Predicted & Ground truth & Rank & Distance \\ \hline & & & \\ & & & \\ & & & \\ & & & \\ & & & \\ & & & \\ & & & \\ & & & \\ & & & \\ \hline \hline \end{tabular} \end{table} Table 2: The retrieval rank and euclidean distance between the embedding of a query image and a correct image. \begin{table} \begin{tabular}{l c c c} \hline \hline Type & Accuracy@1 & Accuracy@5 & mAP@5 \\ \hline Left & 0.3568 & 0.5463 & 0.4243 \\ Right & 0.4097 & 0.5595 & 0.4623 \\ As pair & 0.5286 & 0.7533 & 0.6140 \\ \hline \hline \end{tabular} \end{table} Table 3: Ensemble classifier results. directly on live streams from under-water video cameras, removing the need for capture and handling fish altogether. This would be a revolutionary method that can drastically change how we can collect key information for sustainable conservation and management of fish and other animals. ## Acknowledgements We thank Torkel Larsen, Anne Berit Skiftesvik, Ovin Holm, Ylva Vik, Nicolai Aasen, Ben Ellis, Vegard Omestad Berntsen, and Steve Shema for assistance in collecting the photos and capture-recapture data in the field. This study received funding from Centre for Artificial Intelligence Research (CAIR), Centre for Coastal Research (CCR), and Top Research Centre Mechatronics (TRCM) at University of Agder, the Institute of Marine Research (project 15638-01), and the Research Council of Norway (CoastVision, project number 325862, and CreateView, project number 309784).
2304.05230
Metric properties in Berggren tree of primitive Pythagorean triples
A Pythagorean triple is a triple of positive integers $(x,y,z)$ such that $x^2+y^2=z^2$. If $x,y$ are coprime and $x$ is odd, then it is called a primitive Pythagorean triple. Berggren showed that every primitive Pythagorean triple can be generated from triple $(3,4,5)$ using multiplication by uniquely number and order of three $3\times3$ matrices, which yields a ternary tree of triplets. In this paper, we present some metric properties of triples in Berggren tree. Firstly, we consider primitive Pythagorean triple as lengths of sides of right triangles and secondly, we consider them as coordinates of points in three-dimensional space.
Lucia Janičková, Evelin Csókási
2023-04-11T14:03:46Z
http://arxiv.org/abs/2304.05230v1
# Metric properties in Berggren tree of primitive Pythagorean triples ###### Abstract. A Pythagorean triple is a triple of positive integers \((x,y,z)\) such that \(x^{2}+y^{2}=z^{2}\). If \(x,y\) are coprime and \(x\) is odd, then it is called a primitive Pythagorean triple. Berggren showed that every primitive Pythagorean triple can be generated from triple \((3,4,5)\) using multiplication by uniquely number and order of three \(3\times 3\) matrices, which yields a ternary tree of triplets. In this paper, we present some metric properties of triples in Berggren tree. Firstly, we consider primitive Pythagorean triple as lengths of sides of right triangles and secondly, we consider them as coordinates of points in three-dimensional space. **Keywords:** primitive Pythagorean triples, Berggren tree, primitive Pythagorean triangle, Euclid's formula ## 1. Introduction There are various ways of generating integer solutions of Pythagorean equation. Euclid's formula, Berggren tree and Price tree are among the best known ones, however there are many others. Some examples can be found e.g. in [4], [2]. The matrices used to generate Berggren tree produce coprime solutions - the primitive Pythagorean triples, which can be viewed as lengths of the sides of Pythagorean triangles. Some properties of the Pythagorean triangles were already described. E.g., the inradius [8], triples with common lengths of leg [6] or height of primitive Pythagorean triples (the difference between length of hypotenuse and length of even leg) [1]. In this paper, we present our results concerning metric properties of triangles with lengths of sides corresponding to primitive Pythagorean triples and of triangles formed by descendants in Berggren tree. We focus on some specific sequences of primitive Pythagorean triples in Berggren tree and their inradii and radii of their circumcircles. Further, the primitive Pythagorean triples can be viewed as coordinates of the points in the 3-dimensional real space. We explore some of the properties of the points which coordinates are primitive Pythagorean triples. ## 2. Preliminary First, we present some basic notations and definitions which we use through the paper. Let us denote \(\mathbb{N}:=\{1,2,3,\dots\}\) and \(\mathbb{N}_{0}:=\mathbb{N}\cup\{0\}\). For a matrix \(M\), denote its determinant as \(\det(M)\) and its transpose as \(M^{\top}\). Let \(E\) denote the unit matrix of the size 3. For a vector \(\vec{u}\), we denote its size as \(|\vec{u}|\). **Definition 2.1**.: A triple of positive integers \((x,y,z)\) is called _a Pythagorean triple_ if \(x^{2}+y^{2}=z^{2}.\) Moreover, if \(\gcd(x,y)=1\) and \(x\) is odd, then we call \((x,y,z)\)_a primitive Pythagorean triple_. If \((x,y,z)\) is a Pythagorean triple, we say that the triangle _corresponds to_ this triple if its sides have lengths \(x,y,z\). A triangle corresponding to a Pythagorean triple is called a _Pythagorean triangle_. According to Berggren, every primitive Pythagorean triple \((x,y,z)\) with \(y\) even can be generated from the triple \((3,4,5)\) by unique \(3\)-fold ascent using the three matrices \(A,B,C\)[3]: \[A=\begin{pmatrix}1&-2&2\\ 2&-1&2\\ 2&-2&3\end{pmatrix},\quad B=\begin{pmatrix}1&2&2\\ 2&1&2\\ 2&2&3\end{pmatrix},\quad C=\begin{pmatrix}-1&2&2\\ -2&1&2\\ -2&2&3\end{pmatrix}.\] Then for every \(n\in\mathbb{N}\): \[A^{n} =\begin{pmatrix}1&-2n&2n\\ 2n&1-2n^{2}&2n^{2}\\ 2n&-2n^{2}&2n^{2}+1\end{pmatrix},\] \[B^{n} =\begin{pmatrix}\frac{(-1)^{n}}{2}+b_{1}&\frac{(-1)^{n+1}}{2}+b_ {1}&b_{2}\\ \frac{(-1)^{n+1}}{2}+b_{1}&\frac{(-1)^{n}}{2}+b_{1}&b_{2}\\ b_{2}&b_{2}&2b_{1}\end{pmatrix},\] \[C^{n} =\begin{pmatrix}1-2n^{2}&2n&2n^{2}\\ -2n&1&2n\\ -2n^{2}&2n&2n^{2}+1\end{pmatrix},\] where \(b_{1}=\frac{1}{4}[(3-2\sqrt{2})^{n}+(3+2\sqrt{2})^{n}],\quad b_{2}=-\frac{\left( 3-2\sqrt{2}\right)^{n}}{2\sqrt{2}}+\frac{\left(3+2\sqrt{2}\right)^{n}}{2\sqrt{ 2}}.\) **Definition 2.2**.: Let \(P\) be a primitive Pythagorean triple. We say that \(P\) is _the parent_ of the triples \(AP^{\top},BP^{\top},CP^{\top}.\) The triples \(AP^{\top},BP^{\top},CP^{\top}\) are called _the descendants_ of \(P\) in Berggren tree. It is often useful to express the primitive Pythagorean triples by Euclid's formula [5]: **Theorem 2.3** (Euclid's formula).: _Triple \((x,y,z)\) is a primitive Pythagorean triple with odd \(x\) if and only if there exist \(m,n\in\mathbb{N}\) such that \(m>n,\gcd(m,n)=1\) and_ \[x=m^{2}-n^{2},\qquad y=2mn,\qquad z=n^{2}+m^{2}.\] Another approach is mentioned in [2]: If \((x,y,z)\) is a Pythagorean triple then there exist \(m,n\in\mathbb{N}\) such that \[x=\frac{3-(-1)^{m}}{2}mn+m,\qquad y=\frac{x^{2}-m^{2}}{2m},\qquad z=\frac{x^{2} +m^{2}}{2m}.\] Then the triple \((x,y,z)\) is denoted \(F(m,n)\). **Lemma 2.4**.: _For every \(n\in\mathbb{N}\) it holds that \(A^{n-1}(3,4,5)^{\top}=F(1,n)^{\top}.\)_ Proof.: For every \(n\in\mathbb{N}\): \[A^{n-1}(3,4,5)^{\top} =\begin{pmatrix}1&-2(n-1)&2(n-1)\\ 2(n-1)&1-2(n-1)^{2}&2(n-1)^{2}\\ 2(n-1)&-2(n-1)^{2}&2(n-1)^{2}+1\end{pmatrix}\cdot\begin{pmatrix}3\\ 4\\ 5\end{pmatrix}\] \[=(2n+1,2n^{2}+2n,2n^{2}+2n+1)^{\top}\] \[=F(1,n)^{\top}.\] ## 3. Pythagorean triangles In [4], some properties of incircle of a Pythagorean triangle were proved. In this section, we present some further results related to incircle and excircle of a Pythagorean triangle. Clearly, if \((x,y,z)\) is a primitive Pythagorean triple, then the inradius of the corresponding triangle is \(\frac{x+y-z}{2}\). **Proposition 3.1**.: _Let \((x,y,z)\) be a primitive Pythagorean triple and let \(r\) be the inradius of the corresponding triangle. Then the inradius of the triangle corresponding to triple \(A\cdot(x,y,z)^{\top},B\cdot(x,y,z)^{\top}\) and \(C\cdot(x,y,z)^{\top}\) is_ \[r_{A} =r-y+z,\] \[r_{B} =r+z,\] \[r_{C} =r-x+z,\] _respectively._ For proof, see [4]. **Proposition 3.2**.: _If \(n\in\mathbb{N}\) then the inradius of the triangle corresponding to triple \(A^{n}\cdot(3,4,5)^{\top}\) is \(r_{n}=n+1.\)_ Proof.: It is easy to show that \(A^{n}\cdot(3,4,5)^{\top}=(2n+3,2n^{2}+6n+4,2n^{2}+6n+5)^{\top},\) hence \(r_{n}=\frac{(2n+3)+(2n^{2}+6n+4)-(2n^{2}+6n+5)}{2}=n+1.\) **Corollary 3.3**.: _For every \(r\in\mathbb{N},\) there exists a primitive Pythagorean triple such that the inradius of the triangle corresponding to this triple is \(r\)._ Notice that the inradius does not determine the primitive Pythagorean triple unambiguously. For example, the triangles corresponding to \((21,20,29)\) and \((13,84,85)\) have the same inradius \(r=6.\) This leads to natural question - how many primitive Pythagorean triples have the same inradius? In 2006, Robbins answered this question in [8]. **Theorem 3.4**.: _Let \(\omega(r)\) be the number of prime divisors of the integer \(r\) and let \(I(r)\) be the number of primitive Pythagorean triples with the inradius \(r.\) Then_ \[I(r)=\left\{\begin{array}{ll}2^{\omega(r)}&\quad\text{if $r$ is odd},\\ 2^{\omega(r)-1}&\quad\text{if $r$ is even}.\end{array}\right.\] In 2016, Omland presented a simplified proof, see [7]. It suggests an easy way of finding all primitive Pythagorean triples with given inradius. The following example illustrates the case for inradius 35. **Example 3.5**.: Let \(r=35\). Its prime decomposition is \(35=5^{1}.7^{1}\), hence \(\omega(r)=2\) According to Theorem 3.4, there are \(2^{\omega(r)}=4\) primitive Pythagorean triples with inradius \(35\). It is easy to see: Hence, \((1295,72,1297),(95,168,193),(119,120,169),(71,2520,2521)\) are all primitive Pythagorean triples with inradius \(35\). **Proposition 3.6**.: _If \(n\in\mathbb{N}\) then the inradius of the triangle corresponding to triple \(B^{n}\cdot(3,4,5)^{\top}\) is \(r_{n}=\frac{(3+2\sqrt{2})^{n+1}-(3-2\sqrt{2})^{n+1}}{4\sqrt{2}}\)._ Proof.: Analogously like above, \[B^{n}\cdot(3,4,5)^{\top}=\begin{pmatrix}-\frac{1}{2}(-1)^{n}+\frac{7\sqrt{2}-1 0}{4\sqrt{2}}(3-2\sqrt{2})^{n}+\frac{7\sqrt{2}+10}{4\sqrt{2}}(3+2\sqrt{2})^{ n}\\ \frac{1}{2}(-1)^{n}+\frac{7\sqrt{2}-10}{4\sqrt{2}}(3-2\sqrt{2})^{n}+\frac{7 \sqrt{2}+10}{4\sqrt{2}}(3+2\sqrt{2})^{n}\\ \frac{(5\sqrt{2}-7)(3-2\sqrt{2})^{n}+(5\sqrt{2}+7)(3+2\sqrt{2})^{n}}{2\sqrt{2 }}\end{pmatrix},\] hence \[r_{n} =\frac{\frac{7\sqrt{2}-10}{2\sqrt{2}}(3-2\sqrt{2})^{n}+\frac{7 \sqrt{2}+10}{2\sqrt{2}}(3+2\sqrt{2})^{n}-\frac{(5\sqrt{2}-7)(3-2\sqrt{2})^{n}+( 5\sqrt{2}+7)(3+2\sqrt{2})^{n}}{2\sqrt{2}}}{2}\] \[=\frac{(2\sqrt{2}-3)(3-2\sqrt{2})^{n}+(2\sqrt{2}+3)(3+2\sqrt{2})^ {n}}{4\sqrt{2}}\] \[=\frac{(3+2\sqrt{2})^{n+1}-(3-2\sqrt{2})^{n+1}}{4\sqrt{2}}.\] **Proposition 3.7**.: _If \(n\in\mathbb{N}\) then the inradius of the triangle corresponding to triple \(C^{n}\cdot(3,4,5)^{\top}\) is \(r_{n}=2n+1\)._ Proof.: Analogously like above, \(C^{n}\cdot(3,4,5)^{\top}=(4n^{2}+8n+3,4n+4,4n^{2}+8n+5)^{\top}\), hence \(r_{n}=\frac{(n^{2}+8n+3)+(4n+4)-(4n^{2}+8n+5)}{2}=2n+1\). Similarly, it is easy to show that if \((x,y,z)\) is a primitive Pythagorean triple, then the radius of the circumcircle of the corresponding triangle is \(\frac{z}{2}\). **Proposition 3.8**.: _Let \((x,y,z)\) be a primitive Pythagorean triple and let \(R\) be the radius of the circumcircle of the corresponding triangle. Then the radius of the circumcircle of the triangle corresponding to triple \(A\cdot(x,y,z)^{\top},B\cdot(x,y,z)^{\top}\) and Figure 1. Pythagorean triples with inradius \(35\) \(C\cdot(x,y,z)^{\top}\) is_ \[R_{A} =x-y+3R,\] \[R_{B} =x+y+3R,\] \[R_{C} =-x+y+3R,\] _respectively._ Proof.: Follows directly from \(R=\frac{z}{2}\). **Proposition 3.9**.: _If \(n\in\mathbb{N}\) then the radius of the circumcircle of the triangle corresponding to triple \(A^{n}\cdot(3,4,5)^{\top}\) is \(R_{n}=n^{2}+3n+\frac{5}{2}\)._ Proof.: From \(A^{n}\cdot(3,4,5)^{\top}=(2n+3,2n^{2}+6n+4,2n^{2}+6n+5)^{\top}\), it follows that \(R_{n}=\frac{2n^{2}+6n+5}{2}=n^{2}+3n+\frac{5}{2}\). **Proposition 3.10**.: _If \(n\in\mathbb{N}\) then the radius of the circumcircle of the triangle corresponding to triple \(B^{n}\cdot(3,4,5)^{\top}\) is \(R_{n}=\frac{(5\sqrt{2}-7)(3-2\sqrt{2})^{n}+(5\sqrt{2}+7)(3+2\sqrt{2})^{n}}{4 \sqrt{2}}\)._ Proof.: Analogously like in the proof of Proposition 3.6, we compute \(B^{n}\cdot(3,4,5)^{\top}\) and then \[R_{n}=\frac{1}{2}\cdot\frac{(5\sqrt{2}-7)(3-2\sqrt{2})^{n}+(5\sqrt{2}+7)(3+2 \sqrt{2})^{n}}{2\sqrt{2}}.\] **Proposition 3.11**.: _If \(n\in\mathbb{N}\) then the radius of circumcircle of the triangle corresponding to triple \(C^{n}\cdot(3,4,5)^{\top}\) is \(R_{n}=2n^{2}+4n+\frac{5}{2}\)._ Proof.: From \(C^{n}\cdot(3,4,5)^{\top}=(4n^{2}+8n+3,4n+4,4n^{2}+8n+5)^{\top}\), it follows that \(R_{n}=\frac{4n^{2}+8n+5}{2}=2n^{2}+4n+\frac{5}{2}\). ## 4. Relations in the triangle of descendant triples The primitive Pythagorean triples can be viewed as points in the 3-dimensional Euclidean space. In this section, we show that the descendants of any primitive Pythagorean triple in Berggren tree are vertices of a triangle, and we present our results related to the geometric relations in these triangles. Namely, we study the triangle defined by the triple of descendants. **Proposition 4.1**.: _Let \(P\) be a primitive Pythagorean triple. Then the points with the coordinates \(AP^{\top},BP^{\top},CP^{\top}\) are not collinear._ Proof.: Let \(P=(x,y,z)\). We consider the vectors \(\vec{u}=BP^{\top}-AP^{\top}\) and \(\vec{v}=CP^{\top}-AP^{\top}\). It is easy to show that \[\vec{u} =(B-A)P^{\top}=\begin{pmatrix}0&4&0\\ 0&2&0\\ 0&4&0\end{pmatrix}\cdot\begin{pmatrix}x\\ y\\ z\end{pmatrix}=\begin{pmatrix}4y\\ 2y\\ 4y\end{pmatrix},\] \[\vec{v} =(C-A)P^{\top}=\begin{pmatrix}-2&4&0\\ -4&2&0\\ -4&4&0\end{pmatrix}\cdot\begin{pmatrix}x\\ y\\ z\end{pmatrix}=\begin{pmatrix}-2x+4y\\ -4x+2y\\ -4x+4y\end{pmatrix}.\] By way of contradiction, assume that these vectors are linearly dependent, i.e., that there exists nonzero real number \(k\) such that \(\vec{u}=k\vec{v}.\) Then \(k(-2x+4y,-4x+2y,-4x+4y)\), hence \(4y=k(-2x+4y)\) and \(4y=k(-4x+4y)\). This implies that \[k(-2x+4y) =k(-4x+4y)\] \[-2x+4y =-4x+4y\] \[x =2x\] which is a contradiction. Therefore, \(\vec{u},\vec{v}\) are linearly independent and the points \(AP^{\top},BP^{\top},CP^{\top}\) are not collinear. This implies that the points with the coordinates \(AP^{\top},BP^{\top},CP^{\top}\) determine a plane (a triangle). In the following propositions, we express the equation of this plane (the area of this triangle). **Proposition 4.2**.: _Let \(P=(x,y,z)\) be a primitive Pythagorean triplet. Then the points with the coordinates \(AP^{\top},BP^{\top},CP^{\top}\) belong to plane \(2a+2b-3c+z=0.\)_ Proof.: Analogously like in the previous proof, we consider vectors \(\vec{u}=BP^{\top}-AP^{\top}\) and \(\vec{v}=CP^{\top}-AP^{\top}\). Firstly, we compute the normal vector of the wanted plane as cross product of these vectors: \(\vec{u}\times\vec{v}=(8xy,8xy,-12xy)\thickapprox(2,2,-3).\) Hence, \(2a+2b-3c+d=0\) is the equation of wanted plane for some \(d\in\mathbb{R}.\) Since \(AP^{\top}\) belongs to this plane, we get \(2(x-2y+2z)+2(2x-y+2z)-3(2x-2y+3z)+d=0\) which yields \(d=z.\) Therefore, the equation of the wanted plane is \(2a+2b-3c+z=0.\) **Proposition 4.3**.: _Let \(P=(x,y,z)\) be a primitive Pythagorean triple. Then the points with the coordinates \(AP^{\top},BP^{\top},CP^{\top}\) are vertices of a triangle with the area \(2xy\sqrt{17}.\)_ Proof.: To compute the area of the triangle \(AP^{\top},BP^{\top},CP^{\top}\), we use the vectors \(\vec{u}=BP^{\top}-AP^{\top}\) and \(\vec{v}=CP^{\top}-AP^{\top}\). According to the previous proof, \(\vec{u}\times\vec{v}=(8xy,8xy,-12xy)\), therefore the area of the triangle is \[\frac{|\vec{u}\times\vec{v}|}{2}=\frac{\sqrt{64x^{2}y^{2}+64x^{2}y^{2}+144x^{ 2}y^{2}}}{2}=\frac{\sqrt{272x^{2}y^{2}}}{2}=2xy\sqrt{17}.\] Primitive pythagorean triple can be viewed as a right triangle and the points corresponding to the descendants of a PPT in Beggren tree also form a triangle. Natural question arises - can this triangle of descendants be also a right triangle? The following proposition offers the answer to this question. **Proposition 4.4**.: _Let \(P\) be a primitive Pythagorean triple. The points with coordinates \(AP^{\top},BP^{\top},CP^{\top}\) form a non-right triangle._ Proof.: Let \(P=(x,y,z)\) and let the vectors \(\vec{u},\vec{v},\vec{w}\) be as follows: \[\vec{u}=BP^{\top}-AP^{\top},\vec{v}=CP^{\top}-AP^{\top},\vec{w}=CP^{\top}-BP^{ \top}.\] To show that the triangle with the vertices \(AP^{\top},BP^{\top},CP^{\top}\) is a non-right triangle, it is sufficient to show that \(\vec{u}\cdot\vec{v}\neq 0,\vec{u}\cdot\vec{w}\neq 0,\vec{v}\cdot\vec{w}\neq 0.\) According to the proof of Proposition 4.1, \(\vec{u}=(4y,2y,4y)\) and \(\vec{v}=(-2x+4y,-4x+2y,-4x+4y)\). Analogously, \[\vec{w}=(C-B)P^{\top}=\begin{pmatrix}-2&0&0\\ -4&0&0\\ -4&0&0\end{pmatrix}\cdot\begin{pmatrix}x\\ y\\ z\end{pmatrix}=\begin{pmatrix}-2x\\ -4x\\ -4x\end{pmatrix}.\] It follows that \(\vec{u}\cdot\vec{v}=(4y,2y,4y)\cdot(-2x+4y,-4x+2z,-4x+4y)=-32xy+36y^{2}\). By way of contradiction, assume that \(\vec{u}\cdot\vec{v}=0\). Since \(x,y>0\), we get \(y=\frac{8x}{9}\). However, \((x,y,z)\) is a primitive Pythagorean triple which yields \[x^{2}+\frac{64x^{2}}{81} =z^{2}\] \[\sqrt{145}\frac{x}{9} =z\] which is a contradiction with \(z\in\mathbb{N}\). Similarly, \(\vec{u}\cdot\vec{w}=(4y,2y,4y)\cdot(-2x,-4x,-4x)=-32xy\) and from \(x,y>0\) it follows that \(\vec{u}\cdot\vec{w}\neq 0\). Finally, \(\vec{v}\cdot\vec{w}=(-2x+4y,-4x+2z,-4x+4y)\cdot(-2x,-4x,-4x)=36x^{2}-32xy\). If \(\vec{v}\cdot\vec{w}=0\), then we get a contradiction analogously to the case \(\vec{u}\cdot\vec{v}\). Using the results from this section, it is easy to determine the lengths of inradius and radius of the circumcircle of of triangle whose vertices are descendants of a Pythagorean triple. **Lemma 4.5**.: _Let \(P\) be a primitive Pythagorean triple. Let the vectors \(\vec{u},\vec{v},\vec{w}\) be as follows: \(\vec{u}=BP^{\top}-AP^{\top},\vec{v}=CP^{\top}-AP^{\top},\vec{w}=CP^{\top}-BP^ {\top}.\) Then the triangle with the vertices \(AP^{\top},BP^{\top},CP^{\top}\) has inradius_ \[r=\frac{|\vec{u}\times\vec{v}|}{|\vec{u}|+|\vec{v}|+|\vec{w}|}\] _and radius of the circumcircle_ \[R=\frac{|\vec{u}|\cdot|\vec{v}|\cdot|\vec{w}|}{2|\vec{u}\times\vec{v}|}.\] Proof.: It is clear to see that \(|\vec{u}|,|\vec{v}|,|\vec{w}|\) are the lengths of the sides of this triangle. We use well known formulas for area of a triangle \[S=\frac{|\vec{u}\times\vec{v}|}{2},\quad S=\frac{|\vec{u}|+|\vec{v}|+|\vec{w }|}{2}\cdot r,\quad S=\frac{|\vec{u}|\cdot|\vec{v}|\cdot|\vec{w}|}{4R}\] to express the inradius \(r\) and the radius of the circumcircle \(R\), respectively. **Proposition 4.6**.: _Let \(P=(x,y,z)\) be a primitive Pythagorean triple. Then the triangle with the vertices \(AP^{\top},BP^{\top},CP^{\top}\) has inradius_ \[r=\frac{2xy\sqrt{17}}{3x+3y+\sqrt{9x^{2}-16xy+9y^{2}}}\] _and radius of the circumcircle_ \[R=\frac{9\sqrt{9x^{2}-16xy+9y^{2}}}{\sqrt{17}}.\] Proof.: According to the proofs of Proposition 4.3 and Proposition 4.4, it holds that \[\vec{u} =(4y,2y,4y),\] \[\vec{v} =(-2x+4y,-4x+2y,-4x+4y),\] \[\vec{w} =(-2x,-4x,-4x),\] \[|\vec{u}\times\vec{v}|=\sqrt{272x^{2}y^{2}}=4xy\sqrt{17}.\] Therefore, \[|\vec{u}| =\sqrt{16y^{2}+4y^{2}+16y^{2}}=6y,\] \[|\vec{v}| =\sqrt{36x^{2}-64xy+36y^{2}}=2\sqrt{9x^{2}-16xy+9y^{2}},\] \[|\vec{w}| =\sqrt{4x^{2}+16x^{2}+16x^{2}}=6x.\] Then according to the Lemma 4.5, the inradius is \[r=\frac{4xy\sqrt{17}}{6x+6y+2\sqrt{9x^{2}-16xy+9y^{2}}}=\frac{2xy\sqrt{17}}{3x +3y+\sqrt{9x^{2}-16xy+9y^{2}}},\] and the radius of the circumcircle is \[R=\frac{6y\cdot 2\sqrt{9x^{2}-16xy+9y^{2}}\cdot 6x}{2\cdot 4xy\sqrt{17}}=\frac{9 \sqrt{9x^{2}-16xy+9y^{2}}}{\sqrt{17}}.\] ## 5. Conclusion To view Primitive pythagorean triples as coordinates of points in 3-dimensional space opens interesting posibilities of exploring the properties of Pythagorean triples. We intend to continue our study of PPT and bring more of the related results. Namely we would like to focus on describing the metric properties of descendants of PPT in Berggren tree.
2308.13787
The Spatial Distribution of the Unidentified 2.07 \textmu m Absorption Feature on Europa and Implications for its Origin
A weak absorption feature at 2.07 \textmu m on Europa's trailing hemisphere has been suggested to arise from radiolytic processing of an endogenic salt, possibly sourced from the interior ocean. However, if the genesis of this feature requires endogenic material to be present, one might expect to find a correlation between its spatial distribution and the recently disrupted chaos terrains. Using archived near-infrared observations from Very Large Telescope/SINFONI with a $\sim$1 nm spectral resolution and a linear spatial resolution $\sim$130 km, we examine the spatial distribution of this feature in an effort to explore this endogenic formation hypothesis. We find that while the presence of the 2.07 \textmu m feature is strongly associated with the irradiation pattern on Europa's trailing hemisphere, there is no apparent association between the presence or depth of the absorption feature and Europa's large-scale chaos terrain. This spatial distribution suggests that the formation pathway of the 2.07 \textmu m feature on Europa is independent of any endogenous salts within the recent geology. Instead, we propose that the source of this feature may simply be a product of the radiolytic sulfur cycle or arise from some unidentified parallel irradiation process. Notably, the 2.07 \textmu m absorption band is absent from the Pwyll crater ejecta blanket, suggesting that radiolytic processing has not had enough time to form the species responsible and placing a lower limit on the irradiation timescale. We are unable to find a plausible spectral match to the 2.07 \textmu m feature within the available laboratory data.
M. Ryleigh Davis, Michael E. Brown, Samantha K. Trumbo
2023-08-26T06:58:53Z
http://arxiv.org/abs/2308.13787v1
The Spatial Distribution of the Unidentified 2.07 um Absorption Feature on Europa and Implications for its Origin ###### Abstract A weak absorption feature at 2.07 um on Europa's trailing hemisphere has been suggested to arise from radiolytic processing of an endogenic salt, possibly sourced from the interior ocean. However, if the genesis of this feature requires endogenic material to be present, one might expect to find a correlation between its spatial distribution and the recently disrupted chaos terrains. Using archived near-infrared observations from Very Large Telescope/SINFONI with a \(\sim\)1 nm spectral resolution and a linear spatial resolution \(\sim\)130 km, we examine the spatial distribution of this feature in an effort to explore this endogenic formation hypothesis. We find that while the presence of the 2.07 um feature is strongly associated with the irradiation pattern on Europa's trailing hemisphere, there is no apparent association between the presence or depth of the absorption feature and Europa's large-scale chaos terrain. This spatial distribution suggests that the formation pathway of the 2.07 um feature on Europa is independent of any endogenous salts within the recent geology. Instead, we propose that the source of this feature may simply be a product of the radiolytic sulfur cycle or arise from some unidentified parallel irradiation process. Notably, the 2.07 um absorption band is absent from the Pwyll crater ejecta blanket, suggesting that radiolytic processing has not had enough time to form the species responsible and placing a lower limit on the irradiation timescale. We are unable to find a plausible spectral match to the 2.07 um feature within the available laboratory data. Galilean satellites(627) -- Europa(2189) -- Planetary surfaces(2113) -- Surface composition(2115) -- Surface ices(2117) -- Infrared spectroscopy(2285) -- Very Large Telescope(1767) + Footnote †: journal: PSJ 0000-0002-8002-8000]M. Ryleigh Davis 0000-0002-4880-0002]Michael E. Brown 0000-0002-4880-0002]Samantha K. Trumbo ## 1 Introduction Beneath its icy shell, Jupiter's moon Europa hosts a deep global salty ocean that is likely in direct contact with the silicate mantle. When combined with a possible supply of radiolytically produced oxidants from the surface, this water-rock interaction makes Europa a prime target for astrobiological consideration (e.g. Anderson et al., 1998; Kivelson et al., 2000; Chyba, 2000; Hand et al., 2009). Our current best window into Europa's largely unconstrained ocean chemistry is through the surface composition. Specifically, the recently geologically active chaos terrains, bands, and ridges show clear spectroscopic evidence for the presence of salt hydrates that are likely sourced from the subsurface ocean (e.g. McCord et al., 1999; Johnson et al., 2002; Leblanc et al., 2002; Dalton et al., 2005; Shirley et al., 2010; Trumbo et al., 2019, 2020). These large scale chaos terrains, characterized by polygonal iceberg like blocks of ice within a matrix (Carr et al., 1998), are interpreted to be areas of focused heat flow and possibly local melting which may have exposed oceanic material (e.g Greenberg et al., 1999; Collins and Nimmo, 2009). However, exogenic sulfur ions from Io's volcanoes are continuously deposited onto Europa's trailing hemisphere, and bombardment by magnetospheric ions, protons, and electrons (Pospieszalska and Johnson, 1989; Cooper et al., 2001; Paranicas et al., 2001, 2009) drives a radiolytic sulfur cycle. This cycle chemically alters the surface composition and creates a "bull's-eye" pattern of hydrated sulfuric acid centered at the trailing hemisphere apex (270degW, 0degN) (Carl son et al., 1999, 2002, 2005). Understanding the source and nature of exogenic material and radiolytic processing on Europa's surface and disentangling the original state of the endogenic material is therefore critical for interpreting Europa's surface composition and inferring the properties of its underlying ocean. Europa's surface composition includes crystalline and amorphous water ice (Hansen and McCord, 2004), radiolytically produced hydrated sulfuric acid on the trailing hemisphere (Carlson et al., 1999, 2005), and an additional hydrated salt component that is largely constrained to the recently geologically active chaos terrains, bands, and ridges (McCord et al., 1999; Dalton et al., 2005; Dalton III et al., 2012). The distribution of this additional salt hydrate is consistent with Europa's recent geologic features containing endogenic material sourced from the subsurface. Irradiation produced O\({}_{2}\) and hydrogen peroxide (H\({}_{2}\)O\({}_{2}\)) (Johnson et al., 2003; Loeffler et al., 2006; Trumbo et al., 2019) as well as CO\({}_{2}\)(McCord et al., 1998; Hansen and McCord, 2008; Carlson et al., 2009) have also been confirmed on Europa's surface. Recent observations with HST/STIS revealed the presence of absorption features at 230 and 450 nm, spatially constrained to the leading hemisphere chaos terrains (Trumbo et al., 2019, 2022), which closely match color center features produced in laboratory data of irradiated sodium chloride (NaCl) (Hand and Carlson, 2015; Denman et al., 2022; Brown et al., 2022). Together, these results provide strong evidence for the presence of endogenous NaCl in Europa's leading hemisphere chaos. Additional endogenic salt species, which are spectroscopically distinct from the hydrated sulfuric acid, may also exist within Europa's chaos terrains (e.g. Fischer et al., 2015; Trumbo et al., 2020), however their precise nature is widely debated. Some analyses of Galileo/NIMS spectra have suggested that Europa's endogenous hydrate may be dominated by sulfate salts (e.g. McCord et al., 1998; Dalton et al., 2005; Dalton III et al., 2012), while more recent studies with higher spectral resolution data sets suggest a chloride dominated composition (Brown and Hand, 2013; Ligier et al., 2016). Further complicating the picture, the hydrated material of the leading hemisphere chaos is spectroscopically distinct from the hydrated material in the trailing hemisphere chaos, presumably due to the significant exogenous alteration which occurs on the trailing hemisphere (Fischer et al., 2015, 2016; Trumbo et al., 2020). Trumbo et al. (2020) demonstrated that a 360 nm absorption feature and 700 nm slope change strongly correlate to the trailing hemisphere chaos terrains and reflect a combination of endogenous and exogenous processes. They suggest that while relatively unaltered endogenic material may persist in the comparatively sheltered leading hemisphere chaos terrains, it may become progressively more chemically altered towards the trailing hemisphere apex due to radiolytic processing. A weak absorption feature at 2.07 um is also present on Europa's trailing hemisphere. It was first noted in the high resolution Keck/OSIRIS observations of Brown and Hand (2013), who suggested epsomite (MgSO\({}_{4}\cdot 7\) H\({}_{2}\)O) or magnesium sulfate brine as a plausible spectral match. With no sign of a sulfate absorption on Europa's leading hemisphere, they suggested that magnesium chloride (MgCl\({}_{2}\)) sourced from Europa's interior may be radiolytically converted into epsomite, a hydrated magnesium sulfate (MgSO\({}_{4}\)), when in the presence ofogenic sulfur ions deposited onto the trailing hemisphere. While it is possible that this conceptual model of the alteration of chloride-rich endogenic salts into sulfates via sulfur radiolysis could be consistent with the visible-wavelength data of the trailing hemisphere (Hibbitts et al., 2019; Trumbo et al., 2020), more laboratory data is needed to test this hypothesis. Ligier et al. (2016), on the other hand, used VLT/SINFONI observations to show that the 2.07 um absorption feature can also be reproduced by certain combinations of MgCl\({}_{2}\) and perchlorate (Mg(ClO\({}_{4}\))\({}_{2}\)) salts, despite neither one having a distinct absorption on their own. The authors suggested that endogenic magnesium-chloride may be radiolytically converted into magnesium chlorine and perchlorate, rather than sulfate, on Europa's trailing hemisphere. Both proposed origins for the 2.07 um absorption feature (magnesium-bearing sulfates versus chlorine salts) share one important characteristic. They require a contribution of endogenic material, presumably tied to the ocean chemistry, which is chemically altered into the species responsible for the absorption by the trailing hemisphere irradiation environment. In both of these scenarios we would expect to find a spatial correlation between the presence and depth of the 2.07 um feature and the large-scale geologic units or chaos terrains, where we would expect geologic resurfacing to have emplaced endogenic, salty material. The trailing hemisphere chaos terrains are spectroscopically distinct from their leading hemisphere counterparts (Trumbo et al., 2020), presumably due to irradiation induced chemical alteration of this endogenic material. Indeed, the best fit spectral models from Ligier et al. (2016) show an enhancement of MgCl\({}_{2}\) in the chaos terrains on both hemispheres and magnesium perchlorate confined mainly to the trailing hemisphere chaos, consistent with the 2.07 um feature arising from a mixture of endogenic chloride and its irradiation product. However, their models do not map the 2.07 um absorption feature directly, but rather rely on linear mixture modeling of the largely featureless near infrared continuum, and degeneracies amongst the various salt hydrates make positive identification difficult. A more recent study by King et al. (2022) combined VLT/SPHERE and Galileo/NIMS observations in an attempt to model Europa's surface composition across the near-infrared. Their best fit models include sodium chloride and show distributions for both magnesium sulfate and magnesium bearing chlorine species, which are broadly consistent with both hypotheses. However, they point out that the uncertainties in the model abundances and degenerate fit results for various compositional mixtures mean confident detection of any individual salt species is not feasible with the existing data sets. Previous observations indicate that irradiated NaCl is likely present and spatially concentrated in the leading hemisphere chaos terrains (Trumbo et al., 2022), and confirmation of endogenic MgCl\({}_{2}\) or its radiolytic products could provide crucial information for constraining Europa's ocean and surface chemistry. While atomic sodium and potassium, as well as chlorine ions, likely sputtered from the surface, have been detected in Europa's atmosphere or a nearby pickup cloud (Brown and Hill, 1996; Brown, 2001; Volwerk et al., 2001), attempts to measure magnesium in Europa's atmosphere were unsuccessful. Horst and Brown (2013) provide an upper limit on the atmospheric Na/Mg ratio of at least a factor of two less than meteoritic or cosmic abundances, and the presence of magnesium bearing salts on Europa's surface remains unproven. Thus, the detection of MgCl\({}_{2}\) along with NaCl on Europa's surface may have important implications for the relative balance of various cation and anion species within Europa's ocean. In the relatively simple freezing models of Johnson et al. (2019), for example, the presence of MgCl\({}_{2}\) could suggest a sulfate-poor, sodium, magnesium and chloride rich ocean with a low pH due the presence of magnesium ions. In this paper, we use high spatial and moderate spectral resolution archived VLT/SINFONI data to explore the spatial distribution of the 2.07 um feature and determine whether its presence is associated with the large-scale chaos terrains, which could confirm its suspected relationship to endogenic salts and possibly Europa's ocean chemistry. With two possible origins of Europa's 2.07 um absorption feature arising from radiolytic processing of endogenic MgCl\({}_{2}\) and no clear resolution for the genesis of the feature from spectral modeling, we turn to the spatial distribution of the absorption across Europa's trailing hemisphere in an attempt to gain insight. ## 2 Observations & Data Reduction We use archived spatially resolved, moderate-resolution, ground-based near-infrared spectra from VLT/SINFONI acquired with adaptive optics during ESO program 088.C-0833(A) at Paranal Observatory and originally published in Ligier et al. (2016). The data were obtained by Ligier et al. (2016) over five nights between October 2011 and January 2012 in the H+K band setting, covering a wavelength range of 1.452 - 2.447 um at a spectral resolution of \(\sim\)1 nm (\(R\)\(\sim\)\(\frac{\lambda}{\Delta\lambda}\) = 1500). Europa's angular diameter on-sky ranged from 0.897" to 1.083", so to cover the full disk with the 0.8" \(\times\) 0.8" field of view, integrations were taken in five offset frames and co-added into a single mosaic across the full disk. The on-sky spatial sampling of the SINFONI instrument is 12.5 \(\times\) 25 mas, resulting in spatial pixels of \(\sim\) 35 \(\times\) 70 km at the center of Europa's disk. We note that the spatial scale of the SINFONI pixels over-samples the diffraction-limited spatial resolution of the VLT at the wavelengths of interest, which is \(\sim\) 130 km at the center of Europa. More detailed information on these observations can be found in Table 1 in Ligier et al. (2016). Figure 1: Example VLT/SINFONI spectra of Europa’s leading and trailing hemispheres showing the 2.07 μm absorption feature, which is only present on Europa’s more heavily irradiated trailing hemisphere. The data are normalized to 1.0 at 2.2 μm and the trailing hemisphere spectrum is offset by +0.5 for clarity. The vertical dotted line marks the location of the 2.07 μm band center and the continuum-removed absorption (multiplied by a factor of 3) is included above each spectrum. We downloaded the raw data and associated calibration files from the ESO Science Archive Facility and followed a similar reduction routine as Ligier et al. (2016). As a first step in the data reduction process, we used the ESO SINFONI data reduction pipeline v3.3.3 with EsoRex (Modigliani et al., 2007). The pipeline identified bad, non-linear and hot pixels, performed standard dark and flat lamp corrections, corrected for optical distortions in the image slicer slitlets, determined a calibrated wavelength solution, and computed the 2D to 3D mapping to create a 3D data cube with two spatial dimensions and one spectral dimension for each observation. We interrupted the pipeline before the five offset frames were combined into a single mosaic to remove correlated regions of bad-pixel patches, which were not identified by the pipeline, and which were also noted by Ligier et al. (2016). We identified these bad pixel regions using the spectral angle mapping criterion (SAM) (Kruse et al., 1993), which describes the degree of similarity between two spectra. We masked out pixels with a SAM angle that differed by more than \(\sim\)10% from the median SAM angle between all individual spectra and the combined median in a 10\(\times\)10 pixel bounding box centered on the bad pixel. These bad pixel regions, which makeup \(\sim\)7% of the total detector pixels, were excluded from the final co-added data cube created after running the remainder of the ESO pipeline. While Ligier et al. (2016) do not indicate what method they used to identify these remaining bad pixel patches, we find that our identified bad pixel patches closely match the pixels which appear to be masked in Figure 3 of Ligier et al. (2016). Next, we used the standard star observations of the solar-type star HD18681, which were taken immediately before or after the Europa observations on each night, to correct for the instrument response, remove telluric absorption features, and derive a relative reflectance calibration. Due to the range of air masses over which the five Europa cubes and the calibration star were observed each night, an on-sky angular separation between Europa and HD18681 of \(\sim\)17\({}^{\circ}\), and possibly changing atmospheric conditions, we found a simple division by the telluric standard was not sufficient. The quality of the telluric correction is particularly important for our analysis of the 2.07 um absorption, which is just beyond telluric CO\({}_{2}\) and O\({}_{2}\) absorptions between 2.0 and 2.05 um. We therefore followed a similar telluric correction procedure as described in Brown and Hand (2013) and continuum divided the observed telluric spectrum by a best-fit polynomial to construct an atmospheric transmission spectrum. We then empirically determined an exponential scaling factor which adjusts the depths of the telluric absorptions in the transmission spectrum to most closely match the telluric signature in each of the Europa observations, paying particular attention to the quality of the telluric removal of the CO\({}_{2}\) and O\({}_{2}\) features between 2.0 and 2.05 um. Each observation consists of hundreds of spectra, which allowed us to determine the optimal scaling factor to very high precision. The original stellar continuum fit was multiplied by this best-fit atmospheric transmission spectrum to create a new standard star spectrum with an appropriately scaled telluric signature, which was then divided from the Europa observations. As described in Ligier et al. (2016), we used a Lambertian surface to calculate the geometrically corrected reflectance spectrum of each pixel and corrected for small offsets in the relative reflectance of overlapping regions on Europa's surface obtained during different nights by scaling the band-integrated flux of each observation cube so that the integrated reflectance matched in these overlapping regions. The final data product is a series of data cubes, one for each hemispherical view on Europa (centered at 30\({}^{\circ}\)W, 55\({}^{\circ}\)W, 130\({}^{\circ}\)W, 225\({}^{\circ}\)W, and 315\({}^{\circ}\)W) with the corresponding relative reflectance spectrum for each pixel 1. A characteristic leading and trailing hemisphere spectrum, at (90\({}^{\circ}\)W, 0\({}^{\circ}\)N) and (270\({}^{\circ}\)W, 0\({}^{\circ}\)N) respectively, are shown in Figure 1, with the 2.07 um absorption feature present only in the trailing hemisphere spectrum. Footnote 1: The full, reduced dataset is permanently archived at [https://doi.org/10.22002/agga7-v3393](https://doi.org/10.22002/agga7-v3393) ## 3 Spatial distribution With no clear resolution of the source of the 2.07 um feature from previous studies (Brown and Hand, 2013; Ligier et al., 2016), we turn to its spatial distribution in order to determine whether its presence is associated with the large-scale chaos terrains, which could confirm its suspected relationship to endogenic salts and possibly Europa's ocean chemistry. We note that endogenic salts are expected within Europa's bands and ridges as well as within the chaos terrains (e.g. Dalton III et al., 2012). Indeed, while the exact formation mechanism of these bands and ridges is not known, many proposed mechanisms require a direct link between the ice shell and ocean (e.g. Head et al., 1999; Fagents et al., 2000; Prockter and Patterson, 2009; Johnson et al., 2017; Howell and Pappalardo, 2018). However, ridges on Europa are typically a few hundred meters to a few kilometers wide (Collins and Nimmo, 2009) and bands are no wider than 30 km (Prockter et al., 2002). At the spatial resolution of the VLT (\(\sim\)130 km) the fractional coverage of a band or ridge feature within a single pixel is small. If irradiation of endogenic salts were responsible for the Figure 2: (a) A map of Europa’s trailing hemisphere, showing a selection of four of the 29 considered “rings of constant radiolysis” outlined in black, which are used to determine whether the weak 2.07 μm absorption feature is associated with Europa’s large-scale geologic units. The chaos terrains are shown in red and Pwyll crater and its ejecta are orange. For each ring, projected detector pixels are shown and their assigned geologic unit indicated by color, where chaos terrains are dark red, Pwyll crater and its ejecta are orange, and the ridged plains are gray. (Europa map image credit: NASA/JPL/ Björønønsson). (b) Summed spectra, by geologic unit, for all of the pixels within each example ring. Spectra are normalized to 1.0 at 2.12 μm and offset for clarity. The best-fit polynomial continuum is indicated by a black dashed line. (c) Continuum removed absorption feature, by geologic unit, for the four selected example constant radiolysis rings. The depth of the feature decreases noticeably as we move from Ring 1 outwards, however there is no apparent difference in the depth of the 2.07 μm feature between the chaos terrains (red) and ridged plains (gray) for a given ring. This suggests that the absorption feature is unaffected by the compositional difference between the chaos terrains and ridged plains, and it is therefore unlikely to be related to any of the proposed irradiation products of endogenic material. Instead, the 2.07 μm feature appears to arise from the radiolytic processing of water ice with exogenic material, possibly related to the sulfur radiolytic cycle. One notable exception is that the 2.07 μm absorption feature is entirely absent from Pwyll crater and its ejecta blanket. 2.07 um feature, we would therefore expect to measure a difference in the band strength between the large-scale chaos terrains which fully cover a single VLT spatial resolving element, and the background plains, which are overlaid by bands and ridges that only cover a small fraction of the spatial resolving element and are therefore unresolved. Indeed, the spectral signature of irradiated endogenic material correlated with the trailing hemisphere chaos terrains is seen in the maps of Trumbo et al. (2020) which have a comparable spatial resolution of \(\sim\)150 km. The 360 nm absorption feature attributed to radiolytically processed endogenic material, for example, is strongest within the large scale chaos terrains and is significantly weaker or absent from the non-chaos pixels on the trailing hemisphere. If confirmed, a similar association of the 2.07 um absorption band with Europa's large-scale chaos terrains would therefore support the hypotheses of Brown and Hand (2013) and Ligier et al. (2016), which suggest the absorption arises from some radiolytic product of endogenic material. One common way to examine the spatial distribution of an absorption feature is to map its depth or area. However, the continuum shape between 2.0 and 2.2 um changes significantly across Europa's surface, largely due to variation in the strength of the 2.0 um water absorption band. As can be seen in Figure 1, the change in the slope, concavity, and overall spectral shape between the leading and trailing hemispheres is larger than the depth of the 2.07 um feature. The shape of the spectrum in this region varies gradually across Europa's surface, and we find that challenges in precisely defining this changing continuum, combined with significant pixel-to-pixel noise inherent in the IFU data, lead to large uncertainties in the measured band strength. Another option is to compare the depth of the 2.07 um feature between the chaos and non-chaos terrains integrated over the entire trailing hemisphere. However, the irradiation bullseye is symmetric about the trailing apex while the distribution of the chaos terrains on Europa's trailing hemisphere is not. As a result, the integrated chaos terrain spectrum samples areas on Europa with a sufficiently different continuum spectrum than is sampled by the integrated non-chaos pixels, such that it is difficult to determine whether these continuum differences wash out the signal of a geologic association or if the feature simply does not correlate with Europa's geology. We therefore construct a new technique wherein we consider rings of approximately constant radiolysis, where the overall continuum between \(\sim\)2 to 2.2 um is much more constant, and compare the strength of the 2.07 um feature for integrated spatial pixels that are within different large-scale geologic units and receive a similar irradiation dose. To perform this spatial analysis, we first define the large-scale geologic units using the new United States Geologic Survey (USGS) global geologic map of Europa, which maps geologic units including craters, chaos terrains, bands, and regional plains at a scale of 1:15M (Leonard et al., 2023). Because the width of the bands and ridges are much smaller than our \(\sim\)130 km spatial resolution, we exclude these from our large-scale geologic units. We also exclude ten small craters with diameters less than \(\sim\)100 km and several small patches of discontinuous crater ejecta which are unresolved at the spatial resolution of the VLT. This leaves us with three Figure 3: (a) Expected trailing hemisphere irradiation patterns as a function of the angular distance from the trailing hemisphere apex for cold sulfur ions (solid) (Cassidy et al., 2013), radiolytically produced hydrated sulfuric acid (dashed) (e.g. Carlson et al., 2002; Ligier et al., 2016), and 0.1 – 25 MeV electrons (dotted) (Paranicas et al., 2009). The estimated energy fluxes are normalized to 1.0 at the trailing hemisphere apex. (b) Calculated band area of the 2.07 μm absorption feature for each of the 29 “rings of constant radiolysis” considered. We find no measurable difference in the absorption depth between the chaos terrains and ridged plains. However, the absorption feature is essentially absent from Pwyll crater and its ejecta blanket. (c) Calculated 2.07 μm band area of the chaos terrains from panel b, with the chaos terrains further subdivided by type as described in Leonard et al. (2023). There is no measurable difference in the 2.07 μm band area between the different types of chaos terrain. distinct large-scale geologic units on Europa's trailing hemisphere - chaos terrains, ridged plains, and Pwyll crater including its large ejecta blanket. The chaos terrains can be further broken down into three sub-types - low albedo chaos, mottled chaos, and knobby chaos as defined in Leonard et al. (2023). We then define a set of "constant radiolysis rings" which cover the trailing hemisphere, where each ring contains pixels that are expected to receive a similar irradiation dose. While the idea is simple, the exact irradiation geometry on Europa is complex and remains an active area of research (Paranicas et al., 2009; Cassidy et al., 2013; Bagenal and Dols, 2020). The location dependent irradiation dose depends on a number of factors, including the type of particle, its energy, and the complex interaction between Jupiter's magnetosphere, the plasma torus, and Europa's ionosphere and a variety of irradiation patterns is possible (Bagenal and Dols, 2020). However, sulfuric acid production via the sulfur radiolytic cycle relies on the presence of sulfur ions from the plasma torus as well as radiolytic processing from energetic electrons and light ions, which are important sources of ionization energy that drive surface chemistry changes on Europa (Paranicas et al., 2001, 2002, 2009; Cassidy et al., 2013). The shape of the sulfuric acid bullseye is therefore a simple proxy for the total radiolytic processing from the combined effects of the plasma torus and magnetospheric particle bombardment occurring at a given location on Europa's trailing hemisphere. To match the shape of the sulfuric acid bullseye, we start with a series of overlapping rings centered on the trailing hemisphere apex and defined by \(\cos(\theta)\), where \(\theta\) is the angular distance from the trailing hemisphere apex. We then empirically determine a latitudinal flattening coefficient of 0.6 so that the shape of the rings matches the sulfuric acid bullseye from Figure 10 in Ligier et al. (2016) and set the width of the rings to match the diffraction-limited spatial resolution of the VLT. Figure 2 shows a selection of four example rings distributed across the trailing hemisphere. Once we have defined our constant radiolysis rings and large-scale geologic units, we assign each data pixel to its respective ring and geologic unit. Data pixels which overlap, typically because they were observed on two separate nights, are separated into multiple polygons and intersecting regions are averaged. In this way, we preserve the maximum possible spatial information from the dataset. In order to account for the wings of the point spread function (PSF), we exclude any spatial pixels whose central coordinates are within one spatial resolving element of a boundary between geologic units. Spatial pixels which cross the boundary of a ring are assigned to whichever ring intersects the largest fraction of the pixel. Pixels with more than 40% of their area in two separate rings are duplicated and assigned to both. Once all of the pixels have been categorized, we sum the spectra from all of the pixels assigned to a given geologic unit and constant radiolysis ring. We then normalize these integrated spectra at 2.12 um, just past the absorption feature of interest, and plot the spectra for each geologic unit in order to compare the depth of the 2.07 um feature. Most of the rings only have integrated spectra for the chaos terrains and ridged plains, however a few rings also contain pixels from Pwyll crater and its ejecta blanket. The irradiation pattern is expected to be symmetric about the trailing hemisphere apex (Paranicas et al., 2009; Hendrix et al., 2011; Cassidy et al., 2013), so we do not expect the east-west dichotomy in the terrain types, with significantly more chaos towards the sub-Jovian hemisphere and more plains towards the anti-Jovian hemisphere, to affect the results of our analysis. We also fit a fifth order polynomial continuum to each of the integrated spectra between 1.98 and 2.16 um, excluding the region between 2.04 and 2.1 um which contains the absorption feature and compare the continuum removed absorptions for the different geologic units. We choose a fifth order polynomial because it is able to provide a fair visual match to the continuum near the center of the trailing hemisphere as well as in the more icy regions where a third order polynomial produces an artificially deep absorption feature because the polynomial is not able to account for the concavity of the continuum shape, which is strongly influenced by the 2.0 um water ice absorption feature. We then continuum divide each spectrum and integrate the residual absorption between 2.05 and 2.1 um to compute an integrated band area. As can be seen for a selection of four constant radiolysis rings in Figure 2, visual inspection of the integrated and continuum removed spectra (panels (b) and (c)) reveals that there is no discernible difference in the depth of the 2.07 um feature between the chaos terrains (red) and surrounding ridged plains (gray). Indeed, we find no discernible difference in the absorption band between the chaos and ridged plains in any of the 29 rings of constant radiolysis, which completely cover the trailing hemisphere. Likewise, we do not find a measurable difference in the integrated band areas between the chaos and ridged plains units for each ring, as seen in Figure 3. We do not include error bars on the measured band areas in Figure 3 because we are unable to accurately calculate these uncertainties with the unknown effects of the changing continuum. However, we estimate a rough error of \(\sim 10^{-4}\) on the integrated band area based on the scatter in the measured band areas of adjacent, overlapping rings. Figure 3(c) shows the measured band areas for each chaos sub-type, when present in each of the 29 constant radiolysis rings. Separately considering each chaos sub-type (low albedo chaos, mottled chaos, and knobby chaos) identified by Leonard et al. (2023) also reveals no discernible difference in the depth of the 2.07 um feature between the chaos terrain sub-types. Most of the trailing hemisphere chaos terrains are comprised of low albedo chaos (\(\sim\)64% fractional coverage) and mottled chaos (\(\sim\)32%), with only \(\sim\)4% of the chaos at larger angular distances from trailing center classified as knobby chaos. It is interesting to note that the 2.07 um absorption feature is clearly absent from Pwyll crater and its ejecta blanket, even though Pwyll is located in a region with relatively high amounts of radiolysis. Ring 2 in Figure 2 highlights the absence of this feature at Pwyll, and also illustrates that significant differences in the depth of the 2.07 um feature between different large-scale geologic units are identifiable with our spatial analysis technique. The absence of a 2.07 um absorption feature within the Pwyll crater ejecta blanket is also clearly seen in Figure 3(b), where the measured band areas for Pwyll are very small compared to the band areas of the chaos terrains and ridged plains within the same constant radiolysis rings. While the strength of the 2.07 um absorption feature appears to be constant between the chaos terrains and ridged plains for each constant radiolysis ring, a clear decrease in the strength of the feature is seen as we move from the innermost rings with the highest expected radiolysis rates to the outermost rings where the 2.07 um feature is no longer seen. This decrease in band area is consistent with radiolytic production of the species responsible for the absorption feature. As a third and final check, we examined the division of the integrated ridged plains by the chaos terrain spectra and found that for some of the rings there was a broad dip in the ratioed spectrum between \(\sim\) 2 to 2.2 um, which is a clear spectral match to water ice. This subtle difference in the continuum shape seen between the chaos terrains and ridged plains is consistent with a slight excess of water ice and therefore a deeper 2.0 um water ice absorption band in the ridged plains relative to the chaos terrains. We find no sign of any additional structure in the ratioed spectra between 2.05 and 2.1 um, further confirming that the 2.07 um absorption feature is unaffected by the compositional differences between the large-scale chaos terrains and ridged plains on Europa's trailing hemisphere. ## 4 Discussion ### Chaos Terrains and Ridged Plains As evidenced by its absence on Europa's leading, less-irradiated hemisphere, the species responsible for the 2.07 um feature is most likely a radiolytic product formed via sulfur radiolysis or processing via magnetospheric particle bombardment. Our spatial distribution analysis reveals that the depth of Europa's 2.07 um feature is spatially associated with the sulfuric acid bulles-eye on the trailing hemisphere, which we use as a proxy for the amount of radiolytic processing due to the combined effects of the plasma torus and magnetospheric particle bombardment. This distribution is consistent with a radiolytic origin for the absorption feature. However, we find no evidence of any spatial association with the large-scale chaos terrains which would be expected if the formation pathway required radiolytic processing of endogenic salty material. For any given ring of constant radiolysis, we find no measurable difference in the band area of the 2.07 um absorption feature between the large-scale chaos terrains and the icier ridged plains suggesting that the irradiation process which creates the 2.07 um absorption is independent of any endogenic material within Europa's recent geology. With both the epsomite hypothesis of Brown and Hand (2013) and the magnesium chlorates hypothesis of Ligier et al. (2016) relying on the presence of irradiated endogenic magnesium chloride (MgCl\({}_{2}\)), and no known exogenic source of magnesium ions to Europa (Bagenal and Dols, 2020), we find that the spatial distribution of the 2.07 um absorption band is inconsistent with both of these proposed origins for the absorption feature. Instead, our results suggest that the species responsible for Europa's 2.07 um absorption feature is a radiolytic product of water ice and exogenically sourced material and is not affected by the presence of whatever endogenic salt component exists within the chaos terrains. ### Pwyll Crater Pwyll crater has an \(\sim\)26 km diameter with a bright ray ejecta blanket that extends over 1000 km (Greeley et al., 1998). Pwyll is thought to be young, with estimates for its age ranging from \(\sim\) 3 - 18 Myr (Bierhaus et al., 2001). The dark, red crater itself has apparently excavated endogenic material from a depth of \(\sim\)1 km (Garozzo et al., 2010) and high spatial resolution Galileo/NIMS observations show evidence of asymmetric bands interpreted as salt-rich material at very high concentrations (Fanale et al., 2000). However, the composition sharply transitions to be very ice-rich at the crater edge and throughout the ejecta blanket (Fanale et al., 2000). At the spatial resolution of the VLT, we do not resolve the dark salty crater itself, and are dominated by the spectral signature of the bright ice-rich ejecta blanket. The distinct lack of a 2.07 um absorption feature within the Pwyll crater ejecta blanket, despite its presence in both the chaos terrains and ridged plains, implies that radiolytic processing has not yet had enough time to form the species responsible. The lack of a 2.07 um absorption within Pwyll may simply mean that the irradiation timescale for the process which produces the absorption feature is longer than the age of Pwyll crater. Or alternatively, there may not have been time for enough exogenic material from the plasma torus or magnetosphere to build up at Pwyll thus limiting the production of the species responsible for the absorption. In either case, the absence of the absorption within the Pwyll crater ejecta blanket places a strong lower limit of the age of Pwyll on the timescale over which the species responsible for the 2.07 um absorption forms on Europa. ### Comparison with Irradiation Patterns Figure 3(b) shows the measured band area for each ring of constant radiolysis as a function of the angular distance due north from the trailing hemisphere apex. Panel (a) shows the expected shape of several distinct irradiation patterns on Europa's trailing hemisphere for comparison with the measured band areas. As previously noted, the absorption feature is absent from the Pwyll crater ejecta blanket, but both the chaos terrains and background ridged plains show a clear decrease in the band area with increasing angular distance. The measured fall-off in band area for the 2.07 um absorption feature with increasing distance from the trailing hemisphere apex is broadly consistent with a formation pathway driven by the irradiation pattern of the sulfur plasma, energetic ions and electrons, or the measured sulfuric acid bullseye. Cold sulfur plasma bombards the trailing hemisphere with a fall-off described by the cosine of the angle from trailing center (Hendrix et al., 2011; Cassidy et al., 2013), which we would expect to see reflected in the measured 2.07 um band depths if cold sulfur plasma controlled the formation of the absorption. If the formation pathway is instead driven by energetic electrons, we would expect a fall-off consistent with the shape of the power per unit area contours for 10 keV - 25 MeV electrons from Figure 8 in Paranicas et al. (2009). Or, if the formation pathway is driven by a combination of these irradiation patterns as expected for the sulfur radiolytic cycle, we might expect to find a fall-off consistent with the flattened cosine of the sulfuric acid bullseye. As can be seen in Figure 3(a), the differences in the shapes of these distributions are small near the center of the trailing hemisphere, where the 2.07 um absorption feature is strongest, and become more pronounced farther out where the feature has all but disappeared. Uncertainties in measuring the band depths for the various rings and a significant change in the continuum shape moving outwards from trailing center are sufficiently large so as to obscure the subtle differences which could differentiate between these irradiation signatures. The measured band areas for the chaos terrains and ridged plains in Figures 3(b) and (c) show a plausible match to all three irradiation patterns shown in panel a. Our results remain consistent with a variety of radiolytic production pathways and we are unable to determine whether the overall irradiation flux or the presence of any specific exogenic material is the limiting factor in the formation of the 2.07 um absorption. ### Potential Origins for the 2.07 um Absorption If the 2.07 um absorption feature does indeed arise from the radiolytic processing of water ice and exogenically sourced material on Europa's trailing hemisphere, the products of the known sulfur cycle offer an obvious first place to search for a plausible spectral match. Laboratory experiments have shown that various irradiation products can be created in electron or ion irradiated water ice bearing sulfur ions, or via sulfur ion bombardment of pure water ice. However, the specific products produced depend strongly on the temperatures, energies, and projectiles involved (Moore et al., 2007; Strazzulla et al., 2007; Loeffler et al., 2011). While hydrated sulfuric acid (H\({}_{2}\)SO\({}_{4}\)) is expected to be the dominant irradiation product on Europa, some of the possible intermediate products include sulfur dioxide (SO\({}_{2}\)), hydrogen sulfide (H\({}_{2}\)S), and various sulfate anions (SO\({}_{4}^{2-}\), SO\({}_{3}^{2-}\), HSO\({}_{4}^{-}\)) which can combine to form species such as sulfonic acid (SO\({}_{2}\)H\({}_{2}\)) or hydrogen disulfide (H\({}_{2}\)S\({}_{2}\)). Tribbett & Loeffler (2022) demonstrated that H\({}_{2}\)O + H\({}_{2}\)S + O\({}_{3}\) ice mixtures at Europa temperatures can undergo thermal oxidation reactions on laboratory timescales, which may affect the steady state composition and intermediary products of the radiolytic sulfur cycle. Calculations based on laboratory experiments of sulfur ion implantation in water ice have shown that radiolysis can produce the expected concentration of sulfuric acid hydrate on Europa during \(\sim 10^{4}\) years (Strazzulla, 2011). Therefore, if the feature arises from an intermediary of the sulfur cycle or a parallel irradiation process, it is somewhat surprising that we do not see any evidence for the 2.07 um absorption feature in Pwyll crater. However, Europa's UV to visible albedo ratio, which is anti-correlated with irradiation induced discoloration on the trailing hemisphere, also shows an enhancement at Pwyll suggesting that the discoloration timescale is longer than the age of Pwyll crater (Burnett and Hayne, 2021) and consistent with the observed absence of the 2.07 um absorption feature. If the 2.07 um feature does arise from an intermediary of the radiolytic sulfur cycle, the lack of a feature at Pwyll could suggest that Pwyll crater is much younger than expected, that it has not yet had enough time to build up a sufficient amount of sulfur for sulfur radiolysis to occur, or alternatively that the calculations of Strazzulla (2011) significantly underestimate the timescale over which the sulfur cycle should reach an equilibrium state. Future laboratory work may be crucial for understanding this discrepancy. Additionally, the 2.07 um absorption feature may also be explained by a parallel radiolytic cycle or some other unknown irradiation product. For example, the addition of CO\({}_{2}\) ice into various sulfur radiolysis studies produced additional products such as carbonyl sulfide (OCS), carbon disulfide (CS\({}_{2}\)), carbonic acid (H\({}_{2}\)CO\({}_{3}\)), and other carbon and sulfur bearing species (Garozzo et al., 2010; Mahjoub et al., 2017). While CO\({}_{2}\) has been detected on Europa (McCord et al., 1998; Hansen and McCord, 2008; Carlson et al., 2009), its spatial distribution across the trailing hemisphere is largely unconstrained. It is therefore uncertain whether an irradiation product of a combined carbon-sulfur radiolytic cycle is consistent with our observed spatial distribution, but nevertheless worth investigating. We completed an extensive literature search of these potential sulfur- and carbon- bearing intermediary species and other possible irradiation products but were unable to find a plausible spectral match for the 2.07 um absorption amongst existing data sets. We are therefore unable to identify the source of the 2.07 um absorption feature, highlighting the need for additional irradiation experiments at Europa-like temperatures, particularly those including spectra across the full near-infrared wavelength range, in order to better understand the sulfur radiolytic cycle and determine whether any generated products can explain the 2.07 um absorption feature. ## 5 Conclusions Using archived VLT/SINFONI H+K band spectra, we have shown that the presence and band area of the 2.07 um absorption feature on Europa's trailing hemisphere is not spatially associated with the large-scale geology, except for Pwyll crater and its ejecta blanket which lacks the absorption feature. There is, however, a spatial association between the 2.07 um absorption band and the trailing hemisphere sulfuric acid bullseye suggesting that the formation of the 2.07 um feature on Europa is independent of the endogenous salts thought to be present within the chaos terrains and is most likely an irradiation product of water ice and exogenic material. Thus, we find that neither epsomite nor a combination of magnesium chloride and magnesium perchlorate, as proposed by Brown and Hand (2013) and Ligier et al. (2016), respectively, are likely to explain the spatial distribution of the 2.07 um absorption feature and we consider an alternative hypothesis. We propose that the source of this feature may be an intermediary product of the radiolytic sulfur cycle, or something formed during the bombardment of water ice by electrons or the remaining (non-sulfur) ion species. Current laboratory data of relevant radiolytic species is fairly sparse in the 2.0 - 2.2 um range and we are unable to identify any plausible spectral matches. This highlights the need for more laboratory data at these wavelengths and at Europa-like conditions in order to identify radiolytic product(s) which may be responsible for the 2.07 um absorption feature and provide insights into the nature of the sulfur- and possibly carbon- radiolysis occurring on Europa. ## Acknowledgments This research has made use of the services of the ESO Science Archive Facility and is based on observations collected at the European Southern Observatory under ESO program 088.C-0833(A). M.R.D. would like to thank Dr. Erin Leonard for providing up-to-date shape files from the new United States Geologic Survey (USGS) global geologic map of Europa, which we used to define the various geologic units discussed in the text. S.K.T. is supported by the Heising-Simons Foundation through a 51 Pegasi b postdoctoral fellowship. VLT:Yepun (SINFONI) Astropy (Astropy Collaboration et al., 2013, 2018, 2022), Carotopy (Met Office, 2022), GeoPandas (Jordahl et al., 2022), SciPy (Virtanen et al., 2020), Shapely (Gillies et al., 2023), SpecUtils (Earl et al., 2022)
2306.03038
HeadSculpt: Crafting 3D Head Avatars with Text
Recently, text-guided 3D generative methods have made remarkable advancements in producing high-quality textures and geometry, capitalizing on the proliferation of large vision-language and image diffusion models. However, existing methods still struggle to create high-fidelity 3D head avatars in two aspects: (1) They rely mostly on a pre-trained text-to-image diffusion model whilst missing the necessary 3D awareness and head priors. This makes them prone to inconsistency and geometric distortions in the generated avatars. (2) They fall short in fine-grained editing. This is primarily due to the inherited limitations from the pre-trained 2D image diffusion models, which become more pronounced when it comes to 3D head avatars. In this work, we address these challenges by introducing a versatile coarse-to-fine pipeline dubbed HeadSculpt for crafting (i.e., generating and editing) 3D head avatars from textual prompts. Specifically, we first equip the diffusion model with 3D awareness by leveraging landmark-based control and a learned textual embedding representing the back view appearance of heads, enabling 3D-consistent head avatar generations. We further propose a novel identity-aware editing score distillation strategy to optimize a textured mesh with a high-resolution differentiable rendering technique. This enables identity preservation while following the editing instruction. We showcase HeadSculpt's superior fidelity and editing capabilities through comprehensive experiments and comparisons with existing methods.
Xiao Han, Yukang Cao, Kai Han, Xiatian Zhu, Jiankang Deng, Yi-Zhe Song, Tao Xiang, Kwan-Yee K. Wong
2023-06-05T16:53:58Z
http://arxiv.org/abs/2306.03038v2
# HeadSculpt: Crafting 3D Head Avatars with Text ###### Abstract Recently, text-guided 3D generative methods have made remarkable advancements in producing high-quality textures and geometry, capitalizing on the proliferation of large vision-language and image diffusion models. However, existing methods still struggle to create high-fidelity 3D head avatars in two aspects: (1) They rely mostly on a pre-trained text-to-image diffusion model whilst missing the necessary 3D awareness and head priors. This makes them prone to inconsistency and geometric distortions in the generated avatars. (2) They fall short in fine-grained editing. This is primarily due to the inherited limitations from the pre-trained 2D image diffusion models, which become more pronounced when it comes to 3D head avatars. In this work, we address these challenges by introducing a versatile coarse-to-fine pipeline dubbed **HeadSculpt** for crafting (_i.e._, generating and editing) 3D head avatars from textual prompts. Specifically, we first equip the diffusion model with 3D awareness by leveraging landmark-based control and a learned textual embedding representing the back view appearance of heads, enabling 3D-consistent head avatar generations. We further propose a novel identity-aware editing score distillation strategy to optimize a textured mesh with a high-resolution differentiable rendering technique. This enables identity preservation while following the editing instruction. We showcase HeadSculpt's superior fidelity and editing capabilities through comprehensive experiments and comparisons with existing methods. \({}^{\ddagger}\) ## 1 Introduction Modeling 3D head avatars underpins a wide range of emerging applications (_e.g._, digital telepresence, game character creation, and AR/VR). Historically, the creation of intricate and detailed 3D head avatars demanded considerable time and expertise in art and engineering. With the advent of deep learning, existing works [87; 28; 33; 72; 8; 38; 15] have shown promising results on the reconstruction of 3D human heads from monocular images or videos. However, these methods remain restricted to head appearance contained in their training data which is often limited in size, resulting in the inability to generalize to new appearance beyond the training data. This constraint calls for the need of more flexible and generalizable methods for 3D head modeling. Recently, vision-language models (_e.g._, CLIP [55]) and diffusion models (_e.g._, Stable Diffusion [69; 61; 59]) have attracted increasing interest. These progresses have led to the emergence of text-to-3D generative models [34; 62; 44; 27] which create 3D content in a self-supervised manner. Notably, DreamFusion [54] introduces a score distillation sampling (SDS) strategy that leverages a pre-trained image diffusion model to compute the noise-level loss from the textual description, unlocking the potential to optimize differentiable 3D scenes (_e.g._, neural radiance field [45], tetrahedral mesh [66], texture [58; 9], or point cloud [50]) with 2D diffusion prior only. Subsequent research efforts [43; 6; 65, 79, 42, 75, 40, 56, 76] improve and extend DreamFusion from various perspectives (_e.g._, higher resolution [39] and better geometry [10]). Considering the flexibility and versatility of natural languages, one might think that these SDS-based text-to-3D generative methods would be sufficient for generating diverse 3D avatars. However, it is noted that existing methods have two major drawbacks (see Fig. 6): _(1) Inconsistency and geometric distortions_: The 2D diffusion models used in these methods lack 3D awareness particularly regarding camera pose; without any remedy, existing text-to-3D methods inherited this limitation, leading to the multi-face _"Janus"_ problem in the generated head avatars. _(2) Fine-grained editing limitations_: Although previous methods propose to edit 3D models by naively fine-tuning trained models with modified prompts [54, 39], we find that this approach is prone to biased outcomes, such as identity loss or inadequate editing. This problem arises from two causes: (a) inherent bias in prompt-based editing in image diffusion models, and (b) challenges with inconsistent gradient back-propagation at separate iterations when using SDS calculated from a vanilla image diffusion model. In this paper, we introduce a new head-avatar-focused text-to-3D method, dubbed **HeadSculpt**, that supports high-fidelity generation and fine-grained editing. Our method comprises two novel Figure 1: **Examples of generation and editing results obtained using the proposed HeadSculpt. It enables the creation and fine-grained editing of high-quality head avatars, featuring intricate geometry and texture, for any type of head avatar using simple descriptions or instructions. Symbols indicate the following prompt prefixes: * “a head of [text]” and † “a DSLR portrait of [text]”. The captions in gray are the prompt suffixes while the blue ones are the editing instructions.** components: _(1) Prior-driven score distillation:_ We first arm the pre-trained image diffusion model with 3D awareness by integrating a landmark-based ControlNet [84]. Specifically, we adopt the parametric 3D head model, FLAME [38], as a prior to obtain a 2D landmark map [41; 31], which serves as an additional condition for the diffusion model, ensuring the consistency of generated head avatars across different views. Further, to remedy the front-view bias in the pre-trained diffusion model, we utilize an improved view-dependent prompt through textual inversion [17], by learning a specialized <back-view> token to emphasize back views of heads and capture their unique visual details. _(2) Identity-aware editing score distillation (IESD):_ To address the challenges of fine-grained editing for head avatars, we introduce a novel method called IESD. It blends two scores, one for editing and the other for identity preservation, both predicted by a ControlNet-based implementation of InstructPix2Pix [5]. This approach maintains a controlled editing direction that respects both the original identity and the editing instructions. To further improve the fidelity of our method, we integrate these two novel components into a coarse-to-fine pipeline [39], utilizing NeRF [48] as the low-resolution coarse model and DMTet[66] as the high-resolution fine model. As demonstrated in Fig. 1, our method can generate high-fidelity human-like and non-human-like head avatars while enabling fine-grained editing, including local changes, shape/texture modifications, and style transfers. ## 2 Related work **Text-to-2D generation.** In recent years, groundbreaking vision-language technologies such as CLIP [55] and diffusion models [25; 13; 59; 68] have led to significant advancements in text-to-2D content generation [61; 57; 1; 69; 70]. Trained on extensive 2D multimodal datasets [63; 64], they are empowered with the capability to _"dream"_ from the prompt. Follow-up works endeavor to efficiently control the generated results [84; 85; 47], extend the diffusion model to video sequence [67; 3], accomplish image or video editing [23; 32; 81; 5; 77; 14; 22], enhance the performance for personalized subjects [60; 17], etc. Although significant progress has been made in generating 2D content from text, carefully crafting the prompt is crucial, and obtaining the desired outcome often requires multiple attempts. The inherent randomness remains a challenge, especially for editing tasks. **Text-to-3D generation.** Advancements in text-to-2D generation have paved the way for text-to-3D techniques. Early efforts [82; 27; 44; 62; 34; 29; 11] propose to optimize the 3D neural radiance field (NeRF) or vertex-based meshes by employing the CLIP language model. However, these models encounter difficulties in generating expressive 3D content, primarily because of the limitations of CLIP in comprehending natural language. Fortunately, the development of image diffusion models [69; 1] has led to the emergence of DreamFusion [54]. It proposes Score Distillation Sampling (SDS) based on a pre-trained 2D diffusion prior [61], showcasing promising generation results. Subsequent works [37] have endeavored to improve DreamFusion from various aspects: Magic3D [39] proposes a coarse-to-fine pipeline for high-resolution generations; Latent-NeRF [43] includes shape guidance for more robust generation on the latent space [59]; DreamAvatar [6] leverages SMPL [4] to generate 3D human full-body avatars under controllable shapes and poses; Guide3D [7] explores the usage of multi-view generated images to create 3D human avatars; Fantasia3D [10] disentangles the geometry and texture training with DMTet[66] and PBR texture [49] as their 3D representation; 3DFuse [65] integrates depth control and semantic code sampling to stabilize the generation process. Despite notable progress, current text-to-3D generative models still face challenges in producing view-consistent 3D content, especially for intricate head avatars. This is primarily due to the absence of 3D awareness in text-to-2D diffusion models. Additionally, to the best of our knowledge, there is currently no approach that specifically focuses on editing the generated 3D content, especially addressing the intricate fine-grained editing needs of head avatars. **3D head modeling and creation.** Statistical mesh-based models, such as FLAME [38; 15], enable the reconstruction of 3D head models from images. However, they struggle to capture fine details like hair and wrinkles. To overcome this issue, recent approaches [8; 71; 72; 51] employ Generative Adversarial Networks (GANs) [46; 20; 30] to train 3D-aware networks on 2D head datasets and produce 3D-consistent images through latent code manipulation. Furthermore, neural implicit methods [87; 16; 28; 88] introduce implicit and subject-oriented head models based on neural rendering fields [45; 48; 2]. Recently, text-to-3D generative methods have gained traction, generating high-quality 3D head avatars from natural language using vision-language models [55; 69]. Typically, T2P [85] predicts bone-driven parameters of head avatars via a game engine under the CLIP guidance [55]. Rodin [80] proposes a roll-out diffusion network to perform 3D-aware diffusion. DreamFace [83] employs a selection strategy in the CLIP embedding space to generate coarse geometry and uses SDS [54] to optimize UV-texture. Despite producing promising results, all these methods require a large amount of data for supervised training and struggle to generalize well to non-human-like avatars. In contrast, our approach relies solely on pre-trained text-to-2D models, generalizes well to out-of-domain avatars, and is capable of performing fine-grained editing tasks. ## 3 Methodology HeadSculpt is a 3D-aware text-to-3D approach that utilizes a pre-trained text-to-2D Stable Diffusion model [69; 59] to generate high-resolution head avatars and perform fine-grained editing tasks. As illustrated in Fig. 2, the generation pipeline has two stages: coarse generation via the neural radiance field (NeRF) [48] and refinement/editing using tetrahedron mesh (DMTet) [66]. Next, we will first introduce the preliminaries that form the basis of our method in Sec. 3.1. We will then discuss the key components of our approach in Sec. 3.2 and Sec, 3.3, including (1) the prior-driven score distillation process via landmark-based ControlNet [84] and textual inversion [17], and (2) identity-aware editing score distillation accomplished in the fine stage using the ControlNet-based InstructPix2Pix [5]. ### Preliminaries **Score distillation sampling.** Recently, DreamFusion [54] proposed score distillation sampling (SDS) to self-optimize a text-consistent neural radiance field (NeRF) based a the pre-trained text-to-2D diffusion model [61]. Due to the unavailability of the Imagen model [61] used by DreamFusion, we employ the latent diffusion model in [59] instead. Specifically, given a latent feature \(z\) encoded from an image \(x\), SDS introduces random noise \(\epsilon\) to \(z\) to create a noisy latent variable \(z_{t}\) and then uses a pre-trained denoising function \(\epsilon_{\phi}\left(z_{t};y,t\right)\) to predict the added noise. The SDS loss is defined as the difference between predicted and added noise and its gradient is given by \[\nabla_{\theta}\mathcal{L}_{\mathrm{SDS}}(\phi,g(\theta))=\mathbb{E}_{t, \epsilon\sim\mathcal{N}(0,1)}\left[w(t)\left(\epsilon_{\phi}\left(z_{t};y,t \right)-\epsilon\right)\frac{\partial z}{\partial x}\frac{\partial x}{ \partial\theta}\right], \tag{1}\] where \(y\) is the text embedding, \(w(t)\) weights the loss from noise level \(t\). With the expressive text-to-2D diffusion model and self-supervised SDS loss, we can back-propagate the gradients to optimize an implicit 3D scene \(g(\theta)\), eliminating the need for an expensive 3D dataset. **3D scene optimization.** HeadSculpt explores the potential of two different 3D differentiable representations as the optimization basis for crafting 3D head avatars. Specifically, we employ NeRF [48] in the coarse stage due to its greater flexibility in geometry deformation, while utilizing DMTet[66] in the fine stage for efficient high-resolution optimization. Figure 2: **Overall architecture of HeadSculpt. We craft high-resolution 3D head avatars in a coarse-to-fine manner. (a) We optimize neural field representations for the coarse model. (b) We refine or edit the model using the extracted 3D mesh and apply identity-aware editing score distillation if editing is the target. (c) The core of our pipeline is the prior-driven score distillation, which incorporates landmark control, enhanced view-dependent prompts, and an InstructPix2Pix branch.** **(1) _3D prior-based NeRF._ DreamAvatar [6] recently proposed a density-residual setup to enhance the robustness and controllability of the generated 3D NeRF. Given a point \(\mathbf{x}\) inside the 3D volume, we can derive its density and color value based on a prior-based density field \(\bar{\sigma}\): \[F(\mathbf{x},\bar{\sigma})=F_{\theta}(\gamma(\mathbf{x}))+(\bar{\sigma}( \mathbf{x}),\mathbf{0})\mapsto(\sigma,\mathbf{c}), \tag{2}\] where \(\gamma(\cdot)\) denotes a hash-grid frequency encoder [48], and \(\sigma\) and \(\mathbf{c}\) are the density and RGB color respectively. We can derive \(\bar{\sigma}\) from the signed distance \(d(\mathbf{x})\) of a given 3D shape prior (_e.g._, a canonical FLAME model [38] by default in our implementation): \[\bar{\sigma}(\mathbf{x})=\max\left(0,\mathrm{softplus}^{-1}(\tau(\mathbf{x})) \right),\,\tau(\mathbf{x})=\frac{1}{a}\,\mathrm{sigmoid}(-d(\mathbf{x})/a), \text{where }a=0.005. \tag{3}\] To obtain a 2D RGB image from the implicit volume defined above, we employ a volume rendering technique that involves casting a ray \(\mathbf{r}\) from the 2D pixel location into the 3D scene, sampling points \(\boldsymbol{\mu}_{i}\) along the ray, and calculating their density and color value using \(F\) in Eq. (2): \[C(\mathbf{r})=\sum_{i}W_{i}\mathbf{c}_{i},\quad W_{i}=\alpha_{i}\prod_{j<i}(1- \alpha_{j}),\quad\alpha_{i}=1-e^{(-\sigma_{i}||\boldsymbol{\mu}_{i}- \boldsymbol{\mu}_{i+1}||)}. \tag{4}\] **(2) DMTet.** It discretizes a deformable tetrahedral grid (\(V_{T},T\)), where \(V_{T}\) denotes the vertices within grid \(T\)[19, 66], to model the 3D space. Every vertex \(\mathbf{v}_{i}\in V_{T}\subset\mathbb{R}^{3}\) possesses a signed distance value \(s_{i}\in\mathbb{R}\), along with a position offset \(\Delta\mathbf{v}_{i}\in\mathbb{R}^{3}\) of the vertex relative to its initial canonical coordinates. Subsequently, the underlying mesh can be extracted based on \(s_{i}\) with the differentiable marching tetrahedra algorithm. In addition to the geometry, we adopt the Magic3D approach [39] to construct a neural color field. This involves re-utilizing the MLP trained in the coarse NeRF stage to predict the RGB color value for each 3D point. During optimization, we render this textured surface mesh into high-resolution images using a differentiable rasterizer [36, 49]. ### 3D-Prior-driven score distillation Existing text-to-3D methods with SDS [54] assume that maximizing the likelihood of images rendered from various viewpoints of a scene model \(g(\cdot)\) is equivalent to maximizing the overall likelihood of \(g(\cdot)\). This assumption can result in inconsistencies and geometric distortions [54, 65]. A notable issue is the "_Janus problem_" characterized by multiple faces on a single object (see Fig. 6). There are two possible causes: (1) the randomness of the diffusion model which can cause inconsistencies among different views, and (2) the lack of 3D awareness in controlling the generation process, causing the model to struggle in determining the front view, back view, etc. To address these issues in generating head avatars, we integrate 3D head priors into the diffusion model. **Landmark-based ControlNet.** In Section 3.1, we explain our adoption of FLAME [38] as the density guidance for our NeRF. Nevertheless, this guidance by itself is insufficient to have a direct impact on the SDS loss. What is missing is a link between the NeRF and the diffusion model, incorporating the same head priors. Such a link is key to improving the view consistency of the generated head avatars. To achieve this objective, as illustrated in Fig. 2, we propose the incorporation of 2D landmark maps as an additional condition for the diffusion model using ControlNet [84]. Specifically, we employ a ControlNet \(\mathcal{C}\) trained on a large-scale 2D face dataset [86, 12], using facial landmarks rendered from MediaPipe [41, 31] as ground-truth data. When given a randomly sampled camera pose \(\pi\), we first project the vertices of the FLAME model onto the image. Following that, we select and render some of these vertices into a landmark map \(\mathcal{P}_{\pi}\) based on some predefined vertex indexes. The landmark map will be fed into ControlNet and its output features are added to the intermediate features within the diffusion U-Net. The gradient of our SDS loss can be re-written as \[\nabla_{\theta}\mathcal{L}_{\mathrm{SDS}}(\phi,g(\theta))=\mathbb{E}_{t,e \sim\mathcal{N}(0,1),\pi}\left[w(t)\left(\epsilon_{\phi}\left(z_{t};y,t, \mathcal{C}(\mathcal{P}_{\pi})\right)-\epsilon\right)\frac{\partial z}{ \partial x}\frac{\partial x}{\partial\theta}\right]. \tag{5}\] **Enhanced view-dependent prompt via textual inversion.** Although the landmark-based ControlNet can inject 3D awareness into the pre-trained diffusion models, it struggles to maintain back view head consistency. This is expected as the 2D image dataset used for training mostly contains only front or side face views. Consequently, when applied directly to back views, the model introduces ambiguity as front and back 3D landmark views can appear similar, as shown in Fig. 8. To address this issue, we propose a simple yet effective method. Our method is inspired by previous works [54, 65, 39] which found it beneficial to append view-dependent text (_e.g._, "front view", "side view" or "back view") to the provided input text based on the azimuth angle of the randomly sampled camera. We extend this idea by learning a special token <back-view> to replace the plain text "back view" in order to emphasize the rear appearance of heads. This is based on the assumption that a pre-trained Stable Diffusion does has the ability to "imagine" the back view of a head - it has seen some during training. The main problem is that a generic text embedding of "back view" is inadequate in telling the model what appearance it entails. A better embedding for "back view" is thus required. To this end, we first randomly download 34 images of the back view of human heads, without revealing any personal identities, to construct a tiny dataset \(\mathcal{D}\), and then we optimize the special token \(v\) (_i.e._, <back-view>) to better fit the collected images, similar to the textual inversion [17]: \[v_{*}=\operatorname*{arg\,min}_{v}\mathbb{E}_{t,\epsilon\sim\mathcal{N}(0,1), z\sim\mathcal{D}}\left[\left\lVert\epsilon-\epsilon_{\phi}\left(z_{t};v,t \right)\right\rVert_{2}^{2}\right], \tag{6}\] which is achieved by employing the same training scheme as the original diffusion model, while keeping \(\epsilon_{\phi}\) fixed. This constitutes a reconstruction task, which we anticipate will encourage the learned embedding to capture the fine visual details of the back views of human heads. Notably, as we do not update the weights of \(\epsilon_{\phi}\), it stays compatible with the landmark-based ControlNet. ### Identity-aware editing score distillation After generating avatars, editing them to fulfill particular requirements poses an additional challenge. Previous works [54; 39] have shown promising editing results by fine-tuning a trained scene model with a new target prompt. However, when applied to head avatars, these methods often suffer from identity loss or inadequate appearance modifications (see Fig. 10). This problem stems from the inherent constraint of the SDS loss, where the 3D models often sacrifice prominent features to preserve view consistency. Substituting Stable Diffusion with InstructPix2Pix [5; 21] might seem like a simple solution, but it also faces difficulties in maintaining facial identity during editing based only on instructions, as it lacks a well-defined anchor point. To this end, we propose identity-aware editing score distillation (IESD) to regulate the editing direction by blending two predicted scores, _i.e._, one for editing instruction and another for the original description. Rather than using the original InstructPix2Pix [5], we employ a ControlNet-based InstructPix2Pix \(\mathcal{I}\)[84] trained on the same dataset, ensuring compatibility with our landmark-based ControlNet \(\mathcal{C}\) and the learned <back-view> token. Formally, given an initial textual prompt \(y\) describing the avatar to be edited and an editing instruction \(\hat{y}\), we first input them separately into the same diffusion model equipped with two ControlNets, \(\mathcal{I}\) and \(\mathcal{C}\). This allows us to obtain two predicted noises, which are then combined using a predefined hyper-parameter \(\omega_{e}\) like classifier-free diffusion guidance (CFG) [26]: \[\begin{split}\nabla_{\theta}\mathcal{L}_{\mathrm{IESD}}(\phi,g( \theta))&=\mathbb{E}_{t,\epsilon\sim\mathcal{N}(0,1),\pi}\left[w( t)\left(\hat{\epsilon}_{\phi}\left(z_{t};y,\hat{y},t,\mathcal{C}(\mathcal{P}_{\pi}), \mathcal{I}(\mathcal{M}_{\pi})\right)-\epsilon\right)\frac{\partial z}{ \partial x}\frac{\partial x}{\partial\theta}\right],\\ &\omega_{e}\epsilon_{\phi}\left(z_{t};\hat{y},t,\mathcal{C}( \mathcal{P}_{\pi}),\mathcal{I}(\mathcal{M}_{\pi})\right)+\left(1-\omega_{e} \right)\epsilon_{\phi}\left(z_{t};y,t,\mathcal{C}(\mathcal{P}_{\pi}), \mathcal{I}(\mathcal{M}_{\pi})\right)\end{split} \tag{7}\] where \(\mathcal{P}_{\pi}\) and \(\mathcal{M}_{\pi}\) represent the 2D landmark maps and the reference images rendered in the coarse stage, both being obtained under the sampled camera pose \(\pi\). The parameter \(\omega_{e}\) governs a trade-off between the original appearance and the desired editing, which defaults to \(0.6\) in our experiments. ## 4 Experiments We will now assess the efficacy of our HeadSculpt across different scenarios, while also conducting a comparative analysis against state-of-the-art text-to-3D generation pipelines. **Implementation details.** HeadSculpt builds upon Stable-DreamFusion [73] and Huggingface Diffusers [78; 53]. We utilize version 1.5 of Stable Diffusion [69] and version 1.1 of ControlNet [84; 12] in our implementation. In the coarse stage, we optimize our 3D model at \(64\times 64\) grid resolution, while using \(512\times 512\) grid resolution for the fine stage (refinement or editing). Typically, each text prompt requires approximately \(7,000\) iterations for the coarse stage and \(5,000\) iterations for the fine stage. It takes around 1 hour for each stage on a single Tesla V100 GPU with a default batch size of 4. We use Adam [35] optimizer with a fixed learning rate of \(0.001\). Additional implementation details can be found in the supplementary material. **Baseline methods for generation evaluation.** We compare the generation results with five baselines: DreamFusion [73], Latent-NeRF [43], 3DFuse [65] (improved version of SJC [79]), Fantasia3D [10], and DreamFace [83]. We do not directly compare with DreamAvatar [6] as it involves deformation fields for full-body-related tasks. **Baseline methods for editing evaluation.** We assess IESD's efficacy for fine-grained 3D head avatar editing by comparing it with various alternatives since no dedicated method exists for this: **(B1)** One-step optimization on the coarse stage without initialization; **(B2)** Initialized from the coarse stage, followed by optimization of another coarse stage with an altered description; **(B3)** Initialized from the coarse stage, followed by optimization of a new fine stage with an altered description; **(B4)** Initialized from the coarse stage, followed by optimization of a new fine stage with an instruction based on the vanilla InstructPix2Pix [5]; **(B5)** Ours without edit scale (_i.e._, \(\omega_{e}=1\)). Notably, B2 represents the editing method proposed in DreamFusion [54], while B3 has a similar performance as Magic3D [39], which employs a three-stage editing process (_i.e._, Coarse + Coarse + Fine). ### Qualitative evaluations **Head avatar generation with various prompts.** In Fig. 1, we show a diverse array of 3D head avatars generated by our HeadSculpt, consistently demonstrating high-quality geometry and texture across various viewpoints. Our method's versatility is emphasized by its ability to create an assortment of avatars, including humans (both celebrities and ordinary individuals) as well as non-human characters like superheroes, comic/game characters, paintings, and more. **Head avatar generation with different shapes.** HeadSculpt leverages shape-guided NeRF initialization and landmark-guided diffusion priors. This allows controlling geometry by varying the FLAME shape used for initialization. To demonstrate adjustability, Fig. 3 presents examples generated from diverse FLAME shapes. The results fit closely to the shape guidance, highlighting HeadSculpt's capacity for geometric variation when provided different initial shapes. **Head avatar editing with various instructions.** As illustrated in Fig. 1 and Fig. 4, HeadSculpt's adaptability is also showcased through its ability to perform fine-grained editing, such as local changes (_e.g._, adding accessories, changing hairstyles, or altering expressions), shape and texture modifications, and style transfers. **Head avatar editing with different edit scales.** In Fig. 5, we demonstrate the effectiveness of IESD with different \(\omega_{e}\) values, highlighting its ability to control editing influence on the reference identity. Figure 4: **More specific editing results.**\(\ddagger\) Instruction prefix: _make his expression as_ [text]. Figure 3: **Generation results with various shapes.** The first row shows three randomly sampled FLAME models, while the second row presents our generated results (incl. normals) using these FLAME models as initialization. All results are under the same text prompt. **Comparison with existing methods on generation results.** We provide qualitative comparisons with existing methods in Fig. 6. We employ the same FLAME model for Latent-NeRF [43] to compute their sketch-guided loss and for Fantasia3D [74] as the initial geometry. The following observations can be made: **(1)** All baselines tend to be more unstable during training than ours, often resulting in diverged training processes; **(2)** Latent-NeRF occasionally produces plausible results due to its use of the shape prior, but its textures are inferior to ours since optimization occurs solely in the latent space; **(3)** Despite 3DFuse's depth control to mitigate the Janus problem, it still struggles to generate 3D consistent head avatars; **(4)** While Fantasia3D can generate a mesh-based 3D avatar, its geometry is heavily distorted, as its disentangled geometry optimization might be insufficient for highly detailed head avatars; **(5)** Although DreamFace generates realistic human face textures, it falls short in generating (i) complete heads, (ii) intricate geometry, (iii) non-human-like appearance, and (iv) composite accessories. In comparison, our method consistently yields superior results in both geometry and texture with much better consistency for the given prompt. More comparisons can be found in the supplementary material. Figure 5: **Impact of the edit scale \(\omega_{e}\) in IESD.** It balances the preservation of the initial appearance and the extent of the desired editing, making the editing process more controllable and flexible. Figure 6: **Comparison with existing text-to-3D methods.** Unlike other methods that struggle or fail to generate reasonable results, our approach consistently achieves high-quality geometry and texture, yielding superior results. *Non-official implementation. \(\dagger\) Generated from the online website demo. ### Quantitative evaluations **User studies.** We conducted user studies comparing with four baselines [73; 74; 65; 43]. 42 volunteers ranked them from 1 (worst) to 5 (best) individually based on three dimensions: **(1)** consistency with the text, **(2)** texture quality, and **(3)** geometry quality. The results, shown in Fig. 7, indicate that our method achieved the highest rank in all three aspects by large margins. **CLIP-based metrics.** Following DreamFusion [54], **(1)** We calculate the CLIP R-Precision (CLIP-R) [52] and CLIP-Score (CLIP-S) [24] metrics, which evaluate the correlation between the generated images and the input texts, for all methods using 30 text prompts. As indicated in Tab. 1, our approach significantly outperforms the competing methods according to both metrics. This outcome provides additional evidence for the subjective superiority observed in the user studies and qualitative results. **(2)** We employ the CLIP Directional Similarity (CLIP-DS) [5; 18], to evaluate the editing performance. This metric measures the alignment between changes in text captions and corresponding image modifications. Specifically, we encode a pair of images (the original and edited 3D models rendered from a specific viewpoint) along with a pair of text prompts describing the original and edited subjects, _e.g._, "a DSLR portrait of Saul Goodman" and "a DSLR portrait of Saul Goodman dressed like a clowm". We compare our approach against B3, B4, and B5 by evaluating 10 edited examples. The results, presented in Tab. 1, highlight the superiority of our editing framework according to this metric, indicating improved editing fidelity and identity preservation compared to alternatives. ### Further analysis **Effectiveness of prior-driven score distillation.** In Fig. 8, we conduct ablation studies to examine the impact of the proposed landmark control and textual inversion priors in our method. We demonstrate this on the coarse stage because the refinement and editing results heavily depend on this stage. The findings show that landmark control is essential for generating spatially consistent head avatars. Without it, the optimized 3D avatar faces challenges in maintaining consistent facial views, particularly for non-human-like characters. Moreover, textual inversion is shown to be another vital component in mitigating the Janus problem, specifically for the back view, as landmarks cannot exert control on the rear view. Overall, the combination of both components enables HeadSculpt to produce view-consistent avatars with high-quality geometry. \begin{table} \begin{tabular}{c|c|c c|c|c} \hline **Generation** & \begin{tabular}{c} **Dream** \\ **Paison** \\ \end{tabular} & \begin{tabular}{c} **Latent-** \\ **NeRF** \\ \end{tabular} & **30Pure** & **Fantalsi3D** & **Ours** \\ \hline **CLIP-R**[52] & 95.83 & 87.50 & 70.83 & 62.50 & **100.00** \\ **CLIP-S**[24] & 26.06 & 26.30 & 23.41 & 23.26 & **29.52** \\ \hline **Editing** & **B3** & **B4** & **B5** & **Ours** \\ \hline **CLIP-DS**[18] & 16.62 & 8.76 & 14.03 & **16.84** \\ \hline \end{tabular} \end{table} Table 1: **Objective evaluation with CLIP-based metrics.** All numbers are calculated with CLIP-L/14. Figure 8: **Analysis of prior-driven score distillation. Figure 9: **Failure cases.** **Effectiveness of IESD.** In Fig. 10, we present two common biased editing scenarios produced by the baseline methods: insufficient editing and loss of identity. With Stable Diffusion, specific terms like "Saul Goodman" and "skull" exert a more substantial influence on the text embeddings compared to other terms, such as "older" and "Vincent van Gogh". B1, B2, and B3, all based on vanilla Stable Diffusion, inherit such bias in their generated 3D avatars. Although B4 does not show such bias, it faces two other issues: **(1)** the Janus problem reemerges due to incompatibility between vanilla InstructPix2Pix and the proposed prior-driven score distillation; **(2)** it struggles to maintain facial identity during editing based solely on instructions, lacking a well-defined anchor point. In contrast, B5 employs ControlNet-based InstructPix2Pix [84] with the proposed prior score distillation, resulting in more view-consistent editing. Additionally, our IESD further uses the proposed edit scale to merge two predicted scores, leading to better identity preservation and more effective editing. This approach allows our method to overcome the limitations faced by the alternative solutions, producing high-quality 3D avatars with improved fine-grained editing results. **Limitations and failure cases.** While setting a new state-of-the-art, we acknowledge HeadSculpt has limitations, as the failure cases in Fig. 9 demonstrate: **(1)** non-deformable results hinder further extensions and applications in audio or video-driven problems; **(2)** generated textures are highly saturated and less realistic, especially for characters with highly detailed appearances (_e.g._, Freddy Krueger); **(3)** some inherited biases from Stable Diffusion [69] still remain, such as inaccurate and stereotypical appearances of Asian characters (_e.g._, Sun Wukong and Japanese Geisha); and **(4)** limitations inherited from InstructPix2Pix [5], such as the inability to perform large spatial manipulations (_e.g._, remove his nose). ## 5 Conclusions We have introduced HeadSculpt, a novel pipeline for generating high-resolution 3D human avatars and performing identity-aware editing tasks through text. We proposed to utilize a prior-driven score distillation that combines a landmark-based ControlNet and view-dependent textual inversion to address the Janus problem. We also introduced identity-aware editing score distillation that preserves both the original identity information and the editing instruction. Extensive evaluations demonstrated that our HeadSculpt produces high-fidelity results under various scenarios, outperforming state-of-the-art methods significantly. **Societal impact.** The advancements in geometry and texture generation for human head avatars could be deployed in many AR/VR use cases but also raises concerns about their potential malicious use. We encourage responsible research and application, fostering open and transparent practices. **Acknowledgment.** This work is partially supported by Hong Kong Research Grant Council - Early Career Scheme (Grant No. 27208022) and HKU Seed Fund for Basic Research. We also thank the anonymous reviewers for their constructive suggestions. Figure 10: **Analysis of identity-aware editing score distillation.** ## Appendix A Implementation details ### Details about 3D scene models In the coarse stage, we make use of the grid frequency encoder \(\gamma(\cdot)\) from the publicly available Stable DreamFusion [73]. This encoder maps the input \(\mathbf{x}\in\mathbb{R}^{3}\) to a higher-frequency dimension, yielding \(\gamma(\mathbf{x})\in\mathbb{R}^{32}\). The MLP within our NeRF model consists of three layers with dimensions [32; 64; 64; 3+1+3]. Here, the output channels '3', '1', and '3' represent the predicted normals, density value, and RGB colors, respectively. In the fine stage, we directly optimize the signed distance value \(s_{i}\in\mathbb{R}\), along with a position offset \(\Delta\mathbf{v}_{i}\in\mathbb{R}^{3}\) for each vertex \(\mathbf{v}_{i}\). We found that fitting \(s_{i}\) and \(\mathbf{v}_{i}\) into MLP, as done by Fantasia3D [74], often leads to diverged training. To ensure easy reproducibility, we have included all the hyperparameters used in our experiments in Tab 2. The other hyper-parameters are set to be the default of Stable-DreamFusion [73]. ### Details about textual inversion In the main paper, we discussed the collection of a tiny dataset consisting of 34 images depicting the back view of heads. This dataset was used to train a special token, <back-view>, to address the ambiguity associated with the back view of landmarks. The images in the dataset were selected to encompass a diverse range of gender, color, age, and other characteristics. A few samples from the dataset are shown in Fig. 11. While our simple selection strategy has proven effective in our specific case, we believe that a more refined collection process could further enhance the controllability of the learned <back-view> token. We use the default training recipe provided by HuggingFace Diffusers 2, which took us 1 hour on a single Tesla V100 GPU. \begin{table} \begin{tabular}{l|l c} \hline \hline \multirow{3}{*}{**Camera setting**} & \(\theta\) range & (20, 110) \\ & Radius range & (1.0, 1.5) \\ & FoV range & (30, 50) \\ \hline \multirow{3}{*}{**Render setting**} & Resolution for coarse & (64, 64) \\ & Resolution for fine & (512, 512) \\ & Max num steps sampled per ray & 1024 \\ & Iter interval to update extra status & 16 \\ \hline \multirow{3}{*}{**Diffusion setting**} & Guidance scale & 100 \\ & \(t\) range & (0.02, 0.98) \\ & \(\omega(t)\) & \(\sqrt{\alpha_{t}}(1-\alpha_{t})\) \\ \hline \multirow{6}{*}{**Training setting**} & \#Iterations for coarse & \(70k\) \\ & \#Iterations for fine & \(50k\) \\ & Batch size & 4 \\ & LR of grid encoder & 1e-3 \\ & LR of NeRF MLP & 1e-3 \\ & LR of \(s_{i}\) and \(\Delta\mathbf{v}_{i}\) in DMTET & 1e-2 \\ & LR scheduler & constant \\ & Warmup iterations & \(20k\) \\ & Optimizer & Adam (0.9, 0.99) \\ & Weight decay & 0 \\ & Precision & fp16 \\ \hline \multirow{2}{*}{**Hardware**} & GPU & 1 \(\times\) Tesla V100 (32GB) \\ & Training duration & 1h (coarse) + 1h (fine) \\ \hline \hline \end{tabular} \end{table} Table 2: **Hyper-parameters of HeadSculpt.** ## Appendix B Further analysis ### Effectiveness of textual inversion on 2D generation To show the effectiveness of the learned <back-view> token, we conduct an analysis of its control capabilities in the context of 2D generation results. Specifically, we compare two generation results using Stable Diffusion [69], with both experiments sharing the same random seed. One experiment has the plain text prompt appended with the plain phrase "back view," while the other experiment utilizes the learned special token <back-view> in the prompt. We present a selection of randomly generated results in Fig. 12. The observations indicate that the <back-view> token effectively influences the pose of the generated heads towards the back, resulting in a distinct appearance. Remarkably, the <back-view> token demonstrates a notable generalization ability, as evidenced by the Batman case, despite not having been trained specifically on back views of Batman in the textual inversion process. ### Inherent bias in 2D diffusion models In our main paper, we discussed the motivation behind our proposed identity-aware editing score distillation (IESD), which can be attributed to two key factors. Firstly, the limitations of prompt-based editing [54, 39] are due to the inherent bias present in Stable Diffusion (SD). Secondly, while InstructPix2Pix (IP2P) [5] offers a solution by employing instruction-based editing to mitigate bias, it often results in identity loss. To further illustrate this phenomenon, we showcase the biased 2D outputs of SD and ControlNet-based IP2P in Fig. 13. Modified descriptions and instructions are utilized in these respective methods to facilitate the editing process and achieve the desired results. The results provide clear evidence of the following: (1) SD generates biased outcomes, with a tendency to underweight the "older" aspect and overweight the "skull" aspect in the modified description; (2) IP2P demonstrates the ability to edit the image successfully, but it faces challenges in preserving the identity of the avatar. The aforementioned inherent biases are amplified in the domain of 3D generation (refer to Fig. 10 in the main paper) due to the optimization process guided by SDS loss, which tends to prioritize view consistency at the expense of sacrificing prominent features. To address this issue, our proposed IESD approach combines two types of scores: one for editing and the other for identity preservation. This allows us to strike a balance between preserving the initial appearance and achieving the desired editing outcome. Figure 11: **Samples of the tiny dataset collected for learning <back-view> token.** Figure 12: **Analysis of the learned <back-view> on 2D image generation.** For each pair of images, we present two 2D images generated with the same random seed, where the left image is conditioned on the plain text ”back view” and the right image is conditioned on the <back-view> token. Figure 13: **Analysis of the inherent bias in 2D diffusion models.** For each case, we display several 2D outputs of SD and IP2P, utilizing modified descriptions and instructions, respectively, with reference images from our coarse-stage NeRF model to facilitate the editing process. ## Appendix C Additional qualitative comparisons ### Comparison with existing methods on generation results We provide more qualitative comparisons with four baseline methods [73, 43, 65, 74] in Fig. 14 and Fig. 15. These results serve to reinforce the claims made in Sec. 4.1 of the main paper, providing further evidence of the superior performance of our HeadSculpt in generating high-fidelity head avatars. These results showcase the ability of our method to capture intricate details, realistic textures, and overall visual quality, solidifying its position as a state-of-the-art solution in this task. Notably, to provide a more immersive and comprehensive understanding of our results, we include multiple outcomes of our HeadSculpt in the form of \(360^{\circ}\)**rotating videos**. These videos can be accessed on [https://brandonhan.uk/HeadSculpt](https://brandonhan.uk/HeadSculpt), enabling viewers to observe the generated avatars from various angles and perspectives. Figure 15: **Additional comparisons with existing methods on generation (Part 2).** *Non-official. ### Comparison with existing methods on editing results Since the absence of alternative methods specifically designed for editing, we conduct additional evaluations of the editing results generated by existing methods by modifying the text prompts. Fig. 16 illustrates that bias in editing is a pervasive issue encountered by all the baselines. This bias stems from the shared SDS guidance function, which is based on a diffusion prior, despite the variations in representation and optimization methods employed by these approaches. Instead, IESD enables the guidance function to incorporate information from two complementary sources: (1) the original image gradient, which preserves identity, and (2) the editing gradient, which captures desired modifications. By considering both terms, our approach grants more explicit and direct control over the editing process compared to the conventional guidance derived solely from the input. Figure 16: **Comparisons with existing methods on editing.*Non-official.** ### Comparison with existing methods on stability We observe that all baselines tend to have diverged training processes as they do not integrate 3D prior to the diffusion model. Taking two shape-guided prior methods (_i.e._, Latent-NeRF [43] and Fantasia3D [10]) as examples, we compare their generation results and ours across different random seeds. We conduct comparisons under the same default hyper-parameters and present the results in Fig. 17. We notice that prior methods need to try several runs to get the best generation while ours can achieve consistent results among different runs. Our method is thus featured with stable training, without the need for cherry-picking over many runs. Figure 17: **Results across random seeds (0, 1, 2).*Non-official.**
2302.09784
A Projection Method for Compressible Generic Two-Fluid Model
A new projection method for a generic two-fluid model is presented in this work. To be specific, it is shown that the projection method for solving single-phase variable density incompressible flows or compressible flows can be extended to the case of viscous compressible two-fluid flows. The idea relies on the property that the single pressure $p$ can be uniquely determined by the products of volume fractions and densities $\phi_k \rho_k$ of the two fluids, respectively. Moreover, a suitable assignment of the intermediate step variables is necessary to maintain the stability. The energy stability for the proposed numerical scheme is proved and the first order convergence in time is justified by three numerical tests.
Po-Yi Wu
2023-02-20T06:13:41Z
http://arxiv.org/abs/2302.09784v1
# A Projection Method for Compressible Generic Two-Fluid Model ###### Abstract A new projection method for a generic two-fluid model is presented in this work. To be specific, it is shown that the projection method for solving single-phase variable density incompressible flows or compressible flows can be extended to the case of viscous compressible two-fluid flows. The idea relies on the property that the single pressure \(p\) can be uniquely determined by the products of volume fractions and densities \(\phi_{k}\rho_{k}\) of the two fluids, respectively. Moreover, a suitable assignment of the intermediate step variables is necessary to maintain the stability. The energy stability for the proposed numerical scheme is proved and the first order convergence in time is justified by three numerical tests. ## 1 Introduction Multi-fluid flows are widely seen in both nature and industry. For instance, the oil/gas/water three-phase system is encountered in petroleum industry[1, 2, 3]; two-gas system is used in the semiconductor manufacture[4, 5, 6]; and gas/liquid system occurs in many chemical engineering processes[7, 8, 9, 10]. The simplest case is the two-fluid system. In this case, each fluid obey its own governing equations and interact with the other fluid through the free surface or some constitutive relations. Several versions of the model for two-fluid flows have been proposed. They can be roughly divided into two classes: (i) Interface-capturing models [11, 12, 13, 14] and (ii) Averaged models [15, 16, 17, 18, 19]. The use of interface-capturing models is able to resolve the topology of phase distribution and extract the physical detail in the system. When the topology of the phase distribution is too complicated or the interface between phases involves large variety of length scales, the class of averaged models can be a good compromise for simulating the flow problem. Although the information of the interface is blurred by the use of averaged model, several advantages arise instead: (i) a cheaper computational cost; (ii) less sensitive to mesh than interfacial-capturing model; and (iii) simpler expression of the interfacial terms. In this work, a generic two-fluid model with similar form as [15] is considered. The unknown variables of the system are two velocities, two densities, two volume fractions, and one pressure. It is worth noting that such model allows a disparity of motion between the two phases, which is quite common in gas-liquid system. As far as the author's knowledge, there is no mathematically provable stable numerical scheme for solving this system. To begin with the development, some ideas are borrowed by the numerical method for single phase variable density incompressible Navier-Stokes equations[20] and for single phase compressible Navier-Stokes equations[21, 22]. Both of them are of the framework of projection method [23, 24, 25]. Several difficulties occur when extending their ideas to the two-fluid system. The main issues are the following two: (i) One will have to solve one pressure by the two intermediate velocities in the projection step. The asymmetric structure causes that the pressure cannot be obtained by means of the formulation proposed in [21] directly; (ii) A proper assignment of the intermediate volume fractions and densities to advection-diffusion step and renormalization step shall be taken care to maintain the stability. A prediction-correction procedure in the framework of projection method for the two-fluid model being considered is proposed. To tackle the asymmetric structure in the projection step, we may observe that the pressure \(p\) can be uniquely determined by the products of volume fraction and density (say \(\alpha_{g}\) and \(\alpha_{l}\)) of the two phases (see [15] or Section 2.2 in this paper). A symmetric formulation proposed in this paper for solving \(\alpha_{g}\) and \(\alpha_{l}\) together provides an available way to accomplish the projection step. For the other main issue, the choice of intermediate step of volume fractions and densities can be understood by the process of the stability analysis. The method for analyzing the stability of the method is similar to [26, 20]. However, there are some difficulties for keeping the energy bounds of potential functions. To this end, auxiliary functions related to the equation of states are introduced to complete the proof of the energy estimates. Finally, it is shown that the proposed time-discrete scheme is unconditionally stable. The paper is organized as follows. In Section 2, the governing equations and their reformulation being considered are presented. Some relations among the unknown variables are introduced. The a priori estimates for clarifying the motivation of the numerical scheme are exhibited. In Section 3, the time-discrete projection method for the considered two-fluid model is provided and the finite element implementation for numerical tests is introduced. In Section 4, the stability analysis for time-discrete scheme is conducted. In Section 5, three test problems implemented by finite element method with convergence test justify the feasibility of the numerical scheme. ## 2 Modeling equations In the following, we assume that the fluid domain \(\Omega\subset\mathbb{R}^{d}\), \(d=2,3\) is smooth, bounded and connected. Let \(T>0\) be the final time. We denote by \(\Gamma\) its boundary. For convenience, we define \(\Omega_{T}:=\Omega\times(0,T)\) and \(\Gamma_{T}:=\Gamma\times(0,T)\). ### Compressible generic two-fluid model We denote by the subscript \(k=g,l\) for phase \(g\) and phase \(l\), respectively. Let \(\phi_{k}\) be the volume fractions, \(\rho_{k}\) be the densities, \(\mathbf{u}_{k}\) be the velocities, \(p_{k}\) be the pressures. The governing equations are given by \[\phi_{g}+\phi_{l}=1,\quad\text{in }\Omega_{T}, \tag{2.1a}\] \[\partial_{t}(\phi_{k}\rho_{k})+\nabla\cdot(\phi_{k}\rho_{k}\mathbf{u}_{k})=0,\quad \text{in }\Omega_{T},\] (2.1b) \[\partial_{t}(\phi_{k}\rho_{k}\mathbf{u}_{k})+\nabla\cdot(\phi_{k}\rho_{k}\mathbf{u}_{ k}\otimes\mathbf{u}_{k})-\nabla\cdot(\phi_{k}\tau_{k}(\mathbf{u}_{k}))+\phi_{k}\nabla p _{k}+F_{D,k}=0,\quad\text{in }\Omega_{T},\] (2.1c) \[p_{g}=p_{l}=p,\quad\text{in }\Omega_{T}. \tag{2.1d}\] The constraint (2.1d) is to close the system so that the number of variables equals to the number of equations. The stress tensor \(\tau_{k}(\mathbf{u}_{k})\) are of the form \[\tau(\mathbf{u}_{k})=2\mu_{k}D(\mathbf{u}_{k})+\lambda_{k}(\nabla\cdot\mathbf{u}_{k})I, \tag{2.2}\] where \(D(\mathbf{u}_{k})\) denotes the symmetric part of the velocity gradient \(\nabla\mathbf{u}_{k}\), \(\mu_{k}\) are the dynamic viscosities and \(\lambda_{k}\) are the second viscosities. The drag force \(F_{D,k}\) are assumed to be of the form: \[F_{D,k}=C_{D}\phi_{g}\phi_{l}|\mathbf{u}_{g}-\mathbf{u}_{l}|(\mathbf{u}_{k}-\mathbf{u}_{ \widetilde{k}}),\ \ k=g,l,\ \ \widetilde{k}=l,g, \tag{2.3}\] where \(C_{D}\) is assumed to be a positive constant. #### Equation of state We assume the barotropic system such that \[p=\zeta_{g}(\rho_{g})=\zeta_{l}(\rho_{l}) \tag{2.4}\] with \[\zeta_{g}(z)=A_{g}z^{\gamma_{g}},\quad\zeta_{l}(z)=A_{l}(z^{\gamma_{l}}-\rho_{l, 0}^{\gamma_{l}})+p_{0}, \tag{2.5}\] for some positive constants \(A_{g},A_{l},\gamma_{g},\gamma_{l},\rho_{l,0},p_{0}\). In this work, we further assume that \(\gamma_{k}>1\) for \(k=g,l\). #### Boundary conditions and Initial conditions To complete the statement of the mathematical problem, we impose that \[\begin{array}{l}\rho_{k}|_{\Gamma_{in}}=\rho_{k,in},\ \ \phi_{k}|_{\Gamma_{in}}=\phi_{k,in},\ \ \mathbf{u}_{k}|_{\Gamma}=\mathbf{b}_{k},\\ \rho_{k}|_{t=0}=\rho_{k}^{0},\ \ \phi_{k}|_{t=0}=\phi_{k}^{0},\ \ \mathbf{u}_{k}|_{t=0}=\mathbf{u}_{k,0},\end{array} \tag{2.6}\] where \(\Gamma_{in}\) is the inlet boundary such that \[\Gamma_{in}:=\{x\in\Gamma|\ \mathbf{b}_{k}(x)\cdot\mathbf{n}(x)<0\}, \tag{2.7}\] where \(\mathbf{n}(\cdot)\) is the outer normal on \(\Gamma\). ### Reformulation of the governing equations Let \(\alpha_{k}=\phi_{k}\rho_{k}\), (2.1) becomes \[\frac{\alpha_{g}}{\rho_{g}}+\frac{\alpha_{l}}{\rho_{l}}=1,\quad\text{in } \Omega_{T}, \tag{2.8a}\] \[\partial_{t}\alpha_{k}+\nabla\cdot(\alpha_{k}\mathbf{u}_{k})=0,\quad\text{in } \Omega_{T},\] (2.8b) \[\begin{array}{l}\partial_{t}(\alpha_{k}\mathbf{u}_{k})+\nabla\cdot(\alpha_{k} \mathbf{u}_{k}\otimes\mathbf{u}_{k})-\nabla\cdot(\frac{\alpha_{k}}{\rho_{k}}\tau_{k}( \mathbf{u}_{k}))\\ +\frac{\alpha_{k}}{\rho_{k}}\nabla p+C_{D}\frac{\alpha_{g}\alpha_{l}}{\rho_{g} \rho_{l}}|\mathbf{u}_{g}-\mathbf{u}_{l}|(\mathbf{u}_{k}-\mathbf{u}_{\widetilde{k}})=0,\ \ k=g,l,\ \ \widetilde{k}=l,g,\quad\text{in } \Omega_{T}.\end{array} \tag{2.8c}\] In the following, we will show that with given \(\alpha_{g}\) and \(\alpha_{l}\), the quantities \(\phi_{g}\), \(\phi_{l}\), \(\rho_{g}\), \(\rho_{l}\), and \(p\) can be uniquely determined. #### Pressure balance By (2.1d), we have \[s_{g}^{2}d\rho_{g}=s_{l}^{2}d\rho_{l},\quad s_{k}=\sqrt{\frac{d\zeta_{k}}{d \rho_{k}}(\rho_{k})} \tag{2.9}\] By \(\alpha_{k}=\phi_{k}\rho_{k}\), we have \[d\rho_{g}=\frac{1}{\phi_{g}}(d\alpha_{g}-\rho_{g}d\phi_{g}),\quad d\rho_{l}= \frac{1}{\phi_{l}}(d\alpha_{l}-\rho_{l}d\phi_{g})\] Using (2.9), the differential of the gaseous phase volume fraction can be expressed as \[d\phi_{g}=\frac{\phi_{l}s_{g}^{2}}{\phi_{l}\rho_{g}s_{g}^{2}+\phi_{g}\rho_{l}s _{l}^{2}}d\alpha_{g}-\frac{\phi_{l}s_{l}^{2}}{\phi_{l}\rho_{g}s_{g}^{2}+\phi_{ g}\rho_{l}s_{l}^{2}}d\alpha_{l}\] Note that we always work with the case with \(\phi_{k}>0\). The differential of the gaseous phase density can be expressed as \[d\rho_{g}=\frac{\rho_{g}\rho_{l}s_{l}^{2}}{\alpha_{l}\rho_{g}^{2}s_{g}^{2}+ \alpha_{g}\rho_{l}^{2}s_{l}^{2}}(\rho_{l}d\alpha_{g}+\rho_{g}d\alpha_{l}).\] Therefore, the differential of pressure can be expressed as \[dp=C^{2}(\rho_{l}d\alpha_{g}+\rho_{g}d\alpha_{l}), \tag{2.10}\] where \[C^{2}=\frac{s_{l}^{2}s_{g}^{2}}{\phi_{g}\rho_{l}s_{l}^{2}+\phi_{l}\rho_{g}s_{g} ^{2}} \tag{2.11}\] With (2.10) in hand, (2.8c) can be rewritten as \[\partial_{t}(\alpha_{k}\mathbf{u}_{k})+\nabla\cdot(\alpha_{k}\mathbf{u}_{k}\otimes\mathbf{ u}_{k})+C^{2}(\rho_{l}\nabla\alpha_{g}+\rho_{g}\nabla\alpha_{l})-\nabla\cdot( \phi_{k}\tau_{k}(\mathbf{u}_{k}))+F_{D,k}=0 \tag{2.12}\] #### Relations among \(\phi_{k}\), \(\rho_{k}\), and \(\alpha_{k}\) We recall (2.8a), which gives us \[\rho_{l}=\frac{\alpha_{l}\rho_{g}}{\rho_{g}-\alpha_{g}} \tag{2.13}\] By the pressure equilibrium assumption (2.1d), we have \[\varphi(\rho_{g})=\zeta_{g}(\rho_{g})-\zeta_{l}\left(\frac{\alpha_{l}\rho_{g} }{\rho_{g}-\alpha_{g}}\right)=0 \tag{2.14}\] Differentiating \(\varphi\) with respect to \(\rho_{g}\) yields \[\varphi^{\prime}(\rho_{g})=s_{g}^{2}+s_{l}^{2}\frac{\alpha_{l}\alpha_{g}}{( \rho_{g}-\alpha_{g})^{2}}\] Therefore \(\varphi\) is a non-decreasing function of \(\rho_{g}\). For non-degenerate case \(\phi_{k}<1\), we look for \(\rho_{g}\in(\alpha_{g},+\infty)\). Since \(\varphi((\alpha_{g},+\infty))=(-\infty,+\infty)\), \(\rho_{g}=\rho_{g}(\alpha_{g},\alpha_{l})\) is uniquely determined by solving \(\varphi(\rho_{g})=0\) with given \(\alpha_{g}\) and \(\alpha_{l}\). Finally, \(\rho_{l}\) and \(\phi_{k}\), \(k=g,l\) are given by \[\phi_{g}(\alpha_{g},\alpha_{l})=\frac{\alpha_{g}}{\rho_{g}(\alpha_{g},\alpha_ {l})},\ \ \phi_{l}(\alpha_{g},\alpha_{l})=1-\frac{\alpha_{g}}{\rho_{g}(\alpha_{g},\alpha_ {l})},\ \ \rho_{l}(\alpha_{g},\alpha_{l})=\frac{\alpha_{l}\rho_{g}(\alpha_{g},\alpha_{l}) }{\rho_{g}(\alpha_{g},\alpha_{l})-\alpha_{g}} \tag{2.15}\] ### A priori estimates to (2.8a)-(2.8c) In this work, the stability issues are particularly payed attention and a discrete version associated to the numerical scheme is desired. Let us recall the a priori estimates associated to problem (2.8a)-(2.8c) with zero forcing term and zero velocities on \(\Gamma\) for all time[15]. The following estimates present the positiveness of the mass, mass conservation, and the energy stability, respectively. For \(k=g,l\), we have * Positiveness of the mass \[\alpha_{k}(x,t)>0,\ \ \ \forall(x,t)\in\Omega_{T}\] (2.16) * Mass conservation \[\int_{\Omega}\alpha_{k}(x,t)dx=\int_{\Omega}\alpha_{k}(x,0)dx,\ \ \ \forall t\in(0,T)\] (2.17) * Energy stability \[\begin{split}&\sum_{k=g,l}\left[\frac{1}{2}\frac{d}{dt}\int_{ \Omega}\alpha_{k}(x,t)\mathbf{u}_{k}(x,t)^{2}dx+\frac{d}{dt}\int_{\Omega}\alpha_{k }(x,t)e_{k}(\rho_{k}(x,t))dx\right.\\ &\left.+\int_{\Omega}\phi_{k}(x,t)\tau_{k}(\mathbf{u}_{k}(x,t)): \nabla\mathbf{u}_{k}(x,t)dx\right]+\int_{\Omega}C_{D}\phi_{g}(x,t)\phi_{l}(x,t)| \mathbf{u}_{g}(x,t)-\mathbf{u}_{l}(x,t)|^{3}dx=0\end{split}\] (2.18) In the above, \(e_{k}(\cdot)\) is the potential energy dervied from the equation of state such that \[e_{k}^{\prime}(z)=\frac{\zeta_{k}(z)}{z^{2}} \tag{2.19}\] We may choose proper constants \(\rho_{k,ref}\) so that the following expression makes sense: \[e_{k}(z)=\int_{\rho_{k,ref}}^{z}\frac{\zeta_{k}(s)}{s^{2}}ds \tag{2.20}\] The estimate (2.18) is a little bit different from that proposed by [15] because of extra capillary terms in their work. Nevertheless, they can be obtained by a completely same way. ## 3 Numerical scheme ### Time-discrete scheme We proceed the following to get the time-discrete solution: * Step 1. Prediction of the mass fraction \(\widetilde{\alpha}_{k}^{m+1}\): \[\frac{\widetilde{\alpha}_{k}^{m+1}-\alpha_{k}^{m}}{\delta t}+\nabla\cdot( \widetilde{\alpha}_{k}^{m+1}\mathbf{u}_{k}^{m})=0\] (3.1) * Step 2. Solve the intermediate volume fraction \(\widetilde{\phi}_{k}^{m+1}\) and the density \(\widetilde{\rho}_{k}^{m+1}\) by \[\begin{cases}\widetilde{\phi}_{g}^{m+1}+\widetilde{\phi}_{l}^{m+1}=1\\ \widetilde{\phi}_{k}^{m+1}\widetilde{\rho}_{k}^{m+1}=\widetilde{\alpha}_{k}^{ m+1},\ \ k=g,l\\ \zeta_{g}(\widetilde{\rho}_{g}^{m+1})=\zeta_{l}(\widetilde{\rho}_{l}^{m+1}) \end{cases}\] (3.2) * Step 3. Renormalization of the intermediate pressure \(\widetilde{p}_{k}^{m+1}\): \[\nabla\cdot\left(\frac{\widetilde{\phi}_{k}^{m+1}}{\widetilde{\rho}_{k}^{m+1} }\nabla\widetilde{p}_{k}^{m+1}\right)=\nabla\cdot\left(\sqrt{\frac{\widetilde {\phi}_{k}^{m+1}\widetilde{\phi}_{k}^{m}}{\widetilde{\rho}_{k}^{m+1} \widetilde{\rho}_{k}^{m}}}\nabla p^{m}\right)\] (3.3) with the boundary constraint: \[\frac{\widetilde{\phi}_{k}^{m+1}}{\widetilde{\rho}_{k}^{m+1}}\frac{\partial \widetilde{p}_{k}^{m+1}}{\partial n}=\sqrt{\frac{\widetilde{\phi}_{k}^{m+1} \widetilde{\phi}_{k}^{m}}{\widetilde{\rho}_{k}^{m+1}\widetilde{\rho}_{k}^{m}} }\frac{\partial p^{m}}{\partial n}\ \ \ \text{on}\ \Gamma\] (3.4) * Step 4. Solve the intermediate velocities \(\widetilde{\mathbf{u}}_{k}^{m+1}\) \[\begin{split}&\frac{\widetilde{\alpha}_{k}^{m+1}}{\widetilde{\mathbf{u} }_{k}^{m+1}}-\alpha_{k}^{m}\mathbf{u}_{k}^{m}\\ &-\nabla\cdot(\widetilde{\phi}_{k}^{m+1}\tau_{k}(\widetilde{\mathbf{u }}_{k}^{m+1}))+C_{D}\widetilde{\phi}_{g}^{m+1}\widetilde{\phi}_{l}^{m+1}| \widetilde{\mathbf{u}}_{g}^{m+1}-\widetilde{\mathbf{u}}_{l}^{m+1}|(\widetilde{\mathbf{u}}_ {k}^{m+1}-\widetilde{\mathbf{u}}_{k}^{m+1})=0\end{split}\] (3.5) * Step 5. (Projection) Solve \(\overline{\mathbf{u}}_{k}^{m+1}\), \(p^{m+1}\), \(\phi_{k}^{m+1}\), \(\rho_{k}^{m+1}\): \[\begin{cases}\widetilde{\alpha}_{k}^{m+1}\frac{\overline{\mathbf{u}}_{k}^{m+1}- \widetilde{\mathbf{u}}_{k}^{m+1}}{\delta t}+\widetilde{\phi}_{k}^{m+1}\nabla(p^{m+ 1}-\widetilde{p}_{k}^{m+1})=0\\ \frac{\phi_{k}^{m+1}\rho_{k}^{m+1}-\phi_{k}^{m}\rho_{k}^{m}}{ \delta t}+\nabla\cdot(\widetilde{\phi}_{k}^{m+1}\rho_{k}^{m+1}\overline{\mathbf{ u}}_{k}^{m+1})=0\\ \phi_{g}^{m+1}+\phi_{l}^{m+1}=1\\ \rho_{k}^{m+1}=\zeta_{k}^{-1}(p^{m+1})\end{cases}\] (3.6) with the boundary constraint \[\frac{\partial p^{m+1}}{\partial n}=\frac{\partial\widetilde{p}_{k}^{m+1}}{ \partial n}\] (3.7) * Step 6. Renormalization of the end-of-step velocities \(\mathbf{u}_{k}^{m+1}\): \[\sqrt{\alpha_{k}^{m+1}}\mathbf{u}_{k}^{m+1}=\sqrt{\widetilde{\alpha}_{k}^{m+1}} \overline{\mathbf{u}}_{k}^{m+1},\ \ \alpha_{k}^{m+1}=\phi_{k}^{m+1}\rho_{k}^{m+1},\ \ k=g,l\] (3.8) #### Equivalent problem of the projection step To eliminate the variable \(\overline{\mathbf{u}}_{k}^{m+1}\) in the coupling problem (3.6), we may manipulate the first two equations in (3.6) to get * Step 5'. Solve \(p^{m+1}\), \(\phi_{g}^{m+1}\), \(\phi_{l}^{m+1}\): \[\begin{cases}\frac{1}{\delta t^{2}}(\phi_{k}^{m+1}\zeta_{k}^{-1}(p^{m+1})- \alpha_{k}^{m})+\frac{1}{\delta t}\nabla\cdot(\widetilde{\phi}_{k}^{m+1}\zeta _{k}^{-1}(p^{m+1})\widetilde{\mathbf{u}}_{k}^{m+1})\\ -\nabla\cdot\left(\frac{\widetilde{\phi}_{k}^{m+1}\zeta_{k}^{-1}(p^{m+1})}{ \widetilde{\rho}_{k}^{m+1}}\nabla(p^{m+1}-\widetilde{p}_{k}^{m+1})\right)=0\\ \phi_{g}^{m+1}+\phi_{l}^{m+1}=1\end{cases}\] (3.9) In view of the asymmetric structure of (3.9), it is not easy to solve the problem directly. To tackle the difficulty, the argument in Section 2.2 is applied so that we can express \(p^{m+1}\) in terms of \(\alpha_{l}^{m+1}\) and \(\alpha_{g}^{m+1}\). A symmetric projection step can be expressed as * Step 5". Solve \(\alpha_{k}^{m+1}\), \(p^{m+1}\): \[\begin{cases}\frac{1}{\delta t^{2}}(\alpha_{k}^{m+1}-\alpha_{k}^{m})+\frac{1}{ \delta t}\nabla\cdot(\widetilde{\phi}_{k}^{m+1}\zeta_{k}^{-1}(p^{m+1}) \widetilde{\mathbf{u}}_{k}^{m+1})-\nabla\cdot\left(\frac{\widetilde{\phi}_{k}^{m+1} \zeta_{k}^{-1}(p^{m+1})}{\widetilde{\rho}_{k}^{m+1}}\nabla(p^{m+1}-\widetilde{ p}_{k}^{m+1})\right)=0\\ \nabla p^{m+1}=C(\alpha_{g}^{m+1},\alpha_{l}^{m+1})^{2}\left(\rho_{l}(\alpha_{ g}^{m+1},\alpha_{l}^{m+1})\nabla\alpha_{g}^{m+1}+\rho_{g}(\alpha_{g}^{m+1},\alpha_{l}^{m+1}) \nabla\alpha_{l}^{m+1}\right)\end{cases}\] (3.10) **Remark 1**: _Step 5" is solved iteratively in this way: Given \(\alpha_{k}^{m+1,l}\), \(\rho_{k}^{m+1,l}\) at the \(l\)-th iteration step, the solutions for the \((l+1)\)-th iteration step are given by_ \[\begin{cases}\frac{1}{\delta t^{2}}\alpha_{k}^{m+1,l+1}+\frac{1}{ \delta t}\nabla\cdot(\widetilde{\phi}_{k}^{m+1}\rho_{k}^{m+1,l}\widetilde{ \boldsymbol{u}}_{k}^{m+1})\\ -\nabla\cdot\left(\frac{\widetilde{\phi}_{k}^{m+1}\rho_{k}^{m+1,l}}{ \widetilde{\rho}_{k}^{m+1}}C(\alpha_{g}^{m+1,l},\alpha_{l}^{m+1,l})^{2}(\rho_ {l}^{m+1,l}\nabla\alpha_{g}^{m+1,l+1}+\rho_{g}^{m+1,l}\nabla\alpha_{l}^{m+1,l+ 1})\right)\\ =-\nabla\cdot\left(\frac{\widetilde{\phi}_{k}^{m+1}\rho_{k}^{m+1,l}}{ \widetilde{\rho}_{k}^{m+1}}\nabla\widetilde{p}_{k}^{m+1}\right)+\frac{1}{ \delta t^{2}}\alpha_{k}^{m}\\ \rho_{k}^{m+1,l+1}=\rho_{k}(\alpha_{g}^{m+1,l+1},\alpha_{l}^{m+1,l+1})\end{cases} \tag{3.11}\] ### Finite element implementation For simplicity, we consider the problem with \(\boldsymbol{u}_{k}=0\) on \(\Gamma\), \(k=g,l\). The case with nonhomogeneous boundary conditions or outlet can be modified accordingly by a standard way. Let us introduce finite element approximations \(Y_{h}\subset H_{1}\) for densities \(\rho_{k,h}\) (also for intermediate step \(\widetilde{\rho}_{k,h}\)), volume fractions \(\phi_{k,h}\) (also for intermediate step \(\widetilde{\phi}_{k,h}\)), their products \(\alpha_{k,h}\) (also for intermediate step \(\widetilde{\alpha}_{k,h}\)), and for pressure \(p_{h}\) (also for intermediate step \(\widetilde{p}_{k,h}\)); \(\boldsymbol{X}_{0,h}\subset\boldsymbol{H}_{0}^{1}\) for intermediate step velocities \(\widetilde{\boldsymbol{u}}_{k,h}\). To maintain the positiveness of \(\alpha_{k}\), the weak formulation for Step 1 - (3.1) reads: For \(m\geq 0\), find \(\widetilde{\alpha}_{k,h}^{m+1}\in Y_{h}\) such that for \(\widetilde{\alpha}_{k,h}^{m+1}|_{\Gamma_{in}}=\phi_{k,in}\rho_{k,in}\) and such that for all \(q_{h}\in Y_{h}\) with \(q_{h}|_{\Gamma_{in}}=0\), \[\left((1+\frac{1}{2}\delta t(\nabla\cdot\boldsymbol{u}_{k,h}^{m})+\frac{1}{ 2}\delta t(\nabla\cdot\boldsymbol{u}_{k,h}^{m})^{+})\widetilde{\alpha}_{k,h}^ {m+1},q_{h}\right)+\delta t(\boldsymbol{u}_{k,h}^{m}\cdot\nabla\widetilde{ \alpha}_{k,h}^{m+1},q_{h})=\left((1+\frac{1}{2}\delta t(\nabla\cdot\boldsymbol{ u}_{k,h}^{m})^{-})\alpha_{k,h}^{m},q_{h}\right) \tag{3.12}\] The positiveness of the above scheme is shown in Remark 2. At Step 2, the intermediate \(\widetilde{\phi}_{k,h}^{m+1}\) and \(\widetilde{\rho}_{k,h}^{m+1}\) can be obtain pointwisely by the given \(\widetilde{\alpha}_{k,h}^{m+1}\) without error generation by spatial discretization since all of them belong to the same space \(Y_{h}\). One may conduct a root-finding technique (e.g. Ridder's method[27]) to solve \(\widetilde{\phi}_{k,h}^{m+1}\) and \(\widetilde{\rho}_{k,h}^{m+1}\) pointwisely. The renormalization (Step 3) for intermediate pressure \(\widetilde{p}_{k}^{m+1}\) can be proceeded by the following problem: For \(m\geq 0\), find \(\widetilde{p}_{k}^{m+1}\in Y_{h}\) such that for all \(w_{h}\in Y_{h}\), \[\left(\frac{\widetilde{\phi}_{k,h}^{m+1}}{\widetilde{\rho}_{k,h}^{m+1}}\nabla \widetilde{p}_{k,h}^{m+1},\nabla w_{h}\right)=\left(\sqrt{\frac{\widetilde{ \phi}_{k,h}^{m+1}\widetilde{\phi}_{k,h}^{m}}{\widetilde{\rho}_{k,h}^{m+1} \widetilde{\rho}_{k,h}^{m}}}\nabla p_{k,h}^{m},\nabla w_{h}\right) \tag{3.13}\] We note that the boundary terms are eliminated because of (3.4). The weak formulation for Step 4 (advection-diffusion step) reads: For \(m\geq 0\), find \(\widetilde{\boldsymbol{u}}_{k,h}^{m+1}\in\boldsymbol{X}_{0,h}\) such that for all \(\boldsymbol{v}_{h}\in\boldsymbol{X}_{0,h}\), \[\begin{split}&\left(\frac{\widetilde{\alpha}_{k,h}^{m+1}\widetilde{ \boldsymbol{u}}_{k,h}^{m+1}-\alpha_{k,h}^{m}\boldsymbol{u}_{k,h}^{m}}{ \delta t},\boldsymbol{v}_{h}\right)+\left(\nabla\cdot(\widetilde{\alpha}_{k,h}^ {m+1}\boldsymbol{u}_{k,h}^{m}\otimes\widetilde{\boldsymbol{u}}_{k,h}^{m+1}), \boldsymbol{v}_{h}\right)-\left(\widetilde{p}_{k,h}^{m+1},\nabla\cdot( \widetilde{\phi}_{k,h}^{m+1}\boldsymbol{v}_{h})\right)\\ &+(\widetilde{\phi}_{k,h}^{m+1}\tau_{k}(\widetilde{\boldsymbol{u }}_{k,h}^{m+1}),\nabla\boldsymbol{v}_{h})+\left(C_{D}\widetilde{\phi}_{g,h}^{m+1} \widetilde{\phi}_{l,h}^{m+1}|\widetilde{\boldsymbol{u}}_{g,h}^{m+1}- \widetilde{\boldsymbol{u}}_{l,h}^{m+1}|(\widetilde{\boldsymbol{u}}_{k,h}^{m+1}- \widetilde{\boldsymbol{u}}_{k,h}^{m+1}),\boldsymbol{v}_{h}\right)=0.\end{split} \tag{3.14}\] The above equation can be solved separately for \(g\) and \(l\) and the nonlinear term can be tackled by iteration (see for example [28]). The projection step (Step 5) is handled by (3.10) and the according weak formulation reads: For \(m\geq 0\), find \(\alpha_{k}^{m+1}\in Y_{h}\) such that for all \(q_{h}\in Y_{h}\), \[(\alpha_{k,h}^{m+1}-\alpha_{k,h}^{m},q_{h})+\delta t\left(\nabla\cdot(\widetilde {\phi}_{k,h}^{m+1}\zeta_{k}^{-1}(p_{h}^{m+1})\widetilde{\mathbf{u}}_{k,h}^{m+1}),q _{h}\right)+\delta t^{2}\left(\frac{\widetilde{\phi}_{k,h}^{m+1}\zeta_{k,h}^{- 1}(p_{h}^{m+1})}{\widetilde{\rho}_{k,h}^{m+1}}\nabla(p_{h}^{m+1}-\widetilde{p} _{k,h}^{m+1}),\nabla q_{h}\right)=0 \tag{3.15}\] with \[p_{h}^{m+1}=C(\alpha_{g,h}^{m+1},\alpha_{l,h}^{m+1})^{2}(\rho_{l}(\alpha_{g,h }^{m+1},\alpha_{l,h}^{m+1})\nabla\alpha_{g,h}^{m+1}+\rho_{g}(\alpha_{g,h}^{m+ 1},\alpha_{l,h}^{m+1})\nabla\alpha_{l,h}^{m+1}) \tag{3.16}\] The above system can be solved iteratively by the way provided in Remark 1. In (3.16), functions of \(\alpha_{g,h}^{m+1}\) and \(\alpha_{l,h}^{m+1}\): \(C^{2}\), \(\rho_{l}\), and \(\rho_{g}\) are taken to be their projection on \(Y_{h}\). Similar to (3.13), the boundary terms of (3.15) are eliminated because of the boundary constraint (3.7). ## 4 Stability analysis Let \(\varphi\in\mathbb{R}\) be any function. We denote by \(\varphi^{+}:=\max(f,0)\) and by \(\varphi^{-}:=-\min(f,0)\). We denote by \(\|\cdot\|_{L^{p}}:=\|\cdot\|_{L^{p}(\Omega)}\) the \(L^{p}\) norm on \(\Omega\), \(\|\cdot\|_{W^{k,p}}:=\|\cdot\|_{W^{k,p}(\Omega)}\) the \(W^{k,p}\) norm, and \(\|\cdot\|_{k}=\|\cdot\|_{H^{k}}=\|\cdot\|_{W^{k,2}}\), \(0\leq k\leq+\infty\), \(1\leq p\leq+\infty\). We denote by \((\cdot,\cdot)\) the inner product associated to \(L^{2}(\Omega)\) space such that \((u,v)=\int_{\Omega}u(x)v(x)dx\) for \(u,v\) in \(L^{2}(\Omega)\). #### Mass conservation Step 5 and Step 6 guarantee the mass conservation in the following sense: **Proposition 1**: _If \(\overline{\mathbf{u}}_{k}^{m+1}\), \(k=g,l\) satisfy the boundary conditions \(\overline{\mathbf{u}}_{k}^{m+1}\cdot\mathbf{n}|_{\Gamma}=0\), then we have_ \[\int_{\Omega}\alpha_{k}^{m+1}dx=\int_{\Omega}\alpha_{k}^{m}dx,\quad k=g,l \tag{4.1}\] Proof. _By (3.8) and (3.6), we have_ \[\frac{\alpha_{k}^{m+1}-\alpha_{k}^{m}}{\delta t}+\nabla\cdot(\widetilde{\phi }_{k}^{m+1}\rho_{k}^{m+1}\overline{\mathbf{u}}_{k}^{m+1})=0\] _Taking the integration on both sides, the divergence theorem together with the assumption \(\overline{\mathbf{u}}_{k}^{m+1}\cdot\mathbf{n}|_{\Gamma}=0\) lead to the conclusion. Q.E.D._ #### Energy estimates of (3.1)-(3.8) We recall an important lemma which will be used several times: **Lemma 1**: _For sufficiently smooth \(\varphi\) and \(\mathbf{v}\) with \(\mathbf{v}\cdot\mathbf{n}|_{\Gamma}=0\), we have_ \[\int_{\Omega}\left(\varphi\mathbf{v}\cdot\nabla\varphi+\frac{1}{2}\varphi^{2} \nabla\cdot\mathbf{v}\right)=0\] Let us start with the estimates for the intermediate pressures \(\widetilde{p}_{k}^{m+1}\). Taking the inner product of (3.3) with \(\widetilde{p}^{m+1}\), \[\left\|\sqrt{\frac{\widetilde{\phi}_{k}^{m+1}}{\widetilde{\rho}_{k}^{m+1}}} \nabla\widetilde{p}_{k}^{m+1}\right\|_{0}^{2}=\left(\sqrt{\frac{\widetilde{ \phi}_{k}^{m+1}}{\widetilde{\rho}_{k}^{m+1}}}\nabla\widetilde{p}_{k}^{m+1}, \sqrt{\frac{\widetilde{\phi}_{k}^{m}}{\widetilde{\rho}_{k}^{m+1}}}\nabla p^{m} \right)\leq\left\|\sqrt{\frac{\widetilde{\phi}_{k}^{m+1}}{\widetilde{\rho}_{k }^{m+1}}}\nabla\widetilde{p}_{k}^{m+1}\right\|_{0}\left\|\sqrt{\frac{ \widetilde{\phi}_{k}^{m}}{\widetilde{\rho}_{k}^{m}}}\nabla p_{k}^{m}\right\|_ {0}\] Therefore, \[\left\|\sqrt{\frac{\widetilde{\phi}_{k}^{m+1}}{\widetilde{\rho}_{k}^{m+1}}} \nabla\widetilde{p}_{k}^{m+1}\right\|_{0}\leq\left\|\sqrt{\frac{\widetilde{\phi} _{k}^{m}}{\widetilde{\rho}_{k}^{m}}}\nabla p_{k}^{m}\right\|_{0} \tag{4.2}\] Summing the inner product of (3.5) with \(\widetilde{\mathbf{u}}_{k}^{m+1}\) and the inner product of (3.1) with \(-\frac{1}{2}|\widetilde{\mathbf{u}}_{k}^{m+1}|^{2}\), we have \[\begin{split}&\frac{1}{\delta t}\|\sqrt{\widetilde{\alpha}_{k}^{m+1}} \widetilde{\mathbf{u}}_{k}^{m+1}\|_{0}^{2}-\frac{1}{\delta t}(\alpha_{k}^{m}\mathbf{u} _{k}^{m},\widetilde{\mathbf{u}}_{k}^{m+1})+(\nabla\cdot(\widetilde{\alpha}_{k}^{m+ 1}\mathbf{u}_{k}^{m}\otimes\widetilde{\mathbf{u}}_{k}^{m+1}),\widetilde{\mathbf{u}}_{k}^{m +1})\\ &+(\widetilde{\phi}_{k}^{m+1}\nabla\widetilde{p}_{k}^{m+1}, \widetilde{\mathbf{u}}_{k}^{m+1})+(\widetilde{\phi}_{k}^{m+1}\tau_{k}(\widetilde{ \mathbf{u}}_{k}^{m+1}),\nabla\widetilde{\mathbf{u}}_{k}^{m+1})+(\widetilde{F}_{D,k}^{ m+1},\widetilde{\mathbf{u}}_{k}^{m+1})\\ &-\frac{1}{2\delta t}\|\sqrt{\widetilde{\alpha}_{k}^{m+1}} \widetilde{\mathbf{u}}_{k}^{m+1}\|_{0}^{2}+\frac{1}{2\delta t}\|\sqrt{\widetilde{ \alpha}_{k}^{m}}\widetilde{\mathbf{u}}_{k}^{m+1}\|_{0}^{2}-\frac{1}{2}(\nabla\cdot (\widetilde{\alpha}_{k}^{m+1}\mathbf{u}_{k}^{m}),|\widetilde{\mathbf{u}}_{k}^{m+1}|^{ 2})=0,\end{split} \tag{4.3}\] where \(\widetilde{F}_{D,k}^{m+1}=C_{D}\widetilde{\phi}_{g}^{m+1}\widetilde{\phi}_{l}^ {m+1}|\widetilde{\mathbf{u}}_{g}^{m+1}-\widetilde{\mathbf{u}}_{l}^{m+1}|(\widetilde{ \mathbf{u}}_{k}^{m+1}-\widetilde{\mathbf{u}}_{\bar{k}}^{m+1})\). Using the Cauchy's inequality, Green's theorem, and an analogue of Lemma 1, we have \[\frac{1}{2}\|\sqrt{\widetilde{\alpha}_{k}^{m+1}}\widetilde{\mathbf{u}}_{k}^{m+1}\| _{0}^{2}+\delta t(\widetilde{\phi}_{k}^{m+1}\nabla\widetilde{p}_{k}^{m+1}, \widetilde{\mathbf{u}}_{k}^{m+1})+\delta t(\widetilde{\phi}_{k}^{m+1}\tau_{k}( \widetilde{\mathbf{u}}_{k}^{m+1}),\nabla\widetilde{\mathbf{u}}_{k}^{m+1})+\delta t( \widetilde{F}_{D,k}^{m+1},\widetilde{\mathbf{u}}_{k}^{m+1})\leq\frac{1}{2}\| \sqrt{\widetilde{\alpha}_{k}^{m}}\mathbf{u}_{k}^{m}\|_{0}^{2} \tag{4.4}\] Taking the inner product of the first equation in (3.6) with \(\delta t\overline{\mathbf{u}}_{k}^{m+1}\), \[\begin{split}&\frac{1}{2}\|\sqrt{\widetilde{\alpha}_{k}^{m+1}} \overline{\mathbf{u}}_{k}^{m+1}\|_{0}^{2}+\frac{1}{2}\|\sqrt{\widetilde{\alpha}_{ k}^{m+1}}(\overline{\mathbf{u}}_{k}^{m+1}-\widetilde{\mathbf{u}}_{k}^{m+1})\|_{0}^{2}- \frac{1}{2}\|\sqrt{\widetilde{\alpha}_{k}^{m+1}}\widetilde{\mathbf{u}}_{k}^{m+1}\|_ {0}^{2}\\ &+\delta t(\widetilde{\phi}_{k}^{m+1}\nabla p^{m+1},\overline{ \mathbf{u}}_{k}^{m+1})-\delta t(\widetilde{\phi}_{k}^{m+1}\nabla\widetilde{p}_{k}^ {m+1},\overline{\mathbf{u}}_{k}^{m+1})=0\end{split} \tag{4.5}\] Taking the inner product of the first equation in (3.6) again with \(\delta t\frac{\nabla\widetilde{p}_{k}^{m+1}}{\widetilde{\rho}_{k}^{m+1}}\), \[\begin{split}&(\widetilde{\phi}_{k}^{m+1}\nabla\widetilde{p}_{k}^ {m+1},\overline{\mathbf{u}}_{k}^{m+1})-(\widetilde{\phi}_{k}^{m+1}\nabla\widetilde{ p}_{k}^{m+1},\widetilde{\mathbf{u}}_{k}^{m+1})\\ &-\frac{1}{2}\delta t\left(\left\|\sqrt{\frac{\widetilde{\phi}_{ k}^{m+1}}{\widetilde{\rho}_{k}^{m+1}}}\nabla\widetilde{p}_{k}^{m+1}\right\|_{0}^{2}+ \left\|\sqrt{\frac{\widetilde{\phi}_{k}^{m+1}}{\widetilde{\rho}_{k}^{m+1}}} \nabla(\widetilde{p}_{k}^{m+1}-p^{m+1})\right\|_{0}^{2}-\left\|\sqrt{\frac{ \widetilde{\phi}_{k}^{m+1}}{\widetilde{\rho}_{k}^{m+1}}}\nabla p^{m+1}\right\|_ {0}^{2}\right)=0\end{split} \tag{4.6}\] Combining (4.2)-(4.6), we have \[\begin{split}&\frac{1}{2}\|\sqrt{\widetilde{\alpha}_{k}^{m+1}} \overline{\mathbf{u}}_{k}^{m+1}\|_{0}^{2}+\frac{1}{2}\|\sqrt{\widetilde{\alpha}_{k}^ {m+1}}(\overline{\mathbf{u}}_{k}^{m+1}-\widetilde{\mathbf{u}}_{k}^{m+1})\|_{0}^{2}+ \delta t(\widetilde{\phi}_{k}^{m+1}\nabla p^{m+1},\overline{\mathbf{u}}_{k}^{m+1}) \\ &+\delta t(\widetilde{\phi}_{k}^{m+1}\tau_{k}(\widetilde{\mathbf{u}} _{k}^{m+1}),\nabla\widetilde{\mathbf{u}}_{k}^{m+1})+\delta t(\widetilde{F}_{D,k}^{ m+1},\widetilde{\mathbf{u}}_{k}^{m+1})+\frac{1}{2}\delta t^{2}\left\|\sqrt{\frac{ \widetilde{\phi}_{k}^{m+1}}{\widetilde{\rho}_{k}^{m+1}}}\nabla p^{m+1}\right\|_ {0}^{2}\\ &\leq\frac{1}{2}\|\sqrt{\widetilde{\alpha}_{l}^{m}}\mathbf{u}_{k}^{m}\|_ {0}^{2}+\frac{1}{2}\delta t^{2}\left\|\sqrt{\frac{\widetilde{\phi}_{k}^{m}}{ \widetilde{\rho}_{k}^{m}}}\nabla p_{k}^{m}\right\|_{0}^{2}+\frac{1}{2}\delta t^{2} \left\|\sqrt{\frac{\widetilde{\phi}_{k}^{m+1}}{\widetilde{\rho}_{k}^{m+1}}} \nabla(\widetilde{p}_{k}^{m+1}-p^{m+1})\right\|_{0}^{2}\end{split} \tag{4.7}\] Using the first equation in (3.6), we have \[\begin{split}&\frac{1}{2}\|\sqrt{\widetilde{\alpha}_{k}^{m+1}} \overline{\mathbf{u}}_{k}^{m+1}\|_{0}^{2}+\delta t(\widetilde{\phi}_{k}^{m+1} \nabla p^{m+1},\overline{\mathbf{u}}_{k}^{m+1})\\ &+\delta t(\widetilde{\phi}_{k}^{m+1}\tau_{k}(\widetilde{\mathbf{u}} _{k}^{m+1}),\nabla\widetilde{\mathbf{u}}_{k}^{m+1})+\delta t(\widetilde{F}_{D,k}^{ m+1},\widetilde{\mathbf{u}}_{k}^{m+1})+\frac{1}{2}\delta t^{2}\left\|\sqrt{\frac{ \widetilde{\phi}_{k}^{m+1}}{\widetilde{\rho}_{k}^{m+1}}}\nabla p^{m+1}\right\|_ {0}^{2}\\ &\leq\frac{1}{2}\|\sqrt{\widetilde{\alpha}_{l}^{m}}\mathbf{u}_{k}^{m}\|_ {0}^{2}+\frac{1}{2}\delta t^{2}\left\|\sqrt{\frac{\widetilde{\phi}_{k}^{m}}{ \widetilde{\rho}_{k}^{m}}}\nabla p_{k}^{m}\right\|_{0}^{2}\end{split} \tag{4.8}\] To this stage, the problem is to estimate \(\delta t(\widetilde{\phi}_{k}^{m+1}\nabla p^{m+1},\overline{u}_{k}^{m+1})\). Let us introduce auxiliary functions \(f_{k}(z)=ze_{k}(z)\). If \(\gamma_{k}>1\), then \(f_{k}\) are \(C^{2}\), strictly convex functions for \(z>0\). Indeed, we have the following by definition: \[f_{k}(z)=z\int_{\rho_{k,ref}}^{z}\frac{\zeta_{k}(s)}{s^{2}}ds,\quad z\in(0,+\infty)\] Differentiating \(f\) twice with respect to \(z\), we get \[f_{k}^{\prime\prime}(z)=\frac{\zeta_{k}(z)}{z^{2}}+\frac{\zeta_{k}^{\prime}(z )z-\zeta_{k}(z)}{z^{2}}=\frac{\zeta_{k}^{\prime}(z)}{z}\] Therefore, we have \[\zeta_{k}^{\prime}(z)=zf_{k}^{\prime\prime}(z) \tag{4.9}\] We observe that \[\begin{split}&-\int_{\Omega}\nabla\cdot(\widetilde{\phi}_{k}^{m+1 }\rho_{k}^{m+1}\overline{u}_{k}^{m+1})f_{k}^{\prime}(\rho_{k}^{m+1})=\int_{ \Omega}\widetilde{\phi}_{k}^{m+1}\rho_{k}^{m+1}\overline{u}_{k}^{m+1}\cdot \nabla\left[\frac{d}{d\rho}(\rho e_{k}(\rho)\right]_{\rho=\rho_{k}^{m+1}}\\ &=\int_{\Omega}\widetilde{\phi}_{k}^{m+1}\overline{u}_{k}^{m+1} \cdot\nabla\rho_{k}^{m+1}\left(\frac{d\zeta_{k}(\rho)}{d\rho}\right)_{\rho= \rho_{k}^{m+1}}=\int_{\Omega}\widetilde{\phi}_{k}^{m+1}\nabla p^{m+1}\cdot \overline{u}^{m+1}\end{split} \tag{4.10}\] On the other hand \[\begin{split}&(\alpha_{k}^{m+1}-\alpha_{k}^{m})f_{k}^{\prime}( \rho_{k}^{m+1})=\alpha_{k}^{m+1}f_{k}^{\prime}(\rho_{k}^{m+1})-\alpha_{k}^{m} f_{k}^{\prime}(\rho_{k}^{m})+\alpha_{k}^{m}(f_{k}^{\prime}(\rho_{k}^{m})-f_{k}^{ \prime}(\rho_{k}^{m+1}))\\ &=\alpha_{k}^{m+1}e_{k}(\rho_{k}^{m+1})-\alpha_{k}^{m}e_{k}(\rho _{k}^{m})+\phi_{k}^{m+1}\zeta_{k}(\rho_{k}^{m})-\phi_{k}^{m}\zeta_{k}(\rho_{k} ^{m})+\alpha_{k}^{m}(f_{k}^{\prime}(\rho_{k}^{m})-f_{k}^{\prime}(\rho_{k}^{m+1 }))\end{split} \tag{4.11}\] Now, \[\begin{split}&\phi_{k}^{m+1}\zeta_{k}(\rho_{k}^{m+1})-\phi_{k}^{m }\zeta_{k}(\rho_{k}^{m})+\alpha_{k}^{m}(f_{k}^{\prime}(\rho_{k}^{m})-f_{k}^{ \prime}(\rho_{k}^{m+1}))\\ &=\zeta_{k}(\rho_{k}^{m+1})(\phi_{k}^{m+1}-\phi_{k}^{m})+\phi_{k} ^{m}(\zeta_{k}(\rho_{k}^{m+1})-\zeta_{k}(\rho_{k}^{m})+\rho_{k}^{m}(f_{k}^{ \prime}(\rho_{k}^{m})-f_{k}^{\prime}(\rho_{k}^{m+1}))\end{split} \tag{4.12}\] Since \(\phi_{k}^{m}\geq 0\) and \(f^{\prime\prime}(z)>0\), we have \[\phi_{k}^{m}\left[\zeta_{k}(\rho_{k}^{m+1})-\zeta_{k}(\rho_{k}^{m})+\rho_{k}^ {m}(f_{k}^{\prime}(\rho_{k}^{m})-f_{k}^{\prime}(\rho_{k}^{m+1}))\right]=\phi_{ k}^{m}\left[\int_{\rho_{k}^{m}}^{\rho_{k}^{m+1}}(z-\rho_{k}^{m})f_{k}^{ \prime\prime}(z)dz\right]\geq 0 \tag{4.13}\] Now, we have \[\delta t\int_{\Omega}\widetilde{\phi}_{k}^{m+1}\nabla p^{m+1}\cdot\overline{u }_{k}^{m+1}\geq\int_{\Omega}\alpha_{k}^{m+1}e_{k}(\rho_{k}^{m+1})-\alpha_{k}^{ m}e_{k}(\rho_{k}^{m})+p^{m+1}(\phi_{k}^{m+1}-\phi_{k}^{m}) \tag{4.14}\] Taking the summation with respect to \(k\), we have \[\begin{split}&\sum_{k}\delta t\int_{\Omega}\widetilde{\phi}_{k}^{m+1} \nabla p^{m+1}\cdot\overline{u}_{k}^{m+1}\geq\sum_{k}\int_{\Omega}\left(\alpha_ {k}^{m+1}e_{k}(\rho_{k}^{m+1})-\alpha_{k}^{m}e_{k}(\rho_{k}^{m})+p^{m+1}(\phi_ {k}^{m+1}-\phi_{k}^{m})\right)\\ &=\sum_{k}\int_{\Omega}\alpha_{k}^{m+1}e_{k}(\rho_{k}^{m+1})- \alpha_{k}^{m}e_{k}(\rho_{k}^{m})\end{split} \tag{4.15}\] Using (4.8), (4.15), and (3.8), we have \[\sum_{k}\left[\frac{1}{2}\|\sqrt{\alpha_{k}^{m+1}}\mathbf{u}_{k}^{m+1}\| _{0}^{2}+\int_{\Omega}\alpha_{k}^{m+1}e_{k}(\rho_{k}^{m+1})+\delta t(\widetilde{ \phi}_{k}^{m+1}\tau_{k}(\widetilde{\mathbf{u}}_{k}^{m+1}),\nabla\widetilde{\mathbf{u}}_ {k}^{m+1})+\frac{1}{2}\delta t^{2}\left\|\sqrt{\frac{\widetilde{\phi}_{k}^{m+1 }}{\widetilde{\rho}_{k}^{m+1}}}\nabla p^{m+1}\right\|_{0}^{2}\right]\] \[+\delta t\int_{\Omega}C_{D}\widetilde{\phi}_{g}^{m+1}\widetilde{ \phi}_{l}^{m+1}|\widetilde{\mathbf{u}}_{g}^{m+1}-\widetilde{\mathbf{u}}_{l}^{m+1}|^{3}\] \[\leq\sum_{k}\left(\frac{1}{2}\|\sqrt{\alpha_{k}^{m}}\mathbf{u}_{k}^{m }\|_{0}^{2}+\int_{\Omega}\alpha_{k}^{m}e_{k}(\rho_{k}^{m})+\frac{1}{2}\delta t ^{2}\left\|\sqrt{\frac{\widetilde{\phi}_{k}^{m}}{\widetilde{\rho}_{k}^{m}}} \nabla p^{m}\right\|_{0}^{2}\right)\] Summing over \(m\), we arrive at the theorem: **Theorem 1**: _For any \(\delta t>0\), the solution \((\phi^{m},\rho^{m},\mathbf{u}^{m},p^{m})\), \(m=1,2,\dots\) of the semi-discrete scheme (3.1)-(3.8) satisfies the stability estimate_ \[\sum_{k}\left[\frac{1}{2}\|\sqrt{\alpha_{k}^{m+1}}\mathbf{u}_{k}^{m+1 }\|_{0}^{2}+\int_{\Omega}\alpha_{k}^{m+1}e_{k}(\rho_{k}^{m+1})+\delta t\sum_{j =0}^{m}(\widetilde{\phi}_{k}^{j+1}\tau_{k}(\widetilde{\mathbf{u}}_{k}^{j+1}), \nabla\widetilde{\mathbf{u}}_{k}^{j+1})+\frac{1}{2}\delta t^{2}\left\|\sqrt{\frac{ \widetilde{\phi}_{k}^{m+1}}{\widetilde{\rho}_{k}^{m+1}}}\nabla p^{m+1}\right\| _{0}^{2}\right]\] \[+\delta t\sum_{j=0}^{m}\int_{\Omega}C_{D}\widetilde{\phi}_{g}^{j+1 }\widetilde{\phi}_{l}^{j+1}|\widetilde{\mathbf{u}}_{g}^{j+1}-\widetilde{\mathbf{u}}_{ l}^{j+1}|^{3}\] \[\leq\sum_{k}\left(\frac{1}{2}\|\sqrt{\alpha_{k}^{0}}\mathbf{u}_{k}^{0 }\|_{0}^{2}+\int_{\Omega}\alpha_{k}^{0}e_{k}(\rho_{k}^{0})+\frac{1}{2}\delta t ^{2}\left\|\sqrt{\frac{\widetilde{\phi}_{k}^{0}}{\widetilde{\rho}_{k}^{0}}} \nabla p^{0}\right\|_{0}^{2}\right)\] **Remark 2**: _The prediction step (3.1) does not guarantee the positiveness of \(\widetilde{\alpha}_{k}^{m+1}\). An \(O(\delta t)\) modification is possible to force its positiveness:_ \[(1+\frac{1}{2}\delta t(\nabla\cdot\mathbf{u}_{k}^{m})+\frac{1}{2}\delta t(\nabla \cdot\mathbf{u}_{k}^{m})^{+})\widetilde{\alpha}_{k}^{m+1}+\delta t\mathbf{u}_{k}^{m} \cdot\nabla\widetilde{\alpha}_{k}^{m+1}=(1+\frac{1}{2}\delta t(\nabla\cdot \mathbf{u}_{k}^{m})^{-})\alpha_{k}^{m} \tag{4.18}\] _Indeed, taking the inner product of (4.18) with \((\widetilde{\alpha}_{k}^{m+1})^{-}\), we have_ \[-\int_{\Omega}(1+\frac{1}{2}\delta t(\nabla\cdot\mathbf{u}_{k}^{m})^ {+})|(\widetilde{\alpha}_{k}^{m+1})^{-}|^{2}-\delta t\left(\int_{\Omega}\frac{ 1}{2}(\nabla\cdot\mathbf{u}_{k}^{m})|(\widetilde{\alpha}_{k}^{m+1})^{-}|^{2}+( \widetilde{\alpha}_{k}^{m+1})^{-}\mathbf{u}_{k}^{m}\cdot\nabla(\widetilde{\alpha} _{k}^{m+1})^{-}\right)\] \[=\int_{\Omega}(1+\frac{1}{2}\delta t(\nabla\cdot\mathbf{u}_{k}^{m})^ {-})\alpha_{k}^{m}(\widetilde{\alpha}_{k}^{m+1})^{-}\] _Using Lemma 1, the second term on the left hand side can be eliminated. Therefore, we have \((\widetilde{\alpha}_{k}^{m+1})^{-}=0\) almost everywhere provided \(\alpha_{k}^{m}\geq 0\) almost everywhere. This implies the positiveness of \(\widetilde{\alpha}_{k}^{m+1}\)._ ## 5 Numerical test Let \(L\) be the reference length scale, \(U\) be the reference velocity, and \(\rho_{0}\) be the reference density. The parameters and variables in use are scaled by the following: \[\mu_{g}\rightarrow\frac{\mu_{g}}{\rho_{0}UL},\ \ \mu_{l} \rightarrow\frac{\mu_{l}}{\rho_{0}UL},\ \ \rho_{l,0}\rightarrow\frac{\rho_{l,0}}{\rho_{0}},\ \ A_{g}\rightarrow\frac{A_{g}\rho_{0}^{\gamma_{g}}}{\rho_{0}U^{2}},\ \ A_{l}\rightarrow\frac{A_{l}\rho_{0}^{\gamma_{l}}}{\rho_{0}U^{2}},\ \ p_{0} \rightarrow\frac{p_{0}}{\rho_{0}U^{2}},\ \ C_{D}\rightarrow\frac{C_{D}}{\rho_{0}}\] \[\alpha_{g}\rightarrow\frac{\alpha_{g}}{\rho_{0}},\ \ \alpha_{l} \rightarrow\frac{\alpha_{l}}{\rho_{0}},\ \ \mathbf{u}_{g}\rightarrow\frac{\mathbf{u}_{g}}{U},\ \ \mathbf{u}_{l} \rightarrow\frac{\mathbf{u}_{l}}{U},\ \ p\rightarrow\frac{p}{\rho_{0}U^{2}}\] In the following test, we assume that \(\lambda_{k}=0\) for simplicity. For Sections 5.1 and 5.2, we consider a square physical domain \(\Omega\) of size \(1m\times 1m\). The initial values are constructed by the following procedure: 1. Set the initial of volume fractions and densities: \(\phi_{k}(0,x)=\phi_{k}^{0}(x)\), \(\rho_{k}(0,x)=\rho_{k}^{0}(x)\). 2. Let \(\alpha_{k}(0,x)=\alpha_{k}^{0}:=\phi_{k}^{0}\rho_{k}^{0}\). 3. Set the initial values of intermediate step: \(\widetilde{\phi}_{k}^{0}=\phi_{k}^{0}\), \(\widetilde{\rho}_{k}^{0}=\rho_{k}^{0}\). 4. Solve a steady state Stokes equation with admissible boundary conditions for initial velocities and pressure: Find \((\mathbf{u}^{0},p^{0})\) such that \[-\nabla\cdot(\mu\nabla\mathbf{u}^{0})+\nabla p^{0}=\mathbf{f},\quad\text{in }\Omega\] (5.1) \[\nabla\cdot\mathbf{u}^{0}=0,\quad\text{in }\Omega\] where \(\mu=\phi_{g}^{0}\mu_{g}+\phi_{l}^{0}\mu_{l}\). We set \(\mathbf{u}_{g}(0,x)=\mathbf{u}_{l}(0,x)=\mathbf{u}^{0}\), \(p(0,x)=\widetilde{p}_{g}^{0}=\widetilde{p}_{l}^{0}=p^{0}\). In the following context, the P1-bubble element (see for instance [29]) is applied for velocities field \(\mathbf{u}_{k}\) and \(P_{1}\) element is applied for all other variables. The computer program for the implementation is written using Freefem++ [30]. ### Two-gas system First, the case that the two fluids possess similar densities is taken into account. The parameters in use are listed in Table 1. The initial values for volume fractions and densities are \[\begin{split}\phi_{g}^{0}(x,y)&=0.2+0.2\exp(-30((x- 0.25)^{2}+(y-0.25)^{2})),\ \ \phi_{l}^{0}(x,y)=1-\phi_{g}^{0}(x,y)\\ \rho_{g}^{0}(x,y)&=2\ (kg/m^{3}),\ \ \rho_{l}^{0}(x,y)=4\ ( kg/m^{3})\end{split} \tag{5.2}\] The initial velocities are obtained by solving (5.1) with \(\mathbf{u}^{0}|_{\Gamma}=0\) and \(\mathbf{f}=0.006(y,-x)^{T}\ (kg/m^{2}\cdot s^{2})\). For all test, a \(40\times 40\) uniform triangular mesh is employed and we compare the test results with the numerical solution with \(\delta t=0.001\) at the final time \(T=1.6\). The convergence test shows a first order accuracy in time for all variables (see Figure 1). We note that in this case, the densities of both phases change only slightly (less than \(1.0\times 10^{-6}\) times of their original densities). Therefore, it is safe for us to discuss only the volume fractions \(\phi_{k}\) or their products with the densities \(\alpha_{k}\). the contours in Figure 2 show that the two phases tend to separate at the beginning. That is, the spinning velocity field (see Figure 3) reduces the volume fraction of the lighter phase \(g\) in its region of lower volume fraction, and vice versa. Indeed, the difference between the velocities of the two fluids is observed: According to 3, the lower density material \(g\) is accelerated. A possible explanation is the virtual force caused by the density difference. On the other hand, the material of higher density \(l\) is decelerated in view of 4. This result is natural because of the dissipation by viscous force and the momentum transfer by the drag force from phase \(l\) to phase \(g\). ### Liquid-gas system With the same domain as in Section 5.1, the parameters in use are listed in Table 2. We assume the same initial volume fractions as the test in Section 5.1 but different initial densities: \[\begin{split}\phi_{g}^{0}(x,y)&=0.2+0.2\exp(-30((x- 0.25)^{2}+(y-0.25)^{2})),\ \ \phi_{l}^{0}(x,y)=1-\phi_{g}^{0}(x,y)\\ \rho_{g}^{0}(x,y)&=2\ (kg/m^{3}),\ \ \rho_{l}^{0}(x,y)=1000 \ (kg/m^{3})\end{split} \tag{5.3}\] \begin{table} \begin{tabular}{c c|c c} \hline Parameters & Values & Parameters & Values \\ \hline \(\rho_{g0}\) & \(2.0\)\((kg/m^{3})\) & \(\rho_{l0}\) & \(4.0\)\((kg/m^{3})\) \\ \(\mu_{g}\) & \(3.0\times 10^{-4}\)\((kg/m\cdot s)\) & \(\mu_{l}\) & \(1.86\times 10^{-4}\)\((kg/m\cdot s)\) \\ \(A_{g}\) & \(3.8395\times 10^{4}\)\((m^{3.2}/kg^{0.4}\cdot s^{2})\) & \(A_{l}\) & \(1.01325\times 10^{5}\)\((m^{4.1}/kg^{0.7}\cdot s^{2})\) \\ \(p_{0}\) & \(1.01325\times 10^{5}\)\((kg/m\cdot s^{2})\) & \(C_{d}\) & \(100.0\)\((kg/m^{3})\) \\ \hline \end{tabular} \end{table} Table 1: Parameters in Section 5.1 Figure 1: Convergence plot for the test of two-gas system in Section 5.1. Figure 2: Results of \(\alpha_{g}\) by the test of two-gas system in Section 5.1. Figure 3: Results of \(\mathbf{u}_{g}\) by the test of two-gas system in Section 5.1. Figure 4: Results of \(\mathbf{u}_{l}\) by the test of two-gas system in Section 5.1. The initial velocities are obtained by solving (5.1) with \(\mathbf{u}^{0}|_{\Gamma}=0\) and \(\mathbf{f}=0.01(y,-x)^{T}\) (\(kg/m^{2}\cdot s^{2}\)). For all test, a \(40\times 40\) uniform triangular mesh is employed and we compare the test results with the numerical solution with \(\delta t=0.0001\) at the final time \(T=1.6\). The convergence in this case is worse than the test of two-gas system. Fortunately, a first order convergence is still kept (see Figure 5). Similar to the test of two-gas system, the densities of both phases change only slightly (less than \(1.0\times 10^{-6}\) times of their original densities). According to Figure 6 a separation phenomenon can be observed in this case as well due to the difference between velocities of the two fluids. The acceleration for lower density phase and the deceleration for higher density phase can be seen from Figures 7 and 8. With finite element implementation for the case of large density difference, undesired oscillation tends to spread rapidly if the time step is not chosen sufficiently small. A smaller time step than that of the test for two-gas system is therefore adopted here. ### Liquid-gas system with initially large pressure gradient In this case, we consider the same physical domain as what is used in Sections 5.1 and 5.2 and the parameters presented in Table 2. The initial values of the volume fractions are \[\phi_{g}^{0}(x,y)=0.2+0.2\exp(-30((x-0.5)^{2}+(y-0.5)^{2})),\ \ \phi_{l}^{0}(x,y)=1-\phi_{g}^{0}(x,y) \tag{5.4}\] The initial pressure distribution \(p^{0}\) is given by \[p^{0}(x,y)=p_{0}+p_{0}\exp(-30((x-0.5)^{2}+(y-0.5)^{2})) \tag{5.5}\] The initial densities are determined accordingly: \[\rho_{k}^{0}(x,y)=\zeta_{k}^{-1}(p^{0}(x,y)) \tag{5.6}\] The initial velocities are given by \[\mathbf{u}_{k}^{0}(x,y)=0 \tag{5.7}\] and the homogeneous boundary conditions for velocities are imposed: \[\mathbf{u}_{k}=0\quad\text{on}\ \Gamma. \tag{5.8}\] For all test, a \(40\times 40\) uniform triangular mesh is employed and we compare the test results with the numerical solution with \(\delta t=1.0\times 10^{-6}\) at the final time \(T=5.12\times 10^{-3}\). A much smaller time step is chosen in order to capture the rapid change of the density field of phase \(g\) (see Figure 12). The convergence results given by Figure 9 present a first order accuracy in this case. The magnitude contours of \(\alpha_{g}\) and \(\phi_{g}\) given by Figures 10 and 11 do not show a large change with time. However, their quotient \(\rho_{g}=\alpha_{g}/\phi_{g}\) change rapidly so that a very small time step is needed to capture its wave-motion. This example implies that the wave velocity of the \(\alpha_{k}\) propagation may be of different scale with the wave velocity of the density change. \begin{table} \begin{tabular}{c c|c c} \hline Parameters & Values & Parameters & Values \\ \hline \(\rho_{g0}\) & \(2.0\)\((kg/m^{3})\) & \(\rho_{l0}\) & \(1000.0\)\((kg/m^{3})\) \\ \(\mu_{g}\) & \(1.86\times 10^{-4}\)\((kg/m\cdot s)\) & \(\mu_{l}\) & \(2.3\times 10^{-3}\)\((kg/m\cdot s)\) \\ \(A_{g}\) & \(3.8395\times 10^{4}\)\((m^{3.2}/kg^{0.4}\cdot s^{2})\) & \(A_{l}\) & \(1.0\times 10^{6}\)\((m^{5}/kg\cdot s^{2})\) \\ \(p_{0}\) & \(1.01325\times 10^{5}\)\((kg/m\cdot s^{2})\) & \(C_{d}\) & \(100.0\)\((kg/m^{3})\) \\ \hline \end{tabular} \end{table} Table 2: Parameters in Section 5.2 Figure 5: Convergence plot for the test of liquid-gas system in Section 5.2. Figure 6: Results of \(\alpha_{g}\) by the test of liquid-gas system in Section 5.2. Figure 7: Results of \(\mathbf{l}_{g}\) by the test of liquid-gas system in Section 5.2. Figure 8: Results of \(\mathbf{u}_{l}\) by the test of liquid-gas system in Section 5.2. In the scenario of this test, the main driven force for the fluid velocities is the pressure. Since the pressure force for each phase is proportional to its volume fraction, the acceleration by the pressure force is reciprocal to the density. Indeed, Figures 13 and 14 present a large different between velocities of the two fluids. ## 6 Conclusion In this work, a new projection method for solving a viscous two-fluid model is proposed. A symmetric formulation of the projection step enables the computation of the projected pressure. Moreover a suitable assignment of the intermediate densities and volume fractions maintain the stability of the numerical scheme, which is justify by the stability analysis for the time-discrete problem. Numerical tests show that the method can not only be recovered to treat the problem with fluids being slightly compressed but also be used for the case with heavily compressed fluids. A first order temporal accuracy of the scheme is justified by all numerical tests in Section 5.
2304.02239
Optimal Energy Storage Scheduling for Wind Curtailment Reduction and Energy Arbitrage: A Deep Reinforcement Learning Approach
Wind energy has been rapidly gaining popularity as a means for combating climate change. However, the variable nature of wind generation can undermine system reliability and lead to wind curtailment, causing substantial economic losses to wind power producers. Battery energy storage systems (BESS) that serve as onsite backup sources are among the solutions to mitigate wind curtailment. However, such an auxiliary role of the BESS might severely weaken its economic viability. This paper addresses the issue by proposing joint wind curtailment reduction and energy arbitrage for the BESS. We decouple the market participation of the co-located wind-battery system and develop a joint-bidding framework for the wind farm and BESS. It is challenging to optimize the joint-bidding because of the stochasticity of energy prices and wind generation. Therefore, we leverage deep reinforcement learning to maximize the overall revenue from the spot market while unlocking the BESS's potential in concurrently reducing wind curtailment and conducting energy arbitrage. We validate the proposed strategy using realistic wind farm data and demonstrate that our joint-bidding strategy responds better to wind curtailment and generates higher revenues than the optimization-based benchmark. Our simulations also reveal that the extra wind generation used to be curtailed can be an effective power source to charge the BESS, resulting in additional financial returns.
Jinhao Li, Changlong Wang, Hao Wang
2023-04-05T06:02:58Z
http://arxiv.org/abs/2304.02239v1
Optimal Energy Storage Scheduling for Wind Curtailment Reduction and Energy Arbitrage: A Deep Reinforcement Learning Approach ###### Abstract Wind energy has been rapidly gaining popularity as a means for combating climate change. However, the variable nature of wind generation can undermine system reliability and lead to wind curtailment, causing substantial economic losses to wind power producers. Battery energy storage systems (BESS) that serve as onsite backup sources are among the solutions to mitigate wind curtailment. However, such an auxiliary role of the BESS might severely weaken its economic viability. This paper addresses the issue by proposing joint wind curtailment reduction and energy arbitrage for the BESS. We decouple the market participation of the co-located wind-battery system and develop a joint-bidding framework for the wind farm and BESS. It is challenging to optimize the joint-bidding because of the stochasticity of energy prices and wind generation. Therefore, we leverage deep reinforcement learning to maximize the overall revenue from the spot market while unlocking the BESS's potential in concurrently reducing wind curtailment and conducting energy arbitrage. We validate the proposed strategy using realistic wind farm data and demonstrate that our joint-bidding strategy responds better to wind curtailment and generates higher revenues than the optimization-based benchmark. Our simulations also reveal that the extra wind generation used to be curtailed can be an effective power source to charge the BESS, resulting in additional financial returns. Deep reinforcement learning, energy arbitrage, spot market, wind-battery system, wind curtailment. + Footnote †: This work was supported in part by the Australian Research Council (ARC) Discovery Early Career Researcher Award (DECRA) under Grant DE230100046. ## I Introduction To mitigate climate change and support the global energy transition to net-zero, wind energy has been widely adopted as the main pillar for decarbonization in modern power systems. In 2021, wind contributed \(9.9\%\) to Australia's total electricity production, making it the largest utility-scale renewable source [1]. However, the intermittent nature of wind power and inaccurate wind forecasts make it greatly challenging to accommodate wind generation in real-time. Sometimes, wind curtailment is necessary to ensure system security and reliability [2] at the expense of wind producers. The pace of wind adoption has led to an increase in wind curtailment, which causes considerable losses for wind producers. Similarly, the adoption of battery energy storage systems (BESS) is also gaining momentum: approximately \(650\) MW has been registered in the Australian National Electricity Market (NEM), and an additional \(34.3\) GW is planned in the next decade [3]. The integration of renewable energy and BESS is becoming increasingly popular, as the co-location of them can effectively reduce renewable curtailment, diversify revenue streams, mitigate market risks, and defer network augmentation. Co-located renewable energy and BESS system is demonstrated to be effective in the Integrated System Plan [4] by the Australian Energy Market Operator (AEMO). Given the rapid adoption of co-located wind-battery systems, it is critical to develop effective coordination strategies, not only for the benefit of the power grid but also for enhancing the viability of the BESS as part of a seamless energy transition. In the co-located wind-battery system, the BESS can act as a storage medium to reduce wind curtailment by absorbing the surplus wind generation. The optimal sizing and scheduling of the BESS were studied in [5, 6, 7] via stochastic or robust optimization based on prior knowledge of the wind power uncertainty distribution. Nevertheless, such an auxiliary role may limit the BESS's economic potential, such as performing energy arbitrage (i.e., buy low and sell high), a significant revenue stream for the BESS in the electricity market. Optimization-based methods have also been used to study wind-battery coordinated bidding strategies in the electricity market. These studies [8, 9, 10] treated the wind farm and the BESS as two independent players to bid in an aggregated manner, while mitigating wind curtailment has been neglected in the arbitrage process of the BESS. Also, the effectiveness of the proposed strategies highly relies on the accuracy of energy price forecasting. However, accurate energy price prediction is notoriously difficult since the spot market is highly volatile and the price drivers are remarkably complicated [11]. Apart from the optimization-based approaches, there has been a lack of research on real-time bidding of the wind-battery coupled system using other avenues. To bridge the literature gap, we develop a novel deep reinforcement learning (DRL)-based bidding strategy for the co-located wind battery system to concurrently reduce wind curtailment while maximizing the overall revenue through energy arbitrage in the electricity spot market. With its model-free characteristics, the DRL can learn the uncertainties of wind generation and electricity prices from historical observations, i.e., without prior knowledge or price forecasting. Additionally, the online and interactive nature of DRL makes it promising to dynamically balance the trade-off between energy arbitrage and wind curtailment mitigation in real-time bidding for the wind-battery system. Our developed joint bidding of the wind-battery system via deep reinforcement learning is referred to as "JointDRL". Our main contributions are summarized as follows. * _Synergizing wind curtailment management and BESS energy arbitrage_: We explore synergies between wind curtailment management and BESS energy arbitrage of a co-located wind-battery system in electricity spot markets. Our research emphasizes the importance of dynamic coordination strategies between renewable generation and storage in achieving profitability. * _DRL-based joint bidding_: We decouple the wind-battery system's market participation into two joint-bidding processes for the wind farm and the BESS. A cutting-edge model-free DRL algorithm, known as twin delayed deep deterministic policy gradient (TD3), is introduced to learn and optimize the joint-bidding strategy. * _Numerical simulations and implications_: We validate our JointDRL in the NEM using realistic wind farm data. Our results demonstrate the effectiveness of our method and reveal the synergy between wind curtailment management and BESS energy arbitrage. The BESS can reduce wind curtailment by charging otherwise curtailed wind power to boost economic returns from energy arbitrage. The rest of paper is organized as follows. Section II formulates the participation of the wind-battery system in the electricity spot market. We decouple the bidding process of the wind-battery system and introduce DRL in Section III to concurrently maximize the overall revenue and reduce wind curtailment. Simulation results are presented and discussed in Section IV, and Section V concludes this paper. ## II System Model We develop the JointDRL by assuming the wind-battery system is a price taker; thus, its bids will not affect other generator bidding decisions or market clearing outcomes. We also assume the wind farm and the BESS are co-located, and there is sufficient capacity within the substation and transmission lines to allow for concurrent export from both facilities. The BESS dynamically manages onsite wind curtailment while conducting energy arbitrage in the spot market. The context of bidding is discussed in detail in Section II-A. Section II-B outlines the wind and BESS revenue streams under various operational conditions. Section II-C formulates the joint bidding of the wind-battery system with wind curtailment management. An overview of the JointDRL is illustrated in Fig. 1. ### _The NEM Spot Market_ As part of the NEM, the spot market is a real-time market for trading wholesale electricity between generators and loads, where power supply and demand are balanced instantaneously through a centrally coordinated dispatch process managed by the AEMO [12]. Generators submit bids (price and quantity) every five minutes. AEMO dispatches generators in a least-cost manner by ranking generator bids from low to high to form a bidding stack. The generator bids that fulfil the last demand in the bidding stack determine the market clearing price, known as the spot price. Generators that bid below or at that price will get dispatched at their offered quantity and get paid at the clearing price. ### _Revenue of the Wind-Battery System_ #### Ii-B1 Wind Farm Wind farms are often registered as semi-dispatchable generators in the NEM and required to constantly update their forecasted generation availability (from onsite wind monitoring devices), denoted by \(p_{t}^{\text{W}}\), to AEMO, based on which a dispatch target (in MWh) is instructed to the wind farm [13] to fulfil in the next dispatch interval. The variable nature of wind power, however, could lead to deviations between the dispatch target and the actual wind generation \(p_{t}^{\text{W,Act}}\), which subsequently influence the amount of power sent out from the wind farm. We assume the dispatch target will be fully met at times when there is sufficient wind (\(p_{t}^{\text{W,Act}}>p_{t}^{\text{W}}\)), whereas the dispatch target can only be partially met (\(p_{t}^{\text{W,Act}}<p_{t}^{\text{W}}\)) if there is a wind shortage caused by forecasting errors or non-compliance with market rules [13]. We define the final dispatched wind power as \(\min\{p_{t}^{\text{W,Act}},p_{t}^{\text{W}}\}\). We also impose a penalty on the wind farm if it fails to meet the dispatch target. We present the spot market revenue generated by the wind farm, denoted by \(R^{\text{W}}\), as \[R^{\text{W}}=\Delta t\sum_{t=1}^{T}\rho_{t}\left(\min\{p_{t}^{\text{W,Act}},p _{t}^{\text{W}}\}-\lambda|p_{t}^{\text{W,Act}}-p_{t}^{\text{W}}|\right), \tag{1}\] where \(\Delta t\) is the NEM dispatch interval, i.e., \(5\) minutes; \(T\) is the overall time frame; \(\rho_{t}\) is the spot market clearing price; and \(\lambda\) is a penalty coefficient for deviations between the actual wind generation and the AEMO dispatch target [10]. #### Ii-B2 Bess Spot market volatility often motivates the BESS to engage in energy arbitrage. Providing that the BESS cannot simultaneously charge and discharge, we introduce two binary variables \(v_{t}^{\text{Ch}},v_{t}^{\text{Ch}}\) to prevent this from happening, which can be formulated as \[v_{t}^{\text{Dch}}+v_{t}^{\text{Ch}}\leq 1,\quad v_{t}^{\text{Dch}},v_{t}^{ \text{Ch}}\in\{0,1\}, \tag{2}\] Fig. 1: An overview of the JointDRL bidding strategy. where the BESS sits idle when these two variables are set to zero. The BESS's revenue from the spot market, denoted by \(R^{\text{BESS}}\), is shown as \[R^{\text{BESS}}=\Delta t\sum_{t=1}^{T}\left(v_{t}^{\text{Dch}}\eta^{\text{Dch}}- v_{t}^{\text{Ch}}\frac{1}{\eta^{\text{Ch}}}\right)\rho_{t}p_{t}^{\text{BESS,S}}, \tag{3}\] where \(\eta^{\text{Ch}}\eta^{\text{Dch}}\) are charging/discharging efficiencies of the BESS and \(p_{t}^{\text{BESS,S}}\) is the bid power in the spot market. Apart from buying power in the spot market, the otherwise curtailed wind generation can also be a potential power source to charge the BESS. We denote the power planned to draw from the onsite wind farm as \(p_{t}^{\text{BESS,WC}}\). Following the charging/discharging constraint in Eq. (2), the BESS cannot charge itself using onsite wind curtailment when bidding to discharge in the spot market, which can be formulated in logic as \[v_{t}^{\text{Dch}}p_{t}^{\text{BESS,WC}}=0. \tag{4}\] Also, frequent charging/discharging lead to cycle aging of the BESS. We define the battery degradation cost \(C^{\text{BESS}}\) as \[C^{\text{BESS}}=c\Delta t\sum_{t=1}^{T}v_{t}^{\text{Dch}}p_{t}^{\text{BESS,S}}, \tag{5}\] where we approximate the battery degradation as a result of discharging [14, 15]; \(c\) is a specific battery technology cost-coefficient in AUS/MWh [15]. ### _Joint Bidding of the Wind-Battery System_ Considering the distinct revenue streams of the wind farm and the BESS from the spot market, along with the degradation cost of the BESS, we formulate the optimal joint bidding of the wind-battery system as an optimization problem whose objective is expressed as \[\max\;R^{\text{W}}+R^{\text{BESS}}-C^{\text{BESS}}. \tag{6}\] Real-time dispatch of the wind farm and BESS are constrained by \[0 \leq p_{t}^{\text{W}}\leq P_{\text{max}}^{\text{W}}, \tag{7}\] \[0 \leq p_{t}^{\text{BESS,S}}\leq P_{\text{max}}^{\text{BESS}},\] (8) \[0 \leq p_{t}^{\text{BESS,WC}}\leq P_{\text{max}}^{\text{BESS}},\] (9) \[0 \leq p_{t}^{\text{BESS,S}}+p_{t}^{\text{BESS,WC}}\leq P_{\text{ max}}^{\text{BESS}}, \tag{10}\] where \(P_{\text{max}}^{\text{W}}\) and \(P_{\text{max}}^{\text{BESS}}\) are the installed capacity (in MW) of the wind farm and the rated power (in MW) of the BESS, respectively. Eq. (7) constrains the forecasted availability of the wind farm. Eq. (8) and (9) represent the power that the BESS can bid in the spot market and the power planned to draw from the wind farm must be within the rated power of the BESS. Furthermore, Eq. (10) shows that the sum of the bid and power drawn from the onsite wind farm cannot exceed the rated power of the BESS. Also, charging/discharging operations of the BESS are limited by its current capacity \(e_{t-1}+\Delta e_{t}\), where \(e_{t-1}\) is its capacity after the previous dispatch interval, and \(\Delta e_{t}\) is the energy change in the current dispatch interval. The BESS's capacity must be within its lower and upper energy limits denoted by \(E_{\text{min}}\) and \(E_{\text{max}}\), which can be formulated as \[E_{\text{min}}\leq e_{t-1}+\Delta e_{t}\leq E_{\text{max}}. \tag{11}\] The BESS's capacity fluctuates due to power exchange in the spot market or drawing the otherwise curtailed energy from the onsite wind farm. We define curtailed wind power as \[p_{t}^{\text{W,WC}}=\left(p_{t}^{\text{W,Act}}-p_{t}^{\text{W}}\right)\mathbb{ I}\left(p_{t}^{\text{W,Act}}>p_{t}^{\text{W}}\right), \tag{12}\] where \(\mathbb{I}\left(p_{t}^{\text{W,Act}}>p_{t}^{\text{W}}\right)\) is an indicator of wind curtailment. Thus, the energy change \(\Delta e_{t}\) in Eq. (11) can be written as \[\Delta e_{t}=\Delta t\left[\left(v_{t}^{\text{Ch}}-v_{t}^{\text{Dch}}\right)p_{ t}^{\text{BESS,S}}+\min\left\{p_{t}^{\text{BESS,WC}},p_{t}^{\text{W,WC}} \right\}\right], \tag{13}\] where the first term represents the energy change from bidding and the second term indicates the BESS's response to wind curtailment. ## III Methodology To maximize the overall revenue of the wind-battery system formulated, we decouple the continuous bidding problem into two Markov decision processes (MDP) for the wind farm and the BESS in Section III-A, followed by Section III-B, where TD3 [16] is introduced to maximize the expected returns of the derived MDPs, which facilitates the optimization of the entire revenue-oriented bidding problem. ### _MDP Modeling_ As discussed, multiple factors can affect how the wind-battery coupled system bids in the spot market. Joint bidding can be better characterized by decoupling it into two MDPs (for the wind farm and the BESS, respectively), each with four elements: \(\mathbb{S}^{\text{W}}\)/\(\mathbb{S}^{\text{BESS}}\), \(\mathbb{A}^{\text{W}}\)/\(\mathbb{A}^{\text{BESS}}\), \(\mathbb{P}^{\text{W}}\)/\(\mathbb{P}^{\text{BESS}}\), and \(\mathbb{R}^{\text{W}}\)/\(\mathbb{R}^{\text{BESS}}\). **State Space \(\mathbb{S}\)**: All internal (e.g., wind generation) and external (e.g., energy prices) information can be represented as a state \(\mathbf{s}_{t}\). To guide the BESS's response to wind curtailment, we introduce wind curtailment frequency within the latest \(L\) dispatch intervals, denoted by \(f_{t}^{\text{WC}}\), in the state of the BESS. States of the wind farm and the BESS are defined as \[\mathbf{s}_{t}^{\text{W}}=\left[p_{t-1}^{\text{W,Act}},\rho_{t-1}\right],\mathbf{s}_{t} ^{\text{BESS}}=\left[e_{t-1},f_{t-1}^{\text{WC}},p_{t-1}^{\text{W,Act}},\rho_ {t-1}\right]. \tag{14}\] **Action space \(\mathbb{A}\)**: Action of the wind farm represents its forecasted availability \(p_{t}^{\text{W}}\), while actions of the BESS are bid power \(p_{t}^{\text{BESS,S}}\) and power drawn from the onsite wind curtailment \(p_{t}^{\text{BESS,WC}}\). Actions of the wind farm and the BESS are normalized to range from \(0\) to \(1\) and formulated as \[\mathbf{a}_{t}^{\text{W}}=\left[a_{t}^{\text{W}}\right],\;\mathbf{a}_{t}^{\text{BESS}}= \left[v_{t}^{\text{Dch}},v_{t}^{\text{Ch}},a_{t}^{\text{BESS,S}},a_{t}^{\text{ BESS,WC}}\right], \tag{15}\] **Probability space \(\mathbb{P}\)**: The probability space refers to the probability set of transitioning to the next state after taking a deterministic action, which is defined as \(\mathbb{P}\left(\mathbf{s}_{t+1}|\mathbf{s}_{t},\mathbf{a}_{t}\right)\). **Reward Space \(\mathbb{R}\)**: The wind farm and BESS receive rewards after taking action \(\mathbf{a}_{t}\) at state \(\mathbf{s}_{t}\), which reflects the effectiveness of the bidding decision. To monitor wind generation uncertainty and update accurate dispatch targets, we formulate the reward function of the wind farm \(r_{t}^{\text{W}}\) as \[r_{t}^{\text{W}}=-\rho_{t}|a_{t}^{\text{W}}P_{\text{max}}^{\text{W}}-p_{t}^{ \text{W,Act}}|. \tag{16}\] Effective BESS energy arbitrage is enabled by the introduction of two charging/discharging indicators, denoted by \(\mathbb{I}_{t}^{\text{Ch}}\)/\(\mathbb{I}_{t}^{\text{Ch}}\), and formulated as \[\mathbb{I}_{t}^{\text{Ch}}=\text{sgn}\left(\bar{\rho}_{t}-\rho_{t}\right), \quad\mathbb{I}_{t}^{\text{Ch}}=\text{sgn}\left(\rho_{t}-\bar{\rho}_{t}\right), \tag{17}\] where \(\text{sgn}(\cdot)\) is the sign function and \(\bar{\rho}_{t}\) is the exponential moving average of the spot price, which is defined as \[\bar{\rho}_{t}=\tau\bar{\rho}_{t-1}+\left(1-\tau\right)\rho_{t}, \tag{18}\] where \(\tau\) is a smoothing parameter. The charging/discharging indicators incentivize the BESS to buy low (\(\rho_{t}<\bar{\rho}_{t}\)) and sell high (\(\rho_{t}>\bar{\rho}_{t}\)). Any bids violating such a guideline will be penalized. The arbitrage reward \(r_{t}^{\text{BESS,S}}\) is thus formulated as \[r_{t}^{\text{BESS,S}}=a_{t}^{\text{BESS,S}}|\rho_{t}-\bar{\rho}_{t}|\left( \mathbb{I}_{t}^{\text{Ch}}v_{t}^{\text{Ch}}\frac{1}{\eta^{\text{Ch}}}+\mathbb{ I}_{t}^{\text{Ch}}v_{t}^{\text{Ch}}\eta^{\text{Dch}}\right). \tag{19}\] The BESS receives positive rewards, denoted by \(r_{t}^{\text{BESS,WC}}\), when reducing onsite wind curtailment, formulated as \[r_{t}^{\text{BESS,WC}}=\beta\min\left\{a_{t}^{\text{BESS,WC}},\frac{p_{t}^{ \text{W,WC}}}{P_{\text{max}}^{\text{W}}}\right\}f_{t}^{\text{WC}}\frac{1}{ \eta^{\text{Ch}}}, \tag{20}\] where \(\beta\) is the incentive factor for wind curtailment reduction and the minimum term represents the normalized absorbed wind power. The BESS reward function \(r_{t}^{\text{BESS}}\) combines bidding rewards and wind curtailment mitigation rewards, formulated as \[r_{t}^{\text{BESS}}=r_{t}^{\text{BESS,S}}+r_{t}^{\text{BESS,WC}}. \tag{21}\] ### _Learning Optimal Bidding Strategy via TD3_ We introduce a state-of-the-art off-policy DRL algorithm, referred to TD3 [16], to optimize the derived MDPs where the same TD3 structure is adopted. TD3 aims to learn an optimal action strategy, denoted by \(\pi\), that maximizes the expected returns over a finite horizon, which can be formulated as \[J_{\pi}=\mathbb{E}_{\mathbf{s}_{t}\sim\mathbb{P},\mathbf{a}_{t}\sim\pi(\mathbf{s}_{t})} \left[\sum_{t=1}^{T}\gamma^{t-1}r_{t}\right], \tag{22}\] where \(\gamma\) is the discounted factor. ## IV Experiments and Results ### _Experimental Settings_ The wind generation data is collected from the Oaklands Hill Wind Farm in Victoria, one of the five jurisdictions of the NEM in Australia. We use Victoria spot prices in \(2018\)[17] to train and evaluate our JointDRL, where the first eleven months are for training and the last month for evaluation. The storage capacity of the BESS is assumed to be \(10\) MWh with its minimum and maximum allowable state of charge being \(5\%\) and \(95\%\), respectively. We used \(1\) Nvidia TITAN RTX graphics processing units for algorithm training. The initialized parameters are provided in Table I. ### _Effectiveness of the JointDRL_ To examine the effectiveness of our JointDRL, we develop a predict-and-optimize (P&O) benchmark for comparison. The P&O method relies on a long short-term memory (LSTM) network for wind availability and energy price forecasts, and solves the revenue maximization problem by a mixed integer linear programming solver from the PuLP library [18]. The cumulative revenue derived from each method is illustrated in Fig. (a)a with associated statistics presented in Table II for cross comparison. The results show the JointDRL outperforms the P&O benchmark significantly. In particular, our method generates \(81\%\) additional revenue for the BESS and \(18\%\) for the wind farm compared to the P&O benchmark. From the revenue breakdown in Table II, our JointDRL takes advantage of the onsite wind surplus and performs significantly better in terms of utilizing the curtailed energy, i.e., absorbing \(313\) MWh of curtailed wind energy compared to that of \(130\) MWh using the P&O method. The JointDRL's capability in managing wind curtailment avoids a considerable amount of excessive wind generation used to be curtailed. In contrast, the otherwise curtailed wind energy using P&O is nearly four times higher than ours, as shown in Table II. In our case, about \(10\)% of the BESS's stored energy comes from curtailed wind energy, as shown in Fig. (b)b. Despite the fact that the JointDRL requires a longer training time, as shown in Table II, it can make bidding decisions at a significantly faster rate, i.e., \(10\) seconds for one-month bidding. Therefore, a well-trained JointDRL is better suited to real-time bidding, as accurate and rapid decision-making is essential. Fig. 2: Performance comparison between JointDRL and the P&O benchmark. ### _Wind Curtailment Reduction_ According to Fig. (b)b, the JointDRL chooses the curtailed wind as an important power source to charge the BESS for higher financial rewards. We examine the associated economic benefits by comparing the BESS bidding simulation results with/without using onsite curtailed wind energy. The results are illustrated in Fig. (a)a. Interestingly, using curtailed wind energy can boost the overall revenue of the BESS by about \(20\%\). Since purchasing power from the spot market would lead to a revenue loss (when the spot price is non-negative), using curtailed wind energy seems to be more economical and subsequently improves the overall revenue. Since the BESS can act differently subject to market conditions and the availability of onsite wind curtailment, we further examine the relationship between spot prices and the amount of energy drawn from wind curtailment to capture the most influencing factors behind the action. The result is shown in Fig. (b)b, where we group the spot prices using its quartiles labelled by \(Q1_{\rho},Q2_{\rho},Q3_{\rho}\). The BESS follows the arbitrage guideline and buys power mostly from the spot market when prices are low. During periods of higher spot prices, the BESS favors curtailed wind energy, as charging at high spot prices is likely to result in significant financial losses. Using curtailed energy is free, but subject to availability. BESS charging decisions are also heavily influenced by wind curtailment frequency. We investigate the BESS operational dynamics at different levels of wind curtailment frequency that are grouped via its quartiles \(Q1_{f},Q2_{f},Q3_{f}\). At the lowest level of curtailment frequency, the BESS draws approximately \(19\%\) of curtailed wind power. When curtailment occurs more frequently, the BESS charges more from the onsite wind farm to reduce curtailment until reaching a plateau at approximately \(36\%\), as shown in Fig. (b)b. This leveling-off differs from that observed at high spot prices, where the BESS continues to purchase more than \(50\%\) of power from the spot market to charge the BESS. This is largely driven by wind curtailment uncertainty because wind curtailment may not occur even at a higher likelihood. ## V Conclusion This paper highlights the importance of coordinated efforts between the wind farm and the BESS in co-location to improve their bidding performance in the electricity spot market. We developed a model-free DRL-based real-time bidding strategy to explore the full potential of the wind-battery system in joint bidding. The DRL algorithm seeks to maximize financial rewards by balancing BESS energy arbitrage with wind curtailment. Simulation results show that our proposed strategy outperforms the P&O benchmark in terms of faster execution time and better financial performance for both the wind farm and the BESS. We further investigate the operational dynamics of the BESS under various wind curtailment frequencies and market conditions, leading to two interesting insights: 1) the onsite otherwise curtailed wind power is an effective source to charge the BESS for additional financial returns; 2) The BESS tends to use more curtailed wind energy when the spot price and wind curtailment frequency increase. Successful application of our proposed strategy could promote the co-location of renewable generation and storage assets, strengthening government policies for broader system benefits.
2306.14222
Unveiling the Potential of Sentiment: Can Large Language Models Predict Chinese Stock Price Movements?
The rapid advancement of Large Language Models (LLMs) has spurred discussions about their potential to enhance quantitative trading strategies. LLMs excel in analyzing sentiments about listed companies from financial news, providing critical insights for trading decisions. However, the performance of LLMs in this task varies substantially due to their inherent characteristics. This paper introduces a standardized experimental procedure for comprehensive evaluations. We detail the methodology using three distinct LLMs, each embodying a unique approach to performance enhancement, applied specifically to the task of sentiment factor extraction from large volumes of Chinese news summaries. Subsequently, we develop quantitative trading strategies using these sentiment factors and conduct back-tests in realistic scenarios. Our results will offer perspectives about the performances of Large Language Models applied to extracting sentiments from Chinese news texts.
Haohan Zhang, Fengrui Hua, Chengjin Xu, Hao Kong, Ruiting Zuo, Jian Guo
2023-06-25T12:08:44Z
http://arxiv.org/abs/2306.14222v2
Unveiling the Potential of Sentiment: Can Large Language Models Predict Chinese Stock Price Movements? ###### Abstract The rapid advancement of Large Language Models (LLMs) has led to extensive discourse regarding their potential to boost the return of quantitative stock trading strategies. This discourse primarily revolves around harnessing the remarkable comprehension capabilities of LLMs to extract sentiment factors which facilitate informed and high-frequency investment portfolio adjustments. To ensure successful implementations of these LLMs into the analysis of Chinese financial texts and the subsequent trading strategy development within the Chinese stock market, we provide a rigorous and encompassing benchmark as well as a standardized back-testing framework aiming at objectively assessing the efficacy of various types of LLMs in the specialized domain of sentiment factor extraction from Chinese news text data. To illustrate how our benchmark works, we reference three distinctive models: 1) the generative LLM (Chat-GPT), 2) the Chinese language-specific pre-trained LLM (Erlangshen-RoBERTa), and 3) the financial domain-specific fine-tuned LLM classifier(Chinese FinBERT). We apply them directly to the task of sentiment factor extraction from large volumes of Chinese news summary texts. We then proceed to building quantitative trading strategies and running back-tests under realistic trading scenarios based on the derived sentiment factors and evaluate their performances with our benchmark. By constructing such a comparative analysis, we invoke the question of what constitutes the most important element for improving a LLM's performance on extracting sentiment factors. And by ensuring that the LLMs are evaluated on the same benchmark, following the same standardized experimental procedures that are designed with sufficient expertise in quantitative trading, we make the first stride toward answering such a question. ## 1 Introduction At present, an overwhelming volume of news articles and columns are being generated on a daily basis, especially pertaining to companies being traded. Given this landscape, considerable attention has been given to investigating the feasibility of employing Large Language Models (LLMs) for sentiment analysis and processing of these news texts. The aim is to derive a quantifiable technical indicator, or factor, that effectively reflects the desirability of investing in a particular company's stock at a given moment in time. It should be clear that the endeavor we described is a very well-defined down-stream task operating within a language environment (the Chinese language) that is drastically different from the English language that the main-stream LLMs have been predominantly trained on. Although several approaches have been proposed to enhance the performance of LLMs on down-stream tasks in alternative language environment, questions remain as to which LLM or method of improvement is the most optimal in the specific context of extracting sentiment factor from Chinese financial news texts. For such a comparative analysis to be possible however, we must adopt a comprehensive benchmark that can be easily applied to all LLMs and yield quantitative results based on metrics selected with sufficient domain knowledge in the field of quantitative finance. On the other hand, even though works such as [11] have successfully demonstrated that LLMs like ChatGPT [1] can be used to extract sentiment factors from English news texts that are highly correlated to the returns of US stocks, it is our contention that the smooth implementation of the same framework applied on the Chinese text still faces two major concerns. Firstly, it is noteworthy that the dominant and leading LLMs have predominantly been trained on English corpora. Consequently, the transferability of sentiment mining techniques from English texts to Chinese texts remains uncertain. Secondly, while the effectiveness of sentiment factor mining by LLMs has been established in prior research, discrepancies arise due to variations in parameter selection for constructing stock trading simulations back-tests and the utilization of diverse raw news data-sets, encompassing disparate sizes, scopes, and sources. As a consequence, objectively evaluating and comparing the efficacy of different LLMs when applied to the specific task of Chinese financial text sentiment factor building still poses significant challenges. To address these concerns, we propose an innovative approach that combines sentiment extraction with realistic back-tests of quantitative strategies. This approach allows us to directly assess the effectiveness of LLMs' sentiment extraction capabilities using a comprehensive benchmark consisting of easily interpretable and quantifiable metrics such as excess return, risk adjusted return and win ratio. To introduce our comprehensive benchmark and back-test experimental procedure, we outline the scope and coverage of data, data pre-processing, and the parameters that the back-tests should adhere to. By integrating these elements, we aim to provide a robust framework for evaluating and comparing the performance of various LLMs in sentiment extraction from Chinese financial news texts. As practical illustrations, we will subsequently conduct sentiment extraction from 394,426 items of Chinese news summaries about Chinese publically traded companies. This process will be executed using three distinct LLMs, representing: 1) the baseline model, 2) the Chinese language-specific pre-trained LLM, and 3) the financial domain-specific pre-trained LLM. We will then construct investment portfolios and run stock trading simulation back-tests according to our defined settings and parameters in order to rigorously test the correlation between the sentiment factors and return of investment, which, from our perspective, is the best way to reflect the LLMs' effectiveness in correctly extracting sentiments from Chinese financial texts. Finally, we will discuss the results of the back-tests and the insights gained from such comparative analysis. To the best of our knowledge, we are the first group to conduct sentiment analysis on such an extensive source of Chinese news text using prevalent LLMs such as ChatGPT [1] and back-test the acquired sentiment factors by deploying high-frequency quantitative trading strategies on platforms that lead in the quantitative finance industry. ## 2 Related Works Over the past decade, significant advancements have been made in the field of Natural Language Processing (NLP), leading to the development of powerful language models. One such groundbreaking model is the Transformer [23], which introduced an attention-based encoder-decoder architecture that has consistently yielded superior performance compared to Recurrent Neural Network (RNN) based architectures, including Long Short-Term Memory (LSTM) [1] and Gated Recurrent Unit (GRU) [1], in various language tasks. The Transformer model's key innovation lies in its ability to effectively capture long-range dependencies within sequences by employing self-attention mechanisms. This enables the model to assign importance to different parts of the input text and establish contextual relationships, resulting in improved language understanding and generation capabilities. Consequently, the Transformer architecture has paved the way for a new generation of state-of-the-art Large Language Models that have inherited its powerful framework exemplified by the Generative Pre-trained Transformer (GPT) [1] series, developed by OpenAI, which focus on generative language modeling and utilize a variant of the Transformer architecture, where tokens attend to tokens that appear before in the sentence. This unidirectional attention mechanism, often referred to as left-to-right attention or auto-regressive decoding, allows the models to generate coherent and contextually relevant text referencing the attention assigned to preceding words. Another LLM that is based on the Transformer architecture is the BERT (Bidirectional Encoder Representations from Transformers [1]) model. BERT, introduced by Google AI, is designed to capture bidirectional contextual information from the input text. Unlike the unidirectional attention of GPT, BERT employs a bidirectional attention mechanism. It enables tokens to attend not only to preceding words but also to words appearing after in latter parts of the sentence. By considering the entire context, BERT can effectively capture the dependencies and relationships between words, resulting in a better understanding of the text. Building on BERT, authors of RoBERTa [13] seek to optimize the performance of parent model by conducting various pre-training improvements such as increasing the data size, training duration as well as adopting dynamic masking during training. Such methods are proven to have pushed to boundaries of the performance of BERT even further. These LLMs are designed in such a way as to greatly facilitate down-stream task-specific fine-tuning and additional pre-training on supplementary corpora. From our perspective, user-defined improvement on parent LLMs includes two key aspects: language-specific and domain-specific. While certain commercialized prototypes, such as ChatGPT [1], support multilingual reasoning and text generation, many language models predominantly rely on training with English texts. To achieve comparable performance across other language environments, including our specific focus Chinese, extensive training on alternative language environment becomes imperative. The Erlangsen-RoBERTa [24] model, which inherits from the RoBERTa architecture and is further pre-trained on the 180 Giga-byte version of the Wudao Corpora [23], marks one of the most notable efforts in Chinese language-specific pre-training, achieving top performance on state-of-the-art NLP benchmarks. The other aspect pertains to domain-specific improvements which has to do with inheriting a parent LLM architecture and either continuously training it on additional corpus data-sets related to the target technical domain or fine-tuning it with expertly constructed labels, this process effectively leverages the pre-existing linguistic knowledge of the LLMs while allowing them to acquire domain-specific nuances and intricacies, thus enabling heightened proficiency in the desired domain. One key example is the FinBERT [11] model, which trains the BERT model futher on U.S. Securities and Exchange Commission (SEC) filing data, resulting in a model well-versed in financial domain knowledge and well-suited for English financial text sentiment classifications. Similar efforts also include [25], a LLM trained on extensive English financial copora. ## 3 Data The data we use take in the form of news summaries regarding Chinese publicly traded companies and is mainly acquired through web crawling. A total of 394,429 news summary items were acquired, spanning a time period spanning from October 1, 2021, to February 22, 2023 and covering 5,021 publicly traded companies listed on Shanghai Stock Exchange (SSE) and Shenzhen Stock Exchange (SZSE). As an additional filtering criterion, we retain only those news summaries that are generated prior to the market opening at 9:30 am. Doing so helps us ensure that the technical indicators extracted from these news summaries are promptly available for utilization as soon as the market commences trading. The October 2021 to February 2023 period that the data spans is stipulated to ensure that it falls chronologically after the data used for training GPT-3.5, which serves as our baseline model for sentiment analysis. If we had not chosen to do this, it could potentially result in the ChatGPT model referencing "future data" that might not be available at the time the news summaries are generated. This scenario could lead to inaccurate sentiment analysis results and create situations that do not align with reality. To procure professional financial information and analysis, we have identified a set of prime sources which are selected based on their credibility and reputation within the financial domain. Consequently, we mainly conduct data crawling from these prime sources. These sources are: Sina Finance [21], Hithink RoyalFlush [12], Tencent [14], Dazhong Securities [15], Hexun [1], Netease Finance [1], Caifuhao [1]. We offer Table 1 to illustrate the distribution of these prime sources within our data-set. ## 4 Methodology ### Using ChatGPT to Extract Sentiment Factors We use ChatGPT, and the GPT-3.5 behind it as a baseline language model without any additional pre-training or fine-tuning. Similar to [14], we conduct the sentiment factor mining by fusing the news summary information as well as the instructions about the task that we expect ChatGPT to perform for us into prompts. A user's prompt is essential in guiding the model's response and determining the context of the conversation. When generating a response, the model takes into account the user's input, including the specific words used, the overall tone, and the desired information or assistance being sought. The prompt serves as the primary input that helps shape the model's understanding of the user's intent and the direction the conversation is taking. Upon receiving our prompt, ChatGPT decides whether the item of news summary in question contains good, bad, or neutral sentiment (this also includes cases where ChatGPT is unsure of the sentiment and unable to produce an informed rating) for the company in involved. One sample prompt as well as the associated response is illustrated in Fig 1. For this particular example, we also invite ChatGPT to elaborate on the reason behind forming such a sentiment analysis. For the sentiment analysis procedure on our data-set, we used Chinese prompts, the English version of the prompt is given for demonstration purposes. After the responses for all 394,429 items of news summary have been given. We translate the response into numeric values with GOOD NEWS being assigned as 1, NOT SURE being assigned as 0 and BAD NEWS being assigned as -1. For cases where a listed company is being mentioned by multiple news sources on the same day, we take the average of the ChatGPT's responses across the different news sources. These averaged ratings by ChatGPT are stored as the Chinese ChatGPT Factor to be referenced by our trading simulation back-tests. ### Using Language-Specific Pre-Trained LLM to Extract Sentiment Factors As previously stated, a fundamental objective of this research is to examine the degree to which the effectiveness of Language Models (LLMs) in generating sentiment factors from analysis of news that are conducive to the development of quantitative trading strategies with high returns can be extended to Chinese financial textual data. By such explorations, our research aims to contribute insights into the potential adaptability and performance of LLM-based approaches in the realm of Chinese language sentiment analysis for the exclusive purpose of informing quantitative trading strategies. Therefore, in addition to applying the baseline model, GPT-3.5, we also turn to the Erlangsen-RoBERTa-110M-Sentiment [20] model which was pre-trained on the 180 GB version of the WuDao Chinese Corpora. The Erlangsen-RoBERTa series were pre-trained on Chinese texts in such a way as to take into account the unique characteristics of the Chinese language and the traits of the Chinese character contextual relationships. This aspect sets them apart from many other pre-trained LLMs that, although supporting Chinese textual input and response generation, may not fully leverage such language-specific traits. Through a comparative analysis of the performance of the quantitative strategies from both the baseline GPT-3.5 and the language-specific pre-trained model on the same benchmark, we seek to answer the question of whether such language-specific pre-training approaches have any contribution to yielding a superior returns. The Erlangsen-RoBERTa-110M-Sentiment is open-source and can be easily downloaded and applied for the sentiment analysis of any \begin{table} \begin{tabular}{|c|c|} \hline Source Name & Proportion (\%) \\ \hline Hithink RoyalFlush & 59.57 \\ \hline Sina Finance & 33.65 \\ \hline Tencent & 4.88 \\ \hline Hexun & 1.55 \\ \hline Caifuhao & 0.23 \\ \hline Netease Finance & 0.1 \\ \hline Dazhong Securities & 0.02 \\ \hline \end{tabular} \end{table} Table 1: The Distribution of News Sources within Our Data-set section of Chinese texts within the maximum token limit. The return of the last soft-max activation layer is a tuple containing the probability of the input text's sentiment being negative or positive. When we applied Erlangsen-RoBERTa-110M-Sentiment model to the same news summary as shown in section 4.1, it rated the sentiment as being 98.33% positive. It can be seen that for this particular sample, the sentiment analysis of Erlangsen-RoBERTa-110M-Sentiment aligns with that of ChatGPT. We conduct sentiment classification on all the rest of the samples in the data-set and store the results as the Erlangsen Facto ### Using Domain-Specific Fine-Tuned LLM Classifier to Extract Sentiment Factors For this part, we start with the open-source variation of BERT that has been trained on Chinese text as a starting model and continue to train it on Chinese financial corpora data. After the tokenizations and the embedding layer updates have been finalized, we connect a hidden layer and a soft-max activation layer of size 3 to the original attention layers of the BERT model to acquire our own BERT-based sentiment classifier. We then ask colleagues of financial expertise to manually label the news summary data into three classes in terms of the sentiment manifested by the text ( Positive (+1), Neutral (0), Negative (-1) ) and conduct supervised training to derive what we shall call a Chinese FinBERT classifier. It is guaranteed that the training data used for manual sentiment labeling is strictly separated from the news summary data employed in our experimental analysis. After fine-tuning has been completed, we use our acquired BERT-based sentiment classifier to predict the sentiment of the 394,429 items of news summary. The model will output a predicted class based on the probabilities output by the final soft-max layer. We document the results as Chinese FinBERT factor. ## 5 Experiment and Parameters of Trading Strategies As stated above, we intend to provide a comprehensive and rigorous benchmark as well as establish a standardized back-test experimental procedure so that any LLM's efficacy in extracting sentiment from Chinese financial text for the purpose of building quantitative trading strategies can be objectively measured no matter which model or method was used during the process of sentiment analysis and extraction. In order to ensure an unbiased assessment in accordance with our benchmark, we establish a requirement that the quantitative trading strategies adhere to uniform settings and parameters. While a seasoned quantitative trading expert may identify significant potential for optimizing these parameters to achieve higher returns, we deliberately refrain from delving extensively into the intricacies of trading strategy technicalities within this study. Our primary objective lies in the evaluation of LLMs' effectiveness within the specific realm of Chinese text sentiment mining which can be achieved as long as the same set of parameters are employed across all tests. We will now proceed to enumerate and explain the standardized settings all trading strategies in our experiments adhere to: * Portfolios are adjusted daily and only at market open, 9:30 am Beijing Time (UTC+8:00) * We only use news generated or acquired before market open. In this way, the sentiment factors extracted can be directly used at trading time. * We adjust the investment portfolios by buying in the stocks who have the highest ranking sentiment factors and selling those in our portfolio that have the lowest ranking factors. The maximum number of stocks to be bought or sold in one day is 500. * The maximum turnover ratio for our portfolio to be 1.0, which means we allow all of the previously held stocks Figure 1: Demonstration of Prompts Structured for Sentiment Analysis and the Response by ChatGPT to be sold to be replenished by entirely new stocks. Although this rarely happens during the back-test. * In order to account for the slippage and delays commonly encountered in real-world trading scenarios, we have chosen to deviate from using the straight market open price in our back-test. Instead, we have implemented a more realistic approach by utilizing the Volume-Weighted Average Price (VWAP) between 9:00 am and 9:05 am. This VWAP is calculated by summing the values of all trades that occurred during this specific five-minute period and dividing the sum by the total volume traded within that time-frame. * We avoid being overly-optimistic about our simulated returns by imposing a transaction fee of 0.15% of transaction value. This includes a 0.05% commission charged by the stock brokerage firms and a 0.1% stamp duty fee paid to the stock exchanges. In reality at present, Chinese brokerage firms rarely charge above 0.03% and the stamp duty fee is only charged at selling transactions instead of all transactions. Therefore, it should be clear that a 0.15% of transaction value emulates a more stringent trading environment than what is observed in reality. * We use the CSI 300 index as basis when calculating the excess returns. ## 6 Results and Discussions After the back-tests have been run, we collect the results and the performance of all portfolios built around all three sentiment factors on a series of metrics which comprise our benchmark. We now proceed to give definitions of these metrics: * Annual Excess Return: Annual excess return refers to the additional or extra return earned by an investment or portfolio over and above what was expected or compared to a stock index. As stated before, in our experiment we have selected the CSI 300 index to be the baseline, therefore our excess return is calculated over the performance of the CSI 300 index. * Annual Net Asset Return: The annual percentage increase or decrease in the value of our portfolio in terms of net market capitalization of all stocks held. * Win Rate: Percentage of trade days where portfolio encounters a positive return over all trade days. * Sharpe Ratio: The Sharpe Ratio [14] was invented by William Sharpe in 1966. The formula is given as \[\text{Sharpe Ratio}=\frac{R_{p}-R_{f}}{\sigma_{p}}\] (1) The \(R_{p}\) is the return of the portfolio, \(R_{f}\) is the risk-free return and the \(\sigma_{p}\) is the standard deviation of the return. The Sharpe ratio gives a measurement of risk-adjusted returns. * Average Stocks Held per Day: The average number of stocks present in the investment portfolio every day * Turn-over Ratio: Average percentage of the investment portfolio adjusted daily in terms of market capitalization. We groups these metrics into to parts and show the results under Annual Excess Return, Annual Net Asset Return, Win Rate and Sharpe Ratio in Table 2. These would serve as the main metrics and are primary indicators of performance. We put Average Stocks Held per Day and Turn-over Ratio into Table 3, these would serve as supplementary metrics that provide insights into the portfolio adjustment characteristics of the trading strategy. We also plot the excess returns of all three factors during the entire back-test period in Figure 2. It is clear to see that in terms of annual return, risk adjusted return and excess return, the Erlangshen sentiment factor outperforms the remaining factors. To further elucidate the correlation between the values of the sentiment factor derived from the Erlangshen-110M-Sentiment model and portfolio excess returns, we have partitioned our held stocks into three distinct groups based on their rankings according to the Erlangshen sentiment factor. Group 1, on average, exhibits the lowest Erlangshen factor value while Group 3 displays the highest. The excess returns for Group 1, Group 2, and Group 3 are then plotted in Figure 3. Notably, we observe that after an initial period of fluctuation, the three groups become distinctly separated. Furthermore, Group 3, characterized by the highest Erlangshen factor value, consistently demonstrates the highest returns, while Group 1, with the lowest Erlang-shen factor value, consistently exhibits the lowest excess returns. This observation provides further substantiation for the notion that the Erlangshen sentiment factor extracted through the Erlangshen-110M-Sentiment model is closely associated and correlated with investment returns. It is remarkable to witness how the comparatively smaller Erlangshen model, with a modest 110 million parameters, manages to exhibit slightly superior performance within our benchmark for the specific task at hand. This outcome serves as a testament to the fact that practitioners and researchers working on Chinese quantitative stock trading strategies may not always need to invest substantial resources into larger models. Instead, by employing strategic fine-tuning and extensive pre-training techniques tailored to the intricacies of the Chinese language, desired outcomes can be effectively achieved. This revelation underscores the significance of considering language-specific characteristics and employing targeted methodologies, illustrating that optimal results can be attained without solely relying on sheer model size for the particular task of Chinese financial sentiment extraction. ## 7 Conclusions This study explores the potential of Large Language Models (LLMs) in boosting quantitative stock trading strategies by extracting sentiment factors from Chinese financial news texts. Our research addresses the need for successful implementation of LLMs in the Chinese stock market and provides a rigorous benchmark and standardized back-testing framework to objectively evaluate the efficacy of various LLMs in sentiment factor extraction from Chinese news texts. We recognize the remarkable comprehension capabilities of LLMs and their potential to facilitate informed investment decisions through the extraction of sentiment factors. However, implementing LLMs in the analysis of Chinese financial texts presents unique challenges due to differences data, language settings and market conditions. To overcome these challenges, we propose a comprehensive benchmark and a standardized back-testing framework to assess the performance of quantitative trading strategies based on the acquired sentiment factor. By employing three distinct LLMs, including a generative LLM (ChatGPT), a Chinese language-specific pre-trained LLM (Erlangshen-RoBERTa), and a financial domain-specific fine-tuned LLM classifier (Chinese FinBERT), we extract sentiment factors from a large volume of Chinese news summaries about listed companies in China. Through rigorous stock trading simulation back-tests, we evaluate the performance of these sentiment factors in terms of annual return, risk-adjusted return, and excess return. The results of our back-tests indicate that the Erlangshen sentiment factor, derived from the Erlangshen-110M-Sentiment model, outperforms the other factors under all metrics. Furthermore, we observe a strong correlation between the values of the Erlangshen sentiment factor and portfolio excess returns, demonstrating the effectiveness of the factor in capturing investment opportunities in the Chinese stock market. These findings highlight the importance of language-specific considerations and targeted methodologies when applying LLMs to sentiment factor extraction in Chinese financial texts. We demonstrate that a comparatively smaller LLM, with strategic and extensive pre-training tailored to the Chinese language, can achieve superior performance within our benchmark. This emphasizes the significance of adapting LLMs to language nuances and employing tailored approaches rather than relying solely on model size. By providing a comprehensive benchmark and standardized procedures, our study contributes to the understanding of LLMs' potential in the specialized domain of sentiment factor extraction from Chinese news text data. We demonstrate the importance of incorporating insights from previous studies and conducting rigorous back-tests using quantifiable metrics to evaluate the effectiveness of LLMs in quantitative \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Factor Name & Annual Excess Return (\%) & Annual Net Asset Return (\%) & Win Rate(\%) & Sharpe Ratio \\ \hline Chinese-GPT & 23.1 & 11.04 & 57.49 & 0.6406 \\ \hline Chinese-FinBERT & 19.79 & 7.73 & 57.19 & 0.4797 \\ \hline Erlangshen-110M & **24.01** & **11.95** & **58.38** & **0.678** \\ \hline \end{tabular} \end{table} Table 2: Main Metrics of Back-test Figure 2: Excess Returns of All Three Sentiment Factors
2304.04890
Optimal high-dimensional entanglement concentration in the bipartite scenario
Considering pure quantum states, entanglement concentration is the procedure where from $N$ copies of a partially entangled state, a single state with higher entanglement can be obtained. Getting a maximally entangled state is possible for $N=1$. However, the associated success probability can be extremely low while increasing the system's dimensionality. In this work, we study two methods to achieve a probabilistic entanglement concentration for bipartite quantum systems with a large dimensionality for $N=1$, regarding a reasonably good probability of success at the expense of having a non-maximal entanglement. Firstly, we define an efficiency function $\mathcal{Q}$ considering a tradeoff between the amount of entanglement (quantified by the I-Concurrence) of the final state after the concentration procedure and its success probability, which leads to solving a quadratic optimization problem. We found an analytical solution, ensuring that an optimal scheme for entanglement concentration can always be found in terms of $\mathcal{Q}$. Finally, a second method was explored, which is based on fixing the success probability and searching for the maximum amount of entanglement attainable. Both ways resemble the Procrustean method applied to a subset of the most significant Schmidt coefficients but obtaining non-maximally entangled states.
L. Palma Torres, M. A. Solís-Prosser, O. Jiménez, E. S. Gómez, A. Delgado
2023-04-10T22:36:49Z
http://arxiv.org/abs/2304.04890v1
# Optimal high-dimensional entanglement concentration in the bipartite scenario ###### Abstract Considering pure quantum states, entanglement concentration is the procedure where from \(N\) copies of a partially entangled state, a single state with higher entanglement can be obtained. Getting a maximally entangled state is possible for \(N\) = 1. However, the associated success probability can be extremely low while increasing the system's dimensionality. In this work, we study two methods to achieve a probabilistic entanglement concentration for bipartite quantum systems with a large dimensionality for \(N\) = 1, regarding a reasonably good probability of success at the expense of having a non-maximal entanglement. Firstly, we define an efficiency function \(\mathbf{Q}\) considering a tradeoff between the amount of entanglement (quantified by the I-Concurrence) of the final state after the concentration procedure and its success probability, which leads to solving a quadratic optimization problem. We found an analytical solution, ensuring that an optimal scheme for entanglement concentration can always be found in terms of \(\mathbf{Q}\). Finally, a second method was explored, which is based on fixing the success probability and searching for the maximum amount of entanglement attainable. Both ways resemble the Procrustean method applied to a subset of the most significant Schmidt coefficients but obtaining non-maximally entangled states. ## I Introduction Quantum entanglement is the most known, remarkable, and useful quantum resource in the quantum information (QI) theory [1] as it underlies several QI protocols, such as dense coding [2], entanglement swapping [3], quantum teleportation [4], and quantum cryptography [5]. For instance, in the bipartite scenario, two users who want to communicate--usually called Alice and Bob--can share an entangled state [6]. In this case, the ability to transmit information encoded in the state shared by Alice and Bob depends on the amount of entanglement [7; 8]. Moreover, the most favorable case for faithful communication is when Alice and Bob share a maximally pure entangled state (MES) [9]. However, even if it was the initial state, the quantum noisy channel used to send the information will produce a loss of correlations in the MES [10]. Moreover, the quantum operations needed to carry out a particular quantum application are performed imperfectly due to the experimental errors, yielding to fidelities of less than one [11]. In such cases where they have access only to a partially entangled state \(\rho\), it is desirable to access a channel that allows a more faithful way to send quantum information. One solution is to implement protocols to increase the amount of entanglement [12; 13]. These protocols are known as entanglement purification or entanglement distillation [14; 15; 16], and entanglement concentration [17]. These methods are based on the fact that local operations and classical communication between Alice and Bob cannot increase, on average, the amount of entanglement in the initially entangled pairs [18]. In the case of entanglement purification, the goal is to increase the purity and the entanglement in the initial state \(\rho\), but under the cost to reduce the number of the initial copies available, and it can be implemented successfully only in a probabilistic way [14]. Moreover, an experimental realization of entanglement purification was carried out for mixed states of polarization-entangled photons using linear optics [19]. In the entanglement concentration, the process considers the cases where the initial partially entangled state is pure [20; 21]. Indeed, there are two ways to implement entanglement concentration: the Procrustean method and the Schmidt projection method [17; 20; 21]. The Procrustean method is easier to implement than the Schmidt projection method because the initial partially entangled state is known. The entanglement concentration procedure is carried out by local filtering onto individual pairs of the initial state [17]. In the Schmidt method, however, the process of entanglement concentration is implemented in at least two unknown partially entangled states through collective simultaneous measurements onto the particles [22]. Thus, schemes for carrying out the entanglement concentration have been proposed for the Procrustean [23] and the Schmidt method [24; 25]. Moreover, its experimental implementation has been achieved in the case of the Procrustean method [26] and for the Schmidt method [22] using partially polarization-entangled photons. The entanglement concentration can also be classified as deterministic [12; 27; 28] as well as probabilis tic [11; 13; 20; 29]. In the deterministic case, the process has a probability equal to one to be successfully implemented in the regimes of few copies or in the asymptotic limit of infinite copies [30]. In this scheme, the quantum circuits to carry out deterministic entanglement concentration have been proposed [31]. On the other hand, in the probabilistic entanglement concentration, the process is achieved with a probability of less than one and has been experimentally implemented [32]. Moreover, the relation in the asymptotic limit between the entanglement concentration in a deterministic and probabilistic way was studied [30]. They found these methods are equivalent considering many copies of the initial state: the error probability for the probabilistic method goes to zero quickly with the number of copies. Besides, the entanglement concentration generally is studied considering two entangled quantum states, but also has been studied for the case of tripartite correlated systems [33; 34]. In this work, we studied the probabilistic entanglement concentration in the bipartite scenario of a pure two-qudit (\(D\)-dimensional) state. Considering a large dimensionality (\(D\gg 2\)), we study two methods to achieve entanglement concentration regarding a reasonably good probability of success at the expense of having a non-maximal entanglement. At first glance, we consider a tradeoff between the amount of entanglement of the state after the concentration procedure and its success probability, quantified by the payoff function \(\mathcal{Q}\). This figure of merit leads to analytically solving a quadratic optimization problem, ensuring that an optimal scheme for entanglement concentration can always be found in terms of \(\mathcal{Q}\). Then, a second method was studied, where we fixed the success probability and searched for the maximum amount of entanglement attainable in this case. We found that both ways resemble the Procrustean method applied to a subset of the most significant Schmidt coefficients without the constraint of obtaining a MES. We envisage the usefulness of these methods in entanglement-based quantum communication and also for device-independent protocols where high-dimensional entangled states are required with a certain amount of entanglement, such as randomness certification and expansion, and self-testing [35; 36; 37]. ## II Revisiting entanglement concentration Throughout this work, we will limit ourselves to the case of entanglement concentration from a single copy of a two-qudit non-maximally entangled pure state. This state will be given by \[\ket{\Phi}_{12}=\sum_{m=1}^{D}a_{m}\ket{m}_{1}\ket{m}_{2}, \tag{1}\] where \(a_{m}\) are positive coefficients such that \(\sum_{m}a_{m}^{2}=1\). The set of states \(\{\ket{m}_{1}\ket{m}_{2}\}_{m=1}^{D}\) can be regarded as the Schmidt basis for the entangled state \(\ket{\Phi}_{12}\) and, therefore, \(a_{m}\) will be the respective Schmidt coefficients. In order to quantify the entanglement conveyed by \(\ket{\Phi}_{12}\), the I-Concurrence [38] can be used, which is given by \[\mathcal{C}(\ket{\Phi}_{12}) =\sqrt{\frac{D}{D-1}\left(1-\operatorname{tr}\left(\rho_{1}^{2} \right)\right)}\] \[=\sqrt{\frac{D}{D-1}\left(1-\sum_{m=1}^{D}a_{m}^{4}\right)}, \tag{2}\] where \(\rho_{1}\) is the reduced density matrix of one of the qudits. This function fulfills the necessary conditions an entanglement measure needs to satisfy [39]. Its minimum value is \(0\), and its maximum is \(1\), which arises when \(\ket{\Phi}_{12}\) is a product state and a maximally entangled state, respectively. This document will refer to \(\mathcal{C}\) simply as entanglement. Another function widely used to assess entanglement is the Schmidt number [40; 41; 42; 43; 44; 45; 46; 47], defined as \[K\left(\ket{\Phi}_{12}\right)=\frac{1}{\operatorname{tr}\left(\rho_{1}^{2} \right)}=\left[\sum_{m=1}^{D}a_{m}^{4}\right]^{-1}. \tag{3}\] It is straightforward to see that \(\mathcal{C}(\ket{\Phi}_{12})\) and \(K(\ket{\Phi}_{12})\) are closely related, as both depend on \(\operatorname{tr}\left(\rho_{1}^{2}\right)\). As we mentioned above, it is well known the correlated state given in Equation (1) can have its entanglement increased through an entanglement concentration procedure [7; 13; 30; 48; 49]. This process is, in general, a probabilistic one [50]. We will follow the next approach to show the concentration scheme. Assuming we have an ancillary qubit initially prepared in state \(\ket{0}_{a}\), it can be used for concentration through a unitary bipartite operation \(U_{a1}\) acting over the ancilla and one of the qudits. Let \[U_{a1}\otimes\mathbb{I}_{2}\ket{0}_{a}\ket{\Phi}_{12}=\ket{0}_{a}A_{\rm s} \ket{\Phi}_{12}+\ket{1}_{a}A_{\rm r}\ket{\Phi}_{12}, \tag{4}\] where \(\ket{\mu}_{a}\) is the state of the ancilla which flags whether concentration was accomplished (\(\mu=0\)) or not (\(\mu=1\)). \(A_{\rm s}\) and \(A_{\rm r}\) are Kraus operators acting on qudit \(1\), modifying the entangled state in each case. A measurement on the ancilla announces if we succeeded. Through this work, we will be concerned with the successful cases only, whose study can be simplified considering \(A_{\rm s}\ket{\Phi}_{12}\) only. Without loss of generality, we may write \[A_{\rm s}\ket{\Phi}_{12}=\sqrt{p_{\rm s}}\ket{\Psi}_{12}, \tag{5}\] where \(p_{\rm s}\) is the probability of success for the concentration procedure, and \(\ket{\Psi}_{12}\) is the resulting state, and therefore we get \(\mathcal{C}(\ket{\Psi}_{12})>\mathcal{C}(\ket{\Phi}_{12})\). If the intention is to obtain a MES, it is known that \(p_{\rm s}=Da_{\rm min}^{2}\), where \(a_{\rm min}^{2}=\min\{\ket{a_{m}}^{2}\}\)[7; 49]. This probability, however, may adopt very small values if the Schmidt coefficients exhibit large differences among them, rendering the procedure inefficient. Alternatively, one may increase the success probability at expense of having a partially entangled state as result. In Ref. [20], Vidal studied the case of transforming Schmidt coefficients \(\{a_{m}\}\) onto a given set \(\{b_{m}\}\) and showed the optimal probability of success for such map. In this way, one may choose the \(b_{m}\) coefficients in such a way the success probability is _good enough_ at the same time the entanglement is increased. Another possibility is to set the resulting state \(\ket{\Psi}_{12}\) as a maximally entangled one _for a subspace_ of dimension \(N\leqslant D\), which is analogous to a Procrustsen method (i.e., cutting off extra probabilities from a given reference value [14]) applied only on a subset of the original Schmidt coefficients [13]. Both approaches, however, force one to constrain the final state to be a given one. Thus, the problem contains \(D\) arbitrary parameters \(b_{m}\), and one has to search thoroughly for a convenient combination of the \(b_{m}\). A possible way to decrease the number of free parameters is to use the Kraus operator \(A_{\mathrm{s}}(\xi)\) given in Ref. [51]. This approach allows to interpolate between the initial Schmidt coefficients \((a_{m})\) and the ones from a maximally entangled state \((1/\sqrt{D})\) using a single parameter \(\xi\). Thus, we may transform \(a_{m}\to b_{m}(\xi)\), where \(0\leqslant\xi\leqslant 1\), and \[b_{m}^{2}(\xi)=a_{m}^{2}+\left(\frac{1}{D}-a_{m}^{2}\right)\xi. \tag{6}\] It can be seen that Equation (6) shows a transformation that preserves the norm of the new state and represents a linear interpolation for the squares of the Schmidt coefficients. Besides, the success probability is \(p(\xi)=\left[1-\xi+\xi/(Da_{\mathrm{min}}^{2})\right]^{-1}\)[51]. This method, although straightforward to understand, leads to little improvement in terms of success probabilities. For instance, Figure 1 evidences that even a little improvement in any of the functions used to assess entanglement is achieved at the expense of a substantial drop in the success probability. This figure also evidences that the I-Concurrence, although simple to work with because it is not a rational function, is not good for graphical assessment since even initial I-Concurrence (see \(\xi=0\)) exhibits values close to 1. Instead, the Schmidt number is not simple to work with due to its inverse dependence on \(\mathrm{tr}\left(\rho_{1}^{2}\right)\) but makes graphical evaluation uncomplicated. These previous attempts lead us to question whether a method can obtain a _reasonable_ increment in entanglement with a non-negligible success probability without imposing constraints on the final state beforehand. The next sections will address this question. ## III Towards efficient entanglement concentration Here, we shall propose and analyze a more efficient method for entanglement concentration from a single copy of a partially entangled pure state. Let us define parameterized Kraus operator \(A_{\mathrm{s}}(\vec{\underline{z}})\) being applied on one of the qudits. This operator can be written as \[A_{\mathrm{s}}(\vec{\underline{z}})=\sum_{m=1}^{D}z_{m}\ket{m}\bra{m}, \tag{7}\] so its action on the two-qudit system after successful concentration will be \[A_{\mathrm{s}}(\vec{\underline{z}})\ket{\Phi}_{12}=\sum_{m=1}^{D}z_{m}a_{m} \ket{m}_{1}\ket{m}_{2}. \tag{8}\] Thus, keeping Equation (5) in mind, the post-concentration state and its probability of success are \[\ket{\Psi(\vec{\underline{z}})}_{12} =\frac{1}{\sqrt{p_{\mathrm{s}}(\vec{\underline{z}})}}\sum_{m=1}^ {D}a_{m}z_{m}\ket{m}_{1}\ket{m}_{2}, \tag{9}\] \[p_{\mathrm{s}}(\vec{\underline{z}}) =\sum_{m=1}^{D}a_{m}^{2}|z_{m}|^{2}, \tag{10}\] respectively. Since \(p_{\mathrm{s}}(\vec{\underline{z}})\) must not exceed 1, it is mandatory to impose \(|z_{m}|\leqslant 1\). The reduced density matrix for Figure 1: Example of entanglement concentration for \(D=32\) by using linear interpolation for the squares of the Schmidt coefficients. one of the subsystems shall be \[\rho_{1}(\overline{z})=\frac{1}{p_{\mathrm{s}}(\overline{z})}\sum_{m=1}^{D}a_{m}^ {2}|z_{m}|^{2}\left|m\right>\left<m\right|. \tag{11}\] I-Concurrence and Schmidt number, as function of \(\overline{z}\), will be given by \[\mathcal{C}(\overline{z}) =\sqrt{\frac{D}{D-1}\left(1-\sum_{m=1}^{D}\frac{a_{m}^{4}|z_{m}|^ {4}}{p_{\mathrm{s}}^{2}(\overline{z})}\right)}, \tag{12}\] \[K(\overline{z}) =\frac{p_{\mathrm{s}}^{2}(\overline{z})}{\sum_{m}a_{m}^{4}|z_{m}| ^{4}}. \tag{13}\] Let us now define a quantity \(\mathcal{Q}(\overline{z})\) aimed to assess the efficiency of the concentration procedure considering a trade-off between the probability of success and the increment in entanglement. A Kraus operator that maximizes this efficiency will be pursued. A choice, although not unique at all, might be \(p_{\mathrm{s}}(\overline{z})\mathcal{C}(\overline{z})\). Maximizing it will be equivalent to maximizing its square, \([p_{\mathrm{s}}(\overline{z})\mathcal{C}(\overline{z})]^{2}\), which should be a simpler procedure since the square root we can see in Equation (12) will not be present. However, \([p_{\mathrm{s}}(\overline{z})\mathcal{C}(\overline{z})]^{2}\) has its maximum when \(z_{m}=1,\ \forall\ m\), which means to keep state \(|\Phi\rangle_{12}\) unaltered1. Instead, we may try with the difference between \(\mathcal{C}^{2}(\overline{z})\) and a constant reference level for the I-Concurrence (\(\mathcal{C}^{2}_{\textsc{net}}\)). This reference level could be, for instance, the initial value \(\mathcal{C}_{\textsc{int}}=\mathcal{C}(|\Phi\rangle_{12})\). Let us try by defining an efficiency function like Footnote 1: This will be proven in Appendix A \[\mathcal{Q}(\overline{z})=p_{\mathrm{s}}^{2}(\overline{z})\left(\mathcal{C}^ {2}(\overline{z})-\mathcal{C}^{2}_{\textsc{net}}\right). \tag{14}\] Equations (10) and (12) allow us to transform Equation (14) into \[\mathcal{Q}(\overline{z}) =\frac{D}{D-1}\sum_{m,n=1}^{D}|z_{m}|^{2}a_{m}^{2}(\mathscr{P}_{ \textsc{net}}-\delta_{mn})a_{n}^{2}|z_{n}|^{2}, \tag{15}\] \[\mathscr{P}_{\textsc{net}} =1-\frac{D-1}{D}\mathcal{C}^{2}_{\textsc{net}}, \tag{16}\] where \(\mathscr{P}_{\textsc{net}}\) has been defined for mathematical convenience, it ranges from \(1/D\) to \(1\), and it can be interpreted as a reference value for the purity of a reduced density matrix, as it can be seen from Equation (2). Other interpretation, as it can be seen from Equation (3) is \(\mathscr{P}_{\textsc{net}}=1/\mathcal{K}_{\textsc{net}}\), where \(\mathcal{K}_{\textsc{net}}\) is a reference value for the Schmidt number. A careful observation of Equation (15) leads us to infer that the problem of efficient entanglement concentration, in the form it has been described in this document, can be rewritten as a quadratic optimization problem given by \[\max_{\overline{y}}\mathcal{Q}(\overline{y}) =\frac{D}{D-1}\overline{y}\,^{\top}\mathbf{H}\,\overline{y},\] (17a) subject to \[0\leqslant y_{m}\leqslant 1, \tag{17b}\] where \[y_{m} =|z_{m}|^{2}, \tag{17c}\] \[[\mathbf{H}]_{m,n} =\ (\mathscr{P}_{\textsc{net}}-\delta_{mn})\,a_{m}^{2}a_{n}^{2}. \tag{17d}\] Therefore, the problem of efficient entanglement concentration for a single pair of entangled qudits can be written as the quadratic optimization problem described in Eqs. (17a)-(17d), with the optimization variables \(y_{m}\) lying in a unit hypercube. Finally, without loss of generality, we may choose the positive root of \(z_{m}=\sqrt{y_{m}}\). Note that the presence of \(\mathcal{C}_{\textsc{net}}\) forces the optimization to look for a solution \(\overline{y}_{\textsc{opt}}\) such that \(\mathcal{C}(\overline{y}_{\textsc{opt}})\geqslant\mathcal{C}_{\textsc{net}}\). Otherwise, function \(\mathcal{Q}(\overline{y}_{\textsc{opt}})\) would adopt a negative value [see Equation (14)] and, therefore, it will not represent a maximum. For this reason, we can assure that \(\mathcal{C}_{\textsc{net}}\geqslant\mathcal{C}_{\textsc{int}}\) forces entanglement concentration. In an extreme case, \(\mathcal{C}_{\textsc{net}}=1\) means the reference level is equal to the maximum possible value I-Concurrence can adopt. Therefore, \(\mathcal{Q}(\overline{y})\) will adopt a negative value _unless_ the final entanglement is also equal to \(1\), for which \(\mathcal{Q}=0\). This is the standard entanglement concentration procedure. On the other hand, \(\mathcal{C}_{\textsc{net}}\) could be slightly smaller than \(\mathcal{C}_{\textsc{int}}\) and, still, entanglement concentration may occur, as it will be shown in Section IV.1. For this problem, the square of the I-Concurrence has been used also because both numerical and analytical solutions are accessible. For graphical purposes, as it was already seen in Figure 1, the Schmidt number shall be used. Moreover, Schmidt number provides an estimation of the number of relevant Schmidt modes involved [41]. We must add that the Kraus operator defined in Equation (7) is diagonal in the Schmidt basis. We may have started by a general Kraus operator, instead of a diagonal one. However, Appendix B shows it suffices to look for diagonal operators. ## IV Solving the problem ### Numerical hints Figure 2 shows the results of numerical resolution of the aforementioned optimization problem for a given set of \(D=16\) Schmidt coefficients \(a_{m}^{2}\), randomly chosen, and sorted decreasingly in order to ease observation. For this example, we tested four possible values of \(\mathcal{C}^{2}_{\textsc{net}}\), given by (i) \(\mathcal{C}^{2}_{\textsc{int}}/2\), much smaller than the initial entanglement; (ii) \(0.98\mathcal{C}^{2}_{\textsc{int}}\), slightly smaller than the initial entanglement; (iii) average value between \(\mathcal{C}_{\textsc{int}}\) and \(1\), a significant increase in entanglement; and (iv) \(\mathcal{C}^{2}_{\textsc{net}}=1\), the maximum possible value for \(\mathcal{C}^{2}_{\textsc{net}}\). The optimization was performed using the function QUAPPRO of Matlab R2022b. Since this is a non-convex problem with constant bounds only, the algorithm "trust-region-reflective" was used since it was the best suited for our optimization problem [52]. The plots show the original Schmidt coefficients (cyan) and the non-normalized coefficients after concentration (dark red). A pattern is evident. For small values of \(\mathcal{C}^{2}_{\textsc{nef}}\), keeping the state as it is seems to be the best option in terms of efficiency. According \(\mathcal{C}^{2}_{\textsc{nef}}\) increases, the solutions of the optimization problem suggest one to use a Procrustean method on the \(n\) largest Schmidt coefficients, where \(n\) increases according \(\mathcal{C}^{2}_{\textsc{nef}}\) moves closer to \(1\). This is analogous to entanglement concentration on a subspace of the bipartite Hilbert space as the one proposed in Ref. [13], although we have not required the final state to be fixed to a given one. Finally, \(\mathcal{C}^{2}_{\textsc{nef}}=1\) represents the ideal entanglement concentration context, in which the resulting state exhibits the maximal entanglement possible. The optimization problem shows the correct result, which consist in uniforming all post-concentration Schmidt coefficients. Although Figure 2 shows a single set of initial Schmidt coefficients, the same pattern is observed for other states in any dimension \(D>2\). In the following, we shall prove why the Procrustean method on a subspace is the most efficient method, according to our figures of merit. ### Analytical results One of the goals of this work is to find the analytical solution of the optimization problem of Eqs. (17). The details of the proof will be shown in the next subsections. The procedure can be summarized as follows: 1. If \(\mathscr{P}_{\textsc{nef}}=1/D\) (minimum attainable value, equivalent to \(\mathcal{C}_{\textsc{nef}}=1\)), it means we are pursuing a standard entanglement concentration using all Schmidt coefficients. Then, perform concentration using \(z_{m}=a_{\min}/a_{m}\). Otherwise, follow Steps 2-8. 2. Sort the Schmidt coefficients in decreasing order. Let us label these sorted coefficients as \(\mathsf{a}_{m}\). 3. Define a vector \(\vec{\mathsf{\beta}}\) such that \(\beta_{n}=1-\sum_{m=1}^{n}\mathsf{a}_{m}^{2}\), for \(n=1,\ldots,D\). 4. Define a vector \(\vec{\boldsymbol{\alpha}}\) such that \(\alpha_{n}=\mathscr{P}_{\textsc{nef}}\beta_{n}/(1-n\mathscr{P}_{\textsc{nef}})\). 5. Find the largest value of \(n\) that allow both \(\alpha_{n}\leqslant\mathsf{a}_{n}^{2}\) and \(n<1/\mathscr{P}_{\textsc{nef}}\) to be simultaneously satisfied. Let us label this value as \(n_{\textsc{opt}}\). 6. Define \(\vec{\mathsf{x}}\) such that \[\mathsf{x}_{m}=\begin{cases}\alpha_{n_{\textsc{opt}}}&\text{, for }m=1,\ldots,n_{\textsc{opt}},\\ 1&\text{, for }m=n_{\textsc{opt}}+1,\ldots,D.\end{cases}\] 7. Define \(\mathsf{y}_{m}=\mathsf{x}_{m}/\mathsf{a}_{m}^{2}\). Afterwards, sort the \(\mathsf{y}_{m}\) using the inverse of the sorting operation described in Step 1. These sorted values will be the \(\mathsf{y}_{m}\) that solve the optimization problem of Eqs. (17). 8. Define \(z_{m}=\sqrt{\mathsf{y}_{m}}\). These values are the ones needed to construct the Kraus operator of Equation (7). Sections IV.2.1 to IV.2.7 hereunder shall detail the underlying reasoning for the algorithm shown above. #### iv.2.1 Redefining the optimization problem In order to prove the solution detailed above, we shall define \(x_{m}=a_{m}^{2}\mathsf{y}_{m}=a_{m}^{2}|\mathsf{z}_{m}|^{2}\). This allows us to write the optimization problem [Eqs. (17)], up to a proportionality constant, in a simpler way: \[\left\{\begin{aligned} &\max_{\vec{\boldsymbol{x}}}& Q(\vec{ \boldsymbol{x}})=\mathscr{P}_{\textsc{nef}}\left(\sum_{m=1}^{D}x_{m}\right)^{2} -\sum_{m=1}^{D}x_{m}^{2},\\ &\text{s. t.}& 0\leqslant x_{m}\leqslant a_{m}^{2}. \end{aligned}\right. \tag{18}\] These new variables \(x_{m}\) are the ones plotted in Figure 2 using dark red bars. So, the \(x_{m}\) will provide an idea about the post-concentration Schmidt coefficients. The domain is no longer the unit hypercube, but a orthotope whose vertices have coordinates components Figure 2: Numerical example of resolution of the quadratic optimization problem [Eqs. (17)] for dimension \(D=16\), using \(4\) different values of \(\mathcal{C}^{2}_{\textsc{nef}}\). Bars show the original Schmidt coefficients (cyan) and the non-normalized coefficients after concentration (dark red). Their respective values of \(\mathscr{P}_{\textsc{nef}}\) and probabilities of success \(p_{\textsc{s}}\) are also shown. equal to \(0\) and \(a_{m}^{2}\). Thus, every \(x_{m}\) has three options: (i) having a fixed value equal to \(0\), (ii) having a fixed value equal to \(a_{m}^{2}\), and (iii) having a variable value between \(0\) and \(a_{m}^{2}\). These options had to be taken into account in order to find all critical points. #### ii.2.2 Finding critical points For starters, we shall define set of indices according to the aforementioned options: 1. \(\mathcal{Z}=\{j:x_{j}=0\}\); 2. \(\mathcal{O}=\{k:x_{k}=a_{k}^{2}\}\); 3. \(\mathcal{I}=\{\ell:0<\kappa_{\ell}<a_{\ell}^{2}\}\). The symbols \(\mathcal{Z}\), \(\mathcal{O}\), and \(\mathcal{I}\) stand for _zero_, _outer_, and _inner_, respectively. In this way, any summation can be written as \(\sum_{m}=\sum_{j\in\mathcal{Z}}+\sum_{k\in\mathcal{O}}+\sum_{\ell\in\mathcal{I}}\). There exist \(3^{D}\) configurations for \((\mathcal{Z},\mathcal{O},\mathcal{I})\). If we label each of those \(3^{D}\) combinations by using the index \(\mu\), then we can define function \(Q_{\mu}(\vec{\mathbf{x}})\) as the function \(Q(\vec{\mathbf{x}})\) for the \(\mu\)th configuration. Explicitly, \[Q_{\mu}(\vec{\mathbf{x}})=\mathscr{P}_{\textsc{ref}}\left(\sum_{k\in\mathcal{ O}_{\mu}}a_{k}^{2}+\sum_{\ell\in\mathcal{I}_{\mu}}x_{\ell}\right)^{2}-\sum_{k \in\mathcal{O}_{\mu}}a_{k}^{4}-\sum_{\ell\in\mathcal{I}_{\mu}}x_{\ell}^{2}. \tag{19}\] By imposing \(\partial_{x_{\ell}}Q_{\mu}(\vec{\mathbf{x}})=0\), we can find the critical points of \(Q_{\mu}(\vec{\mathbf{x}})\). Consequently, \[x_{r}=\mathscr{P}_{\textsc{ref}}\left(\sum_{k\in\mathcal{O}_{\mu}}a_{k}^{2}+ \sum_{\ell\in\mathcal{I}_{\mu}}x_{\ell}\right),\hskip 14.226378ptr\in\mathcal{I}_{ \mu}. \tag{20}\] This means that as long as \(x_{r}\) is not fixed in either \(0\) or \(a_{r}^{2}\), the optimal solution is such that those \(x_{r}\) adopt all the same value. Let us define some additional ancillary parameters, \[\beta_{\mu}=\sum_{k\in\mathcal{O}_{\mu}}a_{k}^{2},\hskip 28.452756pt\gamma_{\mu} =\sum_{k\in\mathcal{O}_{\mu}}a_{k}^{4},\hskip 28.452756ptn_{\mu}=|\mathcal{I}_{ \mu}|, \tag{21}\] being \(n_{\mu}\) the number of free parameters \(x_{\ell}\). With these definitions, we can now assert that \(x_{\ell}=\alpha_{\mu}\) is the critical point for the \(\mu\)th configuration, where \[x_{\ell}=\alpha_{\mu}=\frac{\mathscr{P}_{\textsc{ref}}\beta_{\mu}}{1-\mathscr{ P}_{\textsc{ref}}n_{\mu}},\hskip 28.452756pt\forall\ \ell\in\mathcal{I}_{\mu}. \tag{22}\] Consequently, if \(\mathbb{Q}_{\mu}\) is the value of \(Q_{\mu}(\vec{\mathbf{x}})\) evaluated at the \(\mu\)th critical point, then \[\mathbb{Q}_{\mu} =\mathscr{P}_{\textsc{ref}}\left(\beta_{\mu}+n_{\mu}\alpha_{\mu} \right)^{2}-\gamma_{\mu}-n_{\mu}\alpha_{\mu}^{2}\] \[=\alpha_{\mu}\beta_{\mu}-\gamma_{\mu}. \tag{23}\] The fact that \(x_{\ell}=\alpha_{\mu}\) means that, for every \(\ell\in\mathcal{I}_{\mu}\), coefficients \(a_{\ell}^{2}\) will be transformed into \(\alpha_{\mu}\) as consequence of the concentration procedure. This is, precisely, the Procrustean method applied on a \(n_{\mu}\)-dimensional subset of the coefficients \(\{a_{\mu}\}\). It is worth mentioning that Equation (22) contains the implicit assumption \(\mathscr{P}_{\textsc{ref}}\neq 1/n_{\mu}\), which raises questions regarding the case \(\mathscr{P}_{\textsc{ref}}=1/n_{\mu}\). If that were the case, trying to solve Equation (20) leads us to conclude \(\beta_{\mu}=0\) and, equivalently, \(\mathcal{O}_{\mu}=\emptyset\). In turn, this implies \(Q_{\mu}(\vec{\mathbf{x}})=0\). Nevertheless, we may see from the original definition of \(\mathcal{Q}(\vec{\mathbf{z}})\) [Equation (14)] that the only possible way in which \(Q_{\mu}(\vec{\mathbf{x}})=0\) represents a maximum occurs when \(\mathcal{C}_{\textsc{ref}}^{2}=1\) and \(\mathcal{C}^{2}(\vec{\mathbf{z}})=1\) simultaneously. i.e., \(\mathscr{P}_{\textsc{ref}}=1/D\) has been set and the resulting state is a \(D\)-dimensional maximally entangled state. #### ii.2.3 Upper bounds for \(n_{\mu}\) The Hessian matrix has components given by \[\partial_{x_{\ell}}\partial_{x_{\ell}}Q_{\mu}(\vec{\mathbf{x}})=2(\mathscr{P}_{ \textsc{ref}}-\delta_{rs}). \tag{24}\] It can be shown that \(\mathbb{Q}_{\mu}\) will represent a local maximum for the \(\mu\)th configuration provided \((1-n_{\mu}\mathscr{P}_{\textsc{ref}})>0\) since this condition ensures Hessian matrix to be negative-definite. In other words, \[n_{\mu}<\frac{1}{\mathscr{P}_{\textsc{ref}}}. \tag{25}\] Thus, some configurations \((\mathcal{Z}_{\mu},\mathcal{O}_{\mu},\mathcal{I}_{\mu})\) can be immediately discarded if \(n_{\mu}\) exceeds this bound. #### ii.2.4 Eliminating zeros Let us start by analyzing the effect of zeros by comparing a given \(\mathbb{Q}_{\mu}\)--for which \(x_{r}=0\)--with the value of \(Q_{\mu^{\prime}}(\vec{\mathbf{x}})\) when \(x_{r}=\delta\gtrless 0\). Using Equation (19), we have that \[\mathbb{Q}_{\mu}\Big{|}_{x_{r}=0} =\mathscr{P}_{\textsc{ref}}(\beta_{\mu}+n_{\mu}\alpha_{\mu})^{2}- \gamma_{\mu}-n_{\mu}\alpha_{\mu}^{2}, \tag{26}\] \[Q_{\mu^{\prime}}\Big{|}_{x_{r}\rightarrow\delta} =\mathscr{P}_{\textsc{ref}}(\beta_{\mu}+n_{\mu}\alpha_{\mu}+\delta )^{2}-\gamma_{\mu}-n_{\mu}\alpha_{\mu}^{2}-\delta^{2}, \tag{27}\] which, in turn, leads us to \[Q_{\mu^{\prime}}\Big{|}_{x_{r}\rightarrow\delta}-\mathbb{Q}_{\mu} \Big{|}_{x_{r}=0}= 2\mathscr{P}_{\textsc{ref}}(\beta_{\mu}+n_{\mu}\alpha_{\mu})\delta+ \mathcal{O}(\delta^{2})>0. \tag{28}\] We can see that \(Q_{\mu^{\prime}}\) actually grows if \(x_{r}\) moves away from zero within its neighborhood. This means that every configuration containing a null value on _any_ of its \(x_{m}\) cannot represent a maximum since all neighboring points have higher values for \(Q(\vec{\mathbf{x}})\). Therefore, the solution we are looking for is such that \(\mathcal{Z}_{\mu}=\emptyset\). The number of remaining configurations is now less than \(2^{D}\). #### iv.1.5 Optimal \(n\) will be the largest possible We are left with the options \(x_{\mu}\in\{\alpha_{\mu},a_{m}^{2}\}\). We know that the \(\mu\)th critical point is such that \(x_{\ell}=\alpha_{\mu}\), \(\forall\ \ell\in\mathcal{I}_{\mu}\). Since \(\vec{x}\) still belongs to the orthotope, an additional condition arises: \(\alpha_{\mu}\leqslant a_{\ell}^{2}\), \(\forall\ \ell\in\mathcal{I}_{\mu}\). Let us now compare two solutions \(\mathbb{Q}_{\lambda}\) and \(\mathbb{Q}_{\nu}\), whose critical points differ only in one term \(x_{r}\), so \(r\in\mathcal{O}_{\lambda}\) and \(r\in\mathcal{I}_{\nu}\). Thus, by using Eqs. (21), (22), and (23), we have that \[\beta_{\nu} =\beta_{\lambda}-a_{r}^{2}, \tag{29}\] \[\gamma_{\nu} =\gamma_{\lambda}-a_{r}^{4},\] (30) \[n_{\nu} =n_{\lambda}+1,\] (31) \[\alpha_{\nu} =\frac{\mathscr{P}_{\textsc{REF}}(\beta_{\lambda}-a_{r}^{2})}{1- (n_{\lambda}+1)\mathscr{P}_{\textsc{REF}}},\] (32) \[\mathbb{Q}_{\lambda} =\alpha_{\lambda}\beta_{\lambda}-\gamma_{\lambda},\] (33) \[\mathbb{Q}_{\nu} =\alpha_{\nu}\beta_{\nu}-\gamma_{\nu}. \tag{34}\] Consequently, \[\mathbb{Q}_{\lambda}-\mathbb{Q}_{\nu}= -\frac{\left(\mathscr{P}_{\textsc{REF}}\beta_{\lambda}-(1-n_{ \lambda}\mathscr{P}_{\textsc{REF}})a_{r}^{2}\right)^{2}}{\left(1-n_{\lambda} \mathscr{P}_{\textsc{REF}}\right)\left(1-(n_{\lambda}+1)\mathscr{P}_{ \textsc{REF}}\right)}<0. \tag{35}\] Therefore, a better solution is obtained when \(r\) belongs to \(\mathcal{I}_{\nu}\) over \(\mathcal{O}_{\lambda}\) provided the constraints are fulfilled. In simpler words, the best of the \(\{n_{\mu}\}\) will be the largest possible within the conditions \(n_{\mu}<1/\mathscr{P}_{\textsc{REF}}\) and \(\alpha_{\mu}\leqslant a_{\ell}^{2}\), \(\forall\ \ell\in\mathcal{I}_{\mu}\). #### iv.1.6 Sorting preference For the following comparison, it will be helpful to define two sets \(\mathcal{O}_{0}\) and \(\mathcal{I}_{0}\). We will center our attention on two values \(x_{r}\) and \(x_{s}\). Now, let us compare two solutions \(\mathbb{Q}_{\rho}\) and \(\mathbb{Q}_{\sigma}\) that satisfy \[n_{\rho} =n_{\sigma}=n, \tag{36}\] \[\mathcal{I}_{\rho} =\mathcal{I}_{0}\cup\{r\}, \mathcal{I}_{\sigma} =\mathcal{I}_{0}\cup\{s\},\] (37) \[\mathcal{O}_{\rho} =\mathcal{O}_{0}\cup\{s\}, \mathcal{O}_{\sigma} =\mathcal{O}_{0}\cup\{r\}. \tag{38}\] Thus, \(\mathcal{I}_{\rho}\) and \(\mathcal{I}_{\sigma}\) have \(n-1\) elements in common, whereas \(\mathcal{O}_{\rho}\) and \(\mathcal{O}_{\sigma}\) have \(D-n-1\) elements in common. Consequently, \[\beta_{\rho} =\beta_{0}+a_{s}^{2}, \mathcal{\gamma}_{\rho} =\gamma_{0}+a_{s}^{4}, \tag{39}\] \[\beta_{\sigma} =\beta_{0}+a_{r}^{2}, \mathcal{\gamma}_{\sigma} =\gamma_{0}+a_{r}^{4}, \tag{40}\] where \(\beta_{0}=\sum_{k\in\mathcal{O}_{0}}a_{k}^{2}\) and \(\gamma_{0}=\sum_{k\in\mathcal{O}_{0}}a_{k}^{4}\). For the following, we shall assume \(a_{r}>a_{s}\). Now, since both \(\mathbb{Q}_{\rho}\) and \(\mathbb{Q}_{\sigma}\) are admissible solutions, it _must_ happen that \(\alpha_{\rho}\leqslant a_{r}^{2}\) and \(\alpha_{\sigma}\leqslant a_{s}^{2}\) as consequence of Eqs. (18), (22), and (37). This means \[t(\beta_{0}+a_{s}^{2})\leqslant a_{r}^{2},\quad\quad\text{and}\quad\quad t( \beta_{0}+a_{r}^{2})\leqslant a_{s}^{2}, \tag{41}\] where \(t=\mathscr{P}_{\textsc{REF}}/(1-n\mathscr{P}_{\textsc{REF}})\) is a positive parameter. If we add these two inequalities, we obtain \[(a_{r}^{2}+a_{s}^{2})(1-t)-2t\beta_{0}\geqslant 0. \tag{42}\] The difference between the solutions \(\mathbb{Q}_{\rho}\) and \(\mathbb{Q}_{\sigma}\) is \[\Delta\mathbb{Q} =\mathbb{Q}_{\rho}-\mathbb{Q}_{\sigma}\] \[= \tag{43}\] Since \(a_{r}>a_{s}\) was assumed and the inequality of Equation (42) was obtained, it can be assured that \(\mathbb{Q}_{\rho}>\mathbb{Q}_{\sigma}\). Now, let us remember that \(\mathbb{Q}_{\rho}\) is the solution in which \(x_{r}=\alpha_{\rho}\) and \(x_{s}=a_{s}^{2}\). This means it is better to cut off coefficient \(a_{r}\) (the larger one) over \(a_{s}\). Since we already know (see Section IV.2.5) that \(n\) must be the largest possible within the constraints \(n<1/\mathscr{P}_{\textsc{REF}}\) and \(\alpha_{\mu}\leqslant a_{\ell}^{2}\), \(\forall\ \ell\in\mathcal{I}_{\mu}\), we must compare now all the solutions \(\mathbb{Q}_{\mu}\) such that \(n_{\mu}\) is equal to that optimal value of \(n\). According to the computations of this section, the most efficient concentration scheme will consist in cutting off the \(n\) largest Schmidt coefficients, which is in complete agreement with the results shown in Figure 2. #### iv.1.7 How to construct the optimal concentration scheme In summary, we know now that if \(\mathcal{C}_{\textsc{REF}}=1\) (equivalently, \(\mathscr{P}_{\textsc{REF}}=1/D\)), then the optimal solution corre Figure 3: Comparison between results obtained through numerical optimization (\(\mathcal{Q}(\tilde{\mathcal{P}}_{\textsc{num}})\)) and the ones obtained by using the algorithm introduced at the beginning of Section IV.2 (\(\mathcal{Q}(\tilde{\mathcal{P}}_{\textsc{alg}})\)). Relative differences for are shown for 100 values of \(\mathscr{P}_{\textsc{REF}}\). The vertical dotted line indicates the initial value of the purity of the reduced density matrix, i.e., \(\mathscr{P}_{\textsc{REF}}=\mathscr{P}_{\textsc{INT}}\). See the main text for details about the computation of these relative differences. sponds to a entanglement concentration procedure that yields a \(D\)-dimensional maximally entangled state. On the other hand, if \(\mathcal{C}_{\textsc{ref}}<1\) (equivalently, \(\mathscr{P}_{\textsc{ref}}>1/D\)), we have shown that the optimal solution does not contain zeros, it has values either given by \(x_{m}=a_{m}^{2}\) (i.e., keep \(a_{m}\) as it is) or by \(x_{m}=\alpha_{\mu}\) (i.e., crop coefficients \(a_{m}\) to a given value \(\alpha_{\mu}\)), the \(n\) largest Schmidt coefficients are to be cropped, and \(n\) must be as large as possible within constraints given by \(n<1/\mathscr{P}_{\textsc{ref}}\) and \(\alpha_{\mu}\leqslant a_{m}^{2}\). Once the optimal \(x_{m}\) are found, we may compute the corresponding \(y_{m}\) and \(z_{m}\). These rules gave rise to the algorithm described at the beginning of Section IV.2. Moreover, we performed thousands of numerical simulations, ranging from \(D=32\) to \(D=1024\), that confirmed such algorithm actually provides the optimal solution. Figure 3 shows a sample of those simulations for \(D=1024\), depicting relative differences between the results from numerical optimization (\(\mathbf{\tilde{y}}_{\text{num}}\) and \(Q(\mathbf{\tilde{y}}_{\text{num}})\)) and the ones from the algorithm proposed in this section (\(\mathbf{\tilde{y}}_{\text{alg}}\) and \(Q(\mathbf{\tilde{y}}_{\text{alg}})\)) for \(100\) values of \(\mathscr{P}_{\textsc{ref}}\). These relative differences are computed as \[\Delta y_{\text{relative}} =\frac{1}{D}\sum_{m=1}^{D}\left|\frac{\left(\mathbf{\tilde{y}}_{ \text{num}}\right)_{m}-\left(\mathbf{\tilde{y}}_{\text{alg}}\right)_{m}}{ \left(\mathbf{\tilde{y}}_{\text{num}}\right)_{m}}\right|, \tag{44}\] \[\Delta\mathcal{Q}_{\text{relative}} =\left|\frac{\mathcal{Q}(\mathbf{\tilde{y}}_{\text{num}})- \mathcal{Q}(\mathbf{\tilde{y}}_{\text{alg}})}{Q(\mathbf{\tilde{y}}_{\text{num }})}\right|. \tag{45}\] The initial Schmidt coefficients were computed from a randomly-generated \(D\!\times\!D\) entangled state. As the data of Figure 3 shows, relative differences between the two solutions being compared are negligible, thus demonstrating the adequateness of the proposed algorithm. Discrepancies can be explained as a consequence of floating-point computation precision. After efficiency optimization, one should evaluate whether practical advantages were obtained from it. Figure 4 shows the probability of success and Schmidt number for the same optimizations carried out for Figure 3. The initial state had a Schmidt number \(K_{\textsc{init}}\approx 512\). Raising this number to its maximum (i.e., \(K=1024\)) can be done with a probability of success \(p_{\text{s}}=Da_{\text{min}}^{2}\sim 10^{-7}\) (not shown in the graphs in order to ease observation). However, non-maximal Schmidt numbers can be obtained with much better probabilities. For instance, \(\mathscr{P}_{\textsc{ref}}\approx 1.15\times 10^{-3}\) allows one to achieve a considerable Schmidt number (\(K=900\)) with a success probability \(p_{\text{s}}=11\%\). Although \(\mathscr{P}_{\textsc{ref}}\approx 1.15\times 10^{-3}\) seems to be a non-trivial number of uncertain origin, we may notice that \(1/\mathscr{P}_{\textsc{ref}}\sim 868\). Thus, an acceptable method to estimate the necessary value of \(\mathscr{P}_{\textsc{ref}}\) consists in setting a minimum desirable Schmidt number \(\mathcal{K}_{\textsc{min}}\), define a slightly smaller threshold number \(\mathcal{K}_{\textsc{init}}<\mathcal{K}_{\textsc{min}}\), and computing \(\mathscr{P}_{\textsc{ref}}=1/\mathcal{K}_{\textsc{init}}\). It is worth mentioning that the solution described in this section closely resembles the entanglement concentration procedure described in Ref. [13], which was also graphically explained in Ref. [30]. However, we did not set the final state to a fixed one in our formulation. Instead, we defined a single figure of merit to be interpreted as efficiency, and its optimization suggested performing entanglement concentration on the subspace of the largest Schmidt coefficients. ## V Entanglement concentration with fixed probability of success An alternative way to solve the problem of efficient entanglement concentration is by setting the success probability to a fixed value \(p_{\textsc{ref}}\), and inquiring about the largest entanglement it can be extracted. As it can be seen from Eqs. (12) and Eqs. (13), this question reduces to minimization of the purity of the reduced density matrix, as \[\min_{\mathbf{\tilde{y}}}\ \mathcal{P}(\mathbf{\tilde{y}})=\ \left[\frac{1}{p_{\text{s}}(\mathbf{\tilde{y}})}\sum_{m=1}^{D}a_{m}^{4}y_{m}^ {2}\right],\\ \text{subject to}\ \ 0\leqslant y_{m}\leqslant 1\quad\text{and}\quad \sum_{m=1}^{D}a_{m}^{2}y_{m}=p_{\textsc{ref}}, \tag{46}\] where we have already used \(y_{m}=|z_{m}|^{2}\). As we have imposed \(p_{\text{s}}(\mathbf{\tilde{y}})=p_{\textsc{ref}}\), the optimization reduces to optimize \(\sum_{m}a_{m}^{4}y_{m}^{2}\). As in the previous section, we shall resort to \(x_{m}=y_{m}^{2}\), and the sets of indices \(\mathcal{Z}_{\mu}\), \(\mathcal{O}_{\mu}\), and \(\mathcal{I}_{\mu}\). Using the \(x_{m}\), we are left to optimize \(\sum_{m}x_{m}^{4}\), and the constraint Figure 4: Success probability and Schmidt number for the same state and optimizations used in Figure 3. The vertical dotted line indicates the initial value of the purity of the reduced density matrix, i.e., \(\mathscr{P}_{\textsc{ref}}=\mathscr{P}_{\textsc{ref}}\) and the horizontal dashed line shows the initial Schmidt number. Keep in mind that larger values of \(\mathscr{P}_{\textsc{ref}}\) mean smaller values of \(\mathcal{C}_{\textsc{ref}}\). of fixed probability can be rewritten as \(\sum_{m}x_{m}=p_{\text{\tiny{FIX}}}\), which also allows us to write one of the variables in terms of the others. Let \[x_{\mathbf{\vartheta}}=p_{\text{\tiny{FIX}}}-\sum_{m\neq\mathbf{\vartheta}}x_{m}. \tag{47}\] Then, the minimization of the purity can be rewritten as \[\text{minimize}\left(p_{\text{\tiny{FIX}}}\mathcal{P}(\vec{\mathbf{x }})\right) =\] \[= \sum_{k\in\mathcal{O}_{\mathbf{\mu}}}a_{k}^{4}+\sum_{\begin{subarray}{ c}\ell\in I_{\mathbf{\mu}}\\ \ell\neq\mathbf{\vartheta}\end{subarray}}x_{\ell}^{2}\] \[+\left(\sum_{k\in\mathcal{O}_{\mathbf{\mu}}}a_{k}^{2}+\sum_{ \begin{subarray}{c}\ell\in I_{\mathbf{\mu}}\\ \ell\neq\mathbf{\vartheta}\end{subarray}}x_{\ell}\right)^{2}. \tag{48}\] Critical points are found by setting \(\partial\left(p_{\text{\tiny{FIX}}}\mathcal{P}(\vec{\mathbf{x}})\right)/\partial x _{\mathbf{\nu}}=0\), with \(r\in I_{\mathbf{\mu}}\) and \(r\neq\mathbf{\vartheta}\). This leads us to \(x_{r}=\kappa_{\mathbf{\mu}}\), where \[\kappa_{\mathbf{\mu}}=\frac{P-\beta_{\mathbf{\mu}}}{n_{\mathbf{\mu}}}. \tag{49}\] In turn, Equation (47) implies that \(x_{\mathbf{\vartheta}}=\kappa_{\mathbf{\mu}}\) as well. Thus, we obtained solutions given by either \(x_{m}=a_{m}^{2},\;\;x_{m}=0\), or \(x_{\mathbf{m}}=\kappa_{\mathbf{\mu}}\), which is the exact behavior exhibited by the \(x_{m}\) from Section IV up to a change from \(\alpha_{\mathbf{\mu}}\) to \(\kappa_{\mathbf{\mu}}\). The same analysis performed in Sections IV.2.4-IV.2.7 can be applied here. The conclusions are very similar: (i) the optimal values of \(x_{m}\) are different from zero, (ii) if \(n\) is the number of variables \(x_{\mathbf{m}}\) being equal to \(\kappa_{\mathbf{\mu}}\), then \(n\) must be as large as possible within the constraint \(0\leqslant\kappa\leqslant a_{\ell}^{2}\), and (iii) the \(n\) largest Schmidt coefficients are cut off. Thus, an algorithm can be constructed as follows: 1. Sort the Schmidt coefficients in decreasing order. Let us label these sorted coefficients as \(\mathfrak{a}_{m}\). 2. Define a vector \(\vec{\mathbf{\beta}}\) such that \(\beta_{\mathbf{n}}=1-\sum_{m=1}^{n}\mathfrak{a}_{m}^{2}\), for \(n=1,\ldots,D\). 3. Define a vector \(\vec{\mathbf{\kappa}}\) such that \(\kappa_{\mathbf{n}}=(p_{\text{\tiny{FIX}}}-\beta_{\mathbf{n}})/n\). 4. Find the largest value of \(n\) such that \(\kappa_{\mathbf{n}}\geqslant 0\) and \(\kappa_{\mathbf{n}}<\mathfrak{a}_{\mathbf{n}}^{2}\) are simultaneously satisfied. Let us label this value as \(n_{\text{\tiny{OPT}}}\). 5. Define \(\vec{\mathbf{x}}\) such that \[\mathsf{x}_{\mathbf{m}}=\begin{cases}\kappa_{n_{\text{\tiny{OPT}}}}&,\;\text{for} \;m=1,\ldots,n_{\text{\tiny{OPT}}^{\ast}}\\ 1&,\;\text{for}\;m=n_{\text{\tiny{OPT}}}+1,\ldots,D.\end{cases}\] 6. Define \(\mathsf{y}_{\mathbf{m}}=\mathsf{x}_{\mathbf{m}}/\mathfrak{a}_{\mathbf{m}}^{2}\). Afterwards, sort the \(\mathsf{y}_{\mathbf{m}}\) using the inverse of the sorting operation described in Step 1. These sorted values will be the \(\mathsf{y}_{\mathbf{m}}\) that solve the optimization problem of Eqs. (17). 7. Define \(z_{\mathbf{m}}=\sqrt{\mathsf{y}_{\mathbf{m}}}\). These values are the ones needed to construct the Kraus operator of Equation (7). As it can be seen, the solutions obtained for this problem are completely analogous to the ones of the previous section. The advantage of this approach lies in the fact that \(\mathcal{P}(\vec{\mathbf{x}})\) appears in both I-Concurrence and Schmidt number. Thus, it is a favorable way to increase the Schmidt number without introducing nontrivial mathematical complications. Once more, this result represents a Procrustean method applied on a subspace, although only one parameter has been fixed (\(p_{\text{\tiny{FIX}}}\)) instead of a whole state. ## VI Conclusions In summary, we have studied entanglement concentration from a single copy of a two-qudit entangled state in terms of efficiency. As the ideal procedure--obtaining a maximally entangled state--is extremely inefficient in terms of probability, we studied the possibility of concentrating a fair enough amount of entanglement and, simultaneously, increment the success probability. Two methods were analyzed. For the first one, a function \(\mathcal{Q}(\vec{\mathbf{y}})\) was defined in order to quantify efficiency as the product of success probability and entanglement increment. This function allows one to introduce a parameter \(\mathcal{P}_{\text{\tiny{RF}}}\), which is loosely related to a minimal entanglement amount intended to extract. The other one consisted in fixing the success probability to a given value and finding the maximal entanglement it can be extracted under the constraint herein. We found that, for both cases, the solution resembles a Procrustean method applied on a subset of the largest Schmidt coefficients. Such application of the Procrustean method has been already studied in the literature under the assumption that the final state _must_ be a \(n\)-dimensional maximally entangled state, with \(n<D\). Therefore, \(n\) constraints are implicitly assumed. Instead, this work does not impose constraints on the final state. In the first method, the Procrustean method results as consequence of a quadratic optimization problem. In the second one, it emerges after optimizing entanglement and using a single constraint. We anticipate this work may be useful for understanding how to concentrate entanglement efficiently in very large dimensions. As entanglement is a resource underlying many protocols in Quantum Information Science, we believe many people in the Quantum Information community may benefit from these findings. ###### Acknowledgements. L.P.T. acknowledges partial financial support from the Master of Science in Physics program at Universidad de La Frontera. E.S.G. and A.D. thank the support of the Fondo Nacional de Desarrollo Cientifico y Tecnologico (FONDECYT) (Grant No. 1231940). O.J. thank the internal grant from Universidad Mayor (PEP I-2019020). This work was also supported by the National Agency of Research and Development (ANID) - Millennium Science Initiative Program - ICN17\({}_{-}\)012. ## Appendix A Why is it necessary to add a difference? In Section III, we asserted that \([p(\overline{z})C(\overline{z})]^{2}\) has its maximum when \(z_{m}=1\), \(\forall\ m\). This means to keep the original state unaltered, without making any attempt to concentrate entanglement. In order to prove it, let us remember Eqs. (10) and (12). We may observe that \[\delta = \left[p_{\text{\tiny S}}(\overline{z})C(\overline{z})\right]^{2} \Big{|}_{z_{m}=1}-[p_{\text{\tiny S}}(\overline{z})C(\overline{z})]^{2}\] \[= \frac{D}{D-1}\left(p_{\text{\tiny S}}^{2}(\overline{z})-\sum_{m= 1}^{D}a_{m}^{4}|z_{m}|^{4}\right)\Big{|}_{\overline{z}}^{\alpha=1}\] \[= \frac{D}{D-1}\sum_{m,n=1}^{D}\left(1-\delta_{mn}\right)\left(1-| z_{m}|^{2}|z_{n}|^{2}\right)a_{m}^{2}a_{n}^{2}\] \[\geqslant 0,\] because \(|z_{m}|\leqslant 1\). Thus, straight optimization of \(p^{2}(\overline{z})C^{2}(\overline{z})\) will suggest to _do nothing_ and, instead, keep entanglement as it is. For this reason, it is necessary to add a reference level for entanglement. In other words, it is better to optimize \(p^{2}(\overline{z})\left[C^{2}(\overline{z})-C_{\text{\tiny REF}}^{2}\right]\) rather than maximizing solely \(p^{2}(\overline{z})C^{2}(\overline{z})\) in order to actually increment entanglement. ## Appendix B Why does a diagonal Kraus operator suffice? In Equation (7), we assumed \(A_{\text{\tiny S}}(\overline{z})\) to be diagonal in the \(\{\ket{m}\}\) basis. This section will show why nondiagonal terms do not increase efficiency. Let us redefine \(A_{\text{\tiny S}}\) to be a general operator with components \(\zeta_{mn}\). We will add an additional definition. Let \(\Pi(\zeta)=A_{\text{\tiny S}}^{*}A_{\text{\tiny S}}\) be a positive operator whose matrix components are \(\pi_{mn}=\sum_{j}\tau_{jm}^{*}\xi_{jn}\) and satisfy \(\pi_{mm}^{*}=\pi_{nm}\) and \(\pi_{jj}\geqslant 0\). If \(\Pi\) is known, then \(A_{\text{\tiny S}}=U\sqrt{\Pi}\), where \(U\) is an arbitrary unitary operator whose explicit form depends on experimental details about the physical implementation of \(A_{\text{\tiny S}}\) Now, considering that \(A_{\text{\tiny S}}=U\sqrt{\Pi}\), Equations (9) and (10) become \[\ket{\Psi(\zeta)}_{12} = \left(U\otimes\mathbb{I}\right)\sum_{m=1}^{D}\frac{a_{m}}{\sqrt{p _{\text{\tiny S}}(\zeta)}}\sqrt{\Pi}\ket{m}_{1}\ket{m}_{2},\] \[p_{\text{\tiny S}}(\zeta) = \sum_{m=1}^{D}\pi_{mm}a_{m}^{2},\] and the efficiency function is written as \[\mathcal{Q}(\zeta) = \frac{D}{D-1}\left[\mathcal{P}_{\text{\tiny REF}}\left(\sum_{m=1} ^{D}a_{m}^{2}\pi_{mm}\right)^{2}\right. \tag{10}\] \[\left.-\sum_{m=1}^{D}a_{m}^{4}\pi_{mm}^{2}-\sum_{m\neq n}^{D}a_{m} ^{2}a_{n}^{2}|\pi_{mm}|^{2}\right]\] It can be seen that \(\mathcal{Q}(\zeta)\) does not depend on \(U\). In addition, the only positive term on the RHS of Equation (10) depends on the diagonal components \(\pi_{mm}\), whereas nondiagonal components only diminish the efficiency. Consequently, the optimal operator \(\Pi\)_must_ be diagonal. This last condition can be satisfied, although not uniquely, by imposing \(A_{\text{\tiny S}}\) to be diagonal, so Equation (7) suffices to find the adequate operation to optimize the function \(\mathcal{Q}\).
2305.02110
A hidden population of white dwarfs with atmospheric carbon traces in the Gaia bifurcation
The ESA Gaia space mission has revealed a bifurcation of the white dwarf (WD) sequence on the color magnitude diagram in two branches: A and B. While the A branch consists mostly of WDs with H-rich atmospheres, the B branch is not completely understood. Although invoked to be populated mainly by He-rich WDs, the B branch overlaps a $\sim 0.8M_\odot$ evolutionary track with a pure He envelope, fact that would imply an unexpected peak in the WD mass distribution. In cold He-rich WDs, it is expected that the outer convective zone penetrates into deep C-rich layers, thus leading to a slight C contamination in their surfaces at $\sim 10,000$K. Here we aim at studying the Gaia bifurcation as the natural consequence of C dredge-up by convection in cold He-dominated WDs. Relying on accurate atmosphere models, we provide a new set of evolutionary models for He-rich WDs employing different prescriptions for the C enrichment. On the basis of these models, we made a population synthesis study of the Gaia 100pc WD sample to constrain the models that best fit the bifurcation. Our study shows that He-rich WD models with a slight C contamination below the optical detection limit can accurately reproduce the Gaia bifurcation. We refer to these stars as stealth DQ WDs because they do not exhibit detectable C signatures in their optical spectra, but the presence of C in their atmosphere produces a continuum absorption favouring the emission in bluer wavelengths, thereby creating the B branch of the bifurcation. Also, we show that the mass distribution for He-rich WDs obtained when a stealth C contamination is considered is consistent with the mass distribution for H-rich WDs and with the standard evolutionary channels for their formation. We conclude that stealth DQ WDs can account for the lower branch in the Gaia bifurcation. The C signatures of these stars could be detectable in Ultra-Violet spectra.
Maria Camisassa, Santiago Torres, Mark Hollands, Detlev Koester, Roberto Raddi, Leandro G. Althaus, Alberto Rebassa-Mansergas
2023-05-03T13:34:17Z
http://arxiv.org/abs/2305.02110v1
# A hidden population of white dwarfs with atmospheric carbon traces in the Gaia bifurcation ###### Abstract Context:The high-quality photometric and astrometric capabilities of the ESA _Gaia_ space mission have revealed a bifurcation of the white dwarf sequence on the color magnitude diagram with two branches: A and B. While the A branch consists mostly of white dwarfs with hydrogen(H)-rich atmospheres, the B branch is not completely understood. Although invoked to be populated mainly by helium(He)-rich white dwarfs, the B branch overlaps a \(\sim 0.8\,\mathrm{M}_{\odot}\) evolutionary track with a pure He envelope, fact that would imply an unexpected peak in the white dwarf mass distribution. Aims:In cold He-rich white dwarfs, it is expected that the outer convective zone penetrates into deep carbon(C)-rich layers, thus leading to a slight C contamination in their surfaces at \(\sim 10\,000\,\mathrm{K}\). In this paper we aim at studying the _Gaia_ bifurcation as the natural consequence of C dredge-up by convection in cold He-dominated white dwarfs. Methods:Relying on accurate atmosphere models, we provide a new set of evolutionary models for He-rich white dwarfs employing different prescriptions for the C enrichment. On the basis of these models, we made a population synthesis study of the _Gaia_ 100 pc white dwarf sample to constrain the models that best fit the bifurcation. Results:Our study shows that He-rich white dwarf models with a slight C contamination below the optical detection limit can accurately reproduce the _Gaia_ bifurcation. We refer to these stars as "stealth DQ" white dwarfs because they do not exhibit detectable C signatures in their optical spectra, but the presence of C in their atmosphere produces a continuum absorption favouring the emission in bluer wavelengths, thereby creating the B branch of the bifurcation. Furthermore, our study shows that the white dwarf mass distribution obtained when a stealth C contamination is taken into account presents a peak at \(\sim 0.6\,\mathrm{M}_{\odot}\), which is consistent with the mass distribution for H-rich white dwarfs and with the standard evolutionary channels for their formation. Conclusions:We conclude that "stealth DQ" white dwarfs can account for the lower branch in the _Gaia_ bifurcation. The C signatures of these stars could be detectable in Ultra-Violet (UV) spectra. ## 1 Introduction White dwarfs are the most common end point of stellar evolution, as they are the final destiny of more than 95% of the main sequence stars. These old compact objects, supported by electron-degeneracy pressure, undergo a slow cooling process that lasts for several Gyrs, turning these objects into reliable cosmochronometers to date stellar populations and main sequence companions -- see for instance, the reviews of Fontaine & Brassard (2008), Winget & Kepler (2008), Althaus et al. (2010), Garcia-Berro & Oswalt (2016) and Corsico et al. (2019). Due to their unique characteristics, white dwarf stars are considered important objects for understanding the late stages of stellar evolution, as well as planetary systems and the structure and evolution of our Galaxy. Also, they can be used to infer the star formation rate, the initial mass function, the initial to final mass relation and the chemical evolution in the solar neighborhood (e.g. Rebassa-Mansergas et al.2021; Raddi et al.2022). Furthermore, the extreme densities that characterize the white dwarf interior, turn these stars into promising laboratories to study stellar matter and energy sources under extreme conditions (e.g. Isern et al.2022). In the last decades, we have entered a golden era for the exploitation of white dwarf science. Surveys like Sloan Digital Sky Survey (SDSS, York et al.2000), the Radial Velocity Experiment (RAVE, Steinmetz et al.2020, 2020), the Panoramic Survey Telescope and Rapid Response System (PanSTARRS, Chambers et al.2016) and others, are providing the first large samples of moderate-resolution spectra and multi-band photometry for stars in our Galaxy (e.g. Kepler et al.2021), and missions like NASA Kepler and NASA Transiting Exoplanet Survey Satellite (TESS, Ricker et al.2015) are providing measurements of photometric variations of these stars. In particular, the successive data releases by the ESA space mission _Gaia_ constitute an unprecedented advance, providing multi-band photometry, synthetic spectra, proper motions and parallaxes for more than a billion sources (Gaia Collaboration et al.2018, 2021). Among these, Gentile Fusillo et al. (2021) identified nearly 360 000 high-confident white dwarf candidates in the _Gaia_ Data Release 3 (DR3), of which nearly 13 000 are within the 100 pc volume-limited sample (Jimenez-Esteban et al., 2018, 2023), leading the white dwarf research field into a new era. The power of _Gaia_ space mission has revealed some unexpected features in the white dwarf cooling sequence on the color magnitude diagram, identified as the A, B and Q branches (see Gaia Collaboration et al., 2018). The A-branch is mainly populated by white dwarfs with H-rich atmospheres, that is DA spectral type, and overlaps with the evolutionary track of an approximately 0.6 M\({}_{\odot}\) white dwarf. On the other hand, the B-branch constitutes a bifurcation from the A-branch, has a significant fraction of He-rich white dwarfs, and lies on a \(\sim\)0.8 M\({}_{\odot}\) evolutionary track with a pure He envelope. Finally, the Q branch has a weaker concentration of white dwarfs and does not follow any evolutionary track nor isochrone. The origin of the Q branch has been extensively discussed in the literature, with the general consensus being that it arises from an energy released during white dwarf core crystallization (Tremblay et al., 2019; Cheng et al., 2019; Camisassa et al., 2021; Blouin et al., 2021; Caplan et al., 2021; Camisassa et al., 2022; Bauer et al., 2020; Fleury et al., 2022). In contrast, the origin of the AB bifurcation remains unclear, with no general consensus. El-Badry et al. (2018) attributed the existence of the bifurcation to a flattening in the initial-to-final-mass relation (IFMR), which leads to a secondary peak in the white dwarf mass distribution at approximately 0.8 M\({}_{\odot}\). Similarly, Kilic et al. (2018) also suggested the presence of this secondary peak, but they attributed it to the occurrence of stellar mergers. Alternatively, Ourique et al. (2020) proposed a different explanation for the origin of the _Gaia_ bifurcation, suggesting that spectral evolution from a pure H to pure He envelope at an effective temperature (\(T_{\rm eff}\)) \(\sim 10\,000\) K in approximately 16% of DA white dwarfs may be an important contributing factor. Nevertheless, a recent study by Bergeron et al. (2019) found that assuming a pure He atmosphere for all non-DA white dwarfs leads to a low number of objects with masses around \(\sim 0.6\) M\({}_{\odot}\) when \(T_{\rm eff}<11\,000\) K, which is inconsistent with the observed mass distribution at higher effective temperatures. These authors, therefore, suggested to consider a He atmosphere with traces of H instead (see also Serenelli et al., 2019). The additional electrons provided by traces of H in a He-rich white dwarf atmosphere cause a shift in the 0.6 M\({}_{\odot}\) evolutionary track, when \(T_{\rm eff}<11,000\) K, thus creating the bifurcation. However, Bergeron et al. (2019) noticed that a significant fraction of He-rich white dwarfs should have H abundances low enough for not to affect their photometric mass determinations at 8 000 K \(\leq\) T\({}_{\rm eff}\)\(\lesssim 10\,000\) K, and, therefore, still a significant number of pure He white dwarfs with \(\sim 0.6\) M\({}_{\odot}\) would be expected. Thus, these authors suggested that another electron donor, such as C or metals, is required to fully explain the origin of the _Gaia_ bifurcation. In this paper we aim at studying the B branch as a result of the presence of C in the atmosphere of He-dominated white dwarfs. Theoretical models predict that He-dominated white dwarfs will dredge-up C as a result of convective mixing in the so called PG1159-DO-DB-DQ Spectral Evolutionary Channel (see Koester et al., 1982; Pelletier et al., 1986; Althaus et al., 2005; Dufour et al., 2005; Camisassa et al., 2017; Bedard et al., 2022). Unfortunately, the exact amount of C dredged-up in a He rich white dwarf cannot be predicted by theoretical models. In particular, Bedard et al. (2022) followed the C enrichment from the beginning of the white dwarf cooling phase under different initial conditions and physical inputs, finding that the amount of C dredged-up by convection depends on the initial C surface abundance, the thickness of the He layer, the efficiency of extra-mixing beyond the convective boundaries, and the stellar mass. We find that the _Gaia_ bifurcation can be explained by He rich white dwarfs that experience a spectral evolution at \(T_{\rm eff}\sim 12\,000\) K with a smooth C contamination just below the detection limits for optical spectroscopy. While this C contamination does not produce C lines in the optical spectra, it adds free electrons to the He envelope, that produce a continuum opacity that shifts the 0.6 M\({}_{\odot}\) evolutionary track causing the _Gaia_ bifurcation. This C contamination, however, has strong features in the UV spectra that could potentially be detectable. This paper is organized as follows. In Sect. 2.1 we describe the white dwarf evolutionary models employed. In Sect. 2.2 we describe the C enrichment observed in white dwarfs determined in previous studies and the different prescriptions for the C enrichment adopted in this paper. In Sect. 2.3 and 2.4 we present details on the atmosphere models and the population synthesis code employed, respectively. In Sect. 3 we describe our results and, finally, in Sect. 4 we summarize the main findings of the paper. ## 2 Methods ### The white dwarf cooling models White dwarfs can be categorized based on the presence or the absence of H in their atmospheres. It is thought that H is dominant in the atmosphere of nearly 80% per cent of all white dwarfs (the so-called DA white dwarf class), and that the remaining 20% are depleted in H (the so-called non-DA white dwarf class) (Kepler et al., 2021; Jimenez-Esteban et al., 2023). The formation and evolution of H-rich white dwarfs is reasonably well understood and has been computed in several studies in the literature (for recent studies see for instance Renedo et al., 2010; Althaus et al., 2015; Camisassa et al., 2016, 2019; Bedard et al., 2020; Salaris et al., 2022), and their pure H envelopes are the natural consequence of H floating up due to gravitational settling acting on an initial mixed H/He composition. Although a fraction of the H-rich white dwarfs could undergo spectral evolution due to convective dilution (Bedard et al., 2023), in this paper we assume that H-rich white dwarfs will retain their H envelope through their evolution. While DA white dwarfs have H-dominated atmospheres, most of the non-DA white dwarfs have atmospheres that are dominated by He. The evolution of white dwarfs with He-dominated atmospheres has been extensively studied during the last decades (Althaus et al., 2005; Camisassa et al., 2017; Salaris et al., 2022; Bedard et al., 2022). It is thought that He-rich white dwarfs are the descendants of the PG 1159 stars, hot stars with an envelope composed by He, C, and oxygen (O) in similar amounts. Initially, the C and O in the outer layers are supported by a weak radiative wind but as the white dwarf cools down, the wind weakens and the C and O rapidly sink into the stellar interior as a result of gravitational settling, leading to a pure-He atmosphere. This pure-He enveloped white dwarf should present He absorption lines, being classified first as a DO (He ii lines) and later on as a DB (He i lines). As the white dwarf cools down, a convection zone gradually develops within the He envelope, growing inward and ultimately reaching the previously settled C, which, depending on the He layer thickness, is carried back to the surface. This spectral evolution transforms the pure He DB white dwarf into a DQ white dwarf, a He-dominated atmosphere white dwarf containing traces of C. In this paper, we employ the H-rich evolutionary models of Camisassa et al. (2016), which are the result of the full evolu tion of progenitor stars starting at the Zero Age Main Sequence and evolved through the central H and He burning, the Asymptotic Giant Branch (AGB) and the post-AGB phases, as calculated in Miller Bertolami (2016). These models commence the white dwarf phase with a H-dominated atmosphere with small amounts of He, C and O. However, due to gravitational settling, the heavier elements rapidly sink, causing the outer envelope of the white dwarf to become entirely composed of H. For He-dominated white dwarfs (non-DA), we employed the evolutionary models calculated in Camisassa et al. (2017), which are the result of the full progenitor evolution through the born again scenario (see Camisassa et al. 2017; Miller Bertolami & Althaus 2006, for details). These models follow the complete white dwarf evolution from the PG 1159 stage at the beginning of the white dwarf cooling sequence, all the way to very low luminosities, keeping track of the gravitational settling of the C left in the envelope by the born again evolution, and its later convective dredge-up. Although these models do not predict the correct amount of C dredge-up to the surface, we amended this issue by employing different artificial parametrizations for the C abundance that match the observed C sequence (see Sect. 2.2). It is worth noting that the cooling times considered in these models are expected to be realistic, as they take into account all relevant sources and sinks of energy and use accurate initial chemical profiles derived from the full progenitor evolution. ### The carbon enrichment As we mentioned, theoretical models suggest that He-rich white dwarfs experience spectral evolution when their outer convective zone penetrates into C-rich layers, transforming them into DQ white dwarfs. The surface C abundance should grow until the convective zone reaches its maximum depth, and then it starts to decrease gradually as a result of C recombination at the bottom of the convective zone, which makes C sink into the interior (Pelletier et al. 1986). A recent research conducted by Bedard et al. (2022) investigated the evolution of the surface C abundance under different assumptions and found that it is strongly influenced by the initial conditions and physical inputs considered in the modeling. First, a higher C abundance at the beginning of the white dwarf phase, as well as a thinner He envelope, would be reflected in a larger amount of C dredged-up by convection (see Figs. 9 and 10 of Bedard et al. 2022, respectively). In addition, the stellar mass and the efficiency of extra-mixing beyond the convective boundaries play a key role in determining the C enrichment sequence. Thus, while theoretical models predict convective C dredge-up and a subsequent decrease in C abundance due to its recombination, they cannot accurately determine the exact C abundance in cold non-DA white dwarfs. Therefore, observational data is necessary to provide insights into this matter. The white dwarfs showing C lines or molecular C in their spectra can be classified into three groups with markedly different characteristics: hot, warm and cold DQs. In Fig. 1 we plot the logarithm of the ratio between the numerical abundances of C and He ([C/He]) vs. effective temperature determinations from Koester & Kepler (2019) for the hot, warm and cold DQs, using purple squares, green crosses and light-blue asterisks, respectively. It should be noted that [C/He] for hot DQs has been set to the typical lower limit. The hot and warm DQs have considerably higher velocities, masses and C abundances than cold DQs. Therefore, it is very unlikely that the hot and warm DQs are the predecessors of cold DQs, and their origin has been attributed to stellar mergers (Dunlap & Clemens 2015; Kawka et al. 2023). Furthermore, theoretical models of convective C dredge-up in He-rich envelopes fail to reproduce C enrichment in hot and warm DQs, but they can account for the C abundances at \(T_{\rm eff}\lesssim 12\,000\) K observed in cold DQs. In a nutshell, cold DQs are thought to be the descendants of He-rich DB white dwarfs, that became C enriched when the outer convective zone penetrated into C rich layers. Koester et al. (2020) estimated the total He mass in the envelope of cold DQs, by integrating the envelope equations from Figure 1: Carbon to helium surface abundance ratio ([C/He]) vs. effective temperature for the hot DQs (purple squares), warm DQs (green crosses), and cold DQs (light-blue asterisks) taken from Koester & Kepler (2019). We note that, for the hot DQs, [C/He] has been fixed as the typical lower limit. The blue line is our parametrization for the observed [C/He] trend (the C sequence), and the orange, green and red lines are the C sequence -1, -2 and -3 dex, respectively (see text for details). Figure 2: Synthetic spectra for atmosphere models with \(\log g=8.0\) and \(T_{\rm eff}=8\,000\) K. The pure He model is displayed using a purple line. The other models assume [C/He]= -5.2 (blue line, the C sequence), [C/He]= -6.2 (orange line, the C sequence - 1 dex), [C/He]= -7.2 (green line, the C sequence -2 dex), [C/He]= -8.2 (red line, the C sequence -3 dex). the outside, using observed surface abundances as starting point. These authors found that the total He mass fraction, q(He), in the envelope of cold DQs is independent of their effective temperature. They also found that the convection zone is marginally deeper for the colder DQs. These results support the idea that the cold DQ white dwarfs constitute an evolutionary path of white dwarfs with similar characteristics, and that the gradual decrease in [C/He] is caused by the C recombination at the base of the He envelope. Even when including overshooting in their calculations, these authors obtained q(He)\(\sim-3.\), which is nearly one order of magnitude lower than predicted from stellar evolution (Miller Bertolami & Althaus, 2006; Althaus et al., 2009). Therefore, we can speculate that white dwarfs may have a range of He masses wider than expected, possibly depending on the number of thermal pulses experienced in the progenitor evolution, and that the cool DQs are originated from the white dwarfs with the thinnest He envelopes. In this paper, we present He-rich white dwarf models (non-DA) that follow the PG1159-DO-DB-DQ evolutionary connection, and experience C dredge-up as a result of convective dilution. In order to mimic the C enrichment sequence, we consider that these white dwarfs have a pure He envelope if \(T_{\rm eff}>12\,000\) K and a He envelope with C traces if \(T_{\rm eff}<12\,000\) K. We considered four different prescriptions for the [C/He] ratio when \(T_{\rm eff}<12\,000\) K. The first set of white dwarf models follows the observed C enrichment sequence in terms of the effective temperature (which we will call "C sequence" from now on). The C sequence follows a linear least square fit to the observed C sequence reported in Table 1 of Koester et al. (2020) when \(T_{\rm eff}<10\,000\) K. To parameterize the range \(12\,000\) K \(>T_{\rm eff}>10\,000\) K, we reflected this linear fit from \(T_{\rm eff}=10\,000\) K, mimicking the rise in the C abundance due to the deepening of the outer convective zone. The run of [C/He] in terms of the effective temperature of the C sequence is depicted using a blue line in Fig. 1. The cool region (\(T_{\rm eff}\lesssim 10\,000\) K) of the C sequence is approximately on the optical detection limit of C for a signal-to-noise ratio S/N = 20 according to the DQ model atmospheres of Blouin et al. (2019). The [C/He] ratio in terms of the effective temperature in the second, third and fourth sets of white dwarf models is that of the C sequence, but shifted -1 dex, -2 dex and -3 dex, respectively (orange, green and red lines in Fig. 1). Although C would still be present in their atmospheres, these three sets of models would not show C features in their optical spectra, thus being classified as DC (see Sect. 2.3). These models account for the white dwarfs with thicker He envelopes than the one obtained for the C sequence in Koester et al. (2020). When [C/He] reaches values lower than \(-10.41\), we considered a pure He atmosphere. It is important to remark that we only implemented this C enrichment prescription in the atmosphere models, but the white dwarf structure and evolution is that of Camisassa et al. (2017), where the C enrichment as a result of convective dredge-up is considered, but it does not follow the enrichment that we desire to simulate. ### The atmosphere models We calculated a grid of model atmospheres from Koester (2010) and Koester & Kepler (2019) for different \(T_{\rm eff}\), surface gravity and chemical compositions, and then integrated their fluxes in different passbands. In particular, we employed three sets of atmosphere models: one with a pure H composition, one with a pure He composition and one with a mixed He/C composition. The He/C ratio in the mixed composition models can vary according the C enrichment prescription described in Sect. 2.2. In Fig. 2 we show the synthetic spectra for atmosphere models with \(\log g=8.0\) and \(T_{\rm eff}=8\,000\) K and different chemical compositions: [C/He]=-5.2 (C sequence), [C/He]= -6.2 (C sequence - 1 dex), [C/He]= -7.2 (C sequence -2 dex), [C/He]= -8.2 (C sequence -3 dex), and pure He. The spectrum of the C sequence atmosphere has strong C features, both in the optical and UV wavelengths. On the contrary, the atmosphere models with lower [C/He] and the pure He model do not show spectral lines nor molecular Swan bands in the optical, resembling a continuum spectrum, which would be classified as a DC if observed at these wavelengths. Regarding the UV spectra, the C-contaminated models do show C1 absorption lines at \(1931\) A and \(2479\) A, which are more noticeable when the C abundance is higher. We decided to call the cold white dwarfs (\(T_{\rm eff}\lesssim 12\,000\) K) with a trace [C/He] abundance lower than the C sequence "stealth DQs", which do not present C features in the optical spectra but do show them in the UV wavelengths. Although the "stealth DQ" white dwarfs would appear as DC when observed in the optical wavelengths, their optical fluxes still differ from a pure He model, since their continuum emission is shifted to bluer wavelengths (see Fig. 2). The more abundant the trace C is, the bluer the white dwarf becomes. The origin in this shift is caused by an increase in the He\({}^{-}\) free-free absorption, which is markedly altered by the presence of C. At low effective temperatures, only a tiny fraction of He becomes ionised to provide free electrons, and so only a very small amount of He\({}^{-}\) can form in a pure He atmosphere. Because C has a much lower ionisation potential than He, only trace amounts of C are required for it to become the primary electron donor allowing for a higher number of He\({}^{-}\) ions to form, which in turn increases the He\({}^{-}\) opacity, causing the shift to bluer wavelengths. ### The population synthesis code We performed a population synthesis analysis of the white dwarf thin disk population within 100 pc, previously classified in Torres et al. (2019) to provide insights on the C enrichment sequence. We employed a Monte Carlo population synthesis code widely used in the study of the single (e.g. Garcia-Berro et al., 1999; Torres et al., 2005; Torres & Garcia-Berro, 2016; Jimenez-Esteban et al., 2018; Torres et al., 2021) and binary (e.g. Camacho et al., 2014; Cojocaru et al., 2017; Canals et al., 2018; Torres et al., 2022) white dwarf population, as well as on studies of open and globular clusters (e.g Garcia-Berro et al., 2010; Torres et al., 2015) and the Galactic bulge (Torres et al., 2018). A detailed description of the code can be found in these references. Therefore, in this paper, we will provide a brief overview of its key inputs. Synthetic main sequence stars are generated with masses randomly following an initial mass function with a Salpeter distribution, considering \(\alpha=-2.35\) and a minimum mass of \(0.4\,\mathrm{M}_{\odot}\). We assume a constant star formation rate, with a maximum age of 10.5 Gyrs. Alternative prescriptions for the star formation rate and the total age will not affect the robustness of our results, since we are interested in analyzing only the _Gaia_ bifurcation. Once each main sequence star is generated, we employ the pre-white dwarf age of the Basti database (Hidalgo et al., 2018) to see which stars had time to become white dwarfs. Then, using an IFMR we can obtain the white dwarf masses and cooling times. We considered two different IFMRs: Catalan et al. (2008) and El-Badry et al. (2018). The initial metallicity of all the synthetic stars in this study was fixed to \(Z=0.02\), since we do not expect the metallicity distribution to alter our results. Once knowing the white dwarf cooling time and mass, we employed the white dwarf evolutionary models of the La Plata group described in Sect. 2.1, to obtain its physical properties, such as luminosity, effective temperature, surface gravity and radius. We randomly assign an envelope composition to each white dwarf, either H-dominated (DA) or He-dominated (non-DA). We considered four different proportions of DA to non-DA white dwarfs: 100:0, 80:20, 75:25, and 70:30, respectively. For DA white dwarfs, we assumed that they preserve a pure H envelope through their evolution. For non-DA white dwarfs, we consider a pure He envelope if \(T_{\rm eff}>12\,000\) K and, if \(T_{\rm eff}<12\,000\) K, we can consider that either the white dwarf retains this pure He envelope or that it undergoes C enrichment. In the simulations where we consider C enrichment in the synthetic populations, we either assume the C sequence -0 dex, -1 dex, or -2 dex. We also simulated the possibility that each white dwarf may follow a different C enrichment sequence, by randomly assigning a number in the range from 0 to 3 with a uniform distribution and subtract this number to the [C/He] expected for the C sequence. We named this type of C enrichment as "C random [0:3]", because it assumes that each individual white dwarf can undergo a different C enrichment sequence. A similar C enrichment sequence was also considered, named "C random [0:2]", which assumes a random C distribution, but this time the random number assigned can vary in the range from 0 to 2, thus being this synthetic population more C enriched. Finally, in order to compare with the observational sample, we employ the atmosphere models described in Sect. 2.3 to convert the quantities from our evolutionary models into magnitudes in the _Gaia_ passbands, and we added observational uncertainties by introducing photometric and astrometric errors in concordance with _Gaia_ performance 1. Footnote 1: [http://www.cosmos.esa.int/web/gaia/science-performance](http://www.cosmos.esa.int/web/gaia/science-performance) A total of 22 synthetic white dwarf populations models were generated, varying the proportion of DA to non-DA, the non-DA envelope composition, and the IFMR. The main characteristics of these synthetic population models are described in Table 1. ## 3 Results ### Effects of carbon enrichment on the _Gaia_ color magnitude diagram Fig. 3 displays the effect of the atmospheric composition on the white dwarf evolutionary models in the _Gaia_ color magnitude diagram. The gray dots are the _Gaia_ DR3 observations of the white dwarfs within 100 pc, whereas the solid lines depict 0.58 M\({}_{\odot}\) evolutionary models under different atmospheric compositions. In this plot we can see the _Gaia_ bifurcation in the A and B branches, starting at G \(\sim 12\) and G\({}_{\rm BP}-{\rm G}_{\rm RP}>0\). We note that the model with a pure H envelope (black solid line) overlaps with the upper branch of the _Gaia_ bifurcation. Additionally, we can see that, although a 0.58 M\({}_{\odot}\) pure He model (purple line) can reproduce somewhat better the lower branch of the bifurcation than a 0.58 M\({}_{\odot}\) pure H model, a higher mass (\(\sim\)0.8 M\({}_{\odot}\)) pure He model would overlap with this branch. The blue, orange, green and red lines display 0.58 M\({}_{\odot}\) cooling sequences that mimic the C dredge-up enrichment. These cooling sequences have a pure He atmosphere if \(T_{\rm eff}>12\,000\) K and a He atmosphere with traces of C if \(T_{\rm eff}<12\,000\) K. The C enrichment sequence in each of these evolutionary models is described in Sect. 2.2. By inspecting this figure, we can see that the 0.58 M\({}_{\odot}\) models with C enrichment overlap the lower branch of the _Gaia_ bifurcation, and that the pure H and pure He models do not. As expected, the lower the trace C abundance, the closer to the pure He model appears the evolutionary sequence. We can conclude that the _Gaia_ bifurcation occurs at \(T_{\rm eff}\sim 10\,000\) K, which is roughly where we expect that the surface C abundance reaches its maximum value, regardless of the input conditions on the modeling (see Figs. 7-12 in Bedard et al. 2022). It is important to recall that, a C trace abundance just 1 dex below the observed C sequence would be undetectable in opti Figure 4: Stellar density (Hess) diagram for the 100 pc thin disk white dwarf sample in _Gaia_ DR3. The white dashed lines delimit the region in which we perform the statistical analysis. Figure 3: Evolutionary tracks of a 0.58 M\({}_{\odot}\) white dwarf for different atmospheric compositions on the _Gaia_ DR3 color magnitude diagram, together with the observed 100 pc sample (gray dots). The evolutionary model considering a pure H (He) envelope is displayed using a black (purple) line. Models with a He atmosphere with C traces, considering the C sequence -1 dex, -2 dex and -3 dex are shown using blue, orange, green and red lines, respectively. cal observations. Therefore, trace C dredge-up by convection in a He dominated atmosphere can be the source of opacity that causes the _Gaia_ bifurcation. This conjecture is supported by the fact that most of the white dwarfs in the lower branch of the _Gaia_ bifurcation are non-DA white dwarfs (65% according to Jimenez-Esteban et al. 2023). ### Population synthesis analysis In order to test the soundness of the C enrichment in He dominated white dwarfs as the mechanism responsible for creating the lower branch of the _Gaia_ bifurcation, we performed a population synthesis analysis of the 100 pc thin disk white dwarf population. The stellar density in the colour-magnitude diagram (Hess diagram) of the observed sample is shown in Fig. 4. The _Gaia_ bifurcation can easily be seen in this stellar density plot. This color-magnitude diagram was divided in 2 500 square bins, where we counted the number of stars and then normalized to the total number of observed objects to perform a \(\chi^{2}\) statistical test (Mighell 1999). Each bin has a width of 0.04 mag in \(\rm G_{BP}-G_{RP}\), and a height of 0.14 mag in \(\rm M_{G}\). To avoid contamination from white dwarfs on the Q branch and faint white dwarfs, and to only take into account the objects on the bifurcation, we performed the \(\chi^{2}\) statistical test exclusively on the region delimited by the dashed white lines. By electing this region we also avoid unc \begin{table} \begin{tabular}{|l|c|c|c|} \hline \hline Ratio DA non-DA & Non-DA composition & IFMR & \(\chi^{2}\) \\ \hline 100\% DA & - & (1) & 4.046 \\ 100\% DA & - & (2) & 8.530 \\ \hline 80\% DA, 20\% non-DA & Pure He & (1) & 3.816 \\ 80\% DA, 20\% non-DA & Pure He & (2) & 7.863 \\ 80\% DA, 20\% non-DA & C sequence & (1) & 2.972 \\ 80\% DA, 20\% non-DA & C sequence & (2) & 6.447 \\ 80\% DA, 20\% non-DA & C sequence - 1 dex & (1) & 3.191 \\ 80\% DA, 20\% non-DA & C sequence - 1 dex & (2) & 6.801 \\ 80\% DA, 20\% non-DA & C sequence -2 dex & (1) & 3.378 \\ 80\% DA, 20\% non-DA & C sequence - 2 dex & (2) & 7.019 \\ 80\% DA, 20\% non-DA & C random [0:2] & (1) & 3.042 \\ 80\% DA, 20\% non-DA & C random [0:3] & (1) & 3.147 \\ 80\% DA, 20\% non-DA & C random [0:3] & (2) & 6.693 \\ \hline 75\% DA, 25\% non-DA & C sequence & (1) & 3.116 \\ 75\% DA, 25\% non-DA & C sequence - 1 dex & (1) & 3.228 \\ 75\% DA, 25\% non-DA & C random [0:2] & (1) & 3.153 \\ \hline 70\% DA, 30\% non-DA & Pure He & (1) & 4.097 \\ 70\% DA, 30\% non-DA & C sequence & (1) & 3.154 \\ 70\% DA, 30\% non-DA & C sequence - 1 dex & (1) & 3.380 \\ 70\% DA, 30\% non-DA & C sequence - 2 dex & (1) & 3.735 \\ 70\% DA, 30\% non-DA & C random [0:2] & (1) & 3.079 \\ 70\% DA, 30\% non-DA & C random [0:3] & (1) & 3.126 \\ \hline \end{tabular} \end{table} Table 1: Main characteristics of each of our synthetic population models and the \(\chi^{2}\) values obtained from the comparison with the observed sample (see text for details). References: (1) Catalan et al. (2008); (2) El-Badry et al. (2018) Figure 5: Normalized Hess diagram for a synthetic 100 pc thin disk white dwarf population considering 80% DA, 20% non-DA, pure He composition for all non-DA white dwarfs and an IFMR of Catalan et al. (2008). Figure 6: Same as Fig. 5, but considering an IFMR of El-Badry et al. (2018). tainties arising from the star formation rate and the age assumed for the population. We compared the observed sample with a total of 22 synthetic white dwarf populations varying the proportion of DA to non-DA white dwarfs, the C enrichment prescription for non-DA white dwarfs, and the IFMR. The quantities assumed for our synthetic populations are summarized in Table 1, together with the \(\chi^{2}\) value of the comparison with the observed sample. In general lines, we find a better agreement when we include a proportion of 80% DA, 20% non-DA, although populations with 75% DA, 25% and 70% DA, 30% non-DA white dwarfs also show a good agreement. Additionally, the IFMR of Catalan et al. (2008) yields a better reproduction of the bifurcation than the one by El-Badry et al. (2018), even when only DA white dwarfs are considered. A simple synthetic population considering 80% DA, 20% non-DA, pure He composition for all non-DA white dwarfs and an IFMR of Catalan et al. (2008) is shown in Fig. 5. We note that this synthetic population does not reproduce the _Gaia_ bifurcation and has a relatively high value of \(\chi^{2}=3.816\). A similar synthetic population model, with the only difference that it considers an IFMR of El-Badry et al. (2018), is shown in Fig. 6. Considering this IFMR produces much more white dwarfs with masses \(\sim 0.8\) M\({}_{\odot}\), but does not reproduce the _Gaia_ bifurcation, being its \(\chi^{2}\) value as high as 7.863. On the contrary, considering the PG1159-DO-DB-DQ spectral evolution in all non-DA yields a much better agreement with the observations. In particular, we found the maximum agreement, i.e., the lowest \(\chi^{2}\) value, when we consider a C enrichment that follows the C sequence (\(\chi^{2}=2.972\)). In Fig. 7 we show a synthetic population model that considers 80% DA, 20% non-DA, an IFMR of Catalan et al. (2008), pure He composition for all non-DA white dwarfs with \(T_{\rm eff}>12\,000\) K and a random C enrichment prescription in the range [0:2] for all non-DA white dwarfs with \(T_{\rm eff}<12\,000\) K. We note in this figure a slight bifurcation, caused by the C enrichment, that resembles the _Gaia_ bifurcation. The comparison of this synthetic population model with the observations has \(\chi^{2}=3.042\), which is slightly higher than the value for the best-fit population (\(\chi^{2}=2.972\)). Nevertheless, this synthetic population model is more realistic as it takes into account the fact that not all He-dominated white dwarfs can follow the C sequence. If all white dwarfs with He-dominated atmospheres would follow the C sequence, there would be an enormous number of DQ white dwarfs observed, which is not the case. Synthetic population models that consist of 80% DA and 20% non-DA white dwarfs, following an IFMR of Catalan et al. (2008), and including C enrichment as the C sequence -1 dex or as C random [0:3] also provide a good fit, with \(\chi^{2}\) values of 3.191 and 3.147, respectively. Therefore, we conclude that, in general terms, we find much better agreement with the observations for synthetic population models that consider that non-DA white dwarfs follow a C enrichment, than for synthetic population models that consider that all non-DA white dwarfs have a pure He envelope. ### The mass distribution On the basis of the spectral classification of Jimenez-Esteban et al. (2023), we determined the masses of 11 455 DA and 2295 non-DA white dwarfs within 100 pc. For each white dwarf, having G and \(\rm G_{BP}-G_{RP}\), we made a linear interpolation in our evolutionary models to obtain its mass and effective temperature. In order to avoid extrapolation uncertainties, we excluded all DA WDs with masses lower than 0.239 M\({}_{\odot}\) and all non-DA WDs with masses lower than 0.51 M\({}_{\odot}\). For DA white dwarfs, we employed pure H atmosphere models, whereas for non-DA white dwarfs we determined two different masses, varying the atmosphere model. In the first estimation, we consider that all non-DA white dwarfs have a pure He atmosphere and in the second one, we considered a pure He atmosphere if \(T_{\rm eff}>12\,000\)K and a C enrichment following the C sequence -1 dex if \(T_{\rm eff}>12\,000\)K. The mass distributions obtained are shown in Fig. 8. The black histograms in both panels show the mass distributions of DA white dwarfs. It exhibits a clear peak at \(\sim 0.57\) M\({}_{\odot}\), consistent with previous studies. We find a slight excess of DA white dwarfs with masses \(\sim 0.8\) M\({}_{\odot}\) as suggested by El-Badry et al. (2018); Kilic et al. (2018), but we do also find a flattening in the DA mass distribution around 0.75 M\({}_{\odot}\). Finally, we do not find a distinctive high-mass excess near but rather some inkling near 1.1 M\({}_{\odot}\) as found in Rebassa-Mansergas et al. (2015); Hollands et al. (2018). The mass distributions of non-DA white dwarfs are shown as purple and orange histograms on the left and right panels of Fig. 8, respectively. In the left panel, we employed pure He atmosphere models for all non-DA white dwarfs, whereas in the right panel, we considered the C enrichment commencing at \(T_{\rm eff}=12\,000\) K and following the C sequence -1 dex for all non-DA white dwarfs (i.e. "stealth DQ" white dwarfs are considered). The differences arising by the adoption of these treatments can be seen when comparing the purple and orange histograms in this figure. While considering a stealth C enrichment in the atmosphere (orange histogram) leads to a mass distribution more similar to the one for DA white dwarfs, considering a pure He atmosphere (purple histogram) leads to an excess of massive non-DA white dwarfs. Indeed, a surprising massive population of non-DA white dwarfs is obtained when pure He atmosphere models are considered. Such a massive population is not consistent with an isolated evolutionary channel for the formation of non-DA white dwarfs. The total mass distributions (DA + non-DA histograms) are shown as blue histograms in both panels. It is clear that considering a pure He envelope leads to an excess of massive (\(\sim 0.75\) M\({}_{\odot}\)) white dwarfs caused by non-DA white dwarfs. Figure 7: Normalized Hess diagram for a synthetic 100 pc thin disk white dwarf population considering 80% DA, 20% non-DA, an IFMR of Catalan et al. (2008), pure He composition for all non-DA white dwarfs with \(T_{\rm eff}>12\,000\)K and a random C enrichment prescription (C random [0:2]) for all non-DA white dwarfs with \(T_{\rm eff}<12\,000\) K. ## 4 Summary and conclusions The precise observations by _Gaia_ space mission have revealed a bifurcation in two branches on the color magnitude diagram of the white dwarf population. The main goal of this paper is to provide an explanation for the B branch by investigating the effect of C contamination in the envelopes of He-rich white dwarfs. There is significant evidence that C contamination occurs on cool He-rich white dwarfs as a result of convective dredge-up, in the so-called PG1159-DO-DB-DQ spectral evolutionary channel. Theoretical models of He-dominated white dwarfs predict that an outer convective zone penetrates into C rich layers, thus leading to C-dredge up and the consequent grow of the surface C abundance. After the convective zone reaches its maximum depth, the partial recombination of C below the convective zone makes C sink back into the interior, leading to a slow and constant decrease in the surface C abundance. Although the models can predict the general behaviour of C enrichment, they cannot predict the exact amount of C dredged-up, as it depends on the initial conditions and physical inputs considered. Relying on precise He-rich evolutionary models, we simulated the C enrichment under different prescriptions. We considered that all non-DA white dwarfs have a pure He atmosphere if their effective temperature is \(>12\,000\)K, and a He atmosphere with C traces if their effective temperature is \(<12\,000\)K. The first C contamination recipe consisted in applying a least-squares fit to the C to He abundance ratio in terms of the effective temperature, observed in cold (\(T_{\rm eff}<10\,000\) K) DQ white dwarfs. For \(12\,000\) K \(>T_{\rm eff}>10\,000\) K, we reflected this linear fit (see Sect. 2.2 for details). We call this type of C contamination "the C sequence". Three other similar prescriptions for C contamination have also been considered, by shifting the surface [C/He] abundance from the C sequence by -1 dex, -2 dex, and -3 dex. It is important to remark that white dwarfs that follow the C sequence enrichment have detectable traces of C in their optical spectra, but white dwarfs that follow the C sequence enrichment -1 dex, -2 dex, and -3 dex do not. Therefore, we call these latter stars "stealth DQ" white dwarfs. We found that the presence of trace C in the atmosphere of He-rich white dwarfs has an important effect on the continuum spectrum of these stars, enhancing the absorption in red wavelengths and thus creating the _Gaia_ bifurcation. Indeed, the B branch of the _Gaia_ bifurcation is consistent with a \(\sim 0.6\) M\({}_{\odot}\) "stealth DQ" evolutionary track. This shift is primarily caused by the He\({}^{-}\) free-free absorption. The partial ionisation of trace C leads to a substantial increase in the free electron density, leading to a higher number of He\({}^{-}\) ions, amplifying the He\({}^{-}\) free-free opacity. Therefore, even though "stealth DQ" white dwarfs do not show C lines nor C molecular bands in their optical spectra, their continuum emission differs from the one expected for a pure He atmosphere. However, "stealth DQ" white dwarfs should have strong C signatures in their UV spectra, which would confirm the presence of trace C in their atmospheres. We performed a population synthesis analysis of the white dwarfs on the _Gaia_ bifurcation within 100 pc. We generated synthetic population models varying the IFMR, the DA to non-DA proportion and the non-DA atmospheric composition. We found the maximum agreement with the observations when non-DA models that take into account the C enrichment are considered in the synthetic population models. Non-DA models with a pure He atmosphere fail to reproduce the B branch, even when the flatter IFMR of El-Badry et al. (2018) is considered. Among the different C enrichment prescriptions, we do not find a significantly better agreement for any prescription in particular. The best \(\chi^{2}\) value was found when non-DA white dwarfs followed the C sequence parametrization, but the C sequence - 1 dex parametrization also yields a good agreement. Furthermore, the populations that consider that each white dwarf can randomly follow a different C enrichment parametrization also exhibit a good agreement with the observations. We wish to remark that a synthetic population model in which all non-DA white dwarfs follow the C sequence enrichment is not realistic, as it is characterized by a large number of DQ white dwarfs, not detected in observed samples. Finally, on the basis of the spectral classification of Jimenez-Esteban et al. (2023), we determined the white dwarf mass distributions of DA and non-DA white dwarfs within 100 pc. Having G and G\({}_{\rm BP}-\)G\({}_{\rm BP}\) for each individual white dwarf, we interpolated in our white dwarf evolutionary models to obtain its mass. For Figure 8: Mass distributions of the 100 pc white dwarf population spectrally classified in DA and non-DA by Jiménez-Esteban et al. (2023). Left panel: We employed a pure H atmosphere model for all DA white dwarfs and a pure He atmosphere model for all non-DA white dwarfs. The black, purple and blue histograms are the mass distributions of DA, non-DA and all white dwarfs, respectively. Right panel: Same as left panel, but considering a C enrichment following the C sequence -1 dex for all non-DA white dwarfs. The orange histogram shows the mass distribution of these non-DA white dwarfs. DA white dwarfs we obtain a peak at \(\sim 0.6\) M\({}_{\odot}\) and a flattening around \(\sim 0.8\) M\({}_{\odot}\), in agreement with other mass distributions in the literature. For non-DA white dwarfs we obtain two markedly different mass distributions, depending whether we consider a pure He atmosphere or a He atmosphere with traces of C. On the one hand, if a pure He atmosphere is employed, the mass distribution exhibits a wide peak at \(\sim 0.8\) M\({}_{\odot}\), which is not consistent with the DA mass distribution and with the standard evolutionary channel for the formation of non-DA white dwarfs (i.e. late thermal pulses). On the other hand, when a mixed He/C atmosphere with "invisible" C traces is employed, we re-obtain the typical peak at \(\sim 0.6\) M\({}_{\odot}\). In general, we find that the "stealth DQ" white dwarfs do a much better job at reproducing bifurcation in the _Gaia_ color magnitude diagram and the mass distribution for non-DA white dwarfs than pure He white dwarfs. Therefore, we propose that many of the spectrally-classified DC white dwarfs in the B branch of the _Gaia_ bifurcation could be "stealth DQ" white dwarfs having invisible trace C in their optical spectra. Furthermore, Koester et al. (2020) estimated a He mass fraction in the envelope of cold DQs that is nearly one order of magnitude lower than the one predicted by stellar evolutionary models. Hence, "stealth DQ" white dwarfs should be originated from white dwarfs with canonical He envelopes, which are consistent with the standard progenitor evolution. The trace C in some of these stars could potentially be detected in UV spectra. Since only a few DC white dwarfs have been followed up with UV spectra, we encourage observational efforts in detecting C features on the UV spectra of white dwarfs on the B branch of the _Gaia_ bifurcation. ###### Acknowledgements. The authors acknowledge the expert referee S. O. Kepler for his constructive report. MC acknowledges grant RYC2021-032721-I, funded by MCIN/AEI/10.13039/501100011033 and by the European Union NenstGeration/PPRt. MAI is supported by grant ST/V000853/1 from the Science and Technology Facilities Council (STFC). RR acknowledges support from Grant RYC2021-030837-I funded by MCIN/AEI/ 10.13039/501100011033 and by "European Union NenstGeration/PURF". This work was partially supported by the AGALR/Generalitat de Catalunya grant SCR-386/2021 and by the Spanish MINECO grant PID2020-117252GB-I00. This research made use of NASA Astrophysics Data System. This work made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/wein/gaia/dpac/consortium](https://www.cosmos.esa.int/wein/gaia/dpac/consortium)). Funding of the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
2303.10842
Each state in a one-dimensional disordered system has two localization lengths when the Hilbert space is constrained
In disordered systems, the amplitudes of the localized states will decrease exponentially away from their centers and the localization lengths are characterizing such decreasing. In this article, we find a model in which each eigenstate is decreasing at two distinct rates. The model is a one-dimensional disordered system with a constrained Hilbert space: all eigenstates $|\Psi\rangle$s should be orthogonal to a state $|\Phi \rangle$, $\langle \Phi | \Psi \rangle =0$, where $|\Phi \rangle$ is a given exponentially localized state. Although the dimension of the Hilbert space is only reduced by $1$, the amplitude of each state will decrease at one rate near its center and at another rate in the rest region, as shown in Fig. \ref{fig1}. Depending on $| \Phi \rangle$, it is also possible that all states are changed from localized states to extended states. In such a case, the level spacing distribution is different from that of the three well-known ensembles of the random matrices. This indicates that a new ensemble of random matrices exists in this model. Finally we discuss the physics behind such phenomena and propose an experiment to observe them.
Ye Xiong
2023-03-20T03:23:16Z
http://arxiv.org/abs/2303.10842v1
Each state in a one-dimensional disordered system has two localization lengths when the Hilbert space is constrained ###### Abstract In disordered systems, the amplitudes of the localized states will decrease exponentially away from their centers and the localization lengths are characterizing such decreasing. In this article, we find a model in which each eigenstate is decreasing at two distinct rates. The model is a one-dimensional disordered system with a constrained Hilbert space: all eigenstates \(|\Psi\rangle\)s should be orthogonal to a state \(|\Phi\rangle\), \(\langle\Phi|\Psi\rangle=0\), where \(|\Phi\rangle\) is a given exponentially localized state. Although the dimension of the Hilbert space is only reduced by 1, the amplitude of each state will decrease at one rate near its center and at another rate in the rest region, as shown in Fig. 1. Depending on \(|\Phi\rangle\), it is also possible that all states are changed from localized states to extended states. In such a case, the level spacing distribution is different from that of the three well-known ensembles of the random matrices. This indicates that a new ensemble of random matrices exists in this model. Finally we discuss the physics behind such phenomena and propose an experiment to observe them. localized states, disordered systems, constrained Hilbert space ## I Introductions In Anderson localization, every localized state is characterized by one quantity called the localization length \(\lambda\). It describes how much the state is localized in the real space, \(|\Psi_{E}(\vec{r})|\sim e^{-|\vec{r}-\vec{r}_{0}|/\lambda(E)}\)[1], where \(r_{0}\) is the center of the state at which the wavefunction takes the maximum value and \(E\) is the eigenenergy. This ansatz still applies thoroughly in the Anderson disordered models in recent works, from one-dimension(1D) to three dimension[2; 3; 4; 5], from the localized states to the extended states with \(\lambda=0\)[6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17] and from Gaussian orthogonal ensemble (GOE) to Gaussian symplectic ensemble (GSE) of random matrices[18; 19]. But in this article, we find that this ansatz may be broken down when an extra constraint is subjected to the Hilbert space, \(\langle\Phi|\Psi\rangle=0\). In another word, the dimension of the effective Hilbert space spanned by \(|\Psi\rangle\)s is reduced by 1 because \(|\Phi\rangle\) is not in such Hilbert space. Erasing a site at \(i\) from a lattice is an example of such a constraint. Here \(|\Phi\rangle=|i\rangle\) and the effective Hilbert space is on the rest lattice. In this article, the \(|\Phi\rangle\) we are considering is an exponentially decreasing function, which is neither the eigenstate of the Hamiltonian nor the eigenstate of the position operator. We study the eigenstates of a 1D disordered lattice in such constrained Hilbert space(CHS) and find that each state needs two localization lengths to characterize its localization. As Fig. 1(b) shown, the states first exponentially decrease faster near their centers and then their decreasing change to a slower rate. When \(\alpha=0\), as shown in Fig. 1(c), all states become extended, regardless of how strong the disorder is in such 1D system. The article is organized as follows: we first write down the static Schrodinger equation for a 1D disordered lattice in CHS. Interestingly, such an equation is changed from homogeneous to non-homogeneous. The equation is solved numerically on a finite lattice to show the eigenstates. Then the transfer matrix method is developed to take part in the non-homogeneous term. We argued that the ensemble of the orthogonal states during the QR decomposition should be reexamined to pick up the correct Lyapunov exponents. The distribution of the level spacing is also distinct from that of the traditional disordered systems. Finally, we discuss how to realize such constraint in an experiment. ## II The non-homogeneous Schrodinger equation in CHS We consider the Hamiltonian for a 1D Anderson disordered lattice with the on-site disorders, \[H_{D}=\sum_{x=1}^{N}(|x\rangle\langle x+1|+h.c.)+\sum_{x=1}^{N}\epsilon_{x}|x \rangle\langle x|, \tag{1}\] where \(\epsilon_{x}\) are the random numbers within \([-W/2,W/2]\), \(x\) is the index of the site on the 1D lattice and the integer \(N\) is the length. We take the periodic boundary condition in the calculations. It is well-known that the standard static Schrodinger equation \(E|\Psi\rangle=H_{D}|\Psi\rangle\) can be obtained by minimizing \(\langle\Psi|H_{D}|\Psi\rangle\) under the constraint \(\langle\Psi|\Psi\rangle=1\). The eigenenergy \(E\) is a Lagrange multiplier. Now as we have one more constraint \(\langle\Psi|\Phi\rangle=0\), the static Schrodinger equation changes to \[E|\Psi\rangle = H_{D}|\Psi\rangle+\mu|\Phi\rangle, \tag{2}\] \[\langle\Phi|\Psi\rangle = 0. \tag{3}\] Here \(\mu\) is another Lagrange multiplier and it should take the value to make \(\langle\Phi|\Psi\rangle=0\). In the section on the experimental setup, we will give another argument to prove the validity of the above equations. The eigenvalues \(E\) and the corresponding eigenfunctions \(|\Psi\rangle\) can be found by solving a general eigenproblem \[\begin{pmatrix}EI&0\\ 0&0\end{pmatrix}\begin{pmatrix}|\Psi\rangle\\ \mu\end{pmatrix}=\begin{pmatrix}H_{D}&|\Phi\rangle\\ \langle\Phi|&0\end{pmatrix}\begin{pmatrix}|\Psi\rangle\\ \mu\end{pmatrix}, \tag{4}\] where \(I\) is a \(N\times N\) identity matrix. It is also equivalent to an eigenproblem for a defective non-hermitian matrix[20]. We plot several eigenstates \(\Psi(x)=\langle x|\Psi\rangle\) in Fig. 1(b). The strength of disorder is \(W=10\) and the wavefunction in the constraint is \(\Phi(x)=e^{-\alpha|x-x_{0}|}\) whose center \(x_{0}\) is at the center of the ring. We use the GEM package in _octave_ to perform the calculation with much higher precision so that the results have not been smeared out by the round-off errors. For the sake of comparative analysis, we first plot the traditional eigenstates of \(H_{D}\) in Fig. 1(a). Every state is decreasing exponentially over the whole chain at a constant rate, \(\frac{1}{\xi(E)}\). In Fig. 1(b), the wavefunctions first decrease rapidly around their centers and then change to decrease/increase at a slower rate. After fitting the data, we find this slower rate is \(\alpha\), which is independent of the eigenenergies \(E\) and the central positions of the wavefunctions. When \(\alpha=0\), as shown in Fig. 1(c), all states become extended, regardless of the strength of the disorder. In traditional 1D disordered models, as the states \(\Psi_{0}(x)\)s are exponentially localized, the local environments at a far distance should not affect the states. The overlaps with \(\Phi(x)\), \(\langle\Phi|\Psi_{0}\rangle\sim\max\{e^{-\alpha d},e^{-\frac{1}{4}d}\}\), are exponentially small but are not exactly zero. Here \(d\) is the distance between the centers of \(\Psi_{0}(x)\) and \(\Phi(x)\). To make such overlap be exactly zero, as the constraint requires, the state \(\Psi(x)\) must be disturbed from \(\Psi_{0}(x)\) in the scale of \(\max\{e^{-\alpha d},e^{-\frac{1}{4}d}\}\). This is confirmed in our calculations as the Lagrange multiplier \(\mu\sim\max\{e^{-\alpha d},e^{-\frac{1}{4}d}\}\). So an exponentially small factor \(\mu\) in the non-homogeneous Schrodinger equation Eq. 2 is important and can affect the character of localization in the far distance. This is distinct from many equations in which an exponentially small term is ignorable. To confirm this conclusion, we employ the transfer matrix method to study the localization lengths of such a system. ## III The transfer matrix method in CHs The traditional transfer matrix method is used to calculate the Lyapunov exponents of the transfer matrix that is relating the wavefunctions on the \(L\)th and the \((L+1)\)th slices with those on the 0th and the 1st slices[21; 22; 23]. It is subjected to the homogeneous Schrodinger equation and only the diagonal elements of the \(R\) matrix during the \(QR\) decomposition are interested[24]. Here we first extend the method to the non-homogeneous case and argue that the orthogonalized vectors in the \(Q\) matrix should be inspected first. We redefine the transfer matrix \(M_{x}(E)\) as the square matrix in \[\begin{pmatrix}\Psi_{x+1}\\ \Psi_{x}\\ \Phi_{x+R+1}\end{pmatrix}=\begin{pmatrix}E-\epsilon_{x}&-1&\mu e^{\alpha R} \\ 1&0&0\\ 0&0&e^{-\alpha}\end{pmatrix}\begin{pmatrix}\Psi_{x}\\ \Psi_{x-1}\\ \Phi_{x+R}\end{pmatrix}, \tag{5}\] where \(\Psi_{x}\) is the wavefunctions on the \(x\)th slice (the \(x\)th lattice in our 1D model), \(E\) is the energy, \(\Phi_{x+R}\) is the wavefunction of \(|\Phi\rangle\) on the \((x+R)\)th site. Here \(\Phi\) is offset \(R\) sites so that \(\mu e^{\alpha R}\) is in the scale of the unit even when \(\mu\) is exponentially small. As \(\mu\) has been determined by the constraint but the exact value of \(\mu\) is not important in this treatment, the constraint is not needed to be written down explicitly in the transfer matrix. In another word, after \(\mu\) is determined by the equation of the constraint, we can always choose a proper offset \(R\) to scale the term \(\mu e^{\alpha R}\). The transfer matrix \(M=\prod_{x=1}^{L}M_{x}\) can be \(QR\) decomposed into the product of a matrix \(Q\) having orthonormal columns and an upper triangular matrix \(R\). The diagonal elements of \(R\) matrix are \(e^{L\gamma_{i}}\), where \(\gamma_{i}\) are the Lyapunov exponents and the orthonormal columns in \(Q\), \(\vec{z}_{i}\), are the corresponding typical states. Due to the variations of the disordered configurations, each \(\vec{z}_{i}\) actually forms a group of ensemble \(\{\vec{z}_{i}\}\). If the typical state \(\vec{z}_{i}\) does exist for Figure 1: (a) The eigenstates in the real space for the Anderson disordered model. (b) and (c) Those for the model with CHS. For the sake of clarity, only 4 typical states are plotted and the other states are similar. Each state in (b) has two localization lengths while it is an extended state in (c). The length of the ring is \(N=200\), the strength of the disorder is \(W=10\) and \(\alpha\) is 0.1 in (b) and 0 in (c). the disordered chain, one must find consistent results regardless of viewing the chain from the left to the right or from the right to the left. So the ensemble of \(\{\vec{z}_{i}\}\) for the Lyapunov exponent \(\gamma_{i}\) must match or at least overlaps with the ensemble of \(\{\vec{z}_{i}^{\prime}\}\) for the Lyapunov exponent \(-\gamma_{i}\) of the transfer matrix \(M^{\prime}=\prod_{x=L}^{1}M_{x}^{\prime}\). Here the transfer matrix \(M^{\prime}\) is counting from \(x=L\)(the right) to \(x=1\) (the left) and \(M_{x}^{\prime}\) is the same as \(M_{x}\) except that the last element is replaced by \(e^{\alpha}\) because the decreasing \(\Phi(x)\) changes to the increasing function as reversing \(x\). The three Lyapunov exponents of the transfer matrix \(M\) are \(\alpha\) and \(\pm\gamma(E)\), where \(\pm\gamma(E)\) are the exponents of the disordered chain in the absence of constraint. So it seems that the redefinition of the transfer matrix only enrolls an additional exponent \(\alpha\) which can be attributed to the asymptotic behavior of \(\Phi(x)\). But as shown in Fig. 2(a), some of the exponents are untrue. We first plot the exponents as the function of the energy \(E\) for the transfer matrix \(M\) and the transfer matrix \(M^{\prime}\), respectively. For each exponent \(\gamma_{i}\) for \(M\), there is a \(-\gamma_{i}\) exponent for \(M^{\prime}\). Then the ensemble of the corresponding typical states \(\vec{z}_{i}\) for such pairs of exponents are plotted. As these typical states are normalized, only the first two components of the states are plotted. When \(\alpha>\frac{1}{\lambda(E)}\), it is shown that the ensembles of \(\vec{z}\) and \(\vec{z^{\prime}}\) for the pair (\(\alpha\), \(-\alpha\)) do not overlap with each other. So \(\frac{1}{\alpha}\) is not the localization length of the model, because if the corresponding typical state in \(Q\) matrix does exist, it should be observed independent of viewing from the left or the right. In such case, each state is only characterized by one localization length, \(\lambda(E)\). This conclusion is reasonable because in the extreme case \(\alpha\rightarrow\infty\), \(\Phi(x)\) is a delta function so the constraint is equivalent to the elimination of the center site from the lattice. This will only change the ring with \(L\) sites to a chain with \(L-1\) sites. All bulk localized states are not affected by such boundary condition so the localization behavior of the model is the same as that of a traditional disordered model. But when \(\alpha<\gamma(E)\), in Fig. 2(b), the ensembles overlap so that both \(\frac{1}{\alpha}\) and \(\lambda(E)\) are the localization lengths simultaneously. ## IV The distribution of the level spacing In the 1D Anderson disordered lattice, the distribution of the level spacing is a Poisson curve \(\sim e^{-s}\), where \(s\) is the normalized nearest neighboring eigenenergy spacing. The maximal distribution appears at \(s=0\), which indicates that all eigenstates are localized and cannot repulse the other states in the energy. When the states become extended, such repulsion exercises so that the distribu Figure 3: The distributions of the normalized level spacing \(s\) for \(\alpha=0,0.01,0.03,0.1\), respectively. The inset shows the same distributions in logarithmic axes. The straight line, \(f(s)\sim s^{0.64}\), is a guide to the eye. The length of the ring is \(N=600\), the strength of disorder is \(W=10\) and \(10^{7}\) samples are considered in the statistics. tion is changed to the Wigner distribution[25; 26; 27] \[P(s)\sim s^{\beta}e^{-\frac{\pi\beta^{4}}{4}s^{2}}, \tag{6}\] where \(\beta\) is 1, 2 and 4 for the orthogonal, the unitary and the symplectic ensembles, respectively. In Fig. 3, we show such distributions \(P(s)\) when the constraint is subjected. When \(\alpha=0.1\), although the eigenstates are still characterized by two localization lengths, the distribution is still a Poisson function. As \(\alpha\) is decreasing, one of the localization lengths becomes longer and longer and dominates the asymptotic behaviors of the states. So they become more and more extended. This is confirmed by the fact that the distribution is distinct from the Poisson function and becomes more and more Wigner-like. When \(\alpha=0\), all states should become extended and the distribution becomes a Wigner function exactly. But interestingly, as the inset shows, the \(\beta\) of such Wigner function is 0.64, a value distinct from those of the three well-known ensembles. This indicates that the mechanics to delocalize the states in this model are different from that of the competition between the disordered potential and the kinetic energy. We have no more discussions on the new ensemble at this stage. But we think the emergence of the new ensemble is related to the non-hermitian random matrix at an exceptional point because Eq. 4 is equivalent to the eigenproblem of a non-hermitian defective matrix. ## V The experiments to observe the phenomena The CHS can be realized in microscopic electronic systems, macroscopic mechanical systems, or optical systems. We consider a ring of quantum dots (resonant cavities) with effective hoppings between the nearest neighboring dots (cavities). From now on, we will only focus on the system of quantum dots but the mechanics are the same in the other systems. The disorder is introduced by the on-site energies of the dots. There is another dot at the center of the ring. Such a dot is weakly connected with all dots on the ring by an effective hopping \(t^{\prime}\). The energy level of the center dot is detuned to be \(\delta\) away from the energy \(E\) we are considering. After projecting out the level of the center dot, the Hamiltonian for the dots on the ring is \[H_{\rm LR}=H_{D}+\sum_{x,x^{\prime}}\frac{t^{\prime 2}}{\delta}|x\rangle \langle x^{\prime}|, \tag{7}\] where \(H_{D}\) is the disordered Hamiltonian and the later term describes the effective hoppings between any pair of quantum dots mediated by the center dot. Such long-range hopping term[28; 29] can also be written as \(\frac{t^{\prime 2}}{\delta}N|\Phi\rangle\langle\Phi|\), where \(N\) is the number of dots on the ring and \(|\Phi\rangle=\frac{1}{\sqrt{N}}\sum_{x}|x\rangle\). When \(\frac{t^{\prime 2}}{\delta}N\) is a huge number as compared to the energy scale of \(H_{D}\), the spectrum of \(H_{\rm LR}\) are composed by a band with \(N-1\) levels and one level \(|\Phi\rangle\) at the huge energy. Because the eigenstates of a hermitian Hamiltonian are orthogonal to each other, the states in the band, \(|\Psi\rangle\)s, must be orthogonal to the state \(|\Phi\rangle\). And such huge gap between the single level and the band ensures us to ignore the single level when the energy we are interested in is in the scale of the band energy. Now the constraint \(\langle\Psi|\Phi\rangle=0\) must be subjected because the Hilbert space of the band states does not have the state \(|\Phi\rangle\). The above argument can be written down as \[H_{\rm LR}\sim(1-P)H_{D}(1-P)+\frac{t^{\prime 2}}{\delta}NP|\Phi\rangle \langle\Phi|P, \tag{8}\] where the first-order perturbation theory pronounces that the off-diagonal hoppings between the states in the band and the single level can be ignored and \(P=|\Phi\rangle\langle\Phi|\) is the project operator on the single level. Such perturbation is approaching to be exact for huge gap. So the static Schrodinger equation \(H_{\rm LR}|\Psi\rangle=E|\Psi\rangle\) becomes \[H_{D}|\Psi\rangle-\langle\Phi|H_{D}|\Psi\rangle|\Phi\rangle=E|\Phi\rangle, \tag{9}\] which is non-homogeneous. This equation is the same as Eq. 2 which is derived from the variational functional method. One can observe the transport behaviors on the ring. As all states are extended, one may find a finite conductance in such a 1D strong disordered lattice. Figure 4: The electron on the quantum dots (blue points) can jump to the nearest-neighboring dots or the central quantum dot (black point). The thin lines indicate the hoppings between the dots. Conclusions and discussions We have studied the Anderson localization for a 1D disordered lattice in CHS. The constraint is \(\langle\Psi|\Phi\rangle=0\) and \(|\Phi\rangle\) is a state that is decreasing in the space with the rate \(\alpha\). When \(\frac{1}{\alpha}>\lambda(E)\), where \(\lambda(E)\) is the native localization length of the state, the state will need two localization lengths to characterize its shape in the space. It will first exponentially decrease with a faster rate, \(\frac{1}{\lambda(E)}\), near its center and then changes to a slower rate, \(\alpha\). When \(\alpha=0\), in every state, the slower rate dominates. So they become extended in such a disordered system. Here we present another argument to prove the above conclusions. The solution \(\Psi(x)=\langle x|\Psi\rangle\) of the non-homogeneous differential equation, Eq. 2, is \[\Psi(x)=\Psi_{0}(x)+\int dx^{\prime}G(x,x^{\prime})\Phi(x^{\prime}), \tag{10}\] where \(G(x,x^{\prime})\) is the green function \((E-H_{D})G(x,x^{\prime})=\delta(x-x^{\prime})\) and \(\Psi_{0}(x)\) is the general solution, \((E-H_{D})\Psi_{0}(x)=0\). The green function is \(G(x,x^{\prime})\sim e^{-\frac{1}{\lambda(E)}|x-x^{\prime}|}\) after averaged over the disorder configurations. After Fourier transformation, the convolution becomes the product of \(G(k)\) and \(\Phi(k)\). When \(\alpha=0\), \(\Phi(k)\) is a delta function at \(k=0\) and so as for the product of \(G(k)\) and \(\Phi(k)\). As a result, \(\Psi(x)\) becomes an extended state. In this article, we only consider the case \(\Phi(x)=e^{-\alpha|x-x_{0}|}\). It will be interesting to consider a staggered function, a random function, or a power-law decreasing function. One may find new localizations such as power-law localized states, real-space distinguished topological bands, or disordered ensembles with new \(\beta\)s in these systems. The mechanics of the model may be applied to many-body localization[30]. There are a series of conserved local quantities in such systems. They can be considered as the native constraints. So the rates at which these conserved quantities are localized should provide an upper bound on the localization lengths. The systems in CHS are also junctions. They will connect the homogeneous Schrodinger equation for a long-range hopping Hamiltonian \(H_{\rm LR}\) with a non-homogeneous Schrodinger equation for a short-range hopping Hamiltonian \(H_{D}\). They will connect the eigenproblem for a hermitian matrix with that for a non-hermitian matrix at the exceptional point. A new route to understand these problems may be found through this junction. Acknowledgments.-- The work was supported by the National Foundation of Natural Science in China Grant Nos. 10704040.
2306.04031
Certified Deductive Reasoning with Language Models
Language models often achieve higher accuracy when reasoning step-by-step in complex tasks. However, even when arriving at a correct final answer, their rationales are often logically unsound or inconsistent. This is a major issue when reliable reasoning traces are needed, such when fine-tuning on model-generated reasoning for self-improvement. To tackle these issues, we introduce a class of tools for language models called \emph{guides}, that use state and incremental constraints to guide generation. A guide can be invoked by the model to constrain its own generation to a set of valid statements given by the tool. In turn, the model's choices can change the guide's state. We show how a general system for logical reasoning can be used as a guide, which we call \textsc{LogicGuide}. Given a reasoning problem in natural language, a model can formalize its assumptions for \textsc{LogicGuide} and guarantee that its step-by-step reasoning is sound. In experiments on PrOntoQA, ProofWriter and Syllogism Validity datasets, \textsc{LogicGuide} significantly improves the performance of GPT-3, GPT-3.5 Turbo and LLaMA (accuracy gains up to 35\%), while drastically reducing \emph{content effects} -- the interference between unwanted prior assumptions and reasoning, which humans and language models suffer from. We then explore bootstrapping GPT-3.5 Turbo and LLaMA using their own reasoning traces. We find that LogicGuide is critical: by training only on certified self-generated reasoning, models can self-improve, avoiding learning from their own hallucinations. Moreover, bootstrapped models enjoy significant boosts on ReClor, a challenging real-world reasoning dataset, even when not relying on formalization at inference time.
Gabriel Poesia, Kanishk Gandhi, Eric Zelikman, Noah D. Goodman
2023-06-06T21:49:00Z
http://arxiv.org/abs/2306.04031v2
# Certified Reasoning with Language Models ###### Abstract Language models often achieve higher accuracy when reasoning step-by-step in complex tasks. However, their reasoning can be unsound, inconsistent, or rely on undesirable prior assumptions. To tackle these issues, we introduce a class of tools for language models called _guides_ that use state and incremental constraints to guide generation. A guide can be invoked by the model to constrain its own generation to a set of valid statements given by the tool. In turn, the model's choices can change the guide's state. We show how a general system for logical reasoning can be used as a guide, which we call LogicGuide. Given a reasoning problem in natural language, a model can formalize its assumptions for LogicGuide and then guarantee that its reasoning steps are sound. In experiments with the PrOntoQA and ProofWriter reasoning datasets, LogicGuide significantly improves the performance of GPT-3, GPT-3.5 Turbo and LLaMA (accuracy gains up to 35%). LogicGuide also drastically reduces content effects: the interference of prior and current assumptions that both humans and language models have been shown to suffer from. Finally, we explore bootstrapping LLaMA 13B from its own reasoning and find that LogicGuide is critical: by training only on certified self-generated reasoning, LLaMA can self-improve, avoiding learning from its own hallucinations. ## 1 Introduction Consider a language-based autonomous agent tasked with managing a user's calendar and email. The user might want to specify general principles on how the agent should behave, such as "if the email is from any of my managers, you must send me a notification", and important pieces of information such as "I'm part of the research team", or "Grace manages the research team". When the agent analyzes an email and decides what actions to take, we'd like it to respect the given instructions. Doing so might require _reasoning_: the agent should conclude that an email from Grace warrants a notification, even if that wasn't said explicitly. How should the agent make such conclusions? A Large Language Model (LLM), such as GPT-3 [2] or PaLM [3], can in principle take in the given instructions and context, choose actions to take and, before each action, ask itself "is this permitted?" The answer might require making chains of inferences based on the user's input. For this class of problems, LLMs have been shown to dramatically benefit from chain-of-thought reasoning [30; 26]. Empirically, allowing LLMs to generate reasoning steps before their answer consistently yields higher accuracy across a wide range of tasks [26]. Qualitatively, reasoning steps provide "an interpretable window" into how the model arrived at the answer [30], in contrast to an opaque guess. But much like humans, language models can also produce unsound reasoning: even after correctly interpreting a problem, they can take logically invalid inference steps, or produce a guess at the final answer that is not supported by their own rationale [22]. Moreover, LLMs have also been observed to show human-like _content effects_ in reasoning: their accuracy drops significantly when asked to reason with assumptions that contradict their prior beliefs [6]. While a natural language rationale can be highly desirable for purposes of interpretability, it is not enough to ensure a high degree of reliability. How can we avoid unsound, perhaps dangerous, inferences? This question illustrates the central concern that led to the development of formal logic. In a formal system, valid inferences can be generated mechanically with logical deduction rules. Automated reasoning tools, such as Z3 [7], solve formalized problems automatically. This level of reliability also brings its costs. Using these tools requires users to fully formalize their problem, but some desiderata might be impractical to express in logic (e.g., "if the email _looks_ important, you _can_ create a to-do item for me"). Moreover, they do not provide a simple, equivalent "interpretable window" into how conclusions were derived: even the basic inference principles that they employ, such as resolution, are fundamentally hard to describe1. If the user's rules were incorrectly formalized, they will have difficulty understanding why the system is misbehaving. This can undermine trust just as invalid inferences can. Footnote 1: When introducing the first-order resolution principle, Robinson remarks that it is powerful in two senses: it is logically complete, and “in the psychological sense that it condones single inferences which are often beyond the ability of the human to grasp” [21] In this paper, we aim to allow LLMs to rely on trusted formal deductions during generation by building on the recent paradigm of _tool use_ in language models [4, 23]. In prior work, LMs invoke external tools by generating special sequences, intercepted by the decoding algorithm. They can generate inputs (e.g., a mathematical operation, or search query) and receive the tool's output as if it was their own generation. We generalize this input-output paradigm to a broader class of LM tools we call _guides_. When a guide is invoked by the model using a special delimiter, the tool computes a space of valid outputs, as illustrated in Fig. 1. We then employ _constrained decoding_[20] to ensure the model will incrementally generate one of the valid outputs. Guides thus enable a more declarative interaction between tool and model: the guide declares a set of possible sequences, while the model brings prior expectations used to generate one among them. A guide can maintain state: its next valid outputs might depend on the sequence of choices up to that point. We use this framework to allow language models to locally constrain generation to a set of valid statements determined by an external logical tool. To that end, we leverage the Peano theorem-proving environment [19] to construct LogicGuide, which an LM can use to formalize its assumptions, set proof goals and make sound inference steps. The model can interperse formal reasoning and natural language during generation. When the language is conditioned on previous formal steps, it is highly reliable, since the generations allowed by LogicGuide are formally certified. We validate our method on three existing natural language reasoning datasets, PrOntoQA [22], ProofWriter [27], and Syllogistic Validity [6]. We also follow the format and methodology of PrOntoQA to introduce a new dataset, DeontiQA, where problems require reasoning using deontic logic principles to determine whether an action is permissible, obligatory or forbidden. When used with few-shot prompting, we find that LogicGuide significantly improves the accuracy of OpenAI GPT-3 and GPT-3.5 Turbo, and the open LLaMA 13B model. Moreover, models using LogicGuide have drastically lower content effects: we show this both with PrOntoQA and in the Syllogism Validity dataset [6], used previously to measure content effects in LLMs. Self-improvement methods, such as the Self-Taught Reasoner (STaR; [35]), improve reasoning by fine-tuning a model on the rationales of its successful answers. In the tasks we analyze here, there's a high probability of guessing the correct answer (e.g. true or false, so at least 50%), hence STaR Figure 1: A language model can invoke a guide tool, such as our LogicGuide, to perform certifiable generations. Here, when the model decides to generate an infer block, it is constrained to generate one of the formal deductions established by an external theorem-proving environment. alone fails to yield meaningful improvements. LogicGuide allows us to differentiate cases where the model arrived at a certified conclusion and when it generated an unsupported guess. We show that running STaR using only certified solutions is highly effective: LLaMA 13B enjoys accuracy gains of up to 17% on PrOntoQA, while naive STaR -- fine-tuning on all generations that led to the correct answer -- fails to provide improvements. Altogether, guides provide a promising approach for combining the trustworthiness of formal reasoning with the flexibility of language models. ## 2 Related Work As reviewed above, our work builds on two classes of systems for reasoning: language models, which can reason flexibly in natural language, and formal reasoning systems, which rely on formal logic to derived certified inferences. To interface these two systems, we leverage recent methods for constrained decoding from language models. Specifically, we employ Constrained Semantic Decoding (CSD) [20], an algorithm that guarantees a valid sample by construction. CSD does not require full access to the model, only the ability to bias its logits. This allows us to use GPT-3 [2] and GPT-3.5 Turbo models through their public APIs, as well as a locally-run LLaMA model [28]. Other constrained decoding methods, such as NeuroLogic Decoding [15] and NeuroLogic A*esque decoding [14], have been proposed to enforce lexical (but not richer) constraints at inference time. LLMs have been increasingly used as agents interacting with other systems, by both using tools to delegate computation or to trigger external actions [23; 34; 33; 12; 24]. In prior work, LLMs can provide inputs to an external tool, such as a search query [23] or a mathematical operation [4], and receive the output in the decoding stream. Our framework of guides (SS3) can be seen as a generalization of this paradigm, where the tool defines a space of outputs and he LM chooses one using its own probabilities. Our approach to certifying reasoning from language models relies on grounding their inferences in an interactive theorem prover, Peano [19]. Similar to other popular theorem proving languages like Lean [8] and Coq [1], Peano uses dependent type theory as its logical foundation. Most theorem proving environments are designed for the _verification_ of given proofs. In contrast, and of special interest to us, Peano is designed to aid in _generation_ by exposing a finite action space. Many other recent works have integrated LLMs and interactive theorem provers in the context of formal mathematical reasoning. Recent work on autoformalization has shown that LLMs can be effective in translating informal to formal mathematics [32]. This idea is related to how we use LLMs to formalize their assumptions given in natural language, though our end goal is to produce reliable natural language rationales rather than formal proofs alone. ## 3 Certified Reasoning with Guides We now develop the framework of guide tools, and discuss how to implement a guide for general logical reasoning. We then remark how guides can overcome computational limitations of Transformer models, and briefly discuss other potential guide tools. ### Guide functions Previous work in tools for language models assumed an interface where the model would provide inputs to the tool and receive back a single output, conditioning on this output for further generation. For instance, [4] allowed the model to rely on a calculator by generating a string such as $51*12=. At this point, the decoding algorithm would execute the operation externally and copy the result as if it was generated by the language model. Here, our main goal is to leverage a trusted external tool to answer the question: "what logical inferences could be made next?" In principle, given a tool that can answer this question, we could copy its entire output into the decoding stream. However, this set can be very large, and for logical inference the set grows larger as each inference allows many new ones to be reached. We could instead randomly choose a single possibility from the set, but this would ignore the substantial prior knowledge of the language model, often yielding a useless choice. Our key idea will be to use constrained decoding so that, when the tool is invoked, the language model _itself_ will generate one of the valid inferences. More generally, a guide tool defines a set of valid generations given previous choices. Formally, let \(S=\Sigma^{*}\) be the set of strings in the guide's alphabet \(\Sigma\), with \(S^{*}\) denoting the set of finite sequences of such strings. We define a guide \(g\) to be a function \(g:S^{*}\rightarrow\mathcal{P}(S)\) that takes a sequence of previously generated strings and returns a regular set of allowed next generations. Our idea is to leverage \(g\) at specific points when sampling from an autoregressive language model \(P_{LM}(\cdot)\) so that when the guide is invoked at a prefix \(s_{0}\), we will sample a continuation from \(P_{LM}(s|s_{0})\) that belongs to the set allowed by \(g\) when given the previous guided generations in the prefix \(s_{0}\) (e.g., previous valid logical inferences). ### From guides to completion engines Given any guide function \(g\) as above, we want to provide a tool for LMs that, once invoked by some special sequence, will constrain the immediate subsequent output using \(g\). To that end, we employ the Constrained Semantic Decoding algorithm (CSD; [20]). CSD can constrain the output of the underlying model at inference time by leveraging a user-defined completion engine. We briefly review what a completion engine requires and explain how we define one to implement guide tools. Background: Completion Engines and CSDA _completion engine_ is defined as a function \(c:\Sigma^{*}\to RE(\Sigma)\), taking a string in the vocabulary \(\Sigma\) and returning a _regular expression_ over \(\Sigma\). The idea is that \(c\) can dynamically compute a set of valid continuations: once the LM generates enough tokens to maximally match the current regular expression, CSD will call \(c\) again to determine what can follow. The algorithm handles the technical complication that \(c\) and the LM often have different vocabularies. For instance, in our case, the LMs we use have vocabularies with multiple tokens that either contain or overlap with the special strings we use to trigger the guide tool. CSD allows our definition of guide tools to be agnostic to the underlying model's vocabulary. Guide Tool as a Completion EngineWe implement the guide tool for a guide function \(g\) as a completion engine. First, we arbitrarily pick special strings \(t_{1}\) and \(t_{2}\): \(t_{1}\) will trigger the tool and \(t_{2}\) will mark the end of the "trusted" generation (e.g., we later use \(t_{1}=\) "[[" and \(t_{2}=\) "]]"). The guide completion engine takes a string \(s\), containing the model's partial output so far. It first checks whether \(s\) ends in \(t_{1}\) to decide if the tool has been invoked. If not, it returns a regular expression matching any string not containing \(t_{1}\) and ending in \(t_{1}\). As a result, CSD will allow the model to freely generate text until it invokes the tool by generating \(t_{1}\). If \(s\) does end in \(t_{1}\), then the completion engine will collect all blocks of text between occurrences of \(t_{1}\) and \(t_{2}\) in \(s\) into a sequence \(S_{p}\), and return a regular expression matching the strings in \(g(S_{p})\) as its output. We expect the strings in \(g(S_{p})\) to end in \(t_{2}\) when the guide wants to allow the model to return to free generation. CSD will then constrain the LM to generate one of the strings matching \(g(S_{p})\), and it will handle the complication that the model's tokenizer might overlap \(t_{1}\) and the beginning of some generation in \(g(S_{p})\). With the resulting completion engine wrapping \(g\), CSD can essentially sample from any given LM while constraining the outputs between the delimiters \(t_{1}\) and \(t_{2}\) to come from the guide. In this framework, it is easy to define simple input/output tools by having \(g\) return a singleton set. It also allows us to design richer LM tools, as the LogicGuide we introduce next. We describe several other guides in the Appendix, leaving their exploration for future work. ### The LogicGuide We now construct LogicGuide, a guide tool for language models to perform externally certified reasoning. Our logical backend of choice is Peano [19], a theorem-proving language together with an environment for incremental proof generation. The main feature of Peano we rely on is that its environment provides a finite action space. Thus, given a partial argument, Peano gives us a list of valid inferences that can be made in a single step given the background theory, assumptions (possibly added by the model) and previous inferences. Our idea is to use this list to guide the model whenever it decides to derive a logical conclusion. While Peano makes the guide implementation particularly simple, other theorem-proving environments might be adaptable for this purpose. We use the guide delimiters "[[" and "]]", and implement a guide function that accepts strings with the format action:parameter. We define 6 actions (exemplified in Fig. 2) that allow the model to (1) formalize its assumptions (object, prop, relation, axiom), (2) set a goal (goal), and (3) perform logical inferences (infer). For (1) and (2), the guide returns constraints that ensure the model's formalization to be _syntactically_ valid; since these actions are the boundary between natural and formal language, it is impossible to guarantee that they are _semantically_ valid in the sense that the model has properly formalized the hypotheses. What _is_ certifiable is that the logical inferences (action type 3) follow from the model's formalization, i.e. its inference are valid given its explicit assumptions. (SS4 provides empirical evidence that formalization errors rarely lead to a wrong conclusion; most often they make the model unable to prove or disprove the goal). Using the guideIn the typical setup for logical reasoning problems [27, 22], the input contains a context (the set of assumptions) and a goal, and the few-shot examples additionally contain a rationale and the final answer (typically whether the goal is true or false). In our experiments, we demonstrate how to use the guide by creating few-shot examples with the proper LogicGuide action annotations (as in Fig. 2). Specifically, we add a section before the rationale named "Formalized context" where we repeat the assumptions in the scenario while marking objects, properties and relations, and formalizing each of the assumptions into an [[axiom:]] block. We do the same for the goal. Then, we prepend each reasoning step with the appropriate [[infer:]] action. In this way the model is encouraged to first generate a formal inference step and only then its natural language counterpart. We include all of our prompts in the Appendix. ### Guides can overcome computational limitations of Transformers It's unsurprising that Transformer models trained on (often imperfect) human data generate logical inconsistencies or generalize in unpredictable ways. Below we show that LogicGuide can helpfully address these practical failures. Guides also provably expand the computational power of Transformers. As a concrete example, consider the Parity problem: given a binary input string, output the number of \(1\) symbols modulo \(2\). The following result is established in [10]: **(Corollary 2 in [10])**.: _Transformers with hard attention cannot model Parity. More precisely, no fixed-size decoder-only Transformer with hard attention can process an input string and generate a character indicating whether its input length was odd or even._ Computational limitations like this directly translate to corresponding limits to performing logical reasoning. For instance, the Parity problem corresponds to the problem of processing iterated negations in propositional logic, and thus is a sub-problem of reasoning in any other more expressive logic. Yet Parity is a trivial problem on a traditional computer. Therefore, with the protocol from SS3, a Transformer model could rely on an external counting guide as long as it was able to provide its input string within the guide delimiters. The Echo problem summarizes this required capability: given an input string of arbitrary length, followed by a special delimiter, copy the input into the output. We observe that the following holds: **Proposition**.: _There exists a fixed-size, decoder-only Transformer with hard attention that can model Echo for unbounded inputs._ Figure 2: Example solution of gpt3.5-turbo using LogicGuide in a problem from ProofWriter. The model’s generation starts at the “Formalized context”. This example shows all 6 actions we implement in the LogicGuide: object declares a particular entity, prop and relation mark unary and binary predicates, respectively; axiom denotes propositions assumed to hold (possibly implications), goal sets a target to be proven or contradicted, and infer generates logical inferences. We show this by manually constructing this Transformer network in the Appendix. A simple modification allows this network to be combined with a guide to solve Parity using the protocol defined in SS3. This establishes the following: **Corollary**.: _Transformers with hard attention and a guide tool can model Parity._ The idea extends immediately to other example problems used in the literature to analyze computational properties of Transformers. This simple observation points out that guided (or even regular tool-using) Transformers have fundamental differences as models of computation. As language model tools become more prevalent in practical deployments, future theoretical analyses cannot ignore reliance on external tools; there is an exciting opportunity for future work in understanding these hybrid systems, where computation alternates inside and outside of neural networks. ## 4 Experimental evaluation We now evaluate the effectiveness of LogicGuide in improving language models on reasoning tasks. We focus on three research questions: Rq1:_Does LogicGuide improve the accuracy of language models in multi-step reasoning?_ We investigate this question in SS4.1 using OpenAI GPT-3, OpenAI GPT-3.5 and LLaMA 13B, and three multi-step reasoning datasets (PrOntoQA [22] and ProofWriter[27], as well as the DeontiQA problems we introduce). Rq2:_Does LogicGuide reduce_ content effects _in language model reasoning?_ SS4.2 explores this, leveraging the PrOntoQA False Ontology split and the Syllogism Validity dataset [6]. Rq3:_Can an LLM self-improve using LogicGuide by learning from its own solutions?_ In SS4.3, we explore improving a LLaMA 13B model using the Self-Taught Reasoner method. ### Impact of LogicGuide on reasoning accuracy DatasetsWe use two recent natural language reasoning datasets: PrOntoQA [22] and ProofWriter [27]. Both datasets contain generated reasoning problems with (1) a list of assumptions (e.g. "Every dog is a mammal", or "Sam is a dog"), and (2) a proposition that can be reasoned about from the assumptions (e.g. "Sam is a mammal?"). In both datasets, the goal is to answer the question with either true or false. Problems are categorized by how many reasoning "hops" the solution needs (1 to 5). In addition, PrOntoQA has three splits: "True Ontology", where the rules are coherent with common sense, "False Ontology", where rules violate commonsense (e.g., "Every composite number is prime"), and "Ficitional Ontology", which uses made-up concepts (e.g., "Every rumpus is feisty."). ProofWriter uses real concepts for all rules (e.g., people, animals, colors), but the rules are generated at random-thus they also often contradict commonsense. We use the problems from ProofWriter that have proofs for the answer (i.e. ignoring the "closed-world assumption" and "unknown" problems, where fully justifying the answer requires meta-logical reasoning). Language modelsWe evaluate three language models in the few-shot setting: OpenAI GPT-3 (text-davinci-003; [2]), OpenAI GPT-3.5 Turbo (gpt-3.5-turbo) and LLaMA 13B [28]. We use 4 few-shot examples for the vanilla models. For guided models, the prompt examples are augmented to show formalized reasoning. Given the assumptions and the question, we first show the model how to formalize the assumptions and the proof goal, and then present the chain-of-thought where sentences are preceded by a guided inference (in an infer block, c.f. SS3.3). Since this makes the prompt longer, we only use two prompt examples for the guided models: one where the answer is true and one where it is false. We implement CSD on the OpenAI models using their public API, which exposes a parameter to bias the logits on given tokens. We use the rejection-based sampling procedure described in [20], which only requires extra API calls when we detect a violation in the model's output inside guided blocks. gpt3.5-turbo requires a slight adaptation (to resume generation) since it is a chat-based model; we detail this along with all of our prompts in the Appendix. ResultsFig. 3 shows few-shot results on multi-hop reasoning, measuring final-answer accuracy. When models do not provide any answer to the problem, we assume a random guess. Overall, guided models perform significantly better. GPT-3 and GPT-3.5 are highly accurate in formalizing assumptions, and enjoy the largest benefits (with nearly perfect performance on PrOntoQA with LogicGuide, and improving from chance to 80% correct on ProofWriter). For them, LogicGuide essentially eliminates single-step reasoning errors, and the impact of this benefit grows in solutions requiring more hops--a single error is enough to reach the wrong final conclusion. LLaMA 13B sees gains between 10 and 20% in PrOntoQA False and Fictittional, while LogicGuide hurts its performance in PrOntoQA True (where the unguided model often avoids reasoning altogether, as the answer follows common sense) and ProofWriter. We observe two main failure modes: (1) models can misformalize assumptions, and (2) they can fail at _planning_, making a sequence of valid inferences that do not ultimately lead to the goal. When formalization errors happen, it's more common that no conclusion can be drawn, rather than a wrong conclusion: in only 1.6% of the solutions did a guided model formally derive a wrong answer; these cases were mostly due to missing a negation when formalizing a sentence (and mostly for LLaMA on ProofWriter). A more common formalization failure (especially for LLaMA) was to use inconsistent names for properties or relations, e.g. (sees A B) in one place and (see B C) in another. When planning fails and no further inferences can be made, LogicGuide generates the string nothing in the [[infer]] block. When that happens, we observed models spontaneously concluding that the answer is "Unknown" or "Cannot be concluded" _despite that not being demonstrated in the prompt_ (models abstained in 78% of the cases where they exhausted the inferences that could be made, while guessing "False" in 19% of those cases). This contrasts with the unguided models, which most often still make an unjustified guess, writing as if it was a logical conclusion (only unguided GPT-3.5 Turbo abstained in our experiments, in 9% of its predictions). Errors in language model reasoning would be especially problematic in practice when an agent must decide which actions are permissible. Hence we created DeontiQA: a set of 60 new reasoning problems inspired by Deontic Logic [29]. We follow the same methodology used in PrOntoQA to create the problems, creating logical forms first and then realizing them in natural language. Like in PrOntoQA, we add distractor rules to prevent guessing the answer from surface shortcuts. In these problems, the goal is to decide whether a given action is permissible, obligatory, or impermissible in the context of managing calendar events for a group of people. We detail the creation of DeontiQA in the Appendix, and make the dataset available along with our code. DeontiQA problems are significantly longer (up to 28 rules) compared to PrOntoQA (maximum of 18). This increased length means we are only able to fit one prompt example in the context window of GPT-3 and GPT-3.5 Turbo. We find LogicGuide to be helpful on DeontiQA: GPT-3 alone is correct on 61.6% of problems, which increases to 80.0% with LogicGuide. GPT-3.5 Turbo alone achieves 77.5% accuracy which increases to 81.3% when guided. Overall, this provides positive evidence for our first research question: LogicGuide _can significantly improve the accuracy of base models in natural language reasoning problems_. Their answers become Figure 3: Final answer accuracies with guided and unguided language models on PrOntoQA and ProofWriter, with bootstrapped 95% confidence intervals. not only more accurate but also more trustworthy: LogicGuide makes models answer "Unknown" when they don't have an answer, rather than producing an unsupported guess. ### Mitigating content effects in reasoning Both humans [9] and language models [6] have been shown to suffer from _content effects_ in reasoning: their accuracy in logical judgements is influenced by prior beliefs about the assumptions and conclusions. For instance, from the assumptions that "Some librarians are happy people" and "Some happy people are healthy people", it does not follow that "Some librarians are healthy people". Humans and LMs have difficulty judging this argument as invalid because the conclusion agrees with prior beliefs. We hypothesize that LMs will have smaller influence from the content when _formalizing_ assumptions, rather than _reasoning_ from logical sentences. If that is the case, then using LogicGuide will help mitigate content effects. We use two tasks to investigate this hypothesis. First, we contrast the results in the different PrOntoQA ontologies. As in the original PrOntoQA results [22], we see that the base performance of GPT-3 and GPT-3.5 Turbo is already close to ceiling in the True Ontology split (where the model doesn't strictly need to reason correctly as long as it judges the conclusion using common sense). In contrast, accuracy is significantly lower in the False and Fictitional ontologies and decays with more hops. However, both of these models are highly accurate in formalizing assumptions, and thus benefit from the guide in the False and Fictitional ontologies: performance is near ceiling. Interestingly, GPT-3.5 Turbo still exhibits occasional content effects, explicitly judging the conclusions derived using LogicGuide as contradictory. For instance, in one problem where the model must decide whether Sam is luminous or not, it is given that "Sam is a snake", and from the given assumptions the model correctly concludes "... [[infer:(sheep sam)]] Sam is a sheep". It then proceeds to question this conclusion and halts: "This contradicts the fact that Sam is a snake. Therefore, we cannot infer whether Sam is luminous or not.". Second, we leverage the Syllogism Validity dataset [6]. In this task, the model is given two assumptions and a conclusion, and has to decide if together they constitute a valid argument (i.e., the conclusion logically follows from the assumptions). The example above about librarians is taken from this dataset. Solutions have a single step: judging the argument as valid or invalid. When using LogicGuide, we prompt the model to first perform a single inference given its formalization of the assumptions and then judge the validity of the argument. Syllogism Validity has 3 conditions: "Nonsense", where rules are about made-up concepts, "Consistent", where the conclusions agree with commonsense regardless of whether the argument is valid, and "Inconsistent", where the conclusion always violates world knowledge. Unguided models behave consistently with those in [6]: in the "Consistent" split, all models strongly tend to judge the argument as being valid, thus performing close to chance (GPT-3.5 Turbo is slightly better, at 60%). Both GPT-3 and GPT-3.5 Turbo are, however, highly accurate at formalizing the assumptions and conclusions and tend to trust LogicGuide, nearing ceiling performance for all conditions. LLaMA 13B has much more difficulty judging the syllogisms, performing near chance in all conditions. However, it is still successful at formalizing many syllogisms, obtaining non-trivial performance (60% to 77%) when using LogicGuide. In failure cases, it often confuses logical connectives (e.g., formalizing "Some X are Y" as "X implies Y" and vice-versa). We overall see positive evidence for our second research question: models with LogicGuide show greatly diminished content effects, with stronger benefits for models that are more capable of formalizing individual sentences. Figure 4: Accuracies of models with and without LogicGuide on the Syllogism Validity task. ### Learning to reason by guided self-improvement Finally, we consider improving the reasoning ability of a language model. The Self-Taught Reasoner (STaR; [35]) is a simple method for improving LLMs on reasoning tasks that has been shown to be effective in symbolic, mathematical and commonsense reasoning. Given a dataset of reasoning problems paired with correct final answers (but not reasoning traces), STaR iterates between (1) solving problems with few-shot chain-of-thought prompting, and (2) fine-tuning the model on its own generated rationales that led to correct final answers. This allows the model to bootstrap its own reasoning from a small seed set of few-shot examples. Crucially, STaR relies on the premise that _if a generated rationale led to the correct answer, it is likely to be correct_. While this holds in domains like arithmetic, it breaks down in binary answer tasks like PrOntoQA. In these cases, right answers will happen often with bad rationales, leading STaR and similar approaches to fine-tune on incorrect reasoning. Indeed, the authors in [35] remark that "filtering bad reasoning paired with correct answers remains an open question." We thus consider STaR training on either all correct answers (with and without the guide) or only on certified correct answers. We run 2 STaR iterations with LLaMA 13B on PrOntoQA2. In each iteration, we attempt 200 random problems equally split between 1 and 5 hops, and fine-tune on successful solutions, evaluating on unseen problems. Footnote 2: ProofWriter has shortcuts that allow guessing the answer without reasoning [36], which fine-tuning quickly learns. PrOntoQA explicitly includes distractor rules to avoid shortcuts. Thus, we focus on PrOntoQA here. Fig. 5 shows the results. As predicted in [35], the high chance of guessing confounds STaR, and training on all rationales that yield the right answer does not give meaningful improvements ("Unguided", red curve). Training on all guided solutions leading to correct answers brings some improvement ("Guided"; 72% to 80% after one iteration), but still ends up over-fitting to accidentally-correct reasoning. Fine-tuning only on certified correct answers avoids this trap and achieves high performance ("Strict Guided", up to 86%). This allows us to positively answer our third research question: LogicGuide _can be used for effective self-improvement in reasoning_, in cases where naive methods collapse. ## 5 Discussion and conclusion We introduced guide tools for language models. When invoked, a guide locally constrains generation to a controlled set of statements. LogicGuide leveraged this idea for logical reasoning, where the guide allows the LM to formalize its interpretation of input sentences and make certifiably sound inferences with respect to its formalization. This avoids inferences that do not follow from stated assumptions, substantially improving accuracy in natural language reasoning problems. Two major challenges remain. First, natural language is often ambiguous and can be difficult to faithfully represent in formal logic. Indeed, the appropriate formalization of many ubiquitous constructions is still an active subject of philosophical debate [17] (e.g., [5] recently discusses the formalization of "A unless B"). Domains where arguments tend to have more systematic logical structure, such as law, are more likely to benefit from tools like LogicGuide, based on formalization. Second, making correct logical inferences does not imply making _useful_ ones. LLMs can still fail at planning by making inferences that do not eventually connect to their goal. Many current investigations into planning techniques for LM reasoning are complementary to our work and can be integrated with guides [16; 37]. Language models bring to reasoning the flexibility of human language and a wealth of useful prior knowledge. But that power comes with lack of reliability and difficulty verifying extended reasoning. Our approach points to a rich direction for seamlessly integrating reliable symbolic and flexible neural reasoning into a unified text stream. The result is better, and more easily verified, reasoning. Figure 5: Accuracy of LLaMA 13B on held-out PrOntoQA problems when bootstrapping using STaR.
2307.04318
Two-Sample and Change-Point Inference for Non-Euclidean Valued Time Series
Data objects taking value in a general metric space have become increasingly common in modern data analysis. In this paper, we study two important statistical inference problems, namely, two-sample testing and change-point detection, for such non-Euclidean data under temporal dependence. Typical examples of non-Euclidean valued time series include yearly mortality distributions, time-varying networks, and covariance matrix time series. To accommodate unknown temporal dependence, we advance the self-normalization (SN) technique (Shao, 2010) to the inference of non-Euclidean time series, which is substantially different from the existing SN-based inference for functional time series that reside in Hilbert space (Zhang et al., 2011). Theoretically, we propose new regularity conditions that could be easier to check than those in the recent literature, and derive the limiting distributions of the proposed test statistics under both null and local alternatives. For change-point detection problem, we also derive the consistency for the change-point location estimator, and combine our proposed change-point test with wild binary segmentation to perform multiple change-point estimation. Numerical simulations demonstrate the effectiveness and robustness of our proposed tests compared with existing methods in the literature. Finally, we apply our tests to two-sample inference in mortality data and change-point detection in cryptocurrency data.
Feiyu Jiang, Changbo Zhu, Xiaofeng Shao
2023-07-10T03:20:08Z
http://arxiv.org/abs/2307.04318v1
# Two-Sample and Change-Point Inference for Non-Euclidean Valued Time Series ###### Abstract Data objects taking value in a general metric space have become increasingly common in modern data analysis. In this paper, we study two important statistical inference problems, namely, two-sample testing and change-point detection, for such non-Euclidean data under temporal dependence. Typical examples of non-Euclidean valued time series include yearly mortality distributions, time-varying networks, and covariance matrix time series. To accommodate unknown temporal dependence, we advance the self-normalization (SN) technique (Shao, 2010) to the inference of non-Euclidean time series, which is substantially different from the existing SN-based inference for functional time series that reside in Hilbert space (Zhang et al., 2011). Theoretically, we propose new regularity conditions that could be easier to check than those in the recent literature, and derive the limiting distributions of the proposed test statistics under both null and local alternatives. For change-point detection problem, we also derive the consistency for the change-point location estimator, and combine our proposed change-point test with wild binary segmentation to perform multiple change-point estimation. Numerical simulations demonstrate the effectiveness and robustness of our proposed tests compared with existing methods in the literature. Finally, we apply our tests to two-sample inference in mortality data and change-point detection in cryptocurrency data. ## 1 Introduction Statistical analysis of non-Euclidean data that reside in a metric space is gradually emerging as an important branch of functional data analysis, motivated by increasing encounter of such data in many modern applications. Examples include the analysis of sequences of age-at-death distributions over calendar years (Mazzuco and Scarpa, 2015; Shang and Hyndman, 2017), covariance matrices in the analysis of diffusion tensors in medical imaging (Dryden et al., 2009), and graph Laplacians of networks (Ginestet et al., 2017). One of the main challenges in dealing with such data is that the usual vector/Hilbert space operation, such as projection and inner product may not be well defined and only the distance between two non-Euclidean data objects is available. Despite the challenge, the list of papers that propose new statistical techniques to analyze non-Euclidean data has been growing. Building on Frechet mean and variance (Frechet, 1948), which are counterparts of mean and variance for metric space valued random object, Dubey and Muller (2019) proposed a test for comparing \(N(\geq 2)\) populations of metric space valued data. Dubey and Muller (2020) developed a novel test to detect a change point in the Frechet mean and/or variance in a sequence of independent non-Euclidean data. The classical linear and nonparametric regression has also been extended to metric spaced valued data; see Petersen and Muller (2019), Tucker et al. (2022), and Zhang et al. (2021), among others. So far, the majority of the literature on non-Euclidean data has been limited to independent data, and the only exceptions are Zhang et al. (2022) and Zhu and Muller (2021, 2022), which mainly focused on the autoregressive modeling of non-Euclidean valued time series. To the best of our knowledge, no inferential tools are available for non-Euclidean valued time series in the literature. In this paper, we address two important problems: two-sample testing and change-point detection, in the analysis of non-Euclidean valued time series. These two problems are also well motivated by the data we analyzed in the paper, namely, the yearly age-at-death distributions for countries in Europe and daily Pearson correlation matrices for five cryptocurrencies. For time series data, serial dependence is the rule rather than the exception. This motivates us to develop new tests for non-Euclidean time series that is robust to temporal dependence. Note that the two testing problems have been addressed by Dubey & Muller (2019) and Dubey & Muller (2020_a_), respectively for independent non-Euclidean data, but as expected, their tests fail to control the size when there is temporal dependence in the series; see Section 5 for simulation evidence. To accommodate unknown temporal dependence, we develop test statistics based on self-normalization (Shao, 2010; Shao & Zhang, 2010), which is a nascent inferential technique for time series data. It has been mainly developed for vector time series and has been extended to functional time series in Hilbert space (Zhang et al., 2011; Zhang & Shao, 2015). The functional extension is however based on reducing the infinite dimensional functional data to finite dimension via functional principal component analysis, and then applying SN to the finite-dimensional vector time series. Such SN-based inference developed for time series in Hilbert space cannot be applied to non-Euclidean valued time series, since the projection and inner product commonly used for data in Hilbert space are not available for data objects that live in a general metric space. The SN-based extension to non-Euclidean valued time series is therefore fairly different from that in Zhang et al. (2011) and Zhang & Shao (2015), in terms of both methodology and theory. For independent non-Euclidean valued data, Dubey & Muller (2019, 2020_a_) build on the empirical process theory (van der Vaart & Wellner, 1996) by regulating the complexity of the analyzed metric space, which is in general abstract and may not be easy to verify. In our paper, we take a different approach that is inspired by the M-estimation theory in Pollard (1985) and Hjort & Pollard (2011) for Euclidean data, and extend it to non-Euclidean setting. We assume that the metric distance between data and the estimator of the Frechet mean admits certain decomposition, which includes a bias term, a leading stochastic term, and a remainder term. Our technical assumptions are more intuitive and could be easier to check in practice. Furthermore, we are able to obtain explicit asymptotic distributions of our test statistics under the local alternatives of rate \(O(n^{-1/2})\), where \(n\) is the sample size, under our assumptions, whereas they seem difficult to derive under the entropy integral type conditions employed by Dubey & Muller (2019, 2020_a_). The remainder of the paper is organized as follows. Section 2 provides background of non-Euclidean metric space in which random objects of interest reside in, and some basic assumptions that will be used throughout the paper. Section 3 proposes SN-based two-sample tests for non-Euclidean time series. Section 4 considers SN-based change-point tests. Numerical studies for the proposed tests are presented in Section 5, and Section 6 demonstrates the applicability of these tests through real data examples. Section 7 concludes. Proofs of all results are relegated to Appendix A. Appendix B summarizes the examples that satisfy assumptions in Section 2, and Appendix C provides simulation results for functional time series. Some notations used throughout the paper are defined as follows. Let \(\|\cdot\|\) denote the conventional Euclidean norm. Let \(D[0,1]\) denote the space of functions on \([0,1]\) which are right continuous with left limits, endowed with the Skorokhod topology (Billingsley, 1968). We use \(\Rightarrow\) to denote weak convergence in \(D[0,1]\) or more generally in \(\mathbb{R}^{m}\)-valued function space \(D^{m}[0,1]\), where \(m\in\mathbb{N}\); \(\rightarrow_{d}\) to denote convergence in distribution; and \(\rightarrow_{p}\) to denote convergence in probability. A sequence of random variables \(X_{n}\) is said to be \(O_{p}(1)\) if it is bounded in probability. For \(x\in\mathbb{R}\), define \(\lfloor x\rfloor\) as the largest integer that is smaller than or equal to \(x\), and \(\lceil x\rceil\) as the smallest integer that is greater than or equal to \(x\). ## 2 Preliminaries and Settings In this paper, we consider a metric space \((\Omega,d)\) that is totally bounded, i.e. for any \(\epsilon>0\), there exist a finite number of open \(\epsilon\)-balls whose union can cover \(\Omega\). For a sequence of _stationary_ random objects \(\{Y_{t}\}_{t\in\mathbb{Z}}\) defined on \((\Omega,d)\), we follow Frechet (1948), and define their Frechet mean and variance by \[\mu=\arg\min_{\omega\in\Omega}\mathbb{E}d^{2}(Y_{t},\omega),\quad V=\mathbb{E }d^{2}(Y_{t},\mu), \tag{1}\] respectively. Frechet mean extends the traditional mean in linear spaces to more general metric spaces by minimizing expected squared metric distance between the random object \(Y_{t}\) and the centroid akin to the conventional mean by minimizing the expected sum of residual squares. It is particularly useful for objects that lie in abstract spaces without explicit algebraic structure. Frechet variance, defined by such expected squared metric distance, is then used for measuring the dispersion in data. Given finite samples \(\{Y_{t}\}_{t=1}^{n}\), we define their Frechet subsample mean and variance as \[\begin{split}\hat{\mu}_{[a,b]}&=\arg\min_{\omega\in \Omega}\sum_{t=1+\lfloor na\rfloor}^{\lfloor nb\rfloor}d^{2}(Y_{t},\omega),\\ \hat{V}_{[a,b]}&=\frac{1}{\lfloor nb\rfloor- \lfloor na\rfloor}\sum_{t=1+\lfloor na\rfloor}^{\lfloor nb\rfloor}d^{2}(Y_{t}, \hat{\mu}_{[a,b]}),\end{split} \tag{2}\] where \((a,b)\in\mathcal{I}_{\eta}\), \(\mathcal{I}_{\eta}=\{(a,b):0\leq a<b\leq 1,b-a\geq\eta\}\) for some trimming parameter \(\eta\in(0,1)\). The case corresponding to \(a=0\) and \(b\geq\eta\) is further denoted as \[\hat{\mu}_{[0,b]}=\hat{\mu}_{b},\quad\hat{V}_{[0,b]}=\hat{V}_{b},\] with special case of \(b=1\) corresponding to Frechet sample mean and variance (Petersen and Muller, 2019), respectively. Note that both Frechet (subsample) mean and variance depend on the space \(\Omega\) and metric distance \(d\), which require further regulation for desired inferential purposes. In this paper, we do not impose independence assumptions, and our technical treatment differs substantially from those in the literature, c.f. Petersen and Muller (2019), Dubey and Muller (2019, 2020), Dubey and Muller (2021). **Assumption 2.1**.: \(\mu\) _is unique, and for some \(\delta>0\), there exists a constant \(K>0\) such that,_ \[\inf_{d(\omega,\mu)<\delta}\left\{\mathbb{E}\left(d^{2}(Y_{0},\omega)\right)- \mathbb{E}\left(d^{2}(Y_{0},\mu)\right)-Kd^{2}(\omega,\mu)\right\}\geq 0.\] **Assumption 2.2**.: _For any \((a,b)\in\mathcal{I}_{\eta}\), \(\hat{\mu}_{[a,b]}\) exists and is unique almost surely._ **Assumption 2.3**.: _For any \(\omega\in\Omega\), and \((a,b)\in\mathcal{I}_{\eta}\), as \(n\to\infty\),_ \[\frac{1}{\lfloor nb\rfloor-\lfloor na\rfloor}\sum_{t=\lfloor na\rfloor+1}^{ \lfloor nb\rfloor}[d^{2}(Y_{t},\omega)-\mathbb{E}d^{2}(Y_{t},\omega)]\to_{p}0.\] **Assumption 2.4**.: _For some constant \(\sigma>0\),_ \[\frac{1}{\sqrt{n}}\sum_{t=1}^{\lfloor nr\rfloor}\left(d^{2}(Y_{t},\mu)-V \right)\Rightarrow\sigma B(r),\quad r\in(0,1],\] _where \(B(\cdot)\) is a standard Brownian motion._ **Assumption 2.5**.: _Let \(B_{\delta}(\mu)\subset\Omega\) be a ball of radius \(\delta\) centered at \(\mu\). For \(\omega\in B_{\delta}(\mu)\), i.e. \(d(\omega,\mu)\leq\delta\), we assume the following expansion_ \[d^{2}(Y_{t},\omega)-d^{2}(Y_{t},\mu)=K_{d}d^{2}(\omega,\mu)+g(Y_{t},\omega,\mu) +R(Y_{t},\omega,\mu),\quad t\in\mathbb{Z},\] _where \(K_{d}\in(0,\infty)\) is a constant, and \(g(Y_{t},\omega,\mu)\) and \(R(Y_{t},\omega,\mu)\) satisfy that, as \(n\to\infty\),_ \[\sup_{(a,b)\in\mathcal{I}_{\eta}}\sup_{\omega\in B_{\delta}(\mu)}\left|\frac{n ^{-1/2}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}g(Y_{t},\omega,\mu)}{d (\omega,\mu)}\right|=O_{p}(1),\] _and_ \[\sup_{(a,b)\in\mathcal{I}_{\eta}}\sup_{\omega\in B_{\delta}(\mu)}\left|\frac{ n^{-1/2}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}R(Y_{t},\omega,\mu)}{d (\omega,\mu)+n^{1/2}d^{2}(\omega,\mu)}\right|\to_{p}0,\] _respectively._ Several remarks are given in order. Assumptions 2.1-2.3 are standard and similar conditions can be found in Dubey and Muller (2019, 2020) and Petersen and Muller (2019). Assumptions 2.1 and 2.2 are adapted from Assumption (A1) in Dubey and Muller (2020), and are required for identification purpose. In particular, Assumption 2.1 requires that the expected squared metric distance \(\mathbb{E}d^{2}(Y_{t},\omega)\) can be well separated from the Frechet variance, and the separation is quadratic in terms of the distance \(d(\omega,\mu).\) Assumption 2.2 is useful for obtaining the uniform convergence of the subsample estimate of Frechet mean, i.e., \(\hat{\mu}_{[a,b]},\) which is a key ingredient in forming the self-normalizer in SN-based inference. Assumption 2.3 is a pointwise weak law of large numbers, c.f. Assumption (A2) in Dubey & Muller (2020_a_). Assumption 2.4 requires the invariance principle to hold to regularize the partial sum that appears in Frechet subsample variances. Note that \(d^{2}(Y_{t},\omega)\) takes value in \(\mathbb{R}\) for any fixed \(\omega\in\Omega,\) thus both Assumption 2.3 and 2.4 could be implied by high-level weak temporal dependence conditions (e.g., strong mixing) in conventional Euclidean space, see Shao (2010, 2015) for discussions. Assumption 2.5 distinguishes our theoretical analysis from the existing literature. Its idea is inspired by Pollard (1985) and Hjort & Pollard (2011) for M-estimators. In the conventional Euclidean space, i.e. \((\Omega,d)=(\mathbb{R}^{m},\|\cdot\|)\) for \(m\geq 1\), it is easy to see that the expansion in Assumption 2.5 holds with \(K_{d}=1,\)\(g(Y_{t},\omega,\mu)=2(\mu-\omega)^{\top}(Y_{t}-\mu)\) and \(R(Y_{t},\omega,\mu)\equiv 0.\) In more general cases, Assumption 2.5 can be interpreted as the expansion of \(d^{2}(Y_{t},\omega)\) around the target value \(d^{2}(Y_{t},\mu)\). In particular, \(K_{d}d^{2}(\omega,\mu)\) can be viewed as the bias term, \(g(Y_{t},\omega,\mu)\) works as the asymptotic leading term that is proportional to the distance \(d(\omega,\mu)\) while \(R(Y_{t},\omega,\mu)\) is the asymptotically negligible remainder term. More specifically, after suitable normalization, it reads as, \[n^{-1/2}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}[d^{2}(Y_ {t},\omega)-d^{2}(Y_{t},\mu)]\] \[= \underbrace{n^{1/2}(b-a)K_{d}d^{2}(\omega,\mu)}_{\text{bias term} }+\underbrace{d(\omega,\mu)\frac{n^{-1/2}\sum_{t=\lfloor na\rfloor+1}^{ \lfloor nb\rfloor}g(Y_{t},\omega,\mu)}{d(\omega,\mu)}}_{\text{stochastic term}}\] \[+\underbrace{n^{-1/2}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb \rfloor}R(Y_{t},\omega,\mu)}_{\text{remainder term}}.\] And the verification of this assumption can be done by analyzing each term. In comparison, existing literature, e.g. Petersen & Muller (2019, 2020_a_), Dubey & Muller (2021), impose assumptions on the complexity of \((\Omega,d)\). These assumptions typically involve the behaviors of entropy integral and covering numbers rooted in the empirical process theory (van der Vaart & Wellner, 1996), which are abstract and difficult to check in practice, see Propositions 1 and 2 in Petersen & Muller (2019). Assumption 2.5, on the contrary, regulates directly on the metric \(d\) and could be easily checked for the examples below. Moreover, Assumption 2.5 is useful for deriving local powers of tests to be developed in this paper, see Section 3.2 and 4.2 for more details. Examples that can satisfy Assumptions 2.1-2.5 include: * \(L_{2}\) metric \(d_{L}\) for \(\Omega\) being the set of square integrable functions on \([0,1]\); * \(2\)-Wasserstein metric \(d_{W}\) for \(\Omega\) being the set of univariate probability distributions on \(\mathbb{R}\); * Frobenius metric \(d_{F}\) for \(\Omega\) being the set of square matrices, including the special cases of covariance matrices and graph Laplacians; * log-Euclidean metric \(d_{E}\) for \(\Omega\) being the set of covariance matrices. We refer to Appendix B for more details of these examples and verifications of above assumptions for them. ## 3 Two-Sample Testing This section considers two-sample testing in metric space under temporal dependence. For two sequences of temporally dependent random objects \(\{Y_{t}^{(1)},Y_{t}^{(2)}\}_{t\in\mathbb{Z}}\) on \((\Omega,d)\), we denote \(Y_{t}^{(i)}\sim P^{(i)}\), where \(P^{(i)}\) is the underlying marginal distribution of \(Y_{t}^{(i)}\) with Frechet mean and variance \(\mu^{(i)}\) and \(V^{(i)}\), \(i=1,2\). Given finite sample observations \(\{Y_{t}^{(1)}\}_{t=1}^{n_{1}}\) and \(\{Y_{t}^{(2)}\}_{t=1}^{n_{2}}\), we are interested in the following two-sample testing problem, \[\mathbb{H}_{0}:P^{(1)}=P^{(2)},\quad\text{v.s. }\mathbb{H}_{a}:P^{(1)}\neq P^{(2)}.\] Let \(n=n_{1}+n_{2}\), we assume two samples are balanced, i.e. \(n_{1}/n\rightarrow\gamma_{1}\) and \(n_{2}/n\rightarrow\gamma_{2}\) with \(\gamma_{1},\gamma_{2}\in(0,1)\) and \(\gamma_{1}+\gamma_{2}=1\) as \(\min(n_{1},n_{2})\rightarrow\infty\). For \(r\in(0,1]\), we define their recursive Frechet sample mean and variance by \[\hat{\mu}_{r}^{(i)}=\arg\min_{\omega\in\Omega}\sum_{t=1}^{\lfloor rn_{i} \rfloor}d^{2}(Y_{t}^{(i)},\omega),\quad\hat{V}_{r}^{(i)}=\frac{1}{\lfloor rn_{ i}\rfloor}\sum_{t=1}^{\lfloor rn_{i}\rfloor}d^{2}(Y_{t}^{(i)},\hat{\mu}_{r}^{(i)}), \quad i=1,2.\] A natural candidate test of \(\mathbb{H}_{0}\) is to compare their Frechet sample mean and variance by contrasting \((\hat{\mu}_{1}^{(1)},\hat{V}_{1}^{(1)})\) and \((\hat{\mu}_{1}^{(2)},\hat{V}_{1}^{(2)})\). For the mean part, it is tempting to use \(d(\hat{\mu}_{1}^{(1)},\hat{\mu}_{1}^{(2)})\) as the testing statistic. However, this is a non-trivial task as the limiting behavior of \(d(\hat{\mu}_{1}^{(1)},\hat{\mu}_{1}^{(2)})\) depends heavily on the structure of the metric space, which may not admit conventional algebraic operations. Fortunately, both \(\hat{V}_{1}^{(1)}\) and \(\hat{V}_{1}^{(2)}\) take value in \(\mathbb{R}\), and it is thus intuitive to compare their difference. In fact, Dubey and Muller (2019) propose the test statistic of the form \[U_{n}=\frac{n_{1}n_{2}}{n\hat{\sigma}_{1}^{2}\hat{\sigma}_{2}^{2}}(\hat{V}_{1} ^{(1)}-\hat{V}_{1}^{(2)})^{2},\] where \(\hat{\sigma}_{i}^{2}\) is a consistent estimator of \(\lim_{n_{i}\rightarrow\infty}\text{Var}\{\sqrt{n}(\hat{V}_{1}^{(i)}-V^{(i)})\}\), \(i=1,2\). However, \(U_{n}\) requires both within-group and between-group independence, which is too stringent to be realistic for applications in this paper. When either of such independence is violated, the test may fail to control size, see Section 5 for numerical evidence. Furthermore, taking into account the temporal dependence requires replacing the variance by long-run variance, whose consistent estimation usually involves laborious tuning such as choices of kernels and bandwidths (Newey and West, 1987; Andrews, 1991). To this end, we invoke self-normalization technique to bypass the foregoing issues. The core principle of self-normalization for the time series inference is to use an inconsistent long-run variance estimator that is a function of recursive estimates to yield an asymptotically pivotal statistic. The SN procedure does not involve any tuning parameter or involves less number of tuning parameters compared to traditional counterparts. See Shao (2015) for a comprehensive review of recent developments for low dimensional time series. For recent extension to inference for high-dimensional time series, we refer to Wang and Shao (2020) and Wang et al. (2022). ### Test Statistics Define the recursive subsample test statistic based on Frechet variance as \[T_{n}(r)=r(\hat{V}_{r}^{(1)}-\hat{V}_{r}^{(2)}),\ r\in[\eta,1],\] and then construct the SN based test statistic as \[D_{n,1}=\frac{n\left[T_{n}(1)\right]^{2}}{\sum_{k=\lfloor n\eta\rfloor}^{n} \left[T_{n}(\frac{k}{n})-\frac{k}{n}T_{n}(1)\right]^{2}}, \tag{3}\] where \(\eta\in(0,1)\) is a trimming parameter for controlling the estimation effect of \(T_{n}(r)\) when \(r\) is close to \(0\), which is important for deriving the uniform convergence of \(\{\sqrt{n}T_{n}(r),r\in[\eta,1]\}\), see Zhou and Shao (2013) and Jiang et al. (2022) for similar technical treatments. The testing statistic (3) is composed of the numerator \(n[T_{n}(1)]^{2}\), which captures the difference in Frechet variances, and the denominator \(\sum_{k=\lfloor n\eta\rfloor}^{n}\left[T_{n}(\frac{k}{n})-\frac{k}{n}T_{n}(1) \right]^{2}\), which is called self-normalizer and mimics the behavior of the numerator with suitable centering and trimming. For each \(r\in[\eta,1]\), \(T_{n}(r)\) is expected to be a consistent estimator for \(r(V^{(1)}-V^{(2)})\). Therefore, under \(\mathbb{H}_{a}\), \(T_{n}(1)\) is large when there is significant difference in Frechet variance, whereas the key element \(T_{n}(r)-rT_{n}(1)\) in self-normalizer remains to be small. This suggests that we should reject \(\mathbb{H}_{0}\) for large values of \(D_{n,1}\). Note that (3) only targets at difference in Frechet variances. To detect the difference in Frechet means, we can use contaminated Frechet variance (Dubey and Muller, 2020_a_). Let \[\hat{V}_{r}^{C,(1)}=\frac{1}{\lfloor rn_{1}\rfloor}\sum_{t=1}^{\lfloor rn_{1} \rfloor}d^{2}(Y_{t}^{(1)},\hat{\mu}_{r}^{(2)}),\quad\text{and}\quad\hat{V}_{r} ^{C,(2)}=\frac{1}{\lfloor rn_{2}\rfloor}\sum_{t=1}^{\lfloor rn_{2}\rfloor}d^{ 2}(Y_{t}^{(2)},\hat{\mu}_{r}^{(1)}),\] \[T_{n}^{C}(r)=r(\hat{V}_{r}^{C,(1)}+\hat{V}_{r}^{C,(2)}-\hat{V}_{r}^{(1)}-\hat{V}_{ r}^{(2)}).\] The contaminated Frechet sample variances \(\hat{V}_{r}^{C,(1)}\) and \(\hat{V}_{r}^{C,(2)}\) switch the role of \(\hat{\mu}_{r}^{(1)}\) and \(\hat{\mu}_{r}^{(2)}\) in \(\hat{V}_{r}^{(1)}\) and \(\hat{V}_{r}^{(2)}\), respectively, and could be viewed as proxies for measuring Frechet mean differences. Intuitively, it is expected that \(\hat{V}_{r}^{C,(i)}\approx\mathbb{E}d^{2}(Y_{t}^{(i)},\mu^{(3-i)})\), and \(\hat{V}_{r}^{(i)}\approx\mathbb{E}d^{2}(Y_{t}^{(i)},\,\mu^{(i)}),i=1,2\). Under \(\mathbb{H}_{0}\), both \(\hat{\mu}_{r}^{(1)}\) and \(\hat{\mu}_{r}^{(2)}\) are consistent estimators for \(\mu^{(1)}=\mu^{(2)}\), thus \(\hat{V}_{r}^{C,(i)}\approx\hat{V}_{r}^{(i)},i=1,2\), which indicates a small value for \(T_{n}^{C}(r)\). On the contrary, when \(d(\mu^{(1)},\mu^{(2)})>0\), \(\hat{V}_{r}^{C,(i)}\) could be much larger than \(\hat{V}_{r}^{(i)}\) as \(\mathbb{E}d^{2}(Y_{t}^{(i)},\mu^{(3-i)})>\mathbb{E}d^{2}(Y_{t}^{(i)},\mu^{(i) })=\arg\min_{\nu\in\Omega}\mathbb{E}d^{2}(Y_{t}^{(i)},\omega)\), \(i=1,2\), resulting in large value of \(T_{n}^{C}(r)\). The power-augmented test statistic is thus defined by \[D_{n,2}=\frac{n\left\{\left[T_{n}(1)\right]^{2}+\left[T_{n}^{C}(1)\right]^{2 }\right\}}{\sum_{k=\lfloor n\eta\rfloor}^{n}\left\{\left[T_{n}(\frac{k}{n})- \frac{k}{n}T_{n}(1)\right]^{2}+\left[T_{n}^{C}(\frac{k}{n})-\frac{k}{n}T_{n}^{ C}(1)\right]^{2}\right\}}, \tag{4}\] where the additional term \(\sum_{k=\lfloor n\eta\rfloor}^{n}\left[T_{n}^{C}(\frac{k}{n})-\frac{k}{n}T_{n }^{C}(1)\right]^{2}\) that appears in the self-normalizer is used to stabilize finite sample performances. **Remark 3.1**.: _Our proposed tests could be adapted to comparison of \(N\)-sample populations (Dubey & Muller, 2019), where \(N\geq 2\). An natural way of extension would be aggregating all the pairwise differences in Frechet variance and contaminated variance. Specifically, let the \(N\) groups of random data objects be \(\{Y_{t}^{(i)}\}_{t=1}^{n_{i}}\), \(i=1,\cdots,N\). The null hypothesis is given as_ \[\mathbb{H}_{0}:P^{(1)}=\cdots=P^{(N)},\] _for some \(N\geq 2\)._ _Let \(\hat{\mu}_{r}^{(i)}\) and \(\hat{V}_{r}^{(i)}\), \(r\in[\eta,1]\) be the Frechet subsample mean and variance, respectively, for the \(i\)th group, \(i=1,\cdots,N\). For \(1\leq i\neq j\leq N\), define the pairwise contaminated Frechet subsample variance as_ \[\hat{V}_{r}^{C,(i,j)}=\frac{1}{\left\lceil n_{i}\right\rfloor}\sum_{t=1}^{ \lfloor rn_{i}\rfloor}d^{2}(Y_{t}^{(i)},\hat{\mu}_{r}^{(j)}),\ r\in[\eta,1],\] _and define the recursive statistics_ \[T_{n}^{i,j}(r)=r(\hat{V}_{r}^{(i)}-\hat{V}_{r}^{(j)}),\quad T_{n}^{C,i,j}(r)= r(\hat{V}_{r}^{C,(i,j)}+\hat{V}_{r}^{C,(j,i)}-\hat{V}_{r}^{(i)}-\hat{V}_{r}^{(j) }),\ r\in[\eta,1].\] _In the same spirit of the test statistics \(D_{n,1}\) and \(D_{n,2}\), for \(n=\sum_{i=1}^{N}n_{i}\), we may construct their counterparts for the \(N\)-sample testing problem as_ \[D_{n,1}^{(N)}=\frac{n\sum_{i<j}\left[T_{n}^{i,j}(1)\right]^{2}}{\sum_{k=\lfloor n \eta\rfloor}^{n}\sum_{i<j}\left[T_{n}^{i,j}(\frac{k}{n})-\frac{k}{n}T_{n}^{i,j }(1)\right]^{2}},\] _and_ \[D_{n,2}^{(N)}=\frac{n\sum_{i<j}\left\{\left[T_{n}^{i,j}(1)\right]^{2}+\left[T_ {n}^{C,i,j}(1)\right]^{2}\right\}}{\sum_{k=\lfloor n\eta\rfloor}^{n}\sum_{i<j} \left\{\left[T_{n}^{i,j}(\frac{k}{n})-\frac{k}{n}T_{n}^{i,j}(1)\right]^{2}+ \left[T_{n}^{C,i,j}(\frac{k}{n})-\frac{k}{n}T_{n}^{C,i,j}(1)\right]^{2}\right\}}.\] Compared with classical \(N\)-sample testing problem in Euclidean spaces, e.g. analysis of variance (ANOVA), the above modification does not require Gaussianity, equal variance, or serial independence. Therefore, they could be work for broader classes of distributions. We leave out the details for the sake of space. ### Asymptotic Theory Before we present asymptotic results of the proposed tests, we need a slightly stronger assumption than Assumption 2.4 to regulate the joint behavior of partial sums for both samples. **Assumption 3.1**.: _For some \(\sigma_{1}>0\) and \(\sigma_{2}>0\), we have_ \[\frac{1}{\sqrt{n}}\sum_{t=1}^{\lfloor nr\rfloor}\begin{pmatrix}d^{2}(Y_{t}^{(1)},\mu^{(1)})-V^{(1)}\\ d^{2}(Y_{t}^{(2)},\mu^{(2)})-V^{(2)}\end{pmatrix}\Rightarrow\begin{pmatrix} \sigma_{1}B^{(1)}(r)\\ \sigma_{2}B^{(2)}(r)\end{pmatrix},\] _where \(B^{(1)}(\cdot)\) and \(B^{(2)}(\cdot)\) are two standard Brownian motions with unknown correlation parameter \(\rho\in(-1,1)\), and \(\sigma_{1},\sigma_{2}\neq 0\) are unknown parameters characterizing the long-run variance._ **Theorem 3.1**.: _Suppose Assumptions 2.1-2.5 (with 2.4 replaced by 3.1) hold for both \(\{Y_{t}^{(1)}\}_{t=1}^{n_{1}}\) and \(\{Y_{t}^{(2)}\}_{t=1}^{n_{2}}\). Then as \(n\to\infty\), under \(\mathbb{H}_{0}\), for \(i=1,2\),_ \[D_{n,i}\to_{d}\frac{\xi_{\gamma_{1},\gamma_{2}}^{2}(1;\sigma_{1},\sigma_{2})} {\int_{\eta}^{1}\left[\xi_{\gamma_{1},\gamma_{2}}(r;\sigma_{1},\sigma_{2})-r \xi_{\gamma_{1},\gamma_{2}}(1;\sigma_{1},\sigma_{2})\right]^{2}dr}:=\mathcal{D }_{\eta},\] _where_ \[\xi_{\gamma_{1},\gamma_{2}}(r;\sigma_{1},\sigma_{2})=\gamma_{1}^{-1}\sigma_{1 }B^{(1)}(\gamma_{1}r)-\gamma_{2}^{-1}\sigma_{2}B^{(2)}(\gamma_{2}r). \tag{5}\] Theorem 3.1 obtains the same limiting null distribution for Frechet variance based test \(D_{n,1}\) and its power-augmented version \(D_{n,2}\). Although \(D_{n,2}\) contains contaminated variance \(T_{n}^{C}(1)\), its contribution is asymptotically vanishing as \(n\to\infty\). This is an immediate consequence of the fact that \[\sup_{r\in[\eta,1]}|\sqrt{n}T_{n}^{C}(r)|\to_{p}0,\] see proof of Theorem 3.1 in Appendix A. Similar phenomenon has been documented in Dubey & Muller (2019) under different assumptions. We next consider the power behavior under the Pitman local alternative, \[\mathbb{H}_{an}:\quad V^{(1)}-V^{(2)}=n^{-\kappa_{V}}\Delta_{V},\quad\text{ and }\quad d^{2}(\mu^{(1)},\mu^{(2)})=n^{-\kappa_{M}}\Delta_{M},\] with \(\Delta_{V}\in\mathbb{R}\), \(\Delta_{M}\in(0,\infty)\), and \(\kappa_{V},\kappa_{M}\in(0,\infty)\). **Theorem 3.2**.: _Suppose Assumptions 2.1-2.5 (with 2.4 replaced by 3.1) hold for both \(\{Y_{t}^{(1)}\}_{t=1}^{n_{1}}\) and \(\{Y_{t}^{(2)}\}_{t=1}^{n_{2}}\). As \(n\to\infty\), under \(\mathbb{H}_{an}\),_ * _if_ \(\max\{\kappa_{V},\kappa_{M}\}\in(0,1/2)\)_, then for_ \(i=1,2\)_,_ \(D_{n,i}\to_{p}\infty\)_;_ * _if_ \(\min\{\kappa_{V},\kappa_{M}\}\in(1/2,\infty)\)_, then for_ \(i=1,2\)_,_ \(D_{n,i}\to_{d}\mathcal{D}_{\eta}\)_;_ * _if_ \(\kappa_{V}=1/2\) _and_ \(\kappa_{M}\in(1/2,\infty)\)_, then for_ \(i=1,2\)_,_ \[D_{n,i}\to_{d}\frac{\left(\xi_{\gamma_{1},\gamma_{2}}(1;\sigma_{1},\sigma_{2}) \right)^{2}+4K_{d}^{2}\Delta_{M}^{2}}{\int_{\eta}^{1}\left(\xi_{\gamma_{1}, \gamma_{2}}(r;\sigma_{1},\sigma_{2})-r\xi_{\gamma_{1},\gamma_{2}}(1;\sigma_{1 },\sigma_{2})\right)^{2}dr};\] * _if_ \(\kappa_{V}\in(1/2,\infty)\) _and_ \(\kappa_{M}=1/2\)_, then_ \(D_{n,1}\to_{d}\mathcal{D}_{\eta}\)_, and_ \[D_{n,2}\to_{d}\frac{\left(\xi_{\gamma_{1},\gamma_{2}}(1;\sigma_{1},\sigma_{2}) \right)^{2}+4K_{d}^{2}\Delta_{M}^{2}}{\int_{\eta}^{1}\left(\xi_{\gamma_{1}, \gamma_{2}}(r;\sigma_{1},\sigma_{2})-r\xi_{\gamma_{1},\gamma_{2}}(1;\sigma_{1 },\sigma_{2})\right)^{2}dr};\] * _if_ \(\kappa_{V}=\kappa_{M}=1/2\)_, then_ \[D_{n,1}\to_{d}\frac{\left(\xi_{\gamma_{1},\gamma_{2}}(1;\sigma_{1},\sigma_{2}) +\Delta_{V}\right)^{2}}{\int_{\eta}^{1}\left(\xi_{\gamma_{1},\gamma_{2}}(r; \sigma_{1},\sigma_{2})-r\xi_{\gamma_{1},\gamma_{2}}(1;\sigma_{1},\sigma_{2}) \right)^{2}dr},\] \[D_{n,2}\to_{d}\frac{\left(\xi_{\gamma_{1},\gamma_{2}}(1;\sigma_{1}, \sigma_{2})+\Delta_{V}\right)^{2}+4K_{d}^{2}\Delta_{M}^{2}}{\int_{\eta}^{1} \left(\xi_{\gamma_{1},\gamma_{2}}(r;\sigma_{1},\sigma_{2})-r\xi_{\gamma_{1}, \gamma_{2}}(1;\sigma_{1},\sigma_{2})\right)^{2}dr};\] _where \(K_{d}\) is defined in Assumption 2.5._ Theorem 3.2 presents the asymptotic behaviors for both test statistics under local alternatives in various regimes. In particular, \(D_{n,1}\) can detect differences in Frechet variance at local rate \(n^{-1/2}\), but possesses trivial power against Frechet mean difference regardless of the regime of \(\kappa_{M}\). In comparison, \(D_{n,2}\) is powerful for differences in both Frechet variance and Frechet mean at local rate \(n^{-1/2}\), which validates our claim that \(D_{n,2}\) indeed augments power. Our results merit additional remarks when compared with Dubey & Muller (2019). In Dubey & Muller (2019), they only obtain the consistency of their test under either \(n^{1/2}|V^{(1)}-V^{(2)}|\to\infty\) or \(n^{1/2}d^{2}\mu(\mu^{(1)},\mu^{(2)})\to\infty\), while Theorem 3.2 explicitly characterizes the asymptotic distributions of our test statistics under local alternatives of order \(O(n^{-1/2})\), which depend on \(\kappa_{V}\) and \(\kappa_{M}\). Such theoretical improvement relies crucially on our newly developed proof techniques based on Assumption 2.5, and it seems difficult to derive such limiting distributions under empirical-process-based assumptions in Dubey & Muller (2019). However, we do admit that self-normalization could result in moderate power loss compared with \(t\)-type test statistics, see (Shao & Zhang, 2010) for evidence in Euclidean space. Note that the limiting distributions derived in Theorem 3.1 and Theorem 3.2 contain a key quantity \(\xi_{\gamma_{1},\gamma_{2}}(r;\sigma_{1},\sigma_{2})\) defined in (5), which depends on nuisance parameters \(\sigma_{1},\sigma_{2}\) and \(\rho\). This may hinder the practical use of the tests. The following corollary, however, justifies the wide applicability of our tests. **Corollary 3.1**.: _Under Assumption 3.1, if either \(\gamma_{1}=\gamma_{2}=1/2\) or \(\rho=0\), then for any constants \(C_{a},C_{b}\in\mathbb{R}\),_ \[\frac{\left(\xi_{\gamma_{1},\gamma_{2}}(1;\sigma_{1},\sigma_{2})+C_{a}\right) ^{2}+C_{b}^{2}}{\int_{\eta}^{1}\left(\xi_{\gamma_{1},\gamma_{2}}(r;\sigma_{1}, \sigma_{2})-r\xi_{\gamma_{1},\gamma_{2}}(1;\sigma_{1},\sigma_{2})\right)^{2}dr }=_{d}\frac{\left(B(1)+C_{a}/C_{\xi}\right)^{2}+(C_{b}/C_{\xi})^{2}}{\int_{ \eta}^{1}\left(B(r)-rB(1)\right)^{2}dr},\] _where_ \[C_{\xi}=\begin{cases}\sqrt{2\sigma_{1}^{2}+2\sigma_{2}^{2}-4\rho\sigma_{1} \sigma_{2}},&\text{if }\gamma_{1}=\gamma_{2},\\ \sqrt{\sigma_{1}^{2}/\gamma_{1}+\sigma_{2}^{2}/\gamma_{2}},&\text{if }\rho=0. \end{cases}\] Therefore, by choosing \(C_{a}=C_{b}=0\) in Corollary 3.1, we obtain the pivotal limiting distribution \[\mathcal{D}_{\eta}=_{d}\frac{B^{2}(1)}{\int_{\eta}^{1}\left(B(r)-rB(1)\right) ^{2}dr}.\] The asymptotic distributions in Theorem 3.2 can be similarly derived by letting either \(C_{a}=\Delta_{V}\) or \(C_{b}=2K_{d}\Delta_{M}\). Therefore, when either two samples are of the same length (\(\gamma_{1}=\gamma_{2}\)) or two samples are asymptotically independent (\(\rho=0\)), the limiting distribution \(\mathcal{D}_{\eta}\) is pivotal. In practice, we reject \(\mathbb{H}_{0}\) if \(D_{n,i}>Q_{\mathcal{D}_{\eta}}(1-\alpha)\) where \(Q_{\mathcal{D}_{\eta}}(1-\alpha)\) denotes the \(1-\alpha\) quantile of (the pivotal) \(D_{\eta}\). In Table 1, we tabulate commonly used critical values under various choices of \(\eta\) by simulating 50,000 i.i.d. \(\mathcal{N}\)(0,1) random variables 10,000 times and approximating a standard Brownian motion by standardized partial sum of i.i.d. \(\mathcal{N}\)(0,1) random variables. ## 4 Change-Point Test Inspired by the two-sample tests developed in Section 3, this section considers the change-point detection problem for a sequence of random objects \(\{Y_{t}\}_{t=1}^{n}\), i.e. \[\mathbb{H}_{0}:Y_{1},Y_{2},\ldots,Y_{n}\sim P^{(1)}\] \begin{table} \begin{tabular}{c c c c c} \hline \hline \(\alpha\)\(\eta\) & 0.02 & 0.05 & 0.1 & 0.15 \\ \hline 10\% & 28.51 & 28.88 & 30.02 & 31.87 \\ 5\% & 46.10 & 46.72 & 48.80 & 51.87 \\ 1\% & 101.58 & 103.70 & 108.93 & 116.72 \\ 0.5\% & 131.55 & 134.00 & 142.34 & 151.93 \\ \hline \hline \end{tabular} \end{table} Table 1: Simulated critical values \(Q_{\mathcal{D}_{\eta}}(1-\alpha)\) against the single change-point alternative, \[\mathbb{H}_{a}:\text{ there exists }0<\tau<1\text{ such that }Y_{t}=\left\{\begin{array}{l}Y_{t}^{(1)} \sim P^{(1)},1\leq t\leq\lfloor n\tau\rfloor\\ Y_{t}^{(2)}\sim P^{(2)},\lfloor n\tau\rfloor+1\leq t\leq n.\end{array}\right.\] The single change-point testing problem can be roughly viewed as two-sample testing without knowing where the two samples split, and they share certain similarities in terms of statistical methods and theory. Recall the Frechet subsample mean \(\hat{\mu}_{[a,b]}\) and variance \(\hat{V}_{[a,b]}\) in (2), we further define the pooled contaminated variance separated by \(r\in(a,b)\) as \[\hat{V}_{[r;a,b]}^{C}=\frac{1}{\lfloor nr\rfloor-\lfloor na\rfloor}\sum_{i= \lfloor na\rfloor+1}^{\lfloor nr\rfloor}d^{2}\left(Y_{i},\hat{\mu}_{[r,b]} \right)+\frac{1}{\lfloor nb\rfloor-\lfloor nr\rfloor}\sum_{i=\lfloor nr \rfloor+1}^{\lfloor nb\rfloor}d^{2}\left(Y_{i},\hat{\mu}_{[a,r]}\right).\] Define the subsample test statistics \[T_{n}(r;a,b)=\frac{(r-a)(b-r)}{b-a}\left(\hat{V}_{[a,r]}-\hat{V}_{[r,b]} \right),\] and \[T_{n}^{C}(r;a,b)=\frac{(r-a)(b-r)}{b-a}\left(\hat{V}_{[r;a,b]}^{C}-\hat{V}_{[a,r]}-\hat{V}_{[r,b]}\right).\] Note that \(T_{n}(r;a,b)\) and \(T_{n}^{C}(r;a,b)\) are natural extensions of \(T_{n}(r)\) and \(T_{n}^{C}(r)\) from two-sample testing problem to change-point detection problem by viewing \(\{Y_{t}\}_{t=\lfloor na\rfloor+1}^{\lfloor nr\rfloor}\) and \(\{Y_{t}\}_{t=\lfloor nr\rfloor+1}^{\lfloor nb\rfloor}\) as two separated samples. Intuitively, the contrast statistics \(T_{n}(r;a,b)\) and \(T_{n}^{C}(r;a,b)\) are expected to attain their maxima (in absolute value) when \(r\) is set at or close to the true change-point location \(\tau\). ### Test Statistics For some trimming parameters \(\eta_{1}\) and \(\eta_{2}\) such that \(\eta_{1}>2\eta_{2}\), and \(\eta_{1}\in(0,1/2)\), in the same spirit of \(D_{n,1}\) and \(D_{n,2}\), and with a bit abuse of notation, we define the testing statistics \[SN_{i}=\max_{\lfloor n\eta_{1}\rfloor\leq k\leq n-\lfloor n\eta_{1}\rfloor}D _{n,i}(k),\quad i=1,2,\] where \[D_{n,1}(k)= \frac{n\left[T_{n}\left(\frac{k}{n};0,1\right)\right]^{2}}{\sum _{l=\lfloor n\eta_{2}\rfloor}^{k-\lfloor n\eta_{2}\rfloor}\left[T_{n}\left( \frac{l}{n};0,\frac{k}{n}\right)\right]^{2}+\sum_{l=k+\lfloor n\eta_{2} \rfloor}^{n-\lfloor n\eta_{2}\rfloor}\left[T_{n}\left(\frac{l}{n};\frac{k}{n},1\right)\right]^{2}},\] \[D_{n,2}(k)= \frac{n\left\{\left[T_{n}\left(\frac{k}{n};0,1\right)\right]^{2} +\left[T_{n}^{C}\left(\frac{k}{n};0,1\right)\right]^{2}\right\}}{L_{n}(k)+R_{ n}(k)},\] with \[L_{n}(k)= \sum_{l=\lfloor n\eta_{2}\rfloor}^{k-\lfloor n\eta_{2}\rfloor} \left\{\left[T_{n}\left(\frac{l}{n};0,\frac{k}{n}\right)\right]^{2}+\left[T_ {n}^{C}\left(\frac{l}{n};0,\frac{k}{n}\right)\right]^{2}\right\},\] \[R_{n}(k)= \sum_{l=k+\lfloor n\eta_{2}\rfloor}^{n-\lfloor n\eta_{2}\rfloor} \left\{\left[T_{n}\left(\frac{l}{n};\frac{k}{n},1\right)\right]^{2}+\left[T_ {n}^{C}\left(\frac{l}{n};\frac{k}{n},1\right)\right]^{2}\right\}.\] The trimming parameter \(\eta_{1}\) plays a similar role as \(\eta\) in two-sample testing problem for stabilizing the estimation effect for relatively small sample sizes, while the additional trimming \(\eta_{2}\) is introduced to ensure that the subsample estimates in the self-normalizers are constructed with the subsample size proportional to \(n\). Furthermore, we note that the self-normalizers here are modified to accommodate for the unknown change-point location, see Shao and Zhang (2010), Zhang and Lavitas (2018) for more discussion. ### Asymptotic Theory **Theorem 4.1**.: _Suppose Assumptions 2.1-2.5 hold. Then, under \(\mathbb{H}_{0}\), we have for \(i=1,2\),_ \[SN_{i}=\max_{\lfloor n\eta_{1}\rfloor\leq k\leq n-\lfloor n\eta_{1}\rfloor}D_{n, i}(k)\Rightarrow\sup_{r\in[\eta_{1},1-\eta_{1}]}\frac{[B(r)-rB(1)]^{2}}{V(r,\eta)}:= \mathcal{S}_{\eta},\] _where \(V(r,\eta)=\int_{\eta_{2}}^{r-\eta_{2}}[B(u)-u/rB(r)]^{2}du+\int_{r+\eta_{2}}^{1 -\eta_{2}}[B(1)-B(u)-(1-u)/(1-r)\{B(1)-B(r)\}]^{2}du\)._ Similar to Theorem 3.1, Theorem 4.1 states that both change-point test statistics have the same pivotal limiting null distribution \(\mathcal{S}_{\eta}\). The test is thus rejected when \(SN_{i}>Q_{\mathcal{S}_{\eta}}(1-\alpha)\), \(i=1,2\), where \(Q_{\mathcal{S}_{\eta}}(1-\alpha)\) denotes the \(1-\alpha\) quantile of \(\mathcal{S}_{\eta}\). In Table 2, we tabulate commonly used critical values under various choices of \((\eta_{1},\eta_{2})\) by simulations. Recall in Theorem 3.2, we have obtained the local power of two-sample tests \(D_{n,1}\) and \(D_{n,2}\) at rate \(n^{-1/2}\). To this end, consider the local alternative \[\mathbb{H}_{an}:V^{(1)}-V^{(2)}=n^{-1/2}\Delta_{V},\quad\text{and}\quad d^{2} (\mu^{(1)},\mu^{(2)})=n^{-1/2}\Delta_{M},\] where \(\Delta_{V}\in\mathbb{R}\) and \(\Delta_{M}\in(0,\infty)\). The following theorem states the asymptotic power behaviors of \(SN_{1}\) and \(SN_{2}\). **Theorem 4.2**.: _Suppose Assumptions 2.1-2.5 (with 2.4 replaced by 3.1) hold. If \(\Delta_{V}\neq 0\) and \(\Delta_{M}\neq 0\) are fixed, then under \(\mathbb{H}_{an}\), if \(\tau\in(\eta_{1},1-\eta_{1})\), then as \(n\to\infty\), we have_ \[\lim_{|\Delta_{V}|\to\infty}\lim_{n\to\infty}\left\{\max_{\lfloor n \eta_{1}\rfloor\leq k\leq n-\lfloor n\eta_{1}\rfloor}D_{n,1}(k)\right\}\to_{p}\infty,\] \[\lim_{\max\{|\Delta_{V}|,|\Delta_{M}\}\to\infty}\lim_{n\to\infty} \left\{\max_{\lfloor n\eta_{1}\rfloor\leq k\leq n-\lfloor n\eta_{1}\rfloor}D _{n,2}(k)\right\}\to_{p}\infty.\] We note that Theorem 4.2 deals with the alternative involving two different sequences before and after the change-point, while Theorem 4.1 only involves one stationary sequence. Therefore, we need to replace Assumption 2.4 by Assumption 3.1. Theorem 4.2 demonstrates that our tests are capable of detecting local alternatives at rate \(n^{-1/2}\). In addition, it is seen from Theorem 4.2 that \(SN_{1}\) is consistent under the local alternative of Frechet variance change as \(|\Delta_{V}|\to\infty\), while \(SN_{2}\) is consistent not only under \(|\Delta_{V}|\to\infty\) but also under the local alternative of Frechet mean change as \(\Delta_{M}\to\infty\). Hence \(SN_{2}\) is expected to capture a wider class of alternatives than \(SN_{1}\), and these results are consistent with findings for two-sample problems in Theorem 3.2. When \(\mathbb{H}_{0}\) is rejected, it is natural to estimate the change-point location by \[\hat{\tau}_{i}=n^{-1}\hat{k}_{i},\quad\hat{k}_{i}=\arg\max_{\lfloor n\eta_{1} \rfloor\leq k\leq n-\lfloor n\eta_{1}\rfloor}D_{n,i}(k),\quad\text{for }i=1,2. \tag{6}\] We will show that the estimators are consistent under the fixed alternative, i.e. \(\mathbb{H}_{a}:V^{(1)}-V^{(2)}=\Delta_{V}.\) Before that, we need to regulate the behaviour of Frechet mean and variance under \(\mathbb{H}_{a}\). Let \[\mu(\alpha)= \arg\min_{\omega\in\Omega}\left\{\alpha\mathbb{E}(d^{2}(Y^{(1)}_ {t},\omega))+(1-\alpha)\mathbb{E}(d^{2}(Y^{(2)}_{t},\omega))\right\},\] \[V(\alpha)= \alpha\mathbb{E}(d^{2}(Y^{(1)}_{t},\mu(\alpha)))+(1-\alpha) \mathbb{E}(d^{2}(Y^{(2)}_{t},\mu(\alpha))),\] be the limiting Frechet mean and variance of two mixture distributions indexed by \(\alpha\in[0,1]\). \begin{table} \begin{tabular}{c c c c} \hline \hline \(\alpha\)\((\eta_{1},\eta_{2})\) & (0.02,0.05) & (0.04,0.1) & (0.05,0.15) \\ \hline 10\% & 30.29 & 32.09 & 33.36 \\ 5\% & 41.31 & 44.36 & 46.50 \\ 1\% & 72.66 & 79.24 & 82.13 \\ 0.5\% & 91.31 & 96.90 & 101.48 \\ \hline \hline \end{tabular} \end{table} Table 2: Simulated critical values \(Q_{\mathcal{S}_{\eta}}(1-\alpha)\) **Assumption 4.1**.: \(\mu(\alpha)\) _is unique for all \(\alpha\in[0,1]\), and_ \[|V^{(2)}-V(\alpha)|\geq\varphi(\alpha),\quad|V^{(1)}-V(\alpha)|\geq\varphi(1- \alpha),\] _such that \(\varphi(\alpha)\geq 0\) is a continuous, strictly increasing function of \(\alpha\in[0,1]\) satisfying \(\varphi(0)=0\) and \(\varphi(1)\leq|\Delta_{V}|\)._ The uniqueness of Frechet mean and variance for mixture distribution is also imposed in Dubey and Muller (2020_a_), see Assumption (A2) therein. Furthermore, Assumption 4.1 imposes a bi-Lipschitz type condition on \(V(\alpha)\), and is used to distinguish the Frechet variance \(V(\alpha)\) under mixture distribution from \(V^{(1)}\) and \(V^{(2)}\). **Theorem 4.3**.: _Suppose Assumptions 2.1-2.5 (with 2.4 replaced by 3.1), and Assumption 4.1 hold. Under \(\mathbb{H}_{a}\), for \(i=1,2\), we have \(\hat{\tau}_{i}\to_{p}\tau\), where \(\hat{\tau}_{i}\) is defined in (6)._ Theorem 4.3 obtains the consistency of \(\hat{\tau}_{i}\), \(i=1,2\) when Frechet variance changes. We note that it is very challenging to derive the consistency result when \(\mathbb{H}_{a}\) is caused by Frechet mean change alone, which is partly due to the lack of explicit algebraic structure on \((\Omega,d)\) that we can exploit and the use of self-normalization. We leave this problem for future investigation. ### Wild Binary Segmentation To detect multiple change-points and identify the their locations given the time series \(\{Y_{t}\}_{t=1}^{n}\), we can combine our change-point test with the so-called wild binary segmentation (WBS) (Fryzlewicz, 2014). The testing procedure in conjunction with WBS can be described as follows. Let \(I_{M}=\{(s_{m},e_{m})\}_{m=1,2,\ldots,M}\), where \(s_{m},e_{m}\) are drawn uniformly from \(\{0,1/n,1/(n-1),\ldots,1/2,1\}\) such that \(\lceil ne_{m}\rceil-\lfloor ns_{m}\rfloor\geq 20\). Then we simulate \(J\) i.i.d samples, each sample is of size \(n\), from multivariate Gaussian distribution with mean \(0\) and identity covariance matrix, i.e., for \(j=1,2,\ldots,J\), \(\{Z_{i}^{j}\}_{i=1}^{n}\stackrel{{ i.i.d.}}{{\sim}}\mathcal{N}(0,1)\). For the \(j\)th sample \(\{Z_{i}^{j}\}_{i=1}^{n}\), let \(\widetilde{D}(k;s_{m},e_{m};\{Z_{i}^{j}\}_{i=1}^{n})\) be the statistic \(D_{\lfloor ne_{m}\rfloor-\lceil ns_{m}\rceil+1,2}(k)\) that is computed based on sample \(\{Z_{\lceil ns_{m}\rceil}^{j},Z_{\lceil ns_{m}\rceil+1}^{j},\ldots,\,Z_{ \lfloor ne_{m}\rfloor}^{j}\}\) and \[\xi_{j}=\max_{1\leq m\leq M}\max_{\lfloor\tilde{n}_{m}=\eta_{1}\rfloor\leq k \leq\tilde{n}_{m}-\lfloor\tilde{n}_{m}\eta_{1}\rfloor}\widetilde{D}(k;s_{m},e _{m};\{Z_{i}^{j}\}_{i=1}^{n}),\] where \(\tilde{n}_{m}=\lceil ne_{m}\rceil-\lfloor ns_{m}\rfloor+1\). Setting \(\xi\) as the 95% quantile of \(\xi_{1},\xi_{2},\ldots,\xi_{J}\), we can apply our test in combination with WBS algorithm to the data sequence \(\{Y_{1},Y_{2},\ldots Y_{n}\}\) by running Algorithm 1 as WBS\((0,1,\xi)\). The main rational behind this algorithm is that we exploit the asymptotic pivotality of our SN test statistic, and the limiting null distribution of our test statistic applied to random objects is identical to that applied to i.i.d \(\mathcal{N}(0,1)\) random variables. Thus this threshold is expected to well approximate the 95% quantile of the finite sample distribution of the maximum SN test statistic on the \(M\) random intervals under the null. ``` 1FunctionWBS(\(s\), \(e\), \(\xi\)): 2if\(\lceil ns\rceil-\lfloor ne\rfloor<20\)then 3STOP; 4else 5\(\mathcal{M}_{s,e}\leftarrow\) set of those \(1\leq m\leq M\) for which \(s\leq s_{m},e_{m}\leq e\); 6\(m_{0}\leftarrow\arg\max_{m\in\mathcal{M}_{s,e}}\max_{\lfloor\tilde{n}_{m}\eta_ {1}\rfloor\leq k\leq\tilde{n}_{m}-\lfloor\tilde{n}_{m}\eta_{1}\rfloor} \widetilde{D}(k;s_{m},e_{m};\{Y_{i}\}_{i=1}^{n})\), where \(\tilde{n}_{m}=\lceil ne_{m}\rceil-\lfloor ns_{m}\rfloor+1\); 7\(k_{0}\leftarrow\max_{\lfloor\tilde{n}_{0}\eta_{1}\rfloor\leq k\leq\tilde{n}_ {0}-\lfloor\tilde{n}_{0}\eta_{1}\rfloor}\widetilde{D}(k;s_{m_{0}},e_{m_{0}};\{ Y_{i}\}_{i=1}^{n})\), where \(\tilde{n}_{0}=\lceil ne_{m_{0}}\rceil-\lfloor ns_{m_{0}}\rfloor+1\); 8if\(\widetilde{D}(k_{0};s_{m_{0}},e_{m_{0}};\{Y_{i}\}_{i=1}^{n})>\xi\)then 9 add \(k_{0}\) to the set of estimated change points; 10WBS(\(s\),\(k_{0}\)/\(n\),\(\xi\)); 11WBS(\(k_{0}\)/\(n\),\(e\),\(\xi\)); 12else 13STOP 14 15 end if 16 17 end for ``` **Algorithm 1**WBS Simulation In this section, we examine the size and power performance of our proposed tests in two-sample testing (Section 5.1), change-point detection (Section 5.2) problems, and provide simulation results of WBS based change-point estimation (Section 5.3). We refer to Appendix C with additional simulation results regarding comparison with FPCA approach for two-sample tests in functional time series. The time series random objects considered in this section include (i). univariate Gaussian probability distributions equipped with 2-Wasserstein metric \(d_{W}\); (ii). graph Laplacians of weighted graphs equipped with Frobenius metric \(d_{F}\); (iii). covariance matrices (Dryden et al., 2009) equipped with log-Euclidean metric \(d_{E}\). Numerical experiments are conducted according to the following data generating processes (DGPs): 1. Gaussian univariate probability distribution: we consider \[Y_{t}^{(1)} =\mathcal{N}(\arctan(U_{t,1}),[\arctan(U_{t,1}^{2})+1]^{2}),\] \[Y_{t}^{(2)} =\mathcal{N}(\arctan(U_{t,2})+\delta_{1},\delta_{2}^{2}[\arctan (U_{t,2}^{2})+1]^{2}).\] 2. graph Laplacians: each graph has \(N\) nodes (\(N=10\) for two-sample test and \(N=5\) for change-point test) that are categorized into two communities with \(0.4N\) and \(0.6N\) nodes respectively, and the edge weight for the first community, the second community and between community are set as \(0.4+\arctan(U_{t,1}^{2})\), \(0.2+\arctan(U_{t,1}^{{}^{\prime}2})\), \(0.1\) for the first sample \(Y_{t}^{(1)}\), and \(\delta_{2}[0.4+\arctan(U_{t,2}^{2})]\), \(\delta_{2}[0.2+\arctan(U_{t,2}^{{}^{\prime}2})],0.1+\delta_{1}\) for the second sample \(Y_{t}^{(2)}\), respectively; 3. covariance matrix: \(Y_{t}^{(i)}=(2I_{3}+Z_{t,i})(2I_{3}+Z_{t,i})^{\top}\), \(i=1,2\), such that all the entries of \(Z_{t,1}\) (resp. \(Z_{t,2}\)) are independent copies of \(\arctan(U_{t,1})\) (resp. \(\delta_{1}+\delta_{2}\arctan(U_{t,2})\)). For DGP (i)-(iii), \((U_{t,1},U_{t,2})^{\top}\) (with independent copies \((U_{t,1}^{\prime},U_{t,2}^{\prime})^{\top}\)) are generated according to the following VAR(1) process, \[\begin{pmatrix}U_{t,1}\\ U_{t,2}\end{pmatrix}=\rho\begin{pmatrix}U_{t-1,1}\\ U_{t-1,2}\end{pmatrix}+\epsilon_{t},\quad\epsilon_{t}\overset{i.i.d.}{\sim} \mathcal{N}\left(0,\begin{pmatrix}1&a\\ a&1\end{pmatrix}\right); \tag{7}\] where \(a\in\{0,0.5\}\) measures the cross-dependence, and \(\rho\in\{-0.4,0,0.4,0.7\}\) measures the temporal dependence within each sample (or each segment in change-point testing). For size evaluation in change-point tests, only \(\{Y_{t}^{(1)}\}\) is used. Furthermore, \(\delta_{1}\in[0,0.3]\) and \(\delta_{2}\in[0.7,1]\) are used to characterize the change in the underlying distributions. In particular, \(\delta_{1}\) can only capture the location shift, while \(\delta_{2}\) measures the scale change, and the case \((\delta_{1},\delta_{2})=(0,1)\) corresponds to \(\mathbb{H}_{0}\). For DGP (i) and (ii), i.e. Gaussian distribution with 2-Wasserstein metric \(d_{W}\) and graph Laplacians with Euclidean metric \(d_{F}\), the location parameter \(\delta_{1}\) directly shifts Frechet mean while keeping Frechet variance constant; and the scale parameter \(\delta_{2}\) works on Frechet variance only while holding the Frechet mean fixed. For DGP (iii), i.e. covariance matrices, the log-Euclidean metric \(d_{E}\) operates nonlinearly, and thus changes in either \(\delta_{1}\) or \(\delta_{2}\) will be reflected on changes in both Frechet mean and variance. The comparisons of our proposed methods with Dubey and Muller (2019) for two-sample testing and Dubey and Muller (2020_a_) for change-point testing are also reported, which are generally referred to as DM. ### Two-Sample Test For the two-sample testing problems, we set the sample size as \(n_{1}=n_{2}\in\{50,100,200,400\}\), and trimming parameter as \(\eta=0.15\). Table 3 presents the sizes of our tests and DM test for three DGPs based on 1000 Monte Carlo replications at nominal significance level \(\alpha=5\%\). In all three subtables, we see that: (a) both \(D_{1}\) and \(D_{2}\) can deliver reasonable size under all settings; (b) DM suffers from severe size distortion when dependence magnitude among data is strong; (c) when two samples are dependent, i.e. \(a=0.5\), DM is a bit undersized even when data is temporally independent. These findings suggest that our SN-based tests provide more accurate size relative to DM when either within-group temporal dependence or between-group dependence is exhibited. In Figure 1, we further compare size-adjusted power of our SN-based tests and DM test, in view of the size-distortion of DM. That is, the critical values are set as the empirical 95% quantiles of the test statistics obtained in the size evaluation, so that all curves start from the nominal level at 5%. For all settings, we note that \(D_{2}\) is more powerful than (or equal to) \(D_{1}\). In particular, \(D_{1}\) has trivial power in DGP (i) and (ii) when only Frechet mean difference is present. In addition, \(D_{2}\) is more powerful in detecting Frechet mean differences than DM for DGP (i) and (ii), and beats DM in DGP (i) for detecting Frechet variance differences, although it is slightly worse than DM in detecting Frechet variance differences for DGP (ii) and (iii). Due to robust size and power performance, we thus recommend \(D_{2}\) for practical purposes. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c c c c c} \hline \hline \multicolumn{13}{c}{Gaussian Distribution based on \(d_{W}\)} & \multicolumn{6}{c}{Graph Laplacian based on \(d_{F}\)} & \multicolumn{6}{c}{Covariance Matrix based on \(d_{E}\)} \\ \hline \(\rho\) & \(n_{i}\) & \(D_{1}\) & \(D_{2}\) & DM & \multicolumn{6}{c}{\(D_{1}\)} & \(D_{2}\) & DM & \multicolumn{6}{c}{\(D_{1}\)} & \(D_{2}\) & DM \\ \hline a & & 0 & 0.5 & 0 & 0.5 & 0 & 0.5 & 0 & 0.5 & 0 & 0.5 & 0 & 0.5 & 0 & 0.5 & 0 & 0.5 \\ \hline -0.4 & 50 & 6.1 & 7.1 & 6.2 & 7.1 & 10.1 & 7.0 & 5.6 & 7.2 & 4.4 & 5.5 & 9.4 & 6.7 & 6.4 & 6.0 & 6.1 & 6.4 & 11.4 & 5.5 \\ & 100 & 4.6 & 5.2 & 4.6 & 5.2 & 7.4 & 6.7 & 5.8 & 5.3 & 5.1 & 4.8 & 8.0 & 5.4 & 5.8 & 6.0 & 5.7 & 6.0 & 9.2 & 6.0 \\ & 200 & 5.0 & 5.1 & 5.1 & 5.2 & 8.9 & 6.4 & 5.7 & 4.8 & 5.3 & 4.1 & 6.7 & 4.7 & 5.7 & 5.7 & 5.8 & 8.3 & 5.8 \\ & 400 & 4.1 & 5.1 & 4.2 & 5.1 & 8.2 & 6.0 & 4.4 & 4.2 & 4.2 & 4.2 & 6.5 & 5.3 & 5.0 & 5.8 & 4.9 & 5.9 & 4.5 \\ \hline \multirow{3}{*}{0} & 50 & 4.5 & 5.5 & 4.8 & 4.9 & 5.0 & 4.2 & 4.6 & 5.9 & 3.9 & 4.9 & 5.3 & 6.6 & 5.9 & 5.8 & 5.3 & 5.7 & 5.9 & 4.0 \\ & 100 & 3.9 & 4.9 & 3.8 & 4.8 & 4.4 & 3.2 & 5.8 & 4.6 & 4.7 & 4.4 & 4.8 & 3.3 & 5.0 & 5.8 & 4.8 & 5.6 & 5.0 & 3.6 \\ & 200 & 5.9 & 6.0 & 6.0 & 5.9 & 5.9 & 2.4 & 5.1 & 6.1 & 5.0 & 5.8 & 4.8 & 2.8 & 6.2 & 4.9 & 6.2 & 4.6 & 5.0 & 2.4 \\ & 400 & 5.5 & 4.8 & 5.3 & 4.8 & 4.5 & 2.8 & 4.6 & 4.0 & 4.6 & 3.7 & 4.8 & 3.5 & 5.7 & 4.9 & 5.7 & 4.8 & 4.8 & 2.7 \\ \hline \multirow{3}{*}{0.4} & 50 & 5.0 & 5.2 & 5.1 & 4.4 & 9.7 & 6.8 & 4.6 & 4.6 & 4.3 & 3.9 & 7.6 & 6.2 & 7.0 & 6.3 & 7.0 & 5.3 & 12.8 & 7.1 \\ & 100 & 6.5 & 4.7 & 5.7 & 5.0 & 9.8 & 5.1 & 5.8 & 5.8 & 5.2 & 5.1 & 8.2 & 5.8 & 5.9 & 6.4 & 5.8 & 6.3 & 9.4 & 6.5 \\ & 200 & 4.8 & 4.4 & 4.7 & 4.2 & 10.7 & 4.8 & 6.2 & 5.0 & 5.3 & 4.7 & 6.2 & 5.7 & 6.5 & 5.8 & 6.3 & 5.2 & 10.0 & 6.7 \\ & 400 & 5.3 & 5.6 & 4.8 & 5.5 & 9.3 & 6.0 & 6.2 & 4.9 & 5.7 & 4.7 & 8.5 & 5.3 & 5.8 & 4.6 & 5.6 & 4.1 & 10.2 & 6.6 \\ \hline \multirow{3}{*}{0.7} & 50 & 6.4 & 8.1 & 7.1 & 7.1 & 30.1 & 21.1 & 4.8 & 5.9 & 6.1 & 6.2 & 12.1 & 9.7 & 6.3 & 7.8 & 8.3 & 7.4 & 33.3 & 20.9 \\ & 100 & 7.8 & 6.2 & 7.1 & 5.1 & 27.9 & 18.4 & 5.0 & 4.6 & 5.3 & 5.3 & 12.0 & 8.2 & 6.5 & 7.5 & 5.8 & 6.7 & 26.7 & 18.5 \\ \cline{1-1} & 200 & 6.3 & 4.2 & 5.3 & 3.9 & 23.9 & 17.1 & 4.9 & 5.4 & 5.3 & 4.6 & 10.0 & 7.0 & 5.8 & 6.2 & 5.5 & 5.3 & 24.5 & 21.7 \\ \cline{1-1} & 400 & 4.6 & 5.7 & 3.7 & 4.8 & 23.6 & 18.7 & 4.9 & 5.4 & 4.2 & 4.8 & 10.3 & 7.3 & 4.5 & 5.5 & 4.3 & 4.5 & 24.2 & 19.3 \\ \hline \end{tabular} \end{table} Table 3: Size Performance (\(\times 100\%\)) at \(\alpha=\)5% for all three DGPs. Figure 1: Size-Adjusted Power (\(\times 100\%\)) at \(\alpha=5\%\), Two-Sample Test for all three DGPs, \(n_{i}=400\) and \(\rho=0.4\) ### Change-Point Test For the change-point testing problems, we set the sample size \(n\in\{200,400,800\}\), and trimming parameter as \((\eta_{1},\eta_{2})=(0.15,0.05)\). Table 4 outlines the size performance of our tests and DM test for three DGPs based on 1000 Monte Carlo replications at nominal significance level \(\alpha=5\%\). DM tests based asymptotic critical value and bootstraps (with 500 replications) are denoted as \(\text{DM}^{a}\) and \(\text{DM}^{b}\), respectively. From Table 4, we find that \(SN_{1}\) always exhibits accurate size while \(SN_{2}\) is a bit conservative. As a comparison, the tests based on \(\text{DM}^{a}\) and \(\text{DM}^{b}\) suffer from severe distortion when strong temporal dependence is present, although \(\text{DM}^{b}\) is slightly better than \(\text{DM}^{a}\) in DGP (i) and (ii). In Figure 2, we plot the size-adjusted power of our tests and DM test based on bootstrap calibration. Here, the size-adjusted power of \(\text{DM}^{b}\) is implemented following Dominguez & Lobato (2000). Similar to the findings in change-point tests, we find that \(SN_{1}\) has trivial power in DGP (i) and (ii) when there is only Frechet mean change and is worst among all three tests. Furthermore, \(SN_{2}\) is slightly less powerful compared to DM and the power loss is moderate. Considering its better size control, \(SN_{2}\) is preferred. \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{Gaussian Distribution based on \(d_{W}\)} & \multicolumn{3}{c}{Graph Laplacian based on \(d_{F}\)} & \multicolumn{3}{c}{Covariance Matrix based on \(d_{E}\)} \\ \cline{2-13} \(\rho\) & \(n\) & \(SN_{1}\) & \(SN_{2}\) & \(\text{DM}^{a}\) & \(\text{DM}^{b}\) & \(SN_{1}\) & \(SN_{2}\) & \(\text{DM}^{a}\) & \(\text{DM}^{b}\) & \(SN_{1}\) & \(SN_{2}\) & \(\text{DM}^{a}\) & \(\text{DM}^{b}\) \\ \hline & 200 & 5.5 & 3.9 & 15.0 & 1.9 & 5.8 & 3.4 & 28.9 & 3.4 & 6.0 & 4.6 & 3.5 & 5.0 \\ -0.4 & 400 & 5.5 & 4.5 & 13.1 & 6.7 & 4.6 & 1.8 & 18.9 & 4.5 & 5.6 & 5.1 & 3.6 & 5.8 \\ & 800 & 4.4 & 3.9 & 13.5 & 10.3 & 5.1 & 3.4 & 12.3 & 7.2 & 4.8 & 4.4 & 3.7 & 5.3 \\ \hline & 200 & 4.9 & 2.2 & 9.2 & 1.0 & 5.1 & 1.6 & 13.1 & 1.0 & 6.2 & 3.5 & 3.1 & 4.1 \\ 0 & 400 & 5.4 & 2.3 & 5.7 & 2.3 & 5.6 & 1.7 & 9.4 & 2.3 & 5.3 & 3.7 & 4.2 & 5.8 \\ & 800 & 5.4 & 3.6 & 6.1 & 4.6 & 5.0 & 2.6 & 6.5 & 3.6 & 4.7 & 3.9 & 3.4 & 5.7 \\ \hline & 200 & 4.3 & 3.9 & 44.7 & 17.1 & 5.9 & 2.4 & 27.7 & 3.6 & 5.4 & 1.1 & 7.9 & 10.1 \\ 0.4 & 400 & 4.6 & 2.2 & 29.8 & 17.1 & 6.3 & 1.9 & 17.9 & 4.0 & 5.2 & 1.7 & 6.1 & 9.2 \\ & 800 & 6.6 & 2.1 & 20.4 & 17.3 & 5.5 & 2.1 & 13.4 & 6.0 & 5.3 & 3.7 & 6.8 & 8.3 \\ \hline & 200 & 5.9 & 10.6 & 91.4 & 66.0 & 5.4 & 5.8 & 68.9 & 20.2 & 7.9 & 0.3 & 29.5 & 35.3 \\ 0.7 & 400 & 4.1 & 5.8 & 84.5 & 69.7 & 5.3 & 4.0 & 53.5 & 22.5 & 5.9 & 0.7 & 22.7 & 28.9 \\ & 800 & 5.6 & 3.9 & 77.8 & 70.4 & 4.4 & 2.3 & 40.8 & 25.8 & 5.5 & 1.4 & 20.4 & 26.1 \\ \hline \hline \end{tabular} \end{table} Table 4: Size Performance (\(\times 100\%\)) at \(\alpha=\)5% We further provide numerical evidence for the estimation accuracy by considering the alternative hypothesis of \(\delta_{1}=1-\delta_{2}=0.3\) with true change-point location at \(\tau=0.5\) for DGP (i)-(iii) in the main context. When varying sample size \(n\in\{400,800,1600\}\), we find that for all DGPs, the histograms of \(\hat{\tau}\) (based on SN\({}_{2}\)) plotted in Figure 3 get more concentrated around the truth \(\tau=0.5\), when sample size increases, which is consistent with our theoretical consistency of \(\hat{\tau}\). Figure 2: Size-Adjusted Power (\(\times 100\%\)) at \(\alpha=5\%\), Change-Point Test for all three DGPs, \(n=400\), \(\tau=0.5\), and \(\rho=0.4\) ### Multiple Change Point Detection For simulations of multiple change point estimation, we consider non-Euclidean time series of length \(n=500\) generated from the following two models. These models are the same as before, but reformulated for better presentation purpose. * Gaussian univariate probability distribution: \(Y_{t}=\mathcal{N}(\arctan(U_{t})+\delta_{t,1},\,\delta_{t,2}^{2}[\arctan(U_{t} ^{2})+1]^{2}).\) * covariance matrix: \(Y_{t}=(2I_{3}+Z_{t})(2I_{3}+Z_{t})^{\top}\) with \(Z_{t}=\delta_{t,1}+\delta_{t,2}\arctan(U_{t}).\) Here, \(U_{t}\) are generated according to the AR(1) process \(U_{t}=\rho U_{t-1}+\epsilon_{t}\), \(\epsilon_{t}\stackrel{{ i.i.d.}}{{\sim}}\mathcal{N}(0,1).\) There are 3 change points at \(t=110,250\) and \(370\). The changes point locations are reflected in the definitions of \(\{\delta_{t,1}\}\) and \(\{\delta_{t,2}\}\), where \[\delta_{t,1}=a_{1}\mathbb{I}_{\{n\leq 110\}}+a_{2}\mathbb{I}_{\{110<n\leq 250 \}}+a_{3}\mathbb{I}_{\{250<n\leq 370\}}+a_{4}\mathbb{I}_{\{370<n\leq 500\}},\] Figure 3: Histogram of Estimated Change-Point Locations for all three DGPs with \(\delta_{1}=0.3,\delta_{2}=0.7\), \(\tau=0.5\), \(\rho=0.4\). Upper: \(n=400\). Middle: \(n=800\). Lower: \(n=1600\). \[\delta_{t,2}=b_{1}\mathbb{I}_{\{n\leq 110\}}+b_{2}\mathbb{I}_{\{110<n\leq 250\}}+b_{3} \mathbb{I}_{\{250<n\leq 370\}}+b_{4}\mathbb{I}_{\{370<n\leq 500\}}.\] For each model, we consider \(3\) cases that are differentiated by the magnitudes of \(a_{i},b_{i},i=1,2,3,4\). For the data generating model of Gaussian distributions, we set * [Case 1] \((a_{1},a_{2},a_{3},a_{4})=(0,0.7,0,0.8),(b_{1},b_{2},b_{3},b_{4})=(1,1.5,0.7,1.4)\); * [Case 2] \((a_{1},a_{2},a_{3},a_{4})=(0,0.2,0,0.3),(b_{1},b_{2},b_{3},b_{4})=(0.5,1.5,0,4,1.4)\); * [Case 3] \((a_{1},a_{2},a_{3},a_{4})=(0,0.5,1.5,3.3),(b_{1},b_{2},b_{3},b_{4})=(0.2,1. 5,3.8,6.5)\). As for the data generating model of covariance matrices, we set * [Case 1] \((a_{1},a_{2},a_{3},a_{4})=(0,1.2,0,1.3),(b_{1},b_{2},b_{3},b_{4})=(0.8,1.5,0.7,1.6)\); * [Case 2] \((a_{1},a_{2},a_{3},a_{4})=(0,1,0,1)\), \((b_{1},b_{2},b_{3},b_{4})=(0.5,2,0.4,1.9)\); * [Case 3] \((a_{1},a_{2},a_{3},a_{4})=(0,2,3.9,5.7)\), \((b_{1},b_{2},b_{3},b_{4})=(0.2,0.7,1.3,2)\). Cases 1 and 2 correspond to non-monotone changes and Case 3 considers the monotone change. Here, our method described in Section 4.3 is denoted as WBS-\(SN_{2}\) (that is, a combination of WBS and our \(SN_{2}\) test statistic). The method DM in conjunction with binary segmentation, referred as BS-DM, is proposed in Dubey & Muller (2019) and included in this simulation for comparison purpose. In addition, our statistic \(SN_{2}\) in combination with binary segmentation, denoted as BS-\(SN_{2}\), is implemented and included as well. The critical values for BS-DM and BS-\(SN_{2}\) are obtained from their asymptotic distributions respectively. The simulation results are shown in Table 5, where we present the ARI (adjusted rand index) and number of detected change points for two dependence levels \(\rho=0.3,0.6\). Note that \(\text{ARI}\in[0,1]\) measures the accuracy of change point estimation and larger ARI corresponds to more accurate estimation. We summarize the main findings as follows. (a) WBS-\(SN_{2}\) is the best method in general as it can accommodate both monotonic and non-monotic changes, and appears quite robust to temporal dependence. For Cases 1 and 2, we see that BS-\(SN_{2}\) does not work for non-monotone changes, due to the use of binary segmentation procedure. (b) BS-DM tends to have more false discoveries comparing to the other methods. This is expected, as method DM is primarily proposed for i.i.d data sequence and exhibit serious oversize when there is temporal dependence in Section 5.2. (c) When we increase \(\rho=0.3\) to \(\rho=0.6\), the performance of WBS-\(SN_{2}\) appears quite stable for both distributional time series and covariance matrix time series. ## 6 Applications In this section, we present two real data illustrations, one for two sample testing and the other for change-point detection. Both datasets are in the form of non-Euclidean time series and neither seems to be analyzed before by using techniques that take into account unknown temporal dependence. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multirow{2}{*}{Case} & \multirow{2}{*}{\(\rho\)} & \multirow{2}{*}{Method} & \multicolumn{6}{c}{\# of change points detected} & \multirow{2}{*}{ARI} \\ \cline{3-4} \cline{6-9} & & & 0 & 1 & 2 & 3 & 4 & \(\geq 5\) & \\ \hline \multirow{9}{*}{\begin{tabular}{c} Covariance \\ Matrix \\ \end{tabular} } & \multirow{3}{*}{1} & \multirow{3}{*}{0.3} & WBS-\(SN_{2}\) & 0 & 0 & 0 & 178 & 22 & 0 & 0.971 \\ & & & BS-\(SN_{2}\) & 200 & 0 & 0 & 0 & 0 & 0 \\ & & & BS-DM & 0 & 0 & 0 & 23 & 15 & 162 & 0.836 \\ \cline{2-9} & \multirow{3}{*}{0.6} & WBS-\(SN_{2}\) & 0 & 0 & 7 & 116 & 54 & 23 & 0.907 \\ & & & BS-\(SN_{2}\) & 200 & 0 & 0 & 0 & 0 & 0 \\ & & & BS-DM & 0 & 0 & 0 & 2 & 0 & 198 & 0.516 \\ \cline{2-9} & \multirow{3}{*}{2} & \multirow{3}{*}{0.3} & WBS-\(SN_{2}\) & 0 & 0 & 0 & 169 & 28 & 3 & 0.980 \\ & & & BS-\(SN_{2}\) & 200 & 0 & 0 & 0 & 0 & 0 & 0 \\ & & & BS-DM & 0 & 0 & 0 & 20 & 14 & 166 & 0.834 \\ \cline{2-9} & \multirow{3}{*}{0.6} & WBS-\(SN_{2}\) & 0 & 0 & 0 & 101 & 60 & 39 & 0.943 \\ & & & BS-\(SN_{2}\) & 200 & 0 & 0 & 0 & 0 & 0 & 0 \\ & & & BS-DM & 0 & 0 & 0 & 1 & 1 & 198 & 0.526 \\ \cline{2-9} & \multirow{3}{*}{0.3} & WBS-\(SN_{2}\) & 0 & 0 & 0 & 172 & 27 & 1 & 0.981 \\ & & & BS-\(SN_{2}\) & 0 & 0 & 59 & 82 & 18 & 41 & 0.808 \\ & & & BS-DM & 0 & 0 & 0 & 39 & 25 & 136 & 0.893 \\ \cline{2-9} & \multirow{3}{*}{0.6} & WBS-\(SN_{2}\) & 0 & 0 & 0 & 112 & 65 & 23 & 0.955 \\ & & & BS-\(SN_{2}\) & 0 & 0 & 45 & 64 & 41 & 50 & 0.828 \\ & & & BS-DM & 0 & 0 & 0 & 2 & 4 & 194 & 0.666 \\ \hline \multirow{9}{*}{ \begin{tabular}{c} Covariance \\ Matrix \\ \end{tabular} } & \multirow{3}{*}{0.3} & WBS-\(SN_{2}\) & 0 & 0 & 0 & 200 & 0 & 0 & 0.991 \\ & & & BS-\(SN_{2}\) & 200 & 0 & 0 & 0 & 0 & 0 \\ & & & BS-DM & 0 & 0 & 10 & 9 & 181 & 0.807 \\ \cline{2-9} & \multirow{3}{*}{0.6} & WBS-\(SN_{2}\) & 0 & 0 & 8 & 192 & 0 & 0 & 0.974 \\ & & & BS-\(SN_{2}\) & 199 & 1 & 0 & 0 & 0 & 0.002 \\ & & & BS-DM & 0 & 0 & 0 & 0 & 200 & 0.392 \\ \cline{2-9} & \multirow{3}{*}{0.3} & WBS-\(SN_{2}\) & 0 & 0 & 21 & 178 & 1 & 0 & 0.954 \\ & & & BS-\(SN_{2}\) & 200 & 0 & 0 & 0 & 0 & 0 \\ & & & BS-\(SN_{2}\) & 200 & 0 & 0 & 0 & 0 & 0 \\ & & & BS-DM & 5 & 0 & 0 & 0 & 195 & 0.510 \\ \cline{2-9} & \multirow{3}{*}{0.3} & WBS-\(SN_{2}\) & 0 & 0 & 131 & 69 & 0 & 0 & 0.806 \\ & & & BS-\(SN_{2}\) & 24 & 0 & 124 & 3 & 46 & 3 & 0.468 \\ \cline{2-9} & & & BS-DM & 0 & 0 & 1 & 5 & 10 & 184 & 0.781 \\ \cline{2-9} & \multirow{3}{*}{0.6} & WBS-\(SN_{2}\) & 0 & 2 & 181 & 16 & 1 & 0 & 0.728 \\ \cline{2-9} & & & BS-\(SN_{2}\) & 93 & 1 & 71 & 18 & 17 & 0 & 0.310 \\ \cline{2-9} & & & BS-DM & 0 & 0 & 0 & 0 & 200 & 0.437 \\ \hline \end{tabular} \end{table} Table 5: Simulation results for multiple change point for sequential Gaussian distributions and covariance matrices based on 200 Monte Carlo repetitions. ### Two sample tests **Mortality data.** Here we are interested in comparing the longevity of people living in different countries of Europe. From the Human Mortality Database ([https://www.mortality.org/Home/Index](https://www.mortality.org/Home/Index)), we can obtain a time series that consists of yearly age-at-death distributions for each country. We shall focus on distributions for female from year 1960 to 2015 and there are 26 countries included in the analysis after exclusion of countries with missing data. Pair-wise two sample tests between the included countries are performed using our statistic \(D_{2}\) to understand the similarity of age-at-death distributions between different countries. The resulting \(p\)-value matrix is plotted in Figure 4 (left). To better present the testing results and gain more insights, we define the dissimilarity between two given countries by subtracting each \(p\)-value from 1. Treating these dissimilarities as "distances", we apply multidimensional scaling to "project" each country onto two dimensional plane for visualization. See Figure 4 (right) for the plot of "projected" countries. It appears that several west European countries, including UK, Belgium, Luxembourg, Ireland, and Austria, and Denmark, form a cluster; whereas several central and eastern European countries, including Poland, Latvia, Russian, Bulgaria, Lithuania and Czechia share similar distributions. We suspect the similarity in Mortality distribution is much related to the similarity in their economic development and healthcare system, less dependent on the geographical locations. ### Change point detection **Cryptocurrency data.** Detecting change points in the Pearson correlation matrices for a set of interested cryptocurrencies can uncover structural breaks in the correlation of these cryptocurrencies and can play an important role in the investors' investment decisions. Here, we construct the daily Pearson correlation matrices from minute prices of Bitcoin, Doge coin, Cardano, Monero and Chainlink for year 2021. The cryptocurrency data can be downloaded at [https://www.cryptodatadowmload.com/analytics/correlation-heatmap/](https://www.cryptodatadowmload.com/analytics/correlation-heatmap/). See Figure 5 for the plot of time series of pairwise correlations. Three methods, namely, our \(SN_{2}\) test combined with WBS (WBS-\(SN_{2}\)), \(SN_{2}\) test combined with binary segmentation (BS-\(SN_{2}\)), and DM test of Dubey & Muller (2019) in conjunction with binary segmentation (BS-DM), are applied to detect potential change points for this time series, Method WBS-\(SN_{2}\) detects an abrupt change on day 2021-05-17 and method BS-\(SN_{2}\) detects a change point on day 2021-04-29. By comparison, more than 10 change points are detected by BS-DM and we suspect that many Figure 4: Application to Mortality data. The left figure is the plot of \(p\)-value matrix, where the 26 countries, Austria, Belarus, Belgium, Bulgaria, Czechia, Denmark, Estonia, Finland, France, Hungary, Iceland, Ireland, Italy, Latvia, Lithuania, Luxembourg, Netherlands, Norway, Poland, Portugal, Russia, Slovakia, Spain, Sweden, Switzerland, UK, are labeled by \(1,2,\ldots,26\) respectively. The right figure is the plot of points from multidimensional scaling with dissimilarity between any countries defined as subtracting the corresponding \(p\)-value from 1. of them are false discoveries (see Section 5.3 for simulation evidence of BS-DM's tendency of over-detection). The change point in mid-May 2021 is well expected and corresponds to a major crush in crypto market that wiped out 1 trillion dollars. The major causes of this crush are the withdrawal of Tesla's commitment to accept Bitcoin as payment and warnings regarding cryptocurrency sent by Chinese central bank to the financial institutes and business in China. Since this major crush, the market has been dominated by negative sentiments and fear for a recession. We refer the following CNN article for some discussions about this crush [https://www.cnn.com/2021/05/22/investing/crypto-crash-bitcoin-regulation/index.html](https://www.cnn.com/2021/05/22/investing/crypto-crash-bitcoin-regulation/index.html). ## 7 Conclusion Motivated by increasing availability of non-Euclidean time series data, this paper considers two-sample testing and change-point detection for temporally dependent random objects. Our inferential framework builds upon the nascent SN technique, which has been mainly developed for conventional Euclidean time series or functional time series in Hilbert space, and the extension of SN to the time series of objects residing in metric spaces is the first in the literature. The proposed tests are robust to weak temporal dependence, enjoy effortless tuning and are broadly applicable to many non-Euclidean data types with easily verified technical conditions. On the theory front, we derive the asymptotic distributions of our two sample and change-point tests under both null and local alternatives of order \(O(n^{-1/2})\). Furthermore, for change-point problem, the consistency of the change-point estimator is established under mild conditions. Both simulation and real data illustrations demonstrate the robustness of our test with respect to temporal dependence and the effectiveness in testing and estimation problems. To conclude, we mention several interesting but unsolved problems for analyzing non-Euclidean time series. For example, although powerful against Frechet mean differences/changes, the testing statistics developed in this paper rely on the asymptotic behaviors of Frechet (sub)sample variances. It is imperative to construct formal tests that can target directly at Frechet mean differences/changes. For the change-point detection problem in non-Euclidean data, the existing literature, including this paper, only derives the consistency of the change-point estimator. It would be very useful to derive explicit convergence rate and the asymptotic distribution of the change-point estimator, which is needed for confidence interval construction. Also it would be interesting to study how to detect structural changes when the underlying distributions of random objects change smoothly. We leave these topics for future investigation. Figure 5: Plot of time series of pairwise correlations. Red vertical line indicates detected change point by WBS-\(SN_{2}\). Technical Proofs ### Auxiliary Lemmas We first introduce some notations. We denote \(o_{up}(\cdot)\) as the uniform \(o_{p}(\cdot)\) w.r.t. the partial sum index \((a,b)\in\mathcal{I}_{\eta}\). Let \(M_{n}(\omega,[a,b])=n^{-1}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}f_{ \omega}(Y_{t})\), where \(f_{\omega}(Y)=d^{2}(Y,\omega)-d^{2}(Y,\mu)\), then it is clear that \[\hat{\mu}_{[a,b]}=\arg\min_{\omega\in\Omega}M_{n}(\omega,[a,b]).\] Let \(\tilde{V}_{[a,b]}=\frac{1}{\lfloor nb\rfloor-\lfloor na\rfloor}\sum_{t= \lfloor na\rfloor+1}^{\lfloor nb\rfloor}d^{2}\left(Y_{t},\mu\right)\). The following three main lemmas are verified under Assumption 2.1-2.5, and they are used repeatedly throughout the proof for main theorems. **Lemma A.1**.: \(\sup_{(a,b)\in\mathcal{I}_{\eta}}\sqrt{n}d(\hat{\mu}_{[a,b]},\mu)=O_{p}(1)\)_._ Proof.: (1). We first show the uniform convergence, i.e. \(\sup_{(a,b)\in\mathcal{I}_{\eta}}d(\hat{\mu}_{[a,b]},\mu)=o_{up}(1)\). For any \(\epsilon>0\), define \[\psi(\epsilon):=\inf_{d(\omega,\mu)>e}\mathbb{E}f_{\omega}(Y), \tag{8}\] and we know by that \(\psi(\epsilon)>0\) by the uniqueness of \(\mu\) in Assumption 2.1. Hence, let \(M(\omega,[a,b])=(b-a)\mathbb{E}f_{\omega}(Y)\), we have \[P\Bigg{(}\sup_{(a,b)\in\mathcal{I}_{\eta}}d(\hat{\mu}_{[a,b]}, \mu)>\epsilon\Bigg{)}\] \[= P\Bigg{(}\bigcup_{(a,b)\in\mathcal{I}_{\eta}}\left\{d(\hat{\mu}_ {[a,b]},\mu)>\epsilon\right\}\Bigg{)}\] \[\leq P\Bigg{(}\bigcup_{(a,b)\in\mathcal{I}_{\eta}}\left\{M(\hat{\mu}_ {[a,b]},[a,b])-\inf_{d(\omega,\mu)>e}M(\omega,[a,b])\geq 0\right\}\Bigg{)}\] \[\leq P\Bigg{(}\bigcup_{(a,b)\in\mathcal{I}_{\eta}}\left\{M(\hat{\mu}_ {[a,b]},[a,b])\geq\eta\psi(\epsilon)/2\right\}\Bigg{)}\] \[\leq P\Bigg{(}\bigcup_{(a,b)\in\mathcal{I}_{\eta}}\Bigg{\{}M(\hat{\mu} _{[a,b]},[a,b])-M_{n}(\hat{\mu}_{[a,b]},[a,b])\] \[\qquad\qquad\qquad\qquad\qquad\qquad+M_{n}(\mu,[a,b])-M(\mu,[a,b ])\geq\eta\psi(\epsilon)/2\Bigg{\}}\Bigg{)}\] \[\leq P\Bigg{(}\sup_{(a,b)\in\mathcal{I}_{\eta}}\sup_{\omega\in\Omega}| M_{n}(\omega,[a,b])-M(\omega,[a,b])|\geq\eta\psi(\epsilon)/4\Bigg{)}\] where the first inequality holds because the event \(\{d(\hat{\mu}_{[a,b]},\mu)>\epsilon\}\) implies that \(\hat{\mu}_{[a,b]}\in\{\omega\in\Omega:d(\omega,\mu)>\epsilon\}\), and thus \(M(\hat{\mu}_{[a,b]},[a,b])\geq\inf_{d(\omega,\mu)>\epsilon}M(\omega,[a,b])\); the second inequality holds by \(b-a\geq\eta\) (hence \((\lfloor nb\rfloor-\lfloor na\rfloor)/n>\eta/2\) for large \(n\)) and the definition of (8) such that \(\inf_{d(\omega,\mu)>\epsilon}M(\omega,[a,b])=(b-a)\psi(\epsilon)>\eta\psi( \epsilon)/2\); and the third holds by that \(M(\mu,[a,b])=0\) and \(M_{n}(\mu,[a,b])\geq M_{n}(\hat{\mu}_{[a,b]},[a,b])\). Note \(M_{n}(\omega,[a,b])-M(\omega,[a,b])=M_{n}(\omega,[0,b])-M(\omega,[0,b])-M_{n} (\omega,[0,a])+M(\omega,[0,a])\). Therefore, it suffices to show the weak convergence of the process \(\{M_{n}(\omega,[0,u])-M(\omega,[0,u]),u\in[0,1],\omega\in\Omega\}\) to zero. Note the pointwise convergence holds easily by the boundedness of \(f_{\omega}\) and Assumption 2.3, so we only need to show the stochastic equicontinuity, i.e. \[\limsup_{n\to\infty}P\Bigg{(}\sup_{|u-v|<\delta_{1},d(\omega_{1}, \omega_{2})<\delta_{2}}|M_{n}(\omega_{1},[0,u])-M(\omega_{1},[0,u])\] \[\qquad\qquad\qquad\qquad\qquad-M_{n}(\omega_{2},[0,v])+M(\omega_ {2},[0,v])|>\epsilon\Bigg{)}\to 0\] as \(\max(\delta_{1},\delta_{2})\to 0\). Then, by triangle inequality, we have \[|M_{n}(\omega_{1},[0,u])-M(\omega_{1},[0,u])-M_{n}(\omega_{2},[0,v])+M (\omega_{2},[0,v])|\] \[\leq |M_{n}(\omega_{1},[0,u])-M_{n}(\omega_{1},[0,v])|+|M_{n}(\omega_{1 },[0,v])-M_{n}(\omega_{2},[0,v])|\] \[+|M(\omega_{1},[0,u])-M(\omega_{1},[0,v])|+|M(\omega_{1},[0,v])-M (\omega_{2},[0,v])|\] \[:= \sum_{i=1}^{4}R_{n,i}.\] Without loss of generality, we assume \(v>u\), and by the boundedness of the metric, we have for some \(K>0\), \[R_{n,1}\leq n^{-1}\sum_{t=\lfloor nu\rfloor+1}^{\lfloor nv\rfloor}d^{2}(Y_{t}, \omega_{1})\leq K|u-v|\leq K\delta_{1}.\] Similarly, \(R_{n,3}\leq K\). Furthermore, we can see that \(R_{n,2},R_{n,4}\leq 2\text{diam}(\Omega)d(\omega_{1},\omega_{2})\leq K \delta_{2}.\) Hence, the result follows by letting \(\delta_{1}\) and \(\delta_{2}\) sufficiently small. Thus, the uniform convergence holds. (2). We then derive the convergence rate based on Assumption 2.5. By the consistency, we have for any \(\delta>0\), \(P(\sup_{(a,b)\in\mathcal{I}_{\eta}}d(\hat{\mu}_{[a,b]},\mu)\leq\delta)\to 1\). Hence, on the event that \(\sup_{(a,b)\in\mathcal{I}_{\eta}}d(\hat{\mu}_{a,b},\mu)\leq\delta\), and note that \(M_{n}(\mu,[a,b])=n^{-1}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}[d^{2}(Y _{t},\mu)-d^{2}(Y_{t},\mu)]=0\), we have \[0= M_{n}(\mu,[a,b])\] \[\geq M_{n}(\hat{\mu}_{[a,b]},[a,b])\] \[= K_{d}\frac{\lfloor nb\rfloor-\lfloor na\rfloor}{n}d^{2}(\hat{\mu }_{[a,b]},\mu)+n^{-1}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}\left[g(Y _{t},\hat{\mu}_{[a,b]},\mu)+R(Y_{t},\hat{\mu}_{[a,b]},\mu)\right]\] \[\geq \frac{K_{d}\eta}{2}d^{2}(\hat{\mu}_{[a,b]},\mu)\] \[\quad+d(\hat{\mu}_{[a,b]},\mu)\left[\frac{n^{-1}\sum_{t=\lfloor na \rfloor+1}^{\lfloor nb\rfloor}g(Y_{t},\hat{\mu}_{[a,b]},\mu)}{d(\hat{\mu}_{[a,b]},\mu)}+o_{up}(n^{-1/2}+d(\hat{\mu}_{[a,b]},\mu))\right],\] where the last inequality holds by Assumption 2.5 and the fact \((\lfloor nb\rfloor-\lfloor na\rfloor)/n>\eta/2\) for large \(n\). Note the above analysis holds uniformly for \((a,b)\in\mathcal{I}_{\eta}\), this implies that \[\sup_{(a,b)\in\mathcal{I}_{\eta}}\left[\frac{K_{d}\eta}{2}d(\hat {\mu}_{[a,b]},\mu)-o_{up}(d(\hat{\mu}_{[a,b]},\mu))\right]\] \[\leq n^{-1/2}\sup_{(a,b)\in\mathcal{I}_{\eta}}\left|\frac{n^{-1/2} \sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}g(Y_{t},\hat{\mu}_{[a,b]}, \mu)}{d(\hat{\mu}_{[a,b]},\mu)}\right|+o_{up}(n^{-1/2})=O_{p}(n^{-1/2}),\] and hence \(\sup_{(a,b)\in\mathcal{I}_{\eta}}d(\hat{\mu}_{[a,b]},\mu)=O_{p}(n^{-1/2})\). **Lemma A.2**.: \(\sup_{(a,b)\in\mathcal{I}_{\eta}}\sqrt{n}|\hat{V}_{[a,b]}-\tilde{V}_{[a,b]}|=o _{p}(1)\)_._ Proof.: By Lemma A.1, and Assumption 2.5, we have \[\sup_{(a,b)\in\mathcal{I}_{\eta}}\sqrt{n}M_{n}(\hat{\mu}_{[a,b]}, [a,b])\] \[\leq K_{d}\sup_{(a,b)\in\mathcal{I}_{\eta}}d(\hat{\mu}_{[a,b]},\mu) \sup_{(a,b)\in\mathcal{I}_{\eta}}\left|\sqrt{n}d(\hat{\mu}_{[a,b]},\mu)\right.\] \[\quad\quad+\left.\frac{n^{-1/2}\sum_{t=\lfloor na\rfloor+1}^{ \lfloor nb\rfloor}g(Y_{t},\hat{\mu}_{[a,b]},\mu)}{d(\hat{\mu}_{[a,b]},\mu)}+o _{up}(1+\sqrt{n}d(\hat{\mu}_{[a,b]},\mu))\right|\] \[= O_{p}(n^{-1/2}).\] Hence, we have that \[\sup_{(a,b)\in\mathcal{I}_{\eta}}\sqrt{n}|\hat{V}_{[a,b]}-\tilde{V}_{[a,b]}|\leq \eta^{-1}\sup_{(a,b)\in\mathcal{I}_{\eta}}\sqrt{n}M_{n}(\hat{\mu}_{[a,b]},[a,b]),\] the result follows. **Lemma A.3**.: _Let \(\hat{V}_{[a,b]}^{C}(\tilde{\omega})=\frac{1}{\lfloor nb\rfloor-\lfloor na \rfloor}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}d^{2}(Y_{i},\tilde{ \omega}),\) where \(\tilde{\omega}\in\Omega\) is a random object such that_ \[\sqrt{n}\sup_{(a,b)\in\mathcal{I}_{\eta}}d(\tilde{\omega},\hat{\mu}_{[a,b]})= O_{p}(1).\] _Then,_ \[\sqrt{n}\sup_{(a,b)\in\mathcal{I}_{\eta}}|\hat{V}_{[a,b]}^{C}(\tilde{\omega})- \tilde{V}_{[a,b]}|=o_{p}(1).\] Proof.: By triangle inequality and Lemma A.2, \[\sqrt{n}\sup_{(a,b)\in\mathcal{I}_{\eta}}|\hat{V}_{[a,b]}^{C}( \tilde{\omega})-\tilde{V}_{[a,b]}|\] \[= \sup_{(a,b)\in\mathcal{I}_{\eta}}\left|\frac{\sqrt{n}}{\lfloor nb \rfloor-\lfloor na\rfloor}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}d^ {2}\left(Y_{t},\tilde{\omega}\right)-d^{2}(Y_{t},\mu)\right|\] \[\leq (\eta/2)^{-1}\sup_{(a,b)\in\mathcal{I}_{\eta}}\sqrt{n}M_{n}( \tilde{\omega},[a,b]).\] Note by triangle inequality for the metric, \(d(\tilde{\omega},\mu)\leq d(\hat{\mu}_{[a,b]},\mu)+d(\tilde{\omega},\hat{\mu} _{[a,b]})=O_{p}(n^{-1/2}),\) and we know that \(d(\tilde{\omega},\mu)<\delta\) with probability tending to \(1,\) and on this event, by Assumption 2.5, \[\sqrt{n}M_{n}(\tilde{\omega},[a,b])\leq K_{d}d^{2}(\tilde{\omega},\mu)+n^{-1}\left|\sum_{t=\lfloor na \rfloor+1}^{\lfloor nb\rfloor}g(Y_{t},\tilde{\omega},\mu)\right|\] \[+n^{-1}\left|\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}R(Y _{t},\tilde{\omega},\mu)\right|.\] Similar to the proof of Lemma A.2, we get the result. ### Proof of Theorems in Section 3 Let \(\tilde{V}_{r}^{(1)}=\frac{1}{\lfloor rn_{1}\rfloor}\sum_{t=1}^{\lfloor rn_{1} \rfloor}d^{2}(Y_{t}^{(1)},\mu^{(1)}),\) and \(\tilde{V}_{r}^{(2)}=\frac{1}{\lfloor rn_{2}\rfloor}\sum_{t=1}^{\lfloor rn_{2} \rfloor}d^{2}(Y_{t}^{(2)},\mu^{(2)}).\) For each \(r\in[\eta,1],\) we consider the decomposition, \[\sqrt{n}T_{n}(r)= \sqrt{n}r(\hat{V}_{r}^{(1)}-\hat{V}_{r}^{(2)})\] \[= \sqrt{n}r(\hat{V}_{r}^{(1)}-\tilde{V}_{r}^{(1)}+\tilde{V}_{r}^{(1 )}-V^{(1)})\] \[-\sqrt{n}r(\hat{V}_{r}^{(2)}-\tilde{V}_{r}^{(2)}+\tilde{V}_{r}^{( 2)}-V^{(2)}) \tag{9}\] \[+\sqrt{n}r(V^{(1)}-V^{(2)})\] \[:= R_{n,1}(r)+R_{n,2}(r)+R_{n,3}(r).\] and \[\sqrt{n}T_{n}^{C}(r)= \sqrt{n}r(\hat{V}_{r}^{C,(1)}-\tilde{V}_{r}^{(1)})-\sqrt{n}r(\hat {V}_{r}^{(1)}-\tilde{V}_{r}^{(1)}) \tag{10}\] \[+\sqrt{n}r(\hat{V}_{r}^{C,(2)}-\tilde{V}_{r}^{(2)})-\sqrt{n}r( \hat{V}_{r}^{(2)}-\tilde{V}_{r}^{(2)})\] \[:= R_{n,1}^{C}(r)+R_{n,2}^{C}(r)+R_{n,3}^{C}(r)+R_{n,4}^{C}(r).\] By Lemma A.2, \[\sup_{r\in[\eta,1]}\sqrt{n}r(\hat{V}_{r}^{(1)}-\tilde{V}_{r}^{(1)})=o_{p}(1),\quad \sup_{r\in[\eta,1]}\sqrt{n}r(\hat{V}_{r}^{(2)}-\tilde{V}_{r}^{(2)})=o_{p}(1), \tag{11}\] i.e. \[\big{\{}R_{n,2}^{C}(r)+R_{n,4}^{C}(r)\big{\}}_{r\in[\eta,1]}\Rightarrow 0. \tag{12}\] Furthermore, by Assumption 3.1, \[\sqrt{n}r(\hat{V}_{r}^{(1)}-V^{(1)})\Rightarrow\gamma_{1}^{-1}\sigma_{1}B^{(1 )}(\gamma_{1}r),\quad\sqrt{n}r(\hat{V}_{r}^{(2)}-V^{(2)})\Rightarrow\gamma_{2} ^{-1}\sigma_{2}B^{(2)}(\gamma_{2}r).\] This implies that \[\left\{R_{n,1}(r)+R_{n,2}(r)\right\}_{r\in[\eta,1]}\Rightarrow\left\{\xi_{ \gamma_{1},\gamma_{2}}(r;\sigma_{1},\sigma_{2})\right\}_{r\in[\eta,1]}. \tag{13}\] ### Proof of Theorem 3.1 Under \(\mathbb{H}_{0}\), \(R_{n,3}(r)\equiv 0\), and \(\mu^{(1)}=\mu^{(2)}=\mu\). Hence, by (9) and (13), we obtain that \[\left\{\sqrt{n}T_{n}(r)\right\}_{r\in[\eta,1]}\Rightarrow\left\{\xi_{\gamma_{ 1},\gamma_{2}}(r;\sigma_{1},\sigma_{2})\right\}_{r\in[\eta,1]}.\] Next, by Lemma A.1, can obtain that \[\sqrt{n}\sup_{r\in[\eta,1]}d(\hat{\mu}_{r}^{(1)},\mu)=o_{p}(1),\quad\sqrt{n} \sup_{r\in[\eta,1]}d(\hat{\mu}_{r}^{(2)},\mu)=o_{p}(1).\] Hence, by Lemma A.3, we have \[\left\{R_{n,1}^{C}(r)+R_{n,3}^{C}(r)\right\}_{r\in[\eta,1]}\Rightarrow 0.\] Together with (12), we have \[\left\{\sqrt{n}T_{n}^{C}(r)\right\}_{r\in[\eta,1]}\Rightarrow 0.\] Hence, by continuous mapping theorem, for both \(i=1,2\), \[D_{n,i}\rightarrow_{d}\frac{\xi_{\gamma_{1},\gamma_{2}}^{2}(1;\sigma_{1}, \sigma_{2})}{\int_{\eta}^{1}\left[\xi_{\gamma_{1},\gamma_{2}}(r; \sigma_{1},\sigma_{2})-r\xi_{\gamma_{1},\gamma_{2}}(1;\sigma_{1},\sigma_{2}) \right]^{2}dr}.\] ### Proof of Theorem 3.2 In view of (9) and (13), \[\left\{\sqrt{n}T_{n}(r)\right\}_{r\in[\eta,1]}\Rightarrow\left\{\xi_{\gamma_ {1},\gamma_{2}}(r;\sigma_{1},\sigma_{2})+rn^{-\kappa_{V}+1/2}\Delta_{V}\right\} _{r\in[\eta,1]}.\] Hence * For \(\kappa_{V}\in(1/2,\infty)\), \(\left\{\sqrt{n}T_{n}(r)\right\}\Rightarrow\left\{\xi_{\gamma_{1},\gamma_{2}}( r;\sigma_{1},\sigma_{2})\right\}_{r\in[\eta,1]}.\) * For \(\kappa_{V}=1/2\), \(\left\{\sqrt{n}T_{n}(r)\right\}_{r\in[\eta,1]}\Rightarrow\left\{\xi_{\gamma_ {1},\gamma_{2}}(r;\sigma_{1},\sigma_{2})+r\Delta_{V}\right\}_{r\in[\eta,1]}.\) * For \(\kappa_{V}\in(0,1/2)\), \(\sqrt{n}T_{n}(1)\rightarrow_{p}\infty\), and \(\left\{\sqrt{n}T_{n}(r)-\sqrt{n}T_{n}(1)\right\}_{r\in[\eta,1]}\Rightarrow \left\{\xi_{\gamma_{1},\gamma_{2}}(r;\sigma_{1},\sigma_{2})-r\xi_{\gamma_{1}, \gamma_{2}}(1;\sigma_{1},\sigma_{2})\right\}_{r\in[\eta,1]}\). Next, we focus on \(\sqrt{n}T_{n}^{C}(r)\). When \(\kappa_{M}\in(0,\infty)\), it holds that \(d(\mu^{(1)},\mu^{(2)})=O(n^{-\kappa_{M}/2})=o(1)\), and by triangle inequality, for any \(r\in[\eta,1]\), \[|d(\mu^{(1)},\mu^{(2)})-d(\hat{\mu}_{r}^{(2)},\mu^{(2)})|\leq d(\hat{\mu}_{r}^{ (2)},\mu^{(1)})\leq|d(\mu^{(1)},\mu^{(2)})+d(\hat{\mu}_{r}^{(2)},\mu^{(2)})|. \tag{14}\] By Lemma A.1, we have \(\sup_{r\in[\eta,1]}d(\hat{\mu}_{r}^{(2)},\mu^{(2)})=O_{p}(n^{-1/2})\). This and (14) imply that * when \(\kappa_{M}\in(1/2,\infty)\), \(d^{2}(\hat{\mu}_{r}^{(2)},\mu^{(1)})=o_{up}(n^{-1/2})\); * when \(\kappa_{M}\in(0,1/2]\), \(d^{2}(\hat{\mu}_{r}^{(2)},\mu^{(1)})=d^{2}(\mu^{(1)},\mu^{(2)})+o_{up}(n^{-1/2})= n^{-\kappa_{M}}\Delta_{M}+o_{up}(n^{-1/2})\). Similarly, * when \(\kappa_{M}\in(1/2,\infty)\), \(d^{2}(\hat{\mu}_{r}^{(1)},\mu^{(2)})=o_{up}(n^{-1/2})\); * when \(\kappa_{M}\in(0,1/2]\), \(d^{2}(\hat{\mu}_{r}^{(1)},\mu^{(2)})=n^{-\kappa_{M}}\Delta_{M}+o_{up}(n^{-1/2})\). Furthermore, by Assumption 2.5, equations (10) and (12), we obtain \[\sqrt{n}T_{n}^{C}(r)=R_{n,1}^{C}(r)+R_{n,3}^{C}(r)+o_{up}(1) \tag{15}\] \[= \sqrt{n}K_{d}rd^{2}(\hat{\mu}_{r}^{(2)},\mu^{(1)})+rd(\hat{\mu}_ {r}^{(2)},\mu^{(1)})\left[\frac{n^{-1/2}\sum_{\ell=1}^{\lfloor\gamma_{1}nr \rfloor}g(V_{t}^{(1)},\hat{\mu}_{r}^{(2)},\mu^{(1)})}{d(\hat{\mu}_{r}^{(2)}, \mu^{(1)})}\right]\] \[\qquad+o_{up}(d(\hat{\mu}_{r}^{(2)},\mu^{(1)})+\sqrt{n}d^{2}( \hat{\mu}_{r}^{(2)},\mu^{(1)}))\] \[+\sqrt{n}K_{d}rd^{2}(\hat{\mu}_{r}^{(1)},\mu^{(2)})+rd(\hat{\mu}_ {r}^{(1)},\mu^{(2)})\left[\frac{n^{-1/2}\sum_{\ell=1}^{\lfloor\gamma_{2}nr \rfloor}g(V_{t}^{(2)},\hat{\mu}_{r}^{(1)},\mu^{(2)})}{d(\hat{\mu}_{r}^{(1)}, \mu^{(2)})}\right]\] \[\qquad\qquad+o_{up}(d(\hat{\mu}_{r}^{(1)},\mu^{(2)})+\sqrt{n}d^{2 }(\hat{\mu}_{r}^{(1)},\mu^{(2)}))\] \[+o_{up}(1).\] * For \(\kappa_{M}\in(1/2,\infty)\), \(d^{2}(\hat{\mu}_{r}^{(2)},\mu^{(1)})=o_{up}(n^{-1/2})\), and \(d^{2}(\hat{\mu}_{r}^{(1)},\mu^{(2)})=o_{up}(n^{-1/2})\). Hence, \(\{\sqrt{n}T_{n}^{C}(r)\}_{r\in[\eta,1]}\Rightarrow 0\). * For \(\kappa_{M}=1/2\), we note that \(d^{2}(\hat{\mu}_{r}^{(2)},\mu^{(1)})=n^{-1/2}\Delta_{M}+o_{up}(1)\), and \(d^{2}(\hat{\mu}_{r}^{(1)},\mu^{(2)})=n^{-1/2}\Delta_{M}+o_{up}(1)\). Hence, \(\{\sqrt{n}T_{n}^{C}(r)\}_{r\in[\eta,1]}\Rightarrow\{2rK_{d}\Delta_{M}\}_{r \in[\eta,1]}\), and \(\{\sqrt{n}[T_{n}^{C}(r)-rT_{n}^{C}(1)]\}_{r\in[\eta,1]}\Rightarrow 0\). * For \(\kappa_{M}\in(0,1/2)\), we multiply \(n^{2\kappa_{M}-1}\) on both denominator and numerator of \(D_{n,2}\), and obtain \[D_{n,2}=\frac{n^{2\kappa_{M}}\left\{\left[T_{n}(1)\right]^{2}+\left[T_{n}^{C} (1)\right]^{2}\right\}}{n^{-1}\sum_{k=\lfloor n\eta\rfloor}^{n^{2\kappa_{M}}} \left\{\left[T_{n}(\frac{k}{n})-\frac{k}{n}T_{n}(1)\right]^{2}+\left[T_{n}^{C} (\frac{k}{n})-\frac{k}{n}T_{n}^{C}(1)\right]^{2}\right\}}.\] (16) Note that \(n^{\kappa_{M}-1/2}\to 0\), as \(n\rightarrow\infty\), we obtain that \[\{n^{\kappa_{M}}[T_{n}(r)-rT_{n}(1)]\}_{r\in[\eta,1]}\Rightarrow 0.\] (17) Furthermore, in view of (15), we obtain \[n^{\kappa_{M}}T_{n}^{C}(r)=n^{\kappa_{M}}r(K_{d}+o_{up}(1))\left[d^{2}(\hat{ \mu}_{r}^{(2)},\mu^{(1)})+d^{2}(\hat{\mu}_{r}^{(1)},\mu^{(2)})\right]+o_{up}(1),\] By arguments below (14), we know that \(n^{\kappa_{M}}d^{2}(\hat{\mu}_{r}^{(2)},\mu^{(1)})=\Delta_{M}+o_{up}(n^{\kappa_{ M}-1/2})=\Delta_{M}+o_{up}(1)\). And similarly, \(n^{\kappa_{M}}d^{2}(\hat{\mu}_{r}^{(1)},\mu^{(2)})=\Delta_{M}+o_{up}(1)\). We thus obtain that \[\big{\{}n^{\kappa_{M}}T_{n}^{C}(r)-rT_{n}^{C}(1)\big{\}}_{r\in[\eta,1]} \Rightarrow 0,\] (18) and \[n^{\kappa_{M}}T_{n}^{C}(1)\rightarrow_{p}2K_{d}\Delta_{M}.\] (19) Therefore, (17) and (18) implies that the denominator of (16) converges to \(0\) in probability, while (19) implies the numerator of (16) is larger than a positive constant in probability, we thus obtain \(D_{n,2}\rightarrow_{p}\infty\). Summarizing the cases of \(\kappa_{V}\) and \(\kappa_{M}\), and by continuous mapping theorem, we get the result. ### Proof of Corollary 3.1 When \(\gamma_{1}=\gamma_{2}=1/2\), it can be shown that \[\xi_{\gamma_{1},\gamma_{2}}(r;\sigma_{1},\sigma_{2})=2\sigma_{1}B^{(1)}(r/2)-2 \sigma_{2}B^{(1)}(r/2)=_{d}\sqrt{2\sigma_{1}^{2}+2\sigma_{2}^{2}-4\rho\sigma_{1} \sigma_{2}}B(r);\] and when \(\rho=0\). \[\xi_{\gamma_{1},\gamma_{2}}(r;\sigma_{1},\sigma_{2})=_{d}\sqrt{\frac{\sigma_{1}^{2 }}{\gamma_{1}}+\frac{\sigma_{2}^{2}}{\gamma_{2}}}B(r).\] The result follows by the continuous mapping theorem. ### Proof of Theorems in Section 4 With a bit abuse of notation, we define \(\mathcal{I}_{\eta}=\{(a,b):0\leq a<b\leq 1,b-a\geq\eta_{2}\}\) and \(\mathcal{J}_{\eta}=\{(r;a,b):0\leq a<r<b\leq 1,b-r\geq\eta_{2},r-a\geq\eta_{2}\}\). ### Proof of Theorem 4.1 For \((r;a,b)\in\mathcal{J}_{\eta}\), we note that \[\sqrt{n}T_{n}(r;a,b)\] \[= \sqrt{n}\left\{\frac{(r-a)(b-r)}{(b-a)}\left(\hat{V}_{[a,r]}-\tilde {V}_{[a,r]}+\tilde{V}_{[a,r]}-V\right)\right\}\] \[-\sqrt{n}\left\{\frac{(r-a)(b-r)}{(b-a)}\left(\hat{V}_{[r,b]}- \tilde{V}_{[r,b]}+\tilde{V}_{[r,b]}-V\right)\right\}.\] By Lemma A.2 we know that \(\sup_{(a,r)\in\mathcal{I}_{\eta}}\sqrt{n}|\hat{V}_{[a,r]}-\tilde{V}_{[a,r]}|=o _{p}(1)\), \(\sup_{(r,b)\in\mathcal{I}_{\eta}}\sqrt{n}|\hat{V}_{[r,b]}-\tilde{V}_{[r,b]}|=o _{p}(1)\), and by Assumption 3.1, \[\left\{\sqrt{n}(r-a)(\tilde{V}_{[a,r]}-V)\right\}_{(a,r)\in \mathcal{I}_{\eta}}\Rightarrow\left\{\sigma[B(r)-B(a)]\right\}_{(a,r)\in \mathcal{I}_{\eta}},\] \[\left\{\sqrt{n}(b-r)(\tilde{V}_{[r,b]}-V)\right\}_{(r,b)\in \mathcal{I}_{\eta}}\Rightarrow\left\{\sigma[B(b)-B(r)]\right\}_{(r,b)\in \mathcal{I}_{\eta}}.\] Hence, \[\left\{\sqrt{n}T_{n}(r;a,b)\right\}_{(r;a,b)\in\mathcal{J}_{\eta}}\] \[\Rightarrow \sigma\left\{\frac{(b-r)}{(b-a)}[B(r)-B(a)]-\frac{(r-a)}{(b-a)}[ B(b)-B(r)]\right\}_{(r;a,b)\in\mathcal{J}_{\eta}}.\] Furthermore, we note that \[\sqrt{n}T_{n}^{C}(r;a,b)\] \[= \frac{(b-r)}{(b-a)}n^{-1/2}\Bigg{\{}\sum_{i=\lfloor na\rfloor+1} ^{\lfloor nr\rfloor}\left[d^{2}\left(Y_{i},\hat{\mu}_{[r,b]}\right)-d^{2} \left(Y_{i},\mu\right)\right]\] \[\qquad\qquad-\sum_{i=\lfloor na\rfloor+1}^{\lfloor nr\rfloor} \left[d^{2}\left(Y_{i},\hat{\mu}_{[a,r]}\right)-d^{2}\left(Y_{i},\mu\right) \right]\Bigg{\}}\] \[+\frac{(r-a)}{(b-a)}n^{-1/2}\sum_{i=\lfloor nr\rfloor+1}^{ \lfloor nb\rfloor}\Bigg{\{}\left[d^{2}\left(Y_{i},\hat{\mu}_{[a,r]}\right)-d^ {2}\left(Y_{i},\mu\right)\right]\] \[\qquad\qquad-\sum_{i=\lfloor nr\rfloor+1}^{\lfloor nb\rfloor} \left[d^{2}\left(Y_{i},\hat{\mu}_{[r,b]}\right)-d^{2}\left(Y_{i},\mu\right) \right]\Bigg{\}}+o_{up}(1)\] where \(o_{up}(1)\) is the rounding error due to \([n(r-a)]^{-1}-[\lfloor nr\rfloor-\lfloor na\rfloor]^{-1}\) and \([n(b-r)]^{-1}-[\lfloor nb\rfloor-\lfloor nr\rfloor]^{-1}\). Note by Lemma A.1, we know that \(\sup_{(a,r)\in\mathcal{I}_{\eta}}d(\hat{\mu}_{[a,r]},\mu)=O_{p}(n^{-1/2})\) and \(\sup_{(r,b)\in\mathcal{I}_{\eta}}d(\hat{\mu}_{[r,b]};\mu)=O_{p}(n^{-1/2})\), hence by Lemma A.2 and A.3, we obtain \[\sup_{(r;a,b)\in\mathcal{J}_{\eta}}|\sqrt{n}T_{n}^{C}(r;a,b)|=o_{p}(1).\] The result follows by continuous mapping theorem. ### Proof of Theorem 4.2 Note for any \(k=\lfloor n\eta_{1}\rfloor,\cdots,n-\lfloor n\eta_{1}\rfloor\), and \(i=1,2\), \[\max_{\lfloor n\eta_{1}\rfloor\leq k\leq n-\lfloor n\eta_{1}\rfloor}D_{n,i}(k) \geq D_{n,i}(\lfloor n\tau\rfloor).\] We focus on \(k^{*}=\lfloor n\tau\rfloor\). In this case, the left and right part of the self-normalizer are both from stationary segments, hence by similar arguments as in \(\mathbb{H}_{0}\), \[\begin{split}&\{\sqrt{n}T_{n}\left(r;0,\tau\right)\}_{r\in[\eta _{2},\tau-\eta_{2}]}\Rightarrow\{\sigma_{1}\mathcal{G}_{1}(r;0,\tau)\}_{r\in[ \eta_{2},\tau-\eta_{2}]},\\ &\{\sqrt{n}T_{n}^{C}\left(r;0,\tau\right)\}_{r\in[\eta_{2},\tau- \eta_{2}]}\Rightarrow 0;\end{split} \tag{20}\] and \[\begin{split}&\{\sqrt{n}T_{n}\left(r;\tau,1\right)\}_{r\in[\tau+ \eta_{2},1-\eta_{2}]}\Rightarrow\{\sigma_{2}\mathcal{G}_{2}(r;\tau,1)\}_{r \in[\tau+\eta_{2},1-\eta_{2}]},\\ &\{\sqrt{n}T_{n}^{C}\left(r;\tau,1\right)\}_{r\in[\eta_{2},\tau- \eta_{2}]}\Rightarrow 0,\end{split} \tag{21}\] where \(\mathcal{G}_{i}(r;a,b)=\frac{(b-r)}{(b-a)}[B^{(i)}(r)-B^{(i)}(a)]-\frac{(r-a)} {(b-a)}[B^{(i)}(b)-B^{(i)}(r)]\) for \(i=1,2\). Hence, we only need to consider the numerator, where \[\begin{split}&\sqrt{n}T_{n}(\tau;0,1)=\sqrt{n}\tau(1-\tau)\left( \hat{V}_{[0,\tau]}-\hat{V}_{[\tau,1]}\right),\\ &\sqrt{n}T_{n}^{C}(\tau;0,1)=\sqrt{n}\tau(1-\tau)\left(\hat{V}_{[ \tau,0,1]}^{C}-\hat{V}_{[0,\tau]}-\hat{V}_{[\tau,1]}\right).\end{split} \tag{22}\] For \(\sqrt{n}T_{n}(\tau;0,1)\), we have \[\begin{split}\sqrt{n}T_{n}(\tau;0,1)=&\sqrt{n} \left\{\tau(1-\tau)\left(\hat{V}_{[0,\tau]}-\tilde{V}_{[0,\tau]}+\tilde{V}_{[ 0,\tau]}-V^{(1)}\right)\right\}\\ &-\sqrt{n}\left\{\tau(1-\tau)\left(\hat{V}_{[\tau,1]}-\tilde{V}_ {[\tau,1]}+\tilde{V}_{[\tau,1]}-V^{(2)}\right)\right\}\\ &+\sqrt{n}\tau(1-\tau)(V^{(1)}-V^{(2)})\\ =& T_{11}+T_{12}+T_{13}.\end{split}\] By Lemma A.2, we know that \(\sqrt{n}(\hat{V}_{[0,\tau]}-\tilde{V}_{[0,\tau]})=o_{p}(1)\), and by Assumption 3.1, we have \(\sqrt{n}\tau(\tilde{V}_{[0,\tau]}-V^{(1)})\rightarrow_{d}\sigma_{1}B^{(1)}(\tau).\) This implies that \[T_{11}\rightarrow_{d}(1-\tau)\sigma_{1}B^{(1)}(\tau).\] Similarly, we can obtain \[T_{12}\rightarrow_{d}-\tau\sigma_{2}[B^{(2)}(1)-B^{(2)}(\tau)].\] Hence, using the fact that \(\sqrt{n}(V^{(1)}-V^{(2)})=\Delta_{V}\), we obtain \[\sqrt{n}T_{n}(\tau;0,1)\rightarrow_{d}(1-\tau)\sigma_{1}B^{(1)}(\tau)-\tau \sigma_{2}[B^{(2)}(1)-B^{(2)}(\tau)]+\tau(1-\tau)\Delta_{V}. \tag{23}\] For \(\sqrt{n}T_{n}^{C}(\tau;0,1)\) we have \[\begin{split}&\sqrt{n}T_{n}^{C}(\tau;0,1)\\ =&(1-\tau)n^{-1/2}\Bigg{\{}\sum_{i=1}^{\lfloor n\tau \rfloor}\left[d^{2}\left(Y_{i},\hat{\mu}_{[\tau,1]}\right)-d^{2}\left(Y_{i}, \mu^{(1)}\right)\right]\\ &\hskip 142.26378pt-\sum_{i=1}^{\lfloor n\tau\rfloor}\left[d^{2} \left(Y_{i},\hat{\mu}_{[0,\tau]}\right)-d^{2}\left(Y_{i},\mu^{(1)}\right) \right]\Bigg{\}}\\ &+\tau n^{-1/2}\Bigg{\{}\sum_{i=\lfloor n\tau\rfloor+1}^{n} \left[d^{2}\left(Y_{i},\hat{\mu}_{[0,\tau]}\right)-d^{2}\left(Y_{i},\mu^{(2) }\right)\right]\] \[= \mathcal{S}_{\eta,1}(\tau;\Delta).\] Therefore, we know that for the \(1-\alpha\) quantile of \(\mathcal{S}_{\eta}\), denoted by \(Q_{1-\alpha}(\mathcal{S}_{\eta})\), for \(i=1,2\), \[\lim_{n\to\infty}P\left(\max_{\lfloor n\eta_{1}\rfloor\leq k\leq n -\lfloor n\eta_{1}\rfloor}D_{n,i}(k)\geq Q_{1-\alpha}(\mathcal{S}_{\eta})\right)\] \[\geq \lim_{n\to\infty}P\left(D_{n,i}(\lfloor n\tau\rfloor)\geq Q_{1- \alpha}(\mathcal{S}_{\eta})\right)\] \[= P\left(\mathcal{S}_{\eta,i}(\tau;\Delta)\geq Q_{1-\alpha}( \mathcal{S}_{\eta})\right).\] In particular, \[\lim_{|\Delta_{V}|\to\infty}P\Big{(}\mathcal{S}_{\eta,1}(\tau; \Delta)\geq Q_{1-\alpha}(\mathcal{S}_{\eta})\Big{)}=1,\] \[\lim_{\max\{|\Delta_{V}|,\Delta_{M}\}\to\infty}P\Big{(}\mathcal{S} _{\eta,2}(\tau;\Delta)\geq Q_{1-\alpha}(\mathcal{S}_{\eta})\Big{)}=1.\] ### Proof of Theorem 4.3 Define the pointwise limit of \(\hat{\mu}_{[a,b]}\) under \(\mathbb{H}_{a}\) as \[\mu_{[a,b]}=\begin{cases}\mu^{(1)},&b\leq\tau\\ \arg\min_{\omega\in\Omega}\Big{\{}(\tau-a)\mathbb{E}d^{2}(Y_{t}^{(1)},\omega)+( b-\tau)\mathbb{E}d^{2}(Y_{t}^{(2)},\omega)\Big{\}}\,,&a<\tau<b\\ \mu^{(2)},&\tau\leq a\end{cases}\] Define the Frechet variance and pooled contaminated variance under \(\mathbb{H}_{a}\) as \[V_{[a,b]}=\begin{cases}V^{(1)}&b\leq\tau\\ \frac{\tau-a}{b-a}\mathbb{E}(d^{2}(Y_{t}^{(1)},\mu_{[a,b]}))+\frac{b-\tau}{b- a}\mathbb{E}(d^{2}(Y_{t}^{(2)},\mu_{[a,b]})),&a<\tau<b\\ V^{(2)},&\tau\leq a,\end{cases}\] and \[V^{C}_{[r;a,b]}=\] \[\begin{cases}V^{(1)}&b\leq\tau\\ \frac{\tau-a}{r-a}\mathbb{E}(d^{2}(Y_{t}^{(1)},\mu_{[r,b]}))+\frac{r-\tau}{r-a }\mathbb{E}(d^{2}(Y_{t}^{(2)},\mu_{[r,b]}))+\mathbb{E}(d^{2}(Y_{t}^{(2)},\mu_{ [a,r]})),&a<\tau\leq r\\ \mathbb{E}(d^{2}(Y_{t}^{(1)},\mu_{[r,b]}))+\frac{r-\tau}{b-r}\mathbb{E}(d^{2} (Y_{t}^{(1)},\mu_{[a,r]}))+\frac{b-\tau}{b-\tau}\mathbb{E}(d^{2}(Y_{t}^{(2)}, \mu_{[a,r]})),&r<\tau<b\\ V^{(2)},&\tau\leq a.\end{cases}\] We want to show that \[\left\{T_{n}(r;a,b)\right\}_{(r;a,b)\in\mathcal{J}_{\eta}}\Rightarrow\left\{ T(r;a,b)\right\}_{(r;a,b)\in\mathcal{J}_{\eta}},\] \[\left\{T_{n}^{C}(r;a,b)\right\}_{(r;a,b)\in\mathcal{J}_{\eta}} \Rightarrow\left\{T^{C}(r;a,b)\right\}_{(r;a,b)\in\mathcal{J}_{\eta}},\] where \[T(r;a,b)=\frac{(r-a)(b-r)}{b-a}\left(V_{[a,r]}-V_{[r,b]} \right),\] \[T^{C}(r;a,b)=\frac{(r-a)(b-r)}{b-a}\left(V^{C}_{[r;a,b]}-V_{[a,r ]}-V_{[r,b]}\right).\] To achieve this, we need to show (1). \(\sup_{(a,b)\in\mathcal{I}_{\eta}}d(\hat{\mu}_{[a,b]},\mu_{[a,b]})=o_{p}(1)\); (2). \(\sup_{(a,b)\in\mathcal{I}_{\eta}}|\hat{V}_{[a,b]}-V_{[a,b]}|=o_{p}(1)\); and (3). \(\sup_{(r;a,b)\in\mathcal{J}_{\eta}}|\hat{V}^{C}_{[r;a,b]}-V^{C}_{[r;a,b]}|=o_{ p}(1)\). (1). The cases when \(b\leq\tau\) and \(a\geq\tau\) follow by Lemma A.1. For the case when \(\tau\in(a,b)\), recall \[\hat{\mu}_{[a,b]}= \arg\min_{\omega\in\Omega}\frac{1}{\lfloor nb\rfloor-\lfloor na \rfloor}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}d^{2}\left(Y_{t},\omega\right)\] \[= \arg\min_{\omega\in\Omega}\Bigg{\{}\frac{n}{\lfloor nb\rfloor- \lfloor na\rfloor}\frac{1}{n}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nr \rfloor}d^{2}\left(Y_{t}^{(1)},\omega\right)\] \[\qquad\qquad+\frac{n}{\lfloor nb\rfloor- \lfloor na\rfloor}\frac{1}{n}\sum_{t=\lfloor nr\rfloor+1}^{\lfloor nb \rfloor}d^{2}\left(Y_{t}^{(2)},\omega\right)\Bigg{\}}.\] By the proof of (1) in Lemma A.1, for \(i=1,2\), we have \[\left\{\frac{1}{n}\sum_{t=1}^{\lfloor nu\rfloor}d^{2}\left(Y_{t}^{(i)},\omega \right)-u\mathbb{E}d^{2}\left(Y_{t}^{(i)},\omega\right)\right\}_{\omega\in \Omega,u\in[0,1]}\Rightarrow 0,\] which implies that \[\left\{\frac{n}{\lfloor nb\rfloor-\lfloor na\rfloor}\frac{1}{n} \sum_{t=\lfloor na\rfloor+1}^{\lfloor n\tau\rfloor}d^{2}\left(Y_{t}^{(1)}, \omega\right)\right. \tag{25}\] \[\qquad+\left.\frac{n}{\lfloor nb\rfloor-\lfloor na\rfloor}\frac{1 }{n}\sum_{t=\lfloor nr\rfloor+1}^{\lfloor nb\rfloor}d^{2}\left(Y_{t}^{(2)}, \omega\right)\right\}_{\omega\in\Omega,(a,b)\in\mathcal{I}_{\eta}}\] \[\qquad\qquad\left.\Rightarrow\left\{\frac{\tau-a}{b-a}\mathbb{E }(d^{2}(Y_{t}^{(1)},\omega)+\frac{b-\tau}{b-a}\mathbb{E}(d^{2}(Y_{t}^{(2)}, \omega))\right\}_{\omega\in\Omega,(a,b)\in\mathcal{I}_{\eta}}.\] By Assumption 2.2, and the argmax continuous mapping theorem (Theorem 3.2.2 in van der Vaart & Wellner (1996)), the result follows. (2). The cases when \(b\leq\tau\) and \(a\geq\tau\) follows by Lemma A.2. For the case when \(\tau\in(a,b)\), we have for some constant \(K>0\) \[\sup_{(a,b)\in\mathcal{I}_{\eta}}|\hat{V}_{[a,b]}-V_{[a,b]}|\] \[\leq \sup_{(a,b)\in\mathcal{I}_{\eta}}\left(\frac{1}{\lfloor nb\rfloor -\lfloor na\rfloor}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}\left|d^{ 2}\left(Y_{t},\hat{\mu}_{[a,b]}\right)-d^{2}\left(Y_{t},\mu_{[a,b]}\right) \right|\right)\] \[+\sup_{(a,b)\in\mathcal{I}_{\eta}}\left|\frac{1}{\lfloor nb \rfloor-\lfloor na\rfloor}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}d^ {2}\left(Y_{t},\mu_{[a,b]}\right)-V_{[a,b]}\right|\] \[\leq \sup_{(a,b)\in\mathcal{I}_{\eta}}\left(\frac{1}{\lfloor nb\rfloor -\lfloor na\rfloor}\sum_{t=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}K\left|d \left(Y_{t},\hat{\mu}_{[a,b]}\right)-d\left(Y_{t},\mu_{[a,b]}\right)\right| \right)+o_{p}(1)\] \[\leq \sup_{(a,b)\in\mathcal{I}_{\eta}}Kd(\hat{\mu}_{[a,b]},\mu_{[a,b]} )+o_{p}(1)=o_{p}(1)\] where the second inequality holds by the boundedness of the metric and (25), and the third inequality holds by the triangle inequality of the metric. (3). The proof is similar to (2). By continuous mapping theorem, we obtain that for \(i=1,2\), \[\left\{D_{n,i}(\lfloor nr\rfloor)\right\}_{r\in[\eta_{1},1-\eta_{1}]}\Rightarrow \left\{D_{i}(r)\right\}_{r\in[\eta_{1},1-\eta_{1}]},\] where \[D_{1}(r)= \frac{[T(r;0,1)]^{2}}{\int_{\eta_{2}}^{r-\eta_{2}}[T(u;0,r)]^{2} du+\int_{r+\eta_{2}}^{1-\eta_{2}}[T(u;r,1)]^{2}du},\] \[D_{2}(r)= \frac{[T(r;0,1)]^{2}+[T^{C}(r;0,1)]^{2}}{\int_{\eta_{2}}^{r-\eta_ {2}}[T(u;0,r)]^{2}+[T^{C}(u;0,r)]^{2}du+\int_{r+\eta_{2}}^{1-\eta_{2}}[T(u;r,1 )]^{2}+[T^{C}(u;r,1)]^{2}du}.\] In particular, at \(r=\tau\), we obtain \(D_{i}(\tau)=\infty\). Hence, to show the consistency of \(\hat{\tau}\), it suffices to show that for any small \(\epsilon>0\), if \(|r-\tau|>\epsilon\), \[D_{i}(r)<\infty.\] By symmetry, we consider the case of \(r-\tau>\epsilon\). For \(r-\tau>\epsilon\), we note that for both \(i=1,2\), \[\sup_{r-\tau>\epsilon}D_{i}(r)\leq\frac{\sup_{r}\left\{[T(r;0,1)]^{2}+[T^{C}(r;0,1 )]^{2}\right\}}{\inf_{r-\tau>\epsilon}\int_{\eta_{2}}^{r-\eta_{2}}[T(u;0,r)]^{2 }du}.\] By proof of Proposition 1 in Dubey and Muller (2020), we obtain that for some universal constant \(K>0\), \[\sup_{r}\left\{[T(r;0,1)]^{2}+[T^{C}(r;0,1)]^{2}\right\}\leq K(\Delta_{M}^{2} +\Delta_{V}^{2})<\infty.\] Therefore, it suffices to show that there exists a function \(\zeta(\epsilon)>0\), such that for any \(r-\tau>\epsilon\), \[\int_{\eta_{2}}^{r-\eta_{2}}[T(u;0,r)]^{2}du>\zeta(\epsilon).\] For \(r>\tau\), and for any \(u\in[\eta_{2},\tau-\eta_{2}]\), \[T(u;0,r)\] \[= \frac{u(r-u)}{r}(V^{(1)}-V_{[u,r]})\] \[= \frac{u(r-u)}{r}\left[V^{(1)}-\frac{\tau-u}{r-u}\mathbb{E}(d^{2} (Y_{t}^{(1)},\mu_{[u,r]}))-\frac{r-\tau}{r-u}\mathbb{E}(d^{2}(Y_{t}^{(2)},\mu_ {[u,r]}))\right]\] \[= \frac{u(r-u)}{r}[V^{(1)}-V(\frac{\tau-u}{r-u})].\] By Assumption 4.1, we can obtain that \[|T(u;0,r)|>\frac{u(r-u)}{r}\varphi(\frac{\epsilon}{r-u})\geq\eta_{2}^{2} \varphi(\epsilon).\] Hence, we can choose \(\zeta(\epsilon)=\eta_{2}^{6}\varphi^{2}(\epsilon)\). ## Appendix B Examples As we have mentioned in the main context, since \(d^{2}(Y_{t},\omega)\) takes value in \(\mathbb{R}\) for any fixed \(\omega\in\Omega\), both Assumption 2.3 and 2.4 could be implied by high-level weak temporal dependence conditions in conventional Euclidean space. Therefore, we only discuss the verification of Assumption 2.1, 2.2 and 2.5 in what follows. ### Example 1: \(L_{2}\) metric \(d_{L}\) for square integrable functions defined on \([0,1]\) Let \(\Omega\) be the Hilbert space of all square integrable functions defined on \(I=[0,1]\) with inner product \(\langle f,g\rangle=\int_{I}f(t)g(t)dt\) for two functions \(f,g\in\Omega\). Then, for the corresponding norm \(\|f\|=\langle f,f\rangle^{1/2}\), \(L_{2}\) metric is defined by \[d_{L}^{2}(f,g)=\int_{I}[f(t)-g(t)]^{2}dt.\] Assumptions 2.1 and 2.2 follows easily by the Riesz representation theorem and convexity of \(\Omega\). We only consider Assumption 2.5. Note that \[d_{L}^{2}(Y,\omega)-d_{L}^{2}(Y,\mu)= \int_{0}^{1}[\omega(t)-\mu(t)][\omega(t)+\mu(t)-2Y(t)]dt\] \[= d_{L}^{2}(\omega,\mu)+2\int_{0}^{1}[\omega(t)-\mu(t)][\mu(t)-Y( t)]dt\] \[:= d_{L}^{2}(\omega,\mu)+g(Y,\omega,\mu),\] and \(R(Y,\omega,\mu)\equiv 0\). Furthermore, \[\left|n^{-1/2}\sum_{i=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}g(Y_{i },\omega,\mu)\right|\] \[= \left|2\int_{0}^{1}[\omega(t)-\mu(t)]n^{-1/2}\sum_{i=\lfloor na \rfloor+1}^{\lfloor nb\rfloor}[Y_{i}(t)-\mu(t)]dt\right|\] \[\leq 2d_{L}(\omega,\mu)\left\{\int_{0}^{1}\left|n^{-1/2}\sum_{i= \lfloor na\rfloor+1}^{\lfloor nb\rfloor}[Y_{i}(t)-\mu(t)]\right|^{2}dt \right\}^{1/2},\] where the inequality holds by Cauchy-Schwarz inequality. By the boundedness of \(d_{L}(\omega,\mu)\), Assumption 2.5 then follows if \[\sup_{t\in[0,1]}\sup_{(a,b)\in\mathcal{I}_{\eta}}\left|n^{-1/2}\sum_{i= \lfloor na\rfloor+1}^{\lfloor nb\rfloor}[Y_{i}(t)-\mu(t)]\right|=O_{p}(1),\] which holds under general weak temporal dependence for functional observations, see, e.g. Berkes et al. (2013). ### Example 2: 2-Wasserstein metric \(d_{W}\) of univariate CDFs Let \(\Omega\) be the set of univariate CDF function on \(\mathbb{R}\), consider the 2-Wasserstein metric defined by \[d_{W}^{2}(G_{1},G_{2})=\int_{0}^{1}(G_{1}(t)-G_{2}(t))^{2}dt,\] where \(G_{1}\) and \(G_{2}\) are two inverse CDFs or quantile functions. The verification of Assumption 2.1 and 2.2 can be found in Proposition C.1 in Dubey & Muller (2020_a_). Furthermore, by similar arguments as Example 1, Assumption 2.5 holds under weak temporal dependence conditions, see Berkes et al. (2013). ### Example 3: Frobenius metric \(d_{F}\) for graph Laplacians or covariance matrices Let \(\Omega\) be the set of graph Laplacians or covariance matrices of a fixed dimension \(r\), with uniformly bounded diagonals, and equipped with the Frobenius metric \(d_{F}\), i.e. \[d_{F}^{2}(\Sigma_{1},\Sigma_{2})=\operatorname{tr}[(\Sigma_{1}-\Sigma_{2})^{ \top}(\Sigma_{1}-\Sigma_{2})].\] for two \(r\times r\) matrices \(\Sigma_{1}\) and \(\Sigma_{2}\). The verification of Assumption 2.1 and 2.2 can be found in Proposition C.2 in Dubey & Muller (2020_a_). We only consider Assumption 2.5. Note that \[d_{F}^{2}(Y,\omega)-d_{F}^{2}(Y,\mu)= \operatorname{tr}(\omega-\mu)^{\top}(\omega+\mu-2Y)\] \[= d_{F}^{2}(\omega,\mu)+2\operatorname{tr}(\omega-\mu)^{\top}(\mu -Y)\] \[:= d_{F}^{2}(\omega,\mu)+g(Y,\omega,\mu),\] and \(R(Y,\omega,\mu)\equiv 0\). Furthermore, by Cauchy-Schwarz inequality, \[\left|n^{-1/2}\sum_{i=\lfloor na\rfloor+1}^{\lfloor nb\rfloor}g(Y _{i},\omega,\mu)\right|= 2\left|\operatorname{tr}[(\omega-\mu)^{\top}n^{-1/2}\sum_{i= \lfloor na\rfloor+1}^{\lfloor nb\rfloor}(Y_{i}-\mu)]\right|\] \[\leq 2d_{F}(\omega,\mu)d_{F}\left(n^{-1/2}\sum_{i=\lfloor na\rfloor+ 1}^{\lfloor nb\rfloor}[Y_{i}-\mu],0\right).\] By the boundedness of \(d_{F}(\omega,\mu)\), Assumption 2.5 then follows if \[\sup_{(a,b)\in\mathcal{I}_{\gamma}}\left\|n^{-1/2}\sum_{i=\lfloor naj\rfloor+1}^{ \lfloor nb\rfloor}\operatorname{vec}(Y_{i}-\mu)\right\|=O_{p}(1),\] which holds under common weak dependence conditions in conventional Euclidean space. ### Example 4: Log-Euclidean metric \(d_{E}\) for covariance matrices Let \(\Omega\) be the set of all positive-definite covariance matrices of dimension \(r\), with uniformly both upper and lower bounded eigenvalues, i.e. for any \(\Sigma\in\Omega\), \(c\leq\lambda_{min}(\Sigma)\leq\lambda_{\max}(\Sigma)\leq C\) for some constant \(0<c<C<\infty\). The log-Euclidean metric is defined by \(d_{E}^{2}(\Sigma_{1},\Sigma_{2})=d_{F}^{2}(\log_{m}\Sigma_{1},\log_{m}\Sigma_{2})\), where \(\log_{m}\) is the matrix-log function. Note that \(\log_{m}\Sigma\) has the same dimension as \(\Sigma\), hence the verification of Assumptions 2.1, 2.2 and 2.5 follows directly from Example 3. ## Appendix C Functional Data in Hilbert Space Our proposed tests and DM test are also applicable to the inference of functional data in Hilbert space, such as \(L_{2}[0,1]\), since the norm in Hilbert space naturally corresponds to the distance metric \(d\). In a sense, our methods can be regarded as fully functional (Aue et al. 2018) since no dimension reduction procedure is required. In this section, we further compare them with SN-based testing procedure by Zhang and Shao (2015) for comparing two sequences of temporally dependent functional data, i.e. \(\{Y_{t}^{(i)}\}_{t=1}^{n_{i}}\)\(i=1,2\), defined on \([0,1]\). The general idea is to first apply FPCA, and then compare score functions (for mean) or covariance operators (for covariance) between two samples in the space spanned by leading \(K\) eigenfunctions. SN technique is also invoked to account for unknown temporal dependence. Although the test statistic in Zhang and Shao (2015) targets at the difference in covariance operators of \(\{Y_{t}^{(1)}\}\) and \(\{Y_{t}^{(2)}\}\), their test can be readily modified to testing the mean difference. To be specific, denote \(\mu^{(i)}\) as the mean function of \(Y_{t}^{(i)}\), \(t=1,\cdots,n_{i}\), \(i=1,2\), we are interested in testing \[\mathbb{H}_{0}:\mu^{(1)}(x)=\mu^{(2)}(x),\quad\forall x\in[0,1].\] We assume the covariance operator is common for both samples, which is denoted by \(C_{p}\). By Mercer's Lemma, we have \[C_{p}=\sum_{j=1}^{\infty}\lambda_{p}^{j}\phi_{p}^{j}\otimes\phi_{p}^{j},\] where \(\{\lambda_{p}^{j}\}_{j=1}^{\infty}\) and \(\{\phi_{p}^{j}\}_{j=1}^{\infty}\) are the eigenvalues and eigenfunctions respectively. By the Karhunen-Loeve expansion, \[Y_{t}^{(i)}=\mu^{(i)}+\sum_{j=1}^{\infty}\eta_{t,j}^{(i)}\phi_{p}^{j},\quad t =1,\cdots,n_{i};\ \ i=1,2,\] where \(\{\eta_{t,j}^{(i)}\}\) are the principal components (scores) defined by \(\eta_{t,j}^{(i)}=\int_{[0,1]}\{Y_{t}^{(i)}-\mu^{(i)}\}\phi_{p}^{j}(x)dx=\int_ {[0,1]}\{Y_{t}^{(i)}-\mu_{p}+\mu_{p}-\mu^{(i)}\}\phi_{p}^{j}(x)dx\) with \(\mu_{p}=\gamma_{1}\mu^{(1)}+\gamma_{2}\mu^{(2)}\). Under \(\mathbb{H}_{0}\), \(\mu^{(1)}=\mu^{(2)}=\mu_{p}\), and \(\eta_{t,j}^{(i)}\) should have mean zero. We thus build the SN based test by comparing empirical estimates of score functions. Specifically, define the empirical covariance operator based on the pooled samples as \[\hat{C}_{p}=\frac{1}{n_{1}+n_{2}}(\sum_{t=1}^{n_{1}}\mathcal{Y}_{t}^{(1)}+ \sum_{t=1}^{n_{2}}\mathcal{Y}_{t}^{(2)}),\] where \(\mathcal{Y}_{t}^{(i)}=Y_{t}^{(i)}\otimes Y_{t}^{(i)}\), \(i=1,2.\) Denote by \(\{\hat{\lambda}_{p}^{j}\}_{j=1}^{\infty}\) and \(\{\hat{\phi}_{p}^{j}\}_{j=1}^{\infty}\) the corresponding eigenvalues and eigenfunctions. We define the empirical scores (projected onto the eigenfunctions of pooled covariance operator) for each functional observation as \[\hat{\eta}_{t,j}^{(i)}=\int_{[0,1]}\{Y_{t}^{(i)}(x)-\hat{\mu}_{p}(x)\}\hat{\phi}_{ p}^{j}(x)dx,\quad t=1,\cdots,n_{i};\;\;i=1,2;\;\;j=1,\cdots,K,\] where \(\hat{\mu}_{p}=(\sum_{t=1}^{n_{1}}Y_{t}^{(1)}+\sum_{t=1}^{n_{2}}Y_{t}^{(2)})/n\) is the pooled sample mean function. Let \(\hat{\eta}_{t,(K)}^{(i,K)}=(\hat{\eta}_{t,1}^{(i)},\cdots,\hat{\eta}_{t,K}^{(i )\top},\) and \(\hat{\alpha}^{(K)}(r)=(\lfloor rn_{1}\rfloor)^{-1}\sum_{t=1}^{\lfloor rn_{1} \rfloor}\hat{\eta}_{t}^{(1,K)}-(\lfloor rn_{2}\rfloor)^{-1}\sum_{t=1}^{ \lfloor rn_{2}\rfloor}\hat{\eta}_{t}^{(2,K)}\) as the difference of recursive subsample mean of empirical scores, we consider the test statistic as \[ZSM=\] \[n[\hat{\alpha}^{(K)}(1)]^{\top}\Bigg{\{}\sum_{k=1}^{n}\frac{k^{2 }}{n^{2}}[\hat{\alpha}^{(K)}(k/n)-\hat{\alpha}^{(K)}(1)][\hat{\alpha}^{(K)}(k /n)-\hat{\alpha}^{(K)}(1)]^{\top}\Bigg{\}}^{-1}[\hat{\alpha}^{(K)}(1)],\] and under \(\mathbb{H}_{0}\) with suitable conditions, it is expected that \[ZSM\rightarrow_{d}B_{K}(1)^{\top}\left\{\int_{0}^{1}\left(B_{K}(r)-rB_{K}(1) \right)\left(B_{K}(r)-rB_{K}(1)\right)^{\top}\mathrm{d}r\right\}^{-1}B_{K}(1),\] where \(B_{K}(\cdot)\) is a \(K\)-dimensional vector of independent Brownian motions. Consider the following model taken from Panaretos et al. (2010), \[Y_{t}(x)= \sum_{j=1}^{3}\left\{\xi_{t}^{j,1}\sqrt{2}\sin(2\pi jx)+\xi_{t}^{ j,2}\sqrt{2}\cos(2\pi jx)\right\},\quad t=1,2,\ldots,n_{1}\] where the coefficients \(\xi_{t}=\left(\xi_{t}^{1,1},\xi_{t}^{2,1},\xi_{t}^{3,1},\xi_{t}^{1,2},\xi_{t} ^{2,2},\xi_{t}^{3,2}\right)^{\prime}\) are generated from a VAR process, \[\xi_{t}= \rho\xi_{t-1}+\sqrt{1-\rho^{2}}e_{t},\quad e_{t}\overset{i.i.d.} {\sim}\mathcal{N}\left(0,\frac{1}{2}\operatorname{diag}(\mathbf{v})+\frac{1}{ 2}\mathbf{1}_{6}\right)\in\mathbb{R}^{6}\] with \(v=(12,7,0.5,9,5,0.3)^{\top}\). To compare the size and power performance, we generate independent functional time series \(\{Y_{t}^{(1)}\}\) and \(\{Y_{t}^{(2)}\}\) from the above model, and modify \(\{Y_{t}^{(2)}\}\) according to the following settings: * [Case 1m] \(Y_{t}^{(2)}(x)=Y_{t}(x)+20\delta_{1}\sin(2\pi x)\), \(x\in[0,1]\); * [Case 1v] \(Y_{t}^{(2)}(x)=Y_{t}(x)+20\delta_{2}\eta_{t}\sin(2\pi x)\), \(x\in[0,1]\); * [Case 2m] \(Y_{t}^{(2)}(x)=Y_{t}(x)+20\delta_{1}x\), \(x\in[0,1]\); * [Case 2v] \(Y_{t}^{(2)}(x)=Y_{t}(x)+20\delta_{2}\eta_{t}x\), \(x\in[0,1]\); * [Case 3m] \(Y_{t}^{(2)}(x)=Y_{t}(x)+20\delta_{1}\mathbf{1}(x\in[0,1])\); * [Case 3v] \(Y_{t}^{(2)}(x)=Y_{t}(x)+20\delta_{2}\eta_{t}\mathbf{1}(x\in[0,1])\); where \(\eta_{t}\overset{i.i.d.}{\sim}\mathcal{N}(0,1)\) and \(\delta_{1},\delta_{2}\in[0,0.3]\). The size performance of all tests are evaluated by setting \(\delta_{1}=\delta_{2}=0\). As for the power performance, Cases 1m-3m with \(\delta_{1}\in(0,0.3]\) correspond to alternatives caused by mean differences and Cases 1v-3v with \(\delta_{2}\in(0,0.3]\) correspond to covariance operator differences. In particular, we note the alternative of Cases 1m and 1v depends on the signal function \(f(x)=\sin(2\pi x),x\in[0,1]\), which is in the space spanned by the eigenfunctions of \(Y_{t}(x)\), while for Cases 3m and 3v, the signal function \(f(x)=\mathbf{1}(x\in[0,1])\) is orthogonal to these eigenfunctions. We denote the two-sample mean test and covariance operator test based on Zhang & Shao (2015) as ZSM and ZSV respectively. The empirical size of all tests are outlined in Table 6 at nominal level \(\alpha=5\%\). From this table, we see that (a) \(D_{1}\) has accurate size across all model settings and \(D_{2}\) is generally reliable for moderate dependence level, albeit oversize phenomenon for small \(n\) when \(\rho=0.7\); (b) DM suffers from severe size distortion when temporal dependence is exhibited even for large \(n\); (c) although both ZSM and ZSV utilize SN to robustify the tests due to temporal dependence, we find their performances depend on the user-chosen parameter \(K\) a lot, and still suffer from size distortion when \(n\) is small. In particular, the size distortion when \(K=4\) is considerably larger than that for \(K=2\) in the presence of temporal dependence. Figure 6 further compares their size-adjusted powers when \(n_{1}=n_{2}=400\) and \(\rho=0.4\). As can be seen, \(D_{1}\) possesses trivial power against mean differences while \(D_{2}\) is rather stable in all settings with evident advantages in Cases 2m and 3m. In contrast, the power performances of DM, ZSM and ZSV vary among different settings. For example, when the alternative signal function is in the span of leading eigenfunctions, i.e. Cases 1m and 1v, ZSM and ZSV with \(K=2\) can deliver (second) best power performances as expected, while they are dominated by other tests when the alternative signal function is orthogonal to eigenfunctions in Cases 3m and 3v. As for DM, it is largely dominated by \(D_{2}\) in terms of mean differences, although it exhibits moderate advantage over \(D_{2}\) for covariance operator differences. In general, whether the difference in mean/covariance operator is orthogonal to the leading eigenfunctions, or lack thereof, is unknown to the user. Our test \(D_{2}\) is robust to unknown temporal dependence, exhibits quite accurate size and delivers comparable powers in all settings, and thus should be preferred in practice. \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{6}{c}{Functional Data based on \(d_{L}\)} \\ \hline \multirow{2}{*}{\(\rho\)} & \multirow{2}{*}{\(n_{i}\)} & \multirow{2}{*}{\(D_{1}\)} & \multirow{2}{*}{\(D_{2}\)} & \multirow{2}{*}{DM} & \multicolumn{2}{c}{ZSM} & \multicolumn{2}{c}{ZSV} \\ \cline{3-8} & & & & & \(K=2\) & \(K=4\) & \(K=2\) & \(K=4\) \\ \hline \multirow{4}{*}{-0.4} & 50 & 5.4 & 5.6 & 11.5 & 3.7 & 2.3 & 5.8 & 10.1 \\ & 100 & 5.3 & 5.3 & 9.5 & 3.1 & 2.5 & 4.4 & 7.6 \\ & 200 & 6.7 & 6.6 & 11.5 & 3.3 & 4.2 & 5.8 & 5.7 \\ & 400 & 5.6 & 5.6 & 8.7 & 4.4 & 4.2 & 4.2 & 7.3 \\ \hline \multirow{4}{*}{0} & 50 & 4.9 & 5.6 & 5.7 & 6.3 & 6.3 & 5.3 & 5.1 \\ & 100 & 3.8 & 3.8 & 4.3 & 5.0 & 4.4 & 4.0 & 5.0 \\ & 200 & 5.8 & 6.0 & 5.5 & 3.8 & 5.7 & 5.4 & 5.7 \\ & 400 & 4.3 & 4.6 & 4.1 & 5.4 & 4.7 & 4.5 & 6.1 \\ \hline \multirow{4}{*}{0.4} & 50 & 5.9 & 8.9 & 10.6 & 8.3 & 13.6 & 5.3 & 10.9 \\ & 100 & 4.9 & 4.7 & 9.5 & 6.7 & 8.4 & 5.4 & 7.1 \\ & 200 & 5.5 & 5.8 & 8.9 & 4.7 & 7.4 & 5.8 & 6.9 \\ & 400 & 5.3 & 4.9 & 9.6 & 5.9 & 5.8 & 6.0 & 5.3 \\ \hline \multirow{4}{*}{0.7} & 50 & 7.2 & 20.4 & 36.9 & 17.1 & 31.4 & 7.3 & 34.8 \\ & 100 & 6.5 & 12.1 & 29.5 & 10.1 & 16.4 & 5.7 & 18.9 \\ \cline{1-1} & 200 & 6.5 & 8.2 & 29.6 & 6.8 & 11.7 & 5.9 & 10.3 \\ \cline{1-1} & 400 & 4.9 & 5.8 & 25.0 & 7.0 & 8.4 & 6.1 & 6.8 \\ \hline \hline \end{tabular} \end{table} Table 6: Size Performance (100%) at \(\alpha=5\%\)
2305.04561
Boosting Radiology Report Generation by Infusing Comparison Prior
Recent transformer-based models have made significant strides in generating radiology reports from chest X-ray images. However, a prominent challenge remains: these models often lack prior knowledge, resulting in the generation of synthetic reports that mistakenly reference non-existent prior exams. This discrepancy can be attributed to a knowledge gap between radiologists and the generation models. While radiologists possess patient-specific prior information, the models solely receive X-ray images at a specific time point. To tackle this issue, we propose a novel approach that leverages a rule-based labeler to extract comparison prior information from radiology reports. This extracted comparison prior is then seamlessly integrated into state-of-the-art transformer-based models, enabling them to produce more realistic and comprehensive reports. Our method is evaluated on English report datasets, such as IU X-ray and MIMIC-CXR. The results demonstrate that our approach surpasses baseline models in terms of natural language generation metrics. Notably, our model generates reports that are free from false references to non-existent prior exams, setting it apart from previous models. By addressing this limitation, our approach represents a significant step towards bridging the gap between radiologists and generation models in the domain of medical report generation.
Sanghwan Kim, Farhad Nooralahzadeh, Morteza Rohanian, Koji Fujimoto, Mizuho Nishio, Ryo Sakamoto, Fabio Rinaldi, Michael Krauthammer
2023-05-08T09:12:44Z
http://arxiv.org/abs/2305.04561v2
# Boosting Radiology Report Generation by Infusing Comparison Prior ###### Abstract Recent transformer-based models have made significant strides in generating radiology reports from chest X-ray images. However, a prominent challenge remains: these models often lack prior knowledge, resulting in the generation of synthetic reports that mistakenly reference non-existent prior exams. This discrepancy can be attributed to a knowledge gap between radiologists and the generation models. While radiologists possess patient-specific prior information, the models solely receive X-ray images at a specific time point. To tackle this issue, we propose a novel approach that leverages a rule-based labeler to extract comparison prior information from radiology reports. This extracted comparison prior is then seamlessly integrated into state-of-the-art transformer-based models, enabling them to produce more realistic and comprehensive reports. Our method is evaluated on English report datasets, such as IU X-ray and MIMIC-CXR. The results demonstrate that our approach surpasses baseline models in terms of natural language generation metrics. Notably, our model generates reports that are free from false references to non-existent prior exams, setting it apart from previous models. By addressing this limitation, our approach represents a significant step towards bridging the gap between radiologists and generation models in the domain of medical report generation. ## 1 Introduction The analysis of radiology images and the subsequent writing of medical reports are crucial tasks performed during the diagnostic process Suetens (2017); Krupinski (2010). However, producing a radiology report is a labor-intensive and time-consuming task for radiologists, requiring years of training to accurately identify and describe specific abnormalities in medical images Brady (2017); Arenson and Dunnick (2006). Inspired by the success of image captioning models in deep learning, numerous studies have emerged proposing various models for automated radiology report generation, specifically focusing on chest X-ray images Yuan et al. (2019); Li et al. (2019); Xue et al. (2018); Jing et al. (2017); Liu et al. (2019). The automated generation of reports holds the potential to alleviate the high workload of radiologists and expedite the diagnostic process by providing preliminary reports that include useful keywords or observations Johnson et al. (2019); Chen et al. (2020). Despite the relative success of recent approaches in generating radiology reports from chest X-ray images Endo et al. (2021); Johnson et al. (2019); Chen et al. (2020); Miura et al. (2020); Ramirez-Alonso et al. (2022); Nooralahzadeh et al. (2021), a crucial challenge remains unaddressed in these studies: the need to provide models with appropriate prior knowledge, akin to what is available to radiologists. Specifically, radiologists are equipped with information about the existence of previous reports and X-ray images, enabling them to compare current exams with past ones, and assess the patient's progress, deterioration, or improvement Suetens (2017); of Radiology, ESR). These medical reports often incorporate specific words or phrases for comparison, such as "compared to the previous exam," "in the interval," referring to the prior X-ray," and so on. In this paper, we refer to these words or phrases as _prior expressions_, which are also present in general medical datasets such as MIMIC-CXR Johnson et al. (2019) and IU X-ray Demner-Fushman et al. (2016), widely utilized for training and evaluating report generation tasks. The inclusion of prior expressions in medical reports is vital for accurate reporting. However, models trained on medical report datasets often generate reports with inappropriate or misused prior expressions, leading to relatively lower performance metrics. The challenge lies in effectively incorporating and utilizing prior expressions within the model's generation process, a crucial aspect yet to be fully resolved. In Table 1, we present a comparison between ground truth reports and reports generated by two recent models, R2Gen (Chen et al., 2020) and \(M^{2}\)Tr (Cornia et al., 2020), focusing on the presence of prior expressions. It is evident that the synthetic reports contain inappropriate priors. For instance, the synthetic report generated by R2Gen in the first row includes a comparison with the previous exam, despite the absence of any prior information in the ground truth report. Similarly, the synthetic report produced by \(M^{2}\)Tr in the second row includes the phrase "again noted," indicating the existence of a previous image, while the ground truth report lacks any prior expression. These falsely referenced reports, which include prior expressions, tend to yield lower evaluation metrics. In Figure 1, we utilize a rule-based labeler (explained in Section 3.1) to classify synthetic reports into two categories: negative and positive. The negative class represents reports without prior expressions, while the positive class comprises reports with prior expressions. We then plot the distribution of BLEU-4 (Papineni et al., 2002) scores for each class, along with their respective mean and standard deviation. The results clearly demonstrate that the positive reports achieve lower scores compared to the negative reports. This observation suggests that the transformer-based models do not effectively leverage the generated prior expressions, underscoring the significance of properly incorporating prior information during the training process. Table 1 and Figure 1 highlight a critical issue with state-of-the-art (SOTA) models in learning and generating prior expressions, which can potentially confuse radiologists when utilizing these models for report writing. This issue stems from the fundamental disparity between how radiologists compose reports and how models generate them. Radiologists have access to not only prior patient information such as previous exams and medical history, but also the current X-ray images. In contrast, report generation models are only provided \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline Ground Truth & R2Gen (Chen et al., 2020) & \(M^{2}\)Tr (Cornia et al., 2020) \\ \hline Heart size is normal. Aorta is tortuous and catic. Cardiomediastinal contours are normal. Lungs are clear without evidence of fibrosis. Pleural effusions or pneumothorax. Endplate sclerotic changes are present in the thoracic spine. & There are diffuse increased interstitial suggestive of pulmonary fibrosis in bilateral lung xxxx. The fibrosis appears to slightly increased xxxx **compared to previous** in xxxx. The trachea is midline. negative for pleural effusion. & Both lungs are clear and expanded. Heart and mediastinum normal. \\ \hline Stable cardiomediastinal silhouette. No focal pulmonary pleural effusion or pneumothorax. No acute bony abnormality. & The heart is normal size. The mediastinum is unremarkable. There is no pleural or focal airspace disease. Mild chronic degenerative changes are present in the spine. & Low lung volumes. Elevation of the right hemidiaphragm. Patchy opacities right base **again noted**. Left lung clear. Heart size top normal. Aortic calcification. Granulomas. No evidence of pneumothorax. Blunting of the bilateral costopherenic xxxx. Degenerative changes of the thoracic spine. \\ \hline \hline \end{tabular} \end{table} Table 1: Ground truth report from IU X-ray (first column) and examples of reports generated by R2Gen and \(M^{2}\)Tr (second and third column). Prior expressions are emphasized in bold. Figure 1: The distribution of BLEU-4 scores based on the classification by our rule-based labeler into negative and positive categories. The labeler evaluated synthetic reports generated by R2Gen and \(M^{2}\)Tr, both trained on the IU X-ray dataset. The mean and standard deviation are represented by black dots and lines, respectively. The mean scores for negative and positive labels are as follows: for R2Gen, it is (0.1455, 0.0698), and for \(M^{2}\)Tr, it is (0.1218, 0.0583). with X-ray images at a specific moment. With limited prior information, it becomes challenging for the models to generate comprehensive and insightful reports comparable to those created by experts. In our paper, we address this knowledge gap between generation models and radiologists by infusing prior information into existing models. Our aim is to reduce the disparity and enable the improved models to produce more informative and practical reports. Since existing datasets such as IU X-ray and MIMIC-CXR do not contain prior information (previous X-ray images), we adopt a data-driven approach by consulting experienced radiologists. Inspired by the CheXpert labeler Irvin et al. (2019), we develop a rule-based labeler that extracts prior information by identifying specific patterns and keywords in radiology reports. Notably, the rule-based labeler focuses on comparison phrases that indicate whether a medical report corresponds to the first or subsequent exams for each patient. The main contributions of our paper are as follows: 1. We collaborate with radiologists to develop a rule-based labeler that identifies specific keywords and patterns in medical reports related to comparisons. 2. To incorporate prior information into the models, we propose an enhanced transformer-based architecture. This approach is straightforward to implement and can be seamlessly integrated as a plug-in method into modern report generation models. 3. Through empirical evaluation on the IU X-ray and MIMIC-CXR datasets, we demonstrate that our model outperforms baseline models in terms of performance metrics. 4. Furthermore, we conduct a comprehensive analysis to confirm that our model no longer generates false references, addressing a significant limitation observed in previous approaches. ## 2 Related Work Initial research Bai and An (2018); Liu et al. (2019) in radiology report generation employed a basic encoder-decoder architecture, where an encoder extracted key features from medical images and converted them into a latent vector, and a decoder generated the target text from the latent vector. Typically, CNN LeCun et al. (2015) was used as the encoder, and LSTM Hochreiter and Schmidhuber (1997) was chosen as the decoder. Subsequently, visual attention mechanisms were introduced to highlight specific image features and generate more interpretable reports Zhang et al. (2017); Jing et al. (2017); Wang et al. (2018); Yin et al. (2019); Yuan et al. (2019). Recent studies Lovelace and Mortazavi (2020); Chen et al. (2020); Nooralahzadeh et al. (2021); Miura et al. (2020) have explored more advanced architectures using transformers to produce more comprehensive and consistent medical reports. Alternatively, generating medical reports can be approached as a retrieval task, as similar sentences and a specific writing format are often repeated in most reports. Reusing diagnostic text from visually similar X-ray images may result in more consistent and accurate reports compared to generating an entire report from scratch. Li et al. (2019) demonstrated the superiority of retrieval-based models, outperforming many encoder-decoder-based models. More recently, Endo et al. (2021) introduced a retrieval-based model called CXR-RePaiR, which incorporated contrastive language image pretraining (CLIP) Radford et al. (2021) to calculate the similarity between text and image embeddings. CXR-RePaiR generates predictions by selecting the most aligned report from a large report corpus given a specific X-ray image. They achieved SOTA performance on their newly developed metrics. Previous approaches have primarily focused on enhancing model performance through advancements in model architecture, while paying relatively less attention to the distinctive characteristics of radiology reports, particularly those involving comparisons. However, there are a few notable exceptions, such as the work by Ramesh et al. (2022), which specifically explores the impact of prior expressions in reports. The authors of that study introduced a novel dataset named MIMIC-PRO, in which they identified and modified reports that contained hallucinated references to non-existent prior exams. The hallucinated references can be seen as a concept analogous to the prior expressions discussed in our paper. Ramesh et al. (2022) suggested a BioBERT-based model that paraphrased or removed sentences referring to previous reports or images, arguing that these expressions confuse the model and result in falsely referenced sentences. They collaborated with experts to create a "clean" MIMIC-CXR test dataset and compared models trained on MIMIC-CXR and MIMIC-PRO. In contrast to Ramesh et al. (2022), we take a different approach to address the issue of comparison priors: we include prior information in the model and enable it to generate more comprehensive reports in an end-to-end fashion, rather than entirely removing comparison priors. Writing comparisons using prior expressions in radiology reports is unavoidable in the real medical field, and constructing a clean and accurate dataset from real reports is also a laborious task. Thus, our work focused on directly applying the comparison prior to existing models such as R2Gen and \(M^{2}\)Tr. ## 3 Method As observed in Table 1, even the best models to date struggle with unexpected prior expressions. To address this problem, we propose a two-step approach: (1) constructing a rule-based labeler to differentiate reports with and without prior expressions (Section 3.1), and (2) extending the transformer-based architectures (R2Gen and \(M^{2}\)Tr) to incorporate the comparison prior as input (Section 3.2). In the first step, we introduce a rule-based labeler that detects specific comparison prior expressions and categorizes each report as either a first exam (negative) or a following exam (positive), based on its detection. We draw inspiration from the negation and classification principles of the CheXpert labeler Irvin et al. (2019) to design our novel labeler. In the subsequent stage, we integrate our prior label as input into the state-of-the-art architectures to generate more practical and comprehensive reports, and we compare the results with the baseline models. By providing our novel model with the comparison prior, which is typically communicated to radiologists in real diagnostic scenarios, we enable the generation of more comprehensive and consistent medical reports. ### Rule-based Labeler Our rule-based labeler follows the fundamental structure of the CheXpert labeler Irvin et al. (2019), which detects the presence of 14 observations in radiology reports based on fixed rules devised by experts. Consequently, our labeler consists of three distinct stages: mention extraction, mention classification, and mention aggregation. The labeler takes the Finding section of radiology reports as input and generates a binary output (0 or 1). A negative label (0) indicates a report without prior expressions, while a positive label (1) signifies the presence of prior expressions. Mention extractionA mention is defined as a specific keyword likely to be included in prior expressions, such as "previous", "prior", "preceding", "previously", "again", "comparison", "interval", "increase", "decrease", "enlarge", and so on. In this stage, the labeler extracts mentions from each report and marks them within each sentence. However, it is important to note that even if certain sentences contain the designated keywords, the existence of a prior expression cannot be confirmed at this step since those keywords might be used in other contexts. For instance, the word "comparison" can be used in a sentence like "with no comparison studies," indicating the absence of prior expressions. Mention classificationAfter extracting mentions in the first stage, our labeler determines whether each mention corresponds to predefined prior expressions. As similar expressions are frequently employed in reports to denote a comparison with previous exams, we can formalize the patterns of these prior expressions into several key phrases familiar to experienced radiologists. For example, the phrase "compared / similar to {mention}" confirms the presence of prior reports, where \begin{table} \begin{tabular}{c c c} \hline \hline Report & Label \\ \hline 1. Cardiomegaly is noted and is stable **compared to prior examination** from XXXX. & 1 \\ 2. Ill-defined opacity is **again noted** in the region of the linguula. & 1 \\ 3. There are low lung volumes. The lungs are otherwise clear. & 0 \\ 4. The left lower lobe have cleared **in the interval.** & 1 \\ \hline \hline \end{tabular} \end{table} Table 2: Output of the labeler given sampled reports from the IU X-ray dataset. The bolded phrases represent the prior expressions identified by our labeler. \begin{table} \begin{tabular}{c c c c} \hline \hline Dataset & Negative & Positive & Total \\ \hline IU X-ray & 3,426 & 529 & 3,955 \\ MIMIC-CXR & 106,628 & 99,935 & 206,563 \\ \hline \hline \end{tabular} \end{table} Table 3: Number of studies classified as negative (0) or positive (1) by our rule-based labeler. "{mention}" represents keywords such as "previous", "preceding", and "prior". Similarly, "{mention} seen/identified/visualized/... /noted" constitutes a prior expression when "{mention}" pertains to keywords like "again" and "previously". Mention aggregationIn the final stage, the labeler combines the classified mentions and generates either a negative label (0) or a positive label (1), with negative indicating a report without prior expressions and positive denoting the presence of prior expressions. Examples of the labeler's outputs can be seen in Table 2, and the numbers of negative and positive exams in the IU X-ray and MIMIC-CXR datasets are shown in Table 3. ### Extending Model In this section, we describe how we integrate the comparison prior into existing models, such as R2Gen and \(M^{2}\)Tr, to generate more informative and comprehensive reports. Generation ProcessThe generation process of R2Gen and \(M^{2}\)Tr can be illustrated as shown in Figure 2. It follows the following flow: input radiology images \(X\rightarrow\) visual embedding \(V\rightarrow\) latent representation \(L\rightarrow\) output report \(Y\). Initially, chest X-ray images \(X\) are provided as inputs to the visual extractor, where \(X\) consists of the frontal image \(X_{f}\) and the lateral image \(X_{l}\), represented as \(X=\{X_{f},X_{l}\}\). The visual extractor generates the visual embedding \(V=\{v_{1},v_{2},...,v_{S}\}\), which comprises patch features \(v_{s}\in\mathbb{R}^{d}\), with \(d\) being the size of the feature vectors. Subsequently, \(V\) undergoes multiple transformer layers in the encoder to obtain the latent representation \(L=\{l_{1},l_{2},...,l_{T}\}\), where the latent feature vector is denoted as \(l_{t}\in\mathbb{R}^{f}\). Finally, the decoder utilizes \(L\) to generate the final output report \(Y\). Infusing Comparison PriorThe comparison prior \(P\in\mathbb{R}\) is generated from our rule-based labeler and it denotes a negative (0) or positive (1) label. We intend to incorporate the comparison prior into the existing data pipeline in such a way that the addition of the prior does not change the architecture or add any additional weights to train. Otherwise, it will become hard to measure the effect of comparison prior to the generative models. As a result, we added prior \(P\) to both Visual Embedding \(V\) and Latent Representation \(L\) in the generation models shown in Figure 2. The encoder should be given the prior information so that it can generate an appropriate intermediate representation. Furthermore, we also add \(P\) on \(L\) since the knowledge of \(P\) could be weakened after deep transformer layers in the encoder. The decoder will generate the output report based on the latent representation conditioned on \(P\). This whole process emulates the radiologists' examination with prior exams. Therefore, our new visual embedding \(V_{new}\) and new latent representation \(L_{new}\) can be calculated as follows: \[V_{new}=V\oplus P,\quad L_{new}=L\oplus P \tag{1}\] where \(\oplus\) indicates element-wise summation. The strength of our method is that it is applicable to most existing transformer-based models and does not require an extra dataset or information. Figure 2: A conceptual diagram of our approach. The report generation models (R2Gen and \(M^{2}\) Tr) consist of a Visual Extractor, Encoder, and Decoder. Our key idea is to infuse comparison priors generated by our rule-based labeler into (1) Visual Embedding \(V\) and (2) Latent Representation \(L\). ## 4 Experiment ArchitectureTo extract visual features, we utilize pretrained Convolutional Neural Networks (CNNs) such as DenseNet121 (Huang et al., 2017) and ResNet121 (He et al., 2016). Through empirical evaluation, we find that DenseNet performs better for our generation task, and thus, we select it as our base visual extractor. We adopt the structure of Meshed-Memory Transformer (\(M^{2}\)Tr) (Cornia et al., 2020) and Relational Memory-driven Transformer (R2Gen) (Chen et al., 2020) to construct our encoder and decoder. DatasetsWe evaluate our proposed methods on two widely-used English datasets for medical report generation tasks: IU X-ray (Demner-Fushman et al., 2016) and MIMIC-CXR (Johnson et al., 2019). The IU X-ray dataset is a publicly available radiology dataset that consists of 7,470 chest X-ray images and 3,955 radiology reports. Each report is paired with one frontal view image and, optionally, one lateral view image. MIMIC-CXR is a large chest radiograph database comprising 473,057 chest X-ray images and 206,563 reports. We train our model using intact data pairs, which include two images (frontal and lateral) and one report (Findings section). The datasets are divided into train, validation, and test sets following the data split described in Chen et al. (2020). Training DetailsWe first generate the comparison prior for each report using a rule-based labeler. Then, we train our model with the two images and the comparison prior as inputs, and the medical report as the output. We employ the Adam optimizer with an initial learning rate of 0.00005 for the visual extractor and 0.0001 for the encoder-decoder model. The learning rate gradually decreases at pre-defined steps. All experiments are conducted with 3 different seeds and a batch size of 16 on an "NVIDIA GeForce RTX 1080 Ti" GPU. Our code implementation is based on the publicly available codes from Chen et al. (2020) and Nooralahzadeh et al. (2021). Evaluation MetricsWe report general natural language generation (NLG) metrics, including BLEU (Papineni et al., 2002), CIDEr (Vedantam et al., 2015), and ROUGE-L (Lin, 2004). These metrics are commonly used to evaluate the quality of generated text. BLEU measures the n-gram overlap between the generated text and the reference text, while CIDEr is based on cosine similarity between word embeddings and considers both unigrams and multi-word phrases. ROUGE-L evaluates the longest common subsequence between the generated text and the reference text. Including these metrics enables a quantitative comparison of the generated reports with the ground truth and previous models, providing insights into the performance of the proposed approach. ResultsIn Table 4, we present the results of our proposed approach, which incorporates prior information into state-of-the-art NLG models, on two medical image report generation datasets: IU X-Ray and MIMIC-CXR. Our approach consistently outperforms the baselines across all NLG metrics, demonstrating its effectiveness in improving the quality of medical image reports generated by NLG models. On the IU X-Ray dataset, our approach achieves an average improvement of 11.58% and 4.49% on all NLG metrics for R2Gen and \(M^{2}\)Tr models, re \begin{table} \begin{tabular}{c|c|c c c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Model} & \multicolumn{6}{c}{NLG Metrics} \\ & & BL-1 & BL-2 & BL-3 & BL-4 & CIDEr & RG-L \\ \hline \multirow{4}{*}{IU X-Ray} & R2Gen (Chen et al., 2020) & 0.421 & 0.262 & 0.183 & 0.137 & 0.480 & 0.337 \\ \cline{2-7} & w/ prior (ours) & **0.438** & **0.280** & **0.201** & **0.155** & **0.631** & **0.351** \\ \cline{2-7} & \(M^{2}\)Tr (Cornia et al., 2020) & 0.400 & 0.240 & 0.159 & 0.112 & 0.300 & 0.324 \\ \cline{2-7} & w/ prior (ours) & 0.406 & 0.249 & 0.167 & 0.120 & 0.323 & 0.330 \\ \hline \multirow{4}{*}{MIMIC-CXR} & R2Gen (Chen et al., 2020) & 0.335 & 0.206 & 0.138 & 0.100 & 0.148 & 0.278 \\ \cline{2-7} & w/ prior (ours) & 0.342 & 0.222 & **0.152** & **0.110** & **0.166** & **0.301** \\ \cline{1-1} \cline{2-7} & \(M^{2}\)Tr (Cornia et al., 2020) & 0.353 & 0.211 & 0.137 & 0.094 & 0.089 & 0.262 \\ \cline{1-1} \cline{2-7} & w/ prior (ours) & **0.357** & **0.224** & 0.151 & 0.108 & 0.101 & 0.293 \\ \hline \hline \end{tabular} \end{table} Table 4: Training results of the baseline models and models infused with prior information. The results of our approaches are shown in gray rows and the best metrics are bolded. All metrics are averaged over 3 runs. Full table with standard deviation is available in Table 6 spectively, compared to the baseline models. Notably, the CIDEr metric shows the highest improvement, with an increase of 31.46% for R2Gen and 7.00% for \(M^{2}\)Tr. This suggests that, as measured by CIDEr, our approach generates more diverse and contextually relevant captions, which align better with human judgments of quality than other metrics. For the MIMIC-CXR dataset, our approach improves the R2Gen and \(M^{2}\)Tr models by 8.40% and 9.62% on all NLG metrics, respectively, compared to the previous models. The most significant improvement is observed in ROUGE-L, with an increase of 8.27% for R2Gen and 11.83% for \(M^{2}\)Tr. This indicates that our method produces more grammatically correct captions, which is particularly important in medical reports where language errors can have serious consequences. We find that the highest order n-grams (i.e., n=3, 4) show the most significant improvements. This suggests that incorporating external prior information is especially beneficial for generating fluent and informative sentences that typically contain longer phrases and more complex structures. Overall, our findings demonstrate that integrating external prior information can enhance the performance of existing NLG models for medical image reporting tasks, resulting in more informative and accurate medical reports. By incorporating additional domain-specific knowledge into the models, we are able to generate more precise and informative reports while minimizing computational overhead and training data requirements. ## 5 Analysis In this section, we compare the ground truths, synthetic reports created by our proposed model, and two previously published models, R2Gen and \(M^{2}\)Tr, to assess the effectiveness of our model in generating concise and accurate reports without irrelevant or false priors. Table 5 in the Appendix provides examples of synthetic reports generated by each model and the corresponding ground truth for the same radiology image. The first two rows compare reports generated by R2Gen and our model with prior infusion. We observe that R2Gen generates false prior expressions, such as "compared to prior examination", "unchanged from prior", and "again unchanged", which refer to non-existent prior exams. In contrast, our model generates more concise and accurate reports without any prior expressions, resulting in higher performance in NLG metrics. Similarly, the last two rows of Table 5 in the Appendix compare reports generated by \(M^{2}\)Tr and our proposed model. \(M^{2}\)Tr produces reports with false prior expressions, such as "present on the previous exam" and "again noted", while our model avoids including any comparison phrases. Furthermore, reports that include prior expressions tend to be longer due to the additional explanations required for comparison. However, the report generation model does not actually have access to previous exams for comparison, rendering the inclusion of prior expressions irrelevant or misleading. As a result, our models can directly control these phrases by conditioning the generation through priors. Overall, the synthetic reports generated by our proposed model are more concise and accurate compared to those generated by R2Gen and \(M^{2}\)Tr, as evidenced by the higher performance in NLG metrics. Our model achieves this by avoiding irrelevant or false prior expressions through a rule-based labeler and generating reports that contain only relevant and accurate information. These succinct and precise reports generated by our model will effectively assist radiologists in their practice. ## 6 Limitations and Ethical Considerations Our proposed method has certain limitations and ethical considerations that merit discussion. The effectiveness of our approach heavily relies on the rule-based labeler. However, it is important to acknowledge that the labeler may not capture unseen patterns or variations, potentially limiting improvements in various evaluation metrics. Moreover, we were unable to conduct a comprehensive human evaluation of the rule-based labeler in this study due to resource constraints. Therefore, future work should include a detailed evaluation to assess its performance and address any potential limitations. Collaboration with three radiologists at Kyoto University is a critical aspect of our work. The regular expressions designed in the rule-based labelers were validated through mutual confirmation by computer scientists and radiologists. However, it is essential to note that the radiologists involved in the collaboration primarily work in a Japanese hospital setting. This may introduce potential biases or patterns that are specific to the local context. Therefore, it is necessary to cross-check the performance of the rule-based labeler with radiologists from different regions and healthcare systems to ensure broader applicability and minimize any potential bias. Regarding the datasets used in our study, we exclusively utilized publicly available datasets that are properly anonymized and de-identified, addressing privacy concerns. However, it is crucial to emphasize that if datasets containing comparison exams become available in the future, additional precautions must be taken to ensure that no personally identifiable information is inadvertently disclosed or used in a manner that could identify individual patients. By acknowledging these limitations and ethical considerations, we aim to encourage future research and discussions in the field, driving advancements in radiology report generation while prioritizing patient privacy, accuracy, and fairness. These considerations will contribute to the development of robust and ethically sound approaches in radiology report generation. ## 7 Conclusion In this study, we present a novel approach to generate medical reports from chest X-ray images, aiming to bridge the gap between radiologists' knowledge and the lack of prior information in generation models. To achieve this, we developed a rule-based labeler capable of extracting comparison priors from radiology reports in the IU X-ray and MIMIC-CXR datasets. These priors were subsequently integrated into state-of-the-art models for conditional report generation, allowing our approach to emulate the realistic diagnostic process of radiologists who possess prior information about patients. Our experimental results demonstrate the superiority of our method over previous state-of-the-art models, as indicated by improved performance in terms of NLG metrics and a significant reduction in the occurrence of falsely referred prior exams. Through our analysis, we show that the incorporation of comparison priors leads to the generation of more accurate and concise reports, thereby holding great potential to enhance the quality and efficiency of medical report generation for chest X-ray images. Ultimately, this advancement benefits healthcare professionals and patients by providing more reliable and informative reports. Furthermore, our work highlights the future potential of generating medical reports in an end-to-end fashion if a dataset containing all previous exams becomes available. The ability to leverage comprehensive prior information would further amplify the accuracy and effectiveness of medical report generation, paving the way for improved healthcare outcomes.
2306.03986
Kinetic equation for weak interaction of directional internal waves
Starting from the two-dimensional Boussinesq equation without rotation, we derive a kinetic equation for weak interaction of internal waves using non-canonical variables. We follow a formalism introduced by P. Ripa in the 80's. The advantage of this formalism is that it describes the system in terms of the natural linear eigenfunctions of eastward and westward propagating internal waves. Using properties of orthogonality of the eigenfunctions with respect to a (pseudo) metric set by the energy we can write non perturbative theory for the interaction of waves given in terms of the expansion amplitudes. The evolution is controlled by a system of equations, with quadratic nonlinearity, which is an exact representation of the original model equations. The dynamics is constrained by the conservation of energy and pseudo-momentum, which can be written simply as a linear combination of the squared absolute value of the amplitudes. The possibility of a generalization of the Fjortoft's argument to internal gravity waves and observation of a non trivial double cascade of energy and pseudo-momentum is discussed.
Michal Shavit, Oliver Bühler, Jalal Shatah
2023-06-06T19:45:13Z
http://arxiv.org/abs/2306.03986v1
# Kinetic equation for weak interaction of directional internal waves ###### Abstract Starting from the two dimensional Boussinesq equation without rotation we derive a kinetic equation for weak interaction of internal waves using non canonical variables. We follow a formalism introduced by P. Ripa in the 80's. The advantage of this formalism is that it describes the system in terms of the natural linear eigenfunctions of eastward and westward propagating internal waves. Using properties of orthogonality of the eigenfunctions with respect to a (pseudo) metric set by the energy we can write non perturbative theory for the interaction of waves given in terms of the expansion amplitudes. The evolution is controlled by a system of equations, with quadratic nonlinearity, which is an exact representation of the original model equations. The dynamics is constrained by the conservation of energy and pseudo-momentum, which can be written simply as a linear combination of the squared absolute value of the amplitudes. The possibility of a generalization of the Fjortoft's argument to internal gravity waves and observation of a non trivial double cascade of energy and pseudo-momentum is discussed. ###### Contents * 1 Introduction * 2 Kinetic equation for eastward and westward propagating internal waves * 3 Double cascades and wave turbulence of eastward propagating internal waves ## 1 Introduction Consider the equations for a three dimensional incompressible fluid with a vertically stratified density within the Boussinesq approximation: \[\left(\partial_{t}+\mathbf{v}\cdot\nabla\right)\mathbf{v}+f\hat{z }\times\mathbf{v} =-\frac{\nabla p}{\rho_{r}}+b\hat{z} \tag{1}\] \[\left(\partial_{t}+\mathbf{v}\cdot\nabla\right)b+N^{2}\mathbf{v} \cdot\hat{z} =0\] (2) \[\nabla\cdot\mathbf{v} =0 \tag{3}\] where \(\mathbf{v}:\Omega^{3}\rightarrow\mathbb{R}^{3}\) is the velocity field, \(p\) is the pressure deviation from the hydrostatic pressure distribution, \(\rho_{r}\) a constant reference density, \(b=\left(\rho_{0}\left(z\right)-\rho\right)g/\rho_{r}\) is the buoyancy field. Here, \(\rho\) is the total density field, \(\rho_{0}\left(z\right)\) is the background density profile and \(-g\hat{z}\) the gravity force. For the validity of the Boussinesq approximation the density difference is assumed to satisfy \(\left|\rho-\rho_{r}\right|/\rho_{r}\ll 1\). The buoyancy frequency \(N=\sqrt{\left(-g/\rho_{r}\right)\left(\partial_{z}\rho_{0}\right)}\) is taken to be constant and \(f\hat{z}\times\mathbf{v}\) is the Coriolis force within the \(f\)-plane approximation. The domain can be periodic \(\Omega=\mathbb{T}_{L}:=\left[0,L\right]\) with periodic boundary conditions or \(\Omega=\mathbb{R}\). To keep the notation simple, we consider \(\Omega=\mathbb{R}\) Neglecting earth rotation we reduce the system to two dimensions considering one horizontal direction \(x\) and one vertical \(z\). Following a formalism introduced by [1], we rewrite the equations in the following form \[\left(\partial_{t}+\mathbf{v}\cdot\nabla\right)D\phi-L\phi=0 \tag{4}\] where \[D=\begin{pmatrix}-\Delta&0\\ 0&N^{2}\end{pmatrix},\,\,\,L=N^{2}\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\partial_{x}. \tag{5}\] and \[\phi=\begin{pmatrix}A\\ \zeta\end{pmatrix} \tag{6}\] is a two dimensional vector that consists of the stream function \(A\), so that \(\mathbf{v}=\left(-\partial_{z},\partial_{x}\right)A\), and the vertical displacement \(\zeta=-\frac{b}{N^{2}}\). Note that the two entries of \(\phi\) carry different dimensions. The Boussinesq system in two dimensions has two integrals of motion, the energy \(E=\frac{1}{2}\int d\mathbf{x}\left(\left(\nabla A\right)^{2}+\zeta^{2}\right)\) and pseudo-momentum (PM) \(P=\int d\mathbf{x}\zeta\Delta A\). We expand the field \(\phi\) \[\phi=\int\!\!d\alpha z_{\alpha}e_{\alpha}, \tag{7}\] in terms of solutions to the eigenvalue problem that corresponds to the linear part of (4) \[Le_{\alpha}=-i\omega_{\alpha}De_{\alpha}. \tag{8}\] The eigenmodes are \[e_{\alpha}=M^{-1}\left(K\right)\begin{pmatrix}-1\\ s_{\alpha}\end{pmatrix}e^{i\mathbf{k}\cdot\mathbf{x}} \tag{9}\] where \(s_{\alpha}=k_{x}\omega_{\alpha}^{-1}\) is the slowness, \(M\left(K\right)\) is a normalization and \(K=\sqrt{\boldsymbol{k}\cdot\boldsymbol{k}}\) is the wave number absolute value. Waves that carry positive (negative) slowness correspond to eastward (westward) propagating waves. \(D\) is hermitian and semipositive definite and \(L\) is skew hermitian which makes the eigenvalues \[\omega_{\alpha}=\omega\left(\sigma,\mathbf{k}\right)=\sigma Nk_{x}/K \tag{10}\] real and the eigenvectors corresponding to different eigenvalues \(D\)-orthogonal. The three index \(\alpha=\left(\sigma=\pm 1,\mathbf{k}\right)\) indicates the frequency branch and wave number, so by the integral in (7) we mean \(\int\!\!d\alpha=\int\!\!d\sigma\int\!\!d\mathbf{k}\) and \(d\sigma=\sum_{\sigma=\pm 1}\) is the counting measure. Note that \(D\) defines a seminorm in the space spanned by \(\left\{e_{\alpha}\right\}\) in terms of the energy \[E=\frac{1}{2}\int\!\!d\mathbf{x}\phi^{\dagger}D\phi, \tag{11}\] so we pick the normalization \(M=\sqrt{2}K\) s.t eigenvectors corresponding to different eigenvalues are \(D\)-orthonormal \[\int\!\!e_{\beta}^{\dagger}De_{\alpha}d\boldsymbol{x}=\delta_{ \sigma_{\alpha}\sigma_{\beta}}\delta\left(\mathbf{k}_{\alpha}-\mathbf{k}_{ \beta}\right). \tag{12}\] When writing (11) we acknowledge the subtlety of infinite integration. Resolving infinities that can arise from integration over an unbounded domain is deferred to future work. For the purpose of this text, it is assumed that all integrals are locally bounded. Since \(\phi\) is real the fields \(z_{\alpha}\) satisfy \[z_{-}\left(\mathbf{k}\right) = z_{-}^{*}\left(-\mathbf{k}\right) \tag{13}\] \[z_{+}\left(\mathbf{k}\right) = z_{+}^{*}\left(-\mathbf{k}\right) \tag{14}\] so that if we separate integration over the two branches \[\phi=\int\!d\mathbf{k}z_{+}e_{+}\left(\mathbf{k}\right)+\int\!d\mathbf{k}z_{- }e_{-}\left(\mathbf{k}\right)=\phi_{+}+\phi_{-}, \tag{15}\] both components \(\phi_{+}\) and \(\phi_{-}\) are real functions. In terms of the expansion amplitudes, \(z_{\alpha}\), the equations of motion are \[z_{\alpha,t}+i\omega_{\alpha}z_{\alpha}=\frac{1}{2}\int\!d\beta d\gamma\; \sigma_{\alpha}^{\beta\gamma}z_{\beta}^{*}z_{\gamma}^{*}, \tag{16}\] where \(\sigma_{\alpha}^{\beta\gamma}\) are the interaction coefficients given by \[\sigma_{\alpha}^{\beta\gamma} = -\int\!d\mathbf{x}\left(\hat{\phi}_{\alpha}^{\dagger}\left( \mathbf{v}_{\beta}\left(k,m\right)\cdot\nabla D\hat{\phi}_{\gamma}+\mathbf{v} _{\gamma}\left(k,m\right)\cdot\nabla D\hat{\phi}_{\beta}\right)^{*}\right) \tag{17}\] \[= -N^{2}M_{\alpha}^{-1}M_{\beta}^{-1}M_{\gamma}^{-1}\mathbf{k}_{ \beta}\times\mathbf{k}_{\gamma}\left(s_{\beta}+s_{\gamma}+s_{\alpha}\right) \left(s_{\gamma}-s_{\beta}\right)\left(2\pi\right)^{2}\delta\left(\mathbf{k}_{ \alpha,\beta,\gamma}\right). \tag{18}\] From the last line it is clear that waves with the same slowness \(s_{\gamma}=s_{\beta}\) or with wave numbers that are linearly dependent \(\mathbf{k}_{\gamma}=a\mathbf{k}_{\beta}\) do not interact and are exact solutions of the nonlinear equations. The interaction of a triad with \(s_{\beta}+s_{\gamma}+s_{\alpha}=0\) vanishes as well. The integrals of motion are both quadratic in terms of the expansion amplitudes and are given simply by the linear combinations \[E = \frac{1}{2}\int\!d\alpha z_{\alpha}z_{\alpha}^{*}, \tag{19}\] \[P = \frac{1}{2}\int\!d\alpha s_{\alpha}z_{\alpha}z_{\alpha}^{*}. \tag{20}\] where (19) follows from (11), and (20) from the reality conditions (13). Due to conservation of energy and PM the interaction coefficients satisfy \[\sigma_{\alpha}^{\beta\gamma}+\sigma_{\beta}^{\alpha\gamma}+ \sigma_{\gamma}^{\beta\alpha} = 0, \tag{21}\] \[s_{\alpha}\sigma_{\alpha}^{\beta\gamma}+s_{\beta}\sigma_{\beta} ^{\alpha\gamma}+s_{\gamma}\sigma_{\gamma}^{\beta\alpha} = 0. \tag{22}\] The last two equations can be readily obtained by writing the time derivative of the energy in terms of the expansion amplitudes \(z_{\alpha}\): \[0=\dot{E}=\frac{1}{2}\int\!d\alpha\dot{E}_{\alpha}=\frac{1}{4}\int\!d\alpha d \beta d\gamma\;\sigma_{\alpha}^{\beta\gamma}\Re\left(z_{\beta}^{*}z_{\gamma}^ {*}z_{\alpha}^{*}\right) \tag{23}\] for each triad \((\alpha,\beta,\gamma)\) the monomial \(\Re\left(z_{\beta}^{*}z_{\gamma}^{*}z_{\alpha}^{*}\right)\) in the sum above appears with the coefficients \(\left(\sigma_{\alpha}^{\beta\gamma}+\sigma_{\gamma}^{\beta\alpha}+\sigma_{ \beta}^{\alpha\gamma}\right)\), so that energy is conserved if and only if (21) is satisfied, similarly for the time derivative of the PM. These relations can be also verified directly. Note that equations (16,21,22) describe the Boussinesq system in the form of a hydrodynamic type system with two integrals of motion [2, 3]. The plan of the text is as follows: in section 2 we derive a kinetic equation from (16) for weakly interacting internal waves, discuss its equipartition solutions and comment on constant flux solutions. In section 3 we describe the motivation for studying a truncated kinetic equation that includes only interactions of eastward propagating waves, the possibility of a generalization of the Fjortoft's argument to internal waves and observation of a non trivial double cascade of energy and PM. ## 2 Kinetic equation for eastward and westward propagating internal waves Assume that non linear interaction is small compared to the linear evolution, so that we can write the equations of motion (16) introducing a small uniform parameter \(\epsilon\), \[z_{\alpha,t}+i\omega_{\alpha}z_{\alpha}=\frac{1}{2}\epsilon\int\!\!d\beta d \gamma\sigma_{\alpha}^{\beta\gamma}z_{\beta}^{*}z_{\gamma}^{*}. \tag{24}\] Such small uniform parameter relevant for internal wave interaction is the root mean square vertical gradient of the vertical displacement \[\epsilon=\sqrt{\left\langle\left(\partial_{z}\zeta\right)^{2}\right\rangle}. \tag{25}\] The discussion can also be generalized to a non uniform small parameter \(\epsilon=\epsilon_{\alpha}\). Transforming to envelopes \(z_{\alpha}\to z_{\alpha}e^{-i\omega_{\alpha}t}\) (24) becomes, \[\dot{z_{\alpha}}=\frac{1}{2}\epsilon\int\!\!d\beta d\gamma\sigma_{\alpha}^{ \beta\gamma}z_{\beta}^{*}z_{\gamma}^{*}e^{i\Omega_{\alpha\beta\gamma}t}. \tag{26}\] where \(\Omega_{\alpha\beta\gamma}=\omega_{\gamma}+\omega_{\beta}+\omega_{\alpha}\) is the sum of the linear frequencies of the interacting waves. Assuming initial conditions are random, we are interested in writing an evolution equation up to order \(\epsilon^{2}\) to the averaged energy density \[n_{\alpha}: =\left\langle z_{\alpha}z_{\alpha}^{*}\right\rangle, \tag{27}\] where \(\left\langle\cdot\right\rangle\) denotes the average with respect to the initial data distribution. In order to write the kinetic equation we expand the amplitudes \(z_{\alpha}\) in terms of the initial data using integration by parts. Integrating wrt time (26), we write \[z_{\alpha}\left(t\right)=z_{\alpha}\left(0\right)+\frac{1}{2} \epsilon\int_{0}^{t}\!ds\int\!\!d\beta d\gamma\sigma_{\alpha}^{\beta\gamma}z_{ \beta}^{*}z_{\gamma}^{*}e^{i\Omega_{\alpha\beta\gamma}s}. \tag{28}\] Figure 1: Concentration of the scaled wave number amplitude of eastward propagating interacting waves on the resonant manifold. For three interacting eastward propagating waves with wave numbers that satisfy \(\mathbf{q}+\mathbf{p}+\mathbf{k}=0\), with \(\mathbf{p}=K_{p}\left(\cos\theta_{p},\sin\theta_{p}\right)\), the ratio of the absolute values of two wave numbers is plotted as a function of the wave number angles, \(\cos\theta_{p}\) and \(\cos\theta_{q}\). As \(t\omega_{p}\gg 1\) the possible values for interaction concentrate on the one dimensional resonant manifold. One integration by parts yields \[z_{\alpha}\left(t\right)=z_{\alpha}\left(0\right)+\frac{1}{2}\epsilon\int\!\!d \beta d\gamma\sigma_{\alpha}^{\beta\gamma}z_{\beta}^{*}\left(0\right)z_{\gamma} ^{*}\left(0\right)G_{1}\left(t,0\right)+\frac{1}{2}\epsilon\int_{0}^{t}\!ds \int\!\!d\beta d\gamma\sigma_{\alpha}^{\beta\gamma}\frac{d}{ds}\left(z_{\beta} ^{*}z_{\gamma}^{*}\right)G_{1}\left(t,s\right) \tag{29}\] where \[G_{1}\left(t,s\right)=\int_{s}^{t}e^{i\Omega_{\alpha\beta\gamma}\tau}d\tau, \tag{30}\] is chosen so that \(G_{1}\left(t,t\right)=0\). In order to close the kinetic equation at order \(\epsilon^{2}\) we need to integrate by parts one more time \[z_{\alpha}\left(t\right) =z_{\alpha}\left(0\right)+\frac{1}{2}\epsilon\int\!\!d\beta d \gamma\sigma_{\alpha}^{\beta\gamma}z_{\beta}^{*}z_{\gamma}^{*}\left(0\right)G _{1}\left(t,0\right) \tag{31}\] \[+\frac{1}{2}\epsilon\int\!\!d\beta d\gamma\sigma_{\alpha}^{\beta \gamma}\left(\left(\frac{d}{ds}z_{\beta}^{*}z_{\gamma}^{*}\right)\mid_{s=0}G_ {2}^{\beta}\left(t,0\right)+\left(z_{\beta}^{*}\frac{d}{ds}z_{\gamma}^{*} \right)\mid_{s=0}G_{2}^{\gamma}\left(t,0\right)\right)+\frac{1}{2}\epsilon\int _{0}^{t}\!ds\int\!\!d\beta d\gamma\sigma_{\alpha}^{\beta\gamma}P_{4}\left(z \right)G_{2}\left(t,s\right)ds,\] where \[G_{2}^{\beta}\left(t,s\right)=\int_{t}^{s}e^{-i\Omega_{\beta\beta^{\prime} \gamma}s_{1}}G_{1}\left(t,s_{1}\right)ds_{1}, \tag{32}\] and \(P_{4}\left(z\right)\) is a term that includes fourth order polynomials of the amplitudes and \(P_{4}\left(z\right)\mid_{t=0}=\frac{d^{2}}{ds^{2}}\left(z_{\beta}^{*}z_{\gamma }^{*}\right)\mid_{t=0}\). By the product \(P_{4}\left(z\right)G_{2}\left(t,s\right)\) we mean that each monomial in \(P_{4}\) is multiplied by its corresponding time dependent term \(G_{2}^{\beta}\left(t,s\right)\). Assuming the initial distribution of amplitudes is exactly Gaussian, so that the off diagonal second order correlators are zero initially: \(\left\langle z_{\alpha}^{2}\right\rangle,\left\langle z_{+}\left(\mathbf{k} \right)z_{-}\left(\mathbf{k}\right)\right\rangle,\left\langle z_{+}\left( \mathbf{k}\right)z_{-}^{*}\left(\mathbf{k}\right)\right\rangle=0\), we can write an equation for the average energy density: \[n_{\alpha}=\left\langle z_{\alpha}z_{\alpha}^{*}\right\rangle\left(t\right)= \left\langle z_{\alpha}z_{\alpha}^{*}\right\rangle\left(0\right)+\epsilon^{2} \int\!\!d\beta d\gamma\ \sigma_{\alpha}^{\beta\gamma}\left(\sigma_{\beta}^{\alpha\gamma}n_{\alpha}n_{ \gamma}+\sigma_{\gamma}^{\alpha\beta}n_{\beta}n_{\alpha}+\sigma_{\alpha}^{ \beta\gamma}n_{\beta}n_{\gamma}\right)\frac{1-\cos\Omega_{\alpha\beta\gamma} t}{\Omega_{\alpha\beta\gamma}^{2}}+O\left(\epsilon^{4}\right). \tag{33}\] Note that terms that include odd powers of \(\epsilon\) correspond to odd monomials of the amplitudes \(z_{\alpha}\), these contributions vanish due to the Gaussianity of the initial distribution, so the kinetic equations includes only even power in \(\epsilon\). In order to derive the kinetic equation one should start from the discrete case of a finite domain \(\Omega=\left[0,L\right]\) and take the kinetic limits \(L\rightarrow\infty\) and \(t\omega\rightarrow\infty\) in the correct order and rate. In this process we assume that there are enough quasi-resonances wrt to exact resonances so the derivation occurs. We can then write \(\lim_{t\omega\rightarrow\infty}\frac{1-\cos\omega_{\alpha\beta\gamma}t}{ \omega_{\alpha\beta\gamma}^{2}}=\pi t\delta\left(\omega_{\alpha\beta\gamma}\right)\) and arrive at the kinetic equation \[\dot{n_{\alpha}}=\pi\epsilon^{2}\int\!\!d\beta d\gamma\ \sigma_{\alpha}^{\beta\gamma}\left(\sigma_{\beta}^{\alpha\gamma}n_{\alpha}n_{ \gamma}+\sigma_{\gamma}^{\alpha\beta}n_{\beta}n_{\alpha}+\sigma_{\alpha}^{ \beta\gamma}n_{\beta}n_{\gamma}\right)\delta\left(\Omega_{\alpha\beta\gamma} \right). \tag{34}\] All off diagonal correlators except for \(\left\langle z_{+}\left(\mathbf{k}\right)z_{-}^{*}\left(\mathbf{k}\right)\right\rangle\) remain zero at order \(\epsilon^{2}\). The time derivative \(\frac{d}{dt}\left\langle z_{+}\left(\mathbf{k}\right)z_{-}^{*}\left(\mathbf{k} \right)\right\rangle\) has \(O\left(\epsilon^{2}\right)\) fluctuating contributions, so this correlator is in general not zero pointwise; However, it averages over time weakly to zero, so that the kinetic equation (34) remains valid. This is discussed in detail in the appendix. The kinetic equation (34) has an isotropic solution that corresponds to equipartition of energy \[n_{\alpha}=T \tag{35}\] as any third cumulant, i.e any summand in the collision integral of the right hand side of (34), is zero: \[\sigma_{\beta}^{\gamma\alpha}n_{\alpha}n_{\gamma}+\sigma_{\gamma}^{\beta\alpha}n _{\alpha}n_{\beta}+\sigma_{\alpha}^{\beta\gamma}n_{\gamma}n_{\beta}=T^{2}\left( \sigma_{\beta}^{\gamma\alpha}+\sigma_{\gamma}^{\beta\alpha}+\sigma_{\alpha}^{ \beta\gamma}\right)=0. \tag{36}\] The last equality follows from energy conservation in the dynamical equations (21). For particular cases where the slowness is bounded from below the kinetic equation has the solution \[n_{\alpha}=\frac{1}{1+T^{-1}s_{\alpha}}, \tag{37}\] which is isotropic, as \(s_{\alpha}=\sigma N^{-1}K\), and corresponds to equipartition of energy and PM. The kinetic equation (34) has acquired a second PM invariant, \(P_{z}=\int\!d\alpha s_{\alpha}^{z}z_{\alpha}z_{\alpha}^{*}\), based on the vertical slowness \(s_{\alpha}^{z}=k_{z}/\omega_{\alpha}\). This is not conserved in the dynamical equation (16). For resonantly interacting triads the ratios of the interaction coefficients and frequencies are equal - \[\Gamma_{\alpha\beta\gamma}\delta\left(\mathbf{k}_{\alpha,\beta,\gamma}\right) =\frac{\sigma_{\alpha}^{\beta\gamma}}{\omega_{\alpha}}=\frac{\sigma_{\beta}^ {\alpha\gamma}}{\omega_{\beta}}=\frac{\sigma_{\gamma}^{\alpha\beta}}{\omega_{ \gamma}}, \tag{38}\] this follows from the relations among the interaction coefficients, \(\sigma_{\alpha}^{\beta\gamma}\), due to energy and PM conservation: Indeed, for any resonantly interacting triad \((\alpha,\beta,\gamma)\) let \(\mathbf{k}_{x}\) be the vector of the components of the wave numbers in the \(\hat{x}\) direction and let \(\hat{\omega}\) be the vector of the components of the frequencies. Then the three vectors \(\left(\mathbf{k}_{x},\hat{\omega}-\frac{\hat{\omega}\cdot\mathbf{k}_{x}}{ \mathbf{k}_{x}\cdot\mathbf{k}_{x}}\mathbf{k}_{x},\mathbf{1}\right)\), where \(\mathbf{1}=(1,1,1)\), form an orthogonal basis of \(\mathbb{R}^{3}\). As the vector \(\frac{\sigma}{\omega}=\left(\frac{\sigma_{\alpha}^{\beta\gamma}}{\omega_{ \alpha}},\frac{\sigma_{\alpha}^{\beta\gamma}}{\omega_{\beta}},\frac{\sigma_{ \alpha}^{\beta\gamma}}{\omega_{\gamma}}\right)\) is perpendicular to \(\mathbf{k}_{x}\) and \(\hat{\omega}\), it must be proportional to \(\mathbf{1}\). This brings the kinetic equation (34) to the canonical form \[\dot{n_{\alpha}}=\pi\epsilon^{2}\int\!d\beta d\gamma\;\omega_{\alpha}\Gamma_{ \alpha\beta\gamma}^{2}\left(\omega_{\beta}n_{\alpha}n_{\gamma}+\omega_{\gamma }n_{\beta}n_{\alpha}+\omega_{\alpha}n_{\beta}n_{\gamma}\right)\delta\left( \mathbf{k}_{\alpha,\beta,\gamma}\right)\delta\left(\Omega_{\alpha\beta\gamma} \right). \tag{39}\] Let us stress that (39) is the exact and general kinetic equation arising from the Boussinesq dynamical equations (4). We can write (39) as a continuity equation \[\dot{n}_{\alpha}+\text{div}\Pi\left(\mathbf{k}_{\alpha}\right)=0. \tag{40}\] where \(\Pi\) is the two dimensional (not uniquely defined) flux. Integrating (39,40) wrt the angle \(\theta_{\alpha}\) we write \[\dot{g_{\alpha}}+\text{div}\Pi_{k_{\alpha}}=0, \tag{41}\] where \(\Pi_{k_{\alpha}}=K_{\alpha}\int_{0}^{2\pi}d\theta\Pi\cdot\hat{K}\) and \(g_{\alpha}=\int_{0}^{2\pi}\!n_{\alpha}\left(\mathbf{k}\right)d\theta_{k}\) are the isotropic parts of the flux and energy spectrum, respectively. In general, (41) can have additional solutions to (40). In the case that the PM is symmetrically distributed, so that \(n_{-}(k)=n_{+}(k)\) and \(P=0\), (41) has a constant flux formal solution with a Kolmogorov-Zakharov scaling: \(g_{k}\propto K^{-(\delta+d)}\), where \(\delta+d=3\) is the sum of the homogeneity degree of the interaction \(\Gamma\) wrt the wave number amplitudes and dimension (see appendix for the derivation). Locality and relevance of this solution to the full kinetic equation in studied in a following paper. We note that by additionally applying the so-called "hydrostatic approximation" that brings the interaction coefficients and dispersion relations to a bi-homogeneous form \[\omega_{\sigma}\left(\mathbf{k}\right) \rightarrow\sigma N^{-1}k_{x}/\sqrt{k_{z}^{2}} \tag{42}\] \[\Gamma_{\alpha\beta\gamma}^{2} \rightarrow\left(\sqrt{k_{\alpha z}^{2}}+\sqrt{k_{\beta z}^{2}}+ \sqrt{k_{\gamma z}^{2}}\right)^{2}, \tag{43}\] one can search for bi-homogeneous \(n_{\alpha}\left(\mathbf{k}\right)=k_{x}^{m}k_{z}^{l}\) approximate constant flux solutions of (39) by applying conformal transformations similar to those suggested by Kuznetsov [4, 5]. [6] found such solutions for a kinetic equation derived using the hydrostatic approximation for the Boussinesq equations with cylindrical symmetry in three dimensions. In locality analysis of the kinetic equation (39) one should bear in mind that in the inertial range relevant for the ocean, the absolute value of frequency is bounded by the buoyancy frequency and the Coriolis force \(f\leq\left|\omega_{\alpha}\right|\leq N\). We conclude this part by mentioning that the limit \(\omega_{\alpha}\to 0\) (\(k_{x}\to 0\)) that corresponds to pure vertical shear flows pose special problems in non-rotating 2d Boussinesq equations. One difficulty, e.g., is that the off-diagonal correlators \(\left\langle z_{+}(k)z_{-}^{\ast}(k)\right\rangle\) cannot be neglected in our theory. However, if no vertical shear flows exist initially, they remain zero as they cannot be created by nonlinear resonant interaction among waves since the interaction coefficients \(\Gamma=\omega\sigma\) vanish in this limit. This degeneracy is removed once rotation is added. ## 3 Double cascades and wave turbulence of eastward propagating internal waves The decomposition of the fields \(\phi=\sum_{\alpha}z_{\alpha}e_{\alpha}\) into eastward and westward propagating waves allows us to decompose the PM into two components, positive and negative \[P=P_{+}+P_{-}, \tag{44}\] where we use the convention that eastward propagating waves carry positive pseudo momentum \[P_{+}=\frac{1}{2}\int_{-\infty}^{\infty}d\mathbf{k}s_{+}E_{+}\left(\mathbf{k }\right), \tag{45}\] here \(E_{+}=z_{+}z_{+}^{\ast}\left(\mathbf{k}\right)\) is the energy density of the eastward propagating waves and remind that the positive slowness is given by \(s_{+}=k_{x}/\omega_{+}=N^{-1}K\). The time derivatives of the positive and negative PM can be written in terms of the time derivatives of the corresponding energy densities: \[\dot{P}_{\sigma=\pm}=\frac{1}{2}\int_{-\infty}^{\infty}d\mathbf{k}s_{\sigma} \dot{E}_{\sigma}\left(\mathbf{k}\right)=\frac{1}{2}\sum_{\beta,\gamma}\int_{0 }^{\infty}d\mathbf{k}s_{\sigma}\left(\mathbf{k}\right)\sigma_{(\sigma, \mathbf{k})}^{\beta\gamma}\Re\left(z_{\beta}^{\ast}z_{\gamma}^{\ast}z_{( \sigma,\mathbf{k})}^{\ast}\right). \tag{46}\] Consider the initial value problem with waves solely propagating eastward, that is \(z_{-}\left(\mathbf{k},t=0\right)=0\)\(\forall\mathbf{k}\). The latter can be achieved initially if the stream function density in wave number is proportional to the displacement by \[-K\hat{\psi_{0}}\left(\mathbf{k}\right)=N\hat{\eta_{0}}\left(\mathbf{k}\right). \tag{47}\] This implies that the potential energy stored at every wave number equals to the kinetic energy stored at this wave number. The PM is then positive, as \(P_{-}\left(t=0\right)=0\) and it's time derivative vanishes as well \(\dot{P}_{-}\left(t=0\right)=0\). That means that for some short period of time eastward propagating waves remain predominant in the system. This asymmetry is fixed into the memory of the system and cannot be forgotten since the PM is an exact integral of motion and will remain positive for all times. So every creation of a westward propagating wave due to nonlinear interaction must be accompanied by an equal-size creation of an eastward propagating wave. Creation of both positive and negative PM waves is restricted by the conservation of both energy and PM. This has practical relevance for ocean dynamics. For example, strongly directional internal wave fields arise naturally in the case of internal tides radiated away from isolated topography structures such as the Hawaiian ridge [8]. We are interested in studying the implications of the existence of two positive definite invariants with proportional densities on the dynamics and statistics of internal waves. In particular, if the Fjortoft's argument for two dimensional hydrodynamics [9] applies to our system as well, the power law proportionality between the energy and pseudo-momentum should lead to inverse energy transfer, from small to large scales. One way of addressing these questions is to consider the equations truncated to the positive slowness branch: \[\partial_{t}z_{+}\left(\mathbf{k}\right)=\int\!d\mathbf{q}d\mathbf{p}\ \sigma_{\mathbf{k}}^{q\mathbf{p}}z_{+}^{*}\left(\mathbf{p}\right)z_{+}^{*} \left(\mathbf{q}\right)e^{i\Omega_{kpq}t}, \tag{48}\] where the interaction is given by \[\sigma_{\mathbf{k}}^{q\mathbf{p}}=-M_{q}^{-1}M_{p}^{-1}M_{k}^{-1}\mathbf{q} \times\mathbf{p}\left(K+K_{q}+K_{p}\right)\left(K_{p}-K_{q}\right)\delta\left( \mathbf{k}_{\alpha,\beta,\gamma}\right). \tag{49}\] Theses equations have two positive quadratic integrals of motion \[E_{+} =\frac{1}{2}\int d\mathbf{k}z_{+}^{*}\left(\mathbf{k}\right)z_{+ }\left(\mathbf{k}\right), \tag{50}\] \[P_{+} =\frac{1}{2}\int d\mathbf{k}N^{-1}Kz_{+}^{*}\left(\mathbf{k} \right)z_{+}\left(\mathbf{k}\right), \tag{51}\] since the following sums \[\sigma_{1}^{23}+\sigma_{2}^{31}+\sigma_{3}^{23} =0, \tag{52}\] \[K_{1}\sigma_{1}^{23}+K_{2}\sigma_{2}^{13}+K_{3}\sigma_{3}^{21} =0, \tag{53}\] vanish for any triad \(\mathbf{k}_{1}+\mathbf{k}_{2}+\mathbf{k}_{3}=0\), where we used the shorthand notation \(\sigma_{1}^{23}:=\sigma_{\mathbf{k}_{1}}^{\mathbf{k}_{2}\mathbf{k}_{3}}\). We can then write a kinetic equation for the averaged energy density of the positive branch: \[\dot{m}\left(k\right)=\epsilon^{2}\pi\int\!d\mathbf{q}\int\!d\mathbf{p} \omega_{k}\Gamma_{p\!qk}^{2}m_{k}m_{p}m_{q}\left(\omega_{q}m_{q}^{-1}+\omega_{p} m_{p}^{-1}+\omega_{k}m_{k}^{-1}\right)\delta\left(\omega_{p,q,k}\right)\delta \left(\mathbf{k}+\mathbf{q}+\mathbf{p}\right), \tag{54}\] where we denote \(m_{k}=\left\langle z_{+}\left(k\right)z_{+}\left(k\right)^{*}\right\rangle\) truncated to the dynamics of (48). Writing the wave numbers in a polar form \(\mathbf{k}=K\left(\sin\theta,\cos\theta\right)\), the frequency becomes simply \(\omega_{k}=N\cos\theta\) and the interaction coefficients are given by \[\Gamma_{pqk}=N\frac{1}{\sqrt{2}}\left(\sin\theta_{q}+\sin\theta_{p}+\sin \theta_{k}\right)\left(K_{p}+K_{q}+K_{k}\right). \tag{55}\] The kinetic equation (54) has the isotropic equilibrium solution \(m_{k}^{-1}\left(K,\theta\right)=1+T^{-1}K\). Due to the existence of resonantly interacting triads of the positive branch, this kinetic equation should have cascade solutions with constant fluxes (see Figure 1 for an illustration of a resonant manifold of eastward propagating waves). We write (54) as a continuity equation \[\dot{m}\left(\mathbf{k}\right)+\mathrm{div}\Pi\left(\mathbf{k}\right)=0. \tag{56}\] here, similarly to (40), \(\Pi\) is the two-dimensional flux. Integrating (56) wrt the angle \(\theta\) we write \[\dot{l_{k}}+\mathrm{div}\Pi_{k}=0, \tag{57}\] where \(\Pi_{k}=K\int_{0}^{2\pi}d\theta\Pi\cdot\hat{K}\) and \(l_{k}=\int_{0}^{2\pi}m\left(\mathbf{k}\right)d\theta\) are the isotropic parts of the flux and energy spectrum, respectively. If Fjortoft's argument applies for this system, in a turbulent cascade if the radial flux \(\Pi_{k}\) is not zero, it should be negative. Similarly to the full kinetic equation, a formal solution of (57) is \(l_{k}\propto K^{-3}\). Constant flux solutions are studied in a following paper. ## Appendix I: Off diagonal second order anomalous correlators In writing a kinetic theory for the averaged energy density \(n_{\alpha}=\left\langle z_{\alpha}z_{\alpha}^{*}\right\rangle_{\rho}\) one needs to make sure that other second order correlators remain zero, otherwise these correlators should be added to the kinetic equation (34) or at least monitored. Such off diagonal correlators sometimes referred to as anomalous correlators [7]. The relevant off diagonal correlator for (24) is \(\left\langle z_{+}z_{-}^{*}\right\rangle\). Even though initial conditions are Gaussian the time derivative \(\frac{d}{dt}\left\langle z_{+}z_{-}^{*}\right\rangle\neq 0\) is not zero. We show however that it is zero in the weak sense of distributions; that is, rapidly fluctuating around zero in the kinetic limit \(t\omega\rightarrow\infty.\) Writing the correlator in terms of the Fourier expansion of the stream function and elevation: \[\left\langle z_{+}\left(\mathbf{k}\right)z_{-}^{*}\left(\mathbf{k}\right) \right\rangle=\frac{1}{2}\left\langle\left(k^{2}+m^{2}\right)\hat{A}\left( \mathbf{k}\right)\hat{A}^{*}\left(\mathbf{k}\right)-N^{2}\hat{\zeta}\left( \mathbf{k}\right)\hat{\zeta}^{*}\left(\mathbf{k}\right)\right\rangle+\sqrt{k ^{2}+m^{2}}N\left\langle\hat{A}\left(\mathbf{k}\right)\hat{\zeta}^{*}\left( \mathbf{k}\right)-\hat{\zeta}\left(\mathbf{k}\right)\hat{A}^{*}\left(\mathbf{ k}\right)\right\rangle, \tag{58}\] we see that the physical interpretation of the case where the two brackets fluctuate each around zero in the equation above is that the kinetic energy stored at each wave number \(\mathbf{k}\) equals to the potential energy stored at this wave number and that the PM stored at \(\mathbf{k}\) equals to the PM stored at \(-\mathbf{k}.\) Let us compute the time derivative of the product \(z_{(+,\mathbf{k})}z_{(-,\mathbf{k})}^{*};\) \[\frac{d}{dt}\left(z_{(+,\mathbf{k})}z_{(-,\mathbf{k})}^{*}\right)=\frac{1}{2} \epsilon\int\!d\gamma d\beta\left(\sigma_{(+,\mathbf{k})}^{\beta\gamma}z_{ \beta}^{*}z_{\gamma(-,\mathbf{k})}^{*}e^{i\left(\omega_{\gamma}+\omega_{\beta }+\omega_{(+,\mathbf{k})}\right)t}+\sigma_{(-,\mathbf{k})}^{\beta\gamma}z_{(+,\mathbf{k})}z_{\beta}z_{\gamma}e^{-i\left(\omega_{\gamma}+\omega_{\beta}+ \omega_{(-,\mathbf{k})}\right)t}\right) \tag{59}\] to write the kinetic equation for \(\left\langle z_{(+,\mathbf{k})}z_{(-,\mathbf{k})}^{*}\right\rangle\) we take the Laplace transform and integrate by parts once \[s\mathcal{L}\left(z_{(+,\mathbf{k})}z_{(-,\mathbf{k})}^{*}\right)- \left(z_{(+,\mathbf{k})}z_{(-,\mathbf{k})}^{*}\right)\mid_{t=0} =\frac{1}{2}\epsilon J\left(3\right)\] \[-\frac{1}{2}\epsilon\int_{0}^{\infty}\int\!d\gamma d\beta\sigma_ {(+,\mathbf{k})}^{\beta\gamma}\frac{d}{dt}\left(z_{\beta}^{*}z_{\gamma}^{*}z_ {(-,\mathbf{k})}^{*}\right)\frac{e^{i\left(\omega_{\gamma}+\omega_{\beta}+ \omega_{(+,\mathbf{k})}\right)t}}{i\left(\omega_{\gamma}+\omega_{\beta}+ \omega_{(+,\mathbf{k})}\right)-s}e^{-st}dt \tag{60}\] \[-\frac{1}{2}\epsilon\int_{0}^{\infty}\int\!d\gamma d\beta\sigma_ {(-,\mathbf{k})}^{\beta\gamma}\frac{d}{dt}\left(z_{(+,\mathbf{k})}z_{\beta}z _{\gamma}\right)\frac{e^{-i\left(\omega_{\gamma}+\omega_{\beta}+\omega_{(-, \mathbf{k})}\right)t}}{i\left(\omega_{\gamma}+\omega_{\beta}+\omega_{(+, \mathbf{k})}\right)-s}e^{-st}dt, \tag{61}\] where \(J\left(3\right)\) stands for third order polynomials which do not contribute to the kinetic equation and \(\mathcal{L}\) for the Laplace transform. Let us consider term on the RHS of (54) \[\frac{1}{2}\epsilon\int_{0}^{\infty}\frac{d}{dt}\left(z_{\beta}^{ *}z_{\gamma}^{*}z_{(-,\mathbf{k})}^{*}\right)\frac{e^{i\left(\omega_{\gamma}+ \omega_{\beta}+\omega_{(+,\mathbf{k})}\right)t}}{i\left(\omega_{\gamma}+ \omega_{\beta}+\omega_{(+,\mathbf{k})}\right)-s}e^{-st}dt\] \[=-\left(\frac{1}{2}\epsilon\right)^{2}\int_{0}^{\infty}\left( \int_{0}^{\infty}\int\!d\beta^{\prime}d\gamma^{\prime}\sigma_{\beta}^{\beta^{ \prime}\gamma}z_{\beta^{\prime}}z_{\gamma^{\prime}}z_{\gamma(-,\mathbf{k})}^{* }\right)\frac{d}{dt}\int_{0}^{t}\frac{e^{i\left(\omega_{\gamma}+\omega_{ \beta}+\omega_{(+,\mathbf{k})}\right)t^{\prime}}}{i\left(\omega_{\gamma}+ \omega_{\beta}+\omega_{(+,\mathbf{k})}\right)-s}e^{-i\left(\omega_{\gamma}+ \omega_{\beta^{\prime}}+\omega_{\beta}\right)t^{\prime}}e^{-st^{\prime}}dt+...\] Integrate by parts once more and take the average with respect to the Gaussian initial distribution we arrive at \[\mathcal{L}\left(z_{(+,\mathbf{k})}z_{(-,\mathbf{k})}^{*}\right)- \frac{\left(z_{(+,\mathbf{k})}z_{(-,\mathbf{k})}^{*}\right)\mid_{t=0}}{s} =\frac{1}{2}\epsilon^{2}\int\!d\beta d\gamma\sigma_{(+,\mathbf{k})}^{ \beta\gamma}\left(\sigma_{\beta}^{\gamma(-,\mathbf{k})}n_{\gamma}n_{(-, \mathbf{k})}+\text{permutations}\right)\times \tag{62}\] \[\left(\frac{1}{s\left(i\left(\omega_{\gamma}+\omega_{\beta}+\omega_ {(+,\mathbf{k})}\right)-s\right)\left(2i\omega_{(+,\mathbf{k})}-s\right)} \right)+...+O\left(\epsilon^{3}\right). \tag{63}\] Considering all the other contributions, take the inverse Laplace transform and take the time derivative we obtain the equation for the off-diagonal correlator \[\frac{d}{dt}\left\langle z_{(+,{\bf k})}z_{(-,{\bf k})}^{*}\right\rangle =\frac{1}{2}\epsilon^{2}\int\!d\beta d\gamma\sigma_{(+,{\bf k})}^{ \beta\gamma}\left(\sigma_{(+,{\bf k})}^{(-,{\bf k})\gamma}n_{\gamma}n_{(-,{\bf k })}+\sigma_{\gamma}^{\beta(-,{\bf k})}n_{\beta}n_{(-,{\bf k})}+\sigma_{(-,{\bf k })}^{\beta\gamma}n_{\beta}n_{\gamma}\right)\times \tag{64}\] \[e^{i\left(\omega_{\gamma}+\omega_{\beta}+\omega_{(+,{\bf k})} \right)t}\int_{0}^{t}e^{-i\left(\omega_{\gamma}+\omega_{(-,{\bf k})}+\omega_{ \beta}\right)t^{\prime}}dt^{\prime}\] (65) \[+\frac{1}{2}\epsilon^{2}\int\!d\beta d\gamma\sigma_{(-,{\bf k})}^ {\beta\gamma}\left(\sigma_{\beta}^{(+,{\bf k})\gamma}n_{\gamma}n_{(+,{\bf k})} +\sigma_{\gamma}^{\beta(+,{\bf k})}n_{\beta}n_{(+,{\bf k})}+\sigma_{(+,{\bf k })}^{\beta\gamma}n_{\beta}n_{\gamma}\right)\times\] \[e^{-i\left(\omega_{\gamma}+\omega_{\beta}+\omega_{(-,{\bf k})} \right)t}\int_{0}^{t}e^{i\left(\omega_{\gamma}+\omega_{(+,{\bf k})}+\omega_{ \beta}\right)t^{\prime}}dt^{\prime}. \tag{66}\] Let us write explicitly the imaginary and real parts of one of the oscillating terms on the right \[\Im\left(e^{i\left(\omega_{\gamma}+\omega_{\beta}+\omega_{(+,{ \bf k})}\right)t}\int_{0}^{t}e^{-i\left(\omega_{\gamma}+\omega_{(-,{\bf k})}+ \omega_{\beta}\right)t}\right) =i\cos\left(\omega_{\gamma}+\omega_{\beta}+\omega_{(+,{\bf k})} \right)t\frac{1-\cos\left(\omega_{\gamma}+\omega_{(-,{\bf k})}+\omega_{\beta} \right)t}{\left(\omega_{\gamma}+\omega_{(-,{\bf k})}+\omega_{\beta}\right)} \tag{67}\] \[-i\sin\left(\omega_{\gamma}+\omega_{\beta}+\omega_{(+,{\bf k})} \right)t\frac{\sin\left(\omega_{\gamma}+\omega_{(-,{\bf k})}+\omega_{\beta} \right)t}{\left(\omega_{\gamma}+\omega_{(-,{\bf k})}+\omega_{\beta}\right)},\] \[\Re\left(e^{i\left(\omega_{\gamma}+\omega_{\beta}+\omega_{(+,{ \bf k})}\right)t}\int_{0}^{t}e^{-i\left(\omega_{\gamma}+\omega_{(-,{\bf k})}+ \omega_{\beta}\right)t}\right) =-\sin\left(\omega_{\gamma}+\omega_{\beta}+\omega_{(+,{\bf k})} \right)t\frac{1-\cos\left(\omega_{\gamma}+\omega_{(-,{\bf k})}+\omega_{\beta }\right)t}{\left(\omega_{\gamma}+\omega_{(-,{\bf k})}+\omega_{\beta}\right)} \tag{68}\] \[-\cos\left(\omega_{\gamma}+\omega_{\beta}+\omega_{(+,{\bf k})} \right)t\frac{\sin\left(\omega_{\gamma}+\omega_{(-,{\bf k})}+\omega_{\beta} \right)t}{\left(\omega_{\gamma}+\omega_{(-,{\bf k})}+\omega_{\beta}\right)}.\] We will show that in the kinetic limit \(\omega t\to\infty\)\(f=\sin\left(\omega_{\gamma}+\omega_{\beta}+\omega_{(+,{\bf k})}\right)t\frac{ \sin\left(\omega_{\gamma}+\omega_{(-,{\bf k})}+\omega_{\beta}\right)t}{\left( \omega_{\gamma}+\omega_{(-,{\bf k})}+\omega_{\beta}\right)}\) converges weakly to \(0\), this similarly follows for the rest of the terms. First let us quickly show that both functions \(\sin\left(xs\right),t\sin\left(xs\right)\) converge weakly to \(0\) as \(s\to\infty\): Let \(\phi\) be a test function, then using integration by parts \[\lim_{t\to\infty}\int_{-\infty}^{\infty}\sin\left(xt\right)\phi\left(x\right)dx =\lim_{t\to\infty}\int\frac{d}{dx}\left(-\frac{\cos\left(xt\right)}{t} \right)\phi\left(x\right)dx=\lim_{t\to\infty}\frac{1}{t}\int\cos\left(xt \right)\phi_{x}\left(x\right)dx=\lim_{t\to\infty}\frac{C}{t}=0 \tag{69}\] and \[\lim_{t\to\infty}\int_{-\infty}^{\infty}t\sin\left(xt\right)\phi \left(x\right)dx =\lim_{t\to\infty}\int\frac{d}{dx}\left(-\cos\left(xt\right) \right)\phi\left(x\right)dx=\lim_{t\to\infty}\int\frac{d}{dx}\left(\frac{\sin \left(xt\right)}{t}\right)\phi_{x}\left(x\right)dx\] \[=-\lim_{t\to\infty}\int\frac{\sin\left(xt\right)}{t}\phi_{xx} \left(x\right)dx=\lim_{t\to\infty}\frac{C}{t}=0. \tag{70}\] Let \(\Gamma\left(y\right)\) be a bounded domain, \(g\left(y\right)\) a nice differential function and \(y_{0}\in[-1,1]\) s.t \(\left(y-y_{0}\right)>0\) then \[\int_{\Gamma}dy\sin\left(\left(y-y_{0}\right)t\right)\frac{\sin \left(yt\right)}{y}g\left(y\right) =\int dy\sin\left(\left(y-y_{0}\right)t\right)\frac{d}{dy}\int_{c}^ {y}\frac{\sin\left(y^{\prime}t\right)}{y^{\prime}}dy^{\prime}g\left(y\right)\] \[=-\int dy\frac{d}{dy}\left(\sin\left(\left(y-y_{0}\right)t\right)g \left(y\right)\right)\int_{c}^{y}\frac{\sin\left(y^{\prime}t\right)}{y^{\prime}} dy^{\prime}\] \[=-\int dy\left(-t\cos\left(\left(y-y_{0}\right)t\right)g\left(y \right)+\sin\left(\left(y-y_{0}\right)t\right)g_{y}\left(y\right)\right)\int_{c}^ {y}\frac{\sin\left(Ny^{\prime}t\right)}{y^{\prime}}dy^{\prime}\] taking the limit \(t\rightarrow\infty\), using what we showed in (69,70) and \(\left|\Theta\left(y\right)\right|=\left|\lim_{t\rightarrow\infty}\int_{c}^{y} \frac{\sin\left(Ny^{\prime}t\right)}{y^{\prime}}dy^{\prime}\right|<2\) we obtain \[\lim_{t\rightarrow\infty}\int_{\Gamma}dy\sin\left(\left(y-y_{0}\right)t\right) \frac{\sin\left(yt\right)}{y}g\left(y\right)\sim\int dy\lim_{t\rightarrow \infty}\left(-t\cos\left(\left(y-y_{0}\right)t\right)g\left(y\right)+\sin \left(\left(y-y_{0}\right)t\right)g_{y}\left(y\right)\right)2=0. \tag{71}\] Finally, we show that the integration of the collision kernel can be brought to a form similar to (71). Consider the integral \[\mathcal{I}_{\pm}=\int K_{p}dK_{p}\int K_{q}dK_{q}\delta\left(\mathbf{k} \right)\int d\theta_{p}\int d\theta_{q}\sin\left(N\left(\cos\theta_{p}+\cos \theta_{q}-\cos\theta_{k}\right)t\right)\frac{\sin\left(N\left(\cos\theta_{p}+ \cos\theta_{q}+\cos\theta_{k}\right)t\right)}{\left(\cos\theta_{p}+\cos\theta _{q}+\cos\theta_{k}\right)}\mathcal{K} \tag{72}\] where \(\mathcal{K}\) stands for terms in the collision kernel. Let us change to the variables \[x =\cos\theta_{p}-\cos\theta_{q} \tag{73}\] \[y =\cos\theta_{p}+\cos\theta_{q}+\cos\theta_{k} \tag{74}\] Note that away from the resonant condition, \(y=0\), (72) is zero in the limit \(\omega t\rightarrow\infty\). The Jacobian determinant of the coordinate transformation is \(\det J=2\sin\theta_{p}\sin\theta_{q}\), which is positive in the vicinity of \(y=0\); take \(\Gamma\) to be a small vicinity of \(y=0\) s.t \(\det J\left|{}_{\Gamma}>0\right.\) in the kinetic time limit we are we are left with integral of the form \[\lim_{\omega t\rightarrow\infty}\int K_{p}dK_{p}\int K_{q}dK_{q}\delta\left( \mathbf{k}\right)\int dx\int_{\Gamma}dy\det J\sin\left(N\left(y-2\cos\theta_{ k}\right)t\right)\frac{\sin\left(Nyt\right)}{y}\mathcal{K}\left(x,y\right)=0 \tag{75}\] this integral vanishes in the limit due to (71). ## Appendix II: Isotropic part of constant flux solutions of the kinetic equation Consider the kinetic equation (54) for eastward propagating waves, \[\dot{m}_{k}=\epsilon^{2}\pi\int d\mathbf{q}\int d\mathbf{p}\omega_{k}\Gamma_ {pqk}^{2}m_{k}m_{p}m_{q}\left(\omega_{q}m_{q}^{-1}+\omega_{p}m_{p}^{-1}+\omega _{k}m_{k}^{-1}\right)\delta\left(\omega_{p,q,k}\right)\delta\left(\mathbf{k}+ \mathbf{q}+\mathbf{p}\right). \tag{76}\] The interaction coefficients, frequencies, and resonant manifold parameterization are given in terms of separable functions in angles and wave number amplitudes. The interaction coefficient is a homogeneous function wrt the wave number amplitudes. So it makes sense to look for solutions in the separable form \[m\left(\mathbf{k}\right)=\frac{f\left(\theta\right)}{K^{w}}. \tag{77}\] We will show that \(w=\delta+d=3\) has the KZ scaling, where \(\delta=1\) is the homogeneity degree of \(\Gamma\) wrt to the wave vector amplitudes and \(d=2\) is the dimension. To determine \(w\) let us integrate (76) wrt to the angle and write the isotropic collision kernel as three identical copies \[\dot{l}_{k}=\int_{0}^{2\pi}d\theta_{k}\dot{m}\left(k\right)=\epsilon^{2}\pi \int dK_{p}\int dK_{q}\left(\frac{1}{3}\mathcal{I}_{k}+\frac{1}{3}\mathcal{I} _{k}+\frac{1}{3}\mathcal{I}_{k}\right), \tag{78}\] where \(\mathcal{I}_{k}=K_{p}K_{q}\int_{0}^{2\pi}d\theta_{q}\int_{0}^{2\pi}d\theta_{p} \int_{0}^{2\pi}d\theta_{k}\omega_{k}\Gamma_{pqk}^{2}n_{k}n_{p}n_{q}\left( \omega_{q}n_{q}^{-1}+\omega_{p}n_{p}^{-1}+\omega_{k}n_{k}^{-1}\right)\delta \left(\omega_{p,q,k}\right)\delta\left(\mathbf{k}+\mathbf{q}+\mathbf{p} \right)\). Using a similar approach, one can show that the isotropic part scaling of the constant flux solution to the full kinetic equation in the case that PM is symmetrically distributed \(n_{+}\left(k\right)=n_{-}\left(k\right)\) has the same scaling. Let us transfrom the second term on the RHS of (78) using Zakharov transformation of wave number amplitudes and permutation of the angles: \[K_{q}=\frac{K^{2}}{K_{q}^{\prime}},K_{p}=K\frac{K_{p}^{\prime}}{K_{ q}^{\prime}}, \tag{79}\] \[\theta_{q}\rightarrow\theta_{k}, \theta_{k}\rightarrow\theta_{q^{\prime}},\theta_{p}=\theta_{p^{ \prime}}, \tag{80}\] and a similar transformation of the third term \[K_{p}=\frac{K^{2}}{K_{p}^{\prime}},K_{q}=K\frac{K_{q}^{\prime}}{K _{p}^{\prime}}, \tag{81}\] \[\theta_{p}\rightarrow\theta_{k}, \theta_{k}\rightarrow\theta_{p^{\prime}},\theta_{q}=\theta_{q^{ \prime}}. \tag{82}\] These transformations transform the collision kernal to itself times a factor. That brings (78) to the following form \[\dot{l}_{k}=\frac{1}{3}\epsilon^{2}\pi\int dK_{p}\int dK_{q}\left[\omega_{k}+ \omega_{q}\left(\frac{K}{K_{q}}\right)^{y}+\omega_{p}\left(\frac{K}{K_{p}} \right)^{y}\right]\mathcal{I}_{k}, \tag{83}\] where \(y=-2w+6\). If \(y=0\) the rectangular brackets are proportional to the resonant condition and vanish. So \(w=\delta+d=3\) is a formal solution of the isotropic kinetic equation (78). Its locality and relevance as a solution of (76) is discussed in a following paper.
2307.05006
Improving RNN-Transducers with Acoustic LookAhead
RNN-Transducers (RNN-Ts) have gained widespread acceptance as an end-to-end model for speech to text conversion because of their high accuracy and streaming capabilities. A typical RNN-T independently encodes the input audio and the text context, and combines the two encodings by a thin joint network. While this architecture provides SOTA streaming accuracy, it also makes the model vulnerable to strong LM biasing which manifests as multi-step hallucination of text without acoustic evidence. In this paper we propose LookAhead that makes text representations more acoustically grounded by looking ahead into the future within the audio input. This technique yields a significant 5%-20% relative reduction in word error rate on both in-domain and out-of-domain evaluation sets.
Vinit S. Unni, Ashish Mittal, Preethi Jyothi, Sunita Sarawagi
2023-07-11T03:57:00Z
http://arxiv.org/abs/2307.05006v1
# Improving RNN-Transducers with Acoustic Lookahead ###### Abstract RNN-Transducers (RNN-Ts) have gained widespread acceptance as an end-to-end model for speech to text conversion because of their high accuracy and streaming capabilities. A typical RNN-T independently encodes the input audio and the text context, and combines the two encodings by a thin joint network. While this architecture provides SOTA streaming accuracy, it also makes the model vulnerable to strong LM biasing which manifests as multi-step hallucination of text without acoustic evidence. In this paper we propose Lookahead that makes text representations more acoustically grounded by looking ahead into the future within the audio input. This technique yields a significant \(5\%-20\%\) relative reduction in word error rate on both in-domain and out-of-domain evaluation sets. Vinit S. Unni\({}^{1}\), Ashish Mittal\({}^{1,2}\), Preethi Jyothi\({}^{1}\), Sunita Sarawagi\({}^{1}\)\({}^{1}\)Indian Institute of Technology Bombay, India \({}^{2}\)IBM Research, India [email protected], [email protected], [email protected], [email protected] **Index Terms**: speech recognition, RNN transducer, acoustic hallucinations ## 1 Introduction RNN-Transducers (RNN-Ts) [1] are the predominant choice for end-to-end automatic speech recognition (ASR) offering both high accuracy and streaming capabilities [2, 3]. They comprise a speech encoder that can process speech to generate an acoustic representation and a text encoder that is conditioned on label outputs from previous time-steps to generate a textual representation. Both the acoustic and textual representations are further combined by a simple joint network to predict the final output sequence. Apart from making the model streaming-friendly, separate speech and text modules in RNN-Ts also allow for text-only data to be used in training the text encoder [4]. While having separate speech and text modules in RNN-Ts has its benefits, it also makes the model vulnerable to strong biases from the language model. Driven by strong textual priors, the representation from the text encoder could be very biased towards an output unit that is eventually adopted by the joint network but does not have any acoustic correlates in the speech input. Such outputs could be considered hallucinations that arise due to the overconfidence of the language model in the RNN-T [5]. This problem is more severe when the RNN-T is used to decode out-of-domain utterances. Apart from the more egregious hallucination errors, we also find that language model biases in RNN-Ts lead to word-boundary errors. For example, "villeroy took" is mispredicted by an RNN-T baseline as "villar I took". Hallucinated outputs have been studied a lot more in the context of neural machine translation where the decoder language model hallucinates content that is not aligned to the source sentence [6]. In contrast, the problem of hallucination in RNN-Ts, that stems from its very design, has been far less studied and demands more attention. In this work, we propose Lookahead as a fix for the problem of hallucinations in RNN-Ts. Lookahead aims to make the textual representations more acoustically grounded by _looking ahead_ into the future within the speech signal. To achieve such a lookahead without interfering with the RNN-T's online decoding capabilities, we extract a limited number of lookahead output tokens for each frame of the input speech using only the acoustic encoder and further use these extracted tokens to modify the textual representation. This technique yields significant reductions in word error rates (WERs) on the established Librispeech benchmark and a variety of out-of-domain evaluation sets. We also show that beyond improving WERs, Lookahead results in predictions that are more acoustically faithful to the speech input. For example, the reference "la valifier" is misrecognized as "the valet" by an RNN-T baseline, while an RNN-T baseline with Lookahead predicts "lavalier". **Contributions**: Thus, overall the contributions of this paper are as follows: (1) We highlight the problem of hallucination in SOTA online ASR models and attribute it to the speech independent encoding of text representations. (2) We propose a fix based on enriching text representations with a lookahead of future tokens extracted from the audio. (3) We present a simple extension of the RNN-T architecture called Lookahead with very modest computational overheads. (4) We present an evaluation on three benchmarks on various settings of model sizes and show that our proposal improves WER and reduces hallucination significantly. ## 2 Background: RNN Transducer Let the input audio be denoted by \(\mathbf{x}=\{x_{1},x_{2},x_{3},...x_{T}\}\) where each \(x_{t}\) represents the acoustic features at time \(t\). Let the corresponding text transcript be \(\mathbf{y}=\{y_{1},y_{2},y_{3}...y_{U}\}\) where \(y_{u}\in\mathcal{V}\) denotes the \(u^{th}\) output token drawn from a vocabulary \(\mathcal{V}\). One of the most distinguishable features of an RNN-T is the presence of two separate encoders for text and acoustic signals respectively. The acoustic encoder (AE) takes as input \(\mathbf{x}=\{x_{1},x_{2},x_{3},...x_{T}\}\) and generates the acoustic encoded representation \(\mathbf{h}=\{h_{1},h_{2},h_{3},...h_{T}\}\). The text or language encoder (LE) generates representation \(g_{u}\) appropriate for the next output token as a function of previous tokens \(\mathbf{y}_{<u}=y_{1}\ldots,y_{u-1}\) \[g_{u}=\text{LE}(\mathbf{y}_{<u})\ \ \ \ h_{t}=\text{AE}(\mathbf{x},t)\] A _Joint Network_ (JN) combines the two encodings for each \(t\in[1\ldots T]\) and each \(u\in[1,\ldots,U]\) to generate a lattice \(S\). Each cell of the lattice represents a state \(s(t,u)\), from which we generate a probability distribution of a token belonging to
2307.02769
The Goldman bracket characterizes homeomorphisms between non-compact surfaces
We show that a homotopy equivalence between two non-compact orientable surfaces is homotopic to a homeomorphism if and only if it preserves the Goldman bracket, provided our surfaces are neither the plane nor the punctured plane.
Sumanta Das, Siddhartha Gadgil, Ajay Kumar Nair
2023-07-06T04:43:43Z
http://arxiv.org/abs/2307.02769v2
# The Goldman bracket characterizes homeomorphisms between non-compact surfaces ###### Abstract. We show that a homotopy equivalence between two non-compact orientable surfaces is homotopic to a homeomorphism if and only if it preserves the Goldman bracket, provided our surfaces are neither the plane nor the punctured plane. ## 1. Introduction All manifolds are assumed to be second countable and Hausdorff. A surface is a two-dimensional manifold. Throughout this note, all surfaces will be assumed to be connected and orientable. We say a surface is of _finite-type_ if its fundamental group is finitely generated; otherwise, we say it is of _infinite-type_. A fundamental question in topology is whether homotopy equivalent closed \(n\)-manifolds are necessarily homeomorphic, and whether every homotopy equivalence is homotopic to a homeomorphism. More generally, we may ask what additional structures or conditions characterize homotopy equivalences that are homotopic to homeomorphisms. For \(n\geq 3\), the answer to these questions is negative in general - for example, the Lens spaces \(L(7,1)\) and \(L(7,2)\) are homotopy equivalent but not homeomorphic. In the case of closed surfaces, the classical _Dehn-Nielsen-Baer theorem_[2, Appendix] says that every homotopy equivalence is homotopic to a homeomorphism. However, the corresponding result does not hold for compact two-manifolds with boundary. For example, the torus with one hole and the sphere with three holes are homotopy equivalent but not homeomorphic. Similarly, homotopy equivalence does not imply homeomorphism for non-compact surfaces without boundary. For example, the once-punctured torus and the thrice-punctured sphere are homotopy equivalent but not homeomorphic. Indeed up to homotopy equivalence, there is precisely one (connected) surface of infinite type, but up to homeomorphism, there are \(2^{\aleph_{0}}\) many infinite-type surfaces; see [1, Proposition 3.1.11]. In this note, we show that there is a simple and natural characterization of when a homotopy equivalence \(f\colon\Sigma^{\prime}\to\Sigma\) between non-compact surfaces without boundary is homotopic to a homeomorphism in terms of the _Goldman bracket_, which is a Lie Algebra structure associated to a surface. More precisely, if \(\widehat{\pi}(\Sigma)\) denotes the set of free homotopy classes of closed curves on a non-compact surface \(\Sigma\), then the Goldman bracket (whose definition we recall in Section 2.6) is a bilinear map \[[\cdot,\cdot]\colon\,\mathbb{Z}[\widehat{\pi}(\Sigma)]\times\mathbb{Z}[\widehat {\pi}(\Sigma)]\to\mathbb{Z}[\widehat{\pi}(\Sigma)],\] which is skew-symmetric and satisfies the Jacobi identity. Our main result is the following. **Theorem 1.1**.: _A homotopy equivalence \(f\colon\,\Sigma^{\prime}\to\Sigma\) between two non-compact oriented surfaces without boundary is homotopic to an orientation-preserving homeomorphism if and only if it commutes with the Goldman bracket, i.e., for all \(x^{\prime},y^{\prime}\in\mathbb{Z}[\widehat{\pi}(\Sigma^{\prime})]\), we have,_ \[[f_{*}(x^{\prime}),f_{*}(y^{\prime})]=f_{*}\left([x^{\prime},y^{\prime}]\right), \tag{1}\] _where \(f_{*}\colon\,\mathbb{Z}[\widehat{\pi}(\Sigma^{\prime})]\to\mathbb{Z}[\widehat{ \pi}(\Sigma)]\) is the function induced by \(f\), provided \(\Sigma\) is not homeomorphic to the plane or the cylinder \(S^{1}\times\mathbb{R}\)._ The analogous result for compact, connected, oriented two-dimensional manifolds with boundary was proved in [3]. Our methods are based on the relation between the Goldman bracket and the _geometric intersection number_ of curves on a surface. The same methods prove a related characterization in terms of intersection numbers for homotopy equivalences. Recall that, for a \(2\)-manifold \(M\) with or without boundary, the geometric intersection number \(I_{M}\) is defined as follows. Let \(x,y\in\widehat{\pi}(M)\). Then \[I_{M}(x,y)\coloneqq\min\left\{|\alpha\cap\beta|:\alpha\in x,\,\beta\in y,\, \alpha\text{ and }\beta\text{ intersect in double points}\right\}.\] **Theorem 1.2**.: _Let \(f\colon\,\Sigma^{\prime}\to\Sigma\) be a homotopy equivalence between two non-compact surfaces without boundary, where \(\Sigma\) is not homeomorphic to the plane or a cylinder. Then the following are equivalent:_ 1. \(f\) _is homotopic to a homeomorphism._ 2. \(I_{\Sigma}\left(f_{*}(x^{\prime}),f_{*}(y^{\prime})\right)=I_{\Sigma^{\prime }}(x^{\prime},y^{\prime})\) _for all_ \(x^{\prime},y^{\prime}\in\widehat{\pi}(\Sigma^{\prime})\)_._ 3. _For all_ \(x^{\prime},y^{\prime}\in\widehat{\pi}(\Sigma^{\prime})\)_,_ \(I_{\Sigma}\left(f_{*}(x^{\prime}),f_{*}(y^{\prime})\right)=0\iff I_{\Sigma^{ \prime}}(x^{\prime},y^{\prime})=0\)_._ ### Outline of the proof of Theorem 1.1 As orientation-preserving homeomorphisms preserve the Goldman bracket by definition, it is easy to see that if \(f\colon\,\Sigma^{\prime}\to\Sigma\) is homotopic to an orientation-preserving homeomorphism, then it commutes with the Goldman bracket. Conversely, suppose \(f\colon\,\Sigma^{\prime}\to\Sigma\) commutes with the Goldman bracket, we construct a _proper_ map \(g\colon\,\Sigma^{\prime}\to\Sigma\) such that \(f\) is homotopic to \(g\). By [1, Theorem 1], \(g\) is, in turn (properly) homotopic to a homeomorphism. To construct \(g\), we pick an exhaustion \(K_{1}\subset K_{2}\subset\dots\subset K_{n}\subset\dots\) of \(\Sigma\) (see Section 2.2) by compact subsurfaces (in this outline, we suppress a few technical conditions on exhaustions and base-points). We construct (as we sketch below) an exhaustion \(K^{\prime}_{1}\subset K^{\prime}_{2}\subset\dots\subset K^{\prime}_{n}\subset\dots\) of \(\Sigma^{\prime}\) by compact subsurfaces together with maps \(g_{i}\colon\, K^{\prime}_{i}\to\Sigma\) homotopic to the restrictions \(f|_{K^{\prime}_{i}}\) such that for all \(i\), \(g_{i+1}\) is an extension of \(g_{i}\), i.e., \(g_{i+1}|_{K^{\prime}_{i}}=g_{i}\). Furthermore, we ensure that \(g_{n}(K^{\prime}_{n}\setminus K^{\prime}_{i})\subset\Sigma\setminus K_{i}\) whenever \(i\leq n\). To construct \(K^{\prime}_{1}\) and the map \(g_{1}\colon\, K^{\prime}_{1}\to\Sigma\), we consider a system of simple closed curves \(\alpha_{1}\), \(\alpha_{2}\), \(\dots\), \(\alpha_{k_{1}}\) that _fill_\(K_{1}\) (see Section 2.4). Thus, if \(\gamma\) is a closed curve that can be homotoped to be disjoint from all \(\alpha_{i}\), then \(\gamma\) is homotopic to a closed curve in \(\Sigma\setminus K_{1}\). As \(f\) is a homotopy equivalence, there exist closed curves \(\alpha^{\prime}_{1}\), \(\alpha^{\prime}_{2}\), \(\dots\), \(\alpha^{\prime}_{k_{1}}\) such that \(f_{*}(\alpha^{\prime}_{i})\) is homotopic to \(\alpha_{i}\) for all \(i\). We pick a compact subsurface \(K^{\prime}_{1}\) of \(\Sigma^{\prime}\) that contains all the curves \(\alpha^{\prime}_{i}\). The key observation is that for any curve \(\gamma^{\prime}\subset\Sigma^{\prime}\setminus K^{\prime}_{1}\), \(f_{*}(\gamma^{\prime})\) is homotopic to a curve in \(\Sigma\setminus K_{1}\). This follows from the properties of the Goldman bracket. Namely, the Goldman bracket of \(\gamma^{\prime}\) with each \(\alpha^{\prime}_{i}\) is zero, and hence the Goldman bracket of \(f_{*}(\gamma^{\prime})\) with each \(\alpha_{i}\) is zero. As the \(\alpha_{i}\) are simple closed curves, by a theorem of Goldman, it follows that, for each \(i\), \(f_{*}(\gamma^{\prime})\) is homotopic to a curve disjoint from \(\alpha_{i}\). As the \(\alpha_{i}\) fill \(K_{1}\), it follows that \(f_{*}(\gamma^{\prime})\) is homotopic to a curve in \(\Sigma\setminus K_{1}\). As the boundary components of \(K_{1}^{\prime}\) are homotopic to curves in \(\Sigma^{\prime}\setminus K_{1}^{\prime}\), this lets us define a map \(g_{1}\colon\, K_{1}^{\prime}\to\Sigma\) that is homotopic to \(f|_{K_{1}^{\prime}}\) and so that \(g_{1}(\partial K_{1}^{\prime})\subset\Sigma\setminus K_{1}\). To proceed inductively, we prove a refinement of the above key observation. Namely, for each component \(V^{\prime}\) of \(\overline{\Sigma^{\prime}\setminus K_{1}^{\prime}}\), there is a component \(V\) of \(\overline{\Sigma\setminus K_{1}}\) such that for any closed curve \(\gamma^{\prime}\subset V^{\prime}\), \(f_{*}(\gamma^{\prime})\) is homotopic to a curve in \(V\). This lets us make a construction similar to the above for each component \(V^{\prime}\) of \(\overline{\Sigma^{\prime}\setminus K_{1}^{\prime}}\) to obtain a subsurface in \(V\) and an extension of the map to this subsurface. Taking the union of these subsurfaces, we get \(K_{2}^{\prime}\) and an extension \(g_{2}\) of \(g_{1}\) to \(K_{2}^{\prime}\) so that \(g_{2}(\partial K_{2}^{\prime})\subset\Sigma\setminus K_{2}\). We proceed inductively to obtain \(K_{n}^{\prime}\) and maps \(g_{n}:K_{n}^{\prime}\to\Sigma\) with \(g_{n}(\partial K_{n}^{\prime})\subset\Sigma\setminus K_{n}\) for all \(n\). Finally, the limit of the maps \(g_{i}\) gives us a proper map \(g\colon\,\Sigma^{\prime}\to\Sigma\) homotopic to \(f\) as each \(g_{i}\colon\, K_{i}^{\prime}\to\Sigma\) homotopic to the restriction \(f|_{K_{i}^{\prime}}\). ## 2. Background A _surface_\(\Sigma\) is a connected, orientable two-dimensional manifold with or without boundary. ### Subsurfaces A (compact) _subsurface_\(F\) of a surface \(\Sigma\) is an embedded submanifold (in general with boundary) of codimension zero. Thus, \(F\subset\Sigma\) is a subset homeomorphic to a compact surface with boundary so that \(\partial F\subset\Sigma\) is a collection of disjoint simple closed curves. Further, if \(F\) is connected, closure of a component of \(\Sigma\setminus\partial F\) is \(F\). We assume that all subsurfaces are connected. ### Exhaustions An _exhaustion_ of a non-compact surface \(\Sigma\) is a collection of subsurfaces \(K_{i}\), \(i=1,2,\dots\) such that * \(K_{i}\subset\operatorname{int}(K_{i+1})\) for all \(i\), * \(\bigcup_{i}K_{i}=\Sigma\). ### Intersections and geodesics A very useful classical result on surfaces relates pairwise and mutual intersections. This can be conveniently formulated in terms of hyperbolic metrics on surfaces (one can define this purely topologically but the definition is a bit more complicated). Recall that every surface \(\Sigma\) without boundary admits a complete hyperbolic metric. Similarly, every compact surface \(\Sigma\) with boundary admits a hyperbolic metric with geodesic boundary. Assume that such a metric has been fixed. Then the following holds. **Lemma 2.1**.: _Let \(\Sigma\) be a surface with a hyperbolic metric_ 1. _Every non-trivial homotopy class of curves in_ \(\Sigma\) _contains a unique geodesic representative._ 2. _Every homotopically non-trivial simple closed curve_ \(\alpha\) _in_ \(\Sigma\) _is ambient isotopic to a simple geodesic._ 3. _If two homotopy classes of curves in_ \(\Sigma\) _have representatives that are disjoint, then their geodesic representatives are disjoint._ It follows that for a finite collection of homotopy classes of curves \(x_{1},x_{2},\dots,x_{n}\) in \(\Sigma\), if every pair \(x_{i}\) and \(x_{j}\) have disjoint representatives, then there exists a family of mutually disjoint representatives of \(x_{1},x_{2},\dots,x_{n}\). In particular, if \(F\subset\Sigma\) is a subsurface, then the geodesics homotopic to the components of \(\partial F\) are pairwise disjoint, and the closure of a component of \(\Sigma\setminus\partial F\) is a hyperbolic surface with geodesic boundary which is isotopic to \(F\). We can thus assume that subsurfaces have geodesic boundary. ### Filling curves Let \(F\) be a compact surface, possibly with boundary. A homotopically non-trivial closed curve \(\alpha\subset F\) is said to be _peripheral_ if \(\alpha\) is (freely) homotopic to a curve with image in \(\partial F\). Fix a hyperbolic metric on \(F\) with geodesic boundary. We say that a collection of geodesics \(\alpha_{1}\), \(\alpha_{2}\),..., \(\alpha_{n}\) in \(F\)_fills_\(F\) if every component of \(F\setminus\bigcup_{i}\left(\alpha_{i}\right)\) is either an open disc or a punctured disc with boundary contained in \(\partial F\). By the results recalled in Lemma 2.1, if a homotopically non-trivial curve \(\gamma\) is homotopic to curves disjoint from each \(\alpha_{i}\), \(1\leq i\leq n\), then \(\gamma\) is peripheral. Similarly, if \(F\subset\Sigma\) is a compact subsurface with geodesic boundary and the geodesics \(\alpha_{1}\), \(\alpha_{2}\),..., \(\alpha_{n}\) in \(F\) fill \(F\), then every homotopically non-trivial curve \(\gamma\) in \(\Sigma\) that is disjoint from each \(\alpha_{i}\) is homotopic to a curve in \(\Sigma\setminus\operatorname{int}(F)\), and hence to a curve in \(\Sigma\setminus F\). ### Splitting surfaces Let \(\gamma\subset\Sigma\) be a simple closed curve on a surface so that \(\Sigma\setminus\gamma\) has two components. A curve \(\alpha\) in a component \(V\) of \(\Sigma\setminus\gamma\) is said to be _peripheral_ if it is homotopic to a curve in a regular neighborhood of \(\gamma\) Using the results recalled in Lemma 2.1, we deduce the following. **Lemma 2.2**.: _Let \(\alpha\subset\Sigma\) be a curve that is disjoint from \(\gamma\) and is not peripheral. If \(\beta\subset\Sigma\setminus\gamma\) is a closed curve that is homotopic to \(\alpha\) in \(\Sigma\), then \(\alpha\) and \(\beta\) are contained in the same component \(V\) of \(\Sigma\setminus\gamma\) and are homotopic in \(V\)._ ### Goldman bracket Let \(\Sigma\) be a non-compact surface, and let \(\widehat{\pi}(\Sigma)=[\mathbb{S}^{1},\Sigma]\), i.e., \(\widehat{\pi}(\Sigma)\) denotes the set of free homotopy classes of curves in \(\Sigma\). Note that if \(p\in\Sigma\), there is a bijection between the set of all conjugacy classes of \(\pi_{1}(\Sigma,p)\) and \(\widehat{\pi}(\Sigma)\). For a closed curve \(\alpha\), let \(\widehat{\alpha}\) to denote the free homotopy class of \(\alpha\). Let \(x,y\in\widehat{\pi}(\Sigma)\) and \(\alpha,\beta\) be smoothly immersed, oriented representatives of \(x\) and \(y\), respectively, such that \(\alpha\) and \(\beta\) intersect transversally in (finitely many) double points. Given any point \(p\in\alpha\cap\beta\), we view \(\alpha\) and \(\beta\) as curves in \(\Sigma\) based at \(p\) and define \(\alpha*_{p}\beta\) as the product of based curves as in the definition of the fundamental group. We associate to each intersection point \(p\) a _sign_\(\varepsilon_{p}=\pm 1\) as in the definition of algebraic intersection number. Namely, parametrizing \(\alpha\) and \(\beta\) so that \(\alpha(0)=\beta(0)=p\), if the ordered pair \((\alpha^{\prime}(0),\beta^{\prime}(0))\) represents the orientation of \(\Sigma\) at \(p\), then \(\varepsilon_{p}=1\), otherwise \(\varepsilon_{p}=-1\). The Goldman bracket \([4]\)\([\cdot,\cdot]\colon\mathbb{Z}\big{[}\widehat{\pi}(\Sigma)\big{]}\times\mathbb{ Z}\big{[}\widehat{\pi}(\Sigma)\big{]}\to\mathbb{Z}\big{[}\widehat{\pi}( \Sigma)\big{]}\) is defined as \[[x,y]=\sum_{p\in\alpha\cap\beta}\varepsilon_{p}\cdot\widehat{\alpha*_{p}\beta}. \tag{2}\] Recall that for a non-empty set \(S\), \(\mathbb{Z}[S]\) is the free module generated by \(S\) over the ring of integers \(\mathbb{Z}\). Also, note that the sum in equation 2 is a finite sum. **Theorem 2.3** (Goldman [4]).: _The Goldman bracket satisfies the following properties:_ 1. \([\cdot,\cdot]\colon\mathbb{Z}\big{[}\widehat{\pi}(\Sigma)\big{]}\times\mathbb{Z} \big{[}\widehat{\pi}(\Sigma)\big{]}\to\mathbb{Z}\big{[}\widehat{\pi}(\Sigma) \big{]}\) _is a well-defined bilinear map; see_ _[_4_, Theorem 5.2.]__._ 2. _The bracket is skew-symmetric and satisfies the Jacobi identity, i.e.,_ 1. \([x,y]=-[y,x]\) _for all_ \(x,y\in\widehat{\pi}(\Sigma)\)_, and_ 2. \(\big{[}x,[y,z]\big{]}+\big{[}y,[z,x]\big{]}+\big{[}z,[x,y]\big{]}=0\) _for all_ \(x,y,z\in\widehat{\pi}(\Sigma)\)_; see_ _[_4_, Theorem 5.3.]__._ 3. _If_ \(\alpha\) _is a simple closed curve and_ \(x=\widehat{\alpha}\) _is the homotopy class of_ \(\alpha\)_, then, for_ \(y\in\widehat{\pi}(\Sigma)\)_,_ \([x,y]=0\) _if and only if_ \(y=\widehat{\beta}\) _for a closed curve_ \(\beta\) _such that_ \(\beta\cap\alpha=\varnothing\)_; see_ _[_4_, Theorem 5.17. (i)]__._ We will frequently abuse notation and speak of the Goldman bracket of two curves \(\alpha\) and \(\beta\) in a surface \(\Sigma\) to mean the Goldman bracket of their free homotopy classes \(\widehat{\alpha}\) and \(\widehat{\beta}\) in \(\widehat{\pi}(\Sigma)\). ## 3. Proof of Theorem 1.1 Suppose \(f\) is homotopic to an orientation-preserving homeomorphism \(g\colon\,\Sigma^{\prime}\to\Sigma\). Without loss of generality, we may assume \(g\) is an orientation-preserving diffeomorphism since in dimension two, any homeomorphism is isotopic, thus properly homotopic to a diffeomorphism. By the definition of the Goldman bracket, it follows that \(g\) commutes with the Goldman bracket. As \(f\) and \(g\) are homotopic, it follows that \(f\) also commutes with the Goldman bracket. This proves one implication of Theorem 1.1. Conversely, suppose that \(f\colon\,\Sigma^{\prime}\to\Sigma\) is a homotopy equivalence between non-compact surfaces such that \(f\) commutes with the Goldman bracket. Pick base-points \(p^{\prime}\in\Sigma^{\prime}\) and \(p\in\Sigma\) so that \(f(p^{\prime})=p\). As sketched in Section 1.1, we will construct a proper map \(g\) that is homotopy equivalent to \(f\). This is an inductive construction, where we construct surfaces \(K^{\prime}_{m}\) and maps \(g_{m}\colon\, K^{\prime}_{m}\to\Sigma\). To keep track of base-points, we will also construct inductively a family of trees \(T^{\prime}_{m}\subset K^{\prime}_{m}\) starting with \(T^{\prime}_{0}=p^{\prime}\) so that \(T^{\prime}_{m}\cap K^{\prime}_{m-1}=T^{\prime}_{m-1}\) for \(m>1\) and the terminal vertices of \(T^{\prime}_{m}\) lie on \(\partial K^{\prime}_{m}\) with one terminal vertex in each boundary component. We will also modify \(f\) by a homotopy in a neighborhood of the trees \(T^{\prime}_{m}\) (with \(f\) fixed throughout the homotopy on \(p^{\prime}\), and on the set \(K_{m-1}\) for \(m>1\)). Pick an exhaustion \(K_{1}\subset K_{2}\subset\dots\) of \(\Sigma\) by compact subsurfaces with boundary such that \(p\in K_{1}\) and \(K_{1}\) (and hence every \(K_{m}\)) is not a disc or an annulus - this is possible as \(\Sigma\) is not a plane or a cylinder. Also choose an exhaustion \(L^{\prime}_{1}\subset L^{\prime}_{2}\subset\dots\subset L^{\prime}_{n}\subset\dots\) of \(\Sigma^{\prime}\) by compact bordered subsurfaces such that \(p^{\prime}\in L^{\prime}_{1}\) We first construct a compact subsurface \(K^{\prime}_{1}\subset\Sigma^{\prime}\), a map \(g_{1}\colon\, K^{\prime}_{1}\to\Sigma\), and a tree \(T^{\prime}_{1}\subset K^{\prime}_{1}\), which satisfy various properties. These properties will allow us to continue the construction inductively and to ensure that the resulting map is proper. ### Constructing the subsurface The construction of the subsurface is almost identical in the base case and the general case, so we describe the general case directly. Fix \(m\geq 1\). If \(m>1\), assume that a subsurface \(K^{\prime}_{m-1}\) has been constructed. Let \(Q^{\prime}_{m}\coloneqq L^{\prime}_{1}\) if \(m=1\) and \(Q^{\prime}_{m}\coloneqq L^{\prime}_{m}\cup K^{\prime}_{m-1}\) if \(m>1\). Consider a system of simple closed curves \(\alpha_{1}\), \(\alpha_{2}\),..., \(\alpha_{l}\) that fill \(K_{m}\) (as defined in Section 2.4). Thus, if \(\gamma\) is a closed curve that can be homotoped to be disjoint from each of the curves \(\alpha_{i}\), then \(\gamma\) is homotopic to a closed curve in \(\Sigma\setminus K_{m}\). We enlarge this collection to ensure that a peripheral curve must intersect a curve in the system. Namely, if \(C\subset\partial K_{m}\) is a boundary component of \(K_{m}\) which is not the boundary of a component in \(\overline{\Sigma\setminus K_{m}}\) homeomorphic to the half-open cylinder \([0,\infty)\times\mathbb{R}\) or the closed unit disc, then there exists a simple closed curve \(\alpha^{(C)}\) so that \(C\) and \(\alpha^{(C)}\) cannot be homotoped to be disjoint. Add such a curve for each boundary component of \(K_{m}\) that is not the boundary of a cylinder or disc in \(\Sigma\). Let \(\alpha_{1}\), \(\alpha_{2}\),..., \(\alpha_{l}\), \(\alpha_{l+1}\),..., \(\alpha_{k}\) be the resulting collection of curves. As \(f\) is a homotopy equivalence, there exist closed curves \(\alpha^{\prime}_{i}\subset\Sigma^{\prime}\), \(1\leq i\leq k\), such that \(f_{*}(\alpha^{\prime}_{i})\) is homotopic to \(\alpha_{i}\). Further, we can assume that the curves \(\alpha^{\prime}_{i}\) are pairwise transversal and intersect only in double points. **Lemma 3.1**.: _There exists a compact subsurface \(K^{\prime}_{m}\subset\Sigma^{\prime}\) that satisfies the following:_ 1. _For all_ \(i\)_,_ \(1\leq i\leq k\)_,_ \(\alpha^{\prime}_{i}\subset K^{\prime}_{m}\)_._ 2. \(p^{\prime}\in K^{\prime}_{m}\)_._ 3. \(Q^{\prime}_{m}\subset K^{\prime}_{m}\)_._ 4. _The closure of every component of_ \(\Sigma^{\prime}\setminus K^{\prime}_{m}\) _is non-compact._ 5. _Every component of_ \(\overline{\Sigma^{\prime}\setminus K^{\prime}_{m}}\) _intersects_ \(K^{\prime}_{m}\) _in a single component of_ \(\partial K^{\prime}_{m}\)_._ Proof.: To ensure the first three conditions, we simply take a compact connected subsurface \(K^{\prime}_{m}\) that contains \((\bigcup_{i}\alpha^{\prime}_{i})\cup Q^{\prime}_{m-1}\cup\{p^{\prime}\}\). Next, as \(K^{\prime}_{m}\) is compact, the closure \(\overline{\Sigma^{\prime}\setminus K^{\prime}_{m}}\) of its complement has finitely many components \(V^{\prime}_{1}\), \(V^{\prime}_{2}\),..., \(V^{\prime}_{n}\). Suppose the compact complementary components are \(V^{\prime}_{i_{1}}\), \(V^{\prime}_{i_{2}}\),..., \(V^{\prime}_{i_{l}}\), then we replace \(K^{\prime}_{m}\) by \(K^{\prime}_{m}\cup\left(\bigcup\limits_{j=1}^{n}V^{\prime}_{i_{j}}\right)\). This is also compact and satisfies condition 4. Finally, let \(V^{\prime}_{1}\), \(V^{\prime}_{2}\),..., \(V^{\prime}_{l}\) be the (non-compact) components of \(\overline{\Sigma^{\prime}\setminus K^{\prime}_{m}}\). Suppose \(V^{\prime}_{i}\) intersects \(K^{\prime}_{m}\) in \(n_{i}>1\) components of \(\partial K^{\prime}_{m}\), say \(C^{\prime}_{1}\), \(C^{\prime}_{2}\),...\(C^{\prime}_{n_{i}}\). For \(j=2,3,\ldots n_{i}\), pick disjoint simple arcs \(\beta^{\prime}_{j}\) in \(V^{\prime}_{j}\) connecting \(C^{\prime}_{1}\) to \(C^{\prime}_{j}\). Replace \(K^{\prime}_{m}\) by \(K^{\prime}_{m}\cup\mathcal{N}\left(\bigcup\limits_{j=2}^{n_{i}}\beta^{\prime} _{j}\right)\), where \(\mathcal{N}\) denotes a regular neighbourhood in \(V^{\prime}_{i}\). Making a similar construction for each \(V^{\prime}_{i}\) that intersects \(K^{\prime}_{m}\) in more than one boundary component, we obtain a compact subsurface \(K^{\prime}_{m}\) that satisfies the condition 5 (and continues to satisfy the other conditions). _Remark 3.2_.: The proof of Lemma 3.1 shows that any non-compact surface \(\Sigma\) without boundary has an exhaustion \(\{K_{i}\}\) by compact subsurfaces such that for each \(i=1,2,...\), the following hold: * the closure of every component of \(\Sigma\setminus K_{i}\) is non-compact, and * every component of \(\overline{\Sigma\setminus K_{i}}\) intersects \(K_{i}\) is a single component of \(\partial K_{i}\). ### Pushing loop images outside subsurfaces Choose and fix a compact subsurface \(K^{\prime}_{m}\) satisfying the conditions of Lemma 3.1. Using the assumption that \(f\) commutes with the Goldman bracket gives the following key lemma. **Lemma 3.3**.: _Let \(\gamma^{\prime}\) be a closed curve in \(\Sigma^{\prime}\) that is homotopic to a closed curve in \(\Sigma^{\prime}\setminus K^{\prime}_{m}\). Then \(f_{*}(\gamma^{\prime})\) is homotopic to a closed curve in \(\Sigma\setminus K_{m}\)._ Proof.: Since \(\gamma^{\prime}\) is homotopic to a closed curve in \(\Sigma^{\prime}\setminus K^{\prime}_{m}\), \(\gamma^{\prime}\) is homotopic to a curve that is disjoint from \(\alpha^{\prime}_{i}\) for all \(i\), \(1\leq i\leq k\). It follows that the Goldman bracket \([\gamma^{\prime},\alpha^{\prime}_{i}]\) is zero for all \(i\), \(1\leq i\leq k\). As \(f\) commutes with the Goldman bracket, \([f_{*}(\gamma^{\prime}),\alpha_{i}]=0\) for all \(i\). For each \(i\), as \(\alpha_{i}\) is a simple closed curve, it follows that \(f_{*}(\gamma^{\prime})\) is homotopic to a curve disjoint from \(\alpha_{i}\) by statement (3) of Theorem 2.3. As a subset of the curves \(\alpha_{i}\) fill \(K_{m}\), it follows that \(f_{*}(\gamma^{\prime})\) is homotopic to a curve in \(\Sigma\setminus K_{m}\). ### Construction of the map in the base case We next construct a map \(g_{1}\colon K^{\prime}_{1}\to\Sigma\) that is homotopic to \(f|_{K^{\prime}_{1}}\) and satisfies \(g_{1}(p^{\prime})=p\) so that \(g_{1}(\partial K^{\prime}_{1})\cap K_{1}=\varnothing\). As \(K^{\prime}_{1}\) is a compact surface with a non-empty boundary, there exists a graph \(\Gamma^{\prime}\subset K^{\prime}_{1}\) such that \(K^{\prime}_{1}\) deformation retracts onto \(\Gamma^{\prime}\). We can choose \(\Gamma^{\prime}\) so that \(p^{\prime}\in\Gamma^{\prime}\) and \(\Gamma^{\prime}\cap\partial K^{\prime}_{1}=\varnothing\). A regular neighbourhood \(\mathcal{N}(\Gamma^{\prime})\) of \(\Gamma^{\prime}\) has a deformation retraction \(\rho\colon\,\mathcal{N}(\Gamma^{\prime})\to\Gamma^{\prime}\). We get a map \(g_{1}\colon\,\mathcal{N}(\Gamma^{\prime})\to\Sigma\) by setting \(g_{1}|_{\mathcal{N}(\Gamma^{\prime})}=f\circ\rho\). Further, \(\overline{K^{\prime}_{1}\setminus\mathcal{N}(\Gamma^{\prime})}\) is a disjoint union of annuli \(C^{\prime}_{i}\times[0,1]\), \(1\leq i\leq n\), so that the boundary components of \(K^{\prime}_{1}\) are the curves \(C^{\prime}_{i}\times\{1\}\), \(1\leq i\leq n\) and the boundary components of \(\mathcal{N}(\Gamma^{\prime})\) are \(C^{\prime}_{i}\times\{0\}\), \(1\leq i\leq n\). By Lemma 3.3, for each \(i\), \(1\leq i\leq n\), there is a curve \(\gamma_{i}\subset\Sigma\setminus K_{1}\) so that \(f(C^{\prime}_{i})\) is homotopic to \(\gamma_{i}\). We define \(g_{1}\) on each annulus \(C^{\prime}_{i}\times[0,1]\) as follows. We set \(g_{1}(C^{\prime}_{i}\times\{1\})=\gamma_{i}\). Note that \(C^{\prime}_{i}\times\{0\}\) is identified with a boundary component of \(\mathcal{N}(\Gamma^{\prime})\), and so \(g_{1}\) has been defined on \(C^{\prime}_{i}\times\{0\}\). Finally, by construction \(g_{1}|_{C^{\prime}_{i}\times\{0\}}\) and \(g_{1}|_{C^{\prime}_{i}\times\{1\}}\) are both homotopic in \(\Sigma\) to \(f(C^{\prime}_{i})\) and hence homotopic to each other - so we can extend \(g_{1}\) to \(C^{\prime}_{i}\times[0,1]\) using a homotopy in \(\Sigma\). We next construct the tree \(T^{\prime}_{1}\) and modify the map \(f\). Let the boundary components of \(K^{\prime}_{1}\) be \(\delta^{\prime}_{i}=C^{\prime}_{i}\times\{1\}\), \(1\leq i\leq n\). Pick a point \(q^{\prime}_{i}\in\delta^{\prime}_{i}\) for each \(i\), \(1\leq i\leq n\). Pick a family of arcs \(\theta^{\prime}_{i}\) with interiors disjoint so that \(\theta^{\prime}_{i}\) is an arc from \(p^{\prime}\) to \(q^{\prime}_{i}\) in \(K^{\prime}_{1}\). Let \(T^{\prime}_{1}\) be the union of these arcs. As \(T^{\prime}_{1}\) is contractible and \(f(p^{\prime})=g_{1}(p^{\prime})\), we can homotop \(f\) fixing \(p^{\prime}\) so that \(f|T^{\prime}_{1}=g_{1}|T^{\prime}_{1}\). Note that for \(1\leq i\leq n\), \(f(q^{\prime}_{i})=g_{1}(q^{\prime}_{i})\). Further, for \(1\leq i\leq n\), as \((f|_{K^{\prime}_{1}})_{*}=(g_{1})_{*}\colon\,\pi_{1}(K^{\prime}_{1},p^{\prime} )\to\pi_{1}(\Sigma,f(p^{\prime}))\), and \(f|_{\theta_{i}}=g_{1}|_{\theta_{i}}\), we deduce by using the _change of base-point isomorphisms_ corresponding to \(\theta_{i}\) and the fact that \(f\circ\theta_{i}=g_{1}\circ\theta_{i}\) that \((f|_{K^{\prime}_{1}})_{*}=(g_{1})_{*}\colon\,\pi_{1}(K^{\prime}_{1},q^{\prime} )\to\pi_{1}(\Sigma,f(q^{\prime}))\). ### Inductive properties We now consider the inductive step, where \(m>1\), and we have already defined a subsurface \(K^{\prime}_{m-1}\subset\Sigma^{\prime}\), a map \(g_{m-1}\colon\,K^{\prime}_{m-1}\to\Sigma\), and a tree \(T^{\prime}_{m-1}\subset K^{\prime}_{m-1}\). Further, we assume that these satisfy the following properties: 1. The subsurface \(K^{\prime}_{m-1}\) satisfies the conditions of Lemma 3.1. 2. \(g_{m-1}(\partial K^{\prime}_{m-1})\subset\Sigma\setminus K_{m-1}\). 3. \(f|_{T^{\prime}_{m-1}}=g_{m-1}|_{T^{\prime}_{m-1}}\). 4. The terminal vertices of \(T^{\prime}_{m-1}\) are in \(\partial K^{\prime}_{m-1}\), with one terminal vertex in each component. 5. For a terminal vertex \(q^{\prime}\in T^{\prime}_{m-1}\), \((f|_{K^{\prime}_{m-1}})_{*}=(g_{m-1})_{*}:\pi_{1}(K^{\prime}_{m-1},q^{\prime}) \to\pi_{1}(\Sigma,q)\). Observe that we have shown in Section 3.3 that the above properties hold for \(m=1\). We will construct \(K^{\prime}_{m}\), \(g_{m}\), and \(T^{\prime}_{m}\) and show (in Lemma 3.6) that the above properties hold for these. ### Maps on fundamental groups of components Let \(K^{\prime}_{m}\) be the subsurface constructed using Lemma 3.1. Let \(V^{\prime(1)}\), \(V^{\prime(2)}\), \(\dots V^{\prime(k)}\) be the components of \(\overline{K^{\prime}_{m}\setminus K^{\prime}_{m-1}}\). By Lemma 3.1, each component \(V^{\prime(j)}\) intersects \(K^{\prime}_{m-1}\) in a single boundary component \(\delta^{\prime(j)}\). We construct extensions \(g_{m}^{(j)}\colon\, K^{\prime}_{m-1}\cup V^{\prime(j)}\to\overline{\Sigma \setminus K_{m-1}}\) of \(g_{m-1}\) for each component \(V^{\prime(j)}\), \(1\leq j\leq k\) in Section 3.6. We first show that we have homomorphisms between fundamental groups of components of \(\overline{K^{\prime}_{m}\setminus K^{\prime}_{m-1}}\) and components of \(\overline{\Sigma\setminus K_{m-1}}\). Fix a component \(V^{\prime}=V^{\prime(j)}\) of \(\overline{K^{\prime}_{m}\setminus K^{\prime}_{m-1}}\) and let \(\delta^{\prime}=\delta^{\prime(j)}\). Let \(i^{\prime}:V^{\prime}\to\Sigma^{\prime}\) be the inclusion map. By construction \(g_{m-1}(\delta^{\prime})\subset\Sigma\setminus K_{m-1}\). Let \(V\) be the component of \(\Sigma\setminus K_{m-1}\) containing \(g_{m-1}(\delta^{\prime})\) and \(i:V\to\Sigma\) be the inclusion map. We construct an extension with image contained in \(V\). Most of the additional work beyond Lemma 3.3 is to control images of curves up to _based_ homotopy, not just free homotopy. Let \(q^{\prime}\in V^{\prime}\) be the terminal vertex of \(T^{\prime}_{m-1}\) corresponding to \(V^{\prime}\), and let \(q=f(q^{\prime})\). We have \(f_{*}=(g_{m-1})_{*}\colon\,\pi_{1}(K^{\prime}_{m-1},q^{\prime})\to\pi_{1}( \Sigma,q)\). Observe that \(\delta^{\prime}\) can be regarded as a closed curve based at \(q^{\prime}\). As the image of \(\delta^{\prime}\) is contained in \(V\), \(f_{*}([\delta^{\prime}])=(g_{m-1})_{*}([\delta^{\prime}])\in i_{*}(\pi_{1}(V,q))\). The key idea is to apply Lemma 3.3 to both the curves \(\gamma^{\prime}\) and \(\gamma^{\prime}\ast\delta^{\prime}\) for a curve \(\gamma^{\prime}\subset V^{\prime}\) based at \(q^{\prime}\). We first need a lemma about amalgamated free products. Let \(A\) and \(B\) be groups with \(C\subseteq A\) and \(C\subseteq B\) as subgroups of both of them. Let \(G\coloneqq A\ast_{C}B\) be the _amalgamated free product_ of \(A\) and \(B\) along \(C\)(see [5]). The groups \(A\) and \(B\) are called the _factors_ of \(G\). Following terminology from topology, we say that an element in \(A\) or \(B\) is _peripheral_ if it is conjugate to an element in \(C\). **Lemma 3.4**.: _Let \(a\in A\) be an element such that \(a\) is not conjugate to an element in \(C\). Let \(g\in G\)._ 1. _Suppose_ \(g\) _is conjugate to a non-peripheral element_ \(b\in B\)_. Then_ \(ag\) _is not conjugate to an element in a factor of_ \(G\)_._ 2. _If_ \(g\) _is conjugate to a non-peripheral element_ \(a^{\prime}\) _in_ \(A\) _and_ \(ag\) _is conjugate to an element in a factor, then_ \(g\in A\)_._ Proof.: By [5, Theorem 4.6], if \(ag\) is conjugate to an element in a factor, then any cyclically reduced word representing an element conjugate to \(ag\) is contained in a factor. First, suppose \(g\) is conjugate to a non-peripheral element \(b\in B\), say \(g=hbh^{-1}\). We consider a reduced word \(h=l_{1}\cdot l_{2}\cdot l_{3}\dotsm l_{k}\) representing \(h\) (we will separate _letters_ in words in the free products by \(\cdot\)). There are a few different cases corresponding to in which factor the first and last letters of \(h\) belong. We first consider the most non-trivial case, where \(h=a_{1}\cdot b_{1}\cdot\dots\cdot a_{k}\cdot b_{k}\), so \(g=a_{1}\cdot b_{1}\cdot\dots\cdot a_{k}\cdot b_{k}\cdot b\cdot b_{k}^{-1} \cdot a_{k}^{-1}\cdot\dots\cdot b_{1}^{-1}\cdot a_{1}^{-1}\). We claim that a cyclically reduced word conjugate to \(ag\) is \((a_{1}^{-1}aa_{1})\cdot b_{1}\cdot\dots\cdot a_{k}\cdot(b_{k}bb_{k}^{-1}) \cdot a_{k}^{-1}\cdot\dots\cdot b_{1}^{-1}\). This clearly represents the element \(a_{1}^{-1}aga_{1}\), which is conjugate to \(ag\). As \(a\) and \(b\) are not peripheral, \(a_{1}^{-1}aa_{1}\notin C\) and \(b_{k}bb_{k}^{-1}\notin C\), hence the word is indeed cyclically reduced. As this word is not contained in a factor, \(ag\) is not conjugate to an element in a factor. The other three cases are analogous. In each case we obtain a cyclically reduced word that is not in a factor. For completeness, we list the cases and the cyclically reduced words we obtain: 1. \(h=a_{1}\cdot b_{1}\cdot\ldots\cdot a_{k}\), \(k\geq 1\), we obtain \((a_{1}^{-1}aa_{1})\cdot b_{1}\cdot\ldots\cdot a_{k}\cdot b\cdot a_{k}^{-1} \cdot\ldots\cdot b_{1}^{-1}\); 2. \(h=b_{1}\cdot a_{2}\cdot\ldots\cdot a_{k}\cdot b_{k}\), we obtain \(a\cdot b_{1}\cdot\ldots\cdot a_{k}\cdot(b_{k}bb_{k}^{-1})\cdot a_{k}^{-1} \cdot\ldots\cdot b_{1}^{-1}\); 3. \(h=b_{1}\cdot a_{2}\cdot\ldots\cdot a_{k}\), we obtain \(a\cdot b_{1}\cdot\ldots\cdot a_{k}\cdot b\cdot a_{k}^{-1}\cdot\ldots\cdot b_{ 1}^{-1}\); Next, suppose \(g\) is conjugate to a non-peripheral element \(a^{\prime}\in A\), say \(g=ha^{\prime}h^{-1}\), and \(ag\) is conjugate to an element in a factor. We again consider a reduced word representing \(h\). If this is a single letter in \(A\), i.e., \(h=a_{1}\in A\), then \(g=a_{1}a^{\prime}a_{1}^{-1}\in A\) as claimed. In all other cases, we see above that \(ag\) is represented by a cyclically reduced word that is not in a factor, a contradiction. Again, we list the (four) cases and the cyclically reduced words we obtain: 1. \(h=a_{1}\cdot b_{1}\cdot\ldots\cdot a_{k}\cdot b_{k}\), \(k\geq 1\), we obtain \((a_{1}^{-1}aa_{1})\cdot b_{1}\cdot\ldots\cdot a_{k}\cdot b_{k}\cdot a^{\prime }\cdot b_{k}^{-1}\cdot\ldots\cdot b_{1}^{-1}\); 2. \(h=b_{1}\cdot a_{2}\cdot\ldots\cdot a_{k}\cdot b_{k}\), we obtain \(a\cdot b_{1}\cdot\ldots\cdot a_{k}\cdot b_{k}\cdot a^{\prime}\cdot b_{k}^{-1} \cdot\ldots\cdot b_{1}^{-1}\); 3. \(h=b_{1}\cdot a_{2}\cdot\ldots\cdot b_{k-1}\cdot a_{k}\), we obtain \(a\cdot b_{1}\cdot\ldots\cdot a_{k}\cdot b_{k-1}\cdot(a_{k}a^{\prime}a_{k}^{- 1})\cdot b_{k-1}^{-1}\cdot\ldots\cdot b_{1}^{-1}\); 4. \(h=a_{1}\cdot b_{1}\cdot a_{2}\cdot\ldots\cdot b_{k-1}\cdot a_{k}\), we obtain \((a_{1}^{-1}aa_{1})\cdot b_{1}\cdot\ldots\cdot(a_{k}a^{\prime}a_{k}^{-1}) \cdot\ldots\cdot b_{1}^{-1}\) which is not in a factor if \(k>1\), i.e., except for the case \(k=1\), \(h=a_{1}\) (mentioned above) where \(g=a_{1}a^{\prime}a_{1}^{-1}\in A\). We will apply the above with \(C\), the fundamental group of a simple closed curve \(\delta\) in a surface \(F\) that separates \(F\), with \(A\) and \(B\) the fundamental groups of the closures of the components of \(F\setminus\delta\). We use Lemma 2.2 in this context. Recall that \(V^{\prime}\) is a component of \(\overline{K^{\prime}_{m}\setminus K^{\prime}_{m-1}}\) with \(\delta^{\prime}=V^{\prime}\cap K^{\prime}_{m-1}\) and \(V\) is a component of \(\overline{\Sigma\setminus K_{m-1}}\) such that \(f_{*}(\delta^{\prime})\) is freely homotopic to a curve in \(V\). Further, \(q^{\prime}\) is a point in \(\delta^{\prime}\) and \(q=f(q^{\prime})\). The following lemma says that \(f\) sends each loop in \(V^{\prime}\) based at \(q^{\prime}\) to a loop that is homotopic fixing basepoint to a loop in \(V\) based at \(q\), up to base-point fixing homotopy. **Lemma 3.5**.: _We have \(f_{*}(i^{\prime}_{*}(\pi_{1}(V^{\prime},q^{\prime})))\subset i_{*}(\pi_{1}(V,q))\)._ Proof.: Let \(\gamma^{\prime}\) be a curve in \(V^{\prime}\) based at \(q^{\prime}\), so \([\gamma^{\prime}]\in i^{\prime}_{*}(\pi_{1}(V^{\prime},q^{\prime}))\). We show that \(f_{*}([\gamma^{\prime}])\in i_{*}(\pi_{1}(V,q))\). If \(\gamma^{\prime}\) is _peripheral_, i.e., homotopic to a power of the boundary component \(\delta^{\prime}\), then either \(\gamma^{\prime}=\delta^{\prime k}\) for some \(k\in\mathbb{Z}\) or \(\gamma^{\prime}=g^{\prime}*\delta^{\prime k}*g^{\prime-1}\) for some \(g^{\prime}\in\pi_{1}(V^{\prime},q^{\prime})\) which is not peripheral. In the first case, \(f_{*}([\gamma^{\prime}])\in i_{*}(\pi_{1}(V,q))\) as \(f_{*}([\delta^{\prime}])=(g_{m-1})_{*}([\delta^{\prime}])\in i_{*}(\pi_{1}(V,q))\). In the second case, it suffices to show that the non-peripheral curve \(g\) is mapped to a curve in \(i_{*}(\pi_{1}(V,q))\). Hence, we can assume that \(\gamma^{\prime}\) is not peripheral. By Lemma 3.3, there is a curve \(\gamma\subset\Sigma\setminus K_{m-1}\) so that \(f_{*}(\gamma^{\prime})\) is homotopic to \(\gamma\). Thus, \(\gamma\subset\widehat{V}\) for some component \(\widehat{V}\) of \(\overline{\Sigma\setminus K_{m-1}}\). Let \(\widehat{\delta}=\widehat{V}\cap K_{m-1}\), and let \(\widehat{q}\in\widehat{\delta}\). We modify \(\gamma\) by a homotopy so that \(\gamma\) intersects \(\widehat{\delta}\) in the single point \(\widehat{q}\). We claim that \(\gamma\) is not peripheral, i.e., \(\gamma\) is not homotopic to a curve with image in \(\partial\widehat{V}=\widehat{\delta}\). Namely, as \(\gamma^{\prime}\) is a curve in \(V^{\prime}\) that is not peripheral, there is a simple closed curve \(\beta^{\prime}\) in \(V^{\prime}\) so that \(\gamma^{\prime}\) and \(\beta^{\prime}\) are not homotopic to disjoint curves. Hence, by statement 3 of Theorem 2.3, we have \([\beta^{\prime},\gamma^{\prime}]\neq 0\). The curve \(f_{*}(\beta^{\prime})\) is homotopic to a curve \(\beta\) in \(\Sigma\setminus K_{m-1}\). If \(\gamma\) were peripheral, then \(\beta\) and \(\gamma\) would homotopic to disjoint curves, hence \([\beta,\gamma]\) would become \(0\). But \([\beta^{\prime},\gamma^{\prime}]\neq 0\) implies \([\beta,\gamma]\neq 0\) as \(f\) commutes with the Goldman bracket, this is a contradiction. Hence, \(\gamma\) is not peripheral. We next see that \(\widehat{V}=V\). Suppose not, then \(\gamma\subset\widehat{V}\) for a component \(\widehat{V}\neq V\) of \(\overline{\Sigma\setminus K_{m-1}}\). As \(\gamma\) is homotopic to \(f_{*}(\gamma^{\prime})\), there exists a path \(\theta\) in \(\Sigma\) from \(q\) to \(\widehat{q}\) such that \(f_{*}(\gamma^{\prime})\) is homotopic in \(\Sigma\) fixing base-point to \(\theta*\gamma*\bar{\theta}\). Observe that \(\pi_{1}(\Sigma,\widehat{q})\) is the amalgamated free product \[\pi_{1}(\Sigma,\widehat{q})=\pi_{1}(\widehat{V},\widehat{q})*_{\pi_{1}( \widehat{\delta},\widehat{q})}\pi_{1}(\overline{\Sigma\setminus\widehat{V}}, \widehat{q}). \tag{3}\] We identify \(\pi_{1}(\Sigma,q)\) with \(\pi_{1}(\Sigma,\widehat{q})\) using the change of base-point isomorphism determined by \(\theta\), i.e., the isomorphism \([\lambda]\mapsto[\bar{\theta}*\lambda*\theta]\) for a loop \(\lambda\) based at \(q\). Under this identification, \(f_{*}([\gamma^{\prime}])\equiv[\gamma]\). Further, as \(f_{*}(\delta^{\prime})\) is homotopic to a curve \(\eta\) in \(V\subset\Sigma\setminus\widehat{V}\), \(f_{*}([\delta^{\prime}])\) is conjugate to an element of \(\pi_{1}(\overline{\Sigma\setminus\widehat{V}},\widehat{q})\), which is non-peripheral as \(K_{m-1}\) is not an annulus or disc by assumption. Hence, with respect to the amalgamated free product given by Equation 3, \(f_{*}([\gamma^{\prime}*\delta^{\prime}])=f_{*}([\gamma^{\prime}])\cdot f_{*}([ \delta^{\prime}])\) is of the form \(ag\) of Lemma 3.4, statement 1. It follows that \(f_{*}(\gamma^{\prime}*\delta^{\prime})\) is not homotopic to a curve that is disjoint from \(\widehat{\delta}\) using Lemma 2.2 (see Figure 1). This contradicts Lemma 3.3 as \(\widehat{\delta}\subset\partial K_{m-1}\) and \(f_{*}(\gamma^{\prime}*\delta^{\prime})\) is homotopic to a curve in \(\Sigma\setminus K_{m-1}\). Thus, \(\widehat{V}=V\). Next, we can express the fundamental group \(\pi_{1}(\Sigma,q)\) as an amalgamated free product \[\pi_{1}(\Sigma,q)=\pi_{1}(V,q)*_{\pi_{1}(\delta,q)}\pi_{1}(\overline{\Sigma \setminus V},q). \tag{4}\] Next, as \(\gamma\subset V\), modifying by a homotopy, we can assume that \(\gamma\) is a loop based at \(q\). As \(\gamma\) is freely homotopic to \(f_{*}(\gamma^{\prime})\), \([\gamma]\in\pi_{1}(\Sigma,q)\) is conjugate to the element \(f_{*}([\gamma^{\prime}])\in\pi_{1}(\Sigma,q)\), which is not peripheral as \(\gamma\) is not peripheral. We claim that \(f_{*}([\delta^{\prime}])\) is also not peripheral. Namely, as \(V\) contains a curve that is not peripheral, namely \(\gamma\), the boundary component \(\delta\) of \(V\) which is contained in \(K_{m}\) does not bound a cylinder or a disc. Hence, one of the curves \(\alpha_{i}\) in Lemma 3.1 cannot be homotoped to be disjoint from \(\delta\). But \(\delta^{\prime}\) is disjoint from \(\alpha^{\prime}_{i}\), so \(0\neq[\delta^{\prime},\alpha^{\prime}_{i}]=[f_{*}(\delta^{\prime}),\alpha_{i} ]=0\). As in the case of \(\gamma^{\prime}\), we conclude that \(f_{*}([\delta^{\prime}])\) is not peripheral. For the amalgamated free product decomposition 4, we apply statement 2 of Lemma 3.4 with \(a=f_{*}([\delta^{\prime}])\) and \(g=f_{*}([\gamma^{\prime}])\). As \(f_{*}(\delta^{\prime}*\gamma^{\prime})\) is conjugate to an element disjoint from \(K_{m-1}\), hence from \(\delta\), \(f_{*}([\delta^{\prime}*\gamma^{\prime}])\) is conjugate to an element \(\pi_{1}(V,q)\) or of \(\pi_{1}(\overline{\Sigma\setminus V},q)\), i.e., an element of a factor. It follows by statement 2 of Lemma 3.4 that \(f_{*}([\gamma^{\prime}])\in\pi_{1}(V,q)\), as desired. ### Maps on complementary components The rest of the construction on each component is analogous to the base case with some refinements. We continue with the notation of Section 3.5. On the component \(V^{\prime}=V^{\prime(j)}\) of \(\overline{K_{m}\setminus K_{m-1}}\), we construct a map \(g_{m}^{(j)}\colon\, V^{\prime}\to V\) extending \(g_{m-1}|_{\delta^{\prime}}\) so that \(i_{*}\circ(g_{m}^{(j)})_{*}=(f|_{V^{\prime}})_{*}\colon\,\pi_{1}(V^{\prime},q ^{\prime})\to\pi_{1}(\Sigma,q)\) and \(g_{m}^{(j)}(\partial K_{m}^{\prime}\cap V^{\prime})\subset V\subset\Sigma \setminus K_{m}\). Namely, we extend the graph in \(V^{\prime}=V^{\prime(j)}\) consisting of a single vertex \(q^{\prime}\) and the edge \(\delta^{\prime}\) to a graph \(\Gamma^{\prime}\) so that \(V^{\prime}\) deformation retracts to \(\Gamma^{\prime}\) (this exists as \(V^{\prime}\) has more than one boundary component as a consequence of Theorem 3.1, statement 4). By Lemma 3.5, we can extend \(g_{m-1}|_{\delta^{\prime}}\) to a map \(\varphi\colon\,\Gamma^{\prime}\to V\) so that \(\varphi_{*}=(f|_{\Gamma^{\prime}})_{*}\colon\,\pi_{1}(\Gamma^{\prime},q^{ \prime})\to\pi_{1}(\Sigma,q)\) (as any homomorphism from the fundamental group of a graph to that of a topological space is induced by a map). A regular neighbourhood \(\mathcal{N}(\Gamma^{\prime})\) of \(\Gamma^{\prime}\) has a deformation retraction \(\rho\colon\,\mathcal{N}(\Gamma^{\prime})\to\Gamma^{\prime}\). We get a map \(g_{m}^{(j)}\colon\,\mathcal{N}(\Gamma^{\prime})\to\Sigma\) by setting \(g_{m}^{(j)}|_{\mathcal{N}(\Gamma^{\prime})}=\varphi\circ\rho\) (see Figure 2). Next, \(\overline{V^{\prime}\setminus\mathcal{N}(\Gamma^{\prime})}\) is a disjoint union of annuli \(C^{\prime}_{i}\times[0,1]\), \(1\leq i\leq k\), so that the boundary components of \(K^{\prime}_{m}\) other than \(\delta^{\prime}\) are the curves \(C^{\prime}_{i}\times\{1\}\), \(1\leq i\leq k\) and the boundary components of \(\mathcal{N}(\Gamma^{\prime})\) are \(C^{\prime}_{i}\times\{0\}\), \(1\leq i\leq k\). By Lemma 3.1, \(f(C^{\prime}_{i}\times\{1\})\) is homotopic to a curve \(\gamma\subset\Sigma\setminus K_{m}\) for each \(i\), \(1\leq i\leq k\). We define \(g_{m}^{(j)}\) on \(C^{\prime}_{i}\times\{1\}\) to map to \(\gamma\). As \(f(C^{\prime}_{i}\times\{0\})\) is homotopic to \(f(C^{\prime}_{i}\times\{1\})\) and \(g_{m}^{(j)}(C^{\prime}_{i}\times\{0\})\), we deduce that \(\gamma\) is homotopic to \(g_{m}^{(j)}(C^{\prime}\times\{0\})\) in \(\Sigma\), and hence in \(\Sigma\setminus K_{m-1}\)by Lemma 2.2. We use this homotopy to extend \(g_{m}^{(j)}\) to \(C^{\prime}_{i}\times[0,1]\). We also extend the tree \(T^{\prime}_{m-1}\) to a tree \(T^{\prime(j)}_{m}\) and modify \(f\) as in the base case. Let the boundary components of \(V^{\prime}\) other than \(\delta^{\prime}\) be \(\delta^{\prime}_{1},\dots,\delta^{\prime}_{n}\), \(\delta^{\prime}_{i}=C^{\prime}_{i}\times\{1\}\). Pick a point \(q^{\prime}_{i}\in\delta^{\prime}_{i}\) for each \(i\), \(1\leq i\leq n\). Pick a family of arcs \(\theta^{\prime}_{i}\) with interiors disjoint so that \(\theta^{\prime}_{i}\) is an arc from \(q^{\prime}\) to \(q^{\prime}_{i}\) in \(K^{\prime}_{1}\). Let \(T^{\prime(j)}_{m}\) be the union of these arcs. As \(T^{\prime(j)}_{m}\) is contractible we can homotope \(f\) fixing \(\Sigma^{\prime}\setminus(\operatorname{int}(V^{\prime}))\) so that \(f|T^{\prime(j)}_{m}=g_{m}^{(j)}|T^{\prime(j)}_{m}\). Note that for \(1\leq i\leq n\), \(f(q^{\prime}_{i})=g_{m}^{(j)}(q^{\prime}_{i})\). Further, for \(1\leq i\leq n\), as \((f|V^{\prime})_{*}=\left(g_{m}^{(j)}\right)_{*}\colon\,\pi_{1}\left(V^{\prime},q ^{\prime}\right)\to\pi_{1}(\Sigma,f(q^{\prime}))\), and \(f|_{\theta^{\prime}_{i}}=g_{m}^{(j)}|_{\theta^{\prime}_{i}}\), we deduce by using the change of base-point isomorphisms corresponding to \(\theta^{\prime}_{i}\) and \(f\circ\theta^{\prime}_{i}=g_{m}^{(j)}\circ\theta^{\prime}_{i}\) that \((f|_{V^{\prime}})_{*}=(g_{m}^{(j)})_{*}\colon\,\pi_{1}\left(V^{\prime},q^{ \prime}_{i}\right)\to\pi_{1}(\Sigma,f(q^{\prime}_{i}))\). Figure 2: Description of \(g_{m}^{(j)}\colon\,V^{\prime}=V_{m}^{(j)}\to V\subset\Sigma\setminus K_{m-1}\). On the top, i.e., in \(V^{\prime}\), the purple, blue, and green portions (arcs and circles) form \(\Gamma^{\prime}\), and blue and black arcs form \(T_{m}^{\prime(j)}\). At the bottom: the grey and yellow shades indicate \(g_{m}^{(j)}(V^{\prime})\) and \(g_{m}^{(j)}(C_{i}^{\prime}\times[0,1])\), respectively, and red and green loops denote \(g_{m}^{(j)}(C_{i}^{\prime}\times\{1\})\) and \(g_{m}^{(j)}(\delta^{\prime})\), respectively, where \(i=1,2\). ### Combining the maps We define \(g_{m}\colon\, K^{\prime}_{m}\to\Sigma\) as the unique map extending \(g_{m-1}\) whose restriction to the component \(V^{\prime(j)}\) of \(\overline{K^{\prime}_{m}\setminus K^{\prime}_{m-1}}\) is \(g_{m}^{(j)}\). Observe that \(g_{m}(K^{\prime}_{m}\setminus K^{\prime}_{m-1})\subset\overline{\Sigma\setminus K _{m-1}}\) and \(g_{m}(\partial K^{\prime}_{m})\subset\Sigma\setminus K_{m}\). The tree \(T^{\prime}_{m}\) is \(T^{\prime}_{m-1}\cup\bigcup_{j}\left(T^{\prime(j)}_{m}\right)\). As the homotopies of \(f\) had disjoint support, we take \(f\) to be the result of the composition of the homotopies. We have \(f|_{T^{\prime}_{m}}=g_{m}|_{T^{\prime}_{m}}\) and the required equalities on the fundamental groups. To summarize, we have constructed a subsurface, a map, and a tree satisfying the following properties: **Lemma 3.6**.: _The subsurface \(K^{\prime}_{m}\), map \(g_{m}\) and tree \(T^{\prime}_{m}\) satisfy the following properties:_ 1. _The subsurface_ \(K^{\prime}_{m}\) _satisfies the conditions of Lemma_ 3.1_._ 2. \(g_{m}(\partial K^{\prime}_{m})\subset\Sigma\setminus K_{m}\)_._ 3. \(f|_{T^{\prime}_{m-1}}=g_{m-1}|_{T^{\prime}_{m-1}}\)_._ 4. _The terminal vertices of_ \(T^{\prime}_{m-1}\) _are in_ \(\partial K^{\prime}_{m-1}\)_, with one terminal vertex in each component._ 5. _For a terminal vertex_ \(q^{\prime}\in T^{\prime}_{m-1}\)_,_ \((f|_{K^{\prime}_{m-1}})_{*}=(g_{m-1})_{*}:\pi_{1}(K^{\prime}_{m-1},q^{\prime}) \to\pi_{1}(\Sigma,q)\)_._ 6. \(g_{m}(K^{\prime}_{m}\setminus K^{\prime}_{m-1})\subset\overline{\Sigma \setminus K_{m-1}}\)_._ The conditions except 6 are exactly those of Section 3.4, i.e., the conditions for the inductive step. ### Constructing the proper map As \(K^{\prime}_{m}\supset L^{\prime}_{m}\) by Theorem 3.1 and the subsurfaces \(L^{\prime}_{m}\) form an exhaustion, so do the subsurfaces \(K^{\prime}_{m}\). We define the map \(g\colon\,\Sigma^{\prime}\to\Sigma\) as the direct limit of the maps \(g_{m}\colon\, K^{\prime}_{m}\to\Sigma\). We claim that \(g\) is proper. It suffices to show that \(g^{-1}(K_{n})\) is compact for all \(n\). We claim that \(g^{-1}(K_{n})\subset K^{\prime}_{n+1}\), and hence is a closed subset of a compact space, so compact. Namely, suppose \(x\in g^{-1}(K_{n})\) and \(x\notin K^{\prime}_{n+1}\). Then \(x\in K^{\prime}_{m}\setminus K^{\prime}_{m-1}\) for some \(m>n+1\). Hence, \(g(x)=g_{m}(x)\subset\overline{\Sigma\setminus K_{m-1}}\) by statement 6 of Lemma 3.6, so \(g(x)\notin\operatorname{int}(K_{m-1})\supset K_{m-2}\supset K_{n}\) (the last containment by \(m>n+1\)), a contradiction. Further, \(g_{*}=f_{*}\colon\,\pi_{1}(\Sigma^{\prime},p^{\prime})\to\pi_{1}(\Sigma,p)\), so \(g\) is homotopic to \(f\). By [1, Theorem 1], \(g\) is in turn (properly) homotopic to a homomorphism, and hence so is \(f\). As \(g\) commutes with the Goldman bracket, we deduce that \(g\) is orientation preserving. Namely, first, modify \(g\) by an isotopy to ensure that \(g\) is a diffeomorphism (in dimension 2, homeomorphisms are isotopic, hence properly homotopic to diffeomorphisms). If \(g\) is orientation reversing, then as the signs of intersection numbers are reversed, we get the equation \([g_{*}(x^{\prime}),g_{*}(y^{\prime})]=-g_{*}\left([x^{\prime},y^{\prime}]\right)\). As \(f_{*}=g_{*}\) and \(f_{*}\) commutes with the Goldman bracket, we conclude that \([g_{*}(x^{\prime}),g_{*}(y^{\prime})]=0\) for all \(x\), \(y\). As \(g_{*}\) is an isomorphism, it follows that the Goldman bracket is trivial on \(\Sigma\), a contradiction as \(\Sigma\) is assumed not to be a plane or cylinder, so there exist classes \(x\) and \(y\) with \([x,y]\neq 0\). This completes the proof of Theorem 1.1. ## 4. Proof of Theorem 1.2 We sketch the modifications needed to prove Theorem 1.2. A homeomorphism preserves intersection numbers, so clearly (1) implies (2) and (3). Further, clearly (2) implies (3). We show that (3) implies (1). Assume now that, \(I_{\Sigma}\left(f_{*}(x^{\prime}),f_{*}(y^{\prime})\right)=0\iff I_{\Sigma^{ \prime}}(x^{\prime},y^{\prime})=0\) for all \(x^{\prime},y^{\prime}\in\widehat{\pi}(\Sigma^{\prime})\). We see that the proof of Theorem 1.1 goes through with this in place of the hypothesis that \(f\) preserves the Goldman bracket. Namely, there are only two places where the hypothesis that the Goldman bracket is preserved are used directly: 1. In Lemma 3.3 to show that if the closed curve \(\gamma^{\prime}\) can be homotoped to be disjoint from \(\alpha_{i}^{\prime}\), i.e., \(I_{\Sigma^{\prime}}(\gamma^{\prime},\alpha_{i}^{\prime})=0\), then \(I_{\Sigma}(\gamma,\alpha_{i})=0\) where \(\gamma\) is homotopic to \(f_{*}(\gamma^{\prime})\) and \(\alpha_{i}\) is homotopic to \(f_{*}(\alpha_{i}^{\prime})\). Here an additional hypothesis is that \(\alpha_{i}\) is a simple closed curve. 2. In Lemma 3.5, to show that if the closed curve \(\gamma^{\prime}\) can be homotoped to be disjoint from \(\beta^{\prime}\), i.e., \(I_{\Sigma^{\prime}}(\gamma^{\prime},\beta^{\prime})=0\), then \(I_{\Sigma}(\gamma,\beta)=0\) where \(\gamma\) is homotopic to \(f_{*}(\gamma^{\prime})\) and \(\beta\) is homotopic to \(f_{*}(\beta)\). Here the additional hypothesis is that \(\beta^{\prime}\) is a simple closed curve. Clearly both these follow under the hypothesis that \(I_{\Sigma}\left(f_{*}(x^{\prime}),f_{*}(y^{\prime})\right)=0\iff I_{\Sigma^{ \prime}}(x^{\prime},y^{\prime})=0\) for all \(x^{\prime},y^{\prime}\in\widehat{\pi}(\Sigma^{\prime})\). Thus, the proof of Theorem 1.1 goes through with this in place of the hypothesis that \(f\) preserves the Goldman bracket.
2305.14772
A Question Answering Framework for Decontextualizing User-facing Snippets from Scientific Documents
Many real-world applications (e.g., note taking, search) require extracting a sentence or paragraph from a document and showing that snippet to a human outside of the source document. Yet, users may find snippets difficult to understand as they lack context from the original document. In this work, we use language models to rewrite snippets from scientific documents to be read on their own. First, we define the requirements and challenges for this user-facing decontextualization task, such as clarifying where edits occur and handling references to other documents. Second, we propose a framework that decomposes the task into three stages: question generation, question answering, and rewriting. Using this framework, we collect gold decontextualizations from experienced scientific article readers. We then conduct a range of experiments across state-of-the-art commercial and open-source language models to identify how to best provide missing-but-relevant information to models for our task. Finally, we develop QaDecontext, a simple prompting strategy inspired by our framework that improves over end-to-end prompting. We conclude with analysis that finds, while rewriting is easy, question generation and answering remain challenging for today's models.
Benjamin Newman, Luca Soldaini, Raymond Fok, Arman Cohan, Kyle Lo
2023-05-24T06:23:02Z
http://arxiv.org/abs/2305.14772v3
# A Controllable QA-based Framework for Decontextualization ###### Abstract Many real-world applications require surfacing extracted snippets to users, whether motivated by assistive tools for literature survey or document cross-referencing, or needs to mitigate and recover from model generated inaccuracies. Yet, these passages can be difficult to consume when divorced from their original document context. In this work, we explore the limits of LLMs to perform decontextualization of document snippets in user-facing scenarios, focusing on two real-world settings--question answering and citation context previews for scientific documents. We propose a question-answering framework for decontextualization that allows for better handling of user information needs and preferences when determining the scope of rewriting. We present results showing state-of-the-art LLMs under our framework remain competitive with end-to-end approaches. We also explore incorporating user preferences into the system, finding our framework allows for controllability.1 Footnote 1: Code and data available at [https://github.com/bneum@609/qa-decontextualization](https://github.com/bneum@609/qa-decontextualization) ## 1 Introduction Assistive tools for cross-referencing or research activities often rely on extracting text snippets from documents and showing them to users. For example, assistive tools can use snippets to support efficient comprehension of individual documents (August et al., 2023; Fok et al., 2023) or scaffold exploration over collections of documents for literature review (Kang et al., 2022; Palani et al., 2023). With the rise in adoption of large language models (LLMs) (Brown et al., 2020; OpenAI, 2023) to power research tools, developers use extracted snippets to mitigate the potential for generated inaccuracies; snippets can help users verify model-generated outputs (Bohnet et al., 2022) and provide a means for user error recovery. However, extracted snippets are not written to be read outside the context of the original full document: they can include terms that were defined earlier, anaphora whose antecedents lie in previous paragraphs, or just generally lack context that is needed for comprehension. At best, these issues make extracted snippets difficult to read, and at worst, they render the snippets misleading outside their originating context (Lin et al., 2003; Zhang et al., 2022). We consider the potential for _decontextualization_(Choi et al., 2021)--which asks models to rewrite extracted snippets to incorporate information from their originating contexts, thereby making them "stand alone"--as a means to make extracted snippets more consumable in user-facing settings. In this work, we investigate the use of LLMs for decontextualization of snippets, specifically in two real-world scenarios in which users directly consume these snippets--question answering and citation context previews of scientific documents (see Figure 1). We highlight a number of outcomes from this study: First, we recommend adjustments to the decontextualization task formulation motivated Figure 1: Overview of our user-facing decontextualization setting. We consider evidence snippets returned by a question answering system (left), as well as, citation context previews when exploring a citation graph (right). Highlighted sentences are added during the decontextualization process. Unlike prior work, these scenarios require handling multi-sentence snippets (left), handling references or links to other documents (right). by our settings of interest: (1) expanding scope to multi-sentence passages, (2) requiring transparency of model edits for user-facing scenarios, and (3) guidelines for handling citations or references to other documents (SS2). Second, we propose a new question-answering framework to tackle decontextualization. Our approach first captures which information in the snippets need to be clarified in the form of one or more questions; then, using evidence retrieved for these queries, rewrites the snippet. The framework has distinct advantages: first, it tackles difficulty in collecting gold decontextualizations due to high annotator variability by formalizing operations a pipeline should follow (generate questions, find evidence, rewrite passage). Further, it provides a path for personalized decontextualization, as users can edit system-generated questions to better suit their information needs2 (SS3). Footnote 2: We argue that the high subjectivity in annotations is evidence for a need to personalize decontextualization output. Third, we demonstrate our framework on a small annotation study to collect a set (n=289) of gold decontextualizations of scientific text snippets (SS4). Initial evaluations of LLMs using our framework compared to end-to-end approaches. Our results show competitive results of this question-answering framework while also providing room for finer control. ## 2 Decontextualization for real-world, user-facing applications We adopt the definition of decontextualization presented in Choi et al. (2021): Given a snippet-context pair \((s,c)\), an edited snippet \(s^{\prime}\) is a valid decontextualization of \(s\) if \(s^{\prime}\) is interpretable without any additional context, and \(s^{\prime}\) preserves the truth-conditional meaning of \(s\) in \(c\). We further take necessary departures in order to handle requirements that emerged in consideration of our (1) motivation of grounding in research assistive applications, (2) user-facing scenarios, and (3) consideration of real-world documents. (1) Multi-sentence Passages.While Choi et al. (2021) restrict the scope of their work to single-sentence snippets, they recommend future work investigate longer snippets. We agree with this idea, especially given the real-world scenarios we consider in this work make use of datasets that contain snippets longer a single sentence. For example, in the QA dataset we use in the rest of our paper, we observed that 41% of answer snippets are longer than a single sentence, and the longest has seven. To constrain the scope of possible edits during decontextualization, we try to preserve (1) the same number of total sentences and (2) each constituent sentence's core informational content and discourse role within the larger snippet. (2) Transparency of Edits.We require the final decontextualized snippet \(s^{\prime}\) to make transparent to users what text came from the original snippet \(s\) and what text was added, removed, or modified. We draw upon well-established guidelines in writing around how to modify quotations.3 Such guidelines include using square brackets ([]) to denote resolved coreferences or newly incorporated information. Footnote 3: APA style guide: [https://apastyle.apa.org/style-grammar-guidelines/citations/quotations/changes](https://apastyle.apa.org/style-grammar-guidelines/citations/quotations/changes) (3) Decontextualizing References.Real-world documents contain references to other documents (e.g., web pages, cited works) or within-document artifacts (e.g., figures, tables). There is no single correct way to handle these references when performing decontextualization; in fact, often the extent of decontextualization is more dependent on the specific user-facing application's design rather than on intrinsic qualities of the snippet. For example, take an answering snippet in the scientific document QA setting: _Q: What corpus did they use?_ _A: "We test our system on the CALLHOME Spanish-English speech translation corpus (Post et al., 2013) (SS3)."_ One method of decontextualization can be: _A: "[Bansal et al., 2017] test [their] system on the CALLHOME Spanish-English speech translation corpus (Post et al., 2013) ["Improved speech-to-text translation with the Fisher and Callhome Spanish-English speech translation corpus" at IWSLT 2013] (SS3)."_ which drops the within-document reference to section three while adding in the title of the cited paper to provide more context, since "Post et al., 2013" may not be familiar to the user. But in the case of surfacing citation context previews, a user interface likely already surfaces the title of both citing and cited papers, in which case the addition of a title isn't useful. Possibly preferred is an alternative decontextualization that describes the dataset: _"[Bansal et al., 2017] test [their] system on the CALLHOME Spanish-English speech translation corpus [Post et al., 2013] [, a noisy multi-speaker corpus of telephone calls in a variety of Spanish dialects (SS2)."_ ## 3 Decontextualization through Question Answering A key challenge in decontextualization is disagreement between people on (1) _what_ additional information people would like to be incorporated and (2) _how_ such information should be incorporated when editing from \(s\) to \(s^{\prime}\)[Choi et al., 2021]. ### Proposed framework We propose tackling these issues by decomposing decontextualization into three steps: 1. _Question generation._ Ask clarifying questions about the snippet. 2. _Question answering._ For each question, find \begin{table} \begin{tabular}{p{34.1pt} p{34.1pt}} \hline \hline _Title: “UTCNN: a Deep Learning Model of Stance Classificationon on Social Media Text”_ \\ _User query: “What is the size of the Chinese data?”_ \\ _Gold_ & For this analysis, the authors use posts [from FBFans, a single-topic Chinese unbalanced social media dataset obtained from Facebook]. They calculate the like statistics of each distinct author from these 32,595 posts. \\ GPT-3 & The authors used posts from the FBFans dataset to analyze whether the assumption of their paper is reliable. They calculated the like statistics of each distinct author from the 32,595 posts. \\ _Ours_ & The authors calculate the [normalized] like statistics of each distinct author from the 32,595 posts in the FBFans dataset [containing data from anti-nuclear-power Chinese Facebook fan groups from September 2013 to August 2014, including posts and their author and liker IDs] \\ \hline _Citing paper: “Question classification using head words and their hypernyms”Huang et al. (2008)_ \\ _Cited paper: “Learning question classifiers: the role of semantic information”Li and Roth (2006)_ \\ _Oold_ & In contrast to Li and Roth (2006)’s approach which makes use of a very rich feature set [a head word feature and two approaches to augment semantic features of head words using WordNet], the authors propose to use a compact yet effective feature set [which includes five binary feature sets: question whword, head word, WordNet semantic feature for head word, word grams and word shape feature]. \\ GPT-3 & The authors propose to use a compact yet effective feature set, as opposed to Li and Roth’s (2006) approach which uses a very rich feature set. \\ _Ours_ & In contrast to Li and Roth (2006)’s approach [which makes use of a very rich feature set], the authors propose to use a compact yet effective feature set [including head word feature, two approaches to augment semantic features of such head words using WordNet, Lesk’s word sense disambiguation (WSD) algorithm, depth of hypernym feature, unigrams, wh-word, unigram feature, and word shape feature]. \\ \hline \hline \end{tabular} \end{table} Table 1: Two examples of our decontextualization pipeline compared with gold annotations and end-to-end output from GPT-3. The first example is from the Qasper dataset [Dasigi et al., 2021]; the snippet is an evidence passage containing the answer the user question. The second is a text span extracted from Huang et al. (2008) citing Li and Roth (2006). Together, they demonstrate how an effective decontextualization system can improve consumption of text outside the originating document. an answer within the full document. 3. _Synthesize._ Rewrite the snippet by incorporating information from these QA pairs. First, we argue questions and answers are a natural articulation of the requisite context lacking from extracted snippets. The relationship between questions and discourse relations between document passages can be traced to many works on Questions Under Discussion (QUD) (Onea, 2016; Velleman and Beaver, 2016; De Kuthy et al., 2018; Rieser, 2019). Recent work has leveraged this idea to curate datasets for discourse coherence (Ko et al., 2020, 2022). We view decontextualization as a task that aims to recover from broken discourse relations through the resolution of question-answer pairs that connect portions of that snippet to the rest of the document. Second, we argue this framing also affords greater controllability by allowing users to specify which questions they want resolved. Within user-facing applications, question answering is a natural and well-established interaction paradigm, allowing users to forage for information within documents through natural language (Wang et al., 2022; Hoeve et al., 2020; Jahanbakhsh et al., 2022). In the decontextualization setting, since our proposed framework is agnostic to question provenance, we can either approach the first question generation step with both automatically-generated questions (e.g., to seed a "default" experience) and/or user-provided questions for greater control and personalization. ### Implementation Here, we describe our implementation of a system for snippet decontextualization under our framework. This system is easy for practitioners to adopt, making use of widely-available LLMs as well as off-the-shelf passage retrieval models. For all LLMs below, we use GPT-3 text-davinci-003 (Brown et al., 2020). See Figure 2 for a schematic. Question generation.Under our framework, questions can be provided by users and/or by automated means; we discuss the latter here. We use an LLM to generate questions using zero-shot, in-context prompting, using the following prompt: The following text is from a scientific paper, but might include language that requires more context to understand. The language might be vague (like "their results") or might be too specific (like acronyms or jargon). Write questions that ask for clarifications. If the language is clear, write "No questions." Using in context examples allowed us to better control the number of questions, but decreased their quality. Question answering.Given a question (generated in the previous step or user-provided), we approach question answering in two steps: (1) We retrieve the top \(k\) relevant paragraphs from the source document, and (2) we use an LLM to process these \(k\) passages to obtain a concise answer.4 Footnote 4: We also considered answering questions by prompting an LLM with the full document context, but performance during Figure 2: Overview of our implemented method to perform decontextualization of snippets from scientific papers. Footnote 4: [https://www.upwork.com](https://www.upwork.com) Specifically, we use a BM25 retriever for this initial retrieval step and prompt GPT-3 to use the information to answer the question using the following prompt: Using the following text taken from a scientific paper, answer the following question about the paragraph labeled "Context". Ignore any irrelevant information. If you cannot find the answer, write "No answer.": Title: [Title] Abstract: [Abstract] Paragraphs: [Top three paragraphs] Context: [Context] Question: [Question] Synthesize.Finally, we provide an LLM with the snippet, questions, and answers obtained from the previous steps. Again this is done zero-shot with the following prompt: The following text snippet is extracted from a scientific paper. Incorporate the answers to the following questions to clarify the snippet. Only include information in the Snippet, Questions, and Answers in the new snippet you write. We can also ablate the question component altogether by incorporating the gold contexts that were used to answer the questions into the prompt used for end-to-end experiments in addition to the title, section header, and context paragraph. See Section 6 for more details. ## 4 Collecting gold decontextualizations In this section, we describe our data collection process to perform our annotation study. This had two goals. First, we wanted to verify the validity of our QA framework; that is, can we collect higher-quality annotations by aligning our annotation protocol to the QA framework, as opposed to asking annotators to perform end-to-end decontextualization? Second, we wanted to evaluate our particular implementation and compare it with an end-to-end LLM baseline. ### Sources of snippets. We choose two datasets in the scientific document domain to as our source of snippets, one for each motivating user-focused setting: Question answering.We use Qasper Dasigi et al. (2021), a dataset for document-grounded QA over scientific papers. This dataset includes QA pairs along with evidence snippets that support a given answer. We use the evidence snippets as inputs that require decontextualization. Citation context preview.We obtain citation context snippets from NLP papers in the S2ORC Lo et al. (2020) collection. We specifically restrict to citation contexts that contain a single citation to keep our annotation task simpler, though we note that prior work has pointed out the prevalance of contexts containing multiple citations Lauscher et al. (2022); future work could investigate how to perform decontextualization amid multiple outward references in the same snippet. ### Annotation process We closely follow our proposed framework when collecting data (see Figure 3): 1. **Writing Questions**: Given an input that requires decontextualization, we ask annotators to write clarification questions or questions that require additional information to fully understand the snippet. Given the complexity of the annotation task we used the Upwork5 annotation platform to hire domain experts with experience in NLP. Footnote 5: [https://www.upwork.com](https://www.upwork.com) While piloting the question writing process, we determined that the questions that people ask fall into three categories: (1) Definitions Figure 3: Overview of the data collection protocol. of terms or expansions of acronyms, (2) Coreference resolution, or (3) Simply seeking more context to feel more informed. We asked the annotators to categorize their questions into one of these three categories. We also asked them to label the questions they asked as either a definition, a coreference or generic "additional context" question. 2. **Answering Questions**: We hired a separate set of annotators to write answers given an input question (from previous stage) and the source document(s). We used the Prolific6 annotation platform as a high-quality source for a larger number of annotators. Additionally, we asked the annotators to mark what evidence from the paper (or the cited paper, if one was available) supports their answer. Footnote 6: [https://www.prolific.co/](https://www.prolific.co/) After this, we manually filtered down a total of 719 initial questions to a group of 487 by eliminating ones that answered the question incorrectly or found that the question could not be answered using the information in the papers. 3. **Rewriting Snippets**: In this part, given the original snippet, and the collected question-answer pairs we ask another set of annotators to rewrite the snippet by incorporating the question answer pairs to finish the decontextualization. For this part we also used the Prolific annotation platform. ### Dataset statistics In total, we obtained 289 snippets (avg. 44.2 tokens long), 487 questions (avg. 7.8 tokens long), and 487 answers (avg. 20.7 tokens long). On average, the snippets from the Qasper dataset have 1.3 questions per snippet while the citation contexts have 1.9 questions per snippet. We provide a further breakdown of annotated question types in Table 2. ## 5 Experimental setup ### Evaluation For evaluation, we follow Choi et al. (2021) and use **SARI** scores. SARI was developed to evaluate text simplification models Xu et al. (2016). It takes an original snippet, a revised snippet, and a reference and computes scores based on which n-grams changed in the revised snippet vs the reference. Following Choi et al. (2021), we compute the SARI-add score by determining which unigrams the reference adds to the original snippet, and then calculate the F1 score between these edits and the ones that the revised snippet adds. For SARI-delete score, we determine which unigrams the reference deletes from the original snippet and then calculate the precision, recall, and F1 score between these and the ones the revised snippet deletes. In addition to calculating these scores on the unigram level, we are also interested in evaluating whether the models add the information that people want clarified. We calculate the clarification (**CLF**) precision, recall, and F1 by considering the set of clarifications that the reference adds to the original snippet against the clarifications that the revised snippet add to the original. We determine if a clarification exists through fuzzy string matching: we extract the additions from the decontextualizations, removing stop words and stemming all of the tokens. A clarification matches if at least three quarters of the added tokens in the prediction match one of the targets. ### End-to-end baseline With the abilities of today's LLMs, the immediate question is: are current methods able to succeed at end-to-end decontextualization? To study this, we again use GPT-3 text-davinci-003 with a prompt explaining the task along with various amounts of context, including the snippet and the source paper. You are a scientist in the field of natural language processing. Using the given information from a scientific paper, rewrite the given text snippet so it stands alone. To do this: - Remove discourse markers (like "in conclusion", "in this section", "for instance", etc.) - Replace first-person pronouns with third person pronouns. Replace "we" with "the authors" - Remove time-specific words like "current" - Make other surface-level changes to fix grammar - Replace any vague terms in the snippets - Define any specific terms or acronyms In Table 3, we can see the results of running GPT 3 on this end-to-end task. It does not perform very well: with SARI add scores indicating low overlap with the reference and the CLF scores indicating not covering many of the same things. Though, as Choi et al. (2021) note, there are many valid decontextualizations, so the precise value of the SARI scores are not meaningful, but we can still use them to compare the systems. Qualitatively we observe that while the generations often match the right form, and the surface-level changes are made, the generations often miss clarifying the questions that annotators had. This suggests that there is still room for improvement. We observe two challenges of the end-to-end task. The first is that giving the entire paper as input provides a lot of context, which might make it difficult to find the relevant information to clarify. The second is that models struggle to know what needs to be clarified. To address the first challenge, we explore different ways of incorporating context that is likely to contain relevant information. Choi et al. (2021) find that most of the sentences they decontextualize only require the Title, Section Header of the section the sentence is in, and the paragraph surrounding the snippet. For our snippets from scientific documents, this is likely not sufficient--particularly when paper-specific terms need to be defined. As such, we explore a number of different options. * **TSP**. Title, Section header, and the **P**aragraph containing the snippet. This is the same condition as Choi et al. (2021) * **TASP** and **TAISP**. These add the **A**bstract and Introduction respectively as both of these contain much of the background context that might need to be incorporated into the snippets. Limiting the amount of information available to the model actually helps (the CLF score for the TASP condition is slightly higher than the TAISP, potentially because there is too much distracting information in the introductions), but there are still issues about not clarifying everything that annotators wanted to be clarified (as illustrated by the low CLF scores overall). ## 6 Results ### Quantitative results The main quantitative results are in Table 4. There are two takeaways. The first is that the surface-level edits that the Pipeline system makes are better than the ones the end-to-end system makes (as illustrated by the higher SARI add scores). The SARI del scores indicate that the End-to-end systems make deletions that match the gold more often than the Pipeline system does. The most relevant parts of the text that are deleted are the discourse markers like "however", and we explore those next. Second, we find that the story is a little less clear when looking at the information that is added us \begin{table} \begin{tabular}{c c c c} \hline \hline & SARI add & SARI del & CLF \\ \hline Gold Questions & 0.336 & 0.662 & 0.042 \\ Pipeline & 0.228 & 0.539 & 0.015 \\ \hline E2E (TASP) & 0.131 & 0.961 & 0.023 \\ E2E (QC) & 0.164 & 0.964 & 0.018 \\ \hline \hline \end{tabular} \end{table} Table 4: Synthesis. E2E (best) is the best End-to-end baseline. E2E (QC) is the End-to-end prompt run with the context paragraphs that are retrieved with the questions. \begin{table} \begin{tabular}{c c c c} \hline \hline & \multicolumn{3}{c}{Question Type} \\ & Definition & Coreference & Additional Context \\ \hline Question Answering & 50 (31\%) & 59 (37\%) & 52 (32\%) \\ Citation Contexts & 102 (31\%) & 142 (44\%) & 82 (25\%) \\ \hline \hline \end{tabular} \end{table} Table 2: Number of tokens and counts in the dataset. This combines the citation context previews and the Qasper dataset. \begin{table} \begin{tabular}{c c c c} \hline \hline Context & SARI add & SARI del & CLF \\ \hline TSP & 0.131 & 0.961 & 0.019 \\ TASP & 0.148 & 0.963 & 0.023 \\ TAISP & 0.143 & 0.922 & 0.020 \\ \hline \hline \end{tabular} \end{table} Table 3: SARI add, del, and CLF scores for GPT-3 on the end-to-end task. For the context: **T** = Title, **A** = Abstract, **P** = Paragraph containing the snippet, **S** = Section header of the snippet’s section, **I** = Introduction, ing the CLF scores. The Pipeline system actually has lower CLF scores than the End-to-end system, which indicates that the information they include tends to not overlap with what the references include There are many ways to decontextualize a sentence, so the Pipeline and End-to-end appear to just be incorporating different information. If a reader had specific information that they wanted to incorporate, how would they go about including it? In the Pipeline setting, there is a clear place - we can replace the questions that are generated with questions that people have. We do this ("Gold Questions" row of Table 4) and find that the SARI-add and CLF are much higher, because now the same questions are being answered. For a more fair comparison, we also bring in more information to the End-to-end pipeline by providing it with the gold paragraphs that answered the questions, in the "E2E (QC)" row of the table. We find that it's having the questions that matters, and this oracle End-to-end system has lower SARI and CLF scores. ### Human evaluation We also conduct human evaluations on 30 randomly chosen snippets on the End-to-end system (TSP), Pipeline, and Gold Annotations. Each snippet is first evaluated for a Pass/Fail on the decontextualization task; this is determined by whether our annotator had an unresolved clarifying question after reading the snippet. Among the snippets that Passed, the annotator then indicated which system was the most informative. The annotator also indicated circumstances when a snippet contains too much information extraneous or irrelevant information compared to the gold snippet, and when a snippet is disfluent. We find that there is not much of a difference in the quality between the snippets output by the End-to-End and the Pipeline systems, which is in-line with quantitative results. We present some select qualitative examples in Table 1. ### Human intervention Manual inspection of the generated snippets suggested both models struggled to get match the granularity of information required by the gold annotation, which brings us back to the challenge of handling high variability in user preferences. We first rule out one hypothesis that the question generation module is producing off-topic, unrelated questions. In fact, the questions that were generated under our method had substantial overlap with the ones that were written. After stemming and removing stop words from the questions, the average Jaccard similarity between the generated and annotator written questions was \(58.1\%\). We next investigate whether issues in the full automation setting could be mitigated if we had obtained questions from humans instead. We conduct a set of oracle experiments, providing gold questions to our pipeline, skipping the question generation module. In Table 6, we illustrate how our pipeline system is able to recover from generating questions that the original question annotator didn't specify, thereby resulting in a final decontextualized snippet closer to the gold annotation. ## 7 Conclusion This work presents a framework and a system to operationalize decontextualization for real-world, user facing applications. Our investigation begins by enumerating functional limitations of current decontextualization system. Namely, we establish three key aspects for effective decontextualization that are currently missing: ability to process an arbitrary set of sentences, resolve within-document references, and provide a transparent list of edits to users. After listing desiderata of for an effective system, we introduce a framework to improve reliability of a decontextualization pipeline, while also enabling user personalization. The framework atomized decontextualization into three operations: asking clarifying questions about the passage yet-to-be decontextualized, identify evidence for the questions, and finally rewrite the snippet using retrieved evidence for grounding. Finally, we collect a small set (n=289) of gold decontextualizations of scientific text snippets, and show how LLMs can leverage our framework to perform more accurate decontextualization. Fur \begin{table} \begin{tabular}{c c c c} \hline \hline & E2E & Pipeline & Gold \\ \hline Stands Alone & 18 & 19 & 24 \\ Most Informative & 4 & 15 & 19 \\ Too much Info & 0 & 1 & - \\ Disfluent & 0 & 1 & 0 \\ \hline \hline \end{tabular} \end{table} Table 5: Human evaluation results. There are 30 snippets total. E2E is end-to-end. ther, we present results of a human evaluation that confirms the effectiveness of the proposed approach.
2308.05683
Quantum Cascade Surface Emitting Lasers
A low-cost single frequency laser emitting in the mid-infrared spectral region and dissipating minimal electrical power is a key ingredient for the next generation of portable gas sensors for high-volume applications involving chemical sensing of important greenhouse and pollutant gases. We propose here a Quantum Cascade Surface Emitting Laser (QCSEL), which we implement as a short linear cavity with high reflectivity coated end-mirrors to suppress any edge emission and use a buried semiconductor diffraction grating to extract the light from the surface. By wafer-level testing we investigate the cavity length scaling, extract mirror reflectivities larger than 0.9, and achieve a pulsed threshold power dissipation of 237 mW for an emission wavelength near 7.5 $\mu$m. Finally, we demonstrate single mode emission with a side-mode suppression ratio larger than 33 dB of a 248 $\mu$m short cavity mounted with the epitaxial layer up and operated in continuous wave at 20 $^\circ$C.
David Stark, Filippos Kapsalidis, Sergej Markmann, Mathieu Bertrand, Bahareh Marzban, Emilio Gini, Mattias Beck, Jérôme Faist
2023-08-10T16:38:47Z
http://arxiv.org/abs/2308.05683v1
# Quantum Cascade Surface Emitting Lasers ###### Abstract A low-cost single frequency laser emitting in the mid-infrared spectral region and dissipating minimal electrical power is a key ingredient for the next generation of portable gas sensors for high-volume applications involving chemical sensing of important greenhouse and pollutant gases. We propose here a Quantum Cascade Surface Emitting Laser (QCSEL), which we implement as a short linear cavity with high reflectivity coated end-mirrors to suppress any edge emission and use a buried semiconductor diffraction grating to extract the light from the surface. By wafer-level testing we investigate the cavity length scaling, extract mirror reflectivities larger than 0.9, and achieve a pulsed threshold power dissipation of 237 mW for an emission wavelength near 7.5. Finally, we demonstrate single mode emission with a side-mode suppression ratio larger than 33 dB of a 248 short cavity mounted with the epitaxial layer up and operated in continuous wave at 20. pacs: 32.50.-c, 32.50.-c, 32.50.-c, 32.50.-c, 32.50.-c, 32.50.-c, 32.50.-c Quantum Cascade Surface Emitting Lasers Introduction The mid-infrared (MIR) spectral region spanning from 2 \(\upmu\)m to 20 \(\upmu\)m is the molecular "fingerprint" region for many important organic and inorganic molecules, as they exhibit strong and narrow absorption lines within this region[1; 2]. Miniaturized optical gas sensors based on MIR absorption spectroscopy are highly attractive[3] for many applications such as industrial process control, environmental monitoring and medical diagnosis.[4] To enable low-cost and portable MIR gas sensors, compact and low power dissipation single mode light sources operating in the range of interest are of uttermost importance. The Quantum Cascade Laser (QCL) relying on intersubband transitions is an excellent candidate for a compact and coherent MIR light source because the emission wavelength can be tailored in wide ranges between 3 \(\upmu\)m and 24 \(\upmu\)m as well as between 60 \(\upmu\)m and 300 \(\upmu\)m.[5] The QCL can be modulated up to tens of GHz[6] and exhibits narrow linewidths[7] making it the source of choice for fast high-resolution gas spectroscopy in the MIR. However, harnessing intersubband transitions comes at the price of large threshold current densities (\(\geq\) 0.47 kA/cm\({}^{2}\))[8] which leads to an electrical power dissipation of several watts if a large device area is employed, hindering the integration into portable applications. By scaling the cavity length the electrical power dissipation can be reduced if low optical losses can be maintained to achieve laser action in continuous wave (CW) above room-temperature. Particularly, high mirror reflectivities with values close to unity are essential to mitigate the increase of the mirror losses \(\alpha_{\text{m}}\) with the reciprocal cavitity length \(1/L\). This can be seen by considering the threshold current density \(J_{\text{th}}\) given by \[J_{\text{th}}=\frac{\alpha_{\text{i}}+\alpha_{\text{m}}}{g\Gamma}=\frac{1}{g \Gamma}\left(\alpha_{\text{i}}+\frac{\ln(1/R)}{L}\right), \tag{1}\] where \(g\) is the material gain, \(\Gamma\) the optical confinement factor, \(\alpha_{\text{i}}\) the internal losses, and \(R\) the mirror reflectivity. Naturally, short cavity lasers not only enable smaller device footprints and thus a higher integration density, but they also offer a reduced number of axial lasing modes within the finite gain bandwidth and hence a route towards single axial mode selection. While the Vertical Cavity Surface Emitting Laser (VCSEL) provides an excellent approach for interband lasers emitting in near-infrared spectral region[9], this approach cannot be used for QCLs because of the strict intersubband selection rule which requires the electric field of the optical mode to be perpendicular to the plane of the quantum wells.[10] Ridge QCLs with cavity lengths on the order of 100 \(\upmu\)m have been reported using deeply etched Bragg mirrors with estimated reflectivities larger than 0.8 [11]. An alternative approach relied on cleaving and depositing highly reflective (HR) metallic coatings with reflectivities of 0.95 and 0.75 on the front and back facet, respectively. [12] For both approaches single mode operation could be observed, although the shortest devices were not operational at room-temperature. Superior CW temperature performance can be achieved with buried heterostructure QCLs [13; 14] featuring efficient thermal extraction [15] and low waveguide losses (0.5 cm\({}^{-1}\)) [16]. With a cavity length of 500 \(\upmu\)m and metallic HR (Al\({}_{2}\)O\({}_{3}\)/Ti/Au/Al\({}_{2}\)O\({}_{3}\)) coated back and partial HR (Al\({}_{2}\)O\({}_{3}\)/Ge) coated front facet, a threshold dissipation power of 260 mW at 10 \({}^{\circ}\)C and 330 mW at 40 \({}^{\circ}\)C has been reported. [17] Recently, a CW threshold dissipation power of only 143 mW at 20 \({}^{\circ}\)C has been demonstrated with a cleaved 250 \(\upmu\)m short cavity. [18] This was achieved by fabricating a sub-wavelength aperture in the metallic HR (Al\({}_{2}\)O\({}_{3}\)/Au) coating on both facets to suppress diffraction losses. Those approaches, show that metallic HR coatings are convenient to shorten the cavity length, although both facets need to be coated and due to the low transmissivity of the metallic layer an aperture is required to still extract light. To this end, buried second order diffraction gratings [19; 20] open up a way to couple out the light vertically. Furthermore, to scale the cavity length below 200 \(\upmu\)m [12], cleaving is not suitable, because the mechanical handling is challenging and the cavity length cannot precisely be controlled. Dry-etching controlled by lithography instead offers a promising alternative. Therefore, we introduce the Quantum Cascade Surface Emitting Laser (QCSEL), which is essentially an in-plane semiconductor laser implemented in a microcavity, combining an outcoupling element with HR mirrors to suppress any edge emission. Here, we realize the QCSEL as a linear buried heterostructure laser with a buried second order grating in the vicinity of the waveguide, dry-etched facets and HR coatings deposited on wafer level. We demonstrate the operation of QCSELs with two different active regions designed for 4.5 \(\upmu\)m and 8 \(\upmu\)m emission wavelength. We utilize wafer-level testing to investigate the cavity length scaling to assess the mirror reflectivty and to ultimately reduce the power dissipation and the number of lasing modes. Experiment For this work we use two strain-balanced InGaAs/AlInAs active regions grown on InP substrates by molecular beam epitaxy (MBE). From electroluminescence measurements, the peak wavelength and the full width half maxima estimating the gain bandwidth are inferred for both active regions. These values together with the sheet doping densities and the relevant fabrication parameters discussed below are summarized in Table 1. The buried heterostructure fabrication process [21; 22] starts with the definition and wet-etching of the second order grating on the top n-InGaAs cladding layer. The waveguide core is also formed by wet-etching. The lateral semi-insulating InP:Fe blocking layer and the top InP:Si contact layer are regrown by means of metal-organic vapor phase epitaxy (MOVPE). After depositing the metallic top-contact (Ti/Pt/Au), the end-mirror facets are dry-etched with inductively coupled plasma (ICP). Subsequently, both facets are coated with Si\({}_{3}\)N\({}_{4}\)/Ti/Au to build HR end-mirrors. The former dielectric layer is deposited by plasma-enhanced chemical vapor deposition (PECVD) and the latter metallic layers by electron beam evaporation. Note that for the metallic coating a Ti adhesion layer before the Au deposition is necessary for further processing and we estimate its thickness to be thinner than 20 nm. After revealing the top-contact by wet- and dry-etching, an additional Au layer with a thickness between 3 \(\upmu\)m to 5 \(\upmu\)m is electroplated. Lastly, the substrate is thinned down to about 200 \(\upmu\)m and a back contact (Ge/Au/Ni/Au) is deposited. The QCSEL device architecture \begin{table} \begin{tabular}{l|c c} Active region & EV1464 & EV2616 \\ \hline \(\lambda_{0}\) (\(\mu\)m) & 4.5 & 8 \\ \(2\gamma\) (cm\({}^{-1}\)) & 238 & 198 \\ \(n_{\text{2D}}\)(10\({}^{11}\)cm\({}^{-2}\)) & 1.49 & 1.04 \\ \(t_{\text{top}}\)(nm) & 200 & 300 \\ \(d_{\text{etch}}\)(nm) & 95 & 236 \\ \(t_{\text{coat}}\)(nm) & 435 & 300 \\ \end{tabular} \end{table} Table 1: Overview of the fabricated quantum cascade active regions: Peak wavelength \(\lambda_{0}\) and full width half maximum \(2\gamma\) extracted from electroluminescence spectra, nominal sheet doping density \(n_{\text{2D}}\), thickness of the top n-InGaAs cladding layer \(t_{\text{top}}\), etch depth for the grating \(d_{\text{etch}}\), and the thickness of the dielectric coating \(t_{\text{coat}}\) extracted from scanning electron microscopy (SEM) cross-section images. is illustrated in Fig. 1 with fabrication images and cross-section schematics. To estimate the reflectivity of the HR end-mirrors, we performed 3D simulations (COMSOL) of a 1 \(\upmu\)m waveguide section terminated with a Si\({}_{3}\)N\({}_{4}\)/Au coating, see Fig. 2(a). The modal reflectivities for varying dielectric coating thickness \(t_{\text{coat}}\) with and without Ti adhesion layer are shown in Fig. 2(b). For a wavelength of 4.5 \(\upmu\)m and 8 \(\upmu\)m, we expect a modal reflectivity larger than 0.96 and 0.89, respectively. At a wavelength of 8 \(\upmu\)m the absorption losses of Si\({}_{3}\)N\({}_{4}\) are dominating [23] and although thinner coatings are more beneficial, we chose \(t_{\text{coat}}=300\) nm to ensure the electrical insulation of the coating. In this work, the active device length is scaled from 504 \(\upmu\)m down to 46 \(\upmu\)m. Approximating the cavity length \(L\) by the active device length and neglecting the optical path length in the HR coatings (\(L\gg t_{\text{coat}}\)), the free spectral range \(\Delta\nu\) can be written as \[\Delta\nu=\frac{1}{2n_{\text{g}}L}, \tag{2}\] where \(n_{\text{g}}\) is the group index of the guided mode in the active region. By further assuming \(n_{\text{g}}=3.4\) Figure 1: The QCSEL device architecture: (a) Scanning electron microscopy (SEM) image of the device after fabrication. The device footprint is below 500 \(\times\) 400 \(\upmu\)m\({}^{2}\) and the cavity length is 79 \(\upmu\)m. The light is extracted through the metallic aperture in the center of the device indicated by the crossing of the two dashed lines. The dashed lines display the axial and lateral direction. The schematic diagrams in (b) and (c) show the axial and the lateral cross-section of the linear cavity, respectively. The red arrow indicates the vertical light extraction. (d) SEM of a cross-section of a 50 \(\upmu\)m short linear cavity corresponding to schematic diagram in (b). The inset displays a magnified view of the grating section where the InGaAs layers are highlighted for clarity. (e) SEM image of a dry-etched facet of a test structure showing the etch quality. \(\Delta\nu\) ranging from 3 cm\({}^{-1}\) to 32 cm\({}^{-1}\) are accessible in our experiments. The design of the outcoupling element, a non-resonant diffraction grating, follows the approach by Jouy and co-workers.[20] The near-second order grating period is determined by \[\Lambda=\frac{N-0.5}{N}\cdot\frac{\lambda}{n_{\text{eff}}}. \tag{3}\] where \(N\) is the number of grating periods, \(n_{\text{eff}}\) the effective guided mode index, and \(\lambda\) the free space wavelength. The outcoupling is investigated with 2D simulations (Lumerical) where the length of the grating section \(L_{\text{gs}}\) is fixed by \(N=7\) and the fabrication parameters of the respective active region are used (see Table 1). The simulation and the electric field component responsible for the vertical outcoupling for a wavelength of 8 \(\mu\)m are illustrated in Fig. 3(a). The transmission towards the surface and the substrate for varying wavelengths and the corresponding induced optical losses of a single pass of the grating section are considered in Fig. 3(b). The optical losses are expressed Figure 2: 3D Simulation of the facet reflectivity: (a) Electric field pattern along the growth direction \(|\mathbf{E}_{z}(x,z)|\) for 1 \(\upmu\)m waveguide section terminated with a Si\({}_{3}\)N\({}_{4}\)/Au coating. Note that a rectangular waveguide is assumed with constant width. Here, the wavelength is 4.5 \(\upmu\)m, the active region thickness is 2.03 \(\upmu\)m, and \(t_{\text{coat}}=435\) nm. (b) Modal reflectivity for varying \(t_{\text{coat}}\) and wavelengths of 4.5 \(\upmu\)m (top panel) and 8 \(\upmu\)m (bottom panel). The dashed lines include a 20 nm thick Ti adhesion layer. The vertical dotted lines correspond to \(t_{\text{coat}}\) of the fabrication (see Table 1) and the stars indicate the experimentally deduced reflectivities discussed below. in terms of the length of the grating section \(L_{\rm gs}\) using \[\alpha=-\frac{\ln(1-T)}{L_{\rm gs}}. \tag{4}\] Here, \(T\) is the transmission, which corresponds to either the transmission to the surface, the transmission to the substrate, or the transmission back to the input (reflection, not shown in Fig. 3(b)). For both wavelength ranges, the surface extraction losses are \(\alpha_{\rm surf}<0.1\) cm\({}^{-1}\), the substrate losses \(\alpha_{\rm sub}>0.3\) cm\({}^{-1}\), and the reflection losses \(\alpha_{\rm refl}<0.04\) cm\({}^{-1}\). While for long cavities (\(L\gg L_{\rm gs}\)), such an outcoupling element results in small slope efficiencies (\(\eta\propto\alpha_{\rm surf}/\alpha_{\rm tot}\), where \(\alpha_{\rm tot}\) are the total optical losses), an enhancement of the slope efficiency is expected as the cavity length is scaled (\(L\sim L_{\rm gs}\)). This is because the induced surface extraction losses can be seen as localized mirror losses which are proportional to the reciprocal length. Besides the short grating section, we also use small metallic apertures (\(\leq 21.2\times 10.5\)\(\mu\)m\({}^{2}\)) to ensure uniform electrical pumping and sufficient thermal extraction. Figure 3: 2D Simulation of the outcoupling element using 7 grating periods: (a) Electric field pattern along the waveguide \(|\mathbf{E}_{x}(x,z)|\) for \(\lambda=8\)\(\mu\)m. The input power \(P_{\rm in}\) exciting the axial waveguide mode and the monitors detecting the power directed towards the surface \(P_{\rm surf}\) and the substrate \(P_{\rm sub}\) are illustrated. (b) The transmission \(T\) towards the surface \(T_{\rm surf}=P_{\rm surf}/P_{\rm in}\) and the substrate \(T_{\rm sub}=P_{\rm sub}/P_{\rm in}\) versus the wavelength. The optical losses of a single pass are computed using Eq. (4). The grating etch depths and the thicknesses of the top n-InGaAs cladding layers of Table 1 are used. To characterize the QCSELs we developed an automatized probe-station to perform light-current-voltage (LIV) characterization on wafer-level, where the individual QCSELs can be adressed with a motorized stage. Light is collected and refocused on the detector with two 3 in off-axis parabolic mirrors, where the first mirror has a numerical aperture of about 0.7. Either a HgCdTe detector (Vigo systems, model: PVM-2TE-10.6-1x1-TO8-wZnSeAR-70+MIP-DC-250M) or a power meter are used as a detector. For spectral and CW characterization some devices were mounted on submounts. Initially the QCSELs are characterized in pulsed operation with a repetition rate of 96.15 kHz and a pulse width of 312 ns. Also cleaved reference lasers without any coating and diffraction grating are characterized to estimate the material gain (discussed below) and the average widths of the QCSELs which are obscured by the HR coatings (see Fig. 1). The average widths are computed by fitting the sub-threshold IV-curves of the QCSELs to a sub-threshold IV-curve of a reference laser. Using these widths, instead of individual widths inferred during the fabrication, more accurate current density values for QCSELs across the whole sample can be obtained. All the measurements presented in the following section are performed at 20\({}^{\circ}\)C. ## III Results As mentioned above, to exploit the advantages of short cavity lasers, i.e. single frequency operation at minimal electrical power dissipation on a compact device footprint, high facet reflectivities with values close to unity are required. We estimate these facet reflectivities by measuring the threshold of lasers with varying lengths. The threshold current densities \(J_{\text{th}}\) of QCSELs and reference lasers are extracted and shown in the top panels of Fig. 4. As expected from Eq. (1), \(J_{\text{th}}\) increases linearly with the reciprocal length. The material gain \(g\Gamma\) is estimated using Eq. (1) and fitting \(J_{\text{th}}\) of the reference lasers with cleaved facets for which we assume equal reflectivities of 0.28. Finally, with \(g\Gamma\), \(J_{\text{th}}\) of the QCSELs and Eq. (1), the reflectivity of the QCSEL facets is deduced. These results are summarized in Table 2 and it can be seen that the experimental reflectivity of 0.905 agrees well with the simulations for EV2616. Albeit a higher experimental reflectivity of 0.92 is obtained for EV1464, the value deviates from the simulated values by more Figure 4: Cavity length scaling based on the active region EV1464 (a) and EV2616 (b). The top panels show the threshold current densities over the reciprocal length for cleaved reference lasers without extractor and the QCSELs. The lengths of the reference lasers are annotated. The bottom panels show the corresponding electrical power dissipation of the QCSELs in a semilogarithmic plot. Note that the device lengths of the QCSELs are indicated with the top axes in each panel. The inset of the top panel (a) shows the probing of the QCSELs on wafer-level. than 0.03. This discrepancy could originate from the roughness induced by dry-etching, which is neglected in our simulations. Due to the shorter emission wavelength of EV1464 compared to EV2616, we expect the reflectivity of the end-mirrors to be more prone to roughness. Considering the simulations shown in Fig. 2(b), at a wavelength of 4.5 \(\upmu\)m the reflectivity can be improved potentially by more than 0.01. This improvement can be achieved by minimizing the thickness of the Ti adhesion layer which ultimately reduces the mirror losses by more than 28 %. At a wavelength of 8 \(\upmu\)m the reflectivity can be additionally increased to about 0.95 by reducing \(t_{\text{coat}}\) and another 0.01 improvement is predicted by employing Al\({}_{2}\)O\({}_{3}\) as dielectric coating. Lastly, for both wavelengths the diffraction losses at the facet can still be reduced by adjusting the refractive index profile of the waveguide core e.g. by changing the thickness of the active region or the cladding layers. The scaling of the electrical power dissipation versus the reciprocal length is shown in the bottom panels of Fig 4. The filled area illustrates the overall power dissipation, the limiting lower and upper curves indicate the dissipation at threshold \(P_{\text{th}}\) and at maximum optical power \(P_{\text{max}}\), respectively. For a QCSEL based on EV1464 and an active area of \((100\times 6.8)\)\(\upmu\)m\({}^{2}\), \(P_{\text{max}}\) and \(P_{\text{th}}\) are reduced to 758 mW and 548 mW, respectively. For a QCSEL based on EV2616 and an active area of \((73\times 7.1)\)\(\upmu\)m\({}^{2}\), \(P_{\text{max}}\) and \(P_{\text{th}}\) are reduced to 411 mW and 276 mW, respectively. Among all the characterized QCSELs (\(>240\) lasers), the pulsed LIV-characteristics of QCSELs with lowest \(P_{\text{th}}\) for both active regions are reported in Fig. 5. Compared to EV1464 where a \(P_{\text{th}}\) of 513 mW is observed, a much lower \(P_{\text{th}}\) of 237 mW could be achieved for EV2616 with the lower sheet doping density. The overall power dissipation can easily be improved by decreasing the doping levels in the claddings and the active region and by further employing a narrow gain active region. \begin{table} \begin{tabular}{c c c c c} Active region & \(g\Gamma\) (cm/kA) & \(R\) & \(\widetilde{R}_{\text{Ti/Au}}\) & \(\widetilde{R}_{\text{Au}}\) \\ \hline EV1464 & \(2.3\pm 0.2\) & \(0.92\pm 0.01\) & 0.964 & 0.978 \\ EV2616 & \(5.7\pm 0.3\) & \(0.905\pm 0.005\) & 0.895 & 0.910 \\ \end{tabular} \end{table} Table 2: Reciprocal length study for both active regions: Experimental material gain \(g\Gamma\), experimental reflectivity \(R\), and the simulated reflectivity with and without the Ti adhesion layer \(\widetilde{R}_{\text{Ti/Au}}\) and \(\widetilde{R}_{\text{Au}}\), respectively. Note that for the cleaved facets a reflectivity of \(0.28\pm 0.01\) is assumed. In Fig. 6(a), we demonstrate the reduction of axial lasing modes towards single mode operation through cavity length scaling. For QCSELs with varying lengths, the number of modes and the corresponding free spectral range were assessed from spectra acquired for currents close to rollover. The free spectral range is then fitted to Eq. (2) resulting in \(n_{\rm g}=3.30\) and \(n_{\rm g}=3.44\) for EV1464 and EV2616, respectively. Although for the shortest device (\(L=71\)\(\upmu\)m) still two modes are observed, this is an encouraging result because the active region is originally designed for broad gain employing two active region stacks. Moreover, the device lengths are not designed such that peak of the gain curve is matching an axial cavity mode. Thus, we are convinced that reliable single axial mode selection can be achieved with a narrow gain and single stack active region featuring a vertical optical transition. Nevertheless, a dominant single mode could be observed with a side mode suppression ratio (SMSR) of 18 dB at 20 mA from a QCSEL based on EV2616 (\(L=104\)\(\upmu\)m), see Fig. 6(b). This mode is centered at 1347.5 cm\({}^{-1}\) with the axial mode index \(q=95\) and for larger currents the SMSR decreases as the intensity of the mode \(q=97\) increases. For a QC Figure 5: Pulsed light-current-voltage (LIV) characteristics with the lowest power dissipation at threshold \(P_{\rm th}\) for QCSELs based on EV1464 (a) and EV2616 (b). The device area and the slope efficiencies \(\eta\) are annotated. SEL based on the shorter wavelength active region EV1464 (\(L=152\) um), the spectra for varying currents are shown in Fig. 6(c) and multiple modes around 4.4 um are observed. Compared to the spectra of the QCSEL shown in Fig. 6(b), more modes show up due to the longer cavity length and therefore smaller free spectral range. Additionally, from the electroluminescence measurements (see Table 1) we estimate a larger gain bandwidth for the active region EV1464. Single mode emission can be observed when operating the QCSELs in CW. The spectra and the corresponding LIV for a QCSEL based on EV2616 (\(L=248\) um) is shown in Fig. 7. Although the mode selection is not guaranteed, only one mode appears and at 45 mA a SMSR larger than 33 dB is deduced. It needs to be emphasized, that the chip used for these CW measurements is mounted on a submount with epitaxial layer up and that improvements are expected from a thinner substrate or from a double-channel geometry [17]. An enhancement of the slope efficiency as discussed above is demonstrated in Fig. 8(a), show Figure 6: Pulsed spectral characterization of the QCSELs: (a) Number of modes (square markers) and free spectral ranges (circle and cross markers) extracted from spectra acquired for currents close to rollover. The solid lines correspond to the fit of the free spectral range (see Eq.(2)). The horizontal gray line indicates the number of modes equals one. (b) Spectra acquired from a QCSEL with \(L=104\) μm. The axial mode index \(q\) and the side mode suppression ratio are illustrated. (c) Spectra acquired from a QCSEL with \(L=152\) μm (corresponding LIV shown in Fig. 5(a)). The arrow indicates the signal-to-noise ratio. Note that all curves appearing blueish are acquired from QCSELs based on EV1464 (wafer-level) with 0.5 cm\({}^{-1}\) resolution and a pulse width of 52 ns. All curves appearing reddish are acquired from QCSELs based on EV2616 (chip mounted on submounts) with 0.075 cm\({}^{-1}\) and a pulse width of 52 ns. ing that for \(L\geq 146\)\(\upmu\)m the slope efficiency increases roughly linearly with the reciprocal length as expected. For smaller \(L\) the slope efficiency then drops because \(J_{\text{th}}\) increases close to the maximum current density \(J_{\text{max}}\), which can be seen from the corresponding LIV-characteristics shown in Fig. 8(b). To extend the enhancement of the slope efficiency for shorter cavities, higher reflectivities of the end-mirrors are required. In Fig. 8(c)-(d), the pulsed LIV-characteristics of QCSELs exhibiting the largest slope efficiencies are reported. The values for the slope efficiencies are 44.4 mW/A and 75.7 mW/A for EV1464 and EV2616, respectively. These exceptionally higher values might be explained by a more favorable alignment between the dry-etched end-mirrors or by unintentional air voids during the regrowth step after wet-etching the grating. Creating instead air voids on purpose by a two-step regrowth, employing a thicker etch step into the n-InGaAs cladding layer, or extending the area of the metallic aperture provide ways to increase the surface Figure 7: Continuous wave characterization of a QCSEL based on EV2616 (chip mounted on submount): (a) Spectra acquired with a resolution of 0.075 cm\({}^{-1}\). The side mode suppression ratio is illustrated for a current of 45 mA. (b) LIV characteristics with annotated device area, slope efficiency \(\eta\) and threshold dissipation power \(P_{\text{th}}\). extraction losses and ultimately the slope efficiency. Figure 8: (a) Slope efficiency versus the reciprocal length extracted from LI curves of QCSELs based on EV2616. (b) Pulsed LIV characteristics for a selection of QCSELs used in (a). The device lengths are annotated. (c)-(d) Pulsed LIV characteristics exhibiting the largest slope efficiencies for QCSELs based on EV1464 (c) and EV2616 (d). The device areas and the slope efficiencies \(\eta\) are annotated. Conclusions We have proposed the QCSEL for the next generation of portable and large scale applications relying on gas phase absorption spectroscopy. The QCSEL was implemented as a compact buried heterostructure laser with footprints well below \((500\times 400)\)\(\upmu\)m\({}^{2}\) based on two different active regions designed for 4.5 \(\upmu\)m and 8 \(\upmu\)m. Lasing is demonstrated for devices as short as 71 \(\upmu\)m showing that higher integration densities are feasible. The key to further down scale the cavity length accompanied by the reduction of the electrical power dissipation and the number of axial lasing modes, is to increase the mirror reflectivites above 0.92 achieved in this work. This can be done by leveraging on high-quality dry-etching, minimizing the Ti adhesion layer, and adjusting the refractive index profile of the waveguide core to reduce diffraction losses. To reduce the power dissipation beyond 237 mW and to guarantee single axial mode selection, low doped and narrow gain active regions will be combined with cavity lengths designed such that a single axial mode is matching the peak wavelength of the gain curve. ## V Acknowledgements The authors gratefully acknowledge the financial support from Innosuise - Swiss Innovation Agency (Innovation Projects: 52899.1 and 53098.1) and the ETH Zurich Foundation (Project: 2020-HS-348). The authors express their gratitude to Zhixin Wang and Ruijung Wang for useful discussion and conceiving ideas. Furthermore, the authors would like to thank Bo Meng for helpful hints for the device fabrication, Moritz Muller for developing further the automatized probe-station, and Philipp Taschler for valuable inputs on the manuscript text. ## VI Data Availability The data that support the findings of this study are available from the corresponding author upon reasonable request. ## VII Keywords quantum cascade lasers, mid-infrared, single mode, low dissipation, surface emission, scaling, microcavity
2301.03921
Black String solutions in Rainbow Gravity
In this paper we study black string solutions under the consideration of rainbow gravity. We have analytically obtained the solution for four-dimensional black strings in terms of the functions $f(E/E_p)$ and $g(E/E_p)$ that sets the energy scale where the rainbow gravity becomes relevant. We have also obtained the Hawking temperature for the black string, from which we could see that the rainbow functions play the role of increasing or decreasing the Hawking temperature for a given horizon radius depending on the choice of such rainbow functions. We have computed the entropy, specific heat and free energy for the black string. The entropy and specific heat exhibit a rainbow dependence, while the free energy is not modified by the rainbow functions. Finally we have studied the effects of the rainbow gravity in the orbits of massive and massless particles around a black string. We could verify that neither massive nor massless particles exhibit stable orbits around a black string in the scenario of rainbow gravity, for any configuration of rainbow functions.
R. Dárlla, F. A. Brito, J. Furtado
2023-01-10T11:52:15Z
http://arxiv.org/abs/2301.03921v1
# Black String solutions in Rainbow Gravity ###### Abstract In this paper we study black string solutions under the consideration of rainbow gravity. We have analytically obtained the solution for four-dimensional black strings in terms of the functions \(f(E/E_{p})\) and \(g(E/E_{p})\) that sets the energy scale where the rainbow gravity becomes relevant. We have also obtained the Hawking temperature for the black string, from which we could see that the rainbow functions play the role of increasing or decreasing the Hawking temperature for a given horizon radius depending on the choice of such rainbow functions. We have computed the entropy, specific heat and free energy for the black string. The entropy and specific heat exhibit a rainbow dependence, while the free energy is not modified by the rainbow functions. Finally we have studied the effects of the rainbow gravity in the orbits of massive and massless particles around a black string. We could verify that neither massive nor massless particles exhibit stable orbits around a black string in the scenario of rainbow gravity, for any configuration of rainbow functions. ## I Introduction There are models of quantum gravity known as Doubly Special Relativity (DSR) that suggests the existence of an invariant energy scale (or lenght) independent of the observer. In such models the dispersion relation is modified when we consider energies near to Planck energy scale [1; 2; 3; 4], which implies that the speed of light is not the only relativistic invariant. Moreover, these models suggest that exists more possible consistent modifications to general relativity other than the introduction of quantum corrections in the Einstein-Hilbert action. The approaches that receive the name of rainbow's gravity suggest that the usual energy-momentum dispersion relation is deformed close to the Planck scale and that spacetime is also modified due to the non-linear representation of Lorentz transformations, so that its geometry changes according to the energy of the test particle in it. This means that particles with different energies distort spacetime differently in a type of spacetime backreaction leading to the mentioned modification of the relativistic energy-momentum dispersion relation [9]. These approaches are studied in several scenarios such as string field theory [5], loop quantum gravity [6], and non-commutative geometry [7]. Some theoretical proposals suggest corrections both in the action and in the dispersion relation as in [8]. Some phenomena can be explained through this semi-classical approach, such as the ultra-high-energy cosmic rays that are currently observed but still have unknown origin, suggesting that the dispersion relation is indeed modified. In astrophysics, the influence of the rainbow's gravity on the properties of a black hole has been studied in several scenarios, including its thermodynamics [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20], and also in the study of cosmic strings [21; 22; 23]. In addition, to understand the early universe, in which the energies involved were close to the Planck scale, such a modified theory of gravity plays an important role to avoid an initial singularity [24; 25; 26; 27; 28; 29]. Finally, in general field theory there is a lot of recent developments regarding rainbow gravity in the context of Bose-Einstein condensation [30], Klein-Gordon oscillation [29], Landau-Aharonov-Casher effect [31], particle production [32], among others. In this paper we study black string solutions under the consideration of rainbow gravity. We have analytically obtained the solution for four-dimensional black strings in terms of the functions \(f(E/E_{p})\) and \(g(E/E_{p})\) that sets the energy scale where the rainbow gravity becomes relevant. We have also obtained the Hawking temperature for the black string, from which we could see that for a give horizon radius the rainbow gravity contribution promotes an increasing in the Hawking temperature. ## II Rainbow gravity review The rainbow gravity was first studied in the context of Double Special Relativity (DSR) and it emerges as a generalization to curved spacetime of the deformed Lorentz symmetry group (locally). One of its consequences is the arising of a modified energy-momentum dispersion relation. Such modification is usually written in the form [33; 34; 33] \[E^{2}f^{2}(E/E_{P})-p^{2}c^{2}g^{2}(E/E_{P})=m^{2}c^{4}, \tag{1}\] where \(f(E/E_{P})\) and \(g(E/E_{P})\) are the so called rainbow functions, being \(E\) the energy of the probe particle and \(E_{P}\) the Planck energy. In the low-energy limit the rainbow functions converges to unit, restoring the standard dispersion relation. However in the high-energy limit the rainbow functions end up violating the usual energy-momentum dispersion relation. The modification of this latter corresponds to a change in the metric, according to [34], so that the Minkowski spacetime becomes \[ds^{2}=\frac{dt^{2}}{f^{2}(E/E_{P})}-\frac{1}{g^{2}(E/E_{P})}\delta_{ij}dx^{ i}dx^{j}. \tag{2}\] In order to study the rainbow gravity effects on the Friedmann-Robertson-Walker (FRW) universe [35; 36], the following rainbow functions were considered (case I) \[f(E/E_{P})=1,\ \ g(E/E_{P})=\sqrt{1-\xi(E/E_{P})^{s}}, \tag{3}\] where \(s>1\) and \(\xi\) is a dimensionless free parameter of the model, which we will consider the same as the other rainbow functions to facilitate comparison between the employed models. Another interesting choice for the rainbow functions is the following (case II), \[f(E/E_{P})=g(E/E_{P})=\frac{1}{1-\xi(E/E_{P})}. \tag{4}\] Such rainbow functions were considered in [3; 33] (and references therein) in studying possible nonsingular universe solutions and in [34], since it assures a constant light velocity, it may provides a solution for the horizon problem. A last choice of rainbow functions of great interest is given by (case III) \[f(E/E_{P})=\frac{e^{\xi(E/E_{P})}-1}{\xi(E/E_{P})},\ \ g(E/E_{P})=1. \tag{5}\] This choice of the rainbow functions was originally considered in [9] in the context of Gamma Ray Bursts. Later, this same choice was also addressed in [35; 37] in connection with FRW solutions. ## III Black string solution in rainbow gravity Let us consider the following line element for the black string \[ds^{2} = -\frac{A(r)}{f(E/E_{P})}dt^{2}+\frac{1}{g(E/E_{p})A(r)}dr^{2}+ \frac{r^{2}}{g(E/E_{p})}d\phi^{2} \tag{6}\] \[+\frac{\alpha^{2}r^{2}}{g(E/E_{p})}dz^{2},\] where \(t\in(-\infty,\infty)\), the radial coordinate \(r\in[0,\infty)\), the angular coordinate \(\phi\in[0,2\pi)\) and the axial coordinate \(z\in(-\infty,\infty)\). The \(\alpha\) parameter is considered as \(\alpha^{2}=-\Lambda/3\). \[A(r)=\frac{\alpha^{2}r^{2}}{[g(E/E_{p})]^{2}}-\frac{4\mu}{\alpha r}, \tag{7}\] The above solution for the black string in the rainbow gravity scenario recovers the usual black string solution [38] when \(g(E/E_{p})=1\), i.e., \[A(r)=\left(\alpha^{2}r^{2}-\frac{4\mu}{\alpha r}\right). \tag{8}\] For the black string the Einstein-Hilbert effective action requires the cosmological constant contribution, so that, \[S_{u}=\frac{1}{2\kappa^{2}}\int d^{4}x\sqrt{-g}\left(R-2\Lambda\right). \tag{9}\] where \(\kappa=8\pi G\) and \(R\) is the Ricci scalar. The EFE for this ansatz gives us \[G_{t}^{s}-3\alpha^{2} = [g(E/E_{p})]^{2}\left[\frac{1}{r}\frac{dA(r)}{dr}+\frac{A(r)}{r^ {2}}\right]-3\alpha^{2}, \tag{10}\] We can see that the energy-momentum tensor for the ansatz of equation (6) is \(T_{r}^{\mu}=-\rho(r)\ \mathrm{diag}(1,1,0,0)+p_{i}(r)\ \mathrm{diag}(0,0,1,1)\), where \(p_{i}=p_{\phi}=p_{z}\). This way we can find \(A(r)\) by solving \(G_{t}^{s}-3\alpha^{2}=-\kappa^{2}\rho(r)\), so that we find The behaviour of the black string solution in rainbow gravity is depicted in figure (1) for the cases I and II. Note that the case III for the rainbow functions does not gives us any modification in the black string solution, since \(g(E/E_{p})=1\). Let us discuss briefly the role played by the rainbow gravity scenario in the black string solution. As we can see in (1), as we increase the value of the energy \(E\), approaching the Planck energy scale, we also increase the value of the horizon radius for the cases I and II of the rainbow functions. Figure 1: Black string solution in rainbow gravity. For this plot we have considered \(E_{p}=1\), \(s=1\), \(\xi=0.4\), \(\alpha=0.5\) and \(\mu=0.7\). In (a) we are considering the case I for the rainbow functions while in (b) we are considering the case II. ## IV Black string thermodynamics in rainbow gravity Our black string solution in the rainbow gravity scenario has the horizon curves defined by \(A(\tilde{r_{h}})=0\), so that the linear mass can be written as \[\mu=\frac{\alpha^{3}\tilde{r_{h}}^{3}}{4g(E/E_{p})^{2}}. \tag{11}\] Here \(\tilde{r_{h}}=r_{h}\ [g(E/E_{P})]^{2/3}\), where \(r_{h}\) is the horizon radius of the usual General Relativity solution for the black string. Then, the expression (11) becomes \[\mu=\frac{\alpha^{3}r_{h}^{3}}{4}. \tag{12}\] Thus, this linear mass has no modification due the rainbow gravity. In possession of the solution for the static black string in the rainbow gravity scenario given by (7), we are able to study the thermodynamics of the black string by computing the Hawking's temperature by means of \(T_{H}=\frac{\dot{A}^{\prime}(D_{h})}{4\pi}\). Thus we obtain \[\tilde{T}_{H}=\frac{3\alpha^{2}\ r_{h}}{4\pi\ [g(E/E_{P})]^{4/3}}. \tag{13}\] The behaviour of the Hawking temperature for the cases I and II is depicted in the figure (2). For both cases (I and II) the same linear behaviour of the usual Hawking temperature for black strings is present. However some slight differences between the cases must be highlighted. For the case I (fig(2a)) we can see that for a given horizon radius the Hawking temperature is greater when we consider the effect of rainbow gravity. The opposite occurs for the case II (fig(2a)), where for a given horizon radius the Hawking temperature is smaller when we consider the effect of rainbow gravity. In order to properly understand the thermodynamics of the black string in the rainbow gravity context it is necessary to compute the entropy, specific heat and free energy. The entropy can computed directly from the expression \(dS=\frac{d\mu}{T_{H}}\), in which we get \[\tilde{S}=\frac{\pi\ \alpha\ r_{h}^{2}\ [g(E/E_{P})]^{4/3}}{2}. \tag{14}\] Clearly, this recovers the usual black string result \(S=\frac{1}{2}\pi\alpha r_{h}^{2}\) when \(g(E/E_{P})=1\). As we can see in figure (3), for both cases we have the same quadratic dependence of the horizon radius that the usual black string entropy exhibit. However, differently from the Hawking temperature, the case I promotes a decreasing in the entropy for a given horizon radius while the case II promotes an increasing in the entropy for a given horizon radius. The specific heat can be calculated by \(\tilde{C}_{v}=\frac{d\mu}{dT_{H}}\) from which we obtain \[\tilde{C}_{v}=\pi\alpha r_{h}^{2}\ [g(E/E_{P})]^{4/3} \tag{15}\] Similar to entropy, in case I for a given horizon radius the specific heat is smaller when we consider the effect of rainbow gravity. The opposite happens for case II. When \(g(E/E_{P})=1\) we get \(C_{v}=\pi\alpha r_{h}^{2}\), i.e. the usual black string specific heat in General Relativity. The behaviour of the specific heat for the cases I and II of the rainbow functions is depicted in (5). As it is widely known, the thermodynamical stability of black holes (black strings for our case) is directly related to the sign of the heat capacity. A positive heat capacity indicates that the system is thermodynamically stable, while its negativity imply a thermodynamical instability. Therefore, the result for the specific heat in the context of rainbow gravity indicates a thermodynamically stable black string. On the other hand, the rainbow gravity presents no modification in the free energy \(F=\mu-T_{H}S\), yielding therefore the usual black string result \[F=-\frac{\alpha^{3}r_{h}^{3}}{8}. \tag{16}\] Figure 2: Hawking temperature for black string solution in rainbow gravity. For this plot we have considered \(E_{p}=1\), \(s=1\), \(\xi=0.4\), \(\alpha=0.5\) and \(\mu=0.7\). In (a) we are considering the case I for the rainbow functions while in (b) we are considering the case II. ## V Geodesics and circular orbits The particle's geodesic in orbit around a static black string is given by \[\dot{r}^{2}=\omega^{2}-A(r)\left(\frac{L^{2}}{r^{2}}+m^{2}\right), \tag{17}\] where \(\omega\) is the particle's energy, \(L\) is the angular momentum and \(m\) is the particle's mass. Thus the effective potential is defined as \[V_{r}=A(r)\left(\frac{L^{2}}{r^{2}}+m^{2}\right). \tag{18}\] The circular geodesics occur at the points \(r_{c}\) satisfying \(\frac{1}{2}r_{c}^{2}=0\) and \(V_{r}^{\prime}(r_{c})=0\). In Fig. (5) we depict the effective potential of massless and massive particles for the black string in the rainbow gravity scenario. It is shown that there is no case where circular orbits are stable, similarly to the usual black string solution. Therefore, the rainbow gravity does not modify significantly the results for geodesics and circular orbits in comparison to the usual black string. ## VI Conclusion In this paper we study black string solutions under the consideration of rainbow gravity. We have analytically obtained the solution for four-dimensional black strings in terms of the functions \(f(E/E_{p})\) and \(g(E/E_{p})\) that sets the energy scale where the rainbow gravity becomes relevant. We could verify that the black string solution depends only on the function \(g(E/E_{p})\), and consequently, all the thermodynamic parameters will depend only on \(g(E/E_{p})\). We have plotted the behaviour of the black string solution in (1), and we could see that as we increase the value of the energy \(E\), approaching the Planck energy scale, we also increase the value of the horizon radius for the cases I and II of the rainbow functions. We have also obtained the Hawking temperature for the black string, from which we could see that the rainbow functions play the role of increasing or decreasing the Hawking temperature for a given horizon radius depending on the choice of such rainbow functions. We have computed the entropy, specific heat and free energy for the black string. The entropy and specific heat exhibit a rainbow dependence, while the free energy is not modified by the rainbow functions. Figure 4: Especific Heat for black string solution in rainbow gravity. For this plot we have considered \(E_{p}=1\), \(s=1\), \(\xi=0.4\), \(\alpha=0.5\) and \(\mu=0.7\). In (a) we are considering the case I for the rainbow functions while in (b) we are considering the case II. Figure 3: Entropy for black string solution in rainbow gravity. For this plot we have considered \(E_{p}=1\), \(s=1\), \(\xi=0.4\), \(\alpha=0.5\) and \(\mu=0.7\). In (a) we are considering the case I for the rainbow functions while in (b) we are considering the case II. Finally we have studied the effects of the rainbow gravity in the orbits of massive and massless particles around a black string. We could verify that neither massive nor massless particles exhibit stable orbits around a black string in the scenario of rainbow gravity, for any configuration of rainbow functions. ## Acknowledgements FAB would like to thank CNPq and CNPq/PRONEX/FAPESQ-PB (Grant nos. 165/2018 and 312104/2018-9), for partial financial support. JF would like to thank the Fundacao Cearense de Apoio ao Desenvolvimento Cientifico e Tecnologico (FUNCAP) under the grant PRONEM PNE0112-00085.01.00/16 for financial support.
2310.15802
Nonlinear response theory for lossy superconducting quantum circuits
We introduce a numerically exact and yet computationally feasible nonlinear response theory developed for lossy superconducting quantum circuits based on a framework of quantum dissipation in a minimally extended state space. Starting from the Feynman--Vernon path integral formalism for open quantum systems with the system degrees of freedom being the nonlinear elements of the circuit, we eliminate the temporally non-local influence functional of all linear elements by introducing auxiliary harmonic modes with complex-valued frequencies coupled to the non-linear degrees of freedom of the circuit. In our work, we propose a concept of time-averaged observables, inspired by experiment, and provide an explicit formula for producing their quasiprobability distribution. Furthermore, we systematically derive a weak-coupling approximation in the presence of a drive, and demonstrate the applicability of our formalism through a study on the dispersive readout of a superconducting qubit. The developed framework enables a comprehensive fully quantum-mechanical treatment of nonlinear quantum circuits coupled to their environment, without the limitations of typical approaches to weak dissipation, high temperature, and weak drive. Furthermore, we discuss the implications of our findings to the quantum measurement theory.
V. Vadimov, M. Xu, J. T. Stockburger, J. Ankerhold, M. Möttönen
2023-10-24T12:53:10Z
http://arxiv.org/abs/2310.15802v1
# Nonlinear response theory for lossy superconducting quantum circuits ###### Abstract We introduce a numerically exact and yet computationally feasible nonlinear response theory developed for lossy superconducting quantum circuits based on a framework of quantum dissipation in a minimally extended state space. Starting from the Feynman-Vernon path integral formalism for open quantum systems with the system degrees of freedom being the nonlinear elements of the circuit, we eliminate the temporally non-local influence functional of all linear elements by introducing auxiliary harmonic modes with complex-valued frequencies coupled to the non-linear degrees of freedom of the circuit. In our work, we propose a concept of time-averaged observables, inspired by experiment, and provide an explicit formula for producing their quasiprobability distribution. Furthermore, we systematically derive a weak-coupling approximation in the presence of a drive, and demonstrate the applicability of our formalism through a study on the dispersive readout of a superconducting qubit. The developed framework enables a comprehensive fully quantum-mechanical treatment of nonlinear quantum circuits coupled to their environment, without the limitations of typical approaches to weak dissipation, high temperature, and weak drive. Furthermore, we discuss the implications of our findings to the quantum measurement theory. ## I Introduction Quantum systems inherently interact with their environments, typically consisting of a macroscopic number of degrees of freedom [1; 2; 3]. These interactions give rise to a diverse range of phenomena, including decoherence, dissipation-induced phase transitions [4; 5], and emergence of classical physics in quantum systems [6] among many others [3]. Understanding and accurately describing the dynamics of open quantum systems is of fundamental importance in various fields, ranging from quantum information science and quantum computing [7; 8; 9; 10] to condensed matter physics [11] and quantum optics [1; 12]. Recent technological advances [13; 14; 15; 16; 17; 18; 19; 20] have opened up new possibilities for experimental studies of open quantum systems, necessitating the development of accurate and computationally efficient theoretical models. Traditional Markovian approaches [1; 21; 22; 23] for treating open quantum systems rely on assumptions of weak coupling and time-scale separation of the system and the bath, or substantially high temperature, which may not always hold in experimentally relevant setups leading to non-Markovian dynamics [24; 25; 26; 27; 28; 29; 30]. Furthermore, these approaches may have limited applicability in the presence of structured reservoirs with gaps or other singularities in the spectral density. Non-perturbative treatments often involve discretization of the bath [31; 32] or employ the Feynman-Vernon path integral formalism [33]. The latter includes techniques such as path integral Monte Carlo (PIMC) [34; 35; 36; 37], unraveling the time-nonlocal influence functional using auxiliary stochastic fields [38; 39], introducing a set of auxiliary density operators governed by a system of time-local hierarchical equations of motion (HEOM) [40; 41], and others [42; 43; 44; 45; 46]. Recently, a broad variety of numerically exact methods has been unified within the framework of quantum dissipation in a minimally extended state space (QD-MESS) [47; 48] which provides high accuracy with relatively moderate computational resources. In addition to the inevitable interaction with the natural environment, the study of open quantum systems is also motivated by the need to carry out measurements on real quantum systems [49]. In such cases, the measurement apparatus itself acts as an environment, leading to controllable decoherence. In this setup standard von Neumann measurements [50] can be performed only on the macroscopic measurement device or its part [49], hence a single measurement gives a very limited information about the system of interest. This has two consequences: first, no real measurement is instantaneous since the result usually comes from integration of continuous measurements, and second, even continuous measurements performed with a specific device cannot provide access to arbitrary observables of the system of interest. In many experimental setups it is common to probe the electromagnetic field [51] emitted from the measured system. To this end, the input-output (IO) theory [52; 53; 54] for quantum optical systems and superconducting quantum circuits was developed to calculate the properties of the emitted far-field radiation and its integral characteristics. A major advantage of the IO theory is the possi bility to generalize it for quantum networks [55], where the output from one part of the system may serve as the input for another part. These theories are quite accurate in quantum optical systems but rely on the Born-Markov approximation, which may break down, e.g., in the strong coupling regime. In fact, non-Markovian effects may be more pronounced in microwave superconducting circuits [29; 30] which are among the leading platforms for quantum computing applications [51; 56; 57]. Effects of retardation and correlations between the system and environment are critical for the design and optimization of superconducting qubits, quantum gates, initialization, and readout protocols, especially in circuits with tunable dissipative elements [58; 59; 60; 61; 16] and distributed-element qubits [62]. Thus, a rigorous and computationally efficient IO theory that goes beyond the limitations of Markov approximations is essential for accurate characterizing and harnessing the dynamics of superconducting circuits. The first non-Markovian IO theory was presented in Ref. [63] based on temporally non-local Heisenberg equations of motion for the degrees of freedom of the system and the field operators in the case of a one-dimensional chiral field propagating in one direction. However, solving these equations can be technically involved for non-linear systems. Later, this approach was generalized for quantum networks [64] and squeezed input fields [65]. In other studies [66; 67], the IO formalism was developed for atoms in a cavity which is coupled to a Markovian bath. Due to the approximation of the chiral field, it is infeasible to properly account for thermal effects. This approximation introduces negative-frequency modes, which do not have a thermal state at a positive temperature. It is a reasonable approximation in quantum optics, since the characteristic frequency scale is much higher than the temperature scale, especially if the back-scattering can be neglected. This is not the case for the superconducting quantum circuits, where thermal effects are important and reflections from the qubits and resonators are usually strong. The main goal of this paper is to develop a general formalism for dissipative superconducting quantum circuits which goes beyond the Born-Markov approximation, utilizing modern techniques for open quantum systems. We refer to this formalism as nonlinear response (NLR) theory. In superconducting circuits, the sources and measurement devices of the microwave field are typically connected to other components by dispersion-free coaxial or coplanar waveguides. These transmission lines act as dissipation channels, in addition to dielectric losses [68; 69] or quasiparticles in the superconductors [70; 71; 72; 73; 74]. In our formalism, we utilize the QD-MESS approach and account for the non-Markovian environment which is represented by finite number of damped bosonic modes [47; 75] coupled to the nonlinear degrees of freedom of the circuit. By incorporating these auxiliary modes, we can capture the full non-Markovian dynamics induced by the system-environment interaction, including memory effects and correlations. Our formalism provides a systematic framework for studying the input-output characteristics of the circuit, taking into account the interplay between the dissipation, the nonlinearity of the circuit, and the external driving fields. The suggested approach is similar to the black-box quantization technique [76; 77], however it is free from the limitation of weak dissipation [76] and the need of bath discretization [77]. The structure of the paper is as follows: In Sec. II, we provide a brief introduction to the Caldeira-Leggett model, which describes a broad class of dissipative quantum systems, and establish the connections between classical Langevin equations and the influence functional components in Keldysh space. We focus the remainder of the paper on the study of superconducting quantum circuits. Section III contains the central result of our paper, presenting a detailed derivation of the NLR theory. Starting from the classical Kirchhoff's law for the circuit and the insights gained from Sec. II, we derive the action on the Keldysh contour for the circuit. We explicitly incorporate classical driving and introduce a quantum source or counting field to obtain the generating functional of the output field. We then proceed to eliminate the linear degrees of freedom of the circuit and unravel the resulting time-nonlocal influence functional by introducing auxiliary bosonic modes. This leads to a time-local QD-MESS equation that governs the dynamics of the open quantum system. We define observables as time averages of the output field and provide an explicit formula for their quasiprobability distribution through the generating functional. In Sec. IV, we focus on the systematic derivation of the weak-coupling regime in the presence of possibly strong driving fields, which are non-trivial to accommodate exactly in master equations. The coupling to off-resonant auxiliary modes can be eliminated through the use of a dynamical Schrieffer-Wolf transformation [78; 79]. In Sec. V, we illustrate our theory by applying it to the problem of dispersive readout of a superconducting transmon qubit. Finally, we summarize our findings and provide our conclusions in Sec. VI. ## II Caldeira-Leggett model Throughout the paper we use the following notation: Lowercase bold characters denote vectors, uppercase bold characters denote matrices. A bold capital character with subscripts denotes a certain block of the matrix denoted by the corresponding character. Functions of time and frequency denoted by the same character are related to each other by Fourier transformation: \[X(\omega)=\int\limits_{-\infty}^{+\infty}X(t)\mathrm{e}^{\imath \omega t}\:\mathrm{d}t, \tag{1}\] \[X(t)=\frac{1}{2\pi}\int\limits_{-\infty}^{+\infty}X(\omega) \mathrm{e}^{-\imath\omega t}\:\mathrm{d}t. \tag{2}\] The operators in the Hilbert space are emphasized by \(\hat{:}\) accent, the operators in the extended Liouville space are denoted by \(\hat{:}\) accents. We begin our analysis by considering the paradigmatic Caldeira-Leggett model [80] for dissipative systems. The classical Lagrangian of this model is given by \[\mathcal{L}(\dot{x},\dot{q}_{k},x,q_{k},t)=\frac{m\dot{x}^{2}}{2}- V(x,t)\\ +\sum_{k}\left[\frac{m_{k}\dot{q}_{k}^{2}}{2}-\frac{m_{k}\omega_{ k}^{2}}{2}\left(q_{k}-\frac{c_{k}}{m_{k}\omega_{k}^{2}}x\right)^{2}\right], \tag{3}\] which describes a particle of mass \(m\) in a possibly time-dependent potential \(V(x,t)\) interacting with a plethora of environmental harmonic oscillators each with mass \(m_{k}\), bare angular frequency \(\omega_{k}\), and coupling constant \(c_{k}\) which provides the coupling between the system and the \(k\):th oscillator. The Feynman-Vernon path integral approach [33] is the conventional quantum-mechanical treatment of this system. The time evolution of the full density matrix of the system \(W(\mathbf{x}_{+},\mathbf{x}_{-},t)\) from time \(t^{\mathrm{i}}\) to \(t^{\mathrm{f}}>t^{\mathrm{i}}\) can be found as \[W\left(\mathbf{x}_{+}^{\mathrm{f}},\mathbf{x}_{-}^{\mathrm{f}},t^{ \mathrm{f}}\right)=\int K\left(\mathbf{x}_{+}^{\mathrm{f}},\mathbf{x}_{-}^{\mathrm{f} },t^{\mathrm{f}};\mathbf{x}_{+}^{\mathrm{i}},\mathbf{x}_{-}^{\mathrm{i}},t^{\mathrm{i} }\right)\\ W\left(\mathbf{x}_{+}^{\mathrm{i}},\mathbf{x}_{-}^{\mathrm{i}},t^{\mathrm{i} }\right)\:\mathrm{d}\mathbf{x}_{+}^{\mathrm{i}}\:\mathrm{d}\mathbf{x}_{-}^{\mathrm{i}}, \tag{4}\] where the subscripts "\(+\)" and "\(-\)" stand for the first and the second arguments of the density matrix, respectively, \(\mathbf{x}_{\pm}=\left[x_{\pm}\:\:\:q_{1,\pm}\:\:q_{2,\pm}\:\:\ldots\right]^{\mathrm{T}}\) are vectors of all coordinates of all the degrees of freedom, and the propagator \(K(\ldots)\) is given by the following path integral: \[K(\ldots)=\int\mathcal{D}[\mathbf{x}_{+},\mathbf{x}_{-}]\exp\left\{\frac{t}{\hbar} \left(S[\mathbf{x}_{+}]-S[\mathbf{x}_{-}]\right)\right\}. \tag{5}\] Here, the path integration is carried out over the fixed-boundary trajectories \(\mathbf{x}_{\pm}\left(t^{\mathrm{i}/\mathrm{f}}\right)=\mathbf{x}_{\pm}^{\mathrm{i}/ \mathrm{f}}\) and the functional \(S[\mathbf{x}]\) is a classical action \[S[\mathbf{x}]=\int\limits_{t^{\mathrm{i}}}^{t^{\mathrm{f}}}\mathcal{L}\left(\dot{ \mathbf{x}},\mathbf{x},t\right)\:\mathrm{d}t. \tag{6}\] It is convenient to carry out a Keldysh rotation to so-called classical and quantum trajectories [11] \[\mathbf{x}_{\pm}=\mathbf{x}_{\mathrm{c}}\pm\frac{\mathbf{x}_{\mathrm{q}}}{2}. \tag{7}\] The choice of the Keldysh formulation for the path integral is justified by its simplicity in the coupling to the external fields: classical fields couple to quantum degrees of freedom of the system, and quantum counting fields couple to classical degrees of freedom. We utilize this substitution in the path integral (5), consider the limit \(t^{\mathrm{i}}\rightarrow-\infty\) and \(t^{\mathrm{f}}\rightarrow+\infty\), and assume that the full system was initially in a thermal state at temperature \(T\). The drive, if there is any, is assumed to be off in the past \(\hat{V}(x,t)\to 0\) at \(t\rightarrow-\infty\). We calculate the trace over all degrees of freedom and obtain unity \[1=\int\mathcal{D}[\mathbf{x}_{\mathrm{c}},\mathbf{x}_{\mathrm{d}}]\exp\left\{\frac{t}{ \hbar}S[\mathbf{x}_{\mathrm{c}},\mathbf{x}_{\mathrm{d}}]\right\}, \tag{8}\] where the Keldysh action is given by [11] \[S[\mathbf{x}_{\mathrm{c}},\mathbf{x}_{\mathrm{d}}]=\int\limits_{-\infty}^ {+\infty}\left\{\frac{m}{2}\left[x_{\mathrm{c}}\:\:x_{\mathrm{q}}\right]\mathbf{D} ^{-1}\left(\imath\partial_{t}\right)\begin{bmatrix}x_{\mathrm{c}}\\ x_{\mathrm{q}}\end{bmatrix}\right.\\ -V\left(x_{\mathrm{c}}+\frac{x_{\mathrm{q}}}{2},t\right)+V\left(x_{ \mathrm{c}}-\frac{x_{\mathrm{q}}}{2},t\right)\\ +\sum_{k}\left[\frac{m_{k}}{2}\left[q_{k,\mathrm{c}}\:\:q_{k, \mathrm{d}}\right]\mathbf{D}^{-1}\left(\imath\partial_{t}\right)\begin{bmatrix}q_ {k,\mathrm{c}}\\ q_{k,\mathrm{q}}\end{bmatrix}\right.\\ \left.-\omega_{k}^{2}\left(q_{k,\mathrm{c}}-\frac{c_{k}x_{\mathrm{ c}}}{m_{k}\omega_{k}^{2}}\right)\left(q_{k,\mathrm{q}}-\frac{c_{k}x_{\mathrm{q}}}{m_{k} \omega_{k}^{2}}\right)\right]\right\}. \tag{9}\] Here, we introduced an inverse Green's function \[\mathbf{D}^{-1}(\omega)=\begin{bmatrix}0&(\omega-\imath 0^{+})^{2}\\ (\omega+\imath 0^{+})^{2}&2\imath 0^{+}\omega\coth\left(\frac{\hbar\omega}{2 \hbar\mathrm{T}}\right)\end{bmatrix}, \tag{10}\] where \(\mathbf{D}^{-1}(\imath\partial_{t})\) stands for the function of the differential operator. The symbol \(0^{+}\) should be treated as an infinitesimal positive frequency introduced to provide proper causality. The quantum-classical and classical-quantum components are the retarded and advanced inverse Green's functions of a free classical particle with unit mass, respectively. The quantum-quantum or Keldysh component is a time non-local regularization which arises from the thermal initial condition in the action. The action is quadratic in the bath degrees of freedom, and hence the path integral over them can be evaluated explicitly. We obtain an effective action for the system coordinate as \[S_{\mathrm{s}}\left[x_{\mathrm{c}},x_{\mathrm{q}}\right]\\ =-\imath\hbar\ln\left(\int\mathcal{D}\left[q_{k,\mathrm{c}},q_{k, \mathrm{d}}\right]\exp\left\{\frac{\imath}{\hbar}S\left[\mathbf{x}_{\mathrm{c}}, \mathbf{x}_{\mathrm{q}}\right]\right\}\right). \tag{11}\] This effective action is given by \[S_{\rm s}\left[x_{\rm c},x_{\rm q}\right]=\int\limits_{-\infty}^{+ \infty}\left[m\dot{x}_{\rm c}\dot{x}_{\rm q}\right.\] \[\qquad\left.-V\left(x_{c}+\frac{x_{\rm q}}{2},t\right)+V\left(x_{ \rm c}-\frac{x_{\rm q}}{2},t\right)\right]\;{\rm d}t\] \[-\frac{1}{2}\int\limits_{-\infty}^{+\infty}\left[x_{\rm c}(t)\; \;x_{\rm q}(t)\right]\mathbf{\Sigma}(t-t^{\prime})\begin{bmatrix}x_{\rm c}(t^{ \prime})\\ x_{\rm q}(t^{\prime})\end{bmatrix}\;{\rm d}t\;{\rm d}t^{\prime}, \tag{12}\] where the integration over bath degrees of freedom results in the emergence of a finite time non-local term called influence functional [33]. We refer to the kernel of this functional \(\mathbf{\Sigma}(t-t^{\prime})\) as self-energy in analogy with the self-energy from condensed matter physics. The self-energy has a similar structure to that of the inverse Green's function, namely, \[\mathbf{\Sigma}(\omega)=\begin{bmatrix}0&\Sigma^{\rm A}(\omega)\\ \Sigma^{\rm R}(\omega)&\Sigma^{\rm K}(\omega)\end{bmatrix} \tag{13}\] and \(\mathbf{\Sigma}(t-t^{\prime})\) is its Fourier image \[\mathbf{\Sigma}(t-t^{\prime})=\frac{1}{2\pi}\int\limits_{-\infty}^{+\infty}\mathbf{ \Sigma}(\omega){\rm e}^{-\imath\omega(t-t^{\prime})}\;{\rm d}\omega. \tag{14}\] The retarded component \(\Sigma^{\rm R}(\omega)\) is a classical force-coordinate response function of the bath, the advanced component, extended to the whole complex frequency plane, \(\Sigma^{\rm A}(\omega)=\left[\Sigma^{\rm R}(\omega^{*})\right]^{*}\) is its conjugate, and the Keldysh component is given by the fluctuation dissipation theorem as \[\Sigma^{\rm K}(\omega)=\frac{1}{2}\left[\Sigma^{\rm R}(\omega)-\Sigma^{\rm A }(\omega)\right]\coth\left(\frac{\hbar\omega}{2k_{\rm B}T}\right). \tag{15}\] By applying the Hubbard-Stratonovich transformation to the Keldysh part of the influence functional (12) and employing a saddle-point approximation, we obtain the quasiclassical Langevin equation for the system as [81, 11] \[-m\ddot{x}_{\rm c}-\int\limits_{-\infty}^{+\infty}\Sigma^{\rm R}(t-t^{\prime })x_{\rm c}(t^{\prime})\;{\rm d}t^{\prime}-\frac{\partial V(x_{\rm c},t)}{ \partial x_{\rm c}}=\xi(t), \tag{16}\] where \(\xi(t)\) is a Gaussian-noise term with the correlation function \[\langle\xi(t)\xi(t^{\prime})\rangle=\frac{\imath}{2\pi}\int\limits_{-\infty} ^{+\infty}\Sigma^{\rm K}(\omega){\rm e}^{-\imath\omega(t-t^{\prime})}\;{\rm d}\omega. \tag{17}\] A quantum counterpart of this equation is a Heisenberg-Langevin equation [82, 83, 84] which is written for coordinate operators and has an operator-valued noise term with Gaussian statistics. However, its direct application beyond linear systems is complicated due to the lack of a closed system of equations for correlation functions. The main conclusion from this section is that in order to construct the Feynman-Vernon influence functional, it is enough to know only the classical response function of the bath and its temperature. If we do not have a microscopic model for the bath, but know its classical response function, we can introduce the corresponding terms phenomenologically into the Keldysh action. ## III Nonlinear response theory for quantum circuits Although many parts of this theory are general and agnostic to the physical realization, we motivate our studies by a particular class of open quantum systems, namely, superconducting quantum circuits. We consider circuits formed by lumped elements with linear elements such as capacitors, inductors, Ohmic resistors, and nonlinear elements such as Josephson junctions. Due to their inherent nonlinearity, the latter are the key components of every superconducting qubit. The control and the measurement of the circuit is provided by coupling to semi-infinite transmission lines, which inevitably introduce additional dissipation channels in the system. Such systems indeed fall into the Caldeira-Legett class of systems. However, from the experimental point of view, not all of its observables can be probed directly. The only experimentally accessible quantities are the output fields in the transmission lines, which belong rather to the environment. As we further discuss below, the whole division between the system and the environment may be ambiguous. Therefore, there is a need of a theory which connects the classical drive field with the experimentally observable output field. ### Classical equations of motion Using conclusions from the previous section, we begin with writing classical equations of motion for the electric circuit [85]. We define the flux \(\varphi_{k}\) for each node \(k\) of the circuit and express Kirchhoff's law for it as \[-\sum_{k^{\prime}}C_{kk^{\prime}}\left(\ddot{\varphi}_{k}-\ddot{ \varphi}_{k^{\prime}}\right)-\sum_{k^{\prime}}\frac{\dot{\varphi}_{k}-\dot{ \varphi}_{k^{\prime}}}{R_{kk^{\prime}}}\] \[-\sum_{k^{\prime}}\frac{\varphi_{k}-\varphi_{k^{\prime}}}{L_{kk^{ \prime}}}-\sum_{k^{\prime}}i_{1,kk^{\prime}}-\sum_{j}\frac{\dot{\varphi}_{k}-2v _{\rm i,(kj)}}{Z_{(kj)}}=0, \tag{18}\] where the first four sums describe the currents coming into the node \(k\) from capacitors \(C_{kk^{\prime}}\), resistors \(R_{kk^{\prime}}\), inductors \(L_{kk^{\prime}}\), and Josephson junctions which connect it to the other nodes \(k^{\prime}\), whereas the last sum corresponds to the current coming from the transmission lines with characteristic impedances \(Z_{(kj)}\) coupled to the node \(k\) as illustrated in Fig. 1. Here, index \(j\) in a pair (\(kj\)) enumerates all the transmission lines connected to the node \(k\). The Josephson current is given by \[i_{\mathrm{J,}kk^{\prime}}=\frac{2\pi E_{\mathrm{J,}kk^{\prime}}}{\Phi_{0}}\sin \left[\frac{2\pi}{\Phi_{0}}\left(\varphi_{k}-\varphi_{k^{\prime}}-\Phi_{kk^{ \prime}}\right)\right], \tag{19}\] where \(E_{\mathrm{J,}kk^{\prime}}\) is the Josephson energy of the junction between the nodes \(k\) and \(k^{\prime}\), \(\Phi_{kk^{\prime}}\) is the flux bias of the junction, and \(\Phi_{0}\) is the superconducting flux quantum. If a pair of nodes \(k\) and \(k^{\prime}\) is not connected by a capacitor then we put \(C_{kk^{\prime}}=0\). For the same cases related to resistors, inductors, and Josephson junctions we put \(R_{kk^{\prime}}\rightarrow\infty\), \(L_{kk^{\prime}}\rightarrow\infty\), and \(E_{\mathrm{J,}kk^{\prime}}=0\). If there is a ground node in the circuit, we put the corresponding flux \(\varphi_{\mathrm{ground}}=0\). The output \(v_{\mathrm{o,}(kj)}\) voltage in the transmission line \(j\) connected to the node \(k\) can be expressed through the input field and the voltage at the node as \[v_{\mathrm{o,}(kj)}=\dot{\varphi}_{k}-v_{\mathrm{i,}(kj)} \tag{20}\] since the voltage at the node \(v_{\mathrm{o,}(kj)}+v_{\mathrm{i,}(kj)}=\dot{\varphi}_{k}\) must be single-valued for all transmission lines \(j\) connected to the node \(k\). We rewrite Eq. (18) in the matrix form, introducing notations \(\mathbf{\varphi}\) as vector of the fluxes at all the nodes, \(\mathbf{v}_{\mathrm{i/o}}\) as vectors of input and output voltages in the transmission lines, and \(\mathbf{i}_{\mathrm{J}}\) as vector of currents through all the Josephson junctions: \[\mathbf{K}^{\mathrm{R}}(\imath\partial_{t})\begin{bmatrix}\mathbf{\varphi}\\ \mathbf{v}_{\mathrm{i}}\end{bmatrix}=\begin{bmatrix}\mathbf{\nabla}^{\mathsf{T}}\mathbf{i} _{\mathrm{J}}\\ \mathbf{v}_{\mathrm{o}}\end{bmatrix}, \tag{21}\] where the frequency dependent matrix \(\mathbf{K}^{\mathrm{R}}(\omega)\) describes the causal response of the linear elements, and matrix \(\mathbf{\nabla}^{\mathsf{T}}\) is a discrete divergence matrix for the Josephson currents. The explicit expression for the elements of the matrix \(\mathbf{K}^{\mathrm{R}}(\omega)\) can be obtained by applying a Fourier transformation to Kirchhoff's law and the input-output relations. The matrix \(\mathbf{K}^{\mathrm{R}}(\omega)\) has a block structure, where blocks correspond to the linear coupling between dynamical degrees of freedom and input and output fields: \[\mathbf{K}^{\mathrm{R}}(\omega)=\begin{bmatrix}\mathbf{K}^{\mathrm{R}}_{\varphi \varphi}\left(\omega\right)&\mathbf{K}^{\mathrm{R}}_{\varphi\mathrm{i}}\left( \omega\right)\\ \mathbf{K}^{\mathrm{R}}_{\mathrm{o}\varphi}\left(\omega\right)&\mathbf{K}^{\mathrm{R}} _{\mathrm{oi}}\left(\omega\right)\end{bmatrix}. \tag{22}\] The first block \(\mathbf{K}^{\mathrm{R}}_{\varphi\varphi}\left(\omega\right)\) describes the linear coupling between the nodes: \[K^{\mathrm{R}}_{\varphi\varphi;kk^{\prime}}\left(\omega\right) =\sum_{j}\frac{\imath\omega}{Z_{(kj)}}\delta_{kk^{\prime}}\\ -(-1)^{\delta_{kk^{\prime}}}\left(\omega^{2}C_{kk^{\prime}}+ \frac{\imath\omega}{R_{kk^{\prime}}}-\frac{1}{L_{kk^{\prime}}}\right). \tag{23}\] The block \(\mathbf{K}^{\mathrm{R}}_{\varphi\mathrm{i}}(\omega)\) describes the coupling to the driving field: \[K^{\mathrm{R}}_{\varphi\varphi;k(k^{\prime}j)}=\frac{2}{Z_{kj}}\delta_{kk^{ \prime}}. \tag{24}\] The latter blocks provide the expression of the output field through the node fluxes and input voltages: \[K^{\mathrm{R}}_{\mathrm{o}\varphi;(kj)k^{\prime}}(\omega) =-\imath\omega\delta_{kk^{\prime}}, \tag{25}\] \[K^{\mathrm{R}}_{\mathrm{oi};(kj)(k^{\prime}j^{\prime})}(\omega) =-\delta_{kk^{\prime}}\delta_{jj^{\prime}}.\] Note that the matrix \(\mathbf{K}^{\mathrm{R}}(\omega)\) is nothing but the retarded component of the self-energy coming from the dissipation channels, combined with the contributions from the linear reactive elements. ### Keldysh action Let us write down the Keldysh action for the considered circuit, assuming that all the dissipation channels are described by a uniform temperature \(T\). The full information on the dynamics of the circuit and its observables is stored in the corresponding generating functional [11]. This can be expressed through a path integral \[\mathcal{Z}[\mathbf{v}_{\mathrm{i}},\mathbf{\eta}_{\mathrm{o}}]=\int\mathcal{D}[\bm {\varphi}_{\mathrm{c}},\mathbf{\varphi}_{\mathrm{q}}]\exp\left\{\frac{\imath}{ \hbar}S_{\mathrm{circ}}[\mathbf{\phi}]\right\}, \tag{26}\] where \(\mathbf{\phi}=\begin{bmatrix}\mathbf{\varphi}_{\mathrm{c}}^{\mathsf{T}}&\mathbf{v}_{ \mathrm{i}}^{\mathsf{T}}&\mathbf{\varphi}_{\mathrm{q}}^{\mathsf{T}}&\mathbf{\eta}_{ \mathrm{o}}^{\mathsf{T}}\end{bmatrix}^{\mathsf{T}}\), \(\mathbf{\varphi}_{\mathrm{c}}\) and \(\mathbf{\varphi}_{\mathrm{q}}\) stand for classical and quantum trajectories of the dynamical degrees of freedom \(\mathbf{\varphi}\), and \(\mathbf{\eta}_{\mathrm{o}}\) is an auxiliary quantum field which is an argument of the generating functional. The latter plays the role of a counting field used to generate the statistics of observables [2; 11] The action of the circuit \(S_{\mathrm{circ}}[\mathbf{\phi}]\) is given by \[S_{\mathrm{circ}}[\mathbf{\phi}]=S_{\mathrm{J}}[\mathbf{\varphi}_{\mathrm{c}},\mathbf{ \varphi}_{\mathrm{q}}]+\frac{1}{4\pi}\int\limits_{-\infty}^{+\infty}\mathbf{\phi} ^{\dagger}(\omega)\mathbf{G}^{-1}(\omega)\mathbf{\phi}(\omega)\;\mathrm{d}\omega, \tag{27}\] Figure 1: Node \(k\) with flux \(\varphi_{k}\), connected to a transmission line and other nodes via capacitors, resistors, inductors, and Josephson junctions. The output voltage in the transmission line is determined by the input voltage and the voltage given by \(\dot{\varphi}_{k}\). where \[S_{\rm J}[\mathbf{\varphi}_{\rm c},\mathbf{\varphi}_{\rm q}]=-\int\limits_{-\infty}^{+ \infty}V_{\rm J}[\mathbf{\varphi}_{\rm c}(t),\mathbf{\varphi}_{\rm q}(t)]\;{\rm d}t \tag{28}\] and \[V_{\rm J}(\mathbf{\varphi}_{\rm c},\mathbf{\varphi}_{\rm q})\\ =2\sum\limits_{k<k^{\prime}}E_{{\rm J},kk^{\prime}}\sin\left[ \frac{2\pi\left(\psi_{{\rm c},kk^{\prime}}-\Phi_{kk^{\prime}}\right)}{\Phi_{0} }\right]\sin\left(\frac{\pi\psi_{{\rm q},kk^{\prime}}}{\Phi_{0}}\right). \tag{29}\] is the action for Josephson junctions and \(\psi_{{\rm c}/{\rm q},kk^{\prime}}=\varphi_{{\rm c}/{\rm q},k}-\varphi_{{\rm c }/{\rm q},k^{\prime}}\) are the classical and quantum variables for the flux difference across the junction between the nodes \(k\) and \(k^{\prime}\). The inverse bare Green's function assumes the form \[\mathbf{G}^{-1}(\omega)=\begin{bmatrix}\mathbf{0}&\mathbf{K}^{\rm A}(\omega)\\ \mathbf{K}^{\rm R}(\omega)&\mathbf{K}^{\rm K}(\omega)\end{bmatrix}. \tag{30}\] The retarded component is the same matrix \(\mathbf{K}^{\rm R}(\omega)\) introduced in the previous subsection, the advanced component is its hermitian conjugate \[\mathbf{K}^{\rm A}(\omega)=\left[\mathbf{K}^{\rm R}(\omega^{*})\right]^{\dagger}, \tag{31}\] and the Keldysh component is determined by the fluctuation dissipation theorem \[\mathbf{K}^{\rm K}(\omega)=\frac{1}{2}\coth\left(\frac{\hbar\omega}{2k_{\rm B}T} \right)\begin{bmatrix}\mathbf{K}^{\rm R}_{\varphi\varphi}(\omega)-\mathbf{K}^{\rm A}_{ \varphi\varphi}(\omega)&0\\ 0&0\end{bmatrix}. \tag{32}\] In addition to the classical driving field \(\mathbf{v}_{\rm i}\), we have introduced an auxiliary quantum source \(\mathbf{\eta}_{\rm o}\) which couples to the output field in the transmission lines. The saddle-point approximation with respect to the quantum degrees of freedom and the quantum source \(\mathbf{\eta}_{\rm o}\) yields the classical equations of motion (21). It is convenient to use vectors \(\mathbf{\psi}_{{\rm c}/{\rm q}}\) of all classical and quantum flux differences across all junctions as dynamical variables. By doing so, we move from a node description of the electric circuit to an element description. For simplicity, we assume that there are no Josephson junction loops since otherwise we need to pick a minimal set of linearly independent variables which fully parametrize the Josephson energy of the circuit. They should be complemented by dynamical variables \(\mathbf{\psi}^{\prime}_{{\rm c}/{\rm q}}\) such that a linear transformation of chosen real-valued matrices \(\mathbf{Q}_{\varphi\psi}\) and \(\mathbf{Q}_{\varphi\psi^{\prime}}\) \[\mathbf{\varphi}_{{\rm c}/{\rm q}}=\begin{bmatrix}\mathbf{Q}_{\varphi\psi}&\mathbf{Q}_{ \varphi\psi^{\prime}}\end{bmatrix}\begin{bmatrix}\mathbf{\psi}_{{\rm c}/{\rm q}} \\ \mathbf{\psi}^{\prime}_{{\rm c}/{\rm q}}\end{bmatrix} \tag{33}\] is reversible. Note that the action becomes quadratic with respect to the \(\mathbf{\psi}^{\prime}\) degrees of freedom, and hence they can be integrated out. By doing this, we shift the boundary between the system and environment and consider the linear subcircuit as part of the environment. The subcircuit of remaining nonlinear elements we refer to as core system. The effective action arising from the Gaussian integration over these degrees of freedom is given by \[S_{\rm core}[\mathbf{\phi}_{\rm core}] =S_{\rm J}[\mathbf{\psi}_{\rm c},\mathbf{\psi}_{\rm q}]\] \[-\frac{1}{4\pi}\int\limits_{-\infty}^{+\infty}\mathbf{\phi}^{\dagger }_{\rm core}(\omega)\mathbf{\Sigma}_{\rm core}(\omega)\mathbf{\phi}_{\rm core}(\omega), \tag{34}\] where \(\mathbf{\phi}_{\rm core}=\begin{bmatrix}\mathbf{\psi}^{\sf T}_{\rm c}&\mathbf{v}^{\sf T}_{ \rm i}&\mathbf{\psi}^{\sf T}_{\rm q}&\mathbf{\eta}^{\sf T}_{\rm o}\end{bmatrix}^{\sf T}\) and the self-energy becomes \[\mathbf{\Sigma}_{\rm core}(\omega)=\begin{bmatrix}\mathbf{0}&\mathbf{\Sigma }^{\rm A}_{\rm core}(\omega)\\ \mathbf{\Sigma}^{\rm R}_{\rm core}(\omega)&\mathbf{\Sigma}^{\rm K}_{\rm core}(\omega) \end{bmatrix}\\ =\tilde{\mathbf{G}}^{-1}_{\mathbf{\phi}\mathbf{\psi}^{\prime}}(\omega) \tilde{\mathbf{G}}_{\mathbf{\psi}^{\prime}\mathbf{\psi}^{\prime}}(\omega)\tilde{\mathbf{G}}^{- 1}_{\mathbf{\psi}^{\prime}\mathbf{\phi}}(\omega)-\tilde{\mathbf{G}}^{-1}_{\mathbf{\phi}\mathbf{ \phi}}(\omega), \tag{35}\] with \(\tilde{\mathbf{G}}^{-1}(\omega)=\mathbf{Q}^{\sf T}\mathbf{G}^{-1}(\omega)\mathbf{Q}\) and \[\mathbf{Q}=\begin{bmatrix}\mathbf{Q}_{\varphi\psi}&\mathbf{Q}_{\varphi\psi^{\prime}}&0&0& 0\\ 0&0&1&0&0\\ 0&0&0&\mathbf{Q}_{\varphi\psi}&\mathbf{Q}_{\varphi\psi^{\prime}}&0\\ 0&0&0&0&0&1\end{bmatrix}. \tag{36}\] ### Analytical properties of the self-energy The self-energy is a meromorphic matrix function of complex-valued frequency. We can classify its poles into two categories. The first category of poles corresponds to the frequencies \(\omega^{\rm d}_{k}\) where the matrix \(\tilde{\mathbf{G}}_{\mathbf{\psi}^{\prime}\mathbf{\psi}^{\prime}}(\omega)\) is singular. They have a purely classical origin and we refer to them as to dynamical poles. Since these poles come from a matrix inverse, the corresponding residues have unit rank. The other category of poles arise from the Bose-Einstein distribution function \(\coth[\hbar\omega/(2k_{\rm B}T)]\) and corresponds to bosonic Matsubara frequencies \(\imath\omega^{\rm m}_{n}=2\pi nk_{\rm B}T/\hbar\), \(n=\pm 1,\pm 2,\dots\). Since all dynamical variables and source fields are real-valued in the time domain, the self-energy has the symmetry property \(\mathbf{\Sigma}_{\rm core}(t)=\mathbf{\Sigma}^{\sf T}_{\rm core}(-t)\). Hence the action \(S_{\rm core}[\mathbf{\phi}_{\rm core}]\) can be expressed through a modified self-energy with a retarded causality. Due to this symmetry property \[\int\limits_{-\infty}^{+\infty}\mathbf{\psi}^{\dagger}(\omega)\mathbf{ \Sigma}^{\rm R}_{\rm core}(\omega)\mathbf{x}(\omega)\;{\rm d}\omega\\ =\int\limits_{-\infty}^{+\infty}\mathbf{x}^{\dagger}(\omega)\mathbf{ \Sigma}^{\rm A}_{\rm core}(\omega)\mathbf{y}(\omega)\;{\rm d}\omega, \tag{37}\] where \(\mathbf{x}(\omega)\) and \(\mathbf{y}(\omega)\) are Fourier images of arbitrary real-valued vector functions, the advanced component of self-energy can be replaced with the retarded one. We apply a Hilbert transformation to the Keldysh component \[\tilde{\mathbf{\Sigma}}_{\rm core}^{\rm K}(\omega)=\frac{\imath}{2\pi}\int\limits_{- \infty}^{+\infty}\frac{\mathbf{\Sigma}_{\rm core}^{\rm K}(\omega-\omega^{\prime})}{ \omega^{\prime}+\imath 0^{+}}\;{\rm d}\omega^{\prime}. \tag{38}\] The resulted \(\tilde{\mathbf{\Sigma}}_{\rm core}^{\rm K}(\omega)\) has a retarded causality. It means that all of the poles of \(\tilde{\mathbf{\Sigma}}_{\rm core}^{\rm K}(\omega)\) are located in the lower complex half-plane and coincide with the poles of \(\mathbf{\Sigma}_{\rm core}^{\rm K}(\omega)\). Then we construct a rectangular self-energy matrix \[\tilde{\mathbf{\Sigma}}_{\rm core}(\omega)=\left[\mathbf{\Sigma}_{\rm core}^{\rm R}( \omega)\;\;\tilde{\mathbf{\Sigma}}_{\rm core}^{\rm K}(\omega)\right] \tag{39}\] which satisfies \[\frac{1}{2}\int\limits_{-\infty}^{+\infty}\left[\mathbf{x}^{\dagger}( \omega)\;\;\mathbf{y}^{\dagger}(\omega)\right]\mathbf{\Sigma}_{\rm core}(\omega) \begin{bmatrix}\mathbf{x}(\omega)\\ \mathbf{y}(\omega)\end{bmatrix}\;{\rm d}\omega\\ =\int\limits_{-\infty}^{+\infty}\mathbf{y}^{\dagger}(\omega)\tilde{\mathbf{ \Sigma}}_{\rm core}(\omega)\begin{bmatrix}\mathbf{x}(\omega)\\ \mathbf{y}(\omega)\end{bmatrix}\;{\rm d}\omega. \tag{40}\] A meromorphic function allows the Mittag-Leffler expansion which has the following form for the self-energy: \[\tilde{\mathbf{\Sigma}}_{\rm core}(\omega)=\mathbf{P}(\omega)+\sum\limits_{k}\frac{ \mathbf{R}_{k}^{\rm d}}{\omega-\omega_{k}^{\rm d}}+\sum\limits_{n}\frac{\left[0\; \;\mathbf{R}_{n}^{\rm m}\right]}{\omega+\imath\omega_{n}^{\rm m}}, \tag{41}\] where \(\mathbf{P}(\omega)=\mathbf{P}^{(0)}+\mathbf{P}^{(1)}\omega+\mathbf{P}^{(2)}\omega^{2}\) is a polynomial matrix, \(\mathbf{R}_{k}^{\rm d}\) is a unit rank matrix residue at the dynamical pole \(k\), and \(\mathbf{R}_{n}^{\rm m}\) is a matrix residue at the Matsubara pole \(n\) which contributes only to the Keldysh component of the self-energy. In connection to the above expansion, we note that a sharp cut-off of the Matsubara series gives a quite poor approximation for the cotangent function present in the Bose-Einstein distribution function. In practice it is more efficient to use a rational approximation in the form \[\coth\left(\frac{\hbar\omega}{2k_{\rm B}T}\right)\approx\omega\tau+\frac{2k_{ \rm B}T}{\hbar\omega}+\sum\limits_{n=1}^{N}\frac{2\omega r_{n}^{\rm gm}}{ \omega^{2}+(\omega_{n}^{\rm gm})^{2}}, \tag{42}\] where \(\omega_{n}^{\rm gm}\) are the poles of rational approximations, \(r_{n}^{\rm gm}\) are corresponding residues, and \(\tau\) is a constant parameter. Such an approximation can be obtained, e.g., with the symmetrized version of AAA algorithm [47; 86; 75] which turns out to be very accurate in a finite range of frequencies even with very few poles taken into account. We refer to the frequencies \(\omega_{n}^{\rm gm}\) as generalized Matsubara frequencies. Finally, the action for the circuit reads as \[S_{\rm core}[\mathbf{\phi}_{\rm core}]=S_{\rm J}[\mathbf{\psi}_{\rm c}, \mathbf{\psi}_{\rm q}]\\ +\int\limits_{-\infty}^{+\infty}\sum\limits_{jj^{\prime}=0}^{1} \frac{{\rm d}^{j}\mathbf{\phi}_{\rm core,q}^{\sf T}(t)}{{\rm d}t^{j}}\mathbf{A}^{(jj^ {\prime})}\frac{{\rm d}^{j^{\prime}}\mathbf{\phi}_{\rm core}(t)}{{\rm d}t^{j^{ \prime}}}\;{\rm d}t\\ -\frac{1}{2\pi}\int\limits_{-\infty}^{+\infty}\sum\limits_{k}\bm {\phi}_{\rm core,q}^{\dagger}(\omega)\frac{\mathbf{R}_{k}^{\rm d}}{\omega-\omega_ {k}^{\rm d}}\mathbf{\phi}_{\rm core}\;{\rm d}\omega\\ -\frac{1}{2\pi}\int\limits_{-\infty}^{+\infty}\sum\limits_{n}\bm {\phi}_{\rm core,q}^{\dagger}(\omega)\frac{\mathbf{R}_{n}^{\rm gm}}{\omega+i\omega _{n}^{\rm gm}}\mathbf{\phi}_{\rm core,q}(\omega)\;{\rm d}\omega, \tag{43}\] where the influence functional in Eq. (34) has been replaced by the three terms following \(S_{\rm J}[\mathbf{\psi}_{\rm c},\mathbf{\psi}_{\rm q}]\). Here \[\mathbf{A}^{(00)}=-\mathbf{P}^{(0)},\] \[\mathbf{A}^{(01)}-\mathbf{A}^{(10)}=-\imath\mathbf{P}^{(1)}, \tag{44}\] \[\mathbf{A}^{(11)}=-\mathbf{P}^{(2)},\] sum \(\mathbf{A}^{(01)}+\mathbf{A}^{(10)}\) can be arbitrary, and \(\mathbf{R}_{n}^{\rm gm}\) are the matrix residues at generalized Matsubara poles. We have denoted the quantum components of the vector \(\mathbf{\phi}_{\rm core}\) as \(\mathbf{\phi}_{\rm core,q}=\left[\mathbf{\psi}_{\rm q}^{\sf T}\;\;\mathbf{\eta}_{\rm o}^{ \sf T}\right]^{\sf T}\). ### Auxiliary modes We treat each non-local term in the action separately introducing auxiliary bosonic degrees of freedom via the Hubbard-Stratonovich transformation [87; 47; 88] as \[\exp\left[-\frac{\imath}{2\pi\hbar}\int\limits_{-\infty}^{+ \infty}\mathbf{y}^{\dagger}(\omega)\frac{\mathbf{R}}{\omega-\omega_{*}}\mathbf{x}(\omega) \;{\rm d}\omega\right]\\ =\int{\cal D}[\mathbf{a},\mathbf{a}^{\dagger}]\exp\left\{\frac{\imath}{ \hbar}\int\limits_{-\infty}^{+\infty}\left[\imath\hbar\mathbf{a}^{\dagger}(t)\dot {\mathbf{a}}(t)-\hbar\omega_{*}|\mathbf{a}(t)|^{2}\right.\right.\\ \left.+\mathbf{a}^{\dagger}(t)\mathbf{V}^{\sf T}\mathbf{x}(t)+\mathbf{y}^{\sf T}(t )\mathbf{U}\mathbf{a}(t)\right]\;{\rm d}t\right\}, \tag{45}\] where \(\mathbf{x}(t)\) and \(\mathbf{y}(t)\) are arbitrary real valued-vector functions, \(\mathbf{R}\) is an arbitrary matrix which is factorized as \(\mathbf{R}=\hbar\mathbf{U}\mathbf{V}^{\sf T}\), and \(\mathbf{a}\) is an auxiliary complex-valued vector degree of freedom. Such a factorization is favorable since the matrix \(\mathbf{R}\) is not full-rank, and can be obtained, e.g., using the singular value decomposition. The dimension of the auxiliary vector degree of freedom \(\mathbf{a}\) is given by the rank of the matrix \(\mathbf{R}\). Consequently, we obtain a time-local action of the QD-MES equation as \[S_{\rm QD\text{-MES}}\left[\mathbf{\phi}_{\rm core},\mathbf{a},\mathbf{b}\right]\\ =\int\limits_{-\infty}^{+\infty}{\cal L}_{\rm QD\text{-MES}} \left(\dot{\mathbf{\phi}}_{\rm core},\dot{\mathbf{a}},\dot{\mathbf{b}},\mathbf{\phi}_{\rm core}, \mathbf{a},\mathbf{b}\right)\;{\rm d}t, \tag{46}\] where the QD-MESS Lagrangian is given by \[\mathcal{L}_{\text{QD-MESS}}\left(\dot{\mathbf{\phi}}_{\text{core}}, \dot{\mathbf{a}},\dot{\mathbf{b}},\mathbf{\phi}_{\text{core}},\mathbf{a},\mathbf{b}\right)\\ =-V_{\text{J}}(\mathbf{\psi}_{\text{c}},\mathbf{\psi}_{\text{q}})+\sum_{jj^ {\prime}=0}^{1}\frac{\text{d}^{j}\mathbf{\phi}_{\text{core},\text{q}}^{\text{T}}}{ \text{d}t^{j}}\mathbf{A}^{(jj^{\prime})}\frac{\text{d}^{j^{\prime}}\mathbf{\phi}_{ \text{core}}}{\text{d}t^{j^{\prime}}}\\ +\sum_{k}\left(\hbar a_{k}^{*}\dot{a}_{k}-\hbar\omega_{k}^{\text{ d}}|a_{k}|^{2}\right.\\ \left.+\mathbf{\phi}_{\text{core},\text{q}}^{\text{T}}\mathbf{u}_{k}^{ \text{d}}a_{k}+a_{k}^{*}\mathbf{v}_{k}^{\text{dT}}\mathbf{\phi}_{\text{core}}\right)\\ +\sum_{n}\left(\hbar\mathbf{b}_{n}^{\dagger}\dot{\mathbf{b}}_{n}+u\omega_ {n}^{\text{gm}}|\mathbf{b}_{n}|^{2}\right.\\ \left.+\mathbf{\phi}_{\text{core},\text{q}}^{\text{T}}\mathbf{b}_{n}^{ \text{gm}}\mathbf{b}_{n}+\mathbf{b}_{n}^{\dagger}\mathbf{V}_{n}^{\text{gm}}\mathbf{\phi}_{ \text{core},\text{q}}\right), \tag{47}\] where \(\mathbf{R}_{k}^{\text{d}}=\hbar\mathbf{u}_{k}^{\text{d}}\mathbf{v}_{k}^{\text{dT}}\), \(\mathbf{R}_{\text{n}}^{\text{gm}}=\hbar\mathbf{U}_{n}^{\text{gm}}\mathbf{V}_{n}^{\text{ gm}}\), \(a_{k}\) and \(\mathbf{b}_{n}\) are auxiliary degrees of freedom corresponding to dynamical and Matsubara poles of the self-energy, and all the dynamical variables and sources are given in the time domain. In this Lagrangian all the auxiliary bosonic modes are coupled to the flux degrees of freedom \(\mathbf{\psi}_{\text{c}}\) and \(\mathbf{\psi}_{\text{q}}\), hence we refer to the QD-MESS equation produced by this Lagrangian as flux QD-MESS. However, this form is not unique. Instead of applying a Hubbard-Stratonovich transformation as in Eq. (45), we may expand in the integrand \[\frac{1}{\omega-\omega_{*}}=-\frac{1}{\omega_{*}}+\frac{\omega}{ \omega_{*}(\omega-\omega_{*})}=\\ -\frac{1}{\omega_{*}}-\frac{\omega}{\omega_{*}^{2}}+\frac{\omega^ {2}}{\omega_{*}^{2}(\omega-\omega_{*})}. \tag{48}\] Multiplication by \(\omega\) of a Fourier image corresponds to applying the \(\imath\partial_{t}\) operator in time domain. Hence we can express the time non-local term as \[\exp\left[-\frac{\imath}{2\pi\hbar}\int\limits_{-\infty}^{+\infty }\mathbf{y}^{\dagger}(\omega)\frac{\mathbf{R}}{\omega-\omega_{*}}\mathbf{x}(\omega)\; \text{d}\omega\right]\\ =\int\mathcal{D}[\mathbf{a},\mathbf{a}^{\dagger}]\exp\left\{\frac{1}{ \hbar}\int\limits_{-\infty}^{+\infty}[\imath\hbar\mathbf{a}^{\dagger}(t)\dot{\mathbf{ a}}(t)\right.\\ \left.+\hbar\omega_{*}|\mathbf{a}(t)|^{2}+\mathbf{y}^{\mathsf{T}}(t)\mathbf{R }\left(\frac{1}{\omega_{*}}+\frac{i\partial_{t}}{\omega_{*}^{2}}\right)\mathbf{x}( t)\right.\\ \left.+\mathbf{a}^{\dagger}(t)\mathbf{V}^{\mathsf{T}}\dot{\mathbf{x}}(t)+\dot {\mathbf{y}}^{\mathsf{T}}(t)\mathbf{U}\mathbf{a}(t)\right]\;\text{d}t\right\}. \tag{49}\] This way, with a proper renormalization of the time-local terms in the action (43), one can obtain a QD-MESS Lagrangian where the auxiliary degrees of freedom couple to the voltages \(\dot{\mathbf{\psi}}_{\text{c}}\) and \(\dot{\mathbf{\psi}}_{\text{q}}\) for some of the modes (see Appendix A for details). This formulation may be beneficial for practical calculations, but for the sake of simplicity of the presentation, we proceed with the flux QD-MESS Lagrangian. ### QD-MESS equation To derive the quantum equation which corresponds to the Lagrangian (47), we proceed with a standard transition to the Schrodinger picture. First, we introduce the canonical conjugate variables to the fluxes, i.e., the charges \[\mathbf{q}_{\text{c}}=\frac{\partial\mathcal{L}_{\text{QD-MESS}}}{ \partial\dot{\mathbf{\psi}}_{\text{q}}^{\mathsf{T}}}, \tag{50}\] \[\mathbf{q}_{\text{q}}^{\mathsf{T}}=\frac{\partial\mathcal{L}_{\text{ QD-MESS}}}{\partial\dot{\mathbf{\psi}}_{\text{c}}}\] Then, we apply the Legendre transformation associated with Eq. (50) and obtain the Liouvillian \[\mathfrak{L}=\mathbf{q}_{\text{q}}^{\mathsf{T}}\dot{\mathbf{\psi}}_{\text{c}}+\dot{\bm {\psi}}_{\text{q}}^{\mathsf{T}}\mathbf{q}_{\text{c}}+\imath\hbar\sum_{k}a_{k}^{*} \dot{a}_{k}+\imath\hbar\sum_{n}\mathbf{b}_{n}^{\dagger}\dot{\mathbf{b}}_{n}-\mathcal{ L}_{\text{QD-MESS}}. \tag{51}\] This Liouvillian, when written as a function of canonical variables, can be interpreted as a quantum Liouvillian by imposing the canonical commutation relations \[\begin{split}\left[\ddot{\psi}_{\text{c},\alpha},\ddot{q}_{\text{ q},\alpha^{\prime}}\right]=\imath\hbar\delta_{\alpha\alpha^{\prime}},& \leavevmode\nobreak\ \left[\ddot{\psi}_{\text{q},\alpha},\ddot{q}_{\text{c},\alpha^{\prime}} \right]=\imath\hbar\delta_{\alpha\alpha^{\prime}},\\ \left[\ddot{a}_{k},\dot{\mathbf{\epsilon}}_{k^{\prime}}^{\dagger} \right]=\delta_{kk^{\prime}},&\leavevmode\nobreak\ \left[\ddot{b}_{n,\alpha},\dot{\mathbf{ \epsilon}}_{n^{\prime},\alpha^{\prime}}^{\dagger}\right]=\delta_{nn^{\prime}} \delta_{\alpha\alpha^{\prime}},\end{split} \tag{52}\] with all other single-operator commutators vanishing. Finally, the quantum Liouvillian, which governs the dynamics of the superconducting quantum circuit, reads \[\ddot{\mathfrak{L}}=\dot{\mathbf{x}}_{\text{q}}^{\mathsf{T}}\mathbf{L}_{ \text{quad}}\dot{\mathbf{x}}+V_{\text{J}}\left(\dot{\mathbf{\psi}}_{\text{c}},\ddot{\mathbf{ \psi}}_{\text{q}}\right)\\ +\sum_{j}\left(\hbar\omega_{\text{q}}^{\text{d}}\dot{\mathbf{ \epsilon}}_{k}^{\dagger}\dot{\mathbf{\epsilon}}_{k}-\dot{\mathbf{\epsilon}}_{k}^{\dagger} \mathbf{v}_{k}^{\text{dT}}\ddot{\mathbf{\phi}}_{\text{core}}-\dot{\mathbf{\phi}}_{\text{ core},\text{q}}^{\mathsf{T}}\mathbf{u}_{k}^{\text{d}}\dot{\mathbf{\epsilon}}_{k}\right)\\ -\sum_{n}\left(\hbar\omega_{\text{m}}^{\text{sm}}\dot{\mathbf{b}}_{n}^ {\dagger}\ddot{\mathbf{b}}_{n}+\ddot{\mathbf{b}}_{n}\mathbf{V}_{n}^{\text{gm}}\ddot{\mathbf{ \phi}}_{\text{core},\text{q}}+\ddot{\mathbf{\phi}}_{\text{core},\text{q}}^{\mathsf{T}} \mathbf{U}_{n}^{\text{gm}}\ddot{\mathbf{b}}_{n}\right),\end{split} \tag{53}\] where \(\mathbf{L}_{\text{quad}}\) is a constant matrix, which describes the coupling between the classical and the quantum charge and flux operators and their coupling to the sources, \(\dot{\mathbf{x}}=\left[\dot{\mathbf{q}}_{\text{c}}^{\mathsf{T}}\ \dot{\mathbf{\psi}}_{\text{c}}^{\mathsf{T}}\ \dot{\mathbf{\psi}}_{\text{t}}^{\mathsf{T}}\ \mathbf{v}_{\text{t}}^{\mathsf{T}}\ \mathbf{u}_{\text{q}}^{\mathsf{T}}\ \dot{\mathbf{q}}_{\text{q}}^{\mathsf{T}}\ \dot{\mathbf{\psi}}_{\text{q}}^{\mathsf{T}}\ \mathbf{\eta}_{\text{o}}^{\mathsf{T}}\ \mathbf{\eta}_{\text{o}}^{\mathsf{T}}\ \mathbf{\eta}_{\text{o}}^{\mathsf{T}}\right]^{\mathsf{T}}\) is a vector of all charges, fluxes, and sources, while \(\dot{\mathbf{x}}_{\text{q}}=\left[\dot{\mathbf{q}}_{\text{q}}^{\mathsf{T}}\ \dot{\mathbf{\psi}}_{\text{q}}^{\mathsf{T}}\ \ \dot{\mathbf{\eta}}_{\text{o}}^{\mathsf{T}}\ \mathbf{\eta}_{\text{o}}^{\mathsf{T}}\ \mathbf{\eta}_{\text{o}}^{\mathsf{T}}\right]^{\mathsf{T}}\) is its quantum component, \(\ddot{\mathbf{\phi}}_{\text{core}}=\left[\dot{\mathbf{\psi}}_{\text{c}}^{\mathsf{T}}\ \mathbf{v}_{\text{t}}^{\mathsf{T}}\ \ \dot{\mathbf{\psi}}_{\text{t}}^{\mathsf{T}}\ \ \dot{\bm The QD-MESS equation, which governs the dynamics of the dissipative quantum-electric circuit, reads as \[\imath\hbar\frac{\mathrm{d}}{\mathrm{d}t}|W\rangle=\left.\tilde{\mathfrak{L}} \right|_{\boldsymbol{\eta}_{\mathrm{o}}=0}|W\rangle, \tag{54}\] where \(|W\rangle\) is a multidimensional vector in an extended Liouville space, which completely describes a state of both system and its environment. Equation (54) has a conservation law \[\langle 1|\frac{\mathrm{d}}{\mathrm{d}t}|W\rangle=0, \tag{55}\] where \(\langle 1|\) is a left zero eigenvector of the quantum operators of all dynamical variables \(\dot{\boldsymbol{\psi}}_{\mathrm{q}}\) and \(\ddot{\boldsymbol{q}}_{\mathrm{q}}\), and of the creation operators \(\ddot{\boldsymbol{a}}^{\dagger}\) and \(\ddot{\boldsymbol{b}}^{\dagger}\) of all the auxiliary modes, i.e. it corresponds to zero occupation number of each of these modes. Consequently, the proper ordering of the operators in the quantum Liouvillian (53) results in \[\langle 1|\;\tilde{\mathfrak{L}}\rvert_{\boldsymbol{\eta}_{\mathrm{o}}=0}=0. \tag{56}\] The product \(\langle 1|W\rangle\) is equivalent to the trace operation of the full system-environment density operator, hence Eq. (55) is equivalent to the conservation of probability for the standard Liouville-von Neumann equation. Thus, it is natural to choose a normalization of both \(|W\rangle\) and \(\langle 1|\) such that \(\langle 1|W\rangle=1\). The thermal equilibrium state corresponds to a right zero eigenvector of the Liouvillian in the absence of classical and quantum sources fields \[\left.\tilde{\mathfrak{L}}\right|_{\boldsymbol{\varphi}_{\mathrm{i}}=0,\; \boldsymbol{\eta}_{\mathrm{o}}=0}|W_{\mathrm{eq}}\rangle=0. \tag{57}\] The physical meaning of the components of \(|W\rangle\) may be non-trivial to identify, especially given that the choice of the degrees of freedom included in the system and in the environment may be arbitrary. This raises two issues: the preparation of the initial state for the simulations and the calculation of the probability distribution of observables. The first issue can be overcome by an explicit simulation of the initialization pulse, assuming the initial state to be the equilibrium state \(|W_{\mathrm{eq}}\rangle\). Below, we introduce a method how to calculate mutual quasiprobability distributions of various observables. ### Observables Let us focus on a case where we apply an input field and measure the output field within a time interval \((t^{\mathrm{i}},t^{\mathrm{f}})\). Before \(t^{\mathrm{i}}\), the open system is in the thermal state \(|W_{\mathrm{eq}}\rangle\). The result of a measurement is a time trace of the output field \(\boldsymbol{v}_{\mathrm{o}}(t)\) sampled from a stochastic process. The observables \(\boldsymbol{o}\) we are interested in are weighted averages of these fields over the above-mentioned time interval, and hence given by \[\boldsymbol{o}=\int\limits_{t^{\mathrm{i}}}^{t^{\mathrm{f}}}\boldsymbol{F}(t) \boldsymbol{v}_{\mathrm{o}}(t)\;\mathrm{d}t, \tag{58}\] where \(\boldsymbol{F}(t)\) is a matrix-valued weight function. Since the output field is stochastic, these observables \(\boldsymbol{o}\) are random variables. Their mutual quasiprobability distribution can be formally found as [11] \[p(\boldsymbol{o})=\left.\left\{\delta\left[-\imath\hbar\int\limits_{t^{ \mathrm{i}}}^{t^{\mathrm{f}}}\boldsymbol{F}(t)\frac{\delta}{\delta\boldsymbol{ \eta}_{\mathrm{o}}^{\mathrm{T}}(t)}\;\mathrm{d}t-\boldsymbol{o}\right]\mathcal{ Z}[\boldsymbol{v}_{\mathrm{i}},\boldsymbol{\eta}_{\mathrm{o}}]\right\}\right|_{ \boldsymbol{\eta}_{\mathrm{o}}=0}, \tag{59}\] where the argument of the delta function is a linear functional acting on the generating functional \(\mathcal{Z}[\boldsymbol{v}_{\mathrm{i}},\boldsymbol{\eta}_{\mathrm{o}}]\). The latter itself can be calculated by solving the QD-MESS equation (54) with a non-zero quantum source \[\mathcal{Z}[\boldsymbol{v}_{\mathrm{i}},\boldsymbol{\eta}_{\mathrm{o}}]= \langle 1|\mathbb{T}\exp\left[-\frac{\imath}{\hbar}\int\limits_{t^{\mathrm{ i}}}^{t^{\mathrm{f}}}\tilde{\mathfrak{L}}(t)\;\mathrm{d}t\right]|W_{\mathrm{eq}}\rangle, \tag{60}\] where \(\mathbb{T}\) is a time-ordering symbol. By employing a Fourier transformation of a multidimensional delta function, we find that the quasiprobability density can be calculated as \[p(\boldsymbol{o})=\int\mathrm{e}^{\imath\boldsymbol{s}^{\mathrm{T}}\boldsymbol {o}}\mathcal{Z}[\boldsymbol{v}_{\mathrm{i}},\hbar\boldsymbol{F}^{\mathrm{T}}( t)\boldsymbol{s}]\;\frac{\mathrm{d}\boldsymbol{s}}{(2\pi)^{\mathrm{dim}( \boldsymbol{o})}}. \tag{61}\] Since \(\mathcal{Z}[\boldsymbol{v}_{\mathrm{i}},0]=1\) for an arbitrary drive field \(\boldsymbol{v}_{\mathrm{i}}\), the quasiprobability density \(p(\boldsymbol{o})\) integrates to unity. However, it is not clear at this point whether it satisfies the non-negativity requirement for a probability density. For example, the Wigner quasiprobability function, which may have negative values, falls into this class of distribution functions if one chooses \(\boldsymbol{q}_{\mathrm{c}}\) and \(\dot{\boldsymbol{\psi}}_{\mathrm{c}}\) as observables instead of the time-averaged output fields. ## IV Weak-coupling regime Even though it is possible to write a time-local equation of motion which fully describes the dynamics of the dissipative quantum circuit, solving this equation remains challenging due to the exponentially high dimension of the state vector \(|W\rangle\). This issue is pronounced in the ultra-low temperature domain, where the generalized Matsubara frequencies accumulate to zero. Curiously, the QD-MESS equation written explicitly in Fock basis of the auxiliary modes, is identical to HEOM [75; 47] which can be solved using tensor network algorithms [89; 90]. Here, we suggest a systematic approach to eliminate weakly coupled auxiliary modes, which results in a substantial reduction of dimensionality required for the Liouville space of linear degrees of freedom. The general form of the QD-MESS Liouvillian reads as \[\begin{split}&\tilde{\mathfrak{L}}(t)=\tilde{\mathfrak{L}}^{(0)}(t)+ \\ &\sum_{k}\left\{\left[\dot{x}_{k}^{\prime}+\alpha_{k}^{\prime}(t )\right]\!\tilde{a}_{k}+\left[\dot{x}_{k}^{\prime\prime}+\alpha_{k}^{\prime \prime}(t)\right]\!\tilde{a}_{k}^{\dagger}+\hbar\omega_{k}\tilde{a}_{k}^{ \dagger}\tilde{a}_{k}\right\},\end{split} \tag{62}\] where we denote the Liouvillian of all non-linear degrees of freedom as \(\hat{\mathfrak{X}}^{(0)}(t)\), unify the dynamical and generalized Matsubara modes, denote all the coupling operators \(\check{x}^{\prime}_{k}\) and \(\check{x}^{\prime\prime}_{k}\), and introduce the drive variables \(\alpha^{\prime}_{k}(t)\) and \(\alpha^{\prime\prime}_{k}(t)\) which may originate from both, classical \(\mathbf{v}_{i}\) and quantum \(\mathbf{\eta}_{o}\) sources. Note that this form is not unique since one can incorporate some of the auxiliary modes to \(\hat{\mathfrak{X}}^{(0)}(t)\). First, we eliminate time-dependent source terms by carrying out a shift transformation \[|W\rangle=e^{\check{T}(t)}|W_{\rm shift}\rangle, \tag{63}\] where \(\check{T}(t)=\sum_{k}\left[\beta^{\prime}_{k}(t)\check{a}_{k}+\beta^{\prime \prime}_{k}(t)\check{a}^{\dagger}_{k}\right]\) and the time-dependent shifts \(\beta^{\prime}_{k}(t)\) and \(\beta^{\prime\prime}_{k}(t)\) are required to satisfy the differential equations \[\begin{split}& u\hbar\dot{\beta}^{\prime}_{k}(t)=-\hbar\omega_{k} \beta^{\prime}_{k}(t)+\alpha^{\prime}_{k}(t),\\ & u\hbar\dot{\beta}^{\prime\prime}_{k}(t)=\hbar\omega_{k}\beta^ {\prime\prime}_{k}(t)+\alpha^{\prime\prime}_{k}(t).\end{split} \tag{64}\] Thus, the dynamics of \(|W_{\rm shift}\rangle\) is governed by the following Liouvillian \[\begin{split}\check{\mathfrak{X}}_{\rm shift}(t)=\check{ \mathfrak{X}}^{(0)}_{\rm shift}(t)+\\ &\sum_{k}\left(\check{x}^{\prime}_{k}\check{a}_{k}+\check{x}^{ \prime\prime}_{k}\check{a}^{\dagger}_{k}+\hbar\omega_{k}\check{a}^{\dagger}_{ k}\check{a}_{k}\right),\end{split} \tag{65}\] \[\begin{split}\check{\mathfrak{X}}^{(0)}_{\rm shift}(t)=\check{ \mathfrak{X}}^{(0)}(t)+\sum_{k}\left[\check{x}^{\prime}_{k}(t)\beta^{\prime \prime}_{k}(t)-\check{x}^{\prime\prime}_{k}(t)\beta^{\prime}_{k}(t)\right]\\ +\frac{1}{2}\sum_{k}\left[\alpha^{\prime}_{k}(t)\beta^{\prime \prime}_{k}(t)-\alpha^{\prime\prime}_{k}(t)\beta^{\prime}_{k}(t)\right].\end{split} \tag{66}\] Up to this point, we have not done any approximations to arrive at the above Liouvillian since the shift transformation is exact. Next, we proceed with eliminating the transverse coupling to the auxiliary modes by a dynamical Schrieffer-Wolf transformation [78, 79] \[|W_{\rm shift}\rangle=e^{\check{S}(t)}|W_{\rm SW}\rangle, \tag{67}\] where \(\check{S}(t)=\sum_{k}\left[\check{p}^{\prime}_{k}(t)\check{a}_{k}+\check{p} ^{\prime\prime}_{k}(t)\check{a}^{\dagger}_{k}\right]\) and operators \(\check{p}^{\prime}_{k}(t)\) and \(\check{p}^{\prime\prime}_{k}(t)\) act in the space of the Liouvillian \(\check{\mathfrak{X}}^{(0)}_{\rm shift}(t)\). We emphasize that this Schrieffer-Wolf transformation is performed at the Liouvillian level and is non-unitary. The first-order expansion of the exponents with respect to operators \(\check{p}^{\prime}_{k}(t)\) and \(\check{p}^{\prime\prime}_{k}(t)\) reveals that the transverse coupling may be eliminated if \(\check{p}^{\prime}_{k}(t)\) and \(\check{p}^{\prime\prime}_{k}(t)\) satisfy the following equations: \[\begin{split}& u\frac{\mathrm{d}\check{p}^{\prime}_{k}(t)}{ \mathrm{d}t}=\left[\check{\mathfrak{X}}^{(0)}_{\rm shift}(t),\check{p}^{\prime }_{k}(t)\right]-\hbar\omega_{k}\check{p}^{\prime}_{k}(t)+\check{x}^{\prime}_{ k},\\ & u\frac{\mathrm{d}\check{p}^{\prime\prime}_{k}(t)}{\mathrm{d}t} =\left[\check{\mathfrak{X}}^{(0)}_{\rm shift}(t),\check{p}^{\prime\prime}_{k}(t )\right]+\hbar\omega_{k}\check{p}^{\prime\prime}_{k}(t)+\check{x}^{\prime \prime}_{k}.\end{split} \tag{68}\] The second-order expansion yields the Schrieffer-Wolf correction to the Liouvillian \[\begin{split}\check{\mathfrak{X}}_{\rm SW}\approx\check{ \mathfrak{X}}^{(0)}_{\rm shift}(t)+\\ &\sum_{k}\left\{\hbar\omega_{k}\check{a}^{\dagger}_{k}\check{a} _{k}+\frac{1}{2}\left[\check{x}^{\prime}_{k}\check{a}_{k}+\check{x}^{\prime \prime}_{k}\check{a}^{\dagger}_{k},\check{S}(t)\right]\right\}.\end{split} \tag{69}\] Next, we apply a partial rotating-wave approximation for the auxiliary degrees of freedom, keeping only the diagonal terms \(\check{a}^{\dagger}_{k}\check{a}_{k}\). This results in \[\begin{split}\check{\mathfrak{X}}_{\rm SW}(t)\approx\check{ \mathfrak{X}}^{(0)}_{\rm SW}(t)+\sum_{k}\left\{\hbar\omega_{k}+[\check{x}^{ \prime}_{k},\check{p}^{\prime\prime}_{k}(t)]\right\}\hat{a}^{\dagger}_{k} \check{a}_{k},\\ \check{\mathfrak{X}}^{(0)}_{\rm SW}(t)=\check{\mathfrak{X}}^{(0)} _{\rm shift}(t)+\frac{1}{2}\sum_{k}\left[\check{x}^{\prime}_{k}\check{p}^{ \prime\prime}_{k}(t)-\check{p}^{\prime}_{k}(t)\check{x}^{\prime\prime}_{k} \right].\end{split} \tag{70}\] The last step is valid only if there are no pairs \((k,k^{\prime})\) for which \(\omega_{k}\pm\omega_{k^{\prime}}\) is small. Such a delicate case appears, for example, for two weakly decaying dynamical modes with frequencies \(\omega_{k}=-\omega^{*}_{k^{\prime}}\). These pairs should be treated carefully: One may either keep the off-diagonal terms \(\check{a}_{k}\check{a}_{k^{\prime}}\) and \(\check{a}^{\dagger}_{k}\check{a}^{\dagger}_{k^{\prime}}\) in the transformed Liouvillian, or take these modes into account exactly by incorporating them into \(\check{\mathfrak{X}}^{(0)}(t)\) in the original Liouvillian (62). The dynamical Schrieffer-Wolf transformation is perturbative, hence it can be applied only to systems weakly coupled to the auxiliary modes. This perturbative treatment is similar to standard Born-Markov approximations widely used for open quantum systems in a weak coupling regime. However, the presented approach provides a systematic way to derive dissipative terms in the Liouvillian in case of complicated circuits, for which employing the standard IO theory [52, 53, 54] becomes hindered. On top of that, resonances can be treated accurately by incorporating weakly decaying auxiliary modes into the Liouvillian \(\check{\mathfrak{X}}^{(0)}(t)\). In addition to the weak coupling requirement, one of the most restrictive limitations of this approach is that it can be used only for sufficiently small systems, for which Eq. (68) can be solved exactly. This makes it challenging for circuits with multiple Josephson junctions or many weakly decaying dynamical modes. However, accurate solutions of even small circuits may be of great interest, for example, from the point of view of single-qubit operation fidelities. ## V Dispersive readout of a transmon qubit Here, we demonstrate our theory in an experimentally feasible scenario by applying it to a model of the dis \begin{table} \begin{tabular}{c|c|c|c|c|c|c} \(C_{\rm r}\) & \(E_{3}/\hbar\) & \(C_{\rm r}\) & \(L_{\rm r}\) & \(C_{\rm c}\) & \(C_{\rm o}\) & \(Z\) \\ \hline 80 fF & 12.5 GHz & 570 fF & 0.7 nH & 40 fF & 15 fF & 50 \(\Omega\) \\ \end{tabular} \end{table} Table 1: Parameters of the circuit shown in Fig. 2 used for numerical calculations. persive readout of a superconducting qubit [51]. As an example system shown in Fig. 2, we employ a typical transmon qubit [91] which is capacitively coupled to an off-resonant readout resonator. The resonator is capacitively coupled to two semi-infinite transmission lines, one of which is used to drive the readout mode and the other to measure the output field. No input is applied on the transmission line for the measurement and the output field to the drive line is disregarded. Kirchhoff's law for the circuit shown in Fig. 2 yields \[-\left(C_{\mathrm{t}}+C_{\mathrm{c}}\right)\tilde{\varphi}_{\mathrm{ t}}+C_{\mathrm{c}}\tilde{\varphi}_{\mathrm{r}}-\frac{2\pi E_{\mathrm{J}}}{ \Phi_{0}}\sin\left(\frac{2\pi\varphi_{\mathrm{t}}}{\Phi_{0}}\right)=0, \tag{71}\] \[-\left(C_{\mathrm{r}}+C_{\mathrm{c}}+C_{\mathrm{o}}\right)\tilde {\varphi}_{\mathrm{r}}+C_{\mathrm{c}}\tilde{\varphi}_{\mathrm{t}}+C_{\mathrm{ o}}\tilde{\varphi}_{\mathrm{o}}-\frac{\varphi_{\mathrm{r}}}{L_{\mathrm{r}}}=0,\] \[-C_{\mathrm{o}}\left(\tilde{\varphi}_{\mathrm{o}}-\tilde{\varphi }_{\mathrm{r}}\right)-\frac{2}{Z}\left(\hat{\varphi}_{\mathrm{o}}-v_{\mathrm{ i}}\right)=0,\] \[v_{\mathrm{o}}=\hat{\varphi}_{\mathrm{o}}.\] The blocks of inverse Green's functions (30) for this circuit are given by \[\mathbf{K}^{\mathrm{R}}(\omega) =\begin{bmatrix}\omega^{2}\tilde{C}_{\mathrm{t}}&-\omega^{2}C_{ \mathrm{c}}&0&0\\ -\omega^{2}C_{\mathrm{c}}&\omega^{2}\tilde{C}_{\mathrm{r}}-L_{\mathrm{r}}^{-1 }&-\omega^{2}C_{\mathrm{o}}&0\\ 0&-\omega^{2}C_{\mathrm{o}}&\omega^{2}C_{\mathrm{o}}+2\imath\omega Z^{-1}&2Z^{ -1}\\ 0&0&-\imath\omega&0\end{bmatrix}, \tag{72}\] \[\mathbf{K}^{\mathrm{A}}(\omega) =\begin{bmatrix}\mathbf{K}^{\mathrm{R}}(\omega^{*})\end{bmatrix}^{ \dagger},\] \[\mathbf{K}^{\mathrm{K}}(\omega) =\begin{bmatrix}0&0&0&0\\ 0&0&0&0\\ 0&0&2\imath\omega Z^{-1}\coth\left(\frac{\hbar\omega}{2k_{\mathrm{B}}\Gamma} \right)&0\\ 0&0&0&0\end{bmatrix},\] where \(\tilde{C}_{\mathrm{t}}=C_{\mathrm{t}}+C_{\mathrm{c}}\) and \(\tilde{C}_{\mathrm{r}}=C_{\mathrm{r}}+C_{\mathrm{c}}+C_{\mathrm{o}}\). The dynamical degree of freedom of the core system is represented by transmon flux \(\psi=\varphi_{\mathrm{t}}\). The rest of the circuit can be described by three dynamical auxiliary modes, two of them have frequencies close to \(\pm 1/\sqrt{L_{\mathrm{r}}\tilde{C}_{\mathrm{r}}}\) with small negative imaginary parts and one with purely imaginary frequency around \(-2\imath/(ZC_{\mathrm{o}})\). For our calculations we use parameters of the circuit shown in Tab. 1. It appears to be more convenient to use coupling to the auxiliary modes through voltage \(\psi\) instead of flux coupling shown in Eq. (47). This leads to the coupling through the charge of the transmon island to the auxiliary modes in Liouvillian. More details can be found in Appendix A. Since the coupling is weak, we may apply the dynamical Schrieffer-Wolf approach developed in Sec. IV for all of the generalized Matsubara modes and the dynamical mode with imaginary frequency. We restrict the linear space of \(\tilde{\mathfrak{L}}^{(0)}(t)\) and the coupling operators in the Liouvillian (62) to the four lowest levels for the isolated transmon and six levels for the weakly dissipative auxiliary modes. See more technical details on the construction of the Liouvillian in Appendix B. ### Transmission coefficient As the first step we probe the spectrum of the device by calculating the transmission coefficient \(S_{21}(\omega)\). It is defined as \(\langle v_{\mathrm{o}}(\omega)\rangle=S_{21}(\omega)v_{\mathrm{i}}(\omega)\) for infinitesimal drive \(v_{\mathrm{i}}\), where the average is taken over the realizations of the output field. It can be evaluated as [11] \[S_{21}(\omega)=\left.-\imath\hbar\frac{\delta^{2}Z[v_{\mathrm{i}},\eta_{ \mathrm{o}}]}{\delta v_{\mathrm{i}}(\omega)\delta\eta_{\mathrm{o}}^{*}(\omega) }\right|_{v_{\mathrm{i}}=0,\ \eta_{\mathrm{o}}=0}. \tag{73}\] In order to calculate it, we apply a Schrieffer-Wolf transformation for \(v_{\mathrm{i}}=0\) and \(\eta_{\mathrm{o}}=0\) to the static Liouvillian and to the drive terms. The resulting Liouvillian assumes the form \[\tilde{\mathfrak{L}}_{\mathrm{SW}} =\tilde{\mathfrak{L}}_{\mathrm{SW}}^{\mathrm{(static)}} \tag{74}\] \[+\begin{bmatrix}\eta_{\mathrm{o}}&\dot{\eta}_{\mathrm{o}}&1\end{bmatrix} \begin{bmatrix}L_{\mathrm{oi}}^{(00)}&L_{\mathrm{oi}}^{(01)}&\tilde{V}_{ \mathrm{c}}^{(0)}\\ L_{\mathrm{oi}}^{(10)}&L_{\mathrm{oi}}^{(11)}&\tilde{V}_{\mathrm{c}}^{(1)}\\ \tilde{V}_{\mathrm{q}}^{(0)}&\tilde{V}_{\mathrm{q}}^{(1)}&0\end{bmatrix} \begin{bmatrix}v_{\mathrm{i}}\\ \tilde{v}_{\mathrm{i}}\\ 1\end{bmatrix}.\] Here the static Liouvillian is equal to \[\tilde{\mathfrak{L}}_{\mathrm{SW}}^{\mathrm{(static)}}=\left.\tilde{\mathfrak{L }}_{\mathrm{SW}}\right|_{v_{\mathrm{i}}=0,\eta_{\mathrm{o}}=0}, \tag{75}\] \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline \(\omega_{\mathrm{t}}/(2\pi)\) & \(\gamma_{\mathrm{r}}/(2\pi)\) & \(\alpha/(2\pi)\) & \(\omega_{\mathrm{r}}/(2\pi)\) & \(\gamma_{\mathrm{r}}/(2\pi)\) & \(\chi/(2\pi)\) \\ \hline \multirow{4}{*}{3.834} & \(10^{-6}\) & & & \\ & (0.01 K) & & & \\ \cline{1-1} & \(6.9\cdot 10^{-5}\) & & & \\ \cline{1-1} & (0.1 K) & & & \\ \hline \hline \end{tabular} \end{table} Table 2: Frequency \(\omega_{\mathrm{t}}\), linewidth \(\gamma_{\mathrm{t}}\), and anharmonicity \(\alpha\) of the transmon, frequency \(\omega_{\mathrm{r}}\) and linewidth \(\gamma_{\mathrm{r}}\) of the resonator, and the dispersive shift \(\chi\), obtained from the diagonalization of the system Liouvillian. The qubit linewidth is given at low (top cell) and high (bottom cell) temperatures. All the quantities are given in GHz. Figure 2: Schematic of a dispersive readout setup for a transmon qubit. The circuit consists of the qubit, capacitively coupled to a resonator, which in turn capacitively coupled to two semi-infinite transmission lines. operators which couple system degrees of freedom with the input and output field are given by \[\tilde{V}_{\mathrm{c}}^{(0)} =\left.\left(\frac{\partial\tilde{\mathfrak{L}}}{\partial\eta_{ \mathrm{o}}}+\left[\frac{\partial\tilde{\mathfrak{L}}}{\partial\eta_{\mathrm{o }}},\tilde{S}\right]\right)\right|_{v_{\mathrm{i}}=0,\eta_{\mathrm{o}}=0}, \tag{76}\] \[\tilde{V}_{\mathrm{c}}^{(1)} =\left.\left(\frac{\partial\tilde{\mathfrak{L}}}{\partial\tilde {\eta}_{\mathrm{o}}}+\left[\frac{\partial\tilde{\mathfrak{L}}}{\partial\tilde {\eta}_{\mathrm{o}}},\tilde{S}\right]\right)\right|_{v_{\mathrm{i}}=0,\eta_{ \mathrm{o}}=0},\] \[\tilde{V}_{\mathrm{q}}^{(0)} =\left.\left(\frac{\partial\tilde{\mathfrak{L}}}{\partial v_{ \mathrm{i}}}+\left[\frac{\partial\tilde{\mathfrak{L}}}{\partial v_{\mathrm{i} }},\tilde{S}\right]\right)\right|_{v_{\mathrm{i}}=0,\eta_{\mathrm{o}}=0},\] \[\tilde{V}_{\mathrm{q}}^{(1)} =\left.\left(\frac{\partial\tilde{\mathfrak{L}}}{\partial\tilde {v}_{\mathrm{i}}}+\left[\frac{\partial\tilde{\mathfrak{L}}}{\partial\tilde{v} _{\mathrm{i}}},\tilde{S}\right]\right)\right|_{v_{\mathrm{i}}=0,\eta_{\mathrm{o }}=0},\] and direct coupling between the input and output field is provided by \[L_{\mathrm{oi}}^{(00)} =\left.\frac{\partial^{2}\tilde{\mathfrak{L}}}{\partial\eta_{ \mathrm{o}}\partial v_{\mathrm{i}}}\right|_{v_{\mathrm{i}}=0,\eta_{\mathrm{o }}=0},\ L_{\mathrm{oi}}^{(01)} =\left.\frac{\partial^{2}\tilde{\mathfrak{L}}}{\partial\eta_{ \mathrm{o}}\partial\tilde{v}_{\mathrm{i}}}\right|_{v_{\mathrm{i}}=0,\eta_{ \mathrm{o}}=0}, \tag{77}\] \[L_{\mathrm{oi}}^{(10)} =\left.\frac{\partial^{2}\tilde{\mathfrak{L}}}{\partial\tilde{ \eta}_{\mathrm{o}}\partial v_{\mathrm{i}}}\right|_{v_{\mathrm{i}}=0,\eta_{ \mathrm{o}}=0},\ L_{\mathrm{oi}}^{(11)} =\left.\frac{\partial^{2}\tilde{\mathfrak{L}}}{\partial\tilde{ \eta}_{\mathrm{o}}\partial\tilde{v}_{\mathrm{i}}}\right|_{v_{\mathrm{i}}=0,\eta _{\mathrm{o}}=0}.\] The latter are c-numbers since the both classical and quantum fields couple linearly to the system and auxiliary degrees of freedom in the Liouvillian (53). Here, we drop the terms quadratic in \(\eta_{\mathrm{o}}\), since they contribute only to the fluctuations. We emphasize that the generator of the Schrieffer-Wolf transformation \(\tilde{S}\) should be assumed for the absent drive \(v_{\mathrm{i}}=0\) and \(\eta_{\mathrm{o}}=0\). Then, the transmission coefficient can be expressed as \[S_{21}(\omega)=\sum_{jj^{\prime}=0}^{1}v^{j-j^{\prime}}\omega^{j +j^{\prime}}\langle 1|\tilde{V}_{\mathrm{c}}^{(j)}\tilde{\mathfrak{G}}( \omega)\tilde{V}_{\mathrm{q}}^{(j^{\prime})}|W_{\mathrm{eq}}\rangle\\ +\sum_{jj^{\prime}=0}^{1}v^{j-j^{\prime}}\omega^{j+j^{\prime}}L_{ \mathrm{oi}}^{(jj^{\prime})}, \tag{78}\] where \(\tilde{\mathfrak{G}}(\omega)=\left[\omega-\tilde{\mathfrak{L}}_{\mathrm{SW}}^{ (\mathrm{static})}\right]^{-1}\) which can be efficiently calculated using numerical diagonalization of the static Liouvillian. The frequency dependence of the absolute value of \(S_{21}(\omega)\) is shown in Fig. 3 at a low temperature of \(T=0.01\) K and a high temperature of \(T=0.1\) K. At the low temperature, the circuit is close to its ground state and there are only two transitions visible. There is a strong feature at the resonator frequency \(\omega_{\mathrm{r}}\) and a weak signature at the transmon frequency \(\omega_{\mathrm{t}}\). At the high temperature, both qubit and resonator become thermally populated, and consequently more transitions become visible. In Fig. 3(b), we can clearly observe lines which correspond to transitions between different transmon levels as well as the splitting of the resonator feature caused by the dispersive shift (see the inset). A spurious feature below the resonator peak arises due to the hard cut-off of the transmon levels which results in the increased dispersive shift for the highest considered transmon level. Positions of the peaks and their widths correspond to the eigenvalues of the Liouvillian \(\tilde{\mathfrak{L}}_{\mathrm{SW}}^{(\mathrm{static})}\). The results are shown in Tab. 2. Note that the linewidth of the qubit transition is increased at high temperature which would result in the lower \(T_{2}\) time of the qubit. ### Readout of a hot transmon Next, we proceed to the simulation of the state of the output field during transmon readout. For the sake of simplicity, we do not consider the preparation of the qubit state and the simulation of its subsequent dynamics with Figure 3: Absolute value of the transmission coefficient \(S_{21}(\omega)\) for (a) cold transmon at temperature \(T=0.01\) K and (b) hot transmon at temperature \(T=0.1\) K. The parameters of the circuit are given in Tab. 1. Frequencies and linewidths of the qubit and the resonator, qubit anharmonicity and dispersive shift are shown in Tab. 2. respect to the readout pulse. Instead, we consider a continuous monochromatic drive \(v_{\rm i}(t)=V_{\rm d}\cos\left(\omega_{\rm d}t\right)\) and the qubit-resonator system at temperature \(T\) relaxed to the steady state with respect to the drive. Details of the calculation can be found in Appendix C. The quantity of interest is a homodyne demodulated signal averaged over a finite time interval of length \(t_{\rm int}\): \[S_{\rm i}+\imath S_{\rm q}=\frac{2}{t_{\rm int}V_{\rm d}}\int\limits_{0}^{t_{ \rm int}}v_{\rm o}(t)e^{\imath\omega_{\rm d}t}\ {\rm d}t. \tag{79}\] The real and imaginary components \(S_{\rm i}\) and \(S_{\rm q}\) correspond to in-phase and quadrature components of the output field, respectively, normalized by drive amplitude. These components are measured also in the single-shot readout experiments although typically using a weighted time average over the duration of the readout pulse. We calculate the quasiprobability density \(p(S_{\rm i},S_{\rm q})\) using Eq. (61) and show the results and simulation parameters in Fig. 4. We can clearly distinguish four disk-shaped clouds which correspond to the different transmon states. The intensities of the clouds are proportional to the thermal populations of the qubit states. In addition there are faint lines which connect different clouds. These lines correspond to the spontaneous transitions between the qubit states which may occur during the integration time [92]. Similar connecting lines have indeed been observed in experiments [93]. The value of \(t_{\rm int}\) is pivotal for the accurate readout protocol. For long integration times, the spontaneous transitions become more and more probable and in the limit \(t_{\rm int}\rightarrow\infty\), the quasiprobability distribution collapses into a delta function centered at the real and imaginary parts of the transmission coefficient for the finite-amplitude drive. In the opposite limit, with the decrease of \(t_{\rm int}\), the clouds begin to grow and, at some point, overlap. Therefore, there is an optimal integration time, not too short such that the clouds can still be distinguishable, and not too long such that probability of spontaneous transition during the measurement is small. ### Connection to measurement theory The choice of the qubit readout problem, in addition to the purely illustrative purpose, is also motivated by the problem of the quantum measurement. The qubit readout falls into a class of continuous inefficient measurements [49] since the state of the qubit is not probed directly, but through the averaging of the output field over a finite time interval. In a contrast to the standard presentation of measurements [49], we do not consider an initial product state of the system and the environment or the measurement device, but a thermal initial state. Moreover, we do not separate the system and the environment and study the open quantum system as a whole. Such an approach may be favorable outside of the weak coupling regime, where product initial states gives rise to non-exponential short-time decoherence [94; 95]. ## VI Conclusions We presented an exact nonlinear response theory, free of the limiting requirements of weak coupling, low temperature, and weak drive. The dissipation and the driving field introduced by the coupled transmission lines is reduced to the coupling of the non-linear system degrees of freedom to a discrete set of auxiliary bosonic modes. We split the modes into two classes: dynamical and generalized Matsubara modes. The first ones provide coupling to classical field, while latter are responsible for thermal and quantum fluctuations in the circuit. This results in the emergence of a time-local QD-MESS equation which fully describes the open-quantum-system dynamics of the circuit. We introduced observables as weighted time averages of the output fields emitted to the transmission lines and derived an analytical expression of their mutual quasiprobability distribution based on the Fourier transformation of the generating functional. This formalism allows to calculate homo- and heterodyne demodulated signal as well as calculate broadband characteristics of the output field. In the weak coupling regime, we employed a dynamical Schrieffer-Wolf transformation, which provides a consistent way of calculating the dissipators for arbitrary circuits and driving fields. Finally, we illustrated our findings by studying a dispersive-readout setup of a superconducting transmon qubit. The developed NLR theory provides a powerful framework for the theoretical analysis of open quantum systems such as superconducting quantum circuits, including extraction of experimentally observable quantities. Figure 4: Quasiprobability density of normalized in-phase and quadrature components of the output field. Temperature is \(T=0.1\) K, drive frequency is \(\omega_{\rm d}=2\pi\times 7.7146\) GHz, drive amplitude is \(V_{\rm d}=0.2\)\(\mu\)V, integration time is \(t_{\rm int}=5.18\)\(\mu\)s. Going beyond the weak-coupling regime can be achievable by solving the QD-MESS equation with advanced numerical techniques such as tensor networks. In principle, the developed formalism can be applied not only to lumped-element circuits but also to distributed elements. The necessary self-energies can be computed explicitly using high-frequency simulation software for linear components of the distributed element circuits. This would pave the way for exact quantum simulations of practical devices. In the future, the presented formalism can also be modified for the calculation of thermodynamic properties of the circuit. A non-equilibrium case can be simulated by introducing generalized Matsubara modes corresponding to different temperatures. Here, the counting field corresponding to the heat flow couples to the classical expression for Joule losses which are quadratic in terms of the microwave field in the circuit. Such a generalization would allow to study heat transport in non-equilibrium systems and implement accurate simulations of quantum heat engines. Among our findings there is a treatment of open quantum systems free from explicit system-environment separation and from an initial product state condition. Such a description is valuable for the development of the measurement theory in the regime of intermediate to strong coupling between the system and the measurement device. ###### Acknowledgements. V.V. thanks Kalle Kansanen and Riya Baruah for fruitful discussions. This work has been financially supported by the Academy of Finland Centre of Excellence program (project no. 336810) and THEPOW (project no. 349594), the European Research Council under Advanced Grant no. 101053801 (ConceptQ), by Horizon Europe programme HORIZON-CL4-2022-QUANTUM-01-SGA via the project 101113946 OpenSuperQPlus100, the German Science Foundation (DFG) under AN336/12-1 (For2724), the State of Baden-Wuttemberg under KQCBW/SiQuRe, and the BMBF within the QS Solid project. We acknowledge the computational resources provided by the Aalto Science-IT project. ## Appendix A Charge QD-MESS equation In this appendix, we provide the charge QD-MESS equation in contrast to the flux version considered in the main article. We begin with the classical QD-MESS Lagrangian with the voltage coupling to the auxiliary degrees of freedom, given by \[\tilde{\mathcal{L}}_{\text{QD-MESS}}\left(\dot{\mathbf{\phi}}_{\text{ core}},\dot{\mathbf{a}},\dot{\mathbf{b}},\mathbf{\phi}_{\text{core}},\mathbf{a},\mathbf{b}\right)=-V_{ \text{j}}(\mathbf{\psi}_{\text{c}},\mathbf{\psi}_{\text{q}})\\ +\sum_{jj^{\prime}=0}^{1}\frac{\text{d}^{j}\mathbf{\phi}_{\text{core },\text{q}}^{\text{T}}}{\text{d}t^{j}}\tilde{\mathbf{A}}(jj^{\prime})\frac{\text{ d}^{j^{\prime}}\mathbf{\phi}_{\text{core}}}{\text{d}t^{j^{\prime}}}\\ +\sum_{k}\left(h\dot{a}_{k}^{*}\dot{a}_{k}-\hbar\omega_{k}^{\text{ d}}|a_{k}|^{2}\\ +\tilde{\mathbf{\phi}}_{\text{core},\text{q}}^{\text{T}}\tilde{\mathbf{u }}_{k}^{\text{d}}a_{k}+a_{k}^{*}\tilde{\mathbf{v}}_{k}^{\text{dT}}\tilde{\mathbf{ \phi}}_{\text{core}}\right)\\ +\sum_{n}\left(\imath\hbar\dot{\mathbf{b}}_{n}^{\dagger}\dot{\mathbf{b}}_ {n}+\imath\omega_{n}^{\text{gm}}|\mathbf{b}_{n}|^{2}\right.\\ \left.+\tilde{\mathbf{\phi}}_{\text{core},\text{q}}^{\text{T}}\tilde{ \mathbf{U}}_{n}^{\text{gm}}\mathbf{b}_{n}+\mathbf{b}_{n}^{\dagger}\tilde{\mathbf{V}}_{n}^{ \text{gm}\text{T}}\tilde{\mathbf{\phi}}_{\text{core},\text{q}}\right), \tag{10}\] where \(\tilde{\mathbf{\phi}}_{\text{core}}=\left[\dot{\mathbf{\psi}}_{\text{c}}^{\text{T}}\ \mathbf{v}_{i}^{\text{T}}\ \dot{\mathbf{\psi}}_{\text{q}}^{\text{T}}\ \mathbf{\eta}_{\text{o}}^{\text{T}}\right]^{\text{T}}\) and \(\tilde{\mathbf{\phi}}_{\text{core},\text{q}}=\left[\dot{\mathbf{\psi}}_{\text{q}}^{ \text{T}}\ \mathbf{\eta}_{\text{o}}^{\text{T}}\right]^{\text{T}}\). The expressions for the classical and quantum charges (50), Liouvillian (51) and the commutation relations (52) remain valid but the part of the Liouvillian which describes the dynamics of the auxiliary modes becomes non-diagonal. The form of Eq. (62) can be obtained by applying the Bogoliubov transformation to the operators of the auxiliary modes. The difference from the flux QD-MESS Liouvillian is that the operators of the auxiliary modes are coupled to the charge degrees of freedom. The main advantage of using the charge QD-MESS equation for the transmon readout circuit shown in Fig. 2 is that it evades spurious inductive terms in the Liouvillian which would break quasi-charge conservation of the transmon island. ## Appendix B Discrete approximation of QD-MESS for the transmon Direct numerical simulation of the QD-MESS equation is impossible, since even the Liouvillian \(\tilde{\mathfrak{L}}^{(0)}(t)\) in Eq. (62) is infinite-dimensional and one needs to come up with its finite-dimensional approximation in some suitable basis. For a general circuit with intermediate or strong dissipation it may be a challenging problem. For the transmon readout circuit we use, the coupling of the qubit to the resonator and to the transmission lines is weak, and hence we can use a finite number of bare transmon basis states for our calculations. The dissipation-free part of the transmon Liouvillian reads as \[\tilde{\mathfrak{L}}_{\text{t}}=\frac{\tilde{q}_{\text{q}}\tilde{q}_{\text{c} }}{C_{\text{eff}}}+2E_{\text{j}}\sin\left(\frac{2\pi\ddot{\psi}_{\text{c}}}{ \mathbf{\Phi}_{0}}\right)\sin\left(\frac{\pi\ddot{\psi}_{\text{q}}}{\mathbf{\Phi}_{0} }\right). \tag{11}\] We make a substitution \[\begin{split}&\ddot{\psi}_{\text{c}}=\frac{\ddot{\psi}_{+}+\ddot{ \psi}_{-}}{2},\ \dot{q}_{\text{c}}=\frac{\dot{q}_{+}+\ddot{q}_{-}}{2},\\ &\dot{\psi}_{\text{q}}=\dot{\psi}_{+}-\ddot{\psi}_{-},\ \dot{q}_{\text{q}}=\dot{q}_{+}-\ddot{q}_{-},\end{split} \tag{12}\] which is a transformation from the classical and quantum variables to the forward and backward-path variables. In terms of these variables, the Liouvillian assumes the form \[\begin{split}\check{\mathfrak{L}}_{\text{t}}&=\check{H} _{+}-\check{H}_{-},\\ \check{H}_{\pm}&=\frac{\check{q}_{\pm}^{2}}{2C_{ \text{eff}}}-E_{\text{j}}\cos\left(\frac{2\pi\check{\psi}_{\pm}}{\Phi_{0}} \right).\end{split} \tag{30}\] This Liouvillian is nothing but a commutator with a standard transmon Hamiltonian \[\begin{split}\check{\mathfrak{L}}_{\text{t}}\operatorname{vec} \left(\hat{\rho}\right)&=\operatorname{vec}\left(\left[\hat{H}, \hat{\rho}\right]\right),\\ \hat{H}&=\frac{\hat{q}^{2}}{2C_{\text{eff}}}-E_{ \text{j}}\cos\left(\frac{2\pi\hat{\psi}}{\Phi_{0}}\right),\end{split} \tag{31}\] where \(\operatorname{vec}\left(\hat{\rho}\right)\) stands for the vectorization of the density operator. For the latter, we can construct a finite-difference approximation, diagonalize it numerically and use a few lowest transmon levels \(\ket{n}\) with energies \(E_{n}\) as a basis. Consequently, an arbitrary operator \(\check{A}_{\pm}\) acting in the Liouville space can be expressed through its counterpart \(\check{A}\) acting in the Hilbert space as \[\check{A}_{+}\tilde{\rightarrow}\hat{A}\otimes\hat{1},\ \check{A}_{-}\tilde{ \rightarrow}\hat{1}\otimes\hat{A}^{\mathsf{T}}, \tag{32}\] where \(\tilde{\rightarrow}\) stands for isomorphism relation, \(\otimes\) denotes Kronecker product, and \(\hat{1}\) is the identity operator in the Hilbert space. For the bosonic operators of the auxiliary modes, which we do not eliminate with dynamical Schrieffer-Wolf transformation, we cut the Liouville space with a few lowest levels. For our calculations, we use four levels of the transmon and six levels of the auxiliary modes. ## Appendix C QD-MESS equation for the periodically driven system For the transmon readout we solve the QD-MESS equation with the following classical and quantum sources: \[\begin{split} v_{\text{i}}(t)&=V_{\text{d}}\cos( \omega_{\text{d}}t),\\ \eta_{\text{o}}(s_{1},s_{2},t)&=s_{1}\cos(\omega_{ \text{d}}t)+s_{2}\sin(\omega_{\text{d}}t),\end{split} \tag{33}\] for different values of the parameters \(s_{1}\) and \(s_{2}\). The total Liouvillian is periodic in time, and hence Floquet theory can be applied to such a system. We consider the evolution operator over a single period as \[\check{U}(s_{1},s_{2})=\mathbb{T}\exp\left[-\frac{t}{\hbar}\int\limits_{0}^{2 \pi/\omega_{\text{d}}}\check{\mathfrak{L}}(t)\ \mathrm{d}t\right]. \tag{34}\] The steady state of the system corresponds to the unit eigenstate of this operator in the absence of the quantum drive \[\check{U}(0)|W_{\text{ss}}\rangle=|W_{\text{ss}}\rangle. \tag{35}\] We consider that the integration time \(t_{\text{int}}\) is an integer multiple \(N\) of the drive period \(2\pi/\omega_{\text{d}}\). Thus the characteristic function can be evaluated as \[\mathcal{Z}(s_{1},s_{2})=\langle 1|\left[\tilde{U}\left(\frac{s_{1}}{N},\frac{s_ {2}}{N}\right)\right]^{N}|W_{\text{ss}}\rangle. \tag{36}\] These quantities can be efficiently evaluated using the dynamical Schrieffer-Wolf approach for weakly coupled auxiliary modes.
2303.13393
Synthetic aperture radar imaging below a random rough surface
Motivated by applications in unmanned aerial based ground penetrating radar for detecting buried landmines, we consider the problem of imaging small point like scatterers situated in a lossy medium below a random rough surface. Both the random rough surface and the absorption in the lossy medium significantly impede the target detection and imaging process. Using principal component analysis we effectively remove the reflection from the air-soil interface. We then use a modification of the classical synthetic aperture radar imaging functional to image the targets. This imaging method introduces a user-defined parameter, $\delta$, which scales the resolution by $\sqrt{\delta}$ allowing for target localization with sub wavelength accuracy. Numerical results in two dimensions illustrate the robustness of the approach for imaging multiple targets. However, the depth at which targets are detectable is limited due to the absorption in the lossy medium.
Arnold D. Kim, Chrysoula Tsogka
2023-03-23T16:09:37Z
http://arxiv.org/abs/2303.13393v1
# Synthetic aperture radar imaging below a random rough surface ###### Abstract Motivated by applications in unmanned aerial based ground penetrating radar for detecting buried landmines, we consider the problem of imaging small point like scatterers situated in a lossy medium below a random rough surface. Both the random rough surface and the absorption in the lossy medium significantly impede the target detection and imaging process. Using principal component analysis we effectively remove the reflection from the air-soil interface. We then use a modification of the classical synthetic aperture radar imaging functional to image the targets. This imaging method introduces a user-defined parameter, \(\delta\), which scales the resolution by \(\sqrt{\delta}\) allowing for target localization with sub wavelength accuracy. Numerical results in two dimensions illustrate the robustness of the approach for imaging multiple targets. However, the depth at which targets are detectable is limited due to the absorption in the lossy medium. ## 1 Introduction Landmine detection using unmanned aerial based radar is gaining attention because it provides high resolution images while avoiding the interaction with the object and the surrounding medium (Fernandez et al., 2018; Francke and Dobrovolskiy, 2021). Those imaging systems use synthetic aperture radar (SAR) processing to achieve high resolution imaging of both metallic and dielectric targets. In SAR, high resolution is achieved because the data are treated coherently along the flight path of a single transmitter/receiver mounted on an aircraft. For landmine detection, SAR image processing is used and the data are coherently processed along the synthetic aperture formed by an unmanned aerial vehicle flying above the ground over the area of interest. Other related remote sensing applications include precision agriculture, forestry monitoring and glaciology. Landmine detection is a very important problem with both civilian and military applications. It has been a subject of extreme interest and several imaging methodologies have been proposed in the literature. We refer to the review article (Daniels, 2006) for an overview on the subject and to (Gonzalez-Huici, Catapano, & Soldovieri, 2014) for a comparison between different imaging techniques in the specific context of landmine detection. The method we employ here is a modification of the classical SAR processing technique. Specifically we apply to the classical imaging functional a Mobius transformation that depends on a user defined parameter, \(\delta\). Assuming a synthetic aperture of length \(a\), and system bandwidth \(B\), we have recently shown (Kim & Tsogka, 2023c) that the resolution of the imaging method in cross-range (the direction parallel to the synthetic aperture) is \(\sqrt{\delta}\lambda L/a\) and the range (direction orthogonal to cross-range) resolution is \(\sqrt{\delta}c/B\) with \(c\) the speed of the waves, \(\lambda\) the central wavelength and \(L\) the distance of propagation. We have also carried out a resolution analysis of this method for imaging in a lossy medium (Kim & Tsogka, 2023a) where we have shown that one should not use the absorption in the medium even if it is known. Although, absorption does not affect significantly the resolution of the imaging method, it does affect the target detectability. Specifically, if \(z\) denotes the depth of the target below the air-soil interface, the product \(\beta z\) corresponds to the absorption length scale of the problem with \(\beta\) denoting the loss tangent, that is the ratio of the imaginary part over the real part of the relative dielectric constant. For targets buried deep so that \(\beta z\gg 1\) measurements become too small to detect targets, especially if the data are corrupted by additive measurement noise as is often the case in practical applications. For a sufficiently long flight path, the air-soil interface is most likely not uniformly flat. Moreover, height fluctuations in this interface cannot be known with certainty. For this reason we model this interface using a random rough surface. It then becomes crucially important for a subsurface imaging method to be robust to those uncertainties in the interface. Additionally, there may be multiple interactions between scattering by subsurface targets and the random rough surface (Long, Khine, & Kim, 2010). Here, we assume only one interaction between the random rough surface and the subsurface target since that has been shown to be sufficiently accurate for targets buried in a lossy medium (El-Shenawee, 2002). We model the height of the air-soil interface \(h(x)\) using a Gaussian-correlated random process that is characterized by the RMS height, \(h_{\text{RMS}}\) and the correlation length, \(\ell\). We consider here that the RMS height is small with respect to the correlation length which is of the order of the central wavelength while the aperture is large compared to both. In this regime, multiple-scattering effects are important and enhanced backscattering is observed. Enhanced backscattering is a multiple scattering phenomenon in which a well-defined peak in the retro-reflected direction is observed (Maradudin et al., 1991; Ishimaru, 1991; Maradudin & Mendez, 2007). Imaging in media with random rough surfaces is a new paradigm for imaging in random media and requires different methods than the ones developed for volumetric scattering (Borcea, Garnier, Papanicolaou, & Tsogka, 2011) or imaging in random waveguides (Borcea, Garnier, & Tsogka, 2015). The key difference here is that randomness is isolated only at the interface separating the two media. Even though waves multiply scatter on the rough surface, they also scatter away from the rough surface. Consequently, there is no dominant cumulative diffusion phenomenon due to this kind of randomness. For the synthetic aperture setup the measurements are exactly in the retro-reflected direction so the data have uniform power at each spatial location along the flight path. To remove the strong reflection introduced by the ground-air interface we use PCA or more precisely the singular value decomposition (SVD) of the data matrix. Principal component analysis (PCA) has been proposed as a method for removing ground bounce signals in (Tjora, Eide, & Lundheim, 2004). For a flat surface the ground bounce can be removed from the data by taking out the contribution corresponding to the first singular value. Here we see that due to multiple scattering to remove the reflection from the random interface contributions corresponding to the first few singular values should be taken out from the data. This SVD based approach for ground bounce removal is advantageous because it does not require any _a priori_ information about the media, including the exact location of the interface. Our imaging method requires computing Green's function for a medium composed of adjacent half spaces. This Green's function is represented as a Fourier integral of a highly oscillatory function. Accurately computing such integrals is quite challenging and several approaches have been proposed to this effect (Cai, 2002; O'Neil, Greengard, & Pataki, 2014; Bruno, Lyon, Perez-Arancibia, & Turc, 2016). The approach we follow here is similar to the method presented by Barnett and Greengard (Barnett & Greengard, 2011), where we integrate on a deformed contour in the complex plane to avoid branch points. The remainder of the paper is as follows. In Section 2 we present the synthetic aperture radar setup. In Section 3 our model for the rough surface is described as well as the integral equations formulation for computing the solution to the forward problem. The algorithm for computing the measurements is then explained in Section 4. The solution of the inverse scattering problem entails two steps. The first step that uses the singular value decomposition of the data matrix to remove the ground bounce is presented in Section 5. The second step consists in reconstructing an image using the modified synthetic aperture imaging algorithm and is explained in Section 6. We present numerical results in two dimensions that illustrate the effectiveness of the imaging method in Section 7. We finish with our conclusions in Section 8. ## 2 SAR imaging Here we describe the SAR imaging system for the problem to be studied. We limit our computations to the two-dimensional \(xz\)-plane to simplify the simulations. However, the imaging method we describe easily extends to three-dimensional problems. Consider a platform moving along a prescribed flight path. At fixed locations along the flight path: \(\boldsymbol{x}_{n}=(x_{n},z_{n})\) for \(n=1,\ldots,N\), the platform emits a multi-frequency signal that propagates down to an interface that separates the air where the platform is moving from a lossy medium below the interface. See Fig. 1 for a sketch of this imaging system. Let \(\omega_{m}\) for \(m=1,\ldots,M\) denote the set of frequencies used for emitting and recording signals. We apply the start-stop approximation here in which we neglect the motion of the platform and targets in comparison to the emitting and recording of signals. The complete set of measurements corresponds to the suite of experiments conducted at each location on the path. For this problem, the signal emitted from the platform propagates down to the interface. Part of the signal is reflected by the interface which is called the ground bounce signal. The portion of that ground bounce signal that reaches the platform is recorded. Another part of the signal is transmitted across the interface and is incident on the subsurface targets which then scatter that signal. Since the medium below the interface is lossy, the power in the signals incident on and scattered by the targets is attenuated. A portion of that attenuated scattered signal is transmitted across the interface and propagates up to the platform where it is also recorded. Measurements are therefore comprised of ground bounce and scattered signals reaching the platform. Using these measurements we seek to solve the inverse scattering problem that identifies and locates targets in the lossy medium below the interface. The medium above the interface is uniform and lossless and we assume that it is known. The medium below is also uniform, but lossy, so it has a complex relative dielectric permittivity. We assume we know the real part of the relative dielectric permittivity, but not its imaginary part corresponding to the absorption in the medium. Finally, the interface between the two media is unknown, but we assume that we know its mean, which is constant. There are several key challenges to consider for this problem. Measurements include ground bounce and scattered signals. The ground bounce signals have more power than the scattered Figure 1: A sketch of the subsurface synthetic aperture imaging system. A platform moves along a prescribed flight path producing a synthetic aperture above an interface separating air from a lossy medium. The platform emits a signal and records the echoes including ground bounce signals due to reflections by the interface and scattered signals by the targets. The objective for the imaging problem is to identify and locate the subsurface targets. signals, but do not contain information about the targets. Thus, one needs an effective method to remove the ground bounce from measurements. Because the interface is uncertain, it is important to remove these ground bounce signals without requiring explicit knowledge of the interface location. Once that issue can be adequately addressed, we then require high-resolution images of the targets in an unknown, lossy medium obtained through solution of the inverse scattering problem. The absorption in the medium will limit the depth at which one can reliably solve the inverse scattering problem. However, we are interested in identifying targets that are located superficially below the interface, so the penetration depths needed for this problem are not too prohibitive. In addition, measurements are corrupted by additive measurement noise. Another noteworthy issue is that removal of the ground bounce signal from measurements will effectively increase the relative amount of noise in what remains which will limit the values of the signal-to-noise ratio (SNR) for which imaging will be effective. ## 3 Rough surface scattering We model uncertainty in the interface separating the two media using random rough surfaces. In particular, we consider Gaussian-correlated random surfaces that are characterized by the RMS height, \(h_{\rm RMS}\) and the correlation length, \(\ell\). In what follows, we give the integral equation formulation for computing reflection and transmission of signals across one realization of a random rough surface. Let \(z=h(x)\) for \(-\infty<x<\infty\) denote one realization of the random rough surface separating two different media. The medium in \(z>h(x)\) is uniform and lossless. The medium in \(z<h(x)\) is also uniform, but lossy with relative dielectric constant \(\epsilon_{r}(1+{\rm i}\beta)\) with \(\epsilon_{r}\) denoting the real part of the relative dielectric constant and \(\beta\geq 0\) denoting the loss tangent (ratio of the imaginary part over the real part of the relative dielectric constant). We consider two problems in which a point source is either above or below the interface. In what follows we assume that the total field and its normal derivative are continuous on \(z=h(x)\) and that those fields satisfy appropriate out-going conditions as \(z\rightarrow\pm\infty\). ### Integral equations formulation Suppose a point source is located at \((x_{0},z_{0})\) with \(z_{0}>h(x_{0})\). Using Green's second identity, we write \[u(x,z)=G_{0}(x,z;x_{0},z_{0})+\mathscr{D}_{0}[U](x,z)-\mathscr{S}_{0}[V](x,z), \quad z>h(x), \tag{1}\] with \[\mathscr{D}_{0}[U](x,z)=\int_{-\infty}^{\infty}\frac{\partial G_{0}(x,z;\xi,h (\xi))}{\partial n}\sqrt{1+(h^{\prime}(\xi))^{2}}U(\xi){\rm d}\xi,\] \[\mathscr{S}_{0}[V](x,z)=\int_{-\infty}^{\infty}G_{0}(x,z;\xi,h(\xi))V(\xi){\rm d}\xi.\] Here, \[G_{0}(x,z;x^{\prime},z^{\prime})=\frac{{\rm i}}{4}H_{0}^{(1)}\left(k_{0}\sqrt{(x -x^{\prime})^{2}+(z-z^{\prime})^{2}}\right),\] with \(k_{0}=\omega/c\) and \[\frac{\partial G_{0}(x,z;\xi,\zeta)}{\partial n}\sqrt{1+(h^{\prime}(\xi))^{2} }=h^{\prime}(\xi)\frac{\partial G_{0}(x,z;\xi,\zeta)}{\partial\xi}-\frac{ \partial G_{0}(x,z;\xi,\zeta)}{\partial\zeta}. \tag{2}\] In addition, we have \[v(x,z)=-\mathscr{D}_{1}[U](x,z)+\mathscr{S}_{1}[V](x,z),\quad z<h(x), \tag{3}\] with \(\mathscr{D}_{1}\) and \(\mathscr{S}_{1}\) defined the same as \(\mathscr{D}_{0}\) and \(\mathscr{S}_{0}\), but with \(G_{0}\) replaced with \[G_{1}(x,z;x^{\prime},z^{\prime})=\frac{{\rm i}}{4}H_{0}^{(1)}\left(k_{1}\sqrt {(x-x^{\prime})^{2}+(z-z^{\prime})^{2}}\right),\] and \(k_{1}=k_{0}\sqrt{\epsilon_{r}(1+{\rm i}\beta)}\). Now, suppose a point source is located at \((x_{1},z_{1})\) with \(z_{1}<h(x_{1})\). For that case we have \[u(x,z)=\mathscr{D}_{0}[U](x,z)-\mathscr{S}_{0}[V](x,z),\quad z>h(x), \tag{4}\] and \[v(x,z)=G_{1}(x,z;x_{1},z_{1})-\mathscr{D}_{1}[U](x,z)+\mathscr{S}_{1}[V](x,z),\quad z<h(x). \tag{5}\] The fields \(u\) defined by either (1) or (4), and \(v\) defined by either (3) or (5) are given in terms of surface fields \(U(\xi)\) and \(V(\xi)\). Physically, \(U(\xi)=u(\xi,h(\xi))\) is the evaluation of the field on the interface point, \((\xi,h(\xi))\). The field \(V(\xi)\) is defined in terms of the normal derivative of \(u\) according to \[V(\xi)=\sqrt{1+(h^{\prime}(\xi))^{2}}\frac{\partial u(\xi,h(\xi))}{\partial n} =h^{\prime}(\xi)\frac{\partial u(\xi,\zeta)}{\partial\xi}-\frac{\partial u( \xi,\zeta)}{\partial\zeta}.\] These formulations given above make use of the aforementioned assumption that both \(u\) and \(\partial_{n}u\) are continuous on the interface \(z=h(x)\). The surface fields \(U\) and \(V\) are not yet determined. To determine them we evaluate \(u\) and \(v\) in the limit as \((x,z)\to(\xi,h(\xi))\) from above and below, respectively. In that limit, the \(\mathscr{D}_{0}\) and \(\mathscr{D}_{1}\) operators produce a jump and the result is a system of boundary integral equations. For the fields defined by (1) and (3), the resulting system is \[\frac{1}{2}U(\xi)-\mathscr{D}_{0}[U](\xi)+\mathscr{S}_{0}[V](\xi) =G_{0}(\xi,h(\xi);x_{0},z_{0}), \tag{6a}\] \[\frac{1}{2}U(\xi)+\mathscr{D}_{1}[U](\xi)-\mathscr{S}_{1}[V](\xi) =0, \tag{6b}\] and for the fields defined by (4) and (5), the resulting system is \[\frac{1}{2}U(\xi)-\mathscr{D}_{0}[U](\xi)+\mathscr{S}_{0}[V](\xi) =0, \tag{7a}\] \[\frac{1}{2}U(\xi)+\mathscr{D}_{1}[U](\xi)-\mathscr{S}_{1}[V](\xi) =G_{1}(\xi,h(\xi);x_{1},z_{1}). \tag{7b}\] The solution of each of these systems results in the determination of \(U\) and \(V\) for their respective problem. Once those are determined, the fields above and below the interface are computed through evaluation of (1) and (3) when the source is above the interface, or (4) and (5) when the source is below the interface. We give the numerical method we use to solve these systems in the Appendix. ### Enhanced backscattering The bistatic cross-section \(\sigma(\theta_{s},\theta_{i})\) is the fraction of power reflected in the far field by the rough surface in direction \((\sin\theta_{s},\cos\theta_{s})\) with \(\theta_{s}\) denoting the scattered angle made with respect to the \(z\)-axis due to a plane wave incident in direction \((\sin\theta_{i},-\cos\theta_{i})\) with \(\theta_{i}\) denoting the angle of incidence. Reflection by the random rough surface makes up an important component of measurements in this imaging problem. Here, we use the bistatic cross-section to characterize reflection by the rough surface over the range of frequencies: 3.1 GHz to 5.1 GHz. We use the method given in (Tsang, Kong, Ding, & Ao, 2004, Chapter 4) to generate these rough surfaces and compute the corresponding bistatic cross-sections. We then average over several realizations of the rough surface to determine canonical features of these rough surfaces. In Fig. 2 we show the bistatic cross-section due to a plane wave with \(\theta_{i}=30\) degrees averaged over 100 realizations of a Gaussian-correlated rough surface with RMS height \(h_{\text{RMS}}=0.2\) cm and correlation length \(\ell=8\) cm. These results show a sharp angular cone about \(\theta_{s}=\theta_{i}\) as a consequence of enhanced backscattering. Enhanced backscattering is a canonical multiple scattering phenomenon in which counter-propagating scattered waves add coherently in the retro-reflected direction, \(\theta_{s}=\theta_{i}\). With these surface roughness parameters, we find that scattering by the random rough surface is significant and cannot be ignored. Because these rough surfaces exhibit enhanced backscattering, there is significant multiple scattering. Moreover, SAR measurements use a single emitter/receiver, so we measure the field exactly at the retro-reflected angle corresponding to the peak of the angular cone. However, we do not care to reconstruct this rough surface profile for this imaging problem. Rather, we seek a method that attempts to identify and locate targets without needing to consider this rough surface. Nonetheless, scattering by the rough surface will be an important factor in the measurements. ## 4 Modeling measurements In this work we consider scattering by subsurface point targets. This assumption simplifies the modeling of measurements which, in turn, enables the determination of the effectiveness of a subsurface imaging method. We consider imaging point targets here as a necessary first problem for any effective imaging method to solve. To model measurements we must consider both the ground bounce signal that is the reflection by the rough surface, and the scattered signal by the targets. Assuming that scattering by each target is independent from any others, we give the procedure we use to model measurements for a single point target located at \((x_{1},z_{1})\) below due to a point source located at \((x_{0},z_{0})\). 1. Compute one realization of the Gaussian-correlated rough surface, \(z=h(x)\), with RMS height \(h_{\text{RMS}}\) and correlation length \(\ell\). 2. Solve the system (6). Let \(U_{0}\) and \(V_{0}\) denote the solution. 3. Compute the ground-bounce signal, \(R\), through evaluation of \[R=\mathscr{D}_{0}[U_{0}](x_{0},z_{0})-\mathscr{S}_{0}[V_{0}](x_{0},z_{0}).\] This expression is the field reflected by the rough surface evaluated at the same location as the source. 4. Solve the system (7). Let \(U_{1}\) and \(V_{1}\) denote the solution. Figure 2: [Left] Average of the bistatic cross-section, \(\langle\sigma(\theta_{s},\theta_{i})\rangle\), over 100 realizations of a Gaussian-correlated random rough surface with \(h_{\text{RMS}}=0.2\) cm and \(\ell=8\) cm due to a plane wave incident with \(\theta_{i}=30\) degrees. [Right] A close-up of this result about \(\theta_{s}=\theta_{i}\). 5. Compute the field scattered by the point target, \(S\), through evaluation of \[S=\left(\mathscr{D}_{0}[U_{1}](x_{0},z_{0})-\mathscr{S}_{0}[V_{1}](x_{0},z_{0}) \right)\rho\left(-\mathscr{D}_{1}[U_{0}](x_{1},z_{1})+\mathscr{S}_{1}[V_{0}](x_ {1},z_{1})\right).\] There are three factors in this expression written in right-to-left order just like matrix products. The third factor corresponds to the field emitted from the source that transmits across the interface and is incident on the target. The second factor is the reflectivity of the target \(\rho\). The first factor is the propagation of the second and third terms from the target location to the receiver location. Steps 2 through 5 of this procedure are repeated over each frequency \(\omega_{m}\) for \(m=1,\ldots,M\) and each spatial location of the platform \(\mathbf{x}_{n}\) for \(n=1,\ldots,N\). The results are \(M\times N\) matrices \(R\) and \(S\). When there are multiple targets, we repeat Steps 4 and 5 for each of the targets and \(S\) is the sum of those results. Using this procedure above, we model measurements according to \[D=R+S+\eta, \tag{8}\] with \(\eta\) denoting additive measurement noise which we model as Gaussian white noise. The inverse scattering problem is to identify targets and determine their locations from the data matrix \(D\). ## 5 Ground bounce signal removal According to measurement model (8), the ground bounce signal \(R\) is added to the scattered signal \(S\). The ground bounce signal does not contain any information about the targets. Since we do not seek to reconstruct the interface for this imaging problem, \(R\) impedes the solution of the inverse scattering problem. Hence, we seek to remove it from measurements. The key assumption we make is that the relative amount of power in \(R\) is larger than that in \(S\). This assumption opens the opportunity to use principal component analysis to attempt to remove \(R\) from \(D\). Let \(D=U\Sigma V^{H}\) denote the singular value decomposition of \(D\) where \(V^{H}\) denotes the Hermitian or conjugate transpose of \(V\). Because of uncertainty in the interface, we are not able to explicitly determine the structure of the singular values \(\sigma_{j}\) for \(j=1,\ldots,\min(M,N)\) in the \(M\times N\) diagonal matrix \(\Sigma\). Instead we seek to observe any changes in the spectrum of singular values that indicate a separation between contributions by \(R\) and \(S\). Consider \(M=25\) frequencies uniformly sampling the bandwidth ranging from 3.1 GHz to 5.1 GHz and \(N=21\) spatial locations of the platform uniformly sampling the aperture \(a=1\) m at 1 m above the mean interface height \(\langle h(x)\rangle=0\). We set \(\epsilon_{r}=9\) and \(\beta=0.1\). Using one realization of a rough surface with \(h_{\text{RMS}}=0.2\) cm and \(\ell=8\) cm, we compute \(R\). Then we compute the SVD of \(R\) and examine the singular values. In Fig. 3 we show results for one realization of the Gaussian-correlated rough surface with \(h_{\rm RMS}=0.2\) cm and \(\ell=8\) cm shown in the left plot and the corresponding singular values (normalized by the first singular value, \(\sigma_{1}\)) for the resulting ground bounce signals in the right plot. Note that this realization of the rough surface is one among those used to study the bistatic cross-section in Fig. 2 which exhibited enhanced backscattering. Consequently, we know that the ground bounce signals include strong multiple scattering by the rough surface. Looking at the singular values in Fig. 3 we identify a change in behavior in their decay. From \(j=1\) to \(j=5\), we find that \(\sigma_{j}\) decays rapidly over two orders of magnitude. In contrast, from \(j=6\) to \(j\approx 15\), we find that the decay of \(\sigma_{j}\) is much slower and then decays thereafter. We have observed that this qualitative behavior of the singular values persists over different realizations. Through these observations of the behavior of singular values for \(R\), we now propose a method to approximately remove \(R\) from \(D\) given as the following procedure. 1. Compute the SVD of the measurement matrix \(D=U\Sigma V^{H}\). 2. Identify the index \(j^{*}\) where the rapid decay of the singular values stops and the behavior changes. 3. Compute \[\tilde{D}=D-\sum_{i=1}^{j^{*}}\sigma_{i}{\bf u}_{i}{\bf v}_{i}^{H},\] (9) where \({\bf u}_{i}\) and \({\bf v}_{i}\) denote the \(i\)-th columns of \(U\) and \(V\), respectively. It is likely that this procedure does not remove \(R\) from \(D\) exactly. However, we apply this procedure to obtain \(\tilde{D}\) and test below if this procedure works well enough for identifying and locating targets. Note that measurement noise is applied to \(D=R+S\). The corresponding SNR is defined according to \({\rm SNR}=10\log_{10}(\|R+S\|_{F}/\|\eta\|_{F})\) with \(\|\cdot\|_{F}\) denoting the Frobenius norm. This SNR Figure 3: [Left] One realization of the Gaussian-correlated random rough surface with \(h_{\rm RMS}=0.2\) cm and \(\ell=8\) cm with \(k_{0}\) denoting the wavenumber at the central frequency. [Right] The singular values of the ground bounce signals by this rough surface normalized by the first singular value \(\sigma_{1}\). is dominated by \(R\) since \(\|R\|_{F}\gg\|S\|_{F}\). When we remove \(R\) from \(D\), there will be an effective SNR (\(\text{eSNR}=10\log_{10}(\|S\|_{F}^{2}/\|\eta\|_{F}^{2})\)) based on \(S\) which will be much lower. For this reason, we see that this subsurface imaging problem is more sensitive to noise than other imaging problems where ground bounce signals are not present. ## 6 Kirchhoff migration imaging Consider a sub-region of \(z<h(x)\) where we seek to form an image. We call this sub-region the imaging window (IW). Let \((x,z)\in\text{IW}\) denote a search point in the IW. To form an image which identifies targets and gives estimates for their locations, we evaluate the KM imaging functional, \[I^{\text{KM}}(\mathbf{y})=\left|\sum_{m=1}^{M}\sum_{n=1}^{N}\tilde{d}_{mn}a_{mn}^{ *}(x,z)\right|, \tag{10}\] over a mesh of grid points sampling the IW. Here \(\tilde{d}_{mn}\) is the \((m,n)\) entry of the matrix \(\tilde{D}\) and \(a_{mn}(x,z)\) are called the illuminations. The superscript \({}^{*}\) denotes the complex conjugate. The illuminations effectively back-propagate the data so that the resulting image formed shows peaks on the target locations. ### Computing illuminations To compute the illuminations \(a_{mn}(x,z)\) we first note that we do not know the interface \(z=h(x)\) nor do we seek to reconstruct it. However, we assume that \(\langle h(x)\rangle=0\) is known, so we consider the interface \(z=0\) instead. Additionally, we do not know the loss tangent \(\beta\) that dictates the absorption in the lower medium. In fact, we have shown previously that making use of any knowledge of the absorption is not useful for imaging to identify and locate targets (Kim & Tsogka, 2023a). However, we assume that \(\epsilon_{r}\) is known. With these assumptions, we write \[a_{mn}(x,z)=\phi_{mn}^{(0)}(x,z)\phi_{mn}^{(1)}(x,z). \tag{11}\] Here, \(\phi_{mn}^{(0)}(x,z)\) corresponds to the field on \((x,z)\) due to a point source with frequency \(\omega_{m}\) located at \(\mathbf{x}_{n}\) whose amplitude is normalized to unity. The quantity \(\phi_{mn}^{(1)}(x,z)\) is the field with frequency \(\omega_{m}\) evaluated on \(\mathbf{x}_{n}\) due to a point source at \((x,z)\) whose amplitude is normalized to unity. Using Fourier transform methods, we find that the field \(u^{(0)}\) evaluated on \((x,z)\) due to a point source with frequency \(\omega_{m}\) located at \(\mathbf{x}_{n}=(x_{n},z_{n})\) is \[u^{(0)}=\frac{\mathrm{i}}{2\pi}\int\frac{e^{\mathrm{i}(q_{0}z_{n}-q_{1}z)}}{q _{0}+q_{1}}e^{\mathrm{i}\xi(x-x_{n})}\mathrm{d}\xi, \tag{12}\] with \(q_{0}=\sqrt{\omega_{m}^{2}/c^{2}-\xi^{2}}\) and \(q_{1}=\sqrt{\epsilon_{r}\omega_{m}^{2}/c^{2}-\xi^{2}}\). Similarly, we find that the field \(u^{(1)}\) evaluated on \((x_{n},z_{n})\) due to a point source with frequency \(\omega_{m}\) located at \((x,z)\) is \[u^{(1)}=\frac{\mathrm{i}}{2\pi}\int\frac{e^{\mathrm{i}(q_{0}z_{n}-q_{1}z)}}{q_{0 }+q_{1}}e^{\mathrm{i}\xi(x_{n}-x)}\mathrm{d}\xi. \tag{13}\] Upon computing \(u^{(0)}\) and \(u^{(1)}\), we evaluate \(\phi^{(0)}_{mn}=u^{(0)}/|u^{(0)}|\) and \(\phi^{(1)}_{mn}=u^{(1)}/|u^{(1)}|\). Both \(u^{(0)}\) and \(u^{(1)}\) are integrals of the form, \[I=\int_{-\infty}^{\infty}\frac{f(\xi)}{\sqrt{k_{0}^{2}-\xi^{2}}+\sqrt{k_{1}^{2 }-\xi^{2}}}e^{\mathrm{i}\beta_{1}\sqrt{k_{0}^{2}-\xi^{2}}+\mathrm{i}\beta_{2} \sqrt{k_{1}^{2}-\xi^{2}}}e^{\mathrm{i}\xi\gamma}\mathrm{d}\xi, \tag{14}\] with \(k_{1}=k_{0}\sqrt{\varepsilon_{r}}\), and \(\beta_{1}\), \(\beta_{2}\), and \(\gamma\) denoting real parameters. The wavenumbers \(k_{0}\) and \(k_{1}\) are real, and we assume that \(|k_{0}|<|k_{1}|\). This Fourier integral, which is one example of a Sommerfeld integral, is notoriously difficult to compute due to the highly oscillatory behavior of the function inside the integral. There have been several approaches to compute this Fourier integral accurately (Cai, 2002; O'Neil et al., 2014; Bruno et al., 2016). To compute (14), we follow (Barnett & Greengard, 2011) and integrate on a deformed contour in the complex plane to avoid branch points. Here, we use the deformed contour \[\xi(s)=s+\mathrm{i}A\left[e^{-w(s+k_{0})^{2}}+e^{-w(s+k_{1})^{2}}-e^{-w(s-k_{0 })^{2}}-e^{-w(s-k_{1})^{2}}\right],\] with \(-\infty<s<\infty\), and \(A\) and \(w\) denoting user-defined parameters. Integration is taken with respect to \(s\) over a truncated, finite interval chosen so that the truncation error is smaller than the finite precision arithmetic. In the simulations that follow, we have used 500 quadrature points with \(A=0.4\) and \(w=6\). We also use the suggestion in (Barnett & Greengard, 2011) of applying the mapping \(s=\sinh(\beta)\) with \(-\infty<\beta<\infty\) to cluster quadrature points in the interval \((-k_{0},k_{0})\). ### Modified KM We have recently developed a modification to KM that allows for tunably high-resolution images of individual targets (Kim & Tsogka, 2023c). Suppose that we have evaluated (10) and identified a target. In a region about that target, we normalize \(I^{\mathrm{KM}}\) so that its peak value is 1. Let \(\bar{I}^{\mathrm{KM}}\) denote the normalization of \(I^{\mathrm{KM}}\) in this region. With this normalized image, we compute the following Mobius transformation, \[I^{\mathrm{KM}}_{\delta}(\mathbf{y})=\frac{\delta}{1-(1-\delta)\bar{I}^{\mathrm{ KM}}(\mathbf{y})}, \tag{15}\] with \(\delta>0\) denoting a user-defined tuning parameter. We call the resulting image formed with (15) the modified KM image. In the whole space, we have determined that this modified KM method scales the resolution of KM by \(\sqrt{\delta}\). Because \(\delta\) is a user-defined quantity, it can be set to be arbitrarily small. It is in this way that \(I^{\mathrm{KM}}_{\delta}\) produces tunably high-resolution images of targets. ## 7 Numerical results We now present numerical results where we have (i) simulated measurements using the procedure given in Section 4, (ii) removed the ground bounce signal using the procedure given in Section 5, and then produced images through evaluation of the KM and modified KM imaging functions given in Section 6. Just as we have done for the results shown in Section 5, we have used \(M=25\) frequencies uniformly sampling the bandwidth ranging from 3.1 GHz to 5.1 GHz and \(N=21\) spatial locations of the platform uniformly sampling the aperture \(a=1\) m situated 1 m above the average interface height \(\langle h(x)\rangle=0\). We set \(\epsilon_{r}=9\) and \(\beta=0.1\) as suggested by Daniels for modeling buried landmines (Daniels, 2006). We compute imaging results for one realization of a Gaussian-correlated rough surface that has \(h_{\rm RMS}=0.2\) cm and \(\ell=8\) cm. ### Single target Let the origin of a coordinate system correspond to the center of the flight path in the \(x\)-coordinate and the mean surface height \(\langle h(x)\rangle=0\) in the \(z\)-coordinate as shown in Fig. 1. We compute images for a target located at \((2,-8)\) cm with reflectivity \(\rho=3.4\)i. Measurement noise is added to the simulated measurements so that \({\rm SNR}=24.2\) dB. Figure 4 shows the singular values for the data matrix \(D\) normalized by the first singular value. Similar to what we observed in Section 5 with the ground bounce signals, we find that the first 5 singular values decay rapidly. The singular values \(\sigma_{j}\) for \(j>5\) show a different behavior. Thus, we apply the ground bounce removal procedure given in Section 5 using \(j^{*}=5\). We show real part of the data matrix \(D\) in the top left plot of Fig. 5. In the top right plot of Fig. 5 we show the real part of the ground bounce signals in \(R\). Note that the plots for \(D\) and \(R\) are Figure 4: Singular values of the matrix \(D\). These measurements include the ground bounce signals by one realization of a Gaussian-correlated rough surface with \(h_{\rm RMS}=0.2\) cm and \(\ell=8\) cm. Additionally, they include scattering by a point target located at \((2,-8)\) cm with \(\rho=3.4\)i. Measurement noise has been added so that \({\rm SNR}=24.2\) dB. nearly indistinguishable consistent with our assumption that the ground bounce signals dominate the measurements. In the bottom left plot of Fig. 5 we show the real part of the scattered fields in \(S\). Note that those values in \(S\) are nearly 2 orders of magnitude smaller than those of \(R\). The bottom right plot shows the real part of \(\tilde{D}\) resulting from removing the contributions from the first \(j^{*}=5\) singular values. While the magnitudes of the values in \(S\) and \(\tilde{D}\) are comparable, they appear qualitatively different from one another. Thus, it is unclear from these results whether or not \(\tilde{D}\) contains information regarding the target. In Fig. 6 we apply KM (center plot) and the modified KM with \(\delta=10^{-2}\) (right plot) to \(\tilde{D}\). For reference, we have also included the result of applying KM to \(S\) in the left plot of Fig. 6. This ideal case represents exact ground bounce removal. Despite the fact that the results for \(S\) and \(\tilde{D}\) in Fig. 5 were not qualitatively similar, the corresponding KM images in Fig. 6 are quite similar in the vicinity of the target and show peaks about the target location, \((2,-8)\)cm. The peak of the KM image (center) is accompanied by several imaging artifacts away from the target location. In contrast, by applying the modified KM method we eliminate those artifacts and obtain a high resolution image of the target. We note that the predicted location determined from where the KM and modified KM images attain their peak value on the meshed used to plot them is \((1.5,-8.2)\) cm, which is slightly shifted from the true location. Nonetheless, this result is quite good given the Figure 5: Real part of the entries of (a) the data matrix \(D\), (b) the ground bounce signals \(R\), (c) the scattered signals \(S\), and (d) the matrix \(\tilde{D}\) with the contributions from the first 5 singular values removed. uncertainty in the surface, the inexact method for ground bounce removal, unknown absorption, and substantial measurement noise in the system. The unknown absorption puts a depth limitation on imaging targets. When the target depth is comparable to the absorption length, the imaging method is not able to distinguish between the true target and a weaker target less deep in the medium. We have observed this phenomenon with optical diffusion (Gonzalez-Rodriguez, Kim, Moscoso, & Tsogka, 2018). Here, uncertainty in the rough surface complicates this situation even further. In Fig. 7 we show KM and modified KM (\(\delta=10^{-2}\)) images for a target located at \((2,-12)\) cm (top row) and for a target located at \((2,-16)\) cm. As the target is placed deeper into the medium, we observe an increase in the KM imaging artifacts. For the target located \(12\) cm below the surface, we find that these imaging artifacts contain the peak value of the function and the target is no longer identifiable in the image. The modified KM images clearly show this behavior. The inability of the imaging method to identify targets deep in the medium is either due to the absorption, the uncertainty of the rough surface, some combination of these, or possibly other factors. In Fig. 8 we show the resulting image for a target located at \((2,-16)\) cm with the reduced loss tangent, \(\beta=0.05\). All other parameters are the same as those used in the previous images. With this reduced loss tangent, we find that KM and the modified KM are clearly able to identify the target. From this result we conclude that the absorption is the main factor limiting the range of target depths for this imaging method. As we explained above, when we remove ground bounce signals, we introduce an effective SNR (eSNR) that is important for subsurface imaging. We expect that KM will be effective as long as eSNR \(>0\) dB. For the results shown in Fig. 6, SNR \(=24.2\) dB and eSNR \(=3.0\) dB. The resulting image clearly identifies the target and accurately predicts its location. In contrast, we show results for SNR \(=14.2\) dB and eSNR \(=-7.0\) dB in Fig. 9. This image has several artifacts that dominate over any peak formation about the target location. It is important to note that the Figure 6: [Left] The ideal imaged formed through evaluation of the KM imaging function (10) applied to the scattered signals contained in \(S\). [Center] The image formed through evaluation of (10) applied to \(\tilde{D}\). [Right] The imaged formed through evaluation of the modified KM imaging function (15) with \(\delta=10^{-2}\) applied to the KM image in the center. In each of the plots, the exact target location is plotted as a red “\(\odot\)” symbol. eSNR that we use here cannot be estimated _a priori_. This result demonstrates that SNR demands on imaging systems are higher for subsurface imaging problems than other imaging problems that do not involve ground bounce signals. ### Multiple targets We now consider imaging regions with 3 targets. Target 1 is located at \((-9.0,10.1)\) cm with reflectivity \(\rho_{1}=3.6\)i, target 2 is located at \((1.0,-9.4)\) cm with reflectivity \(\rho_{2}=3.4\)i and target 3 is located at \((11.0,-9.8)\) cm with reflectivity \(\rho_{3}=3.6\)i. The measurements were computed using the procedure given in Section 4. Measurement noise has been added so that \(\text{SNR}=24.2\) dB. The result from evaluating the KM imaging function (10) for this problem is shown in the left figure of Fig. 10. The corresponding result from evaluating the modified KM imaging function (15) with \(\delta=10^{-2}\) is shown in the right plot of Fig. 10. These images show that the method is capable of identifying the three targets and give good predictions for their locations. Figure 7: [Left] The imaged formed through evaluation of the KM imaging function (10). The exact target location is plotted as a red “\(\odot\)” symbol. [Right] The imaged formed through evaluation of the modified KM imaging function (15) with \(\delta=10^{-2}\). The top row is for a target located at \((2,-12)\) cm and the bottom row is for a target located at \((2,-16)\) cm. The result from the modified KM method does not show the three targets equally clearly. In fact, the peak formed near target 2 is the strongest in the KM image, so the result for the modified KM image shows target 2 most clearly. This is because the normalization of the KM image required for evaluating the modified KM image is based on target 2. As an alternative, we consider \(5\,\mathrm{cm}\,\times\,5\,\mathrm{cm}\) sub-regions about each of the peaks of the KM image. Within each of those sub-regions, we normalize the KM image and evaluate the modified KM image with \(\delta=10^{-2}\). Those results are shown in Fig. 11. Each of those sub-region images is centered about the corresponding exact target location and scaled by the central wavenumber \(k_{0}\). Even though the predicted target locations are shifted from the exact target location, these results show that these shifts are small fractions of the central wavelength. These results show that this imaging method is capable of identifying multiple targets. However, there are limitations. The targets cannot be too close to one another due to the finite resolution of KM imaging. Moreover, due to absorption in the medium, there are depth limitations to where targets can be identified. Additionally, when there are multiple targets at different depths, it is Figure 8: The same as Fig. 7(b) except that the absorption is reduced from the previous results with \(\beta=0.05\). Figure 9: [Left] KM image and [Right] modified KM image with \(\delta=10^{-2}\) for a target located at \((2,-8)\) cm with \(\mathrm{SNR}=14.2\) dB and \(\mathrm{eSNR}=-7.0\) dB. likely that those targets that are deeper than others may be not be identifiable in images. ## 8 Conclusions We have discussed synthetic aperture subsurface imaging of point targets. Here, we have modeled uncertainty about the interface between the two media with Gaussian-correlated random rough surfaces characterized by a RMS height and correlation length. The medium above the interface is uniform and lossless. The medium below the interface is uniform and lossy. The loss tangent of the medium below the interface is not known when imaging. The imaging method involves two steps. First, we attempt to remove ground bounce signals using principal component analysis. This method does not require any explicit information about the interface other than the ground bounce signals is stronger than the scattered signals. There is no _a priori_ method to choose the number of principal components to include in the ground bounce removal procedure. Instead, we have proposed to determine where the decay of the singular values Figure 11: Evaluation of the modified KM imaging function (15) with \(\delta=10^{-2}\) in sub-regions centered about each target location. Figure 10: [Left] The imaged formed through evaluation of the KM imaging function (10) for three targets. The exact target locations are plotted as a red “\(\odot\)” symbol. [Right] The image formed through evaluation of the modified KM imaging function (15) with \(\delta=10^{-2}\). Measurement noise is added so that \(\text{SNR}=24.2\) dB. changes behavior and use that for the ground bounce removal procedure. Using the resulting matrix after removing the ground bounce signal, we apply Kirchhoff migration (KM) and our modification to it that allows for tunably high resolution images of targets. In our implementation of KM imaging, we compute so-called illuminations for the problem with a flat interface at the mean interface height using only the real part of the relative dielectric permittivity for the medium below that interface, so we completely neglect the unknown absorption in the medium. Our numerical results show that despite uncertainty in the interface, the inexactness of the ground bounce removal procedure, unknown absorption, and measurement noise, this imaging method is able to identify and locate targets robustly and accurately. However, there are limitations to the capabilities of this imaging method. The main limitation for this imaging method is that targets cannot be too deep below the interface. Absorption attenuates the scattered power and depends on the path length of signals. When targets are deep below the interface, the path length of scattered signals are too large and attenuation renders those scattered signals undetectable within the dynamic range of measurements. Additionally, targets cannot be too closely situated to one another. The KM imaging method is limited in its resolution. If targets are situated closer than the resolution capabilities of KM, they cannot be distinguished. Despite the limitations of this imaging method, we find these results to be a promising first step toward practical imaging problems. A key extension of this work will be to incorporate quantitative imaging methods that will open opportunities for target classification in addition to identification and location. We have recently developed methods for recovering the radar cross-section (RCS) for dispersive point targets when there is no ground bounce signal (Kim & Tsogka, 2023b). Recovering the RCS for individual targets can be used to classify targets by properties related to their size or material properties when their shape or other geometrical features are not available for recovery. The challenge with quantitative imaging methods for this problem will be addressing both the unknown absorption and uncertain rough interface. As mentioned previously, absorption will attenuate the power scattered by targets. Moreover, it will attenuate power non-uniformly over frequency which introduces new challenges. The uncertainty in the rough interface also affects our ability to recover quantitative information. Because our method for removing ground bounce signals from an unknown rough surface is approximate, it yields errors in the phase which impeded the recovery of quantitative information. Developing extensions that allow for quantitative subsurface imaging is the subject of our future work. ## Appendix: Numerical solution of the system of boundary integral equations The method that we use to compute realizations of the Gaussian-correlated rough surface (Tsang et al., 2004) uses discrete Fourier transforms, which assumes periodicity over the interval \([-L/2,L/2]\) The truncated domain width \(L\) is chosen large enough so that edges do not strongly affect the results. In the simulations used here we set \(L=4\) m compared to the 1 m aperture and 30 cm wide imaging window. To compute the numerical solution of (6) or (7), we first truncate the integrals to the interval \(-L/2\leq\xi\leq L/2\) and then replace those integrals with numerical quadrature rules. The result of this approximation is a finite dimensional linear system of equations suitable for numerical computation. Because the rough surfaces are periodic, we use the periodic trapezoid rule (composite trapezoid rule for a periodic domain). However, because the integral operators in (6) and (7) are weakly singular, we need to make modifications to the periodic trapezoid rule which we explain below. We discuss the modification to the periodic trapezoid rule we use for the integrals, \[I_{D}(s)=\int_{-L/2}^{L/2}\frac{\partial G(s,h(s);t,h(t))}{\partial n}\sqrt{1 +(h^{\prime}(t))^{2}}U(t)\mathrm{d}t,\] (A1) and \[I_{S}(s)=\int_{-L/2}^{L/2}G(s,h(s);t,h(t))V(t)\mathrm{d}t,\] (A2) with \[G(s,h(s);t,h(t))=\frac{\mathrm{i}}{4}H_{0}^{(1)}\left(k\sqrt{(s-t)^{2}+(h(s)-h (t))^{2}}\right).\] Let \(t_{j}=-L/2+(j-1)\Delta t\) for \(j=1,\ldots,M\) denote the \(M\) quadrature points with \(\Delta t=L/M\). By applying the periodic trapezoid rule to (A1) and (A2) and evaluating that result on \(s=t_{i}\), we obtain \[I_{D}^{M}(t_{i})=\Delta t\sum_{j=1}^{M}\frac{\partial G(t_{i},h(t_{i});t_{j},h (t_{j}))}{\partial n}\sqrt{1+(h^{\prime}(t_{j}))^{2}}U(t_{j}),\] and \[I_{S}^{M}(t_{i})=\Delta t\sum_{j=1}^{M}G(t_{i},h(t_{i});t_{j},h(t_{j}))V(t_{j}).\] Let \(A\) be the \(M\times M\) matrix whose entries are \[a_{ij}=\Delta t\frac{\partial G(t_{i},h(t_{i});t_{j},h(t_{j}))}{\partial n} \sqrt{1+(h^{\prime}(t_{j}))^{2}},\] (A3) and let \(B\) be the \(M\times M\) matrix whose entries are \[b_{ij}=\Delta tG(t_{i},h(t_{i});t_{j},h(t_{j})).\] (A4) With these matrices defined, the approximations for the integral operators given above are matrix-vector products. The problem with these results is that the kernels for \(I_{D}^{M}\) and \(I_{S}^{M}\) are singular on \(t_{j}=t_{i}\), so the diagonal entries of \(A\) and \(B\) cannot be specified. The modification to the periodic trapezoid rule we make is to replace the diagonal entries of \(A\) and \(B\) by \[a_{ii}=U(t_{i})\int_{t_{i}-\Delta t/2}^{t_{i}+\Delta t/2}\frac{\partial G(t_{i}, h(t_{i});t,h(t))}{\partial n}\sqrt{1+(h^{\prime}(t))^{2}}\mathrm{d}t,\] and \[b_{ii}=V(t_{i})\int_{t_{i}-\Delta t/2}^{t_{i}+\Delta t/2}G(t_{i},h(t_{i});t,h( t))\mathrm{d}t.\] Note that we have assumed that \(U(t)\) and \(V(t)\) are approximately constant over this interval thereby allowing us to factor them out from the integral. Substituting \(t=t_{i}+\tau\) and \(\mathrm{d}t=\mathrm{d}\tau\), we obtain \[a_{ii}=U(t_{i})\int_{-\Delta t/2}^{\Delta t/2}\frac{\partial G(t_{i},h(t_{i}); t_{i}+\tau,h(t_{i}+\tau))}{\partial n}\sqrt{1+(h^{\prime}(t_{i}+\tau))^{2}} \mathrm{d}\tau,\] and \[b_{ii}=V(t_{i})\int_{-\Delta t/2}^{\Delta t/2}G(t_{i},h(t_{i});t_{i}+\tau,h(t_ {i}+\tau))\mathrm{d}\tau.\] Next, we evaluate the expressions involving \(G\) and find that \[\frac{\partial G(t_{i},h(t_{i});t_{i}+\tau,h(t_{i}+\tau))}{ \partial n}\sqrt{1+(h^{\prime}(t_{i}+\tau))^{2}}\\ =-\frac{\mathrm{i}k}{4}\left[h^{\prime}(t_{i})\tau-h(t_{i})+h(t_{ i}+\tau)\right]\frac{H_{1}^{(1)}(k\sqrt{\tau^{2}+(h(t_{i})-h(t_{i}+\tau))^{2}})}{ \sqrt{\tau^{2}+(h(t_{i})-h(t_{i}+\tau))^{2}}},\] and \[G(t_{i},h(t_{i});t_{i}+\tau,h(t_{i}+\tau))=\frac{\mathrm{i}}{4}H_{0}^{(1)}(k \sqrt{\tau^{2}+(h(t_{i})-h(t_{i}+\tau))^{2}})\] Expanding about \(\tau=0\), we find \[\frac{\partial G(t_{i},h(t_{i});t_{i}+\tau,h(t_{i}+\tau))}{\partial n}\sqrt{1 +(h^{\prime}(t_{i}+\tau))^{2}}=\frac{h^{\prime\prime}(t_{i})}{4\pi(1+(h^{ \prime}(t_{i}))^{2})}+O(\tau^{2}),\] and \[G(t_{i},h(t_{i});t_{i}+\tau,h(t_{i}+\tau))=\frac{1}{4\pi}\left[-2\gamma+ \mathrm{i}\pi-2\log\left(\frac{1}{2}k|\tau|\sqrt{1+(h^{\prime}(t_{i}))^{2}} \right)\right]+O(\tau^{2}),\] with \(\gamma=0.5772\dots\) denoting the Euler-Mascheroni constant. Integrating these expressions over \(-\Delta t/2\leq\tau\leq\Delta t/2\), we set \[a_{ii}=\frac{\Delta t}{4\pi}\frac{h^{\prime\prime}(t_{i})}{1+(h^{\prime}(t_{i} ))^{2}},\] (A5) \[b_{ii}=\frac{\Delta t}{2\pi}\left[1-\gamma+\mathrm{i}\frac{\pi}{2}-\log\left(\frac{ 1}{4}k\Delta t\sqrt{1+\left(h^{\prime}(t_{i})\right)^{2}}\right)\right].\] (A6) Thus, to form the matrix \(A\), we evaluate (A3) for all \(i\neq j\) and (A5) for \(i=j\). Similarly, to form the matrix \(B\), we evaluate (A4) for all \(i\neq j\) and (A6) for \(i=j\). With these matrices, we seek the vectors of unknowns, \(\mathbf{u}=(U(t_{1}),\ldots,U(t_{M}))\) and \(\mathbf{v}=(V(t_{1}),\ldots,V(t_{M}))\) through solution of the block system of equations, \[\begin{bmatrix}\frac{1}{2}I-A_{0}&B_{0}\\ \frac{1}{2}I+A_{1}&-B_{1}\end{bmatrix}\begin{bmatrix}\mathbf{u}\\ \mathbf{v}\end{bmatrix}=\begin{bmatrix}\mathbf{f}_{0}\\ \mathbf{f}_{1}\end{bmatrix}.\] Here \(I\) is the identity matrix, \(A_{0}\) and \(B_{0}\) correspond to evaluation of the \(A\) and \(B\) matrices with wavenumber \(k_{0}\) and \(A_{1}\) and \(B_{1}\) correspond to evaluation of the \(A\) and \(B\) matrices with wavenumber \(k_{1}=k_{0}\sqrt{\epsilon_{r}(1+\mathrm{i}\beta)}\). The right-hand side block vectors contain the evaluation of the source above the interface \(\mathbf{f}_{0}\) and below the interface \(\mathbf{f}_{1}\) on the set of interface points \((t_{j},h(t_{j}))\) for \(j=1,\ldots,M\). ## Acknowledgments The authors acknowledge support by the Air Force Office of Scientific Research (FA9550-21-1-0196). A. D. Kim also acknowledges support by the National Science Foundation (DMS-1840265). ## Data Availability Statement The data and numerical methods used in this study are available at Zenodo via [https://doi.org/10.5281/zenodo.7754256](https://doi.org/10.5281/zenodo.7754256)
2301.05388
The mixing of dust and gas in the high latitude translucent cloud MBM 40
Context. High latitude molecular clouds (hereafter HLMCs) permit the study of interstellar gas dynamics and astrochemistry with good accuracy due to their proximity, generally clear lines of sight, and lack of internal star-forming activity which can heavily modify the physical context. MBM 40, one of the nearest HLMCs, has been extensively studied, making it a superb target to infer and study the dust-to-gas mixing ratio (DGMR). Aims. The mixing of dust and gas in the interstellar medium remains a fundamental issue to keep track of astrochemistry evolution and molecular abundances. Accounting for both molecular and atomic gas is difficult because $H_2$ is not directly observable and HI spectra always show different dynamical profiles blended together which are not directly correlated with the cloud. We used two independent strategies to infer the molecular and atomic gas column densities and compute the dust-to-gas mixing ratio. Methods. We combined $HI$ 21 cm and $^{12}CO$ line observations with the IRAS 100 $\mu$m image to infer the dust-to-gas mixing ratio within the cloud. The cloud 21 cm profile was extracted using a hybrid Gaussian decomposition where $^{12}CO$ was used to deduce the total molecular hydrogen column density. Infrared images were used to calculate the dust emission. Results. The dust-to-gas mixing ratio is nearly uniform within the cloud as outlined by the hairpin structure. The total hydrogen column density and 100 $\mu$m emissivity are linearly correlated over a range in $N(H_{tot})$ of one order of magnitude.
Marco Monaci, Loris Magnani, Steven N. Shore
2023-01-13T04:29:03Z
http://arxiv.org/abs/2301.05388v1
# The mixing of dust and gas in the high latitude translucent cloud MBM 40 ###### Abstract Context:High latitude molecular clouds (hereafter HLMCs) permit the study of interstellar gas dynamics and astrochemistry with good accuracy due to their proximity, generally clear lines of sight, and lack of internal star-forming activity which can heavily modify the physical context. MBM 40, one of the nearest HLMCs, has been extensively studied, making it a superb target to infer and study the dust-to-gas mixing ratio (DGMR). Aims:The mixing of dust and gas in the interstellar medium remains a fundamental issue to keep track of astrochemistry evolution and molecular abundances. Accounting for both molecular and atomic gas is difficult because H\({}_{2}\) is not directly observable and H i spectra always show different dynamical profiles blended together which are not directly correlated with the cloud. We used two independent strategies to infer the molecular and atomic gas column densities and compute the dust-to-gas mixing ratio. Methods:We combined H i 21 cm and \({}^{12}\)CO line observations with the IRAS 100 \(\mu\)m image to infer the dust-to-gas mixing ratio within the cloud. The cloud 21 cm profile was extracted using a hybrid Gaussian decomposition where \({}^{12}\)CO was used to deduce the total molecular hydrogen column density. Infrared images were used to calculate the dust emission. Results:The dust-to-gas mixing ratio is nearly uniform within the cloud as outlined by the hairpin structure. The total hydrogen column density and 100 \(\mu\)m emissivity are linearly correlated over a range in N(H\({}_{\rm tot}\)) of one order of magnitude. Conclusions: ## 1 Introduction The admixture of dust and gas is an essential, but poorly determined, property of the interstellar medium. It affects the modeling of radiative balance, moderates astrochemical processes, and affects star formation within molecular clouds (for the theoretical aspect see, e.g., Lee et al. (2017), Tricco et al. (2017), Marchand et al. (2021); for the observational picture, e.g., Reach et al. (2017)). The widely adopted value for the mass ratio in our Galaxy is \(\sim 100-150\)(Hildebrand (1983)), but it varies in other galaxies (Young et al. (1986)) and even in clouds in our Galaxy (Reach et al. (2015)).1 Footnote 1: We must emphasize at the beginning of this Letter that our aim is to study the mixing of the components and not the mass ratio, as conventionally discussed, because we are trying to avoid the uncertainties induced by adopting specific dust models. The usual procedure to obtain the dust-to-gas ratio (hereafter DGR) infers the neutral hydrogen column densities from the dust extinction which is then used to obtain the dust mass. Alternatively, atomic resonance transitions have been used to infer the atomic column densities, but these are altered by density-dependent depletion in the gas phase and from the spotty sampling of any intervening clouds because of the sparse coverage by stars and extragalactic sources. The sparseness of background sources also complicates the distance determination of clouds using standard methods such as star counting, Wolf diagrams, or multiband extinction measurements (e.g., Sun et al. (2021), Lv et al. (2018), Liljestrom & Mattila (1988)). These techniques are hampered by the proximity of the translucent and high latitude molecular clouds (see, e.g., Magnani & Shore (2017)). Moreover, different lines of sight through these clouds may sample very different structures (e.g., Lombardi et al. (2014)). If the DGRs differ, this complicates extinction corrections and inferences about the cloud masses based on infrared imaging. It is now possible, however, to obtain the atomic column densities directly following the completion of all-sky H i surveys that complement the infrared maps obtained for Galactic and cosmological surveys. The object of our study, MBM 40, is a small molecular structure embedded in a larger H i flow (Shore et al. (2003)). The molecular gas distribution has been discussed extensively (Magnani et al. (1985), MBM; Lee et al. (2002); Shore et al. (2003), SMLM; Shore et al. (2006), SLCM; Chastain et al. (2009), CCM10). The CO(1-0) molecular gas distribution is complex with the denser gas in a hairpin structure surrounded by a diffuse envelope. The distance to the cloud is 93 pc (Zucker et al. (2019)) and previous studies (Magnani et al. (1996b); SMLM; CCM10) have derived a molecular mass of 20 - 40 \(\,\mathrm{M}_{\odot}\). There is no evidence of star formation despite a rigorous search (Magnani et al. (1996a)). This cloud provides an exemplar of a non-star-forming medium where no internal processing has affected the dust properties. We should, therefore, see a more nearly pristine presentation of the DGR and its uniformity than would be obtained from a more active source. ## 2 Data We have used a range of archival data for this study. We briefly describe here the individual data sets. ### Galfa The Galactic Arecibo L-Band Feed Array HI (GALFA-HI, Peek et al. (2011),) is an extended survey between \(-1^{\circ}\leq\delta\la 38^{\circ}\) with an angular resolution of \(4^{\prime}\) and a 0.184 km s\({}^{-1}\) spectral resolution using the William E. Gordon 305-m telescope at the Arecibo Observatory (Peek et al. (2018)).2. We used the narrow bandwidth data repository (see Peek (2017)) centered at 0 km s\({}^{-1}\); each spectrum has 2 048 channels spanning \(|v_{\rm 1,SR}|\la 188\) km s\({}^{-1}\). We used ancillary data furnished with the GALFA datacube to correct for stray radiation and cropped the PPV datacube in position to match the extension and velocity (\(|v_{\rm LSR}|\la 15\) km s\({}^{-1}\)) of MBM 40, which is detected between \(\sim 2\) and \(\sim 4\) km s\({}^{-1}\) in molecular gas. The top panel of Fig. 1 shows an example of a GALFA narrowband spectrum (within the range \(|v_{\rm LSR}|\la 20\) km s\({}^{-1}\)) with a signal-to-noise ratio (S/N) \(\sim 60\); each spectrum within the cloud has a similar S/N. Footnote 2: During these observations the Arecibo Observatory was operated by SRI International under a cooperative agreement with the National Science Foundation, and in alliance with Ana G. Méndez-Universidad Metropolitana, and the Universities Space Research Association. ### Fcrao The \({}^{12}\)CO observations were obtained using the Five College Radio Astronomy Observatory (FCRAO3) in early 2000. The full datacube is composed of 24 576 frequency-switched spectra with a velocity resolution of about 0.05 km s\({}^{-1}\). We used only the 56 central channels centered at 3 km s\({}^{-1}\). The average rms noise value is 0.7 K (Shore et al. (2003)). The \({}^{12}\)CO radiation temperature (\(T_{\rm R}\)) was calculated taking the antenna temperature and then dividing by \(\eta_{\rm its}\eta_{\rm c}\), where \(\eta_{\rm its}\) is the forward-scattering and spillover efficiency (\(\simeq 0.7\)) and \(\eta_{\rm c}\) is the source filling factor, which we assumed to be unity (see SMLM for a complete discussion of FCRAO observations). The bottom panel of Fig. 1 shows the equivalent FCRAO \({}^{12}\)CO profile near the same position of the H i GALFA spectrum. The H i profile shows different components, one of which is compatible with the MBM 40 bulk velocity of \(\sim 3\) km s\({}^{-1}\). Footnote 3: The FCRAO was supported in part by the National Science Foundation and was operated with the permission of the Metropolitan District Commission, Commonwealth of Massachusetts. The HI and \({}^{12}\)CO temperature contours for a restricted velocity range are shown in Fig. 2. Here we identify two positions to which we refer in the following sections. ### IRAS 100 \(\mu\)m dust map For the dust distribution, we used the IRAS 100 \(\mu\)m IRIS images that have a spatial resolution of about \(2^{\prime}\) (Miville-Deschenes & Lagache (2005)). These were published after our first study of MBM 40 and have reduced striping, improved zodiacal light subtraction, and a zero level compatible with DIRBE. The image is an interpolated \(50\times 50\) pixel matrix with the same H i and \({}^{12}\)CO spatial resolution centered at \(RA=16^{10}\)\({}^{m}\)57\(\aas@@fstack{s}\)19, \(DEC=+21\)\(\aas@@fstack{\circ}\)52\(\aas@@fstack{\prime}\)28. We did not use Planck data for the principal study because the spatial resolution is slightly lower and further reduced by a required interpolation to a common coordinate grid (from galactic to equatorial). We show in the Appendix, however, that our conclusions are unchanged based on Planck. Figure 1: Sample spectra of MBM 40 used in this study. _(Top panel.)_ Single GALFA spectrum at \(\alpha=242.7583^{\circ},\delta=+21.8084^{\circ}\) (J2000.0), in the lower part of the western ridge of MBM 40. _(Bottom panel.)_ Sample FCRAO \({}^{12}\)CO spectrum near the same position as the H i, obtained by averaging 64 spectra in an \(8\times 8\) pattern to enhance the S/N. The vertical dashed line shared by two plots indicates the bulk velocity of the cloud (3.35 km s\({}^{-1}\)). The H i spectrum shows the brightness temperature (\(T_{\rm R}\)), and the \({}^{12}\)CO spectrum is in terms of the radiation temperature (\(T_{\rm R}\)). Figure 2: H i brightness temperature at 3.4 km s\({}^{-1}\) with integrated line \({}^{12}\)CO intensity (black contours at levels 2, 3, and 4 K km s\({}^{-1}\)). White circles denote the positions to which we refer in the text. In this velocity slice (very near the molecular cloud bulk velocity), the H i emission and \({}^{12}\)CO emission (position A) are anticorrelated. ## 3 Data analysis ### Velocity slices The comparison between \({}^{12}\)CO and H i profiles in different velocity slices is shown in Fig. 3. Because the spectral resolution of H i is coarser than \({}^{12}\)CO, a linear interpolation was performed to match the velocity resolution of the FCRAO data (see the next subsection for further details). The H i is more extended in velocity than the molecular gas traced by \({}^{12}\)CO. It appears as a "cocoon" (Shore et al. (2003),Verschur (1974)) around cloud molecular gas and shows some internal structures. We note that H i is present at both lower and higher velocities (i.e., \(v<2.25\) km s\({}^{-1}\) or \(v>4\) km s\({}^{-1}\)) where no \({}^{12}\)CO is present, tracing foreground and background gas. The H i spatial resolution is sufficient to compare the atomic and molecular gas structures mapped by \({}^{12}\)CO. The velocity slices show an anticorrelation between atomic and molecular gas, especially at position A, marked by a white circle in the 3 km s\({}^{-1}\) slice in Fig. 3. The \({}^{12}\)CO enhancement is associated with a lower atomic gas column density, indicative of a phase transition. This condensation is visible in all velocity slices. ### Data rebinning and interpolation The 21 cm and \({}^{12}\)CO maps differ in spectral and spatial resolutions. To compare these data, spectral and spatial linear interpolations were performed. Each H i spectrum was linearly interpolated using the \({}^{12}\)CO FCRAO velocities (\(\Delta v=0.05\) km s\({}^{-1}\)) as query points. The final spectral H i resolution is 3.5 times higher than the original GALFA resolution. Each H i spectrum has been checked for artifacts and, although linear interpolation is sensitive to S/N, in our case S/N is over 60 for each spectrum. We then performed a 2D linear interpolation for both H i and FCRAO data to a common 50 \(\times\) 50 pixel grid, covering about 0:8 in RA and 1\({}^{\circ}\) in DEC. Consequently, using this procedure at each position yields linked GALFA and FCRAO spectra with the same velocity resolution. ### H i profile decomposition Each H i profile is a blend of multiple components. Because the 21 cm transition is almost always optically thin for lines of sight toward high-latitude clouds, the full line of sight contributes to the observed profile, not only to gas associated with MBM 40, and significant effort was made to decompose each HI spectrum into simpler Gaussian components (see Murray et al. (2021), Lindner et al. (2015), Pingel et al. (2013)). In this work we effected a two-step decomposition of each profile guided by velocity information from \({}^{12}\)CO as explained below. We used Gaussian profiles to separate the foreground and background as well as cloud neutral hydrogen. The \({}^{12}\)CO observations constrain a velocity window in which the cloud is embedded, so it is possible to use this information to trim the H i spectra. We emphasize that the choice of Gaussian fitting was used only to select that gas that is connected to the cloud. This is different from recent Gaussian decomposition studies aimed at characterizing the fine structure of the cold neutral medium (e.g., Murray et al. (2021)). We started the H i decomposition by subtracting a very broad, diffuse component (see, Verschur & Magnani (1994)) from each GALFA profile, fitting a Gaussian only to the wings of the profile outside the interval \(-7.55\leq v_{\rm LSR}\leq 7.35\) km s\({}^{-1}\), and then we fit a second Gaussian to the residual emission. Fig. 4 (left panel) shows an example of the procedure. We excluded positions where there is no clear double profile (i.e., where only diffuse gas is present in the neighborhood of MBM 40), and we fit only the redward wing of each H i residual emission, avoiding points below 2 km s\({}^{-1}\) (see Fig. 4, right panel). This procedure cannot map all the gas because we do not know the parent distribution; however, the narrow Gaussian component is at least proportional to the total gas connected with MBM 40. ### Molecular and atomic gas column density If H i is optically thin, as in MBM 40, and if we assume that one temperature dominates each velocity channel, then the total atomic hydrogen column density is given by the following (see for example Draine (2010)): \[N({\rm H\,{\sc i}})=1.813\cdot 10^{18}\int_{-\infty}^{+\infty}T_{\rm{B,H \,{\sc i}}}(v)\;{\rm d}v\;\;\;\;\;{\rm cm}^{-2}, \tag{1}\] where \(T_{\rm{B,H\,{\sc i}}}(v)\) is the brightness temperature in K and \(v\) is the velocity in km s\({}^{-1}\). The molecular hydrogen column density, \(N({\rm H_{2}})\), was obtained using the integrated \({}^{12}\)CO line in \(T_{\rm R}\) multiplied by a conversion factor (\(X_{\rm CO}\) in units of cm\({}^{-2}\) K\({}^{-1}\) km\({}^{-1}\) s): \[N({\rm H_{2}})=X_{\rm CO}W({\rm CO}_{\rm{2=1\to 0}})\;\;\;\;\;{\rm cm}^{-2}, \tag{2}\] where W(CO) is the velocity-integrated \({}^{12}\)CO radiation temperature. The value of \(X_{\rm CO}\) is not constant in the Galaxy or even inside the same cloud (Bolatto et al. (2013)), and it is a source of systematic uncertainty. Cotten & Magnani (2013) found that for MBM 40, the \(X_{\rm CO}\) factor spans from \((0.6-3.3)\cdot 10^{20}\) with an average of \(1.3\cdot 10^{20}\): we adopted this value as well as a systematic uncertainty of about a factor of two. For each FCRAO position, we evaluated \(N({\rm H_{2}})\) as follows: \[N({\rm H_{2}})=1.3\cdot 10^{20}\;{\rm K}^{-1}\;{\rm km}^{-1}\;{\rm s}\int_{- \infty}^{+\infty}T_{\rm{R,CO}}(v)\;{\rm d}v\;\;\;\;\;{\rm cm}^{-2}, \tag{3}\] where \(T_{\rm{R,CO}}(v)\) is the radiation temperature for \({}^{12}\)CO (cf. SMLM). Fig. 5 shows the derived column densities for atomic and molecular hydrogen. Using equations 1 and 3, we obtain \(N({\rm H_{tot}})=2N({\rm H_{2}})+N({\rm H\,{\sc i}})\) within the cloud boundary. Fig. 6 shows N(H\({}_{\rm tot}\)). The enhancement is quite steep inside the cloud, where the total column density reaches \(N({\rm H_{tot}})\approx 16\cdot 10^{20}\) cm\({}^{-2}\). We assumed 93 pc as the distance of the cloud (Zucker et al. (2019)). Thus, the spatial separation between diffuse gas and the maximum hydrogen column density near position A is \(\approx 0.18\;pc\), similar to that in \({}^{12}\)CO, where the \({}^{12}\)CO fades out more or less ten beams away. The cloud shows a broad atomic gas environment within the \(N({\rm H_{tot}})=2\cdot 10^{20}\) cm\({}^{-2}\) contour (i.e., the outermost contour in Fig. 6) where the \({}^{12}\)CO is too faint to be detected with reasonable integration times. The distribution of atomic gas is consistent with Shore et al. (2003), where it was modeled with a "cocoon" shape, but using higher fidelity from GALFA revealed some internal structure. Position B (the red dot on the top of Fig. 6) shows \({}^{12}\)CO weak lines with virtually no \({}^{13}\)CO. ### Dust-to-gas mixing ratio (DGMR) The DGMR was obtained using the IRIS 100 \(\mu\)m image. We linearly interpolated the image to obtain a \(50\times 50\) matrix, in which each pixel has a relative H i and \({}^{12}\)CO spectra, so we were able to directly perform a simple division to obtain the DGMR. The mean 100 \(\mu\)m of the MBM 40 complex is a few MJy sr\({}^{-1}\) and the hydrogen column density is about \(10^{20}\) cm\({}^{-2}\), so we scaled the latter by \(10^{-20}\) to obtain comparable values with dust emission and then used the following ratio: \[\mathrm{DGMR}=\frac{E_{100}}{N(\mathrm{H_{tot}})}\,10^{20}\left(\mathrm{MJy\ sr^{-1}\ cm^{2}}\right), \tag{4}\] where \(E_{100}\) is the total emission of the IRIS 100 \(\mu\)m image and \(N(\mathrm{H_{tot}})\) is the total hydrogen column density. The derived ratio (Fig. 7) is nearly constant within the complex, indicating that the dust and gas are well mixed. We tested the robustness of our procedure by calculating the DGMR using the complete H i spectra, that is without Gaussian decomposition and including the atomic gas over all velocities. The result is nearly the same because the majority of gas is in molecular form (mapped by \({}^{12}\)CO). Considering the full H i velocity range, Figure 4: (_Left panel.1_) Subtraction of extremely diffuse gas from H i sample profile. Bold gray lines are wings fitted by a single Gaussian (black dashed line) and the gray dashed line is the section we excluded from the fit, the solid black line is the result of the subtraction, where the diffuse component is severely reduced. (_Right panel.1_) Narrow component extraction: crosses indicate points below 2 km s\({}^{-1}\) which we suppose are not directly linked with gas traced by \({}^{12}\)CO and which were excluded for the narrow Gaussian fit (black solid line). Circles denote points we used for the fit. Figure 5: Total column density of H i (left panel) and H\({}_{2}\) (right panel). Outside the jagged boarders visible via N(H i), the column densities were not calculated due to a lack of \({}^{12}\)CO emission. We note that near position A, there is a deficit in atomic hydrogen where an enhancement in \({}^{12}\)CO is present. Figure 3: H i velocity slices (color scale) compared to the same \({}^{12}\)CO velocity slices (black contours). Contours span from 1 K to 5 K in steps of 1 K. The white circle in the 3 km s\({}^{-1}\) panel marks position A in the western ridge. the N(H i) is nearly doubled, but it accounts for about 30% of the total gas. However, without any Gaussian decomposition, the result is much more sensitive to the \(X_{\rm CO}\) factor: if we use a lower value, that is to say 0.6 (instead of 1.3), the DGMR changes abruptly and the MBM 40 cloud is barely visible. Conversely, if we decompose the H i profiles, the DGMR remains constant and also the cloud structure is clearly visible. Therefore, we can conclude that not all H i gas mapped by GALFA is linked with MBM 40, but only the narrow component that we extracted. The qualitative appearance from the map in Fig. 7 is quantified by the linearity of the relation of the scatterplot of the total gas column density versus 100 \(\mu\)m emissivity (see Fig. 8). ## 4 Discussion and conclusions We have shown that the dust is well mixed with gas within the translucent cloud MBM 40, both in densest and rarefied regions, indicating that HLMCs similar to MBM 40 are ideal comparisons for modelling how dust affects gas condensation. Liseau et al. (2015) used a different molecule, N\({}_{2}\)H\({}^{+}\) instead of CO which they argued would be more condensed, to study the dust-to-gas mass ratio in \(\rho\) Oph, a dense star-forming region. In contrast, we are only concerned with the degree to which the atomic and molecular gas, and the dust, are mixed within this region. Consequently, we do not need to make any assumptions about the intrinsic dust properties, only that the dust temperature is approximately constant across the cloud. This is valid for MBM 40, as confirmed by the Planck dust temperature map. Murray et al. (2021) provide a detailed picture of the neutral hydrogen column densities and optical depths of diffuse gas at high Galactic latitudes. Their mean value for N(H i), around 2 \(\times\) 10\({}^{20}\)cm\({}^{-2}\), is similar to the highest values we have for the cocoon of MBM 40. In contrast, within the cloud boundaries, the neutral hydrogen is depleted relative to inferred H\({}_{2}\) and there we find a total hydrogen column density an order of magnitude greater (Fig. 6), similar to the much coarser result presented in SMLM. The maximum in N(H\({}_{tot}\)) corresponds to the minimum in N(H i). As Murray et al. found for their sample, the gas in MBM 40 is not associated with any large structure, such as a supernova remnant or bubble, but it appears to be a transition to molecular gas within a more extended neutral hydrogen filamentary shear flow. The cloud-associated atomic gas is connected in space and radial velocity to more extended regions at distances up to 10 pc in which there is no evidence for excess IRIS 100 \(\mu\)m emission, even in those locations where N(H i) is about the same as for MBM 40. The comparatively small scale of the molecular structures in MBM 40 provides a testbed for understanding the phase transi Figure 8: Scatter plot of the total column density versus 100 \(\mu\)m emissivity showing a linear relation. The red dashed line is the best fit with a slope of \(0.316\pm 0.005\)\(MJy\)\(sr^{-2}\)\(cm^{2}\) and a correlation coefficient \(\rho=0.757^{+0.033}_{-0.035}\). Errors were evaluated by the standard deviation of the background where no signal is present in either \(N(H_{tot})\) or 100 \(\mu\)m emissivity images. Figure 6: Total hydrogen column density. The colorbar values must be multiplied by a factor of 10\({}^{30}\). Red dots indicate the positions discussed in Fig. 2. Figure 7: Map of DGMR. The colorbar values are expressed in 10\({}^{30}\) MJy sr\({}^{-1}\) cm\({}^{2}\). Red dots indicate the same positions as reported in Fig. 2. tion of the diffuse gas in isolated environments. The gas and dust are well mixed regardless of the total gas density. This result should inform turbulent mixing simulations and studies of grain chemistry. Our next paper, which is currently in preparation, will present the dynamical and astrochemical tracers. ###### Acknowledgements. We thank the Arecibo William E. Gordon Observatory staff and GALFA team for HI 21cm database. The \({}^{12}\)CO FCRAO data are obtained from SMLM 2003 study. IRIS infrared images are obtained using the SkyView Virtual Observatory and NASA/IPAC Infrared Science Archive. We also thank the referee for valuable suggestions that extended the discussion. ## References * Bolatto et al. (2013) Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, ARA & A, 51, 207 * Chastain et al. (2009) Chastain, R. J., Cotten, D., & Magnani, L. 2009, AJ, 139, 267 * Cotten & Magnani (2013) Cotten, D. L. & Magnani, L. 2013, MNRAS, 436, 1152 * Draine (2010) Draine, B. T. 2010, Physics of the interstellar and intergalactic medium (Princeton University Press) * Hildebrand (1983) Hildebrand, R. H. 1983, QJRAS, 24, 267 * Lee et al. (2017) Lee, H., Hopkins, P. F., & Squire, J. 2017, MNRAS, 469, 3532 * Lee et al. (2002) Lee, Y., Chung, H. S., & Kim, H. 2002, JKAS, 35, 97 * Liljestrom & Mattila (1988) Liljestrom, T. & Mattila K. 1988, A & A, 196, 243 * Lindner et al. (2015) Lindner, R. R., Vera-Ciro, C., Murray, C. E., et al. 2015, AJ, 149, 138 * Lesau et al. (2015) Lesau, R., Larsson, B., Luntila, T., et al. 2015, A&A, 578, A131 * Lombardi et al. (2014) Lombardi, M., Bouy, H., Alves, J., & Lada, C. J. 2014, A&A, 566, A45 * Lv et al. (2018) Lv, Z.-p., Jiang, B.-w., & Li, J. 2018, ChA & A, 42, 213 * Magnani et al. (1985) Magnani, L., Blitz, L., & Mundy, L. 1985, ApJ, 295, 402 * Magnani et al. (1996a) Magnani, L., Caillault, J.-P., Hearty, T., et al. 1996a, ApJ, 465, 825 * Magnani et al. (1996b) Magnani, L., Hartmann, D., & Speck, B. G. 1996b, ApJS, 106, 447 * Magnani & Shore (2017) Magnani, L. & Shore, S. N. 2017, A Dirty Window (Springer) * Marchand et al. (2021) Marchand, P., Guillet, V., Lebreuilly, U., & Mac Low, M.-M. 2021, A & A, 649, A50 * Miville-Deschenes & Lagache (2005) Miville-Deschenes, M.-A. & Lagache, G. 2005, ApJS, 157, 302 * Murray et al. (2021) Murray, C. E., Stanimirovic, S., Heiles, C., et al. 2021, ApJS, 256, 37 * Peek (2017) Peek, J. 2017, GALFA-HI DR2 Narrow Data Cubes * Peek et al. (2011) Peek, J., Heiles, C., Douglas, K. A., et al. 2011, ApJS, 194, 20 * Peek et al. (2018) Peek, J. E. G., Babler, B. L., Zheng, Y., et al. 2018, ApJS, 234, 2 * Pingel et al. (2013) Pingel, N. M., Stanimirovic, S., Peek, J., et al. 2013, ApJ, 779, 36 * Planck Collaboration (2020) Planck Collaboration IV. Planck 2020, A&A, 641, A4 * Reach et al. (2017) Reach, W. T., Bernard, J.-P., Jarrett, T. H., & Heiles, C. 2017, ApJ, 851, 119 * Reach et al. (2015) Reach, W. T., Heiles, C., & Bernard, J.-P. 2015, ApJ, 811, 118 * Shore et al. (2006) Shore, S. N., LaRosa, T. N., Chastain, R. J., & Magnani, L. 2006, A & A, 457, 197 * Shore et al. (2003) Shore, S. N., Magnani, L., LaRosa, T. N., & McCarthy, M. N. 2003, ApJ, 593, 413 * Sun et al. (2021) Sun, M., Jiang, B., Zhao, H., & Ren, Y. 2021, ApJS, 256, 46 * Tricco et al. (2017) Tricco, T. S., Price, D. J., & Laibe, G. 2017, MNRAS, 471, L52 * Verschuur (1974) Verschuur, G. L. 1974, ApJS, 27, 65 * Verschuur & Magnani (1994) Verschuur, G. L. & Magnani, L. 1994, AJ, 107, 287 * Young et al. (1986) Young, J., Schloerb, F., Kenney, J., & Lord, S. 1986, ApJ, 304, 443 * Zucker et al. (2019) Zucker, C., Speagle, J. S., Schlafly, E. F., et al. 2019, ApJ, 879, 125 ## Appendix: Analysis based the _Planck_ dust maps ### Dust-to-gas mass ratio from Planck The Planck Legacy Archive (PLA) 1 provides high-level maps, including the dust mass column density. We extracted a dust map for MBM 40 in M\({}_{\odot}\) pc\({}^{-2}\) and converted our total gas column density to 3.4 mass units to obtain the DGMR (Planck Collaboration IV. Planck 2018 results. (2020)). The result is shown in Fig. 9. Footnote 1: Based on observations obtained with Planck ([http://www.esa.int/Planck](http://www.esa.int/Planck)), an ESA science mission with instruments and contributions directly funded by ESA Member States, NASA, and Canada. There is also a DGMR correlation. The greater dispersion results from the reduced resolution caused by coordinate interpolation. But, in addition, the derivation of a single temperature for each pixel affects the correlation isolated to within the cloud. Each pixel was integrated over the line of sight and the spectral energy distribution was modeled for each of these to give a unique temperature, but we also know from the velocity analysis that there is extended foreground and background cold dust that is not associated with MBM 40. We therefore used only the dust emission from the 100 \(\mu\)m IRAS image, without any assumption as to temperature modeling, and we discuss only the mixing ratio. ### Dust temperature using Planck We selected a PLA region including MBM 40 to check whether the dust temperature is also constant through the whole cloud. The result, shown in Fig. 10, is that the temperature is nearly constant with a mean value of \(\sim 18~{}K\). It is notable that the internal cloud structure is rendered uniform, confirming that the visible substructures are regions differing in density, not temperature. Figure 9: DGMR using Planck data.
2305.10514
Asymptotic equivalence of a subclass of WCLT non degenerate GKSL generator
We prove the assymptotic equivalence of a sequence of block diagonal matrices with Toeplitz blocks. The blocks are the principal submatrices of an originating Toeplitz sequence with generating symbol of the Wiener class. As an application, using the invariance of certain \textit{diagonal} and \textit{cyclic-diagonal} operator subspaces of the GKSL generators of circulant and WCLT quantum Markov semigroups, the asymptotic equivalence of the families is proved under suitable hypothesis.
Jorge R. Bolaños-Servín
2023-05-17T18:47:54Z
http://arxiv.org/abs/2305.10514v2
# Asymptotic equivalence of a subclass of WCLT non degenerate GKSL generators ###### Abstract We prove the assymptotic equivalence of a sequence of block diagonal matrices with Toeplitz blocks. The blocks are the principal submatrices of an originating Toeplitz sequence with generating symbol of the Wiener class. As an application, using the invariance of certain _diagonal_ and _cyclic-diagonal_ operator subspaces of the GKSL generators of circulant and WCLT quantum Markov semigroups, the asymptotic equivalence of the families is proved under suitable hypothesis. ## 1 Introduction The keystone to the theory of the asymptotic study of eigenvalues and singular values of Toeplitz matrices is a well known result due to Szego. The first Szego Limit Theorem describes the collective asymptotic distribution of the eigenvalues in the hermitian case while the stronger Avram-Partner theorem is concerned with the collective asymptotic distribution of the singular values of (not necessarily hermitian) Toeplitz matrices. The starting point is always a sequence of the \(n\times n\) truncations of an infinite Toeplitz matrix whose elements are the Fourier coefficients of a function commonly referred by specialists as the generating _symbol_. It was only recently that the interest in asymptotic formulas for individual eigenvalues sparked back interest although for a few narrow classes of generating symbols[3]. For instance in Maximenko[4] a quantile function is used to achieve uniform approximation of the singular values. Circulant matrices are a subclass of Toeplitz matrices with very convenient properties, such as that form an abelian sub-algebra, therefore diagonal with respect to a common basis which allows explicit computation of their eigenvalues. These matrices arise in many contexts, from stochastic processes as infinitesimal generators of Markov chains over abelian finite groups to numerical analysis as optimal preconditioners for Toeplitz systems[9]. The idea of deriving asymptotic results on Toeplitz matrices by comparison with conveniently (adhoc) constructed circulant matrices has been quite useful and is standard now [11, 14, 13]. Quantum Markov Semigroups (QMS) are weak\(*\)-continuous semigroups of completely positive identity preserving maps acting on a von Neumann algebra. They are widely used in applications to quantum physics to represent the memoryless evolution of an open quantum system. From a probabilistic point of view, they are a natural non commutative generalization of classical Markov semigroups. In the case when a quantum Markov semigroup is defined on a finite dimensional algebra, as considered in this work, it is always uniformly continuous. The families of QMS considered in this paper are the circulant family ([8]) and the weak coupling limit type (WCLT) family ([1]). In spite of the latter being quite a rich class of semigroups, Fagnola and Quezada[10] pointed out the disjointness of these families. However both types of QMS exhibit the invariance of certain operator subspaces: _diagonal_ subspaces \(\mathcal{V}\) for WCLT and _cyclic-diagonal_ subspaces \(\mathcal{B}\) for circulant. For the circulant family, the invariance has been used, together with the nice features of circulant matrices, to obtain the explicit computation of invariant states, the quantum entropy production rate, see [8], and some spectral properties including the spectral gap [6]. For the WCLT family on the other hand, although it was thought that the invariance alone was enough to characterize the non degenerate class of WCLT generators, it has been recently established in [10] that this is not the case and additional hypothesis are needed. The aforementioned families of operator subspaces \(\mathcal{V}\) and \(\mathcal{B}\) are closely related to the Toeplitz and circulant matrix structure, it is natural to ask whether an ad-hoc family of circulant QMS can be constructed in order to derive asymptotic results of a WCLT QMS. The purpouse of this paper is to obtain asymptotically equivalent sequences for completely positive (CP) operators and GKSL generators with a Toeplitz. The main results of asymptotic equivalence are contained in Theorems 15 and 20, and as a consequence results on the eigenvalue distribution follow in Corollary - -. The paper is organized as follows. In Section 2 we review general results on the asymptotic behaviour of sequences of circulant and Toeplitz matrices and introduce the main objects of our study (Equation (3) and Definition 4). We also develop a suitable vectorization process which will be helpful in the sequel. Section 3 is devoted to the block diagonal matrix representations (with Toeplitz blocks) of C.P. Toeplitz operators and the asymptotic equivalence to C.P. circulant operators is proved under the natural assumption that the generating sequence is in \(\ell^{1}(\mathbb{Z})\). In Section 5, block diagonal matrix (with pseudo-Toeplitz blocks) representations of the subclass of WCLT generators with C.P. Toeplitz part are derived. The asymptotic equivalence of this subclass of generators and circulant generators is proved under the assumption of absolutely summability of the generating sequence. ## 2 Preliminaries ### Asymptotic behaviour of matrices The idea of asymptotic equivalence for sequences of matrices we use through out the paper, due to Gray et al.[11], provides a simplified and direct path to the basic eigenvalue distribution theorems. This method is implicit and is a special case of more general results of Grenander and Szego[12]. For each \(n\geq 1\) let \(\mathbb{C}^{n}\) and \(\mathbb{C}^{n^{2}}\) be endowed with the corresponding Euclidean inner product. We consider the Hilbert spaces of continuous linear operators on each of the above spaces with the Hilbert-Schmidt inner product \(\langle A,B\rangle_{HS}=\mathrm{tr}(A^{*}B)\) respectively, namely, \(\left(\mathcal{M}_{n}(\mathbb{C}),\langle\cdot,\cdot\rangle_{HS}\right)\) and \(\left(\mathcal{M}_{n^{2}}(\mathbb{C}),\langle\cdot,\cdot\rangle_{HS}\right)\). Recall that the strong norm and the Hilbert-Schmidt norm on \(\mathcal{M}_{m}(\mathbb{C})\) are defined as \[\|A\|=\sup\left\{\frac{\langle u,A^{*}Au\rangle^{\frac{1}{2}}}{\|u\|}:u\in \mathbb{C}^{s}\right\},\ |A|=\langle A,A\rangle_{HS}^{\frac{1}{2}},\ A\in\mathcal{M}_{s}( \mathbb{C}),\ m=n,n^{2}.\] The normalized Hilbert-Schmidt norm will be denoted by \(|A|_{m}=\frac{1}{\sqrt{m}}|A|\). In this note, a sequence of matrices \(\{A_{n}\}_{n}\) is to be understood as a sequence increasing in size, i.e., \(A_{n}\in\mathcal{M}_{n}(\mathbb{C})\) for \(n\geq 1\). **Definition 1**.: _Two sequences of matrices \(\{A_{n}\}_{n}\), \(\{B_{n}\}_{n}\) are said to be asymptotically equivalent if_ 1. \(A_{n}\)_,_ \(B_{n}\) _are uniformly bounded in strong norm:_ \[\|A_{n}\|,\|B_{n}\|<M<\infty.\] 2. \(A_{n}-B_{n}\) _goes to zero in the normalized Hilbert-Schmidt norm:_ \[\lim_{n\to\infty}|A_{n}-B_{n}|_{n}=0.\] _The asymptotic equivalence of two sequences \(\{A_{n}\}_{n}\), \(\{B_{n}\}_{n}\) is commonly denoted by \(A_{n}\sim B_{n}\)._ The asymptotic equivalence of matrices implies that product and inverses behave similarly. Although the asymptotic equivalence is not sufficient to specify the behaviour of individual eigenvalues, it does specify the average behaviour in the sense of its distribution. This is the fundamental theorem concerning asymptotic eigenvalue behavior. **Theorem 2**.: _Let \(\{A_{n}\}_{n}\), \(\{B_{n}\}_{n}\) be asymptotically equivalent sequences of matrices with eigenvalues \(\alpha_{n,k}\) and \(\beta_{n,k}\), respectively. Assume that the eigenvalues of either matrix converge, i.e.,_ \[\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{n,k}^{s}\] _exists and is finite for any positive integer \(s\). Then_ \[\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\alpha_{n,k}^{s}=\lim_{n\to\infty} \frac{1}{n}\sum_{k=0}^{n-1}\beta_{n,k}^{s}\] As a consequence of the above theorem, asymptotic equivalence implies that for any polynomial \(p\) \[\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}p(\alpha_{n,k})=\lim_{n\to\infty} \frac{1}{n}\sum_{k=0}^{n-1}p(\beta_{n,k}), \tag{1}\] even more, the Stone-Weirstrass theorem allows us to replace \(p\) by an arbitrary function \(F\) continuous in \([-M,M]\) whenever the sequences are hermitian. In this case the sequences \(\{\alpha_{n,k}\}_{k,n},\{\beta_{n,k}\}_{k,n}\) are said to be asymptotically equally distributed. A stronger result was derived by Trench[15] using the Wielandt-Hoffman theorem in which (1) is replaced by \[\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}|F(\alpha_{n,k})-F(\beta_{n,k})|=0,\] where the eigenvalues are ordered noncreasingly, in this case the sequences are said to be asymptotically absolutely equally distributed. ### Circulant and Toeplitz asymptotic equivalence Recall that an \(n\times n\) matrix \(T_{n}=(t_{i,j})_{i,j}\) is said to be Toeplitz if \(t_{i,j}=t_{i-j}\) and \(C_{n}=(c_{i,j})_{n}\) is said to be circulant if \(c_{i,j}=c_{j-i}\), where the subindex operation in the latter is \(\mod n\). Following Gray in [11], one constructs a sequence of Toeplitz matrices \(\{T_{n}\}_{n=1}^{\infty}\) (increasing in size) given a fixed collection \(t=\{t_{j}\}_{j\in\mathbb{Z}}\), called the _symbol_, satisfying some summability condition, for instance \(t\in\ell^{1}(\mathbb{Z})\). Then the sequence circulant matrices \(\{\widetilde{C}_{n}\}_{n}\) is asymptotically equivalent to \(\{T_{n}\}_{n=1}^{\infty}\) where \[\tilde{c}_{j}=\left\{\begin{array}{ll}t_{0}&\mbox{if }j=0,\\ t_{-j}+t_{n-j}&\mbox{if }j=1,2,\ldots,n-1.\end{array}\right. \tag{2}\] The aim of this paper is to obtain analogous results for completely positive (CP) linear operators with Toeplitz-like structure with respect to a fixed basis. This class of operators will be precisely defined in the sequel. ### Toeplitz and circulant CP maps and GKSL generators Notice that any fixed orthonormal basis \(\{e_{i}\}_{i}\) of \(\mathbb{C}^{n}\) induces a _diagonal_\(\mathcal{V}=\{\mathcal{V}_{l}\}_{l=-(n-1)}^{n-1}\) and _cyclic-diagonal_\(\mathcal{B}=\{\mathcal{B}_{j}\}_{j=0}^{n-1}\) orthogonal decomposition of \(\mathcal{M}_{n}(\mathbb{C})\) as follows. For \(j=0,1,\ldots,n-1\), define the subspaces \[\mathcal{V}_{-j} =\mathrm{span}\{|e_{i}\rangle\langle e_{i+j}|:i=0,1,\ldots,n-1-j\},\] \[\mathcal{V}_{j} =\mathrm{span}\{|e_{i+j}\rangle\langle e_{i}|:i=0,1,\ldots,n-1-j\},\] \[\mathcal{B}_{j} =\mathrm{span}\{|e_{i}\rangle\langle e_{i+j}|:i=0,1,\ldots,n-1\}.\] The subindex operations in \(\mathcal{B}_{j}\) are understood \(\bmod n\). The \(\mathcal{V}\)-subspaces correspond to the slicing in diagonal, subdiagonal and superdiagonal subspaces. The ordered sets \(\{l\in\mathbb{Z}:-(n-1)\leq l\leq n-1\}\) and \(\{j\in\mathbb{Z}:j=0,1,\ldots,n-1\}\) will be referred as the _diagonal_ index set and _cyclic-diagonal_ index set respectively. It is straightforward to verify that each family is orthogonal w.r.t. the Hilbert-Schmidt inner product, i.e., \[\mathcal{M}_{n}(\mathbb{C})=\overline{\bigoplus}_{l}\mathcal{V}_{l}=\ \ \bigoplus_{j=0}^{n-1}\ \ \mathcal{B}_{j}. \tag{3}\] To simplify our notation we use the symbol \(\overline{\bigoplus}_{l}\) to denote \(\bigoplus_{l=-(n-1)}^{n-1}\), i.e., when the index set in the sum is the _diagonal_ index set. Denote by \(S\) the left shift operator (w.r.t. the orthonormal basis fixed above) \[Se_{i}=\left\{\begin{array}{ll}e_{i-1}&\mbox{ if }0<i\leq n-1,\\ 0&\mbox{ other,}\end{array}\right.\] and \(J\) be the _cyclic_ left shift operator \(Je_{i}=e_{i-1}\). Subindex operations for \(J\) are understood \(\bmod n\). In recent years the invariance of \(\mathcal{V}\) or \(\mathcal{B}\) of GKSL generators has proven useful in the study of some QMS generators [5, 8, 7], this strategy can reduce the dimension in some cases from \(n^{2}\) dimensional to an \(n\) dimensional problem. In Theorem 3.2 and Corollary 3.1 of [10], the authors prove that the \(\mathcal{V}\)-invariance of a CP operator \(\Phi\) induces the existence of a Kraus representation where each Kraus operator \(V_{l}\) belongs to some subspace \(\mathcal{V}_{m_{l}}\), and thus it can be written either as \[S^{m_{l}}M_{m_{l}}\ \ \mbox{or}\ \ S^{*m_{l}}M_{m_{l}}\] for some integer \(m_{l}\geq 0\) and some multiplication operator \(M_{m_{l}}\). Similarly, the \(\mathcal{B}\)-invariance of \(\Phi\) induces a Kraus representation in terms of the cyclic translation \(J\). We state the proposition without proof since it consists of a simplified version of the original results pointed above. **Proposition 3**.: _Let \(\Phi\) be a CP operator on \(\mathcal{M}_{n}(\mathbb{C})\) such that \(\Phi(\mathcal{B}_{l})\subseteq\mathcal{B}_{l}\) for all \(l=0,1,\ldots,n-1\). Then there exists a Kraus representation \(\Phi(x)=\sum_{l=0}^{n-1}L_{l}^{*}xL_{l}\) in which each \(L_{l}\) can we written in one of the following forms:_ \[J^{m}M_{m_{l}}\text{ or }J^{*m}M_{m_{l}} \tag{4}\] _for some integer \(0\leq m\leq n-1\) and some multiplication operator \(M_{m_{l}}\)._ Throughout the rest of the paper we will exclusively work with the case when the multiplication operators above are multiples of the identity. **Definition 4**.: _Let \(\Phi:\mathcal{M}_{n}(\mathbb{C})\longrightarrow\mathcal{M}_{n}(\mathbb{C})\) be a completely positive linear operator._ 1. \(\Phi\) _is said to be a CP Toeplitz operator (w.r.t. to the basis_ \(\{e_{i}\}_{i}\)_) if it has a representation of the form_ \[\Phi(x)=\sum_{j=0}^{n-1}t_{j}S^{*j}xS^{j}+\sum_{j=1}^{n-1}t_{-j}S^{j}xS^{*j}, \hskip 14.226378ptt_{j}\geq 0.\] (5) 2. \(\Phi\) _is said to be a CP circulant operator (w.r.t. to the basis_ \(\{e_{i}\}_{i}\)_) if it has a representation of the form_ \[\Phi(x)=\sum_{j=0}^{n-1}c_{n-j}J^{*j}xJ^{j},\hskip 14.226378ptc_{j}\geq 0.\] (6) Recall that a quantum Markov semigroup on a finite dimensional algebra is always uniformly continuous and its infinitesimal generator \(\mathcal{L}\) is a bounded operator characterized by a particular form, usually called canonical Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) form: \[\mathcal{L}(x)=\Phi(x)+G^{*}x+xG\] where \(\Phi\) is a CP operator and \(G=-\frac{1}{2}\Phi(\mathds{1}_{n})-iH\) with \(H=H^{*}\in\mathcal{M}_{n}(\mathbb{C})\). This representation is not unique. We shall consider GKSL generators with representations such that the CP part is Toeplitz or Circulant as follows. **Definition 5**.: _Let \(\mathcal{L}\) be the GKSL generator of a uniformly continuous QMS._ 1. \(\mathcal{L}\) _is said to be a Toeplitz QMS generator if the diagonal subspaces_ \(\mathcal{V}\) _are invariant for_ \(\mathcal{L}\) _and there exists a representation such that its CP (i.e., dissipative) part is Toeplitz._ 2. \(\mathcal{L}\) _is said to be a circulant QMS generator if the diagonal subspaces_ \(\mathcal{B}\) _and there exists a representation such that its CP (i.e., dissipative) part is circulant._ The class of Circulant QMS has been extensively studied in [6, 8, 7]. **Remark 6**.: _The \(\mathcal{V}\)-invariance (\(\mathcal{B}\)-invariance) alone is not enough to have CP Toeplitz (circulant) operators. The additional condition on the multiplication operators is needed to have a CP Toeplitz (circulant) behaviour._ ### Toeplitz and circulant induced vectorizations Vectorization expresses through coordinates an isomorphism \(\mathbb{C}^{n}\otimes\mathbb{C}^{n}\cong\mathbb{C}^{n^{2}}\) which is a unitary transformation compatible with the Euclidian inner product of \(\mathbb{C}^{n^{2}}\). The most common vectorization used in the literature consists in stacking each column of \(x=(x_{i,j})_{ij}\in\mathcal{M}_{n}(\mathbb{C})\) into an \(n^{2}\) coordinate vector \[\mathrm{vec}(x)=(x_{0,0},x_{1,0},\ldots,x_{n-1,0},x_{0,1},\ldots,x_{n-1,1}, \ldots,x_{0,n-1},\ldots,x_{n-1,n-1})^{T}\in\mathbb{C}^{n^{2}}.\] We will use two different vectorization isomorphisms \(v\) and \(b\) with the difference being the order in which the elements are arranged in a vector. This order is directly induced by the structure of the \(\mathcal{V}\) and \(\mathcal{B}\) subspaces, respectively. For each \(-(n-1)\leq l\leq n-1\) let \(\{e_{i}^{l}\}_{i}\) be the canonical basis of \(\mathbb{C}^{n-|l|}\). Define \(v_{l}:\mathcal{V}_{l}\longrightarrow\mathbb{C}^{n-|l|}\) as \(v_{l}:\left\{\begin{array}{ll}|e_{i}\rangle\langle e_{i-l}|\longmapsto e_{ i}^{l}&\mbox{if }l<0,\\ |e_{i+l}\rangle\langle e_{i}|\longmapsto e_{i}^{l}&\mbox{if }\ l\geq 0, \end{array}\right.\) and for each \(0\leq j\leq n-1\) define \(b_{j}:\mathcal{B}_{n}\longrightarrow\mathbb{C}^{n}\) as \(|e_{i}\rangle\langle e_{i+j}|\longmapsto e_{i}\). Each \(v_{l}\) and \(b_{j}\) is an isometry. Let \(v\) and \(b\) be the isomorphisms between \(\mathcal{M}_{n}(\mathbb{C})\) and \(\mathbb{C}^{n^{2}}\) induced by the smaller isomorphisms \(\{v_{l}\}_{l}\), \(\{b_{j}\}_{j}\) respectively, so \[v(x)=v_{0}(x_{0})\oplus\bigoplus_{l=1}^{n-1}v_{-l}(x_{-l})\oplus v_{l}(x_{l}) \quad\mbox{ and }\quad b(x)=b_{0}(\mathrm{x}_{0})\bigoplus_{j=1}^{n-1}b_{j}( \mathrm{x}_{j}),\] where \(x=\bigoplus_{l}x_{l}=\bigoplus_{j}\mathrm{x}_{j}\), \(x_{l}\in\mathcal{V}_{l},\mathrm{x}_{j}\in\mathcal{B}_{j}\), are the unique orthogonal decomposition of \(x\) w.r.t. to \(\mathcal{V}\) and \(\mathcal{B}\) respectively. Thus \(v(x)\) is the vectorization of \(x\) which rearranges its elements w.r.t. to the _diagonal_ subspaces in the _diagonal ordering_ and \(b(x)\) is the one that rearranges them w.r.t. to the _cyclic-diagonal_ subspaces. It is straightforward to verify that the vectorizations \(v\) and \(b\) are compatible with the Euclidian (HS) inner product of \(\mathbb{C}^{n^{2}}\) as well, \[\langle x,y\rangle_{HS}=\mathrm{tr}(x^{*}y)=\left\langle v(x),v(y)\right\rangle _{\mathbb{C}^{n^{2}}}=\left\langle b(x),b(y)\right\rangle_{\mathbb{C}^{n^{2} }}\ \mbox{ for }x,y\in\mathcal{M}_{n}(\mathbb{C}).\] We denote with \(\Phi\cong A\) whenever a matrix \(A\in\mathcal{M}_{n^{2}}(\mathbb{C})\) represents a linear operator \(\Phi\) acting on \(\mathcal{M}_{n}(\mathbb{C})\) w.r.t. the reordering of the canonical basis \(\{|e_{i}\rangle\langle e_{j}|\}_{i,j}\) induced by the _diagonal_ or _cyclic-diagonal_ index sets. ### Circulant C.P. maps Circulant CP operators appear as the dissipative part of the GKSL generators of circulant Quantum Markov Semigroups (2. in Def. 5) in [8]. Some of the spectral properties of these generators have been studied in [6] together with estimations for the spectral gap. Every CP circulant operator \(\tilde{\Phi}\) (Def. 4) has an associated circulant matrix \(C^{(n)}=(c_{j-i})_{ij}\). The properties satisfied by a circulant GKSL generator and its associated circulant matrix can be found in [8]. We recall the main property we shall use in the sequel. **Proposition 7**.: _A circulant CP map \(\tilde{\Phi}\) with asociated circulant matrix \(C^{(n)}\) satisfies_ \[\tilde{\Phi}\cong\bigoplus_{j=0}^{n-1}C^{(n)}=\mbox{\rm 1 \kern-3.8pt1}_{n}\otimes C^{(n)},\] _therefore \(b\left(\tilde{\Phi}(x)\right)=(\mbox{\rm 1 \kern-3.8pt1}_{n}\otimes C^{(n)})b(x)\) for all \(x\in\mathcal{M}_{n}(\mathbb{C})\)._ _As a consequence \(\|\tilde{\Phi}\|=\|\mbox{\rm 1\kern-3.8pt1}_{n}\otimes C^{(n)}\|\) and \(|\tilde{\Phi}|_{n^{2}}=|\mbox{\rm 1\kern-3.8pt1}_{n}\otimes C^{(n)}|_{n^{2}}\)._ ## 3 CP Toeplitz maps In this section we shall provide analogous results for CP Toeplitz maps in terms of an associated Toeplitz matrix and the subspaces \(\mathcal{V}\). Let \(\Phi\) be a CP Toeplitz map (Def.4, _1._) and let \(T_{0}^{(n)}=(t_{i-j})_{i,j}\) be the \(n\times n\) Toeplitz matrix whose entries are the scalars from the constant multiplication operators in \(\Phi\). The following lemma deals with the restriction of \(\Phi\) to the \(\mathcal{V}\)-subspaces and its relation to some submatrices of \(T_{0}^{(n)}\). It is the analogous version of Proposition 7 to the case of Toeplitz CP maps. **Lemma 8**.: _Let \(\Phi\) be a C.P. Toeplitz operator. The restriction of \(\Phi\) to the subspace \(\mathcal{V}_{l}\) is given by the \(n-|l|\times n-|l|\) leading principal submatrix \(T_{|l|}^{(n)}\) of \(T_{0}^{(n)}\),_ \[T_{|l|}^{(n)}=\sum_{i,k=0}^{n-1-|l|}\langle e_{i},T_{0}^{(n)}e_{k}\rangle|e_{i }\rangle\langle e_{k}|,\] _that is, for \(j=0,1,\ldots,n-1\),_ \[v_{-j}\left(\Phi(|e_{i}\rangle\langle e_{i+j}|)\right)=T_{j}^{(n )}e_{i}\ \ i=0,1,\ldots,n-1-j,\] \[v_{j}\left(\Phi(|e_{i+j}\rangle\langle e_{i}|)\right)=T_{j}^{(n) }e_{i}\ \ i=0,1,\ldots,n-1-j.\] \[v_{l}(\Phi|_{\mathcal{V}_{l}}(x))=T_{|l|}^{(n)}v_{l}(x)\ \ \text{for any}\ x\in\mathcal{M}_{n}(\mathbb{C})\ \text{and}\ l=-(n-1),\ldots,0,\ldots,n-1. \tag{7}\] Proof.: Let \(j\geq 0\) and let \(|e_{i}\rangle\langle e_{i+j}|\) be any of the basic elements of \(\mathcal{V}_{-j}\). Then, explicit computations using \(S\) and \(S^{*}\) yield \[\Phi(|e_{i}\rangle\langle e_{i+j}|) =\sum_{\begin{subarray}{c}r=0\\ 0\leq i+r\leq n-1-j\end{subarray}}^{n-1}t_{r}|e_{i+r}\rangle\langle e_{i+r+j}| +\sum_{\begin{subarray}{c}r=1\\ 0\leq i-r\leq n-1-j\end{subarray}}^{n-1}t_{-r}|e_{i-r}\rangle\langle e_{i-r+j}|\] \[=\sum_{\begin{subarray}{c}r=0\\ 0\leq i+r\leq n-1-j\end{subarray}}^{n-1}t_{r}v_{-j}^{-1}(e_{i+r}^{j})+\sum_{ \begin{subarray}{c}r=0\\ 0\leq i-r\leq n-1-j\end{subarray}}^{n-1}t_{-r}v_{-j}^{-1}(e_{i-r}^{j})\] \[=v_{-j}^{-1}\left(\sum_{\begin{subarray}{c}r=0\\ 0\leq i+r\leq n-1-j\end{subarray}}^{n-1-j}t_{r}e_{i+r}^{j}+\sum_{ \begin{subarray}{c}r=0\\ 0\leq i-r\leq n-1-j\end{subarray}}t_{-r}e_{i-r}^{j}\right)=v_{-j}^{-1}(T_{j}e_{ i}^{j}),\] where the fact that \(v_{-l}(|e_{k}\rangle\langle e_{k+j}|)=e_{k}\) was used in the second row and for the last equality follows since \[T_{j}^{(n)}=\sum_{r=0}^{n-1-j}\left(\sum_{\begin{subarray}{c}k=0\\ 0\leq k+r\leq n-1-j\end{subarray}}^{n-1-j}t_{-r}|e_{k}^{j}\rangle\langle e_{k+r }^{j}|+t_{r}|e_{k+r}^{j}\rangle\langle e_{k}^{j}|\right).\] This proves (7) for \(\mathcal{V}_{-j}\). The remaining case is analogous. In virtue of the following theorem, we will say \(T_{0}^{(n)}\) is the associated Toeplitz matrix of \(\Phi\). **Theorem 9**.: _A Toeplitz CP map \(\Phi\) with associated Toeplitz matrix \(T_{0}^{(n)}\) satisfies_ \[\Phi\cong\overline{\bigoplus_{l}}T_{l}^{(n)}\ \ \ \ \text{and}\ \ \ \ v\left(\Phi(x)\right)=\left(\overline{ \bigoplus_{l}}T_{l}^{(n)}\right)v(x)\ \ \text{for all}\ x\in\mathcal{M}_{n}(\mathbb{C})\] _w.r.t. the diagonal ordering._ _As a consequence the norms satisfy \(\left\|\Phi\right\|=\left\|\overline{\bigoplus_{l}}T_{l}^{(n)}\right\|\) and \(\left|\Phi\right|_{n^{2}}=\left|\overline{\bigoplus_{l}}T_{l}^{(n)}\right|_{n ^{2}}.\)_ Proof.: The first part follows from the last proposition. For the strong norm it is straightforward to see that \(\left\|\Phi(x)\right\|^{2}=\langle\Phi(x),\Phi(x)\rangle_{HS}=\left\langle v( \Phi(x)),v(\Phi(x))\right\rangle_{\mathbb{C}^{n^{2}}}\) \[=\left\langle\overline{\bigoplus_{l}}T_{l}^{(n)}v(x),\overline{\bigoplus_{l} }T_{l}^{(n)}v(x)\right\rangle_{\mathbb{C}^{n^{2}}}=\left\|\overline{\bigoplus _{l}}T_{l}^{(n)}v(x)\right\|^{2}.\] The desired result follows since \(\left\|x\right\|=\left\|v(x)\right\|\). For the HS norm, let \(e_{ij}=|e_{i}\rangle\langle e_{j}|\), then since \(v\) is unitary, \[|\Phi|_{n^{2}}^{2} =\frac{1}{n^{2}}\sum_{i,j}\left\langle\Phi(e_{ij}),\Phi(e_{ij}) \right\rangle_{HS}=\frac{1}{n^{2}}\sum_{i,j}\left\langle v(\Phi(e_{ij})),v(\Phi (e_{ij}))\right\rangle_{\mathbb{C}^{n^{2}}}\] \[=\frac{1}{n^{2}}\sum_{i,j}\left\langle\overrightarrow{\bigoplus_{ l}}T_{l}^{(n)}v(e_{ij}),\overrightarrow{\bigoplus_{l}}T_{l}^{(n)}v(e_{ij}) \right\rangle_{\mathbb{C}^{n^{2}}}=\left|\overrightarrow{\bigoplus_{l}}T_{l}^{ (n)}\right|_{n^{2}}^{2}.\] Again, reordering the basis the conclusion follows. The last theorem shows that with the proper reordering of the basis, any Toeplitz CP operator \(\Phi\) on \(\mathcal{M}_{n}(\mathbb{C})\) is represented by \(\overrightarrow{\bigoplus_{l}}T_{l}^{(n)}\), a \(n^{2}\times n^{2}\) block diagonal matrix with \(n-|j|\times n-|j|\) Toeplitz blocks. Each block is a leading principal submatrix of \(T_{0}^{(n)}\) and \(T_{j}^{(n)}=T_{-j}^{(n)}\). **Remark 10**.: \(\overrightarrow{\bigoplus_{l}}T_{l}^{(n)}\) _is a block diagonal matrix with Toeplitz blocks but it is not a Toeplitz \(n^{2}\times n^{2}\) matrix. As far as we know this Toeplitz-like matrix structure has not been studied before._ Denoting the spectrum of \(A\) by \(\sigma(A)\), an immediate corollary follows. **Corollary 11**.: _The following properties hold:_ 1. \[\sigma(\Phi)=\bigcup_{j=0}^{n-1}\sigma(T_{j}^{(n)}).\] (8) 2. _Each_ \(\lambda\in\{t_{0},\ t_{0}-\sqrt{t_{1}t_{-1}},\ t_{0}+\sqrt{t_{1}t_{-1}}\}\) _is always an eigenvalue of_ \(\Phi\) _with associated eigenvectors given by_ \[\lambda =t_{0}-\sqrt{t_{1}t_{-1}}\longrightarrow\left\{\begin{array}{ c}-\sqrt{t_{-1}}|e_{n-2}\rangle\langle e_{0}|+\sqrt{t_{1}}|e_{n-1}\rangle \langle e_{1}|\\ -\sqrt{t_{-1}}|e_{0}\rangle\langle e_{n-2}|+\sqrt{t_{1}}|e_{1}\rangle\langle e _{n-1}|\\ \end{array}\right.\] \[\lambda =t_{0}+\sqrt{t_{1}t_{-1}}\longrightarrow\left\{\begin{array}{ c}\sqrt{t_{-1}}|e_{n-2}\rangle\langle e_{0}|+\sqrt{t_{1}}|e_{n-1}\rangle \langle e_{1}|\\ \sqrt{t_{-1}}|e_{0}\rangle\langle e_{n-2}|+\sqrt{t_{1}}|e_{1}\rangle\langle e _{n-1}|.\end{array}\right.\] \[\lambda =t_{0}\qquad\qquad\longrightarrow\left\{\begin{array}{c}|e_{0} \rangle\langle e_{n-1}|\\ |e_{n-1}\rangle\langle e_{0}|\end{array}\right.\] Asymptotic equivalence of CP Toeplitz and Circulant maps Let \(\{t_{l}\}_{l=-\infty}^{\infty}\) be a non-negative sequence of the Wieiner class and for each \(n\geq 1\) let \(\Phi^{(n)}:\mathcal{M}_{n}(\mathbb{C})\longrightarrow\mathcal{M}_{n}(\mathbb{C})\) be the CP Toeplitz operator \[\Phi^{(n)}(x)=\sum_{j=0}^{n-1}t_{j}S^{(n)*j}xS^{(n)^{j}}+\sum_{j=1}^{n-1}t_{-j} S^{(n)^{j}}xS^{(n)*j},\] thus \(\{\Phi^{(n)}\}_{n}\) is a sequence of CP Toeplitz maps associated with \(t\). Notice that \(\{T_{0}^{(n)}\}_{n}\) is both the sequence of Toeplitz matrices associated to \(\{\Phi^{(n)}\}_{n}\) and the Toeplitz sequence associated to the symbol \(f(\eta)=\sum_{k}t_{k}e^{ik\eta},\eta\in[0,2\pi]\). Our purpose is to find a sequence of CP circulant maps \(\{\widetilde{\Phi}^{(n)}\}_{n}\) asymptotically equivalent to \(\{\Phi^{(n)}\}_{n}\). It is well known that if \(\{C^{(n)}\}_{n}\) is the sequence of circulant matrices given by \(C^{(n)}=(c_{j-i}^{(n)})_{ij}\) where the subindex operations are mod \(n\) and \[c_{j}^{(n)}=\left\{\begin{array}{ll}t_{0}&\mbox{if $j=0$},\\ t_{-j}+t_{n-j}&\mbox{if $j=1,2,\ldots,n-1$}.\end{array}\right. \tag{9}\] then \(T_{0}^{(n)}\thicksim C^{(n)}\), see [11]. Let \(\{\widetilde{\Phi}^{(n)}\}_{n}\) be the sequence of circulant CP maps such that their associated circulant matrices are precisely \(\{C^{(n)}\}_{n}\), that is, \[\widetilde{\Phi}^{(n)}(x)=\sum_{j=0}^{n-1}c_{n-j}^{(n)*j}xJ^{(n)j},\quad n\geq 1. \tag{10}\] The following standard results will be useful to manage the norms of the blocks involved. **Lemma 12**.: _Let \(\mathcal{H}_{1},\mathcal{H}_{2}\) be finite dimensional Hilbert spaces and \(A_{i}\in\mathcal{B}(\mathcal{H}_{i}),\,i=1,2\). Then_ \[\|A_{1}\oplus A_{2}\|=\left\|\begin{pmatrix}A&0\\ 0&B\end{pmatrix}\right\|=\max\{\|A\|,\|B\|\}\,\,.\] **Lemma 13**.: _For each \(n\geq 1\) let \(T_{j}^{(n)}\) be the principal \(\ n-j\times n-j\) submatrix of \(T_{0}^{(n)}\). Then_ \[\|T_{-j}^{(n)}\oplus T_{n-j}^{(n)}\|\leq\|T_{0}^{(n)}\|\quad\mbox{for $j=1,2,\ldots,n-1$}.\] By Theorems 9 and 7 together with the simple observation that \(\mathcal{B}_{0}=\mathcal{V}_{0},\,\mathcal{B}_{j}=\mathcal{V}_{-j}\oplus \mathcal{V}_{n-j},\) allows us to obtain that \[\Phi^{(n)}-\widetilde{\Phi}^{(n)}\cong T_{0}^{(n)}-C^{(n)}\oplus\bigoplus_{j= 1}^{n-1}\left(T_{-j}^{(n)}\oplus T_{n-j}^{(n)}-C^{(n)}\right)\quad\mbox{for each $n\geq 1$}. \tag{11}\] **Lemma 14**.: _The Hilbert-Schmidt norm of the blocks in the last equation satisfies:_ 1. _If_ \(j=1,\ldots,n-1,\) _then_ \[\left|T_{-j}^{(n)}\oplus T_{n-j}^{(n)}-C^{(n)}\right|^{2}=2\sum_{l=0}^{j-1}\sum_{ k=1}^{n-j}(t_{-n+k+l}+t_{k+l})^{2}+\sum_{k=j+1}^{n-1}k(t_{k}^{2}+t_{-k}^{2})\] 2. \[\sum_{j=1}^{n-1}\left|T_{-j}^{(n)}\oplus T_{n-j}^{(n)}-C^{(n)}\right|^{2}=2\sum _{j=1}^{n-1}j(n-j)(t_{-j}+t_{n-j})^{2}+\sum_{j=2}^{n-1}j(j-1)(t_{j}^{2}+t_{-j}^ {2}).\] 3. _As a consequence_ \[\frac{1}{n^{2}}\sum_{j=1}^{n-1}\left|T_{-j}^{(n)}\oplus T_{n-j}^{(n)}-C^{(n)} \right|^{2}\underset{n}{\longrightarrow}0\] (12) Proof.: _(1)_ and _(2)_ are straightforward by direct computations. For _(3)_, since \(\sum_{j=1}^{\infty}t_{j}^{2}+t_{-j}^{2}<\left(\sum_{j=1}^{\infty}t_{j}+t_{-j} \right)^{2}<\infty,\) for any \(\epsilon>0\) there exists \(N=N(\epsilon)\) such that \[\sum_{j=N}^{\infty}t_{-j}+t_{j}\leq\frac{\sqrt{\epsilon}}{2},\ \ \text{and}\ \ \ \ \sum_{j=N}^{\infty}t_{-j}^{2}+t_{j}^{2}\leq\frac{\epsilon}{2}. \tag{13}\] We will use \(2\). For the first sum notice that \[\frac{2}{n^{2}}\sum_{j=1}^{n-1}j(n-j)(t_{-j}+t_{n-j})^{2} \leq 2\left(\frac{1}{n}\sum_{j=1}^{n-1}\sqrt{j(n-j)}(t_{-j}+t_{n-j} )\right)^{2}\] \[<2\left(\sum_{j=1}^{n-1}(t_{-j}+t_{n-j})\right)^{2}=2\left(\sum_{ j=1}^{n-1}(t_{-j}+t_{j})\right)^{2}<\infty.\] Now if \(n>N\) then \[\frac{1}{n}\sum_{j=1}^{n-1}\sqrt{j(n-j)}(t_{-j}+t_{j}) =\frac{1}{n}\sum_{j=1}^{N-1}\sqrt{j(n-j)}(t_{-j}+t_{j})+\frac{1}{ n}\sum_{j=N}^{n-1}\sqrt{j(n-j)}(t_{-j}+t_{j})\] \[<\frac{1}{\sqrt{n}}\sum_{j=1}^{N-1}\sqrt{j}(t_{-j}+t_{j})+\frac{ 1}{n}\sum_{j=N}^{\infty}\sqrt{j(n-j)}(t_{-j}+t_{j})\] \[\leq\frac{1}{\sqrt{n}}\sum_{j=1}^{N-1}\sqrt{j}(t_{-j}+t_{j})+\frac {\sqrt{\epsilon}}{2}\underset{n}{\longrightarrow}\frac{\sqrt{\epsilon}}{2}.\] While in a similar manner for the second term if \(n>N\) we have \[\frac{1}{n^{2}}\sum_{j=2}^{n-1}j(j-1)(t_{j}^{2}+t_{-j}^{2}) =\frac{1}{n^{2}}\sum_{j=2}^{N-1}j(j-1)(t_{j}^{2}+t_{-j}^{2})+\frac {1}{n^{2}}\sum_{j=N}^{n-1}j(j-1)(t_{j}^{2}+t_{-j}^{2})\] \[<\frac{1}{n^{2}}\sum_{j=2}^{N-1}j(j-1)(t_{j}^{2}+t_{-j}^{2})+\sum _{j=N}^{\infty}(t_{j}^{2}+t_{-j}^{2})\] \[\leq\frac{1}{n^{2}}\sum_{j=2}^{N-1}j(j-1)(t_{j}^{2}+t_{-j}^{2})+ \frac{\epsilon}{2}\underset{n}{\longrightarrow}\frac{\epsilon}{2}.\] This shows that \(\lim_{n\to\infty}\frac{1}{n^{2}}\sum_{j=1}^{n-1}\left|T_{-j}^{(n)} \oplus T_{n-j}^{(n)}-C^{(n)}\right|^{2}=0\). We are now in position to prove the main theorem of this section. **Theorem 15**.: _If \(t=\{t_{m}\}_{m}\in\ell^{1}(\mathbb{Z})\), the sequence of Toeplitz CP maps \(\{\Phi^{(n)}\}_{n}\) is assymptotically equivalent to be a sequence of CP Circulant maps \(\{\widetilde{\Phi}^{(n)}\}_{n}\), i.e., \(\Phi^{(n)}\thicksim\widetilde{\Phi}^{(n)}\)._ Proof.: By combining the previous lemmas and the fact that by construction \(T_{0}^{(n)}\thicksim C^{(n)}\), the uniform boundness follows since for any \(n\geq 1\), \[\left\|\Phi^{(n)}\right\|=\left\|T_{0}^{(n)}\oplus\bigoplus_{j=1}^{n-1}T_{-j}^ {(n)}\oplus T_{n-j}^{(n)}\right\|=\max_{1\leq j\leq n-1}\{\|T_{0}^{(n)}\|,\|T_ {-j}^{(n)}\oplus T_{n-j}^{(n)}\|\}=\|T_{0}^{(n)}\|<M, \tag{14}\] \[\left\|\widetilde{\Phi}^{(n)}\right\|=\left\|[\widetilde{\Phi}^{(n)}]_{ \mathcal{B}}\right\|=\max_{0\leq j\leq n-1}\|C^{(n)}\|<M.\] For the Hilbert-Schmidt norm, using (11) and the orthogonality of different blocks we have \[\left|\Phi^{(n)}-\widetilde{\Phi}^{(n)}\right|_{n^{2}}^{2}=\frac{1}{n}\left|T _{0}^{(n)}-C^{(n)}\right|_{n}^{2}+\frac{1}{n^{2}}\sum_{j=1}^{n-1}\left|T_{-j}^ {(n)}\oplus T_{n-j}^{(n)}-C^{(n)}\right|^{2}.\] The assymptotic equivalence implies that \(\left|T_{0}^{(n)}-C^{(n)}\right|_{n}\xrightarrow[n]{}0\), while the remaining term vanishes by Lemma 14. We have thus showed that \(\lim_{n\to\infty}\left|\Phi^{(n)}-\widetilde{\Phi}^{(n)}\right|_{n^{2}}^{2}=0\). This concludes the proof. ## 5 Toeplitz class of non degenerate WCLT QMs Let Recall the definition of weak coupling limit type (WCLT) generators contained in [1]. Let \(h_{n}\) be an \(n\)-dimensional Hilbert space and \(H_{S}=H_{S}^{*},D\in\mathcal{B}(h_{n})\) with spectral decomposition \[H_{S}=\sum_{\epsilon_{m}\in\sigma(H_{S})}\epsilon_{m}P_{\epsilon_{m}}\] where \(\epsilon_{0}<\epsilon_{1}\cdots<\epsilon_{k}\) and \(P_{\epsilon_{m}}\) is the spectral projection corresponding to the eigenvalue \(\epsilon_{m}\). The _Bohr frequencies_ are all the differences \(\omega=\epsilon_{m}-\epsilon_{l}\) with \(\epsilon_{m},\epsilon_{l}\) eigenvalues of \(H_{S}\) and denote by \[B_{+}=\{\,\omega=\epsilon_{m}-\epsilon_{l}>0\,\mid\,\epsilon_{m},\epsilon_{l} \in\mathrm{Sp}(H_{S})\,\},\] the set of all strictly positive Bohr frequencies. Define for each \(\omega\in B_{+}\) the operators \[D_{\omega} =\sum_{\omega=\epsilon_{m}-\epsilon_{l}}P_{\epsilon_{l}}DP_{ \epsilon_{m}}, D_{\omega}^{*}=\sum_{\omega=\epsilon_{m}-\epsilon_{l}}P_{\epsilon_{l }}D^{*}P_{\epsilon_{l}}\] The \(H_{S}\)-WCLT Markov generator \({\cal L}^{(n)}\) on \({\cal B}(h_{n})\) has the GKSL form given by: \[{\cal L}^{(n)}(x)=\sum_{\omega\in B_{+}}{\cal L}^{(n)}_{\omega}(x),\] \[{\cal L}^{(n)}_{\omega}(x)={\rm i}[H_{\omega},x]-\Gamma_{-\omega}\left(\frac{1 }{2}\{D^{*}_{\omega}D_{\omega},x\}-D^{*}_{\omega}xD_{\omega}\right)\] \[-\Gamma_{+\omega}\left(\frac{1}{2}\{D_{\omega}D^{*}_{\omega},x\}-D_{\omega}xD ^{*}_{\omega}\right).\] where \(\Gamma_{\pm\omega}\geq 0\) and \(H_{\omega}\) is a selfadjoint operator commuting with \(H_{S}\), \[H_{\omega}=\zeta_{-\omega}D^{*}_{\omega}D_{\omega}+\zeta_{+\omega}D_{\omega}D ^{*}_{\omega},\ \ \ \ \ \zeta_{-\omega},\zeta_{+\omega}\in\mathbb{R}.\] WCLT generators are seen to be \({\cal V}\)-invariant in the basis of \(H_{S}\). Theorem 4.1 in Fagnola and Quezada[10] gives a precise characterization of when a \({\cal V}\)-invariant GKSL generator is WCLT. This theorem implies that WCLT QMS generators are Toeplitz for suitable choices of \(H_{S}\) and \(D\). Indeed one possible realization is to choose the harmonic osscilator, i.e., a non-degenerate \(H_{S}\) with purely rank one spectral projections (so no multiplicities) system Hamiltonian with the form \[H_{S}=\sum_{m_{=}0}^{n-1}m\omega\ P_{m},\ \ P_{m}=|e_{m}\rangle\langle e_{m}|, \ \ \omega>0,\] where \(\{e_{m}\}_{m}\) is the basis of eigenvectors of \(H_{S}\), and the interaction operator \(D=\sum_{i=0}^{n-1}S^{i}\) is the sum of all powers of the left shift operator in \(h_{n}\) with respect to the above basis. This basis will be the canonical basis in what follows. In this case the set of (strictly positive) Bohr frequencies is \(B_{+}=\{m\omega:1\leq m\leq n-1\}\). Following the previous construction, to each strictly positive Bohr frequency \(\omega_{m}=m\omega\) we associate the operators \[D_{m}=D_{\omega_{m}}=\sum_{j=0}^{n-(m+1)}P_{i}DP_{m+i},\ \ \ \ \ \ \ \ \ \ \ \ D^{*}_{m}=D^{*}_{\omega_{m}}=\sum_{i=0}^{n-(m+1)}P_{m+i}D^{*}P_{i}.\] Then each \({\cal L}_{m}={\cal L}_{\omega_{m}}\) can be seen to have the form \[{\cal L}_{m}(x)=i[H_{m},x]+\Gamma^{-}_{m}S^{*m}xS^{m}+\Gamma^{+}_{m}S^{m}xS^{ *m}+\Delta^{*}_{m}x+x\Delta_{m},\ {\rm where}\] \[\Delta_{m}=\Delta^{*}_{m}=-\frac{1}{2}(\Gamma^{-}_{m}S^{*m}S^{m}+\Gamma^{+}_{m }S^{m}S^{*m}),\ \ H_{m}=H^{*}_{m}=\zeta^{-}_{m}S^{*m}S^{m}+\zeta^{+}_{m}S^{m}S^{*m}.\] Omitting the notation of the dimension for now, the WCLT generator \[{\cal L}(x)=\sum_{m}{\cal L}_{m}(x)=\Phi(x)+G^{*}x+xG,\ \ G=\sum_{m}\Delta_{m}-iH_ {m} \tag{15}\] has the CP Toeplitz dissipative part \[\Phi(x)=\sum_{m=1}^{n-1}\Gamma_{m}^{-}S^{*m}xS^{m}+\Gamma_{m}^{+}S^{m}xS^{*m}\] with associated Toeplitz matrix \(T_{0}\) according to definition 4 taking \(\Gamma_{m}^{-}=t_{m}\) and \(\Gamma_{m}^{+}=t_{-m}\). By the previous sections in order to fully characterize \(\mathcal{L}\) on the diagonal subspaces \(\mathcal{V}\) it just remains to study the non-dissipative part of the generator which for convenience we shall denote by \(\Psi(x)=G^{*}x+xG\). It can be verified that \(G\) is a multiplication operator (i.e., diagonal w.r.t. to the canonical basis). **Definition 16**.: _Given the scalars \(\{\Gamma_{m}^{+},\Gamma_{m}^{-}:m=1,\ldots,n-1\}\), \(\{\zeta_{m}^{+},\zeta_{m}^{-}:m=1,\ldots,n-1\}\), define_ \[s =\sum_{m=1}^{n-1}\Gamma_{m}^{+}+\Gamma_{m}^{-},\quad\tilde{s}= \sum_{m=1}^{n-1}\zeta_{m}^{+}+\zeta_{m}^{-},\] \[s(k) =\sum_{m=1}^{n-1-k}\Gamma_{m}^{+}+\sum_{m=1}^{k}\Gamma_{m}^{-} \quad\text{ for }k=0,1,\ldots,n-1,\] \[\tilde{s}(k) =\sum_{m=1}^{n-1-k}\zeta_{m}^{+}+\sum_{m=1}^{k}\zeta_{m}^{-}, \quad\text{ for }k=0,1,\ldots,n-1.\] **Lemma 17**.: _The following relations hold_ 1. \(\dfrac{s(k)+s(k+j)}{2}<s\quad\text{for any }j=1,\ldots,n-1\) _and_ \(k=0,1,\ldots,n-1-j.\)__ 2. \(\sum_{j=1}^{n-1}\sum_{k=0}^{n-1-j}s-\dfrac{s(k)+s(k+j)}{2}=(n-1) \sum_{m=1}^{n-1}m(\Gamma_{m}^{+}+\Gamma_{m}^{-})\)__ 3. \(\sum_{k=0}^{n-1}\Big{(}\sum_{m=n-k}^{n-1}\Gamma_{m}^{+}+\sum_{m=k+1}^{n-1} \Gamma_{m}^{-}\Big{)}=\sum_{m=1}^{n-1}m(\Gamma_{m}^{+}+\Gamma_{m}^{-})\)__ **Theorem 18**.: 1. _The restriction of_ \(\Psi\) _to the subspace_ \(\mathcal{V}_{l}\)_,_ \(-(n-1)\leq l\leq n-1\) _is given by the_ \(n-|l|\times n-|l|\) _matrix_ \[G_{0} =-\sum_{k=0}^{n-1}s(k)|e_{k}\rangle\langle e_{k}|\quad\text{for }l=0,\] \[G_{l} =\sum_{k=0}^{n-1-|l|}-\left(\dfrac{s(k)+s(k+|l|)}{2}+i\dfrac{l}{ |l|}[\tilde{s}(k)-\tilde{s}(k+|l|)]\right)|e_{k}\rangle\langle e_{k}|,\text{ for }l\neq 0,\] _that is, for_ \(j=0,1,\ldots,n-1\)__ \[v_{-j}\left(\Psi(|e_{k}\rangle\langle e_{k+j}|)\right)=G_{-j}e_{ k}\quad\text{for all }k=0,1,\ldots,n-1-j,\] \[v_{j}\left(\Psi(|e_{k+j}\rangle\langle e_{k}|)\right)=G_{j}e_{k} \quad\text{for all }k=0,1,\ldots,n-1-j.\] _Therefore_ \[v_{l}(\Psi|_{\mathcal{V}_{l}}(x))=G_{|l|}v_{l}(x)\text{ \ for any }x\in\mathcal{M}_{p}(\mathbb{C})\text{ and }l=-(n-1),\ldots,0,\ldots,n-1.\] (16) 2. _The non-dissipative part_ \(\Psi\) _satisfies_ \[\Psi\cong G_{0}\oplus\bigoplus_{j=1}^{n-1}G_{-j}\oplus G_{j}\cong G_{0}\oplus \bigoplus_{j=1}^{n-1}G_{-j}\oplus G_{n-j}.\] 3. \(\|\Psi\|\leq s\)_._ Proof.: For any \(k=0,1,\ldots,n-1\) explicit computations show that \[\begin{split} Ge_{k}&=-\frac{1}{2}\sum_{m=1}^{n-1} \left(\Gamma_{m}^{-}S^{sm}S^{m}+\Gamma_{m}^{+}S^{m}S^{sm}\right)e_{k}-i\left( \zeta_{m}^{-}S^{sm}S^{m}+\zeta_{m}^{+}S^{m}S^{sm}\right)e_{k}\\ &=-\frac{1}{2}\left(\sum_{m=1}^{n-1-k}\Gamma_{m}^{+}S^{m}e_{k+m}+ \sum_{m=1}^{k}\Gamma_{m}^{-}S^{*m}e_{k-m}\right)-i\left(\sum_{m=1}^{n-1-k}\zeta _{m}^{+}S^{m}e_{k+m}+\sum_{m=1}^{k}\zeta_{m}^{-}S^{*m}e_{k-m}\right)\\ &=-\frac{1}{2}\left(\sum_{m=1}^{n-1-k}\Gamma_{m}^{+}+\sum_{m=1}^{ k}\Gamma_{m}^{-}\right)e_{k}-i\left(\sum_{m=1}^{n-1-k}\zeta_{m}^{+}+\sum_{m=1}^{ k}\zeta_{m}^{-}\right)e_{k}\\ &=-\frac{1}{2}\left(s-\sum_{m=n-1-k+1}^{n-1}\Gamma_{m}^{+}-\sum_{ m=k+1}^{n-1}\Gamma_{m}^{-}\right)e_{k}-i\left(\tilde{s}-\sum_{m=n-1-k+1}^{n-1} \zeta_{m}^{+}-\sum_{m=k+1}^{n-1}\zeta_{m}^{-}\right)e_{k}\\ &=-\left(\frac{s(k)}{2}+i\tilde{s}(k)\right)e_{k}.\end{split}\] Let \(j\geq 0\). Since \(\Psi(|e_{k}\rangle\langle e_{k+j}|)=|G^{*}e_{k}\rangle\langle e_{k+j}|+|e_{k} \rangle\langle G^{*}e_{k+j}|\), using the above computation it is straightforward to see that \(v_{0}(\Psi(|e_{k}\rangle\langle e_{k}|))=G_{0}e_{k}\) holds. Thus \[\begin{split} v_{-j}\Big{(}\Psi(|e_{k}\rangle\langle e_{k+j}|) \Big{)}&=-\left(\frac{s(k+j)+s(k)}{2}+i[\tilde{s}(k+j)-\tilde{s}( k)]\right)v_{-j}\Big{(}|e_{k}\rangle\langle e_{k+j}|\Big{)}\\ &=G_{-j}e_{k}\end{split}\] Analogous computations show that \(v_{j}\Big{(}\Psi(|e_{k+j}\rangle\langle e_{k}|)\Big{)}=G_{j}e_{k}\), while by a standard linearity argument (16) and thus 2. follows. Finally for the bound note that each \(G_{-j},G_{j}\) is diagonal. Since \(s(k)<s\), using 1. from Lemma 17 one concludes that \[\|\Psi\|=\max_{1\leq j\leq n-1}\{\|G_{0}\|,\|G_{-j}\oplus G_{n-j}\|\}\leq s.\] Putting together Theorem 9 and Theorem 18 we have thus proved: **Theorem 19**.: _The WCLT generator satisfies_ \[\begin{split}\mathcal{L}&\cong T_{0}+G_{0}\oplus \bigoplus_{l=1}^{n-1}T_{-l}+G_{-l}\oplus T_{l}+G_{l},\\ &\cong T_{0}+G_{0}\oplus\bigoplus_{l=1}^{n-1}T_{-l}+G_{-l}\oplus T _{n-l}+G_{n-l}.\end{split}\] Asymptotic equivalence of WCLT and circulant generators Given two sequences in \(\ell^{1}(\mathbb{Z})\), \(\Gamma=\{\Gamma_{m}\}_{m}\), \(\Gamma_{m}\geq 0\) and \(\zeta=\{\zeta_{m}\}_{m}\), \(\zeta_{m}\in\mathbb{R}\), consider the sequence \(\{\mathcal{L}_{T}^{(n)}\}_{n}\) of **WCLT** generators with C.P. Toeplitz dissipative part according to (15). So for each \(n\geq 1\) \[\mathcal{L}_{T}^{(n)}(x) =\Phi^{(n)}(x)+G^{(n)*}x+xG^{(n)},\] \[\Phi^{(n)}(x) =\sum_{m=1}^{n-1}\Gamma_{m}^{-}S^{*m}xS^{m}+\Gamma_{m}^{+}S^{m}xS^ {*m},\] \[G^{(n)} =-\frac{1}{2}\Phi^{(n)}(1\!\!1_{n})-iH^{(n)},\] \[H^{(n)} =\sum_{m=1}^{n-1}\zeta_{m}^{-}S^{*m}S^{m}+\zeta_{m}^{+}S^{m}S^{*m}.\] where \[\Gamma_{m}=\left\{\begin{array}{ll}\Gamma_{m}^{+}&\mbox{if }m>0\\ \Gamma_{m}^{-}&\mbox{if }m<0\end{array}\right.,\;\mbox{ and }\;\;\zeta=\left\{ \begin{array}{ll}\zeta^{+}&\mbox{if }m>0\\ \zeta^{-}&\mbox{if }m<0\end{array}\right..\] We remark that generators are _adapted_ to the sequences \(\Gamma\), \(\zeta\). Call the partial sums Let \(s^{(n)}=\sum_{m=1}^{n-1}\Gamma_{m}^{+}+\Gamma_{m}^{-}\) and \(\check{s}^{(n)}=\sum_{m=1}^{n-1}|\zeta_{m}^{+}|+|\zeta_{m}^{-}|\). Consider the sequence \(\{\mathcal{L}_{n}^{(C)}\}_{n}\) of circulant GKSL generators given by \[\mathcal{L}_{n}^{(C)}(x)=\widetilde{\Phi}^{(n)}(x)-s^{(n)}1\!\!1_{n}.\] where \(\widetilde{\Phi}^{(n)}\) is the C.P. circulant map defined in (10) and \(s^{(n)}=\sum_{j=0}^{n-1}c_{j}^{(n)}\). The circulant matrix associated to \(\mathcal{L}_{n}^{(C)}\) (see [8]) is the matrix \(Q^{(n)}=C^{(n)}-s^{(n)}1\!\!1_{n}\). It is well known that \(\|T_{0}^{(n)}\|\leq\|\Gamma\|\) and \(\|C^{(n)}\|\leq\|\Gamma\|\). **Theorem 20**.: _Let \(\{\mathcal{L}_{n}^{(T)}\}_{n}\) be a sequence WCLT GKSL generators with C.P. Toeplitz dissipative part adapted to \(\Gamma\) and \(\zeta\). If \(\Gamma,\zeta\in\ell^{1}(\mathbb{Z})\) then there exists a sequence of circulant GKSL generators \(\{\mathcal{L}_{n}^{(C)}\}_{n}\) such that \(\mathcal{L}_{n}^{(T)}\thicksim\mathcal{L}_{n}^{(C)}\)._ Proof.: The sequences are uniformly bounded since for \(n\geq 1\) \[\|\mathcal{L}_{n}^{(T)}\| =\|\Phi_{n}+G_{n}\|\leq M+\|\Gamma\|\] \[\|\mathcal{L}_{n}^{(C)}\| =\|Q^{(n)}\|=\|C^{(n)}-s^{(n)}1\!\!1_{n}\|\leq 2\|\Gamma\|.\] The representation theorems for the generators allow us to write \[\left|\mathcal{L}_{T}^{(n)}-\mathcal{L}_{C}^{(n)}\right|_{n^{2}}^{2}=\frac{1} {n^{2}}\left|T_{0}^{(n)}+G_{0}^{(n)}-Q^{(n)}\right|^{2}+\frac{1}{n^{2}}\sum_{ j=1}^{n-1}\left|T_{-j}^{(n)}+G_{-j}^{(n)}\oplus T_{n-j}^{(n)}+G_{n-j}^{(n)}-Q^{(n )}\right|^{2}. \tag{17}\] By the orthogonality of different blocks we have \[\left|T_{0}^{(n)}+G_{0}^{(n)}-Q^{(n)}\right|^{2} =\left|T_{0}^{(n)}-C^{(n)}\right|^{2}+\left|G_{0}^{(n)}+s^{(n)} \mathds{1}_{n}\right|^{2},\] \[\left|T_{-j}^{(n)}+G_{-j}^{(n)}\oplus T_{n-j}^{(n)}+G_{n-j}^{(n)} -Q^{(n)}\right|^{2} =\left|T_{-j}^{(n)}\oplus T_{n-j}^{(n)}-C^{(n)}\right|^{2}+\left| G_{-j}\oplus G_{n-j}+s^{(n)}\mathds{1}_{n}\right|^{2}.\] Thus \[\left|\mathcal{L}_{T}^{(n)}-\mathcal{L}_{C}^{(n)}\right|_{n^{2}}^ {2} =\frac{1}{n}\left|T_{0}^{(n)}-C^{(n)}\right|_{n}^{2}+\frac{1}{n} \sum_{j=1}^{n-1}\left|T_{-j}^{(n)}\oplus T_{n-j}^{(n)}-C^{(n)}\right|_{n}^{2}\] \[+\frac{1}{n}\left|G_{0}^{(n)}+s^{(n)}\mathds{1}_{n}\right|_{n}^{2 }+\frac{1}{n}\sum_{j=1}^{n-1}\left|G_{-j}\oplus G_{n-j}+s^{(n)}\mathds{1}_{n} \right|_{n}^{2}. \tag{18}\] Recall that by construction \(\left|T_{0}^{(n)}-C^{(n)}\right|_{n}\xrightarrow[n]{}0\), while by Lemma 14 \[\frac{1}{n}\sum_{j=1}^{n-1}\left|T_{-j}^{(n)}\oplus T_{n-j}^{(n)}-C^{(n)} \right|_{n}^{2}\xrightarrow[n]{}0.\] It remains to show the sums in (18) vanish as well. For the first sum take \(\epsilon>0\) and let \(N\) be such that \(\sum_{k=N}^{\infty}\Gamma_{k}^{+}+\Gamma_{k}^{-}\leq\sqrt{\epsilon}\). Using that \(x^{2}+y^{2}\leq(x+y)^{2}\) and \(3\). from Lemma 17, if \(n\geq N\) we have \[\frac{1}{n}\left|G_{0}^{(n)}+s^{(n)}\mathds{1}_{n}\right|_{n}^{2} =\frac{1}{n^{2}}\sum_{k=0}^{n-1}\left|s^{(n)}-s^{(n)}(k)\right|^{2 }=\frac{1}{n^{2}}\sum_{k=0}^{n-1}\left(\sum_{m=n-k}^{n-1}\Gamma_{m}^{+}+\sum_ {m=k+1}^{n-1}\Gamma_{m}^{-}\right)^{2}\] \[\leq\frac{1}{n^{2}}\left(\sum_{k=0}^{n-1}\left[\sum_{m=n-k}^{n-1} \Gamma_{m}^{+}+\sum_{m=k+1}^{n-1}\Gamma_{m}^{-}\right]\right)^{2}\] \[=\left(\frac{1}{n}\sum_{k=1}^{N-1}k(\Gamma_{k}^{+}+\Gamma_{k}^{- })+\frac{1}{n}\sum_{k=N}^{n-1}k(\Gamma_{k}^{+}+\Gamma_{k}^{-})\right)^{2}\] \[\leq\left(\frac{1}{n}\sum_{k=1}^{N-1}k(\Gamma_{k}^{+}+\Gamma_{k}^ {-})+\sum_{k=N}^{\infty}(\Gamma_{k}^{+}+\Gamma_{k}^{-})\right)^{2}\] \[\leq\left(\frac{1}{n}\sum_{k=1}^{N-1}k(\Gamma_{k}^{+}+\Gamma_{k}^ {-})+\sqrt{\epsilon}\right)^{2}\xrightarrow[n]{}\epsilon\] The remaining term is in general a sum of the square modulus of complex numbers. For convenience call \[\mathcal{R}(k,k+j)=\frac{s(k)+s(k+j)}{2},\qquad\mathcal{I}(k,k+j)=\tilde{s}(k )-\tilde{s}(k+j).\] A change of variables in the second allows us to simplify each term of the last sum in (18) \[\Big{|}G_{-j}\oplus G_{n-j}+s^{(n)}\mathds{1}_{n}\Big{|}^{2} =\sum_{k=0}^{n-1-j}\bigg{|}\Big{(}\mathcal{R}(k,k+j)-s^{(n)}\Big{)} +i\left(\frac{j}{|j|}\mathcal{I}(k,k+j)\right)\bigg{|}^{2}\] \[+\sum_{k=0}^{j-1}\bigg{|}\Big{(}\mathcal{R}(k,k+n-j)-s^{(n)}\Big{)} +i\left(\frac{n-j}{|n-j|}\mathcal{I}(k,k+n-j)\right)\bigg{|}^{2}\] \[= 2\sum_{k=0}^{n-1-j}\bigg{|}\Big{(}\mathcal{R}(k,k+j)-s^{(n)} \Big{)}+i\left(\frac{j}{|j|}\mathcal{I}(k,k+j)\right)\bigg{|}^{2}\] \[= 2\sum_{k=0}^{n-1-j}\Big{(}\mathcal{R}(k,k+j)-s^{(n)}\Big{)}^{2} +2\sum_{k=0}^{n-1-j}\Big{(}j\ \mathcal{I}(k,k+j)\Big{)}^{2} \tag{19}\] For \(\epsilon>0\) let \(N\) be such that \(\sum_{m=N}^{\infty}\Gamma_{m}^{+}+\Gamma_{m}^{-}\leq\frac{\epsilon}{4\| \Gamma\|}\). Using \(x^{2}=a^{2}+2a(x-a)+(x-a)^{2}\) and _(1)_,_(2)_ of Lemma 17, if \(n>N\) then the first sum of (19) is \[\frac{2}{n^{2}}\sum_{j=1}^{n-1}\sum_{k=0}^{n-1-j}\Big{(}\mathcal{ R}(k,k+j)-s^{(n)}\Big{)}^{2}\] \[=\frac{2}{n^{2}}\sum_{j=1}^{n-1}\sum_{k=0}^{n-1-j}\Big{(}\mathcal{ R}(k,k+j)^{2}-\Big{(}s^{(n)}\Big{)}^{2}-2s^{(n)}\Big{(}\mathcal{R}(k,k+j)-s^{(n )}\Big{)}\Big{)}\] \[=\frac{2}{n^{2}}\sum_{j=1}^{n-1}\sum_{k=0}^{n-1-j}\mathcal{R}(k,k +j)^{2}-\frac{(n-1)}{n}\Big{(}s^{(n)}\Big{)}^{2}+\frac{4s^{(n)}}{n^{2}}\sum_{ j=1}^{n-1}\sum_{k=0}^{n-1-j}\Bigg{[}s^{(n)}-\frac{s^{(n)}(k)+s^{(n)}(k+j)}{2} \Bigg{]}\] \[<\frac{2}{n^{2}}\sum_{j=1}^{n-1}\sum_{k=0}^{n-1-j}\Big{(}s^{(n)} \Big{)}^{2}-\frac{(n-1)}{n}\Big{(}s^{(n)}\Big{)}^{2}+\frac{4(n-1)}{n^{2}}s^{(n )}\sum_{m=1}^{n-1}m(\Gamma_{m}^{+}+\Gamma_{m}^{-})\] \[\leq\frac{4(n-1)}{n^{2}}s^{(n)}\sum_{m=1}^{N-1}m(\Gamma_{m}^{+}+ \Gamma_{m}^{-})+\frac{4(n-1)}{n^{2}}s^{(n)}\sum_{m=N}^{\infty}m(\Gamma_{m}^{+} +\Gamma_{m}^{-})\] \[<\frac{4}{n}s^{(n)}\sum_{m=1}^{N-1}m(\Gamma_{m}^{+}+\Gamma_{m}^{- })+4s^{(n)}\sum_{m=N}^{\infty}(\Gamma_{m}^{+}+\Gamma_{m}^{-})\xrightarrow[n]{}\epsilon,\] where the convergence \(s^{(n)}\xrightarrow[n]{}\|\Gamma\|\) was used. On the other hand since \(\hat{s}^{(n)}\xrightarrow[n]{}\|\zeta\|\), the second sum of (19) satisfies \[\frac{2}{n^{2}}\sum_{j=1}^{n-1}\sum_{k=0}^{n-1-j}\Big{(}j\ \mathcal{I}(k,k+j)\Big{)}^{2} \leq 2\Big{(}\frac{1}{n}\sum_{j=1}^{n-1}\sum_{k=0}^{n-1-j}\sqrt{j} \ \mathcal{I}(k,k+j)\Big{)}^{2}\] \[\leq 2\Big{(}\frac{1}{\sqrt{n}}\sum_{j=1}^{n-1}\sum_{k=0}^{n-1-j} \mathcal{I}(k,k+j)\Big{)}^{2}\] \[=2\Big{(}\frac{1}{\sqrt{n}}\Big{(}\sum_{l=1}^{n-1}\zeta_{l}^{+}- \sum_{l=1}^{n-1}\zeta_{l}^{-}\Big{)}\Big{)}^{2}\] \[\leq 2\Big{(}\frac{1}{\sqrt{n}}\hat{s}^{(n)}\Big{)}^{2} \xrightarrow[n]{}0\] This shows that \[\lim_{n}\left|\mathcal{L}_{T}^{(n)}-\mathcal{L}_{C}^{(n)}\right|_{n^{2}}=0.\] ## Acknowledgement
2308.16199
Finsler Geometry Modeling and Monte Carlo Study on Geometrically Confined Skyrmions in Nanodots
Using the Finsler geometry modeling (FG) technique without spontaneous magnetic anisotropy, we numerically study the stability and morphology of geometrically confined skyrmions experimentally observed in nanodots. We find a confinement effect that stabilizes skyrmions for a low external magnetic field without mechanical stresses by decreasing the diameter of the cylindrical lattice and strain effects that cause the sky and vortex to emerge under the zero magnetic field. Moreover, the obtained MC data on the morphological changes are also consistent with the reported experimental data.
Gildas Diguet, Benjamin Ducharne, Sahbi El Hog, Fumitake Kato, Hiroshi Koibuchi, Tetsuya Uchimoto, Hung The Diep
2023-08-26T02:27:29Z
http://arxiv.org/abs/2308.16199v2
# Finsler Geometry Modeling and Monte Carlo Study on Geometrically Confined skyrmions in Nanodots ###### Abstract Using the Finsler geometry modeling (FG) technique without spontaneous magnetic anisotropy, we numerically study the stability and morphology of geometrically confined skyrmions experimentally observed in nanodots. We find a confinement effect that stabilizes skyrmions for a low external magnetic field without mechanical stresses by decreasing the diameter of the cylindrical lattice and strain effects that cause the sky and vortex to emerge under the zero magnetic field. Moreover, the obtained MC data on the morphological changes are also consistent with the reported experimental data. + Footnote †: slugcomment: ## 1 Introduction The stability of skyrmion (sky) configurations in chiral magnets and materials hosting skys play key roles in future skyrmion control technology [1, 2, 3]. The well-known conditions for sky stabilization are external magnetic field, magnetic anisotropy, and magnetoelastic coupling [4, 5, 6]. Geometric confinement (GC) was also proposed as a stabilization technique [7], and a remarkable GC effect in combination with a strain effect was demonstrated in a recent experiment on nanodot stky [8, 9]. In Ref. [10], we proposed a model for GC, in which zero Dzyaloshinskii-Moriya interaction (DMI) is assumed on the surfaces parallel to the magnetic field. Strain effects were also implemented in the model via lattice deformations, causing static strains. However, the induced strains are static and have no positional dependence [10]. To explain the experimental results presented in Refs. [9], we introduce a directional degree of freedom \(\bar{\tau}(\in S^{2}/2)\) of in-homogeneous strain at each lattice vertex and assume that the interaction length dynamically depends on the direction of \(\bar{\tau}\) in the framework of Finsler geometry (FG), in sharp contrast to the ordinarily assumed constant Euclidean length. The interaction modified by the FG modeling prescription are ferromagnetic interaction (FMI), DMI, and a second-order ferromagnetic interaction with a magneto-elastic coupling. No explicit magnetic anisotropy is assumed; therefore, the material we study is slightly different from that in [9]. Remarkably, the assumed interactions in our study are dynamically modified to become anisotropic when \(\bar{\tau}\) is aligned in a specific direction. The \(\bar{\tau}\) direction is controllable by external stress \(\bar{f}\), and the controlled and direction-dependent \(\bar{\tau}\) modifies the anisotropic interactions that play a role in direction-dependent magnetoelastic coupling and magnetic anisotropy. Moreover, the GC effect is naturally implemented in the FG modeling technique as a small DMI on the surface compared with the bulk DMI. ## 2 Method ### 3D Cylindrical Lattices and Radial Stresses Cylindrical geometry lattices discretized by tetrahedrons are used for the simulations (see Appendix A for further details of the lattice structure). Mechanical stresses are applied along the radial direction (Fig. 1(a)), and the magnetic field applied along the \(z\) direction. We use three different lattices of size \(N\!=\!2083,5430,11962\), the total number of vertices, for the ratios \(R(=D/h)\!=\!1.2,2,3\) of the diameter \(D\) with fixed height \(h\!=\!12\) (Appendix A). ### Hamiltonian and Monte Carlo The discrete Hamiltonian is given by \[H(\bar{\sigma},\bar{\tau})=\lambda\,H_{\rm FM}+DH_{\rm DM}+H_{B}+H_{f}+\alpha H _{\rm ME}, \tag{1}\] where \(\bar{\sigma}(\in S^{2})\) and \(\bar{\tau}(\in S^{2}/2)\) denote the spin and strain variables, respectively, defined at each lattice vertex. A free boundary condition is assumed for the variables on the surface. The symbols \(\lambda,D\) and \(\alpha\) on the right-hand side are the interaction coefficients, and the terms are given as follows: Figure 1: (a) Illustrations of tensile and compressive stresses radially applied to a cylindrical lattice composed of tetrahedrons, (b) strain direction \(\bar{\tau}_{1}\) at vertex 1 and its component \(|\bar{\tau}_{1}\cdot\bar{\epsilon}_{ij}|\) along a local coordinate axis \(x_{1}\) of a tetrahedron with vertices 1, 2, 3 and 4. \[\begin{split} H_{\text{FM}}&=\sum_{\Lambda}\sum_{ij( \Lambda)}\Gamma_{ij}(\vec{\tau})\left(1-\vec{\sigma}_{i}\cdot\vec{\sigma}_{j} \right),\\ H_{\text{DM}}&=\sum_{\Lambda}\sum_{ij(\Lambda)} \Gamma_{ij}(\vec{\tau})\vec{\epsilon}_{ij}\cdot\vec{\sigma}_{i}\times\vec{ \sigma}_{j},\\ H_{B}&=-\sum_{\vec{\tau}}\vec{\sigma}_{i}\cdot \vec{B},\quad\vec{B}=(0,0,B),\\ H_{f}&=-\text{sgn}(f)\sum_{i}\left(\vec{\tau}_{i} \cdot\vec{f}\right)^{2},\\ \vec{f}&=f\frac{\vec{\tau}}{\|\vec{r}\|},\quad\text{ sgn}(f)=\left\{\begin{matrix}1&(f\,:\,\text{tension})\\ -1&(f\,:\,\text{compression})\end{matrix}\right.,\\ H_{\text{ME}}&=\text{sgn}(f)f\sum_{\Lambda}\sum_{ij(\Delta)} \Omega_{ij}(\vec{\tau})\left(\vec{\sigma}_{i}\cdot\vec{\sigma}_{j}\right)^{2}.\end{split} \tag{2}\] The first term \(H_{\text{FM}}\) describes the ferromagnetic interaction (FMI), which is deformed to have a dynamical interaction coefficient \(\Gamma_{ij}(\vec{\tau})\) between the nearest neighbors \(\vec{\sigma}_{i}\) and \(\sigma_{j}\) (see Appendix B for FG modeling details). Note that \(\Gamma_{ij}(\vec{\tau})\) and \(\Omega_{ij}(\vec{\tau})\) are normalized, such that \(\Gamma_{ij}(\vec{\tau})\to 1\), \(\Omega_{ij}(\vec{\tau})\to 1\) for isotropic \(\vec{\tau}\), which corresponds to the zero-stress configuration (Appendix B). The second term describes a deformed DMI with the same \(\Gamma_{ij}(\vec{\tau})\). The third term is the Zeeman energy and the fourth term is the response energy of strain \(\vec{\tau}\) to external stress \(\vec{f}\) along the radial direction. Because \(\vec{\tau}\) is nonpolar, the factor \(\text{sgn}(f)\) is introduced to distinguish between tensile and compressive stresses. The final term is the energy for the magnetostriction quadratic with respect to \(\vec{\sigma}\) including the coefficient \(\Omega_{ij}(\vec{\tau})\), which differs slightly from \(\Gamma_{ij}(\vec{\tau})\) for \(H_{\text{FM}}\) and \(H_{\text{DM}}\) (Appendix B). Note that \(H_{f}=H_{\text{ME}}=0\) for \(f=0\). To update \(\vec{\sigma}\) and \(\vec{\tau}\), we use the Metropolis Monte Carlo technique with random initial configurations. ## 3 Results and Discussion ### Confinement Effect First, we show the effect of GC on sky stabilization observed experimentally in Ref. [9] that sky confined in small nanodott is stable under a small magnetic field. To observe this GC effect, we numerically determine a set of parameters for the sky to be stable on the lattice of \(N=2083\) and use the same parameters on the larger lattices of \(N=5430\) and \(N=11932\). The \(z\) component of spins is plotted in Figs. 2(a)-(c), and we find that a stable sky on the \(N=2083\) lattice becomes unstable as the lattice size increases. The spins of \(\sigma^{z}<0\) are plotted. ### Strain Effect We show the morphological changes in the spin configurations, including the sky, with the corresponding strain \(\vec{\tau}\) configurations, under radial tension (\(f>0\)) and compression (\(f<0\)) with zero external magnetic field (\(B=0\)). In the case of \(f=0\), a stripe phase appears (Fig. 3(a)), and this stripe changes to sky when a tensile stress \(f=1.6\) is applied under \(\alpha=0.3\) (Fig. 3(b)), where the \(\vec{\tau}\) is parallel to the radial direction as expected. When a compression \(f=-4\) applied under \(\alpha=0.3\), a vortex configuration emerges with a spiral \(\vec{\tau}\) (Fig. 3(c)). The numerically obtained morphological change in the spin configurations under a variation in \(f\) is consistent with the experimentally reported results in Ref. [9]. It is interesting to note that the spiral of \(\vec{\tau}\) is accompanied by a vortex configuration of \(\vec{\sigma}\) under a radial compression. The configurations including sky are stable though sky and stripe phases are not clearly separated. The spin direction at the center of sky is spontaneously determined in contrast to the case of skys with \(B\neq 0\) under \(f=0\) shown in Fig. 2. To observe the strain effect in detail, we use a lattice of size \(N=5430\), and plot the results in Figs. 4(a)-(e) with the same parameters assumed on the \(N=2083\) lat Figure 4: Morphological changes on the \(N=5430\) lattice at \((T,\lambda,D)=(0.5,1,0.7)\) with \(B=0\). Snapshots of (a) \(f=0\) (stripe), (b) \(f=1.52\) (st-sk), (c) \(f=2\) (skyrmion), (d) \(f=-0.8\) (stripe) and (e) \(f=-3.2\) (vortex). St-sk in (b) denotes an intermediate phase between stripe and sky. The spins of \(\sigma^{z}>0\) are plotted. Figure 3: Top and side views of spin configurations obtained at (a) \(f=0\), (b) \(f=1.6\), and (c) \(f=-4\). The spins of \(\sigma^{z}>0\) are plotted. Figure 2: A confinement effect in which a stable sky on the lattice of (a) \(N=2083\) becomes unstable on larger lattices (b) \(N=5430\) and (c) \(N=11932\). The spins of \(\sigma^{z}<0\) are plotted. Small thin cylinders denote the \(\vec{\tau}\) direction, which is isotropic because \(f=0\). \(T\) is the temperature (\(k_{B}=1\) in the simulation unit). tice except for \(D\!=\!0.7\) (for sky size suitable to the lattice diameter). We find that the stripe phase at \(f\!=\!\alpha\!=\!0\) changes to the st-sk, which is an intermediate phase between stripe and sky, and to the sky phase when the tensile stress \(f(>0)\) increases to \(f\!=\!1.52\), whereas it changes to the vortex phase when \(f(<0)\) decreases. The stripe and vortex phases are now clear compared to the case of \(N\!=\!2083\). #### 3.0.3 Direction-dependent Interaction Coefficients Fig. 5 In-plane and \(z\) direction components of (a) DMI and (b) ME coupling constants, where the in-plane components are defined by \(\Gamma^{\rm in}\!=\!\Gamma^{+}\!+\!\Gamma^{\theta}\), \(\Omega^{\rm in}\!=\!\Omega^{\rm r}\!+\!\Delta^{\theta}\), and \(\Omega^{\rm c}_{\rm eff}\!=\!\alpha/\Omega^{\rm c}\); (\(f\!\neq\!0\)) corresponding to \(K_{\rm av}\) in Ref. [9]. The effective coupling constants \(\Gamma^{\rm in,z}\), \(\Omega^{\rm in,z}\), and \(\Omega^{\rm c}_{\rm eff}\!=\!\alpha/\Omega^{\rm c}\); (\(\alpha\!=\!0.3,f\!\neq\!0\)), are plotted in Figs. 5(a),(b), where \(\Omega^{\rm c}_{\rm eff}\) corresponds to the magnetic anisotropy \(K_{\rm av}\) in Ref. [9]. \(\Gamma^{\rm in}\) increases when \(f\) varies from negative (compression) to positive (tension) consistently with \(D_{\rm av}\) reported in Ref. [9], where \(D_{\rm av}\) decreases because the sign of the DMI energy in this paper is opposite to that in Ref. [9]. The behavior that \(\Omega^{\rm c}_{\rm eff}\) increases with increasing \(f\) from \(f<0\) to \(f>0\) is consistent with that of \(K_{\rm av}\) in Ref. [9]. The nonpolar and polar order parameters \(\bar{\sigma}^{\rm in}\) and \(|M^{\rm c}|\) are plotted (Figs. 6(a),(b)), where \(\bar{\sigma}^{\rm i}\!=\!(3/2)((\langle\sigma^{\rm i}\rangle^{2})-1/3)\), \(\bar{\sigma}^{\rm in}\!=\!\sigma^{\rm i}\!+\!\sigma^{\rm j}\!=\!\sigma^{\rm r }\!+\!\sigma^{\rm h}\), and \(|M^{\rm c}|\!=\!|\sum_{i}\sigma^{\rm c}_{i}|/\sum_{i}1\). These \(\bar{\sigma}^{\rm z}\) and \(\bar{\sigma}^{\rm in}\) in Fig. 6(a) clarify phase boundaries between the stripe, st-sk, and sky phases. We find from Fig. 6(b) that \(|M^{\rm c}|\nearrow[f(>0)\nearrow]\), where \(|M^{\rm c}|\) for \(B\!=\!0.05\) is included, and this behavior of \(|M^{\rm c}|\) is consistent with experimentally observed result that \(|M^{\rm c}|\) increases with increasing \(f(>\!0)\) under external \(B\) including \(B\!=\!0\) at least in sky phase [9]. The sky range slightly increases in the \(f\) axis when a small non-zero \(B\) such as \(B\!=\!0.05\) is applied. ## 4 Concluding Remarks We present tentative numerical results for geometrically confined (GC) skyrmions (skys) in nanodots simulated using a Finsler geometry (FG) model, in which anisotropies of interactions, including magnetic anisotropy, are dynamically generated by strains without spontaneous anisotropy. The results show that (i) the GC effect confines the sky in nanodots and stabilizes the sky in smaller nanodots with a small external magnetic field \((0,0,B)\). This GC effect originates from the surface effect, which makes the surface DMI smaller than the bulk DMI [10]. We also have (ii) radial strain effects that cause the sky to emerge in a steady state for tensile strain without external \(B\). In addition to the stable sky states, an intermediate state between sky and stripe appears, and hence sky is not always clearly separated from the stripe phase. Further numerical studies are necessary. Detailed information on the models and numerical results will be reported elsewhere. ## Acknowledgments This work is supported in part by Collaborative Research Project J23Ly07 of the Institute of Fluid Science (IFS), Tohoku University. The numerical simulations were performed in part on the supercomputer system AFI-NITY at the Advanced Fluid Information Research Center, Institute of Fluid Science, Tohoku University.
2307.11073
OBJECT 3DIT: Language-guided 3D-aware Image Editing
Existing image editing tools, while powerful, typically disregard the underlying 3D geometry from which the image is projected. As a result, edits made using these tools may become detached from the geometry and lighting conditions that are at the foundation of the image formation process. In this work, we formulate the newt ask of language-guided 3D-aware editing, where objects in an image should be edited according to a language instruction in context of the underlying 3D scene. To promote progress towards this goal, we release OBJECT: a dataset consisting of 400K editing examples created from procedurally generated 3D scenes. Each example consists of an input image, editing instruction in language, and the edited image. We also introduce 3DIT : single and multi-task models for four editing tasks. Our models show impressive abilities to understand the 3D composition of entire scenes, factoring in surrounding objects, surfaces, lighting conditions, shadows, and physically-plausible object configurations. Surprisingly, training on only synthetic scenes from OBJECT, editing capabilities of 3DIT generalize to real-world images.
Oscar Michel, Anand Bhattad, Eli VanderBilt, Ranjay Krishna, Aniruddha Kembhavi, Tanmay Gupta
2023-07-20T17:53:46Z
http://arxiv.org/abs/2307.11073v1
# OBJECT 3DIT: ###### Abstract Existing image editing tools, while powerful, typically disregard the underlying 3D geometry from which the image is projected. As a result, edits made using these tools may become detached from the geometry and lighting conditions that are at the foundation of the image formation process. In this work, we formulate the new task of language-guided 3D-aware editing, where objects in an image should be edited according to a language instruction _in context_ of the underlying 3D scene. To promote progress towards this goal, we release OBJECT: a dataset consisting of \(400\)K editing examples created from procedurally generated 3D scenes. Each example consists of an input image, editing instruction in language, and the edited image. We also introduce 3DIT: single and multi-task models for four editing tasks. Our models show impressive abilities to understand the 3D composition of entire scenes, factoring in surrounding objects, surfaces, lighting conditions, Figure 1: We present 3DIT, a model to edit individual objects in the context of a rich scene with language conditioning. 3DIT is able to effectively edit objects while considering their scale and viewpoint, is able to add, remove and edit shadows to be consistent with the scene lighting and is able to account for object occlusions. Training on our new benchmark OBJECT, 3DIT remarkably generalizes to images in the CLEVR dataset as well as the real world. shadows, and physically-plausible object configurations. Surprisingly, training on only synthetic scenes from OBJect, editing capabilities of 3DIT generalize to real-world images. More information can be found on the project page at [https://prior.allenai.org/projects/object-edit](https://prior.allenai.org/projects/object-edit). ## 1 Introduction In today's visually-oriented society, the art of image editing has become an indispensable necessity. With the proliferation of camera phones and influences from social media platforms, amateur photographers want to transform ordinary snapshots into visual masterpieces. Unfortunately, the process of image editing is still in its infancy. Professional tools such as Photoshop allow pixel-level edits that can adjust lighting, insert objects, remove clutter, and introduce new shadows; however, these tools, with their steep learning curves are often daunting for novices. With the hopes of pulling image editors out from the minutiae of painstaking pixel-level edits, generative models have been heralded as a promise for object-level edits [55; 56; 29; 50]. Unfortunately, object-centric editing--translating or rotating an object while preserving the 3D geometry of the original photograph--is out of reach for generative models [27; 23; 70; 45; 68]. Although recent strides can take a segmented object and rotate and translate it, they typically operate on objects in isolation and often disregard any scene and lighting context [45; 52; 68]. Others require multiple viewpoints to reconstruct an object in 3D [5; 23; 70]. There is a need for models that can edit objects from a single image while preserving the structure of 3D objects and re-render shadows for the edited scene with the original lighting conditions. To enable 3D-aware editing of objects in an image, we introduce OBJect, **Obj**averse **E**diting in **C**ontex**T**, a large-scale benchmark to train and evaluate language-conditioned models that edit objects in images. We develop OBJect by combining Objaverse [12], a recent 3D asset library, and Blender [11], a 3D rendering engine. OBJect contains 400k editing examples derived from procedurally generated 3D scenes. Scenes consist of up to four objects, chosen from 59k unique objects, placed on a flat textured surface with an environment lighting map, a three-point lighting system that moves with the camera, and a directional light. As shown in Figure 1, we support four types of object edits: (a) translation across the surface; (b) rotating around the axis orthogonal to the surface; (c) inserting new objects; and (d) removing existing ones. Our 3D rendering engine ensures that all edits are physically plausible and the generated images capture realistic changes in 3D geometry, illumination, and shading resulting from the underlying edit. For instance, rotation and translation require maintaining contact with the surface; inserting new objects requires identifying stable supported poses for new objects; and removing objects often requires rendering occluded objects. Each image contains a language instruction describing one of the four edits and a resulting ground truth edited image. Edited images are evaluated using quantitative metrics that capture realism and faithfulness to the ground truth. We also introduce 3DIT (**3D**-aware **D**iffusion **I**mage-editing with **T**ext), a model which supports each of the four manipulation tasks with language conditioning. 3DIT is initialized with the Zero-1-to-3 [45] diffusion model (which was trained to perform novel view synthesis) and finetuned on the OBJect dataset for object-centric image editing. The resultant model has effectively been obtained using a three-stage learning curriculum, starting with massive stable diffusion's web-scale pre-training on image-text pairs, followed by Zero-1-to-3's pre-training stage to enhance the model's understanding of 3D objects, and finally with fine-tuning on OBJect to enable object-centric edits. On OBJect's test images, 3DIT outperforms baselines across all four tasks on metrics that capture the faithfulness of the scene edit. Given the known limitations of automatic quantitative metrics, we also provide a human evaluation study, where 3DIT's outputs are preferred to the baselines over \(70\%\) of the time. Edits produced by 3DIT tend to preserve the original scene's structure and not just the edited object. 3DIT preserves the scale and viewpoint of objects, it removes and adds appropriate shadows wherever necessary, and even infills previously occluded portions of the image when the occluder is translated or removed. A multi-task variant of 3DIT performs well despite having to support all four transformations using a single set of parameters. Finally, 3DIT generalizes surprisingly well to new image domains such as CLEVR, a popular synthetic dataset for visual reasoning, as well as real-world images (see Figure 1). This highlights 3DIT's remarkable capability given that OBJect is a synthetic, procedurally generated dataset. Related work Much of this work has been inspired by the complexity of editing objects in real-world scenes. It begins to connect old ideas of editing objects in images with today's generative models. **Image editing with generative models:** The goals of editing objects and semantic regions in images with language have been active for over a decade [43]. Back then, productizable edits were limited to simple changes like cropping, colorization and resizing to complex procedures such as object removal, addition, and rearrangement [57, 28, 16, 44, 14, 82, 20, 3, 69, 51]. Traditionally, these tasks were performed manually using tools like Adobe Photoshop. However, the origin of Generative Adversarial Networks (GANs) [21] revolutionized the field, propelling significant strides toward automation. StyleGAN [34, 35, 33] notably facilitated intricate modifications to the synthesized images, paving the way for sophisticated GAN-based editing techniques with greater control and flexibility [64, 81, 10, 58, 9, 4, 1, 65]. Since then, advancements in generative image architectures have been marked by the emergence of diffusion models [15]. When coupled with the availability of large-scale image-text datasets [62], these models have facilitated the generation of high-fidelity, diverse scenes [49, 56, 60, 61, 32]. Concurrent with these developments, a new wave of image editing methodologies utilizing these large-scale diffusion models have been introduced [80, 48, 36, 27, 41, 17]. Despite these advancements, models lack the 3D awareness necessary for maintaining geometric and lighting consistency. Our dataset, OBJECT, aims to bridge this gap by enhancing existing methods and serving to evaluate future methodologies. **3D-aware image editing:** A host of recent research, including StyleNeRF [23], EG3D [5], SJC [70], DreamFusion [52], Zero-1-to-3 [45], and Make-It-3D [68], has explored lifting 2D images to 3D. By contrast, our model--3DIT-- comprehensively considers the entire scene, not just the object of interest, encompassing geometry, lighting, and other salient attributes of the background. **Scene rearrangement:** Current research in scene rearrangement tasks primarily involve solving rearrangement from robotic manipulation and embodied agents [37, 53, 54, 46] to provide more intuitive and human-like commands for scene manipulation and navigation. Specific attempts have also been made to apply these techniques to room rearrangements [71, 76, 75] using datasets like AI2-THOR [39], Habitat [67], Gibson [77] or 3D-FRONT [18]. For instance, LegoNet [75] focuses on room rearrangements without the need to specify the goal state, learning arrangements that satisfy human criteria from professionally arranged datasets provided by 3D-FRONT [18]. Distinct from these works, our research introduces a unique perspective. We focus on object-level rearrangements with a primary emphasis on 3D-aware image editing using language instructions. 3DIT is trained with OBJECT to edit scenes with a high degree of realism and 3D coherence. **3D asset datasets:** A diverse set of 3D asset dataset such as ShapeNet [6] and the recent Obja-verse [12] have played a pivotal role in 3D computer vision. ShapeNet provides a richly-annotated, large-scale dataset of 3D shapes that has found numerous applications in object recognition, scene understanding, and 3D reconstruction. Objaverse has offered a large collection of 3D objects that are semantically segmented and paired with natural language descriptions. Objaverse has been instrumental in the construction of OBJECT and also advancing several other related research areas, including generating textured meshes [7, 19, 25] zero-shot single image 3D generation [45] and enriching simulators [40, 13] for Embodied AI. **Synthetic datasets for vision models:** Diagnostic datasets such as CLEVR [31] and CLEVERER [78] provide a rigorous test bed for the visual reasoning abilities of models. They contain synthetically generated images of 3D scenes with simple primitives and associated questions that require an understanding of the scene's objects, attributes, and relations to answer correctly. Kubric [22] is an image and video dataset generation engine that can model physical interactions between objects. In a similar vein, OBJECT offers procedurally generated scenes of commonly occurring natural objects derived from ObjaVerse [12] with configurable 3D objects and associated language instructions. **Benchmarks for image editing:** There is currently a scarcity of benchmarks to evaluate generative models [30], especially for 3D scene editing. Existing ones, including light probes [73], repopulating street scenes [74], GeoSim [8] and CADSim [72] are not publicly available. Our presented OBJECT benchmark will be made publicly available. ## 3 OBJECT: A benchmark for Object Editing in Context Our goal is to design and evaluate image editing models capable of editing objects in scenes. To enable training and evaluation of such models, we develop OBJECT. OBJECT contains scenes with multiple objects placed on a flat textured surface and illuminated with realistic lighting. These edits are described to the model using a combination of language and numerical values (e.g. pixel coordinates and object rotation angle). All edits result in structural changes to the scene which in turn affect illumination changes such as inter-object reflections and shadows. The model does not have access to the underlying 3D scene (including object segmentations, locations, 3D structure, and lighting direction); it must infer these from the input pixels. ### Object editing tasks OBJECT supports four fundamental object editing tasks: Each of the following manipulations targets a single object within a scene that may contain multiple objects. We now describe each task and the capabilities required from an image editing model to succeed at the task. For specifying locations in an image, we use a coordinate system where (0,0) represents the bottom-left corner and (1,1) the top-right corner. Objects are specified in each task using their crowdsourced descriptions. **Translation**: Given the x-y coordinates of a target location, a specified object is moved from its original location in the scene to the target location while preserving its angular pose and surface contact. Since the camera is fixed relative to the scene, a change in object location requires to model to synthesize newly visible portions of the object. The model is required to change the object's scale in the image due to perspective projection i.e. the objects should appear smaller when moved further away from the camera and vice-versa. The new location may also result in drastically different illumination of the object. **Rotation**: A specified object is rotated counter-clockwise around the vertical axis passing through the object's center of mass and perpendicular to the ground by a given angle. To succeed, the model must localize the object, extrapolate the object's shape from a single viewpoint, and re-imagine the scene with the rotated object. Rotating objects leads to intricate changes to the shadow projected on the ground plane which are challenging to accurately produce. **Insertion**: Given a language description, an object matching the description is added to the scene at a designated x-y location. The model must perform object generation at the desired location with stable pose and surface contact. Besides modeling the object shape, the model also needs to understand the interaction of the geometry with scene lighting to generate a realistic shadow for the object. **Removal**: A specified object is removed from the scene. The model must not only be able to locate and segment the object, but also in-paint the object region using scene context. This often requires inpainting an object that was previously partially or fully occluded. ### Benchmark curation Paired image-&-text data is plentiful on the internet and large corpora are commonly used to train text-to-image models. However, there is a lack of image editing data consisting of initial and edited image pairs, with a description of the edit. Gathering such a dataset at scale from the real world requires significant manipulation and annotation effort. Our key insight is that while object manipulation data is difficult to acquire, it is much easier to synthesize large volumes of this data leveraging the latest advances in photorealistic rendering and large 3D asset libraries. Therefore, OBJECT contains procedurally generated 3D scenes rendered with objects from these asset libraries. Figure 2: Scene generation in OBJECT depicting camera constraints, directional lighting (environment and three-point lighting not shown), and the resulting object shadows. **Object source**. OBJECT scenes are constructed using one to four 3D objects from the Objavverse dataset[12]. The entire Objavverse dataset contains more than \(800\)k assets. Since obiaverse contains objects with errors (some objects are not fully rendered or contain no texture), we filter the objects down to a set of \(59\)k via a combination of Sketchfab metadata-based filtering and crowdsourcing. The resulting objects all have various textures, are easily recognizable, are of high quality and resolution, are free of copyrighted material, are in isolation (as opposed to a single asset with multiple objects), and are free floating (so that they may be placed on any surface in the generated scenes). Each of these assets is annotated with one of \(1613\) unique semantic categories using crowdsourcing. Workers were shown a rotating 3D rendering of a particular object and asked to apply a category label; they were provided with a handy autocomplete list of roughly \(1400\) categories sourced from LVIS [24]categories. If, however, workers were unable to find an appropriate category, they had the option of generating a new category. After this they were asked to write a sentence that describes the object, pointing out any interesting or noteworthy details that would distinguish it from other objects in the same category. Finally, category names were cleaned up to remove spelling errors; we removed unusual or rare categories. We randomly choose \(1513\) categories to be seen during training while holding out the remaining \(100\) as unseen categories for validation and testing. This category split helps quantify the generalization gap in editing previously seen vs novel objects. We use a library of \(17\) texture maps obtained from [2] to simulate wooden, cobblestone, and brick flooring for the scenes. **Scene construction**. To have the scene layout and lighting appear natural, we specify a set of constraints and perform rejection sampling for selecting physically plausible object instances, scene layout, camera positions, and lighting parameters. We limit all scenes to a minimum of one and a maximum of four objects. To identify a natural resting pose for these objects, we perform a physical simulation in Blender where we drop each object onto an XY ground plane and record its resting pose. Then to identify object placements, we sample a bounding box of the same x-y aspect ratio as the object and uniformly scale the object to lie in this bounding box. We ensure that objects, when rotated, do not intersect each other: bounding boxes who's circumscribed circles intersect are rejected. To avoid tiny objects being placed in the same scene as very large objects, we enforce the ratio between the smallest and longest largest side of each bounding box to be greater than \(0.8\). We randomly place the camera in the upper hemisphere surrounding the plane and point it towards the origin which lies on the ground plane. We further constrain the camera elevation angle from the ground between \(40^{\circ}\) to \(80^{\circ}\) to ensure that the viewing angle is neither too close to the ground nor completely vertical which are both relatively unnatural. In each scene, there is a designated object that is manipulated. If this object is not visible from the camera, we move the camera away from the origin until the object is visible both before and after the manipulation. **Scene lighting**. We use several light sources to realistically illuminate the scene. First, we add a random environment lighting map, which are special images that capture the light in a real-world scene from all directions, giving the impression that our constructed scenes are imbedded in various indoor and outdoor locations in the real world. We download 18 of these environment maps with CC0 licences from [https://polyhaven.com/](https://polyhaven.com/). Next, we add a three-point lighting system that automatically adapts to the camera view. This involves placing the key light for primary illumination, the fill light to soften key light shadows, and the back light to distinguish the subject from the background. These lights serve to effectively shade the objects in the front of the camera so that their 3D form is apparent. Finally, the scene is illuminated with directional lighting with the direction randomly sampled within a conical neighborhood around the negative-z direction to simulate an overhead light source. This consists of parallel rays emitted by a single light source infinitely far away and therefore can be specified by intensity and direction without specifying a source position. We generate 100k training examples for each task, and 1024 scenes for validation and testing. The 3D scenes are automatically generated using Blender and its Cycles ray tracer for rendering each scene. We also render segmentation masks that denote object instances, plane and background pixels for all scenes. ## 4 3DIT: a scene-aware editing model **Task setup.** Consider a 3D scene, \(S\), filled with multiple objects. Let \(x_{1}\in\mathbb{R}^{H\times W\times 3}\) represent an image of this scene produced by a rendering function \(f\). Let \(l\) represent the text description of the edit, and \(v\) represent the task-specific numerial values (i.e. angle for the rotation task and x,y coordinates for removal, insertion, and translation) to describe the desired edit to the scene \(S\). In this paper, we consider object-centric manipulations including rotating, translating, inserting, and removing objects. Manipulating the objects in \(S\) can yield a new image \(x_{2}=f(M(S,l,v))\), where \(M\) applied the transformation \(l,v\) in 3D. Our goal is to produce \(x_{2}\) without access to the 3D scene \(S\) and instead, directly editing the source image \(x_{1}\). Importantly, we have no explicit information about the scene (including scene geometry and layout), no explicit information about the lighting (such as its location and intensity), and no access to the camera parameters. All this information must be implicitly inferred from the single source image \(x_{1}\). Concretely, we wish to produce the target image \(x_{2}=\hat{f}_{\theta}(x_{1},l,v)\), where \(\hat{f}\) is a learned function with parameters \(\theta\). **Background.** Diffusion models [59] have recently shown spectacular results in generating images conditioned on text descriptions. These models consist of an encoder \(\mathcal{E}\) that maps an image \(x\) into a latent code \(z=\mathcal{E}(x)\), a decoder, \(\mathcal{D}\) that can map a latent code back to image space, and a U-Net \(\epsilon_{\theta}\) with learned parameters \(\theta\) used for denoising. Some diffusion models are trained on large training corpora such as LAION-5B [63] and are able to produce high-quality high-resolution images that faithfully represent input text descriptions. The recently proposed Zero-1-to-3 model[45] finetunes image-conditioned Stable Diffusion[42] on the task of generating an image of a single object from a novel viewpoint, conditioned on an input view and a relative camera transformation. **3DIT.** Our model, 3DIT, builds upon Zero-1-to-3. We design \(\hat{f}_{\theta}(\cdot)\) using the same base architecture but make changes to its conditioning module \(c_{\theta}(\cdot)\). Our changes enable the conditioning module to accept edit instructions in the form of language and location information to precisely define the desired edit. In the cross-attention conditional module, Zero-1-to-3 uses a CLIP image encoding to represent the initial image, followed by concatenating a four-dimensional vector encoding camera pose information. This \(772\)-dimensional vector gets passed through a multi-layered perceptron (MLP) to map it back down to a size of \(768\) dimensions. Similarly, we encode the source image \(x_{1}\) using the same CLIP image encoder. We encode \(v\) and concatenated the vector with the image representation and feed it into the MLP. Next, we append the MLP outputs with edit text tokens \(l\), which are extracted using CLIP's text encoder. We finetune our model from the \(16,500\)-step checkpoint of Zero-1-to-3. During training, the network takes a noised latent encoding of \(z_{t}\), timestep \(t\) and conditioning information \(c(x_{1},l,v)\), where \(z_{t}\) is the latent representation of the target image at time step \(t\). and produces a denoising score estimate \(\epsilon_{\theta}(z_{t},t,c(x_{1},l,v))\) where \(c(\cdot)\in\mathbb{R}^{768\times N}\) outputs a sequence of conditional embedding vectors. We finetune the network with the standard diffusion loss [29; 60]: \[\min_{\theta}\mathbb{E}_{z\sim\mathcal{E}_{\theta}(x_{1}),t,\epsilon\sim \mathcal{N}(0,1)}||\epsilon-\epsilon_{\theta}(z_{t},t,c(x_{1},l,v))||\] The entire training procedure for 3DIT resembles a three-stage curriculum. The first pre-training stage with billions of LAION image-text pairs teaches the model to produce rich imagery for text descriptions constructed from a very large vocabulary. These images contain diverse backgrounds and a variety of objects within them. Although the model can faithfully produce imagery, its ability to perform fine-grained manipulations of objects remains limited. The second pre-training stage with from Zero-1-to-3's millions of image-image pairs of different viewpoints enhances the model's understanding of 3D object shape and camera projection. The third training stage, outlined in this paper, uses OBJECT and teaches the model to manipulate objects in physically plausible scenes. We show that the resulting model can now edit individual objects without any 3D explicit information. ## 5 Experiments We now present experiments to evaluate our 3DIT model. First, we evaluate single task variants of 3DIT, i.e. one model for each of the four tasks - object rotation, translation, insertion and removal. For each of these tasks, we evaluate the performance of the model on novel scenes with objects seen at training time, and with objects unseen at training time. We also provide evaluations for a multi-task model - trained to perform all four tasks. ### Baselines For each of the four tasks, we create strong baselines inspired by recent approaches like VisProg [26] and Socratic models [79] that chain multiple foundation models together to create performant systems for various tasks including image editing. **Removal:** Removing an object from a scene requires segmenting out the object and inpainting the region. Since a point within the object region is provided as input, we use SAM [38] to generate a segmentation mask. We first use SAM in the generation mode to get candidate masks for the entire scene and select the mask which contains the point and occupies no more than a third of the area of the entire image. If no such mask is found, we attempt to get a mask by directly using the point as input to SAM to get a mask. Then, we use Stable Diffusion (SD) to inpaint the masked region using the prompt "a rendering of an uncluttered textured floor with no objects". We found that using the selected region-encompassing bounding box as the mask works better than using the fine-grained segmentation mask. **Insertion:** This baseline uses SD and the target location to re-imagine the scene with an object of the provided category. The final image is generated by SD using the prompt "a 3D rendering of category on a textured floor" conditioned on the initial image and a fixed-size (\(200\times 200\)) square mask around the target location. **Translation:** Translation requires localizing the object given the category name, removing it from the initial location, and inserting it at the target location. We use OWL-ViT [47] to localize the object given the category name. The detected bounding box is fed into SAM to generate the object segmentation mask which is then used for inpainting similar to the Removal baseline. Finally, the segmented object is composited at the target location. Figure 3: Generated examples from 3DIT as well as baselines for each of the four tasks in the OBJECT benchmark. **Rotation:** Here we use Zero-1-to-3 [45] as a baseline which requires the object to be tightly centered in the image with a white background. So, we first localize the object using OWL-ViT, crop the localized region, and segment it using SAM to create the appropriate input for Zero-1-to-3 for performing the rotation. The rotated object is composited back onto the image and the remaining unfilled regions are inpainted using SD. ### Quantitative evaluation We follow Zero-1-to-3 and use four metrics to automatically evaluate the quality and accuracy of the edited image - PSNR, SSIM, LPIPS, and FID. The first 3 directly compare the prediction to the ground truth image, while FID measures the similarity between the predicted and ground truth sets of images. Instead of computing the metrics for the whole image, we focus on the region where the edits are targeted. To do this, we simply use the ground truth segmentation mask to crop the targeted rectangular region of interest prior to computing the metrics. Since our model, as well as our baselines, can generate multiple solutions for each input, our evaluation considers the best-of-four prediction as per the SSIM metric to compute the final scores for all metrics. This considers the typical use case for editing applications where a user has the flexibility to pick from a range of generated solutions. We report metrics separately for seen and unseen object categories. Table 1 presents quantitative evaluations for 3DIT in comparison to the baselines. 3DIT outperforms the baselines for all four tasks at the metrics PSNR, SSIM and LPIP. Notably, the multi task model does well in comparison to the single task variant, in spite of having to learn 4 tasks using the same number of learnable parameters. The FID scores for the baseline models tend to be higher. This is because the baselines tend to cut/paste objects in the image (for e.g. in the translation task), which retains image fidelity, even if the scale of the object is incorrect. 3DIT on the other hand does not explicitly cut/paste segments and instead must render them using the diffusion process, and is thus prone to a poorer fidelity. On the contrary, our model is able to properly account for a variety of challenging changes to the underlying 3D scene when editing images, as shown in Figure 4. Its worth noting that the automatic evaluation metrics have limitations and often do not capture editing nuances encompassing geometry, lighting, and fidelity to the instruction. This motivates the need for human evaluation studies. \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{4}{c}{Seen Objects} & \multicolumn{4}{c}{Unseen Objects} \\ Model & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIP \(\downarrow\) & FID \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIP \(\downarrow\) & FID \(\downarrow\) \\ \hline \multicolumn{8}{c}{_Task: Translation_} \\ \hline Baseline & 13.699 & **0.309** & 0.485 & 0.942 & 14.126 & **0.326** & **0.467** & 0.968 \\ 3DIT(1-task) & 14.546 & 0.273 & 0.494 & 0.254 & 14.400 & 0.262 & 0.498 & 0.261 \\ 3DIT(Multitask) & **15.21** & 0.300 & **0.472** & **0.244** & **15.200** & 0.292 & 0.477 & **0.253** \\ \hline \multicolumn{8}{c}{_Task: Rotation_} \\ \hline Baseline & 13.179 & 0.269 & 0.540 & 0.997 & 12.848 & 0.270 & 0.538 & 1.693 \\ 3DIT(1-task) & 16.828 & **0.386** & **0.428** & 0.291 & **16.293** & **0.372** & **0.445** & 0.280 \\ 3DIT(Multitask) & **16.859** & 0.382 & 0.429 & **0.248** & 16.279 & 0.366 & 0.447 & **0.236** \\ \hline \multicolumn{8}{c}{_Task: Insertion_} \\ \hline Baseline & 12.297 & **0.269** & 0.594 & 0.969 & 12.542 & **0.275** & 0.584 & 1.325 \\ 3DIT(1-task) & 13.469 & 0.267 & **0.549** & 0.254 & 12.974 & 0.261 & **0.566** & 0.233 \\ 3DIT(Multitask) & **13.630** & 0.263 & 0.551 & **0.222** & **13.088** & 0.261 & 0.568 & **0.214** \\ \hline \multicolumn{8}{c}{_Task: Removal_} \\ \hline Baseline & 12.494 & 0.383 & 0.465 & 0.801 & 12.123 & 0.379 & 0.459 & 1.047 \\ 3DIT(1-task) & 24.937 & **0.588** & 0.254 & 0.241 & 24.474 & 0.561 & **0.260** & 0.258 \\ 3DIT(Multitask) & **24.980** & 0.585 & **0.249** & **0.236** & **24.661** & **0.568** & **0.260** & **0.240** \\ \hline \hline \end{tabular} \end{table} Table 1: Quantitative evaluation using generated samples. For each method, four samples per test image were generated. The best image according to the PSNR metric is selected to represent each sample, and these values are averaged across samples. To ensure that the metrics focus on the transformed object and not the background which mostly remains unchanged, metrics are computed using the region around the transformed object’s mask. ### Human evaluation studies We conduct human preference evaluations between 3DIT and the relevent baseline by showing two images and asking annotators to select the one that best matches the ground truth image. We measure (1) **Geometric consistency** - This requires humans to consider the geometric correctness of the transformed object, including the scale, positioning of the object on the ground plane and its relationship to other objects. It also requires humans to consider the correctness of other objects in the scene which may get occluded or unoccluded as a result of the transformation. the source caption. (2) **Lighting consistency** - This requires humans to consider the lighting correctness of the transformed object, including the direction and scale of the shadow as a result of the directional lighting. It also requires humans to consider the correctness of the shadows of other objects in the scene which may get occluded or unoccluded as a result of the transformation. Both evaluations also allow a third option (Tie) to be selected. Each pairwise evaluation is carried out for 30 test samples. Table 2 presents a human evaluation study of the 3DIT model (in a single task setting) in comparison to the corresponding baseline for all four tasks. 3DIT is heavily favored by humans, consistently obtaining preference scores of 70 % and more across all four tasks for geometric as well lighting consistency. The tied scores refer to instances where both models did exceedingly poorly and where both models did a close to perfect job. For the translation task, 3DIT is able to scale the object appropriately, as well rendering the shadow correctly. The baseline, in particular, does a poor job of the shadow and gets the scale wrong, leading to a physically implusible image. For the rotation task, 3DIT performs a rotation consistent with the ground plane and also renders a superior shadow. For the removal task, 3DIT tends to inpaint occluded objects well, and correctly adjusts their shadows. It also does well at removing the entire extent of the correct object in contrast to the baseline. Figure 4: The figure shows the ability of 3DIT to handle various challenges of 3D-aware image editing such as: (a) (_Top left_) perspective size changes; (b) (_Top right_) synthesizing novel view points; (c) (_Bottom left_) generating occluded regions; (d) (_Bottom right_) accounting for scene lighting while rendering objects and their shadows. ### Real-world transfer While we train our models on simulated data, we test the model's ability to transfer to real-world images qualitatively. Figure 5 shows our model's output for different prompts for the same input image for all four tasks. We find these preliminary results encouraging as the outputs not only respect the task description but also look reasonably photo-realistic with appropriate shadows despite never seeing real-world editing examples during training. ## 6 Limitations and Broader Impact Our work explores the use of synthetic data for training physically plausible and scene-aware image editing models. Given that even training on scenes with limited realism and complexity results in models that transfer well to the real world, there is tremendous potential to significantly improve performance by using more advanced photo-realistic simulators. Finetuning on a small set of hand-crafted real-world editing examples may also improve transfer to real-world images and enable compelling editing applications. Our work leads the way towards easy-to-use and increasingly powerful image editing capabilities for the broader society in the near future. Like any generative model, our work could also potentially be misused for propagating misinformation. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Task & \multicolumn{3}{c}{Geometric consistency} & \multicolumn{3}{c}{Lighting consistency} \\ \hline & Baseline & 3DIT(_Ours_) & Tie & Baseline & 3DIT(_Ours_) & Tie \\ \hline Translation & 20.0 \% & 73.3 \% & 6.6 \% & 3.3 \% & 80.0 \% & 16.6 \% \\ Rotation & 3.3 \% & 80.0 \% & 16.6 \% & 6.6 \% & 73.3 \% & 20.0 \% \\ Insertion & 13.3 \% & 70.0 \% & 16.6 \% & 10.0 \% & 73.3 \% & 16.6 \% \\ Removal & 3.3 \% & 86.6 \% & 10.0 \% & 0.0 \% & 86.6 \% & 13.3 \% \\ \hline \hline \end{tabular} \end{table} Table 2: Outcome of the human evaluation. The table illustrates the evaluators’ preferences for 3DIT assessed on geometric accuracy and 3D lighting consistency. Baseline methods rarely gained preference due to their limited capacity to maintain geometric quality and lighting consistency. Figure 5: 3DIT is able to generalize to the real world while only being trained on a synthetic dataset. Here we show varying prompts for each of the four editing tasks. Conclusion This work presents 3DIT, a model capable of editing individual objects within images, given a language instruction. 3DIT is trained on a new dataset, OBJect, consisting of 400k 3D scenes procedurally generated using Objaverse objects. 3DIT performs well across on OBJect and shows promising generalization to CLEVR as well as the real world.
2303.03180
Highly ordered LIPSS on Au thin film for plasmonic sensing fabricated by double femtosecond pulses
We report on the single-step fabrication of homogeneous and highly ordered Laser Induced Periodic Surface Structures (LIPSS) over large areas on Au nanolayers, that can be used for plasmonic sensing. A comprehensive study on LIPSS formation on 32 nm Au film upon double, 170 fs pulse irradiation, unveiled the key importance of interpulse delay as the determining factor behind the homogeneity of laser induced structures and confirmed that highly ordered, functional LIPSS occur solely upon double pulse irradiation. In particular, the impact of pulse overlap, fluence and interpulse delay reveals that homogeneous LIPSS formation is optimized within a specific interpulse delay range. At the same time, examination of nanoscale features of the structures points out a significant differentiation of the LIPSS formation characteristics between single and double pulse irradiation. Theoretical investigation complements experimental results providing insights on the structure formation mechanism. Ellipsometric measurements validate that such structures exhibit characteristic plasmon resonances that can be exploited for sensing applications. The presented data demonstrate a novel functionality of LIPSS, while providing strong evidence of the capabilities of femtosecond double pulse irradiation as a valuable and low-cost tool for the precise fabrication of highly ordered structures.
Fotis Fraggelakis, Panagiotis Lingos, Emma Cusworth, Vasyl G. Kravets, Alexander N. Grigorenko, Andrei V. Kabashin, Emmanuel Stratakis
2023-03-06T14:48:18Z
http://arxiv.org/abs/2303.03180v2
# Highly ordered LIPSS on Au thin film for plasmonic sensing fabricated by double femtosecond pulses ###### Abstract We report on the single-step fabrication of homogeneous and highly ordered Laser Induced Periodic Surface Structures (LIPSS) over large areas on Au nanolayers, that can be used for plasmonic sensing. A comprehensive study on LIPSS formation on 32 nm Au film upon double, 170 fs pulse irradiation, unveiled the key importance of interpulse delay as the determining factor behind the homogeneity of laser induced structures and confirmed that highly ordered, functional LIPSS occur solely upon double pulse irradiation. In particular, the impact of pulse overlap, fluence and interpulse delay reveals that homogeneous LIPSS formation is optimized within a specific interpulse delay range. At the same time, examination of nanoscale features of the structures points out a significant differentiation of the LIPSS formation characteristics between single and double pulse irradiation. Theoretical investigation complements experimental results providing insights on the structure formation mechanism. Ellipsometric measurements validate that such structures exhibit characteristic plasmon resonances that can be exploited for sensing applications. The presented data demonstrate a novel functionality of LIPSS, while providing strong evidence of the capabilities of femtosecond double pulse irradiation as a valuable and low-cost tool for the precise fabrication of highly ordered structures. Institute of Electronic Structure and Laser (IESI), Foundation for Research and Technology (FORTH),N. Plastra 100, Vassilika Vouton, 70013 Heraklion, Crete, Greece 2Department of Physics and Astronomy, Manchester University, Manchester M13 9PL, UK 3Aix Marseille Univ, CNRS, LP3, Campus de Luminy, Case 917, 13288, Marseille, France 4Department of Physics, University of Crete, 71003 Heraklion, Crete, Greece *Correspondence: [email protected], [email protected] ## I Introduction Laser surface processing is emerging as an efficient way to transform the properties of solid surfaces. Micro and nano morphologies, often inspired by nature, have been fabricated by laser on various materials to artificially tailor surface properties[1]. Among the functionalities reported for bulk materials are coloring[2], superhydrophobicity[3], anti-icing[4], anti-reflectivity[5], surface blackening[6] altered bacteria[7] and cell adhesion[8]. Currently, the scientific interest for laser surface processing of materials with thickness in the order of the optical penetration depth, so called thin films, is increasing rapidly [9] Structures developed on thin films can be used in various applications including optical elements [10], plasmonic sensors[9] and substrates for nanomaterial growth[11]. Fabricating surface structures via laser, offers many advantages compared to other techniques such as photolithography or e-beam lithography. In particular, it is a cheap, fast, scalable and chemical free process. Furthermore, the constant increase of laser processing throughput and the speed of positioning systems enable laser surface processing be a viable industrial scale solution for materials' functionalization. Nature provides a variety of functional surfaces from which the laser engineering of biomimetic surfaces field is inspired [113]. The eventual functionality attained upon laser processing is the collective result of the surface morphology at the both the micro- and the nano-scale level [12] with the reported features covering a broad palette of shapes, sizes and hierarchical formation [1]. In this context, reproducing functionalities found in nature on technical surfaces is always coming down to controlling the laser-induced morphology and numerous studies have been focused in developing new structures [14, 15] or in tailoring structures geometry [16]. The most common type of laser-induced structure that can be fabricated in almost any kind of material is LIPSS, Particularly, the Low Spatial Frequency LIPSS (LSFL) [17] are periodic structures with sizes in the range of laser wavelength and can have either uniaxial (1D-LIPSS) or multiaxial symmetry (2D-LIPSS). LIPSS with periodicities well below the laser wavelength (\(\lambda\)/2 to \(\lambda\)/10) are coined as high spatial frequency LIPSS (HSFL) [17]. Key process parameters such as laser wavelength, polarization, fluence and number of incident pulses can have a major impact on LIPSS formation. In particular, the LSFL period is linked to the laser wavelength, whereas the HSFL period seem to depend mostly on other parameters, such as the laser fluence and the pulse duration [18]. At the same time, the final structure formation is a multi-pulse process where the surface relief is shaped progressively pulse after pulse [19], therefore a certain number of pulses is required for developing pronounced structures. When large areas are textured, the delivery of evenly distributed amounts of energy over the processed area is essential to maintain the homogeneity of the structures formed, which makes the irradiation strategy a crucial aspect of the process. More complex morphologies arise when pulse characteristics such as polarization, spatial intensity and temporal profile are tailored. For example 2D-LIPSS can be fabricated upon irradiation by beams with circular [20] and azimuthal/radial polarization [21]. Understanding the underlying structure formation mechanism is essential to control the laser-induced surface morphology. Even though there is no generally accepted model, LIPSS are considered to be the result of the synergistic contribution of two processes; Inhomogeneous light absorption which occurs during the irradiation (Process I) and thermal and hydrodynamic which take place at timescales much longer than the pulse duration (Process II) [14, 19, 22, 23]. The different timescales of the two processes were unveiled by both pump-probe experiments [24, 25] and simulations [26]. In a descriptive view, Process I refers to the setting of the initial conditions of the surface reorganization process. Mainly, pulse polarization, energy and intensity distribution as well as the material's electronic structure and surface morphology define the distribution of the energy on the surface during irradiation. There are plenty of theoretical and experimental works which attribute the origin of laser-induced nanoscale periodic structures on metallic surfaces to the superposition of the incident laser beam with the scattered electromagnetic waves such as surface plasmon polaritons (SPPs) created on the air-metal interface and evanescent quasi-cylindrical waves [27, 28, 29, 30]. SPPs are considered the dominant source of surface waves, a process that results from the coherent interaction of the incident laser field with free electrons created in the material [31]. According to that model, polarization control and spatial beam shaping are expected to have a major impact on Process I by imposing the absorption pattern. Following the equilibration between electrons and lattice and depending on the deposited energy, the material surface melts or ablates and ultimately resolidifies in nanosecond timescales [32]. Considering sub ablation conditions, the temperature gradients residue from Process I [19] give rise to surface tension and molten materials' density variations along the surface that drive the displacement of molten material due to hydrodynamical motion [19]. Depending on the amplitude and the steepness of the temperature gradient, different hydrodynamical effects may take place including Marangoni flow [19, 33]. Heat is gradually dissipated into the bulk, the melting depth and temperature changes affecting the hydrodynamical values of the process such as materials viscosity, density and thermal conductivity. Several aspects of this dynamic process are yet to be unveiled and can only be fully understood via multiscale simulation. Recently the role of pressure waves in the development of flow instabilities was investigated theoretically [33]. Both Processes I and II can be tailored to some extend upon double pulse irradiation (DPI) via the variation of the interpulse delay separation time, \(\Delta\tau\), and the polarization state. In particular, it has been shown that DPI with interpulse delay time in the picosecond regime can have a strong impact on the morphology attained, enabling control over the hierarchical formation of 2D-LIPSS [14]. Moreover, DPI can be employed to increase structure homogeneity over large area, producing highly ordered structures [34, 35]. DPI in picosecond delay times is also linked to the HSFL formation, as hierarchical 2D-LIPSS and HSFL [14, 36], as well as homogeneous nanoholes and nano protrusions [34, 37, 38]. DPI is also expected to have a drastic impact on Process II [39, 40], considering that case the second pulse arrives at the surface during the evolution of the hydrodynamical phenomena [26]. When DPI is combined with spatial pulse shaping, for example upon using Direct Light Interference Patterning control over both Process I and II can be realized [39], giving rise to the generation pf structures [40]. Although the aforementioned mechanisms are mostly applicable for bulk materials, some of the basic principles of LIPSS formation can be also applicable to the case of thin metallic films with thickness in the order of a few tens of nanometers. Nonetheless, significant differences are anticipated [41], considering that the material's thickness is comparable with the penetration depth of the laser radiation. In particular, metallic thin films are embedded on two dielectric media, the superstrate (i.e. air) and the substrate, exhibiting two interfaces where SPPs could be simultaneously excited. When the film thickness is comparable to the penetration depth, the SPP mode on the one interface can transfer energy to the other through the film and vice versa which results in complex spatial modulation and distribution of the laser energy within the material [42, 43]. Moreover, during LIPSS formation the pulse energy and the total number of pulses should be properly controlled to avoid film damage [41]. Finally, the development of plasmonic devices are mostly based on noble metals, in which LIPSS are hard to formed form upon single pulse irradiation [44]. In this work we employ DPI to overcome the above-mentioned limitations related to thin film laser processing of noble metals and fabricate homogeneous LIPSS on 32nm Au thin film. Both ps and ns interpulse delay regimes are investigated, while the impact of important process parameters such as pulse to pulse overlap and laser fluence is carried out. Besides this, the role of \(\Delta\tau\) in the homogeneity of the structures attained is thoroughly investigated and discussed. As determined by spectroscopic ellipsometry measurements, the fabricated LIPSS structures exhibit pronounced plasmon resonances, making such structures suitable for sensing applications. ## 2 Experimental part laser processing Thin Au films with thickness of d\({}_{Au}\)=32 \(\pm\) 2 nm on glass substrate (d\({}_{sub}\)=170 \(\mu\)m) have been laser processed using the radiation of a 170 fs laser source emitting at 1030 nm (Light Conversion, Pharos). For the generation of the double pulses and the tuning of \(\Delta\tau\) a setup shown in Figure 1 was developed. In detail, the main beam is divided into two parts by a polarizing beam splitter (BS) and then is guided into two arms, Arm A and Arm B respectively. Arm A is controlled by a computer-assisted micrometer displacement controller (DC) used for setting the \(\Delta\tau\) value in the range of 0 to 50 ps with accuracy of 2 fs. Arm B is placed on a manual dovetail rail to produce \(\Delta\tau\) values in the range of 100 ps to 2 ns with an accuracy of ~3.5 ps. The first half wave plate (\(\lambda\)/2) combined with a linear polarizing cube (LPC) is utilized to control the fluence distribution between the two pulses. The second \(\lambda\)/2, in Arm B, controls the polarization of Arm B, which in this series of experiments is fixed vertically as shown in Figure 1. As a result, the two arms generate two pulses with parallel polarization onto the sample surface. The recombination of the beams of the two arms in colinear propagation is realized by the BS. Beam attenuation is realized via a computer controlled \(\lambda\)/2 in combination with a LPC. A computer-controlled attenuation part consisting of a half waveplate and a LPC is used to tune the pulse energy. A polarization control unit consisting of a \(\lambda\)/2 is used to control the polarization direction onto the sample plane. The sample is placed on a programable 3-axis motorized stage. The spot size 2w\({}_{0}\) was calculated to be ~55 \(\upmu\)m in diameter at 1/e\({}^{2}\) using a CCD camera placed at the focal plane. The overlap of the two pulses is estimated with accuracy in the order of the pulse duration (170 fs), utilizing a second harmonic generation (SHG) crystal. The experiments were conducted at normal incidence and in ambient air. The process parameters considered in the experimental part are: The average fluence, \(\partial\) that is calculated using the equation (1) in Table 1 where the E\({}_{\text{p}}\) is the energy per pulse measured with a pyroelectric power meter. The overlap (Ov) is estimated as the average number of incident pulses at any point of irradiated area (pps) as describe in the equation (2) of Table 1, where \(u\) is the average speed and \(f\) the repetition rate. Finally, \(\Delta\tau\) is calculated using equation (3) of Table 1, where \(c\) stands for the speed of light, \(\Delta L\) is the optical path difference of the two beams while the presence of the factor of 2 comes from the fact that the pulse travels two times the displacement of the arm due to the setup geometry. The hatch, \(H\) is defined as the step between the scanning lines and is fixed to H = 2 \(\upmu\)m, after some preliminary study, unless otherwise stated. Finally, the average dose \(D\)= \(\partial\)-\(p\)ps is a measure of the total irradiation energy per spot. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Value & Sympo & Geulation & Min & Web \\ \hline Fluence & \(\Phi\) & (1) \(\Phi[J/cm^{2}]=E_{p}/\pi w_{0}^{2}\) & 0.04 J/cm\({}^{2}\) & 0.16 J/cm\({}^{2}\) \\ Overlap & **Ov** & (2) \(Ov[pps]=(2w_{0}/u)\cdot f\) & 20 pps & 400 pps \\ \hline Hatch & H & 2 \(\upmu\)m & 20 \(\upmu\)m \\ \hline Interpulse delay & \(\Delta\tau\) & (3) \(\Delta\tau\) (\(ps\)) = 2 \(\cdot\dfrac{\Delta L(\mu m)}{c}\cdot 10^{6}\) & 0 ps, 5 ps, 10 ps, 20 ps, 50 ps, 100 ps, 300 ps, 500 ps, 800 ps, 1 ns, 1.5 ns, 2 ns. \\ \hline \end{tabular} \end{table} Table 1: Process parameters and value ranges he morphologies of the laser-fabricated structures were visualized by a field-emission Scanning Electron Microscope (SEM). All the measurements of the features of surface structures were performed by a 2D-FFT analysis of the corresponding SEM images using Gwyddion ([http://gwyddion.net/](http://gwyddion.net/)), a free and open-source software for data visualization and analysis. Electromagnetic SimulationsHere we provide a description of the theoretical model used for the determination of the electromagnetic modes that are excited after irradiation of a thin Au film with fs pulses. Analytical approaches such as the Sipe theory, provides a _near-surface_ description of the inhomogeneous absorption of optical radiation by the roughened surface [45]. On the other hand, computational approaches based on the numerical solution of the Maxwell equations such as finite-difference time-domain (FDTD) [46] and finite integration technique (FIT) [47] are capable to provide details about the electromagnetic modes produced on complex media in three dimensions. We investigate numerically the spatial modulation of the energy below the irradiated rough surface of a \(d=32\) nm thick Au/SiO\({}_{2}\) thin film, resulting from the electromagnetic field distribution i.e. SPP coupling between the two dielectric-metal interfaces and other surface waves, by solving the integral form of the Maxwell equations. For this purpose, the Maxwell's Grid Equations are solved considering a three layer system (air-metal-glass) with optical constants \(\varepsilon_{a}=1\) (air), \(\varepsilon_{g}=2.1\) (glass) and \(\varepsilon_{m}=-44.47\ +i3.2\) (Au) [48] for laser wavelength \(\lambda_{L}=1026\) nm. For Au thin film described above, the optical skin depth is comparable to the film thickness. The laser beam is considered to be a normally-incident plane-wave and linearly polarized along the \(x\)-axis of duration \(t=\)100fs. The laser beam is propagating along the \(z\) axis and the \(xy\) plane, i.e. the sample plane, is perpendicular to the propagation Figure 1: The setup. Abbreviations: beam splitter (BS), linear polarizer (LPC), half waveplate (\(\lambda\)/2), Attenuation part (AT), polarization control part (PC). Linear displacement (\(\Delta x\)), time delay (\(\Delta x\)). direction. We keep simple periodic boundary conditions for \(xy\), while at the \(z\)-boundaries normal to the propagation, convolutional perfectly matched layers (CPML) are used in order to truncate the computational domain and avoid non-physical reflections at the edges of the simulation grid. The periodic effects due to periodic boundary conditions are of low importance since the irradiated metallic area is large enough and can be supressed by the optical losses. Using the boundary conditions described above, the spatial laser profile matches perfectly the \(xy\) plane of the structure with homogeneous intensity distribution. The surface roughness plays the key role in electromagnetic wave scattering and generation of SPP that are the precursors of LIPSS. To emulate the features of a rough surface, we introduce randomly distributed scattering centres in the form of nano-holes of radius \(r<d\) along the film surface. The concentration of inhomogeneities at the air/Au interface is considered to be \(C=N\pi r^{2}/S\approx 0.25\%\) where N is the number of the nano-holes and the laser affected area \(S=10\times 10\) m\({}^{2}\). A similar approach has been introduced in similar studies for bulk materials [29, 49, 50]. The interaction of light with the surface inhomogeneities of the material produces electromagnetic interference patterns along the surface, which determine the energy absorption landscape. Since the absorbed energy of the film is proportional to the intensity, in order to capture the absorption energy maxima and minima due to scattering of overlapping surface waves, we calculate the normalized intensity difference \((I-I_{S})/I_{S}\) where \(I\sim\left|\vec{E}\right|^{2}\)is total intensity of Au films below the rough surface, while \(I_{S}\) is the total intensity of Au below a defect-free surface. This difference depicts the intensity maxima and minima due to both scattered radiative and non-radiative fields by the subwavelength imperfections. ### Ellipsometric Measurements To measure optical properties of the fabricated samples, we used a variable angle focused-beam spectroscopic ellipsometer Woollam M 2000F. It is based on the rotating polariser-compensator-analyser setup (Figure 2) and utilises a diode array spectrophotometer to extract the spectral parameters \(\Psi\) (ellipsometric reflection) and \(\Delta\)(ellipsometric phase) in the wavelength range of 240-1690 nm, with a wavelength step of \(\sim\)1.0 nm for 240-1000 nm and \(\sim\)2.0 nm for 1000-1690 nm. The beam spot size on the sample was approximately 30 \(\mu\)m \(\times\) 60 \(\mu\)m for ~60-70\({}^{\circ}\) angles of incidence. These parameters are related to the sample reflection as \(\tan(\Psi)\exp(i\Delta)=r_{p}/r_{s}\), where \(r_{p}\) and \(r_{s}\) are the amplitude reflection coefficients for \(p\)- and \(s\)-polarized light, respectively. In addition to ellipsometric parameters \(\Psi\) and \(\Delta\), the ellipsometer enables one to separately measure \(R_{p}\)=\(|r_{p}|^{2}\) and \(R_{s}\)=\(|r_{s}|^{2}\) providing the intensity reflection spectra for \(p\)- and \(s\)-polarised light, respectively. The errors on the polariser, compensator and analyser azimuthal parameters are generally less than \(\pm\)0.01\({}^{\circ}\), which implies that, the ellipsometric parameters \(\Psi\) and \(\Delta\) can be measured with an error of level \(\pm\)0.02\({}^{\circ}\). The schematic for ellipsometry measurements is presented in 2, showing that the Au plasmonic LIPSS were oriented in such a way that the plane of incidence was parallel to the array lattice vector (i.e the LIPSS were perpendicular to the plane of incidence). An Ultra Plus Carl Zeiss SEM was used Figure 2: Schematic diagram of ellipsometric measurement for gold patterned nanostructures and geometry of orientation of ablated samples with respect to the direction of incident polarised light. for high-resolution imaging of the nanostructures. The unique in-lens SEM detector gives resolution of the order of 1.0 nm at 15 kV (1.6 nm at 1 kV), dependent on the type of samples. Theoretical modelling of optical properties of fabricated samples was performed with the help of Fresnel theory where we applied the effective-medium approximation (EMA) to the top metal layer nanostructured by laser ablation. WVASE32 software of J.A. Woollam Company was used to perform calculations. The model geometry for studied samples was chosen to be constructed of three layers: glass as a substrate, unperturbed film of Au and an EMA layer of ablated gold film which was a combination of gold and air. The optical constants of EMA layer were calculated using a Maxwell-Garnett theory. The thickness of the unperturbed gold layer, thickness of the EMA layer and the ratio of void to gold in the EMA layer were varied to achieve the best fit with the measured data. ## 3 Results and discussion In this part the morphology obtained following double pulse laser irradiation of Au thin films is presented and discussed. The laser processing results are complemented with theoretical simulations on laser surface coupling and surface functionality characterization. Resulting morphology for single and double pulse irradiation Figure 3 illustrates the resulting morphology on Au surface after irradiation with single and double pulses respectively. The overlap value is fixed for all cases to Ov = 150 pps, whereas the fluence and \(\Delta\)t values vary as indicated. Different fluence ranges are considered for single and double pulses; For single pulses the fluence values varies between \(\Phi\) = 70 and 100 \(\mu\)J/cm\({}^{2}\), while for double pulses between \(\Phi\) = 80 and 120 \(\mu\)J/cm\({}^{2}\). The fluence values are chosen appropriately to illustrate the different laser induced morphologies from structure appearance (low \(\Phi\)) to thin film damage (high \(\Phi\)). The difference in the \(\Phi\) values between single and double pulses is attributed to the variation of the effective fluence between single and double pulses irradiation as well as between different \(\Delta\)t values[51]. For single pulse irradiation (Figure 3, SPI and \(\Phi\) = 70 \(\mu\)J/cm\({}^{2}\) the material is only partially structured, particularly craters are formed on the surface, while traces of LIPSS are present around the crater. For \(\Phi\) = 75 \(\mu\)J/cm\({}^{2}\) craters densify in a random way on the surface, their size is in the order of 1 \(\mu\)m and the areas among them is covered with inhomogeneous and pale LIPSS. For \(\Phi\) = 80 \(\mu\)J/cm\({}^{2}\) craters densify further and merge, while LIPSS appear to be less prominent compared to \(\Phi\) = 75 \(\mu\)J/cm\({}^{2}\). For higher fluences (\(\Phi\) = 90 - 100 \(\mu\)J/cm\({}^{2}\)) the craters grow in number and size leading to film damage. For double pulse irradiation (DPI), and \(\Delta\tau\) = 20 ps a different evolution of structures is observed upon increasing the fluence. For \(\Phi\) = 80 \(\mu\)l/cm\({}^{2}\) no trace of surface modification is observed, while for \(\Phi\) = 85 \(\mu\)l/cm\({}^{2}\) areas with nano roughness appear on the surface. Notably, for \(\Phi\) = 90 \(\mu\)l/cm\({}^{2}\) prominent and homogeneous LUPSS are formed together with few craters. Upon increase of fluence to 100 and 120 \(\mu\)l/cm\({}^{2}\), craters densify, grow and merge in a similar way as for SPI leading to surface damage. The same trend is observed in all the cases of DPI presented in Figure 3; At low \(\Phi\) values, random areas with nano roughness appear on the surface. At intermediate \(\Phi\) values, homogeneous LUPSS are formed on the surface for most of the interpulse delays examined. As \(\Phi\) takes higher values, craters are formed, densify, grow, and damage the surface. The specific fluence threshold leading to the formation of the Figure 3: SEM images showing the different morphologies obtained by SPI and DPI at different fluences, \(\Phi\), and interpulse delays, \(\Delta\tau\). Homogeneous and prominent LUPSS marked by "". morphologies varies depending on the interpulse delay. Homogeneous and prominent LIPSS, marked by '*' in Figure 3, are formed almost in all cases of DPI; in particular, for \(\Delta\tau\) = 50 ps LIPSS are formed at \(\Phi\) = 90 \(\mu\)J/cm\({}^{2}\), for \(\Delta\tau\) = 0.5 ns at \(\Phi\) = 85 \(\mu\)J/cm\({}^{2}\), for \(\Delta\tau\) = 1 ns at \(\Phi\) = 85 \(\&\) 90 \(\mu\)J/cm\({}^{2}\) and finally for \(\Delta\tau\) = 2 ns at \(\Phi\) = 100 \(\mu\)J/cm\({}^{2}\). The observed variation of the \(\Phi\) values leading to optimum LIPSS structures is attributed to the ultrafast dynamics affected by the interpulse delay. Nevertheless, a deep understanding of the underlying mechanisms needs to be further studied and theoretically investigated. LIPSS Process window in respect to the \(\Delta\tau\) valueThe pulse-to-pulse overlap has a similar impact as \(\Phi\) to LIPSS formation. Therefore, different combinations of \(\Phi\) and Ov lead to homogeneous LIPSS formation, as presented in Figure 4. In this figure, each type of morphology is indicated with a specific color. In particular, combination of process parameters that led to non-textured surface are indicated with white. Transition regimes, in which we have a partial development of LIPSS, but some areas stay non-textured, are marked by pale yellow. Areas that have been textured homogeneously with LIPSS without craters (less than 5 craters with diameter larger than 500 nm within an area of 500\(\mu\)m\({}^{2}\)) are marked with yellow. Areas that show significant formation of craters are marked with brown, while damaged areas are marked with grey. Another type of morphology observed, marked with pink, indicates areas consisting of the generation of LIPSS with two different periods formed at the same irradiation conditions. This type of morphology will be discussed in detail in the following section e). Following a careful examination of the graphs illustrated in Figure 4, it is evident that for a given \(\Delta\tau\), there is interdependence between \(\Phi\) and Ov values leading to homogeneous LIPSS formation. In particular, homogeneous LIPSS are formed for low fluence values combined with high pulse overlap, average fluence values and pulse overlap, as well as high fluence values with low pulse overlap. Interestingly, the average dose, D, is substantially different in each case and increases upon fluence decrease, from D = 2 mJ/cm\({}^{2}\) when \(\Phi\) = 100 \(\mu\)J/cm\({}^{2}\) to D = 26 mJ/cm\({}^{2}\) when \(\Phi\) = 65 \(\mu\)J/cm\({}^{2}\). Such D value variation for more than one order of magnitude points out that LIPSS formation occurs above a certain \(\Phi\) threshold. When \(\Phi\) is low (\(\Phi\) = 65 \(\mu\)J/cm\({}^{2}\)), that threshold is reached at a smaller spot radius and a large number of pulses is required to texture the area (Ov = 400 pps). When the fluence is high (\(\Phi\) = 100 \(\mu\)J/cm\({}^{2}\)), the effective spot radius is larger and a small number of pulses is required to texture the area (Ov = 20 pps). Thus, relatively higher \(\Phi\) values are more effective for LIPSS formation and this is the trend observed for any \(\Delta\tau\) value tested. The effectiveness of the laser structuring process is indicated by the range of Ov and \(\Phi\) values leading to homogeneous LIPSS generation. In order to have a qualitative representation of the process window range leading to homogeneous LIPSS generation, we mark the areas, A\({}_{\text{PW}}\), among symbols leading to homogeneous LIPSS with pale yellow color (Figure 4). Interestingly A\({}_{\text{PW}}\) varies notably among the different \(\Delta\tau\) tested, while A\({}_{\text{PW}}\) is maximized at an optimum \(\Delta\tau\) range 1 ns \(<\Delta\tau<\) 1.5 ns. Theoretical investigations Figure 4: Graphs showing the resulting morphology upon systematic variation of two process parameters (\(\Phi\) & Ov) for six interpulse delay (\(\Delta\tau\)) values. The colour of the symbols corresponds to the different morphology attained for each set of parameters as indicated in the legend shown at the bottom. The yellow areas marked in each graph indicate the range of process parameters that lead to homogeneous LIPSS formation. The percentages indicated correspond to the percentage of the process parameters providing homogeneous LIPSS, with respect to the whole investigated process window. indicate that at t \(\sim\) 1 ns after irradiation, for the fluence values considered here, the material's surface is still melted[32], and microfluidic motion takes place on the molten surface[40]. Moreover, it is expected that the molten material has acquired sufficient velocity, driven by the induced temperature gradients. Therefore, we hypothesize that microfluidic phenomena such as Marangoni instabilities and convection flow are playing key role in LIPSS formation. This hypothesis should be further investigated theoretically for the particular conditions considered here. ### Early stages of LIPSS formation and the appearance of craters The first stages of structure formation provide significant findings on underlining the different response of the material upon SPI and DPI respectively. To analyse the initial stages we illustrate in Figure 5 SEM images comparing an unprocessed area with the surfaces processed with single and double pulses at different irradiation doses for \(\Delta\)t = 1 ns. In the unprocessed area shown on Figure 5, i), some imperfections occurred during film deposition are evident, with a characteristic size in the order of 300 nm. We consider that such imperfections may act as hot spots and locally amplify the intensity of the incident laser field[52]. Upon SPI with \(\Phi\) = 70 \(\mu\)l/cm\({}^{2}\)and Ov = 50 pps (Figure 5, ii, SPI) a number of craters appear on the surface together with a few faded traces of periodic structures surrounding them. Upon increase of dose, when Ov = 100 pps, areas covered by craters in close proximity are observed with traces of pale LIPSS surrounding the craters. Imperfections can be rarely found at that stage. This is suggesting that the initial ones present in the unprocessed surface have potentially evolved into craters due to electromagnetic field amplification in their proximity. When Ov = 150 pps the number of craters is significantly increased and LIPSS surrounding them become more prominent. At the same time LIPSS of adjacent craters become to overlap, giving rise to formation of inhomogeneous LIPSS areas (cyan frame in Figure 5, ii, SPI, Ov = 150 pps). The observed LIPSS distortion can be attributed to the random distribution of the craters; as a result, the inter-crater distance is not always an integer multiple of the SP wavelength giving rise to destructive interference. Figure 5: Comparison of the early stages of structure formation in the case of SPI and DPI respectively. i. Defects on unprocessed surface. ii. Structure appearance upon dose increase for SPI and DPI with \(\Delta\)t = 1 ns respectively. Fluence is Q= 80 ml/cm\({}^{2}\) for DPI and Q= 70 ml/cm\({}^{2}\) for SPI. The first stages of structures formation in the DPI case was studied for \(\Delta\tau=1\) ns, as it lies within the optimum \(\Delta\tau\) range (0.8 ns<\(\Delta\tau\)< 1.5 ns) for regular LIPSS formation. The corresponding SEM images shown in Figure 5, it illustrate that for an initial Ov of 100 pps more imperfections are generated on the irradiated surface that maintain the same size compared to those present in the unprocessed one. For Ov = 200 pps the surface becomes roughened and the number of imperfections increases, whilst their size seem to grow slightly. Finally when Ov = 300 pps regular LIPSS cover the hole irradiated area, while no imperfections are observed. EDX measurements confirmed the entire removal of the film material in the areas between LIPSS (Figure 5, iii). the structures, the 2D - FT maps (Figure 6, b & b\({}^{\prime}\)) was subjected to threshold analysis. A graph was produced in each case, showing in white the points exceeding the threshold of 1/e\({}^{2}\) of maximum signal intensity in FT for \(\Lambda\)\({}^{\sim}\)640 nm. Based on the signal intensity, the angular dispersion of the structures can be estimated for SPI to \(\Delta\alpha\)\({}^{\sim}\)68\({}^{\circ}\) (Figure 6, c) whereas in the case of DPI to \(\Delta\alpha\)\({}^{\sim}\)11\({}^{\circ}\) (Figure 6, c\({}^{\prime}\)). HSFL and double period LIPSS Figure 6: Comparison of optimized structures for SPI (a) and DPI (a\({}^{\prime}\)) respectively. For SPI, \(\phi\) = 90 \(\mu\)/cm\({}^{2}\) and Ov = 100 pps whilst for DPI, \(\phi\) = 85 \(\mu\)/cm\({}^{2}\) and Ov = 150 pps. Fourier transform (FT) maps are shown for SPI and DPI respectively in b and b\({}^{\prime}\). An FT map showing the signal points with intensity l\(>\)max/e\({}^{2}\) are shown, in c and c\({}^{\prime}\), for SPI and DPI respectively. As discussed in the previous paragraphs, double period LIPSS (2P-LIPSS) are formed upon both SPI and DPI, nonetheless the homogeneity of the structures differs strikingly between the two cases. For SPI, double period LIPSS are observed sporadically for irradiation doses above the LIPSS formation threshold (see for example, Figure 6, a). For DPI and specifically when \(\Delta\tau=2\) ns, 2P-LIPSS are formed over extended areas. Notably, several neighboring combinations of \(\Phi\) and N led to 2P-LIPSS, as indicated by pink area in Figure 4.Figure 7 Figure 7 a) and b) illustrates an example of 2P-LIPSS generation upon DPI, while the two periods measured are \(\Lambda_{\text{t}}=648\pm 1\) nm and \(\Lambda_{\text{tl}}=987\pm 5\) nm. The origin of the two LIPSS periods observed can be attributed to the SPP periods, excited at the Au-substrate and air-Au interfaces respectively. To account for the effect of such two coinciding electromagnetic modes on surface structuring, we employ the theoretical model described in paragraph II.b. Figure 7 d illustrates the energy absorption patterns in the transverse plane on Au film during irradiation, while Figure 7 f illustrates the corresponding Fast Fourier Transform (FFT) of the periodic features representative to the irradiation conditions described above. The presence of the nano-holes on the metallic surfaces favours strong energy confinement at their edges localized parallel to the laser polarization, On the contrary the far fields, consisting of SPPs and quasi-cylindrical wave components, exhibit maxima and minima that are formed perpendicularly to the laser polarization. As a result, the dominant periodic/quasi-periodic absorption patterns are observed with a periodicity of \(\Lambda_{bottom}=690\)\(nm\) (seen at \(k_{x}/k_{0}\approx\lambda_{L}/\Lambda\) at FFT). Furthermore, periodic features of periodicity close to the laser wavelength are also captured in FFT (\(k_{x}/k_{0}\approx 1\)), which are attributed to the interference patterns created at the air/metal interface. The two periodicities observed are not only the result due to the interference of the incident beam with TM-polarized SPPs and quasi-cylindrical waves on air/metal Figure 7: _a,b) SEM image of Au surface processed with \(\Delta\tau=2\) ns, N = 100 pps, \(\Phi=100\)\(\mu\)/cm\({}^{2}\). (c) Image of pinching instability of convection flow in silicon oil reproduced from [53]. (d) Intensity distribution in the xy (sample) plane below the top air/Au interface of a 32nm Au/SiO\({}_{2}\) thin film for \(\lambda_{L}=\) 1026 nm. (e) Absorbed energy distribution represented by the z-component of the electric field on the xz (propagation) plane of the air/Au film/dielectric system. SPPs are formed on both air/Au and Au/SiO\({}_{2}\) interfaces (around Z=0). The double-headed arrow indicates the orientation of the laser beam polarization. (f) Fourier spectrum of the intensity patterns in the xy plane showing quasi-periodic features of two distinct periodicities, \(\Lambda_{top}\approx\lambda_{L}=1026nm\) (inner circle \(k_{x}/k_{0}\approx 1\)) and \(\Lambda_{bottom}\approx 690nm\) (outer circle \(k_{x}/k_{0}\approx\lambda_{L}/\Lambda\) ). (g) Wavelengths of the two bound SPPs on air/Au/SiO2 at \(\lambda_{L}\)=1026 nm provided by the numerical solution of Eq.1. For the calculations, we used thickness independent Au permittivity taken from Ref. [12]._ interface but also the film thickness is small enough to promote two supported SPPs which can be excited and couple each other simultaneously at the two interfaces of the thin film. In Figure 7, e we present a side view of the absorbed energy distribution along the thin film on the \(xz\) plane where the z-component of the electric field is normalized by its maximum value. We observe that the SPPs confined to the air/metal as well as at the metal/glass interfaces propagate along the \(x\)-axis with evanescent decay in the perpendicular \(z\)-direction. The periodicity of the SPP developed at the air/metal interface is nearly equal to the laser wavelength (\(\varLambda_{top}=1026\) nm) while the SPP at the metal/glass interface is nearly equal to \(\lambda_{L}/n_{g}\approx 690\)nm where \(n_{g}=\sqrt{\varepsilon_{g}}\). This is an indication that LIPSS on Au thin films originate from coupled SPPs excited on both interfaces with periodicity expected very close to the SPP developed at the metal/glass interface. The two periods (\(\varLambda_{bottom}=690\)\(nm\), \(\varLambda_{top}=1026\)\(nm\) ) are very close to the experimental values (\(\varLambda_{\text{t}}=648\pm 1\) nm and \(\varLambda_{\text{u}}=987\pm 5\) nm). The two SPP periodicities can also be predicted by applying the Maxwell equations at the interfaces of a three-layer (dielectric/metal/dielectric) system in order to determine the spatial field profile and dispersion of propagating waves for guided electromagnetic modes in waveguides. If the metallic medium is thin enough compared to the penetration depth, such a system is capable to support coupled SPP modes whose properties can be controlled by the thickness of the metallic film. This can be clearly seen in the dispersion relation of flat thin films in an asymmetric dielectric environment [6] \[exp(-2k_{m}d)=\frac{k_{m}\Big{/}\varepsilon_{m}+\frac{k_{a}\Big{/}\varepsilon_ {a}}{k_{m}\Big{/}\varepsilon_{m}-\frac{k_{a}\Big{/}\varepsilon_{a}}{k_{m} \Big{/}\varepsilon_{m}-\frac{k_{g}\Big{/}\varepsilon_{g}}{k_{m}\Big{/} \varepsilon_{m}-\frac{k_{g}\Big{/}\varepsilon_{g}}{\varepsilon_{g}}}}}}{k_{m} \Big{/}\varepsilon_{m}-\frac{k_{g}\Big{/}\varepsilon_{g}}{\varepsilon_{g}}} \tag{1}\] Where \[k_{j}=\pm\sqrt{\beta^{2}-\varepsilon_{j}k_{0}^{2}} \tag{2}\] In Eq. (1), \(j=a,m\) or \(g\) denotes the three media (\(a\) for air, \(g\) for glass, \(m\) for metal), \(d\) is the thickness of the film, \(\beta\) is the propagation constant of the SPP, and \(k_{0}=2\pi/\lambda_{L}\) is the free-space wavenumber at the laser wavelength. The numerical solution of Eq. provides the supported SPP wavelength \(\varLambda=2\pi/Re(\beta)\) for given film thickness. The calculated wavelengths of the two-interface surface plasmons are dependent on the film thickness, as illustrated in Figure 7, g. Each interface can sustain bound SPPs. With increasing the film thickness to \(d>35\) nm, the SPPs at the two interfaces become decoupled and separated while strong coupling occurs as the thickness decreases below \(\sim\)30 nm, while the SPP periodicity decreases abruptly from 687 nm to 395 nm at \(d=5\) nm for the lower SPP mode. For \(d=32\) nm the SPP periodicity is \(\varLambda=\)700 nm (Figure 7,f, green dot line) which are very close to the captured periodicity of the energy absorption patterns found by simulation. Both theoretical values are quite close the experimentally observed periods validating their electromagnetic origin. The underlining mechanism permitting the coexistence of homogeneous 2P-LIPSS should be sought in the contribution of hydrodynamics (Process II). [47] In Figure 7, b, a part of the processed surface is shown in detail, demonstrating the transition area between the two different LIPSS periods. The obtained LIPSS pattern resembles in detail a type of convection flow identified as "pinching Instability" shown in Figure 7, c, reproduced from [53]. In order to facilitate the comparison of the periods between Figure 7, b and c, the two LIPSS periods on Au are normalized in accordance to the smaller value giving \(\Lambda^{\prime}_{1}=1\pm 0.005\) a.u and \(\Lambda^{\prime}_{1}=1.52\pm 0.01\) a.u. The corresponding two periods, estimated in Figure 7, c are \(\Lambda^{\prime}_{1}=1\pm 0.02\) a.u and \(\Lambda^{\prime}_{1}=1.48\pm 0.02\) a.u. Even though the values do not match within the error limits their relative difference is \({}^{\sim}2.5\%\) pointing out that convection flow could potentially play a role in the formation of 2P- LIPSS over large areas. Significant similarities between the obtained structures and convection flow patterns have been also reported on steel upon DPI [14] and theoretical works [193] describe the important contribution of Marangoni instability and convection flow to LIPSS formation. According to such theoretical works, the instability pattern strongly depends on the local excitation conditions. In this context we hypothesize that a periodic heat distribution on the melted material can impose a double period, which can hydrodynamically coexist only upon DPI due to convection flow. Therefore, the 2P-LIPSS formation can be considered as the result of the synergistic effect of SPPs with two periods that drives the generation of a pinching hydrodynamic instability. #### 4.2.1 Ellipsometric measurements A SEM image of one of the fabricated nanostructures, tested by spectroscopic ellipsometry, is shown in Figure 8, a. The grain size of deposited thin Au film riches up to 20-25 nm. The ripple period estimated in this image is \(\Lambda=620\pm 10\) nm. Typical ellipsometric angles spectra of a patterned thin Au film are shown in Figure 8b,c. We recorded a pair of ellipsometric parameters, \(\Psi\) and \(\Delta\) in the wavelength range from 250 nm to 1000 nm as a function of angle of incidence. Quite unexpectedly, we found that the fabricated nanostructures demonstrate topological darkness in reflection at some wavelength and an angle of light incidence. Figure 8b shows the ellipsometric reflection measured at three different angles of incidence for the sample shown in Figure 8a. We see that the polarized p-reflection goes to zero at an angle of around 62\({}^{\circ}\) and wavelength of \({}^{\sim}\)592nm. At the condition of zero reflection, the phase of light \(\Delta\) exhibits 180\({}^{\circ}\)-jump behaviour which is a tell-tale feature of topological darkness [54] (see Figs. 8e). The sharp phase variations and the deep minimum of \(\Psi\) for the \(p\)-polarized reflection (the red line in Figs. 8 b) quickly disappear when the incident angle is changed ever so slightly (by several degrees). It is worth noting that both \(\Psi\) and \(\Delta\) demonstrate complicated behaviour due to diffraction of incident light on the periodic structure as well as due to light scattering connected with the imperfection of fabricated nanostructures. To model the measured optical spectra, we applied the effective-medium approximation [55] for the top metal layer of LIPSS and the Fresnel theory for the whole sample. The system geometry was set up according to the SEM data of the samples (see Figure 8a). The modelling results for the geometry of Figure 8a are shown in Figure 8d (along with modelling geometry shown in the inset), where they are compared with the experimental data. It can be observed that the theory reproduces well the position of topological minima for the ellipsometric function of \(\Psi\) and the jump of \(\Delta\) at the resonance, but the absolute values of \(\Psi\) outside of the resonance are much larger than those measured experimentally. This could be partly attributed to some limitation of the EMA model, though mainly to the light scattering due to structural imperfections in fabricated LIPSS which implies that we extended the concept of topological darkness to scattering media. Further work is in progress to address this matter. #### 4.2.6 Conclusions The results discussed in this work demonstrate the possibility of generating plasmonic sensing elements on thin Au film upon laser irradiation. To this end, a systematic study of LIPSS formation on thin Au film upon single and double pulse irradiation is presented. The formation of highly regular LIPSS over large Figure 8: (a) High resolution SEM images of one of the fabricated LIPSS nanostructures. (b) Measured changes of ellipsometric parameter \(\Psi\) (amplitude) as a function of the incident wavelengths and angles for the LIPSS nanostructures shown in (a). The reflection exhibits total darkness at the angle of incidence of 62 \({}^{\circ}\)and wavelength of \({}^{\sim}\)590 nm. (c) Measured changes of ellipsometric parameter \(\Delta\) (phase) as a function of the incident wavelengths and angles for the LIPSS nanostructures shown in (a). (d) Comparison of the experimental and modelled ellipsometric parameters \(\Psi\)for angle of incidence of 62 \({}^{\circ}\)for the the LIPSS nanostructures shown in (a). areas on thin Au film surface upon double pulse irradiation is investigated, and the underlying formation mechanism is discussed. In particular, the key role of the interpulse delay in producing regular LIPSS structures over large areas is discussed and the optimum interpulse delay regime is identified to range between \(\Delta\)x = 0.8 ns and \(\Delta\)x = 1.5 ns. The striking differences in the outcome of single and double pulse processing underline the hydrodynamic origin behind the regularity of LIPSS formation. Electromagnetic simulation of the propagation of the incident laser beam on Au surface provides insight on the origin of LIPSS periods in very good agreement with the experimental data. Furthermore, ellipsometry measurements validate the possibility of using the substrate as plasmonic sensor. Indeed, the reflection by the regular LIPSS areas exhibits a characteristic total darkness effect at the angle of incidence of 62\({}^{\circ}\) and the wavelength of ~590nm. We believe that our results provide comprehensive and valuable data for the generation of functional periodic structures on Au thin layers and can further contribute to the development of new applications of laser functionalized surfaces. This work was supported by the EU's H2020 framework programme for research and innovation under the NFFA-Europe-Pilot project (Grant No. 101007417). VGK and ANG acknowledge support of Graphene Flagship programme, Core 3 (881603).
2303.04837
Non-Binary Gender Expression in Online Interactions
Many openly non-binary gender individuals participate in social networks. However, the relationship between gender and online interactions is not well understood, which may result in disparate treatment by large language models. We investigate individual identity on Twitter, focusing on gender expression as represented by users chosen pronouns. We find that non-binary groups tend to receive less attention in the form of likes and followers. We also find that nonbinary users send and receive tweets with above-average toxicity. The study highlights the importance of considering gender as a spectrum, rather than a binary, in understanding online interactions and expression.
Rebecca Dorn, Negar Mokhberian, Julie Jiang, Jeremy Abramson, Fred Morstatter, Kristina Lerman
2023-03-08T19:16:57Z
http://arxiv.org/abs/2303.04837v2
# Non-Binary Gender Expression in Online Interactions ###### Abstract The presence of non-binary gender individuals in social networks is increasing; however, the relationship between gender and activity within online communities is not well understood and limited by the failures of automated gender recognition algorithms to recognize non-binary individuals. We use natural language processing to investigate individual identity on the Twitter platform, focusing on gender expression as represented by users' chosen pronouns from among \(14\) different pronoun groups. We find that non-binary groups tend to be more active on the platform, preferring to post messages rather than liking others' messages, compared to binary groups. Additionally, non-binary groups receive more replies, but their messages are reshared and liked less. We also find significant variation in the emotional expressions within non-binary groups. The study highlights the importance of considering gender as a spectrum, rather than a binary, in understanding online interactions and expression. University of Southern California Information Science Institute [email protected], [email protected], [email protected], [email protected] ## Introduction Identity is fundamental to how individuals define themselves internally, and how they present themselves to others [3]. Individual's identity, which is defined along multiple dimensions such as age, gender, race, ethnic origin, socio-economic status, etc., also influences how they express themselves, and how they experience others within social interactions. As social interactions continue to migrate online, social media platforms play an increasingly critical role in identity formation [1]. One aspect of online identity that has received significant attention from social media researchers is collective identity, which manifests itself through people joining social movements. Collective identity can become visible online when people adopt certain hashtags, such as #metoo or #blacklivesmatter, or use them in their online profiles. Using hashtags as markers of online collective identity, researchers have studied how these identities organize themselves within social hierarchies [1], who they interact with, and what they talk about [1]. However, while individual identity overlaps with collective identity, it is not synonymous with it and requires further study. Recent advances in natural language processing (NLP) provide new tools for the study of identity in online platforms. Language mediates social interactions and conveys not just the meaning of online conversations but also its emotional tone and moral sentiment. Importantly, emotions expressed in text messages can affect the feelings of others even in the absence of visual cues (e.g., facial expressions) or audio signals (e.g., tone of voice) [13]. Language cues also capture hostility, threats, personal attacks, and other dimensions of toxic speech. We study individual identity on the Twitter platform, and how it mediates online expression and interactions. We focus on gender, one of the core dimensions of individual identity. While gender has been traditionally conceptualized in Western society as binary--specifically'male' vs 'female'--two recent developments have transformed how we think about gender. First, there is growing recognition of gender as a cultural construct, distinct from the biologically-based sex; second, there is growing awareness that gender forms a spectrum, rather than a binary, identity. As a proxy of gender expression, we use pronouns users choose to display in their online profile or biography. These pronouns range from the traditional binary gender categories, such as'she/her' and 'he/him', to non-binary and gender nonconforming [14] categories 'they/them','she/ze','she/they/xe', etc. We study fourteen pronoun groups with the most members, revealing the spectrum of gender identity online. Gender-quer populations are at a higher risk of workplace discrimination, social isolation and shortened lifespan [10]. Additionally, LGTBQ+ youth are more likely to participate in online LGBTQ+ communities than real life communities [15], making it crucial to create positive online spaces for them. To explore how welcoming online communities are to gender-quer populations, we investigate the following research questions: **RQ1**: How do users in different pronouns groups vary in their level of online activity and the attention they receive from others? **RQ2**: What are the differences in the emotions expressed by different user pronoun groups? Which groups receive more negative replies? **RQ3**: Which user pronoun groups convey more toxicity on Twitter? Which groups experience more toxicity from others? We operationalize the research questions through the following proxies. As a measure of online _activity_ we use number of messages posted, liked by others, and the following count. Similarly, we measure the _attention_ group members receive as the average number of retweets and likes of own messages, share of own messages with replies, and follow-ers count. We use state-of-the-art NLP models to automatically quantify a range of emotions expressed in text, such as joy, anger, fear and disgust [1]. In addition to affect, we use NLP models to detect toxicity in language [1], enabling us to study social inclusion and exclusion at scale. We find that while non-binary groups receive more replies, their messages get less attention through reshares and likes. We also find interesting variation in emotions expressed by different groups, and the emotionality of replies to their messages. In particular, gender-queer groups tend to express more negative emotions in their tweets and fewer positive emotions, in contrast to binary groups, who express more positive emotions and fewer negative emotions. In line with our hypothesis, non-binary users tend to receive fewer positive replies to their tweets. Surprisingly, we find that non-binary groups also post more toxic messages. However, we do not yet have enough evidence to conclude whether that's an indicator of actual harmful interactions or the biases in toxicity detection algorithm. Our work makes the following contributions. First, we treat gender expression as a spectrum and not a binary category, allowing us to compare outcomes of computational tools across groups. Additionally, we explore the visibility and interactions in social media networks of people with historically marginalized gender identities. Finally, our work poses new research questions surrounding emotion and toxicity detection's ability to treat gender non-conforming users fairly. This will help improve the visibility of non-binary users in online communities and foster an environment that is welcoming to all forms of gender expression. ## Related Works Identity is a construct used by sociologists and social psychologists to help understand how people define themselves and experience others in social interactions (e.g., [1]). An individual's 'identity' spans multiple dimensions, including age, gender, race, socio-economic status and sexual orientation. Identity not only defines how individuals understand themselves internally but also how they express themselves to others. Gender and IdentityGender is a core dimension of identity. Inspired by second-wave feminism [15], people have started to draw a distinction between sex (biologically-produced) and gender (culturally-produced) identity. This helped resolve the tension between the traditional conceptualization of gender in Western society and science as binary (i.e.,'male','female'), and historical and societal evidence of the presence of third-gender individuals [12]. Despite the growing awareness of this distinction, computational social science and machine learning often treat gender as binary. For example, machine learning systems in recent years have claimed to predict an individual as male or female based on their written name, handwriting, voice and other characteristics [13, 14, 15]. As another example, Wordnet was shown to under-represent non-binary gender terms [11]. Despite offering non-binary gender options, Meta has been found to internally reconfigure user genders according to their perceived assigned gender at birth [10]. This results in continuing a legacy of erasure and harm for gender-queer people [10]. Twitter biographies have been used to measure expressions of personal identity and cultural trends. Previous work introduced Longitudinal Online Profile Sampling (LOPS) which measures identity formation through the evolution of a user's twitter biography [12]. LOPS was used to compare how 1.35 million Twitter user biographies evolved over 5 years, taking one snapshot of user biographies annually [12]. The longitudinal study found that the tokens with the highest prevalence within biographies were _he_, _him_, _she_ and _her_. The LOPS studies relied upon the notion of **personally expressed identity** (PEI) where individuals declare _their own_ attributes. Collective Identity in Online Social MovementsCollective identity is used by scholars to explain how social movements form and maintain commitment from participants over time [14]. Researchers operationalized collective identity in online systems by looking at the hashtags users adopt in their posts or use in their online profiles, e.g., #blacklivesmatter and #metoo. Using this approach, researchers studied how social movements mobilize members online, examined hierarchies of collective identities and cohesiveness of their vocabulary [15]. Others examined how collective identity forms through collective actions, like the joint airing of grievances online [13], and how expressions of collective identity by participants differ from how they are represented in the media [12]. While the concept of collective identity overlaps with the notion of individual identity, the two are not synonymous. Toxicity DetectionHarmful speech, a category that includes personal attacks, insults, threats, harassment, and hate speech targeting specific subgroups, contributes to the toxicity of online communities [10]. To combat negative effects of harmful speech, researchers developed methods for toxicity detection. One of the more popular of these is the Perspective API [15], which has been trained on diverse social media posts that have been labeled by human annotators along several dimensions of harm, such as identity attack, insult, obscenity, or threat. However, Perspective API was shown to be biased, giving texts referring to certain racial or gender groups higher toxicity scores [16] To combat bias, [1] introduced Detoxify API that has been trained on open source data emphasizing toxicity towards specific identities [11]. 2019). We use this state-of-the-art method to measure toxicity of speech made by members of different gender identity groups, as well as speech directed at them. Emotion DetectionTo classify the emotional tone of online messages, early research relied on dictionary-based approaches to measure the sentiment expressed in text (Bollen, Mao, and Zeng 2011; Mejova et al. 2014), or alternatively its valence and arousal (Warriner, Kuperman, and Brysbaert 2013). Recently, a new generation of methods based on large language models have been introduced (Alhuzali and Ananiadou 2021). When trained on large-scale ground truth datasets of sentences that have been assigned emotion labels by human annotators, e.g., the SemEval Mohammad et al. (2018) or GoEmotions data (Demszky et al. 2020), these models can quantify emotional expressions in text at scale. We use these methods to quantify emotions expressed in tweets in our dataset. ## Methods ### Data Our work is supported by a collection of over 2 billion tweets related to the Covid-19 pandemic collected between January 21, 2020 and November 5, 2021 (Chen, Lerman, and Ferrara 2020). This data includes tweets from 2,066,165 users with specified pronouns in their Twitter profiles or biographies. The presence of pronouns is determined by whether a user has specified any combination of {he, him, his, she, her, they, them, their, xe, xem, ze, zem} separated by forward slashes or commas, with any or no white space in their profile descriptions (Jiang et al. 2022). Our data was collected in real-time continuously, meaning we record the profile descriptions of the users at the time of their tweets and therefore some users may have since changed what pronouns they use. ### Gender Spectrum Gender identity (e.g., woman) is separate from the pronouns a person uses (e.g., she/hers or they/them). We use the term gender expression in reference to a person's identity surrounding gender, not the gender label assigned. We use the terms **gender-queer** to describe anyone who does not identify as a cis-gender man or cis-gender woman, and **non-binary** to describe anyone who does not identify as either a man or a woman. To quantify differences by gender we are required to operationalize gender. We use different user pronouns to classify users according to their gender expression. This is a proxy for self expression and is not representative of whether they are a man, non-binary person, etc. We cannot extrapolate whether a person using she/her or her/him is gender-queer or non-binary through their pronouns. Thus, we limit our label of these groups to those using binary pronoun series. We group the pronouns into five different series: he/him/his, she/her/hers, they/them/theirs, xe/xem and ze/zem. We encode the different combinations of pronouns used via a 5-digit dummy variable. For example, she/they could be encoded as 10100, signaling that both she (bit 0) and they (bit 2) are used. Such a dummy variable can be malleable to express a range of pronoun or gender representations available in a data set. Additionally, it is computationally efficient and allows for quick computations such as masking. We encode the pronouns of all 2 million users into this 5-digit schema. We then identify distinct pronoun groups (she/hers, he/they/ze, etc.) with at least 100 members. We randomly sample 100 users from each group with a public Twitter profile as of September 30, 2022. This gives us 14 pronoun groups with at least 100 users. A small number of users removed their profiles or changed their profiles to private while we construct our analysis. We exclude them from our analysis. Table 1 reports the number of users in the original Covid-19 data set and the number of users sampled. For each user in our sample, we collect at most 1000 of their most recent tweets posted before September 30, 2022. This may result in a different number of tweets for each pronoun group. Table 1 shows the composition of pronoun groups within this sample. Groups are listed in decreasing order of their size in the Covid-19 data set. In Table 1 we report the total number of tweets in our sample authored by each pronoun group. The Table also shows what percentage of these tweets are **original tweets** (tweets that are not replies). Tweets are retrieved using the Twitter API using the user timeline API call. Note that the data collection occurred two months before Twitter's Fall 2022 leadership change. ### Reply Collection We collect up to 10 replies to each tweet that is an original tweet (not a reply) in our dataset. There are 802,737 tweets collected that are original tweets. Of these original tweets we collect the first 10 replies with the same conversation_id using the Twitter API. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **Group** & **Original** & **Sample** & **Sample** & **\%Original** \\ & **Users** & **Users** & **Tweets** & **Tweets** \\ \hline She & 1,194,565 & 99 & 81,480 & 73.01 \\ \hline He & 461,264 & 99 & 83,588 & 62.49 \\ \hline She/They & 158,025 & 98 & 84,556 & 71.67 \\ \hline They & 132,374 & 100 & 82,101 & 71.81 \\ \hline He/They & 77,951 & 100 & 82,046 & 64.31 \\ \hline She/He/They & 20,882 & 97 & 86,818 & 70.15 \\ \hline She/He & 16,424 & 99 & 81,532 & 68.79 \\ \hline They/Xe & 1,312 & 99 & 83,284 & 69.73 \\ \hline He/They/Xe & 1,015 & 99 & 87,761 & 66.89 \\ \hline He/Xe & 863 & 99 & 83,744 & 68.00 \\ \hline Xe & 569 & 100 & 83,214 & 66.44 \\ \hline She/They/Xe & 434 & 99 & 88,182 & 64.73 \\ \hline She/Xe & 271 & 98 & 87,216 & 66.44 \\ \hline They/Ze & 216 & 97 & 79,359 & 67.49 \\ \hline Total & 2,066,165 & 1,383 & 1,174,881 & 67.99 \\ \hline \end{tabular} \end{table} Table 1: Pronoun Group composition within collected sample. Users shows number of unique users in sample, tweets shows total tweets, and Original shows percent of tweets not in response to another tweet. ### Emotion Inference To measure the presence of emotions in a tweet we use SpanEmo, a multi-label emotion classification system [1, 13]. SpanEmo utilizes BERT encodings of input text and emotion classes, and yields a higher F1 score than base BERT model. Given some input text, the model returns one boolean value per emotion, denoting whether it is sufficiently confident that the emotion is present in text. ### Toxicity Inference We are interested in quantifying the levels of toxicity in replies to users in our sample and original tweets in our sample. To measure toxicity we use _Detoxify_ model called _unbiased_, a RoBERTa model fine-tuned on the Kaggle Jigsaw Unintended Bias in Toxicity Classification Challenge data set. This model has an AUC score of 92.11. Toxicity models are known to be triggered by keywords, regardless if they're used as reclamation [13]. Additionally, toxicity detectors have been documented to fall short of picking up on the LGBTQ+ community's'mock impleoltess' used in social interactions [10]. We approach toxicity scores as metrics that are supposed to capture toxicity in tweets for all people, including Queer communities, though remain cognizant that it is prone to failure. ## Results ### RQ1: Activity and Attention _Activity_ refers to a user's engagement with others on the platform. Activity includes the number of status updates (original tweets) a user posts, the number of tweets posted by others that the user favorites (likes), and the number of accounts the user is following. Supplementary Figure 6 shows the distribution of each user pronoun group's activity. The plots exclude outliers so as to better visualize differences between each pronoun group's activity. Tweet Count denotes the number of original tweets, retweets and replies posted by a user. In Figure 6, we see that the group with the highest median tweet count is _she/they/xe_ (8357), followed by _she/xe_ (8343), and then _he_ (7873). The group with the lowest median tweet count is _he/xe_ (5077), then _he/they/xe_ (5472) and _the/ye_ (5717). Favorite count, which denotes the number of tweets of others the user liked, also varies substantially between pronoun groups. The groups with the highest median favorite count are _she/they/xe_ (21581), _she_ (21406), and _she/he/they_ (20708). The groups with the lowest median favorite count are _he/they/xe_ (11760), _he/xe_ (12631) and _xe_ (13227). Finally, the group with the highest median following count, which denotes how many other accounts a user follows, is the (521), _she/they_ (471) and _he/they_ (455). Groups with the lowest median following count are _he/xe_ (309), _he/they/xe_ (327) and _xe_ (345). We combine the different measures of activity within the ratio of status updates to favorites (S:F Ratio). This ratio balances between how "vocal" users are (only posting, but not liking tweets of others) and "silent" they are (only liking the tweets of others but not posting). The boxplots in Figure 2 show the distribution of each pronoun group's S:F ratio, with infinite values of the ratio mapped to 5. Extremely low values (lowest is 0.000249) appear as 0 in the boxplot. The dotted line at 1.0 represents balance, with users who tweet and like tweets at the same rate. Users with smaller S:F values are more "silent" online, while users with larger values are more "vocal." The pronoun groups with the highest S:F ratio are _xe_ (.581), _he/xe_ (.580) and _she/xe_ (.571), while the groups with the lowest S:F ratios are _she/they_ (.384), _he/they_ (.391) and _they_ (.407). Together, these results suggest that members of pronoun groups with smaller overall size are more "vocal," posting more than liking other people's tweets, while members of the more established pronoun groups are more "silent," tending to lurk rather than post. _Attention_ captures the online prominence of users, i.e., how much other Twitter users engage with individual users and their tweets. We operationalize this measure using the average number of retweets and likes per tweet (either original tweet or reply) received by tweets in the collected sample, percent of tweets with replies, and the follower counts (distribution of followers per pronoun group using the coronavirus data set). Average retweets, likes and followers are plotted excluding outliers. Figure 1 shows boxplots of attention received by each pronoun group. The groups with the highest median retweets are among the larger pronoun groups [_they_ (.122), _he_ (.091), and _he/they_ (.086)], while the groups with the lowest median retweets are the smaller groups [_he/they/xe_ (.037), _xe_ (.040) and _she/they/xe_ (.041)]. The average number of likes each user's tweets get shows a similar behavior. The groups with the highest median likes are the larger pronoun groups [_they_ (.156), _he_ (.116) and _she/they_ (1.681)], while the groups with the lowest median likes are the smaller groups [_she/xe_ (.894), _she/he/they_ (1.162) and _she/they/xe_ (1.163)]. In both cases, the smaller pronoun groups _they/ze_ and _she/xe_ deviate from the trend. The followers count shows the same U-shaped trend, with the largest and smallest pronoun groups with more followers. The percent verified in Figure 1 plots for all users in the coronavirus data set. The groups with the highest percent verified are _he_ (8.42%), _she_ (4.59%) and _she/xe_ (3.69%). User verification status was taken before November 2022, where for a brief moment verification status was able to be purchased. Interestingly, replies show the opposite trend. Members of smaller pronoun groups tend to have a larger share of their tweets replied to than members of the largest (and the smallest) pronoun groups. The group is with smallest share of tweets with replies is the group using _they_ pronouns. While other users will retweet their tweets, they don't tend to engage in conversations with this group. ### RQ2: Emotions in Tweets and Replies We look at the prevalence of emotions in tweets. We visualize the emotions using heatmaps. The prevalence of each emotion \(i\) is normalized across groups using z-score normalization \(z_{i}=\frac{x_{ij}-\mu_{i}}{\sigma_{i}}\) where \(\mu_{i}\) is average across all pronoun groups' share of tweets displaying emotion \(i\), and \(\sigma_{i}\) is standard deviation of this quantity across all pronoun groups. Figure 3 shows the heatmaps of emotional prevalence in tweets posted by users from each pronoun group, separated by original tweets (top) and replies (bottom). The groups are ordered left to right from most to least numerous in the initial set of unique users. Emotions are ordered such that positive, more pleasant emotions (joy, love, optimism, trust) are at the top and negative, less pleasant emotions are at the bottom (fear, sadness, pessimism, disgust, anger). In the middle are the more neutral emotions of anticipation and surprise. Z-scores help visualize differences in the emotion's prevalence across groups. We note several interesting patterns. The _she/her_ group shows relatively more positive and neutral emotions and less negative emotions. The replies by the _she_ group show much more joy, love, and optimism, and much less anger and disgust. The _he_ group shows less than average love, more optimism and neutral emotions, but also less than average pessimism and sadness. This pattern is also largely observed in replies made by users in this group, although the replies show more anger and disgust. The patterns are largely different for the other pronoun groups. Tweets made by the minority pronoun groups exhibit more negative emotions and less positive emotions compared to other groups, especially in replies. One big outlier in these patterns (marked by a bright blue line) is the _she/he_ group that shows far less emotions overall compared to other groups. There are also interesting trends when looking how each emotion appears in different groups. Moving across the heatmaps horizontally, we see that in general positive and neutral emotions progress from being overrepresented (red Figure 1: Attention. Users with more representation receive more attention in retweet and like averages. Figure 2: User activity. Boxplots show distribution of S:F (Statuses to Favorites) ratios within each pronoun group. Pronouns listed top to bottom least to most representation. Groups with lower representation are more vocal. der) to being underrepresented (bluer). Specifically, trust and optimism becomes less prevalent in smaller pronoun groups, with the exception of smallest _they/ze_. This also holds true for the neutral anticipation and surprise. Among the negative emotions, pessimism becomes progressively more prominent in smaller pronoun groups, with the exception of _she/they_ and _she/he_. In Figure 3 we look at the emotions present in replies received. The figure looks remarkably similar to Figure 3b. In summary, the binary pronoun series, _she_ and _he_ express more love, optimism, trust and surprise in their tweets than the non-binary pronouns. Other than _she/he_ and _they/ze_, users with non-binary profile pronouns typically have less presence of emotion in their tweets. Users in less popular non-binary pronoun groups express relatively fewer positive emotions in replies. The presence of negative emotions is mixed across binary and non-binary pronoun groups. ### RQ3: Toxicity in Tweets and Replies We next look at toxicity of tweets that a user posted, both original tweets and replies (sent tweets), and toxicity of tweets the user received in reply to their tweets (received tweets). The groups among the highest in both mean and median toxicity score for sent tweets are _she/he/they_ and _they/xe_, while the groups among the lowest median and mean toxicity scores are _he/they_ and _he/they/xe_. An alternate way to analyze harmful speech is to look at the incidence of highly toxic tweets. Content moderation algorithms may automatically flag comments with toxicity scores above some threshold. Figure 4 plots the share of highly toxic tweets (toxicity \(>\) 0.9) among the tweets sent Figure 4: Percent of Tweets Posted and Percent of Replies Received Labeled as Toxic (toxicity \(>\) 0.9) Figure 3: Emotions in tweets. Heatmaps show the relative presence of an emotion in the tweets (top) or reply tweets (bottom) of each pronoun group. We use z-score normalization across groups for each emotion to better highlight differences in the emotion between groups. Color in the heatmap shows the emotion’s z-score. and received tweets. There is large diversity in how many highly toxic tweets each group sends and receives. The pronoun group with fewest highly toxic tweets is the _she_ group. The group with the highest proportion of toxic tweets received is the _he/xe_ group. Surprisingly, the non-binary pronoun groups appear to post more highly toxic tweets than the binary groups. This suggests that more of their tweets would be flagged or removed by content moderation. These groups also appear to receive more highly toxic tweets. In general, there is correlation between the group's share of highly toxic tweets received and highly toxic tweets sent, with groups that sent fewer toxic tweets also receiving fewer toxic tweets in replies. In Figure 5 we explore the difference between toxicity in tweets sent and replies received. Higher numbers signal more toxicity sent than received. All groups have receive a very similar proportion of toxicity that they send out, with the largest absolute difference being about 1%. More groups supply higher toxicity than they receive. ## Conclusion In this study, we conduct an exploratory analysis of the Twitter behavior of 14 pronoun groups, including both binary and non-binary pronoun series. We investigate how each group interacts with others online by analyzing their activity, attention, emotions, and toxicity. We also examine the levels of toxicity and types of emotions present in replies to their tweets. We find that the pronoun groups with the most representation receive the most validation from other twitter users as well as the twitter verification system. Attention metrics were consistently highest for pronoun groups with the most popular pronoun series: _she_, _he_, _she/they_, _they_ and _he/they_. Additionally, the groups with least representation have the least amount of user verification. We note two interesting observations within the pronoun groups with less representation on Twitter. First, many of these groups present as more vocal by the status to favorites ratio, despite having low tweet counts. We also found that patterns in activity and attention tend to occur in pairs of pronoun groups, with similar behavior occurring between groups _they/xe_ and _he/they/xe_, groups _xe_ and _he/xe_ and groups _she/they/xe_ and _she/xe_. Overall, we found no clear link between activity and attention. Though there is minor fluctuation, the presence of emotions stays largely stagnant between original tweets, replies sent and replies received. Users of less represented non-binary groups generally have lower presence of pleasant emotions, especially in replies sent and received. In addition, we find that gender-queer groups express more toxicity. although we cannot rule out this is due to biases in automated toxicity detection. One limitation of this work is that the number of users sampled per pronoun group is fairly low. We are interested in seeing how the results change if we were to re-sample the users, or use a larger sample. Another limitation is that the toxicity detector used may perform poorly on gender-queer users, for example flagging re-claimed slurs as markers of toxicity. We aim to expand on this research by investigating how toxicity detectors handle text written by individuals identifying as gender-queer. To accomplish this, we will use a combination of crowd-sourced annotators and "gold star" labels, which are labels generated by the authors themselves, to manually label the presence of toxicity in the tweets we collected in this study. We will then compare the performance of the toxicity detector used with the labels generated. **Broader perspective, ethics and competing interests** This study touches on an important but potentially sensitive topic of gender identity. To protect individual user identity, we used only public data that was collected following Twitter's terms of service. To minimize risk to users, all identifiable information was removed and analysis was performed on aggregated data. We therefore believe the negative outcomes of the use of these data are minimal. Study was reviewed by the authors' IRB and designated exempt. Authors declare no competing interests. The study's benefits include using the findings to foster a more inclusive and welcoming online environment for all gender expressions. ## Acknowledgments Siyi Guo and Donald Berry for helping extract emotions from Tweets.
2306.07538
Quantum Stochastic Molecular Dynamics Simulations of the Viscosity of Superfluid Helium
Decoherent quantum equations of motion are derived that yield the trajectory of an open quantum system. The viscosity of superfluid Lennard-Jones helium-4 is obtained with a quantum stochastic molecular dynamics algorithm. The momentum state occupancy entropy is counted with a continuous representation of boson number and averages are obtained with umbrella sampling. Instantaneous snapshots of the Bose-Einstein condensed system show multiple highly occupied momentum states. The viscosity is obtained from the Onsager-Green-Kubo relation with the time correlation function modified in the quantum case. On the saturation curve, at higher temperatures the viscosities of the classical and quantum liquids are equal. With decreasing temperature the viscosity of the classical liquid increases whereas that of the quantum liquid decreases. Below the $\lambda$-transition the viscosity lies significantly below the classical value, being small but positive due to the mixture of condensed and uncondensed bosons. The computed trajectories give a physical explanation of the molecular mechanism for superfluidity.
Phil Attard
2023-06-13T05:05:22Z
http://arxiv.org/abs/2306.07538v1
# Quantum Stochastic Molecular Dynamics Simulations of the Viscosity of Superfluid Helium ###### Abstract Decoherent quantum equations of motion are derived that yield the trajectory of an open quantum system. The viscosity of superfluid Lennard-Jones helium-4 is obtained with a quantum stochastic molecular dynamics algorithm. The momentum state occupancy entropy is counted with a continuous representation of boson number and averages are obtained with umbrella sampling. Instantaneous snapshots of the Bose-Einstein condensed system show multiple highly occupied momentum states. The viscosity is obtained from the Onsager-Green-Kubo relation with the time correlation function modified in the quantum case. On the saturation curve, at higher temperatures the viscosities of the classical and quantum liquids are equal. With decreasing temperature the viscosity of the classical liquid increases whereas that of the quantum liquid decreases. Below the \(\lambda\)-transition the viscosity lies significantly below the classical value, being small but positive due to the mixture of condensed and uncondensed bosons. The computed trajectories give a physical explanation of the molecular mechanism for superfluidity. Projects/QSM23/Paper/QSMD.tex ## I Introduction This paper explores the nature of Bose-Einstein condensation and of superfluid viscosity by using quantum stochastic molecular dynamics (QSMD) computer simulations. A feature of the calculations is that the bosons interact with each other via the Lennard-Jones pair potential with helium-4 parameters. This gives a much more realistic molecular picture of the Bose-Einstein condensate than the original ideal boson (ie. non-interacting) calculations of F. London (1938), which showed that the \(\lambda\)-transition in \({}^{4}\)He was due to Bose-Einstein condensation. It also gives a more realistic picture of superfluid flow than the original two-fluid model of Tisza, which, based on F. London's ideas, also invokes non-interacting bosons (Tisza 1938, Balibar 2017). Although today many accept without question Bose-Einstein condensation as the origin of the \(\lambda\)-transition and superfluidity, in the past this has been seriously questioned. Landau objected: [F. London and] 'L. Tisza suggested that helium II should be considered as a degenerate ideal Bose gas...This point of view, however, cannot be considered as satisfactory...nothing would prevent atoms in a normal state from colliding with excited atoms, ie. when moving through the liquid they would experience a friction and there would be no superfluidity at all. In this way the explanation advanced by Tisza not only has no foundation in his suggestions but is in direct contradiction to them.' (Landau 1941) One should take Landau's objection seriously, if for no other reason than that he was awarded the Nobel prize for this particular paper. In the conclusion to the present paper a detailed molecular-level rebuttal of Landau's argument is given. Feynman was also critical of the two-fluid model of superfluidity: 'The division into a normal fluid and a superfluid although yielding a simple model for understanding the final equations appears artificial from a microscopic point of view.' (Feynman 1954) The present results agree with this broad characterization by Feynman in that they do not support the binary division that underlies the conventional interpretation of Bose-Einstein condensation. F. London (1938) treated the ground state (also known as He II) as discrete, and the excited states (also known as He I) as a continuum, which is the reason that I refer to it as the binary division approximation. Even today this binary division remains the dominant model, and it is one of the significant differences between my approach and the conventional approach. An additional difference is that the conventional view of Bose-Einstein condensation, including F. London's (1938) treatment of the \(\lambda\)-transition, takes 'ground state' to mean ground energy state and 'excited states' to mean excited energy states. In contrast, I mean ground momentum state and excited momentum states. For non-interacting bosons the two definitions are equivalent. Einstein in a letter to Paul Ehrenfest (1924) wrote 'From a certain temperature on, the molecules "condense" without attractive forces, that is, they accumulate at zero velocity.' (Balibar 2014) Einstein was referring to ideal bosons. One can see that the extension of his idea to interacting bosons is ambiguous: since energy states and momentum states are not equivalent for interacting bosons, which of the two should be used to describe condensation? Whereas the conventional interpretation assumes that Einstein meant energy states, the present paper takes momentum states to be the natural repository of condensation. As mentioned, the present work also disagrees with Einstein's and with F. London's notion that condensation is solely into the ground state, whether energy or momentum, which is another reason that I do _not_ invoke the binary division approximation. There are detailed reasons, both experimental and mathematical, for the present interpretation of Bose-Einstein condensation as being into multiple states, (Attard 2023 sections 8.2.4, 8.6.6 and 9.2). To those arguments can now be added the numerical results of the present QSMD simulations, which show in the quantum regime that at any one time multiple momentum states are highly occupied, that above the condensation transition the ground momentum state is rarely the most occupied state, and that below the transition a minority of the bosons are in the ground momentum state. This paper differs in several respects from the previous quantum loop Monte Carlo (QLMC) simulations of Lennard-Jones \({}^{4}\)He (Attard 2022, Attard 2023 chapter 8). That work invoked a permutation loop expansion and used series of up to seven loops. It was valid on the high temperature side of the \(\lambda\)-transition, it used the binary division approximation, and it gave only static results. The present QSMD paper gives dynamic information such as the momentum time correlation function, the viscosity, and the individual trajectories of the condensed and the uncondensed bosons. It uses only pure momentum loops, which dominate below the \(\lambda\)-transition. (Position permutation loops are important above and at the \(\lambda\)-transition (Attard 2022, 2023 chapter 8).) The present paper dispenses with the binary-division approximation, and explores momentum state condensation explicitly. The present paper uses a number of concepts and mathematical analysis from my previous work. In some cases it provides numerical confirmation for these earlier predictions. The present paper is best read in conjunction with chapters 7, 8, and 9 of the recent edition of my book, _Entropy Beyond the Second Law_ (Attard 2023 second edition). ## II Analysis and Simulation Algorithm ### Analysis for the Static Structure Following Attard (2022, 2023 chapters 7 and 8), consider a system of \(N\) identical bosons interacting with potential energy \(U(\mathbf{q})\), where \(\mathbf{q}=\{\mathbf{q}_{1},\mathbf{q}_{2},\ldots,\mathbf{q}_{N}\}\) is the position configuration. The classical kinetic energy is \(\mathcal{K}(\mathbf{p})=p^{2}/2m\), where \(\mathbf{p}=\{\mathbf{p}_{1},\mathbf{p}_{2},\ldots,\mathbf{p}_{N}\}\) is the momentum configuration, \(\mathbf{p}_{j}\) being the momentum eigenvalue of boson \(j\). The normalized momentum eigenfunctions for the discrete momentum case are \(|\mathbf{p}\rangle=V^{-N/2}e^{-\mathbf{q}\cdot\mathbf{p}}/\hbar\). The system has volume \(V=L^{3}\) and the spacing between momentum states is \(\Delta_{p}=2\pi\hbar/L\)(Messiah 1961, Merzbacher 1970). Below the continuum limit will be taken. An open quantum system is one that exchanges energy and other conserved variables with its environment. As a result of entanglement the subsystem becomes decoherent, and a formally exact expression for the grand partition function for bosons (Attard 2018b, 2021) \[\Xi^{+} = \mathrm{TR}^{\prime}\;e^{-\beta\hat{\mathcal{H}}} \tag{2.1}\] \[= \sum_{N=0}^{\infty}\frac{z^{N}}{N!}\sum_{\hat{\mathrm{P}}}\sum_{ \mathbf{p}}\Big{\langle}\hat{\mathrm{P}}\mathbf{p}\Big{|}e^{-\beta\hat{ \mathcal{H}}}\Big{|}\mathbf{p}\Big{\rangle}\] \[= \sum_{N=0}^{\infty}\frac{z^{N}}{N!}\sum_{\hat{\mathrm{P}}}\sum_{ \mathbf{p}}\int\mathrm{d}\mathbf{q}\;\left\langle\hat{\mathrm{P}}\mathbf{p} |\mathbf{q}\right\rangle\,\left\langle\mathbf{q}\,\Big{|}e^{-\beta\hat{ \mathcal{H}}}\Big{|}\,\mathbf{p}\right\rangle\] \[\approx \sum_{N=0}^{\infty}\frac{z^{N}}{N!V^{N}}\sum_{\hat{\mathrm{P}}} \sum_{\mathbf{p}}\int\mathrm{d}\mathbf{q}\;e^{-\beta\mathcal{H}(\mathbf{q}, \mathbf{p})}\frac{\left\langle\hat{\mathrm{P}}\mathbf{p}|\mathbf{q}\right\rangle }{\left\langle\mathbf{p}|\mathbf{q}\right\rangle}\] \[= \sum_{N=0}^{\infty}\frac{z^{N}}{N!V^{N}}\sum_{\mathbf{p}}\int \mathrm{d}\mathbf{q}\;e^{-\beta\mathcal{H}(\mathbf{q},\mathbf{p})}\eta^{+}( \mathbf{q},\mathbf{p}).\] The permutation operator is \(\hat{\mathrm{P}}\). In the penultimate equality the commutation function has been neglected; the error introduced by this approximation is negligible in systems that are dominated by long range effects (Attard 2018b, 2021). The classical Hamiltonian phase space function is \(\mathcal{H}(\mathbf{q},\mathbf{p})=\mathcal{K}(\mathbf{p})+U(\mathbf{q})\). The symmetrization function \(\eta^{+}(\mathbf{q},\mathbf{p})\) is the sum of Fourier factors over all boson permutations. Note that the quantum state of the open subsystem is given by the simultaneous specification of the positions and momenta of its particles, and that this is a formally exact consequence of the decoherence of the open quantum subsystem (Attard 2018b, 2021). The consequences of the lack of simultaneity of position and momentum, which is a well-known if unrealistic consequence of the Copenhagen interpretation of quantum mechanics, are subsumed into the commutation function. This is neglected in the present paper, although more generally it can be accounted for (Attard 2018b, 2021, 2023). This specification of the positions and momenta is a form of quantum realism that goes further than the de Broglie-Bohm pilot wave theory, although it is strictly valid only for open quantum subsystems. The grand partition function is just the sum over number of the canonical partition function, \(\Xi^{+}(z,V,T)=\sum_{N}z^{N}Z^{+}(N,V,T)\), with the latter being (Attard 2023 equation (9.1)) \[Z^{+}(N,V,T)\] \[= \frac{1}{N!}\sum_{\hat{\mathrm{P}}}\frac{1}{V^{N}}\int\mathrm{d} \mathbf{q}\;e^{-\beta U(\mathbf{q})}\sum_{\mathbf{p}}e^{-\beta p^{2}/2m}e^{ \mathbf{q}\cdot(\mathbf{p}-\mathbf{p}^{\prime})/\hbar}\] \[= Z^{+,\mathrm{id}}(N,V,T)\frac{Q^{\mathrm{cl}}(N,V,T)}{V^{N}}.\] Here the momentum state is \(\mathbf{p}=\mathbf{n}\Delta_{p}\), where \(\mathbf{n}\) is a \(3N\)-dimensional integer vector, and the permuted state is \(\mathbf{p}^{\prime}\equiv\hat{\mathrm{P}}\mathbf{p}\). The classical configuration integral is \(Q^{\mathrm{cl}}(N,V,T)=\int\mathrm{d}\mathbf{q}\;e^{-\beta U(\mathbf{q})}\). The factorization of the momentum terms occurs when only permutations between bosons in the same momentum state are allowed, \(p_{j\alpha}-p^{\prime}_{j\alpha}=0\), which makes the symmetrization function independent of the position configuration. This is valid at low temperatures where there is a limited number of accessible momentum states (Attard 2023 section 9.3). Under these circumstances the permutation loops are predominantly momentum loops in which all particles in a loop are in the same momentum state. The momentum factor that arises is \[Z^{+,{\rm id}}(N,V,T) \tag{3}\] \[= \frac{1}{N!}\sum_{\mathbb{P}}\prod_{j,\alpha}\sum_{n_{j\alpha}=- \infty}^{\infty}e^{-\beta\Delta_{p}^{2}n_{j\alpha}^{2}/2m}\delta_{n_{j\alpha}, n^{\prime}_{j\alpha}}.\] This neglects position permutation loops, which arise from the integration over the momentum states in which bosons in the same state form a set of measure zero, and which do not necessarily contribute zero upon integration because of the interaction potential in the integrand (Attard 2023 section 8.5). However, when pure momentum loops give the dominant contribution, as is assumed here, then all the wave function symmetrization effects are contained within this momentum factor. Because of the factorization, the momentum integral is the same as that for non-interacting bosons, and it is directly connected with the known grand canonical ideal bosons results (F. London 1938, Pathria 1972 chapter 7, Attard 2023 section 8.2). The present analysis differs from that analysis in three ways: First it avoids the binary division continuum approximation, with the focus being on the actual occupancy of the discrete momentum states. Second, the relationship between fugacity and number, \(\Xi(N,V,T)\), is different for interacting and for ideal bosons. And third, the boson dynamics depend upon their interactions. It must be emphasized that this factorization is predicated upon the neglect of the commutation function, and upon the explicit restriction to pure momentum permutation loops. The latter can be expected to be valid below the \(\lambda\)-transition, but the neglect of position loops limits its utility at higher temperatures (Attard 2023 section 8.5). The ideal boson partition function above contains the Kronecker-\(\delta\), \(\delta_{n_{j\alpha},n^{\prime}_{j\alpha}}\). In the momentum configuration \(\mathbf{p}=\mathbf{n}\Delta_{p}\), there are \[N_{\mathbf{a}}(\mathbf{p})=\sum_{j=1}^{N}\delta_{\mathbf{p}_{j},\mathbf{a}} \tag{4}\] bosons in the single particle momentum state \(\mathbf{a}\). Hence there are (Attard 2021 equation (20)) \[\chi^{+}_{\mathbf{n}}=\sum_{\mathbb{P}}\langle\hat{\mathbb{P}}\mathbf{n}| \mathbf{n}\rangle=\prod_{\mathbf{a}}N_{\mathbf{a}}(\mathbf{n})! \tag{5}\] non-zero permutations of the bosons in the system. Therefore the permutation entropy associated with a momentum configuration is \[S^{\rm perm}(\mathbf{p})=k_{\rm B}\ln\chi^{+}_{\mathbf{n}}=k_{\rm B}\sum_{ \mathbf{a}}\ln N_{\mathbf{a}}(\mathbf{p})!. \tag{6}\] (This is the internal entropy of the point in phase space, which is in addition to the external reservoir entropy that is discussed shortly.) With this the ideal boson partition function becomes \[Z^{+,{\rm id}}(N,V,T)=\frac{1}{N!}\sum_{\mathbf{p}}e^{-\beta\mathcal{K}( \mathbf{p})}e^{S^{\rm perm}(\mathbf{p})/k_{\rm B}}, \tag{7}\] where the classical kinetic energy function is \(\mathcal{K}(\mathbf{p})=\sum_{j=1}^{N}p_{j}^{2}/2m=p^{2}/2m\). Taking the continuum momentum limit (discussed below), the full partition function is \[Z^{+}(N,V,T) \tag{8}\] \[= \frac{1}{N!\Delta_{p}^{3N}V^{N}}\int{\rm d}\mathbf{p}\int{\rm d} \mathbf{q}\;e^{S^{\rm perm}(\mathbf{p})/k_{\rm B}}e^{-\beta\mathcal{K}( \mathbf{p})}e^{-\beta U(\mathbf{q})}\] \[= \frac{1}{N!\hbar^{3N}}\int{\rm d}\mathbf{\Gamma}\;e^{S^{\rm perm }(\mathbf{p})/k_{\rm B}}e^{S^{\rm f}(\mathbf{\Gamma})/k_{\rm B}}.\] Here the reservoir entropy is the usual Maxwell-Boltzmann factor \[S^{\rm f}(\mathbf{\Gamma})=\frac{-\mathcal{H}(\mathbf{\Gamma})}{T}, \tag{9}\] where the Hamiltonian function of classical phase space is the sum of the kinetic and potential energies, \(\mathcal{H}(\mathbf{\Gamma})=\mathcal{K}(\mathbf{p})+U(\mathbf{q})\). One can identify from this that the canonical equilibrium quantum probability density of a point in classical phase space is proportional to the exponential of the total entropy, \[\wp(\mathbf{\Gamma})=\frac{1}{Z^{\prime}}e^{S(\mathbf{\Gamma})/k_{\rm B}},\; \;S(\mathbf{\Gamma})=S^{\rm perm}(\mathbf{p})+S^{\rm f}(\mathbf{\Gamma}). \tag{10}\] The relation between entropy and probability is to be expected on very general grounds (Attard 2002, Attard 2012, Attard 2023 chapter 1). This expression for the total entropy is the same as in the treatment on the far-side of the \(\lambda\)-transition in Attard (2023 section 9.3). It includes only permutations of bosons in the same momentum state (pure momentum loops). It precludes the position loops that dominate on the high-temperature side of the \(\lambda\)-transition, where their increase in number and size gives the marked increase in the heat capacity leading up to the \(\lambda\)-transition (Attard 2023 section 8.5). Below I shall discuss the relation between the continuum limit and the occupancy of the quantized momentum states with regard to the dynamics of the system. For the continuum momentum configuration \(\mathbf{p}=\{\mathbf{p}_{1},\mathbf{p}_{2},\ldots,\mathbf{p}_{N}\}\), the simplest definition of the discrete occupancy of the single-particle momentum state \({\bf a}={\bf n}\Delta_{p}\) is \[N_{\bf a}({\bf p}) = \sum_{j=1}^{N}\prod_{\alpha=x,y,z}\Theta\big{(}p_{j\alpha}-(a_{ \alpha}-\Delta_{p}/2)\big{)} \tag{11}\] \[\quad\times\Theta\big{(}(a_{\alpha}+\Delta_{p}/2)-p_{j\alpha} \big{)},\] where the Heaviside unit step function appears. ### Decoherent Quantum Equations of Motion As mentioned in the introduction, the focus of this paper is on the dynamics of superfluidity. The above analysis gives the state of an open quantum system of bosons as a point in classical phase space. In the present subsection molecular-level equations of motion for the decoherent quantum system are derived. The decoherent quantum velocity in phase space is (Attard 2023 section 9.4.4) \[\dot{\bf\Gamma}^{0}=-T\nabla^{\dagger}S({\bf\Gamma}). \tag{12}\] This is a major new result for the time evolution of open quantum systems. A point in classical phase space is \({\bf\Gamma}\equiv\{{\bf q},{\bf p}\}\), the gradient operator is \(\nabla\equiv\{\nabla_{q},\nabla_{p}\}\), and its conjugate is \(\nabla^{\dagger}\equiv\{\nabla_{p},-\nabla_{q}\}\). The superscript \(0\) signifies the non-thermostatted decoherent trajectory. The rationale for this expression is that it ensures that time averages equal phase space averages (Attard 2002 section 5.1.3, 2012 section 7.2.3, 2023 section 5.2.3). Hamilton's equations of motion, which are the classical adiabatic equations of motion, is this with \(S\Rightarrow S^{\rm r}\equiv-{\cal H}/T\) (ie. neglecting the permutation entropy of the quantized momentum states). Since the permutation entropy is an even function of the momenta, \(S^{\rm perm}(-{\bf p})=S^{\rm perm}({\bf p})\), as is the reservoir entropy, these decoherent quantum equations of motion are time reversible (cf. Attard 2023 section 5.4.2). These equations are incompressible, \(\nabla\cdot\dot{\bf\Gamma}^{0}=0\), so that volume elements are conserved on a decoherent quantum trajectory (Attard 2002 section 5.1.5, 2012 section 7.2.4, 2023 section 5.1.2). The decoherent rate of change of entropy vanishes, \(\dot{\bf\Gamma}^{0}\cdot\nabla S=0\), as can be seen by inspection. Putting these together these equations of motion ensure that the quantum probability density in phase space, \(\wp({\bf\Gamma})=Z^{\prime-1}e^{S({\bf\Gamma})/k_{\rm B}}\), is a constant of the decoherent quantum motion. In the general case, \(S=S^{\rm r}+k_{\rm B}\ln(\eta\omega)\). In this paper the commutation function is neglected, \(\omega({\bf\Gamma})=1\). In general the symmetrization function gives the permutation entropy, \(S^{\rm perm}({\bf p})=k_{\rm B}\ln\eta({\bf\Gamma})\). In this paper this is approximated by pure momentum loops. In this paper the only position dependence for the entropy comes from the potential energy contribution to the reservoir entropy. Hence for a homogeneous system in which the potential energy is translationally invariant, the total momentum is a constant of the motion on this decoherent quantum trajectory. Of course, it is a formally exact statistical requirement that the equilibrium probability density be stationary on the decoherent quantum trajectory. Note that the decoherent quantum equations of motion do _not_ maximize the entropy. Rather, they maintain it. The dissipative term (next) in the equations of motion will tend to increase the entropy in the event that the current point in phase space is an unlikely one (appendix A). The superscript \(0\) is used to distinguish the decoherent quantum term from the stochastic dissipative thermostat contribution. The decoherent quantum equations of motion, as well as the stochastic dissipative terms, are for a statistical quantum system that is open to the environment. The decoherent quantum evolution to linear order in the time step is \[{\bf\Gamma}^{0}(t+\tau|{\bf\Gamma}(t),t)={\bf\Gamma}(t)+\tau\dot{\bf\Gamma}^{ 0}(t). \tag{13}\] The second entropy formulation of non-equilibrium thermodynamics shows that the general form for the equations of motion is necessarily stochastic and dissipative (Attard 2012 sections 3.6 and 7.4.5, 2023 section 5.4). Hence the generic equations of motion in classical phase space are (Attard 2023 section 9.4.4) \[{\bf\Gamma}(t+\tau)={\bf\Gamma}^{0}(t+\tau|{\bf\Gamma}(t),t)+{\bf R}_{p}(t). \tag{14}\] The thermostat contribution has only momentum components, \({\bf R}_{p}(t)=\overline{{\bf R}}_{p}(t)+\bar{{\bf R}}_{p}(t)\). This has been proven in the classical case (Attard 2012 sections 3.6 and 7.4.5) and it is assumed that it carries over to the present quantum case. The dissipative force is \[\overline{{\bf R}}_{p}(t)=\frac{\sigma^{2}}{2k_{\rm B}}\nabla_{p}S({\bf\Gamma} (t)), \tag{15}\] and the stochastic force satisfies \[\langle\tilde{{\bf R}}_{p}(t)\rangle={\bf 0},\ {\rm and}\ \langle\tilde{{\bf R}}_{p}(t) \tilde{{\bf R}}_{p}(t^{\prime})\rangle=\sigma^{2}\delta_{t,t^{\prime}}{\rm I }_{pp}. \tag{16}\] As these terms are derived from a linear expansion of the transition probability over a time step (Attard 2012 sections 3.6 and 7.4.5, 2023 section 5.4), the thermostat should represent a small perturbation on the decoherent quantum trajectory. The results should be independent of the stochastic parameter, provided that it is large enough to compensate the inevitable numerical error that arises from the first order equation of motions over the finite time step. Further, as is discussed in the results section III.0.1, as \(\sigma\) is made smaller the classical part of the trajectory tends toward that of an adiabatic (microcanonical) system, and the energy fluctuations and the heat capacity are reduced from their canonical values. In appendix A it is proven that the equilibrium probability density is stable during its evolution according to these equations of motion. In detail the equations of motion are to first order in the time step \[q_{j\alpha}(t+\tau) = q_{j\alpha}(t)-\tau T\partial_{p,j\alpha}S(t)\] \[p_{j\alpha}(t+\tau) = p_{j\alpha}(t)+\tau T\partial_{q,j\alpha}S(t) \tag{17}\] \[\quad+\ \frac{\sigma^{2}}{2k_{\rm B}}\partial_{p,j\alpha}S(t)+ \tilde{R}_{p,j\alpha}(t).\] Second order equations of motion are derived in appendix B. Here and below, \(j=1,2,\ldots,N\) and \(\alpha=x,y,z\). Of course \(\partial_{p,j\alpha}S^{\rm r}(t)=-p_{j\alpha}/mT\), and \(\partial_{q,j\alpha}S(t)=\partial_{q,j\alpha}S^{\rm r}(t)=-\partial_{q,j\alpha }U(t)/T=f^{0}_{j\alpha}(t)/T\), which are the classical velocity and the classical adiabatic force (divided by the temperature), respectively. Notice how the permutation entropy contributes to the decoherent quantum evolution of the positions. This breaks the classical nexus between momentum and the rate of change of position, and it gives a non-local contribution to the latter. These equations of motion with continuous momentum give the transition rate for the quantized momentum states. They ensure the exact quantum equilibrium probability density in classical phase space with quantized momentum states, which itself is the exact formulation of a quantum statistical system that allows position and momentum to be simultaneously specified. (Actually, the formula \(\dot{\bf\Gamma}^{0}=-T\nabla^{\dagger}S({\bf\Gamma})\) is exact; the present implementation invokes two approximations for the entropy, namely the neglect of the commutation function, and the restriction to pure momentum permutation loops.) The continuous momentum can be seen as the realization of the bosons between momentum measurements, and there is no reason that the values should not interpolate momentum eigenvalues. The decoherent quantum equations of motion could be interpreted as the realistic manifestation of Schrodinger's equation for an open quantum system. For a macroscopic system, the spacing between quantum levels is immeasurably small, and so for all phase space functions of practical interest the continuum limit may be taken, the only exceptions being the momentum state occupancy and the permutation entropy. It remains to obtain an expression for the gradient of the permutation entropy, \(\partial_{p,j\alpha}S^{\rm perm}({\bf p}(t))\). ### Continuous Occupancy In the formulation of the preceding subsection, the momentum \({\bf p}=\{{\bf p}_{1},{\bf p}_{2},\ldots,{\bf p}_{N}\}\) is a continuous variable that is not quantized. Although it is trivial to find from this the occupation of discrete states and the permutation entropy, equation (11), the equations of motion require the momentum gradient of the permutation entropy, and this is a \(\delta\)-function at the boundaries of the occupied states (appendix C). It is not numerically feasible to compute this directly. This conundrum is resolved in the present paper by defining a continuous occupancy and permutation entropy, combined with umbrella sampling. Two different formulations for the fractional occupancy are used. If boson \(j\) is actually in the state \({\bf a}_{j}\), \(p_{j\alpha}\in(a_{j\alpha}-\Delta_{p}/2,a_{j\alpha}+\Delta_{p}/2)\), then the first formulation defines the fraction of it associated with the state \({\bf a}_{j}\) in the direction \(\alpha\) to be \[\phi_{j\alpha}\equiv 1-[2(p_{j\alpha}-a_{j\alpha})/\Delta_{p}]^{\kappa}/2,\ \ \kappa=2,4,\ldots\] (18a) One sees that \[\phi_{j\alpha}\in[1/2,1]\]. The even positive integer \[\kappa\] is chosen with a fixed value; the larger this is, the greater is the share of the occupation weight given to the state \[{\bf a}_{j}\], and the closer to the boundary of the state does the boson have to be in order to contribute significantly to the occupancy of the nearest neighboring states. The second formulation defines the fraction to be \[\phi_{j\alpha}\equiv\frac{1}{2}-\frac{s_{j\alpha}}{2}\tanh[2\kappa\{p_{j \alpha}-(a_{j\alpha}+a^{\prime}_{j\alpha})/2\}/\Delta_{p}]. \tag{18b}\] Here \(s_{j\alpha}\equiv{\rm sign}(p_{j\alpha}-a_{j\alpha})\) tells which half of the momentum state the boson is in, and \(a^{\prime}_{j\alpha}=a_{j\alpha}+\Delta_{p}s_{j\alpha}\) is the closest neighbor state to boson \(j\) along the \(p_{\alpha}\) axis. Hence the argument of the hyperbolic tangent tells the distance to the closest boundary. Again the fraction is equal to one half at the boundary, and toward the center of the state it approaches \(0.5+0.5\tanh\kappa\approx 1\), depending on how large \(\kappa\) is. With these the continuous occupancy of the single particle momentum state \({\bf a}\) for the continuous momentum configuration \({\bf p}\) is defined as \[{\cal N}_{\bf a}({\bf p})\] \[= \sum_{j=1}^{N}\Big{\{}\phi_{jx}\phi_{jy}\phi_{jz}\delta_{a,{\bf a }_{j}}+\overline{\phi}_{jx}\overline{\phi}_{jy}\overline{\phi}_{jz}\delta_{{ \bf a},{\bf a}^{\prime}_{j}}\] \[+\overline{\phi}_{jx}\phi_{jy}\phi_{jz}\delta_{a_{x},a^{\prime}_{j x}}\delta_{a_{y},a_{jy}}\delta_{a_{z},a_{jz}}+2\ {\rm similar\ terms}\] \[+\overline{\phi}_{jx}\overline{\phi}_{jy}\phi_{jz}\delta_{a_{x},a^ {\prime}_{jx}}\delta_{a_{y},a^{\prime}_{jy}}\delta_{a_{x},a_{jz}}+2\ {\rm similar\ terms}\Big{\}},\] where \(\overline{\phi}=1-\phi\). This formulation distributes the unit weight of each boson over its own and seven neighboring states. The closer the boson is to the center of its momentum state, the more weight it contributes to its own state. As mentioned, the larger \(\kappa\) is, the thinner is the boundary region for significant weight sharing, and the more the gradient approaches a \(\delta\)-function. If the boson is at the boundary between two momentum states, exactly half of the respective one-dimensional part of the weight is contributed to each. With this continuous (ie. real number) occupancy, the permutation entropy of a momentum configuration is \[S^{\rm perm}({\bf p})=k_{\rm B}\sum_{\bf a}\ln\Gamma({\cal N}_{\bf a}+1), \tag{20}\] where \(\Gamma(z+1)=z!\) is the Gamma function (Abramowitz and Stegun 1972 chapter 6). Now \(\phi_{j\alpha}\) is the \(\alpha\) component of the weight of boson \(j\) in the state \({\bf a}_{j}\), and \(1-\phi_{j\alpha}\) is that in the neighboring state \({\bf a}^{\prime}_{j}={\bf a}_{j}+{\bf s}_{j}\Delta_{p}\). Changing \({\bf p}_{j}\) changes \(\mathbf{\phi}_{j}\) and hence \({\cal N}_{{\bf a}_{j}}\) and the seven \({\cal N}_{{\bf a}^{\prime}_{j}}\). With this the components of the gradient of the permutation entropy are \[\nabla_{p,j\alpha}S^{\rm perm}({\bf p}) \tag{21}\] \[= k_{\rm B}\sum_{r_{jx}=0}^{1}\sum_{r_{jy}=0}^{1}\sum_{r_{jz}=0}^{1} \frac{\partial\ln\Gamma({\cal N}_{{\bf a}_{j}^{\prime}}+1)}{\partial{\cal N}_{{ \bf a}_{j}^{\prime\prime}}}\ \frac{\partial\Phi_{j}^{({\bf r}_{j})}}{\partial p_{j \alpha}},\] \[{\bf a}_{j}^{\prime\prime}\equiv{\bf a}_{j}+\sum_{\gamma=x,y,z}r_ {j\gamma}s_{j\gamma}\Delta_{p}\widehat{\gamma}.\] Here has been defined \[\Phi_{j}^{({\bf r}_{j})} \equiv \Phi_{jx}^{(r_{jx})}\Phi_{jy}^{(r_{jy})}\Phi_{jz}^{(r_{jz})}, \tag{22}\] \[\Phi_{j\alpha}^{(r_{j\alpha})}\equiv\overline{r}_{j\alpha}\phi_{ j\alpha}+r_{j\alpha}\overline{\phi}_{j\alpha},\] where \(\overline{r}_{j\alpha}=1-r_{j\alpha}\). This is the contribution from boson \(j\) to the occupancy of the state \({\bf a}_{j}^{\prime\prime}\), and one has \(\partial_{p,j\alpha}{\cal N}_{{\bf a}_{j}^{\prime\prime}}=\partial_{p,j\alpha }\Phi_{j}^{({\bf r}_{j})}\). The \(x\)-component of the required derivative is \[\frac{\partial\Phi_{j}^{({\bf r}_{j})}}{\partial p_{jx}} = (1-2r_{jx})\,(\partial_{p_{jx}}\phi_{jx})\Phi_{jy}^{(r_{jy})}\Phi_{jz }^{(r_{jz})}, \tag{23}\] and similarly for the other two components. The continuous occupancy formulation is equivalent to sampling phase space with an umbrella weight. One should correct for this by taking the average over the trajectory to be \[\langle f({\bf\Gamma})\rangle_{N,V,T}=\frac{\sum_{n}f({\bf\Gamma}({\bf t}_{n} ))\prod_{\bf a}\frac{N_{\bf a}(t_{n})!}{\Gamma({\cal N}_{\bf a}(t_{n})+1)}}{ \sum_{n}\prod_{\bf a}\frac{N_{\bf a}(t_{n})!}{\Gamma({\cal N}_{\bf a}(t_{n})+1) }}. \tag{24}\] Umbrella sampling is of course a formally exact statistical technique (ie. confidence that the average value is exact increases with increasing number of samples). In practice however, the computational efficiency (ie. the statistical error for a fixed number of time steps) can be quite sensitive to the choice of umbrella function and parameters. In the long run the average value is not affected by such a choice, but the confidence in the value is. The ratio of the actual weight to the umbrella weight can be quite large, (the present fractional occupancy functions systematically underestimate the occupancy of highly occupied states) and it increases with system size. The larger the ratio the lower the statistical accuracy for a given length of simulation. In both formulations of the fractional occupancy, the umbrella weight ratio approaches unity with increasing \(\kappa\). However, there are practical computational limits to how large \(\kappa\) can be, including the cost of evaluating the fractional occupancy function, and its accuracy. Also large permutation entropy gradients, which scale with \(\kappa\) and the proximity to a momentum state boundary, necessitate a smaller time step than would suffice for the classical equations of motion. It was found in practice that the fractional occupancy underestimated the actual occupancy of highly occupied states, and overestimated that of few occupied states. The former is the more significant effect, and it leads to the umbrella weight ratio being significantly greater than unity. To ameliorate this, most runs reported below were performed with the continuous occupancy defined as \({\cal N}_{\bf a}=c{\cal N}_{\bf a}\), with \(c=1.02\)-1.04, depending on the temperature and the function used for the fractional occupancy. (The value was determined by making the average of the ratio of the actual weight to the umbrella weight close to unity in some smaller equilibration runs.) The first derivative of the permutation entropy given above should be multiplied by \(\partial_{{\cal N}_{\bf a}}\bar{\cal N}_{\bf a}=c\), since \(\nabla_{p}S^{\rm perm}=k_{\rm B}\sum_{\bf a}[\partial_{\bar{\cal N}_{\bf a}}\ln \bar{\cal N}_{\bf a}][\partial_{{\cal N}_{\bf a}}\bar{\cal N}_{\bf a}]\nabla_ {p}{\cal N}_{\bf a}\). The second derivative given in appendix B similarly contains terms linear and quadratic in \(c\). ### Computational Details The quantum stochastic molecular dynamics (QSMD) algorithm has much in common with earlier classical SMD versions (Attard 2012 section 11.1). Periodic boundary conditions and the minimum image convention were used in position space. Both first and second order equations of motion were used, with the latter taking about three times longer to evaluate at each time step but enabling a ten times larger time step. The time step and thermostat parameters chosen were sufficient in the classical case to yield the known exact classical kinetic energy and fluctuations therein to within a few per cent. The Lennard-Jones pair potential was used \(u_{\rm LJ}(q_{jk})=4\epsilon_{\rm LJ}[(\sigma_{\rm LJ}/q_{jk})^{12}-(\sigma_{ \rm LJ}/q_{jk})^{6}]\), with potential cut-off of \(R_{\rm cut}=3.5\sigma_{\rm LJ}\). The Lennard-Jones parameters for helium that were used were \(\epsilon_{\rm He}=10.22k_{\rm B}\,{\rm J}\) and \(\sigma_{\rm He}=0.2556\,{\rm nm}\)(van Sciver 2012). A spatially based small cell neighbor table was used. The number density was \(\rho=N/L^{3}\), and the spacing between momentum states was \(\Delta_{p}=2\pi\hbar/L\). For most of the simulations \(N=1,000\). Some tests were performed with up to \(N=10,000\) in the classical case; in the quantum case the performance of the umbrella sampling deteriorated with increasing system size. The simulations were for a homogeneous system at the liquid saturation density at each temperature (Attard 2023 table 8.3). The simulation was broken into 10 blocks, and the fluctuation in the block averages was used to estimate the statistical error. Repeat runs were also used to estimate independently the statistical error. Generally a single run of the program for \(N=1000\) consisted of \(5\times 10^{5}\) time steps. The internal statistical error for the occupancies estimated from the fluctuations in the averages over each of 10 blocks of this was quite often much smaller than the statistical error estimated from the fluctuations in the averages in a series of consecutive runs. This says that the occupancies over consecutive blocks were substantially correlated, which gives an indication of how large and long-lived are the fluctuations in momentum state occupancies. Since the start point of each run was the end point of the previous run, there is no guarantee that correlations do not cause a non-negligible underestimate in the occupancy error obtained from consecutive runs. The statistical error reported below is generally the larger of the various estimates. The author's experience with classical systems and with Monte Carlo simulations has led him to conclude that in the quantum regime the present system displays unusually large and long-lived fluctuations, and it is very slow to equilibrate. The Gamma function was evaluated using small and large argument expansions (Abramowitz and Stegun 1972 equations (6.1.36) and (6.1.41)). These are readily differentiated. The viscosity was obtained from the momentum-moment-momentum-moment time-correlation function (Attard 2012 equation (9.117)), \[\eta_{xz}(t)=\frac{1}{2Vk_{\rm B}T}\int_{-t}^{t}{\rm d}t^{\prime}\,\left<\dot {P}^{0}_{xz}(\mathbf{\Gamma})\dot{P}^{0}_{xz}(\mathbf{\Gamma}(t^{\prime}| \mathbf{\Gamma},0))\right>. \tag{2.25}\] This is often called a Green-Kubo expression (Green 1954, Kubo 1966). In fact it was Onsager who first gave the relationship between the transport coefficients and the time correlation functions (Onsager 1931), and the present author's own derivation owen much to this (Attard 2012). The first \(z\)-moment of the \(x\)-component of momentum is \(P_{zx}(\mathbf{\Gamma})=\sum_{j=1}^{N}z_{j}p_{xj}\). In the classical case \(\dot{P}^{0}_{xx}=\dot{P}^{0}_{xz}\) (see next). The author's classical derivation of the Green-Kubo relations via the second entropy assumes a Markov regression regime in which plateau region \(\eta(t)\) is independent of \(t\). In practice \(\eta(t)\) varies most slowly with \(t\) at its maximum, which is taken as 'the' value of the viscosity. This is justified by the fact that the curvature of the plateau region decreases with increasing system size, and it can be expected to become flat in the thermodynamic limit. Obviously the maximum value can be sensitive to statistical errors, particularly since these grow with \(t\). The results below for \(\eta(t)\) allow the reader to judge for themselves the best estimate of the viscosity. The trajectory \(\mathbf{\Gamma}(t^{\prime}|\mathbf{\Gamma},0)\) is meant to be the decoherent quantum one, but the results below use the thermostatted trajectory. The results do not appear sensitive to the value of the thermostat parameter \(\sigma\). Perhaps more significant is that the trajectory is the one generated using the continuous occupancy for the permutation entropy, and no correction for this is made beyond applying the ratio of the actual weight to the umbrella weight at the start of the trajectory during the phase space averaging. One should really apply a trajectory weight correction that is the product of the umbrella to actual transition probability weights for each point on the trajectory. The above is one of several expressions for the viscosity (Attard 2012 equations (9.116) and (9.117)) that are nominally equivalent, but which differ in their statistical behavior. Of these, the present one that depends only upon the rate of change of the momentum moment is unique in being suitable for systems with periodic boundary conditions and the minimum image convention. (The reason that the other expressions don't work is that they depend upon the momentum moment rather than its rate of change, and when bosons enter and leave the central simulation cell over the course of the trajectory the momentum moment is messed up.) The decoherent quantum rate of change of the first momentum moment is \[\dot{P}^{0}_{xz}(\mathbf{\Gamma}) = \dot{\mathbf{\Gamma}}^{0}\cdot\nabla P_{xz}(\mathbf{\Gamma}) \tag{2.26}\] \[= \sum_{i=1}^{N}\Big{[}\frac{p_{z}p_{xi}}{m}-Tp_{xi}\nabla_{p,zi}S^ {\rm perm}(\mathbf{p})\Big{]}\] \[\quad-\sum_{i<j}^{N}u^{\prime}(q_{ij})\frac{[z_{i}-z_{j}][x_{i}-x _{j}]}{q_{ij}}.\] This is suitable when the potential energy consists solely of central pair terms. The first term is the contribution to the rate of momentum moment change due to molecular diffusion (the first term of which is classical), and the second term is the contribution from intermolecular forces. This expression depends upon the relative separations rather than the absolute positions, which is why the value of the viscosity based upon it is independent of the system volume when periodic boundary conditions and the minimum image convention are applied. Compared to the classical expression (Attard 2012 equation (9.119)), this contains the additional decoherent quantum term, \(\dot{q}^{0,0,\rm u}_{iz}p_{xi}=-Tp_{xi}\nabla_{p,iz}S^{\rm perm}(\mathbf{p})\), which is directly responsible for the reduction in the viscosity of the quantum liquid in the condensed regime (section III.4). Unlike in the classical case, this quantum contribution is _not_ symmetric, so that \(\dot{P}^{0}_{xz}\neq\dot{P}^{0}_{xz}\). (This means that there are six independent estimates of the viscosity in each simulation.) The viscosity tensor, which is this multiplied by its evolved value, integrated, and averaged, is expected to be symmetric for a homogeneous system. The results below are presented in dimensionless form: the temperature is \(T^{*}=k_{\rm B}T/\epsilon_{\rm LJ}\), the number density is \(\rho^{*}=\rho\sigma_{\rm LJ}^{3}\), the time step is \(\tau^{*}=\tau/t_{\rm LJ}\), the unit of time is \(t_{\rm LJ}=\sqrt{m\sigma_{\rm LJ}^{2}/\epsilon_{\rm LJ}}\), the variance of the stochastic force is \(\sigma^{*}=\sigma/\sqrt{mk_{\rm B}T\tau/t_{\rm LJ}}\), the momentum is \(p_{x}^{*}=p_{x}/\sqrt{mk_{\rm B}T}\), and the viscosity is \(\eta^{*}=\eta\sigma_{\rm LJ}^{3}/\epsilon_{\rm LJ}t_{\rm LJ}\). ## III Results Table 1 shows quantum stochastic molecular dynamics (QSMD) results at four points on the Lennard-Jones \({}^{4}\)He liquid saturation curve. Quantum loop Monte Carlo (QLMC) simulations, give the \(\lambda\)-transition in Lennard-Jones \({}^{4}\)He at about \(T^{*}=0.625\) (Attard 2023 chapter 8). This is driven by the divergence in the position permutation loops (Attard 2023 section 8.4, 8.6, 9.2), which are neglected here. The present QSMD simulations invoke pure momentum permutation loops via momentum state occupancy, which results from wave function symmetrization for ideal bosons (London 1938, Pathria 1972 chapter 7, Attard 2023 sections 8.2 and 8.3). The ideal boson \(\lambda\)-transition, \(\rho_{\rm c,id}\Lambda_{\rm c,id}^{3}=2.612\), can be seen in table 1 to occur between \(T^{*}=0.60\) and \(T^{*}=0.55\). There is no unambiguous evidence in the table of a transition, since the heat capacity peak is within the statistical error, the maximum occupancy and the ground state occupancy increase continuously (although the former is greater than the latter only above the ideal boson transition), and the viscosity decreases continuously through this range. These results are discussed individually below. One reason for the limited evidence for the \(\lambda\)-transition in the present series of simulations is its weakness in the case of ideal bosons (London 1938, Pathria 1972 chapter 7, Attard 2023 sections 8.2 and 8.3). In the previous QLMC simulations for Lennard-Jones \({}^{4}\)He bosons both pure and mixed loops were used (Attard 2023 chapter 8). Mixing ground momentum state bosons in position loops suppresses condensation, which makes the transition quite sharp (Attard 2023 section 8.5). The absence of both position loops and such mixing in the present simulations means that the transition itself must resemble that for ideal bosons, which prevents a divergence in the heat capacity and precludes a sharp transition in condensation. #### iii.2.1 Heat Capacity From the classical (\(S^{\rm perm}({\bf p})\equiv 0\)) results in table 1, it can be seen that in general the kinetic energy is within 0.1-3% of the exact result \(\beta\langle{\cal K}\rangle^{\rm cl}/N=3/2\). The accuracy depends upon the length of the time step, the order of the equations of motion, and the magnitude of the thermostat parameter \(\sigma^{*}\). Comparing the first (not shown) and second order equations of motion, for comparable accuracy a ten times larger time step could be used for the latter, which took about three time longer to evaluate per time step. It was assumed that the parameters that were found adequate in the classical case could be used for the quantum calculations. The simulated heat capacity was found to be sensitive to the value used for \(\sigma\). For example, at \(T^{*}=0.65\) for the classical case, at \(\sigma^{*}=0.2\) the heat capacity is \(C_{V}/Nk_{\rm B}=1.9(4)\). The fluctuations in kinetic energy are \(\beta^{2}[\langle{\cal K}^{2}\rangle-\langle{\cal K}\rangle^{2}]/N=1.0(2)\), compared to the exact value of \(3/2\). At \(\sigma^{*}=0.5\), the heat capacity is \(C_{V}/Nk_{\rm B}=2.4(9)\), and the fluctuations in kinetic energy are \(\beta^{2}[\langle{\cal K}^{2}\rangle-\langle{\cal K}\rangle^{2}]/N=1.3(5)\). In the former case the average ratio of the stochastic change in kinetic energy at each time step to the classical adiabatic change is \(\bar{\Delta}{\cal K}/\Delta^{0}{\cal K}=0.082\), and in the latter it is 0.51. In the former case the viscosity is \(\eta^{*}=3.8(6)\), and in the latter case 3.7(12). From these one concludes that the value of the stochastic parameter affects the heat capacity, and that a value of at least \(\sigma^{*}=0.5\) should be used. The larger value has no significant effect on the viscosity. Energy is a classically conserved variable in a microcanonical system, and so the adiabatic energy fluctuations must be zero. This explains why the heat capacity, as measured by the average of the energy fluctuations, is so sensitive to small values of the stochastic parameter \(\sigma\). (The classical adiabatic motion is the dominant contribu \begin{table} \begin{tabular}{c c c c c c c c c c c c c} \hline \(T^{*}\) & \(\rho^{*}\) & \(\rho\Lambda^{3}\) & Msteps & \(\sigma^{*}\) & \(\beta{\cal K}/N\) & \(\beta U/N\) & \(|\bar{\Delta}_{\cal K}/\Delta_{\cal K}^{0}|\) & avocc & maxocc & \(N_{\bf 0}\) & \(C_{V}/Nk_{\rm B}\) & \(\eta^{*}(0.5)\) & \(\eta^{*}(1.5)\) \\ \hline & & & \multicolumn{6}{c}{quantum} & & & & & & & \\ 0.70 & 0.847 & 1.76 & 2.5 & 1.0 & 0.93(1) & -8.77(2) & 2.4 & 1.98(2) & 33(5) & 26(4) & 1.3(4) & 2.8(21) & 3.9(37) \\ 0.65 & 0.868 & 2.02 & 4.5 & 1.0 & 0.86(2) & -9.75(2) & 3.0 & 2.18(3) & 52(10) & 45(12) & 1.2(5) & 0.93(91) & 2.6(13) \\ 0.60\({}^{\dagger}\) & 0.887 & 2.33 & 7 & 0.2 & 0.84(1) & -10.832(7) & 0.13 & 2.35(3) & 62(7) & 53(8) & 1.8(5) & 2.7(7) & 3.9(18) \\ 0.60 & 0.887 & 2.33 & 2.5 & 1.0 & 0.77(1) & -10.86(1) & 2.6 & 2.48(5) & 86(11) & 86(12) & 1.3(5) & 1.3(12) & 2.1(19) \\ 0.55\({}^{\ddagger}\) & 0.905 & 2.70 & 5 & 0.2 & 0.75(3) & -12.12(2) & 0.09 & 2.67(5) & 96(19) & 94(21) & 1.3(5) & 1.7(7) & 0.7(25) \\ 0.55 & 0.905 & 2.70 & 3.5 & 0.2 & 0.77(2) & -12.11(2) & 0.13 & 2.61(7) & 83(23) & 80(25) & 1.4(9) & 2.6(13) & 4.7(33) \\ 0.55 & 0.905 & 2.70 & 5 & 1.0 & 0.68(1) & -12.15(2) & 3.1 & 2.81(5) & 147(23) & 147(23) & 2.1(11) & 3.9(12) & 7.8(28) \\ \hline & & & \multicolumn{6}{c}{classical} & & & & & & \\ 0.70 & 0.847 & 1.76 & 2 & 1.0 & 1.501(6) & -8.748(6) & 2.1 & 1.302(2) & 5.39(2) & 1.75(3) & 2.5(2) & 3.05(32) & 3.73(51) \\ 0.65 & 0.868 & 2.02 & 1.5 & 0.2 & 1.51(2) & -9.71(2) & 0.08 & 1.34(1) & 5.70(7) & 1.97(4) & 1.9(4) & 3.11(35) & 3.8(6) \\ 0.65 & 0.868 & 2.02 & 1 & 0.5 & 1.50(3) & -9.72(2) & 0.50 & 1.35(1) & 5.73(7) & 1.99(4) & 2.4(9) & 3.03(44) & 3.7(12) \\ 0.60\({}^{\dagger}\) & 0.887 & 2.33 &.4–6.0 & 0.2 & 1.42(2) & -10.89(1) & 0.17 & 1.43(1) & 7.47(7) & 2.51(7) & 0.03(6) & 3.81(99) & 5.0(23) \\ 0.60 & 0.887 & 2.33 & 3 & 0.2 & 1.53(2) & -10.79(2) & 0.08 & 1.383(1) & 6.00(7) & 2.21(6) & 0.08(1) & 4.02(51) & 5.1(10) \\ 0.60 & 0.887 & 2.33 & 1.5 & 0.5 & 1.51(1) & -10.81(2) & 0.5 & 1.392(5) & 6.06(4) & 2.30(4) & 2.3(2) & 4.16(58) & 6.1(14) \\ 0.55 & 0.905 & 2.70 & 2 & 1.0 & 1.501(8) & -12.12(1) & 2.0 & 1.456(4) & 6.53(3) & 2.66(4) & 2.8(6) & 4.56(40) & 6.9(12) \\ \hline & & & \multicolumn{6}{c}{\({}^{\dagger}N=5,000,\ {}^{\ddagger}\tau^{*}=0.5\times 10^{-4}\)} & & & & & & & \\ \end{tabular} \end{table} Table 1: Stochastic molecular dynamics results for Lennard-Jones \({}^{4}\)He at liquid saturation density (\(N=1000\), \(\tau^{*}=10^{-4}\), second order equations of motion). Msteps is millions of time steps, \(|\bar{\Delta}{\cal K}/\Delta^{0}{\cal K}|\) is the ratio of the stochastic to the decoherent quantum change in kinetic energy per time step, avocc is \(N\) divided by the average number of occupied states, and maxocc is the average number in the maximally occupied state. For the quantum cases, the hyperbolic fractional occupancy, Eq. (18b), was used with \(\kappa=11\) and \(c=1.02\)–1.04. The number in parentheses is twice the standard error of the final digit or digits, which gives the 95% confidence interval. tion to the decoherent quantum motion.) The viscosity, on the other hand, is measured from the fluctuations of the momentum moment, and this is not a conserved variable in a microcanonical system. One therefore does not expect the viscosity to be sensitive to the value of \(\sigma\). The ideal boson heat capacity at the \(\lambda\)-transition is \(C_{V}/Nk_{\rm B}=1.925\) at \(\rho_{\rm c,id}\Lambda^{3}_{\rm c,id}=2.612\). (Pathria 1972). The greatest heat capacity in the present results on a relatively coarse temperature grid is \(C_{V}/Nk_{\rm B}=1.8(5)\) at \(T^{*}=0.60\), \(\sigma^{*}=0.2\), \(\rho\Lambda^{3}=2.33\) or \(C_{V}/Nk_{\rm B}=2.1(11)\) at \(T^{*}=0.55\), \(\sigma^{*}=1.0\), \(\rho\Lambda^{3}=2.70\). Although the statistical error is rather large, the heat capacity in the table for both the quantum and the classical liquid shows a rather weak variation with temperature. The largest quantum value is smaller than experimental values because it is missing position permutation loops, which diverge. It should be larger than the ideal boson value because it includes the potential energy contribution. However the situation is complicated by the fact that for interacting bosons the ideal fugacity factor is smaller than the ideal fugacity at the same value of \(\rho\Lambda^{3}\), and hence the number of condensed bosons and their fluctuations are also smaller. That the peak occurs close to the location of the ideal boson peak is expected because ideal boson statistics are used for the permutation entropy. At \(T^{*}=0.55\) the QSMD kinetic and potential energy contributions to the heat capacity are about equal. The discrepancy between \(\langle\Delta{\cal H}^{2}\rangle\) and \(\langle\Delta{\cal K}^{2}\rangle+\langle\Delta U^{2}\rangle\) was of the same order as the statistical error. In general, but most notably for smaller values of \(\sigma^{*}\), the former expression was less than the latter, which indicates a negative correlation between the fluctuations in potential and kinetic energy, as expected from energy conservation on the dominant classical adiabatic part of the trajectory. Because of the factorization of the momentum and the position integrals, exact equality would be expected if the thermostat was adequate. Since the main focus of this paper is on dynamic rather than static properties, and since position permutation loops have been neglected, it was decided not to pursue more reliable values for the heat capacity. The kinetic energies in the quantum cases were significantly less than the classical result. This is an expected effect of condensation into low lying momentum states in the quantum system. #### iv.2.2 Stochastic Dissipative Contribution The indirect effect of the stochastic parameter was just discussed in the context of the heat capacity. The direct influence of the thermostat can be gleaned from \(|\Delta{\cal K}/\Delta^{0}{\cal K}|\), which is the ratio of the stochastic to the decoherent quantum change in kinetic energy per time step. Using a parameter value \(\sigma^{*}=0.5\) gave a ratio of about 1.5 in most cases. The ratio can be expected to scale quadratically with the parameter; using \(\sigma^{*}=0.2\) reduced the ratio to 0.1-0.2, and a value of \(\sigma^{*}=1.0\) increased the ratio to 2-3. A large value of the thermostat parameter speeds equilibration, but should otherwise have no effect on static equilibrium averages (apart from the energy fluctuations). From the table, it appears to have little effect on dynamic quantities such as the viscosity. As mentioned, the momentum moment is not a conserved variable in the microcanonical system. For the quantum case at \(T^{*}=0.55\) (below the ideal boson transition), the value \(\sigma^{*}=0.2\) gives a significantly smaller ground momentum state occupancy than the value at \(\sigma^{*}=1.0\) for the same time step. Also, the kinetic energy is significantly higher, and the heat capacity is lower. A smaller time step at \(\sigma^{*}=0.2\) increases the occupancy slightly. Broadly similar conclusions can be drawn from the results at \(T^{*}=0.60\). It may be that in these cases the numerical error due to the length of the time step is too large to be compensated completely by the thermostat. In general terms, the smaller the time step the more reliable are the results. #### iv.2.3 Occupancy The quantity avocc in table 1 is the total number of bosons divided by the average number of occupied states, which were counted with the quantized occupancy, equation (11). One can see that for the quantum liquid avocc is on the order of two, and that it increases with decreasing temperature. It is about 50% larger in the quantum case than in the classical case. The average maximum occupancy maxocc is the average occupancy of the most occupied state at each instant. It varies between about 30 and 150 in the quantum case, again increasing with decreasing temperature. At \(T^{*}=0.60\) in the classical case it is about 6 compared to about 77 in the quantum case. These results are for \(N=1000\); for \(N=5000\) in the classical case maxocc = \(7.47(7)\), which tends to confirm that the occupancy of a state is an intensive variable that is independent of the system size (Attard 2023 sections 8.2.4, 8.6.6, 9.2). (The reason for the small increase in the maximum for the five times larger system appears to be the finer resolution of the states in the larger system; measuring the maximum of a curve by sampling produces a larger value if more samples are used.) The confirmation of the intensive nature of occupancy is even stronger and less ambiguous for the occupancy of the ground momentum state itself. By definition the average maximum occupancy must be greater than or equal to the average occupancy of the ground momentum state, \(\langle N_{\bf 0}\rangle\). Below the ideal boson condensation transition, \(\rho_{\rm c,id}\Lambda^{3}_{\rm c,id}=2.612\), at \(T^{*}=0.55\) (\(\rho^{*}=0.905\), \(\rho\Lambda^{3}=2.702\)) the maximum occupancy and the ground state occupancy are equal to within statistical error. It is emphasized that in this case about 15% of the bosons in the system are condensed into the ground momentum state, which of course means that 85% of the bosons are not in the ground momentum state. The relative number of bosons in the ground momentum state is expected to go to zero in the thermodynamic limit (Attard 2023 sections 8.2.4, 8.6.6, 9.2). To see this another way, at \(T^{*}=0.55\), in one series of quantum runs at \(\sigma^{*}=1.0\), the average ground momentum state occupancy was 147(23) and the average occupancy of a single first excited momentum state was 24(4). Since the latter is six-fold degenerate, this means that on average there are as many bosons in a first excited state as in the ground momentum state: 143(9) versus 147(23). In another series of quantum runs at the same temperature and \(\sigma^{*}=0.2\), \(\langle N_{\bf 0}\rangle=80(25)\) and \(\langle N_{1,0,0}\rangle=23(3)\), which gives on average 136(8) bosons in all the first excited states. These appear to be affected by the time step, as halving it to \(\tau^{*}=0.5\times 10^{-4}\) with \(\sigma^{*}=0.2\), gives \(\langle N_{\bf 0}\rangle=94(21)\) (and \(\langle N_{1,0,0}\rangle=23.5(16)\)). This is probably the most reliable value. The runs at the smaller time step were smoother and gave smaller statistical error for the same number of time steps (ie. half the total time of the trajectory), and in retrospect it would have been better to have run all the quantum cases with the smaller time step. Due to permutation entropy, the'stickiness' of the ground momentum state is greater than that of any one first excited momentum state. But it must be clearly understood that these occupancy numbers, \({\cal O}(10^{2})\), will not change in taking the thermodynamic limit, which is to say that the permutation entropy for the ground momentum state does not diverge. These results contradict Einstein (1924) and London (1938), who assumed that Bose-Einstein condensation is solely into the ground state. It is not even predominantly into the ground state, since there are more condensed bosons _not_ in the ground state than are in the ground state. (A condensed boson is here taken to be one in a multiply occupied momentum state.) Instead they reinforce the present author's contention that condensation is into multiple low-lying momentum states (Attard 2023 sections 8.2.4, 8.6.6, 9.2). There is also a contradiction between the present result of \({\cal O}(10^{2})\) bosons in the momentum ground state at \(T^{*}=0.55\), with a previous estimate of \({\cal O}(10^{6})\) bosons in the critical velocity momentum state for capillary superfluid flow (Attard 2023 section 9.4.3). That number is based on the binary division approximation, which is inconsistent in that it equates the number of condensed bosons (extensive) to the number in a single momentum state (intensive). It would appear that the earlier estimate should be reinterpreted as referring to a range of momentum states rather than to a single critical momentum state. This would make more sense in that superfluid flow is macroscopic, and the total occupancy of a macroscopic number of momentum states is macroscopic. Figure 1 shows the average occupancy of momentum states along an axis. These are from a different series of simulations to the data shown in table 1 and test various umbrella sampling parameters. Values of \(\kappa\) in Eq. (2.18a) and in Eq. (2.18b) should not be compared. No systematic studies of the behavior with this parameter were carried out beyond noting that values around 10 were better than values around 2. It was concluded the hyperbolic form, Eq. (2.18b), was more statistically efficient than the polynomial form Eq. (2.18a). In both case the umbrella weight ratio was brought closer to unity by judicious choice of the parameter \(c\). Of course what matters is the fluctuations in this ratio, but these are necessarily smaller the smaller it is. At \(T^{*}=0.65\), for \(\kappa=11\) (hyperbolic umbrella fractional occupancy, Eq. (2.18b)) using \(c=1.040\) gave average umbrella weight ratio 6(6) and using \(c=1.035\) gave \(6(10)\times 10^{3}\). The potential energy was \(\beta U/N=-9.732(16)\) and -9.725(19), respectively. The kinetic energy was \(\beta{\cal K}/N=0.96(4)\) and \(0.90(2)\), respectively. The maxocc was 34(4) and 34(3). The error estimates are for the same number of time steps. Despite the fact that the average umbrella weight ratio is quite sensitive to the value of the parameter \(c\), there is no strong evidence that using it to make the ratio close to unity significantly reduces the statistical error. The data in figure 1 is at \(T^{*}=0.6\), \(\rho^{*}=0.8872\), where \(\rho\Lambda^{3}=2.33\). This is just above the ideal boson condensation transition, \(\rho_{c,{\rm id}}\Lambda_{c,{\rm id}}^{3}=2.612\). The classical distribution is \(N(p_{x},0,0)=N({\bf 0})e^{-\beta p_{x}^{2}/2m}\) (not shown). Compared to this the quantum distribution in figure 1 is much higher and more sharply peaked For comparison, the classical liquid has ground momentum state occupancy \(N({\bf 0})=2.18(5)\) (table 1) compared to the quantum value of 53(8)-86(12) (table 1) or 22(9)-38(13) (figure 1). Perhaps the most important point of the figure is the agreement between the known analytic result for ideal bosons, \(N(p_{x},0,0)=ze^{-\beta p_{x}^{2}/2m}/[1-ze^{-\beta p_{x}^{2}/2m}]\), and the present simulations. This confirms the validity of the present equations of motion, molecular dynamics simulation algorithm, and the continuous distribution umbrella sampling technique. The fugacity fitted here, \(z=0.966\), is the product of the ideal and excess factors, \(z=z^{\rm id}z^{\rm ex}\), with \(z^{\rm id}=0.995\) and \(z^{\rm ex}=VQ^{\rm cl}(N,V,T)/Q^{\rm cl}(N+1,V,T)\). The occupation of the ground momentum state for Lennard-Jones bosons is almost an order of magnitude smaller (28 versus 197) than would be predicted for ideal bosons at the same density and thermal wavelength. The rather large statistical error for the ground momentum state, as can be seen from the error bars in figure 1 or by comparing the results to those table 1, is an indicator that the fluctuations in occupancy are large and long-lived. For ideal bosons, the relative mean square fluctuations are close to unity in the condensed regime, \([\langle N_{\bf a}^{2}\rangle_{\rm id}^{+}-(\langle N_{\bf a}\rangle_{\rm id}^ {+})^{2}]/(\langle N_{\bf a}\rangle_{\rm id}^{+})^{2}=1+1/\langle N_{\bf a} \rangle_{\rm id}^{+}\) (Pathria 1972 equation (6.3.9), Attard 2023 equation (7.159)). Figure 2 gives a typical snapshot of the instantaneous boson occupancy of the low lying momentum states in the \(p_{z}=0\) plane. Of course this was taken from a trajectory corresponding to the umbrella distribution, and it can be regarded as typical of the actual probability distribution only in so far as the former approximates the latter. The envelope of the approximately Gaussian distribution apparent in figure 1 can be seen. What is also noticeable is the roughness of the instantaneous distribution, with adjacent states often having markedly different occupancies. In this particular snapshot, the ground momentum state is the second most occupied state in the \(p_{z}=0\) plane. This is by no means atypical, as the results for maxocc and \(\langle N_{\bf 0}\rangle\) in table 1 confirm. In none of the snapshots from the simulations that have been examined for \(\rho\Lambda^{3}>\rho_{\rm c,id}\Lambda^{3}_{\rm c,id}=2.612\) is there any indication that a significantly greater number of bosons occupy the ground momentum state in preference to the nearby excited states. However, for \(T^{*}=0.55\) and \(\rho\Lambda^{3}=2.70\), the snapshots confirm that the ground momentum state is almost always the most occupied momentum state, with only two exceptions in twenty four snapshots noted. #### iv.1.4 Viscosity As a check of the present algorithm and computer program, a comparison was made with previous classical calculations of the Lennard-Jones viscosity. Previous classical non-equilibrium stochastic dissipative molecular dynamics (NESMD) simulations for a Lennard-Jones fluid (\(R_{\rm cut}^{*}=2.5\)) at \(T^{*}=2\) and \(\rho^{*}=0.8\) gave \(\eta^{*}=1.88(3)\)(Attard 2018a). At the same state point the present QSMD program in classical mode (ie. \(S^{\rm perm}({\bf p})\equiv 0\) and \(\nabla_{p}S^{\rm perm}({\bf p})\equiv{\bf 0}\)), with \(R_{\rm cut}^{*}=3.5\) gave \(\eta^{*}=2.16(15)\). Since the non-equilibrium algorithm is different to the equilibrium Green-Kubo approach, and since the two programs used to calculate the viscosity are unrelated to each other (except for the neighbor table subroutines), this confirms that the present program is free from error, at least in the classical parts of the computation. Figure 3 shows the viscosity time correlation function, equation (2.25), averaged over all six off-diagonal matrix elements. The viscosity must go to zero at \(t=0\), as can be seen directly from equation (2.25), or from the usual parity arguments based on time reversibility (Attard 2012 equation (2.60)). It can be seen that the classical viscosity rises rapidly to a rather broad maximum. The maximum value is 'the' viscosity. The curvature of the plateau region decreases with increasing system size, and it can be expected to become flat in the thermodynamic limit. In so far as hydrodynamics usually refers to steady state or slowly varying flows, the long-time limit of the shear viscosity time correlation function is the appropriate value, which is likely the maximum value for a finite-sized system when the thermodynamic limit is taken. In the present cases the maximum in the classical viscosity occurs for \(t^{*}\lesssim 2.0\). Because the curves are flat, missing the location of the maximum does not greatly affect the value estimated for the viscosity. The classical viscosities at \(T^{*}=0.65\) and \(T^{*}=0.60\) are insensitive to the two different values of the stochastic parameter \(\sigma^{*}\) that were used. Across the temperature range, it can be seen that the classical viscosity increases with decreasing temperature on the saturation curve. The quantum viscosities are more noisy than their classical counterparts despite the fact they they were generally run for longer. No doubt that this is due at least in part to the large fluctuations in the momentum state occupancy that occur in the quantum case. It is also likely due to the umbrella sampling. There are two contributions to the difference between the viscosities of the quantum and classical liquids. First is the quantum versus classical statistics, which amongst other things enhances the multiple occupancy of low-lying momentum states in the quantum case. Second is the decoherent quantum term, \(q_{iz}^{0,\rm qu}p_{xi}=-Tp_{xi}\nabla_{p,iz}S^{\rm perm}({\bf p})\), in the rate of change of the first momentum moment, equation (26). If this is set to zero, then the viscosity of the quantum liquid generally increases, and at lower temperatures it can actually become larger than the viscosity of the classical liquid. The quantum viscosity is obtained by integrating the time correlation function over the thermostatted, umbrella trajectory (ie. the one that invokes the continuous occupancy). There are two approximations here: First, and likely less serious, is using the thermostatted rather than the decoherent quantum trajectory. Second, is using the gradient of the continuous occupancy for the transitions at each step on the trajectory. No correction was made by invoking the product of the ratios of the exact to the umbrella transition probability along the trajectory. (Because the exact gradient of the permutation entropy would be required for the exact transition probability, and if it were numerically feasible to compute this there would be no need to use continuous occupancy or umbrella sampling.) Despite the noise in the quantum cases one can still draw some conclusions by comparing each with the corresponding classical case. At the highest temperature, \(T^{*}=0.70\), the quantum viscosity more or less coincides with the classical viscosity. It should be noted that even at this temperature, which is well above the ideal boson \(\lambda\)-transition (\(\rho\Lambda^{3}=1.76\) compared to \(\rho_{\rm c,id}\Lambda^{3}_{\rm c,id}=2.612\)), the present QSMD simulations, which neglect position permutation loops but retain pure momentum permutation loops, give a much higher occupancy for the ground momentum state in the quantum than in the classical case, (26(4) compared to 1.75(3), table 1). The occupancy of other low lying momentum states is similarly enhanced. This no doubt contributes in part to the reduced viscosity on small time intervals, which might be interpreted as evidence for superfluidity arising from a dilute mixture of condensed and uncondensed bosons. QLMC simulations show that including mixed position and momentum permutation loops suppresses condensation into the ground momentum state above the \(\lambda\)-transition temperature (Attard 2023 section 8.4). This appears to be the reason that the superfluid transition measured experimentally coincides with the \(\lambda\)-transition. These mixed loops are neglected in the present QSMD simulations, as are all position permutation loops. This explains why here there is some condensation into the ground and other low lying momentum states above the ideal boson \(\lambda\)-transition temperature, and the (apparently consequent) reduction in the viscosity. At \(T^{*}=0.65\), the quantum viscosity lies significantly below the classical velocity on short time intervals and at longer time scales it appears to level off below the classical value. The occupancy of the ground momentum state is 45(12) in the quantum case and 1.99(4) in the classical case (table 1). The occupancy of other low lying momentum states is also enhanced. At \(T^{*}=0.60\), the quantum viscosity lies significantly below the classical velocity for Figure 3: The viscosity for the quantum (solid and dash-dot curves) and the classical (dash curves) saturated liquids. The time step is \(\tau^{*}=10^{-4}\) except for the dash-dot curves where \(\tau^{*}=0.5\times 10^{-4}\). The stochastic parameter is \(\sigma^{*}=1.0\) (solid and long-dash curves), \(\sigma^{*}=0.5\) (medium-dash curves), or \(\sigma^{*}=0.2\) (dash-dot and short-dash curves). The dotted curves give the 95% confidence interval. the entire time interval shown. Toward the end of the time interval it appears to be rising or perhaps to be leveling off, but the statistical error is rather too large to be certain about this. Whereas at \(T^{*}=0.65\) the average maximum occupancy was greater than the occupancy of the ground momentum state (46(9) compared to 37(9) for \(\tau^{*}=10^{-4}\)), at \(T^{*}=0.60\) the two are about the same, (86(11) compared to 86(12) for \(\tau^{*}=10^{-4}\); 63(5) compared to 52(8) for for \(\tau^{*}=5\times 10^{-5}\)). This says that ground momentum state occupancy is not the only contributing factor to the reduction in viscosity. At \(T^{*}=0.55\), the quantum viscosity lies significantly below the classical viscosity over the entire time interval exhibited. In fact, for \(t^{*}\gtrsim 1\) the viscosity is zero within the statistical error. This probably should not be taken too literally; the liquid contains a significant fraction of bosons in few occupied momentum states, and one would expect the interactions amongst these to lead to a non-zero viscosity. It is not clear what will happen for time intervals larger than those studied, but one might guess that the quantum viscosity levels off to its value at the end of the short time interval. In summary one can conclude that at low temperatures the viscosity of the quantum liquid is less than that of the classical liquid, and the size of the decrement increases with decreasing temperature. There is no sharp transition in viscosity with temperature in the present QSMD simulations because they neglect position permutation loops and mixed permutation loops, which suppress boson condensation above the \(\lambda\)-transition (Attard 2023 section 8.4). Also, the Lennard-Jones pair potential contributes to the fugacity so that the condensation for the present interacting bosons is quantitatively different to that for ideal bosons. #### iv.1.5 Trajectory Figure 4 shows a randomly chosen trajectory of one of the bosons in the system at \(T^{*}=0.55\), which is below the ideal boson \(\lambda\)-transition. The evolution of the system is governed by the QSMD equations of motion with continuous occupancy, equation (18b) with \(\kappa=11\). The occupancy of the states visited by this boson is typically 5-20 on this portion of its trajectory, and it can therefore be called a condensed boson. Taking account as well of the \(y\)- and \(z\)-components of the momentum (not shown), for perhaps one quarter of the trajectory shown this condensed boson is in the ground momentum state. (The ground momentum state of the \(x\)-component is not necessarily the ground momentum state of the boson, as it may be in an excited momentum state of one or both of the other components.) The occupancy shown in figure 4A is the discrete one. It would be difficult to discern the continuous occupancy from this on the scale of the figure. Compared to the exact discrete occupancy, the root mean square error of the continuous occupancy is 4.8% over this trajectory. In this and the following figures, the velocity of the particle, defined as the change in position at each time step divided by the length of the time step, is compared to the momentum divided by the mass. In classical mechanics these would be equal. What is noticeable in the figure are the large spikes in velocity. These result from jumps in the position that are much larger than the prior classical changes. The spikes are more or less continuous, with each lasting on the order of \(10^{2}\) time steps (depending on the width), and comprising a sequence of moves in the same direction that first increases and then decreases in magnitude. (Usually the velocity during the spike does not change sign, in which case there is a nett jump away from the original trajectory.) Presumably these continuous spikes result from the continuous occupancy formulation, and in the limit \(\kappa\rightarrow\infty\) they would become \(\delta\)-functions (appendix C). Although the velocity spikes are clearly associated with the proximity to the boundary of a momentum state (figure 4B), the occupancy of the current state changes more frequently (figure 4A). This is for two reasons: First the spike often precludes a change in momentum state (figure 4B). And second, the changes in occupancy of Figure 4: The \(x\)-components of the trajectory of a condensed boson in phase space (\(T^{*}=0.55\), \(\tau^{*}=5\times 10^{-5}\), \(\sigma^{*}=0.2\), \(\kappa=11\) in equation (18b)). The solid curve is the velocity (ie. rate of change of position), the dashed curve is the momentum divided by mass (mostly obscured), and the dotted curve is the discrete occupancy of the momentum state of the boson. (B) is a magnification of a portion of (A), with the dotted lines marking the boundaries of the momentum states. a highly occupied state are most probably due to other bosons entering or exiting the momentum state. A portion of the trajectory is shown in detail in figure 4B. There is a deal of noise apparent in the velocity spikes on this scale, which perhaps indicates a smaller time step, or a smaller value of \(\kappa\), would have been efficacious. (The continuous occupancy using equation (18a) with \(\kappa=10\) gives roughly similar behavior, with the spikes perhaps being smaller in length but broader in width at the base. From the same starting position in phase space, the two trajectories visibly diverge after about \(t^{*}=0.02\).) One sees that the spikes occur when the momentum approaches the boundary of a momentum state. In most, but not all, cases the rate of change of momentum reverses sign during the spike, which means that the jump is along a position path such that the force on the particle passes through zero and reverses sign. In consequence, the particle either remains in, or returns to, the original momentum state. Nevertheless, it is possible to escape the ground momentum state, and the fluctuations in the occupancy of the state that the boson is currently in are quite large (figure 4A). Another point to be noticed in figure 4B is that the sign of the position jump usually has the opposite sign to the product of the current force and the difference in occupancies of the proposed less the current momentum state, assuming the most occupied states are those with less momentum, (cf. equation (17)). In figure 4B at around about \(t^{*}=0.14\) there is a spike during which the velocity changes sign. Perhaps this is better identified as two separate spikes of opposite sign. In any case the sequence signifies a jump away from the original position followed by a jump back to it. This coincides with going from a few occupied to a highly occupied state, as can be seen in figure 4A. This type of jump with velocity reversal in a single spike appears to comparatively rare for condensed bosons, but more common for uncondensed bosons (see below). It is overly simplistic to interpret every position jump as a pair collision. They are collisions only in the sense that they occur upon approach to the boundary of a momentum state, and a boson can only approach such a boundary if it experiences a nett force, which of course must come from molecular interactions. Figure 5 shows the \(x\)-components of the trajectory of a boson from a different run at the same temperature. The first excited negative \(x\)-momentum state shown has an occupancy of less than 5, as does the state prior to the first peak; the remaining ground and first positive excited \(x\)-momentum states have an occupancy of 10-30. The trajectory in figure 5A shows spikes near the boundaries of the momentum states. The spikes do not always preclude a change of state, but their sign does appear to be consistent with the discrete occupancy, equation (17). Spikes 4, 6, and 7 give a position jump to zero force, which prevents a change in the momentum state. Figure 5B shows the components of the actual position from which the velocity in figure 5A is derived. (The \(y\)- and \(z\)-components of position have been shifted to display them on the same scale as the \(x\)-component.) It can be seen that the curves are relatively smooth. The jumps are actually quite small on the scale of the Lennard-Jones \({}^{4}\)He diameter \(\sigma_{\rm LJ}\), which is used as the unit of length. The length of the jump is more or less given by the width of the spike in velocity. It can be seen that in two cases (1-3 and 5-6), the \(x\)-components of the jumps come in pairs that cancel so that the trajectory after the second jump is more or less the continuation of it prior to the first jump. These jumps occur close to two similar paired jumps in the \(y\)-component of the position trajectory. Since it is the boundary of the momentum state for each component that determines the jump, the extent of correlation between them would be expected to be determined by the magnitude of the nett force on the boson. From the molecular point of view the return jump can be understood by noting that the first jump leaves a cavity at the original position, and the subsequent mechanical forces act on the boson to refill it, thereby assuaging nature's abhorrence.The jumps in position in figure 5B are quite small, and it is not clear that a detailed molecular interpretation is called for. Figure 6 shows the trajectory of an uncondensed boson. In this case the occupancy of the momentum states that it visits is mainly 1-2, except toward the end of the trajectory shown. One can see that even for such an un Figure 5: (A) The \(x\)-components of the trajectory of a boson in phase space (state, parameters, and curves as in figure 4). (B) The components of the actual position over time, with the numbers giving the corresponding jumps. condensed boson there are spikes in the velocity, which are associated with approaching the boundary of a new momentum state (figure 6B). The spikes are narrower and have smaller magnitudes than for condensed bosons, which means that the jumps in position are smaller. Many spikes show a reversal of velocity, which effectively cancels the jump. In figure 6B the transitions are from a singly occupied to an empty momentum state. For most of this portion of the trajectory, the momentum is increasing, which means that the net force on the uncondensed boson is non-zero and positive. The position jumps more or less cancel each other during the velocity-reversal spikes and so the trajectory continues on as before; if one overlooks the spikes the trajectory is effectively classical, adiabatic, and continuous. This is an important observation because the classical limit is the limit where all momentum states are either singly occupied or else empty. Hence an uncondensed boson such as this should follow a classical trajectory. It is reasonable to suppose that in the discrete occupancy limit, \(\kappa\rightarrow\infty\), the two spikes that comprise a velocity-reversal become two equal, opposite, and superimposed \(\delta\)-functions, the cancelation of which leaves a pure classical trajectory. In appendix C it is shown that for discrete occupancy there are no spikes or jumps upon the transition between momentum states for uncondensed bosons. Figure 7 shows a collision between a condensed boson (\(j=816\), in a state with occupancy \(\approx 35\) at \(t^{*}\approx 0.31\)) and an uncondensed boson (\(k=796\), occupancy \(=1\)). The condensed boson was in a low-lying momentum state occupied by about 40 bosons, but it was not the ground momentum state. Although the trajectory is affected by the interaction potentials with other bosons in the vicinity, since the Lennard-Jones pair potential becomes steeply repulsive for separations \(q_{jk}^{*}\leq 2^{1/6}=1.12\), one can say that the jump at \(t^{*}\approx 0.31\) is in response to the pair collision between the two bosons. The \(z\)-separation is small and constant over the time period shown, the \(y\)-separation is larger and shows a jump, and the \(x\)-separation is small and decreasing. Hence one can say that the collision occurs in the \(xy\)-plane, with the relative motion in the \(x\)-direction with a lateral offset (ie. impact parameter) in the \(y\)-direction. Figure 7B shows that the collision at \(t^{*}\approx 0.31\) does not change the \(y\)-momentum state of the condensed boson. This is due to the jump in that direction that gives a larger separation that passes through zero force, turning it from repulsive to attractive. A schematic based on this'side-step' collision is given in subsection IV 5 below, where it is used to explain the molecular basis of superfluidity. Figure 7 was obtained using equation (18b) with \(\kappa=11\). The run was repeated from the same starting Figure 6: The \(z\)-component of the trajectory of an uncondensed boson in phase space (state, parameters, and curves as in figure 4). (B) shows a detail of the trajectory during which the boson is only ever in singly occupied momentum states. Figure 7: Collision of a condensed boson \(j\) and an uncondensed boson \(k\) (state and parameters as in figure 4). (A) Components of the separation, \(\mathbf{q}_{jk}=\mathbf{q}_{\bar{\nu}}-\mathbf{q}_{k}\). (B) \(y\)-component of the velocity (solid curve) and the momentum divided by mass (dashed curve) of the condensed boson \(j\). point with the same sequence of pseudo-random numbers using equation (18a) with \(\kappa=10\). The two trajectories for the same pair of bosons diverged noticeably for \(t^{*}\gtrsim 0.02\). Using a different pair of bosons (\(j=841\) and \(k=517\), which of all pairs had the smallest separation one fifth of the way through the run), which also happened to be a condensed/uncondensed pair (the condensed boson was in a low-lying momentum state occupied by about 30 bosons, but it was not the ground momentum state), it was found that a qualitatively similar side-step collision occurred (results not shown). This shows that the condensed boson side-step collision mechanism on the decoherent quantum trajectory is rather common, and that it is quite robust and independent of the specific continuous occupancy representation. The above trajectories can be understood from the conservation of entropy by the decoherent quantum equations of motion. This can be rearranged as \[\dot{S}^{0} = \dot{\mathbf{\mathbf{\Gamma}}}^{0}\cdot\nabla S \tag{31}\] \[= -T\nabla_{p}[S^{r}+S^{\mathrm{perm}}]\cdot\nabla_{q}[S^{r}+S^{ \mathrm{perm}}]\] \[\quad+T\nabla_{q}[S^{r}+S^{\mathrm{perm}}]\cdot\nabla_{p}[S^{r}+ S^{\mathrm{perm}}]\] \[= \big{\{}-T\nabla_{p}S^{r}\cdot\nabla_{q}S^{r}+T\nabla_{q}S^{r} \cdot\nabla_{p}S^{r}\big{\}}\] \[\quad+\big{\{}-T\nabla_{p}S^{\mathrm{perm}}(\mathbf{p})\cdot \nabla_{q}S^{r}\] \[\quad+T\nabla_{q}S^{r}\cdot\nabla_{p}S^{\mathrm{perm}}(\mathbf{ p})\big{\}}\] \[= \frac{-1}{T}\dot{\mathbf{q}}^{0,\mathrm{qu}}\cdot\nabla_{q}U( \mathbf{q})+\dot{\mathbf{p}}^{0,\mathrm{cl}}\cdot\nabla_{p}S^{\mathrm{perm}}( \mathbf{p}).\] This is obviously zero already at the second equality. But it is interesting to separate out the classical adiabatic change of reservoir entropy in the first set of braces in the third equality, which itself is zero, and to rewrite the second set of braces as the sum of the quantum change in potential energy and the classical change in permutation entropy. This shows that the quantum spike, \(\dot{\mathbf{q}}^{0,\mathrm{qu}}\), gives a jump rate in position that changes the potential energy and hence the reservoir entropy to exactly cancel the change in permutation entropy that is induced by the classical adiabatic force, \(\mathbf{f}^{0}=\dot{\mathbf{p}}^{0,\mathrm{cl}}\). ## IV Conclusion #### iv.0.1 Limitations of the Present Results Several approximations and simplifications limit the quantitative reliability of the present simulations. First is the reliance upon the Lennard-Jones pair potential, which is exact for the long range attraction, but which approximates the functional form and strength of the short range repulsion. Second, is the neglect of many body interactions, of which the leading order Axilrod-Teller three-body potential is predominantly repulsive at short-range. Third is the neglect of the commutation function, which is short-ranged and repulsive. The commutation function ultimately reflects the Heisenberg uncertainty principle and the zero point energy, and it means that at low temperatures \({}^{4}\)He remains a liquid where otherwise it would be expected to be a solid. Whereas Bose-Einstein condensation is a non-local phenomenon that is insensitive to short-range effects, the viscosity itself depends on short-range interactions. In consequence the present simulations are approximate in that they neglect these three effects; the change in viscosity in going from the classical to the quantum liquid is more reliable than the individual values. A related point is that the saturation liquid density used in the present QSMD simulations is the value for Lennard-Jones \({}^{4}\)He obtained from classical Monte Carlo simulations (Attard 2023 table 8.3). The fourth limitation of the present simulations is that they invoke only pure momentum loops. Although exact for ideal bosons. for the more realistic case of interacting bosons, mixed permutation loops, where the bosons can be in different momentum states, contribute significantly on the high temperature side of the \(\lambda\)-transition. Pure permutation loops can be expected to dominate on the low temperature side of the \(\lambda\)-transition (Attard 2023 section 9.3). The \(\lambda\)-transition and Bose-Einstein condensation for ideal bosons occurs for \(\rho_{\mathrm{c,id}}\Lambda_{\mathrm{c,id}}^{3}=2.612\) (F. London 1938, Pathria 1972, Attard 2022, 2023). The present results for saturated liquid Lennard-Jones \({}^{4}\)He span the ideal boson transition and end at \(\rho\Lambda^{3}=2.70\) at \(T^{*}=0.55\). The present treatment of wave function symmetrization is that of ideal bosons, and the thermodynamic state point of that treatment includes the regime of Bose-Einstein condensation for ideal bosons. A fifth limitation is the calculation of the viscosity by integrating the time correlation function over the continuous occupancy trajectory. Instead one should weight each time step on the trajectory by the ratio of the exact to the umbrella transition weight. Unlike the umbrella sampling of the phase space points themselves where the exact probability can be calculated from the exact occupancy and permutation entropy, the exact transition probability requires the gradient of the permutation entropy, which is problematic to obtain (appendix D). The continuous occupancy and umbrella sampling method poses statistical limitations. The quantum liquid is more challenging than the classical liquid, and the statistical error is much larger for a given total time. Also the length of the time step needs to be smaller to solve the decoherent quantum trajectory accurately. It appears that the numerical procedures are quite sensitive to the functional form and to the value of the parameters used for the continuous occupancy. Although the average values are not affected by the chosen form and parameter values for the continuous occupancy, the length of the simulation required to produce reliable results is. #### iv.0.2 New Concepts Previous work has shown that entanglement with the environment or reservoir decoheres the subsystem wave function and in a formally exact sense enables the quan tum state of an open system to be characterized by simultaneous values of the particles' positions and momenta, summing over which gives statistical averages (Attard 2018b, 2021, 2023). A similar decoherence has been obtained with the environmental selection approach that has been applied to the quantum measurement problem (Zeh 2001, Zurek 2003, Schlosshauer, 2005). The main differences are that the phase space analysis is in the context of quantum statistical mechanics and it provides an actual probability distribution. The classical phase space formulation is a form of quantum realism. The major conceptual contribution of the present paper is the decoherent quantum equations of motion, \(\dot{\mathbf{\Gamma}}^{0}=-T\nabla^{\dagger}S(\mathbf{\Gamma})\). This reduces to Hamilton's equations in the classical limit, and makes a phase space average equals the simple time average over the trajectory. The decoherent quantum equations of motion ensure the constancy of the exact equilibrium probability density of the open quantum system. The added stochastic dissipative thermostat confers stability to the evolution (appendix A). The existence of molecular trajectories (section III.5) is a challenge to conventional quantum mechanics. The Copenhagen interpretation says that in a closed system the position and momentum of a particle do not exist between measurements, and that there is no such thing as a particle trajectory. The state of the closed subsystem is believed to come into existence only when it is observed. The Copenhagen interpretation is an anti-realist theory. The present analysis and calculations are for an open quantum subsystem. The exchange of energy with the heat reservoir entangles the wave function and causes it to collapse. This allows the state of the subsystem to be specified by a point in classical phase space (Attard 2018b, 2021, 2023). The present trajectories are a result of the equations of motion that correspond to a transition probability that preserves the equilibrium probability distribution during the evolution of the subsystem. This is a fundamental requirement of probability theory (Attard 2012). The specification of the state of the subsystem as a point in classical phase space, and the existence of a trajectory, means that this treatment of an open quantum system may be classified as a realist theory: The state of the subsystem exists whether or not an observer makes a measurement. A consequence of the present decoherent quantum equations of motion is that the velocity does not equal the momentum divided by the mass when changes in momentum state occupancy are involved. Changes in occupancy can be precluded by jumps in position. In the present calculations these jumps are exceedingly small compared to the size of \({}^{4}\)He. The jumps are likely to be larger and more abrupt in the discrete occupancy limit, \(\kappa\rightarrow\infty\) (cf. appendix C). The spikes containing velocity reversal cancel in the classical limit, but in the quantum regime step jumps certainly will not. Perhaps one might regard the jumps as a remnant of the underlying quantum nature of a closed subsystem, which, according to the Copenhagen interpretation, precludes a continuous trajectory. Although the decoherent quantum equations of motion are deterministic, the jumps would appear to be random in comparison to the smooth classical trajectory. In the present results the classical limit emerges naturally from the decoherent quantum equations of motion. In the classical extreme the momentum state occupancy is only ever 0 or 1, the velocity reversal spikes predominate, and, since the two components of such jumps have opposite sign, in the discrete occupancy limit, \(\kappa\rightarrow\infty\), these likely become superimposed and cancel so that they do not actually effect the trajectory. This was found analytically to be the case for discrete occupancy (appendix C). That is, for uncondensed bosons the change in a momentum state occurs without even a speed bump. Hence in the classical limit the boson decoherent quantum trajectory becomes continuous, and Hamilton's adiabatic classical equations of motion emerge. #### iv.1.3 Algorithmic Advances Two practical problems had to be solved in order to implement the present decoherent quantum equations of motion on a computer, both for the present study of \({}^{4}\)He, and for the simulation of open quantum systems more generally. The first was to combine the continuum momenta of classical phase space with the discrete momentum eigenvalues of the quantum system. The second was to provide a viable procedure that gives the gradient of the permutation entropy. The discrete momentum eigenvalues are necessary to obtain the momentum state occupancy and hence the permutation entropy for quantum statistics. The second problem was solved by defining a continuous form of occupancy with a computable gradient, which was treated as a form of umbrella sampling over the trajectory from which the exact phase space averages could be obtained. Combining the momentum continuum with the quantized eigenvalues gives the transitions between momentum states. With this and the decoherent quantum equations of motion one has the generalization of Schrodinger's equation to an open quantum system. The quantum stochastic molecular dynamics (QSMD) simulation method enables the description of macroscopic quantum condensed matter systems at the molecular level. The continuous occupancy presents both a challenge and an opportunity for further developments. The decoherent quantum equations of motion formally apply to fermions with the permutation entropy replaced by the Fermi exclusion principle. An efficient form of umbrella sampling lies in the details. #### iv.1.4 Results for Liquid Helium Beyond the quantum statistical theory and computational algorithms developed here lie the specific results obtained for liquid helium at temperatures encompassing the \(\lambda\)-transition. Perhaps the most significant of these are the results for the viscosity. As far as I am aware these are the first realistic results for the superfluidity of interacting bosons. I do not count the phenomenological two-fluid theory of Tisza (1938), which uses F. London's (1938) analysis of Bose-Einstein condensation based on ideal (non-interacting) boson statistics. Nor do I count Landau's (1941) phonon-roton phenomenological theory, which is based solely on the first excited energy state. This might perhaps be relevant at 1 fK, but not at the temperatures of the \(\lambda\)-transition where superfluidity has actually been measured. Landau never accepted Bose-Einstein condensation as the origin of the \(\lambda\)-transition or of superfluidity, and so it is a little hard to understand how anyone can simultaneously hold to be true both Bose-Einstein condensation and Landau's theory of superfluidity. The present results show that the reduction in the viscosity from its classical value coincides with the condensation of bosons into multiple multiply-occupied low-lying momentum states. Defining uncondensed bosons to be those in few-occupied momentum states, and condensed bosons to be those in highly occupied momentum states, one could perhaps argue for a form of Tisza's (1938) two-fluid theory, namely a linear combination of zero viscosity of the condensed bosons and the non-zero classical value of the uncondensed bosons. However, like Feynman (1954) I see this as unrealistic since there is no sharp delineation between the two. The present results for a spectrum of momentum state occupancies underscore the difficulty in turning Tisza's (1938) two-fluid theory into something with quantitative predictive value. One thing that is certain is that the ground momentum state does not play the dominant role in superfluidity that has hitherto been assumed. The present results show a reduction in viscosity at temperatures close to the \(\lambda\)-transition temperature even though the ground momentum state is not the sole highly occupied state, or even at every instant the most occupied state. Furthermore the occupancy of the ground momentum state does not become macroscopic in the thermodynamic limit. Since superfluidity can be observed and measured in the laboratory it must be a macroscopic phenomenon. The number of condensed bosons by the present definition becomes macroscopic in the thermodynamic limit, whereas the number in the ground momentum state does not. #### iii.1.5 Molecular Interpretation of Superfluidity And now to Landau's objection to Bose-Einstein condensation as the origin of superfluidity: 'nothing would prevent atoms in a normal state from colliding with excited atoms, ie. when moving through the liquid they would experience a friction and there would be no superfluidity at all' (Landau 1941). The fundamental conceptual conclusion that can be drawn from the present equations of motion is that the total entropy is conserved on the decoherent quantum trajectory. This means that any change in permutation entropy in a collision must be compensated by an equal and opposite change in the reservoir entropy. (The decoherent quantum equations do _not_ maximize the entropy.) If an upcoming change in permutation entropy, on, say, a classical trajectory, would be so large that it could not be so compensated, than the trajectory itself must shift to avoid the collision. The quantum term in the decoherent quantum equations of motion (12), \(\dot{\bf q}^{0,{\rm qu}}=-T\nabla_{p}S^{\rm perm}({\bf p})\), has the effect of eliminating the shear viscosity in the condensed regime, as the following argument shows. Consider the glancing collision between two repulsive particles (figure 8). The sketch is based on the trajectory computed in figure 7 where the uncondensed boson is little affected by the collision (not shown). Suppose that the collision occurs in superfluid flow from left to right, with the highly occupied momentum states being highly aligned and clustered about some non-zero magnitude. On approach of the pair close enough to feel the repulsive part of the interaction potential, the classical adiabatic part of the collision attempts to rotate the momentum of the condensed boson. But this would knock it into a few-occupied state, thereby reducing the permutation entropy. Instead the decoherent quantum term increases the collision offset (ie. impact parameter) such that the distance of closest approach is larger than would occur on the classical trajectory. Prior to the lateral jump the lateral component of the repulsive force causes the boson to approach the lateral momentum boundary, the crossing of which would change the permutation entropy. The jump to a region of zero force precludes the change in momentum state and conserves the entropy. This description of zero change in permutation entropy Figure 8: Schematic based on the trajectory in figure 7 of a collision in the \(xy\)-plane between a condensed boson (filled) and an uncondensed boson (empty) in the classical case (dashed curves, ballistic collision) and in the quantum case (solid curves, side-step collision). while the condensed particle follows a trajectory of zero force is a simplified picture that likely holds when the change in momentum state would lead to a large change in permutation entropy. This is confirmed by many of the jumps shown in figures 4-7, which do not result in a change in momentum state because the jump reverses the sign of the force. The decoherent quantum equations of motion show that in an open quantum system the change in position can occur independently of the momentum. The decoherent quantum term can do this because it is not bound by the classical notion that the rate of change of position must equal the momentum divided by the particle mass. In other words, the velocity of the quantum trajectory in position space is not equal to the momentum divided by mass when changes in permutation entropy are involved. After the collision, which is time-reversible, the condensed boson remains within the envelope of highly occupied momentum states for the superfluid flow. The fact that the distance of closest approach is larger in the quantum case than in the classical case means that the transfers of momenta longitudinally and laterally are likely zero. In figures 7 and 8, the lateral momentum state of the condensed boson does not change in the collision, and since total momentum is conserved on a decoherent quantum trajectory, neither does that of the uncondensed boson. Since the shear viscosity is a direct measure of longitudinal momentum transferred laterally, it must be zero in the quantum condensed regime. This answers Landau's objection. It explains at the molecular level how a collision between a condensed and an uncondensed boson does not result in the lateral transfer of longitudinal momentum, and why the shear viscosity vanishes. It shows, qualitatively, why Bose-Einstein condensation gives rise to superfluidity. The quantitative molecular measure of the effect is provided by the viscosity given by the present quantum stochastic molecular dynamics algorithm.
2305.01419
Cryogenic payloads for the Einstein Telescope -- Baseline design with heat extraction, suspension thermal noise modelling and sensitivity analyses
The Einstein Telescope (ET) is a third generation gravitational wave detector that includes a room-temperature high-frequency (ET-HF) and a cryogenic low-frequency laser interferometer (ET-LF). The cryogenic ET-LF is crucial for exploiting the full scientific potential of ET. We present a new baseline design for the cryogenic payload that is thermally and mechanically consistent and compatible with the design sensitivity curve of ET. The design includes two options for the heat extraction from the marionette, based on a monocrystalline high-conductivity marionette suspension fiber and a thin-wall titanium tube filled with static He-II, respectively. Following a detailed description of the design options and the suspension thermal noise (STN) modelling, we present the sensitivity curves of the two baseline designs, discuss the influence of various design parameters on the sensitivity of ET-LF and conclude with an outlook to future R&D activities.
Xhesika Koroveshi, Lennard Busch, Ettore Majorana, Paola Puppo, Piero Rapagnani, Fulvio Ricci, Paolo Ruggi, Steffen Grohmann
2023-05-02T13:48:50Z
http://arxiv.org/abs/2305.01419v2
Cryogenic payloads for the Einstein Telescope - Baseline design with heat extraction, suspension thermal noise modelling and sensitivity analyses ###### Abstract The Einstein Telescope (ET) is a third generation gravitational wave detector that includes a room-temperature high-frequency (ET-HF) and a cryogenic low-frequency laser interferometer (ET-LF). The cryogenic ET-LF is crucial for exploiting the full scientific potential of ET. We present a new baseline design for the cryogenic payload that is thermally and mechanically consistent and compatible with the design sensitivity curve of ET. The design includes two options for the heat extraction from the marionette, based on a monocrystalline high-conductivity marionette suspension fiber and a thin-wall titanium tube filled with static He-II, respectively. Following a detailed description of the design options and the suspension thermal noise (STN) modelling, we present the sensitivity curves of the two baseline designs, discuss the influence of various design parameters on the sensitivity of ET-LF and conclude with an outlook to future R&D activities. ## I Introduction The Einstein Telescope (ET) is a third generation gravitational wave (GW) detector with a xylophone design, combining a low-frequency (LF) and a high-frequency (HF) laser interferometer. Sensitivities lie in the range of 3 Hz to 30 Hz (ET-LF) and 30 Hz to 10 kHz (ET-HF), respectively. The low-frequency sensitivity is crucial for exploiting the full scientific potential of ET, in particular with regard to: * the observation of binary neutron stars (BNS), staying long time in the bandwidth, * pre-merger detection to probe the central engine of gamma ray bursts (GRB), particularly to understand the jet composition, the particle acceleration mechanism, the radiation and energy dissipation mechanisms, * detecting a large number of kilonovae counterparts, * detecting primordial black holes (PBH) at redshifts \(z>30\), and * detecting intermediate massive back holes (IMBH) in the range of \(10^{2}-10^{4}\,M_{\odot}\)[1]. Figure 1 shows the noise contributions to the sensitivity curve ET-D [2], based on payload design parameters listed in Table 1. Cryogenic operation of the payload is indispensable to suppress the suspension thermal noise (STN) to the level of gravity gradients, i.e. Newtonian noise (NN). Both STN and NN are the fundamental noises that dominate the ET-LF noise budget at frequencies below 10 Hz. The technical implementation of the parameters in Table 1 is not straightforward [3; 4]. Therefore, in this paper we develop a baseline design of a cryogenic payload for ET-LF, which is consistent in terms of mechanical and thermal design as well as STN modelling. It shall serve as a stepping stone for the cryostat design and for future payload design optimization, rather than assuming it "final". The focus of this paper is purely on the payload, not yet including the impact of co Figure 1: ET-LF noise contributions in the ET-D sensitivity curve [2]. is a subject of future R&D. Section II introduces the baseline cryogenic payload design for ET-LF with two heat extraction concepts, which are further explained in Sections III and IV. This is followed in Section V by a detailed description of the STN modelling. Section VI then presents the sensitivity curves of the baseline designs. The influence of various design parameters on the sensitivity of ET-LF is analyzed in Section VII, before main conclusions and an outlook to future R&D activities are presented in Section VIII. ## II Baseline design of a cryogenic payload for ET-LF ### Overall operating conditions ET-LF shall be operated with a 1550 nm wavelength laser at an arm power of 18 kW. The baseline material for the mirror and its suspension fibers is monocrystalline silicon. An alternative material for the mirror and its suspensions is sapphire. The operating temperature of the mirror is between 10 K and 20 K. While 0.1 W heat load have been estimated in [2; 5], we re-define an engineering design target of \[\dot{Q}=0.5\,\mathrm{W} \tag{1}\] total heat load on the ET-LF payload, considering the size and complexity of the cryostat and including the need for optical access. This value entails a thermal safety margin and compares to a range of \(0.5-1.0\,\mathrm{W}\) that KAGRA assumes on its cryogenic detectors (test mass of 23 kg), partially caused by a higher absorption in its sapphire mirrors [6]. ### Conceptual design of the payload The baseline design of the ET-LF cryogenic payload is derived from the double pendulum layout of the Advanced Virgo (AdVirgo) payload [7]. It is depicted in Fig. 2 and includes the following main components: * _Platform (PF)_, from which the marionette and the cage are suspended separately, using a single suspension for the marionette and three suspensions for the cage, respectively. The platform is the first stage inside the ET-LF cryostat volume, being suspended from a warm super-attenuator system. * _Marionette (MA)_, which coordinates the position of the mirror via four monocrystalline silicon suspension fibers. These suspensions are connected to so-called mirror ears that are attached via hydroxide catalysis bonding (HCB) onto the sides of the mirror. * _Actuation cage (CA)_, which serves as a reaction mass for both the mirror and the marionette. In addition, various sensory devices are installed on this robust structure to avoid a direct contact with the sensitive optics. In AdVirgo, the cage is rigidly attached to the PF, whereas a suspended cage is proposed here. * _Mirror/Test mass (MI)_, which constitutes the core optical element of the interferometer. The design of the cryogenic payload must consider thermal and mechanical feasibility, while fulfilling a low STN contribution compatible with the ET-D sensitivity curve [2]. The correct implementation of interfaces, temperature gradients and mechanical safety factors is essential. Further design aspects include the cryostat dimensions and space requirements for installation and auxiliary systems, the fabrication of long and high quality monocrystalline suspension fibers and the achievable marionette temperature based on the cooling concept. The AdVirgo payload operating at room-temperature has a MA made of 316L stainless steel [7]. In the ET-LF cryogenic payload, the material for the MA remains to \begin{table} \begin{tabular}{l c c c} \hline \hline & Marionette & Recoil mass & Mirror \\ \hline Mass (kg) & 422 & 211 & 211 \\ \hline Supension length (m) & 2 & 2 & 2 \\ Supension diameter (mm) & 3 & 3 & 3 \\ \hline Supension material (-) & Ti6Al4V & Silicon & Silicon \\ Loss angle (-) & \(1\times 10^{-5}\) & \(1\times 10^{-8}\) & \(1\times 10^{-8}\) \\ Temperature (K) & 2 & 10 & 10 \\ \hline \hline \end{tabular} \end{table} Table 1: ET-LF payload design parameters from [2], using a branched pendulum model as in Virgo. Figure 2: Baseline design of the ET-LF cryogenic payload based on AdVirgo’s double pendulum design. be decided. In addition to mechanical functionality, the choice is influenced by thermal aspects, i.e. the transient cool-down behavior and the achievable temperatures in steady-state operation. Therefore, aluminum alloys offer an alternative material choice for the MA. The combination of various constraints yields the baseline design parameters listed in Table 2. While the mirror suspensions are generally made of monocrystalline silicon or sapphire fibers, we propose two alternative concepts for the heat extraction and the marionette suspension, respectively. The first concept presented in Sec. III relies on a cooling interface on the CA and the PF, requiring a monocrystalline high-conductivity marionette suspension made of silicon or sapphire as well. The second concept uses a thin-wall titanium tube as marionette suspension that is filled with superfluid He-II. This concept provides cooling at 2 K down to the marionette and is explained in Sec. IV. In both heat extraction concepts, the cooling interface design will affect the sensitivity. These designs are not yet advanced enough to be included in the scope of this paper. ## III Concept with Monocrystalline Marionette suspension ### Motivation The initial conceptual payload design with the parameters in Table 1 is based on a heat extraction interface on the marionette, which is thermally insulated from the platform via a low-conductivity Ti6Al4V suspension [2]. STN computations [35; 36] reveal that the cooling interface must be implemented on the CA and passed to the PF, as the direct connection of any high dissipation cooling path on the marionette would critically affect the thermal noise. Hence, the suspension material must induce low STN and provide high thermal conductivity and mechanical strength. Al6N can be used to connect the payload to the cryogenic system, but is not an option to suspend the marionette due to a very low yield strength [37], which is more than one order of magnitude smaller compared to crystalline silicon or sapphire. The low STN requirement is achieved for crystalline sapphire and silicon at low temperatures thanks to their high quality factor \(Q\)[38], where \(Q\) is the inverse of the loss angle \(\phi\) at resonance [39; 40], cf. Sec. V. The overall results from the computations converge in the assessment that the marionette suspension mechanics must assure low thermal noise, i.e. mechanical dissipation, in order to preserve ET-LF sensitivity goals. The most advanced toy-model takes into account the presence of soft thermal links similar to those implemented in KAGRA [41; 42], connecting the CA to the thermal shield, combined with a heat extraction through the PF via a crystalline marionette suspension fiber with high \(Q\) and high thermal conductivity. Figure 3 depicts the examined thermal heat link interface possibilities on the cryogenic payload. A simple double stage payload with sapphire marionette and mirror suspensions is used as a reference. In the simplest case (left), the thermal link to the cryogenic system is modelled by a connection to the MA, showing up critical impact on the predicted sensitivity. A more realistic case (right) adopts a connection onto the CA and PF, ensuring sustainable mechanics of the thermal link connection. Figure 4 shows the effect of the soft thermal link on the STN for the cases depicted in Fig. 3. Imposing a nominal assumption for the STN roughly comparable with that of ET-LF, it can be realized that the link should be connected far from the MI. Figure 4 demonstrates that KAGRA-like thermal links must be connected to \begin{table} \begin{tabular}{l c c c|c c} & & Marionette & & & Mirror \\ Cooling concept & Monolithic & Monolithic & He-II filled & Silicon & Sapphire \\ \hline Mass (kg) & 200 & 220 & 200 & 200 & 220 \\ Suspension length (m) & 1.0 & 1.0 & 1.0 & 1.2 & 1.2 \\ Suspension diameter (mm) & 8.1 & 6.5 & 8.3 & 3.0 & 2.3 \\ Suspension material (-) & Silicon & Sapphire & Ti, He-II & Silicon & Sapphire \\ Bulk loss angle (-) & \(1\times 10^{-9}\) & \(3\times 10^{-9}\) & \(1\times 10^{-6}\) & \(1\times 10^{-9}\) & \(3\times 10^{-9}\) \\ Temperature (K) & 15 & 17 & 2 & \(15\ldots 20\) & \(20\ldots 23\) \\ \end{tabular} \end{table} Table 2: Baseline design parameters of the ET-LF payload, including two marionette cooling concepts. Figure 3: Schemes of thermal link connection possibilities onto the cryogenic payload. the cage, being the minimum distance from the mirror in order not to compromise the STN. The criticality is reached with the parameters mentioned in the caption of Fig. 4. The computation is analytic, but FEA modelling provides similar results [4],[43]. A secondary but yet significant issue is the seismic thermal noise injection and the drag through the thermal link, which has been estimated with a similar schematic as in Fig. 4[35]. The outline of the solid conduction cooling through thermal links is that in order to reduce the mechanical thermal noise induced from the link, it must be connected to the CA and cannot reach the MA. The implementation is feasible through a careful mechanical and thermal design of the payload in order to operate at \(T_{\rm MI}\approx 20\,\)K. The monocrystalline-based concept in this paper assumes using the same material (silicon or sapphire) for the mirror, the mirror suspensions and the marionette suspension. Nonetheless, also a hydrid monocrystalline suspension application is being analyzed [44; 45]. ### Mechanical dimensioning The marionette suspension is dimensioned for the total mechanical load of the MA and the MI, considering a safety factor SF = 3 with regard to the ultimate strength \(\sigma_{\rm max}\). The material properties listed in Table 3 for sapphire and silicon yield the dimensions in Table 2, which are used in the STN modelling in this paper. For sapphire, a very conservative value of 400 MPa is assumed, based upon [15]. Significantly higher values of breaking strength at low temperatures, spread in the range of 1000 MPa to 2600 MPa, have been recently measured [46] and certified using samples produced in Japan by Shinkosha [47] and machined from a single ingot. ### Thermal behavior The thermal behavior depends on the thermal conductivity in the range of 10 K to 30 K. In thin suspension fibers, the phonon boundary scattering may significantly reduce the bulk conductivity [48]. Thermal conductivity data for high-purity monocrystals and monocrystalline fibers of silicon and sapphire are therefore compared in Fig. 5. In silicon fibers, a marginal reduction of thermal conductivity is visible [49], whereas a significant reduc Figure 4: Impact on the STN due to a direct connection of a 1 m thermal link (made of 28 braids, each composed by 49 Al6N wires with \(d=150\,\)μmm [41] and assuming \(\phi_{\rm TL}=0.5\)) on the MA and CA, respectively. \begin{table} \begin{tabular}{l r r|r r r} \hline \hline & Silicon & Sapphire & Ti6Al4V & Titanium & Al50561 \\ \(T\) (K) & 20 & 20 & 2.0 & 2.0 & 2.0 \\ \hline \(\phi_{\rm bulk}\) (-) & \(1\times 10^{-9}\)[8]1 & \(3\times 10^{-9}\)[10] & \(1\times 10^{-4}\)[11]1 & \(1\times 10^{-9}\)[12] & \(2.5\times 10^{-8}\)[13] \\ \(\sigma_{\rm y}\) (MPa) & 230 [14]2 & 400 [15]3 & 1600 [16] & 1200 [16] & 280 [16] \\ \(\lambda(T)\) (W/m/K) & 4940 [17] & 6000 [18] & 0.22 [19] & 2.5 [19] & 2.0 [20] \\ \(c_{\rm p}(T)\) (J/kg/K) & 3.40 [21] & 0.69 [22] & 0.01 [19] & 0.12 [19] & 0.10 [23] \\ \(\alpha(T)\) (1/K) & \(-2.9\times 10^{-9}\)[24] & \(1.3\times 10^{-8}\)[25] & \(6.0\times 10^{-6}\)[26] & \(5.5\times 10^{-8}\)[26] & \(14\times 10^{-6}\)[16] \\ \(\beta\) (1/K) & \(-7.9\times 10^{-6}\)[27]1 & \(-4.4\times 10^{-6}\)[28]1 & \(-4.6\times 10^{-4}\)[29]1 & \(-4.6\times 10^{-4}\)[29]1 & \(1.2\times 10^{-4}\)[30] \\ \(E\) (GPa) & 130 [31]1 & 360 [32]1 & 127 [33]1 & 130 [16] & 81 [30] \\ \(\rho\) (kg/m3) & 2330 [34] & 3980 [34] & 4540 [19] & 4540 [19] & 2660 [19] \\ \(\alpha_{\rm surf}\) (m) & 5 \(\times 10^{-13}\)[9] & \(5\times 10^{-13}\)[8] & 0.0 & 0.0 & 0.0 \\ \hline \hline \end{tabular} \end{table} Table 3: Physical properties of silicon and sapphire at 20 K and metals at 2 K. Some of the indicated references comprise temperature dependencies, which are included in the STN model presented in the Sections VI and VII. tion is reported for sapphire fibers [18]. Further thermal conductivity measurements of silicon and sapphire fiber samples are planned within future R&D activities. The nominal heat load from Eq. (1) together with the dimensions in Table 2 and the thermal conductivity data of monocrystalline fibers according Fig. 5 yield temperature gradients along the marionette suspension of \(\Delta T_{\rm ma}=3\,\)K for silicon and \(\Delta T_{\rm ma}=5\,\)K for sapphire, respectively. ## IV Concept with He-II filled Marionette suspension tube ### Motivation for using He-II Cryogenic fluids have been extensively used to operate the second generation of resonant GW detectors [53] and later proposed for cooling the test masses of the GW interferometers [54]. The use of He-II is motivated by the exceptional properties of superfluid helium, rather than its temperature around \(2\,\)K. The abundant \({}^{4}\)He isotope can exist in two liquid forms, separated by the \(\lambda\)-line depicted in Fig. 6. While liquid helium at \(T>T_{\lambda}\) (called He-I) exhibits normal fluid behavior, it becomes a quantum fluid (called He-II) at \(T<T_{\lambda}\) when fractions of the atoms condense in the ground-state as a Bose-Einstein condensate [55; 56]. The He-II is composed of a normal and a superfluid component, as described by the two-fluid model [57; 58]. The second-order phase transition from He-I to He-II is associated with dramatic property changes. Particularly relevant is the exceptional increase in thermal conductivity, yielding a thermal reservoir to absorb and conduct heat in the quietest possible manner. This property enables the concept of heat extraction from the ET-LF payload via a static He-II column inside a thin-wall marionette suspension tube. For the conditions given in Fig. 5, He-II can exceed the thermal conductivity of high-purity sapphire or silicon by at least one order of magnitude. This concept provides a temperature of \(2\,\)K at the marionette, which is an essential parameter to reduce the STN as discussed in Sec. VI. Related to the thermal conductivity, the quantum fluid properties may imply that thermal and mechanical dissipation in the static He-II column is very low, and that momentum transfer to/from the suspension tube may not take place due to superfluidity. These hypotheses, however, require experimental validation, as the integration of a quantum fluid in suspensions of GW detectors has never been analysed and presents a new field of research. ### Conceptual layout The conceptual layout of the He-II marionette suspension is depicted in Fig. 7. In addition to the thin-wall marionette suspension tube, an internal guiding tube enables cool-down of the payload in counter-flow with supercritical helium (\(p>2.3\,\)bar) at adjustable supply temperatures. The helium supply can be implemented at a cooling interface on the PF, using multiple thin-wall and'soft' capillaries attached to vibration isolation systems, similar to the heat link concept in KAGRA [42; 59]. The capillaries connect the cooling interface to a cryogenic supply unit in the vicinity of the cryostat, cf. [60; 61]. Exemplary capillary dimensions are given in [62]. For steady-state operation, the normal He-I is transformed in a static He-II column [60]. The internal guiding tube has no function in this case, i.e. heat conduction takes place via the entire He-II cross-section. By contact Figure 5: Thermal conductivity of He-II at given design conditions —[50],[51], compared to 6N-aluminium (RRR = \(10^{4}\)) —[19], high-purity sapphire —[52], monocrystalline sapphire fibers \(\,\)[18], high-purity silicon \(\,\)[17] and monocrystalline silicon fibers \(\,\)[49]. with the He-II suspension, the marionette reaches a temperature of \(2\,\mathrm{K}\). The silicon mirror temperature is around \(15\,\mathrm{K}\) due to the heat load and the temperature gradient in the monocrystalline mirror suspensions. ### Mechanical dimensioning The marionette suspension tube carries the mechanical load of the MA and the MI. The dimensioning includes a mechanical safety factor \(\mathrm{SF}=3\) with regard to the yield strength \(\sigma_{\mathrm{y}}\) of the tube material (c.f. Table 3 for material options). Beside low-temperature ductility and mechanical strength, a decisive constrain in the material choice is related to suspension losses. This yields a preference for titanium, as discussed in Sec. VI. ### Thermal dimensioning In this marionette suspension concept, the thermal dimensioning (i.e. the He-II cross-section) is independent from the mechanical dimensioning (i.e the suspension tube wall cross-section). The two-fluid model [57, 58] describes the heat transport in static He-II by a counterflow between the normal and the superfluid components on a molecular level, i.e. there is no macroscopic movement of the bulk liquid. The most efficient laminar regime is achieved only in narrow channels of \(d<$10\,\mathrm{\SIUnitSymbolMicro m}$\), where the normal and superfluid components do not interact. In channels of \(d>$1\,\mathrm{mm}$\), an additional turbulent term starts dominating the temperature gradient by the excitation of rotons and a resulting mutual friction among the two components. The mutual friction signifies a dissipative process that limits the heat transport [55], but the thermal conductivity remains nonetheless higher than in pure solids as shown in Fig. 5. The temperature gradient along the He-II column in the marionette suspension is given by: \[\Delta T_{\mathrm{ma}}=\frac{32\,\eta\,L_{\mathrm{ma}}}{\left(d_{\mathrm{h}} \,\rho\,s\right)^{2}\,T}\dot{q}+\frac{L_{\mathrm{ma}}}{h\left(\frac{T}{T_{ \mathrm{\lambda}}}\right)g_{\mathrm{peak}}(p)}\dot{q}^{3.4} \tag{2}\] where the left term signifies the analytic description of the laminar regime [56] and the right term uses the model from Sato et al. [50] for the turbulent regime. \(L_{\mathrm{ma}}\) denotes the marionette suspension length, \(\eta\) the dynamic viscosity, \(\rho\) the density and \(s\) the entropy of the He-II, \(d_{\mathrm{h}}\) refers to the hydraulic diameters of the circular and the annular cross-sections shown in Fig. 8, \(\dot{q}\) is the heat flux, and \(h(T)\) and \(g_{\mathrm{peak}}(p)\) are empirical functions from Sato et al. [50]. For the baseline design under nominal operating conditions, the contribution from the laminar term is negligibly small. Defining a temperature gradient of \(\Delta T_{\mathrm{ma}}=$50\,\mathrm{mK}$\) with regard to the overall He-II operating concept explained in [60], the suspension tube design parameters are summarized in Table 4. The suspension tube lower end temperature is the highest temperature in the He-II system set to \(1.9\,\mathrm{K}\), where the thermal conductivity peak is located. The suspension tube outer diameter \(d_{\mathrm{o}}\) results from the required He-II cross-section, whereas the wall thickness \(s_{\mathrm{o}}\) from the mechanical design. The inner guiding tube dimensions are chosen such that equal cross-sections of the inner tube and the annular gap yield similar flow velocities during cool-down. Figure 9 shows the relation between the required suspension tube diameter \(d_{\mathrm{o}}\) and the heat load that can be extracted at \(\Delta T_{\mathrm{ma}}=$50\,\mathrm{mK}$\) and \(L_{\mathrm{ma}}=$1\,\mathrm{m}$\). Figure 8: Suspension tube design. Figure 7: Conceptual layout of the He-II marionette suspension. ### Cool-down with normal He-I flow One main advantage of this concept is the ability for convective cool-down of the ET-LF payload. This is enabled by normal He-I flow through the double-walled marionette suspension tube as indicated in Fig. 7. The heat flux \(\dot{q}\) from the marionette to the helium flow is correlated by \[\dot{q}=\alpha\left(T_{\rm wall}-T_{\rm He}\right), \tag{3}\] where \(\alpha\) denotes the heat transfer coefficient, \(T_{\rm wall}\) the wall temperature and \(T_{\rm He}\) the fluid temperature. Using aluminum alloy 1200 as marionette material, the marionette temperature change is given by \[\frac{\mathrm{d}T_{\rm MA}(t)}{\mathrm{d}t}=\frac{-\dot{q}A_{\rm HT}}{c_{\rm p,Al}(T_{\rm MA}(t))M_{\rm MA}} \tag{4}\] where \(A_{\rm HT}\) is the heat transfer area, \(c_{\rm p,Al}(T)\) the specific heat capacity and \(M_{\rm MA}\) the marionette mass. This equation simplifies the marionette as a block capacitance, neglecting the influences of finite heat conductivity. The cool-down process is therefore analyzed numerically by CFD simulation. In the model development process, simulations were set up in ANSYS Fluent(r) (finite volume method) and in COMSOL Multiphysics(r) (finite element method), allowing validation of the numerical model independence. Simulations are carried out for the marionette and suspension design parameters in Tables 2 and 4. The suspension tube length is \(1.105\,\mathrm{m}\) in total, of which \(105\,\mathrm{mm}\) are centrally connected to the bottom half of the marionette, passing through a slightly wider bore in the upper half. This insertion yields a heat transfer area of \(A_{\rm HT}\approx\,2750\,\mathrm{mm}^{2}\). Table 5 lists additional simulation parameters and material properties in the relevant temperature range of \(3.0\,\mathrm{K}\) to \(293.15\,\mathrm{K}\). The \(3\,\mathrm{K}\) denote the convective pre-cooling limit before the transformation to He-II operation. In the numerical model, the geometry is simplified by axial symmetry, yielding a cylindrical marionette instead of the octagonal prism shape displayed in Fig. 2. The helium properties are implemented via REFPROP [64] in ANSYS Fluent(r), and by the Peng-Robinson (Twu) equation of state [65] in COMSOL Multiphysics(r), respectively. The operating conditions in Table 5 yield exclusively turbulent flow regimes. In order to solve the flow problems, the standard turbulence eddy viscosity \(\mathrm{k}-\epsilon\) model with re-normalisation group (RNG) methods developed by Yakhot et al. [66] is applied for its accuracy regarding heat transfer [67]. Scalable wall functions are implemented for the generated spatial discretization, since they enable an adequate resolution of thermally and fluid-dynamically induced effects close to the walls within the fluid domain. The helium supply temperature \(T_{\rm He,in}\) is set as function of the average marionette temperature for controlled cool-down. A constant \(\Delta T=100\,\mathrm{K}=\overline{T}_{\rm MA}(t)-T_{\rm He,in}\) is defined until the lowest helium supply temperature of \begin{table} \begin{tabular}{l c} Parameter/property & Value/expression \\ \hline \(M_{\rm He}\) & \(1.0\,\mathrm{g}\,\mathrm{s}^{-1}\) \\ \(p_{\rm He,out}\) & \(2.5\,\mathrm{bar}\)(a) \\ \(T_{\rm He,in}(t)\) & \(\max\{\overline{T}_{\rm MA}(t)-\Delta T_{\rm MA-He,in},3.0\,\mathrm{K}\}\) \\ \(\Delta T_{\rm MA-He,in}\) & \(100\,\mathrm{K}\) \\ \hline \(\overline{T}_{\rm MA}(t=0)\) & \(293.15\,\mathrm{K}\) \\ \(T_{\rm MA}(t_{\rm end})\) & \(3.01\,\mathrm{K}\) \\ \(d_{\rm MA}\) & \(700\,\mathrm{mm}\) \\ \(h_{\rm MA}\) & \(210\,\mathrm{mm}\) \\ \(A_{\rm HT}\) & \(2750\,\mathrm{mm}^{2}\) \\ \hline \(\lambda_{\rm Al}(T)\) & \(59.4\ldots 502\,\mathrm{W}\,\mathrm{m}^{-1}\,\mathrm{K}^{-1}\)[63] \\ \(c_{\rm p,Al}(T)\) & \(0.29\ldots 942\,\mathrm{J}\,\mathrm{kg}^{-1}\,\mathrm{K}^{-1}\)[19] \\ \(\lambda_{\rm Tl}(T)\) & \(4.03\ldots 36.0\,\mathrm{W}\,\mathrm{m}^{-1}\,\mathrm{K}^{-1}\)[19] \\ \(c_{\rm p,Ti}(T)\) & \(0.20\ldots 520\,\mathrm{J}\,\mathrm{kg}^{-1}\,\mathrm{K}^{-1}\)[19] \\ \end{tabular} \end{table} Table 5: CFD simulation parameters and material properties of the marionette convective cooling model. \begin{table} \begin{tabular}{l c} Parameter & Value \\ \hline \(L_{\rm ma}\) & \(1.0\,\mathrm{m}\) \\ \(M_{\rm MA}\) & \(200\,\mathrm{kg}\) \\ \(M_{\rm MI}\) & \(200\,\mathrm{kg}\) \\ \hline **Constrains:** & \\ Mechanical SF & \(3.0\) \\ \(T(y=L_{\rm ma})\) & \(1.9\,\mathrm{K}\) \\ \(p_{\rm He-II,in}\) & \(1.2\,\mathrm{bar}\)(a) \\ \(\Delta T_{\rm ma}\) & \(50\,\mathrm{mK}\) \\ \(\dot{Q}\) & \(0.5\,\mathrm{W}\) \\ \hline **Design results:** & \\ \(d_{\rm o}\) & \(8.30\,\mathrm{mm}\) \\ \(s_{\rm o}\) & \(0.36\,\mathrm{mm}\) \\ \(d_{\rm i}\) & \(5.80\,\mathrm{mm}\) \\ \(s_{\rm i}\) & \(0.05\,\mathrm{mm}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Suspension tube design parameters. Figure 9: Cooling capacity of the He-II suspension as function of the outer tube diameter. 3 K is reached and held constant subsequently. The low-pressure limit of \(p_{\rm He,out}=2.5\) bar(a) ensures supercritical single-phase flow during the entire cool-down process. The marionette and suspension tube surfaces are considered adiabatic, while the internal guiding tube is diabatic. Figure 10 shows exemplary results of the CFD simulation at an intermediate time step with \(\overline{T}_{\rm MA}=103\) K and \(T_{\rm He,in}=3\) K. At the bottom end, the helium flow is returned from the inner guiding tube to the outer annular gap. Due to internal heat exchange in the suspension, the helium supply flow is heated up by \(\Delta T\approx 57\) K before entering the marionette heat transfer area \(A_{\rm HT}\) at \(T_{\rm He}\approx 60\) K. Yet, the temperature difference between marionette and helium is still around 40 K, driving the heat extraction from the marionette. In comparison, the temperature gradients within the marionette are small due to the high thermal conductivity of aluminum alloy 1200, especially at \(T~{}<~{}100\) K [63]. Results of the numerical simulation in terms of cool-down time and pressure loss are presented in Fig. 11. The pressure loss \(\Delta p_{\rm He}=p_{\rm He,in}-p_{\rm He,out}\) decreases with temperature due to decreasing flow velocities at increasing densities. A distinct point in the cool-down curve is found at \(t\approx 3.2\) d, where \(T_{\rm He,in}\) reaches the low temperature limit of 3 K. Towards the end of the cool-down at \(t\geq 4\) d and \(T\leq 50\) K, the marionette temperature decreases rapidly due to the \(T^{3}\)-dependence of specific heat capacity. The results in Fig. 11 indicate that the marionette can be cooled from ambient to operating temperature in about 4.2 d. With a helium mass flow rate of 1 g s\({}^{-1}\), the pressure drop in the suspension is \(\Delta p_{\rm He}<300\) mbar, which is compatible with the helium supply system presented in [60]. In addition to the numerical approach, the results are verified by implementing heat transfer and pressure drop correlations in a set of differential equations, yielding the dashed line for the pressure drop in Fig. 11. A more detailed discussion of this model exceeds the scope of this paper. In a next step, the cool-down of a silicon test mass is investigated, using the data of the marionette cool-down model to define the temperatures of the mirror suspensions at their upper ends. Radiative heat transfer is included in this model, as former studies have shown the need of combined convective and radiative cooling to achieve sufficient cool-down rates [61]. The simulation data are summarized in Table 6. Conservative assumptions are made for the emissivity of silicon with \(\epsilon_{\rm MI,Si}=0\) at \(T<120\) K and \(\epsilon_{\rm MI,Si}=0.75\) at \(T>260\) K, respectively, due to the lack of data. Thermal radiation is implemented from the test mass to a surrounding black body at \(T=5\) K starting at \(t=0\), representing the thermal shield operation around payload [60]. Figure 12 contains the simulation results, showing that the silicon test mass can be cooled from ambient to operating temperature in about 12.8 d. The first phase at \(t<5.2\) d is driven by thermal radiation, where temperature differences between the test mass and the shield are large, and the marionette cool-down is yet in progress. In the second phase, thermal radiation is effectively disabled with \(\epsilon_{\rm MI,Si}=0\). The heat extraction from the test mass occurs exclusively via the test mass suspension fibers to the helium-cooled marionette. Towards the cool-down end, this mechanism is amplified by increasing thermal conductivity values of silicon and the strongly decreasing heat capacity. Figure 11: Helium pressure loss in the suspension tube and marionette temperature during cool-down for the conditions listed in Table 5. Figure 10: Temperature contours and velocity field in the bottom section of the marionette; intermediate results at \(\overline{T}_{\rm MA}\approx 103\) K. ## V Modelling of suspension thermal noise ### Theoretical foundations Thermal noise is a thermally driven motion, which is directly related to the mechanical dissipative behavior of a system. Since thermal noise is a generalized type of Brownian motion with a random displacement of particles, it can be described using the Fluctuation-Dissipation Theorem (FDT) [70]. It states that the dissipations in the system are the driving force for thermal fluctuations, which are presented in form of a displacement spectral density. The FDT is able to include the contributions of various dissipative sources, using an equivalent macroscopic mechanical model that describes the total impedance of the system in the frequency domain [71]. The theoretical foundations and further details of thermal noise modelling in mechanical systems, especially for suspensions used in gravitational wave detectors, are explained in [72, 73, 74, 75, 39, 76, 77]. The general approach can be summarized by the following steps: 1. Choice of the representative mechanical model and the dissipation mechanisms effecting the system. 2. Definition of the equation of motion and its expression in the frequency domain by Fourier transformation: \(x(t)\to x(\omega)\cdot e^{i\omega t}\). 3. Definition of the system's mechanical impedance \(Z(\omega)\) and admittance \(Y(\omega)\), the inverse of the impedance. These can be calculated by applying an external force on the system and inspecting its reaction based on its equation of motion: \[Z(\omega)\equiv\frac{\vec{F}}{i\omega\vec{x}(\omega)}\quad\text{and}\quad Y( \omega)\equiv\left[Z(\omega)\right]^{-1}.\] (5) 4. Calculation of the thermal noise spectral density based on the FDT: \[x^{2}(\omega)=\frac{4k_{\text{B}}T}{\omega^{2}}\cdot\text{Re}\left(\left[Z( \omega)\right]^{-1}\right)\,\] (6) where \(k_{\text{B}}\) is the Boltzmann constant, \(T\) the suspension temperature and \(\omega\) the angular frequency. ### Dissipation in a pendulum system Energy dissipation in a mechanical system can arise from various sources, contributing directly to the generation of thermal noise [39]. The dissipation is quantified by the loss angle \(\phi\), representing the ratio between the imaginary and the real restoring force in the system [76]. The suspension thermal noise model in this paper considers the ET-LF payload as a double pendulum, with MA and MI as point-masses and suspensions with homogeneous mechanical losses. The loss angle of a suspension includes the summation of different dissipation mechanisms as \[\phi_{\text{susp}}(\omega)=\phi_{\text{bulk}}+\phi_{\text{therm}}(\omega)+ \phi_{\text{surf}}+\phi_{\text{join}}. \tag{7}\] **Bulk losses \(\mathbf{\phi_{\text{bulk}}}\)** designate intrinsic dissipations in the bulk material. These losses arise from the structural composition and from defects in the material, depending on temperature and frequency [40]. The frequency-dependence is usually considered negligible and the temperature-dependence is determined experimentally [77]. For the ET-LF marionette suspension, several materials are proposed as depicted in Tables 2 and 3. Their bulk losses at cryogenic temperatures are summarized in Table 7. For the mirror suspensions, the proposed materials include monocrystalline silicon or sapphire. The bulk loss angles applied in this STN model are summarized in Table 3. **Thermoelastic losses \(\mathbf{\phi_{\text{therm}}}\)** are frequency-dependent losses occurring in a suspension under \begin{table} \begin{tabular}{l c} Parameter & Value \\ \hline \(M_{\text{MI}}\) & 200 kg \\ \(d_{\text{MI}}\) & 450 mm \\ \(h_{\text{MI}}\) & 570 mm \\ \(d_{\text{mi}}\) & 3.0 mm \\ \(L_{\text{mi}}\) & 1.2 m \\ \(T_{\text{Shield}}\) & 5.0 K \\ \(\epsilon_{\text{MI},\text{Si}}(T)\) & \(0.41\ldots 0.75\)[68]1 \\ \(\lambda_{\text{Si}}(T)\) & 2330 \(\ldots\) 5130 W m\({}^{-1}\) K\({}^{-1}\)[17] \\ \(c_{\text{p,Si}}(T)\) & \(0.28\ldots\) 707 J kg\({}^{-1}\) K\({}^{-1}\)[69] \\ \end{tabular} \end{table} Table 6: Simulation parameters used in the test mass cool-down model. Figure 12: Cool-down of an ET-LF silicon test mass installed within a thermal shield at \(T=5\) K for the conditions listed in Tables 5 and 6. tension, characterized by a broad maximum at a characteristic frequency [39, 85]. These losses originate from local temperature gradients generated by the compression and expansion at the suspension bending point. These gradients induce a heat flux that is accompanied with entropy generation (i.e. energy dissipation) [9, 86]. For modelling, [86] proposes to consider both a contribution from the linear expansion coefficient \(\alpha\), and a non-linear contribution from the temperature-dependence of the Young's modulus \(E\) via the thermal elastic coefficient \(\beta=\text{dln}E/\text{d}T\) \[\phi_{\text{therm}}(\omega)=\frac{ET}{\rho c_{\text{p}}}\left(\alpha-\sigma \frac{\beta}{E}\right)^{2}\left(\frac{\omega\tau}{1+(\omega\tau)^{2}}\right)\, \tag{8}\] with \(\sigma\) as the suspension tension and \(\tau\) as the thermal diffusion time, for a circular suspension given as [87] \[\tau=\frac{d^{2}\rho c_{\text{p}}}{13.55\lambda}\, \tag{9}\] with \(d\) as the suspension diameter and \(\lambda\) as the thermal conductivity of the suspension material. This type of losses depends on geometry and tension (cf.Eq. (8)). Thus a reduction or nullification via an optimized suspension profile design, as applied in silica suspensions in current detectors, could be possible [14]. **Surface losses \(\mathbf{\phi_{\text{surf}}}\)** are mechanical losses in a thin surface layer \(h_{\text{s}}\), the dissipation depth, which differ from the bulk losses [9, 88]. These depend on the surface quality and on treatment techniques (e.g. polishing, dry or wet chemical etching), but are generally not yet fully understood [79]. \(\phi_{\text{surf}}\) are determined from experimental data, using the surface loss parameter \(\alpha_{\text{surf}}=h_{\text{s}}\phi_{\text{bulk}}\). The relation between \(\phi_{\text{surf}}\), \(\alpha_{\text{surf}}\), the geometry factor \(\mu\), the surface area \(A_{\text{surf}}\) and the volume \(V\) is given by [88] and simplified for thin circular fibers with \(\mu=2\) to \[\phi_{\text{surf}}=\alpha_{\text{surf}}\frac{\mu A_{\text{surf}}}{V}=h_{\text {s}}\phi_{\text{bulk}}\frac{8}{d} \tag{10}\] This equation shows that surface losses become increasingly relevant with higher surface to volume ratio. **Jointing losses \(\mathbf{\phi_{\text{join}}}\)** are additional mechanical losses resulting from the clamping between the suspensions and their anchors. The minimization of these losses requires dedicated numerical simulations including the real payload geometry alongside experimental validation [89]. Equivalently to the ET conceptual design study [2] and the design report update [5], this type of losses requires a more advanced design and is hence not yet considered in the model. ### Dynamic behaviour of the pendulum system The suspension thermal noise modelling requires the mechanical impedance \(Z(\omega)\) of the payload derived from the equations of motion as given in Eq. (6). The double pendulum system representing the ET-LF payload is depicted in Fig. 13. It is modelled as a double mode oscillator with a stiffness constant for each pendulum stage [39, 75]. The equations of motion in the frequency do Figure 13: Scheme of the representative mechanical system used to model the STN of the ET-LF payload (ma = marionette suspension, MA = marionette, mi = mirror suspension, MI = mirror). \begin{table} \begin{tabular}{l l l l} \hline \hline Material & Type/treatment & \(T\) (K) & \(\phi_{\text{bulk}}\) (-) \\ \hline Silicon & Single crystal (100) [8] & 3.5 & \(5\times 10^{-10}\) \\ & & 10 & \(1\times 10^{-9}\) \\ & & 20 & \(3\times 10^{-9}\) \\ Silicon & Single crystal (100) [38] & 10 & \(5\times 10^{-9}\) \\ & & 20 & \(8\times 10^{-9}\) \\ Silicon & Single crystal (100) [78] & 18 & \(5\times 10^{-9}\) \\ Silicon & Single crystal (111) [38] & 10 & \(1.1\times 10^{-8}\) \\ & & 20 & \(1.2\times 10^{-8}\) \\ Sapphire & Single crystal, annealed [10] & 10 & \(2\times 10^{-9}\) \\ & & 20 & \(3\times 10^{-9}\) \\ Sapphire & Single crystal, annealed [79] & 4.0 & \(2\times 10^{-10}\) \\ & & 10 & \(1\times 10^{-9}\) \\ & & 20 & \(3\times 10^{-9}\) \\ Sapphire & Hemlite Grade [80] & 4.2 & \(4\times 10^{-9}\) \\ & & 10 & \(5\times 10^{-9}\) \\ & & 20 & \(5.6\times 10^{-9}\) \\ Titanium & Grade 1, annealed [12] & 1...20 & \(6\times 10^{-7}\) \\ Titanium & Grade 1, stress-relieved [12] & 1...20 & \(1\times 10^{-6}\) \\ Titanium & Grade 1, untreated [12] & 1...20 & \(1\times 10^{-6}\) \\ Titanium & Grade 2 [81] & 4.2 & \(5\times 10^{-7}\) \\ & & 20 & \(1\times 10^{-6}\) \\ Ti6Al4V & Grade 5 [11] & 80 & \(1\times 10^{-4}\) \\ Al5056 & untreated [82] & 2.0 & \(6\times 10^{-8}\) \\ Al5056 & annealed [82, 83] & 2.0 & \(2.5\times 10^{-8}\) \\ Al5056 & - [84] & 2.0 & \(1.6\times 10^{-7}\) \\ \hline \hline \end{tabular} \end{table} Table 7: Bulk loss angles \(\phi_{\text{bulk}}\) of various materials at cryogenic temperatures. main are given as \[0 = -M_{\rm MA}\omega^{2}x_{\rm ma}+k_{\rm ma}x_{\rm ma}+k_{\rm mi}(x_{ \rm ma}-x_{\rm mi}) \tag{11}\] \[F = -M_{\rm MI}\omega^{2}x_{\rm mi}+k_{\rm mi}(x_{\rm mi}-x_{\rm ma})\, \tag{12}\] where \(F\) is an external force applied onto the mirror stage. The spring constants \(k_{\rm ma}\) and \(k_{\rm mi}\) of the marionette and the mirror stages, respectively, are calculated via the representative mechanical system of each pendulum stage. Gonzalez [74] provides a detailed summary of mechanical models applicable to suspensions used in gravitational wave detectors. **The marionette stage** is modelled using a simple pendulum with a suspended point-mass, considering a lossless gravitational potential and a lossy elastic potential as the only energy sources. The latter is a simplified treatment to introduce dissipation, given that the violin modes of the marionette suspension have been shown to have a negligible impact compared to the dominating ones of the mirror suspensions. Hence, the marionette violin modes don't need to be considered in the dynamics of the representative system. The marionette spring constant \(k_{\rm ma}\) is obtained from the lossless gravitational spring constant \(k_{\rm g}\) and the lossy elastic spring constant \(k_{\rm el}\) via \[k_{\rm ma}=k_{\rm g}+k_{\rm el}(1+i\phi_{\rm susp}(\omega)). \tag{13}\] Introducing the dilution factor \(D\), which depicts the ratio between the system's elastic and gravitational potential energies [39, 76], as \[D=\frac{k_{\rm el}}{k_{\rm g}}=\frac{n\sqrt{EI\sigma}}{2L^{2}}\frac{L}{Mg}= \frac{1}{2L}\sqrt{\frac{nEI}{Mg}}\, \tag{14}\] with \(M\) as the total mass suspended by \(n\) wires, \(\sigma\) as the tension in each wire, \(I\) as the area moment of inertia, \(g\) as the gravitational acceleration, and \(L\) as the suspension length, \(k_{\rm ma}\) yields [90] \[k_{\rm ma}=k_{\rm g}(1+D+i\phi_{\rm pend}(\omega)). \tag{15}\] The definition of the pendulum loss angle \(\phi_{\rm pend}(\omega)\) in Eq. (16) shows that the pendulum losses are lower than the suspension losses \(\phi_{\rm susp}\) according Eq. (7) due to dilution via \(D\) \[\phi_{\rm pend}(\omega)=\phi_{\rm susp}(\omega)D. \tag{16}\] The equation of motion for the marionette stage includes only a complex spring potential \[-k_{\rm ma}x=M_{\rm MA}\frac{\partial^{2}x}{\partial t^{2}}\, \tag{17}\] yielding after Fourier transformation in the frequency domain \(x(t)\to x(\omega)\cdot e^{i\omega t}\) \[k_{\rm ma}x-M_{\rm MA}\omega^{2}x=0. \tag{18}\] **The mirror stage** is modelled using a pendulum consisting of four anelastic suspension fibers suspending a point mass. Thus, in addition to the pendulum's degree of freedom (DoF), from which the pendulum mode is extracted, also the degrees of freedom related to the transverse motion along the suspension are included in order to obtain the infinite series of violin modes associated to its bending [73, 74, 90]. The effective mirror spring constant \(k_{\rm mi}\) associated to the suspension elasticity and gravitational restoring force is derived by solving the elastic equation for a slightly deflected suspension stretched by a tension \(\sigma\)[74, 90] \[-E_{\rm ex}I\frac{\partial^{4}x(y)}{\partial y^{4}}+\sigma\frac{\partial^{2} x(y)}{\partial y^{2}}=\rho S\frac{\partial^{2}x(y)}{\partial t^{2}}. \tag{19}\] A Fourier transformation in the frequency domain \(x(y,t)\to x(y,\omega)\cdot e^{i\omega t}\) yields \[E_{\rm ex}I\frac{\partial^{4}x(y)}{\partial y^{4}}-\sigma\frac{\partial^{2} x(y)}{\partial y^{2}}-\rho S\omega^{2}x(y)=0\, \tag{20}\] with \(S\) as the cross-sectional area of the suspension. The complex Young's modulus \(E_{\rm ex}\) introduces the dissipation into the system as \[E_{\rm ex}=E(1+i\phi_{\rm susp}(\omega)). \tag{21}\] The general solution of Eq. (20) yields the displacement of the suspension \(x(y)\) along the suspension axis \(y\) \[x(y)=C_{1}\sin(k_{\rm s}y)+C_{2}\cos(k_{\rm s}y)+C_{3}e^{k_{\rm s}y}+C_{4}e^{- k_{\rm s}y} \tag{22}\] with \(k_{\rm s}\) as the wave number associated to the flexural stiffness of the suspension [73, 91] \[k_{\rm s}=\sqrt{\frac{\sigma+\sqrt{\sigma^{2}+4E_{\rm ex}I\rho S\omega^{2}}}{2 E_{\rm ex}I}}\, \tag{23}\] and \(k_{\rm e}\) as the wave number of an elastic fiber \[k_{\rm e}=\sqrt{\frac{-\sigma+\sqrt{\sigma^{2}+4E_{\rm ex}I\rho S\omega^{2}}}{2 E_{\rm ex}I}}\, \tag{24}\] where the constants \(C_{1}\) to \(C_{4}\) are defined from the system boundary conditions. The first two boundary conditions result from the upper part of the suspension at \(y=0\), where the fixed clamping on the marionette yields \[x(0)=0\quad\mbox{and}\quad\frac{\partial x}{\partial y}(0)=0. \tag{25}\] The third and the fourth boundary conditions are associated to the bottom part at \(y=L_{\rm mi}\), where the mirror is attached and foregoes a displacement of \(x_{0}\) \[x(L_{\rm mi}) = x_{0}\, \tag{26}\] \[\frac{\partial x}{\partial y}(L_{\rm mi}) = 0\quad\mbox{or}\quad\frac{\partial^{2}x}{\partial y^{2}}(L_{\rm mi })=0. \tag{27}\] Assuming the mirror as a lumped mass causes the suspension slope at the bottom to be a free parameter [74]. Somiya [91] reports how this boundary condition can be defined for different mirror positioning approaches. In case of the mirror facing in beam direction, the suspension bends at the attachment point yielding \(\frac{\partial x}{\partial y}(L_{\rm mi})=0\)[90]. This condition is applied in this paper, equivalently to the ET-LF design in [5]. After defining the suspension displacement function \(x(y)\) according Eq. (22), the effective mirror spring constant \(k_{\rm mi}\) can be derived by applying an external force \(F\) on the suspended mass at \(y=L_{\rm mi}\)[73, 90]. From the equation of motion of a lumped mass \[E_{\rm cx}I\frac{\partial^{3}x}{\partial y^{3}}(L_{\rm mi})-\sigma\frac{ \partial x}{\partial y}(L_{\rm mi})-M_{\rm MI}\omega^{2}x(L_{\rm mi})=F\, \tag{28}\] and \(\frac{\partial x}{\partial y}(L_{\rm mi})=0\) as boundary condition, the mirror spring constant \(k_{\rm mi}\) for 4 suspensions is given by Eq. (34). The mechanical impedance \(Z_{\rm horz}(\omega)\), needed for the STN calculation in Eq. (6), is defined by the equations of motion for the double pendulum system, presented in matrix form in Eq. (35). Through this motion matrix, the mechanical impedance \(Z_{\rm horz}(\omega)\) is obtained from Eq. (36). The spring constants \(k_{\rm ma}\) and \(k_{\rm mi}\) implemented in our STN model are taken from Eqs. (15) and (34). The system dynamics described above refer to the horizontal DoF, representing the dominant source for the STN. Nonetheless, the vertical DoF delivers also a non-negligable contribution to the STN and is included in the model. The approach for modeling the vertical impedance \(Z_{\rm vert}(\omega)\) is analogous to the algorithm above, whereby both the marionette and mirror stages are represented via simple pendulum systems, whose vertical spring constants are given by [90] \[k_{\rm mi,vert} = \frac{4E_{\rm mi}S_{\rm mi}}{L_{\rm mi}}(1+i\phi_{\rm sup,mi})\, \tag{29}\] \[k_{\rm ma,vert} = (2\pi\cdot 0.4\,{\rm Hz})^{2}M_{\rm MA+MI}(1+i\phi_{\rm susp,ma}) \tag{30}\] The vertical spring constant for the marionette suspension can be evaluated as a set of two springs connected in series, namely the marionette suspension itself and the spring blades at its upper part. The resulting vertical spring constant is dominated by the soft magnetic spring blades of the super-attenuator system. The value \(0.4\,{\rm Hz}\) in Eq. (30) refers to the natural frequency measured for the magnetic anti-spring blades in AdVirgo [90]. Similar spring blades are assumed in this model for ET-LF. Finally, the overall STN spectral density is \[x_{\rm total}^{2}(\omega)=x_{\rm horz}^{2}(\omega)+\theta_{\rm vh}^{2}x_{\rm vert }^{2}(\omega)\, \tag{31}\] where \(\theta_{\rm vh}\) is the vertical-to-horizontal coupling factor. Weak coupling of vertical motion into horizontal motion results from the non-parallel alignment of the test masses at the ends of the interferometer arms due to the Earth's curvature. For a \(10\,{\rm km}\) ET-LF arm, \(\theta_{\rm vh}\) yields \[\theta_{\rm vh}=\frac{L_{\rm arm}}{2R_{\rm Earth}}=7.8\times 10^{-4}. \tag{32}\] ### Implementation of temperature levels For systems including non-uniform temperatures, the STN is usually modelled using the normal modal approach [90, 2, 5], as the standard FDT assumes a single homogeneous temperature level in the whole system as seen in Eq. (6). The modal approach can be a heavy computational task [92] and includes only homogeneous dissipation. To include inhomogeneuos losses, Levin [72] introduces an extended formulation of the standard FDT. Komori et al. [93] propose a discrete version of this extended FDT, which can be applied for STN modelling of systems with inhomogeneous temperature levels, such as cryogenic payloads. This approach foresees the discretization of the system into elements, where each element is associated with a homogeneous temperature and an individual mechanical impedance, where the thermal noise spectral density sensed by the element \(j\) of the system is given by [93] \[x^{2}(\omega)=\frac{2k_{\rm B}}{\omega^{2}}\sum_{j}T_{j}Z^{-1}(Z_{\rm j}+Z_{ \rm j}^{\dagger})Z^{-1\dagger}. \tag{33}\] The STN model in this paper uses the discrete FDT approach for modelling the cryogenic payload consisting of two elements (i.e., mirror stage and marionette stage). Here homogeneous losses and a constant temperature along the suspensions, namely the highest temperatures at the lower ends, are assumed. \[k_{\rm mi}=\frac{-E_{\rm cx}I\frac{\partial^{3}x}{\partial y^{3}}(L_{\rm mi}) }{x(L_{\rm mi})}=\frac{4E_{\rm cx}Ik_{\rm s}k_{\rm e}(k_{\rm s}^{3}\cos(k_{\rm e }L_{\rm mi})+k_{\rm s}^{2}k_{\rm e}\sin(k_{\rm e}L_{\rm mi})+k_{\rm e}^{3} \sin(k_{\rm e}L_{\rm mi}))}{(k_{\rm s}^{2}-k_{\rm e}^{2})\sin(k_{\rm e}L_{\rm mi })-2k_{\rm s}k_{\rm e}\cos(k_{\rm e}L_{\rm mi})} \tag{34}\] \[\left[\begin{array}{cc}k_{\rm ma}+k_{\rm mi}-M_{\rm MA}\omega^{2}&-k_{\rm mi }\\ -k_{\rm mi}&k_{\rm mi}-M_{\rm MI}\omega^{2}\end{array}\right]\left[\begin{array} []{c}x_{\rm ma}\\ x_{\rm mi}\end{array}\right]=\left[\begin{array}{c}0\\ F\end{array}\right] \tag{35}\] \[Z_{\rm horz}(\omega)=\frac{\vec{F}}{i\omega\vec{x}(\omega)}=\frac{1}{i\omega} \left[\begin{array}{cc}k_{\rm ma}+k_{\rm mi}-M_{\rm MA}\omega^{2}&-k_{\rm mi }\\ -k_{\rm mi}&k_{\rm mi}-M_{\rm MI}\omega^{2}\end{array}\right] \tag{36}\] Figure 13 illustrates the main modelling parameters implemented for the marionette and mirror suspensions, respectively. This conservative approach reduces computational effort, as results from KAGRA [93] show that including the temperature gradients along the suspensions in the STN model has a negligible impact. ## VI Sensitivity of the baseline design Using the STN model of Sec. V with the parameters in Tables 2 and 3, the STN curves of the baseline design options are depicted in Fig. 14. Both the monocrystalline and the He-II filled marionette suspension concepts fulfill the sensitivity requirements of the ET-D curve [2]. The combination of a He-II filled marionette suspension with a sapphire mirror yields STN values similar to the monolithic sapphire marionette concept and is therefore not displayed. When comparing the three STN curves of the baseline design in Fig. 14 with the suspension losses from Eq. (7), plotted for various materials in Fig. 15, two major conclusions can be drawn: 1. The suspension loss angle \(\phi_{\rm susp}\), especially of the mirror suspensions, has a crucial impact on the STN, yielding the difference between silicon and sapphire in the monocrystalline concepts. 2. The lower marionette suspension temperature \(T_{\rm ma}\) compensates higher marionette suspension losses \(\phi_{\rm susp}\) in the He-II filled titanium suspension tube, yielding results similar to the monocrystalline silicon suspension concept in the range of 3 Hz to 30 Hz. The latter effect can be deduced from the thermal noise of a simple harmonic oscillator far from its resonance (pendulum mode) at \(\omega\gg\omega_{0}\)[39] as \[x^{2}(\omega)\propto T\phi_{\rm susp}. \tag{37}\] When approaching the pendulum peak \(\omega\approx\omega_{0}\), however, the impact of \(\phi_{\rm susp}\) dominantes the STN, which is visible at 1 Hz in Fig. 14. Various values for the bulk loss angle \(\phi_{\rm bulk}\) of monocrystalline silicon and sapphire have been reported at cryogenic temperatures, see Table 7. Thus, implying the necessity for R&D to refine the confidence interval of the data, which are immensely affected from the experimental setup. In this work, \(\phi_{\rm bulk}\) values of \(1\times 10^{-9}\) and \(3\times 10^{-9}\) have been applied for silicon and sapphire, respectively. Given the crucial influence of this parameter, as also presented in Sec. VII, these values should be revised accordingly based on future R&D. The thermoelastic losses of silicon and sapphire at \(T\leqslant 25\) K are negligible compared to the dominant bulk losses in \(\phi_{\rm susp}\), as depicted for 20 K in Fig. 15. Further investigations on surface losses of treated, strength-improved monolithic silicon and sapphire crystals are crucial, in order to use reliable values in the model, because especially in small-scale structures such as suspensions, these losses can be a significant source [9, 14]. For thin silicon flexures, Nawrodt et al. [9] report a surface loss parameter \(\alpha_{\rm surf}\) of \(5\times 10^{-13}\) m at \(T=10\) K, yielding a dissipation depth of \(h_{\rm s}=5\times 10^{-4}\) m. For sapphire, currently the surface loss parameter has not been investigated, hence it is assumed to be equal to that of silicon. In this model, for both silicon and sapphire suspensions a value of \(\alpha_{\rm surf}=5\times 10^{-13}\) m is applied. In the He-II concept, only losses in the titanium suspension tube are being considered so far. An additional contribution may originate from the static superfluid. Though the He-II dissipation is expected to be minor, this may change when the relative velocity between the two fluid components exceeds a critical value [94, 95]. Above this critical velocity, a tangle of quantized vortexes arises. Then an extra term, due to the interaction of the quantum vortexes with the normal fluid should appear in addition to that of the viscous normal component. Since the ratio between the superfluid and the normal component is a function of temperature, the whole He-II contribution to the dissipation has to be investigated in future experiments, both in terms of frequency and temperature [96]. In metals, \(\phi_{\rm therm}\) represents the dominant loss contribution to \(\phi_{\rm susp}\), cf. Fig. 15. Here, especially the parameters \(\alpha\) and \(\beta\) are decisive. Compared to the other metals, titanium induces the lowest suspension losses, hence it is the proposed material for the suspension tube design. In this model, \(\beta_{\rm TI}\) is conservatively set equal to \(\beta_{\rm TIGAUY}=-4.6\times 10^{-4}\), instead of \(-1.9\times 10^{-5}\) as reported in [97]. For the bulk losses, the conservative value of \(\phi_{\rm bulk}=1\times 10^{-6}\) at 2 K is used for the titanium suspension tube. A contribution from \(\phi_{\rm surf}\) is neglected, because the surface treatment and finishing technologies Figure 14: STN of the baseline design for the monocrystalline and He-II based marionette cooling concepts (Sa = sapphire, Si = silicon, ST = He-II suspension tube). in metals are usually expected to provide a high quality surface and hence a minor \(\phi_{\rm surf}\). The cross-sectional area of the marionette suspension tube \[S_{\rm ST}=\pi\left(\frac{d_{\rm o}}{2}+s_{\rm o}\right)^{2}-\pi\left(\frac{d_{ \rm o}}{2}\right)^{2}, \tag{38}\] is implemented in the evaluation of the tension in Eq. (8) \[\sigma=\frac{M_{\rm MA+MI}\ g}{S_{\rm ST}}\, \tag{39}\] and converted to an equivalent diameter \[d_{\rm ST}=\sqrt{\frac{4S_{\rm ST}}{\pi}}\, \tag{40}\] to be applied in Eq. (9). The suspension tube area moment of inertia used in Eq. (14) is \[I_{\rm ST}=\frac{\pi}{4}\left[\left(\frac{d_{\rm o}}{2}+s_{\rm o}\right)^{4}- \left(\frac{d_{\rm o}}{2}\right)^{4}\right]. \tag{41}\] ## VII Parameter study ### General This section presents a study of various payload design parameters that influence the STN in the ET-LF frequency range. We use the He-II filled marionette suspension concept with a silicon mirror as a reference, because variations of other design parameters do not affect the temperature \(T_{\rm ma}=T_{\rm MA}=2\,\)K in this case. Therefore, effects of different mirror suspension designs can be better discriminated. The applied physical property data are summarized in Table 3. For a consistent comparison, the analysis considers the resulting mirror temperatures \(T_{\rm mi}=T_{\rm MI}\) due to the parameter variations, i.e. a mechanical dimensioning and a thermal modelling is applied prior to each STN modelling. The results of the parameter study are visualized in Figs.16 and 17 in the frequency range of \(0.3\,\)Hz to \(100\,\)Hz in order to include the impact on the pendulum modes below \(1\,\)Hz. ### Influence of the mirror suspension design The mirror suspension design determines the STN in the frequency range above \(10\,\)Hz, especially due to the violin and the vertical modes, but impacts also the sensitivity at lower frequencies. Variations of the mirror suspension temperature, length, diameter and bulk losses are investigated. With \(T_{\rm ma}=T_{\rm MA}=2\,\)K, the temperature \(T_{\rm mi}=T_{\rm MI}\) is a function of the heat load. Around the design target from Eq. (1), heat loads of \(0.1\,\)W, \(0.5\,\)W and \(1.0\,\)W yield silicon mirror temperatures of \(9\,\)K, \(15\,\)K and \(20\,\)K, respectively. The corresponding STN curves in Fig. 16a indicate a minor effect of the heat load on the STN. It must be noted, however, that the achievable mirror temperature strongly depends on the marionette temperature. The length of the mirror suspensions is an essential design parameter, influencing both the STN and the cryostat design. Figure 16b shows the STN for mirror suspensions of \(2.0\,\)m, \(1.2\,\)m and \(0.8\,\)m length, respectively, yielding mirror temperatures to \(18\,\)K, \(15\,\)K and \(13\,\)K at \(0.5\,\)W heat load. A decreasing length \(L_{\rm mi}\) yields a shift of all the modes to higher frequencies. This is beneficial for the sensitivity regrading the violin and vertical modes, but it also implies a shift of the pendulum modes below \(1\,\)Hz to higher frequencies. The latter results in an STN increase between \(1\,\)Hz to \(20\,\)Hz. Therefore, \(L_{\rm mi}\) is a design parameter to be optimized, considering constraints imposed by the ET-LF sensitivity, ongoing R&D on high-quality fiber manufacturing [49] and the ET-LF cryostat and tower dimensions. The impact of the mirror suspension diameter \(d_{\rm mi}\) is presented in Fig. 16c, considering different ultimate strength values and mechanical safety factors. This yields \(d_{\rm mi}=3\,\)mm for \(\sigma_{\rm max}=230\,\)MPa with SF = 3, \(d_{\rm mi}=4\,\)mm for \(\sigma_{\rm max}=120\,\)MPa with SF = 3 and \(d_{\rm mi}=5.6\,\)mm for \(\sigma_{\rm max}=120\,\)MPa with SF = 6, respectively. Increasing suspension diameters \(d_{\rm mi}\) result in higher STN values, despite a better heat extraction with lower temperatures \(T_{\rm mi}\). This is mainly caused by the shifting of the vertical and first violin modes towards each other. Furthermore, \(d_{\rm mi}\) determines the position of the mirror suspension bending points via \(\lambda_{\rm bp}=\sqrt{EI/\sigma}\), which due to payload control related constrains must be aligned with the center of mass of the suspended mirror and marionette, respectively. As a consequence, the over Figure 15: \(\phi_{\rm susp}\) (Eq. (7)) and \(\phi_{\rm therm}\) (Eq. (8)) of the marionette suspension for metallic suspension tubes (ST) and for monolithic silicon and sapphire suspensions, with design parameters from Table 2. all length of the mirror suspensions must include these additional lengths in the upper and lower parts. For the baseline design parameters in Table 2, both sapphire and silicon yield a total additional length of \(6\,\mathrm{cm}\). This additional length has a negligible impact on the STN modelling, but is an important aspect to be considered in the suspension manufacturing and payload design, such as the calculation of the suspension system frequencies and temperature gradients. The mirror suspension bulk loss angle \(\phi_{\mathrm{bulk,mi}}\) has a strong impact on the STN, given that it directly affects the overall mechanical dissipation of the suspensions, cf. Eq. (7). Figure 16d shows the STN for silicon mirror suspensions with \(\phi_{\mathrm{bulk,mi}}\) of \(1\times 10^{-9}\), \(3\times 10^{-9}\) and \(5\times 10^{-9}\), respectively. The surface losses are calculated under the assumption of a constant dissipation depth of \(h_{s}=5\times 10^{-4}\,\mathrm{m}\) according Eq. (10), yielding total suspension losses \(\phi_{\mathrm{susp,mi}}\) of \(2.3\times 10^{-9}\), \(7\times 10^{-9}\) and \(1.2\times 10^{-8}\), respectively. Increasing \(\phi_{\mathrm{bulk,mi}}\) induces a higher STN over the complete frequency range. ### Influence of the marionette suspension design The marionette suspension has a dominant impact on the STN at frequencies below \(10\,\mathrm{Hz}\). Again, we use the He-II filled marionette suspension concept for reference, where \(T_{\mathrm{ma}}=T_{\mathrm{MA}}=2\,\mathrm{K}\) are fixed on principle, and investigate the influence of the suspension length, the marionette mass and the suspension material. The resulting trends may apply to monolithic marionette suspensions as well, but more detailed design studies including the cooling interface will be necessary in order to determine appropriate temperature values. Figure 16(a) presents the STN modelled with marionette suspension lengths of \(L_{\mathrm{ma}}=0.8\,\mathrm{m}\), \(1.0\,\mathrm{m}\) and \(2.0\,\mathrm{m}\). A decrease in \(L_{\mathrm{ma}}\) yields a shift of the pendulum modes to higher frequencies. Increasing STN values, however, are Figure 16: Sensitivity analyses of the mirror suspension parameters \(T_{\mathrm{mi}}\), \(L_{\mathrm{mi}}\), \(d_{\mathrm{mi}}\) and \(\phi_{\mathrm{bulk,mi}}\) on the STN. only observed at \(f<3\,\)Hz. The violin and the vertical modes remain unchanged, as they are defined solely by the mirror suspensions. The variation of STN with marionettes of \(100\,\)kg, \(200\,\)kg and \(400\,\)kg is analyzed in Fig. 16(b). A reduction of the marionette mass results in a shift of the pendulum and the vertical modes to higher frequencies, resulting in slightly higher STN values in the frequency range of \(3\,\)Hz to \(5\,\)Hz. The benefit of a lighter marionette, however, is a reduced cool-down time. Additional restrictions may come from the payload control system, whereby the marionette should not weight less than the mirror. The marionette suspension tube material influences the STN via the suspension losses \(\phi_{\mathrm{susp}}\) (cf. Fig. 15) and the wall thickness resulting from the mechanical dimensioning. Figure 16(c) depicts the impact of different materials on the STN, showing that the ET-D sensitivity curve can only be reached with a titanium suspension tube. ## VIII Conclusions and outlook We presented a baseline design for the ET-LF cryogenic payload, which is thermally and mechanically consistent and fulfils the STN requirements given by the ET-D sensitivity curve. Analytic and FEA simulations indicate that soft thermal links cannot be connected to the marionette. Therefore, two possible heat extraction concepts are proposed, including a high-\(Q\) and high-conductivity monocrystalline marionette suspension made of silicon or sapphire, and a He-II filled marionette suspension tube made of titanium, respectively. In the latter case, the lower operating temperature of \(2\,\)K compensates for the lower \(Q\) of titanium. The theoretical fundamentals of STN modelling applied to cryogenic payloads are described in detail and available sources for material data are compiled. A parameter study is performed in order to identify the impact of various design parameters on the ET-LF sensitivity, illustrating the parameter space for future payload design optimizations. The suspension losses are shown to have a decisive impact, highlighting the need for dedicated R&D on bulk and surface losses under ET-LF operating conditions. A reduction of the mirror suspension length is shown to deteriorate the STN in the ET-LF frequency range, whereas the marionette suspension length has a less important impact. Hence, a combined variation of these two parameters may be beneficial in future design studies. The actual value of the heat load on the mirror is shown to have a marginal impact on the STN, assuming that the necessary cooling capacity is available. Future R&D on cryogenic payloads will be embedded in a wide context of activities outlined e.g. in [98]. For the monocrystalline concept foreseeing a silicon or sapphire marionette suspension, the cool-down behaviour and vibration transmission will be investigated in upcoming R&D in the ET-Cryo facility of the Amaldi Re Figure 17: Parameter analysis of the marionette design parameters: \(L_{\mathrm{ma}}\), \(M_{\mathrm{MA}}\), \(\phi_{\mathrm{susp,ma}}\) on the STN. search Center (ARC), devoted on testing and developing the main features of an ET-LF payload using a solid conductive cooling cryostat. Thermal shielding, soft thermal links as well as high-\(Q\) and high-conductivity monocrystalline suspensions for marionette and mirror will be tested. Also, key relevant features concerning the cryostat design versus payload will be tested in order to envisage the actual impact of connecting the payload to the cryogenic system. The ARC ET-Cryo Lab is ready and the design of the test cryostat is underway. The alternative He-II concept is shown to fulfil the STN requirements as well, cooling the marionette to 2 K and conducting the heat load through a static He-II column inside the marionette suspension tube. This concept enables convective cool-down of the ET-LF payload by controlled He-I flow in about two weeks. Open questions related to the integration of a quantum fluid in a gravitational wave detector suspension, in particular the effect of He-II on mechanical dissipation and vibration transmission, will be addressed in future experiments by the authors at KIT. A new facility for \(Q\)-measurements down to 2 K is presently being planned, allowing both investigations of solid and He-II filled suspensions. The scope of this facility includes R&D on the mechanical integration of the cooling interface on the platform, the supply capillaries and their vibration attenuation system in order to investigate the noise propagation from the cooling system into the payload. ###### Acknowledgements. The authors would like to acknowledge the support from the German Ministry for Education and Research (BMBF, Gr 05A20VK4), and from the Karlsruhe School of Elementary Particle and Astroparticle Physics: Science and Technology (KSETA). The study in this paper has been developed within the frameworks of Italian PRIN2020, cod. 2020BSYXCB LoVeC-ET (Low-frequency Versus Cryogenics for ET), the EC exchange programme NEWS - H2020-MSCA-RISE-2016 GA no. 734303, and ETIC - Einstein Telescope Infrastructure Consortium (IR0000004) - MUR call n. 3264 PNRR, Miss.4 - Comp. 2, Line 3.1. We are indebted to KAGRA colleagues, for the precious discussions concerning solid conduction cooling-down of payloads. ## Symbol List \begin{tabular}{l l} \hline \hline Symbol & Definition \\ \hline \(\alpha\) & Linear expansion coefficient \\ \(\alpha\) & Heat transfer coefficient \\ \(\alpha_{\rm surf}\) & Surface loss parameter \\ \(\beta\) & Thermal elastic coefficient \\ \(\epsilon\) & Effective emissivity \\ \(\lambda\) & Thermal conductivity \\ \(\lambda_{\rm bp}\) & Bending point position \\ \(\mu\) & Geometry factor \\ \(\rho\) & Density \\ \(\eta\) & Dynamic viscosity \\ \(\omega\) & Angular frequency \\ \(\sigma\) & Tension \\ \(\sigma_{\rm max}\) & Ultimate tensile strength \\ \(\sigma_{\rm y}\) & Yield strength \\ \(\tau\) & Thermal diffusion time \\ \(\phi\) & Loss angle \\ \(A\) & Area \\ \(A_{\rm surf}\) & Surface area \\ \(C\) & Constant \\ \(c_{\rm p}\) & Specific heat capacity \\ \(d\) & Diameter \\ \(D\) & Dilution factor \\ \(E\) & Young's modulus \\ \(f\) & Frequency \\ \(F\) & Force \\ \(g\) & Standard gravitational acceleration \\ \(h\) & Height \\ \(h_{\rm s}\) & Dissipation depth \\ \(I\) & Area moment of inertia \\ \(k\) & Spring constant \\ \(k_{\rm B}\) & Boltzmann constant \\ \(k_{\rm e}\) & Elastic fiber wave number \\ \(k_{\rm s}\) & Flexural stiffness wave number \\ \(L\) & Length \\ \(\dot{M}\) & Mass flow \\ \(M\) & Mass \\ \(n\) & Number of fibers \\ \(p\) & Pressure \\ \(\dot{q}\) & Heat flux \\ \(\dot{Q}\) & Cooling power \\ \(Q\) & Quality factor \\ \(s\) & Specific entropy \\ \(s\) & Wall thickness \\ \(S\) & Cross-sectional area \\ \(t\) & Time \\ \(T\) & Temperature \\ \(V\) & Volume \\ \(x\) & Displacement \\ \(x^{2}(\omega)\) & Displacement Spectral Density \\ \(y\) & Longitudinal coordinate \\ \(Y\) & Mechanical admittance \\ \(Z\) & Mechanical impedance \\ \hline \hline \end{tabular} ## ABbreviation List
2305.19256
Ambient Diffusion: Learning Clean Distributions from Corrupted Data
We present the first diffusion-based framework that can learn an unknown distribution using only highly-corrupted samples. This problem arises in scientific applications where access to uncorrupted samples is impossible or expensive to acquire. Another benefit of our approach is the ability to train generative models that are less likely to memorize individual training samples since they never observe clean training data. Our main idea is to introduce additional measurement distortion during the diffusion process and require the model to predict the original corrupted image from the further corrupted image. We prove that our method leads to models that learn the conditional expectation of the full uncorrupted image given this additional measurement corruption. This holds for any corruption process that satisfies some technical conditions (and in particular includes inpainting and compressed sensing). We train models on standard benchmarks (CelebA, CIFAR-10 and AFHQ) and show that we can learn the distribution even when all the training samples have $90\%$ of their pixels missing. We also show that we can finetune foundation models on small corrupted datasets (e.g. MRI scans with block corruptions) and learn the clean distribution without memorizing the training set.
Giannis Daras, Kulin Shah, Yuval Dagan, Aravind Gollakota, Alexandros G. Dimakis, Adam Klivans
2023-05-30T17:43:33Z
http://arxiv.org/abs/2305.19256v1
# Ambient Diffusion: ###### Abstract We present the first diffusion-based framework that can learn an unknown distribution using only highly-corrupted samples. This problem arises in scientific applications where access to uncorrupted samples is impossible or expensive to acquire. Another benefit of our approach is the ability to train generative models that are less likely to memorize individual training samples since they never observe clean training data. Our main idea is to introduce _additional measurement distortion_ during the diffusion process and require the model to predict the original corrupted image from the further corrupted image. We prove that our method leads to models that learn the conditional expectation of the full uncorrupted image given this additional measurement corruption. This holds for any corruption process that satisfies some technical conditions (and in particular includes inpainting and compressed sensing). We train models on standard benchmarks (CelebA, CIFAR-10 and AFHQ) and show that we can learn the distribution even when all the training samples have \(90\%\) of their pixels missing. We also show that we can finetune foundation models on small corrupted datasets (e.g. MRI scans with block corruptions) and learn the clean distribution without memorizing the training set. ## 1 Introduction Diffusion generative models [48; 24; 51] are emerging as versatile and powerful frameworks for learning high-dimensional distributions and solving inverse problems [34; 11; 35; 28]. Numerous recent developments [52; 30] have led to text conditional foundation models like Dalle-2 [41], Latent Diffusion [45] and Imagen [47] with incredible performance in general image domains. Training these models requires access to high-quality datasets which may be expensive or impossible to obtain. For example, direct images of black holes cannot be observed [12; 19] and high-quality MRI images require long scanning times, causing patient discomfort and motion artifacts [28]. Recently, Carlini et al. [8], Somepalli et al. [49], and Jagielski et al. [27] showed that diffusion models can memorize examples from their training set. Further, an adversary can extract dataset samples given only query access to the model, leading to privacy, security and copyright concerns. For many applications, we may want to learn the distribution but not individual training images e.g. we might want to learn the distribution of X-ray scans but not memorize images of specific patient scans from the dataset. Hence, we may want to introduce corruption as a design choice. We show that it is possible to train diffusions that learn a distribution of clean data by only observing highly corrupted samples. **Prior work in supervised learning from corrupted data.** The traditional approach to solving such problems involves training a restoration model using supervised learning to predict the clean image based on the measurements [43; 44; 57; 39]. The seminal Noise2Noise [38] work introduced a practical algorithm for learning how to denoise in the absence of any non-noisy images. This framework and its generalizations [5; 37; 53] have found applications in electron microscopy [16], tomographic image reconstruction [56], fluorescence image reconstruction [59], blind inverse problems [20; 5], monocular depth estimation and proteomics [6]. Another related line of work uses Stein's Unbiased Risk Estimate (SURE) to optimize an unbiased estimator of the denoising objective without access to non-noisy data [18]. We stress that the aforementioned research works study the problem of _restoration_, whereas are interested in the problem of _sampling_ from the clean distribution. Restoration algorithms based on supervised learning are only effective when the corruption level is relatively low [15]. However, it might be either not possible or not desirable to reconstruct individual samples. Instead, the desired goal may be to learn to _generate_ fresh and completely unseen samples from the distribution of the uncorrupted data but _without reconstructing individual training samples_. Indeed, for certain corruption processes, it is theoretically possible to perfectly learn a distribution only from highly corrupted samples (such as just random one-dimensional projections), even though individual sample denoising is usually impossible in such settings. Specifically, AmbientGAN [7] showed that general \(d\) dimensional distributions can be learned from _scalar_ observations, by observing only projections on one-dimensional random Gaussian vectors, in the infinite training data limit. The theory requires an infinitely powerful discriminator and hence does not apply to diffusion models. **Our contributions.** We present the first diffusion-based framework to learn an unknown distribution \(\mathcal{D}\) when the training set only contains highly-corrupted examples drawn from \(\mathcal{D}\). Specifically, we consider the problem of learning to sample from the target distribution \(p_{0}(\mathbf{x}_{0})\) given corrupted samples \(A\mathbf{x}_{0}\) where \(A\sim p(A)\) is a random corruption matrix (with known realizations and prior distribution) and \(\mathbf{x}_{0}\sim p_{0}(\mathbf{x}_{0})\). Our main idea is to introduce _additional measurement distortion_ during the diffusion process and require the model to predict the original corrupted image from the further corrupted image. Figure 1: **Left panel:** Baseline method of vanilla finetuning Deepfloyd IF using \(3000\) images from CelebA-HQ. We show generated sample images and nearest neighbors from the finetuning set. As shown, the generated samples are often near-identical copies from training data. This verifies related work Carlini et al. [8], Somepalli et al. [49], and Jagielski et al. [27] that pointed out that diffusions often generate training samples. **Right panel:** We finetune the same foundation model (Deepfloyd IF) using our method and \(3000\) highly corrupted training images. The corruption adds noise and removes \(80\) percent random pixels. We show generated samples and nearest neighbors from the training set. Our method still learns the clean distribution of faces (with some quality deterioration, as shown) but does not memorize training data. We emphasize that our training is performed without ever accessing clean training data. * We provide an algorithm that provably learns \(\mathbb{E}[\mathbf{x}_{0}|\tilde{A}(\mathbf{x}_{0}+\sigma_{t}\mathbf{\eta}),\tilde{A}]\), for all noise levels \(t\) and for \(\tilde{A}\sim p(\tilde{A}\mid A)\) being a further corrupted version of \(A\). The result holds for a general family of corruption processes \(A\sim p(A)\). For various corruption processes, we show that the further degradation introduced by \(\tilde{A}\) can be very small. * We use our algorithm to train diffusion models on standard benchmarks (CelebA, CIFAR-10 and AFHQ) with training data at different levels of corruption. * Given the learned conditional expectations we provide an approximate sampler for the target distribution \(p_{0}(\mathbf{x}_{0})\). * We show that for up to \(90\%\) missing pixels, we can learn reasonably well the distribution of uncorrupted images. We outperform the previous state-of-the-art AmbientGAN [7] and natural baselines. * We show that our models perform on par or even outperform state-of-the-art diffusion models for solving certain inverse problems even without ever seeing a clean image during training. Our models do so with a single prediction step while our baselines require hundreds of diffusion steps. * We use our algorithm to finetune foundational pretrained diffusion models. Our finetuning can be done in a few hours on a single GPU and we can use it to learn distributions with a few corrupted samples. * We show that models trained on sufficiently corrupted data do not memorize their training set. We measure the tradeoff between the amount of corruption (that controls the degree of memorization), the amount of training data and the quality of the learned generator. * We open-source our code and models: [https://github.com/giannisdaras/ambient-diffusion](https://github.com/giannisdaras/ambient-diffusion). ## 2 Background Training a diffusion model involves two steps. First, we design a corruption process that transforms the data distribution gradually into a distribution that we can sample from [52; 14]. Typically, this corruption process is described by an Ito SDE of the form: \(\mathrm{d}\mathbf{x}=\mathbf{f}(\mathbf{x},t)\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}\), where \(\mathbf{w}\) is the standard Wiener process. Such corruption processes are _reversible_ and the reverse process is also described by an Ito SDE [3]: \(\mathrm{d}\mathbf{x}=\big{(}\mathbf{f}(\mathbf{x},t)-g^{2}(t)\nabla_{\mathbf{x}}\log p_{t}(\bm {x})\big{)}\,\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}\). The designer of the diffusion model is usually free to choose the drift function \(\mathbf{f}(\cdot,\cdot)\) and the diffusion function \(g(\cdot)\). Typical choices are setting \(\mathbf{f}(\mathbf{x},t)=\mathbf{0},g(t)=\sqrt{\frac{\mathrm{d}\sigma_{t}^{2}}{\mathrm{d }t}}\) (Variance Exploding SDE) or setting \(\mathbf{f}(x,t)=-\beta(t)\mathbf{x},g(t)=\sqrt{\beta(t)}\) (Variance Preserving SDE). Both of these choices lead to a Gaussian terminal distribution and are equivalent to a linear transformation in the input. The goal of diffusion model training is to learn the function \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\), which is known as the score function. To simplify the presentation of the paper, we will focus on the Variance Exploding SDE that leads to conditional distributions \(\mathbf{x}_{t}=\mathbf{x}_{0}+\sigma_{t}\mathbf{\eta}\). Figure 2: Illustration of our method: Given training data with deleted pixels, we corrupt further by erasing more (illustrated with green color). We feed the learner the further corrupted images and we evaluate it on the originally observed pixels. We can do this during training since the green pixel values are known to us. The score network learner has no way of knowing whether a pixel was missing from the beginning or whether it was corrupted by us. Hence, the score network learns to predict the clean image everywhere. Our method is analogous to grading a random subset of the questions in a test, but the students not knowing which questions will be graded. Vincent [55] showed that we can learn the score function at level \(t\) by optimizing for the score-matching objective: \[J(\theta)=\frac{1}{2}\mathbb{E}_{(\mathbf{x}_{0},\mathbf{x}_{t})}\left|\left|\mathbf{h}_{ \theta}(\mathbf{x}_{t},t)-\mathbf{x}_{0}\right|\right|^{2}. \tag{2.1}\] Specifically, the score function can be written in terms of the minimizer of this objective as: \[\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t})=\frac{\mathbf{h}_{\theta^{*}}(\mathbf{x}_{t },t)-\mathbf{x}_{t}}{\sigma_{t}}. \tag{2.2}\] This result reveals a fundamental connection between the score-function and the best restoration model of \(\mathbf{x}_{0}\) given \(\mathbf{x}_{t}\), known as Tweedie's Formula [17]. Specifically, the optimal \(\mathbf{h}_{\theta^{*}}(\mathbf{x}_{t},t)\) is given by \(\mathbb{E}[\mathbf{x}_{0}|\mathbf{x}_{t}]\), which means that \[\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t})=\frac{\overbrace{ \mathbb{E}[\mathbf{x}_{0}|\mathbf{x}_{t}]}^{\text{best restoration}}-\mathbf{x}_{t}}{\sigma_{t}}. \tag{2.3}\] Inspired by this restoration interpretation of diffusion models, the Soft/Cold Diffusion works [14; 4] generalized diffusion models to look at non-Markovian corruption processes: \(\mathbf{x}_{t}=C_{t}\mathbf{x}_{0}+\sigma_{t}\mathbf{\eta}\). Specifically, Soft Diffusion proposes the Soft Score Matching objective: \[J_{\text{soft}}(\theta)=\frac{1}{2}\mathbb{E}_{(\mathbf{x}_{0},\mathbf{x}_{t})}\left| \left|C_{t}\left(\mathbf{h}_{\theta}(\mathbf{x}_{t},t)-\mathbf{x}_{0}\right)\right|\right| ^{2}, \tag{2.4}\] and shows that it is sufficient to recover the score function via a generalized Tweedie's Formula: \[\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t})=\frac{C_{t}\mathbb{E}[\mathbf{x}_{0}| \mathbf{x}_{t}]-\mathbf{x}_{t}}{\sigma_{t}}. \tag{2.5}\] For these generalized models, the matrix \(C_{t}\) is a design choice (similar to how we could choose the functions \(\mathbf{f},g\)). Most importantly, for \(t=0\), the matrix \(C_{t}\) becomes the identity matrix and the noise \(\sigma_{t}\) becomes zero, i.e. we observe samples from the true distribution. ## 3 Method As explained in the introduction, in many cases we do not observe uncorrupted images \(\mathbf{x}_{0}\), either by design (to avoid memorization and leaking of sensitive data) or because it is impossible to obtain clean data. Here we study the case where a learner only has access to linear measurements of the clean data, i.e. \(\mathbf{y}_{0}=A\mathbf{x}_{0}\), and the corruption matrices \(A:\mathbb{R}^{m\times n}\). We note that we are interested in non-invertible corruption matrices. We ask two questions: 1. Is it possible to learn \(\mathbb{E}[\mathbf{x}_{0}|A(\mathbf{x}_{0}+\sigma_{t}\mathbf{\eta}),A]\) for all noise levels \(t\), given only access to corrupted samples \((\mathbf{y}_{0}=A\mathbf{x}_{0},A)\)? 2. If so, is it possible to use this restoration model \(\mathbb{E}[\mathbf{x}_{0}|A(\mathbf{x}_{0}+\sigma_{t}\mathbf{\eta}),A]\) to recover \(\mathbb{E}[\mathbf{x}_{0}|\mathbf{x}_{t}]\) for any noise level \(t\), and thus sample from the true distribution through the score function as given by Tweedie's formula (Eq. 2.3)? We investigate these questions in the rest of the paper. For the first, the answer is affirmative but only after introducing additional corruptions, as we explain below. For the second, at every time step \(t\), we approximate \(\mathbb{E}[\mathbf{x}_{0}|\mathbf{x}_{t}]\) directly using \(\mathbb{E}[\mathbf{x}_{0}|A\mathbf{x}_{t},A]\) (for a chosen \(A\)) and substitute it into Eq. 2.3. Empirically, we observe that the resulting approximate sampler yields good results. ### Training For the sake of clarity, we first consider the case of random inpainting. If the image \(\mathbf{x}_{0}\) is viewed as a vector, we can think of the matrix \(A\) as a diagonal matrix with ones in the entries that correspond to the preserved pixels and zeros in the erased pixels. We assume that \(p(A)\) samples a matrix where each entry in the diagonal is sampled i.i.d. with a probability \(1-p\) to be \(1\) and \(p\) to be zero. We would like to train a function \(\mathbf{h}_{\theta}\) which receives a corruption matrix \(A\) and a noisy version of a corrupted image, \(\mathbf{y}_{t}=A\underbrace{(\mathbf{x}_{0}+\sigma_{t}\mathbf{\eta})}_{\mathbf{x}_{t}}\) where \(\mathbf{\eta}\sim\mathcal{N}(0,I)\), and produces an estimate for the conditional expectation. The simplest idea would be to simply ignore the missing pixels and optimize for: \[J_{\mathrm{naive}}^{\mathrm{corr}}(\theta)=\frac{1}{2}\mathbb{E}_{(\mathbf{x}_{0}, \mathbf{x}_{t},A)}\left|\left|A\left(\mathbf{h}_{\theta}(A,A\mathbf{x}_{t},t)-\mathbf{x}_{0} \right)\right|\right|^{2}, \tag{3.1}\] Despite the similarities with Soft Score Matching (Eq 2.4), this objective will not learn the conditional expectation. The reason is that the learner is never penalized for performing arbitrarily poorly in the missing pixels. Formally, any function \(\mathbf{h}_{\theta^{\prime}}\) satisfying \(A\mathbf{h}_{\theta^{\prime}}(A,\mathbf{y}_{t},t)=A\mathbb{E}[\mathbf{x}_{0}|A\mathbf{x}_{t},A]\) is a minimizer. Instead, we propose to _further corrupt_ the samples before feeding them to the model, and ask the model to predict the original corrupted sample from the further corrupted image. Concretely, we randomly corrupt \(A\) to obtain \(\tilde{A}=BA\) for some matrix \(B\) that is selected randomly given \(A\). In our example of missing pixels, \(\tilde{A}\) is obtained from \(A\) by randomly erasing an additional fraction \(\delta\) of the pixels that survive after the corruption \(A\). Here, \(B\) will be diagonal where each element is \(1\) with probability \(1-\delta\) and \(0\) w.p. \(\delta\). We will penalize the model on recovering all the pixels that are visible in the sample \(A\mathbf{x}_{0}\): this includes both the pixels that survive in \(\tilde{A}\mathbf{x}_{0}\) and those that are erased by \(\tilde{A}\). The formal training objective is given by minimizing the following loss: \[J^{\mathrm{corr}}(\theta)=\frac{1}{2}\mathbb{E}_{(\mathbf{x}_{0},\mathbf{x}_{t},A, \tilde{A})}\left|\left|A\left(\mathbf{h}_{\theta}(\tilde{A},\tilde{A}\mathbf{x}_{t},t) -\mathbf{x}_{0}\right)\right|\right|^{2}, \tag{3.2}\] The key idea behind our algorithm is as follows: the learner does not know if a missing pixel is missing because we never had it (and hence do not know the ground truth) or because it was deliberately erased as part of the further corruption (in which case we do know the ground truth). Thus, the best learner cannot be inaccurate in the unobserved pixels because with non-zero probability it might be evaluated on some of them. Notice that the trained model behaves as a denoiser in the observed pixels and as an inpainter in the missing pixels. We also want to emphasize that the probability \(\delta\) of further corruption can be arbitrarily small as long as it stays positive. The idea of further corruption can be generalized from the case of random inpainting to a much broader family of corruption processes. For example, if \(A\) is a random Gaussian matrix with \(m\) rows, we can form \(\tilde{A}\) by deleting one row from \(A\) at random. If \(A\) is a block inpainting matrix (i.e. a random block of fixed size is missing from all of the training images), we can create \(\tilde{A}\) by corrupting further with one more non-overlapping missing block. Examples of our further corruption are shown in Figure 2. In our Theory Section, we prove conditions under which it is possible to recover \(\mathbb{E}[\mathbf{x}_{0}|\tilde{A}\mathbf{x}_{t},\tilde{A}]\) using our algorithm and samples \((\mathbf{y}_{0}=A\mathbf{x}_{0},A)\). Our goal is to satisfy this condition while adding minimal further corruption, i.e. while keeping \(\tilde{A}\) close to \(A\). ### Sampling **Fixed mask sampling.** To sample from \(p_{0}(\mathbf{x}_{0})\) using the standard diffusion formulation, we need access to \(\nabla_{\mathbf{x}_{t}}\log p_{t}(\mathbf{x}_{t})\), which is equivalent to having access to \(\mathbb{E}[\mathbf{x}_{0}|\mathbf{x}_{t}]\) (see Eq. 2.3). Instead, our model is trained to predict \(\mathbb{E}[\mathbf{x}_{0}|\tilde{A}\mathbf{x}_{t},\tilde{A}]\) for all matrices \(A\) in the support of \(p(A)\). We note that for random inpainting, the identity matrix is technically in the support of \(p(A)\). However, if the corruption probability \(p\) is at least a constant, the probability of seeing the identity matrix is exponentially small in the dimension of \(\mathbf{x}_{t}\). Hence, we should not expect our model to give good estimates of \(\mathbb{E}[\mathbf{x}_{0}|\tilde{A}\mathbf{x}_{t},\tilde{A}]\) for corruption matrices \(A\) that belong to the tails of the distribution \(p(A)\). The simplest idea is to sample a mask \(\tilde{A}\sim p(\tilde{A})\) and approximate \(\mathbb{E}[\mathbf{x}_{0}|\mathbf{x}_{t}]\) with \(\mathbb{E}[\mathbf{x}_{0}|\tilde{A}\mathbf{x}_{t},\tilde{A}]\). Under this approximation, the discretized sampling rule becomes: \[\mathbf{x}_{t-\Delta t}=\underbrace{\frac{\sigma_{t-\Delta t}}{\sigma_{t}}}_{ \gamma_{t}}\mathbf{x}_{t}+\underbrace{\frac{\sigma_{t}-\sigma_{t-\Delta t}}{\sigma_ {t}}}_{1-\gamma_{t}}\underbrace{\mathbb{E}[\mathbf{x}_{0}|\tilde{A}\mathbf{x}_{t}, \tilde{A}]}_{\tilde{x}_{0}}. \tag{3.3}\] This idea works surprisingly well. Unless mentioned otherwise, we use it for all the experiments in the main paper and we show that we can generate samples that are reasonably close to the true distribution (as shown by metrics such as FID and Inception) even with \(90\%\) of the pixels missing. **Sampling with Reconstruction Guidance.** In the Fixed Mask Sampler, at any time \(t\), the prediction is a convex combination of the current value and the predicted denoised image. As \(t\to 0\), \(\gamma_{t}\to 0\). Hence, for the masked pixels, the fixed mask sampler outputs the conditional expectation of their value given the observed pixels. This leads to averaging effects as the corruption gets higher. To correct this problem, we add one more term in the update: the Reconstruction Guidance term. The issue with the previous sampler is that the model never sees certain pixels. We would like to evaluate the model using different masks. However, the model outputs for the denoised image might be very different when evaluated with different masks. To account for this problem, we add an additional term that enforces updates that lead to consistency on the reconstructed image. The update of the sampler with Reconstruction Guidance becomes: \[\mathbf{x}_{t-\Delta t}=\gamma_{t}\mathbf{x}_{t}+(1-\gamma_{t})\mathbb{E}[\mathbf{x}_{0}| \tilde{A}\mathbf{x}_{t},\tilde{A}]-w_{t}\nabla_{\mathbf{x}_{t}}\mathbb{E}_{A^{\prime}} ||\mathbb{E}[\mathbf{x}_{0}|\tilde{A}\mathbf{x}_{t},\tilde{A}]-\mathbb{E}[\mathbf{x}_{0}| \tilde{A}^{\prime}\mathbf{x}_{t},\tilde{A}^{\prime}]||^{2}. \tag{3.4}\] This sampler is inspired by the Reconstruction Guidance term used in Imagen [23] to enforce consistency and correct for the sampling drift caused by imperfect score matching [13]. We see modest improvements over the Fixed Mask Sampler for certain corruption ranges. We ablate this sampler in the Appendix, Section E.3. In the Appendix, Section A.1, we also prove that in theory, whenever it is possible to reconstruct \(p_{0}(\mathbf{x}_{0})\) from corrupted samples, it is also possible to reconstruct it using access to \(\mathbb{E}[\mathbf{x}_{0}|A\mathbf{x}_{t},A]\). However, as stated in the Limitations section, we were not able to find any practical algorithm to do so. ## 4 Theory As elaborated in Section 3, one of our key goals is to learn the best restoration model for the measurements at all noise levels, i.e., the function \(\mathbf{h}(A,\mathbf{y}_{t},t)=\mathbb{E}[\mathbf{x}_{0}|\mathbf{y}_{t},A]\). We now show that under a certain assumption on the distribution of \(A\) and \(\tilde{A}\), the true population minimizer of Eq. 3.2 is indeed essentially of the form above. This assumption formalizes the notion that even conditional on \(\tilde{A}\), \(\tilde{A}\) has considerable variability, and the latter ensures that the best way to predict \(A\mathbf{x}_{0}\) as a function of \(\tilde{A}\mathbf{x}_{t}\) and \(\tilde{A}\) is to optimally predict \(\mathbf{x}_{0}\) itself. All proofs are deferred to the Appendix. **Theorem 4.1**.: _Assume a joint distribution of corruption matrices \(A\) and further corruption \(\tilde{A}\). If for all \(\tilde{A}\) in the support it holds that \(\mathbb{E}_{A|\tilde{A}}[A^{T}A]\) is full-rank, then the unique minimizer of the objective in equation 3.2 is given by_ \[\mathbf{h}_{\theta^{*}}(\tilde{A},\mathbf{y}_{t},t)=\mathbb{E}[\mathbf{x}_{0}\mid\tilde{ A}\mathbf{x}_{t},\tilde{A}] \tag{4.1}\] Two simple examples that fit into this framework (see Corollaries A.1 and A.2 in the Appendix) are: * Inpainting: \(A\in\mathbb{R}^{n\times n}\) is a diagonal matrix where each entry \(A_{ii}\sim\mathrm{Ber}(1-p)\) for some \(p>0\) (independently for each \(i\)), and the additional noise is generated by drawing \(\tilde{A}|A\) such that \(\tilde{A}_{ii}=A_{ii}\cdot\mathrm{Ber}(1-\delta)\) for some small \(\delta>0\) (again independently for each \(i\)).1 Footnote 1: \(\mathrm{Ber}(q)\) indicates a Bernoulli random variable with a probability of \(q\) to equal \(1\) and \(1-q\) for \(0\). * Gaussian measurements: \(A\in\mathbb{R}^{m\times n}\) consists of \(m\) rows drawn independently from \(\mathcal{N}(0,I_{n})\), and \(\tilde{A}\in\mathbb{R}^{m\times n}\) is constructed conditional on \(A\) by zeroing out its last row. Notice that the minimizer in Eq 4.1 is not entirely of the form we originally desired, which was \(\mathbf{h}(A,\mathbf{y}_{t},t)=\mathbb{E}[\mathbf{x}_{0}\mid A\mathbf{x}_{t},A]\). In place of \(A\), we now have \(\tilde{A}\), which is a further degraded matrix. Indeed, one trivial way to satisfy the condition in Theorem 4.1 is by forming \(\tilde{A}\) completely independently of \(A\), e.g. by always setting \(\tilde{A}=0\). However, in this case, the function we learn is not very useful. For this reason, we would like to add as little further noise as possible and ensure that \(\tilde{A}\) is close to \(A\). In natural noise models such as the inpainting noise model, by letting the additional corruption probability \(\delta\) approach \(0\), we can indeed ensure that \(\tilde{A}\) follows a distribution very close to that of \(A\). Experimental Evaluation ### Training from scratch on corrupted data Our first experiment is to train diffusion models from scratch using corrupted training data at different levels of corruption. The corruption model we use for these experiments is random inpainting: we form our dataset by deleting each pixel with probability \(p\). To create the matrix \(\tilde{A}\), we further delete each row of \(A\) with probability \(\delta\) - this removes an additional \(\delta\)-fraction of the surviving pixels. Unless mentioned otherwise, we use \(\delta=0.1\). We train models on CIFAR-10, AFHQ, and CelebA-HQ. All our models are trained with corruption level \(p\in\{0.0,0.2,0.4,0.6,0.8,0.9\}\). We use the EDM [30] codebase to train our models. We replace convolutions with Gated Convolutions [58] which are known to perform better for inpainting-type problems. To use the mask \(\tilde{A}\) as an additional input to the model, we simply concatenate it with the image \(\mathbf{x}\). The full training details can be found in the Appendix, Section C. We first evaluate the restoration performance of our model for the task it was trained on (random inpainting and noise). We compare with state-of-the-art diffusion models that were trained on clean data. Specifically, for AFHQ we compare with the state-of-the-art E \begin{table} \begin{tabular}{l c|c c c c} \hline \hline **Dataset** & **Corruption Probability** & **Method** & **LPIPS** & **PSNR** & **NFE** \\ \hline CelebA-HQ & & Ours & **0.037** & **31.51** & 1 \\ \cline{3-6} & 0.6 & DPS & 0.053 & 28.21 & 100 \\ \cline{3-6} & & & 0.139 & 25.76 & 35 \\ \cline{3-6} & & DDRM & 0.088 & 27.38 & 99 \\ & & & 0.069 & 28.16 & 199 \\ \hline & & Ours & **0.084** & **26.80** & 1 \\ \cline{3-6} & & DPS & 0.107 & 24.16 & 100 \\ \cline{3-6} & 0.8 & & 0.316 & 20.37 & 35 \\ & & DDRM & 0.188 & 22.96 & 99 \\ \cline{3-6} & & & 0.153 & 23.82 & 199 \\ \hline & & Ours & **0.152** & **23.34** & 1 \\ \cline{3-6} & & DPS & 0.168 & 20.89 & 100 \\ \cline{3-6} & 0.9 & & 0.461 & 15.87 & 35 \\ & & DDRM & 0.332 & 18.74 & 99 \\ & & & 0.242 & 20.14 & 199 \\ \hline \hline AFHQ & & Ours & 0.030 & 33.27 & 1 \\ \cline{3-6} & 0.4 & & **0.020** & **34.06** & 100 \\ \cline{3-6} & & & 0.122 & 25.18 & 35 \\ & & DDRM & 0.091 & 26.42 & 99 \\ & & & 0.088 & 26.52 & 199 \\ \hline & & Ours & 0.062 & 29.46 & 1 \\ \cline{3-6} & & DPS & **0.051** & **30.03** & 100 \\ \cline{3-6} & 0.6 & & 0.246 & 20.76 & 35 \\ & & DDRM & 0.166 & 22.79 & 99 \\ & & & 0.160 & 22.93 & 199 \\ \hline & & Ours & 0.124 & **25.37** & 1 \\ \cline{3-6} & & DPS & **0.107** & 25.30 & 100 \\ \cline{3-6} & & & 0.525 & 14.56 & 35 \\ & & DDRM & 0.295 & 18.08 & 99 \\ & & & 0.258 & 18.86 & 199 \\ \hline \end{tabular} \end{table} Table 1: Comparison of our model (trained on corrupted data) with state-of-the-art diffusion models on CelebA (DDIM [50] model) and AFHQ (EDM [30] model) for solving the random inpainting inverse problem. Our model performs on par with state-of-the-art diffusion inverse problem solvers, even though it has never seen uncorrupted training data. Further, this is achieved with a single score function evaluation. To solve this problem with a standard pre-trained diffusion model we need to use a reconstruction algorithm (such as DPS [11] or DDRM [34]) that typically requires hundreds of steps. CelebA we compare with DDIM [50]. These models were not trained to denoise, but we can use the prior learned in the denoiser as in [54; 29] to solve any inverse problem. We experiment with the state-of-the-art reconstruction algorithms: DDRM [34] and DPS [11]. We summarize the results in Table 1. Our model performs similarly to other diffusion models, even though it has never been trained on clean data. Further, it does so by requiring only one step, while all the baseline diffusion models require hundreds of steps to solve the same task with inferior or comparable performance. The performance of DDRM improves with more function evaluations at the cost of more computation. For DPS, we did not observe significant improvement by increasing the number of steps to more than \(100\). We include results with noisy inpainted measurements and comparisons with a supervised method in the Appendix, Section E, Tables 3, 4. We want to emphasize that all the baselines we compare against have an advantage: they are trained on _uncorrupted_ data. Instead, our models were only trained on corrupted data. This experiment indicates that: i) our training algorithm for learning the conditional expectation worked and ii) that the choice of corruption that diffusion models are trained to reverse matters for solving inverse problems. Next, we evaluate the performance of our diffusion models as generative models. To the best of our knowledge, the only generative baseline with quantitative results for training on corrupted data is AmbientGAN [7] which is trained on CIFAR-10. We further compare with a diffusion model trained without our further corruption algorithm. We plot the results in Figure 3. The diffusion model trained without our further corruption algorithm performs well for low corruption levels but collapses entirely for high corruption. Instead, our model trained with further corruption maintains reasonable corruption scores even for high corruption levels, outperforming the previous state-of-the-art AmbientGAN for all ranges of corruption levels. For CelebA-HQ and AFHQ we could not find any generative baselines trained on corrupted data to compare against. Nevertheless, we report FID and Inception Scores and summarize our results in Table 4 to encourage further research in this area. As shown in the Table, for CelebA-HQ and AFHQ, we manage to maintain a decent FID score even with \(90\%\) of the pixels deleted. For CIFAR-10, the performance degrades faster, potentially because of the lower resolution of the training images. ### Finetuning foundation models on corrupted data We can apply our technique to finetune a foundational diffusion model. For all our experiments, we use Deepfloyd's IF model [2], which is one of the most powerful open-source diffusion generative models available. We choose this model over Stable Diffusion [46] because it works in the pixel space (and hence our algorithm directly applies). Figure 4: Inception/FID results on random inpainting for models trained with our algorithm on CelebA-HQ, AFHQ and CIFAR-10. Figure 3: Performance on CIFAR-10 as a function of the corruption level. We compare our method with a diffusion model trained without our further corruption trick and AmbientGAN [7]. Ambient Diffusion outperforms both baselines for all ranges of corruption levels. Memorization.We show that we can finetune a foundational model on a limited dataset without memorizing the training examples. This experiment is motivated by the recent works of Carlini et al. [8], Somepalli et al. [49], and Jagielski et al. [27] that show that diffusion generative models memorize training samples and they do it significantly more than previous generative models, such as GANs, especially when the training dataset is small. Specifically, Somepalli et al. [49] train diffusion models on subsets of size \(\{300,3000,30000\}\) of CelebA and they show that models trained on \(300\) or \(3000\) memorize and blatantly copy images from their training set. We replicate this training experiment by finetuning the IF model on a subset of CelebA with \(3000\) training examples. Results are shown in Figure 1. Standard finetuning of Deepfloyd's IF on \(3000\) images memorizes samples and produces almost exact copies of the training set. Instead, if we corrupt the images by deleting \(80\%\) of the pixels prior to training and finetune, the memorization decreases sharply and there are distinct differences between the generated images and their nearest neighbors from the dataset. This is in spite of finetuning until convergence. Figure 5: **Left panel:** We finetune Deepfloyd’s IF diffusion model to make it a generative model for MRI images of brains with tumors. We use a small dataset [26] of only \(155\) images that was corrupted by removing large blocks as shown. **Right panel:** Generated samples from our finetuned model. As shown, the model learns the statistics of full brain tumor MRI images. The training set was resized to \(64\times 64\) but the generated images are at \(256\times 256\). The higher resolution is obtained by simply leveraging the power of the cascaded IF model. Figure 6: Distribution of similarity values to the nearest neighbor in the dataset for a finetuned IF model on a 3000 samples CelebA subset. Please note that similarity values above \(0.95\) roughly correspond to the same person and similarities below \(0.75\) typically correspond to random faces. Therefore the baseline finetuning process (red) often generates images that are near copies of the training set. On the contrary, our fine-tuning with corrupted samples (blue) shows a clear shift to the left. Visually we never observed a near-identical image generated from our process, see also Figure 1 for qualitative results. To quantify the memorization, we follow the methodology of Sompalli et al. [49]. Specifically, we generate 10000 images from each model and we use DINO [9]-v2 [42] to compute top-\(1\) similarity to the training images. Results are shown in Figure 6. Similarity values above \(0.95\) roughly correspond to the same person while similarities below \(0.75\) typically correspond to random faces. The standard finetuning (Red) often generates images that are near-identical with the training set. Instead, fine-tuning with corrupted samples (blue) shows a clear shift to the left. Visually we never observed a near-copy generated from our process - see also Figure 1. We repeat this experiment for models trained on the full CelebA dataset and at different levels of corruption. We include the results in Figure 8 of the Appendix. As shown, the more we increase the corruption level the more the distribution of similarities shifts to the left, indicating less memorization. However, this comes at the cost of decreased performance, as reported in Table 4. **New domains and different corruption.** We show that we can also finetune a pre-trained foundation model on a _new domain_ given a limited-sized dataset in a few hours in a single GPU. Figure 5 shows generated samples from a finetuned model on a dataset containing \(155\) examples of brain tumor MRI images [26]. As shown, the model learns the statistics of full brain tumor MRI images while only trained on brain-tumor images that have a random box obfuscating \(25\%\) of the image. The training set was resized to \(64\times 64\) but the generated images are at \(256\times 256\) by simply leveraging the power of the cascaded Deepfloyd IF. **Limitations.** Our work has several limitations. First, there is a tradeoff between generator quality and corruption levels. For higher corruption, it is less likely that our generator memorizes parts of training examples, but at a cost of degrading quality. Precisely characterizing this trade-off is an open research problem. Further, in this work, we only experimented with very simple approximation algorithms to estimate \(\mathbb{E}[\mathbf{x}_{0}|\mathbf{x}_{t}]\) using our trained models. Additionally, we cannot make any strict privacy claim about the protection of any training sample without making assumptions about the data distribution. We show in the Appendix that it is possible to recover \(\mathbb{E}[\mathbf{x}_{0}|\mathbf{x}_{t}]\) exactly using our restoration oracle, but we do not have an algorithm to do so. Finally, our method cannot handle measurements that also have noise. Future work could potentially address this limitation by exploiting SURE regularization as in [1]. Acknowledgements.The authors would like to thank Tom Goldstein for insightful discussions that benefited this work. This research has been supported by NSF Grants CCF 1763702, AF 1901292, CNS 2148141, Tripods CCF 1934932, NSF AI Institute for Foundations of Machine Learning (IFML) 2019844, the Texas Advanced Computing Center (TACC) and research gifts by Western Digital, WNCG IAP, UT Austin Machine Learning Lab (MLL), Cisco and the Archie Strait Endowed Faculty Fellowship. Giannis Daras has been supported by the Onassis Fellowship (Scholarship ID: F ZS 012-1/2022-2023), the Bodossaki Fellowship and the Leventis Fellowship.
2305.16165
A Conceptual Model for End-to-End Causal Discovery in Knowledge Tracing
In this paper, we take a preliminary step towards solving the problem of causal discovery in knowledge tracing, i.e., finding the underlying causal relationship among different skills from real-world student response data. This problem is important since it can potentially help us understand the causal relationship between different skills without extensive A/B testing, which can potentially help educators to design better curricula according to skill prerequisite information. Specifically, we propose a conceptual solution, a novel causal gated recurrent unit (GRU) module in a modified deep knowledge tracing model, which uses i) a learnable permutation matrix for causal ordering among skills and ii) an optionally learnable lower-triangular matrix for causal structure among skills. We also detail how to learn the model parameters in an end-to-end, differentiable way. Our solution placed among the top entries in Task 3 of the NeurIPS 2022 Challenge on Causal Insights for Learning Paths in Education. We detail preliminary experiments as evaluated on the challenge's public leaderboard since the ground truth causal structure has not been publicly released, making detailed local evaluation impossible.
Nischal Ashok Kumar, Wanyong Feng, Jaewook Lee, Hunter McNichols, Aritra Ghosh, Andrew Lan
2023-05-11T21:20:29Z
http://arxiv.org/abs/2305.16165v2
# A Conceptual Model for End-to-End Causal Discovery in Knowledge Tracing ###### Abstract In this paper, we take a preliminary step towards solving the problem of causal discovery in knowledge tracing, i.e., finding the underlying causal relationship among different skills from real-world student response data. This problem is important since it can potentially help us understand the causal relationship between different skills without extensive A/B testing, which can potentially help educators to design better curricula according to skill prerequisite information. Specifically, we propose a conceptual solution, a novel causal gated recurrent unit (GRU) module in a modified deep knowledge tracing model, which uses i) a learnable permutation matrix for causal ordering among skills and ii) an optionally learnable lower-triangular matrix for causal structure among skills. We also detail how to learn the model parameters in an end-to-end, differentiable way. Our solution is placed among the top entries in Task 3 of the NeurIPS 2022 Challenge on Causal Insights for Learning Paths in Education. We detail preliminary experiments as evaluated on the challenge's public leaderboard since the ground truth causal structure has not been publicly released, making detailed local evaluation impossible. Causal Discovery, Knowledge Tracing, Response Data 2022 ac acmcopyright ## 1 Introduction Knowledge Tracing (KT) [1] refers to the problem of estimating a student's understanding or mastery of certain skills, concepts, or knowledge components through their responses to questions and using these estimates to predict future performance. KT methods are frequently utilized in modern online education platforms to determine the knowledge levels of many students to enable the platform to provide personalized feedback and recommendations, ultimately leading to better learning results [19]. KT methods are limited in how they represent the relationship between skills; One key limitation is that most do not model the **causal** relationships between skills. Most KT methods simply treat human expert-provided skill tags as a flat structure (with a few exceptions, such as [27], that organize skills hierarchically as trees). As a result, these models are not capable of providing meaningful pedagogical insights, i.e., predicting future student performance if a particular instructional plan is applied instead of the actual plan applied. Causal analysis tools are a perfect fit to address these limitations in KT. The task of _causal discovery_, i.e., learning causal relationships among different skills from observational data, is especially important. First, it helps educators learn prerequisite relationships among skills. This can guide educators in ordering topics within their curriculum and can guide students to review prerequisite information when they are stuck on a question [2]. Second, causal relationships among skills helps us with the task of _causal inference_, i.e., estimating the effect of a particular pedagogical treatment or intervention. Traditionally, these tasks are addressed through randomized controlled trials which are difficult to scale. Therefore, incorporating causal discovery into KT methods has the potential to become a scalable alternative since it can be done solely from observational student response data. Performing causal discovery directly from observational student response data is challenging since it is not straightforward to estimate treatment effects from observational data with incomplete or no knowledge of the causal relationship between skills. This problem is referred to as the _end-to-end causal inference_ problem, where we discover the causal graph and estimate treatment effects together. ### Contributions In this paper, we take a **preliminary** step towards learning causal ordering among skills from student response data. This task is proposed in the NeurIPS 2022 Challenge on Causal Insights for Learning Paths in Education1. Our proposed **conceptual** solution is, to the best of our knowledge, the first KT method to learn the causal structure among human expert-provided skill tags directly from observational data in an _end-to-end_ manner. Specifically, our contributions in this paper are as follows: Footnote 1: [https://eedi.com/projects/neurips-2022](https://eedi.com/projects/neurips-2022) * First, we propose an interpretable _causal structure_ model that characterizes both i) the dependency among skills using a lower-triangular matrix and ii) their prerequisite ordering using a permutation matrix. We hypothesize that this module can be combined with any existing KT method that rely on human expert-provided skill tags. * Second, as a (among the top) solution2 to Task 3 in the NeurIPS 2022 Challenge, we apply our causal structure module to a variant of deep knowledge tracing (DKT) [17], with a _causal_ gated recurrent unit (GRU) module at its core, due to i) the simple nature of DKT and ii) its good empirical performance in our experiments. Footnote 2: The code for our solution can be found at: [https://github.com/umass-ml4ed/Neurips-Challenge-22](https://github.com/umass-ml4ed/Neurips-Challenge-22) * Third, we detail our experimental results based on the public leaderboard of the NeurIPS 2022 Challenge. We are honest up front that our evaluation is limited since i) the ground-truth causal structure data is not publicly released and ii) the nature of this brand new task means that there are no baselines to compare against. ## 2 Related Work ### KT methods Existing KT methods can be classified along several different axes, the first of which is how they represent the student knowledge representation variable \(h\). Classic Bayesian KT methods, such as those in [9, 15, 28], treat student knowledge as a latent binary variable. Recent methods like deep learning-based KT methods, such as [4, 14, 17, 22, 29], treat student knowledge as hidden states in neural networks. This setup results in models that excel at predicting future performance but have limited interpretability [3]. Another major axis is how KT methods represent responses, questions, skills, and time steps. To represent student responses, most existing KT methods treat them as binary-valued indicating response correctness. However, a few methods, such as option tracing [5] and predict partial analysis [26], have characterized student responses as non-binary-valued by analyzing the specific options selected on multiple-choice questions. Another exception is [11], which uses large language models to predict open-ended student responses in a generative way. To represent questions and skills, most existing KT methods one-hot encode them based on question IDs or skill tags [24], except [11, 12]. To represent time steps, most existing KT methods treat each question as a discrete time step, with a few exceptions such as [25], which considers the exact, continuous time elapsed between responses. ### Causal Analysis Methods In the field of education, there exist very few works on causality and especially few in the context of KT. [8] is closely related to our work, where the authors study the relationship between courses in higher educational institutions using historical student performance data. They use matching methods and regression to determine the average treatment effect (ATE). Along similar lines, [20] and [21] developed theory and methods for analyzing A/B testing data and presented studied data collected from real-world randomized controlled trials. These works focus on causal inference, i.e., assuming that the structure is given and the focus is on estimating the treatment effect. However, the data we use from Eedi contains fine-grained skills, defined as the smallest elements of learning among primary/middle school students. Therefore, our work is different from these works in terms of both the goal and the educational context. The only existing works that study causal discovery in the context of KT are [10] and [13]. The former uses a special model structure that has some similarity to ours to model knowledge state transitions among uninterpretable latent skills. The authors showed that their method, while simple, is highly accurate in predicting unobserved student responses, but do not evaluate on whether the identified causal structure is valid. Our proposed method to learn latent causal structure is closely based on the structural equation model (SEM) [16]. SEM enables us to estimate the relationships between observed and latent variables, offering valuable insights into their underlying relationships. The hypothesized causal relationship among variables is represented as a directed acylic graph (DAG). In this work, our goal is to learn the causal structure graph \(\mathcal{G}\). ## 3 Methodology We now detail our conceptual causal KT method. ### Basic Setup The basic KT model contains two components: \[\mathbf{h}_{j,t}\sim f(\mathbf{h}_{j,t-1},\mathbf{x}_{j,t}). \tag{1}\] \[p(Y_{j,t}\,|\,\mathbf{h}_{j,t},i_{j,t}). \tag{2}\] For a student \(j\) at time step \(t\), The knowledge estimation component in Eq. (1) estimates the current knowledge state \(\mathbf{h}_{j,t}\) given the previous knowledge state \(\mathbf{h}_{j,t-1}\) and the student's performance on the problem \(\mathbf{x}_{j,t}\) as inputs. The response prediction component in Eq. (2) outputs the prediction of the student's likelihood of answering the next question \(Y_{j,t}\) correctly given the current knowledge state \(\mathbf{h}_{j,t}\) and the next question index \(i_{j,t}\) as input. During the learning process, the KT model needs to maximize the predicted likelihood across responses of all students, i.e., \(\sum_{j}\sum_{t}\log p(Y_{j,t}\,|\,Y_{j,1},\ldots,Y_{j,t-1})\). We adopt the DKT setup detailed in [18] for consistency. Since incorporating causal learning into the base KT model introduces additional parameters, we use the gated recurrent unit (GRU) as the transition model instead of long short-term memory (LSTM) for computational efficiency. For response prediction, we simply use a single linear layer over the hidden states of the GRU. For causal discovery, i.e., learning the causal structure among skills, we use the causal GRU module detailed below. Figure 1: The implementation of a causal GRU cell. All the GRU weight matrices, \(\mathbf{W}_{z}\), \(\mathbf{W}_{r}\), and \(\mathbf{W}\) are multiplied by the causal mask \(\mathbf{M}=\mathbf{PLP}^{T}\), resulting in \(\mathbf{W}^{\prime}_{z}\), \(\mathbf{W}^{\prime}_{r}\), and \(\mathbf{W}^{\prime}\). ### Causal GRU We now detail the structure of the Causal GRU. From now on, we drop the student index \(j\) for notation simplicity. We define a permuted causal mask \(\mathbf{M}\) that represents the causal ordering and structure between skills. The \(\mathbf{L}\) matrix represents the _causal structure/skill dependency_, and the \(\mathbf{P}\) matrix represents the _causal ordering_. The permuted causal mask \(\mathbf{M}\) is calculated in Eq. (3) as first multiplying \(\mathbf{P}\) by \(\mathbf{L}\) to obtain the updated causal structure and multiplying by \(\mathbf{P}^{T}\) to transform the causal structure into the original space. The parameters in the Causal GRU are masked, i.e., element-wise multiplied by the permuted causal mask \(\mathbf{M}\) in Eq. (4). By masking out some parameters, we zero out parameters that do not satisfy the causal graph. This step ensures that there is no relationship between the hidden states of the non-causally dependent skills in latent student knowledge states. The latest student knowledge state estimation \(\mathbf{h}_{t}\) is calculated in Eq. (5). The input \(\mathbf{x}_{t}\) is represented as a one-hot vector with the dimension size equals to the number of skills \(C\). The entry of value \(\pm 1\) represents whether the student can correctly answer the question corresponding to the skill. The input \(\mathbf{h}_{t-1}\) is the previous student knowledge state estimation. The implementation detail of a Causal GRU cell can be found in Fig. 1. \[\mathbf{M}=\mathbf{PLP}^{T}, \tag{3}\] \[\mathbf{W}^{{}^{\prime}}=\mathbf{M}\odot\mathbf{W},\] (4) \[\mathbf{h}_{t}=\mathit{GRU}_{c}(\mathbf{h}_{t-1},\mathbf{x}_{t}). \tag{5}\] #### 3.2.1 Causal Ordering One important element of the causal GRU is the _causal ordering_ matrix \(\mathbf{P}\), which we set to be a permutation matrix. By definition, a permutation matrix has exactly one entry of 1 in each row and each column and 0s elsewhere. Since multiplying a matrix by a permutation matrix permutes the order of the columns/rows of that matrix, the permutation matrix is naturally capable of sorting skills into order based on prerequisite relationships. However, the binary and discrete nature of the permutation matrix makes the learning process non-differentiable. To solve this problem, we introduce a relaxed version of the problem by approximating a permutation matrix with a doubly stochastic matrix, i.e., one where all entries are non-negative and the summation of each column/row is equal to 1, i.e., \(P_{i,k}\in[0,1]\), \(\sum_{k}P_{i,k}=1\,\forall i\), \(\sum_{i}P_{i,k}=1\,\forall k\). Instead of learning a doubly stochastic matrix directly, which is very difficult, we learn a matrix of free parameters \(\mathbf{\tilde{P}}\), from which we can obtain \(\mathbf{P}\) after applying the _Sinkhorn_ operator [23]\(\mathbf{P}=\text{Sinkhorn}(\mathbf{\tilde{P}})\). The _Sinkhorn_ operator works as follows: First, starting with the base matrix \(\mathbf{\tilde{P}}\), we subtract the largest entry of the matrix from each entry, multiply each entry with the temperature hyper-parameter, and pass it through an exponential function. Second, we apply a series of row and column normalizations by dividing each entry of the column/row by the summation of all the entries in the column/row. In our implementation of the _Sinkhorn_ operator, there are two hyper-parameters: _temperature_ and _unroll_. The temperature hyper-parameter specifies the extent of the continuous relaxation: the larger the temperature hyper-parameter, the closer \(\mathbf{P}\)'s entries are to either 0 or 1. The unroll hyper-parameter specifies the number of times row/column normalization is carried out: the more times the normalization is applied, the closer \(\mathbf{P}\) is to satisfying the row/column normalization constraints. #### 3.2.2 Causal Structure The other important element of the causal GRU is the _causal structure/ skill dependency_ matrix \(\mathbf{L}\), which we set to be lower triangular. By definition, a lower triangular matrix is one in which all the elements above the principal diagonal of the matrix are 0. This matrix is important since it specifies the causal structure among the skills. Once the skills are ordered using the _causal ordering_ matrix \(\mathbf{P}\), we apply the _causal structure_ matrix \(\mathbf{L}\) to regularize student knowledge state transitions across time steps. Due to its lower-diagonal structure, an entry \(L_{i,k}>0\) with \(i>k\) implies that skill \(k\) is a prerequisite of skill \(i\). Therefore, since the causal GRU weight matrices are masked by the \(\mathbf{PLP}^{\mathbf{T}}\) matrix, at the next time step \(t\), the entry in the latent student knowledge vector that corresponds to skill \(i\), \([\mathbf{h}_{t}]_{i}\), depends only on the entries in the previous knowledge state that correspond to prerequisites of skill \(i\), i.e., \([\mathbf{h}_{t-1}]_{k}\,\forall k\) s.t. \(L_{i,k}>0\). The L matrix being lower diagonal ensures that the resulting causal structure is a DAG. This means that if skill \(C_{1}\) depends on \(C_{2}\) then \(C_{2}\) cannot depend on \(C_{1}\). As a concrete example, Fig. 2 visualizes the effect of applying a mask \(\mathbf{M}=\mathbf{PLP}^{T}\) on the skill matrix \(C\). The skill matrix \(C\) represents 5 skills where each skill is represented as a one-hot vector. The causal ordering matrix \(\mathbf{P}\) is applied to the skill matrix to give a skill ordering of \(C_{3}\), \(C_{2}\), \(C_{1}\), \(C_{5}\), \(C_{4}\) which is in the order of decreasing pre-requisites (\(C_{3}\) being the most pre-requisite of all skills). We further apply the \(\mathbf{L}\) matrix (in this case a lower diagonal matrix with all ones) that specifies that every subsequent skill depends on the preceding one. For example, it specifies that \(C_{4}\) depends on all of \(C_{1}\), \(C_{2}\), \(C_{3}\), and \(C_{5}\); \(C_{5}\) depends on \(C_{1}\), \(C_{2}\), and \(C_{3}\) and so on. One easy choice for \(\mathbf{L}\) is to set its lower-diagonal part to be all ones; this setting means that every subsequent skill causally depends on all the previous skills. However, in practice, causal dependencies among skills may not be this dense; most skills will only be causally related to a few other skills. To resolve this problem, we can make the \(\mathbf{L}\) matrix learnable by restricting the lower diagonal elements to be either 0 or Figure 2: The intuition behind causal GRU. Here, \(\mathbf{M}=\mathbf{PLP}^{T}\). \(M_{i,j}=1\) if and only if \([\mathbf{h}_{t-1}]_{j}\) influences \([\mathbf{h}_{t}]_{i}\) where \(\mathbf{h}_{t}\) is the student’s knowledge state at time-step \(t\). 1. We do this by learning a matrix of free parameters \(\tilde{\mathbf{L}}\), from which we can obtain \(\mathbf{L}\) after applying the element-wise sigmoid operator \(\mathbf{L}=\text{sigmoid}(\alpha\tilde{\mathbf{L}})\). A large value of the temperature parameter \(\alpha>0\) will push entries to be close to either 0 or 1 but not in between. #### 3.2.3 Skill Embeddings We use a learnable dense embedding to represent each skill and alter both the input and the output layers of the causal GRU. We learn an embedding matrix \(\mathbf{E}\) where each column \(\mathbf{e}_{c}\) represents the embedding of skill \(c\). We treat the dimension of \(\mathbf{e}_{c}\) as a hyperparameter. For the input layer, we use another learnable embedding \(\mathbf{d}\), which is either added or subtracted from the skill embedding depending on the correctness of the previous answer. We then learn the input to the causal GRU using \(NN(\mathbf{e}_{c}\pm\mathbf{d})\) where \(NN\) is a single-layer neural network. For the output, we use \(p(Y_{t})\sim NN_{o}([\mathbf{e}_{c}^{T},\tilde{\mathbf{h}}_{t}^{T}]^{T})\), where \(\tilde{\mathbf{h}}_{t}\) is a masked version of \(\mathbf{h}_{t}\) with the only non-zero entry being the one that corresponds to the skill of the next question that we are predicting. Here \(NN_{o}\) is a single-layer neural network that predicts the probability of the correct answer. ## 4 Experiments ### Data and Challenge Description We participated in Task 3 of the NeurIPS Challenge co-hosted by Eedi, Microsoft Research, and Rice University [6]. The goal of this task is to discover the causal relationships between different skills, or _constructs_ (as defined by Eedi, which means the smallest unit of learning; for example, "mental addition and subtraction" is a construct within the main topic "math"), and evaluate the effect of learning one _skill_ on another. Questions in this dataset are multiple-choice, with a single correct option and three distractors that are designed to assess a single skill. The challenge hypothesis is that it is possible to discover the hidden relationship behind different skills through analyzing the responses to a large number of diagnostic questions. The challenge uses an \(F_{1}\) score-based metric which calculates the similarity between the predicted adjacency matrix \(\hat{A}\) and the true adjacency matrix \(A\). ### Model Learning and Hyperparameters The dataset consists of 1855 skills and 6468 students. We set the default skill embedding dimension to 300. We use an adaptive strategy and start with small values of the temperature and unroll and linearly increase their values over a set of epochs. We set the initial temperature and unroll to 2 and 5 respectively and linearly increase the values with a factor of 2 and 5 respectively for every 10 epochs. We train the model for 50 epochs with a batch size of 64, and a learning rate of 5e-4 using four Nvidia Tesla 2080 GPUs with a GPU memory of 12GB each which takes about 6 hours. After training, to obtain the final _causal structure_ matrix \(\mathbf{L}\) we apply a post-processing step. We define a hyperparameter \(\kappa\) such that all values of the \(\mathbf{L}\) matrix less than \(\kappa\) are set to 0 and all values greater than or equal to \(\kappa\) are set to 1. ### Results and Discussion In Table 1, we show the results of different model variants. We report the leaderboard \(F_{1}\) score obtained in our experiments. We see that the \(F_{1}\) score is 0.11 for the case where we are not using the skill embeddings. Using an adaptive strategy increases the \(F_{1}\) score by 0.06, which suggests that the adaptive strategy is helpful during model training. We also report the results corresponding to the skill embeddings and the learnable \(\mathbf{L}\). We see that using an embedding dimension of 300 almost doubles the \(F_{1}\) score. This observation confirms our hypothesis that using skill embeddings increases the representational capacity of the neural network model and hence performs better. When the causal structure matrix \(\mathbf{L}\) is learnable, we see that we get a further 0.1 increase in the \(F_{1}\) score. The increase in the \(F_{1}\) score on using the learnable \(\mathbf{L}\) configuration of the model shows that it is better to learn the explicit causal dependence of skills instead of assuming a dense representation where each skill depends on all the skills preceding it. ## 5 Conclusions and Future Work In this work, we proposed a conceptual method for learning causal structure among skills from student response data, as a part of our solution to the NeurIPS 2022 Challenge on Causal Insights for Learning Paths in Education. Our method is a novel causal knowledge tracing method that enables us to learn the causal structure in an end-to-end manner while performing knowledge tracing. Unfortunately, due to space limitations, we cannot show a qualitative example of the learned causal structure among skills. We believe that our work should inspire future works in the direction of building causal knowledge tracing methods on observational student response data. First, it is important to evaluate the accuracy of the learned causal structure between skills, either against human domain experts or via A/B testing. Second, it is important to apply our causal module to more flexible knowledge tracing methods, such as attention-based methods, to see whether it is applicable and effective. Third, it is important to develop ways to leverage both the opinion of human experts and our data-driven causal discovery model, in a human-in-the-loop manner. The former may be less accurate but the latter requires extensive training data; a hybrid human-AI collaboration may be able to take the best from both sides. ## 6 Acknowledgements The authors thank Christoph Studer for insights on permutation matrices and the NSF (under grants 1917713, 2118706, 2202506, 2215193) for partially supporting this work. We also thank the organizers of the 2022 NeurIPS Challenge on Causal Insights for Learning Paths in Education for proposing a new and meaningful task. \begin{table} \begin{tabular}{|l|l|} \hline **Model** & **Leaderboard** \\ **Variant** & \(F_{1}\)**Score** \\ \hline No Embedding & 0.11 \\ No Adaptive & \\ \hline No Embedding & 0.17 \\ Adaptive & \\ \hline Embedding (300D) & \\ Adaptive & 0.33 \\ \hline Embedding (300D) & \\ Adaptive & 0.43 \\ Learnable L & \\ \hline \end{tabular} \end{table} Table 1: Results on different model variants.
2305.00713
$B_c$-meson decays into $J/ψ$ plus a light meson in the improved perturbative QCD formalism
In the wake of measurements on $B_c^+ \to J/\psi K^+$, $B_c^+ \to J/\psi \pi^+\pi^-\pi^+$, and $B_c^+ \to J/\psi K^+ K^-\pi^+$ at Large Hadron Collider experiments, we propose to study the decays $B_c^+ \to J/\psi M^+$ comprehensively, with $M$ being the light charged pseudoscalar ($P$), vector ($V$), scalar ($S$), axial-vector ($A$), and tensor ($T$) mesons, within the improved perturbative QCD (iPQCD) formalism at leading order in the Standard Model. The theoretical predictions for experimental observables such as branching fractions, relative ratios, and longitudinal polarization fractions in the iPQCD formalism await near future examinations relying on the upgraded Large Hadron Collider, even the forthcoming Circular Electron-Positron Collider. We emphasize that the investigations on the factorizable-emission-suppressed or -forbidden decays like $B_c^+ \to J/\psi S^+$, $B_c^+ \to J/\psi A^+_{1^1\!P_1}$, and $B_c^+ \to J/\psi T^+$, should go definitely beyond naive factorization to explore the rich dynamics, which could, in turn, further help understand the QCD nature of $B_c$ meson, as well as that of related hadrons. The future confirmations on those predictions about the relative ratios between the branching fractions of $B_c^+ \to J/\psi b_1(1235)^+ (a_0(980)^+, a_0(1450)^+, a_2(1320)^+)$ and $B_c^+ \to J/\psi \pi^+$ could further examine the reliability of this iPQCD formalism. Because of containing only tree-level $\bar b \to \bar c$ transitions, the {\it CP} asymmetries in the $B_c^+ \to J/\psi M^+$ decays exhibit naturally zero.
Xin Liu
2023-05-01T08:24:29Z
http://arxiv.org/abs/2305.00713v4
# The \(B_{c}\)-meson decays into \(J/\psi\) plus a light meson in the iPQCD formalism ###### Abstract In the wake of measurements on \(B_{c}^{+}\to J/\psi K^{+}\), \(B_{c}^{+}\to J/\psi\pi^{+}\pi^{-}\pi^{+}\), and \(B_{c}^{+}\to J/\psi K^{+}K^{-}\pi^{+}\) at Large Hadron Collider experiments, we propose to study the decays \(B_{c}^{+}\to J/\psi M^{+}\) comprehensively, with \(M\) being the light charged pseudoscalar (\(P\)), vector (\(V\)), scalar (\(S\)), axial-vector (\(A\)), and tensor (\(T\)) mesons, within the improved Perturbative QCD (iPQCD) formalism at leading order in the Standard Model. The theoretical predictions for experimental observables such as branching fractions, relative ratios, and longitudinal polarization fractions in the iPQCD formalism await near future examinations relying on the upgraded Large Hadron Collider, even the forthcoming Circular Electron-Positron Collider. We emphasize that the investigations on the factorizable-emission-suppressed or -forbidden decays like \(B_{c}^{+}\to J/\psi S^{+}\), \(B_{c}^{+}\to J/\psi A_{1P_{1}}^{+}\), and \(B_{c}^{+}\to J/\psi T^{+}\), should go definitely beyond naive factorization to explore the rich dynamics, which could, in turn, further help understand the QCD nature of \(B_{c}\) meson, as well as that of related hadrons. The future confirmations on those predictions about the relative ratios between the branching fractions of \(B_{c}^{+}\to J/\psi b_{1}(1235)^{+}(a_{0}(980)^{+},a_{0}(1450)^{+},a_{2}(1320) ^{+})\) and \(B_{c}^{+}\to J/\psi\pi^{+}\) could further examine the reliability of this iPQCD formalism. Because of containing only tree-level \(\bar{b}\to\bar{c}\) transitions, the CP asymmetries in the \(B_{c}^{+}\to J/\psi M^{+}\) decays exhibit naturally zero. pacs: 13.25.Hw, 12.38.Bx, 14.40.Nd + Footnote †: preprint: JSNU-PHY-HEP-01/23
2305.10500
Learning Likelihood Ratios with Neural Network Classifiers
The likelihood ratio is a crucial quantity for statistical inference in science that enables hypothesis testing, construction of confidence intervals, reweighting of distributions, and more. Many modern scientific applications, however, make use of data- or simulation-driven models for which computing the likelihood ratio can be very difficult or even impossible. By applying the so-called ``likelihood ratio trick,'' approximations of the likelihood ratio may be computed using clever parametrizations of neural network-based classifiers. A number of different neural network setups can be defined to satisfy this procedure, each with varying performance in approximating the likelihood ratio when using finite training data. We present a series of empirical studies detailing the performance of several common loss functionals and parametrizations of the classifier output in approximating the likelihood ratio of two univariate and multivariate Gaussian distributions as well as simulated high-energy particle physics datasets.
Shahzar Rizvi, Mariel Pettee, Benjamin Nachman
2023-05-17T18:11:38Z
http://arxiv.org/abs/2305.10500v2
# Learning Likelihood Ratios with Neural Network Classifiers ###### Abstract The likelihood ratio is a crucial quantity for statistical inference in science that enables hypothesis testing, construction of confidence intervals, reweighting of distributions, and more. Many modern scientific applications, however, make use of data- or simulation-driven models for which computing the likelihood ratio can be very difficult or even impossible. By applying the so-called "likelihood ratio trick," approximations of the likelihood ratio may be computed using clever parametrizations of neural network-based classifiers. A number of different neural network setups can be defined to satisfy this procedure, each with varying performance in approximating the likelihood ratio when using finite training data. We present a series of empirical studies detailing the performance of several common loss functionals and parametrizations of the classifier output in approximating the likelihood ratio of two univariate and multivariate Gaussian distributions as well as simulated high-energy particle physics datasets. ## I Introduction Claiming a scientific discovery requires a hypothesis test, i.e. a statistical threshold for claiming that one's experimental data reject the null hypothesis in favor of an alternative hypothesis. This might involve two probability densities: * \(H_{0}\) (the null hypothesis) * \(H_{1}\) (the alternative hypothesis) By the Neyman-Pearson lemma [1], the strongest ("uniformly most powerful") measure of whether the experimental data \(x\) support \(H_{0}\) vs. \(H_{1}\) is a likelihood ratio test. These tests are particularly widespread in reporting results in High-Energy Physics (HEP), but are also commonly used for statistical analyses across astrophysics, biology, medicine, and other scientific domains concerned with hypothesis testing or confidence intervals. The need for likelihood ratios goes beyond hypothesis testing, too - they can also be used to reweight a distribution to align with a target distribution, such as reweighting simulation samples to match real data [2; 3; 4; 5; 6; 7; 8; 9]. In the simplest form of a likelihood ratio test, where \(H_{0}\) and \(H_{1}\) are fully-defined by parameters \(\theta_{0}\) and \(\theta_{1}\), the background-only hypothesis is either rejected (or not) depending on the value of the ratio of likelihoods \(p(\theta_{0}\mid x)\) under \(H_{0}\) and \(p(\theta_{1}\mid x)\) under \(H_{1}\) in relation to the desired significance level. In practice, however, the probability densities \(H_{0}\) and \(H_{1}\) may not be explicitly known. Worse, they might be nearly impossible to compute, such as in instances where they are generated by a complex simulation model. In these cases, we can use machine learning to directly approximate the likelihood ratio itself, bypassing the need to approximate the individual probability densities. A classifier function \(f(x)\) (for instance, from a neural network) designed to distinguish data sampled from \(H_{0}\) (\(f(x)\to 0\)) vs. \(H_{1}\) (\(f(x)\to 1\)) can be used to approximate the likelihood ratio by minimizing a proper loss functional (defined in Section II): \[\operatorname*{argmin}_{f}L[f]=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{1})}= \mathcal{L}(x). \tag{1}\] For instance, in the familiar case of training a classifier by minimizing the binary cross-entropy loss (see I), the optimal decision function \(f(x)\) is: \[f(x)=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{0})+p(x\mid\theta_{1})}. \tag{2}\] We can then approximate the likelihood ratio with a monotonic transformation of the neural network output \(f(x)\)1: Footnote 1: This notation assumes balanced training sets for simplicity. With imbalanced classes, one would need to modify the likelihood ratio to include prior factors \(p(\theta_{i})\), though the likelihood ratio trick will still apply [10]. \[\frac{f(x)}{1-f(x)} =\frac{\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{0})+p(x\mid\theta _{1})}}{1-\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{0})+p(x\mid\theta_{1})}} \tag{3}\] \[=\frac{p(x\mid\theta_{0})}{\underline{p(x\!+\!\theta_{0})}+p(x \mid\theta_{1})-\underline{p(x\!+\!\theta_{0})}}\] (4) \[=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{1})}=\mathcal{L}(x). \tag{5}\] This procedure, sometimes called the "likelihood ratio trick", is well-known in statistics (see e.g. [11; 12; 13]) and has been frequently used in particle physics [14; 15; 16; 2; 6; 10; 26]. A number of different loss functionals beyond binary cross-entropy can be defined to satisfy this setup, but in practice, not all such classifiers will perform equally well when approximating the likelihood ratio. In this paper, we perform a series of empirical studies to understand how different choices of loss functional and parametrization of the resulting classifier affect the performance of likelihood ratio approximation for pairs of distributions. Several recent works have investigated some improved configurations for the likelihood ratio trick in certain scientific contexts. [27] introduces a new likelihood estimation procedure as an extension of [14] using binary cross-entropy loss with SELU [28] activation. [29] notes that for one- and two-dimensional toy simulations of particle physics datasets, the maximum likelihood classifier (MLC) loss performed better than the binary cross-entropy loss when estimating the likelihood ratio - the first application of MLC loss in particle physics. [10] directly compares linear and exponential parameterizations of maximum likelihood classifier loss with binary cross-entropy loss for one-dimensional Gaussians. [14] uses calibrated classifiers to improve likelihood ratio estimation, and [18; 19] define several different approaches to likelihood ratio estimation, including augmenting the likelihood ratio trick with score regression (Rascal, Sally, etc.). [30] introduces modified versions of the cross-entropy loss that show stronger performance under limited training dataset sizes than the typical cross-entropy loss, while [31] compares the estimation of the likelihood ratio via mean square loss with ELU [32] activation, cross-entropy loss with sigmoid activation, and a proposed exponential loss with no activation function on univariate Gaussian distributions. Still other methods use normalizing flows to determine the likelihood ratio by modeling the individual densities [33; 34] or to obviate the need for the likelihood ratio approximation for reweighting distributions [35]. In light of these existing studies, this work serves as a detailed comparison of a wide range of configurations of loss functionals and output parametrizations across datasets including one-dimensional Gaussians, multi-dimensional Gaussians, and simulated high-energy particle physics datasets. We aim to highlight some best practices and serve as a guide for approximating likelihood ratios with neural network classifiers in the wider scientific community, and particularly within the domains of particle physics and astrophysics. This paper is organized as follows. In Section II, we summarize the theoretical foundation for learning likelihood ratios with neural network classifiers. In Section III, we present a series of studies focused on optimizing likelihood ratio estimation for one-dimensional Gaussian distributions where the true likelihood ratio is exactly known. In Section IV, we extend these studies to multi-dimensional Gaussian distributions. In Section V, we present some more realistic examples using simulated high-energy physics data where the true likelihood ratio is approximated using a Normalizing Flow model [33]. Finally, we summarize our conclusions and recommendations for further studies in Section VI. ## II Learning likelihood ratios Let the parameters \(\theta_{0}\) and \(\theta_{1}\) define two distributions, \(p(x\mid\theta_{0})\) and \(p(x\mid\theta_{1})\), as described in Section II.A. of [10]. The goal is to determine or approximate the likelihood ratio \[\mathcal{L}(x)=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{1})} \tag{6}\] between the two distributions. Consider the general loss functional that depends on a learnable function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) and rescaling functions \(A:\mathbb{R}\rightarrow\mathbb{R}\) and \(B:\mathbb{R}\rightarrow\mathbb{R}\): \[\begin{split} L[f]=-\int\mathrm{d}x\bigg{(}& p(x\mid\theta_{0})A(f(x))\\ &+p(x\mid\theta_{1})B(f(x))\bigg{)}.\end{split} \tag{7}\] We can take the functional derivative of the loss functional to show that the extremum can be transformed to obtain the likelihood ratio: \[\frac{\delta L}{\delta f} =-\frac{\partial}{\partial f}\Big{(}p(x\mid\theta_{0})A(f(x))+p (x\mid\theta_{1})B(f(x))\Big{)} \tag{8}\] \[=-\bigg{(}p(x\mid\theta_{0})A^{\prime}(f(x))\cdot f^{\prime}(x)\] (9) \[\qquad+p(x\mid\theta_{0})B^{\prime}(f(x))\cdot f^{\prime}(x)\bigg{)}\] \[=0\iff-\frac{B^{\prime}(f(x))}{A^{\prime}(f(x))}=\frac{p(x\mid \theta_{0})}{p(x\mid\theta_{1})}=\mathcal{L}(x). \tag{10}\] Given that \(-B^{\prime}(f)/A^{\prime}(f)\) is a monotonic rescaling of \(f\) and \(L[f]\) is convex, the learned function \(f\) is an optimal classifier. In this paper, we first consider the four loss functionals defined by the rescaling functions in Table 1. While this is by no means an exhaustive list of all possible loss functionals, it includes a diverse array of different loss configurations. As detailed in Sec. III.3, we also consider generalized forms of two of these four loss functionals. A neural network parametrizes the learned function \(f\) as \(\phi(z)\), where \(z\) is the pre-activation output of the network and \(\phi\) is the final activation function. For the binary cross entropy (BCE) and mean squared error (MSE) losses, \[\mathcal{L}(x)=-\frac{B^{\prime}(f)}{A^{\prime}(f)}=\frac{f}{1-f}, \tag{11}\] so the likelihood ratio is the odds ratio of the learned function. That is, minimizing the BCE and MSE losses defines a classifier that computes \[\operatorname*{argmin}_{f}L[f]=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{0})+p(x \mid\theta_{1})}\in(0,1) \tag{12}\] . To parametrize \(f\) such that the likelihood ratio is non-negative, we require that \(\phi:\mathbb{R}\rightarrow(0,1)\). However, for the maximum likelihood classifier (MLC) and square root (SQR) losses, \[\mathcal{L}(x)=-\frac{B^{\prime}(f)}{A^{\prime}(f)}=f, \tag{13}\] so the likelihood ratio is the learned function, without transformation. \[\operatorname*{argmin}_{f}L[f]=\frac{p(x\mid\theta_{0})}{p(x\mid\theta_{1})} \tag{14}\] In this case the loss-minimizing classifier computes the likelihood ratio \(\mathcal{L}(x)\in(0,\infty)\). The requirement on \(\phi\) is that \(\phi:\mathbb{R}\rightarrow(0,\infty)\). ## III Univariate Gaussians In our first case study, we consider two Gaussian distributions with slightly different means and unit variances: \(X_{0}\sim\text{Normal}(+0.1,1)\) and \(X_{1}\sim\text{Normal}(-0.1,1)\). We also considered univariate Beta and Gamma distributions - these results can be found in Appendix A. While one could in principle use Boosted Decision Trees (BDTs) instead of neural networks for the classifiers, we found that neural networks outperformed BDTs across a variety of test cases, as shown in Appendix C. All of our classifiers are therefore implemented as neural networks using Keras [36] with a Tensorflow [37] backend and Adam[38] optimizer. Each classifier consists of three hidden layers with 64, 128, and 64 nodes, sequentially. Rectified Linear Unit (ReLU) activation functions are used for the intermediate layers, with the activation for the output layer depending on the loss used to train the neural network and the parametrization being tested. Each of the three hidden layers is followed by a dropout layer with dropout probability of 10%. Unless otherwise stated, the networks were trained with 1,000,000 samples (750,000 used for training and 250,000 used for validation). 100,000 separate samples were used to evaluate the networks' performances (in particular, to calculate their mean absolute errors). Each network was trained for up to 100 epochs with a batch size of 10%, as in [10]. If the validation loss did not decrease for 10 consecutive epochs, the training was stopped (early stopping with a patience of 10). No detailed hyperparameter optimization was done. ### Naive Implementation #### iii.1.1 Motivation The naive parametrization for \(\phi(z)\) in the case of the BCE and MSE losses is \(\phi=\sigma\), the logistic function commonly used as the activation for classification tasks. In the case of the MLC and SQR losses, the most common parametrization would be \(\phi=\text{ReLU}\), the rectified linear unit activation. We chose these parametrizations for our naive implementation. To better understand how these common parametrizations of the classifiers affect their ability to learn the likelihood ratio, we implemented neural network architecture with each of the four losses, trained them to classify between the two Gaussian distributions. Since the true likelihood ratio is known, we can compare how well each of the four classifiers learns the likelihood ratio function. #### iii.1.2 Methods We implemented each classifier using an identical neural network architecture, differing only in the final activation, which acted as either the logistic (for the BCE and MSE classifiers) or ReLU (for the MLC and SQR classifiers) parametrizations for the learned function. We then trained each of the four classifier architecture on the dataset 100 times each, using the classifier's corresponding loss functional. Each classifier was evaluated on the interval \((-6,6)\) and transformed into the likelihood ratio over that same interval using the appropriate transformation from equations 11 and 13. We averaged the resulting 100 predictions for the likelihood ratio. To numerically compare the performances of different classifiers in learning the likelihood ratio, we computed their empirical mean absolute errors over 100,000 samples. For \(\hat{\mathcal{L}}\) the estimated likelihood ratio, the mean absolute error is defined as \[\text{MAE}[\hat{\mathcal{L}}]=\mathbb{E}\left[\big{|}\mathcal{L}(X)-\hat{ \mathcal{L}}(X)\big{|}\right]. \tag{15}\] We computed this for each classifier as an empirical average over the 100 different likelihood ratio predictors to \begin{table} \begin{tabular}{l c c} Loss Name & \(A(f)\) & \(B(f)\) \\ \hline \hline Binary Cross-Entropy & \(\ln(f)\) & \(\ln(1-f)\) \\ Mean Squared Error & \(-(1-f)^{2}\) & \(-f^{2}\) \\ Maximum Likelihood Classifier & \(\ln(f)\) & \(1-f\) \\ Square Root & \(-\frac{1}{\sqrt{f}}\) & \(-\sqrt{f}\) \\ \end{tabular} \end{table} Table 1: The rescaling functions \(A\) and \(B\) used to assemble the four different loss functionals considered. get a numerical measure of how well each predictor approximated the likelihood ratio. Next, we examined how varying the amount of data upon which the classifiers were trained affected their performance. In particular, for each loss, we trained 100 classifiers for each \(N\in\{10^{2},10^{3},10^{4},10^{5},10^{6},10^{7}\}\). For each value of \(N\), \(0.75N\) observations were used for training and \(0.25N\) observations were used for validation. The value of \(N=10^{6}\) corresponds to our default sample size. As before, 100,000 samples were used to estimate the MAE for each value of \(N\). #### iii.1.3 Results Figure 1 displays the likelihood ratio fits averaged over 100 models for each of the four classifiers, compared against the true likelihood ratio. The largest deviations here are in regions far outside the bulk of the training data, where the models will largely be extrapolating. We are primarily concerned with evaluating the likelihood ratio approximation where the data has good coverage: approximately \(x\in[-3,3]\). In Fig. 2, we show how the expected error for classifiers trained with each choice of loss functional decreases as the sample size increases. #### iii.1.4 Discussion The four losses result in similarly performing fits near \(x=0\); however, the MLC and SQR losses rapidly diverge from the true likelihood ratio in regions for which there is little data coverage. By comparison, the BCE and MSE perform much better, staying within 3% of the true likelihood ratio even in regions far outside the bulk of the data (\(|x|>4\)). The performance of these classifiers varies with the size of the training dataset \(N\). For relatively small training sample sizes (\(N<1000\)), the scale of the mean absolute error is dominated by the inductive bias present in each activation function: BCE and MSE losses (both using \(\sigma(z)\) activation) are nearly identical in size, while MLC and SQR losses (both using \(\mathrm{ReLU}(z)\) activation) are similarly clustered. As \(N\) increases, the MLC and SQR classifier performances approach those of the BCE and MSE classifiers. However, even for values of \(N\) larger than \(10^{5}\), the SQR classifier's MAE remains at least 0.015 above the average performance of the BCE/MSE classifiers. ### Parametrizing \(f\) The parametrization of the learned function can be adjusted. In the naive implementation, the BCE and MSE neural networks use a logistic activation function, while the MLC and SQR neural networks use a ReLU activation function. Let \(z(x)\) be the function that the neural network represents. Then \(f=\phi(z)\) is our classifier, where \(\phi\) is some parametrization of the learned function. In the cases described before, we have either \(f=\sigma(z)\) (for BCE and MSE) or \(f=\mathrm{ReLU}(z)\) (for MLC and SQR). However, for BCE and MSE, any function \(\phi:\mathbb{R}\rightarrow(0,1)\) will suffice. Two readily available such functions Figure 1: Average likelihood ratio fits for the four different losses. The MAEs are 0.0083, 0.0081, 0.0150, and 0.0254, for the BCE, MSE, MLC, and SQR likelihood ratio models, respectively. Figure 2: Mean absolute errors computed for the four different losses trained with increasingly larger sample sizes \(N\). are hyperbolic tangent and arctangent, adjusted to the appropriate range: \[f(z) =\frac{1}{2}\left(\tanh z+1\right), \tag{16}\] \[f(z) =\frac{1}{\pi}\left(\arctan z+\frac{\pi}{2}\right). \tag{17}\] In Fig. 3, we show the likelihood ratio fits, averaged over 100 models, for the logistic, hyperbolic tangent, and arctangent parametrizations of the BCE and MSE classifiers. In both cases, the default logistic parametrization performs the best, followed closely by the hyperbolic tangent parametrization, and followed distantly by the arctangent parametrization. This result is not surprising, as the logistic function is known to be well-suited for classification. For the MLC and SQR losses, we instead require any function \(\phi:\mathbb{R}\rightarrow(0,\infty)\). While the ReLU function is the default, there are other functions with such ranges, including: \[f(z) =z^{2}, \tag{18}\] \[f(z) =\exp z. \tag{19}\] Figure 4 displays the results of comparing the performances of the MLC and SQR losses in training classifiers with these parametrizations. The performances of the three parametrizations between the two losses are the same: in this case, the exponential parametrization performs remarkably better than the ReLU parametrization, and square parametrization performs the worst amongst all three. ### Generalized Loss Families #### iii.3.1 Motivation The MSE and SQR loss functionals are easily generalizable to a parametric family of loss functionals. While there are several possible parametrizations2 to choose from, we select the following for simplicity: for the MSE loss, we consider a power parameter \(p\in\mathbb{R}\), where \(p=2\) is the default value, and for the SQR loss, we consider a root parameter \(r\in\mathbb{R}\), where \(r=1\) is the default value. This yields the two families of losses presented in Table 2. Footnote 2: For example, to enforce non-singular behavior at \(r=0\) for SQR, one could consider \(A(f)=(1-f^{-\frac{\pi}{2}})/|r|\) and \(B(f)=(1-f^{\frac{\pi}{2}})/|r|\). Another interesting parametrization is \(A(f)=(f^{q}-1)/q\) and \(B(f)=1-f^{(q+1)}/(q+1)\), which is minimized at \(q=1\). Since the rescaling functions \(A\) and \(B\) have changed, the likelihood ratio recovered from \(f\) changes as well. For the \(p\)-MSE losses, for \(p\notin(0,1)\), \[\mathcal{L}(x) =-\frac{B^{\prime}(f)}{A^{\prime}(f)}=-\frac{-pf^{p-1}\cdot f^{ \prime}}{p(1-f)^{p-1}\cdot f^{\prime}} \tag{20}\] \[=\left(\frac{f}{1-f}\right)^{p-1}. \tag{21}\] We exclude the case where \(p\in(0,1)\) since the corresponding loss functional is not convex, and as such the likelihood ratio trick no longer works. And for the \(r\)-SQR losses, \[\mathcal{L}(x) =-\frac{B^{\prime}(f)}{A^{\prime}(f)}=-\frac{\frac{r}{2}f^{\frac {\pi}{2}-1}\cdot f^{\prime}}{\frac{\pi}{2}f^{-\frac{\pi}{2}-1}\cdot f^{\prime}} \tag{22}\] \[=f^{r}. \tag{23}\] A whole family of losses arises from both of the two original losses, each loss still maintaining the property that the function that minimizes the corresponding functional can recover the likelihood ratio. In addition to comparing how the four original losses performed against one another, we can compare among the losses in each of these two loss families. #### iii.3.2 Methods Since we were working over an uncountably infinite set of loss functionals, we decided to constrain our investigation to just the interval \([-2,2]\). We scanned along the interval \([-2,2]\); for each value \(p\) which we looked at, we trained 20 logistically-parametrized models on the \(p\)-MSE loss functional corresponding to that value of \(p\). Then we averaged the mean absolute errors of the 20 models together. We did the same for values of \(r\) in the interval \([-2,2]\) as well; in that case, the models were parametrized with the exponential activation function instead. We expect that near \(p^{*}=1\) and \(r^{*}=0\), where the generalized loss functionals will resemble the MAE loss, the figure-of-merit of MAE will likely be minimized, too. Due to this intrinsic relationship between the choice of loss functional and figure-of-merit, we also considered two additional figures-of-merit for evaluating these scans: the Mean Ratio and the Null Statistic, defined as: \begin{table} \begin{tabular}{c c c} Loss Name & \(A(f)\) & \(B(f)\) \\ \hline \hline \(p\)-MSE & \(-(1-f)^{p}\) & \(-f^{p}\) \\ \(r\)-SQR & \(-f^{-\frac{\pi}{2}}\) & \(-f^{\frac{\pi}{2}}\) \\ \end{tabular} \end{table} Table 2: The generalization of the MSE and SQR loss functionals to entire families of losses. Values of \(p=2\) and \(r=1\) correspond to the original definitions of the loss functionals. Figure 4: Parametrizations of \(f\) for the MLC and SQR losses. (a) The average likelihood ratio fits of the ReLU, square, and exponential parametrizations for the MLC loss, with mean absolute errors 0.0148, 0.0684, and 0.0083, respectively. (b) The average likelihood ratio fits of the ReLU, square, and exponential parametrizations for the SQR loss, with mean absolute errors 0.0367, 0.6756, and 0.0075, respectively. Figure 3: Parametrizations of \(f\) for the BCE and MSE losses. (a) The average likelihood ratio fits of the logistic, hyperbolic tangent, and arctangent parametrizations for the BCE loss, with mean absolute errors 0.0080, 0.01240, and 0.0092, respectively. (b) The average likelihood ratio fits of the logistic, hyperbolic tangent, and arctangent parametrizations for the MSE loss, with mean absolute errors 0.0084, 0.0127, and 0.0094, respectively. \[\text{Mean Ratio}[\hat{\mathcal{L}}]=E[\hat{\mathcal{L}}(X)/\mathcal{L}(X)] \tag{24}\] \[\text{Null Statistic}[\hat{\mathcal{L}}]=|E_{0}[\mathcal{L}(X)]-E_{0}[\hat{ \mathcal{L}}(X)]| \tag{25}\] We found that the overall trends reported here using MAE were similar across these alternative figures-of-merit, though the trends were less dramatic when we used the Mean Ratio figure-of-merit. #### iii.3.3 Results In Figure 5, we show the performance of the classifiers trained by these losses when modifying their power and root parameters. The values \(p^{*}\) and \(r^{*}\) minimizing the MAE were \(p^{*}=1.08,1.24\) (with \(p^{*}=1.24\) having a similar performance to that of \(p^{*}=1.08\) while being more numerically stable) and \(r^{*}=0.018\). #### iii.3.4 Discussion In 5(a), we observe vertical features for \(p\in(0,1)\). This is to be expected, as the likelihood ratio trick does not apply in the range where the corresponding loss functional is non-convex. Similarly, the vertical feature in 5(b) is due to the fact that for \(r=0\), our loss functional is constant (\(L[f]=1\)), and thus it is not strictly convex; therefore the likelihood ratio again does not work. Values of \(p\) slightly less than \(0\) or slightly greater than \(1\) resulted in the smallest mean absolute errors, while values of \(r\) close to \(0\) resulted in the smallest mean absolute errors. This result was further investigated in Section III.5 in a simple, two-dimensional classifier model. ### Optimized Implementation Altering the parametrization of the learned function \(f\) or using a more generalized loss functional yielded considerable increases in performance from the initial parametrizations and loss functionals. In Figs. 6 and 7, we chose the best-performing parametrization for each loss (logistic for BCE and MSE; exponential for MLC and SQR), and, for the MSE and SQR, chose the best-performing loss functional from each loss family (\(p^{*}=1.24\) for MSE and \(r^{*}=0.018\) for SQR), and trained classifiers with each "optimized" parametrization and loss. This was done \(100\) times for each parametrization/loss, and the resulting likelihood ratio models were averaged. In the naive implementation, the BCE and MSE models performed the best, while the SQR model had an Figure 5: (a) The mean absolute errors averaged over models trained on the generalized MSE loss family for the logistic parametrization. The mean absolute error is minimized at \(p^{*}=1.08\), but we choose the second-lowest value (\(p^{*}=1.24\)) for stability, i.e. avoiding the steep increase in MAE near \(p=1\). The arrow indicates the typical choice of \(p=2\) for MSE loss. (b) The mean absolute errors averaged over models trained on the generalized SQR loss family for the exponential parametrization. The mean absolute error was smallest at \(r^{*}=0.018\). The arrow indicates the typical choice of \(r=1\) for SQR loss. average error at least 0.015 larger than the other losses, even for large \(N\). In the optimized implementation with \(N=10^{6}\), all four loss functionals perform approximately equally, as shown in Figure 6. Figure 7 shows that for \(N>10^{5}\), the four optimized loss functionals continue to perform approximately equally well, but the new loss functionals \(p^{*}\)-MSE and \(r^{*}\)-SQR perform significantly better, reaching mean absolute errors about 2 to 4 times smaller than the other losses. The strong influence of the inductive bias of the activation function is also mitigated in the optimized implementation, as the losses are no longer grouped by activation function. ### Simple Classifiers #### iii.5.1 Motivation To better understand the behavior of the generalized loss models in Sec. III.3, we examined a much simpler classifier than the multi-layer fully-connected network used to train the models in this paper. This allows us to visualize the dynamics of each model, using numerical integration to compute the loss. The model is \[f(x)=\phi(ax+b), \tag{26}\] where \(a\) and \(b\) are the two weights of the model and \(\phi\) is its activation. In the case of the \(p\)-MSE model with the logistic parametrization, \[f_{\text{MSE}}(x) =\sigma(ax+b) \tag{27}\] \[\hat{\mathcal{L}}_{\text{MSE}}(x) =\left(\frac{\sigma(ax+b)}{1-\sigma(ax+b)}\right)^{p-1}=\left(e^{ ax+b}\right)^{p-1}. \tag{28}\] The exponentially-parametrized \(r\)-SQR model is \[f_{\text{SQR}}(x) =e^{ax+b}\] \[\hat{\mathcal{L}}_{\text{SQR}}(x) =\left(e^{ax+b}\right)^{r}.\] As a result, only the analysis of one of the two models is necessary, since the resulting likelihood ratio model is the same for \(r=p-1\). In particular, we will analyze the \(r\)-SQR model, keeping in mind that for the model MAEs, the results will be identical for \(p=r+1\). We continue working with \(X_{0}\sim\text{Normal}(+0.1,1)\) and \(X_{1}\sim\text{Normal}(-0.1,1)\). The exact likelihood ratio is given by \[\mathcal{L}(x)=e^{0.2x}, \tag{29}\] so the two-dimensional classifier will yield an exact solution at \[a^{*}=\frac{0.2}{r},\qquad b^{*}=0. \tag{30}\] #### iii.5.2 Methods To better understand how the parameter \(r\) affects the optimization landscape, we first created a grid with fineness 0.005 of \((a,b)\) pairs in the box \([-1,1]^{2}\): \[B=\frac{1}{200}\mathbb{Z}^{2}\cap[-1,1]^{2}. \tag{31}\] Figure 6: Average likelihood ratio fits for the different loss categories. The MAEs are 0.0079, 0.0045, 0.0077, 0.0034, 0.0046, and 0.0034, for the BCE, MSE, MLC, SQR, \(p^{*}\)-MSE, and \(r^{*}\)-SQR likelihood ratio models, respectively. Figure 7: Mean absolute errors computed for the different loss categories trained with increasingly larger samples. The loss functional for a particular value of \(r\), \(L_{r}\) is given by \[L_{r}[f]=\int\mathrm{d}x\bigg{(}p(x\mid\theta_{A})f(x)^{-\frac{r}{2}}+p(x\mid \theta_{B})f(x)^{\frac{r}{2}}\bigg{)} \tag{32}\] Then, we visualized the loss landscape as the contour plot of \(L_{r}\) over the set of classifiers \(F=\{(e^{ax+b})^{r}:(a,b)\in[-1,1]^{2}\}\) for different values of \(r\). The loss functional \(L_{r}\) was computed via numerical integration. #### iv.2.3 Results Figure 8 displays in the first row the resulting contour plots for \(r\in\{0.1,0.25,0.5,1\}\). Drawn over each plot, in white, are level sets of the loss at increments of \(0.02\). #### iv.2.4 Discussion While the actual values of the losses are not comparable between different values of \(r\), since each value of \(r\) corresponds to a different loss functional, it is clear that the loss functional becomes increasingly steep as \(r\) increases. As expected, as \(r\to\infty\), \(a^{*}\to 0\), and as \(r\to 0\), \(a^{*}\to\infty\). In particular, the loss landscape of \(r=0.1\) is shaped like an extremely shallow pool, indicating that there is a large space of classifiers with close to optimal performance. The minimum value \(a^{*}=2\) is not visible in the box, since small values of \(r\) correspond to large values of \(a^{*}\). On the other hand, the loss landscape of \(r=1\) is much steeper, with a minimum at \(a^{*}=0.2\) around which the landscape quickly increases to high loss values. However, since loss values between values of \(r\) are incomparable, it is unclear how the loss reflects the actual performance of the likelihood ratio model. In particular, given two classifiers \(f\) and \(g\) with \(L_{r}[f]<L_{s}[g]\), \(r\neq s\), we cannot be sure that \(f\) will yield a better likelihood ratio model than \(g\), since \(L_{r}\) and \(L_{s}\) are different loss functionals. To this end, we visualized the error landscape as the contour plot of MAE over the same set of classifiers \(F=\{(e^{ax+b})^{r}:(a,b)\in[-1,1]^{2}\}\). Since the MAE is computed only from the expected absolute difference between a predicted likelihood ratio \(\hat{\mathcal{L}}\) and the true likelihood ratio \(\mathcal{L}\), we can compare across different values of \(r\) to see which values of \(r\) result in easily obtainable well-performing classifiers. The second row of Figure 8 displays these error contour plots; indicated in white are the level sets of the error at increments of \(0.05\). For each of these, we can see that the error is zero at \((a^{*},0)\) and increases radially outwards from the minimum. The shape of the loss landscapes reflect the true nature of the performance of the classifiers; for \(r=0.1\), we still have a shallow pool of many well-performing classifiers, whereas for \(r=1\), there is a small set of well-performing classifiers around which the classifiers begin to perform much worse. That is to say, for small values of \(r\), there are many classifiers that perform well at modeling the likelihood ratio. It may be harder to find the true minimum, but most classifiers have comparable performance. On the other hand, for large values of \(r\), the loss landscape is steep with few Figure 8: Contour plots of the losses and MAEs of the two-dimensional SQR classifier \(f(x)=\exp{(ax+b)}\) over \([-1,1]^{2}\). The first row plots the value of the loss functional \(L[f]\), obtained through numerical integration, on a grid of \((a,b)\) pairs over \([-1,1]^{2}\) for various values of \(r\), with contours curves at increments of \(0.02\). The second row plots an empirically computed absolute error, \(\mathrm{MAE}[f]\), over the same grid of points, for the same values of \(r\), with contour curves at increments of \(0.05\). classifiers with decent performance. Slight perturbations around the minimum correspond to large errors. ## IV Multivariate Gaussians ### Parametrizing \(f\) #### iv.1.1 Motivation A natural extension from the univariate Gaussians analysis in the previous section would be to multivariate Gaussians, wherein the setting is complicated by the higher dimensions, but we still have knowledge of the true likelihood ratio. To this end, we first established five different case studies of different Gaussian arrangements to examine in our multivariate analysis. The first case study, labeled "Vertical," corresponds to independent Gaussians with variance 1, and means at a distance of 0.2, as in the univariate case. In this case, the background distribution is more likely over the right half-plane, whereas the signal distribution is more likely over the left half-plane. \[X_{0} \sim\text{Normal}\left(\begin{bmatrix}+0.1\\ 0\end{bmatrix},\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\right) \tag{33}\] \[X_{1} \sim\text{Normal}\left(\begin{bmatrix}-0.1\\ 0\end{bmatrix},\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\right) \tag{34}\] The next case study, "Slant," simply rotates the vertical case study by \(45^{\circ}\). This results in the same likelihood ratio as the vertical case, except rotated by \(45^{\circ}\). \[X_{0} \sim\text{Normal}\left(\begin{bmatrix}+\frac{0.1}{\sqrt{2}}\\ -\frac{0.1}{\sqrt{2}}\end{bmatrix},\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\right) \tag{35}\] \[X_{1} \sim\text{Normal}\left(\begin{bmatrix}-\frac{0.1}{\sqrt{2}}\\ +\frac{0.1}{\sqrt{2}}\end{bmatrix},\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\right) \tag{36}\] In "Circle," we consider the case where the background distribution has low variance in comparison to the signal distribution. As a result, values close to the origin are more likely to be from the background, whereas values far from the origin are more likely to be from the signal. This likelihood structure is visualized in Figure 9. \[X_{0} \sim\text{Normal}\left(\begin{bmatrix}+0.1\\ 0\end{bmatrix},\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\right) \tag{37}\] \[X_{1} \sim\text{Normal}\left(\begin{bmatrix}-0.1\\ 0\end{bmatrix},\begin{bmatrix}2&0\\ 0&2\end{bmatrix}\right) \tag{38}\] The "Hyperbola" case study looks at the case when both the background and the signal have different variances in each coordinate. This results in a hyperbola-like likelihood structure, as visualized in Figure 9. #### iv.1.2 Methods The methodology was similar to that done in Sec. III. For each case study, we implemented all four classifiers with each of the three parametrizations. Each resulting classifier architecture was trained 100 times to minimize the corresponding loss functional. We evaluated each classifier on the box \([-2,2]^{2}\), and we averaged the resulting 100 predictions for the likelihood ratio over that Figure 9: Two of the five multivariate Gaussian cases we examined, as well as some of the likelihood ratio model fits. The first row corresponds to the circle case, while the second row corresponds to the hyperbola case. The first column plots the likelihood structure of each case; red regions are regions where \(\mathcal{L}(x)\leq 1\), and blue regions are regions where \(\mathcal{L}(x)>1\). The second and third columns display contour plots of the mean absolute error for some models trained with the various losses to learn the likelihood ratios. The plot is suggestively colored to show how the structure of the data corresponds to the structure in the likelihood ratio models. box. We used the MAE as the performance metric, again as an empirical average over 100,000 samples. #### iv.1.3 Results The resulting MAEs are shown in Figure 10. Some contour plots of the mean absolute errors of some of the different parametrizations are presented in Figure 9; the remaining contour plots are provided in Appendix B. #### iv.1.4 Discussion In the univariate case, we found that the logistic and exponential parametrizations were uniformly the best parametrizations for the BCE/MSE and MLC/SQR losses, respectively. This trend bears out for the most part in this higher-dimensional case. In almost all of the case studies, the logistic and exponential parametrizations perform the best; in the cases where they don't have the smallest MAE, the difference between their MAE and the best MAE is not large. Unlike in Section III, once the optimal parametrizations are chosen for each of the four loss functionals, some differences persist in the performance of each loss. Across all five cases, the SQR loss yields the largest errors. For the Vertical and Slant cases, all four optimized loss functionals perform equally well, overlapping within one standard deviation. For the remaining cases (Checker, Circle, and Hyperbola), the optimized MLC loss with exponential parametrization performs significantly better than the other three optimized losses. It is striking to note that the MLC loss with exponential parametrization emerges as the best-performing loss configuration in some of the more complex datasets considered for these studies. The typical choice for a neural network classifier loss is arguably BCE. For the purposes of the likelihood ratio trick, however, we are interested in reinterpreting the classifier output to approximate the likelihood ratio, so it is possible that optimizing for raw classification performance alone is misguided. The MLC loss has the advantage of explicitly relating the signal and background probability distributions; in particular, the MLC loss can be intuitively understood to maximize the likelihood of \(\hat{\mathcal{L}}(x)\) with respect to \(p(x\mid\theta_{0})\) subject to the constraint that \(\hat{\mathcal{L}}(x)p(x\mid\theta_{1})\) is a probability distribution [10]. Therefore, it may be a more natural choice for this particular application than the default BCE loss. ### Generalized Loss Families #### iv.2.1 Motivation By treating the square in the MSE loss and the root in the SQR loss as parameters \(p\) and \(r\), respectively, we were able to generalize those loss functional to entire continuous parametric families of losses. We saw in the univariate case that we can optimize over \(p\) and \(r\), and were even able to see, through examining the landscapes of the different loss functionals in a simple case, what kinds of values of \(p\) and \(r\) will correspond to "better" loss functionals. We now continue this investigation in the situation of multivariate Gaussians to get a sense of how much the trend we observe continues into more complex situations. #### iv.2.2 Methods We used the same methods as in Section III.3; in this case, however, we worked with the five different multivariate Gaussians cases rather than the single univariate Gaussians case. #### iv.2.3 Results In Table 3, we list the optimal values \(p^{*}\) and \(r^{*}\) in each of the five cases. An overall comparison of the four loss functionals with optimized parameterizations alongside \(p^{*}-\)MSE and \(r^{*}-\)SQR losses is shown in Figure 11. The plots of the MAE for the various values of \(p\) and \(r\) are presented in Appendix B. #### iv.2.4 Discussion The simpler multivariate cases considered (Vertical, Slant) result in very similar values to those found in the univariate Gaussian case: \(p^{*}\) and \(r^{*}\) are close to \(1.24\) and \(0.018\), respectively. In the more complex multivariate cases (Circle, Hyperbola, and Checker), the optimal values of \(p^{*}\) are also between \(1\) and \(2\), with the expection of Hyperbola, for which \(p^{*}=-0.44\). It's possible that an equally-performing \begin{table} \begin{tabular}{l c c} Case & \(p^{*}\) & \(r^{*}\) \\ \hline \hline Vertical & \(1.12\) & \(0.018\) \\ Slant & \(1.16\) & \(0.018\) \\ Circle & \(1.28\) & \(-0.1\) \\ Hyperbola & \(-0.44\) & \(-0.2\) \\ Checker & \(1.6\) & \(-0.1\) \\ \end{tabular} \end{table} Table 3: The optimal values for \(p^{*}\) and \(r^{*}\) for the five different multivariate Gaussian cases. Note that in the univariate Gaussian case, the optimal values chosen were \(p^{*}=1.24\) and \(r^{*}=0.018\). value of \(p\) larger than 2 could also exist, but our studies did not scan far enough to probe the asymptotic behavior in that direction. The optimal values \(r^{*}\) are all negative. However, it is worth noting that the MAE landscapes for the \(r\)-SQR are symmetric, and the corresponding MAEs for \(|r^{*}|\) are small, so the signs of these values is likely due to random chance. For these cases, the optimal values of \(r^{*}\) are less than 1, as in the univariate case, but very small values of \(r^{*}\) (\(|r^{*}|<0.01\)) are too numerically unstable to consistently yield useful outputs. Overall, as shown in Figure 11, if one chooses only from the four loss functionals as defined in Table 1, but with optimized parametrizations, all four show equally good performance for the simpler cases (Vertical, Slant), but the MLC loss is significantly better than the other three choices in the more complex cases (Circle, Hyperbola, Checker). However, if one chooses \(p^{*}\) and \(r^{*}\) by scanning along these generalized loss families, the improvements are immense: for all cases except Hyperbola, the optimized \(r^{*}-\)SQR MAE is between 30% and 50% smaller than the optimized MLC MAE. ## V Physics data ### Parametrizing \(f\) #### v.1.1 Motivation In our final case study, we extended our comparison of classifier parametrizations and loss functionals to simulated high-energy particle physics datasets [39]. While there are a number of observables present in the datasets, in our analysis we considered only the leading jet transverse momentum (\(p_{T}\)), rapidity (\(y\)); azimuthal angle (\(\phi\)), and invariant mass (\(m\)). The datasets consist of particle-level and detector-level simulated QCD jets originating from \(Z\) + jets events. \(Z\) + jets events from proton-proton collisions generated at \(\sqrt{s}=14\) TeV were simulated using Herwig 7.1.5 [40, 41, 42] with the default tune and Pythia 8.243 [43, 44, 45] tune 21 [46] (ATLAS A14 central tune with NNPDF2.3LO). We call the Pythia simulation "Monte Carlo" and Herwig "data". For the generated events, the \(p_{T}\) of the \(Z\) boson is required to be larger than 150 GeV. Events then Figure 10: Mean absolute errors are compared for the four different losses considered (binary cross-entropy, mean squared error, maximum likelihood classifier, and square root), each with 3 different activation functions. For each loss, five different multivariate normal cases are studied: “Vertical”, “Slant”, “Circle”, “Checker”, and “Hyperbola”. For each case study, the best performing parametrization for each loss is shown in either red or blue. Errors represent the standard deviation across 100 independent model trainings. Figure 11: Mean absolute errors are compared for the four different losses considered (binary cross-entropy, mean squared error, maximum likelihood classifier, and square root), each with their respective optimal parametrizations. For each loss, five different multivariate normal cases are studied: “Vertical”, “Slant”, “Circle”, “Checker”, and “Hyperbola”. Errors represent the standard deviation across 100 independent model trainings. Figure 12: Histograms of the four jet features, transverse momentum \(p_{T}\) [GeV], rapidity \(y\), azimuthal angle \(\phi\), and mass \(m\) [GeV], for the Monte Carlo and data \(Z\) + jet events. Here, we treat the Monte Carlo as the signal and the data as the background. are passed through the Delphes 3.4.2 fast detector simulation [47] of the CMS detector. The datasets consist of the highest-momentum jet from \(Z\) boson events with \(p_{T}\geq 200\) GeV. This process ultimately yields about 1.6 million jets for each simulation. Figures 12 and 13 display histograms of each of the four observables for both the "Monte Carlo" and the "data". In this more complex setting, we no longer have access to the true likelihood ratio, as we do not know the underlying distributions generating these datasets. To allow for a more complete comparison of the different parametrizations' ability to model the "true" likelihood ratio, we therefore fit Normalizing Flows [48] to each sample. These flows estimate the generating distribution of the samples, and thus allow us to compute "true" likelihood ratios for these datasets. #### iv.2.2 Methods We first trained a FFJORD Normalizing Flow [48] for each of the "Monte Carlo" and "data" simulated samples. The models were tuned by comparing the performance of a classifier on distinguishing between data generated from Figure 13: A corner plot of the four jet features. Blue contours correspond to the particle-level data and red contours correspond to the detector-level data. the flow and the true data. We then used the flows as proxies for the underlying distributions of these datasets, creating new proxy datasets by sampling from the flows. The methodology following this point was again similar to what has been established before in Sec. III. We implemented all four classifiers with each of the three parametrizations on the dataset, training 100 independent copies of each classifier architecture to minimize the corresponding loss functional. We used the MAE as the performance metric, computed as in Equation 15. In particular, it was computed with the true likelihood ratio \(\mathcal{L}(X)\) from the flows and the model likelihood ratio \(\hat{\mathcal{L}}(X)\) averaged over the 100 copies of each classifier. As before, the MAE was computed as an empirical average over 100,000 samples. #### iv.1.3 Results The distributions of the "data" and "Monte Carlo" learned by the flows are plotted along side the empirical distributions in Figure 12. To quantify the quality of the flows' learned distributions, we trained classifiers to try to distinguish between proxy datasets sampled from the flows and the original datasets; for the "Monte Carlo", the AUC was 0.54, and for the "data", the AUC was 0.56. These AUCs close to 0.5 indicate that the classifier has difficulty distinguishing between these two distributions, and therefore that the flows have performed reasonably well at reflecting the target distributions. To visualize the performance of these classifiers, we performed a scan of the likelihood ratio along \(\phi\). We fixed the three other observables at values near their medians (\(p_{T}=221.8\), \(y=0.0\), \(m=16.0\)) and compared the flow likelihood ratio to the likelihood ratio modeled from the variously parametrized classifiers trained on the different loss functionals. The scans for the logistic, arctangent, and hyperbolic tangent parametrizations of the likelihood ratio, trained with the BCE and MSE losses, are displayed in Figure 14. The analogous scans for the ReLU, square, and exponential parametrizations with the MLC and SQR losses are displayed in Figure 15. The resulting MAEs are shown in Figure 16. #### iv.1.4 Discussion The observed trend in the Gaussian studies that the log-gistical and exponential parametrizations were the best for the BCE/MSE and MLC/SQR losses, respectively, also holds in the physics case. Of the four optimized loss functionals, the MLC loss with exponential parametrization performs distinctly better than the other three loss configurations, as shown in Figure 16. ### Generalized Loss Families #### iv.2.1 Motivation In the previous studies with univariate and multivariate Gaussians, we found that the performances of likelihood ratio models trained with losses from the generalized families of \(p\)-MSE and \(r\)-SQR losses followed a similar structure across various cases. In order to examine the robustness of this observed structure, we repeated the same study with the high energy particle physics dataset. #### iv.2.2 Methods The methodology for this study was similar to that of the previous studies. We scanned over values of \(p\) and \(r\) in the interval \([-2,2]\). For each increment of \(p\), we trained 20 models with the \(p\)-MSE loss functional defined by that value of \(p\) and averaged together their mean absolute errors. Likewise, for each increment of \(r\), we trained 20 models with the \(r\)-MSE loss functional defined by that value of \(r\) and averaged together their mean absolute errors. We parametrized the \(p\)-MSE classifiers with logistic activation functions and the \(r\)-SQR classifiers with exponential activation functions. All models were trained on the same set of one million samples from the flows fit to the distributions of the physics data. #### iv.2.3 Results The plots of the MAEs of the likelihood ratio models for the loss functionals are provided in Figure 17. As before, we observe vertical features in the plots when the loss functional is no longer strictly convex (\(p\in(0,1)\) and \(r=0\)). The MAE was minimized at \(p^{*}=-0.28\) and \(r^{*}=-1.3\). A comparison of the \(p^{*}-\)MSE and \(r^{*}-\)SQR losses with these chosen values alongsize the other four losses with optimized parametrizations is shown in Figure 16. #### iv.2.4 Discussion The shape of the \(p\) and \(r\) scans looks approximately similar to those observed in the previous case studies (e.g. Figure 5); however, since the MAE landscape is flat away from the non-convex regions (\(p\in(0,1)\) for \(p\)-MSE and \(r=0\) for \(r\)-SQR), the best choices \(p^{*}=-0.28\) and \(r^{*}=-1.3\) perform about the same as the unoptimized choices of \(p=2\) and \(r=1\). In this particular case, the evidence does not suggest that changing \(p^{*}\) or \(r^{*}\) from their default values of 2 and 1, respectively, would yield a significant benefit in reducing mean absolute error. It is possible that better values exist beyond the ranges \(r,p\in[-2,2]\) considered here. Overall, as shown in Figure 16, the best-performing loss is MLC with exponential parameterization. ## VI Conclusions The likelihood ratio \(\mathcal{L}(x)=\frac{p(x|\theta_{0})}{p(x|\theta_{1})}\) is a statistical quantity essential for characterizing whether an experimental dataset \(x\) better supports one of two hypotheses defined by sets of parameters \(\theta_{0}\) and \(\theta_{1}\). It is used beyond hypothesis testing, too, for applications such as reweight Figure 14: The performance of different parametrizations of \(f\) for the BCE and MSE losses for the azimuthal angle \(\phi\). (a) The average likelihood ratio fits of the logistic, hyperbolic tangent, and arctangent parametrizations for the BCE loss. (b) The average likelihood ratio fits of the logistic, hyperbolic tangent, and arctangent parametrizations for the MSE loss. Figure 15: The performance of different parametrizations of \(f\) for the MLC and SQR losses for the azimuthal angle \(\phi\). (a) The average likelihood ratio fits of the logistic, hyperbolic tangent, and arctangent parametrizations for the BCE loss. (b) The average likelihood ratio fits of the logistic, hyperbolic tangent, and arctangent parametrizations for the MSE loss. Figure 16: The MAEs are compared for the Pythia/Herwig + Delphes particle physics jet datasets [39] for the four different losses considered. Errors represent the standard deviation across 100 independent model trainings. In **(a)**, each loss is shown with 3 different parametrizations. In **(b)**, the best-performing parametrization is chosen for each loss, and these optimized losses are then directly compared. Figure 17: (a) The mean absolute errors averaged over logistically-parametrized models trained with the generalized MSE loss family. The mean absolute error is minimized at \(p^{*}=-0.28\); however, the MAE landscape here is rather flat. The arrow indicates the typical choice of \(p=2\) for the standard MSE loss. (b) The mean absolute errors averaged over exponentially-parametrized models trained with the generalized SQR loss family. The mean absolute error was smallest at \(r^{*}=-1.3\). The arrow indicates the typical choice of \(r=1\) for the standard SQR loss. ing high-dimensional distributions for background estimation and more. In contexts where calculating the likelihood ratio is impossible or very tedious, researchers can use the "likehood ratio trick", leveraging a neural network classifier to approximate the likelihood ratio. Often, the likelihood ratio trick is implemented by minimizing a typical choice of loss functional for a classifier: the binary cross-entropy loss. However, many loss functionals satisfy the likelihood ratio trick setup. In this paper, we presented detailed studies comparing four choices of loss functionals: binary cross-entropy (BCE), mean squared error (MSE), maximum likelihood classifier (MLC), and square root (SQR). For each of these four loss functionals, we also explored a suite of choices of final activation functions for parametrizing the neural network output. For the MSE and SQR losses, we performed a scan along the exponential parameter (replacing \(2\to p\) for MSE and replacing \(\frac{1}{2}\rightarrow\frac{r}{2}\) for SQR) to understand the behavior of these generalized families of loss functionals. As a result of these studies, we present the following recommendations for optimized implementations of each of these loss functionals in the likelihood ratio trick: \[\begin{split}\text{Loss}&\text{Activation}\\ \hline\hline\text{Binary Cross-Entropy (BCE)}&\sigma(z)\\ \text{Mean Squared Error (MSE)}&\sigma(z)\\ \text{Maximum Likelihood Classifier (MLC)}&\exp(z)\\ \text{Square Root (SQR)}&\exp(z)\\ \end{split}\] For MLC and SQR losses, we find that choosing small, nonzero values of \(r\) (and, correspondingly, \(p=r+1\)) tend to result in smaller mean absolute errors than the default choices (\(r=1\) and \(p=2\)) for these loss functionals. As we illustrate by mapping the loss landscape of a simple neural network, this is because smaller values of \(r\) can yield shallower loss landscapes where many values are nearly optimal, while larger values of \(r\) have steeper landscapes for models to traverse, with a much smaller proportion of the phase space corresponding to optimum values of the loss. The loss landscape will vary with each new application, so we recommend that future researchers perform a scan along \(p\) or \(r\) to find an optimum value as part of hyperparameter optimization. If a scan over \(p\) or \(r\) is not feasible, we recommend comparing the default selections (i.e. \(p=2\) and \(r=1\)) with our alternative recommendations derived from the average optimum values across our various trials \(p^{*}=1.15\) and \(r^{*}=0.1\), or: \[L_{\text{MSE}^{*}}[f]=-\int\mathrm{d}x\bigg{(} (1-f)^{1.15}p(x\mid\theta_{0})\] \[+f^{1.15}p(x\mid\theta_{1})\bigg{)}\] \[L_{\text{SQR}^{*}}[f]=-\int\mathrm{d}x\bigg{(} f^{-0.05}p(x\mid\theta_{0})+f^{0.05}p(x\mid\theta_{1})\bigg{)}.\] Across the majority of the various datasets we considered, these choices tend to have significantly smaller mean absolute errors than the default selections while maintaining good numerical stability across multiple trainings. An interesting future investigation would be to consider how to dynamically optimize \(p\) and \(r\) as learned parameters during training. When tested on univariate Gaussians and simple multivariate Gaussians (Vertical and Slant cases), all four loss implementations with optimized parametrizations perform similarly when approximating the desired likelihood ratio. For larger datasets (\(N>10^{5}\)), choosing different exponents in the definitions of MSE and SQR loss functionals results in an additional \(\geq 50\%\) reduction in errors for these cases. On more complex datasets, including multidimensional Gaussians (Checker, Hyperbola, Circle) as well as simulated high-energy physics data, the Maximum Likelihood Classifier (MLC) loss with exponential parametrization performs the best out of the four default losses considered. Choosing different exponents in the definitions of MSE and SQR loss functionals additionally results in between \(30\%\) and \(50\%\) smaller errors for the Checker and Circle cases. For the Hyperbola and simulated high-energy physics case, choosing alternate \(p^{*}\) and \(r^{*}\) values in the range \([-2,2]\) does not yield a significant performance improvement, though it is possible that better values could exist outside of this range. While these configurations performed well in our chosen case studies, these results should not be read as a guarantee that these choices will result in optimal performance for any dataset. We therefore recommend that other researchers compare the results of several of the optimized losses described in this work to yield the most effective setup for a given dataset. There remain several open questions in this line of inquiry. For instance, can an analytical analysis of these loss functionals explain some of the performance differences observed? How much can we further characterize the uncountably many possible loss functionals that satisfy this setup? How else can we generalize certain loss functionals? Pursuing these answers could help us achieve even better scientific measurements enabled by machine learning in the near future. ## Acknowledgements We are grateful to Jesse Thaler for very helpful feedback about figures-of-merit and ways to generalize our loss functions. We thank Dag Gillberg for the suggestion to compare NNs with BDTs and Vinicius Mikuni for the idea of using normalizing flows to model the LR of the physics datasets. M.P. thanks Shirley Ho and the Flatiron Institute for their hospitality while preparing this paper. S.R., M.P., and B.N. are supported by the Department of Energy, Office of Science under contract number DE-AC02-05CH11231.
2304.10629
Model theory in compactly generated (tensor-)triangulated categories
We give an account of model theory in the context of compactly generated triangulated and tensor-triangulated categories ${\cal T}$. We describe pp formulas, pp-types and free realisations in such categories and we prove elimination of quantifiers and elimination of imaginaries. We compare the ways in which definable subcategories of ${\cal T}$ may be specified. Then we link definable subcategories of ${\cal T}$ and finite-type torsion theories on the category of modules over the compact objects of ${\cal T}$. We briefly consider spectra and dualities. If ${\cal T}$ is tensor-triangulated then new features appear, in particular there is an internal duality in rigidly-compactly generated tensor-triangulated categories.
Mike Prest, Rose Wagstaffe
2023-04-20T20:12:47Z
http://arxiv.org/abs/2304.10629v3
# Model theory in compactly generated (tensor-)triangulated categories ###### Abstract We give an account of model theory in the context of compactly generated triangulated and tensor-triangulated categories \(\mathcal{T}\). We describe pp formulas, pp-types and free realisations in such categories and we prove elimination of quantifiers and elimination of imaginaries. We compare the ways in which definable subcategories of \(\mathcal{T}\) may be specified. Then we link definable subcategories of \(\mathcal{T}\) and finite-type torsion theories on the category of modules over the compact objects of \(\mathcal{T}\). We briefly consider spectra and dualities. If \(\mathcal{T}\) is tensor-triangulated then new features appear, in particular there is an internal duality in rigidly-compactly generated tensor-triangulated categories. ###### Contents * 1 Introduction and Background * 1.1 Introduction * 1.2 The restricted Yoneda functor * 1.3 Definable subcategories of module categories * 2 Model theory in compactly generated triangulated categories * 2.1 The category of pp-sorts * 2.2 Elimination of quantifiers * 2.3 Types and free realisations * 2.4 Elimination of imaginaries * 2.5 Enhancements, Ultraproducts * 3 Definable subcategories * 3.1 Definable subcategories of \(\mathcal{T}\) * 3.2 Torsion theories on Mod-\(\mathcal{T}^{\mathrm{c}}\) * 3.3 Definable subcategories of Abs-\(\mathcal{T}^{\mathrm{c}}\) * 3.4 Model theory in definable subcategories * 3.5 Torsion pairs on \(\mathcal{T}\) and torsion theories on Mod-\(\mathcal{T}^{\mathrm{c}}\) * 3.6 Spectra * 3.7 Triangulated definable subcategories * 3.8 Elementary duality * 4 Tensor-triangulated categories * 4.1 Spectra in tensor-triangulated categories * 4.2 Internal duality in tensor-triangulated categories * 4.3 Internal Hom interpretation Introduction and Background ### Introduction Model theory in a compactly generated triangulated category \(\mathcal{T}\) falls within the scope of the model theory of modules _via_ the restricted Yoneda embedding \(\mathcal{T}\to\text{Mod-}\mathcal{T}^{\text{c}}\) where \(\mathcal{T}^{\text{c}}\) denotes the subcategory of compact objects in \(\mathcal{T}\). The model theory of modules over possibly many-sorted rings, such as \(\mathcal{T}^{\text{c}}\), is well-developed but there are many special features of triangulated categories that make it worthwhile to directly develop model theory in the triangulated context. That is what we do here, and we also consider additional features which appear when the category is tensor-triangulated. A good number of the results appear elsewhere but we give a detailed and unified account which, we hope, will be a useful reference. What began as the model theory of modules - the investigation of model-theoretic questions in the context of modules over a ring - has developed in scope - to much more general categories - in depth, and in purpose having for a long time been led by interests and questions coming from representation theory. Many aspects - purity, pure-injectives, definable subcategories etc. - can be dealt with purely algebraically and, in the context of compactly generated triangulated categories, this was developed by Beligiannis and Krause [9], [23] (for earlier relevant work, see [14], [10], and [31]). But, apart from a brief treatment in [18] and some use in [3], there has not been much explicit appearance of model theory in triangulated categories. To some extent that is because there is a 'dictionary' between model theoretic and algebraic/functor-category methods, allowing much of what can be proved with model theory to be proved by other methods. But sometimes what is obvious and natural using one language is not so easily translatable into the other. Moreover, model theory can give new insights and simpler proofs. Our main aim in this paper is to make the methods of model theory readily available to be used in compactly generated triangulated categories. Some aspects - dualities, spectra, enhancements, extensions to well-generated triangulated categories - are currently in development, so we don't aim to be comprehensive but we do present the more settled material in detail. Some minimal acquaintance with model theory, at least with basic ideas in the model theory of modules, will be helpful for the reader but we do keep formal aspects of model theory to a minimum. Really, all that we need is the notion of a formula and its solution set in a structure. We do need to use sorted variables. Variables in a formula are place-holders for elements from a structure; in our context these elements may belong to different _sorts_. The idea is very simple and well-illustrated by representations of the quiver \(A_{2}\) which is \(\bullet\to\star\). A representation of this quiver in the category of modules over a ring \(R\) consists of two \(R\)-modules \(M_{\bullet}\), \(M_{\star}\) and an \(R\)-linear map from \(M_{\bullet}\) to \(M_{\star}\). Such a structure is naturally two-sorted, with elements of the sort (labelled by) \(\bullet\) being those of \(M_{\bullet}\) and those of sort (labelled by) \(\star\) being those of \(M_{\star}\). The variables we would use in writing formulas reflect that, say with subscripts, and for this example we would use variables of two sorts (labelled respectively by \(\bullet\) and \(\star\)). The difference between using a 2-sorted and 1-sorted language is the difference between treating (2-sorted) representations of that quiver (equivalently modules over the 2-sorted ring which is the (\(R\)-)path category of the quiver) and (1-sorted) modules over the path algebra of the quiver (the path algebra of the quiver is a normal, 1-sorted, ring). That is a matter of choice if there are only finitely many sorts but, because \(\mathcal{T}^{\text{c}}\) is skeketally infinite, we do need to use sorted structures and take account of sorts in formulas. For more discussion, and many examples, of this, see [44]. **We suppose throughout this paper that \(\mathcal{T}\) is a compactly generated triangulated category.** We take this to include the requirement that \(\mathcal{T}\) has infinite coproducts. We also suppose that the reader knows something about these categories; references include [32], [53, Chpt. 10], and [51] for tensor-triangulated categories. The restriction that \(\mathcal{T}\) be compactly generated could be weakened to \(\mathcal{T}\) being well-generated but, in that case, model theory using infinitary languages would be needed, so we would lose the Compactness Theorem of model theory and its many consequences. This is an interesting direction to follow and a start has been made, see [27] for instance, but here we don't look any further in that direction (also cf. [1, SS5B]). Let \(\mathcal{T}^{\text{c}}\) denote the full subcategory of compact objects. Model theory for the objects of \(\mathcal{T}\) is based on the key idea that the _elements_ of objects of \(\mathcal{T}\) are the morphisms from compact objects. That is, if \(X\) is an object of \(\mathcal{T}\) and \(A\) is a compact object of \(\mathcal{T}\), then an **element** of \(X\)**of sort** (indexed by) \(A\) is a morphism \(A\to X\) in \(\mathcal{T}\). This is just an extension of the fact that, if \(M\) is a (right) module over a (normal, 1-sorted) ring \(R\), then the elements of \(M\) may be identified with the morphisms from the module \(R_{R}\) to \(M\). There is, up to isomorphism, just a set of compact objects, so we may use the objects in a small version of \(\mathcal{T}^{\rm c}\) to index the sorts of the language for \(\mathcal{T}\). A "small version" of \(\mathcal{T}^{\rm c}\) means an equivalent category which has just a set of objects. We don't go into detail about setting up the language - for that see any of the background references on the model theory of modules - because all we really need is that it gives us a way of writing down formulas, in particular (in our context) pp formulas. Each formula defines, for every \(X\in\mathcal{T}\), a certain subset of \((A_{1},X)\oplus\cdots\oplus(A_{n},X)\) with \(A_{i}\in\mathcal{T}^{\rm c}\) (the \(A_{i}\) label the sorts of the free variables of the formula). Of course, for every object \(X\in\mathcal{T}\), each sort \((A,X)\), for \(A\in\mathcal{T}^{\rm c}\), has an abelian group structure, and this is built into the formal language. Also built into the language is the action of (a small version of) \(\mathcal{T}^{\rm c}\) on objects \(X\in\mathcal{T}\) - the morphisms of \(\mathcal{T}^{\rm c}\) "multiply" the "elements" of \(X\), taking those of one sort to a possibly different sort. Explicitly, if \(f:A\to B\) is a morphism of \(\mathcal{T}^{\rm c}\), then this induces \(b\in(B,X)\mapsto bf\in(A,X)\) - multiplication by \(f\) from sort \(B\) to sort \(A\). Note how this generalises the action of a ring on a (1-sorted) right module. In particular, each sort \((A,X)\) is a right module over \({\rm End}(A)\) but these multiplications on single sorts are only some of the multiplications that constitute the action of (the many-sorted ring) \(\mathcal{T}^{\rm c}\) on objects \(X\) of \(\mathcal{T}\). In this way an object \(X\) of \(\mathcal{T}\) is replaced by a (many-sorted) set-with-structure, precisely by the right \(\mathcal{T}^{\rm c}\)-module which is the representable functor \((-,X)\) restricted to \(\mathcal{T}^{\rm c}\). This replacement is effected by the restricted Yoneda functor \(y:\mathcal{T}\to{\rm Mod}\mbox{-}\mathcal{T}^{\rm c}\) which is given on objects by \(X\to(-,X)\upharpoonright\mathcal{T}^{\rm c}\) and on morphisms \(f:X\to Y\) by \(f\mapsto(-,f):(-,X)\to(-,Y)\). This functor is neither full nor faithful but, see 1.2, 1.3 below, it loses nothing of the model theory1 so we may do model theory directly in \(\mathcal{T}\) or, equivalently, we may move to the functor/module category \({\rm Mod}\mbox{-}\mathcal{T}^{\rm c}\), where the well-worked-out model theory of multisorted modules applies. Sometimes it is more convenient to work in the one category than the other; in any case, moving from the one context to the other is straightforward (and is detailed in this paper). Footnote 1: That is because we use finitary model theory; infinitary languages would detect more, including some **phantom** morphisms, that is, morphisms \(f\) with \(yf=0\). The move to \({\rm Mod}\mbox{-}\mathcal{T}^{\rm c}\) gives us the immediate conclusion that the theory of \(\mathcal{T}\) has pp-elimination of quantifiers. **Theorem 1.1**.: _If \(\mathcal{T}\) is a compactly generated triangulated category, then every formula in the language for \(\mathcal{T}\) is equivalent to the conjunction of a sentence (which refers to sizes of quotients of pp-definable subgroups) and a finite boolean combination of pp formulas._ A pp formula (in our context) is an existentially quantified system of linear equations. A system of \(R\)-linear equations over a possibly multisorted ring \(R\) can be written in the form \[\bigwedge_{j=1}^{m}\,\sum_{i=1}^{n}\,x_{i}r_{ij}=0_{j}\] (read the conjunction symbol \(\bigwedge\) as "and") or, more compactly, as \(\overline{x}G=0\) where \(G=(r_{ij})_{ij}\) is a matrix over \(R\). Here the subscripts can be taken to indicate (not necessarily different) sorts. If we denote this (quantifierfree) formula as \(\theta(\overline{x})\), that is, \(\theta(x_{1},\ldots,x_{n})\), then its solution set in a module \(M\) is denoted \(\theta(M)\) and is a subgroup of \(M_{1}\oplus\cdots\oplus M_{n}\) where \(M_{i}\) is the group of elements of \(M\) of sort \(i\). A projection of the solution set for such a system of equations is defined by a formula of the form \[\exists x_{k+1},\ldots,x_{n}\,\big{(}\bigwedge_{j=1}^{m}\,\sum_{i=1}^{n}\,x_{ i}r_{ij}=0_{j}\big{)}.\] A formula (equivalent to one) of this form is a **pp** (for "positive primitive") **formula** (the term **regular formula** also is used). We can write a pp formula more compactly as \(\overline{\exists y}\) (\(\overline{x}\ \overline{y}\))\(G=0\), or \(\overline{\exists y}\) (\(\overline{x}\ \overline{y}\)) (\(\begin{array}{c}G^{\prime}\\ G^{\prime\prime}\end{array}\)) = 0, equivalently \(\exists\overline{y}\ \overline{\mathbb{Z}}G^{\prime}=\overline{y}G^{\prime\prime}\), if we want to partition the matrix \(G\). If we denote this formula by \(\phi(x_{1},\ldots,x_{k})\) then its **solution set**\(\phi(M)\) in \(M\) is the subgroup of \(M_{1}\oplus\cdots\oplus M_{k}\) obtained by projecting \(\theta(M)\) to the first \(k\) components. We refer to such a solution set as a **pp-definable subgroup** of \(M\) (the terminologies "subgroup of finite definition" and "finitely matrizable subgroup" also have been used). All this applies to \(\mathcal{T}\) since the model theory of \(\mathcal{T}\) is essentially that of right \(\mathcal{T}^{\mathrm{c}}\)-modules. So Theorem 1.1 follows because, if \(R\) is a (possibly many sorted) ring, then the theory of \(R\)-modules has pp-elimination of quantifiers2 and so this applies to the theory of the image of the restricted Yoneda embedding which, as we have remarked, is the theory of \(\mathcal{T}\). Footnote 2: For the formal statement see, for instance [38, A.1.1]. That is given for \(1\)-sorted modules but the general case reduces to this because each formula involves only finitely many sorts, so is equivalent to a formula over a \(1\)-sorted ring. It turns out, [18, 3.1/3.2] and see Section 2.2, that, with this language, the theory of \(\mathcal{T}\) has complete (positive) elimination of quantifiers - every (pp) formula is equivalent to a quantifier-free (pp) formula (see 2.10). There is also a dual form of this - every pp formula is equivalent to a divisibility formula (2.9). We will also see in Section 2.4 that the theory of \(\mathcal{T}\) has elimination of pp-imaginaries - every pp-pair is definably isomorphic to a (quantifier-free) formula. As with any theory of modules, the initial category of sorts, in this case a small version of \((\mathcal{T}^{\mathrm{c}})^{\mathrm{op}}\), may be completed to the full category \(\mathbb{L}(\mathcal{T})^{\mathrm{eq}+}\) of pp-definable sorts: the objects are pp-pairs and the morphisms are the pp-definable maps between these pairs (see Section 2.1). In our context, this completed category of sorts has two manifestations. One is the category of coherent functors [24] on \(\mathcal{T}\). The other is a certain localisation of the category \((\mathrm{mod}\text{-}\mathcal{T}^{\mathrm{c}},\mathbf{Ab})^{\mathrm{fp}}\) of finitely presented functors from \(\mathrm{mod}\text{-}\mathcal{T}^{\mathrm{c}}\) - the category of finitely presented right \(\mathcal{T}^{\mathrm{c}}\)-modules - to the category \(\mathbf{Ab}\) of abelian groups. In fact, [43, 7.1/7.2], this localisation turns out to be equivalent to the opposite of \(\mathrm{mod}\text{-}\mathcal{T}^{\mathrm{c}}\) which is, in turn, equivalent to \(\mathcal{T}^{\mathrm{c}}\)-mod. The latter equivalence, 2.4, reflects the fact that the absolutely pure \(=\) fp-injective \(\mathcal{T}^{\mathrm{c}}\)-modules coincide with the flat \(\mathcal{T}^{\mathrm{c}}\)-modules. We will, in Section 2.1, give details of this, as well as the action of each of these manifestations of \(\mathbb{L}(\mathcal{T})^{\mathrm{eq}+}\) on \(\mathcal{T}\), respectively on \(y\mathcal{T}\). Free realisations and pp-types are used a lot in the model theory of modules and applications, so in Section 2.3 we point out how these look in \(\mathcal{T}\). In Section 3.1 we present the various types of data which can specify a definable subcategory of \(\mathcal{T}\). In Section 3.2 we see the bijection between definable subcategories of \(\mathcal{T}\) and hereditary torsion theories of finite type on \(\mathrm{Mod}\text{-}\mathcal{T}^{\mathrm{c}}\) and in Section 3.3 we explore that connection in more detail. The category of imaginaries of a definable subcategory is described in Section 3.4. Some connections between torsion pairs in \(\mathcal{T}\) and hereditary torsion theories on \(\mathrm{Mod}\text{-}\mathcal{T}^{\mathrm{c}}\) are seen in Section 3.5 and this is continued in Section 3.7 with the bijection between triangulated definable subcategories and smashing subcategories of \(\mathcal{T}\). Section 3.6 describes spectra associated to \(\mathcal{T}\) and this is continued for tensor-triangulated categories in Section 4.1. For definable subcategories of module categories there is a duality, elementary duality, which exists at a number of levels, in particular between definable subcategories of \(\mathrm{Mod}\text{-}R\) and \(R\)-\(\mathrm{Mod}\). This carries over, at least to algebraic definable categories. We outline that in Section 3.8. If \(\mathcal{T}\) is tensor-triangulated with \(\mathcal{T}^{\mathrm{c}}\) rigid, then there is also an _internal_ duality, induced by the duality on \(\mathcal{T}^{\mathrm{c}}\); that is described in Section 4.2. Tensor-closed definable subcategories are briefly considered in Section 4 and in Section 4.3 there is some exploration of the wider possibilities for interpreting the model-theoretic language. Background on the model theory of modules can be found in various references; we use [38] as a convenient compendium of results and references to the original papers. We give a few reminders in this paper. The approach in [38] is algebraic/functor-category-theoretic; readers coming from model theory might find [36] or [45] a more approachable introduction. For model theory of modules over many-sorted rings, see [44]. Thanks to Isaac Bird and Jordan Williamson for a number of useful comments and for sharing their preprint [11]. ### The restricted Yoneda functor The restricted Yoneda functor \(y:\mathcal{T}\to\text{Mod-}\mathcal{T}^{\text{c}}\), \(X\to(-,X)\upharpoonright\mathcal{T}^{\text{c}}\) underlies most of what we do here. Restricting its domain to the category \(\mathcal{T}^{\text{c}}\) of compact objects gives, by the Yoneda Lemma and because \(\mathcal{T}\) is idempotent-complete (see [32, 1.6.8]), an equivalence between \(\mathcal{T}^{\text{c}}\) and the category \(\text{proj-}\mathcal{T}^{\text{c}}\) of finitely generated projective right \(\mathcal{T}^{\text{c}}\)-modules. The functor \(y\) is, however, neither full nor faithful and one effect of this is that the image of \(\mathcal{T}\) in \(\text{Mod-}\mathcal{T}^{\text{c}}\) is not closed under elementary equivalence, indeed it is not a definable subcategory (see Section 3.1) of \(\text{Mod-}\mathcal{T}^{\text{c}}\). We do, however, have 1.2 and 1.3 below (the second is just by the Yoneda Lemma). First we recall (see [38, SS2.1.1]) that an embedding \(M\to N\) of objects in a module category, more generally in a definable additive category, is **pure** if, for every pp formula \(\phi\), the (image of the) solution set \(\phi(M)\) is the intersection of \(\phi(N)\) "with \(M\)", meaning with the product of sorts of \(M\) corresponding to the free variables of \(\phi\). And \(M\) is **pure-injective** if every pure embedding with domain \(M\) is split. There are many equivalent definitions, see [38, SSSS4.3.1, 4.3.2]. The theory of purity - intimately connected with solution sets of pp formulas and so with the model theory of additive structures - was developed, in algebraic terms, in compactly generated triangulated categories in [9] and [23]. Essentially, it is the theory of purity in \(\text{Mod-}\mathcal{T}^{\text{c}}\), more precisely, in the definable subcategory generated by \(y\mathcal{T}\), pulled back to \(\mathcal{T}\). For example, \(X\in\mathcal{T}\) is pure-injective iff \(yX\) is a pure-injective \(\mathcal{T}^{\text{c}}\)-module. Since \(yX\) is absolutely pure that is equivalent to it being an injective \(\mathcal{T}^{\text{c}}\)-module. The pure-injective objects of \(\mathcal{T}\) play the same key role that they do in the model theory of modules. For instance every (\(\emptyset\)-)saturated module is pure-injective and the pure-injective modules are exactly the direct summands of saturated modules (see [39, 21.1/21.2] or [36, 2.9]); this is equally true in compactly generated triangulated categories.3 Footnote 3: This comment, like a few others, is particularly directed to those coming from model theory. **Proposition 1.2**.: _[_23_, 1.8]_ _If \(X\in\mathcal{T}\) is pure-injective then, for every \(Y\in\mathcal{T}\), the restricted Yoneda map \(y:(Y,X)\to(yY,yX)\) is bijective._ **Proposition 1.3**.: _If \(A\in\mathcal{T}\) is a compact object then, for every \(X\in\mathcal{T}\), the restricted Yoneda map \(y:(A,X)\to(yA,yX)\) is bijective._ In fact there is symmetry here in that 1.3 holds more generally for \(A\) pure-projective (that is, a direct summand of a direct sum of compact objects). We will use the fact that the restricted Yoneda functor induces an equivalence between the category \(\text{Pinj}(\mathcal{T})\) of pure-injective objects in \(\mathcal{T}\) and the category \(\text{Inj-}\mathcal{T}^{\text{c}}\) of injective right \(\mathcal{T}^{\text{c}}\)-modules. **Theorem 1.4**.: _([23, 1.9]) The restricted Yoneda functor \(y:\mathcal{T}\to\text{Mod-}\mathcal{T}^{\text{c}}\) induces an equivalence_ \[\text{Pinj}(\mathcal{T})\simeq\text{Inj-}\mathcal{T}^{\text{c}}.\] ### Definable subcategories of module categories Very briefly, we recall the context of the model theory of modules and the principal associated structures. Some of this is defined more carefully later in the paper but see the references for more detail. In model theory in general, the context is typically the category of models of some complete theory, with elementary embeddings. In the context of modules, it turns out to be more natural to work with **definable subcategories**, meaning full subcategories of module categories which are closed under elementary equivalence and which are **additive**, meaning closed under direct sums and direct summands. These subcategories are equivalently characterised, without reference to model theory, as follows (see [38, SS3.4] for this and various other characterisations by closure conditions). **Theorem 1.5**.: _A subcategory \(\mathcal{D}\) of a module category is a definable subcategory iff \(\mathcal{D}\) is closed under direct products, directed colimits and pure submodules._ If \(\mathcal{X}\) is a set of modules, then we denote by \(\langle\mathcal{X}\rangle\) the definable subcategory generated by \(\mathcal{X}\). It is the closure of \(\mathcal{X}\) under the above operations, equally it is the smallest additive subcategory containing \(\mathcal{X}\) and closed under elementary equivalence. It is the case, see [38, 3.4.8], that every definable subcategory is closed under pure-injective hulls where, if \(M\) is a module, its **pure-injective hull**\(H(M)\), a minimal pure, pure-injective extension of \(M\).4 It follows that every definable subcategory is determined by the pure-injective modules in it. If \(\mathcal{T}\) is a compactly generated triangulated category and \(X\in\mathcal{T}\), then the **pure-injective hull** of \(X\) may be defined to be the (unique-to-isomorphism over \(X\), by 1.4) object \(H(X)\) of \(\mathcal{T}\) such that \(yH(X)=E(yX)\), where \(E\) denotes injective hull in the module category \(\mathrm{Mod}\text{-}\mathcal{T}^{\mathrm{c}}\). Footnote 4: In fact, \(M\) is an elementary submodule of \(H(M)\), [48, Cor. 4 to Thm. 4]. To each **definable category**\(\mathcal{D}\) - meaning a category equivalent to a definable subcategory of a module category - there is associated a skeletally small abelian category, \(\mathrm{fun}(\mathcal{D})\), of functors on \(\mathcal{D}\). This can be defined as the category of pp-imaginaries (see Section 2.1) for \(\mathcal{D}\), or as a localisation of the free abelian category on \(R\) where \(\mathcal{D}\) is a definable subcategory of \(\mathrm{Mod}\text{-}R\) (\(R\) a possibly many-sorted ring), or as the category of **coherent functors** - those that commute with direct products and directed colimits - from \(\mathcal{D}\) to \(\mathbf{Ab}\). Each definable subcategory5\(\mathcal{C}\) of \(\mathcal{D}\) is determined by the Serre subcategory \(\mathcal{S}_{\mathcal{C}}\) of \(\mathrm{fun}(\mathcal{D})\) which consists of those functors which are \(0\) on \(\mathcal{C}\), and then \(\mathrm{fun}(\mathcal{C})\) is the (abelian) quotient category \(\mathrm{fun}(\mathcal{D})/\mathcal{S}_{\mathcal{C}}\) - the Serre localisation (see [26, p. 30ff.]) of \(\mathrm{fun}(\mathcal{D})\) at \(\mathcal{S}_{\mathcal{C}}\). Footnote 5: The containing module category in 1.5 may be replaced by any definable category. Also associated to a definable category \(\mathcal{D}\) is its **Ziegler spectrum**\(\mathrm{Zg}(\mathcal{D})\) ([54], see [38, Chpt. 5]) - a topological space whose points are the isomorphism classes of indecomposable pure-injective objects in \(\mathcal{D}\) and whose open subsets are the complements of zero-sets of sets of coherent functors on \(\mathcal{D}\). The closed subsets of \(\mathrm{Zg}(\mathcal{D})\) are in natural bijection with the definable subcategories of \(\mathcal{D}\), see [38, 5.1.6]. See Section 3.6 for more on this. ## 2 Model theory in compactly generated triangulated categories We use formulas to specify the definable subsets of objects of \(\mathcal{T}\). In order to set these up, we choose a subset \(\mathcal{G}\) of \(\mathcal{T}^{\mathrm{c}}\) and we take the (opposite of the) full subcategory on \(\mathcal{G}\) to be the category of sorts. For convenience, we will assume that \(\mathcal{G}\) is equivalent to \(\mathcal{T}^{\mathrm{c}}\), that is, contains at least one isomorphic copy of each compact object of \(\mathcal{T}\). By \(\mathcal{L}_{\mathcal{G}}\) we denote the resulting language, meaning the resulting set of formulas. We could take a smaller category of sorts, for instance, if \(\mathcal{T}\) is monogenic, generated by a single compact object \(S\), then we could consider the \(1\)-sorted language based on \(S\). The obvious question is whether this would suffice, in the sense that every set definable in the larger language would also be definable in the \(1\)-sorted language. We don't pursue this here, but the relative approach and results in [17], [18] should be helpful in answering this question. In the other direction, we could make the maximal choice of sorts and use a language with the category \(\mathbb{L}(\mathcal{T})^{\mathrm{eq+}}\) of pp-imaginaries (see Section 2.1) for the sorts. Since pp-imaginaries are already definable, this does not increase the collection of definable subsets. For most purposes the choice of category of sorts does not matter provided the definable subsets are the same. However, elimination of quantifiers and elimination of imaginaries are language-dependent, rather than structure-dependent. Our choice of \(\mathcal{G}\) as (essentially) \(\mathcal{T}^{\mathrm{c}}\) is exactly analogous to basing a language for the model theory of \(R\)-modules (\(R\) a \(1\)-sorted ring) on the category \(\mathrm{mod}\text{-}R\) of finitely presented modules, rather than using the \(1\)-sorted language based on the single module \(R_{R}\) (see [40] for more on choices of languages for additive categories). Having chosen \(\mathcal{G}\) we introduce a sort \(s_{A}\) for each \(A\in\mathcal{G}\) and a symbol for addition (and a symbol for the zero) on each sort and, for each \(f:A\to B\) in \(\mathcal{G}\), a corresponding function symbol from sort \(B\) to sort \(A\) to represent multiplication by \(f=\) composition with \(f\). Note that the morphisms of \(\mathcal{G}\) are the "elements of the ring-with-many-objects \(\mathcal{G}\)". Each object \(X\in\mathcal{T}\) then becomes a structure for this language by taking its elements of sort \(s_{A}\) to be the elements of \((A,X)\) and then interpreting the function symbols in the usual/obvious way. _Remark 2.1_.: If \(\mathcal{T}\) is tensor-triangulated and has an internal hom functor right adjoint to \(\otimes\), then these sorts, which by definition are abelian groups, can be taken instead to be objects of \(\mathcal{T}\), in the sense that we could interpret the sort \(s_{A}(X)\) to be the internal hom object \([A,X]\in\mathcal{T}\). In this "internal" interpretation of the language, we have, since \((A,X)\simeq(\mathbb{1},[A,X])\) where \(\mathbb{1}\) is the tensor-unit, the (usual) elements of \(X\) of sort \(A\) identified with the morphisms \(\mathbb{1}\to[A,X]\). We will write \(\mathcal{L}(\mathcal{T})\), or just \(\mathcal{L}\) for the language. Since we assume that \(\mathcal{G}\) is equivalent to \(\mathcal{T}^{\mathrm{c}}\), the \(\mathcal{L}(\mathcal{T})\)-structure \(X\in\mathcal{T}\), which is literally a right \(\mathcal{G}\)-module, may be identified with the image, \(yX=(-,X)\upharpoonright\mathcal{T}^{\mathrm{c}}\), of \(X\) under the restricted Yoneda functor \(y:\mathcal{T}\to\mathrm{Mod}\)-\(\mathcal{T}^{\mathrm{c}}\). Therefore the model theory of \(X\) as an object of \(\mathcal{T}\) is exactly that of \(yX\) as a right \(\mathcal{T}^{\mathrm{c}}\)-module. Indeed, \(\mathcal{L}(\mathcal{T})\) is equally a language for \(\mathcal{T}\) and for the module category \(\mathrm{Mod}\)-\(\mathcal{T}^{\mathrm{c}}\), but bear in mind that there are more \(\mathcal{T}^{\mathrm{c}}\)-modules than those which are in the image of \(\mathcal{T}\) in \(\mathrm{Mod}\)-\(\mathcal{T}^{\mathrm{c}}\), more even than in the definable subcategory of \(\mathrm{Mod}\)-\(\mathcal{T}^{\mathrm{c}}\) which is generated by that image. Indeed, the definable subcategory, \(\langle y\mathcal{T}\rangle\), of \(\mathrm{Mod}\)-\(\mathcal{T}^{\mathrm{c}}\) generated by the image of \(\mathcal{T}\) is exactly the subcategory, \(\mathrm{Flat}\)-\(\mathcal{T}^{\mathrm{c}}=\mathrm{Abs}\)-\(\mathcal{T}^{\mathrm{c}}\), consisting of the flat = absolutely pure6\(\mathcal{T}^{\mathrm{c}}\)-modules. Footnote 6: In ‘most’ module categories the flat and absolutely pure modules have little overlap; the fact that they are equal over the ring \(\mathcal{T}^{\mathrm{c}}\) is a very characteristic feature here. **Theorem 2.2**.: _[_8, 8.11, 8.12_]_ _[_23, 2.7_]_ _If \(\mathcal{T}\) is a compactly generated triangulated category and \(y:\mathcal{T}\to\mathrm{Mod}\)-\(\mathcal{T}^{\mathrm{c}}\) is the restricted Yoneda functor, then \(\langle y\mathcal{T}\rangle=\mathrm{Abs}\)-\(\mathcal{T}^{\mathrm{c}}=\mathrm{Flat}\)-\(\mathcal{T}^{\mathrm{c}}\)_ Therefore the model theory of \(\mathcal{T}\) is the same as the model theory of the flat = absolutely pure right \(\mathcal{T}^{\mathrm{c}}\)-modules7. The one difference is that some structures are missing from \(\mathcal{T}\): except in the case that \(\mathcal{T}\) is pure semisimple [9, 9.3], there are structures in \(\langle y\mathcal{T}\rangle\) which are not in \(y\mathcal{T}\). However, the equivalence, 1.4, of categories \(\mathrm{Pinj}(\mathcal{T})\simeq\mathrm{Inj}\)-\(\mathcal{T}^{\mathrm{c}}\) between the pure-injective objects of \(\mathcal{T}\) and the injective \(\mathcal{T}^{\mathrm{c}}\)-modules, implies that \(y\mathcal{T}\) does contain all the pure-injective models, in particular all the saturated models, of its theory. It follows from 2.2 that implications and equivalences of pp-formulas on \(\mathcal{T}\) and on \(\mathrm{Flat}\)-\(\mathcal{T}^{\mathrm{c}}=\mathrm{Abs}\)-\(\mathcal{T}^{\mathrm{c}}\) are the same. Footnote 7: \(\mathcal{T}^{\mathrm{c}}\) is both right and left coherent as a ring with many objects (see [33, §4]), which is why the flat and the absolutely pure objects form definable subcategories (see [38, 3.4.24]). For convenience we will often write \((-,X)\) instead of \((-,X)\upharpoonright\mathcal{T}^{\mathrm{c}}=yX\) when \(X\in\mathcal{T}\). ### The category of pp-sorts Let \(R\) be a, possibly multisorted, ring and let \(\mathcal{D}\) be a definable subcategory of \(\mathrm{Mod}\)-\(R\). We recall how to define the **category \(\mathbb{L}(\mathcal{D})^{\mathrm{sq}+}\) of pp sorts** (or **pp-imaginaries**) for \(\mathcal{D}\). First, for \(\mathcal{D}=\mathrm{Mod}\)-\(R\), the category \(\mathbb{L}(\mathrm{Mod}\)-\(R)^{\mathrm{sq}+}\), more briefly denoted \(\mathbb{L}_{R}^{\mathrm{sq}+}\), has, for its objects, the **pp-pairs**\(\phi/\psi\), that is pairs \((\phi,\psi)\) of pp formulas for \(R\)-modules with \(\phi\geq\psi\), meaning \(\phi(M)\geq\psi(M)\) for all \(M\in\mathrm{Mod}\)-\(R\). For its arrows, we take the pp-definable maps between these pairs. See [38, SS3.2.2] for details and the fact that this category is abelian. Each such pp-pair defines a coherent functor \(M\mapsto\phi(M)/\psi(M)\) from \(\mathrm{Mod}\)-\(R\) to \(\mathbf{Ab}\) and every coherent functor has this form, see, for instance, [38, SS10.2]. For general \(\mathcal{D}\), a definable subcategory of \(\mathrm{Mod}\)-\(R\), we let \(\Phi_{\mathcal{D}}\) be the Serre subcategory of \(\mathbb{L}_{R}^{\mathrm{sq}+}\) consisting of those pp-pairs \(\phi/\psi\) which are **closed on**, that is \(0\) on, every \(M\in\mathcal{D}\) (that is, \(\phi(M)=\psi(M)\) for every \(M\in\mathcal{D}\)). Then \(\mathbb{L}(\mathcal{D})^{\mathrm{sq}+}\) is defined to be the quotient = Serre-localisation \(\mathbb{L}_{R}^{\mathrm{sq}+}/\Phi_{\mathcal{D}}\). So \(\mathbb{L}(\mathcal{D})^{\mathrm{sq}+}\) has the same objects as \(\mathbb{L}_{R}^{\mathrm{sq}+}\) - the pp-pairs - and the morphisms in \(\mathbb{L}(\mathcal{D})^{\mathrm{sq}+}\) are given by pp formulas which on every \(M\in\mathcal{D}\) define a function. In particular the pp-pairs closed on \(\mathcal{D}\) are isomorphic to \(0\) in \(\mathbb{L}(\mathcal{D})^{\mathrm{sq}+}\). The localised category \(\mathbb{L}(\mathcal{D})^{\mathrm{sq}+}\) also is abelian; in fact, see [46, 2.3], every skeletally small abelian category arises in this way. An equivalent [39, 12.10], but less explicit, definition is that \(\mathbb{L}(\mathcal{D})^{\mathrm{sq}+}=(\mathcal{D},\mathbf{Ab})^{\mathbb{I}\to}\) - the category of functors8 from \(\mathcal{D}\) to \(\mathbf{Ab}\) which commute with direct products and directed colimits (that is, coherent functors, equivalently [39, 25.3] interpretation functors in the model-theoretic sense). Footnote 8: additive, as always assumed in this paper It is well-known, see [38, 10.2.37, 10.2.30], and much-used, that, for \(\mathcal{D}=\mathrm{Mod}\)-\(R\), the category of pp-pairs is equivalent to the free abelian category on \(R\) and, also, that it can be realised as the category \((\mathrm{mod}\)-\(R,\mathbf{Ab})^{\mathrm{fp}}\) of finitely presented functors on finitely presented modules (see [38, 10.2.30, 10.2.37]) equivalently, as just said, it is equivalent to the category of coherent functors on all modules (see [38, SS10.2.8]). Then, for a general definable subcategory \(\mathcal{D}\) of \(\mathrm{Mod}{-}R\), we obtain \(\mathbb{L}(\mathcal{D})^{\mathrm{eq}+}\) as the Serre-quotient \((\mathrm{mod}{-}R,\mathbf{Ab})^{\mathrm{fp}}/\mathcal{S}_{\mathcal{D}}\) where \(\mathcal{S}_{\mathcal{D}}\) is the Serre subcategory of those functors \(F\in(\mathrm{mod}{-}R,\mathbf{Ab})^{\mathrm{fp}}\) with \(\overrightarrow{F}\mathcal{D}=0\). Here \(\overrightarrow{F}\) is the unique extension of (a finitely presented) \(F:\mathrm{mod}{-}R\to\mathbf{Ab}\) to a (coherent) functor from \(\mathrm{Mod}{-}R\) to \(\mathbf{Ab}\) which commutes with directed colimits. Often we simplify notation by retaining the notation \(F\) for this extension \(\overrightarrow{F}\). Under the identification of \(\mathbb{L}_{R}^{\mathrm{eq}+}\) and \((\mathrm{mod}{-}R,\mathbf{Ab})^{\mathrm{fp}}\) the Serre subcategory \(\Phi_{\mathcal{D}}\) is identified with \(\mathcal{S}_{\mathcal{D}}\). In applying this in our context, we use the following result, where Flat-\(R\) denotes the category of flat right \(R\)-modules and Abs-\(R\) denotes the category of absolutely pure \(=\) fp-injective right \(R\)-modules. **Theorem 2.3**.: _[_43_, 7.1/7.2]_ _If \(R\) is any left coherent (multisorted) ring, then Flat-\(R\) is a definable subcategory of \(\mathrm{Mod}{-}R\) and_ \[\mathbb{L}(\mathrm{Flat}{-}R)^{\mathrm{eq}+}\simeq R{\mathrm{-mod}}.\] _If \(R\) is a right coherent ring, then Abs-\(R\) is a definable subcategory of \(\mathrm{Mod}{-}R\) and_ \[\mathbb{L}(\mathrm{Abs}{-}R)^{\mathrm{eq}+}\simeq(\mathrm{mod}{-}R)^{\mathrm{ op}}.\] Because \(\mathcal{T}^{\mathrm{c}}\) is right and left coherent, [8, 8.11, 8.12], and since Abs-\(\mathcal{T}^{\mathrm{c}}=\mathrm{Flat}{-}\mathcal{T}^{\mathrm{c}}\), we have the following corollary. **Corollary 2.4**.: _If \(\mathcal{T}\) is a compactly generated triangulated category, then there is an equivalence_ \[d:\mathcal{T}^{\mathrm{c}}{\mathrm{-mod}}\simeq(\mathrm{mod}{-}\mathcal{T}^{ \mathrm{c}})^{\mathrm{op}}\] _and this category is equivalent to the category \(\mathbb{L}(\mathcal{T})^{\mathrm{eq}+}\) of pp-imaginaries for \(\mathcal{T}\)._ We write \(d\) for the (anti-)equivalence in each direction. There is another description of the category appearing in 2.4. We say that a **coherent** functor on \(\mathcal{T}\) is one which is the cokernel of a map between representable functors \((A,-):\mathcal{T}\to\mathbf{Ab}\) with \(A\in\mathcal{T}^{\mathrm{c}}\). Explicitly, if \(f:A\to B\) is in \(\mathcal{T}^{\mathrm{c}}\) then we obtain an exact sequence of functors on \(\mathcal{T}\): \[(B,-)\xrightarrow{(f,-)}(A,-)\to F_{f}\to 0;\] and the cokernel \(F_{f}\) is a typical coherent functor on \(\mathcal{T}\). In module categories having a presentation of this form, with \(A\) and \(B\) finitely presented, is equivalent to commuting with products and directed colimits but triangulated categories don't have directed colimits. There is the following analogous characterisation. **Theorem 2.5**.: _[_24_, 5.1]_ _Suppose that \(\mathcal{T}\) is a compactly generated triangulated category. Then \(F:\mathcal{T}\to\mathbf{Ab}\) is a coherent functor iff \(F\) commutes with products and sends homology colimits to colimits._ We denote the category of coherent functors on \(\mathcal{T}\), with the natural transformations between them, by \(\mathrm{Coh}(\mathcal{T})\). This category is abelian; in fact we have the following. **Theorem 2.6**.: _[_24_, 7.2]_ _There is a duality_ \[(\mathrm{mod}{-}\mathcal{T}^{\mathrm{c}})^{\mathrm{op}}\simeq\mathrm{Coh}( \mathcal{T})\] _and hence_ \[\mathrm{Coh}(\mathcal{T})\simeq\,\mathcal{T}^{\mathrm{c}}{\mathrm{-mod}}.\] Indeed, to go from \(\mathrm{Coh}(\mathcal{T})\) to \(\mathcal{T}^{\mathrm{c}}\)-mod we just restrict the action of \(F\in\mathrm{Coh}(\mathcal{T})\) to \(\mathcal{T}^{\mathrm{c}}\) and, in the other direction, we apply the projective presentation \((B,-)\to(A,-)\to G\to 0\) of a finitely presented left \(\mathcal{T}^{\mathrm{c}}\)-module in \(\mathcal{T}\) and we get a coherent functor. **Corollary 2.7**.: _The category of pp-imaginaries for a compactly generated triangulated category \(\mathcal{T}\) can be realised in the following forms_ \[\mathbb{L}(\mathcal{T})^{\mathrm{eq}+}\simeq\,\mathrm{Coh}(\mathcal{T})\, \simeq\,\mathcal{T}^{\mathrm{c}}{\mathrm{-mod}}.\] The duality in 2.6 respects the actions of those categories on \(\mathcal{T}\). We give the details. The action of \(\mathrm{Coh}(\mathcal{T})\) on \(\mathcal{T}\) is given by the exact sequence above presenting \(F_{f}\): if \(X\in\mathcal{T}\), then \(F_{f}(X)\) is defined by exactness of the sequence \[(B,X)\to(A,X)\to F_{f}(X)\to 0.\] The action of \(\mathrm{mod}\text{-}\mathcal{T}^{\mathrm{c}}\) on \(\mathcal{T}\) is given by \(\mathrm{Hom}\) applied after the restricted Yoneda functor \(y\). Explicitly: the typical finitely presented right \(\mathcal{T}^{\mathrm{c}}\)-module \(G_{f}\) is given by an exact sequence (a projective presentation) \[yA\xrightarrow{\ yf}yB\to G_{f}\to 0,\] that is \[(-,A)\xrightarrow{\ (-,f)}(-,B)\to G_{f}\to 0,\] where \(A\xrightarrow{\ f}B\in\mathcal{T}^{\mathrm{c}}\). The action of \(G_{f}\) on \(X\in\mathcal{T}\) is induced by the action of \((-,yX)\) on it: we have the exact sequence \[0\to(G_{f},(-,X))\to((-,B),(-,X))\xrightarrow{\ ((-,f),(-,X))}((-,A),(-,X)),\] that is \[0\to G_{f}(X)\to(B,X)\xrightarrow{\ (f,X)}(A,X),\] defining the value of \(G_{f}\) on the typical object \(X\in\mathcal{T}\). So, if \(G\in\mathrm{mod}\text{-}\mathcal{T}^{\mathrm{c}}\) and \(X\in\mathcal{T}\), then the action of \(G\) on \(X\) is defined by \[G(X)=(G,yX).\] Notice that the morphism \(f:A\to B\) in \(\mathcal{T}^{\mathrm{c}}\) has given rise to the exact sequence of abelian groups: \[0\to G_{f}(X)\to(B,X)\xrightarrow{\ (f,X)}(A,X)\to F_{f}(X)\to 0. \tag{1}\] The duality \((\mathrm{mod}\text{-}\mathcal{T}^{\mathrm{c}})^{\mathrm{op}}\simeq\mathrm{Coh }(\mathcal{T})\) in 2.6 takes a finitely presented right \(\mathcal{T}^{\mathrm{c}}\)-module \(G\) to the coherent functor \[G^{\circ}:X\mapsto(G,yX)=(G,(-,X))\] for \(X\in\mathcal{T}\) - that is, the action we defined just above. This takes the representable functor \(G=(-,A)\) where \(A\in\mathcal{T}^{\mathrm{c}}\), to the representable coherent functor \((A,-):\mathcal{T}\to\mathbf{Ab}\). Therefore, the 4-term exact sequence (1) above can be read as the application of the following exact sequence of functors in \(\mathrm{Coh}(\mathcal{T})\) to \(X\). \[0\to G_{f}^{\circ}\to(B,-)\xrightarrow{\ (f,-)}(A,-)\to F_{f}\to 0. \tag{2}\] In the other direction, the duality \((\mathrm{mod}\text{-}\mathcal{T}^{\mathrm{c}})^{\mathrm{op}}\simeq\mathrm{Coh }(\mathcal{T})\) takes \(F\in\mathrm{Coh}(\mathcal{T})\) to the finitely presented \(\mathcal{T}^{\mathrm{c}}\)-module \[F^{\circ}:C\mapsto(F,(C,-))\] for \(C\in\mathcal{T}^{\mathrm{c}}\). So \((A,-)^{\circ}=(-,A)\). If \(F=F_{f}\), then applying \((-,(C,-))\) to the presentation (2) of \(F_{f}\) and using that \((C,-)\) is injective in \(\mathrm{Coh}(\mathcal{T})\) (by 2.6 and since \((-,C)\) is projective in \(\mathrm{Mod}\text{-}\mathcal{T}^{\mathrm{c}}\)) allows us to read the resulting 4-term exact sequence as the application, to \(C\in\mathcal{T}^{\mathrm{c}}\), of the following exact sequence of functors in \(\mathrm{mod}\text{-}\mathcal{T}^{\mathrm{c}}\). \[0\to F_{f}^{\circ}\to(-,A)\xrightarrow{\ (-,f)}(-,B)\to G_{f}\to 0 \tag{3}\] Applying the duality-equivalences \((-)^{\circ}:(\mathrm{Coh}(\mathcal{T}))^{\mathrm{op}}\to\mathrm{Mod}\text{-} \mathcal{T}^{\mathrm{c}}\) and \((-)^{\circ}:(\mathrm{Mod}\text{-}\mathcal{T}^{\mathrm{c}})^{\mathrm{op}}\to \mathrm{Coh}(\mathcal{T})\) interchanges (2) - an exact sequence in \(\mathrm{Coh}(\mathcal{T})\) - and (3) - an exact sequence in \(\mathrm{mod}\text{-}\mathcal{T}^{\mathrm{c}}\). The equivalences of these functor categories with the category \(\mathbb{L}(\mathcal{T})^{\mathrm{eq}+}\) of pp-pairs for \(\mathcal{T}\) are given explicitly on objects as follows. Let \(f:A\to B\) be a morphism in \(\mathcal{T}^{\mathrm{c}}\), so \(F_{f}\) is a typical coherent functor. We have that \(F_{f}X=(A,X)/\mathrm{im}(f,X)\) and hence \(F_{f}\) is the functor given by the pp-pair \((x_{A}=x_{A})/(\exists y_{B}\ x_{A}=y_{B}f)\), that is \[F_{f}=(x_{A}=x_{A})/(f|x_{A}).\] We use subscripts on variables to show their sorts but might sometimes drop them for readability. We also use variables (which really belong in formulas) to label morphisms (for which they are place-holders) in what we hope is a usefully suggestive way. Also, from the exact sequence (1), we see that \(G_{f}^{\circ}(-)=\ker(f,-)\) and so is the functor given by the pp-pair \[G_{f}^{\circ}=(x_{B}f=0)/(x_{B}=0).\] Since the duality \(\operatorname{Coh}(\mathcal{T})\simeq(\text{mod-}\mathcal{T}^{\text{c}})^{ \text{op}}\) preserves the actions on \(\mathcal{T}\), these pp-pairs also give the actions of, respectively, \(F_{f}^{\circ}\) and \(G_{f}\) on \(\mathcal{T}\). To go from pp-pairs to functors, we may use 2.15 below, which says that every pp-pair is isomorphic to one of a form seen above, \(xf=0/x=0\). ### Elimination of quantifiers If a ring \(R\) is right coherent then every pp formula is equivalent on Abs-\(R\) to an annihilator formula and, if \(R\) is left coherent, then every pp formula on Flat-\(R\) is equivalent to a divisibility formula (see [38, 2.3.20, 2.3.9+2.3.19]). These results are equally valid for rings with many objects (because any formula involves only finitely many sorts, so is equivalent to a formula over a ring with one object). It follows that the theory of \(\mathcal{T}\) has elimination of quantifiers, indeed it has the stronger property elim-q\({}^{+}\), meaning that each pp formula is equivalent to a quantifier-free pp formula, that is, to a conjunction of equations9. Also \(\mathcal{T}\) has the elementary-dual elimination of pp formulas to divisibility formulas. But it is instructive to see exactly how this works when the ring is the category \(\mathcal{T}^{\text{c}}\) of compact objects of a compactly generated triangulated category \(\mathcal{T}\). This is an expansion of [18, 3.1, 3.2]. We write \(0\) for any \(n\)-tuple \((0,\ldots,0)\). Footnote 9: Indeed, since our sorts are closed under finite direct sums, every pp formula is equivalent to a single equation Given \(f:A\to B\) in \(\mathcal{T}^{\text{c}}\), form the distinguished triangle10 as shown. Footnote 10: We will often write “triangle” meaning distinguished triangle. \[A\xrightarrow{f}B\xrightarrow{g}C\to\Sigma A\] Since \(\mathcal{T}^{\text{c}}\) is triangulated, \(C\in\mathcal{T}^{\text{c}}\). Since representable functors on a triangulated category are exact (meaning that they take triangles to (long) exact sequences), for every \(X\in\mathcal{T}\), \((C,X)\xrightarrow{(g,X)}(B,X)\xrightarrow{(f,X)}(A,X)\) is exact so, for \(x_{B}\in(B,X)\), we have \(x_{B}\in\ker(B,X)\) iff \(x_{B}\in\operatorname{im}(g,X)\), that is, \(x_{B}f=0\) iff \(g\mid x_{B}\) that is, iff \(\exists y_{C}\,(x_{B}=y_{C}g)\). Thus \[x_{B}f=0\quad\Leftrightarrow\quad g\mid x_{B}.\] Since \(\mathcal{T}^{\text{c}}\) has finite direct sums, tuples of variables may be wrapped up into single variables (we do this explicitly below), so these formulas are general annihilator and divisibility formulas. Therefore every annihilator formula is equivalent to a divisibility formula and _vice versa_. We record this. **Proposition 2.8**.: _If \(A\xrightarrow{f}B\xrightarrow{g}C\to\Sigma A\) is a distinguished triangle, then the formula \(x_{B}f=0\) is equivalent to \(g\mid x_{B}\)._ Before continuing, note that, because \(\mathcal{T}^{\text{c}}\) is closed under finite direct sums, a finite sequence \((x_{1},\ldots,x_{n})\) of variables, with \(x_{i}\) of sort \(A_{i}\), may be regarded as a single variable of sort \(A_{1}\oplus\cdots\oplus A_{n}\). That simplifies notation and allows us to treat a general pp formula as one of the form \(\exists x_{B^{\prime}}\,(x_{B}f=x_{B^{\prime}}f^{\prime})\), that is, \(f^{\prime}|xf\) for short. That is equivalent to \(\exists x_{B^{\prime}}\,\Big{(}(x_{B},x_{B^{\prime}})\Big{(}\begin{array}{ c}f\\ f^{\prime}\end{array}\Big{)}=0\Big{)}\) \(A\)\(B^{\prime}\)\(B^{\prime}\)\(A\)\(B\)\(A\)\(B\)\(B \(A\)\(\xrightarrow{\left(\begin{smallmatrix}f\\ f^{\prime}\end{smallmatrix}\right)}B\oplus B^{\prime}\)\((-)\) So form the triangle \(A\)\(\xrightarrow{\left(\begin{smallmatrix}f\\ f^{\prime}\end{smallmatrix}\right)}B\oplus B^{\prime}\)\(\xrightarrow{\overline{\pi}(g,g^{\prime})}C\rightarrow\Sigma A\). By 2.8 above, the formula \(\exists\,x_{B^{\prime}}\left((x_{B},x_{B^{\prime}})\left(\begin{smallmatrix}f \\ f^{\prime}\end{smallmatrix}\right)=0\right)\) is equivalent to \(\exists\,x_{B^{\prime}}\,\exists x_{C}\left((x_{B},x_{B^{\prime}})=x_{C} \overline{g}\right)\), that is to \[\exists x_{B^{\prime}}\,\exists x_{C}\left(x_{B}=x_{C}g\,\wedge\,x_{B^{\prime }}=x_{C}g^{\prime}\right)\!,\] and the \(x_{B^{\prime}}\) is irrelevant now (set \(x_{B^{\prime}}=x_{C}g^{\prime}\)). So the original formula is equivalent to \(g\mid x_{B}\) where \(g\) is, up to sign, the map which appears in the weak pushout \(A\)\(\xrightarrow{f}\). Let us record that. **Lemma 2.9**.: _Given morphisms \(f,f^{\prime}:A\to B\) in \(\mathcal{T}^{c}\), the (typical pp) formula_ \[\exists\,x_{B^{\prime}}\left(x_{B}f=x_{B^{\prime}}f^{\prime}\right)\] _is equivalent to the divisibility formula \(g|x_{B}\), where \(g\) is as in the distinguished triangle_ \[A\xrightarrow{\left(\begin{smallmatrix}f\\ f^{\prime}\end{smallmatrix}\right)}B\oplus B^{\prime}\xrightarrow{(g,g^{ \prime})}C\rightarrow\Sigma A,\] _and hence is also equivalent to the annihilation formula \(x_{B}f^{\prime\prime}=0\) where_ \[A^{\prime}\xrightarrow{f^{\prime\prime}}B^{\prime}\xrightarrow{g}C \rightarrow\Sigma A^{\prime}\] _is a distinguished triangle._ Thus every pp formula is equivalent on \(\mathcal{T}\) to a divisibility formula and hence also to an annihilator formula. In particular: **Theorem 2.10**.: _[_18_, 3.1, 3.2]_ _If \(\mathcal{T}\) is a compactly generated triangulated category and \(\mathcal{L}\) is the language for \(\mathcal{T}\) based on \(\mathcal{T}^{c}\), then (the theory of1) \(\mathcal{T}\) has elimination of quantifiers, indeed has elim-\(q^{+}\)._ Footnote 11: Meaning that every completion of the theory of \(\mathcal{T}\) has elimination of quantifiers and the elimination is uniform over these completions. ### Types and free realisations We start with a little model theory but soon come back to the algebra. If \(A_{1},\ldots,A_{n}\) are compact objects of \(\mathcal{T}\) and if \(a_{i}:A_{i}\to X\in\mathcal{T}\) are elements of \(X\in\mathcal{T}\), then the **type** of \(\overline{a}=(a_{1},\ldots,a_{n})\) (in \(X\)) is the set of formulas \(\chi\) such that \(\overline{a}\in\chi(X)\). The **pp-type** of \(\overline{a}\in X\) is \[\operatorname{pp}^{X}(\overline{a})=\{\phi\ \operatorname{pp}:\overline{a}\in\phi(X)\}.\] Since we have pp-elimination of quantifiers (1.1) the type of \(\overline{a}\) in \(X\) is determined by its subset \(\operatorname{pp}^{X}(\overline{a})\). Indeed it is equivalent, modulo the theory of \(\mathcal{T}\) (equivalently, the theory of absolutely \(\operatorname{pure}=\operatorname{flat}\mathcal{T}^{c}\)-modules) to the set \(\operatorname{pp}^{X}(\overline{a})\,\cup\,\{\neg\psi:\psi\ \operatorname{pp}\ \text{and}\ \psi\notin \operatorname{pp}^{X}(\overline{a})\}\).12 Footnote 12: This is also true for types with parameters but we don’t use these in this paper. For more on this see, for instance, [36, 2.20]. As remarked already, because \(\mathcal{T}^{c}\) has finite direct sums, we can replace a tuple \((x_{1},\ldots,x_{n})\) of variables \(x_{i}\) of sort \(A_{i}\) by a single variable of sort \(A_{1}\oplus\cdots\oplus A_{n}\) (and, similarly, tuples of elements may be replaced by single elements). So any pp-definable subgroup of an object \(X\in\mathcal{T}\) - that is, the solution set \(\phi(X)\) in \(X\) of some pp formla \(\phi\) - can be taken to be a subgroup of \((A,X)\) for some \(A\in\mathcal{T}^{c}\). We say that two formulas are **equivalent** (on \(\mathcal{T}\)) if they have the same solution set in every \(X\in\mathcal{T}\). There is an ordering on the set of (equivalence classes of) pp formulas: if \(\phi\), \(\psi\) are pp formulas in the same free variables, then we set \(\phi\leq\psi\) iff \(\forall X\in\mathcal{T}\), \(\phi(X)\leq\psi(X)\). This (having fixed the free variables) is a lattice with meet given by conjunction \(\phi\wedge\psi\) (defining the intersection of the solution sets) and join given by sum \(\phi+\psi\) (defining the sum of the solution sets). By a **pp-type** we mean a deductively closed set of pp formulas, equivalently13 a filter (i.e. meet- and upwards-closed) in the lattice of (equivalence classes of) pp formulas (always with some fixed sequence of free variables). We note the following analogue of the module category case (see [38, 1.2.23]). Footnote 13: for types without parameters **Lemma 2.11**.: _Suppose that \(\mathcal{T}\) is a compactly generated triangulated category and \(\phi\), \(\psi\) are pp formulas with the same free variables. Then \(\phi\leq\psi\) iff for all \(A\in\mathcal{T}^{c}\) we have \(\phi(A)\leq\psi(A)\)._ **Proof.** Suppose that for all \(A\in\mathcal{T}^{c}\) we have \(\phi(A)\leq\psi(A)\) and let \(X\in\mathcal{T}\). Since \(yX\) is a flat object of Mod-\(\mathcal{T}^{c}\), it is the direct limit of some directed diagram of finitely generated projective \(\mathcal{T}^{c}\)-modules. The latter all have the form \(yA\) for some \(A\in\mathcal{T}^{c}\). Since, for any pp formula \(\phi\), \(\phi(-)\) commutes with direct limits (see [38, 1.2.31]), we conclude that \(\phi(yX)\leq\psi(yX)\), and hence that \(\phi(X)\leq\psi(X)\), as required. \(\Box\) In the above proof we made the (harmless and useful) identification of pp formulas for objects of \(\mathcal{T}\) and for right \(\mathcal{T}^{c}\)-modules. Suppose that \(p\) is a pp-type, consisting of pp formulas with free variables \(x_{1},\ldots,x_{n}\) where \(x_{i}\) has sort (labelled by) \(A_{i}\in\mathcal{T}^{c}\). Then, by [38, 3.3.6, 4.1.4], \(p\) has a **realisation** in some object \(M\) in the definable subcategory \(\langle y\mathcal{T}\rangle\) of Mod-\(\mathcal{T}^{c}\), meaning there is a tuple \(\overline{b}\) of elements in \(M\) with \(\operatorname{pp}^{M}(\overline{b})=p\). Pp-types are unchanged by pure embeddings and every such module \(M\) is a pure, indeed elementary, subobject of its pure-injective (= injective) hull, which has the form \(yX\) for some \(X\in\mathcal{T}\). So we obtain a realisation of \(p\) in some object \(X\in\mathcal{T}\): there is \(\overline{a}=(a_{1},\ldots,a_{n})\) with \(a_{i}:A_{i}\to X\) such that \(\operatorname{pp}^{X}(\overline{a})=p\). The object \(X\) is pure-injective in \(\mathcal{T}\) (1.4) and, moreover, may be chosen to be minimal such14, in which case it is denoted \(H(p)\) - the **hull** of \(p\). This is unique up to isomorphism in the sense that if \(N\) is a pure-injective object of \(\mathcal{T}\) and if \(\overline{c}\) is a tuple from \(N\) with \(\operatorname{pp}^{N}(\overline{c})=p\), then there is an embedding of \(H(p)\) into \(N\) as a direct summand, taking \(\overline{a}\) to \(\overline{c}\) and this will be an isomorphism if \(N\) also is minimal over \(\overline{c}\). See [38, SS4.3.5] for this and related results - these all apply to any compactly generated triangulated category \(\mathcal{T}\) because its model theory is really just that of a definable subcategory of Mod-\(\mathcal{T}^{c}\), and because all the pure-injective objects of that definable subcategory are images of objects of \(\mathcal{T}\). Footnote 14: Corresponding to the injective hull of the submodule of \(M\) generated by the entries of \(\overline{b}\). If \(\phi\) is a pp formula, then we have the pp-type it generates: \[\langle\phi\rangle=\{\psi:\phi\leq\psi\}.\] We say that a pp-type is **finitely generated** (**by**\(\phi\)) if it has this form for some \(\phi\). If \(\phi\) is a pp formula with free variable of sort \(A\) (without loss of generality we may assume that there is just one free variable) then a **free realisation** of \(\phi\) is a pair \((C,c_{A})\) where \(C\in\mathcal{T}^{c}\) and \(c_{A}:A\to C\) is an element of \(C\) of sort \(A\) with \(\operatorname{pp}^{C}(c_{A})=\langle\phi\rangle\). We have the following analogue to [38, 1.2.7]. In the statement of this result, we continue to overuse notation by allowing \(x_{A}\) to denote an element of sort \(A\) (in addition to our use of \(x_{A}\) to denote a variable of sort \(A\)). **Lemma 2.12**.: _Suppose \(\phi\) is a pp formula with free variable \(x_{A}\) (for some \(A\in\mathcal{T}^{c}\)). Let \(C\in\mathcal{T}^{c}\) and suppose \(c_{A}\in(A,C)\) with \(c_{A}\in\phi(C)\). Then \((C,c_{A})\) is a free realisation of \(\phi\) iff for every \(x_{A}:A\to X\in\mathcal{T}\) such that \(x_{A}\in\phi(X)\), there is a morphism \(h:C\to A\) with \(hc_{A}=x_{A}\)._ **Proof.** Existence of free realisations in \(\mathcal{T}\) (2.14 below) gives the direction (\(\Leftarrow\)) since, if \((B,b)\) is a free realisation of \(\phi\), then there is a morphism \(g:C\to B\) with \(gc_{A}=b\), so \(\operatorname{pp}^{C}(c_{A})=\langle\phi\rangle\) (because morphisms are non-decreasing on pp-types - see [38, 1.2.8]). For the converse, if \(a\in\phi(X)\), then \(\operatorname{\mathit{ya}}\in\phi(yX)\)15 Since the pp-type of \(yc_{A}\) in \(yC\) is exactly that of \(c_{A}\) in \(C\), it is generated by \(\phi\) and hence, since \(\operatorname{\mathit{ya}}\in\phi(yX)\), there is, by [38, 1.2.7], a morphism \(f^{\prime}:yC\to yX\) with \(f^{\prime}:yc_{A}=ya\). Because \(C\in\mathcal{T}^{c}\), there is, by 1.3, \(f:C\to X\) with \(f^{\prime}=yf\). Therefore \(y(fc_{A})=ya\) so, again by 1.3, \(fc_{A}=a\), as required. \(\Box\) We show that every pp formula in the language for \(\mathcal{T}\) has a free realisation in \(\mathcal{T}\). We use the fact that every formula is equivalent to a divisibility formula. If a morphism \(f\)**factors initially through** a morphism \(g\) - that is, \(f=hg\) for some \(h\) - then write \(g\geq f\). **Lemma 2.13**.: _If \(f:A\to B\) is a morphism in \(\mathcal{T}^{\rm c}\) then the pp-type, \(\langle f|x_{A}\rangle\), generated by the formula \(f|x_{A}\) is, up to equivalence of pp formulas, \(\{g|x_{A}:g\geq f\}\)._ **Proof.** By 2.9 every pp formula is equivalent to a divisibility formula, so we need only consider formulas of the form \(g|x_{A}\). If \(g\geq f\), say \(g:A\to C\) and \(f=hg\) with \(h:C\to B\), then, for any \(x_{A}:A\to X\in\mathcal{T}\) with \(f|x_{A}\), say \(x_{A}=x_{B}f\), we have \(x_{A}=x_{B}hg=x_{C}g\) with \(x_{C}=x_{B}g\), so we have \(g|x_{A}\). That is, \(g|x_{A}\in\langle f|x_{A}\rangle\). For the converse, if \(g:A\to C\) is in \(\mathcal{T}^{\rm c}\) and \(g|x_{A}\in\langle f|x_{A}\rangle\), then, applying this with \(X=B\) and \(x_{A}=f\), we obtain that there is \(h:C\to B\) such that \(hg=f\), and \(g\geq f\), as required. \(\Box\) **Corollary 2.14**.: _Suppose that \(\phi(x_{A})\) is a pp formula for the language of \(\mathcal{T}\). Choose (by 2.9) a morphism \(f:A\to B\) in \(\mathcal{T}^{\rm c}\) such that \(\phi\) is equivalent to \(f|x_{A}\). Then \((B,f)\) is a free realisation of \(\phi\)._ ### Elimination of imaginaries Next we prove elimination of pp-imaginaries: we show that every pp-pair is isomorphic, in the category \(\mathbb{L}(\mathcal{T})^{\rm eq+}\) of pp-pairs, to a pp formula, indeed by 2.10, to a quantifier-free pp formula if we identify a pp formula \(\phi(\overline{x})\) with the pp-pair \(\phi(\overline{x})/(\overline{x}=0)\) in \(\mathbb{L}(\mathcal{T})^{\rm eq+}\). Recall, 2.7, that the category of pp-imaginaries is equivalent to the category \(\mathrm{Coh}(\mathcal{T})\) of coherent functors on \(\mathcal{T}\). So let us take a coherent functor \(F_{g}\) defined by the exact sequence \((C,-)\xrightarrow{(g,-)}(B,-)\to F_{g}\to 0\) for some \(g:B\to C\) in \(\mathcal{T}^{\rm c}\). Form the (extended) distinguished triangle \(\Sigma^{-1}C\xrightarrow{\Sigma^{-1}h}A\xrightarrow{f}B\xrightarrow{g}C \xrightarrow{h}\Sigma A\) and consider the exact sequence of functors on \(\mathcal{T}\): \[(\Sigma A,-)\xrightarrow{(h,-)}(C,-)\xrightarrow{(g,-)}(B,-)\xrightarrow{(f,-)}(A,-)\xrightarrow{(\Sigma^{-1}h,-)}(\Sigma^{-1}C,-)\] where we have the factorisation \((B,-)\xrightarrow{(f,-)}(A,-)\). So \(F_{g}\simeq\mathrm{im}(f,-)\) in \((A,-)\) and therefore \(F_{g}\) is isomorphic to the functor given by the pp formula \(f\mid x_{A}\) which, by 2.8, is equivalent to the quantifier-free pp formula \(x_{A}\cdot\Sigma^{-1}h=0\); that is \(F_{g}\simeq G_{\Sigma^{-1}h}^{\circ}\) (this is also clear from the above exact sequence). Thus we have the following. **Theorem 2.15**.: _[_18_, 4.3]_ _Every pp-pair is (pp-)definably isomorphic to a pp formula which may be taken to be quantifier-free (alternatively a divisibility formula). Thus, (the theory of) \(\mathcal{T}\) has elimination of pp imaginaries._ _Explicitly, if \(g:B\to C\) is in \(\mathcal{T}^{\rm c}\) then the (typical) pp-pair \(F_{g}=\mathrm{coker}((g,-):(C,-)\to(B,-))\) is equivalent to the divisibility formula \(f|x_{A}\) and to the annihilation formula \(x_{A}\Sigma^{-1}h=0\) where \(f\) and \(h\) are such that \(\Sigma^{-1}C\xrightarrow{\Sigma^{-1}h}A\xrightarrow{f}B\xrightarrow{g}C( \xrightarrow{h}\Sigma A)\) is a distinguished triangle._ ### Enhancements, Ultraproducts Arguments using reduced products, in particular ultraproducts, are often used in model theory. In many cases their use can be replaced by arguments involving realising types in elementary extensions but in some cases the more algebraic and 'explicit' (modulo use of the axiom of choice16) ultraproduct construction is better. At first sight we can't use ultraproducts in compactly generated triangulated categories because, even though typically they have direct products, they almost never have all directed colimits (recall, e.g. [38, SS3.3.1], that an ultraproduct is a directed colimit of direct products of its component structures). Homotopy colimits along a countably infinite directed set are available but that is not enough to form ultraproducts. In [28] Laking introduced ultraproducts in this context by using Grothendieckk derivators. We don't go into the details here but see [28, SS2] for the construction of coherent reduced products for derivators. In [29] a different approach, using dg-categories and model categories, is taken. This gives, for algebraic compactly generated triangulated categories, a characterisation of definable subcategories (see Section 3.1) which is analogous to 1.5. This extends to any triangulated category with a suitable enhancement, see [49, 8.8] and [11, 6.8] which has the following formulation. **Theorem 2.16**.: _([28, 3.11], [29, 4.7],[49, 8.8], [11, 6.8]) If \(\mathcal{D}\) is a subcategory of a compactly generated triangulated category \(\mathcal{T}\) which is the underlying category of a strong and stable derivator, then the following are equivalent. (i) \(\mathcal{D}\) is a definable subcategory of \(\mathcal{T}\); (ii) \(\mathcal{D}\) is closed in \(\mathcal{T}\) under pure subobjects, products and directed homotopy colimits; (iii) \(\mathcal{D}\) is closed in \(\mathcal{T}\) under pure subobjects, products and pure quotients._ Derived categories, derivators, dg-categories, model categories (in the sense of, say, [22]) and \(\infty\)-categories all provide ways of representing triangulated categories as the result of applying a process to a somewhat more amenable type of category. In those additive categories with extra structure one can expect the model theory of (multisorted) modules to be directly applicable to the objects. This gives the possibility of approaching the model theory of a triangulated category by developing model theory in such an enhancement and then passing this through a localisation-type functor to the triangulated category. Examples include setting up elementary duality as done in [2] and [11], see Section 3.8. We don't pursue this, relatively undeveloped, direction here. ## 3 Definable subcategories ### Definable subcategories of \(\mathcal{T}\) A full subcategory \(\mathcal{D}\) of \(\mathcal{T}\) is **definable** if its objects form the zero-class of a set of coherent functors, that is, if there is \(\mathcal{A}\subseteq\operatorname{Coh}(\mathcal{T})\) such that \[\mathcal{D}=\{X\in\mathcal{T}\colon FX=0\ \forall F\in\mathcal{A}\}.\] We will write \(\mathcal{D}=\operatorname{Ann}(\mathcal{A})=\operatorname{Ann}_{\mathcal{T}} (\mathcal{A})\).17 We will see in Section 3.2 how this is a natural extension of the notion of definable subcategory of a module category. Also, if \(\mathcal{X}\) is a subcategory of \(\mathcal{T}\), set Footnote 17: We will also use this notation with a set of morphisms replacing \(\mathcal{A}\) and hope this will not give rise to confusion. \[\operatorname{Ann}_{\operatorname{Coh}(\mathcal{T})}(\mathcal{X})=\{F\in \operatorname{Coh}(\mathcal{T}):FX=0\ \forall X\in\mathcal{X}\}.\] Given a set \(\Phi\) of morphisms in \(\mathcal{T}^{\mathrm{c}}\) we have its annihilator \[\operatorname{Ann}_{\mathcal{T}}\Phi=\{X\in\mathcal{T}:\forall A\xrightarrow{ f}B\in\Phi,\ \forall B\xrightarrow{b}X\text{ we have }bf=0\}.\] We write the condition \(\forall B\xrightarrow{b}X\,(bf=0)\) succinctly as \(Xf=0\) (this being directly analogous to the relation \(Mr=0\) for a right module \(M\) and ring element \(r\)). Of course we can equally write this condition as \((f,X)=0\) or \((-,X)f=0\), according to our viewpoint. Then [24, SS7]\(\operatorname{Ann}_{\mathcal{T}}\Phi\) is a (typical) definable subcategory of \(\mathcal{T}\). In the other direction, if \(\mathcal{X}\) is a subcategory of \(\mathcal{T}\), then we may set \[\operatorname{Ann}_{\mathcal{T}^{\mathrm{c}}}\mathcal{X}=\{A\xrightarrow{ f}B\in\mathcal{T}^{\mathrm{c}}:Xf=0\ \forall X\in\mathcal{X}\}.\] The classes of morphisms of the form \(\operatorname{Ann}_{\mathcal{T}^{\mathrm{c}}}\mathcal{X}\) are what Krause calls the **cohomological ideals** of \(\mathcal{T}^{\mathrm{c}}\); we will refer to them simply as **annihilator ideals** in \(\mathcal{T}^{\mathrm{c}}\). **Lemma 3.1**.: _[_24_, SS7]_ _If \(\Phi\) is a set of morphisms in \(\mathcal{T}^{\mathrm{c}}\), then \(\operatorname{Ann}_{\mathcal{T}}\Phi\) is a definable subcategory of \(\mathcal{T}\). If \(\mathcal{X}\) is any subcategory of \(\mathcal{T}\), then \(\operatorname{Ann}_{\mathcal{T}}(\operatorname{Ann}_{\mathcal{T}^{\mathrm{c} }}\mathcal{X})=\langle\mathcal{X}\rangle\), the definable subcategory of \(\mathcal{T}\) generated by \(\mathcal{X}\). In particular there is a natural bijection between the definable subcategories of \(\mathcal{T}\) and the cohomological = annihilator ideals in \(\mathcal{T}^{\mathrm{c}}\)._ We have seen already that if \[A\xrightarrow{f}B\xrightarrow{g}C\to\Sigma A\] is a triangle, then \[bf=0\quad\Leftrightarrow\quad g\mid b.\] So we consider, given a set \(\Psi\) of morphisms in \(\mathcal{T}^{\ast}\), \[\operatorname{Div}_{\mathcal{T}}\Psi=\{X\in\mathcal{T}:\forall\,B\xrightarrow{ g}C\in\Psi,\ \forall\,B\xrightarrow{b}X,\exists\,C\xrightarrow{c}X\text{ such that }b=cg\}\] - the class of \(\Psi\)-divisible objects of \(\mathcal{T}\). We write \(g|X\) as a succinct expression of the condition "\(\forall\,B\xrightarrow{b}X\,\exists\,C\xrightarrow{c}X\) such that \(b=cg\) " (being the analogue of the condition that every element of a module \(M\) be divisible by an element \(r\) of the ring18). Then \(\operatorname{Div}_{\mathcal{T}}\Psi\) is a (typical) definable subcategory of \(\mathcal{T}\). Footnote 18: But the corresponding notation \(Xg=X\) would be less appropriate than in the usual module case because \(X\) has many sorts and that equation applies only to the \(B\)-sort of \(X\). And, in the other direction, given a subcategory \(\mathcal{X}\) of \(\mathcal{T}\), we define19 Footnote 19: We are overworking the notations Ann and Div but they are useful. **Lemma 3.2**.: _([2, 2.2]) If \(\Psi\) is a set of morphisms in \(\mathcal{T}^{\ast}\), then \(\operatorname{Div}_{\mathcal{T}}\Psi\) is a definable subcategory of \(\mathcal{T}\). If \(\mathcal{X}\) is any subcategory of \(\mathcal{T}\), then \(\operatorname{Div}_{\mathcal{T}}(\operatorname{Div}_{\mathcal{T}^{\ast}} \mathcal{X})=\langle\mathcal{X}\rangle\)._ **Proof.** Take \(Y\in\operatorname{Div}_{\mathcal{T}}(\operatorname{Div}_{\mathcal{T}^{\ast}} \mathcal{X})\). If \(g\in\operatorname{Div}_{\mathcal{T}^{\ast}}\mathcal{X}\) then \(g|Y\) so, if \(f\) is as above, \(Yf=0\). This is so for all such \(f\) (as \(g\) varies) so, by 3.1, \(Y\in\langle\mathcal{X}\rangle\), as required. \(\quad\Box\) **Corollary 3.3**.: _(1) If \(\mathcal{D}=\operatorname{Ann}_{\mathcal{T}}\Phi\) is a definable subcategory of \(\mathcal{T}\) then also \(\mathcal{D}=\operatorname{Div}_{\mathcal{T}}\{g:A\xrightarrow{f}B\xrightarrow{ g}C\to\Sigma A\) is a distinguished triangle and \(f\in\Phi\}\)._ _(2) If \(\mathcal{D}=\operatorname{Div}_{\mathcal{T}}\Psi\) is a definable subcategory of \(\mathcal{T}\) then also_ \(\mathcal{D}=\operatorname{Ann}_{\mathcal{T}}\{f:A\xrightarrow{f}B\xrightarrow{ g}C\to\Sigma A\) _is a distinguished triangle and \(g\in\Psi\}\)._ Definable subcategories are so-called because they can be defined by closure of certain pairs of pp formulas, that is, by requiring that certain quotients of pp-definable subgroups be 0. For each of the annihilation and divisibility methods of specifying these subcategories, the pp-pairs needed are obvious, being respectively \(\{(x_{p}=x_{B})/(x_{B}f=0):f:A\to B\in\Phi\}\) and \(\{(x_{p}=x_{B})/(g|x_{B}):g:B\to C\in\Psi\}\) with \(\Phi\), \(\Psi\) as above. We have used that pp-pairs can be given in both annihilation and divisibility forms, but there is another, "torsionfree" form that is not so obvious if we consider only formulas and their reduction to divisibility or annihilator forms, rather than pp-pairs. Let us consider an extended triangle as before: \[\Sigma^{-1}C\xrightarrow{\Sigma^{-1}h}A\xrightarrow{f}B\xrightarrow{g}C \xrightarrow{h}\Sigma A.\] If \(X\in\mathcal{T}\) then we obtain an exact sequence of abelian groups \[(\Sigma A,X)\xrightarrow{(h,X)}(C,X)\xrightarrow{(g,X)}(B,X)\xrightarrow{(f,X)}(A,X)\xrightarrow{(\Sigma^{-1}h,X)}(\Sigma^{-1}C,X).\] Then \(X\in\operatorname{Div}_{\mathcal{T}}(g)\) iff \((g,X)\) is epi iff \((f,X)=0\) iff \((\Sigma^{-1}h,X)\) is monic. If we denote by \(\operatorname{ann}_{X}(\Sigma^{-1}h)\) the set \(\{a:A\to X:a.\Sigma^{-1}h=0\}\), then we have \[Xf=0\quad\text{ iff }\quad g|X\quad\text{ iff }\quad\operatorname{ann}_{X}( \Sigma^{-1}h)=0. \tag{4}\] That is, \(X\in\mathcal{T}\) annihilates \(f\) iff it is \(g\)-divisible iff it is \(\Sigma^{-1}h\)-torsionfree. This gives us a third way of using morphisms in \(\mathcal{T}^{\ast}\) to cut out definable subcategories of \(\mathcal{T}\). We set, given \(\mathcal{X}\subseteq\mathcal{T}\) \[\mathcal{X}\text{-Reg}=\{\ell\in\mathcal{T}^{\ast}:\operatorname{ann}_{X}( \ell)=0\ \forall X\in\mathcal{X}\}\] and call such classes, for want of a better word, _regularity_ classes (of morphisms of \(\mathcal{T}^{\ast}\)). In the other direction, given a set \(\Xi\) of morphisms in \(\mathcal{T}^{\ast}\), we define \[\Xi\text{-TF}=\{X\in\mathcal{T}:\operatorname{ann}_{X}(\ell)=0\ \forall\ell\in\Xi\}.\] **Lemma 3.4**.: _If \(\Xi\) is a set of morphisms in \(\mathcal{T}^{\rm c}\), then \(\Xi\)-TF is a definable subcategory of \(\mathcal{T}\). If \(\mathcal{X}\) is any subcategory of \(\mathcal{T}\), then \((\mathcal{X}\)-Reg\()\)-TF\(=\langle\mathcal{X}\rangle\)._ The argument is as for 3.2. The set of pp-pairs corresponding to \(\Xi\) is \(\{(x_{A}\ell=0)/(x_{A}=0):D\xrightarrow{\ell}A\in\Xi\}\). The next result summarises some of this. For the case where \(\mathcal{T}\) is the derived category of modules over a ring, see [2, 2.2]. **Theorem 3.5**.: _A definable subcategory \(\mathcal{D}\) of \(\mathcal{T}\) may be specified by any of the following means:_ \(\mathcal{D}=\{X\in\mathcal{T}:\phi(X)/\psi(X)=0\ \forall\phi/\psi\in\Phi\}\) _where \(\Phi\) is a set of pp-pairs in \(\mathcal{L}(\mathcal{T})\);_ \(\mathcal{D}=\operatorname{Ann}(\mathcal{A})\) _where \(\mathcal{A}\subseteq\operatorname{Coh}(\mathcal{T})\);_ \(\mathcal{D}=\operatorname{Ann}_{\mathcal{T}}\Phi\) _where \(\Phi\) is a set of morphisms in \(\mathcal{T}^{\rm c}\);_ \(\mathcal{D}=\operatorname{Div}_{\mathcal{T}}\Psi\) _where \(\Psi\) is a set of morphisms in \(\mathcal{T}^{\rm c}\);_ \(\mathcal{D}=\Xi\)_-TF where \(\Xi\) is a set of morphisms in \(\mathcal{T}^{\rm c}\)._ _The Galois-correspondence-stable subsets of \(\operatorname{Coh}(\mathcal{T})\) which appear are the Serre subcategories, the Galois-correspondence-stable subsets of morphisms of the form \(\Phi\) are the annihilator = cohomological ideals20._ Footnote 20: One might consider describing the other Galois-correspondence-stable sets by closure conditions but we don’t do this. They will, however, be described indirectly, in terms of the functors they present, at the end of Section 3.3. _Moving between the last three specifications is described by (4) above._ In Section 3.3 we will say this in torsion-theoretic terms with mod-\(\mathcal{T}^{\rm c}\) in place of \(\operatorname{Coh}(\mathcal{T})\). In Section 3.2 we give the relevant background. ### Torsion theories on \(\operatorname{Mod-}\mathcal{T}^{\rm c}\) A **torsion pair** in an additive category consists of two classes: \(\mathcal{G}\) - the **torsion** class, and \(\mathcal{F}\) - the **torsionfree** class, with \((\mathcal{G},\mathcal{F})=0\) and with \(\mathcal{G}\), \(\mathcal{F}\) maximal such. If we are working in a Grothendieck category such as \(\operatorname{Mod-}\mathcal{T}^{\rm c}\), then we say that the torsion pair, or **torsion theory**, is **hereditary** if \(\mathcal{G}\) is closed under subobjects, equivalently if \(\mathcal{F}\) is closed under injective hulls and, if so, it is **of finite type** if \(\mathcal{G}\) is generated, as a hereditary torsion class, by finitely presented objects, equivalently if \(\mathcal{F}\) is closed under directed colimits (see, for instance, [38, 11.12]). We also use without further comment that, for a hereditary torsion theory, if \(F\) is a torsionfree module then the injective hull \(E(F)\) of \(F\) is torsionfree (and conversely, since the torsionfree class is closed under subobjects). For background on torsion theories, see [50]. The restricted Yoneda functor from \(\mathcal{T}\) to \(\operatorname{Mod-}\mathcal{T}^{\rm c}\) allows us to realise the definable subcategories of \(\mathcal{T}\) as the inverse images of finite-type torsionfree classes on \(\operatorname{Mod-}\mathcal{T}^{\rm c}\), as follows. Suppose that \(\mathcal{D}\) is a definable subcategory of \(\mathcal{T}\). Then \(\mathcal{D}\) is determined by the class \(\mathcal{D}\cap\operatorname{Pinj}(\mathcal{T})\) of pure-injectives in it, being the closure of that class under pure subobjects (by the comments after 1.5). By 1.4 the image \(\mathcal{E}=y(\mathcal{D}\,\cap\,\operatorname{Pinj}(\mathcal{T}))\) is a class of injective \(\mathcal{T}^{\rm c}\)-modules which is closed under direct products and direct summands, hence (e.g. [38, 11.1.1]) which is of the form \(\mathcal{F}\cap\operatorname{Inj}\)-\(\mathcal{T}^{\rm c}\) for some hereditary torsionfree class \(\mathcal{F}=\mathcal{F}_{\mathcal{D}}\) of \(\mathcal{T}^{\rm c}\)-modules. We recall, [34, 3.3] see [38, 11.1.20], that a hereditary torsionfree class of modules is of finite type exactly if it is definable. So we have to show that definability of \(\mathcal{D}\) corresponds to definability of \(\mathcal{F}_{\mathcal{D}}\), equivalently to definability of the class of absolutely pure objects in \(\mathcal{F}_{\mathcal{D}}\) ("equivalently" because \(\operatorname{Mod-}\mathcal{T}^{\rm c}\) is locally coherent, so the absolutely pure objects form a definable subcategory, see [38, 3.4.24], hence so is their intersection with any other definable subcategory). So we have to show that the torsionfree class \(\mathcal{F}_{\mathcal{D}}\) above is of finite type and that every finite type torsionfree class arises in this way. To see, this, note that, if \(X\in\mathcal{T}\) and \(F\in\operatorname{Coh}(\mathcal{T})\), then (Section 2.1) \(FX=0\) iff \((F^{\circ},yX)=0\). Set \(\mathcal{A}=\operatorname{Ann}_{\operatorname{Coh}(\mathcal{T})}(\mathcal{D})\). We have the duality from Section 2.1 between \(\operatorname{Coh}(\mathcal{T})\) and mod-\(\mathcal{T}^{\rm c}\), so consider the corresponding set \(\mathcal{A}^{\circ}=\{F^{\circ}:F\in\mathcal{A}\}\) of finitely presented \(\mathcal{T}^{\rm c}\)-modules. Since \(\mathcal{A}\) is a Serre subcategory of \(\operatorname{Coh}(\mathcal{T})\), this is a Serre subcategory of mod-\(\mathcal{T}^{\rm c}\); we set \(\mathcal{S}_{\mathcal{D}}=\mathcal{A}^{\circ}\). The \(\varinjdowndown\)closure, \(\varistopage If \(M\in{\cal F}\) is injective, hence (1.4) of the form \(yN\) for some pure-injective \(N\in{\cal T}\), then the condition \(({\cal S}_{\cal D}={\cal A}^{\circ},M)=0\) is exactly the condition \(FN=0\) for every \(F\in{\cal A}\), that is, the condition that \(N\) be in \({\cal D}\). Thus we have the correspondence between classes of pure-injectives in \({\cal T}\) of the form \({\cal D}\,\cap\,\mbox{\rm Pinj}({\cal T})\) and classes of injectives in \(\mbox{\rm Mod-}{\cal T}^{\circ}\) of the form \({\cal F}\cap\mbox{\rm Inj-}{\cal T}^{\circ}\) for some hereditary torsionfree class \({\cal F}\), and we have shown the following. **Theorem 3.6**.: _A subcategory \({\cal D}\) of a compactly generated triangulated category \({\cal T}\) is definable iff it has any of the following equivalent forms, where \(y:{\cal T}\to\mbox{\rm Mod-}{\cal T}^{\circ}\) is the restricted Yoneda functor: \({\cal D}=y^{-1}({\cal F})\) where \({\cal F}\) is a finite-type hereditary torsionfree class in \(\mbox{\rm Mod-}{\cal T}^{\circ}\)\({\cal D}=y^{-1}{\cal E}\) where \({\cal E}\) is the class of absolutely pure objects in a hereditary torsionfree class of finite type; \({\cal D}=y^{-1}{\cal E}\) where \({\cal E}\) is a definable class of absolutely pure objects in \(\mbox{\rm Mod-}{\cal T}^{\circ}\)._ We denote by \(\tau_{\cal D}=({\cal T}_{\cal D},{\cal F}_{\cal D})\) the finite-type hereditary torsion theory on \(\mbox{\rm Mod-}{\cal T}^{\circ}\) corresponding to \({\cal D}\). **Corollary 3.7**.: _The definable subcategories \({\cal D}\) of \({\cal T}\) are in natural bijection with the definable (= finite-type) hereditary torsionfree classes in \(\mbox{\rm Mod-}{\cal T}^{\circ}\) and also with the definable subcategories of \(\mbox{\rm Abs-}{\cal T}^{\circ}\)._ _Explicitly, to \({\cal D}\) correspond respectively the closure \({\cal F}_{\cal D}\) of \(\langle y{\cal D}\rangle\) under submodules, and \({\cal F}_{\cal D}\,\cap\,\mbox{\rm Abs-}{\cal T}^{\circ}\). In the other direction, we simply apply \(y^{-1}\), where \(y\) is the restricted Yoneda functor._ Note the almost complete analogy of this with the bijection (see [38, 12.3.2]) between definable subcategories of a module category \(\mbox{\rm Mod-}R\) and the finite type (= definable) hereditary torsionfree classes in \((R\mbox{\rm-mod})\mbox{\rm Mod}=(R\mbox{\rm-mod},{\bf Ab})\), equivalently with the definable classes of absolutely pure objects in \((R\mbox{\rm-mod})\mbox{\rm-Mod}=(R\mbox{\rm-mod},{\bf Ab})\). One notable difference is that the image of a definable subcategory of a triangulated category is'most' of the definable subcategory \(\langle y{\cal D}\rangle\,\cap\,\mbox{\rm Abs-}{\cal T}^{\circ}\) of modules, whereas in the module case it is all of the corresponding class of modules. This reflects the lack of directed colimits in triangulated categories, but see [28], [29] for some replacement using Grothendieck derivators for the triangulated case. The other notable difference is that the module case uses tensor product to embed (fully and faithfully) \(\mbox{\rm Mod-}R\) in \((R\mbox{\rm-mod},{\bf Ab})\). Here we have somehow avoided that. We also record the equivalence at the level of pure-injectives. **Corollary 3.8**.: _If \({\cal D}\) is a definable subcategory of \({\cal T}\) and \({\cal F}_{\cal D}\) is the corresponding hereditary torsionfree class in \(\mbox{\rm Mod-}{\cal T}^{\circ}\), then the restricted Yoneda functor \(y\) induces an equivalence_ \[\mbox{\rm Pinj}({\cal D})\simeq{\cal F}\,\cap\,\mbox{\rm Inj-}{\cal T}^{\circ}\] _between the category \(\mbox{\rm Pinj}({\cal D})\) of pure-injective objects of \({\cal T}\) which lie in \({\cal D}\) and the category \({\cal F}\,\cap\,\mbox{\rm Inj-}{\cal T}^{\circ}\) of \({\cal T}^{\circ}\)-injective modules which lie in \({\cal F}\)._ This gives some justification for our saying that the Yoneda image of a definable subcategory \({\cal D}\) in \(\mbox{\rm Mod-}{\cal T}^{\circ}\) constitutes'most of' the flat = absolutely pure objects of the corresponding hereditary torsionfree class of finite type. For, every injective in the class is in the image and every absolutely pure object in the class is a pure (even elementary) submodule of an object in the image. Note that the fact that the objects of \({\cal D}\) are the pure subobjects of the pure-injectives in \({\cal D}\) exactly corresponds to the fact that the absolutely pure modules in \({\cal F}\) are the pure submodules of the injective modules in \({\cal F}\). ### Definable subcategories of \(\mbox{\rm Abs-}{\cal T}^{\circ}\) In Section 3.1 we associated to a definable subcategory \({\cal D}\) of \({\cal T}\) three sets of morphisms, \(\mbox{\rm Ann}_{{\cal T}^{\circ}}({\cal D})\), \(\mbox{\rm Div}_{{\cal T}^{\circ}}({\cal D})\) and \({\cal D}\)-Reg, each of which determines \({\cal D}\). In this section we identify the corresponding sets of morphisms in \(\mbox{\rm mod-}{\cal T}^{\circ}\) and the ways in which they cut out the hereditary finite type torsion theory \(\tau_{\cal D}\) cogenerated by \(\langle y{\cal D}\rangle\) in \(\mbox{\rm Mod-}{\cal T}^{\circ}\). We have the following from Section 3.2. **Corollary 3.9**.: _If \({\cal T}\) is a compactly generated triangulated category, then the following are in natural bijection: (i) the definable subcategories of \({\cal T}\);_ _(ii) the definable subcategories of \(\mathrm{Mod}\text{-}\mathcal{T}^{c}\) which are contained in (so are definable subcategories of) \(\mathrm{Abs}\text{-}\mathcal{T}^{c}=\mathrm{Flat}\text{-}\mathcal{T}^{c}\);_ _(iii) the hereditary torsion theories on \(\mathrm{Mod}\text{-}\mathcal{T}^{c}\) of finite type;_ _(iv) the Serre subcategories of \(\mathrm{mod}\text{-}\mathcal{T}^{c}\)._ Given a definable subcategory \(\mathcal{D}\) of \(\mathcal{T}\), let \[\mathcal{S}_{\mathcal{D}}=\{G\in\mathrm{mod}\text{-}\mathcal{T}^{c}:(G,yX)=0 \ \forall\,X\in\mathcal{D}\}\] be the corresponding Serre subcategory of \(\mathrm{mod}\text{-}\mathcal{T}^{c}\). As noted in Section 3.2, this is the Serre subcategory \((\mathrm{Ann}\tau^{c}(\mathcal{D}))^{\circ}\) of \(\mathrm{mod}\text{-}\mathcal{T}^{c}\), it \(\varinjlim\)-generates the finite type hereditary torsion class \(\mathscr{T}_{\mathcal{D}}\) and \(\tau_{\mathcal{D}}=(\mathscr{T}_{\mathcal{D}},\mathcal{F}_{\mathcal{D}})\) is the torsion theory corresponding to \(\mathcal{D}\) under (i)\(\leftrightarrow\)(iii) of 3.9. If \(\tau\) is any hereditary torsion theory then a submodule \(L\) of a module \(M\) is \(\tau\)**-dense in \(M\)** if \(M/L\) is torsion. Also, the \(\tau\)**-closure**, \(\mathrm{cl}_{\tau}^{M}(L)\), of a submodule \(L\) of a module \(M\) is the maximal submodule of \(M\) in which \(L\) is \(\tau\)-dense, also characterised as the smallest submodule \(L^{\prime}\) of \(M\) which contains \(L\) and is such that \(M/L^{\prime}\) is \(\tau\)-torsionfree. See [50] or [38, SS11.1] for details. First we see that the annihilation, divisibility and regularity conditions with respect to \(\mathcal{D}\) translate directly to \(\mathrm{Mod}\text{-}\mathcal{T}^{c}\). **Proposition 3.10**.: _Suppose that \(\mathcal{D}\) is a definable subcategory of \(\mathcal{T}\) and \(f:A\to B\) is in \(\mathcal{T}^{c}\). Then: (1) \(f\in\mathrm{Ann}\tau^{c}(\mathcal{D})\) iff \(yX.yf=0\) for all \(X\in\mathcal{D}\);_ _(2) \(f\in\mathrm{Div}(\mathcal{D})\) iff, for every \(X\in\mathcal{D}\), \(yX\) is \(yf\)-divisible;_ _(3) \(f\in\mathcal{D}\text{-}\mathrm{Reg}\) iff, for every \(X\in\mathcal{D}\), if \(b^{\prime}:yB\to yX\) is such that \(b^{\prime}\cdot yf=0\) then \(b^{\prime}=0\)._ **Proof.** First we note that, in all three cases, it is enough for the direction (\(\Leftarrow\)) to prove that \(f\) has the property (annihilation, divisibility, regularity) for \(X\in\mathcal{D}\) pure-injective. That is because, if \(X\in\mathcal{D}\), then \(f\) satisfies, say, \(Xf=0\) if (indeed iff) \(H(X)f=0\), where \(H(X)\) is the pure-injective hull of \(X\). That is because \(X\) is pure in (indeed is an elementary substructure of) its pure-injective hull so, if a pp-pair is closed on \(H(X)\), then it will be closed on \(X\) (and _vice versa_). (1) The defining condition for \(f\) to be in \(\mathrm{Ann}\tau^{c}(\mathcal{D})\), namely that \(Xf=0\) for all \(X\in\mathcal{D}\), certainly implies \(yX.yf=0\) for all \(X\in\mathcal{D}\). If, conversely, \(yX.yf=0\) for all \(X\in\mathcal{D}\), then take \(X\in\mathcal{D}\) and suppose we have \(b:B\to X\). Then \(y(bf)=yb.yf=0\) so, by 1.3, \(bf=0\). Therefore \(Xf=0\), as required. (2) If \(f\in\mathrm{Div}(\mathcal{D})\) and we have \(a^{\prime}:yA\to yX\), then we compose with the inclusion of \(yX\) into its injective hull \(E(yX)=yH(X)\) (by 1.4) to get a morphism \(a^{\prime\prime}:yA\to yH(X)\) which, by 1.2, has the form \(ya\) for some \(a:A\to H(X)\). By assumption, and since \(H(X)\in\mathcal{D}\), \(a\) factors through \(f\), say \(a=bf\) with \(b:B\to H(X)\); therefore \(a^{\prime\prime}=yb.yf\). Thus \(\exists x_{yB}(a^{\prime\prime}=x_{yB}.yf)\) is true in \(yH(X)\). Since \(yX\) is a pure submodule of \(yH(X)\) we deduce that \(\exists x_{yB}(a^{\prime}=x_{yB}.yf)\) is true in \(yX\), that is, \(yX\) is \(yf\)-divisible. This gives (\(\Rightarrow\)). For the converse, suppose that, for every \(X\in\mathcal{D}\), \(yX\) is \(yf\)-divisible and take \(X\in\mathcal{D}\) pure-injective and \(a:A\to X\). Then we have \(ya:yA\to yX\) so, by hypothesis, there is \(b^{\prime}:yB\to yX\) with \(b^{\prime}.yf=ya\). Since \(X\) is pure-injective, by 1.2 there is \(b:B\to X\) such that \(b^{\prime}=yb\), giving \(y(bf)=ya\). By 1.3 it follows that \(bf=a\), showing that every pure-injective object in \(\mathcal{D}\) is \(f\)-injective. By the comments at the beginning of the proof and the fact that the divisibility condition is expressed by closure of a pp-pair, it follows that every object of \(\mathcal{D}\) is \(f\)-injective, as required. (3) The direction (\(\Leftarrow\)) follows immediately from 1.3. For the converse, if \(f\in\mathcal{D}\)-Reg then take \(X\in\mathcal{D}\) to be pure-injective, and suppose \(b^{\prime}:yB\to yX\) is such that \(b^{\prime}.yf=0\). By 1.2, \(b^{\prime}=yb\) for some \(b:B\to X\). That gives \(y(bf)=0\) hence, by 1.3, \(bf=0\), hence, by assumption, \(b=0\), so that \(b^{\prime}=0\). Thus \(f\) is regular on every pure-injective in \(\mathcal{D}\) and so, since that is expressed by closure of a pp-pair, \(f\) is regular on every \(X\in\mathcal{D}\), as required. \(\quad\Box\) Set \(\mathcal{S}_{\mathcal{D}}^{\circ}=\{G^{\circ}:G\in\mathcal{S}_{\mathcal{D}}\}\) to be the image of \(\mathcal{S}_{\mathcal{D}}\subseteq\mathrm{mod}\text{-}\mathcal{T}^{c}\) in \(\mathrm{Coh}(\mathcal{T})\) under the anti-equivalence 2.6. Note that, by definition of \(G\mapsto G^{\circ}\), \(\mathcal{S}_{\mathcal{D}}^{\circ}\) consists exactly of the coherent functors \(F\) such that \(FX=0\) for every \(X\in\mathcal{D}\), that is \((\mathcal{S}_{\mathcal{D}})^{\circ}=\mathrm{Ann}\tau^{c}(\mathcal{D})\). **Proposition 3.11**.: _Suppose that \(\mathcal{D}\) is a definable subcategory of \(\mathcal{T}\), let \(\mathcal{S}_{\mathcal{D}}\) be the corresponding Serre subcategory of \(\mathrm{mod}\text{-}\mathcal{T}^{\mathrm{c}}\). Denote by \(\tau_{\mathcal{D}}\) the corresponding hereditary (finite-type) torsion theory in \(\mathrm{Mod}\text{-}\mathcal{T}^{\mathrm{c}}\). Let \(f:A\to B\) be a morphism in \(\mathcal{T}^{\mathrm{c}}\). Then the following hold. (1) \(f\in\mathrm{Ann}_{\mathcal{T}^{\mathrm{c}}}(\mathcal{D})\) iff \(\mathrm{im}(yf)\in\mathcal{S}_{\mathcal{D}}\). (2) \(f\in\mathrm{Div}(\mathcal{D})\) iff \(\mathrm{ker}(yf)\in\mathcal{S}_{\mathcal{D}}\) iff \(F_{f}\in\mathcal{S}_{\mathcal{D}}^{\mathrm{c}}\). (3) \(f\in\mathcal{D}\)-\(\mathrm{Reg}\) iff \(G_{f}=\mathrm{coker}(yf)\in\mathcal{S}_{\mathcal{D}}\), that is, iff \(\mathrm{im}(yf)\) is \(\tau_{\mathcal{D}}\)-dense in \(yB\)._ **Proof.** We use that \(X\in\mathcal{D}\) iff \(yX\) is \((\tau_{\mathcal{D}}\)-)torsionfree, that is iff \((\mathcal{S}_{\mathcal{D}},yX)=0\). (1) If the image \(\mathrm{im}(yf)\) is in \(\mathcal{S}_{\mathcal{D}}\) then, for every \(X\in\mathcal{D}\), we have \((\mathrm{im}(yf),yX)=0\) because \(yX\) is torsionfree. Therefore \(yX.yf=0\), for all \(X\in\mathcal{D}\) giving, by 3.10, the implication \((\Leftarrow)\). For the other direction, first note that any morphism from \(\mathrm{im}(yf)\) to \(yX\) extends to a morphism from \(yB\) to \(yX\) by absolute purity = fp-injectivity of \(yX\). If \(\mathrm{im}(yf)\) were not torsion, there would be a nonzero morphism from \(\mathrm{im}(yf)\) to some torsionfree object which, for instance replacing the object by its injective hull, we may assume to be of the form \(yX\) with \(X\in\mathcal{D}\). This would give a morphism \(a:yB\to yX\) with \(af\neq 0\), contradicting 3.10. (2) \((\Rightarrow)\) By 3.10 we have that \(yX\) is \(yf\)-divisible for every \(X\in\mathcal{D}\). If \(\mathrm{ker}(yf)\) were not torsion (that is, since, by local coherence of \(\mathrm{Mod}\text{-}\mathcal{T}^{\mathrm{c}}\), it is finitely presented, not in \(\mathcal{S}_{\mathcal{D}}\)) then it would have a nonzero torsionfree quotient \(M\). The (torsionfree) injective hull of \(M\) would have the form \(yX\) for some pure-injective \(X\in\mathcal{D}\), yielding a morphism \(yA\to yX\) which is not zero on the kernel of \(yf\), hence which cannot factor through \(yf\) - a contradiction. For the converse, assume that \(\mathrm{ker}(yf)\in\mathcal{S}_{\mathcal{D}}\). Then any morphism \(a^{\prime}:yA\to yX\) with \(X\in\mathcal{D}\) must be zero on \(\mathrm{ker}(yf)\), since \(yX\) is torsionfree. Therefore \(a^{\prime}\) factors through \(f\). But \(yX\) is absolutely pure so, since \(\mathrm{im}(f)\) is a finitely generated subobject of \(yB\), that factorisation extends to a morphism \(b^{\prime}:yB\to X\). Thus we have a factorisation of \(a^{\prime}\) through \(yf\), and so \(yX\) is \(yf\)-divisible. By 3.10 that is enough. For the part involving \(\mathcal{S}_{\mathcal{D}}^{\mathrm{c}}\), we have \(f\in\mathrm{Div}(\mathcal{D})\) iff \((f,X):(B,X)\rightarrow(A,X)\) is epi for every \(X\in\mathcal{D}\) iff \(\mathrm{coker}(f,X)=0\) for every \(X\in\mathcal{D}\), that is, iff \(F_{f}X=0\) for every \(X\in\mathcal{D}\) and that, as noted above, is the case iff \(F_{f}\in\mathcal{S}_{\mathcal{D}}^{\mathrm{c}}\). (3) If \(yf\) is not \(\tau_{\mathcal{D}}\)-dense in \(yB\), there will be a nonzero morphism from \(yB\) and with kernel containing \(\mathrm{im}(yf)\) to a torsionfree object, hence to an object of the form \(yX\) with \(X\in\mathcal{D}\). Therefore \(yf\) is not \(y\mathcal{D}\)-regular and so, by 3.10, \(f\) is not \(\mathcal{D}\)-regular. For the converse, suppose that \(\mathrm{im}(yf)\) is \(\tau_{\mathcal{D}}\)-dense in \(yB\). Then, if \(b^{\prime}\) is a morphism from \(yB\) to a torsionfree object and the kernel of \(b^{\prime}\) contains \(\mathrm{im}(yf)\) then, since the image of \(b^{\prime}\) is torsion, we have \(b^{\prime}=0\). Therefore every object in \(y\mathcal{D}\) is \(yf\)-torsionfree which, by 3.10, is as required. \(\square\) From this, 3.5 and the equivalences (4), we have the following, where we apply the notations Ann, Div and \(\mathrm{Reg}\) and their definitions to \(\mathrm{Mod}\text{-}\mathcal{T}^{\mathrm{c}}\) with, of course, \(\mathrm{mod}\text{-}\mathcal{T}^{\mathrm{c}}\) replacing \(\mathcal{T}^{\mathrm{c}}\) as the subcategory of "small" objects. This is mostly [52, 5.1.4]. **Theorem 3.12**.: _Suppose that \(\mathcal{D}\) is a definable subcategory of \(\mathcal{T}\), let \(\tau_{\mathcal{D}}\) be the corresponding finite-type hereditary torsion theory in \(\mathrm{Mod}\text{-}\mathcal{T}^{\mathrm{c}}\) and let \(\mathcal{S}_{\mathcal{D}}\) denote the Serre subcategory of \(\tau_{\mathcal{D}}\)-torsion finitely presented \(\mathcal{T}^{\mathrm{c}}\)-modules._ _Suppose that_ \[A\xrightarrow{f}B\xrightarrow{g}C\xrightarrow{h}\Sigma A\] _is a distinguished triangle. Then: (i) \(f\in\mathrm{Ann}(\mathcal{D})\) iff \((-,f)\in\mathrm{Ann}(y\mathcal{D})\) iff \(\mathrm{im}(-,f)\in\mathcal{S}_{\mathcal{D}}\); (ii) \(g\in\mathrm{Div}(\mathcal{D})\) iff \((-,g)\in\mathrm{Div}(y\mathcal{D})\) iff \(\mathrm{ker}(-,g)\in\mathcal{S}_{\mathcal{D}}\), that is, iff \(F_{g}\in\mathcal{S}_{\mathcal{D}}^{\mathrm{c}}\); (iii) \(\Sigma^{-1}h\in\mathcal{D}\text{-}\mathrm{Reg}\) iff the image of \((-,\Sigma^{-1}h)\) is \(\tau_{\mathcal{D}}\)-dense in \((-,\Sigma^{-1}C)\), that is, iff \(G_{\Sigma^{-1}h}\in\mathcal{S}_{\mathcal{D}}\)._ _Furthermore, the conditions (i), (ii) and (iii) are equivalent._ ### Model theory in definable subcategories If \(\mathcal{D}\) is a definable category, meaning a category equivalent to a definable subcategory of a module category (over a ring possibly with many objects), then the model theory of \(\mathcal{D}\) is intrinsic to \(\mathcal{D}\), in the following senses. First, the notion of pure-exact sequence is intrinsic to \(\mathcal{D}\) because an exact sequence is pure-exact iff some ultraproduct of it is split-exact, see [38, 4.2.18]. Ultraproducts are obtained as directed colimits of products, so definable categories have ultraproducts. Definable subcategories of compactly generated triangulated categories do not in general have directed colimits, so they are not (quite) "definable categories" in this sense, though they are quite close, see 2.16. Nevertheless, as we have seen, the restricted Yoneda functor associates, to a definable subcategory \(\mathcal{D}\) of a compactly generated triangulated category, a definable subcategory of a module category which has the same model theory. **Question:** Is the model theory of a definable subcategory \(\mathcal{D}\) of a compactly generated triangulated category intrinsic, meaning definable just from the structure of \(\mathcal{D}\) as a category? Second, the category \(\mathbb{L}(\mathcal{D})^{\mathrm{eq+}}\) of pp-imaginaries for a definable subcategory \(\mathcal{D}\) of a module category \(\mathrm{Mod}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! By 3.7, every finite-type hereditary torsion theory (\(\mathscr{T},\mathscr{F}\)) on Mod-\(\mathcal{T}^{\rm c}\) gives rise to a torsion pair in \(\mathcal{T}\), namely \((^{\perp}\mathcal{D},(^{\perp}\mathcal{D})^{\perp})\) where \(\mathcal{D}=y^{-1}\mathcal{F}\). If this torsion pair is compactly generated then it follows from the above that \((^{\perp}\mathcal{D})^{\perp}=\mathcal{D}\). But in general not every finite-type hereditary torsion class in Mod-\(\mathcal{T}^{\rm c}\) arises from a torsion pair in \(\mathcal{T}\) in this way. Indeed, since, for \(A\in\mathcal{T}^{\rm c}\), \(yA\) is a projective \(\mathcal{T}^{\rm c}\)-module, and all of the finitely generated projectives in Mod-\(\mathcal{T}^{\rm c}\) are of this form, we have the following, where we denote by \(\gamma_{\mathcal{X}}\) the hereditary (finite type) torsion theory generated by (that is, with torsion class generated by) \(y\mathcal{X}\). **Corollary 3.15**.: _There is a natural injection \((\mathcal{U},\mathcal{V})\mapsto\gamma_{\mathcal{U}}\) from the set of compactly generated torsion pairs in \(\mathcal{T}\) to the set of hereditary torsion theories of finite type on Mod-\(\mathcal{T}^{\rm c}\)._ _The image is the set of hereditary torsion pairs where the torsion class is generated by a set of finitely generated projectives._ Thus we have an embedding of the lattice of compactly generated torsion pairs in \(\mathcal{T}\) into the lattice of finite type hereditary torsion theories on Mod-\(\mathcal{T}^{\rm c}\) (the ordering in each case being by inclusion of torsion classes), and the latter is isomorphic to the lattice of definable subcategories of \(\mathcal{T}\). The definable subcategories, \(\mathcal{D}\), of \(\mathcal{T}\) occuring as \(\mathcal{V}\) in a compactly generated torsion pair \((\mathcal{U},\mathcal{V})\), are, by 3.11(1), those for which the corresponding annihilator ideal \(\operatorname{Ann}_{\mathcal{T}^{\rm c}}(\mathcal{D})\) of \(\mathcal{T}^{\rm c}\) is generated as such by objects (that is, by identity morphisms of some compact objects). Note also that, if \(\mathcal{D}\) is a definable subcategory of \(\mathcal{T}\) which occurs as \(\mathcal{V}\) in a compactly generated torsion pair \((\mathcal{U},\mathcal{V})\), and if \((\mathscr{T},\mathcal{F})\) is the corresponding, in the sense of 3.7, torsion theory \(\tau_{\mathcal{D}}\), then we always have \(\mathcal{U}\subseteq y^{-1}\mathscr{T}\). That is because \(\mathscr{T}={}^{\perp}(\mathcal{F}\cap\operatorname{Inj}\nolimits\mathcal{T}^ {\rm c})\) and because each object of \(\mathcal{F}\cap\operatorname{Inj}\nolimits\mathcal{T}^{\rm c}\) has the form \(yN\) for some pure-injective \(N\in\mathcal{V}\) and then \((\mathcal{U},N)=0\) implies, by 1.2, that \((y\mathcal{U},yN)=0\), so \(y\mathcal{U}\subseteq\mathscr{T}\). For equality, \(\mathcal{U}\subseteq y^{-1}\mathscr{T}\) - that is \(\gamma_{\mathcal{U}}=\tau_{\mathcal{D}}\) - we need, by the argument just given, that \(\mathcal{U}={}^{\perp}(\mathcal{V}\cap\operatorname{Pinj}\nolimits(\mathcal{T}))\). That is, equality holds iff the torsion pair \((\mathcal{U},\mathcal{V})\) is **cogenerated by pure-injectives**. For instance, if \((\mathcal{U},\mathcal{V})\) is a t-structure with \(\mathcal{V}\) definable, then this will be the case, [2, 2.10], also see 3.19 below. For more about this and TTF-classes in compactly generated triangulated categories, see [52, Chpt. 8]. ### Spectra By a definable (additive) category we mean a category which is equivalent to a definable subcategory of the category of modules over some (possibly multi-sorted) ring. Every definable additive category \(\mathcal{C}\) is determined by its full subcategory of pure-injective objects (by [38, 5.1.4] or, more intrinsically, by [41, SS3.2]). Indeed, every definable category is determined by the indecomposable pure-injective objects in it (e.g. see [38, 5.3.50, 5.3.52]). The Ziegler spectrum, \(\operatorname{Zg}\nolimits(\mathcal{C})\), also written \(\operatorname{Zg}\nolimits_{R}\) in the case \(\mathcal{C}=\operatorname{Mod}\nolimits\)-\(R\), is the set, \(\operatorname{pinj}\nolimits(\mathcal{C})\), of isomorphism classes of indecomposable pure-injectives in \(\mathcal{C}\) endowed with the topology which has, for a basis of open sets, the \[(\phi/\psi)=\{N\in\operatorname{pinj}\nolimits(\mathcal{C}):\phi(N)>\psi(N)\}\] as \(\phi/\psi\) ranges over pp-pairs (in any suitable language for \(\mathcal{C}\)). These are exactly the compact open sets in \(\operatorname{Zg}\nolimits(\mathcal{C})\), see [38, 5.1.22]. Every definable subcategory \(\mathcal{D}\) of a definable category \(\mathcal{C}\) is determined by the set \(\operatorname{pinj}\nolimits(\mathcal{D})=\mathcal{D}\ \cap\ \operatorname{pinj}\nolimits(\mathcal{C})\) of indecomposable pure-injectives in \(\mathcal{D}\), hence by the closed subset \(\operatorname{Zg}\nolimits(\mathcal{D})=\mathcal{D}\cap\operatorname{Zg} \nolimits(\mathcal{C})\) of \(\operatorname{Zg}\nolimits(\mathcal{C})\), and every closed set in \(\operatorname{Zg}\nolimits(\mathcal{C})\) is of the form \(\operatorname{Zg}\nolimits(\mathcal{D})\) for some definable subcategory \(\mathcal{D}\) of \(\mathcal{C}\), see [38, 5.1.1]. Krause [23] showed how this carries over to compactly generated triangulated categories \(\mathcal{T}\). The **Ziegler spectrum**, \(\operatorname{Zg}\nolimits(\mathcal{T})\), of \(\mathcal{T}\) is defined to have, for its points, the (isomorphism classes of) indecomposable pure-injectives. As for definable subcategories of module categories, there are many equivalent ways of specifying a basis of (compact) open sets on this set of points, including the following (the second by 2.15): \((\phi/\psi)=\{N\in\operatorname{pinj}\nolimits(\mathcal{T}):\phi(N)/\psi(N)\neq 0\}\) for \(\phi/\psi\) a pp-pair; \(\{N\in\operatorname{pinj}\nolimits(\mathcal{T}):\operatorname{ann}_{N}(f)\neq 0\}\) for \(f\) a morphism in \(\mathcal{T}^{\rm c}\); \((F)=\{N\in\operatorname{pinj}\nolimits(\mathcal{T}):FN\neq 0\}\) for \(F\in\operatorname{Coh}\nolimits(\mathcal{T})\). There are other topologies of interest here. First consider the case where \(R\) is commutative noetherian. Then the subcategory, \(\operatorname{Inj}\nolimits\)-\(R\), of injectives in Mod-\(R\) is definable (see [38, 3.4.28]) and the corresponding closed subset of \(\mathrm{Zg}_{R}\) is just the set, \(\mathrm{inj}_{R}\), of indecomposable injective \(R\)-modules. For such a ring the set \(\mathrm{inj}_{R}\) may be identified [16], see [38, SS14.1.1], with \(\mathrm{Spec}(R)\)_via_\(P\mapsto E(R/P)\) where \(P\) is any prime ideal of \(R\) and \(E(-)\) denotes injective hull. However, the Ziegler topology restricted from \(\mathrm{Zg}_{R}\) to \(\mathrm{inj}_{R}\) induces, _via_ the above bijection, not the Zariski topology on \(\mathrm{Spec}(R)\) but its Hochster dual ([36, pp. 104/5]). Recall that the **Hochster dual** of a topology has, as a basis (on the same set of points), the complements of the compact open sets in the original topology. That fact inspired the general definition [37, pp. 200-202] of the **dual-Ziegler** (or "rep-Zariski") topology on \(\mathrm{pinj}(\mathcal{C})\) for any definable category \(\mathcal{C}\), as the Hochster-dual of the Ziegler topology21. So this dual topology has the same underlying set, \(\mathrm{pinj}(\mathcal{C})\), and has, for a basis of open sets, the complements Footnote 21: These spaces are, however, unlike those in Hochster’s original definition, not spectral, and it is not always that case that the Ziegler topology is returned as the dual of the dual-Ziegler topology, [13, 3.1] \[[\phi/\psi]=\mathrm{Zg}(\mathcal{C})\setminus(\phi/\psi)\] of the compact Ziegler-open sets. If \(\mathcal{C}\) is a locally coherent category, in particular if it is \(\mathrm{Mod}\)-\(R\) for a right coherent ring (possibly with many objects), then22 the absolutely pure objects form a definable subcategory with corresponding closed subset of \(\mathrm{Zg}(\mathcal{C})\) again being the set \(\mathrm{inj}(\mathcal{C})\) of (isomorphism types of) indecomposable injectives in \(\mathcal{C}\). This set carries a **(Gabriel-)Zariski** topology which has, for a basis of open sets, those of the form Footnote 22: For module categories, this goes back to [15], see [38, 3.4.24]; the general case is proved the same way and also follows from, for example, [39, Chpt. 6]. \[[A]=\{E\in\mathrm{inj}(\mathcal{C}):(A,C)=0\}\] for \(A\) a finitely presented object of \(\mathcal{C}\). Thus we extend the domain of applicability of the category-theoretic reformulation ([16], [47]) of the definition of the Zariski topology on a commutative coherent ring. For such a category \(\mathcal{C}\) the Ziegler topology restricted to \(\mathrm{inj}(\mathcal{C})\) is Hochster-dual to this Gabriel-Zariski topology and _vice versa_ ([38, 14.1.6]). We may compare these topologies over a commutative coherent ring \(R\) where, in general, the map \(P\mapsto E(R/P)\) is only an inclusion of \(\mathrm{Spec}(R)\) into \(\mathrm{inj}_{R}\), because there may be indecomposable injectives not of the form \(E(R/P)\), e.g. [38, 14.4.1]. The inclusion, nevertheless, is a topological equivalence - an isomorphism of frames of open subsets: every indecomposable injective is elementarily equivalent to, hence topologically equivalent to, a module of the form \(E(R/P)\) with \(P\) a prime, see [38, 14.4.5]. So, for commutative coherent rings, we may consider these various topologies as topologies on \(\mathrm{Spec}(R)\) and, so considered, the Ziegler topology coincides with the **Thomason** topology, which is defined to be the Hochster-dual of the Gabriel-Zariski topology, [19]. That is, the Ziegler topology has, for its open sets, those of the form \(\bigcup_{\lambda}\left(R/I_{\lambda}\right)\) with the \(I_{\lambda}\) finitely generated ideals of \(R\), where \[(R/I_{\lambda})=\{N\in\mathrm{pinj}_{R}:(R/I_{\lambda},N)\neq 0\}=(xI_{ \lambda}=0/x=0).\] In terms of sets of primes, the Ziegler-open sets have the form \(\bigcup_{\lambda}V(I_{\lambda})\) with the \(I_{\lambda}\) finitely generated23. These various topologies are compared in [42, SS6]. Footnote 23: For a general commutative ring, the Ziegler topology on \(\mathrm{inj}_{R}\) is finer, having open sets of a similar form but with pp-definable ideals replacing finitely generated ideals; in coherent rings the pp-definable ideals coincide with the finitely generated ideals, see [42, §6]. The discussion above applies to the locally coherent category \(\mathrm{Mod}\)-\(\mathcal{T}^{\mathrm{c}}\). As we have seen in 1.4, the restricted Yoneda functor \(y\) induces an equivalence between the category, \(\mathrm{Pinj}(\mathcal{T})\), of pure-injective objects of \(\mathcal{T}\) and the category, \(\mathrm{Inj}\)-\(\mathcal{T}^{\mathrm{c}}\), of injective right \(\mathcal{T}^{\mathrm{c}}\)-modules. Indeed, this gives a homeomorphism of spectra. **Theorem 3.16**.: _Suppose that \(\mathcal{T}\) is a compactly generated triangulated category. Then \(y:\mathcal{T}\to\mathrm{Mod}\)-\(\mathcal{T}^{\mathrm{c}}\) induces a bijection between \(\mathrm{pinj}(\mathcal{T})\) and \(\mathrm{inj}_{\mathcal{T}^{\mathrm{c}}}\). This is a homeomorphism between \(\mathrm{Zg}(\mathcal{T})\) and \(\mathrm{Zg}(\mathrm{Abs}\)-\(\mathcal{T}^{\mathrm{c}}=\mathrm{Flat}\)-\(\mathcal{T}^{\mathrm{c}})\) (the latter can also be regarded as \(\mathrm{inj}_{\mathcal{T}^{\mathrm{c}}}\) with the Thomason topology) and is also a homeomorphism between the dual-Ziegler spectrum \(\mathrm{Zar}(\mathcal{T})\) of \(\mathcal{T}\) and \(\mathrm{inj}_{\mathcal{T}^{\mathrm{c}}}\) if the latter is equipped with the Gabriel-Zariski topology which has, for a basis of open sets, the sets \([G]=\{E\in\mathrm{inj}_{\mathcal{T}^{\mathrm{c}}}:(G,E)=0\}\) for \(G\in\mathrm{mod}\)-\(\mathcal{T}^{\mathrm{c}}\)._ Since closed subsets of the Ziegler spectrum are in natural correspondence with definable subcategories, this homeomorphism underlies the bijection 3.7 between definable subcategories of \(\mathcal{T}\) and finite-type hereditary torsionfree classes in \(\operatorname{Mod}\)-\(\mathcal{T}^{\mathrm{c}}\). That also reflects the fact that a finite-type hereditary torsion theory is determined by (it is the torsionfree class cogenerated by) the set of indecomposable torsionfree injectives (see [38, 11.1.29]). We have already, in Section 3.5, considered the part of this correspondence coming from compactly generated torsion pairs in \(\mathcal{T}\), and we will also, in Section 4.1, look at how the Balmer spectrum fits into this picture in the case that \(\mathcal{T}\) is tensor-triangulated. ### Triangulated definable subcategories In this section we consider the definable subcategories \(\mathcal{D}\) of \(\mathcal{T}\) which are **triangulated**, that is, **shift-closed** (if \(X\in\mathcal{D}\), then \(\Sigma^{\pm}X\in\mathcal{D}\)) and extension-closed, where by **extension-closed** we mean that, if \(X\to Y\to Z\to\Sigma X\) is a distinguished triangle with both \(X\) and \(Z\) in \(\mathcal{D}\), then also \(Y\in\mathcal{D}\). First, some remarks on extending definable subcategories to shift-closed definable subcategories. If \(\mathcal{D}\) is a definable subcategory of \(\mathcal{T}\) then each shift \(\Sigma^{i}\mathcal{D}\) is definable, (e.g. see [52, 6.1.1]). We can define the shift-closure of \(\mathcal{D}\) to be the definable closure of \(\bigcup_{i\in\mathcal{D}}\,\Sigma^{i}\mathcal{D}\). That this is, in general, larger than \(\operatorname{Add}^{+}(\bigcup_{i\in\mathcal{D}}\,\Sigma^{i}\mathcal{D})\) (\({}^{+}\) denoting closure under pure submodules) is shown by the following example. _Example 3.17_.: Consider the derived category \(\mathcal{D}_{k[\epsilon]}=\mathcal{D}(\operatorname{Mod}\)-\(k[\epsilon])\), of the category of modules over \(k[\epsilon]=k[x]/(x^{2})\). Let \(\mathcal{D}\) be the subcategory of \(\mathcal{D}_{k[\epsilon]}\) consisting of complexes which are \(0\) in every degree \(i<0\). Then \(\mathcal{D}\) is a definable subcategory, defined by the conditions \((k[\epsilon][i],-)=0\) (\(i<0\)) where \(k[\epsilon]\) here denotes the complex with \(k[\epsilon]\) in degree \(0\) and zeroes elsewhere. The union of the (left) shifts of \(\mathcal{D}\) contains only complexes which are bounded below and so the additive closure of the union \(\bigcup_{i}\,\mathrm{Zg}(\Sigma^{i}\mathcal{D})\) of the Ziegler-spectra of these shifts does not contain, for example, the doubly infinite complex which has \(k[\epsilon]\) in each degree and multiplication by \(\epsilon\) for each of its maps. But that indecomposable pure-injective complex belongs to the Ziegler-closure of \(\bigcup_{i}\,\mathrm{Zg}(\Sigma^{i}\mathcal{D})\), indeed it is in the Ziegler-closure of the set of complexes obtained from it by replacing \(k[\epsilon]\) by \(0\) in every degree \(\leq i\) for some \(i\); this is proved in [20, SS3.4] and, in greater generality, in [3, SS6, SS4]. In contrast, if we were to take \(\mathcal{D}\) to be the image of \(\operatorname{Mod}\)-\(k[\epsilon]\) consisting of complexes concentrated in degree \(0\), then the additive closure of the union of the shifts of \(\mathcal{D}\) is definable. That follows because every object in the definable category generated by that union is finite endolength, so the Ziegler closure contains no new indecomposable pure-injectives (e.g. see [38, 4.4.30]). Thus, if \(X\) is a closed subset of the Ziegler spectrum of \(\mathcal{T}\), it may be that \(\bigcup_{i}\,\Sigma^{i}\mathrm{Zg}(\mathcal{D})\) is not Ziegler-closed. It is the case, see [52, 6.1.10], that, if points of \(\mathrm{Zg}(\mathcal{T})\) are identified with their shifts and the set of equivalence classes is given the quotient topology, then this is topologically equivalent to the space based on \(\operatorname{pinj}(\mathcal{T})\) which has, for its closed sets, those of the form \(\mathcal{D}\cap\operatorname{pinj}(\mathcal{T})\) where \(\mathcal{D}\) is a shift-closed definable subcategory of \(\mathcal{T}\). The first example in 3.17 shows that the projection map taking a point of the Ziegler spectrum of \(\mathcal{T}\) to its shift equivalence class need not be closed (the complexes in that example are endofinite, hence Ziegler-closed points). Further Ziegler-type topologies on \(\operatorname{pinj}(\mathcal{T})\) are obtained by using positively- (alternatively, negatively-) shift-closed definable subcategories of \(\mathcal{T}\), see [52, SS6.1]). A triangulated subcategory \(\mathcal{B}\) of \(\mathcal{T}\) is **smashing** if it is the kernel of a Bousfield localisation \(q:\mathcal{T}\to\mathcal{T}^{\prime}\) for which the left adjoint to \(q\), including \(\mathcal{T}^{\prime}=\mathcal{T}/\mathcal{B}\) into \(\mathcal{T}\), preserves coproducts. Hom-orthogonality gives a bijection between the definable subcategories which are triangulated and the smashing subcategories of \(\mathcal{T}\). **Theorem 3.18**.: _([25], see [52, 5.2.10]) If \(\mathcal{D}\) is a triangulated definable subcategory of the compactly generated triangulated category \(\mathcal{T}\), then \(\mathcal{B}=\,^{\perp}\mathcal{D}\) is a smashing subcategory of \(\mathcal{T}\) and \(\mathcal{D}=\mathcal{B}^{\perp}\), so \((\mathcal{B},\mathcal{D})\) is a torsion pair. Every smashing subcategory of \(\mathcal{T}\) arises in this way._ **Proposition 3.19**.: _[_23_, 3.9, Thm. C]_ _Suppose that \(\mathcal{B}\) is a smashing subcategory of \(\mathcal{T}\) and \(\mathcal{D}=\mathcal{B}^{\perp}\) is the corresponding triangulated definable subcategory. Then \(\mathcal{B}=y^{-1}\mathscr{S}_{\mathcal{D}}\), where \(\mathscr{S}_{\mathcal{D}}=\overrightarrow{\mathcal{S}_{\mathcal{D}}}\) is the torsion class for the torsion theory \(\gamma_{\mathcal{B}}=\tau_{\mathcal{D}}\) generated by \(y\mathcal{B}\), equivalently cogenerated by \(y\mathcal{D}\)._ **Corollary 3.20**.: _If \(\mathcal{D}\) is a triangulated definable subcategory of \(\mathcal{T}\), and \(\mathscr{T}_{\mathcal{D}}\) is the corresponding hereditary torsion class in \(\mathrm{Mod}\text{-}\mathcal{T}^{\mathrm{c}}\), then \(y^{-1}\mathscr{T}_{\mathcal{D}}=\,^{+}\mathcal{D}\) is a (typical) smashing subcategory of \(\mathcal{T}\)._ One says that \(\mathcal{T}\) has the **Telescope Property** if, for each smashing subcategory \(\mathcal{B}\), the torsion pair \((\mathcal{B},\mathcal{D})\) is compactly generated, equivalently, 3.15, if the Serre subcategory \(\mathcal{S}_{\mathcal{D}}=\mathscr{T}_{\mathcal{D}}\,\cap\,\mathrm{mod}\text{-} \mathcal{T}^{\mathrm{c}}\) is generated by projective (= representable) objects, see [23, Introduction]. ### Elementary duality If \(R\) is any skeletally small preadditive category (= multisorted ring), then there is a duality - _elementary duality_, [35], [21], see [38, SSSS1.3, 10.3] - between the category of pp-pairs for right \(R\)-modules and the category of pp-pairs for left \(R\)-modules. This duality induces a natural bijection between the definable subcategories of \(\mathrm{Mod}\text{-}R\) and \(R\text{-}\mathrm{Mod}\), [21, 6.6] see [38, SS3.4.2]. In particular this applies with \(R=\mathcal{T}^{\mathrm{c}}\). Because the model theory of \(\mathcal{T}\) is essentially that of \(\mathrm{Flat}\text{-}\mathcal{T}^{\mathrm{c}}=\mathrm{Abs}\text{-}\mathcal{T}^ {\mathrm{c}}\) inside \(\mathrm{Mod}\text{-}\mathcal{T}^{\mathrm{c}}\), it follows that we have a version of elementary duality between \(\mathcal{T}\) and the definable subcategory \(\mathcal{T}^{\mathrm{c}}\text{-}\mathrm{Abs}=\mathcal{T}^{\mathrm{c}}\)-Flat of \(\mathcal{T}^{\mathrm{c}}\text{-}\mathrm{Mod}\). In particular, elementary duality gives a natural bijection between the definable subcategories of \(\mathcal{T}\) and those of \(\mathcal{T}^{\mathrm{c}}\)-Flat. With the module situation in mind, it is natural to ask whether there is a compactly triangulated category \(\mathcal{T}_{1}\) such that \(\mathcal{T}_{1}^{\mathrm{c}}\simeq(\mathcal{T}^{\mathrm{c}})^{\mathrm{op}}\) and hence an elementary duality between the model theory of \(\mathcal{T}\) and the model theory of \(\mathcal{T}_{1}\)_via_\(\mathrm{Mod}\text{-}\mathcal{T}_{1}^{\mathrm{c}}\simeq\mathcal{T}^{\mathrm{c}}\)-Mod. This situation is considered in [18, SS7]. In particular, if \(\mathcal{T}\) is the derived category of modules over a ring then this is so, [18, 7.5], see also [2]; more generally it is so if \(\mathcal{T}\) is an algebraic triangulated category, [11]. **Question:** If \(\mathcal{T}\) is a compactly generated triangulated category, is there a triangulated category \(\mathcal{T}_{1}\) and an elementary duality between \(\mathcal{T}\) and \(\mathcal{T}_{1}\)? If such a category \(\mathcal{T}_{1}\) exists, is it essentially unique? By "an elementary duality" we mean at least a natural bijection between definable subcategories, probably also an anti-equivalence between the respective categories of pp-sorts, perhaps also a duality at the level of pp formulas. See the remarks in Section 2.5 about enhancements. This also raises the further general questions. **Questions:** What is a characterisation of the categories which arise as \(\mathcal{T}^{\mathrm{c}}\) where \(\mathcal{T}\) is compactly generated triangulated? Given such a category, does it come from a unique compactly generated triangulated category \(\mathcal{T}\)? and, if so, how can \(\mathcal{T}\) be constructed from it? In particular is \((\mathcal{T}^{\mathrm{c}})^{\mathrm{op}}\) of the form \(\mathcal{T}_{1}^{\mathrm{c}}\) for some compactly generated triangulated category \(\mathcal{T}_{1}\)? These seem to be hard questions to answer; they include the, only partly resolved, Margolis Conjecture in the case that \(\mathcal{T}\) is the stable homotopy category of spectra. If \(\mathcal{T}\) is the derived category \(\mathcal{D}_{R}=\mathcal{D}(\mathrm{Mod}\text{-}R)\) of some ring \(R\), we do get a good elementary duality between \(\mathcal{D}_{R}\) and \(\mathcal{D}_{R^{\mathrm{op}}}=\mathcal{D}(R\text{-}\mathrm{Mod})\). This follows because the duality \((\mathrm{proj}\text{-}R)^{\mathrm{op}}\to\mathrm{proj}\text{-}R^{\mathrm{op}}\) between the categories of finitely generated projectives given by \(P\mapsto(P,R)\) extends to the respective categories of perfect complexes, that is, to a duality \((-)^{\mathrm{t}}:(\mathcal{D}_{R}^{\mathrm{op}})^{\mathrm{op}}\simeq\mathcal{D} _{R^{\mathrm{op}}}^{\mathrm{op}}\), see [18, SS7], also [2, SS2.2]. In these papers, \(R\) is a 1-sorted ring but the arguments also apply if \(R\) is a skeletally small preadditive category. In [11, SS3.2] this is extended to algebraic triangulated categories _via_\(\mathrm{dg}\)-enhancements. We will, in Section 4.2, describe an internal duality, from [52, Chpt. 7] in the tensor-triangulated case. If \(R\) is commutative, so \(\mathcal{D}_{R}\simeq\mathcal{D}_{R^{\mathrm{op}}}\), the duality in [2] does coincide ([52, 7.3.5]) with the internal duality described in Section 4.2. For details, we refer the reader to those papers; in particular, the generalisation in [11] to algebraic triangulated categories uses enhancements (see Section 2.5), which we don't go into here (also see [29] for related use of enhancements). For an abstract approach to dualities between triangulated categories, see [11]. We continue a little further in the case that \(\mathcal{T}\) is the derived category \(\mathcal{D}_{R}\) of a module category. If \(\mathcal{D}\) is a definable subcategory of \(\mathcal{D}_{R}\), then we have the corresponding annihilator ideal \(\mathrm{Ann}_{\mathcal{D}_{R}^{\mathrm{op}}}(\mathcal{D})\), \(=\mathrm{Ann}(\mathcal{D})\) for short. Set \((\mathrm{Ann}(\mathcal{D}))^{\mathrm{t}}=\{f^{\mathrm{t}}:f\in\mathrm{Ann}( \mathcal{D})\}\), where \((-)^{\mathrm{t}}:(\mathcal{D}_{R}^{\mathrm{op}})\simeq\mathcal{D}_{R^{\mathrm{op}}}^ {\mathrm{op}}\) is the duality from the previous paragraph. Then, [2, 2.3], \((\mathrm{Ann}(\mathcal{D}))^{\mathrm{t}}\) is an annihilator ideal of \(\mathcal{D}_{R^{\mathrm{op}}}^{\mathrm{op}}\). We set \(\mathcal{D}^{\mathrm{d}}=\mathrm{Ann}_{\mathcal{D}_{R^{\mathrm{op}}}}((\mathrm{Ann }(\mathcal{D}))^{\mathrm{t}})\) and refer to this as the definable subcategory of \(\mathcal{D}_{R^{\mathrm{op}}}\)**elementary dual** to \(\mathcal{D}\). The terminology is further justified by the following, which refers, using the obvious notations, to the other ways of specifying definable subcategories. **Proposition 3.21**.: _([2, 2.2-2.5]) If \(\mathcal{D}\) is a definable subcategory of \(\mathcal{D}_{R}\) and \(\mathcal{D}^{\text{d}}\) is its elementary dual definable subcategory of \(\mathcal{D}_{R^{\text{op}}}\), then:_ \(\operatorname{Ann}(\mathcal{D}^{\text{t}})=(\operatorname{Ann}(\mathcal{D}))^ {\text{t}}\)__ \(\operatorname{Div}(\mathcal{D}^{\text{t}})=(\mathcal{D}\text{-TF})^{\text{t}}\)__ \(\mathcal{D}^{\text{t}}\text{-TF}=(\operatorname{Div}(\mathcal{D}))^{\text{t}}\). **Proof.** The first is by definition and [2, 2.3]. For the others consider \(f\in\operatorname{Ann}(\mathcal{D})\) and form the extended triangle \[\Sigma^{-1}B\xrightarrow{\Sigma^{-1}g}\Sigma^{-1}C\xrightarrow{\Sigma^{-1}h }A\xrightarrow{f}B\xrightarrow{g}C\xrightarrow{h}\Sigma A\] then dualise it: \[(\Sigma A)^{\text{t}}=\Sigma^{-1}A^{\text{t}}\xrightarrow{h^{\text{t}}}C^{ \text{t}}\xrightarrow{g^{\text{t}}}B^{\text{t}}\xrightarrow{f^{\text{t}}}A^{ \text{t}}\xrightarrow{\Sigma\,h^{\text{t}}}\Sigma\,C^{\text{w}}\xrightarrow{ \Sigma\,g^{\text{t}}}\Sigma\,B^{\text{t}}.\] Then we use the equivalences (4) from Section 3.1, namely: \[Xf=0\quad\text{ iff }\quad g|X\quad\text{ iff }\quad\operatorname{ann}_{X}( \Sigma^{-1}h)=0.\] From that we directly obtain the other two equalities. \(\quad\square\) We also have, just as for definable subcategories of module categories, that the category of pp-pairs for \(\mathcal{D}^{\text{d}}\) is the opposite to that for \(\mathcal{D}\). The latter is equivalent to \(\operatorname{mod}\text{-}\mathcal{D}^{\text{t}}_{R}/\mathcal{S}_{\mathcal{D}}\), where \(\mathcal{S}_{\mathcal{D}}=\{G:(G,yX)=0\ \forall X\in\mathcal{D}\}\). We set \(d\mathcal{S}_{\mathcal{D}}=\{dG:G\in\mathcal{S}_{\mathcal{D}}\}\), where \(d\) is the duality of 2.4.24 Footnote 24: One can set up duality at the level of pp formulas but it’s duality of pp-pairs which we really need. Also see Section 4.2 for the issues re well-definedness/independence of enhancements which arise. **Proposition 3.22**.: _If \(\mathcal{D}\) is a definable subcategory of \(\mathcal{D}_{R}\) and \(\mathcal{D}^{\text{d}}\) is its elementary dual definable subcategory of \(\mathcal{D}_{R^{\text{op}}}\), then_ \[\mathcal{S}_{\mathcal{D}^{\text{d}}}=d\mathcal{S}_{\mathcal{D}}.\] _Hence_ \[\mathbb{L}^{\text{eq}+}(\mathcal{D}^{\text{d}})=(\mathcal{D}^{\text{e}}_{R^{ \text{op}}})\text{-mod}/\mathcal{S}_{\mathcal{D}^{\text{d}}}\simeq( \operatorname{mod}\text{-}\mathcal{D}^{\text{e}}_{R}/\mathcal{S}_{\mathcal{D}}) ^{\text{op}}=(\mathbb{L}^{\text{eq}+}(\mathcal{D}))^{\text{op}}.\] This is a special case of [18, 7.4] which deals with the general case of pairs, \(\mathcal{T}\), \(\mathcal{T}_{\text{i}}\), of compactly generated triangulated categories with \(\mathcal{T}^{\text{t}}_{\text{i}}\simeq(\mathcal{T}^{\text{c}})^{\text{op}}\), also showing that, in this situation, we have a frame isomorphism between \(\operatorname{Zg}(\mathcal{T})\) and \(\operatorname{Zg}(\mathcal{T}_{\text{i}})\). It is shown in [2] that, for derived categories of module categories, elementary duality has the same relation to algebraic Hom-dualities as in the case of definable subcategories of module categories. In [11] this is treated in a very general way and a variety of specific examples, from algebra and topology, are given. ## 4 Tensor-triangulated categories Suppose now that the compactly generated triangulated category \(\mathcal{T}\) has a monoidal, that is a tensor, structure. So we have \(\otimes:\mathcal{T}\times\mathcal{T}\to\mathcal{T}\), which we assume to be commutative as well as associative, for which we have a tensor-unit \(\mathbb{1}\) - so \(\mathbb{1}\otimes X\simeq X\) for every \(X\in\mathcal{T}\). **We assume \(\otimes\) to be exact in each variable.** We drop explicit mention of associators _et cetera_, see for instance [30, Part II] for more background. We will suppose that \(\mathcal{T}\) is **rigidly-compactly generated**. That is, we assume in addition: that the tensor structure is **closed**, meaning that there is an internal hom \([-,-]:\mathcal{T}\times\mathcal{T}\to\mathcal{T}\) which is right adjoint to \(\otimes\): \((X\otimes Y,Z)\simeq(X,[Y,Z])\) for \(X,Y,Z\in\mathcal{T}\), in particular \((Y,Z)\simeq(\mathbb{1},[Y,Z])\); and, writing \(X^{\vee}=[X,\mathbb{1}]\) for the **dual** of an object \(X\in\mathcal{T}\), we assume that every compact object \(A\) is **rigid**, meaning that the natural map \(A^{\vee}\otimes B\to[A,B]\) is an isomorphism for every \(B\in\mathcal{T}^{\text{c}}\). It follows that \(\mathcal{T}^{\text{c}}\) is a **tensor-subcategory** of \(\mathcal{T}\) (i.e. is closed under \(\otimes\)), that \((A^{\vee})^{\vee}\simeq A\), that \(A^{\vee}\otimes X\simeq[A,X]\) for \(X\in\mathcal{T}\) and \(A\in\mathcal{T}^{\text{c}}\), and that the duality functor \((-)^{\vee}\) is exact (e.g. see [51, SS1, 2.12]). The monoidal structure on \(\mathcal{T}^{\rm c}\) induces, by Day convolution (see [7, Appx.]), a right-exact monoidal structure on \({\rm mod}\text{-}\mathcal{T}^{\rm c}\) and hence on \({\rm Mod}\text{-}\mathcal{T}^{\rm c}\). By definition we have \(y(A\otimes B)\simeq yA\otimes yB\) for \(A,B\in\mathcal{T}^{\rm c}\) and, see [7, A.14], the restricted Yoneda functor \(y:\mathcal{T}\to{\rm Mod}\text{-}\mathcal{T}^{\rm c}\) is monoidal. The duality 2.6 between \({\rm mod}\text{-}\mathcal{T}^{\rm op}\) and \({\rm Coh}(\mathcal{T})\) is monoidal if the latter is given the natural tensor structure (see [52, SS5.1]). We say that a definable subcategory \(\mathcal{D}\) of \(\mathcal{T}\) is **tensor-closed** if, for every \(X\in\mathcal{D}\) and \(Y\in\mathcal{T}\), we have \(X\otimes Y\in\mathcal{D}\). It is sufficient, see below, that this be so for every \(Y\in\mathcal{T}^{\rm c}\). The theorem below says that this tensor-closed condition is equivalent to corresponding requirements on the associated data. We write \(f\otimes A\) for \(f\otimes\operatorname{id}_{A}\) if \(f\) is a morphism and \(A\) an object. **Theorem 4.1**.: _[_52_, 5.1.8]_ _Suppose that \(\mathcal{T}\) is a rigidly-compactly generated tensor-triangulated category. Then the following conditions on a definable subcategory \(\mathcal{D}\) are equivalent:_ _(i)_ \(\mathcal{D}\) _is tensor-closed;_ _(ii)_ \(X\in\mathcal{D}\) _and_ \(A\in\mathcal{T}^{\rm c}\) _implies_ \(X\otimes A\in\mathcal{D}\)_;_ _(iii) if_ \(f\in{\rm Ann}_{\mathcal{T}^{\rm c}}(\mathcal{D})\) _and_ \(A\in\mathcal{T}^{\rm c}\)_, then_ \(f\otimes A\in{\rm Ann}_{\mathcal{T}^{\rm c}}(\mathcal{D})\)_;_ _(iv) the corresponding Serre subcategory_ \(\mathcal{S}_{\mathcal{D}}\) _of_ \({\rm mod}\text{-}\mathcal{T}^{\rm c}\) _is a tensor-ideal of_ \({\rm mod}\text{-}\mathcal{T}^{\rm c}\) _(it is enough that it be closed under tensoring with representable functors_ \(yA\) _with_ \(A\in\mathcal{T}^{\rm c}\)_);_ _(v) the corresponding Serre subcategory_ \({\rm Ann}_{{\rm Coh}(\mathcal{T})}(\mathcal{D})=\mathcal{S}_{\mathcal{D}}^{ \rm o}\) _of_ \({\rm Coh}(\mathcal{T})\) _is a tensor-ideal of_ \({\rm Coh}(\mathcal{T})\) _(it is enough that it be closed under tensoring with representable functors_ \((A,-)\) _with_ \(A\in\mathcal{T}^{\rm c}\)_)._ A stronger condition on a definable subcategory \(\mathcal{D}\) of \(\mathcal{T}\) is that it be a **tensor-ideal** of \(\mathcal{T}\), meaning that it is tensor-closed and triangulated. The corresponding, in the sense of 4.1, annihilator ideals and Serre subcategories are characterised in [52, 5.2.14]. The additional condition on \({\rm Ann}_{\mathcal{T}^{\rm c}}(\mathcal{D})\) is that it be exact and the additional condition on \(\mathcal{S}_{\mathcal{D}}\) is that it be perfect; these conditions come from [25], see [52, SS5.2] for the detailed statements. Furthermore, the tensor version of 3.18 is true: the triangulated tensor-closed definable subcategories of \(\mathcal{T}\) are in bijection, _via_ torsion pairs, with the smashing tensor-ideals of \(\mathcal{T}\) ([52, 5.2.14]). In [52, Chpt. 6], Wagstaffe defines and investigates various coarsenings of the Ziegler topology on \({\rm pinj}(\mathcal{T})\), in particular, the tensor-closed Ziegler spectrum, \({\rm Zg}^{\otimes}(\mathcal{T})\), which is obtained by taking the closed subsets to be those of the form \(\mathcal{D}\cap{\rm pinj}(\mathcal{T})\) where \(\mathcal{D}\) is a tensor-closed definable subcategory of \(\mathcal{T}\). ### Spectra in tensor-triangulated categories A **prime** of the tensor-triangulated category \(\mathcal{T}\) is a (thick) tensor-ideal \(\mathcal{P}\) of \(\mathcal{T}^{\rm c}\) such that if \(A,B\in\mathcal{T}^{\rm c}\) and \(A\otimes B\in\mathcal{P}\), then \(A\) or \(B\) is in \(\mathcal{P}\). The **Balmer spectrum**[4], \({\rm Spc}(\mathcal{T}^{\rm c})\) or just \({\rm Spc}(\mathcal{T})\), consists of these primes, with the topology which has, for a basis of open sets, those of the form \[U(A)=\{\mathcal{P}\in{\rm Spc}(\mathcal{T}):A\in\mathcal{P}\}\] for \(A\in\mathcal{T}^{\rm c}\). This is a spectral space and we may also consider, as in Section 3.6, the Hochster-dual, or **Thomason**, topology on the same set, which is defined by declaring that the \(U(A)\) generate, under finite union and arbitrary intersection, the _closed_ sets. Both these topologies are natural and have their uses in various contexts, see, for instance, [5]. There are various routes by which \({\rm Spc}(\mathcal{T})\) and \({\rm inj}\text{-}\mathcal{T}^{\rm c}\), and also the homological spectrum, \({\rm Spc}^{\rm h}(\mathcal{T})\), from [6], with their various topologies, may be connected, see in particular [12] and references therein. We also have the following. To a point \(\mathcal{P}\) of \({\rm Spc}(\mathcal{T})\) we can associate the finite type hereditary torsion theory \(\gamma_{\mathcal{P}}=(\overrightarrow{\mathcal{S}_{\mathcal{P}}},(y\mathcal{P} )^{\perp})\) (see Section 3.5) whose torsion class is generated as such by \(y\mathcal{P}\), that is, the torsion class is the \(\varinj\)-closure of the Serre subcategory \(\mathcal{S}_{y\mathcal{P}}\) generated by \(y\mathcal{P}\). By [6, 3.9] this gives an injection of the lattice of Balmer primes into the lattice of finite-type hereditary torsion theories, the latter ordered by inclusion of torsion classes. For, if \(\mathcal{P}\subset\mathcal{Q}\) is a proper inclusion of Balmer primes, then, by Balmer's result, there is a maximal Serre tensor-ideal \(\mathcal{B}\) of \({\rm mod}\text{-}\mathcal{T}^{\rm c}\) such that \(\mathcal{P}=y^{-1}\mathcal{B}\). Certainly \(\mathcal{S}_{y\mathcal{P}}\subseteq\mathcal{B}\) so, if we had \(\mathcal{S}_{y\mathcal{P}}=\mathcal{S}_{y\mathcal{Q}}\), then we would have \(y\mathcal{Q}\subseteq\mathcal{B}\) and hence a contradiction. Further, each finite type hereditary torsionfree class \(\mathcal{F}\) is determined by its intersection with \({\rm inj}_{\mathcal{T}^{\rm c}}\), see [38, 11.1.29], and the resulting sets \(\mathcal{F}\cap{\rm inj}_{\mathcal{T}^{\rm c}}\) are the closed sets in the Ziegler topology on \({\rm inj}_{\mathcal{T}^{\rm c}}\) (see [38, SS14.1.3]). So, to a Balmer prime \(\mathcal{P}\), we also have the associated Ziegler-closed set \((y\mathcal{P})^{\perp}\,\cap\,\mathrm{inj}_{\mathcal{T}^{c}}\). Note that this association is inclusion-reversing. If \(A\in\mathcal{T}^{c}\) then we have \[\mathcal{P}\in U(A)\text{ iff }A\in\mathcal{P}\text{ iff }yA\in\overrightarrow{S_{y \mathcal{P}}}\text{ iff }y\mathcal{P})^{\perp}\subseteq(yA)^{\perp}.\] The second equivalence is by the argument just made. Note that \((yA)^{\perp}\,\cap\,\mathrm{inj}_{\mathcal{T}^{c}}\) is the complement of the basic Ziegler-open subset of \(\mathrm{inj}_{\mathcal{T}^{c}}\) that is defined by \((yA,-)\neq 0\), hence it is basic open in the dual-Ziegler topology. For instance, if \(R\) is commutative noetherian, then the above essentially gives the embedding (see [4], [19]) of \(\mathrm{Spec}(\mathcal{D}_{R}^{\mathrm{perf}})\) with the Thomason topology into the frame of Ziegler-open subsets of \(\mathrm{Spec}(R)\), the latter being isomorphic, as a lattice, to the opposite of the lattice of finite type hereditary torsionfree classes of \(R\)-modules. ### Internal duality in tensor-triangulated categories In [52, Chpt. 7] an _internal_ duality for rigidly-compactly generated tensor-triangulated categories \(\mathcal{T}\) is defined. In this respect it is somewhat similar to elementary duality in the case that \(R\) is a commutative ring, since then the categories of right and left \(R\)-modules are naturally identified and so, in that particular context, elementary duality is an internal duality on \(\mathrm{Mod}\text{-}R\). Indeed, for a commutative ring \(R\) and the derived-tensor structure on the derived category \(\mathcal{D}_{R}\), this internal duality coincides with elementary duality, [52, 7.3.5]. The internal duality for rigidly-compactly generated tensor-triangulated \(\mathcal{T}\) comes from Wagstaffe's thesis [52] and it was also discovered independently by Bird and Williamson [11]. In [52] it is defined in terms of cohomological ideals, Serre subcategories and definable subcategories; here we note that it can also be defined at the level of formulas and pp-pairs. We continue to assume that \(\mathcal{T}\) is a rigidly-compactly generated tensor-triangulated category. Just as for the "external" duality, we can define the duality using a hom functor to an object but, in this case, we use the internal hom functor: for \(A\in\mathcal{T}^{c}\), consider \(A\mapsto[A,\mathbb{1}]\simeq A^{\vee}\otimes\mathbb{1}\simeq A^{\vee}\). Similarly, internal duality \((-)^{\vee}=[-,1]\) applied to a morphism \(A\xrightarrow{I}B\) in \(\mathcal{T}^{c}\) gives the morphism \(B^{\vee}\xrightarrow{f^{\vee}}A^{\vee}\) in \(\mathcal{T}^{c}\). Since \(\mathcal{T}\) is rigidly-compactly generated, we have that \((-)^{\vee}\) is an anti-equivalence \((\mathcal{T}^{c})^{\mathrm{op}}\simeq\mathcal{T}^{c}\) with \((-)^{\vee}\) naturally equivalent to the identity functor on \(\mathcal{T}^{c}\) (see [51, 1.4]). We also apply these notations to arbitary objects and morphisms of \(\mathcal{T}\). Given a definable subcategory \(\mathcal{D}\) of \(\mathcal{T}\), with associated annihilator ideal \(\mathrm{Ann}(\mathcal{D})=\mathrm{Ann}_{\mathcal{T}^{c}}(\mathcal{D})\), we define its **internal dual** definable subcategory of \(\mathcal{T}\) to be \(\mathcal{D}^{\vee}=\mathrm{Ann}_{\mathcal{T}}(\mathrm{Ann}(\mathcal{D})^{ \vee})\), where we set \(\mathcal{A}^{\vee}=\{f^{\vee}:f\in\mathcal{A}\}\) for \(\mathcal{A}\) a collection of morphisms in \(\mathcal{T}^{c}\). **Proposition 4.2**.: _(mostly [52, SS7.1]) Suppose that \(\mathcal{T}\) is a rigidly-compactly generated tensor-triangulated category, let \(\mathcal{D}\) be a definable subcategory and consider its elementary dual definable subcategory \(\mathcal{D}^{\vee}\). Then \((\mathrm{Ann}(\mathcal{D}))^{\vee}\) is an annihilator ideal, \((\mathcal{D}^{\vee})^{\vee}=\mathcal{D}\) and we have the following:_ \(\mathrm{Ann}(\mathcal{D}^{\vee})=(\mathrm{Ann}(\mathcal{D})^{\vee}\)__ \(\mathrm{Div}(\mathcal{D}^{\vee})=(\mathcal{D}\text{-TF})^{\vee}\)__ \(\mathcal{D}^{\vee}\text{-TF}=(\mathrm{Div}(\mathcal{D}))^{\vee}\)_._ **Proof.** The proof is very similar to that of 3.21, using [18, SS7] to get the first statements. For the last two, consider \(f\in\mathrm{Ann}(\mathcal{D})\) and form the extended triangle \[\Sigma^{-1}B\xrightarrow{\Sigma^{-1}g}\Sigma^{-1}C\xrightarrow{\Sigma^{-1}h} A\xrightarrow{f}B\xrightarrow{g}C\xrightarrow{h}\Sigma A\] then dualise it: \[(\Sigma A)^{\vee}=\Sigma^{-1}A^{\vee}\xrightarrow{h^{\vee}}C^{\vee} \xrightarrow{g^{\vee}}B^{\vee}\xrightarrow{f^{\vee}}A^{\vee}\xrightarrow{ \Sigma\,h^{\vee}}\Sigma\,C^{\vee}\xrightarrow{\Sigma\,g^{\vee}}\Sigma\,B^{ \vee}.\] Then apply Equation (4) from Section 3.1. \(\quad\Box\) This internal duality can also be given by a duality operation on pp formulas and pp-pairs. This is defined exactly as one would expect from the abelian/modules case. Namely, if \(\phi(x_{B})\), being \(\exists x_{B^{\prime}}\,(x_{B}f=x_{B^{\prime}}f^{\prime})\) is a typical pp formula, where \(f:A\to B\) and \(f^{\prime}:A\to B^{\prime}\) are in \(\mathcal{T}^{c}\), then we define the **dual** pp formula, \(\phi^{\vee}(x_{B^{\vee}})\) to be \(\exists y_{A^{\vee}}(y_{A^{\vee}}f^{\prime}=x_{B^{\vee}}\wedge\ y_{A^{\vee}}f^{ \prime\vee}=0_{B^{\prime\vee}})\). In particular, the dual of the pp formula \(x_{B}f=0\), where \(f:A\to B\), is \(f^{\prime}|x_{B^{\vee}}\) and the dual of \(f^{\prime}|x_{B}\) is \(x_{B^{\vee}}f^{\prime\vee}=0\). The **dual** of a pp-pair \(\phi/\psi\) is then defined to be \(\psi^{\vee}/\phi^{\vee}\). Note that what we have defined here is an internal duality on pp formulas in the language for (right) \(\mathcal{T}^{c}\)-modules. There is a subtlety, which is pointed out in [52]. Namely, two pp formulas might be equivalent on \(\mathcal{T}\) - that is, have the same solution set on every object of \(\mathcal{T}\) - yet their duals might not be equivalent. Indeed, we might have pp formulas \(\phi\), \(\phi_{1}\) with \(\phi(X)=\phi_{1}(X)\) for every \(X\in\mathcal{T}\), yet with \(\phi^{\vee}(X)\neq\phi_{1}^{\vee}(X)\) perhaps even for every \(X\in\mathcal{T}\) since these might be definable subgroups of distinct sorts - see [52, Example 7.1.4]. Nevertheless \(\phi^{\vee}\) and \(\phi_{1}^{\vee}\) will define isomorphic coherent functors, meaning that the pairs \(\phi^{\vee}(x)/(x=0)\) and \(\phi_{1}^{\vee}(x_{1})/(x_{1}=0)\) will be isomorphic in the category \(\mathbb{L}(\mathcal{T})^{\mathrm{eq}+}\) of pp-imaginaries for \(\mathcal{T}\). More generally, if \(\phi/\psi\) is a pp-pair with \(\phi_{1}\) equivalent to \(\phi\) and \(\psi_{1}\) equivalent to \(\psi\), then the pp-pairs \(\psi^{\vee}/\phi^{\vee}\) and \(\psi_{1}^{\vee}/\phi_{1}^{\vee}\) might be distinct but they will be isomorphic; in particular for every \(X\in\mathcal{T}\), we will have \(\psi^{\vee}(X)/\phi^{\vee}(X)=0\) iff \(\psi_{1}^{\vee}(X)/\phi_{1}^{\vee}(X)=0\). That follows from [18, 7.4], cf. 3.22, indeed it follows that there is an induced anti-isomorphism of the category \(\mathbb{L}(\mathcal{T})^{\mathrm{eq}+}\) with itself. We give some more detail; see also [52, Chpt. 7]. Since we have a duality \((-)^{\vee}:(\mathcal{T}^{c})^{\mathrm{op}}\to\mathcal{T}^{c}\) we have, by [18, 7.4], an equivalence mod-\(\mathcal{T}^{c}\to\mathcal{T}^{c}\)-mod which is given by taking \(G_{f}=\mathrm{coker}((-,f):(-,A)\to(-,B))\), where \(f:A\to B\), to \(F_{f^{\vee}}=\mathrm{coker}((f^{\vee},-):(A^{\vee},-)\to(B^{\vee},-))\). We also have the duality \(\mathcal{T}^{c}\)-mod\((\simeq\mathrm{Coh}(\mathcal{T}))\to(\mathrm{mod}-\mathcal{T}^{c})^{\mathrm{op}}\) which takes \(F_{f^{\vee}}\) to \((F_{f^{\vee}})^{\circ}:C\mapsto(F_{f^{\vee}},(C,-))\) for \(C\in\mathcal{T}^{c}\). Composing these, we have a duality mod-\(\mathcal{T}^{c}\to\mathrm{mod}-\mathcal{T}^{c}\) which takes \(G_{f}\) to \((F_{f^{\vee}})^{\circ}\). In view of the exact sequence (3) \[0\to(F_{f^{\vee}})^{\circ}\to(-,B^{\vee})\xrightarrow{(-,f^{\vee})}(-,A^{ \vee})\to G_{f^{\vee}}\to 0\] we can formulate this as follows. **Proposition 4.3**.: _Suppose that \(\mathcal{T}\) is a rigidly-compactly generated tensor-triangulated category. Then there is a duality on mod-\(\mathcal{T}^{c}\) which is given on objects by \(G_{f}\mapsto\mathrm{ker}(-,f^{\vee})\), where \((-)^{\vee}\) is the duality on \(\mathcal{T}^{c}\)._ The next result follows directly from [11, 6.12] (also [2, 2.3] in the case \(\mathcal{T}=\mathcal{D}_{R}\), \(R\) commutative). **Proposition 4.4**.: _Suppose that \(\mathcal{T}\) is a rigidly-compactly generated tensor-triangulated category and let \(\mathcal{D}\) be a definable subcategory. Then the definable subcategory of \(\mathcal{T}\) generated by the collection of objects \(\{X^{\vee}:X\in\mathcal{D}\}\) is exactly the dual definable subcategory \(\mathcal{D}^{\vee}\)._ There is potential ambiguity in the notation \(\mathcal{D}^{\vee}\) - we have defined it to be the dual definable subcategory but it would also be a natural notation for \(\{X^{\vee}:X\in\mathcal{D}\}\) but the latter, a subclass of \(\mathcal{D}^{\vee}\), is not in general all of the definable category \(\mathcal{D}^{\vee}\). Tensor-closed definable subcategories are self-dual. **Theorem 4.5**.: _[_52_, 7.2.2]_ _If \(\mathcal{D}\) is a tensor-closed definable subcategory of a rigidly-compactly generated tensor-triangulated category, then \(\mathcal{D}\) is self-dual: \(\mathcal{D}^{\vee}=\mathcal{D}\)._ ### Internal Hom interpretation We finish by pointing out some more ideals of \(\mathcal{T}^{c}\) associated to a definable category \(\mathcal{D}\) in the rigidly-compactly generated tensor-triangulated context. They appear (along with their rather provisional names) in the statement of the next result. **Proposition 4.6**.: _Suppose that \(\mathcal{T}\) is a rigidly-compactly generated tensor-triangulated category and let \(\mathcal{X}\subset\mathcal{T}\). We define the_ **tensor-annihilator** _of \(\mathcal{X}\):_ \[\otimes\mathrm{\mbox{\rm-ann}}_{\mathcal{T}^{c}}\mathcal{X}=\{f:a\to b\in \mathcal{T}^{c}:f\otimes X=0:a\otimes X\to b\otimes X\ \forall X\in\mathcal{X}\},\] _the_ **internal-hom-annihilator** _of \(\mathcal{X}\):_ \[[\mathrm{ann}]_{\mathcal{T}^{c}}\mathcal{X}=\{f:a\to b\in\mathcal{T}^{c}:[f,X]=0:[ b,X]\to[a,X]\ \forall X\in\mathcal{X}\},\] _the_ **tensor phantomiser** _of \(\mathcal{X}\):_ \[\otimes\text{-}\text{\rm{phan}}_{\mathcal{T}^{\text{c}}}\mathcal{X}=\{f:a \to b\in\mathcal{T}^{\text{c}}:f\otimes X:a\otimes X\to b\otimes X\text{ is phantom }\forall X\in\mathcal{X}\},\] _and the_ **internal-hom-phantomiser** _of \(\mathcal{X}\):_ \[[\text{\rm{phan}}]_{\mathcal{T}^{\text{c}}}\mathcal{X}=\{f:a\to b\in\mathcal{ T}^{\text{c}}:[f,X]:[b,X]\rightarrow[a,X]\text{ is phantom }\forall X\in\mathcal{X}\}.\] _All these are ideals of \(\mathcal{T}^{\text{c}}\) and the tensor-annihilator and internal-hom-annihilator are dual ideals:_ \[(\otimes\text{-}\text{\rm{ann}}_{\mathcal{T}^{\text{c}}}\mathcal{X})^{\vee}=[ \text{\rm{ann}}]_{\mathcal{T}^{\text{c}}}\mathcal{X}.\] _Moreover, the tensor phantomiser and internal-hom-phantomiser coincide (we could call this the_ **phantomiser**_) and are equal to the annihilator ideal of the smallest tensor-closed definable subcategory \(\langle\mathcal{X}\rangle^{\otimes}\) containing \(\mathcal{X}\):_ \[\otimes\text{-}\text{\rm{phan}}_{\mathcal{T}^{\text{c}}}\mathcal{X}=[\text{ \rm{phan}}]_{\mathcal{T}^{\text{c}}}\mathcal{X}=\text{\rm{Ann}}_{\mathcal{T }^{\text{c}}}\langle\mathcal{X}\rangle^{\otimes}.\] _Therefore this is also the annihilator ideal generated by each of \(\otimes\text{-}\text{\rm{ann}}_{\mathcal{T}^{\text{c}}}\mathcal{X}\) and \([\text{\rm{ann}}]_{\mathcal{T}^{\text{c}}}\mathcal{X}\)._ **Proof.** For every \(X\in\mathcal{T}\), \(A\otimes X\xrightarrow{f\otimes X}B\otimes X\) is (isomorphic to) \(A^{\vee\vee}\otimes X\xrightarrow{f^{\vee\vee}}B^{\vee\vee}\otimes X\) and therefore is \([A^{\vee},X]\xrightarrow{[f^{\vee},X]}[B^{\vee},X]\). Thus, the condition \(f\otimes X=0:A\otimes X\to B\otimes X\) is equivalent to the condition \([f^{\vee},X]=0:[A^{\vee},X]\rightarrow[B^{\vee},X]\) and we have \(\otimes\text{-}\text{\rm{ann}}_{\mathcal{T}^{\text{c}}}\mathcal{X}=([\text{ \rm{ann}}]_{\mathcal{T}}\mathcal{X})^{\vee}\). For the other parts, we have \(f\in\otimes\text{-}\text{\rm{phan}}_{\mathcal{T}^{\text{c}}}\mathcal{X}\) iff for every \(c\in\mathcal{T}^{\text{c}}\) we have \((c,f\otimes X)=0\), that is \((f^{\vee},c^{\vee}\otimes X)=0\) which, since every compact object is a dual, is equivalent to \(f^{\vee}\in\otimes\text{-}\text{\rm{ann}}_{\mathcal{T}^{\text{c}}}\langle \mathcal{X}\rangle^{\otimes}\). By 4.5, \(f^{\vee}\in\otimes\text{-}\text{\rm{ann}}_{\mathcal{T}^{\text{c}}}\langle \mathcal{X}\rangle^{\otimes}\) iff \(f\in\otimes\text{-}\text{\rm{ann}}_{\mathcal{T}^{\text{c}}}\langle\mathcal{X} \rangle^{\otimes}\). Therefore \(\otimes\text{-}\text{\rm{phan}}_{\mathcal{T}^{\text{c}}}\mathcal{X}=\otimes \text{-}\text{\rm{ann}}_{\mathcal{T}^{\text{c}}}\langle\mathcal{X}\rangle^{ \otimes}\). Also, \(f\in[\text{\rm{phan}}]_{\mathcal{T}^{\text{c}}}\mathcal{X}\) iff for every \(c\in\mathcal{T}^{\text{c}}\) we have \((c,[f,X])=0\), equivalently \((f,c^{\vee}\otimes X)=0\) which, since every compact object is a dual, is equivalent to \(f\in\text{\rm{ann}}_{\mathcal{T}^{\text{c}}}\langle\mathcal{X}\rangle^{\otimes}\). Therefore \([\text{\rm{phan}}]_{\mathcal{T}^{\text{c}}}\mathcal{X}=\text{\rm{ann}}_{ \mathcal{T}^{\text{c}}}\langle\mathcal{X}\rangle^{\otimes}=\otimes\text{-} \text{\rm{phan}}_{\mathcal{T}^{\text{c}}}\mathcal{X}\), as claimed. \(\Box\) Note that the condition \(f^{\vee}\in[\text{\rm{ann}}]_{\mathcal{T}}\mathcal{X}\) is expressed by the condition "\(Xf^{\vee}=0\)" with \(B^{\vee}\xrightarrow{f^{\vee}}A^{\vee}\). This looks like an annihilator sentence but it is for internal hom, rather than actual hom, groups. This suggests an alternative, internal-hom, interpretation of the model-theoretic language, remarked at 2.1, when \(\mathcal{T}\) is a rigidly-compactly generated tensor-triangulated category. In this interpretation the value of \(X\in\mathcal{T}\) at sort \(A\in\mathcal{T}^{\text{c}}\) is \([A,X]\), rather than \((A,X)\), and the interpretation of \(A\xrightarrow{f}B\in\mathcal{T}^{\text{c}}\) in \(X\) is \([f,X]:[B,X]\rightarrow[A,X]\) rather than \((f,X):(B,X)\rightarrow(A,X)\). In this interpretation of the language the values of sorts at objects of \(\mathcal{T}\) are again objects of \(\mathcal{T}\), not abelian groups. This also constitutes an alternative "internal restricted Yoneda" functor from \(\mathcal{T}\) to the "\(\mathcal{T}\)-valued-module category" \(\text{\rm{Mod}}_{\mathcal{T}}\mathcal{T}^{\text{c}}=((\mathcal{T}^{\text{c}})^ {\text{op}},\mathcal{T})\), which takes \(X\in\mathcal{T}\) to the functor \([-,X]:(\mathcal{T}^{\text{c}})^{\text{op}}\rightarrow\mathcal{T}\) and takes \(f:X\to Y\) to \([-,f]:[-,X]\rightarrow[-,Y]\). In this internal-hom interpretation, the language for \(\mathcal{T}\) stays the same but the interpretation has changed: instead of \((-,X)\) we use \([-,X]\). Similarly, the tensor-annihilator that we defined above belongs to a third (in this case, covariant) interpretation of the same language, based on \(-\otimes X\), rather than \((-,X)\) or \([-,X]\). In both these new interpretations the sorts belong to \(\mathcal{T}\) rather than to \(\mathbf{Ab}\), so we cannot immediately make sense of "elements" of a sort. But, using the idea of an "element" being an arrow from the tensor-unit \(\mathbb{1}\), we can move back to the category of \(\mathcal{T}^{\text{c}}\)-modules. If we do that, we recover the usual interpretation (from the internal-hom interpretation) and an 'internal dual' interpretation (from the tensor interpretation). That is, we have: \[y:\mathcal{T}\rightarrow\text{\rm{Mod}}\text{-}\mathcal{T}^{\text{c}}\text{ given by }X\mapsto(-,X);\] \[[y]:\mathcal{T}\rightarrow((\mathcal{T}^{\text{c}})^{\text{op}},\mathcal{T}) \text{ given by }X\mapsto[-,X];\] \[\epsilon:\mathcal{T}\rightarrow(\mathcal{T}^{\text{c}},\mathcal{T})\text{ given by }X\mapsto(-\otimes X).\] The latter two can then be composed with \((\mathbb{1},-)\): \[(\mathbb{1},-)|y|=y:\mathcal{T}\to((\mathcal{T}^{\rm c})^{\rm op},\mathcal{T})\to \text{Mod-}\mathcal{T}^{\rm c}\] \[\text{given by }X\mapsto[-,X]\mapsto(\mathbb{1},[-,X])\simeq(-,X);\] and \[(\mathbb{1},-)\epsilon:\mathcal{T}\to(\mathcal{T}^{\rm c},\mathcal{T})\to \mathcal{T}^{\rm c}\text{-Mod}\] \[\text{given by }X\mapsto(-\otimes X)\mapsto(\mathbb{1},-\otimes X)\simeq( \mathbb{1},[(-)^{\vee},X])\simeq((-)^{\vee},X)\] Also, essentially following [12, 4.13], note that if \(A\in\mathcal{T}^{\rm c}\) and \(X\in\mathcal{T}\), then \([A,X]=0\) iff, for all \(C\in\mathcal{T}^{\rm c}\), we have \((C,[A,X])=0\) iff, for all \(C\in\mathcal{T}^{\rm c}\), we have \((C\otimes A,X)=0\). In particular \[\{N\in\text{Zg}(\mathcal{T}):[A,N]=0\}=\bigcap_{C\in\mathcal{T}^{\rm c}}\{N \in\text{Zg}(\mathcal{T}):(C\otimes A,N)=0\}\] is an intersection of Ziegler-closed sets, hence is itself Ziegler-closed. Furthermore, continuing the above computation, we have \([A,X]=0\) iff, for all \(C\in\mathcal{T}^{\rm c}\), we have \((A\otimes C,X)=0\) iff, for all \(C\in\mathcal{T}^{\rm c}\), we have \((A,[C,X])=0\) iff, for all \(C\in\mathcal{T}^{\rm c}\), we have \((A,C^{\vee}\otimes X)=0\), iff, for all \(C\in\mathcal{T}^{\rm c}\), we have \((A,C\otimes X)=0\). So if \(\mathcal{D}\) is the definable subcategory of \(\mathcal{T}\) cut out by the condition \((A,-)=0\), then the condition \([A,-]=0\) cuts out the smallest tensor-closed definable subcategory of \(\mathcal{T}\) containing \(\mathcal{D}\).
2303.04823
Pulse-controlled qubit in semiconductor double quantum dots
We present a numerically-optimized multipulse framework for the quantum control of a single-electron charge qubit. Our framework defines a set of pulse sequences, necessary for the manipulation of the ideal qubit basis, that avoids errors associated with excitations outside the computational subspace. A novel control scheme manipulates the qubit adiabatically, while also retaining high speed and ability to perform a general single-qubit rotation. This basis generates spatially localized logical qubit states, making readout straightforward. We consider experimentally realistic semiconductor qubits with finite pulse rise and fall times and determine the fastest pulse sequence yielding the highest fidelity. We show that our protocol leads to improved control of a qubit. We present simulations of a double quantum dot in a semiconductor device to visualize and verify our protocol. These results can be generalized to other physical systems since they depend only on pulse rise and fall times and the energy gap between the two lowest eigenstates.
Aleksander Lasek, Hugo V. Lepage, Kexin Zhang, Thierry Ferrus, Crispin H. W. Barnes
2023-03-08T19:00:02Z
http://arxiv.org/abs/2303.04823v1
# Pulse-controlled qubit in semiconductor double quantum dots ###### Abstract We present a numerically-optimized multipulse framework for the quantum control of a single-electron charge qubit. Our framework defines a set of pulse sequences, necessary for the manipulation of the ideal qubit basis, that avoids errors associated with excitations outside the computational subspace. A novel control scheme manipulates the qubit adiabatically, while also retaining high speed and ability to perform a general single-qubit rotation. This basis generates spatially localized logical qubit states, making readout straightforward. We consider experimentally realistic semiconductor qubits with finite pulse rise and fall times and determine the fastest pulse sequence yielding the highest fidelity. We show that our protocol leads to improved control of a qubit. We present simulations of a double quantum dot in a semiconductor device to visualize and verify our protocol. These results can be generalized to other physical systems since they depend only on pulse rise and fall times and the energy gap between the two lowest eigenstates. ## I Introduction Accurate qubit control must maximize the probability that a qubit will remain in its computational basis. In this work, we develop a numerically-optimized method for general control of a single qubit, accounting for finite control pulse rise time and potential imperfections. We demonstrate that in a double quantum dot (DQD) structure with two gates, where the pulses and the DQD potential are imperfect, a reliable qubit can still be defined and operated with high fidelity, using experimentally realistic parameters. We achieve fast operations that are independent of the initial state, and do not induce excitations beyond the computational basis. While our framework is generic, we demonstrate its usefulness on semiconductor DQD charge qubits, owing to their usefulness and ubiquity. Semiconductor devices are attractive candidates for qubit hardware owing to their high compatibility with current industrial standards. They also benefit from decades in advances in processing and device integration that render processing costs low [1]. Progress in fabrication and measurement techniques have led to extended coherence times and more precise and faster electronics, both for qubit control and readout, paving the way to scalability [2; 3; 4; 5; 6]. Within previous suggested architectures, DQDs offer a straightforward way of producing both charge and spin qubits [7; 8]. For reading the qubit state, a charge detector [9] or even dispersive readout [10; 11; 12; 13] can be used. These detection methods are both achievable experimentally with great accuracy and speed owing to the improvement of charge detection sensitivity [14]. We thus use experimentally realistic parameters based on a semiconductor architecture. However, the results can be easily generalized to any qubit with a similar Hamiltonian form. In this paper, we first model a effective potential for a generic DQD system to define the qubit basis states as bonding and anti-bonding states, in Section II. In section III, we show how to initialize a single electron into one of the logical qubit basis states and how to perform a set of mutually orthogonal rotations on the Bloch sphere, thus an arbitrary rotation, using shaped pulses that correct for pulse rise time. We develop a pulsing scheme that is capable of generating time-optimized general unitary rotations despite imperfections, using only the voltage across the gate. We finally consider noise (section III.6), discuss our results (section IV), and give conclusions on the practicability of the scheme (section V). ## II Single-electron charge qubit definition It is generally assumed either for simplicity or ease of experimental manipulations that the logic basis-state wave functions of a DQD, \(|0\rangle\) and \(|1\rangle\), are fully localised in the left or right side of the DQD [2; 15; 16]. This assumption is convenient for "brute force" initialisation via applying a high bias voltage. The readout is also simple, realised by measuring the probability of the electron being in the left or right dot. However, in this case, quantum states necessarily contain contributions from higher energy eigenstates which give rise to additional composite oscillations, typically on timescales faster than the qubit oscillation itself [17]. They ultimately induce a loss of fidelity in gate operations. This issue is critical for practical implementations of quantum computation and schemes like bang-bang pulse sequences have been proposed in order to mitigate this effect [18]. Such sequences involve additional gate operations that could be detrimental to the overall operation time. Consequently, optimizing the qubit basis states is a necessary preliminary requirement before any other attempts at extending coherence or improving the gate fidelity. If a linear combination of the two lowest eigenstates of the DQD system is used instead of assuming a fully localized state, a true two-level system is formed. A qubit control framework that doesn't involve energy states outside of the computational space would greatly improve the fidelity compared to the method above. It is optimal to define the qubit states as equal combinations of the ground and first excited states at zero bias, because it produces well-localized qubits that can be measured while also preserving symmetry between the two logical states. This is demonstrated within the two-site localised state model [19] (App. B). A zero-bias potential also makes the qubit first-order insensitive to electrical noise, improving fidelity [20]. Moreover, as described in section III, having zero detuning as a default achieves a high fidelity \(R_{\vec{x}}\) rotation without any pulsing. The coefficients of the energy eigenstates must be equal in order to have symmetry between the qubit states. Therefore, for a given DQD potential \(V_{\text{DQD}}(x)\) (App. B), we define the logical states as: \[\begin{split}|0\rangle&=\frac{\psi^{\text{B}}(x)+ \psi^{\text{AB}}(x)}{\sqrt{2}},\\ |1\rangle&=\frac{\psi^{\text{B}}(x)-\psi^{\text{AB}}( x)}{\sqrt{2}},\end{split} \tag{1}\] where \(\psi^{\text{(A)B}}(x)\) is the (anti)bonding state wave function. While these states are not completely localized on a single dot, as their probability density is tailing to the other side of the dot (Fig. 1), they do maximize the average probability of successful readout [19]. Further localization of the states would introduce higher-energy states that would consequently not obey the ideal two-site Hamiltonian we aim to model (Eq. 3). Since there is no reference to the underlying effective potential of the DQD in our definition, this qubit is well defined for potentials that are not symmetric and more generally, for any dot shape. ## III Single qubit control In the energy eigenbasis, the Hamiltonian of the qubit system reads: \[\hat{H}(t)=-\frac{1}{2}\,\epsilon(t)\,\sigma_{x}+\frac{1}{2}\,\Delta\,\sigma_{ z}+\frac{1}{2}(E_{\text{B}}+E_{\text{AB}}). \tag{2}\] Here \(E_{\text{B}}\) and \(E_{\text{AB}}\) are the energies of the bonding and antibonding states, i.e. the two lowest energy states, at a linear detuning \(\epsilon\) = 0, \(\Delta\) is the "hybridisation energy" between the two localised states, and \(\sigma_{x/z}\) are the Pauli \(x/z\) matrices. Using the basis defined in Eq. 1, where \(|0\rangle\) and \(|1\rangle\) are on the poles of a Bloch sphere, the Hamiltonian in Eq. 2 is written as: \[\hat{H}_{\text{eff}}(t)=-\frac{1}{2}\,\epsilon(t)\,\sigma_{z}+\frac{1}{2}\, \Delta\,\sigma_{x}. \tag{3}\] We have neglected the constant factor here. The time-dependent wave function can then be written in terms of the standard \(\theta\) and \(\phi\), polar and azimuthal angles respectively, on the Bloch sphere: \[\psi(x,t)=\cos\!\left(\frac{\theta(t)}{2}\right)|0\rangle+e^{i\phi(t)}\sin\! \left(\frac{\theta(t)}{2}\right)|1\rangle\,. \tag{4}\] With no bias voltage, \(\epsilon=0\), the wave function will undergo a constant rotation around the \(z\)-axis on the Bloch sphere. When applying a non-zero bias, the axis of rotation is shifted. For the Hamiltonian in Eq. 3, a general rotation on the Bloch sphere by an angle \(\alpha\) around a direction \(\vec{n}\) is given by the solution to the time-dependent Schrodinger equation (TDSE): \[R_{\vec{n}}(\alpha(t))=\mathscr{T}\exp\left(\frac{1}{i\hbar}\int_{0}^{t}\hat{ H}_{\text{eff}}(t^{\prime})\text{d}t^{\prime}\right) \tag{5}\] where \(\mathscr{T}\) is the time-ordering operator. Rotations are performed by sending a bias voltage pulse of amplitude \(V_{\text{bias}}=\frac{\epsilon}{e\lambda}\) and duration \(t_{p}\) to the double dot where \(\epsilon\) and \(\lambda\) are respectively the detuning and voltage amplitude proportionality constant for a given potential. An instantaneous switch between the \(V_{\text{bias}}=0\) and \(V_{\text{bias}}=\frac{\epsilon}{e^{\lambda}}\) bias states is generally preferred as this simplifies the dynamics and avoids spurious qubit rotations [21]. In this case, the detuning \(\epsilon(t)\) is described as a set of step-functions and \(R_{\vec{n}}(\alpha)\) is expressed analytically as a rotation of the qubit state around the axis on the Bloch sphere which passes through the eigenstates of \(H(t^{\prime})\) at a rate proportional to the difference in energy of these two eigenstates. Such a pulse requires a linear potential along the axis of the double dot, as in Eq. 2, which is achieved by applying voltages to a set of metallic surface gates. During a square-wave pulse of detuning \(\epsilon\) Figure 1: Wave function of the two first excited states. The logical \(|0\rangle\) and \(|1\rangle\) qubits are formed using Eq. 1. The values of the DQD spacing and the electrostatic potential amplitude were chosen for illustrative purposes and the scheme presented here works for a wide range of configurations. the system will evolve according to the Hamiltonian in Eq. 3, that will be constant during the on-time of the pulse, giving an unitary time evolution (rotation): \[U(t)=R_{\vec{n}}(\alpha(t))=\exp\left(-i\frac{\vec{n}\cdot\vec{\sigma}}{2\hbar}t \right), \tag{6}\] where \(\vec{n}=(\Delta,0,\epsilon)\) is the axis of rotation, with rotation frequency given by its magnitude. Implementing such a square pulse isn't technically possible owing to practical limitations. Current and most commonly used pulse pattern generators have a built-in rise time \(\tau\) of about 40ps to 500ps depending on the brand and characteristics. The Keysight 81134A Pulse Pattern Generator has a \(\tau=\)60ps between 20% and 80% of target amplitude. The Agilent 81130A and the Anritsu MP1763C have \(\tau=\) 500ps and \(\sim\) 40ps respectively, both between 10% and 90% of target amplitude. (Fig. 2a). In this case, the step-function decomposition is not possible and, in general, Eq. 5 must be solved numerically. If the detuning can be described in terms of linear ramp functions, then Eq. 5 can be written analytically as a Landau-Zener-Stuckelberg transition [22; 23; 24] but the resulting expression becomes a function of parabolic cylinder functions which makes understanding the rotation \(R_{\vec{n}}(\alpha)\) more complex [25; 26]. In order to investigate the consequences that follow from this technical limitation, we have solved Eq. 5 numerically for a pulse with finite \(\tau\) using a GPU-accelerated version of the staggered-leapfrog method [27; 28; 29; 30; 31] (see App. D). For such a pulse, the path of an individual qubit state on the Bloch sphere during the time evolution in Eq. 5 differs from the one induced by a square pulse [32] (Fig. 3). In order to implement a high-fidelity rotation on the Bloch sphere, an effective \(R_{\vec{n}}(\alpha)\) is found by accounting for the aforementioned equipment limitations, such that the path traced on the Bloch sphere is different, but the resulting rotation remains the same as one induced by a perfect square pulse. We find that this can always be done by tuning the pulse duration and amplitude, depending on \(\tau\) and desired angle of rotation. The details of this correction are outlined in III.4. One can question whether such an adjusted operation including transient rotations is a proper rotation, i.e. independent of the initial state. The answer is yes, because while the precise path on the Bloch sphere may be difficult to describe analytically, the instantaneous Hamiltonian is still always expressed in terms of \(\sigma_{x}\) and \(\sigma_{z}\) matrices, therefore the effective operation is composed of rotations and is itself an actual rotation. We show that our pulses have the desired effect on any input state (section III.5). Additionally, it is worth noting that having a finite \(\tau\) can have a desirable effect on the qubit, as it make the pulsing operation more adiabatic compared to using square pulses. ### General rotation scheme To perform an arbitrary qubit rotation, we propose a scheme of concatenating square pulses of alternating amplitudes. We set the bias voltage to produce a detuning \(\epsilon=\pm\Delta\), which gives the axes of rotation during pulsing to be in directions \((\frac{1}{\sqrt{2}},0,\pm\frac{1}{\sqrt{2}})\) on the Bloch sphere. We will call these axes \(\vec{z}^{\prime}\) (\(\frac{1}{\sqrt{2}},0,\frac{1}{\sqrt{2}}\)) and \(\vec{x}^{\prime}\) (\(\frac{1}{\sqrt{2}},0,-\frac{1}{\sqrt{2}}\)) respectively, as they are both rotated by \(\frac{\pi}{4}\) around \(\vec{y}\) w.r.t. the usual \(\vec{z},\vec{x}\) axis of the Bloch sphere. An arbitrary rotation can be performed by combining up to five rotations around any two perpendicular axes, simply by aligning \(\vec{x}^{\prime}\) with the desired axis of rotation \(\vec{n}\), performing the rotation, and then reversing the first step. An arbitrary rotation by angle \(\alpha\) can thus be performed around axis \(\vec{n}\) in the following way: \[\begin{split} R_{\vec{n}}(\alpha)=& R_{\vec{x}^{ \prime}}\left(\frac{\pi}{2}-\phi\right)R_{\vec{x}^{\prime}}(\theta)R_{\vec{x} ^{\prime}}(\alpha)\\ &\cdot R_{\vec{z}^{\prime}}(-\theta)R_{\vec{x}^{\prime}}\left[- \left(\frac{\pi}{2}-\phi\right)\right],\end{split} \tag{7}\] where \(\theta,\phi\) are the angles of \(R_{\vec{y}}(\frac{\pi}{4})\vec{n}\) on the Bloch sphere. The argument angles of the composite rotations correspond to durations of the composite pulses, with \(2\pi\) corresponding to \(T_{\text{rot}}=\frac{2\pi\hbar}{\sqrt{2^{2}+\epsilon^{2}}}\), the period of a full rotation around \(\vec{x}^{\prime}\) or \(\vec{z}^{\prime}\) while bias voltage is on. Since the rotation around \(\vec{x}^{\prime}\) or \(\vec{z}^{\prime}\) is always in the positive direction, Figure 2: **(a)** Amplitude profile of the ideal square pulse (solid blue line) to apply the linear bias given by Eq. 6. The pulse amplitude and duration are adjusted when the rise time \(\tau\) is finite. Values of time (pulse duration, rise time) and voltage (pulse amplitude) are given for illustration purposes only. **(b)** Multiplicative amplitude adjustment factor \(\xi\) given a target rotation angle \(\theta\). Each coloured line corresponds to a different \(\tau\) (see legend). **(c)** Additive pulse duration adjustment \(\Delta T\) with respect to the original square pulse time (see panel **(a)**). The rise times are not included in the additional pulse duration. Each coloured line corresponds to a different \(\tau\) (see legend of **b**). any negative angles have to be replaced by a positive complement of \(2\pi\). Eq. 7 is simple to implement, but not optimal in operation time - it is known [33] that three rotations are sufficient, which should result is a faster operation: \[R_{\vec{n}}(\alpha)=e^{i\beta}R_{\vec{x}^{\prime}}(\Theta_{1})R_{\vec{x}^{ \prime}}(\Theta_{2})R_{\vec{x}^{\prime}}(\Theta_{3}). \tag{8}\] Here, \(\Theta_{1}\), \(\Theta_{2}\) and \(\Theta_{3}\) each depend on the angle and axis of the rotation. ### State preparation Before any quantum computation is performed, each qubit has to be initialized to a fiducial state, usually \(\ket{0}\) or \(\ket{1}\). For a generic operation involving a charge qubit, we would expect the initial state of the electron to be the ground state of the DQD (see Fig. 1). Such a state is not part of the qubit's logical basis and an initial rotation is needed. In order to rotate the wave function from the ground energy eigenstate to the qubit \(\ket{0}\) state, we can take advantage of knowing the initial state to simplify the operation. A \(R_{\vec{x}^{\prime}}\left(\pi\right)\) rotation will initialise to the \(\ket{0}\) state, while a \(R_{\vec{x}^{\prime}}\left(\pi\right)\) will do so to the \(\ket{1}\) state. Both are achieved with a single pulse, thus simplifying the initial state preparation. ### Single axis rotations Any single-qubit operation can be expressed in terms of rotations around two perpendicular axes. Here we provide the control sequence for rotations around the usual \(\vec{x},\vec{y},\vec{z}\) Bloch sphere axes from an arbitrary point on the Bloch sphere. The \(R_{\vec{y}}\) rotation consists of only 3 pulses owing to angle cancellation in Eq. 7 (as \(\phi=\frac{\pi}{2}\)): \[R_{\vec{y}}(\alpha)=R_{\vec{x}^{\prime}}\left(\frac{\pi}{2}\right)R_{\vec{x}^ {\prime}}\left(\alpha\right)R_{\vec{x}^{\prime}}\left(\frac{3\pi}{2}\right). \tag{9}\] To rotate in the opposite direction, one simply has to invert this pulse (swap \(\vec{x}^{\prime}\) and \(\vec{z}^{\prime}\)) to get: \[R_{\vec{y}}(-\alpha)=R_{\vec{x}^{\prime}}\left(\frac{\pi}{2}\right)R_{\vec{x} ^{\prime}}\left(\alpha\right)R_{\vec{x}^{\prime}}\left(\frac{3\pi}{2}\right). \tag{10}\] \(R_{z}\) and \(R_{x}\) rotations would require five pulses if done as per Eq. 7. Instead, we solve Eq. 8 for the angles to also perform them with just three pulses. Owing to symmetry, the first rotation is the same as the third one. Detailed derivation is presented in App. C. \[R_{\vec{x}/\vec{z}}(\alpha)=R_{\vec{x}^{\prime}}\left(\Theta_{1}\right)R_{ \vec{x}^{\prime}}\left(\Theta_{2}\right)R_{\vec{x}^{\prime}}\left(\Theta_{1} \right), \tag{11}\] where \[\Theta_{1}=\arccos\left(\frac{\sqrt{2}\cos\frac{\alpha}{2}}{\sqrt{\cos\left( \frac{\alpha}{2}\right)^{2}+1}}\right), \tag{12}\] Figure 3: Example pulse sequences and associated qubit rotations. Top: rotation path on the Bloch sphere. Bottom: Optimized pulse sequence where \(T_{x}=\frac{2\pi\hbar}{\Delta}\). To remain general, values of time and voltage are quoted as fractions of \(T_{x}\) and \(\frac{\Delta}{e\hbar}\) respectively. Exact experimental values will vary from one setup to the other. See the discussion for more details. All pulse sequences lead to a final state with a fidelity of \(>\)99.99%. Furthermore, the same pulse sequence can be used for any initial state on the Bloch sphere (see section III.5) without significant loss in fidelity. \[\Theta_{2}=2\arctan\left(\sin\Theta_{1}\right) \tag{13}\] for \(R_{\vec{x}}\), and \[\Theta_{2}=2\left(\pi-\arctan\left(\sin\Theta_{1}\right)\right) \tag{14}\] for \(R_{\vec{z}}\). Additionally, we note that \(-R_{\vec{z}}(\alpha)=R_{\vec{z}}(-\alpha)\), which allows us to shorten operation time for rotations with \(\alpha\geq\frac{\pi}{2}\) by inverting the pulse profile to perform the complementary rotation instead. We note that one of the effects of defining the qubit as in Eq. 1 is that \(R_{\vec{x}}\) rotation will occur automatically due to the Hamiltonian, with the rotation period \(T_{x}=\frac{2\pi\hbar}{\Delta}\). In the many qubit case, all the qubits rotate at their respective frequencies, and one would usually work in the rotating basis, therefore an \(R_{\vec{x}}\) rotation still needs to be performed as per Eq. 11. Instead of using the usual \(\vec{x},\vec{y},\vec{z}\) basis, we can instead use the \(\vec{x}^{\prime},\vec{y},\vec{z}^{\prime}\) basis which is more natural for the detuned system, and can be used to define logic gates with fewer pulses. A single \(R_{\vec{y}}(\frac{\pi}{4})\) rotation is required to move into this basis. \(R_{\vec{x}^{\prime}},R_{\vec{z}^{\prime}}\) are then achieved with a single pulse, while \(R_{\vec{y}}\) requires three, as in Eq. 9. This way, any computation can be performed in the rotated basis, where operations are quicker. At the end, one would need to rotate back to \(\vec{x},\vec{y},\vec{z}\) using a \(R_{\vec{y}}(-\frac{\pi}{4})\) rotation, for optimal readout of localised states. Some logic gate examples are: \[X=R_{\vec{x}^{\prime}}(\pi), \tag{15}\] \[Y=R_{\vec{y}}(\pi), \tag{16}\] \[Z=R_{\vec{x}^{\prime}}(\pi), \tag{17}\] \[H=R_{\vec{y}}(\frac{\pi}{2})R_{\vec{x}^{\prime}}(\pi), \tag{18}\] \[R_{\phi}=R_{\vec{x}^{\prime}}(\phi). \tag{19}\] ### Correcting for rise time To account for the actual experimentally realisable pulses not being square due to rise time and limited bandwidth, the bias voltage and pulse duration have to be adjusted. This adjustment depends on the target rotation angle and \(\tau\), but not on the input state. Therefore, it is sufficient to optimize a single pulse for the instrument rise time and range of desired rotations - these single pulses can then be concatenated into three-pulse trains to achieve arbitrary qubit rotations of high fidelity. Here we numerically find the correct adjustments. This allows experimentalists to apply the ideal control sequence by simply changing the amplitude and duration of each square pulse in the train, avoiding complicated pulse shapes while retaining high fidelity. We present the numerical results for required amplitude \(\xi\) and pulse duration \(\Delta T\) adjustments, depending on \(\tau\) and angle of rotation \(\alpha\), all expressed in terms of the physical system parameters. Here, \(\xi\) is a multiplicative factor adjusting the amplitude with respect to the square pulse amplitude (\(\xi=1\)), and \(\Delta T\) is the additive time adjustment with respect to the square pulse duration as well, as per Fig. 2 (a) - it is always greater than zero. We use generalised rise times expressed in terms of a fraction of generalised time \(T_{x}\) (period of a full rotation without any pulsing), as seen in the legend of Fig. 2. We have chosen these values to correspond to minimum possible rotation angles of \(\frac{\pi}{6},\frac{\pi}{6},\frac{\pi}{4},\frac{\pi}{3},\frac{\pi}{2}\), from shortest to longest. These are the minimum possible rotations, because they are given by a pulse that consists only of rising/falling time, with no flat top, and is therefore the shortest pulse of desired amplitude that is possible. Of course, it is still be possible to rotate by an arbitrarily small angle indirectly by adding a \(2\pi\) rotation. As can be seen in Fig. 2 (b,c) the required time adjustment rises exponentially with desired rotation angle. Therefore, it is optimal to compose any pulse of the smallest possible rotations, as this will result in shorter overall rotation time. If the target rotation angle does not subdivide into an integer number of shortest possible rotations, one needs to use somewhat longer sub-pulses appropriately. Assuming a sine-shaped rise ramp, this short pulse is a sine wave, which is straightforward to generate experimentally. Single qubit control can be achieved by sending sine waves, with frequency as high as experimentally possible, and amplitude given by \(\xi\) in Fig. 2 (b). Note that the only system-specific quantity is the energy gap \(\Delta\) - the signal frequency is independent of the qubit system and not resonant with the two-level system, and instead purely defined by experimental limitations of the equipment (\(\tau>0\)). We present examples of rotations performed with this scheme in Fig. 3, which summarises our main results. As the resulting fidelity varies significantly with even small deviations from the parameters found here, we find that trying to fit analytical expressions to the data is not very useful if high fidelity is required. While \(\Delta T\) as a function of rotation angle \(\theta\) seems to be an exponential, while \(\xi\) is a rotated S-curve, attempts to fit it results with unacceptably low fidelity for a large \(\theta\) range. Therefore, we suggest the gradient ascent search procedure described here be performed for the system of interest, taking into account the specificity of the experimental setup. This could be done using numerical simulations like in this work, or directly by taking actual measurements in an experiment. However, the latter might not be practical, as we find that thousands of fidelity evaluations are necessary to find good enough adjustment parameter values. If significant measurement error is present, the required number of experimental runs necessary might not be possible to realise, further highlighting the need for numerical simulations. Pseudo-code of the gradient ascent procedure is provided in App. E - it should enable anyone to find the optimal parameters in a general case, for rise time and angles that are required. ### Fidelity as a function of initial state The error in fidelity is found using 1-Fidelity, where fidelity is the overlap between the target state and the iterated state. Although some variation in fidelity is dependent on the initial state of the electron, any errors are below \(10^{-4}\), and as low as \(10^{-8}\) for some initial positions. This error could be reduced further if necessary by fine-tuning the adjustment parameters \(\xi,\Delta T\). Figures 4, 5, and 6 show a fidelity map for the \(R_{x}\), \(R_{y}\) and \(R_{z}\) rotations respectively, as a function of Bloch sphere angles \(\theta,\phi\). A rotation angle of \(\pi\) was chosen in each case, but the results are similar for all angles. Each plot corresponds to 500 simulations of the rotation starting from different initial states equally distributed over the Bloch sphere. ### Noise Noise is an important source of loss of fidelity in any qubit platform. If unaccounted for, the randomness of noise will lead to gradual loss of quantum information during a computation. While noise mitigation is not the goal of this work, we nonetheless investigate its impact here for completeness. In a quantum-dot-based charge qubit we base our simulations on, charge noise is one of the main noise sources. It arises from fluctuations of charge states that lead to fluctuations of electric field a qubit experiences [34]. Here, we use a simple model where the charge noise is low-frequency and can be assumed to be constant during a single quantum operation [35]. In practice, this could result from some charge trapped temporarily on one side of the DQD, imparting an electric field gradient, effectively adding an unwanted random bias voltage. Therefore, to calculate the resulting fidelity loss, we average the resulting fidelity from many simulations, each with a random amplitude. The effective Hamiltonian has an additional noise term: \[\hat{H}_{\rm noise}(t)=-\frac{1}{2}\left[\epsilon(t)+\delta_{\rm noise}\right] \sigma_{x}+\frac{1}{2}\,\Delta\,\sigma_{z}, \tag{20}\] where \(\delta_{\rm noise}\) is the noise amplitude randomly drawn from a normal distribution with mean \(\mu=0\) and standard deviation \(\sigma_{\rm noise}\), which quantifies noise strength. A large number (order of 100) of simulations are run with this randomised noise for some example operations, and the effects of this noise are compared between a square wave, and adjusted pulses accounting for rise time that are the result of this work. The random number generator seed is the same for both cases, so that they experience exactly the same noise and thus can be compared fairly. We find that the effects of charge noise on \(R_{\vec{y}}\) and \(R_{\vec{z}}\) rotations are not affected by our pulsing method. This is to be expected, as the pulse was not designed with noise in mind. At the very least, we confirm that our proposed pulse is not any worse than an idealised square wave, and further error mitigation techniques can be applied to it, as they would be to a square pulse, without it causing any loss of fidelity, while the problems associated with rise time are solved. However, we find that there is a subset of cases where our pulse sequence does produce a reduction in noise-related errors. When performing an \(R_{\vec{x}}\) rotation, it is possible to sub-divide the pulse into further smaller sub-pulses that add up to the total angle of rotation \(\theta\). This is only possible when the rise time constraint allows for such a division, as there will exist a minimum angle that you cannot subdivide further. For the \(R_{\vec{x}}\) rotation however, this method doesn't work well, as the angle \(\Theta_{2}\) is always relatively large, even for small total rotation angle \(\alpha\). Therefore, attempting to subdivide a larger rotation would result in a very long total operation time, as the total angle that needs to be rotated is no longer (approximately) proportional to \(\alpha\). The case for \(R_{\vec{y}}\) suffers from similar issues as \(R_{\vec{z}}\), therefore one cannot use this optimisation by subdivision to improve resilience against noise. The dependence of total rotated angle (which approximately corresponds to total operation time) on the required rotation angle \(\alpha\) is different for \(R_{\vec{y}}\) and \(R_{\vec{z}}\) rotations, compared to \(R_{\vec{x}}\), therefore noise reduction occurs only in the latter. An example of noise reduction owing to subdivision into smaller pulses for an \(R_{\vec{x}}\) rotation is presented in Fig. 7. This beneficial effect of subdividing the pulse can be understood by investigating the pulse sequence that achieves the rotation. As seen in Fig. 8, which shows a pulse shape of a noise-reducing sequence, the oscillating nature of the pulse takes it from being negative to positive frequently. This will average out the influence of noise to a significant degree, while keeping the total operation time close to the one for an ideal square wave. Overall, we conclude that the control techniques presented here are at least as good in resisting noise as using a square wave, and can improve upon it under certain conditions. Therefore, they are suitable to replace the square wave, and to have further noise-reducing methods applied upon them, while they offset any errors due to rise time. The optimised \(R_{\vec{x}}\) rotation is able to mitigate Figure 4: Error in fidelity for an \(R_{x}(\pi)\) rotation as a projection (left) and on the surface of the Bloch Sphere (right). Figure 5: Error in fidelity for an \(R_{y}(\pi)\) rotation as a projection (left) and on the surface of the Bloch Sphere (right). Figure 6: Error in fidelity for an \(R_{z}(\pi)\) rotation as a projection (left) and on the surface of the Bloch Sphere (right). charge noise up to almost threefold in the fidelity error (this gain increases with noise strength), given that rise time \(\tau\) enables one to perform multiple smaller rotations that add up to a required total angle. ## IV Discussion When experimentally optimising qubit rotations, voltage pulses are usually considered as square while rise and fall times from instrument limitations and other filtering effects due to the finite bandwidth of coaxial cables are neglected. While the voltage is gradually rising to some intended amplitude, the qubit will undergo transient rotations, and will not reach the expected position on the Bloch sphere. These errors accumulate over long operations, leading to poor fidelity. Moreover, applying very sharp pulses of high amplitude, with the intent of performing an \(R_{z}\) rotation, can lead to unwanted energy excitations due to non-adiabaticity, causing further fidelity loss [22, 23, 24]. The control scheme presented here overcomes both problems by explicitly adjusting the pulses for rise time, and by using relatively low pulse amplitudes, making the operations adiabatic. By using a specific amplitude giving us two perpendicular rotation axes, we achieve single-qubit control without the need for strong non-adiabatic pulses, or the requirement for perfectly square ones. The disadvantage of this scheme is operation time. As the pulse amplitude is tied to the energy gap between the first two eigenstates of the DQD, there is little control of the rotation speed, at least in the case of a semiconductor DQD system. However, careful engineering of the DQD allows for the operation time to be tailored or optimized [29]. As long as the system energies can be tuned so that the operation time is much less than qubit coherence time, the benefits of increased operation fidelity will outweigh the cost of increased duration. In this work, we simulate a semiconductor GaAs-based DQD using finite difference methods (App. D). The parameters for our simulations were chosen to be experimentally realistic in terms of energy, time scales and pulse generation. We kept these values general since specific rise/fall times and inter-dot energies will depend on each experimental implementation. Current systems are capable of generating pulses with \(\tau=40\)ps-\(500\)ps. Experimental work by Fujisawa _et al._[2, 36] contain gate pulses with \(\tau\sim 100\)ps, with total pulse time of \(600\)ps and \(V_{\text{bias}}=40\mu\)eV. More recent work achieves at least \(40\)ps pulse resolution with advanced techniques [37]. The groups cited above as well as other semiconductor-based quantum dot research [38, 4] could practically eliminate errors due to rise time and pulse-induced excitations outside of the computational space by using our proposed pulse sequences. While the semiconductor charge qubit system was used in simulations in this work, our results are easily generalisable to other types of qubits, as long as the Hamiltonian is of a similar form to Eq. 2. For example, the same scheme can be used to control a spin qubit by varying the magnetic field \(B\) instead of a voltage bias. In this particular case, it is easier to adjust the energy splitting \(\Delta=\frac{1}{2}\gamma B\) by applying a strong reference magnetic field. Increasing \(\Delta\) will result in faster operation. How ever, in the charge qubit case, it is achieved by lowering the DQD barrier. This will increase the overlap between eigenstates, decreasing localization and thus readout fidelity. No such issue arises for the spin qubit, overcoming the slower operation time of our framework. Our results can then be directly translated to the spin qubit case, by applying a magnetic field \(B^{\prime}\) in some perpendicular direction to \(B\). ## V Conclusions We have described quantum control of the optimal charge qubits for a double-quantum dot system. We presented pulse sequences for state preparation and arbitrary qubit rotations, and show how to account for the experimental control suffering from finite rise/fall times. Owing to hybridization of the eigenstates in a double-dot system, the spatial wave function of the two lowest energy eigenstates cannot be confined exclusively to the left and the right dot. The optimal qubit was found to be defined in terms of the two lowest energy eigenstates of a zero-bias system. This allowed us to reduce our model to a two-state system. We show that it is possible to prepare the qubit in such a state when it is initially in the ground state of a DQD. Combining theory and numerical techniques yields an optimal pulse sequence that accomplishes arbitrary single-qubit rotation even with non-zero rise time \(\tau\). We demonstrate how our framework results in high fidelity despite \(\tau>0\), while avoiding unwanted excitation to higher energy states. Indeed, we show that square pulses are not only unnecessary, but also undesirable, as the sharp rise can induce unwanted oscillations, while being simple to account for. Since our proposed pulse sequence reduces to sine waves to minimize total pulse duration, it is straightforward to implement experimentally. As our numerical fitting parameters depend only on the energy splitting \(\Delta\), the results are easily scalable to any particular system. Our scheme is easily generalizable to other qubit systems with similar Hamiltonians, such as spin qubits. Additionally, we study a model of charge noise, and find that our pulse scheme is at least as good as using square waves, and it some cases it even significantly reduces errors due to noise. This further justifies using our method as a direct replacement for square waves, as other noise mitigation and error correction techniques can be used on top of it. Overall, applying our results will lead to increased operation fidelity in many systems, making them viable for practical quantum computing applications. Our method of accounting for rise/fall times bears resemblance to the GRAPE ( Gradient Ascent Pulse Engineering) algorithm [39], however there are important differences. Our method specifically works to cancel the rise/fall times of assumed profile (sinusoidal in this work, but the method can be used for any shape), resulting in a simple lookup of two parameters \(\xi\) and \(\Delta T\) depending on required angle of rotation and \(\tau\) itself. GRAPE instead is a more general "black box" technique that tries to optimise a pulse sequence by constructing it from slices of piecewise constant amplitudes, by tuning these amplitudes via gradient ascent methods. This research can also be used to optimize current geometric approaches to pulse shaping [40] by taking rise times into account explicitly. We find that the method used here is simpler to implement for experimentalists, outputs a waveform composed of sinusoids, which can be described analytically, and is, by design, not limited by the device rise/fall time. ## VI Acknowledgements This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie grant agreement SAWTRAIN No. 642688. This work was supported by the Project for Developing Innovation Systems of the Ministry of Education,Culture, Sports, Science and Technology (MEXT), Japan. A.A.L. acknowledges support from Hitachi via Grant No. RG94632, and from EPSRC (Engineering and Physical Sciences Research Council) via Award No. 1948709. ## VII Data Availability The data that support the findings of this study are available from the corresponding authors on reasonable request. ## VIII Competing Interests The authors declare that there are no competing interests. ## IX Author Contributions A.L. and H.V.L. designed the code to simulate the wave function evolution, developed the theory in this work and ran all simulations. A.L. and H.V.L. wrote the manuscript with the help of all authors. K.Z. helped design the optimized pulse sequence. T.F. provided experimental parameters for realistic simulations. C.H.W.B. supervised the project. All authors discussed the simulation results. ## Appendix A Readout For completeness, we discuss a potential procedure for the readout process. In experimental setups, it is the probability of finding the electron in one of the dots which is measured rather than the qubit superposition weighting coefficients. We can express both qubits defined in Sec. II in terms of their right and left dots parts: \[\psi_{0}(x) =\langle x|0\rangle=f_{0L}(x)+f_{0R}(x) \tag{10}\] \[\psi_{1}(x) =\langle x|1\rangle=f_{1L}(x)+f_{1R}(x) \tag{11}\] Because the qubits \(|0\rangle\) and \(|1\rangle\) are orthogonal, we have: \[\begin{split} 0=\int\psi_{0}^{*}(x)\psi_{1}(x)dx&=\int f _{0L}^{*}(x)f_{1L}(x)dx+\\ \int f_{0L}^{*}(x)f_{1R}(x)dx&+\int f_{0R}^{*}(x)f_ {1L}(x)dx+\\ \int f_{0R}^{*}(x)f_{1R}(x)dx&=\int f_{0L}^{*}(x)f_ {1L}(x)dx+\\ &\int f_{0R}^{*}(x)f_{1R}(x)dx.\end{split} \tag{12}\] The qubits are mirror images of each other, such that \(\langle x|0\rangle\) has the same spatial distribution in the left (right) dot as \(\langle x|1\rangle\) has in the right (left) one. We also know that there is some non-zero overlap, unless the DQD barrier is completely separating the dots. Therefore Eq. 12 implies that : \[\int f_{0R}^{*}(x)f_{1R}(x)dx=-\int f_{0L}^{*}(x)f_{1L}(x)dx=\eta. \tag{13}\] Any arbitrary state can be written as a linear combination of the two qubits right and left dot components \[\begin{split}\psi(x)=\alpha\psi_{0}(x)+\beta\psi_{1}(x)=\\ \alpha\Big{(}f_{0L}(x)+f_{0R}(x)\Big{)}+\beta\Big{(}f_{1L}(x)+f_ {1R}(x)\Big{)},\end{split} \tag{14}\] The probability \(P_{R}\) of finding the particle in the right dot is then: \[\begin{split} P_{R}=\int_{0}^{\infty}\psi^{*}(x)\psi(x)dx& =\int_{0}^{\infty}\Big{(}\alpha^{*}f_{0R}^{*}(x)+\\ \beta^{*}f_{1R}^{*}(x)\Big{)}\,\Big{(}\alpha f_{0R}(x)+\beta f_{1 R}(x)\Big{)}dx.\end{split} \tag{15}\] Using Eq. 13, this reduces to: \[\begin{split} P_{R}=|\alpha|^{2}\int_{0}^{\infty}f_{0R}^{*}(x)f_ {0R}(x)dx+\\ |\beta|^{2}\int_{0}^{\infty}f_{1R}^{*}(x)f_{1R}(x)dx+\eta(\alpha^ {*}\beta+\alpha\beta^{*})=\\ |\alpha|^{2}P_{0R}+|\beta|^{2}P_{1R}+2\eta\mathscr{R}(\alpha^{*} \beta),\end{split} \tag{16}\] where the integrals \(P_{0R}\) and \(P_{1R}\) can be obtained initialising the qubit in the \(\psi_{0}(x)\) or \(\psi_{1}(x)\) state, respectively, and measuring the probability of finding it in the right dot. Combining Eq. 16 with the normalisation condition for \(\psi(x)\), we obtain an equation relating \(|\beta|\) to the probability \(P_{R}\) of finding the particle in the right dot, up to an error term proportional to \(\eta\), which quantifies the uncertainty of determining whether the qubit is in the left or right side of the DQD: \[|\beta|^{2}=\frac{P_{R}-P_{0R}}{P_{1R}-P_{0R}}+\delta. \tag{17}\] A similar expression exists for \(|\alpha|^{2}\), with \(P_{L}\) being the probability of finding the particle in the left dot : \[|\alpha|^{2}=\frac{P_{L}-P_{0R}}{P_{1R}-P_{0R}}-\delta, \tag{18}\] where \(\delta=2\eta\frac{\mathscr{R}(\alpha^{*}\beta)}{P_{0R}-P_{1R}}\) is the effective error. Since \(P_{1R}\approx 1\), \(P_{0R}\approx 0\), we can estimate the maximum readout error, which would occur for a maximally entangled state: \[|\;\delta\mid\lessapprox\eta. \tag{19}\] For the parameters used in this paper, \(|\;\delta\mid\leq 8\cdot 10^{-4}\). This magnitude of readout error is not very significant compared to other sources of errors in a quantum computation [41; 42], such as two-qubit gates, relaxation, or dephasing, especially since it's only applied once as the final step. Additionally, it was shown [29] that in a similar situation, adiabatically increasing the inter-dot barrier of the DQD preserves coherence, while greatly reducing this type of "overlap" error -this technique should be used when possible if the readout error is noticeable. Alternatively, as this error is a result of lack of knowledge of \(\mathscr{R}(\alpha^{*}\beta)\), a full state tomography could be performed to eliminate it completely (assuming that errors of operations associated with the tomography do not outweigh the readout error). Therefore, we conclude that measurement of the charge distribution is a viable way of reading out the qubit in our scheme. ## Appendix B Two-site localised state model and DQD potential Within the two-state model, one has to solve the time dependent Schrodinger equation with the effective Hamiltonian \(\hat{H}_{\rm eff}\) defined as \[\hat{H}_{\rm eff}(t)=-\frac{1}{2}\,\epsilon(t)\,\sigma_{x}+\frac{1}{2}\,\Delta \,\sigma_{z}+\frac{1}{2}(E_{\rm B}+E_{\rm AB}). \tag{20}\] Here \(E_{\rm B}\) and \(E_{\rm AB}\) are the energies of the bonding and antibonding states of the DQD system, i.e. the two lowest energy states, at \(\epsilon=0\) whereas \(\Delta\) is the 'hybridisation energy' between the two localised states. At zero detuning, the bonding state \(\psi^{\rm B}(x)\) is symmetric, while the antibonding state \(\psi^{\rm AB}(x)\) is antisymmetric. Therefore, their equal superpositions produce maximally localised left/right states: \[\psi^{L}(x)=\frac{1}{\sqrt{2}}(\psi^{\rm B}(x)+\psi^{\rm AB}(x)), \tag{30}\] \[\psi^{R}(x)=\frac{1}{\sqrt{2}}(\psi^{\rm B}(x)-\psi^{\rm AB}(x)). \tag{31}\] The linear detuning breaks the left/right symmetry, however as it is expressed by the Pauli \(\sigma_{x}\) matrix in the Hamiltonian, it doesn't make the system leave the \(\psi^{\rm B}(x)/\psi^{\rm AB}(x)\) two-state basis (if done adiabatically), resulting simply in a coordinate rotation of the Bloch sphere. Therefore, we can still think in terms of the left/right localised wave functions even at non-zero detuning, and varying \(\epsilon\) is a viable way of performing single qubit rotations. The two-site localised state model describes a DQD well. The effective potential in an experimental DQD system can be found using density functional theory [43; 44] and will be a complex function of all three spatial coordinates \(x,y,z\). By careful design, the dynamics in two of the directions \(y\) and \(z\) can be confined to the lowest energy subbands so that only the potential in the \(x\) direction, \(V_{\rm DQD}(x,t)\) needs be considered. For example in a GaAs/AlGaAs heterostructure, the \(z\) direction is the growth direction and modulation doping can be used to create a triangular quantum well in that direction with subband energies two orders of magnitude larger than either \(\epsilon\) or \(\Delta\). In the \(y\) direction, parabolic confinement with energies an order of magnitude larger than \(\epsilon\) or \(\Delta\) can be produced either by etching [9], fabricating a thin gate wrapping the conducting channel [45; 46] or using split-gates [47]. In order to create a DQD potential in the \(x\) direction, gates [48; 49; 50; 51] or etching [9; 52] can also be used. The aim is to create a potential \(V_{\rm DQD}(x,t)\) that has two minima separated by a tunnel barrier. A convenient potential that has this property and is defined by three parameters \(A\), \(B\) and \(\sigma\) is given by \[V_{\rm DQD}(x)=Ax^{2}+B\exp\left(\frac{-x^{2}}{2\sigma}\right) \tag{32}\] This form for \(V_{\rm DQD}\) allows us to control both the depth of the dots and the barrier between them directly, by varying the harmonic confinement \(A\), barrier height \(B\), and barrier width \(\sigma\). This potential will obey the two-site localised state model. For a specific set of parameters, this static potential will define a value for \(\Delta\) which is the energy difference between the bonding ground state \(E_{\rm B}\) and the antibonding first excited state \(E_{\rm AB}\). Detuning is introduced by adding a linear Stark shift of the form \[V_{\rm linear}(x)=V_{\rm bias}\frac{x}{2w}. \tag{33}\] Here, \(w\) is half the width of the DQD. By comparing the dependences of \(E_{\rm B}\) and \(E_{\rm AB}\) on \(V_{\rm bias}\) with the expected dependences from two-site Hamiltonian we can define the detuning parameter for \(V_{\rm DQD}\) through a linear relation \(\epsilon=e\lambda V_{\rm bias}\) with \(\lambda\) being constant. We find this linear relationship holds with an accuracy of one part in \(10^{6}\) across the range of required values of \(\epsilon\) for single-qubit operations. The total potential is \(V_{\rm tot}(x)=V_{\rm DQD}(x)+V_{\rm linear}(x)\) and Fig. 9a shows this potential at three different detunings. The DQD dynamics under time-dependent detuning will be given by the TDSE \[\hat{H}(x,t)\psi(x,t)=i\hbar\frac{\partial}{\partial t}\psi(x,t) \tag{34}\] with \[\hat{H}(x,t)=-\frac{\hbar^{2}}{2m^{*}}\frac{\partial^{2}}{\partial x^{2}}+V_{ \rm DQD}(x)+V_{\rm bias}(x,t). \tag{35}\] Time dependence is included in Eq. 34 by varying the potential slope with time: \(V_{\rm bias}(t)\). An example plot of the energies of the two lowest instantaneous solutions (the bonding and antibonding states) as function of \(V_{\rm bias}\) is shown in Fig. 9b. Analytic solutions to the TDSE in Eq. 35 can only be found in special cases. In this paper we solve Eq. 34 numerically using a GPU-accelerated version of the staggered-leapfrog method [27; 28] (see App. D). Throughout the paper we avoid using specific numerical values to keep our results general. However, here we give the actual values used for reproducability. We've use a total DQD length of 460 nm, with parameter values: \(w=230\) nm, \(A=1.276\) meV nm\({}^{-1}\), \(B=4.08\) meV so that \(\Delta=11.7\mu\)eV, and the linear coefficient \(\lambda=0.421\). We have also tested various non-symmetric potentials with the two dots having different sizes, but in all the cases the general conclusions were the same as for the symmetric potential of Eq. 32. ## Appendix C Rotation scheme derivation We find a fast and simple general rotation scheme based on creating two perpendicular axes \(\vec{x}^{\prime}\) and \(\vec{z}^{\prime}\), by setting the detuning \(\epsilon=\pm\Delta\). Then we observe that one should be able to perform a rotation around an axis at \(\frac{\pi}{4}\) w.r.t the two axes above, which would be \(\vec{x}\) and \(\vec{z}\). This is achieved by rotating by some angle \(\Theta_{1}\) around the first axis, then by \(\Theta_{2}\) around the second one, and finally by \(\Theta_{1}\) around the first one again. We will find the relationship between \(\Theta_{1}\), \(\Theta_{2}\), and the net angle rotated around \(\vec{x}\) or \(\vec{z}\) named \(\alpha\), by analytically comparing the rotation matrix elements with the straightforward \(R_{\vec{x}}\) and \(R_{\vec{x}}\) rotations. Looking at \(R_{\vec{x}}\) first: \[R_{\vec{x}}(\alpha)=\begin{pmatrix}\cos\frac{\alpha}{2}&-i\sin\frac{\alpha}{2}\\ -i\sin\frac{\alpha}{2}&\cos\frac{\alpha}{2}\end{pmatrix}. \tag{10}\] In our scheme, \[R_{\vec{x}^{\prime}}(\alpha)=R_{\vec{y}}\left(\frac{-\pi}{4}\right)R_{\vec{x}}( \alpha)R_{\vec{y}}\left(\frac{\pi}{4}\right), \tag{11}\] \[R_{\vec{z}^{\prime}}(\alpha)=R_{\vec{y}}\left(\frac{-\pi}{4}\right)R_{\vec{x} }(\alpha)R_{\vec{y}}\left(\frac{\pi}{4}\right), \tag{12}\] and we need the following to always hold: \[R_{\vec{x}}(\alpha)=R_{\vec{x}^{\prime}}(\Theta_{1})R_{\vec{z}^{\prime}}( \Theta_{2})R_{\vec{z}^{\prime}}(\Theta_{1}). \tag{13}\] Comparing the (1,1) matrix elements: \[\begin{array}{c}\cos\alpha=\cos\frac{\Theta_{2}}{2}(2\cos^{2}\frac{\Theta_ {1}}{2}-1)\\ -j\sqrt{2}(\frac{1}{2}\sin\frac{\Theta_{2}}{2}+\cos\frac{\Theta_{1}}{2}\cos \frac{\Theta_{2}}{2}\sin\frac{\Theta_{1}}{2}).\end{array} \tag{14}\] Since the imaginary part on the LHS is zero, we have: \[(\frac{1}{2}\sin\frac{\Theta_{2}}{2}+\cos\frac{\Theta_{1}}{2}\cos\frac{\Theta _{2}}{2}\sin\frac{\Theta_{1}}{2})=0. \tag{15}\] Solving the above allows us to find \(\Theta_{2}\) in terms of \(\Theta_{1}\): \[\Theta_{2}=2\arctan\left(\sin\Theta_{1}\right). \tag{16}\] Now coming back to the real part of Eq. 14 and substituting for \(\Theta_{2}\), we have: \[\cos\left(\arctan\left(\sin\Theta_{1}\right)\right)\cos\Theta_{1}=\cos\frac{ \alpha}{2}, \tag{17}\] which gives \[\Theta_{1}=\arccos\left(\frac{\sqrt{2}\cos\frac{\alpha}{2}}{\sqrt{\cos\frac{ \alpha}{2}^{2}+1}}\right). \tag{18}\] The above satisfies Eq. 13 for all matrix elements, and is therefore equivalent. It allows us to find a three pulse train that performs the \(R_{\vec{x}}(\alpha)\) rotation by an arbitrary angle \(\alpha\). We repeat the above procedure for \(R_{\vec{x}}\) to find the following: \[\Theta_{2}=-2\arctan\left(\sin\Theta_{1}\right)+2\pi, \tag{19}\] \[\Theta_{1}=\arccos\left(\frac{\sqrt{2}\cos\frac{\alpha}{2}}{\sqrt{\cos\frac{ \alpha}{2}^{2}+1}}\right). \tag{20}\] Therefore, we can perform arbitrary rotations around \(\vec{z}\) and \(\vec{x}\) this way. However, this scheme is unable to perform the \(R_{\vec{y}}\) rotation, which is achieved differently, as described in III.1. ## Appendix D Iteration method The system is modelled using an explicit iterative scheme for the one-dimensional time-dependent Schrodinger equation (TDSE) with an arbitrary potential _V(x,t)_: \[i\hbar\frac{\partial\psi(x,t)}{\partial t}=H\psi=\left[\frac{-\hbar^{2}}{2m} \frac{\partial^{2}}{\partial x^{2}}+V(x,t)\right]\psi(x,t) \tag{21}\] where \(m\) is the effective mass. The scheme, which is based on the finite difference method, was described in details by Maestri _et al._ for two particles in one dimension [53] and we adapt it to a single particle. The wave function is evaluated on a spatially discretized grid and at successive, equally separated intervals of time \(\Delta t\): \[\psi(x,t)=\psi(m\Delta x,k\Delta t)\equiv\psi_{m}^{k}, \tag{22}\] with \(m,k\) integer. The spatial part of the method is derived using Taylor expansion of the wave function: \[\frac{\partial^{2}\psi}{\partial x^{2}}\simeq\frac{\psi(x+\Delta x)-2\psi(x)+ \psi(x-\Delta x)}{\Delta x^{2}}. \tag{23}\] Therefore, using Eqs. (22) and (23), the right hand side of Eq. (21) transforms into Figure 9: (a) The DQD potential \(V_{\rm tot}\) at zero (blue), lowest (red) and highest (orange) detuning values. (b) Energies \(E\) of the bonding (\(E_{\rm B}\)) and anti-bonding (\(E_{AB}\)) eigenstates. The coloured dots mark potential shapes from part (a). \[H\psi=\Bigg{[}\frac{-\hbar^{2}}{2m}\bigg{(}\frac{\psi_{m+1}-2\psi_{m}+\psi_{m-1}}{ \Delta x^{2}}\bigg{)}\,+\,V_{m}\Bigg{]}\psi_{m}. \tag{40}\] The derivative on the left hand side of Eq. (41) is calculated by writing the exact solution of TDSE and then taking the difference between the \((k+1)^{th}\) and \((k-1)^{th}\) time steps, as suggested by Askar and Cakmak [27]: \[\psi_{m}^{k+1}=e^{-i\Delta tH/\hbar}\psi_{m}^{k}\simeq\left(1-\frac{i\Delta tH }{\hbar}\right)\psi_{m}^{k}, \tag{41}\] \[\psi_{m}^{k+1}\,-\,\psi_{m}^{k-1}=(e^{-i\Delta tH/\hbar}-e^{i\Delta tH/\hbar} )\psi_{m}^{k}\simeq-\frac{2i\Delta tH}{\hbar}\psi_{m}^{k}. \tag{42}\] To improve the accuracy, we follow Visscher's staggered-time method [54] and write the wave vector in terms of its real and imaginary parts: \(\psi_{m}^{k}=u_{m}^{k}+iv_{m}^{k}\). After inserting the Hamiltonian from Eq. (40) into Eq. (42) and rearranging the terms, we obtain a pair of simultaneous equations, which are iterated over time: \[u_{m}^{k+1}=u_{m}^{k-1}+\Big{(}2a_{x}+b\,V_{m}^{k}\Big{)}v_{m}^{k}\,- \tag{43}\] \[a_{x}(v_{m+1}^{k}+v_{m-1}^{k}),\] (44) \[v_{m}^{k+1}=v_{m}^{k-1}-\Big{(}2a_{x}+b\,V_{m}^{k}\Big{)}u_{m}^{k}\,-\] (45) \[a_{x}(u_{m+1}^{k}+u_{m-1}^{k}), \tag{46}\] where \(a_{x}=\frac{\hbar\Delta t}{m\Delta x^{2}}\) and \(b=\frac{2\Delta t}{\hbar}\). Also, the real and imaginary parts are calculated at slightly shifted times: \(u^{k}\equiv u(t),\ v^{k}\equiv v(t+\Delta t/2)\). The method above is stable as long as the following criterion is satisfied: \[\Delta t\leq\frac{\hbar}{E_{\rm max}}, \tag{47}\] with \(E_{\rm max}\) being the largest eigenvalue of the discretised Hamiltonian [55]. Furthermore, small errors due to finite computational accuracy do not accumulate with iterations and the total electron probability \(\sum_{\rm all\,}|\psi_{m}^{k}|^{2}\) is preserved over time, showing no significant deviations from unity. Appendix E Finding optimal adjustment parameters accounting for rise time \(\tau\) by gradient ascent The MATLAB code for finding the optimal adjustment parameters accounting for rise time \(\tau\) for single qubit control is available on request from the corresponding author. The time-dependent evolution is relegated to the GPU-accelerated staggered-leapfrog code described and referenced in the main work.
2301.12838
The Higgs Boson and The Fakeon Hypothesis
In this paper, we make an attempt to implement the fakeon hypothesis in particle physics. To begin with, we consider a model in which the Higgs boson is the only fakeon. We deduce its interactions with the electroweak gauge bosons. Each such new interaction can be written as a product of two factors. The first one depends, on the electroweak gauge bosons and their derivatives. The second one solely depends, on the physical Higgs and its derivatives. We also study the conserved quantities of different (free) fields in this setting.
Musongela Lubo, Muanza Steve, Kikunga Kasenda Ivan, Mavungu Tsava Christian
2023-01-19T22:41:06Z
http://arxiv.org/abs/2301.12838v1
###### Abstract ###### Abstract In this paper, we make an attempt to implement the fakeon hypothesis in particle physics. To begin with, we consider a model in which the Higgs boson is the only fakeon. We deduce its interactions with the electroweak gauge bosons. Each such new interaction can be written as a product of two factors. The first one depends, on the electroweak gauge bosons and their derivatives. The second one solely depends, on the physical Higgs and its derivatives. We also study the conserved quantities of different (free) fields in this setting. **Keywords**: Higgs Boson, Fakeon Model, Lee-Wick Models, Beyond the Standard Model Higgs Sector. **The Higgs Boson and The Fakeon Hypothesis** \({}^{*}\)**Musongela Lubo, \({}^{\dagger\lx@sectionsign}\)Muanza Steve, \({}^{*}\)Kikunga Kasenda Ivan, \({}^{*}\)Mavungu Tsava Christian** \({}^{*}\)_Physics Department, Faculty of Sciences, University of Kinshasa, P.O. Box 190 Kin XI, Kinshasa, D.R.Congo_ \({}^{\dagger}\)_General Commission for Atomic Energy, Regional Center for Nuclear Studies in Kinshasa, Campus of the_ _University of Kinshasa, P.O. Box 868 Kin XI, Kinshasa, D.R.Congo_ \({}^{\lx@sectionsign}\)_Aix-Marseille University, 163 Avenue de Luminy, 13288 Marseille Cedex 09, France_ Email: [email protected], [email protected], [email protected], [email protected] ## 1 Introduction The Standard Model of particle physics is one of the most successful constructions of theoretical physics. However, no knowledge is definitive. Thus, different works go beyond that model. Of course, the results must be in accordance with the well verified experimental results described by that model. Studies beyond the Standard Model are of many kinds. This model is basically built upon two bases : the gauge group \(SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\) and the Poincare group. The first group is about internal symmetries while the second one concerns space-time transformations. Considerations beyond the Standard Model concerns mostly the change of one of these ingredients, and possibly the particle content. One of the possibilities is to keep the Lorentz group but to change the internal group. The first attempt is to replace the group \(SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\) by a bigger one such as \(SU(5)\) or \(SO(10)\). One is then in \(GUT\) (Grand Unified Theories). Such theories have their advantages and limitations. For example, the simplest \(SU(5)\) theory has been shown to lead to a proton decay rate which is in contradiction with experiment. These theories introduce gauge bosons which are not present in the Standard Model. On the other side, these \(GUT\) embed the possibility of explaining neutrino masses, especially using the See-saw Mechanism. Another view is to keep the group \(SU(3)_{C}\times SU(2)_{L}\times U(1)_{Y}\), the Lorentz group and introduce supersymmetry [1][2]. The physical spectrum is thus doubled, with a superpartner field associated to each degree of freedom of the Standard Model. Combining the two previous ideas, SUSY GUT can be constructed. Other works have been devoted to the possibility of breaking Lorentz invariance. Some of their possible phenomenological implications have been analyzed. Some approaches start with a modification of the usual commutation relations of quantum mechanics. The work proposed here does not lie in the categories evoked above. The gauge group of the Standard Model as well as the Poincare group are incorporated in the model. Our aim is to study some ideas coming from attemps to study quantum gravity [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 19, 20, 21, 22, 23, 24]. Basically, if one has fields whose propagators are not the usual ones, that has influence on scattering cross sections,etc. This paper is organized as follows. The first section is devoted to the "origins" of the fakeon hypothesis. It is based on attemps to study general relativity by introducing new terms to the Einstein-Hilbert action. Some of these extensions include terms which depend on derivatives of the Riemann tensor and thus lead to higher derivatives in the metric. The second section deals with a simpler model relying on a real scalar field which displays the kind of behaviour needed by the fakeon hypothesis. Its Lagrangian involves high order derivatives of the field [3, 4, 5, 17]. Its equations of motion are solved and show plane waves which, if interpreted naively in the usual sense, lead to a massless particle and a massive one. We verify that the equation of motion for the Green functions of this model behave in the desired way. We then make an heuristic consideration of a fakeon real scalar field which has a quartic self-interaction. Looking for a scattering involving two particles in the initial state and two in the final state, we see that the amplitude of transition at second order is finite, contrarily to the traditionnal, non fakeon treatment. Such a view has to be taken with great care. It basically relies on the replacement of the Feynman propagator of the usual theory by the one proposed by the fakeon hypothesis. A more detailed analysis is probably needed. In the third section, we study the hypothesis that the Higgs field may be a fakeon, while all the other particles of the Standard Model are not. For the free fakeon Higgs doublet, we adapt the Standard Model Lagrangian to accomodate the gauge group of the electroweak theory. We then gauge this Lagrangian and find that it introduces new couplings between the physical Higgs and the electroweak gauge bosons. The fourth section has a more formal and general accent. To approach the fakeon hypothesis, we used the simplest possible Lagrangian and it involves up to third order space-time derivatives of a scalar field. One can ask two simple questions. The first one is to know what happens if another particle of the Standard Model is also a fakeon. To study such a point of view, in which the fermions and or the gauge fields of the Standard Model could be fakeons, one has to consider Lagrangians which will bear ressemblances with what has been used here for the Higgs. We thus analyze formally a system in which the Lagrangian depends on a field and its derivatives up to the third order. We derive the equations of motion for such a system. We also derive its conserved quantities. We think this be useful for a deeper understanding of the hypothesis studied. In the conclusions, we make a discussion concerning the interactions introduced by the fakeon hypothesis between the physical Higgs and the usual electroweak gauge bosons. And we give prospects on how to test experimentally the nature of the Higgs boson, either a physical or a fake degree of freedom, through precision measurements at the next \(e^{+}e^{-}\) collider envisaged to be in service right after the LHC era. To make the reading of this article easier, we have put some technical computations in two appendices. The first one deals with the explicit calculation of the interaction terms introduced by the fakeon hypothesis. The second one is devoted to the derivation of the field equations and the conserved quantities (momenta, charges) for a general Lagrangian depending on up to third derivatives in the fundamental fields as given in section 4. It should be clear that this work does not pretend to exhaust the approach of the fakeon hypothesis or of the virtual particle in Particle Physics as suggested for example in [3, 4, 5]. The aim of this paper is quite modest. We do not study quantum gravity. We simply analyze how the fakeon hypothesis impacts a part of the electroweak theory, focusing on the Higgs field. ## 2 Gravity and Higher-Derivatives of The Metric Theories with higher-derivatives in the metric field are present in some proposals for the extension of the Einstein-Hilbert Lagrangian describing gravity in a relativistic setting[3] \[\begin{split}-2\kappa^{2}\mu^{\epsilon}\frac{\mathcal{L}_{GQ}}{ \sqrt{-g}}&=2\lambda_{c}M^{2}+\zeta R-\frac{\gamma}{M^{2}}R_{\mu \nu}R^{\mu\nu}+\frac{1}{2M^{2}}(\gamma-\eta)R^{2}\\ &-\frac{1}{M^{4}}\left(D_{\rho}R_{\mu\nu}\right)\left(D^{\rho}R^ {\mu\nu}\right)+\frac{1}{2M^{4}}(1-\xi)\left(D_{\rho}R\right)\left(D^{\rho}R \right)\\ &\frac{1}{M^{4}}\left(\alpha_{1}R_{\mu\nu}R^{\mu\rho}R^{\nu}_{ \rho}+\alpha_{2}RR_{\mu\nu}R^{\mu\nu}+\alpha_{3}R^{3}+\alpha_{4}RR_{\mu\nu \rho\sigma}R^{\mu\nu\rho\sigma}\right.\\ &\qquad\qquad\qquad\left.+\alpha_{5}R_{\mu\nu\rho\sigma}R^{\mu \rho}R^{\nu\sigma}+\alpha_{6}R_{\mu\nu\rho\sigma}R^{\rho\sigma\alpha\beta}R^{ \mu\nu}_{\alpha\beta}\right)\end{split} \tag{1}\] where \(\lambda_{c},\zeta,\gamma,\eta,\xi,\alpha_{i}(1\leq i\leq 6)\) and \(\kappa\) are constants. The following expansion of the metric around the Minkowski one \[g_{\mu\nu}=\eta_{\mu\nu}+2\kappa h_{\mu\nu} \tag{2}\] and the gauge-fixing function \[\mathbf{g}_{\mu}=\eta^{\nu\rho}\partial_{\rho}g_{\mu\nu}-\frac{1}{2}\eta^{\nu \rho}\partial_{\mu}g_{\rho\nu}. \tag{3}\] can be used [3]. With all of the above, the gauge-fixed Lagrangian is: \[\begin{split}\mathcal{L}_{gf}&=\mathcal{L}_{GQ}+ \frac{1}{4\kappa^{2}}\mathbf{g}^{\mu}\left(\zeta-\gamma\frac{\Box}{M^{2}}+ \frac{\Box^{2}}{M^{4}}\right)\mathbf{g}_{\mu}\\ &+\overline{C}^{\mu}\left(\zeta-\gamma\frac{\Box}{M^{2}}+\frac{ \Box^{2}}{M^{4}}\right)\left[\Box C_{\mu}-(2\delta^{\rho}_{\mu}\eta^{\nu\sigma }\partial_{\nu}-\eta^{\rho\sigma}\partial_{\mu})\Gamma^{\alpha}_{\rho\sigma}C _{\alpha}\right].\end{split} \tag{4}\] The propagator has the following behavior \[i\tilde{G}_{F}(p)\sim\frac{1}{p^{4}}\ \ \text{for}\ p^{2}>>m^{2}. \tag{5}\] ## 3 The Fakeon Hypothesis and Real Scalar Field The fakeon hypothesis relies on a modification of the propagators for fields needed to descibe fundamental particles in Physics. The simplest way to implement such an approach is to incorporate in the Lagrangian a supplementary term which contains a term with a third order derivative in the field. We took, from the literature, such a Lagrangian for a real scalar field. With a real scalar field in hand, the simplest thing to do is to study a system in which such a field has quartic self- interactions and the easiest process is a scattering with two outgoing particles. Heuristically, one sees that the perturbative calculation of this scattering process does not lead, at the second order, to an ultraviolet divergence. This is in contrast with the usual treatment, which displays such a behaviour, taken care of by renormalization. It turns out that for a non-interacting single scalar field \(\phi\), one can write down a Lagrangian whose propagator displays interesting features for our purposes. It implies second derivatives in the field. For the naive substitution : \(\phi_{1}+\phi_{2}\longrightarrow\phi_{3}+\phi_{4}\), the second-order element \(S_{fi}\) of the scattering matrix is non-divergent. Consider the following Lagrangian \[\begin{split}\mathcal{L}&=\frac{1}{2}\left(\partial _{\mu}\varphi\right)\left(\zeta-\gamma\frac{\Box}{M^{2}}\right)\left(\partial ^{\mu}\varphi\right)-\frac{\lambda}{4!}\varphi^{4}\\ &=\frac{1}{2}\zeta\left(\partial_{\mu}\varphi\right)\left( \partial^{\mu}\varphi\right)-\frac{\gamma}{2M^{2}}\left(\partial_{\mu}\varphi \right)\Box\left(\partial^{\mu}\varphi\right).\end{split} \tag{6}\] The propagator takes the form \[\frac{i}{\zeta\left(p^{2}+i\epsilon\right)}-\frac{i\gamma\left(\zeta M^{2}+\gamma p ^{2}\right)}{\zeta\left[\left(\zeta M^{2}+\gamma p^{2}\right)^{2}+\varepsilon^ {4}\right]} \tag{7}\] where \(\epsilon\) et \(\varepsilon\) are infinitesimal. From the action \[S=\int d^{4}x\ \mathcal{L}.\] one obtains \[\begin{split}\delta S&=\int d^{4}x\eta^{\mu\nu} \left\{\zeta\partial_{\mu}\left(\partial_{\nu}\varphi\delta\varphi\right)- \frac{\gamma}{2M^{2}}\left[\partial_{\mu}\left(\delta\varphi\Box\partial_{\nu} \varphi\right)+\Box\partial_{\nu}\left(\partial_{\mu}\varphi\delta\varphi \right)\right]\right\}\\ &+\int d^{4}x\eta^{\mu\nu}\left\{-\zeta\partial_{\mu}\left( \partial_{\nu}\varphi\right)+\frac{\gamma}{2M^{2}}\left[\left(\partial_{\mu} \Box\partial_{\nu}\varphi\right)+\left(\Box\partial_{\nu}\partial_{\mu} \varphi\right)\right]\right\}\delta\varphi=0.\end{split} \tag{8}\] The equation of motion reads \[\left(\zeta\Box+\frac{|\gamma|}{M^{2}}\Box^{2}\right)\varphi=0. \tag{9}\] Looking for plane wave solutions \(\varphi=e^{ipx}\), equation (9) gives \(\left(-\zeta p^{2}+\frac{|\gamma|}{M^{2}}p^{4}\right)+0\). This implies * either \[p^{2}=0\Longrightarrow p_{0}^{2}=\vec{p}^{2}\Longrightarrow m^{2}=0,\] (10) * or \[\frac{|\gamma|}{M^{2}}p^{2}=\zeta\] (11) \[\left(\Box_{x}+m^{2}\right)G_{Fs}(x-y) =i\delta^{4}(x-y)\ \text{et} \tag{12}\] \[\left(\zeta\Box_{x}+\frac{|\gamma|}{M^{2}}\Box_{x}^{2}\right)G_{ Ff}(x-y) =i\delta^{4}(x-y). \tag{13}\] \[iG_{Fs}(x-y) =\int\frac{d^{4}k}{(2\pi)^{4}}\frac{1}{k^{2}-m^{2}+i\epsilon}e^{ -ik(x-y)}\ \text{et} \tag{14}\] \[iG_{Ff}(x-y) =\int\frac{d^{4}k}{(2\pi)^{4}}\frac{M^{2}}{k^{2}\left(\zeta M^{2 }-|\gamma|k^{2}\right)+\varepsilon^{4}}e^{-ik(x-y)}. \tag{15}\] with \(\epsilon\) and \(\varepsilon\) infinitesimal widths. \[p_{1}+p_{2}\longrightarrow p_{3}+p_{4} \tag{16}\] \[|i\rangle =a_{\vec{p}_{1}}^{\dagger}a_{\vec{p}_{2}}^{\dagger}\left|0 \right\rangle\qquad\text{et} \tag{17}\] \[|f\rangle =a_{\vec{p}_{3}}^{\dagger}a_{\vec{p}_{4}}^{\dagger}\left|0\right\rangle. \tag{18}\] 1. **Order 0** This term does not describe an interaction. 2. **First Order** \[S_{fi}^{1}=-i\lambda(2\pi)^{4}\delta^{4}\left(p_{4}+p_{3}-p_{1}-p_{2}\right).\] (19) 3. **Second Order** \[\begin{split} S_{fi}^{2}&=\frac{-\lambda^{2}}{2} \delta^{4}(p_{3}+p_{4}-p_{1}-p_{2})\int d^{4}q\left[\frac{1}{q^{2}-m^{2}+i \epsilon}\frac{1}{\left(p_{1}+p_{2}-q\right)^{2}-m^{2}+i\epsilon}\right.\\ &\left.+\frac{1}{q^{2}-m^{2}+i\epsilon}\frac{1}{\left(p_{2}-p_{4 }-q\right)^{2}-m^{2}+i\epsilon}+\frac{1}{q^{2}-m^{2}+i\epsilon}\frac{1}{\left( p_{2}-p_{3}-q\right)^{2}-m^{2}+i\epsilon}\right].\end{split}\] (20) \[\begin{split} S_{fi}^{2}&\propto\int_{|q|\to+ \infty}\frac{d^{4}q}{q^{4}}\propto\int_{|q|\to+\infty}\frac{q^{3}}{q^{4}}dq= \int_{|q|\to+\infty}\frac{dq}{q}=[lnq]_{|q|\to+\infty}\longrightarrow\infty. \end{split}\] (21) 4. **Amplitude \(S_{fi}^{2}\) for fakeons** \[\begin{split} S_{fi}^{2}&=\frac{-\lambda^{2}}{2} \delta^{4}(p_{3}+p_{4}-p_{1}-p_{2})\\ &\times\int d^{4}q\left[\frac{M^{2}}{q^{2}\left(\zeta M^{2}-| \gamma|q^{2}\right)+\varepsilon^{4}}\,\frac{M^{2}}{\left(p_{1}+p_{2}-q\right) ^{2}\left[\zeta M^{2}-|\gamma|\left(p_{1}+p_{2}-q\right)^{2}\right]+\varepsilon ^{4}}\right.\\ &+\frac{M^{2}}{q^{2}\left(\zeta M^{2}-|\gamma|q^{2}\right)+ \varepsilon^{4}}\frac{M^{2}}{\left(p_{2}-p_{4}-q\right)^{2}\left[\zeta M^{2}- |\gamma|\left(p_{2}-p_{4}-q\right)^{2}\right]+\varepsilon^{4}}\right.\\ &\left.+\frac{M^{2}}{q^{2}\left(\zeta M^{2}-|\gamma|q^{2}\right)+ \varepsilon^{4}}\frac{1}{\left(p_{2}-p_{3}-q\right)^{2}\left[\zeta M^{2}-| \gamma|\left(p_{2}-p_{3}-q\right)^{2}\right]+\varepsilon^{4}}\right].\end{split}\] (22) \[S_{fi}^{2}\propto\int_{|q|\to+\infty}\frac{d^{4}q}{q^{8}}\propto\int_{|q|\to+ \infty}\frac{q^{3}}{q^{8}}dq=\int_{|q|\to+\infty}\frac{dq}{q^{5}}=\left[\frac {-1}{4q^{4}}\right]_{|q|\to+\infty}\longrightarrow 0. \tag{23}\] Let us now look at the simplest process in this model i.e. a scattering one. \[\phi\left(\overrightarrow{p}_{1}\right)+\phi\left(\overrightarrow{p}_{2} \right)\longrightarrow\phi\left(\overrightarrow{p}_{3}\right)\phi\left( \overrightarrow{p}_{4}\right). \tag{24}\] Knowing the initial and final states, quantum field theory tell us that the amplitude of the process is given by \(\langle f|S|i\rangle\) where \(S=T\exp\left[\frac{-i}{\hbar}\int d^{4}x\ \mathcal{L}_{\text{int}}(x)\right]\). Using the decomposition of the free field \[\phi(x)=\int d^{3}k\ \Big{[}a(\overrightarrow{k})e^{-ik.x}+a^{\dagger}( \overrightarrow{k})e^{ik.x}\Big{]}, \tag{25}\] the commutation relations are \(\left[a(\overrightarrow{k}),a^{\dagger}(\overrightarrow{k}^{\prime})\right]= \delta^{3}\left(\overrightarrow{k}-\overrightarrow{k}^{\prime}\right);\ \left[a( \overrightarrow{k}),a(\overrightarrow{k}^{\prime})\right]=\left[a^{\dagger}( \overrightarrow{k}),a^{\dagger}(\overrightarrow{k}^{\prime})\right]=0\). The non-fakeon treatment leads to Eq.(19), Eq.(20) where the integration is written in terms of the Feynman propagators for the scalar field i.e. \(\frac{1}{p^{2}-m^{2}+i\varepsilon}\). This integral is divergent and can be taken of by renormalization, in the ultraviolet. If one takes the option of simply replacing the usual propagator \(\frac{1}{p^{2}-m^{2}+i\varepsilon}\) by another one \(\frac{p^{2}-m^{2}}{\left(p^{2}-m^{2}+i\varepsilon\right)^{2}+\epsilon}\)[3, 4, 5], one find that amplitude of the scattering process is finite at the second order contrary to what happens usually. The important point is that the Feynman propagator appearing here is given as \(G_{F}(x-y)=\langle 0|T\phi(x)\phi(y)|0\rangle\) where the field \(\phi\) has been quantized, according to the rules invoked above. We have found the classical solutions of the field equation (See Eq.(25)) but have not provided the usual quantization. This means that claim that the \(S_{\rm fi}^{(2)}\) not being divergent has to be examined more closely. Section 4 gives a first step for the solution of this problem. ## 4 The Higgs Boson As a Fakeon In this paper, we begun with a Lagrangian describing a real scalar field, with the necessary ingredients to accomodate the fakeon hypothesis. We then promoted that single real scalar field to a doublet of complex fields needed for the Higgs sector in the Standard Model. The Lagrangian obtained in that fashion is invariant under global transformations of the Standard Model. We then went on to make that symmetry local, using the usual recipe of replacing the partial space-time derivative by the covariant derivative. The presence of the D'Alembertian operator in the Lagrangian leads to some lengthy expressions for the interaction between the Higgs boson and the electroweak gauge bosons. \[{\cal L}(\Phi)=\xi\left(D_{\mu}\Phi\right)^{\dagger}\ D^{\mu}\left(D_{\nu}D^{ \nu}\Phi\right)+\left(D_{\mu}\Phi\right)^{\dagger}\left(D^{\mu}\Phi\right). \tag{26}\] Since the covariant derivative commutes with the gauge transformation i.e. \[\Phi=U\ \Phi^{\prime};\ D_{\mu}\Phi=U\ D_{\mu}\Phi^{\prime} \tag{27}\] for the group \(SU(2)_{L}\), one has \[{\cal L}(\Phi^{\prime})=\xi\left(D_{\mu}\Phi^{\prime}\right)^{\dagger}\ U^{ \dagger}U\ D^{\mu}\left(D_{\nu}D^{\nu}\Phi^{\prime}\right)+\left(D_{\mu}\Phi^ {\prime}\right)^{\dagger}\ U^{\dagger}U\ \left(D^{\mu}\Phi^{\prime}\right) \tag{28}\] so that the Lagrangian is invariant when the matrix \(U\) belongs to \(SU(2)_{L}\) i.e. \({\cal L}(\Phi)={\cal L}(\Phi^{\prime})\). The same can be said of the \(U(1)_{Y}\) part of the electroweak theory. Gauging the theory gives extra couplings of the electroweak bosons \(\left(W_{\mu}^{+},W_{\mu}^{-},Z_{\mu},A_{\mu}\right)\) to the physical spin zero Higgs field \(h\) after spontaneous symmetry breaking. \[W_{\mu}^{\pm} = \frac{1}{\sqrt{2}}\left(A_{\mu}^{1}\mp iA_{\mu}^{2}\right); \tag{29}\] \[Z_{\mu} = \frac{1}{\sqrt{g^{2}+g^{\prime 2}}}\left(gA_{\mu}^{3}-g^{\prime}B_ {\mu}\right);\] (30) \[A_{\mu} = \frac{1}{\sqrt{g^{2}+g^{\prime 2}}}\left(g^{\prime}A_{\mu}^{3}+gB _{\mu}\right). \tag{31}\] \[A_{\mu}^{1}=\frac{1}{\sqrt{2}}\left(W_{\mu}^{+}+W_{\mu}^{-}\right); \tag{32}\] \[A_{\mu}^{2}=\frac{i}{\sqrt{2}}\left(W_{\mu}^{+}-W_{\mu}^{-}\right); \tag{33}\] \[A_{\mu}^{3}=\frac{1}{\sqrt{g^{2}+g^{\prime 2}}}\left(gZ_{\mu}+g^{\prime}A_{ \mu}\right); \tag{34}\] \[B_{\mu}=\frac{-1}{\sqrt{g^{2}+g^{\prime 2}}}\left(g^{\prime}Z_{\mu}-gA_{ \mu}\right). \tag{35}\] \[(D_{\mu}\Phi)^{\dagger}(D^{\mu}\Phi)= =\frac{1}{2}\partial_{\mu}h\partial^{\mu}h+(v+h)^{2}\,\frac{g^{2}} {4}W_{\mu}^{-}W^{+\mu}+(v+h)^{2}\,\frac{(g^{2}+g^{\prime 2})}{8}Z_{\mu}Z^{\mu}. \tag{36}\] \[\left[\Phi^{\dagger}\Phi-\frac{v^{2}}{2}\right]^{2}= \frac{1}{4}\left(h^{2}+2vh\right)^{2}. \tag{37}\] The covariant derivative of the Higgs doublet can be written in the following form which simplifies computations: \[D_{\mu}\Phi=\partial_{\mu}\Phi+\sum_{a=1}^{4}K_{a,\mu}\tilde{T}_{a}\Phi \tag{38}\] where one takes \[K_{1,\mu}=W_{\mu}^{+},\ K_{2,\mu}=W_{\mu}^{-},\ K_{3,\mu}=Z_{\mu},\ K_{4,\mu} =A_{\mu} \tag{39}\] so that \[\tilde{T}_{1}=\left(\begin{array}{cc}0&-\frac{ig}{\sqrt{2}}\\ 0&0\end{array}\right),\ \tilde{T}_{2}=\left(\begin{array}{cc}0&0\\ -\frac{ig}{\sqrt{2}}&0\end{array}\right) \tag{40}\] \[\tilde{T}_{3}=\left(\begin{array}{ccc}-ig\left(\frac{1}{2}-\sin^{2}( \theta)\right)\sec(\theta)&0\\ 0&\frac{1}{2}ig\sec(\theta)\end{array}\right),\ \tilde{T}_{4}=\left(\begin{array}{ccc}-ie&0 \\ 0&0\end{array}\right). \tag{41}\] The part of the Lagrangian containig the D'Alembertian can be expanded directly in term of the physical electroweak gauge bosons. An example of a new interaction is given by \[\begin{split}\mathcal{L}_{2}&=\sum_{a=1}^{4}\left( \partial^{\mu}\partial_{\nu}K_{a},^{\nu}\right)\ \frac{1}{2}\partial_{\mu}\Phi^{\dagger}\left(v+h\right)\ \frac{ig}{2}\sec\theta_{W}\ \delta_{a,3}\\ &=\frac{i}{4}g\sec\theta_{W}\left(\partial^{\mu}\partial_{\nu}K_{ 3},^{\nu}\right)\ \left(\partial_{\mu}h\right)\left(v+h\right)\\ \mathcal{L}_{2}&=\frac{i}{4}g\sec\theta_{W}\left(\partial^{\mu} \partial_{\nu}Z^{\nu}\right)\ \left(\partial_{\mu}h\right)\left(v+h\right).\end{split} \tag{42}\] In the same way, we calculate the other terms, we have then : \[\begin{split}\mathcal{L}=&\left(\partial_{\mu}h \right)(\partial^{\mu}\Box h)\\ &+\frac{1}{4}ig\sec\theta_{W}\partial_{\mu}h(v+h)\left(\partial^ {\mu}\partial_{\nu}Z^{\nu}\right)\\ &+\frac{1}{4}ig\sec\theta_{W}(\partial_{\mu}h)(\partial^{\mu}h) \partial_{\nu}Z^{\nu}\\ &+ig\sec\theta_{W}(\partial_{\mu}h)(\partial_{\nu}h)\left( \partial^{\mu}Z^{\nu}\right)\\ &+ig\sec\theta_{W}Z^{\nu}(\partial_{\mu}h)(\partial^{\mu}\partial_ {\nu}h)\\ &-\frac{1}{4}ig\sec\theta_{W}(\partial^{\mu}\Box h)(v+h)Z_{\mu}\\ &-\frac{1}{4}g^{2}(\partial_{\mu}h)(v+h)\left[\left(\partial^{ \mu}W_{\nu}^{-}\right)W^{+\nu}+\frac{1}{2}\sec^{2}\theta_{W}\left(\partial^{ \mu}Z_{\nu}\right)Z^{\nu}\right]\\ &-\frac{1}{2}g^{2}(\partial_{\mu}h)(v+h)\left[\left(\partial^{ \mu}W^{+\nu}\right)W_{\nu}^{-}+\frac{1}{2}\sec^{2}\theta_{W}\left(\partial^{ \mu}Z^{\nu}\right)Z_{\nu}\right]\\ &-\frac{1}{4}g^{2}(\partial_{\mu}h)(\partial^{\mu}h)\left[W^{+\nu }W_{\nu}^{-}+\frac{1}{2}\sec^{2}\theta_{W}Z^{\nu}Z_{\nu}\right]\\ &-\frac{1}{4}g^{2}(v+h)^{2}\left[\left(\partial^{\mu}\partial_{\nu}W^{+ \nu}\right)W_{\mu}^{-}+\frac{1}{2}\sec^{2}\theta_{W}\left(\partial^{\mu} \partial_{\nu}Z^{\nu}\right)Z_{\mu}\right]\\ &+...\end{split} \tag{43}\] The details of the computations are given in Appendix A. The Standard Model displays explicitly the couplings of the Higgs field with the fermions and gauge bosons involved. Although the mass of the Higgs was unknow til 2012, the picture was already clear and constrained by the fact that the interaction of the Higgs boson with a fermion is linked to the mass of spin half particle via the Yukawa terms in the Lagrangian. On the other side, the masses of the electroweak gauge bosons can be related to the coupling constants of the model \(g,g^{\prime}\), the hypercharge \(Y\) of the Higgs doublet and its vacuum expectation value \(v\). Basically, the search of the Higgs boson was based on its decays rates and its production rate. The part of the Lagrangian implying the Higgs field, after spontaneous symmetry breaking reads \[{\cal L}_{\rm Higgs}=\frac{1}{2}\partial_{\mu}h\,\partial^{\mu}h-\frac{\lambda} {4}\left(h^{4}+4v^{2}h^{2}+4vh^{3}\right)+\frac{1}{8}\left(v+h\right)^{2}\left[ 2g^{2}W_{\mu}^{+}W^{-\mu}+\left(g^{2}+g^{\prime 2}\right)Z_{\mu}Z^{\mu}\right] \tag{44}\] with \(m_{h}=\sqrt{2\lambda}v\). The main decays rates are given by[29]: * **The decay mode into two fermions** The rate is found to be \[\Gamma(h\longrightarrow f\overline{f})=\left(\frac{\alpha m_{h}}{8\sin^{2} \theta_{W}}\right)\frac{m_{f}^{2}}{m_{W}^{2}}\left(1-\frac{4m_{f}^{2}}{m_{h}^ {2}}\right)^{3/2}N_{c}(f)\] (45) with \(\alpha=\frac{1}{137},N_{c}(f)=1\) for leptons, \(N_{c}(f)=3\), for quarks and \(2m_{f}\leq m_{h}\) (this excludes the top quark). * **The decay mode into two gluons** \[\Gamma(h\longrightarrow 2g)=\left(\frac{\alpha m_{h}}{8\sin^{2}\theta_{W}} \right)\frac{m_{f}^{2}}{m_{W}^{2}}.\frac{\alpha_{S}}{9\pi^{2}}\bigg{|}\sum_{q} I\left(\frac{m_{h}^{2}}{m_{q}^{2}}\right)\bigg{|}^{2}\] (46) with \(I(x)\) a form factor. Among its properties, one has \(\lim_{x\longrightarrow 0}I(x)=1\) and \(\lim_{x\longrightarrow\infty}I(x)=0\). As a consequence, the heavy quarks contribute more than the light ones to this process. The inverse of this process can be seen as a precessus creating Higgs from two gluons. * **The decay mode into two photons** \[\Gamma(h\longrightarrow 2\gamma)=\left(\frac{\alpha m_{h}}{8\sin^{2}\theta_{W}} \right)\frac{m_{f}^{2}}{m_{W}^{2}}.\frac{\alpha_{S}}{18\pi^{2}}\bigg{|}\sum_{ F}Q_{f}^{2}N_{c}(f)-\frac{21}{4}\bigg{|}^{2}.\] (47) We didn't include the point of view of [29]\(m_{h}>2m_{W}\), since it is excluded by LHC. Going beyond the Standard Model does not mean one is free to do whatever one wants. The discovery of the Higgs boson poses stricts constraints in this matter. That fakeon hypothesis, in the minimal setting studied here, implies a modification of the Higgs propagator and new interactions with a common factor \(\xi\): \[{\cal L}_{\rm int}=\xi\left({\cal L}_{1}+{\cal L}_{2}+...+{\cal L}_{24}\right). \tag{48}\] The fakeon hypothesis led us to introduce the term \(\left(D_{\mu}\Phi\right)^{\dagger}\;D^{\mu}\left(D_{\nu}D^{\nu}\Phi\right)\). It should be noted that this quantity is not always a real quantity. That led us to introduce the quantity \[{\cal L}_{\star}=\left(D_{\mu}\Phi\right)^{\dagger}\;D^{\mu}\left(D_{\nu}D^{ \nu}\Phi\right)+h.c. \tag{49}\] which has the proper behaviour. The equations (45), (46) and (47), tested experimentally, are in accord with the theoretical body of the Standard Model. The fakeon hypothesis, as treated here, introduces a new constant and some non trivial interactions. It is to be seen how that hypothesis changes the predictions of the rates given in (45), (46), (47). In principle, those rates will depend on the new parameter \(\gamma^{\prime}\), with \(\gamma^{\prime}=0\) giving simply the Standard Model. Such that the quantity \(\gamma^{\prime}\) has to be very small \(|\gamma^{\prime}|<<1\). ## 5 Equations of Motion and Conserved Quantities \[\Pi=\frac{\partial{\cal L}}{\partial(\partial_{0}\varphi)}. \tag{50}\] \[\left[\varphi\left(\vec{x},t\right),\Pi\left(\vec{y},t\right)\right]=\delta^{ 3}\left(\vec{x}-\vec{y}\right). \tag{51}\] \[{\cal L}={\bf A}\left(\partial_{\mu}\varphi\right)\left(\partial^{\mu} \varphi\right)+{\bf B}\varphi^{2}+{\bf C}\left(\partial_{\mu}\varphi\right) \square\left(\partial^{\mu}\varphi\right). \tag{52}\] To understand the complete physical meaning of the Lagrangian given by (52) (with C=0), it is necessary to associate to its conserved quantities as indicated by the Noether theorem. The invariance of the Lagrangian under the Poincare group leads to the following conserved quantities: 1. The moments \(P_{\mu}\), related to the invariance under translations; 2. The "generalized" angular momentum \(M_{\mu\nu}\), related to the Lorentz transformations. When \(\mu\) and \(\nu\) are spatial indices, it is simply the angular momentum. These conserved quantities, at the classical level, obey the following relations, which give the representation of the Poincare group. \[\left[P_{\mu},P_{\nu}\right] = 0; \tag{53}\] \[\left[M_{\mu\nu},P_{\tau}\right] = \eta_{\mu\tau}P_{\nu}-\eta_{\nu\tau}P_{\mu};\] (54) \[\left[M_{\mu\nu},M_{\sigma\rho}\right] = \eta_{\nu\sigma}M_{\mu\rho}+\eta_{\mu\rho}M_{\nu\sigma}-\eta_{ \nu\rho}M_{\mu\sigma}-\eta_{\mu\sigma}M_{\nu\rho}. \tag{55}\] At the quantum level, when the \(\varphi\) field and its conjugate \(\varPi\) have been promoted to the rank of quantum operators. These obey the same relation as the one given by the expression (51) to the nearest \(i\hbar\) (in the natural unit system: \(\hbar=c=1\)). The dilemma we are facing is the following: the simplest Lagrangian involving the _fakeons_ is based on a derivative of the third order field. The equations of motion are of order 4. The Lagrangian is of the form (when \(C\neq 0\)): \[\mathcal{L}\left(\varphi,\partial_{\mu}\varphi,\partial_{\mu}\partial_{\mu} \varphi,\partial_{\mu}\partial_{\mu}\varphi,\partial_{\mu}\partial_{\mu} \varphi\right).\] This is different from the usual framework where only the first derivatives of the field appear. When \(C\neq 0\), the \(\varphi\) field has more degrees of freedom and therefore the commutation relation given in (51) does not "settle" everything. The procedure we choose is to solve the equation of motion for \(C\neq 0\). The general solution will depend on some arbitrary constants. The conserved quantities coming from the Poincare symmetry are also derived. \[\left\{\begin{array}{l}x^{\mu}\longrightarrow x^{\prime\mu}=x^{\mu}+ \delta x^{\mu}\\ \varphi_{r}(x)\longrightarrow\varphi_{r}^{\prime}(x^{\prime})=\varphi_{r}(x) +\delta\varphi_{r}(x)\end{array}\right.. \tag{56}\] Let \[\delta\varphi_{r}=\varphi_{r}^{\prime}(x^{\prime})-\varphi_{r}(x). \tag{57}\] \[\delta\varphi_{r}(x)=\tilde{\delta}\varphi_{r}(x)+\frac{\partial\varphi_{r}( x)}{\partial x^{\mu}}\delta x^{\mu}. \tag{58}\] \[\delta\varphi_{r}=\ \partial_{\mu}\delta\varphi_{r}(x)=\frac{\partial\varphi_{r} (x)}{\partial x^{\nu}}\frac{\partial\delta x^{\nu}}{\partial x^{\mu}}+\delta \left(\partial_{\mu}\varphi_{r}(x)\right). \tag{59}\] \[\delta S=0= \int d^{4}x\left[\frac{\partial\mathcal{L}}{\partial\varphi_{r}( x)}\tilde{\delta}\varphi_{r}(x)+\frac{\partial\mathcal{L}}{\partial \partial_{\mu}\varphi_{r}(x)}\tilde{\delta}\partial_{\mu}\varphi_{r}(x)+\frac {\partial\mathcal{L}}{\partial\left(\partial_{\mu}\partial_{\nu}\varphi_{r}(x )\right)}\tilde{\delta}\left(\partial_{\mu}\partial_{\nu}\varphi_{r}(x) \right)\right. \tag{60}\] \[\left.+\frac{\partial\mathcal{L}}{\partial\left(\partial_{\mu} \partial_{\nu}\partial_{\sigma}\varphi_{r}(x)\right)}\tilde{\delta}\left( \partial_{\mu}\partial_{\nu}\partial_{\sigma}\varphi_{r}(x)\right)+\partial_{ \mu}\mathcal{L}\delta x^{\mu}+\mathcal{L}\partial_{\mu}\delta x^{\mu}\right].\] \[\delta S=\int d^{4}x\left[\Omega^{r}\tilde{\delta}\varphi_{r}+\partial_{\mu }f^{\mu}\right]. \tag{61}\] \[\frac{\partial\mathcal{L}}{\partial\varphi_{r}}-\partial_{\mu}\left(\frac{ \partial\mathcal{L}}{\partial\left(\partial_{\mu}\varphi_{r}\right)}\right)+ \partial_{\mu}\partial_{\nu}\left(\frac{\partial\mathcal{L}}{\partial\left( \partial_{\mu}\partial_{\nu}\varphi_{r}(x)\right)}\right)-\partial_{\mu} \partial_{\nu}\partial_{\tau}\left(\frac{\partial\mathcal{L}}{\partial\left( \partial_{\mu}\partial_{\nu}\partial_{\tau}\varphi_{r}(x)\right)}\right)=0. \tag{62}\] \[\Theta_{\alpha}^{\mu}=\ \ P^{r,\mu}\partial_{\alpha}\varphi_{r}+R^{r,\mu\nu} \partial_{\nu}\partial_{\alpha}\varphi_{r}+C^{r,\mu\nu\tau}\partial_{\nu} \partial_{\tau}\partial_{\alpha}\varphi_{r}-\eta_{\alpha}^{\mu}\mathcal{L}. \tag{63}\] \[P_{\alpha}=\int_{V}d^{3}x\ \Theta_{\alpha}^{0}. \tag{64}\] \[\Lambda_{\nu}^{\mu}=\delta_{\nu}^{\mu}+\eta^{\mu\rho}\delta\omega_{\rho\nu}. \tag{65}\] Conclusion The question of how the fakeon hypothesis can be implemented in Particle Physics has already led to many works [3, 4, 5, 17]. Some aspects have been considered in relation to renormalizability and quantum gravity. In this work, we analysed the hypothesis of the Higgs field of the Standard Model being a fakeon. For this, we relied on the simplest lagrangian which produces a propagator with the desired behaviour for a real scalar field. This Lagrangian involves up to third derivatives in the field. After that, we proceeded to consider a doublet of complex scalar fields with a similar Lagrangian. We then went on to make that symmetry local, using the standard replacement of the partial derivative \(\partial_{\mu}\) by a covariant derivative \(D_{\mu}\). These new interactions have of course to be added to those already present in the standard electroweak theory. Each of them can be written as a product of two factors. In the products, he first factors are polynomial in the gauge bosons fields and its derivatives, and the second ones are polynomial in the expression of these interactions. The second part of our work dealt with the hypothesis that of all the particles of the Standard Model, only the Higgs was a fakeon. The Higgs boson is the last particle to have been detected at the CERN LHC experiments, ATLAS and CMS, in accordance with the Standard Model. Many extensions of the Standard Model exist. The fakeon hypothesis can be considered as one of them. Future experiments (FCC-ee or ILC or CLIC) will tell if such an approach is relevant [25, 26, 27]. The initial Lagrangian contains third order derivatives of the scalar field. This leads to an interaction Lagrangian containing many components. However, each of them can be written as a product of two factors. The first such factors depend solely on the electroweak bosons \((W^{+},W^{-},Z,A)\). The second ones only on the Higgs field and its derivatives. The first terms evoked above are polynomials in the electroweak gauge bosons and their derivatives. The degree of these polynomials goes from one to four while the derivative order goes from zero to two. The same happens for the Higgs field. The corresponding polynomials are all quadratic in terms of degree. The derivative order goes from zero to three. These two statements can be considered as the main result of this work. Of course, this work is only a first step. One needs to study in detail how the strength of the interactions deduced from the fakeon hypothesis can be restricted by the available experimental data. Another, natural step should be to implement the fakeon hypothesis for the other particles of the Standard Model. There is a priori no fundamental reason for the Higgs boson to behave in such a different manner compared to the other particles. The last point is more formal. It is related to the general Lagrangian needed for the fakeon hypothesis and the associated conserved quantities. And not only for the scalar field. We are planing to adress these questions in a near future. Acknowledgement The authors of this paper are greatful for the financial support provided by the CNRS/DERCI (France). Our project was supported under their IEA (International Emergency Action) and their DSCA (Dispositif de Soutien aux Collaborations avec l'Afrique sub-saharienne) programs.
2305.09051
Vortex wake patterns in superfluid $^{4}He$
Excitations in the form of quantized vortex rings are known to exist in superfluid $^{4}He$ at energies and momenta exceeding those of the Landau phonon-roton spectrum. They form a vortex branch of elementary excitations spectrum which is disconnected from the Landau spectrum. Interference of vortex ring excitations determines wake patterns due to uniformly traveling sources in bulk superfluid at low speeds and pressures. The dispersion law of these excitations resembles that of gravity waves on deep water with infrared wave number cutoff. As a result, vortex wake patterns featuring elements of the Kelvin ship wake are predicted. Specifically, at lowest speeds the pattern with fully developed transverse and diverging wavefronts is present. At intermediate speeds transverse wavefronts are absent within a cone whose opening angle increases with the source velocity. At largest speeds only diverging wavefronts confined within a cone whose opening angle decreases with the source velocity are found. When experimentally observed, these changes in appearance of wake patterns serve as indicators of the beginning part of the vortex branch of elementary excitations.
Eugene B. Kolomeisky
2023-05-15T22:24:19Z
http://arxiv.org/abs/2305.09051v2
# Vortex wake patterns in superfluid \({}^{4}He\) ###### Abstract Elementary excitations in the form of quantized vortex rings exist in superfluid \({}^{4}He\) at energies and momenta exceeding those of the Landau phonon-roton spectrum. Their interference determines wake patterns due to uniformly traveling sources in bulk superfluid at low speeds and pressures. The dispersion law of these excitations resembles that of gravity waves on deep water with infrared wave number cutoff. As a result, wake patterns featuring elements of the Kelvin ship wake are predicted. Specifically, at lowest speeds the pattern with fully developed transverse and diverging wavefronts is present. At intermediate speeds transverse wavefronts are absent within a cone whose opening angle increases with the source velocity. At largest speeds only diverging wavefronts confined within a cone whose opening angle decreases with the source velocity are found. When experimentally observed, these changes in appearance of wake patterns serve as indicators of the beginning part of the vortex branch of elementary excitations. Landau's insight that elementary excitations determine low-energy properties of many-body interacting systems is a paradigm of physics [1]. One such property probing elementary excitations spectra that is common to a number of physical systems is far-field wake patterns representing response to uniformly traveling external disturbances. Familiar examples include Mach waves due to a supersonic projectile [2], Cherenkov radiation emitted by a rapidly moving charge [3], and ship waves [4; 5], all of which are examples of coherent generation of the medium's elementary excitations [6]. If the relevant elementary excitations are characterized by the dispersion law \(\omega({\bf k})\) (where \(\omega\) is the frequency and \({\bf k}\) is the wave vector), a wake is present whenever there is a wave mode whose phase velocity \({\bf k}\omega/k^{2}\) (here \(k\)=\(|{\bf k}|\)) matches the projection of the velocity of the source \({\bf v}\) onto the direction of radiation \({\bf k}/k\)[5; 6]. For a source moving with velocity \({\bf v}\), this requires the existence of a wave vector \({\bf k}\) satisfying the Mach-Cherenkov-Landau (MCL) resonant radiation condition \[\omega({\bf k})={\bf k}\cdot{\bf v}\equiv kv\cos\varphi. \tag{1}\] Here \(v=|{\bf v}|\) and \(\varphi\) is the angle between the vectors \({\bf k}\) and \({\bf v}\). Eq.(1) also describes the onset of Landau damping in a plasma [7] and the breakdown of superfluidity [1]. When the excitation spectrum is linear, \[\omega=uk \tag{2}\] where \(u\) is the speed of sound (or light), the condition (1) becomes \(\cos\varphi=u/v\). It can be satisfied only if \(u\leqslant v\), i.e. there is a finite critical velocity to generate a wake pattern, \(v_{c}=u\). Recently developed theory [8] makes it possible to understand wake patterns due to excitations of general isotropic dispersion laws. One of its applications included wake patterns in superfluid \({}^{4}He\) produced by a small uniformly moving source or equivalently by a stationary obstacle in the presence of background flow. The input to the theory [8] is the excitation spectrum of \({}^{4}He\). The latter, as was recognized by Landau [9], is a superposition of two continuous spectra: one corresponding to potential flow and one, at a higher energy, corresponding to vortex motion. The potential flow part of the spectrum known as the Landau phonon-roton spectrum has the following properties [1; 9]: For small wave numbers \(k\) the elementary excitations are phonons with a linear spectrum (2). As the wave number increases, the function \(\omega({\bf k})=\omega(k)\) reaches a maximum, followed by a "roton" minimum at some \(k_{0}\). In the vicinity of \(k=k_{0}\) it is customary to expand the spectrum in powers of \(k-k_{0}\): \[\omega=\frac{\Delta}{\hbar}+\frac{\hbar(k-k_{0})^{2}}{2\mu} \tag{3}\] where \(\Delta\), \(\mu\) and \(k_{0}\) are empirically known parameters depending on the pressure [10] such as with good accuracy the critical velocity to generate a wake pattern or equivalently to destroy superfluidity is given by the slope of the straight line connecting the origin \(\omega(0)=0\) to the roton minimum \(\omega(k_{0})=\Delta/\hbar\)[8]: \[v_{c}=\frac{\Delta}{\hbar k_{0}}. \tag{4}\] The Landau critical roton velocity (4) has been attained only in experiments with isotopically pure \({}^{4}He\) at a pressure exceeding 12 bars [11]. Likewise, roton wake patterns predicted in Ref. [8] can be observed in their pure form only at elevated pressure. The reason being is that excitations of vortex nature, namely vortex rings [12], play dominant role at low pressures. Originally Landau argued [9] that the vortex motion branch of the elementary excitations spectrum starts out according to Eq.(3) with \(k_{0}=0\). However experimental data on second sound indicated that it is Eq.(3) that describes short-wavelength part of the potential flow branch of the excitation spectrum; the roton does not represent vortex motion as its group velocity is zero while vortex ring cannot be at rest. A qualitative theory of the Landau phonon-roton spectrum has been given by Feynman and its improvements have been proposed since then [12]. Landau also pointed out that just as there is no continuous transition in quantum mechanics between states with zero angular momentum and states with finite angular momentum, there may not be a continuous transition between the potential flow and the vortex motion branches of the excitation spectrum [9]. The phonon-roton spectrum is known to end at \(\omega=2\Delta/\hbar\), \(k=k_{c}\)[1]. Specifically, extrapolated to zero pressure values of the parameters are \(\Delta=0.74\ meV\), \(k_{0}=1.9\ \AA^{-1}\), and \(k_{c}=3.6\ \AA^{-1}\)[10]. Pitaevskii [13] discussed a possibility and Marchenko and Parshin [14] further argued that at low pressure the spectrum of vortex rings begins at a wave number \(k_{b}>2k_{0}\) and an energy \(\hbar\omega_{b}\) of several \(\Delta\). The coordinates of the beginning point of the spectrum were estimated as \(k_{b}=4.3\ \AA^{-1}\) and \(\hbar\omega_{b}=2.2\ meV\)[14]. In the macroscopic approximation the spectrum of quantized vortex rings is given by [1] \[\omega=2\pi^{2}\frac{\rho_{s}\hbar}{m^{2}}R\ln\frac{R}{a},\ \ \ \ \ k=2\pi^{2} \frac{\rho_{s}}{m}R^{2} \tag{5}\] where \(\rho_{s}\) is the superfluid density, \(m\) is the mass of the \(He\) atom, \(R\) is the radius of the vortex ring playing a role of the parameter, \(a\) is the core radius of the vortex having atomic scale, and it is assumed that \(\ln(R/a)\gg 1\). Eqs.(5) conveys the fact that the larger the ring radius, the larger are its wave number and frequency (energy). Existence of the beginning point of the spectrum \(k=k_{b}\) then corresponds to a ring of smallest radius \(R_{b}\). The latter can be computed from the \(k(R)\) dependence (5). Indeed, employing extrapolated to zero pressure value of the superfluid density \(\rho_{s}=0.145\ g/cm^{3}\)[1] and the estimate \(k_{b}=4.3\ \AA^{-1}\)[14] one finds \(R_{b}=3.2\ \AA^{1}\)[14]. High energy part of the vortex branch of the elementary excitation spectrum (5) has been observed in experiments of Rayfield and Reif [15] who studied the mobility of ions in Helium. They established that at energies of 1.5 to 45 \(eV\) ions create vortex rings, attach to them and move together. Charged rings of these energies have radii varying from \(5\times 10^{-6}\) to \(10^{-4}\ cm\) which correspond to very large wave numbers \(k\gg k_{b}\). So far the beginning part of the vortex branch of the elementary excitations spectrum has not been observed. The goal of this work is to determine the geometry of wake patterns in unbounded liquid visible in variation of density due to interference of vortex ring excitations. As we shall see, as the source velocity \(v\) increases, wake patterns undergo qualitative changes which have their origin in the existence of the beginning point of the spectrum \(k=k_{b}\) thus opening an opportunity to its observation. If the requirement \(\ln(R/a)\gg 1\) is relaxed to \(\ln(R/a)\geq 1\) where the expression (5) remains qualitatively correct, one can see that for a ring of radius \(R^{*}=ea\) its group \(d\omega/dk\) and phase \(\omega/k\) velocities coincide. Corresponding wave number evaluated from the equation for \(k(R)\) (5) is \(k^{*}=2\pi^{2}\rho_{s}e^{2}a^{2}/m\); specifically, \(d\omega/dk<\omega/k\) if \(k>k^{*}\). Employing experimentally deduced value of \(a=0.7\ \AA\)[14; 15] one finds that \(k^{*}=1.6\ \AA^{-1}\). Since the beginning point of the spectrum is restricted by the inequality \(k_{b}>2k_{0}\ (=3.8\ \AA^{-1}\)) [13], we infer that for \(k\geq k_{b}\) the group velocity is smaller than the phase velocity. This means that the equation \(\omega(k)=kv\) determining boundary values of the wave numbers satisfying the MCL condition (1) has either one solution or no solutions at all. With this in mind the effect of the beginning point of the spectrum on wake patterns can be in part anticipated based on the observation that the velocity \[v_{b}=\frac{\omega_{b}}{k_{b}} \tag{6}\] corresponding to the slope of a straight line connecting the origin \(\omega(0)=0\) to the beginning point of the vortex branch of the spectrum \(\omega(k_{b})\equiv\omega_{b}\) plays a special role. Indeed, if \(v<v_{b}\), then only vortex excitations with sufficiently large wave numbers satisfy the MCL condition (1) and participate in making wake patterns. On the other hand, if \(v>v_{b}\), the MCL conditions holds for excitations of all allowed wave numbers, \(k\geq k_{b}\). A relationship between the Landau critical roton velocity \(v_{c}\) (4) and the velocity \(v_{b}\) (6) is not yet established since coordinates of the beginning point of the vortex spectrum are not reliably known. If \(v_{b}<v_{c}\), as the source velocity \(v\) increases, vortex wake patterns will undergo qualitative change before the rotons of the Landau branch are generated. The reverse is true if \(v_{b}>v_{c}\). The latter scenario will be realized if the estimates \(k_{b}=4.3\ \AA^{-1}\) and \(\hbar\omega_{b}=2.2\ meV\)[14] are accurate. Indeed, extrapolated to zero pressure Landau critical roton velocity (4) is \(v_{c}=60\ m/s\) while \(v_{b}=79\ m/s\). However even in this case interfering roton and vortex wake patterns can be distinguished thanks to unique features of vortex wake patterns described below. Regardless of a relationship between \(v_{b}\) (6) and \(v_{c}\) (4), this opens an experimental opportunity to observe the beginning of the vortex branch of the excitation spectrum. Eliminating the ring radius between \(\omega\) and \(k\), Eqs.(5) can be brought into an explicit \(\omega(k)\) dependence: \[\omega^{2}=\frac{\pi^{2}\hbar^{2}\rho_{s}}{2m^{3}}k\ln^{2}\left(\frac{mk}{2 \pi^{2}\rho_{s}a^{2}}\right). \tag{7}\] Since the logarithm is a slowly varying function of its argument and the logarithmic approximation \(\ln(R/a)\gg 1\) may be only qualitatively correct in the vicinity of the beginning point of the spectrum, the dispersion law of vortex ring excitations will be approximated by an expression \[\omega^{2}=gk,\ \ \ g\simeq\frac{\hbar^{2}\rho_{s}}{m^{3}},\ \ \ k\geq k_{b} \tag{8}\] so that the characteristic velocity (6) is given by \[v_{b}=\sqrt{\frac{g}{k_{b}}}. \tag{9}\] If the parameter \(g\) would be the free fall acceleration, and existence of the wave number cutoff \(k_{b}\) would be ignored, the dispersion law (8) would be identical to that of gravity waves on deep water [2; 5]. The parameter \(g\) in Eq.(8) is however eleven orders of magnitude larger than the free fall acceleration which will have dramatic effect on the spatial scale of wake patterns. It is straightforward to verify that for the dispersion law (8) the MCL condition (1) becomes \(\cos\varphi=\sqrt{g/kv^{2}}\) and can always be satisfied for sufficiently large wave number \(k\). As a result the vortex wake appears for any velocity, thus implying that \(v_{c}=0\). The problem of determining wake patterns due to interference of the vortex ring excitations in the approximation given by Eq.(8) is then similar to that determining ship wakes. The important difference lies in the nature of the wave number cutoff. Indeed, wake patterns due to smooth traveling pressure disturbance can be understood in terms of an effective _ultraviolet_ wave number cutoff having its origin in spatial scale (s) of the pressure distribution [16]. Similarly, there exists an ultraviolet wave number cutoff in the problem of determining wake patterns due to a charge traversing a two-dimensional electron gas [17]; here the cutoff originates from the Debye screening. In both of these cases, the role of the cutoff wavenumber is to suppress sufficiently short-wavelength excitations from participating in forming wake patterns. In the case of present interest (8), however, the wave number cutoff at \(k=k_{b}\) is _infrared_ which means sufficiently long-wavelength excitations are excluded from participating in forming wake patterns. The outcome can be inferred from the classic analysis of wake patterns due to traveling point pressure source [4; 5]; below we follow the approach of Refs. [8; 16]. If \(k_{b}=0\) the parameters of the problem such as \(g\) and \(v\) can be combined into a single length scale \[l=\frac{v^{2}}{g} \tag{10}\] called the Kelvin length [16] which (for point source) determines fine structure of the resulting wake pattern. For example, for \(v\simeq 10\)\(m/s\) one finds \(l\simeq 1\)\(\AA\) which means that fine structure of the wake can be only resolved with the help of X-ray or neutron scattering. In the presence of finite wave number cutoff \(k=k_{b}\) there exists an additional length scale \(\simeq k_{b}^{-1}\) which can be combined with the Kelvin length (10) to form a dimensionless combination \[\mathcal{M}=\sqrt{lk_{b}}=\frac{v}{v_{b}} \tag{11}\] where in the second representation Eqs.(9) and (10) were employed. If \(1/k_{b}\) would be a scale characterizing the source, then \(\mathcal{M}\) would be a Froude number [5]. However since the scale \(1/k_{b}\) is the property of the medium, it is more appropriate to call the parameter \(\mathcal{M}\) a Mach number, the ratio of the source velocity to the characteristic velocity of the medium. The \(\mathcal{M}=0\) case then would be closely related to the original Kelvin wake pattern due to traveling point pressure source [4; 5]; specifically, in any plane intersecting the three-dimensional wake pattern along the path of the source one finds the Kelvin wake [8]. Hereafter the length is measured in units of the Kelvin length (10) and wave numbers in units of \(1/l\). Then the constraint \(k\geqslant k_{b}\) in Eq.(8) becomes \(k\geqslant\mathcal{M}^{2}\). Wake patterns are stationary in the reference frame of the source and their geometry can be determined via Kelvin's stationary phase argument [4; 5; 8; 16]. It is a condition of stationary of the phase \(f=\mathbf{k}\cdot\mathbf{r}\) subject to the MCL condition (1) (here \(\mathbf{r}\) is the three-dimensional position vector measured relative to the location of the source). Assuming the source is moving in the positive \(x\) direction, the stationary phase condition for the problem at hand (8) has the form [8; 16]: \[-\frac{\varrho}{x}=\frac{\sqrt{k-1}}{2k-1},\quad k\geqslant\mathcal{M}^{2} \tag{12}\] where \(\varrho\) is the distance from the \(x\)-axis (the pattern has axial symmetry around the path of the source). Since the phase \(f\) is constant along the wavefront, Eq.(12) and \(f=\mathbf{k}\cdot\mathbf{r}\) can be solved relative to \(x\) and \(\varrho\) to give the equation for the wavefronts in parametric form: \[x(k)=-2\pi n\frac{2k-1}{k^{3/2}},\ \varrho(k)=2\pi n\frac{\sqrt{k-1}}{k^{3/2}},k\geqslant\mathcal{M}^{2} \tag{13}\] where \(f=-2\pi n\) and \(n\) is positive integer [8; 16]. Eqs. (12) and (13) can have solutions only for \(x<0\) (which is where the wake is) satisfying the inequality \(k\geqslant 1\) (\(k\geqslant 1/l\) (10) in the physical units). The latter fact means that even without the additional constraint \(k\geqslant\mathcal{M}^{2}\), wake patterns are a result of interference of sufficiently short-wavelength excitations. The functional dependence (12) is shown in Figure 1. Its right-hand side vanishes at \(k=1\) and \(k\to\infty\) reaching maximum value of \(1/2\sqrt{2}\) at \(k=3/2\). Thus, Eq.(12) has two solutions for \(0\leqslant-\varrho/x<1/2\sqrt{2}\), corresponding to transverse (t) (ascending orange part of the curve) and diverging (d) (descending red part of the curve) wavefronts. These solutions merge at \(-\varrho/x=1/2\sqrt{2}\), while none are found above this value. From this one obtains Kelvin's classic result that the wake is confined by the angle \(2\arctan(1/2\sqrt{2})\approx 39^{\circ}\). The effect of the additional constraint \(k\geqslant\mathcal{M}^{2}\) in (12), representing existence of the beginning point of the spectrum (8), can be now simply understood. Indeed, if \(\mathcal{M}^{2}<1\) the constraint is unimportant and all the excitations satisfying the inequality \(k\geqslant 1\) participate in producing the wake which (in any plane containing the path of the source) will be the Kelvin wake. Specifically, transverse wavefronts connect the edges of the pattern (\(k=3/2\)), across the path of the source (\(k=1\)) as shown in Figure 2a in orange. Their periodicity along the path of the source (\(\varrho=0\)) in the original physical units, \(2\pi l\), is fixed by the Kelvin length (10). Additionally, diverging wavefronts connect the edges of the pattern (\(k=3/2\)) to the source (\(k=\infty\)) as shown in Figure 2a in red. Figures 1 and 2 are color coordinated to make it clear that interference of waves whose wave numbers belong to a range marked in given color in Figure 1 produces wavefronts colored in the same fashion in Figure 2. Wake pattern in Figure 2a undergoes qualitative change when the Mach number \(\mathcal{M}\) exceeds unity because now the constraint \(k\geqslant\mathcal{M}^{2}\) is stronger than the built-in condition \(k\geqslant 1\) in Eqs.(12) and (13). The critical value \(\mathcal{M}=\mathcal{M}_{1}=1\) corresponds to the source velocity \(v=v_{1}\) matching the characteristic velocity (6) as was already anticipated. If \(\mathcal{M}>1\), there are still two cases to consider: (i) If \(1<\mathcal{M}\leqslant\sqrt{3/2}\), then transverse wavefronts no longer reach the path of the source. According to the condition of stationary phase (12) they start out on the conical surface \[-\frac{\varrho}{x}=\frac{\sqrt{\mathcal{M}^{2}-1}}{2\mathcal{M}^{2}-1} \tag{14}\] corresponding to \(k=\mathcal{M}^{2}\) and extend to the edges of the pattern, \(k=3/2\). Transverse wavefronts are absent within the cone (14) whose opening angle is given by \[2\theta=2\arctan\frac{\sqrt{\mathcal{M}^{2}-1}}{2\mathcal{M}^{2}-1} \tag{15}\] The outcome is shown in Figure 2b. When experimentally observed, change in appearance of wake patterns between Figures 2a and 2b would be an indirect evidence of the beginning part of the vortex branch of the elementary excitations spectrum. (ii) Wake pattern in Figure 2b undergoes another qualitative change when the Mach number \(\mathcal{M}\) exceeds \(\sqrt{3/2}\) because now transverse wavefronts completely disappear. The critical value \(\mathcal{M}=\mathcal{M}_{2}=\sqrt{3/2}\) corresponds to the source velocity \(v=v_{2}=\sqrt{3/2}v_{b}\approx 97\ m/s\) where we used already mentioned estimate \(v_{b}=79\ m/s\). For \(\mathcal{M}>\sqrt{3/2}\) the wake is made only of diverging wavefronts as shown in Figure 2c. The entire wake pattern is now confined within a cone defined by Eq.(14) whose opening angle (15) decreases with the Mach number; in the \(\mathcal{M}\gg 1\) limit the opening angle vanishes as \(1/\mathcal{M}\). The instants \(v=v_{1}\) and \(v=v_{2}\) when vortex wake patterns undergo qualitative changes in their appearance represent critical phenomena. It is expected that they will be accompanied by singular changes in the wave resistance which we are planning to study in the future. To summarize, we demonstrated that vortex wake patterns in superfluid \({}^{4}He\) exhibit elements of Kelvin ship wake to varying degree depending on the source speed. We explained that changes in appearance of wake patterns with the source speed can serve as an experimental evidence of the beginning part of the vortex excitation spectrum. It is necessary to emphasize that coarse-grained outlines of the wake patterns in Figures 2a-2c should be observable by light scattering techniques while the fine structure of all the discussed wake patterns can be only resolved with the help of \(X\)-ray or neutron scattering. The author is grateful to J. P. Straley for valuable comments.
2308.07080
On (not) deriving the entropy of barocaloric phase transitions from crystallography and neutron spectroscopy
We review well-known signatures of disorder in crystallographic and inelastic neutron scattering data. We show that these can arise from different types of disorder, corresponding to different values of the system entropy. Correlating the entropy of a material with its atomistic structure and dynamics is in general a difficult problem that requires correlating information between multiple experimental techniques including crystallography, spectroscopy, and calorimetry. These comments are illustrated with particular reference to barocalorics, but are relevant to a broad range of calorics and other disordered crystalline materials.
Anthony E. Phillips, Helen C. Walker
2023-08-14T11:25:48Z
http://arxiv.org/abs/2308.07080v1
# On (not) deriving the entropy of barocaloric phase transitions ###### Abstract We review well-known signatures of disorder in crystallographic and inelastic neutron scattering data. We show that these can arise from different types of disorder, corresponding to different values of the system entropy. Correlating the entropy of a material with its atomistic structure and dynamics is in general a difficult problem that requires correlating information between multiple experimental techniques including crystallography, spectroscopy, and calorimetry. These comments are illustrated with particular reference to barocalorics, but are relevant to a broad range of calorics and other disordered crystalline materials. ## 1 Introduction ### Rationale Barocaloric materials, which can be switched between low- and high-entropy states by applying pressure, are promising candidates to replace vapour-compression refrigerants. Until recently, such materials were thought to be rather rare; however, work in the past decade has shown that many, perhaps even most solid-solid phase transitions are in principle barocaloric. [1, 2] The key fundamental challenge facing the field is therefore to understand the entropy changes in known materials well enough to be able to identify, and even design, new materials likely to show phase transitions with large entropy changes. Such entropy changes are determined by a material's atomistic structure and dynamics, which are revealed in extraordinary detail by modern diffraction and spectroscopy experiments. The results of these experiments are thus a natural place to begin building models of entropy. Unfortunately, however, these data, particularly when taken individually, may give an ambiguous or even misleading picture of contributions to a material's entropy. The purpose of this Perspective is to review pitfalls in interpreting crystallographic and spectroscopic data collected from disordered crystalline materials, with particular reference to modelling entropy directly from these data. In the following two subsections, we outline the two main contributions to entropy we will discuss - vibrational and configurational entropy (Section 1.2) - and present a simple toy model of a phase transition that illustrates the ways in which these forms of entropy can occur (Section 1.3). We then consider the effects of each of these contributions, first on crystallography (Section 2), then on neutron spectroscopy (Section 3). ### Decomposing contributions to entropy Of course, the problem of relating entropy to structure is relevant far beyond the specific application to barocalorics. In a sense, the entire discipline of materials chemistry is founded on the understanding that the bulk thermodynamics that govern a substance's stability and reactivity are determined by its atomic-level structure and dynamics. The first step in understanding this relationship is to calculate the internal energy associated with a particular atomic configuration, the realm of quantum chemistry. But when working at constant temperature, as in most practical cases, it is the free energy that must be minimised; in this case, the entropy of an atomic configuration also becomes important and indeed may dominate the free energy difference between competing phases [3, 4, 5]. Accurate determination of the entropy of a given material structure is thus just as important as the internal energy for rationalising known crystal structures, crystal structure prediction and crystal engineering and design [6]. Both energy and entropy are extensive state functions that can therefore be viewed as the sum of contributions from different atoms or physical origins. For instance, the cohesive energy of a molecular salt might include contributions from dispersion forces, hydrogen-bonded pairs, and the Coulomb force; these contributions can be visualised and understood in terms of "energy frameworks" [7, 8]. The entropy can in principle be decomposed in a similar way [9]. Here, we will consider the two main effects in non-magnetic, crystalline materials: vibrational and configurational contributions. This neglects, for instance, the electronic entropy (relevant to systems with spin degrees of freedom) [10] and the entropy of free rotation (most relevant to gas-phase molecules) [11]. Broadly speaking, the vibrational entropy describes a system's movement within a single basin of the energy hypersurface, while the configurational entropy describes its choice between multiple basins. We will expand on this distinction in the following section, but there are three important points to make at the outset. First, different sorts of entropy require different mathematical formalisms [12]. We must correctly identify which is relevant to a given material in order to perform these calculations correctly. Specifically, the configurational entropy is straightforwardly given by the Boltzmann formula \[S=k\ln n, \tag{1}\] where \(k\) is the Boltzmann constant and \(n\) the number of equivalent configurations a structure can occupy.* The vibrational entropy, by contrast, arises from the standard expression for a harmonic oscillator Footnote *: In principle, this formula applies generally, with \(n\) being the number of microstates accessible to a system in a phase space including both positions and momenta. Here we will apply it directly only in the case where \(n\) represents different, equivalent, positions, for instance in crystallographic disorder models. In the context of barocaloric materials, this is equivalent to assuming that the local shape of the potential well is similar enough in the high- and low-entropy phases that vibrational contributions effectively cancel. \[S=k\big{(}(n+1)\ln(n+1)-n\ln n\big{)}, \tag{2}\] where now \(n(E,T)\) is the _number of phonons_ in a particular mode at a given temperature. In practice, many modes will be active and this is more typically used as an integral over the phonon density of states \(g(E)\): \[S=3k\int_{0}^{\infty}g(E)\big{(}(n(E)+1)\ln(n(E)+1)-n(E)\ln n(E)\big{)}\, \mathrm{d}E. \tag{3}\] Second, this description immediately illustrates that there is no clear division between these types of entropy; our perspective may change, for instance, depending on the barrier height between basins (or, equivalently, the temperature). Third and finally, this additive property is a double-edged sword, in the sense that these calculations reduce complex configurations of atoms to single scalar numbers: it is thus easy for errors in different components to accumulate or, conversely, to cancel fortuitously. As a result, it is difficult to judge the accuracy of an entropy calculation, or to troubleshoot it, simply by comparison with the experimentally determined value. ### A toy model Since barocaloric materials require a phase transition between a high- and a low-entropy phase, we consider here a simple model of the entropy in such phase transitions. We assume that the space-group symmetries of the two phases are related by a group-subgroup relationship: in other words, that moving from the low- to the high-temperature phase introduces a specific new symmetry element to the crystallographic structure. In Fig. 1, this symmetry element is the mirror plane that maps the left half of each site to the right. We can distinguish two different ways in which this symmetry change might be achieved. In a _displacive_ phase transition (Fig. 1b), each site locally adopts this higher symmetry. By contrast, in an _order-disorder_ phase transition (Fig. 1c), each atomic site retains the same local symmetry as in the low-symmetry phase; the higher symmetry emerges only on taking the average of every site in the crystal. These two types of phase transition map neatly onto the two contributions to entropy discussed in Section 1.2. In the displacive case, each atom moves freely between the left and right sides of its site, and the entropy difference between the phases is exclusively vibrational. In the order-disorder case, each atom is confined to the well on a single side, just as it was in the low-temperature phase, and the entropy difference is exclusively configurational. As a model of the physical origin of such behaviour, anticipated in Figure 1, suppose that each atom sits in a local double-well potential, and is connected to its nearest neighbours by harmonic springs.[13] The total potential energy is thus \[U=\sum_{i}\left(-\tfrac{1}{2}\kappa_{2}u_{i}^{2}+\tfrac{1}{4}\kappa_{4}u_{i}^ {4}\right)+\tfrac{1}{2}J\sum_{i,j}^{\text{n.n.}}(u_{i}-u_{j})^{2} \tag{4}\] where the \(u_{i}\) represent the displacements of each atom from the centre of its site; \(\kappa_{2}\) and \(\kappa_{4}\) characterise the double-well potential; \(J\) characterises the inter-site "spring" interaction; and the second sum is to be taken over all pairs of nearest neighbours ("n.n.") \(i\) and \(j\). In this model, the competition between the site potentials and the nearest-neighbour interactions is encapsulated in the parameter \(s\equiv\kappa_{2}/8J\). If \(s\ll 1\), the nearest-neighbour interactions dominate over the site potentials; the phase transition is displacive; and the entropy change is entirely vibrational. If \(s\gg 1\), the site potentials dominate; the phase transition is order-disorder; and the entropy is purely configurational.[14] Vitally, though, this model demonstrates that there is a continuum between these two limits, so that a real phase transition may combine features of both sorts. Despite its simplicity, this model is difficult to analyse in full generality. Two ways of simplifying it are to assume that either the _sites_ or the vibrational _modes_ are independent of one another.[13] The first of these, the independent site approximation, gives rise to a mean-field, Landau-like theory. The second gives a theory in terms of harmonic Figure 1: (a) A toy model of a phase transition in which each particle moves in a quartic double-well potential, and is connected to its nearest neighbours by a harmonic spring. Depending on the relative strengths of the double-well and spring potentials, this can give a spectrum of high-temperature phases, ranging from (b) displacive to (c) order-disorder. The average structure is shown on the right-hand side. The new symmetry element introduced in the high-temperature phase is the mirror plane indicated here by a dotted red line. phonon modes. This is not as restrictive as it may at first seem, since it is possible to use quasiharmonic approaches with renormalised mode frequencies that take account of anharmonic behaviour - although this approach depends on being able to select the correct local minimum to explore [15]. We will use this second approach in much of the following analysis. Alternatively, this simplification can be avoided entirely by using molecular dynamics simulation to directly sample the relevant region of the potential energy surface [16]. This is an accurate and very general approach, although it may be expensive depending on the potentials used. In particular, it avoids the need to partition entropy into vibrational and configurational contributions, which could be an advantage particularly for intermediate cases, but which at the same time makes the resulting entropy more difficult to interrogate for individual structural contributions. ## 2 Crystallography The intensities of Bragg peaks depend on the space and time average of the contents of the crystallographic unit cell. Specifically, they are sensitive to the scattering density within this cell. In theory, this scattering density could be represented in terms of a wide range of possible basis functions. In practice, standard crystallographic models represent it as the sum of ellipsoids, each representing a single atom, as a way of achieving a good fit with relatively few refined parameters. The dimensions of these ellipsoids are the atomic displacement parameters (ADPs); these were earlier called "thermal parameters," but the change of name reflects that these represent displacement from the average site for any reason, including static disorder as well as thermal motion. Inasmuch as they do represent thermal vibrations, however, the choice to model each atom as an ellipsoid is equivalent to taking the (quasi)harmonic approximation [17]. We note in passing that an important research theme in crystallography aims to go beyond this "independent atom model," using anisotropic atomic form factors that can be refined against very high-quality data, determined by quantum chemistry calculations, and/or taken from standard reference databases [18, 19]. These methods, however, are not generally appropriate for the highly disordered phases common in barocaloric materials, which occur at relatively high temperatures and for which far fewer independent reflexions (data points) are generally available. There are well-established methods to determine both the vibrational and configurational entropy from such crystallographic models. The vibrational entropy can be calculated either directly from the temperature dependence of the ADPs (normal coordinate analysis) [20, 21] or by refining instead the population of vibrational modes determined from periodic DFT calculations [22]. The configurational entropy, on the other hand, has been related to the Shannon information that is encoded in a crystal structure where atoms are "labelled" by their crystallographic orbits [23, 24]. This can now be calculated routinely by the program crystIT [25]. In crystallographically ordered materials, the representation of the scattering density in terms of atomic ellipsoids is essentially unique. In the context of disordered materials, however, several equally valid representations of the _same_ scattering density may be possible; an unwary scientist may take the differences between these representations more seriously than the underlying data warrant. As an example, consider a hypothetical phase transition that creates a crystallographic mirror plane. In the low-symmetry phase (Fig. 2a), there are clear peaks of scattering density on one or other side of this plane, revealing the locations of atoms; while in the high-symmetry phase, the scattering density is smeared symmetrically across the plane (Fig. 2b). There are at least two plausible crystallographic models of the disordered phase: we might choose to use a single, fully occupied site _on_ the plane (Fig. 2c) or two half-occupied sites on either side of the plane, related by the mirror symmetry (Fig. 2d). These two models appear to correspond neatly to the two extremes of entropy discussed in Section 1: the single-site model to a displacive phase transition in which the entropy is mainly vibrational; the split-site model to an order-disorder phase transition in which the entropy is mainly configurational. For this reason, it is tempting to claim that these two types of phase transition can be distinguished by trying both models and choosing the one with the lower residual (\(R\)-factor). It would, however, be a serious mistake to decide between these two cases on the basis of the crystallographic model alone. The split-site model is likely to give an better fit, regardless of the true distribution of atomic positions, for at least two reasons. First, the split-site model, where the site is on a general position, has nine parameters: three to define the position vector, six to define the ellipsoid, which is represented by a rank-two symmetrical tensor. On the other hand, the single-site model, where the site is on the mirror plane, has only two position parameters (since it is free to move only on the 2D plane) and only four displacement parameters (which can be thought of as the length, width and breadth of the ellipsoid plus a rotation perpendicular to the mirror plane), for a total of only six parameters. The split-site model therefore has three more degrees of freedom, which are available to absorb noise even if they are not required by the true scattering density; it would be expected to give a lower \(R\)-value for this reason alone. At the least, therefore, a statistical correction for this effect is required to meaningfully compare these models [26]. Second, if the atom vibrates freely across the mirror plane, the length of the bond with its neighbour will remain roughly constant, so the distribution of scattering density is likely to be curved or "banana-shaped" (Fig. 2e). If this curvature is pronounced, then it will be better fitted by two ellipsoids at an angle, as in the split-site model (Fig. 2d), than a single ellipsoid perpendicular to the plane, as in the single-site model (Fig. 2c), even if the distribution of scattering density is indeed centered on the mirror plane! In principle, this could be accomodated by describing the scattering density in terms of a more sophisticated basis set, but this is unlikely to be practical for the reasons given above. Suppose, though, that we have distinguished reliably between these two cases, concluding that the entropy is indeed primarily configurational. Even now, we cannot directly calculate the system's entropy. As noted above, the relevant formula for configurational entropy is the Boltzmann equation (1). If the two split sites are independent, each will therefore contribute \(R\ln 2\) to the entropy, for a total of \(R\ln 4\). On the other hand, if (say) steric hindrance prevents the two atoms from being on the same side of the mirror plane at the same time, as suggested by the low-temperature structure (Fig. 2f) - or indeed, in a different situation, if there is an attractive force so that the two atoms _prefer_ to be on the same side of the mirror plane - there will only be two possibilities and the entropy will be only \(R\ln 2\)[27]. Figure 2: An example of a typical phase transition, in which the high-symmetry phase introduces an additional mirror plane (dotted line) compared to the low-symmetry phase. Shown first are the initial structural solutions of (a) the low-temperature phase and (b) the high-temperature phase. In the high-temperature phase, both (c) one-site and (d) two-side disorder models may give reasonable fits, while (e) the true distribution of scattering density may not be ellipsoidal. (f) Close contact between particular partially occupied sites may reduce the configurational entropy. Naturally, a continuum is possible between these two cases: that is, two adjacent atoms may be sterically unfavourable and therefore downweighted without actually being impossible. In other words, the fact that all of the crystallographic split sites in this model have the same average occupancy of \(1/2\) does _not_ mean that all of the possible local configurations have equal probability. In summary, even a well-fitting crystallographic model of a disordered phase may be ambiguous as to whether the entropy is primarily vibrational or configurational. We will now see that exactly the same is true of vibrational spectroscopy. ## 3 Spectroscopy A perfect, static crystal has zero entropy, so that entropic contributions must be inferred indirectly from crystallographic data in the ways discussed above. On the other hand, vibrational spectroscopy inherently reveals entropy due to this vibrational motion, which indeed can be calculated as an integral over the phonon density of states (3). In addition, however, further signatures of disorder are often observed in inelastic neutron spectra, which may reveal additional contributions to the entropy. Specifically, in contrast with the sharp, detailed phonon spectra typically produced by _ab initio_ calculations, those measured experimentally display varying degrees of broadening. Broadening arising from the instrumental resolution can be minimised by good instrument and experimental design, but cannot be removed entirely. It is therefore essential to have a good understanding of the resolution to determine whether the observed broadening has an additional physical origin. Historically, the understanding of the resolution ellipsoid on triple axis spectrometers has been very well understood, allowing an optimised data collection strategy and deconvolution from the data. For time-of-flight direct geometry spectrometer measurements, which yield a survey of excitations over a much larger region of reciprocal space, this has been more complicated, but with the development of software like Euphonic [28], the full \(Q\)-\(E\) resolution (that is, with respect to both momentum and energy transfer) can now be included in a complete data analysis. Once the instrumental resolution has been taken into account, any further broadening must have physical significance for the system being studied. As before, it is convenient to divide the potential causes into two main cases, depending on whether the disorder is dynamic or static on the timescale set by the phonon frequency. The static case corresponds more or less directly to the case of configurational entropy discussed above, while the dynamic case will alter the value of the vibrational entropy. As well as this characteristic time scale, phonons also have a characteristic length scale set by their wavelength. Thus long-wavelength, low-frequency vibrations are likely to sample a range of whatever parameter characterises the disorder (for instance, different site occupancies or molecular orientations), while short-wavelength, high-frequency vibrations may instead reflect a particular local configuration of atoms. Once again, there is no clear dividing line between these two origins. Indeed, a common and technologically relevant intermediate situation involves orientational disorder of molecules in crystals, which may be anywhere from frozen to almost unhindered depending on the phase and temperature, and which will couple to the translations involved in vibrational motion [29]. Furthermore, some forms of disorder may fall into _both_ categories: for instance, isotopic disorder is clearly configurational (static disorder), but also, because of the mass difference between isotopes, disrupts harmonic phonons (dynamic disorder) [30]. Nonetheless, the static and dynamic extreme cases can be thought of in different ways and contribute differently to the entropy, and it is worthwhile to distinguish them by a thorough analysis of the experimental data. We consider first the case of dynamic disorder. Phonons are quantum-mechanical quasiparticles that arise from quantising the vibrational normal modes of a crystal. Typically, atomic displacements in these modes are small, allowing the potential of the lattice to be written in terms of a Taylor expansion about the equilibrium position. The harmonic approximation corresponds to truncating the expansion at the second order term, valid for many small displacements, resulting in a harmonic oscillator. As noted above, this approximation is equivalent to assuming that the vibrational modes do not interact, and gives excitations with infinite lifetimes and sharp peaks. On the other hand, in systems where the potential is appreciably anharmonic, there will be phonon-phonon interactions which will reduce the phonon lifetime, resulting in broadening in energy. Using lowest-order perturbation theory, it can be shown that this damping is linearly proportional to temperature. Furthermore, in the anharmonic case, the phonon frequency is no longer independent of the mode amplitude, so that a frequency shift with temperature is also expected. Such phonon anharmonicity is significant for many important condensed matter phenomena, including negative thermal expansion, ultralow thermal conductivity, high-temperature superconductivity, and soft mode phase transitions. An in-depth understanding of the microscopic mechanism is thus vital for many applications. Phonon anharmonicity has been explored extensively using inelastic neutron scattering, through the measurement of phonon dispersion curves, phonon lifetimes and phonon densities of states. With no claim to comprehensiveness, we give here a few recent examples. In the thermoelectric PbTe, INS reveals strong broadening and softening with increasing temperature, indicative of substantial anharmonicity. This is amplified by nesting between transverse acoustic and optical modes, which enables more three-phonon scattering channels [31]. In the prototypical hybrid perovskite semiconductor MAPI (CH\({}_{3}\)NH\({}_{3}\)PbI\({}_{3}\)), triple-axis measurements show that the acoustic phonon lifetimes are 50-500 times shorter than those in conventional semiconductors, due both to strong phonon-phonon interactions and to coupling with rotations of the methylammonium ions. This could have significant implications for their application due to the impact on thermal conductivity affecting hot carrier cooling [32]. The effects of anharmonicity can also be seen in phonon density of states measurements obtained on polycrystalline samples. In the case of silicon, a systematic softening and broadening are seen in the spectra with increasing temperature, where the shifts to lower energies at high temperatures are too large to be accounted by simple quasiharmonic behaviour, while both the broadening and 80% of the softening can be accounted for by phonon anharmonicity (Fig. 3(a)) [33]. Once identified, anharmonicity can be incorporated in entropy calculations by mapping the energy of the system as a function of the relevant mode amplitudes, and hence calculating the partition function from which the free energy or entropy can be derived [34]. We now move to the second case, broadening of vibrational peaks caused by static disorder. One simple example is the broadening of crystal field levels well beyond the instrumental resolution in the putative quantum spin liquid YbMgGaO\({}_{4}\). This was ascribed to a random crystalline electric field arising from site mixing between Mg\({}^{2\star}\) and Ga\({}^{3\star}\) affecting the local coordination of the Yb\({}_{3}\)\({}^{+}\) ions, suggesting that it was inherent structural disorder responsible for the spin-liquid physics (Fig. 3(b)) [35]. Well-studied early examples of static disorder include the asymmetrical linear molecule N\({}_{2}\)O [36], which freezes into one of two orientations in the solid state, with little short-range order or subsequent reorientation. It has long been recognised that molecular dynamics simulations are an ideal means to study such materials [37, 38], but the potential scope of such simulations has been broadened tremendously both by increases in computing power and by recent developments in interatomic potentials [39]. This in turn makes it possible to examine far more complex functional materials with inherent disorder using INS, and to interpret the resulting data in terms of specific disorder models. The effects of static disorder on the vibrational spectra of crystalline materials have been modelled largely in one of two limits of computational difficulty. First, the virtual crystal approximation looks at the average effect of disorder on phonon dispersion, treating the phonon broadening as a perturbation. This typically works well only for low-frequency phonon modes and for low levels of disorder. Alternatively, supercell-based methods allow disorder to be accounted for explicitly, but are restricted by the computational cost of performing phonon (or molecular dynamics) calculations on large simulation cells. In this second case, the cost may be ameliorated by using a cheaper Hamiltonian. For instance, while density-functional theory is practically restricted to relatively small systems, perhaps \(\sim\) 100-1000 atoms, suitably parametrised empirical potentials are far less expensive and can therefore be used to model larger systems. In either case, "unfolding" the supercell phonon modes onto the Brillouin zone of the original crystal - an approach known as supercell lattice dynamics (SCLD) [40, 41] - will then yield dispersion curves with static disorder accounted for. We recently validated this approach against experimental INS data. Adamantane is a barocaloric plastic crystal that undergoes a phase transition at \(T=208\) K to an orientationally disordered phase [42], in which significant broadening and softening of the acoustic modes is observed. These modes are responsible for most of the vibrational entropy in adamantane, making it important to understand whether these effects arise due to anharmonicity or disorder. The SCLD calculations reproduce a substantial amount of the phonon broadening seen, indicating that it can be attributed in large part to orientational disorder, without the need to further take into account rotation-translation coupling [43]. As simulation methods develop further, it will be possible to explore the phonons in systems with multiple forms of disorder. This has already been demonstrated in the case of the high-entropy alloy FeCoCrMnNi, which as a random solid solution is disordered with respect to atomic size, mass and force constants, but still displays clear features in the phonon spectra. Phonon broadening is observed with an anisotropic \(Q\)-dependence, while no significant temperature dependence is observed for the phonon frequency or spectral shape, leading to the exclusion of anharmonicity or magnetic fluctuations being the dominant mechanism. Using first-principles calculations combined with the itinerant coherent Figure 3: Two examples of broadening of INS peaks due to disorder. (a) In silicon, the peak at 60 meV in the phonon density of states at low temperature (blue) shifts in frequency on heating (red), and is furthermore broadened with respect to the rescaled low-temperature data (red dashed line). Both of these effects are _vibrational_, due to phonon anharmonicity. Data taken from ref. [33]. (b) In YbMgGaO\({}_{4}\), crystalline electric field excitations of the Yb\({}^{3+}\) ion (integrated over \(4\leq Q\leq 6\) Å\({}^{-1}\), blue) are broader than the instrumental resolution function (orange). This time the effects are due to _configurational_ disorder due to site mixing between Mg\({}^{2+}\) and Ga\({}^{3+}\) ions, although the broadening is superficially similar to (a). Data taken from ref. [35]. potential approximation and supercell phonon-unfolding methods [44], it could be demonstrated that force-constant disorder is predominantly responsible for the phonon lifetime reduction [45]. We note finally that the phonon formalism applies specifically to crystalline materials, even if disordered; for amorphous materials, other descriptions of the excitations will be more suitable [46]. Indeed, these may be preferable even with impurity concentrations as low as a few percent [47]. Such alternative models, are, however, beyond the scope of this brief perspective. ## 4 Conclusions Structural disorder reveals itself with well-known signatures in both crystallographic and spectroscopic data: typically, crystallographic models show large ADPs and atom sites that "may be split", while excitation peaks are broadened. Attempting to determine a material's entropy directly from such data, however, can be dangerous, specifically because configurational and vibrational components are easy to confuse but contribute in very different ways to the total entropy. Of course, diffraction and spectroscopy remain vital tools in characterising disordered phases, including those of barocaloric materials. Certainly, any convincing mechanism for a giant entropy change must be consistent with both crystallography and spectroscopy. One way to be more confident in an analysis of contributions to the entropy is simply to combine crystallographic and spectroscopic data. Another is to cross-reference against additional techniques, for instance diffuse scattering or solid-state NMR. In general, to avoid confusion between different types of entropy, great care should be taken to predict the experimentally observable consequences of a given structural model, and to check that these are in fact distinguishable from those predicted by alternative plausible models. We conclude with a brief look to the future. The problem of deriving entropy from a structural model is in practice less routinely solved than the corresponding problem for energy. There is, however, no fundamental reason why the crystal engineering community could not target a high-entropy structure in much the same way, and with the same accuracy, as is now done to design low-energy states. This would be an extremely important step towards designing functional materials including dielectrics and ionic conductors as well, of course, as calorics. ## Acknowledgements We thank Prof. Martin T. Dove for very helpful discussions.
2305.19242
Non-stationary Gaussian Process Surrogates
We provide a survey of non-stationary surrogate models which utilize Gaussian processes (GPs) or variations thereof, including non-stationary kernel adaptations, partition and local GPs, and spatial warpings through deep Gaussian processes. We also overview publicly available software implementations and conclude with a bake-off involving an 8-dimensional satellite drag computer experiment. Code for this example is provided in a public git repository.
Annie Sauer, Andrew Cooper, Robert B. Gramacy
2023-05-30T17:31:20Z
http://arxiv.org/abs/2305.19242v1
# Non-stationary Gaussian Process Surrogates ###### Abstract We provide a survey of non-stationary surrogate models which utilize Gaussian processes (GPs) or variations thereof, including non-stationary kernel adaptations, partition and local GPs, and spatial warpings through deep Gaussian processes. We also overview publicly available software implementations and conclude with a bake-off involving an 8-dimensional satellite drag computer experiment. Code for this example is provided in a public git repository. ## 1 Introduction Computer simulations are becoming increasingly relevant as methods of data collection, whether to augment or stand in place of actual physical observations, particularly when physical observations are too costly or impossible to come by. Such computer "experiments" pervade many scientific fields including aerospace engineering (e.g., Renganathan et al., 2021), cosmology (e.g., Moran et al., 2022), epidemiology (e.g., Andrianakis et al., 2015), biology (e.g., Johnson, 2008), and physiology (e.g., Plumlee et al., 2016). Computer experiments aim to mimic the real-world. As they have been refined over the years, and their fidelity improved, they have increasingly required heftier computation. This limits the number of simulator evaluations that may be obtained, even with modern supercomputing resources. In such situations, a statistical "surrogate" model (also called an "emulator"), which is trained on a limited amount of simulation data, can be useful as a stand-in for additional expensive simulations. That is, as long as it can provide accurate predictions with appropriate uncertainty quantifiaction (UQ) at untried input configurations. Surrogates can be a crucial tool for turning limited computer simulation data into actionable scientific conclusions. The stipulation we made above bears repeating; the usefulness of a surrogate relies on its ability to provide appropriate _uncertainty quantification_. UQ is a priority because surrogate models are most often tools to be used to another end: objectives we refer to as "downstream tasks." For example, the objective may be to maximize the response, i.e., global optimization. Consider a simulation of a wind turbine in which input configurations include turbine height, angle, and blade dimensions. These affect the energy output of the turbine (e.g., Marten et al., 2013). A natural research objective is to find the configuration that maximizes energy output. Adaptive, sequential surrogate-based design targeting a global optimum is commonly termed Bayesian optimization (BO; Jones et al., 1998; Eriksson et al., 2019; Binois and Wycoff, 2022). [Often, BO is used in machine learning to optimize settings of model hyperparameters.] In this endeavour, knowing where high outputs are found is just as important as identifying input regions of high uncertainty to explore further (striking a balance between exploitation and exploration). Another objective may be to ensure the system being simulated meets certain safety thresholds by quantifying the probability of a system failure. Consider a simulation of an aircraft whose response variable is the amount of aircraft vibration. It is dangerous for the vibrations to exceed certain safety levels; thus, the design objective is to identify the input regions that result in unsafe vibration conditions (e.g., Stanford et al., 2022) so they can be avoided in practice. Both these objectives fall under the umbrella of "sequential design" or "active learning," in which input configurations are chosen sequentially and strategically to target specific learning outcomes based on surrogate information. Another common objective is simulator calibration, in which tuning parameters of the simulator are calibrated to real-world physical observations (Kennedy and O'Hagan, 2001). Sequential design has also been entertained in calibration contexts (e.g., Koermer et al., 2023). So, we need a statistical model that can handle the nonlinear response surfaces typical of computer simulations, work with limited training data, and provide accurate predictions with thorough UQ to facilitate downstream tasks. Piece of cake, right? The "go-to" surrogate model is a Gaussian process (GP). GPs are nonlinear, nonparametric regression models that are preferred for their predictive prowess; see Santner et al. (2018) and Gramacy (2020) for reviews of GPs for surrogate modeling. While GPs are the canonical surrogate modeling choice, the computer experiment community is only one of three big players in the GP world. GPs are widely used in both spatial statistics (Stein, 1999; Cressie, 2015; Banerjee et al., 2003) and machine learning (ML; Rasmussen and Williams, 2005). Although GPs unite these communities, there are subtle differences in applications which, naturally, inform modeling decisions. In spatial statistics, the focus is on low dimensional input spaces, often with missing data and anticipated measurement error. [GP regression is also called "kriging", especially in a geospatial modeling context.] The norm in ML is high input dimension, large data sizes, and lots of noise. On the contrary, computer experiments commonly live in modest dimensional realms with little-to-no noise (though there are exceptions, e.g., Baker et al., 2022), and small-to-moderate data sizes due to hefty computational costs. Nevertheless, many of the key innovations for GP surrogate modeling arise from the spatial and ML communities. We will revisit this motif throughout this chapter. Despite their nonlinear flexibility, traditional GPs are limited by the assumption of _stationarity_. Stationary GP modeling relies solely on pairwise distances between inputs, so they must impart similar dynamics across the entire input space (more on this in Section 2). Yet non-stationary response surfaces that exhibit regime shifts, sharp turns, and/or space-varying dynamics are common in computer experiments. For example, consider an aerospace simulation of a rocket re-entering the atmosphere (Pamadi et al., 2004), as featured in Gramacy (2020, Chapter 2). A visual is provided later in Figure 2. There is an abrupt shift in aeronautical dynamics when speeds cross the sound barrier. A stationary GP is unable to accommodate the stark ridge in the response surface at this boundary. Alas, we are finally at the motivation behind this chapter: advancements to typical GP models that allow non-stationary flexibility without sacrificing predictive prowess or uncertainty quantification. There has been much work on non-stationary GP modeling, with many contributions from spatial and ML communities. We see these methods falling into three categories: * **Non-stationary kernels:** adapt the GP kernel formulation spatially so dynamics are no longer strictly a function of relative distance. These methods originate from spatial statistics, where the focus is on low input dimension. * **Divide-and-conquer:** partition the input space and use traditional stationary GPs independently in each region. Localization is common in surrogate modeling and in geospatial contexts, but such schemes forfeit a degree of global scope when strong long-range dynamics are present. * **Spatial warping:** nonlinearly map inputs so that the process can be depicted as plausibly stationary. The most recently popular of these these being the "deep Gaussian process" (DGP; Damianou and Lawrence, 2013) which uses a stationary GP as the warping function. DGPs combine non-stationary and global modeling, albeit at the cost of some additional computation. Although this modern approach is inspired by recent advances in deep neural networks (DNNs) in machine learning, it actually originated in the spatial and computer modeling literature a decade before DNNs came on the scene. In the remainder of this chapter, after a brief review of stationary GPs, we will address each of these three categories, keeping an eye on surrogate modeling applications and publicly available implementations. ## 2 Gaussian process fundamentals Allow us to introduce GPs in their simplest form, to motivate the upgraded versions we will discuss later. Let \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) represent a black-box computer simulation. Denote a \(d\)-dimensional vectorized input configuration as \(\mathbf{x}\), with corresponding simulator output \(y=f(\mathbf{x})\). Similarly, let \(X_{n}\) denote the row-combined matrix of \(n\) input configurations with \(\mathbf{y}_{n}=f(X_{n})\). A GP prior assumes a multivariate normal distribution over the response, \(\mathbf{y}_{n}\sim N\left(\boldsymbol{\mu},\boldsymbol{\Sigma}(X_{n})\right)\). It is possible to model the prior mean as a linear combination of inputs, i.e., \(\boldsymbol{\mu}=X_{n}\boldsymbol{\beta}\), but we will assume \(\boldsymbol{\mu}=\mathbf{0}\), as is common after centering, without loss of generality. The prior covariance matrix \(\boldsymbol{\Sigma}(X_{n})\) is an \(n\times n\) matrix with elements \(\boldsymbol{\Sigma}(X_{n})^{ij}=\Sigma(\mathbf{x}_{i},\mathbf{x}_{j})\) denoting the covariance between the \(i^{th}\) and \(j^{th}\) input locations. A common choice for \(\boldsymbol{\Sigma}(\cdot)\) is the Gaussian or squared exponential kernel, \[\Sigma(\mathbf{x}_{i},\mathbf{x}_{j})=\Sigma(||\mathbf{x}_{i}-\mathbf{x}_{j}|| )=\sigma^{2}\left(\exp\left(-\frac{||\mathbf{x}_{i}-\mathbf{x}_{j}||^{2}}{ \phi}\right)+\nu\mathbb{I}_{i=j}\right), \tag{1}\] where \(\sigma^{2}\) acts as a scale parameter, \(\phi\) is a lengthscale, and \(\nu\) is a nugget/noise parameter. These parameters may be estimated through maximum likelihood estimation (as in Gramacy, 2020) or sampled through Markov Chain Monte Carlo (MCMC; as in Sauer et al., 2023). In deterministic computer simulations, we often fix \(\nu\) at a small constant, on the scale of \(1\times 10^{-6}\). One may swap in a Matern kernel (Stein, 1999), parameterized similarly. These covariance functions are _stationary_ as they are solely functions of relative distances. Even if we expand to separable vectorized lengthscales \(\phi=[\phi_{1},\ldots,\phi_{d}]\)(Gramacy, 2020, Chapter 5), the kernels are unable to encode information aside from (possibly scaled) relative distances. This forces similar dynamics across the entire domain, i.e., _stationarity_. Conditioned on observed training data, posterior predictions at \(n_{p}\) input locations, \(X_{p}\), follow \[\mathbf{y}_{p}\mid X_{n},\mathbf{y}_{n}\sim N\left(\boldsymbol{\mu}^{ \star},\boldsymbol{\Sigma}^{\star}\right)\quad\text{where}\quad\begin{array} []{l}\boldsymbol{\mu}^{\star}=\boldsymbol{\Sigma}(X_{p},X_{n})\boldsymbol{ \Sigma}(X_{n})^{-1}\mathbf{y}_{n}\\ \boldsymbol{\Sigma}^{\star}=\boldsymbol{\Sigma}(X_{p})-\boldsymbol{\Sigma}(X _{p},X_{n})\boldsymbol{\Sigma}(X_{n})^{-1}\boldsymbol{\Sigma}(X_{n},X_{p}), \end{array} \tag{2}\] and \(\mathbf{\Sigma}(X_{p},X_{n})\) is the \(n_{p}\times n\) matrix containing the covariances between each row of \(X_{p}\) and each row of \(X_{n}\). These closed form analytic posterior moments are convenient, but they rely heavily on the choice of covariance kernel - notice how frequently \(\mathbf{\Sigma}(\cdot)\) features in (2). If the kernel is an ill-fit for the response surface dynamics, GP predictions will be handicapped. For example, consider the non-stationary "Higdon function" generated piecewise following Gramacy et al. (2004), shown in the upper left panel of Figure 1. The left region is high signal, but the right region is far less interesting. Although the training data is observed without noise, we allow surrogates to estimate a global noise parameter - an additional test of surrogate proficiency in this setting. A stationary GP fit is provided in the upper middle panel. In trying to reconcile the disparate regimes, it over-smooths in the left region and over-inflates variance predictions in the right region. We shall revisit the other panels of the Figure in due course, after introducing recently popular forms of nonstationary modeling. Figure 1: Various surrogate fits to the piecewise Higdon function. Training data is observed without noise, but each surrogate is tasked with learning the noise level. The lower right panel shows elliptical slice samples of the DGP’s latent layer, stretching inputs in the left region and compressing inputs in the right region. Non-stationary kernels If a stationary covariance kernel is not an appropriate fit, a natural remedy is to change the kernel to one that is more flexible, while still maintaining positive definiteness. A non-stationary kernel \(\mathbf{\Sigma}_{\mathrm{ns}}(\cdot)\) must rely on more than relative distances, i.e., \(\mathbf{\Sigma}_{\mathrm{ns}}(\mathbf{x}_{i},\mathbf{x}_{j})\neq\mathbf{\Sigma}(|| \mathbf{x}_{i}-\mathbf{x}_{j}||)\). This advancement requires introducing auxilary quantities into the covariance, which are typically unknown and ideally learned from training data. Higdon et al. (1999) allowed for non-stationary kernels through process convolution, i.e., \(\mathbf{\Sigma}_{\mathrm{ns}}(\mathbf{x}_{i},\mathbf{x}_{j})=\int k_{\mathbf{x}_{i} }(u)k_{\mathbf{x}_{j}}(u)du\) where \(k_{\mathbf{x}_{i}}\) is a squared exponential kernel centered at \(\mathbf{x}_{i}\). They utilized a hierarchical Bayesian framework and sampled unknown quantities through MCMC, allowing for full UQ. Paciorek and Schervish (2003) later generalized this work to the class of Matern kernels, with further development by Katzfuss (2013). The use of process convolutions for non-stationary GP modeling is popular and has recent use cases (e.g., Nychka et al., 2018; Wang et al., 2020). Other strategies involve adjustments to the underlying structure of the problem itself. Bornn et al. (2012) expand input dimensions to find a projection into a reasonably stationary domain. [There are some parallels here to the the warping methods we introduce in Section 5.] Nychka et al. (2002) opt to represent the covariance as a linear combination of basis functions. Another way to incorporate flexibility into the kernel is by allowing for functional hyperparameters (\(\sigma^{2},\phi,\nu\) in (1), which were previously assumed constant). Heinonen et al. (2016) placed functional Gaussian process priors on \(\sigma^{2}(x)\), \(\phi(x)\), and \(\nu(x)\) and entertained two Bayesian inferential methods: maximum posterior estimates and full MCMC sampling. Binois et al. (2018) later focused on heteroskedastic modeling of \(\nu(x)\), for situations where the variance (as opposed to the correlation or entire covariance structure) is changing in the input space. Their approach emphasizes computational speed through thrifty use of Woodbury identities by taking advantage of replicates in the design. Yet, there are pitfalls to introducing too much kernel flexibility. Together, the lengthscale \(\phi\) and the nugget \(\nu\) facilitate a signal-to-noise trade-off. In settings where the noise level is not yet pinned down, it is impossible to distinguish between signal and noise. In a computer surrogate modeling framework such estimation risks can be mitigated by designing the experiment carefully. For example, in the context of heteroskedastic modeling, one can learn where additional replicates are needed in order to separate signal from noise (Binois et al., 2019). Off-the-shelf implementations of non-stationary kernels are a bit few and far between. While there are some \(\mathsf{R}\) packages offering non-stationary kernel implementations, such as convoSPAT(Risser and Calder, 2017) and BayesNSGP(Turek and Risser, 2022), these are targeted towards spatial applications and are only implemented for two-dimensional inputs. The heteroskedastic GP (hetGP) of Binois et al. (2018), however, is neatly wrapped in the hetGP package for \(\mathsf{R}\) on CRAN (Binois and Gramacy, 2021) and is ready-to-use on multi-dimensional problems. We provide a visual of a hetGP fit to the Higdon function in the upper right panel of Figure 1, although we acknowledge that this example is a mismatch to the hetGP functionality. The hetGP model offers non-stationarity in the _variance_, but the Higdon function example exhibits non-stationarity in the _mean_. Nevertheless, this benchmark still provides an interesting visual of the flexibility of hetGP. Although it oversmoothes (its predicted mean matches that of the stationary GP), it more properly allocates uncertainty in the linear region. Perhaps limited software availability in this realm is a byproduct of the fact that non-stationarity is wrapped up in the computational bottlenecks of large-scale GPs. Even with a flexible model, non-stationary dynamics will not be revealed without enough training data in the right places. In spatial statistics, where non-stationary kernels have taken hold as the weapon of choice, many of the methodological advances explicitly target both challenges at once: enhancing modeling fidelity while making approximations to deal with large training data (e.g., Grasshoff et al., 2020; Huang et al., 2021; Noack et al., 2023). In surrogate modeling, divide-and-conquer methods (discussed next) can kill those two birds with one stone. When one has the ability to augment training data at will, say through running new computer simulation, data can be acquired specifically to address model inadequacy through active learning (e.g., Sauer et al., 2023). It helps to have a relatively higher concentration of training data in regimes where dynamics are more challenging to model, or across "boundaries" when changes are abrupt. ## 4 Divide-and-conquer Divide-and-conquer GPs first partition the input space into \(k\) disjoint regions, i.e., \(\mathbb{R}^{d}=\cup_{i=1}^{k}P_{i}\) with \(\cap_{i=1}^{k}P_{i}=\emptyset\), and deploy (usually independent) stationary GPs on each element of the partition. Let \(n_{i}\subset 1,\ldots,n\) denote the indices of the training data that fall in partition \(P_{i}\), such that \(X_{n_{i}}\in P_{i}\). A partitioned zero-mean GP prior is then \[\mathbf{y}_{n_{i}}\stackrel{{\mathrm{ind}}}{{\sim}}\mathcal{N} \left(0,\mathbf{\Sigma}^{(i)}(X_{n_{i}})\right)\quad\text{for}\quad i=1,\ldots,k.\] Predictive locations are categorized into the existing partitions \(X_{p_{i}}\in P_{i}\), with independent posterior predictions following Eq. (2), but with subscripts \(p\to p_{i}\) and \(n\to n_{i}\) for each \(i=1,\ldots,k\). Each GP component uses a stationary covariance kernel \(\mathbf{\Sigma}^{(i)}(\cdot)\) following Eq. (1) or variations thereof. Here, the superscript \((i)\) denotes the fact that the kernels on each partition may be parameterized disparately, or have different values for kernel hyperparameters - this is the key to driving non-stationary flexibility. The pivotal modeling decision then becomes the choice of partitioning scheme, creating a patchwork of fits in the input space. Motivated by ML applications to regression problems, Rasmussen and Ghahramani (2001) first deployed divide-and-conquer GPs with the divisions defined by a Dirichlet process (DP). Unknown quantities, including group assignments, DP concentration parameter, and covariance hyperparameters for each GP component, were inferred using MCMC in a Gibbs framework. The hefitness of this "infinite mixture" of GPs comes with a steep computational price tag. The DP is perhaps too flexible, and may be the reason why the empirical comparisons of this work were limited to a one-dimensional illustrative example. Focusing on spatial applications in two dimensions, Kim et al. (2005) later proposed a "piecewise GP" where the partitions are defined from a Voronoi tesselation. Again, Bayesian MCMC over the partitions (tesselations in this case) was required. While the Voronoi tesselations with curved edges provide a flexible partitioning scheme, they present a challenge to extrapolate into higher dimensions. Expanding to the moderate dimensions common in surrogate modeling applications requires some reigning-in of partition flexibility. Gramacy and Lee (2008) proposed a _treed Gaussian process_ (TGP) which partitions the input space using regression trees (Chipman et al., 1998). Partitions are accomplished through a greedy decision tree algorithm, with recursive axis-aligned cuts. Independent GPs are then fit on each partition (also known as leaf nodes). The restriction of partitions to axis-aligned cuts is parsimonious enough to allow for estimation in higher dimensional spaces. TGPs naturally perform well when non-stationarity manifests along axis-aligned partitions, yet learning the locations of the optimal partitions still presents an inferential challenge. Gramacy and Lee prioritize UQ by performing MCMC sampling of the tree partitions, thus averaging over uncertainty in the partition locations. TGP implementation is neatly wrapped in the R-package tgp on CRAN (Gramacy, 2007). A motivating example that showcases the benefits of the TGP model is the Langley Glide-Back Booster simulation (LGBB; Pamadi et al., 2004; Gramacy, 2020, Chapter 2). The simulator emulates the dynamics of a rocket booster gliding through low Earth orbit. Inputs include the rocket's speed (mach), angle of attack (alpha), and slip-side angle (beta); one of the outputs of interest is the resulting lift force. The dynamics of the simulator are known to drastically vary across mach values, specifically when speeds cross the sound barrier. TGP model-fitting entertains several different partitions of the three-dimensional input space. One of the more commonly generated trees is shown in the left panel of Figure 2, which hierarchically splits based on mach values of 1.45 and alpha values of 10. The predictive surface fit using the TGP model, averaging over all high-probability trees, is visualized in the surface plot on the right, where testing locations are fixed at a beta value of 1. The partition produced by the tree is illustrated underneath, on the \(x\)-\(y\) plane; each color-coded region corresponds to one of the three leaf nodes contained in the tree on the left. The model appropriately partitions the input space where the dynamics of the response surface shift. It first chooses a split at the "divot" observed at speeds under 1.45, then decides to further divide that region based on smaller attack angles. Bitzer et al. (2023) recently expanded upon the TGP model by allowing for partitions based on hyperplanes which need not be axis-aligned (consider, for example, parallelograms instead of rectangles in the bottom plane of the right panel). So far, the traditional divide-and-conquer GPs that we've discussed have focused on training data, via their inputs, as the "work" that is to be divided. What if we instead focused on the predictive locations as the avenue of division? After all, the over-arching objective of the surrogate model is to provide predictions (and UQ) at un-tried input configurations \(X_{p}\). We can divide this "work" into \(n_{p}\) jobs: predicting at each \(\mathbf{x}_{k}\in X_{p}\) for \(k=1,\ldots,n_{p}\). Perhaps the earliest inspiration of this approach stems from "moving window" kriging methods in which kernel hyperparameters were estimated separately for each observation based on a local neighborhood (Haas, 1990), generalizing the so-called "ordinary kriging" approach to geo-spatial modeling (Matheron, 1971). Treating predictions as independent tasks sacrifices the ability to estimate the entire predictive covariance, Figure 2: _Left:_ Example of a tree generated by TGP on an LGBB dataset. Splits are made on speed (mach) and angle of attack (alpha). _Right:_ Predictive surface plot of the rocket’s lift. The partition on the input space generated by the tree is shown on the bottom panel, with colors corresponding to respective leaf nodes. \(\mathbf{\Sigma}^{\star}\) in Eq. (2), but point-wise variance estimates are sufficient for most downstream surrogate modeling tasks. Local approximate GPs (laGP; Gramacy and Apley, 2015) combine independent GP predictions with strategic selection of conditioning sets. Rather than conditioning on the entire training data \(\{X_{n},\mathbf{y}_{n}\}\) in Eq. (2), predictions for each \(\mathbf{x}_{k}\in X_{p}\) condition on a strategically chosen subset \(\{X_{n_{k}},\mathbf{y}_{n_{k}}\}\). Non-stationary flexibility comes from the independent nature of each GP; each \(\mathbf{x}_{k}\) may have its own kernel hyperparameterization, thus allowing for the modeling of different dynamics at different locations. The original motivation for this work was to circumvent the cubic computational bottlenecks of GP inference that accompany large \(n\) (by setting a maximum conditioning set size \(|n_{k}|\)), but the non-stationary flexibility has been a welcome byproduct (Sun et al., 2019). Just as partition GPs rely on a choice of partitioning scheme, local GPs rely on the choice of conditioning sets (Emery, 2009). The rudimentary approach is to select the training data observations that are closest to \(\mathbf{x}_{k}\) in Euclidean distance to populate \(X_{n_{k}}\) - these are termed the "nearest neighbors". In their seminal work, Gramacy and Apley proposed strategic sequential selection of points in each conditioning set based on variance reduction criteria. The R package laGP offers a convenient wrapper for these local approximations (Gramacy, 2016). By dividing one big problem into many small ones, laGP avoids the cubic computational bottlenecks implicit in GP regression and creates an embarassingly parallel computational situation that is amenable to multi-core and distributed computing (Gramacy et al., 2014). The left panel of Figure 3 shows an example local neighborhood of size \(n=50\) for predicting at the red dot. Each of the grey dots is one of \(N>40K\) training data locations in \([-2,2]^{2}\). It goes without saying that you couldn't fit an ordinary GP on a training data set that large, but you can use a local approximation over a vast testing grid to estimate a complex, non-stationary surface, as shown in the right panel of the figure. On an eight-core machine, the fit and prediction steps take less than minute. Divide-and-conquer GPs bring non-stationary flexibility without any fancy machinery by relying on partitioned or local application of typical stationary GPs. Yet this flexibility comes at the Figure 3: _Left:_ Example of a local neighborhood (open circles), selected from a large training set (grey dots) for predicting at the red dot. _Right:_ A global fit based on a patchwork laGPs applied over a dense testing set. expense of globality. The independent nature of the component GPs tosses out the ability to learn any global trends. As one such example, turn back to the Higdon function of Figure 1. [We omitted laGP from this exercise since local GPs are focused on large data sizes.] We presumed the training data was observed with an unknown noise variance, but assumed that the variance was constant across the entire space (aside from the hetGP model which was just for demonstration). It would be beneficial for a surrogate model to leverage all of the training data simultaneously, learning from the linear region that the data is noise-free and using that information to inform the interpolation of points in the left "wiggly" region. Yet divide-and-conquer GPs are instead tasked with estimating the noise independently on each division. The TGP fit in the lower left panel provides a clear visual; it created a partition in the middle of the space, fit separate GPs to both sides, but was unable to leverage information between the two sides. The search for a GP adaptation that is both global and non-stationary leads us to our final class of models - those that utilize spatial warpings. ## 5 Spatial warpings Rather than adjusting the kernel or applying a multitude of independent GPs, "warped" GPs attempt to apply a regular stationary GP to a new input altogether. If the response is non-stationary over \(X\)-space, perhaps we can find a new space (let's call it \(W\)-space) over which the response is plausibly stationary. Then a traditional GP prior from \(W\) to \(\mathbf{y}\), i.e., \(\mathbf{y}\sim N\left(\boldsymbol{\mu},\boldsymbol{\Sigma}(W)\right)\) with stationary \(\boldsymbol{\Sigma}(\cdot)\), would be an appropriate fit. We consider \(W\) as a "warped" version of \(X\), hence the name of this section; it is the driver of non-stationary flexibility. The characteristics of \(W\) are a crucial modeling choice. Ideally the warping should be learned from training data (extensions that incorporate expert domain specific information are of interest, but not yet thoroughly explored), but finding an optimal and/or effective warping is a tall order. Perhaps the earliest attempt to apply GPs on reformed inputs was from Sampson and Guttorp (1992), who utilized "spatial dispersion" of the response values as the warping component. They expressed spatial covariance over the spatial dispersions rather than relative input distances, i.e., \(\boldsymbol{\Sigma}_{\text{ns}}(\mathbf{x}_{i},\mathbf{x}_{j})=\text{Cov}(y_ {i}-y_{j})\). Spatial dispersions (\(W\) in our notation) were learned through a combination of multi-dimensional scaling and spline interpolation, allowing for nonlinear warpings but not accounting for uncertainty in the learned warpings. Schmidt and O'Hagan (2003) expanded upon this work by placing a GP prior over the warping, effectively creating the hierarchical model \[\begin{split}\mathbf{y}_{n}\mid W&\sim N\left( \boldsymbol{\mu}_{w},\boldsymbol{\Sigma}_{w}(W)\right)\\ W&\sim N\left(\boldsymbol{\mu}_{x},\boldsymbol{ \Sigma}_{x}(X_{n})\right).\end{split} \tag{3}\] This GP prior allowed for flexible nonlinear warpings and provided a natural avenue for uncertainty quantification surrounding \(W\). They utilized Metropolis Hastings (MH) sampling of the unknown/latent \(W\), wrapped in a Gibbs framework with various kernel hyperparameters. Schmidt and O'Hagan thus created the first deep Gaussian process, although they did not name it as such. DGPs did not gain popularity until 10 years later when those in the ML community caught on to the idea and coined the DGP name (Damianou and Lawrence, 2013), by analogy to DNNs - more on this parallel momentarily. There are several ways to compose a DGP, but the simplest is to link multiple GPs through functional compositions (3). This composition may be repeated to form deeper models. Intermediate layers remain unobserved/latent and must be inferred. Dunlop et al. (2018) have shown that this same model may be formulated through kernel convolutions, and is thus a special subset of the non-stationary kernel methods we discussed in Section 3. There are several key distinctions between the original work of Schmidt and O'Hagan and the DGPs that ML embraced a decade later. First, Schmidt and O'Hagan focused on two-dimensional inputs, meaning their warping \(W\) was a matrix of dimension \(n\times 2\). They utilized a matrix Normal distribution over \(W\) (our representation in Eq. (3) was strategically simplified), and in this smaller setting they were able to sample from the posterior distribution of the entire \(W\) matrix using MH based schemes. This proved too much of a computational burden for expanding to higher dimensions. Damianou and Lawrence (2013) proposed the simplification that each column of \(W\) be conditionally independent, leaving the two-layer DGP prior as \[\begin{split}\mathbf{y}_{n}\mid W&\sim N\left( \boldsymbol{\mu}_{w},\boldsymbol{\Sigma}_{w}(W)\right)\\ \mathbf{w}_{i}&\stackrel{{\text{ind}}}{{ \sim}}N\left(\boldsymbol{\mu}_{x},\boldsymbol{\Sigma}_{x}(X_{n})\right),~{}~{ }i=1,\ldots,p\end{split}\quad\text{where}\quad W=\begin{bmatrix} \mathbf{w}_{1}&\mathbf{w}_{2}&\ldots&\mathbf{w}_{p}\end{bmatrix}. \end{split}\] These columns are often referred to as "nodes". The number of nodes is flexible, but most commonly set to match the input dimension, i.e., \(p=d\). Now the parallel between DGPs and DNNs is clearer: a DGP is simply a DNN where the "activation functions" are Gaussian processes. Posterior inference requires integrating out the unknown latent warping, \[\mathcal{L}(\mathbf{y}\mid X)=\int\cdots\int\mathcal{L}(\mathbf{y}\mid W)\prod _{i=1}^{p}\mathcal{L}(\mathbf{w}_{i}\mid X)~{}~{}d\mathbf{w}_{1},\ldots,d \mathbf{w}_{p}, \tag{4}\] which is not tractible in general. Extensions to deeper models require additional compositions of GPs, with even more unknown functional quantities to integrate (see Sauer et al., 2023, for thorough treatment of a three-layer DGP). Faced with this integral, many in ML embrace approximate variational inference (VI) in which the posterior of Eq. (4) is equated to the most likely distribution from a known target family (e.g., Damianou and Lawrence, 2013; Bui et al., 2016; Salimbeni and Deisenroth, 2017). This approach replaces integration with optimization, but in so doing it oversimplifies UQ. Additionally, it is unable to address the multi-modal nature of the posterior that is common in DGPs (Havasi et al., 2018). Surrogate modeling tasks demand broader uncertainty quantification. Full posterior integration through MCMC sampling of the latent warping offers the solution, but Metropolis Hastings type samplers have been shown to suffer from sticky chains and poor mixing in large DGP setups. To combat this, Sauer et al. (2023) proposed a fully-Bayesian inferential scheme for DGPs that employs elliptical slice sampling (ESS; Murray et al., 2010) of latent Gaussian layers. The rejection-free proposals of the ESS algorithm work well in DGP settings and are able to explore multiple modes more readily than MH counterparts. Others have embraced Hamiltonian Monte Carlo sampling as an alternative to ESS with similar outcomes (Havasi et al., 2018). Full propogation of uncertainty through posterior samples of \(W\) is crucial for downstream surrogate modeling tasks including active learning (Sauer et al., 2023) and Bayesian optimization (Gramacy et al., 2022). The fully-Bayesian MCMC implementation of DGPs is wrapped in the deepgp R-package on CRAN (Sauer, 2022). We provide two visuals of DGP warping/flexibility utilizing this package. The first is shown in the lower center/right panels of Figure 1. The grey lines display ESS samples of latent \(W\), fit to the training data from the Higdon function. The steep slope in the left region has the effect of "stretching" the inputs where there is high signal. The flattening-off of the samples in the right region effectively "squishes" inputs in the linear region. Notice how ESS samples of \(W\) bounce back-and-forth between modes (positive slopes and negative slopes); since only pair-wise distances feature in the stationary kernel of the outer layer, these mirror images are equivalent. The resulting DGP fit is shown in the lower center panel; the warped model is able to accommodate the two piecewise regimes while leveraging global learning of kernel hyperparameters. The second visual concerns the two-dimensional G-function (Marrel et al., 2009), displayed in the left panel of Figure 4. This function is characterized by stiff peaks and steep valleys. In two dimensions, latent \(W\) is comprised of two nodes, each a conditionally independent GP over the inputs. The right panel of Figure 4 visualizes a single ESS sample of \(W\), interpreted as a warping from evenly gridded \(X\). The response surface is plausibly stationary over this warped regime, thus allowing for superior non-stationary fits. DGPs offer both global modeling and non-stationary flexibility, but they come with a hefty computational price tag. Posterior sampling requires thousands of samples, usually wrapped in a Gibbs scheme with many Gaussian likelihood evaluations required for each iteration. For training data sizes above several hundred, these computations become prohibitive. The machine learning approach - approximate inference by VI -partially addresses this issue by forcing DGPs into a neural network-style optimization problem, but it goes without saying that turning integration into optimization can severely undercut UQ. Moreover, this optimization of the Kullback-Leibler divergence between the desired DGP posterior and the specified target family requires thousands of iterations, on par with the iterations needed for MCMC mixing. Consequently, the speedups offered by VI are marginal at best. Big computational gains require further approximation. The predominant tool here is inducing points (IPs; Snelson and Ghahramani, 2006); all of the VI references we have mentioned so far, as well as the Hamiltonian Monte Carlo sampling implementation of Havasi et al. (2018), make use of IP approximations. But IPs come with several pitfalls: without large quantities and/or optimal placement, they provide blurry low-fidelity predictions (Wu et al., 2022). Some have embraced random feature expansions as alternatives to inducing points (Marmin and Filippone, 2022), but perhaps the most successful alternative has been Vecchia approximation (Vecchia, 1988; Sauer et al., 2022). The Vecchia approximation forms the basis of scalability in the deepGP package. In our own experience, the Vecchia-based DGP approach is more robust and user friendly - offering more accurate predictions with better UQ out of the box - compared to VI/IP Figure 4: _Left:_ The two-dimensional G-function. _Right:_ A posterior ESS sample of latent layer \(W\) from a two-layer DGP fit to training data from the G-function. Together, “nodes” of \(W\) act as a warped version of \(X\), allowing the DGP to accommodate the steep inclines along the diagonals. alternatives, and others in similar spirit such as the Python libraries GPflux(Dutordoir et al., 2021) and GPyTorch(Gardner et al., 2018). Occasionally we can fine-tune these libraries to get competitive results in terms of accuracy, but UQ still suffers distinctly. ## 6 A final experiment As a concluding demonstration, we present a comparison of the aforementioned non-stationary GP surrogates on a real-world computer simulation. The _Test Particle Monte Carlo_ simulator was developed by researchers at Los Alamos National Laboratory to simulate the movement of satellites in low earth orbit (Mehta et al., 2014). The simulation is contingent on the specification of a satellite mesh. Sauer et al. (2022) entertained non-stationary DGP surrogates on the GRACE satellite; we instead consider the Hubble space telescope, which has an additional input parameter, bumping the problem up to 8-dimensions. The response variable is the amount of atmospheric drag experienced by the satellite. See Gramacy (2020, Chapter 2) for further narration of this simulation suite. We utilize a data set of 1 million simulation runs provided by Sun et al. (2019), selecting disjoint random subsets of size \(n=n_{p}=\) 10,000 for training and testing. We entertain the following surrogate models: * GP SVEC: stationary GP utilizing scaled-Vecchia approximation (Katzfuss et al., 2020). * laGP: local approximate GP (Gramacy and Apley, 2015) using the laGP package (Gramacy, 2016). * TGP: treed-GP (Gramacy and Lee, 2008) using the tgp package (Gramacy, 2007). * DGP ESS: Bayesian DGP with elliptical slice sampling and Vecchia approximation (Sauer et al., 2023, 2022) using the deepgp package (Sauer, 2022). * DGP DSVI: DGP with approximate "doubly stochastic" variational inference and inducing point approximations (Salimbeni and Deisenroth, 2017) using the GPflux package (Dutordoir et al., 2021). We follow Sun et al. (2019) in fixing the noise level at \(\nu=1\times 10^{-4}\). Separable lengthscales estimated from the GP SVEC model are used to pre-scale inputs prior to fitting the other surrogate models [except for TGP since the tgp package conducts its own scaling]. Input pre-scaling and similar analogues have been shown to improve surrogate performance and have become standard (e.g., Sun et al., 2019; Katzfuss et al., 2020; Wycoff et al., 2021; Kang and Katzfuss, 2023). Model performance is reported by root mean squared error (RMSE, lower is better) and continuous rank probability score (CRPS; Gneiting and Raftery, 2007, Eq. 20, negated so lower is better). While RMSE captures surrogate predictive accuracy, CRPS incorporates posterior UQ. Results across 10 Monte Carlo repetitions, with re-randomized training and testing sets, are shown in Figure 5. Reproducible code for this experiment is provided in our public git repository.1 Footnote 1: [https://bitbucket.org/gramacylab/deepgp-ex/](https://bitbucket.org/gramacylab/deepgp-ex/) Of the five methods, DSVI's approach to deep GP inference stands out as being particularly poor. We don't have a good explanation for that except that the class of problems it was engineered for - low-signal large-data regression and classification for machine learning tasks - is different than our computer modeling context, with modest training data size and high signal-to-noise ratios. We know VI undercuts on UQ, and we suspect the IPs sacrifice too much fidelity in this instance. Observe that laGP performs relatively well, but interestingly not as well as an ordinary GP (represented by GP SVEC). Our explanation here is that laGP was designed for massive data settings on the scale of millions of observations. It was developed primarily with speed and parallelization in mind, with non-stationary flexibility being a byproduct of its divide-and-conquer approach. These data do indeed benefit from deliberate non-stationary modeling, as indicated by the TGP and DGP ESS comparators. We believe TGP edges out DGP on this example because the process benefits from crude, axis-aligned partitioning. We observed a high degree of TGP partitioning on the eighth, "panel-angle" input. It would seem the orientation of Hubble's solar panels is driving regime changes in drag dynamics. Both TGP and DGP ESS utilize a fully Bayesian approach to inference for all unknown quantities. Consequently they have high predictive accuracy (via RMSE) and UQ (via CRPS).
2305.16171
Multi-lingual and Multi-cultural Figurative Language Understanding
Figurative language permeates human communication, but at the same time is relatively understudied in NLP. Datasets have been created in English to accelerate progress towards measuring and improving figurative language processing in language models (LMs). However, the use of figurative language is an expression of our cultural and societal experiences, making it difficult for these phrases to be universally applicable. In this work, we create a figurative language inference dataset, \datasetname, for seven diverse languages associated with a variety of cultures: Hindi, Indonesian, Javanese, Kannada, Sundanese, Swahili and Yoruba. Our dataset reveals that each language relies on cultural and regional concepts for figurative expressions, with the highest overlap between languages originating from the same region. We assess multilingual LMs' abilities to interpret figurative language in zero-shot and few-shot settings. All languages exhibit a significant deficiency compared to English, with variations in performance reflecting the availability of pre-training and fine-tuning data, emphasizing the need for LMs to be exposed to a broader range of linguistic and cultural variation during training.
Anubha Kabra, Emmy Liu, Simran Khanuja, Alham Fikri Aji, Genta Indra Winata, Samuel Cahyawijaya, Anuoluwapo Aremu, Perez Ogayo, Graham Neubig
2023-05-25T15:30:31Z
http://arxiv.org/abs/2305.16171v1
# Multi-lingual and Multi-cultural Figurative Language Understanding ###### Abstract Figurative language permeates human communication, but at the same time is relatively understudied in NLP. Datasets have been created in English to accelerate progress towards measuring and improving figurative language processing in language models (LMs). However, the use of figurative language is an expression of our cultural and societal experiences, making it difficult for these phrases to be universally applicable. In this work, we create a figurative language inference dataset, MABL, for seven diverse languages associated with a variety of cultures: Hindi, Indonesian, Javanese, Kannada, Sundanese, Swahili and Yoruba. Our dataset reveals that each language relies on cultural and regional concepts for figurative expressions, with the highest overlap between languages originating from the same region. We assess multilingual LMs' abilities to interpret figurative language in zero-shot and few-shot settings. All languages exhibit a significant deficiency compared to English, with variations in performance reflecting the availability of pre-training and fine-tuning data, emphasizing the need for LMs to be exposed to a broader range of linguistic and cultural variation during training. + Footnote †: These authors contributed equally. + Footnote †: These authors contributed equally. ## 1 Introduction When you are feeling happy, do you think that you are "warm" or "cold"? If you are a monolingual English speaker, you will likely answer "warm", and use expressions like "this really warmed my heart". However, if you are a native Hindi speaker, you may answer "cold", and use expressions like "Ref and "Ref" ("coldness spreads in one's heart" ) (Sharma, 2017). Linguistic communication often involves figurative (i.e., non-literal) language (Shutova, 2011; Fussell and Moss, 2008; Lakoff and Johnson, 1981), which is laden with implicit cultural references and judgements that vary cross-culturally. Differences in figurative expressions used in different languages may be due to cultural values, history, or any number of other factors that vary across where the languages are spoken.2 Understanding figurative language therefore relies on understanding what concepts or objects are considered culturally significant, as well as their sentiment in that culture. Footnote 2: The Hindi example is most likely attributable to climatic conditions, as cold may be seen as comparatively more positive in an area where extreme heat is more common (Sharma, 2017) Better understanding of figurative language would benefit tasks such as hate speech detection or sentiment classification (ElSherief et al., 2021; van Aken et al., 2018). However, state-of-the-art language models have been shown to frequently misinterpret both novel figurative expressions and conventionalized idioms, indicating the need for improved methods (Dankers et al., 2022; \begin{table} \begin{tabular}{l l l} \hline \hline & \multicolumn{1}{c}{figurative Expression Inference} \\ \hline \multirow{4}{*}{30} & \multirow{4}{*}{\begin{tabular}{l} Omnikh kupx istna \\ (The house is very nice) \\ (The house is like a palace.) \\ \end{tabular} } & \begin{tabular}{l} Omnikh kupx istna \\ (The house is its place.) \\ \end{tabular} \\ & \begin{tabular}{l} Omnikh kupx istna \\ (The house is very ugly.) \\ \end{tabular} \\ \hline \multirow{4}{*}{id} & \multirow{4}{*}{\begin{tabular}{l} Rambutnya seperti bilium. \\ (Her hair is curry) \\ \end{tabular} } \\ & \begin{tabular}{l} Rambutnya seperti bilium. \\ (Her hair is live vermicul.) \\ \end{tabular} \\ & \begin{tabular}{l} \begin{tabular}{l} \# Authority turns. \\ (Her hair is straight.) \\ \end{tabular} \\ \hline \multirow{4}{*}{hi} & \multirow{4}{*}{\begin{tabular}{l} Mineto istna \\ (Life is sweet Gukland ) \\ \end{tabular} } & \begin{tabular}{l} Mineto istna \\ (Life is bad.) \\ \end{tabular} \\ \hline \multirow{4}{*}{kn} & \multirow{4}{*}{\begin{tabular}{l} Mineto istna \\ (It was crispy like a docs.) \\ \end{tabular} } & \begin{tabular}{l} Mineto istna \\ (It was not crisp.) \\ \end{tabular} \\ \cline{1-1} \cline{3-5} & \begin{tabular}{l} Mineto vake synonymous. \\ (Itis words are like poison.) \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 1: Examples of figurative expressions and respective inferences from the collected data. Correct answers are highlighted in green. Liu et al., 2022). Most empirical results probing language models' abilities with respect to figurative language have been based on data in English, meaning there is a comparative lack of resources and study in other languages (Chakrabarty et al., 2022; Liu et al., 2022; Pedinotti et al., 2021). We find English figurative language datasets may not have cultural relevance for other languages (SS2). This is a general challenge in NLP, as assumptions of common knowledge and important topics to talk about vary from culture to culture (Hershcovich et al., 2022). In order to better train multilingual models to interpret figurative language, as well as to understand linguistic variation in figurative expressions, we construct a multilingual dataset, MABL (**M**etaphors **A**cross **B**orders and **L**anguages), of 6,366 figurative language expressions in seven languages (SS3). Examples are shown in Table 1. We use the dataset to conduct a systematic analysis of figurative language patterns across languages and how well they are captured by current multilingual models (SS4). We find that figurative language is often very culturally-specific, and makes reference to important entities within a culture, such as food, mythology, famous people, or plants and animals native to specific regions. We benchmark multilingual model performance (SS5) and analyze model failures (SS6), finding that zero-shot performance of multilingual models is relatively poor, especially for lower-resource languages. According to (Liu et al., 2021), main factors which poses challenges on the performance in such cases are cross-lingual transfer and concept shift across languages. However, we observe that concept shift seems to play a larger role due to culturally specific examples. Adding a few examples in the target language can improve performance of larger models, but this is more beneficial for lower-resource languages. This highlights the importance of including culturally relevant training data, particularly data that highlights not just the existence of a concept, but also how people view that concept within that culture. ## 2 Linguistic and Cultural Biases of Existing Figurative Language Datasets To confirm the importance of building a multilingual, multi-cultural figurative language dataset, we first performed a pilot study to examine the feasibility of instead translating an existing figurative language dataset, Fig-QA (Liu et al., 2022), from English into other languages. While there are well-known problems with using translation to create multilingual datasets for tasks such as QA (Clark et al., 2020), it is still worth examining these issues in the context of figurative language in particular. We used the Google Translate Python API to translate the development set into languages that the authors of this paper understood.3 These were French, Japanese, and Hindi. Each annotator annotated 100 examples for both correctness (whether or not the translation was accurate), and cultural relevance (whether or not the expression was one that would make sense to a native speaker from the culture where the language is predominant). Footnote 3: [https://pypi.org/project/googletrans/](https://pypi.org/project/googletrans/) As seen in Table 2, the number of incorrect examples is large, particularly for Hindi and Japanese. This is mainly due to expressions that don't translate directly (such as a "sharp" conversation in English). Culturally irrelevant examples are due to implicitly assumed knowledge. For instance, a crowdworker from the US generated the example "it's as classic as pancakes for breakfast" with the meaning "it's very classic". However, most people from Japan would not see pancakes as a traditional breakfast, and the meaning "it's not classic" would be more appropriate. The shift in topics discussed in cultures associated with different languages can be captured by native speakers familiar with that culture, motivating our collection of natural figurative language examples from native speakers. ## 3 The MABL Dataset ### Language Selection We choose the following seven languages: Hindi (hi), Yoruba (yo), Kannada (kn), Sundanese (su), Swahili (sw), Indonesian (id), and Javanese (jv). The factors we considered while choosing these languages are as follows : **i)** We aimed to include a range of languages representing the different classes in the resource-based taxonomy of languages, proposed by Joshi et al. (2020), subject to annotator availability. \begin{table} \begin{tabular}{c c c c} \hline \hline Lang. & fr & hi & ja \\ \hline Incorrect & 13\% & 40\% & 21\% \\ Culturally irrelevant & 17\% & 20\% & 17\% \\ \hline \hline \end{tabular} \end{table} Table 2: Correctness and cultural relevance of Google translations of Fig-QA validation set. **ii)** We chose languages with a sizeable speaker population as shown in Table 5. **iii)** Our languages come from 5 typologically diverse language families spoken in 4 different countries, which allows us to include a wide range of linguistic and cultural diversity in our data. Details about the characteristics of each language in terms of available training data and number of speakers can be found in Table 5. Additional information on linguistic properties of these languages can be found in Appendix A. ### Dataset Collection To create culturally relevant examples, we crowdsourced sample collection to two or more native speakers in the seven languages. The workers were asked to generate paired metaphors that began with the same words, but had different meanings, as well as the literal interpretations of both phrases. Workers were not discouraged from generating novel metaphors, but with the caveat that any examples should be easily understood by native speakers of that language, e.g., "it's as classic as pancakes for breakfast" would not be valid if pancakes are not a breakfast food in the country in which that language is spoken. Instructions given to annotators can be found in Appendix B. After collection, each sample was validated by a separate set of workers who were fluent in that language. Any examples that were incoherent, offensive, or did not follow the format were rejected. The number of samples collected per language can be seen in Table 3. Examples of collected data can be seen in Table 1. We note that because of the limited number of samples in each language, we view the samples collected as a _test set_ for each language, meaning there is no explicit training set included with this release. ## 4 Dataset Analysis ### Concepts expressed In the structure mapping theory of metaphor, figurative language involves a **source** and **target** concept, and a comparison is made linking some features of the two (Gentner, 1983). Following Liu et al. (2022), we refer to the source as the "subject" and target as "object". 4 Footnote 4: This terminology may be confusable with subject and object in linguistics, but was used because the source and target tend to appear in these linguistic positions in a sentence. We expect objects referenced to be quite differently cross-culturally. We confirm this by translating sentences from our dataset into English, then parsing to find objects. The number of unique concepts per language, including examples, is listed in Appendix C. This may overestimate the number of unique concepts, as some concepts may be closely related (e.g., "seasonal rain" vs. "rainy season"). Despite this, we are able to identify many culturally specific concepts in these sentences, such as specific foods (hi: samosa, hi: sweet gulkand, id: durian, id: rambutan), religious figures (kn: buddha's smile, sw: king soloman), or references to popular culture (id: shinchan, _yo_: anikulapo movie, en: washington post reporter). We observe that, excluding pronouns, only 6 objects are present in all languages. These are {"sky", "ant", "ocean", "fire", "sun", "day"}. Of course, variations of all these concepts and other generic concepts may exist, since we only deduplicated objects up to lemmatization, but this small set may indicate that languages tend to vary widely in figurative expressions. Appendix D indicates the Jaccard similarity between objects in each language, which is an intuitive measure of set similarity. The equation is also given below for sets of objects from language A (\(L_{A}\)) and language B (\(L_{B}\)). \[J(L_{A},L_{B})=\frac{|L_{A}\cap L_{B}|}{|L_{A}\cup L_{B}|} \tag{1}\] The most similar language based on concepts present is highlighted in Table 4. Languages from the same region tend to group together. The set of concepts in English is actually most similar to Swahili.5 Upon inspection, there were many general terms related to nature, as well as many references to Christianity in the Swahili data, which may explain the similarity to English.6 Footnote 5: There are no particularly closely related languages to English in our dataset. Footnote 6: Authors of this paper examined unique concepts expressed in English, Swahili, and Kannada. Swahili sentences had \begin{table} \begin{tabular}{l l} \hline Language & \#Samples \\ \hline id & 1140 \\ sw & 1090 \\ su & 600 \\ jv & 600 \\ hi & 1000 \\ kn & 1190 \\ yo & 730 \\ \hline \end{tabular} \end{table} Table 3: Number of collected samples per language. ### Commonsense Categories We follow the commonsense categories defined in Liu et al. (2022) to categorize knowledge needed to understand each sentence: physical object knowledge (obj), knowledge about visual scenes (vis), social knowledge about how humans generally behave (soc), or more specific cultural knowledge (cul). The same sentence can require multiple types of knowledge. Table 6 shows the prevalence of each type of commonsense knowledge as documented by annotators. Social and object knowledge are the most dominant types required, with Yoruba having an especially high prevalence of social examples. Not many examples were marked as cultural. This may be due to differences in what annotators viewed as cultural knowledge: some knowledge may be considered to fall under the object or social category by annotators, but these same examples may seem culturally specific to people residing in the United States because the objects referenced are not necessarily relevant to English speakers in the US. ### Cross-lingual concept distribution To better understand the linguistic and cultural distribution of examples, we extract sentence-level representations from two models: **i)** XLM-R\({}_{\text{large}}\)Conneau et al. (2019), our best performing baseline model; and **ii)** LaBSE Feng et al. (2020), a language-agnostic sentence embedding model, optimized for cross-lingual retrieval. We observed that XLM-R clusters by language, whereas LaBSE clusters sentences from multiple languages together, based on conceptual similarity (as shown in Figure 2). Since LaBSE is optimized for cross-lingual sentence similarity, we chose the latter to conduct further analysis. First, we probe different edges of the cluster and observe concepts along each edge, as visualized in Figure 1. For each concept, we observe sentences from various languages clustering together. Further, these sentences portray cultural traits pertaining to each language. For example, _rice_ is commonly mentioned in languages from Indonesia, given that it is a staple food product there.8 Other examples include sentences in Hindi such as _This house is as old as a diamond_ (_diamonds_ have a significant historical background in India) or _Your house is worth lakhs_ (_lakh_ is an Indian English term).9 Footnote 8: [https://www.indonesia-investments.com/business/commodities/rice/item183](https://www.indonesia-investments.com/business/commodities/rice/item183) Footnote 9: [https://en.wikipedia.org/wiki/Indian_numbering_system](https://en.wikipedia.org/wiki/Indian_numbering_system) To qualitatively study cultural references, we further analyse metaphors belonging to universal concepts such as _food_, _weather/season_, and _friendship_, searching for sentences containing these keywords.10 We obtain 230 sentences containing _food_, 111 sentences containing _weather/season_ and 307 sentences containing _friend_. A few examples are as shown in Table 7. We observe multiple regional and cultural references, which may not be under \begin{table} \begin{tabular}{c c c c c} \hline \hline Lang. & Object & Visual & Social & Cultural \\ \hline hi & 52.4 & 16.4 & 42.0 & 9.2 \\ id & 45.8 & 5.7 & 45.6 & 7.5 \\ jv & 34.0 & 15.0 & 43.3 & 10.0 \\ kn & 63.3 & 17.1 & 20.3 & 15.2 \\ su & 34.3 & 8.6 & 33.3 & 24.0 \\ sw & 48.0 & 20.2 & 32.2 & 5.6 \\ yo & 37.3 & 6.1 & 81.0 & 10.7 \\ \hline \hline \end{tabular} \end{table} Table 6: Proportion of common-sense categories. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Lang. & hi & id & jv & kn & su & sw & yo & en \\ Most similar & kn & jv & sw & hi & jv & hi & sw & sw \\ \hline \hline \end{tabular} \end{table} Table 4: Most similar concepts sets for each language, based on Jaccard similarity of objects in each language’s sentences. Note that as in Appendix A, \(\{\text{hi},\text{kn}\}\), \(\{\text{id},\text{jv},\text{su}\}\) and \(\{\text{sw},\text{yo}\}\) respectively occur in similar geographic regions. \begin{table} \begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{**Lang.**} & \multirow{2}{*}{**Speakers**} & \multicolumn{2}{c}{**Training Data (in GB)**} & \multirow{2}{*}{**Class**} \\ & & \multicolumn{1}{c}{XLM-R} & & & \\ \hline en & 400 & 300.8 & 15.7 & 5 \\ \hline hi & 322 & 20.2 & 0.14 & 4 \\ id & 198 & 148.3 & 0.52 & 3 \\ jv & 84 & 0.2 & 0.04 & 1 \\ kn & 44 & 3.3 & 0.07 & 1 \\ su & 34 & 0.1 & 0.02 & 1 \\ sw & 20 & 1.6 & 0.03 & 2 \\ yo & 50 & - & 0.012 & 2 \\ \hline \hline \end{tabular} \end{table} Table 5: Per-language statistics (including en for reference); the speaker population of each language, its representation in pre-trained multilingual models, and the Joshi et al. (2020) class each language belongs to. First-language speaker population information is obtained from Wikipedia and Aji et al. (2022). We obtain data size estimates for multilingual BERT from Wikipedia 2019 dump statistics.7 standable by non-native speakers. For example, annotators make references to the _weather/season_ with _Peacock_ and _frying fish on asphalt_ which are innate comparisons in \(\mathrm{su}\). With reference to _food_, Indian food commonly uses _Neem_ and _Tamarind_ as referenced by metaphors in \(\mathrm{kn}\) and \(\mathrm{hi}\). _Neem_ is a bitter medicinal herb and _Tamarind_ is used to add sourness to food. Finally, we see references to mythological and fictional characters across _friendship_ metaphors, where annotators draw from their attributes to describe friendships. ## 5 Evaluation and Results ### Zero-shot #### 5.1.1 Zero-shot evaluation Here, we simply fine-tune the Multilingual Pretrained Language Models (MPLMs) on the English labelled data and evaluate on all target languages. This was performed in the standard format of inputting each example as \([\mathrm{CLS}]\)\([\mathrm{sentence}]\)\([\mathrm{SEP}]\)\([\mathrm{meaning1}]\)\([\mathrm{SEP}]\)\([\mathrm{meaning2}]\) and using a linear layer on the \([\mathrm{CLS}]\) token to classify the answer. #### 5.1.2 Zero-shot transfer results We present zero-shot evaluation results in Table 8, noting that there can be two contributors to the gap in performance in these seven languages as compared to English. First, since our fine-tuning language is English, there can be a drop in performance simply due to cross-lingual transfer. Second, there is a concept shift in these metaphors, as evidenced by our analysis in Section 4. To discern the contribution of both, we machine-translate the target test sets to \(\mathrm{en}\) (we refer to this as translate-test). The difference between \(\mathrm{translate}\)-test and \(\mathrm{zero}\)-\(\mathrm{shot}\), can be thought of as the cross-lingual transfer gap, while the rest of the difference between \(\mathrm{translate}\)-\(\mathrm{test}\) and \(\mathrm{en}\) test performance can be attributed to the concept shift. Due to possible MT errors, the results here represent upper bounds for concept shift and cross-lingual shift, which is Figure 1: UMAP visualization of the collected data. Sentence embeddings are obtained using LaBSE (Feng et al., 2020), a multilingual dual encoder model, optimized for cross-lingual retrieval. Refer to Section 4 for more details. Figure 2: We visualize sentence embeddings for two languages, Swahili (\(\mathrm{sw}\)) and English (\(\mathrm{en}\)), using our best-performing model, XLM-R Large (left) and LaBSE (right). Given that \(\mathrm{en}\) shares the highest number of concepts with \(\mathrm{sw}\), we’d expect a tight integration of embedding spaces, which is better displayed by LaBSE. further discussed in Section 6.1. **The concept shift gap is generally greater than the cross-lingual gap.** As reported in Table 8, the concept shift gap is greater than the cross-lingual transfer gap for all languages except Swahili, across all models. This result for \(\mathrm{sw}\) corroborates our findings in Section 4, where we observe that \(\mathrm{en}\) shares the greatest proportion of object concepts with \(\mathrm{sw}\). Given Swahili's extremely low-representation in MPLMs (Table 5), and its high concept overlap with English, we cover most of the gap by simply translating \(\mathrm{sw}\) to \(\mathrm{en}\). For Indonesian (\(\mathrm{id}\)), we observe that zero-shot performance itself is close to \(\mathrm{en}\) performance (83.6%) for XLM-R, since \(\mathrm{id}\) is well-represented in this model (Table 5). Hence, translating to \(\mathrm{en}\) does not help, and the model needs to be competent in better understanding the cultural references specific to \(\mathrm{id}\). In mBERT however, \(\mathrm{id}\) is poorly represented, and translating to \(\mathrm{en}\) does help improve performance. **Performance increases as model and training data size increase, but moreso for higher resource languages.** The smallest model examined, mBERT, has relatively poor performance for all languages, as all languages have < 60% accuracy. Hindi and Indonesian, the two highest-resource languages in our dataset, show a high gain in performance when using a larger model, increasing to 67.58% and 78.09% accuracy respectively. This is especially true for Indonesian, which has a relatively high amount of training data as shown in Table 5. However, lower resource languages tend to show a more modest gain in performance. ### Few-shot #### 5.2.1 Few-shot evaluation While it is common to fine-tune MPLMs on English, given its widespread use and availability, several past works have shown how this is sub-optimal (Lin et al., 2019; Debnath et al., 2021) and choosing optimal transfer languages is an important research question in itself (Dhamecha et al., 2021). While the design of an ideal allocation of annotation resources is still unknown, Lauscher et al. (2020) demonstrate the effectiveness of investing in few-shot (5-10) in-language task-specific examples, which provides vast improvements over the zero-shot setup. We include between 2-50 labelled pairs of sentences from each target language, in addition to the English labelled data, for fine-tuning the model. Training details for all models can be found in Appendix E. #### 5.2.2 Few-shot results Figure 3 presents the effects of few-shot transfer for each language. Generally, the performance gain is modest. This aligns with results from Lauscher et al. (2020), who found that performance gains were quite small on XNLI. As our task is also an NLI task, we may expect similar improvements. However, we find collecting some cultural examples could disproportionately help low-resource languages. **Augmenting with few examples usually does not help much** We observed that with a few exceptions, the increase in accuracy on the test set gained was small (< 1%). This is likely because of the diversity of facts needed in order to improve performance. As noted in Section 4.1 and Table 1, this dataset contains many unique cultural references that do not repeat, limiting the utility of seeing a few examples. **Lower-resource languages benefit more greatly from augmentation** However, there are a few exceptions to this trend. In particular, adding 50 paired Kannada examples to XLM-R\({}_{\text{large}}\) improved performance by 3.83%. Swahili also improves by 1.10% with 50 additional examples for XLM-R\({}_{\text{base}}\), and Sundanese improves by 2.33% with 50 examples for mBERT\({}_{\text{base}}\). ### Evaluation of Large Language Models In addition to the three MPLMs we examine in detail, we also examine the zero-shot performance of large pretrained language models. We choose to \begin{table} \begin{tabular}{c c c c c} \hline \hline & References to & References to & References to & \\ & weather/season & food & friendship & \\ \hline \multirow{7}{*}{ \begin{tabular}{c} Ni \\ \end{tabular} } & The Indian Ocean & That food is & My friend’s father \\ & Senrock this & kn & as sweet as & jv & is like a medium \\ & Christmas season. & & Neem & wekudura. \\ \cline{1-1} & The weather is & & Hotel food & \\ & also warm like & hi & was like & \\ & the rainy season. & & tamarind. & \\ \cline{1-1} & & The weather & & His waist is \\ & looks like you can & & the width of & \\ & & fly fish on the & sw & a lababa. \\ \cline{1-1} & & & The taste of & \\ & & love is like & & \\ & monsoon season. & & the ting food is & \\ \hline \hline \end{tabular} \end{table} Table 7: Translated examples with cultural references specific to regions where these languages are spoken. examine GPT-3 (text-davinci-\(003\)) and BLOOM-176B. As these models are autoregressive rather than masked models, we follow the standard procedure of prediction via choosing the answer with a higher predicted probability Jiang et al. (2021). The performance of GPT-3 is not very good on most languages when tested zero-shot, but we note that it has a reasonable zero-shot performance on the English development set (74.86%), higher than the reported results of \(\mathrm{text}\)-davinci-\(002\). Liu et al. (2022). There is a high concept shift gap as with the other models but also a comparatively higher cross-lingual gap as this model is much stronger in English. ## 6 Error Analysis ### Effect of English MT As noted in Section 5.1, there are two major factors that can cause difficulty in cross-lingual transfer: language shift and concept shift. We try to approximate these effects by translating the test set in each language to English. However, this is done with machine translation, so there may be errors. Despite this, translation can still benefit the model if the original language was low-resource. We can divide the model performance into four cases as shown in Table 9. First, there are easy examples (53%) which are answered correctly in both the original language and translated versions. Next there are linguisti \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Language**} & \multirow{2}{*}{ \begin{tabular}{c} **Zero-shot** \\ **Performance** \\ \end{tabular} } & \multicolumn{2}{c}{**Translate-test**} & \multicolumn{2}{c}{**Cross-Lingual**} & \multicolumn{1}{c}{**Concept Shift**} \\ & & & **(to EN)** & **Transfer Gap** & **Gap** \\ \hline \multirow{8}{*}{XLM-R\({}_{\text{large}}\)} & en\({}_{\text{large}}\) & 81.50 \(\pm\)2.41 & 81.50 \(\pm\)2.41 & 0.00 & 0.00 \\ & hi & 67.58 \(\pm\)1.38 & 67.82 \(\pm\)1.52 & 0.24 & **13.68** \\ & id & 78.09 \(\pm\)1.14 & 77.51 \(\pm\)0.91 & -0.58 & **3.99** \\ & jv & 60.93 \(\pm\)1.95 & 68.13 \(\pm\)1.66 & 7.20 & **13.37** \\ & kn & 58.08 \(\pm\)2.10 & 63.67 \(\pm\)0.98 & 5.59 & **17.83** \\ & su & 60.40 \(\pm\)1.98 & 70.07 \(\pm\)0.92 & 9.67 & **11.43** \\ & sw & 58.16 \(\pm\)0.73 & 75.29 \(\pm\)2.05 & **17.13** & 6.21 \\ & yo & - & - & - & - \\ \hline \multirow{8}{*}{XLM-R\({}_{\text{base}}\)} & en\({}_{\text{large}}\) & 75.26 \(\pm\)0.95 & 75.26 \(\pm\)0.95 & 0.00 & 0.00 \\ & hi & 62.48 \(\pm\)0.31 & 63.29 \(\pm\)0.84 & 0.81 & **11.97** \\ & id & 68.88 \(\pm\)0.71 & 66.54 \(\pm\)1.22 & -2.34 & **9.26** \\ & jv & 53.67 \(\pm\)0.54 & 58.17 \(\pm\)0.82 & 4.50 & **17.09** \\ & kn & 54.67 \(\pm\)1.31 & 57.86 \(\pm\)1.10 & 3.20 & **17.40** \\ & su & 52.41 \(\pm\)1.79 & 61.33 \(\pm\)0.68 & 8.93 & **13.93** \\ & sw & 52.73 \(\pm\)1.38 & 65.77 \(\pm\)1.82 & **13.04** & 7.31 \\ & yo & - & - & - & - \\ \hline \multirow{8}{*}{mBERT\({}_{\text{base}}\)} & en\({}_{\text{large}}\) & 70.88 \(\pm\)2.46 & 70.88 \(\pm\)2.46 & 0.00 & 0.00 \\ & hi & 51.32 \(\pm\)0.94 & 59.45 \(\pm\)1.77 & 8.13 & **11.43** \\ & id & 56.56 \(\pm\)1.66 & 63.30 \(\pm\)1.12 & 6.74 & **7.58** \\ & jv & 55.06 \(\pm\)1.70 & 60.76 \(\pm\)2.31 & 5.70 & **10.12** \\ & kn & 52.63 \(\pm\)1.15 & 56.70 \(\pm\)0.77 & 4.07 & **14.18** \\ & su & 52.87 \(\pm\)1.67 & 59.37 \(\pm\)2.37 & 6.51 & **11.51** \\ & sw & 52.12 \(\pm\)1.09 & 63.57 \(\pm\)0.78 & **11.45** & 7.31 \\ & yo & 50.52 \(\pm\)1.04 & 50.60 \(\pm\)1.28 & 0.08 & **20.28** \\ \hline \hline \multirow{8}{*}{text-davinci-003} & en\({}_{\text{large}}\) & 74.86 & 74.86 & 0.00 & 0.00 \\ & hi & 50.60 & 59.62 & 9.02 & **15.24** \\ & id & 64.21 & 66.93 & 2.72 & **7.93** \\ \cline{1-1} & jv & 51.00 & 62.17 & 11.17 & **12.70** \\ \cline{1-1} & kn & 50.08 & 57.85 & 7.76 & **17.02** \\ \cline{1-1} & su & 49.67 & 58.33 & 8.67 & **16.53** \\ \cline{1-1} & sw & 54.83 & 65.33 & **10.51** & 9.53 \\ \cline{1-1} & yo & 50.27 & 48.77 & -1.51 & **26.10** \\ \hline \hline \end{tabular} \end{table} Table 8: Averaged zero-shot evaluation \(\pm\) standard deviation of MPLMs (and GPT-3) across five seeds on all seven languages: Hindi (hi), Indonesian (id), Yoruba (yo), Kannada (kn), Sundanese (su), Swahili (sw), Javanese (jv). Additionally, we translate each of these test sets to EN (translate-test). This helps discern the gap in performance due to _i) cross-lingual transfer_ and _ii) concept shift in metaphors.._ These gaps are calculated using the EN validation set’s performance as a gold reference. Refer to Section 5.1 for more details. The gap that is higher (which indicates a more significant challenge) is highlighted for each model and language. Note that results for Yoruba are not reported for XLM-R, as it was not trained on any Yoruba data. \begin{table} \begin{tabular}{c c c} \hline \hline \multirow{2}{*}{**Transfer-EN**} & \multicolumn{2}{c}{**Transfer-EN**} \\ & **Correct** & **Incorrect** \\ \cline{2-3} & **53.06\%** & 15.52\% \\ \cline{2-3} & **Incorrect** & 19.09\% & 12.33\% \\ \hline \hline \end{tabular} \end{table} Table 9: Confusion matrix of examples that were answered correctly by XLM-R\({}_{\text{large}}\) before and after translation to English, across all languages combined. cally challenging examples (19%) which are originally answered incorrectly, but switch to being answered correctly after being translated to English.11 There are difficult-to-translate or incorrectly translated examples (15%). It's likely that these errors can be completely eliminated with a careful enough translation. Lastly, there are hard examples (12%) which are answered incorrectly before and after being translated. These contain many inherently difficult examples, and examples with specific cultural terms. Examples of each type can be found in Appendix G. Footnote 11: Linguistically challenging here means that the language is more challenging for an LM to perform well in, not that the linguistic structure is very difficult. ### Cultural Examples We examine the accuracy of XLM-R\({}_{\text{large}}\) on the commonsense categories in Section 4.2. Overall, there is a small difference in accuracy between cultural examples and the overall accuracy, with overall accuracy at 63.99% and accuracy on cultural examples at 61.68%. Accuracy for all languages can be found in Appendix H. This is a preliminary analysis, but may indicate that references to explicit named entities may not be the only issue for the model with regard to culture. ## 7 Related Work ### Figurative Language **English-centric**: Most previous inference tasks on figurative language have been in English Chakrabarty et al. (2022); Liu et al. (2022); Pedinotti et al. (2021). Further, research on figurative language in English centers around training models to detect the presence of metaphors in text Leong et al. (2020); Stowe and Palmer (2018); Tsvetkov et al. (2014). This is done using datasets primarily consisting of idioms and conventionalized metaphors. However, recognizing common metaphorical phrases may not truly test a model's ability to interpret figurative language. There is limited research on understanding metaphors, which mostly looks at linking metaphorical phrases to their literal meanings through paraphrase detection Bizzoni and Lappin (2018) or generation Shutova (2010); Mao et al. (2018). Some studies investigate LMs' ability to understand metaphors, but they do not consider the fact that metaphors have different meanings based on context Pedinotti et al. (2021); Aghazadeh et al. (2022). Most recently, Liu et al. (2022) released a dataset which requires a model to infer the correct meaning of metaphor, rather than simply identifying or paraphrasing it, hence calling to test deeper semantic understanding. **Extension to Multilingual**: Research in corpus linguistics Diaz-Vera and Caballero (2013); Kovecses (2004); Charteris-Black and Ennis (2001) suggests that there significant variation in metaphorical language between cultures. There has been some work in detecting metaphors in multilingual text Tsvetkov et al. (2013); Shutova et al. (2017). These works have focused on three relatively high-resource languages: English, Russian and Spanish. Both focused on cross-lingual techniques to identify metaphors from newspapers and dictionaries. Hence, there hasn't been any large-scale multilingual dataset of figurative language constructed, which would allow one to study cultural variations across metaphors. We fill this gap with the release of our dataset. Figure 3: Effect of adding up to 50 examples in the target language to the English training data. This strategy is most beneficial for XLM-R\({}_{\text{large}}\) with more than 10 examples in the target language. Exact results can be found in Appendix F. Conclusion Despite being relatively widespread, figurative language is relatively under-studied in NLP. This is especially true for non-English languages. To enable progress on figurative language processing, we create MABL, a figurative inference dataset across seven languages. We find considerable variation in figurative language use across languages, particularly in the unique objects that people invoke in their comparisons, spanning differences in food, mythology and religion, and famous figures or events. This variation is likely due to differences in cultural common-ground between the countries in which these languages are spoken. We find that multilingual models have considerable room for improvement on this task, and cross-cultural shift may play a significant role in the performance degradation from English. We encourage the NLP community to further examine the role that culture plays in language, and note that figurative language can be used as a testbed to examine cross-linguistic and cross-cultural variations. ## 9 Limitations First, despite our pursuit of attempting to understand figurative language use across cultures, we have barely scratched the surface in terms of diverse representation. Due to limited scope, budget, and resources, we collect data from 2-3 annotators per language, for seven languages. Further, culture can vary greatly within a language Hersh-covich et al. (2022). Therefore, until we can represent all of the worlds' people and their languages, there will always be room for improvement. We also acknowledge that the syntax captured in the dataset may not be the most diverse, as many examples follow the template "<X> is like <Y>". However, we create these simpler examples as a first step, since extension to more complex and naturalistic language can be included in future work. Second, to analyse concept shift, we machine translate test sets into English. However, these translations can be erroneous to varying degrees, which may have resulted in an over-estimation of error attribution to concept shift. This could not be avoided however, due to limited resources of obtaining human translations. Third, English may not be the best language to transfer from in zero-shot evaluation of multilingual models. While we were constrained by training data availability, past works have shown that machine-translating train sets can help, an avenue we haven't explored here. Even though we experiment with few-shot evaluation, there may exist an optimal combination of source languages which best transfer to our target languages. Fourth, the English authors recognized culture-specific terms that were not marked as cultural by annotators in the commonsense categorization across all languages. This may be because annotators, being mostly familiar with their own cultures, attributed culturally specific facts and terms as being common sense. Likewise, the English-speaking participants may have viewed a separate set of facts as common sense which would not be agreed upon by people from a different culture. It is thus difficult to disentangle common sense and culture in many cases.
2303.11590
Koopman-Hopf Hamilton-Jacobi Reachability and Control
The Hopf formula for Hamilton-Jacobi reachability (HJR) analysis has been proposed to solve high-dimensional differential games, producing the set of initial states and corresponding controller required to reach (or avoid) a target despite bounded disturbances. As a space-parallelizable method, the Hopf formula avoids the curse of dimensionality that afflicts standard dynamic-programming HJR, but is restricted to linear time-varying systems. To compute reachable sets for high-dimensional nonlinear systems, we pair the Hopf solution with Koopman theory for global linearization. By first lifting a nonlinear system to a linear space and then solving the Hopf formula, approximate reachable sets can be efficiently computed that are much more accurate than local linearizations. Furthermore, we construct a Koopman-Hopf disturbance-rejecting controller, and test its ability to drive a 10-dimensional nonlinear glycolysis model. We find that it significantly out-competes expectation-minimizing and game-theoretic model predictive controllers with the same Koopman linearization in the presence of bounded stochastic disturbance. In summary, we demonstrate a dimension-robust method to approximately solve HJR, allowing novel application to analyze and control high-dimensional, nonlinear systems with disturbance. An open-source toolbox in Julia is introduced for both Hopf and Koopman-Hopf reachability and control.
Will Sharpless, Nikhil Shinde, Matthew Kim, Yat Tin Chow, Sylvia Herbert
2023-03-21T04:42:06Z
http://arxiv.org/abs/2303.11590v6
# Koopman-Hopf Hamilton-Jacobi Reachability and Control ###### Abstract The Hopf formula for Hamilton-Jacobi Reachability analysis has been proposed for solving high-dimensional differential games as a space-parallelizeable method. In exchange, however, a complex optimization problem must be solved, limiting its application to linear time-varying systems. To compute Hamilton-Jacobi backwards reachable sets (BRS) and synthesize the corresponding online controllers for high-dimensional nonlinear systems, we pair the Hopf solution with Koopman theory. We find that this is a viable method for approximating the BRS and performs better than local linearizations. Furthermore, we construct a Koopman-Hopf controller for robustly driving a 10-dimensional, nonlinear, glycolysis model and find that it significantly out-competes expectation-minimizing and game-theoretic Model Predictive Controllers with the same Koopman linearization in the presence of bounded stochastic disturbance. In summary, we propose and validate a dimension-robust method to approximately solve HJ Reachability, allowing novel application to control high-dimensional, nonlinear systems with bounded disturbance. ## I Introduction There are several active directions for designing safe autonomous systems with myriad approaches to overcoming the difficulty of the problem. The technical rigor involved in planning for success while simultaneously avoiding failure tends to force methods to sacrifice guarantees (e.g. data-driven methods where success is higher but failure is too) or feasibility (e.g. differential inclusion where solutions are often over-conservative or burdensome to solve). Among these approaches, Hamilton-Jacobi Reachability (HJR) is well known for being a robust approach to optimal control and safe path planning [1, 2, 3, 4, 5]. When feasible, it is often the top choice in robotics, autonomous driving and other stochastic control problems because of its derivation from the theory of differential games [6, 7] which describes how to optimally posture a system to counter antagonistic or stochastic, bounded disturbances [8]. HJR is, however, not the most practical approach because of its dependency on spatial gradient approximations which makes it sensitive to the _curse of dimensionality_[1]. If this theory could be extended to higher dimensional systems, engineering efforts in diverse domains, particularly in medicine, finance and other large systems, could make strides where simpler (dimension-robust) controllers are unable to overcome disturbances. ### _Related Work_ Toward this end, several directions have been developed, including set-based propagation, often called the method of zonotopes [9], to over-approximate the Hamilton-Jacobi reachable states with linear systems [9, 10] and in some special classes of nonlinear systems [11]. The only shortcomings with these methods are that they do not inherently provide an optimal controller and also tend to be overly-conservative. Another direction is the method of decomposition and system reduction for HJR. The authors of [2] define the system structures that can be decomposed into exponentially-faster, low-dimension problems, however, any coupled dimensions cannot be decomposed. There is also a method of projecting coupled systems to independent, lower-dimensional subsystems such that the inverted solution is a conjectured over-approximation [12], however, this is often highly conservative and lacks guarantee. More recently, the Hopf formula was revived [13, 5, 14] as yet another route to HJR without sensitivity to dimension or sacrificing guarantees but in limited cases. This method involves interchanging a dynamic-programming (DP) problem for an abstract optimization problem over the characteristic curves of the Lagrange multiplier [13]. Note, it does not lend a "free lunch", instead the difficulty appears now as a so-called _curse of complexity_ because the optimization problem requires sophisticated approaches (see [13, 5, 14] for detailed analysis). Nonetheless, certain classes of systems, namely linear time-varying systems, can be robustly solved in both one-player [15, 3] and two-player optimal cases [4, 5]. Notably, [4] was paired with a naive linearization methods to control a pursuit-evasion nonlinear systems with success, although without random disturbance. In this work, we build on the aforementioned strides by novelty pairing the Hopf solution with Koopman theory for global linearization. Koopman theory involves 'lifting' Fig. 1: Here the \(\pm\epsilon\) boundary of a target \(\mathcal{T}\) (black), the DP-solved reachable set \(\mathcal{R}(\mathcal{T},t)\) (blue), the local-linearization Hopf BRS \(\mathcal{R}(\mathcal{T}_{\mathrm{Taylor}},t)\) (green), and our 15D Koopman-Hopf BRS \(\mathcal{R}(\widetilde{\mathcal{T}}_{\ell},t)\) (gold) are plotted for the Duffing oscillator at \(t=2s\) with control and disturbance and \(\epsilon=0.1\). We show the scatter plot to emphasize that Koopman-Hopf solution is solved in a space-parallelizeable fashion, hence, each point is solved independently. nonlinear dynamics to high dimensional spaces in order to linearize them with higher accuracy [16, 17]. Multiple works have found that the Koopman procedure can yield highly accurate predictions over a short horizon, suitable for Model Predictive Control (MPC) and LQR [18, 19, 20, 21, 22]. Furthermore, we are inspired by recent, impressive work into Koopman theory applied to the method of Zonotopes for general (non-optimal) reachability [10]. In this work, we use Koopman theory to define a 'lifted', linear reachability problem which approximates our true problem and can be solved by the dimension-insensitive Hopf solution. Note, this sacrifices guarantees but yields accurate approximations of HJ Backwards Reachable Sets (BRSs) - the states from which a system can be driven to a target set at a desired time for any bounded disturbance - and their corresponding optimal controllers for high-dimensional, nonlinear systems for which no other method is available. ### _Contributions and Organization_ We make the following contributions: 1. We formulate a novel method to approximately solve differential games skirting the _curse of dimensionality_. 2. We propose definitions for the corresponding Koopman reachability problem that satisfy the Hopf assumptions. 3. We compare the BRS of our method to that of the DP solution and a Taylor-based solution for both a convex and non-convex game in the Duffing system. 4. We synthesize a novel Koopman-Hopf controller and compare it to two Koopman-MPC formulations in a 10-D glycolysis model with bounded, stochastic disturbance. The paper is structured as follows. Sec. II-A formally introduces HJR, BRSs and the DP solution. Sec. II-B introduces the Hopf solution and it's limitations. Sec. II-C introduces Koopman theory to counter these limitations. Sec. III proposes a method of lifting the reachability problem to the Koopman space that satisfies the Hopf assumptions. Sec. IV demonstrates the Koopman-Hopf method, first, in the Duffing system where (because of the low dimensionality) its feasible to compare to DP (Sec. IV-A) and, second, in a 10D Glycolysis system where we compare the novel controller with Koopman-MPC's (Sec. IV-B). Finally, Sec. V summarizes the work and describes future directions. ## II Preliminaries This paper focuses on control-affine and disturbance-affine systems of the form \[\dot{x}=f_{x}(x,t)+B_{1}(t)u+B_{2}(t)d\triangleq f(x,u,d,t) \tag{1}\] where control and disturbance inputs \(u\) and \(d\) are drawn from convex sets \(\mathcal{U}\subset\mathbb{R}^{n_{u}}\), \(\mathcal{D}\subset\mathbb{R}^{n_{d}}\), and the control and disturbance functions \(u(\cdot)\) and \(d(\cdot)\) are assumed to be of the set of measurable functions \(\mathbb{U}:[t,0]\mapsto\mathcal{U}\), \(\mathbb{D}:[t,0]\mapsto\mathcal{D}\). Assuming that the dynamics (1) is Lipschitz continuous in \((x,u)\) and continuous in \(t\), there exists a unique trajectory \(x(\cdot)\subset\mathcal{X}:=\mathbb{R}^{n_{x}}\) of the system given initial state \(x\), control function \(u(\cdot)\), and disturbance function \(d(\cdot)\). ### _Hamilton-Jacobi Reachability Problem_ To design a safe autonomous controller, HJ reachability solves for the optimal control that counters an adversarial disturbance in a differential game. Here, the control player's objective is to minimize the game cost while the disturbance player seeks to maximize it [5]. The game is defined by the cost functional \[P(x,u(\cdot),d(\cdot),t)=J(x(T))+\int_{t}^{T}L(u(\tau),d(\tau))d\tau, \tag{2}\] where \(x(T)\) is the solution of (1) at time \(T\). The terminal cost \(J:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}\) is a convex, proper, lower semicontinuous function chosen such that \[\begin{cases}J(x)<0&\text{for}\quad x\in\mathcal{T}\setminus\partial\mathcal{ T}\\ J(x)=0&\text{for}\quad x\in\partial\mathcal{T}\\ J(x)>0&\text{for}\quad x\notin\mathcal{T}\end{cases}\] where \(\mathcal{T}\subset\mathcal{X}\) is a user-defined closed set representing the target to reach (or avoid, if the max/min are switched) and \(\partial\mathcal{T}\) its boundary. The running cost \(L:\mathbb{R}^{n_{u}}\times\mathbb{R}^{n_{d}}\rightarrow\mathbb{R}\) serves only to constrain the inputs and, thus, takes the form, \[L(u,d)=\mathcal{I}_{\mathcal{U}}(u)-\mathcal{I}_{\mathcal{D}}(d) \tag{3}\] where \(\mathcal{I}_{\mathcal{C}}\) are the indicator functions of \(\mathcal{U}\) and \(\mathcal{D}\), \[\mathcal{I}_{\mathcal{C}}(c)=\{0\text{ if }c\in\mathcal{C},+\infty\text{ else}\}. \tag{4}\] We have now defined the game such that for trajectory \(x(\cdot)\) arising from a given \(x\), \(u(\cdot)\subset\mathcal{U}\), \(d(\cdot)\subset\mathcal{D}\), \(t\), and (1), \[P(x,u(\cdot),d(\cdot),t)\leq 0\iff x(T)\in\mathcal{T}. \tag{5}\] The function \(V:\mathbb{R}\times R\rightarrow\mathbb{R}\) corresponding to the optimal value of the game is defined as \[V(x,t)=\sup_{d(\cdot)\subset\mathcal{T}(t)}\inf_{(\cdot)\subset\mathcal{U}}P( x,u(\cdot),d(\cdot),t) \tag{6}\] where \(\Gamma(t)\) is the set of non-anticipative strategies defined in [5, 8, 13, 14] and we assume Isaac's condition [8]. This function is useful because by the same logic of (5), \[V(x,t)\leq 0\iff x\in\mathcal{R}(\mathcal{T},t) \tag{7}\] where \(\mathcal{R}(\mathcal{T},t)\), the BRS, is the set of all states which can be driven to the target for any bounded disturbance, formally defined by \[\mathcal{R}(\mathcal{T},t)=\{x\ |\ \exists u(\cdot)\subset\mathcal{U}\ \forall d(\cdot)\subset\mathcal{D}\text{ s.t. } \tag{8}\] \[x(\cdot)\text{ satisfies }(1)\wedge x(T)\in\mathcal{T}\}. \tag{9}\] Notably, applying Bellman's principle of optimality to this time-varying value function \(V\) leads to the following well known theorem. **Theorem 1** (Evans 84).: _[_6_]_ _Given the assumptions (2.1)-(2.5) in [Evan 84], the value function \(V\) defined in (6) is the viscosity solution to the following Hamilton-Jacobi Partial Differential Equation,_ \[\frac{\partial V}{\partial t}+H(x,\nabla_{x}V,t) =0\qquad\text{on }\mathbb{R}^{n_{x}}\times[t,T], \tag{10}\] \[V(x,T) =J(x(T))\ \ \text{on }\mathbb{R}^{n_{x}}\] _where the Hamiltonian \(H:\mathbb{R}^{n_{x}}\times\mathbb{R}^{n_{x}}\times[t,T]\to\mathbb{R}\) is defined as_ \[H(x,p,t)=\min_{u\in\mathcal{U}}\max_{d\in\mathcal{D}}p\cdot f(x,u,d,t). \tag{11}\] To solve this PDE, therefore, yields the value function and corresponding BRS. Additionally, the value function can be used to derive the optimal control strategy for any point in space and time with: \[u^{*}(t)=\arg\min_{u\in\mathcal{U}}\nabla_{x}V(x,t)\cdot B_{1}(t)u. \tag{12}\] The main challenge of HJR lies in solving this PDE in (10); DP methods propagate \(V(x,t)\) by finite-differences over a grid of points that grows exponentially with respect to \(n_{x}\)[1]. In practice, this is computationally intractable for systems of \(n_{x}\geq 6\) and constrained to offline planning. ### _The Hopf Solution to HJ-PDE's_ An alternative to the brute-force grid solving of \(V(x,t)\) is the Hopf formula, which offers a solution to (10) in the form of a space-parallelizeable optimization problem. First, we define \(\phi(x,t):=V(x,T-t)\) to change the aforementioned final-value problem into an initial-value problem for simplicity. \(\phi\) is now the solution of \[\begin{split}-\frac{\partial\phi}{\partial t}+H(x,\nabla_{x}\phi, t)&=0\qquad\text{ on }\mathbb{R}^{n_{x}}\times[0,t],\\ \phi(x,0)&=J(x)\ \ \text{ on }\mathbb{R}^{n_{x}}\end{split} \tag{13}\] with Hamiltonian, \[H(x,p,t)=\max_{u\in\mathcal{U}}\min_{d\in\mathcal{D}}-p\cdot f(x,u,d,T-t). \tag{14}\] Note, for systems \(f(x,u,d,t)=f(u,d,t)\) without state dependence, the Hamiltonian \(H(x,p,t)=H(p,t)\) also lacks state-dependence and in this setting the following Hopf formula is available with limitation. This formula was conjectured in [23], proved to be the viscosity solution in [24] and [7] for \(H(p)\), and generalized to some \(H(t,p)\) in [25, 26]. Recently, [13] devised a fast method of solving this formula and [14] conjectured a general form for \(H(x,t,p)\). **Theorem 2** (Rublev 2000).: _We assume \(J(x)\) convex and Lipchitz, and that \(H(t,p)\) is pseudoconvex in \(p\) and satisfies (B.i-B.iii) in [Rublev 00], then the minimax-viscosity solution [27] of (13) is given by the time-dependent Hopf formula,_ \[\phi(x,t)=-\min_{p\in\mathbb{R}^{n_{x}}}\left\{J^{*}(p)-x\cdot p+\int_{0}^{t}H (p,\tau)d\tau\right\} \tag{15}\] _where \(J^{*}(p):\mathbb{R}^{n_{x}}\to\mathbb{R}\cup\{+\infty\}\) is the Fenchel-Legendre transform (i.e. convex-conjugate) of a convex, proper, lower semicontinuous function \(J:\mathbb{R}^{n_{x}}\to\mathbb{R}\) defined by_ \[J^{*}(p)=\sup_{x\in\mathbb{R}^{n_{x}}}\{p\cdot x-J(x)\}. \tag{16}\] _Note, under these assumptions, the minimax-viscosity and viscosity solutions are the same when \(H(s,p)\) is entirely convex or entirely concave in \(p\) for \(s\in[0,t]\)[25, 27]._ See [25, 14, 27] for analysis and comparison of minimax-viscosity and viscosity solutions. Note, only the latter corresponds to the solution of the differential game when different. For general non-convex \(H\), the question of when these solutions coincide remains open, however, like [14], we observe agreement in numerical examples below. The strength of Hopf is that now we may compute (10 & 13) and solve our problem by solving a space-parallelizeable optimization problem (i.e. \(\phi(x,t)\) does not depend on \(\phi(x^{\prime},t^{\prime}<t)\), unlike DP) [13, 5, 14], avoiding the so called _curse of dimensionality_. However, to require a state-independent Hamiltonian limits this method greatly. Furthermore, we can not guarantee we have the solution which solves the game if the Hamiltonian is non-convex, which can occur based on the relative sizes of \(\mathcal{U}\) and \(\mathcal{D}\). It was noted in [13, 5, 26] that any linear time-varying system can be mapped to a state-independent system \[\begin{split}&\dot{x}=A(t)x+B_{1}(t)u+B_{2}(t)d\\ &\to\dot{z}=\Phi(t)(B_{1}(t)u+B_{2}(t)d)\end{split} \tag{17}\] with the linear time-varying mapping \(z(t):=\Phi(t)x(t)\) defined by the fundamental matrix \(\Phi=A(t)\Phi(t),\Phi(0)=I\). After the change of variable, the Hamiltonian for \(\phi\) becomes, \[\begin{split} H_{\mathcal{Z}}(p,t)=&\max_{u\in \mathcal{U}}-p\cdot\Phi(T-t)B_{1}(T-t)u\\ &-\max_{d\in\mathcal{D}}-p\cdot\Phi(T-t)B_{2}(T-t)d.\end{split} \tag{18}\] Since the mapping \(x\to z\) is one-to-one, we know \(\phi_{\mathcal{Z}}(z,t)=\phi(x,t)\) and \[\phi_{\mathcal{Z}}(x,t)=-\min_{p\in\mathbb{R}^{n_{x}}}\left\{J^{*}_{\mathcal{Z} }(p)-z\cdot p+\int_{0}^{t}H_{\mathcal{Z}}(p,\tau)d\tau\right\} \tag{19}\] where \(\phi(z,0)=J_{\mathcal{Z}}(z,0)=J(\Phi(T)x(T))\) and we define \(T=0\), thus, \(J_{\mathcal{Z}}(z)=J(x)\) and \(J^{*}_{\mathcal{Z}}(p)=J^{*}(p)\). Given the convexity of \(\mathcal{U}\) and \(\mathcal{D}\), the Hamiltonian in (18) can be rewritten as the difference of two positively homogeneous Hamiltonians corresponding to the convex-conjugates of their Indicator functions [5], \[\begin{split} H_{\mathcal{Z}}(p,t)&=\mathcal{I}^{*}_ {\mathcal{U}}(R_{1}p)-\mathcal{I}^{*}_{\mathcal{D}}(R_{2}p),\\ R_{i}&:=-B_{i}(T-t)^{\dagger}\Phi(T-t)^{\dagger}. \end{split} \tag{20}\] This allows rapid and efficient computation of (19) that has been observed to scale linearly with \(n_{x}\) for a fixed \(t\)[5] and demonstrated in online guidance with naive linearizations [4]. Notably, when \(\mathcal{U}\) and \(\mathcal{D}\), are constrained by norms, these functions have analytical solutions, namely the dual norms. Given \(Q\in\mathbb{R}^{n\times n}\), consider two popular input constraints [5] and their \(\mathcal{I}^{*}\): \[\begin{split}\mathcal{C}_{\mathcal{E}}(Q):=\{c\ |\ ||c||_{Q}\leq 1\} \implies&\mathcal{I}^{*}_{\mathcal{C}}(\cdot)=||\cdot||_{Q^{-1}},\\ \mathcal{C}_{\mathcal{R}}(Q):=\{c\ |\ ||Qc||_{\infty}\leq 1\} \implies&\mathcal{I}^{*}_{\mathcal{C}}(\cdot)=||Q\cdot||_{1}. \end{split} \tag{21}\] If \(\mathcal{U}\) and \(\mathcal{D}\) are defined by the same set type in (21) with \(Q_{u}\) and \(Q_{d}\) respectively, then convexity of the Hamiltonian is given when \(Q_{u}\succeq Q_{d}\) i.e. when the control authority exceeds the disturbance authority. ### _Koopman Theory_ We would like to apply dimension-robust Hopf theory to high-dimensional, nonlinear systems with a highly accurate linearization to approximate BRS's and synthesize safe control. Moreover, we would like a non-local linearization method due to the need for global analysis across the state space. Koopman theory is known for outperforming other linearization methods in these regards [18, 19, 20]. Consider the discretized mapping of a nonlinear system, \(x(t_{i+1})=F(x(t_{i}))\). The Koopman operator [16, 17]\(\mathcal{K}:\mathcal{F}\rightarrow\mathcal{F}\) is defined as \(\mathcal{K}g:=g\circ F\), where \(\mathcal{F}\) is the collection of all functions that form an infinite Hilbert space, often called observables or lifting functions, and \(g\in\mathcal{F}:\mathcal{X}\rightarrow\mathbb{R}\)[17, 21]. By definition, the operator has the property, \[(\mathcal{K}g)(x(t_{i}))=g(F(x(t_{i})))=g(x(t_{i+1})) \tag{22}\] and we assume this holds for a finite space such that \(Kg\in\mathcal{G}\)[17, 21], where \(\mathcal{G}\) is an invariant subspace of the Koopman space. We assume this space is spanned by a finite basis of lifting functions \(\Psi(x):=\{\psi_{1}(x),\,\ldots,\psi_{n_{k}}(x)\}\) and, thus, \(K\in\mathbb{R}^{n_{k}\times n_{k}}\) is a finite matrix. Recent works have found this theory can be extended to systems with external inputs [17, 18, 19, 21] such that for \(x(t_{i+1})=F(x(t_{i}),u(t_{i}),d(t_{i}))\), \[g(t_{i+1})\approx Kg(t_{i})+L_{1}u(t_{i})+L_{2}d(t_{i}), \tag{23}\] where \(L_{1}\) and \(L_{2}\) are the Koopman control and disturbance matrices. Moreover, although absent from the original theory, we have found it useful, like others [18, 19, 20], to define a 'lowering' function \(\widetilde{\psi}:\mathcal{G}\rightarrow\mathcal{X}\) which is not injective in general because the approximate nature of (23). ## III Koopman-Hopf Reachability We now seek to approximate the value and BRS (II-A) of our differential game by solving the Hopf formula (19) with approximate linear dynamics derived from Koopman methods. We will first show how to define the target set in the lifted Koopman space. We will then discuss the Koopman lifting functions that are well-suited to the Hopf solution. Finally, we describe how to solve the resulting Hopf reachability analysis using the lifted target set and dynamics. ### _Defining the lifted target set_ Given a target \(\mathcal{T}\) defined in (II-A), our target can be lifted directly to the Koopman space with \(\Psi\), \[\mathcal{T}_{\mathcal{G}}:=\{g\,\mid\,g=\Psi(x),\,x\in\mathcal{T}\}. \tag{24}\] However, given that we may not have an exact linearization, we also propose an approximate target, \[\widetilde{\mathcal{T}}_{\mathcal{G}}:=\{g\,\mid\,\widetilde{\psi}(g)\in \mathcal{T}\} \tag{25}\] as the preimage of the target in the Koopman space under the lowering function \(\widetilde{\psi}\). This captures the Koopman trajectories which might evolve off of the manifold defined by the true states \(g=\Psi(x)\subset\mathcal{G}\) to \(g^{\prime}\in\mathcal{G}\) for which \(\nexists x^{\prime}\) such that \(g^{\prime}=\Psi(x^{\prime})\). ### _Lifting the target set and dynamics_ The choice of lifting functions will impact the shape of the lifted target sets \(\mathcal{T}_{\mathcal{G}}\) and \(\widetilde{\mathcal{T}}_{\mathcal{G}}\). To provide guarantees that the Hopf analysis will return the optimal solution, the cost function (and therefore target set) must be convex. This can be enforced by either (a) using lifting functions \(\Psi\) that are convex, or (b) taking the inner (for reach) or outer (for avoid) ball or convex-hull of \(\Psi(\mathcal{T})\)1. Using convex lifting functions is challenging, given that the difference of convex functions is not necessarily convex, and thus, the notion of a basis is ill-defined. On the other hand, simply "convexifying" a lifted target from non-convex lifting functions might be drastically conservative. Footnote 1: alternatively, one could use the Lax formula [14], which swaps the convexity assumption between \(J\) and \(H\). This approach has its own challenges for implementation beyond the scope of this paper. We were thus motivated to demonstrate the proposed method with two simple cases of lifting functions: 1. the identity mapping, \(I:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}^{n_{x}}\) (i.e. DMD [18]) 2. the multivariate polynomial of degree \(l\), \(P_{l}:\mathbb{R}^{n_{x}}\rightarrow\mathbb{R}^{M},M=\binom{n_{x}+l}{l}-1\). Both cases have been extensively studied in Koopman literature; the former corresponds to Dynamic-Mode-Decomposition with control (DMDc) [18] while the latter is a case of Extended-DMD with control [19, 22]. The identity mapping is trivial and thus \[I(\mathcal{X})=\mathcal{X}\implies\mathcal{T}=\mathcal{T}_{\mathcal{G}}= \widetilde{\mathcal{T}}_{\mathcal{G}}\implies\phi_{\mathcal{G}}(g,t)=\phi(x,t), \tag{26}\] assuming we also define \(\widetilde{\psi}=I\); this implies that our lifted target will be convex if our initial target is convex. The polynomial mapping \(P_{l}(x)\) is more expressive and therefore produces more accurate linearizations of dynamics. The trouble with the polynomial mapping is in the negative domain \(\mathbb{R}^{n_{x}}_{<0}:=\{x|x_{i}\leq 0\}\). On this region, the mapping is not-invertible and non-convex, and therefore impacts the Hopf reachability solution. To resolve this issue, one could consider reachable problems on the complement of \(\mathbb{R}^{n_{x}}_{<0}\), the positive vectors \(\mathbb{R}^{n_{x}}_{\geq 0}\), where the map is one-to-one. However, in applications for safety, this would require a temporal bound on the BRS growth to know when we might be incorporating ambiguous trajectories in the solution of our game. Furthermore, this would constrain our lifting procedure undesirably. It is common to circumvent the issues above by using the first-degree polynomial terms [18] to return to \(\mathcal{X}\) such that \(\widetilde{\psi}(g):=\text{Proj}_{\mathcal{X}}(g)\). This would imply that \(\widetilde{\mathcal{T}}_{\mathcal{G}}\) is the infinite extrusion of our target, and if our target were defined by either of the sets in (21) and a PSD matrix \(A\in\mathbb{R}^{n_{x}\times n_{x}}\) (e.g. an ellipsoid or box), then \[\widetilde{\mathcal{T}}_{\mathcal{G},\mathcal{C}}=\mathcal{C}(\hat{A}),\,\,\, \hat{A}:=\begin{bmatrix}A&0\\ 0&0\end{bmatrix}\in\mathbb{R}^{M\times M} \tag{27}\] is an infinite cylinder or rectangular prism. In practice, we find that finite-relaxations of this set defined by \(\mathcal{C}(\hat{A}+\epsilon I)\) for \(\epsilon<<1\) are required for quickly solving the Hopf optimization problem. Finally, we note that to guarantee convergence to the global optimum, we also need convexity of the Hamiltonian. If the control and disturbance enter the Koopman space as in (23), the sets will remain convex, however, there is a chance the \(L\) matrices may change the relative sizes and break convexity. One could limit to disturbances on the control input only to guarantee this, but we also find in practice that the non-convex/minimax-viscosity problem can be solved with a powerful optimizer like ADMM and matches the viscosity solution closely. ## IV Results All results are computed using our codebase _HopfReachability.jl_, an open-source package designed for solving 2-player linear differential games. ### _Approximate BRS of the lifted Duffing Oscillator_ Here, we test the Koopman-Hopf approach on the nonlinear Duffing oscillator in both the cases with disturbance on the control input (convex Hamiltonian) and disturbance on the full state (non-convex Hamiltonian) with the previously discussed ellipsoidal target (24). We solve the BRS's evolving from the lifted targets, backwards in time with Koopman dynamics derived from a 4-degree polynomial \(P_{4}(x)\) (with 15 dimensions) that was hyper-tuned by the _autokoopman.py_ package written by the authors of [10]. We compare our results with an HJR DP method, namely _hj_reachability.py_, considered to be highly-accurate. Figure 1 compares the BRS's at \(t=2\) in the problem with disturbance on all states and non-convex Hamiltonian. We quantify the error with the Jaccard Index over a common discritized grid (\(1\implies\) perfect matching), \[JI(\mathcal{R}_{1},\mathcal{R}_{2})=\frac{|\mathcal{R}_{1}\cap\mathcal{R}_{2 }|}{|\mathcal{R}_{1}\cup\mathcal{R}_{2}|}. \tag{28}\] The Jaccard index is used to quantify the similarity of these sets to the _hj_reachability.py_ BRS, and we include a baseline derived from the Taylor series approximate dynamics with a localization point at the center of the target in \(\mathcal{X}\). The numerical results can be viewed in Table I. Note, in the non-convex case we lose guaranteed convergence to the global optimum (minimax) and agreement between the minimax and viscosity solutions, which likely explains why the Koopman-Hopf results. ### _Comparison of Koopman controllers on the 10D Glycolysis Model_ Next, we compare the ability of a Koopman-Hopf controller to navigate the true nonlinear dynamics of a 10D glycolysis model extended from [20, 28, 29] given its 10D DMDc linearization derived from the _pykoopman.py_ package. The Koopman-Hopf controller uses the standard Hopf controller formulation [3, 4, 15] to solve for the minimum time \(T^{*}\) for which the target is reachable given the current state \(x\), \[T^{*}=\text{argmin}_{T}\phi(x,t)=\text{argmin}_{T}\phi(g,t),\quad g=\Psi(x). \tag{29}\] The controller then applies the optimal control at this time which is derived from, \[\nabla_{p}H(\nabla_{z_{g}}\phi(z_{g_{0}},T^{*}),T^{*})=e^{-tK}L_{1}u^{*}+e^{- tK}L_{2}d^{*}. \tag{30}\] We compare this Koopman-Hopf controller with two basic Koopman-MPC formulations that also evolve their dynamics in the Koopman space [19, 30]. The MPC game (MPCg) solves the optimal control for a random fixed disturbance with a one-step horizon, and then the disturbance does the same for the previous control, and this iterates for 3 iterations (where improvement plateaued). The standard MPC (MPCs) is a stochastic MPC [31] where the controller samples 20 random disturbance trajectories and minimizes the expectation of the cost given evolution with those samples. The goal of all controllers is to achieve a target ATP concentration, as a bioengineer might desire for cell growth, by manipulating the external glucose, total NAD+/NADH pool, and ADP/ATP pool. After the controllers compute a \(u(t)\), the system is then progressed with the true nonlinear dynamics (using Radau). The system is complex because of the highly-stiff nonlinearities, state constraints (\(x_{i}>0\)) and inter-connectivity of the metabolic network; naively driving the ATP/ADP pool up leads to counter-productive results, as demonstrated by MPCg and MPCs. \begin{table} \begin{tabular}{|c||c|c||c|c|} \hline t & \(\mathcal{R}(\mathcal{T}_{Taylor})\) & \(\mathcal{R}(\mathcal{T}_{\mathcal{G}})\) & \(\mathcal{R}(\mathcal{T}_{Taylor})\) & \(\mathcal{R}(\mathcal{T}_{\mathcal{G}})\) \\ \hline \hline 0.0 & 1.0 & 0.97 & 1.0 & 0.97 \\ -0.33 & 0.90 & 0.96 & 0.92 & 0.91 \\ -0.66 & 0.80 & 0.88 & 0.85 & 0.80 \\ -0.99 & 0.61 & 0.82 & 0.71 & 0.66 \\ -1.32 & 0.47 & 0.78 & 0.50 & 0.60 \\ -1.65 & 0.37 & 0.79 & 0.37 & 0.59 \\ -1.98 & 0.30 & 0.76 & 0.30 & 0.58 \\ \hline \end{tabular} \end{table} TABLE I: BRS similarity results compared to true BRS, \(JI(\mathcal{R}(\mathcal{T}),\cdot)\) for disturbance on control and convex (left), disturbance on all states, non-convex (right) Fig. 2: Four controlled evolutions of the 10D glycolysis model (based on the same Koopman lift) with the same disturbance trajectory and initial condition. ‘Auto’ signifies the controller which is disturbed but always chooses the trivial input. The Koopman-Hopf controller amplifies the cycle of the phosphate, glucose and ATP states to achieve the target in a manner that translates to the nonlinear system. The simulation was run 50 times, with each controller subjected to the same random initial conditions sampled from the realistic concentration bounds given in [28, 29] and same random disturbance trajectory. The success of the Koopman-Hopf controller can be observed in Table II and Figure 2. The Koopman-Hopf controller appears to overcome the nonlinearity of the system by amplifying the oscillations of ATP to reach the target. ## V Conclusion We propose a Koopman-Hopf method in order to approximate Hamilton-Jacobi Reachability in high-dimensional, nonlinear systems. Although preliminary, we find this approach works well for approximating BRS's and driving high-dimensional, nonlinear systems with bounded disturbance. We hope to extend this work along several exciting directions, including expansion to more complicated lifting functions such as Radial Basis Functions and Neural Networks, applying this to black box systems, and quantifying the uncertainty based on the Koopman linearization error. We believe that the proposed method will ultimately become valuable for robustly maneuvering a wide range of otherwise intractable systems. ## Acknowledgements We thank Steven Brunton, Stanley Bak and Masih Haseli for discussions about Koopman theory, Gary Hewer and Matthew Kirchner for discussions about applying Hopf to Naval applications, and Somil Bansal for thought-provoking discussions. Finally, we thank Zheng Gong and Sander Tonkens for valuable feedback on the paper.
2305.02104
Background Knowledge Grounding for Readable, Relevant, and Factual Biomedical Lay Summaries
Communication of scientific findings to the public is important for keeping non-experts informed of developments such as life-saving medical treatments. However, generating readable lay summaries from scientific documents is challenging, and currently, these summaries suffer from critical factual errors. One popular intervention for improving factuality is using additional external knowledge to provide factual grounding. However, it is unclear how these grounding sources should be retrieved, selected, or integrated, and how supplementary grounding documents might affect the readability or relevance of the generated summaries. We develop a simple method for selecting grounding sources and integrating them with source documents. We then use the BioLaySum summarization dataset to evaluate the effects of different grounding sources on summary quality. We found that grounding source documents improves the relevance and readability of lay summaries but does not improve factuality of lay summaries. This continues to be true in zero-shot summarization settings where we hypothesized that grounding might be even more important for factual lay summaries.
Domenic Rosati
2023-05-03T13:24:49Z
http://arxiv.org/abs/2305.02104v1
# Background Knowledge Grounding for Readable, Relevant, and Factual Biomedical Lay Summaries ###### Abstract Communication of scientific findings to the public is important for keeping non-experts informed of developments such as life-saving medical treatments. However, generating readable lay summaries from scientific documents is challenging, and currently, these summaries suffer from critical factual errors. One popular intervention for improving factuality is using additional external knowledge to provide factual grounding. However, it is unclear how these grounding sources should be retrieved, selected, or integrated, and how supplementary grounding documents might affect the readability or relevance of the generated summaries. We develop a simple method for selecting grounding sources and integrating them with source documents. We then use the BioLaySum summarization dataset to evaluate the effects of different grounding sources on summary quality. We found that grounding source documents improves the relevance and readability of lay summaries but does not improve factuality of lay summaries. This continues to be true in zero-shot summarization settings where we hypothesized that grounding might be even more important for factual lay summaries. ## 1 Introduction Automatic lay summarization of biomedical research is a promising approach to help inform non-experts of vital scientific and clinical discoveries. However, known issues with factuality in automatic summarization systems Gabriel et al. (2021); Maynez et al. (2020) are still a barrier that prevents their safe deployment. To improve factuality, some have suggested using grounding sources with retrieval augmentation Shuster et al. (2021); Lewis et al. (2021); Thoppilan et al. (2022). This has been show to help maintain factuality in biomedical nlp tasks Guo et al. (2022) without harming readability or relevance. Additionally, Guo et al. (2022) suggests that retrieval augmentation is especially helpful for lay summarization because those summaries need to provide necessary background knowledge (see figure 1) such as definitions which are not often found in the source text. In this paper we wanted to understand (i) how we might develop a retrieval augmentation solution for lay summarization when using whole scientific papers for models with limited input context length and (ii) what is the effect of different grounding sources that contain different types of background knowledge on readability, relevancy, and factuality. **Contributions:** We develop (i) _a simple method for selecting and using grounding sources for lay summarization_. We assess this method using the BioLaySum Goldsack et al. (2022) biomedical paper lay summarization dataset and find that (ii) _grounding has the largest effect on readability_ where in the zero-shot summarization setting definitional background knowledge from Unified Medical Language System (UMLS) and simplified encyclopedic background knowledge from Wikipedia Simple provide better readability scores. Contrary to popular opinion, we found that (iii) _grounding does not improve factuality_. Figure 1: Lay summaries use background knowledge often not in the original paper. Example of different types of background knowledge from grounding sources. Method Summarizing scientific papers requires more input tokens than a large language model can typically support due to memory constraints. The average token count for articles in BioLaySum are 8,963 tokens for PLOS and 13,942 tokens for eLife with articles up to 45,563 tokens. Since we want to explore the effect of grounding articles with additional retrieved sources there is an even greater need for large token input support. Because of these factors, the base model we use in our experiments is the Longformer Encoder-Decoder (LED) (Beltagy et al., 2020) which supports an input token length of 16,384 tokens (see Appendix B for additional training and inference details). Our method was designed to test the effect of different grounding sources on downstream summarization quality. In addition to definitional background knowledge from UMLS and encyclopedic background knowledge from Wikipedia which were used in Guo et al. (2022), we introduced two other retrieval sources, _Wikipedia Simple_ for access to encyclopedic background knowledge in simpler terms and _Scientific Abstracts_ for access to further contextual background knowledge that might emulate the additional supplementary knowledge an expert has when crafting a lay summary. In all, we tested the following four grounding sources (1) _UMLS_ named entity definitions (2) _Scientific Abstracts_ (from Crossref) (3) _Wikipedia_ (English) (4) _Wikipedia Simple_ (English). See Appendix C for a full description of these grounding sources and how they were used. Our retrieval augmentation consisted of two steps: (i) retrieving and (ii) re-ranking documents, First we took each sentence in the leading 1,024 tokens of the article (roughly corresponding to the abstract) and searched them using BM25 on indexes constructed for each grounding source (except _UMLS_ which uses another method discussed). Indexes were constructed using Pyserini (Lin et al., 2021). The top 1 most relevant passage is selected and then added to a pool of candidate passages. In the case of _Scientific Abstracts_, we remove the abstract of the document we are enhancing if it was in the pool. In the case of _UMLS_, we follow Guo et al. (2022) by using the scispaCy entity linker (Neumann et al., 2019) over the first 1,024 tokens and provide definitions for the UMLS named entities as the pool of candidate passages. The above procedure results in too many results to fit within context length. In order to resolve this, we rank the pool of candidates passages against the first 1,024 tokens of the using a crossencoder (Reimers and Gurevych, 2019) (see Appendix B for details). Finally we construct our inputs by selecting the first 8,192 tokens of the original document and the top relevant grounding passages up to 8,192 tokens. A <|SEARCH|> token is inserted between the original document and the grounding passage and global attention is placed on the <|SEARCH|> token in order to assist with attention over the grounding passages. We also supplement all grounding sources with a bibliographic reference string containing the title, authors, and year of the paper being summarised. This was motivated by seeing many ground truth summaries which cited the source document by first authors name (For example: "Parks et al. analyzed data on US deaths between 1980 and 2016" which is the first sample in the eLife training subset of BioLaySum). ## 3 Experiments ### BioLaySum We experiment with the method above using the BioLaySum lay summarization dataset which contains 29,119 training samples (24,773 from PLOS and 4,346 from eLife) and 1,617 validation samples (1,376 from PLOS and 241 from eLife). See Appendix A for more details. This dataset is used to evaluate the ability for models to provide factual, readable, and relevant lay summaries of biomedical research articles from research papers in PLOS and eLife which are paired with human written lay summaries. All of the methods use a LED base model trained for 4 epochs evaluated every 5,000 steps, like (Goldsack et al., 2022) the checkpoint with the best ROUGE-2 is selected (see Appendix B for more details). Relevancy was measured based on BERTScore, Rouge1, Rouge2, and RougeL. Factuality was measured using a BARTScore (Yuan et al., 2021) trained on the BioLaySum dataset as well as an unsupervised metric SummaC (Laban et al., 2022). Readability scores used Dale-Chall Readability Score (DCRS) and Flesch-Kincaid Grade Level (FKGL) measures. Table 1 compares _LED_, a baseline model that is only trained on the original documents to generate the output lay summaries, and _All_, where the model is trained on the original document and sup plemented with passages from the retrieval sources. The results in table 1 show a few trends, first that _All_ improves relevancy and readability over the _LED_ setting. However, these gains are not very large which indicates that the grounding sources were not a very important signal for the model. Additionally, despite our hypothesis, factuality is not improved with grounding. ### Analysis of Grounding Methods To assess the impact of various retrieval sources on summarization quality, we trained a model for each retrieval corpus (table 1). The most noticeable difference is that articles grounding in passages from _Scientific Abstracts_ and _Wikipedia_ have the highest relevancy scores. Possibly due to a similarity in the text distributions of these sources with reference lay summaries. Other grounding sources have different genre roles such as definitions in the case of _UMLS_ or non-expert reference literature in the case of _Wikipedia Simple_. All methods generally improve readability as measured by DCRS with _Wikipedia_ having the largest effect. ### Zero-shot Summarization The lack of differences between grounding sources inspired us to consider an experiment where a model might be more likely to use grounding sources. We designed a zero-shot summarization experiment (with GPT 3.5) using the same method from section 2 but with 2,048 tokens selected for the original document and the rest from the original article (see Appendix D for more details). The results in table 2 show that grounding sources make much more of a difference for zero-shot summaries than trained summaries. The _Without_ input, which means without any grounding sources, has the highest relevancy and factuality scores indicating that in a zero-shot setting grounding sources tend to provide some distraction. Interestingly, _UMLS_ and _Wikipedia Simple_ encourage more readable summaries than other methods which is what we would expect to find since _UMLS_ provides definitions which are vital to assisting non-experts with engaging scientific findings and _Wikipedia Simple_ provides plain language encyclopedic knowledge designed to be readable. As we saw above, _Scientific Abstracts_ as a grounding source allows us to construct more relevant summaries, perhaps due to _Scientific Abstracts_ preserving the scientific language and context that is still vital for lay summaries. \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline & DCRS & FKGL & bartscore & summac & bertscore & rouge1 & rouge2 & rougeL \\ \hline LED & 12.36 & 15.51 & **-2.21** & **21.37** & 86.22 & 45.01 & 15.44 & 24.15 \\ All & 12.29 & **15.42** & -2.28 & 21.36 & **86.26** & **45.31** & **15.75** & **24.47** \\ \hline Wikipedia & **12.27** & 15.58 & -2.24 & 21.17 & 86.15 & 44.70 & 15.01 & 23.81 \\ Abstracts & 12.30 & 15.62 & -2.29 & 21.30 & 86.18 & 44.67 & 15.29 & 24.04 \\ UMLS & 12.32 & 15.61 & -2.23 & 21.22 & 86.14 & 44.67 & 14.94 & 23.80 \\ Wiki Simple & 12.31 & 15.58 & -2.22 & 21.24 & 86.14 & 44.51 & 14.87 & 23.74 \\ \hline \hline \end{tabular} \end{table} Table 1: Readability, factuality, and relevancy scores for different grounding sources compared against LED baseline. Lower is better for DCRS and FKGL. \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline & DCRS & FKGL & bartscore & summac & bertscore & rouge1 & rouge2 & rougeL \\ \hline All & 11.31 & 14.04 & -3.35 & **20.30** & 85.78 & 39.09 & 11.26 & 21.50 \\ Without & 11.23 & 14.15 & **-3.08** & **20.30** & **86.02** & **40.66** & **11.98** & **21.97** \\ Abstracts & 11.45 & 14.48 & -3.39 & 20.26 & 85.72 & 39.41 & 11.42 & 21.52 \\ UMLS & **10.80** & **13.42** & -3.37 & 20.25 & 85.56 & 38.40 & 10.00 & 20.69 \\ Wikipedia & 11.35 & 14.11 & -3.49 & 20.26 & 85.49 & 39.05 & 10.48 & 20.61 \\ Wiki Simple & 10.88 & 13.43 & -3.75 & 20.21 & 84.96 & 36.40 & 9.03 & 19.20 \\ \hline \hline \end{tabular} \end{table} Table 2: Zero-shot summarization setting exploring readability, factuality, and relevancy scores for each grounding source using GPT 3.5 Turbo. Discussion Grounding the original document in retrieval results helped most with readability, marginally with relevance and not at all with factuality. We believe this gives credence to the idea that grounding for lay summarization is primarily helpful for providing the model with background information that helps explain concepts, define terms, and otherwise situate the reader with the necessary information to be able to understand a scientific finding. We saw that the _UMLS_ and _Wikipedia Simple_ sources provided best effect on improving readability in the zero-shot summarization setting which is intuitive in that they provide clear definitions and encyclopedic background knowledge in simple terms. _Scientific Abstracts_ had the best effect of all sources on relevancy which is also intuitive because related abstracts such as ones with similar findings might help construct a more robust summary. These results indicate that we should continue to investigate the role of background information on improving readability and relevance in lay summarization. In particular, our retrieval method is quite simple and more sophisticated methods of retrieval such as dense passage retrieval Izacard et al. (2022) could be used to enhance the relevancy of grounding documents. Additionally, future work should investigate methods that learn more strongly to take advantage of grounding sources. The lack of improvement in factuality is an important aspect that future work should investigate. One explanation is that grounding sources could introduce factual or relevancy errors if the retrieved documents are irrelevant or incorrect and they end up being used in the generated summary. However, factuality metrics only measure the summary against original document, this means that statements that cannot be grounded in the original document may be penalized. This is an issue in lay summarization where there can be statements that are factual and provide necessary background knowledge but cannot be found in the original document. Future work should investigate methods of measuring factuality that are able to incorporate the necessary background knowledge when measuring the factuality of a lay summary. ## 5 Related Work There are a number of works looking at automatic lay summarization in the biomedical domain Goldsack et al. (2022); Guo et al. (2022); Luo et al. (2022); Devaraj et al. (2021). One central issue is understanding the effect of text simplification on various aspects of summary quality such as relevancy. Devaraj et al. (2022) explores the effect of text simplification on factuality by introducing a taxonomy of different error types that allows them to observe that while factual errors of missing information are a common error across generated and gold summaries, errors of substitution such as mixing up entities is a common occurrence in text simplification summarization models. Supplementing source documents with external knowledge has been one of the main interventions discussed for mitigating factual errors and hallucinations in natural language generation systems Shuster et al. (2021); Lewis et al. (2021). With the idea that having access to grounding sources allows models to draw on those sources when generating text rather than being forced to rely on parametric knowledge which may be flawed Thoppilan et al. (2022); Mallen et al. (2022). In summarization, researchers have used factual knowledge from external sources to improve factuality by encoding external knowledge in models during training Zhu et al. (2021); Mao et al. (2022) or to correct already generated summaries Lee et al. (2022). Guo et al. (2022) evaluates retrieval augmentation as a method for enhancing abstracts with background knowledge. They use definitions from UMLS and Wikipedia as different retrieval corpora and find their method improves both the readability and relevancy of summaries while maintaining a similar level of factuality as models not using grounding. However, they did not perform retrieval-augmented generation for the lay summary generation task which is the novel contribution in this work. ## Limitations Retrieval augmentation adds complexity to natural language generation requiring a separate retrieval module before the text generation step can begin. Additionally, retrieval augmentation possibly introduces more input text than the original input which is problematic for many neural network architectures with limited input space especially in the case of summarizing entire scientific papers. Finally, retrieval augmentation itself could introduce factual or relevancy errors if the retrieved documents are irrelevant or incorrect and they end up being used in the generated summary.
2304.04884
Multi-Sample Consensus Driven Unsupervised Normal Estimation for 3D Point Clouds
Deep normal estimators have made great strides on synthetic benchmarks. Unfortunately, their performance dramatically drops on the real scan data since they are supervised only on synthetic datasets. The point-wise annotation of ground truth normals is vulnerable to inefficiency and inaccuracies, which totally makes it impossible to build perfect real datasets for supervised deep learning. To overcome the challenge, we propose a multi-sample consensus paradigm for unsupervised normal estimation. The paradigm consists of multi-candidate sampling, candidate rejection, and mode determination. The latter two are driven by neighbor point consensus and candidate consensus respectively. Two primary implementations of the paradigm, MSUNE and MSUNE-Net, are proposed. MSUNE minimizes a candidate consensus loss in mode determination. As a robust optimization method, it outperforms the cutting-edge supervised deep learning methods on real data at the cost of longer runtime for sampling enough candidate normals for each query point. MSUNE-Net, the first unsupervised deep normal estimator as far as we know, significantly promotes the multi-sample consensus further. It transfers the three online stages of MSUNE to offline training. Thereby its inference time is 100 times faster. Besides that, more accurate inference is achieved, since the candidates of query points from similar patches can form a sufficiently large candidate set implicitly in MSUNE-Net. Comprehensive experiments demonstrate that the two proposed unsupervised methods are noticeably superior to some supervised deep normal estimators on the most common synthetic dataset. More importantly, they show better generalization ability and outperform all the SOTA conventional and deep methods on three real datasets: NYUV2, KITTI, and a dataset from PCV [1].
Jie Zhang, Minghui Nie, Junjie Cao, Jian Liu, Ligang Liu
2023-04-10T22:11:13Z
http://arxiv.org/abs/2304.04884v1
# Multi-Sample Consensus Driven Unsupervised Normal Estimation for 3D Point Clouds ###### Abstract Deep normal estimators have made great strides on synthetic benchmarks. Unfortunately, their performance dramatically drops on the real scan data since they are supervised only on synthetic datasets. The point-wise annotation of ground truth normals is vulnerable to inefficiency and inaccuracies, which totally makes it impossible to build perfect real datasets for supervised deep learning. To overcome the challenge, we propose a multi-sample consensus paradigm for unsupervised normal estimation. The paradigm consists of multi-candidate sampling, candidate rejection, and mode determination. The latter two are driven by neighbor point consensus and candidate consensus respectively. Two primary implementations of the paradigm, MSUNE and MSUNE-Net, are proposed. MSUNE minimizes a candidate consensus loss in mode determination. As a robust optimization method, it outperforms the cutting-edge supervised deep learning methods on real data at the cost of longer runtime for sampling enough candidate normals for each query point. MSUNE-Net, the first unsupervised deep normal estimator as far as we know, significantly promotes the multi-sample consensus further. It transfers the three online stages of MSUNE to offline training. Thereby its inference time is 100 times faster. Besides that, more accurate inference is achieved, since the candidates of query points from similar patches can form a sufficiently large candidate set implicitly in MSUNE-Net. Comprehensive experiments demonstrate that the two proposed unsupervised methods are noticeably superior to some supervised deep normal estimators on the most common synthetic dataset. More importantly, they show better generalization ability and outperform all the SOTA conventional and deep methods on three real datasets: NYUV2, KITTI, and a dataset from PCV [1]. 3D point clouds, normal estimation, unsupervised learning ## 1 Introduction Point clouds have emerged as a fundamental representation in reverse engineering, indoor scene modeling, robot grasping, and autonomous driving. Their surface normals, defining the local structure of the underlying surface to the first order, play a major role in these applications. Many deep neural networks have been proposed for more accurate normal estimation [3, 4, 5, 6, 7, 8, 9, 10], and they surpass the conventional methods [1, 12, 13, 11, 14, 15, 16, 17] indeed. But their performance drops significantly on raw point clouds since they are trained on synthesized datasets [4, 6] only, and the unseen target domains have considerable distribution shifts. An illustration can be seen in Fig. 1 where the point clouds of LiDAR, Kinect, and synthetic datasets show three different characteristics. It is a challenge to annotate point-wise normal precisely for the noisy data from various scanners, which makes them inaccessible to supervised learning that requires massive pairs of point positions and normals. Consequently, it is more desirable to estimate accurate normals for raw point clouds scanned by common sensors, such as Kinect and LiDAR, without resorting to normal annotations, as shown in Fig. 1. Conventional normal estimators [1, 11, 14, 15, 16] are unsupervised. But they tend to over smooth the inputs or may introduce additional artifacts around salient features. Manually tuning of an algorithm's parameters is necessary to accommodate different noise levels and feature scales of various point clouds. Recent developments on unsupervised image denoising [17, 18] and point cloud denoising [19, 20] have demonstrated the potential of learning from multiple random sampled observations, whether it be a pixel or a point, rather than relying on ground truth labels. It also sheds light on our scenario, even though the problem is different. We are attempting to estimate the properties of noisy observations while they are working on denoising the observations. More importantly, we have to address the multimodal distribution of randomly sampled candidate normals to achieve feature-preserving normal estimation. This is a departure from the unimodal distribution assumed in [17, 18, 19, 20] resort to the point cloud appearance to transform the distribution into a single mode, assuming the appearance is credible. Instead, we hope to solve the problem directly from observed point positions and do not depend on further input sources and assumptions. Hence the scenario of unsupervised normal estimation of noisy 3D point clouds is more challenging, which presents both theoretical difficulties and practical obstacles. At present, there is no literature addressing the problem of unsupervised normal estimation based on neural networks, to our knowledge. This paper proves that the ground truth normal of a query point is the expectation of a set of randomly sampled candidate normals, provided that the underlying surface is smooth and noise has zero mean. However, these assumptions may be invalid for scanned points with sharp features. Hence, we propose a practical unsupervised normal estimation paradigm driven by multi-sample consensus and two of its implementations: an optimization-based method (MSUNE), and a deep learning-based one (MSUNE-Net). The paradigm may be also applicable to other unsupervised low-level image and point cloud processing. Section 7 reports its preliminary application to improve DMR [20], a SOTA unsupervised point cloud denoising network. The paradigm consists of three stages: multi-candidate sampling, candidate rejection, and mode determination. The latter two are driven by neighbor point consensus and candidate consensus respectively. These stages are easier to understand with a brief introduction to MSUNE, as it is a typical implementation of the paradigm. MSUNE estimates one normal for each query point from a surface patch, _i.e._ a neighborhood, centered on the point. Taking a query point near some sharp edge, the purple point in Fig. 2, as an example, MSUNE generates multiple initial candidate normals (orange, blue, and green normals) randomly from its neighborhood. Then a more feasible set of candidate normals is built by rejecting some blurry candidates (green dotted normals) that do not conform to the underlying geometry structure. These normals have low consensus measured by the neighbor points, _i.e._ they are supported by fewer neighbor points. The distribution of the feasible candidates (orange and blue solid normals) is still a multimodal distribution. There are two modes in the 2D illustration, _i.e._ the coarse and thin red normals. Consequently, a robust candidate consensus loss function is designed in the third stage to locate the main mode (the coarse red normal), which is a feature-preserving normal with the greatest consensus of the feasible candidates. By minimizing the loss, MSUNE outputs a high-quality approximation of the ground truth normal. Higher accuracy can be achieved by sampling more initial candidate normals as shown in Fig. 1. It outperforms SOTA conventional normal estimators and supervised deep learning methods on real scan data when 400 candidates are sampled. While it is time-consuming even when 20 candidates per query point are used since the optimization is also slow. MSUNE-Net further introduces a patch-based neural network, which explores the non-local patch self-similarity of point clouds to boost multi-sample consensus. Compared with MSUNE, learning a normal solely on multiple candidates from the query point's neighborhood, MSUNE-Net can also learn from other query points with similar patches, which means the candidate consensus loss is defined on much more candidates implicitly even if the network is trained with far fewer candidates per query point. Consequently, a better approximation of the main mode of the candidates is calculated. It is also 100 times more efficient since there is only a network forward pass in inferencing. The manipulation and optimization in the three stages only occur during training. As illustrated in Fig. 1 and supported by the comprehensive experiments, our unsupervised approaches outperform the conventional methods and the cutting-edge deep learning-based methods on real Kinect [1, 21] and LiDAR datasets [2]. It is also noteworthy that the unsupervised MSUNE-Net surpasses some supervised deep normal estimators, such as HoughCNN and PCPNet, Fig. 1: MSUNE-Net shows higher performance than conventional methods and supervised deep normal estimators on the LiDAR point clouds (Left), sequence 06 of KITTI [2], and the Kinect dataset proposed in [1] (Bottom right). It is even superior to some supervised deep learning methods on the synthesized dataset of PCPNet [3] (Top right). MSUNE also outperforms previous methods in RMS errors on the Kinect dataset, when 400 initial candidate normals are employed. Some estimated normals for LiDAR point cloud by DeepFit [4] and MSUNE-Net are zoomed in blue and red boxes respectively. Compared with Deepfit, sharp features and tiny structures are better preserved in MSUNE-Net. Meanwhile, scanner noises are well suppressed by MSUNE-Net. Colors encode the orientation of the normals. Fig. 2: A 2D illustration of candidate normal distribution of a point close to a sharp edge. on the most common synthetic dataset. To summarize, our main contributions are three-fold: * A multi-sample consensus paradigm for unsupervised normal estimation, consisting of candidate sampling, candidate rejection, and mode determination, is proposed. It is also applicable to other low-level image and point cloud processing, such as unsupervised deep point cloud denoising. * MSUNE, a robust optimization method driven by the multi-sample consensus, is proposed. It outperforms all the SOTA optimization-based and supervised deep learning based normal estimators on real scanned point clouds. It is even superior to some supervised deep normal estimators on the most common synthesized dataset. * MSUNE-Net, an unsupervised deep learning method boosting the multi-sample consensus, is proposed. It is more accurate than MSUNE and is 100 times faster. ## 2 Related work **Normal estimation.** Normal estimation of point clouds is a long-standing problem in geometry processing, mainly because it is directly used in many downstream tasks. The most popular and simplest method for normal estimation is based on Principal Component Analysis (PCA) [11], which is utilized to find the eigenvectors of the covariance matrix constructed by neighbor points. Following this work, many variants have been proposed [22, 23, 24]. There are also methods based on Voronoi cells [25, 26, 27, 28]. But none of them are designed to handle outliers or sharp features [13]. Sparse representation methods [12, 16, 29] and robust statistical techniques [13, 30, 31, 32, 33, 34] are employed to improve normal estimation for point clouds with sharp features. And some other methods, such as Hough Transform [14], also show impressive results. Although these algorithms are unsupervised and have strong theoretical guarantees, they are very sensitive to noise. In addition, advanced conventional estimation methods are usually much slower than deep normal estimators, since the latter infers normals only with a forward propagation. In recent years, with the wide application and success of deep learning, normal estimation methods based on deep learning have also been proposed. There have been some attempts [35, 7, 36] to project local unstructured points into a regular domain for direct application of CNN architectures. With the advent of geometric deep learning, PointNet [37, 3, 38] and graph neural networks [39] are employed to learn a direct mapping from raw points to ground truth normals. More recent works have attempted to embed conventional methods into deep learning [9, 40, 41]. Ben et al. [4] (DeepFit) append weighted least squares polynomial surface fitting module to a PointNet network for point-wise weights estimation in an end-to-end manner. Zhu et al. [6] (AdaFit) add an additional offset prediction to improve the quality of the estimated normals. Zhang et al. [8] propose a geometry-guided neural network (G2N2) for robust and detail-preserving normal estimation. However, the above approaches are supervised, as they require a ground truth normal for each point. The ground truth normals are produced by sampling points from synthetic meshes or surfaces. Their results on the real scan data are unsatisfactory. Our approach does not require ground truth normals and performs well for real scan data. **Unsupervised learning.** Unsupervised deep learning has been intensively studied by researchers in recent years with the aim of alleviating the time-consuming and laborious data annotation challenge. When training a network, it needs some reasonable constraints to replace the ground truth. For advanced semantic tasks, pseudo-labeling [42, 43, 44] is a common idea. The pseudo labels may be from other pre-trained models or the network being trained with proper weight initialization from other supervised networks or pretext tasks. In our methods, the pseudo labels are generated from unlabeled data directly without relying on other methods. Moreover, different from these methods which depend on one accurate pseudo-label per-point, we randomly generate multiple low-cost and low-quality pseudo labels, _i.e._ multi-candidate normals, for each point. Low-level tasks, such as image denoising and point cloud denoising, usually operate under the assumption that noisy observations are stochastic realizations distributed around clean values. Lehtinen _et al._[17] utilize multiple noisy observations from image pairs to denoise an image. Going one step further, Noise2Void [45] and Noise2Self [18] remove the requirement of paired noise corruptions and instead work on a single image. Hermosilla _et al._[19] extend unsupervised image denoisers to 3D point clouds by introducing a proximity-appearance prior. The prior actually restricts the distribution of the candidate positions to have a single modal. Based on the core idea of [19], Luo _et al._[20] present an autoencoder-like neural network to improve the accuracy of denoising. Unfortunately, they cannot be naively extended to normal estimation, since they denoise the noisy observations but we estimate the indirect geometry property of the noisy observations. They take an approximated expectation of multiple random samples as the denoised pixel or point. While we locate the main mode of candidate normals without aid from other properties. The proposed MSUNE-Net is the first unsupervised deep normal estimator as far as we know. ## 3 The Multi-sample Consensus Paradigm We propose a practical unsupervised normal estimation paradigm driven by multi-sample consensus. For each query point of a point cloud, the paradigm predicts the normal of the point from a neighborhood centered on the point. It consists of three stages: multi-candidate sampling, candidate rejection, and mode determination. The first one is for sampling enough candidates, which may be from a multimodal distribution, from the point's neighborhood. The latter two stages are designed to approximate the main mode of those candidates. The candidate rejection stage filters out some candidates with lower consensus from neighbor points, and the mode determination stage estimates the main mode of the rest feasible candidates. The paradigm is actually general enough to be applied to other unsupervised low-level image and point cloud processing. A point cloud denoising example is reported in Section 7. Two implementations of the paradigm for normal estimation: an optimization-based method (MSUNE), and a deep learning-based one (MSUNE-Net), will be introduced in the next two sections. ## 4 MSUNE Given a 3D point cloud \(P=\{\mathbf{p}_{t}\mid t=1,2,\cdots,N\}\) and a query point \(\mathbf{p}_{t}\), MSUNE takes the \(k\) nearest neighbors of it, \(\mathcal{N}_{k}(\mathbf{p}_{t})\subset P\), as input and predicts a normal \(\hat{\mathbf{n}}_{t}\) for the point. MSUNE's pipeline is presented in Fig. 3. Multiple candidate normals can be initialized by a set of randomly sampled points from its neighborhood, which is detailed in subsection 4.1. We prove that the correct normal can be approximated by minimizing Eq. 2 when the underlying surface is smooth. However, if \(\mathbf{p}_{t}\) lies near some sharp features, The underlying surface of its neighborhood actually consists of multiple piecewise smooth surfaces and the candidate normals are sampled from a multimodal distribution, which invalidates the conclusion. As illustrated in Fig. 2, the randomly initialized candidates also include some disturbing normals in blue which are associated with the plane on the opposite side of the edge, and some blurry normals in green if the sampled points across the edge. Robust statistical techniques are used to filter out the impact of these undesired candidate normals driven by multi-sample consensus. The blurry normals can be rejected by the consensus of \(\mathbf{p}_{t}^{t}s\) neighbor points, _i.e._ there are a few neighbor points supporting a plane with a blurry normal (subsection 4.2). For the rest candidates, instead of approximating their expected value by minimizing the square of the Euclidean distance between \(\hat{\mathbf{n}}_{t}\) and the candidates, we design a novel candidate consensus loss (subsection 4.3). The disturbing modes have fewer supporters in the candidate normals. The main mode of the candidate normals owns the greatest consensus of them, which can be approximated by minimizing the loss function. ### _Multi-candidate sampling_ The initial candidate normals \(\hat{N}_{t}=\{\hat{\mathbf{n}}_{\theta}^{t}\}\) of the query point are built from its adaptive neighborhood \(\mathcal{N}_{\hat{k}}(\mathbf{p}_{t})\) of size \(\hat{k}\). \(\hat{\mathbf{n}}_{\theta}^{t}\) is the normal of a plane \(\theta\) determined by \(k_{s}\) points which are randomly selected from \(\mathcal{N}_{\hat{k}}(\mathbf{p}_{t})\), and a typical setting is \(k_{s}=3\). Thus it is efficient to build a large set of candidates for each query point. It is proven in the following subsection that the noise-free normal \(\mathbf{n}_{t}\) is the expectation of \(\hat{\mathbf{n}}_{\theta}^{t}\) if the underlying surface of the noisy point cloud is smooth and the expectation of the noise is zero. Hence a faithful approximation can be achieved if \(\hat{N}_{t}\) includes sufficient candidates. #### 4.1.1 Theoretical analysis Let \(\tilde{\mathbf{p}}_{t}\) be a point on the underlying smooth surface \(S\) of the point cloud, and its noisy observation is \(\mathbf{p}_{t}=\tilde{\mathbf{p}}_{t}+\varepsilon_{t}\). \(U\) is an open set on \(S\) centering at \(\tilde{\mathbf{p}}_{t}\). The normal of it is denoted as \(\mathbf{n}_{t}\). For any three non collinear points \(\{\tilde{\mathbf{p}}_{\theta}^{t},\tilde{\mathbf{p}}_{\theta 2}^{t},\tilde{\mathbf{p}}_{\theta 3}^{t}\}\) on U, \(\tilde{\mathbf{n}}_{\theta}^{t}=(\tilde{\mathbf{p}}_{\theta 2}^{t}-\tilde{ \mathbf{p}}_{\theta 1}^{t})\times(\tilde{\mathbf{p}}_{\theta 3}^{t}-\tilde{ \mathbf{p}}_{\theta 1}^{t})\) is the vector corresponding to the plane \(\theta\) spanned by \(\{\tilde{\mathbf{p}}_{\theta 1}^{t},\tilde{\mathbf{p}}_{\theta 2}^{t},\tilde{ \mathbf{p}}_{\theta 3}^{t}\}\). Their noisy observations are \(\{\mathbf{p}_{\theta 1}^{t},\mathbf{p}_{\theta 2}^{t},\mathbf{p}_{\theta 3}^{t}\}\), where \(\mathbf{p}_{\theta j}^{t}=\tilde{\mathbf{p}}_{\theta j}^{t}+\varepsilon_{j},j=1,2,3\). Each group of them defines an candidate normal \(\hat{\mathbf{n}}_{\theta}^{t}\). Here, \(\{\tilde{\mathbf{p}}_{\theta 1}^{t},\tilde{\mathbf{p}}_{\theta 2}^{t},\tilde{ \mathbf{p}}_{\theta 3}^{t}\}\) are sorted counterclockwise and noise \(\varepsilon_{j}=(\varepsilon_{xj},\varepsilon_{yj},\varepsilon_{zj})^{T}\) is assumed to be independent for components. For convenience, we do not normalize \(\tilde{\mathbf{n}}_{\theta}^{t}\) and \(\hat{\mathbf{n}}_{\theta}^{t}\) in the following theoretical analysis. We have the following theorem. **Theorem 1**.: _If \(E\{\varepsilon\}=0\), then we have \(E\left\{\hat{\mathbf{n}}_{\theta}^{t}\right\}=E\left\{\tilde{\mathbf{n}}_{ \theta}^{t}\right\}\)._ Proof.: Since \(\mathbf{p}_{\theta j}^{t}=\tilde{\mathbf{p}}_{\theta j}^{t}+\varepsilon_{j},j= 1,2,3\), we have \[\mathbf{p}_{\theta 1}^{t} =(\tilde{x}_{\theta 1}+\varepsilon_{x_{\theta 1}},\tilde{y}_{ \theta 1}+\varepsilon_{y_{\theta 1}},\tilde{z}_{\theta 1}+\varepsilon_{z_{\theta 1}})^{T}\,,\] \[\mathbf{p}_{\theta 2}^{t} =(\tilde{x}_{\theta 2}+\varepsilon_{x_{\theta 2}},\tilde{y}_{ \theta 2}+\varepsilon_{y_{\theta 2}},\tilde{z}_{\theta 1}+\varepsilon_{z_{\theta 1}})^{T}\,,\] \[\mathbf{p}_{\theta 3}^{t} =(\tilde{x}_{\theta 3}+\varepsilon_{x_{\theta 3}},\tilde{y}_{ \theta 3}+\varepsilon_{y_{\theta 3}},\tilde{z}_{\theta 3}+\varepsilon_{z_{\theta 3}})^{T}\,,\] where \(\tilde{\mathbf{p}}_{\theta j}^{t}=(\tilde{x}_{\theta j},\tilde{y}_{\theta j}, \tilde{z}_{\theta j})^{T}\) for \(j=1,2,3\) and \(\varepsilon_{s_{\theta i}}\) is independent for \(\ast=x,y,z\) and \(i=1,2,3\). Then, we have \[\hat{\mathbf{n}}_{\theta}^{t} =(\mathbf{p}_{\theta 2}^{t}-\mathbf{p}_{\theta 1}^{t})\times(\mathbf{p}_{\theta 3}^{t}-\mathbf{p}_{\theta 1}^{t})\] \[=\left(\begin{array}{c}\tilde{x}_{\theta 2}-\tilde{x}_{\theta 1}+\varepsilon_{x_{\theta 2}}- \varepsilon_{x_{\theta 1}}\\ \tilde{y}_{\theta 2}-\tilde{y}_{\theta 1}+\varepsilon_{y_{\theta 2}}- \varepsilon_{y_{\theta 1}}\\ \tilde{z}_{\theta 2}-\tilde{z}_{\theta 1}+\varepsilon_{z_{\theta 2}}- \varepsilon_{z_{\theta 1}}\\ \end{array}\right)\times\] \[\left(\begin{array}{c}\tilde{x}_{\theta 3}-\tilde{x}_{\theta 1}+\varepsilon_{x_{\theta 3}}- \varepsilon_{x_{\theta 1}}\\ \tilde{y}_{\theta 3}-\tilde{y}_{\theta 1}+\varepsilon_{y_{\theta 3}}- \varepsilon_{y_{\theta 1}}\\ \tilde{z}_{\theta 3}-\tilde{z}_{\theta 1}+\varepsilon_{z_{\theta 3}}- \varepsilon_{z_{\theta 1}}\\ \end{array}\right).\] Fig. 3: The proposed framework for unsupervised normal estimation. To estimate a normal for the query point \(\mathbf{p}_{t}\), MSUNE initializes a set of candidate normals first. It then rejects some unconvincing normals to build feasible candidates \(\{\tilde{\mathbf{n}}_{\theta}^{t}\}\). At last, it learns a noise-free normal of \(\mathbf{p}_{t}\) by minimizing the proposed candidate consensus loss defined by the feasible candidates. Instead of learning a normal just from the neighborhood of the point, MSUNE-Net can be defined by appending a normal estimation network to MSUNE. Consequently, MSUNE-Net learns more creditable normals from massive data in an unsupervised way. Denote the first component of \(\hat{\mathbf{n}}_{\theta}^{t}\) as \(\hat{\mathbf{n}}_{\theta,x}^{t}\). According to the definition of cross product, we have \[\begin{array}{l}\mathbf{n}_{\theta,x}^{t}=\left(\tilde{y}_{\theta 2}-\tilde{y}_{\theta 1}+\varepsilon_{y_{\theta 2}}-\varepsilon_{y_{\theta 1}}\right) \left(\tilde{z}_{\theta 3}-\tilde{z}_{\theta 1}+\varepsilon_{z_{\theta 3}}- \varepsilon_{z_{\theta 1}}\right)-\\ \left(\tilde{y}_{\theta 3}-\tilde{y}_{\theta 1}+\varepsilon_{y_{\theta 3}}- \varepsilon_{y_{\theta 1}}\right)\left(\tilde{z}_{\theta 2}-\tilde{z}_{\theta 1}+ \varepsilon_{z_{\theta 2}}-\varepsilon_{z_{\theta 1}}\right)\\ =\left(\tilde{y}_{\theta 2}-\tilde{y}_{\theta 1}\right)\left(\tilde{z}_{ \theta 3}-\tilde{z}_{\theta 1}\right)+\left(\tilde{y}_{\theta 2}-\tilde{y}_{\theta 1} \right)\left(\varepsilon_{z_{\theta 3}}-\varepsilon_{z_{\theta 1}}\right)\\ +\left(\tilde{z}_{\theta 3}-\tilde{z}_{\theta 1}\right)\left( \varepsilon_{y_{\theta 2}}-\varepsilon_{y_{\theta 1}}\right)+\left( \varepsilon_{y_{\theta 2}}-\varepsilon_{y_{\theta 1}}\right)\left( \varepsilon_{z_{\theta 3}}-\varepsilon_{z_{\theta 1}}\right)\\ -\left(\tilde{y}_{\theta 3}-\tilde{y}_{\theta 1}\right)\left( \tilde{z}_{\theta 2}-\tilde{z}_{\theta 1}\right)\\ -\left(\tilde{z}_{\theta 2}-\tilde{z}_{\theta 1}\right)\left( \varepsilon_{y_{\theta 3}}-\varepsilon_{y_{\theta 1}}\right)-\left( \varepsilon_{y_{\theta 3}}-\varepsilon_{y_{\theta 1}}\right)\left( \varepsilon_{z_{\theta 2}}-\varepsilon_{z_{\theta 1}}\right)\\ \end{array}\] Then, \[\begin{array}{l}E\{\hat{\mathbf{n}}_{\theta,x}^{t}\}=E_{\{\varepsilon, \hat{\mathbf{p}}_{\theta 1}^{t},\hat{\mathbf{p}}_{\theta 2}^{t},\hat{\mathbf{p}}_{\theta 3}^{t}\}}\{\hat{\mathbf{n}}_{\theta,x}^{t}\}\\ \qquad=E_{\{\hat{\mathbf{p}}_{\theta 1}^{t},\hat{\mathbf{p}}_{\theta 2}^{t},\hat{\mathbf{p}}_{\theta 3}^{t}\}}\{\hat{E}_{\{\varepsilon|\hat{\mathbf{p}}_{\theta 1}^{t},\hat{\mathbf{p}}_{\theta 2}^{t},\hat{\mathbf{p}}_{\theta 3}^{t}\}}\{\hat{\mathbf{n}}_{\theta,x}^{t}\}\}\\ \qquad=E_{\{\hat{\mathbf{p}}_{\theta 1}^{t},\hat{\mathbf{p}}_{\theta 2}^{t},\hat{\mathbf{p}}_{\theta 3}^{t}\}}\{\left(\tilde{y}_{\theta 2}-\tilde{y}_{\theta 1} \right)\left(\tilde{z}_{\theta 3}-\tilde{z}_{\theta 1}\right)\\ -\left(\tilde{y}_{\theta 3}-\tilde{y}_{\theta 1}\right)\left(\tilde{z}_{ \theta 2}-\tilde{z}_{\theta 1}\right)\}\\ \qquad=E_{\{\hat{\mathbf{n}}_{\theta,x}^{t}\}},\end{array}\] where, \(\tilde{\mathbf{n}}_{\theta,x}^{t}\) represents the first component of \(\tilde{\mathbf{n}}_{\theta}^{t}\). Therefore, \[\begin{array}{l}E\{\hat{\mathbf{n}}_{\theta}^{t}\}=E_{\{\varepsilon,\hat{ \mathbf{p}}_{\theta 1}^{t},\hat{\mathbf{p}}_{\theta 2}^{t},\hat{\mathbf{p}}_{\theta 3}^{t}\}}\{\hat{\mathbf{n}}_{\theta}^{t}\}\\ \qquad=E_{\{\hat{\mathbf{p}}_{\theta 1}^{t},\hat{\mathbf{p}}_{\theta 2}^{t},\hat{\mathbf{p}}_{\theta 3}^{t}\}}\{\hat{\mathbf{n}}_{\theta}^{t}\}=E\{\tilde{\mathbf{n}}_{\theta }^{t}\}.\end{array}\] **Corollary 1**.: _If \(S\) is smooth and \(E\{\varepsilon\}=0\), then \(E\{\hat{\mathbf{n}}_{\theta}^{t}\}\) is on the same line with \(\mathbf{n}_{t}\)._ Proof.: Since \(S\) is smooth, a small enough neighborhood \(U\) can be regarded as a plane. Then, for any three non collinear points \(\{\tilde{\mathbf{p}}_{\theta 1}^{t},\tilde{\mathbf{p}}_{\theta 2}^{t},\tilde{\mathbf{p}}_{\theta 3}^{t}\}\), \(\tilde{\mathbf{n}}_{\theta}^{t}=k_{\theta}\mathbf{n}_{t}\), where \(k_{\theta}\) is a coefficient related to \(\{\tilde{\mathbf{p}}_{\theta 1}^{t},\tilde{\mathbf{p}}_{\theta 2}^{t},\tilde{\mathbf{p}}_{ \theta 3}^{t}\}\). Thus, \(E\left\{\hat{\mathbf{n}}_{\theta}^{t}\right\}=k\mathbf{n}_{t}\), where \(k=E\{k_{\theta}\}\). Then we have \(E\left\{\hat{\mathbf{n}}_{\theta}^{t}\right\}=k\mathbf{n}_{t}\) for \(E\{\varepsilon\}=0\). Since \(\{\tilde{\mathbf{p}}_{\theta 1}^{t},\tilde{\mathbf{p}}_{\theta 2}^{t},\tilde{\mathbf{p}}_{ \theta 3}^{t}\}\) are sorted counterclockwise, \(k_{\theta}\) is positive. Therefore \(k\neq 0\). The conclusion is proof. The minimum of the following optimization problem \[\operatorname*{argmin}_{\mathbf{z}}E_{\hat{\mathbf{n}}_{\theta}^{t}}\left\{\| \mathbf{z}-\hat{\mathbf{n}}_{\theta}^{t}\|_{F}^{2}\right\}, \tag{1}\] is found at the expectation of \(\hat{\mathbf{n}}_{\theta}^{t}\), _i.e._\(\mathbf{z}=E\{\hat{\mathbf{n}}_{\theta}^{t}\}\). We have proven that \(E\{\hat{\mathbf{n}}_{\theta}^{t}\}=k\mathbf{n}_{t}\). Therefore, given finite candidate normals \(\hat{N}_{t}\), we can approximate the direction of \(\mathbf{n}_{t}\) by the following optimization problem \[\operatorname*{argmin}_{\mathbf{z}}\sum_{\hat{\mathbf{n}}_{\theta }^{t}\in N_{t}}\left\{\|\mathbf{z}-\hat{\mathbf{n}}_{\theta}^{t}\|_{F}^{2}\right\}, \tag{2}\] when the surface is smooth. In the implementation, the candidate normals \(\hat{\mathbf{n}}_{\theta}^{t}\) are normalized. To avoid introducing unnecessary symbols, we still use \(\hat{\mathbf{n}}_{\theta}^{t}\) to represent the normalized vector in the following discussion. It is noteworthy that the larger the size of the candidate set \(\hat{N}_{t}\), the better the approximation. #### 4.1.2 Adaptive neighborhood In theoretical analysis, the neighborhood \(\mathcal{N}_{k}(\mathbf{p}_{t})\) should be as small as possible to be regarded as a plane. But, a small neighborhood is inadequate to depict the underlying structure of the surface when the point cloud is polluted by large noise. Therefore, the neighborhood size is adaptively determined. Specifically, for each point \(\mathbf{p}_{t}\), we first apply the local covariance analysis to characterize its noise level \(f_{t}\) (Section 4 in [1]). The average \(f=\sum_{t=1}^{N}f_{t}/N\) is then utilized to describe the noise scale of the whole point cloud. Finally, a suitable neighborhood size is determined according to the interval in which \(f\) is located: \(\hat{k}=k_{i},if\ l_{i-1}\leq f<l_{i}\). We take \(k_{1}=32\), \(k_{2}=128\), \(k_{3}=256\), \(k_{4}=450\), \(l_{0}=0\), \(l_{1}=0.02\), \(l_{2}=0.14\), and \(l_{3}=0.16\), \(l_{4}=0.3\) in our experiments. ### _Candidates rejection_ The randomly initialized candidates are noisy unavoidably, and their number may be limited in practice. Hence it is necessary to reject some infeasible normals for a high-quality approximation of Eq. 2. Here we follow the idea of kernel density estimation. For each candidate normal \(\hat{\mathbf{n}}_{\theta}^{t}\), we define a score \(s_{\theta}^{t}\) based on the distances of all neighbor points to its corresponding plane \(\theta\): \[s_{\theta}^{t}=\sum_{i=1}^{\hat{k}}e^{-\frac{\left(d(\mathbf{p}_{i}^{t},\mathbf{ \theta})\right)^{2}}{\sigma^{2}}}, \tag{3}\] where \(\mathbf{p}_{i}^{t}\in\mathcal{N}_{k}(p_{t})\) is a neighbor point of \(\mathbf{p}_{t}\), \(d(\mathbf{p}_{i}^{t},\theta)\) represents the Euclidean distance from point \(\mathbf{p}_{i}^{t}\) to the plane \(\theta\), and \(\sigma\) is the bandwidth of the Gaussian kernel function. A higher score means a higher consensus reached by the query point's neighbor points. It is designed to reject normals supported by fewer inliers, as shown in Fig. 4. When \(d(\mathbf{p}_{i}^{t},\theta)\) is less than the bandwidth \(\sigma\), the point \( of the feasible candidate normals of a point close to sharp features is still a multimodal distribution, as demonstrated in Fig. 2. It should be possible to exclude the interference of disturbing normals and select the main mode, since \(\mathbf{p}_{t}\) is not exactly on the edge. There may be also some other causes since the limited candidates are randomly determined from the noisy point cloud and our candidate rejection strategy is naive. Anyway, the average of all the feasible candidates is still far from expected. Inspired by the idea of robust statistical techniques, we design the following candidate consensus loss function for normal estimation: \[L_{ccn}=\sum_{\theta}-e^{-\frac{\|\mathbf{z}_{t}\times\mathbf{z}_{t}^{2}\|_{E} ^{2}}{\tau^{2}}},\quad\hat{\mathbf{n}}_{\theta}^{t}\in\tilde{N}_{t}, \tag{4}\] where \(\hat{\mathbf{n}}_{t}\) is the predicted normal, \(\tau=\sin\alpha\) denotes the bandwidth of the Gaussian kernel function. \(\alpha=\frac{\pi}{6}\) in all the experiments. When the angle between the predicted normal and the candidate normal is less than \(\alpha\), we consider this candidate to be an inlier of the predicted normal. Thus a predicted normal with the most inliers, _i.e._ with the greatest consensus of the candidate normals, is desired. As shown in Fig. 5, this mechanism can effectively overcome the influence of disturbing normals. ## 5 MSUNE-Net MSUNE outperforms previous conventional methods and is comparable with some supervised deep normal estimators, such as PCPNet (see Section 6 for details). But it needs to calculate a large number of candidate normals and solve an optimization problem at each point. They are both time-consuming. Therefore, we wish to develop an unsupervised deep learning method. All the time-consuming operations will be transferred to the training process and only a network forward pass is necessary for normal inference. It is noteworthy that the network is not just an accelerator that learns from the single output normal of MSUNE, but is trained on a dataset with the candidate consensus loss defined by feasible candidates. Consequently, MSUNE-Net is also superior to MSUNE in performance. The reason is explained in the following subsection. ### _Patch-based network_ Patch-based learning is the most common approach adopted by existing deep normal estimators [3, 4, 8]. They take the patch centered at each query point as the input and the patch is normalized in preprocessing to remove unnecessary degrees of freedom from scale and rotation. We find that the patch self-similarity across point clouds is explored by such networks, and a patch-based network is a perfect fit for our problem setting. Compared with MSUNE which predicts the normal for each point independently, a patch-based network may utilize more candidate normals from similar neighborhoods implicitly during training, as illustrated in Fig. 6. Therefore, it further improves the quality of normal estimation even if there are only a few candidate normals at each point. Specifically, MSUNE-Net trained with 100 initial candidate normals is superior to MSUNE using 400 candidate normals per query point. An experiment for this is shown in Tab. IV. Generally, any patch-based normal learning network may be employed. We follow the architecture of DeepFit [4], which learns point-wise weight for weighted least squares fitting. Specifically, the input \(\mathcal{N}_{k}(p_{t})\) is passed through PointNet to extract a feature for each point. Next, the feature is fed to the multi-layer perceptron (MLP) to output weight \(w_{i}\) for each neighbor point \(\mathbf{p}_{t}^{i}\) in \(\mathcal{N}_{k}(p_{t})\). Finally, these weights are used to fit a plane whose normal is the estimated normal of \(\mathbf{p}_{t}\). For more details, please refer to Section 3.1 of [4]. We further design a feature loss function for the estimated weights to recover the sharp features. ### _Feature loss_ Two neighbor points with significantly different normals tend to be located on different surfaces divided by sharp features. Therefore, these two points should not have large weights at the same time when fitting a plane. Based on this fact, we design a feature loss to constrain the point-wise Fig. 5: Candidate consensus loss function can effectively exclude the influence of disturbing normals (dotted normals). The red normals in (a) and (b) are the average normals of all candidates and solid candidates, respectively. The candidates in the green circles are regarded as inliers of the red normals. The darker the color, the higher their contribution to the red normals, which are determined by the candidate consensus loss. Fig. 6: More candidate normals from similar patches contribute the normal approximation of the current query point implicitly when a patch-based network is employed. For simplicity, the input patches and candidate normals are drawn as points. weight learned by the network, which is beneficial to recover sharp features. The feature loss function is formulated as follow: \[L_{w}=\sum_{i,j=1}^{k}e^{-\frac{(n_{i},n_{j})^{2}}{\omega^{2}}}\cdot w_{i}\cdot w _{j}, \tag{5}\] where \(\mathbf{n}_{i}\) is the normal of \(\mathbf{p}_{i}^{i}\) computed by PCA with neighborhood size \(\hat{k}\), \(w_{i}\) is the learned weight of \(\mathbf{p}_{i}^{i}\), and \(\omega\!\!=\!\cos\frac{\pi}{5}\) is a hyperparameter. When \(n_{i}\) and \(n_{j}\) are significantly different (angles between them larger than \(\frac{\pi}{3}\)), \(e^{-\frac{(n_{i},n_{j})^{2}}{\omega^{2}}}\) will be large. This will prevent \(w_{i}\) and \(w_{j}\) from achieving large values at the same time. ### _Total loss_ The total loss to train the network includes four terms: the candidate consensus loss \(L_{\text{ccn}}\) between the predicted normal and multi-candidate normals at the query point, the feature loss \(L_{w}\) for weights learned by network, the regularization loss \(L_{\text{regw}}=-\frac{1}{k}\sum_{j=1}^{k}\log{(w_{j})}\) preventing all of weights from being optimized to 0, and a transformation matrix regularization term \(L_{\text{regm}}=|I-AA^{T}|\) where \(I\) is an identity matrix and \(A\) is the transformation matrix used in PointNet. On the whole, it is defined as follows: \[L_{\text{total}}=L_{\text{ccn}}+\gamma\cdot L_{w}+\delta_{1}\cdot L_{\text{ regw}}+\delta_{2}\cdot L_{\text{regm}}, \tag{6}\] where \(\gamma\), \(\delta_{1}\) and \(\delta_{2}\) are hyper-parameters used to balance these four terms. We set \(\gamma=10^{-4}\), \(\delta_{1}=5\times 10^{-2}\) and \(\delta_{2}=10^{-2}\) in all the experiments. ## 6 Experiments To evaluate the performance of our approach thoroughly, two types of methods are compared: 1) the conventional normal estimation methods including PCA [11], Jet [15], HF-cubes [14], LRRfast [16] and PCV [1]; 2) the deep learning-based supervised normal estimation methods including AdaFit [6], G2N2 [8], DeepFit [4], PCPNet [3] and HoughCNN [35]. For the conventional methods, we uniformly choose the same scale as ours with 256 neighbor points. The fit order of Jet is set to 2. All other parameters, if any, are set to default. Extensive experiments are conducted on the synthetic dataset of PCPNet [3] and three scanned datasets, including KITTI [2], NYUV2 [21] and the dataset from PCV [1]. The performance is evaluated with the _Root Mean Square_ (RMS) error of the angle difference [3], _RMS with threshold_ (RMS\({}_{\tau}\)) which makes the measure greater than \(\tau^{\circ}\) as bad as \(90^{\circ}\)[14], and the _Percentage of Good Points_ (PGP\(\alpha\)) metric which computes the percentage of points with an error less than \(\alpha\) degree [3]. We use the Adam optimizer with a batch size of 512 and a learning rate of 0.001. The implementation is done in PyTorch and trained on a single Nvidia GTX 1080Ti. For MSUNE and MSUNE-Net, the size \(k\) of input neighborhood \(\mathcal{N}_{k}(\mathbf{p}_{t})\) is 256, the number \(k_{s}\) of randomly selected points used to fit candidate normals is set to 4, and the number of the initial candidate normals is 100. ### _Experiments on synthetic data_ RMS angle error of our methods (MSUNE and MSUNE-Net) and other methods on the PCPNet test set are shown in Tab. 1. Both MSUNE and MSUNE-Net estimate more accurate normals than all conventional methods. For PCPNet, we use the 3-scale network provided by authors, which mildly improves performance relative to its single-scale network [3]. For HoughCNN, the lowest average error is achieved by the single-scale network provided by the authors with 100 neighbor points. AdaFit, G2N2, and DeepFit do not have multi-scale versions since they learn the least square weights. It can be seen that the supervised methods based on deep learning generally perform better than the conventional methods. Our unsupervised method based on deep learning (MSUNE-Net) is better than some supervised methods including PCPNet and HoughCNN, and slightly less effective than AdaFit, G2N2, and DeepFit. But neither of them performs well on real scanned data which can be found in the following subsections. In addition, we evaluate the normal estimation performance on the PCPNet dataset using PGP\(\alpha\). PGP\(\alpha\) of different methods with increasing \(\alpha\) values are shown in Fig. 7. It can be noted that our MSUNE-Net has higher PGP\(\alpha\) than the previous conventional unsupervised methods, and outperforms PCPNet when \(\alpha>10\). The supervised methods demonstrate higher PGP\(\alpha\) on the synthetic dataset in general. Fig. 8 provides some qualitative comparisons and PGP\(5\) and PGP\(10\) of the results are also labelled. Compared with the conventional approaches, MSUNE-Net is more robust to sharp features (first row), details (second row), and nearby surface sheets (third row). Compared with PCPNet, MSUNE-Net performs better on smooth and details regions. ### _Experiments on scanned data_ In this section, three real scanned datasets, including two Kinect datasets and one LiDAR dataset, are utilized to evaluate our methods. Note that all the learning-based methods are only trained on the PCPNet dataset and directly evaluated on these three scanned datasets. Fig. 7: Comparison of PGP\(\alpha\) on the synthetic dataset of PCPNet. MSUNE-Net surpasses previous conventional methods and is superior to PCPNet when \(\alpha>10\). #### 6.2.1 Evaluation on datasets scanned by Kinect To begin with, we use two real datasets captured by a Kinect v1 RGBD camera to evaluate the generalization ability of our methods. The real scan data reveals more challenges, such as the fluctuation on flat surfaces, originating from the projection process of Kinect camera. The non-Gaussian and discontinuous noise make the scanned data have different noise patterns from the synthetic data. Specifically, its noise often has the same magnitude as some of the features [4, 40]. **Dataset from PCV [1].** There are 71 scans in the training set and 73 scans in the test set. For each scan, a registered and reconstructed mesh from an Artec Spider\({}^{\mathrm{TM}}\) scanner (accuracy 0.5 mm) is used to help build ground truth normals. Although the annotations may not be correct or accurate enough, they provide one quantitative way to evaluate the methods at least. Tab. 2 shows RMS, RMS\({}_{r}\), and PGF\(\alpha\) for the whole test set. Some visual comparisons of estimated normals rendered in RGB color are shown in Fig. 9. From Tab. 2, we see that our methods achieve the lowest RMS and PGP errors. Moreover, we reveal an interesting phenomenon that, contrary to the results on the synthetic dataset, all supervised methods based on deep learning perform worse than the unsupervised methods. The higher the performance on the synthetic dataset, the lower the performance on the real dataset, as indicated by the RMS error of AdaFit, G2N2, and DeepFit in Tab. 1 and 2. This phenomenon occurs mainly because the noise patterns of the synthetic and real scanned datasets are different. The deep learning-based \begin{table} \begin{tabular}{l|c|c c c|c c|c} \hline & & \multicolumn{3}{c|}{**Noise**} & \multicolumn{2}{c|}{**Density**} & \\ \hline Method & No noise & Small & Middle & Large & Gradient & Stripes & Average \\ \hline **Sup.** & & & & & & & \\ AdaFit & 5.19 & 9.05 & 16.44 & 21.94 & 5.90 & 6.01 & 10.76 \\ G2N2 & 5.44 & 9.12 & 16.36 & 21.09 & 6.19 & 6.54 & 10.79 \\ DeepFit & 6.51 & 9.21 & 16.72 & 23.12 & 7.31 & 7.92 & 11.8 \\ PCPNet & 9.62 & 11.37 & 18.87 & 23.28 & 11.70 & 11.18 & 14.34 \\ HoughCNN & 10.23 & 11.62 & 22.66 & 33.39 & 11.02 & 12.47 & 16.9 \\ \hline **Unsup.** & & & & & & \\ PCA & 14.55 & 14.68 & 17.96 & 24.14 & 15.24 & 16.87 & 17.24 \\ Jet & 14.99 & 15.12 & 17.9 & 24.14 & 16.64 & 15.71 & 17.41 \\ HF-cubes & 14.42 & 13.68 & 18.86 & 27.68 & 14.84 & 16.1 & 17.59 \\ LRRfast & 11.57 & 13.09 & 19.33 & 26.75 & 12.55 & 13.4 & 16.11 \\ PCV & 13.54 & 14.51 & 18.5 & 26.65 & 14.27 & 14.52 & 16.99 \\ MSUNE & 9.8 & 13.07 & 19.37 & 25.55 & 10.47 & 10.91 & 14.86 \\ MSUNE-Net & **8.69** & **11.32** & **17.58** & **23.93** & **9.62** & **10.33** & **13.57** \\ \hline \end{tabular} \end{table} TABLE I: Comparison of the RMS angle error on the synthetic dataset of PCPNet. MSUNE-Net surpasses previous conventional methods and some supervised methods including PCPNet and HoughCNN. Fig. 8: Visual comparisons of the estimated normals on three models from the PCPNet dataset. Shapes are colored according to the normal angle error, where darker reds indicate larger errors and darker blues indicate smaller errors. Numerators and denominators are PGP5 and PGP10 of the results (higher is better). supervised method misidentifies some noises as features, which destroys the structure of smooth regions, as shown in the results of AdaFit, G2N2, DeepFit, and PCPNet in Fig. 9. This also indicates that the generalization ability of supervised methods based on deep learning needs to be improved in essence. The conventional methods, including HF-cubes, LR-Rfast, and PCV, handle the fluctuation introduced by the scanner better than supervised methods. While they tend to trigger off artifacts around sharp features, such as the noses of the two statues in Fig. 9. Our MSUNE-Net works well for both cases. It can better trade off features and smooth regions compared with other methods. This proves that our strategy has a stronger generalization ability. **NYUV2 dataset.** We also evaluate the performance of the proposed MSUNE-Net on NYUV2 dataset [21]. This dataset contains various indoor scenes without ground truth normals, so we only perform a qualitative comparison of various methods. As shown in Fig. 10, AdaFit, G2N2, and DeepFit can preserve tiny details but with the price of retaining scanner noise. PCPNet and HF-cubes can well smooth noisy sur \begin{table} \begin{tabular}{l c|c c c|c c c c} \hline Models & RMS & RMS\_10 & RMS\_15 & RMS\_20 & PCP10 & PCP15 & PCP20 & PCP25 \\ \hline \multicolumn{1}{l}{**Sup.**} & \multicolumn{6}{c}{} & \multicolumn{6}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ AdaFit & 18.38 & 66.25 & 54.17 & 44.32 & 0.4593 & 0.6421 & 0.7666 & 0.8495 \\ G2N2 & 17.57 & 64.27 & 51.40 & 41.22 & 0.4909 & 0.6782 & 0.7993 & 0.8743 \\ DeepFit & 13.68 & 58.01 & 43.71 & 32.81 & 0.5853 & 0.7682 & 0.8753 & 0.9343 \\ PCPNet & 15.73 & 62.73 & 47.17 & 35.62 & 0.5155 & 0.7302 & 0.8527 & 0.9162 \\ \hline \multicolumn{1}{l}{**Unsup.**} & \multicolumn{6}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ HF-cubes & 11.72 & 43.99 & 31.79 & 24.73 & 0.7568 & 0.8747 & 0.9274 & 0.9536 \\ LRRfast & 13.25 & 48.36 & 34.23 & 26.84 & 0.7103 & 0.8573 & 0.9162 & 0.9452 \\ PCV & 11.74 & 46.85 & 31.81 & 23.85 & 0.7279 & 0.8768 & 0.9347 & 0.9607 \\ MSUNE & 11.67 & 49.54 & 33.64 & 24.51 & 0.6967 & 0.8628 & 0.9316 & 0.9620 \\ MSUNE-Net & **10.89** & **43.75** & **30.82** & **23.40** & **0.7614** & **0.8836** & **0.9363** & **0.9631** \\ \hline \end{tabular} \end{table} TABLE II: Comparison of RMS, RMS\({}_{\tau}\), and PGP\(\alpha\) on the Kinect dataset of PCV [1]. Both MSUNE and MSUNE-Net surpass previous conventional methods and deep learning methods. Fig. 9: Visual comparison of estimated normals by different methods on two statues scanned by Kinect from PCV [1]. Supervised deep normal estimators suffer from the scanner noise, _i.e._ the small fluctuations on smooth regions. The conventional methods can handle the fluctuations but may introduce artifacts around sharp features. MSUNE-Net works well for both two cases. Numbers show RMS of the results. faces, but fail to preserve tiny details. LRRfast and PCV can capture some details and smooth the noisy surfaces, with the price of some erroneous normals in high curvature regions. Moreover, they are less practical due to longer runtime. MSUNE-Net copes with all these challenges. This experiment further illustrates that our method has better generalization performance. #### 6.2.2 Evaluation on LiDAR dataset We further demonstrate our performance on the KITTI dataset, which is widely used in autonomous driving and has served as a public odometry and SLAM benchmark since 2012. The dataset provides sequentially organized point clouds collected by LiDAR with 64 beams. The experiments are conducted on the KITTI calibration sequence 2011-09-30. We use the corresponding ground truth device poses to splice out the whole scene, where the point cloud normals of every single frame are calculated and merged for the visualization in Fig. 1 and 11. Compared with DeepFit, a cutting-edge supervised method, and PCA, the most widely used method, more geometry structures are preserved in the results of MSUNE-Net, as shown in Fig. 11. Many of them may facilitate further semantic analysis, such as the plane structures (third column), the sides of the vehicles (first and last columns), and tiny structures (second and third columns). MSUNE-Net also contributes to the more clear ground (second column) and wall (first column), since it can better prevent interference from closed objects. ### _Efficiency_ In Fig. 1, we report the average running time of different methods. Among them, the methods based on deep learning are implemented in Python 3.7.3. PCV and LRRfast are implemented in Matlab. HF-cubes is implemented in C++. DeepFit, G2N2, and MSUNE-Net have similar networks and inference times, and they are the fastest (about 0.3ms per point). PCPNet and AdaFit are a bit slower since they have more network parameters. Their inference time is 0.61ms and 0.71ms per point respectively. The conventional high-quality methods exhibit more generalization ability compared with the deep supervised methods. But they usually are more time-consuming. MSUNE-Net, a deep network-based unsupervised method, integrates the merits of the two approaches. ### _Ablation study_ **Validation of construction factors for candidate normals.** Three important factors that may affect the quality of candidate normals are examined: multi-candidate sampling (MCS), adaptive neighborhood size (ANS), and candidate Fig. 10: Visual comparison of estimated normals on three scanned scenes from NYUV2. AdaFit, G2N2, and DeepFit can not suppress scanner noise. PCPNet and HF-cubes may fail to preserve tiny structures. LRRfast and PCV tend to introduce artifacts around high curvature regions. MSUNE-Net overcomes these challenges. After performing unoriented normal estimation, we flip normals according to the camera position. Colors encode the direction of oriented normals. rejection (CR). We remove them to explore their influence on normal estimation. Without MCS, a single candidate normal, fitting all the neighbor points by PCA, is used to train the network. In the absence of ANS, we set the neighborhood size to a fixed value of 256. In order to better observe the influence of candidate normals, instead of using the candidate consensus loss and feature loss, we use a more general loss function \(L_{pre}\) to describe the difference between the predicted normal and candidate normals at each query point: \[L_{pre}=\sum_{\theta}\|\hat{\mathbf{n}}_{t}\times\mathbf{n}_{\theta}^{t}\|_{F }^{2}, \tag{7}\] where \(\mathbf{n}_{\theta}^{t}\) represents the candidate normals which can be a single normal (without MCS), initial candidates (without CR), or feasible candidates (with CR). Four ablation studies about the three factors are reported in lines 1 to 4 of Tab. III. Line 1 shows that training a network from a conventional method may exceed the conventional method itself. Specifically, if we use the results of PCA with 256 neighbors as a guide, the network still achieves slightly better performance (0.24 lower RMS angle error) than the former (the results of PCA are shown in Tab. I). After adding MCS, the RMS angle error is 0.78 lower than the PCA method with 256 neighbor points, which illustrates that learning from multi-candidate normals is superior to that from a single normal. Among these three factors, ANS plays an important role, which further reduces the RMS angle error by 1.67 and down to 14.79. CR can further improve the quality of candidate normals by rejecting the blurry normals in the initial candidate normals, which reduces the RMS angle error by 0.51. **Validation of loss terms.** The contribution of \(L_{ccn}\) and \(L_{w}\) is illustrated in line 5 to 10 of Tab. III. When \(L_{ccn}\) is disabled, \(L_{pre}\) defined in Eq. 7 is used to make sure that the network can still learn from the candidate normals. The role of CR is similar to \(L_{ccn}\) in deleting or weakening infeasible candidate normals, but they are actually complementary. To validate this, we divide the experiments into two groups: without CR (lines 5 to 7 of Tab. III) and with CR (lines 8 to 10 of Tab. III). It is found that both \(L_{ccn}\) and \(L_{w}\) further boost the performance of MSUNE-Net. The contribution of \(L_{ccn}\) is more than that of \(L_{w}\) in the absence of CR, and the effect is reversed when CR is introduced. **Validation of the number of candidate normals.** In Section IV.1, we construct multi-candidate normals by randomly picking points multiple times. In theory, the more candidate normals, the closer the empirical distribution is to the actual distribution. Thus, the higher the quality of normals learned by MSUNE/MSUNE-Net. This is consistent with our experiments in Tab. IV. For MSUNE, when the number of candidate normals is set to 4000, the time consumption is too high. Therefore, we just show its results on a noise-free point cloud and a point cloud with high noise. Since MSUNE predicts the normal for each point independently, it just learns from its own candidates. Therefore, MSUNE needs more candidate normals per point, even 4000 may not be enough. While MSUNE-Net can "see" other candidate normals of similar neighborhoods during training. Thus its performance is better than that of MSUNE, even if there are far fewer candidates at each point. When the number of candidate normals increases to 100, the performance achieves its bound. Compared with MSUNE, MSUNE-Net has also another advantage in that it is ultra-fast since no candidate normals or any optimization are necessary for network inference. Fig. 11: Visual comparison of estimated normals on four zoomed scenes of the KITTI dataset. MSUNE-Net identifies more geometry structures with higher quality. From top to bottom are PCA, DeepFit, and MSUNE-Net, respectively. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline MCS & ANS & CR & \(L_{ccn}\) & \(L_{w}\) & RMS \\ \hline ✓ & & & & & 17.00 \\ \hline ✓ & & & & & 16.46 \\ \hline ✓ & ✓ & & & & 14.79 \\ \hline ✓ & ✓ & ✓ & & & 14.28 \\ \hline ✓ & ✓ & & ✓ & & 14.17 \\ \hline ✓ & ✓ & & & ✓ & 14.65 \\ \hline ✓ & ✓ & & ✓ & ✓ & 14.03 \\ \hline ✓ & ✓ & ✓ & ✓ & & 13.97 \\ \hline ✓ & ✓ & ✓ & & ✓ & 13.73 \\ \hline ✓ & ✓ & ✓ & ✓ & ✓ & 13.57 \\ \hline \end{tabular} \end{table} TABLE III: Ablation study of MSUNE-Net on the dataset of PCPNet. ## 7 An implementation of the paradigm to unsupervised point cloud denoising The proposed multi-sample consensus paradigm is also applicable to other unsupervised tasks. This is shown by improving the cutting-edge unsupervised point cloud denoising method: DMR [20]. DMR follows the unsupervised denoising loss defined in [19]: \[L_{U}=\frac{1}{N}\sum_{t=1}^{N}\sum_{q}\|f(p_{t})-q\|, \tag{8}\] where \(f(\cdot)\) represents the denoiser that maps a noisy point to a denoised point. Points \(\{q\}\) are sampled from the noisy input point cloud \(P\) according to a prior \(P(q|p_{t})\), which is empirically defined as: \[P(q|p_{t})\propto exp(-\frac{\|q-p_{t}\|}{2\omega^{2}}). \tag{9}\] Then the sampling point \(q\) is closer to the underlying clean surface with high probability. However, it is not a suitable assumption for points around the sharp features. Although Hermosilla _et al._[19] employ RGB color annotation to overcome this problem, it is just alleviated but not eliminated. Moreover, RGB color annotation is not always available. The idea of candidate rejection and mode determination can be employed to overcome this problem. Besides we design a novel sampling strategy to generate multiple more accurate \(\{q\}\) to guide the network training. **Multi-candidate sampling (MCS).** Instead of sampling \(q\) from the input noisy point cloud \(P\) directly, we first randomly select \(k_{s}=4\) points in the \(\hat{k}\) nearest neighbors of \(p_{t}\) and take their center as a candidate \(q\). This progress is repeated multiple times to obtain a set of candidates \(\{q\}\). \(\hat{k}=64\) is the same as DMR. **Candidate rejection (CR).** A candidate is infeasible if there are fewer neighbors of \(p_{t}\) near it. Therefore, we compute a score for each candidate: \[s_{q}^{t}=\sum_{i=1}^{\hat{k}}e^{-\frac{\left(4\langle p_{t}^{i},q\rangle \right)^{2}}{\sigma^{2}}}, \tag{10}\] where \(\mathbf{p}_{t}^{i}\in\mathcal{N}_{\hat{k}}(p_{t})\) is a neighbor of \(\mathbf{p}_{t}\), \(d(\mathbf{p}_{t}^{i},q)\) represents the Euclidean distance from \(\mathbf{p}_{t}^{i}\) to \(q\), and \(\sigma\) is the bandwidth of the Gaussian kernel function. In the experiments, we set \(\sigma\) equal to the average distance of the 12 nearest neighbor points of \(\mathbf{p}_{t}\). Then the 10% of initialized candidates with lower scores are deleted. **Mode determination.** A candidate consensus loss for position, similar with \(L_{ccn}\), is defined as follows: \[L_{ccp}=\sum_{q}-e^{-\frac{\|f(p_{t})-q\|_{2}}{\sigma^{2}}}, \tag{11}\] where \(\tau\)= \(\sigma\). This results in convergence to the main mode supported by more inlier candidates. **Dataset.** For training the denoising network, we have collected 13 different classes with 7 different meshes, each from ModelNet-40 [46], and generate point clouds by randomly sampling 10K-50K points. The point clouds are then perturbed by Gaussian noise with standard deviations from 1% to 3% of the bounding box diagonal. Finally, a training dataset containing a total of 1365 models was generated. To be identical to the DMR training mechanism, we split the point cloud into patches consisting of 1024 points. For testing, we have collected 20 classes with 3 meshes each. Similarly, we generate point clouds using randomly sampled 20K and 50K points, then perturb them by Gaussian noise with standard deviations of 1%, 2%, 2.5%, and 3% of the bounding box diagonal. Finally, a test dataset containing 480 models was generated. **Results.** We evaluate the effectiveness of our strategies quantitatively and qualitatively on the dataset. For quantitative comparison, the Chamfer distance (CD) and point-to-surface distance (P2S) are used as evaluation metrics. As shown in Tab. V, the CD and P2S are gradually improved by applying the three strategies. A qualitative comparison is also shown in Fig. 12, where the colors encode the P2S errors. Our results are much cleaner and exhibit more visually pleasing surfaces than those of DMR. ## 8 Conclusion In this paper, we propose a multi-sample consensus paradigm for unsupervised normal estimation and two implementations of it: a novel optimization method (MSUNE) and the first unsupervised deep normal estimation method (MSUNE-Net). The paradigm consists of three stages: multi-candidate sampling, candidate rejection, and mode determination. It is completely generalizable for other unsupervised low-level image and point cloud processing. A preliminary implementation of the paradigm to unsupervised point cloud denoising is also introduced. We prove that the normal of each query point of a noisy point cloud is the expectation of randomly sampled candidate normals ideally. But they are actually from a multimodal distribution. Hence, MSUNE presents practicable strategies on how to sample multiple candidates, reject some infeasible ones, and determine the main mode of the candidates. The performance of MSUNE increases with the \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline & \multicolumn{3}{c|}{MSUNE} & \multicolumn{3}{c}{MSUNE-Net} \\ \hline Number of candidate normals & 20 & 100 & 400 & 4000 & 20 & 50 & 100 & 200 \\ \hline Average RMS & 17.23 & 14.86 & 14.21 & \(\backslash\) & 13.94 & 13.79 & 13.57 & 13.62 \\ \hline RMS on a noise-free point cloud & 6.07 & 5.56 & 5.36 & 5.33 & 4.65 & 4.45 & 4.39 & 4.51 \\ \hline RMS on a point cloud with high noise & 35.17 & 31.40 & 30.68 & 30.36 & 30.07 & 30.04 & 30.03 & 29.97 \\ \hline Inference time (ms) per point & 85.45 & 103.26 & 167.44 & 936.79 & 0.36 & 0.36 & 0.36 & 0.36 \\ \hline \end{tabular} \end{table} TABLE IV: Comparisons of the RMS angle error and inference time of MSUNE and MSUNE-Net with varying number of candidate normals on PCPNet’s synthetic dataset. \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c} \hline Noise scale & \(1\%\) & \(2\%\) & \(2\%\) & \(2\%\) & \(3\%\) & \(3\%\) \\ \hline Error metric & CD & P2S & CD & P2S & CD & P2S & CD & P2S \\ \hline DMR & 2.47 & 2.97 & 401 & 5.58 & 4.92 & 7.20 & 5.77 & 8.73 \\ \hline DMR-MCS\({}_{L_{ccp}}\) & 2.88 & 2.84 & 3.74 & 5.15 & 4.61 & 6.66 & 5.49 & 8.24 \\ \hline DMR-MCS\({}_{L_{ccp}}\) & 2.13 & 2.43 & 3.54 & 4.75 & 4.12 & 6.25 & 5.72 & 7.75 \\ \hline DMR-MCS\({}_{L_{ccp}}\)+CR & 2.13 & 2.41 & 3.47 & 4.60 & 4.28 & 6.00 & 5.38 & 7.42 \\ \hline \end{tabular} \end{table} TABLE V: The denoising performance of DMR is gradually improved by applying our strategies. number of candidate normals. It is superior to the SOTA optimization methods and supervised deep normal estimators on real data. But the per-point optimization is very time-consuming. MSUNE-Net further introduces a neural network to boost the multi-sample consensus paradigm. With the assistance of training on massive data, the patch self-similarity across point clouds is adequately utilized, which provides far more candidate normals for each query point with similar patches implicitly. Higher performance is accomplished by MSUNE-Net even with smaller candidates set per point, and it is also 100 times faster than MSUNE. MSUNE-Net is even superior to some supervised deep normal estimators on the most common synthetic dataset. Most importantly, it shows better generalization ability and significantly outperforms previous optimization methods and SOTA supervised deep normal estimators on three real datasets. In the future, we intend to conduct in-depth research in low-level image and point cloud analysis via unsupervised deep learning driven by multi-sample consensus. It would also be interesting to explore how to generate and use multiple pseudo-labels for semantic-level unsupervised learning tasks. \begin{tabular}{c c} & Jie Zhang received the PhD degree in 2015 from the Dalian University of Technology, China. She is currently an associate professor with the School of Mathematics, Liaoning Normal University, China. Her current research Interests include geometric processing and machine learning. \\ \end{tabular} \begin{tabular}{c c} & Minghui Nie is a master student in computational mathematics at Liaoning Normal University, China. He received the bachelor's degree in mathematics and applied mathematics at Dezhou University, China, in 2021. His research interests include geometric processing and machine learning. \\ \end{tabular} \begin{tabular}{c c} & Junjie Cao received the BSc degree in 2003 and the PhD degree in 2010 from Dalian University of Technology, china. He is an associate professor with the School of Mathematical Sciences, Dalian University of Technology, China. Between 2014 and 2015, he paid an academic visit to Simon Fraser University, Canada. His research interests include geometric processing and image processing. \\ \end{tabular} \begin{tabular}{c c} & Jian Liu received the PhD degree in 2022 from the Shandong University, China. He is currently a postdoctor with the School of Software, Tsinghua University, China. His current research interests include geometric processing and machine learning. \\ \end{tabular} \begin{tabular}{c c} & Ligang Liu received the BSc degree in 1996 and the PhD degree in 2001 from Zhejiang University, China. He is a professor at the University of Science and Technology, China. Between 2001 and 2004, he was at Microsoft Research Asia. Then he was at Zhejiang University during 2004 and 2012. He paid an academic visit to Harvard University during 2009 and 2011. His research interests include geometric processing and image processing. His research works could be found at his research website: [http://staff.ustc.edu.cn/](http://staff.ustc.edu.cn/) lgiu Fig. 12: More visually pleasing surfaces are generated by the proposed unsupervised denoising method (DMR+MCS+\(L_{cep}\)+ CR) than the baseline method: DMR. The colors, from blue to red, encode the P2S error.
2308.04589
Temporal DINO: A Self-supervised Video Strategy to Enhance Action Prediction
The emerging field of action prediction plays a vital role in various computer vision applications such as autonomous driving, activity analysis and human-computer interaction. Despite significant advancements, accurately predicting future actions remains a challenging problem due to high dimensionality, complex dynamics and uncertainties inherent in video data. Traditional supervised approaches require large amounts of labelled data, which is expensive and time-consuming to obtain. This paper introduces a novel self-supervised video strategy for enhancing action prediction inspired by DINO (self-distillation with no labels). The Temporal-DINO approach employs two models; a 'student' processing past frames; and a 'teacher' processing both past and future frames, enabling a broader temporal context. During training, the teacher guides the student to learn future context by only observing past frames. The strategy is evaluated on ROAD dataset for the action prediction downstream task using 3D-ResNet, Transformer, and LSTM architectures. The experimental results showcase significant improvements in prediction performance across these architectures, with our method achieving an average enhancement of 9.9% Precision Points (PP), highlighting its effectiveness in enhancing the backbones' capabilities of capturing long-term dependencies. Furthermore, our approach demonstrates efficiency regarding the pretraining dataset size and the number of epochs required. This method overcomes limitations present in other approaches, including considering various backbone architectures, addressing multiple prediction horizons, reducing reliance on hand-crafted augmentations, and streamlining the pretraining process into a single stage. These findings highlight the potential of our approach in diverse video-based tasks such as activity recognition, motion planning, and scene understanding.
Izzeddin Teeti, Rongali Sai Bhargav, Vivek Singh, Andrew Bradley, Biplab Banerjee, Fabio Cuzzolin
2023-08-08T21:18:23Z
http://arxiv.org/abs/2308.04589v2
# Temporal DINO: A Self-supervised Video Strategy to Enhance Action Prediction ###### Abstract The emerging field of action prediction - the task of forecasting action in a video sequence - plays a vital role in various computer vision applications such as autonomous driving, activity analysis and human-computer interaction. Despite significant advancements, accurately predicting future actions remains a challenging problem due to high dimensionality, complex dynamics and uncertainties inherent in video data. Traditional supervised approaches require large amounts of labelled data, which is expensive and time-consuming to obtain. This paper introduces a novel self-supervised video strategy for enhancing action prediction inspired by DINO (self-distillation with **no** labels). The approach, named Temporal-DINO, employs two models; a'student' processing past frames; and a 'teacher' processing both past and future frames, enabling a broader temporal context. During training, the teacher guides the student to learn future context by only observing past frames. The strategy is evaluated on ROAD dataset for the action prediction downstream task using 3D-ResNet, Transformer, and LSTM architectures. The experimental results showcase significant improvements in prediction performance across these architectures, with our method achieving an average enhancement of 9.9% Precision Points (PP), which highlights its effectiveness in enhancing the backbones' capabilities of capturing long-term dependencies. Furthermore, our approach demonstrates efficiency in terms of the pretraining dataset size and the number of epochs required. This method overcomes limitations present in other approaches, including the consideration of various backbone architectures, addressing multiple prediction horizons, reducing reliance on hand-crafted augmentations, and streamlining the pretraining process into a single stage. These findings highlight the potential of our approach in diverse video-based tasks such as activity recognition, motion planning, and scene understanding. Code can be found at [https://github.com/lzzeddinTeeti/ssl_pred](https://github.com/lzzeddinTeeti/ssl_pred). ## 1 Introduction Computer vision techniques have advanced to the point at which they are able to outperform humans at certain object recognition tasks [55]. However, for many computer vision applications, a higher-level understanding of the scene is required. For example, achieving human-level performance in autonomous vehicles remains a formidable challenge [47]. One of the key reasons for this gap is the inherent difficulty in understanding what may happen next. Thus there is a growing recognition of the importance of _prediction_. Prediction plays a crucial role in enhancing the decision-making process of autonomous systems by anticipating the future behaviour of dynamic elements in the environment, _e.g_. other vehicles, pedestrians, and cyclists - thus ensuring safer operation. Moreover, prediction also facilitates the development of high-level understanding within autonomous systems, enabling more nuanced and contextually appropriate planning, leading to smoother interactions with other agents on the road [30]. However, prediction poses its own set of challenges, encompassing spatial, temporal, social, and stochastic dimensions [50]. Modelling these dimensions requires complex models, such as [43, 45, 61, 37], which require significant amounts of data - often scarce and costly to gather and annotate. To address this, leveraging the abundance of unlabelled data through self-supervised methods offers an enticing opportunity to enhance performance with minimal impact upon resources. While existing self-supervised prediction methods, including [54, 63, 27], have shown promise, they have limitations that hinder their effectiveness. Firstly, their predictive capability is limited to a very short-term horizon (typically one frame ahead) which is impractical for autonomous driving scenarios requiring longer-term predictions. Secondly, these methods often involve a two-stage process [54], which is computationally expensive and time-consuming. Finally, they are typically designed for a specific architecture, lacking the ability to generalize across different architectures. In this paper, we present a novel one-stage self-supervised representation learning strategy specifically de signed for videos. Our proposed approach draws inspiration from the image-based self-supervised DINO [7] model and extends its application to the temporal dimension. Leveraging a student-teacher framework, our method guides the _student_ model to focus on the most informative (temporal) features, enabling accurate predictions of future events. The _student_ model learns to attend to the relevant cues in the past and current moments, extracting valuable information that aids in forecasting forthcoming actions. Our approach addresses the aforementioned limitations by significantly extending the prediction range beyond one frame, enabling more practical and effective autonomous driving applications. Moreover, our proposed method eliminates the need for a two-stage process, reducing computational complexity and saving valuable time. Finally, our approach is not limited to a single architecture but is a wrapper strategy that can be used to improve prediction performance across various architectures, enhancing the generalisability and applicability of the method. The contributions of this work include: * A novel one-stage self-supervised representation learning strategy for videos, addressing limitations in prediction range, computational complexity, and architectural generalisability. * with various deep-learning architectures (including 3D-CNN, Transformer, and LSTM), showcasing its effectiveness on real-world challenging data, and versatility to use on other models and domains. * Identification of the optimal model architecture and loss function for capturing long-range dependencies in video data, shedding light on the most effective design choices for robust action prediction in autonomous driving scenarios. In the following sections, we provide a comprehensive overview of related work in self-supervised learning for images and videos, and action prediction (Section 2), followed by a detailed description of our proposed method (Section 3). We then present the experimental setup, including datasets, evaluation metrics, and training details (Section 4), and discuss the results and analysis of our experiments (Section 5). Finally, we provide a thorough discussion of our findings, and highlight potential applications and future research directions (Section 6). ## 2 Related Work ### Image-based Self-supervised Learning Two types of self-supervised strategies are commonly employed in computer vision: pretext methods and contrastive learning. The former involves utilizing specific internal properties or tasks of the input data to learn useful representations. For instance, some models learn the contextual information of the entire scene by analyzing small image patches [10]. Other approaches focus on tasks like re-colourization of grayscale images [64], predicting transformations between different views of the same image [14], or determining the order of patches extracted from an image [38, 35]. With the introduction of Vision Transformer (ViT) [11], patch-based self-supervised methods have gained prominence in the literature [21, 3, 2]. Particularly, masked auto-encoders (MAE) [21] have emerged as a preferred mechanism for model pretraining due to their superior performance on downstream tasks. Contrastive learning methods, on the other hand, focus on maximizing the dissimilarity between features extracted Figure 1: Overview of the proposed Temporal DINO. The _student_ model processes the past frames (\(x_{1:t}\)), while the _teacher_ processes both the past and future frames (\(x_{1:t+t_{Pred}}\)). A Future-past Distillation loss is applied to their representations (\(S_{\theta}\) and \(T_{\phi}\)) to guide the _student_ to capture the future temporal context from the _teacher_. from different samples to encourage discriminative representations [12, 5, 58]. Some discriminative self-supervised algorithms leverage instance-level discrimination to create distinct feature representations for different examples, leading to robust feature learning [22, 9, 17, 7]. Other approaches draw inspiration from clustering mechanisms [6, 1]. The DINO method [7], for example, employs self-distillation that trains a _student_ model on local crops and a _teacher_ model on the entire image, then find the loss between both representations. The _teacher_ will push the _student_ to learn global representations by seeing only the local ones. Whilst these techniques offer significant potential on images, their performance is limited on video-related tasks, such as action prediction. Thus, there is a need for advancements in video self-supervised learning methods. ### Video-based Self-supervised Learning Video-based self-supervised learning strategies, akin to their image counterparts, encompass both pretext and contrastive approaches [46]. However, videos introduce an additional dimension, namely the temporal dimension. The inherent order of frames within a video sequence offers intrinsic properties that can be leveraged to develop effective self-supervised learning mechanisms. Despite this potential, the research in this specific domain remains relatively limited [51, 4, 36, 57]. To address this gap, recent works have explored different strategies in video-based self-supervised learning. For instance, [20] employed complementary information from RGB and optical flow streams to co-train their model using contrastive loss. [18] utilized a 3D-CNN architecture with a video transformer and trained the model using contrastive loss on positive pairs of video sequences. CVRL [42] employed a discriminative learning approach, utilizing two augmented views of the same video as positive examples and views of other videos as negatives. Another method, VideoMoCo [39], extended the MoCo [22] framework to videos by randomly dropping frames from the video and learning the same representation for each random input. In line with these advancements, our proposed model introduces a novel approach where the _teacher_ model incorporates a longer temporal sequence than the _student_ model. This design choice aims to provide the _student_ with a wider temporal context, encompassing future frames that the student has not yet observed. By leveraging this extended temporal context within the student-teacher framework, our model aims to enhance the _student_'s ability to learn representations that capture long-term dependencies and improve its performance in video-based self-supervised learning. ### Supervised Action Prediction Solving the action prediction task requires modelling the different dimensions of the problem, including the spatial, temporal, stochastic, and social dimensions, since the driving environment is dynamic, uncertain, and multi-agent. To model the temporal dimension, [13, 29] used 3D-CNN, [43, 16] Recurrent Neural Networks, while [61] used Transformers. Regarding the stochastic dimension, GANs [28], and CVAE models [16, 61] were utilised. To model the multi-agent aspect of the problem, different types of Graph Neural Networks of different connectivity, sparsity and homogeneity were used [15, 45, 16], semantic segmentation cues [44, 34] and social pooling [40]. However, all of those methods are supervised; they require a huge amount of labelled data which is time-consuming and expensive. ### Self-supervised Action Prediction In recent studies, limited approaches have been explored in the field of self-supervised action prediction. Zatsarynna et al. [63] employed the contrastive loss of InfoNCE [19] to encourage proximity between temporally adjacent video clips in the embedding space. To preserve the order of the clips, they complemented the InfoNCE loss with an order loss in the form of cross-entropy. Their proposed model focused on using 3D-CNN as the backbone architecture, and its evaluation was limited to this specific backbone without examining the performance on other architectures. Another approach by Kochakarn et al. [27] utilized graph contrastive learning with the SimCLR loss function [9] to learn more informative embeddings for the prediction task. They incorporated an attention mechanism to achieve explainable action prediction, albeit limited to predicting only the next one frame. In contrast, our proposed method extends the prediction horizon to include the next 3, 6, and 12 frames, offering a more comprehensive temporal context. Furthermore, their approach specifically focused on graphs, while our method is designed to work with 3D-CNN, Transformers, and LSTMs, providing broader applicability. It is worth noting that both of these aforementioned approaches rely heavily on the use of contrastive loss, which requires careful crafting of augmentations. This dependency on specific augmentations can limit the generalisability and robustness of the learned representations. In contrast, Tran et al. [54] adopted a knowledge distillation approach, where a _teacher_ model trained on recognition tasks transfers its knowledge to a prediction model. However, this method follows a two-stage process involving training a _teacher_ model for recognition and then distilling the knowledge into a _student_ model for prediction. This two-stage approach introduces additional time and computational costs, which may impact the scalability and practicality of the method, particularly in real-time or resource-constrained scenarios. Furthermore, their evaluation was focused on the I3D architecture [8] without exploring the performance on other architectures. These related works provide valuable insights into self-supervised action prediction approaches. However, they have certain limitations in terms of the backbone architectures considered, the prediction horizons addressed, the reliance on specific augmentations, and the two-stage training process. In contrast, our one-stage proposed method aims to overcome these limitations by leveraging a different loss formulation and considering multiple backbone architectures while extending the prediction horizon, thereby contributing to the advancement of self-supervised action prediction techniques. ## 3 Methodology ### Representation Learning for Action Prediction In this study, our objective is to learn a mapping function \(f\) in an unsupervised manner, which takes an unlabelled and untrimmed video clip consisting of \(t\in\mathbb{R}^{+}\) frames as input. The goal is to map the 4D input clip \(x_{1:t}\in\mathbb{R}^{T\times C\times H\times W}\), to a feature vector \(g(x_{1:t})\in\mathbb{R}^{d}\), such that the learned features effectively transfer to the downstream task of action prediction. To achieve this, we draw inspiration from the DINO approach and adopt a student-teacher setup, as depicted in Figure 1. The _student_ network \(S_{\theta}\) processes only the past frames \(x_{1:t}\) during both training and inference, without access to future frames. On the other hand, the _teacher_ network \(T_{\phi}\) processes both the past and future frames \(x_{1:t+t_{Pred}}\), where \(t_{Pred}\) denotes the length of the future sequence. To ensure consistency in architecture, we downsample the sequence processed by the _teacher_ network to match the number of frames processed by the _student_. The downsampling is performed by determining a sampling frequency, which is calculated as \({}^{(t+t_{Pred})}/{t}\). For instance, if \(t=12\) frames and \(t_{Pred}=12\) frames, the _student_ processes the past (12) frames, while the _teacher_ processes (24 frames) with a step of 2, resulting in 12 frames. Despite processing sequences of the same sequence length, the _teacher_ network has access to a wider temporal context. ### Future-past Distillation Loss During training, we introduce a knowledge distillation loss between the final embeddings of the _student_ and _teacher_ networks. This loss guides the _student_ to distil knowledge about the future from the _teacher_, despite not having direct access to future frames. The aim is to teach the _student_ to focus on the most relevant features from the past frames that contribute to predicting the future. In contrast to DINO, our Future-Past Distillation (FPD) loss is defined in the Cosine Similarity form, instead of using cross-entropy. This formulation is motivated by the findings of our ablation analysis, which indicate that the Cosine-based FPD loss yields improved performance on downstream tasks. The learning objective for the pretraining stage of future-past distillation is expressed in Equation 1. The _student_ network parameters \(\theta\) are updated using backpropagation optimized by stochastic gradient descent (SGD), while the _teacher_ network parameters \(\phi\) are updated using an Exponential Moving Average (EMA) based on the _student_ network, with a scheduled momentum variable (\(m\)) as shown in Equation 2. \[\theta^{*},\phi^{*}=\operatorname*{arg\,min}_{\theta,\phi}\mathcal{L}_{FPD} \Big{(}S_{\theta}(x_{1:t}),T_{\phi}(x_{1:t+t_{Pred}})\Big{)} \tag{1}\] \[\phi_{i+1}=m_{i}\times\phi_{i}+(1-m_{i})\times\theta_{i} \tag{2}\] ### Downstream Task Definition The objective of pretraining is to enhance the performance of the model in the downstream task of predicting driver's actions. Given a past (observed) clip of length \(t\in\mathbb{R}^{+}\), the task is to predict the Ego-vehicle (driver) action in each frame in the next \(t_{Pred}\in\mathbb{R}^{+}\) frames. Building upon the optimal pretrained _student_ model \(S_{\theta^{*}}\), the prediction model \(f\) will use the _student_ model as a backbone and add a classification head on top of it. Subsequently, it performs further optimization (fine-tuning) on either both the backbone and the head parameters or solely the latter (we conducted experiments on both scenarios) for the prediction task. The objective is to map the learned features \(S_{\theta^{*}}(X_{1:t})\) to the future action labels \(y_{t+1:t+t_{Pred}}\), formally, \(f_{\psi}:S_{\theta^{*}}(X_{1:t})\to y_{t+1:t+t_{Pred}}\). Given the nature of a classification task, cross-entropy (CE) loss is utilized to guide the optimization process and refine the learning objective for the downstream task, as illustrated in Equation 3. \[\theta^{**},\psi^{*}=\operatorname*{arg\,min}_{\theta^{*},\psi} \mathcal{L}_{CE}\bigg{(}f_{\psi}\Big{(}S_{\theta^{*}}(x_{1:t})\Big{)},y_{t+1: t+t_{Pred}}\bigg{)} \tag{3}\] ## 4 Experiments ### Datasets We adhered to the convention in self-supervised learning, utilizing two distinct datasets for distinct purposes: a larger dataset for pretext task pretraining and a smaller dataset for downstream task fine-tuning. Specifically, we employed the Kinetics-400 dataset [25] and the ROad event Awareness Dataset (ROAD) [48]. Kinetics-400It is designed for action recognition and comprises over 240,000 videos. On average, each video spans 10 seconds and is assigned a single label from a pool of 400 possible action classes. The dataset's substantial video collection has facilitated its adoption in numerous video self-supervised methods [51, 39]. For our purposes, we disregard the label information as it will not be used during pretraining. RoadThe ROad event Awareness Dataset (ROAD) [48] is built on a fraction of Oxford RobotCar Dataset [33], and it is extended with multi-label annotations for action recognition, localisation, and prediction tasks within the context of autonomous driving. It comprises 22 videos from an egocentric view as shown in Figure 2, each with an 8-minute duration and a frame rate of 12 frames per second (fps). Importantly, it contains labels indicating the actions performed by the ego vehicle (driver). Notably, the dataset encompasses seven distinct ego-vehicle actions: _Move, Stop, Turn Left, Turn Right, Overtake, Move Left, and Move Right_. These labels serve as training data for fine-tuning our models to predict the driver's actions in future frames during the prediction task. ### Models We conducted a series of experiments utilizing four diverse deep-learning architectures. R3D [52]This architecture employs a 3D convolutional neural network (3D-CNN) backbone for processing video data. It is based on the ResNet-18 [23] architecture, which has proven to be successful in image recognition tasks. Swin [31]This architecture utilizes a Transformer-based 3D backbone for video processing. It is based on the Transformer architecture, originally introduced in the context of image recognition as Vision Transformer (ViT) [11]. ResNet-LSTMThis architecture combines a convolutional neural network (CNN)-based 2D backbone using the ResNet-50 architecture for image processing, along with a Long Short-Term Memory (LSTM) layer for capturing temporal dependencies. ViT-Lstm Same as the previous but replacing the ResNet with a Transfomer-based 2D backbone. The first two models use spatio-temporal backbones, which extract spatial and temporal features simultaneously, while the last two use a spatial backbone and connect it using a temporal one. These models exhibit differences in their depth and working mechanisms, offering a diverse range of approaches to our experiments. By leveraging these varied architectures, we can comprehensively investigate our proposed representation learning strategy, examine their performance, and compare their effectiveness according to the experiments outlined in the subsequent section. ### Experimental Protocol In accordance with the conventions of self-supervised learning experiments, we evaluated the performance of our models on the downstream task of action prediction under three distinct protocols: 1) _Full-Supervised:_ This protocol represents the results obtained from the model without employing the pretraining strategy. The model was trained solely on the labelled data of the prediction task. 2) _Linear Probing:_ In this protocol, the proposed strategy was applied, and the model was fine-tuned by solely updating the parameters of the prediction head. The backbone of the model was kept frozen during this fine-tuning process. 3) _Fine-tuning:_ This protocol involved applying the proposed strategy and performing full fine-tuning of the entire network, including both the backbone and the prediction head. All parameters of the model were updated during the fine-tuning process. By examining the model's performance across these three protocols, we can gain insights into the effectiveness and impact of the pretraining strategy on action prediction. ### Implementation Details We used Pytorch framework [41] to implement the models, and the Precision (P) metric was used to evaluate their performance on the action prediction task. All experiments were conducted on NVIDIA A30 graphics cards. Below are the details of the pretraining and fine-tuning. PretrainingFor pretraining, we utilized either the full dataset of Kinetics-400 or the training split of ROAD (depending on the experiment). The pretraining was optimised for 1000 (50) epochs using SGD optimiser with a learning rate of 0.005 (0.001) and a batch size of 64 (32) for Kinetics-400 and ROAD, respectively. Additionally, a cosine scheduler was employed for updating the momentum variable of the exponential moving average (EMA). Concerning the models, the LSTM architecture had a hidden dimension of 512, and the small structure of ViT (ViT-s) was used. Fine-tuningFine-tuning was performed on the ROAD dataset, with a data split of 60% for training, 20% for validation, and 20% for testing. Cross-entropy loss was employed as the objective function for fine-tuning, and it was optimised for 10 epochs using the SGD optimizer with a learning rate of 0.001, and a batch size of 32. ### Ablations In this section, we present different ablations to study the effects of different architectural components on the model performance. Specifically, we investigate the effects of variations in input sequence temporal length, self-supervised objective, backbone selection, and the strategy's performance on another downstream task (action recognition). Detailed explanations of each ablation study are provided below. Backbone and Temporal LengthThe choice of model backbone plays a critical role in the model's ability to learn effective features. Similarly, the length of the input sequence contributes to the model's capability to capture temporal dynamics and visual changes within the scene. Longer sequence lengths generally provide more visual data, but they may also introduce challenges in modelling long-term dependencies and potentially lead to performance degradation, as shown in Table 1. In this specific ablation study, We conducted experiments with the four backbones mentioned in Section 4.2, and we varied the temporal depth by using sequence lengths of 3, 6, and 12 frames for each backbone. We trained these models using the three experimental protocols outlined in Section 4.3. The prediction performance of the four models with different input lengths under the three protocols is summarized in Table 1. Loss FunctionExisting literature demonstrates that the choice of learning objective in self-supervised algorithms significantly influences the performance of models on downstream tasks [46]. In our experiments, we employed three widely used loss functions: Cross-entropy, Cosine Similarity, and Mean Squared Error (MSE) loss. The evaluation of these loss functions was performed on R3D and Swin backbones, and the corresponding results are summarized in Table 2. Action RecognitionIn addition, we performed supplementary experiments to assess the impact of our proposed strategy on a different video-based downstream task, namely Action Recognition. For this purpose, the models underwent pretraining on the Kinetics-400 dataset and subsequent fine-tuning on the UCF101 dataset [49], shown in Figure 2. Table 4 presents the state-of-the-art (SOTA) performance for action recognition. ## 5 Results Observing Table 1, it is evident that our proposed strategy yielded notable enhancements in the prediction performance of all backbone architectures, albeit to varying extents. Notably, the impact of the strategy is particularly pronounced on the 2D-based backbones, which initially exhibited comparatively lower results in the fully-supervised setup. This observation aligns with expectations since these backbones lack inherent spatio-temporal modelling capabilities. However, when trained with T-DINO, the ResNet-LSTM and ViT-LSTM backbones witnessed substantial improvements, with average increases of 22.5 and 10.1 in terms of precision points (PP), respectively. Additionally, the video backbones, R3D and Swin, experienced PP gains of 5.2 and 1.8, respectively. Table 3 shows the comparison of our strategy to other SOTA methods, showing that it surpasses the performance of both supervised and self-supervised approaches on action prediction on ROAD. Within the same table, the column 'Supervised' indicates a fully supervised method, the opposite denotes a self-supervised approach, and 'Frozen' means the fine-tuning process exclusively updates the classification head while leaving the pre-trained backbone untouched, the opposite indicates that the entire model was updated during the fine-tunning. Figure 2: Sample images from ROAD and UCF101 datasets. To have a better understanding of the generalisation capability of the proposed model, we compare the performance of T-DINO with SOTA methods on another task, human action recognition (on UCF101), summarised in Table 4. The results highlight the effectiveness of the enhanced temporal modelling offered by T-DINO when applied to the R3D backbone, Furthermore, examining the results for varying input sequence intervals across each backbone, it becomes apparent that greater improvements are observed at longer input sequences. This suggests that pretraining with T-DINO equips the backbones with enhanced abilities to capture and model long-term dependencies in the data. Notably, the Swin-transformer-based models demonstrate higher accuracy, attributed to the superior representation capabilities offered by Transformers. We observed that transformer-based spatial feature extractor (ViT) combined with LSTM-based temporal sequence modelling results in a huge improvement in the downstream task. Analyzing the results obtained from the longest input configuration consisting of 12 frames, it is evident that models pretrained on the larger dataset, Kinetics-400, exhibit superior performance compared to those pretrained on the ROAD dataset. Moreover, the models that incorporate separate modelling of spatial and temporal relationships, such as ViT+LSTM and ResNet+LSTM, outperform the models that jointly model these relationships, namely R3D and Swin, by a margin of 16.6 and 5.7 percentage points, respectively. In regard to the selection of the most optimal loss function, the findings presented in Table 2 indicate that T-DINO pretrained using the Cosine Similarity loss surpasses the performance of models trained with MSE or Cross-entropy loss. Of significant importance, self-supervised models typically require extensive datasets and a high number of pre-training epochs to achieve satisfactory generalization on downstream tasks. However, our proposed strategy, pre-trained on the ROAD dataset, exhibits a relatively comparable level of performance to models pretrained on the larger Kinetics-400 dataset. Notably, the ROAD dataset possessed a substantially smaller size and was pretrained with a lower number of epochs. These findings demonstrate T-DINO's resource and time efficiency in achieving desirable results. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline Backbone & \begin{tabular}{c} Pretrained \\ on \\ \end{tabular} & \begin{tabular}{c} Interval \\ (frames) \\ \end{tabular} & \begin{tabular}{c} Linear \\ Probe \\ \end{tabular} & Fine-tunning & Supervised & Improvement \\ \hline \multirow{3}{*}{R3D} & \multirow{3}{*}{ROAD} & 3 & 36.6 & 77.7 & 74.1 & 3.6 \\ & & 6 & 29.7 & 69.2 & 64.9 & 4.3 \\ & & 12 & 36.2 & 53.3 & 45.7 & 7.6 \\ \hline \multirow{3}{*}{Swin} & \multirow{3}{*}{ROAD} & 3 & 84.7 & 87.2 & 86.4 & 0.8 \\ & & 6 & 75.8 & 82.6 & 81.7 & 0.9 \\ & & 12 & 60.1 & 60.9 & 57.1 & 3.8 \\ \hline \multirow{3}{*}{ResNet+LSTM} & \multirow{3}{*}{Kinetics-400} & 3 & 77.6 & 84.6 & 62.9 & 21.7 \\ & & 6 & 70.3 & 81.8 & 58.3 & 23.5 \\ & & 12 & 58.3 & 76.7 & 54.3 & 22.4 \\ \hline \multirow{3}{*}{ViT+LSTM} & \multirow{3}{*}{Kinetics-400} & 3 & 73.5 & 77.7 & 69.3 & 8.4 \\ & & 6 & 70.2 & 76.7 & 65.5 & 11.2 \\ \cline{1-1} & & 12 & 64.21 & 66.2 & 55.5 & 10.7 \\ \hline \hline \end{tabular} \end{table} Table 1: The precision of different backbones with varying input lengths under the three protocols mentioned in Section 4.3. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Backbone & Loss & Linear Probe & Fine-tunning & Supervised & Improvement \\ \hline \multirow{3}{*}{R3D} & MSE & 32.0 & 37.3 & 45.7 & -8.4 \\ & Cosine & 36.2 & **53.3** & 45.7 & 7.6 \\ & Cross-entropy & 30.9 & 50.4 & 45.7 & 4.7 \\ \hline \multirow{3}{*}{Swin} & MSE & 57.2 & 57.4 & 57.1 & 0.3 \\ & Cosine & 60.1 & **60.9** & 57.1 & 3.8 \\ \cline{1-1} & Cross-entropy & 49.6 & 58.4 & 57.1 & 1.3 \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation of three common loss functions on R3D and Swin backbones. ## 6 Conclusion This study represents the first attempt to leverage future information in a 'past training' model, and the promising results indicate that this teacher-student approach could provide a significant performance improvement in various prediction tasks across a number of ubiquitous model architectures in a variety of different domains - without the requirement for any additional training data. Our proposed strategy, called Temporal-DINO, leverages a teacher-student self-distillation architecture to guide the student model to learn future temporal context by observing the past only. Unlike other approaches that involve a two-stage process or rely on hand-crafted augmentations with limited prediction horizons, our one-stage strategy overcomes these limitations. Additionally, ablations highlight the strategy's generalisability, efficiency, and feasibility in hardware-constrained applications. In terms of future directions, several avenues can be pursued to further enhance and expand our proposed approach. Firstly, the inclusion of Graph Neural Networks (GNNs) as additional architectural variations could be explored to examine the strategy's ability to enhance the social dimension modelling. Secondly, expanding the evaluation of our approach to encompass a broader range of datasets from diverse domains.
2307.13670
On the asymptotic expansions of various quantum invariants II: the colored Jones polynomial of twist knots at the root of unity $e^{\frac{2π\sqrt{-1}}{N+\frac{1}{M}}}$ and $e^{\frac{2π\sqrt{-1}}{N}}$
This is the second article in a series devoted to the study of the asymptotic expansions of various quantum invariants related to the twist knots. In this article, following the method and results in \cite{CZ23-1}, we present an asymptotic expansion formula for the colored Jones polynomial of twist knot $\mathcal{K}_p$ with $p\geq 6$ at the root of unity $e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}$ with $M\geq 2$. Furthermore, by taking the limit $M\rightarrow +\infty$, we obtain an asymptotic expansion formula for the colored Jones polynomial of twist knots $\mathcal{K}_p$ with $p\geq 6$ at the root of unity $e^{\frac{2\pi\sqrt{-1}}{N}}$.
Qingtao Chen, Shengmao Zhu
2023-07-25T17:29:30Z
http://arxiv.org/abs/2307.13670v1
On the asymptotic expansions of various quantum invariants II: the colored Jones polynomial of twist knots at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}\) and \(e^{\frac{2\pi\sqrt{-1}}{N}}\) ###### Abstract. This is the second article in a series devoted to the study of the asymptotic expansions of various quantum invariants related to the twist knots. In this article, following the method and results in [3], we present an asymptotic expansion formula for the colored Jones polynomial of twist knot \(\mathcal{K}_{p}\) with \(p\geq 6\) at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}\) with \(M\geq 2\). Furthermore, by taking the limit \(M\to+\infty\), we obtain an asymptotic expansion formula for the colored Jones polynomial of twist knots \(\mathcal{K}_{p}\) with \(p\geq 6\) at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N}}\). ###### Contents * 1 Introduction * 2 Preliminaries * 2.1 Colored Jones polynomials of the twist knot \(\mathcal{K}_{p}\) * 2.2 Dilogarithm and Lobachevsky functions * 2.3 Quantum dilogrithm functions * 2.4 Saddle point method * 2.5 Conventions and Notations * 3 Calculations of the potential function * 4 Poisson summation formula * 5 Asymptotic expansion at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}\) * 6 Asymptotic expansion at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N}}\) * 7 Related Questions ## 1. Introduction In the first paper of this series [3], we have gotten an asymptotic expansion formula of the colored Jones polynomial for twist knot at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}}\). The motivation of that work [3] is to study a version of volume conjecture for colored Jones polynomial proposed in [6] which states that for a hyperbolic link \(\mathcal{L}\) in \(S^{3}\), we have \[\lim_{N\to\infty}\frac{2\pi}{N}\log|J_{N}(\mathcal{L};e^{\frac{2\pi\sqrt{-1}} {N+\frac{1}{2}}})|=vol(S^{3}\setminus\mathcal{L}). \tag{1.1}\] In the present paper, we first consider the colored Jones polynomial at more general root of unity \(e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}\) with \(M\geq 2\). We obtain an similar asymptotic expansion formula for the colored Jones polynomial for the twist knot \(\mathcal{K}_{p}\) with \(p\geq 6\) similar to the case of \(M=2\) in [3]. The advantage of using the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}\) is that now we have an additional parameter \(M\), so we can take the limit \(\lim_{M\to+\infty}e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}=e^{\frac{2\pi\sqrt{- 1}}{N}}\). Hence it provide a new way to study the asymptotic expansion of the colored Jones polynomial at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N}}\). It is well-known that the original Volume Conjecture due to Kashaev-Murakami-Murakami [8, 10] was proposed at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N}}\). In the second part of this paper, we prove that it is reasonable to take the limit, so we obtain an asymptotic expansion formula for colored Jones polynomial of twist knot \(\mathcal{K}_{p}\) with \(p\geq 6\) at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N}}\). This work makes a connection between the asymptotic expansion of the colored Jones polynomial at different roots of unity. The main results of this paper are as follows. Let \(V(p,t,s)\) be the potential function of the colored Jones polynomial for the twist knot \(\mathcal{K}_{p}\) given by formula (3.14). By Proposition 5.2, there exists a unique critical point \((t_{0},s_{0})\) of \(V(p,t,s)\). Let \(x_{0}=e^{2\pi\sqrt{-1}t_{0}}\) and \(y_{0}=e^{2\pi\sqrt{-1}s_{0}}\), we put \[\zeta(p) =V(p,t_{0},s_{0})\] \[=\pi\sqrt{-1}\left((2p+1)s_{0}^{2}-(2p+3)s_{0}-2t_{0}\right)\] \[+\frac{1}{2\pi\sqrt{-1}}\left(\text{Li}_{2}(x_{0}y_{0})+\text{Li} _{2}(x_{0}/y_{0})-3\text{Li}_{2}(x_{0})+\frac{\pi^{2}}{6}\right) \tag{1.2}\] and \[\omega(p) =\frac{\sin(2\pi s_{0})e^{2\pi\sqrt{-1}t_{0}}}{(1-e^{2\pi\sqrt{-1 }t_{0}})^{\frac{3}{2}}\sqrt{\det Hess(V)(t_{0},s_{0})}}\] \[=\frac{(y_{0}-y_{0}^{-1})x_{0}}{-4\pi(1-x_{0})^{\frac{3}{2}}\sqrt {H(p,x_{0},y_{0})}} \tag{1.3}\] with \[H(p,x_{0},y_{0}) =\left(\frac{-3(2p+1)}{\frac{1}{x_{0}}-1}+\frac{2p+1}{\frac{1}{x _{0}y_{0}}-1}+\frac{2p+1}{\frac{1}{x_{0}/y_{0}}-1}-\frac{3}{(\frac{1}{x_{0}} -1)(\frac{1}{x_{0}y_{0}}-1)}\right.\] \[\left.-\frac{3}{(\frac{1}{x_{0}}-1)(\frac{1}{x_{0}/y_{0}}-1)}+ \frac{4}{(\frac{1}{x_{0}y_{0}}-1)(\frac{1}{x_{0}/y_{0}}-1)}\right). \tag{1.4}\] **Theorem 1.1**.: _For \(p\geq 6\) and \(M\geq 2\), the asymptotic expansion of the colored Jones polynomial of the twist knot \(\mathcal{K}_{p}\) at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}\) is given by the following form_ \[J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}})=( -1)^{p}\frac{4\pi e^{\pi\sqrt{-1}(\frac{1}{4}+\frac{2}{M})}(N+\frac{1}{M})^{ \frac{1}{2}}\sin\frac{\pi}{M}}{\sin\frac{\frac{1}{M}}{N+\frac{1}{M}}}\omega(p) e^{(N+\frac{1}{M})\zeta(p)}\] \[\cdot\left(1+\sum_{i=1}^{d}\kappa_{i}(p,\frac{1}{M})\left(\frac{2 \pi\sqrt{-1}}{N+\frac{1}{M}}\right)^{i}+O\left(\frac{1}{(N+\frac{1}{M})^{d+1}} \right)\right), \tag{1.5}\] _for \(d\geq 1\), where \(\omega(p)\) and \(\kappa_{i}(p,\frac{1}{M})\) are constants determined by \(\mathcal{K}_{p}\), and \(\omega(p)\) is given by formula (1.3)._ Theorem 1.1 holds for any \(M\geq 2\), note that we have the following \[\lim_{M\to\infty}J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}} })=J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N}}), \tag{1.6}\] since the colored Jones polynomial \(J_{N}(\mathcal{K}_{p};q)\) is a polynomial of \(q\). Then we prove that under the limit \(M\to+\infty\), we have **Theorem 1.2**.: _For \(p\geq 6\), the asymptotic expansion of the colored Jones polynomial of the twist knot \(\mathcal{K}_{p}\) at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N}}\) is given by the following form_ \[J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N}}) =(-1)^{p}4\pi e^{\frac{\pi\sqrt{-1}}{4}}N^{\frac{3}{2}}\omega(p)e ^{N\zeta(p)}\] \[\cdot\left(1+\sum_{i=1}^{d}\kappa_{i}(p)\left(\frac{2\pi\sqrt{-1 }}{N}\right)^{i}+O\left(\frac{1}{N^{d+1}}\right)\right), \tag{1.7}\] _for \(d\geq 1\), where \(\omega(p)\) and \(\kappa_{i}(p)\) are constants determined by \(\mathcal{K}_{p}\), and \(\omega(p)\) is given by formula (1.3)._ In [3], we have proved the following **Lemma 1.3** ([3], Lemma 5.4).: (1.8) \[2\pi\zeta(p)=vol(S^{3}\setminus\mathcal{K}_{p})+\sqrt{-1}cs(S^{3}\setminus \mathcal{K}_{p})\mod\pi^{2}\sqrt{-1}\mathbb{Z}.\] _where \(vol(S^{3}\setminus\mathcal{K}_{p})\) denotes the hyperbolic volume of the complement of \(\mathcal{K}_{p}\) in \(S^{3}\) and \(cs(S^{3}\setminus\mathcal{K}_{p})\) denotes the Chern-Simons invariant._ Then, Theorem 1.2 implies that **Corollary 1.4**.: _For \(p\geq 6\), we have_ \[\lim_{N\to\infty}\frac{2\pi}{N}\log J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{- 1}}{N}})=vol(S^{3}\setminus\mathcal{K}_{p})+\sqrt{-1}cs(S^{3}\setminus \mathcal{K}_{p})\mod\pi^{2}\sqrt{-1}\mathbb{Z}. \tag{1.9}\] Hence we prove Kashaev-Murakami-Murakami Volume Conjecture for twist knot \(\mathcal{K}_{p}\) with \(p\geq 6\). **Remark 1.5**.: We need the condition \(p\geq 6\) in the above two theorems since we use the same method from previous work [3]. We remark that this method can also work for the cases of \(p\leq-1\) with some exceptions. The rest of this article is organized as follows. In Section 2, we fix the notations and review the related materials that will be used in this paper. In Section 3, we compute the potential function for the colored Jones polynomials of the twist knot \(\mathcal{K}_{p}\) at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}\). In Section 4, we prove Proposition 4.4 which expresses the colored Jones polynomial of the twist knot \(J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}})\) as a summation of Fourier coefficients by Poisson summation formula. Section 5 is devoted to the study of the asymptotic expansion of the colored Jones polynomial \(J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}})\) of the twist knot \(\mathcal{K}_{p}\) by using the results directly from [3]. In Section 6, we obtain the asymptotic expansion of the colored Jones polynomial \(J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N}})\) of the twist knot \(\mathcal{K}_{p}\) by taking the limit \(M\to+\infty\). **Acknowledgements.** The first author would like to thank Nicolai Reshetikhin, Kefeng Liu and Weiping Zhang for bringing him to this area and a lot of discussions during his career, thank Francis Bonahon, Giovanni Felder and Shing-Tung Yau for their continuous encouragement, support and discussions, and thank Jun Murakami and Tomotada Ohtsuki for their helpful discussions and support. He also want to thank Jorgen Ellegaard Andersen, Sergei Gukov, Thang Le, Gregor Masbaum, Rinat Kashaev, Vladimir Turaev and Hiraku Nakajima for their support, discussions and interests, and thank Yunlong Yao who built him solid analysis foundation twenty years ago. The second author would like to thank Kefeng Liu and Hao Xu for bringing him to this area when he was a graduate student at CMS of Zhejiang University, and for their constant encouragement and helpful discussions since then. ## 2. Preliminaries ### Colored Jones polynomials of the twist knot \(\mathcal{K}_{p}\) We consider the twist knot \(\mathcal{K}_{p}\) illustrated in Figure 1, where the index \(2p\) represents \(2p\) crossings (half-twists). For example, \(\mathcal{K}_{-1}=4_{1}\), \(\mathcal{K}_{1}=3_{1}\), \(\mathcal{K}_{2}=5_{2}\). We will use the following formula for the _normalized \(N\)-colored Jones polynomial_ of the twist knot \(\mathcal{K}_{p}\) given by K. Habiro and G. Masbaum in [7, 9] \[J_{N}(\mathcal{K}_{p};q)=\sum_{k=0}^{N-1}\sum_{l=0}^{k}(-1)^{l}q^{\frac{k(k+3 )}{4}+pl(l+1)}\frac{\{k\}!\{2l+1\}}{\{k+l+1\}!\{k-l\}!}\prod_{i=1}^{k}(\{N+i \}\{N-i\}), \tag{2.2}\] where \[\{n\}=q^{\frac{n}{2}}-q^{-\frac{n}{2}},\ \ \text{for a positive integer $n$}. \tag{2.3}\] Figure 1. Twist knot \(\mathcal{K}_{p}\) ### Dilogarithm and Lobachevsky functions Let \(\log:\mathbb{C}\setminus(-\infty,0]\to\mathbb{C}\) be the standard logarithm function defined by \[\log z=\log|z|+\sqrt{-1}\arg z \tag{2.4}\] with \(-\pi<\arg z<\pi\). The dilogarithm function \(\operatorname{Li}_{2}:\mathbb{C}\setminus(1,\infty)\to\mathbb{C}\) is defined by \[\operatorname{Li}_{2}(z)=-\int_{0}^{z}\frac{\log(1-x)}{x}dx \tag{2.5}\] where the integral is along any path in \(\mathbb{C}\setminus(1,\infty)\) connecting \(0\) and \(z\), which is holomorphic in \(\mathbb{C}\setminus[1,\infty)\) and continuous in \(\mathbb{C}\setminus(1,\infty)\). The dilogarithm function satisfies the following properties \[\operatorname{Li}_{2}\left(\frac{1}{z}\right)=-\operatorname{Li}_{2}(z)- \frac{\pi^{2}}{6}-\frac{1}{2}(\log(-z))^{2}. \tag{2.6}\] In the unit disk \(\{z\in\mathbb{C}||z|<1\}\), \(\operatorname{Li}_{2}(z)=\sum_{n=1}^{\infty}\frac{z^{n}}{n^{2}}\), and on the unit circle \[\{z=e^{2\pi\sqrt{-1}t}|0\leq t\leq 1\}, \tag{2.7}\] we have \[\operatorname{Li}_{2}(e^{2\pi\sqrt{-1}t})=\frac{\pi^{2}}{6}+\pi^{2}t(t-1)+2 \pi\sqrt{-1}\Lambda(t) \tag{2.8}\] where \[\Lambda(t)=\operatorname{Re}\left(\frac{\operatorname{Li}_{2}(e^{2\pi\sqrt{-1 }t})}{2\pi\sqrt{-1}}\right)=-\int_{0}^{t}\log|2\sin\pi t|dt \tag{2.9}\] for \(t\in\mathbb{R}\). The function \(\Lambda(t)\) is an odd function which has period \(1\) and satisfies \(\Lambda(1)=\Lambda(\frac{1}{2})=0\). ### Quantum dilogrithm functions Given two positive integers \(N\) and \(M\) with \(M\geq 2\), we set \[\xi_{N,\frac{1}{M}}=e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}.\] We introduce the holomorphic function \(\varphi_{N,\frac{1}{M}}(t)\) for \(\{t\in\mathbb{C}|0<\operatorname{Re}t<1\}\), by the following integral \[\varphi_{N,\frac{1}{M}}(t)=\int_{-\infty}^{+\infty}\frac{e^{(2t-1)x}dx}{4x \sinh x\sinh\frac{x}{N+\frac{1}{M}}}. \tag{2.10}\] Noting that this integrand has poles at \(n\pi\sqrt{-1}(n\in\mathbb{Z})\), where, to avoid the poles at \(0\), we choose the following contour of the integral \[\gamma=(-\infty,-1]\cup\{z\in\mathbb{C}||z|=1,\operatorname{Im}z\geq 0\}\cup[1, \infty). \tag{2.11}\] **Lemma 2.1**.: _The function \(\varphi_{N,\frac{1}{M}}(t)\) satisfies_ \[(\xi_{N,\frac{1}{M}})_{n}=\exp\left(\varphi_{N,\frac{1}{M}}\left(\frac{1}{2 \left(N+\frac{1}{M}\right)}\right)-\varphi_{N,\frac{1}{M}}\left(\frac{2n+1}{2 \left(N+\frac{1}{M}\right)}\right)\right) \tag{2.12}\] _for \(0\leq n\leq N\), and_ \[(\xi_{N,\frac{1}{M}})_{n} =\exp\left(\varphi_{N,\frac{1}{M}}\left(\frac{1}{2\left(N+\frac{1}{ M}\right)}\right)-\varphi_{N,\frac{1}{M}}\left(\frac{2n+1}{2\left(N+\frac{1}{M} \right)}-1\right)\right.\] \[\left.+\log\left(1-e^{-\frac{2\pi\sqrt{-1}}{M}}\right)\right) \text{ for }N<n\leq 2N. \tag{2.13}\] **Lemma 2.2**.: _We have the following identities:_ \[\varphi_{N,\frac{1}{M}}(t)+\varphi_{N,\frac{1}{M}}(1-t)\] \[=2\pi\sqrt{-1}\left(-\frac{(N+\frac{1}{M})}{2}(t^{2}-t+\frac{1}{ 6})+\frac{1}{24(N+\frac{1}{M})}\right), \tag{2.15}\] \[\varphi_{N,\frac{1}{M}}\left(\frac{1}{2\left(N+\frac{1}{M} \right)}\right)\] \[=\frac{(N+\frac{1}{M})}{2\pi\sqrt{-1}}\frac{\pi^{2}}{6}+\frac{1} {2}\log\left(N+\frac{1}{M}\right)+\frac{\pi\sqrt{-1}}{4}-\frac{\pi\sqrt{-1}}{ 12(N+\frac{1}{M})},\] (2.16) \[\varphi_{N,\frac{1}{M}}\left(1-\frac{1}{2\left(N+\frac{1}{M} \right)}\right)\] \[=\frac{\left(N+\frac{1}{M}\right)}{2\pi\sqrt{-1}}\frac{\pi^{2}}{ 6}-\frac{1}{2}\log\left(N+\frac{1}{M}\right)+\frac{\pi\sqrt{-1}}{4}-\frac{\pi \sqrt{-1}}{12\left(N+\frac{1}{M}\right)}. \tag{2.14}\] The function \(\varphi_{N,\frac{1}{M}}(t)\) is closely related to the dilogarithm function as follows. **Lemma 2.3**.: _(1)For every \(t\) with \(0<Ret<1\),_ \[\varphi_{N,\frac{1}{M}}(t)=\frac{(N+\frac{1}{M})}{2\pi\sqrt{-1}}\text{Li}_{2}( e^{2\pi\sqrt{-1}t})-\frac{\pi\sqrt{-1}e^{2\pi\sqrt{-1}t}}{12(1-e^{2\pi\sqrt{-1}t} )}\frac{1}{N+\frac{1}{M}}+O\left(\frac{1}{(N+\frac{1}{M})^{3}}\right). \tag{2.17}\] _(2) For every \(t\) with \(0<Ret<1\),_ \[\varphi^{\prime}_{N,\frac{1}{M}}(t)=-\left(N+\frac{1}{M}\right)\log(1-e^{2\pi \sqrt{-1}t})+O\left(\frac{1}{\left(N+\frac{1}{M}\right)}\right) \tag{2.18}\] _(3) As \(N\to\infty\), \(\frac{1}{\left(N+\frac{1}{M}\right)}\varphi_{N,\frac{1}{M}}(t)\) uniformly converges to \(\frac{1}{2\pi\sqrt{-1}}\text{Li}_{2}(e^{2\pi\sqrt{-1}t})\) and \(\frac{1}{\left(N+\frac{1}{M}\right)}\varphi^{\prime}_{N,M}(t)\) uniformly converges to \(-\log(1-e^{2\pi\sqrt{-1}t})\) on any compact subset of \(\{t\in\mathbb{C}|0<Ret<1\}\)._ See the literature, such as [11, 1, 15] for the proof of Lemma 2.1, 2.2, 2.3. ### Saddle point method We need to use the following version of saddle point method as shown in [13]. **Proposition 2.4** ([13], Proposition 3.1).: _Let \(A\) be a non-singular symmetric complex \(2\times 2\) matrix, and let \(\Psi(z_{1},z_{2})\) and \(r(z_{1},z_{2})\) be holomorphic functions of the forms,_ \[\Psi(z_{1},z_{2}) =\mathbf{z}^{T}A\mathbf{z}+r(z_{1},z_{2}),\] \[r(z_{1},z_{2}) =\sum_{i,j,k}b_{ijk}z_{i}z_{j}z_{k}+\sum_{i,j,k,l}c_{ijkl}z_{i}z_{j} z_{k}z_{l}+\cdots \tag{2.19}\] _defined in a neighborhood of \(\mathbf{0}\in\mathbb{C}\). The restriction of the domain_ \[\{(z_{1},z_{2})\in\mathbb{C}^{2}|\text{Re}\Psi(z_{1},z_{2})<0\} \tag{2.20}\] _to a neighborhood of \(\mathbf{0}\in\mathbb{C}^{2}\) is homotopy equivalent to \(S^{1}\). Let \(D\) be an oriented disk embedded in \(\mathbb{C}^{2}\) such that \(\partial D\) is included in the domain (2.20) whose inclusion is homotopic to a homotopy equivalence to the above \(S^{1}\) in the domain (2.20). Then we have the following asymptotic expansion_ \[\int_{D}e^{N\psi(z_{1},z_{2})}dz_{1}dz_{2}=\frac{\pi}{N\sqrt{\det(-A)}}\left( 1+\sum_{i=1}^{d}\frac{\lambda_{i}}{N^{i}}+O(\frac{1}{N^{d+1}})\right), \tag{2.21}\] _for any \(d\), where we choose the sign of \(\sqrt{\det(-A)}\) as explained in Proposition [11], and \(\lambda_{i}\)'s are constants presented by using coefficients of the expansion \(\Psi(z_{1},z_{2})\), such presentations are obtained by formally expanding the following formula,_ \[1+\sum_{i=1}^{\infty}\frac{\lambda_{i}}{N^{i}}=\exp\left(Nr\left(\frac{ \partial}{\partial w_{1}},\frac{\partial}{\partial w_{2}}\right)\right)\exp \left(-\frac{1}{4N}(w_{1},w_{2})A^{-1}\binom{w_{1}}{w_{2}}\right)|_{w_{1}=w_{2 }=0}. \tag{2.22}\] See [11] for a proof of the Proposition 2.4, **Remark 2.5** ([13], Remark 3.2).: As mentioned in Remark 3.6 of [11], we can extend Proposition 2.4 to the case where \(\Psi(z_{1},z_{2})\) depends on \(N\) in such a way that \(\Psi(z_{1},z_{2})\) is of the form \[\Psi(z_{1},z_{2})=\Psi_{0}(z_{1},z_{2})+\Psi_{1}(z_{1},z_{2})\frac{1}{N}+R(z_{ 1},z_{2})\frac{1}{N^{2}}. \tag{2.23}\] where \(\Psi_{i}(z_{1},z_{2})\)'s are holomorphic functions independent of \(N\), and we assume that \(\Psi_{0}(z_{1},z_{2})\) satisfies the assumption of the Proposition and \(|R(z_{1},z_{2})|\) is bounded by a constant which is independent of \(N\). ### Conventions and Notations We list the following notations used in this paper and comparing to [3]. The roots of unity: \(\xi_{N,\frac{1}{M}}=e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}\) ( so \(\xi_{N,\frac{1}{2}}\) is equal to the notation \(\xi_{N}\) used in [3]); \(\xi_{N,0}=e^{\frac{2\pi\sqrt{-1}}{N}}\). The Faddeev functions: \(\varphi_{N,\frac{1}{M}}(t)\int_{-\infty}^{+\infty}\frac{e^{(2t-1)x}dx}{4x\sinh x \sinh\frac{x}{N+\frac{1}{M}}}\) ( so \(\varphi_{N,\frac{1}{2}}(t)\) is equal to the notation \(\varphi_{N}(t)\) used in [3]); \(\varphi_{N,0}(t)=\int_{-\infty}^{+\infty}\frac{e^{(2t-1)x}dx}{4x\sinh x\sinh \frac{x}{N}}\). The Potential functions: \(V_{N\frac{1}{M}}(p,t,s)\) is given by formula (3.13) ( so \(V_{N,\frac{1}{2}}(p,t,s)\) is equal to the notation \(V_{N}(p,t,s)\) used in [3]); \(V_{N,0}(p,t,s)\) is given by the formula (6.28). The Fourier coefficients: \(\hat{h}_{N,\frac{1}{M}}(m,n)\) is given by formula (4.13) ( so \(\hat{h}_{N,\frac{1}{2}}(m,n)\) is equal to the notation \(\hat{h}_{N}(m,n)\) used in [3]); \[\tilde{h}_{N,\frac{1}{M}}(m,n)=(1-e^{\frac{2\pi\sqrt{-1}(n+1)}{M}})\hat{h}_{N, \frac{1}{M}}(m,n);\] \[\tilde{h}_{N,0}(m,n)\text{ is given by formula \eqref{eq:h_N,0}.}\] ## 3. Calculations of the potential function This section is devoted to the calculations of the potential function for the colored Jones polynomial \(J_{N}(\mathcal{K}_{p};q)\) at the root of unity \(\xi_{N,\frac{1}{M}}=e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}\). We introduce the following \(q\)-Pochhammer symbol \[(q)_{n}=\prod_{i=1}^{n}(1-q^{i}). \tag{3.1}\] then we have \[\{n\}!=(-1)^{n}q^{\frac{-n(n+1)}{4}}(q)_{n}. \tag{3.2}\] From formula (2.2), we obtain \[J_{N}(\mathcal{K}_{p};q) =\sum_{k=0}^{N-1}\sum_{l=0}^{k}(-1)^{k+l}q^{pl(l+1)+\frac{l(l-1)} {2}-Nk+\frac{k(k+1)}{2}+k}\] \[\cdot\frac{(1-q^{2l+1})}{(1-q^{N})}\frac{(q)_{k}(q)_{N+k}}{(q)_{ k+l+1}(q)_{k-l}(q)_{N-k-1}}. \tag{3.3}\] Hence, at the root of unity \(\xi_{N,\frac{1}{M}}\), we have \[J_{N}(\mathcal{K}_{p};\xi_{N,\frac{1}{M}}) =\sum_{k=0}^{N-1}\sum_{l=0}^{k}\frac{(-1)^{k+l+1}\sin\frac{\pi(2l +1)}{N+\frac{1}{M}}}{\sin\frac{\pi}{N+\frac{1}{M}}}\] \[\cdot\xi_{N,\frac{1}{M}}^{(p+\frac{1}{2})l^{2}+(p+\frac{1}{2})l+ \frac{k^{2}}{2}+2k+\frac{3}{4}}\frac{(\xi_{N,\frac{1}{M}})_{k}(\xi_{N,\frac{1 }{M}})_{N+k}}{(\xi_{N,\frac{1}{M}})_{k+l+1}(\xi_{N,\frac{1}{M}})_{k-l}(\xi_{N, \frac{1}{M}})_{N-k-1}}. \tag{3.4}\] By using Lemma 2.1, we obtain \[\frac{(\xi_{N,\frac{1}{M}})_{k}(\xi_{N,\frac{1}{M}})_{N+k}}{(\xi_ {N,\frac{1}{M}})_{k+l+1}(\xi_{N,\frac{1}{M}})_{k-l}(\xi_{N,\frac{1}{M}})_{N-k- 1}}\] \[=\exp\left(\varphi_{N,\frac{1}{M}}\left(\frac{2(k+l+1)+1}{2(N+ \frac{1}{M})}-1\right)+\varphi_{N,\frac{1}{M}}\left(\frac{2(k-l)+1}{2(N+\frac{ 1}{M})}\right)\right.\] \[\left.+\varphi_{N,\frac{1}{M}}\left(1-\frac{2k+1+\frac{2}{M}}{2(N +\frac{1}{M})}\right)-\varphi_{N,\frac{1}{M}}\left(\frac{2k+1}{2(N+\frac{1}{ M})}\right)\right.\] \[\left.-\varphi_{N,\frac{1}{M}}\left(\frac{2k+1-\frac{2}{M}}{2(N+ \frac{1}{M})}\right)-\varphi_{N,\frac{1}{M}}\left(\frac{1}{2(N+\frac{1}{M})} \right)\right). \tag{3.5}\] for \(N<k+l+1\leq 2N\). Similarly, one can obtain the expression for the case \(0<k+l+1\leq N\), but which will not be used in the rest of this paper. By using Lemma 2.2, we obtain \[\begin{split}&(-1)^{l-k-1}\xi_{N,\frac{1}{M}}^{(2p+1)(l^{2}+l)+k^{ 2}+4k+\frac{3}{2}}\frac{(\xi_{N,\frac{1}{M}})_{k}(\xi_{N,\frac{1}{M}})_{N+k}}{ (\xi_{N,\frac{1}{M}})_{k+l+1}(\xi_{N,\frac{1}{M}})_{k-l}(\xi_{N,\frac{1}{M}})_ {N-k-1}}\\ &=\exp(N+\frac{1}{M})\left(\frac{\pi\sqrt{-1}(2(2p+1)l^{2}+2(2p+ 1)l+(6-\frac{4}{M})k+3-2(\frac{1}{2}+\frac{1}{M})^{2}+\frac{1}{3})}{2(N+\frac{ 1}{M})^{2}}\right.\\ &\frac{\pi\sqrt{-1}(l-\frac{3}{4}+\frac{1}{M})}{N+\frac{1}{M}}- \frac{\log(N+\frac{1}{M})}{2(N+\frac{1}{M})}-\frac{\pi\sqrt{-1}}{12}+\frac{1}{ N+\frac{1}{M}}\varphi_{N,\frac{1}{M}}\left(\frac{k+l+\frac{3}{2}}{N+\frac{1}{M}}-1 \right)\\ &\left.+\frac{1}{N+\frac{1}{M}}\varphi_{N,\frac{1}{M}}\left(\frac {k-l+\frac{1}{2}}{N+\frac{1}{M}}\right)-\frac{1}{N+\frac{1}{M}}\varphi_{N, \frac{1}{M}}\left(\frac{k+\frac{1}{2}-\frac{1}{M}}{N+\frac{1}{M}}\right) \right.\\ &\left.-\frac{1}{N+\frac{1}{M}}\varphi_{N,\frac{1}{M}}\left(\frac {k+\frac{1}{2}}{N+\frac{1}{M}}\right)-\frac{1}{N+\frac{1}{M}}\varphi_{N,\frac{ 1}{M}}\left(\frac{k+\frac{1}{2}+\frac{1}{M}}{N+\frac{1}{M}}\right)\right)\end{split} \tag{3.6}\] for \(N<k+l+1\leq 2N\). Now we set \[t=\frac{k+\frac{1}{2}}{N+\frac{1}{M}},s=\frac{l+\frac{1}{2}}{N+\frac{1}{M}}, \tag{3.7}\] and define the function \(\tilde{V}_{N,\frac{1}{M}}(p,t,s)\) as follows. For \(0<t<1\), \(0<t-s<1\) and \(1<t+s<2\), we let \[\begin{split}&\tilde{V}_{N,\frac{1}{M}}(p,t,s)\\ &=\pi\sqrt{-1}((2p+1)s^{2}+s+(\frac{2}{N+\frac{1}{M}}-2)t-\frac{ 5-\frac{4}{M}}{4(N+\frac{1}{M})}-\frac{6p+4+\frac{12}{M^{2}}}{12(N+\frac{1}{M} )^{2}}-\frac{1}{12})\\ &-\frac{\log(N+\frac{1}{M})}{2(N+\frac{1}{M})}+\frac{1}{N+\frac{ 1}{M}}\left(\varphi_{N,\frac{1}{M}}(t-s+\frac{1}{2(N+\frac{1}{M})})+\varphi_{N, \frac{1}{M}}(t+s+\frac{1}{2(N+\frac{1}{M})}-1)\right.\\ &\left.-\varphi_{N,\frac{1}{M}}(t-\frac{1}{M(N+\frac{1}{M})})- \varphi_{N,\frac{1}{M}}(t)-\varphi_{N,\frac{1}{M}}(t+\frac{1}{M(N+\frac{1}{M} )})\right)\end{split}\] Based on the above calculations, we obtain \[\begin{split}& J_{N}(\mathcal{K}_{p};\xi_{N,\frac{1}{M}})\\ &=\sum_{k=0}^{N-1}\sum_{l=0}^{k}\frac{\sin\frac{\pi(2l+1)}{N+ \frac{1}{M}}}{\sin\frac{\pi}{N+\frac{1}{M}}}e^{(N+\frac{1}{M})\tilde{V}_{N, \frac{1}{M}}\left(\frac{k+\frac{1}{2}}{N+\frac{1}{M}},\frac{l+\frac{1}{2}}{N +\frac{1}{M}}\right)}\\ &=\sum_{k=0}^{N-1}\sum_{l=0}^{k}\frac{\sin\frac{\pi(2l+1)}{N+ \frac{1}{M}}}{\sin\frac{\pi}{N+\frac{1}{M}}}e^{(N+\frac{1}{M})\left(\tilde{V}_ {N,\frac{1}{M}}\left(\frac{k+\frac{1}{2}}{N+\frac{1}{M}},\frac{l+\frac{1}{2}}{ N+\frac{1}{M}}\right)-2\pi\sqrt{-1}\frac{k}{N+\frac{1}{M}}-2(p+2)\pi\sqrt{-1} \frac{l}{N+\frac{1}{M}}\right)}.\end{split} \tag{3.8}\] For convenience, we introduce the function \(V_{N,\frac{1}{M}}(p,t,s)\) which is determined by the following formula \[\tilde{V}_{N,\frac{1}{M}}(p,t,s)-2\pi\sqrt{-1}(t-\frac{\frac{1}{2} }{N+\frac{1}{M}})-2(p+2)\pi\sqrt{-1}(s-\frac{\frac{1}{2}}{N+\frac{1}{M}})\] \[=V_{N,\frac{1}{M}}(p,t,s)+\pi\sqrt{-1}\frac{4p+7+\frac{4}{M}}{4(N +\frac{1}{M})}-\frac{1}{2(N+\frac{1}{M})}\log\left(N+\frac{1}{M}\right). \tag{3.9}\] Note that the functions \(\tilde{V}_{N,\frac{1}{M}}(p,t,s)\) and \(V_{N,\frac{1}{M}}(p,t,s)\) are defined on the region \[D=\{(t,s)\in\mathbb{R}^{2}|0<t<1,0<s<1,0<t-s<1\}. \tag{3.10}\] From formula (3.8), we finally obtain **Proposition 3.1**.: _The normalized \(N\)-th colored Jones polynomial of the twist \(\mathcal{K}_{p}\) at the root of unit \(\xi_{N,\frac{1}{M}}\) can be computed as_ \[J_{N}(\mathcal{K}_{p};\xi_{N,\frac{1}{M}})=\sum_{k=0}^{N-1}\sum_{l=0}^{k}g_{N, \frac{1}{M}}(k,l) \tag{3.11}\] _with_ \[g_{N,\frac{1}{M}}(k,l)=(-1)^{p}e^{\pi\sqrt{-1}(\frac{1}{M}-\frac{1}{4})}\frac {1}{\sqrt{(N+\frac{1}{M})}}\frac{\sin\frac{\pi(2l+1)}{N+\frac{1}{M}}}{\sin \frac{\pi}{N+\frac{1}{M}}}e^{(N+\frac{1}{M})V_{N,\frac{1}{M}}\left(p,\frac{k+ \frac{1}{2}}{N+\frac{1}{M}},\frac{l+\frac{1}{2}}{N+\frac{1}{M}}\right)}, \tag{3.12}\] _where the function \(V_{N,\frac{1}{M}}(p,t,s)\) is given by_ \[V_{N,\frac{1}{M}}(p,t,s)\] \[=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+3)s+\left(\frac{2}{N+\frac{1}{ M}}-2\right)t-\frac{6p+4+\frac{12}{M^{2}}}{12(N+\frac{1}{M})^{2}}\right)\] \[+\frac{1}{N+\frac{1}{M}}\varphi_{N,\frac{1}{M}}\left(t+s+\frac{ \frac{1}{2}}{N+\frac{1}{M}}-1\right)+\frac{1}{N+\frac{1}{M}}\varphi_{N,\frac{1 }{M}}\left(t-s+\frac{\frac{1}{2}}{N+\frac{1}{M}}\right)\] \[-\frac{1}{N+\frac{1}{M}}\varphi_{N,\frac{1}{M}}\left(t-\frac{1}{ N+\frac{1}{M}}\varphi_{N,\frac{1}{M}}\left(t-\frac{\frac{1}{M}}{N+\frac{1}{M}}\right)\right.\] \[-\frac{1}{N+\frac{1}{M}}\varphi_{N,\frac{1}{M}}\left(t+\frac{ \frac{1}{M}}{N+\frac{1}{M}}\right)-\frac{\pi\sqrt{-1}}{12}. \tag{3.13}\] _for \(0<t<1\), \(0<t-s<1\) and \(1<t+s<2\). Similarly, one can write the corresponding expression for the function \(V_{N,\frac{1}{M}}\) for the case \(0<t<1\) and \(0<t\pm s<1\), but which will not used in the following. So we omit it here._ We define the potential function for the twist knot \(\mathcal{K}_{p}\) as follows \[V(p,t,s)=\lim_{N\to\infty}V_{N,\frac{1}{M}}(p,t,s)=\pi\sqrt{-1} \left((2p+1)s^{2}-(2p+3)s-2t\right)\] \[+\frac{1}{2\pi\sqrt{-1}}\left(\text{Li}_{2}(e^{2\pi\sqrt{-1}(t+s )})+\text{Li}_{2}(e^{2\pi\sqrt{-1}(t-s)})-3\text{Li}_{2}(e^{2\pi\sqrt{-1}t})+ \frac{\pi^{2}}{6}\right). \tag{3.14}\] ## 4. Poisson summation formula In this section, with the help of Poisson summation formula, we write the formula (3.11) as a sum of integrals. First, according to formulas (2.2) and (3.11), we have \[g_{N,\frac{1}{M}}(k,l)=(-1)^{l}q^{\frac{k(k+3)}{4}+pl(l+1)}\frac{\{2l+1\}}{\{N\} }\frac{\{k\}!\{N+k\}!}{\{k+l+1\}!\{k-l\}!\{N-k-1\}!}\big{|}_{q=\xi_{N,M}}. \tag{4.1}\] By Lemmas 2.1, 2.2, 2.3 and formula (2.9), we obtain \[\log|\{n\}!|=-(N+\frac{1}{M})\Lambda\left(\frac{n+\frac{1}{2}}{N+\frac{1}{M}} \right)+O(\log(N+\frac{1}{M})) \tag{4.2}\] for any integer \(0<n\leq N\) and at \(q=\xi_{N,\frac{1}{M}}\). We put \[v_{N,\frac{1}{M}}(t,s) =\Lambda\left(t+s-1+\frac{\frac{1}{2}}{N+\frac{1}{M}}\right)+ \Lambda\left(t-s+\frac{\frac{1}{2}}{N+\frac{1}{M}}\right)\] \[-\Lambda\left(t-\frac{\frac{1}{M}}{N+\frac{1}{M}}\right)- \Lambda\left(t\right)-\Lambda\left(t+\frac{\frac{1}{M}}{N+\frac{1}{M}}\right), \tag{4.3}\] then we obtain \[|g_{N,\frac{1}{M}}(k,l)|=e^{(N+\frac{1}{M})v_{N,\frac{1}{M}}\left(\frac{k+ \frac{1}{2}}{N+\frac{1}{M}},\frac{l+\frac{1}{2}}{N+\frac{1}{M}}\right)+O(\log (N+\frac{1}{M}))}. \tag{4.4}\] We define the function \[v(t,s)=\Lambda(t+s)+\Lambda(t-s)-3\Lambda\left(t\right). \tag{4.5}\] Note that \(\left(\frac{k+\frac{1}{2}}{N+\frac{1}{M}},\frac{l+\frac{1}{2}}{N+\frac{1}{M}} \right)\in D=\{(t,s)\in\mathbb{R}^{2}|1<t+s<2,0<t-s<1,\frac{1}{2}<t<1\}\) for \(0\leq k,l\leq N-1\). So we may assume the function \(v(t,s)\) is defined on the region \(D\). We set \[D^{\prime}_{0}=\{0.02\leq t-s\leq 0.7,1.02\leq t+s\leq 1.7,0.2\leq s\leq 0.8,0.5 \leq t\leq 0.909\}. \tag{4.6}\] Let \(\zeta_{\mathbb{R}}(p)\) be the real part of the critical value \(V(p,t_{0},s_{0})\), see formula (5.15) for its precise definition. Then we have **Lemma 4.1** ([3], Lemma 4.1).: _The following domain_ \[\left\{(t,s)\in D|v(t,s)>\frac{3.509}{2\pi}\right\} \tag{4.7}\] _is included in the region \(D^{\prime}_{0}\)._ **Remark 4.2**.: We can take \(\varepsilon>0\) small enough (such as \(\varepsilon=0.00001\)), and set \[D^{\prime}_{\varepsilon}=\left\{0.02+\varepsilon\leq t-s\leq 0.7- \varepsilon,1.02+\varepsilon\leq t+s\leq 1.7-\varepsilon,\right.\] \[\left.0.2+\varepsilon\leq s\leq 0.8-\varepsilon,0.5+ \varepsilon\leq t\leq 0.909-\varepsilon\right\}, \tag{4.8}\] then the region (4.7) can also be included in the region \(D^{\prime}_{\varepsilon}\). **Proposition 4.3** ([3], Proposition 4.3).: _For \(p\geq 6\) and \((\frac{k+\frac{1}{2}}{N+\frac{1}{M}},\frac{l+\frac{1}{2}}{N+\frac{1}{M}})\in D \setminus D^{\prime}_{0}\), we have_ \[|g_{N,\frac{1}{M}}(k,l)|<O\left(e^{(N+\frac{1}{M})(\zeta_{\mathbb{R}}(p)- \epsilon)}\right) \tag{4.9}\] _for some sufficiently small \(\epsilon>0\)._ For a sufficiently small \(\varepsilon\), we take a smooth bump function \(\psi\) on \(\mathbb{R}^{2}\) such that \(\psi(t,s)=1\) on \((t,s)\in D^{\prime}_{\varepsilon}\), \(0<\psi(t,s)<1\) on \((t,s)\in D^{\prime}_{0}\setminus D^{\prime}_{\varepsilon}\), \(\psi(t,s)=0\) for \((t,s)\notin D^{\prime}_{0}\). Let \[h_{N,\frac{1}{M}}(k,l)=\psi\left(\frac{k+\frac{1}{2}}{N+\frac{1}{M}},\frac{l+ \frac{1}{2}}{N+\frac{1}{M}}\right)g_{N,\frac{1}{M}}(k,l). \tag{4.10}\] Then by Proposition 4.3, for \(p\geq 6\), we have \[J_{N}(\mathcal{K}_{p};\xi_{N,\frac{1}{M}})=\sum_{(k,l)\in\mathbb{Z}^{2}}h_{N, \frac{1}{M}}(k,l)+O\left(e^{(N+\frac{1}{M})(\zeta_{\mathbb{R}}(p)-\epsilon)} \right). \tag{4.11}\] Note that \(h_{N,\frac{1}{M}}\) is \(C^{\infty}\)-smooth and equals zero outside \(D^{\prime}_{0}\), it is in the Schwartz space on \(\mathbb{R}^{2}\). By using Poisson summation formula, we obtain **Proposition 4.4**.: _For \(p\geq 6\) and \(M\geq 2\), the normalized \(N\)-th colored Jones polynomial of the twist knot \(\mathcal{K}_{p}\) at the root of unity \(\xi_{N,\frac{1}{M}}\) is given by_ \[J_{N}(\mathcal{K}_{p};\xi_{N,\frac{1}{M}})=\sum_{(m,n)\in\mathbb{Z}^{2}}\hat{h }_{N,\frac{1}{M}}(m,n)+O\left(e^{(N+\frac{1}{M})(\zeta_{\mathbb{R}}(p)- \epsilon)}\right) \tag{4.12}\] _where_ \[\hat{h}_{N,\frac{1}{M}}(m,n) =(-1)^{m+n+p}e^{\pi\sqrt{-1}(\frac{1}{M}-\frac{1}{4})}\frac{(N+ \frac{1}{M})^{\frac{3}{2}}}{\sin\frac{\frac{3}{N+\frac{1}{M}}}{N+\frac{1}{M}}}\] \[\cdot\int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{M} )V_{N,\frac{1}{M}}(p,t,s;m,n)}dtds \tag{4.13}\] _with_ \[V_{N,\frac{1}{M}}\left(p,t,s;m,n\right)=V_{N,\frac{1}{M}}\left(p,t,s\right)- 2\pi\sqrt{-1}mt-2\pi\sqrt{-1}ns, \tag{4.14}\] _and \(V_{N,\frac{1}{M}}\left(p,t,s\right)\) is given by formula (3.13)._ We define the function \[V(p,t,s;m,n)\] \[=\lim_{N\to\infty}V_{N,\frac{1}{M}}\left(p,t,s;m,n\right)\] \[=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+3+2n)s-(2+2m)t\right)\] \[+\frac{1}{2\pi\sqrt{-1}}\left(\mathrm{Li}_{2}(e^{2\pi\sqrt{-1}(t +s)})+\mathrm{Li}_{2}(e^{2\pi\sqrt{-1}(t-s)})-3\mathrm{Li}_{2}(e^{2\pi\sqrt{- 1}t})+\frac{\pi^{2}}{6}\right). \tag{4.15}\] **Lemma 4.5**.: _We have the following identity_ \[V_{N,\frac{1}{M}}(p,t,1-s;m,n) =V_{N,\frac{1}{M}}(p,t,s;m,n)-2(n+1)\pi\sqrt{-1}(1-2s)\] \[=V_{N,\frac{1}{M}}(p,t,s;m,-n-2)-2\pi\sqrt{-1}(n+1). \tag{4.16}\] Proof.: By a straightforward computation, we obtain the following identity \[\pi\sqrt{-1}\left((2p+1)(1-s)^{2}-(2p+2n+3)(1-s)+\left(\frac{2}{N+ \frac{1}{M}}-2m-2\right)t-\frac{1}{12}\right)\] \[=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+2n+3)s+\left(\frac{2}{N+\frac{ 1}{M}}-2m-2\right)t-\frac{1}{12}\right)\] \[-2(n+1)\pi\sqrt{-1}(1-2s)\] \[=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+2(-n-2)+3)s)+\left(\frac{2}{N+ \frac{1}{M}}-2m-2\right)t-\frac{1}{12}\right)\] \[-2\pi\sqrt{-1}(n+1). \tag{4.17}\] which immediately gives the formula (4.16). Similar to the proof of Proposition 4.6 in [3], we obtain **Proposition 4.6**.: _For any \(m,n\in\mathbb{Z}\), we have_ \[\hat{h}_{N,\frac{1}{M}}(m,-n-2)=-e^{\frac{2\pi\sqrt{-1}(n+1)}{M}}\hat{h}_{N, \frac{1}{M}}(m,n). \tag{4.18}\] **Remark 4.7**.: Formula (4.18) implies that \[\hat{h}_{N,\frac{1}{M}}(m,-1)=0. \tag{4.19}\] This is the big cancellation. The first situation of such phenomenon of "Big cancellation" happened in quantum invariants is discovered in the Volume Conjecture of the Turaev-Viro invariants by Chen-Yang [2]. The hidden reason behind that was found and described as a precise statement of symmetric property of asymptotics of quantum 6j-symbol which is on the Poisson Summation level by Chen-Murakami which is Conjecture 3 in [1]. A special case of Conjecture 3 in [1] was proved by Detcherry-Kalfagianni-Yang in [6]. To the best of our knowledge, this is the first time that such a phenomenon of big cancellation on the Poisson Summation level on the case of colored Jones polynomial is proved. ## 5. Asymptotic expansion at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}\) The goal of this section is to estimate each Fourier coefficients \(\hat{h}_{N}(m,n)\) appearing in Proposition 4.4. Recall that \[V_{N,\frac{1}{M}}(p,t,s)\] \[=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+3)s+\left(\frac{2}{N+\frac{1}{M} }-2\right)t-\frac{6p+4+\frac{12}{M^{2}}}{12(N+\frac{1}{M})^{2}}\right)\] \[+\frac{1}{N+\frac{1}{M}}\varphi_{N,\frac{1}{M}}\left(t+s+\frac{ \frac{1}{2}}{N+\frac{1}{M}}-1\right)+\frac{1}{N+\frac{1}{M}}\varphi_{N,\frac{1 }{M}}\left(t-s+\frac{\frac{1}{2}}{N+\frac{1}{M}}\right)\] \[-\frac{1}{N+\frac{1}{M}}\varphi_{N,\frac{1}{M}}\left(t\right)- \frac{1}{N+\frac{1}{M}}\varphi_{N,\frac{1}{M}}\left(t-\frac{\frac{1}{M}}{N+ \frac{1}{M}}\right)\] \[-\frac{1}{N+\frac{1}{M}}\varphi_{N,\frac{1}{M}}\left(t+\frac{ \frac{1}{M}}{N+\frac{1}{M}}\right)-\frac{\pi\sqrt{-1}}{12}. \tag{5.1}\] and \[V(p,t,s;m,n)\] \[=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+3+2n)s-(2+2m)t\right)\] \[+\frac{1}{2\pi\sqrt{-1}}\left(\mathrm{Li}_{2}(e^{2\pi\sqrt{-1}(t+ s)})+\mathrm{Li}_{2}(e^{2\pi\sqrt{-1}(t-s)})-3\mathrm{Li}_{2}(e^{2\pi\sqrt{-1}t})+ \frac{\pi^{2}}{6}\right). \tag{5.2}\] We have **Lemma 5.1**.: _For any \(L>0\), in the region_ \[\{(t,s)\in\mathbb{C}^{2}|(Re(t),Re(s))\in D^{\prime}_{0},|Imt|<L,|Ims|<L\}, \tag{5.3}\] _we have_ \[V_{N,\frac{1}{M}}(p,t,s;m,n) =V(p,t,s;m,n)-\frac{1}{2(N+\frac{1}{M})}\left(\log(1-e^{2\pi\sqrt {-1}(t+s)})\right.\] \[\left.+\log(1-e^{2\pi\sqrt{-1}(t-s)})-4\pi\sqrt{-1}t\right)+ \frac{w_{N,\frac{1}{M}}(t,s)}{(N+\frac{1}{M})^{2}} \tag{5.4}\] _with \(|w_{N,\frac{1}{M}}(t,s)|\) bounded from above by a constant independent of \(N,M\)._ Proof.: By using Taylor expansion, together with Lemma 2.3, we have \[\varphi_{N,\frac{1}{M}}\left(t+s-1+\frac{\frac{1}{2}}{N+\frac{1} {M}}\right)\] \[=\varphi_{N,\frac{1}{M}}(t+s-1)+\varphi^{\prime}_{N,\frac{1}{M}} (t+s-1)\frac{\frac{1}{2}}{N+\frac{1}{M}}\] \[+\frac{\varphi^{\prime\prime}_{N,\frac{1}{M}}(t+s-1)}{2}\left( \frac{\frac{1}{2}}{N+\frac{1}{M}}\right)^{2}+O\left(\left(\frac{1}{N+\frac{1 }{M}}\right)^{2}\right)\] \[=\frac{N+\frac{1}{M}}{2\pi\sqrt{-1}}\mathrm{Li}_{2}(e^{2\pi\sqrt{ -1}(t+s)})-\frac{1}{2}\log(1-e^{2\pi\sqrt{-1}(t+s)})\] \[+\frac{\pi\sqrt{-1}}{6(N+\frac{1}{M})}\frac{e^{2\pi\sqrt{-1}(t+s )}}{1-e^{2\pi\sqrt{-1}(t+s)}}+O\left(\left(\frac{1}{N+\frac{1}{M}}\right)^{2}\right) \tag{5.5}\] Similarly, we have \[\varphi_{N,\frac{1}{M}}\left(t-s+\frac{\frac{1}{2}}{N+\frac{1}{M}}\right)\] \[=\varphi_{N,\frac{1}{M}}(t-s)+\varphi^{\prime}_{N,\frac{1}{M}}(t-s )\frac{\frac{1}{2}}{N+\frac{1}{M}}\] \[+\frac{\varphi^{\prime\prime}_{N,\frac{1}{M}}(t-s)}{2}\left( \frac{\frac{1}{2}}{N+\frac{1}{M}}\right)^{2}+O\left(\left(\frac{1}{N+\frac{1}{ M}}\right)^{2}\right)\] \[=\frac{N+\frac{1}{M}}{2\pi\sqrt{-1}}\text{Li}_{2}(e^{2\pi\sqrt{-1 }(t-s)})-\frac{1}{2}\log(1-e^{2\pi\sqrt{-1}(t-s)})\] \[+\frac{\pi\sqrt{-1}}{6(N+\frac{1}{M})}\frac{e^{2\pi\sqrt{-1}(t-s )}}{1-e^{2\pi\sqrt{-1}(t-s)}}+O\left(\left(\frac{1}{N+\frac{1}{M}}\right)^{2} \right), \tag{5.6}\] \[\varphi_{N,\frac{1}{M}}\left(t+\frac{\frac{1}{M}}{N+\frac{1}{M}}\right)\] \[=\varphi_{N,\frac{1}{M}}(t)+\varphi^{\prime}_{N,\frac{1}{M}}(t) \frac{\frac{1}{M}}{N+\frac{1}{M}}+\frac{\varphi^{\prime\prime}_{N,\frac{1}{M}} (t)}{2}\left(\frac{\frac{1}{M}}{N+\frac{1}{M}}\right)^{2}+O\left(\left(\frac{ 1}{N+\frac{1}{M}}\right)^{2}\right)\] \[=\frac{N+\frac{1}{M}}{2\pi\sqrt{-1}}\text{Li}_{2}(e^{2\pi\sqrt{-1 }t})-\frac{1}{M}\log(1-e^{2\pi\sqrt{-1}t})\] \[+\left(\frac{1}{M^{2}}-\frac{1}{12}\right)\frac{\pi\sqrt{-1}}{(N +\frac{1}{M})}\frac{e^{2\pi\sqrt{-1}t}}{1-e^{2\pi\sqrt{-1}t}}+O\left(\left( \frac{1}{N+\frac{1}{M}}\right)^{2}\right), \tag{5.7}\] \[\varphi_{N,\frac{1}{M}}\left(t\right)\] \[=\frac{N+\frac{1}{M}}{2\pi\sqrt{-1}}\text{Li}_{2}(e^{2\pi\sqrt{- 1}t})-\frac{\pi\sqrt{-1}}{12(N+\frac{1}{M})}\frac{e^{2\pi\sqrt{-1}t}}{1-e^{2 \pi\sqrt{-1}t}}+O\left(\left(\frac{1}{N+\frac{1}{M}}\right)^{3}\right), \tag{5.8}\] and \[\varphi_{N,\frac{1}{M}}\left(t-\frac{\frac{1}{M}}{N+\frac{1}{M}}\right)\] \[=\varphi_{N,M}(t)-\varphi^{\prime}_{N,M}(t)\frac{\frac{1}{M}}{N+ \frac{1}{M}}+\frac{\varphi^{\prime\prime}_{N,M}(t)}{2}\left(\frac{\frac{1}{M}} {N+\frac{1}{M}}\right)^{2}+O\left(\left(\frac{1}{N+\frac{1}{M}}\right)^{2}\right)\] \[=\frac{N+\frac{1}{M}}{2\pi\sqrt{-1}}\text{Li}_{2}(e^{2\pi\sqrt{- 1}t})+\frac{1}{M}\log(1-e^{2\pi\sqrt{-1}t})\] \[+\left(\frac{1}{M^{2}}-\frac{1}{12}\right)\frac{\pi\sqrt{-1}}{(N +\frac{1}{M})}\frac{e^{2\pi\sqrt{-1}t}}{1-e^{2\pi\sqrt{-1}t}}+O\left(\left( \frac{1}{N+\frac{1}{M}}\right)^{2}\right), \tag{5.9}\] Therefore, we obtain \[V_{N,\frac{1}{M}}(p,t,s;m,n)\] \[=V(p,t,s;m,n)-\frac{1}{2(N+\frac{1}{M})}\left(\log(1-e^{2\pi\sqrt{-1 }(t+s)})\right.\] \[\left.+\log(1-e^{2\pi\sqrt{-1}(t-s)})-4\pi\sqrt{-1}t\right)\] \[-\frac{\pi\sqrt{-1}}{12(N+\frac{1}{M})^{2}}\left(-2\frac{e^{2\pi \sqrt{-1}(t+s)}}{1-e^{2\pi\sqrt{-1}(t+s)}}-2\frac{e^{2\pi\sqrt{-1}(t-s)}}{1-e^ {2\pi\sqrt{-1}(t-s)}}+\left(\frac{24}{M^{2}}-3\right)\frac{e^{2\pi\sqrt{-1}t}}{ 1-e^{2\pi\sqrt{-1}t}}\right.\] \[\left.+6p+4+\frac{12}{M^{2}}\right)+O\left(\frac{1}{(N+\frac{1}{ M})^{3}}\right), \tag{5.10}\] Finally, we let \[w_{N,\frac{1}{M}}(t,s)\] \[=-\frac{\pi\sqrt{-1}}{12}\left(-2\frac{e^{2\pi\sqrt{-1}(t+s)}}{1- e^{2\pi\sqrt{-1}(t+s)}}-2\frac{e^{2\pi\sqrt{-1}(t-s)}}{1-e^{2\pi\sqrt{-1}(t-s)}}+ \left(\frac{24}{M^{2}}-3\right)\frac{e^{2\pi\sqrt{-1}t}}{1-e^{2\pi\sqrt{-1}t}}\right.\] \[\left.+6p+4+\frac{12}{M^{2}}\right)+O\left(\frac{1}{(N+\frac{1}{ M})}\right), \tag{5.11}\] and we finish the proof Lemma 5.1. We consider the critical point of \(V(p,t,s)\), which is a solution of the following equations \[\frac{\partial V(p,t,s)}{\partial t} =-2\pi\sqrt{-1}+3\log(1-e^{2\pi\sqrt{-1}t})\] \[-\log(1-e^{2\pi\sqrt{-1}(t+s)})-\log(1-e^{2\pi\sqrt{-1}(t-s)})=0, \tag{5.13}\] \[\frac{\partial V(p,t,s)}{\partial s} =(4p+2)\pi\sqrt{-1}s-(2p+3)\pi\sqrt{-1}\] \[-\log(1-e^{2\pi\sqrt{-1}(t+s)})+\log(1-e^{2\pi\sqrt{-1}(t-s)})=0. \tag{5.12}\] **Proposition 5.2** ([3], Proposition 5.3).: _The critical point equations (5.12), (5.13) has a unique solution \((t_{0},s_{0})=(t_{0R}+X_{0}\sqrt{-1},s_{0R}+Y_{0}\sqrt{-1})\) with \((t_{0R},s_{0R})\) lies in the region \(D^{\prime}_{0}\)._ Now we set \(\zeta(p)\) to be the critical value of the potential function \(V(p,t,s)\), i.e. \[\zeta(p)=V(p,t_{0},s_{0}), \tag{5.14}\] and set \[\zeta_{\mathbb{R}}(p)=Re\zeta(p)=ReV(p,t_{0},s_{0}). \tag{5.15}\] Note that\(V_{N,\frac{1}{M}}(p,t,s;m,n)\) converges to \(V(p,t,s;m,n)\) uniformly. By Lemma 5.1 and Remark 2.5, we only need to verify the assumption of Proposition 2.4 for the function \(V(p,t,s;m,n)\) which has been done in [3]. Hence, as in [3], one can also obtain **Theorem 5.3**.: _For \(p\geq 6\) and \(M\geq 2\), the asymptotic expansion of the colored Jones polynomial of the twist knot \(\mathcal{K}_{p}\) at the root of unity \(\xi_{N,\frac{1}{M}}\) is given by_ \[J_{N}(\mathcal{K}_{p};\xi_{N,\frac{1}{M}}) =(-1)^{p}e^{\pi\sqrt{-1}(\frac{1}{4}+\frac{2}{M})}\frac{4\pi\left( N+\frac{1}{M}\right)^{\frac{1}{2}}\sin\frac{\pi}{M}}{\sin\frac{\pi}{M}}\omega(p)e^{(N +\frac{1}{M})\zeta(p)}\] \[\cdot\left(1+\sum_{i=1}^{d}\kappa_{i}(p,\frac{1}{M})\left(\frac{2 \pi\sqrt{-1}}{N+\frac{1}{M}}\right)^{i}+O\left(\frac{1}{(N+\frac{1}{M})^{d+1}} \right)\right), \tag{5.16}\] _for \(d\geq 1\), where \(\omega(p)\) and \(\kappa_{i}(p,\frac{1}{M})\) are constants determined by \(\mathcal{K}_{p}\). For example_ \[\omega(p) =\frac{\sin(2\pi s_{0})e^{2\pi\sqrt{-1}t_{0}}}{(1-e^{2\pi\sqrt{- 1}t_{0}})^{\frac{3}{2}}\sqrt{\det Hess(V)(t_{0},s_{0})}}\] \[=\frac{(y_{0}-y_{0}^{-1})x_{0}}{-4\pi(1-x_{0})^{\frac{3}{2}}\sqrt {H(p,x_{0},y_{0})}} \tag{5.17}\] _with_ \[H(p,x_{0},y_{0}) =\left(\frac{-3(2p+1)}{\frac{1}{x_{0}}-1}+\frac{2p+1}{\frac{1}{x_ {0}y_{0}}-1}+\frac{2p+1}{\frac{1}{x_{0}/y_{0}}-1}-\frac{3}{(\frac{1}{x_{0}}-1 )(\frac{1}{x_{0}y_{0}}-1)}\right.\] \[\left.-\frac{3}{(\frac{1}{x_{0}}-1)(\frac{1}{x_{0}/y_{0}}-1)}+ \frac{4}{(\frac{1}{x_{0}y_{0}}-1)(\frac{1}{x_{0}/y_{0}}-1)}\right), \tag{5.18}\] _by letting \(x_{0}=e^{2\pi\sqrt{-1}t_{0}}\) and \(y_{0}=e^{2\pi\sqrt{-1}s_{0}}\)._ ## 6. Asymptotic expansion at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N}}\) According to Proposition 4.4, we can write the colored Jones polynomial \(J_{N}(\mathcal{K}_{p};\xi_{N,\frac{1}{M}})\) of the twist knot \(\mathcal{K}_{p}\) at the root of unity \(\xi_{N,\frac{1}{M}}\) as the summation of two parts \[J_{N}(\mathcal{K}_{p};\xi_{N,\frac{1}{M}})=\sum_{(m,n)\in\mathbb{Z}^{2}}\hat{h }_{N,\frac{1}{M}}(m,n)+R_{N,\frac{1}{M}}, \tag{6.1}\] where \[R_{N,\frac{1}{M}}=\sum_{(k,l)\in\mathbb{Z}^{2}\atop(\frac{k+\frac{1}{2}}{N+ \frac{1}{M}},\frac{l+\frac{1}{2}}{N+\frac{1}{M}})\in D\setminus D_{0}^{\prime}} \tag{6.2}\] **Lemma 6.1**.: _There exists a constant \(C_{1}\) independent of \(M\), such that_ \[|R_{N,\frac{1}{M}}|\leq C_{1}e^{N(\zeta_{\mathbb{R}}(p)-\epsilon)}. \tag{6.3}\] _for some sufficiently small \(\epsilon\)._ Proof.: For any \((k,l)\in\mathbb{Z}^{2}\) with \((\frac{k+\frac{1}{2}}{N+\frac{1}{M}},\frac{l+\frac{1}{2}}{N+\frac{1}{M}})\in D \setminus D_{0}^{\prime}\), by modifying the proof of Proposition 4.3 in [3] slightly, we obtain \[|g_{N,\frac{1}{M}}(k,l)|<Ce^{(N+\frac{1}{M})(\zeta_{\mathbb{R}}(p)-\epsilon^{ \prime})}, \tag{6.4}\] where \(C\) is a constant independent of \(N\) and \(M\). Therefore, we obtain \[|R_{N,\frac{1}{M}}| \leq\sum_{(k,l)\in\mathbb{Z}^{2}}|g_{N,\frac{1}{M}}(k,l)|\leq(N+ \frac{1}{M})^{2}Ce^{(N+\frac{1}{M})(\mathbb{G}_{\mathbb{R}}(p)-\epsilon^{ \prime})}\] \[\leq(N+\frac{1}{2})^{2}Ce^{(N+\frac{1}{2})(\mathbb{G}_{\mathbb{R} }(p)-\epsilon^{\prime})}\leq C_{1}e^{N(\zeta_{\mathbb{R}}(p)-\epsilon)} \tag{6.5}\] for some constant \(C_{1}\) independent of \(M\) ( depends on \(N\)), where we have let \(\epsilon=\frac{\epsilon^{\prime}}{2}\). **Lemma 6.2**.: _There exists a constant \(C_{2}\) independent of \(M\), such that_ \[|\sum_{(m,n)\in\mathbb{Z}^{2}}\hat{h}_{N,\frac{1}{M}}(m,n)|\leq C_{2}. \tag{6.6}\] Proof.: We introduce the set \[\mathcal{S}=\{(m,n)\in\mathbb{Z}^{2}|n\geq 0\}. \tag{6.7}\] By using Proposition 4.6, we obtain \[\sum_{(m,n)\in\mathbb{Z}^{2}}\hat{h}_{N,\frac{1}{M}}(m,n)=\sum_{(m,n)\in \mathcal{S}}(1-e^{\frac{2\pi\sqrt{-1}(n+1)}{M}})\hat{h}_{N,\frac{1}{M}}(m,n). \tag{6.8}\] Therefore, we only need to prove that \[\sum_{(m,n)\in\mathcal{S}}|(1-e^{\frac{2\pi\sqrt{-1}(n+1)}{M}})\hat{h}_{N, \frac{1}{M}}(m,n)|\leq C_{2} \tag{6.9}\] for some constant \(C_{2}\) independent of \(M\). Note that, by formula (4.13), we have \[|(1-e^{\frac{2\pi\sqrt{-1}(n+1)}{M}})\hat{h}_{N,\frac{1}{M}}(m,n)|\] \[\leq\frac{2\sin\frac{(n+1)\pi}{M}}{\sin\frac{\pi}{N+\frac{1}{M}} }(N+\frac{1}{M})^{\frac{3}{2}}\left|\int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s) e^{(N+\frac{1}{M})V_{N,\frac{1}{M}}\,(p,t,s;m,n)}dtds\right| \tag{6.10}\] By using the inequalities \[\sin(x)<x,\ \sin(x)>x-\frac{x^{3}}{3!} \tag{6.11}\] for \(x>0\), and \(M\geq 2\), we obtain \[\frac{\sin\frac{(n+1)\pi}{M}}{\sin\frac{\frac{N}{M}}{N+\frac{1}{M}}} \leq\frac{\frac{(n+1)\pi}{M}}{\left(\frac{\pi}{N+\frac{1}{M}}- \frac{1}{6}\frac{(\frac{\pi}{M})^{3}}{(N+\frac{1}{M})^{3}}\right)}=\frac{n+1} {\frac{1}{N+\frac{1}{M}}-\frac{1}{6}\frac{(\frac{\pi}{M})^{2}}{(N+\frac{1}{M} )^{3}}}\] \[\leq\frac{n+1}{\frac{1}{N+\frac{1}{2}}-\frac{1}{6}\frac{(\frac{ \pi}{2})^{2}}{(N+\frac{1}{2})^{3}}}=\frac{n+1}{\frac{1}{N+\frac{1}{2}}\left(1 -\frac{\pi^{2}}{24}\frac{1}{(N+\frac{1}{2})^{2}}\right)}\] \[\leq\frac{n+1}{\frac{1}{N+\frac{1}{2}}\left(1-\frac{\pi^{2}}{24} \frac{1}{(\frac{3}{2})^{2}}\right)}=\frac{(n+1)(N+\frac{1}{2})}{\frac{54-\pi ^{2}}{54}}. \tag{6.12}\] Therefore, we have \[|(1-e^{\frac{2\pi\sqrt{-7}(n+1)}{M}})\hat{h}_{N,\frac{1}{M}}(m,n)|\] \[\leq\frac{2\sin\frac{(n+1)\pi}{M}}{\sin\frac{\frac{\pi}{M}}{N+ \frac{1}{M}}}(N+\frac{1}{M})^{\frac{3}{2}}\left|\int_{D_{0}^{\prime}}\psi(t,s) \sin(2\pi s)e^{(N+\frac{1}{M})V_{N,\frac{1}{M}}\,(p,t,s;m,n)}dtds\right|\] \[\leq\frac{108}{54-\pi^{2}}(n+1)(N+\frac{1}{2})^{\frac{5}{2}} \left|\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{M})V_{N,\frac{1 }{M}}\,(p,t,s;m,n)}dtds\right|. \tag{6.13}\] Let \(\Delta=\frac{\partial^{2}}{\partial t^{2}}+\frac{\partial^{2}}{\partial s^{2}}\) be the Laplacian operator. Let \(l\geq 1\) be an integer, \(\psi(t,s)\) is a bump function which vanishes on the boundary of \(D_{0}^{\prime}\), we have \[\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{M})V_{N, \frac{1}{M}}\,(p,t,s)}\Delta^{l}\left(e^{-2(N+\frac{1}{M})\left(m\pi\sqrt{-1} t+n\pi\sqrt{-1}s\right)}\right)dtds\] \[=\int_{D_{0}^{\prime}}\left(\Delta^{l}\left(\psi(t,s)\sin(2\pi s) e^{(N+\frac{1}{M})V_{N,\frac{1}{M}}\,(p,t,s)}\right)\right)e^{-2(N+\frac{1}{M}) \left(m\pi\sqrt{-1}t+n\pi\sqrt{-1}s\right)}dtds. \tag{6.14}\] In the following, it is enough to use the above formula in the case of \(l=2\). Clearly, the following identity holds \[\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{M})V_{N, \frac{1}{M}}\,(p,t,s)}\left(\Delta^{2}\left(e^{-2(N+\frac{1}{M})\left(m\pi \sqrt{-1}t+n\pi\sqrt{-1}s\right)}\right)\right)dtds\] \[=(2\pi(N+\frac{1}{M}))^{4}(m^{2}+n^{2})^{2}\int_{D_{0}^{\prime}} \psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{M})\left(V_{N,\frac{1}{M}}\,(p,t,s;m,n) \right)}dtds. \tag{6.15}\] Therefore, for \((m,n)\in\mathcal{S}\), we have \[\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{M})V_{N,\frac{1}{M}}\,(p,t,s;m,n)}dtds\] \[=\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{M}) \left(V_{N,\frac{1}{M}}\,(p,t,s)-2\pi m\sqrt{-1}t-2\pi n\sqrt{-1}s\right)}dtds\] \[=\frac{1}{(2\pi(N+\frac{1}{M}))^{4}(m^{2}+n^{2})^{2}}\] \[\cdot\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{M}) V_{N,\frac{1}{M}}\,(p,t,s)}\left(\Delta^{2}e^{-2(N+\frac{1}{M})\left(m\pi \sqrt{-1}t+n\pi\sqrt{-1}s\right)}\right)dtds\] \[=\frac{1}{(2\pi(N+\frac{1}{M}))^{4}(m^{2}+n^{2})^{2}}\] \[\cdot\int_{D_{0}^{\prime}}\left(\Delta^{2}\left(\psi(t,s)\sin(2 \pi s)e^{(N+\frac{1}{M})V_{N,\frac{1}{M}}\,(p,t,s)}\right)\right)e^{-2(N+ \frac{1}{M})\left(m\pi\sqrt{-1}t+n\pi\sqrt{-1}s\right)}dtds\] \[=\frac{1}{(2\pi(N+\frac{1}{M}))^{4}(m^{2}+n^{2})^{2}}\int_{D_{0}^ {\prime}}\tilde{U}_{N,\frac{1}{M}}(p,t,s)e^{(N+\frac{1}{M})\left(V_{N,\frac{1} {M}}\,(p,t,s;m,n)\right)}dtds, \tag{6.16}\] where \[\tilde{U}_{N,\frac{1}{M}}(p,t,s)=e^{-(N+\frac{1}{M})V_{N,\frac{1}{M}}(p,t,s)} \Delta^{2}\left(\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{M})V_{N,\frac{1}{M}}(p,t,s )}\right) \tag{6.17}\] which is a smooth function independent of \(m\) and \(n\). Note that \[\lim_{M\to\infty}V_{N,\frac{1}{M}}(p,t,s)=V_{N,0}(p,t,s) \tag{6.18}\] and \[\lim_{M\to\infty}\tilde{U}_{N,\frac{1}{M}}(p,t,s)=\tilde{U}_{N,0}(p,t,s)=e^{- NV_{N,0}(p,t,s)}\Delta^{2}\left(\psi(t,s)\sin(2\pi s)e^{NV_{N,0}(p,t,s)} \right). \tag{6.19}\] Let \(x=\frac{1}{M}\), then \(V_{N,x}(p,t,s)\) and \(\tilde{U}_{N,x}(p,t,s)\) can be viewed as a continuous function of \((x,t,s)\in[0,\frac{1}{2}]\times D^{\prime}_{0}\). So we can take \[C^{\prime}=\max_{(x,t,s)\in[0,\frac{1}{2}]\times D^{\prime}_{0}}\tilde{U}_{N, x}(p,t,s), \tag{6.20}\] and \[C^{\prime\prime}=\max_{(x,t,s)\in[0,\frac{1}{2}]\times D^{\prime}_{0}}|ReV_{N,x}(p,t,s)|, \tag{6.21}\] then \(C^{\prime}\) and \(C^{\prime\prime}\) are two constants independent of \(M\). Finally, we obtain \[\sum_{(m,n)\in\mathcal{S}}\left|(1-e^{\frac{2\pi\sqrt{-1}(n+1)}{M }})\hat{h}_{N,\frac{1}{M}}(m,n)\right|\] \[\leq\frac{108}{54-\pi^{2}}(N+\frac{1}{2})^{\frac{5}{2}}\sum_{(m, n)\in\mathcal{S}}(n+1)\left|\int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s)e^{(N+ \frac{1}{M})V_{N,\frac{1}{M}}(p,t,s;m,n)}dtds\right|\] \[=\frac{108}{54-\pi^{2}}\frac{(N+\frac{1}{2})^{\frac{5}{2}}}{(2\pi (N+\frac{1}{M}))^{4}}\sum_{(m,n)\in\mathcal{S}}\frac{n+1}{(m^{2}+n^{2})^{2}} \left|\int_{D^{\prime}_{0}}\tilde{U}_{N,\frac{1}{M}}(p,t,s)e^{(N+\frac{1}{M}) V_{N,\frac{1}{M}}(p,t,s;m,n)}dtds\right|\] \[\leq\frac{108}{54-\pi^{2}}\frac{(N+\frac{1}{2})^{\frac{5}{2}}}{(2 \pi N)^{4}}\sum_{(m,n)\in\mathcal{S}}\frac{n+1}{(m^{2}+n^{2})^{2}}C^{\prime}e ^{(N+\frac{1}{2})C^{\prime\prime}}A(D^{\prime}_{0}), \tag{6.22}\] where \(A(D^{\prime}_{0})\) in the last " \(\leq\) " denotes the area of the region \(D^{\prime}_{0}\). Let \[C_{2}=\frac{108}{54-\pi^{2}}\frac{(N+\frac{1}{2})^{\frac{5}{2}}}{(2\pi N)^{4}} \sum_{(m,n)\in\mathcal{S}}\frac{n+1}{(m^{2}+n^{2})^{2}}C^{\prime}e^{(N+\frac{1 }{2})C^{\prime\prime}}A(D^{\prime}_{0}), \tag{6.23}\] since the power series \(\sum_{(m,n)\in\mathcal{S}}\frac{n+1}{(m^{2}+n^{2})^{2}}\) is convergent, \(C_{2}\) is a constant independent of \(M\). Hence we prove Lemma 6.2. **Remark 6.3**.: By the above formula (6.22), if we let \[C_{3}=C^{\prime}e^{(N+\frac{1}{2})C^{\prime\prime}}A(D^{\prime}_{0})\frac{108} {54-\pi^{2}}\frac{(N+\frac{1}{2})^{\frac{5}{2}}}{(2\pi N)^{4}}, \tag{6.24}\] then \(C_{3}\) is a constant independent of \(M\), and we have \[\sum_{(m,n)\in\mathcal{S}}\left|(1-e^{\frac{2\pi\sqrt{-1}(n+1)}{M}})\hat{h}_{N, \frac{1}{M}}(m,n)\right|\leq C_{3}\sum_{(m,n)\in\mathcal{S}}\frac{n+1}{(m^{2}+n ^{2})^{2}}. \tag{6.25}\] Hence the series \(\sum_{(m,n)\in\mathcal{S}}(1-e^{\frac{2\pi\sqrt{-1}(n+1)}{M}})\hat{h}_{N, \frac{1}{M}}(m,n)\) is uniformly converges with respect to \(x=\frac{1}{M}\). **Lemma 6.4**.: _The following identity holds_ \[\lim_{M\to\infty}\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{(N+ \frac{1}{M})V_{N,\frac{1}{M}}\left(p,t,s;m,n\right)}dtds\] \[=\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)\lim_{M\to\infty}e^{(N +\frac{1}{M})V_{N,\frac{1}{M}}\left(p,t,s;m,n\right)}dtds\] \[=\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{NV_{N,0}(p,t,s;m,n) }dtds. \tag{6.26}\] Proof.: Note that the limit \[\lim_{M\to\infty}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{M})V_{N,\frac{1}{M}}\left( p,t,s;m,n\right)}=\psi(t,s)\sin(2\pi s)e^{NV_{N,0}(p,t,s;m,n)} \tag{6.27}\] exists, where \[V_{N,0}(p,t,s) =\lim_{M\to\infty}V_{N,\frac{1}{M}}(p,t,s)\] \[=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+3)s+\left(\frac{3}{N}-2\right) t-\frac{3p+2}{6N^{2}}\right)\] \[+\frac{1}{N}\varphi_{N,0}\left(t+s+\frac{1}{2N}-1\right)+\frac{1} {N}\varphi_{N,0}\left(t-s+\frac{1}{2N}\right)\] \[-\frac{3}{N}\varphi_{N,0}\left(t\right)-\frac{\pi\sqrt{-1}}{12}. \tag{6.28}\] Moreover, let \(x=\frac{1}{M}\), \(V_{N,x}(p,t,s)\) is a continuous function on \([0,\frac{1}{2}]\times D_{0}^{\prime}\). We take \(C=\max_{(x,t,s)\in[0,\frac{1}{2}]\times D_{0}^{\prime}}|\text{Re}V_{N,x}(p,t, s;m,n)|\). Then we obtain \[|\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{M})V_{N,\frac{1}{M}}\left(p,t,s;m,n\right)}|<e^{(N+\frac{1}{2})C}. \tag{6.29}\] Clearly, the upper bound \(e^{(N+\frac{1}{2})C}\) is independent of \(M\). Therefore, by Lebesgue dominated convergence theorem, we prove Lemma 6.4. **Proposition 6.5**.: _For \(p\geq 6\), the colored Jones polynomial of the twist knot \(\mathcal{K}_{p}\) at the root of unity \(\xi_{N,0}\) is given by_ \[J_{N}(\mathcal{K}_{p};\xi_{N,0})=\lim_{M\to\infty}J_{N}(\mathcal{K}_{p};\xi_{N,\frac{1}{M}})=\sum_{(m,n)\in\mathcal{S}}\tilde{h}_{N,0}(m,n)+O(e^{N(\zeta_{ \mathbb{R}}(p)-\epsilon)}), \tag{6.30}\] _where_ \[\mathcal{S}=\{(m,n)\in\mathbb{Z}^{2}|n\geq 0\} \tag{6.31}\] _and_ \[\tilde{h}_{N,0}(m,n)=(-1)^{m+n+p+1}2e^{\frac{\pi\sqrt{-1}}{4}}(n+1)N^{\frac{5}{2}} \int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s)e^{NV_{N,0}(p,t,s;m,n)}dtds. \tag{6.32}\] Proof.: By formula (6.1) and Proposition 4.6, we compute \[J_{N}(\mathcal{K}_{p};\xi_{N,\frac{1}{M}}) =\sum_{(m,n)\in\mathbb{Z}^{2}}\hat{h}_{N,\frac{1}{M}}(m,n)+R_{N, \frac{1}{M}}\] \[=\sum_{(m,n)\in\mathcal{S}}\tilde{h}_{N,\frac{1}{M}}(m,n)+R_{N, \frac{1}{M}} \tag{6.33}\] where \[\tilde{h}_{N,\frac{1}{M}}(m,n) =(1-e^{\frac{2\pi\sqrt{-1}(n+1)}{M}})\hat{h}_{N,\frac{1}{M}}(m,n)\] \[=\sum_{(m,n)\in\mathcal{S}}(-1)^{m+n+p+1}2e^{\frac{(n+2)\pi\sqrt{ -1}}{M}}e^{\frac{\pi\sqrt{-1}}{4}}\sin\left(\frac{(n+1)\pi}{M}\right)\frac{(N +\frac{1}{M})^{\frac{3}{2}}}{\sin\frac{\frac{\pi}{M}}{N+\frac{1}{M}}}\] \[\cdot\int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{M} )V_{N,\frac{1}{M}}(p,t,s;m,n)}dtds. \tag{6.34}\] Therefore, by Lemma 6.2 and Remark 6.3, we obtain \[J_{N}(\mathcal{K}_{p};\xi_{N,0}) =\lim_{M\to\infty}J_{N}(\mathcal{K}_{p};\xi_{N,\frac{1}{M}})\] \[=\sum_{(m,n)\in\mathcal{S}}\lim_{M\to\infty}\tilde{h}_{N,\frac{1} {M}}(m,n)+\lim_{M\to\infty}R_{N,\frac{1}{M}}. \tag{6.35}\] Furthermore, by using Lemma 6.4 and Lemma 6.1 respectively, we get \[\lim_{M\to\infty}\tilde{h}_{N,\frac{1}{M}}(m,n)=\tilde{h}_{N,0}(m,n), \tag{6.36}\] where \(\tilde{h}_{N,0}(m,n)\) is given by formula (6.32), and \[|\lim_{M\to\infty}R_{N,\frac{1}{M}}|<C_{1}e^{N(\zeta_{\mathbb{R}}(p)-\epsilon )}. \tag{6.37}\] Combining the above formulas together, we obtain Proposition 6.5. Similar to the proof of Lemma 5.1, we see that \(V_{N,0}(p,t,s;m,n)\) converges to the potential function \(V(p,t,s;m,n)\) uniformly on \(D^{\prime}_{0}\). We can apply the results in [3] to estimate every integral appearing in the Fourier coefficients \(\tilde{h}_{N,0}(m,n)\) of Proposition 6.5, \[\int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s)e^{NV_{N,0}(p,t,s;m,n)}dtds. \tag{6.38}\] for \((m,n)\in\mathcal{S}\). Finally, we obtain **Theorem 6.6**.: _For \(p\geq 6\), the asymptotic expansion of the colored Jones polynomial of the twist knot \(\mathcal{K}_{p}\) at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N}}\) is given by the following form_ \[J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N}}) =(-1)^{p}4\pi e^{\frac{\pi\sqrt{-1}}{4}}N^{\frac{3}{2}}\omega(p)e ^{N\zeta(p)}\] \[\cdot\left(1+\sum_{i=1}^{d}\kappa_{i}(p)\left(\frac{2\pi\sqrt{-1} }{N}\right)^{i}+O\left(\frac{1}{N^{d+1}}\right)\right), \tag{6.39}\] _for \(d\geq 1\), where \(\omega(p)\) and \(\kappa_{i}(p)\) are constants determined by \(\mathcal{K}_{p}\)._ ## 7. Related Questions In this final section, we give two related questions which deserve to be studied further. 1. The first natural question is to study the asymptotic expansion formula for the colored Jones polynomial of the double twist knot \(\mathcal{K}_{p,s}\) at the different roots of unity. 2. It is also interesting to study the asymptotic expansion for the Reshetikhin-Turaev invariants of the closed hyperbolic 3-manifolds obtained by \(\frac{p}{q}\)-surgery along the twist knot at the root of unity \(e^{\frac{4\pi\sqrt{-1}}{r}}\). 3. We prove the volume conjecture for twist knot \(\mathcal{K}_{p}\) with \(p\geq 6\) in this paper. Ohtsuki's work [11, 12] imply that the volume conjecture for twist knots \(\mathcal{K}_{2}=5_{2}\) and \(\mathcal{K}_{3}=7_{2}\) holds. So the volume conjecture for twist knot \(\mathcal{K}_{4}\) and \(\mathcal{K}_{5}\) is still open. 4. As to the twist knot \(\mathcal{K}_{p}\) with \(p\leq-1\), the volume conjecture for \(\mathcal{K}_{-1}=4_{1}\) was proved firstly by Ekholm, \(\mathcal{K}_{-2}=6_{1}\) was proved in [14]. Our method seems can not be applied to the cases \(\mathcal{K}_{-3}=8_{1}\) and \(\mathcal{K}_{-4}=10_{1}\) directly, so from our point of view, it is also difficult to prove the volume conjecture for them.
2303.10856
Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering Regularized Self-Training
Deploying models on target domain data subject to distribution shift requires adaptation. Test-time training (TTT) emerges as a solution to this adaptation under a realistic scenario where access to full source domain data is not available, and instant inference on the target domain is required. Despite many efforts into TTT, there is a confusion over the experimental settings, thus leading to unfair comparisons. In this work, we first revisit TTT assumptions and categorize TTT protocols by two key factors. Among the multiple protocols, we adopt a realistic sequential test-time training (sTTT) protocol, under which we develop a test-time anchored clustering (TTAC) approach to enable stronger test-time feature learning. TTAC discovers clusters in both source and target domains and matches the target clusters to the source ones to improve adaptation. When source domain information is strictly absent (i.e. source-free) we further develop an efficient method to infer source domain distributions for anchored clustering. Finally, self-training~(ST) has demonstrated great success in learning from unlabeled data and we empirically figure out that applying ST alone to TTT is prone to confirmation bias. Therefore, a more effective TTT approach is introduced by regularizing self-training with anchored clustering, and the improved model is referred to as TTAC++. We demonstrate that, under all TTT protocols, TTAC++ consistently outperforms the state-of-the-art methods on five TTT datasets, including corrupted target domain, selected hard samples, synthetic-to-real adaptation and adversarially attacked target domain. We hope this work will provide a fair benchmarking of TTT methods, and future research should be compared within respective protocols.
Yongyi Su, Xun Xu, Tianrui Li, Kui Jia
2023-03-20T04:30:18Z
http://arxiv.org/abs/2303.10856v1
# Revisiting Realistic Test-Time Training: ###### Abstract Deploying models on target domain data subject to distribution shift requires adaptation. Test-time training (TTT) emerges as a solution to this adaptation under a realistic scenario where access to full source domain data is not available, and instant inference on the target domain is required. Despite many efforts into TTT, there is a confusion over the experimental settings, thus leading to unfair comparisons. In this work, we first revisit TTT assumptions and categorize TTT protocols by two key factors. Among the multiple protocols, we adopt a realistic sequential test-time training (sTTT) protocol, under which we develop a _test-time anchored clustering (TTAC)_ approach to enable stronger test-time feature learning. TTAC discovers clusters in both source and target domains and matches the target clusters to the source ones to improve adaptation. When source domain information is strictly absent (i.e. source-free) we further develop an efficient method to infer source domain distributions for anchored clustering. Finally, self-training (ST) has demonstrated great success in learning from unlabeled data and we empirically figure out that applying ST alone to TTT is prone to confirmation bias. Therefore, a more effective TTT approach is introduced by regularizing self-training with anchored clustering, and the improved model is referred to as TTAC++. We demonstrate that, under all TTT protocols, TTAC++ consistently outperforms the state-of-the-art methods on five TTT datasets, including corrupted target domain, selected hard samples, synthetic-to-real adaptation and adversarially attacked target domain. We hope this work will provide a fair benchmarking of TTT methods, and future research should be compared within respective protocols. Test-Time Training; Domain Adaptation; Transfer Learning; Self-Training ## 1 Introduction The recent success in deep learning is attributed to the availability of large labeled data [1, 2, 3] and the assumption of i.i.d. between training and test datasets. Such assumptions could be violated when testing data features a drastic difference from the training data, e.g. training on synthetic images and test on real ones, or training on clean samples and test on corrupted ones. This situation is often referred to as domain shift [4, 5, 6]. To tackle this issue, domain adaptation (DA) [7] emerges and the labeled training data and unlabeled testing data are often referred to as source and target data/domains respectively. The existing DA works either require the access to both source and target domain data during training [8, 9, 10] or training on multiple domains simultaneously [11]. The former approaches render the methods restrictive to limited scenarios where source domain data is always available during adaptation while the latter ones are computationally more expensive. To alleviate the dependence on source domain data, which may be inaccessible due to privacy issues or storage overhead, source-free domain adaptation (SFDA) emerges which handles DA on target data without access to source data [12, 13, 14, 15, 16]. SFDA is often achieved through self-training [12, 17], self-supervised learning [16, 18] or introducing prior knowledge [12] and it requires multiple training epochs on the full target dataset to allow model convergence. Despite easing the dependence on source data, SFDA has major drawbacks in a more realistic domain adaptation scenario where test data arrives in a stream and inference or prediction must be taken instantly, and this setting is often referred to as test-time training (TTT) or adaptation (TTA) [16, 19, 20, 21, 22, 23]. Despite the attractive feature of adaption at test time, we notice a confusion of what defines a test-time training and as a result comparing apples and oranges happens frequently in the community. In this work, we first categorize TTT by **two key factors** after summarizing various definitions made in existing works. First, under a realistic TTT setting, test samples are sequentially streamed and predictions should be made instantly upon the arrival of a new test sample. More specifically, the prediction of test sample \(x_{T}\), arriving at time stamp \(T\), should not be affected by any subsequent samples, \(\{x_{t}\}_{t=T+1\cdots\infty}\). The sequential protocol widely exists in many real-world application. For example, in video surveillance, cameras are expected to function instantly after installment and adaptation to target domain must be carried out on-the-fly. Throughout this work, We refer to the sequential streaming setting as the **one-pass adaptation** protocol and any other protocols violating this assumption are referred to as **multi-pass adaptation** (model may be updated on all test data for multiple epochs before inference). Second, we notice some recent works must **modify source domain training loss**, e.g. by introducing additional self-supervised branch, to allow more effective TTT [16, 19]. This will introduce additional overhead in the deployment of TTT because re-training on some source dataset, e.g. ImageNet, is computationally expensive. Thus, we distinguish methods by whether source domain training objective is modified or not. In this work, we aim to tackle on the most realistic TTT protocol, i.e. one-pass test time training with no modifications to training objective. We refer to this new TTT protocol as **sequential test time training (sTTT)**. The proposed setting is similar to TTA proposed in [20] except for not restricting access to a light-weight distribution information from the source domain. We believe having access to distribution information, e.g. distributions' mean and covariance, in the source domain is a realistic assumption for two reasons. First, the objective of TTT is efficient adaptation at test time, this assumption only requires storing the means and covariance matrices which are memory efficient. Moreover, feature distribution information will not pose any threat to privacy leakage as inverting backbone network, e.g. CNN, is known to be very challenging [24]. Nevertheless, we are aware that under certain scenarios source domain distribution information is not always available for test-time training, i.e. **source-free test-time training**. Such a situation could happen when source distribution is not recorded during training, or model is trained through federated learning where access to the whole source data is prohibited [25]. A robust TTT method should therefore be versatile and still function well in the absence of source domain distribution information. In this work, we propose four techniques to enable efficient and accurate sTTT regardless the availability of source domain distribution information. i) We are inspired by the recent progresses in unsupervised domain adaptation [26] that encourages testing samples to form clusters in the feature space. However, separately learning to cluster in the target domain without regularization from source domain does not guarantee improved adaptation [26]. To overcome this challenge, we identify clusters in both the source and target domains through a mixture of Gaussians with each component Gaussian corresponding to one category. Provided with the category-wise statistics from source domain as anchors, we match the target domain clusters to the anchors by minimizing the KL-Divergence as the training objective for sTTT. Therefore, we name the proposed method through feature clustering alone as _test-time anchored clustering (TTAC)_. Since test samples are sequentially streamed, we develop an exponential moving averaging strategy to update the target domain cluster statistics to allow gradient-based optimization. ii) Each component Gaussian in the target domain is updated by the test sample features that are assigned to the corresponding category. Thus, incorrect assignments (pseudo labels) will harm the estimation of component Gaussian. To tackle this issue, we are inspired by the correlation between network's stability and confidence and pseudo label accuracy [27, 28], and propose to filter out potential incorrect pseudo labels. Component Gaussians are then updated by the samples that have passed the filterings. To exploit the filtered out samples, we incorporate a global feature alignment [16] objective. iii) Self-training (ST) exploits unlabeled data through predicting pseudo labels with existing model and use the predicted pseudo labels as target to further update the model parameters. ST has demonstrated remarkable success for semi-supervised learning [28] and domain adaptation [29], and we hypothesize that self-training on target domain data could benefit TTT as well. However, as we discovered empirically, direct self-training on target domain yields much inferior results compared to anchored clustering. We attribute this phenomenon to the fact that when there is a significant distribution shift between source and target domains pseudo labels predicted on target domain are more likely to be incorrect. As a result, self-training on incorrect pseudo labels leads to inferior accuracy on target domain. To alleviate the impact of distribution shift on self-training, we use anchored clustering to regularize self-training such that we can simultaneously minimize distribution shift and exploit pseudo labels on target domain to update model parameters. We refer to the combined model as **TTAC++** to acknowledge the importance of anchored clustering. Extensive evaluations have demonstrated the effectiveness of combining anchored clustering and self-training. iv) When source domain distribution information is strictly absent, we propose to infer the source domain distribution by backpropagating classification loss through category-wise distribution parameters. We demonstrate through simple derivation that sampling from the distribution is not necessary during the optimization and the distribution parameters can be learned by efficient gradient descent methods. We summarize the contributions of this work as below. * In light of the confusions within TTT works, we provide a categorization of TTT protocols by two key factors. Comparison of TTT methods is now fair within each category. * We adopt a realistic TTT setting, namely sTTT. To improve test-time feature learning, we propose TTAC by matching the statistics of the target clusters to the source ones. The target statistics are updated through moving averaging with filtered pseudo labels. * To further exploit the unlabeled target domain data, we incorporate a self-training approach to update model w.r.t. classification loss, and we reveal that regularizing self-training with anchored clustering, referred to as TTAC++, consistently outperforms TTAC with minute additional computation overhead. * To enable strict source-free test-time training, we develop a light-weight method to infer source domain distributions. We demonstrate that TTAC++ outperforms state-of-the-art methods under the strict source-free sTTT protocol. * The proposed method is demonstrated on five test-time training datasets, among which three datasets (CIFAR10/100-C & ImageNet-C) focus on test-time adaptation to corrupted target domains, one (CIFAR10.1) focuses on selected hard samples and another one (VisDA) focuses on synthetic-to-real adaptation. We also evaluate test-time training on adversarially attacked target dataset. We demonstrate that TTAC++ achieves the state-of-the-art performance on all benchmarks under multiple TTT protocols. ## 2 Related Work ### _Unsupervised Domain Adaptation_ Domain adaptation [7] aims to improve model generalization when source and target data are not drawn i.i.d. Unsupervised domain adaptation (UDA) [8, 30, 31] makes an assumption that labeled data is only available in the source domain and target domain data are totally unlabeled. UDA methods often simultaneously learn domain invariant feature representations on both source and target domains to improve generalization. This is achieved by introducing a domain discriminator [8], Follow-up works improve DA by minimizing a divergence [32, 33, 34], adversarial training [10] or discovering cluster structures in the target data [26]. Apart from formulating DA within a task-specific model, re-weighting has been adopted for domain adaptation by selectively up-weighting conducive samples in the source domain [35, 36]. Despite the efforts in UDA, it is inevitable to access the source domain data which may be not accessible due to privacy issues, storage overhead, etc. Deploying DA in more realistic scenarios has inspired research into source-free domain adaptation and test-time training/adaptation. ### _Source-Free Domain Adaptation_ UDA is often implemented by simultaneously updating model parameters on both source and target domain data [8]. Having access to target domain data during model training may not be practical in real-world applications. For instance, users may buy pretrained model from suppliers and hope to adapt to proprietary data. Access to source domain data could be prohibited due to privacy or data storage issues. Without the access to source data, source-free domain adaptation (SFDA) emerges as a more realistic solution. SFDA is often developed through self-training [12, 13, 17, 21, 37], self-supervised training [16] or clustering in the target domain [14]. It has been demonstrated that SFDA performs well on seminal domain adaptation datasets even compared against UDA methods [26]. SFDA often requires access to all testing data and model adaptation is carried out by iteratively updating on the testing data. Despite the advantage of not requiring source domain data during model adaptation, the iterative model updating strategy restricts the application of SFDA to scenarios where target domain distribution is fixed and training data in target domain is readily available. In a more realistic DA scenario where data arrives in a stream and inference and adaptation must be implemented simultaneously SFDA will no longer be effective. ### _Test-Time Training_ Collecting enough samples from target domain and adapt models in an offline manner restricts the application to adapting to static target domain. To allow fast and online adaptation, test-time training (TTT) [19, 21, 22, 38, 39, 40] or adaptation (TTA) [20] emerges. TTT tackles a scenario where a distribution shift between source and target domain exists and source model is preferably adapted to target domain in a light-weight fashion. Despite many recent works claiming to be test-time training, we notice a severe confusion over the definition of TTT. In particular, whether training objective must be modified [19, 16] and whether sequential inference on target domain data is possible [20, 21]. Therefore, to reflect the key challenges in TTT, we define a setting called sequential test-time training (sTTT) which neither modifies the training objective nor violates sequential inference. Under the more clear definition, some existing works, e.g. TTT [19] and TTT++ [16] is more likely to be categorized into SFDA. Several existing works [20, 21] can be adapted to the sTTT protocol. Tent [20] proposed to adjust affine parameters in the batchnorm layers to adapt to target domain data. The high TTA efficiency inevitably leads to limited performance gain on the target domain. T3A [21] further proposed to update classifier prototype through pseudo labeling. Despite being efficient, updating classifier prototype alone does not affect feature representation for the target domain. Target feature may not form clusters at all when the distribution mismatch between source and target is large enough. In this work we propose to simultaneously cluster on the target domain and match target clusters to source domain classes, namely anchored clustering. To further constrain feature update, we introduce additional global feature alignment and pseudo label filtering. Through the introduced anchored clustering, we achieve test-time training of more network parameters and achieve the state-of-the-art performance. ### _Self-Training_ Training models with predictions from their own has been a long-standing paradigm for learning from unlabeled data. In the realm of semi-supervised learning [41], which aims to exploit few labeled data and large unlabeled, self-training has been widely adopted to produce pseudo labels for unlabeled data. Among these works, label-propagation [42] is implemented on the deep representations to provide pseudo labels for unlabeled data and self-training is achieved by training with the pseudo labels [43]. FixMatch [28] utilizes the predictions on weak augmented samples as pseudo label to supervise network training on unlabeled data. MixMatch [44] sharpens model prediction to serve as pseudo label for self-training. Self-training recently emerges as a promising solution to domain adaptation by updating model on target pseudo labels [29, 45, 46]. Some concurrent works also demonstrated that self-training is also effective when source domain data is absent [47, 48]. In this work, we hypothesize that self-training could benefit test-time training by providing pseudo labels on the target domain samples. More importantly, we discover that self-training alone without any constraint is less effective for TTT under large domain shift due to the high noise in pseudo labels. Through combining anchored clustering and self-training, we demonstrate a significant improvement from the previous state-of-the-art thanks to the improved pseudo label quality. ## 3 Methodology In this section we first introduce the anchored clustering objective for test-time training through pseudo labeling and then describe an efficient iterative updating strategy. We then introduce the solution to infer source distribution when source domain data is strictly absent, and self-training for TTT. For simplicity, We denote the source and target domain datasets as \(\mathcal{D}_{s}=\{x_{i},y_{i}\}_{i=1\cdots N_{s}}\) and \(\mathcal{D}_{t}=\{x_{i}\}_{i=1\cdots N_{t}}\) where a minibatch of target test samples at time stamp \(t\) is defined as \(\mathcal{B}^{t}=\{x_{i}\}_{i=tN_{B}\cdots(t+1)N_{B}}\). We further denote the posterior prediction for \(x_{i}\) at time stamp \(t\) as \(q_{i}^{t}=\delta(h(z_{i};w)),~{}s.t.~{}z_{i}=f(x_{i};\Theta),\) where \(\delta(\cdot,h)\), \((\cdot;w)\) and \(f(\cdot;\Theta)\) denote the a standard softmax function, the classifier head and backbone network, respectively. The \(D\) dimensional feature representation is defined as the output of backbone network \(z_{i}=f(x_{i};\Theta)\in\mathcal{R}^{+D}\) due to ReLu activation. An overview of the proposed pipeline is illustrated in Fig. 1. ### _Anchored Clustering for Test-Time Training_ Inspired by the success of discovering cluster structures in the target domain for unsupervised domain adaptation [26], we develop an anchored clustering on the test data alone as the initial module for test-time training. We first use a mixture of Gaussians to model the clusters in the target domain, here each component Gaussian represents one discovered cluster. We further use the distributions of each category in the source domain as anchors for the target distribution to match against. In this way, test data features can simultaneously form clusters and the clusters are associated with source domain categories, resulting in improved generalization to target domain. Formally, we first denote the mixture of Gaussians in the source and target domains as, \[\begin{split} p_{s}(z)&=\sum_{k}\alpha_{k}\mathcal{N}( \mu_{sk},\Sigma_{sk}),\\ p_{t}(z)&=\sum_{k}\beta_{k}\mathcal{N}(\mu_{tk}, \Sigma_{tk})\end{split} \tag{1}\] where \(\{\mu_{k}\in\mathbb{R}^{d},\Sigma_{k}\in\mathbb{R}^{d\times d}\}\) represent one cluster in the source/target domain and \(d\) is the dimension of feature embedding. Both \(\mu_{sk}\) and \(\Sigma_{sk}\) can be readily estimated from \(\mathcal{D}_{s}\) through MLE. Anchored clustering can be achieved by matching the above two distributions and one may directly minimize the KL-Divergence between the two distribution. Nevertheless, this is non-trivial because the KL-Divergence between two mixture of Gaussians has no closed-form solution which prohibits efficient gradient-based optimization. Despite some approximations exist [49], without knowing the semantic labels for each Gaussian component, even a good match between two mixture of Gaussians does not guarantee target clusters are aligned to the correct source ones and this will severely harm the performance of test-time training. In light of these challenges, we propose a category-wise alignment. Specifically, we allocate the same number of clusters in both source and target domains, each corresponding to one semantic class, and each target cluster is assigned to one source cluster. We can then minimize the KL-Divergence between each pair of clusters as in Eq. 2. \[\begin{split}\mathcal{L}_{ac}=&\sum_{k}D_{KL}( \mathcal{N}(\mu_{sk},\Sigma_{sk})||\mathcal{N}(\mu_{tk},\Sigma_{tk}))\\ =&\sum_{k}-H(\mathcal{N}(\mu_{sk},\Sigma_{sk}))+H( \mathcal{N}(\mu_{sk},\Sigma_{sk}),\mathcal{N}(\mu_{tk},\Sigma_{tk}))\end{split} \tag{2}\] The KL-Divergence can be further decomposed into the entropy \(H(\mathcal{N}(\mu_{sk},\Sigma_{sk}))\) and cross-entropy \(H(\mathcal{N}(\mu_{sk},\Sigma_{sk}),\mathcal{N}(\mu_{tk},\Sigma_{tk}))\). It is commonly true that the source reference distribution \(p_{s}(z)\) is fixed thus the entropy term is a constant \(C\) and only the cross-entropy term is to be optimized. Given the closed-form solution to the KL-Divergence between two Gaussian distributions, we now write the anchored clustering objective as, \[\begin{split}\mathcal{L}_{ac}=&\sum_{k}\{\log \sqrt{2\pi^{d}|\Sigma_{tk}|}+\frac{1}{2}(\mu_{tk}-\mu_{sk})^{\top}\Sigma_{tk}^{ -1}(\mu_{tk}-\mu_{sk})\\ &+tr(\Sigma_{tk}^{-1}\Sigma_{sk})\}+C\end{split} \tag{3}\] The source cluster parameters, mean and covariance, can be readily estimated in an offline manner by running through the training samples. These information will not cause any privacy leakage and only introduces a small computation and storage overheads. Nevertheless, one might encounter a more constrained scenario where distributional information on the source domain is prohibited, e.g. downstream user can only have access to model architecture and weights. In the following section, we shall introduce a strict source-free anchored clustering method by inferring source domain clusters. To differentiate the settings, we refer to the former one as source-light TTT, where statistical Fig. 1: Overview of TTAC++. i) In the source domain offline stage, we calculate or infer category-wise and global distributions in the source domain as anchors. ii) In the test-time stage, testing samples are sequentially streamed and pushed into a fixed-length queue. We apply self-training to testing samples to adapt model weights. Anchored clustering is employed to regularize self-training by aligning source and target domain distributions. information on source domain is still available, and the latter one as source-free TTT, where no information on source domain is available. ### _Clustering through Pseudo Labeling_ In order to test-time train network with anchored clustering loss, one must obtain target cluster parameters \(\{\mu_{tk},\Sigma_{tk}\}\). For a mini-batch of target test samples \(\mathcal{B}^{t}=\{x_{i}\}_{i=tN_{B\ldots(t+1)N_{B}}}\) at timestamp \(t\), the pseudo labels are obtained via \(\hat{y}_{i}=\operatorname*{arg\,max}_{k}q_{ik}^{t}\). Given the predicted pseudo labels we could estimate the mean and covariance for each component Gaussian with the pseudo labeled testing samples. However, pseudo labels are always subject to model's discrimination ability. The error rate for pseudo labels is often high when the domain shift between source and target is large, directly updating the component Gaussian is subject to erroneous pseudo labels, a.k.a. confirmation bias [50]. To reduce the impact of incorrect pseudo labels, we first adopt a light-weight temporal consistency (TC) pseudo label filtering approach. Compared to co-teaching [51] or meta-learning [52] based methods, this light-weight method does not introduce additional computation overhead and is therefore more suitable for test-time training. Specifically, to alleviate the impact from the noisy predictions, we calculate the temporal exponential moving averaging posteriors \(\tilde{q}^{t}\in[0,1]^{N\times K}\) as below, \[\tilde{q}_{i}^{t}=(1-\xi)*\tilde{q}_{i}^{t-1}+\xi*q_{i}^{t},\quad s.t.\quad \tilde{q}_{i}^{0}=q_{i}^{0} \tag{4}\] The temporal consistency filtering is realized as in Eq. 5 where \(\tau_{TC}\) is a threshold determining the maximally allowed difference in the most probable prediction over time. If the posterior deviate from historical value too much, it will be excluded from target domain clustering. \[F_{i}^{TC}=\mathbb{1}((q_{i\hat{y}}^{t}-\tilde{q}_{i\hat{y}}^{t-1})>\tau_{TC} ),\ s.t.\ \hat{y}=\operatorname*{arg\,max}_{k}(q_{ik}^{t}) \tag{5}\] Due to the sequential inference, test samples without enough historical predictions may still pass the TC filtering. So, we further introduce an additional pseudo label filter directly based on the posterior probability as, \[F_{i}^{PP}=\mathbb{1}\left(\tilde{q}_{i\hat{k}}^{t}>\tau_{PP}\right) \tag{6}\] By filtering out potential incorrect pseudo labels, we update the component Gaussian only with the leftover target samples as below. \[\begin{split}\mu_{tk}&=\frac{\sum\limits_{i}F_{i}^{ TC}F_{i}^{PP}\mathbb{1}\left(\hat{y}_{i}=k\right)z_{i}}{\sum\limits_{i}F_{i}^{ TC}F_{i}^{PP}\mathbb{1}\left(\hat{y}_{i}=k\right)},\\ \Sigma_{tk}&=\frac{\sum\limits_{\hat{i}}F_{i}^{TC}F_{ i}^{PP}\mathbb{1}\left(\hat{y}_{i}=k\right)(z_{i}-\mu_{tk})^{\top}(z_{i}-\mu_{tk})}{ \sum\limits_{i}F_{i}^{TC}F_{i}^{PP}\mathbb{1}\left(\hat{y}_{i}=k\right)} \end{split} \tag{7}\] ### _Global Feature Alignment_ As discussed above, test samples that do not pass the filtering will not contribute to the estimation of target clusters. Hence, anchored clustering may not reach its full potential without the filtered test samples. To exploit all available test samples, we propose to align global target data distribution to the source one. We approximate the global feature distribution of the source data as \(\hat{p}_{s}(x)=\mathcal{N}(\mu_{s},\Sigma_{s})\) and the target data as \(\hat{p}_{t}(x)=\mathcal{N}(\mu_{t},\Sigma_{t})\). To align two distributions, we again minimize the KL-Divergence as, \[\mathcal{L}_{ga}=D_{KL}(\hat{p}_{s}(x)||\hat{p}_{t}(x)) \tag{8}\] Similar idea has appeared in [16] which directly matches the moments between source and target domains [34] by minimizing the F-norm for the mean and covariance, i.e. \(||\mu_{t}-\mu_{s}||_{2}^{2}+||\Sigma_{t}-\Sigma_{s}||_{F}^{2}\). However, designed for matching complex distributions represented as drawn samples, central moment discrepancy [34] requires summing infinite central moment discrepancies and the ratios between different order moments are hard to estimate. For matching two parameterized Gaussian distributions KL-Divergence is more convenient with good explanation from a probabilistic point of view. Finally, we add a small constant to the diagonal of \(\Sigma\) for both source and target domains to increase the condition number for better numerical stability. ### _Source-Free TTT by Inferring Source Domain Distributions_ In order to enable test-time training under strict source-free setting, we propose to infer the necessary source domain statistical information, i.e. the class-wise mean and covariance matrix, from network weights only. W.o.l.g., we write the classifier head as a linear classifier \(h(z_{i};w)=w^{\top}z_{i}\) by omitting the bias term, which though can still be preserved in a homogeneous coordinate. Without knowing the true class-wise distribution, we hypothesize that each class \(k\) is subject to a uni-modal Gaussian distribution \(p_{sk}(z)=\mathcal{N}(\mu_{sk},\Sigma_{sk})\) as given in the previous section. Given a model well trained on the source domain we could expect the following class-wise risk being minimized w.r.t. classifier weights. \[\begin{split}\mathcal{L}_{sk}(w,\Theta)=&\mathbb{E} _{z\sim p_{sk}(z)}[-\log\delta(w_{k}^{\top}z)]\\ =&\mathbb{E}_{\hat{z}\sim\mathcal{N}(0,I)}[-\log \delta(w_{k}^{\top}(\mu_{sk}+A_{sk}\hat{z}))]\end{split} \tag{9}\] where \(A_{sk}A_{sk}^{\top}=\Sigma_{sk}\) satisfies a Cholesky decomposition. When source domain feature distribution is unknown while the classifier head \(w\) is available, we could rewrite the above optimization by substituting the optimization variables to source domain class-wise distribution as below, where the lower bound is derived according to Jensen inequality, as \(-\log\delta(\cdot)\) is a convex function. The equality holds when \(A_{sk}=0\). \[\hat{\mathcal{L}}_{sk}(\mu_{sk},A_{sk})= \mathbb{E}_{z\sim\mathcal{N}(0,I)}[-\log\delta(w_{k}^{\top}(\mu_{sk }+A_{sk}\hat{z}))] \tag{10}\] \[\geq -\log\delta(\mathbb{E}_{z\sim\mathcal{N}(0,I)}[w_{k}^{\top}(\mu_{ sk}+A_{sk}\hat{z})])\] (11) \[= -\log\delta(w_{k}^{\top}\mu_{sk}) \tag{12}\] We interpret this problem as discovering the source domain class-wise distribution such that samples drawn from these distributions can be correctly classified. We argue that directly optimizing Eq. 10 without any constraint on \(A_{sk}\) is equivalent to optimizing the lower bound, Eq. 12. Because any non-zero \(A_{sk}\) enables the inequality and without constraining \(A_{sk}\), a trivial solution with \(A_{sk}=0\) exists. Alternatively, one could fix the covariance matrix, \(\Sigma_{sk}\), and only update class-wise mean, \(\mu_{sk}\), and this requires Monte Carlo sampling from a standard multivariate Gaussian distribution should Eq. 10 be the objective to optimize. To get rid of the excessive computation of sampling, we empirically figure out an efficient way to infer source-domain distributions by fixing \(\Sigma_{sk}=\gamma I\) and optimizing the lower bound, Eq. 12, to estimate \(\mu_{sk}\). Moreover, since all backbone features \(z_{i}\) are positive due to ReLu activation, \(\mu\) should be all positive so that it may overlap with the true distribution of source domain features. For this purpose, we parameterize \(\mu=\hat{\mu}^{2}\) where \(\hat{\mu}\) is unconstrained, and add weight decay to \(\hat{\mu}\) to limit the norm of \(\mu\). The global feature distribution is approximated by a unimodal Gaussian distribution. Therefore, to infer the global feature distribution, we use a single Gaussian distribution \(\mathcal{N}(\mu_{s},\Sigma_{s})\) to approximate the mixture of per-category Gaussians. Specifically, the following KL-Divergence is minimized with a closed-form solution [49]. \[\mu_{s}^{*},\Sigma_{s}^{*} =\arg\min_{\mu,\Sigma_{s}}D_{KL}(\mathcal{N}(\mu_{s},\Sigma_{s}) ||\sum_{k}\frac{1}{K}\mathcal{N}(\mu_{sk},\Sigma_{sk})) \tag{13}\] \[\Rightarrow\mu_{s}^{*} =\sum_{k}\frac{1}{K}\mu_{sk}\] \[\Rightarrow\Sigma_{s}^{*} =\sum_{k}\frac{1}{K}(\Sigma_{sk}+(\mu_{sk}-\mu_{s})(\mu_{sk}-\mu_ {s})^{\top})\] ### _Efficient Iterative Updating_ Despite the distribution for source data can be trivially estimated from all available training data in a totally offline manner, estimating the distribution for target domain data is not equally trivial, in particular under the sTTT protocol. In a related research [16], a dynamic queue of test data features are preserved to dynamically estimate the statistics, which will introduce additional memory footprint [16]. To alleviate the memory cost we propose to iteratively update the running statistics for Gaussian distribution. Denoting the running mean and covariance at time stamp \(t\) as \(\mu^{t}\) and \(\Sigma^{t}\), we present the rules to update the mean and covariance in Eq. 14. More detailed derivations and update rules for per cluster statistics are deferred to the Supplementary. \[\mu^{t} =\mu^{t-1}+\delta^{t}, \tag{14}\] \[\Sigma^{t} =\Sigma^{t-1}+a^{t}\sum_{z_{i}\in\mathcal{B}}\{(z_{i}-\mu^{t-1})^ {\top}(z_{i}-\mu^{t-1})-\Sigma^{t-1}\}\] \[-\delta^{t}{}^{\top}\delta^{t}\] \[\delta^{t} =a^{t}\sum_{x_{i}\in\mathcal{B}}(z_{i}-\mu^{t-1}),\quad N^{t}=N^{ t-1}+|\mathcal{B}^{t}|,\quad a^{t}=\frac{1}{N^{t}},\] Since \(N^{t}\) grows larger overtime, new test samples will have smaller contribution to the update of target domain statistics when \(N^{t}\) is large enough. As a result, the gradient calculated from current minibatch will vanish. To alleviate this issue, we impose a clip on the value of \(\alpha^{t}\) as below. As such, the gradient can maintain a minimal scale even if \(N^{t}\) is very large. \[a^{t}=\left\{\begin{array}{ll}\frac{1}{N^{t}}&N^{t}<N_{clip}\\ \frac{1}{N_{clip}}&others\end{array}\right. \tag{15}\] ### _Self-Training for TTT_ Self-training (ST) has been widely adopted in semi-supervised learning where predictions on unlabeled data are admitted as pseudo labels, and model is trained with the pseudo labels [28, 44]. In this work, we explore employing self-training for TTT. Blindly taking all pseudo labels for training has been demonstrated to deteriorate the performance as incorrect pseudo labels act as noisy labels and a high percentage of noisy label is harmful for model training. This phenomenon is also referred to as confirmation bias [50]. As demonstrated in the empirical evaluations in Sect. 4.3, self-training alone is not guaranteed to outperform competing methods. The performance may even degrades after observing enough testing samples as shown in Fig. 4. To reduce the impact of confirmation bias, we first propose to employ anchored clustering as regularization. As anchored clustering allows better alignment between source and target feature distributions, self-training is able to benefit from more accurate pseudo labels and the model is less likely to be harmed by the wrong pseudo labels. This can be achieved by simultaneously optimizing anchored clustering losses and self-training loss. In addition, we further take an approach similar to [28] by filtering out less confident pseudo labels for self-training as in Eq. 16. \[\mathcal{L}_{st}=\frac{1}{|\mathcal{B}_{t}|}\sum_{x_{i}\in\mathcal{B}_{t}} \mathbbm{1}(\max_{k}(q_{k}(\mathcal{W}(x_{i})))\geq\tau_{st})H(\hat{y}_{i},q( \mathcal{A}(x_{i}))) \tag{16}\] where \(q(\cdot)=\sigma(h(f(\cdot)))\) denotes the probabilistic posterior, \(\hat{y}_{i}=\arg\max_{k}(q(\mathcal{W}(x_{i})))\) denotes the predicted pseudo label, \(\mathcal{W}(\cdot)\) denotes a weak augmentation operation consisting of RandomHorizontalFlip and RandomResizedCrop, \(\mathcal{A}(\cdot)\) denotes a strong augmentation operation implemented as RandAugment [53], and \(\tau_{cr}\) denotes the confidence threshold. ### _TTAC++ Training Algorithm_ We summarize the training algorithm for the TTAC++ in Algo. 1. For effective clustering in target domain, we allocate a fixed length memory space, denoted as testing time queue \(\mathcal{C}\in\mathcal{R}^{N_{C}\times H\times W\times 3}\), to store the recent testing samples. In the sTTT protocol, we first make instant prediction on each testing sample, and only update the model when \(N_{B}\) testing samples are accumulated. TTAC++ can be efficiently implemented, e.g. with two devices, one is for continuous inference and another is for model updating. ``` input : A new testing sample batch \(\mathcal{B}^{t}=\{x_{i}\}_{i=tN_{B}...(t+1)}N_{B}\): # Update the testing sample queue \(\mathcal{C}\). \(\mathcal{C}^{t}=\mathcal{C}^{t-1}\setminus\mathcal{B}^{t-N_{C}/N_{B}}\), \(\mathcal{C}^{t}=\mathcal{C}^{t}\bigcup\mathcal{B}^{t}\) for\(1\)to\(N_{itr}\)do forminibatch\(\{x_{i}^{t}\}_{i=1}^{N}\)in\(\mathcal{C}^{t}\)do # Generate weak and strong augmented samples \(\mathcal{W}(x_{i}^{t}),\quad\mathcal{A}(x_{i}^{t})\) # Obtain the predicted posterior and pseudo labels \(q(\mathcal{W}(x_{i}^{t}))=\sigma(h(f(\mathcal{W}(x_{i}^{t}))))\) \(\hat{y}_{i}=\arg\max_{k}(q(\mathcal{W}(x_{i}^{t})))\) # Update the global and per-cluster running mean and covariance by Eq. 14 with \(\mathcal{W}(x_{i}^{t})\) \(\mu^{t},\quad\Sigma^{t},\quad\{\mu_{k}^{t}\},\quad\{\Sigma_{k}^{t}\}\) # Calculate the anchored clustering losses according to Eq. 3 and Eq. 8 \(\mathcal{L}_{ac}+\lambda_{1}\mathcal{L}_{ga}\) # Calculate self-training loss according to Eq. 16 \(\mathcal{L}_{st}\) # One step gradient descent on the total loss \(\mathcal{L}_{ac}+\lambda_{1}\mathcal{L}_{ga}+\lambda_{2}\mathcal{L}_{st}\) ``` **Algorithm 1**TTAC++ Algorithm ``` input : A new testing sample batch \(\mathcal{B}^{t}=\{x_{i}\}_{i=tN_{B}...(t+1)}N_{B}\): # Update the testing sample queue \(\mathcal{C}\). \(\mathcal{C}^{t}=\mathcal{C}^{t-1}\setminus\mathcal{B}^{t-N_{C}/N_{B}}\), \(\mathcal{C}^{t}=\mathcal{C}^{t}\bigcup\mathcal{B}^{t}\) for\(1\)to\(N_{itr}\)do forminibatch\(\{x_{i}^{t}\}_{i=1}^{N}\)in\(\mathcal{C}^{t}\)do # Generate weak and strong augmented samples \(\mathcal{W}(x_{i}^{t}),\quad\mathcal{A}(x_{i}^{t})\) # Obtain the predicted posterior and pseudo labels \(q(\mathcal{W}(x_{i}^{t}))=\sigma(h(f(\mathcal{W}(x_{i}^{t}))))\) \(\hat{y}_{i}=\arg\max_{k}(q(\mathcal{W}(x_{i}^{t})))\) # Update the global and per-cluster running mean and covariance by Eq. 14 with \(\mathcal{W}(x_{i}^{t})\) \(\mu^{t},\quad\Sigma^{t},\quad\{\mu_{k}^{t}\},\quad\{\Sigma_{k}^{t}\}\) # Calculate the anchored clustering losses according to Eq. 3 and Eq. 8 \(\mathcal{L}_{ac}+\lambda_{1}\mathcal{L}_{ga}\) # Calculate self-training loss according to Eq. 16 \(\mathcal{L}_{st}\) # One step gradient descent on the total loss \(\mathcal{L}_{ac}+\lambda_{1}\mathcal{L}_{ga}+\lambda_{2}\mathcal{L}_{st}\) ## 4 Experiment In the experiment section, we first compare against existing methods on different test-time training protocols based on the two key factors. We then ablate the effectiveness of each component in TTAC++. Further analysis on the cumulative performance, qualitative insights, etc. are provided at the end of this section. ### _Datasets_ We evaluate on five test-time training datasets and report the classification error rate (\(\%\)) throughout the experiment section. To evaluate the test-time training efficacy on corrupted target images, we use **CIFAR10-C/CIFAR100-C**[54], each consisting of 10/100 classes with 50,000 training samples of clean data and 10,000 corrupted test samples. We further evaluate test-time training on hard target domain samples with **CIFAR10.1**[55], which contains around 2,000 difficult testing images sampled over years of research on the original CIFAR-10 dataset. To demonstrate the ability to do test-time training for synthetic data to real data transfer we further use **VisDA-C**[56], which is a challenging large-scale synthetic-to-real object classification dataset, consisting of 12 classes, 152,397 synthetic training images and 55,388 real testing images. To evaluate large-scale test-time training, we use **ImageNet-C**[54] which consists of 1,000 classes and 15 types of corruptions on the 50,000 testing samples. Some qualitative examples of common corruptions on ImageNet-C are illustrated in Fig. 2. Finally, we also evaluate the effectiveness of test-time training against adversarial attacks by implementing TTT on generated **adversarial samples** on CIFAR-10 dataset, which is referred to as CIFAR-10-adv throughout this work. ### _Experiment Settings_ #### 4.2.1 Hyperparameters We use the ResNet-50 [57] as backbone network for fair comparison with existing methods. In addition, ViT [58] was adopted as backbone for evaluation on the compatibility with transformer architectures. We train the backbone network \(f(\cdot)\) by SGD optimizer with momentum on all datasets. On CIFAR10-C/CIFAR100-C and CIFAR10.1, we set the batchsize to 256 and the learning rate to 0.01, 0.0001 and 0.01 respectively. On VisDA-C we set the batchsize to 128 and the learning rate to 0.0001. Hyperparameters are shared across multiple TTT protocols except for \(N_{C}\) and \(N_{itr}\) which are only applicable under one-pass adaptation protocols. \(\alpha_{k}\) and \(\beta_{k}\) respectively represent the prevalence of each category, here we set them to 1 over the number of categories. \(N_{C}\) indicates the length of the testing sample queue \(C\) under the sTTT protocol, and \(N_{itr}\) controls the update epochs on this queue. \(\tau_{TC}\) and \(\tau_{PP}\) are the thresholds used for pseudo label filtering. \(N_{clip}\) and \(N_{clip\_k}\) are the upper bounds of sample counts in the iterative updating of global statistics and target cluster statistics respectively. Finally \(\lambda_{1}\) and \(\lambda_{2}\) are the coefficients of \(\mathcal{L}_{ga}\) and \(\mathcal{L}_{cr}\) respectively, which are 1 and 10 respectively. The details of each individual hyperparamter are found in Table. 1. When source domain distribution information is not available, we estimate source domain distributions by minimizing Eq. 10 with RMSprop optimizer, learning rate 0.001 and weight decay 0.001. We choose \(\gamma=max(svdvals(\Sigma_{\mu sk}))/30\). for the fixed covariance matrix. Wall-Clock times for competing methods are recorded under PyTorch 1.10.2 framework, CUDA 11.3 and a single NVIDIA RTX 3090 GPU. #### 4.2.2 Test-Time Training Protocols We categorize test-time training protocols based on two key factors. First, whether the training objective must be changed during training on the source domain, we use Y and N to indicate if training objective is allowed to be changed or not respectively. Second, whether testing data is sequentially streamed and predicted, we use O to indicate a sequential **O**ne-pass inference and M to indicate non-sequential inference, a.k.a. Multi-pass inference. With the above criteria, we summarize 4 test-time training protocols, namely N-O, Y-O, N-M and Y-M, and the strength of the assumption increases from the first to the last protocols. Our sTTT setting makes the weakest assumption, i.e. N-O. Existing methods are categorized by the four TTT protocols, we notice that some methods can operate under multiple protocols. **Source-Free Test-Time Training**. The proposed TTAC++ relies on aligning the source and target domain distributions. It is often realistic to assume having access to the source domain data distributions, which are light-weight and there is no risk of privacy leakage. Nevertheless, for a fair comparison with existing methods that are strictly **source-free**, we adopt inferring the source domain statistics from source domain model weights only as introduced in Section 3.4. Therefore, we differentiate the source-free (SF) approach from the source-light (SL) approach in the TTT \begin{table} \begin{tabular}{l|c c c c c c c c c c c} \hline \hline Dataset & \(\alpha_{k}\) & \(\beta_{k}\) & \(N_{C}\) & \(N_{itr}\) & \(\xi\) & \(\tau_{TC}\) & \(\tau_{PP}\) & \(\tau_{cr}\) & \(N_{clip}\) & \(N_{clip}\) & \(\lambda_{1}\) & \(\lambda_{2}\) \\ \hline CIFAR10-C & 0.1 & 0.1 & 4096 & 4 & 0.9 & 0.95 & -0.001 & 0.9 & 1280 & 128 & 1.0 & 10.0 \\ CIFAR100-C & 0.01 & 0.01 & 4096 & 4 & 0.9 & 0.95 & -0.001 & 0.9 & 1280 & 64 & 1.0 & 10.0 \\ CIFAR100-C & 0.1 & 0.1 & 4096 & 4 & 0.9 & 0.95 & -0.001 & 0.9 & 1280 & 128 & 1.0 & 10.0 \\ VisDA-C & \(\frac{1}{30}\) & \(\frac{1}{30}\) & \(\frac{1}{30}\) & 4096 & 4 & 0.9 & 0.95 & -0.01 & 0.9 & 1556 & 128 & 1.0 & 10.0 \\ ImageNet-C & 0.001 & 0.01 & 0.01 & 4096 & 2 & 0.9 & 0.95 & -0.01 & 0.9 & 1280 & 64 & 1.0 & 10.0 \\ \hline \hline \end{tabular} \end{table} TABLE I: Hyper-parameters used on different datasets. Fig. 2: Illustration of corruptions on target domain images. Examples are selected from the ImageNet-C dataset. evaluation protocol. We summarize all protocols evaluated in this work in Table. II. #### 4.2.3 Competing Methods We compare the following test-time training methods. Direct testing **(TEST)** without adaptation simply do inference on target domain with source domain model. Test-time training (**TTT-R**) [19] jointly trains the rotation-based self-supervised task and the classification task in the source domain, and then only train the rotation-based self-supervised task in the streaming test samples and make the predictions instantly. The default method is classified into the Y-M protocol. Test-time normalization (**BN**) [59] moving average updates the batch normalization statistics by streamed data. The default method follows N-M protocol and can be adapted to N-O protocol. Test-time entropy minimization (**TENT**) [20] updates the parameters of all batch normalization by minimizing the entropy of the model predictions in the streaming data. By default, TENT follows the N-O protocol and can be adapted to N-M protocol. Test-time classifier adjustment (**T3A**) [21] computes target prototype representation for each category using streamed data and make predictions with updated prototypes. T3A follows the N-O protocol by default. Source Hypothesis Transfer (**SHOT**) [12] freezes the linear classification head and trains the target-specific feature extraction module by exploiting balanced category assumption and self-supervised pseudo-labeling in the target domain. SHOT by default follows the N-M protocol and we adapt it to N-O protocol. **TTT++**[16] aligns source domain feature distribution, whose statistics are calculated offline, and target domain feature distribution by minimizing the F-norm between the mean covariance. TTT++ follows the Y-M protocol and we adapt it to N-O (removing contrastive learning branch) and Y-O protocols. **AdaContrast**[22] approaches TTT from a self-training perspective. Pseudo labels on target domain testing samples are generated from a weak augmentation branch and used for supervising a strong augmentation branch. **Conjugate PL**[39] proposed to learn the best TTT objective through meta-learning. This approach discovered a loss similar to the entropy loss adopted by Tent when source domain is trained with cross-entropy loss. **Self-Training**[28], a.k.a. FixMatch, was originally developed for semi-supervised learning by estimating pseudo labels on the unlabeled data and supervise model training with pseudo labels. We adapt FixMatch to TTT by adopting the self-training component alone and refer to it as Self-Training (ST). **TTAC**[61] aligns the source and target domain feature distributions for TTT. It requires a single pass on the target domain and does not have to modify the source training objective. TTAC was originally implemented for all TTT protocols. TTAC was further augmented with additional diversity loss and entropy minimization loss introduced in SHOT [12], denoted as TTAC+SHOT [61]. Finally, we evaluate our proposed method, **TTAC++**, under all TTT protocols. For Y-O and Y-M protocols we incorporate an additional contrastive learning branch introduced in [16]. ### _Test-Time Training Evaluations_ We evaluate test-time training on four types of target domain data, including images with corruptions, manually selected hard images, synthetic to real adaptation and adversarial samples. #### 4.3.1 TTT on Corrupted Target Domain We present the test-time training results on CIFAR10/100-C datasets in Tab. III, and the results on ImageNet-C dataset in Tab. IV. For ImageNet-C, we only evaluate under the realistic sTTT (N-O) protocol. We make the following observations from the results. **sTTT (N-O) Protocol**. We first analyze the results under the proposed sTTT (N-O) protocol. Our method outperforms all competing ones by a large margin both with source domain statistics (N-O-SL) and without source domain statistics (N-O-SF). Under the most strict N-O-SF protocol, TTAC++ leads the benchmark with a large margin. It outperforms Conjugate PL by \(1.6\%\) on CIFAR10-C and SHOT by \(1.4\%\) on CIFAR100-C. When source domain statistics are available, TTAC++ gains additional advantage. Compared with TTT++, we achieved \(4\%\) and \(5\%\) improvements on CIFAR10-C and CIFAR100-C datasets respectively. TTAC++ also improves upon TTAC+SHOT where the latter adopts a class balance assumption. On ImageNet-C dataset, TTAC++ demonstrates its superiority under both N-O-SF and N-O-SL protocols. Without source domain statistics, TTAC++ outperforms Conjugate PL on 11 out of 15 types of corruptions. When source domain statistics are available, TTAC++ consistently outperforms TTAC with similar data augmentation. **Alternative Protocols**. We further compare different methods under N-M, Y-O and Y-M protocols. Under the Y-O protocol, TTT++ [16] modifies the source domain training objective by incorporating a contrastive learning branch [62]. To compare with TTT++, we also include the contrastive branch and observe a clear improvement on both CIFAR10-C and CIFAR100-C datasets. Other TTT methods are adapted to the N-M protocol by allowing training on the whole target domain data multiple epochs. Specifically, we compared with BN, TENT and SHOT. With TTAC alone we observe substantial improvement on all three datasets and TTAC can be further combined with SHOT demonstrating additional improvement. Finally, under the Y-M protocol, we demonstrate very strong performance compared to TTT-R and TTT++. It is also worth noting that TTAC under the N-O protocol can already yield results close to TTT++ under the Y-M protocol, suggesting the strong test-time training ability of TTAC even under the most challenging TTT protocol. #### 4.3.2 TTT on Selected Hard Samples as Target Domain CIFAR10.1 contains roughly 2,000 new test images that were resampled after the research on original CIFAR-10 dataset, which consists of some hard samples and reflects the normal domain shift in our life. Evaluation of TTT methods on CIFAR10.1 is widely adopted to verify the benefit of adapting to hard target domain samples. We present the results on CIFAR10.1 in Table. III. Again, we observe a strong performance of TTAC++ under all TTT protocols. #### 4.3.3 TTT on Synthetic Source to Real Target Domains VisDA-C is a large-scale benchmark of synthetic-to-real object classification dataset. We demonstrate on this dataset the ability of TTT to adapt model trained on synthetic source domain to realistic target domain data. On this dataset, we conduct experiments under \begin{table} \begin{tabular}{c|c c c} \hline \hline Protocol & Source Domain Statistics & Contrastive Branch & Multiple Passes \\ \hline N-O-SF & - & - & - \\ N-O-SL & ✓ & - & - \\ Y-O-SL & ✓ & ✓ & - \\ N-M-SF & - & - & ✓ \\ N-M-SL & ✓ & - & ✓ \\ Y-M-SL & ✓ & ✓ & ✓ \\ \hline \hline \end{tabular} \end{table} TABLE II: The used components under different TTT protocols. the N-O, Y-O, N-M and Y-M protocols with results presented in Table. V. We make the following observations from the results. First, our proposed method, TTAC++, outperforms all competing methods under all evaluation protocols. In particular, the improvement for TTAC++ is very significant under the N-O (sTTT) protocol regardless of access to source domain statistics. For example, TTAC++ outperforms the previous best method, the SHOT, by \(7\%\) under N-O-SF and, the TTAC, by \(13\%\) under N-O-SL. Moreover, TTAC++ also demonstrates very competitive performance when multiple passes on the target domain is allowed (N-M-SF), a.k.a. source-free domain adaptation. #### 4.3.4 TTT on Adversarial Target Domain The existing test-time training/adaptation works often evaluate adaptation to the target domain with hand-crafted corruptions. In this section, we investigate the robustness of test-time training subject to stronger out-of-distribution data, i.e. adversarial samples. This evaluation reveals that simple test-time training can substantially improve model's robustness to adversarial testing samples. In specific, we generate adversarial samples on the testing set of CIFAR-10 dataset by \(L^{\infty}\) PGD [63] attack with \(\epsilon=8/255\), 40 iterations and attack step size \(0.01\). The adversarial testing samples are then frozen for TTT evaluation. We extensively evaluated existing TTT methods and present the results in Tab. VI. We make the following observations from the results. First, without any test-time adaptation, direct testing with source domain model yields very poor performance (\(93.07\%\)). This is consistent with previous investigations into adversarial attacks. Furthermore, existing test-time training methods which do not consider self-training, e.g. BN, TENT and SHOT, performs relatively poor compared with methods equipped with self-training, e.g. Self-Training, TTT++ and TTAC++. We attribute the performance gap to the fact that the data augmentation applied during self-training is able to smooth out the adversarial noise. Self-training is able to exploit the pseudo labels predicted on smoothed testing samples and improve the accuracy. Finally, we observe that distribution matching is complementary to purely self-training, suggested by the improvement of TTAC++ from Self-Training under N-O-SF and N-M-SF protocols. Overall, we demonstrate that TTAC++ is a strong test-time training paradigm, it owns the ability to adapt to adversarial corruption which is stronger than hand-crafted natural corruptions. #### 4.3.5 TTT Cumulative Performance A good test-time training framework should benefit from seeing more testing samples and the performance on the target domain is \begin{table} \begin{tabular}{c|c c c c c c c c c c c c|c} \hline \hline Method & Brit & Contr & Defoc & Elast & Fog & Frost & Gauss & Glass & Impal & Jpeg & Motn & Pixel & Shot & Snow & Zoom & Avg \\ \hline TEST & 38.82 & 89.55 & 82.23 & 87.13 & 64.84 & 76.83 & 97.34 & 90.50 & 97.76 & 68.31 & 83.60 & 80.37 & 96.74 & 82.22 & 74.31 & 80.70 \\ BN (N-O-SF) & 32.33 & 50.93 & 81.28 & 52.98 & 42.21 & 64.13 & 83.25 & 83.64 & 82.52 & 59.18 & 66.23 & 49.45 & 82.59 & 62.34 & 52.51 & 63.04 \\ TENT (N-O-SF) & 31.39 & 40.27 & 57.68 & 42.03 & 35.38 & 64.32 & 84.92 & 84.96 & 81.43 & 46.84 & 49.48 & 39.77 & 84.21 & 49.23 & 43.49 & 56.89 \\ SHOT (N-O-SF) & 30.69 & 37.69 & 61.97 & 43.30 & 34.74 & 54.19 & 76.33 & 71.94 & 74.24 & 46.50 & 47.89 & 38.88 & 70.06 & 46.09 & 40.74 & 51.59 \\ Conjugate PL (N-O-SF) & **30.62** & **34.28** & 61.12 & 40.40 & 34.43 & 51.80 & 65.61 & 67.75 & **33.71** & **46.41** & 45.70 & **38.41** & 63.07 & 45.83 & 41.27 & 48.57 \\ Self-Training (N-O-SF) & 31.57 & 36.72 & 97.68 & 42.84 & 35.27 & 54.18 & 88.76 & 91.93 & 81.22 & 51.97 & 50.29 & 39.73 & 88.67 & 48.52 & 47.07 & 57.95 \\ TTAC++ (N-O-SF) & 31.61 & 36.55 & **60.39** & **38.79** & **34.20** & **49.02** & **61.62** & **62.67** & **59.37** & 45.25 & **44.73** & 38.43 & **83.32** & **43.60** & **39.99** & **46.97** \\ \hline TTAC (N-O-SL) & 30.36 & 38.84 & 69.06 & 39.67 & 36.01 & 50.20 & 66.18 & 70.17 & 64.36 & 45.59 & 51.77 & 39.72 & 62.43 & 44.56 & 42.80 & 50.11 \\ TTAC++ (N-O-SL) & **29.78** & **34.37** & **58.08** & **37.68** & **32.97** & **47.96** & **60.51** & **62.24** & **58.65** & **43.61** & **43.58** & **36.89** & **57.33** & **42.40** & **38.82** & **45.66** \\ \hline \hline \end{tabular} \end{table} TABLE IV: Test-time training on ImageNet-C under the sTTT (N-O) protocol. \begin{table} \begin{tabular}{l|c c|c c} \hline \hline Method & TTT Protocol & Assum. Strength & C10-C & C100-C & C10.1 \\ \hline TEST & - & - & 29.15 & 60.34 & 12.10 \\ \hline BN [59] & N-O-SF & Weak & 15.49 & 43.38 & 14.00 \\ TENT [20] & N-O-SF & Weak & 14.27 & 40.72 & 14.40 \\ T3A [21] & N-O-SF & Weak & 15.44 & 42.72 & 13.50 \\ SHOT [12] & N-O-SF & Weak & 13.95 & 39.10 & 13.70 \\ Conjugate PL [60] & N-O-SF & Weak & 13.21 & 39.39 & 14.20 \\ Self-Training [28] & N-O-SF & Weak & 14.66 & 39.25 & **12.85** \\ TTAC++ (Ours) & N-O-SF & Weak & **11.62** & **37.76** & **12.85** \\ \hline TT++ [16] & N-O-SL & Weak & 13.69 & 40.32 & 13.65 \\ TTAC [61] & N-O-SL & Weak & 10.94 & 36.64 & 12.80 \\ TTAC++SHOT [61] & N-O-SL & Weak & 10.99 & 36.39 & 12.40 \\ TTAC++ (Ours) & N-O-SL & Weak & **9.78** & **35.48** & **12.20** \\ \hline TT++ [16] & Y-O-SL & Medium & 13.00 & 35.23 & 12.60 \\ TTAC [61] & Y-O-SL & Medium & 10.69 & 34.82 & 12.00 \\ TTAC++ (Ours) & Y-O-SL & Medium & **10.05** & **34.30** & **11.55** \\ \hline BN [59] & N-M-SF & Medium & 15.70 & 43.30 & 14.10 \\ TTEM [20] & N-M-SF & Medium & 12.60 & 36.30 & 13.65 \\ SHOT [12] & N-M-SF & Medium & 14.70 & 38.10 & 14.25 \\ TTAC++ (Ours) & N-M-SF & Medium & **9.44** & **34.43** & **10.60** \\ \hline TT++ [16] & N-M-SL & Medium & 11.87 & 37.09 & 11.95 \\ TTAC [61] & N-M-SL & Medium & 9.42 & 33.55 & 11.00 \\ TTAC+SIROT [61] & N-M-SL & Medium & 9.54 & 32.89 & 11.30 \\ TTAC++ (Ours) & N-M-SL & Medium & **7.23** & **29.23** & **9.00** \\ \hline TTT-R [19] & Y-M-SL & Strong & 14.30 & 40.40 & 11.00 \\ TTTT+ [16] & Y-M-SL & Strong & 9.80 & 34.10 & 9.50 \\ TTAC [61] & Y-M-SL & Strong & 8.52 & 30.57 & 9.20 \\ TTAC++ (Ours) & Y-M-SL & Strong & **7.57** & **29.08** & **8.90** \\ \hline \hline \end{tabular} \end{table} TABLE III: Comparison under different TTT protocols. Y/N indicates modifying source domain training objective or not. O/M indicate one pass or multiple passes test-time training. SF/SL indicate source-free and source-light respectively. C10-C, C100-C and C10.1 refer to CIFAR10-C, CIFAR100-C and CIFAR10.1 datasets respectively. All numbers indicate error rate in percentage (\(\%\)). expected to be gradually increasing. In this section, we compare different test-time training methods by illustrating the cumulative error rate on CIFAR10/100-C and ImageNet-C (_Gaussian Blur_ corruption) under the sTTT protocols (N-O-SF/SL) in Fig. 3 and Fig. 4, respectively. As we observe from the figure, some existing TTT methods do not benefit from seeing more testing samples. For example, BN and T3A's performance stabilize after 2000 testing samples on CIFAR10/100-C. The performance of TENT and Self-Training (ST) even degrade after observing 10000 to 20000 testing samples on ImageNet-C. This empirical evaluation also suggest applying self-training alone for TTT is prone to confirmation bias and may harm the performance. In contrast, TTAC++ (pink solid line) exhibits the fastest drop of error rate among all source-free methods. Compared with all methods requiring access to source domain statistics or changing source domain training objectives, TTAC++ is is consistently lower in error rate along the TTT procedure. More importantly, the trend (sharp slope) shows higher potential for TTAC++ should more testing samples are available in the target domain. #### 4.3.6 TSNE Visualization of TTAC++ features We provide qualitative results for test-time training by visualizing the adapted features through T-SNE [64]. In Fig. 5 (a) and Fig. 5 (b), we compare the features learned by TTAC [61] and TTAC++. We observe a better separation between classes by TTAC++, implying an improved classification accuracy. ### _Ablation Study_ In this section, we validate the effectiveness of individual components, including anchored clustering, pseudo label filtering, global feature alignment, self-training and finally the compatibility with contrastive branch [16], on CIFAR10-C dataset. For anchored clustering alone, we use all testing samples to update cluster \begin{table} \begin{tabular}{c|c|c c c c c c c c c c|c} \hline \hline Protocol & Method & Airpl. & Automob. & Bisl & Cat & Dex & Dog & Frog & Horse & Ship & Truck & Avg \\ \hline - & TEST & 97.00 & 84.90 & 95.20 & 93.20 & 95.70 & 96.30 & 95.00 & 95.00 & 88.80 & 88.90 & 93.07 \\ \hline \multirow{8}{*}{N-O-SF} & BN & 95.80 & 62.30 & 89.50 & 90.60 & 91.70 & 79.60 & 78.10 & 75.60 & 83.10 & 67.20 & 81.15 \\ & TENT & 96.70 & 65.60 & 92.10 & 92.80 & 93.20 & 86.40 & 82.40 & 78.70 & 90.50 & 73.00 & 85.14 \\ & SHOT & 97.50 & 50.00 & 80.90 & 93.50 & 93.90 & 74.00 & 78.00 & 67.60 & 85.00 & 65.50 & 79.67 \\ & Self-Training & 43.10 & **12.20** & 48.20 & **53.00** & **54.30** & 45.90 & 48.40 & 43.20 & 29.20 & 35.00 & 40.26 \\ & TTAC++ & **36.70** & 14.70 & **45.80** & 60.70 & **37.80** & **40.70** & **23.40** & **33.30** & **25.60** & **27.60** & **34.63** \\ \hline \multirow{8}{*}{N-O-SL} & TTT++ & 89.60 & 46.60 & 79.90 & 86.00 & 83.80 & 66.40 & 60.80 & 47.90 & 72.70 & 56.10 & 68.98 \\ & TTAC & 60.20 & 25.60 & 58.60 & 62.10 & 41.70 & 11.40 & 41.20 & 37.60 & 36.90 & 37.20 & 46.77 \\ & TTAC++ & **23.90** & **12.60** & **32.00** & **53.50** & **34.50** & **23.90** & **22.00** & **15.70** & **12.80** & **26.26** \\ \hline \multirow{8}{*}{Y-O-SL} & TTT++ & 37.80 & 15.00 & 47.70 & 58.50 & 41.30 & 88.60 & 26.10 & 20.20 & 19.00 & 21.90 & 32.66 \\ & TTAC & 32.40 & 14.50 & 42.60 & 58.80 & 42.10 & 37.50 & 23.80 & 17.60 & 20.30 & 30.08 \\ & TTAC++ & **23.30** & **11.20** & **32.30** & **32.40** & **31.70** & **33.90** & **17.80** & **11.40** & **17.80** & **26.49** \\ \hline \multirow{8}{*}{N-M-SF} & BN & 96.20 & 99.10 & 87.90 & 99.00 & 90.70 & 79.80 & 74.80 & 72.30 & 82.90 & 64.70 & 79.93 \\ & TENT & 96.00 & 66.60 & 50.90 & 92.92 & 93.90 & 83.20 & 81.20 & 76.90 & 87.10 & 71.50 & 84.01 \\ & SHOT & 96.70 & 44.80 & 98.90 & 93.50 & 93.50 & 93.50 & 75.30 & 86.70 & 43.00 & 80.60 & 67.64 \\ & Self-Training & 30.70 & **4.40** & 35.90 & 54.30 & 39.10 & **18.50** & 45.80 & **30.30** & **19.90** & 26.90 & 30.58 \\ & TTAC++ & **25.10** & 8.10 & **36.60** & **36.90** & **25.00** & 41.20 & **16.90** & 50.30 & 26.50 & **20.30** & **28.94** \\ \hline \multirow{8}{*}{N-M-SL} & TTT++ & 74.90 & 28.50 & 68.60 & 78.30 & 74.10 & 51.70 & 47.70 & 37.10 & 52.70 & 40.90 & 54.52 \\ & TTAC++ & 36.60 & 15.10 & 46.30 & 44.70 & 47.60 & 39.10 & 28.90 & 21.30 & 21.20 & 31.33 \\ & TTAC++ & **11.10** & **5.60** & **15.40** & **28.90** & **14.50** & **17.40** & **9.30** & **7.00** & **6.20** & **10.10** & **12.55** \\ \hline \multirow{8}{*}{Y-M-SL} & TTT++ & 19.30 & 6.80 & 31.00 & 41.30 & 25.60 & 27.50 & 14.20 & 10.60 & 7.80 & 11.60 & 19.54 \\ & TTAC++ & 18.50 & 6.90 & 31.40 & 43.20 & 29.00 & 32.20 & 16.50 & 11.70 & 6.90 & 12.20 & 20.83 \\ \cline{1-1} & TTAC++ & **11.20** & **3.90** & **15.70** & **26.50** & **14.50** & **17.10** & **7.60** & **7.00** & **5.60** & **9.50** & **11.90** \\ \hline \hline \end{tabular} \end{table} TABLE VI: Test-time training on the VisDA-C dataset. \begin{table} \begin{tabular}{c|c|c c c c c c c c c c|c} \hline \hline Method & Protocol & Plane & Bcycl & Bus & Car & Horse & Knife & Mcycl & Person & Plant & Shkpd & Train & Truck & Avg \\ \hline - & TEST & 56.52 & 88.71 & 62.77 & 30.56 & 81.88 & 99.03 & 17.53 & 95.85 & 51.66 & 77.86 & 20.44 & 99.51 & 65.19 \\ \hline \multirow{8}{*}{N-O-SF} & TENT & 19.75 & 81.99 & 17.78 & 40.03 & 21.64 & 19.04 & 11.66 & 38.18 & 23.15 & 77.33 & 35.88 & 98.31 & 40.40 \\ & SHOT & 10.81 & **18.62** & 27.08 & 59.65 & 11.13 & 56.43 & 27.29 & 26.22 & 13.76 & 47.35 & 22.26 & **61.18** & 31.82 \\ & Self-Training & **4.69** & 21.12 & **13.87** & **16.00** & **4.03** & 89.56 & **4.19** & 86.35 & **3.17** & 88.82 & **15.18** & 98.05 & 37.24 \\ & TACC++ & 11.32 & 22.42 & 18.94 & 33.47 & 9.57 & 18.89 & 10 statistics. For pseudo label filtering alone, we implement as predicting pseudo labels followed by filtering, then pseudo labels are used for self-training. We make the following observations from Tab. 7. Under both N-O and N-M protocols, introducing anchored clustering or pseudo label filtering alone improves over the baseline, e.g. under N-O \(29.15\%\to 14.32\%\) for anchored clustering and \(29.15\%\to 15.00\%\) for pseudo label filtering. When anchored clustering is combined with pseudo label filtering, we observe a significant boost in performance. This is due to more accurate estimation of category-wise cluster in the target domain. We further evaluate aligning global features alone with KL-Divergence. This achieves relatively good performance and obviously outperforms the L2 distance alignment adopted in [16]. Next, when self-training is turned on in conjunction with other components, we observe a consistent improvement under all TTT protocols. Finally, we combine all components with additional contrastive branch and the full model yields the best performance under both Y-O-SL and Y-M-SL protocols. ### _Additional Analysis_ In this section, we provide additional investigations into additional the designs that affect computation efficiency, compatibility with additional backbones, randomness, and alternative designs, etc. #### 4.5.1 Test Sample Queue and Update Epochs. Under the sTTT protocol, we allow all competing methods to maintain the same test sample queue and multiple update epochs on the queue. To analyze the significance of the sample queue and update epochs, we evaluate BN, TENT, SHOT, TTAC and TTAC++ on CIFAR10-C and ImageNet-C level 5 snow corruption evaluation set under different number of update epochs on test sample queue and under a without queue protocol, i.e. only update model w.r.t. the current test sample batch. As the results presented in Tab. 9, we make the following observations. i) Maintaining a sample queue can substantially improve the performance of methods that estimate target distribution, e.g. TTAC++ (\(11.18\to 10.31\)), TTAC (\(11.91\to 10.88\) on CIFAR10-C) and SHOT (\(15.18\to 13.96\) on CIFAR10-C). This is due to more test samples giving a better estimation of true distribution. ii) Consistent improvement can be observed with increasing update epochs for SHOT, TTAC and TTAC++. We ascribe this to iterative pseudo labeling benefiting from more update epochs. These Fig. 4: Test-time cumulative error on ImageNet-C dataset with Gaussian Blur corruption. Fig. 5: To reduce the computation, we select 10,000 samples on VisDA-C dataset to draw the T-SNE visualizations. (a) T-SNE visualization of TTAC feature embedding. (b) T-SNE visualization of TTAC++ feature embedding. Fig. 3: Test-time cumulative error rate on CIFAR10/100-C datasets. observations also provide insights for deploying TTT for real-world practice. By considering memory constraint and demand for real-time model update, one can adjust the queue length and number of update epochs to strike a balance between efficiency and performance. #### 4.5.2 Computation Cost Measured in Wall-Clock Time Test sample queue and multiple update epochs introduce additional computation overhead. To investigate the impact on efficiency, we measure the overall wall time as the time elapsed from the beginning to the end of test-time training, including all I/O overheads. The per-sample wall time is then calculated as the overall wall time divided by the number of test samples. We report the per-sample wall time (in seconds) for BN, TENT, SHOT, TTAC and TTAC++ in Tab. 10 under different update epoch settings and without queue setting. The Inference row indicates the per-sample wall-clock time in a single forward pass including data I/O overhead. We observe that, under the same experiment setting, BN and TENT are more computational efficient, but TTAC++ is only 2 to 3 times more expensive than BN and TENT if no test sample queue is preserved (0.0090 v.s. 0.0030/0.0041) while the performance of TTAC++ w/o queue is still better than TENT (11.18 v.s. 13.48). In summary, TTAC++ is able to strike a balance between computation efficiency and performance depending on how much computation resource is available. This suggests allocating a separate device for model weights update is only necessary when securing best performance is the priority. #### 4.5.3 Evaluation of compatibility with Transformer Backbone In this section, we provide additional evaluation of TTAC++ with a transformer backbone, ViT [58]. In specific, we pre-train ViT on CIFAR10 clean dataset and then follow the sTTT protocol to do test-time training on CIFAR10-C testing set. The results are presented in Tab. VIII. We report the average (Avg) and standard deviation (Std) of accuracy over all 15 categories of corruptions. Again, TTAC++ consistently outperform all competing methods with transformer backbone. #### 4.5.4 Impact of Data Streaming Order The proposed sTTT protocols assumes test samples arrive in a stream and inference is made instantly on each test sample. The result for each test sample will not be affected by any following ones. In this section, we investigate how the data streaming order will affect the results. Specifically, we randomly shuffle all testing samples in CIFAR10-C for 10 times with different seeds and calculate the mean and standard deviation of test accuracy under sTTT protocol. The results in Tab. 11 suggest TTAC++ maintains consistent performance regardless of data streaming order. #### 4.5.5 Sensitivity to Hyperparameters We evaluate the sensitivity to two thresholds during pseudo label filtering, namely the temporal smoothness threshold \(\tau_{TC}\) and posterior threshold \(\tau_{PP}\). In particular, \(\tau_{TC}\) controls how much the maximal probability deviate from the historical exponential moving average (ema). If the current value is lower than the ema below a threshold, we believe the prediction is not confident and the sample should be excluded from estimating target domain cluster. \(\tau_{PP}\) controls the the minimal maximal probability and below this threshold is considered as not confident enough. We evaluate \(\tau_{TC}\)in the interval between 0 and -1.0 and \(\tau_{PP}\) in the interval from 0.5 to 0.95 with results on CIFAR10-C level 5 glass blur corruption presented in Tab. XIII. We draw the following conclusions on the evaluations. First, there is a wide range of hyperparameters that give stable performance, e.g. \(\tau_{TC}\in[0.5,0.0.9]\) and \(\tau_{PP}\in[-0.0001,-0.01]\). Second, when temporal consistency filtering is turn off, i.e. \(\tau_{TC}=-1.0\), because the probability is normalized to between 0 and 1, the performance drops substantially, suggesting the necessity to apply temporal consistency filtering. #### 4.5.6 Alternative Strategies for Updating Target Domain Clusters In Sect. 3.2, we presented target domain clustering through pseudo labeling. A temporal consistency approach is adopted to filter out confident samples to update target clusters. In this section, we discuss two alternative strategies for updating target domain clusters. Firstly, each target cluster can be updated with all samples assigned with respective pseudo label (without Filtering). This strategy will introduce many noisy samples into cluster updating and potentially harm test-time feature learning. Secondly, we use a soft assignment of testing samples to each target cluster to update target clusters (Soft Assignment). This strategy is equivalent to fitting a mixture of Gaussian through EM algorithm. We compare these two alternative strategies with our temporal consistency based filtering approach. The results are presented in Tab. XII. We find the results with temporal consistency based filtering outperforms the other two strategies on 13 out of 15 categories of corruptions, suggesting pseudo label filtering is necessary for estimating more accurate target clusters. #### 4.5.7 Alternative Design for Inferring Source Domain Distributions In this work, we develop a solution to infer the source distributions in Sect. 3.4. Alternative to learning the distribution mean, an alternative solution is developed by re-scaling the classifier weight [65]. In this section, we compare the two options for estimating source domain statistics with results presented in Tab. XIII. We conclude from the comparison that learning source domain statistics (TTAC++) is clearly better than re-scaling classifier weights [65]. We attribute the advantage of TTAC++ to the fact that backbone features are obtained after ReLu activation, thus being all positive. The classifier weights are trained without any constraints and could have negative weights. The mismatch between classifier weights and backbone features might lead to inferior results by using re-scaled classifier weights as source domain distribution mean. #### 4.5.8 Limitations and Failure Cases Finally, we discuss the limitations of TTAC++ from two perspectives. First, we point out that TTAC++ requires backpropagation to update models at testing stage, therefore additional computation overhead is required. As shown in Tab. XIII, TTAC++ is 2-5 times computationally more expensive than BN and TENT. However, contrary to usual expectation, BN and TENT are also very expensive compared with no adaptation at all. Eventually, most test-time training methods might require an additional device for test-time adaptation. We further discuss the limitations on test-time training under more severe corruptions. Specifically, we evaluate TENT, SHOT, TTAC, and TTAC++ under 1-5 levels of corruptions on CIFAR10-C with results reported in Tab. XIII. We observe generally a drop of performance from 1-5 level of corruption. Despite consistently outperforming TENT and SHOT at all levels of corruptions, TTAC++'s performance at higher corruption levels are relatively worse, suggesting future attention must be paid to more severely corrupted scenarios. \begin{table} \begin{tabular}{c|c} \hline \hline Method & Error (\%) \\ \hline Class. Weights [65] & 13.79 \\ TTAC++ (SF) & 11.62 \\ \hline \hline \end{tabular} \end{table} TABLE XIII: Comparing alternative methods to estimate source domain distributions. \begin{table} \begin{tabular}{c|c c c c c c c c c c c c c c c} \hline \hline Random Seed & 0 & 10 & 20 & 200 & 300 & 3000 & 4000 & 40000 & 50000 & 500000 & Avg \\ \hline Error (\(\%\)) & 8.82 & 8.80 & 9.38 & 9.13 & 8.88 & 8.87 & 9.07 & 8.93 & 9.02 & 8.68 & 8.96\(\pm\)0.19 \\ \hline \hline \end{tabular} \end{table} TABLE XII: Comparison of alternative strategies for updating target domain clusters. \begin{table} \begin{tabular}{c|c c c c c c c c c c c c c c c} \hline \hline Strategy & Brit & Contr & Defoc & Elst & Fog & Frost & Gauss & Glass & Impul & Jpeg & Mctn & Pixel & Shot & Snow & Zoom & Avg \\ \hline i. Without filtering & 6.01 & 7.21 & 8.13 & 13.87 & 9.03 & 9.82 & 13.13 & 18.22 & 15.66 & 11.47 & 9.26 & 9.29 & 11.68 & 9.19 & 6.79 & 10.58 \\ i. Soft Assignment & 5.91 & 6.52 & 8.05 & 13.25 & 9.08 & 9.76 & 13.14 & 17.19 & 15.45 & 11.41 & 8.88 & 9.10 & 11.53 & 9.13 & 6.83 & 10.35 \\ Filtering (Ours) & **5.99** & **6.28** & **7.53** & **12.99** & **8.95** & **9.22** & **12.13** & **15.79** & **14.37** & **10.65** & **8.70** & **8.60** & **10.70** & **8.82** & **6.37** & **9.78** \\ \hline \hline \end{tabular} \end{table} TABLE XII: Comparison of pseudo labeling thresholds on CIFAR10-C level 5 glass blur corruption. Numbers are reported as classification error (%). \begin{table} \begin{tabular}{c|c} \hline \hline Method & Error (\%) \\ \hline Class. Weights [65] & 13.79 \\ TTAC++ (SF) & 11.62 \\ \hline \hline \end{tabular} \end{table} TABLE X: Comparing alternative methods to estimate source domain distributions. \begin{table} \begin{tabular}{c|c c c c c c c c c c c c c c} \hline \hline \(\tau_{TC}\backslash\tau_{PP}\) & 0.5 & 0.6 & 0.7 & 0.8 & 0.9 & 0.95 \\ \hline 0.0 & 23.03 & 22.26 & 21.96 & 22.50 & 21.14 & 28.55 \\ 0.001 & 200.03 & 20.53 & 20.45 & 20.40 & 19.49 & 27.00 \\ 0.001 & 19.66 & 20.51 & 19.49 & 20.48 & **19.42** & 26.83 \\ 0.01 & 20.71 & 20.78 & 20.73 & 20.65 & 20.29 & 27.58 \\ -0.1 & 24.10 & 21.47 & 21.46 & 22.36 & 21.45 & 28.71 \\ -1.0 & 30.75 & 24.08 & 23.40 & 24.33 & 22.21 & 28.77 \\ \hline \hline \end{tabular} \end{table} TABLE XIII: Evaluation of pseudo labeling thresholds on CIFAR10-C level 5 glass blur corruption. Numbers are reported as classification error (%). ## 5 Conclusion Test-time training (TTT) tackles the realistic challenges of deploying domain adaptation on-the-fly. In this work, we are first motivated by the confused evaluation protocols for TTT and proposed two key criteria, namely modifying source training objective and sequential inference, to further categorize existing methods into four TTT protocols. Under the most realistic protocol, i.e. sequential test-time training (sTTT), we developed a test-time anchored clustering (TTAC) approach to align target domain features to the source ones. Unlike batchnorm and classifier prototype updates, anchored clustering allows all network parameters to be trainable, thus demonstrating stronger test-time training ability. We further proposed pseudo label filtering and an iterative update method to improve anchored clustering and save memory footprint respectively. When source domain distribution information is absent, we proposed to infer the distribution for anchored clustering through efficient gradient based optimization to achieve source-free sTTT. Finally, we incorporated self-training to update model weights with high confidence pseudo labels. We demonstrated self-training is particularly helpful with anchored clustering as regularization, and the improved model is referred to as TTAC++. Experiments on five datasets verified the effectiveness of TTAC++ under sTTT as well as other TTT protocols. We hope this work will serve as an in time taxonomy of TTT protocols and future works can be compared fairly under respective protocols. **Acknowledgement:** This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant 62106078, Guangdong R&D key project of China (No.: 2019B010155001), the Program for Guangdong Introducing Innovative and Enterpreneurial Teams (No.: 2017Z107X183), and A*STAR Career Development Award (Grant no. C210112059).
2305.12317
Hilbert-Huang Transform analysis of quasi-periodic oscillations in MAXI J1820+070
We present time-frequency analysis, based on the Hilbert-Huang transform (HHT), of the evolution on the low-frequency quasi-periodic oscillations (LFQPOs) observed in the black hole X-ray binary MAXI J1820+070. Through the empirical mode decomposition (EMD) method, we decompose the light curve of the QPO component and measure its intrinsic phase lag between photons from different energy bands. We find that the QPO phase lag is negative (low energy photons lag behind high energy photons), meanwhile the absolute value of the lag increases with energy. By applying the Hilbert transform to the light curve of the QPO, we further extract the instantaneous frequency and amplitude of the QPO. Compared these results with those from the Fourier analysis, we find that the broadening of the QPO peak is mainly caused by the frequency modulation. Through further analysis, we find that these modulations could share a common physical origin with the broad-band noise, and can be well explained by the internal shock model of the jet.
Wei Yu, Qing-Cui Bu, Zi-Xu Yang, He-Xin Liu, Liang Zhang, Yue Huang, Deng-Ke Zhou, Jin-Lu Qu, Shuang-Nan Zhang, Shu Zhang, Li-Ming Song, Shu-Mei Jia, Xiang Ma, Lian Tao, Ming-Yu Ge, Qing-Zhong Liu, Jing-Zhi Yan
2023-05-21T02:03:53Z
http://arxiv.org/abs/2305.12317v1
# Hilbert-Huang Transform analysis of quasi-periodic oscillations in MAXI J1820+070 ###### Abstract We present time-frequency analysis, based on the Hilbert-Huang transform (HHT), of the evolution on the low-frequency quasi-periodic oscillations (LFQPOs) observed in the black hole X-ray binary MAXI J1820+070. Through the empirical mode decomposition (EMD) method, we decompose the light curve of the QPO component and measure its intrinsic phase lag between photons from different energy bands. We find that the QPO phase lag is negative (low energy photons lag behind high energy photons), meanwhile the absolute value of the lag increases with energy. By applying the Hilbert transform to the light curve of the QPO, we further extract the instantaneous frequency and amplitude of the QPO. Compared these results with those from the Fourier analysis, we find that the broadening of the QPO peak is mainly caused by the frequency modulation. Through further analysis, we find that these modulations could share a common physical origin with the broad-band noise, and can be well explained by the internal shock model of the jet. X-rays: binaries - X-rays: individual: MAXI J1820+070 - Accretion 0000-0002-4980-3886]Wei Yu ## 1 Introduction Low-frequency quasi-periodic oscillations (LFQPOs) with frequencies ranging from a few millihertz to \(\sim\)30 Hz have been found in most transient black hole X-ray binaries (Motta et al., 2015; Motta, 2016; Belloni, 2010; Ingram & Motta, 2019). These oscillations are appropriately named to reflect their finite-width peaks, usually described by multi-Lorentzian components (Belloni et al., 2002; Rao et al., 2010), in the Fourier power spectra of their X-ray light curves. LFQPOs are believed to originate from the inner part of the accretion flow, however, the physical mechanism is still not well understood. Theories about the origin of LFQPOs can be generally divided into two broad categories: intrinsic models associated with wave modes of the accretion flow (Tagger & Pellat, 1999; Cabanac et al., 2010) and geometric effects such as relativistic precession of the inner hot flow or the jet due to a misalignment of the black hole spin axis and the binary orbital axis (Stella & Vietri, 1997; Stella et al., 1999; Ingram et al., 2009). The LFQPO's broad peak implies that the corresponding X-ray light curve is not strictly periodic and could be caused by a modulation with varying frequency or amplitude (Ingram & Motta, 2019). Since in the framework of Fourier analysis, the Fourier frequencies are defined as constant over the entire time, it doesn't give information of the variability of frequencies. Therefore, time-frequency analysis techniques are required to study the origin of the QPO peak broadening. One possible method to investigate the variation of QPO period is the Hilbert-Huang transform (HHT) proposed by Huang et al. (1998). The HHT is a powerful tool for analyzing phenomena with non-stationary periodicity and has been successfully applied in astronomical research, such as for the QPO in the active galactic nucleus RE J1034 + 396 (Hu et al., 2014) and the \(\sim\)4 Hz QPO observed in the black hole X-ray binary XTE J1550-564 (Su et al., 2015). The HHT is used to decompose a non-stationary signal into basis components and transform these components into instantaneous frequencies and amplitudes. The basis components are not based on any strictly mathematical form and are derived by the signal itself. In contrast, the Fourier and wavelet analysis decompose a signal based on trigonometric and wavelet functions, respectively. The instantaneous frequency, which is different from that in the Fourier analysis, is defined as the time derivative of the phase function. Therefore, the Hilbert spectrum can provide detailed information in both time and frequency domains. It is worth mentioning that the HHT method is based on the assumption that the signals are additive in the time domain. This means that the original signal can be expressed as the sum of basis components. Thus, HHT cannot be applied to multiplied or convoluted signals. The LFQPO phase lag has been commonly detected in black hole X-ray binaries and has provided insights into the geometry and the radiation processes of the accretion flow. However, the phase lag directly measured at the QPO frequency using the lag-frequency spectrum is not the intrinsic lag because of the interference brought from the underneath strong band-noise (Ma et al., 2021; Zhou et al., 2022). By using the HHT method, we are able to extract the independent light curve of QPO, which further allows us to measure the intrinsic phase lag of the QPO. This method has been applied to GRS 1915+105 to study the phase lag of LFQPOs (van den Eijnden et al., 2016), and makes great sense to probe the origin of the QPO. MAXI J1820+070 is a low-mass BHXB, discovered by the Monitor of All-sky X-ray Image (MAXI) on 11 March 2018. Follow-up observations were made by other X-ray telescopes, e.g., _Swift_/BAT, _INTEGRAL_, _NuSTAR_, and _NICER_. QPOs have been observed in MAXI J1820+070 in multiple wavebands, from optical (Yu et al., 2018, 2018; Zampieri et al., 2018; Fiori et al., 2018) to hard X-ray (Mereminskiy et al., 2018). _Insight_-HXMT carried out a Target of Opportunity (ToO) observation three days after its discovery, and monitored the whole outburst from 2018-03-14 (MJD 58191) to 2018-10-21 (MJD 58412). The high statistics and the broad energy coverage (1-250 keV) of _Insight_-HXMT allow us to perform detailed timing analysis of the broadband variability, especially at high energy. In this paper, we present the HHT-based analysis of the time-frequency properties of the LFQPOs detected in MAXI J1820+070. Through the HHT method, we measured the intrinsic phase lag of the QPO and explored the origin of its peak broadening. In Section 2, we briefly describe the _Insight_-HXMT observation and the data reduction. In Section 3, we explain how to use the HHT for adaptive decomposition of the oscillatory component from the X-ray light curve, and present how to obtain the instantaneous frequency and amplitude of the QPO. The result and discussion are given in Section 4. At last, we end with our conclusion in Section 5. ## 2 Data reduction Figure 1: Left: _Insight_-HXMT lightcurves of MAXI J1820+070 from HE (27–150 keV), ME (10–30 keV) and LE (1–10 keV), extracted from the rising hard state during its 2018 outburst from MJD 58191 to 58286. Right: The corresponding hardness-intensity diagram, with the intensity defined as the total count rate of 1–10 keV and hardness defined as the ratio between the count rates from the hard band (3–10 keV) and the soft band (1–3 keV). The six epochs selected for our study are highlighted in different colours. In this work, we use data observed with _Insight_-HXMT between March 14th and June 17th, 2018. We selected six observations that span over the entire hard state of 2018 outburst, which are highlighted in Fig. 1. There are three telescopes onboard of _Insight_-HXMT: the high-energy X-ray telescope (HE, 20-250 keV, 5,100 cm\({}^{2}\)), the medium-energy X-ray telescope (ME, 5-30 keV, 952 cm\({}^{2}\)), and the low-energy X-ray telescope (LE, 1-15 keV, 384 cm\({}^{2}\)). There are three types of Field of View (FoV): 1\({}^{\circ}\)\(\times\) 6\({}^{\circ}\) (i.e., the small FoV), 6\({}^{\circ}\)\(\times\) 6\({}^{\circ}\) (i.e., the large FoV), and the blind FoV that is used to estimate the particle induced instrumental background (see Zhang et al., 2020, and references therein). The data are processed with hpipeline under _Insight_-HXMT Data Analysis Software (HXMTDAS) version 2.04 1. The data are filtered using the criteria recommended by the _Insight_-HXMT team: the pointing offset Figure 2: Representative example of a 50-s-long lightcurve from 27–150 keV and its corresponding IMFs. From the top to bottom: the original light curve (DATA); the high-frequency noises from the summation of IMF0 to IMF2; the LFQPO (IMF3); the low frequency noise from the summation of IMF4 to the residual. angle is smaller than 0.04\({}^{\circ}\); the elevation angle is larger than 10\({}^{\circ}\); the value of the geomagnetic cutoff rigidity is larger than 8; data are used at least 300 s before and after the South Atlantic Anomaly (SAA) passage. To avoid the possible contamination from the bright earth and nearby sources, only small field of views (FoVs) are applied. Light curves are extracted from screened files using the HELCGEN, MELCGEN and LELCGEN tasks. The lightcurves extracted from 1-10 keV, 10-30 keV and 27-150 keV and the hardness-intensity diagram (HID) with the intensity defined as the total count rate from 1-10 keV and hardness defined as the ratio between the count rates from 3-10 keV and 1-3 keV are plot in Fig. 1. The lightcurves are not barycentered corrected, considering that it would not affect our results. ## 3 Hilbert-Huang transform analysis The Hilbert-Huang transform (HHT) is a method for analyzing nonlinear and non-stationary signals, which consists of two major steps (Huang and Wu, 2008): (1) using empirical mode decomposition (EMD) to decompose the signal into a number of independent intrinsic mode functions (IMFs); (2) extracting the instantaneous frequencies and amplitudes of the signal by performing the Hilbert transform of the IMFs. ### Empirical Mode Decomposition Empirical mode decomposition (EMD) is an iterative sifting process for extracting oscillation modes by subtracting the local means from the original data (Huang et al., 1998; Huang and Wu, 2008). These oscillatory modes are IMFs. An IMF represents an oscillating wave if it satisfies the following two requirements: (1) in the entire data set, the number of extrema and the number of zero crossings must either be equal or differ at most by one, and (2) at any point, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima is zero. The numerical procedure to obtain those IMFs can be concluded with the following steps: (1) Identify all the local extrema of the data \(x(t)\), and form the envelopes defined by the local maxima and minima, respectively, with cubic splines. (2) Compute the mean values \(m_{1}(t)\) by averaging the upper envelope and lower envelope, and subtract the mean values from the data to get the first component \(h_{1}(t)=x(t)-m_{1}(t)\). (3) Test if \(h_{1}(t)\) is an IMF. If the first component is not an IMF, let \(h_{1}(t)\) be the new data set. Continue the steps (1) and (2) until the first component is an IMF. (4) The first IMF component is called as \(c_{1}(t)\). Let the residual signal \(r_{1}(t)=x(t)-c_{1}(t)\). Continue the steps (1)-(3) until \(r_{n}(t)\) becomes a monotonic function that no more IMF can be extracted. Based on the above algorithm, the original signal \(x(t)\) can thus be expressed as as the sum of IMFs, and the final residual, \(r_{n}(t)\): \[x(t)=\sum_{i=1}^{n}c_{i}(t)+r_{n}(t) \tag{1}\] If the original time series contains intermittent processes, the EMD may suffer from the mode mixing problem in which a modulation with the same timescale is distributed across different IMFs (Yeh et al., 2010). In this work, we applied a developed modified version of the empirical mode decomposition (EMD), i.e., the fast complementary ensemble empirical mode decomposition (CEEMD), which can reduce the effect of the mode mixing problem (Huang et al., 2009; Yeh et al., 2010; Wang et al., 2014). The code we used is from PyEMD (v1.21), an open-source Python package (Laszuk, 2017). ### Hilbert Transform The second step of the HHT is the Hilbert transform. After the decomposition step, the IMFs are submitted to this process. For a given data, \(x(t)\), the Hilbert transform, \(y(t)\), is defined as \[y(t)=\frac{1}{\pi}P\int\frac{x(^{\prime})}{t-t^{\prime}}dt^{\prime} \tag{2}\] where \(P\) is the Cauchy principle value. With this definition, \(x(t)\) and \(y(t)\) form the complex conjugate pair, so we can have an analytic signal \(z(t)\) as \[z(t)=x(t)+iy(t)=a(t)e^{i\theta(t)} \tag{3}\] where time-dependent amplitude \(a(t)\) and phase \(\theta(t)\) are Figure 3: Fourier power spectra produced from the IMF0-2 (orange), IMF3 (red), IMF4-Residual (blue) and the original light curve (black). Figure 4: Left:The superimposed lightcurves from the energy band 1–2.6 keV (blue) and 100–150 keV (red). Right: cross-correlation function of IMF3 calculated between the 1–2.6 keV and 100–150 keV energy bands. Figure 5: The LFQPO phase lags as a function of photon energy. The reference energy band is 1–2.6 keV. Left: the original phase lag. Right: the intrinsic phase lag. Figure 6: Hilbert spectrum of the LFQPO from MAXI J1820+070. The color on the z-axis represents the QPO amplitude. \[a(t)=\sqrt{x(t)^{2}+y(t)^{2}} \tag{4}\] and \[\theta(t)=\arctan\frac{y(t)}{x(t)} \tag{5}\] Therefore, the instantaneous frequency \(\omega(t)\) can be defined as \[\omega(t)=\frac{\mathrm{d}\theta(t)}{\mathrm{d}t} \tag{6}\] The instantaneous amplitude \(a(t)\) can be defined by the upper envelope of the absolute value of an IMF (Huang et al., 2009). ## 4 Result and Discussion ### Phase Lags of the LFQPO Fig. 2 shows a representative example of a 50 s lightcurve from 27-150 keV band with a \(\sim\)0.4 Hz LFQPO (ObsID P0114661044). After decomposing the lightcurve we find seven significant IMF components. A \(\sim 0.4\) Hz oscillation is identified as the IMF3. The high-frequency noise (summation from IMF0 to IMF2) and the low-frequency noise (summation from IMF4 to the final residual) are also plotted in Fig. 2, respectively. Using the adaptive decomposition, these zero-mean oscillatory components can yield physically meaningful instantaneous frequencies by using the Hilbert transform (Huang et al., 1998; Huang and Wu, 2008). Since the scope of this work is to discern QPO from noises, further investigation on decomposing each noise component is not our goal here. Therefore, in the following part, we will focus on the study of the IMF3, i.e., the LFQPO. Fig. 3 shows the average Fourier power spectrum of these components. QPO is known to be energy dependent. One way to study the energy-dependence of the QPO is to track its phase lag. As mentioned above, the phase lag directly measured at the LFQPO frequency from the lag-frequency spectrum is not the intrinsic phase lag. The strong broadband noise would bring an underlying phase-lag continuum that interferes with the measurement of the QPO phase lag (Ma et al., 2021; Zhou et al., 2022). Through the HHT method, we obtain the independent light curve of LFQPO (IMF3), which allows us directly measure the QPO intrinsic phase lag without introducing interference from the broad band noise. We decompose the light curves from multiple energy bands and calculate their instantaneous phases through HHT. In order to visualize showing the phase lag of QPO, we plot the superimposed QPO light curves from the 1-2.6 keV and 100-150 keV energy bands, respectively (see Fig. 4 left panel). We find that the QPO shows a strong soft lag, i.e., the soft photons lags the hard ones. By calculating the cross-correlation function, we find that there is a soft lag around 0.4 rad between 1-2.6 keV and 100-150 keV (see Fig. 4 right panel). This is opposite to the result given by cross-spectrum in the frequency domain, which shows a hard lag. Fig. 5 shows the QPO phase lag as functions of energy, in which the phase lag directly measured at the LFQPO frequency from the lag-frequency spectrum is referred to as the original phase lag, and the phase lag measured through the HHT method is referred to as the intrinsic phase lag. We can see that the absolute value of the intrinsic soft lag increases with increasing energy, from 0 rad to \(\sim 0.4\) rad. Similar results have also been found in Ma et al. (2021), in which the phase lags were calculated by subtracting the average lag of the band-noise from the original QPO phase lag in the lag-frequency spectrum. We quantitatively compare the intrinsic phase lags calculated by these two methods. As shown in the right panel of Fig. 5, both methods yield similar trends of the lag-energy relation. However, the absolute soft lags obtained with HHT method are different from those obtained with the method from Ma et al. (2021). There is a possible reason for these differences. The method of Ma et al. (2021) is based on the assumption that QPO and noise components are convolutional in the time domain (Zhou et al., 2022), but EMD is based on the additive relationship between different signals. According to the simulation of Zhou et al. (2022), the phase lags are different for the two cases. The relationship between QPO and band-noise is still unclear, but for both the addition and convolution assumptions, similar soft phase lags are observed. This may indicate that the relationship between QPO and noise in the time domain is more complex, such as partially convolutional and partially additive. Ma et al. (2021) explained the phase lag behavior of the QPO by employing a compact jet with precession. In this scenario, the high-energy photons come from the bottom part of the jet closer to the black hole, while the precession of the compact jet gives arise to the QPO and allows the high-energy photons to reach the observer first, resulting in a soft lag. This scenario also applies to our results. ### Modulation of LFQPO After measuring the phase lag of the QPO, we move to the second step of the HHT algorithm: using the normalized Hilbert transform (Huang et al., 2009) to extract the instantaneous frequency and amplitude of IMF3. Following Huang et al. (2009), we define the instantaneous amplitude of the IMF as the cubic Hermite spline enve lope of the local maxima of the absolute values of IMF3. In contrast to the Fourier and wavelet analysis, the calculation of the frequency in HHT is a differentiation over local time domain. Therefore, we can obtain the instantaneous frequency of a signal as long as the sampling time interval, \(dt\), is much shorter than the cycle length. This makes it possible for us to explore the origin of the broadening of the peak of LFQPO. The typical instantaneous frequency and amplitude are shown as the color map for the Hilbert spectrum in Fig. 6, calculated for the time interval from 0 to 20 s. The color depth represents the magnitude of the amplitude. Fig. 7 shows more detailed instantaneous frequency and instantaneous amplitude information. We see that there are oscillations in both the instantaneous amplitude and the instantaneous frequency of the QPO, which cause the QPO to be quasi-periodic. But with the current information, we cannot tell which kind of the oscillation is dominant. To explore this, we generate two simulated light curves using the instantaneous amplitude and frequency information (Fig. 7). One of the light curves only includes the frequency modulation, leaving the amplitude be constant. The other light curve only includes the amplitude modulation, leaving the frequency remain at the center frequency of the QPO. Next, we perform Fourier transform on these two light curves to produce their power density spectra (PDS), as shown in Fig. 8. For better comparison, the figure also includes the PDS generated from the original QPO light curve. As shown in the figure, the effect of frequency modulation (orange) on QPO broadening is significantly stronger than that of amplitude modulation (navy). This suggests that frequency modulation plays a major role in the broadening of the peak of LFQPO. According to the simulation from Ingram and Motta (2019), if the quasi-period of QPO is dominated by frequency modulation, then its fundamental frequency and harmonic frequency of QPO should have the same Q factor. On the contrary, if it is dominated by amplitude modulation, then the fundamental and harmonic frequencies of the QPO should have the same width. By fitting the original power spectrum, we find that the Q-factors of the QPO fundamental and second harmonic are \(5.5\pm 0.1\) and \(5.7\pm 0.2\), suggesting a frequency modulation. Combined these results with those we get from the HHT, we conclude that the quasi-periodic of QPO in MAXI J1820+070 is mainly caused by the frequency modulation. ### Origin Of Modulation Figure 8: The power density spectra (PDS) of FM (orange), AM (navy) simulated light curves and the original (green) QPO light curve (IMF3). Figure 7: Instantaneous frequency and amplitude of the LFQPO from MAXI J1820+070. The black lines are the simulated light curves. To explore the origin of the modulation, we calculate the power spectra of instantaneous frequency and instantaneous amplitude, respectively (see Fig. 9). We find that both them show a shape similar to that of broad-band noise, implying that the modulation is not caused by a completely random process. To verify this, we decompose more observation light curves from the hard state, which are colored marked in Fig. 1. By fitting the power spectrum with multiple Lorentzian functions, we get the cut-off frequency for each component. As shown in Fig. 10, there is a strong linear correlation between the timescale of the frequency/amplitude modulation and the broad-band noise. This may suggest that the modulation of QPO may have a common origin with broad-band noise. To further investigate what leads to the modulation, we first need to understand how the QPO originates. The origin of QPO has been studied for a long time, and several models have been proposed. One of the most accepted models is the Lense-Thirring (L-T) precession model, which assumes that QPO is generated by the relativistic precession of an inner accretion flow (Ingram et al., 2009; You et al., 2018, 2020) or a jet (Ma et al., 2021). For the first case, the modulation may come from the fluctuation propagation in the hot accretion flow. The inward propagation of the fluctuations causes changes in the surface density of the accretion flow, thus altering the hot flow's moment of inertia and inducing the precession frequency jitters. Besides, the fluctuations propagation model (Lyubarskii, 1997; Ingram and Done, 2012; Ingram, 2016; Mushtukov et al., 2019) is also widely used to explain the broad-band noise in the power spectrum (Rapisarda et al., 2014, 2016; Ingram, 2016; Rapisarda et al., 2017, 2017; Turner and Reynolds, 2021; Yang et al., 2022). Thus, it can naturally explain why the modulation has a shape similar to that of the broad-band noise in the PDS. However, this explanation has its shortcomings. Fluctuations propagation can generate not only low-frequency noise, but also high-frequency noise. We observed both low-frequency and high-frequency noises in the power spectrum, but only low-frequency modulation was observed. This cannot be well explained under the fluctuations propagation model. As for the second case, it assumes that the QPO is generated by the precession of a jet. The modulation can be explained by the internal shock model of the jet (Rees, 1978; Spada et al., 2001; Boettcher and Dermer, 2010). In this model, shells of gas are continuously ejected with randomly variable velocities and then propagate along the jet. At some points, the fastest fluctuations start catching up and merging with slower ones. This leads to the formation of shocks in which electrons are accelerated up to relativistic energies. Malzac (2014) have shown that internal shocks caused by fluctuations of the outflow velocity can produce shapes on the power spectrum that resemble low-frequency broad-band noise. The jet behaves like a low-pass filter, as the shells of plasma colliding and merging with each other, the highest frequency velocity fluctuations are gradually damped and the size of the emitting regions increases (Malzac, 2014). This model can well explain the absence of high frequency modulation. Moreover, the jet precession model can also naturally explain the large soft phase lags of QPO (Ma et al., 2021). ## 5 Conclusion In this paper, we performed the HHT analysis of the LFQPOs in MAXI J1820+070. With the EMD method, we are able to extract the independent light curve of the QPO and measure the QPO intrinsic phase lag. We find a soft QPO phase lag in this source (low energy photons lag behind high energy photons), and the absolute value of the QPO phase lag increases with energy. Our result is different from the phase lag calculated from the lag-frequency spectrum, in which the lag includes the contribution from the broadband noises. Our results show that the EMD method can significant reduce the inter Figure 9: The power density spectra (PDS) of the instantaneous frequency and instantaneous amplitude curves for IMF3. ference from the broad band noise on the measurement of the QPO intrinsic lag. By analyzing the instantaneous frequency and instantaneous amplitude obtained from HHT, we find that the broadening QPO peak in the power spectrum of MAXI J1820+070 is dominated by the frequency modulation. Through further analysis, we find that this modulation probably share a common physical origin with the broad-band noise, and can be well explained by the internal shock model of the jet. In order to examine the credibility of the results of QPO analysis, we perform the robustness tests on the HHT method. First, we simulated a red noise light curve using the Timmer-Koenig method (Timmer & Koenig, 1995), then added a sinusoidal signal with frequency and amplitude modulation as the QPO component. And then we decomposed the synthetic light curve using the CEEMD method. The results are shown in Fig. 11, in which the red line represents the QPO component and the blue line represents the noise component. Our results show that the CEEMD algorithm can effectively separate the QPO and noise components. Subsequently, we compared the decomposed QPO with the original QPO light curves and found that the QPO profile was accurately recovered (see Fig. 12). Further Hilbert transform analysis on the decomposed QPO revealed that the instantaneous phase, frequency, and amplitude trends of the decomposed QPO are highly consistent with those of the original signal, but with local fluctuations, which is possibly due to the mode mixing between the QPO and noise components. To investigate whether such fluctuations affect the measurements of the intrinsic QPO phase lag, we simulated another light curve with a \(+5s\) time lag (equivalent to \(+\pi/2\) phase lag) into the QPO component and a \(-10s\) time lag into the noise component. We then decomposed this light curve and measured the time lags between the two Figure 10: The cut-off frequency of broadband-noise as a function of that of amplitude modulation (left panel) or frequency modulation (right panel). The colour of each observation matches that in Fig. 1. decomposed QPO light curves, which is shown in Fig. 10. The two decomposed QPO components exhibited a clear positive time lag very close to \(5s\), indicating that the HHT method works very well to measure the intrinsic phase lag of QPOs.
2306.12647
The twist-3 gluon contribution to Sivers asymmetry in $J/ψ$ production in semi-inclusive deep inelastic scattering
We carry out the first calculation for the twist-3 gluon contribution to the single transverse-spin asymmetry(SSA) in $J/\psi$ production in semi-inclusive deep inelastic scattering. Our result shows that the $J/\psi$ SSA is an ideal observable to pin down the $C$-even type twist-3 gluon distribution that has a direct relationship with the gluon transverse-momentum-dependent distribution function. We also perform some numerical simulations of the $J/\psi$ SSA for the kinematics accessible at the future electron-ion-collider experiment. For color-singlet contribution, the hadronization effect of $J/\psi$ is completely canceled at the level of the SSA and the spin-dependent structure functions directly reflect the behavior of the $C$-even twist-3 gluon distribution.
Longjie Chen, Hongxi Xing, Shinsuke Yoshida
2023-06-22T03:22:59Z
http://arxiv.org/abs/2306.12647v1
The twist-3 gluon contribution to Sivers asymmetry in \(J/\psi\) production in semi-inclusive deep inelastic scattering ###### Abstract We carry out the first calculation for the twist-3 gluon contribution to the single transverse-spin asymmetry(SSA) in \(J/\psi\) production in semi-inclusive deep inelastic scattering. Our result shows that the \(J/\psi\) SSA is an ideal observable to pin down the \(C\)-even type twist-3 gluon distribution that has a direct relationship with the gluon transverse-momentum-dependent distribution function. We also perform some numerical simulations of the \(J/\psi\) SSA for the kinematics accessible at the future electron-ion-collider experiment. For color-singlet contribution, the hadronization effect of \(J/\psi\) is completely canceled at the level of the SSA and the spin-dependent structure functions directly reflect the behavior of the \(C\)-even twist-3 gluon distribution. ## I Introduction The investigation of the nucleon internal structure in high energy scatterings has been one of the central subjects in basic science since quantum chromodynamics(QCD) was established as a fundamental theory of the strong interaction. A lot of knowledge have been accumulated in the past half century through the perturbative QCD analysis of experimental data. However, in spite of tremendous theoretical and experimental effort, a lot of mysteries still lie in the nucleon structure. In particular, the role of gluons inside the nucleon is leaving a lot of room for research. The investigation of the gluon spin structure is one of the main subjects of the experiment at Relativistic Heavy Ion Collider(RHIC) that was launched in 2000. The evaluation of the gluon spin contribution to the proton spin is a major progress that was made in the past couple of decades[1; 2]. Further investigation will be inherited by the next-generation collider experiment, Electron Ion Collider(EIC)[3]. The EIC experiment aims to understand deeper gluon structure like the 3-dimensional orbital motion of gluons inside the proton. The importance of the orbital motions of the partons was realized by the emergence of the large single transverse-spin asymmetry(SSA) in high energy hadron scatterings. The large SSA was first observed in the late 70s[4; 5] and it turned out that the conventional parton picture could not describe it at all. The observation of the large SSA motivated the improvement of the conventional perturbative QCD framework. The transverse-momentum-dependent(TMD) factorization is known as one of the successful frameworks in describing existing data of the SSA. The nonperturbative functions in the TMD factorization represents the 3-dimensional motion of the partons and the success of this framework made us realize the importance of the orbital motions of the partons. Another successful framework is the twist-3 contributions in the collinear factorization. This framework describes incoherent multi-parton scattering in a hard process and this is successful in describing SSAs in single-scale processes like \(pp\to\pi X\) measured at RHIC. Although these two frameworks basically have different applicable conditions, it was found that there is a marginal region where both frameworks are valid in some processes[6; 7; 8; 9; 10; 11]. The equivalence of two frameworks is important for giving a unified picture to the origin of the SSA. Heavy flavored hadron productions are important in the context of the investigation of the gluon structure because the heavy quark fragmenting into a final state hadron is mainly produced by a fusion of gluon inside the proton[12]. The SSA in the heavy flavored hadron production is one of ideal observables to investigate the orbital motions of gluons. Those SSAs have been well discussed based on the TMD factorization for \(D\)-meson production[13; 14; 15; 16; 17] and \(J/\psi\) production[15; 18; 19; 20; 21; 22; 23; 24; 25]. On the other hand, the twist-3 gluon contribution to the SSA has been calculated only for \(D\)-meson production[26; 27; 28; 29; 30]. In this paper, we carry out the first calculation for the twist-3 gluon contribution to the SSA in \(J/\psi\) production in semi-inclusive deep inelastic scattering(SIDIS). This is required to deal with the data of the \(J/\psi\) SSA in a full kinematic range of the EIC experiment together with the TMD framework and give a unified interpretation to the data. We will also perform some numerical simulations for the \(J/\psi\) SSA. Our result clarifies the role of the \(J/\psi\) SSA in the determination of the twist-3 gluon distribution function that could give indirect information about the orbital motion of the gluons inside the proton. The remainder of this paper is organized as follows: In section II, we introduce definitions of the twist-3 gluon distribution functions relevant to our study and show some relations among them. In section III, we introduce the frame we will work on and show our derivation of the twist-3 cross section formula in detail. In section IV, we show some numerical simulations with simple models for the normalized structure functions that are accessible at the EIC. Section V is devoted to a summary of our study. ## II Definitions of Twist-3 Gluon Distribution Functions In this section, we recall the definitions of the twist-3 gluon distribution functions relevant to our study. Two types of the twist-3 functions, the kinematical functions and the dynamical functions, in general contribute to the SSA. The kinematical functions of the transversely polarized proton can be expressed by the first \(k_{T}^{2}/M_{N}^{2}\)-moment of the gluon Sivers function [31; 32]. We show their definitions below. \[\Phi_{\partial}^{\alpha\beta\gamma}(x) = \int\frac{d\lambda}{2\pi}e^{i\lambda x}\langle pS_{\perp}|F^{ \beta n}(0)F^{\alpha n}(\lambda n)|pS_{\perp}\rangle(i\overleftarrow{\partial }\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! ## III Calculation of the SSA in \(J/\psi\) production in SIDIS ### Unpolarized cross section for \(J/\psi\) production in SIDIS We calculate the SSA in \(J/\psi\) production in SIDIS, \[e(\ell)+p^{\uparrow}(p,S_{\perp})\to e(\ell^{\prime})+J/\psi(P_{J/\psi})+X, \tag{7}\] in the hadron frame[36]. It is convenient to use the following Lorentz invariant variables to express a cross section formula in SIDIS. \[S_{ep} = (p+\ell)^{2},\hskip 14.226378ptQ^{2}=-q^{2}=-(\ell-\ell^{\prime})^{2 },\hskip 14.226378ptx_{B}=\frac{Q^{2}}{2p\cdot q},\hskip 14.226378ptz_{f}= \frac{p\cdot P_{J/\psi}}{p\cdot q}. \tag{8}\] All momenta and the spin vector of the polarized proton are given in this frame as \[p = \Big{(}\frac{Q}{2x_{B}},0,0,\frac{Q}{2x_{B}}\Big{)},\hskip 14.226378ptq =\Big{(}0,0,0,-Q\Big{)},\] \[P_{J/\psi} = \frac{z_{f}Q}{2}\Big{(}1+\frac{P_{T}^{2}}{Q^{2}}+\frac{m_{J/\psi }^{2}}{z_{f}^{2}Q^{2}},\frac{2P_{T}}{Q}\cos\chi,\frac{2P_{T}}{Q}\sin\chi,-1+ \frac{P_{T}^{2}}{Q^{2}}+\frac{m_{J/\psi}^{2}}{z_{f}^{2}Q^{2}}\Big{)},\] \[S_{\perp} = (0,\cos\Phi_{S},\sin\Phi_{S},0), \tag{9}\] where \(P_{T}=|P_{J/\psi}^{\perp}|/z_{f}\) and \(m_{J/\psi}\) is the mass of \(J/\psi\). Using these variables, the cross section formula is given by \[\frac{d^{6}\sigma}{dx_{B}dQ^{2}dz_{f}dP_{T}^{2}d\phi d\chi}=\frac{\alpha_{em} ^{2}}{128\pi^{4}S_{ep}^{2}x_{B}^{2}Q^{2}}z_{f}L_{\mu\nu}(\ell,\ell^{\prime})W ^{\mu\nu}(p,q,P_{J/\psi}), \tag{10}\] where \(\alpha_{em}=e^{2}/4\pi\) is the QED coupling constant, \(\Phi_{S}\), \(\chi\) and \(\phi\) are the azimuthal angles of the proton's spin, the hadron plane and the lepton plane respectively as shown in FIG. 1, the leptonic tensor is given by \(L_{\mu\nu}(\ell,\ell^{\prime})=2(\ell_{\mu}\ell^{\prime}_{\nu}+\ell_{\nu}\ell ^{\prime}_{\mu})-Q^{2}g^{\mu\nu}\). The hadronic tensor \(W_{\mu\nu}(p,q,P_{J/\psi})\) describes the scattering between the virtual photon and the proton and the hadronization process of the charm quark pair into \(J/\psi\). We adopt non-relativistic QCD(NRQCD) framework[37; 38] for the description of the hadronization mechanism of \(J/\psi\). Within NRQCD, the \(J/\psi\) production is illustrated as \[e(\ell)+p(p)\to e(\ell^{\prime})+\sum_{n}c\bar{c}[n](P_{J/\psi})+X, \tag{11}\] where \(n={}^{3}S_{1}^{[1]},{}^{1}S_{0}^{[8]},{}^{3}S_{1}^{[8]},\cdots\) denotes possible Fock states of the charm quark pair hadronizing into \(J/\psi\). In this paper, we focus on the color singlet contribution \({}^{3}S_{1}^{[1]}\) as the first attempt to calculate the twist-3 gluon distribution Figure 1: Schematic illustration of the scattering in the hadron frame. effect on the \(J/\psi\) SSA. The color-singlet hadronization gives the following structure in the spinor space. \[\mathcal{N}\langle\mathcal{O}^{J/\psi}({}^{3}S_{1}^{[1]})\rangle\not{\epsilon}( \not{P}_{J/\psi}+m_{J/\psi}),\ \ \ \ \sum\epsilon^{\rho}\epsilon^{*\sigma}=-g^{\rho\sigma}+\frac{P_{J/\psi}^{\rho}P_ {J/\psi}^{\sigma}}{m_{J/\psi}^{2}}, \tag{12}\] where \(\langle\mathcal{O}^{J/\psi}({}^{3}S_{1}^{[1]})\rangle\) is the long distance matrix element(LDME) that represents the hadronization effect of the charm quark pair with the quantum state \({}^{3}S_{1}^{[1]}\) into \(J/\psi\). We don't need the explicit form of the normalization factor \(\mathcal{N}\) in our study because it is completely canceled at the level of the SSA that is given by the ratio of two cross sections. For the unpolarized cross section, the hadronic tensor is given by \[W^{\mu\nu}(p,q,P_{J/\psi})=\int_{0}^{1}\frac{dx}{x}\,G(x)\,\langle\mathcal{O}^ {J/\psi}({}^{3}S_{1}^{[1]})\rangle\,w^{\mu\nu}(xp,q,P_{J/\psi}), \tag{13}\] where \(G(x)\) is the unpolarized gluon distribution function and \(w^{\mu\nu}(xp,q,P_{J/\psi})\) represents the hard scattering between the virtual photon and a parton inside the proton. We can calculate \(L^{\mu\nu}(\ell,\ell^{\prime})w^{\mu\nu}(xp,q,P_{J/\psi})\) perturbatively by considering the diagrams in the leading-order(LO) with respect to the strong coupling constant for the unpolarized cross section. The hadronic tensor \(W_{\mu\nu}(p,q,P_{J/\psi})\) is conventionally expanded in terms of 9 independent tensors \(\mathcal{V}_{i}^{\mu\nu}\) (\(i=1,2,\cdots 9\)) [36] as \[W^{\mu\nu}=\sum_{i=1}^{9}(W^{\rho\sigma}\widetilde{\mathcal{V}}_{i\rho\sigma} )\mathcal{V}_{i}^{\mu\nu}, \tag{14}\] Figure 2: Diagrams that contribute to the unpolarized cross section. \(w^{\mu\nu}(xp,q,P_{J/\psi})\) is given by the squared amplitude of the sum of these six diagrams. where the inverse tensors \(\widetilde{\cal V}_{i\rho\sigma}\) satisfy \({\cal V}_{i}^{\mu\nu}\widetilde{\cal V}_{i^{\prime}\mu\nu}=\delta_{ii^{\prime}}\). Here we just show the explicit definitions of the symmetric tensors that are relevant to our study. \[{\cal V}_{1}^{\mu\nu}=X^{\mu}X^{\nu}+Y^{\mu}Y^{\nu},\ \ \ \ {\cal V}_{2}^{\mu\nu}=g^{\mu\nu}+Z^{\mu}Z^{\nu},\ \ \ \ {\cal V}_{3}^{\mu\nu}=T^{\mu}X^{\nu}+X^{\mu}T^{\nu},\] \[{\cal V}_{4}^{\mu\nu}=X^{\mu}X^{\nu}-Y^{\mu}Y^{\nu},\ \ \ \ {\cal V}_{8}^{\mu\nu}=T^{\mu}Y^{\nu}+Y^{\mu}T^{\nu},\ \ \ \ {\cal V}_{9}^{\mu\nu}=X^{\mu}Y^{\nu}+Y^{\mu}X^{\nu},\] \[\widetilde{\cal V}_{1}^{\mu\nu}=\frac{1}{2}(2T^{\mu}T^{\nu}+X^{\mu}X ^{\nu}+Y^{\mu}Y^{\nu}),\ \ \ \ \ \widetilde{\cal V}_{2}^{\mu\nu}=T^{\mu}T^{\nu},\ \ \ \ \widetilde{\cal V}_{3}^{\mu\nu}=-\frac{1}{2}(T^{\mu}X^{\nu}+X^{\mu}T^{\nu}),\] \[\widetilde{\cal V}_{4}^{\mu\nu}=\frac{1}{2}(X^{\mu}X^{\nu}-Y^{\mu }Y^{\nu}),\ \ \ \ \ \widetilde{\cal V}_{8}^{\mu\nu}=-\frac{1}{2}(T^{\mu}Y^{\nu}+Y^{\mu}T^{\nu}), \ \ \ \ \widetilde{\cal V}_{9}^{\mu\nu}=\frac{1}{2}(X^{\mu}Y^{\nu}+Y^{\mu}X^{\nu}),\] where each vector is defined by \[T^{\mu}=(1,0,0,0),\ \ \ \ X^{\mu}=(0,\cos\chi,\sin\chi,0),\ \ \ \ Y^{\mu}=(0,-\sin\chi,\cos\chi,0),\ \ \ \ Z^{\mu}=(0,0,0,1). \tag{15}\] Then we can calculate \(L_{\mu\nu}W^{\mu\nu}\) as \[L_{\mu\nu}W^{\mu\nu}=\sum_{i=1,\cdots,4,8,9}[L_{\mu\nu}{\cal V}_{i}^{\mu\nu}] [W_{\rho\sigma}\widetilde{\cal V}_{i}^{\rho\sigma}]=Q^{2}\sum_{i=1,\cdots,4,8,9 }{\cal A}_{i}(\phi-\chi)[W_{\rho\sigma}\widetilde{\cal V}_{i}^{\rho\sigma}], \tag{16}\] where the azimuthal dependences \({\cal A}_{i}(\varphi)\) are given by \[{\cal A}_{1}(\varphi)=\frac{4}{y^{2}}(1-y+\frac{y^{2}}{2}),\ \ \ \ {\cal A}_{2}(\varphi)=-2,\ \ \ \ {\cal A}_{3}(\varphi)=-\frac{4}{y^{2}}(2-y)\sqrt{1-y}\cos\varphi,\] \[{\cal A}_{4}(\varphi)=\frac{4}{y^{2}}(1-y)\cos 2\varphi,\ \ \ \ {\cal A}_{8}(\varphi)=-\frac{4}{y^{2}}(2-y)\sqrt{1-y}\sin\varphi,\ \ \ \ \ {\cal A}_{9}(\varphi)=\frac{4}{y^{2}}(1-y)\sin 2\varphi, \tag{17}\] where \(y=\frac{Q^{2}}{x_{B}S_{ep}}\). We can derive the unpolarized cross section formula by computing all the diagrams in FIG. 2 as \[\frac{d^{6}\sigma}{dx_{B}dQ^{2}dz_{f}dP_{T}^{2}d\phi d\chi} \tag{18}\] \[= \frac{\alpha_{em}^{2}\alpha_{s}^{2}\epsilon_{c}^{2}}{4\pi S_{ep}^ {2}x_{B}^{2}Q^{2}}\Big{(}{\cal N}\langle{\cal O}^{J/\psi}(^{3}S_{1}^{[1]}) \rangle\Big{)}\sum_{i=1,\cdots,4,8,9}{\cal A}_{i}(\phi-\chi)\int_{0}^{1} \frac{dx}{x}\,G(x)\,\hat{\sigma}_{i}\,\delta\Big{[}\frac{P_{T}^{2}}{Q^{2}}- \Big{(}1-\frac{1}{\hat{x}}+\frac{m_{J/\psi}^{2}}{z_{f}Q^{2}}\Big{)}\Big{(}1- \frac{1}{z_{f}}\Big{)}\Big{]},\] where \(\hat{x}=x_{B}/x\), \(\alpha_{s}\) is the strong coupling constant and \(e_{c}\) is the electric charge of the charm quark. We show all hard cross sections \(\hat{\sigma}_{i}\) in Appendix A because they are lengthy. The result in the \(J/\psi\) rest frame is also available in [39]. ### Twist-3 polarized cross section for \(J/\psi\) production in SIDIS We next calculate the twist-3 polarized cross section formula. The twist-3 quark distribution effect could also contribute to the SSA and it was calculated in \(pp\) collision[40]. However, it is out of scope of the present study because it vanishes in the color-singlet case in SIDIS. The general formula for the twist-3 gluon contribution in SIDIS was derived in [41] as \[W^{\rm twist-3}_{\rho\sigma}(p,q,P_{J/\psi}) = \langle{\cal O}^{J/\psi}(^{3}S_{1}^{[1]})\rangle\Big{\{}\omega^{\mu }_{\ \alpha}\omega^{\nu}_{\beta}\omega^{\lambda}_{\ \gamma}\int\frac{dx}{x^{2}}\Phi^{\alpha\beta\gamma}_{\alpha}(x)\frac{ \partial}{\partial k^{3}}S_{\mu\nu,\rho\sigma}(k)\Big{|}_{k=xp} \tag{19}\] \[-\frac{1}{2}\omega^{\mu}_{\ \alpha}\omega^{\nu}_{\beta}\omega^{\lambda}_{\ \gamma}\int dx_{1}\int dx_{2}\Big{[}\frac{-if^{abc}}{N_{c}(N_{c}^{2}-1)}N^{ \alpha\beta\gamma}(x_{1},x_{2})+\frac{N_{c}d^{abc}}{(N_{c}^{2}-4)(N_{c}^{2}-1)} O^{\alpha\beta\gamma}(x_{1},x_{2})\Big{]}\] \[\times\frac{1}{x_{1}-i\epsilon}\frac{1}{x_{2}+i\epsilon}\frac{1}{ x_{2}-x_{1}-i\epsilon}S^{abc}_{\mu\nu\lambda,\rho\sigma}(x_{1}p,x_{2}p)\Big{\}},\] where \(\omega^{\mu}_{\ \alpha}=g^{\mu}_{\ \alpha}-p^{\mu}n_{\alpha}\). \(S_{\mu\nu,\rho\sigma}(k)\) is given by \[S_{\mu\nu,\rho\sigma}(k)=H_{\mu\nu,\rho\sigma}(k)2\pi\delta\Big{[}(k+q-P_{J/ \psi})^{2}\Big{]}, \tag{20}\] A typical diagram given by the quark distribution effect. This is canceled in the color-singlet case in SIDIS. where the hard part \(H_{\mu\nu,\rho\sigma}(k)\) is given by the diagrams in FIG. 2 by replacing the momentum \(xp\) with \(k\). \(S^{abc}_{\mu\nu\lambda,\rho\sigma}(x_{1}p,x_{2}p)\) can be separated into two parts as \[S^{abc}_{\mu\nu\lambda,\rho\sigma}(x_{1}p,x_{2}p)=H^{abc}_{L\mu\nu\lambda, \rho\sigma}(x_{1}p,x_{2}p)2\pi\delta\Big{[}(x_{2}p+q-P_{J/\psi})^{2}\Big{]}+H^ {abc}_{R\mu\nu\lambda,\rho\sigma}(x_{1}p,x_{2}p)2\pi\delta\Big{[}(x_{1}p+q-P_{ J/\psi})^{2}\Big{]}. \tag{21}\] \(H^{abc}_{L\mu\nu\lambda,\rho\sigma}(x_{1}p,x_{2}p)\) is given by the product of the diagrams in FIG. 3 and the complex conjugate of the diagrams in FIG. 2. \(H^{abc}_{R\mu\nu\lambda,\rho\sigma}(x_{1}p,x_{2}p)\) is the complex conjugate of \(H^{abc}_{L\mu\nu\lambda,\rho\sigma}(x_{1}p,x_{2}p)\). From our direct calculation of the diagrams, we have observed that \(H^{abc}_{L\mu\nu\lambda,\rho\sigma}(x_{1}p,x_{2}p)\) and \(H^{abc}_{R\mu\nu\lambda,\rho\sigma}(x_{1}p,x_{2}p)\) have the following structures. \[\frac{1}{x_{1}-i\epsilon}\frac{1}{x_{2}-x_{1}-i\epsilon}H^{abc}_{L \mu\nu\lambda,\rho\sigma}(x_{1}p,x_{2}p) = \frac{1}{x_{1}-i\epsilon}H^{1\,abc}_{L\mu\nu\lambda,\rho\sigma}(x _{2}p)+\frac{x_{2}}{(x_{1}-i\epsilon)^{2}}H^{2\,abc}_{L\mu\nu\lambda,\rho \sigma}(x_{2}p)\] \[+\frac{1}{x_{2}-x_{1}-i\epsilon}H^{3\,abc}_{L\mu\nu\lambda,\rho \sigma}(x_{2}p)+\frac{x_{2}}{(x_{2}-x_{1}-i\epsilon)^{2}}H^{4\,abc}_{L\mu\nu \lambda,\rho\sigma}(x_{2}p)\] \[+\frac{1}{x_{1}-Ax+i\epsilon}H^{5\,abc}_{L\mu\nu\lambda,\rho \sigma}(x_{2}p)+\frac{1}{x_{1}-(1-A)x-i\epsilon}H^{6\,abc}_{L\mu\nu\lambda, \rho\sigma}(x_{2}p),\] \[\frac{1}{x_{2}+i\epsilon}\frac{1}{x_{2}-x_{1}-i\epsilon}H^{abc}_ {R\mu\nu\lambda,\rho\sigma}(x_{1}p,x_{2}p) = \frac{1}{x_{2}+i\epsilon}H^{1\,abc}_{R\mu\nu\lambda,\rho\sigma}(x _{1}p)+\frac{x_{1}}{(x_{2}+i\epsilon)^{2}}H^{2\,abc}_{R\mu\nu\lambda,\rho \sigma}(x_{1}p) \tag{22}\] \[+\frac{1}{x_{2}-x_{1}-i\epsilon}H^{3\,abc}_{R\mu\nu\lambda,\rho \sigma}(x_{1}p)+\frac{x_{1}}{(x_{2}-x_{1}-i\epsilon)^{2}}H^{4\,abc}_{R\mu\nu \lambda,\rho\sigma}(x_{1}p)\] \[+\frac{1}{x_{2}-Ax-i\epsilon}H^{5\,abc}_{R\mu\nu\lambda,\rho \sigma}(x_{1}p)+\frac{1}{x_{2}-(1-A)x+i\epsilon}H^{6\,abc}_{R\mu\nu\lambda, \rho\sigma}(x_{1}p),\] where \(A=\frac{Q^{2}(1+\delta-z_{f})+m^{2}_{J/\psi}\delta}{Q^{2}(2-z_{f})}\). Substituting (1), (3) and (4) into \(W^{\rm twist-3}_{\rho\sigma}(p,q,P_{J/\psi})\), we can derive the following form. \[W^{\rm twist-3}_{\rho\sigma}(p,q,P_{J/\psi}) = 2\pi\langle{\cal O}^{J/\psi}(^{3}S_{1}^{[1]})\rangle\int\frac{dx} {x^{2}}\delta\Big{(}(xp+q-P_{J/\psi})^{2}\Big{)} \tag{23}\] \[\Bigg{\{}\Big{(}x\frac{d}{dx}G_{T}^{(1)}(x)-2G_{T}^{(1)}(x)\Big{)} H_{\rho\sigma}^{G1}+G_{T}^{(1)}(x)H_{\rho\sigma}^{G2}\] \[+\Big{(}x\frac{d}{dx}\Delta H_{T}^{(1)}(x)-2\Delta H_{T}^{(1)}(x) \Big{)}H_{\rho\sigma}^{H1}+\Delta H_{T}^{(1)}(x)H_{\rho\sigma}^{H2}\] \[+\int dx^{\prime}\sum_{i}\Big{[}\Big{(}\frac{1}{x-x^{\prime}-i \epsilon}H^{Ni}_{1L\rho\sigma}+\frac{x}{(x-x^{\prime}-i\epsilon)^{2}}H^{Ni}_{2 L\rho\sigma}+\frac{1}{x^{\prime}-Ax+i\epsilon}H^{Ni}_{3L\rho\sigma}\Big{)}N^{i}(x^{ \prime},x)\] \[+\Big{(}\frac{1}{x-x^{\prime}+i\epsilon}H^{Ni}_{1R\rho\sigma}+ \frac{x}{(x-x^{\prime}+i\epsilon)^{2}}H^{Ni}_{2R\rho\sigma}+\frac{1}{x^{ \prime}-Ax-i\epsilon}H^{Ni}_{3R\rho\sigma}\Big{)}N^{i}(x,x^{\prime})\] \[+\Big{(}\frac{1}{x-x^{\prime}-i\epsilon}H^{Oi}_{1L\rho\sigma}+ \frac{x}{(x-x^{\prime}-i\epsilon)^{2}}H^{Oi}_{2L\rho\sigma}+\frac{1}{x^{ \prime}-Ax+i\epsilon}H^{Oi}_{3L\rho\sigma}\Big{)}O^{i}(x^{\prime},x)\] \[+\Big{(}\frac{1}{x-x^{\prime}+i\epsilon}H^{Oi}_{1R\rho\sigma}+ \frac{x}{(x-x^{\prime}+i\epsilon)^{2}}H^{Oi}_{2R\rho\sigma}+\frac{1}{x^{ \prime}-Ax-i\epsilon}H^{Oi}_{3R\rho\sigma}\Big{)}O^{i}(x,x^{\prime})\Big{]} \Bigg{\}},\] where we used the shorthand notations \[N^{1,2,3}(x^{\prime},x)=\{N(x^{\prime},x),N(x,x-x^{\prime}),N(x^{\prime},x^{ \prime}-x)\},\ \ \ \ O^{1,2,3}(x^{\prime},x)=\{O(x^{\prime},x),O(x,x-x^{\prime}),O(x^{\prime},x^{ \prime}-x)\}. \tag{24}\] We changed the integral variable as \(x^{\prime}\to x-x^{\prime}\) for the denominators \(1/(x^{\prime}\pm i\epsilon)\) and \(1/(x^{\prime}-(1-A)x\pm i\epsilon)\) and used the symmetries (5). One can derive the cross section formula by taking a contraction with tensors \(\widetilde{\cal V}_{i}^{p\sigma}\). We have observed from our direct calculation that the contribution from the \(C\)-odd function \(O(x_{1},x_{2})\) is exactly canceled. This can be naturally understood from the fact that the charm-anticharm pair is charge neutral. We can eliminate \(x^{\prime}\)-integral by performing the following contour integrations. \[\frac{1}{x^{\prime}-x-i\epsilon}-\frac{1}{x^{\prime}-x+i\epsilon}=2 \pi i\delta(x^{\prime}-x),\ \ \ \ \ \ \frac{1}{x^{\prime}-Ax-i\epsilon}-\frac{1}{x^{\prime}-Ax+i\epsilon}=2\pi i\delta(x^{ \prime}-Ax),\] \[\frac{1}{(x^{\prime}-x+i\epsilon)^{2}}-\frac{1}{(x^{\prime}-x-i \epsilon)^{2}}=2\pi i\frac{\partial}{\partial x^{\prime}}\delta(x^{\prime}-x). \tag{25}\] The kinematical functions \(G_{T}^{(1)}(x)\) and \(\Delta H_{T}^{(1)}(x)\) can be eliminated by using the relations (6). As a result, we can write down the cross section formula only in terms of the \(C\)-even function \(N(x_{1},x_{2})\) as \[\frac{d^{6}\Delta\sigma}{dx_{B}dQ^{2}dz_{f}dP_{T}^{2}d\phi d\chi} \tag{26}\] \[= \frac{\alpha_{em}^{2}\alpha_{s}^{2}e_{c}^{2}(2\pi M_{N})}{4\pi S_ {ep}^{2}x_{B}^{2}Q^{2}}\Big{(}{\cal N}\langle{\cal O}^{J/\psi}(^{3}S_{1}^{[1]} )\rangle\Big{)}\sum_{i=1,\cdots,4,8,9}{\cal A}_{i}(\phi-\chi){\cal S}_{i}(\Phi _{S}-\chi)\int\frac{dx}{x^{2}}\delta\Big{[}\frac{P_{T}^{2}}{Q^{2}}-\Big{(}1- \frac{1}{\tilde{x}}+\frac{m_{J/\psi}^{2}}{z_{f}Q^{2}}\Big{)}\Big{(}1-\frac{1} {z_{f}}\Big{)}\Big{]}\] \[\times\Big{[}N(x,x)\sigma_{i}^{N1}+N(x,0)\sigma_{i}^{N2}+N(x,Ax) \sigma_{i}^{N3}+N(x,(1-A)x)\sigma_{i}^{N4}+N(Ax,-(1-A)x)\sigma_{i}^{N5}\Big{]},\] where \({\cal S}_{i}(\Phi_{S}-\chi)=\sin(\Phi_{S}-\chi)(i=1,2,3,4),\ \cos(\Phi_{S}-\chi)(i=8,9)\). All hard cross sections are shown in Appendix A. It turns out that the derivative terms \(\frac{d}{dx}N(x,x)\) and \(\frac{d}{dx}N(x,0)\) given by (25) are exactly canceled in the color singlet channel, which is consistent with the statement made in [42]. However, we have observed that non-zero contributions from non-derivative terms \(N(x,x)\) and \(N(x,0)\) survive even in the color-singlet case. In addition, there are other contributions \(N(x,Ax)\), \(N(x,(1-A)x)\) and \(N(Ax,Ax-x)\) that can be regarded as the hard-pole contribution in the sense of the conventional pole calculation. A similar contribution was also observed in the case of the twist-3 quark distribution [40]. If the color-singlet contribution is dominant in \(J/\psi\) production, LDME is exactly canceled between the unpolarized cross section (18) and the polarized cross section (26) in the ratio. Thus we can conclude that the SSA in the \(J/\psi\) production is an ideal observable to investigate the \(C\)-even twist-3 gluon distribution function \(N(x_{1},x_{2})\) by eliminating other nonperturbative effects like the \(C\)-odd type twist-3 effect and the hadronization effect of \(J/\psi\). ## IV Numerical calculation for the SSA in the \(J/\psi\) production We perform numerical simulations of the \(J/\psi\) SSA for the kinematics accessible at the future EIC experiment. The polarized cross section (26) can be expanded in terms of five structure functions \({\cal F}_{i}\) (\(i=1,2,\cdots 5\)) as \[\frac{d^{6}\Delta\sigma}{dx_{B}dQ^{2}dz_{f}dP_{T}^{2}d\phi d\chi} = \sin(\phi_{h}-\phi_{S})({\cal F}_{1}+{\cal F}_{2}\cos\phi_{h}+{ \cal F}_{3}\cos 2\phi_{h}) \tag{27}\] \[+\cos(\phi_{h}-\phi_{S})({\cal F}_{4}\sin\phi_{h}+{\cal F}_{5} \sin 2\phi_{h}),\] where the azimuthal dependences are defined by \[\phi_{h}=\phi-\chi,\ \ \ \ \phi_{h}-\phi_{S}=\Phi_{S}-\chi. \tag{28}\] The unpolarized cross section is given by \[\frac{d^{6}\sigma}{dx_{B}dQ^{2}dz_{f}dP_{T}^{2}d\phi d\chi}=\sigma_{1}^{\rm U }+\sigma_{2}^{\rm U}\cos\phi_{h}+\sigma_{3}^{\rm U}\cos 2\phi_{h}. \tag{29}\] We calculate five normalized structure functions [30], \[\frac{{\cal F}_{1}}{\sigma_{1}^{\rm U}},\ \ \ \frac{{\cal F}_{2}}{2\sigma_{1}^{\rm U }},\ \ \ \frac{{\cal F}_{3}}{2\sigma_{1}^{\rm U}},\ \ \ \ \frac{{\cal F}_{4}}{2\sigma_{1}^{\rm U}},\ \ \ \ \frac{{\cal F}_{5}}{2\sigma_{1}^{\rm U}}. \tag{30}\] We here show the explicit form of \({\cal F}_{1}/\sigma_{1}^{\rm U}\) that can be derived from (17), (18) and (26). \[\frac{{\cal F}_{1}}{\sigma_{1}^{\rm U}} = \frac{2\pi M_{N}}{[\frac{4}{y^{2}}(1-y+\frac{y^{2}}{2})\hat{ \sigma}_{1}-2\hat{\sigma}_{2}]\bar{x}G(\bar{x})}\Big{[}\frac{4}{y^{2}}(1-y+ \frac{y^{2}}{2})\Big{(}\sum_{i=1}^{5}N^{i}(\bar{x})\sigma_{1}^{Ni}\Big{)}-2 \Big{(}\sum_{i=1}^{5}N^{i}(\bar{x})\sigma_{2}^{Ni}\Big{)}\Big{]}, \tag{31}\] where we defined \[N^{1,2,3,4,5}(x)=\{N(x,x),\ N(x,0),\ N(x,Ax),\ N(x,(1-A)x),\ N(Ax,-(1-A)x)\}, \tag{32}\] \[\bar{x}=x_{B}\Big{(}1+\frac{m_{J/\psi}^{2}}{z_{f}Q^{2}}+\frac{P_{T}^{2}}{Q^{2 }}\frac{z_{f}}{1-z_{f}}\Big{)}. \tag{33}\] One can easily derive other normalized structure functions in the same way. The LDME is exactly canceled as stated above and, therefore, the nonperturbative effect simply arises from ratios of twist-3 and twist-2 gluon distributions. Figure 4: Numerical calculations for the normalized structure functions in (30). \(N^{1,2,3,4,5}\) respectively show the contributions from the five functions \(N(x,x)\), \(N(x,0)\), \(N(x,Ax)\), \(N(x,(1-A)x)\), \(N(Ax,-(1-A)x)\) with the model 1 function (34). The \(C\)-even function \(N(x_{1},x_{2})\) has not been well constrained by experiment so far. We use the following simple models used in [29]. \[\text{model 1}: 0.002xG(x), \tag{34}\] \[\text{model 2}: 0.0005\sqrt{x}G(x). \tag{35}\] Experimental investigations of the twist-3 gluon contributions were reported in the past couple of years [43; 44]. The magnitudes of the above models are consistent with the upper bound of the data. Each structure function depends on Figure 5: Numerical calculations for the normalized structure functions with the model 2 function (35). five types of \(C\)-even functions \(\{N(x,x)\), \(N(x,0)\), \(N(x,Ax)\), \(N(x,(1-A)x)\), \(N(Ax,-(1-A)x)\}\). We separately plot the contributions from those five functions by substituting one of the models into each function. Models for \(N(x,Ax)\), \(N(x,(1-A)x)\), \(N(Ax,-(1-A)x)\) should take the \(A\)-dependence into account for realistic simulations. However, we use the above \(A\)-independent models for these functions because our current knowledge about the functions is very limited. Our simulations are still beneficial because they show some differences in the qualitative behavior among the hard cross sections. We perform our simulations with typical EIC kinematic valuables [3]: \(\sqrt{S_{ep}}=45\) GeV, \(Q^{2}=10\) GeV\({}^{2}\), \(x_{B}=0.005\), \(P_{J/\psi}^{\perp}=2\) GeV. FIG. 4 and 5 respectively show our simulations with the model 1 and 2 for the five structure functions. The contributions from the five types of functions show different qualitative behaviors in each normalized structure function. These simulations could help to clarify which function is dominant in the \(J/\psi\) SSA by comparing with the data from EIC. Our simulations show that the difference in the \(x\)-dependence between the model 1 and 2 does not cause a significant change in the qualitative behaviors. However, we find that the magnitudes of the contributions are uniformly increased in the model 2 compared to the model 1. This reflects the fact that the model 2 is more singular with respect to \(x\) and, therefore, it is enhanced at the small value of \(x_{B}\). The observation of the enhancement could give us the information about the small-\(x\) behavior of the \(C\)-even function. The future EIC experiment plans to investigate the proton structure in a wide range of \(x_{B}\), \(10^{-3}\lesssim x_{B}\lesssim 0.5\). The measurement of \(J/\psi\) SSA at the EIC is a great opportunity to understand the small-\(x\) behavior of the higher twist gluon distribution function. ## V Summary We performed the first calculation for the twist-3 gluon contribution to the \(J/\psi\) SSA within the collinear framework. We observed that one of the possible twist-3 contributions, \(C\)-odd type twist-3 gluon contribution, is exactly canceled in the spin-dependent cross section formula. The cancellation partially happens to also the \(C\)-even type function. The derivative terms \(\frac{d}{dx}N(x,x)\) and \(\frac{d}{dx}N(x,0)\) are exactly canceled, while nonderivative terms \(N(x,x)\) and \(N(x,0)\) survive in the cross section. Once it was stated that the soft-gluon-pole type contribution is exactly canceled in the color singlet contribution in SIDIS[42]. However, our result shows that this statement is not valid in high-\(P_{J/\psi}^{\perp}\) region where the relationship between the TMD and the collinear frameworks does not hold. In addition, we obtained the contributions from another type of the pole contributions whose existence was pointed out in [40] in the case of the twist-3 quark distribution. We have completed the LO cross section formula for the \(J/\psi\) SSA in the color-singlet case. Our result enables future investigations of the twist-3 gluon distribution function at the EIC. We performed some numerical simulations for the structure functions in the polarized cross section at the EIC kinematics. The qualitative differences among the five types of the \(C\)-even functions could help to pin down the dominant contribution to the \(J/\psi\) SSA. Our simulations show the correlation between the magnitudes of the structure functions and the \(x\)-dependence of the twist-3 gluon distribution function. Future investigations in a wide range of the Bjorken variable at the EIC will provide rich information about the little-known twist-3 gluon distribution function. ## Appendix A List of hard cross sections (1) Hard cross sections of the unpolarized cross section in (18) \[\hat{\sigma}_{1} = \frac{1}{z_{f}^{2}[Q^{2}(-1+\hat{x})+m_{J/\psi}^{2}\hat{x}]^{2}[ m_{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f})]^{2}} \tag{11}\] \[\times 64m_{J/\psi}^{2}\hat{x}^{2}\Big{(}m_{J/\psi}^{6}\hat{x}^{2}(1 -z_{f}+z_{f}^{2})+m_{J/\psi}^{2}Q^{4}[1+(-2+4\hat{x}-7\hat{x}^{2})z_{f}+(3-18 \hat{x}+22\hat{x}^{2})z_{f}^{2}\] \[-2(1-8\hat{x}+9\hat{x}^{2})z_{f}^{3}+(1-6\hat{x}+6\hat{x}^{2})z_ {f}^{4}]+Q^{6}(-1+\hat{x})z_{f}[-z_{f}+\hat{x}(-1+2z_{f})]+m_{J/\psi}^{4}Q^{2} \hat{x}[z_{f}(-3+3z_{f}-2z_{f}^{2})\] \[+\hat{x}(-5+17z_{f}-15z_{f}^{2}+6z_{f}^{3})]\Big{)}\] \[\hat{\sigma}_{2} = \frac{1}{z_{f}^{2}[Q^{2}(-1+\hat{x})+m_{J/\psi}^{2}\hat{x}]^{2}[ m_{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f})]^{2}} \tag{12}\] \[\times 64m_{J/\psi}^{2}Q^{2}\hat{x}^{2}\Big{(}Q^{4}(-1+\hat{x})^{2}z_{f} ^{2}+2m_{J/\psi}^{2}Q^{2}(-1+\hat{x})\hat{x}z_{f}(-1+6z_{f}-6z_{f}^{2}+2z_{f}^ {3})\] \[+m_{J/\psi}^{4}\hat{x}^{2}(-2+10z_{f}-11z_{f}^{2}+4z_{f}^{3})\Big{)}\] \[\hat{\sigma}_{3} = \frac{1}{z_{f}[Q^{2}(-1+\hat{x})+m_{J/\psi}^{2}\hat{x}]^{2}[m_{J/\psi }^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f})]^{2}} \tag{10}\] \[\times 64m_{J/\psi}^{2}QP_{T}\hat{x}^{3}\Big{(}Q^{4}(-1+\hat{x})z_{f}+m_{ J/\psi}^{4}\hat{x}(3-4z_{f}+2z_{f}^{2})+m_{J/\psi}^{2}Q^{2}[z_{f}(-3+4z_{f}-2z_{f}^{2})\] \[+\hat{x}(-1+9z_{f}-10z_{f}^{2}+4z_{f}^{3})]\Big{)}\] \[\hat{\sigma}_{4} = -\frac{64m_{J/\psi}^{4}\hat{x}^{3}(-1+z_{f})[m_{J/\psi}^{2}\hat{ x}+Q^{2}(-1+\hat{x})z_{f}][m_{J/\psi}^{2}+Q^{2}(-1+4z_{f}-2z_{f}^{2})]}{z_{f}^{2}[Q ^{2}(-1+\hat{x})+m_{J/\psi}^{2}\hat{x}]^{2}[m_{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat {x}-z_{f})]^{2}} \tag{11}\] \[\hat{\sigma}_{8} = \hat{\sigma}_{9}=0 \tag{12}\] (2) Hard cross sections of \(N(x,x)\) in (26) \[\hat{\sigma}_{1}^{N1} = \frac{-1}{(1-z_{f})z_{f}^{2}Q^{2}[Q^{2}(-1+\hat{x})+m_{J/\psi}^{2 }\hat{x}]^{3}[m_{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f})]^{3}}\] \[\times 128m_{J/\psi}^{2}P_{T}\hat{x}^{3}\Big{(}2m_{J/\psi}^{10} \hat{x}^{4}(5-6z_{f}+3z_{f}^{2})+4Q^{10}(-1+\hat{x})z_{f}[1-5z_{f}+3z_{f}^{2}- z_{f}^{3}\] \[+2m_{J/\psi}^{8}Q^{2}\hat{x}^{3}[-2z_{f}(8-9z_{f}+5z_{f}^{2})+ \hat{x}(-9+58z_{f}-65z_{f}^{2}+26z_{f}^{3})]\] \[+m_{J/\psi}^{6}Q^{4}\hat{x}^{2}[18-34z_{f}+77z_{f}^{2}-63z_{f}^{3} +30z_{f}^{4}-16\hat{x}z_{f}(-3+19z_{f}-20z_{f}^{2}+8z_{f}^{3})\] \[+2\hat{x}^{2}(-33+112z_{f}-65z_{f}^{2}-18z_{f}^{3}+24z_{f}^{4})]+ m_{J/\psi}^{4}Q^{6}\hat{x}[z_{f}(-54+106z_{f}-123z_{f}^{2}+71z_{f}^{3}-24z_{f}^{4})\] \[-4\hat{x}^{2}z_{f}(-25+84z_{f}-27z_{f}^{2}-32z_{f}^{3}+24z_{f}^{4} )+2\hat{x}^{3}(-19+24z_{f}+81z_{f}^{2}-114z_{f}^{3}+48z_{f}^{4})\] \[+2\hat{x}(-9+65z_{f}-123z_{f}^{2}+224z_{f}^{3}-177z_{f}^{4}+62z_{f }^{5})]+m_{J/\psi}^{2}Q^{8}[4-12z_{f}+47z_{f}^{2}-74z_{f}^{3}+67z_{f}^{4}-32z_ {f}^{5}+8z_{f}^{6}\] \[+4\hat{x}^{4}z_{f}(-13+41z_{f}-35z_{f}^{2}+12z_{f}^{3})-8\hat{x} ^{3}z_{f}(-3+31z_{f}^{2}-32z_{f}^{3}+12z_{f}^{4})\] \[+\hat{x}z_{f}(28-220z_{f}+363z_{f}^{2}-359z_{f}^{3}+188z_{f}^{4}- 48z_{f}^{5})\] \[+\hat{x}^{2}(12-84z_{f}+229z_{f}^{2}-157z_{f}^{3}+152z_{f}^{4}-11 6z_{f}^{5}+48z_{f}^{6})]\Big{)}\] \[\hat{\sigma}_{2}^{N1} = \frac{-1}{(1-z_{f})z_{f}^{2}[Q^{2}(-1+\hat{x})+m_{J/\psi}^{2} \hat{x}]^{3}[m_{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f})]^{3}}\] \[\times 512m_{J/\psi}^{2}P_{T}\hat{x}^{3}\Big{(}Q^{8}(-1+\hat{x})^{2 }z_{f}^{2}(3+\hat{x}^{2}-2z_{f}-2\hat{x}z_{f}+z_{f}^{2})+m_{J/\psi}^{8}\hat{x} ^{4}(-4+20z_{f}-23z_{f}^{2}+8z_{f}^{3})\] \[+2m_{J/\psi}^{6}Q^{2}\hat{x}^{3}[z_{f}(5-26z_{f}+29z_{f}^{2}-10z_{ f}^{3})+2\hat{x}(-2+9z_{f}-6z_{f}^{2}-2z_{f}^{3}+2z_{f}^{4})]\] \[+m_{J/\psi}^{4}Q^{4}\hat{x}^{2}[-4+24z_{f}-50z_{f}^{2}+84z_{f}^{3} -67z_{f}^{4}+20z_{f}^{5}+2\hat{x}z_{f}(6-29z_{f}+13z_{f}^{2}+12z_{f}^{3}-8z_{f} ^{4})\] \[+2\hat{x}^{2}(-2+6z_{f}+11z_{f}^{2}-20z_{f}^{3}+8z_{f}^{4})]+2m_{J/ \psi}^{2}Q^{6}\hat{x}z_{f}[3-21z_{f}+36z_{f}^{2}-35z_{f}^{3}+18z_{f}^{4}-4z_{f} ^{5}\] \[+2\hat{x}^{3}(-1+6z_{f}-6z_{f}^{2}+2z_{f}^{3})+\hat{x}^{2}(1-4z_{f} -17z_{f}^{2}+22z_{f}^{3}-8z_{f}^{4})+\hat{x}(-2+13z_{f}-7z_{f}^{2}+9z_{f}^{3}-10z _{f}^{4}+4z_{f}^{5})]\Big{)}\] \[\hat{\sigma}_{3}^{N1} = \frac{1}{z_{f}^{3}Q[Q^{2}(-1+\hat{x})+m_{J/\psi}^{2}\hat{x}]^{3}[m_{ J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f})]^{3}} \tag{10}\] \[\times 128m_{J/\psi}^{2}\hat{x}^{2}\Big{(}2m_{J/\psi}^{10}\hat{x}^{ 5}(13-21z_{f}+10z_{f}^{2})+Q^{10}(-1+\hat{x})^{2}z_{f}^{2}[4\hat{x}^{3}+3(-1+z _{f})-8\hat{x}^{2}z_{f}+\hat{x}(9-8z_{f}+4z_{f}^{2})]\] \[+m_{J/\psi}^{8}Q^{2}\hat{x}^{4}[z_{f}(-83+129z_{f}-62z_{f}^{2})+ \hat{x}(46-78z_{f}^{2}+52z_{f}^{3})]\] \[+m_{J/\psi}^{6}Q^{4}\hat{x}^{3}[22-48z_{f}+148z_{f}^{2}-177z_{f}^{ 3}+80z_{f}^{4}-\hat{x}z_{f}(101+53z_{f}-226z_{f}^{2}+136z_{f}^{3})\] \[+2\hat{x}^{2}(7+62z_{f}-79z_{f}^{2}+14z_{f}^{3}+16z_{f}^{4})]+m_{ J/\psi}^{2}Q^{8}\hat{x}z_{f}[4+35z_{f}-92z_{f}^{2}+100z_{f}^{3}-58z_{f}^{4}+16z_{ f}^{5}\] \[+\hat{x}^{4}(-2+66z_{f}-76z_{f}^{2}+32z_{f}^{3})+\hat{x}^{3}(1-71z _{f}-50z_{f}^{2}+120z_{f}^{3}-64z_{f}^{4})\] \[+\hat{x}^{2}(-2+94z_{f}-15z_{f}^{2}-6z_{f}^{3}-28z_{f}^{4}+32z_{f} ^{5})-\hat{x}(1+124z_{f}-233z_{f}^{2}+246z_{f}^{3}-150z_{f}^{4}+48z_{f}^{5})]\] \[+m_{J/\psi}^{4}Q^{6}\hat{x}^{2}[z_{f}(-63+143z_{f}-192z_{f}^{2}+14 8z_{f}^{3}-54z_{f}^{4})+2\hat{x}^{3}(-3+40z_{f}+z_{f}^{2}-50z_{f}^{3}+32z_{f} ^{4})\] \[-\hat{x}^{2}z_{f}(17+245z_{f}-246z_{f}^{2}+16z_{f}^{3}+64z_{f}^{4} )+\hat{x}(-10+110z_{f}-123z_{f}^{2}+248z_{f}^{3}-282z_{f}^{4}+132z_{f}^{5})]\Big{)}\] \[\hat{\sigma}_{4}^{N1} = \frac{1}{z_{f}^{2}Q^{2}[Q^{2}(-1+\hat{x})+m_{J/\psi}^{2}\hat{x}]^ {3}[m_{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f})]^{3}} \tag{11}\] \[\times 128m_{J/\psi}^{4}P_{T}\hat{x}^{3}\Big{(}2m_{J/\psi}^{8} \hat{x}^{4}(-5+3z_{f})+2m_{J/\psi}^{6}Q^{2}\hat{x}^{3}[-6(-2+z_{f})z_{f}+\hat{ x}(-7-11z_{f}+10z_{f}^{2})]\] \[+m_{J/\psi}^{4}Q^{4}\hat{x}^{2}[-6-13z_{f}^{2}+6z_{f}^{3}+12\hat{ x}z_{f}(1+6z_{f}-4z_{f}^{2})+2\hat{x}^{2}(1-27z_{f}+6z_{f}^{2}+8z_{f}^{3})]\] \[+Q^{8}(-1+\hat{x})z_{f}[-3(-1+z_{f})z_{f}+4\hat{x}^{3}(2-7z_{f}+4z _{f}^{2})-4\hat{x}^{2}(-1+7z_{f}-16z_{f}^{2}+8z_{f}^{3})\] \[+\hat{x}(12-57z_{f}+76z_{f}^{2}-52z_{f}^{3}+16z_{f}^{4})]+m_{J/ \psi}^{2}Q^{6}\hat{x}[-z_{f}(10+4z_{f}+z_{f}^{2})-4\hat{x}^{2}z_{f}(4-21z_{f}+ 8z_{f}^{3})\] \[+2\hat{x}^{3}(3-9z_{f}-18z_{f}^{2}+16z_{f}^{3})+2\hat{x}(5-28z_{f} +35z_{f}^{2}-47z_{f}^{3}+22z_{f}^{4})]\Big{)}\] \[\hat{\sigma}_{8}^{N1} = \frac{1}{z_{f}^{3}Q[Q^{2}(-1+\hat{x})+m_{J/\psi}^{2}\hat{x}]^{2} [m_{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f})]^{2}} \tag{12}\] \[\times 128m_{J/\psi}^{2}\hat{x}^{2}\Big{(}-Q^{6}(-1+\hat{x})z_{f}^{ 2}+2m_{J/\psi}^{6}\hat{x}^{3}(1-3z_{f}+2z_{f}^{2})\] \[+m_{J/\psi}^{4}Q^{2}\hat{x}^{2}(1-3z_{f}+2z_{f}^{2})[-3z_{f}+2\hat {x}(1+z_{f})]+m_{J/\psi}^{2}Q^{4}\hat{x}z_{f}[z_{f}(-1-2z_{f}+2z_{f}^{2})\] \[+\hat{x}^{2}(2-6z_{f}+4z_{f}^{2})+\hat{x}(-1+z_{f}+4z_{f}^{2}-4z_{f}^{ 3})]\Big{)}\] \[\hat{\sigma}_{9}^{N1} = \frac{128m_{J/\psi}^{4}P_{T}\hat{x}^{3}}{z_{f}^{2}Q^{2}[Q^{2}(-1+ \hat{x})+m_{J/\psi}^{2}\hat{x}]^{2}[m_{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f} )]^{2}}\Big{(}2m_{J/\psi}^{4}\hat{x}^{2}(-1+z_{f}) \tag{13}\] \[+2m_{J/\psi}^{2}Q^{2}\hat{x}(-1+z_{f})(\hat{x}-z_{f}+2\hat{x}z_{f} )+Q^{4}z_{f}[4\hat{x}^{2}(-1+z_{f})+z_{f}-4\hat{x}(-1+z_{f})z_{f}]\Big{)}\] \[\hat{\sigma}_{8}^{N1} = \frac{1}{z_{f}^{3}Q[Q^{2}(-1+\hat{x})+m_{J/\psi}^{2}\hat{x}]^{2}[m _{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f})]^{2}} \tag{14}\] \[\times 128m_{J/\psi}^{2}\hat{x}^{2}\Big{(}-Q^{6}(-1+\hat{x})z_{f}^{ 2}+2m_{J/\psi}^{6}\hat{x}^{3}(1-3z_{f}+2z_{f}^{2})\] \[+m_{J/\psi}^{4}Q^{2}\hat{x}^{2}(1-3z_{f}+2z_{f}^{2})[-3z_{f}+2 \hat{x}(1+z_{f})]+m_{J/\psi}^{2}Q^{4}\hat{x}z_{f}[z_{f}(-1-2z_{f}+2z_{f}^{2})\] \[+\hat{x}^{2}(2-6z_{f}+4z_{f}^{2})+\hat{x}(-1+z_{f}+4z_{f}^{2}-4z_{f}^{ 3})]\Big{)}\] \[\hat{\sigma}_{9}^{N1} = \frac{128m_{J/\psi}^{4}P_{T}\hat{x}^{3}}{z_{f}^{2}Q^{2}[Q^{2}(-1+ \hat{x})+m_{J/\psi}^{2}\hat{x}]^{2}[m_{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f} )]^{2}}\Big{(}2m_{J/\psi}^{4}\hat{x}^{2}(-1+z_{f})\] (15) \[+2m_{J/\psi}^{2}Q^{2}\hat{x}(-1+z_{f})(\hat{x}-z_{f}+2\hat{x}z_{f} )+Q^{4}z_{f}[4\hat{x}^{2}(-1+z_ \[\hat{\sigma}_{1}^{N2} = \frac{1}{(1-z_{f})z_{f}^{2}Q^{2}[Q^{2}(-1+\hat{x})+m_{J/\psi}^{2} \hat{x}]^{3}[m_{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f})]^{3}}\] \[\times 128m_{J/\psi}^{2}F_{T}\hat{x}^{3}\Big{(}2m_{J/\psi}^{10}\hat{ x}^{2}(9-10z_{f}+7z_{f}^{2})+Q^{10}(-1+\hat{x})z_{f}[3-13z_{f}+13z_{f}^{2}-5z_{f}^{3 }+\hat{x}^{3}(-3+6z_{f})\] \[+\hat{x}^{2}(-2+9z_{f}-12z_{f}^{2})+\hat{x}z_{f}(3-5z_{f}+6z_{f}^{ 2})]+2m_{J/\psi}^{8}Q^{2}\hat{x}^{3}[-2z_{f}(13-14z_{f}+9z_{f}^{2})\] \[+\hat{x}(-21+118z_{f}-117z_{f}^{2}+50z_{f}^{3})]+m_{J/\psi}^{6}Q^ {4}\hat{x}^{2}[42-90z_{f}+131z_{f}^{2}-89z_{f}^{3}+42z_{f}^{4}\] \[-4\hat{x}z_{f}(-27+143z_{f}-142z_{f}^{2}+58z_{f}^{3})+6\hat{x}^{2 }(-23+72z_{f}-31z_{f}^{2}-14z_{f}^{3}+16z_{f}^{4})]\] \[+m_{J/\psi}^{4}Q^{6}\hat{x}[z_{f}(-94+206z_{f}-209z_{f}^{2}+105z_{ f}^{3}-32z_{f}^{4})-4\hat{x}^{2}z_{f}(-46+133z_{f}-11z_{f}^{2}-76z_{f}^{3}+48z_{f}^{ 4})\] \[+2\hat{x}^{3}(-39+32z_{f}+205z_{f}^{2}-234z_{f}^{3}+96z_{f}^{4})+ 2\hat{x}(-17+145z_{f}-345z_{f}^{2}+514z_{f}^{3}-361z_{f}^{4}+118z_{f}^{5})]\] \[+m_{J/\psi}^{2}Q^{8}[-4+12z_{f}+37z_{f}^{2}-106z_{f}^{3}+109z_{f} ^{4}-52z_{f}^{5}+12z_{f}^{6}+4\hat{x}^{4}z_{f}(-31+93z_{f}-71z_{f}^{2}+24z_{f} ^{3})\] \[-4\hat{x}^{3}z_{f}(-7-27z_{f}+152z_{f}^{2}-134z_{f}^{3}+48z_{f}^{ 4})+\hat{x}z_{f}(96-572z_{f}+1041z_{f}^{2}-969z_{f}^{3}+460z_{f}^{4}-104z_{f}^{ 5})\] \[+\hat{x}^{2}(20-92z_{f}+211z_{f}^{2}-199z_{f}^{3}+316z_{f}^{4}-244 z_{f}^{5}+96z_{f}^{6})]\Big{)}\] \[\hat{\sigma}_{2}^{N2} = \frac{1}{(1-z_{f})z_{f}^{2}[Q^{2}(-1+\hat{x})+m_{J/\psi}^{2}\hat {x}]^{3}[m_{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f})]^{3}} \tag{113}\] \[\times 512m_{J/\psi}^{2}P_{T}\hat{x}^{3}\Big{(}Q^{8}(-1+\hat{x})^{2 }z_{f}^{2}(5+3\hat{x}^{2}+\hat{x}(4-6z_{f})-6z_{f}+3z_{f}^{2})+m_{J/\psi}^{8} \hat{x}^{4}(-8+40z_{f}-45z_{f}^{2}+16z_{f}^{3})\] \[+2m_{J/\psi}^{6}Q^{2}\hat{x}^{3}[z_{f}(9-46z_{f}+51z_{f}^{2}-18z_{ f}^{3})+\hat{x}(-8+36z_{f}-22z_{f}^{2}-8z_{f}^{3}+8z_{f}^{4})]\] \[+m_{J/\psi}^{4}Q^{4}\hat{x}^{2}[-8+48z_{f}-106z_{f}^{2}+160z_{f}^{ 3}-121z_{f}^{4}+36z_{f}^{5}+22\hat{x}_{f}(10-45z_{f}+11z_{f}^{2}+28z_{f}^{3}-16 z_{f}^{4})\] \[+\hat{x}^{2}(-8+24z_{f}+50z_{f}^{2}-80z_{f}^{3}+32z_{f}^{4})]+2m_{J/ \psi}^{2}Q^{6}\hat{x}z_{f}[7-45z_{f}+84z_{f}^{2}-79z_{f}^{3}+38z_{f}^{4}-8z_{f} ^{5}\] \[+\hat{x}^{3}(-4+26z_{f}-24z_{f}^{2}+8z_{f}^{3})+\hat{x}^{2}(1-43z_ {f}^{2}+46z_{f}^{3}-16z_{f}^{4})+\hat{x}(-4+19z_{f}-17z_{f}^{2}+25z_{f}^{3}-22 z_{f}^{4}+8z_{f}^{5})]\Big{)}\] \[\hat{\sigma}_{4}^{N2} = \frac{-1}{z_{f}^{2}Q^{2}[Q^{2}(-1+\hat{x})+m_{J/\psi}^{2}\hat{x}]^{3} [m_{J/\psi}^{2}\hat{x}+Q^{2}(1+\hat{x}-z_{f})]^{3}} \tag{146}\] \[\times 128m_{J/\psi}^{4}P_{T}\hat{x}^{3}\Big{(}2m_{J/\psi}^{8} \hat{x}^{4}(-1+z_{f})+2m_{J/\psi}^{6}Q^{2}\hat{x}^{3}[2(-2+z_{f})z_{f}+\hat{x} (-3+z_{f}+2z_{f}^{2})]\] \[+m_{J/\psi}^{4}Q^{4}\hat{x}^{2}[-6+12z_{f}-17z_{f}^{2}+6\hat{x}^{ 2}_{f}+6\hat{x}^{2}(-1-z_{f}+2z_{f}^{2})-4\hat{x}z_{f}(-3-4z_{f}+4z_{f}^{2})]\] \[+Q^{8}z_{f}[4\hat{x}^{4}(-1+z_{f})-4\hat{x}^{3}(1-6z_{f}+4z_{f}^{ 2})+z_{f}(-11+23z_{f}-16z_{f}^{2}+4z_{f}^{3})\] \[+\hat{x}^{2}(-12+31z_{f}-44z_{f}^{2}+20z_{f}^{3})+\hat{x}(4-15z_{f }^{2}+20z_{f}^{3}-8z_{f}^{4})]\] \[+m_{J/\psi}^{2}Q^{6}\hat{x}[4\hat{x}^{2}(11-8z_{f})z_{f}^{2}+2 \hat{x}^{3}(-1-5z_{f}+6z_{f}^{2})+z_{f}(14-32z_{f}+27z_{f}^{2}-8z_{f}^{3})\] \[+2\hat{x}(-3+7z_{f}^{2}-19z_{f}^{3}+10z_{f}^{4})]\Big{)}\] (4) Hard cross sections of \(N(x,Ax)\) in (26) \[\sigma_{1}^{N3} = \frac{-1}{(1-z_{f})z_{f}^{2}[Q^{2}(-1+\hat{x})+\hat{x}m_{J/\psi}^ {2}]^{3}[Q^{2}(1+\hat{x}-z_{f})+\hat{x}m_{J/\psi}^{2}]^{3}} \tag{148}\] \[\times 128Q^{2}P_{T}\hat{x}^{3}m_{J/\psi}^{2}\Big{(}4Q^{6}(-1+ \hat{x})(-2+z_{f})z_{f}[-z_{f}+\hat{x}(-1+2z_{f})]\] \[+m_{J/\psi}^{2}Q^{4}[-8+25z_{f}-50z_{f}^{2}+53z_{f}^{3}-32z_{f}^{ 4}+8z_{f}^{5}+\hat{x}^{2}(-4+103z_{f}-365z_{f}^{2}+446z_{f}^{3}-240z_{f}^{4}+48 z_{f}^{5})\] \[-\hat{x}(4+48z_{f}-275z_{f}^{2}+38z_{f}^{3}-224z_{f}^{4}+48z_{f}^ {5})]\] \[+Q^{2}m_{J/\psi}^{4}\hat{x}[-2+50z_{f}-83z_{f}^{2}+59z_{f}^{3}-16 z_{f}^{4}+\hat{x}(70-300z_{f}+396z_{f}^{2}-226z_{f}^{3}+48z_{f}^{4})]\] \[+m_{J/\psi}^{6}\hat{x}^{2}(-22+37z_{f}-27z_{f}^{2}+8z_{f}^{3}) \Big{)}\] \[\sigma_{2}^{N3} = \frac{-1}{(1-z_{f})z_{f}^{2}[Q^{2}(-1+\hat{x})+\hat{x}m_{J/\psi}^{2} ]^{3}[Q^{2}(1+\hat{x}-z_{f})+\hat{x}m_{J/\psi}^{2}]^{3}} \tag{149}\] \[512Q^{4}P_{T}\hat{x}^{3}(-2+z_{f})m_{J/\psi}^{2}\Big{(}Q^{4}(-1+ \hat{x})^{2}z_{f}^{2}+2m_{J/\psi}^{2}Q^{2}(-1+\hat{x})\hat{x}z_{f}(-2+11z_{f}-1 2z_{f}^{2}+4z_{f}^{3})\] \[+m_{J/\psi}^{4}\hat{x}^{2}(-4+20z_{f}-23z_{f}^{2}+8z_{f}^{3})\Big{)}\] \[\sigma_{3}^{N3} = \frac{1}{z_{f}^{3}[Q^{2}(-1+\hat{x})+\hat{x}m_{J/\psi}^{2}]^{3}[Q ^{2}(1+\hat{x}-z_{f})+\hat{x}m_{J/\psi}^{2}]^{3}} \tag{150}\] \[\times 256Q^{3}\hat{x}^{3}m_{J/\psi}^{2}\Big{(}2Q^{6}(-1+\hat{x}) ^{2}(-2+z_{f})z_{f}^{2}+m_{J/\psi}^{2}Q^{4}z_{f}[-1-18z_{f}+41z_{f}^{2}-32z_{f} ^{3}+8z_{f}^{4}\] \[-2\hat{x}(1-41z_{f}+76z_{f}^{2}-52z_{f}^{3}+12z_{f}^{4})+\hat{x}^ {2}(3-64z_{f}+111z_{f}^{2}-72z_{f}^{3}+16z_{f}^{4})]\] \[+m_{J/\psi}^{4}Q^{2}\hat{x}[1+43z_{f}-91z_{f}^{2}+67z_{f}^{3}-16z_ {f}^{4}+\hat{x}(7-91z_{f}+161z_{f}^{2}-107z_{f}^{3}+24z_{f}^{4})]\] \[+m_{J/\psi}^{6}\hat{x}^{2}(-25+50z_{f}-35z_{f}^{2}+8z_{f}^{3}) \Big{)}\] \[\sigma_{4}^{N3} = \frac{1}{z_{f}^{2}[Q^{2}(-1+\hat{x})+\hat{x}m_{J/\psi}^{2}]^{3}[Q ^{2}(1+\hat{x}-z_{f})+\hat{x}m_{J/\psi}^{2}]^{3}} \tag{151}\] \[\times 128Q^{2}P_{T}\hat{x}^{3}m_{J/\psi}^{4}\Big{(}Q^{4}z_{f}[3-3z _{f}+\hat{x}(12-67z_{f}+64z_{f}^{2}-16z_{f}^{3})+\hat{x}^{2}(-15+70z_{f}-64z_{ f}^{2}+16z_{f}^{3})]\] \[+m_{J/\psi}^{2}Q^{2}\hat{x}[-2-16z_{f}+11z_{f}^{2}+2\hat{x}(-7+43 z_{f}-37z_{f}^{2}+8z_{f}^{3})]+m_{J/\psi}^{4}\hat{x}^{2}(18-11z_{f})\Big{)}\] \[\sigma_{8}^{N3} = -\frac{256Q^{3}\hat{x}^{3}(1-z_{f})^{2}m_{J/\psi}^{4}[Q^{2}(-1+ \hat{x})z_{f}+\hat{x}m_{J/\psi}^{2}]}{z_{f}^{3}[Q^{2}(-1+\hat{x})+\hat{x}m_{J /\psi}^{2}]^{2}[Q^{2}(1+\hat{x}-z_{f})+\hat{x}m_{J/\psi}^{2}]^{3}} \tag{152}\] \[\sigma_{9}^{N3} = -\frac{128Q^{2}P_{T}\hat{x}^{3}m_{J/\psi}^{4}\Big{(}Q^{2}z_{f}[1 -z_{f}+\hat{x}(-3+2z_{f})]+m_{J/\psi}^{2}\hat{x}(-2+z_{f})\Big{)}}{z_{f}^{2}[ Q^{2}(-1+\hat{x})+\hat{x}m_{J/\psi}^{2}]^{2}[Q^{2}(1+\hat{x}-z_{f})+\hat{x}m_{J/ \psi}^{2}]^{3}} \tag{153}\] (5) Hard cross sections of \(N(x,(1-A)x)\) in (26) \[\sigma_{1}^{N4} = \frac{1}{(1-z_{f})z_{f}^{2}[Q^{2}(-1+\hat{x})+\hat{x}m_{J/\psi}^ {2}]^{3}[Q^{2}(1+\hat{x}-z_{f})+\hat{x}m_{J/\psi}^{2}]^{3}} \tag{154}\] \[\times 128Q^{2}P_{T}\hat{x}^{3}m_{J/\psi}^{2}\Big{(}4Q^{6}(-1+ \hat{x})z_{f}[-1+6z_{f}-3z_{f}^{2}+\hat{x}(3-9z_{f}+4z_{f}^{2})]+m_{J/\psi}^{2} Q^{4}[-8+15z_{f}-34z_{f}^{2}+37z_{f}^{3}\] \[-24z_{f}^{4}+6z_{f}^{5}+\hat{x}(4-80z_{f}+361z_{f}^{2}-449z_{f}^{3} +248z_{f}^{4}-52z_{f}^{5})\] \[+\hat{x}^{2}(-4+101z_{f}-379z_{f}^{2}+446z_{f}^{3}-236z_{f}^{4}+4 8z_{f}^{5})]\] \[+m_{J/\psi}^{4}Q^{2}\hat{x}[2+46z_{f}-69z_{f}^{2}+49z_{f}^{3}-12 z_{f}^{4}+\hat{x}(70-310z_{f}+400z_{f}^{2}-236z_{f}^{3}+52z_{f}^{4})]\] \[+m_{J/\psi}^{6}\hat{x}^{2}(-22+33z_{f}-25z_{f}^{2}+6z_{f}^{3}) \Big{)}\] \[\sigma_{2}^{N4} = \frac{1}{(1-z_{f})z_{f}^{2}[Q^{2}(-1+\hat{x})+\hat{x}m_{J/\psi}^{ 2}]^{3}[Q^{2}(1+\hat{x}-z_{f})+\hat{x}m_{J/\psi}^{2}]^{3}} \tag{155}\] \[\times 1024Q^{4}P_{T}\hat{x}^{3}(-2+z_{f})m_{J/\psi}^{2}\Big{(}Q^{4 }(-1+\hat{x})^{2}z_{f}^{2}+2m_{J/\psi}^{2}Q^{2}(-1+\hat{x})\hat{x}z_{f}(-1+6z_ {f}-6z_{f}^{2}+2z_{f}^{3})\] \[+m_{J/\psi}^{4}\hat{x}^{2}(-2+10z_{f}-11z_{f}^{2}+4z_{f}^{3})\Big{)}\] \[\sigma_{3}^{N4} = \frac{-1}{z_{f}^{3}[Q^{2}(-1+\hat{x})+\hat{x}m_{J/\psi}^{2}]^{3}[Q^{ 2}(1+\hat{x}-z_{f})+\hat{x}m_{J/\psi}^{2}]^{3}} \tag{103}\] \[\times 128Q^{3}\hat{x}^{2}m_{J/\psi}^{2}\Big{(}Q^{6}(-1+\hat{x})^{2 }z_{f}^{2}[3-3z_{f}+\hat{x}(-13+8z_{f})]+m_{J/\psi}^{2}Q^{4}\hat{x}z_{f}[-2-55z _{f}+112z_{f}^{2}-80z_{f}^{3}+20z_{f}^{4}\] \[-4\hat{x}z_{f}(-46+83z_{f}-55z_{f}^{2}+13z_{f}^{3})+\hat{x}^{2}(2-1 29z_{f}+220z_{f}^{2}-140z_{f}^{3}+32z_{f}^{4})]\] \[+m_{J/\psi}^{4}Q^{2}\hat{x}^{2}[-2+110z_{f}-215z_{f}^{2}+157z_{f}^ {3}-40z_{f}^{4}+\hat{x}(14-184z_{f}+319z_{f}^{2}-216z_{f}^{3}+52z_{f}^{4})]\] \[+m_{J/\psi}^{6}\hat{x}^{3}(-50+102z_{f}-77z_{f}^{2}+20z_{f}^{3}) \Big{)}\] \[\sigma_{4}^{N4} = \frac{-1}{z_{f}^{2}(Q^{2}(-1+\hat{x})+\hat{x}m_{J/\psi}^{2})^{3} (Q^{2}(1+\hat{x}-z_{f})+\hat{x}m_{J/\psi}^{2})^{3}} \tag{104}\] \[\times 128Q^{3}\hat{x}^{2}m_{J/\psi}^{2}\Big{(}Q^{4}(-1+\hat{x})^{2 }z_{f}^{2}[3-3z_{f}+\hat{x}(-5+4z_{f})]+m_{J/\psi}^{2}Q^{4}\hat{x}z_{f}[-4-43z_ {f}+106z_{f}^{2}-80z_{f}^{3}+20z_{f}^{4}\] \[-4\hat{x}(1-39z_{f}+79z_{f}^{2}-55z_{f}^{3}+13z_{f}^{4})+\hat{x}^{ 2}(8-113z_{f}+210z_{f}^{2}-140z_{f}^{3}+32z_{f}^{4})]\] \[+m_{J/\psi}^{4}Q^{2}\hat{x}^{2}[z_{f}(100-221z_{f}+163z_{f}^{2}-4 0z_{f}^{3})+\hat{x}(12-166z_{f}+321z_{f}^{2}-222z_{f}^{3}+52z_{f}^{4})]\] \[+m_{J/\psi}^{6}\hat{x}^{3}(-52+114z_{f}-83z_{f}^{2}+20z_{f}^{3}) \Big{)}\] \[\sigma_{4}^{N5} = \frac{-1}{z_{f}^{2}(Q^{2}(-1+\hat{x})+\hat{x}m_{J/\psi}^{2})^{3}(Q^{2 }(1+\hat{x}-z_{f})+\hat{x}m_{J/\psi}^{2})^{3}} \tag{103}\] \[\times 256Q^{2}P_{T}\hat{x}^{3}(-2+z_{f})m_{J/\psi}^{4}\Big{(}Q^{4}z _{f}[3(-1+z_{f})z_{f}+2\hat{x}^{2}(2-7z_{f}+4z_{f}^{2})-2\hat{x}(1-7z_{f}+5z_{f }^{2})]\] \[+m_{J/\psi}^{2}Q^{2}\hat{x}[2(4-3z_{f})z_{f}+\hat{x}(3-17z_{f}+10z _{f}^{2})]+m_{J/\psi}^{4}\hat{x}^{2}(-5+3z_{f})\Big{)}\] \[\sigma_{8}^{N5} = \frac{-1}{z_{f}^{3}(Q^{2}(-1+\hat{x})+\hat{x}m_{J/\psi}^{2})^{3} (Q^{2}(1+\hat{x}-z_{f})+\hat{x}m_{J/\psi}^{2})^{3}} \tag{104}\] \[\times 128Q^{3}\hat{x}^{2}m_{J/\psi}^{2}\Big{(}Q^{6}(-1+\hat{x})^{2 }(1+\hat{x}-z_{f})z_{f}^{2}+m_{J/\psi}^{2}Q^{4}\hat{x}z_{f}[z_{f}(-9+22z_{f}-1 6z_{f}^{2}+4z_{f}^{3})\] \[+\hat{x}^{2}(-4+17z_{f}-14z_{f}^{2}+4z_{f}^{3})-4\hat{x}(-1+2z_{f }+2z_{f}^{2}-3z_{f}^{3}+z_{f}^{4})]\] \[+m_{J/\psi}^{4}Q^{2}\hat{x}^{2}[z_{f}(12-35z_{f}+29z_{f}^{2}-8z_{ f}^{3})+\hat{x}(-4+10z_{f}+3z_{f}^{2}-10z_{f}^{3}+4z_{f}^{4})]\] \[+m_{J/\psi}^{6}\hat{x}^{3}(-4+14z_{f}-13z_{f}^{2}+4z_{f}^{3}) \Big{)}\] \[\sigma_{9}^{N5} = \frac{256Q^{2}P_{T}\hat{x}^{3}(-2+z_{f})(1-z_{f})m_{J/\psi}^{4} \Big{(}Q^{4}z_{f}(2\hat{x}^{2}+z_{f}-2\hat{x}z_{f})+m_{J/\psi}^{2}Q^{2}\hat{x }(\hat{x}-2z_{f}+2\hat{x}z_{f})+m_{J/\psi}^{4}\hat{x}^{2}\Big{)}}{z_{f}^{2}[Q^ {2}(-1+\hat{x})+\hat{x}m_{J/\psi}^{2}]^{3}[Q^{2}(1+\hat{x}-z_{f})+\hat{x}m_{J/ \psi}^{2}]^{3}} \tag{105}\] ## Acknowledgements This work is supported by the National Natural Science Foundation of China under Grants No. 12022512 and No. 12035007, by the Guangdong Major Project of Basic and Applied Basic Research No. 2020B030103000, No. 2022A1515010683 and No. 2020A1515010794 and research startup funding at South China Normal University.
2306.15128
MIMIC: Masked Image Modeling with Image Correspondences
Dense pixel-specific representation learning at scale has been bottlenecked due to the unavailability of large-scale multi-view datasets. Current methods for building effective pretraining datasets heavily rely on annotated 3D meshes, point clouds, and camera parameters from simulated environments, preventing them from building datasets from real-world data sources where such metadata is lacking. We propose a pretraining dataset-curation approach that does not require any additional annotations. Our method allows us to generate multi-view datasets from both real-world videos and simulated environments at scale. Specifically, we experiment with two scales: MIMIC-1M with 1.3M and MIMIC-3M with 3.1M multi-view image pairs. We train multiple models with different masked image modeling objectives to showcase the following findings: Representations trained on our automatically generated MIMIC-3M outperform those learned from expensive crowdsourced datasets (ImageNet-1K) and those learned from synthetic environments (MULTIVIEW-HABITAT) on two dense geometric tasks: depth estimation on NYUv2 (1.7%), and surface normals estimation on Taskonomy (2.05%). For dense tasks which also require object understanding, we outperform MULTIVIEW-HABITAT, on semantic segmentation on ADE20K (3.89%), pose estimation on MSCOCO (9.4%), and reduce the gap with models pre-trained on the object-centric expensive ImageNet-1K. We outperform even when the representations are frozen, and when downstream training data is limited to few-shot. Larger dataset (MIMIC-3M) significantly improves performance, which is promising since our curation method can arbitrarily scale to produce even larger datasets. MIMIC code, dataset, and pretrained models are open-sourced at https://github.com/RAIVNLab/MIMIC.
Kalyani Marathe, Mahtab Bigverdi, Nishat Khan, Tuhin Kundu, Patrick Howe, Sharan Ranjit S, Anand Bhattad, Aniruddha Kembhavi, Linda G. Shapiro, Ranjay Krishna
2023-06-27T00:40:12Z
http://arxiv.org/abs/2306.15128v4
# MIMIC: Masked Image Modeling ###### Abstract Many pixelwise dense prediction tasks--depth estimation and semantic segmentation--in computer vision today rely on pretrained image representations. Therefore, curating effective pretraining datasets is vital. Unfortunately, the effective pretraining datasets are those with multi-view scenes and have only been curated using annotated 3D meshes, point clouds, and camera parameters from simulated environments. We propose a dataset-curation mechanism that does not require any annotations. We mine two datasets: MIMIC-1M with \(1.3\)M and MIMIC-3M with \(3.1\)M multi-view image pairs from open-sourced video datasets and from synthetic 3D environments. We train multiple self-supervised models with different masked image modeling objectives to showcase the following findings: Representations trained on MIMIC-3M outperform those mined using annotations on multiple downstream tasks, including depth estimation, semantic segmentation, surface normals, and pose estimation. They also outperform representations that are frozen and when downstream training data is limited to few-shot. Larger dataset (MIMIC-3M) significantly improves performance, which is promising since our curation method can arbitrarily scale to produce even larger datasets. MIMIC code, dataset, and pretrained models are open-sourced at [https://github.com/RAIVNLab/MIMIC](https://github.com/RAIVNLab/MIMIC). ## 1 Introduction Today, dense vision tasks--depth prediction, semantic segmentation, surface normals, and pose estimation--rely on pretrained representations [24; 2]. Naturally, self-supervised learning lends itself as a potential solution. Despite the impressive performance on object recognition and other high-level tasks, self-supervised representations for dense prediction tasks have not yet fully delivered [49]. The representations trained on object-centric datasets such as ImageNet-1K [18] do not transfer well to dense prediction datasets such as NYUv2 [43] and ADE20K [58] which contain indoor and outdoor scenes. Moreover, the contrastive objectives that are often used on these object-centric datasets utilize augmentations that do not preserve geometric pixel-wise information [14; 15; 11]. In response, the general purpose representation learning method--masked image modeling and specifically masked autoencoders (MAE)--has become a popular default self-supervised mechanism for such tasks [24]. Unfortunately, recent findings suggest that the patch representations learned by these MAE are devoid of sufficient local information for tasks like depth estimation [49]. In response, we ask the following question: _What information is necessary to learn useful representations for dense vision tasks?_ We find a potential answer in cognitive science: 3D understanding of the physical world is one of the first visual skills emergent in infants; it plays a critical role in the development of other skills, like depth estimation, understanding surfaces, occlusions, etc [26]. Scientists hypothesize that 3D understanding emerges from infants learning the relationship between changes in visual stimuli in response to their self-motion, i.e. 3D awareness emerges by learning correspondences between appearances as the infant's vantage point changes [35]. Very recently, a machine learning paper proposed a variant of masked image modeling, named **cross**-view **co**mpletion (CroCo), which uses an objective that operationalizes learning representations in response to changes in self-motion [49]. CroCo uses a pair of multi-view images to reconstruct a masked view using the second view as support. Unfortunately, CroCo is a data-hungry objective. Its Multiview-Habitat dataset of \(1.8\)M multi-view images was curated using a method that requires ground truth 3D meshes to be annotated. Although CroCo shows promise, the lack of datasets with 3D annotations is a severe limitation, preventing its objective from scaling. If one could mine large-scale multi-view datasets, perhaps dense vision tasks could enjoy the success that the field of natural language processing has welcomed due to the availability of large-scale pretraining text [7]. In this work, we contribute MIMIC: a data-curation method for developing multi-view datasets that scale. Our method doesn't require any 3D meshes and can generate multi-view datasets from unannotated videos and 3D simulated environments. We leverage classical computer vision techniques, such as SIFT keypoint detection [33], RANSAC [21], homography estimation [23], etc. to extract correspondences between frames in open-sourced unannotated videos (see Figure 1). In other words, MIMIC produces a pretraining dataset for **m**asked **i**mage **m**odeling using **i**mage **c**orrespondences. We mine two datasets: MIMIC-1M and MIMIC-3M, and show that they effectively train useful self-supervised (MAE and CroCo) representations when compared to Multiview-Habitat. Our experiments show the following: First, representations from MIMIC-3M outperform those trained using Multiview-Habitat on multiple downstream tasks: depth estimation (NYUv2 [34]), semantic segmentation (ADE20K [58]), surface normals (Taskonomy [56]), and pose estimation (MSCOCO [31]). Second, using CroCo, we outperform both when representations are frozen as well as when the encoders are fine-tuned end-to-end for individual downstream tasks. Third, larger pretraining datasets (MIMIC-3M \(>\) MIMIC-1M) significantly improve performance, which is promising since our curation method can arbitrarily scale to produce even larger datasets. Fourth, performance on downstream tasks improves with more pretraining epochs. Fifth, we consistently perform better in few-shot experiments. Sixth, since we don't make any assumptions about the videos we mine from, we can mine from object-centric videos and perform better on ImageNet-1K[18] linear probing. Finally, the decoder that is usually discarded after masked image modeling produces better quality reconstructions when trained on MIMIC-3M. ## 2 Related work Our work enables self-supervised learning for dense vision tasks. Self-supervised learning in vision can be divided into two types: instance discrimination objectives, in which two augmented views of an image are encouraged to have the same representations, and masked image modeling, in which an image is reconstructed from an incomplete version of itself. **Instance discrimination objectives.** Amongst the instance discrimination objectives, DeepCluster [8] and SwAV [10] discriminate between the image clusters; Moco [25] and SimCLR [13] use contrastive learning; BYOL [22] and DINO [11] use interactions between a teacher and student model; Figure 1: We introduce a data-curation method that generates multi-view image datasets for self-supervised learning. Our method identifies potential data sources, including indoor scenes/people/objects videos, 3D indoor environments, outdoor street views, and stereo pairs to identify potential multiview images. Next, we use traditional computer vision methods such as SIFT keypoint detection and homography transformation to locate corresponding patches. Finally, we filter pairs based on a threshold for significant overlap, ensuring a substantial percentage of pixels match between a pair. Icons have been designed using images from Flaticon.com VICReg [4] avoids representation collapse by introducing regularization terms. These representations, unfortunately, perform quite poorly on dense vision tasks [49]. Some objectives are specifically designed to incentivize denser representations [5]. PixPro [53] and DenseCL [48] use pixel-level contrastive losses; DetCon [27] uses contrastive detections to encourage object-level representations; InsLoc [55] uses instance localization; LOCA [9] learns by predicting the patch positions of the query view in the reference view; CP2 [47] distinguishes between foreground and background; ReSim [51] learns representations by maximizing the similarity between the two sub-regions from the two augmented views. Most of these approaches however are primarily pretrained on object-centric datasets, such as ImageNet-1K [18]. Moreover, dense prediction tasks are usually used in applications such as indoor navigation and autonomous driving, which feature indoor and outdoor scenes; object-centric representations typically do not perform well. **Masked image modeling.** Amongst masked image modeling, BEiT [3] proposes the pretext task of recovering the visual tokens from a corrupted image, MAE [24] learns by masking patches of an image and inpainting the masked patches; MultiMAE extends MAE to a multi-task formulation [2]. Their approach uses pseudo labels extracted using supervised models such as DPT [38] pre-trained on Omnidata [20] for depth and Mask2Former [16] on the MSCOCO [31] for semantic segmentation; hence, MultiMAE is not fully self-supervised. CroCo [49] uses cross-view completion and ingests multi-view images. Their data curation method, though, uses 3D metadata and meshes of synthetic 3D environments; their data is also not publicly available. By contrast, MIMIC neither needs any pseudo labels extracted using supervised methods nor it needs any 3D meshes, point clouds or camera parameters for dataset curation. **Downstream task.** Several pretext task methods have improved downstream performances on semantic segmentation, depth estimation, and other dense prediction tasks that require reasoning about the 3D structure. Showing improvements for semantic segmentation, CP2 [47] copy pastes the foreground of an image onto two different backgrounds and learns representations by distinguishing the foreground and the background of the reference image; LOCA [9] learns by predicting the patch positions of the query view in the reference view. To improve object detection, DetCo [52] learns high-quality representations for object detection via multi-level supervision and contrastive learning between the image and the local patches; InsLoc [55] learns by predicting the instance category of the images composed of a foreground image pasted on a different background. To improve depth prediction, CroCo [49] proposes cross-view completion. **Data curation for large scale visual learning** Large-scale image datasets have incredibly accelerated progress in visual learning. ImageNet-1K [18], with \(1.2\)M images annotated by crowdsourcing led to several breakthroughs and is still a standard dataset used for pretraining vision models. Visual Genome [29] and LAION-5B [42], which connect language and vision, have paved the way similarly for vision-language modeling. However, the efforts so far have been focused on high-level semantic tasks like classification, and large-scale pretraining datasets for dense prediction tasks are not available publicly. To address this challenge we propose a methodology for curating multi-view datasets using videos and 3D environments. ## 3 MIMIC: Curating multi-view image dataset for dense vision tasks Although CroCo [49] recently used Multiview-Habitat, a multi-view dataset, this dataset is not yet publicly available. Moreover, their dataset generation process requires 3D mesh, point cloud, depth, or camera pose information for each scene. This requirement restricts the data sources that can be employed to curate a multi-view dataset. Regrettably, there exists no such large-scale publicly available dataset. To address this gap, we design MIMIC. MIMIC can curate multi-view image datasets without any requirements on the data sources. It works by cleverly combining traditional computer vision methods (Figure 1). The only mechanism our curation process requires is a sampling mechanism \((I_{1},I_{2})\sim g(S)\), where \(S\) is some data source from which \(g(\cdot)\) samples two images \(I_{1}\) and \(I_{2}\). For example, \(S\) can be a video from which \(g(\cdot)\) samples two image frames. Or \(S\) can be a synthetic 3D environment from which \(g(\cdot)\) navigates to random spatial locations and samples two random image renderings of the scene. **Identifying data sources.** With no restrictions on data sources, we generate our MIMIC dataset from both real as well as synthetic data sources. Due to ethical considerations, we do not extract more from real videos and focus on open source video datasets. Therefore we extract also from synthetic environments and only use their annotations to generate videos of trajectories. We use DeMoN, ScanNet, ArkitScenes, Objectron, CO3D, Mannequin, and 3DStreeView as real data sources. DeMoN [46] is a dataset containing stereo image pairs. ScanNet [17] and ArkitScenes [6] contain videos from indoor environments. Objectron [1] and CO3D [39] are collections of videos containing objects. Mannequin [30] provides a video dataset featuring individuals engaged in the mannequin challenge. 3DStreeView [57] offers a collection of street images from multiple urban areas. Synthetic sources include 3D indoor scenes from HM3D [36], Gibson [50] and Matterport [12] datasets using the Habitat simulator [41]. For these synthetic environments, we initialize an agent randomly in the 3D environment and design \(g(\cdot)\) to move the agent in random steps and directions. For each scene, the agent moves to numerous locations and captures various views. The total number of pairs sampled differs based on their navigatable area of each synthetic environment. All our data sources with their distributions are visualized in Figure 2. **Mining potential pairs.** The primary characteristic of the image pairs in our dataset resides in their ability to capture the same scene or object from varying viewpoints while exhibiting a substantial degree of overlap. The dataset is designed to strike a balance: the overlap is not excessively large to the point of containing identical images, rendering the pre-training task trivial; nor is it excessively small, resulting in disjoint image pairs that offer limited utility, making the task only self-completion. In each video or scene, many image pairs can be generated. However, our focus is on selecting a limited number of pairs that are more likely to meet our desired condition of having sufficient overlap. Nonetheless, not all of these candidate pairs may ultimately be chosen. For instance, when dealing with video data, a practical strategy involves creating a list of frames at regular time intervals, which depends on the video's speed. By selecting consecutive frames from this list, potential pairs are generated. Conversely, collecting potential pairs in 3D scenes such as HM3D [36] or Gibson [50] presents greater challenges. Therefore, inspired by CroCo [49], we employ the habitat simulator [41] to capture comprehensive environment views. The agent undergoes random rotations and movements, exploring the scene from various perspectives. By capturing images during these random walks, we generate potential pairs for further analysis. The selection process involves identifying the best pair based on a specified overlap range (\(50\%\) to \(70\%\)), ensuring the inclusion of high-quality pairs with diverse viewpoints. However, our approach doesn't rely on additional information such as meshed representations or camera parameters. Instead, we solely utilize the available images. **Matching and measuring overlap.** Given a potential image pair capturing a scene, we employ the widely recognized Scale-Invariant Feature Transform (SIFT) [33] algorithm to localize key points in both images. After obtaining the key points and descriptors, we apply a brute-force matching technique to establish correspondences between the key points in the first image and those in the second image. We further utilize these matches to estimate a homography matrix [23], leveraging the RANSAC (Random Sample Consensus) [21] algorithm to handle outliers effectively. We then partition each image into non-overlapping patches of size 16\(\times\)16. For each patch in the first image, we conduct a search for the corresponding patch in the second image by randomly sampling a set of points within the target patch. Matching these sampled points with their correspondences in the Figure 2: Distribution of Data Sources (%). Real data sources, including DeMoN, ScanNet, ArkitScenes, Objectron, CO3D, Mannequin, and 3DStreeView, contribute to 32% of MIMIC. The remaining portion consists of synthetic sources, namely HM3D, Gibson, and Matterport. second image allows us to identify the patch that exhibits the highest number of corresponding points, fulfilling our objective. Furthermore, we assess the degree of overlap by counting the number of patches in the first image whose matches fall within the bounds of the second image. The fraction of these patches over all patches provides a quantitative measure of the overlap between the images in a pair. Additionally, it is noteworthy that information from the corresponding patches is made available as metadata for each image pair. Moreover, in the data generation process, the patch size can be adjusted to cater to different research purposes or specific requirements. **Filtering out degenerate matches.** In our approach, the selection of image pairs is guided by the objective of capturing shared 3D information while mitigating redundancy. As mentioned earlier, the desired pairs consist of images that depict the same objects or scenes from different perspectives. This characteristic enables the learning model to acquire valuable insights about the underlying 3D structure. However, it is crucial to avoid including pairs where one image is a zoomed-in version of the other, as such pairs provide limited additional information. To address this concern, we modify the overlap metric used in the pair selection process. Specifically, we incorporate a criterion that prevents the inclusion of patches from the first image that have exact correspondences in the second image. Therefore, in the counting, we consider all patches that have the same corresponding patch in the second image as a single entity. This refinement in the pair selection process improves the overall quality of the dataset. **Overall statistics.** The initial version of our dataset, MIMIC-1M, comprises a total of \(1,316,199\) image pairs, each capturing different scenes or objects from varying viewpoints. Among these pairs, \(761,751\) are sourced from HM3D [36], \(305,197\) from Gibson [50], \(29,658\) from Matterport [12], \(114,729\) from Mannequin [30], \(22,184\) from DeMoN [46], \(36,433\) from ScanNet [17], and \(46,250\) from Objectron [1]. Building upon the scalability of our data curation approach, we expand the dataset to create a second version, MIMIC-3M, to contain a total of \(3,163,333\) image pairs. This expansion involves augmenting the HM3D [36] dataset with an additional \(699,322\) pairs, the Gibson [50] dataset with \(351,828\) pairs, and the inclusion of new datasets such as ArkitScenes [6] with \(81,189\) pairs, CO3D [39] with \(133,482\) pairs, and 3dStreetViews [57] with \(579,310\) pairs. By incorporating these new datasets, we further enrich the diversity and quantity of image pairs available in our dataset. ## 4 Training with MIMIC To measure the effectiveness of MIMIC, we pretrain self-supervised models on it and evaluate the utility of the learnt representations on downstream dense prediction tasks. We compare against existing pretraining dataset alternatives. ### Pretraining We use two masked image modeling objectives: Masked Autoencoders [24] and CroCo [49]. We use a ViT-B/16[19] as a backbone for all our experiments with input images sizes of \(224\times 224\). We train our models on \(8\) RTX A6000 GPUs for \(200\) epochs with a warmup of \(20\) epochs. We use a base learning rate of \(1.5\times 10^{-4}\) and an AdamW [32] optimizer with a cosine learning rate schedule, a weight decay of \(0.05\), and an effective batch size of \(4096\). We evaluate these pretrained representations on a series of downstream dense prediction tasks. **MAE [49] pretraining.**: MAE [24] masks out a large portion (\(75\%\)) of the input patches of an image and uses an asymmetric encoder-decoder architecture to reconstruct the masked-out pixels. Specifically, it uses a ViT-based encoder to extract the latent representations of the masked view. Then it pads the output with the masked tokens and feeds the latent patch embeddings to a lightweight decoder. The decoder's output reconstruction is optimized with an L2 loss. The reconstruction pixel targets are normalized by computing the mean and standard deviation of the image patches. **CroCo [49] pretraining.** CroCo [49] reconstructs a masked image input similar to MAE but supports the reconstruction process through an unmasked second support view. CroCo curates a pretraining dataset from the Habitat simulator [41] such that the two views do not have too many or too few corresponding pixels. We refer to it as Multiview-Habitat. CroCo masks \(90\%\) of the first image. CroCo uses Siamese ViT encoders with shared weights to encode each view. The decoding cross-attends over the second view while reconstructing the first masked view. ### Downstream tasks, datasets, and evaluation metrics **Depth estimation.** We use the NYUv2 [34], a standard dataset used for measuring progress in depth estimation. It consists of \(795\) training and \(654\) test images of indoor scenes. We report the \(\delta 1\) metric - which computes the percent of the pixels with error \(max(\frac{y_{p_{i}}}{y_{i_{j}}},\frac{y_{q_{i}}}{y_{p_{i}}})\) less than \(1.25\), where \(y_{p_{i}}\) is the depth prediction and \(y_{g_{i}}\) is the ground truth of the \(i\)th pixel of an image. **Semantic Segmentation.** We use ADE20K [58]. It consists of \(20,210\) training images and \(150\) semantic categories. We report the mIOU which quantifies the percentage overlap between the predicted and the ground truth predictions. **Surface normal.** is a regression task that aims to estimate the orientation of a 3D surface. We use a subset of Taskonomy [56] with \(800\) training images, \(200\) validation images, and \(54,514\) test images. We report the L1 loss value on the test set. **Classification.** We use ImageNet-1K[18] classification. It contains \(1.28\)M training images and \(50\)k validation images. We fix the encoder, run linear probing on the validation set and report accuracy. **Pose estimation.** We use MSCOCO [31] for training and report average precision (AP) and average recall (AR) on the validation set. Specifically, we adopt ViTPose-B [54] with the MAE encoder and classic decoder [54]. ### Baseline datasets We compare MIMIC with: ImageNet-1K [40] and Multiview-Habitat[49]. **ImageNet-1K[18].** ImageNet-1K is widely used large-scale dataset with \(1.2\)M training images associated with one of the 1K categories. The dataset was curated via crowdsourcing and primarily contains object centric images with animals, plants, everyday objects, and instruments. **Multiview-Habitat.** Second, we compare against models trained on Multiview-Habitat. The dataset comprises synthetic renderings of indoor scenes collected using the 3D meshes available in the Habitat simulator and is derived from the HM3D [37], ScanNet [17], Replica [44] and ReplicaCAD [45]. This dataset is not available publicly. So, we compare against the released models trained on this dataset [49]. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline Model & Frozen & Dataset & \begin{tabular}{c} NYUv2 \\ depth.est. \\ \end{tabular} & \begin{tabular}{c} ADE20K \\ sem.seg. \\ \end{tabular} & \begin{tabular}{c} Taskonomy \\ surf.norm. \\ \end{tabular} & \begin{tabular}{c} MSCOCO \\ pos.est. \\ \end{tabular} \\ \hline \multirow{2}{*}{\begin{tabular}{l} MAE \\ \end{tabular} } & ✗ & ImageNet-1K & 79.60 & **46.10** & 59.20 & **74.90** & **80.40** \\ \hline \multirow{2}{*}{\begin{tabular}{l} MAE \\ \end{tabular} } & ✗ & MV-Habitat & - & - & - & - & - \\ & ✓ & MIMIC-3M & 80.65 & 29.05 & 68.97 & - & - \\ \hline \multirow{2}{*}{\begin{tabular}{l} MAE \\ \end{tabular} } & ✗ & MV-Habitat & 79.00 & 40.30 & 59.76 & - & - \\ & ✗ & MIMIC-3M & **85.32** & **40.54** & **58.72** & 69.13 & 75.22 \\ \hline \multirow{2}{*}{\begin{tabular}{l} CroCo \\ CroCo \\ \end{tabular} } & ✓ & MV-Habitat & 85.20 & - & 64.58 & - & - \\ & ✓ & MIMIC-3M & **85.81** & 30.25 & **61.7** & - & - \\ \hline \multirow{2}{*}{ \begin{tabular}{l} CroCo \\ CroCo \\ \end{tabular} } & ✗ & MV-Habitat & 85.60 & 40.60 & 54.13 & 66.50 & 73.2 \\ & ✗ & MIMIC-3M & **91.79** & **42.18** & **53.02** & **72.8** & **78.4** \\ \hline \hline \end{tabular} \end{table} Table 1: CroCo pretrained with MIMIC-3M outperforms on NYUv2 depth estimation referred to as (depth.est), ADE20K semantic segmentation (sem.seg), Taskonomy surface normal prediction (surf.norm), and MSCOCO pose estimation (pos.est.) tasks. ## 5 Experiments Our experiments highlight the following key findings: First, we observe that our representations outperform Multiview-Habitat on depth estimation (NYUv2), semantic segmentation (ADE20K), and surface normal (Taskonomy). Second, these results hold for CroCo regardless of whether the representations are kept frozen and also when the image encoder is fine-tuned (SS 5.1). Third, larger pretraining datasets (MIMIC-3M \(>\) MIMIC-1M) significantly improve performance, which is promising since our curation method can arbitrarily scale to produce even larger datasets. Fourth, for both depth estimation and semantic segmentation, we find that more pretraining continues to lead to performance gains (SS 5.2). Fifth, our performance benefits hold as we vary the number of few-shot fine-tuning data points available for both depth estimation and semantic segmentation (SS 5.4). Sixth, we see improvements on ImageNet-1K linear probing classification compared to the Multiview-Habitat dataset, whose collection required 3D meshes (SS 5.5). Finally, we find that our model produces higher-quality reconstructions using the pretraining decoder (SS 5.6). ### MIMIC-3M outperforms multiple downstream tasks Even though MIMIC-3M was generated with fewer assumptions than Multiview-Habitat, representations pretrained on MIMIC-3M perform better on multiple tasks (Table 1). When fine-tuned for depth estimation, semantic segmentation, surface normals, and pose estimation, our representations learned using both MAE and CroCo perform better. In fact, CroCo when trained on MIMIC-3M leads to the state-of-the-art \(\delta 1\) of NYUv2 depth using self-supervised methods. It also achieves an mIOU of \(42.18\) on ADE20K, an L1 loss of \(53.2\) on Taskonomy surface normals, and an average precision of \(72.8\) and an average recall of \(78.4\) on MSCOCO pose estimation. To understand the quality of the learned representations using our MIMIC-3M, we freeze the transformer backbone pretrained using MIMIC-3M and compare it with the CroCo [49] trained on Multiview-Habitat. MIMIC-3M outperforms Multiview-Habitat and improves NYUv2 depth \(\delta 1\) by \(0.61\) and reduces the L1 loss on Taskonomy surface normals by \(2.87\) points with a frozen transformer. Existing work [49] did not report the frozen semantic segmentation or pose estimation values, preventing us from comparing. ### Pretraining on MIMIC improves downstream performance with training epochs As we train for more training steps, the performance of both MIMIC-1M and MIMIC-3M improves on the downstream tasks such as depth estimation and semantic segmentation (Figure 3(a)). This trend holds regardless of whether the representations are fine-tuned or kept frozen. Figure 3: **(a)** CroCo [49] pretrained on MIMIC shows an increasing trend with the number of training epochs. The figure on the left shows the trends for the fine-tuned and frozen versions of the encoder on NYUv2 depth estimation. The figure on the right shows the trend on the ADE20K dataset. **(b)** CroCo pretrained on MIMIC-3M achieves better few shot performance on CroCo pretrained on Multiview-Habitat. The figure on the left shows the few shot performance on the NYUv2 dataset and the figure on the right shows the few shot performance on ADE20K (semantic segmentation). ### Scaling up MIMIC leads to performance gains We study the scaling trends of MIMIC by varying the data size. We experiment with two scales: the first MIMIC-1M with 1.3M image pairs and the second MIMIC-3M with 3.1M image pairs. We train CroCo [49] with these two training sets and evaluate the performance on depth estimation, semantic segmentation, and surface normals prediction tasks. Table 2(a) shows the downstream performance on depth (NYUv2), semantic segmentation (ADE20K), and surface normals(Taskonomy) tasks with data scaling. We observe consistent improvements on NYUv2 [34], ADE20K [58], and Taskonomy [56]. Specifically, with fine-tuning, MIMIC-3M improved \(\delta 1\) by 2.33 points, mIOU on ADE20K by 3.72 points, and L1 loss by 4.1 points. ### MIMIC-3M representations outperform with limited downstream data We measure the label efficiency of the learned representations trained on MIMIC-3M by evaluating its few-shot performance on NYUv2 depth estimation and ADE20k semantic segmentation. We freeze the image encoder and fine-tune the task-specific decoders by varying the number of training images. We run each k-shot finetuning at least \(5\) times and report the mean and the standard deviation of the runs. Overall the representations trained on our MIMIC-3M show better labeling efficiency than those trained used Multiview-Habitat (Figure 3(b)). ### MIMIC-3M improves linear probing accuracy on ImageNet-1K Thus far, we have focused on dense vision tasks. To understand the potential of MIMIC for the high-level classification tasks, we evaluate MAE [24] and CroCo [49] pretrained with MIMIC-3M on ImageNet-1K [18]. MIMIC-3M outperforms Multiview-Habitat on MAE by \(7.36\%\) and on CroCo by \(2.64\%\). We hypothesize that these improvements come from the real and object-centric data from Objectron [1] included in MIMIC-3M. Naturally, performance is still much lower than MAE pretrained on ImageNet-1K, which is expected since the pretraining data is in-domain. ### Pretraining on MIMIC-3M improves FID score and reduces the reconstruction error We analyze the quality of the reconstructions trained on MIMIC-3M versus Multiview-Habitat. We use FID scores [28], which indicate how realistic the reconstructions are and the reconstruction error (L2 loss) in the original masked image modeling objective. We sample a test set of \(500\) images from the Gibson dataset. We ensure that these images are sampled from the scenes that are not a part of the Multiview-Habitat[49] and MIMIC-3M. They are synthetic images for fair comparisons, which Multiview-Habitat only has synthetic images. We mask \(90\%\) of each test image and then compare the quality of the reconstructions (Table 3). Our analysis shows that CroCo trained on MIMIC-3M improves the FID by \(12.65\) points and reduces the reconstruction loss on the test set. \begin{table} \end{table} Table 2: (a) MIMIC-3M shows improvements over MIMIC-1M. (b) MIMIC-3M outperforms Multiview-Habitat[49] on the classification task. \begin{table} \end{table} Table 3: MIMIC-3M achieves better FID score and reduces the reconstruction loss on 500 test images from the Gibson dataset compared to Multiview-Habitat Discussion In this work, we present MIMIC, an approach to curating large-scale datasets for self-supervised pretraining, geared towards dense vision tasks. We discuss the limitations and safety considerations regarding our dataset, and lay out opportunities for future work. **Limitations.** There are several limitations of our work. First, we train our models with a limited computing budget and hence are undertrained. Our trends show that more pretraining continues to improve performance. Second, we pretrain CroCo on MIMIC-3M using a fixed-sized architecture ViT-B/16; model scaling experiments are outside the scope of our resources. Lastly, our curated dataset primarily consists of static objects and does not involve dynamic scenes with moving objects. Moreover, MIMIC-3M has a limited amount of object-centric data, and its suitability for object-related tasks is unknown. **Safety and ethical considerations.** While our method uses publicly available datasets for data curation, we acknowledge that the algorithm can be scaled up to scrape videos in the wild. We are aware of the safety, privacy, and ethical issues caused by models trained on large-scale datasets and the amplification of the social and racial biases these models may result in. We do not endorse using our methodology for applications other than academic research purposes and recommend the use of face blurring and NSFW filtering before deploying models trained using MIMIC. **Future work.** We would ideally like to study scaling trends with respect to ViT architecture, design methodologies to mine dynamic videos where epipolar geometric constraints do not apply, design new objectives for pretraining on image pairs curated using MIMIC, and evaluate representations on more diverse tasks. The flexibility of MIMIC makes it suitable for further scaling it up to even larger pretraining datasets. ## Acknowledgements This research is sponsored by grant from Amazon Technologies, Inc. as part of the Amazon-UW Science HUB. We thank Michael Wolf and Ariel Gordon for helpful discussions, Saygin Seyfioglu for helpful feedback, Sadjyot Gangolli for help with data, and Mitchell Wortsman for help with large-scale pretraining. We also thank the UW-IT team: Stephen Spencer, Nam Pho, and Matt Jay.
2310.01863
A unifying representation of path integrals for fractional Brownian motions
Fractional Brownian motion (fBm) is an experimentally-relevant, non-Markovian Gaussian stochastic process with long-ranged correlations between the increments, parametrised by the so-called Hurst exponent $H$; depending on its value the process can be sub-diffusive $(0 < H < 1/2)$, diffusive $(H = 1/2)$ or super-diffusive $(1/2 < H < 1)$. There exist three alternative equally often used definitions of fBm -- due to L\'evy and due to Mandelbrot and van Ness (MvN), which differ by the interval on which the time variable is formally defined. Respectively, the covariance functions of these fBms have different functional forms. Moreover, the MvN fBms have stationary increments, while for the L\'evy fBm this is not the case. One may therefore be tempted to conclude that these are, in fact, different processes which only accidentally bear the same name. Recently determined explicit path integral representations also appear to have very different functional forms, which only reinforces the latter conclusion. Here we develope a unifying equivalent path integral representation of all three fBms in terms of Riemann-Liouville fractional integrals, which links the fBms and proves that they indeed belong to the same family. We show that the action in such a representation involves the fractional integral of the same form and order (dependent on whether $H < 1/2$ or $H > 1/2$) for all three cases, and differs only by the integration limits.
O. Benichou, G. Oshanin
2023-10-03T07:56:22Z
http://arxiv.org/abs/2310.01863v1
# A unifying representation of path integrals for fractional Brownian motions ###### Abstract Fractional Brownian motion (fBm) is an experimentally-relevant, non-Markovian Gaussian stochastic process with long-ranged correlations between the increments, parametrised by the so-called Hurst exponent \(H\); depending on its value the process can be sub-diffusive (\(0<H<1/2\)), diffusive (\(H=1/2\)) or super-diffusive (\(1/2<H<1\)). There exist three alternative equally often used definitions of fBm - due to Levy and due to Mandelbrot and van Ness (MvN), which differ by the interval on which the time variable is formally defined. Respectively, the covariance functions of these fBms have different functional forms. Moreover, the MvN fBms have stationary increments, while for the Levy fBm this is not the case. One may therefore be tempted to conclude that these are, in fact, different processes which only accidentally bear the same name. Recently determined explicit path integral representations also appear to have very different functional forms, which only reinforces the latter conclusion. Here we develope a unifying equivalent path integral representation of all three fBms in terms of Riemann-Liouville fractional integrals, which links the fBms and proves that they indeed belong to the same family. We show that the action in such a representation involves the fractional integral of the same form and order (dependent on whether \(H<1/2\) or \(H>1/2\)) for all three cases, and differs only by the integration limits. Key words: Path integrals, non-Markovian processes, fractional Brownian motion and fractional Gaussian noise, non-local action, fractional integrals ## 1 Introduction In the path integral formalism, one generalises the action principle of classical mechanics by replacing a single classical "trajectory" by a functional integral over an ensemble of all possible trajectories. Such a formalism has been employed in the past for a theoretical analysis of diverse physical systems [1, 2, 3, 4, 5], and also proven to be a powerful framework for developing efficient simulation algorithms (see, e.g., [6]). Path integral representation of Brownian motion permitted to determine exactly statistical properties of several non-trivial functionals of its trajectories [7, 8, 9], by taking advantage of the Feynman-Kac theorem [1, 10]. The latter path integral representation hinges on the Wiener's expression for the probability of observing on a time-interval \((0,T)\) a given realisation of a Brownian trajectory \(x(t)\)[11]: \[P[x(t)]\sim\exp\left(-S[x(t)]\right)\,, \tag{1}\] where \(S[x(t)]\) is the action of the form \[S[x(t)]=\frac{1}{2}\int_{0}^{T}dt\,\left(\frac{dx(t)}{dt}\right)^{2}=\frac{1} {2}\int_{0}^{T}dt\,\xi_{\rm wn}^{2}(t)\,, \tag{2}\] in which we have assumed, without lack of generality, that the diffusion coefficient is equal to \(1/2\). The dependence on the diffusion coefficient can be recovered by a mere rescaling of time. In turn, \(\xi_{\rm wn}(t)\) in eq. (2) denotes a given realisation of a zero-mean Gaussian white noise, which is linked to a Brownian trajectory \(x(t)\) through the standard Langevin equation \(dx(t)/dt=\xi_{\rm wn}(t)\), and has the covariance function \[\overline{\xi_{\rm wn}(t)\xi_{\rm wn}(t^{\prime})}=\delta(t-t^{\prime})\,. \tag{3}\] with \(\delta(t)\) being the Dirac delta-function. The bar in eq. (3) denotes averaging with respect to different realisations of noise. Note that in some physical situations the first-order derivative in the quadratic action (2) is naturally replaced by a higher-order derivative. Such a type of action arises, e.g., in analyses of conformations of stiff polymers or of membranes with bending rigidity [3, 12], or of the so-called random-acceleration process [13]. In all these examples the second derivative appears in place of the first one. Extensions and generalisation of the action in eq. (2) have been obtained in case of coloured noises, noises entering in a non-linear form and some non-Markovian processes (see, e.g., [4]), as well as for some kinds of anomalous diffusions [14], including the continuous-time random walks [15] and fractional Levy motion [16, 17, 18]. The "optimal paths" \(x_{\rm opt}(t)\), i.e., the action-minimising trajectories connecting fixed positions, say, the origin \(0\) and a prescribed position \(X_{T}\) achieved at time moment \(T\), were determined in recent [19] for _arbitrary_ Gaussian stochastic processes; the derivation does not necessitate a knowledge of the explicit analogue of eq. (2) for the process under study, and involves only the covariance function \({\rm Cov}(t_{1},t_{2})=\overline{x(t_{1})x(t_{2})}\) of \(x(t)\). It was shown in [19] that \(S[x_{\rm opt}(t)]=X_{T}^{2}/2{\rm Cov}(T,T)\) and therefore \(P\simeq\exp(-X_{T}^{2}/2{\rm Cov}(t,t))\) provides then the short-time tail of the probability density function \(\Psi_{t}\) of the first-passage time \(t\) from \(0\) to \(X_{T}\) (see also [20, 21, 22, 23, 24, 25] for other aspects of first-passage times of the FBM). As it follows from the title of our paper, here we will be concerned specifically with fractional Brownian motion (fBm) - an experimentally-relevant (see, e.g., [26, 27, 28]) family of one-dimensional anomalous stochastic processes with everlasting power-law correlations between the increments, for which no analogue of the Feynman-Kac theorem exists. In fact, there are three alternative kinds of fBm, which are equally often considered in the literature and differ between themselves by the time-interval \(\Omega\) on which the time variable \(t\) is defined, and eventually, by the form of the covariance function. The first one is due to Levy [29], who defined the process for a finite interval, \(t\in[0,T]\), as a fractional Riemann-Liouville integral (see eq. (4) below and [30]) of white noise. The second kind of fBm with \(t\in(-\infty,\infty)\) was introduced by Kolmogorov [31]. Many properties of such a process and its relevance to diverse problems across the disciplines were later on discussed by Mandelbrot and van Ness [32]. Nowadays, this kind of fBm is commonly referred to as a two-sided Mandelbrot-van Ness fractional Brownian motion. Lastly, in the third case the time variable is defined on the positive half-axis, \(t\in[0,\infty)\), and this process is called a one-sided Mandelbrot-van Ness fBm. While for the two latter definitions the processes \(x(t)\) have _stationary_ increments, for the Levy fBm the increments are not stationary. An extended overview on applications of fBm can be found in recent [33]. We also note that in their analysis of the fBm Mandelbrot and van Ness introduced an important notion of fractional Gaussian noise \(\zeta_{\rm fGn}(t)\), as the noise generating a given fBm trajectory \(x(t)\) via a Langevin-type equation, \(\zeta_{\rm fGn}(t)=dx(t)/dt\). Clearly, this noise has very different statistical properties and covariance functions, as compared to the white noise \(\xi_{\rmwn}(t)\). The statistical properties of such a noise also depend on the precise definition of the fBm. Within the last three decades much progress has been achieved in determining the analogues of the action in eq. (2) for all three kinds of fBm. The path integral representation has been first established for sub-diffusive Riemann-Liouville-type fBm in form of a fractional integral [34, 35] (see also [4]). For both one-sided and two-sided versions of the Mandelbrot-van Ness process, the early-stage analyses aimed at determining the action in form of a perturbation series expansion in powers of the parameter \(\epsilon=H-1/2\), where \(H\) is the Hurst exponent (see below) with \(H=1/2\) corresponding to the standard Brownian case with uncorrelated increments. Within this approach, the zeroth-order and several sub-leading terms in this series have been found [36, 37, 38, 39, 40]. Lastly, in recent [41] explicit expressions for the action in eq. (2) were derived for both one-sided and two-sided Mandelbrot-van Ness fBms with arbitrary values of the Hurst exponent. The kernel functions in the analogues of eq. (2) evaluated in [41] are non-local, as compared to the one in eq. (2), and therefore embody essential correlations between the increments, which vanish only in the limit \(H\to 1/2\). At the same time, the kernels obtained in [34, 35] and [41] have very different functional forms and are also very different depending whether the processes are one-sided or two-sided. Given that the covariance functions of the Levy fBm and of the Mandelbrot-van Ness ones are also very different, and that the increments are not stationary in the former case and stationary in the latter one, one may be tempted to conclude that the underlying processes actually represent quite _different_ and unrelated to each other stochastic processes, despite the fact that they happen accidentally to bear the same name. In this regard, it may be instructive to develop some alternative (as compared to the representations in [34, 35] and [41]) unifying representation which will provide a clear evidence that all the above-mentioned fBm processes - Levy or Mandelbrot-van Ness ones, one-sided or two-sided, sub-diffusive and super-diffusive - indeed are members of the same family. In the present paper we show that such a natural unifying framework can be constructed by representing the kernel functions in terms of suitable fractional Riemann-Liouville integrals [30], which have the same form and order, and differ by the integration limits only. This paper is outlined as follows. In Sec. 2 we introduce basic definitions, briefly discuss the properties of fractional Brownian motions and also recall available results for their explicit representations in terms of path integrals (see [34, 35] and [41]). In Sec. 3 we summarise our main results. An interested reader can find a succinct outline of the derivations of our main results in Sec. 4, while a more detailed account is presented in the Appendices. Finally, in Sec. 5 we conclude with a brief recapitulation of our results. ## 2 Definitions and overview of previous results. As we have already remarked, fractional Brownian motion is a family of zero-mean Gaussian, non-Markovian processes with correlated increments, parametrised by the parameter \(H\) - the Hurst index, which is a real number from the interval \((0,1)\), (bounded away from \(0\) and from \(1\)). For \(H\in(0,1/2)\), the fBm exhibits a sub-diffusive behaviour due to negative correlations between the increments, while for \(H\in(1/2,1)\) the increments tend to have the same sign, which entails a super-diffusive motion. In the borderline case \(H=1/2\), standard Brownian motion with independent increments is recovered. **Levy fBm.** Levy [29] used the left-sided1 Riemann-Liouville fractional integral of function \(f(t)\) of the order \(\alpha\) (see, e.g., [30]) : Footnote 1: Note that the conventional term ”left-sided” (usually denoted by the symbol ”\(+\)” put in the subscript), simply means that the time variable \(t\) appears on the upper terminal of integration, in contrast to the right-sided case (denoted by the symbol ”\(-\) ”) with the time variable appearing on the lower terminal. \[I^{\alpha}_{+,a}[f(t)]=\frac{1}{\Gamma(\alpha)}\int_{a}^{t}\frac{f(\tau)}{ \left(t-\tau\right)^{1-\alpha}}\,d\tau\,, \tag{4}\] to define a given fBm trajectory \(x(t)\) as \[x(t)=I_{+,0}^{H+1/2}[\xi_{\rm wn}(t)]=\frac{1}{\Gamma(H+1/2)}\int_{0}^{t}\frac{ \xi_{\rm wn}(\tau)}{\left(t-\tau\right)^{1/2-H}}\,d\tau\,,\quad x(t=0)=0\,, \tag{5}\] where \(\xi_{\rm wn}(t)\) is the above-defined Gaussian, delta-correlated white noise with zero mean value and the covariance function defined in eq. (3). For \(H=1/2\), the process in eq. (5) is clearly the standard Brownian motion. The covariance function \({\rm Cov}(t_{1},t_{2})=\overline{x(t_{1})x(t_{2})}\) of the fBm process in eq. (5) is given for \(t_{1}\geq t_{2}\) by \[{\rm Cov}(t_{1},t_{2})=\frac{(H+1/2)}{\Gamma^{2}\,(H+3/2)}\frac{t_{2}}{(t_{1}t _{2})^{1/2-H}}\,_{2}F_{1}\left(1,\frac{1}{2}-H;\frac{3}{2}+H;\frac{t_{2}}{t_{1 }}\right)\,, \tag{6}\] while the expression for \(t_{2}\geq t_{1}\) is obtained from eq. (6) by merely interchanging \(t_{1}\) and \(t_{2}\). In eq. (6) and henceforth, \(\Gamma(z)\) and \({}_{2}F_{1}\) denote the Gamma function and the Gauss hypergeometric function, respectively. Note that the mean-square displacement of such a process obeys \(\overline{x^{2}(t)}=t^{2H}/(2H\Gamma^{2}(H+1/2))\), as can be inferred from eq. (6) directly, by setting \(t_{1}=t_{2}=t\). Therefore, the process is sub-diffusive for \(H<1/2\), diffusive for \(H=1/2\) and super-diffusive for \(H>1/2\). In the limit \(H\to 1/2\), the Gauss hypergeometric function \({}_{2}F_{1}\) in eq. (6) converges to \(1\) and hence, the covariance function becomes \({\rm Cov}(t_{1},t_{2})=\min(t_{1},t_{2})\), as it should. Path integral representation for the process in eq. (5) has been found in [34, 35] for a sub-diffusive (with \(0<H<1/2\)) Riemann-Liouville fBm. Using a simple argument based on the Gaussian nature of the underlying noise, it was shown (see the text after eq. (27) in [34]) that the action \(S[x(t)]\) can be written as \[\begin{split} S[x(t)]&=\frac{1}{2}\int_{0}^{T}dt \left\{I_{+,0}^{1/2-H}\left[\frac{dx(\tau)}{d\tau}\right]\right\}^{2}\\ &=\frac{1}{2}\int_{0}^{T}dt\left\{\frac{1}{\Gamma(1/2-H)}\int_{0} ^{t}\frac{dx(\tau)}{d\tau}\frac{d\tau}{(t-\tau)^{H+1/2}}\right\}^{2}\,,\quad 0<H< 1/2\,,\end{split} \tag{7}\] where the expression in the curly brackets is, in fact, the reciprocal operator (the so-called fractional derivative) of the fractional integral in eq. (5). Note that we have somewhat changed the notations in eq. (7) as compared to the ones used in [34], to make them consistent with our subsequent analysis. The representation (7) is not valid for the super-diffusive Levy fBm. Indeed, for \(1/2<H<1\), the inner fractional integral in eq. (7) is not defined, because the kernel has a non-integrable singularity. However, a valid representation in the super-diffusive case can be obtained from eq. (7) by a mere integration by parts. Assuming that the fractional Gaussian noise of the Levy-type (which generates a given trajectory \(x(t)\)) is zero for \(t=0\) and \(t=T\), we find \[\begin{split} S[x(t)]&=\frac{1}{2}\int_{0}^{T}dt \left\{I_{+,0}^{3/2-H}\left[\frac{d^{2}x(\tau)}{d\tau^{2}}\right]\right\}^{2} \\ &=\frac{1}{2}\int_{0}^{T}dt\left\{\frac{1}{\Gamma(3/2-H)}\int_{0} ^{t}\frac{d^{2}x(\tau)}{d\tau^{2}}\frac{d\tau}{(t-\tau)^{H-1/2}}\right\}^{2} \,,\quad 1/2<H<1\,.\end{split} \tag{8}\] Note that in this representation the second-order derivative naturally replaces the first-order one, likewise it happens for a _super-diffusive_ random-acceleration process [13]. Note, as well, that the singularity of the kernel is now integrable for \(1/2<H<1\) and therefore, the expression (8) is well-defined. The derivation of the form in eq. (8) using a canonical approach, which hinges on the integral representation of the Gauss hypergeometric function and application of the fractional integrals technique, will be presented elsewhere [42]. **Mandelbrot - van Ness fBm.** Mandelbrot and van Ness (MvN) [32] used instead the Weyl integral [30] to define a given fBm trajectory \(x(t)\) for \(t\geq 0\) as \[\begin{split} x(t)&=\frac{1}{\Gamma(H+1/2)}\Bigg{\{} \int_{0}^{t}(t-\tau)^{H-1/2}\xi_{\rm wn}(\tau)\,d\tau\\ &+\int_{-\infty}^{0}\left[(t-\tau)^{H-1/2}-(-\tau)^{H-1/2}\right] \xi_{\rm wn}(\tau)\,d\tau\Bigg{\}}\,,\quad x(t=0)=0\,,\end{split} \tag{9}\] and in a similar way for the evolution at \(t<0\), if one defines the time variable \(t\) on the whole real line (two-sided process). Such a fBm process, which is also fixed to be at the origin at \(t=0\), depends explicitly on the evolution of noise in the entire past. Respectively, the form of the covariance function depends on whether one considers the two-sided version of the process, with \(t\in(-\infty,\infty)\), in which case \[\mathrm{Cov}(t_{1},t_{2})=\overline{x(t_{1})x(t_{2})}=\frac{1}{2}\left(|t_{1} |^{2H}+|t_{2}|^{2H}-|t_{1}-t_{2}|^{2H}\right)\,, \tag{10}\] or restricts \(t\) to be only positive (one-sided process). In this latter case, one has \[\mathrm{Cov}(t_{1},t_{2})=\frac{1}{2}\left(t_{1}^{2H}+t_{2}^{2H}-|t_{1}-t_{2} |^{2H}\right)\,. \tag{11}\] Setting \(t_{1}=t_{2}=t\) (with \(t>0\)) in eqs. (10) and (11), one finds \(\overline{x^{2}(t)}=t^{2H}\), which differs from the analogous result for Levy fBm given above only by a numerical factor. For MvN fBms exact analogues of the action in eq. (2) were determined in recent [41]. For _sub-diffusive_ one-sided MvN fBm the action is given by \[\begin{split} S[x(t)]=\frac{\mathrm{ctg}(\pi H)}{4\pi H}\int_{0 }^{\infty}\int_{0}^{\infty}\frac{dx(t_{1})}{dt_{1}}\frac{dx(t_{2})}{dt_{2}} \frac{\mathrm{I}_{z}(1/2-H,H)}{|t_{1}-t_{2}|^{2H}}dt_{1}dt_{2}\,,\quad 0<H<1/2 \,,\\ \mathrm{I}_{z}(a,b)=\frac{1}{B(a,b)}\int_{0}^{z}x^{a-1}(1-x)^{b-1 }dx\,,\quad z=\frac{4t_{1}t_{2}}{(t_{1}+t_{2})^{2}}\,,\end{split} \tag{12}\] where \(B(a,b)\) and \(\mathrm{I}_{z}(a,b)\) are the beta- and the regularized incomplete beta-functions, respectively. In turn, for two-sided sub-diffusive fBm one has \[S[x(t)]=\frac{\mathrm{ctg}(\pi H)}{4\pi H}\int_{-\infty}^{\infty}\int_{-\infty }^{\infty}\frac{dx(t_{1})}{dt_{1}}\frac{dx(t_{2})}{dt_{2}}\frac{dt_{1}dt_{2}} {|t_{1}-t_{2}|^{2H}}\,,\quad 0<H<1/2\,. \tag{13}\] For _super-diffusive_ MvN fBm the action \(S[x(t)]\) involves second-order derivatives with respect to the time variables. For the one-sided version of the fBm the action is given by \[\begin{split} S[x(t)]&=A\int_{0}^{\infty}\int_{0}^{ \infty}\frac{d^{2}x(t_{1})}{dt_{1}^{2}}\frac{d^{2}x(t_{2})}{dt_{2}^{2}}\,|t_{1}- t_{2}|^{2-2H}\mathrm{I}_{z^{\prime}}(3/2-H,2H-2)\,dt_{1}dt_{2}\,,\\ &\qquad A=\frac{\mathrm{ctg}(\pi H)}{8\pi H(1-H)(2H-1)}\,,\quad z ^{\prime}=\frac{\min(t_{1},t_{2})}{\max(t_{1},t_{2})}\,,\quad 1/2<H<1\,,\end{split} \tag{14}\] while in the two-sided case it obeys \[S[x(t)]=A\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{d^{2}x(t_{1})}{ dt_{1}^{2}}\frac{d^{2}x(t_{2})}{dt_{2}^{2}}\,|t_{1}-t_{2}|^{2-2H}\,dt_{1}dt_{2}\,, \quad 1/2<H<1\,, \tag{15}\] where the numerical amplitude \(A\) is defined in eq. (14). Several remarks are here in order: a) Likewise eq. (2) for the standard Brownian motion, the actions for the _sub-diffusive_ Levy fBm in eq. (7) and for the _sub-diffusive_ MvN fBms in eqs.(12) and (13), involve not the trajectories themselves, but rather the corresponding fractional Gaussian noises (or white noise in the case of standard Brownian motion). Together with eq. (1) they define the probability of a given realization of a fractional Brownian noise. In turn, for the _super-diffusive_ case eqs. (8), (14) and (15) involve the second-order derivatives of the trajectories and hence, the first-order derivatives of the respective fractional Gaussian noises. Recall that the action of the above mentioned random-acceleration process also involves the second derivative of the trajectories [13]. b) Despite the fact that both one-sided and two-sided MvN fBms have stationary increments, the kernels for the two-sided fBm depend only on the difference of the time variables, while for the one-sided fBm it is not the case, due to the incomplete beta-function which depends on both time variables. c) A compact expression in eq. (13) with \(H=1/4\) has been evaluated about three decades ago in [43], which aimed at finding an analogue of the Wiener's measure (1) for a given trajectory of a tagged monomer in a long Rouse polymer chain [44, 45]. It was shown that for a chain which is initially in an equilibrium state, (i.e., the chain is left to evolve freely in a thermal bath within an infinite period of time before the measurements start), the action obeys precisely the form in eq. (13) with \(H=1/4\). This is not surprising, of course, given that the dynamics of a tagged bead in an initially pre-thermalised infinitely long Rouse chain corresponds to the two-sided MvN fBm process with \(H=1/4\). d) Expression (13) with arbitrary \(H\leq 1/2\) has been recently proposed in [46] as an effective Hamiltonian of topologically-stabilised polymers in melts, permitting to cover various conformations ranging from ideal Gaussian coils to crumpled globules. ## 3 Main results We list here the main results of the present work for the MvN one- and two-sided fBms, relegating details of derivations to Sec. 4 and the Appendices. We show below that for the sub-diffusive MvN fBms the actions in eqs. (12) and (13) admit, respectively, the following representations in terms of the Riemann-Liouville fractional integrals : \[\eqalign{S[x(t)]&=B\int_{0}^{\infty}dt\left\{I_{-,\infty}^{1/2-H}\left[\frac{dx(t )}{dt}\right]\right\}^{2}\cr&=B\int_{0}^{\infty}dt\left\{\frac{1}{\Gamma(1/2-H )}\int_{t}^{\infty}\frac{dx(\tau)}{d\tau}\frac{d\tau}{(\tau-t)^{H+1/2}} \right\}^{2}\,,\quad 0<H<1/2\,,} \tag{16}\] and \[\eqalign{S[x(t)]&=B\int_{-\infty}^{\infty}dt\left\{I_{+,-\infty}^{1/2-H} \left[\frac{dx(t)}{dt}\right]\right\}^{2}\cr&=B\int_{-\infty}^{\infty}dt \left\{\frac{1}{\Gamma(1/2-H)}\int_{-\infty}^{t}\frac{dx(\tau)}{d\tau}\frac{ d\tau}{(t-\tau)^{H+1/2}}\right\}^{2}\,,\quad 0<H<1/2\,,} \tag{17}\] where in both representations the numerical amplitude \(B\) is given by \[B=\frac{1}{4H\sin(\pi H)\Gamma(2H)}\,. \tag{18}\] Expressions (16) correspond to the one-sided case and involve the right-sided fractional Riemann-Liouville integral extended over the entire "future", while the expressions (17) hold for the two-sided case and involve the left-sided fractional integral extended over the entire "past". Comparing the above expressions against the corresponding result for the Levy fBm, eq. (7), we notice that all three representations involve the fractional integral of the same order and differ only by the limits of integrations. Therefore, despite all the above-mentioned distinct features, these three kinds of sub-diffusive fBms can be considered as being the members of the same family. For the super-diffusive MvN fBm we find \[\eqalign{S[x(t)]&=B\int_{0}^{\infty}dt\left\{I_{-,\infty}^{3/2-H}\left[\frac{ d^{2}x(t)}{dt^{2}}\right]\right\}^{2}\cr&=B\int_{0}^{\infty}dt\left\{\frac{1}{ \Gamma(3/2-H)}\int_{t}^{\infty}\frac{d^{2}x(\tau)}{d\tau^{2}}\frac{d\tau}{( \tau-t)^{H-1/2}}\right\}^{2}\,,\quad 1/2<H<1\,,} \tag{19}\] and \[\eqalign{S[x(t)]&=B\int_{-\infty}^{\infty}dt\left\{I_{-,\infty}^{3/2-H}\left[ \frac{d^{2}x(t)}{dt^{2}}\right]\right\}^{2}\cr&=B\int_{-\infty}^{\infty}dt \left\{\frac{1}{\Gamma(3/2-H)}\int_{t}^{\infty}\frac{d^{2}x(\tau)}{d\tau^{2}} \frac{d\tau}{(\tau-t)^{H-1/2}}\right\}^{2}\,,\quad 1/2<H<1\,,} \tag{20}\] where the expressions (19) correspond to the one-sided, while the expressions (20) - to the two-sided super-diffusive MvN fBms. Note that the numerical amplitude \(B\) (see eq. (18)) is the same for sub- and super-diffusive dynamics. Comparing the above expressions against the corresponding result for the Levy fBm, eq. (8), we realise that, again, all three representations involve the fractional integral of the same order and differ only in the integration limits. This signifies that also in the super-diffusive case the three kinds of fBm evidently belong to the same family. ## 4 Details of derivations We pursue here the canonical approach which hinges on the following general expression for the action for an arbitrary Gaussian process \(x(t)\). This action is, in general, a quadratic functional of the form [47], \[S[x(t)]=\frac{1}{2}\int_{\Omega}dt_{1}\,x(t_{1})\int_{\Omega}dt_{2}\,x(t_{2})\, K(t_{1},t_{2})\,, \tag{21}\] where \(\Omega\) is the interval on which the time variable \(t\) is defined. In turn, the kernel \(K(t_{1},t_{2})\) in eq. (21) is a symmetric function of the time variables, i.e., \(K(t_{1},t_{2})=K(t_{2},t_{1})\), and is defined as the inverse of the covariance function of the process \(x(t)\) : \[\int_{\Omega}dt_{1}\,K(t_{1},t_{2})\,\mbox{Cov}\,(t_{1},t_{3})=\delta\,(t_{2}- t_{3}). \tag{22}\] Below we derive the solutions of the latter equation for the sub-diffusive one- and two-sided MvN fBms, as well as for their super-diffusive counterparts. A brief account of the derivation of the kernels in the above-presented explicit form was given in [41]. Here, we concentrate specifically on the solutions in the form of fractional integrals. ### Sub-diffusive Mandelbrot - van Ness fBms **One-sided case.** In this case, the interval \(\Omega\) on which the time variable is defined is \([0,\infty)\). Differentiating eq. (22) over the variable \(t_{3}\), and taking advantage of the identities \[\frac{d}{dt_{3}}\delta(t_{2}-t_{3})=-\frac{d}{dt_{2}}\delta(t_{2}-t_{3})\,, \tag{23}\] and \[\frac{d}{dt_{3}}\mbox{Cov}(t_{1},t_{3})=H\left(\frac{1}{t_{3}^{1-2H}}-\frac{ \mbox{sign}(t_{3}-t_{1})}{|t_{3}-t_{1}|^{1-2H}}\right)\,, \tag{24}\] we find that eq. (22) becomes \[\int_{0}^{\infty}dt_{1}\,K(t_{1},t_{2})\,\frac{\mbox{sign}(t_{3}-t_{1})}{|t_{ 3}-t_{1}|^{1-2H}}=\frac{1}{H}\frac{d}{dt_{2}}\delta\,(t_{2}-t_{3})+\frac{1}{t _{3}^{1-2H}}\left(\int_{0}^{\infty}du\,K(u,t_{2})\right)\,. \tag{25}\] This is an integral equation with a Feller potential in the left-hand-side and therefore, its formal solution is given by the inverse Feller transform (see, e.g., [30] and earlier paper [48]): \[K(t_{1},t_{2}) = \frac{\mbox{ctg}(\pi H)}{2\pi B(1/2-H,2H)}\frac{d}{dt_{1}}\int_{0 }^{t_{1}}\frac{du}{(t_{1}-u)^{H+1/2}}\int_{u}^{\infty}\frac{dz}{(z-u)^{H+1/2}} \tag{26}\] \[\times \left(\frac{1}{H}\frac{d}{dt_{2}}\delta(t_{2}-z)+\frac{1}{z^{1-2H }}\int_{0}^{\infty}du\,K(u,t_{2})\right).\] Performing the integrals in the right-hand-side of eq. (26), we realise that the contribution of the second term in the parentheses vanishes, because the two-fold integral \[\int_{0}^{t_{1}}\frac{du}{(t_{1}-u)^{H+1/2}}\int_{u}^{\infty}\frac{dz}{z^{1-2H}( z-u)^{H+1/2}}=\frac{4^{H}\pi^{3/2}\Gamma(1/2-H)}{\cos(\pi H)\Gamma(1-H)} \tag{27}\] is equal to a \(t_{1}\)-independent constant and hence, its derivative with respect to \(t_{1}\) is zero. The contribution of the first term in the parentheses can be conveniently represented as \[K(t_{1},t_{2})=\frac{\mathrm{ctg}(\pi H)}{2\pi HB(1/2-H,2H)}\frac{d^{2}}{dt_{1 }dt_{2}}\int_{0}^{\infty}dt\frac{\theta(t_{1}-t)\theta(t_{2}-t)}{[(t_{1}-t)(t _{2}-t)]^{H+1/2}}\,, \tag{28}\] where \(\theta(z)\) is the Heaviside theta-function, such that \(\theta(z)=1\) for \(z>0\), and zero, otherwise. Note that in this representation, the \(t_{1}\)- and \(t_{2}\)-dependent contributions factorise. Inserting the expression (28) into eq. (21) and changing the order of integrations, we get the following factorised representation \[\begin{split} S[x(t)]=\frac{\mathrm{ctg}(\pi H)}{4\pi HB(1/2-H,2 H)}\int_{0}^{\infty}& dt\left\{\int_{0}^{\infty}dt_{1}\,x(t_{1})\,\frac{d}{dt_{1}}\frac{ \theta(t_{1}-t)}{(t_{1}-t)^{H+1/2}}\right\}\\ &\times\left\{\int_{0}^{\infty}dt_{2}\,x(t_{2})\frac{d}{dt_{2}} \,\frac{\theta(t_{2}-t)}{(t_{2}-t)^{H+1/2}}\right\}\,.\end{split} \tag{29}\] Further on, performing the integration of each inner integral by parts, taking into account the initial condition \(x(0)=0\) and also supposing that the trajectories \(x(t)\) are not too over-stretched, (such that \(\lim_{t\to\infty}x(t)/t^{H+1/2}\to 0\)), we have \[S[x(t)]=\frac{1}{4H\sin(\pi H)\Gamma(2H)}\int_{0}^{\infty}dt\left\{\frac{1}{ \Gamma(1/2-H)}\int_{0}^{\infty}d\tau\,\frac{dx(\tau)}{d\tau}\,\frac{\theta( \tau-t)}{(\tau-t)^{H+1/2}}\right\}^{2}\,. \tag{30}\] The last step consists in simply recalling the definition of the theta-function, which yields eventually our result in eq. (16). The derivation of the action in eq. (12) from eq. (28), and also the behaviour in the limit \(H\to 1/2\), (in which case one recovers the standard Brownian motion result (2)) is discussed in A. **Two-sided case.** In this case, the interval \(\Omega\) on which the time variable is defined is the entire real axis, \(t\in(-\infty,\infty)\). For the two-sided case, a link between the explicit form in eq. (17) and the form (13) involving the Riemann-Liouville integral is provided by the integral representation : \[\begin{split}&\frac{1}{|t_{1}-t_{2}|^{2H}}=\frac{1}{B(1/2-H,2H)} \int_{\max(t_{1},t_{2})}^{\infty}\frac{dt}{[(t-t_{1})(t-t_{2})]^{H+1/2}}\\ &=\frac{1}{B(1/2-H,2H)}\int_{-\infty}^{\infty}dt\frac{\theta(t- t_{1})\theta(t-t_{2})}{[(t-t_{1})(t-t_{2})]^{H+1/2}}\,,\quad t_{1},t_{2}\in(- \infty,\infty)\,.\end{split} \tag{31}\] Inserting this representation into eq. (13) and changing the integration order, we get \[\begin{split} S[x(t)]=\frac{1}{4H\sin(\pi H)\Gamma(2H)}\int_{- \infty}^{\infty}dt\left\{\frac{1}{\Gamma(1/2-H)}\int_{-\infty}^{\infty}d\tau \,\frac{dx(\tau)}{d\tau}\,\frac{\theta(t-\tau)}{(t-\tau)^{H+1/2}}\right\}^{2 }\,,\end{split} \tag{32}\] from which expression we obtain the result in eq. (17). Details of the derivation of eq. (13) can be found in [41] and are also presented in a more complete form in B. ### Super-diffusive Mandelbrot - van Ness fBms **One-sided case.** For one-sided super-diffusive fBm it is convenient to take advantage of the following identity, obeyed by the covariance function in eq. (11), \[\frac{1}{2}\left(t_{1}^{2H}+t_{2}^{2H}-|t_{1}-t_{2}|^{2H}\right)=H(2H-1)\int_{ 0}^{t_{1}}\int_{0}^{t_{2}}\frac{dudv}{|u-v|^{2-2H}}\,. \tag{33}\] Inserting this identity into the integral equation (22), and integrating by parts (see C for more details), we find the following integral equation \[\int_{0}^{\infty}\frac{q(t_{1},t_{3})dt_{1}}{|t_{1}-t_{3}|^{2-2H}}=\frac{ \delta(t_{3}-t_{2})}{H(2H-1)}\,, \tag{34}\] obeyed by an auxiliary function \(q(t_{1},t_{3})\) of the form \[q(t_{1},t_{3})=\int_{0}^{t_{1}}d\tau_{1}\int_{0}^{t_{3}}d\tau_{3}\,K(\tau_{1},\tau_{3})\,. \tag{35}\] The solution of eq. (34) can be found in a standard way by applying the technique of fractional integrals (or can be found directly in [48]) and reads (see C) \[q(t_{1},t_{3})=\frac{1}{2H\sin(\pi H)\Gamma(2H)\Gamma^{2}(3/2-H)}\frac{d^{2} }{dt_{1}dt_{3}}\int_{0}^{\infty}dt\frac{\theta(t_{1}-t)\theta(t_{3}-t)}{[(t_{ 1}-t)(t_{3}-t)]^{H-1/2}}\,, \tag{36}\] such that the kernel \(K(t_{1},t_{3})\) is given by \[K(t_{1},t_{3})=\frac{1}{2H\sin(\pi H)\Gamma(2H)\Gamma^{2}(3/2-H)}\frac{d^{4} }{dt_{1}^{2}dt_{3}^{2}}\int_{0}^{\infty}dt\frac{\theta(t_{1}-t)\theta(t_{3}-t )}{[(t_{1}-t)(t_{3}-t)]^{H-1/2}}\,. \tag{37}\] Inserting the latter expression into eq. (21) and changing the integration order, we get \[S[x(t)]=\frac{1}{4H\sin(\pi H)\Gamma(2H)}\int_{0}^{\infty}dt\left\{\frac{1}{ \Gamma(3/2-H)}\int_{0}^{\infty}d\tau\,x(\tau)\frac{d^{2}}{d\tau^{2}}\frac{ \theta(\tau-t)}{(\tau-t)^{H-1/2}}\right\}^{2}\,. \tag{38}\] Supposing next that the trajectories \(x(t)\) are not too over-stretched at \(t\to\infty\), (as compared to the typical behaviour \(x(t)\simeq t^{H}\)), such that \[\lim_{t\to\infty}\frac{x(t)}{t^{H+1/2}}=0\,,\qquad\lim_{t\to\infty}\frac{1}{t ^{H-1/2}}\frac{dx(t)}{dt}=0\,, \tag{39}\] we find the representation of the action in eq. (19) by integrating twice by parts the inner integral in eq. (38). Convergence of the action in eq. (19) to the standard result in eq. (2) in the limit \(H\to 1/2\) is demonstrated in C. **Two-sided case.** Following [41], here we pursue a bit different line of thought and first reformulate our eqs. (21) and (22) in terms of a two-sided fractional Gaussian noise \(\zeta_{fGn}(t)\), rather than in terms of the fBm trajectories. These equations read \[S\left[x(t)\right]=\frac{1}{2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{dx(t_{1})}{dt_{1}}\frac{dx(t_{2})}{dt_{2}}\,q(t_{1},t_{2})\,dt_{1}\,dt_{ 2}\,, \tag{40}\] where the kernel \(q(t_{1},t_{2})\) is implicitly defined by the integral equation \[\int_{-\infty}^{\infty}dt_{1}\,q(t_{1},t_{2})\,\mathrm{cov}(t_{1},t_{3})= \delta(t_{2}-t_{3})\,, \tag{41}\] with \(\mathrm{cov}(t_{1},t_{3})\) being the covariance of the fractional Gaussian noise [32] \[\mathrm{cov}(t_{1},t_{3})=\frac{\overline{dx(t_{1})}}{dt_{1}}\frac{dx(t_{3})} {dt_{3}}=\frac{H(2H-1)}{|t_{1}-t_{3}|^{2-2H}}\,. \tag{42}\] Note that the kernel \(q(t_{1},t_{2})\) is related to the kernel \(K(t_{1},t_{2})\) in the original eq. (21) through the integral transformation \[q(t_{1},t_{2})=\int_{-\infty}^{t_{1}}d\tau_{1}\int_{-\infty}^{t_{2}}d\tau_{2} \,K(\tau_{1},\tau_{2})\,, \tag{43}\] and therefore differs from a similar property defined in eq. (35) only by the lower limit of integrations. An explicit form of the integral equation obeyed by \(q(t_{1},t_{2})\) is obtained by simply inserting the definition in eq. (42) into the eq. (41), which gives \[\int_{-\infty}^{\infty}\frac{dt_{1}\,q(t_{1},t_{2})}{|t_{1}-t_{3}|^{2-2H}}= \frac{1}{H(2H-1)}\delta(t_{2}-t_{3})\,. \tag{44}\] This is an integral equation with the Riesz potential in the left-hand-side and its solution is given by the inverse Riesz transform [30]. The solution presented in [30] yields directly the explicit representation in eq. (15), but is inconvenient for our purposes here. The point is that it contains a positive power of the difference \(|t_{1}-t_{2}|\) and therefore does not permit to get a factorised representation in terms of fractional integrals. To circumvent this difficulty, we consider instead an integral equation for the auxiliary function \[u(t_{1},t_{2})=\int_{-\infty}^{t_{2}}d\tau_{2}\,q(t_{1},\tau_{2})\,, \tag{45}\] and solve it in Appendix D using the approach based on the fractional integrals. In doing so, we find that \(u(t_{1},t_{2})\) obeys \[u(t_{1},t_{2})=-\frac{\mathrm{ctg}(\pi H)\Gamma(H-1/2)}{2\pi H\Gamma(3/2-H) \Gamma(2H)}\frac{d}{dt_{1}}\int_{-\infty}^{\infty}dt\frac{\theta(t_{1}-t) \theta(t_{2}-t)}{[(t_{1}-t)(t_{2}-t)]^{H-1/2}}\,, \tag{46}\] in which the dependence on \(t_{1}\) and \(t_{2}\) appears in a factorised form. Further on, we concentrate on eq. (40) and integrate it by parts with respect to \(t_{2}\). This gives, assuming that the fractional Gaussian noise vanishes at time equal to plus infinity, \[\eqalign{S[x(t)]&=-\frac{1}{2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \frac{dx(t_{1})}{dt_{1}}\frac{d^{2}x(t_{2})}{dt_{2}^{2}}\,u(t_{1},t_{2})\,dt_{1 }\,dt_{2}\\ &=-\frac{1}{4H\sin(\pi H)\Gamma(2H)}\int_{-\infty}^{\infty}dt\left\{ \frac{1}{\Gamma(3/2-H)}\int_{t}^{\infty}\frac{d^{2}x(t_{2})}{dt_{2}^{2}}\frac{ dt_{2}}{(t_{2}-t)^{H-1/2}}\right\}\\ &\qquad\qquad\times\left\{\frac{1}{\Gamma(3/2-H)}\int_{-\infty}^{ \infty}dt_{1}\frac{dx(t_{1})}{dt_{1}}\frac{d}{dt_{1}}\frac{\theta(t_{1}-t)}{( t_{1}-t)^{H-1/2}}\right\}\,.\end{] (47) Performing the integral in the last line of the latter equation by parts, we arrive at the desired representation in the last two lines of eq. (20). Convergence of the action in eq. (20) to the one in eq. (2) in the limit \(H\to 1/2\) is discussed in D. ## 5 Conclusions To conclude, in the present paper we were concerned with the so-called fractional Brownian motions - non-Markovian Gaussian stochastic processes with long-ranged power-law correlations between the increments. In case when these correlations are positive prompting the increments between the successive moves to have the same sign, they entail a super-diffusive motion. On contrary, when correlations are negative such that the increments tend to have different signs, the resulting dynamical behaviour is sub-diffusive. Such processes, both super-diffusive and sub-diffusive, are often used in the literature to model various transport phenomena or systems with chemical reactions, and also efficient search processes, in which the underlying microscopic dynamics is anomalous. Concurrently, there is an ample evidence that many naturally-occurring processes are, in fact, fractional Brownian motions. In the literature, there exist three distinct definitions of fractional Brownian motions which differ basically by the interval on which the time variable is defined. These are the definition due to Levy, for which \(t\) is defined on a finite interval, and two definitions due to Mandelbrot and van Ness, for which \(t\) is defined either on the positive half-axis or on the entire real line, respectively. The corresponding covariance functions and the actions in the path-integral representations have also very different functional forms. Therefore, one may be prompted to conclude that these are, in fact, quite different and unrelated to each other classes of stochastic processes, and it is unclear why they have the same common name. Motivated by this ambiguity, here we have developed a unifying framework which links all three processes together. Namely, we have established alternative (as compared to the explicit representations in [41]) path integral representations of all three fractional Brownian motions in terms of Riemann-Liouville fractional integrals. We have shown that for all three definitions the action in such representations can be cast into the form which involves the fractional integral of the same kind and order (that depends only on whether \(H<1/2\) or \(H>1/2\)), and differs only by the integration limits. Therefore, our results show that these three kinds of anomalous diffusions are indeed the members of the same family. The authors acknowledge fruitful and instructive discussions with Baruch Meerson, Karol Penson and Horacio Wio.
2304.12980
Analytical solutions of virus propagation model in blockchain networks
The main goal of this paper is to find analytical solutions of a system of nonlinear ordinary differential equations arising in the virus propagation in blockchain networks. The presented method reduces the problem to an Abel differential equation of the first kind and solve it directly.
Youness Chatibi
2023-04-25T16:40:47Z
http://arxiv.org/abs/2304.12980v2
# Analytical solutions of virus propagation model in blockchain networks ###### Abstract The main goal of this paper is to find analytical solutions of a system of nonlinear ordinary differential equations arising in the virus propagation in blockchain networks. The presented method reduces the problem to an Abel differential equation of the first kind and solve it directly. keywords: Virus propagation, Blockchain networks, Abel differential equation, Analytical solutions. ## 1 Introduction In the last decade, many scientific problems have been shown as mathematical model with systems of nonlinear ordinary differential equations, notably in epidemiology [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11], physics [12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23], chemistry [24; 25; 26] and computer science [27; 28; 29]. However, finding analytical solutions for these systems is often very difficult. Few nonlinear systems can be solved explicitly, except that some numerical approximations of the solutions had been proposed [30; 31; 32]. Mathematical and computer modeling have received considerable attention due to their practical importance in controlling and predicting viral spread. It is very significant to study changes in infected host populations in computer networks. Note that the spread of malicious code is in many ways similar to the spread of biological viruses. In 2008, a new technology called blockchain technology was proposed by Satoshi Nakamoto [33]. It is based on a distributed ledger that records transactions in blocks without the need for a central authority of trust. Each block contains a set of transactions and has a hash link to the previous block. If transactions occur at the same time, they are recorded in the same block [34]. Therefore, blockchain modeling is very important because malware virus classes are rapidly evolving and the cybersecurity risks of cryptocurrency networks such as bitcoin are increasing [35]. This work presents analytical solutions of a model in a blockchain network that appears the interactions between viruses, protected and unprotected computers. ## 2 Mathematical description of the problem We consider \(x_{1}\), \(x_{2}\) and \(x_{3}\) as population levels of viruses, protected and unprotected systems. The mathematical formulation of the model is based on the diagram given by Fig. 1, which yield the following system of nonlinear ordinary differential equations [36]: \[\begin{cases}\dfrac{dx_{1}}{dt}=-d_{1}x_{1}+b_{2}x_{2}\\ \dfrac{dx_{2}}{dt}=b_{1}x_{1}-d_{2}x_{2}-k_{2}x_{1}x_{3}\\ \dfrac{dx_{3}}{dt}=-d_{3}x_{3}+k_{1}x_{1}x_{2},\end{cases} \tag{1}\] subject to the non-negative initial conditions \(x_{1}(t_{0})=x_{1}^{0},x_{2}(t_{0})=x_{2}^{0},x_{3}(t_{0})=x_{3}^{0}\), where \(t_{0}\geq 0\) is a real number. The parameters in the system (1) are strictly positive and listed in the following table: ## 3 Analytical resolution of the model **Theorem 1**: _The system (1) can be reduced to an ordinary differential equation (called Lienard equation) determined by_ \[\dfrac{d^{2}x_{1}}{dt^{2}}+A\dfrac{dx_{1}}{dt}+B(x_{1})=0, \tag{2}\] _where \(A\) and \(B\) are polynomial functions._ In this model, the population is constant (i.e.) \(x_{1}+x_{2}+x_{3}=N\). Then \(\frac{dx_{1}}{dt}+\frac{dx_{2}}{dt}+\frac{dx_{3}}{dt}=0\), therefore, for all \(t\geq 0\) the resolution of this system is based on the determination of the function \(x_{1}=x_{1}(t)\) because after that \[x_{2}(t) =\dfrac{1}{b_{2}}\bigg{(}\dfrac{dx_{1}}{dt}+d_{1}x_{1}\bigg{)}, \tag{3}\] \[x_{3}(t) =N-x_{1}(t)-x_{2}(t). \tag{4}\] \begin{table} \begin{tabular}{l l} \hline **Parameter** & **Description** \\ \hline \(d_{1}\) & Coefficient of decay of virus \\ \(d_{2}\) & Coefficient of decay of vulnerable systems \\ \(d_{3}\) & Coefficient of decay of protected systems \\ \(b_{1}\) & Coefficient of susceptibility of vulnerable system \\ \(b_{2}\) & Coefficient of availability of vulnerable system \\ \(k_{1}\) & Coefficient of encounter between virus and protected systems \\ \(k_{2}\) & Coefficient of interaction between virus and vulnerable system \\ \hline \end{tabular} \end{table} Table 1: Description of the model parameters Substitution of (4) into (1) gives us the following ordinary differential equations: \[\frac{dx_{2}}{dt} =(b_{1}-k_{2}N)x_{1}+k_{2}x_{1}^{2}-d_{2}x_{2}+k_{2}x_{1}x_{2}, \tag{5}\] \[\frac{dx_{3}}{dt} =d_{3}(x_{1}+x_{2})+k_{1}x_{1}x_{2}-d_{3}N. \tag{6}\] Therefore, \[k_{2}\frac{dx_{1}}{dt}+(k_{1}+k_{2})\frac{dx_{2}}{dt} =k_{1}\frac{dx_{2}}{dt}-k_{2}\frac{dx_{3}}{dt}\] \[=[(b_{1}-k_{2}N)k_{1}-d_{3}k_{2}]x_{1}-(d_{2}k_{1}+d_{3}k_{2})x_{ 2}+k_{1}k_{2}x_{1}^{2}+d_{3}k_{2}N.\] By differentiating Eq. (3) and use it, we get \[a\frac{d^{2}x_{1}}{dt^{2}}+b\frac{dx_{1}}{dt}+cx_{1}^{2}+dx_{1}+e=0, \tag{7}\] where \[a =\frac{k_{1}+k_{2}}{b_{2}},\quad b=k_{2}+\frac{k_{1}(d_{1}+d_{2}) +k_{2}(d_{1}+d_{3})}{b_{2}},\quad c=-k_{1}k_{2},\] \[d =-b_{1}k_{1}+k_{2}(d_{3}+k_{1}N)+\frac{d_{1}(d_{2}k_{1}+d_{3}k_{2 })}{b_{2}},\quad\text{and}\quad e=-d_{3}k_{2}N.\] Figure 1: Representation of the problem If we denote \(A=\frac{b}{a}\) and \(B(x_{1})=\frac{cx_{1}^{2}+dx_{1}+e}{a}\), we obtain the Lienard ordinary differential equation [37] \[\frac{d^{2}x_{1}}{dt^{2}}+A\frac{dx_{1}}{dt}+B(x_{1})=0. \tag{8}\] **Lemma 1**.: _The solutions of the Lienard equation (2) can be obtained by transforming it to an equivalent first kind first order Abel type equation given by_ \[\frac{dv}{dx_{1}}=Av^{2}+B(x_{1})v^{3}. \tag{9}\] Proof. By denoting \(u=\frac{dx_{1}}{dt}\), Eq. (2) can be expressed as \[u\frac{du}{dx_{1}}+Au+B(x_{1})=0. \tag{10}\] By introducing a new dependent variable \(v=\frac{1}{u}\), Eq. (10) takes the form of an Abel differential equation of the first kind (9). **Theorem 2**.: _The solution of the Abel equation (9) is given by_ \[v=\left(\frac{dx_{1}}{dt}\right)^{-1}=-(Ax_{1}+C)^{-1}\pm\left[Dx_{1}^{3}+Ex_{ 1}^{2}+Fx_{1}+G\right]^{-\frac{1}{2}}, \tag{11}\] _where \(C,D,E,F\) and \(G\) are constants._ Proof. For resolving Eq. (9), we put \(v=v_{1}+v_{2}\) such that \[\frac{dv_{1}}{dx_{1}} =Av_{1}^{2}, \tag{12}\] \[\frac{dv_{2}}{dx_{1}} =B(x_{1})v_{2}^{3}. \tag{13}\] For Eq. (12), we have the solution \[v_{1}=-(Ax_{1}+C)^{-1}\quad\text{where}\quad C\quad\text{is constant}. \tag{14}\] Otherwise, for (13), we put \(v_{2}=P(x_{1})^{r}\) where \(P\in\mathbb{R}[x_{1}]^{*}\) and \(r\in\mathbb{Q}^{*}\). We have \[\frac{dv_{2}}{dx_{1}}=rP^{\prime}(x_{1})[v_{2}^{3}]^{s}\] where \(s=\frac{1}{3}\big{(}1-\frac{1}{r}\big{)}\). For \(s=1\) and \(rP^{\prime}(x_{1})=B(x_{1})\), we get the solution \[v_{2}=\big{[}-2\int B(x_{1})dx_{1}+C^{\prime}\big{]}^{-\frac{1}{2}}=\big{[}Dx_ {1}^{3}+Ex_{1}^{2}+Fx_{1}+G\big{]}^{-\frac{1}{2}}, \tag{15}\] where \(D=\frac{-2c}{3a},E=\frac{-d}{a},F=\frac{-2e}{a}\) and \(G=-2C^{\prime\prime}+C^{\prime}\) (\(C^{\prime}\) and \(C^{\prime\prime}\) are integration's constants). Note that \(\tilde{v}_{2}=-v_{2}\) is also a solution. Then Eq. (9) admits two solutions that are \[v=\left(\frac{dx_{1}}{dt}\right)^{-1}=-(Ax_{1}+C)^{-1}\pm\left[Dx_{1}^{3}+Ex_{1}^ {2}+Fx_{1}+G\right]^{-\frac{1}{2}}. \tag{16}\] **Theorem 3**.: _The general solutions \((x_{1},x_{2},x_{3})\) of the system (1) is given by_ \[x_{1}=\sum_{n=1}^{\infty}\rho_{n}^{\pm}t^{n},\quad x_{2}(t)=\frac{1}{b_{2}} \bigg{(}\frac{dx_{1}}{dt}+d_{1}x_{1}\bigg{)},\quad x_{3}(t)=N-x_{1}(t)-x_{2}(t), \tag{17}\] _where the coefficients \(\rho_{n}^{\pm}\) are defined by_ \[\rho_{n}^{\pm} = \frac{1}{n(\sigma_{1}^{\pm})^{n}}\sum_{s_{1},s_{2},s_{3},\cdots}( -1)^{s_{1}+s_{2}+s_{3}+\cdots} \tag{18}\] \[\times\frac{n(n+1)\cdots(n-1+s_{1}+s_{2}+\cdots)}{s_{1}!s_{2}!s_ {3}!\cdots}\bigg{(}\frac{\sigma_{2}^{\pm}}{\sigma_{1}^{\pm}}\bigg{)}^{s_{1}} \bigg{(}\frac{\sigma_{3}^{\pm}}{\sigma_{1}^{\pm}}\bigg{)}^{s_{2}}\cdots, \tag{19}\] _and the sum over \(s\) values is restricted to partitions of \(n-1\),_ \[s_{1}+2s_{2}+3s_{3}+\cdots=n-1.\] Proof. By integrating (9) we obtain \[t=-\frac{1}{A}\log{(Ax_{1}+C)}\pm\int\left[Dx_{1}^{3}+Ex_{1}^{2}+Fx_{1}+G \right]^{-\frac{1}{2}}. \tag{20}\] Now, letting the equation \[P(x_{1})=0\quad\mbox{where}\quad P(x_{1})=Dx_{1}^{3}+Ex_{1}^{2}+Fx_{1}+G. \tag{21}\] Note that, if we apply the Tschirnhaus method [38] by putting \(y_{1}=x_{1}+\frac{E}{3D}\), Eq. (21) become \[\tilde{P}(y_{1})=0\quad\mbox{where}\quad\tilde{P}(y_{1})=y_{1}^{3}+Hy_{1}+I, \tag{22}\] where \(H=\frac{3DF-E^{2}}{3D^{2}}\) and \(I=\frac{2E^{3}-9DEF+27D^{2}G}{27D^{3}}\). Eq. (22), can be solve by Cardano's method [39] which one of its solutions take the form \[y_{1}^{(1)}=\left(\frac{-I-\sqrt{\Delta_{1}}}{2}\right)^{\frac{1}{3}}+\left( \frac{-I+\sqrt{\Delta_{1}}}{2}\right)^{\frac{1}{3}}, \tag{23}\] where \(\Delta_{1}=I^{2}+\frac{4H^{3}}{27}\geq 0\). As \(y_{1}-y_{1}^{(1)}\) divide \(\tilde{P}(y_{1})\) we have \[\tilde{P}(y_{1})=(y_{1}-y_{1}^{(1)})Q(y_{1}), \tag{24}\] where \[Q(y_{1})=y_{1}^{2}+(y_{1}+y_{1}^{(1)})y_{1}^{(1)}+H. \tag{25}\] Consequently, the two remaining roots of the equation \(Q(y_{1})=0\) are \[y_{1}^{(2)}=\frac{-y_{1}^{(1)}-\sqrt{\Delta_{2}}}{2},\qquad y_{1}^{(3)}=\frac{- y_{1}^{(1)}+\sqrt{\Delta_{2}}}{2}, \tag{26}\] where \(\Delta_{2}=y_{1}^{(1)2}-4(H+y_{1}^{(1)2})\geq 0\). Whence, \[\tilde{P}(y_{1})=(y_{1}-y_{1}^{(1)})(y_{1}-y_{1}^{(2)})(x_{1}-y_{1}^{(3)}). \tag{27}\] As \(P(x_{1})=D\cdot\tilde{P}(x_{1}+\frac{E}{3D})\) where \(D=\frac{2b_{2}k_{1}k_{2}}{3(k_{1}+k_{2})}>0\), we have \[\int v_{2}dx_{1}=\int\frac{dx_{1}}{\sqrt{D(x_{1}+\theta_{1})(x_{1}+\theta_{2}) (x_{1}+\theta_{3})}}, \tag{28}\] with \(\theta_{k}=\frac{E}{3D}-y_{1}^{(k)}\neq 0\) for \(k=1,2,3\). By applying \[\frac{1}{\sqrt{x+\theta}}=\frac{1}{\sqrt{\theta}}\sum_{p=0}^{\infty}{\left( \begin{matrix}-\frac{1}{2}\\ p\end{matrix}\right)}{\left(\frac{x}{\theta}\right)}^{p}, \tag{29}\] where \[{\left(\begin{matrix}x\\ m\end{matrix}\right)}=\frac{1}{m!}\prod_{i=0}^{m-1}(x-i)=\frac{\Gamma(x+1)}{ \Gamma(m+1)\cdot\Gamma(x-m+1)},\quad\forall m\in\mathbb{N},\quad\forall x\in \mathbb{R} \tag{30}\] is the generalized binomial coefficient, we get \[\frac{1}{\sqrt{D(x_{1}+\theta_{1})(x_{1}+\theta_{2})(x_{1}+\theta_{3})}}= \frac{1}{\sqrt{D}}\sum_{n=0}^{\infty}\mu_{n}(\theta_{1},\theta_{2},\theta_{3} )x_{1}^{n}, \tag{31}\] where \[\mu_{n}(\theta_{1},\theta_{2},\theta_{3})=\sum_{p+q+r=n}\frac{{\left( \begin{matrix}-\frac{1}{p}\\ p\end{matrix}\right)}{\left(\begin{matrix}-\frac{1}{q}\\ q\end{matrix}\right)}{\theta_{1}^{p+\frac{1}{2}}\cdot\theta_{2}^{q+\frac{1}{2 }}\cdot\theta_{3}^{r+\frac{1}{2}}}}. \tag{32}\] Consequently, the integration implies that \[\int v_{2}dx_{1}=\frac{1}{\sqrt{D}}\sum_{n=0}^{\infty}\frac{\mu_{n}(\theta_{1 },\theta_{2},\theta_{3})}{n+1}x_{1}^{n+1}. \tag{33}\] Otherwise, since \(\frac{1}{1+x}=\sum_{n=0}^{\infty}(-x)^{n}\) for \(|x|<1\) and by integration we achieve: \[-\frac{1}{A}\log{(Ax_{1}+C)}=\sum_{n=0}^{\infty}\lambda_{n}(A,C)x_{1}^{n+1}, \tag{34}\] where \[\lambda_{n}(A,C)=\frac{(-1)^{n+1}}{(n+1)C^{n+1}}A^{n}. \tag{35}\] Then, Eq. (20) become \[t=\sum_{n=1}^{\infty}\lambda_{n-1}(A,C)x_{1}^{n}\pm\frac{1}{\sqrt{D}}\sum_{n=1} ^{\infty}\frac{\mu_{n-1}(\theta_{1},\theta_{2},\theta_{3})}{n}x_{1}^{n}=\sum_{ n=1}^{\infty}\sigma_{n}^{\pm}x_{1}^{n}, \tag{36}\] where \[\sigma_{n}^{\pm}=\lambda_{n-1}(A,C)\pm\frac{1}{\sqrt{D}}\frac{\mu_{n-1}(\theta _{1},\theta_{2},\theta_{3})}{n}\quad\text{for}\quad n\geq 1. \tag{37}\] Since, the series expansion of the inverse series is given by \[x_{1}=\sum_{n=1}^{\infty}\rho_{n}^{\pm}t^{n}, \tag{38}\] where the coefficients \(\rho_{n}^{\pm}\) are defined by (for details see [40]) \[\rho_{n}^{\pm} =\frac{1}{n(\sigma_{1}^{\pm})^{n}}\sum_{s_{1},s_{2},s_{3},\cdots }(-1)^{s_{1}+s_{2}+s_{3}+\cdots} \tag{39}\] \[\quad\times\frac{n(n+1)\cdots(n-1+s_{1}+s_{2}+\cdots)}{s_{1}!s_{ 2}!s_{3}!\cdots}\bigg{(}\frac{\sigma_{2}^{\pm}}{\sigma_{1}^{\pm}}\bigg{)}^{s_ {1}}\bigg{(}\frac{\sigma_{3}^{\pm}}{\sigma_{1}^{\pm}}\bigg{)}^{s_{2}}\cdots, \tag{40}\] and the sum over \(s\) values is restricted to partitions of \(n-1\), \[s_{1}+2s_{2}+3s_{3}+\cdots=n-1.\] We note that if \(0<\theta_{1}\theta_{2}\theta_{3}\neq\frac{C^{2}}{D}\), \(\sigma_{1}^{\pm}\neq 0\). The first few \(\rho_{n}^{\pm}\) given by (19) are \[\rho_{1}^{\pm} =\frac{1}{\sigma_{1}^{\pm}},\] \[\rho_{2}^{\pm} =-\frac{1}{(\sigma_{1}^{\pm})^{3}}\sigma_{2}^{\pm},\] \[\rho_{3}^{\pm} =\frac{1}{(\sigma_{1}^{\pm})^{5}}(2(\sigma_{2}^{\pm})^{2}-\sigma _{1}^{\pm}\sigma_{3}^{\pm}).\] Finally, we use (38), Eq. (3) and Eq. (4) to get \(x_{2}\) and \(x_{3}\). #### Conflicts of Interest The authors would declare that they have no known competing interests that could have appeared to influence this work.
2304.03655
Emulating the Deutsch-Josza algorithm with an inverse-designed terahertz gradient-index lens
Photonic systems utilized as components for optical computing promise the potential of enhanced computational ability over current computer architectures. Here, an all-dielectric photonic metastructure is investigated for application as a quantum algorithm emulator (QAE) in the terahertz frequency regime; specifically, we show implementation of the Deustsh-Josza algorithm. The design for the QAE consists of a gradient-index (GRIN) lens as the Fourier transform subblock and silicon as the oracle subblock. First, we detail optimization of the metastructure through numerical analysis. Then, we employed inverse design through a machine learning approach to further optimize the structural geometry. In particular, we improved the lens thickness, in order to enhance the resulting output signal for both balanced and constant functions. We show that by optimizing the thickness of the gradient-index lens through ML, we enhance the interaction of the incident light with the metamaterial leading to a stronger focus of the outgoing wave resulting in more accurate implementation of the desired quantum algorithm in the terahertz.
Ashley N. Blackwell, Riad Yahiaoui, Yi-Huan Chen, Pai-Yen Chen, Thomas A. Searles, Zizwe A. Chase
2023-04-07T14:22:07Z
http://arxiv.org/abs/2304.03655v1
# Emulating the Deutsch-Josza algorithm with an inverse-designed terahertz gradient-index lens ###### Abstract Photonic systems utilized as components for optical computing promise the potential of enhanced computational ability over current computer architectures. Here, an all-dielectric photonic metastructure is investigated for application as a quantum algorithm emulator (QAE) in the terahertz frequency regime; specifically, we show implementation of the Deutsch-Josza algorithm. The design for the QAE consists of a gradient-index (GRIN) lens as the Fourier transform subblock and silicon as the oracle subblock. First, we detail optimization of the metastructure through numerical analysis. Then, we employed inverse design through a machine learning approach to further optimize the structural geometry. In particular, we improved the lens thickness, in order to enhance the resulting output signal for both balanced and constant functions. We show that by optimizing the thickness of the gradient-index lens through ML, we enhance the interaction of the incident light with the metamaterial leading to a stronger focus of the outgoing wave resulting in more accurate implementation of the desired quantum algorithm in the terahertz. osajournal ## 1 Introduction Improvement upon current computing architecture is required in order to enhance computation power, time and resources for industries such as medicine, catalysis, and finance. Traditional digital computers rely heavily on many transistors to function with Moore's law indicating that the number of transistors on a chip needs to double every two years to increase processing capabilities, power, and efficiency [1, 2]. Optical computers present a way to scale down the size of transistors while offering the capabilities of parallel processing, high speed, lower power consumption, and operation at multiple frequencies [3, 4] thus exploiting a photonic system can make these devices a prime candidate to realize this possibility [5], especially in the least exploited THz regime. In order to achieve an integrated analog optical computing system, it is necessary to focus the incident wave to the next operational computing component which can be achieved by the use of computational metamaterials. Numerous systems, designed for free space, have been proposed but have been difficult to integrate and bulky in design [6, 7, 8, 9]. One such metamaterial known as the gradient index (GRIN) lens, can alleviate these issues through being an on-chip device. A prototypical GRIN lens consists of a series of micro-layered structures with holes of different geometries to vary the index of refraction [10, 11] that manipulate the propagation of electromagnetic waves [12]. Previous works of GRIN lens metamaterial structures have theoretically and experimentally demonstrated to operate in the THz regime [13, 14, 15] with strong focusing capabilities [16, 17, 18] and tunability [19, 20, 21]. Recently, GRIN lenses have been integrated as components of larger photonic devices, consisting of a multitude of subblocks, to realize a new optical computing technology, the quantum algorithm emulator (QAE). QAEs simulate quantum search algorithms with classical waves via the superposition principle, interference phenomena, and in some cases entanglement which is integral to rapid searching and solving difficult problems that would be time and power consuming on digital computers. QAEs have only been explored in the microwave region where the measured electric field amplitude is the probability amplitude of the equivalent quantum state [22]. Zhang et al. proposed a dielectric device, made of Veroclear810, consisting of an oracle subblock, two Fourier transform (FT) subblocks, and a phase plate subblock [6, 22]. The oracle subblock imprints a spatially phase dependent profile on the incoming wave while the FT subblock and phase plate subblock converts the phase difference for the oracle subblock to amplitude information [22]. Through this device they were able to simulate Grover's Algorithm and show that the number of iterations performed on the device was consistent with the efficiency of the quantum search algorithm. Metamaterials have reached a high degree of maturity and have recently emerged as a new approach for quantum computing. In this context, Wei et al. proposed a quantum searcher of an on-chip silicon device that consisted of four metasurfaces: an oracle metasurface, two metalenses, and a middle metasurface where different spatial positions of the incident wave showed repeatability in the distribution of the output wave intensity [6]. Cheng et al. experimentally verified a Deutsch-Jozsa (DJ) algorithm with a millimeter scaled all-dielectric device [23]. Here, we report a new design for a quantum algorithm emulator in the THz frequency regime based on a simple platform and optimized by machine learning. The investigated device is composed of a microstructured oracle subblock made of silicon substrate and a FT subblock made of Kapton polyimide. The Kapton film chosen acts as a 2D photonic crystal (PhC) slab and proven to exhibit strong electric field confinement and interaction with THz waves with minimal absorption loss [24]. In this work, numerical simulations enhanced by machine learning (ML) were applied to optimize the hole radii and thickness of the FT subblock to achieve an optimal distribution of wave intensities. The structural design of the QAE is first presented along with the numerical analysis showing the initial optimized output. Lastly, the process and results for the ML process are discussed. Our aim is that the simple design of our device and its compact size may strongly relax the constraints for high frequency domains (e.g., IR and visible) which are expected to play a larger role in photonics and quantum technology as a whole. ## 2 Numerical evaluation of QAE The block diagram of the DJ algorithm is shown in Fig. 1 (top panel). The whole device consists of two functional subblocks, oracle and Fourier transform, respectively, brought into optical contact. Fig. 1 (bottom panel) represents the schematic view of the metamaterial-based quantum emulator. The oracle block is made from a 500-\(\upmu\)m-thick silicon (Si) substrate. It modulates the electric field profile of the incident THz light by assigning a phase shift of 0 or \(\pi\) on each spatial position along \(y\)-axis. This phase shift is introduced by physically varying the radius of the holes array drilled in the silicon substrate, or oracle block, and by consequence also varies the effective electric permittivity \(\epsilon_{e}\) along the y-axis thereby encoding the function \(f(y)\). The Fourier transform block is made from 127-\(\upmu\)m-thick polyimide film with various hole sizes acting as an all-dielectric GRIN metalens as shown in Figs. 2(a) and (b). Each hole has a period of 70 \(\upmu\)m with varying radii of 10, 15, 20, 25, and 30 \(\upmu\)m, respectively from the center to the edge of the device. Due to the rotational symmetry imposed by the geometry of the design, the structure has an identical response for both linearly TE-polarized and TM-polarized waves. Also, note that the use of flexible substrates provides an unprecedented route to achieve frequency tunability due to modifications in the profile and the periodicity of the structures when the substrates are manipulated mechanically [24]. The polyimide film is treated as a dielectric with \(\varepsilon=3.3+i0.05\). For the GRIN lens design, we chose a radially symmetric refractive index gradient following a parabolic index profile \(n(r)=n_{0}\text{sech}(\alpha r)\) with \(\alpha=1/r_{0}+\cosh^{-1}(n_{0}/n_{r}0)\) [Fig. 2(c)]. For this purpose, we introduced a spatial variation of the refractive index by arraying unit cells of different radius such that the refractive index gradually decreased from the center of the GRIN lens. The dimensions of each Figure 1: Schematic of the Quantum Algorithm Emulator. (Top panel) Block diagram of the DJ algorithm and (Bottom panel) metamaterial-based multi-layer configuration of the DJ algorithm emulator. The input THz wave of width D is modulated by the oracle block of varying hole diameters and then transformed through the Fourier block to show the output signal of either a constant function (top output) or balanced function (bottom output). Figure 2: (a) Structural design of the unit cell with the relevant geometrical dimensions: \(p_{x}=p_{y}=70\) μm, \(t=127\) μm and \(r\) varies 10, 15, 20, 25 and 30 μm, respectively. (b) Schematic view of the GRIN lens. The aperture size of the lens is \(\sim 1\) mm. (c) Index profile of the structure at 0.8 THz. The blue solid line is a parabolic fit to the refractive index profile. (d) Simulated electric field intensity on the focal plane at 0.8 THz. The input electric field intensity is also plotted for comparison. (e) Simulated normalized electric field distribution of the GRIN lens at 0.8 THz, for linearly TE-polarized radiation. cell are modified to fit the refractive index profile on the device. The design of the final GRIN lens structure is shown in Fig. 2(b). The effective permittivity of the oracle block can be expressed as: \(\epsilon_{e}(y)=(3\lambda_{0}/d_{0})^{2}\); \(\Delta\phi=0\), \(\epsilon_{e}(y)=(2.5\lambda_{0}/d_{0})^{2}\); \(\Delta\phi=\pi\), where \(d_{0}\) is the length of the oracle block and \(\lambda_{0}\) the working wavelength. To examine the performance of the designed metalens, we plotted in Fig. 2(d) the cross-section of the normalized intensity profiles in the plane \(y=0\) (axial \(x\)-\(z\) plane) around the focal point of the metalens at 0.8 THz. The input electric field intensity is also plotted for comparison. Shown in Fig. 2(e) we present the normalized local electric field distribution of the GRIN lens at 0.8 THz, for linearly TE-polarized radiation. In it, we aim to clearly demonstrate the successful realization of the focusing property of our design. When the incident light irradiates the surface of the oracle block, the phase of the transmitted wave is modulated with a factor \(k_{0}n(y)d_{0}\), where \(k_{0}=2\pi/\lambda_{0}\) is the vacuum wave vector, \(d_{0}\) is the thickness of the oracle block and \(n(y)\) is the effective refractive index at position (\(y\)). The detecting function \(f(y)\) is encoded into the input states by assigning a phase modulation on each spatial position (\(y\)) along the input transversal direction. The refractive index of the oracle operator is designed to achieve either 0 or \(\pi\) phase distribution depending on the value of the function \(f\). The Fourier transform operator is used to evaluate the final results on the output signal. There are optical processes that can produce Fourier transform of field distribution, such as diffraction and Optical spatial filtering, respectively. Generally, the far field diffraction pattern is observed at infinity. By placing a lens after the diffracting aperture, the plane at infinity is imaged onto the focal plane of the lens. This explains why a lens can perform a Fourier transform. To evaluate the performance of the device, we performed numerical simulations based on the finite-difference time-domain (FDTD) method. The length scale of the mesh was set to be less than or equal to \(\lambda_{0}/10\) throughout the simulation domain, where \(\lambda_{0}\) is the central wavelength of the incident radiation. The input and output ports are located at about 10\(\lambda_{0}\) from the device with open boundary conditions. The blue line plots in Figure 3 show the electric field intensity at the focal plane of the GRIN lens with the detecting function being constant and balance, respectively, computed for the initial numerical analysis at 0.8 Thz. The maximum intensity at y = 0 position means the encoded function \(f(y)\) is constant. However, if the center intensity is zero this indicates that the oracle subblock carries a balance function. Since the 0 and \(\pi\) phase elements correspond with different effective permittivities, it makes the oracle subblock processes spatially varying impedance which results in different transmissions creating a non-symmetric intensity for the output of the electric field as shown in Fig 3 (blue line, right panel). Figure 3: The electric field intensity at the focal plane of the GRIN lens for an intermediate working frequency of 0.8 THz, with the detecting function being constant (left panel) and balance (right panel), respectively. The blue and red lines are simulations of the output pre- and post-application of machine learning based inverse design. Machine Learning Optimization of Qae To further engineer the structural parameters, a machine learning (ML-) based optimization procedure is implemented into the design of the grin lens. ML is a powerful tool for optimization and is expected to be an efficient complementary of electromagnetic wave numerical simulations [25, 26, 27]. Recently, one of the most popular ML techniques used in conjunction with EM field wave simulations is the inverse-design model. Comparing to conventional methods, inverse-design traces the geometry configurations from the output performance such as resonant frequencies or S-parameters [28, 29]. The overall procedure used in this study is shown in Fig. 4. First, the data for training the proposed neural network (NN) model is generated using the numerical simulation software (i.e., CST) based on the initial structure parameters as in Fig. 4(a). After which, the geometry configuration is randomly generated using MATLAB with the constraints on \(r_{i}\) and h, respectively. The constraints for \(r_{i}\) are \(0<r_{i}<a/2\) and \(r_{1}<r_{2}<r_{3}<r_{4}<r_{5}<r_{6}\) where \(a=70\ \mu m\) is the characteristic length of the unit cell of each hole in Fig. 4(b). Additional, the constraint of \(h\) is \(50\ \mu m<h<300\ \mu m\). To optimize the performance of the GRIN lens, we divided the holes on the surface into six groups from inner to outer ring with the radius of \(r_{1}\), \(r_{2}\), \(r_{3}\), \(r_{4}\), \(r_{5}\), and \(r_{6}\), respectively. In addition, the thickness of the grin lens h is also involved into the optimized procedure since the path of the wave's propagation can also affect the performance. Additional details are provided in Appendix B. According to the Deutsch-Jozsa algorithm, the key point of the design is in fact the performance of the balanced function case since the output of all zero or all one case will absolutely be one peak. It is important that the proposed structure has an ability to distinguish the input status (i.e., constant or balance) from the output performance. Therefore, the proposed optimization procedure mainly focuses on the output spectrum. Hence, we only need to optimize the performance of the balanced function shown in Fig. 4(c). Moreover, since the output of the half-zero half-one case is expected to have two peaks on two sides, the input for the inverse design model can be simplified as the features of the peaks. Here, we choose the amplitude \(A_{pi}\) and the full width at half maximum Figure 4: The schematic of the proposed ML-based inverse-design of the Fourier subblock. (a) shows a unit cell which we simulated to generate the whole GRIN lens shown in (b) of varying hole diameters. (c) is the result of the balanced function which is used to apply inverse design to best fit the peaks (d) comparing the initial data of amplitude at \(\pi\) and the full width at half maximum (FWHM) of \(\pi\) the two peaks as the input parameters. (FWHM) \(FWHM_{pi}\) of the two peaks as the input parameters, i.e., \[(\bar{v}_{in})=[A_{p1},FWHM_{p1},A_{p2},FWHM_{p2}] \tag{1}\] where \(p_{1}\) and \(p_{2}\) denote the left and right peak, respectively in Fig. 4(d). The data is then split into the training and validating set with the ratio of \(80\%-20\%\). The training set is used to train the proposed NN model, while the validating set is used to check the performance of the model after each training round. Since the length of the input and output vectors are four and seven only, the artificial neural network (ANN) is well enough to achieve the optimal design. There are seven hidden layers in the proposed ANN model, each hidden layer has 16, 16, 32, 32, 16, 16, and 7 channels, respectively. All the layers except for the last one come with a rectified linear activation function (ReLU). The channel in the last layer is the output of the NN model representing the geometry configuration from Fig. 3 which is represented in the following equation: \[(\bar{v}_{out})=[r_{1},r_{2},r_{3},r_{4},r_{5},r_{6},h] \tag{2}\] where \(r_{1},r_{2},r_{3},r_{4},r_{5},r_{6}\) denote the radii of the six layers and h is the thickness of the GRIN lens. After training with 100 iterations, the proposed model can precisely predict the geometry configuration from \((\bar{v}_{out})\). Hence, we can feed the model with the desired \((\bar{v}_{in})\) so that the desired optimal geometry configuration \((\bar{v}_{out})\) is achieved. However, if the values of \((\bar{v}_{in})\) exceed the performance limitation of the proposed GRIN lens, \((\bar{v}_{out})\) may not get the same performance as \((\bar{v}_{in})\). As a result, it is required to re-validate the connection between \((\bar{v}_{in})\) and \((\bar{v}_{out})\) via the numerical simulation software. After several trials, an optimal geometry configuration is achieved and validated by FDTD method. According to Fig. 4, the performance with respect to the output spectral balance is much better for the inverse-design than the initial design. The detail comparison of the initial and optimal design is represented in Table 1. Note that the proposed GRIN lens is based on non-resonating elements that exhibit a weak variation of the refractive index over a wide frequency range, which may strongly relaxes the constraints related to narrow band functioning of traditional metasurfaces. This enables a broadband focusing effect. From Table 1, the optimized design of the GRIN lens shows minimal increases of all radii except \(r_{3}\) with changes being no more than 15%. However, the biggest change occurred with a factor of two increase of the thickness. This means the radii sizes were near optimal from the initial numerical analysis and that the amount of the computational metamaterial interacting with the incident wave is crucial in achieving the best output amplitudes and full width half maximum (FWHM) for both cases. The thickness of the GRIN lens needs to be at least one-working wavelength in such a way that the wave can better interact with the medium. The thicker the \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Parameter & \(r_{1}\) & \(r_{2}\) & \(r_{3}\) & \(r_{4}\) & \(r_{5}\) & \(r_{6}\) & \(h\) \\ \hline Initial & 10 & 15 & 25 & 25 & 30 & 30 & 127 \\ Optimal & 11.97 & 16.09 & 21.24 & 27.88 & 31.59 & 34.91 & 289.86 \\ \hline \hline Case & \multicolumn{3}{c}{All zero / All one} & \multicolumn{3}{c}{Half-zero half-one} \\ \hline Parameter & Ap & FWHM & Ap\({}_{1}\) & FWHM\({}_{1}\) & Ap\({}_{2}\) & FWHM\({}_{2}\) \\ Initial & 41.872 & 449.96 & 26.795 & 264.50 & 46.813 & 208.50 \\ Optimal & 53.455 & 218.15 & 40.976 & 195.16 & 42.792 & 217.25 \\ \hline \hline \end{tabular} \end{table} Table 1: Top: Comparison of initial numerical GRIN lens parameters to the machine Learning optimized parameters. Bottom: The performance comparison of the initial and optimal design for (i) all-zero all-one case and (ii) half-zero half one case as represented by the output wave characteristics. medium the better the interaction will be. However, if the GRIN lens is too thick, the focal point may fall inside the GRIN lens. For the constant function case the amplitude has been increased by about 26 % and the FWHM has been decreased by 52%. For the balanced function, the first peak had its amplitude increased by 50% and its FWHM decreased by 26% while the second peak was already close to optimal. For the output of either balanced or constant functions, the sharper peaks (higher amplitude and smaller FWHM) mean greater distribution of the wave intensity, stronger focus of the outgoing wave, and a better probability of handling a higher number of database inputs [6, 22] as shown in bottom right of Fig. 4 and in the red plot of the right panel of Fig 3. Cheng et al. also showed an enhancement of their balance function through increasing the number of phase elements in their oracle block. [23] Our design makes the fabrication process simpler for this improvement in output in that we only have to manufacture a thicker GRIN lens. ## 4 Conclusion To summarize, we designed, with machine learning, an optimized all-dielectric metadevice as a part of a quantum algorithm emulator for simulating the DJ algorithm in the THz region. Initial structural optimization was constructed using numerical analysis based on FDTD to evaluate its initial performance. Using a ML-based optimization procedure, the original design of the structure was further engineered for an enhanced performance as shown by the two-fold increase in the thickness of the GRIN lens. The resultant optimization showed performance improvements in the amplitude and FWHM of the peaks for both the balanced and constant cases and shows promise for the emulation of quantum algorithms with current THz technologies. ## Appendix A Appendix A: Deutsch-Josza Algorithm The Deutsch-Jozsa algorithm (Fig. S1) is considered as one of the first examples of quantum algorithms that are exponentially faster than any possible deterministic classical algorithm [30]. In the DJ algorithm, the input function is defined as following: \[f(x)=\left[0,1\right]^{N} \tag{3}\] The function \(f\) takes n-bit binary values as input and produces either 0 or 1 as output. If the value of \(f\) is 0 on all outputs or 1 on all outputs, \(f\) is called constant function. However, if the value of \(f\) is 1 for half of the output domain and 0 for the other half, \(f\) is called a balance function. The oracle sub-block encodes the detecting function f(y) by assigning a phase shift on each spatial position along the axis of wave propagation while the GRIN lens sub-block acts as a Fourier transformer to evaluate the output signal. At closer inspection for classical light, the incident wave is defined as: \[|\psi_{in}\rangle=|0\rangle^{\otimes n}|1\rangle \tag{4}\] A Hadamard transform is applied to this function followed by a unitary transform on the superposition state (Fig. S1) to produce the following output function of the outgoing wave as represented in the following equation: \[|\psi_{out}\rangle=\left[((-1)^{f\;(0)}+(-1)^{f\;(1)})|0\rangle+((-1)^{f\;(0) }-(-1)^{f\;(1)})|1\rangle\right]*(1/\sqrt{2})(|0\rangle-|1\rangle) \tag{5}\] As one can see, if f(0) = f(1) there is constructive interference on \(|0\rangle\), and destructive interference on the \(|1\rangle\) component for the first register. Therefore we obtain a constant function. If the opposite is true, then f(0) not = f(1) and a balanced function is the output [30, 31, 32]. ## Appendix B Appendix B: Additional Machine Learning Optimization Details The neural network structure contains seven hidden layers with 16, 16, 32, 32, 16, 16, and 7 channels, respectively. All the layers except for the last one come with a rectified linear activation function (ReLU). A softmax function is connected after the last hidden layer to export the output vectors. Smooth L1 loss function and Adam optimizer with learning rate of 0.001 are applied to the proposed model. In addition, the batch size is set as 30 for the small batch training [33]. The amount of the entire dataset is 600, the optimizer is Adam, and the learning rate is 0.001. The model is constructed using Pytorch in Python environment and executed under Ubuntu 16.04 platform with 16 GB RAM, Intel core i7-6700HQ CPU @ 2.6 GHz, and NVIDIA GTX 960M GPU. Cost function: Smooth L1 Loss \[L=[l_{1},...,L_{N}]^{T} \tag{6}\] \[l(x,y)=mean(L) \tag{7}\] L1 loss: \[l_{n}=|x_{n}-y_{n}| \tag{8}\] Smooth L1 Loss: \[l_{n}=(0.5(x_{n}-y_{n})^{2})/\beta \tag{9}\] \[|x_{n}-y_{n}|<\beta \tag{10}\] or \[l_{n}=|x_{n}-y_{n}|-0.5\beta \tag{11}\] where \(\beta\)=0.1, \(x_{n}\) and \(y_{n}\) are the input and output elements, respectively. Based on our experiment, the smooth L1 loss can prevent an over-fitting and exploding gradient issue and hence performs better than common loss functions such as mean square error or L1 loss. The training and validation comparison is shown in Fig. S2. To validate the consistency, we executed the experiment five times and evaluated the coefficient of variation (CV) for each configuration. The result is as shown in the Table S2, which demonstrates that there should not be a one-to-many proposed in the proposed model. Figure S2: The training and validation losses of each epoch. Since the output is still required to import back to the numerical simulation to demonstrate the performance, the loss value is sufficient to examine the performance of the proposed model instead of transferring to percent error. ## Acknowledgment A. N. B. would like to thank the partial financial support of the GEM Fellowship for this work. Further, Z. A. C. would like to thank the Bridge-to-Faculty Fellowship. ## Funding This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under contract number DE-SC0012704. ## Disclosures N/A
2305.14000
Node-wise Diffusion for Scalable Graph Learning
Graph Neural Networks (GNNs) have shown superior performance for semi-supervised learning of numerous web applications, such as classification on web services and pages, analysis of online social networks, and recommendation in e-commerce. The state of the art derives representations for all nodes in graphs following the same diffusion (message passing) model without discriminating their uniqueness. However, (i) labeled nodes involved in model training usually account for a small portion of graphs in the semisupervised setting, and (ii) different nodes locate at different graph local contexts and it inevitably degrades the representation qualities if treating them undistinguishedly in diffusion. To address the above issues, we develop NDM, a universal node-wise diffusion model, to capture the unique characteristics of each node in diffusion, by which NDM is able to yield high-quality node representations. In what follows, we customize NDM for semisupervised learning and design the NIGCN model. In particular, NIGCN advances the efficiency significantly since it (i) produces representations for labeled nodes only and (ii) adopts well-designed neighbor sampling techniques tailored for node representation generation. Extensive experimental results on various types of web datasets, including citation, social and co-purchasing graphs, not only verify the state-of-the-art effectiveness of NIGCN but also strongly support the remarkable scalability of NIGCN. In particular, NIGCN completes representation generation and training within 10 seconds on the dataset with hundreds of millions of nodes and billions of edges, up to orders of magnitude speedups over the baselines, while achieving the highest F1-scores on classification.
Keke Huang, Jing Tang, Juncheng Liu, Renchi Yang, Xiaokui Xiao
2023-05-23T12:28:43Z
http://arxiv.org/abs/2305.14000v3
# Node-wise Diffusion for Scalable Graph Learning ###### Abstract. Graph Neural Networks (GNNs) have shown superior performance for semi-supervised learning of numerous web applications, such as classification on web services and pages, analysis of online social networks, and recommendation in e-commerce. The state of the art derives representations for _all_ nodes in graphs following the _same_ diffusion (message passing) model without discriminating their uniqueness. However, (i) labeled nodes involved in model training usually account for a small portion of graphs in the semi-supervised setting, and (ii) different nodes locate at different graph local contexts and it inevitably degrades the representation qualities if treating them undistinguishedly in diffusion. To address the above issues, we develop NDM, a universal node-wise diffusion model, to capture the unique characteristics of each node in diffusion, by which NDM is able to yield high-quality node representations. In what follows, we customize NDM for semi-supervised learning and design the NICCN model. In particular, NIGCN advances the efficiency significantly since it (i) produces representations for labeled nodes only and (ii) adopts well-designed neighbor sampling techniques tailored for node representation generation. Extensive experimental results on various types of web datasets, including citation, social and co-purchasing graphs, not only verify the state-of-the-art effectiveness of NIGCN but also strongly support the remarkable scalability of NIGCN. In particular, NIGCN completes representation generation and training within 10 seconds on the dataset with hundreds of millions of nodes and billions of edges, up to orders of magnitude speedups over the baselines, while achieving the highest F1-scores on classification 1. graph neural networks, scalability, semi-supervised classification + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Information Systems + Footnote †: journal: Journal: Information Systems + Footnote †: journal unique topological structure of each labeled node during representation generation. Therefore, there is still room for improvement in efficiency and effectiveness. To explain, labeled nodes involved in model training in semi-supervised learning usually take up a small portion of graphs, especially on massive graphs, and computing representations for all nodes in graphs is unnecessarily inefficient. Meanwhile, different nodes reside in different graph locations with distinctive neighborhood contexts. Generating node representations without considering their topological uniqueness inevitably degrades the representation qualities. To remedy the above deficiencies, we first develop a node-wise diffusion model NDM. Specifically, NDM calculates an individual diffusion length for each node by taking advantage of the unique topological characteristic for high-quality node representations. In the meantime, NDM employs a universal diffusion function GHD adaptive to various graphs. In particular, GHD is a _general heat diffusion_ function that is capable of capturing different diffusion patterns on graphs with various densities. By taking NDM as the diffusion model for feature propagations, we design NiGCN (NodeWise GCN), a GCN model with superb scalability. In particular, NiGCN only computes representations for the labeled nodes for model training without calculating (hidden) representations for any other nodes. In addition, NiGCN adopts customized neighbor sampling techniques during diffusion. By eliminating those unimportant neighbors with noise features, our neighbor sampling techniques not only improve the performance of NiGCN for semi-supervised classification but also boost the efficiency significantly. We evaluate NiGCN on 7 real-world datasets and compare with 13 baselines for transductive learning and 7 competitors for inductive learning. Experimental results not only verify the superior performance of NiGCN for semi-supervised classification but also prove the remarkable scalability of NiGCN. In particular, NiGCN completes feature aggregations and training within 10 seconds on the dataset with hundreds of millions of nodes and billions of edges, up to orders of magnitude speedups over the baselines, while achieving the highest F1-scores on classification. In a nutshell, our contributions are summarized as follows. * We propose a node-wise diffusion model NDM. NDM customizes each node with a unique diffusion scheme by utilizing the topological characteristics and provides a general heat diffusion function capable of capturing different diffusion patterns on graphs with various densities. * We design a scalable GNN model NiGCN upon NDM. NiGCN calculates node representation for a small portion of labeled nodes without producing intermediate (hidden) representations for any other nodes. Meanwhile, neighbor sampling techniques adopted by NiGCN further boost its scalability significantly. * We conduct comprehensive experiments to verify the state-of-the-art performance of NiGCN for semi-supervised classification and the remarkable scalability of NiGCN. ## 2. Related Work Kipf and Welling (Kipf and Welling, 2017) propose the seminal Graph Convolutional Network (GCN) for semi-supervised classification. However, GCN suffers from severe scalability issues since it executes the feature propagation and transformation recursively and is trained in a full-batch manner. To alleviate the pain, two directions, i.e., decoupled models and sampling-based models, have been explored. **Decoupled Models.** SGC proposed by Wu et al. (Wu et al., 2017) adopts the decoupling scheme by removing non-linearity in feature transformation and propagates features of neighbors within \(K\) hops directly, where \(K\) is an input parameter. Following SGC, a plethora of decoupled models have been developed. To consider node proximity, APPNP(Kipf and Welling, 2017) utilizes personalized PageRank (PPR) (Kipf and Welling, 2017; Kipf and Welling, 2017) as the diffusion model and takes PPR values of neighbors as aggregation weights. To improve the scalability, PPRG(Beng et al., 2017) reduces the number of neighbors in aggregation by selecting neighbors with top-\(K\) PPR values after sorting them. Graph diffusion convolution (GDC) (Kipf and Welling, 2017) considers various diffusion models, including both PPR and heat kernel PageRank (HKPR) to capture diverse node relationships. Later, Chen et al. (Chen et al., 2017) apply generalized PageRank model (Kipf and Welling, 2017) and propose GBP that combines reverse push and random walk techniques to approximate feature propagation. Wang et al. (Wang et al., 2017) point out that GBP consumes a large amount of memory to store intermediate random walk matrices and propose AGP that devises a unified graph propagation model and employs forward push and random sampling to select subsets of unimportant neighborhoods so as to accelerate feature propagation. Zhang et al. (Zhang et al., 2017) consider the number of neighbor hops before the aggregated feature gets smoothing. To this end, they design NDLS and calculate an individual local-smoothing iteration for each node on feature aggregation. Recently, Feng et al. (Feng et al., 2017) investigate the graph random neural network (GRAND) model. To improve the scalability, they devise GRAND+ by leveraging a generalized forward push to compute the propagation matrix for feature aggregation. In addition, GRAND+ only incorporates neighbors with top-K values for further scalability improvement. **Sampling-based Models.** To avoid the recursive neighborhood over expansion, GraphSAGE (Chen et al., 2017) simply samples a fixed number of neighbors uniformly for each layer. Instead of uniform sampling, FastGCN (Chen et al., 2017) proposes importance sampling on neighbor selections to reduce sampling variance. Subsequently, AS-GCN (Huang et al., 2017) considers the correlations of sampled neighbors from upper layers and develops an adaptive layer-wise sampling method for explicit variance reduction. To guarantee the algorithm convergence, VR-GCN proposed by Chen et al. (Chen et al., 2017) exploits historical hidden representations as control variates and then reduces sampling variance via the control variate technique. Similar to AS-GCN, LADIFS (Zhang et al., 2017) also takes into account the layer constraint and devises a layer-wise, neighbor-dependent, and importance sampling manner, where two graph sampling methods are proposed as a consequence. Cluster-GCN (Chen et al., 2017) first applies graph cluster algorithms to partition graphs into multiple clusters, and then randomly takes several clusters as training graphs. Similarly, GraphSAINT (Zhang et al., 2017) samples subgraphs as new training graphs, aiming to improve the training efficiency. Huang et al. (Huang et al., 2017) adopt the graph coarsening method developed by Loukas (Loukas, 2017) to reshape the original graph into a smaller graph, aiming to boost the scalability of graph machine learning. Lately, Zeng et al. (Zeng et al., 2017) propose to extract localized subgraphs with bounded scopes and then run a GNN of arbitrary depth on it. This principle of decoupling GNN scope and depth, named as ShaDow, can be applied to existing GNN models. However, all the aforementioned methods either (i) generate node representations for _all_ nodes in the graphs even though labeled nodes in training are scarce or (ii) overlook the topological uniqueness of each node during feature propagation. Ergo, there is still room for improvement in both efficiency and efficacy. ## 3. Node-wise diffusion model In this section, we reveal the weakness in existing diffusion models and then design NDM, consisting of two core components, i.e., (i) the diffusion matrix and the diffusion length for each node, and (ii) the universal diffusion function generalized to various graphs. ### Notations For the convenience of expression, we first define the frequently used notations. We use calligraphic fonts, bold uppercase letters, and bold lowercase letters to represent sets (e.g., \(\mathcal{N}\)), matrices (e.g., \(\mathbf{A}\)), and vectors (e.g., \(\mathbf{x}\)), respectively. The \(i\)-th row (resp. column) of matrix \(\mathbf{A}\) is represented by \(\mathbf{A}[i,\cdot]\) (resp. \(\mathbf{A}[\cdot,i]\)). Let \(\mathbf{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\) be an undirected graph where \(\mathcal{V}\) is the node set with \(|\mathcal{V}|=n\), \(\mathcal{E}\) is the edge set with \(|\mathcal{E}|=m\), and \(\mathbf{X}\in\mathbb{R}^{n\times f}\) is the feature matrix. Each node \(o\in\mathcal{V}\) is associated with a \(f\)-dimensional feature vector \(\mathbf{x}_{o}\in\mathbf{X}\). For ease of exposition, node \(u\in\mathcal{V}\) also indicates its index. Let \(\mathcal{N}_{u}\) be the _direct_ neighbor set and \(d_{u}=|\mathcal{N}_{u}|\) be the degree of node \(u\). Let \(\mathbf{A}\in\mathbb{R}^{n\times n}\) be the adjacency matrix of \(\mathbf{G}\), i.e., \(\mathbf{A}[u,v]=1\) if \(\langle u,v\rangle\in\mathcal{E}\); otherwise \(\mathbf{A}[u,v]=0\), and \(\mathbf{D}\in\mathbb{R}^{n\times n}\) be the diagonal degree matrix of \(\mathbf{G}\), i.e., \(\mathbf{D}[u,u]=d_{u}\). Following the convention (Golovolovolov et al., 2016; Le et al., 2017), we assume that \(\mathbf{G}\) is a _self-looped_ and connected graph. ### Diffusion Matrix and Length **Diffusion Matrix.** Numerous variants of Laplacian matrix are widely adopted as diffusion matrix in existing GNN models (Golovolov et al., 2016; Le et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). Among them, the transition matrix \(\mathbf{P}=\mathbf{D}^{-1}\mathbf{A}\) is intuitive and easy-explained. Let \(1=\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{n}>-1\) be the eigenvalues of \(\mathbf{P}\). During an infinite diffusion, any initial state \(\pi_{0}\in\mathbb{R}^{n}\) of node set \(\mathcal{V}\) converges to the stable state \(\pi\), i.e., \(\pi=\lim_{t\rightarrow\infty}\pi_{0}\mathbf{P}^{t}\) where \(\pi(v)=\frac{d_{u}}{2m}\). **Diffusion Length.** As stated, different nodes reside at different local contexts in the graphs, and the corresponding receptive fields for information aggregation differ. Therefore, it is rational that each node \(u\) owns a unique length \(\ell_{u}\) of diffusion steps. As desired, node \(u\) aggregates informative signals from neighbors within the range of \(\ell_{u}\) hops while obtaining limited marginal information out of the range due to over-smoothing issues. To better quantify the effective vicinity, we first define \(\tau\)-distance as follows. **Definition 3.1** (\(\tau\)-Distance).: Given a positive constant \(\tau\) and a graph \(\mathbf{G}=(\mathcal{V},\mathcal{E})\) with diffusion matrix \(\mathbf{P}\), a length \(\ell\) is called \(\tau\)-distance of node \(u\in\mathcal{V}\) if it satisfies that for every \(v\in\mathcal{V}\), \(\frac{|\mathbf{P}^{t}[u,v]-\pi(v)|}{\pi(v)}\leq\tau\). According to Definition 3.1, \(\ell_{u}\) being \(\tau\)-distance of \(u\) ensures that informative signals from neighbors are aggregated. On the other hand, to avoid over-smoothing, \(\ell_{u}\) should not be too large. In the following, we provide an appropriate setting of \(\ell_{u}\) fitting both criteria. **Theorem 3.2**.: _Given a positive constant \(\tau\) and a graph \(\mathbf{G}=(\mathcal{V},\mathcal{E})\) with diffusion matrix \(\mathbf{P}\), \(\ell_{u}:=\left[\log_{\frac{\tau}{2m}d_{u}}\right]\) is \(\tau\)-distance of node \(u\), where \(\lambda=\max\{\lambda_{2},-\lambda_{n}\}\) and \(d_{\min}=\min\{d_{o}\colon v\in\mathcal{V}\}\)._ Proof of Theorem 3.2.: Let \(\mathbf{e}_{u}\in\mathbb{R}^{1\times n}\) be a one-hot vector having \(1\) in coordinate \(u\in\mathcal{V}\) and \(\mathbf{1}_{n}\in\mathbb{R}^{1\times n}\) be the \(1\)-vector of size \(n\). Then, \(\mathbf{P}^{t}[u,v]=\mathbf{e}_{u}\mathbf{P}^{t}\mathbf{e}_{v}^{\top}\). Let \(\tilde{\mathbf{P}}=\mathbf{D}^{1/2}\mathbf{P}\mathbf{D}^{-1/2}=\mathbf{D}^{-1/ 2}\mathbf{A}\mathbf{D}^{-1/2}\) and \(\mathbf{u}_{i}^{\top}\) be the corresponding eigenvector of its \(i\)-th eigenvalue (sorted in descending order) of \(\tilde{\mathbf{P}}\). For \(\mathbf{e}_{u}\) and \(\mathbf{e}_{v}\), we decompose \[\mathbf{e}_{u}\mathbf{D}^{-1/2}=\sum_{i=1}^{n}\alpha_{i}\mathbf{u}_{i},\text{ and }\mathbf{e}_{\mathbf{D}}\mathbf{D}^{1/2}=\sum_{i=1}^{n}\beta_{i}\mathbf{u}_{i}.\] Note that \(\{\mathbf{u}_{1}^{\top},\ldots,\mathbf{u}_{n}^{\top}\}\) form the orthonormal basis and \(\mathbf{u}_{1}=\frac{\mathbf{1}_{n}\mathbf{D}^{1/2}}{\sqrt{2m}}\). Thus, we have \(\alpha_{1}=\mathbf{e}_{u}\mathbf{D}^{-1/2}\mathbf{u}_{1}^{\top}=\frac{1}{ \sqrt{2m}}\) and \(\beta_{1}=\mathbf{e}_{\mathbf{D}}\mathbf{D}^{1/2}\mathbf{u}_{1}^{\top}=\frac{d_{ o}}{\sqrt{2m}}\). Since \(\tilde{\mathbf{P}}\) is the similar matrix of \(\mathbf{P}\), they share the same eigenvalues. Therefore, we have \[\frac{\left|\mathbf{P}^{t}[u,v]-\pi(v)\right|}{\pi(v)}=\frac{ \left|\mathbf{e}_{u}\mathbf{P}^{t}\mathbf{e}_{v}^{\top}-\pi(v)\right|}{\pi(v)}= \frac{\left|\mathbf{e}_{u}\mathbf{D}^{-1/2}\tilde{\mathbf{P}}^{t}\mathbf{D}^{1/2 }\mathbf{e}_{v}^{\top}-\pi(v)\right|}{\pi(v)}\] \[=\frac{\left|\sum_{i=1}^{n}\alpha_{i}\beta_{i}\lambda_{i}^{t}-\pi(v )\right|}{\pi(v)}=\frac{\left|\sum_{i=2}^{n}\alpha_{i}\beta_{i}\lambda_{i}^{t} \right|}{\pi(v)}\leq\lambda^{\ell}\cdot\frac{\sum_{i=2}^{n}|\alpha_{i}\beta_{i}| }{\pi(v)}\] \[\leq\lambda^{\ell}\cdot\frac{\left\|\mathbf{e}_{u}\mathbf{D}^{-1/ 2}\right\|\left\|\mathbf{e}_{\mathbf{D}}\mathbf{D}^{1/2}\right\|}{d_{o}/2m}= \frac{2m\lambda^{\ell}}{\sqrt{d_{o}d_{u}}},\] where the second inequality is by Cauchy-Schwarz inequality. Finally, setting \(\ell=\left[\log_{\lambda}\frac{\tau\sqrt{d_{o}d_{u}}}{2m}\right|\) completes the proof. For the \(\ell_{u}\) defined in Theorem 3.2, it is \(\tau\)-distance of node \(u\) and in the meantime involves the topological uniqueness of node \(u\). Moreover, the performance can be further improved by tuning the hyperparameter \(\tau\). ### Universal Diffusion Function As we know, the diffusion model defined by the symmetrically normalized Laplacian matrix \(\mathbf{L}=\mathbf{I}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\) is derived from _Graph Heat Equation_(Golov et al., 2016; Le et al., 2017), i.e., \[\frac{\mathrm{d}\mathbf{H}_{t}}{\mathrm{d}t}=-\mathbf{L}\mathbf{H}_{t},\text{ and }\mathbf{H}_{0}=\mathbf{X}, \tag{1}\] where \(\mathbf{H}_{t}\) is the node status of graph \(\mathbf{G}\) at time \(t\). By solving the above differential function, we have \[\mathbf{H}_{t}=\mathbf{e}^{-t\mathbf{L}}\mathbf{X}=\mathbf{e}^{-t(\mathbf{L}- \tilde{\mathbf{A}})}\mathbf{X}=\mathbf{e}^{-t}\sum_{\ell=0}^{\infty}\frac{t^{ \ell}\tilde{\mathbf{A}}^{\ell}}{\ell!}\mathbf{X}, \tag{2}\] where \(\tilde{\mathbf{A}}=\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\). In this regard, the underlying diffusion follows the _Heat Kernel PageRank_ (HKPR) function as \[f(\omega,\ell)=\mathbf{e}^{-\omega}\frac{\omega^{\ell}}{\ell!}, \tag{3}\] where \(\omega\in\mathbb{Z}^{+}\) is the parameter. However, \(f(\omega,\ell)\) is neither expressive nor general enough to act as the universal diffusion function For the diffusion matrix \(\mathbf{P}\) defined on \(\mathbf{G}\), we have \(\lambda=1-\Delta_{\lambda}\). Meanwhile, according to the analysis of Theorem 3.2, we know that \(\mathbf{P}^{\ell}[u,\cdot]-\pi(v)=\sum_{i=2}^{m}\alpha_{i}\beta_{i}\lambda_{i}^{\ell}\), representing the convergence, is (usually) dominated by \(\lambda^{\ell}\). As a result, diffusion on graphs with different densities, i.e., \(d_{\mathbf{G}}\), converges at different paces. In particular, sparse graphs with small \(d_{\mathbf{G}}\) incurring large \(\lambda\) tend to incorporate neighbors in a long range while dense graphs with large \(d_{\mathbf{G}}\) incurring small \(\lambda\) are prone to aggregate neighbors not far away. In addition, it has been widely reported in the literature (Han et al., 2017; Wang et al., 2018) that different graphs ask for different diffusion functions, which is also verified by our experiments in Section 5.2. To serve the universal purpose, a qualified diffusion function should be able to (i) expand smoothly in long ranges, (ii) decrease sharply in short intervals, and (iii) peak at specified hops, as required by various graphs accordingly. Clearly, the HKPR function in (3) fulfills the latter two requirements but fails the first one since it decreases exponentially when \(\ell\geq\omega\). One may propose to consider _Personalized PageRank_ (PPR). However, the PPR function is monotonically decreasing and thus cannot reach condition (iii). Inspired by the above analysis, we try to ameliorate \(f(\omega,\ell)\) to a universal diffusion function with a controllable change tendency for general purposes. To this end, we extend the graph heat diffusion Equation (3) by introducing an extra _power parameter_\(\rho\in\mathbb{R}^{+}\) and devise our _General Heat Diffusion_ (GHD) function as \[U(\omega,\rho,\ell)=\frac{\omega^{\ell}}{(\ell\cdot 1)^{\rho}\cdot C} \tag{4}\] for the diffusion weight at the \(\ell\)-th hop, where \(\omega\in\mathbb{R}^{+}\) is the new heat parameter and \(C=\sum_{\ell=0}^{\infty}\frac{\omega^{\ell}}{(\ell\cdot 1)^{\rho}}\) is the normalization factor. As desired, GHD can be regarded as a general extension of the graph heat diffusion model, and parameters \(\omega\) and \(\rho\) together determine the expansion tendency. In particular, it is trivial to verify that GHD is degraded into HKPR when \(\rho=1\), and GHD becomes PPR when \(\rho=0\). As illustrated in Figure 1, by setting different \(\omega\) and \(\rho\) combinations, GHD is able to exhibit smooth, exponential (i.e., PPR), or peak expansion (i.e., HKPR) tendency. ### Diffusion Model Design Upon \(\tau\)-distance and diffusion function UDF, our node-wise diffusion model (NDM) can be concreted. Specifically, given a target set \(\mathcal{T}\subseteq\mathcal{V}\), the representation \(\mathbf{Z}_{\mathcal{T}}\) under NDM is calculated as \[\mathbf{Z}_{\mathcal{T}}=\sum_{\ell=0}^{L}\text{UTP}^{\ell}\mathbf{X}, \tag{5}\] where \(L=\max\{\ell_{u}\colon\forall u\in\mathcal{T}\}\), \(\mathbf{U}=\text{Diag}\{U(\omega,\rho,\ell)\colon\forall u\in\mathcal{T}\}\in \mathbb{R}^{|\mathcal{T}|\times|\mathcal{T}|}\) is a diagonal matrix, and \(\Gamma=\mathbf{I}[\mathcal{T},\cdot]\in\mathbb{R}^{|\mathcal{T}|\times n}\) is the indicator matrix, i.e., \(\Gamma[i,u]=1\) if \(\mathcal{T}[i]=u\) and \(\Gamma[i,u]=0\) otherwise. The pseudo-code of NDM is presented in Algorithm 1. NDM first finds the largest degree \(d_{\max}\) for nodes \(\mathcal{T}\), and computes the corresponding \(\tau\)-distance as \(L\). Then, NDM accumulates the weights of neighbors within \(L\) ranges for each node \(u\in\mathcal{T}\), recorded as \(\mathbf{Z}_{\mathcal{T}}\). Note that \(\mathbf{U}[u,u]=0\) if \(\ell>L\). Finally, representation \(\mathbf{Z}_{\mathcal{T}}\) is calculated by multiplying the feature matrix \(\mathbf{X}\). **Time Complexity.** It takes \(O(m)\) time to calculate \(\lambda\) using the iterative methods (Han et al., 2017), and hence computing \(L\) take \(O(m+|\mathcal{T}|)\) time. Matrix multiplications UI and IP dominate the running time, which takes time complexities of \(O(n|\mathcal{T}|)\) and \(O(m|\mathcal{T}|)\), respectively. Therefore, as it takes \(O(f|\mathcal{T}|)\) time to compute \(\mathbf{Z}_{\mathcal{T}}\mathbf{X}\), the total time complexity of NDM is \(O((m+n)L|\mathcal{T}|+f|\mathcal{T}|)\). ## 4. Optimization in Node Representation Learning Algorithm 1 in Section 3 presents a general node-wise diffusion model. However, it is yet optimal to be applied to reality. In this section, we aim to instantiate NDM in a practical manner and optimize the procedure of feature propagations. ### Instantiation of NDM **Practical Implementation of \(\tau\)-Distance.** Calculating the \(\tau\)-distance of each node is one of the critical steps in NDM, which requires the second largest eigenvalue \(\lambda\) of the diffusion matrix. However, it is computationally expensive to compute \(\lambda\) for large graphs. To circumvent the scenario, we employ property 3.1 to substitute \(\lambda\) without damaging the efficacy of NDM. As we analyze in Section 3.3, according to Property 3.1, we borrow a correction factor \(C_{\mathbf{G}}\) specific for graph \(\mathbf{G}\) to ensure \(\lambda=1-\Delta_{\lambda}=\frac{C_{\mathbf{G}}}{\sqrt{d_{\mathbf{G}}}}\). Meanwhile, for the sake of practicality, we could merge hyperparameter \(\tau\) and \(C_{\mathbf{G}}\) into one tunable parameter \(\tau^{\prime}\) to control the bound of \(\tau\)-distance \(\ell_{u}\) such that \[\ell_{u}=\log_{\lambda}\frac{\tau\sqrt{d_{\min}d_{u}}}{2m}=\frac{\ln\frac{2m}{ \sqrt{d_{\min}d_{u}}}-\ln\tau}{\ln\sqrt{d_{\mathbf{G}}}-\ln C_{\mathbf{G}}} \coloneqq\tau^{\prime}\frac{\ln\frac{2m}{\sqrt{d_{\min}d_{u}}}}{\ln\sqrt{d_{ \mathbf{G}}}}. \tag{6}\] Figure 1. Three exemplary expansion tendency of GHD. **Important Neighbor Identification and Selection.** NDM in Algorithm 1 aggregates all neighbors during diffusion for each node, which, however, is neither effective nor efficient. The rationale is twofold. First, it is trivial to see that the sum of weights in the \(\ell\)-th hop is \(\sum_{v\in\mathcal{V}}U(\omega,\rho,\ell)(\mathbf{D}^{-1}\mathbf{A})^{\ell}[u, ]=U(\omega,\rho,\ell)\). If \(n\) nodes are visited, the average weight is \(\Theta\big{(}\frac{U(\omega,\rho,\ell)}{n}\big{)}\), i.e., the majority of nodes contribute negligibly to feature aggregations and only a small portion of neighbors with large weights matters. Second, as found in (Zhu et al., 2017; Wang et al., 2018), input data contain not only the low-frequency ground truth but also noises that can originate from falsely labeled data or features. Consequently, incorporating features of those neighbors could potentially incur harmful noises. Therefore, it is a necessity to select important neighbors and filter out insignificant neighbors. Based on the above analysis, we aim to identify important neighbors for target node \(u\). For ease of exposition, we first define the weight function \(\phi(t,u,v)=U(\omega,\rho,\ell)\mathbf{P}^{\ell}[u,v]\) to quantify the importance of neighbor node \(v\) to target node \(u\), and then formalize the concept of _\(\varepsilon\)-importance neighbor_ as follows. Definition 4.1 (\(\varepsilon\)-Importance Neighbor).: Given a target node \(u\) and threshold \(\varepsilon\in(0,1)\), node \(v\) is called \(\varepsilon\)-importance neighbor of \(u\) if \(\exists t\in\{0,1,\ldots,\ell_{u}\}\), we have \(\phi(t,u,v)\geq\varepsilon\). Thanks to the good characteristic of NDM, a sufficient number of random walks (RWs) are able to identify _all_ such \(\varepsilon\)-importance neighbors with high probability, as proved in the following lemma. Lemma 4.2 ().: _Given a target node \(u\), threshold \(\varepsilon\in(0,1)\), and failure probability \(\delta\in(0,1)\), assume \(\phi(\ell,u,v)\geq\varepsilon\). Suppose \(\theta=\lceil\frac{2\eta^{2}}{\varepsilon}\log(\frac{1}{\delta\varepsilon})\rceil\) RWs are generated from \(u\) and visit \(v\) for \(\theta_{v}^{(\ell)}\) times at the \(\ell\)-th step. For the weight estimation \(\hat{\phi}(\ell,u,v)=\frac{U(\omega,\rho,\ell)\theta_{v}^{(\ell)}}{\theta}\), we have_ \[\Pr\left[\bigvee_{0\leq\ell\leq\ell_{u}}\bigg{\{}\bigvee_{\{\sigma:\;\phi( \ell,u,v)\geq\varepsilon\}}\Big{(}\hat{\phi}(\ell,u,v)\leq\frac{\phi(\ell,u,v) }{\eta}\Big{)}\bigg{\}}\right]\leq\delta,\] _where \(\eta>1\) controls the approximation._ Lemma 4.2 affirms that sufficient RWs could capture \(\varepsilon\)-importance neighbors with high probability. However, there still contains deficiencies. In particular, along with those \(\varepsilon\)-important neighbors, many insignificant neighbors will be inevitably selected. For illustration, we randomly choose one target node on dataset Amazon and select its neighbors using RWs. We observe that \(10.6\%\) neighbors contribute to \(99\%\) weights, and the rest \(89.4\%\) neighbors share the left \(1\%\) weights, as shown in Figure 2. The amount of those insignificant neighbors could unavoidably impair the representation quality. To alleviate the deficiency, we propose to preserve the first-\(K\) neighbors with \(K=\frac{1}{\varepsilon^{2}}\).2 Footnote 2: One may propose to adopt top-\(K\) neighbors. However, top-\(K\) selection would incur enormous computation overheads since it requires sorting all neighbors by weights. To explain, in the \(\ell\)-th hop, each \(\varepsilon\)-important neighbor will be selected with probability at least \(\frac{\varepsilon}{U(\omega,\rho,\ell)}\), and there are at most \(\frac{U(\omega,\rho,\ell)}{\varepsilon^{2}}\) important neighbors. Thus \(\varepsilon\)-important neighbors from the \(\ell\)-th hop will be picked after at most \(\frac{U^{2}(\omega,\rho,\ell)}{\varepsilon^{2}}\) random selections in expectation. By summing up all \(\ell_{u}\) hops, we have \[\sum\nolimits_{\ell=0}^{\ell_{u}}\frac{U^{2}(\omega,\rho,\ell)}{\varepsilon^{2} }\leq\sum\nolimits_{\ell=0}^{\ell_{u}}\frac{U(\omega,\rho,\ell)}{\varepsilon^{ 2}}=\frac{1}{\varepsilon^{2}}.\] Notice that neither RWs selection nor first-\(K\) selection is suitable to solely function as stop conditions. As stated, RW inevitably incurs substantial unimportant neighbors, while first-\(K\) selection alone is not bound to terminate when no sufficient neighbors exist. Hence, they compensate each other for better performance. As evaluated in Section 5.4, first-\(K\) selection further boosts the effectiveness notably. ### Optimized Algorithm NIGCN We propose NIGCN in Algorithm 2, the GCN model by instantiating NDM. We first initialize the number \(\theta\) of RWs according to Lemma 4.2. Next, we generate length-\(t_{u}\) RWs for each \(u\in\mathcal{T}\). If neighbor \(v\) is visited at the \(\ell\)-th step, we increase its weight \(t_{v}\) by \(\frac{U(\omega,\rho,\ell)}{\theta}\) and store it into set \(\mathcal{S}\). This procedure terminates if either the number of RWs reaches \(\theta\) or the condition \(|\mathcal{S}|\geq\frac{1}{\varepsilon^{2}}\) is met. Afterward, we update on \(\mathbf{z}_{u}\) (Line 10). Eventually, the final representation \(\mathbf{Z}_{\mathcal{T}}\) is returned once all \(|\mathcal{T}|\) target nodes have been processed. ``` Input: Graph G, feature matrix \(\mathbf{X}\), target set \(\mathcal{T}\), parameters \(\eta\) and \(\delta\), hyperparameters \(\tau^{\prime}\), \(\omega\), \(\rho\), \(\varepsilon\) Output: Representation \(\mathbf{Z}_{\mathcal{T}}\) 1\(d_{\mathbf{G}}\leftarrow\frac{2m}{n}\), \(\theta\leftarrow\lceil\frac{2\eta^{2}}{\varepsilon}\log(\frac{1}{\delta \varepsilon})\rceil,\mathbf{Z}_{\mathcal{T}}\leftarrow\mathbf{0}^{|\mathcal{T}| \times f}\); 2for\(u\in\mathcal{T}\)do 3\(\ell_{u}\leftarrow\left\lceil\epsilon^{\prime}\frac{\ln\frac{2m}{\sqrt{\pi}\log \ell_{u}}}{\ln\sqrt{\Delta\log\ell_{u}}}\right\rceil\); 4for\(i\gets 1\)to\(\theta\)do 5 Generate a random walk from node \(u\) with length \(t_{u}\); 6if\(\sigma\) is visited at the \(\ell\)-th stepthen 7\(t_{v}\gets t_{v}+\frac{U(\omega,\rho,\ell)}{\theta}\); 8\(\mathcal{S}\leftarrow\mathcal{S}\cup\{v\}\); 9if\(|\mathcal{S}|\geq\frac{1}{\varepsilon^{2}}\)thenbreak 10for\(v\in\mathcal{S}\)do\(\mathbf{z}_{u}\leftarrow\mathbf{z}_{u}+t_{v}\cdot\mathbf{x}_{v}\); 11 12 13return\(\mathbf{Z}_{\mathcal{T}}\); ``` **Algorithm 2**NIGCN Accordingly, for a target node \(u\) in \(\mathcal{T}\), NIGCN is formulated as Figure 2. Weight Distribution of Neighbors. ### Time Complexity. For each node \(u\), at most \(\theta\) RWs of length \(t_{u}\) are generated at the cost of \(O(\theta t_{u})\), and the total number of neighbors is bounded by \(O(\theta t_{u})\), which limits the cost on the feature update to \(O(\theta t_{u}f)\). The total cost is \(O((f+1)\theta t_{u})\) for each target node. Let \(L=\max\{t_{u}\colon\forall u\in\mathcal{T}\}\). By replacing \(\theta=O(\frac{1}{\varepsilon}\log(\frac{1}{\delta\varepsilon}))\), the resulting time complexity of NIGCN is \(O(\frac{1|\mathcal{T}|f}{\varepsilon}\log\frac{1}{\delta\varepsilon})\). ### Time Complexity Comparison \(\varepsilon\)-importance neighbors play a crucial role in feature aggregations. We assume that qualified representations incorporate all \(\varepsilon\)-importance neighbors with high probability. When capturing such \(\varepsilon\)-importance neighbors, we analyze and compare the sampling complexities of 9 representative models3 with NIGCN, as summarized in Table 1. Footnote 3: We assume the models without explicit sampling process with sampling rate 100%. GRAND+ (Kumar et al., 2017) estimates the propagation matrix \(\Pi\) during preprocessing with error bounded by \(r_{\max}\) at the cost of \(O(\frac{(|\mathcal{T}|^{p}|+|\mathcal{T}|)L}{r_{\max}})\) where \(\mathbf{U}^{\prime}\) is a sample set of unlabeled node set \(\mathbf{U}=\mathcal{V}\setminus\mathcal{T}\). To yield accurate estimations for \(\varepsilon\)-importance neighbors, \(r_{\max}\) is set \(r_{\max}=\Theta(\varepsilon)\) and the resulting time complexity of prepossessing is \(O(\frac{Ln}{\varepsilon})\). According to (Kumar et al., 2017), its training complexity is \(O(knf+L^{\prime}knf^{2})\). AGP (Kumar et al., 2017) provides an accurate estimation for node \(u\) if \(\pi(u)>\epsilon^{\prime}\) for each dimension of feature matrix \(\mathbf{X}\), denoted as \(\mathbf{x}=\mathbf{X}[\cdot,i]\) for \(i\in\{0,\ldots,f-1\}\) where \(\|\mathbf{x}\|_{1}\leq 1,\pi=\sum_{t=0}^{L}w^{(t)}(\mathbf{D})^{-a} \mathbf{A}\mathbf{D}^{-b})^{t}\mathbf{x}\), and \(\epsilon^{\prime}\) is an input threshold. As proved in (Kumar et al., 2017), the time cost of AGP on one feature dimension is \(O(\frac{1^{2}}{\epsilon^{\prime}}\sum_{t=0}^{L}\|(\sum_{t=t}^{\infty}w^{(t)} )(\mathbf{D}^{-a}\mathbf{A}\mathbf{D}^{-b})^{t}\mathbf{x}\|_{1})\). We consider a simple case that \(a=0\) and \(b=1\). Suppose that \(\|\mathbf{x}\|_{1}=\frac{1}{C}\) for some constant \(C\geq 1\), and thus we have \[\bigg{\|}\big{(}\sum_{i=f}^{\infty}w^{(i)}\big{)}(\mathbf{A}\mathbf{D}^{-1})^ {t}\mathbf{x}\bigg{\|}_{1}=\sum_{i=f}^{\infty}w^{(i)}\big{\|}(\mathbf{A} \mathbf{D}^{-1})^{t}\mathbf{x}\big{\|}_{1}=\frac{1}{C}\sum_{i=f}^{\infty}w^{( i)}.\] Since \(\sum_{t=0}^{L}\sum_{i=f}^{\infty}w^{(i)}\geq 1\), the time complexity of AGP is at least \(\Theta(\frac{L^{2}}{\epsilon^{\prime}})\). To ensure nodes aggregating at least one \(\varepsilon\)-importance neighbor \(v\) are estimated accurately, \(\varepsilon\mathbf{x}(v)=\Omega(\epsilon^{\prime})\) is required. Since \(\|\mathbf{x}\|_{1}=\frac{1}{C}\) for some constant \(C\) and there are \(n\) nodes, it is reasonably to assume that \(\mathbf{x}(v)=O(\frac{1}{n})\). Therefore, \(\epsilon^{\prime}=O(\frac{\varepsilon}{n})\). In this regard, the time cost of AGP to capture \(\varepsilon\)-importance neighbors for all \(f\) dimensions is \(O(\frac{L^{2}nf}{\epsilon})\). GBP (Kumar et al., 2017) derives representations for the \(i\)-th dimension as \(\pi=\sum_{t=0}^{L}w^{(t)}\mathbf{D}^{t}(\mathbf{D}^{-1}\mathbf{A})^{t} \mathbf{D}^{-t}\mathbf{x}\), where \(\mathbf{x}=\mathbf{X}[\cdot,i]\) for \(i\in\{0,\ldots,f-1\}\) and \(\mathbf{X}\) is the feature matrix with \(f\) dimensions. GBP ensures that the estimation \(\hat{\pi}(u)\) of \(\pi(u)\) for any \(u\in\mathcal{T}\) is within an error of \(d_{u}^{\mathcal{V}}\), i.e., \(|\pi(u)-\hat{\pi}(u)|\leq d_{u}^{\mathcal{V}}\epsilon^{\prime}\), where the factor \(d_{u}^{\mathcal{V}}\) is due to the term \(\mathbf{D}^{t}\) and does not impact sampling errors. The final time complexity of GBP is \(\frac{Lf\sqrt{\|\mathcal{T}\|\gamma\log(nL)m/n}}{\epsilon^{\prime}}\). As discussed above, we have \(\varepsilon\mathbf{x}(u)=\Omega(\epsilon^{\prime})\) and \(\mathbf{x}(u)=O(\frac{1}{n})\), which indicates that \(\epsilon^{\prime}=O(\frac{\varepsilon}{n})\). Consequently, the time cost of GBP to capture \(\varepsilon\)-importance neighbors is \(O(\frac{Lnf\sqrt{\|\mathcal{T}\|\gamma\log(nL)m/n}}{\epsilon})\). For the rest models in Table 1, we borrow the time complexity from their official analyses since they either provide no sampling approximation guarantee or consider all neighbors without explicit sampling. As analyzed, time complexities of state of the art are linear in the size of the graph, while that of NIGCN is linear in the size of the target set \(\mathcal{T}\). In semi-supervised classification with limited labels, we have \(|\mathcal{T}|\ll n\), which confirms the theoretical efficiency superiority of NIGCN. **Parallelism.** NIGCN derives the representation of every target node _independently_ and does not rely on any intermediate representations of other nodes. This design makes NIGCN inherently parallelizable so as to be a promising solution to derive node representations for massive graphs since they can process all nodes simultaneously. Further, this enables NIGCN scalable for supervised learning as well. ## 5. Experiment In this section, we evaluate the performance of NIGCN for semi-supervised classification in terms of effectiveness (micro F1-scores) and efficiency (running times). ### Experimental Setting **Datasets.** We use seven publicly available datasets across various sizes in our experiments. Specifically, we conduct _transductive learning_ on the four citation networks, including three small citation networks (Zhu et al., 2017) Cora, Citeseer, and Pubmed, and a web-scale citation network Papers100M (Kumar et al., 2017). We run _inductive learning_ on three large datasets, i.e., citation network Ogbn-arxiv (Kumar et al., 2017), social network Reddit (Zhu et al., 2017), and co-purchasing network Amazon (Zhu et al., 2017). Table 6 in Appendix A.2 summarizes the statistics of those datasets. Among them, Papers100M is the largest dataset ever tested in the literature. For semi-supervised classification with limited labels, we randomly sample 20 nodes per class for training, 500 nodes for validation, and 1000 nodes for testing. For each dataset, we randomly generate 10 instances and report the average performance of each tested method. \begin{table} \begin{tabular}{l l l} \hline \hline GNN Method & Preprocessing & Training \\ \hline GraphSAGE & - & \(O(|\mathcal{T}|^{k}L^{f}2)\) \\ GraphSAINT & - & \(O(\frac{Lbmf}{n}+Lnf^{2})\) \\ FastGCN & - & \(O(L|\mathcal{T}|kf+L|\mathcal{T}|f^{2})\) \\ ShaDow-GCN & - & \(O(|\mathcal{T}|k^{k}+L^{\prime}nf^{2})\) \\ APPNP & - & \(O(Lmf\times L^{\prime}nf^{2})\) \\ GBP & \(O(\frac{Lnf\sqrt{\|\mathcal{T}\|\gamma\log(nL)m/n}}{\varepsilon})\) & \(O(L^{\prime}|\mathcal{T}|f^{2})\) \\ AGP & \(O(\frac{L^{2}nf}{\varepsilon})\) & \(O(L^{\prime}|\mathcal{T}|f^{2})\) \\ NDLS & \(O(Lmf)\) & \(O(L^{\prime}|\mathcal{T}|f^{2})\) \\ GRAND+ & \(O(\frac{Ln}{\varepsilon})\) & \(O(knf+L^{\prime}knf^{2})\) \\ NIGCN & \(O(\frac{Lnf}{\varepsilon})f\log\frac{1}{\delta\varepsilon})\) & \(O(L^{\prime}|\mathcal{T}|f^{2})\) \\ \hline \hline \end{tabular} \end{table} Table 1. Time complexity (\(b\) is the batch size, \(k\) is the sample size in one hop, \(L\) is the propagation length, and \(L^{\prime}\) is the number of model layers). **Baselines.** For transductive learning, we evaluate NIGCN against 13 baselines. We categorize them into three types, i.e., (i) 2 _coupled_ GNN methods GCN (Krizhevsky et al., 2014) and GAT (Yang et al., 2015), (ii) 3 _sampling-based_ methods GraphSAGE (Krizhevsky et al., 2014), GraphSAINT (Wang et al., 2015), and ShaDow-GCN (Wang et al., 2016), and (iii) 8 _decoupled_ GNN methods SGC (Wang et al., 2016), APPNP (Krizhevsky et al., 2014), and its improvement PPRGo (Wang et al., 2016), GDC (Wang et al., 2016), GBP (Krizhevsky et al., 2014), AGP (Yang et al., 2015), and two recently proposed NDLS (Yang et al., 2015), and GRAND+ (Wang et al., 2015). For inductive learning, we compare NIGCN with 7 baselines. Among the 13 methods tested in transductive learning, 7 of them are not suitable for semi-supervised inductive learning and thus are omitted, as explained in Section A.2. In addition, we include an extra method FastGCN (Chen et al., 2017) designed for inductive learning. Details for the implementations are provided in Appendix A.2. **Parameter Settings.** For NIGCN, we fix \(\eta=2\), \(\delta=0.01\) and tune the four hyperparameters \(\tau^{\prime}\), \(\omega\), \(\rho\), and \(\varepsilon\). Appendix A.2 provides the principal on how they are tuned and values selected for all datasets. As with baselines, we either adopt their suggested parameter settings or tune the parameters following the same principle as NIGCN to reach their best possible performance. All methods are evaluated in terms of _micro F1-scores_ on node classification and _running times_ including preprocessing times (if applicable) and training times. One method is omitted on certain datasets if it (i) is not suitable for inductive semi-supervised learning or (ii) runs out of memory (OOM), either GPU memory or RAM. ### Performance Results Table 2 and Table 3 present the averaged F1-scores associated with the standard deviations in _transductive learning_ on Cora, Citeseer, Pubmed, and Papers100M and _inductive learning_ on Ogbn-arxiv, Reddit, and Amazon respectively. For ease of demonstration, we highlight the _largest_ score in bold and underline the _second largest_ score for each dataset. Table 2 shows that NIGCN achieves the highest F1-scores on datasets Pubmed and Papers100M and the second highest scores on Cora and Citeseer. Meanwhile, NIGCN obtains the largest F1-scores on the three datasets Ogbn-arxiv, Reddit, and Amazon, as displayed in Table 3. In particular, the improvement margins over the second best on the three datasets are 0.93%, 0.78%, and 2.67% respectively. These observations indicate that NIGCN performs better on relatively large graphs. Intuitively, nodes in large graphs are prone to reside in various structure contexts and contain neighbors of mixed quality, and the NDM diffusion model and the sampling techniques (important neighbor identification and selection) utilized by NIGCN are able to take advantage of such node-wise characteristics. The most competitive method, GRAND+ achieves the best on datasets Cora and Citeseer. Nonetheless, as shown in Figure 6 ( Section A.3), GRAND+ runs significantly slower than NIGCN does. For the three sampling-based methods, i.e., GraphSAGE, GraphSAINT, and ShaDow-GCN, they acquire noticeably lower F1-scores than NIGCN does. This is due to that they sample neighbors and nodes randomly without customizing the sampling strategy towards target nodes, as introduced in Section 2. Meanwhile, the clear performance improvement of NIGCN over GBP and AGP clearly supports the superiority of our general heat diffusion function GHD over the diffusion models used in GBP and AGP (i.e., PPR and HKPR), as well as the efficacy of our diffusion model NDM. Overall, it is crucial to consider the unique structure characteristic of each individual node in the design of both the diffusion model and neighbor sampling techniques for node classifications. ### Scalability Evaluation on Large Graphs In this section, we evaluate the scalability of tested methods by comparing their running times on the four large datasets, Ogbn-arxiv, Reddit, Amazon, and Papers100M. In particular, the running times include preprocessing times (if applicable) and training times. For a comprehensive evaluation, we also report the corresponding running times on the three small datasets Cora, Citeseer, and Pubmed in Appendix A.3. As shown in Figure 3, NIGCN ranks third with negligible lags on dataset Ogbn-arxiv and dominates other methods noticeably on datasets Reddit, Amazon, and Papers100M. Meanwhile, its efficiency advantage expands larger as datasets grow. Specifically, on dataset Ogbn-arxiv, NIGCN, SGC, and AGP all can finish running within 1 second. The speedups of NIGCN against the second best on datasets Reddit, Amazon, and Papers100M are up to 4.12x, 8.90x, and 441.61x respectively. In particular, on the largest dataset Papers100M, NIGCN is able to complete preprocessing and model training within 10 seconds. The remarkable scalability of NIGCN lies in that (i) NIGCN only generates node representations for a small portion of labeled nodes involved in model training, and (ii) neighbor sampling techniques in NIGCN significantly reduce the number of neighbors for each labeled node in feature aggregation, \begin{table} \begin{tabular}{l l c c c c} \hline \hline \multicolumn{1}{c}{Methods} & Cora & Citeseer & Pubmed & Papers100M \\ \hline \multirow{4}{*}{ \begin{tabular}{l} GCN \\ GAT \\ \end{tabular} } & GCN & \(78.91\pm 1.87\) & \(69.11\pm 1.46\) & \(78.05\pm 1.64\) & OOM \\ & GAT & \(80.22\pm 1.48\) & \(69.35\pm 0.93\) & \(78.82\pm 1.90\) & OOM \\ \cline{2-6} & GraphSAGE & \(76.67\pm 2.21\) & \(67.41\pm 1.77\) & \(76.92\pm 2.84\) & OOM \\ & GraphSAINT & \(74.76\pm 2.86\) & \(67.51\pm 4.76\) & \(78.65\pm 4.17\) & OOM \\ & ShaDow-GCN & \(78.38\pm 2.46\) & \(66.54\pm 1.11\) & \(71.79\pm 2.92\) & OOM \\ \cline{2-6} & SGC & \(76.79\pm 1.82\) & \(70.49\pm 1.29\) & \(74.11\pm 2.55\) & \(48.59\pm 1.77\) \\ & APPNP & \(81.16\pm 0.77\) & \(69.83\pm 1.27\) & \(80.21\pm 1.79\) & OOM \\ & PPRGo & \(79.01\pm 1.88\) & \(68.92\pm 1.72\) & \(78.20\pm 1.96\) & OOM \\ & GDC & \(80.69\pm 1.99\) & \(69.69\pm 1.42\) & \(77.67\pm 1.65\) & OOM \\ & GBP & \(81.17\pm 1.60\) & \(70.18\pm 1.90\) & \(80.09\pm 1.51\) & \(44.91\pm 1.23\) \\ & APP & \(77.70\pm 2.04\) & \(67.15\pm 2.04\) & \(78.97\pm 1.33\) & \(46.71\pm 1.99\) \\ & NDLS & \(81.39\pm 1.55\) & \(69.63\pm 1.69\) & \(80.38\pm 1.41\) & OOM \\ & GRAND+ & \(\mathbf{83.48\pm 1.18}\) & \(\mathbf{71.42\pm 1.89}\) & \(79.18\pm 1.93\) & OOM \\ & NIGCN & \(\mathbf{82.13\pm 1.08}\) & \(71.35\pm 0.82\) & \(\mathbf{80.90\pm 2.02}\) & \(\mathbf{49.81\pm 1.10}\) \\ \hline \hline \end{tabular} \end{table} Table 2. F1-score (%) of transductive learning. \begin{table} \begin{tabular}{l c c c} \hline \hline Methods & Ogbn-arxiv & Reddit & Amazon \\ \hline GraphSAGE & \(51.79\pm 2.16\) & \(89.16\pm 1.16\) & \(47.71\pm 1.07\) \\ FastGCN & \(56.45\pm 1.69\) & \(92.43\pm 1.00\) & OOM \\ SGC & \(56.03\pm 1.96\) & \(92.64\pm 1.02\) & \(41.32\pm 1.10\) \\ PPRGo & \(52.12\pm 3.22\) & \(78.21\pm 3.07\) & \(60.10\pm 1.17\) \\ GBP & \(54.01\pm 2.55\) & \(76.09\pm 1.75\) & \(60.78\pm 1.04\) \\ AGP & \(55.89\pm 1.47\) & \(92.18\pm 0.88\) & \(55.72\pm 1.68\) \\ NDLS & \(54.23\pm 2.49\) & \(85.25\pm 1.24\) & \(50.10\pm 2.09\) \\ NIGCN & \(\mathbf{57.38\pm 1.31}\) & \(\mathbf{93.42\pm 0.48}\) & \(\mathbf{63.45\pm 0.70}\) \\ \hline \hline \end{tabular} \end{table} Table 3. F1-score (%) of inductive learning. as detailed analyzed in Section 4.1. This observation strongly supports the outstanding scalability of NIGCN and its capability to handle web-scale graphs. ### Ablation Study **Variants of NIGCN.** To validate the effects of diffusion model NDM and the sampling techniques in NIGCN, we design three variants of NIGCN, i.e., \(\text{NIGCN}_{\text{HKPR}}\), \(\text{NIGCN}_{\text{UDL}}\), and \(\text{NIGCN}_{\text{NFK}}\). Specifically, (i) \(\text{NIGCN}_{\text{HKPR}}\) adopts heat kernel PageRank (HKPR) instead of general heat diffusion CHD in NDM as the diffusion function, (ii) \(\text{NIGCN}_{\text{UDL}}\) unifies the diffusion length for all labeled nodes in contrast with the node-wise diffusion length in NDM, and (iii) \(\text{NIGCN}_{\text{NFK}}\) removes the first-\(K\) limitation on the number of neighbors. We test all variants on Amazon and Table 4 reports the corresponding F1-scores. For clarification, we also present the F1-score disparity from that of NIGCN. First of all, we observe that the F1-score of \(\text{NIGCN}_{\text{HKPR}}\) is 1.04% smaller than that of NIGCN. This verifies that HKPR is not capable of capturing the structure characteristics of Amazon and NDM offers better generality. Second, \(\text{NIGCN}_{\text{UDL}}\) requires 1.32% less F1-scores compared with NIGCN. This suggests that diffusion with customized length leverages the individual structure property of each target node, which benefits the node classification. Last, \(\text{NIGCN}_{\text{NFK}}\) achieves 9.88% smaller F1-score than NIGCN does, which reveals the potential noise signals from neighbors and recognizes the importance and necessity of important neighbor selection. **Label Percentages Varying.** To evaluate the robustness of NIGCN towards the portion of labeled nodes, we test NIGCN by varying the label percentages in \(\{4\%,8\%,1\%,2\%,5\%\}\) on Amazon4 and compare it with two competitive baselines PPRGo and GBP. Results in Table 5 and Figure 4 report the F1-scores and running times respectively. Footnote 4: Papers100M contains only 1.4% labeled nodes which is insufficient for testing. As displayed in table 5, NIGCN achieves the highest F1-score with average 1.93% advantages over the second highest scores across tested percentage ranges. Moreover, Figure 4 shows that NIGCN notably dominates the other two competitors in efficiency. In particular, NIGCN completes execution within 3 seconds and runs up to 5\(\times\) to 20\(\times\) faster than GBP and PPRGo in _all_ settings respectively. These findings validate the robustness and outstanding performance of NIGCN for semi-supervised classification. **Parameter Analysis.** The performance gap between \(\text{NIGCN}_{\text{HKPR}}\) and \(\text{NIGCN}\) has shown that inappropriate combination of \(\omega\) and \(\rho\) degrades the performance significantly. Here we test the effects of hyperparameters \(\tau\) and \(\varepsilon\) in control of the diffusion length \(t_{u}\) and the number of neighbors, respectively. On Amazon, we select \(\tau=1.5\), denoted as \(\tau_{o}\) and \(\varepsilon=0.05\), denoted as \(\varepsilon_{o}\). We then test \(\tau\in\{0.25\tau_{o},0.5\tau_{o},2\tau_{o},4\tau_{o}\}\) and \(\varepsilon\in\{0.25\tau_{o},0.5\tau_{o},2\tau_{o},4\tau_{o}\}\) and plot the results in Figure 5. As shown, F1-score improves along with the increase of \(\tau\) until \(\tau=\tau_{o}\) and then decreases slightly as expected. Similar patterns are also observed in the case of \(\varepsilon\). Specifically, NIGCN exhibits more sensitivity towards the change of \(\tau\) than that of \(\varepsilon\). This is because NIGCN is able to capture the most important neighbors within the right \(\tau\)-distance with high probability when changing the threshold of \(\varepsilon\)-importance neighbors, which, however, is not guaranteed when altering the bound of \(\tau\)-distance. ## 6. Conclusion In this paper, we propose NIGCN, a scalable graph neural network built upon the node-wise diffusion model NDM, which achieves orders of magnitude speedups over representative baselines on massive graphs and offers the highest F1-score on semi-supervised classification. In particular, NDM (i) utilizes the individual topological characteristic and yields a unique diffusion scheme for each target node and (ii) adopts a general heat diffusion function GHD that adapts well to various graphs. Meanwhile, to optimize the efficiency of feature aggregations, NIGCN computes representations \begin{table} \begin{tabular}{l c c c c} \hline \hline Variants & \(\text{NIGCN}_{\text{HKPR}}\) & \(\text{NIGCN}_{\text{UDL}}\) & \(\text{NIGCN}_{\text{NFK}}\) & NIGCN \\ \hline F1-score (\%) & 61.60 \(\pm\) 0.82 & 61.32 \(\pm\) 1.32 & 52.76 \(\pm\) 1.14 & 63.45 \(\pm\) 0.70 \\ Disparity (\%) & -1.85 & -2.13 & -10.69 & 0 \\ \hline \hline \end{tabular} \end{table} Table 4. Performance of NIGCN variants. for target nodes only and leverages advanced neighbor sampling techniques to identify and select important neighbors, which not only improves the performance but also boosts the efficiency significantly. Extensive experimental results strongly support the state-of-the-art performance of NIGCN for semi-supervised classification and the remarkable scalability of NIGCN.
2307.15567
Panoptic Scene Graph Generation with Semantics-Prototype Learning
Panoptic Scene Graph Generation (PSG) parses objects and predicts their relationships (predicate) to connect human language and visual scenes. However, different language preferences of annotators and semantic overlaps between predicates lead to biased predicate annotations in the dataset, i.e. different predicates for same object pairs. Biased predicate annotations make PSG models struggle in constructing a clear decision plane among predicates, which greatly hinders the real application of PSG models. To address the intrinsic bias above, we propose a novel framework named ADTrans to adaptively transfer biased predicate annotations to informative and unified ones. To promise consistency and accuracy during the transfer process, we propose to measure the invariance of representations in each predicate class, and learn unbiased prototypes of predicates with different intensities. Meanwhile, we continuously measure the distribution changes between each presentation and its prototype, and constantly screen potential biased data. Finally, with the unbiased predicate-prototype representation embedding space, biased annotations are easily identified. Experiments show that ADTrans significantly improves the performance of benchmark models, achieving a new state-of-the-art performance, and shows great generalization and effectiveness on multiple datasets.
Li Li, Wei Ji, Yiming Wu, Mengze Li, You Qin, Lina Wei, Roger Zimmermann
2023-07-28T14:04:06Z
http://arxiv.org/abs/2307.15567v3
# Panoptic Scene Graph Generation with Semantics-prototype Learning ###### Abstract Panoptic Scene Graph Generation (PSG) parses objects and predicts their relationships (predicate) to connect human language and visual scenes. However, different language preferences of annotators and semantic overlaps between predicates lead to biased predicate annotations in the dataset, i.e. different predicates for same object pairs. Biased predicate annotations make PSG models struggle in constructing a clear decision plane among predicates, which greatly hinders the real application of PSG models. To address the intrinsic bias above, we propose a novel framework named ADTrans to adaptively transfer biased predicate annotations to informative and unified ones. To promise consistency and accuracy during the transfer process, we propose to measure the invariance of representations in each predicate class, and learn unbiased prototypes of predicates with different intensities. Meanwhile, we continuously measure the distribution changes between each presentation and its prototype, and constantly screen potential biased data. Finally, with the unbiased predicate-prototype representation embedding space, biased annotations are easily identified. Experiments show that ADTrans significantly improves the performance of benchmark models, achieving a new state-of-the-art performance, and shows great generalization and effectiveness on multiple datasets. ## 1 Introduction Panoptic Scene Graph Generation (PSG) [30] aims to simultaneously detect instances and their relationships within visual scenes [4]. Instead of coarse bounding boxes used in Scene Graph Generation (SGG) [14, 29, 15, 12, 6, 31, 32], PSG proposed to construct more comprehensive scene graphs with panoptic segmentation [10]. With the potential to bridge the gap between visual scenes and human languages, PSG methods thus has the ability to contribute to related vision-language tasks, such as image retrieval [9, 22, 25], image captioning [5, 26, 37, 38], and visual question answering [21, 13, 2, 28]. However, PSG methods currently suffer from suboptimal performance due to biased and noisy information in generated scene graphs, stemming from the problem of _biased annotations_. Exploring the inference mechanism of PSG and SGG models, it is translating visual scenes to linguistic descriptions, i.e., mapping visual instances to subject/objects, and their relationships to predicates. We regard the problem above as the semantic ambiguity between predicates, and the contradictory mappings from visual to linguistics. There are a lot of semantic overlaps and hierarchical relationships among predicates, e.g., the superclass predicate _on_ for its subclass predicate _standing on_. Because of the semantic overlaps and the inconsistent preferences of an Figure 1: (a) Exemplar panoptic segmentation results of an input image. (b) Long-tail distribution problem caused by biased annotation, and the corresponding recall rate for different predicate labels. (c) and (d) present annotation transfer process. Our proposed method is capable of promoting the original dataset by identifying biased annotation and potential positive samples, and then adaptively and accurately transfer them to target triplet pairs. notators, contradictory mappings from visual to linguistics unavoidably exist in the training dataset, and deteriorate the long-tail distribution problem of the training dataset. As shown in Fig. 1, with the difficulty of applying a unified standard for annotation, annotators tend to annotate general predicate labels (e.g., "on" and "beside") for simplicity instead of informative ones (e.g., "standing on" and "looking at"). As a result, models cannot learn consistent mapping from visual to semantics, instead, they entangle predicate labels and the prior knowledge of long-tail distribution, leading to a serious harm for model training stage, as shown in the Fig. 1. Previous works [34, 33, 31, 29, 19, 18, 15, 24, 23] exploit numerous model architectures to alleviate the bias problem, but these models trained by biased datasets achieve relatively limited performances, and cannot fundamentally solve the problem. [35] have proposed to enhance the training dataset by an data transfer framework, which transfers head predicate labeled samples to tail predicate labeled ones. However, their framework inaccurately transfer a significant number of samples, leading to imbalanced performance among predicates. To alleviate the biased annotation problem, we propose to construct a promised and reasonable dataset, which includes plentiful samples with consistent predicate annotation. Specifically, for the original biased predicate annotations with extensive inter-class overlaps, we transfer them to high-quality consistent predicate annotations to promise that samples with different predicate labels are highly separated. Furthermore, we introduce a new adaptive data transfer framework named ADTrans for PSG. Our framework emphasizes consistency during the data transfer process, and performs the data transfer process adaptively and accurately. Besides the prior knowledge of dataset distribution, we believe textual information alignment helps in building consistency during the data transfer process. A general way is leveraging large language models to extract semantic embeddings, however, words embeddings generated by large language models often have high similarity because of the broad class intersection of the open world, leading to the misalignment of the textual domain and the relationship domain. Thus, we propose a prototype-based predicate representation learning method. Unbiased predicate representations are expected to share invariant features within each predicate class. Thus, we employ the contrastive learning to increase intra-class cohesion and inter-class separation, while focus more on hard samples (visually similar predicates). Meanwhile, we observe the invariance degree of representations in each predicate class, and learn predicate prototypes with different intensities. We continuously measure the distribution changes between each presentation and its prototype, and constantly screen potential biased data. Finally, with the unbiased predicate representation embedding space, biased annotations are easily identified and transferred. In summary, the following contributions are made: * A novel, plug-and-play framework named ADTrans is proposed, which aims at adaptively and accurately performing data transfer to promise a reasonable dataset with informative and standard-unified labels, and more solid training samples. * We propose a new prototype-based predicate representation learning method, aiming at a reasonable information alignment process between the textual domain and the relationship domain, to promise consistency during the data transfer process. * Comprehensive experiments demonstrate that the proposed method shows validity on multiple datasets, and significantly enhances the performance of benchmark models, achieving new state-of-the-art performances. ## 2 Related Work **Scene Graph Generation.** SGG has gained increasing attention from the computer vision community for its promising future in high-level vision-language tasks [21, 13, 2, 5, 9, 20, 27],. Early two-stage methods divide the whole task into objects locating process and relationships prediction process, and struggle for better feature extraction network [29, 34, 19, 15]. As more and more researchers emphasize Transformer based one-stage methods [18, 1], which make more fine-grained approaches, researchers find that these fine-grained methods can reach obviously better performance than these two-stage methods by considering rich internal contextual information. More recently, a novel task named Panoptic Scene Graph Generation (PSG) [30], which points out that it will contain noisy and ambiguous pixels if only coarse bounding boxes are provided, and aims at constructing more comprehensive scene graphs with panoptic segmentation rather than coarse bounding boxes. In addition, the provided ground truth panoptic segmentation can also significantly promote the performance of even the most classic SGG method, IMP [29]. In this paper, we investigate the bias exists in SGG and PSG datasets. **Towards Debiasing Scene Graph Generation.**[34] first introduces the biased prediction problem in SGG. [19] and [7] provide a more reasonable metric (Mean Recall) aiming at calculating the recall of each predicate label independently. To directly face the problem, causal inference framework [18, 36, 16] is applied to alleviate data bias during the inference process, and CogTree [33] is designed to train models with the ability to make informative predictions on predicate labels. More recently, [35] argues that performance could be promoted if there is a reasonable and sound dataset. However, the data transfer method in [35] is efficient but so rigid and inflexible that a huge number of positive samples will be wrongly transferred due to its simple and coarse transfer method, which will lead to a remarkable decline in recall rate of head predicate labels. In this paper, we provide a new approach for SGG and PSG datasets debiasing. ## 3 Method According to the previous methods [35], there exist two types of predicates that need to be refined: indistinguishable triplet pairs with semantic ambiguity and potential positive samples missed by annotators. For the indistinguishable triplet pairs, IETrans [35] identifies indistinguishable predicate pairs by checking the inconsistency between the model's predictions and the ground truth labels. To be more specific, IETrans [35] use a pre-trained model (e.g. VCTree [19]) to predict predicate labels for every pair of ground truth subject and object pairs in the PSG training dataset and identify possible indistinguishable predicate labels. For potential positive samples, IETrans [35] employs a pre-trained model to predict predicate labels for every pair of ground truth subject and object labels that have not yet been annotated with predicate labels, also known as NA samples. Different from IETrans [35] using fixed transfer ratio for all predicate pairs, we take advantage of semantic knowledge that exists in pre-trained language models to conduct an adaptive data transfer. ### Relation Representation Extraction To make the language model more sensitive to predicate semantics, we fine-tune the language model with contrastive relation representation training. **Robust Contrastive Training.** We first collect all of the triplets that appear in the training set. Each triplet will be converted to a sentence for language model processing. For example, \(<\)_person, standing on, snow \(>\)_ will be converted to _The person is standing on the snow_. Formally, given a sentence \(s_{i}\) with predicate \(p_{i}\) in the batch \(S=\{s_{k}\}_{i=k}^{N}\), we can construct its positive set \(PS_{i}=\{s_{k}|p_{i}=p_{k}\}_{k\neq i}\) and negative set \(PN_{i}=\{s_{k}|p_{i}\neq p_{k}\}_{k\neq i}\). With the training data, we use a InfoNCE loss to optimize the language model: \[L_{lm}=\sum_{i=1}^{N}-\log\frac{f_{pos}}{f_{pos}+f_{neg}}, \tag{1}\] where: \[f_{pos}=\sum_{s_{j}\in PS_{i}}e^{\sin(h_{i},h_{j})/T}, \tag{2}\] \[f_{neg}=\sum_{s_{g}\in NG_{i}}e^{\sin(h_{i},h_{g})/T}, \tag{3}\] \(T\) is a temperature hyper parameter, \(N\) is batch size, \(h_{i,j,g}\) are language model generated sentence representations for \(s_{i,j,g}\), and \(sim\left(h_{i},h_{j}\right)\) is the cosine similarity. To further boost the sensitivity to predicate similarity, we additionally introduce an angular margin \(m\) for the positive pairs. Formally, instead of using the previously defined \(e^{sim(h_{i},h_{j})/T}\), we use: \[e^{sim(h_{i},h_{j})/T}\to e^{cos(\theta_{i,j}+m)/T}, \tag{4}\] where \(\theta_{i,j}\) is the arc-cosine similarity between \(i\) and \(j\). Thus, \(f_{pos}\) becomes: \[f_{pos}=\sum_{s_{j}\in PS_{i}}e^{cos(\theta_{i,j}+m)/T}. \tag{5}\] **Alignment with Visual Domain.** There is typically a domain gap between the visual domain and the textual domain, i.e., the textual similar predicates are not visually similar. To align the language model with the visual domain, we try to incorporate some visual prior knowledge into the language model fine-tuning. Specifically, we take advantage of the confusion matrix \(C\in\mathbf{R}^{Q\times Q}\) generated by a pre-trained VCTree model, where \(Q\) is the number of predicates in the dataset, and \(C_{i,j}\) denotes the averaged prediction score for predicate \(p_{j}\) on all examples annotated with \(p_{i}\). When \(C_{i,j}\) is near to 1 (very high), \(p_{i}\) and \(p_{j}\) are visually similar and when \(C_{i,j}\) is near to 0 (very low), \(p_{i}\) and \(p_{j}\) are visually different. For the language model training, we expect the language model to be aligned with the visual domain similarity judgment. Thus, the language model should also distinguish the visually different predicate pairs but avoid the visually similar predicate pairs from being too distant in the feature space. The metric \(1-C_{i,j}\) can satisfy our requirement. Formally, we add \(1-C_{i,j}\) as a weight for negative pairs: \[e^{cos(\theta_{i,j})/T}\rightarrow(1-C_{i,j})\times e^{cos(\theta_{i,j})/T}. \tag{6}\] Then, the \(f_{neg}\) becomes: \[f_{neg}=\sum_{s_{g}\in PN_{i}}(1-C_{i,g})\,e^{cos(\theta_{i,g})/T}. \tag{7}\] **Invariant Representation Exploration.** We propose the invariant representation exploration to further promise the unbiased representation of predicates. Specifically, we say that an unbiased representation \(\Phi:X\to H\) elicits an invariant predictor \(\omega\) across positive set \(\varepsilon\) if there is an optimizer \(\omega:H\to Y\) simultaneously optimal for all samples from the positive set. The learning objective can be formulated as: \[\omega\in argmin_{\widetilde{\omega}:H\to Y}O^{e}(\omega,\Phi). \tag{8}\] Eq. 8 tries to learn a feature representation from \(\Phi(\cdot)\) that can induce an optimizer \(\omega(\cdot)\) which is simultaneously optimal for all \(e\in\varepsilon\). Thus, we propose the invariant representation regularization which can be formulated as: \[L_{irm}=\sum_{i=1}^{N}\lambda Var(L^{i}), \tag{9}\] where \(L^{i}=\left\{L_{lm}\left(j\right)|p_{i}=p_{j}\right\}\) denotes the loss values of samples from the positive set, and \(\lambda\) is a hyper-parameter. The minimization of variances of loss values encourages unbiased representation learning for each predicate class [3]. ### Semantics-prototype Learning **Dynamic Prototype Updating.** To build an unbiased embedding space for predicates, we further propose to construct the prototype space for predicates. Specifically, we specify the total prototype space \(P_{type}\in\mathbf{R}^{L\times Q}\), where \(L\) is the same as the size of semantic embedding. Then we dynamically update the prototype space depending on the degree of invariance during the robust contrastive training process. Given a batch \(S\), we can construct multiple positive sets \(PS=\left\{ps_{1},ps_{2},...,ps_{P}\right\}\) with different predicates, where \(P\) is the number of different predicates in the batch \(S\). For every positive set in the \(PS\) with predicate \(p_{i}\), we obtain its average predicate representation embedding as follows: \[H_{aver}^{p_{i}}=\frac{1}{N^{p_{i}}}{\sum_{s=1}^{N^{p_{i}}}}h_{i}, \tag{10}\] where \(N^{p_{i}}\) denotes the number of samples with predicate \(p_{i}\) in the batch \(S\), and \(h_{i}\) denotes the predicate representation embedding. We average the summation of all predicate representations with the same predicate in the batch to get the average feature embedding. With the help of observed invariant representation, we update the prototype space for predicate \(p_{i}\) with a moving average approach: \[P_{type}^{p_{i}}=\beta P_{type}^{p_{i}}+(1-\beta)\underbrace{\frac{1}{\gamma Var (L^{i})N^{p_{i}}}}_{Approach\ Speed}(H_{aver}^{p_{i}}-P_{type}^{p_{i}}), \tag{11}\] where \(\beta\) and \(\gamma\) are hyper-parameters. **Multistage Data Filtration.** Biased and noisy samples in the training dataset are certain to influence the unbiased predicate representation learning process. Thus, We design the multistage data filtration to multistage-ly filter out these bad samples. Specifically, we take the advantage of the invariant representation regularization and the sample-prototype distribution shift as the measurements for the sample's quality. For every training epoch, we collect \(V\in\mathbf{R}^{G}\) from Eq. 9, which denotes the variance of loss values of every training sample in the training dataset with \(G\) samples. Then we average the collected variances on predicate labels, getting \(V_{aver}\in\mathbf{R}^{Q}\). For every sample \(S_{i}\) with predicate label \(P_{i}\) and variance \(V_{i}\) in the training dataset, we judge whether it is part of potential biased and noisy samples, which can be formulated as: Figure 2: **Illustration of overall pipeline of our method.** It learns the unbiased predicate-prototypes by contrastive training, invariant representation exploration and multistage data filtration. The learned prototypes help to promise the consistency during data transfer process. \[P_{bn}=\left\{S_{i}|V_{i}>\mu V_{aver}^{i}(H_{aver}^{p_{i}}-P_{type}^{p_{i}})\right\}, \tag{12}\] where \(V_{aver}^{i}\) denotes the averaged variance on predicate label \(p_{i}\), and \(\mu\) is a hyper-parameter. We further sort \(P_{bn}\) by the loss value derived from Eq. 1 and drop out the top \(D\%\) of training data. If there are less than 100 samples in a predicate class, we do not drop out any more samples from it. The multistage data filtration avoids the influence of a large number of biased annotations (outlier noise). Thus, the whole unbiased predicate representation learning process promises the unbiased representation of predicates. ### Data Transfer As a result, a similarity matrix \(S\in\mathbf{R}^{Q\times Q}\) can be generated by calculating the cosine similarities between all predicate-prototypes. For indistinguishable triplets, we directly use the similarity score as an adaptive transfer ratio. To better transfer potential positive triplets, we further define an influence factor, where the intuition is to transfer more data for the scarce relation triplets with low NA score. Formally, the influence factor is: \[E_{(s_{i},p_{i},o_{i})}=\sqrt{-log\left(NA_{score}\right)\times\text{c}_{(si,oi )}\times c_{p_{i}}}, \tag{13}\] where \(-log(NA_{score})\) is the NA score, \(c(s_{i},o_{i})\) is the scarcity of triplets with subject \(s_{i}\) and \(o_{i}\), and \(c_{p_{i}}\) is the scarcity of triplets with predicate \(p_{i}\), details can be found in supplementary material. In the practice, we further normalize \(c_{(s_{i},o_{i})}\) with softmax. To judge whether a NA sample should be transferred, we rank all potential target triplet pairs according to their influence factors, and conduct the transfer of top \(K_{g}\%\) pairs. ### Resampling We can directly integrate the transferred indistinguishable pairs and potential positive pairs without conflicts. Furthermore, a special re-sampling method is introduced on the integrated dataset to further enhance it. For every triplet \((s_{i},p_{i},o_{i})\) in each image, we calculate its repeat times as: \[R=max\left(1,t\times\text{c}_{(si,oi)}\times c_{(p_{i})}\right), \tag{14}\] where \(t\) is a hyperparameter controlling the possible repeat times. The maximum value of the repeat factor within each image is then selected. ## 4 Experiment ### Dataset and Evaluation Metrics **Dataset**. We evaluate our method on classic SGG dataset Visual Genome [11] and PSG dataset [30]. **Evaluation Metric.** Following previous works [18, 34], we take recall@K (R@K) and mean recall@K (mR@K)[19, 7] as evaluation metrics. Following [30] and PSG challenge, we also adopt a new evaluation metrics named percentile recall (PR), which can be formulated as \(PR=30\%R+60\%mR+10\%PQ\), where PQ measures the quality of a predicted panoptic segmentation relative to the ground truth [10]. For conventional SGG tasks on the VG dataset, we also adopt an overall metric F@K [35], which is the harmonic average of R@K and mR@K. \begin{table} \begin{tabular}{c|c|c c|c c c c|c c} \hline \multicolumn{2}{c|}{Method} & \multicolumn{6}{c}{Scene Graph Generation} \\ \cline{2-11} \multicolumn{2}{c|}{} & R@20 & R@50 & R@100 & mR@20 & mR@50 & mR@100 & PR@20 & PR@50 & PR@100 \\ \hline \multirow{6}{*}{Two-Stage} & IMP [29] & 16.5 & 18.2 & 18.6 & 6.52 & 7.05 & 7.23 & 12.9 & 13.7 & 13.9 \\ & +IETrans [35] & 14.5 & 15.9 & 16.4 & 10.2 & 11.0 & 11.3 & 14.5 & 15.4 & 15.7 \\ & +ADTrans & 15.0 & 16.5 & 17.0 & **12.5** & **13.5** & **14.0** & **16.0** & **17.1** & **17.5** \\ \cline{2-11} & VCTree [19] & 20.6 & 22.1 & 22.5 & 9.70 & 10.2 & 10.2 & 16.0 & 16.8 & 16.9 \\ & +IETrans [35] & 17.5 & 18.9 & 19.3 & 17.1 & 18.0 & 18.1 & 19.6 & 20.5 & 20.7 \\ & +ADTrans & 17.9 & 19.5 & 19.9 & **18.0** & **18.9** & **19.0** & **20.2** & **21.2** & **21.4** \\ \cline{2-11} & MOTIFS [34] & 20.0 & 21.7 & 22.0 & 9.10 & 9.57 & 9.69 & 15.5 & 16.3 & 16.5 \\ & +IETrans [35] & 16.7 & 18.3 & 18.8 & 15.3 & 16.5 & 16.7 & 18.2 & 19.4 & 19.7 \\ & +ADTrans & 17.1 & 18.6 & 19.0 & **17.1** & **18.0** & **18.5** & **19.4** & **20.4** & **20.8** \\ \cline{2-11} & GPSnet [15] & 17.8 & 19.6 & 20.1 & 7.03 & 7.49 & 7.67 & 13.6 & 14.4 & 14.7 \\ & +IETrans [35] & 14.6 & 16.0 & 16.7 & 11.5 & 12.3 & 12.4 & 15.3 & 16.2 & 16.5 \\ & +ADTrans & 17.8 & 19.2 & 19.5 & **16.5** & **17.5** & **17.6** & **19.3** & **20.3** & **20.5** \\ \hline \multirow{3}{*}{One-Stage} & PSGTR [30] & 28.4 & 34.4 & 36.3 & 16.6 & 20.8 & 22.1 & 21.9 & 26.3 & 27.6 \\ & +IETrans [35] & 25.3 & 28.8 & 29.2 & 23.1 & 27.2 & 27.5 & 24.9 & 28.4 & 28.7 \\ \cline{1-1} & +ADTrans & 26.0 & 29.6 & 30.0 & **26.4** & **29.7** & **30.0** & **27.1** & **30.2** & **30.5** \\ \hline \end{tabular} \end{table} Table 1: The results (R@K, mR@K and PR@K) on SGDet task of our method and other baselines on PSG dataset. IETrans and ADTrans denote models equipped with different dataset-enhancement methods. ### Tasks and Implementation Details **Tasks.** We evaluate our method on three classic SGG tasks: Predicate Classification (PREDCLS), Scene Graph Classification (SGCLS) and Scene Graph Generation (SGDET). More statistics can be found in supplementary material. **Implementation Details**. For the pre-trained language model, we use pre-trained BERT-base [8]. The decision margin \(m\) is set to 10 degrees, the temperature hyper parameter T is set to 0.05, and we use an AdamW [17] optimizer with a learning rate 2e-5. The hyper-parameter \(\lambda\) is set to 0.3, \(\beta\) is set to 5e5, and \(\gamma\) is set to 1.5. The \(D\) in multistage data filtration is set to 50. For NA sample transfer, the value of \(K_{g}\) is set to 0.05, and for the re-sampling process, the value of \(t\) is set to 3e7. ### Qualitative Analysis As shown in Figure 3, we can compare results predicted by plain PSGTR and PSGTR equipped with our ADTrans. Obviously, PSGTR with ADTrans can predict more accurate relationships between instances and also predict predicate that better fits the scene. For instance, plain PSGTR predicts "_walking on_" for the person on the right, and PSGTR with ADTrans predicts "_crossing_", which fits the given scene better. We believe our method helps construct a more comprehensive scene graph. ### Comparison with State-of-the-Art Methods In this section, we report the results for ADTrans on different datasets, tasks and baseline methods. **Effectiveness of ADTrans.** From the results, we observe that our method can effectively improve the performance of baseline networks in nearly all metrics. Fig. 4 presents a comparison between our method and plain PSGTR of the detailed recall@100 of SGDet task on part of predicate classes. ADTrans performs better than plain PSGTR on almost all the above predicate classes. When compared to IETrans [35], our method shows significant improvements in both recall and mean recall on all baseline models, indicating that our method can enhance the training dataset \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{5}{c}{Scene Graph Classification} \\ \cline{2-7} & mR@20 & @50 & @100 & F@20 & @50 & @100 \\ \hline MOTIFS & 6.0 & 8.0 & 8.5 & 10.1 & 13.1 & 13.8 \\ +ADTrans & **14.8** & **17.0** & **17.8** & **20.2** & **22.5** & **23.7** \\ \hline VCTree & 6.3 & 7.5 & 8.0 & 10.7 & 12.5 & 13.3 \\ +ADTrans & **16.0** & **19.0** & **19.8** & **20.3** & **23.7** & **24.5** \\ \hline GPSnet & 10.0 & 11.8 & 12.6 & 15.7 & 17.9 & 18.9 \\ +ADTrans & **15.5** & **18.2** & **18.8** & **19.9** & **22.5** & **23.7** \\ \hline \hline \end{tabular} \end{table} Table 4: The results (mR@K and F@K) on SGCLS task of our method and other baselines on VG dataset. +ADTrans denotes baseline methods equipped with our method. \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{5}{c}{Scene Graph Generation} \\ \cline{2-7} & mR@20 & @50 & @100 & F@20 & @50 & @100 \\ \hline MOTIFS & 4.8 & 6.2 & 7.1 & 8.0 & 10.3 & 11.8 \\ +ADTrans & **10.6** & **15.5** & **18.1** & **13.4** & **18.9** & **22.0** \\ \hline VCTree & 5.2 & 6.7 & 7.9 & 8.7 & 11.0 & 13.0 \\ +ADTrans & **9.7** & **12.5** & **16.9** & **12.2** & **16.3** & **20.3** \\ \hline GPSnet & 5.2 & 5.9 & 7.1 & 8.6 & 9.9 & 11.8 \\ +ADTrans & **12.3** & **15.8** & **19.2** & **15.1** & **18.6** & **21.9** \\ \hline \hline \end{tabular} \end{table} Table 2: The results (mR@K and F@K) on SGDET task of our method and other baselines on VG dataset. +ADTrans denotes baseline methods equipped with our method. Figure 3: Visualization of plain PSGTR model and PSGTR equipped with our ADTrans. PSGTR with ADTrans can predict relationships between instances with greater accuracy and also select predicates that better match the visual scene. \begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{5}{c}{Predicate Classification} \\ \cline{2-7} & mR@20 & @50 & @100 & F@20 & @50 & @100 \\ \hline MOTIFS & 11.7 & 15.2 & 16.2 & 19.5 & 24.5 & 26.0 \\ +ADTrans & **29.0** & **36.2** & **38.8** & **36.1** & **41.7** & **43.5** \\ \hline VCTree & 14.0 & 16.3 & 17.7 & 22.7 & 26.0 & 28.0 \\ +ADTrans & **30.0** & **32.9** & **35.5** & **37.2** & **40.5** & **42.5** \\ \hline GPSnet & 13.2 & 15.0 & 16.0 & 21.7 & 24.4 & 25.8 \\ +ADTrans & **27.3** & **32.0** & **34.7** & **34.8** & **40.2** & **42.1** \\ \hline \hline \end{tabular} \end{table} Table 3: The results (mR@K and F@K) on PREDCLS task of our method and other baselines on VG dataset. +ADTrans denotes baseline methods equipped with our method. more effectively, avoiding noisy or redundant transfer processes. When it comes to PR, which takes into account both recall and mean recall, our method outperforms all the original models by significant margins. This suggests that our method not only improves the recall of the models but also balances the performance across different predicate labels, resulting in a more comprehensive evaluation of the models' performance. **Expansibility of ADTrans.** Applied with our ADTrans, the performances of all baseline models trained on all the datasets on all the tasks are greatly improved. With the observation that all the baseline models trained on the VG dataset have poorer performances compared to the same baseline models trained on the PSG dataset, VG dataset is more challenging for our ADTrans due to more biased annotations. Our method shows great expansibility on VG dataset, baseline models trained on the dataset achieve competitive performances on all the tasks. ### Ablation Studies In this part, we evaluate our method by applying different components, using different data processing methods during triplet transfer process, and using different contrastive learning methods. **Different components in ADTrans framework.** We evaluate the importance of each component in our ADTrans. As shown in Table. 5, we incrementally add one component to the plain baseline PSGTR [30] to validate their effectiveness. The indistinguishable triplet transfer component provides the most promotion on performance. The reason is that, we transfer these inconsistent annotations to informative and consistent ones, so that models can learn a consistent mapping from visual to predicates. The potential positive transfer component additionally provides performance promotion. Our ADTrans transfers original NA samples, which are probably missed by annotators, to informative annotations. This step provides more reasonable training samples to construct a consistent training dataset. **Different data processing methods during triplet transfer.** To prove the effectiveness of our indistinguishable triplet transfer method and the harm of biased annotation, we design another simple method to process indistinguishable triplet pairs. Instead of adaptively transferring them to target pairs, we simply remove them from the original training dataset. As shown in Table. 6, when comparing the simply removing method and the original baseline, the simply removing method surprisingly greatly overtakes the baseline model, indicating the serious conflict resulting from biased annotation within the original dataset. These biased annotated samples can not help the model during the training process, but will make it difficult for the model to distinguish each predicate label, resulting in a sharp decline in model performance. When comparing the triplet transfer method and the simply removing method, there are also great margins between these two methods. With the fact that our triplet transfer method greatly outperforms the simply removing method, we observe a promising textual information alignment process and the effectiveness of our adaptive triplet transfer method is proved. **Different contrastive learning methods.** As shown in Table. 7, though we observe improvement with the help of the classic InfoNCE, we can promote the model to a higher \begin{table} \begin{tabular}{c|c c c} \hline \multirow{2}{*}{Data Processing Method} & \multicolumn{3}{c}{SGDet} \\ \cline{2-4} & mR@20 & mR@50 & mR@100 \\ \hline Original & 16.6 & 20.8 & 22.1 \\ \hline Remove & 20.0 & 24.6 & 25.3 \\ \hline Triplet Transfer & 24.9 & 28.2 & 29.2 \\ \hline \end{tabular} \end{table} Table 6: Ablation study on data processing methods. Triplet Transfer: Indistinguishable triplet transfer. Remove: simply removing all indistinguishable triplet pairs. Original: baseline method on the original dataset. Figure 4: R@100 for predicates under SGDet task among plain PSGTR, and PSGTR with ADTrans. ADTrans achieves more balanced and effective predicate discrimination among predicates with different frequencies than plain PSGTR. \begin{table} \begin{tabular}{c c c|c c c} \hline \multicolumn{2}{c|}{Module} & \multicolumn{3}{c}{Scene Graph Generation} \\ \hline IT & PT & RE & R/mR@20 & R/mR@50 & R/mR@100 \\ \hline \(\mathcal{X}\) & \(\mathcal{X}\) & \(\mathcal{X}\) & 28.4 / 16.6 & 34.4 / 20.8 & 36.3 / 22.1 \\ \hline \(\checkmark\) & \(\mathcal{X}\) & 26.2 / 24.9 & 30.3 / 28.2 & 30.7 / 29.2 \\ \(\checkmark\) & \(\checkmark\) & \(\mathcal{X}\) & 25.5 / 25.6 & 29.2 / 29.1 & 29.7 / 29.6 \\ \(\checkmark\) & \(\checkmark\) & 26.0 / 26.4 & 29.6 / 29.7 & 30.0 / 30.0 \\ \hline \end{tabular} \end{table} Table 5: Ablation study on model components. IT: Indistinguishable Triplet Transfer; PT: Potential Positive Transfer; RE: resampling. level with our method. An simply InfoNCE method increase the intra-class cohesion and inter-class separability, but cannot learn unbiased predicate representations because of biased annotations. As a result, biased annotations are inconspicuous and hard to be identified in a biased predicate representation embedding space. Our method utilizes invariant representation exploration and multistage data filtration to avoid the influence from biased annotations. ## 5 Conclusion We introduce a novel framework named ADTrans for PSG and SGG to alleviate the biased annotation problem. The framework transfers indistinguishable triplet pairs and potential positive samples to promise a reasonable training dataset with more informative and standard-unified labels, and more solid training samples. Experimental results demonstrate that the proposed method significantly enhances the performance of benchmark models on multiple datasets, achieving a new state-of-the-art performance.
2307.05774
Spectral Stability of Periodic Traveling Wave Solutions for a Double Dispersion Equation
In this paper, we investigate the spectral stability of periodic traveling waves for a cubic-quintic and double dispersion equation. Using the quadrature method we find explict periodic waves and we also present a characterization for all positive and periodic solutions for the model using the monotonicity of the period map in terms of the energy levels. The monotonicity of the period map is also useful to obtain the quantity and multiplicity of non-positive eigenvalues for the associated linearized operator and to do so, we use tools of the Floquet theory. Finally, we prove the spectral stability by analysing the difference between the number of negative eigenvalues of a convenient linear operator restricted to the space constituted by zero-mean periodic functions and the number of negative eigenvalues of the matrix formed by the tangent space associated to the low order conserved quantities of the evolution model.
Fábio Natali, Thiago P. de Andrade
2023-07-11T20:06:00Z
http://arxiv.org/abs/2307.05774v1
# Spectral stability of periodic traveling wave solutions for a double dispersion equation ###### Abstract. In this paper, we investigate the spectral stability of periodic traveling waves for a cubic-quintic and double dispersion equation. Using the quadrature method we find explict periodic waves and we also present a characterization for all positive and periodic solutions for the model using the monotonicity of the period map in terms of the energy levels. The monotonicity of the period map is also useful to obtain the quantity and multiplicity of non-positive eigenvalues for the associated linearized operator and to do so, we use tools of the Floquet theory. Finally, we prove the spectral stability by analysing the difference between the number of negative eigenvalues of a convenient linear operator restricted to the space constituted by zero-mean periodic functions and the number of negative eigenvalues of the matrix formed by the tangent space associated to the low order conserved quantities of the evolution model. Key words and phrases:spectral stability, double dispersion equation, periodic waves 2000 Mathematics Subject Classification: 76B25, 35Q51, 35Q53 ## 1. Introduction. Our aim is to prove existence and spectral stability of periodic traveling wave solution for a cubic-quintic and double dispersion equation (DDE henceforth) given by \[u_{tt}-u_{xx}+h_{1}u_{xxxx}-h_{2}u_{ttxx}+(f(u))_{xx}=0, \tag{1.1}\] where \(u=u(x,t)\), with \((x,t)\in\mathbb{R}\times\mathbb{R}^{+}\), \(h_{1}\) and \(h_{2}\) are positive real constants and \[f(u)=au^{p+1}+bu^{2p+1},\] with \(p>0\) and \(a,b\in\mathbb{R}\backslash\{0\}\). We are interested in the special case \(p=2\) and \(a=b=1\). We also assume that \(h_{1}=5\) and \(h_{2}=4\) for technical reasons. All of those specific cases enable us to consider equation (1.1) as \[u_{tt}-u_{xx}+5u_{xxxx}-4u_{ttxx}+(u^{3}+u^{5})_{xx}=0. \tag{1.2}\] The evolution equation (1.2) does not enjoy of a scaling invariance in order to determine a sharp Sobolev space to obtain the existence of a global well-posedness result in time and the reason for that is because of the presence of the double dispersion term \(5u_{xxxx}-4u_{xxtt}\). Moreover, it can be reduced in the following nonlinear system as \[\left\{\begin{array}{l}u_{t}=v_{x},\\ v_{t}=(I-4\partial_{x}^{2})^{-1}\partial_{x}((I-5\partial_{x}^{2})u-(u^{3}+u^{ 5})),\end{array}\right. \tag{1.3}\] which gives us the natural conserved quantities (energy and momentum, respectively) given by \[E(u,v)=\frac{1}{2}\int_{0}^{L}\left(u^{2}+5u_{x}^{2}+v^{2}+4v_{x}^{2}-\frac{1}{2} u^{4}-\frac{1}{3}u^{6}\right)dx, \tag{1.4}\] and \[F(u,v)=\int_{0}^{L}(uv+4u_{x}v_{x})dx, \tag{1.5}\] where \((u,v):=\left(\begin{array}{c}u\\ v\end{array}\right)\). From system (1.3), we also have the following conserved quantities \[G(u,v)=\int_{0}^{L}udx, \tag{1.6}\] and \[H(u,v)=\int_{0}^{L}vdx. \tag{1.7}\] In addition, the nonlinear system (1.3) can be reduced in an abstract Hamiltonian system as \[\left(\begin{array}{c}u\\ v\end{array}\right)_{t}=J\left(\begin{array}{c}\partial_{u}E\\ \partial_{v}E\end{array}\right), \tag{1.8}\] where \(\partial_{u}E\) and \(\partial_{v}E\) denote the Frechet derivative of \(E\) with respect to \(u\) and \(v\), respectively. Operator \(J\) is defined as \(J=(I-4\partial_{x}^{2})^{-1}\left(\begin{array}{cc}0&\partial_{x}\\ \partial_{x}&0\end{array}\right)\). Because of the natural translation invariance present in the equation (1.3), we can determine traveling wave solutions of the form \((u(x,t),v(x,t))=(\phi(x-ct),\psi(x-ct))\), where \(\phi,\psi:\mathbb{R}\rightarrow\mathbb{R}\) are real-valued functions which are \(L-\)periodic at the space variable and \(c\) is called the wave-speed of the periodic solution which is assumed to be positive. Replacing this kind of solution into equation (1.3), we see that the pair \((\phi,\psi)\) must satisfy the following system of ODE, \[\left\{\begin{array}{l}-c\phi^{\prime}=\psi^{\prime}\\ -c(\psi^{\prime}-4\psi^{\prime\prime\prime})=\phi^{\prime}-5\phi^{\prime\prime \prime}-(\phi^{3}+\phi^{5})^{\prime},\end{array}\right. \tag{1.9}\] System (1.9) can be reduced into a single equation as \[c^{2}\phi^{\prime}-\phi^{\prime}+5\phi^{\prime\prime\prime}-4c^{2}\phi^{ \prime\prime\prime}+(\phi^{3}+\phi^{5})^{\prime}=0.\] Integrating the last ODE, we obtain \[(c^{2}-1)\phi+(5-4c^{2})\phi^{\prime\prime}+\phi^{3}+\phi^{5}-A=0,\] where \(A\) is a constant of integration. In order to simplify the notation, we consider \(r=1-c^{2}\) and \(s=5-4c^{2}\). Assuming \(s\neq 0\), one has \[-\phi^{\prime\prime}+\frac{r}{s}\phi-\frac{1}{s}(\phi^{3}+\phi^{5})+\frac{A}{s }=0. \tag{1.10}\] Equation (1.10) can be seen as a two-parameter function depending on the pair \((c,A)\). In order to obtain explicit periodic waves \((\phi,\psi)\) that solve (1.9), we are going to assume \(c\in(0,1)\) and \(A=0\). Thus, we have the final equation \[-\phi^{\prime\prime}+\frac{r}{s}\phi-\frac{1}{s}(\phi^{3}+\phi^{5})=0. \tag{1.11}\] The periodic traveling solution \(\overrightarrow{\phi_{c}}=(\phi,-c\phi)\) which solves equation (1.9) arises as a critical point of the augmented energy functional \(D_{c}\) defined as \(D_{c}(u,v)=E(u,v)+cF(u,v)\). It is well known that the spectral stability of \(\overrightarrow{\phi_{c}}=(\phi,-c\phi)\) can be determined by minimizing the functional \(E\) under fixed constraints \(F\), \(G\) and \(H\). Therefore, since \(\overrightarrow{\phi_{c}}\) is a critical point of \(D_{c}\), it is suitable to think that the spectral stability can be determined by proving that the second derivative of \(D_{c}\) at the point \(\overrightarrow{\phi_{c}}\), and denoted by \[\mathcal{D}:=D_{c}^{\prime\prime}(\overrightarrow{\phi_{c}})=\left(\begin{array} []{cc}-5\frac{\partial^{2}}{\partial x^{2}}+1-3\phi^{2}-5\phi^{4}&-4c\frac{ \partial^{2}}{\partial x^{2}}+c\\ \\ -4c\frac{\partial^{2}}{\partial x^{2}}+c&-4\frac{\partial^{2}}{\partial x^{2} }+1\end{array}\right) \tag{1.12}\] is strictly positive over the set \(\{(-c4\phi^{\prime\prime}+c\phi,4\phi^{\prime\prime}-\phi),(1,0),(0,1)\}^{\perp}\). Without further ado, we shall present the basic setting to prove our spectral stability result. First, let us consider the existence of a time \(T>0\) such that the pair of solutions \((u,v)\) of (1.3) is an element of \(C([0,T),\mathbb{H}^{1}_{\mathrm{per}})\) (see Lemma 5.1). Next, let us consider the change of variables: \[p(x,t)=u(x+ct,t)-\phi(x)\qquad\quad\text{and}\qquad q(x,t)=v(x+ct,t)+c\phi(x), \tag{1.13}\] We obtain by substituting (1.13) into (1.3) and after neglecting all high order non-linear terms in \(p\) and \(q\), the following linear evolution system \[\left\{\begin{array}{l}p_{t}=\partial_{x}(q+cp),\\ q_{t}=(1-4\partial_{x}^{2})^{-1}\partial_{x}((1-5\partial_{x}^{2})p-(3\phi^{2}+ 5\phi^{4})p+c(1-4\partial_{x}^{2})q).\end{array}\right. \tag{1.14}\] Since \(\partial_{x}(q+cp)=(1-4\partial_{x}^{2})^{-1}\partial_{x}((1-4\partial_{x}^{2} )q+c(1-4\partial_{x}^{2})p)\), we obtain by (1.14), the following linear system \[W_{t}=J\mathcal{D}W, \tag{1.15}\] where \(W=\left(\begin{array}{c}p\\ q\end{array}\right)=(p,q)\), \(J=(I-4\partial_{x}^{2})^{-1}\left(\begin{array}{cc}0&\partial_{x}\\ \partial_{x}&0\end{array}\right)\) and \(\mathcal{D}\) is defined as in (1.12). To obtain the standard spectral problem, we need to consider the separation of variables \(W(x,t)=e^{\mu t}V(x)\) to generate a growing mode solution that determines if the deviations in (1.13) are stable or not in the following sense: if \(\mu\in\mathbb{C}\) is purely imaginary, then the norm of \(W\) in \(\mathbb{H}^{1}_{per}\) is bounded and the deviations in (1.13) are said to be stable. Otherwise, if the real part of \(\mu\) is positive, the norm of \(W\) in \(\mathbb{H}^{1}_{per}\) is large enough when \(t\) increases and the deviations in (1.13) can be considered unstable. The case when \(\mu\) has negative real part is not considered in our model since equation (1.1) does not have dissipative terms and \((1.4)-(1.7)\) are conserved quantities. From (1.15), the corresponding spectral problem associated to the equation (1.15) then becomes \[J\mathcal{D}V=\mu V, \tag{1.16}\] In the periodic context, the spectral problem (1.16) can be seen in an equivalent form as \[J\mathcal{D}_{\Pi}V=\mu V, \tag{1.17}\] where \[\mathcal{D}_{\Pi}=\left(\begin{array}{cc}-5\frac{\partial^{2}}{\partial x^ {2}}+1-3\phi^{2}-5\phi^{4}+\frac{1}{L}(3\phi^{2}+5\phi^{4},\cdot)_{L^{2}_{per} }&-4c\frac{\partial^{2}}{\partial x^{2}}+c\\ \\ -4c\frac{\partial^{2}}{\partial x^{2}}+c&-4\frac{\partial^{2}}{\partial x^{2} }+1\end{array}\right), \tag{1.18}\] is the projection operator defined over the space \[\mathbb{M}=\left\{(f,g)\in\mathbb{L}^{2}_{per};\ \int_{0}^{L}fdx=0\ \text{and}\ \int_{0}^{L}gdx=0\right\}.\] We have the following definition of spectral stability associated to the problem (1.17). **Definition 1.1**.: _The periodic wave \(\overrightarrow{\phi_{c}}=(\phi,-c\phi)\in\mathbb{H}^{2}_{per}\) is said to be spectrally stable if \(\sigma(J\mathcal{D})\subset i\mathbb{R}\) in \(\mathbb{M}.\) Otherwise, that is, if \(\sigma(J\mathcal{D})\) in \(\mathbb{M}\) contains a point \(\mu\) with \(Re(\mu)>0\), the periodic wave \(\overrightarrow{\phi_{c}}\) is said to be spectrally unstable._ It is important to mention that if one considers the spectral problem (1.17) in the space \(\mathbb{M},\) the operator \(J\) becomes invertible with a bounded inverse, and the problem (1.17) can be rewritten as \[\mathcal{D}_{\Pi}V=\mu J^{-1}V, \tag{1.19}\] where \(V\) is a smooth periodic vectorial function in \(\mathbb{M}\). If the kernel of \(\mathcal{D}\) is simple, the classical theories in [1], [3], [5], [6], [7], [9], [14] can be applied to obtain the spectral stability of the wave \(\overrightarrow{\phi_{c}}\) according to the Definition 1.1. In fact, consider the matrix \(\mathcal{P}\) defined as \[\mathcal{P}=\left(\begin{array}{ll}\langle\mathcal{D}^{-1}(-c\chi,\chi),(-c \chi,\chi)\rangle_{\mathbb{L}^{2}_{per}}&\langle\mathcal{D}^{-1}(1,0),(-c\chi,\chi)\rangle_{\mathbb{L}^{2}_{per}}&\langle\mathcal{D}^{-1}(0,1),(-c\chi, \chi)\rangle_{\mathbb{L}^{2}_{per}}\\ \langle\mathcal{D}^{-1}(-c\chi,\chi),(1,0)\rangle_{\mathbb{L}^{2}_{per}}& \langle\mathcal{D}^{-1}(1,0),(1,0)\rangle_{\mathbb{L}^{2}_{per}}&\langle \mathcal{D}^{-1}(0,1),(1,0)\rangle_{\mathbb{L}^{2}_{per}}\\ \langle\mathcal{D}^{-1}(-c\chi,\chi),(0,1)\rangle_{\mathbb{L}^{2}_{per}}& \langle\mathcal{D}^{-1}(1,0),(0,1)\rangle_{\mathbb{L}^{2}_{per}}&\langle \mathcal{D}^{-1}(0,1),(0,1)\rangle_{\mathbb{L}^{2}_{per}}\end{array}\right),\] where \(\chi=4\phi^{\prime\prime}-\phi\). We have to notice that since \(\mathcal{D}\) is self-adjoint, \(\ker(\mathcal{D})=[(\phi^{\prime},-c\phi^{\prime})]\) and \(\{(-c\chi,\chi),(1,0),(0,1)\}\subset\ker(\mathcal{D})^{\perp}=\text{range}( \mathcal{D}),\) we obtain that the inverse \(\mathcal{D}^{-1}\) operator of \(\mathcal{D}\) present in the entries of the matrix \(\mathcal{P}\) above is well defined because \(\mathcal{D}:\ker(\mathcal{D})^{\perp}\rightarrow\ker(\mathcal{D})^{\perp}\). Therefore, the spectral stability of the periodic wave \(\overrightarrow{\phi_{c}}\) is determined if the difference \[n(\mathcal{D}_{\Pi})-n(\mathcal{P}) \tag{1.20}\] is zero. Here, \(n(\mathcal{A})\) indicates the number of negative eigenvalues (counting multiplicities) of a certain linear operator \(\mathcal{A}\). On the other hand, if the difference in (1.20) is an odd number, the periodic wave is spectrally unstable. The difference in (1.20) is commonly called in the current literature as _Hamiltonian Krein Index_ and it appears in several spectral stability approaches as in [1], [3], [7], [9] and [14]. Particularly in [1], [7] and [14] the authors proved spectral stability results for a general second order PDE using the quadratic pencils technique in order to obtain a precise counting for the Hamiltonian Krein Index \(\mathcal{K}_{\text{Ham}}\). They obtained that if \(\mathcal{K}_{\text{Ham}}=0\), the periodic wave is spectrally stable while if \(\mathcal{K}_{\text{Ham}}\) is an odd number, the periodic wave is spectrally unstable and the senses of spectral stability/instability for both models are very close to the one presented in Definition 1.1. Among the applications presented by the authors using this technique are the Klein-Gordon and Boussinesq type equations. As far as we can see and in the periodic context, we can apply, since \(J=(I-4\partial_{x}^{2})^{-1}\left(\begin{array}{cc}0&\partial_{x}\\ \partial_{x}&0\end{array}\right)\) has derivatives in its entries, the classical approaches of stability and studying the equivalent spectral problem (1.17) to obtain the spectral stability without using the quadratic pencils technique as determined, for example, in [7] for the Boussinesq type equations. type equations. We have to notice that the equivalent problem (1.17) can be rewritten, since \(J\) in the zero-mean periodic space has bounded inverse, in the form (1.19) and the spectral stability can be determined by studying the difference (1.20) as determined in [6] and [9]. In the case of waves that vanish at infinity, the authors in [10] have determined the orbital stability for the general equation (1.1). More specifically, they considered the nonlinearity \(f(u)=au^{p+1}+bu^{2p+1}\) in some cases where it is possible to obtain a hyperbolic secant profile. When \(f(u)=u^{3}+u^{5}\), \(h_{1}=5\) and \(h_{2}=4\) in equation (1.1), the hyperbolic secant profile becomes the solution given by, \[\phi(x)=2(1-c^{2})^{\frac{1}{2}}\left(1+\sqrt{1-\frac{16}{3}(1-c^{2})}\cosh \left(2\sqrt{\frac{1-c^{2}}{5-4c^{2}}}x\right)\right)^{-\frac{1}{2}}. \tag{1.21}\] To prove the orbital stability, the authors used the approach in [5]. They proved that the linearized operator around the solitary wave \(\mathcal{D}\) has only one negative eigenvalue, which is simple, and zero is a simple eigenvalue whose associated eigenfunction is \((\phi^{\prime},-c\phi^{\prime})\). Both of these facts can then be used to conclude the orbital stability, provided that \(d^{\prime\prime}(c)>0\), where \(d^{\prime\prime}(c)\) is the second derivative with respect to \(c\) of the function \(d(c)=F(\phi,-c\phi)\). To obtain the positiveness of \(d^{\prime\prime}(c)\), they used a numerical approach. Unfortunately, they obtained some cases where \(d^{\prime\prime}(c)<0\) using numerics, but they were not enabled to conclude an instability result since \(J=(I-4\partial_{x}^{2})^{-1}\left(\begin{array}{cc}0&\partial_{x}\\ \partial_{x}&0\end{array}\right)\) does not have a bounded inverse (the instability theory in [5] can not be applied). Our intention is to provide a positive answer for the (spectral) stability/instability in the periodic context. Now, we present our spectral stability result: **Theorem 1.2**.: _Let \(L>2\pi\sqrt{\frac{\sqrt{5}}{\sqrt{5}-1}}\) be fixed and consider \(c\in\left(0,\frac{1}{2}\sqrt{5-\frac{L^{4}}{(L^{2}-4\pi^{2})^{2}}}\right)\). There exists an \(c(L)>0\) such that: (i) if \(c\in(0,c(L)]\), the periodic wave \(\overrightarrow{\phi_{c}}=(\phi,-c\phi)\) is spectrally unstable. (ii) If \(c\in\left(c(L),\frac{1}{2}\sqrt{5-\frac{L^{4}}{(L^{2}-4\pi^{2})^{2}}}\right)\), the periodic wave \(\overrightarrow{\phi_{c}}=(\phi,-c\phi)\) is spectrally stable._ Our paper is organized as follows: in Section 2, we show the existence of a family of periodic wave solutions for the equations (5.10). The characterization of periodic waves is determined in Section 3. Spectral analysis for the linearized operator \(\mathcal{D}\) is established in Section 4. Finally, the spectral stability of the periodic wave will be shown in Section 5. **Notation.** Here we introduce the basic notation concerning the periodic Sobolev spaces and other useful notations used in our paper. For a more complete introduction to these spaces, we refer the reader to see [8]. By \(L^{2}_{per}:=L^{2}_{per}([0,L])\), \(L>0\), we denote the space of all square integrable functions which are \(L\)-periodic. For \(s\geq 0\), the Sobolev space \(H^{s}_{per}:=H^{s}_{per}([0,L])\) is the set of all periodic distributions such that \[\|f\|^{2}_{H^{s}_{per}}:=L\sum_{k=-\infty}^{\infty}(1+|k|^{2})^{s}|\hat{f}(k)| ^{2}<\infty,\] where \(\hat{f}\) is the periodic Fourier transform of \(f\). The space \(H^{s}_{per}\) is a Hilbert space with natural inner product denoted by \((\cdot,\cdot)_{H^{s}_{per}}\). When \(s=0\), the space \(H^{s}_{per}\) is isometrically isomorphic to the space \(L^{2}_{per}\), that is, \(L^{2}_{per}=H^{0}_{per}\) (see, e.g., [8]). The norm and inner product in \(L^{2}_{per}\) will be denoted by \(\|\cdot\|_{L^{2}_{per}}\) and \((\cdot,\cdot)_{L^{2}_{per}}\). In our paper, we do not distinguish if the elements in \(H^{s}_{per}\) are complex- or real-valued. In addition, to simplify notation we set \(\mathbb{H}^{s}_{per}:=H^{s}_{per}\times H^{s}_{per}\) endowed with their usual norms and scalar products. When necessary and since \(\mathbb{C}\) can be identified with \(\mathbb{R}^{2}\), notations above can also be used in the complex/vectorial case in the following sense: for \(\overrightarrow{f}\in\mathbb{H}^{s}_{per}\) we have \(f=f_{1}+if_{2}\equiv(f_{1},f_{2})\), where \(f_{i}\in H^{s}_{per}\), \(i=1,2\). Throughout this paper, we fix the following embedding chains with the Hilbert space \(L^{2}_{per}\) identified with its dual (by the Riesz Theorem) as \[H^{1}_{per}\hookrightarrow L^{2}_{per}\equiv(L^{2}_{per})^{\prime} \hookrightarrow H^{-1}_{per}.\] The symbols \(\,\mathrm{sn}(\cdot,k),\,\mathrm{dn}(\cdot,k)\) and \(\,\mathrm{cn}(\cdot,k)\) represent the Jacobi elliptic functions of _snoidal_, _dnoidal_, and _cnoidal_ type, respectively. For \(k\in(0,1)\), \(K(k)\) and \(E(k)\) will denote the complete elliptic integrals of the first and second kind, respectively. For the precise definition and additional properties of the elliptic functions we refer the reader to [2]. ## 2. Explicit Solutions via Quadrature Method. Our intention is to apply the quadrature method and obtaining explicit positive periodic solutions for the system (1.9). Indeed, multiplying (1.11) by \(-2\phi^{\prime}\), we see \[(\phi^{\prime 2})^{\prime}-\frac{r}{s}(\phi^{2})^{\prime}+\frac{1}{2s}(\phi^{4} )^{\prime}+\frac{1}{3s}(\phi^{6})^{\prime}=0.\] After an integration, it follows that \[(\phi^{\prime})^{2}-\frac{r}{s}\phi^{2}+\frac{1}{2s}\phi^{4}+\frac{1}{3s} \phi^{6}-B=0,\] where \(B\) is a constant of integration. Thus, \[(\phi^{\prime})^{2}=\frac{r}{s}\phi^{2}-\frac{1}{2s}\phi^{4}-\frac{1}{3s}\phi ^{6}+B. \tag{2.1}\] Let us consider \(\Psi=\phi^{2}\). By (2.1), we obtain \[(\Psi^{\prime})^{2}=4\phi^{2}(\phi^{\prime})^{2}=4\Psi\left(\frac{r}{s}\Psi- \frac{1}{2s}\Psi^{2}-\frac{1}{3s}\Psi^{3}+B\right),\] that is, \[(\Psi^{\prime})^{2}=\frac{4}{3s}\left(-\Psi^{4}-\frac{3}{2}\Psi^{3}+3r\Psi^{2 }+3sB\Psi\right).\] Denoting \(R(\Psi)=-\Psi^{4}-\frac{3}{2}\Psi^{3}+3r\Psi^{2}+3sB\Psi\), we can write \[(\Psi^{\prime})^{2}=\frac{4}{3s}R(\Psi). \tag{2.2}\] Equation (2.2) is useful to deduce explicit periodic solutions depending on the Jacobi elliptic functions. To do so, we need to perform a precise study concerning the real roots of the polynomial \(R(\Psi)\). In what follows, we consider four roots of \(R(\Psi)\) satisfying \(\alpha_{1}<\alpha_{2}=0<\alpha_{3}<\Psi<\alpha_{4}\). Important to notice that all those real roots must satisfy the system \[\left\{\begin{array}{l}\alpha_{1}+\alpha_{3}+\alpha_{4}=-\frac{3}{2}\\ \alpha_{1}\alpha_{3}+\alpha_{1}\alpha_{4}+\alpha_{3}\alpha_{4}=-3r\\ \alpha_{1}\alpha_{3}\alpha_{4}=3sB.\end{array}\right. \tag{2.3}\] Having in mind the chain of inequalities \(\alpha_{1}<0<\alpha_{3}<\Psi<\alpha_{4}\) and the system (2.3), we can apply Formula 257.00 in [2] to get \[\Psi(x)=\frac{\alpha_{4}dn^{2}\left(\frac{2}{g\sqrt{3}s}x,k\right)}{1+\beta^{2} sn^{2}\left(\frac{2}{g\sqrt{3}s}x,k\right)},\] where \[\beta^{2}=\frac{\alpha_{4}}{-\alpha_{1}}k^{2}>0,\quad g=\frac{2}{\sqrt{\alpha_ {4}(\alpha_{3}-\alpha_{2})}},\quad k^{2}=\frac{-\alpha_{1}(\alpha_{4}-\alpha_ {3})}{\alpha_{4}(\alpha_{3}-\alpha_{1})}, \tag{2.4}\] where \(sn\) and \(dn\) are the Jacobi Elliptic functions of senoidal and dnoidal type, respectively. Parameter \(k\in(0,1)\) is called elliptic modulus. Thus, \(\phi\) is expressed as \[\phi(x)=\frac{\sqrt{\alpha_{4}}dn\left(\frac{2}{g\sqrt{3}s}x,k\right)}{\sqrt{1 +\beta^{2}sn^{2}\left(\frac{2}{g\sqrt{3}s}x,k\right)}}. \tag{2.5}\] Since the dnoidal function has fundamental period equals to \(2K(k)\), one has that the period \(T:=T_{\phi}\) of \(\phi\) is given by \[T=\frac{2\sqrt{3}sK(k)}{\sqrt{\alpha_{4}(\alpha_{3}-\alpha_{1})}}, \tag{2.6}\] where \(K(k)=\int_{0}^{1}\frac{dt}{\sqrt{(1-k^{2}t^{2})(1-t^{2})}}\) is the complete elliptic integral of the first type. Let \(c\in(0,1)\) be fixed. By (2.3), we can write \(\alpha_{1}\) and \(\alpha_{2}\) in terms of \(\alpha_{4}\) as \[\alpha_{1}=\frac{-2\alpha_{4}-3-\sqrt{3}\sqrt{\beta}}{4}\;\;\text{and}\;\; \alpha_{3}=\frac{-2\alpha_{4}-3+\sqrt{3}\sqrt{\beta}}{4}, \tag{2.7}\] where \(\beta=-4\alpha_{4}^{2}-4\alpha_{4}+3+16r\). To make \(\alpha_{1}\) and \(\alpha_{3}\) well defined, it is necessary to consider \(\beta\geq 0\). This fact occurs provided that \[\frac{-1-\sqrt{4+16r}}{2}\leq\alpha_{4}\leq\frac{-1+\sqrt{4+16r}}{2}.\] Moreover, in the interval \(\left(0,\frac{-1+\sqrt{4+16r}}{2}\right)\), we have \(\alpha_{3}^{\prime}(\alpha_{4})<0\). In addition, condition \(\alpha_{1}<0<\alpha_{3}<\alpha_{4}\) implies that the maximum value for the root \(\alpha_{3}\) is \(\frac{-1+\sqrt{1+4r}}{2}\), when \(\alpha_{4}=\frac{-1+\sqrt{1+4r}}{2}\) and the minimum value is \(0\), when \(\alpha_{4}=\frac{3+\sqrt{9+48r}}{4}\). We can conclude that \(\alpha_{3}\) and \(\alpha_{4}\) must satisfy the following inequalities \[0<\alpha_{3}<\frac{-1+\sqrt{1+4r}}{2}<\alpha_{4}<\frac{-3+\sqrt{9+48r}}{4}.\] Therefore, since inequality \[\frac{-3+\sqrt{9+48r}}{4}<\frac{-2+\sqrt{16+64r}}{4}=\frac{-1+\sqrt{4+16r}}{2}\] holds, we can deduce \(\beta>0\) as desired. On the other hand, we expressed \(\alpha_{1}\) and \(\alpha_{3}\) in terms of of \(\alpha_{4}\) and all of them are well defined in the interval \(\left(\frac{-1+\sqrt{1+4r}}{2},\frac{-3+\sqrt{9+48r}}{4}\right)\). This fact implies that it is possible to express constant \(B\) in (2.3), the period \(T\) and the modulus \(k\) as functions of the root \(\alpha_{4}\) and \(c\), that is, \[B=\frac{1}{3s}\left(\alpha_{4}^{3}+\frac{3}{2}\alpha_{4}^{2}-3r\alpha_{4}\right),\quad T=\frac{2\sqrt{2\sqrt{3}s}K(k)}{\sqrt{\alpha_{4}\sqrt{\beta}}}, \tag{2.8}\] and \[k^{2}=\frac{6\alpha_{4}^{2}+9\alpha_{4}-12r+\alpha_{4}\sqrt{3}\sqrt{\beta}}{2 \sqrt{3}\alpha_{4}\sqrt{\beta}}. \tag{2.9}\] Let \(c\in(0,1)\) be fixed. In the next theorem, we prove that the period \(T\) is strictly increasing as function of \(\alpha_{4}\in\left(\frac{-1+\sqrt{1+4r}}{2},\frac{-3+\sqrt{9+48r}}{4}\right)\). As a consequence, if \(\alpha_{4}\to\frac{-1+\sqrt{1+4r}}{2}\), then \[T\to\frac{2\pi\sqrt{s}}{\sqrt{1+4r-\sqrt{1+4r}}}\quad\text{and}\quad T>\frac{ 2\pi\sqrt{s}}{(1+4r)^{\frac{1}{4}}\sqrt{\sqrt{1+4r-1}}}.\] Indeed \(\alpha_{4}\to\frac{-1+\sqrt{1+4r}}{2}\) implies \(\alpha_{3}\to\frac{-1+\sqrt{1+4r}}{2}\) and \(\alpha_{1}\to-\frac{1+2\sqrt{1+4r}}{2}\). By the expression of \(k\) in (2.4), we have \(k\to 0\), so that \(K(k)\to\frac{\pi}{2}\). Thus, by (2.6) one has \(T\to\frac{2\pi\sqrt{s}}{\sqrt{1+4r-\sqrt{1+4r}}}\). Since \(\mathrm{dn}(\cdot,0^{+})\sim 1\), we obtain that the periodic solution \(\phi\) converges to the equilibrium solution of (1.10) as \(\phi(x)=\sqrt{\frac{\sqrt{1+4(1-c^{2})-1}}{2}}.\) On the other hand, if \(\alpha_{4}\to\frac{-3+\sqrt{9+48r}}{4}=:\sigma_{4}\) then \(\alpha_{1}\to\frac{-3-\sqrt{9+48r}}{4}=:\sigma_{1}\), and \(\alpha_{3}\to 0\). By the expression of \(k\) in (2.4), one has \(k\to 1\), \(K(k)\to\infty\) and \(T\to+\infty\). Since \(\mathrm{dn}(\cdot,1^{-})\sim\mathrm{sech}(\cdot)\) and \(\mathrm{sn}(\cdot,1^{-})\sim\tanh(\cdot)\), the periodic wave \(\phi\) must be the solitary wave \[\phi(x)=\left(\frac{-\sigma_{1}\sigma_{4}\,\mathrm{sech}^{2}(\gamma x)}{- \sigma_{1}+\sigma_{4}\tanh^{2}(\gamma x)}\right)^{\frac{1}{2}}=\left(\frac{- \sigma_{1}\sigma_{4}}{-\sigma_{4}+\frac{\sigma_{4}-\sigma_{1}}{2}+\frac{\sigma _{4}-\sigma_{1}}{2}\cosh(2\gamma x)}\right)^{\frac{1}{2}},\] where \[\gamma=\frac{2}{g\sqrt{3}s}=\sqrt{\frac{-\sigma_{1}\sigma_{4}}{3s}}.\] Since, \(-\sigma_{1}\sigma_{4}=3r\), \(\gamma=\sqrt{\frac{r}{s}}\) and \(\frac{\sigma_{4}-\sigma_{1}}{2}=\frac{\sqrt{9+48r}}{4}\), \(\phi\) can be rewritten as \[\phi(x)=\left(\frac{12r}{3+\sqrt{9+48r}\cosh(2\gamma x)}\right)^{\frac{1}{2}}.\] Finally, it is possible to express \(\phi\) as \[\phi(x)=2(1-c^{2})^{\frac{1}{2}}\left(1+\sqrt{1-\frac{16}{3}(1-c^{2})}\cosh \left(2\sqrt{\frac{1-c^{2}}{5-4c^{2}}}x\right)\right)^{-\frac{1}{2}}.\] This last expression, is exactly the solitary wave obtained in [10] for the case \(h_{1}=5\), \(h_{2}=4\) and \(a=b=1\). Next theorem establishes the monotonicity of the period \(T\) in (2.8) in terms of \(\alpha_{4}\) and \(c\). **Lemma 2.1**.: _Consider \(T\) in (2.8) as the period-map of the function \(\phi\) in (2.5) which depends on the variables \((\alpha_{4},c)\in\Omega\), where \(\Omega\subset\mathbb{R}^{2}\) is the open subset_ \[\Omega=\left(\frac{-1+\sqrt{1+4r}}{2},\frac{-3+\sqrt{9+48r}}{4}\right)\times(0, 1)\,.\] _For all \((\alpha_{4},c)\in\Omega\), the period-map \(T\) is strictly increasing with respect to \(\alpha_{4}\) and \(c\)._ Proof.: Computing the derivative of \(T\) with respect to \(\alpha_{4}:=\alpha\), we obtain \[\frac{\partial T}{\partial\alpha}=\frac{2\sqrt{2\sqrt{3}s}}{\alpha\sqrt{\beta}} \left(\frac{\partial K}{\partial k}\frac{\partial k}{\partial\alpha}\sqrt{ \alpha\sqrt{\beta}}-\frac{\left(\sqrt{\beta}+\frac{\alpha}{2\sqrt{\beta}} \frac{\partial\beta}{\partial\alpha}\right)}{2\sqrt{\alpha\sqrt{\beta}}}K(k) \right). \tag{2.10}\] On the other hand, deriving the functions \(\beta=-4\alpha^{2}-4\alpha+3+16r\) and \(k\) in (2.9) with respect to \(\alpha\), we see \[\frac{\partial q}{\partial\alpha}=-8\alpha-4, \tag{2.11}\] and \(\frac{\partial k}{\partial\alpha}=\frac{6\alpha^{3}+9\alpha^{2}-18\alpha r+ 9r+48r^{2}}{\sqrt{3}k\alpha^{2}\beta^{\frac{3}{2}}}.\) In addition, since \(r=1-c^{2}>0\) and \(\alpha>0\), we can rewrite (2.12) as \[\frac{\partial k}{\partial\alpha}=\frac{12\alpha^{3}+14\alpha^{2}+(2\alpha-9r )^{2}+18r+15r^{2}}{2\sqrt{3}k\alpha^{2}\beta^{\frac{3}{2}}}. \tag{2.12}\] Notice that \(\frac{\partial k}{\partial\alpha}\) is a positive function for all \((\alpha,c)\in\Omega\). To simplify the calculations, let us denote \(\sigma=6\alpha^{3}+9\alpha^{2}-18\alpha r+9r+48r^{2}\). Substituting (2.11) and (2.12) into (2.10), it follows that \(\frac{\partial T}{\partial\alpha}>0\) if, and only if \(\frac{\partial K}{\partial k}\frac{\sigma}{\sqrt{3}k\alpha^{2}\beta^{\frac{3} {2}}}\sqrt{\alpha\sqrt{\beta}}-\frac{\left(\sqrt{\beta}+\frac{\alpha}{2\sqrt{ \beta}}(-8\alpha-4)\right)}{2\sqrt{\alpha\sqrt{\beta}}}K(k)>0.\) Multiplying the last inequality by \(\frac{\sqrt{3}\alpha^{\frac{3}{2}}\beta^{\frac{5}{4}}}{\sigma}\), we have \(\frac{\partial T_{\phi}}{\partial\alpha}>0\) provided that \(\frac{\partial K}{\partial k}\frac{1}{k}-\left(\frac{\sqrt{3}\alpha\beta^{ \frac{1}{2}}(\beta-4\alpha^{2}-2\alpha)}{\sigma}\right)\frac{K(k)}{2}>0.\) Since the expression \(\frac{\partial K}{\partial k}\frac{1}{k}-\frac{K}{2}\) is always positive for all \(k\in(0,1)\), we can prove \(\frac{\partial T}{\partial\alpha}>0\) by proving that \(\frac{\sqrt{3}\alpha\beta^{\frac{1}{2}}(\beta-4\alpha^{2}-2\alpha)}{\sigma}<1\). This fact can be easily checked using some massive calculations or it can be verified using Mathematica program. We prove now \(T\) is increasing with respect to \(c\). In fact, deriving \(T\) in terms of \(c\), we obtain \[\frac{\partial T}{\partial c}=\frac{2\sqrt{2}3^{\frac{1}{4}}\alpha^{\frac{1}{ 2}}}{\alpha\sqrt{\beta}\sqrt{5-4c^{2}}\beta^{\frac{3}{2}}}\left[\left(-4cK(k)+ (5-4c^{2})\frac{\partial K}{\partial k}\frac{\partial k}{\partial c}\right) \beta-\frac{5-4c^{2}}{4}\frac{\partial\beta}{\partial c}K(k)\right]. \tag{2.13}\] Since \(\frac{\partial\beta}{\partial c}=-32c\) and \(\frac{\partial k}{\partial c}=\frac{2\sqrt{3}c}{k\alpha\beta^{\frac{3}{2}}} \left(2\alpha+3+8(1-c^{2})\right),\) it follows that \[\frac{\partial T}{\partial c}=C\left[\left(-4c\beta+\frac{(5-4c^{2})(32c)}{4} \right)K(k)+\frac{2\sqrt{3}c(5-4c^{2})\beta\left(2\alpha+3+8(1-c^{2})\right)}{ k\alpha_{2}\beta^{\frac{3}{2}}}\frac{\partial K}{\partial k}\right],\] where \(C=\frac{2\sqrt{2}3^{\frac{1}{4}}\alpha^{\frac{1}{2}}}{\alpha_{2}\sqrt{\beta} \sqrt{5-4c^{2}}\beta^{\frac{3}{4}}}>0.\) Let us denote \(C_{1}=2cC\). We obtain, \[\frac{\partial T}{\partial c}=C_{1}\left[\left(-2\beta+4(5-4c^{2})\right)K(k)+ \frac{\sqrt{3}(5-4c^{2})\left(2\alpha+3+8(1-c^{2})\right)}{k\alpha_{2}\beta^{ \frac{1}{2}}}\frac{\partial K}{\partial k}\right]. \tag{2.14}\] Consider \(\rho=\frac{\sqrt{3}(5-4c^{2})\left(2\alpha+3+8(1-c^{2})\right)}{\alpha_{2} \beta^{\frac{1}{2}}}.\) We have by (2.14) that \(\frac{\partial T}{\partial c}\) can be expressed as \(\frac{\partial T}{\partial c}=C_{1}\rho\left[\frac{1}{k}\frac{\partial K}{ \partial k}-\frac{2(2\beta-4(5-4c^{2}))}{\rho}\frac{K(k)}{2}\right].\) Since \(\rho>0\) and \(\frac{1}{k}\frac{\partial K}{\partial k}-\frac{K(k)}{2}>0\), it is enough to prove \(\omega:=\frac{2(2\beta-4(5-4c^{2}))}{\rho}<1.\) In fact, using that \(\beta<3+12(1-c^{2})\) and \(\alpha<\frac{-3+\sqrt{9+48(1-c^{2})}}{4},\) it follows from the fact \(\alpha>\frac{-1+\sqrt{1+4(1-c^{2})}}{2}\) that \(\omega<\frac{(1+4(1-c^{2}))^{\frac{1}{2}}(-3+\sqrt{9+48(1-c^{2})})}{((1+4(1-c^{2} ))^{\frac{1}{2}}+2+8(1-c^{2}))}\). After some simplifications, we are enabled to conclude from the last inequality \[\omega<\frac{-3+\sqrt{12}\sqrt{1+4(1-c^{2})}}{1+2\sqrt{1+4(1-c^{2})}}. \tag{2.15}\] Finally, denoting \(e=\sqrt{1+4(1-c^{2})}\), we have \(0<e<\sqrt{5}\) with \(g(e):=\frac{(-3+\sqrt{12}e)}{(1+2e)}\) being an increasing function with maximum value given by \[g(\sqrt{5})=\frac{-3+2\sqrt{15}}{1+2\sqrt{5}}. \tag{2.16}\] Combining (2.15) and (2.16), we obtain \[\omega<g(\sqrt{5})<\frac{-3+2\sqrt{16}}{1+2\sqrt{4}}<1.\] This conclude the proof of the lemma. **Theorem 2.2**.: _Let \(L>2\pi\sqrt{\frac{\sqrt{5}}{\sqrt{5}-1}}\) be fixed. For each \(c_{0}\in\left(0,\frac{1}{2}\sqrt{5-\frac{L^{4}}{(L^{2}-4\pi^{2})^{2}}}\right)\) consider the unique \(\alpha_{4,0}\in\left(\frac{-1+\sqrt{1+4r}}{2},\frac{-3+\sqrt{9+48r}}{4}\right)\) such that \(T(\alpha_{4,0},c_{0})=L\). Then,_ 1. _there exist an interval_ \(I_{1}\) _around_ \(c_{0}\)_, an interval_ \(I_{2}\) _around_ \(\alpha_{4,0}\) _and a unique smooth function_ \(\Lambda:I_{1}\to I_{2}\) _such that_ \(\Lambda(c_{0})=\alpha_{4,0}\) _and_ \[T(\Lambda(c),c)=\frac{2\sqrt{2\sqrt{3}(5-4c^{2})}K(k(\Lambda(c),c))}{\sqrt{ \Lambda(c)\sqrt{\beta(\Lambda(c),c)}}}=L,\] _where_ \(c\in I_{1}\)_,_ \(\alpha_{4}(c)=\Lambda(c)\in I_{2}\) _and_ \(k^{2}=k^{2}(c)\in(0,1)\) _is given by (_2.9_),_ 2. _the monoidal wave solution in (_2.5_),_ \(\phi_{c}:=\phi(\cdot,\Lambda(c))\) _determined by_ \(\Lambda(c)\)_, has fundamental period_ \(L\) _and satisfies (_1.11_). Moreover, the mapping_ \(c\in I_{1}\mapsto\phi_{c}\in H^{n}_{per}([0,L])\)_, is a smooth function for all_ \(n\in\mathbb{N}\)_,_ 3. _The interval_ \(I_{1}\) _can be chosen as_ \(\left[0,\frac{1}{2}\sqrt{5-\frac{L^{4}}{(L^{2}-4\pi^{2})^{2}}}\right).\)__ Proof.: The proof consists in applying the implicit function theorem. In fact, let us consider the open set \(\Omega\) as in Lemma 2.1. By Lemma 2.1, we have that \(\frac{\partial T}{\partial\alpha}>0\) for all \(\alpha\in\left(\frac{-1+\sqrt{1+4r}}{2},\frac{-3+\sqrt{9+48r}}{4}\right)\). A simple application of the implicit function theorem enables us to deduce the existence of an interval \(I_{1}\) around \(c_{0}\), an interval \(I_{2}\) around \(\alpha_{4,0}\) and a unique smooth function \(\Lambda:I_{1}\to I_{2}\) such that \(\Lambda(c_{0})=\alpha_{4,0}\) and \(T(\Lambda(c),c)=L\) for all \(c\in I_{1}\). We have then proved the first two items of the theorem. Since \(c\) was chosen arbitrarily in the interval \(\left(0,\frac{1}{2}\sqrt{5-\frac{L^{4}}{(L^{2}-4\pi^{2})^{2}}}\right)\), the uniqueness of the function \(\Lambda\) in terms of \(c\) implies that the \(I_{1}\) can be extended to the whole interval \(\left(0,\frac{1}{2}\sqrt{5-\frac{L^{4}}{(L^{2}-4\pi^{2})^{2}}}\right)\). **Corollary 2.3**.: _Let \(\Lambda:I_{1}\to I_{2}\) be given by Theorem 2.2. Thus, \(\Lambda\) is a strictly decreasing function in terms of \(c\)._ Proof.: By Lemma 2.1, Theorem 2.2 and the implicit function theorem, we can differentiate the equality \(T(\Lambda(c),c)=L\) in terms of \(c\) and to obtain \[\frac{\partial\Lambda}{\partial c}=-\frac{\frac{\partial T_{\phi}}{\partial c} }{\frac{\partial T_{\phi}}{\partial\alpha}}<0.\] ## 3. Characterization of all Positive and Periodic Waves. Let \(c_{0}\in(0,1)\) be fixed. Consider \(T\) the period-map defined in Lemma 2.1 but depending only on \(\alpha\in I_{3}=\left(\frac{-1+\sqrt{1+4r}}{2},\frac{-3+\sqrt{9+48r}}{4}\right)\). According Lemma 2.1, we see that the period-map \[T:I_{3}\rightarrow\left(\frac{2\pi\sqrt{s_{0}}}{(1+4r_{0})^{\frac{1}{4}}\sqrt{ \sqrt{1+4r_{0}-1}}},+\infty\right),\] is smooth and a strictly increasing function. As before, we have \(r_{0}=1-c_{0}^{2}\) and \(s_{0}=5-4c_{0}^{2}\). By (2.8), we see that the value of \(B\) is given in terms of \(\alpha\in I_{3}\) by \[B=\frac{1}{5-4c_{0}^{2}}\left(\frac{\alpha^{3}}{3}+\frac{\alpha^{2}}{2}-(1-c_ {0}^{2})\alpha\right).\] Next, we show that \(\frac{\partial\alpha}{\partial B}>0\). In fact, the derivative of \(B\) with respect to \(\alpha\) is \(\frac{\partial B}{\partial\alpha}=\frac{1}{5-4c_{0}^{2}}\left(\alpha^{2}+ \alpha-(1-c_{0}^{2})\right).\) Then, the zeros of \(\frac{\partial B}{\partial\alpha}\) are \(\frac{-1-\sqrt{1+4(1-c_{0}^{2})}}{2}\) and \(\frac{-1+\sqrt{1+4(1-c_{0}^{2})}}{2}\). Since \(s_{0}=5-4c_{0}^{2}>0\) and \(\alpha\in\left(\frac{-1+\sqrt{1+4r_{0}}}{2},\frac{-3+\sqrt{9+48r_{0}}}{4}\right)\), it follows that \(B\) is strictly increasing in terms of \(\alpha\) whose minimum and maximum values are given respectively by \(B\left(\frac{-1+\sqrt{1+4r_{0}}}{2}\right)=\frac{1-(5-4c_{0}^{2})^{\frac{3}{2 }}+6(1-c_{0}^{2})}{12(5-4c_{0}^{2})}=:B_{c_{0}}\) and \(B\left(\frac{-3+\sqrt{9+48r}}{4}\right)=0.\) Thus, by the inverse function theorem we obtain \[\alpha:(B_{c_{0}},0)\rightarrow\left(\frac{-1+\sqrt{1+4r_{0}}}{2},\frac{-3+ \sqrt{9+48r_{0}}}{4}\right),\] is a smooth and \(\frac{\partial\alpha}{\partial B}>0\). Therefore, one has \[\frac{\partial T}{\partial B}=\frac{\partial T}{\partial\alpha}\frac{\partial \alpha}{\partial B}>0,\] that is, we have proved the following result. **Proposition 3.1**.: _Let \(c_{0}\in(0,1)\) be fixed. The period map_ \[T:(B_{c_{0}},0)\rightarrow\left(\frac{2\pi\sqrt{s_{0}}}{(1+4r_{0})^{\frac{1}{4 }}\sqrt{\sqrt{1+4r_{0}}-1}},+\infty\right)\] _is a smooth function in terms of \(B\in(B_{c_{0}},0)\) and \(\frac{\partial T}{\partial B}>0\)._ Next, as far as we know, we can use the standard ODE theory to conclude that for \(\alpha\in I_{3}\), all positive even periodic waves satisfy the initial value problem (IVP), \[\left\{\begin{array}{l}-\phi^{\prime\prime}(x)+\frac{r_{0}}{s_{0}}\phi(x)- \frac{1}{s_{0}}(\phi^{3}(x)+\phi^{5}(x))=0,\\ \phi(0)=\sqrt{\alpha},\\ \phi^{\prime}(0)=0.\end{array}\right. \tag{3.1}\] Proposition 3.1 is then used to conclude the following result. **Proposition 3.2**.: _Let \(c_{0}\in(0,1)\) be fixed. All positive and even periodic solution associated to the equation (1.11) has the dnoidal profile given by (2.5)._ Proof.: Let \(L>2\pi\sqrt{\frac{\sqrt{5}}{\sqrt{5}-1}}\) be fixed. By the monotonicity of the period map \(T:(B_{c_{0}},0)\to\left(\frac{2\pi\sqrt{s_{0}}}{(1+4r_{0})^{\frac{4}{4}}\sqrt {\sqrt{1+4r_{0}}-1}},+\infty\right)\) determined in Proposition 3.1, there exists a unique \(B_{0}\in(B_{c_{0}},0)\) such that \(T(B_{0})=L\). Moreover, the monotonicity of \(\alpha(B)\) in terms of \(B\) so implies the existence of a unique \(\alpha_{0}\in I_{3}\) such that \(B_{0}=\alpha(\alpha_{0})\). The proof then follows by Theorem 2.2. ## 4. Spectral Properties. As mentioned in the introduction, the stability comes from the positivity of \(\mathcal{D}\), except in one null and in one negative direction. In this context the spectral analysis of \(\mathcal{D}\) turns crucial for obtaining stability. On the other hand, considering \[A=\left(\begin{array}{cc}I&0\\ -cI&I\end{array}\right),\] it follows that the spectral analysis of \(\mathcal{D}\) can be reduced to the analysis of \(\mathcal{L}:=A^{t}\mathcal{D}A\), that is \[\mathcal{L}=\left(\begin{array}{cc}\mathcal{L}_{1}&0\\ 0&\mathcal{L}_{2}\end{array}\right)=\left(\begin{array}{cc}-s\partial_{x}^{2 }+r-3\phi^{2}-5\phi^{4}&0\\ 0&I-4\partial_{x}^{2}\end{array}\right). \tag{4.1}\] In order of obtaining a characterization of the nonpositive spectrum of \(\mathcal{L}\), given by (4.1), we recall some basic facts about Floquet's theory (for further details, see [4] and [11]). Let \(Q\) be a smooth even \(T\)-periodic function. Denote by \(P\) the Hill operator defined in \(L^{2}_{per}([0,T])\), with domain \(D(P)=H^{2}_{per}([0,T])\) as \[P=-\partial_{x}^{2}+Q(x).\] As far as we know, the spectrum of \(P\) is formed by an unbounded sequence of real eigenvalues arranged as follows \[\lambda_{0}<\lambda_{1}\leq\lambda_{2}<\lambda_{3}\leq\lambda_{4}<\cdots\;< \lambda_{2n-1}\leq\lambda_{2n}\;\cdots, \tag{4.2}\] where the equality means that \(\lambda_{2n-1}=\lambda_{2n}\) is a double eigenvalue. According with the classical Oscillation Theorem, we see that the spectrum of \(P\) can be characterized by the number of zeros of the corresponding eigenfunctions. In fact, if \(\varphi\) is an eigenfunction associated to either \(\lambda_{2n-1}\) or \(\lambda_{2n}\), then \(\varphi\) has exactly \(2n\) zeros in the half-open interval \([0,T)\). In particular, the even eigenfunction associated to the first eigenvalue \(\lambda_{0}\) has no zeros in \([0,T]\). Let \(\varphi\) be a non-trivial \(T\)-periodic solution of the Hill equation \[-f^{\prime\prime}+Q(x)f=0. \tag{4.3}\] If \(y\) is a solution of (4.3) which is linearly independent with \(\varphi\), there exists a constant \(\theta\) (depending on \(y\) and \(\varphi\)) such that \[y(x+T)=y(x)+\theta\varphi(x). \tag{4.4}\] Thus, \(\theta=0\) implies that \(y\) is a periodic solution for (4.3). Next result gives that it is possible to decide the exact position of the zero eigenvalue by knowing the precise sign of \(\theta\) in (4.4). **Lemma 4.1**.: _Let \(\theta\) be the constant given by (4.4) and suppose that \(\varphi\) is an \(T-\)periodic solution for the equation (4.3) containing only two zeros over \([0,T).\) The eigenvalue \(\lambda=0\) is simple if and only if \(\theta\neq 0\). Moreover, if \(\theta\neq 0\), then \(\lambda_{1}=0\) if \(\theta<0\), and \(\lambda_{2}=0\) if \(\theta>0\)._ Proof.: See [12] and [13]. Before giving the behaviour of the non-positive spectrum of \(\mathcal{L}_{1}\), we need some preliminary tools. First of all for the solution \(\phi\) obtained by Theorem 2.2, we have that \(\phi^{\prime}\) solves the Hill equation \(\mathcal{L}_{1}\phi^{\prime}=0\) just by deriving (1.10) with respect to \(x\). In addition, the dnoidal solution in (2.5) is positive and \(\phi^{\prime}\) is an odd function having two zeros in the interval \([0,L)\). By the classical Floquet theory, we see that \(\lambda_{1}=0\) or \(\lambda_{2}=0\). We show that \(\lambda_{1}=0\) and it results to be simple. To this end, we first present the basica lemma **Lemma 4.2**.: _We have, \(\frac{dT}{dB}=-\frac{\theta}{2}\), where \(\theta\) is the constant in (4.4)._ Proof.: In a general setting, consider \(\{\phi^{\prime},y\}\) a fundamental set of solutions for the Hill equation \(\mathcal{L}y=0\), where \(y\in C^{\infty}([0,T])\). Thus, we have that \(\bar{y}\) and \(\phi^{\prime}\) satisfies the equation (4.3). In addition, the Wronskian \(W\) associated to fundamental set satisfies \(W(\phi^{\prime}(x),\bar{y}(x))=1\) for all \(x\in[0,T]\). Since \(\phi^{\prime}\) is odd and periodic, we obtain that \(y\) is even and it satisfies the following IVP \[\left\{\begin{array}{l}-(5-4c^{2})y^{\prime\prime}+(1-c^{2})y-3\phi^{2}y-5 \phi^{4}y=0,\\ y(0)=-\frac{1}{\phi^{\prime\prime}(0)},\\ y^{\prime}(0)=0.\end{array}\right. \tag{4.5}\] The smoothness of \(\phi^{\prime}\) in terms of the parameter \(B\) enables us to take the derivative of \(\phi^{\prime}(T)=0\) with respect to \(B\) to obtain \[\phi^{\prime\prime}(T)\frac{dT}{dB}+\frac{\partial\phi^{\prime}}{\partial B}( T)=0. \tag{4.6}\] Deriving equation (2.1) with respect to \(B\) and taking \(x=0\) in the final result, we obtain by (1.10) at the point \(x=0\) that \(\frac{\partial\phi}{\partial B}(0)=-\frac{1}{2\phi^{\prime\prime}(0)}\). In addition, since \(\phi^{\prime}\) is odd one has that \(\frac{\partial\phi^{\prime}}{\partial B}\) is also odd and thus, \(\frac{\partial\phi^{\prime}}{\partial B}(0)=0\). The existence and uniqueness theorem for classical ODE applied to the problem (4.5) enables us to deduce that \(y=2\frac{\partial\phi}{\partial B}\). Therefore, we can combine (4.4) with (4.6) to obtain that \(\frac{dT}{dB}=-\frac{\theta}{2}\). By Theorem 2.2, Lemma 4.1, and Lemma 4.2, we obtain the following result about the non-positive spectrum of \(\mathcal{L}\). **Lemma 4.3**.: _Let \(L>2\pi\sqrt{\frac{\sqrt{5}}{\sqrt{5}-1}}\) be fixed and consider \(c\in\left(0,\frac{1}{2}\sqrt{5-\frac{L^{4}}{(L^{2}-4\pi^{2})^{2}}}\right)\). Operator \(\mathcal{L}_{1}\) in defined in \(L^{2}_{per}\) with domain in \(H^{2}_{per}\) has a unique negative eigenvalue which is simple and zero is a simple eigenvalue with eigenfunction \(\phi^{\prime}\). Moreover, the remainder of the spectrum of \(\mathcal{L}_{1}\) is constituted by a discrete set of eigenvalues bounded away from zero._ Since \(\mathcal{L}_{2}\) in (4.1) is positive, we conclude by Lemma 4.3 and the fact that \(\mathcal{L}=A^{t}\mathcal{D}A\) the following result. **Lemma 4.4**.: _Let \(L>2\pi\sqrt{\frac{\sqrt{5}}{\sqrt{5}-1}}\) be fixed and consider \(c\in\left(0,\frac{1}{2}\sqrt{5-\frac{L^{4}}{(L^{2}-4\pi^{2})^{2}}}\right)\). Operator \(\mathcal{D}\) in defined in \(\mathbb{L}^{2}_{per}\) with domain in \(\mathbb{H}^{2}_{per}\) has a unique negative eigenvalue which is simple and zero is a simple eigenvalue with eigenfunction \((\phi^{\prime},-c\phi^{\prime})\). Moreover, the remainder of the spectrum of \(\mathcal{D}\) is constituted by a discrete set of eigenvalues bounded away from zero._ ## 5. Spectral Stability Results - Proof of Theorem 1.2. In this section we present our spectral stability result and consequently, the proof of Theorem 1.2. To ensure the existence of the pair \((p,q)\) in (1.13), it is necessary to establish the existence of a local solution \((u,v)\) for the Cauchy problem associated with the system (1.3) \[\left\{\begin{array}{l}u_{t}=v_{x},\\ v_{t}=(I-4\partial_{x}^{2})^{-1}\partial_{x}((I-5\partial_{x}^{2})u-(u^{3}+u^{ 5}))\ \mbox{in}\ [0,L]\times(0,+\infty),\\ (u(x,0),v(x,0))=(u_{0}(x),v_{0}(x))\ \mbox{in}\ [0,L].\end{array}\right. \tag{5.1}\] In fact, we have the following result **Lemma 5.1**.: _There exists a time \(t_{0}>0\) such that for all initial \((u_{0},v_{0})\in\mathbb{H}^{1}_{per}\) the Cauchy problem in \((\ref{eq:1})\) has a unique solution \((u,v)\) defined in \(C([0,t_{0}),\mathbb{H}^{1}_{per})\). In addition, the pair \((u,v)\) satisfies the conserved quantities in \((\ref{eq:1})-(\ref{eq:1})\)._ Proof.: The proof of this result has the same spirit as in [15, Section 2] and because of this, we omit the details. Next, we need to calculate the difference \(n(\mathcal{D}_{\Pi})-n(\mathcal{P})\), where \(\mathcal{D}_{\Pi}\) is defined as in (1.18). Again and to improve the comprehension of the reader, matrix \(\mathcal{P}\) is given by \[\mathcal{P}=\left(\begin{array}{ll}\langle\mathcal{D}^{-1}(-c\chi,\chi),(-c \chi,\chi)\rangle_{\mathbb{L}^{2}_{per}}&\langle\mathcal{D}^{-1}(1,0),(-c\chi,\chi)\rangle_{\mathbb{L}^{2}_{per}}&\langle\mathcal{D}^{-1}(0,1),(-c\chi, \chi)\rangle_{\mathbb{L}^{2}_{per}}\\ \\ \langle\mathcal{D}^{-1}(-c\chi,\chi),(1,0)\rangle_{\mathbb{L}^{2}_{per}}& \langle\mathcal{D}^{-1}(1,0),(1,0)\rangle_{\mathbb{L}^{2}_{per}}&\langle \mathcal{D}^{-1}(0,1),(1,0)\rangle_{\mathbb{L}^{2}_{per}}\\ \\ \langle\mathcal{D}^{-1}(-c\chi,\chi),(0,1)\rangle_{\mathbb{L}^{2}_{per}}& \langle\mathcal{D}^{-1}(1,0),(0,1)\rangle_{\mathbb{L}^{2}_{per}}&\langle \mathcal{D}^{-1}(0,1),(0,1)\rangle_{\mathbb{L}^{2}_{per}}\end{array}\right),\] where \(\chi=4\phi^{\prime\prime}-\phi\). In order to calculate \(n(\mathcal{D}_{\Pi})\), we need to use the Index Theorem for self-adjoint operators (see [9, Theorem 5.3.2] that gives a precise counting of the spectral information concerning \(\mathcal{D}_{\Pi}\) in terms of the spectral properties associated to \(\mathcal{D}\). More precisely, since \(\ker(\mathcal{D})=[(\phi^{\prime},-c\phi^{\prime})]\), we have \[\mathrm{n}(\mathcal{D}_{\Pi})=\mathrm{n}(\mathcal{D})-n_{0}-z_{0} \tag{5.2}\] and \[\mathrm{z}(\mathcal{D}_{\Pi})=\mathrm{z}(\mathcal{D})+z_{0}, \tag{5.3}\] where \(\mathrm{z}(\mathcal{A})\) indicates the dimension of the kernel of a certain linear operator \(\mathcal{A}\). Parameters \(n_{0}\) and \(z_{0}\) are, respectively, the number of negative eigenvalues and the dimension of the kernel associated to the matrix \[\mathcal{Q}=\left(\begin{array}{ll}\langle\mathcal{D}^{-1}(1,0),(1,0) \rangle_{\mathbb{L}^{2}_{per}}&\langle\mathcal{D}^{-1}(0,1),(1,0)\rangle_{ \mathbb{L}^{2}_{per}}\\ \\ \langle\mathcal{D}^{-1}(1,0),(0,1)\rangle_{\mathbb{L}^{2}_{per}}&\langle \mathcal{D}^{-1}(0,1),(0,1)\rangle_{\mathbb{L}^{2}_{per}}\end{array}\right).\] Next, we need to calculate all the entries of the matrices \(\mathcal{P}\) and \(\mathcal{Q}\) above. In fact, to calculate \(\mathcal{Q}\) the procedure is the following: since \(\{(1,0),(0,1)\}\subset\ker(\mathcal{D})^{\perp}\) and \(\mathcal{D}:\ker(\mathcal{D})^{\perp}\to\ker(\mathcal{D})^{\perp}\) is invertible, we obtain the existence of unique \((f_{1},g_{1}),\ (f_{2},g_{2})\in\mathbb{H}^{2}_{per}\) such that \(\mathcal{D}(f_{1},g_{1})=(1,0)\) and \(\mathcal{D}(f_{2},g_{2})=(0,1)\). Both cases mean, after some calculations \[-(5-4c^{2})\partial_{x}^{2}f_{1}+(1-c^{2})f_{1}-(3\phi^{2}+5\phi^{4})f_{1}=1, \tag{5.4}\] and \[-(5-4c^{2})\partial_{x}^{2}f_{2}+(1-c^{2})f_{2}-(3\phi^{2}+5\phi^{4})f_{2}=-c. \tag{5.5}\] Since \(c\neq 0\), we deduce from the uniqueness of \(f_{1}\) that \(f_{2}=-cf_{1}\). Therefore, matrix \(\mathcal{Q}\) becomes \[\mathcal{Q}=\left(\begin{array}{cc}\langle f_{1},1\rangle_{L^{2}_{per}}&-c \langle f_{1},1\rangle_{L^{2}_{per}}\\ \\ -c\langle f_{1},1\rangle_{L^{2}_{per}}&L+c^{2}\langle f_{1},1\rangle_{L^{2}_{ per}}\end{array}\right).\] On the other hand, since matrix \(\mathcal{Q}\) is a \(2\times 2\) block of the matrix \(\mathcal{P}\), we only need to calculate the inner products \(\langle\mathcal{D}^{-1}(-c\chi,\chi),(-c\chi,\chi)\rangle_{\mathbb{L}^{2}_{ per}}\), \(\langle\mathcal{D}^{-1}(-c\chi,\chi),(1,0)\rangle_{\mathbb{L}^{2}_{per}}= \langle\mathcal{D}^{-1}(1,0),(-c\chi,\chi)\rangle_{\mathbb{L}^{2}_{per}}\), and \(\langle\mathcal{D}^{-1}(-c\chi,\chi),(0,1)\rangle_{\mathbb{L}^{2}_{per}}= \langle\mathcal{D}^{-1}(0,1),(-c\chi,\chi)\rangle_{\mathbb{L}^{2}_{per}}\). In fact, since \(\mathcal{D}^{-1}(-c\chi,\chi)=(-c4\phi^{\prime\prime}+c\phi,4\phi^{\prime \prime}-\phi)=\left(\frac{\partial\phi}{\partial c},-\phi-c\frac{\partial \phi}{\partial c}\right)\) we obtain, after some computations, \[\langle\mathcal{D}^{-1}(-c\chi,\chi),(-c\chi,\chi)\rangle_{\mathbb{L}^{2}_{ per}}=\int_{0}^{L}\left[\phi^{2}+4(\phi^{\prime})^{2}+2c\phi\frac{\partial\phi}{ \partial c}+8c\phi^{\prime}\frac{\partial\phi^{\prime}}{\partial c}\right]dx, \tag{5.6}\] \[\langle\mathcal{D}^{-1}(-c\chi,\chi),(1,0)\rangle_{\mathbb{L}^{2}_{per}}=\int _{0}^{L}\frac{\partial\phi}{\partial c}dx, \tag{5.7}\] and \[\langle\mathcal{D}^{-1}(-c\chi,\chi),(0,1)\rangle_{\mathbb{L}^{2}_{per}}=- \int_{0}^{L}\phi dx-c\int_{0}^{L}\frac{\partial\phi}{\partial c}dx \tag{5.8}\] From equations (5.6), (5.7), and (5.8), we can conclude that all three inner products mentioned above can be calculated by knowing the behavior of \(\frac{\partial\phi}{\partial c}\). To this end, we need to derive equation (1.11) with respect to \(c\) in order to obtain \[-(5-4c^{2})\left(\frac{\partial\phi}{\partial c}\right)^{\prime\prime}+(1-c^{ 2})\frac{\partial\phi}{\partial c}-3\phi^{2}\frac{\partial\phi}{\partial c}- 5\phi^{4}\frac{\partial\phi}{\partial c}=-8c\phi^{\prime\prime}+2c\phi,\] that is, \[\mathcal{L}_{1}\left(\frac{\partial\phi}{\partial c}\right)=-8c\phi^{\prime \prime}+2c\phi. \tag{5.9}\] Since the derivative of \(\phi\) in terms of \(c\) preserves the parity in terms of the variable \(x\in[0,L]\), we obtain that \(\frac{\partial\phi}{\partial c}\) is the solution of the following IVP \[\left\{\begin{array}{l}\mathcal{L}_{1}\left(\frac{\partial\phi}{\partial c }\right)=-8c\phi^{\prime\prime}+2c\phi,\\ \frac{\partial\phi}{\partial c}(0)=a_{c},\\ \frac{\partial\phi^{\prime}}{\partial c}(0)=0.\end{array}\right. \tag{5.10}\] Next, we present the exact value of \(a_{c}\) to determine numerically the function \(\frac{\partial\phi}{\partial c}\) in terms of \(x\in[0,L]\). As a consequence of this fact, we obtain convenient expressions for the inner products in (5.6), (5.7) and (5.8). Let \(y\) be the non-periodic solution of the homogeneous problem associated to the IVP given by (4.5). Multiplying the first equation in (5.10) by \(y\), we obtain after two integration by parts and using \(\frac{\partial\phi^{\prime}}{\partial c}(L)=\frac{\partial\phi^{\prime}}{ \partial c}(0)=0\), \(\frac{\partial\phi}{\partial c}(L)=\frac{\partial\phi}{\partial c}(0)\) and \(y^{\prime}(0)=0\) that \[\int_{0}^{L}\mathcal{L}\left(\frac{\partial\phi}{\partial c}\right)ydx=(5-4c^{2}) \frac{\partial\phi}{\partial c}(0)y^{\prime}(L). \tag{5.11}\] Since \(y\) is non-periodic with \(y^{\prime}(0)=0\) and \(y(0)=y(L)\), we see that \(y^{\prime}(L)\neq 0\) and we obtain by the first equation in (5.10), the following formula for \(a_{c}\) given by \[a_{c}=-\frac{2c}{(5-4c^{2})y^{\prime}(L)}\int_{0}^{L}\chi ydx. \tag{5.12}\] On the other hand, to calculate the inner product \(\langle f_{1},1\rangle_{L^{2}_{per}}\) present in the matrix \(\mathcal{Q}\), we need to use a similar procedure as determined to obtain the value \(a_{c}\) in (5.12). Indeed, by (5.4), we see that \(\mathcal{L}f_{1}=1\). Multiplying the equality \(\mathcal{L}f_{1}=1\) by the non-periodic solution \(y\) of the IVP (4.5), and integrating the result over the interval \([0,L]\), we obtain after two integration by parts that \[f_{1}(0)=\frac{1}{(5-4c^{2})y^{\prime}(L)}\int_{0}^{L}ydx. \tag{5.13}\] Therefore, \(f_{1}\) solves the IVP \[\left\{\begin{array}{l}\mathcal{L}_{1}f_{1}=1,\\ f_{1}(0)=\frac{1}{(5-4c^{2})y^{\prime}(L)}\int_{0}^{L}ydx,\\ f_{1}^{\prime}(0)=0.\end{array}\right. \tag{5.14}\] Finally, we need to solve the initial value problems (5.10) and (5.14) numerically. Important to mention that both numerical results are useful to calculate the inner products present in the entries of the matrices \(\mathcal{P}\) and \(\mathcal{Q}\). Our intention is to obtain, for fixed values of \(L>2\pi\sqrt{\frac{\sqrt{5}}{\sqrt{5}-1}}\), the behaviour of the determinants of \(\mathcal{P}\) and \(\mathcal{Q}\) in terms of the wave speed \(c\in\left(0,\frac{1}{2}\sqrt{5-\frac{L^{4}}{(L^{2}-4\pi^{2})^{2}}}\right)\). We first consider the determinant of the matrix \(\mathcal{Q}\). Using Mathematica program, we can solve it to conclude that the \(\det(\mathcal{Q})\) is always positive according to the Figure 5.1. This implies that, from Lemma 4.4 and equalities (5.2) and (5.3), we have \(\mathrm{n}(\mathcal{D}_{\Pi})=n(\mathcal{D})-n_{0}-z_{0}=1\) and \(\mathrm{z}(\mathcal{D}_{\Pi})=z(\mathcal{D})+z_{0}=1\). Next picture shows the behaviour of \(\det(\mathcal{P})\). According to the Figure 5.2, we see that for a fixed \(L>0\), there exists an \(c(L)>0\) such that if \(c\in(0,c(L)]\), the periodic wave \(\overrightarrow{\phi_{c}}=(\phi,-c\phi)\) is spectrally unstable, and if \(c\in\left(c(L),\frac{1}{2}\sqrt{5-\frac{L^{4}}{(L^{2}-4\pi^{2})^{2}}}\right)\), the periodic wave \(\overrightarrow{\phi_{c}}=(\phi,-c\phi)\) is spectrally stable. Numerically, we can determine some values for the threshold value \(c(L)>0\). In fact, if \(L=15\), we obtain \(c(L)\approx 0.5511026\), and if \(L=20\), we obtain \(c(L)\approx 0.5910113\). Finally, if \(L=25\), we have \(c(L)\approx 0.6100580\), and for \(L=30\), we have \(c(L)\approx 0.6217128\). Theorem 1.2 is then proved.
2306.04743
ScienceBenchmark: A Complex Real-World Benchmark for Evaluating Natural Language to SQL Systems
Natural Language to SQL systems (NL-to-SQL) have recently shown a significant increase in accuracy for natural language to SQL query translation. This improvement is due to the emergence of transformer-based language models, and the popularity of the Spider benchmark - the de-facto standard for evaluating NL-to-SQL systems. The top NL-to-SQL systems reach accuracies of up to 85\%. However, Spider mainly contains simple databases with few tables, columns, and entries, which does not reflect a realistic setting. Moreover, complex real-world databases with domain-specific content have little to no training data available in the form of NL/SQL-pairs leading to poor performance of existing NL-to-SQL systems. In this paper, we introduce ScienceBenchmark, a new complex NL-to-SQL benchmark for three real-world, highly domain-specific databases. For this new benchmark, SQL experts and domain experts created high-quality NL/SQL-pairs for each domain. To garner more data, we extended the small amount of human-generated data with synthetic data generated using GPT-3. We show that our benchmark is highly challenging, as the top performing systems on Spider achieve a very low performance on our benchmark. Thus, the challenge is many-fold: creating NL-to-SQL systems for highly complex domains with a small amount of hand-made training data augmented with synthetic data. To our knowledge, ScienceBenchmark is the first NL-to-SQL benchmark designed with complex real-world scientific databases, containing challenging training and test data carefully validated by domain experts.
Yi Zhang, Jan Deriu, George Katsogiannis-Meimarakis, Catherine Kosten, Georgia Koutrika, Kurt Stockinger
2023-06-07T19:37:55Z
http://arxiv.org/abs/2306.04743v2
# ScienceBenchmark: A Complex Real-World Benchmark for Evaluating Natural Language to SQL Systems ###### Abstract. Natural Language to SQL systems (NL-to-SQL) have recently shown a significant increase in accuracy for natural language to SQL query translation. This improvement is due to the emergence of transformer-based language models, and the popularity of the Spider benchmark - the de-facto standard for evaluating NL-to-SQL systems. The top NL-to-SQL systems reach accuracies of up to 85%. However, Spider mainly contains simple databases with few tables, columns, and entries, which does not reflect a realistic setting. Moreover, complex real-world databases with domain-specific content have little to no training data available in the form of NL/SQL-pairs leading to poor performance of existing NL-to-SQL systems. In this paper, we introduce _ScienceBenchmark_, a new complex NL-to-SQL benchmark for three real-world, highly domain-specific databases. For this new benchmark, SQL experts and domain experts created high-quality NL/SQL-pairs for each domain. To garner more data, we extended the small amount of human-generated data with synthetic data generated using GPT-3. We show that our benchmark is highly challenging, as the top performing systems on Spider achieve a very low performance on our benchmark. Thus, the challenge is many-fold: creating NL-to-SQL systems for highly complex domains with a small amount of hand-made training data augmented with synthetic data. To our knowledge, _ScienceBenchmark_ is the first NL-to-SQL benchmark designed with complex real-world scientific databases, containing challenging training and test data carefully validated by domain experts. database, machine learning, AI + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + scientific domains. _Data augmentation_, i.e. automatic benchmark generation is the only feasible solution. **Our Approach**. In this paper, we introduce _ScienceBenchmark_, a complex real-world benchmark for evaluating NL-to-SQL systems. It is the first of its kind to be developed collaboratively with SQL experts and researchers from the fields of research policy making, astrophysics, and cancer research. We _combine_ the knowledge derived from _manual training data collection with an automatic training data generation system_ that enables NL-to-SQL systems that require large amounts of data for training to be bootstrapped on complex, scientific databases where training data would otherwise be scarce or unavailable. The main architecture of the data augmentation system is described in Figure 1. The resulting augmented dataset of _NL/SQL-pairs_ can be fed into any NL-to-SQL system for training. Our approach is generic and can boost the accuracy of any NL-to-SQL system as demonstrated in our experiments in Table 5. The core idea is the _automatic generation of synthetic SQL queries_ based on query templates that are extracted from the Spider dataset - the predominant standard for benchmarking NL-to-SQL systems. Alternatively, the system can be fed a small set of manually generated NL/SQL-pairs to provide accurate and highly semantically relevant information about a novel domain. These synthetic SQL queries are then _back-translated to natural language_ using the large language model of GPT-3 [6]. For the SQL-to-NL component used in this work, we experimented with state-of-the-art transformer-based pre-trained models that have shown their NL generation capabilities by achieving state-of-the-art scores in multiple related tasks. Our evaluation showed that the GPT-3 [6] model achieves the best performance and is able to generalize to new domains with very few samples, which is why it was integrated in our architecture. The resulting natural language questions are then filtered using a _critic model_ to select the most relevant NL question for the corresponding SQL query. **Contributions**. The major contributions of our work are the following: * a new benchmark for evaluating NL-to-SQL systems against complex, scientific databases_. _ScienceBenchmark_ contains more than 6,000 NL/SQL-pairs to help researchers address the complex challenges of real-world databases, overlooked by popular benchmarks. * We have built _ScienceBenchmark_ using a _novel approach for automatically generating training data for databases_ where little to no training data exists. Unlike previous work that focused on rather simple databases, we _concentrate on complex, real-world scientific databases_, an area where popular NL-to-SQL systems typically falter due to their lack of domain knowledge and training data. * We train three state-of-the-art NL-to-SQL systems on our benchmark. Although these systems achieve top scores on the Spider leaderboard (above 70% accuracy), none achieves a satisfactory score on our benchmark (only around 20-50% accuracy depending on the domain), showcasing the difficulty of _ScienceBenchmark_. * In summary, **we demonstrate that NL-to-SQL systems trained on current NL-to-SQL benchmark datasets give lackluster performances in zero-shot scenarios on _complex scientific databases_. We show that domain-specific training data dramatically increases the performance of these systems on _complex scientific databases_. Hence, our experiments demonstrate that _ScienceBenchmark_ is sophisticated, diverse, and well suited for analyzing and potentially improving the Figure 1. End-to-end architecture for automatic training data generation consisting of four different phases, namely (1) Seeding Phase, (2) SQL Generation Phase, (3) SQL-to-NL Translation Phase and (4) Discriminative Phase. The approach is used to produce our novel benchmark dataset called ScienceBenchmark. The details of these phases are described in Section 3.3. performance of NL-to-SQL systems on previously unseen highly complex scientific databases. ## 2. The Need for a New Benchmark - Motivating Example: Astrophysics Building an NL-to-SQL system carries many challenges (Becker et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018) that continue to trouble scientists and practitioners. On the one hand, a natural language question may be vague, contain references that are even hard for a human to understand, and use a different vocabulary from the one used in the DB. On the other hand, the respective SQL query needs to adhere to a strict syntax and to the underlying DB schema. These are all problems that NL-to-SQL systems must answer. When applying an NL-to-SQL system to a real-world scientific database, the aforementioned challenges are extended by additional peculiarities that stem from the nature and the domain of these databases. While the Spider benchmark is the first large-scale dataset with complex SQL queries, its databases cannot be considered complex. Their subject-matter is very generalized, covering topics such as pets and entertainment (concerts, orchestras, musicals etc.). The majority of these databases were created by students specifically for Spider and are not at all representative of real-world databases. In what follows, we motivate the need for a novel design and training of NL-to-SQL systems for complex, scientific databases. We will focus our discussion on astrophysics, a very data-intensive and highly complex scientific discipline with a long tradition of using relational databases (Srivastava et al., 2014). The challenges described here are not only relevant for astrophysics but also for other scientific disciplines, such as cancer research, which is also included in our experimental evaluation. As our running example, we use the astrophysics database called Sloan Digital Sky Survey (SDSS)1. This database stores information about stars and galaxies at specific locations in the sky. Further details on the complexity of this database are specified in Section 3. Footnote 1: [https://www.ads.org/](https://www.ads.org/) Let us consider three different representative astrophysics queries that serve as running examples for our paper. * _Q1: Find all Starburst galaxies._ * _Q2: What is the object id, right ascension, declination and redshift from spectroscopically observed galaxies with redshift greater than 0.5 but less than 1?_ * _Q3: Find the photometric objects with object ids and spectroscopic object id whose spectroscopic class is 'GALAXY', with the difference of magnitude \(u\) and magnitude \(r\) less than 2.22 and the difference of magnitude \(u\) and magnitude \(r\) greater than 1._ Their corresponding SQL queries are as follows: Q1: (Spider hardness: Easy) SELECT s.specobjid FROM specobj AS s WHERE s.subclass = 'STABIBST' Q2: (Spider hardness: Medium) SELECT s.bestobjid, s.ra, s.dec, s.z FROM specobj AS s WHERE s.class = 'GALAXY' AND s.z > 0.5 AND s.z < 1 Q3: (Spider hardness: Extra hard) SELECT p.objid, s.specobjid FROM photocob's S P JON specobj AS s ON s.bestobjid = p.objid WHERE s.class = 'GALAXY' AND p.u = p.r < 2.22 AND p.u = p.r > 1 As shown by these queries, the major challenges for NL-to-SQL systems for complex, scientific databases are as follows: * in natural language and SQL - extensive domain knowledge is required. Hence, neural machine translation systems pre-trained on common knowledge datasets, like Spider, typically fail in complex domains due to the large disparity in subject matter. * which can be complex astrophysics descriptions including mathematical equations and natural language texts - are required. Finally, learning the mapping of a token from a natural language question to the relevant database attribute is non-trivial. * \(r\) < 2.22_. These observations expose the need for specialized benchmarks designed to capture the particularities and semantics of the domain at hand as well as the types of queries that need to be understood by an NL-to-SQL system. These requirements, in combination with the size that such a benchmark necessitates, prohibit its manual construction. Since large and complex schemas are hard (even for experts) to understand and query, the question that naturally arises is _how to build such benchmarks_? The answer is data augmentation, which in turn brings its own challenges. ## 3. Science Benchmark: A New Benchmark for Complex Databases In this section, we present _ScienceBenchmark_, which is composed of three domain-specific databases, namely research policy making, astrophysics and cancer research. First, we introduce the databases, showing their complexity and the characteristics of each domain. Second, we describe the manual data collection, which includes SQL experts writing queries with the involvement of domain experts as part of multi-year research project called INODE - Intelligent Open Data Exploration (Becker et al., 2015). Third, we describe our automatic data augmentation approach for generating synthetic training data using GPT-3 (Chen et al., 2016). Finally, we show the training and evaluation datasets of our novel benchmark _ScienceBenchmark_. ### Complex, Real-World Databases Here, we introduce complex, real-world databases from three scientific domains, which are all of significantly greater size and complexity than the databases found in Spider (Spier, 2017). Table 1 provides an overview of these databases. **Research Policy Making:** The CORDIS database, i.e. _Community Research and Development Information Service2_, serves as the European Commission's primary source of results from the projects funded by the EU's framework programs for research and innovation. The database contains very detailed hierarchical information about the framework of funding calls and the network of industrial and academic institutions, all of which is coded in highly specific enigmatic EU terminology. Footnote 2: [https://cordis.europpa.eu/](https://cordis.europpa.eu/) An example of this is the acronym NUTS, which stands for _nomenclature of territorial units for statistics_. Even the long form does not necessarily give the casual user a clear idea of what kind of information might be stored in such a column. Another challenging aspect of this database is the amount of text (e.g. descriptive project objectives averaging 1,821 characters) and the diversity of topics in the database ranging from _Information and Media_ to _Nuclear Fission_. For _ScienceBenchmark_, we use version 2022-08 of the database, as shown in Table 1, which comprises 19 tables and 82 columns and has an average of 35,355 rows per table. The size of the database is 1 GB. **Astrophysics:** The SDSS (_Sloan Digital Sky Survey_) - introduced in Section 2 - is a database containing the most detailed three-dimensional map of the universe ever made. The data collection began in 2000 and continues today. The database has 10 tables that contain disparate numbers of columns varying between 3 and 804 columns each. The tables contain various measurements and information about the type of observed object (e.g. a star or a galaxy), distances between observed objects, and various parameters (e.g. right ascension, declination, photometric system filters (u, g, r, i and z)) that have been measured in photometric or spectroscopic observations. In order to study star-forming galaxies, the sky is measured in several color bands such as infrared or ultraviolet resulting in different spectra, i.e. alternative numerical measurements and thus various interpretations of galaxies. Unlike on Earth, the location of celestial objects is defined by their right ascension and declination. Moreover, literally, hundreds of different attributes are collected for each object, such as size, redshift, brightness, magnitudes of color bands measurements, etc. This database contains many column names and values that are labeled with abbreviations (rather than natural language) that are familiar to astrophysicists but independirable for non-specialists. In order to query this database with an NL-to-SQL system, it was necessary to add natural language labels for abbreviated columns (e.g. ra = right ascension). In addition to the lack of natural language labels for columns, much of the information in the database is numerical and is used in complex queries with mathematical operations. Querying numerical data in natural language is significantly more complex than querying a database comprised of mostly textual values. Due to the limitation of the input tokens of the language model used in our experiments, we use a subset of the database comprised of 5 original tables and 1 additional table for photometrically observed astronomical objects. As described in Table 1, there are 61 columns, averaging about 10 columns per table and an average of 14,462,875 rows per table. The size of the database is 6.1 GB. **Cancer research data:** OncoMX3 is a database funded by the U.S. National Institute of Health (NIH) that integrates knowledge from several different sources about cancer biomarkers. The version of OncoMX used in _ScienceBenchmark_ contains information from cancer biomarker databases (EDRN4, FDA5), gene expressions in healthy anatomical entities (BBee6), differential gene expressions between healthy and cancerous samples (BioXpress7) and cancer mutations (BioMuta8). As shown in Table 1,the database comprises 25 tables that have 2 to 14 columns each for a total of 106 columns and has an average of 2,636,771 rows per table. The size of the database is 12 GB. Footnote 3: [https://www.oncomx.org/](https://www.oncomx.org/) Footnote 4: [https://edrn.nci.nih.gov/](https://edrn.nci.nih.gov/) Footnote 5: [https://www.dfa.gov/](https://www.dfa.gov/) Footnote 6: [https://bege.org/](https://bege.org/) Footnote 7: [https://bioxorossibosimilars.com/](https://bioxorossibosimilars.com/) The complexity in this database lies in the heavily domain-specific information it contains as well as the complex queries that researchers use when exploring this database. For example, those unfamiliar with cancer research will not know that "BREA1" or "BRECA2" refers to _BREAST CANcer gene 1_ and _BREAST CANcer gene 2_, respectively, and that mutations in these genes are responsible for many breast and ovarian cancers. In addition to the domain-specific \begin{table} \begin{tabular}{l r r r r r r} \hline \hline **Dataset** & **DBs** & **Tables** & **Columns** & **Rows** & **Avg Rows / Table** & **Size (GB)** \\ \hline Spider & 186 & 641 & 4,268 & 1.6M & 2.5K & 0.51 \\ (Avg / DB) & & 3.5 & 23 & 8.6K & & 0.03 \\ \hline \hline \multicolumn{6}{l}{_ScienceBenchmark_} \\ \hline CORDIS & 1 & 19 & 82 & 671K & 35K & 1 \\ SDSS & 1 & 6 & 61 & 86M & 14M & 6.1 \\ OncoMX & 1 & 25 & 106 & 65M & 2.6M & 12 \\ \hline \hline \end{tabular} \end{table} Table 1. Complexity of the Spider databases (top part) compared with the databases of our new benchmark dataset _ScienceBenchmark_ (bottom part). The table shows the number of databases (DBs), tables, columns, rows per DB, average number of rows per table and DB size. For Spider we also show the average numbers per DB (Avg / DB). The scientific domains of the databases contained in the ScienceBenchmark are _research policy making_ (CORDIS), _astrophysics_ (SDSS) and _cancer research_ (OncoMX). information, even a seemingly simple query in natural language such as "Show biomarkers for breast cancer" requires a SQL query with a multi-relational join and several filters. ### Manual Data Collection In this section, we detail the manual data collection for all three databases. All of the data was generated and reviewed by an expert group consisting of at least one SQL expert and one domain expert in research policy-making, astrophysics, or cancer research. In total, the team consisted of about 20 domain and SQL experts of various age ranges and genders. All of the experts were members of the multi-year research project INODE (Brock et al., 2017), including partners from academia and industry. Before starting data collection, the domain experts such as astrophysicists and cancer researchers introduced the SQL experts to the domain-specific knowledge within the database. At the data generation stage, the domain experts developed the _natural language questions_, while the SQL experts were responsible for writing the corresponding _SQL queries_. It is important to note that the domain experts were given the task of solving realistic science questions rather than simply generating complex questions based on the database. During the review and validation phase, domain experts used their expertise to verify the SQL queries together with the output of the SQL queries. For each domain, we generated a _dev set_ of 100 NL/SQL pairs. For CORDIS and SDSS, we created a _training set_ of 100 NL-to-SQL pairs. For OncoMX, we generated 50 training samples. In contrast, for each database in the Spider dev set, there are far fewer questions, 50 on average per database, ranging from 63 to just 4 questions per database. We have double the amount of dev set queries for our databases than in Spider. ### Automatic NL-to-SQL Training Data Generation In this section, we present our automatic training data generation approach. We will use our running example for astrophysics introduced in Section 2. A concrete example of the end-to-end pipeline is depicted in Figure 1. The process of automatic training data generation is composed of four phases: 1) the _Seeding Phase_, where SQL templates are extracted from the manually created queries, 2) the _SQL Query Generation Phase_, where the templates are filled with the database content, and the ontology is used to create a more readable version of the query, 3) the _SQL-to-NL Translation Phase_, where GPT-3 generates a set of candidate questions, and 4) the _Discriminative Phase_ (candidate selection phase) that selects the top two NL questions per SQL query. #### 3.3.1. Phase 1: Seeding Phase The seeding phase ingests the manually created SQL queries (as discussed in Section 3.2) and extracts _query templates_, which are fed to the next phase. For this, the manually created queries are transformed into an _Abstract Syntax Tree_ (AST) representation called _SemQL_(Kal all tables and columns, we obtain the readable and semantically meaningful SQL query which facilitates the development of the corresponding NL questions. In the next step of the second phase, _query templates_ are filled with the contents of the database (i.e., table names, column names, and values) using the enhanced schema. To increase the diversity of the queries in the synthetic dataset, we apply _random sampling_ to the AST representation of the template. The random sampling only _changes the leaf nodes in the AST_, which represent the corresponding columns, tables, and values. For instance, the projection column _specobjd_ from table _specobj_ may be changed to column _objid_ from table _neighbors_, as shown by _Generated AST (1)_ and _Generated SQL (1)_ in Figure 1. Another result of the random sampling is represented by _Generated AST (2)_ and _Generated SQL (2)_. In this case, the table _specobj_ is still used, but the projection column \(z\), as well as the filter condition on the column _survey_, are new. **Algorithm for SQL Query Generation** Algorithm 1 details the step by step process of automatically generating a SQL query using the AST templates and enhanced schema. We explain the algorithm with the example shown in Figure 2. In particular, we analyze the generation of the following SQL query which is also shown in Figure 1: ``` SELECT T1. objid FROM neighbors AS T1 WHERE T1. neighbormode = 2 ``` The algorithm starts with extracting leaf nodes from the AST templates and initializes a new temporary set of empty hash-maps, including Tables, Columns and Values (see lines 1 to 6). Each set of _Leaf nodes_ can be represented as a quadruple for a given attribute (see line 7) which consists of the aggregator function position, table position, column position and value position. An example of such a quadruple is shown in Figure 2 (see the right side of the Root-node indicated as filter(2)). We will focus on the leaf nodes surrounded by dotted-lines and green backgrounds. For instance, Filter(2) refers to the filter with position 2, which is equivalent to a query with an exact match filter. A(0) refers to an attribute without an aggregation function. T(0) and C(1) refer to the table with position 0, i.e. _neighbors_, column with position 7 without aggregation function, i.e. _neighbor_mode_. Finally, V(3) refers to a value with position 3, i.e. the value 2. This quadruple needs to be unpacked to extract the information about tables, columns and values as described informally above. Formally, the extraction of the tables, columns and values using the enhanced schema is described in lines 8 to 20 of Algorithm 1. If a position of a certain table, column or value is not found in the keys of the hash map, the respective sampling function will select a new value within the constraints of the enhanced schema, e.g. sampleTable() for table sampling (see line 9). Then the corresponding hash map will add the new position-value pair. At the end of the loop, all hash maps are filled with required position-value pairs for tables, columns and values. Finally, the AST is created on the fly and the corresponding SQL is returned (see lines 21 and 22 of Algorithm 1). #### 3.3.3. Phase 3: SQL-to-NL Translation Phase In the third phase of our pipeline shown in Figure 1, we generate the natural language questions (NLQs) that correspond to the newly generated SQL queries. To achieve this, we use _GPT-3_9[6]. We present details on the evaluation of large language models (LLMs) for generating NL questions given a SQL statement in Section 4.1. Footnote 9: Note that we also experimented with DBPal [42] as an alternative but we opted for a custom pipeline using GPT-3 since the generated natural language questions are more fluent. However, DBPal can easily be integrated in our pipeline to further extend ScienceBenchmark with additional training data. We fine-tuned GPT-3 on a subset of 468 samples of the Spider training data for four epochs. This alleviates the need for prompt engineering. Thus, the input to GPT-3 is a SQL query, and we let GPT-3 generate 8 natural language question-candidates to increase the linguistic diversity. Since there is no additional input required beyond the SQL query (e.g., database schema, extra information about the DB, DB contents), this approach can easily be transferred to any new database without any manual effort or need for extra data. Figure 2. Example of extracting & applying query templates for automatically generating SQL queries as shown in Algorithm 1. The top part shows the abstract syntax tree (AST) of a specific query. The lower part shows how query templates are applied for generating the SQL queries based on database schema and data. For the more domain-specific databases, we conduct fine-tuning on GPT-3 with the manually created seed NL/SQL pairs. Afterwards we apply the fine-tuned LLMs to translate SQL to NL. As in the Spider dataset, to obtain a larger variety of questions and achieve higher linguistic diversity, we generate several candidate NLQs per query. This approach also approximates the Spider dataset where each SQL query has multiple semantically equivalent natural language questions. #### 3.3.4. Phase 4: Discriminative Phase The last phase of our data generation pipeline as shown in Figure 1 selects the one or two best NLQs from the set of candidate NLQs generated in the previous phase. Consider, for instance, the following two NLQs depicted in Figure 1: "_Find the center object which has nearest neighbor with neighbor mode \(2\)_" and "_Find the center id of nearest neighbor object with neighbor mode smaller than \(2\)_". The goal of the discriminative phase is to decide, which of the two NLQs better represents the SQL query "SELECT T1.objd FROM megibbors AS T1 WHERE T1.neighbormode = \(2\)". Inspired by the centroid-based text summarization method (Spier et al., 2017), the best NLQs are those, whose word embeddings are closest to the centroid of all sample questions. To find these points, we select the candidates that are closest to the centroid, i.e. the geometric median of all embeddings. For this, we apply _SentenceBERT(Serban et al., 2017)_, to generate a set of sentence embeddings for all candidates: \(x_{i}\in\mathbb{R}^{m}\) and \(1\leq i\leq n\). The best NLQs are computed by taking the geometric median and selecting the closest embedding. Given a set \(X\in\mathbb{R}^{m}\) which contains \(n\) embeddings of generated NL questions, \(x_{1},x_{2},...,x_{n}\), where \(m\) denotes the dimension of embedding space. By the definition of geometric median, we can find the closest embedding \(y\in X\) with respect to the centroid vector in the space by _solving the optimization problem_ formalized as \(f(y)\): \[f(y)=\operatorname*{arg\,max}_{y\in\mathbb{R}^{m}}\sum_{l=1}^{n}CosSim(x_{i},y) \tag{1}\] That is, finding the candidate NLQ whose embedding has the highest cosine similarity to the centroid. We perform this process \(k\) times on the set \(X\setminus\{y\}\) until we have the top \(k\) natural language candidates. We choose one or two best NLQs, i.e., \(k\in\{1,2\}\). ### ScienceBenchmark Statistics Table 2 gives an overview of our new benchmark dataset called _ScienceBenchmark_ which we constructed using the automatic data generation pipeline shown in Figure 1. Note that for each of the three domain-specific databases described in Section 3.1 we present two manually created subsets (Seed and Dev) and one automatically generated subset (Synth). The manually created Seed and Dev queries were created by a team of 20 domain and SQL experts as described in Section 3.2, while the Synth queries we produced by our data generation pipeline described in Section 3.3. The Seed queries are used for the automatic data generation pipeline to generate synthetic data (Synth), while the Dev queries are used to evaluate NL-to-SQL systems. Table 2 also shows the distribution of the four complexity classes of the queries defined by Spider (Spier et al., 2017) for each dataset. For the CORDIS and SDSS datasets, we note that the complexity of the queries is higher than the queries in the Spider Dev Set. For One-coMX, the queries are of lower complexity because queries of any higher complexity for this database include features that are outside of the scope of current NL-to-SQL systems, such as recursive traversals of complex hierarchies of anatomical entities. Note that the complexities of the queries generated by our pipeline are generally lower than the complexity of the manually created training data, or the complexity of the Spider dataset. The reason is that with more complex templates the generated queries tend to be semantically incorrect. ## 4. Evaluating the Quality of ScienceBenchmark In this section we evaluate the _quality_ of our new benchmark dataset _ScienceBenchmark_. The main objective is to answer the following two research questions: * _Research question 1: How well do current methods work for translating SQL to NL?_ * _Research question 2: What is the quality of the automatically generated synthetic data, i.e. NL/SQL pairs?_ In order to answers these questions, we first present our evaluation of four different large language models (LLMs) for translating SQL to NL. The best-performing LLM will then be used to generate the synthetic data. Afterwards, we evaluate the correctness of the synthetic data for each of the three databases of ScienceBenchmark by performing an expert evaluation. ### Evaluation of Large Language Models for SQL-to-NL Translation This section describes the experiments we performed in order to decide on which LLM to incorporate to our automatic data generation pipeline. We evaluate the accuracy of each LLM in isolation. The best LLM is then used in Phase 3 "SQL-to-NL Translation" of our automatic data generation pipeline shown in Figure 1. We analyze the performance of four different LLMs, which are all based on large-scale transformer language models (Xie et al., 2017). We use these LLMs for _translating the SQL queries_ in the Spider Dev set _to natural language_. We apply various automated metrics to these results as well as an expert evaluation. Large Language ModelsWe have chosen the following four LLMs for our SQL-to-NL translation: * GPT-2: A fine-tuned GPT-2-large model (Xie et al., 2017) with an auto-regressive decoder-only large pre-trained language model, which is well suited for text generation. * GPT-3-zero: A zero-shot GPT-3 Davinci model (Xie et al., 2017), which is a larger version of the GPT-2 model pre-trained on even more data. * GPT-3: A fine-tuned GPT-3 Davinci model, which is GPT-3 fine-tuned on NL/SQL pairs. * T5: A fine-tuned T5-base model (Xie et al., 2017), which is an encoder-decoder-based pre-trained language model developed for machine translation. We fine-tuned a GPT-2-large language model on the Spider training data for 20 epochs. The GPT-3 model was fine-tuned on a subset of Spider for 4 epochs10. For this, we sampled three NL/SQL-pairs from each database in the Spider training set at random, which resulted in a training set of 468 NL/SQL-pairs. We used a simple prompt to trigger the translation from SQL to NL. The T5-base model was fine-tuned on the entire Spider dataset, for 10 epochs. Footnote 10: We decided to use only a subset of Spider to keep the costs low, since fine-tuning on all Spider data for 20 epochs would cost 6008 (we only need 105). Note that at epochs is the default value provided by GPT-3 for fine-tuning. MetricsEach LLM is evaluated using the SacreBLEU score (Xie et al., 2017; Zhang et al., 2017) and the SentenceBERT score (Xie et al., 2017) automatic metrics. SacreBLEU is an instantiation of the BLEU score which measures the word overlap between two sentences. However, word overlap metrics do not capture semantically equivalent natural language questions. \begin{table} \begin{tabular}{l r r r r r} \hline \hline **Dataset** & **Easy** & **Medium** & **Hard** & **Extra Hard** & **Total** \\ \hline \hline CORDIS Seed & 4 (4\%) & 15 (15\%) & 38 (38\%) & 43 (43\%) & 100 \\ CORDIS Dev & 25 (25\%) & 35 (35\%) & 19 (19\%) & 21 (21\%) & 100 \\ CORDIS Synth & 726 (55.59\%) & 494 (37.83\%) & 66 (5.05\%) & 20 (1.53\%) & 1306 \\ \hline SDSS Seed & 20 (20\%) & 54 (54\%) & 2 (2\%) & 24 (24\%) & 100 \\ SDSS Dev & 12 (12\%) & 28 (28\%) & 20 (20\%) & 40 (40\%) & 100 \\ SDSS Synth & 326 (15.82\%) & 1396 (67.73\%) & 138 (6.7\%) & 201 (9.75\%) & 2061 \\ \hline OncoMX Seed & 21 (42\%) & 20 (40\%) & 7 (14\%) & 2 (4\%) & 50 \\ OncoMX Dev & 39 (37.86\%) & 49 (47.57\%) & 11 (10.68\%) & 4 (3.88\%) & 103 \\ OncoMX Synth & 464 (43.57\%) & 601 (56.43\%) & 0 (0\%) & 0 (0\%) & 1065 \\ \hline \hline Spider Train & 1944 (22.45\%) & 2831 (32.7\%) & 1758 (20.3\%) & 2126 (24.55\%) & 8659 \\ Spider Dev & 250 (24.22\%) & 440 (42.64\%) & 174 (16.86\%) & 168 (16.28\%) & 1032 \\ \hline \hline \end{tabular} \end{table} Table 2. New benchmark dataset called _ScienceBenchmark_ which we constructed using the automatic training data generation pipeline shown in Figure 1. The size and complexity of the queries in the 3 databases of ScienceBenchmark are according to the Spider (Xie et al., 2017) hardness classification scheme. The datasets Seed and Dev are manually generated by domain and SQL experts. The datasets Synth are automatically generated. In the bottom part we also include the equivalent statistics of the Spider dataset for comparison. \begin{table} \begin{tabular}{l r r r r} \hline \hline **Metric** & **GPT-2** & **GPT-3-zero** & **GPT-3** & **T5** \\ \hline **SacreBLEU** & 33.85 & 30.36 & **38.55** & 31.79 \\ **SentenceBERT** & 0.840 & 0.870 & **0.888** & 0.864 \\ **Human Expert** & 0.629 & **0.765** & 0.731 & 0.645 \\ \hline \hline \end{tabular} \end{table} Table 3. Evaluation of various large language models for generating natural language questions given a SQL query. The goal is to validate Phase 3 "SQL-to-NL Translation" of our automatic data generation pipeline shown in Figure 1. The evaluation is performed on the Spider Dev set using two different automatic performance metrics (SacreBLEU and SententeceBERT) as well as human experts. For instance, consider the following two sentences (1) "_Find all Starburst galaxies?_" and (2) "_Return all the spectroscopically observed galaxies that lie in the starburst class._". Both statements describe the same information request, however, they have a low BLEU score. Thus, we also use SentenceBERT, which measures the semantic similarity of sentences. Additionally, since automated evaluations are not perfectly reliable, we also ran an expert evaluation where 7 SQL experts rated the generated questions. For each expert, we randomly sampled 25 SQL queries from the Spider Dev set and let each of the four LLMs generate the corresponding natural language question. Thus, each expert annotated 100 SQL/synthetic question-pairs. In other words, for each LLM, we have 175 expert annotations. #### 4.1.1. Results for Spider Datasets The evaluation results on the Spider Dev Set using various metrics are summarized in Table 3. The first two lines show the scores given by the automatic metrics SacreBLEU and SentenceBERT. The third line shows the evaluation by human experts. This metric shows the ratio of samples that human experts regarded as being correct. We observe that the GPT-3 model outperforms the other models by a large margin in terms of SacreBLEU score. The average SentenceBERT similarity is also highest for the fine-tuned version of GPT-3. The human expert evaluation shows that both versions of GPT-3 achieve significantly higher scores than GPT-2 and T5. However, the difference between the two versions of GPT-3 are not significant, i.e. 76.5% vs. 73.1%. Thus, we opt to use the fine-tuned version of GPT-3 since it achieved the highest scores on ScareBLEU and SentenceBERT. #### 4.1.2. Results for ScienceBenchmark We also ran expert evaluations for each of the three domains contained in the _ScienceBenchmark_. For this, we translated 100 manually generated SQL queries (called dev queries) to NL questions using a GPT-3 model, which was fine-tuned on the specific database. For each database, we used the manually created training queries and the same 468 Spider queries used above to fine-tune GPT-3. We then performed the expert evaluation for the domain-specific GPT-3 models. For the CORDIS dataset, GPT-3 correctly translates SQL to NL in 82% of cases, for OnocoMX 73%. For SDSS, the ratio is lower at 53%, which is mostly due to the higher complexity of the dev queries. **Answer to Research Question 1: We shown that LLMs are powerful enough to generate good NL questions for a variety of domains, be it common knowledge or highly domain-specific**. ### Evaluation of Synthetic Datasets (Silver Standard) of ScienceBenchmark We now analyze the quality of the synthetic datasets (or silver standard) for the novel domains of our _ScienceBenchmark_ via an expert evaluation. Note that in the previous section we only evaluated the translation of SQL to NL for the _manually written_ Dev Set SQL queries. Now we evaluate the synthetic datasets of CORDIS, SDSS and OncoMX, where _both the SQL queries and the corresponding NL questions are automatically generated_ using the pipeline shown in Figure 1. Distantly labelled data, also known as "silver standard" data has been used as a resource for reliably training neural networks when manually labelled data or "gold standard" data is scarce or unavailable. As shown in previous work on distant supervision (Song et al., 2017), training data does not have to be perfect and neural networks can learn from noisy or partially incorrect training data. Many training data generation systems such as DBPal (Zhu et al., 2017) are based on the principal that silver standard data (possibly noisy data), is sufficient for training. Although DBPal provides an end-to-end systems analysis to show the effectiveness of the generated data, they do not provide any manual analysis of the quality or accuracy of the training data itself. Because the SQL queries in our data generation pipeline are generated using rule-based algorithms and filtered with heuristics crafted by domain experts to ensure the domain relevance of the SQL queries, we evaluate the _semantic equivalence_ of the NL questions generated in our pipeline i.e. we check if the NL question matches the meaning of the SQL query. First, we randomly sampled 100 NL/SQL-pairs from each synthetic dataset (CORDIS, SDSS, OncoMX) proportionally in line with the Spider hardness classification schema. Afterwards, we manually evaluated the NL questions against the matching SQL query. As shown in Table 4 the randomly sampled queries from each dataset are of high quality achieving 83%, 76% and 75% semantic equivalence respectively across the 3 novel domains: CORDIS, SDSS and OncoMX. **Answer to Research Question 2: This analysis demonstrates that the quality of the synthetically generated or "silver standard" data for the three novel domains of ScienceBenchmark is high and similar to the quality for the less complex Spider dataset.** ## 5. Baseline Experiments: Using ScienceBenchmark to Evaluate NL-to-SQL Systems In this section, we put _ScienceBenchmark_ into practice and perform baseline experiments to evaluate the performance of popular NL-to-SQL systems on our new benchmark. The main research questions we want to address are as follows: * _Research question 3: How we well do current NL-to-SQL systems perform on complex, real-world scientific databases?_ * _Research question 4: How much can NL-to-SQL systems be improved using data augmentation when little to no domain-specific training data is available for novel domains?_ \begin{table} \begin{tabular}{l r r r} \hline \hline & **Total \# of Synth** & **\# of Evaluated** & **Semantic** \\ & **Queries** & **Queries** & **Equivalence** \\ \hline CORDIS & 1306 & 100 & 83\% \\ SDSS & 2061 & 100 & 76\% \\ OncoMX & 1065 & 100 & 75\% \\ \hline \hline \end{tabular} \end{table} Table 4. Manual evaluation of the synthetic datasets (silver standard) of ScienceBenchmark. The results show the semantic equivalence of the automatically generated NL questions with their corresponding, automatically generated SQL queries. ### NL-to-SQL Systems To test the performance of NL-to-SQL systems on _ScienceBenchmark_, we selected state-of-the-art systems that meet the following criteria: 1) Access to the open source model and the pre-trained model weights; 2) Access to bidirectional conversion code between SQL and intermediate representations (IR) of the systems, if any IR is used in the model11. Footnote 11: Since the SQL-to-NotSQL (Liu et al., 2017) conversion code is not available, as announced by the author in their code repository,[https://github.com/ygan/NatSQL](https://github.com/ygan/NatSQL), all systems integrated with NatSQL, have been excluded from our experiments. Therefore, for our experiments we use three different state-of-the-art NL-to-SQL systems: the only two completely open source state-of-the-art NL-to-SQL systems from the Spider leaderboard12 (T5-Large, SmBoP) and an industrial-strength NL-to-SQL system, which has recently been extended to handle complex, real-world datasets (ValueNet). Footnote 12: [https://yale.github.io/spider](https://yale.github.io/spider) * T5-Large (Zhu et al., 2017) (with Picard (Petersson et al., 2017) for constrained decoding). T5-Large is a language model, which is pre-trained on large amounts of text data13. Footnote 13: Due to compilation issues with Picard’s decoder architecture implemented in Haskell, we only used T5-Large without the Picard version of the code provided by Picard. * SmBoP (Petersson et al., 2017) (with GraPPA (Katsogiannis et al., 2017)). SmBoP implements a novel autoregressive bottom-up decoder, while also being enhanced with the GraPPa, which is a language model specifically pre-trained for the NL-to-SQL task. * ValueNet (Fischer et al., 2017). ValueNet is based on the IRNet (Fischer et al., 2017) architecture and extends the SemQL grammar by adding values to the generated queries to make them executable. To handle the SDSS astrophysics data, we extended the SemQL grammar for ValueNet to incorporate mathematical operations 14. Footnote 13: We only incorporated the mathematical operations for ValueNet as it was straightforward to extend SemQL, The T5 architecture handles mathematical operations out-of-the-box as we use the unconstrained version. For reproducibility reasons, we provide an anonymized version of both the source code of our automatic training data generation approach and the datasets of _ScienceBenchmark_15. Footnote 15: The source code, the datasets, the hyperparameters, and the model-specific experiments hardware specifications for ScienceBenchmark can be found at: [https://anonymous.depen.science/i/nl-ql-data-augmentation-C378/](https://anonymous.depen.science/i/nl-ql-data-augmentation-C378/) ### Experimental Setup For each of the 3 databases of _ScienceBenchmark_, we ran four experiments: * _Spider Train (Zero-Shot)_: Here, we train the NL-to-SQL systems on the _Spider Train Set_, and run the evaluation on the _Dev Set_ of the respective new domain, i.e. on the evaluation set of CORDIS, SDSS and OncoMX. * _Spider Train + Domain Train_: We train the NL-to-SQL systems on a mix of the Spider Train Set and the manually created training data, e. _CORDIS Train_. Here the goal is to understand how much impact manually generated, domain-specific training queries have on the performance of the respective NL-to-SQL system. * _Spider Train + Domain Synth_: We train the systems on a mix of Spider training data and the synthetic data, which we automatically generated using our training data generation pipeline. For instance, _CORDIS Synth_ is the automatically generated (synthetic) training data for the domain _research policy making_. * _Spider Train + Domain Train + Synth_: Here, we train the systems on a mix of the Spider training data, the manually created training data, and the synthetic data from each domain. This shows the impact of using both synthetic and manually curated data. The evaluation is performed using _Execution Accuracy_, which is also the metric used by the Spider benchmark. In other words, we measure in how many cases the result sets of the predicted SQL queries correspond to the result sets of the gold-standard SQL query of the Dev Set. ### Experimental Results Table 5 shows the execution accuracy for the experiments using _ScienceBenchmark_. Our results show that _ScienceBenchmark_ poses a challenge to current NL-to-SQL approaches. As expected, the performance of the various systems is very low in the Zero-Shot setting, as the domains covered in Spider are hardly transferable to the novel complex domains. The results also highlight that it is not only a question of data volume. In fact, adding the manual and synthetic training data of our domains, increases the scores by a large margin, however, the absolute scores remain low. For instance, the score increases by 23% on CORDIS when training ValueNet on all the data. However, with an execution accuracy of 35% the task is far from solved. The same trend is noticeable for each of the three novel domains. **Answer to Research Question 3: The results show that the current state-of-the-art approaches, which work exceptionally well on Spider, do not perform well on real-world databases.** Thus, we pose this dataset as a challenge, which requires more than an increase in training data size. It requires novel approaches that are able to handle all the new complexities introduced by these domains. **Answer to Research Question 4: The results also show that for each domain and NL-to-SQL system, the combination of seed and synthetic queries yields an _improvement over the zero-shot baseline of up to 30%_. The magnitude of the improvements varies depending on the NL-to-SQL system and domain.** ### Discussion of the Experiments Our experiments show that our automated data augmentation pipeline creates training data which is well suited for bootstrapping novel domains. The magnitude of the improvement depends on the NL-to-SQL systems themselves. However, for all highly domain-specific databases, the performance of the systems trained with synthetic data improved. This is useful when adapting an NL-to-SQL system trained on Spider data for a novel and more complex domain with a database that contains more tables, columns and rows. The results highlight the necessity to work on real-world applications. In the zero-shot setting all the state-of-the-art systems achieved poor performance. For instance, in the SDSS domain none of the systems achieved an accuracy of even 10%. Even with the fine-tuning data, the performance of the systems are far from the 70% achieved in the Spider setting. Thus, the results show the need of a more complex and real-world oriented benchmark. Furthermore, our results reveal that the usage of the synthetic data is useful to increase the performance of fine-tuned models. In most cases using the large set of synthetic data alone yields better results than using the small manual dataset for training. The mix of both the synthetic and the manual data yield the best results. **Thus, _ScienceBenchmark_ is highly challenging as it requires to adapt most systems to domain-specific knowledge and to handle complex, real-world scientific databases - which are in stark contrast to the relatively simple databases of the Spider benchmark.** ## 6. Related Work In this section, we review the related work regarding _data augmentation_ and _NL-to-SQL benchmarks_. ### Data Augmentation Due to the need of deep learning models for high volumes of training examples, combined with sparsity of training data and the cost of manually creating it, a lot of research has been done in the area of data augmentation. Previous work on data augmentation for NL-to-SQL systems mainly focuses on generating SQL queries that run over a single table rather than over multiple tables of a complete relational database. One such example is DBPal (Zhou et al., 2017), a template-based approach for generating NL/SQL-pairs, which uses manually-crafted templates of NL/SQL-pairs, which can be filled with the names of tables and columns in order to create training instances. Additionally, the authors propose NL augmentations such as paraphrasing, random deletions and synonym substitutions. However this approach might create "robotic" NLQs whose quality might be reduced by the proposed augmentations, since designing rules that can consistently work across all possible questions is notoriously hard. Another approach (Zhou et al., 2017) creates SQL queries by using simple SQL templates and sampling column names and values from a given table and then applies Recurrent Neural Networks (RNNs) to generate the equivalent NLQ. Some key differences to our work are that: (i) we can generate augmented data without completely relying on manually created NL/SQL-pairs or templates and that (ii) our NLQ augmentation step is much more robust and can generate completely new and realistic NL utterances. Another approach (Zhou et al., 2017) uses Metamorphic Rules (MRs) to create equivalent alterations of NLQs and database schemas from given NL/SQL/DB-triplets. Even though this work mainly focuses on providing a more robust evaluation framework for NL-to-SQL systems, it also proposes a methodology for data augmentation that takes advantage of the MRs used for the evaluation. More specifically, the authors present a set of MRs that can be used to create an alteration of either the NLQ or the database schema, while keeping them semantically equivalent to the original. However, the proposed NLQ transformations are relatively simple (i.e., synonym substitution and prefix insertion/deletion/substitution) compared to our approach for generating novel and fluent NLQs. Additionally, compared to our work, this approach is not capable of augmenting the SQL part of the training examples and requires a hand-crafted set of NL/SQL-pairs in order to work for a new database. The more recent work presented in (Zhou et al., 2017) is one of the few proposed architectures that can generate examples that cover multiple tables of a relational database. This work generates SQL queries by creating templates using an abstract syntax tree grammar and filling them with attributes from the database. The NLQs are then \begin{table} \begin{tabular}{l l l l l} \hline \hline **Dataset** & \multicolumn{4}{c}{**NL-to-SQL System**} \\ \hline **Train Set** & **Dev Set** & **ValueNet** & **T5-Large w/o Picard** & **SmBoP+GraPPa** \\ \hline Spider Train (Zero-Shot) & CORDIS & 0.12 & 0.16 & 0.16 \\ Spider Train + Seed CORDIS & CORDIS & 0.20 (+0.08) & 0.20 (+0.04) & 0.20 (\(\mathtt{<}\)0.04) \\ Spider Train + Synth CORDIS & CORDIS & 0.31 (+0.19) & 0.30 (+0.14) & 0.23 (\(\mathtt{<}\)0.07) \\ Spider Train + Seed CORDIS + Synth CORDIS & CORDIS & 0.35 **(+0.23)** & 0.29 (+0.13) & 0.21 (\(\mathtt{<}\)0.05) \\ \hline Spider Train (Zero-Shot) & SDSS & 0.08 & 0.05 & 0.06 \\ Spider Train + Seed SDSS & SDSS & 0.11 (+0.03) & 0.06 (+0.01) & 0.10 (\(\mathtt{<}\)0.04) \\ Spider Train + Synth SDSS & SDSS & 0.18 (+0.10) & 0.12 (+0.07) & 0.13 (\(\mathtt{<}\)0.07) \\ Spider Train + Seed SDSS + Synth SDSS & SDSS & 0.21 **(+0.13)** & 0.15 (+0.10) & 0.15 (\(\mathtt{<}\)0.09) \\ \hline Spider Train (Zero-Shot) & OncoMx & 0.27 & 0.21 & 0.20 \\ Spider Train + Seed OnceMx & OncoMx & 0.52 (+0.25) & 0.44 (+0.23) & 0.41 (\(\mathtt{<}\)0.21) \\ Spider Train + Synth OncoMx & OncoMx & 0.47 (+0.20) & 0.34 (+0.13) & 0.27 (\(\mathtt{<}\)0.07) \\ Spider Train + Seed OnceMx + Synth OncoMx & OncoMx & 0.57 **(+0.30)** & 0.51 **(+0.30)** & 0.46 (\(\mathtt{<}\)0.26) \\ \hline \hline Spider Train (Zero-Shot) & Spider Dev & 0.70 & 0.70 & 0.74 \\ Spider Train + Synth Spider & Spider Dev & 0.68 (-0.02) & 0.68 (-0.02) & 0.69 (-0.05) \\ Synth Spider & Spider Dev & 0.40 (-0.30) & 0.37 (-0.33) & 0.35 (-0.39) \\ \hline \hline \end{tabular} \end{table} Table 5. Putting _ScienceBenchmark_ into practice: Evaluation of NL-to-SQL systems without and with data augmentation. **Execution Accuracy is evaluated on the Dev Set of three novel datasets. The numbers in brackets refer to the relative improvements with respect to the zero-shot baseline.** generated using a hierarchical, RNN-based neural model, that recursively generates explanations for all parts of the queries and then concatenates them. Our work differs from the previous because we consider much more complex and robust SQL-to-NL models that can create NLQs with much higher variety and fluency, since they are generated by taking the entire SQL query into account. Finally, our work differs from all previous work in the sense that instead of simply increasing the performance on a generic dataset like Spider (Wang et al., 2019), we focus on adapting an NL-to-SQL system on new, unseen and complex databases, with little to no manual effort. ### NL-to-SQL Benchmarks Progress in NL-to-SQL systems was systematically impeded by the lack of a common, large-scale benchmark dataset. The introduction of WikiSQL (Wang et al., 2019) and Spider (Wang et al., 2019), has drastically changed the landscape, allowing for the introduction of deep learning techniques to tackle the problem, as well as providing a common benchmark for comparing different approaches. These two benchmark datasets remain the main point of reference for NL-to-SQL systems despite much criticism (e.g., WikiSQL has low complexity and multiple errors (Kang et al., 2019) and Spider databases are not realistic (Kang et al., 2019)). NL-to-SQL benchmarks can be classified into: _domain-specific_ and _cross-domain_ datasets. Domain-specific datasets focus on a single database from a specific domain, such as: movies and television series (IMdb (Mikolov et al., 2016)), restaurant and shop reviews (Yelp (Mikolov et al., 2016) and Restaurants (Mikolov et al., 2016; Wang et al., 2019)), academic research (Scholar (Kost et al., 2016) and Academic (Kost et al., 2016)), financial data (Advising (Blek et al., 2016) and FIBEN (Kosman et al., 2016)), medical data (MIMICSQL (Kost et al., 2016)), and questions and answers from Stack Exchange (SEDE (Kang et al., 2019)). In contrast, cross-domain datasets contain multiple databases, taken from different domains. Spider-DK (Kang et al., 2019) and Spider-Syn (Kang et al., 2019), are extensions of Spider which explore system capabilities at cross-domain generalization and synonym robustness. KaggleDBQA (Kang et al., 2019) is another cross-domain dataset, although of much smaller size, that has been extracted from Kaggle and features databases taken from the Web. Another cross-domain dataset is OTTA (Blek et al., 2016), which uses an inverse annotation procedure, whereby automatically generated queries, which are visually displayed are annotated with natural language questions by non-SQL experts. **The main difference of our novel benchmark _ScienceBenchmark_, compared to all aforementioned datasets, is twofold: **(i) it contains scientific domains that use domain-specific vocabulary, and (ii) it was developed by scientists and domain-experts over the course of a multi-year research project including partners from both academia and industry. Hence, the deep interactions between scientists and domain experts ensured high quality examples that reflect queries posed by actual users of these complex, scientific databases**. ## 7. Conclusions In this work, we introduce the novel benchmark _ScienceBenchmark_ for evaluating NL-to-SQL systems against complex, real-world scientific databases. We also show the end-to-end pipeline for automatically generating large synthetic training datasets for highly domain-specific databases, which are more complex both in terms of the subject matter and in terms of the number of tables, columns and rows than the databases used in the Spider benchmark. **Our experiment results show that _ScienceBenchmark_ poses a significant challenge to current NL-to-SQL approaches. While these systems work exceptionally well on the Spider dataset which has relatively simple databases, the state-of-the-art NL-to-SQL systems do not perform well on complex, real-world scientific databases. Thus, we argue that **ScienceBenchmark can serve as a new baseline benchmark for evaluating NL-to-SQL systems and thus set the stage for novel research efforts to handle the complexities introduced by these real-world challenges.** ###### Acknowledgements. This project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 863410.
2306.15896
Content-Aware Quantization Index Modulation:Leveraging Data Statistics for Enhanced Image Watermarking
Image watermarking techniques have continuously evolved to address new challenges and incorporate advanced features. The advent of data-driven approaches has enabled the processing and analysis of large volumes of data, extracting valuable insights and patterns. In this paper, we propose two content-aware quantization index modulation (QIM) algorithms: Content-Aware QIM (CA-QIM) and Content-Aware Minimum Distortion QIM (CAMD-QIM). These algorithms aim to improve the embedding distortion of QIM-based watermarking schemes by considering the statistics of the cover signal vectors and messages. CA-QIM introduces a canonical labeling approach, where the closest coset to each cover vector is determined during the embedding process. An adjacency matrix is constructed to capture the relationships between the cover vectors and messages. CAMD-QIM extends the concept of minimum distortion (MD) principle to content-aware QIM. Instead of quantizing the carriers to lattice points, CAMD-QIM quantizes them to close points in the correct decoding region. Canonical labeling is also employed in CAMD-QIM to enhance its performance. Simulation results demonstrate the effectiveness of CA-QIM and CAMD-QIM in reducing embedding distortion compared to traditional QIM. The combination of canonical labeling and the minimum distortion principle proves to be powerful, minimizing the need for changes to most cover vectors/carriers. These content-aware QIM algorithms provide improved performance and robustness for watermarking applications.
Junlong Mao, Huiyi Tang, Shanxiang Lyu, Zhengchun Zhou, Xiaochun Cao
2023-06-28T03:36:38Z
http://arxiv.org/abs/2306.15896v2
Content-Aware Quantization Index Modulation: Leveraging Data Statistics for Enhanced Image Watermarking ###### Abstract Image watermarking techniques have continuously evolved to address new challenges and incorporate advanced features. The advent of data-driven approaches has enabled the processing and analysis of large volumes of data, extracting valuable insights and patterns. In this paper, we propose two content-aware quantization index modulation (QIM) algorithms: Content-Aware QIM (CA-QIM) and Content-Aware Minimum Distortion QIM (CAMD-QIM). These algorithms aim to improve the embedding distortion of QIM-based watermarking schemes by considering the statistics of the cover signal vectors and messages. CA-QIM introduces a canonical labeling approach, where the closest coset to each cover vector is determined during the embedding process. An adjacency matrix is constructed to capture the relationships between the cover vectors and messages. CAMD-QIM extends the concept of minimum distortion (MD) principle to content-aware QIM. Instead of quantizing the carriers to lattice points, CAMD-QIM quantizes them to close points in the correct decoding region. Canonical labeling is also employed in CAMD-QIM to enhance its performance. Simulation results demonstrate the effectiveness of CA-QIM and CAMD-QIM in reducing embedding distortion compared to traditional QIM. The combination of canonical labeling and the minimum distortion principle proves to be powerful, minimizing the need for changes to most cover vectors/carriers. These content-aware QIM algorithms provide improved performance and robustness for watermarking applications. watermarking, quantization index modulation (QIM), data-driven, minimum distortion ## I Introduction Digital watermarking is a technique used to embed information or digital markers into digital media, such as images, videos, audio files, or documents, without significantly altering the perceptual quality of the content [1, 2]. This embedded information, known as a watermark, serves various purposes, including copyright protection, content authentication, and data integrity verification. It plays a crucial role in protecting the rights and integrity of digital media in an increasingly digital and interconnected world [3, 4, 5]. Quantization index modulation (QIM) [6] is a popular data hiding paradigm due to its considerable performance advantages over spread spectrum techniques. It excels in terms of information hiding capacity, perceptual transparency, robustness, and tamper resistance. QIM has been tailored to meet the demands of many applications. Some examples include angle QIM (AQIM) [7] and difference angle QIM (DAQIM) [8] for resisting gain attacks, dither-free QIM [9] to reduce the amount of synchronization information, \(E_{8}\) lattice-based QIM [10] for enjoying the trade-off between computational complexity and robustness performance, the Tucker decomposition-based QIM [11] for robust image watermarking, and minimum distortion QIM (MD-QIM) [12] by moving the data point to the boundary of Voronoi regions to achieve smaller distortions. Digital watermarking techniques have continuously evolved to meet the emerging challenges and threats in the field. One notable trend is the emergence of data-driven watermarking methods, which leverage advanced technologies such as machine learning and blockchain algorithms to generate and embed watermarks that are specifically tailored to the unique characteristics of the media content [13, 14]. These data-driven approaches have shown promise in enhancing the robustness and imperceptibility of watermarking algorithms. Advancements in technology have played a significant role in the development of robust and imperceptible watermarking algorithms. Researchers have explored various techniques and methodologies to improve the performance of watermarking systems. Robustness refers to the ability of a watermark to withstand attacks and intentional modifications, while imperceptibility refers to the extent to which the embedded watermark is perceptually invisible to human observers. Several approaches have been proposed to enhance the robustness and imperceptibility of watermarking algorithms. Hybrid methods that combine multiple watermarking techniques, such as transform-based methods and spread spectrum techniques, have demonstrated improved performance in terms of both robustness and imperceptibility [15, 16]. These hybrid approaches leverage the strengths of different techniques to achieve a balance between robustness and imperceptibility. Furthermore, comprehensive surveys and studies have been conducted to analyze the effectiveness of existing watermarking algorithms and identify areas for improvement [17]. These studies provide valuable insights into the strengths and limitations of different approaches and offer guidelines for developing more robust and imperceptible watermarking techniques.
2310.19792
The Eval4NLP 2023 Shared Task on Prompting Large Language Models as Explainable Metrics
With an increasing number of parameters and pre-training data, generative large language models (LLMs) have shown remarkable capabilities to solve tasks with minimal or no task-related examples. Notably, LLMs have been successfully employed as evaluation metrics in text generation tasks. Within this context, we introduce the Eval4NLP 2023 shared task that asks participants to explore prompting and score extraction for machine translation (MT) and summarization evaluation. Specifically, we propose a novel competition setting in which we select a list of allowed LLMs and disallow fine-tuning to ensure a focus on prompting. We present an overview of participants' approaches and evaluate them on a new reference-free test set spanning three language pairs for MT and a summarization dataset. Notably, despite the task's restrictions, the best-performing systems achieve results on par with or even surpassing recent reference-free metrics developed using larger models, including GEMBA and Comet-Kiwi-XXL. Finally, as a separate track, we perform a small-scale human evaluation of the plausibility of explanations given by the LLMs.
Christoph Leiter, Juri Opitz, Daniel Deutsch, Yang Gao, Rotem Dror, Steffen Eger
2023-10-30T17:55:08Z
http://arxiv.org/abs/2310.19792v1
# The Eval4NLP 2023 Shared Task on ###### Abstract With an increasing number of parameters and pre-training data, generative large language models (LLMs) have shown remarkable capabilities to solve tasks with minimal or no task-related examples. Notably, LLMs have been successfully employed as evaluation metrics in text generation tasks. Within this context, we introduce the Eval4NLP 2023 shared task that asks participants to explore _prompting_ and _score extraction_ for machine translation (MT) and summarization evaluation. Specifically, we propose a novel competition setting in which we select a list of allowed LLMs and disallow fine-tuning to ensure a focus on prompting. We present an overview of participants' approaches and evaluate them on a new reference-free test set spanning three language pairs for MT and a summarization dataset. Notably, despite the task's restrictions, the best-performing systems achieve results on par with or even surpassing recent reference-free metrics developed using larger models, including GEMBA and Comet-Kiwi-XXL for MT. Finally, as a separate track, we perform a small-scale human evaluation of the plausibility of explanations given by the LLMs.1 Footnote 1: We make parts of our code and datasets available: [https://github.com/eval4nlp/SharedTask2023/tree/main](https://github.com/eval4nlp/SharedTask2023/tree/main) ## 1 Introduction The ChatGPT revolution in late 2022 has ignited a wide public and scientific debate about the possibilities (and limitations) of generative AI in various fields and application scenarios Leiter et al. (2023); Eger et al. (2023), including education Halaweh (2023), logic Liu et al. (2023), medicine Dave et al. (2023), math Frieder et al. (2023), programming Roziere et al. (2023) and science Belouadi et al. (2023). The immense research interest has also triggered the exploration of numerous approaches that leverage generative large language models (LLMs) as _evaluation metrics_ Kocmi and Federmann (2023); Liu et al. (2023); Fu et al. (2023); Xu et al. (2023); Fernandes et al. (2023) for natural language generation (NLG) tasks like machine translation (MT) and summarization. Recent LLM based approaches differ, for example, in their prompting strategies, e.g., in the way that natural language instructions are used to trigger the LLM to compute metric scores. For example, GEMBA Kocmi and Federmann (2023) uses zero-shot prompting to directly predict scores or quality labels in the output. In contrast, AutoMQM Fernandes et al. (2023) instructs LLMs to predict fine-grained error labels and uses these to compute the final scores. These works have initiated the idea of prompting for NLG evaluation, but an exhaustive exploration of approaches remains an open problem. Further, many approaches leverage closed source LLMs while much fewer use open source LLMs. Those approaches relying on open source LLMs put a large focus on acquiring training data (e.g. Xu et al., 2023) and fine-tune models to specific tasks. Given this typical focus on fine-tuning and motivated by promising work on prompting techniques2 (e.g. Wei et al., 2022; Yao et al., 2023; Wang et al., 2023; Zhou et al., 2023), we notice a research gap in the thorough examina tion of _prompting and metric score extraction in the domain of NLG metrics_, especially for **open source** generative LLMs (here, metric score extraction refers to the process of constructing the metric scores from internal parameters or the output of a model). The Eval4NLP 2023 shared tasks aims to fill this gap by disallowing participants to fine-tune models and by restricting model usage to a fixed list of LLMs (see Figure 1). Hence, participants may only vary how models are prompted, how scores are extracted, and how models are used in combination. To make the task more inclusive, we consider large and small(er) LLMs in two separate tracks. This is different from shared tasks without model restriction, where the largest models often perform best, for example, the WMT metrics shared task (e.g. Freitag et al., 2022). The goal of the shared task is to design evaluation metrics for MT and summarization, which we select as sub-tasks of NLG, while adhering to the model restrictions. Our contributions are the following: * We design a novel, restricted evaluation setting that allows to focus on _prompting and score extraction_ in building evaluation metrics. This might aid inexpensive development of new metrics without fine-tuning or could benefit the selection of metric architectures with fine-tuning. * We collect a novel dataset from Wikipedia articles created past 15.07.2023 with the goal of minimizing the use of data that has been used to pre-train LLaMA2 (Touvron et al., 2023) released on 17.07.2023. This is because some of the allowed models are fine-tuned versions of LLaMA2. * We organized a CodaLab (Pavao et al., 2023) / Codabench (Xu et al., 2022) competition where participants could submit their system scores in a dev and test phase. The dev phase has received 44 participant registrations, of which 9 teams have submitted contributions to the test phase leaderboard and system papers. This paper summarizes their approaches and findings and presents their final ranking. We find that the best performing submissions are on par with or surpassing metrics like Comet-Kiwi-XXL (Rei et al., 2023) and GEMBA (Kocmi and Federmann, 2023) (that are not restricted by the shared task settings) on our test set. This is an interesting finding, as the submissions require less parameters and did not use fine-tuned models. * In line with the Eval4NLP 2021 shared task (Fomicheva et al., 2021), we consider the _explainability_ of the designed metrics. The gen Figure 1: Using a generative LLM as MT evaluation metric. In this example, the metric is reference-free, i.e., it grades the translated sentence based on its source sentence. The input sentences are wrapped into a prompt that is given to an LLM. The LLM generates an output and a final score could for example be extracted from this textual output or from other values involved in the process. The red borders indicate the focus of our shared task. Participants should evaluate the best prompts and the best approaches to extract scores from model output. erative nature of LLMs allows to return natural language or formatted explanations of its output. While these explanations are not necessarily faithful, they also offer value if they are plausible Leiter et al. (2023) or might support the generation process itself Wei et al. (2022). Our paper is structured into seven sections. SS2 gives an overview of how our shared task is related to other competitions and presents related work on evaluation metrics. SS3 describes the competition setup and SS4 describes the datasets and annotation process for the test phase. In SS5, we highlight the approaches tested by the participants, especially those for the test set submissions. SS6 presents the final scores of the participants on the test set and further analyses. Finally, SS7 discusses future work and concludes. ## 2 Related Work In this section, we describe other work that is related to our shared task. Specifically, we give a brief overview of evaluation metrics, highlight recent developments on metrics based on generative LLMs and describe related shared tasks. NLG evaluation metricsEvaluation of NLG systems is necessary in order to be able to compare and rank different models or outputs. Manual/human evaluation is expensive, time consuming and often infeasible for larger datasets. Hence, automatic metrics are constructed. Metrics that use human references are called _reference-based_, while metrics that evaluate the generation quality based on the source text are called _reference-free_ (in MT also Quality Estimation, QE). Many early metrics like BLEU Papineni et al. (2002) and ROUGE Lin (2004) measure the lexical overlap between the generation and a human written reference. Such metrics are limited in their ability to capture semantics of generated text (e.g. Reiter (2018). Newer metrics are usually based on language models able to embed the meanings of tokens (e.g. Zhang et al. (2020); Zhao et al. (2019); Sellam et al. (2020); Rei et al. (2020). These achieve strong(er) correlations to human judgments of generation quality (e.g. Freitag et al. (2022). Embedding based metrics have also enabled reference-free evaluation. This has the added benefit of no longer needing human reference generations and therefore enables further use cases, such as checking generation quality on the fly (e.g. Zerva et al. (2022), training with metrics as supervision signal (e.g. Wu et al. (2018) and using metrics during decoding Fernandes et al. (2022). However, the usage of (large) black-box systems in the evaluation process also poses new challenges, e.g., regarding metric **efficiency**Kamal Eddine et al. (2022); Rei et al. (2022); Grunwald et al. (2022), **explainability**Kaster et al. (2021); Rei et al. (2023); Xu et al. (2023); Guerreiro et al. (2023); Leiter et al. (2023), and **robustness**Chen and Eger (2023); He et al. (2023); Vu et al. (2022). Our work is also concerned with metric explainability and efficiency, because we (1) offer a small model participation track, (2) forbid fine-tuning, (3) allow only open source models and (4) perform a short experiment on the plausibility of explanations. Surveys on NLG metrics are presented by (e.g. Celikyilmaz et al. (2021); Sai et al. (2022). Generation-based evaluation metrics Generation-based evaluation metrics have become a fundamental part of the current metric landscape, beginning with PRISM Thompson and Post (2020) and BARTScore Yuan et al. (2021). These two metrics use the **generation probability** of paraphrases or translations as mechanism to extract metric scores. Newer work that follows the same principle with more high-performing LLMs, for example GPTScore Fu et al. (2023), has shown improved correlations. Another branch of generation-based metrics has originated with recent GPT models and shows that models can directly perform the task of grading machine generated text from in-context task descriptions (e.g. Kocmi and Federmann (2023); Chiang and Lee (2023); Fu et al. (2023); Xu et al. (2023); Yang et al. (2023); Lu et al. (2023). We will refer to these metrics as **output-based**. Here, the rating is usually returned directly in the generated output text or constructed from it. Another branch of these models employs generative LLMs for ranking between better and worse generations Zheng et al. (2023); Shen et al. (2023); Ji et al. (2023). This recent surge of approaches has motivated our shared task. During the runtime of the shared task, other state-of-the-art approaches have been published (e.g. Fernandes et al. (2023). The systems submitted to our competition are different from most generation-based metrics in thoroughly exploring the usage of fixed recent open source LLMs since ChatGPT without fine-tuning. Evaluation shared tasksOur shared task is also related to other shared tasks that consider the evaluation of evaluation metrics for NLG, especially for MT and summarization. For MT, the established WMT workshop comprises multiple shared tasks on MT evaluation. Especially, the _WMT metrics shared task_ (e.g. Mathur et al., 2020; Freitag et al., 2021, 2022) and the _WMT shared task on quality estimation_ (e.g. Specia et al., 2020, 2021; Zerva et al., 2022) are related to ours. The main track of the _WMT metrics shared task_ considers the system- and segment-level evaluation quality of MT metrics -- that is, how well can metrics reflect the quality of whole MT systems or single segment translations. Recent years also put a focus on evaluating the robustness of metrics towards certain linguistic phenomena. The main track of the _WMT metrics shared task_ consists of a reference-based evaluation, i.e., metrics compare the machine translation to human-written reference translations. Recent editions also contain a track for reference-free evaluation, where submitted metrics should directly compare the machine translation to its source text. Since 2021, the _WMT metrics shared task_ has acquired its test data using the fine-grained MQM evaluation scheme (Lommel et al., 2014; Freitag et al., 2021) that has been shown to be more accurate than crowd-sourced direct assessment annotations. The _WMT shared task on quality estimation_ sets its main focus on the reference-free evaluation of machine translations. In recent years, their test sets are also annotated with MQM. Additionally, the quality estimation workshop has, for example, conducted tasks on word-level error prediction and span-level error severity prediction. Like the WMT QE shared task, our task is the reference-free evaluation of machine translations. The biggest difference of our shared task is that we fix the allowed models. That means, participants may only use models from a list we provide to them. Hence, participants have to focus on a thorough exploration of prompting and score extraction rather than fine-tuning and dataset creation. A second difference is that we include summarization as a Figure 2: Schematic overview of possible approaches to compute scores from generative LLMs. Zero-shot approaches do not present examples in the prompt, while few-shot approaches present them. Chain-of-thought (Wei et al., 2022) approaches trigger the LLM to generate an explanation of its process before returning the final score. Fine-grained approaches, e.g. Fernandes et al. (2023), first construct a detailed error analysis and then construct a final score from them. Translation probability approaches, e.g. Fu et al. (2023), use the probability of generating a paraphrase as a translation. In a majority vote approach, the results from multiple prompts could be combined. Self-refinement approaches could query a model multiple times to refine its output. Often, approaches can be combined- For example, chain-of-thought prompting can work with few shot prompting. subtask. As a third difference, our shared task has a subtrack to evaluate explanations that are created as a byproduct of scoring with generative LLM's for plausibility. This last point offers parallels to the Eval4NLP 2021 shared task (Fomicheva et al., 2021) and its successor subtask at the WMT 2022 shared task (Zerva et al., 2022) on quality estimation. These tasks treated human word-level error annotations as explanations of translation quality and evaluated their correlations to manual annotations. In our subtask, we allow for any kind of explanation. Background information on explainability for MT metrics can be found in Leiter et al. (2023). PromptingThe main goal of our shared task is to explore the _prompting_ of LLMs as explainable metrics, i.e., to explore which natural language inputs trigger LLMs to perform as metrics for NLG evaluation. Besides the approaches used for generation-based metrics described above, many more prompting strategies have been developed (see Appendix A). Here, we give a non-exhaustive overview of prompting techniques (Figure 2 shows examples of prompting approaches applied to evaluation metrics). Generally inputs to the model could either be texts that should be completed by LLMs or they could be instructions for instruction-tuned LLMs (e.g. Ouyang et al., 2022). _Zero-shot prompting_ simply provides a task description as model input, without presenting any examples (Wei et al., 2022). In contrast, _few-shot prompting_ provides examples of correct solutions, which the model should generalize on for a new problem. _Chain-of-thought (CoT) prompting_(Wei et al., 2022) triggers the model to output an explanation of computation steps before returning a result. CoT is setup either by providing similar explanation steps in the input or by prompting the model to produce explanations steps itself. Recent work finds, however, that CoT explanations may not necessarily be faithful (Turpin et al., 2023). CoT can be boosted with _self-consistency_(Wang et al., 2023) (or majority vote) where multiple explanations are sampled and the most consistent one is chosen. Another prompting paradigm is _tree-of-thought (ToT)_, where explanations are generated step-wise (termed _thoughts_) and are used to construct a tree of most likely explanation paths based on which better solutions can be found. Other works consider the automatic construction of prompts. For example Zhou et al. (2023) propose _automated prompt engineer (APE)_, a method to optimize prompts for LLMs. Also, Lewis et al. (2020) propose _retrieval augmented generation (RAG)_ to incorporate external knowledge bases into prompt generation. ## 3 Shared Task Setup As described in SS1, the goal of our shared task is to leverage generative LLMs as (explainable) metrics for MT and summarization.3 Thereby, participants are not allowed to fine-tune their models and only certain models are allowed. This leads to a setting in which participants mainly explore _prompting strategies_ and _score extraction_. With _score extraction_, we refer to the way in which the final metric score is constructed from an LLM. In SS2, we describe that metrics based on recent generative LLMs are either based on generation probabilities or directly on decoded textual output. Other options for score extraction could be embedding based, similar to BERTScore (Zhang et al., 2020), or based on attention. Footnote 3: We treat MT and summarization as separate tracks. Figure 1 shows the general setup of using generative LLMs as metrics, illustrated with an example from MT. The figure shows that final scores could be constructed from the generated model output or from other variables involved in the inference process. Specifically, recent works on prompting and metrics offer a wide range of possibilities to influence score construction even without fine-tuning. Some of them are shown in Figure 2. LLM sizesWe organize two tracks based on the model sizes. Models smaller than 25B parameters are considered as **small**, and models bigger than 25B parameters as **large**. Table 1 gives an overview of the allowed models. We mainly choose these models based on their good average performance on the Huggingface Open LLM Leaderboard.4 For Platypus2, Guanaco and WizardLM, we use 4-bit quantized versions with GPTQ (Frantar et al., 2023) to lower the system requirements to run them. Of these models, only the Guanaco model was explicitly fine-tuned with multilingual data. The models Wizard, Nous and Guanaco (fine-tunes of LLMA (Touvron et al., 2023)) were allowed for use from the start of the competition, while the other 3 models (fine-tunes of LLaMA2 (Touvron et al., 2023)) were added to the list 20 days later. In another track, we explore the explanatory value of explanations created as a byproduct of the scoring process (see SS6). PhasesOur shared task was conducted in two phases. First, we hosted a dev phase on CodaLab5(Pavao et al., 2023) from 07.08.23 to 30.09.23. In this phase, participants were developing their approaches and could already evaluate their scores on a leaderboard. While the standing in the dev phase does not influence the ranking of the shared task, the phase aided the creation of a competitive atmosphere, acted as an advertisement for the competition and allowed us to gauge the number of interested participants. The main part of the competition was the test phase conducted from 26.09.23 to 01.10.23. Due to performance problems and unforeseen issues with extending the competition setup on CodaLab, the test phase was migrated to its successor Codabench6(Xu et al., 2022). Submissions to the dev phase and test phase both had to contain at least a file with newline separated scores that grade each sample of our datasets. The test phase additionally required to enter a team name, to indicate the track for each submission and to provide additional files with (1) a short system description, (2) newline separated prompts for each input, and (3) optionally newline separated explanations. Footnote 5: [https://codalab.lisen.upsaclay.fr/competitions/15072](https://codalab.lisen.upsaclay.fr/competitions/15072) Footnote 6: [https://www.codabench.org/competitions/1359/](https://www.codabench.org/competitions/1359/) We describe the shared task datasets in SS4. ## 4 Datasets During the dev phase of our shared task, we provided participants with a train and a dev set. For the test phase, we further created a test set. ### Train & dev set Our train and dev sets are constructed from two datasets. For MT, we select the en-de (English-German) and zh-en (Chinese-English) MQM partitions of the WMT 2022 metrics shared task (Freitag et al., 2022). For summarization, we select SummEval (Fabbri et al., 2021). We conduct our task in a reference-free setting, that is, we do not provide human written reference translations or summaries. Hence, we remove the references provided with WMT and SummEval. SummEval has separate scores for relevance, factuality, coherence and consistency for each sample. We construct a single score per example by averaging these separate scores. Further changes to the original datasets include the split into train and dev partitions as well as shuffling. In the dev phase participants could experiment with generalizable (prompting) approaches. ### Test set We collect a novel test set for the test phase of our shared task. It consists of 3 language pairs for MT: en-de (English-German), en-es (English-Spanish), en-zh (English-Chinese) and a summarization part. We only choose high-resource languages, as the allowed LLaMA(2)-based models have seen limited multilingual data during their pre-training and fine-tuning. Hence, high-resource languages can indicate an upper bound of what these models can achieve without further fine-tuning. To reduce the possibility that our chosen LLMs were trained on parts of the test set, we gather Wikipedia articles created after 15.07.23 as source texts.7 Footnote 7: Limitations of this approach are discussed in §7. Annotator selectionFor MT annotation, we hire one annotator per language pair: one post-graduate student who speaks Spanish as mother tongue with English certifications, one NLP Bachelor student, who is a native English speaker that lives in Germany since many years, and one data and discourse studies Master student, who is a native Chinese speaker who uses English on a daily basis. For summarization annotation, we hire one NLP Bachelor student as well as a data and discourse studies Master student with a prior master in linguistics. Both annotators annotated the same data. All annotators demonstrated their suitability for the role in initial test rounds with further applicants. The distribution of our final MT dataset is shown in Table 3. The total annotation costs were ca. 5000\(\copyright\). Annotation toolFor MT and summarization, we perform fine-grained annotations. In MT, fine-grained MQM annotations have been shown to yield more reliable human annotations than other annotation schemes (Freitag et al., 2021). Also, the fine-grained annotations could be used later on to verify automatically generated explanations.8 We use Google's Anthea9 as annotation tool, because of its support for such fine-grained MQM annotations (Lommel et al., 2014; Freitag et al., 2021a). As we mostly annotate single sentences for MT, we modify Anthea to provide context via a Wikipedia URL that can be consulted if annotators are unsure about a translation. For summarization, annotations were conducted in a modified version of Anthea with a new template (we show a screenshot of the UI in Appendix D). MT annotationWe construct the **MT** dataset from random sentences extracted from the Wikipedia source texts we collected. Thereby, we select sentences with a minimum length of 110 characters, as tokenized by the NLTK sentence tokenizer10. In a few cases, multiple sentences are concatenated due to missing spaces between dots. We obtain machine translations with 4 different translation models (see Table 2). Further, we use MQM as annotation scheme and conducted the annotation process in multiple batches to allow for corrections in subsequent batches. The batch sizes varied between 200 and 600 samples. For the first batch, we changed parts of the process during the annotation. Footnote 10: [https://www.nltk.org/api/nltk.tokenize.html](https://www.nltk.org/api/nltk.tokenize.html) Specifically, we had accidentally chosen an incorrect tokenization for the first few samples of the first batch.11 This may have led to coarser annotation and to ignoring some punctuation issues. We still use these samples, as punctuation errors only have a very small weight in MQM and a coarser annotation does not change the severity assigned to errors. Hence, we assume that the impact on the MQM scores is minimal. Another change between annotation versions is that the first batch contains unordered sentences, while in the second version, all translations of a single source follow each other (in a random order). This has majorly improved the annotation speed as annotators do not need to reread the source sentences anymore. Further, the annotators commented on difficult source texts in the first batch. Therefore, in the following batches, we pre-filter the Wikipedia source articles by their quality classes12 and keep only c-class and better articles. Furthermore, we employ languagetool13 to filter for the grammatical correctness of the source sentences. Footnote 11: For the test phase, we keep the annotations of the first batch, as small issues in source sentences should not invalidate the possibility of creating good translations; instead, we remove every sentence from the final dataset that has at least one major source error. We do this as major source errors might cause ambiguity in the annotation process. For example, if the source is unreadable, it is unclear which quality should be expected from the translation. MT annotator agreementTo verify the quality of the dataset, members of our team who are native speakers of the respective target languages have annotated small subsets of 30-50 samples of the datasets. Table 4 shows the agreement on these subsets, which are acceptable, except for en-es. For en-es, either the MT models performed better, the annotator might have missed some errors or they might have annotated them less strictly, as suggested by Figure 5. Summarization annotation - ConceptTo create a dataset that contains fine-grained and explainable human judgments of summary quality, we perform a fine-grained annotation of three quality aspects defined by Dang (2005); Fabbri et al. (2021): (1) _factuality_, (2) _relevance_, and (3) _readability_ (where readability includes the properties of coherence and fluency used by Fabbri et al. (2021)). Factuality captures whether all facts in the summary correctly represent the source, relevance describes how relevant the summary is with respect to the source and readability includes properties such as the text being fluent, free from grammatical errors and easy to read. We note that readability is covered to a large degree by MTQM annotation \begin{table} \begin{tabular}{l|l|l} \hline \hline **Mode** & **Release Date** & **Track** \\ \hline Platypus2-70B-Instruct-GPTQ (Lee et al., 2023a) & 11.08.23 & Large \\ Guanaco-65B-GPTQ (Dettmers et al., 2023) & 25.05.23 & Large \\ WizardLM-13B-V1.1-GPTQ (Xu et al., 2023a) & 07.07.23 & Small \\ Nous-Hermes-13b & 03.06.23 & Small \\ OpenOrca-Platypus2-13B (Lee et al., 2023b; Mukherjee et al., 2023) & 11.08.23 & Small \\ orca\_mini\_v3\_7b (Mathur, 2023; Mukherjee et al., 2023) & 07.08.23 & Small \\ \hline \hline \end{tabular} \end{table} Table 1: Generative LLMs whose usage was allowed in the Eval4NLP 2023 shared task. We list the huggingface urls in Appendix E. guidelines and build on them. We change these guidelines by removing the category for _adequacy_ and adding _coherence_. Based on Dang (2005), we create the MQM category for coherence with the following sub-categories: _referential clarity_, _redundancy_, _structure_, and _meaning_. The meaning category refers to cases where the summary changes the meaning of the source text without hallucinating, e.g., by concatenating facts in the wrong order. Summarization annotation - Factuality & RelevanceWhile we base _readability_ on a variant of MQM, we annotate the _factuality_ and _relevance_ of summaries in a different way. One common approach to determine these two properties is the _pyramid method_(Nenkova and Passonneau, 2004). Here, small atomic facts of many human written references are collected and ordered in a pyramid, based on their occurrence count. With this pyramid, it can be checked whether (1) facts in a summary are correct and (2) how relevant these facts are (facts at the top are more relevant then those at the bottom). Instead of the pyramid method, we apply a more resource efficient approach, where we use a reference-free approach for annotating the summaries' relevance and factuality. Inspired by Liu et al. (2023c), who manually split the source text into atomic facts, we leverage the NLTK sentence tokenizer to split the source text into enumerated sentences. In some cases, sentences were not split correctly. In sentences of the final test set, we have corrected them manually. We treat each sentence as a single fact.14 Next, we annotate the relevance of each of these facts, i.e., how likely would the annotator use the fact in a given sentence if they should write a summary themselves. Then, we annotate which source sentence is reflected in which part of the summary. By \begin{table} \begin{tabular}{l c} \hline \hline **Type** & **Agreement** \\ \hline en-de & 0.458 \\ en-es & 0.239 \\ en-zh & 0.480 \\ summarization & 0.625 (0.316/0.654) \\ \hline \hline \end{tabular} \end{table} Table 4: Kendall tau as agreement between annotators. For MT, the agreement was calculated on 30-50 samples. For summarization, it was calculated on 373 examples. The first value in brackets is the agreement on the MQM component of our heuristic and the second value is the agreement on the relevance/factuality component of our heuristic. \begin{table} \begin{tabular}{l|l l l} \hline \hline **MT Models** & **Summarization Models** & **Summarization Models** \\ \hline mbart50\_en2m Fan et al. (2021) & sshleifer/distilbart-cnn-12-6 Shleifer and Rush (2020) \\ mbart50\_m2m Fan et al. (2021) & facebook/bart-large-cnn Lewis et al. (2020a) \\ m2m\_100\_418M Tang et al. (2021) & google/bigbird-pegasus-large-bigpatent Zaheer et al. (2020) \\ m2m\_100\_1.2B Tang et al. (2021) & facebook/bart-large-xsum Lewis et al. (2020a) \\ & mT5\_multilingual\_XLSum Hasan et al. (2021) \\ \hline \hline \end{tabular} \end{table} Table 2: An overview of the translation and summarization models we have used to created our datasets. We list the urls in Appendix E. \begin{table} \begin{tabular}{l|l l l} \hline \hline **Type** & **Train** & **Dev** & **Test** \\ \hline en-de & 11046 & 7364 & 1425 \\ en-es & - & - & 1834 \\ en-zh & - & - & 1161 (1297) \\ zh-en & 15750 & 10500 & - \\ summarization & 320 & 1280 & 671 (825) \\ \hline \hline \end{tabular} \end{table} Table 3: Number of samples in our datasets. In the case of the brackets, we filtered out potentially malformed examples after the test phase was conducted. doing so, we can weigh the relevance of each fact that appears in the summary. Finally, we annotate each fact not represented in the original source text as a hallucination. Summarization annotation- ScoreBased on the annotations for _factuality_, _relevance_ and _readability_, we build a heuristic that is negative for bad summaries and positive for good summaries. The equation is shown in Figure 3. Here, \(\alpha\), \(\beta\) and \(\gamma\) can be chosen to determine the influence of each sub-score for relevance, hallucinations and readability, respectively. There are many design choices regarding the weighting of each component and different normalization approaches. We find that these generally only have a small impact on the final ranking of our shared task (see Appendix B). Longer summaries can contain more facts and would hence receive higher scores in this heuristic. We address this issue by generating summaries of similar lengths using max token settings. The example in Figure 4 shows this annotation process. Summarization annotation - ProcessWe select random sections from Wikipedia that have a length of 150 to 800 tokens as measured by the tokenizer of _bart-large-cnn_. The summarization models we use are listed in Table 2. Like with MT, we annotated in several batches. After the first batch, as for MT, we took measures to improve the source quality and ordered the sources to allow for faster annotations. After a check on the annotation quality, some misunderstandings of the annotation classes were uncovered and discussed. In the final evaluation, we drop all examples labeled before this discussion, such that we keep a total of 671 samples. Further, one annotator showed a larger annotation speed and a more consistent understanding of the task. In the test set, we use the annotations of this annotator. We do not use the average here. When we would use the average, we would either have to use the smaller number of samples that was annotated by both annotators or we would need to mix average scores and the samples from the faster annotator. Table 4 shows the agreement between the annotators. It is high for relevance and factuality annotations and lower for the MQM part. Score distributionsFigure 5 shows the distributions of scores constructed from human annotations in our test set, i.e., the scores that are used as ground truth in our evaluation. We can see that all language pairs exhibit a pattern of centering around values divisible by 5. This makes sense, as MQM weighs major errors with 5 points. Also, in _en-es_, samples have generally received a higher score; i.e., fewer major errors were annotated. Finally, our summarization dataset, which uses a combined annotation scheme does not show this pattern. ### Evaluation Following earlier WMT tasks on segment-level evaluation, we compute Kendall's tau correlation (KENDALL, 1945) to compare the system generated scores to human scores. We further report Spearman and Pearson correlations.15 Future work could explore if the usage of other and possibly more suited variants of Kendall, as suggested by Deutsch et al. (2023), might affect the rankings of our competition. Footnote 15: For these evaluations of correlations, we use the implementations of the python scipy library: [https://scipy.org/](https://scipy.org/) ## 5 Shared Task Approaches The test phase of our shared task received submissions from 12 different teams, 9 of which submitted system papers. Here, we summarize the approaches of these 9 systems and announce their final standings. Table 5 gives an overview of the participating teams and of the tracks they are participating in.16 This table can be used as a mapping for the scores reported in SS6. Footnote 16: While the first and last authors of Larionov et al. (2023) are members of the NLLG group, we did not share any internal details that would have given them an advantage. They developed their approach independently. We divide the approaches taken by the participants based on their method to score extraction into _probability-based_, _output-based_ and _agent-based_.17 Besides their final approaches, the participants have explored a large number of possible variations, both of which we summarize. We also introduce the baseline approaches we compare the participants to. Footnote 17: View §2 for the distinction of probability-based and output-based. Probability-basedProbability-based approaches calculate how likely a paraphrase or translation of an input is generated with an LLM. Probability based approaches are explored by HIT-MI&T Lab and Pradhan/Todi. HIT-MI&T Lab define 10 different prompts to translate a source sentence with an LLM. They combine this approach with retrieval augmented generation and few-shot prompting by demonstrating samples in the input prompt selected by (among others) SBERT (Reimers and Gurevych, 2019). Further, they use ensembles to recombine the scores of multiple prompts and models. Pradhan/Todi use the probability-based approach with own prompts and prompts designed by the authors of GPTScore (Fu et al., 2023). They also explore ensembles across 4 different prompts. Output-basedAll submitted papers explore the direct usage of an LLM's natural language output as score. HIT-MI&T Lab test the same sample selection and ensembling strategies described above Figure 4: An example of the summarization annotation process: (1) we automatically perform sentence tokenization of the source text, (2) annotators assign each sentence a relevance from low to high, (3) annotators match facts in the summary with sentences in the source text, (4) annotators mark hallucinated content in the summary and (5) annotators perform MQM error annotations of the summary. Based on steps 1, 2 and 3, we rate the relevance and factuality of the generated summary. Based on 4, we rate the amount of hallucinations, which also contributes to factuality. Finally, based on 5, we rate the readability of the summary. Figure 3: A heuristic for fine-grained reference-free evaluation of summaries. We set \(\alpha=3\), \(\beta=5\) and \(\gamma=1\). with 4 different prompts in an output-based setting. NLLG follow a similar approach to HIT-MI&T Lab and retrieve demonstration examples by finding similar examples with LABSE Feng et al. (2022) embeddings in an output-based setting. Pradhan/Todi try one approach in which they present a prompt that triggers the prediction of a single score and one approach that triggers the model to first rate summary qualities for consistency, coherence, fluency and relevancy. Then they aggregate these scores in 3 different ways. LTRC quantize Orcamini themselves to run an even smaller model (which is close to violating the allowed settings of the shared task). They provide a detailed explanation to their model that triggers it to produce fine-grained scores and a combined score in the same output. DSBA choose rating guidelines from related work -- concretely, the human guidelines (HG) for SummEval, the machine guidelines for G-Eval Liu et al. (2023) and evaluation steps generated by GPT4 OpenAI (2023). They test various adaptations to this prompt, explore the usage of examples in the prompt and the usage of coarse-grained vs. fine-grained and aggregated scores. On the test set, they add a shortcut for very bad summarizations and employ bucketing for their scores. iML explores evaluating 6 different criteria over all model combinations. Kotonya et. al. explore 8 Figure 5: MQM and summarization score distributions of our datasets. These scores are constructed with heuristics from human annotations. The annotation process is described in §4.2. For MT, the scores range from -25 to 0. For summarization, they range from -25 to 54 (positive scores are possible because our heuristic rates relevant facts positively). The blue line indicates the kernel density estimation describing the distribution. prompt types: 3 base prompts and their extensions with chain-of-though Wei et al. (2022), zero-shot and few-shot settings. IUST_NLP_Lab explores various zero-shot and few-shot settings with Orcamini. Finally, IUSTNLPLab and LTRC generate explanations as an additional request to their model. Agent-basedWhile they also use an output-based setup, we place TaiwanSenior in a separate group. They define 4 characters that should be played by a model and a list of 10 properties. For example they define "Internet Troll" as a critical character or "Teacher" as more knowledgeable character, with the intention that different viewpoints can help to judge generation quality better. Then, they evaluate the combined 40 settings and use XGBoost Chen and Guestrin (2016) to combine their scores. While they did not add their top submissions to the final leaderboard they present their reasonably good final scores in their paper. BaselinesAs baselines, we apply the widely used metrics BERTScore (with XLMR-large embeddings) Zhang et al. (2020), SBERT Reimers and Gurevych (2019) cosine-similarity (with XLMR-large embeddings), SUPERT Gao et al. (2020) (with 3 settings), GEMBA Kocmi and Federmann (2023) and Comet-Kiwi-XXL Rei et al. (2023). These baselines use models that are not allowed as part of the shared task. Therefore, we refer to them as external baselines and abbreviate them as follows in SS6: _ex-BaselineBERTScore_, _ex-BaselineSBERT_, _ex-BaselineSuperMP(net2)_, _ex-BaselineSupertF(ull)_, _ex-BaselineSupert5_, _ex-BaselineGEMBA_ and _ex-BaselineComet(KiwiXXL)_. Further, we include one baseline for every allowed model that uses the DA score prompt of GEMBA Kocmi and Federmann (2023) (with a slight modification for summarization). Following the order in table 1, in SS6 we abbreviate these baselines with _baselinePlaty_lg_, _baselineGuanaco_lg_, _baselineWizard_, _baselineNous_, _baselineOrcaPlary_ and _baselineOrcaMini_. The models are further specified in Appendix F. ## 6 Results and Analysis In this section, we first report statistics of the shared task. Then, we present and discuss the final system ranking. Note that we include submissions of participants on the test set leaderboard that did not submit a system paper. However, we do not describe their approaches in SS5. Lastly, we discuss the implications of these results on the development of generation-based metrics. StatisticsThe dev phase on CodaLab has received 44 registrations, 13 of which have submitted their scores. In total, there have been 1048 submissions on the dev set suggesting that some participants might have optimized their method on the dev set. Especially, one participant submitted 417 submissions on the dev set. The test phase on Codabench has received 21 registrations and 248 submissions from 11 participants. We have restricted the number of allowed submissions per day to 10. Allowing a higher number would enable participants to optimize their approaches on the test set too much, so that the results would not reflect the generalization capability anymore. On the other hand, we wanted to give participants the option to try out multiple approaches they designed. Further, Codabench would sometimes fail to compute scores and still deduct one submission. Hence, ten submissions per day allow to accomo \begin{table} \begin{tabular}{l|c|c} \hline \hline **Team** & **Authors** & **Tracks** \\ \hline DSBA & Kim et al. (2023) & S, L, SU \\ iML & Akkasi et al. (2023) & S, L, SU \\ IUST\_NLP\_Lab & Mahmoudi (2023) & S, SU, E \\ HIT-MI\&T Lab & Zhang et al. (2023) & S, MT \\ Kotonya et. al. & Kotonya et al. (2023) & S, SU \\ LTRC & Baswani et al. (2023) & S, MT, SU, E \\ NLLG & Larionov et al. (2023) & L, MT, SU \\ Pradhan/Todi & Pradhan and Todi (2023) & S, SU \\ TaiwanSenior & Lu and Yu-Ting (2023) & S, MT \\ \hline \hline \end{tabular} \end{table} Table 5: Overview of shared task submissions. The letters are abbreviations for the following tracks: S(small model track), L(large model track), M(achine)T(translation track), SU(mmarization track), E(xplainability track). \begin{table} \begin{tabular}{l|c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c}{Kendall} & \multicolumn{3}{c}{Pearson} & \multicolumn{3}{c}{Spearman} \\ Team & de & zh & es & de & zh & es & de & zh & es \\ \hline **HIT-MI\&T Lab** & **0.491** & **0.375** & **0.417** & **0.655** & **0.528** & **0.453** & **0.656** & **0.511** & **0.553** \\ _ex-BaselineGEMBA_ & **0.492** & **0.384** & **0.409** & 0.506 & 0.356 & 0.251 & **0.625** & **0.496** & 0.512 \\ _ex-BaselineComet_ & 0.421 & **0.345** & 0.288 & 0.562 & 0.443 & 0.331 & 0.583 & **0.484** & 0.403 \\ _ex-BaselineBertscore_ & 0.239 & 0.174 & 0.221 & 0.344 & 0.236 & 0.179 & 0.344 & 0.252 & 0.312 \\ _ex-BaselineSBERT_ & 0.209 & 0.167 & 0.226 & 0.246 & 0.210 & 0.081 & 0.304 & 0.242 & 0.320 \\ **LTRC** & 0.194 & 0.144 & 0.112 & 0.232 & 0.133 & 0.031 & 0.233 & 0.173 & 0.132 \\ _baselineNous_ & 0.189 & 0.011 & 0.112 & 0.183 & 0.044 & 0.045 & 0.230 & 0.013 & 0.136 \\ _baselineOrcaPlaty_ & 0.189 & 0.011 & 0.112 & 0.183 & 0.044 & 0.045 & 0.230 & 0.013 & 0.136 \\ seanstilwell & 0.120 & NaN & NaN & 0.164 & NaN & NaN & 0.152 & NaN & NaN \\ _baselineOrcaMini_ & 0.073 & 0.188 & 0.065 & 0.030 & 0.102 & 0.009 & 0.088 & 0.225 & 0.077 \\ _baselineWizard_ & 0.101 & 0.065 & 0.079 & 0.047 & 0.057 & 0.026 & 0.121 & 0.077 & 0.093 \\ **TaiwanSenior** & 0.041 & NaN & NaN & -0.037 & NaN & NaN & 0.051 & NaN & NaN \\ \hline \hline \end{tabular} \end{table} Table 6: Results of the _small_ model track for MT, ordered by the mean of correlations (for columns without NaN). Each column shows the correlation of metric scores to MQM scores for English-X language pairs. Results that are bolded are significantly better than non-bolded results, with \(p\leq 0.05\), as measured by a permute-both significance test (Deutsch et al., 2021). Teams with paper submissions are bolded. \begin{table} \begin{tabular}{l|c c c c|c c c|c c} \hline \hline & \multicolumn{3}{c}{Kendall} & \multicolumn{3}{c}{Pearson} & \multicolumn{3}{c}{Spearman} \\ Team & de & zh & es & de & zh & es & de & zh & es \\ \hline _baselinePlaty\_lg_ & **0.362** & **0.293** & **0.264** & **0.312** & **0.270** & 0.129 & **0.445** & **0.364** & **0.320** \\ _baselineGuanaco\_lg_ & **0.350** & 0.219 & **0.241** & **0.344** & 0.176 & 0.125 & **0.445** & 0.273 & **0.300** \\ **NLLG** & 0.245 & 0.139 & 0.179 & 0.257 & 0.196 & **0.155** & 0.335 & 0.190 & 0.238 \\ **kaiwalya\_large** & 0.174 & 0.113 & 0.125 & 0.161 & 0.141 & 0.052 & 0.209 & 0.138 & 0.147 \\ \hline \hline \end{tabular} \end{table} Table 7: Results of the _large_ model track for MT, ordered by the mean of correlations (for columns without NaN). Each column shows the correlation of metric scores to MQM scores for English-X language pairs. Results that are bolded are significantly better than non-bolded results, with \(p\leq 0.05\), as measured by a permute-both significance test (Deutsch et al., 2021). Teams with paper submissions are bolded. date such scenarios. Two participants have used up a contingent of \(\approx 50\) submissions. Of the 11 test phase participants, 9 have submitted a system paper. The first authors are from China, India (2), Korea, Taiwan, Canada, Iran, Germany and the United Kingdom. Thus, many authors are from developing countries. Also, many authors are students. Hence, their resource availability was limited, leading many of them to opting for smaller models. Correlation with humansHere, we present the results that the participants achieve on the test sets. A mapping between team names and authors can be found in Table 5. Table 6 shows the final ranking of the _small_ MT subtask. Compared to the other participants, HIT-MI&T Lab leads by a large margin on all correlation measures. It even outperforms the recent ex-BaselineCometKiwiXXL significantly and is only matched by ex-BaselineGEMBA with GPT-4 (in our baselines). Notably, both of these models have many more parameters. This ranking is surprising, as the scores they report on the dev set are still better than their baselines using the same models, but not comparatively strong as ex-BaselineCometKiwiXXL (see the _discussion_ paragraph in this section). The test set approach that HIT-MI&T Lab report in their paper builds on ensembling probability-based scores from prompts to OpenOrca-Platypus. These prompts contain between 3 up to 5 example demonstrations via retrieval augmented generation.18 Future work should explore whether their approach can uphold its strong performance across other datasets and settings. The ranking is then followed by various baseline models and team LTRC, which used their chain-of-thought prompting + fine-grained approach for en-de and zero shot prompting for the other two language pairs. Footnote 18: In their paper they describe that they use the maximum number of examples. However, this number is capped to 5 by their implementation. Table 7 shows the final ranking of the _large_ MT subtask. For this subtask, the baselines have not been beaten. Interestingly, NLLG who also use retrieval augmented generation, perform worse than HIT-MI&T Lab, even though they use a larger model. We assume that this is mainly caused by the latter using a probability-based approach, while NLLG use an output based approach. Potentially the translation capability of the LLMs (with next-word prediction) is larger than th \begin{table} \begin{tabular}{l r r r} \hline \hline Team & kd & ps & sp \\ \hline **DSBA** & **0.633** & **0.783** & **0.782** \\ **iML** & **0.615** & **0.763** & **0.772** \\ _ex-BaselineBertscore_ & 0.578 & **0.771** & **0.765** \\ _ex-BaselineSupertMP_ & 0.554 & 0.736 & **0.747** \\ **IUST\_NLP\_Lab** & 0.573 & 0.722 & **0.722** \\ _baselineOrcA Mini_ & 0.560 & 0.681 & **0.706** \\ _ex-BaselineSupertF_ & 0.516 & 0.686 & **0.706** \\ **Kotonya et. al.** & 0.546 & 0.680 & **0.682** \\ **LTRC** & 0.531 & 0.691 & **0.679** \\ _baselineOrcaPlaty_ & 0.552 & 0.666 & **0.674** \\ _baselineNous_ & 0.552 & 0.666 & **0.674** \\ _ex-BaselineSupert5_ & 0.492 & 0.654 & **0.678** \\ _ex-BaselineSBERT_ & 0.465 & 0.625 & **0.645** \\ _baselineWizard_ & 0.411 & 0.534 & 0.536 \\ **Pradhan/Todi** & 0.436 & 0.032 & 0.610 \\ Haaland & 0.221 & 0.514 & 0.280 \\ \hline \hline \end{tabular} \end{table} Table 8: Results of the _small_ model track for summarization, sorted by the mean of correlations. _kd_ stands for Kendall, _ps_ stands for Pearson and _sp_ stands for Spearman. Results that are bolded are significantly better than non-bolded results, with \(p\leq 0.05\), as measured by a permute-both significance test Deutsch et al. (2021). Teams with paper submissions are bolded. \begin{table} \begin{tabular}{l r r r} \hline \hline Team & kd & ps & sp \\ \hline **DSBA** & **0.603** & **0.756** & **0.766** \\ **iML** & **0.612** & **0.738** & **0.768** \\ _baselinePlaty\_lg_ & **0.600** & **0.740** & **0.753** \\ **NLLG** & 0.471 & 0.643 & 0.638 \\ _baselineGuanaco\_lg_ & 0.402 & 0.492 & 0.504 \\ \hline \hline \end{tabular} \end{table} Table 9: Results of the _large_ model track for summarization. _kd_ stands for Kendall, _ps_ stands for Pearson and _sp_ stands for Spearman. Results that are bolded are significantly better than non-bolded results, with \(p\leq 0.05\), as measured by a permute-both significance test Deutsch et al. (2021). Teams with paper submissions are written in bold. multilingual text. Table 8 shows the final ranking of the _small_ summarization subtask. DSBA and iML lead this track. In their final submission, DSBA uses zero-shot prompting in a fine-grained evaluation setting, where they create an ensemble over 3 different prompts for relevance, factuality and readability. On the other hand, the final submission of iML is using a single zero-shot prompt that asks the model to rate the syntax of an input summary with respect to its source. Table 9 shows the final ranking of the _large_ summarization subtask. Again iML and DSBA perform on par using the same methods they used for the small models. Interestingly, for MT and summarization, the small models (Tables 6 and 8) have beaten the large models. One potential reason might be that the large models take much longer to run and therefore they could not be examined with the same care. Further, it is interesting that _baselineOrcaMini_ and IUST_NLP_Lab beat many other models with OrcaMini despite its parameter count being the lowest of the allowed models'. Generally, many teams opted for the usage of small models. Some teams only use the OrcaMini model, due to resource constraints. This highlights how the usage of smaller models in metrics fosters inclusiveness. We show a further analysis of the impact of the summarization subcategories in Appendix C. PerformanceThe best performing approaches of the participants achieve a similar Kendall correlation as our team members when we were testing the inter-annotator agreement on a small subset of samples (see SS3). This suggests that these approaches are already close to the performance of native speakers that did little training with the annotation process (as compared to our main annotators with a strong language background and more annotation experience on the task). This result is similar to the findings of Freitag et al. (2021), where metrics outperformed crowd sourced DA annotation scores. This is an intriguing finding and highlights the potential of current open source models with and without fine-tuning. Also, many prompting approaches like tree-of-thoughts or self-refinement still remain to be explored, which extends this potential. Further, it shows that for closed source models like ChatGPT or GPT4, similar opportunities may exist and lead to new state-of-the-art metrics. The results also show that comparably small hardware can already be enough to create strong new metrics. In 1918, Wimble built a small ship at Hastings with help from a friend and sailed to the West Indies to seek his fortune after his family faced financial hardship. In 1922, he acquired land in the Bahamas which enabled him to begin trading with the English colonies in mainland North America. He also acquired land in North Carolina, which was formally granted to him by George Burrington's council on August 4, 1723. Wimble later moved to Boston, Massachusetts where he married Rebecca Waters, the daughter of a prominent local, on March 26, 1724. Their first son, James, was born on December 20, 1724. He owned land in the South End which he presumably operated as a distillery. While in Boston, he continued his business of moving trade goods between North Carolina and various British trade posts in the West Indies. This business enabled him to increase his land holdings in North Carolina and purchase a brigantine, which he named "Rebecca" after his wife. In 1932, Wimble lost his ship and all of its cargo to a hurricane after being forced by Governor Woodes Rogers of the Bahamas to use his ship to protect vessels and salt ponds in Rum Cay. Wimble was forced to sell a portion of his belongings, land, and slaves to cover the loss and began the process of trying to collect damages from Woodes Rogers' commandeering of his ship.19 \begin{table} \begin{tabular}{p{142.3pt} p{142.3pt} p{142.3pt}} \hline \hline **Source** & Summary & Explanation \\ \hline \hline \end{tabular} \end{table} Table 10: Explanation generated with the approach by LTRC. It correctly identifies the issue of the word _Wimble_ repeating often. MT, we compute the baselines that use allowed models and Comet-Kiwi-XXL on the dev-set. HIT-MI&T Lab report correlations of \(0.250\) (en-de) and \(0.319\) (zh-en). The best baseline with allowed models, baselineGuanaco, achieves \(0.260\) (en-de) and \(0.311\) (zh-en). Comet-kiwi-XXL achieves \(0.295\) (en-de) and \(0.337\) (zh-en). While HIT-MI&T Lab is still very good on the dev set and beats other small model baselines by a large margin, it does not outperform the large models. This could be caused by several reasons: (1) The final approach of HIT-MI&T Lab might have been slightly different. (2) The dev set is built from WMT22 which uses data from various domains (e.g. news and social media) as a source, while we create our test set from Wikipedia. (3) We use 4 MT systems, while WMT22 is more diverse. (4) Our annotators have assigned more major errors per average than the annotators for WMT. This could be caused by (2) and (3). In Appendix G, we show the distributions of the dev sets. Future work could verify the viability of their method on further datasets and with further models. For summarization, both iML and DSBA report a score of 0.45 on the dev set using small models. This surpasses baselines we computed on the dev set. The closest one is _baselinePlaty_large_, with \(0.44\). Tha means, summarization models on the dev set achieve a similar ranking to the test set. These similar scores raise the question whether there is some upper bound for each model that can be reached with different prompts. ## 7 Conclusion We summarize the shared task and discuss future work. Summary & ImplicationsThis work describes the Eval4NLP 2023 shared task on _prompting LLMs as explainable metrics_. We have constructed a fine-grained dataset for MT and summarization evaluation, with a novel annotation scheme for the latter. Further, we have organized a competition following the novel restriction to specify allowed models and disallow fine-tuning in a MT and summarization evaluation setting. By running a small and a large model track, we have enabled participation for participants with fewer resources, leading to an inclusive shared task setting. The top scores of the participants highlight a number of interesting findings that we summarize here: * **Small Models**: The results on the test set show that the best solutions built on small models outperform those that are built on larger models. This is contradicting usual patterns and an interesting implication for metric efficiency. * **Probability-based vs. Output-based**: The MT ranking is led by a probability-based method, while the summarization ranking is led by two output-based methods. The allowed LLMs have seen little multilingual training data. Therefore, their understanding of other languages than English could be smaller than their capability of translation, hence favoring probability-based methods. Also, the winner for MT is using retrieval augmented generation, which might infuse more multilingual capabilities into the model. * **Simplicity helps**: Many baseline systems achieved high ranks, despite using a simple prompting approach. Participants often report that demonstrating examples reduced their performance. The two best performing approaches for summarization use zero-shot prompting. Hence, lean metrics are easier to design and can still be very powerful. The best ranked systems, however, explore more intricate prompts and ensembles. The contributions of our participants highlight once more how current LLMs can achieve state-of-the-art performance, even without any task-specific fine-tuning. Future WorkWe have considered high resource languages for the MT task. Future work could evaluate low resource languages, especially once more generative LLMs are released that are trained across a wide range of languages. Also, prospectively one might encourage and set rewards for pipeline-based solutions. In other words, currently most approaches of the shared task are based on single prompts or probability outputs; instead many interesting approaches like tree-of-thoughts Yao et al. (2023) explore pipelines in which the output is generated iteratively or in parallel. Future work might also create larger or more diverse datasets for our evaluation scheme. Another point is that our current work only contains a small analysis of explainability that remained indecisive on the explanation quality between two participants. This could be extended in future work. ## Acknowledgements We thank our participants for the active contribution and discussion. Further, we thank our annotators for their effort in creating our test sets. Christoph Leiter is financed by the BMBF project "Metrics4NLG". Steffen Eger is financed by DFG Heisenberg grant EG 375/5-1. ## Limitations One potential limitation of our work lies in the usage of data from Wikipedia after 15.07. Although the chosen articles were indeed picked after July 15th, it is important to note that some of the content may have been duplicated from elsewhere, a few texts were automatically translated from existing entries in other languages, and there is the possibility that some of the content was computer-generated. Another limitation of our work lies in the low agreements between our team member and our Spanish annotator for the small test conducted. Hence, the quality of our Spanish dataset is less proven and evaluation on Spanish might be less accurate than the other two language pairs. Due to time restrictions, we could not do further evaluations. Still, we believe that our Spanish annotator was capable in their language and thorough with their analysis of the samples. As another limitation, pre-filtering with language tool and later on sorting out severe source errors might miss out on more subtle errors causing problems in the test set.
2301.00582
Sparse neural networks with skip-connections for identification of aluminum electrolysis cell
Neural networks are rapidly gaining interest in nonlinear system identification due to the model's ability to capture complex input-output relations directly from data. However, despite the flexibility of the approach, there are still concerns about the safety of these models in this context, as well as the need for large amounts of potentially expensive data. Aluminum electrolysis is a highly nonlinear production process, and most of the data must be sampled manually, making the sampling process expensive and infrequent. In the case of infrequent measurements of state variables, the accuracy and open-loop stability of the long-term predictions become highly important. Standard neural networks struggle to provide stable long-term predictions with limited training data. In this work, we investigate the effect of combining concatenated skip-connections and the sparsity-promoting $\ell_1$ regularization on the open-loop stability and accuracy of forecasts with short, medium, and long prediction horizons. The case study is conducted on a high-dimensional and nonlinear simulator representing an aluminum electrolysis cell's mass and energy balance. The proposed model structure contains concatenated skip connections from the input layer and all intermittent layers to the output layer, referred to as InputSkip. $\ell_1$ regularized InputSkip is called sparse InputSkip. The results show that sparse InputSkip outperforms dense and sparse standard feedforward neural networks and dense InputSkip regarding open-loop stability and long-term predictive accuracy. The results are significant when models are trained on datasets of all sizes (small, medium, and large training sets) and for all prediction horizons (short, medium, and long prediction horizons.)
Erlend Torje Berg Lundby, Haakon Robinsson, Adil Rasheed, Ivar Johan Halvorsen, Jan Tommy Gravdahl
2023-01-02T10:13:33Z
http://arxiv.org/abs/2301.00582v2
# Sparse neural networks with skip-connections for nonlinear system identificationfootnoteinfo ###### Abstract Data-driven models such as neural networks are being applied more and more to safety-critical applications, such as the modeling and control of cyber-physical systems. Despite the flexibility of the approach, there are still concerns about the safety of these models in this context, as well as the need for large amounts of potentially expensive data. In particular, when long-term predictions are needed or frequent measurements are not available, the open-loop stability of the model becomes important. However, it is difficult to make such guarantees for complex black-box models such as neural networks, and prior work has shown that model stability is indeed an issue. In this work, we consider an aluminum extraction process where measurements of the internal state of the reactor are time-consuming and expensive. We model the process using neural networks and investigate the role of including skip connections in the network architecture as well as using \(\ell_{1}\) regularization to induce sparse connection weights. We demonstrate that these measures can greatly improve both the accuracy and the stability of the models for datasets of varying sizes. D + Footnote †: footnote 2016). Skip-connections were originally proposed by He et al. (2016) as a way to circumvent this, by introducing a shorter path between the early layers and the output. They were not only found to enable the training of significantly deeper networks, but Li et al. (2017) also demonstrated that they may help improve training convergence. In the field of dynamical systems and control, we often design a model with a purpose in mind, such as the design of a control system or state observer. Crucially, we are interested in the behavior and performance of the controlled system in terms of objectives such as energy efficiency or yield. This implies that the model does not need to be perfectly accurate for the entire state space, so long as the resulting closed-loop performance is sufficient (known as _identification for control_ (I4C)). If high-frequency measurements from the system are available, only the short-term behavior of the model is important, since any drift out of the operational space is quickly corrected. However, if measurements are rarely available, such as in the aluminum electrolysis process that we consider, the long-term model behavior and open-loop stability become much more important. Stable long-term predictions can be important for decision-making, meaning that a model with good long-term stability and accuracy is inherently important. In this work, we investigate the effects of adding skip connections and \(\ell_{1}\) regularization on the accuracy and stability of these models for short, medium, and long horizons. We address the following questions: * How do skip connections affect the stability and generalization error of neural networks trained on high-dimensional nonlinear dynamical systems? * How does sparsity affect stability and generalization error for neural networks with skip connections that model nonlinear dynamics? * How does the amount of training data affect neural networks with skip connections compared to neural networks without skip connections? We make the following contributions: * We perform a black box system identification of an aluminum electrolysis cell using different NN architectures. * We demonstrate that the accuracy and open-loop stability of the resulting models is greatly improved by using \(\ell_{1}\) weight regularization and incorporating skip connections into the architecture. * This advantage is consistent across datasets of varying sizes. ## 2 Theory ### Physics-based model for aluminum extraction We evaluate NNs for nonlinear system identification by first training them on synthetic data generated from a known physics-based models (PBM). The model used in this work describes the internal dynamics of an aluminum electrolysis cell based on the Hall-Heroult process. Figure 1 shows a diagram of the electrolysis cell. Traditional PBMs of such systems are generally constructed by studying the mass/energy balance of the chemical reactions. Lundby et al. (2022) presents a more detailed exposition of the model that we use in this work. The system is described by a set of ordinary differential equations (ODE): \[\dot{\mathbf{x}}=\mathbf{f}(\mathbf{x},\mathbf{u}), \tag{1}\] where \(\mathbf{x}\in\mathbb{R}^{8}\) and \(\mathbf{u}\in\mathbb{R}^{5}\) represent the time-varying states and inputs of the system respectively. The full set of equations are: \[\dot{x}_{1} =\frac{k_{1}(g_{1}-x_{7})}{x_{1}k_{0}}-k_{2}(x_{6}-g_{1}) \tag{2a}\] \[\dot{x}_{2} =\,u_{1}-k_{3}u_{2}\] (2b) \[\dot{x}_{3} =\,u_{3}-k_{4}u_{1}\] (2c) \[\dot{x}_{4} =\,-\frac{k_{1}(g_{1}-x_{7})}{x_{1}k_{0}}+k_{2}(x_{6}-g_{1})+k_{ 5}u_{1}\] (2d) \[\dot{x}_{5} =\,k_{6}u_{2}-u_{4}\] (2e) \[\dot{x}_{6} =\,\frac{\alpha}{x_{2}+x_{3}+x_{4}}\Bigg{[}u_{2}g_{5}+\frac{u_{ 2}^{2}u_{5}}{2620g_{2}}-k_{7}(x_{6}-g_{1})^{2}\] (2f) \[+k_{8}\frac{(x_{6}-g_{1})(g_{1}-x_{7})}{k_{0}x_{1}}-k_{9}\frac{x_ {6}-x_{7}}{k_{10}+k_{11}k_{0}x_{1}}\Bigg{]}\] \[\dot{x}_{7} =\,\frac{\beta}{x_{1}}\Bigg{[}\frac{k_{9}(g_{1}-x_{7})}{k_{15}k_{ 0}x_{1}}-k_{12}(x_{6}-g_{1})(g_{1}-x_{7})\] (2g) \[+\,\frac{k_{13}(g_{1}-x_{7})^{2}}{k_{0}x_{1}}\,-\frac{x_{7}-x_{8}} {k_{14}+k_{15}k_{0}x_{1}}\Bigg{]}\] \[\dot{x}_{8} =\,k_{17}k_{9}\left(\frac{x_{7}-x_{8}}{k_{14}+k_{15}k_{0}\cdot x _{1}}-\,\frac{x_{8}-k_{16}}{k_{14}+k_{18}}\right), \tag{2h}\] where the intrinsic properties \(g_{i}\) of the bath mixture are given as: Figure 1: Schematic of the setup \[g_{1} =991.2+112c_{x_{3}}+61c_{x_{3}}^{1.5}-3265.5c_{x_{3}}^{2.2} \tag{3a}\] \[-\frac{793c_{x_{2}}}{-23c_{x_{2}}c_{x_{3}}-17c_{x_{3}}^{2}+9.36c_{x_ {3}}+1}\] \[g_{2} =\exp\,\left(2.496-\frac{2068.4}{273+x_{6}}-2.07c_{x_{2}}\right)\] (3b) \[g_{3} =0.531+3.06\cdot 10^{-18}u_{1}^{3}-2.51\cdot 10^{-12}u_{1}^{2}\] (3c) \[+6.96\cdot 10^{-7}u_{1}-\frac{14.37(c_{x_{2}}-c_{x_{2},crit})-0.431 }{735.3(c_{x_{2}}-c_{x_{2},crit})+1}\] \[g_{4} =\frac{0.5517+3.8168\cdot 10^{-6}u_{2}}{1+8.271\cdot 10^{-6}u_{2}}\] (3d) \[g_{5} =\frac{3.8168\cdot 10^{-6}g_{3}g_{4}u_{2}}{g_{2}(1-g_{3})}. \tag{3e}\] See Table 1 for a description of these quantities. The values of these constants can be found in Lundby et al. (2022). The dynamics of the system are relatively slow. The control inputs \(u_{1},\ u_{3}\) and \(u_{4}\) are therefore well modeled as impulses that represent discrete events involving the addition or removal of substances. This results in step changes in the linear states \(x_{2},x_{3},x_{5}\), which act as accumulator states for the mass of the corresponding substance (see Table 1). The control inputs \(u_{2}\) and \(u_{5}\) are piecewise constant, and always nonzero. The inputs \(\mathbf{u}\) are determined by a simple proportional controller \(\boldsymbol{\pi}(\mathbf{x})\). The simulation model is derived in Lundby et al. (2022), and we refer to that article for further details. ### Deep neural network with skip connections A NN with \(L\) layers can be compactly written as an alternating composition of affine transformations \(\mathbf{Wz}+\mathbf{b}\) and nonlinear activation functions \(\boldsymbol{\sigma}:\mathbb{R}^{n}\mapsto\mathbb{R}^{n}\): \[\hat{\mathbf{f}}(\mathbf{z})=\hat{\mathbf{f}}_{L}\circ\cdots\circ\hat{ \mathbf{f}}_{2}\circ\hat{\mathbf{f}}_{1} \tag{4}\] where the activation function \(\boldsymbol{\sigma}_{i}\), weight matrix \(\mathbf{W}_{i}\), and bias vector \(\mathbf{b}_{i}\) correspond to the \(i\)th layer of the network. The universal approximation property of NNs makes them very attractive as a flexible model class when a lot of data is available. The representation capacity is generally understood to increase with both the depth and the width (the number of neurons in each layer), although early attempts to train very deep networks found them challenging to optimize using backpropagation due to the vanishing gradients problem. One of the major developments that enabled researchers to train deep NNs with many layers is the _skip connection_. A skip connection is simply an additional inter-layer connection that bypasses some of the layers of the network. This provides alternate pathways through which the loss can be backpropagated to the early layers of the NN, which helps mitigate the issues of vanishing and exploding gradients, which were major hurdles to training deeper models. In this work, we utilize a modified DenseNet architecture as proposed by Huang et al. (2017), where the outputs of earlier layers are concatenated to all the consecutive layers. We simplify the structure such that the model only contains skip connections from the input layer to all consecutive layers. We call this architecture InputSkip, which has reduced complexity compared to DenseNet. This design is motivated by the fact that the output of each layer (including the final output) becomes a sum of both a linear and a nonlinear transformation of the initial input \(\mathbf{x}\). Hence, the skip connections from the input layer to consecutive layers facilitate the reuse of the input features for modeling different linear and nonlinear relationships more independently of each other. ## 3 Method and setup In this section, we present all the details of data generation and its preprocessing, and the methods that are required to reproduce the work. The steps can be briefly summarized as follows: * Use Equation (2) with random initial conditions to generate 140 trajectories with 5000 timesteps each. Set aside 40 for training and 100 for testing. Construct 3 datasets by selecting 10,20 and 40 trajectories respectively. * For each model class and dataset, train 10 instances on the training data. * Repeat all experiments with \(\ell_{1}\) regularization, see loss function in Equation (5). * Use trained models to generate predicted trajectories along the test set and compare them to the 100 test trajectories. ### Data generation Equation (2) was discretized using the RK4 scheme with a fixed timestep \(h=10\,\mathrm{s}\) and numerically integrated on the interval \([0,5000h]\). We used uniformly randomly sampled initial conditions from the intervals shown in Table 2 to generate 140 unique trajectories. We set aside 40 trajectories for training and 100 of the trajectories as a test set. The 40 training trajectories were used to create 3 datasets of varying sizes (small, medium, large), namely 10, 20, and 40 trajectories. In total, the datasets contained 50000, 100000, and 200000 individual data points respectively. Equation (2) also depends on the input signal \(\mathbf{u}\). In practice, this is given by a deterministic control policy \(\mathbf{u}=\boldsymbol{\pi}(\mathbf{x})\) that stabilizes the system and keeps the state \(\mathbf{x}\) within some region of the state space that is suitable for safe operation. We found that this was insufficient to successfully train our models, because the controlled trajectories showed very little variation after some time, despite having different initial conditions. This lack of diversity in the dataset resulted in models that could not generalize to unseen states, a situation that frequently arose during evaluation. To inject more variety into the data and sample states \(\mathbf{x}\) outside of the standard operational area, we used a stochastic controller \[\boldsymbol{\pi}_{s}(\mathbf{x})=\boldsymbol{\pi}(\mathbf{x})+\mathbf{r}(t)\] that introduced random perturbations \(\mathbf{r}(t)\) to the input. These perturbations were sampled using the Amplitude-modulated Pseudo-Random Binary Signal (APRBS) method proposed by Winter and Breitsamter (2018) for nonlinear system identification. In system identification it is typical to optimize the model to estimate the function \(\hat{\mathbf{x}}=\mathbf{f}(\mathbf{x},\mathbf{u})\). However, this is not feasible for Equation (2) because the inputs \(\mathbf{u}\) are not differentiable. Instead, we discretize the trajectories using the forward Euler difference and use this as the regression variable: \[\mathbf{y}_{k}=\frac{\mathbf{x}_{k+1}-\mathbf{x}_{k}}{h}\] The datasets are then constructed as sets of the pairs \(([\mathbf{x}_{k},\mathbf{u}_{k}],\mathbf{y}_{k})\). ### Training setup We optimize the models by minimizing the following loss function using stochastic gradient descent: \[\mathbf{J}_{\theta}=\frac{1}{|\mathcal{B}|}\sum_{i\in\mathcal{B}}(\mathbf{y}_{ i}-\mathbf{\hat{f}}(\mathbf{x}_{i},\mathbf{u}_{i}))^{2}+\lambda\sum_{j=1}^{L}| \mathbf{W}_{j}| \tag{5}\] where \(\mathcal{B}\) is a _batch_ of randomly sampled subset of indices from the dataset, \(L\) is the number of layers of the NN, and \(\lambda\) is the regularization parameter. This loss function is the sum of the mean squared error (MSE) of the model \(\hat{\mathbf{f}}\) with respect to the regression variables \(\mathbf{y}\), and the \(\ell_{1}\) norm of the connection weight matrices \(\mathbf{W}_{i}\) in all layers. We used a batch size of \(|\mathcal{B}|=128\). We used the popular ADAM solver proposed by Kingma and Ba (2014) with default parameters to minimize Equation (5). ### Evaluation of model accuracy As previously mentioned, we are interested in evaluating the long-term predictive accuracy of the models. Starting from a given initial condition \(\mathbf{x}(t_{0})\), the model \(\mathbf{\hat{f}}(\mathbf{x},\mathbf{u})\) is used to generate an estimated trajectory using the recurrence: \[\hat{\mathbf{x}}_{k+1}=\hat{\mathbf{x}}_{k}+h\,\hat{\mathbf{f}}(\hat{\mathbf{ x}}_{k},\mathbf{u}_{k}) \tag{6}\] where \(\hat{\mathbf{x}}_{0}=\mathbf{x}_{0}\). Note that the input signal \(\mathbf{u}_{k}\) is replayed directly from the test trajectory. Borrowing a term from the field of time-series analysis, we refer to this as a _rolling forecast_. To evaluate the accuracy of a model over multiple trajectories, we define the Average Normalized Rolling Forecast Mean Squared Error (AN-RFMSE): \[\text{AN-RFMSE}=\frac{1}{p}\sum_{i=1}^{p}\frac{1}{n}\sum_{j=1}^{n}\left(\frac{ \hat{x}_{i}(t_{j})-x_{i}(t_{j})}{\text{std}(x_{i})}\right)^{2}, \tag{7}\] where \(\hat{x}_{i}(t_{j})\) is the model estimate of the simulated state variable \(x_{i}\) at time step \(t_{j}\), \(\text{std}(x_{i})\) is the standard deviation of variable \(x_{i}\) in the training set \(\mathcal{S}_{train}\), \(p=8\) is the number of state variables and \(n\) is the number of time steps being averaged over. ### Evaluation of model stability A symptom of model instability is that its predictions can _blow-up_, which is characterized by a rapid (often exponential) increase in prediction error. More precisely, we say that a blow-up occurs when the normalized mean absolute error for all system states exceeds three (this corresponds to standard deviations). We detect this as follows: \[\max_{j<n}\left[\frac{1}{p}\sum_{i=1}^{p}\left(\frac{|\hat{x}_{i}(t_{j})-x_{i} (t_{j})|}{\text{std}(x_{i})}\right)\right]>3 \tag{8}\] where \(p=8\) is again the number of state variables and \(n\) is the number of time steps to consider. This is a conservative estimate. However, this does not lead to any significant underestimation of the number of blow-ups. This is because once a model starts to drift rapidly, it very quickly exceeds the normal error of three standard deviations. \begin{table} \begin{tabular}{l|l} \hline Variable & Initial condition interval \\ \hline \(x_{1}\) & [2060, 4460] \\ \(c_{x_{2}}\) & [0.02, 0.05] \\ \(c_{x_{3}}\) & [0.09, 0.13] \\ \(x_{4}\) & [11500, 16000] \\ \(x_{5}\) & [9550, 10600] \\ \(x_{6}\) & [940, 990] \\ \(x_{7}\) & [790, 850] \\ \(x_{8}\) & [555, 610] \\ \hline \end{tabular} \end{table} Table 2: ## 4 Results and Discussions We characterize the different model classes (PlainDense, PlainSparse, InputSkipDense, InputSkipSparse) by estimating their blow-up frequencies and their rolling forecast mean squared error (RFMSE) on the validation data. The blow-up frequency is an interesting measure since it can indicate how stable the model is in practice. We perform a Monte Carlo analysis by training 10 instances of each model class and evaluating these on 100 trajectories randomly generated using the true model, yielding 1000 data points for each model class. We repeat the experiments for 3 different dataset sizes to study the data efficiency of the models. Figure 2 presents the total number of blow-ups recorded within each model class after \(100h\), \(2000h\), and \(5000h\) (short, medium, and long term respectively). For simplicity, blow-ups were detected by thresholding the computed variance of a predicted trajectory and manually inspected. It is clear that for short time horizons all the models exhibit robust behavior independently of the size of the training datasets. However, for medium and long time horizons, PlainDense, PlainSparse, and InputSkipDense architectures exhibit a significant number of blow-ups and therefore instability. Figure 1(a) - 1(c) show that PlainDense is generally the most unstable, with up to 67% of all trajectories resulting in a blow-up. For the smallest amount of training data (Figure 1(a)) PlainSparse and InputSkipDense have similar blow-up frequencies. For larger datasets, the PlainSparse architecture shows significantly better stability than both PlainDense and InputSkipDense. InputSkipDense and PlainDense both show better stability with increasing amounts of training data in terms of fewer blow-ups. However, both these dense models still suffer from significant amounts of blow-ups. In comparison, almost no blow-ups are recorded when using the InputSkipSparse architecture, even for the small training dataset. In Figure 2, the orange bars corresponding to the blow-up frequency of InputSkipSparse models are not visible for any of the training sets due to the significantly lower number of blow-ups. For InputSkipSparse models trained on the smallest dataset, only 3 out of 1000 possible blow-ups were reported for the longest horizon. Apart from that, no blow-ups were reported for the InputSkipSparse models. Only a few blow-ups were recorded after \(5000h\) in the medium term. Figure 3 presents a violin plot of the accuracy of each model class, expressed in terms of RFMSE over different time horizons. Only the plot for the smallest dataset (50000 points) is shown, due to the results being very similar. A larger width of the violin indicates a higher density of that given RFMSE value, while the error bars show the minimum and maximum recorded RFMSE values. The model estimates that blew up (see Figure 2) are not included. In this way, we estimate the generalization performance of the models only within their regions of stability. Note that the violin plots for model classes with many blow-ups are made using fewer samples, and can be seen as slightly "cherry-picked". Nonetheless, the InputSkipSparse architecture consistently yields more accurate results, up to an order of magnitude better than the others in the long term. ## 5 Conclusion and Future Work In this work, we compared the performance of two different model structures trained both with and without sparsity promoting \(\ell_{1}\) regularization. The two model types are standard Multi-Layer Perceptrons (MLP), and a more specialized architecture that includes skip connections from the input layer to all consecutive layers. This yields four different model structures, which we call PlainDense, PlainSparse, InputSkipDense, and InputSkipSparse. The main conclusions of the article are as follows: * NNs with skip connections are more stable for predictions over long time horizons compared to standard MLPs. Furthermore, the accuracy of NNs with skip Figure 2: Divergence plot: Number of trajectories that blow-up over different time horizons. The total number of trajectories is 1000, so the values can be read as a permille. connections is consistently higher for all forecasting horizons. * The application of sparsity-promoting \(\ell_{1}\) regularization significantly improves the stability of both the standard MLP and InputSkip architectures. This improvement was more apparent for models with the InputSkip architecture. * The InputSkipSparse showed satisfactory stability characteristics even when the amount of training data was restricted. This suggests that this architecture is more suitable for system identification tasks than the standard MLP structure. The case study shows that both sparsity-promoting regularization and skip connections can result in more stable NN models for system identification tasks while requiring less data, as well as improving their multi-step generalization for both short, medium, and long prediction horizons. Despite the encouraging performance of the sparse-skip networks, we can not guarantee similar performance for noisy data, as we have only investigated the use of synthetic data devoid of any noise. However, such a study will be an interesting line of future work. This case study also has relevance beyond the current setup. In more realistic situations, we often have a partial understanding of the system we wish to model (see Equation (2)), and only wish to use data-driven methods to correct a PBM when it disagrees with the observations (e.g. due to a faulty assumption). As shown in Robinson et al. (2022), combining PBMs and data-driven methods in this way also has the potential to inject instability into the system. Finding new ways to improve or guarantee out-of-sample behavior for data-driven methods is therefore of paramount importance to improve the safety of such systems. ## Acknowledgements This work was supported by the industry partners Borgregaard, Elkem, Hydro, Yara and the Research Council of Norway through the projects TAPI: Towards Autonomy in Process Industries (grant no. 294544) and EXAIGON: Explainable AI systems for gradual industry adoption (grant no. 304843)
2302.09790
HTNet: Human Topology Aware Network for 3D Human Pose Estimation
3D human pose estimation errors would propagate along the human body topology and accumulate at the end joints of limbs. Inspired by the backtracking mechanism in automatic control systems, we design an Intra-Part Constraint module that utilizes the parent nodes as the reference to build topological constraints for end joints at the part level. Further considering the hierarchy of the human topology, joint-level and body-level dependencies are captured via graph convolutional networks and self-attentions, respectively. Based on these designs, we propose a novel Human Topology aware Network (HTNet), which adopts a channel-split progressive strategy to sequentially learn the structural priors of the human topology from multiple semantic levels: joint, part, and body. Extensive experiments show that the proposed method improves the estimation accuracy by 18.7% on the end joints of limbs and achieves state-of-the-art results on Human3.6M and MPI-INF-3DHP datasets. Code is available at https://github.com/vefalun/HTNet.
Jialun Cai, Hong Liu, Runwei Ding, Wenhao Li, Jianbing Wu, Miaoju Ban
2023-02-20T06:31:29Z
http://arxiv.org/abs/2302.09790v1
# HTNet: Human Topology Aware Network for 3D Human Pose Estimation ###### Abstract 3D human pose estimation errors would propagate along the human body topology and accumulate at the end joints of limbs. Inspired by the backtracking mechanism in automatic control systems, we design an Intra-Part Constraint module that utilizes the parent nodes as the reference to build topological constraints for end joints at the part level. Further considering the hierarchy of the human topology, joint-level and body-level dependencies are captured via graph convolutional networks and self-attentions, respectively. Based on these designs, we propose a novel Human Topology aware Network (HTNet), which adopts a channel-split progressive strategy to sequentially learn the structural priors of the human topology from multiple semantic levels: joint, part, and body. Extensive experiments show that the proposed method improves the estimation accuracy by 18.7% on the end joints of limbs and achieves state-of-the-art results on Human3.6M and MPI-INF-3DHP datasets. Code is available at [https://github.com/vefalun/HTNet](https://github.com/vefalun/HTNet). Jialun Cai Hong Liu Runwei Ding Wenhao Li Jianbing Wu Miaojun Ban Key Laboratory of Machine Perception, Shenzhen Graduate School, Peking University {cjl, kimbing.ng, miaoju.ban}@stu.pku.edu.cn, {hongliu, dingrunwei, wenhaoli}@pku.edu.cn 3D Human Pose Estimation, Human Topology, Error Accumulation, Hierarchical Structure ## 1 Introduction 3D human pose estimation (HPE) from a monocular image is a challenging task, which has been widely applied in the sub-tasks of human analysis, such as action recognition [1] and person re-identification [2]. Benefiting from the effective 2D HPE framework [3], most recent works focus on the 2D-to-3D lifting pipeline [4], which firstly estimates 2D poses from a single image and then lifts them to 3D keypoints. Unlike image-based tasks, the 2D-to-3D pose lifting task takes inherently sparse and structural 2D joint coordinates as inputs. Therefore, it is critical to take full advantage of the structural priors of the human topology. Recently, many works [5, 6, 7] have focused on the most related local topology by modeling the correlations among body joints via Graph Convolutional Networks (GCNs), while others [8, 9, 10] have focused on the less related global contexts via the Multi-head Self-Attention (MSA) of Transformer. However, the skeleton representations of the human body cannot be sufficiently captured by these methods due to the complex topological relationships among joints, leading to dramatically wrong estimation, especially at the end joints of limbs. As shown in Fig. 1 (a), the previous state-of-the-art model [6] suffers from significant estimation errors on joints with high PDoFs, and here we define the Part Degree of Freedom (PDoF) of joints via the distance to the torso (see Fig. 1 (b)). To address these issues, we explore the human topology from the following two aspects: _(i) Error accumulation_: Since the human body is a linkage structure that limb joints highly depend on their parent nodes [11] shown in Fig. 1 (c), the estimation errors would accumulate from the central hip in the torso (root joint) to the end joints of limbs. In automatic control systems, backtracking to the previous points with fewer errors and leveraging their kinematics priors can effectively alleviate the problem of error accumulation [12, 13]. Inspired by it, an Intra-Part Constraint (IPC) is designed to take intra-part parent joints as the reference to constrain the joints with higher PDoFs. _(ii) Hierarchical structure_: The Joint motion is closely linked to the hierarchical organization of human topology, which includes joint [5], part [14, 15], and body [9] levels. Driven by this natural hierarchy, we present a novel Human Figure 1: (a) Distribution of estimation errors; (b) Joints with different PDoFs; (c) Physiological explanation of error accumulation at the end joints (_e.g._, wrist): \(\mathcal{E}=\vec{e}_{3}+\vec{e}_{2}\). Topology aware Network (HTNet) that learns human topology representations at multiple levels (see Fig. 2). Specifically, HTNet consists of alternating hierarchical mixers, each composed of three modules. At the joint level, we design a Local Joint-level Connection (LJC) based on GCN to model the physical connections between adjacent joints; At the part level, the IPC in (i) provides the constraints for intra-part joints, such that they have similar athletic trends; At the body level, the Global Body-level Interaction (GBI) based on MSA extracts global features among inter-part joints. Notably, the hierarchical mixer adopts a channel-split progressive design, which achieves a win-win scenario by providing more expressive features from various levels while keeping a small model size. In summary, the main contributions are as follows: * We propose a Human Topology aware Network (HTNet) with a channel-split progressive design to learn human topology dependencies at joint, part, and body levels. * To alleviate the error accumulation, we design an Intra-Part Constraint (IPC) module that utilizes topological constraints of parent nodes to reduce errors of end joints. * Extensive experiments demonstrate the effectiveness of HTNet, which achieves state-of-the-art results on both Human3.6M and MPI-INF-3DHP benchmark datasets. ## 2 Methodology ### Overview The overview of HTNet is depicted in Fig. 2. Given the 2D coordinates \(X\)\(\in\)\(\mathbb{R}^{N\times 2}\), we firstly map \(X\) to a high dimension \(X^{\prime}\)\(\in\)\(\mathbb{R}^{N\times C}\) via the patch embedding, where \(N\) is the number of joints and \(C\) is the channel dimensions. Then, we embed \(X^{\prime}\) with a learnable positional matrix \(E_{pos}\)\(\in\)\(\mathbb{R}^{N\times C}\) to obtain the embedded features \(X^{\ell}\), where \(\ell\) is the index of hierarchical mixers. Next, the \(X^{\ell}\) is split into three parts: \(X^{\ell}_{\textit{Lc}}\), \(X^{\ell}_{\textit{nc}}\), and \(X^{\ell}_{\textit{can}}\), which have equal dimensions \(C^{\prime}\)\(=\)\(C/3\) and are fed into corresponding modules: LJC, IPC, GBI. These modules construct the human topology from multiple semantic levels: joint, part, and body. Finally, we aggregate hierarchical representations and use a linear layer as the regression head to produce the 3D coordinates \(Y\)\(\in\)\(\mathbb{R}^{N\times 3}\). Each module and the structure of HTNet will be introduced in the following sections. ### Local Joint-level Connection Local Joint-level Connection (LJC) is a GCN-based architecture [6]. The graph-structured human skeleton can be defined as \(G\)\(=\)\((V,A)\), where \(V\) is a set of \(N\) joints and \(A\)\(\in\)\(\{0,1\}^{N\times N}\) is an adjacency matrix representing the connection relations between joints. Given the input \(X^{\ell}_{\textit{Lc}}\)\(\in\)\(\mathbb{R}^{N\times C^{\prime}}\), features of neighboring joints can be aggregated to \(Y^{\ell}_{\textit{Lc}}\) by GCN: \[\begin{array}{l}GCN(X^{\ell}_{\textit{Lc}})=\tilde{D}^{-\frac{1}{2}}\tilde {A}\tilde{D}^{-\frac{1}{2}}X^{\ell}_{\textit{Lc}}W,\\ Y^{\ell}_{\textit{Lc}}=X^{\ell}_{\textit{Lc}}+\textit{GCN}(\sigma(GCN(X^{\ell }_{\textit{Lc}}))),\end{array} \tag{1}\] where \(\tilde{A}\)\(=\)\(A\)\(+\)\(I\), \(\tilde{D}\) is the diagonal node degree matrix, \(W\) is the weight matrix, \(\sigma\) denotes the _GELU_ activation function. ### Intra-Part Constraint To alleviate the error accumulation, we design an Intra-Part Constraint (IPC) module (see Fig. 3), which aims to conduct the constraint among intra-part joints with different PDoFs. Given the input \(\tilde{X}^{\ell}_{\textit{nc}}=X^{\ell}_{\textit{nc}}+Y^{\ell}_{\textit{nc}}\), topological constraints performs in two sets of joints: _(i)_\(X^{\ell}_{\textit{Lc}}\)\(\in\)\(\mathbb{R}^{8\times C^{\prime}}\) consists of 2-PDoF and 3-PDoF joints; _(ii)_\(X^{\ell}_{2}\)\(\in\)\(\mathbb{R}^{12\times C^{\prime}}\) consists of 1-PDoF, 2-PDoF, and 3-PDoF joints. Next, \(X^{\ell}_{j}\),\(j\)\(\in\)\(\{1,2\}\) are fed into two convolution layers to generate limb features \(\mathcal{F}^{\ell}_{j}\). Then, a channel MLP [16] is adopted to aggregate information between different channels for each limb feature: \[\begin{array}{l}\mathcal{F}^{\ell}_{j}=\sigma(\textit{Conv}_{j}(X^{\ell}_{j} )),j\in\{1,2\},\\ \tilde{\mathcal{F}}^{\ell}_{j}=\mathcal{F}^{\ell}_{j}+\textit{MLP}(\textit{LN}( \mathcal{F}^{\ell}_{j})),\end{array} \tag{2}\] where \(\tilde{\mathcal{F}}^{\ell}_{j}\) are aggregated limb features, \(LN(\cdot)\) denotes Layer Normalization, _Conv\({}_{I}\)_ and _Conv\({}_{2}\)_ are convolution layers with kernel sizes of 2 and 3, respectively. Based on the above \(\tilde{\mathcal{F}}^{\ell}_{j}\), we construct two topological constraints \(\mathcal{R}_{j}\) in \(X^{\ell}_{j}\): _(i)_ In \(\mathcal{R}_{1}\), 3-PDoF joints in \(X^{\ell}_{I}\) are replaced with \(\tilde{\mathcal{F}}^{\ell}_{1}\); _(ii)_ In \(\mathcal{R}_{2}\), 3-PDoF and 2-PDoF joints in \(X^{\ell}_{2}\) are replaced with \(\tilde{\mathcal{F}}^{\ell}_{2}\). Then, the unprocessed joints are filled into the replaced \(X^{\ell}_{j}\), and \(Y^{\ell}_{\textit{nc}}\), the final output of IPC, can be represented as follows: \[Y^{\ell}_{\textit{nc}}=\tilde{X}^{\ell}_{\textit{nc}}+\mathcal{R}_{1}(\tilde{X }^{\ell}_{\textit{nc}},\tilde{\mathcal{F}}^{\ell}_{1})+\mathcal{R}_{2}(\tilde{X }^{\ell}_{\textit{nc}},\tilde{\mathcal{F}}^{\ell}_{2}). \tag{3}\] Since the limb features \(\tilde{\mathcal{F}}^{\ell}_{j}\) are the motion representation of limbs, which contains both high-PDoF joints and their parent Figure 2: Overview of the proposed HTNet. The HTNet consists of \(M\) stacked Hierarchical Mixers. Each mixer consists of three modules (LJC, IPC, and GBI), which can extract features at joint, part, and body levels. points. Therefore, replacing the high-PDoF joints with \(\tilde{\mathcal{F}}^{\ell}_{j}\) can form topological constraints and produce reasonable 3D pose estimations to mitigate the error accumulation. ### Global Body-level Interaction Global Body-level Interaction (GBI) module is built by the self-attention of Transformer, which can capture global contexts across the whole human body [10, 17]. Given \(\tilde{X}^{\ell}_{\textit{can}}=X^{\ell}_{\textit{car}}+Y^{\ell}_{\textit{rc}}\) as input, the GBI module can be formulated as: \[\begin{split}& H_{i}=\textit{Softmax}(Q^{\ell}{K^{\ell}}^{T}/ \sqrt{C^{\prime}})V^{\ell},i\in\{1,...,h\}\\ &\textit{MSA}(\tilde{X}^{\ell}_{\textit{cat}})=\text{Concat}(H_{1 },H_{2},...,H_{h})W_{out},\\ &\tilde{Y}^{\ell}_{\textit{cat}}=\tilde{X}^{\ell}_{\textit{cat} }+\textit{LN}(\textit{MSA}(\tilde{X}^{\ell}_{\textit{cat}})),\end{split} \tag{4}\] where \(h\) is the number of attention heads, \(Q^{\ell},K^{\ell},V^{\ell}\) are query, key, and value matrices, which are calculated from \(\tilde{X}^{\ell}_{\textit{cat}}\) by linear transformations, \(\tilde{Y}^{\ell}_{\textit{cat}}\) is the output of the GBI module. ### Network Structure Although hierarchical representations of human topology can be learned with LJC, IPC and GBI, it is challenging to design a connection structure for such three modules: the series structure (_i.e._, three modules are connected sequentially) will increase the model size, and the parallel structure (_i.e._, three modules divide the channels equally and work in parallel) lacks feature interactions among various levels. Inspired by Res2Net [18], we design the hierarchical mixer with a channel-split progressive structure. Specifically, channels are split into three parts and fed into LJC, IPC, and GBI. Meanwhile, the output of the previous module is added to the input of the next module as residual-like connections for learning the human topology from local to global. Then, outputs of the above three modules are concatenated to generate \(Y^{\ell}\)\(\in\)\(\mathbb{R}^{N\times C}\), and \(Y^{\ell}\) is fed to a channel MLP block: \[\begin{split} Y^{\ell}=\text{Concat}(Y^{\ell}_{\textit{rc}},Y^{ \ell}_{\textit{rc}},Y^{\ell}_{\textit{cat}}),\\ \tilde{Y}^{\ell}=Y^{\ell}+\textit{MLP}(LN(Y^{\ell})).\end{split} \tag{5}\] Such a process can aggregate features from three different levels and obtain a more expressive representation. ## 3 Experiments ### Datasets and Evaluation Metrics _Human3.6M_[19] is the largest indoor dataset for 3D HPE. It has 3.6 million images and 11 professional actors. Following [6, 17], Mean Per Joint Position Error (MPJPE) and Procrustes MPJPE (P-MPJPE) are adopted as metrics. _MPI-INF-3DHP_[20] consists of 1.3 million images from indoor and outdoor scenes. There are three different scenes in its test set: studio with a green screen (GS), studio without a green screen (noGS), and outdoor scene (Outdoor). Following [6, 17, 21], MPJPE, Percentage of Correct Keypoint (PCK), and Area Under Curve (AUC) are adopted as metrics. ### Implementation Details We implement our method using Pytorch and train the model for 30 epochs with batch size 512. The proposed HTNet consists of 3 stacked hierarchical mixers. Note that the channel dimension of inputs should be divisible by 24 (_e.g._, 240, 360) because there are eight heads in the MSA and three split channels in hierarchical mixers. The \(L_{2}\) loss is utilized to minimize the errors between predictions and ground truth. ### Method Comparison **Comparison on Human3.6M.** We compare HTNet with state-of-the-art methods on Human3.6M. As shown in Tab. 1 (a), the 2D keypoints from CPN [3] are used as inputs, and our model achieves the best results under both MPJPE (47.6_mm_) and P-MPJPE (38.6_mm_). Besides, using the ground truth 2D joints as inputs, HTNet also performs best, outperforming GraFormer [17] by \(9.3\%\) (31.9_mm_ vs. 35.2_mm_). **Comparison on MPI-INF-3DHP.** To evaluate the generalization capability of our approach, we train HTNet on the Human3.6M and test it on the MPI-INF-3DHP directly. As shown in Tab. 1 (b), HTNet achieves the best performance on all scenes, which verifies the strong generalization of our method in unseen environments. **Comparison with Temporal Methods.** To explore the portability of HTNet in video-based 3D HPE, we compare with some methods [5, 9, 10, 26, 28, 29] with similar frames as inputs, and Tab. 1 (c) shows that HTNet-S (\(C\)\(=\)\(240\)) can achieve competitive performance under MPJPE (47.2_mm_) with much fewer parameters. Notably, we find that the channel dimension will be the bottleneck to the performance gains. Thus, HTNet-L is built with larger dimensions (\(C\)\(=\)\(720\)) and performs the best (46.1_mm_). The above experiments prove that our HTNet can work well with video sequence inputs. ### Ablation Study Tab. 2 shows the results of our method under different components and structures on the Human3.6M dataset. For a fair comparison, we maintain the consistency of the parameters by keeping the same dimension channels (\(C\)\(=\)\(240\)). Figure 3: Structure of Intra-Part Constraint (IPC) module. **Impact of Model Components.** As shown in the top part of Tab. 2, HTNet with only a single-level module cannot achieve satisfactory performance, while the combination of all proposed modules performs the best via learning representations from various levels. Notably, IPC cannot work individually due to no operations on 0-PDoF and 1-PDoF joints. To further investigate the influence of IPC, we categorize limb joints by PDoFs and calculate the average MPJPE of joints in each category in Fig. 4. Compared with the baseline (the HTNet with only GBI), the average MPJPE of all joints decreased by 3.8% (\(50.2mm\) vs. \(52.2mm\)) via introducing LJC. By further integrating IPC into HTNet, the MPJPE would decreased by 6.3% (\(48.9mm\) vs. \(52.2mm\)), while MPJPE on 2-PDoF and 3-PDoF joints noticeably reduces by \(15.5\%\) and \(18.7\%\), respectively. Such a significant reduction in the estimation errors of end joints can be attributed to the number of topological constraints they obtain shown in Sec. 2.3: _(i)_ 3-PDoF joints are constrained by both 1-PDoF and 2-PDoF joints; _(ii)_ 2-PDoF joints are only constrained by 1-PDoF joints; _(iii)_ 1-PDoF joints are not subject to any constraints. **Impact of Model Structures.** As for the structures (middle part of Tab. 2), the serial structure exhibits competitive performance (\(49.2mm\)) due to the local-to-global learning; the parallel structure can reduce parameters from 7.2M to 3.0M due to the channel-split structure. The channel-split progressive structure of HTNet adopts residual connections and combines the advantages of these two designs, which performs the best (\(48.9mm\)) and maintains a small model size (3.0M). ### Visualization Fig. 5 provides qualitative comparisons on Human3.6M between HTNet and the state-of-the-art approach, _i.e._, MGCN [6]. It can be seen that our method is capable of producing more precise 3D keypoints at the end of limbs, which further proves the effectiveness of our method. ## 4 Conclusion This paper presents a Human Topology aware Network (HTNet), which takes full advantage of human structural priors for 3D HPE. To address the error accumulation, we design an Intra-Part Constraint (IPC) module that utilizes the topological constraints from intra-part parent nodes to reduce the errors of end joints. Based on the IPC, the hierarchical mixer is further designed to learn joint-part-body representations via a channel-split progressive structure, allowing the HTNet to efficiently build hierarchical representations of the human topology. Extensive experiments show that the proposed HTNet achieves state-of-the-art performance. We hope our work can inspire more researchers on skeleton-based tasks, _e.g._, action recognition, and 3D human mesh reconstruction. \begin{table} \begin{tabular}{l|c c c c} \hline \hline Method & **P1 (CPN)** & **P2 (CPN)** & **P1 (GT)** \\ \hline Martinez _et al._[4] & 62.9 & 47.7 & 45.5 \\ Ci _et al._[22] & 52.7 & 42.2 & 36.3 \\ Liu _et al._[21] & 52.4 & 41.2 & 37.8 \\ Xu _et al._[23] & 51.9 & - & 35.8 \\ Zhao _et al._[17] & 51.8 & - & 35.2 \\ Cai _et al._[5](\(\dagger\)) & 50.6 & 40.2 & 38.1 \\ Zeng _et al._[24] & 49.9 & 39.4 & 36.4 \\ Zou _et al._[6](\(\dagger\)) & 49.4 & 39.1 & 37.4 \\ \hline HTNet (Ours) & 48.9 & 39.0 & 34.0 \\ HTNet (Ours) (\(\dagger\)) & **47.6** & **38.6** & **31.9** \\ \hline \hline \end{tabular} \begin{tabular}{l|c c c c} \hline \hline Method & **GS\(\dagger\)** & **noGS\(\dagger\)** & **Outdoor\(\dagger\)** & **PCK\(\dagger\)** & **AUC\(\dagger\)** & Method & **Frame Param** & **MPJPE** \\ \hline Martinez _et al._[4] & 49.8 & 42.5 & 31.2 & 42.5 & 17.0 & Pavllo _et al._[28] & 9 & 4.4M & 49.8 \\ Mehta _et al._[20] & 70.8 & 62.3 & 58.5 & 64.7 & 31.7 & Cai _et al._[5] & 7 & 5.1M & 48.8 \\ Ci _et al._[22] & 74.8 & 70.8 & 77.3 & 74.0 & 36.7 & Zheng _et al._[9] & 9 & 9.6M & 49.9 \\ Zhou _et al._[25] & 75.6 & 71.3 & 80.3 & 75.3 & 38.0 & Li _et al._[10] & 9 & 18.9M & 47.8 \\ Zeng _et al._[24] & - & 80.3 & 77.6 & 43.8 & Chen _et al._[9] & 9 & 18.2M & 46.3 \\ Liu _et al._[26] & 77.6 & 80.5 & 80.1 & 79.3 & 45.8 & HTNetS-Ours & 9 & **3.0M** & **47.2** \\ Zeng _et al._[27] & - & - & 84.6 & 82.1 & 46.2 & Pavllo _et al._[28] & 27 & 8.6M & 48.8 \\ Xu _et al._[23] & 81.5 & 81.7 & 75.2 & 80.1 & 45.8 & Liu _et al._[26] & 27 & 5.7M & 48.5 \\ Zou _et al._[6] & 86.4 & 86.0 & 85.7 & 86.1 & 53.7 & Zheng _et al._[9] & 27 & 9.6M & 47.0 \\ \hline HTNet (Ours) & **86.9** & **86.2** & **85.9** & **86.7** & **54.1** & HTNet-L (Ours) & 27 & 10.1M & **46.1** \\ \hline \hline \end{tabular} \end{table} Table 1: **(a)** Quantitative comparison on Human3.6M under Protocol #1 (MPJPE) and Protocol #2 (P-MPJPE). The 2D keypoints detected by CPN and the ground truth of 2D poses are used as inputs. \((\dagger)\) - adopts the same refinement module as [5, 6]. **(b)** Quantitative comparison on MPI-INF-3DHP. **(c)** Quantitative comparison with temporal methods on Human3.6M. Figure 4: Ablation study for the components and structures. Figure 5: Qualitative comparison on Human3.6M.
2301.01844
Solving Unsplittable Network Flow Problems with Decision Diagrams
In unsplittable network flow problems, certain nodes must satisfy a combinatorial requirement that the incoming arc flows cannot be split or merged when routed through outgoing arcs. This so-called "no-split no-merge" requirement arises in unit train scheduling where train consists should remain intact at stations that lack necessary equipment and manpower to attach/detach them. Solving the unsplittable network flow problems with standard mixed-integer programming formulations is computationally difficult due to the large number of binary variables needed to determine matching pairs between incoming and outgoing arcs of nodes with no-split no-merge constraint. In this paper, we study a stochastic variant of the unit train scheduling problem where the demand is uncertain. We develop a novel decision diagram (DD)-based framework that decomposes the underlying two-stage formulation into a master problem that contains the combinatorial requirements, and a subproblem that models a continuous network flow problem. The master problem is modeled by a DD in a transformed space of variables with a smaller dimension, leading to a substantial improvement in solution time. Similarly to the Benders decomposition technique, the subproblems output cutting planes that are used to refine the master DD. Computational experiments show a significant improvement in solution time of the DD framework compared with that of standard methods.
Hosseinali Salemi, Danial Davarnia
2023-01-04T23:05:28Z
http://arxiv.org/abs/2301.01844v1
# Solving Unsplittable Network Flow Problems with Decision Diagrams ###### Abstract In unsplittable network flow problems, certain nodes must satisfy a combinatorial requirement that the incoming arc flows cannot be split or merged when routed through outgoing arcs. This so-called _no-split no-merge_ requirement arises in unit train scheduling where train consists should remain intact at stations that lack necessary equipment and manpower to attach/detach them. Solving the unsplittable network flow problems with standard mixed-integer programming formulations is computationally difficult due to the large number of binary variables needed to determine matching pairs between incoming and outgoing arcs of nodes with no-split no-merge constraint. In this paper, we study a stochastic variant of the unit train scheduling problem where the demand is uncertain. We develop a novel decision diagram (DD)-based framework that decomposes the underlying two-stage formulation into a master problem that contains the combinatorial requirements, and a subproblem that models a continuous network flow problem. The master problem is modeled by a DD in a transformed space of variables with a smaller dimension, leading to a substantial improvement in solution time. Similarly to the Benders decomposition technique, the subproblems output cutting planes that are used to refine the master DD. Computational experiments show a significant improvement in solution time of the DD framework compared with that of standard methods. Decision Diagrams; Network Optimization; Mixed Integer Programs; Unit Trains; Transportation History ## 1 Introduction Over the past several decades, rail freight transportation has continued to grow as the prime means of transportation for high-volume commodities. Advantages of rail transportation include reliability, safety, cost-efficiency and environmental-sustainability as compared with alternative methods of transportation. In terms of scale, the rail network accounted for 27.2 percent of U.S. freight shipment by ton-miles in 2018 (Furchtgott-Roth et al., 2021); see Figure 1. The Federal Highway Administration estimates that the total U.S. freight shipments will be 24.1 billion tons in 2040, a 30 percent increase from the 2018 total transportation of 18.6 billion tons. With the purpose of meeting such market growth, America's freight railway companies have invested nearly ###### Abstract We propose a novel approach to the design of a new class of multi-agent systems, which is based on the concept of _multi-agent systems_, which is a new class of multi-agent systems. The proposed framework is based on the concept of multi-agent systems, which is a new class of multi-agent systems, which is a new class of multi-agent systems. The proposed framework is based on the concept of multi-agent systems, which is a new class of multi-agent systems, which is a new class of multi-agent systems. The proposed framework is based on the concept of multi-agent systems, which is a new class of multi-agent systems, which is a new class of multi-agent systems, which is a new class of multi-agent systems. The proposed framework is based on the concept of multi-agent systems, which is a new class of multi-agent systems, which where cars need to be switched between trains is irrelevant in this problem, unlike scheduling other types of trains (Davarnia et al. 2019). Despite the significance of unit train scheduling, exact optimization approaches to solve associated problems are scarce, partially due to their structural complexities. One of the main challenges in modeling unit trains is the requirement that the train consists must remain intact when passing through stations that lack necessary busting/formation equipment. In optimization, this requirement is referred to as _no-split no-merge_ (NSNM), which guarantees that the flows entering to or exiting from certain nodes of the unit train network cannot be split or merged. Incorporating this requirement into typical transportation network models yields the so-called _generalized unsplittable flow problem_ (GUFP), where the objective is to determine the minimum-cost unit train schedules that satisfy the given demand. Numerous studies have shown that considering deterministic demands might result in the complete failure of the transportation scheduling (Demir et al. 2016, Layeb et al. 2018), motivating the study of stochastic variants of the unit train scheduling problems where the demand is uncertain. As a result, in this paper, we consider a stochastic variant of the GUFP, referred to SGUFP, that is modeled as a two-stage optimization problem. The first stage decides a matching between the incoming and outgoing arcs of the nodes of the railroad network, and the second stage determines the amount of flow that should be sent through the matching arcs of the network to satisfy the uncertain demand represented by a number of demand scenarios. We propose a novel exact solution framework to solve this problem in the operational level. Our proposed methodology is based on _decision diagrams_ (DDs), which are compact graphical data structures. DDs were initially introduced to represent boolean functions with applications in circuit design. Over the past decade, researchers have successfully extended DDs domain by developing DD-based algorithms to solve discrete optimization problems in different areas of application. Because of its structural limitation to model integer programs only, DDs have never been used to solve transportation problems that inherently include continuous variables. In this paper, we extend the application scope of DDs by introducing a novel framework that is capable of modeling network problems with both integer and continuous components as in the SGUFP. ### Literature Review on Train Scheduling Many variants of train routing and scheduling problems with different objective functions and set of constraints under deterministic and stochastic conditions have been introduced and vastly studied in the literature; see surveys by Cordeau, Toth, and Vigo (1998), Harrod and Gorman (2010), Lusby et al. (2011), Cacchiani and Toth (2012), and Turner et al. (2016) for different problems classifications and structures. Mixed integer linear and nonlinear programming formulations are among the most frequent exact approaches to model different classes of these problems (Jovanovic and Harker 1991, Huntley et al. 1995, Sherali and Suharko 1998, Lawley et al. 2008, Haahr and Lusby 2017, Davarnia et al. 2019). Proposed solution techniques include but are not limited to branch-and-bound methods (Jovanovic and Harker 1991, Fuchsberger and Luthi 2007), branch-and-cut frameworks (Zwaneveld, Kroon, and Van Hoesel 2001, Ceselli et al. 2008), branch-and-price approaches (Lusby 2008, Lin and Kwan 2016), graph coloring algorithms (Cornelsen and Di Stefano 2007), and heuristics (Carey and Crawford 2007, Liu and Kozan 2011, Icyuz et al. 2016). Rolling stock scheduling (Abbink et al. 2004, Alfieri et al. 2006, Haahr et al. 2016, Borndorfer et al. 2016) that assigns rolling stocks to a given timetable, and crew scheduling (Kwan 2011, Shen et al. 2013, Heil, Hoffmann, and Buscher 2020) that covers train activities by assigning crews to the associated operations are other major problems arising in the area of railroad planning. Due to the inherent uncertainty in different types of train scheduling and routing problems, many researchers have studied stochastic variants of the problems where the supply/demand is considered to be uncertain. Jordan and Turnquist (1983) propose a model for railroad car distribution where supply and demand of cars are uncertain. Jin et al. (2019) study a chance-constrained programming model for the train stop planning problem under stochastic demand. Ying, Chow, and Chin (2020) propose a deep reinforcement learning approach for train scheduling where the passenger demand is uncertain. Recently, Gong et al. (2021) propose a stochastic optimization method to solve a train timetabling problem with uncertain passenger demand. Also see works by Meng and Zhou (2011), Quaglietta, Corman, and Goverde (2013), Larsen et al. (2014) that consider train dispatching problems under stochastic environments. In the context of unit train scheduling, Lawley et al. (2008) study a time-space network flow model to schedule bulk railroad deliveries for unit trains. In their model, the authors consider characteristics of underlying rail network, demands of customers, and capacities of tracks, stations, and loading/unloading requirements. They propose a mixed integer programming (MIP) formulation that maximizes the demand satisfaction while minimizing the waiting time at stations. Lin and Kwan (2014) (cf. Lin and Kwan (2016)) propose a model for a train scheduling problem that is capable to capture locations where coupling/decoupling is forbidden. They develop a branch-and-price algorithm inspired by column generation to solve the associated problem. Lin and Kwan (2018) also propose a heuristic branch-and-bound approach to decrease coupling/decoupling redundancy. Icyuz et al. (2016) study the problem of planning coal unit trains that includes train formation, routing, and scheduling. As noted by the authors, their proposed MIP formulation fails to solve the problem directly due to its large size. As a remedy, they develop a time-efficient heuristic that produces good quality solutions. More recently, Davarnia et al. (2019) introduce and study the GUFP with application to unit train scheduling. In particular, the authors show how to impose NSNM restrictions in network optimization problems. They present a polyhedral study and propose a MIP formulation to model a stylized variant of the unit train scheduling problem. In the present paper, we use their formulation (see section 3.1) as a basis for our solution framework. The unsplittable flow problem (UFP) was first introduced by Kleinberg (1996) as a generalization of the disjoint path problem. Given a network with capacities for arcs and a set of source-terminal vertex pairs with associated demands and rewards, the objective in the UFP is to maximize the total revenue by selecting a subset of source-terminal pairs and routing flows through a _single_ path for each of them to satisfy the associated demand. In the GUFP, however, there can exist nodes that do not need to respect the NSNM requirement, and demands can be satisfied by passing flows through multiple paths. It is well-known that different variants of UFP are NP-hard (Baier, Kohler, and Skutella 2005, Kolman and Scheideler 2006, Chakrabarti et al. 2007). Since its introduction, the UFP structure has been used in different areas of application, from bandwidth allocation in heterogeneous networks (Kolman and Scheideler 2006), to survivable connection-oriented networks (Walkowiak 2006), and virtual circuit routing problems (Hu, Lan, and Wan 2009). Considering the hardness of the problem, approximation algorithms have been a common technique to tackle different variants of the UFP in the literature (Baier, Kohler, and Skutella 2005, Chakrabarti et al. 2007). ### Literature Review on Decision Diagrams DDs are directed acyclic graphs with a source and a terminal node where each source-terminal path encodes a feasible solution to an optimization problem. In DDs, each layer from the source to the terminal represents a decision variable where labels of arcs show their values. Hadzic and Hooker (2006) proposed to use DDs to model the feasible region of a discrete optimization problem and used it for postoptimality analysis. Later, Andersen et al. (2007) presented relaxed DDs to circumvent the exponential growth rate in the DD size when modeling large discrete optimization problems. Bergman et al. (2016b) introduced a branch-and-bound algorithm that iteratively uses relaxed and restricted DDs to find optimal solution. The literature contains many successful utilization of DDs in different domains; see works by Bergman and Cire (2018), Serra and Hooker (2019), Davarnia and Van Hoeve (2020), Gonzalez et al. (2020), and Hosseininasab and Van Hoeve (2021) for some examples. Until recently, applications of DDs were limited to discrete problems, and the question on how to use DDs in solving optimization problems with continuous variables was unanswered. To address this limitation, Davarnia (2021) proposed a technique called arc-reduction that generates a DD that represents a relaxation of the underlying continuous problem. In a follow-up work, Salemi and Davarnia (2022a) established necessary and sufficient conditions for a general MIP to be representable by DDs. They showed that a bounded MIP can be remodeled and solved with DDs through employing a specialized Benders decomposition technique. In this paper, we build on this framework to design a novel DD-based methodology to solve the SGUFP. ### Contributions While there are several studies in the literature dedicated to the unit train problem, exact methodologies that provide a rigorous treatment of the NSNM requirement at the heart of unit train models are scarce. In this paper, we design a novel exact DD-based framework to solve the SGUFP, as a more realistic and more challenging variant of this problem class. To our knowledge, this is the first work that studies SGUFP from an exact perspective, and the first application of DDs to a transportation problem. Our proposed framework formulates the problem in a transformed space of variables, which has a smaller dimension compared to the standard MIP formulations of the SGUFP. This presentation mitigates the computational difficulties stemmed from the MIP formulation size, providing a viable solution approach for large-scale network problems. The core principles of our DD framework can also be used to model other transportation problems with similar structure, as an alternative to traditional network optimization techniques. The remainder of this paper is organized as follows. In Section 2 we provide basic definitions and a brief overview on discrete and continuous DD models, including the DD-BD method to solve bounded MIPs. In Section 3, we adapt the DD-BD method to solve the SGUFP. We propose algorithms to construct exact and relaxed DDs to solve the problem in a transformed space. Section 4 presents computational experiments to evaluate the performance of the DD-BD method for the SGUFP. We give concluding remarks in Section 5. ## 2 Background on DDs In this section, we present basic definitions and results relevant to our DD analysis. ### Overview A DD \(\mathcal{D}=(\mathcal{U},\mathcal{A},l)\) with node set \(\mathcal{U}\), arc set \(\mathcal{A}\), and arc label mapping \(l:\mathcal{A}\to\mathbb{R}\) is a directed acyclic graph with \(n\in\mathbb{N}\) arc layers \(\mathcal{A}_{1},\mathcal{A}_{2},\ldots,\mathcal{A}_{n}\), and \(n+1\) node layers \(\mathcal{U}_{1},\mathcal{U}_{2},\ldots,\mathcal{U}_{n+1}\). The node layers \(\mathcal{U}_{1}\) and \(\mathcal{U}_{n+1}\), with \(|\mathcal{U}_{1}|=|\mathcal{U}_{n+1}|=1\), contain the root \(r\) and the terminal \(t\), respectively. In any arc layer \(j\in[n]\coloneqq\{1,2,\ldots,n\}\), an arc \((u,v)\in\mathcal{A}_{j}\) is directed from the tail node \(u\in\mathcal{U}_{j}\) to the head node \(v\in\mathcal{U}_{j+1}\). The _width_ of \(\mathcal{D}\) is defined as the size of its largest \(\mathcal{U}_{j}\). DDs can model a bounded integer set \(\mathcal{P}\subseteq\mathbb{Z}^{n}\) in such a way that each \(r\)-\(t\) arc-sequence (path) of the form \((a_{1},\ldots,a_{n})\in\mathcal{A}_{1}\times\ldots\times\mathcal{A}_{n}\) encodes a point \(\boldsymbol{y}\in\mathcal{P}\) where \(l(a_{j})=y_{j}\) for \(j\in[n]\), that is \(\boldsymbol{y}\) is an \(n\)-dimensional point in \(\mathcal{P}\) whose \(j\)-th coordinate is equal to the label value \(l(a_{j})\) of arc the \(a_{j}\). For such a DD, we have \(\mathcal{P}=\operatorname{Sol}(\mathcal{D})\), where \(\operatorname{Sol}(\mathcal{D})\) denotes the finite collection of all \(r\)-\(t\) paths. The graphical property of DDs can be exploited to optimize an objective function over a discrete set \(\mathcal{P}\). To this end, DD arcs are weighted in such a way that the cumulative weight of an \(r\)-\(t\) path that encodes a solution \(\boldsymbol{y}\in\mathcal{P}\) equals to the objective function value evaluated at \(\boldsymbol{y}\). Then, a shortest (resp. longest) \(r\)-\(t\) path for the underlying minimization (resp. maximization) problem is found, an operation that can be performed in polynomial time. The construction of an _exact_ DD as described above is computationally prohibitive due to the exponential growth rate of its size. To alleviate this difficulty, _relaxed_ and _restricted_ DDs are proposed to keep the size of DDs under control. In a relaxed DD, nodes are merged in such a way that the width of the resulting diagram is bounded by a predetermined width limit. This node-merging process ensures that all feasible solutions of the original set are encoded by a subset of all \(r\)-\(t\) paths in the resulting DD. Optimization over this relaxed DD provides a dual bound to the optimal solution of the original problems. In a restricted DD, the collection of all \(r\)-\(t\) paths of the DD encode a subset of the feasible solutions of the original set. Optimization over this restricted DD provides a primal bound to the optimal solution of the original problems. The restricted and relaxed DDs can be iteratively refined in a branch-and-bound scheme to find the optimal value of a problem through convergence of their primal and dual bounds. The following example illustrates an exact, relaxed and restricted DD for a discrete optimization problem. **Example 1**: _Consider the discrete optimization problem \(\max\{5y_{1}+10y_{2}+4y_{3}\mid\boldsymbol{y}\in\mathcal{P}\}\) where \(\mathcal{P}=\{(1,0,0),(1,0,1),(0,1,0),(0,0,1),(0,0,0)\}\). The exact DD \(\mathcal{D}\) with width 3 in Figure 2(a) models the feasible region \(\mathcal{P}\). The weight of each arc \(a\in\mathcal{A}_{j}\), for \(j\in\{1,2,3\}\), shows the contribution of variable \(y_{j}\)'s value assignment to the objective function. The longest \(r\)-\(t\) path that encodes the optimal solution \((y_{1}^{*},y_{2}^{*},y_{3}^{*})=(0,1,0)\) has length 10, which is the optimal value to the problem. By reducing the width limit to 2, we can build relaxed and restricted DDs for \(\mathcal{P}\) as follows. The relaxed DD \(\overline{\mathcal{D}}\) in Figure 2(b) provides an upper bound to the optimal solution, where the longest path with length 14 is obtained by an infeasible point \((\overline{y}_{1},\overline{y}_{2},\overline{y}_{3})=(0,1,1)\). Finally, the restricted DD \(\underline{\mathcal{D}}\) in Figure 2(c) gives a lower bound to the optimal solution, where the longest path with length 9 encodes a feasible solution \((\underline{y}_{1},\underline{y}_{2},\underline{y}_{3})=(1,0,1)\)._ ### Continuous DD Models While the framework described in the previous section can be applied to solve different classes of discrete optimization problems, its extension to model sets with continuous variables requires a fundamentally different approach. The reason that the traditional DD structure is not viable for continuous sets is that representing the domain of a continuous variable through arcs requires an infinite number of them, spanning all values within a continuous interval, which is structurally prohibitive in DD graphs. Fortunately, there is a way to overcome this obstacle by decomposing the underlying set into certain rectangular formations, which can in turn be represented through node-sequences in DDs. In what follows, we give an overview of these results as relevant to our analysis. Consider a bounded set \(\mathcal{P}\subseteq\mathbb{R}^{n}\). Salemi and Davarnia (2022a) give necessary and sufficient conditions for \(\mathcal{P}\) to admit the desired rectangular decomposition. Such a set is said to be _DD-representable_ w.r.t. a fixed index set \(I\subseteq[n]\), as there exists a DD \(\mathcal{D}\) such that \(\max\{f(\boldsymbol{x})\mid\boldsymbol{x}\in\mathcal{P}\}=\max\{f(\boldsymbol{ x})\mid\boldsymbol{x}\in\operatorname{Sol}(\mathcal{D})\}\) for every function \(f(\boldsymbol{x})\) that is convex in the space of variables \(\boldsymbol{x}_{I}\). A special case of DD-representable sets is given next. **Proposition 1**: _Any bounded mixed integer set of the form \(\mathcal{P}\subseteq\mathbb{Z}^{n}\times\mathbb{R}\) is DD-representable w.r.t. \(I=\{n+1\}\). \(\square\)_ This result gives rise to a novel DD-based framework to solve general bounded MIPs as outlined below. Consider a bounded MIP \(\mathcal{H}:=\max\{\boldsymbol{c}\boldsymbol{y}+\boldsymbol{dx}\mid A \boldsymbol{y}+G\boldsymbol{x}\leq\boldsymbol{b},\ \boldsymbol{y}\in\mathbb{Z}^{n}\}\). Using Benders decomposition (BD), formulation \(\mathcal{H}\) is equivalent to \(\max_{\boldsymbol{y}\in\mathbb{Z}^{n}}\{\boldsymbol{c}\boldsymbol{y}+\max_{ \boldsymbol{x}}\{\boldsymbol{dx}\mid G\boldsymbol{x}\leq\boldsymbol{b}-A \boldsymbol{y}\}\}\), which can be reformulated as \(\mathcal{M}=\max\{\boldsymbol{c}\boldsymbol{y}+z\mid(\boldsymbol{y};z)\in \mathbb{Z}^{n}\times[l,u]\}\), where \(l,u\in\mathbb{R}\) are some valid bounds on \(z\) induced from the boundedness of \(\mathcal{H}\). Here, \(\mathcal{M}\) is the master problem and \(z\) represents the objective value of the subproblem \(\max_{\boldsymbol{x}}\{\boldsymbol{dx}\mid G\boldsymbol{x}\leq\boldsymbol{b} -A\bar{\boldsymbol{y}}\}\) for any given \(\bar{\boldsymbol{y}}\) as an optimal solution of the master problem. The outcome of the subproblems is either an optimality cut or a feasibility cut that will be added to the master problem. Then, the master problem will be resolved. Proposition 1 implies that formulation \(\mathcal{M}\) can be directly modeled and solved with DDs. For this DD, we assign \(n\) arc layers to the integer variables \(y_{1},y_{2},\ldots,y_{n}\), and one arc layer to the continuous variable \(z\) with only two arc labels showing a lower and upper bound for this variable. To find an optimal solution, the longest path is calculated, which will be used to solve the subproblems. Note that since \(\mathcal{M}\) is a maximization problem, a longest path of the associated DD encodes an optimal solution, and its length gives the optimal value; see Example 2. The feasibility and optimality cuts generated by the subproblems will then be added to _refine_ the DD, whose longest path will be recalculated. Figure 2: The exact, relaxed, and restricted DDs representing \(\mathcal{P}\) in Example 1. Solid and dotted arcs indicate one and zero arc labels, respectively. Numbers next to arcs represent weights. The refinement technique consists of removing arcs of the DD that lead to solutions that violate the added inequality, as well as splitting nodes of the DD that lead to different subsequent partial assignments; see Bergman et al. (2016a) for a detailed account on DD refinement techniques. We illustrate this approach in Example 2. Suppose that \(\max\{2y_{1}+4y_{2}+z\,|\,\boldsymbol{y}\in\mathcal{P},z\leq 25\}\) forms the master problem at the penultimate iteration of a BD algorithm, where \(\mathcal{P}=\{(0,0),(1,1)\}\). This problem is represented by the DD \(\mathcal{D}\) in Figure 3(a) where \(-M\) is a valid lower bound for \(z\). The longest path of \(\mathcal{D}\) encodes the solution \((\hat{y}_{1},\hat{y}_{2},\hat{z})=(1,1,25)\). Assume that using the point \((\hat{y}_{1},\hat{y}_{2})=(1,1)\) in the associated subproblem generates an optimality cut \(z\leq 3y_{1}+2y_{2}+10\) for the final iteration of the BD algorithm. Refining DD \(\mathcal{D}\) with respect to this cut yields the new DD in Figure 3(b). The longest path represents the optimal solution \((y_{1}^{*},y_{2}^{*},z^{*})=(1,1,15)\) with length \(21\), which is the optimal value. Using the DD framework as outlined above can be computationally challenging due to exponential growth rate of the size of an exact DD. To mitigate this difficulty, restricted/relaxed DDs can be employed inside of the BD framework as demonstrated in Algorithm 1. We refer to this solution method as DD-BD (Salemi and Davarnia 2022a). In explaining the steps of Algorithm 1, let point \(\hat{\boldsymbol{y}}\in\mathbb{Z}^{k}\), where \(k\leq n\), be a partial value assignment to the first \(k\) coordinates of variable \(\boldsymbol{y}\), i.e., \(y_{i}=\hat{y}_{i}\) for all \(i\in[k]\). We record the set of all partial value assignments in \(\hat{\mathcal{Y}}=\{\hat{\boldsymbol{y}}\in\mathbb{Z}^{k}\mid k\in[n]\}\cup\{ \ominus\}\), where \(\ominus\) represents the case where no coordinate of \(\boldsymbol{y}\) is fixed. Set \(\mathcal{C}\) contains the produced Benders cuts throughout the algorithm, and we denote the feasible region described by these cuts by \(\mathcal{F}^{\mathcal{C}}\). Further, define \(\mathcal{M}^{C}(\hat{\boldsymbol{y}})=\max\{\boldsymbol{c}\boldsymbol{y}+z \mid(\boldsymbol{y};z)\in\mathbb{Z}^{n}\times[l,u]\cap\mathcal{F}^{\mathcal{C}},\ y_{i}=\hat{y}_{i},\forall i\in[k]\}\) to be the restricted master problem \(\mathcal{M}\) obtained through adding cuts in \(\mathcal{C}\) and fixing the partial assignment \(\hat{\boldsymbol{y}}\). In this definition, the case with \(\mathcal{C}=\emptyset\) and \(\hat{\mathcal{Y}}=\{\ominus\}\) is denoted by \(\mathcal{M}^{\emptyset}(\ominus)=\mathcal{M}\), which is an input to Algorithm 1. Figure 3: The last two iterations of solving the master problem in Example 2 ``` Data: MIP \(\mathcal{H}\), construction method to build restricted and relaxed DDs for \(\mathcal{M}\) Result: An optimal solution \((\boldsymbol{y}^{*},z^{*})\) and optimal value \(w^{*}\) to \(\mathcal{H}\) 1 initialize set of partial assignments \(\hat{\mathcal{Y}}=\{\ominus\}\), set of Benders cuts \(\mathcal{C}=\emptyset\), and \(w^{*}=-\infty\) 2if\(\hat{\mathcal{Y}}=\emptyset\)then 3 terminate and return \((\boldsymbol{y}^{*},z^{*})\) and \(w^{*}\) 4else 5 select \(\hat{\boldsymbol{y}}\in\hat{\mathcal{Y}}\) and update \(\hat{\mathcal{Y}}\leftarrow\hat{\mathcal{Y}}\setminus\{\hat{\boldsymbol{y}}\}\) 6 create a restricted DD \(\underline{\mathcal{D}}\) associated with \(\mathcal{M}^{C}(\hat{\boldsymbol{y}})\) 7if\(\underline{\mathcal{D}}\neq\emptyset\)then 8 find a longest \(r\)-\(t\) path of \(\underline{\mathcal{D}}\) with encoding point \((\underline{\boldsymbol{y}},\underline{z})\) and length \(\underline{w}\) 9 solve the BD subproblem using \(\underline{\boldsymbol{y}}\) to obtain Benders cut \(\underline{C}\) 10if\(\underline{C}\in\mathcal{C}\)then 11 go to line 17 12 13else 14 update \(\mathcal{C}\leftarrow\mathcal{C}\cup\underline{C}\) and refine \(\underline{\mathcal{D}}\) w.r.t. \(\underline{C}\) 15 go to line 8 16 17else 18 go to line 2 19if\(\underline{w}>w^{*}\)then 20 update \(w^{*}\leftarrow\underline{w}\) and \((\boldsymbol{y}^{*},z^{*})\leftarrow(\underline{\boldsymbol{y}},\underline{z})\) 21if\(\underline{\mathcal{D}}\) provides an exact representation of \(\mathcal{M}^{C}(\hat{\boldsymbol{y}})\)then 22 go to line 2 23 24else 25 create a relaxed DD \(\overline{\mathcal{D}}\) associated with \(\mathcal{M}^{C}(\hat{\boldsymbol{y}})\) 26 find a longest \(r\)-\(t\) path of \(\overline{\mathcal{D}}\) with length \(\overline{w}\) 27if\(\overline{w}>w^{*}\)then 28 solve the BD subproblem using \(\overline{\boldsymbol{y}}\) to obtain Benders cut \(\overline{C}\) 29if\(\overline{C}\in\mathcal{C}\)then 20 go to line 31 21else 22 update \(\mathcal{C}\leftarrow\mathcal{C}\cup\overline{C}\) and refine \(\overline{\mathcal{D}}\) w.r.t. \(\overline{C}\) 23 go to line 23 24 25forall\(u\) in the last exact layer of \(\overline{\mathcal{D}}\)do 26 update \(\hat{\mathcal{Y}}\leftarrow\hat{\mathcal{Y}}\cup\{\tilde{\boldsymbol{y}}\}\) where \(\tilde{\boldsymbol{y}}\) encodes longest \(r\)-\(u\) path of \(\overline{\mathcal{D}}\) 27 28 go to line 2 29 30 [MISSING_PAGE_POST] The algorithm starts with constructing a restricted DD \(\underline{\mathcal{D}}\) corresponding to \(\mathcal{M}^{C}(\hat{\mathbf{y}})\) with empty initial values for \(C\) and \(\hat{y}\). We then find a longest \(r\)-\(t\) path of \(\underline{\mathcal{D}}\) encoding solution \((\underline{\mathbf{y}},\underline{z})\). Next, using \(\underline{\mathbf{y}}\), we solve the associated subproblem to obtain a feasibility/optimality cut \(\underline{C}\). We add this cut to \(\mathcal{C}\), refine \(\underline{\mathcal{D}}\) according to it, and find a new longest \(r\)-\(t\) path. We repeat these steps until no new feasibility/optimality cut is generated. At this point, the length of a longest \(r\)-\(t\) path of \(\underline{\mathcal{D}}\), denoted by \(\underline{w}\), gives a lower bound to the master problem \(\mathcal{M}\), which is also a valid lower bound to the original problem \(\mathcal{H}\). The value of \(\underline{w}\) can be used to update \(w^{*}\), the optimal value of \(\mathcal{H}\) at termination. Next, we create a relaxed DD \(\overline{\mathcal{D}}\) corresponding to \(\mathcal{M}^{C}(\hat{\mathbf{y}})\). We find a longest \(r\)-\(t\) path of \(\overline{\mathcal{D}}\) that provides an upper bound \(\overline{w}\) to \(\mathcal{M}\). If the upper bound \(\overline{w}\) is strictly greater than the current value of \(w^{*}\), we follow steps similarly to the case for \(\underline{\mathcal{D}}\) to iteratively refine \(\overline{\mathcal{D}}\) w.r.t. feasibility/optimality cuts through solving the subproblems, until no new cut is generated. Next, we perform a specialized branch-and-bound procedure to improve the bound through expanding merged layers of the DD. To this end, we add all the partial assignments associated with nodes in the last exact layer of \(\overline{D}\) (the last node layer in which no nodes are merged) to the collection \(\hat{\mathcal{Y}}\). The nodes corresponding to partial assignments in \(\hat{\mathcal{Y}}\) are required to be further explored to check whether or not the value of \(w^{*}\) can be improved. That is, the above process is repeated for every node \(v\) with partial assignment in \(\hat{\mathcal{Y}}\) as the \(r\)-\(v\) path is fixed in the new restricted/relaxed DDs. The algorithm terminates when \(\hat{\mathcal{Y}}\) becomes empty, at which point \(w^{*}\) is the optimal value. ## 3 DD-BD Formulation for the SGUFP In this section, we adapt the DD-BD framework described in Section 2.2 to solve the SGUFP. ### MIP Formulation We study the MIP formulation of the SGUFP based on that of its deterministic counterpart given in Davarnia et al. (2019). Consider a network \(G\!=\!(V,A)\) with node set \(V\!\coloneqq\!V^{\prime}\cup\{s,t\}\) and arc set \(A\), where \(s\) and \(t\) are source and sink nodes, respectively. The source node is connected to all the supply nodes in \(S\!\subseteq\!V^{\prime}\), and the sink node is connected to all the demand nodes in \(D\!\subseteq\!V^{\prime}\). Figure 4 illustrates the general structure of this network. For a node \(q\!\in\!V\), let \(\delta^{-}(q)\!\coloneqq\!\{i\!\in\!V\mid(i,q)\!\in\!A\}\) and \(\delta^{+}(q)\!\coloneqq\!\{j\!\in\!V\mid(q,j)\!\in\!A\}\) show the set of incoming and outgoing neighbors of \(q\), respectively. Define \(\bar{V}\!\subseteq\!V^{\prime}\) as a subset of vertices that must satisfy the NSNM requirement. For each node \(q\!\in\!\bar{V}\), let binary variable \(y_{ij}^{q}\!\in\!\{0,1\}\) represent whether or not the flow entering node \(q\!\in\!\bar{V}\) through arc \((i,q)\) leaves node \(q\) through arc \((q,j)\). The first stage of SGUFP determines the matching pairs between incoming and outgoing arcs of unsplittable nodes as follows: \[\max\quad z \tag{1a}\] \[\mathrm{s.t.} \sum_{j\in\delta^{+}(q)}y_{ij}^{q}\leq 1 \forall i\in\delta^{-}(q),\ \forall q\in\bar{V} \tag{1b}\] \[\sum_{i\in\delta^{-}(q)}y_{ij}^{q}\leq 1 \forall j\in\delta^{+}(q),\ \forall q\in\bar{V}\] (1c) \[y_{ij}^{q}\in\{0,1\} \forall(i,j)\in\delta^{-}(q)\times\delta^{+}(q),\ \forall q\in\bar{V}, \tag{1d}\] where constraints (1b) ensure that each incoming arc to a node with NSNM requirement is assigned to at most one outgoing arc, and constraints (1c) guarantee that each outgoing arc from such a node is matched with at most one incoming arc. In (1a)-(1d), variable \(z\) represents the objective value of the second stage of SGUFP where the demand uncertainty is taken into account. This demand uncertainty is modeled by a set \(\Xi\) of scenarios for the demand vector \(\mathbf{d}^{\xi}\) with occurrence probability \(\Pr^{\xi}\) for each scenario \(\xi\in\Xi\). Let continuous variable \(x_{ij}^{\xi}\in\mathbb{R}_{+}\) denote the flow from node \(i\) to node \(j\) through arc \((i,j)\) under scenario \(\xi\in\Xi\). We further assign a reward \(r_{ij}\) per unit flow to be collected by routing flow through arc \((i,j)\). It follows that \(z=\sum_{\xi\in\Xi}\Pr^{\xi}z^{\xi}\), where \(z^{\xi}\) is the objective value of the second stage of SGUFP for each scenario \(\xi\in\Xi\). This subproblem is formulated as follows for a given \(\mathbf{y}\) vector: \[\max \sum_{q\in V}\sum_{j\in\delta^{+}(q)}r_{qj}x_{qj}^{\xi} \tag{2a}\] \[\mathrm{s.t.} \sum_{i\in\delta^{-}(q)}x_{iq}^{\xi}-\sum_{j\in\delta^{+}(q)}x_{qj }^{\xi}=0 \forall q\in V^{\prime}\] (2b) \[\ell_{iq}^{\xi}\leq x_{iq}^{\xi}\leq u_{iq}^{\xi} \forall i\in\delta^{-}(q),\ \forall q\in V\] (2c) \[x_{iq}^{\xi}-x_{qj}^{\xi}\leq u_{iq}^{\xi}(1-y_{ij}^{q}) \forall(i,j)\in\delta^{-}(q)\times\delta^{+}(q),\ \forall q\in\bar{V}\] (2d) \[x_{qj}^{\xi}-x_{iq}^{\xi}\leq u_{qj}^{\xi}(1-y_{ij}^{q}) \forall(i,j)\in\delta^{-}(q)\times\delta^{+}(q),\ \forall q\in\bar{V}\] (2e) \[x_{iq}^{\xi}\leq u_{iq}^{\xi}\sum_{j\in\delta^{+}(q)}y_{ij}^{q} \forall i\in\delta^{-}(q),\ \forall q\in\bar{V} \tag{2f}\] \[x_{qj}^{\xi} \leq u_{qj}^{\xi}\sum_{i\in\delta^{-}(q)}y_{ij}^{q} \forall j\in\delta^{+}(q),\ \forall q\in\bar{V} \tag{2g}\] \[x_{ij}^{\xi} \geq 0 \forall(i,j)\in A. \tag{2h}\] In the above formulation, the objective function captures the total reward collected by routing flows throughout the network (from the source \(s\) to the sink \(t\)) to satisfy demands. The flow-balance requirements are represented by (2b). Constraints (2c) bound the flow on each arc from below and above. To impose the demand requirement for each scenario \(\xi\in\Xi\), we fix \(\ell_{qt}^{\xi}=u_{qt}^{\xi}=d_{q}^{\xi}\) for all demand nodes \(q\in D\) with demand \(d_{q}^{\xi}\), and leave the lower and upper bound values unchanged for all other arcs. Constraints (2d)-(2g) model the NSNM requirement for each node \(q\in\bar{V}\). In particular, (2d) and (2e) ensure that matching arcs \((i,q)\) and \((q,j)\) have equal flows. Constraints (2f) and (2g) guarantee that an arc without a matching pair does not carry any flow. We note here that the Constraint (2b) is implied by other constraints of the above subproblem under the assumption that \(\mathbf{y}\) is feasible to the master problem (1a)-(1d). However, we maintain this constraint in the subproblem because the master formulation in our DD-based approach, as will be described in Section 3.2, may produce a solution that is not feasible to (1a)-(1d). As a result, the addition of the Constraint (2b) will lead to a tighter subproblem formulation. As discussed in Section 2.2, the first step to use the DD-BD algorithm is to decompose the underlying problem into a master and a subproblem. The above two-stage formulation of the SGUFP is readily amenable to BD since the first stage problem (1a)-(1d) can be considered as the master problem together with some valid lower and upper bounds \(-\Gamma\) and \(\Gamma\) on \(z\) induced from the boundedness of the MIP formulation. For a given \(\mathbf{y}\) value obtained from the master problem and a scenario \(\xi\in\Xi\), the second stage problem (2a)-(2h) can be viewed as the desired subproblems. The optimality/feasibility cuts obtained from each scenario-based subproblem are then added to the master problem through aggregation as described in Section 3.3. ### DD-BD: Master Problem Formulation While the DD-BD Algorithm 1 provides a general solution framework for any bounded MIP, its DD component is problem-specific, i.e., it should be carefully designed based on the specific structure of the underlying problem. In this section, we design such an oracle for the SGUFP that represents the feasible region \(\{(\text{1b})-(\text{1d}),z\in[-\Gamma,\Gamma]\}\) of the master problem (1a)-(1d). To model this feasible region in the original space of \((\mathbf{y};z)\) variables, a DD would require \(\sum_{q\in\bar{V}}|\delta^{-}(q)|\times|\delta^{+}(q)|\) arc layers to represent binary variables \(\mathbf{y}\) and one arc layer to encode the continuous variable \(z\). Constructing such a DD, however, would be computationally cumbersome due to the large number of the arc layers. To mitigate this difficulty, we take advantage of the structural flexibility of DDs in representing _irregular_ variable types that cannot be used in standard MIP models. One such variable type is the index set, where arc layers represent indices, rather than domain values. We next show that we can remarkably reduce the number of DD arc layers by reformulating the master problem in a transformed space of variables defined over index sets. Consider a node \(q\in\bar{V}\). In the following, we define mappings that assign an index to each incoming and outgoing arc of \(q\). These mappings enable us to define new variables to reduce the number of DD arc layers. Let \(\operatorname{ind}^{-}(i,q)\) be a one-to-one mapping from incoming arcs \((i,q)\), for \(i\in\delta^{-}(q)\), to the index set \(\{1,2,\ldots,|\delta^{-}(q)|\}\). Similarly, let \(\operatorname{ind}^{+}(q,j)\) be a one-to-one mapping from outgoing arcs \((q,j)\), for \(j\in\delta^{+}(q)\), to the index set \(\{1,2,\ldots,|\delta^{+}(q)|\}\). For each incoming arc \((i,q)\) with index \(h=\operatorname{ind}^{-}(i,q)\), we define an integer variable \(w_{h}^{q}\in\{0,1,\ldots,|\delta^{+}(q)|\}\) such that \(w_{h}^{q}=0\) if this incoming arc is not paired with any outgoing arc, and \(w_{h}^{q}=k>0\) if this arc is matched with an outgoing arc \((q,j)\) with index \(k=\operatorname{ind}^{+}(q,j)\). Next, we give a formulation in the space of \(\boldsymbol{w}\) variables that describes the matching between incoming and outgoing arcs of \(q\) for all \(q\in\bar{V}\). In the following, \(\operatorname{sign}(.)\) represents the sign function that returns \(1\) if its argument is strictly positive, \(0\) if the argument is zero, and \(-1\) otherwise. Further, the operator \(|.|\), when applied on a set, represents the set size; and when applied on a real number, it represents the absolute value. **Proposition 2**: _Formulation_ \[\sum_{i\in\delta^{-}(q)}\operatorname{sign}\left(\left|w_{ \operatorname{ind}^{-}(i,q)}^{q}-\operatorname{ind}^{+}(q,j)\right|\right) \geq\left|\delta^{-}(q)\right|-1 \forall j\in\delta^{+}(q),\ \forall q\in\bar{V} \tag{3a}\] \[w_{\operatorname{ind}^{-}(i,q)}^{q}\in\left\{0,1,\ldots,\left| \delta^{+}(q)\right|\right\} \forall i\in\delta^{-}(q),\ \forall q\in\bar{V} \tag{3b}\] _models the matching between incoming and outgoing arcs of nodes \(q\in\bar{V}\)._ We show the result for a single node \(q\in\bar{V}\). The extension to the multiple node case is straightforward as the matching problem for each node is independent from other nodes. For the direct implication, assume that \(M^{q}\) is a matching between incoming and outgoing arcs of \(q\), with elements of the form \((i,j)\) that represent a matching between the incoming arc \((i,q)\) and the outgoing arc \((q,j)\). We show that variables \(\boldsymbol{w}\) associated with matching pairs in \(M^{q}\) satisfy constraints (3a) and (3b). It follows from the definition of \(\boldsymbol{w}\) that, for each \((i,j)\in M^{q}\), we have \(w_{\operatorname{ind}^{-}(i,q)}^{q}=\operatorname{ind}^{+}(q,j)\). Also, for any \(i\in\delta^{-}(q)\) that does not have a matching pair in \(M^{q}\), we have \(w_{\operatorname{ind}^{-}(i,q)}^{q}=0\). These value assignments show that \(\boldsymbol{w}\) satisfies (3b) as the image of \(\operatorname{ind}^{+}\) mapping is \(\{1,\ldots,|\delta^{+}(q)|\}\). For each \(i\in\delta^{-}(q)\) and \(j\in\delta^{+}(q)\), we have \(\left|w_{\operatorname{ind}^{-}(i,q)}^{q}-\operatorname{ind}^{+}(q,j)\right|\geq 0\), with equality holding when \((i,j)\in M^{q}\). For each \(j\in\delta^{+}(q)\), there are two cases. For the first case, assume that \((i,j)\notin M^{q}\) for any \(i\in\delta^{-}(q)\). As a result, \(\left|w_{\operatorname{ind}^{-}(i,q)}^{q}-\operatorname{ind}^{+}(q,j)\right|>0\) for all \(i\in\delta^{-}(q)\). Applying the \(\operatorname{sign}(.)\) function on these terms yields \(\operatorname{sign}\left(\left|w_{\operatorname{ind}^{-}(i,q)}^{q}- \operatorname{ind}^{+}(q,j)\right|\right)=1\), which implies that \(\sum_{i\in\delta^{-}(q)}\operatorname{sign}\left(\left|w^{q}_{\operatorname{ind}^{-} (i,q)}-\operatorname{ind}^{+}(q,j)\right|\right)=|\delta^{-}(q)|\), satisfying (3a). For the second case, assume that \((i^{*},j)\in M^{q}\) for some \(i^{*}\in\delta^{-}(q)\). As a result, we have \(\sum_{i\in\delta^{-}(q)}\operatorname{sign}\left(\left|w^{q}_{\operatorname{ ind}^{-}(i,q)}-\operatorname{ind}^{+}(q,j)\right|\right)=|\delta^{-}(q)|-1\) since \(\operatorname{sign}\left(\left|w^{q}_{\operatorname{ind}^{-}(i^{*},q)}- \operatorname{ind}^{+}(q,j)\right|\right)=\left|w^{q}_{\operatorname{ind}^{-} (i^{*},q)}-\operatorname{ind}^{+}(q,j)\right|=0\), satisfying (3a). For the reverse implication, assume that \(\boldsymbol{w}\) is a feasible solution to (3a)-(3b). We show that the pairs of the form \((i,j)\) encoded by these variables constitute a feasible matching between incoming and outgoing arcs of \(q\), i.e., (i) each arc \((i,q)\) is matched with at most one arc \((q,j)\), and (ii) each arc \((q,j)\) is matched with at most one arc \((i,q)\). It follows from constraint (3b) that, for each \(i\in\delta^{-}(q)\), variable \(w^{q}_{\operatorname{ind}^{-}(i,q)}\) takes a value between \(\{0,1,\ldots,|\delta^{+}(q)|\}\). If \(w^{q}_{\operatorname{ind}^{-}(i,q)}=0\), then \((i,q)\) is not matched with any outgoing arc, otherwise it is matched with arc \((q,j)\) with \(\operatorname{ind}^{+}(q,j)=w^{q}_{\operatorname{ind}^{-}(i,q)}\). This ensures that condition (i) above is satisfied for this matching collection. Further, for each \(j\in\delta^{-}(q)\), constraint (3a) implies that \(\operatorname{sign}\left(\left|w^{q}_{\operatorname{ind}^{-}(i,q)}- \operatorname{ind}^{+}(q,j)\right|\right)\) can be equal to zero for at most one \(i\in\delta^{-}(q)\). In such a case, we would have at most one matching pair of the form \((i,j)\) in the collection, showing that condition (ii) above is satisfied. \(\square\) It follows from Proposition 2 that constraints (3a)-(3b) can replace (1b)-(1d) in the master problem (1a)-(1d) to obtain the following master problem in a transformed space of variables. \[\max_{\boldsymbol{w};z}\left\{z\left|\right.\eqref{eq:milp}-\eqref{eq:milp},z\in[-\Gamma,\Gamma]\right\}. \tag{4}\] Note that formulation (4) is an integer nonlinear program (INLP) with nonconvex and non-continuous constraint functions. Such a formulation is extremely difficult for conventional MINLP techniques and solvers to handle. However, due to structural flexibility of DDs in representing integer nonlinear programs, this problem can be easily modeled via a DD; see Davarnia and Van Hoeve (2020) for a detailed account on using DDs for modeling INLPs. In the following, we present an algorithm to construct DDs in the space of \((\boldsymbol{w};z)\) variables for the master problem (4) with a single node \(q\in\bar{V}\). The extension to the case with multiple nodes follows by replicating the DD structure. The output of Algorithm 2 is a DD with \(|\delta^{-}(q)|+1\) arc layers where the first \(|\delta^{-}(q)|\) layers represent \(\boldsymbol{w}\) variables and the last layer encodes variable \(z\). In this algorithm, \(s_{u}\) denotes the state value of DD node \(u\). The core idea of the algorithm is to use unpaired outgoing arcs of \(q\) as the state value at each DD layer that represents the matching for an incoming arc of \(q\). Next, We show that the solution set of the DD constructed by Algorithm 2_represents_ the feasible region of (4). Note here that DD representation of a MIP set, as described in Section 2.2, does not imply the encoding of all of the solutions of the set, but rather the encoding of a subset of all solutions that subsumes all the extreme points of the set. Such a representation is sufficient to solve an optimization problem over the set with an objective function convex in continuous variables, which is the case for (4). **Theorem 1**: _Consider a SGUFP with \(\bar{V}=\{q\}\). Let \(\mathcal{D}\) be a DD constructed by Algorithm 2. Then, \(\mathrm{Sol}(\mathcal{D})\) represents the feasible region of (4)._ \((\subseteq)\) Consider an \(r\)-\(t\) path of \(\mathcal{D}\) that encodes solution \((\tilde{\mathbf{w}}^{q},z)\). According to Algorithm 2, the labels of the first \(|\delta^{-}(q)|\) arcs of this path belong to \(\{0,1,\ldots,|\delta^{+}(q)|\}\), showing that \(\tilde{\mathbf{w}}^{q}\) satisfies constraints (3b). Assume by contradiction that \(\tilde{\mathbf{w}}^{q}\) does not satisfy constraints (3a), i.e., \(\sum_{i\in\delta^{-}(q)}\mathrm{sign}\left(\left|w^{q}_{\mathrm{ind}^{-}(i,q)}- \mathrm{ind}^{+}(q,j)\right|\right)\leq|\delta^{-}(q)|-2\) for some \(j\in\delta^{+}(q)\). This implies that \(\tilde{w}^{q}_{\mathrm{ind}^{-}(i^{\prime},q)}=\tilde{w}^{q}_{\mathrm{ind}^{ -}(i^{\prime\prime},q)}=\mathrm{ind}^{+}(q,j)\) for two distinct \(i^{\prime},i^{\prime\prime}\in\delta^{-}(q)\). In other words, the arcs \(\mathrm{ind}^{-}(i^{\prime},q)\) and \(\mathrm{ind}^{-}(i^{\prime\prime},q)\) of the selected \(r\)-\(t\) path both share the same label value \(\mathrm{ind}^{+}(q,j)\). According to line 3 of Algorithm 2, we must have that the state value of nodes at layers \(\mathrm{ind}^{-}(i^{\prime},q)\) and \(\mathrm{ind}^{-}(i^{\prime\prime},q)\) of the \(r\)-\(t\) path both contain \(\mathrm{ind}^{+}(q,j)\). This is a contradiction to the state update policy in line 4 of Algorithm 2, since positive arc labels at each layer of the DD will be excluded from the state value of the subsequent nodes. \((\supseteq)\) Consider a feasible solution point \((\tilde{\mathbf{w}}^{q};\tilde{z})\) of (4). Suppose \(\tilde{\mathbf{w}}^{q}=(\ell_{1},\ell_{2},\ldots,\ell_{|\delta^{-}(q)|})\). According to constraints (3a), no two coordinates of \(\tilde{\mathbf{w}}^{q}\) have the same positive value. The state value at the root node in \(\mathcal{D}\) contains all index values \(\{0,1,\ldots,|\delta^{+}(q)|\}\). According to Algorithm 2, there exists an arc with label \(\ell_{1}\) at the first layer of \(\mathcal{D}\). The state value at the head node of this arc, therefore, contains \(\ell_{2}\in\{0,1,\ldots,|\delta^{+}(q)|\}\setminus\{\ell_{1}\}\), which guarantees an arc with label \(\ell_{2}\) at the second layer of this path. Following a similar approach, we can track a path from the root to layer \(|\delta^{-}(q)|\) whose arcs labels match values of \(\tilde{\mathbf{w}}^{q}\). Note for the last layer that \(\tilde{z}\in[-\Gamma,\Gamma]\), which is included in the interval between arc labels of the last layer of \(\mathcal{D}\). As a result, \((\tilde{\mathbf{w}}^{q};\tilde{z})\) is represented by an \(r\)-\(t\) path of \(\mathcal{D}\). \(\Box\) The main purpose of using a DD that models the master problem (4) over one that models (1a)-(1d) is the size reduction in arc layers that represent variables \(\mathbf{w}\) as compared with variables \(\mathbf{y}\). It turns out that this space transformation can significantly improve the solution time of the DD approach. We refer the interested reader to Appendix A for a detailed discussion on these advantages, including preliminary computational results. Constructing exact DDs as described in Algorithm 2 can be computationally expensive for large size problems. As discussed in Section 2.2, relaxed and restricted DDs are used to circumvent this difficulty. Building restricted DDs is straightforward as it involves the selection of a subset of \(r\)-\(t\) paths of the exact DD that satisfy a preset width limit. Constructing relaxed DDs, on the other hand, requires careful manipulation of the DD structure to merge nodes in such a way that it encodes a superset of all \(r\)-\(t\) paths of the exact DD. We demonstrate a method to construct such relaxed DDs in Algorithm 3. Similarly to Algorithm 2, this algorithm is presented for a single NSNM node, but can be extended to multiple nodes by replicating the procedure. ``` Data: node \(q\in\bar{V}\), parameter \(\Gamma\) Result: a relaxed DD \(\overline{\mathcal{D}}\) create the root node \(r\in\mathcal{U}_{1}\) with state \(s_{r}=\{0,1,\ldots,|\delta^{+}(q)|\}\) forall\(i\in\{1,2,\ldots,|\delta^{-}(q)|\}\) and \(u\in\mathcal{U}_{i}\)do foreall\(\ell\in s_{u}\)do create a node \(v\in\mathcal{U}_{i+1}\) with state \((s_{u}\setminus\{\ell\})\cup\{0\}\) and an arc \(a\in\mathcal{A}_{i}\) connecting \(u\) to \(v\) with label \(l(a)=\ell\) select a subset of nodes \(v_{1},v_{2},\ldots,v_{k}\in\mathcal{U}_{i+1}\) and merge them into node \(v^{\prime}\) with state \(s_{v^{\prime}}=\bigcup_{j=1}^{k}s_{v_{j}}\) forall\(u\in\mathcal{U}_{1+|\delta^{-}(q)|}\)do create two arcs \(a_{1},a_{2}\in\mathcal{A}_{1+|\delta^{-}(q)|}\) connecting \(u\) to the terminal node with labels \(l(a_{1})=\Gamma\) and \(l(a_{2})=-\Gamma\). ``` **Algorithm 3**Construction of relaxed DD for the master problem of SGUFP with a node \(q\in\bar{V}\) Theorem 2.1: _Consider a SGUFP with \(\bar{V}=\{q\}\). Let \(\overline{\mathcal{D}}\) be a DD constructed by Algorithm 3. Then, \(\overline{\mathcal{D}}\) represents a relaxation of the feasible region of (4)._ Proof: Let \(\hat{\mathcal{D}}\) be the DD constructed by Algorithm 2 for the master problem (4) with a single node \(q\in\bar{V}\). It suffices to show that the solution set of \(\overline{\mathcal{D}}\) provides a relaxation for that of \(\hat{\mathcal{D}}\). Pick a root-terminal path \(\dot{P}\) of \(\dot{\mathcal{D}}\) with encoding point \((\dot{\mathbf{w}}^{q};\dot{z})\). We show that there exist a root-terminal path \(\overline{P}\) of \(\overline{\mathcal{D}}\) with encoding point \((\overline{\mathbf{w}}^{q};\overline{z})\) such that \(\overline{\mathbf{w}}^{q}=\dot{\mathbf{w}}^{q}\) and \(\overline{z}=\dot{z}\). Given a DD, define \(P_{k}\) to be a sub-path composed of arcs in the first \(k\) layers, for \(1\leq k\leq|\delta^{-}(q)|\). We show for any sub-path \(\dot{P}_{k}\) of \(\dot{\mathcal{D}}\) with encoding point \(\dot{\mathbf{w}}_{k}^{q}=(\dot{w}_{1}^{q},\ldots,\dot{w}_{k}^{q})\), there exists a sub-path \(\overline{P}_{k}\) of \(\overline{\mathcal{D}}\) with encoding point \(\overline{\mathbf{w}}_{k}=(\overline{w}_{1},\ldots,\overline{w}_{k})\) such that \(\overline{\mathbf{w}}_{h}=\dot{\mathbf{w}}_{h}\) for \(h=1,\ldots,k\). Note that we only need to prove the matching values for \(k\leq|\delta^{-}(q)|\), because each node at node layer \(|\delta^{-}(q)|+1\) of both \(\dot{\mathcal{D}}\) and \(\overline{\mathcal{D}}\) is connected by two arcs with labels \(-\Gamma\) and \(\Gamma\) to the terminal node, and thus there are always matching arcs with the same label for the last layer, i.e., \(\overline{\pi}=\dot{z}\). We prove the result by induction on \(k\). The base case for \(k=1\) is trivial, since \(\overline{\mathcal{D}}\) contains arcs with labels \(\{0,1,\ldots,|\delta^{+}(q)|\}\) in the first layer, which includes the label value of the first arc on \(\dot{P}_{1}\). For the induction hypothesis, assume that the statement is true for \(k=d\), i.e., for the sub-path \(\dot{P}_{d}\) with label values \(\dot{\mathbf{w}}_{d}^{q}=(\dot{w}_{1}^{q},\ldots,\dot{w}_{d}^{q})\), there is sub-path \(\overline{P}_{d}\) of \(\overline{\mathcal{D}}\) with matching arc labels. We show the statement holds for \(d+1\). Let \(u\in\dot{A}_{d+1}\) and \(v\in\overline{A}_{d+1}\) be the end nodes of \(\dot{P}_{d}\) and \(\overline{P}_{d}\), respectively. It follows from Algorithm 2 that the index set representing the state value at node \(u\) contains \(\dot{w}_{d+1}^{q}\), i.e., \(\dot{w}_{d+1}^{q}\in\dot{s}_{u}=\{0\}\cup\{1,\ldots,|\delta^{+}(q)|\}\setminus \{\dot{w}_{1},\dot{w}_{2},\ldots,\dot{w}_{d}\}\). The merging step in line 5 of Algorithm 3, on the other hand, implies that \(\overline{s}_{v}\supseteq\{0\}\cup\{1,\ldots,|\delta^{+}(q)|\}\setminus\{ \overline{w}_{1},\overline{w}_{2},\ldots,\overline{w}_{d}\}=\{0\}\cup\{1, \ldots,|\delta^{+}(q)|\}\setminus\{\dot{w}_{1},\dot{w}_{2},\ldots,\dot{w}_{d} \}=\dot{s}_{u}\), where the inclusion follows from the fact that state values at nodes on path \(\overline{P}_{d}\) contain those of each individual path due to merging operation, and the first equality holds because of the induction hypothesis. As a result, \(\overline{s}_{v}\) must contain \(\dot{w}_{d+1}^{q}\), which implies that there exists an arc with \(\dot{w}_{d+1}^{q}\) connected to node \(v\) on \(\overline{P}_{d}\). Attaching this arc to \(\overline{P}_{d}\), we obtain the desired sub-path \(\overline{P}_{d+1}\). \(\Box\) ### DD-BD: Subproblem Formulation At each iteration of the DD-BD algorithm, an optimal solution of the master problem is plugged into the subproblems to obtain feasibility/optimality cuts. For the SGUFP formulation, this procedure translates to obtaining an optimal solution of (4) in the space of \(\mathbf{w}\) variables, which is used to solve the subproblem (2a)-(2h). The formulation of the subproblem, however, is defined over the original binary variables \(\mathbf{y}\), and the resulting feasibility/optimality cuts are generated in this space. To remedy this discrepancy between the space of variables in the master and subproblems, we need to find a one-to-one mapping between variables \(\mathbf{w}\) and \(\mathbf{y}\), as outlined next. **Proposition 3**: _Consider a node \(q\in\bar{V}\). Let \(\mathbf{y}^{q}\) be a feasible solution to (1b)-(1d). Then, \(\mathbf{w}^{q}\) obtained as_ \[w_{\mathrm{ind}^{-}(i,q)}^{q}=\sum_{j\in\delta^{+}(q)}\mathrm{ind}^{+}(q,j)\,y _{ij}^{q} \forall i\in\delta^{-}(q), \tag{5}\] _is a feasible solution to (3a)-(3b). Conversely, let \(\mathbf{w}^{q}\) be a feasible solution to (3a)-(3b). Then, \(\mathbf{y}^{q}\) obtained as_ \[y_{ij}^{q}=1-\mathrm{sign}\left(\left|w_{\mathrm{ind}^{-}(i,q)}^{q}-\mathrm{ ind}^{+}(q,j)\right|\right) \forall(i,j)\in\delta^{-}(q)\times\delta^{+}(q), \tag{6}\] _is a feasible solution to (1b)-(1d)._ _Proof._ For the direct statement, let \(\mathbf{y}^{q}\) be a feasible solution to (1b)-(1d), and construct a vector \(\mathbf{w}^{q}\) according to (5). We show that \(\mathbf{w}^{q}\) satisfies all constraints (3a)-(3b). First, we show that constraints (3a) are satisfied. Assume by contradiction that there exists \(j^{\prime}\in\delta^{+}(q)\) such that \(\sum_{i\in\delta^{-}(q)}\operatorname{sign}\left(\left|w^{q}_{\operatorname{ ind}^{-}(i,q)}-\operatorname{ind}^{+}(q,j^{\prime})\right|\right)\leq|\delta^{-}(q)|-2\). This implies that \(w^{q}_{\operatorname{ind}^{-}(i^{\prime},q)}=w^{q}_{\operatorname{ind}^{-}(i^ {\prime\prime},q)}=\operatorname{ind}^{+}(q,j^{\prime})\) for some \(i^{\prime},i^{\prime\prime}\in\delta^{-}(q)\). Then, we can write that \[w^{q}_{\operatorname{ind}^{-}(i^{\prime},q)}=\sum_{j\in\delta^{+}(q)} \operatorname{ind}^{+}(q,j)y^{q}_{i^{\prime}j}=\operatorname{ind}^{+}(q,j^{ \prime})=\sum_{j\in\delta^{+}(q)}\operatorname{ind}^{+}(q,j)y^{q}_{i^{\prime \prime}j}=w^{q}_{\operatorname{ind}^{-}(i^{\prime\prime},q)},\] where the first and last equalities hold by (5). The second and third equalities in the above chain of relations imply that \(y^{q}_{i^{\prime}j^{\prime}}=y^{q}_{i^{\prime\prime}j^{\prime}}=1\), since \(\operatorname{ind}^{+}(q,j^{\prime})>0\). This violates constraints (1c), reaching a contradiction. Next, we show that constraints (3b) are satisfied. The proof follows directly from construction of \(\mathbf{w}^{q}\) and constraints (1b). For the converse statement, let \(\mathbf{w}^{q}\) be a feasible solution to (3a)-(3b), and construct a vector \(\mathbf{y}^{q}\) according to (6). We show that \(\mathbf{y}^{q}\) satisfies all constraints (1b)-(1d). To show that each constraint (1b) is satisfied, consider \(i\in\delta^{-}(q)\). We can write that \[\sum_{j\in\delta^{+}(q)}y^{q}_{ij}=|\delta^{+}(q)|-\sum_{j\in\delta^{+}(q)} \operatorname{sign}\left(\left|w^{q}_{\operatorname{ind}^{-}(i,q)}- \operatorname{ind}^{+}(q,j)\right|\right)\leq|\delta^{+}(q)|-\left(|\delta^{+ }(q)|-1\right)=1,\] where the first equality follows from the construction of \(\mathbf{y}^{q}\), and the inequality holds by (3b) as \(\left|w^{q}_{\operatorname{ind}^{-}(i,q)}-\operatorname{ind}^{+}(q,j)\right|=0\) for at most one index \(j\in\delta^{+}(q)\). To show that each constraint (1c) is satisfied, select \(j\in\delta^{+}(q)\). We have \[\sum_{i\in\delta^{-}(q)}y^{q}_{ij}=|\delta^{-}(q)|-\sum_{i\in\delta^{-}(q)} \operatorname{sign}\left(\left|w^{q}_{\operatorname{ind}^{-}(i,q)}- \operatorname{ind}^{+}(q,j)\right|\right)\leq 1,\] where the equality follows from the construction of \(\mathbf{y}^{q}\), and the inequality holds because of constraint (3a). Finally, each constraint (1d) is satisfied due to the fact that \(1-\operatorname{sign}(\left|.\right|)\in\{0,1\}\). \(\square\) Proposition 4: _Mappings described by (5) and (6) are one-to-one over their respective domains._ _Proof._ Note that the mapping described by (5) is a linear transformation of the form \(\mathbf{w}^{q}=B\mathbf{y}^{q}\) with coefficient matrix \(B\in\mathbb{Z}^{|\delta^{-}(q)|\times(|\delta^{-}(q)||\delta^{+}(q)|)}\). It is clear from the identity block structure of \(B\), that it is full row-rank, since each column contains a single non-zero element while each row has at least one non-zero element. As a result, the null space of \(B\) is the origin, which implies that \(\tilde{\mathbf{w}}^{q}=\tilde{\mathbf{w}}^{q}\) only if \(\tilde{\mathbf{y}}^{q}=\tilde{\mathbf{y}}^{q}\). For the mapping described by (6), let distinct points \(\hat{\mathbf{w}}^{q}\) and \(\tilde{\mathbf{w}}^{q}\) satisfy (3b). Construct vectors \(\tilde{\mathbf{y}}^{q}\) and \(\tilde{\mathbf{y}}^{q}\) by (6) using \(\hat{\mathbf{w}}^{q}\) and \(\tilde{\mathbf{w}}^{q}\), respectively. Because \(\hat{\mathbf{w}}^{q}\) and \(\tilde{\mathbf{w}}^{q}\) are distinct, there must exist \(i\in\delta^{-}(q)\) such that \(\hat{w}^{q}_{\mathrm{ind}^{-}(i,q)}\neq\tilde{w}^{q}_{\mathrm{ind}^{-}(i,q)}\). This implies that at least one of these variables, say \(\hat{w}^{q}_{\mathrm{ind}^{-}(i,q)}\), is non-zero. It follows from (3b) that \(\hat{w}^{q}_{\mathrm{ind}^{-}(i,q)}=\mathrm{ind}^{+}(q,j^{\prime})\) for some \(j^{\prime}\in\delta^{+}(q)\), and that \(\hat{w}^{q}_{\mathrm{ind}^{-}(i,q)}\neq\mathrm{ind}^{+}(q,j^{\prime})\). According to (6), we write that \(\hat{y}_{ij^{\prime}}=1-\mathrm{sign}\left(\left|\hat{w}^{q}_{\mathrm{ind}^{-} (i,q)}-\mathrm{ind}^{+}(q,j^{\prime})\right|\right)=1\), and that \(\tilde{y}_{ij^{\prime}}=1-\mathrm{sign}\left(\left|\tilde{w}^{q}_{\mathrm{ind} ^{-}(i,q)}-\mathrm{ind}^{+}(q,j^{\prime})\right|\right)=0\), showing that \(\tilde{\mathbf{y}}^{q}\neq\tilde{\mathbf{y}}^{q}\). \(\qed\) Using the results of Propositions 3 and 4, we can apply the DD-BD Algorithm 1 in its entirety for the SGUFP. In particular, at each iteration of the algorithm, we can transform the optimal solution \((\bar{\mathbf{w}},\bar{z})\) obtained from the DD representing the master problem (4) into a solution \((\bar{\mathbf{y}},\bar{z})\) through the mapping (6). Given an optimal first-stage solution \(\bar{\mathbf{y}}\), we can solve \(|\Xi|\) separate subproblems; one for each demand realization in the second-stage. The feasibility cuts obtained from subproblems, which are in the space of \(\mathbf{y}\) variables, are translated back into the space of \(\mathbf{w}\) variables through the mapping (5) and added to the master problem. Further, in a case where all subproblems produce an optimality cut, they can be aggregated to generate an optimality cut in the space of \((\mathbf{y},z)\), which is added to the master problem after being translated into the space of \((\mathbf{w},z)\) variables. The master DD will be refined with respect to the resulting inequalities, and an optimal solution is returned to be used for the next iteration. In the remainder of this section, we present details on the derivation of optimality/feasibility cuts from subproblem (2a)-(2h). Consider the following partitioning of the set of arcs \(A\) into subsets \[A_{1} \coloneqq\left\{(i,j)\in A\,\big{|}\,\delta^{-}(i)=\emptyset,\ \delta^{+}(j)\neq \emptyset\right\},\ A_{2} \coloneqq\left\{(i,j)\in A\,\big{|}\,\delta^{-}(i)\neq\emptyset,\ \delta^{+}(j)=\emptyset\right\},\] \[A_{3} \coloneqq\left\{(i,j)\in A\,\big{|}\,\delta^{-}(i)\neq \emptyset,\ \delta^{+}(j)\neq\emptyset\right\},\ A_{4} \coloneqq\left\{(i,j)\in A\,\big{|}\,\delta^{-}(i)=\emptyset,\ \delta^{+}(j)=\emptyset\right\},\] and let \(\mathbf{\theta}^{\xi}=(\mathbf{\beta}^{\xi},\mathbf{\gamma}^{\xi},\mathbf{\delta}^{\xi},\mathbf{ \phi}^{\xi},\mathbf{\lambda}^{\xi},\mathbf{\mu}^{\xi})\) be the vector of dual variables associated with constraints of (2a)-(2h) for a scenario \(\xi\in\Xi\). Further, define the bi-function \[f(\mathbf{y};\mathbf{\theta}^{\xi})= \sum_{q\in V}\sum_{j\in\delta^{+}(q)}\left(-\ell_{qj}\beta^{\xi} _{qj}+u_{qj}\gamma^{\xi}_{qj}\right)+\sum_{q\in V}\sum_{(i,j)\in\delta^{-}(q) \times\delta^{+}(q)}\left(u_{iq}(1-y^{q}_{ij})\lambda^{\xi}_{iqj}+u_{qj}(1-y^ {q}_{ij})\mu^{\xi}_{iqj}\right)\] \[+\sum_{q\in V}\sum_{i\in\delta^{-}(q)}\left(u_{iq}\sum_{j\in\delta^ {+}(q)}y^{q}_{ij}\sigma^{\xi}_{iq}\right)+\sum_{q\in V}\sum_{j\in\delta^{+}(q )}\left(u_{qj}\sum_{i\in\delta^{-}(q)}y^{q}_{ij}\phi^{\xi}_{qj}\right).\] For a given \(\bar{\mathbf{y}}\) and each scenario \(\xi\in\Xi\), the dual of the subproblem (2a)-(2h) can be written as follows where the symbol \(\star\) on a node means that it belongs to \(\bar{V}\). \[\min f(\bar{\mathbf{y}};\mathbf{\theta}^{\xi})\] (7a) s.t. \[\alpha^{\xi}_{q}-\beta^{\xi}_{iq}+\gamma^{\xi}_{iq}+\sum_{j:j\in \delta^{+}(\hat{\xi})}\lambda^{\xi}_{i\hat{q}j}-\sum_{j:j\in\delta^{+}(\hat{ \xi})}\mu^{\xi}_{i\hat{q}j}+\sigma^{\xi}_{i\hat{q}}\geq r_{i\hat{q}} \forall(i,\hat{q})\in A_{1} \tag{7b}\] \[\alpha^{\xi}_{q}-\beta^{\xi}_{iq}+\gamma^{\xi}_{iq}\geq r_{iq} \forall(i,q)\in A_{1} \tag{7c}\] \[-\alpha_{q}^{\xi}-\beta_{\hat{q}}^{\xi}+\gamma_{\hat{q}j}^{\xi}- \sum_{i:i\in\delta-(\hat{q})}\lambda_{i\hat{q}j}^{\xi}+\sum_{i:i\in\delta-(\hat{ q})}\mu_{i\hat{q}j}^{\xi}+\phi_{\hat{q}j}^{\xi}\geq r_{\hat{q}j} \forall(\hat{q},j)\in A_{2} \tag{7d}\] \[-\alpha_{q}^{\xi}-\beta_{q}^{\xi}+\gamma_{qj}^{\xi}\geq r_{qj} \forall(q,j)\in A_{2}\] (7e) \[-\alpha_{\hat{q}}^{\xi}+\alpha_{\hat{j}}^{\xi}-\beta_{\hat{q}j}^{ \xi}+\gamma_{\hat{q}j}^{\xi}+\sum_{i\in\delta-(\hat{q})}\left(\mu_{i\hat{q}j}^ {\xi}-\lambda_{i\hat{q}j}^{\xi}\right)+\sum_{i\in\delta+(\hat{j})}\left( \lambda_{\hat{q}ji}^{\xi}-\mu_{\hat{q}ji}^{\xi}\right)+\sigma_{\hat{q}j}^{\xi }+\phi_{\hat{q}j}^{\xi}\geq r_{\hat{q}j} \forall(\hat{q},\hat{j})\in A_{3}\] (7f) \[-\alpha_{\hat{q}}^{\xi}+\alpha_{\hat{j}}^{\xi}-\beta_{\hat{q}j}^{ \xi}+\gamma_{\hat{q}j}^{\xi}+\sum_{i\in\delta-(\hat{q})}\left(\mu_{i\hat{q}j}^ {\xi}-\lambda_{i\hat{q}j}^{\xi}\right)+\phi_{\hat{q}j}^{\xi}\geq r_{\hat{q}j} \forall(\hat{q},j)\in A_{3}\] (7g) \[-\alpha_{q}^{\xi}+\alpha_{\hat{j}}^{\xi}-\beta_{qj}^{\xi}+\gamma_{ \hat{q}j}^{\xi}+\sum_{i\in\delta+(\hat{q})}\left(\lambda_{qji}^{\xi}-\mu_{qji}^ {\xi}\right)+\sigma_{qj}^{\xi}\geq r_{\hat{q}j} \forall(q,\hat{j})\in A_{3}\] (7h) \[-\alpha_{q}^{\xi}+\alpha_{\hat{j}}^{\xi}-\beta_{qj}^{\xi}+\gamma_ {qj}^{\xi}\geq r_{qj} \forall(q,j)\in A_{3}\] (7i) \[-\beta_{iq}^{\xi}+\gamma_{iq}^{\xi}\geq r_{iq} \forall(i,q)\in A_{4}\] (7j) \[\alpha_{q}^{\xi}\in\mathbb{R} \forall q\in V^{\prime}\] (7k) \[\beta_{ij}^{\xi},\ \gamma_{ij}^{\xi},\ \sigma_{ij}^{\xi},\ \phi_{ij}^{\xi},\ \lambda_{iqj}^{\xi},\ \mu_{iqj}^{\xi}\geq 0 \forall i,q,j\in V. \tag{7l}\] If the above problem has an optimal solution \(\hat{\mathbf{\theta}}^{\xi}\) for all \(\xi\in\Xi\), the output of the subproblems will be an optimality cut of the form \(\sum_{\xi\in\Xi}\Pr^{\xi}f(\mathbf{y};\hat{\mathbf{\theta}}^{\xi})\geq z\). If the above problem is unbounded along a ray \(\hat{\mathbf{\theta}}^{\xi}\) for a \(\xi\in\Xi\), the output of the subproblem will be a feasibility cut of the form \(f(\mathbf{y};\hat{\mathbf{\theta}}^{\xi})\geq 0\). Note that replacing variables \(\mathbf{y}\) in the above constraints with \(\mathbf{w}\) through the mapping (5) results in separable nonlinear constraints. Nevertheless, since these constraints will be used to refine the master DD, their incorporation is simple due to structural flexibility of DDs in modeling such constraints; we refer the reader to Davarnia and Van Hoeve (2020) for a detailed account for modeling INLPs with DDs. ## 4 Computational Experiments In this section, we solve SGUFP as a core model for the unit train scheduling problem with demand stochasticity using three different approaches: (i) the standard MIP formulation that is a deterministic equivalent of the two-stage model and contains all variables and constraints of the master problem and \(|\Xi|\) subproblems; (ii) the Benders reformulation presented in Section 3.1 composed of the master problem (1a)-(1d) and \(|\Xi|\) subproblems (2a)-(2h); and (iii) the DD-BD algorithm proposed in the present paper. In the Benders approach, we solve separate subproblems using a fixed vector \(\bar{\mathbf{y}}\) obtained from the master problem. The feasibility cuts generated by subproblems are added directly to the constraint set of the master problem, and the optimality cuts are added as an aggregated cut over all scenarios. We note here that when there is a feasibility cut for any scenario, we add it directly to separate the solution of the current iteration and move on to the next iteration. To obtain a valid inequality that provides a bound for the single \(z\) variable, we need to aggregate valid inequalities over all scenario subproblems as \(z\) is composed of the objective value of all these subproblems. Therefore, we can only produce an optimality cut for the \(z\) variable when we have optimality cuts for all of the subproblems. For the DD-BD approach, we use the following algorithmic choices to build restricted and relaxed DDs. For the restricted DDs, we choose a subset of the \(r\)-\(t\) paths with largest lengths, which are more likely to contain an optimal solution. For the relaxed DDs, we merge nodes that have the largest number of common members in their state values. We refer the reader to Bergman et al. (2016a) for other heuristic approaches that could be used for this purpose. ### Test Instances In our experiments, we consider the structure of the SGUFP network given in Section 3.1. To ensure that the problem is always feasible, we create an artificial node \(s_{0}\) to compensate for any shortage of the supply, and add an arc from the artificial supply \(s_{0}\) to each demand node. We create test instances based on the specification given in Davarnia et al. (2019), which is inspired by realistic models. In particular, we consider a base rail network \(G^{\prime}=(V^{\prime},A^{\prime})\) where \(10\%\) and \(30\%\) of the nodes are supply and demand nodes, respectively. We assume that \(50\%\) of the nodes must satisfy the NSNM requirement. We then create a network \(G=(V,A)\) by augmenting supply/demand and artificial nodes as described above with the following settings. The integer supply value at supply nodes is randomly selected from the interval \([100,600]\). The capacity of arcs connecting \(s_{0}\) to demand nodes are considered to be unbounded, and the integer capacity value of other arcs is randomly selected from the interval \([100,300]\). For each demand scenario \(\xi\in\Xi\), the integer demand value at demand nodes is randomly chosen from the interval \([100,200]\). The reward of the arcs connecting \(s_{0}\) to the demand nodes are generated from the interval \([-10,-5]\) to represent the cost of lost demands. The reward of the arcs connecting the source to the supply nodes is randomly selected from the interval \([5,10]\), and the reward of the arcs connecting the demand nodes to the sink is fixed to zero since the flow of these arcs is also fixed. The reward of all other arcs is created randomly from the interval \([-2,2]\) where the negative values indicate the cost of sending flows through congested arcs. We consider four categories of rail networks with \(|V^{\prime}|\in\{40,60,80,100\}\). For each category, we create five scenario classes for the number of demand scenarios \(|\Xi|\in\{50,100,150,200,250\}\). For each network category and scenario class, we create five random instances based on the above settings. Test instances are publicly available (Salemi and Davarnia 2022b). ### Numerical Results In this section, we present the numerical results that compare the performance of the DD-BD formulation for the SGUFP instances with that of the MIP formulation, denoted by "MIP", and the standard Benders reformulation, denoted by "BD". All experiments are conducted on a machine running Windows 10, x64 operating system with Intel(r) Core i7 processor (2.60 GHz) and 32 GB RAM. The Gurobi optimization solver (version 9.1.1) is used to solve instances for the MIP and BD models. When solving problems with Gurobi, we turn off presolve and cuts for all methods to have a fair comparison. Tables 1-4 report the running times of each of these formulations for \(|V^{\prime}|\in\{40,60,80,100\}\) and \(|\Xi|\in\{50,100,150,200,250\}\) where the time limit is set to 3600 seconds. The symbol "\(>3600\)" indicates that the problem was not solved within the time limit. As evident in these tables, the DD-BD formulation outperforms the other alternatives. In particular, the gap between the solution time of the DD-BD and the MIP and BD approaches widens as the problem size increases. For example, as reported in Table 1, while the DD-BD approach solves all 25 instances in under 275 seconds, the MIP approach fails to solve 10 of them within 3600 seconds, 80% of which involve 200 or 250 scenarios. This shows a clear superiority of the DD-BD over the MIP method. Further, for most of the instances, the DD-BD approach outperforms the standard BD approach, rendering it as the superior solution method among all three. Figures 5-8 compare the performance of DD-BD, BD, and MIP formulations through box and whisker plots for each network size and under each scenario class. In these figures, for uniformity of illustration, we used 3600 seconds for the running time of instances that fail to solve the problem within that time limit. As the figures show, the minimum, median, and maximum of running times of the DD-BD method are remarkably smaller than those of the both BD and MIP methods in all cases. These results show the potential of the DD-BD framework in solving network problems with challenging combinatorial structures. In Appendix B, we present additional numerical results for the DD-BD approach to assess its ability to solve larger problem sizes. \begin{table} \begin{tabular}{c|l|c c c c c} \multicolumn{1}{c|}{Instance \#} & \multirow{2}{*}{Model} & \multicolumn{5}{c}{Number of scenarios} \\ & & 50 & 100 & 150 & 200 & 250 \\ \hline \multirow{3}{*}{1} & MIP & 75.74 & 512.62 & 2877.19 & \(>3600\) & \(>3600\) \\ & BD & 141.83 & 313.84 & 339.81 & 451.93 & 565.82 \\ & DD-BD & 56.94 & 129.87 & 163.43 & 219.02 & 274.36 \\ \hline \multirow{3}{*}{2} & MIP & 67.59 & 275.07 & 906.10 & 1892.21 & 2235.53 \\ & BD & 63.44 & 121.25 & 141.04 & 230.81 & 235.87 \\ & DD-BD & 42.60 & 82.65 & 128.16 & 164.52 & 208.94 \\ \hline \multirow{3}{*}{3} & MIP & 94.86 & 753.23 & 2453.05 & \(>3600\) & \(>3600\) \\ & BD & 71.14 & 139.20 & 172.86 & 224.33 & 244.91 \\ \cline{1-1} & DD-BD & 53.32 & 93.58 & 113.93 & 178.65 & 217.33 \\ \hline \multirow{3}{*}{4} & MIP & 71.46 & 309.62 & \(>3600\) & \(>3600\) & \(>3600\) \\ & BD & 63.55 & 182.01 & 267.94 & 334.74 & 380.22 \\ \cline{1-1} & DD-BD & 46.61 & 87.81 & 130.19 & 183.23 & 253.72 \\ \hline \multirow{3}{*}{5} & MIP & 380.33 & 406.73 & \(>3600\) & \(>3600\) & \(>3600\) \\ \cline{1-1} & BD & 123.69 & 198.73 & 205.16 & 231.56 & 287.24 \\ \cline{1-1} & DD-BD & 67.04 & 104.78 & 138.46 & 195.69 & 231.74 \\ \hline \end{tabular} \end{table} Table 1: Running times (in seconds) of MIP, BD, and DD-BD for \(|V^{\prime}|=40\). \begin{table} \begin{tabular}{c|l|c c c c c c} \multicolumn{1}{c|}{Instance \#} & \multicolumn{1}{c|}{Model} & \multicolumn{6}{c}{Number of scenarios} \\ & & \multicolumn{1}{c}{50} & \multicolumn{1}{c}{100} & \multicolumn{1}{c}{150} & \multicolumn{1}{c}{200} & \multicolumn{1}{c}{250} \\ \hline \multirow{3}{*}{1} & MIP & 774.18 & \(>3600\) & \(>3600\) & \(>3600\) & \(>3600\) & \(>3600\) \\ & BD & 1282.59 & 1728.71 & 1848.49 & 2307.74 & 3309.93 \\ & DD-BD & 698.36 & 1427.38 & 1731.95 & 2014.96 & 3323.54 \\ \hline \multirow{3}{*}{2} & MIP & 480.97 & \(>3600\) & \(>3600\) & \(>3600\) & \(>3600\) \\ & BD & 781.47 & 1573.23 & 1280.79 & 2672.18 & 2819.61 \\ & DD-BD & 586.89 & 1171.96 & 1848.49 & 2471.49 & 2635.22 \\ \hline \multirow{3}{*}{3} & MIP & 3071.37 & \(>3600\) & \(>3600\) & \(>3600\) & \(>3600\) \\ & BD & 1072.14 & 1322.96 & 2112.50 & 2951.55 & 3412.99 \\ & DD-BD & 485.31 & 703.70 & 1055.36 & 1803.66 & 2269.97 \\ \hline \multirow{3}{*}{4} & MIP & 838.79 & 2585.38 & \(>3600\) & \(>3600\) & \(>3600\) \\ & BD & 1548.93 & 1738.92 & 2580.53 & 2616.19 & 3169.28 \\ \cline{1-1} & DD-BD & 554.89 & 743.64 & 1098.82 & 2052.73 & 3094.23 \\ \hline \multirow{3}{*}{5} & MIP & 714.39 & \(>3600\) & \(>3600\) & \(>3600\) & \(>3600\) \\ & BD & 808.48 & 1013.68 & 1722.01 & 2824.14 & 3282.10 \\ \cline{1-1} & DD-BD & 353.48 & 700.57 & 1680.60 & 2213.81 & 2907.78 \\ \hline \end{tabular} \end{table} Table 4: Running times (in seconds) of MIP, BD, and DD-BD for \(|V^{\prime}|=100\). \begin{table} \begin{tabular}{c|l|c c c c c} \multicolumn{1}{c|}{Instance \#} & \multicolumn{1}{c|}{Model} & \multicolumn{6}{c}{Number of scenarios} \\ & & 50 & 100 & 150 & 200 & 250 \\ \hline \multirow{3}{*}{1} & MIP & 215.82 & 860.21 & \(>3600\) & \(>3600\) & \(>3600\) \\ & BD & 588.51 & 806.61 & 1731.50 & 1860.12 & 2051.52 \\ & DD-BD & 256.12 & 500.52 & 757.68 & 1025.88 & 1278.13 \\ \hline \multirow{3}{*}{2} & MIP & 479.76 & \(>3600\) & \(>3600\) & \(>3600\) & \(>3600\) \\ & BD & 398.29 & 713.01 & 861.65 & 1080.79 & 1709.04 \\ & DD-BD & 184.34 & 379.04 & 724.66 & 1088.21 & 1587.90 \\ \hline \multirow{3}{*}{3} & MIP & 238.79 & 996.22 & \(>3600\) & \(>3600\) & \(>3600\) \\ & BD & 702.18 & 1236.58 & 1650.42 & 1773.63 & 2227.89 \\ & DD-BD & 285.13 & 518.46 & 778.97 & 1046.39 & 1326.22 \\ \hline \multirow{3}{*}{4} & MIP & 404.26 & 2441.64 & 2855.29 & \(>3600\) & \(>3600\) \\ & BD & 572.83 & 1219.37 & 1334.21 & 1745.91 & 2089.80 \\ & DD-BD & 263.78 & 665.30 & 1230.81 & 1277.93 & 1444.02 \\ \hline \multirow{3}{*}{5} & MIP & 778.50 & \(>3600\) & \(>3600\) & \(>3600\) & \(>3600\) \\ & BD & 231.11 & 481.31 & 625.91 & 1310.24 & 1452.27 \\ \cline{1-1} & DD-BD & 187.34 & 376.96 & 564.34 & 1205.54 & 1412.94 \\ \end{tabular} \end{table} Table 3: Running times (in seconds) of MIP, BD, and DD-BD for \(|V^{\prime}|=80\). We conclude this section by noting that, while the focus of this paper has been on the unit train problem with the no-split no-merge requirements, the proposed DD-BD framework can be applied to model network problems that contain additional side constraints on the flow variables, as those constraints can be handled in the subproblems while the DD structure in the master problem is not considered. Figure 5: Comparison of DD-BD, BD, and MIP models when \(|V^{\prime}|=40\) under five scenarios Figure 6: Comparison of DD-BD, BD, and MIP models when \(|V^{\prime}|=60\) under five scenarios remains intact. Examples of such side constraints include the _usage-fee_ limitation (Holzhauser, Krumke, and Thielen 2017b) and the _flow ratio_ requirement (Holzhauser, Krumke, and Thielen 2017a). Applying the DD-BD method to such network models and assessing its effectiveness compared to alternative approaches could be an interesting direction for future research. Figure 8: Comparison of DD-BD, BD, and MIP models when \(|V^{\prime}|=100\) under five scenarios Figure 7: Comparison of DD-BD, BD, and MIP models when \(|V^{\prime}|=80\) under five scenarios ## 5 Conclusion In this paper, we introduce a DD-based framework to solve the SGUFP. This framework uses Benders decomposition to decompose the SGUFP into a master problem composed of the combinatorial NSNM constraints, and a subproblem that solves a continuous network flow model. The master problem is modeled by a DD, which is successively refined with respect to the cuts generated through subproblems. To assess the performance of the proposed method, we apply it to a variant of unit train scheduling problem formulated as a SGUFP, and compare it with the standard MIP and Benders reformulation of the problem. ## Acknowledgments This project is sponsored in part by the Iowa Energy Center, Iowa Economic Development Authority and its utility partners. We thank the anonymous referees and the Associate Editor for their helpful comments that contributed to improving the paper.
2301.10791
Flavor Leptogenesis During Reheating Era
Recently, it has been shown that the presence of a non-instantaneous era of reheating can significantly alter the charged lepton(s) equilibration temperature(s) which plays important role in flavor leptogenesis. In this work, we extend the analysis to a more general situation where RHNs are also produced from the decay of the inflaton. The presence of these RHNs along with the thermally generated ones (above its mass equivalent temperature only) redistributes different components of the energy density of the Universe during this reheating era, thereby affecting the charged lepton equilibration temperature (in addition to the Hubble effect) as well as the final reheating temperature $T_{\rm{RH}}$. Taking both the effects into account, we find that the decay of the lightest RHN in the set-up not only provides a platform to study flavor leptogenesis during reheating, but also an interesting framework of $quasi$-thermal leptogenesis emerges.
Arghyajit Datta, Rishav Roshan, Arunansu Sil
2023-01-25T19:05:41Z
http://arxiv.org/abs/2301.10791v2
# Flavor Leptogenesis During Reheating Era ###### Abstract Recently, it has been shown that the presence of a non-instantaneous era of reheating can significantly alter the charged lepton(s) equilibration temperature(s) which plays important role in flavor leptogenesis. In this work, we extend the analysis to a more general situation where RHNs are also produced from the decay of the inflaton. The presence of these RHNs along with the thermally generated ones (above its mass equivalent temperature only) redistributes different components of the energy density of the Universe during this reheating era, thereby affecting the charged lepton equilibration temperature (in addition to the Hubble effect) as well as the final reheating temperature \(T_{\rm RH}\). Taking both the effects into account, we find that the decay of the lightest RHN in the set-up not only provides a platform to study flavor leptogenesis during reheating, but also an interesting paradigm of \(quasi\)-thermal leptogenesis emerges. ## I Introduction The existence of non-zero neutrino masses [1; 2; 3], and the origin of baryon asymmetry [4] of the Universe (BAU) are two of the major issues that the Standard Model (SM) of particle physics fails to accommodate, indicating the necessity for the physics beyond the SM. While the issue of light neutrino mass can be elegantly handled by the introduction of three heavy SM singlet right-handed neutrinos (RHN) having Yukawa interaction with the SM Higgs and lepton doublets within the so called'seesaw' mechanism [5; 6; 7; 8; 9; 10; 11; 12], the same offers an interesting explanation of the BAU through leptogenesis [13; 14; 15]. Here, a lepton asymmetry is created as a result of the CP-violating out-of-equilibrium decay of the RHNs which then be partially converted into the baryon asymmetry through the \((B+L)\) violating sphaleron interactions of the SM at temperature \(T\gtrsim 100\) GeV. In most widely studied framework of '\(thermal\)' leptogenesis, it is considered that after the Universe enters in the radiation dominated era (the beginning of which is marked by reheating temperature \(T_{\rm RH}\)), the RHNs can be created by thermal scattering more specifically via inverse decays and 2-2 scattering mediated by SM fields. Subsequently, considering a hierarchical RHN masses, the lightest among the three RHNs (say \(N_{1}\) with mass \(M_{1}\)) starts to contribute to lepton asymmetry production via its out-of-equilibrium decay to the SM lepton (\(l_{L_{\alpha}}\)) and Higgs (\(H\)) doublets around a temperature \(T\lesssim M_{1}\). The reheating temperature obviously satisfies the condition \(T_{\rm RH}>M_{1}\). The abundance of the RHNs (\(Y_{N_{1}}\)) and the produced lepton asymmetry in a specific flavor direction (\(\Delta_{L_{\alpha}}\)) in this radiation dominated epoch are connected by the Boltzmann equations (BE) where apart from production, one needs to incorporate all the lepton-number violating processes that can potentially erase such asymmetry. Provided that such decay happens at sufficiently high temperature (at or above \(5\times 10^{11}\) GeV), where all the right-handed charged leptons of different flavors are in out of equilibrium, one can safely use the unflavored approximation [13; 14; 15]. This is because the rate of charged lepton Yukawa interactions remain weaker compared to that of RHN Yukawa interactions. On the other hand, if the leptogenesis happens at a temperature (\(T\)) below \(5\times 10^{11}\) GeV, the right-handed tau leptons equilibrate with the thermal bath and flavor effects [16; 17; 18; 19; 20; 21; 22; 23] are inevitable. Once this tau lepton Yukawa interaction is equilibrated, it tends to destroy the lepton asymmetry carried by the tau leptons generated from the decay of RHNs. For the muon and electron Yukawa interactions, this happens below \(10^{9}\) GeV and \(5\times 10^{4}\) GeV respectively. There exists a lower limit on the mass of the RHNs as \(M_{1}\gtrsim 10^{9}\) GeV (known as Davidson-Ibarra bound [24]) in order to satisfy the correct baryon asymmetry of the Universe via leptogenesis which in turn indicates that reheating temperature should be higher than this value for standard _thermal_ leptogenesis. Although it is feasible to have such a high \(T_{\rm RH}\), there is no such concrete evidence in support of it too. On the contrary, it can be as low as few MeV [25; 26; 27; 28]. In this context, it is interesting to investigate the possibility of having reheating temperature smaller than the mass of the RHNs in view of leptogenesis. While one such possibility is to have _non-thermal_ leptogenesis [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51], a different possibility opens up where a non-instantaneous reheating period (extended from \(T_{\rm Max}\) to \(T_{\rm RH}\)) can be brought into the picture. It is known that reheating can actually be a non-instantaneous process [52; 53; 54; 55] where a maximum temperature \(T_{\rm Max}\) after inflation can be realized followed by the onset of radiation dominated era indicated by reheating temperature \(T_{\rm RH}\) with \(T_{\rm Max}>T_{\rm RH}\). In a recent study [55], we have shown that leptogenesis remains a viable option even if the decaying RHN mass (lightest and hence responsible for lepton asymmetry generation) satisfies \(T_{\rm Max}>M_{1}>T_{\rm RH}\). Additionally in [55], we have an important observation (for the first time) that a prolonged reheating pe riod modifies the equilibration temperature (ET) of individual charged lepton Yukawa interactions and hence the study of leptogenesis, in particular the flavor leptogenesis during this extended reheating period becomes very rich. Here, the effective coupling of the inflaton with the SM fermion fields provides the sole contribution to the radiation component of the Universe defining the temperature of the thermal bath. This, in turn, plays a non-trivial role in controlling the expansion rate of the Universe during the course of reheating. A not-so-small choice of the effective coupling leads to a faster expansion of the Universe in the reheating period which carries the potential to delaying the era of equilibration of different charged lepton Yukawa interactions. Once the RHNs are thermally produced from the bath, and subsequently decay out of equilibrium (beyond \(T\lesssim M_{1}\)) in the period of this extended reheating, it is found that the delayed equilibration of the charged lepton Yukawa interaction can significantly shift the flavor regimes of the leptogenesis. In this work, we focus on a more general picture allowing an additional interaction between the inflaton and the RHN fields on top of the existing effective coupling between the inflaton and SM fermion fields, designated by \(y_{\phi ff}\) coupling, as in [55]. Hence, apart from thermal scattering, RHNs (\(N_{1}\) here for simplicity) may also be produced directly from the decay of the inflaton field. This new source of RHN production can in principle alters, depending on the relative strength of the RHN-inflaton interaction (indicated by \(y_{\phi NN}\)), the individual components (such as for inflaton, RHNs and radiation) of energy density of the Universe during the extended reheating period. Then, a subsequent effect on the Hubble expansion not only affects the final reheating temperature but also modifies the ET of individual charged lepton flavor with respect to the one observed in [55]. Furthermore, for the lightest RHN mass lies in between \(T_{\rm Max}\) and \(T_{\rm RH}\), we encounter an interesting situation for leptogenesis here. We have already found that a modified \(thermal\) (flavored) leptogenesis results in this extended period of reheating due to the absence of radiation domination as well as shift in the charged lepton ET. This observation can be visualized as a limiting case of vanishing inflaton-RHN coupling of the present proposal. In this general set-up, once this inflaton-RHN coupling \(y_{\phi NN}\) is switched on, injection of the non-thermal RHNs into the system on top of the thermally generated ones is expected to enhance the outcome of leptogenesis. However, with not-so-large inflaton-RHN coupling, the result remains essentially close to \(thermal\) leptogenesis in extended reheating scenario. However, once \(y_{\phi NN}\) becomes significant enough, say comparable to or larger than \(y_{\phi ff}\), the RHNs produced from inflaton decay along with thermally generated ones could stay out-of-equilibrium during this reheating period itself. As a result, these RHNs may effectively decay above the temperature \(T\sim M_{1}\) and start to produce lepton asymmetry at \(T>M_{1}\). Therefore, for such moderate range of \(y_{\phi NN}\) coupling, we realize a situation which is intermediate between purely \(thermal\) and \(non\)-\(thermal\) leptogenesis scenario, which we name as '\(quasi\)'-thermal leptogenesis. It is found that for sufficiently large \(y_{\phi NN}\), the presence of accumulated number density of RHNs helps relaxing the lower limit of the RHN mass \(M_{1}\) to some extent for which adequate lepton asymmetry can be produced. The paper is organized as follows. Below in section II, we provide a brief overview of the standard \(thermal\) leptogenesis and importance of flavors while in section III, we discuss our general set-up of \(quasi\)-thermal leptogenesis. We devote section IV to discuss the outcome of the proposal. Finally in section V, we conclude. ## II Thermal leptogenesis and effect of flavor A mere extension of the SM by three right handed singlet neutrinos (\(N_{i=1,2,3}\)) as suggested by the type-I seesaw forms the basic set-up to discuss leptogenesis, the Lagrangian of which (in the charged lepton diagonal basis) is given by \[-\mathcal{L}_{T_{I}}=\overline{\ell}_{L_{\alpha}}(Y_{\nu})_{\alpha i}\tilde{ H}N_{i}+\frac{1}{2}\overline{N_{i}^{c}}(M_{R})_{ii}N_{i}+h.c., \tag{1}\] where the lepton number violating Majorana mass term for RHNs, \(M_{R}\), is considered to be diagonal, \(M_{R}={\rm diag}(M_{1},M_{2},M_{3})\) for simplicity. The neutrino Yukawa coupling \(Y_{\nu}\) matrix in general contains CP violating phases. A Dirac mass term \(m_{D}=Y_{\nu}v/\sqrt{2}\) is generated after spontaneous breaking of electroweak symmetry with \(v=246\) GeV. In the see-saw limit \(m_{D}\ll M_{R}\), a light neutrino mass matrix \[m_{\nu}=-m_{D}M_{R}^{-1}m_{D}^{T}, \tag{2}\] results along with three heavy neutrinos. A further diagonalization of \(m_{\nu}\) by the PMNS matrix \(U\)[56; 57; 58] via \(U^{\dagger}m_{\nu}U^{*}\)= diag \((m_{1},m_{2},m_{3})\), leads to three light neutrino masses \(m_{i=1,2,3}\). The same seesaw Lagrangian also provides a natural explanation of the matter anti-matter asymmetry of the Universe via leptogenesis [13; 14; 15] where the CP-violating decays of heavy RHNs into SM lepton and Higgs doublets, \(N_{i}\to\ell_{L_{\alpha}}+H\), are instrumental. In the early Universe, provided the temperature (after inflation) was high enough, these heavy RHNs can be produced from thermal bath via inverse decay (mediated by the same neutrino Yukawa interaction) and attain thermal equilibrium. Thereafter, as the temperature drops below the individual mass of a RHN (\(i.e.\)\(T<M_{i}\)), the decay of the respective heavy field \(N_{i}\) becomes relevant for generating a CP-asymmetry along a particular lepton flavor (\(\alpha\)) direction parametrized by, \[\varepsilon^{(i)}_{\ell_{\alpha}}=\frac{\Gamma(N_{i}\to\ell_{L_{\alpha}}+H)- \Gamma(N_{i}\to\overline{\ell}_{L_{\alpha}}+\overline{H})}{\sum_{\alpha} \Gamma(N_{i}\to\ell_{L_{\alpha}}+H)+\Gamma(N_{i}\to\overline{\ell}_{L_{ \alpha}}+\overline{H})}, \tag{3}\] where the denominator corresponds to the total decay rate of \(N_{i}\) at tree level, given by: \[\Gamma_{N_{i}} =\sum_{\alpha}\Gamma(N_{i}\rightarrow\ell_{L_{\alpha}}+H)+\Gamma(N_ {i}\rightarrow\overline{\ell}_{L_{\alpha}}+\overline{H})\] \[=\frac{(Y_{\nu}^{\dagger}Y_{\nu})_{ii}}{8\pi}M_{i}. \tag{4}\] The out-of-equilibrium condition necessary for lepton asymmetry production is satisfied when the decay rate of \(N_{i}\) remains smaller than the expansion rate of the Universe. ### Unflavored estimate A non-zero \(\varepsilon_{\ell_{\alpha}}^{(i)}\) would follow due to the interference between the tree level and loop-level decay amplitudes. With a hierarchical RHN masses as \(M_{1}\ll M_{2}\ll M_{3}\), the CP-asymmetries generated by \(N_{2}\) and \(N_{3}\) are however expected to be washed out by the lepton number violating interactions of \(N_{1}\), leaving \(\varepsilon_{\ell_{\alpha}}^{(1)}\) as the only relevant one for leptogenesis. Conventionally the total CP asymmetry is calculated after having the flavor sum (henceforth we omit the generation index [1]) and is given by \[\varepsilon_{\ell}=\sum_{\alpha}\varepsilon_{\ell_{\alpha}}=\frac{1}{8\pi(Y_ {\nu}^{\dagger}Y_{\nu})_{11}}\sum_{\alpha}\sum_{j\neq 1}\left\{\text{Im}\left[( \text{Y}_{\nu}^{*})_{\alpha 1}(\text{Y}_{\nu})_{\alpha \text{j}}(\text{Y}_{\nu}^{\dagger}\text{Y}_{\nu})_{1\text{j}}\right]\mathbf{ F}\left(\frac{\text{M}_{2}^{2}}{\text{M}_{1}^{2}}\right)+\text{Im}\left[(\text{Y}_{ \nu}^{*})_{\alpha 1}(\text{Y}_{\nu})_{\alpha\text{j}}(\text{Y}_{\nu}^{\dagger}\text{Y} _{\nu})_{\text{j}1}\right]\mathbf{G}\left(\frac{\text{M}_{2}^{2}}{\text{M}_{1 }^{2}}\right)\right\}, \tag{5}\] where, \(\mathbf{F}(x)=\sqrt{x}\left[1+\frac{1}{1-x}+(1+x)\ln\left(\frac{x}{1+x}\right)\right]\) and \(\mathbf{G}(x)=1/(1-x)\) are the loop functions originated from both vertex and self-energy corrections to the decay of \(N_{1}\). It is shown in [24] that a maximum CP-asymmetry, generated from the \(N_{1}\) decay, can be extracted from Eq. (5) as, \[|\varepsilon_{\ell}|\lesssim\frac{3}{8\pi}\frac{M_{1}}{v^{2}}(m_{3}-m_{1})= \varepsilon_{\ell}^{\text{Max}}. \tag{6}\] Such an asymmetry however be washed out partially due to the lepton number violating interactions so as to write down the final \(B-L\) asymmetry including the efficiency factor \(\kappa_{f}\) by \[Y_{B-L}\equiv n_{B-L}/s=-\frac{1}{7.04}\frac{3}{4}\varepsilon_{\ell}\kappa_{f}. \tag{7}\] where, \(Y_{x}=n_{x}/s\) represents the number density to entropy density ratio for \(x\)-species. Hence, in case the \(N_{1}\) responsible for the asymmetry generation was in thermal equilibrium at the early Universe, the limit on CP-asymmetry as in Eq. (6) leads to an estimate of maximal baryon asymmetry \(Y_{B}^{\text{Max}}\). Requirement of \(Y_{B}^{\text{Max}}\geq Y_{B}^{\text{exp}}=8.718\times 10^{-11}\)[59; 4] eventually provides a lower limit on lightest RHN mass [60; 24] as: \[M_{1} \gtrsim\frac{7.04}{0.96\times 10^{-2}}\frac{8\pi v^{2}}{3m_{3}} \frac{Y_{B}^{\text{exp}}}{\kappa_{f}}\] \[\approx\frac{6\times 10^{8}\text{ GeV}}{\kappa_{f}}\ \left(\frac{Y_{B}^{\text{exp}}}{8.718\times 10^{-11}} \right)\ \left(\frac{0.05\text{ eV}}{m_{3}}\right), \tag{8}\] where light neutrino masses are considered to be hierarchical and consistent with neutrino oscillation data [57]. An accurate estimate for the final asymmetry or in other words the efficiency factor \(\kappa_{f}\) would however follow if one solves coupled BEs that correlate the abundance of the lightest RHN with the lepton number asymmetry produced. Note that in this case, the reheating temperature (considering instantaneous reheating) \(T_{\text{RH}}\) should be more than \(M_{1}\) indicating \(T_{\text{RH}}\gtrsim 10^{10}\) GeV or so. ### Flavored Regime and Charged Lepton Equilibration In evaluating the final lepton asymmetry above, a flavor sum is performed. However, it has been found that the situation can actually be more complicated as soon as charged lepton Yukawa interactions (\(Y_{\alpha}\bar{\ell}_{L_{\alpha}}He_{R_{\alpha}}\) with \(e_{R\alpha}\) as representative of right handed electron/muon/tau) become faster compared to \(N_{1}-\ell_{L}H\) interaction [20]. In that case, during the out-of-equilibrium decay process of the \(N_{1}\), the charged lepton Yukawa interaction for one or more flavor(s) may enter equilibrium leading to the breaking of quantum coherence of the lepton doublet state along different flavor directions produced from the \(N_{1}\) decay [16; 18; 19; 21; 22]. As a result, lepton asymmetry along individual flavors may start to become distinguishable. In this case, one needs to look for the evolution of the individual flavor lepton asymmetries instead of total lepton number asymmetry by constructing BEs for lepton asymmetries along individual flavors. Evaluation of Equilibration Temperature In order to check whether the charged lepton Yukawa interaction (more specifically \(H\leftrightarrow\ell_{L_{\alpha}}e_{R_{\alpha}}\)[61]) of a particular flavor \(\alpha\) is fast enough at a given temperature to be in thermal equilibrium, the associated interaction rate (\(\Gamma_{\alpha}\)) has to be more than the expansion rate of the Universe [18]. Since, this charged lepton equilibrium temperature (ET) plays a decisive role in determining the flavor effect, we elaborate on it in case of _thermal_ leptogenesis here. In the standard scenario, assuming all these phenomena are occurring in a radiation dominated Universe, the thermally averaged decay rates of the SM Higgs doublet decaying to left-handed lepton doublets and right-handed charged lepton singlets can be estimated as [61; 62; 63]: \[\langle\Gamma_{\alpha}\rangle=\int\frac{d^{3}p}{(2\pi)^{3}2E_{P}}\int\frac{d^{ 3}k}{(2\pi)^{3}2E_{k}}\int\frac{d^{3}k^{\prime}}{(2\pi)^{3}2E_{k^{\prime}}}(2 \pi)^{4}\delta^{(4)}(p-k-k^{\prime})|\mathcal{M}|^{2}\frac{f_{p}}{n_{p}}, \tag{9}\] where \(p\) is the 4-momentum of the Higgs while \(k\) and \(k^{\prime}\) are the 4-momentum of lepton doublet and singlet right handed charged lepton respectively. The thermal distribution of Higgs \(f_{p}\) and number density \(n_{p}\) are taken as: \[f_{p}=\frac{1}{e^{E_{p}/T}-1}\,,\ n_{p}=\frac{\zeta(3)T^{3}}{\pi^{2}}\,. \tag{10}\] The matrix amplitude squared \(|\mathcal{M}|^{2}\) for such decay would be (assuming final state particles have negligible mass): \[|\mathcal{M}|^{2}=2Y_{\alpha}^{2}k.k^{\prime}=Y_{\alpha}^{2}M_{H}^{2}\,,\quad \alpha=e,\mu,\tau. \tag{11}\] Evaluation of the integrals in Eq. (9) for \(T\gg M_{H}\) yields [61]: \[\langle\Gamma_{\alpha}\rangle=\frac{y_{\alpha}^{2}\pi}{192\zeta(3)T}M_{H}^{2}. \tag{12}\] Considering the thermal mass of the Higgs to be[64; 65]: \[M_{H}=M_{H}(T)\simeq\frac{T}{4}\sqrt{3g^{2}+g^{\prime^{2}}+4y_{t}^{2}+8\lambda}, \tag{13}\] where \(g,g^{\prime}\) are SM gauge coupling constants and \(y_{t}\), \(\lambda\) are the top Yukawa and the Higgs quartic couplings respectively, the thermally averaged decay rate of the decay processes will become[66]: \[\langle\Gamma_{\alpha}\rangle\simeq 5\times 10^{-3}y_{\alpha}^{2}T. \tag{14}\] Comparison between the obtained decay rates with Hubble constant (\(\mathcal{H}=1.66g_{*}^{1/2}T^{2}/M_{pl}\) in radiation dominated Universe, where \(M_{pl}\) is the Plack mass.) will lead to the ET of right-handed charged lepton singlets. Figure 1 shows the variation of \(\langle\Gamma_{\alpha}\rangle/\mathcal{H}\) with respect to temperature \(T\) for different lepton flavors: \(\alpha=e,\mu,\tau\). Note that \(\tau\) Yukawa interaction becomes fast enough around \(T=T_{0(\tau)}^{*}\simeq 5\times 10^{11}\) GeV (evident from the intersection of \(\langle\Gamma_{\tau}\rangle/\mathcal{H}\) line in blue with 1) while muon Yukawa interaction comes to equilibrium at \(T_{0(\mu)}^{*}\simeq 10^{9}\) GeV1 as seen from \(\langle\Gamma_{\mu}\rangle/\mathcal{H}=1\) point of Fig. 1. Footnote 1: Thermal field theory can be used to provide a more accurate estimate of ETs. For example, see [67] for latest calculation on ET of \(e_{R}\). #### ii.2.2 Effects of Flavor and Boltzmann Equations As a result, if a \(N_{1}\) decays while staying in out-of-equilibrium in a temperature range \(10^{9}\lesssim T\lesssim 5\times 10^{11}\) GeV, lepton asymmetry becomes distinguishable along two orthogonal directions denoted by \(\alpha=a\) (specifying a coherent superposition of \(e\) and \(\mu\) lepton flavors) and \(\tau\). Hence contrary to the unflavored case, here we need to study the evolution of \(B/3-L_{\alpha}\equiv\Delta_{\alpha}\) charges with \(\alpha=a\) and \(\tau\). With a further reduction of the temperature below \(T\lesssim 10^{9}\) GeV, the lepton doublets completely loose their quantum coherence. Hence, at this stage, lepton asymmetry becomes distinguishable along all three flavor directions \(e\), \(\mu\) and \(\tau\). As a result, evolution of lepton asymmetry along all three directions becomes relevant to produce final baryon asymmetry. Incorporating the flavor effects into account, the evolution of the abundance of decaying \(N_{1}\)s (\(Y_{N_{1}}\)) and lepton asymmetries along individual flavor direction (\(Y_{\Delta_{\alpha=e,\mu,\tau}}\)) can be represented by the following sets of flavored classical BEs (neglecting \(\Delta L=1,2\) scattering processes and subtracting the on-shell contribution from \(\Delta L=2\) process) [17; 18]: \[sHz\frac{dY_{N_{1}}}{dz} =-\left(\frac{Y_{N_{1}}}{Y_{N_{1}}^{\rm eq}}-1\right)\gamma_{D}\,, \tag{15}\] \[sHz\frac{dY_{\Delta_{\alpha}}}{dz} =-\Bigg{\{}\left(\frac{Y_{N_{1}}}{Y_{N_{1}}^{\rm eq}}-1\right) \varepsilon_{\ell\alpha}\gamma_{D}+K_{\alpha}^{0}\sum_{\beta}\Bigg{[}\frac{1 }{2}(C_{\alpha\beta}^{\ell}+C_{\beta}^{H})\gamma_{D}\Bigg{]}\frac{Y_{\Delta_{ \beta}}}{Y_{\ell}^{\rm eq}}\Bigg{\}}, \tag{16}\] where \(K_{\alpha}^{0}=\frac{(Y_{\nu}^{*})_{01}(Y_{\nu})_{a1}}{(Y_{\nu}^{2}Y_{\nu})_{11}}\) is known as flavor projector [18; 21] and \(C^{\ell},C^{H}\) matrices connect the asymmetries in lepton and Higgs sectors to asymmetries in \(\Delta_{\alpha}\) expressed in terms of \(Y_{\Delta\alpha=e,\mu,\tau}\) or \(Y_{\Delta\alpha=a,\tau}\) depending on the leptogenesis scale [18]. Eq. (16) can be a set of two (three) equations if \(10^{9}<M_{1}<5\times 10^{11}\) GeV (\(M_{1}<10^{9}\) GeV). Here \(\gamma_{D}\) represents the total decay rate density of \(N_{1}\): \[\gamma_{D}=n_{N_{1}}^{\rm eq}\,\langle\Gamma_{N_{1}}\rangle, \qquad\langle\Gamma_{N_{1}}\rangle=\frac{K_{1}(z)}{K_{2}(z)}\frac{(Y_{\nu}^{ \dagger}Y_{\nu})_{11}}{8\pi}M_{1}\,, \tag{17}\] where \(K_{1}(z)\) and \(K_{2}(z)\) are the modified Bessels functions. \(z=M_{1}/T\) is a dimensionless quantity, with respect to which we will look for the evolution of \(Y_{x}\). The equilibrium number density of the \(N_{1}\) can be expressed as: \[n_{N_{1}}^{\rm eq}=\frac{gM_{1}^{3}}{2\pi^{2}z}K_{2}\,(z)\,, \tag{18}\] where \(g\) is the number of degrees of freedom of \(N_{1}\). To study the evolutions numerically, we need to provide inputs for the neutrino Yukawa coupling \(Y_{\nu}\) matrix the structure of which can be extracted using Casas-Ibarra (CI) parametrization [68] as: \[Y_{\nu}=-i\frac{\sqrt{2}}{v}UD_{\sqrt{m}}{\bf R}D_{\sqrt{M}}\,, \tag{19}\] where \(U\) is the Pontecorvo-Maki-Nakagawa-Sakat (PMNS) matrix [58] which connects the flavor basis with mass basis for light neutrinos. \(D_{\sqrt{m}}={\rm diag}(\sqrt{\rm m_{1}},\sqrt{\rm m_{2}},\sqrt{\rm m_{3}})\) is the diagonal matrix containing the square root of light neutrino mass and similarly \(D_{\sqrt{M}}={\rm diag}(\sqrt{\rm M_{1}},\sqrt{\rm M_{2}},\sqrt{\rm M_{3}})\) represents the diagonal matrix for RHN masses. \({\bf R}\) is an orthogonal matrix satisfying \({\bf R}^{\rm T}{\bf R}=1\). Fig. 2 depicts a scenario where evolution of \(\tau\) and \(a\) lepton flavor become essential as the mass of the lightest RHN is chosen to be \(M_{1}=5\times 10^{9}\) GeV. While constructing the \(Y_{\nu}\) for this case, we consider a typical hierarchy among the RHNs as \(M_{3}=100M_{2},\ M_{2}=100M_{1}\) along with we use the best fit values of solar and atmospheric mass-squared differences (considering \(m_{1}=0\)) and mixing angles, and CP-violating phase \(\delta\) to define \(U\) via Eq. (19). For such a set of hierarchical RHNs, \({\bf R}\) can be considered to have the following structure: \[{\bf R}=\begin{pmatrix}0&\cos\theta_{R}&\sin\theta_{R}\\ 0&-\sin\theta_{R}&\cos\theta_{R}\\ 1&0&0\end{pmatrix}, \tag{20}\] Where \(\theta_{R}\) can be, in general, a complex angle, chosen to be \(\theta_{R}=2.83+0.24i\) for this case so as to realize the final asymmetry to be consistent with the correct baryon asymmetry of the Universe. As can be seen from the Fig. 2, starting from a symmetric Universe, lepton asymmetries along \(a\) and \(\tau\) direction (blue dot and red dashed lines) start to grow due to out-of-equilibrium decay of lightest RHN (the corresponding abundance of \(N_{1}\) is indicated by solid blue line) and saturates around \(z\sim 2\). Similar behavior can be seen for the number density of baryon asymmetry (magenta line) as well. Because of the sphaleron processes, produced lepton asymmetries get converted to baryon asymmetry via the relation: \(Y_{B}=28/79\sum_{\alpha}Y_{\Delta_{\alpha}}\). Eventually with large value of \(z\) Figure 2: Evolution of various comoving number densities with respect to \(z\) for \(M_{1}=5\times 10^{9}\) GeV. Horizontal black dashed line indicates the observerd baryon asymmetry \(Y_{B}^{\rm exp}\). this baryon asymmetry saturates to experimentally observed baryon asymmetry: \(Y_{B}^{\rm exp}=(8.718\pm 0.012)\times 10^{-11}\)[4; 59] (dictated by black dashed lines of Fig. 2). The discussion in this section embarks on the importance of flavor aspects in _thermal_ leptogenesis which is now being explored in the context of non-instantaneous reheating period in this work. Before that, we discuss the _non-thermal_ leptogenesis in brief. ## III RHNs produced from inflaton decay We may now turn our attention to a situation where the RHNs are being produced from inflaton (\(\phi\)) decay. In case the inflaton decays solely to \(N_{1}\)s having decay width \(\Gamma_{\phi}\), the radiation component of the Universe arises as a result of further decay of those RHNs (with decay width \(\Gamma_{N_{1}}\)) into SM lepton and Higgs doublets which thermalize rapidly. With \(\Gamma_{\phi}<<\Gamma_{N_{1}}\), the reheating temperature assuming instantaneous reheating2 is governed by the condition: \(\Gamma_{\phi}=\mathcal{H}\) given by: Footnote 2: Here contribution from preheating [69; 70; 71] is not taken into account. \[T_{\rm RH}=\left(\frac{45}{4\pi^{3}g_{*}}\right)^{1/4}\sqrt{\Gamma_{\phi}M_{p} }\,. \tag{21}\] where \(g_{*}\) represents the effective number of relativistic degrees of freedom in the SM. In this case, as \(T_{\rm RH}<M_{1}\), the \(N_{1}\)s would decay immediately after being produced from inflaton. The decay of such non-thermally produced \(N_{1}\) also generates a lepton asymmetry which can be converted to baryon asymmetry (provided \(T_{\rm RH}>100\) GeV), expressed as [29; 33; 42] \[Y_{B}=-\frac{28}{79}\frac{n_{B-L}}{s}=-\frac{28}{79}\varepsilon_{\ell}\frac{n _{N_{1}}}{s}=-\frac{28}{79}\frac{3}{4}\varepsilon_{\ell}\frac{T_{\rm RH}}{m_{ \phi}}\,. \tag{22}\] Here we assume \(\phi\) decays to \(N_{1}\) only. Also, washout by the lepton number violating process turns out to be insignificant. The additional suppression factor, proportional to \(T_{\rm RH}/m_{\phi}\) is related to the ratio of the number density of produced \(N_{1}\) and entropy \(s\) and follows from the equality \(\rho_{\phi}=\rho_{R}\) at \(T=T_{\rm RH}\). Note that, due to the condition \(M_{1}>T_{\rm RH}\), the inverse decay process \(\ell+H\to N_{1}\) can not take place and consequently, the loss of flavor coherence is not expected to occur. However, the situation alters once we consider an extended period of reheating as we plan to discuss next. ## IV Flavor effect during reheating period As previously discussed, a _thermal_ leptogenesis scenario with \(N_{1}\) mass \(M_{1}\lesssim 5\times 10^{11}\) GeV experiences flavor effects since the charged lepton Yukawa interactions start to enter equilibrium below this temperature. An estimate of such ETs for different flavors is provided in Fig. 1. Note that the ETs of these Yukawa interactions (associated to different flavors of right handed charged leptons) are calculated assuming that the Universe is already in a radiation domination era followed by an instantaneous reheating after inflation. Obviously, such consideration leads to \(T_{\rm RH}>M_{1}\) indicating the presence of high reheating temperature. Now, it is also known that the reheating might not be an instantaneous process [52; 53; 54; 25]. On top of that, the reheating temperature can be low enough (though larger than few MeV from BBN limit [25; 26; 27; 28]). The era of reheating begins when after inflation, the inflaton field \(\phi\) starts to decay. Neglecting the possibility via preheating [69; 70; 71], as this field \(\phi\) starts to decay to the lighter SM degrees of freedoms, the thermalization of these light decay products helps the Universe to attain a maximum temperature \(T_{\rm Max}\). Subsequently, the temperature of the Universe falls at a rate much slower than the standard scaling \(T\sim a^{-1}\) (\(a\) is the Friedmann-Robertson-Walker scale factor). This continues till a point (defining \(T_{\rm RH}\)) where the radiation component becomes the dominating one in the Universe. This nontrivial behavior of the temperature as function of scale factor \(a\) results due to the faster expansion rate compared to the standard scenario (quantified by Hubble \(\mathcal{H}\)) during the lifetime of the inflaton. As a result of this modified \(\mathcal{H}\), a change in the behavior of \(\langle\Gamma_{\alpha}\rangle/\mathcal{H}\) is also expected in the period in between \(T_{\rm Max}\) and \(T_{\rm RH}\), compared to the standard scenario represented in section II as estimated in [55] by the present authors. This prolonged reheating was dictated by the size of the effective coupling of the inflaton with SM fields. This would be particularly prominent provided \(T_{\rm RH}\) falls below \(T_{0(\tau)}^{*}\) and \(T_{\rm Max}\) maintains a relation \(T_{\rm Max}>T_{0(e)}^{*}\). In fact, as a result of such a delayed entry (due to the modified \(\mathcal{H}\) for \(T_{\rm Max}>T>T_{\rm RH}\)) of the charged lepton Yukawa interactions into the equilibrium, a shift of the flavor regime(s) of leptogenesis compared to the standard scenario is expected in this case of extended reheating period. If the lepton asymmetry is generated from the out-of-equilibrium decay of the lightest RHN, then depending on the scale of leptogenesis, three possibilities can be realized in presence of this extended reheating: (A) \(M_{1}<T_{\rm RH}\), (B) \(T_{\rm Max}>M_{1}>T_{\rm RH}\) and (C) \(M_{1}>T_{\rm Max}\). Case (A) corresponds to the standard _thermal_ leptogenesis as by the time decay of \(M_{1}\) becomes effective for leptogenesis, Universe is already in radiation dominated era. So, effect of this extended period of reheating does not carry any additional impact here. On the other hand, with case (C), \(N_{1}\) can never be produced thermally. They can only be produced non-thermally from the decay of some heavy particle (provided that coupling exists), like inflaton resembling the case of purely _non-thermal_ leptogenesis as discussed in section III where the effect of flavor does not play any vital role. The case (B) is however the most interesting one. In [55], it was shown that the lightest RHN can be produced from the thermal bath during \(T_{\rm Max}>T\gtrsim M_{1}\) while it decays thereafter (\(T<M_{1}\)) into SM lepton and Higgs doublets, hence generating the lepton asymmetry. Now, during this extended period of reheating, the shift in the ET of right handed charged leptons affects the lepton asymmetry production. For example, we have shown [55] that with an effective coupling of inflaton to SM fermions of order \(10^{-4}\), the ET of \(\tau_{R}\) reduces by almost an order of magnitude compared to the standard case. Such inflaton-SM fermion coupling also sets \(T_{\rm Max}\sim 7\times 10^{11}\) GeV while \(T_{\rm RH}\) becomes \(4\times 10^{10}\) GeV. Consequently, \(N_{1}\) of mass \(M_{1}\simeq 10^{11}\) GeV can be produced thermally during reheating (as \(T_{\rm Max}\sim 7\times 10^{11}\) GeV) while it decays around \(T\lesssim M_{1}\). The shift in the ET renders the corresponding leptogenesis as an unflavored case which otherwise falls in the ballpark of flavored leptogenesis (two flavor regime). Motivated by the above result, in this work, we further consider an intriguing extension (in the context of case (B) itself) where in addition to the inflaton-SM fermion effective coupling (\(y_{\phi ff}\)) there exists a direct coupling (\(y_{\phi NN}\)) between the inflaton and \(N_{1}\)s. Introduction of such a coupling not only induces \(N_{1}\)s in the system from the decay of the inflaton (in addition to the thermally generated \(N_{1}\)), but also opens up the possibility of modifying the Hubble further, hence affecting \(T_{\rm RH}\) as well as the ET depending on its relative coupling strength compared to inflaton-SM fermion coupling. Hence, with a nonzero branching of the inflaton to \(N_{1}\) in addition to the inflaton-SM fermion one, we expect to have \(N_{1}\) production from inflaton decay as well as thermal production of it via inverse decay for a temperature range \(T_{\rm Max}>T\gtrsim M_{1}\). These \(N_{1}\) however find themselves in out of equilibrium as the Hubble \(\mathcal{H}\) at this temperature regime remains large enough (\(\phi\) dominates). Therefore, \(N_{1}\) would decay and may contribute to lepton asymmetry generation even at temperature above \(M_{1}\) unless it has been washed out by inverse decay. In case \(\rho_{N_{1}}\) dominates over \(\rho_{R}\), the washout by inverse decay turns out to be weak so as not to erase the asymmetry. This particular era of leptogenesis turns out to be somewhat different from a purely thermal or non-thermal one since the lepton asymmetry here is generated from the decay of both the thermally produced \(N_{1}\)s and those produced from inflaton decays in this regime. We call it as '_quasi_'-thermal leptogenesis. Additionally for temperature below \(T\sim M_{1}\), leptogenesis proceeds in the usual way. However, with a significantly dominant coupling of \(y_{\phi NN}\) over \(y_{\phi ff}\), \(N_{1}\)s can even be produced beyond \(T=M_{1}\) point. In this case (\(T<M_{1}\)), such non-thermally produced \(N_{1}\) would instantaneously decay and contribute to lepton asymmetry production similar to the usual _non-thermal_ leptogenesis scenario. Following the above discussion, we now construct the relevant Lagrangian (apart from the SM one) as given by, \[-\mathcal{L}=y_{\phi ff}\phi\overline{f}f+y_{\phi NN}\phi\overline{N_{1}}N_{1 }+V(\phi), \tag{23}\] in addition to the Type-I seesaw Lagrangian of Eq. (1). Here, \(y_{\phi ff}\) is only an effective coupling and \(f(\overline{f})\) are the SM fermions. For simplicity, we only keep the coupling of inflaton with the lightest \(N_{1}\). Here, \(V(\phi)\) corresponds to the scalar potential of inflaton \(\phi\) responsible for realizing inflation. We have taken power-law form for the potential of the inflaton about the minima [53]: \[V(\phi)=\lambda\frac{|\phi|^{n}}{M_{P}^{n-4}}\,, \tag{24}\] where \(M_{P}\) is the reduced Planck mass. The magnitude of the coupling \(\lambda\) can be estimated from the CMB observables such as spectral index and tensor-to-scalar ratio and depends on the order of the polynomial \(n\). Origin of such choice of potential can be traced back to T-attractor models in no-scale supergravity [72]. In such setups, the effective inflaton mass \(m_{\phi}\) is a function of inflaton field. In the adiabatic approximation, it can be written as: \[m_{\phi}^{2} \equiv \partial_{\phi}^{2}V(\phi)=\ \lambda\ n(n-1)\phi^{n-2}M_{P}^{4-n} \tag{25}\] \[= n(n-1)M_{P}^{\frac{2(4-n)}{2}}\lambda^{\frac{2}{n}}\rho_{\phi} ^{\frac{n-2}{n}}\,.\] where \(\rho_{\phi}\) is the energy density of the inflaton field \(\phi\). After the end of the inflation, the inflaton starts to perform damped oscillations about the minima of the potential and eventually decays to the SM fermion-antifermion pair as well as to \(N_{1}\) following Eq. (23). Here we ignore any potential contribution that may come from preheating [32; 71]. Consequently, the energy density of the inflaton field \(\rho_{\phi}\) satisfies the equation: \[\frac{d\rho_{\phi}}{dt}+3\left(\frac{2n}{n+2}\right)\mathcal{H}\rho_{\phi}=-( \Gamma_{\phi ff}+\Gamma_{\phi NN})\rho_{\phi}, \tag{26}\] where \(\Gamma_{\phi ff}\) and \(\Gamma_{\phi NN}\) are the decay widths of inflaton to SM fermions and \(N_{1}\) respectively and expressed as \[\Gamma_{\phi ff}=\frac{y_{\phi ff}^{2}}{8\pi}m_{\phi}\,,\quad\Gamma_{\phi NN}= \frac{y_{\phi NN}^{2}}{8\pi}m_{\phi}\,. \tag{27}\] The term proportional to \(\mathcal{H}\) indicates the dilution of energy density due to the expansion of the Universe while the term on the right hand side of the BE represents the dilution (hence comes with a negative sign) of the energy density of the \(\phi\) as a result of its decay to \(N_{1}\) and SM fermion/anti-fermions. The produced fermion-antifermion pairs would interact quickly among themselves to produce other SM particles and rapidly thermalizes producing the radiation energy density component \(\rho_{R}\). At this stage, we can define the temperature of the Universe via \[T=\left[\frac{30\rho_{R}}{\pi^{2}g_{*}}\right]^{1/4}. \tag{28}\] On the other hand, the \(N_{1}\)s produced from the inflaton decay further decays to the SM particles which will eventually contribute to \(\rho_{R}\) too. Additionally, as per our consideration in case (B), the thermal bath can also produce back \(N_{1}\) particularly for the temperature of the Universe \(T_{\rm Max}>T\gtrsim M_{1}\). The BEs for \(\rho_{N_{1}}\) and \(\rho_{R}\) can therefore be written as, \[\frac{d\rho_{N_{1}}}{dt}+3\mathcal{H}\rho_{N_{1}} =-(\rho_{N_{1}}-\rho_{N_{1}}^{eq})\langle\Gamma_{N_{1}}\rangle+ \Gamma_{\phi NN}\rho_{\phi}\,, \tag{29}\] \[\frac{d\rho_{R}}{dt}+4\mathcal{H}\rho_{R} =(\rho_{N_{1}}-\rho_{N_{1}}^{eq})\langle\Gamma_{N_{1}}\rangle+ \Gamma_{\phi ff}\rho_{\phi}. \tag{30}\] In all the BEs above, \(\mathcal{H}\) represents the Hubble expansion rate to be written as \[\mathcal{H}^{2}=\frac{\rho_{\phi}+\rho_{N_{1}}+\rho_{R}}{3M_{P}^{ 2}}, \tag{31}\] since in this epoch \(T_{\rm Max}>T>T_{\rm RH}\), the energy density of the Universe comprises of the components \(\rho_{\phi},\rho_{N_{1}}\) and \(\rho_{R}\). The \(\rho_{N_{1}}^{eq}\) is the equilibrium energy density of \(N_{1}\) as given by \[\rho_{N_{1}}^{\rm eq}=\frac{M_{1}^{4}\left[\frac{3}{z^{2}}K_{2}( z)+\frac{1}{z}K_{1}(z)\right]}{\pi^{2}}, \tag{32}\] where \(K_{1},K_{2}\), and \(z\) have already been defined while explaining Eq. (17). The presence of this term in the above BEs is related to the existence of the inverse decay from radiation bath to produce \(N_{1}\). Eq.(26), (29)-(30) therefore together represent the most general set of equations to study the scenario under consideration. After discussing the \(N_{1}\) production and the related BEs to study the respective components of the energy densities of the Universe, we now turn our attention to construct the BEs relevant for leptogenesis. As discussed earlier, being Majorana particle, the decay of the lightest RHN \(N_{1}\) is a lepton number violating one and can produce CP asymmetry, which will eventually generate lepton asymmetry of the Universe. In order to take care effects of charged lepton Yukawa equilibration of different flavors, the following classical flavored BE can be constructed: \[\frac{dn_{\Delta_{\alpha}}}{dt}+3\mathcal{H}n_{\Delta_{\alpha}} = -\langle\Gamma_{N_{1}}\rangle\Big{[}\frac{\varepsilon_{\ell_{ \alpha}}}{M_{1}}(\rho_{N_{1}}-\rho_{N_{1}}^{\rm eq})+\frac{1}{2}K_{\alpha}^{0}\] \[\qquad\times\sum_{\beta}(C_{\alpha\beta}^{\ell}+C_{\beta}^{H}) \frac{n_{\rm eq}^{\rm eq}}{n_{\ell}^{\rm eq}}n_{\Delta_{\beta}}\Big{]}.\] The equation remains identical to the one of Eq. (16) except the fact that \(\mathcal{H}\) is now comprised of all the contributions from inflaton, \(N_{1}\), and radiation energy densities in line with Eq. (31). This will certainly influence the lepton asymmetry production differently compared to the scenario discussed in Section II. The first term (within the first parenthesis) on the right hand side of Eq. (III) represents the production of lepton asymmetry from the decay of lightest RHN \(N_{1}\) while the remaining terms denote the washout of the produced asymmetry along individual lepton directions due to the inverse decay of the \(N_{1}\). Apart from the flavored setup, a situation may arise where flavor effects are not that important in discussing leptogenesis. In that case, an unflavored scenario exhibits where the evolution of the \(B-L\) asymmetry is governed by the single BE given by: \[\frac{dn_{B-L}}{dt}+3\mathcal{H}n_{B-L}=-\langle\Gamma_{N_{1}} \rangle\left[\frac{\varepsilon_{\ell}}{M_{1}}(\rho_{N_{1}}-\rho_{N_{1}}^{\rm eq })+\frac{n_{N_{1}}^{\rm eq}}{2n_{\ell}^{\rm eq}}n_{B-L}\right]. \tag{34}\] Solving Eq. (26), (29), (30) and (III) or (III) simultaneously, will lead to the evolution of energy density of relevant elements of the Universe and produced lepton asymmetry from the time of the end of inflaton till today. However, while solving these BEs, it is convenient to use dimensionless variables [42; 25] for which we use the following transformations: \[E_{\phi}=\rho_{\phi}a^{3}\,,\quad E_{N_{1}}=\rho_{N_{1}}a^{3}\,, \quad R=\rho_{R}a^{4}\,,\] \[N_{B-L}=n_{B-L}a^{3}\,,\quad N_{\Delta_{\alpha}}=n_{\Delta_{ \alpha}}a^{3} \tag{35}\] Moreover, it is convenient to write the BEs as functions of the scale factor (\(a\)) rather than time (\(t\)). More precisely, we use the ratio of the scale factor to its value at the end of inflation, \[A=\frac{a}{a_{\rm end}}. \tag{36}\] We consider the initial value \(a_{\rm end}=1\) without loss of any generality. Using the newly introduced dimensionless variables, BEs in Eq. (26), (29), (30), (III) and (III) will look like: \[\frac{dE_{\phi}}{dA}=3\left(\frac{2-n}{n+2}\right)\frac{E_{\phi} }{A}-\frac{(\Gamma_{\phi ff}+\Gamma_{\phi NN})E_{\phi}}{A\mathcal{H}}\,, \tag{37}\] \[\frac{dR}{dA}=\frac{\langle\Gamma_{N_{1}}\rangle a_{\rm end}}{ \mathcal{H}}(E_{N_{1}}-E_{N_{1}}^{\rm eq})+\frac{\Gamma_{\phi ff}E_{\phi}}{ \mathcal{H}}\,,\] (38) \[\frac{dE_{N_{1}}}{dA}=\frac{\Gamma_{\phi NN}E_{\phi}}{A\mathcal{H }}-\frac{\langle\Gamma_{N_{1}}\rangle}{A\mathcal{H}}(E_{N_{1}}-E_{N_{1}}^{\rm eq })\,,\] (39) \[\frac{dN_{\Delta_{\alpha}}}{dA}=-\frac{\langle\Gamma_{N_{1}} \rangle}{A\mathcal{H}}\Bigg{[}\frac{\varepsilon_{\ell_{\alpha}}}{M_{1}}(E_{N_{ 1}}-E_{N_{1}}^{\rm eq})+\frac{1}{2}K_{\alpha}^{0}\] \[\qquad\qquad\qquad\qquad\times\sum_{\beta}(C_{\alpha\beta}^{\ell}+C _{\beta}^{H})\frac{Y_{N_{1}}^{\rm eq}}{Y_{\ell}^{\rm eq}}N_{\Delta_{\beta}} \Bigg{]}\,,\] (40) \[\frac{dN_{B-L}}{dA}=-\frac{\langle\Gamma_{N_{1}}\rangle}{A \mathcal{H}}\left[\frac{\varepsilon_{\ell}}{M_{1}}(E_{N_{1}}-E_{N_{1}}^{\rm eq })+\frac{Y_{N_{1}}^{\rm eq}}{2Y_{\ell}^{\rm eq}}N_{B-L}\right]. \tag{41}\] Finally, the produced lepton asymmetry can be converted to baryon asymmetry using the relation: \[Y_{B}=\frac{28}{79}\frac{1}{sA^{3}}N_{B-L}=\frac{28}{79}\frac{1}{sA^{3}}\sum_{ \alpha}N_{\Delta_{\alpha}}. \tag{42}\] Results We now employ the set of BEs Eq. (37)-(39) simultaneously in order to estimate the individual components of energy densities such as \(\rho_{\phi},\rho_{R}\) and \(\rho_{N_{1}}\) which are connected to \(E_{\phi},E_{R}\) and \(E_{N_{1}}\) respectively via Eq. (35). By knowing \(\rho_{R}\) as a function of the scale factor \(a\) or the rescaled one \(A\), the temperature can be defined by Eq. (28). Then, using Eq. (31), we estimate the shift of the ET if any from their standard estimate (see Fig. 1) by comparing the interaction rate of charged lepton Yukawa interaction of individual flavor \(\langle\Gamma_{\alpha}\rangle\) with \(\mathcal{H}\). Afterward, depending on the shift of ET of individual flavor, we proceed for evaluating the flavored (unflavored) \(B-L\) asymmetries by solving Eq. (40) (Eq. (41)) where we also feed the solutions of other Eqs. (37)-(39). In order to evaluate the BAU today following the above strategy, we notice that the mechanism is controlled by the following independent parameters \((i)\)\(y_{\phi ff}\) [inflaton-SM fermion effective coupling], \((ii)\)\(y_{\phi NN}\) [inflaton-RHN coupling], \((iii)\)\(M_{1}\) [the lightest RHN mass] and \((iv)\)\(\{Re[\theta_{R}],\ Im[\theta_{R}]\}\) [constituents of \(R\) matrix to estimate \(Y_{\nu}\)]. We will maintain a typical hierarchy of RHN masses as: \(M_{3}=10^{2}M_{2}=10^{4}M_{1}\). Furthermore, the dynamics also depends on the input parameters from the inflaton potential: \(\{n,\lambda\}\). To start with, \(n=2\) in the inflaton potential is chosen and the value of \(\lambda=2\times 10^{-11}\) is determined such that the inflationary observables like spectral index and tensor-to-scalar ratio are found to be within 95% C.L. of the Planck+BICEP2/Keck (PBK) constraints. For a more detailed discussion, see [27; 28; 53; 73]. Like the previous study [55], here also we confine ourselves by choosing \(y_{\phi ff}\lesssim\mathcal{O}(10^{-5}-10^{-4})\)[53; 74] as above this value, non-perturbative production of fermions from inflaton decay may start to dominate. As a result, perturbative prescription for particle production would be invalid. **[In absence of inflaton-RHN coupling:]** Note that the present scenario differs from the previous one due to the inclusion of \(y_{\phi NN}\) coupling in this work. Hence, a choice of the parameter \(y_{\phi NN}=0\) should reproduce the outcome of our previous work. We therefore start studying the phenomenology by choosing \(y_{\phi NN}=0\) (case I) first and then turning on \(y_{\phi NN}\) gradually to a value comparable to \(y_{\phi ff}\). We choose \(y_{\phi ff}=4\times 10^{-5}\) so as to be consistent with the perturbative limit on it. The Left plot of Fig. 3 shows the variation of temperature (along \(y\) axis) with the parametrized scale factor \(A\) (along \(x\) axis) where we use the solution for the radiation energy density of Eq. (38) (ignoring the first term in the right hand side as \(y_{\phi NN}=0\)) coupled with Eqs. (37) and put it back in Eq. (28). After inflation, the temperature of the Universe then attains a maximum value \(\sim T_{\rm Max}=4.45\times 10^{12}\) GeV and thereafter falls (having a different slope compared to the radiation dominated epoch) to a point where \(\rho_{\phi}=\rho_{R}\) is reached which defined the end of reheating as \(T_{\rm RH}=1.67\times 10^{10}\) GeV. Due to such faster expansion and nontrivial scale factor dependence of temperature during this extended reheating period \(T_{\rm Max}>T>T_{\rm RH}\), the charged lepton Yukawa interaction (particularly for \(\tau_{R}\) in this case) will come to thermal equilibrium at a smaller temperature than the standard radiation-dominated case. To provide a concrete evaluation of the same, we include middle plot of Fig. 3 where \(\langle\Gamma_{\alpha}\rangle/\mathcal{H}\) evolution (\(\alpha=e,\mu,\tau\) with blue, green and red lines respectively) are plotted against temperature \(T\) variation. This shift in \(\tau_{R}\) ET is depicted clearly by the intersection point of the blue line and the horizontal dashed line indicating \(\langle\Gamma_{\tau}\rangle/\mathcal{H}=1\) in middle plot of Fig. 3. The relevant ET in this extended period of reheating turn out to be \(T_{\tau}^{*}=4.7\times 10^{10}\) GeV and is included in Table 1 along with values of other parameters. The reheating temperature being bounded by \(\sim\mathcal{O}(10^{10})\) GeV, no change in \(\mu_{R}\) or \(\tau_{R}\) ET has been found as expected. With such a shift in the ET of \(\tau_{R}\) in this particular case (first row of Table 1), flavor leptogenesis would get affected. In order to have an impact of it on flavor leptogenesis, we choose a value of \(N_{1}\) mass \(M_{1}=6\times 10^{10}\) GeV which falls intermediate between the associated \(T_{\rm Max}\) and \(T_{\rm RH}\). First we estimate the evolution of various energy densities \(\rho_{\phi}\), \(\rho_{R}\), \(\rho_{N_{1}}\) in this scenario against \(A\) by solving Eqs. (37)-(39) simultaneously as shown in the rightmost plot of Fig. 3 indicated by red, blue and green solid lines respectively. In evaluating \(Y_{\nu}\), we consider \(Re[\theta_{R}]=6.03,\ Im[\theta_{R}]=0.22\) (the reason behind such a choice is to have correct BAU finally). Note that in absence of \(y_{\phi NN}\) coupling, \(N_{1}\)s are thermally generated during \(T_{\rm Max}>T>M_{1}\) from the thermal bath consisting of SM fields, thanks to radiation production from inflaton decay via \(y_{\phi ff}\). As the Universe expands, the radiation energy density is being diluted non trivially till an equality with inflaton energy density defining \(T_{\rm RH}\) (intersection of blue and red lines). Standard radiation domination follows only beyond this point. These \(N_{1}\)s would effectively decay around \(T\sim M_{1}\) and produce the lepton asymmetry via leptogenesis. The \(N_{1}\) being thermally produced, this scenario is similar to the standard flavored _thermal_ leptogenesis scenario, though impacted by the shift in \(T_{\tau}^{*}\) as seen above. We find \(M_{1}>T_{\tau}^{*}\), all the charged lepton Yukawa interactions remain out of equilibrium in this phase. As a conse quence, an unflavored leptogenesis prevails here in case of extended period of reheating. This is the main difference we experience while comparing it with standard flavored _thermal_ leptogenesis scenario in which with leptogenesis scale \(T\sim M_{1}\), \(\tau\) lepton Yukawa interaction occurs rapidly (as it is already in equilibrium, see section II) following which a two-flavor setup must be incorporated (compared to the unflavored one in present case). The corresponding evolution of lepton asymmetry is shown by black line in bottom portion of Fig. 4 as a function of the modified scale factor \(A\) which saturates to a lepton asymmetry value that eventually converts to the observed BAU value. To make the correspondence between \(\rho_{N_{1}}\) and the production of \(Y_{B-L}\), we also incorporate the \(\rho_{N_{1}}\) evolution in the top portion of the figure. **[In presence of inflaton-RHN coupling:]** We now turn on inflaton-RHN coupling and observe its impact on the charged lepton Yukawa equilibration and consequently on the produced baryon asymmetry during reheating era. Let us begin with a sufficiently small \(y_{\phi NN}=10^{-7}\) (tabulated in Table 1 as case II) as compared to \(y_{\phi ff}\) chosen. As shown in the Fig. 5, switching on \(y_{\phi NN}\) causes the inflaton to produce a large number of \(N_{1}\)s (indicated by green line) initially. This can be understood if we compare the \(\rho_{N_{1}}\) evolution (green line in Fig. 5) above temperature \(T=M_{1}\) (indicated by the vertical black small dashed line) in this case versus the case with solely thermally produced \(N_{1}\)s (see Fig. 3). However, as the temperature drops, the production of \(N_{1}\) from inflaton decay does not keep up with the Universe's expansion rate due to its feeble coupling chosen. As a result, the energy density of these \(N_{1}\)s (as decay products of inflaton) gets diluted and at some stage the production of \(N_{1}\)s from the inverse decay dominates over it. This is evident in the left plot of Fig. 5 by the sudden change of slope of \(\rho_{N_{1}}\) just before \(T=M_{1}\) which coincides with the energy density of thermally produced \(N_{1}\)s of Fig. 3. This continues till a point beyond which \(N_{1}\) decay starts to contribute to lepton asymmetry production. We also notice that due to the dominant decay of \(\phi\) into SM fermions, the \(\rho_{\phi}\) and \(\rho_{R}\) (mainly contributed from \(y_{\phi ff}\) coupling) do not alter by a noticeable amount and hence \(\mathcal{H}\) remains essentially unchanged compared to the previous case with \(y_{\phi NN}=\)0. As a result, together with \(T_{\rm Max}\) and \(T_{\rm RH}\), the \(\tau_{R}\) ET \(T_{\tau}^{*}\) remains identical with the _thermal_ case (see second row of Table 1). Hence the present situation falls in the category of unflavored leptogenesis. The evolution of the \(B-L\) asymmetry for \(y_{\phi NN}=10^{-7}\) scenario is presented by the red dash Figure 4: Evolution of energy density of \(N_{1}\) (upper panel) and produced baryon asymmetry (lower panel) \(w.r.t.\) rescaled scale factor for different values of \(y_{\phi NN}\) for \(M_{1}=6\times 10^{10}\) GeV and \(y_{\phi ff}=4\times 10^{-5}\). Figure 3: Evolution of temperature \(T\) (left panel) and various energy densities (right panel) \(w.r.t.\) rescaled scale factor for \(M_{1}=6\times 10^{10}\) GeV and \(y_{\phi NN}=0\). In the middle plot we show the dependence of \(\langle\Gamma_{\alpha}\rangle/\mathcal{H}\) on \(T\) for the same choice of \(M_{1}\) and \(y_{\phi NN}\). dotted line in the bottom plot of Fig. 4 which overlaps with the \(y_{\phi NN}=0\) case (solid black line), thereby satisfying the observed baryon asymmetry of the Universe. As we further increase the strength of the inflaton-\(N_{1}\) coupling, \(i.e.\)\(y_{\phi NN}=10^{-5}\) (as case III of Table 1 while keeping other parameters fixed at their previous values), a change in \(\rho_{N_{1}}\) becomes visible. The right plot of Fig. 5 shows the behavior of the energy densities of different components of the Universe in this case. Note that contrary to cases I and II, dominant contribution to \(\rho_{N_{1}}\) here follows from the \(N_{1}\)s being decay products of inflaton as it supersedes the thermally produced ones from inverse decay. The radiation component (blue line) however still remains dominant compared to \(\rho_{N_{1}}\) (green line). Note that \(T_{\rm Max}\) remains unaffected as it is mainly controlled by \(y_{\phi ff}\) coupling (fixed for cases I-IV) responsible for initial radiation production. However a small shift in the \(T_{\rm RH}\) as compared to cases I and II, is observed and indicated in Table 1. This happens as a result of higher \(y_{\phi NN}\) coupling which causes the inflaton to decay earlier than cases I and II so that \(\rho_{\phi}=\rho_{R}\) defining the \(T_{\rm RH}\) is realized at a slightly higher temperature. Due to the dominance of \(\rho_{R}\) (almost unchanged compared case I and II) over \(\rho_{N_{1}}\), the temperature evolution above \(M_{1}\) remains close to the two earlier cases. For the same reason, \(\mathcal{H}\) is also almost unaffected and this is reflected in the evaluation of \(T_{\tau}^{*}\) (only a slight change) as included in Table 1. As previously discussed, here also, even though we have a slightly higher value of \(T_{\tau}^{*}\) than the previous cases, the leptogenesis scale however remains larger with respect to \(T_{\tau}^{*}\). Hence, an unflavored prescription is still adequate for estimating the lepton asymmetry. The evolution of produced lepton asymmetry in this case III is shown by the blue dash-dotted line in the bottom panel of Fig. 4. As seen in this plot, the lepton asymmetry starts to being produced at a stage above temperature \(T\sim M_{1}\). This is related to the fact that \(N_{1}\)s produced during this era of reheating find themselves in out of equilibrium (\(\mathcal{H}\) is larger than decay width of \(N_{1}\)) and would decay. Additionally, \(\rho_{N_{1}}\) being comparable to \(\rho_{R}\), the inverse decay process (related with neutrino Yukawa coupling) remains subdominant compared to the \(N_{1}\) decay. As a result, lepton asymmetry (still unflavored though) produced from such decay of \(N_{1}\) would not be washed out completely and a non-zero asymmetry survives. The amount of asymmetry production increases till a point where \(\rho_{N_{1}}\) ceases to exist. Beyond it, \(Y_{B-L}\) falls to some extent, before attaining its asymptotic value which is larger than the value of the lepton asymmetry necessary for the production of observed BAU, as the produced asymmetry gets diluted due to the increase of entropy (\(N_{1}\) decay produces a sizeable \(\rho_{R}\) and hence, entropy) in the Universe. Note that this phase of leptogenesis is different from _thermal_ leptogenesis scenario as \(N_{1}\)s are never in thermal equilibrium. On the other hand, this is not purely the case of _non-thermal_ leptogenesis which happens with \(N_{1}\), as the decay product of inflaton, finding itself in an environment with \(T\ll M_{1}\) (so, thermal generation is ruled out). So here with case (B), we find a nonstandard generation of lepton asymmetry as a consequence of extended reheating where the inflaton has a sizeable coupling with the lightest RHN. As discussed in the beginning of section, we call it a 'quasi'- thermal leptogenesis as neither it is the case of a purely thermal nor that of _non-thermal_ leptogenesis. The value of \(Y_{B}\) can be brought down to the correct BAU by decreasing \(Im[\theta_{R}]=0.031\), while all other parameters/outcomes are unaffected. Finally, the above discussed effect becomes prominent if we choose to increase the \(y_{\phi NN}\) coupling further, say \(y_{\phi NN}=y_{\phi ff}\) as included in case IV of Table 1. In this case, we obtain \(\rho_{N_{1}}=\rho_{R}\). As radiation and \(N_{1}\)s con tribute equally to the energy density of the Universe during the reheating period, the expansion rate of the Universe gets modified in this scenario, affecting the \(T_{\rm RH}\) as well as \(T_{\star}^{*}\). In this case, we get a larger \(T_{\rm RH}\) as inflaton decays earlier than the previous case owing to the larger \(y_{\phi NN}\) coupling. The related numerical estimates for this case IV are listed in the fourth row of Table 1. In this case also, a larger baryon asymmetry of the Universe is created which can be settled to the observed \(Y_{B}\) value without altering any other parameter/predicted values once we reduce the value to \(Im[\theta_{R}]=0.02\). In Table 1, we have listed a few specific values of \(y_{\phi NN}\) coupling to describe the impact of reheating on leptogenesis. In Fig. 6, we provide the estimate of final baryon asymmetry (via unflavored leptogenesis) once the Yukawa coupling \(y_{\phi NN}\) is varied (\(y_{\phi NN}\leq y_{\phi ff}\)). As already found in cases I-III, with tiny \(y_{\phi NN}\) coupling (\(y_{\phi NN}\lesssim 10^{-6}\)), the final baryon asymmetry \(Y_{B}\) almost remains independent of \(y_{\phi NN}\). Thereafter, a rise in \(Y_{B}\) can be seen due to the fact that the production of RHN \(N_{1}\) from the inflaton decay also becomes significant. This additional production channel causes a significant rise in the \(N_{1}\)'s abundance \(\rho_{N_{1}}\) which further leads to a larger production of lepton asymmetry (also the baryon asymmetry). This behavior is also clear from Fig. 4. A peak in \(Y_{B}\) is observed when \(y_{\phi NN}\simeq 2\times 10^{-5}\) after which \(Y_{B}\) is reduced once the \(y_{\phi NN}\) is further increased. This fall can be understood by looking at the third term of Eq. (41) where one notes that a larger production of asymmetry also results in a larger washout of the asymmetry. **[With dominant inflaton-RHN coupling:]** So far the discussion we have, we find that the gradual increase of \(y_{\phi NN}\) coupling not only affects the temperature behavior and expansion rate of the Universe during reheating period but also impacts the lepton asymmetry production in this _quasi_-thermal regime. However we have restricted ourselves with values of couplings associated to the inflaton below \(\mathcal{O}(10^{-5}-10^{-4})\) so as to keep the analysis consistent with perturbative reheating era [74; 53]. Alongside, we take \(y_{\phi ff}\) at a borderline value \(4\times 10^{-5}\) and hence we are unable to make \(y_{\phi NN}\) larger than \(y_{\phi ff}\) by order(s) of magnitude and discuss the impact of such consideration. Also, with such choice of \(y_{\phi ff}\), the reheating temperature turns out to be high enough so as to keep \(M_{1}\) accordingly large (to realize scenario [B]). With an aim to observe the consequence of \(y_{\phi NN}>y_{\phi ff}\) while keeping things more flexible such as lowering the scale of leptogenesis impacted by the extended period of reheating, we now consider three specific situations: (i) \(M_{1}\) is close to \(T_{\rm RH}\) [BP1], (ii) \(M_{1}\) is intermediate between \(T_{\rm Max}\) and \(T_{\rm RH}\) [BP2], and (iii) \(M_{1}\) is close to \(T_{\rm Max}\) [BP3] where the mass of the lightest RHN is fixed at \(M_{1}=5\times 10^{9}\) GeV while \(y_{\phi ff}\) and \(y_{\phi NN}\) are floated to realize such considerations. We choose the values of \(Re[\theta_{R}]=2.83\) and \(Im[\theta_{R}]=0.24\) used in thermal flavored leptogenesis scenario (in two flavor regime) of section II (see Fig. 2) the result of which is consistent with correct BAU. The purpose of such a choice is to compare the outcome of the extended period of reheating on final baryon asymmetry generation with \(y_{\phi NN}>y_{\phi ff}\). Note that as inflaton-SM fermion coupling essentially define the \(T_{\rm Max}\) while \(y_{\phi NN}\) has some role to play in determining \(T_{\rm RH}\), we first make some appropriate choices of these two parameters in defining the three benchmark cases BP-1,2,3. They are listed in Table 2. For all these sets, \(y_{\phi NN}\) remains one order of magnitude larger than \(y_{\phi ff}\) coupling. In evaluating the temperature evolution during the reheating, we solve Eqs. (37)-(39) as a function of the rescaled scale factor \(A\) simultaneously and using Eq. (28), temperature is evaluated. Bottom panels of all three plots of Fig. 7 represent the temperature variation with \(A\) and upper panels depict the same for energy densities of different components. As seen from the plots, immediately after the end of inflation, the temperature reaches a maximum value \(T_{\rm Max}\). Then it starts to decrease in accordance with our previous discussion (in section IV) due to faster expansion of the Universe during this period of extended reheating. However, an interesting departure of \(T\) from this fall is observed around \(T\sim M_{1}\). This is related to the emergence of a new production channel, producing radiation from \(N_{1}\) decay asa result of \(y_{\phi NN}\) dominance. An interplay between such additional injection of radiation in the bath (which tries to increase \(\rho_{R}\)) and the depletion of \(\rho_{R}\) due to Hubble expansion, a plateau like region is formed in \(T\) evolution plot. This period however does not last long as eventually Hubble expansion rate overtakes this radiation production rate from \(N_{1}\) decay (\(\rho_{N_{1}}\) decreases sharply beyond a point). Eventually, radiation dominates over matter beyond \(T_{\rm RH}\) and temperature of the Universe drops as \(A^{-1}\). In the upper panels of Fig. 7, Figure 6: Variation of final baryon asymmetry \(Y_{B}\)\(w.r.t.\)\(y_{\phi NN}\) for \(M_{1}=6\times 10^{10}\) GeV and \(y_{\phi ff}=4\times 10^{-5}\). The points indicated by “star” represent the case-II-IV from Table 1. we note that due to choice(s) of smaller coupling(s) \(y_{\phi ff}\) (\(y_{\phi NN}\)) in going from BP1 to BP3, inflaton takes larger time to decay for BP3 (BP2)compared to BP1. As a result, matter radiation equality shifts at a later epoch resulting in lower reheating temperature for BP3 (BP2) relative to BP1. This nontrivial behavior of temperature along with the larger expansion rate of the Universe during reheating period affects the ETs of charged lepton Yukawa interactions as can be seen from Fig. 8. For RHN mass \(M_{1}=5\times 10^{9}\) GeV, though there is no shift in ET for right-handed electron, ample amount of change can be seen for ET for \(\mu_{R}\) and \(\tau_{R}\) compared to the standard radiation dominated scenario for all three BPs. This change makes the charged Yukawa interactions come to equilibrium at a much lower temperature which are included in \begin{table} \begin{tabular}{||c|c|c|c|c|c|c|c|c||} \hline Point & \(y_{\phi NN}\) & \(y_{\phi ff}\) & \(T_{\rm Max}\) (GeV) & \(M_{1}\) (GeV) & \(T_{\rm RH}\) (GeV) & \(T_{\tau}^{*}\) (GeV) & \(T_{\mu}^{*}\) (GeV) & \(Y_{R}\) \\ \hline \hline BP1 & \(10^{-6}\) & \(10^{-7}\) & \(2.23\times 10^{11}\) & \(5\times 10^{9}\) & \(4.2\times 10^{8}\) & \(2\times 10^{9}\) & \(7\times 10^{8}\) & \(5.67\times 10^{-9}\) \\ \hline BP2 & \(5\times 10^{-7}\) & \(5\times 10^{-8}\) & \(1.6\times 10^{11}\) & \(5\times 10^{9}\) & \(2.1\times 10^{8}\) & \(1.5\times 10^{9}\) & \(4\times 10^{8}\) & \(3.60\times 10^{-9}\) \\ \hline BP3 & \(10^{-7}\) & \(10^{-8}\) & \(7.04\times 10^{10}\) & \(5\times 10^{9}\) & \(4.2\times 10^{7}\) & \(6\times 10^{8}\) & \(1.3\times 10^{8}\) & \(7.28\times 10^{-10}\) \\ \hline \end{tabular} \end{table} Table 2: We list three benchmark points (BP) where the leptogenesis scale falls in between \(T_{\rm Max}\) and \(T_{\rm RH}\). While BP1 represents the case where \(M_{1}\) is closer to \(T_{\rm RH}\), BP2 represents an intermediate scenario and BP3 indicates a scenario where \(M_{1}\) lies closer to \(T_{\rm Max}\). Figure 7: Evolution of different energy densities (upper panel) and temperature \(T\) (lower panel) \(w.r.t.\) rescaled scale factor for BP1 (left panel), BP2 (middle panel), and BP3 (right panel). Here we fix \(M_{1}=5\times 10^{9}\) GeV, [BP3] \(\theta_{R}=2.83+0.24i\). Figure 9: Evolution of produced lepton asymmetry \(w.r.t.\) rescaled scale factor for all the three BPs for \(M_{1}=5\times 10^{9}\) GeV and \(\theta_{R}=2.83+0.24i\). Table 2. As a consequence of considerably lower values of \(T_{\mu}^{*}\) and \(T_{\tau}^{*}\), we expect the quantum decoherence of the SM lepton doublet states to take place here at much lower temperatures (same as charged lepton ET) compared to the standard radiation dominated scenario. Hence, for all three BPs, lepton asymmetry generation process turns out to be not affected by individual charged lepton doublets at leptogenesis scale \(M_{1}=5\times 10^{9}\) GeV and an unflavored leptogenesis prevails here. This is a new result as compared with earlier standard analysis presented in section II, where at this leptogenesis scale, \(\tau_{R}\) was already in equilibrium affecting the lepton asymmetry generation along \(\tau\) direction distinctively. Accordingly, a two flavor leptogenesis was incorporated for correct generation of lepton asymmetry. On the other hand, in this present case incorporating the effect extended reheating (with inflaton-RHN dominance), unflavor approach to evaluate the baryon asymmetry of the Universe would be enough. For all three BPs, a different rate of washout during the reheating period accounts for the main difference in the produced final baryon asymmetry. Dashed purple line of Fig. 9 shows that as \(M_{1}\) is closer to \(T_{\rm Max}\) for BP3 as a result of which the produced asymmetry suffers a larger amount of washout (due to Hubble expansion) contrary to BP1 and BP2. Finally, even with a relatively low \(M_{1}\) in this _quasi_-thermal regime, an overproduction of baryon asymmetry by one to two order(s) of magnitude is observed for these three BPs relaxing the parameter space even further with respect to the modified thermal leptogenesis scenario studied in section V. For completeness purpose, we notice that such final \(Y_{B}\) values can be brought down to correct level of BAU by changing \(\theta_{R}\) without altering other parameter values or the outcomes such as \(T_{\rm RH}\) and \(T_{\alpha}^{*}\). For example, for BP1 (BP2), one needs to set \(Im[\theta_{R}]=0.002\) (0.005), while for BP3, \(Im[\theta_{R}]\) can be fixed at 0.02 so that observed BAU can be generated. ## VI Conclusion In this work, we have shown that an extended period of reheating resulting from inflaton-decay into radiation together with the lightest RHN can significantly alter the equilibration temperature of the charged lepton Yukawa interactions. Consequently, flavored leptogenesis mechanism gets affected. We start with a discussion on how the equilibration temperature(s) of charged lepton Yukawa interaction(s) can be estimated in a radiation dominated Universe and its impact on lepton asymmetry generation known as flavored leptogenesis. In such a setup, the reheating process is generally assumed to be instantaneous and happens to be higher than the mass of the decaying RHN whose decay contributes to lepton asymmetry production. However, depending on the inflaton coupling to SM particles, the reheating process may survive a longer period creating a prolonged era of reheating, from \(T_{\rm Max}\) to \(T_{\rm RH}\). Motivated by our recent finding on the impact of this extended era of reheating on charged lepton equilibration temperature and flavored leptogenesis, here we extend the setup by including additional inflaton-RHN coupling. We find that with relatively large value of inflaton-RHN coupling compared to the inflaton-SM fermion effective coupling, the reheating period gets further modified. While the inflaton-SM fermion coupling mainly controls the maximum temperature of the Universe immediately after inflation, the inflaton-RHN coupling has the potential to impact the reheating temperature. The production of RHN and SM bath from the inflaton decay during this period of prolonged reheating helps the Universe to expand at a much faster rate (depending on the inflaton-RHN coupling though) in comparison to the scenario where inflaton decays directly to radiation solely. As a result of such faster expansion, along with the modified temperature behavior, the charged Yukawa interactions enter into equilibrium in a delayed fashion. We also observe that such a delayed equilibration of charged lepton Yukawa interactions can significantly modify the lepton asymmetry generation compared to what is observed in _thermal_ leptogenesis. For example, a flavored leptogenesis scenario found to be in two flavor regime in standard _thermal_ leptogenesis may emerge as an unflavored one here. Another interesting outcome of the present scenario is revealed with a dominant inflaton-RHN coupling with respect to inflaton-SM fermion effective coupling. Here we encounter an unusual situation where the lepton asymmetry starts to be produced at a temperature above the mass of the lightest RHN without being completely washed out. In fact, the reheating era produces an environment where the lightest RHN find itself in out-of-equilibrium in this regime and its decay therefore contributes to lepton asymmetry production. In a way, this helps to reduce the scale of leptogenesis since the inclusion of inflaton-RHN coupling may inject a large amount of RHN into the system on top of thermally produced ones (whose decay also contribute to produce lepton asymmetry) during reheating thereby resulting an enhanced lepton asymmetry. The framework however can be extended beyond our present consideration. In some of the low scale leptogenesis scenarios, the framework might alter the prediction as well as allowing the leptogenesis scale to drop even further opening the possibility to explore leptogenesis in collider experiments. The related study is beyond the scope of this paper and we plan for some more work in these directions in future. ## VII Acknowledgements A.S. acknowledges the support from grants CRG/2021/005080 and MTR/2021/000774 from SERB, Govt. of India. R.R. acknowledges the National Research Foundation of Korea (NRF) grant funded by
2301.13298
LongEval: Guidelines for Human Evaluation of Faithfulness in Long-form Summarization
While human evaluation remains best practice for accurately judging the faithfulness of automatically-generated summaries, few solutions exist to address the increased difficulty and workload when evaluating long-form summaries. Through a survey of 162 papers on long-form summarization, we first shed light on current human evaluation practices surrounding long-form summaries. We find that 73% of these papers do not perform any human evaluation on model-generated summaries, while other works face new difficulties that manifest when dealing with long documents (e.g., low inter-annotator agreement). Motivated by our survey, we present LongEval, a set of guidelines for human evaluation of faithfulness in long-form summaries that addresses the following challenges: (1) How can we achieve high inter-annotator agreement on faithfulness scores? (2) How can we minimize annotator workload while maintaining accurate faithfulness scores? and (3) Do humans benefit from automated alignment between summary and source snippets? We deploy LongEval in annotation studies on two long-form summarization datasets in different domains (SQuALITY and PubMed), and we find that switching to a finer granularity of judgment (e.g., clause-level) reduces inter-annotator variance in faithfulness scores (e.g., std-dev from 18.5 to 6.8). We also show that scores from a partial annotation of fine-grained units highly correlates with scores from a full annotation workload (0.89 Kendall's tau using 50% judgments). We release our human judgments, annotation templates, and our software as a Python library for future research.
Kalpesh Krishna, Erin Bransom, Bailey Kuehl, Mohit Iyyer, Pradeep Dasigi, Arman Cohan, Kyle Lo
2023-01-30T21:31:48Z
http://arxiv.org/abs/2301.13298v1
# LongEval: Guidelines for Human Evaluation of ###### Abstract While human evaluation remains best practice for accurately judging the faithfulness of automatically-generated summaries, few solutions exist to address the increased difficulty and workload when evaluating _long-form_ summaries. Through a survey of 162 papers on long-form summarization, we first shed light on current human evaluation practices surrounding long-form summaries. We find that 73% of these papers do not perform any human evaluation on model-generated summaries, while other works face new difficulties that manifest when dealing with long documents (e.g., low inter-annotator agreement). Motivated by our survey, we present LongEval, a set of guidelines for human evaluation of faithfulness in long-form summaries that addresses the following challenges: (1) How can we achieve high inter-annotator agreement on faithfulness scores? (2) How can we minimize annotator workload while maintaining accurate faithfulness scores? and (3) Do humans benefit from automated alignment between summary and source snippets? We deploy LongEval in annotation studies on two long-form summarization datasets in different domains (SQuALITY and PubMed), and we find that switching to a finer granularity of judgment (e.g., clause-level) reduces inter-annotator variance in faithfulness scores (e.g., std-dev from 18.5 to 6.8). We also show that scores from a _partial_ annotation of fine-grained units highly correlates with scores from a full annotation workload (0.89 Kendall's \(\tau\) using 50% judgments). We release our human judgments, annotation templates, and our software for future research.1 Footnote 1: [https://github.com/martiansideofthemoon/longeval-summarization](https://github.com/martiansideofthemoon/longeval-summarization) *Work done in an AI2 internship, author contributions here. ## 1 Introduction Human judgments are considered the gold standard for evaluating model-generated summaries (Kryscinski et al., 2019; Fabbri et al., 2021) and generated text more broadly (Celikyilmaz et al., 2020). Unfortunately, human evaluation tends to be labor-intensive, expensive to scale, and difficult to design. This is problematic as a large number of judged examples is needed to draw statistically significant conclusions about system performances (Wei and Jia, 2021) or correlations between human judgments and automatic metrics (Deutsch et al., 2021). Human evaluation is especially challenging when _long_ sequences of generated text need to be evaluated, due to the inherent subjectivity in the task (Karpinska et al., 2021; Clark et al., 2021; Krishna et al., 2021; Goyal et al., 2022). To better understand the challenges of human evaluation on long-form summaries (150 words or longer), we first conduct a comprehensive survey of 162 publications and preprints on long-form summarization (Section 2). We find that 119 papers (73%) do not perform human evaluation on long-form summaries, while the remaining papers deviate significantly from suggested best practices for reproducibility (Gehrmann et al., 2022). Current human evaluation setups lack standardization in their design decisions (such as annotation granularity), some of which can significantly impact inter-annotator agreement (Section 3.1). Finally, 20 papers explicitly mention human evaluation is expensive, difficult, and time-consuming due to the long length of summaries and source documents. To move towards a more consistent and efficient human evaluation, we present LongEval, a set of guidelines for human evaluation of faithfulness in long-form summarization (Section 3). We empirically evaluate LongEval using human annotation studies on two long-form summarization datasets: SQuALITY (Wang et al., 2022) and PubMed (Cohan et al., 2018). We provide an overview of our main research questions and findings in Figure 1 and enumerate them here: **RQ1**: _Can inter-annotator agreement be improved while evaluating faithfulness of long-form summaries via fine-grained annotations?_ **Finding**: Annotating faithfulness of individual summary clauses and aggregating them leads to significantly higher inter-annotator agreement, compared to the dominant paradigm of evaluating whole summaries at once via Likert ratings (std-dev 18.5 to 6.8 on SQuALITY). **RQ2**: _Can we reduce annotator workload by partially annotating a long summary while maintaining accurate faithfulness scores?_ **Finding**: Despite annotating a fraction of summary clauses, faithfulness scores under a reduced workload maintain high correlation with those from a full workload (0.89 Kendall's \(\tau\) at 50% workload). **RQ3**: _Do humans benefit from automatically aligning summary units to relevant sentences in the source document?_ **Finding**: Unlike suggestions in prior work on short-form summarization Hardy et al. (2019); Kryscinski et al. (2020), aligning parts of the summary to source document is only useful when the summary is highly extractive or mostly correct. Overall, our contributions are: **(1)** a 162-paper survey of current human evaluation practices in long-form summarization; **(2)**LongEval, a set of three guidelines for evaluating faithfulness in long-form summarization; **(3)** an empirical validation of LongEval guidelines on two long-form summarization datasets in different domains (SQuALITY and PubMed); **(4)** A dataset with 3-way fine-grained human faithfulness judgments for 120 SQuALITY & PubMed summaries annotated using LongEval which can be used for benchmarking automatic metrics. We open-source our human evaluation data, annotation interface, and code for future research.1 Footnote 1: We exclude five papers which used long-form summarization data for pre-training only, like Wei et al. (2022). ## 2 Survey of human evaluation practices Before discussing LongEval, we first attempt to understand current human evaluation practices in long-form summarization through a comprehensive survey of 162 papers. Our survey reveals several concerning trends: absence of human evaluation, non-reproducible experimental setups, lack of standardization, and complaints of long summaries being challenging and expensive to evaluate. These results show an urgent need to develop more efficient and standardized human evaluation protocols. _Selection of papers:_ We consider existing summarization datasets with an average summary length of at least 150 words, which includes several popular datasets like arXiv Cohan et al. (2018), Bill-Sum Kornilova and Eidelman (2019) and MultiNews Fabbri et al. (2019); see Table 1 for a full list. For our survey, we select all papers that evaluated summarization models using at least one of these datasets.2 All of these papers were published between June 2018 and September 2022, after the first long-form summarization datasets were released (PubMed / arXiv). Most of the 162 surveyed papers Figure 1: Overview of research questions considered in LongEval. Example summary taken from SQuALITY. were published in major NLP/ML venues, but we also include newer preprints from 2022. **Long-form summaries are rarely evaluated by humans.** We find that 101 out of 162 papers (62%) do not perform any human evaluation. 17 papers (11%) only perform human evaluation on short summaries (datasets like XSUM, Narayan et al., 2018), for which human evaluation is much easier. **Human evaluation studies of long-form summaries are not reproducible.** We further analyze the 44 papers performing human evaluation of long-form summaries to observe how often they follow reproducible practices from Gehrmann et al. (2022). Overall, we find that most studies do not follow these guidelines. Only 2 of the 44 papers release their raw human annotation data for further analysis. Only 9 papers provide details of their annotator instructions or interface, and just 12 papers perform any kind of statistical analysis, despite most papers annotating less than 50 summaries. While 33 papers report using multiple annotators per summary, only 12 report inter-annotator agreement. Finally, just 14 papers conduct human evaluation on more than one dataset (more statistics in Appendix C). **Existing human evaluation setups lack standardization.** In Table 2, we catalog the wide spectrum of human evaluation setups in the surveyed papers. 37 papers collect judgments of the full-length summary at once ("coarse-grained"), while 6 papers collect judgments at a finer granularity such as sentences or entities ("fine-grained"). Even within a granularity, setups differ: Likert-scale (24 papers), A/B testing (13 papers), binary per-sentence labels (4 papers) are the dominant protocols. In Section 3.1, we will see that this design decision is critical since coarse annotations have much lower inter-annotator agreement than fine.3 Footnote 3: Besides granularity, we also observe a large spectrum of annotator qualifications in our survey, ranging from MTurkers to expert graduates (Appendix C). Since non-experts are known to be unsuitable for this task (Gillick and Liu, 2010; Fabbri et al., 2021), we use experts in our work (Appendix B). **Human evaluation of long-form summaries is challenging and expensive.** Several of the surveyed papers discuss challenges in human evaluation of long-form summaries. 13 papers mention that expert annotators are necessary for human evaluation of long-form summaries, especially in technical domains like PubMed. 20 papers report that human evaluation of long-form summarization was _time-consuming_, _challenging_, and _expensive_, primarily due to the long length of the summary and source document. To tackle the issue of high annotator workload, we propose a partial annotation method in Section 3.2 and report high correlation to a full workload. Additionally, in Section 3.3 we investigate the usefulness of highlighting sentences to help annotators navigate the long source document. While this has been advocated for in short-form summary evaluation (Hardy et al., 2019; Kryscinski et al., 2020) and used in 3 surveyed long-form papers, we find that it is only helpful when summaries are mostly correct and extractive. ## 3 The LongEval guidelines for faithfulness human evaluation In Section 2, we report several concerning issues with current human evaluation practices in long-form summarization. To move towards more efficient, reproducible and standardized protocols for human evaluation, we develop the LongEval guidelines (Section 3.1-3.3, see Figure 1 for an overview). We focus on human evaluation of _faithfulness_, which Wang et al. (2022) define as: \begin{table} \begin{tabular}{l r r r} \hline \hline Dataset & \(|\text{source}|\) & \(|\text{summary}|\) & papers \\ & (words) & (words) & \\ \hline PubMed (2018) & 3092 & 205 & 59 \\ arXiv (2018) & 5906 & 163 & 55 \\ BillSum (2019) & 1284 & 174 & 19 \\ MultiNews (2019) & 2103 & 263 & 54 \\ GovReport (2021) & 7551 & 547 & 16 \\ BookSum (2021) & 5102 & 505 & 4 \\ SummScreen (2022) & 6965 & 227 & 11 \\ SQuALITY (2022) & 5194 & 227 & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: List of long-form summarization datasets considered in our survey along with average source document and summary lengths. Each dataset considered has at least 150 word summaries on average. \begin{table} \begin{tabular}{l r r} \hline \hline Type of human evaluation & \# papers & \% papers \\ \hline None & 101 & 62\% \\ Short-form summaries only & 17 & 11\% \\ \hline Likert-scale coarse-grained & 24 & 15\% \\ A/B testing coarse-grained & 13 & 8\% \\ Extrinsic evaluation & 1 & 1\% \\ Binary per sentence fine-grained & 4 & 2\% \\ QA-based fine-grained & 2 & 1\% \\ \hline \hline \end{tabular} \end{table} Table 2: Human evaluation setup in 162 summarization papers that evaluate long-form summaries. 73% of the papers do not evaluate long-form summaries with humans, while others vary significantly in their setups. "_Checking the factual errors in the summary, where a factual error is a statement that contradicts the source document, or is not directly stated, heavily implied, or logically entailed by the source document_" We conduct human annotation studies to empirically motivate LongEval. Our experiments are on two long-form summarization **datasets** spanning diverse domains and levels of abstractiveness: (1) **SQuALITY**(Wang et al., 2022) is a summarization dataset in the literary domain (avg. summary length of 227 words) where summaries describe the plots of English science fiction stories. SQuALITY is highly abstractive: on average just 16% of bigrams in the summary are present in the source document. We closely follow the human evaluation setup in Wang et al. (2022), and use BART Lewis et al. (2020) and BARTDPR Karpukhin et al. (2020) as our summarization models along with human-written summaries. (2) **PubMed**(Cohan et al., 2018) is a summarization dataset in the scientific domain (avg. summary length of 205 words) that pairs English biomedical articles from PubMed4 with their abstracts as summaries. Compared to SQuALITY, PubMed is more extractive: 54% of summary bigrams are present in the source. We use BigBird-PEGASUS-large Zaheer et al. (2020) and LongT5-large Guo et al. (2022) as our summarization models,5 along with human written summaries. By default, LongT5 / BigBird were highly extractive compared to human-written PubMed summaries (87% / 74% vs 54% bigram overlap with source). Hence, for half the generations we block 6-grams from being copied from the source,6 reducing extractiveness to \(\sim\)54%. We call this setting "PubMed-ngram-block". Footnote 4: [https://pubmed.ncbi.nlm.nih.gov/](https://pubmed.ncbi.nlm.nih.gov/) Footnote 5: LongT5 is the best publicly available PubMed summarizer. BigBird is a popular long-form summarization baseline. Footnote 6: Reducing extractiveness / copying is also a suggestion for fair-use of copyrighted work Harvard (2016; UMGC, 2020). ### RQ1: Does inter-annotator agreement improve using fine-grained annotations? In Section 2, we found that the dominant paradigm in literature (37 out of 44 papers) is to evaluate the whole summary at once ("coarse"-grained, Figure 1 top left). 6 papers instead obtain fine-grained annotations for individual units (e.g., sentences) and average them (fine, Figure 1 top right). Intuitively, fine annotation has many advantages for longer summaries -- it is less subjective than coarse, since shorter spans needs to be judged rather than a long summary, and it helps localize model errors. However, the distinction between coarse and fine is never justified in literature, and inter-annotator agreement is rarely reported to understand the task subjectivity in each setup. To better understand the tradeoff, in this section we conduct human evaluations annotating the same set of summaries using these two different protocols. **Task formulation**: Let \(F_{\text{summ}}\) denote the faithfulness score of a summary. For coarse, \(k\)-point Likert scale ratings are obtained for the summary (\(F_{\text{summ}}\in\{0,1...k\}\)), based on the faithfulness definition provided earlier. For fine, we collect binary judgments of individual units in the summary and average them, \[F_{\text{summ}}=\frac{1}{|\mathcal{C}_{\text{summ}}|}\sum_{c\in C_{\text{summ }}}F_{c},\;F_{c}\in\{0,1\}\] where \(\mathcal{C}_{\text{summ}}\) is a set of units in the summary and \(F_{c}\) is the faithfulness judgment for the unit \(c\). In both protocols, the faithfulness score of a system is defined as \(\frac{1}{|\mathcal{S}|}\sum_{\text{summ}\in\mathcal{S}}F_{\text{summ}}\) where \(\mathcal{S}\) is the set of summaries generated by the system.7 Footnote 7: We assume all summary units get an equal weight. However, some units may be more important than others, we discuss this in the Limitations section. While sentences are a popular granularity for fine (4 of the 6 surveyed papers), we found that summary sentences in both datasets were overloaded with information. Hence, we segment sentences on conjunctions and punctuation to obtain more atomic units as \(\mathcal{C}_{\text{summ}}\). These units are often clauses,8 similar to summary content units (SCUs) in Pyramid Nenkova and Passonneau (2004). Footnote 8: An even finer granularity is entities / numbers. We avoid this due to prohibitive annotation cost on long summaries. **Collecting coarse annotations**: For SQuALITY, we re-use the annotations provided by Wang et al. (2022) for faithfulness assessments. In their data, three annotators give each summary a 1-100 direct assessment rating Bojar et al. (2016). Annotators with experience in professional copyrighting and editing were hired on Upwork,9 and these annotators were also involved in the creation of SQuALITY. Unfortunately, none of the surveyed papers that reported human evaluation results on PubMed released their raw human annotations.10 Hence, we collect our own coarse evaluations on PubMed summaries on Upwork, using freelancers with professional experience reading and writing research papers (details in Appendix B.2). We collect 3 annotations per summary and use a 5-point Likert scale, the most common choice for coarse assessment in our survey (18 out of 38 papers). In total, 120 summaries are evaluated. Footnote 10: In our email correspondence with authors of these works, they mentioned losing access or compliance issues as reasons for not sharing human evaluations. We received some examples from Guo et al. (2021) and Ju et al. (2021) for reference. **Collecting fine annotations**: For both SQuALITY and PubMed, we collect fine annotations on Upwork (3 annotators per fine unit) for the _same set_ of 120 summaries evaluated using coarse annotations. For SQuALITY, we hire freelancers with professional experience in English, creative writing, or education. For PubMed, we hire freelancers with prior experience analyzing biomedical articles. See Appendix B.1 for details of our annotator screening process, compensation, instructions, and screenshots of our annotation interface. **fine annotations have higher inter-annotator agreement than coarse annotations. This leads to more confident downstream estimates**. We present our results in Table 3. Overall, we observe that across all settings, fine annotations have Figure 3: 95% confidence intervals of estimated model performances using fine (blue) and coarse (orange) annotation methods. Intervals calculated using bootstrap resampling across annotators (Appendix A). While both annotation granularities lead to similar relative ordering of systems, fine annotations have narrower confidence intervals. The higher LongT5 score vs human in PubMed is due to highly extractive LongT5 summaries (Section 3). \begin{table} \begin{tabular}{l c c} \hline \hline Dataset & coarse & fine \\ \hline SQuALITY & 18.5 & **6.8** \\ PubMed & 11.8 & **7.3** \\ PubMed + ngram block & 11.7 & **9.3** \\ \hline Average & 14.0 & **7.8** \\ \hline \hline \end{tabular} \end{table} Table 3: Average standard deviation of faithfulness scores across annotators on a 100-point rating scale. Lower variation means higher agreement. Overall, we find that fine-grained annotations have higher inter-annotator agreement than coarse-grained annotations. Note that all fine units of a summary were annotated to obtain these results (\(f=1.0\) in Section 3.2). Figure 2: 95% confidence intervals of Pearson correlations between various automatic evaluation metrics and using human evaluation data collected with fine (blue) and coarse (orange) annotation methods. In both datasets, fine annotations lead to much narrower CIs than coarse annotations. See Appendix G for plot with Kendall’s Tau. lower standard deviation (and thus higher agreement) in faithfulness scores than coarse annotations (7.8 vs 14.0 average on 100-point scaled ratings). To illustrate the importance of higher agreement, we measure its effect on two downstream statistics that human evaluation is primarily used for: (1) correlation with automatic metrics; and (2) mean system performance. We adapt the bootstrap resampling analysis11 of Deutsch et al. (2021) to estimate confidence intervals of these two downstream statistics for coarse and fine. Footnote 11: We slightly modify the algorithm in Deutsch et al. (2021) for inter-annotator variance, see Appendix A. In Figure 2, we plot the 95% confidence intervals of the Pearson correlation of various automatic evaluation metrics against fine-grained and coarse-grained human evaluation data. Across both datasets, fine data leads to much narrower confidence intervals (0.15 vs 0.35 average uncertainty in Pearson correlation on PubMed) for the same number of summaries, implying higher statistical power. In Figure 3, we observe a similar trend with mean system performance. Interestingly, both annotation methods give the same relative ordering of systems (human > bart-dpr > bart for SQuALITY, human > longTS > BigBird for PubMed-block), confirming the alignment of fine and coarse judgments on average. _Recommendation_: Unlike the dominant trend in prior work, fine-grained evaluations should be preferred over coarse grained evaluation for long-form summaries. fine annotations have lower inter-annotator variance than coarse annotations and help localize model errors. In our setup we assume all fine units are equally weighted while aggregating them to the final summary score. Despite this assumption, in our results we observe a consistent relative ordering of systems/metrics between coarse and fine annotations. Nevertheless, non-uniform weighing of units is an interesting future work direction; more in the Limitations section. ### RQ2: Can we reduce annotator workload by partially annotating a long summary? In Section 3.1, we found that fine annotations have lower variance than coarse annotations. However, long summaries may be composed of several units (sentences or phrases) which each require fine annotation. This could make fine annotation very expensive for longer summaries (as also noted in our survey). What if we instead annotate a random subset of units from the summary? While this will lower annotation cost, how accurate would these partial annotations be? We explore this tradeoff by re-using the annotations collected in Section 3.1. For every summary, we randomly sample a fraction of units \(f\in\{0.1,0.2...0.9\}\) and then measure its correlation to the full set of annotations collected. Each annotator gets a different random sample of units for the same summary. In initial experiments, we found that this yielded higher accuracy than when keeping the same set of units per annotator. Figure 4: Accuracy and variance after annotating a fraction of units per summary (X-axis) with fine. Despite annotating just a fraction of the summary, we observe a high segment-level Kendall tau correlation with a full annotation (left). However we observe higher inter-annotator variance as the fraction reduces (right). Confidence intervals shown are 95% and computed across 1000 random subsets (see Appendix F for left plot with Pearson). **Partial annotation has a high correlation to full annotation, but higher variance**: In Figure 4 (_left_) we plot the segment level Kendall's \(\tau\) correlation (relative ordering of summary scores) between a partial annotation and full annotation for different values of \(f\). Overall, we observe a high correlation across different values of \(f\). Despite annotating just half the summary (\(f=0.5\)), in both datasets we observe a high correlation of 0.78-0.89 Kendall's \(\tau\) (95% interval) with a full annotation. Does a partial annotation preserve the variance benefits of fine vs coarse? In Figure 4 (_right_) we plot the inter-annotator variance for different values of \(f\). In both datasets we find that a partial annotation has a higher variance than a full annotation. While for all values of \(f\) in SQuALITY we find that fine annotations still have lower variance than coarse, in PubMed coarse has lower variance than fine for \(f<=0.3\) with 95% confidence. _Recommendation_: Having annotators judge a random subset of units in a long-form summary is a simple way to reduce fine annotation cost, and has high correlation with a full annotation. However, it increases inter-annotator variance. Annotating 50% of the summary results in 0.78-0.89 Kendall's \(\tau\) correlation, with a 30-40% increase in standard deviation compared to full fine annotation. Partial annotation may be limited in its ability to identify issues in summaries with very few errors. However, we find that this is not the case in current systems, which are abundant in faithfulness errors. ### RQ3: Is it useful to align summary units to sentences in the source document? So far, we have focused on design decisions on the summary side of evaluation. However, evaluating faithfulness requires a comparison of facts between a summary and a _source document_. Long-form summaries tend to have long source documents (Table 1): 3.1K words for SQuALITY and 5.1K words for PubMed. In Section 2, we found several mentioned human evaluation is challenging since annotators need to read long source documents. Some prior work has suggested highlighting spans in the source document that align with the summary Hardy et al. (2019); Kryscinski et al. (2020); Vig et al. (2021) as shown in Figure 1. However, these efforts have exclusively focused on news summarization with relatively short source documents, like CNN/DM (804 words) Nallapati et al. (2016) or XSUM (438 words) Narayan et al. (2018). How useful is highlighting based on alignment, or "_hints_", when the spans are chosen from much longer documents? **What is the best highlighting algorithm?** We conduct a study to identify the alignment algorithm best suited for highlighting hints. We manually annotate 125 fine units from human-written summaries of the SQuALITY validation split, marking the sentences best supporting them from the source document. We then test several candidate methods for linking summary units to the source document. These include token overlap methods like ROUGE Lin (2004), retrievers Karpukhin et al. (2020), and fact verifiers Wadden et al. (2022). In Table 4, we find that SuperPAL Ernst et al. (2021), a weakly supervised linking algorithm, performs best (0.61 recall@3 vs the next best 0.47). To improve precision, we filter matches scoring less than 0.3 on SuperPAL, and show at most five highlights. **Do highlighted hints improve summary error detection?** To answer this question, we manually perturb 50 fine summary units in SQuALITY validation summaries, introducing entity errors or negations like Kryscinski et al. (2020). We modify the summary context of the perturbed unit to ensure summaries are self-consistent. Annotators \begin{table} \begin{tabular}{l c c c} \hline \hline Algorithm & R@3 & R@5 & R@10 \\ \hline BM25 (1995) & 0.38 & 0.46 & 0.56 \\ ROUGE-1 (2004) & 0.31 & 0.34 & 0.46 \\ SIM Liu et al. (2019) & 0.37 & 0.52 & 0.60 \\ DPR (2020) & 0.29 & 0.31 & 0.41 \\ BERTScore-DB-XL Chen et al. (2020) & 0.30 & 0.37 & 0.46 \\ Summer-CNLI Chen et al. (2022) & 0.22 & 0.26 & 0.34 \\ MultiVers-FEVER Chen et al. (2022) & 0.47 & 0.58 & 0.71 \\ SuperPAL Chen et al. (2021) & **0.61** & **0.68** & **0.77** \\ \hline \hline \end{tabular} \end{table} Table 4: A comparison of algorithms finding the top source document sentences for summary units in SQuALITY. R@\(k\) (recall@\(k\)) denotes the fraction of times the gold sentence was in the top-\(k\) predictions. \begin{table} \begin{tabular}{l c c c c} \hline \hline Hints & Acc. (\(\uparrow\)) & Agree. (\(\uparrow\)) & Time (secs) (\(\downarrow\)) \\ & (2-way) & (Fleiss) & All & First 5 \\ \hline None & **93\%** & **0.71** & 41.4 & 115.6 \\ SuperPAL & 92\% & 0.64 & 48.2 & 84.6 \\ Gold & 92\% & 0.63 & **40.4** & **60.4** \\ \hline \hline \end{tabular} \end{table} Table 5: Annotator performance (accuracy, agreement, median time) in detecting summary errors with different types of source document highlight hints. Overall, we see little difference across the three settings. are shown 50 perturbed and 50 un-perturbed summaries, and asked to annotate whether the summary units are faithful to the source in three settings:12 (1) no highlighted hints; (2) SuperPAL highlighted hints; (3) gold hints manually annotated by us. In Table 5, we show accuracy, inter-annotator agreement, and median time13 for each setting. Footnote 12: To prevent any bias, each annotator receives only one of these settings for a particular summary. Footnote 13: Calculated using the method in Akoury et al. (2020). **Highlighted hints have almost no effect in evaluating long-form summaries**: Surprisingly, we observe that in all three metrics (accuracy, agreement, median time taken), scores are quite similar across the three settings. In fact, the "no-hint" setting scores slightly higher than the SuperPAL hint settings (93% vs 92% accuracy, 0.71 vs 0.64 Fleiss \(\kappa\)) and takes annotators less time (41.4 vs 48.2 seconds per unit). However, we find that hints helped annotate the first few units of a summary quicker (84.6 secs vs 115.6 secs per unit). We attribute our findings to a _learning effect_ over time. fine annotation of long-form summaries requires annotation of several units for the same document - summary pair. As annotation progresses, annotators get more familiar with the contents of the source document and summary, reducing the need for hints over time. See Appendix E for learning trajectory plots. **Questionnaire with fine annotators confirm limited utility of hints**: Our evaluation so far is limited to perturbed human summaries. How effective are hints on model-generated summaries? To answer this, we ask five of our fine Upwork annotators (from Section 3.1) a set of three questions about their experiences using highlighted hints.14 Detailed questionnaire results along with answer snippets are shown in Table 6. Overall, annotators find hints were useful only sometimes. Hints were _less useful_ when (1) the summary unit was not supported in the source; (2) the summary unit was highly abstractive compared to the source; (3) pronouns, numbers, or abbreviations were involved; and (4) Pubmed summaries were annotated. Al \begin{table} \begin{tabular}{p{142.3pt} p{142.3pt}} \hline \hline **Question \& TL;DR**: 4 out of 5 annotations said **Yes**, 1 said **yes only for PubMed**. Ctrl+F helped locate synonyms, entities. & **Response Snippets** \\ \hline \hline **Pl**: Did you find the highlighted hints useful while making your judgment? & “**With summaries that had poor correctness, the hints were often a mess,** and even correct spans had to be carefully checked. In summaries that were more correct, I could often just read the span and remember that it was correct, and then the **hints helped me find the right source position**, or **refresh my memory** about details.” \\ \hline **TL;DR**: 4 out of 5 annotators said **Sometimes**, 1 said **Yes**. More useful for SQuALITY, summary units copied verbatim from source, correct summaries. & “**They were more useful when the summary was a near verbatim source reproduction.”** \\ \hline **Q**: Would the highlights have been sufficient to make judgments, or was reading the entire source document necessary? & “**In PubMed, they were a little more chaotic,** even for good summaries.”” \\ \hline **TL;DR**: 3 out of 5 annotators said **No**, 2 said **sometimes in SQuALITY**. Reading the entire document was critical. & “**SQuALITY** \\ \hline **Q**: Did you use Ctrl+F searches in the source document while making judgments? & “**Yes, **all the time.** It was usually a **safer bet than using the hints**. The hints are given out of context of the whole SQuALITY story. There were a lot of problems with the PubMed hints involving **numbers**, which I often searched for. They were very rarely supported by the document, or contained wrong symbols (= instead of \(\gamma\)-)” \\ \hline **TL;DR**: 4 out of 5 annotators said **Yes**, 1 said **yes only for PubMed**. Ctrl+F helped locate synonyms, entities. & “**Yes, mostly in cases the highlight did not support the summary unit partially or entirely.”” \\ \hline **TL;DR**: 4 out of 5 annotators said **Yes**, 1 said **yes only for PubMed**. Ctrl+F helped locate synonyms, entities. & “**I told Ctrl+F when looking for very specific words, like **names.** Searching was less helpful when it came to words that had synonyms or emotions.” \\ \hline \hline \end{tabular} \end{table} Table 6: Results and snippets from our questionnaire with fine annotators. Overall, annotators find hints only sometimes useful, and mention reading the entire source document along with keyword searches. most all annotators said it was necessary to read the entire source document before annotation to get an overall idea of the plot and resolve coreferences. Nearly all annotators used "Ctrl+F" searches along with hints to search for specific keywords while making judgments. This was especially true when the summary unit was incorrect, since the source document had to be thoroughly searched (beyond the hints) before confidently marking "Incorrect". _Recommendation_: In contrast to recommendations in prior work, automatically highlighted hints are useful only in some specific cases of long-form summarization: mostly correct summaries, almost verbatim copied sentences. Annotators should be instructed to read the entire source document and to not rely solely on highlighted hints, since that could bias their judgments. Based on a small-scale study, we found SuperPAL Ernst et al. (2021) to be the most accurate method for finding hints, but its performance (61% recall@3) is far from ideal. ### To what extent do our findings generalize to short-form summarization? In this work, we exclusively focus on summarization datasets with an average summary length of at least 150 words. This constraint excludes two popular benchmarks in summarization research over the last five years: CNN/DM Nallapati et al. (2016) and XSUM Narayan et al. (2018). How relevant are our research questions (RQs) and findings for these short-form summarization benchmarks? On average, XSUM (24 words) and CNNDM (60 words) contain much shorter summaries than SQuALITY (237 words). XSUM outputs typically contain only 1 sentence or roughly 2-3 fine units per summary. This blurs the distinction between fine and coarse units, which makes it less useful to study RQ1 in these short-form settings. The shorter length of outputs also implies that evaluation is less expensive and consumes less time, which makes our RQ2 less relevant. Finally, on average, XSUM (440 words) and CNNDM (800 words) also have much shorter source documents than datasets like SQuALITY (5200 words), reducing the need for alignment (the main premise for RQ3). The main motivation behind our study is that human evaluation of long-form summarization datasets like SQuALITY and PubMed is challenging and expensive due to the long length of the generated text. **Overall, our research questions and findings are more relevant for long-form summarization datasets than for short-form summarization datasets like XSUM and CNNDM**. ## 4 Related Work A large body of recent work has focused on new _automatic_ evaluation methods for summarization via NLI-based algorithms Falke et al. (2019); Laban et al. (2022) or QA-based algorithms Wang et al. (2020); Fabbri et al. (2022). Our work focuses on the much less studied area of _human_ evaluation, the gold standard for developing automatic metrics. A notable effort in this space is the **Pyramid method**Nenkova and Passonneau (2004), along with work improving Pyramid efficiency Shapira et al. (2019); Zhang and Bansal (2021). Efficient Pyramid-like protocols have been used to collect large-scale datasets human judgments Bhandari et al. (2020); Liu et al. (2022) in short-form news summarization tasks like CNN/DM. While these efforts focus on salience evaluation and assume access to multiple references, our work focuses on faithfulness and operates in a reference-free setting. Moreover, we focus on _long-form_ summarization tasks like SQuALITY and PubMed, which are much more challenging and expensive to evaluate. Evaluating summary faithfulness relates to **fact verification**Vlachos and Riedel (2014), where claim sentences are checked against a large knowledge source Wikipedia). Prior work Nakov et al. (2021) attempts to simplify the human fact checking process by methods like knowledge source snippets Fan et al. (2020), similar to hint highlights (SS3.3). Faithfulness in summarization differs from fact verification in three ways: (1) summaries are paragraph-long and contextual compared to single sentence stand-alone claims in fact verification; (2) summaries are grounded to a source document, compared to a large knowledge source in fact verification; (3) summaries are model-generated compared to human-written claims in fact checking datasets Thorne et al. (2018); Wadden et al. (2020). ## 5 Conclusion We present the LongEval guidelines, a set of recommendations for moving towards standardized human evaluation of long-form summarization. We empirically analyze each recommendation on two datasets. Overall, we find that (1) fine-grained annotations have lower inter-annotator variance than coarse-grained annotations; (2) partially annotat ing a summary reduces annotator workload while maintaining accuracy; (3) highlighting hints in the source document has limited usefulness for evaluating long-form summaries. As future work, we plan to conduct experiments on other aspects of summarization evaluation like salience and coherence. ## Limitations Human evaluation is a noisy process with many **confounding variables**. Some of these variables were kept constant among experiments on a dataset, but modifying them could change the trends in the results. These include: (1) number of annotations per summary; (2) the specific annotation interface used; (3) granularity for fine evaluation (sentences vs phrases); (4) Number of points in the Likert scale for coarse evaluation; (5) set of summarization systems evaluated; and finally (6) relative (eg: A/B tests) vs absolute evaluation (eg: Likert), which has been discussed in Tang et al. (2022) for short-form news summarization datasets like CNN/DM. Our paper is **limited to faithfulness evaluation**, but summaries are typically evaluated for salience, fluency, coherence as well (Fabbri et al., 2021). While fluency may be less of an issue due to large-scale language model pretraining (Dou et al., 2021), coherence and salience are important aspects to evaluate especially in long-form summarization (Goyal et al., 2022). Our findings may not generalize to evaluation of coherence or salience. Our experiments in Section 3.1 **assigned an equal weight** to each fine unit while calculating the overall score of the summary. However, the faithfulness of some fine units may be more important than others. A non-uniform weighing of fine units may be a good strategy if there is a notion of how critical a particular unit is for a summary's correctness. For example: (1) PICO units are critical in medical summaries (DeYoung et al., 2021); (2) the Pyramid scheme (Nenkova and Passonneau, 2004) uses a reference frequency-based unit importance, assuming access to multiple gold references. However, a consistent notion of importance is difficult to establish across different domains, and also depends on an individual consumer's preferences. Designing non-uniform weighing schemes is an interesting direction for future research. ## Ethical Considerations All experiments involving human evaluation in this paper were exempt under institutional IRB review. We fairly compensated each Upwork freelancer involved in this study, at a rate of 15-20$ per hour (respecting their suggested Upwork hourly wage). For each round of annotation, we estimated the average amount of time the task would take (by running pilots among ourselves), and provided annotators with the estimated time requirement. Most freelancers finished the task within the time window, but sometimes exceeded it by 0.5-1 hr. We compensated freelancers based on the actual time they took and their hourly wage, rather than a fixed amount per annotation. ## Acknowledgments First and foremost, we would like to thank all the nine Upwork freelancers who contributed human annotations to this project. We are very grateful to Yixiao Song, Alex Wang, John Giorgi, Dustin Wright, Yulia Otmakhova, Daniel Deutsch, Arie Cattan, Shiyue Zhang, Tanya Goyal, Greg Durrett, Marzena Karpinska, Ankita Gupta, Nader Akoury and the Semantic Scholar team for several useful discussions at various points during the project. This work was mostly done while Kalpesh Krishna (KK) was an intern at the Allen Institute for Artificial Intelligence. KK was partly supported by a Google PhD Fellowship awarded in 2021. Author Contributions:Kalpesh Krishna led the project and performed all the technical contributions including literature review, dataset collection and processing, model implementation, annotation interface development, running experiments, and data analysis. Kalpesh also contributed to project scoping and ideation and led the writing of the paper. Erin Bransom and Bailey Kuehl helped with obtaining human judgements, including piloting the task and giving feedback, performing the annotation themselves, and hiring and managing annotators on Upwork. Pradeep Dasigi, Arman Cohan, and Kyle Lo were mentors of the project during and after Kalpesh's internship, contributing equally to project scoping, experimental design, ideation and direction throughout the course of the project and paper writing. Mohit Iyyer provided mentorship after the internship, in particular providing important feedback and direction on data analysis and contributing to paper writing.