2301.11504
Traveling waves in reaction-diffusion equations with delay in both diffusion and reaction terms
We study the existence of traveling waves of reaction-diffusion systems with delays in both diffusion and reaction terms of the form $\partial u(x,t)/\partial t = \Delta u(x,t-\tau_1)+f(u(x,t),u(x,t-\tau_2))$, where $\tau_1,\tau_2$ are positive constants. We extend the monotone iteration method to systems that satisfy typical monotone conditions by thoroughly studying the sign of the Green function associated with a linear functional differential equation. Namely, we show that for small positive $r$ the functional equation $x''(t)-ax'(t+r)-bx(t+r)=f(t)$, where $a\not=0, b>0$ has a unique bounded solution for each given bounded and continuous $f(t)$. Moreover, if $r>0$ is sufficiently small, $f(t)\ge 0$ for $t\in {\mathbb R}$, then the unique bounded solution $x_f(t)\le 0$ for all $t\in {\mathbb R}$. In the framework of the monotone iteration method that is developed based on this result, upper and lower solutions are found for Fisher-KPP and Belousov-Zhabotinski equations to show that traveling waves exist for these equations when delays are small in both diffusion and reaction terms. The obtained results appear to be new.
William Barker, Nguyen Van Minh
2023-01-27T02:41:08Z
http://arxiv.org/abs/2301.11504v3
# Traveling waves in reaction-diffusion equations with delay in both diffusion and reaction terms ###### Abstract. We study the existence of traveling waves of reaction-diffusion systems with delays in both diffusion and reaction terms of the form \(\partial u(x,t)/\partial t=\Delta u(x,t-\tau_{1})+f(u(x,t),u(x,t-\tau_{2}))\), where \(\tau_{1},\tau_{2}\) are positive constants. We extend the monotone iteration method to systems that satisfy typical monotone conditions by thoroughly studying the sign of the Green function associated with a linear functional differential equation. Namely, we show that for small positive \(r\) the functional equation \(x^{\prime\prime}(t)-ax^{\prime}(t+r)-bx(t+r)=f(t)\), where \(a\neq 0,b>0\) has a unique bounded solution for each given bounded and continuous \(f(t)\). Moreover, if \(r>0\) is sufficiently small, \(f(t)\geq 0\) for \(t\in\mathbb{R}\), then the unique bounded solution \(x_{f}(t)\leq 0\) for all \(t\in\mathbb{R}\). In the framework of the monotone iteration method that is developed based on this result, upper and lower solutions are found for Fisher-KPP and Belousov-Zhabotinski equations to show that traveling waves exist for these equations when delays are small in both diffusion and reaction terms. The obtained results appear to be new. Key words and phrases:Traveling waves; reaction-diffusion equations 2000 Mathematics Subject Classification: Primary: 35C07 ; Secondary: 35K57 ## 1. Introduction A typical model considered in our paper may look like \[\frac{\partial u(x,t)}{\partial t}=\Delta u(x,t-\tau_{1})+u(x,t-\tau_{2})(1-u(x,t)), \tag{1.1}\] where \(\tau_{1},\tau_{2}\) are given positive constants. When \(\tau_{1}=\tau_{2}=0\), the existence of traveling wave solutions and their stability in the above equation that becomes a Fisher-KPP equation \[\frac{\partial u(x,t)}{\partial t}=\Delta u(x,t)+u(x,t)(1-u(x,t)),\] where \(x\in\mathbb{R}\), are very well studied. The reader can find a complete account of the results and concepts in the classical monograph [8]. A rather more general and complete account of the existence and properties of traveling waves for parabolic equations can be found in [25]. A traveling wave solution \(u(x,t)\) of Eq.(1.1) is defined as a twice continuously differentiable numerical function \(\phi\) such that \(u(x,t)=\phi(x\pm ct)\) for all \(x,t\in\mathbb{R}\), where \(c\) is a constant, and \[\lim_{t\to-\infty}\phi(t)=0,\ \ \lim_{t\to\infty}\phi(t)=1. \tag{1.2}\] Such functions \(\phi\) are often called the traveling wave front, where \(c\) is the wave speed. The search for such functions \(\phi\) is carried out by investigating the existence of solutions to an ordinary differential equation after the invariant substitution \(u(x,t)=\phi(x\pm ct)\). Namely, one studies special solutions of the equation \[\pm c\phi^{\prime}(\xi)=\phi^{\prime\prime}(\xi)+\phi(\xi)(1-\phi(\xi)) \tag{1.3}\] that satisfy (1.2). The first attempt to study the existence and properties of traveling waves in reaction-diffusion equations with delay in reaction term was made in [20]. Subsequently, a numerous amount of work has been done to study the affect of delay on the existence of traveling waves as well as the properties in diffusion-reaction systems, see e.g. [7], [13], [14], [24][27] and the references there in for a few. It is noticed that the delay in the reaction-diffusion models that were considered so far was incorporated in the reaction term only. 
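For orientation, the classical non-delayed wave ODE (1.3) can be explored numerically. The following Python sketch is illustrative only (the speed \(c=2.5\) and the offset `eps` are arbitrary choices, and SciPy is assumed): it traces the front connecting the boundary values in (1.2) by integrating backwards in \(\xi\) from the saddle point at \(\phi=1\) along its stable eigendirection, which is the numerically robust way to follow the heteroclinic orbit.

```python
# Illustrative sketch (not from the paper): the classical Fisher-KPP wave ODE
# phi'' = c*phi' - phi*(1 - phi), i.e. Eq. (1.3) with the "+" sign, traced by
# integrating backwards in xi from the saddle at phi = 1 along its stable
# eigenvector.  c and eps are arbitrary test choices; SciPy is assumed.
import numpy as np
from scipy.integrate import solve_ivp

c = 2.5                                         # any wave speed c >= 2 works here
s_minus = (c - np.sqrt(c**2 + 4.0)) / 2.0       # stable eigenvalue of the saddle at phi = 1
eps = 1e-6

def rhs(xi, y):
    phi, psi = y
    return [psi, c*psi - phi*(1.0 - phi)]

# start just below phi = 1 on the stable manifold and integrate backwards in xi
y0 = [1.0 - eps, -eps*s_minus]
sol = solve_ivp(rhs, [0.0, -80.0], y0, max_step=0.05, rtol=1e-9, atol=1e-12)

phi = sol.y[0]
print("phi stays in (0, 1):", bool(np.all((phi > -1e-8) & (phi < 1.0 + 1e-8))))
print("phi at xi = %.1f : %.3e (should be near 0)" % (sol.t[-1], phi[-1]))
```

The monotone profile produced this way is exactly the kind of front whose existence, in the presence of delays in both terms, is the subject of this paper.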
That leaves open the question of what happens if there is a delay in the diffusion term of the models. The purpose of this paper is to study the existence of traveling waves in reaction-diffusion models in which delay appears in both the reaction and the diffusion terms and which satisfy some typical monotone conditions. We will extend the method of monotone iteration to prove the existence of traveling waves for models with delay appearing in both diffusion and reaction terms. For an account of this method the reader may see, e.g., [2, 14, 27]. One of the very first and crucial steps in this method is to study the bounded solutions whose existence is established by the Perron Theorem for functional equations. In fact, we need to know whether the unique bounded solution to the equation \[x^{\prime\prime}(t)-ax^{\prime}(t+r)-bx(t+r)=f(t) \tag{1.4}\] is negative whenever the function \(f(t)\) is positive, where \(a>0,b>0\), \(r=\tau_{1}c\), and \(c\) is the wave speed of the traveling wave. When \(\tau_{1}=0\) this equation is an ODE and the Green function can be found explicitly, so the question of negativity or positivity of the unique bounded solution can be answered easily. The problem is much more complicated when \(\tau_{1}>0\), as there is no explicit expression for the Green function. The existence of the latter is very well known in the theory of functional differential equations. A great deal of this work will be devoted to the study of the sign of the bounded solutions. We will use the theory developed by Mallet-Paret in [15], combined with some standard techniques of complex function theory, to show that for sufficiently small \(\tau_{1}>0\) the characteristic equation of Eq.(1.4) has no root on the imaginary axis, so the Green function exists and is negative. This allows us to extend the monotone iteration method to prove the existence of traveling waves for reaction-diffusion equations with delay appearing in both diffusion and reaction terms.

This paper is organized as follows: Section 3 is devoted to the study of linear functional equations of the form (1.4). We answer the question of whether Eq.(1.4) has a unique bounded solution when \(f(\cdot)\) is a given bounded and continuous function. Then, we study the negativity of the Green function in Section 4. Theorem 4.1 is the key to the construction of the monotone iteration method. In Section 5 we prove Theorem 5.10, which summarizes the whole construction of the monotone iteration method. Constructing upper and lower solutions to the wave equations of particular models is always a nontrivial step in applications of the monotone iteration method. We show that for the Fisher-KPP and Belousov-Zhabotinskii equations upper and lower solutions can be found explicitly. To the best of our knowledge this paper is the first study of traveling waves in reaction-diffusion equations with delay appearing in both the diffusion and reaction terms. The treatment of linear functional equations in Section 4 may be of independent interest. The results obtained in this paper appear to be new.

## 2. Preliminaries and notations

### Notations

In this paper we use some standard notations, such as \(\mathbb{R},\mathbb{C}\) for the fields of real and complex numbers. \(\Re z\) and \(\Im z\) denote the real part and imaginary part of a complex number \(z\).
The space of all bounded and continuous functions from \(\mathbb{R}\to\mathbb{R}^{n}\) is denoted by \(BC(\mathbb{R},\mathbb{R}^{n})\) which is equipped with the sup-norm \(\|f\|:=\sup_{t\in\mathbb{R}}\|f(t)\|\). \(BC^{k}(\mathbb{R},\mathbb{R}^{n})\) stands for the space of all \(k\)-time continuously differentiable functions \(\mathbb{R}\to\mathbb{R}^{n}\) such that all derivatives up to order \(k\) are bounded. If the boundedness is dropped from the above function spaces we will simply denote them by \(C(\mathbb{R},\mathbb{R}^{n})\) and \(C^{k}(\mathbb{R},\mathbb{R}^{n})\). We will use the natural order in \(BC(\mathbb{R},\mathbb{R}^{n})\) that is defined as follows: For \(f,g\in BC(\mathbb{R},\mathbb{R}^{n})\) we say that \(f\leq g\) if and only if \(f(t)\leq g(t)\) for all \(t\in\mathbb{R}\), and we will say that \(f<g\) if \(f(t)\leq g(t)\) for all \(t\in\mathbb{R}\), and \(f(t)\neq g(t)\) for all \(t\in\mathbb{R}\). An operator \(P\) acting in a subset \(S\) of \(BC(\mathbb{R},\mathbb{R}^{n})\) is monotone if it preserves the natural order in the subset. That is, \(Pf\leq Pg\) whenever \(f\leq g\) and \(f,g\in BC(\mathbb{R},\mathbb{R}^{n})\). Note that if \(S\) is the whole space \(BC(\mathbb{R},\mathbb{R}^{n})\) and \(P\) is linear, then \(P\) is monotone if and only if it maps a positive function into a positive function, where \(f\in BC(\mathbb{R},\mathbb{R}^{n})\) is positive if \(f(t)\geq 0\) for all \(t\in\mathbb{R}\). A constant function \(f(t)=\alpha\) for all \(t\in\mathbb{R}\) will be denoted by \(\hat{\alpha}\). We will write \(L^{p}\) for the space \(L^{p}(\mathbb{R},\mathbb{C}^{n})\) of \(L^{p}\) vector valued functions on the line when \(n\) is a clear given positive integer. We assume that \(1\leq p\leq\infty\) and denote the space \[W^{1,p}=\{f\in L^{p}|\ f\ \text{is absolutely continuous, and}\ f^{\prime}\in L^{p}\}.\] Recall that the continuous embedding \[W^{1,p}\subset L^{\infty},\ 1\leq p\leq\infty.\] ### Rouche's Theorem Let \(A\) be an open subset of \(\mathbb{C}\), \(f\) and \(g\) are two analytic functions on \(A\). A piecewise continuously differentiable function \(\gamma:[a,b]\to\mathbb{C}\) such that \(\gamma(a)=\gamma(b)\) is called a circuit. The following theorem ([6, Rouche's Theorem, p. 247]) will be used later on: **Theorem 2.1**.: _let \(A\subset\mathbb{C}\) be a simply connected domain, \(f,g\) two analytic complex valued functions in \(A\). Let \(T\) be the (at most denumerable) set of zeros of \(f\), \(T^{\prime}\) the set of zeros of \(f+g\) in \(A\), \(\gamma\) a circuit in \(A-T\), defined on an interval \(I\). Then, if \(|g(z)|<|f(z)|\) in \(\gamma(I)\), the function \(f+g\) has no zeros on \(\gamma(I)\), and_ \[\sum_{a\in T}j(a;\gamma)\omega(a;f)=\sum_{b\in T^{\prime}}j(b;\gamma)\omega(b ;f+g), \tag{2.1}\] _where \(j(a;\gamma)\) is the index of \(\gamma\) with respect to \(a\), and \(\omega(a,f)\) is the multiplicity of the zeros of \(f\) at \(a\)._ As a consequence of the theorem, if \(\gamma\) is a simple closed circuit that encircles \(T\), then, \(j(a,\gamma)=1\) for all \(a\) inside \(\gamma(I)\) and the total zeros (counting multiplicities) of \(f\) and \(f+g\) inside the circuit \(\gamma\) are the same. 
### Perron Theorem for functional differential equations This subsection is concerned with well known results (Perron Theorem) on the existence of a unique bounded solutions to mixed differential equations of the form \[x^{\prime\prime}(t)-ax^{\prime}(t+r)-bx(t+r)=f(t), \tag{2.2}\] \(a\neq 0,b>0\), if \(r>0\) the equations are advanced, and if \(r<0\) they are delayed equations, \(f(\cdot)\) is in \(BC(\mathbb{R},\mathbb{R})\) or \(L^{\infty}(\mathbb{R},\mathbb{R})\). We recall the Perron Theorem in an extended form as below. Consider functional differential equations of the form \[x^{\prime}(t)=\sum_{j=1}^{N}A_{j}x(t+r_{j})+h(t),\ x(t)\in\mathbb{R}^{n}, \tag{2.3}\] where \(r_{j}\) are positive constants \(j=1,2,\cdots,N\), \(h\in L^{p}(\mathbb{R},\mathbb{R}^{n})\), \(A_{j}\) are \(n\times n\)-matrices. Let us denote the characteristic equation associated with Eq.(2.3) by \[\Delta(\lambda):=\lambda I-\sum_{j=1}^{N}e^{\lambda r_{j}}A_{j}.\] When \(\det\Delta(\lambda)\neq 0\) for all \(\lambda\in i\mathbb{R}\), Eq.(2.3) is called hyperbolic. Let us define the operator \(\mathcal{L}\) as follows \[[\mathcal{L}x](t)=x^{\prime}(t)-\sum_{j=1}^{N}A_{j}x(t+r_{j}),\] where \(x(\cdot)\in W^{1,p}\) for some \(1\leq p\leq\infty\). The following theorem will be used in our construction of the monotone iteration later to show the existence of traveling waves: **Theorem 2.2**.: _Assume that Eq.(2.3) is hyperbolic. Then, the operator \(\mathcal{L}\) is an isomorphism from \(W^{1,p}\) onto \(L^{p}\) for \(1\leq p\leq\infty\) with inverse given by convolution_ \[(\mathcal{L}^{-1}h)(\xi)=(G*h)(\xi)=\int_{-\infty}^{\infty}G(\xi-s)h(s)ds,\] _where_ \[G(\xi)=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i\xi\eta}\Delta(i\eta)^{-1}d\eta\] _that enjoys the estimate_ \[\|G(\xi)\|\leq Ke^{-\alpha|\xi|}\] _for some \(K>0\) and \(\alpha>0\). In particular, for each \(h\in L^{p}\) there exists a unique solution \(x(\cdot)=\mathcal{L}^{-1}h\in W^{1,p}\) to the inhomogeneous equation (2.3)._ A special case of this theorem when Eq.(2.3) becomes Eq(2.2) is the following. Note that when \(f\) is continuous the theorem is the classical Perron Theorem in which all solutions mentioned in the statements are classical solutions (see [16, 19]). **Theorem 2.3**.: _Consider a ordinary functional differential equation,_ \[x^{\prime\prime}(t)-ax^{\prime}(t+r)-bx(t+r)=f(t), \tag{2.4}\] _where \(a\neq 0,b>0,r\in\mathbb{R}\) and \(f\in BC(\mathbb{R},\mathbb{R})\) with associated characteristic function \(P(\lambda)=\lambda^{2}-a\lambda e^{r\lambda}-be^{r\lambda}.\) If \(P(i\xi)\neq 0\) for all \(\xi\in\mathbb{R},\) then the differential equation has a unique bounded solution. Moreover, the solution is given by_ \[x_{f}(t)=(G*f)(t)=\int_{-\infty}^{\infty}G(t-s)f(s)ds,\] _where \(G(t)\) is a Green's function that exponentially decays to 0 as \(|t|\to\infty.\) In particular, if \(f\in L^{\infty}\), then there exists a unique \(x(\cdot)\) such that \(x(\cdot)\), \(x^{\prime}(\cdot)\) are absolutely continuous, \(x^{\prime\prime}(\cdot)\in L^{\infty}\)._ **Remark 2.4**.: As shown in [2], removing the absolute continuity of \(x^{\prime}(\cdot)\) would be wrong. This will affect the way we construct upper and lower solutions later in Section 6. ## 3. Qualitative analysis of characteristic equations of linear functional differential equations In this section we will develop qualitative results for second order linear functional differential equations of mixed type. 
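As a concrete warm-up, Theorem 2.3 in the non-delayed case \(r=0\) can be checked numerically against the explicit kernel written out in (3.2)-(3.3) below. The sketch is illustrative only: \(a\), \(b\) and the forcing \(f\) are arbitrary test choices, and SciPy is assumed; it confirms both the sign property of the bounded solution and the equation itself, up to discretization error.

```python
# Illustrative check of Theorem 2.3 with r = 0, using the explicit kernel of
# formulas (3.2)-(3.3) below: for a sample nonnegative forcing f, the bounded
# solution x_f is nonpositive and satisfies x'' - a x' - b x = f.
import numpy as np
from scipy.integrate import quad

a, b = 1.0, 2.0
lam1 = (a + np.sqrt(a*a + 4*b)) / 2.0      # positive root of s^2 - a s - b
lam2 = (a - np.sqrt(a*a + 4*b)) / 2.0      # negative root
f = lambda s: 1.0 / (1.0 + s*s)            # bounded, continuous, f >= 0

def x_f(t):
    left  = quad(lambda s: np.exp(lam2*(t - s))*f(s), -np.inf, t,
                 epsabs=1e-12, epsrel=1e-12)[0]
    right = quad(lambda s: np.exp(lam1*(t - s))*f(s), t, np.inf,
                 epsabs=1e-12, epsrel=1e-12)[0]
    return (left + right) / (lam2 - lam1)

ts = np.linspace(-5.0, 5.0, 11)
print("x_f <= 0 at all sample points:", all(x_f(t) <= 0.0 for t in ts))

h = 1e-2                                    # central differences for x' and x''
for t in (-2.0, 0.0, 1.5):
    xpp = (x_f(t + h) - 2*x_f(t) + x_f(t - h)) / h**2
    xp  = (x_f(t + h) - x_f(t - h)) / (2*h)
    print(f"residual of the ODE at t = {t:4.1f}: {xpp - a*xp - b*x_f(t) - f(t):.1e}")
```

The residuals are small (of the order of the finite-difference error), and all sampled values of \(x_{f}\) are nonpositive, in line with the sign discussion that follows.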
We will incorporate functional arguments in both the \(x(\cdot)\) and \(x^{\prime}(\cdot)\) terms. We will build the aforementioned results on qualitative results for non-delayed ordinary differential equations. The second order non-delayed equation \[x^{\prime\prime}(t)-ax^{\prime}(t)-bx(t)=f(t), \tag{3.1}\] where \(a\neq 0,b>0,\) and \(f(t)\) is a bounded continuous function for \(t\in\mathbb{R}\), has a unique bounded solution on \(\mathbb{R}.\) This is a direct consequence of the Perron Theorem. To see this, note that the characteristic polynomial is \[\Delta_{1}(\lambda)=\lambda^{2}-a\lambda-b.\] Its roots are \[\lambda_{1}=\frac{a+\sqrt{a^{2}+4b}}{2}>0,\quad\lambda_{2}=\frac{a-\sqrt{a^{2}+4b}}{2}<0.\] The bounded solution of the differential equation is \[x_{f}(t)=\int_{-\infty}^{\infty}G(t-s)f(s)ds.\] A simple computation shows that \[x_{f}(t)=\frac{1}{\lambda_{2}-\lambda_{1}}\left(\int_{-\infty}^{t}e^{\lambda_{2}(t-s)}f(s)ds+\int_{t}^{+\infty}e^{\lambda_{1}(t-s)}f(s)ds\right), \tag{3.2}\] and thus, \[G(\xi)=\begin{cases}e^{\lambda_{1}\xi}/(\lambda_{2}-\lambda_{1}),\text{ if }\xi<0,\\ e^{\lambda_{2}\xi}/(\lambda_{2}-\lambda_{1}),\text{ if }\xi\geq 0.\end{cases} \tag{3.3}\] The Green's function is obviously negative, so for all \(t\in\mathbb{R}\) \[x_{f}(t)\leq 0,\text{ if }f(t)\geq 0.\]

### Characteristic roots of functional differential equations

This subsection is concerned with the formulation and proof of some results, important for our later use, for equations of the form \[x^{\prime\prime}(t)-ax^{\prime}(t+r)-bx(t+r)=f(t), \tag{3.4}\] where \(a\neq 0,b>0\) and \(r\in\mathbb{R}\) is assumed to be small. The characteristic equation of this equation is \[z^{2}-aze^{rz}-be^{rz}=0. \tag{3.5}\] Equations of the form (3.5) are harder to handle than polynomial equations, as their left-hand sides are no longer polynomials but transcendental functions. Such "exponential polynomials" have been studied in [4, 10, 22, 26]. Often they can be analyzed by methods from complex analysis. In fact, Rouch\'e's Theorem (due to the French mathematician Eug\`ene Rouch\'e), recalled in Section 2, will be employed to prove the following result.

**Proposition 3.1**.: For given \(a\neq 0,b>0\), the following assertions are true:

(i) For all sufficiently small \(|r|\), Eq.(3.5) has no root on the imaginary axis;

(ii) For all sufficiently small \(r\leq 0\) (\(r\geq 0\), respectively), Eq. (3.5) has exactly one root in the right half of the complex plane (exactly one root in the left half of the complex plane, respectively), and this root depends continuously on \(r\);

(iii) For all sufficiently small \(r\leq 0\) (\(r\geq 0\), respectively) there exists only a single root in the strip \(\{z\in\mathbb{C}|\ 2\lambda_{2}\leq\Re z\leq 0\}\) (only a single root in the strip \(\{z\in\mathbb{C}|\ 0\leq\Re z\leq 2\lambda_{1}\}\), respectively), and this root depends continuously on \(r\).

Proof.: Consider the functions \(\alpha(z)\) and \(\beta(z)\) defined as \[\alpha(z):=(b+az)(1-e^{rz});\quad\beta(z):=z^{2}-az-b.\] Then, \[\gamma(z):=\alpha(z)+\beta(z)=z^{2}-aze^{rz}-be^{rz}.\] Part (i): We will show that for sufficiently small \(|r|\) we have \(|\alpha(z)|<|\beta(z)|\) for all \(z\) on the imaginary axis \(i\mathbb{R}\), so that \(|\gamma(z)|\geq|\beta(z)|-|\alpha(z)|>0\) there and Eq.(3.5) cannot have any root on \(i\mathbb{R}\). Consider the function \(\alpha(z)\) on the line segment \(\{i\xi,\ -R\leq\xi\leq R\}\), where \(R>0\) is a positive number.
We have \[|\alpha(i\xi)| = |b+ai\xi|\cdot|1-e^{ir\xi}| = \sqrt{b^{2}+(a\xi)^{2}}\sqrt{(1-\cos(r\xi))^{2}+\sin^{2}(r\xi)} = \sqrt{b^{2}+(a\xi)^{2}}\sqrt{4\sin^{4}(r\xi/2)+4\cos^{2}(r\xi/2)\sin^{2}(r\xi/2)} = 2\sqrt{b^{2}+(a\xi)^{2}}\,|\sin(r\xi/2)|.\] On the other hand, \[|\beta(i\xi)|=|-\xi^{2}+ai\xi-b|=\sqrt{(\xi^{2}+b)^{2}+(a\xi)^{2}}. \tag{3.7}\] Therefore, using \(|\sin(r\xi/2)|\leq|r\xi|/2\), for every \(\xi\in\mathbb{R}\), \[\frac{|\alpha(i\xi)|}{|\beta(i\xi)|}=\frac{2\sqrt{b^{2}+(a\xi)^{2}}}{\sqrt{(\xi^{2}+b)^{2}+(a\xi)^{2}}}\,|\sin(r\xi/2)|\leq|r|\,\frac{|\xi|\sqrt{b^{2}+(a\xi)^{2}}}{\sqrt{(\xi^{2}+b)^{2}+(a\xi)^{2}}}\leq C(a,b)\,|r|, \tag{3.8}\] where \(C(a,b):=\sup_{\xi\in\mathbb{R}}|\xi|\sqrt{b^{2}+(a\xi)^{2}}/\sqrt{(\xi^{2}+b)^{2}+(a\xi)^{2}}\) is finite, since the quotient behaves like \(|a|\) as \(|\xi|\to\infty\) and like \(|\xi|\) near \(\xi=0\). Hence, for \(|r|<1/C(a,b)\) we have \(|\alpha(z)|<|\beta(z)|\) on the imaginary axis, and Eq.(3.5) has no root on the imaginary axis.

Part (ii): We are going to show that for sufficiently small \(r\leq 0\) the function \(\gamma(z)\) has exactly one root in the half plane \(\{z\in\mathbb{C}|\ \Re z>0\}\), by a careful application of Rouch\'e's Theorem 2.1. To that end, we consider a positively oriented contour \(C\) consisting of the line segment \(\{z\in\mathbb{C}|\ z=i\xi,-R\leq\xi\leq R\}\) and the half circle \(\{z\in\mathbb{C}|\ z=Re^{i\theta},-\pi/2\leq\theta\leq\pi/2\}\), where \(R\) is a given large positive number. We will choose the radius \(R\) sufficiently large so that \(|\alpha(z)|<|\beta(z)|\) for all \(z\) on the half circle. In fact, if \(z=Re^{i\theta}\) with \(-\pi/2\leq\theta\leq\pi/2\), then \(\cos(\theta)\geq 0\), so for \(r\leq 0\) \[|1-e^{rz}| =|1-e^{rR(\cos(\theta)+i\sin(\theta))}| =|1-e^{rR\cos(\theta)}\cdot e^{irR\sin(\theta)}| \leq 1+e^{rR\cos(\theta)} \leq 2.\] Therefore, for \(z=Re^{i\theta},-\pi/2\leq\theta\leq\pi/2\), we have \[\limsup_{R\to\infty}\frac{|\alpha(z)|}{|\beta(z)|} =\limsup_{R\to\infty}\left|\frac{(b+az)(1-e^{rz})}{z^{2}-az-b}\right| \leq 2\lim_{R\to\infty}\frac{|b+az|}{|z^{2}-az-b|}=0.\] Consequently, for a fixed sufficiently large \(R>0\), we have \[|\alpha(z)|<|\beta(z)| \tag{3.9}\] for all \(z\in\{z\in\mathbb{C}|\ z=Re^{i\theta},-\pi/2\leq\theta\leq\pi/2\}\). By Part (i), \(|\alpha(z)|<|\beta(z)|\) for all \(z\) on the imaginary axis, provided \(|r|\) is sufficiently small. Hence, for sufficiently small \(r\leq 0\) and sufficiently large \(R>0\), we have \[|\alpha(z)|<|\beta(z)| \tag{3.10}\] for each \(z\in C\). Note that \(\beta(z)\) has no root on \(C\), so by Rouch\'e's Theorem 2.1, inside the contour \(C\) the number of roots (counting multiplicities) of Eq.(3.5) is the same as that of the quadratic equation \(z^{2}-az-b=0\), namely one simple root. As \(R>0\) can be chosen arbitrarily large, this yields that in the right half plane there exists exactly one simple root of Eq.(3.5). Furthermore, the continuous dependence on \(r\) follows from the Implicit Function Theorem (see, e.g., [6]). The case \(r\geq 0\) is treated in the same way, using the half circle \(\{z=Re^{i\theta},\ \pi/2\leq\theta\leq 3\pi/2\}\) and the left half plane.

Part (iii): We will estimate \(|\alpha(z)|/|\beta(z)|\) on the boundary of the rectangle \(\Xi\) defined as \(\{z\in\mathbb{C}|\ 2\lambda_{2}\leq\Re z\leq 0,-R\leq\Im z\leq R\}\) for a positive number \(R\). By Part (i), for sufficiently small \(r<0\) the ratio \(|\alpha(z)|/|\beta(z)|<1\) on the imaginary axis. For brevity, denote \(\delta:=-2\lambda_{2}>0\) and consider the rectangle's edge \(\{z\in\mathbb{C}|\ z=-\delta+i\xi,-R\leq\xi\leq R\}\).
On the line \(\{z\in\mathbb{C}|z=-\delta+i\xi,\xi\in\mathbb{R}\}\) we have \[|\alpha(-\delta+i\xi)| = |b+a(-\delta+i\xi)|\cdot|1-e^{-r\delta+ir\xi}| = \sqrt{(b-a\delta)^{2}+(a\xi)^{2}}\sqrt{1-2e^{-r\delta}\cos(r\xi)+e^{-2r\delta}} \leq \sqrt{(b-a\delta)^{2}+(a\xi)^{2}}\,(1+e^{-r\delta}). \tag{3.11}\] Therefore, since \(|\beta(-\delta+i\xi)|\) grows like \(\xi^{2}\), it is easily seen that \[0\leq\limsup_{|\xi|\to\infty}\frac{|\alpha(-\delta+i\xi)|}{|\beta(-\delta+i\xi)|}\leq(1+e^{-r\delta})\lim_{|\xi|\to\infty}\frac{\sqrt{(b-a\delta)^{2}+(a\xi)^{2}}}{|(-\delta+i\xi)^{2}-a(-\delta+i\xi)-b|}=0.\] Hence there exists a positive constant \(K\) such that \[\frac{|\alpha(-\delta+i\xi)|}{|\beta(-\delta+i\xi)|}\leq\frac{1}{2}\quad\text{whenever }|\xi|\geq K.\] On the other hand, since \(\beta\) has no zero on this edge and \[\lim_{r\to 0}\sup_{|\xi|\leq K}\sqrt{1-2e^{-r\delta}\cos(r\xi)+e^{-2r\delta}}=0,\] for sufficiently small \(|r|\) we also have \[\frac{|\alpha(-\delta+i\xi)|}{|\beta(-\delta+i\xi)|}<\frac{1}{2} \tag{3.12}\] for all \(|\xi|\leq K\). This yields that for sufficiently small \(|r|\), \(|\alpha(z)|/|\beta(z)|<1/2\) on the edge \(\{z\in\mathbb{C}|\ z=-\delta+i\xi,-R\leq\xi\leq R\}\). On the remaining edges of the rectangle, \(\{z\in\mathbb{C}|\ z=\xi+iR,-\delta\leq\xi\leq 0\}\) and \(\{z\in\mathbb{C}|\ z=\xi-iR,-\delta\leq\xi\leq 0\}\), we have \[|\alpha(\xi\pm iR)| =|b+a(\xi\pm iR)|\cdot|1-e^{r(\xi\pm iR)}| \leq\sqrt{(|b|+|a|\delta)^{2}+(aR)^{2}}\cdot(1+e^{-r\delta}). \tag{3.13}\] Therefore, for sufficiently large \(R\), \[\frac{|\alpha(\xi\pm iR)|}{|\beta(\xi\pm iR)|}\leq\frac{1}{2}. \tag{3.14}\] By Rouch\'e's Theorem 2.1, inside the rectangle the equation \(\gamma(z)=0\) has exactly one root, since the equation \(\beta(z)=0\) has exactly one root there, namely \(\lambda_{2}\). Since the number \(R\) can be any large positive number, it follows that in the strip mentioned above there exists only one root. Next, by the Implicit Function Theorem, this root depends continuously on \(r\). The case \(r\geq 0\) is treated similarly.

Before proceeding we need the following claim:

**Claim 3.2**.: Let \(a\neq 0\) and \(b>0\) be any numbers. Then, \[a(a+\sqrt{a^{2}+4b})+2b>0, \tag{3.15}\] \[a(a-\sqrt{a^{2}+4b})+2b>0. \tag{3.16}\]

Proof.: Since \(b>0\) we have \[a^{2}(a^{2}+4b)<a^{4}+4a^{2}b+4b^{2}=(a^{2}+2b)^{2},\] so \[-(a^{2}+2b)<a\sqrt{a^{2}+4b}<a^{2}+2b. \tag{3.17}\] The first inequality in (3.17) yields (3.15), and the second inequality in (3.17) yields (3.16).

**Lemma 3.3**.: _Denote by \(\eta_{1}\) and \(\eta_{2}\) the unique roots of Eq. (3.5) in the strips \(\{0\leq\Re z\leq 2\lambda_{1}\}\) and \(\{2\lambda_{2}\leq\Re z\leq 0\}\), respectively. Then,_ \[\lim_{r\downarrow 0}\Re\left(\frac{d\eta_{1}(r)}{dr}\right)>0,\quad\lim_{r\downarrow 0}\Re\left(\frac{d\eta_{2}(r)}{dr}\right)<0.\]

Proof.: Set \[f(r,z)=z^{2}-aze^{rz}-be^{rz}.\] By the Implicit Function Theorem, near \(\lambda_{1,2}\) the roots \(\eta_{1}\) and \(\eta_{2}\) are differentiable functions of \(r\) for small \(|r|\), and \[\frac{dz}{dr} =-\frac{\frac{\partial f(r,z)}{\partial r}}{\frac{\partial f(r,z)}{\partial z}} =\frac{arze^{rz}+bre^{rz}}{2z-a(e^{rz}+rze^{rz})-bre^{rz}} =\frac{re^{rz}(az+b)}{2z-a(e^{rz}+rze^{rz})-bre^{rz}}.\] Therefore, near \((r,z)=(0,\lambda_{1})\) \[\frac{d\eta_{1}}{dr} =\frac{re^{r\eta_{1}}(a\eta_{1}+b)}{2\eta_{1}-a(e^{r\eta_{1}}+r\eta_{1}e^{r\eta_{1}})-bre^{r\eta_{1}}} =:\frac{A+iB}{C+iD}. \tag{3.18}\] Note that for a complex number \(\zeta:=(A+iB)/(C+iD)\) we have \(\Re\zeta=(AC+BD)/(C^{2}+D^{2})\).
We will show that \(AC+BD>0\) for sufficiently small \(r>0\). A simple computation gives \[A+iB =\left(re^{r\Re\eta_{1}}\cos(r\Im\eta_{1})+ire^{r\Re\eta_{1}}\sin(r\Im\eta_{1})\right)(a\Re\eta_{1}+b+ia\Im\eta_{1}) =re^{r\Re\eta_{1}}\cos(r\Im\eta_{1})(a\Re\eta_{1}+b)-are^{r\Re\eta_{1}}\sin(r\Im\eta_{1})\Im\eta_{1} +i\left(re^{r\Re\eta_{1}}\sin(r\Im\eta_{1})(a\Re\eta_{1}+b)+are^{r\Re\eta_{1}}\cos(r\Im\eta_{1})\Im\eta_{1}\right) =re^{r\Re\eta_{1}}\{[\cos(r\Im\eta_{1})(a\Re\eta_{1}+b)-a\sin(r\Im\eta_{1})\Im\eta_{1}] +i\left[\sin(r\Im\eta_{1})(a\Re\eta_{1}+b)+a\cos(r\Im\eta_{1})\Im\eta_{1}\right]\} =re^{r\Re\eta_{1}}(A^{\prime}+iB^{\prime}).\] Due to the continuous dependence of \(\eta_{1}\) on \(r\) and \(\lim_{r\to 0}\eta_{1}=\lambda_{1}\), we have \[\lim_{r\downarrow 0}A^{\prime} =a\lambda_{1}+b,\ \lim_{r\downarrow 0}B^{\prime}=0,\] \[\lim_{r\downarrow 0}C =\sqrt{a^{2}+4b},\ \lim_{r\downarrow 0}D=0.\] As \(a\lambda_{1}+b=(1/2)[a(a+\sqrt{a^{2}+4b})+2b]\), by (3.15), \(a\lambda_{1}+b>0\), so we have \[\lim_{r\downarrow 0}\Re\left(\frac{d\eta_{1}}{dr}\right)=\frac{a\lambda_{1}+b}{\sqrt{a^{2}+4b}}>0.\] Similarly, near \((r,z)=(0,\lambda_{2})\) we have \[\frac{d\eta_{2}}{dr}=\frac{re^{r\eta_{2}}(a\eta_{2}+b)}{2\eta_{2}-a(e^{r\eta_{2}}+r\eta_{2}e^{r\eta_{2}})-bre^{r\eta_{2}}}.\] If we repeat the steps above near \((0,\lambda_{2})\), then \[\lim_{r\downarrow 0}A^{\prime}=a\lambda_{2}+b,\ \lim_{r\downarrow 0}B^{\prime}=0,\] \[\lim_{r\downarrow 0}C=-\sqrt{a^{2}+4b},\ \lim_{r\downarrow 0}D=0.\] Finally, by (3.16), \(a\lambda_{2}+b>0\), so \[\lim_{r\downarrow 0}\Re\left(\frac{d\eta_{2}}{dr}\right)=\frac{a\lambda_{2}+b}{-\sqrt{a^{2}+4b}}<0.\]

**Remark 3.4**.: This lemma is important for the proof of Lemma 4.2 below. In fact, it shows that the strip \(\{\Re\eta_{2}\leq\Re z\leq\Re\eta_{1}\}\), in which the function \(f(r,z)\) has no root, expands as \(r\) increases from \(0\).

## 4. Positive Solutions of Second Order Delay Equations

The sign of any solution to a given differential equation is of utmost importance. In fact, when there is no delay, results on the sign of solutions of second order equations are known. This was discussed at the beginning of the previous section. In this section, novel results will be put forth for second order delay equations. The main result of this work can now be stated. By Proposition 3.1, for sufficiently small \(r\) the characteristic equation has no root on the imaginary axis, so the inhomogeneous equation (3.4) has a unique bounded solution \(x_{f}\) for each given bounded and continuous \(f(\cdot)\).

**Theorem 4.1**.: _For sufficiently small \(r\geq 0\) let \(G(t,r)\) be the Green function of Eq.(3.4) such that \(x_{f}(t)=\int_{-\infty}^{\infty}G(t-s,r)f(s)ds\) is the unique bounded solution to Eq.(3.4). Then \(G(t,r)<0\) for all \(t\in\mathbb{R}\)._

The proof will follow from the next lemma, proven below, together with the Lagrange Mean Value Theorem.

**Lemma 4.2**.: _Let \(G(t,r)\) and \(G(t)\) be the Green functions of equations (2.2) and (3.1), respectively. Then_ \[\sup_{t\in\mathbb{R}}\left|\frac{\frac{\partial G(t,r)}{\partial r}}{G(t)}\right|<\infty.\]

Proof.: The Green's function of the delayed equation is given by (see Mallet-Paret, [15, 19])
\[G(t,r)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{i\xi t}}{-\xi^{2}-ai\xi e^{i\xi r}-be^{i\xi r}}d\xi.\] The integral above is absolutely convergent, so \[\frac{\partial G(t,r)}{\partial r} =\frac{1}{2\pi}\frac{\partial}{\partial r}\int_{-\infty}^{\infty}\frac{e^{i\xi t}}{-\xi^{2}-ai\xi e^{i\xi r}-be^{i\xi r}}d\xi =\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{\partial}{\partial r}\left(\frac{e^{i\xi t}}{-\xi^{2}-ai\xi e^{i\xi r}-be^{i\xi r}}\right)d\xi =\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{i\xi(t+r)}\left(a\xi^{2}-b\xi i\right)}{\left(\xi^{2}+ai\xi e^{i\xi r}+be^{i\xi r}\right)^{2}}d\xi. \tag{4.1}\] Under the theorem's condition, for small \(r>0\) the function \[f(\xi):=\frac{e^{i\xi t}}{\xi^{2}+ai\xi e^{i\xi r}+be^{i\xi r}}\] is holomorphic at every point of the closed strip \(\{z\in\mathbb{C}:\;-\lambda_{1}\leq\Im z\leq|\lambda_{2}|\}\).

_Case \(t>0\)_: We have \[G(t)=\frac{1}{\lambda_{2}-\lambda_{1}}e^{\lambda_{2}t}.\] Moving the integration to the parallel line \(\{z=\xi+i|\lambda_{2}|,\xi\in\mathbb{R}\}\), we obtain \[\left|\frac{\partial G(t,r)}{\partial r}\right| =\left|\frac{1}{2\pi}\int_{-\infty}^{\infty}\frac{e^{i(\xi+i|\lambda_{2}|)(t+r)}\left(a(\xi+i|\lambda_{2}|)^{2}-b(\xi+i|\lambda_{2}|)i\right)}{\left((\xi+i|\lambda_{2}|)^{2}+ai(\xi+i|\lambda_{2}|)e^{i(\xi+i|\lambda_{2}|)r}+be^{i(\xi+i|\lambda_{2}|)r}\right)^{2}}d\xi\right| \leq\frac{e^{\lambda_{2}(t+r)}}{2\pi}\int_{-\infty}^{\infty}\left|\frac{a(\xi+i|\lambda_{2}|)^{2}-b(\xi+i|\lambda_{2}|)i}{\left((\xi+i|\lambda_{2}|)^{2}+ai(\xi+i|\lambda_{2}|)e^{i(\xi+i|\lambda_{2}|)r}+be^{i(\xi+i|\lambda_{2}|)r}\right)^{2}}\right|d\xi \leq K_{0}e^{\lambda_{2}t}, \tag{4.2}\] where \[K_{0}:=\frac{e^{\lambda_{2}r}}{2\pi}\int_{-\infty}^{\infty}\frac{\left|a(\xi+i|\lambda_{2}|)^{2}-b(\xi+i|\lambda_{2}|)i\right|}{\left|(\xi+i|\lambda_{2}|)^{2}+ai(\xi+i|\lambda_{2}|)e^{i(\xi+i|\lambda_{2}|)r}+be^{i(\xi+i|\lambda_{2}|)r}\right|^{2}}d\xi<\infty.\] Therefore, \[\left|\frac{\partial G(t,r)/\partial r}{G(t)}\right|\leq K_{1}. \tag{4.3}\]

_Case \(t<0\)_: Similarly, when \(t<0\), \[G(t)=\frac{1}{\lambda_{2}-\lambda_{1}}e^{\lambda_{1}t}.\] In this case we can move the line of integration down to the line \(\xi-i\lambda_{1}\), \(\xi\in\mathbb{R}\), and obtain the similar estimate \[\left|\frac{\partial G(t,r)}{\partial r}\right|\leq K_{2}e^{\lambda_{1}t}, \tag{4.4}\] so, for \(t<0\) there exists a constant \(K_{3}\) such that \[\left|\frac{\partial G(t,r)/\partial r}{G(t)}\right|\leq K_{3}. \tag{4.5}\] The lemma is proved.

Now we can prove Theorem 4.1.

Proof.: In light of Lemma 4.2, the absolute convergence of the integral defining the partial derivative of the Green's function, and the Lagrange Mean Value Theorem, we have the estimate \[\left|G(t,r)-G(t)\right|\leq\sup_{0<\omega<r}\left|\frac{\partial G(t,\omega)}{\partial r}\right|\,r,\] and therefore, by Lemma 4.2, there is a constant \(M\) independent of \(t\) such that \[\left|\frac{G(t,r)}{G(t)}-1\right|\leq Mr.\] If we take \(0<r<1/M\), then \(G(t,r)/G(t)>0.\) This means that for sufficiently small \(r>0\) the Green function satisfies \(G(t,r)<0\). This proves the theorem.
## 5. Monotone Iteration Method for Traveling Waves

This section will focus on equations of the form \[\frac{\partial u(x,t)}{\partial t}=D\frac{\partial^{2}u(x,t-\tau_{1})}{\partial x^{2}}+f(u_{t}), \tag{5.1}\] where \(t\in\mathbb{R},\tau_{1}>0,x,u(x,t)\in\mathbb{R},\ D>0,\ f:C\left([-\tau_{2},0],\mathbb{R}\right)\rightarrow\mathbb{R}\) is continuous and \(u_{t}(x)\in C\left([-\tau_{2},0],\mathbb{R}\right)\) is defined as \[u_{t}(x)=u(x,t+\theta),\ \theta\in[-\tau_{2},0],\ t\geq 0,\ x\in\mathbb{R}.\] Moreover, we will assume that \(f\) is Lipschitz continuous and \[f(\hat{0})=f(\hat{K})=0,\ \text{and}\ f(u)\neq 0,\ \hat{0}<u<\hat{K}.\] We are interested in traveling wave solutions of the form \(u(x,t)=\phi(x+ct),\ c>0.\) This transformation gives \[u(x,t) =\phi(x+ct),\quad\frac{\partial u(x,t)}{\partial t} =c\phi^{\prime}(x+ct),\quad\frac{\partial^{2}u(x,t-\tau_{1})}{\partial x^{2}} =\phi^{\prime\prime}(x+c(t-\tau_{1})).\] Setting \(\xi=x+ct,r_{1}=c\tau_{1},\) Eq. (5.1) becomes the following functional differential equation \[c\phi^{\prime}(\xi)=D\phi^{\prime\prime}(\xi-r_{1})+f_{c}(\phi_{\xi}), \tag{5.2}\] where \(f_{c}:\mathbb{X}_{c}:=C([-c\tau_{2},0],\mathbb{R})\to\mathbb{R}\) is defined as \[f_{c}(\psi)=f(\psi^{c}),\quad\psi^{c}(\theta):=\psi(c\theta),\quad\theta\in[-\tau_{2},0].\] By a shift of variable we can reduce the equation to \[D\phi^{\prime\prime}(\xi)-c\phi^{\prime}(\xi+r_{1})+f_{c}(\phi_{\xi+r_{1}})=0,\ \ \xi\in\mathbb{R}. \tag{5.3}\] The main purpose of this section is to look for solutions \(\phi\) of (5.3) in the following subset of \(C(\mathbb{R},\mathbb{R})\): \[\Gamma:=\{\varphi\in C(\mathbb{R},\mathbb{R}):\varphi\ \ \mbox{is nondecreasing, and}\ \ \lim_{\xi\to-\infty}\varphi(\xi)=0,\ \lim_{\xi\to+\infty}\varphi(\xi)=K\}.\] We assume that there exists a positive number \(\beta\) such that \[f_{c}(\phi)-f_{c}(\psi)+\beta[\phi(0)-\psi(0)]\geq 0 \tag{5.4}\] for all \(\phi,\psi\in\mathbb{X}_{c}\) such that \(\phi\geq\psi\). Let us consider the linear operator \(\mathcal{L}\) in \(BC(\mathbb{R},\mathbb{R})\) defined as \[\mathcal{L}(\phi)(\xi):=D\phi^{\prime\prime}(\xi)-c\phi^{\prime}(\xi+r_{1})-\beta\phi(\xi+r_{1}), \tag{5.5}\] and the operator \[H(\phi)(t)=f_{c}(\phi_{t+r_{1}})+\beta\phi(t+r_{1}),\ \ \ \ \ \ \ \phi\in C(\mathbb{R},\mathbb{R}). \tag{5.6}\] By Proposition 3.1 and Theorem 2.3 the operator \(\mathcal{L}\) is invertible, and by Theorem 4.1 the operator \(-\mathcal{L}^{-1}\) is monotone in the space \(BC(\mathbb{R},\mathbb{R})\). As proved in [27], the operator \(H\) enjoys similar properties:

**Lemma 5.1**.: _Assume that (5.4) holds. Then, for any \(\phi\in\Gamma,\) we have that_

1. \(H(\phi)(t)\geq 0,\ t\in\mathbb{R}\);
2. \(H(\phi)(t)\) _is nondecreasing in_ \(t\in\mathbb{R}\);
3. \(H(\psi)(t)\leq H(\phi)(t)\) _for all_ \(t\in\mathbb{R}\)_, if_ \(\psi\in C(\mathbb{R},\mathbb{R})\) _is given so that_ \(0\leq\psi(t)\leq\phi(t)\leq K\) _for all_ \(t\in\mathbb{R}\)_._

With this in mind, if we rewrite Eq.(5.3) in the form \[\phi=-\mathcal{L}^{-1}H(\phi),\ \phi\in\Gamma, \tag{5.7}\] then \(\phi\) is a fixed point of the monotone operator \(-\mathcal{L}^{-1}H\) in \(\Gamma\). Our next steps will ensure that under some conditions \(-\mathcal{L}^{-1}H\) is well defined in \(\Gamma\), or in a closed convex subset of \(\Gamma\) on which \(-\mathcal{L}^{-1}H\) is a compact operator. Then, the fixed point of \(-\mathcal{L}^{-1}H\) will be guaranteed by the Schauder Fixed Point Theorem.
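The invertibility of \(\mathcal{L}\) and the monotonicity of \(-\mathcal{L}^{-1}\) for small \(r_{1}\) rest on Proposition 3.1 and Theorem 4.1. The following Python sketch illustrates both ingredients numerically for the scalar kernel; it is illustrative only, with test values for \(a\), \(b\), \(r\) (in the notation of (5.5) one would take \(a=c/D\) and \(b=\beta/D\)): it tracks the two real characteristic roots near \(\lambda_{1}\) and \(\lambda_{2}\) for small \(r\), and evaluates the Green function by direct quadrature of the Fourier-inversion formula used in the proof of Lemma 4.2, checking that it is negative.

```python
# Illustrative sketch: characteristic roots of (3.5) for small r, and the
# Green function G(t, r) of x'' - a x'(t+r) - b x(t+r) = f evaluated by
# quadrature of the Fourier-inversion formula; G(t, r) < 0 (Theorem 4.1).
import numpy as np

a, b = 1.0, 2.0
lam1 = (a + np.sqrt(a*a + 4*b)) / 2.0
lam2 = (a - np.sqrt(a*a + 4*b)) / 2.0

def char(z, r):                        # characteristic function of (3.5)
    return z*z - a*z*np.exp(r*z) - b*np.exp(r*z)

def root_near(z0, r, steps=60):        # Newton iteration started at z0
    z, h = complex(z0), 1e-7
    for _ in range(steps):
        z = z - char(z, r) * h / (char(z + h, r) - char(z, r))
    return z

for r in (0.0, 0.05, 0.1):
    e1, e2 = root_near(lam1, r), root_near(lam2, r)
    print(f"r={r:.2f}   root near lambda1: {e1:.5f}   root near lambda2: {e2:.5f}")

def green(t, r, cutoff=400.0, dxi=0.005):
    xi = np.arange(-cutoff, cutoff, dxi)
    denom = -xi**2 - 1j*a*xi*np.exp(1j*xi*r) - b*np.exp(1j*xi*r)
    return float(np.sum((np.exp(1j*xi*t) / denom).real) * dxi / (2*np.pi))

r = 0.05
gs = [green(t, r) for t in np.linspace(-5.0, 5.0, 11)]
print("G(t, r) < 0 at all sample points:", all(g < 0 for g in gs))
print(f"G(0, r) = {green(0.0, r):.4f}   (no-delay value 1/(lam2-lam1) = {1/(lam2-lam1):.4f})")
```

The printed kernel values stay strictly negative and close to the explicit non-delayed kernel, which is exactly what the construction of \(-\mathcal{L}^{-1}\) as a monotone operator relies on.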
To establish the compactness we need the following **Lemma 5.2**.: _For sufficiently small \(r>0\), the operator \(-\mathcal{L}^{-1}\) maps \(B_{\rho}(0):=\{g\in BC(\mathbb{R},\mathbb{R})|\,\|g\|\leq\rho\}\), where \(\rho>0\) is a given number, to a pre-compact subset of \(BC(\mathbb{R},\mathbb{R})\)._ Proof.: We will use the Arzela-Ascoli Theorem to prove the lemma. The boundedness of the set \(-\mathcal{L}^{-1}(B_{\rho}(0))\) is clear from the exponential decay of \(G(t,r)\) in \(|t|\). In fact, since there exists positive constants \(\delta,K_{0}\) such that \(|G(t,r)|\leq K_{0}e^{-\delta|t|}\) for all \(t\in\mathbb{R}\) \[\|-\mathcal{L}^{-1}f\| =\sup_{t\in\mathbb{R}}|-\mathcal{L}^{-1}f(t)|\] \[=\sup_{t\in\mathbb{R}}|\int_{-\infty}^{\infty}G(t-s)f(s)ds|\] \[\leq\sup_{t\in\mathbb{R}}\int_{-\infty}^{\infty}|G(t-s,r)|\cdot| f(s)|ds\] \[\leq\left(\int_{t}^{\infty}|G(t-s,r)|ds+\int_{-\infty}^{t}|G(t-s )|ds\right)\sup_{s\in\mathbb{R}}|f(s)|\] \[\leq\frac{2K_{0}}{\delta}\|f\|.\] Next, we are going to show that \(\mathcal{L}^{-1}(B_{\rho}(0))\) is equicontinuous. Let \(f\in B_{\rho}(0)\) be any element. Then, since \(f\in B_{\rho}(0)\), \(\|f\|\leq\rho\), such that for any \(x,y\in\mathbb{R}\), \[|\mathcal{L}^{-1}f(x)-\mathcal{L}^{-1}f(y)| =|\int_{-\infty}^{\infty}\left(G(x-s,r)-G(y-s,r)\right)f(s)ds|\] \[\leq\int_{-\infty}^{\infty}|G(x-s,r)-G(y-s,r)|\,ds\cdot\sup_{s \in\mathbb{R}}|f(s)ds|\] \[\leq\rho\int_{-\infty}^{\infty}|G(x-s,r)-G(y-s,r)|\,ds.\] Given an \(\epsilon_{0}>0\), defined as \(\epsilon:=\epsilon_{0}/(\rho K_{0})\). \(G(t,r)\) exponentially decays on each half interval of the real line, one can choose a sufficiently large number \(N=N(\epsilon)>0\) so that \[\int_{N}^{\infty}|G(s,r)|ds+\int_{-\infty}^{-N}|G(s,r)ds<\frac{\epsilon}{8\rho}.\] Then, there exists a number \(\delta_{0}>0\) dependant on \(\epsilon\) such that \[|G(x-s,r)-G(y-s,r)|<\frac{\epsilon}{8N\rho}\] for all \(x,y\in[-N,-\epsilon/4]\) or \(x,y\in[\epsilon/4,N]\). Finally, we have \[|\mathcal{L}^{-1}f(x)-\mathcal{L}^{-1}f(y)| \leq K\int_{-\infty}^{\infty}|G(x-s,r)-G(y-s,r)|\,ds\] \[\leq K\left(\int_{N}^{\infty}|G(x-s,r)-G(y-s,r)|\,ds+\int_{- \infty}^{-N}|G(x-s,r)-G(y-s,r)|\,ds\right)\] \[\quad+K\int_{\epsilon/4}^{N}|G(x-s,r)-G(y-s,r)|\,ds+K\int_{-N}^{ -\epsilon/4}|G(x-s,r)-G(y-s,r)|\,ds\] \[\quad+K\int_{-\epsilon/4}^{\epsilon/4}|G(x-s,r)-G(y-s,r)|\,ds\] \[<\frac{\epsilon}{4}+\frac{\epsilon}{4}+\frac{\rho K_{0}\epsilon} {2}\] \[=\epsilon_{0},\] for all \(x,y\) such that \(|x-y|<\delta_{0}\) and \(f\in B_{\rho}(0)\). This shows that the family \(\mathcal{L}^{-1}\Gamma\) is equicontinuous. The lemma is proved. **Definition 5.3**.: A function \(\varphi\in BC^{2}(\mathbb{R},\mathbb{R})\) is called an upper solution (lower solution, respectively) for the wave equation (5.3) if it satisfies the following \[D\varphi^{\prime\prime}(t)-c\varphi^{\prime}(t+r_{1})+f^{c}( \varphi_{t+r_{1}})\leq 0,\] \[(D\varphi^{\prime\prime}(t)-c\varphi^{\prime}(t+r_{1})+f^{c}( \varphi_{t+r_{1}})\geq 0,\text{ respectively})\] for all \(t\in\mathbb{R}\). Below we will list some standing conditions on Eq.(5.3) before state further conditions for the existence of traveling waves to Eq.(5.1): 1. \(f(\hat{0})=f(\hat{K})=0\), where \(\hat{0}\), (\(\hat{K}\), respectively) is the constant function \(\phi(\theta)=0\) (\(\phi(\theta)=K\), respectively), for all \(\theta\in[-\tau_{2},0]\); 2. 
There exists a positive constant \(\beta\) such that \[f(\varphi)-f(\psi)+\beta(\varphi(0)-\psi(0))\geq 0\] for all \(\varphi,\psi\in C([-\tau_{2},0],\mathbb{R})\) with \(0\leq\varphi(s)\leq\phi(s)\leq K\) for all \(s\in[-\tau_{2},0]\); * The operator \(H\) is continuous in \(BC(\mathbb{R},[0,K])\to BC(\mathbb{R},\mathbb{R})\) and \[\sup_{\phi\in\Gamma}\|H(\phi)\|<\infty.\] Before we proceed we set \(F:=-\mathcal{L}^{-1}H.\) **Lemma 5.4**.: _Let \(\phi\in BC^{2}(\mathbb{R},[0,K])\). Assume further the standing assumptions (H1), (H2), (H3). Then, \(\phi\) is an upper solution (lower solution, respectively) of Eq.(5.3) if and only if_ \[F\phi\leq\phi\ (F\phi\geq\phi,\ \text{respectively}).\] Proof.: By definition, if \(\phi\) is an upper solution, then \[D\varphi^{\prime\prime}(t)-c\varphi^{\prime}(t+r_{1})+f^{c}(\varphi_{t+r_{1}} )\leq 0.\] Since \(\phi\in BC^{2}(\mathbb{R},[0,K])\) and assumptions (H1), (H2) and (H3) we have \[\mathcal{L}\phi+H(\phi)\leq 0.\] Therefore, as \(-\mathcal{L}^{-1}\) is monotone, we have \[(-\mathcal{L}^{-1})(\mathcal{L}\phi+H(\phi))\leq 0.\] Consequently, \[-\phi-\mathcal{L}^{-1}H(\phi)\leq 0. \tag{5.8}\] Hence \[\phi\geq-\mathcal{L}^{-1}H(\phi)=F\phi.\] Conversely, we can reverse the argument to show that \(\phi\geq F\phi\) implies \(\phi\) is an upper solution. Similarly, we can prove the claim on the lower solutions. **Theorem 5.5**.: _Under the standing assumptions (H1), (H2), (H3), if there are an upper \(\overline{\varphi}\in\Gamma\) and a lower solutions \(\underline{\varphi}\in\Gamma\) of Eq.(5.3) such that for all \(\ t\in\mathbb{R}\)_ \[0\leq\underline{\varphi}(t)\leq\overline{\varphi}(t).\] _Then, there exists a monotone traveling wave solution \(\phi\) to Eq.(5.3)._ Proof.: Set \(\phi_{n}:=F^{n}(\overline{\varphi}),n\in\mathbb{N}\). As \(H\) and \(-\mathcal{L}^{-1}\) are both monotone, \(F\) is monotone as well. By induction, we can show that \(\phi_{n}\) is a monotone function for each \(n\). In fact, denoting by \(S^{h}\) the translation \(\phi(\cdot)\to\phi(\cdot-h)\), where we assume \(h>0\) is a constant, in the function space \(BC(\mathbb{R},\mathbb{R})\) we see that \(S^{h}\) commutes with both \(H\) and \(-\mathcal{L}^{-1}\). Therefore, as \(\phi\geq S^{h}\phi\), we have \[\phi_{1}:=F(\overline{\varphi})\geq F(S^{h}\overline{\phi})=S^{h}F(\overline {\varphi}),\] or \[\phi_{1}(t)\geq\phi_{1}(t-h)\] for all \(t\in\mathbb{R}\). Next, assume that \(\phi_{k}(t)\) is a nondecreasing function. Then, using the above observation we can easily show by induction that \(\phi_{k+1}(t)\) is also a nondecreasing function. This way, we obtain a sequence of bounded nondecreasing continuous functions \(\{\phi_{n}(t)\}\). Moreover, \[\underline{\varphi}\leq\phi_{n}\leq\phi_{n+1}\leq\overline{\varphi},\ n\in \mathbb{N}.\] This shows that if there exists a subsequence of \(\phi_{n}\) that is convergent in \(\Gamma\), then the sequence itself must be convergent. Consider the set \[\Gamma_{1}:=\{\phi\in\Gamma|\ \underline{\varphi}\leq\phi\leq\overline{\varphi}\}.\] Clearly, that \(\Gamma_{1}\) is a closed and convex subset of \(BC(\mathbb{R},\mathbb{R})\). The restriction of \(F\) to \(\Gamma_{1}\) is well defined as an operator from \(\Gamma_{1}\) to itself. In fact, let \(\varphi\in\Gamma_{1}\), then as above we can show that \(F(\varphi)\) is a monotone function on \(\mathbb{R}\). 
Next, since \(\underline{\varphi}\leq\varphi\leq\overline{\varphi}\), and by assumption, we have \[\underline{\varphi}\leq F(\underline{\varphi})\leq F(\varphi)\leq F(\overline {\varphi})\leq\overline{\varphi}. \tag{5.9}\] Since \[K =\lim_{t\to\infty}\underline{\varphi}(t)\leq\lim_{t\to\infty}F( \varphi)(t)\leq\lim_{t\to\infty}\overline{\varphi}(t)=K,\] \[0 =\lim_{t\to-\infty}\underline{\varphi}(t)\leq\lim_{t\to-\infty}F( \varphi)(t)\leq\lim_{t\to-\infty}\overline{\varphi}(t)=0.\] by the Squeeze Theorem we have \[\lim_{t\to\infty}F(\varphi)(t)=K\ \lim_{t\to-\infty}F(\varphi)(t)=0. \tag{5.10}\] This yields that \(F(\varphi)\in\Gamma\), so by (5.9) \(F(\varphi)\in\Gamma_{1}\). By Lemma 5.2 the operator \(\mathcal{L}^{-1}\) is compact, so the operator \(F\) is compact as well. Therefore, \(F\) is a continuous and compact operator from the closed and convex subset \(\Gamma_{1}\) of the Banach space \(BC(\mathbb{R},\mathbb{R})\). By the Schauder Fixed Point Theorem, \(F\) must have a fixed point in \(\Gamma_{1}\). When applying the results obtained in the above sections to particular models one is often faced with difficulty in constructing upper and lower solutions for the Monotone Iteration Method to work. To facilitate this process we can construct upper and lower solutions from rough functions, known as quasi-upper/lower solutions. **Definition 5.6**.: A function \(\varphi\in C^{1}(\mathbb{R},\mathbb{R})\), where \(\varphi,\varphi^{\prime}\) are bounded on \(\mathbb{R}\), \(\varphi^{\prime\prime}\) is locally integrable and essentially bounded on \(\mathbb{R}\) (that is, \(\varphi^{\prime\prime}\in L^{\infty}\)), is called a quasi- upper solution (quasi-lower solution, respectively) for the wave equation (5.3) if it satisfies the following for almost every \(t\in\mathbb{R}\) \[D\varphi^{\prime\prime}(t)-c\varphi^{\prime}(t+r_{1})+f^{c}( \varphi_{t+r_{1}})\leq 0,\] \[(D\varphi^{\prime\prime}(t)-c\varphi^{\prime}(t+r_{1})+f^{c}( \varphi_{t+r_{1}})\geq 0,\text{ respectively}).\] **Proposition 5.7**.: Let \(\phi\) be a nondecreasing quasi-upper solution (quasi-lower solution, respectively) of Eq.(5.3) such that \(\phi(t)\in[0,K]\) for all \(t\in\mathbb{R}\). Then, \(F\phi\) is a nondecreasing upper solution (lower solution, respectively) of Eq.(5.3). Proof.: The basic idea here is that via the operator \(\mathcal{L}^{-1}\) the smoothness of the quasi-upper or lower solutions is improved. Therefore, \(F\phi\) has enough smoothness to be an upper (or lower solution. By Theorem 2.3, the operator \(\mathcal{L}\) (mapping \(\varphi(\cdot)\mapsto\varphi^{\prime\prime}(\cdot)-c\varphi^{\prime}(\cdot+r _{1})-\beta u(\cdot+r_{1})\)) induces the operator \(T:(\varphi,\varphi^{\prime})^{T}\mapsto(0,f)\) that is an isomorphism between \(W^{1,\infty}\) and \(L^{\infty}\). Moreover, if in addition, \(f\) is continuous, then \(\varphi\) is a classical solution, that is, \(\varphi\) is twice continuously differentiable and \(\varphi,\varphi^{\prime},\varphi^{\prime\prime}\) are bounded. This yields that \(F\phi=-\mathcal{L}^{-1}H\phi\) is of class \(C^{2}\) since \(H\phi\) is continuous and bounded. Since \(\phi\) is a quasi-upper solution, arguing in the same manner as in Lemma 5.4 we can show that \[\phi\geq F\phi.\] Consequently, \(F\phi\in BC^{2}(\mathbb{R},[0,K])\). As \(F\) is monotone, this yields \(\phi\geq F\phi\geq F(F\phi)\). In particular, this shows that \(F\phi\) satisfies all conditions of Lemma 5.4 to be an upper solution of Eq.(5.3). 
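To make the iteration \(\phi_{n+1}=F(\phi_{n})=-\mathcal{L}^{-1}H(\phi_{n})\) concrete, here is a small numerical sketch for the delayed Fisher-KPP nonlinearity from the Introduction. It is illustrative only: for simplicity the diffusion delay is set to \(r_{1}=0\), so that the kernel of \(\mathcal{L}\) is the explicit exponential kernel (3.3) (for \(r_{1}>0\) one would sample the kernel from the Fourier-inversion formula of Lemma 4.2 instead); the values of \(c\), \(r_{2}\), \(\beta\) and the logistic starting profile are arbitrary choices, with \(\beta\geq 1\) ensuring (5.4) and the logistic profile being, for \(c\geq 2\), an upper solution one can check by hand. The printed checks display the ordering \(F(\phi)\leq\phi\) and the monotonicity of the iterates predicted by Lemma 5.4 and Theorem 5.5.

```python
# Illustrative sketch of the monotone iteration phi_{n+1} = -L^{-1} H(phi_n)
# for the delayed Fisher-KPP wave equation, with diffusion delay r1 = 0 so
# that the kernel of L is the explicit kernel (3.3).  All parameters are
# test choices, not values prescribed by the paper.
import numpy as np

c, r2, beta = 2.5, 0.3, 2.0                 # wave speed, reaction delay c*tau2, beta from (5.4)
lam1 = (c + np.sqrt(c*c + 4*beta)) / 2.0    # roots of s^2 - c s - beta
lam2 = (c - np.sqrt(c*c + 4*beta)) / 2.0

h   = 0.05
xi  = np.arange(-60.0, 60.0 + h, h)
off = np.arange(-30.0, 30.0 + h, h)         # kernel support; tails ~1e-9 here
G   = np.where(off < 0.0, np.exp(lam1*off), np.exp(lam2*off)) / (lam2 - lam1)   # kernel (3.3), G < 0

def shift(phi, d):
    """Approximate phi(xi - d) on the grid, freezing the left end value."""
    k = int(round(d / h))
    return np.concatenate([np.full(k, phi[0]), phi[:-k]]) if k > 0 else phi.copy()

def F(phi):
    g = shift(phi, r2) * (1.0 - phi) + beta * phi          # H(phi) >= 0 for 0 <= phi <= 1
    conv = h * np.convolve(g, G, mode='same')              # quadrature of G * g inside the window
    tail = g[-1] * np.exp(lam1 * (xi - xi[-1])) / (lam1 * (lam2 - lam1))   # contribution of s > 60
    return -(conv + tail)                                  # F(phi) = -L^{-1} H(phi)

phi0 = 1.0 / (1.0 + np.exp(-xi))            # nondecreasing profile; an upper solution when c >= 2
phis = [phi0]
for n in range(9):
    phis.append(F(phis[-1]))

tol = 1e-4                                  # quadrature / truncation tolerance
print("F(phi0) <= phi0:", bool(np.all(phis[1] <= phis[0] + tol)))
print("iterates decrease monotonically:",
      all(np.all(phis[k+1] <= phis[k] + tol) for k in range(len(phis) - 1)))
print("each iterate is nondecreasing in xi:",
      all(np.all(np.diff(p) >= -tol) for p in phis))
```

The decreasing, ordered sequence of nondecreasing profiles is precisely the structure exploited above: compactness then yields a convergent subsequence, and monotonicity upgrades this to convergence of the whole sequence to a fixed point of \(F\).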
**Lemma 5.8**.: _Assume that \(\phi(t)\) is a differentiable function such that \(\phi^{\prime}(t)\) is uniformly continuous and the limit_ \[\lim_{t\to+\infty}\phi(t)=a. \tag{5.11}\] _Then, \(\lim_{t\to+\infty}\phi^{\prime}(t)=0\)._ Proof.: Assuming to the contrary that \(\lim_{t\to+\infty}\phi^{\prime}(t)\neq 0\). Then, there exists a sequence \(\{t_{n}\}\to\infty\) such that \(\inf_{n\in\mathbb{N}}|\phi^{\prime}(t_{n})|>\epsilon\). As \(\phi^{\prime}(\cdot)\) is uniformly continuous, for each positive constant \(\epsilon>0\), there exists a positive \(\delta\) such that if \(|t-s|<\delta\), then, \[|\phi^{\prime}(t)-\phi^{\prime}(s)|\leq\frac{\epsilon}{2}. \tag{5.12}\] By (5.11), for each positive \(\epsilon\) there exists a large constant \(N\) such that \[|\phi(t)-\phi(s)|\leq\frac{\delta\epsilon}{4} \tag{5.13}\] for all \(t,s\geq N\). On the other hand, we have \[|\phi(t_{n}+\delta/2)-\phi(t_{n}-\delta/2)| =|\int_{t_{n}-\delta/2}^{t_{n}+\delta/2}\phi^{\prime}(s)ds|\] \[=|\int_{t_{n}-\delta/2}^{t_{n}+\delta/2}\phi^{\prime}(t_{n})ds+ \int_{t_{n}-\delta/2}^{t_{n}+\delta/2}(\phi^{\prime}(s)-\phi^{\prime}(t_{n}))ds|\] \[=|\delta\phi^{\prime}(t_{n})+\int_{t_{n}-\delta/2}^{t_{n}+\delta/ 2}(\phi^{\prime}(s)-\phi^{\prime}(t_{n}))ds|\] \[=|\delta\phi^{\prime}(t_{n})|-|\int_{t_{n}-\delta/2}^{t_{n}+ \delta/2}(\phi^{\prime}(s)-\phi^{\prime}(t_{n}))ds|\] \[\geq\delta\epsilon-\delta\epsilon/2\] \[=\delta\epsilon/2.\] This contradicts (5.13). That is \(\lim_{t\to+\infty}\phi^{\prime}(t)=0\). **Corollary 5.9**.: Assume that \(\phi\) is a solution of Eq.(5.3) such that \[\sup_{t>0}|\phi^{\prime\prime}(t)|<\infty \tag{5.14}\] and the limit \[\lim_{t\to+\infty}\phi(t)=a.\] Then, \(f(\hat{a})=0\), where \(\hat{a}\) is the constant function \(\varphi(\theta)=a\) for all \(\theta\in[-\tau_{2},0]\). Proof.: First, by Lemma 5.8, \(\lim_{t\to+\infty}\phi^{\prime}(t)=0\). Therefore, the function \(\varphi:=\phi^{\prime}\) satisfies all conditions of Lemma 5.8, so \(\lim_{t\to+\infty}\varphi^{\prime}(t)=\lim_{t\to+\infty}\phi^{\prime\prime}(t)=0\). Subsequently \[\lim_{t\to+\infty}f^{c}(\phi_{t})=0.\] Since \(f\) is Lipschitz continuous, \[f^{c}(\lim_{t\to+\infty}\phi_{t})=f^{c}(\hat{a})=0.\] Finally, this yields that \(f(\hat{a})=0\). The following improves Theorem 5.5. **Theorem 5.10**.: _Under the standing assumptions (H1), (H2), (H3), if there is an upper \(\overline{\varphi}\in\Gamma\) and a lower solutions \(\underline{\varphi}\) that is not necessarily in \(\Gamma\) of Eq.(5.3) such that for all \(t\in\mathbb{R}\)_ \[0\leq\underline{\varphi}(t)\leq\overline{\varphi}(t)\] _and_ \[\lim_{t\to+\infty}\underline{\varphi}(t)=a\neq 0.\] _Then, there exists a monotone traveling wave solution \(\phi\) to Eq.(5.3)._ Proof.: We follow the proof of Theorem 5.5 by considering the operator \(F\) in \(\Gamma\). Since the set \(\{\phi_{n}:=F^{n}\overline{\varphi}\}\) is precompact, it has a convergent subsequence. Due to the monotone property of this sequence \(\phi_{n}\), the sequence itself is convergent, say, to \(\phi_{0}\). Obviously, \(\phi_{0}\) is nondecreasing and \(\lim_{t\to-\infty}\phi_{0}(t)=0\) and the limit \(\lim_{t\to+\infty}\phi_{0}(t)\) exists, say, equals \(a\in[0,K]\). As \(\phi_{0}(t)\geq\underline{\varphi}(t)\), \(a>0\). As there is no equilibrium between \(0\) and \(K\), this follows that \(a=K\). Hence, \(\phi_{0}\) is a monotone wave solution of Eq.(5.3). **Remark 5.11**.: The theory we presented above is for one dimensional systems. 
However, it can be easily extended to multi-dimensional systems where the diffusion is diagonal, that is, to systems of the form \[\begin{cases}\frac{\partial u_{1}(x,t)}{\partial t}=D_{1}\frac{\partial^{2}u_{1}(x,t-\tau_{1})}{\partial x^{2}}+f_{1}(u_{t})\\ \frac{\partial u_{2}(x,t)}{\partial t}=D_{2}\frac{\partial^{2}u_{2}(x,t-\tau_{2})}{\partial x^{2}}+f_{2}(u_{t})\\ \cdots\\ \frac{\partial u_{n}(x,t)}{\partial t}=D_{n}\frac{\partial^{2}u_{n}(x,t-\tau_{n})}{\partial x^{2}}+f_{n}(u_{t}),\end{cases} \tag{5.15}\] where \(\tau_{i},i=1,2,\cdots,n\) are sufficiently small positive constants, \(D_{i},i=1,2,\cdots,n\) are positive constants, \(u=(u_{1},\cdots,u_{n})^{T}\), and \(u_{t}(\theta):=u(t+\theta)\), \(\theta\in[-\tau,0]\) with a given positive \(\tau\).

## 6. Applications

### 6.1. Belousov-Zhabotinskii equations

In this section we consider the existence of traveling waves for the Belousov-Zhabotinskii equations with delay in both diffusion and reaction terms \[\begin{cases}\frac{\partial}{\partial t}u(x,t)=\frac{\partial^{2}}{\partial x^{2}}u(x,t-\tau_{1})+u(x,t)[1-u(x,t)-rv(x,t-\tau_{2})];\\ \frac{\partial}{\partial t}v(x,t)=\frac{\partial^{2}}{\partial x^{2}}v(x,t)-bu(x,t)v(x,t),\end{cases} \tag{6.1}\] where \(r,b,\tau_{1},\tau_{2}\) are positive constants and \(u\) and \(v\) are scalar functions. As shown in [27], the function \(f(\phi):=(f_{1}(\phi),f_{2}(\phi))^{T}\) defined as \[f_{1}(\phi) :=\phi_{1}(0)[s-\phi_{1}(0)+r\phi_{2}(-\tau_{2})], \tag{6.2}\] \[f_{2}(\phi) :=b\phi_{1}(0)[1-\phi_{2}(0)], \tag{6.3}\] where \(s:=1-r\) and \(\phi\in C([-\tau_{2},0],\mathbb{R}^{2})\), satisfies condition (5.4); that is, \[f_{c}(\phi)-f_{c}(\psi)+\beta[\phi(0)-\psi(0)]\geq 0\] whenever \(\phi\geq\psi\), where \(\beta=\mathrm{diag}(\beta_{1},\beta_{2})\) with \(\beta_{1}\geq 2-s\) and \(\beta_{2}\geq b\). Therefore, the theory of the previous section applies once quasi-upper and quasi-lower solutions of the Belousov-Zhabotinskii model are found. The associated wave equation is of the form \[\begin{cases}\varphi_{1}^{\prime\prime}(t)-c\varphi_{1}^{\prime}(t+r_{1})+\varphi_{1}(t+r_{1})\left((1-r)-\varphi_{1}(t+r_{1})+r\varphi_{2}(t+r_{1}-r_{2})\right)=0\\ \varphi_{2}^{\prime\prime}(t)-c\varphi_{2}^{\prime}(t+r_{1})+b\varphi_{1}(t+r_{1})\left(1-\varphi_{2}(t+r_{1})\right)=0,\end{cases} \tag{6.4}\] where \(r_{1}:=c\tau_{1},r_{2}:=c\tau_{2}\).

#### 6.1.1. Quasi-upper solutions

Define the numbers \(\lambda_{0}\) and \(\mu_{0}\) as \[\lambda_{0}=\frac{c+\sqrt{c^{2}-4}}{2},\ \ \mu_{0}=\frac{c+\sqrt{c^{2}-4b}}{2},\] which are roots of the characteristic equations \[\lambda^{2}-c\lambda+1 =0, \tag{6.5}\] \[\mu^{2}-c\mu+b =0, \tag{6.6}\] respectively. Observe that since \(1<b\), we have \(\lambda_{0}>\mu_{0}\).

**Claim 6.1**.: Let \(c>2\) and let \(U\) be an open strip \(\{z\in\mathbb{C}|\lambda_{0}-\epsilon<\Re z<\lambda_{0}+\epsilon\}\) that does not contain the other root of (6.5). Then, for sufficiently small \(r_{1}\) there exists only a single root \(\lambda_{1}(r_{1})\) of the equation \[\lambda^{2}-c\lambda e^{r_{1}\lambda}+e^{r_{1}\lambda}=0 \tag{6.7}\] in \(U\), and it depends continuously on \(r_{1}\). Moreover, \(\lambda_{1}(r_{1})\) is real and \[\lim_{r_{1}\to 0}\lambda_{1}(r_{1})=\lambda_{0}. \tag{6.8}\]

Proof.: The proof can be done in the same manner as the proof of Part (iii) of Proposition 3.1, using Rouch\'e's Theorem. Due to the uniqueness of the root, \(\lambda_{1}\) must coincide with \(\overline{\lambda_{1}}\); that is, \(\lambda_{1}\) is real.
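A quick numerical illustration of Claim 6.1 (and of the analogous Claim 6.2 below) is easy to set up: the two transcendental characteristic equations are solved by a secant-type Newton iteration started at \(\lambda_{0}\) and \(\mu_{0}\). The parameters \(c\), \(b\), \(r_{1}\) below are test choices satisfying \(1<b\) and \(c>2\sqrt{b}\); this is an illustrative sketch, not part of the proofs.

```python
# Illustrative check: for small r1, (6.7) and (6.9) have real roots
# lambda_1(r1) and mu_1(r1) near lambda_0 and mu_0, with mu_1 < lambda_1.
import numpy as np

c, b = 3.0, 1.5                              # test values with 1 < b, c > 2*sqrt(b)
lam0 = (c + np.sqrt(c*c - 4.0)) / 2.0
mu0  = (c + np.sqrt(c*c - 4.0*b)) / 2.0

def newton(q, z0, steps=60):
    z, h = complex(z0), 1e-7
    for _ in range(steps):
        z = z - q(z) * h / (q(z + h) - q(z))   # secant-style Newton step
    return z

for r1 in (0.0, 0.02, 0.05):
    lam1 = newton(lambda z: z*z - c*z*np.exp(r1*z) + np.exp(r1*z), lam0)
    mu1  = newton(lambda z: z*z - c*z*np.exp(r1*z) + b*np.exp(r1*z), mu0)
    print(f"r1={r1:.2f}  lambda1={lam1.real:.6f}  mu1={mu1.real:.6f}  "
          f"imaginary parts ~ {abs(lam1.imag) + abs(mu1.imag):.1e}")
```

The computed roots stay real, satisfy \(\mu_{1}(r_{1})<\lambda_{1}(r_{1})\), and converge to \(\mu_{0}\) and \(\lambda_{0}\) as \(r_{1}\to 0\), consistently with (6.8) and (6.10).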
**Claim 6.2**.: Let \(1<b\), \(2\sqrt{b}<c\), and let \(V\) be an open strip \(\{z\in\mathbb{C}|\mu_{0}-\epsilon<\Re z<\mu_{0}+\epsilon\}\) that does not contain the other root of (6.6). Then, for sufficiently small \(r_{1}\) there exists only a single root \(\mu_{1}(r_{1})\) of the equation \[\mu^{2}-c\mu e^{r_{1}\mu}+be^{r_{1}\mu}=0 \tag{6.9}\] in \(V\), and it depends continuously on \(r_{1}\). Moreover, \(\mu_{1}(r_{1})\) is real and \[\lim_{r_{1}\to 0}\mu_{1}(r_{1})=\mu_{0}. \tag{6.10}\]

Proof.: The proof can be done in the same manner as the proof of Part (iii) of Proposition 3.1, using Rouch\'e's Theorem. Due to the uniqueness of the root, \(\mu_{1}\) must coincide with \(\overline{\mu_{1}}\); that is, \(\mu_{1}\) is real.

Note that, as \(\lambda_{0}>\mu_{0}\), for sufficiently small \(r_{1}\) we have \(0<\mu_{1}<\lambda_{1}\). Let us define functions \(\varphi_{1}\) and \(\varphi_{2}\) as follows: \[\varphi_{1}(t):=\left\{\begin{array}{ll}\frac{1}{2}e^{\lambda_{1}t},&t\leq 0,\\ 1-\frac{1}{2}e^{-\lambda_{1}t},&t>0\end{array}\right.\quad\varphi_{2}(t):=\left\{\begin{array}{ll}\frac{1}{2}e^{\mu_{1}t},&t\leq 0,\\ 1-\frac{1}{2}e^{-\mu_{1}t},&t>0.\end{array}\right.\] Observe that for sufficiently small \(r_{1}\), \(0<\varphi_{1}(t)<1\) and similarly \(0<\varphi_{2}(t)<1\). First, it is easily seen that \[\varphi_{1}^{\prime}(t)=\left\{\begin{array}{ll}\frac{\lambda_{1}}{2}e^{\lambda_{1}t},&t\leq 0,\\ \frac{\lambda_{1}}{2}e^{-\lambda_{1}t},&t>0,\end{array}\right.\quad\varphi_{1}^{\prime\prime}(t)=\left\{\begin{array}{ll}\frac{\lambda_{1}^{2}}{2}e^{\lambda_{1}t},&t\leq 0,\\ \frac{-\lambda_{1}^{2}}{2}e^{-\lambda_{1}t},&t>0.\end{array}\right.\] Note that \(\varphi_{1}^{\prime},\varphi_{2}^{\prime}\) are both continuous and bounded on \(\mathbb{R}\), while \(\varphi_{1}^{\prime\prime},\varphi_{2}^{\prime\prime}\) exist, are bounded, and are continuous everywhere except at \(t=0\).

**Claim 6.3**.: For sufficiently small \(r_{1}\) and \(c>2\sqrt{b}\), the vector function \((\varphi_{1},\varphi_{2})^{T}\) is a quasi-upper solution of (6.4).
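Before turning to the case-by-case proof below, here is a quick numerical spot-check of Claim 6.3 (illustrative only): the parameters \(c\), \(b\), \(r\), \(r_{1}\), \(r_{2}\) are test values with \(1<b\), \(c>2\sqrt{b}\), \(0<r<1\) and small delays, the roots \(\lambda_{1}(r_{1})\), \(\mu_{1}(r_{1})\) are computed by the same secant iteration used above, and the two quasi-upper-solution expressions for (6.4) are evaluated on a grid that avoids the kink of \(\varphi^{\prime\prime}\) at \(t=0\).

```python
# Illustrative spot-check of Claim 6.3: both quasi-upper-solution expressions
# for (6.4) are nonpositive almost everywhere for the profiles phi_1, phi_2.
import numpy as np

c, b, r, r1, r2 = 3.0, 1.5, 0.25, 0.02, 0.05   # test values only

def root(q, z0, steps=60):
    z, h = z0, 1e-7
    for _ in range(steps):
        z = z - q(z) * h / (q(z + h) - q(z))
    return z

lam1 = root(lambda z: z*z - c*z*np.exp(r1*z) + np.exp(r1*z), (c + np.sqrt(c*c - 4.0)) / 2.0)
mu1  = root(lambda z: z*z - c*z*np.exp(r1*z) + b*np.exp(r1*z), (c + np.sqrt(c*c - 4.0*b)) / 2.0)

def profile(t, lam):            # phi, phi', phi'' for the piecewise profile
    neg  = t <= 0
    phi  = np.where(neg, 0.5*np.exp(lam*t), 1.0 - 0.5*np.exp(-lam*t))
    dphi = np.where(neg, 0.5*lam*np.exp(lam*t), 0.5*lam*np.exp(-lam*t))
    d2   = np.where(neg, 0.5*lam*lam*np.exp(lam*t), -0.5*lam*lam*np.exp(-lam*t))
    return phi, dphi, d2

t = np.linspace(-10.0, 10.0, 4001)
t = t[np.abs(t) > 1e-9]                        # avoid the kink of phi'' at t = 0
p1,  dp1,  d2p1 = profile(t, lam1)
p2,  dp2,  d2p2 = profile(t, mu1)
p1s, dp1s, _    = profile(t + r1, lam1)        # shifted argument t + r1
p2s, dp2s, _    = profile(t + r1, mu1)
p2ss, _,   _    = profile(t + r1 - r2, mu1)    # argument t + r1 - r2

E1 = d2p1 - c*dp1s + p1s*((1.0 - r) - p1s + r*p2ss)    # first equation of (6.4)
E2 = d2p2 - c*dp2s + b*p1s*(1.0 - p2s)                 # second equation of (6.4)
print("sup E1 = %.3e,  sup E2 = %.3e  (both should be <= 0)" % (E1.max(), E2.max()))
```

On this grid both suprema come out nonpositive, in agreement with the claim; the rigorous argument, covering all \(t\) and all sufficiently small \(r_{1}\), is the proof that follows.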
Proof.: **Case \(t\leq-r_{1}\)**: Plugging these functions into the first equation in (6.4) we have for \(t+r_{1}\leq 0\) \[\left[\varphi_{1}^{\prime\prime}(t)-c\varphi_{1}^{\prime}(t+r_{1})+ \varphi_{1}(t+r_{1})\right]-r\varphi_{1}(t+r_{1})(1-\varphi_{2}(t+r_{1}-r_{2}) )-\varphi_{1}^{2}(t+r_{1})\] \[=\left[\frac{\lambda_{1}^{2}}{2}e^{\lambda_{1}t}-c\frac{\lambda_{ 1}}{2}e^{\lambda_{1}(t+r_{1})}+\frac{1}{2}e^{\lambda_{1}(t+r_{1})}\right]-re^ {\lambda_{1}(t+r_{1})}\left(1-\varphi_{2}(t+r_{1}-r_{2})\right)-\frac{1}{4}e^{ 2\lambda_{1}(t+r_{1})}\] \[=\left[\lambda_{1}^{2}-c\lambda_{1}e^{r_{1}\lambda_{1}}+e^{r_{1} \lambda_{1}}\right]\frac{1}{2}e^{\lambda_{1}t}-re^{\lambda_{1}(t+r_{1})}\left( 1-\frac{1}{2}e^{\mu_{1}(t+r_{1}-r_{2})}\right)-\frac{1}{4}e^{2\lambda_{1}(t+r _{1})}\] \[=-re^{\lambda_{1}(t+r_{1})}\left(1-\frac{1}{2}e^{\mu_{1}(t+r_{1}- r_{2})}\right)-\frac{1}{4}e^{2\lambda_{1}(t+r_{1})}\leq 0.\] For the second equation in (6.4), for \(t\leq-r_{1}\) we have \[\varphi_{2}^{\prime\prime}(t)-c\varphi_{2}^{\prime}(t+r_{1})+b \varphi_{1}(t+r_{1})\left(1-\varphi_{2}(t+r_{1})\right)\] \[=\frac{\mu_{1}^{2}}{2}e^{\mu_{1}t}-c\frac{\mu_{1}}{2}e^{\mu_{1}( t+r_{1})}+\frac{b}{2}e^{\lambda_{1}(t+r_{1})}\left(1-\frac{1}{2}e^{\mu_{1}(t+r_{1}) }\right)\] \[=\frac{1}{2}\left(\mu_{1}^{2}-c\mu_{1}e^{r_{1}\mu_{1}}+be^{r_{1} \mu_{1}}\right)e^{\mu_{1}t}+\frac{b}{2}\left(e^{\lambda_{1}(t+r_{1})}-e^{\mu_ {1}(t+r_{1})}\right)-\frac{b}{4}e^{(\lambda_{1}+\mu_{1})(t+r_{1})}\] \[=\frac{b}{2}\left(e^{\lambda_{1}(t+r_{1})}-e^{\mu_{1}(t+r_{1})} \right)-\frac{b}{4}e^{(\lambda_{1}+\mu_{1})(t+r_{1})}\leq 0\] since \(\lambda_{1}>\mu_{1}\) and \(t+r_{1}\leq 0\). **Case: \(-r_{1}\leq t\leq 0\)**: \[A:= \left[\varphi_{1}^{\prime\prime}(t)-c\varphi_{1}^{\prime}(t+r_{1} )+\varphi_{1}(t+r_{1})\right]-r\varphi_{1}(t+r_{1})(1-\varphi_{2}(t+r_{1}-r_{2} ))-\varphi_{1}^{2}(t+r_{1})\] \[=\left[\frac{\lambda_{1}^{2}}{2}e^{\lambda_{1}t}-c\frac{\lambda_{ 1}}{2}e^{-\lambda_{1}(t+r_{1})}+1-\frac{1}{2}e^{-\lambda_{1}(t+r_{1})}\right]\] \[\quad-r\left(1-\frac{1}{2}e^{-\lambda_{1}(t+r_{1})}\right)(1- \varphi_{2}(t+r_{1}-r_{2}))-\varphi_{1}^{2}(t+r_{1})\] As \(\lambda_{1}\) is a root of Eq.(6.7) \[A =\frac{c\lambda_{1}}{2}\left(e^{\lambda_{1}(t+r_{1})}-e^{-\lambda _{1}(t+r_{1})}\right)+1-\frac{1}{2}\left(e^{\lambda_{1}(t+r_{1})}+e^{-\lambda _{1}(t+r_{1})}\right)\] \[\quad-r\left(1-\frac{1}{2}e^{-\lambda_{1}(t+r_{1})}\right)(1- \varphi_{2}(t+r_{1}-r_{2}))-\varphi_{1}^{2}(t+r_{1})\] \[=c\lambda_{1}\sinh(\lambda_{1}(t+r_{1}))+1-\cosh(\lambda_{1}(t+r_ {1}))\] \[\quad-r\left(1-\frac{1}{2}e^{-\lambda_{1}(t+r_{1})}\right)(1- \varphi_{2}(t+r_{1}-r_{2}))-\varphi_{1}^{2}(t+r_{1}).\] As \(-r_{1}\leq t\leq 0\) and \(r_{1}\) is sufficiently small, using the power expansions of \(\sinh x\) and \(\cosh x\) we have \[\sinh(\lambda_{1}(t+r_{1}))\approx\lambda_{1}(t+r_{1})+o(r_{1}^{2}),\quad 1- \cosh(\lambda_{1}(t+r_{1}))\approx-\frac{(\lambda_{1}(t+r_{1}))^{2}}{2}+o(r_{1} ^{2}).\] Finally, we have \[A =c\lambda_{1}^{2}(t+r_{1})-\frac{(\lambda_{1}(t+r_{1}))^{2}}{2}+o( r_{1}^{2})\] \[\quad-r\left(1-\frac{1}{2}e^{-\lambda_{1}(t+r_{1})}\right)(1- \varphi_{2}(t+r_{1}-r_{2}))-\varphi_{1}^{2}(t+r_{1})\leq 0,\] because when \(r_{1}\) is sufficiently small, the limit of the right hand side is \(-(r+1)/4\) as \(r_{1}\to 0\). 
For the second equation we have \[B :=\varphi_{2}^{\prime\prime}(t)-c\varphi_{2}^{\prime}(t+r_{1})+b \varphi_{1}(t+r_{1})\left(1-\varphi_{2}(t+r_{1})\right)\] \[=\frac{\mu_{1}^{2}}{2}e^{\mu_{1}t}-\frac{c\mu_{1}}{2}e^{-\mu_{1}( t+r_{1})}+b\left(1-\frac{1}{2}e^{-\lambda_{1}(t+r_{1})}\right)\frac{1}{2}e^{- \mu_{1}(t+r_{1})}\] \[=\left[\frac{\mu_{1}^{2}}{2}e^{\mu_{1}t}-\frac{c\mu_{1}}{2}e^{ \mu_{1}(t+r_{1})}+\frac{b}{2}e^{\mu_{1}(t+r_{1})}\right]+\frac{c\mu_{1}}{2}e^{ \mu_{1}(t+r_{1})}-\frac{c\mu_{1}}{2}e^{-\mu_{1}(t+r_{1})}\] \[\quad-\frac{b}{2}e^{\mu_{1}(t+r_{1})}+\frac{b}{2}e^{-\mu_{1}(t+r_{ 1})}-\frac{b}{4}e^{-(\lambda_{1}+\mu_{1})(t+r_{1})}.\] As \(\mu_{1}\) is a root of Eq.(6.9), \[B=\frac{c\mu_{1}}{2}e^{\mu_{1}(t+r_{1})}-\frac{c\mu_{1}}{2}e^{-\mu_{1}(t+r_{1 })}-\frac{b}{2}e^{\mu_{1}(t+r_{1})}+\frac{b}{2}e^{-\mu_{1}(t+r_{1})}-\frac{b}{ 4}e^{-(\lambda_{1}+\mu_{1})(t+r_{1})}.\] As \(-r_{1}\leq t\leq 0\) and the limit of the right hand side is \(-b/4<0\) as \(r_{1}\to 0\), it follows that for sufficiently small \(r_{1}\) we will have \(B\leq 0\). **Case \(t>0\)**: \[A :=[\varphi_{1}^{\prime\prime}(t)-c\varphi_{1}^{\prime}(t+r_{1})+ \varphi_{1}(t+r_{1})]-r\varphi_{1}(t+r_{1})(1-\varphi_{2}(t+r_{1}-r_{2}))- \varphi_{1}^{2}(t+r_{1})\] \[=\left[\frac{-\lambda_{1}^{2}}{2}e^{-\lambda_{1}t}-c\frac{\lambda _{1}}{2}e^{-\lambda_{1}(t+r_{1})}+1-\frac{1}{2}e^{-\lambda_{1}(t+r_{1})}\right]\] \[\quad-r\varphi_{1}(t+r_{1})(1-\varphi_{2}(t+r_{1}-r_{2}))- \varphi_{1}^{2}(t+r_{1})\] \[=\left[\frac{-\lambda_{1}^{2}}{2}e^{-\lambda_{1}t}+c\frac{\lambda _{1}}{2}e^{-\lambda_{1}t+\lambda_{1}r_{1}}-\frac{1}{2}e^{-\lambda_{1}t+\lambda _{1}r_{1}}\right]\] \[-c\frac{\lambda_{1}}{2}e^{-\lambda_{1}t+\lambda_{1}r_{1}}-c\frac {\lambda_{1}}{2}e^{-\lambda_{1}t-\lambda_{1}r_{1}}+\frac{1}{2}e^{-\lambda_{1}t +\lambda_{1}r_{1}}+1-\frac{1}{2}e^{-\lambda_{1}(t+r_{1})}\] \[-r\varphi_{1}(t+r_{1})(1-\varphi_{2}(t+r_{1}-r_{2}))-\varphi_{1} ^{2}(t+r_{1})\] Since \(\lambda_{1}\) is a root of Eq.(6.7), we have \[\frac{\lambda_{1}^{2}}{2}e^{-\lambda_{1}t}-c\frac{\lambda_{1}}{2}e^{-\lambda_{1} t+\lambda_{1}r_{1}}+\frac{1}{2}e^{-\lambda_{1}t+\lambda_{1}r_{1}}=0.\] Hence, \[A =-c\frac{\lambda_{1}}{2}e^{-\lambda_{1}t+\lambda_{1}r_{1}}-c\frac{ \lambda_{1}}{2}e^{-\lambda_{1}t-\lambda_{1}r_{1}}+\frac{1}{2}e^{-\lambda_{1}t+ \lambda_{1}r_{1}}+1-\frac{1}{2}e^{-\lambda_{1}(t+r_{1})}\] \[-r\varphi_{1}(t+r_{1})(1-\varphi_{2}(t+r_{1}-r_{2}))-\varphi_{1}^ {2}(t+r_{1})\] \[=\left[1-c\lambda_{1}\cosh(\lambda_{1}r_{1})+\sinh(\lambda_{1}r_{ 1})\right]e^{-\lambda_{1}t}\] \[-r\varphi_{1}(t+r_{1})(1-\varphi_{2}(t+r_{1}-r_{2}))-\varphi_{1}^ {2}(t+r_{1}).\] Since \[\lim_{r_{1}\downarrow 0}\left[1-c\lambda_{1}\cosh(\lambda_{1}r_{1})+\sinh( \lambda_{1}r_{1})\right]<0,\] it follows that \(A\leq 0\) for sufficiently small \(r_{1}\). 
For the second equation
\[B :=\varphi_{2}^{\prime\prime}(t)-c\varphi_{2}^{\prime}(t+r_{1})+b\varphi_{1}(t+r_{1})\left(1-\varphi_{2}(t+r_{1})\right)\]
\[=-\frac{\mu_{1}^{2}}{2}e^{-\mu_{1}t}-\frac{c\mu_{1}}{2}e^{-\mu_{1}(t+r_{1})}+b\left(1-\frac{1}{2}e^{-\lambda_{1}(t+r_{1})}\right)\frac{1}{2}e^{-\mu_{1}(t+r_{1})}\]
\[=\left[-\frac{\mu_{1}^{2}}{2}e^{-\mu_{1}t}+\frac{c\mu_{1}}{2}e^{-\mu_{1}t+\mu_{1}r_{1}}-\frac{b}{2}e^{-\mu_{1}t+\mu_{1}r_{1}}\right]\]
\[\quad-\frac{c\mu_{1}}{2}e^{-\mu_{1}t+\mu_{1}r_{1}}-\frac{c\mu_{1}}{2}e^{-\mu_{1}t-\mu_{1}r_{1}}+\frac{b}{2}e^{-\mu_{1}t+\mu_{1}r_{1}}\]
\[\quad+\frac{b}{2}e^{-\mu_{1}(t+r_{1})}-\frac{b}{4}e^{-(\lambda_{1}+\mu_{1})(t+r_{1})}.\]
As \(\mu_{1}\) is a root of Eq.(6.9), the bracketed term vanishes and
\[B =-\frac{c\mu_{1}}{2}e^{-\mu_{1}t+\mu_{1}r_{1}}-\frac{c\mu_{1}}{2}e^{-\mu_{1}t-\mu_{1}r_{1}}+\frac{b}{2}e^{-\mu_{1}t+\mu_{1}r_{1}}\]
\[\quad+\frac{b}{2}e^{-\mu_{1}(t+r_{1})}-\frac{b}{4}e^{-(\lambda_{1}+\mu_{1})(t+r_{1})}\]
\[=\left(b-c\mu_{1}\right)\cosh(\mu_{1}r_{1})e^{-\mu_{1}t}-\frac{b}{4}e^{-(\lambda_{1}+\mu_{1})(t+r_{1})}.\]
Since \(c>2\sqrt{b}\) we have \(c^{2}>4b\), and hence
\[b<\frac{c^{2}}{4}<\frac{c\left(c+\sqrt{c^{2}-4b}\right)}{2}=c\mu_{0},\]
so that \(c\mu_{1}>b\) for sufficiently small \(r_{1}\). This yields \(B\leq 0\). The claim is proved.

#### 6.1.2. Quasi-lower solutions

To construct a quasi-lower solution we consider the equation
\[\lambda^{2}-c\lambda e^{r_{1}\lambda}+\frac{e^{r_{1}\lambda}}{2}=0. \tag{6.11}\]
Arguing as in Claim 6.1, for sufficiently small \(r_{1}\) Eq.(6.11) has exactly two real roots, lying in small neighborhoods of the two distinct roots
\[\eta_{1,2}=\frac{c\pm\sqrt{c^{2}-2}}{2}\]
of the equation
\[\lambda^{2}-c\lambda+\frac{1}{2}=0.\]
We will denote by \(\lambda_{2}\) the real root of Eq.(6.11) in the small neighborhood of \(\eta_{2}=(c+\sqrt{c^{2}-2})/2\). Note that for sufficiently small \(r_{1}\), since \(\eta_{2}>\lambda_{0}\), we have \(\lambda_{2}>\lambda_{1}\). For the lower solutions of Eq.(6.4) we set
\[\underline{\varphi_{2}}(t)=0,\ t\in\mathbb{R},\quad\underline{\varphi_{1}}(t)=\begin{cases}\frac{e^{\lambda_{2}t}}{4},\ t\leq -T,\\ \frac{1}{4}f(t),\ -T\leq t\leq T,\\ \frac{1}{2},\ t>T,\end{cases}\]
where \(T\) is a large number and \(f\) is a cubic polynomial, with coefficients \(a,b\) to be determined, chosen so that:

i) it bridges smoothly the function \(e^{\lambda_{2}t}/4\) and the constant function \(g(t)=1/2\);

ii) \(f(-T)=(1/4)e^{-\lambda_{2}T}\), \(f^{\prime}(-T)=(\lambda_{2}/4)e^{-\lambda_{2}T}\), \(f^{\prime}(T)=0\), \(f(T)=1/2\).

It is possible to find the explicit form of \(a,b\) by using the properties above. In fact,
\[f(-T) =-8T^{3}a+4T^{2}b+1=e^{-\lambda_{2}T}\]
\[f^{\prime}(-T) =-12T^{2}a-4Tb=\lambda_{2}e^{-\lambda_{2}T}.\]
This allows us to find the following matrices
\[A=\begin{bmatrix}-8T^{3}&4T^{2}\\ -12T^{2}&-4T\end{bmatrix},\ A_{a}=\begin{bmatrix}e^{-\lambda_{2}T}-1&4T^{2}\\ \lambda_{2}e^{-\lambda_{2}T}&-4T\end{bmatrix},\ A_{b}=\begin{bmatrix}-8T^{3}&e^{-\lambda_{2}T}-1\\ -12T^{2}&\lambda_{2}e^{-\lambda_{2}T}\end{bmatrix}.\]
Then, by Cramer's rule we have
\[a=\frac{|A_{a}|}{|A|}=\frac{\lambda_{2}Te^{-\lambda_{2}T}+e^{-\lambda_{2}T}-1}{4T^{3}}\]
\[b=\frac{|A_{b}|}{|A|}=\frac{2\lambda_{2}Te^{-\lambda_{2}T}+3\left(e^{-\lambda_{2}T}-1\right)}{4T^{2}}.\]
Moreover,
\[\lim_{T\to\infty}\sup_{-T\leq t\leq T}\max\{|f^{\prime}(t)|,|f^{\prime\prime}(t)|\}=0. \tag{6.12}\]
By examining \(f\) we may conclude that for large \(T\) the function \(f(t)\) has no critical point inside \([-T,T]\), and is actually increasing on \([-T,T]\). The following claim is valid.

**Claim 6.4**.: Assume that \(0<r\leq 1/4\) and \(T\) is a sufficiently large number. 
Then, \(\underline{\varphi}(t):=(\underline{\varphi}_{1}(t),\underline{\varphi}_{2}(t))^{T}\) is a quasi-lower solution of Eq.(6.4) and
\[0\leq\underline{\varphi}(t)\leq\overline{\varphi}(t)\leq 1,\ t\in\mathbb{R},\]
where \(\overline{\varphi}=(\varphi_{1},\varphi_{2})^{T}\) is defined from Claim 6.3.

Proof.: It is easy to see that \(0\leq\underline{\varphi}(t)\leq\overline{\varphi}(t)\). Next, we will verify that \(\underline{\varphi}(t)\) is a quasi-lower solution of Eq.(6.4). Substituting the function \(\underline{\varphi}(t)\) into the second equation of Eq.(6.4) yields that
\[\underline{\varphi}_{2}^{\prime\prime}(t)-c\underline{\varphi}_{2}^{\prime }(t+r_{1})+b\underline{\varphi}_{1}(t+r_{1})\left(1-\underline{\varphi}_{2}( t+r_{1})\right)=b\underline{\varphi}_{1}(t+r_{1})\geq 0. \tag{6.13}\]
For the first equation, if \(t\leq-T-r_{1}\), since
\[\underline{\varphi}_{1}^{\prime\prime}(t)-c\underline{\varphi}_{1}^{\prime}( t+r_{1})+\frac{1}{2}\underline{\varphi}_{1}(t+r_{1})=0,\]
we have
\[\underline{\varphi}_{1}^{\prime\prime}(t)-c\underline{\varphi}_{1 }^{\prime}(t+r_{1})+\underline{\varphi}_{1}(t+r_{1})\left((1-r)-\underline{ \varphi}_{1}(t+r_{1})-r\underline{\varphi}_{2}(t+r_{1}-r_{2})\right)\]
\[=\underline{\varphi}_{1}^{\prime\prime}(t)-c\underline{\varphi}_{1 }^{\prime}(t+r_{1})+\underline{\varphi}_{1}(t+r_{1})\left((1-r)-\underline{ \varphi}_{1}(t+r_{1})\right)\]
\[=\left[\underline{\varphi}_{1}^{\prime\prime}(t)-c\underline{ \varphi}_{1}^{\prime}(t+r_{1})+\frac{1}{2}\underline{\varphi}_{1}(t+r_{1}) \right]+\underline{\varphi}_{1}(t+r_{1})\left(\frac{1}{2}-r-\underline{ \varphi}_{1}(t+r_{1})\right)\]
\[=\underline{\varphi}_{1}(t+r_{1})\left(\frac{1}{2}-r-\underline{ \varphi}_{1}(t+r_{1})\right)\geq 0.\]
If \(-T-r_{1}\leq t\leq T\), then the first equation becomes
\[B:=\underline{\varphi}_{1}^{\prime\prime}(t)-c\underline{\varphi}_{1}^{\prime} (t+r_{1})+\underline{\varphi}_{1}(t+r_{1})\left((1-r)-\underline{\varphi}_{1 }(t+r_{1})\right).\]
Note that on this interval, by (6.12), the quantity
\[\sup_{-T-r_{1}\leq t\leq T}\left(|\underline{\varphi}_{1}^{\prime\prime}(t)|+c|\underline{\varphi}_{1}^{\prime}(t+r_{1})|\right)\]
could be made as small as we like by taking \(T\) sufficiently large. Also, by definition of \(\underline{\varphi}_{1}\) on this interval, we can see that
\[0<\inf_{-T\leq\xi\leq T}\underline{\varphi}_{1}(\xi)\leq\sup_{-T\leq\xi\leq T} \underline{\varphi}_{1}(\xi)\leq\frac{1}{4},\]
so this yields \(B\geq 0\). When \(t>T\),
\[B:= \underline{\varphi}_{1}^{\prime\prime}(t)-c\underline{\varphi}_{1 }^{\prime}(t+r_{1})+\underline{\varphi}_{1}(t+r_{1})\left((1-r)-\underline{ \varphi}_{1}(t+r_{1})-r\underline{\varphi}_{2}(t+r_{1}-r_{2})\right)\]
\[=\underline{\varphi}_{1}^{\prime\prime}(t)-c\underline{\varphi}_{ 1}^{\prime}(t+r_{1})+\underline{\varphi}_{1}(t+r_{1})\left((1-r)-\underline{ \varphi}_{1}(t+r_{1})\right)\]
\[=\frac{1}{4}\left((1-r)-\frac{1}{4}\right)\]
\[\geq\frac{1}{8}.\]
Therefore, \(B\geq 0\). The claim is proved.

**Corollary 6.5**.: Assume that \(0<r\leq 1/4\), \(2<2\sqrt{b}<c\). Then, Eq.(6.4) has a traveling wave solution if the delays \(\tau_{1},\tau_{2}\) are sufficiently small.

### Traveling waves for Fisher-KPP equations with delay in diffusion

We will consider the traveling wave problem for the equation
\[\frac{\partial u(x,t)}{\partial t}=D\frac{\partial^{2} u(x,t-\tau_{1})}{\partial x ^{2}}+u(x,t-\tau_{2})\left(1-u(x,t)\right),\ \tau_{1},\tau_{2}>0. 
\tag{6.14}\]
In fact, if we make the substitution \(\phi(x+ct)=u(x,t)\) as earlier, then for \(\xi=x+ct,\ c>0,\ D=1\),
\[u(x,t-\tau_{2})=\phi(x+c(t-\tau_{2}))=\phi(x+ct-c\tau_{2})=\phi( \xi-c\tau_{2}),\]
\[\frac{\partial u(x,t)}{\partial t}=c\phi^{\prime}(x+ct)=c\phi^{ \prime}(\xi),\]
\[\frac{\partial^{2}u(x,t-\tau_{1})}{\partial x^{2}}=\phi^{\prime \prime}(x+c(t-\tau_{1}))=\phi^{\prime\prime}(\xi-c\tau_{1}).\]
Taking \(r_{1}=c\tau_{1},\ r_{2}=c\tau_{2}\), Eq.(6.14) becomes
\[c\phi^{\prime}(\xi) =\phi^{\prime\prime}(\xi-r_{1})+\phi(\xi-r_{2})\left(1-\phi(\xi)\right),\]
that is,
\[0=\phi^{\prime\prime}(\xi-r_{1})-c\phi^{\prime}(\xi)+\phi(\xi-r_{2})\left(1-\phi(\xi)\right).\]
For simplicity, writing \(t=\xi-r_{1}\), we obtain an equation of the form
\[x^{\prime\prime}(t)-cx^{\prime}(t+r_{1})+x(t+(r_{1}-r_{2}))\left(1-x(t+r_{1}) \right)=0. \tag{6.15}\]
Define \(f_{c}(\varphi)=\varphi(-r_{2})\left(1-\varphi(0)\right)\); then \(f_{c}\) satisfies
\[f_{c}(\varphi)-f_{c}(\psi)+\beta(\varphi(0)-\psi(0))\geq 0\]
whenever \(0\leq\psi\leq\varphi\leq 1\) and \(\beta\geq 1\).

#### 6.2.1. Upper Solutions

The quadratic function \(P(\mu)=-\mu^{2}+c\mu-1\) has two positive roots
\[0<\mu_{1}=\frac{c-\sqrt{c^{2}-4}}{2}<\mu_{2}=\frac{c+\sqrt{c^{2}-4}}{2}.\]

**Proposition 6.6**.: Let \(c>2\), \(0<\theta<1\), let \(\mu_{1}\) be as defined above, and let \(r_{i}>0\), \(i=1,2\), be sufficiently small. Then the function
\[\bar{\varphi}(t)=\frac{1}{1+\theta e^{-\mu_{1}t}},\quad t\in\mathbb{R},\]
is an upper solution of Eq.(6.15) that belongs to \(\Gamma\).

Proof.: The proof is similar to that in Wu and Zou [27] with slight modifications. It is easy to show \(\bar{\varphi}(t)\in\Gamma\). In fact, \(\bar{\varphi}(t)\) is differentiable for all \(t\in\mathbb{R}\), and by direct calculation
\[\bar{\varphi}^{\prime}(t)=\frac{\theta\mu_{1}e^{-\mu_{1}t}}{(1+\theta e^{-\mu _{1}t})^{2}}>0.\]
Thus, \(\bar{\varphi}(t)\) is nondecreasing. We now turn our attention to the asymptotic behavior of \(\bar{\varphi}(t)\). Indeed,
\[\lim_{t\to-\infty}\bar{\varphi}(t)=\lim_{t\to-\infty}\frac{1}{1+\theta e^{-\mu _{1}t}}=0,\]
and
\[\lim_{t\to\infty}\bar{\varphi}(t)=\lim_{t\to\infty}\frac{1}{1+\theta e^{-\mu_{ 1}t}}=1.\]
Therefore, \(\bar{\varphi}(t)\in\Gamma\). In order to show that \(\bar{\varphi}(t)\) is an upper solution we first need to find \(\bar{\varphi}^{\prime}(t),\bar{\varphi}^{\prime\prime}(t)\):
\[\bar{\varphi}^{\prime}(t)=\frac{\theta\mu_{1}e^{-\mu_{1}t}}{\left(1+\theta e^ {-\mu_{1}t}\right)^{2}},\ \bar{\varphi}^{\prime\prime}(t)=\frac{\theta\mu_{1}^{2}e^{-\mu_{1}t}\left( \theta e^{-\mu_{1}t}-1\right)}{\left(1+\theta e^{-\mu_{1}t}\right)^{3}}.\]
It is easy to see that
\[\sup_{t\in\mathbb{R}}|\bar{\varphi}(t)|<\infty,\ \sup_{t\in\mathbb{R}}|\bar{ \varphi}^{\prime}(t)|<\infty,\]
and that \(\bar{\varphi}^{\prime\prime}(t)\) is bounded on \(\mathbb{R}\). The only piece that is left to prove is
\[D\bar{\varphi}^{\prime\prime}(t)-c\bar{\varphi}^{\prime}(t+r_{1})+\bar{\varphi }(t+r_{1}-r_{2})\left(1-\bar{\varphi}(t+r_{1})\right)\leq 0.\]
The first and second order derivatives without delay were calculated above. 
Now,
\[\bar{\varphi}(t+r_{1})=\frac{1}{1+\theta e^{-\mu_{1}(t+r_{1})}},\;\bar{\varphi}(t +r_{1}-r_{2})=\frac{1}{1+\theta e^{-\mu_{1}(t+r_{1}-r_{2})}},\;\bar{\varphi}^{ \prime}(t+r_{1})=\frac{\theta\mu_{1}e^{-\mu_{1}(t+r_{1})}}{\left(1+\theta e^{- \mu_{1}(t+r_{1})}\right)^{2}}.\]
Substituting into the wave equation, we perform the following calculation:
\[\bar{\varphi}^{\prime\prime}(t)-c\bar{\varphi}^{\prime}(t+r_{1}) +\bar{\varphi}(t+r_{1}-r_{2})\left(1-\bar{\varphi}(t+r_{1})\right)\]
\[=\frac{\theta\mu_{1}^{2}e^{-\mu_{1}t}\left(\theta e^{-\mu_{1}t}-1 \right)}{\left(1+\theta e^{-\mu_{1}t}\right)^{3}}-c\frac{\theta\mu_{1}e^{- \mu_{1}(t+r_{1})}}{\left(1+\theta e^{-\mu_{1}(t+r_{1})}\right)^{2}}+\frac{1}{ 1+\theta e^{-\mu_{1}(t+r_{1}-r_{2})}}\left(1-\frac{1}{1+\theta e^{-\mu_{1}(t+ r_{1})}}\right).\]
We are concerned with small delays. In fact,
\[\lim_{r_{1},r_{2}\to 0}\left[\bar{\varphi}^{\prime\prime}(t)-c\bar{ \varphi}^{\prime}(t+r_{1})+\bar{\varphi}(t+r_{1}-r_{2})\left(1-\bar{\varphi}(t+r_{1})\right)\right]\]
\[=\frac{\theta e^{-\mu_{1}t}}{\left(1+\theta e^{-\mu_{1}t}\right) ^{2}}\left(\frac{2\theta\mu_{1}^{2}e^{-\mu_{1}t}}{1+\theta e^{-\mu_{1}t}}-\mu_{1}^{2}-c \mu_{1}+1\right)\]
\[=-\frac{\theta e^{-\mu_{1}t}}{\left(1+\theta e^{-\mu_{1}t} \right)^{3}}\left(-\theta\left(\mu_{1}^{2}-c\mu_{1}+1\right)e^{-\mu_{1}t}+ \mu_{1}^{2}+c\mu_{1}-1\right).\]
Notice that
\[-\frac{\theta e^{-\mu_{1}t}}{\left(1+\theta e^{-\mu_{1}t}\right)^{3}}<0,\;t \in\mathbb{R},\quad\mu_{1}^{2}-c\mu_{1}+1=0.\]
We employ the fact that \(\mu_{1}^{2}=c\mu_{1}-1\) and that \(c\mu_{1}\in(1,2)\) when \(c>2\); the latter fact was shown in [27]. Thus, we can find some \(r^{*}(c)>0\) such that when \(0<r_{i}<r^{*}(c),\;i=1,2,\) we have
\[\bar{\varphi}^{\prime\prime}(t)-c\bar{\varphi}^{\prime}(t+r_{1})+\bar{\varphi }(t+r_{1}-r_{2})\left(1-\bar{\varphi}(t+r_{1})\right)\leq-\frac{2\theta e^{- \mu_{1}t}}{\left(1+\theta e^{-\mu_{1}t}\right)^{3}}\left(c\mu_{1}-1\right)<0.\]

#### 6.2.2. Quasi-Lower Solutions

The construction of lower solutions will be similar to the construction for the Belousov-Zhabotinskii equation above. Indeed, we define \(\underline{\varphi}(t)\) in exactly the same manner as \(\underline{\varphi}_{1}(t)\) there.

**Proposition 6.7**.: Let \(0<r_{2}<r_{1}\) be small, let \(c>2\), and take \(\underline{\varphi}(t),\bar{\varphi}(t)\) as above. Then
1. \(0<\underline{\varphi}(t)\leq\bar{\varphi}(t)\leq 1\) for all \(t\in\mathbb{R}\);
2. \(\underline{\varphi}(t)\) is a quasi-lower solution of Eq.(6.15).

Proof.: We will show part \(i.)\) via direct computation. This will be done in cases. 
_Case 1_: \(t\leq-T-r_{1}.\) We look at
\[\bar{\varphi}(t)-\underline{\varphi}(t) =\frac{1}{1+\theta e^{-\mu_{1}t}}-\frac{e^{\lambda_{2}t}}{4}= \frac{4-\left(1+\theta e^{-\mu_{1}t}\right)e^{\lambda_{2}t}}{4\left(1+\theta e ^{-\mu_{1}t}\right)}\]
\[=\frac{4-e^{\lambda_{2}t}-\theta e^{-\mu_{1}t}e^{\lambda_{2}t}}{4 \left(1+\theta e^{-\mu_{1}t}\right)}\geq\frac{3-\theta e^{-\mu_{1}t}e^{ \lambda_{2}t}}{4\left(1+\theta e^{-\mu_{1}t}\right)}=\frac{3-\theta e^{( \lambda_{2}-\mu_{1})t}}{4\left(1+\theta e^{-\mu_{1}t}\right)}.\]
It is clear that \(1+\theta e^{-\mu_{1}t}>0\), so we can focus on \(3-\theta e^{(\lambda_{2}-\mu_{1})t}\). The condition \(0<\theta<1\) gives
\[3-\theta e^{(\lambda_{2}-\mu_{1})t}\geq 3-e^{(\lambda_{2}-\mu_{1})t}.\]
We are concerned with small delays, so when \(r_{1}\to 0,\ \lambda_{2}\rightarrow\eta_{2}.\) Therefore, we look at
\[\eta_{2}-\mu_{1}=\frac{c+\sqrt{c^{2}-2}}{2}-\frac{c-\sqrt{c^{2}-4}}{2}=\frac{ \sqrt{c^{2}-2}+\sqrt{c^{2}-4}}{2}>0.\]
This gives \(0<e^{(\eta_{2}-\mu_{1})t}<1\), so we can find some \(r(c)>0\) such that when \(r_{1},r_{2}\leq r(c)\) we have
\[\bar{\varphi}(t)-\underline{\varphi}(t)\geq 0.\]

_Case 2_: \(-T-r_{1}<t\leq T.\) The proof is a result of the following facts.
* \(\bar{\varphi}(t)\geq\underline{\varphi}(t)\) when \(t=-T-r_{1}\)
* \(\bar{\varphi}^{\prime}(t)\geq\underline{\varphi}^{\prime}(t)\) in the interval for sufficiently large \(T\)
* \(\bar{\varphi}(t)\geq\underline{\varphi}(t)\) when \(t=T\)

_Case 3_: \(t>T.\) The proof is a direct consequence of the fact that when \(t>T\), \(\bar{\varphi}(t)>\frac{1}{2}\) and \(\underline{\varphi}(t)=\frac{1}{2}.\)

In order to show part \(ii.)\) we will modify the proof for the lower solutions of the BZ reaction, substituting the function \(\underline{\varphi}(t)\) into Eq.(6.15).

_Case 1_: When \(t\leq-T-r_{1}\), since \(r_{1},r_{2}\) are sufficiently small we have
\[\underline{\varphi}^{\prime\prime}(t)-c\underline{\varphi}^{\prime}(t+r_{1})+ \frac{1}{2}\underline{\varphi}(t+r_{1}-r_{2})=0.\]
Thus,
\[\underline{\varphi}^{\prime\prime}(t)-c\underline{\varphi}^{\prime}(t+r _{1})+\underline{\varphi}(t+r_{1}-r_{2})\left(1-\underline{\varphi}(t+r_{1})\right)\]
\[=\left[\underline{\varphi}^{\prime\prime}(t)-c\underline{\varphi}^ {\prime}(t+r_{1})+\frac{1}{2}\underline{\varphi}(t+r_{1}-r_{2})\right]+ \underline{\varphi}(t+r_{1}-r_{2})\left(\frac{1}{2}-\underline{\varphi}(t+r_{ 1})\right)\]
\[=\underline{\varphi}(t+r_{1}-r_{2})\left(\frac{1}{2}-\underline{ \varphi}(t+r_{1})\right)\geq 0.\]

_Case 2_: If \(-T-r_{1}\leq t\leq T\), then the equation becomes
\[B:=\underline{\varphi}^{\prime\prime}(t)-c\underline{\varphi}^{\prime}(t+r_{1 })+\underline{\varphi}(t+r_{1}-r_{2})\left(1-\underline{\varphi}(t+r_{1})\right).\]
Note that on this interval we have the same estimates as with the BZ reaction, so \(B\geq 0\).

_Case 3_: When \(t>T\),
\[B:= \underline{\varphi}^{\prime\prime}(t)-c\underline{\varphi}^{\prime }(t+r_{1})+\underline{\varphi}(t+r_{1}-r_{2})\left(1-\underline{\varphi}(t+r _{1})\right)\]
\[=\frac{1}{4}\left(1-\frac{1}{4}\right)\]
\[=\frac{3}{16}.\]
Therefore, \(B\geq 0\). The claim is proved. So, we have proved the following

**Corollary 6.8**.: Assume that \(c>2\) is given. Then, Eq.(6.14) has a traveling wave solution \(u(x,t)=\phi(x+ct)\) for sufficiently small delays \(\tau_{1},\tau_{2}\).
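As a quick numerical sanity check of Proposition 6.6 (a sketch of ours, not part of the proof), one can evaluate the left-hand side of the upper-solution inequality for Eq.(6.15) on a grid and verify that it stays nonpositive. The values of \(c\), \(\theta\) and the delays below are illustrative choices satisfying the hypotheses, not values taken from the text.

```python
import numpy as np

# Illustrative parameters (assumed): c > 2, 0 < theta < 1, small delays r1, r2.
c, theta = 2.5, 0.5
r1, r2 = 0.02, 0.01
mu1 = (c - np.sqrt(c**2 - 4.0)) / 2.0   # smaller positive root of mu^2 - c*mu + 1 = 0

def phi(t):
    # Upper solution of Proposition 6.6: phi(t) = 1 / (1 + theta*exp(-mu1*t)).
    return 1.0 / (1.0 + theta * np.exp(-mu1 * t))

def dphi(t):
    # First derivative, as computed in the proof.
    u = theta * np.exp(-mu1 * t)
    return mu1 * u / (1.0 + u) ** 2

def d2phi(t):
    # Second derivative, as computed in the proof.
    u = theta * np.exp(-mu1 * t)
    return mu1 ** 2 * u * (u - 1.0) / (1.0 + u) ** 3

t = np.linspace(-40.0, 40.0, 200001)
lhs = d2phi(t) - c * dphi(t + r1) + phi(t + r1 - r2) * (1.0 - phi(t + r1))
print(mu1, lhs.max())       # the maximum should be <= 0 (upper-solution inequality)
assert lhs.max() <= 0.0
```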
2302.07636
DP-BART for Privatized Text Rewriting under Local Differential Privacy
Privatized text rewriting with local differential privacy (LDP) is a recent approach that enables sharing of sensitive textual documents while formally guaranteeing privacy protection to individuals. However, existing systems face several issues, such as formal mathematical flaws, unrealistic privacy guarantees, privatization of only individual words, as well as a lack of transparency and reproducibility. In this paper, we propose a new system 'DP-BART' that largely outperforms existing LDP systems. Our approach uses a novel clipping method, iterative pruning, and further training of internal representations which drastically reduces the amount of noise required for DP guarantees. We run experiments on five textual datasets of varying sizes, rewriting them at different privacy guarantees and evaluating the rewritten texts on downstream text classification tasks. Finally, we thoroughly discuss the privatized text rewriting approach and its limitations, including the problem of the strict text adjacency constraint in the LDP paradigm that leads to the high noise requirement.
Timour Igamberdiev, Ivan Habernal
2023-02-15T13:07:34Z
http://arxiv.org/abs/2302.07636v2
# DP-BART for Privatized Text Rewriting under Local Differential Privacy ###### Abstract Privatized text rewriting with local differential privacy (LDP) is a recent approach that enables sharing of sensitive textual documents while formally guaranteeing privacy protection to individuals. However, existing systems face several issues, such as formal mathematical flaws, unrealistic privacy guarantees, privatization of only individual words, as well as a lack of transparency and reproducibility. In this paper, we propose a new system 'DP-BART' that largely outperforms existing LDP systems. Our approach uses a novel clipping method, iterative pruning, and further training of internal representations which drastically reduces the amount of noise required for DP guarantees. We run experiments on five textual datasets of varying sizes, rewriting them at different privacy guarantees and evaluating the rewritten texts on downstream text classification tasks. Finally, we thoroughly discuss the privatized text rewriting approach and its limitations, including the problem of the strict text adjacency constraint in the LDP paradigm that leads to the high noise requirement.1 Footnote 1: Our code is available at [https://github.com/trusthlt/dp-bart-private-rewriting](https://github.com/trusthlt/dp-bart-private-rewriting). ## 1 Introduction Protection of privacy is increasingly gaining attention in today's world, both among the general public and within the fields of machine learning and NLP. One very common methodology for applying privacy to an algorithm is Differential Privacy (DP) (Dwork and Roth, 2013). In simple terms, DP provides a formal guarantee that any individual's contribution to a query applied on a dataset is bounded. In other words, no individual can influence this query 'too much'. One particular method of applying DP to the domain of NLP is _differentially private text rewriting_, in which an entire document is rewritten with DP guarantees by perturbing the original text representations. For instance, given a document "I would like to fly from Denver to Los Angeles this Thursday", the system may rewrite it as "Show me flights to cities in California this week". If one is training a model on intent classification for airline travel inquiry systems, either document would be a useful data point. In this way, we avoid using the original text that has uniquely identifiable qualities of a specific author, and instead create a privatized'synthetic' example. This is in fact a form of local differential privacy (LDP), which is a stronger form of DP that is not limited to a specific dataset. The benefits of an LDP text rewriting system are immense, where the output privatized dataset can be used for any downstream analysis. We also avoid the problem of having to manually determine what specific tokens in a document are private, applying LDP to the entire document. However, there is a significant difficulty in creating such a system, with a lot of perturbation required to achieve any reasonable privacy guarantees, leading to poor downstream utility. In addition, there are several issues in existing DP text rewriting systems, such as formal flaws having been discovered in their methodology (Habernal, 2021), older types of models used (e.g. single-layer LSTM, as in Krishna et al. (2021)), high privacy budgets, as well as a lack of transparency in the claimed privacy guarantees, outlined in Igamberdiev et al. (2022). 
To address these issues, we propose **DP-BART**, a DP text rewriting system under the local DP paradigm that improves upon existing baselines and consists of several techniques that can be directly applied to a pre-trained BART model (Lewis et al., 2019), without having to design and train such a model from scratch. Despite being a large transformer architecture, it can be easily used for data privatization, not requiring many resources. Our methodology consists of a novel clipping method for the BART model's internal encoder representa tions, as well as a pruning and additional training mechanism that reduces the amount of DP noise that needs to be added to the data during the privatization process. We summarize our contributions as follows. First, we present our **DP-BART** model and its related methodologies, aimed at reducing DP noise and reaching a better privacy/utility trade-off. For comparison, we use a reimplementation of the current primary baseline for this task, the ADePT model. Second, we run experiments to investigate the privacy/utility trade-off of these models, using five unique datasets that gradually increase in size, evaluating rewritten texts on downstream text classification tasks. Finally, we thoroughly examine the feasibility of the LDP text rewriting setting, investigating issues of the high noise requirement due to the strict text adjacency constraint, trade-offs between privacy and dataset size, what exactly is the object of privatization, required computational resources, as well as limitations of the approach as a whole and possible alternatives. ## 2 Related Work We present a theoretical background on differential privacy, the BART model, and pruning for neural networks in Appendix A. Applying differential privacy to neural network training and model publishing has converged to using a mainstream method, namely DP-SGD (Abadi et al., 2016). However, the task of text privatization is still broadly unexplored, with many unanswered questions remaining, such as dealing with the unstructured nature of text and explainability of the privacy guarantees provided to textual data (Klymenko et al., 2022). Mattern et al. (2022) explored text rewriting with global differential privacy, sampling from a generative language model trained with DP. There are only a few approaches that directly tackle the problem of differentially private text rewriting with LDP. Krishna et al. (2021) developed the ADePT system, which is an RNN-based text autoencoder that incorporates DP noise to its encoder output hidden state. As described by Habernal (2021), ADePT had a formal error in calculating the Laplace noise scale, which resulted in it violating differential privacy. A more recent text rewriting system is DP-VAE (Weggenmann et al., 2022), which added constraints to the vanilla VAE model latent space (Kingma and Welling, 2014) to obtain a bounded sensitivity on its mean and variance parameters. Despite the high difficulties of the task, the paper reports surprisingly high performance for high privacy standards. Since their experimental description lacks some key details and the code base is not public, we cannot reproduce their approach. In addition, there are a number of word-level DP systems (Feysietan et al., 2019; Xu et al., 2020; Bo et al., 2021), where individual word embeddings are perturbed with DP, with new words then sampled close to these privatized vectors. As Mattern et al. 
(2022) point out, there are several shortcomings of such approaches, including a lack of obfuscating syntactic information and the inability to provide proper anonymization. In essence, these methods do not privatize a full utterance, but only single words. ## 3 Methods We outline this section as follows. First, we briefly describe the baseline method we use, being a modified version of the ADePT system by Krishna et al. (2021). Next, we investigate two main issues with applying a local DP system such as ADePT to a transformer model, namely extreme sensitivity and computational infeasibility, described in Sections 3.2.1 and 3.2.2, respectively. We then demonstrate several novel mechanisms which tackle these issues and provide numerous benefits in the privacy/utility trade-off for the local DP setting. Section 3.3 describes the clipping by value module, with an additional analysis on determining optimal settings for it provided in Appendix B. Sections 3.4 and 3.5 then describe the neuron-based pruning methods which significantly reduce the amount of noise that needs to be added to the model for a given privacy budget and increase model robustness to noise through further noisy training. Low-level specifics on the pruning methods are further provided in Appendix F. ### Baseline (ADePT) ADePT starts out with a standard autoencoder architecture. Given an input document \(x\), an encoder function Enc calculates a latent vector representation \(z\). This representation is then sent to a decoder function Dec, which reconstructs the original text \(\hat{y}\). ADePT uses a single-layer, unidirectional LSTM for both the encoder and decoder. \[z=\textsc{Enc}(x)\quad\text{and}\quad\hat{y}=\textsc{Dec}(z) \tag{1}\] To incorporate differential privacy into this model, the unbounded latent vector \(z\in\mathbb{R}^{n}\) (where \(n\) is the size of the autoencoder's hidden dimension) is bounded by its norm and the clipping constant \(C\in\mathbb{R}\). Laplace or Gaussian noise (\(\eta\)) is then added to the resulting vector, from which the decoder reconstructs the original sequence, \(\hat{y}\). For comparison with our primary methodologies below, we refer to this as the clipping by norm module, outlined in equation 2. \[z^{\prime}=z\cdot\min\left(1,\frac{C}{||z||_{2}}\right)+\eta \tag{2}\] In our experiments, we make two adjustments to this system. First, we fix a theoretical issue in the sensitivity calculation for equation 2, outlined in Habernal (2021). Instead of using the sensitivity of \(2C\) for the Laplace noise scale, outlined in **Theorem 1** of Krishna et al. (2021), we instead use the corrected sensitivity of \(2C\sqrt{n}\) from **Theorem 5.1** of Habernal (2021). Second, we fix an issue with the pre-training procedure of the model. In Krishna et al. (2021), ADePT was pre-trained on the downstream datasets with clipping, but without the added DP noise from equation 2. Igamberdiev et al. (2022) demonstrated that this results in significant memorization by the model of the input documents, even after adding DP noise during the rewriting process. In order to remedy this, we therefore pre-train the autoencoder model on a public corpus, unrelated to the downstream datasets. ### Applying LDP to Transformers There are two main issues in applying a transformer model to a local DP setting similar to ADePT, outlined below. 
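Before turning to those issues, the following minimal sketch (our illustration, not the released ADePT implementation) summarizes the baseline privatization step of Section 3.1: clipping by norm as in equation 2, followed by Laplace noise calibrated to the corrected sensitivity \(2C\sqrt{n}\). All names and parameter values here are illustrative.

```python
import numpy as np

def adept_privatize(z, C, epsilon, rng=None):
    """Clip a latent vector by L2 norm (eq. 2) and add Laplace noise.

    The Laplace scale uses the corrected l1 sensitivity 2*C*sqrt(n)
    (Habernal, 2021) rather than the original 2*C.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = z.shape[0]
    z_clipped = z * min(1.0, C / np.linalg.norm(z))
    scale = 2.0 * C * np.sqrt(n) / epsilon      # Laplace scale = sensitivity / epsilon
    return z_clipped + rng.laplace(0.0, scale, size=n)

# Example with a 1024-dimensional latent vector, matching the ADePT configuration above.
z = np.random.default_rng(0).normal(size=1024)
z_private = adept_privatize(z, C=1.0, epsilon=100.0)
```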
#### 3.2.1 Using LDP in pre-trained transformers suffers from extreme sensitivity

First, we need a significantly larger amount of noise to be added to the model, due to the increased size of the encoder output vector. Due to the cross-attention mechanism typical of transformer models, the full output vector for the BART encoder is of size \(d_{tok}\times l\), where \(d_{tok}\) is the hidden size for a particular token, while \(l\) is the sequence length. For the smaller bart-base model, using a short sequence length of \(20\), this results in a dimensionality of \(768\times 20=15360\). In comparison, ADePT's encoder output vector dimensionality is only \(1024\) in our configuration.

#### 3.2.2 High requirement of computational resources for pre-training

We experimented with clipping by norm for BART, similarly to ADePT, but found that it destroys any useful representations of the model (even prior to adding the DP noise). Additional pre-training of BART that would incorporate clipping by norm turned out to be ineffective. The remaining option to learn a model with clipping by norm would be to pre-train the model from scratch. Unlike the small ADePT model, which is a unidirectional, single-layer LSTM, pre-training a BART transformer from scratch is computationally infeasible on an academic budget. While the details of BART's computational requirements are not described in Lewis et al. (2019), we can estimate this for the relatively small bart-base model of 139M parameters that was released by the original authors,2 by comparison with other similar-sized models. For instance, the BERT model (Devlin et al., 2019), with fewer parameters (110M for bert-base), was pre-trained for 4 days on up to 16 TPUs, as described on the authors' Github repository.3

Footnote 2: [https://github.com/facebookresearch/fairseq/tree/main/examples/bart](https://github.com/facebookresearch/fairseq/tree/main/examples/bart)

Footnote 3: [https://github.com/google-research/bert](https://github.com/google-research/bert)

### DP-BART-CLV (Clipping by Value)

To address the issues with clipping by norm, we developed the **DP-BART-CLV** model, shown in Figure 1.

Figure 1: DP-BART-CLV

We analyzed the internal representations of a pre-trained BART model's encoder output vector values, using a public dataset. We found that these are mostly bounded within a couple of standard deviations from their mean. We present this analysis in detail in Appendix B. To avoid significantly altering these representations, we can therefore use clipping by value (CLV), as in equation 3.
\[\bar{z}_{i}=\min(\max(z_{i},C_{min}),C_{max}) \tag{3}\]
for any dimension \(i\) in the encoder output vector \(z\), a set minimum threshold \(C_{min}\) and maximum threshold \(C_{max}\). The bulk of values centered around the mean of \(z\) are thus left the same, without being rescaled as in equation 2. Since these values were also found to be symmetrically distributed, we modify equation 3 to set \(C=C_{max}=-C_{min}\), as in equation 4.
\[\bar{z}_{i}=\min(\max(z_{i},-C),C) \tag{4}\]
The pipeline for **DP-BART-CLV** is as follows. We first initialize a BART model using a pre-trained checkpoint, where pre-training was again done on a public dataset, separate from the downstream datasets that are to be privatized. For a given document, we put it through the encoder of the model at inference time, obtaining the encoder output vector \(z\), as in equation 5.
\[z=\textsc{Enc}(x) \tag{5}\]
where \(x\) is the input sequence and Enc is the encoder of the BART model. 
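As a quick illustration of equation 4 (again ours, not the released implementation), clipping by value is a simple elementwise operation on the encoder output; unlike rescaling by the norm, entries already inside the range are left untouched. The array shape and scale below are illustrative.

```python
import numpy as np

def clip_by_value(z, C):
    # Elementwise clipping to [-C, C] (equation 4); in-range values are left untouched.
    return np.clip(z, -C, C)

# Illustrative encoder output for one document: sequence length l = 20, hidden size d_tok = 768.
rng = np.random.default_rng(0)
z = rng.normal(scale=0.05, size=(20, 768))
z_bar = clip_by_value(z, C=0.1)
print(np.mean(z_bar == z))  # fraction of entries left unchanged (most of them at this scale)
```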
While the BART model outputs the encoder's last hidden state as \(z\in\mathbb{R}^{l\times d_{tok}}\) for each mini-batch, we flatten this vector to be \(z\in\mathbb{R}^{n}\), where \(n=l\cdot d_{tok}\). Clipping is then performed as in equation 6, \[\bar{z}=\textsc{clip}(z) \tag{6}\] where clip is carried out for every dimension of the vector, according to equation 4. With this clipping mechanism in place, we can now calculate its sensitivity, in order to determine the scale of noise to add in the DP setting. This is outlined in Theorems 3.1 and 3.2 below. **Theorem 3.1**.: _Let \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) be a function as in equation 6. The \(\ell_{1}\) sensitivity \(\Delta_{1}f\) of this function is calculated as in equation 7, where \(C\in\mathbb{R}:C>0\) is the clipping constant and \(n\in\mathbb{N}\) is the dimensionality of the vector._ \[\Delta_{1}f=2Cn \tag{7}\] Proof.: See Appendix C. **Theorem 3.2**.: _Let \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) be a function as in equation 6. The \(\ell_{2}\) sensitivity \(\Delta_{2}f\) of this function is calculated as in equation 8, where \(C\in\mathbb{R}:C>0\) is the clipping constant and \(n\in\mathbb{N}\) is the dimensionality of the vector._ \[\Delta_{2}f=2C\sqrt{n} \tag{8}\] Proof.: See Appendix D. We then add noise to this clipped vector, as in equation 9. \[\dot{z}=\bar{z}+(Y_{1},\dots,Y_{n}) \tag{9}\] where each \(Y_{i}\) is drawn i.i.d. either from \(\textsc{Lap}(\frac{\Delta_{1}}{\varepsilon})\) or \(\mathcal{N}(0,2\ln(\frac{1.25}{\delta})\frac{\Delta_{2}^{2}}{\varepsilon^{2}})\) for the Laplace and Gaussian mechanisms, respectively (as outlined in Dwork and Roth (2013)). Decoding is then performed auto-regressively (e.g. using beam search), as usual, using this perturbed \(\dot{z}\) encoder output vector, instead of the original \(z\) vector, as in equation 10. \[\hat{y}=\textsc{Dec}(\dot{z}) \tag{10}\] where \(\hat{y}\) is the model's output prediction of the reconstructed input sequence \(x\). By standard arguments, the **DP-BART-CLV** model satisfies \((\varepsilon,0)\)-DP for the Laplace mechanism and \((\varepsilon,\delta)\)-DP for the Gaussian mechanism, as outlined in equation 9 (Dwork and Roth, 2013). ### DP-BART-PR (Pruning) We develop the **DP-BART-PR** model in order to address the remaining issue of dimensionality, outlined in Section 3.2.1. The **DP-BART-CLV** model, while being resource-efficient, still has the issue of a large dimensionality for the encoder output vectors, since in equations 7 and 8, the sensitivity is multiplied by a factor of \(n\) and \(\sqrt{n}\), respectively, which in turn results in a larger noise scale. **DP-BART-PR**, addressing both the resource and dimensionality issues, is an extension to the above **DP-BART-CLV**, with an additional iterative pruning/training mechanism applied to it. The procedure is outlined in Figure 2 and Algorithm 1 of Appendix E. As for **DP-BART-CLV**, we first load a pre-trained BART model checkpoint. Each input token will have an encoder output representation of dimensionality \(d_{tok}\). For every token in the sequence, we prune a certain percentage of these neurons by setting them to 0. Importantly, _these pruned neurons are the same for every single input document_. The criteria for selecting these pruned neurons is discussed in more detail in Appendix F. Following this pruning step, we train the model for \(k\) iterations to compensate for possible lost performance from pruning. 
This step is performed on an external public dataset, unrelated to any downstream texts that are to be privatized. During this process, we also clip each dimension of the BART encoder output vector \(z_{i}\) according to equation 4, to encourage representations to be constrained within the ranges \(-C\) and \(C\) to reduce potential negative performance impacts of clipping during the rewriting phase. We note that only a few data points are necessary for this additional training step, maintaining the low-resource setting, outlined in Appendix I. We then continue this two-step process iteratively, until a desired dimensionality reduction of the encoder output vector is reached. At the end of this process, the resulting model weights are frozen and the final pruned indices of the encoder output vector \(z\) are saved. The model is then used for text rewriting at inference time, just like in **DP-BART-CLV**, but with the additional pruning step, using the saved indices. As a result of this process, we can significantly reduce \(n\) in Equations 7 and 8, which in turn reduces the resulting noise scale used in equation 9. With less noise added to the encoder output vectors for any given \(\varepsilon\) value, we can thus expect a better privacy/utility trade-off. This pruning procedure can thus be seen as a _privacy/utility tuning knob_. With more pruning, we reduce the size of \(n\), therefore requiring less added noise for a given \(\varepsilon\) value in the DP setting. At the same time, more pruning reduces the model's expressivity with less dimensions, which will result in an inevitable performance drop after reaching a certain pruning threshold. We noticed that pruning a few dimensions (e.g. 25% of neurons) can recover basically all of the performance of the model with some additional training steps, but after a certain point this starts to degrade. The'sweet spot' we found is at approximately 75% of neurons. Additional discussions on these points can be found in Appendix F. We would like to stress again that these pruning adjustments are made just once and using public data only, after which the final model can be used locally by any individual for their own data privatization. #### 3.4.1 Proof that DP-BART-PR is differentially private **Theorem 3.3**.: _The **DP-BART-PR** model, combining Algorithm 1 and the above **DP-BART-CLV** procedure, summarized in equation 9, satisfies \((\varepsilon,0)\)-DP when using the Laplace mechanism and \((\varepsilon,\delta)\)-DP when using the Gaussian mechanism._ Proof.: See Appendix G. ### DP-BART-PR+ We further augment the above **DP-BART-PR** model by incorporating additional training steps with added DP noise. This model follows the same procedure for iterative pruning and additional training, as outlined in algorithm 1, but we add further training iterations on the pruned model with added DP noise to the clipped encoder output representations, as in equation 9. For example, using the Gaussian mechanism at \(\varepsilon=500\), at each iteration we clip the encoder output vectors \(z\) from equation 5 and add the appropriate amount of Gaussian Figure 2: Pruning and re-training procedure for the DP-BART-PR model, illustrated for one document. Each \(i^{th}\) neuron from a set of indices is set to \(0\) for all tokens of the encoder output vectors \(z\in\mathbb{R}^{l\times d_{\mathit{tok}}}\). These neuron indices are the same for any document. This process is repeated iteratively until performance starts to degrade. 
noise based on the sensitivity from equation 8. The idea behind this additional training is to help the model to better decode from the noisified encoder representations. As with **DP-BART-PR**, for **DP-BART-PR+** we perform these additional training iterations on a public dataset, unrelated to the downstream datasets for privatized text rewriting. A separate model is prepared for each individual privacy budget \(\varepsilon\). ## 4 Experiments ### Datasets We perform experiments on five English-language textual datasets, each gradually increasing in size (Table 1). For comparison with Krishna et al. (2021), we use ATIS Dahl et al. (1994) and Snips Coucke et al. (2018) as our'small' datasets, with the task of multi-class intent classification. We use the same train/validation/test split as in Goo et al. (2018). For a medium-sized dataset, we use the popular IMDb dataset Maas et al. (2011), on the binary classification task of movie review sentiment analysis. For this, as well as the following two datasets, we use a validation partition by randomly selecting 20% of the training set. For a large dataset, we use the dataset from Grasser et al. (2018), which is a collection of drug reviews from the website Drugs.com, also with the task of binary sentiment analysis as in Shiju and He (2022). This dataset, although publicly available, closely simulates a sensitive dataset in need of privacy protection, with detailed descriptions by users of their medical conditions and experiences with different treatments. Our final dataset is the much larger Amazon Customer Reviews dataset He and McAuley (2016), of which we take a 2M subset of reviews from various categories (e.g. electronics, office products), from the full 144M. As with Drugs.com, we modify the original five-star sentiment score to a binary classification task, with four or more stars being the 'positive' class, while the rest are 'negative'. We refer to Appendix H for more details. ### Experimental Setup We have three main experimental configurations. The first is the **original** setting, where we run experiments on our downstream datasets without any rewriting or DP. The second configuration is **rewrite-no-dp**, where we utilize each of the four models outlined in Section 3 at \(\varepsilon=\infty\) (**ADePT**, **DP-BART-CLV**, **DP-BART-PR**, **DP-BART-PR+**). Finally, the third and main configuration is **rewrite-dp**, where we compare the above four models, this time at various privacy settings (\(\varepsilon\in[10,10000]\), Laplace and Gaussian mechanisms). For **rewrite-no-dp** and **rewrite-dp**, our experimental pipeline consists of the following four steps, depending on the specific model used: **Pre-training:**: The model is pre-trained on a large public corpus. For ADePT, we use 50% of the Openwebtext corpus Gokaslan and Cohen (2019). For all our BART experiments, we load a pre-trained facebook/bart-base model.4 Footnote 4: Available from [https://huggingface.co/facebook/bart-base](https://huggingface.co/facebook/bart-base) **Further training:**: Only for DP-BART-PR and DP-BART-PR+, again performed using the Openwebtext corpus. It helps the model adjust to pruning and DP noise, respectively (as outlined in Sections 3.4 and 3.5). More details on the amount of further training in Appendix I. **Rewriting:**: We take a pre-trained model and rewrite one of the downstream datasets. 
**Downstream:**: We take the rewritten dataset (training and validation partitions) and run downstream experiments on it using a pre-trained BERT model with a classification head on top. We use the rewritten validation set for hyperparameter optimization (see Appendix I) and the original test set for final evaluations. See Appendix J for details on the downstream model. In the **original** setting, we use the same downstream model as above, using the original datasets instead of the rewritten ones. EvaluationWe perform two types of evaluations for the above experimental settings: intrinsic and extrinsic. For our extrinsic evaluation we measure the test \(F_{1}\) scores on the downstream task performance. This is the primary utility metric of the \begin{table} \begin{tabular}{l r|r r} **Dataset** & **Classes** & **\# Trn.+Vld.** & **\# Test** \\ \hline ATIS & 26 & 4,978 & 893 \\ Snips & 7 & 13,774 & 700 \\ IMDb & 2 & 25,000 & 25,000 \\ Drugs.com & 2 & 161,297 & 53,766 \\ Amazon & 2 & 1,904,197 & 211,605 \\ \end{tabular} \end{table} Table 1: Dataset statistics. Trn.: Train, Vld.: Validation. Size represents number of documents. rewritten texts, with privacy correspondingly quantified with the \(\varepsilon\) value. We expect that even if a text may be rewritten to look very different from the original input, it could still have enough downstream task-specific information remaining to properly train a model on this task (e.g. the sentiment of a document in the case of sentiment analysis). This is in fact the'sweet spot' we are looking for, removing identifying elements of the author, but still retaining some key features from the input for good downstream performance. We also measure BLEU scores for our intrinsic evaluation, discussed in more detail in Appendix K. ## 5 Results Figure 3 shows our downstream test \(F_{1}\) results for all datasets, at varying values of \(\varepsilon\). We report results for the Gaussian mechanism, which nearly always outperformed those of the Laplace mechanism. We present results in tabular form with mean and standard deviations in Appendix K. Additionally, we present sample rewritten texts in Appendix L. We outline the main patterns as follows. DP-BART-PR+ performs best overallDP-BART-PR+ clearly reaches the best privacy/utility trade-off for all datasets, having the highest scores at the lower \(\varepsilon\) values. DP-BART-PR results are second-best, performing better than DP-BART-CLV and ADePT, which start to significantly drop below \(\varepsilon=1000\) for all datasets. The overall results hierarchy can be clearly seen in the ATIS dataset, where at \(\varepsilon=500\), DP-BART-PR+ reaches \(F_{1}\ 0.76\), DP-BART-PR at \(0.48\), while both DP-BART-CLV and ADePT are at \(F_{1}\ 0.09\). Original vs. privateResults for the **original** setting are generally on-par with those of the **rewrite-no-dp** setting. For instance, Snips original \(F_{1}\) is \(0.98\), and \(\varepsilon=\infty\) with rewriting is also at \(F_{1}\) of \(0.98\) for DP-BART-PR, being very similar for the other three models. One exception to this is IMDb, which has a drop from original \(F_{1}\ 0.86\) to \(0.72\) for all models. This can be explained by the fact that the **original** settings use longer sequence lengths, while both **rewrite-no-dp** and **rewrite-dp** settings are limited to a sequence length of \(20\). This is not a problem for datasets such as ATIS and Snips, since their documents are generally very short, mostly limited to brief user inquiries. 
For a dataset such as IMDb, however, which consists of detailed reviews by individuals, limiting the sequence length results in a loss of valuable information. Epsilon vs. dataset sizeRegardless of dataset size, we can see a drop in results for all models, apart from DP-BART-PR+ with the IMDb and Amazon datasets, which have a much milder decline down to \(\varepsilon\) values of \(100\) and \(50\), respectively (e.g. 0.82 test \(F_{1}\) score at \(\varepsilon=50\) vs. 0.91 test \(F_{1}\) score for the **original** setting with Amazon). We can therefore see that a larger dataset size does not necessarily mean better results at lower \(\varepsilon\) values. For instance, the Drugs.com dataset shows a significant decline for all model types, including DP-BART-PR+ at \(\varepsilon=100\), in contrast to the smaller IMDb dataset. ## 6 Discussion and limitations Reducing noise for text rewriting with LDPWe have shown that it is possible to reduce the amount of noise in the LDP setting of privatized rewriting, in order to obtain more useful rewritten texts for downstream tasks. To compare DP-BART-CLV vs. DP-BART-PR, we can examine the resulting \(\ell_{2}\) sensitivity from equation 8 (\(\Delta_{2}f=2C\sqrt{n}\)). Setting sequence length \(l=20\) and \(C=0.1\), as in our experiments, without pruning we have a dimensionality of \(n=768\cdot 20=15360\), hence \(\Delta_{2}f=2\cdot 0.1\cdot\sqrt{15360}\approx 24.79\). With pruning we are able to remove 76.30% of those \(n\) neurons, with only \(n=182\cdot 20=3640\) remaining. The \(\ell_{2}\) sensitivity thus becomes \(\Delta_{2}f=2\cdot 0.1\cdot\sqrt{3640}\approx 12.07\). Plugging this into the Gaussian mechanism's noise scale calculation from Dwork and Roth (2013) (\(\mathcal{N}(0,2\ln(\frac{1.25}{\delta})\frac{\Delta_{2}^{2}}{\varepsilon^{2}})\)), with \(\delta=10^{-5}\) and \(\varepsilon=500\), we have \(\sigma^{2}=0.2402\) without pruning and \(\sigma^{2}=0.1169\) with pruning. We can therefore see that, **with DP-BART-PR, we are able to reduce the noise scale by more than half**. Pre-training and computational resourcesUtimately, a very effective way to prepare a model for privatized text rewriting would be to pre-train it from scratch, being fully in control of hyperparameters such as the dimensionality \(n\) of the encoder output vectors \(z\), which determines the \(\ell_{1}\) and \(\ell_{2}\) sensitivities from equations 7 and 8, respectively. In addition, the whole model could be pre-trained with added noise and clipping mechanisms, potentially being even more robust than our approach in DP-BART-PR+, where we incorporate further noisy training. We noticed for DP-BART-PR+ that the lower the \(\varepsilon\) value we use, the more additional training iterations the model needs to properly reduce the validation loss. This demonstrates that, also in the setting of pre-training from scratch, we would need to train for more iterations in order to reach lower \(\varepsilon\) values. This can pose serious challenges, however, for reasons of computational demand discussed in Section 3.2.2. DP-BART-PR+ can therefore be seen as a sweet spot approach, where we only need a few additional training iterations and can still achieve a significant dimensionality reduction through pruning, as well as additional robustness to noise. 
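As a quick arithmetic check of the noise-scale comparison given above in Section 6 (our sketch; \(C=0.1\), \(l=20\), \(\delta=10^{-5}\), \(\varepsilon=500\) are the values quoted in the text), the following snippet reproduces the reported sensitivities and noise scales; the 0.2402 and 0.1169 figures correspond to the Gaussian standard deviation.

```python
import math

def l2_sensitivity(C, n):
    # Theorem 3.2: Delta_2 = 2 * C * sqrt(n)
    return 2.0 * C * math.sqrt(n)

def gaussian_scale(delta2, eps, delta):
    # Standard deviation of the Gaussian mechanism: sqrt(2*ln(1.25/delta)) * Delta_2 / eps
    return math.sqrt(2.0 * math.log(1.25 / delta)) * delta2 / eps

C, l, eps, delta = 0.1, 20, 500.0, 1e-5
for d_tok in (768, 182):                 # without pruning vs. roughly 76% of neurons pruned
    n = d_tok * l
    d2 = l2_sensitivity(C, n)
    print(d_tok, round(d2, 2), round(gaussian_scale(d2, eps, delta), 4))
# prints approximately: 768 24.79 0.2402  and  182 12.07 0.1169
```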
Domain of public training textsIn preparing the DP-BART models, it is important to take into account the domain of the public data that is used to (1) pre-train the original BART model, and (2) perform additional training iterations (DP-BART-PR and DP-BART-PR+). This will ultimately have an impact on the model's effectiveness for text privatization, depending on the nature of the downstream texts. For example, if this training data is restricted to news articles, then there may be limited performance for rewriting texts that are further from this domain, such as internet comments. Another obvious limitation is the language of the public data. If the model is trained on a monolingual English corpus, then it would not be possible to use it for rewriting texts from other languages. The public data used for our experiments consists of news, web text, stories and books Lewis et al. (2019); Gokaslan and Cohen (2019). We expect that expanding this to include more data and more varied domains will lead to better performance in a greater diversity of texts and downstream tasks. What is being privatizedIt is very important to be clear on exactly what information is being privatized when performing text rewriting with LDP. Since we are working with DP at the document level, the entire document is a 'data point', hence any choice and combination of words for a given sequence would be a unique identifier. We thus avoid the problem of having to choose what specific tokens are 'private' within the document. This is crucial, since stylistic aspects of an author can be very abstract, with subtle syntactic and vocabulary choices. Another significant benefit of such an approach, is that we are not limiting ourselves to any specific downstream analysis (e.g. sentiment of a document), being _task agnostic_. However, this also means that, for any given document, _any other document is neighboring_, since we are in the LDP setting. This leads us to a serious discussion on the limitations of such an approach below. An additional question arises of whether one Figure 3: Downstream test \(F_{1}\) results (macro-averaged) for each dataset, using the four model types. Lower \(\varepsilon\) corresponds to better privacy. Both **original** and **rewrite-no-dp** results can be seen on the right of each graph at \(\varepsilon=\infty\). The rest of the results represent the **rewrite-dp** setting at different \(\varepsilon\) values. We only run DP-BART-PR+ for the lower \(\varepsilon\) values, since the higher \(\varepsilon\) configurations for other models already perform well and therefore would not particularly benefit from the additional noisy training iterations. dataset may have multiple documents associated with one individual. There are several ways to go about dealing with this. One standard approach in differential privacy is to linearly scale the \(\varepsilon\) parameter. Thus, if there are \(k\) documents associated with a given individual, then a privacy budget of \(k\varepsilon\) is accounted in total (Dwork and Roth, 2013). Another option would be to simply append all texts associated with one individual into a single 'document', rewriting this using just a single \(\varepsilon\) privacy budget. Limitations of LDP for text rewritingFor every output document, any two inputs, no matter how similar or distinct, are considered neighboring. 
If we have a small sequence length of \(20\) tokens, with a relatively small vocabulary of \(1000\) words, then the total number of possible combinations is \(1000^{20}\), which is \(10^{60}\)! While we compress these documents into a latent vector with a limited range and dimensionality, the strict adjacency constraints are still present. We can therefore expect an inevitable utility drop when using more reasonable \(\varepsilon\) values (e.g. \(\varepsilon=1\)). With more sophisticated architectures, we have shown that it is possible to push this \(\varepsilon\) value down to some extent. However, our lowest \(\varepsilon\) is still too high to carry over into real-world applications of privacy preservation. As outlined by Hsu et al. (2014), values of \(\varepsilon\) for different applications in the DP literature can range from 0.01 to 10. Choosing the right \(\varepsilon\) value depends on the specific queries that are computed and the nature of the data (Lee and Clifton, 2011). For our case, the value of \(\varepsilon\) can be interpreted in the following manner. The \(\varepsilon\)-LDP mechanism that we are applying to our data makes any two input texts rewritten to be indistinguishable up to a factor of \(e^{\varepsilon}\). More formally, _for any two input texts_\(x\) and \(y\) to our LDP model \(\mathcal{M}\): \[\frac{\text{Pr}[\mathcal{M}(x)=z]}{\text{Pr}[\mathcal{M}(y)=z]}\leq e^{ \varepsilon}, \tag{11}\] where \(z\) is a given output text rewritten by the model. This means that, when we set \(\varepsilon=100\), then any two texts will remain indistinguishable up to a factor of \(e^{100}\). This is a very weak bound and, while it could provide some empirical privacy guarantees, on a theoretical level the privacy protection is not very strong. We can also see how this bound becomes exponentially stronger, as we decrease \(\varepsilon\). It may therefore make sense to take a slightly less strict approach to text adjacency, for instance moving into _domain specific_ text rewriting. For example, text rewriting could be carried out for a specific dataset, with the notion of adjacency restricted to any two individuals within that dataset, hence requiring much less perturbation. The strength of the privacy guarantee, in this case, would then be very dependent on the size of the dataset (Mehner et al., 2021). ## 7 Conclusion We have proposed DP-BART, a novel methodology for LDP-based privatized text rewriting, which outperforms existing methods. We have demonstrated our method's privacy/utility trade-off, the relations between the privacy budget and dataset size, and discussed limitations of the privatized text rewriting approach as a whole. Future research directions include utilizing large-scale pre-training to potentially reach a better privacy/utility trade-off, as well as investigating domain specific text rewriting for relaxing the strict requirements of the LDP approach. ## Acknowledgements The independent research group TrustHLT is supported by the Hessian Ministry of Higher Education, Research, Science and the Arts. This project was partly supported by the National Research Center for Applied Cybersecurity ATHENE. Thanks to Lena Held and Luke Bates for their helpful feedback.
2308.12506
General Covariance-Based Conditions for Central Limit Theorems with Dependent Triangular Arrays
We present a general central limit theorem with simple, easy-to-check covariance-based sufficient conditions for triangular arrays of random vectors when all variables could be interdependent. The result is constructed from Stein's method, but the conditions are distinct from related work. We show that these covariance conditions nest standard assumptions studied in the literature such as $M$-dependence, mixing random fields, non-mixing autoregressive processes, and dependency graphs, which themselves need not imply each other. This permits researchers to work with high-level but intuitive conditions based on overall correlation instead of more complicated and restrictive conditions such as strong mixing in random fields that may not have any obvious micro-foundation. As examples of the implications, we show how the theorem implies asymptotic normality in estimating: treatment effects with spillovers in more settings than previously admitted, covariance matrices, processes with global dependencies such as epidemic spread and information diffusion, and spatial process with Mat\'{e}rn dependencies.
Arun G. Chandrasekhar, Matthew O. Jackson, Tyler H. McCormick, Vydhourie Thiyageswaran
2023-08-24T02:15:04Z
http://arxiv.org/abs/2308.12506v4
# General covariance-based conditions for central limit theorems with dependent triangular arrays ###### Abstract. We present a general central limit theorem with simple, easy-to-check covariance-based sufficient conditions for triangular arrays of random vectors when all variables could be interdependent. The result is constructed from Stein's method, but the conditions are distinct from related work. We show that these covariance conditions nest standard assumptions studied in the literature such as \(M\)-dependence, mixing random fields, non-mixing autoregressive processes, and dependency graphs, which themselves need not imply each other. This permits researchers to work with high-level but intuitive conditions based on overall correlation instead of more complicated and restrictive conditions such as strong mixing in random fields that may not have any obvious micro-foundation. As examples of the implications, we show how the theorem implies asymptotic normality in estimating: treatment effects with spillovers in more settings than previously admitted, covariance matrices, processes with global dependencies such as epidemic spread and information diffusion, and spatial processes with Matern dependencies. We thank Joe Romano, Abhirup Datta, Paul Goldsmith-Pinkham, Han Hong, and Jon Wellner for helpful comments. Please email [email protected] and [email protected] with questions or comments. \({}^{\ddagger}\)Stanford, Department of Economics. \({}^{\S}\)J-PAL. \({}^{\star}\)Santa Fe Institute. \({}^{\diamond}\)University of Washington, Department of Statistics. ## 1. Introduction This paper builds on the univariate approach of Chandrasekhar and Jackson (2016). Specifically, we use Stein's method (Stein, 1986) to derive three high-level, easy-to-interpret conditions. In this paper, we prove that our conditions are implied by (but do not imply) the assumptions used in a wide array of specific dependence models such as \(M\)-dependence, mixing random fields, non-mixing autoregressive processes, and dependency graphs, which themselves do not nest each other. It is useful to contextualize the literature on central limit theorems for dependent data before returning to the contributions. There are several approaches to establishing sufficient conditions for central limit theorems. The first can be thought of as Lindeberg approaches, the second as more generally characteristic function-based approaches, and the third--our focus here--as Stein method approaches. The Stein method observes that \[\operatorname{E}\left[Yf(Y)\right]=\operatorname{E}\left[f^{\prime}(Y)\right]\] for all continuously differentiable functions \(f\) if and only if \(Y\) has a standard normal distribution. So, when considering normalized sums taking the role of \(Y\), it is enough to show that this equality holds for all such functions asymptotically. Rinott and Rotar (2000) and Ross (2011), among others, provide a detailed view of the method along with a number of examples. This method has been utilized in various forms in a number of specific settings. For example, Bolthausen (1982) used this construction to establish dependence conditions in time series data; namely, the author establishes how the amount by which the probability of a joint set of events differs from its value under independence decays with temporal distance. So, as the mixing coefficients decay fast enough in distance, the proof proceeds by checking that the Stein argument follows leveraging the mixing structure. Distance in time can be generalized to space and, further, to random fields.
That is, random variables carry indices in a Euclidean space and an analogous mixing condition is made, with decay based on Euclidean distance. Together with higher moment conditions, a Stein-based argument can be shown. Again, a literature (e.g., Jenish and Prucha (2009)) developed and refined such conditions such as using near epoch dependence. A peculiar consequence of this when attempting to apply such results to data is that the literature organized itself, at least in part, around a number of such conditions despite not having micro-foundations for the assumed dependence structure. For example, spatial standard errors are often used--e.g., in Froot (1989); Conley (1999); Driscoll and Kraay (1998)--when conducting \(Z\)-estimation (or GMM). However, in actual applications, for instance agricultural shocks such as rainfall or pests or soil, it is not clear that they should follow a specific form of interdependence satisfying \(\phi\)-mixing with a certain decay rate as invoked in Conley (1999) (cross-sectionally) or Driscoll and Kraay (1998) (temporally and cross-sectionally). Surely shocks correlate over space, but it is hard to say much beyond that. To take a different example, some models of network formation orient themselves by embedding nodes in a random field to deliver central limit theorems. However, this is not without consequence, as it forces a specific, and sometimes undesirable, pattern of link formation. For example, clustering patterns or whether and when tree-like structures can be generated in large graphs are severely restricted with such modeling techniques (Hoff et al., 2002; Lubold et al., 2023). If nodes \(i\) and \(j\) have a distance that inversely relates to the probability of a link between them forming, by the triangle inequality, the distances between \(i\) and \(k\) and \(i\) and \(j\) determine possible distances between \(j\) and \(k\) and therefore the probabilities of the \(jk\) link forming.1 But that pattern then emerges due to spatial embedding rather than a fundamental modeling of human behavior. So in both of these cases, it is less restrictive and more appropriate to impose weaker high-level conditions on correlations if asymptotic normality still follows. Footnote 1: Leung and Moon (2022) also conceptualize a notion of dependence in networks by constructing radii of “stability” defined by changes in network statistics when dropping observed edges. As an alternative to embedding variables in a metric space and toggling dependency by distance, a literature on dependency graphs emerged (Baldi and Rinott, 1989; Goldstein and Rinott, 1996; Ross, 2011). There, observations have indices in a graph, where those that are not edge-adjacent are independent. This provides a different strategy to apply Stein's method, by creating for each observation a dependency neighborhood. Sufficient sparsity in the graph structure allows a central limit theorem to apply, despite not forcing a time- or space-like structure. Examples include Ross (2011); Goldstein and Rinott (1996); Chen and Shao (2004) and a more general treatment in Chen and Shao (2004). Both embedding indices in a metric space or using a more unstructured dependency structure are similar in the sense that they constrain the total amount of correlation between the \(n\) random variables. In principle there are \(\binom{n}{2}\)-order components to this sum, but, via mixing conditions or sparsity conditions, this sum is assumed to be of order \(n\). 
Therefore, all strategies present some attempt to reduce the size of the covariance sum. We note that both Rinott and Rotar (2000) and Ross (2011) provide in-depth discussions and examples using Stein's method. They cover, among other things, exchangeable pairs, dependency graphs, and \(U\)-statistics. However, none of these cases allows for full correlation at any \(n\) and, furthermore, our covariance conditions are distinct. The goal of our sufficient conditions is to provide compact assumptions that are easily checked in specific examples. They can be micro-founded more easily by virtue of their generality (e.g., the treatment effects, diffusion, and covariance matrix examples) and do not have to shoehorn overly complex assumptions. And, the proofs that the conditions are satisfied are often easier. We consider a triangular array of \(n\) random vectors \(X_{1:n}^{n}\in\mathbb{R}^{p}\), which are neither necessarily independent nor identically distributed. We study conditions under which their appropriately normalized sample mean is asymptotically normally distributed. In principle, at any \(n\), all \(X_{i}^{n}\) and \(X_{j}^{n}\) can be correlated. The proof follows the well-known Stein's method, though we develop and apply specific bounds for our purpose. To apply Stein's method we must first associate each random variable with a set of other random variables with which it carries a higher level of correlation, still keeping in mind that it could in principle have some non-zero correlation with all variables. We call these _affinity sets_, denoted \(\mathcal{A}_{(i,d)}^{n}\), to capture the other random variables with which \(i\) may have high correlations in the \(d\)-th dimension.2 We provide sufficient conditions for asymptotic normality in terms of the total amount of covariance within an affinity set and the total amount of covariance across affinity sets. As long as, in the limit, the amount of interdependence in the overall mean comes from the covariances within affinity sets, asymptotic normality follows. We work with weaker conditions based on bounds on sums of covariances, differently from conditions in the previous literature.3 Footnote 3: In Arratia et al. (1989), the authors present Chen's method (Chen, 1975) for Poisson approximation rather than normality, which has a similar approach to ours in collecting random variables into dependencies. While this results in nice finite sample bounds, these bounds consist of almost three separate pieces, making it less friendly to understanding these bounds together with growing sums of covariance of the samples. In fact, all five of the examples studied in Arratia et al. (1989) deal only with cases where at least one of these pieces is identically zero in the cases where Chen's method succeeds. Many examples do not require exact zeros, which our approach focuses on. To preview our conditions, presented formally below, let us take \(p=1\) so \(X_{i}^{n}\) is scalar, and let \(Z_{i}^{n}:=X_{i}^{n}-\mathrm{E}(X_{i}^{n})\). Let \[\Omega_{n}:=\sum_{i=1}^{n}\sum_{j\in\mathcal{A}_{i}^{n}}\mathrm{cov}\left(Z_{i},Z_{j}\right),\] denote the total covariance within the affinity sets. Informally, our conditions are the following. 1.
Within affinity set covariance control: \[\sum_{i}\sum_{j,k\in\mathcal{A}_{i}^{n}}\mathrm{E}(Z_{j}Z_{k}\cdot|Z_{i}|) \text{ is small relative to }(\Omega_{n})^{3/2}.\] The average covariance between two random variables in an affinity set, when weighted by the realized magnitude of the reference variable, and the covariance between the averages given the magnitude of the reference variable, is small relative to the comparably adjusted covariance between the reference variables and their affinity sets. 2. Cross affinity set covariance control: \[\sum_{i,j}\sum_{k\in\mathcal{A}_{i}^{n},l\in\mathcal{A}_{j}^{n}}\mathrm{cov}( Z_{i}Z_{k},Z_{j}Z_{l})\text{ is small relative to }(\Omega_{n})^{2}.\] The average covariance across members of two different affinity sets (weighted by the reference variables) is sufficiently small compared to the squared covariance within affinity sets. 3. Outside affinity set covariance control: \[\sum_{i}\sum_{j\notin\mathcal{A}_{i}^{n}}\mathrm{E}(Z_{i}Z_{j}\cdot\mathrm{ sign}(\mathrm{E}[Z_{i}Z_{j}\mid Z_{j}]))\text{ is small relative to }\Omega_{n}.\] The average sign-weighted covariance outside of affinity sets is sufficiently small compared to the covariance within affinity sets. These conditions are general and easy to check as we show. They also further simplify in a number of cases such as with positive correlations (as in diffusion models and auto-regressive processes) and with binary variables. Our contributions are several-fold. First, we do not require a sparse dependency structure at any \(n\). That is, there can be non-zero correlation between any pair \(X_{i},X_{j}\). Much of the dependency graph literature leverages an independence structure in constructing their bounds and, therefore, the bounds we build are different.4 Footnote 4: In Chatterjee (2008), the author uses a decoupling approach to break apart the dependency into various independent components. This gives a slightly different perspective on the problem, treating any condition on the variance term as a separate problem. The condition one would need to satisfy in this setting is to have a bounded sum of third moments of the coordinate-wise derivative of the function of interest. Our result considers variance from the beginning as we deal with affinity sets, and as a result leads to conditions of a more approachable form for more applied researchers. Second, because of this possibility of non-zero covariance across all random vectors, we organize our bounds through covariance conditions. We are reminded of a discussion in Chen (1975) in the context of Poisson approximation. Covariance conditions are easy-to-interpret and check, and from an applied perspective often easier to justify from a microfoundation. Third, our result is for random vectors, and while the application of the Cramer-Wold device is simple in our setting--by the nature of how indexing works--it is useful to have and instructive for a practitioner. Fourth, our setup nests many of the previous literature's examples, most of which do not nest each other. We illustrate the utility of our central limit theorem through several distinct applications. We begin with an example of \(M\)-dependence in a stochastic process, noting that this implies many other types of mixing, such as \(\alpha\)-, \(\phi\)-, or \(\rho\)- mixing (Bradley, 2005). We then move to random fields, where we show an example with \(\alpha\)- and \(\phi\)- dependence. 
These mixing approaches require constructing idiosyncratic notions of dependence based on the underlying probability distribution that happen to imply bounds on the covariance function (see, for example, Rio (1993) or Rio (2017)), which, as noted above, is not derived from, and may not match, micro-economic foundations or scientific principles. Our covariance-based arguments are compact and direct, placing restrictions on the covariance explicitly and, thus, in a matter that is salient in the scientific context. They are also general rather than based on a specific model or type of dependence. We also show that our framework is applicable outside the context of mixing, giving examples with non-mixing autoregressive processes, dependency graphs, among other examples. In all of our examples, it is straightforward to examine the covariance structure of the random variables in question and to check that our sufficient conditions are met. Fifth, we show that our generalizations permit a wider and more practical set of analyses that were otherwise ruled out or limited in the literature. This includes treatment effects with spillover models, covariance matrices, and things like epidemic and diffusion models. Specifically, we extend the treatment effects with spillovers analysis, as in Aronow and Samii (2017), to allow every individual's exposure to treatment to possibly be increasing in every other node's treatment assignment, and nonetheless, the relevant estimator is still asymptotically normally distributed. Of course, this case, which is ubiquitous in practice, is assumed away in applied work because conventional central limit theorems do not cover such a case. We also show how a researcher can model covariance matrices without forcing a random field structure as in Conley (1999) or Driscoll and Kraay (1998). This allows applied researchers to proceed with greater generality. Further, the usual approaches rule out correlational structure across units that do not have a natural ordering such as race, ethnicity, case, and occupation. These are readily accommodated in our approach. The next two examples concern diffusion. First, we look at a sub-critical epidemic process with a number of periods longer than the graph's diameter. So, whether an individual is infected is correlated with the infection status of any other individual (assuming a connected, unweighted graph). Again, this practical situation is excluded by the previous central limit theorems in the literature. Second, we look at diffusion in stochastic block models to show here that our conditions exactly characterize when asymptotic normality holds and when it does not. Lastly, we turn to the setting of Zhan and Datta (2023): the estimation of neural network models with irregular spatial dependence, e.g., Matern covariance functions. The authors provide the first proof of consistent estimation of the neural network model in this dependent setting. We show that the covariance structure of the residuals, on which the asymptotic distribution of the estimator depends, satisfies our main assumptions and our CLT. ## 2. The Theorem We consider a triangular array of \(n\) random variables \(X_{1:n}^{n}\in\mathbb{R}^{p},\) with entries \(X_{i,d}^{n}\) and \(d\in\{1,\ldots,p\},\) each of which has finite variance (possibly varying with \(n\)). 
We let \(Z_{i}^{n}\in\mathbb{R}^{p}\) denote the corresponding de-meaned variables, \(Z_{i}^{n}=X_{i}^{n}-\operatorname{E}\left[X_{i}^{n}\right].\) The sum, \(S^{n}\in\mathbb{R}^{p},\) is given by \(S^{n}:=\sum_{i=1}^{n}Z_{i}^{n}.\) We suppress the dependency on \(n\) for clarity; writing \(X_{i,d}\) unless otherwise needed. ### Affinity Sets Each real-valued random variable \(X_{i,d}\) has an _affinity set_, denoted \(\mathcal{A}_{(i,d)}^{n},\) which can depend on \(n\). We require \((i,d)\in\mathcal{A}_{(i,d)}^{n}\). Heuristically, \(\mathcal{A}_{(i,d)}^{n}\) includes the indices \(j,d^{\prime}\) for which the covariance between \(X_{j,d^{\prime}}\) and \(X_{i,d}\) is relatively high in magnitude, but not those for which the covariance is low. There is no independence requirement at any \(n\) and, in fact, our sufficient conditions for the central limit theorem bound the total sums of covariances within and across affinity sets. The precise construction of affinity sets is flexible, as long as these bounds on the respective total sums are respected. ### The Central Limit Theorem Let \(\Omega_{n}\) be a \(p\times p\) matrix which houses the bulk of covariance across observations and dimensions, summing across variables all the covariances of each variable and the others in its affinity set:5 Footnote 5: Notice that this is distinct from a total variance-covariance matrix \(\Sigma_{n,dd^{\prime}}:=\sum_{i=1}^{n}\sum_{j=1}^{n}\operatorname{cov}\left(Z _{i,d},Z_{j,d^{\prime}}\right),\) which includes terms outside of \(\mathcal{A}_{(i,d)}^{n}.\) \[\Omega_{n,dd^{\prime}}:=\sum_{i=1}^{n}\sum_{(j,d^{\prime})\in\mathcal{A}_{(i,d )}^{n}}\operatorname{cov}\left(Z_{i,d},Z_{j,d^{\prime}}\right).\] In what follows, we maintain the assumption that \(\left\|\Omega_{n}\right\|_{F}\to\infty\), where \(\left\|\cdot\right\|_{F}\) is the Frobenius norm. Our first assumption is that the the total mass of the variance-covariance is not driven by the covariance between members of a given affinity set neither of which are the reference random variables themselves. That is, given reference variable \(X_{i,d}\), the covariance of some \(X_{j,d^{\prime}}\) and \(X_{k,d^{\prime\prime}}\) where both are in the reference variable's affinity set is relatively small in total across all such triples of variables compared to the variance coming from the reference variable and its affinity sets. Assumption **1** (Bound on total weighted-covariance within affinity sets).: \[\sum_{(i,d);(j,d^{\prime}),(k,d^{\prime\prime})\in\mathcal{A}^{n}_{(i,d)}} \operatorname{E}\left[|Z_{i,d}|Z_{j,d^{\prime}}Z_{k,d^{\prime\prime}}\right]=o \left(\left(\left\|\Omega_{n}\right\|_{F}\right)^{3/2}\right).\] The second assumption is that the total mass of the variance-covariance is not driven by random variables across affinity sets relative to two distinct reference variables. That is, given two random variables \(X_{i,d}\) and \(X_{j,d^{\prime}}\), the aggregate amount of weighted covariance between two other random variables--each within one of the reference variables' affinity sets--is small compared to the (squared) variance coming from the reference variable and its affinity sets. 
Assumption **2** (Bound on total weighted-covariance across affinity sets).: \[\sum_{(i,d),(j,d^{\prime});(k,d^{\prime\prime})\in\mathcal{A}^{n}_{(i,d)},( l,\hat{d})\in\mathcal{A}^{n}_{(j,d^{\prime})}}\operatorname{cov}\left(Z_{i,d}Z_{k,d^{ \prime\prime}},Z_{j,d^{\prime}}Z_{l^{\prime},\hat{d}}\right)=o\left(\left( \left\|\Omega_{n}\right\|_{F}\right)^{2}\right),\] The third assumption is that the total mass of variance-covariance is not driven by reference random variables and the variables outside of their affinity sets, again compared to the variance coming from the reference variable and its affinity sets. Assumption **3** (Bound on total weighted-covariance from outside of affinity sets).: \[\sum_{(i,d);(j,d^{\prime})\in\mathcal{A}^{n}_{(i,d)}}\operatorname{E}\left(Z _{i,d}Z_{j,d^{\prime}}\cdot\operatorname{sign}\left(\operatorname{E}[Z_{i,d} Z_{j,d^{\prime}}|Z_{j,d^{\prime}}]\right)\right)=o\left(\left\|\Omega_{n} \right\|_{F}\right).\] These three assumptions imply a central limit theorem. Theorem **1**.: _If Assumptions 1-3 are satisfied, then \(\Omega_{n}^{-1/2}S^{n}\rightsquigarrow\mathcal{N}(0,I_{p\times p})\)._ The proof is provided in the Appendix. The argument follows by applying the Cramer-Wold device to the arguments following Stein's method, as Chandrasekhar and Jackson (2016) argued for the univariate case. Since the Cramer-Wold device requires for all \(c\in\mathbb{R}^{p}\) fixed in \(n\) that the \(c\)-weighted sum satisfies a central limit theorem (Biscio et al., 2018) -- that is, \((c^{\prime}\Omega_{n}c)^{-1/2}c^{\prime}S^{n}\rightsquigarrow\mathcal{N}(0,1)\) -- we can consider a problem of \(np\) random variables with affinity sets. Then, by checking Assumptions 1-3 for the case of \(c=1_{p}\) the result follows. An important special case is where the affinity sets are the variables themselves: \(\mathcal{A}_{(i,d)}^{n}=\{(i,d)\}\). In that case, the conditions simplify to a total bound on the overall sum of covariances across variables (the univariate case is in Chandrasekhar and Jackson (2016)). It nests many cases in practice, and we provide an illustration in our second application. Corollary 1.: _If \(\mathcal{A}_{(i,d)}^{n}=\{(i,d)\}\), \(\operatorname{E}[Z_{i,d}Z_{j,d^{\prime}}|Z_{j,d^{\prime}}]\geq 0\) for every \((j,d^{\prime})\neq(i,d)\), and_ (i)_\(\sum_{(i,d),(j,d^{\prime})}\operatorname{cov}(Z_{i,d}^{2},Z_{j,d^{\prime}}^{2})=o \left(\left(\left\|\Omega_{n}\right\|_{F}\right)^{2}\right)\), and_ (ii)_\(\sum_{(i,d)\neq(j,d^{\prime})}\operatorname{cov}(Z_{i,d},Z_{j,d^{\prime}})=o \left(\left\|\Omega_{n}\right\|_{F}\right)\), then \(\Omega_{n}^{-1/2}S^{n}\rightsquigarrow\mathcal{N}(0,I_{p\times p})\)._ If \(\operatorname{E}[Z_{i,d}Z_{j,d^{\prime}}|Z_{j,d^{\prime}}]\geq 0\) does not hold, then (ii) can just be substituted by Assumption 3. Also, it is useful to note that, for instance, if \(p=1\) and the \(X_{i}\)'s are Bernoulli random variables with \(\operatorname{E}[X_{i}]\to 0\) (uniformly), then condition (ii) implies condition (i) (Chandrasekhar and Jackson, 2016). ## 3. Models of Dependence We first present four applications from the literature that prove asymptotic normality: (i) \(M\)-dependence, (ii) non-mixing autoregressive processes, (iii) mixing random fields, and (iv) dependency graphs. These examples do not necessarily nest each other, though we do comment on relations between the dependence types in terms of mixing, where relevant. We can construct affinity sets that meet our conditions in each case. 
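Before turning to these models, the following is a small, self-contained Monte Carlo sketch of the Corollary 1 setting with singleton affinity sets. It is our own illustration, not part of the original argument: every pair of observations is correlated through a weak common shock, yet the covariance conditions hold and the normalized sum looks approximately standard normal. The function name and the rate \(a_{n}=n^{-3/4}\) are illustrative choices, not quantities from the paper.

```python
import numpy as np

# A minimal Monte Carlo sketch (ours, not the paper's): singleton affinity sets
# A_i = {i}, with every pair of observations weakly correlated through a common
# shock W. With a_n = n^(-3/4), the cross covariances sum to order n^(1/2),
# which is o(Omega_n) = o(n), so the conditions of Corollary 1 hold and the
# normalized sum should look approximately standard normal.
rng = np.random.default_rng(0)

def simulate_normalized_sum(n, reps):
    a_n = n ** (-0.75)
    eps = rng.standard_normal((reps, n))   # idiosyncratic shocks
    W = rng.standard_normal((reps, 1))     # common shock shared by all i within a replication
    Z = eps + a_n * W                      # de-meaned observations; all pairs are correlated
    omega_n = n * (1.0 + a_n ** 2)         # Omega_n = sum_i var(Z_i) for singleton affinity sets
    return Z.sum(axis=1) / np.sqrt(omega_n)

for n in (100, 1000, 5000):
    s = simulate_normalized_sum(n, reps=2000)
    print(f"n={n:>5}: mean={s.mean():+.3f}, sd={s.std():.3f}, "
          f"P(|S|>1.96)={np.mean(np.abs(s) > 1.96):.3f}")
```

In this toy design \(\sum_{i\neq j}\mathrm{cov}(Z_{i},Z_{j})=n(n-1)a_{n}^{2}\), which is of order \(n^{1/2}=o(\Omega_{n})\), so the empirical rejection rate should approach the nominal 5% level as \(n\) grows.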
A key distinction in our work is that the conditions we provide are general, rather than specific to a particular model class or dependency type. We provide a sketch of the core assumptions made in the relevant papers to be self-contained for the reader. We show how these assumptions imply our covariance restrictions and the relative complexity of these setups. We then present five common applications that are not covered by the previous literature but are covered by our model: (i) peer effects, (ii) covariance estimation with socio-demographic characteristics, (iii) subcritical diffusion processes, (iv) diffusion in stochastic block models, and (v) spatial dependence via a Matern covariance matrix. In these examples, we maintain consistent use of notation defined in the previous sections. The remaining notation, however, is kept consistent only within each subsection. ### \(M\)-dependence #### 3.1.1. Environment We consider Theorem 2.1 of Romano and Wolf (2000). In this application there are real-valued time series data, so \(p=1\) (and we drop the index \(d\)) and \(\Omega_{n}\) is a scalar. Under Romano and Wolf's setup, \(Z_{n,i}\) and \(Z_{n,j}\) are independent if \(|i-j|>M\). Here, \(\{Z_{n,i}\}\) are mean zero random variables. For convenience of the reader, we include the assumptions made in their paper: Suppose \(Z_{n,1},Z_{n,2},...,Z_{n,r}\) is an \(M\)-dependent sequence of random variables for some \(\delta>0\) and \(-1\leq\gamma<1\): 1. \(\operatorname{E}|Z_{n,i}|^{2+\delta}\leq\Delta_{n}\) for all \(i\) 2. \(\operatorname{var}\left(\sum_{i=a}^{a+k-1}Z_{n,i}\right)k^{-1-\gamma}\leq K_{n}\) for all \(a\) and \(k\geq M\) 3. \(\operatorname{var}\left(\sum_{i=1}^{r}Z_{n,i}\right)r^{-1}M^{-\gamma}\geq L_{n}\) 4. \(K_{n}=O\left(L_{n}\right)\) 5. \(\Delta_{n}=O\left(L_{n}^{1+\delta/2}\right)\) 6. \(M^{1+(1-\gamma)(1+2/\delta)}=o(r)\) #### 3.1.2. Application of Theorem 1 We consider the \(M\)-ball, \(\mathcal{A}_{i}^{n}=\{j:|j-i|\leq M\}\). We drop the subscript \(n\) in \(Z_{n,i}\) for convenience. In this case, \(\operatorname{cov}(Z_{i},Z_{j})=0\) by independence for all \(j\) with \(|i-j|>M\), so Assumption 3 is satisfied. Under bounded third and fourth moments, we check the remaining assumptions. Assumption 1 is easily verified: \[\sum_{i;j,k\in\mathcal{A}_{i}^{n}}\operatorname{E}[|Z_{i}|Z_{j}Z_{k}]=O(M^{2}\sum_{i}\operatorname{E}[|Z_{i}|^{3}])=o\left(n^{3/2}M^{3/2}\right)=o\left(\Omega_{n}^{3/2}\right),\] following their Assumption 6. Our Assumption 2 is satisfied similarly following their Assumption 6: \[\sum_{i,j;k\in\mathcal{A}_{i}^{n},l\in\mathcal{A}_{j}^{n}}\operatorname{cov}(Z_{i}Z_{k},Z_{j}Z_{l})=O\Bigg(\sum_{\begin{subarray}{c}i,j:|i-j|\leq M,\\ k:|k-i|\leq M,\\ l:|l-i|\leq 2M\end{subarray}}\operatorname{E}\left[|Z_{i}Z_{k}Z_{j}Z_{l}|\right]\Bigg)=O\Big(M^{3}\sum_{i}\operatorname{E}[Z_{i}^{4}]\Big)=o\left(n^{2}M^{2}\right)=o\left(\Omega_{n}^{2}\right).\] This completes the verification for \(M\)-dependent sequences.
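As a quick numerical companion to this \(M\)-dependence example, the sketch below is our own illustration (the equal moving-average weights are an arbitrary choice, not from Romano and Wolf (2000)). It simulates an \(M\)-dependent series, normalizes its sum by the within-affinity-set covariance implied by those weights, and checks that the result is approximately standard normal.

```python
import numpy as np

# A small simulation sketch of the M-dependent setting above (our own
# illustration; the equal moving-average weights are an arbitrary choice).
# Z_t = eps_t + ... + eps_{t+M} is M-dependent: Z_t and Z_s are independent
# whenever |t - s| > M, and cov(Z_t, Z_{t+h}) = M + 1 - |h| for |h| <= M.
rng = np.random.default_rng(1)

def m_dependent_series(n, M, reps):
    eps = rng.standard_normal((reps, n + M))
    Z = np.zeros((reps, n))
    for l in range(M + 1):
        Z += eps[:, l : l + n]             # Z[:, t] = eps[:, t] + ... + eps[:, t + M]
    return Z

n, M, reps = 2000, 5, 5000
Z = m_dependent_series(n, M, reps)

# Omega_n sums cov(Z_i, Z_j) over the M-ball affinity sets A_i = {j: |j-i| <= M};
# ignoring edge effects, this is approximately n * (M + 1)^2 for these weights.
omega_n = n * (M + 1) ** 2
S = Z.sum(axis=1) / np.sqrt(omega_n)
print(f"mean={S.mean():+.3f}, sd={S.std():.3f}, P(|S|>1.96)={np.mean(np.abs(S) > 1.96):.3f}")
```

The reported standard deviation is slightly below one because of edge effects in the finite sample, but the tail frequencies should sit close to the nominal normal values.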
### Non-mixing Autoregressive Processes #### 3.2.1. Environment Consider the linear process \(Z_{t}=\sum_{l=0}^{\infty}\rho^{l}\epsilon_{t-l}\), where \(\rho\in(0,1)\) and the innovations \(\epsilon_{t}\) are i.i.d. with mean zero and variance \(q(1-q)\) (for instance, centered Bernoulli(\(q\)) draws). Such processes need not satisfy standard strong mixing conditions, but their covariances decay geometrically in the lag. Assume, without loss of generality, that \(s>t\), so \[\mathrm{cov}(Z_{t},Z_{s})=\mathrm{cov}\Big(\sum_{l=0}^{\infty}\rho^{l}\epsilon_{t-l},\sum_{k=0}^{\infty}\rho^{k}\epsilon_{s-k}\Big)=\rho^{M}\sum_{l=0}^{\infty}\rho^{2l}\mathrm{var}(\epsilon_{t-l})=\frac{C}{\rho+1}\cdot q(1-q)\rho^{M}\] where \(M=s-t\). #### 3.2.2. Application of Theorem 1 We can take \(\mathcal{A}_{t}^{n}=\{t\}\) and apply Corollary 1. Here \(\Omega_{n}=Cnq(1-q)\) for \(C=\sum_{l=0}^{\infty}\rho^{l}\). Moreover, \(\mathrm{cov}(Z_{t},Z_{t+M})\to 0\) as \(M\to\infty\). It is easy to see that \(\mathrm{E}[Z_{t}Z_{s}|Z_{s}]\geq 0\) for all \(s\neq t\). Condition (ii) follows immediately since \(\sum_{t\neq s}\mathrm{cov}(Z_{t},Z_{s})=\sum_{t\neq s}c\cdot q(1-q)\rho^{|t-s|}=o(n\frac{C}{\rho+1}\cdot q(1-q))\). We check Condition (i): \[\sum_{t,s}\mathrm{cov}(Z_{t}^{2},Z_{s}^{2})\leq\sum_{t,s}\Big\{\rho^{4|t-s|}\mathrm{var}\big((\sum_{l=0}^{\infty}\rho^{l}\epsilon_{t\wedge s-l})^{2}\big)+2\rho^{|t-s|}\mathrm{cov}\big((\sum_{l=0}^{\infty}\rho^{l}\epsilon_{t\wedge s-l})^{2},(\sum_{k=0}^{|t-s|-1}\rho^{k}\epsilon_{t\lor s-k})\cdot(\sum_{l=0}^{\infty}\rho^{l}\epsilon_{t\wedge s-l})\big)+4c^{2}\cdot q^{2}(1-q)\rho^{|t-s|}\Big\}\] \[\leq\sum_{t,s}\Big\{\rho^{4|t-s|}\mathrm{var}\big((\sum_{l=0}^{\infty}\rho^{l}\epsilon_{t\wedge s-l})^{2}\big)+2C\rho^{|t-s|}\mathrm{var}\big((\sum_{l=0}^{\infty}\rho^{l}\epsilon_{t\wedge s-l})^{3/2}\big)+4c^{2}\cdot q^{2}(1-q)\rho^{|t-s|}\Big\}=o((\Omega_{n})^{2}),\] and the proof is complete. ### Random Fields #### 3.3.1. Environment This next example of random fields nests many time series and spatial mixing models as special cases. Specifically, we take the setting of Jenish and Prucha (2009), Theorem 1. Their setting has either \(\phi\)- or \(\alpha\)-mixing in random fields, allowing for non-stationarity and asymptotically unbounded second moments. They treat real mean-zero random field arrays \(\{Z_{i,n};i\in D_{n}\subseteq\mathbb{R}^{d},n\in\mathbb{N}\}\), where each pair of elements \(i,j\) has some minimum distance \(\rho(i,j)\geq\rho_{0}>0\), where \(\rho(i,j):=\max_{1\leq l\leq d}|i_{l}-j_{l}|\), between them. At each point on the lattice, there is a real-valued random variable drawn, so \(p=1\). For their central limit theorem results, the authors assume (see their Assumptions 2 and 5, restated below for convenience) a version of uniform integrability that allows for asymptotically unbounded second moments, while maintaining that no single variance summand dominates, by scaling \(X_{i,n}:=Z_{i,n}/\max_{i\in D_{n}}c_{i,n}\) so that \(X_{i,n}\) is uniformly integrable in \(L_{2}\). The authors also assume (see their Assumption 3, restated below) conditions on the inverse function \(\alpha_{inv}\) on mixing coefficients \(\alpha\) (their Assumption 3) and \(\phi\) (their Assumption 4) together with the tail quantile functions \(Q_{i,n}\) (where \(Q_{X}(u):=\inf\{x:F_{X}(x)\geq 1-u\}\), with \(F_{X}\) the cumulative distribution function of the random variable \(X\)), essentially requiring nice trade-off conditions between the two, such that under \(\alpha\)-mixing decaying at a rate \(O(\rho^{-(d+\delta)})\) for some \(\delta>0\), \(\sum_{m=1}^{\infty}m^{d-1}\sup_{n}\alpha_{k,l,n}(m)<\infty\) for all \(k+l\leq 4\), and \(\sup_{n}\sup_{i\in D_{n}}\int_{0}^{1}\alpha_{inv}^{d}(u)Q_{i,n}(u)du\) tends to zero in the limit of upper quantiles.
Again, for the sake of convenience of the reader, we include the assumptions made in their paper: 1. Assumption 2: \(\lim_{k\to\infty}\sup_{n}\sup_{i\in D_{n}}\mathrm{E}[|Z_{i,n}/c_{i,n}|^{2}\mathbf{1}\{|Z_{i,n}/c_{i,n}|>k\}]=0\) for \(c_{i,n}\in\mathbb{R}^{+}\) 2. Assumption 3: The following conditions must be satisfied by the \(\alpha\)-mixing coefficients: (a) \(\lim_{k\to\infty}\sup_{n}\sup_{i\in D_{n}}\int_{0}^{1}\alpha_{inv}^{d}(u)\left(Q_{|Z_{i,n}/c_{i,n}|\mathbf{1}\{Z_{i,n}/c_{i,n}>k\}}\right)^{2}du=0\) (b) \(\sum_{m=1}^{\infty}m^{d-1}\sup_{n}\alpha_{k,l,n}(m)<\infty\) for \(k+l\leq 4\), where \[\alpha_{k,l,n}(r)=\sup(\alpha_{n}(U,V),|U|\leq k,|V|\leq l,\rho(U,V)\geq r)\] (c) \(\sup_{n}\alpha_{1,\infty,n}(m)=O\left(m^{-d-\delta}\right)\) 3. Assumption 4: The following conditions must be satisfied by the \(\phi\)-mixing coefficients: (a) \(\sum_{m=1}^{\infty}m^{d-1}\overline{\phi}_{1,1}^{1/2}(m)<\infty\) (b) \(\sum_{m=1}^{\infty}m^{d-1}\overline{\phi}_{k,l}(m)<\infty\) for \(k+l\leq 4\) (c) \(\overline{\phi}_{1,\infty}(m)=\mathcal{O}(m^{-d-\epsilon})\) for some \(\epsilon>0\) 4. Assumption 5: \(\liminf_{n\to\infty}|D_{n}|^{-1}{M_{n}}^{-2}\sigma_{n}^{2}>0\) #### 3.3.2. Application of Theorem 1 In the following, we assume that the \(Z_{i,n}\)s have bounded second moments (otherwise, we can replace them with their scaled versions (see above), and the results should go through under bounded third and fourth moments). Here, for any \(\epsilon_{n}>0\), we take \(\mathcal{A}_{i}^{n}=\{j:\rho(i,j)\leq K^{i}(\epsilon_{n})\}\), where \(K^{i}\) is a non-increasing function. That is, we pick \(K^{i}(\epsilon_{n})\) to be large enough, and this can be decided by understanding the cumulative distribution function of the random variables. From the first part of Lemma B.1 (see also (B.10)) in Jenish and Prucha (2009), together with their Assumptions 3 and 4 (under \(\alpha\)-mixing and \(\phi\)-mixing, respectively), their Assumption 5, and Rio's covariance inequality (Rio, 1993), we know that for any \(\epsilon_{n}>0\), for \(k\neq i\) such that \(\rho(i,k)\geq K^{i}(\epsilon_{n})\), we can write \[|\mathrm{cov}(Z_{i,n},Z_{k,n})|\leq 4\int_{0}^{\overline{\alpha}_{1,1}(K^{i}(\epsilon_{n}))}Q_{i,n}(u)Q_{k,n}(u)du\leq\epsilon_{n}.\] We first restate Lemma B.1 in their paper for the sake of convenience of the reader: Lemma **1** (Lemma B.1 in Jenish and Prucha (2009); Bradley (2007)).: _Let \(\alpha(m)\), \(m=1,2,...\), be a non-increasing sequence such that \(0\leq\alpha(m)\leq 1\), and \(\alpha(m)\to 0\) as \(m\to\infty\). Set \(\alpha(0)=1\), and define \(\alpha^{-1}(u):(0,1)\to\mathbb{N}\cup\{0\}\) such that_ \[\alpha^{-1}(u)=\max\{m\geq 0:\alpha(m)>u\}\] _for \(u\in(0,1)\)._ _Let \(f:(0,1)\to[0,\infty)\) be a Borel function. Then, for any \(q\geq 1\):_ 1. \(\sum_{m=1}^{\infty}m^{q-1}\int_{0}^{\alpha(m)}f(u)du\leq\int_{0}^{1}[\alpha^{-1}(u)]^{q}f(u)du\) 2. \(\int_{0}^{1}[\alpha^{-1}(u)]^{q}du\leq q\sum_{m=1}^{\infty}\alpha(m)m^{q-1}\). Now we restate result B.10 in their paper: \[\sup_{n}\sup_{i\in D_{n}}\int_{0}^{1}\alpha_{inv}^{d}(u)Q_{i,n}^{2}(u)du=K_{1}<\infty \tag{3.1}\] where \(\alpha_{inv}(u):=\max\{m\geq 0:\sup_{n}\alpha_{1,1,n}(m)>u\}\) for \(u\in(0,1)\).
Therefore, we can pick the smallest \(\epsilon_{n}\) such that \(\epsilon_{n}>\frac{1}{\rho_{0}^{d}n^{\gamma}}\) for \(0<\gamma<1\), so that \[4\int_{0}^{\overline{\alpha}_{1,1}(K(\epsilon_{n}))}Q_{i,n}(u)Q_{k,n}(u)du\leq\epsilon_{n}\ll\int_{0}^{1}(Q_{i,n})^{2}(u)du=\mathrm{var}(Z_{i,n}).\] Taking \(\epsilon_{n}\) to satisfy the lower bound stated above allows control of the size of the affinity sets. Indeed, via a packing number calculation, we see that while this allows \(K^{i}(\epsilon_{n})\) to grow with \(n\), it grows more slowly than \(n\). Specifically, taking \(\epsilon_{n}>\frac{1}{\rho_{0}^{d}n^{\gamma}}\) for any \(0<\gamma<1\), and since \(\delta>0\), together with their Assumption 3, we have \[\Big{(}\frac{K^{i}(\epsilon_{n})}{\rho_{0}}\Big{)}^{d}=\Big{(}\frac{(1/\epsilon_{n})^{-(d+\delta)}}{\rho_{0}}\Big{)}^{d}<\Big{(}\frac{1}{\epsilon_{n}\rho_{0}^{d}}\Big{)}=o(n).\] Now, we verify that our key conditions are satisfied in this setting. We write \(K:=\max_{i}K^{i}(\epsilon_{n})\), and first, we check Assumption 1: \[\sum_{i;j,k\in\mathcal{A}_{i}^{n}}\mathrm{E}[|Z_{i,n}|Z_{j,n}Z_{k,n}]=O(K^{2}\sum_{i}\mathrm{E}[|Z_{i,n}|^{3}])=O(K^{2}n)=o(n^{3/2}K^{3/2})=o\left(\Omega_{n}^{3/2}\right)\] since \(\Omega_{n}^{3/2}=(\sum_{i}\sum_{j\in\mathcal{A}_{i}^{n}}\mathrm{E}[Z_{i}Z_{j}])^{3/2}.\) The second inequality holds by the arithmetic-geometric mean inequality. The remaining argument relies on rearranging the summations and using the growth rate of \(K\), i.e. \(K=o(n)\) in the third equality, as defined above. Next, we check that Assumption 2 is satisfied using similar arguments and assuming bounded fourth moments: \[\sum_{i,j;k\in\mathcal{A}_{i}^{n},l\in\mathcal{A}_{j}^{n}}\mathrm{cov}(Z_{i,n}Z_{k,n},Z_{j,n}Z_{l,n})=O\Big(K^{3}\sum_{i}\mathrm{E}[Z_{i,n}^{4}]\Big)=O(K^{3}n)=o(n^{2}K^{2})=o\left(\Omega_{n}^{2}\right).\] Finally, we turn to Assumption 3. By the construction of the affinity sets and the covariance inequality above, \(|\mathrm{cov}(Z_{i,n},Z_{j,n})|\leq\epsilon_{n}\) for \(j\notin\mathcal{A}_{i}^{n}\). Therefore, the summation in the left-hand side of Assumption 3 is dominated by \(\Omega_{n}\), as shown above. ### Dependency Graphs and Chen and Shao (2004) Next, we consider dependency graphs. There is an undirected, unweighted graph \(G\) with dependency neighborhoods \(N_{i}:=\{j:\ G_{ij}=1\}\) such that \(Z_{i}\) is independent of all \(Z_{j}\) for \(j\notin N_{i}\) (Baldi and Rinott, 1989; Chen and Shao, 2004; Ross, 2011). Let \(\mathcal{A}_{i}^{n}=\{j:\ G_{ij}=1\}\). Denote the maximum cardinality of these to be \(D_{n}\). In Ross (2011) (see Theorem 3.6), together with a bounded fourth moment assumption, we see that the conditions there imply the conditions here.
Indeed, we see that \[\sum_{i;j,k\in\mathcal{A}_{i}^{n}}\operatorname{E}\left[|Z_{i}|Z_{j}Z_{k}\right]\leq D^{2}\sum_{i=1}^{n}\operatorname{E}\left[Z_{i}^{3}\right],\] and for \(\Omega_{n}^{-1/2}S_{n}\rightsquigarrow N(0,1)\), we need \(D^{2}\sum_{i=1}^{n}\operatorname{E}\left[Z_{i}^{3}\right]=o\left(\Omega_{n}^{3/2}\right)\) in Ross (2011) (Theorem 3.6), and hence Assumption 1 is satisfied. Similarly, \[\sum_{i,i^{\prime};j\in\mathcal{A}_{i}^{n},j^{\prime}\in\mathcal{A}_{i^{\prime}}^{n}}\operatorname{cov}\left(Z_{i}Z_{j},Z_{i^{\prime}}Z_{j^{\prime}}\right)=O(D^{3}\sum_{i=1}^{n}\operatorname{E}[Z_{i}^{4}]),\] and for \(\Omega_{n}^{-1/2}S_{n}\rightsquigarrow N(0,1)\), we need \(D^{3}\sum_{i=1}^{n}\operatorname{E}[Z_{i}^{4}]=o\left(\left(\Omega_{n}\right)^{2}\right)\) in Ross (2011) (Theorem 3.6), and hence Assumption 2 is satisfied. Assumption 3 holds trivially, by definition of the dependency neighborhoods. Now, we consider Chen and Shao (2004). In particular, we consider their weakest assumption, LD1: given an index set \(\mathcal{I}\), for any \(i\in\mathcal{I}\), there exists an \(A_{i}\subseteq\mathcal{I}\) such that \(X_{i}\) and \(X_{A_{i}^{C}}\) are independent. The affinity sets can be defined by the complement of the independence sets. So \(\mathcal{A}_{i}^{n}=\{j:Z_{j}\) is not independent of \(Z_{i}\}\), which is similar to the dependency graphs setting. The goals of their paper are different. They develop finite-sample Berry-Esseen bounds, with bounded \(p\)-th moments, where \(2<p\leq 3\). This is different from our approach. In our paper, we focus on covariance conditions in the asymptotics, and collect relatively more dependent sets along the triangular array. ## 4. Applications ### Peer Effects Models #### 4.1.1. Environment We now turn to an example of treatment effects with spillovers. Consider a setting in which units in a network are assigned treatment status \(T_{i}\in\{0,1\}\), as in Aronow and Samii (2017). The network is a graph, \(\mathcal{G}\), consisting of individuals (nodes) and connections (edges). For now, we consider the case in which treatment assignments are independent across nodes. However, there are spillovers in treatment effects determined by the topology of the network, where treatment status within one's network neighborhood may influence one's own outcome; for instance, whether a friend is vaccinated affects a person's chance of being exposed to a disease. Rather than being arbitrary, Aronow and Samii (2017) consider an exposure function \(f\) that takes on one of \(K\) finite values; i.e., \(f(T_{i};T_{1:n},\mathcal{G})\in\{d_{1},\ldots,d_{K}\}\). An estimand of the average causal effect is of the form \[\tau(d_{k},d_{l})=\frac{1}{n}\sum_{i}y_{i}(d_{k})-\frac{1}{n}\sum_{i}y_{i}(d_{l})\] where \(d_{k}\) and \(d_{l}\) are the induced exposures under the treatment vectors. The Horvitz and Thompson estimator from Horvitz and Thompson (1952) provides: \[\hat{\tau}_{HT}(d_{k},d_{l})=\frac{1}{n}\sum_{i}\mathbf{1}\{D_{i}=d_{k}\}\frac{y_{i}(d_{k})}{\pi_{i}(d_{k})}-\frac{1}{n}\sum_{i}\mathbf{1}\{D_{i}=d_{l}\}\frac{y_{i}(d_{l})}{\pi_{i}(d_{l})}\] where \(\pi_{i}(d_{k})\) is the probability that node \(i\) receives exposure \(d_{k}\) over all treatments. The challenge is that the treatment effects are not independent across subjects. Let \(N_{i,:}\) denote a dummy vector of whether \(j\) is in \(i\)'s neighborhood: let \(N_{ij}=1\) when \(G_{ij}=1\), with the convention \(N_{ii}=0\).
Aronow and Samii (2017) consider an empirical study with \(K=4\) that are: (i) only \(i\) is treated in their neighborhood (\(d_{1}=T_{i}\cdot 1\{T_{1:n}^{\prime}N_{i,:}=0\}\)), (ii) at least one is treated in \(i\)'s neighborhood and \(i\) is treated (\(d_{2}=T_{i}\cdot 1\{T_{1:n}^{\prime}N_{i,:}>0\}\)), (iii) \(i\) is not treated but some member of the neighborhood is (\(d_{3}=(1-T_{i})\cdot 1\{T_{1:n}^{\prime}N_{i,:}>0\}\)), and (iv) neither \(i\) nor any neighbor is treated (\(d_{4}=(1-T_{i})\cdot\prod_{j:\ N_{ij}=1}(1-T_{j})\)). We show that our result allows for a more generalized setting. #### 4.1.2. Application of Theorem 1 To obtain consistency and asymptotic normality Aronow and Samii (2017) assume a covariance restriction of local dependence (Condition 5) and apply Chen and Shao (2004) to prove the result. Namely, their restriction is that there is a dependency graph \(H\) (with entries \(H_{ij}\in\{0,1\}\)) with degree that is uniformly bounded by some integer \(m\) independent of \(n\). That is, \(\sum_{j}H_{ij}\leq m\) for every \(i\). This setting is much more restrictive than our conditions, especially as there can exist indirect correlation in choices as effects propagate or diffuse through the graph. We can work with larger real exposure values, and in settings concentrating the mass of influence in a neighborhood while allowing from spillovers from everywhere. This is important to allow for in centrality-based diffusion models, SIR models, and financial flow networks, since the the spillovers in these settings are less restricted than the sparse dependency graph in their Condition 5. Indeed, we can even allow the dependency graph to be a complete graph, as long as the correlations between the nodes in this dependency graph satisfy our Assumptions 1-3. That is, we can handle cases where, for a given treatment assignment, each node has \(n\) real exposure conditions for which the exposure conditions of the whole graph can be well approximated by simple functions, where small perturbations to any node in large regions of small correlations do not substantially perturb the outcomes in these regions (i.e., across affinity sets) while perturbations of the same size in any region of larger correlations (i.e., within affinity sets) can cause significant changes in the outcomes in that region. One can think of the "shorter" monotonic regions in the simple function to be over affinity sets, and "longer" monotonic regions to be across different affinity sets. One can think about monotonically non-decreasing functions, for instance, in epidemic spread settings where any increase in the "treatment" cannot decrease the number of infected nodes. To take an example, let the true exposure of \(i\) be given by \(e_{i}(T_{1:n})\). Then consider a case where \(e_{i}(T_{1:n}^{+})-e_{i}(T_{1:n})\geq 0\), where \(T_{1:n}^{+}\) indicates an increase in any element \(j\in[n]\) from \(T_{1:n}\). This is a structure that would happen naturally in a setting with diffusion. The potential outcome for \(i\) given treatment assignment is assumed to be \(y_{i}(e_{i})\). In practice, for parsimony and ease exposures are often binned. 
So consider the problem where the \(2^{n}\) possible exposures \(e_{i}(T_{1:n})\) can be approximated by \(K\) well-separated "effective" exposures \(\{d_{1},d_{2},...,d_{K}\}\) where \(|d_{i}-d_{j}|>\delta\) for any \(i,j\in\{1,2,...,K\}\) and some \(\delta>0\), and for any \(r\in\{1,2,...,K\}\), \(i,j\in\{1,2,...,2^{n}\}\), we have, \(e_{i}(T_{1:n}),e_{j}(T_{1:n})\in d_{r}\) if and only if \(|e_{i}(T_{1:n})-e_{j}(T_{1:n})|<\delta\) and we have \(y_{i}(e_{i})\) smooth in its argument for every \(i\). Then, following the above, the researcher's target estimand is the average causal effect switching between two exposure bins, \[\tau(d_{k},d_{l})=\frac{1}{n}\sum_{i}1\{e_{i}(T_{1:n})\in d_{k}\}y_{i}(e_{i})- \frac{1}{n}\sum_{i}1\{e_{i}(T_{1:n})\in d_{l}\}y_{i}(e_{i}).\] The estimator for this estimand cannot directly be shown to be asymptotically normally distributed using the prior literature. It is ruled out by Condition 5 in Aronow and Samii (2017) which uses Chen and Shao (2004). However, it is straightforward to apply our result. An example of this is a sub-critical diffusion process with randomly selected set of nodes \(M_{n}\) being assigned some treatment, and every other node is subsequently infected with some probability. We provide two examples concerning diffusion which speak to this in Subsections 4.3 and 4.4. ### Covariance Estimation using Socio-economic Distances #### 4.2.1. Environment One application of mixing random fields is to use them to develop covariance matrices for estimators (e.g., Driscoll and Kraay (1998); Bester et al. (2011); Barrios et al. (2012); Cressie (2015)). Here we consider the example of Conley and Topa (2002), which builds on Conley (1999). Essentially, their approach is to parameterize the characteristics (observable or unobservable) of units that drive correlation in shocks by the Euclidean metric, as we further describe below. This, however, rules out examples that are common in practice that include (discrete) characteristics with no intrinsic ordering driving degrees of correlation. For instance, correlational structures across race, ethnicity, caste, occupation, and so on, are not readily accommodated in the framework. For a concrete example, correlations between ethnicities \(e_{i}\) and \(e_{j}\) for units \(i,j\) that are parametrized by \(p_{e_{i}e_{j}}\) in an unstructured manner are ruled out. Like this, many of these examples only admit partial orderings, if that. Yet these are important, practical considerations in applied work. The results in our Theorem 1 allows an intuitively nice treatment of such cases.6 Footnote 6: It is also immediate that our discussion below applies to combinations of temporal (and possibly cross-sectional) dependence as in Driscoll and Kraay (1998). We note that our conditions also provide consistent estimators for covariance matrices of moment conditions for parameters of interest in the GMM setting, under full-rank conditions of expected derivatives Conley (1999), since the author uses the CLT from Bolthausen (1982) under stationary random fields which is generalized in the setting above. In Conley (1999), the model is that the population lives in a Euclidean space (taken to be \(\mathbb{R}^{2}\) for the purposes of exposition), with each individual \(i\) at location \(s_{i}\). 
Each of these locations has an associated random field \(X_{s_{i}}.\) The author obtains the limiting distribution of parameter estimates \(b\) of \(\beta\in B\), where \(B\subset\mathbb{R}^{d}\) is a compact subset and \(\beta\) is the unique solution to \(\mathrm{E}[g(X_{s_{i}};\beta)]\) for a moment function \(g\). The authors list the following sufficient conditions on the moment function to imply consistent estimation of the expected derivatives and having full-rank: * for all \(b\in B\), the derivative \(Dg(X_{s_{i}};b)\) with respect to \(b\) is measurable, continuous on \(B\) for all \(X\in\mathbb{R}^{k}\), and first-moment continuous. * \(\mathrm{E}[Dg(X_{s_{i}};b)]<\infty\) and is of full-rank. * \(\sum_{s\in\mathbb{Z}^{2}}\mathrm{cov}(g(X_{0};\beta),g(X_{s};\beta))\) corresponding to sampled locations \(X_{s}\) is a non-singular matrix. In addition to the sufficient conditions on the expected derivatives above, we list the remaining sufficient conditions on the random field \(X_{s}\) itself used by Conley (1999) to obtain the limiting distribution of parameter estimates through the GMM; and we note that these are nested in the conditions from Jenish and Prucha (2009), with the addition of bounded \(2+\delta\)-moment of \(||g(X_{s};\beta)||\): * \(\sum_{m=1}^{\infty}m\alpha_{k,l}(m)<\infty\) for \(k+l\leq 4\) * \(\alpha_{1,\infty}(m)=o(m^{-2})\) * for some \(\delta>0\), \(\mathrm{E}[||g(X_{s};\beta)||]^{2+\delta}<\infty\) and \(\sum_{m=1}^{\infty}m(\alpha_{k,l}(m))^{\delta/(2+\delta)}<\infty\). In Conley and Topa (2002), the authors develop consistent covariance estimators, using these conditions, combining different distance metrics including physical distance as well as ethnicity (or occupation, for another example) distance in \(L_{2}\) at an aggregate level (using census tracts data). In particular, the authors use indicator vectors to encode ethnicities (or occupations), and take the Euclidean distance from aggregated (at the census tract level) indicator vectors, as a measure of these ethnic/occupational distances. For instance, they use the Euclidean metric to write a "race and ethnicity" distance between census tract \(i\), and census tract \(j\), \[D_{ij}=\sqrt{\sum_{k=1}^{9}(e_{ik}-e_{jk})^{2}},\] where the sum is taken over nine ethnicities/races, indexed by \(k\), defined by the authors. The use of indicator vectors and Euclidean distance results people of different race/ethnicity groups being in orthogonal groups (with a fixed addition of Euclidean distance \(\sqrt{2}\) between any pairs of different race/ethnicity groups). To apply in practice, one would often need to allow for varying degrees of pairwise correlation for each pair of race/ethnicity groups. Additionally, even if the correlation induced by physical distance vanishes, it may be of interest to us to maintain correlation arising from interactions within and between ethnicity groups, where being of similar ethnic groups may induce nontrivial transfer of information between people despite being physically located large distances apart. It is not too difficult to see that the indicator vector formulation above does not allow for this, since in the case of a pair of distinct ethnicities with high correlation, the formulation in Conley and Topa (2002) would require a correlation of zero.7 Footnote 7: One could approach this problem by embedding via feature maps in an infinite dimensional space, which slightly complicates things and is perhaps less meaningful to the practitioner. 
Instead, our results above allow us to frame this problem in a more accessible and intuitive way. #### 4.2.2. Application of Theorem 1 Consider random variables \(Z_{i}\) and \(Z_{j}\), whose correlation we can decompose into components of physical distance and racial/ethnic distance (just as in Conley and Topa (2002)). It is direct to see how our work above takes care of the physical distance component, and so we turn to the remaining distance component. For this, for instance, one could consider the pairwise interaction probabilities, \(p_{e_{i},e_{j}}\) characterizing the correlation between ethnicity \(e_{i}\) of \(i\), and ethnicity \(e_{j}\) of \(j\). Our affinity set structure then allows us to incorporate this correlation structure. That is, one can construct an affinity set \(\mathcal{A}_{i}^{n}=\{j:\rho(i,j)\leq K^{i}(\epsilon/2),\text{ or }p_{e_{i},e_{j}}\geq 1- \epsilon/2\}\) with \(K^{i}\) defined just as in Subsection 3.3. Following our previous section, our generalization holds. Attempts to (non-parametrically) develop estimates of covariance in the cross-section often leverage a time or distance structure. For example, Driscoll and Kraay (1998) assume a mixing condition on a random field such that the correlation between shocks \(\epsilon_{it}\) and \(\epsilon_{j,t-s}\) tends to zero as \(s\to\infty\). This allows for reasonably agnostic cross sectional correlational structures but requires it to be temporally invariant and studies \(T^{1/2}\) asymptotics. Although such an assumption applies in certain contexts, there are many socio-economic contexts in which it does do not apply and yet our theorem can be applied. We provide two such examples. For instance, in simple models of migration with migration cost, there is often persistence in how shocks to incentives to migrate in some areas affect populations in other areas. Nonetheless, there are very particular correlation patterns because, as an example, ethnic groups migrate to specific places based on existing populations, and so affinity sets are driven by the places to which a given ethnic group might consider moving and our central limit theorem can then be applied, provided our conditions on affinity sets apply. Another example comes from social interaction. Individuals interact with others in small groups that experience correlated shocks which correlate grouped individuals' behaviors and beliefs. Each group involves only a tiny portion of the population, and any given person interacts in a series of groups over time. Thus individuals' behaviors or beliefs are correlated with others with whom they have interacted; but without any natural temporal or spatial structure. Each person has an affinity set composed of the groups (classes, teams, and so on) that they have been part of. People may also have their own idiosyncratic shocks to behaviors and beliefs. In this example again, our central limit theorem applies despite the lack of any spatial or temporal structure. To apply our results one does not need to know the affinity sets, just to know that each person's affinity group is appropriately small relative to the population. Overall, these examples show that in very natural settings, simply assuming that some form of distance removes dependency may not be the right way to tackle the application of a central limit theorem. ### Sub-Critical Diffusion Models #### 4.3.1. Environment A finite-time SIR diffusion process occurs on a sequence of unweighted and undirected graphs \(G_{n}\). 
A first-infected set, or seed set \(M_{n}\), of size \(m_{n}\), of random nodes with treatment indicated \(W_{i}\in\{0,1\}\), are seeded (set to have \(W_{i}=1\)) at \(t=0\) and in period \(t=1\) each infects each of its network neighbors, \(\{j:\ G_{ij,n}=1\}\) i.i.d. with probability \(q_{n}\). The seeds then are no longer active in infecting others. In period \(t=2\) each of the nodes infected at period \(t=1\) infects each of its network neighbors who were never previously infected i.i.d. with probability \(q_{n}\). The process continues for \(T_{n}\) periods. Let \(X_{i}^{n}\in\{0,1\}\) be a binary indicator of whether \(i\) was ever infected throughout the process. Assume that the sequence of SIR models under study, \((G_{n},q_{n},T_{n},W_{n})\) have \(m_{n}\to\infty\) (with \(\alpha_{n}:=m_{n}/n=o(1)\)), \(q_{n}\to 0\), and \(T_{n}\to\infty\) (with \(T_{n}\geq\text{diam}(G_{n})\) at each \(n\)), and are such that the process is sub-critical. Since the number of periods is at least as large as the diameter, it guarantees that for a connected \(G_{n}\), \(\text{cov}(X_{i}^{n},X_{j}^{n})>0\) for each \(i,j\). The statistician may be interested in a number of quantities. For instance, the unknown parameter \(q_{n}\) may be of interest. Suppose \(\text{E}[\Psi_{i}(X_{i};q_{n},W_{1:n})]=0\) is a (scalar) moment condition satisfied only at the true parameter \(q_{n}\) given known seeding \(W_{1:n}\). The \(Z\)-estimator (or GMM) derives from the empirical analog, setting \(\sum_{i}\Psi_{i}(X_{i};\hat{q},W_{1:n})=0\). By a standard expansion argument \[(\hat{q}-q_{n})=-\left\{\sum_{i}\nabla_{q}\Psi_{i}(X_{i};\tilde{q},W_{1:n}) \right\}^{-1}\times\sum_{i}\Psi_{i}(X_{i};q_{n},W_{1:n})+o_{p}(n^{-1/2}).\] To study the asymptotic normality of the estimator, we need to study \[\frac{1}{\sqrt{\text{var}\left(\sum_{i}\Psi_{i}\right)}}\sum_{i}\Psi_{i}(X_{i };q_{n},W_{1:n}),\] which involves developing affinity sets for each \(\Psi_{i}\).8 Footnote 8: This is a general setup. To see a related but different example, consider a diffusion model with re-infection (SIS). Here \(X_{i}\) denotes the number of times \(i\) was infected. Then one can use the expected number of infections for each node given the seeded set as moments: \[\text{E}\left\{\Psi_{i}(X_{i};q_{n},W_{1:n})\right\}=\left[\sum_{t=1}^{T}q_{n }^{t}G_{n}^{t}\cdot W_{1:n}\right]_{i}.\] Again, the main difficulty is that it is not clear what the structure of the correlations of \(\Psi_{i}(X_{i})\) and \(\Psi_{j}(X_{j})\). Here the moments correspond to sums of expected walk counts from the set of seeds to a given node \(i\), which can be complicated and in a connected graph non-zero for every pair of nodes. Under sub-criticality, a vanishing share of nodes are infected from a single seed. Without sub-criticality, most of the graph can have a nontrivial probability of being infected and accurate inference cannot be made with a single network. Let us define \[\mathcal{B}_{j}^{n}:=\{i:\mathrm{P}(X_{i}^{n}=1\mid j\in M_{n},m_{n}=1)>\epsilon _{n}\}.\] Then \(\mathcal{B}_{j}^{n}\) is the set of nodes for which, if \(j\) is the only seed, the probability of being infected in the process is at least \(\epsilon_{n}\). As noted above, in a sub-critical process \(\left|\mathcal{B}_{j}^{n}\right|=o(n)\) for every \(j\), not necessarily uniformly in \(j\). For simplicity assume that there is a sequence \(\epsilon_{n}\to 0\) such that this holds uniformly (otherwise, one can simply consider sums). 
Let \(\beta_{n}:=\sup_{j}\left|\mathcal{B}_{j}^{n}\right|/n\), which tends to zero, such that \(\alpha_{n}\beta_{n}=o(n)\). Next, we assume that the rate at which infections happen within the affinity set is higher than outside of it, and the share of seeds is sufficiently high and affinity sets are large enough to lead to many small infection outbursts but not so large as to infect the whole network. That is, there exists some \(\mathcal{B}_{j}^{n\prime}\subset\mathcal{B}_{j}^{n}\) such that \(|\mathcal{B}_{j}^{n\prime}|=\Theta(|\mathcal{B}_{j}^{n}|)\) and \(\mathrm{P}(X_{i}=1|j\in M_{n},m_{n}=1)\geq\gamma_{n}\) with \(\gamma_{n}/\epsilon_{n}\to\infty\), such that \(\alpha_{n}^{3}=O(\gamma_{n})\), and \(\beta_{n}=O(\gamma_{n})\). This would apply to, for instance, targeted advertisements or promotions that lead to local spread of information about a product, but that does not go viral. It is clear that none of the prior examples such as random fields and dependency graphs cover this case, since all \(X_{i}^{n}\) are correlated. We now show that Theorem 1 applies to this case. #### 4.3.2. Application of Theorem 1 Let us define the affinity sets \(\mathcal{A}_{i}^{n}=\mathcal{B}_{i}^{n}\). Next, consider a random seed \(k\). Let \(\mathcal{E}_{i,j,k}:=\{a\notin\mathcal{B}_{b}^{n}:\ a,b\in\{i,j,k\},\ a\neq b\}\) denote the event that none of the nodes are in each other's affinity sets. It is clear that \(\mathrm{P}(\mathcal{E}_{i,j,k})\to 1\), since \(|\mathcal{B}_{a}^{n}|=o(n)\) for \(a\in\{i,j,k\}\) and seeds are uniformly randomly chosen. If we look at an affinity set, it is sufficient to just look at the variance components and check that it is of a higher order of magnitude: \[\sum_{i}\mathrm{var}(Z_{i}^{n})=n\times(1-(1-\beta_{n})^{2m_{n}})\gamma_{n}^{2}=O(n^{2}\times\alpha_{n}\beta_{n}\times\gamma_{n}^{2}).\] Now to check Assumption 1, we compute: \[\sum_{i;j,k\in\mathcal{A}_{i}^{n}}\mathrm{E}[|Z_{i}|Z_{j}Z_{k}] =\sum_{i;j,k\in\mathcal{A}_{i}^{n},\,j\notin\mathcal{A}_{k}^{n}\text{ or }k\notin\mathcal{A}_{j}^{n}}\mathrm{E}[|Z_{i}|Z_{j}Z_{k}]+\sum_{i;j,k\in\mathcal{A}_{i}^{n},\mathcal{A}_{j}^{n},\mathcal{A}_{k}^{n}}\mathrm{E}[|Z_{i}|Z_{j}Z_{k}]\] \[\approx n^{3}\beta_{n}\epsilon_{n}^{2}+n\alpha_{n}\beta_{n}\epsilon_{n}.\] Thus, we have \[\frac{n^{6}\alpha_{n}^{3}\beta_{n}^{3}\gamma_{n}^{6}}{n^{6}\beta_{n}^{3}\epsilon_{n}^{6}}=\frac{\alpha_{n}^{3}\gamma_{n}^{6}}{\epsilon_{n}^{6}}\to\infty,\] so the assumption is satisfied. We note that the probability that no seed is in any other affinity set is \[\mathrm{P}(\cap_{k\in M_{n}}\mathcal{E}_{i,j,k})=(1-\beta_{n})^{2m_{n}}\approx 1-2m_{n}\beta_{n}=1-2n\times\alpha_{n}\beta_{n}.\] This puts an intuitive restriction on the number of seeds and percolation size as a function of \(n\). Next, we verify Assumption 2. We have \[\sum_{i;j,k\in\mathcal{A}_{i}^{n},r\in\mathcal{A}_{k}^{n}}\mathrm{cov}(Z_{i}^{n}Z_{j}^{n},Z_{k}^{n}Z_{r}^{n}) =O(\sum_{i;j,k\in\mathcal{A}_{i}^{n};\,r\in\mathcal{A}_{k}^{n},\,r\notin\mathcal{A}_{i}^{n}}\mathrm{cov}(Z_{i}^{n}Z_{j}^{n},Z_{k}^{n}Z_{r}^{n}))\] \[=O(n^{2}\times(n-1)(1-\beta_{n})\beta_{n}\epsilon_{n}^{4})\] \[=O(n^{3}\beta_{n}\epsilon_{n}^{4}).\] Therefore, \[\frac{n^{4}\alpha_{n}^{2}\beta_{n}^{2}\gamma_{n}^{4}}{n^{3}\beta_{n}\epsilon_{n}^{4}}=n\alpha_{n}\beta_{n}(\frac{\gamma_{n}}{\epsilon_{n}})^{4}\to\infty,\] and the assumption is satisfied. Finally, we verify Assumption 3.
Given the event \(\mathcal{E}_{i,j,k}\), we can bound the conditional covariance \[\mathrm{cov}(Z_{i}^{n},Z_{j}^{n}\mid\mathcal{E}_{i,j,k})=O(\epsilon_{n}^{2})\] by bounding the probabilities of two contagions. So then \[\sum_{i,j:\ j\notin\mathcal{A}_{i}^{n}}\sum_{k\in M_{n}}\mathrm{ cov}(Z_{i}^{n},Z_{j}^{n}\mid\mathcal{E}_{i,j,k})\mathrm{P}(\mathcal{E}_{i,j,k}) =C\sum_{i,j:\ j\notin\mathcal{A}_{i}^{n}}(1-2n\alpha_{n}\beta_{n}) \cdot\mathrm{cov}(Z_{i}^{n},Z_{j}^{n}\mid\mathcal{E}_{i,j,k})\] \[\approx(1-\alpha_{n})\cdot n\times((1-\alpha_{n})\cdot n-1)(1- \beta_{n})\times(1-2n\alpha_{n}\beta_{n})\cdot\epsilon_{n}^{2}\] for some constant \(C>0\) fixed in \(n\). Keeping orders we have \[\sum_{i,j:\ i\notin\mathcal{A}_{j}^{n}}\sum_{k\in M_{n}}\mathrm{ cov}(Z_{i}^{n},Z_{j}^{n}\mid\mathcal{E}_{i,j,k})\mathrm{P}(\mathcal{E}_{i,j,k})=O ((n\epsilon_{n})^{2}).\] Since \(\sum_{i,j:\ i\notin\mathcal{A}_{j}^{n}}\sum_{k\in M_{n}}\mathrm{ cov}(\mathrm{E}[Z_{i}^{n}\mid\mathcal{E}_{i,j,k}],\mathrm{E}[Z_{j}^{n}\mid \mathcal{E}_{i,j,k}])=0\), we have, \[\sum_{i,j:\ i\notin\mathcal{A}_{j}^{n}}\sum_{k\in M_{n}}\mathrm{ cov}(Z_{i}^{n},Z_{j}^{n})=O((n\epsilon_{n})^{2}),\] and \[\frac{n^{2}\gamma_{n}^{2}\alpha_{n}\beta_{n}}{n^{2}\epsilon_{n}^{2}}=\frac{ \gamma_{n}^{2}\alpha_{n}\beta_{n}}{\epsilon_{n}^{2}}\to\infty,\] is also satisfied. ### Diffusion in Stochastic Block Models #### 4.4.1. Environment A SIR diffusion process occurs on a sequence of unweighted and undirected networks, as in the previous example, except that the network has a block structure as generated by a standard stochastic block model (Holland et al. (1983); Lee and Wilkinson (2019)). Then \(n\) nodes are partitioned into \(k_{n}\) blocks, where block sizes are equal, or within one of each other. The network is formed as follows. With probability \(p_{n}^{in}\in(0,1)\) links are formed inside blocks, and with probability \(p_{n}^{ac}\in(0,1)\) they are formed across blocks, independently. Connections are sparse across blocks but relatively denser within blocks, as defined by the contagion process. In particular, let \(q_{n}\) be the probability of infection, as in the last example. Inside link probabilities are large enough for percolation within blocks: \(p_{n}^{in}\frac{n}{k_{n}}q_{n}>>\log\left(\frac{n}{k_{n}}\right)\). Across link probabilities are small enough for vanishing probabilities of contagion across blocks, even if all other blocks are infected: \(p_{n}^{ac}n^{2}q_{n}<<1\). The infections are seeded, for example with \(k_{n}/2\) seeds. With probability going to 1, all nodes in the blocks with the seeds will be infected and no others. There is a correlation going to 1 of infection status of nodes within the blocks, which are the affinity sets; and there is correlation of infection status going to 0 across blocks, but it is always positive. If \(k_{n}\) is bounded, then a central limit theorem fails. If \(k_{n}\) grows without bound (while allowing \(n/k_{n}\to\infty\) so that blocks are large), then the central limit theorem holds. #### 4.4.2. Application of Theorem 1 Given the previous examples, we simply sketch the application. The affinity set of node \(i\), \(\mathcal{A}_{i}^{n}\), is the block in which it resides. Letting \(\sigma_{n}^{2}\) denote \(\mathrm{var}(Z_{i})\), it follows that \(\Omega_{n}\approx n\frac{n}{k_{n}}\sigma_{n}^{2}\). 
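As an aside before the formal checks, a small simulation sketch illustrates the block structure of the infection indicators described in the environment above and why a bounded number of blocks breaks asymptotic normality. The population size, number of blocks, and seeding used here are hypothetical, and the contagion is stylized so that a seeded block is infected wholesale and no infection crosses blocks, matching the high-probability behavior described above.

```python
import numpy as np

def block_infection_draw(n, k, rng):
    """Stylized SBM contagion: k equal blocks, k/2 uniformly random seed nodes;
    every block containing a seed is infected wholesale, no cross-block spread."""
    blocks = np.repeat(np.arange(k), n // k)
    seeds = rng.choice(n, size=k // 2, replace=False)
    infected_blocks = np.unique(blocks[seeds])
    return np.isin(blocks, infected_blocks).astype(float)   # X_i in {0,1}

def standardized_sums(n, k, reps, seed=0):
    rng = np.random.default_rng(seed)
    draws = np.array([block_infection_draw(n, k, rng).sum() for _ in range(reps)])
    return (draws - draws.mean()) / draws.std()

# With k_n bounded (k = 4) the standardized sum takes only a handful of values,
# so there is no Gaussian limit; with k_n growing (k = 200) it looks approximately normal.
for k in (4, 200):
    s = standardized_sums(n=10_000, k=k, reps=5_000)
    print(f"k = {k:4d}: distinct standardized values = {len(np.unique(np.round(s, 6)))}")
```

This is only an illustration of the environment; the formal verification of the assumptions continues below.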
Then the first assumption is satisfied noting that if \(k_{n}\to\infty\), then \[\sum_{i;j,k\in\mathcal{A}_{i}^{n}}\mathrm{E}[|Z_{i}|Z_{j}Z_{k}]=O(\sum_{i;j,k \in\mathcal{A}_{i}^{n}}\mathrm{E}[|Z_{i}|^{3}])=O\left(n\left(\frac{n}{k_{n}} \right)^{2}\mathrm{E}[|Z_{i}|^{3}]\right)=o(\Omega_{n}^{3/2}).\] Verification of the second assumption comes from noting that if \(k_{n}\to\infty\) then \[\sum_{i;j,k\in\mathcal{A}_{i}^{n},r\in\mathcal{A}_{k}^{n}}\mathrm{ cov}(Z_{i}^{n}Z_{j}^{n},Z_{k}^{n}Z_{r}^{n}) = O\left(\sum_{i;j,k,r\in\mathcal{A}_{i}^{n}}\mathrm{cov}(Z_{i}^{n }Z_{j}^{n},Z_{k}^{n}Z_{r}^{n})\right)\] \[= O\left(n\left(\frac{n}{k_{n}}\right)^{3}\mathrm{E}[Z_{i}^{4}]\right)\] \[= o(\Omega_{n}^{2}).\] Next we check the third assumption. Let \(\varepsilon_{n}=\mathrm{cov}(Z_{i}^{n},Z_{j}^{n})\) for \(i,j\) in different blocks (ignoring the approximation due to the fact that blocks may be of slightly different sizes). Note that \(\varepsilon_{n}=\mathrm{cov}(Z_{i}^{n},Z_{j}^{n})\) is on the order of contagion across blocks, which is \(p_{n}^{ac}q_{n}\frac{n^{2}}{k_{n}^{2}}=o(\frac{1}{k_{n}^{2}})\).9 Then if \(k_{n}\) grows without bound, it follows that Footnote 9: Both blocks could also be infected by some other nodes, which happens of order at most \(n^{2}(p^{ac}q_{n}\frac{n}{k_{n}})^{2}\), which is also of order \(o(\frac{1}{k_{n}^{2}})\). \[\sum_{i,j\notin\mathcal{A}_{i}^{n}}\mathrm{E}\left(Z_{i}Z_{j}\cdot\mathrm{ sign}\left(\mathrm{E}[Z_{i}Z_{j^{\prime}}|Z_{j}]\right)\right)=O\left(n^{2} \frac{k_{n}-1}{k_{n}}\varepsilon_{n}\right)=o\left(\frac{n^{2}}{k_{n}}\sigma_{ n}^{2}\right).\] Note that in this example, not only do the assumptions fail if \(k_{n}\) is bounded, but also the conclusion of the theorem fails to hold as well; so the conditions are tight in that sense. ### Spatial Process with Irregular Observations and Matern Covariance #### 4.5.1. Environment Finally, we turn to an example of neural network models for geospatial data. Specifically, we look at the environment of Zhan and Datta (2023). The authors propose a neural network generalized least squares process (NN-GLS) with the dependency in the residuals modeled by a Matern covariance function, described below. Their paper is the first to demonstrate consistency for the NN-GLS estimator in this setting. Consider a spatial process model \[Y_{i}=f_{0}(X_{i})+\varepsilon(s_{i})\] where \(X_{i}\in\mathbb{R}^{k}\) is a vector of characteristics and the residuals correspond to observations at locations \(s_{1},..,s_{n}\) in \(\mathbb{R}^{2}\). Let \(f_{0}(\cdot)\) be a continuous function and define \(\varepsilon(s_{i})\) as a Gaussian Process with covariance function \(\Sigma(s_{i},s_{j})=C(s_{i},s_{j})+\tau^{2}\delta(s_{i}=s_{j})\) for some \(\tau^{2}>0\) and \(\delta\) is the indicator function. Here \(C(s_{i},s_{j})=C(||s_{i}-s_{j}||_{2})=C(||h||_{2})\), where \[C(||h||_{2})=\sigma^{2}\frac{2^{1-\nu}(\sqrt{2}\phi||h||_{2})^{\nu}}{\Gamma( \nu)}\mathcal{K}_{\nu}(\sqrt{2}\phi||h||_{2})\] is the Matern covariance function, with modified Bessel function of the second kind \(\mathcal{K}_{\nu}(\cdot)\). We consider the setting in Zhan and Datta (2023) (Proposition 1) where \(C\left(||h||_{2}\right)=o\left(||h||_{2}^{-(2+\kappa)}\right)\) for some \(\kappa>0\). The NN-GLS fits a system of multi-layered perceptrons via the \(L_{2}\) loss function and the authors prove consistency under some assumptions including, in particular, restrictions on the spectral radius of a sparse approximation of the covariance function. 
That is covered by an assumption of minimum distance \(h_{0}>0\) separation of locations \(s_{i},s_{j}\) above, where \(i\neq j\). Previous work characterizes the asymptotic properties, including asymptotic normality of the neural network estimators, in the case of independent and identically distributed shocks (Shen et al., 2019). Zhan and Datta (2023) extend this result by modeling dependency using the Matern covariance function. #### 4.5.2. Application of Theorem 1 Our results from Theorem 1 apply to this case. The asymptotic distribution of the parameters in the NN-GLS model depends, of course, possibly on the entire vector \(\varepsilon(s_{1:n})\). We need to construct affinity sets in the face of a fundamental tension. We first need to make the sets cover as much of the spatial domain as possible to ensure that the affinity sets account for most of the significant dependence. However, since dependence is a function of distance in space, if we make the affinity sets too big, then the dependence between other points in observation \(i\)'s affinity set (which are themselves likely to have high covariance because they are spatially proximate) will violate our assumptions. To achieve this balance, we create affinity sets using the same restrictions as presented in Zhan and Datta (2023). Again reflecting the duality of spatial distance and dependence, we construct affinity sets such that the maximal separation in the affinity set has implications for the maximum covariance between random variables. Specifically, take the affinity sets to be defined as \(\mathcal{A}_{i}^{n}:=\{j:||s_{i}-s_{j}||_{2}<K(\epsilon_{n})\}\) where \(|\text{cov}(Z_{i,n},Z_{j,n})|\leq\epsilon_{n}\) for \(||s_{i}-s_{j}||_{2}>K(\epsilon_{n})\). Using Zhan and Datta (2023)'s restriction on the amount of dependence associated with the distance in space, namely \(C\left(||h||_{2}\right)=o\left(||h||_{2}^{-(2+\kappa)}\right)\) for some \(\kappa>0\), we can solve for the appropriate \(K(\epsilon_{n})\). Specifically, if we know that the covariance in Zhan and Datta (2023) is asymptotically bounded by \(||h||_{2}^{-(2+\kappa)}\), we can set this equal to \(\epsilon_{n}\) and solve for the appropriate distance. After the resulting algebra we take \(K(\epsilon_{n})=1/(\epsilon_{n}^{2+\kappa})\). Until now, we have defined distances that give affinity sets that contain the bulk of the dependence. We also need to ensure that, under the setup in Zhan and Datta (2023), these sets are not too large that they violate our assumptions. To do this, we take the smallest \(\epsilon_{n}\) such that \(\epsilon_{n}>\frac{1}{h_{0}^{2}n^{\gamma}}\) for \(0<\gamma<1\) and \(h_{0}\) as the minimum separation distance, defined above. Taking \(\epsilon_{n}\) to satisfy this lower bound allows control of the size of the affinity sets. Using a packing number calculation, we see that while this allows \(K(\epsilon_{n})\) to grow with \(n\), it grows more slowly than \(n\). Specifically, we have, \[\left(\frac{K(\epsilon_{n})}{h_{0}}\right)^{2}=\left(\frac{(1/\epsilon_{n})^{-(2+ \kappa)}}{h_{0}}\right)^{2}<\left(\frac{1}{\epsilon_{n}h_{0}^{2}}\right)=o(n).\] This logic generalizes to dimensions \(d\geq 1\), taking \(\epsilon_{n}\) to be the smallest \(\epsilon_{n}>\frac{1}{h_{0}^{d}n^{\gamma}}\) for \(0<\gamma<1\). Using this construction and assuming bounded third and fourth moments, we check that our Assumptions 1-3 apply. 
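Before the formal checks that follow, the sketch below makes the construction concrete: it evaluates the Matern covariance written above at a given distance and collects the distance-based affinity sets \(\mathcal{A}_{i}^{n}=\{j:\|s_{i}-s_{j}\|_{2}<K(\epsilon_{n})\}\). The locations, the kernel parameters \((\sigma^{2},\phi,\nu)\), and the threshold are hypothetical placeholders, and the choice of \(K(\epsilon_{n})\) here simply inverts the covariance bound numerically rather than reproducing the exact rate algebra above.

```python
import numpy as np
from scipy.special import gamma, kv
from scipy.spatial import cKDTree

def matern_cov(h, sigma2=1.0, phi=1.0, nu=1.5):
    """Matern covariance C(||h||) as written above; C(0) = sigma2 by continuity."""
    h = np.asarray(h, dtype=float)
    out = np.full_like(h, sigma2)
    pos = h > 0
    z = np.sqrt(2.0) * phi * h[pos]
    out[pos] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * (z ** nu) * kv(nu, z)
    return out

def affinity_sets(locations, K):
    """A_i = {j != i : ||s_i - s_j||_2 < K}, collected with a KD-tree query."""
    tree = cKDTree(locations)
    neighborhoods = tree.query_ball_point(locations, r=K)
    return [np.setdiff1d(nb, [i]) for i, nb in enumerate(neighborhoods)]

# Hypothetical irregular design with minimum separation h0 (jittered coarse grid).
rng = np.random.default_rng(0)
n, h0 = 400, 0.5
grid = np.stack(np.meshgrid(np.arange(20), np.arange(20)), axis=-1).reshape(-1, 2).astype(float)
locations = grid + rng.uniform(-0.2, 0.2, size=grid.shape)   # pairwise separation stays >= h0

eps_n = 0.05
radii = np.linspace(h0, 30.0, 2000)
K_eps = radii[np.argmax(matern_cov(radii) <= eps_n)]          # smallest K with C(K) <= eps_n
A = affinity_sets(locations, K_eps)
sizes = np.array([len(a) for a in A])
print(f"K(eps) = {K_eps:.2f}, max |A_i| = {sizes.max()}, n = {n}")   # |A_i| should be o(n)
```

The packing-number logic above shows up directly here: with a minimum separation \(h_{0}\), the number of points within radius \(K(\epsilon_{n})\) of any location is bounded, so the affinity sets stay small relative to \(n\).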
Letting \(K:=K(\epsilon)\), we show Assumption 1 holds since \[\sum_{i;j,k\in\mathcal{A}_{i}^{n}}\mathrm{E}[|Z_{i,n}|Z_{j,n}Z_{k,n}] \leq \sum_{i;j,k\in\mathcal{A}_{i}^{n}}\mathrm{E}[|Z_{i,n}||Z_{j,n}||Z_{k,n}|]\] \[\leq \sum_{i;j,k\in\mathcal{A}_{i}^{n}}\left(\frac{1}{3}\mathrm{E}[|Z_{i,n}|^{3}]+\frac{1}{3}\mathrm{E}[|Z_{j,n}|^{3}]+\frac{1}{3}\mathrm{E}[|Z_{k,n}|^{3}]\right)\] \[= O\left(\sum_{i;j,k\in\mathcal{A}_{i}^{n}}\mathrm{E}[|Z_{i,n}|^{3}]\right)\] \[= O(K^{2}\sum_{i}\mathrm{E}[|Z_{i,n}|^{3}])\] \[= O(K^{2}n)\] \[= o(n^{3/2}K^{3/2})\] \[= o\left(\Omega_{n}^{3/2}\right).\] The second inequality holds by the arithmetic-geometric mean inequality. The remaining argument relies on rearranging the summations and using the growth rate of \(K\), i.e. \(K=o(n)\) in the fourth equality, as defined above based on the conditions required by Zhan and Datta (2023). The last equality follows since \(\Omega_{n}^{3/2}=(\sum_{i}\sum_{j\in\mathcal{A}_{i}^{n}}\mathrm{E}[Z_{i}Z_{j}])^{3/2}\). We check that Assumption 2 is satisfied using similar arguments and relying on an assumption of finite fourth moment: \[\sum_{i,j;k\in\mathcal{A}_{i}^{n},l\in\mathcal{A}_{j}^{n}}\mathrm{cov}(Z_{i,n}Z_{k,n},Z_{j,n}Z_{l,n}) = O(\sum_{\begin{subarray}{c}i,j:|s_{i}-s_{j}|\leq K,\\ k:|s_{k}-s_{i}|\leq K,\\ l:|s_{l}-s_{i}|\leq 2K\end{subarray}}\mathrm{cov}(Z_{i,n}Z_{k,n},Z_{j,n}Z_{l,n}))\] \[= O(n\cdot K^{3}\cdot\mathrm{E}(Z_{i,n}^{4}))=o(n^{2}K^{2})=o(\Omega_{n}^{2}).\] The first equality comes from the construction of affinity sets such that the covariance terms within the affinity sets dominate those outside the affinity sets (additional details in the verification of Assumption 3). The remaining equalities follow using rate arguments similar to those for Assumption 1 above. Assumption 3 follows from taking \(\epsilon_{n}\) to be the smallest \(\epsilon_{n}>\frac{1}{h_{0}^{d}n^{\gamma}}\) with \(\gamma=1-\beta\) for arbitrarily small \(\beta>0.\) Indeed, taking \(\epsilon_{n}\) as such, we have \(\frac{n}{K^{1+1/(2+\kappa)}}=o(1)\), and thus \[\sum_{i;j\notin\mathcal{A}_{i}^{n}}\mathrm{E}(Z_{i,n}Z_{j,n}\cdot\mathrm{sign}\ (\mathrm{E}[Z_{i,n}Z_{j,n}|Z_{j,n}])) = O(n(n-K)\epsilon_{n})\] \[= O(n(n-K)K^{-1/(2+\kappa)})\] \[= o(nK)\] \[= o\left(\Omega_{n}\right).\] To see the logic above, note that the covariance matrix \(\Sigma\) is positive-semidefinite with dominating non-negative diagonal (and near-diagonal) covariance terms. Furthermore, we recall that the absolute covariance between any \(Z_{i},Z_{j}\) pair decays with the distance between their locations \(||s_{i}-s_{j}||_{2}\) such that \(|\mathrm{cov}(Z_{i},Z_{j})|=o(||s_{i}-s_{j}||_{2}^{-(2+\kappa)}).\) Thus, thinking in terms of the covariance matrix, we see that the affinity sets collect the diagonal (and near-diagonal) covariance terms, so that \(\Omega_{n}\) contains only sums of dominating positive terms. While considering the sign of the conditional covariance with elements outside of the affinity sets above ensures that the summands in the LHS of Assumption 3 are non-negative, they are non-dominating, as covered by our condition of \(|\mathrm{cov}(Z_{i},Z_{j})|\leq\epsilon_{n}\) for \(j\notin\mathcal{A}_{i}^{n}\). Therefore, the summation in the left-hand-side of Assumption 3 is dominated by \(\Omega_{n}\), as shown above. ## 5. Discussion We have provided an organizing principle for modeling dependency and obtaining a central limit theorem: affinity sets.
It allows for non-zero correlation across all random vectors in the triangular array and places focus on correlations within and across sets. These conditions are intuitive and apply to several key dependency structures in the literature. We illustrate their use through some practical applications for applied research such as treatment effects, covariance estimation, diffusion models, and neural network GLS models with Matern dependence. We note that researchers may prefer to assume shocks that satisfy our general conditions rather than more complicated and restrictive conditions that lack microfoundations and are more difficult to interpret. In some cases, as in several of our applied examples, our result is needed as previous conditions do not apply. It is useful to reflect on settings that our theorem does not cover. For example, the martingale central limit theorem (e.g., Billingsley (1961); Ibragimov (1963); Hall and Heyde (2014) among others) is not covered by our theorem, without modification. It admits nontrivial unconditional correlation between all variables, but relies on other structural properties to deduce the result. In fact, proofs of the martingale central limit theorem did not appeal to Stein's method, until Rollin (2018). By combining Stein and Lindeberg methods, Rollin (2018) develops a shorter proof but did not find a direct proof using the Stein technique alone. Some biased processes that do not fall under the martingale umbrella can still generate a central limit theorem if they satisfy the covariance structure that we have provided. We leave the unification of the two approaches to future research.
2306.04810
Correlative Information Maximization: A Biologically Plausible Approach to Supervised Deep Neural Networks without Weight Symmetry
The backpropagation algorithm has experienced remarkable success in training large-scale artificial neural networks; however, its biological plausibility has been strongly criticized, and it remains an open question whether the brain employs supervised learning mechanisms akin to it. Here, we propose correlative information maximization between layer activations as an alternative normative approach to describe the signal propagation in biological neural networks in both forward and backward directions. This new framework addresses many concerns about the biological-plausibility of conventional artificial neural networks and the backpropagation algorithm. The coordinate descent-based optimization of the corresponding objective, combined with the mean square error loss function for fitting labeled supervision data, gives rise to a neural network structure that emulates a more biologically realistic network of multi-compartment pyramidal neurons with dendritic processing and lateral inhibitory neurons. Furthermore, our approach provides a natural resolution to the weight symmetry problem between forward and backward signal propagation paths, a significant critique against the plausibility of the conventional backpropagation algorithm. This is achieved by leveraging two alternative, yet equivalent forms of the correlative mutual information objective. These alternatives intrinsically lead to forward and backward prediction networks without weight symmetry issues, providing a compelling solution to this long-standing challenge.
Bariscan Bozkurt, Cengiz Pehlevan, Alper T Erdogan
2023-06-07T22:14:33Z
http://arxiv.org/abs/2306.04810v3
Correlative Information Maximization: A Biologically Plausible Approach to Supervised Deep Neural Networks without Weight Symmetry ###### Abstract The backpropagation algorithm has experienced remarkable success in training large-scale artificial neural networks, however, its biological-plausibility is disputed, and it remains an open question whether the brain employs supervised learning mechanisms akin to it. Here, we propose correlative information maximization between layer activations as an alternative normative approach to describe the signal propagation in biological neural networks in both forward and backward directions. This new framework addresses many concerns about the biological-plausibility of conventional artificial neural networks and the backpropagation algorithm. The coordinate descent-based optimization of the corresponding objective, combined with the mean square error loss function for fitting labeled supervision data, gives rise to a neural network structure that emulates a more biologically realistic network of multi-compartmental pyramidal neurons with dendritic processing and lateral inhibitory neurons. Furthermore, our approach provides a natural resolution to the weight symmetry problem between forward and backward signal propagation paths, a significant critique against the plausibility of the conventional backpropagation algorithm. This is achieved by leveraging two alternative, yet equivalent forms of the correlative mutual information objective. These alternatives intrinsically lead to forward and backward prediction networks without weight symmetry issues, providing a compelling solution to this long-standing challenge. ## 1 Introduction How biological neural networks learn in a supervised manner has long been an open problem. The backpropagation algorithm Rumelhart et al. (1986), with its remarkable success in training large-scale artificial neural networks and intuitive structure, has inspired proposals for how biologically plausible neural networks can perform the necessary efficient credit-assignment for supervised learning in deep neural architectures (Whittington and Bogacz, 2019). Nonetheless, certain aspects of the backpropagation algorithm, combined with the oversimplified nature of artificial neurons, have been viewed as impediments to proposals rooted in this inspiration Crick (1989). One of the primary critiques regarding the biological plausibility of the backpropagation algorithm is the existence of a parallel backward path for backpropagating error from the output towards the input, which uses the same synaptic weights as the forward path (Rumelhart et al., 1986; Whittington and Bogacz, 2019; Grossberg, 1987). Although such weight transport, or weight symmetry, is deemed highly unlikely based on experimental evidence Crick (1989), Grossberg (1987), some biologically plausible frameworks still exhibit this feature, which is justified by the symmetric structure of the Hebbian updates employed in these frameworks Whittington and Bogacz (2019); Xie and Seung (2003); Scellier and Bengio (2017). The concerns about the simplicity of artificial neurons have been addressed by models which incorporate multi-compartment neuron models into networked architectures and ascribe important functions to dendritic processing in credit assignment (Larkum, 2013; Urbanczik and Senn, 2014; Sacramento et al., 2018; Golkar et al., 2022). This new perspective has enabled the development of neural networks with improved biological plausibility. 
In this article, we propose the use of correlative information maximization (CorInfoMax) among consecutive layers of a neural network as a new supervised objective for biologically plausible models, which offers * a principled solution to the weight symmetry problem: our proposed information theoretic criterion aims to maximize the linear dependence between the signals in two neighboring layers, naturally leading to the use of linear or affine transformations in between them. A key property of this approach is that employing two alternative expressions for the correlative mutual information (CMI) results in potentially _asymmetric forward and backward prediction networks_, offering a natural solution to the weight transport problem. Consequently, predictive coding in both directions emerges as the inherent solution to the correlative information maximization principle, fostering signal transmission in both forward and top-down directions through asymmetrical connections. While the CorInfoMax principle enhances information flow in both directions, the introduction of set membership constraints on the layer activations, such as non-negativity, through activation nonlinearities and lateral inhibitions, encourages compression of information and sparse representations Bozkurt et al. (2023). * a normative approach for deriving networks with multi-compartment neurons: the gradient search-based optimization of the CorInfoMax objective naturally leads to network models that employ multi-compartment pyramidal neuron models accompanied by interneurons as illustrated in Figure 1. As derived and explained in detail in Section 2, the resulting networks incorporate lateral connections and auto-synapses to increase the entropy of a layer, promoting utilization of all dimensions within the representation space of that layer. Meanwhile, asymmetric feedforward and feedback connections act as forward and backward predictors, respectively, to reduce the conditional entropies between layers, targeting the elimination of redundancy. ### Related work #### 1.1.1 Multi-compartmental neuron model based biologically plausible approaches Experimentally grounded studies, such as (Larkum, 2013; Petreanu et al., 2009), have been influential for considering a role for dendritic-processing in multi-compartmental neurons for learning and credit assignment Richards and Lillicrap (2019). Subsequent research has explored biologically plausible models with supervised learning functionality, such as the two-compartment neuron model by Urbanczik and Senn (2014) and the three-compartment pyramidal neuron model by Sacramento et al. (2018). Both models integrate non-Hebbian learning and spike-time dependent plasticity, while the latter includes SST interneurons (Urban-Ciecko and Barth, 2016). Similar frameworks have been proposed by (Guerguiev et al., 2017) and (Golkar et al., 2022), with the latter introducing a normative framework based on multi-compartmental neuron structure, top-down feedback, lateral and feedforward connections, and Hebbian and non-Hebbian learning rules, emerging from the optimization of a prediction error objective with a whitening constraint on co-layer neurons. In a similar vein to (Golkar et al., 2022), we propose an alternative normative framework based on information maximization principle. 
In this framework, the three-compartment structure and associated forward, top-down and lateral synaptic connections stem from the maximization of CMI between adjacent layers, without the imposition of any whitening constraint. #### 1.1.2 Weight symmetry problem A central concern regarding the biological plausibility of the backpropagation algorithm pertains to the weight symmetry issue: synaptic weights in the feedback path for error backpropagation are transposes of those used in the forward inference path (Whittington and Bogacz, 2019; Crick, 1989; Grossberg, 1987). The requirement of tied weights in backpropagation is questionable for physically distinct feedforward and feedback paths in biological systems, leading many researchers to focus on addressing the weight symmetry issue. Various strategies have been devised to address the weight symmetry issue, encompassing the employment of random and fixed feedback weights (Lillicrap et al., 2016), and the introduction of antisymmetry through separate random initializations (Amit, 2019). Liao et al. (2015) showed that the sign of the feedback weights (rather than their magnitude) affects the learning performance, and proposed the sign-symmetry algorithm. Intriguingly, this symmetric weight structure is also observed in biologically plausible frameworks such as predictive coding (PC) (Rao and Ballard, 1999, Whittington and Bogacz, 2017, Song et al., 2020), equilibrium propagation (EP) (Scellier and Bengio, 2017b, Laborieux et al., 2021, Laborieux and Zenke, 2022), and similarity matching (Qin et al., 2021). This phenomenon can be rationalized by the transpose symmetry of the Hebbian update with respect to inputs and outputs. The EP framework in (Laborieux et al., 2021) unties forward and backward connections inspired by (Scellier et al., 2018, Kolen and Pollack, 1994), and only yields small performance degradation. A more recent approach by Golkar et al. (2022) addresses this challenge by integrating two alternative forward prediction error loss function terms associated with the same network layer and leveraging presumed whitening constraints to eliminate shared feedback coefficients. In existing predictive coding-based schemes such as (Rao and Ballard, 1999, Whittington and Bogacz, 2017, Song et al., 2020), the loss function contains only forward prediction error terms. The feedback connection with symmetric weights, which backpropagates forward prediction error, emerges due to the gradient-based optimization of the PC loss. In contrast, our framework's crucial contribution is the adoption of two alternative expressions for the correlative mutual information between consecutive network layers as the central normative approach. Utilizing these two alternatives naturally leads to both forward and backward prediction paths with asymmetric weights, promoting information flow in both feedforward and top-down directions. Unlike the work of (Golkar et al., 2022), our method circumvents the need for layer whitening constraints and additional forward prediction terms to achieve asymmetric weights. #### 1.1.3 Correlative information maximization Information maximization has been proposed as a governing or guiding principle in several machine learning and neuroscience frameworks for different tasks: (i) The propagation of information within a self-organized network as pioneered by Linsker (1988). 
(ii) Extracting hidden features or factors associated with observations by maximizing information between the input and its internal representation such as independent component analysis (ICA-InfoMax) approach by Bell and Sejnowski (1995). In the neuroscience domain, the motivation has been to provide normative explanations to the behaviour of cortical activities evidenced by experimental work, such as orientation and visual stimuli length selectivity of primary visual cortex neurons (Hubel and Wiesel, 1959; Bell and Sejnowski, 1997). The same idea has been recently extended in the machine learning field by the Deep Infomax approach where the goal is to transfer maximum information from the input of a deep network to its final layer, while satisfying prior distribution constraints on the output representations (Hjelm et al., 2019). (iii) Matching representations corresponding to two alternative augmentations or modalities of the same input in the context of self-supervised learning (Becker and Hinton, 1992). Correlative mutual information maximization has been recently proposed as an alternative for Shannon Mutual Information (SMI), due to its desirable properties (Erdogan, 2022): (i) maximization of CMI is equivalent to maximizing linear dependence, which may be more relevant than establishing arbitrary nonlinear dependence in certain applications (Ozsoy et al., 2022), (ii) it is based only on the second order statistics, making it relatively easier to optimize. Erdogan (2022) proposed the use of CorInfoMax for solving blind source separation (BSS) problem to retrieve potentially correlated components from their mixtures. Ozsoy et al. (2022) proposed maximizing the CMI between the representations of two different augmentations of the same input as a self-supervised learning approach. More recently, Bozkurt et al. (2023) introduced an unsupervised framework to generate biologically plausible neural networks for the BSS problem with infinitely many domain selections using the CMI objective. In this article, we suggest employing the CorInfoMax principle for biologically plausible supervised learning. The key difference compared to the unsupervised framework presented in (Bozkurt et al., 2023) is the utilization of two alternative forms of mutual information. This leads to a bidirectional information flow that enables error backpropagation without encountering the weight symmetry issue. ## 2 Deep correlative information maximization ### Network data model We assume a dataset with \(L\) input data points \(\mathbf{x}[t]\in\mathbb{R}^{m},t=1,\ldots,L\), and let \(\mathbf{y}_{T}[t]\in\mathbb{R}^{n}\) be the corresponding labels. We consider a neural network with \(P-1\) hidden layers whose activities are denoted by \(\mathbf{r}^{(k)}\in\mathbb{R}^{N_{k}},k=1,\ldots,P-1\). For notational simplicity, we also denote input and output of the network by \(\mathbf{r}^{(0)}\) and \(\mathbf{r}^{(P)}\), i.e., \(\mathbf{r}^{(0)}[t]=\mathbf{x}[t]\) and \(\mathbf{r}^{(P)}[t]=\hat{\mathbf{y}}[t]\). We consider polytopic constraints for the hidden and output layer activities, i.e., \(\mathbf{r}^{(k)}\in\mathcal{P}^{(k)}\), where \(\mathcal{P}^{(k)}\) is the presumed polytopic domain for the \(k\)-th layer (Bozkurt et al., 2023; Tatih and Erdogan, 2021). We note that the polytopic assumptions are plausible as the activations of neurons in practice are bounded. 
In particular, we will make the specific assumption that \(\mathcal{P}^{(k)}=\mathcal{B}_{\infty,+}=\{\mathbf{r}:\mathbf{0}\leq\mathbf{r }\leq\mathbf{1}\}\), i.e., (normalized) activations lie in a nonnegative unit-hypercube. Such nonnegativity constraints have been connected to disentangling behavior (Plumbley, 2003; Pehlevan et al., 2017; Whittington et al., 2023), however, we consider extensions in the form of alternative polytopic sets corresponding to different feature priors Bozkurt et al. (2023). More broadly, the corresponding label \(\mathbf{y}_{T}\) can be, one-hot encoded label vectors for a classification problem, or discrete or continuous valued vectors for a regression problem. ### Correlative information maximization based signal propagation #### 2.2.1 Stochastic CorInfoMax based supervised criterion We propose the total correlative mutual information among consecutive layers, augmented with the mean-square-error (MSE) training loss, as the stochastic objective: \[J(\mathbf{r}^{(1)},\ldots,\mathbf{r}^{(P)})=\sum_{k=0}^{P-1}I^{(\varepsilon_ {k})}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})-\frac{\beta}{2}E(\|\mathbf{y}_{T} -\mathbf{r}^{(P)}\|_{2}^{2}), \tag{1}\] where, as defined in [Erdogan, 2022, Ozsoy et al., 2022] and in Appendix A, \[I^{\overset{\rightarrow}{(\epsilon_{k})}}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})= \frac{1}{2}\log\det\left(\mathbf{R}_{\mathbf{r}^{(k+1)}}+\epsilon_{k}\mathbf{I} \right)-\frac{1}{2}\log\det\left(\mathbf{R}_{\overset{\leftarrow}{\mathbf{e}}_{*} ^{(k+1)}}+\epsilon_{k}\mathbf{I}\right), \tag{2}\] and \(\mathbf{R}_{\mathbf{r}^{(k+1)}}=E(\mathbf{r}^{(k+1)}\mathbf{r}^{(k+1)}{}^{T})\), \(\mathbf{R}_{\mathbf{r}^{(k)}\mathbf{r}^{(l)}}=E(\mathbf{r}^{(k)}\mathbf{r}^{(l )}{}^{T})\) are the autocorrelation and the cross-correlation matrices corresponding to the layer activations, respectively. Furthermore, \(\mathbf{R}_{\overset{\rightarrow}{\mathbf{e}}_{*}^{(k+1)}}=\mathbf{R}_{\mathbf{ r}^{(k+1)}}-\mathbf{R}_{\mathbf{r}^{(k+1)}\mathbf{r}^{(k)}}{}^{T}( \mathbf{R}_{\mathbf{r}^{(k)}}+\epsilon_{k}\mathbf{I})^{-1}\mathbf{R}_{\mathbf{r}^ {(k)}\mathbf{r}^{(k+1)}}\) corresponds to the error autocorrelation matrix for the best linear regularized minimum MSE predictor of \(\mathbf{r}^{(k+1)}\) from \(\mathbf{r}^{(k)}\). We refer to this problem as the _regularized forward prediction problem_ represented by the optimization \[\underset{\mathbf{W}_{ff}^{(k)}}{\text{minimize}}\ E(\|\overset{\rightarrow}{ \mathbf{e}}^{(k+1)}\|_{2}^{2})+\epsilon_{k}\|\mathbf{W}_{ff}^{(k)}\|_{F}^{2}\ \ \ \ \text{s.t.}\ \ \ \overset{\rightarrow}{\mathbf{e}}^{(k+1)}=\mathbf{r}^{(k+1)}-\mathbf{W}_{ff}^{(k )}\mathbf{r}^{(k)}, \tag{3}\] and \(\mathbf{e}_{*}^{(k+1)}\) is the forward prediction error corresponding to the optimal forward predictor \(\mathbf{W}_{ff,*}^{(k)}\). 
An equal and alternative expression for the CMI can be written as (Appendix A) \[I^{\overset{\leftarrow}{(\epsilon_{k})}}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)} )=\frac{1}{2}\log\det(\mathbf{R}_{\mathbf{r}^{(k)}}+\epsilon_{k}\mathbf{I})-\frac {1}{2}\log\det\left(\mathbf{R}_{\overset{\leftarrow}{\mathbf{e}}_{*}^{(k)}}+ \epsilon_{k}\mathbf{I}\right), \tag{4}\] where \(\mathbf{R}_{\overset{\leftarrow}{\mathbf{e}}_{*}^{(k)}}=\mathbf{R}_{\mathbf{r}^ {(k)}}-\mathbf{R}_{\mathbf{r}^{(k+1)}\mathbf{r}^{(k)}}{}^{T}(\mathbf{R}_{ \mathbf{r}^{(k+1)}}+\epsilon_{k}\mathbf{I})^{-1}\mathbf{R}_{\mathbf{r}^{(k+1)} \mathbf{r}^{(k)}}\) corresponds to the error auto-correlation matrix for the best linear regularized minimum MSE predictor of \(\mathbf{r}^{(k)}\) from \(\mathbf{r}^{(k+1)}\). The corresponding _regularized backward prediction problem_ is defined by the optimization \[\underset{\mathbf{W}_{fb}^{(k)}}{\text{minimize}}\ E(\|\overset{\leftarrow}{ \mathbf{e}}^{(k)}\|_{2}^{2})+\epsilon_{k}\|\mathbf{W}_{fb}^{(k)}\|_{F}^{2}\ \ \ \ \text{s.t.}\ \ \ \overset{\leftarrow}{\mathbf{e}}^{(k)}=\mathbf{r}^{(k)}-\mathbf{W}_{fb}^{(k)} \mathbf{r}^{(k+1)}. \tag{5}\] We observe that the two alternative yet equivalent representations of the correlative mutual information between layers \(\mathbf{r}^{(k)}\) and \(\mathbf{r}^{(k+1)}\) in (2) and (4) are intrinsically linked to the forward and backward prediction problems between these layers, which are represented by the optimizations in (3) and (5), respectively. As we will demonstrate later, the existence of these two alternative forms for the CMI plays a crucial role in deriving a neural network architecture that overcomes the weight symmetry issue. #### 2.2.2 Sample-based supervised CorInfoMax criterion Our aim is to construct a biologically plausible neural network that optimizes the total CMI, equation 1, in an adaptive manner. Here, we obtain a sample-based version of (1) as a step towards that goal. We first define the weighted sample auto and cross-correlation matrices as follows: \[\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]=\frac{1-\lambda_{\mathbf{r}}}{1- \lambda_{\mathbf{r}}^{t}}\sum_{i=1}^{t}\lambda_{\mathbf{r}}^{t-i}\mathbf{r}^{ (k)}[i]\mathbf{r}^{(k)}[i]^{T},\hat{\mathbf{R}}_{\mathbf{r}^{(k)}\mathbf{r}^ {(k+1)}}[t]=\frac{1-\lambda_{\mathbf{r}}}{1-\lambda_{\mathbf{r}}^{t}}\sum_{i=1 }^{t}\lambda_{\mathbf{r}}^{t-i}\mathbf{r}^{(k)}[i]\mathbf{r}^{(k+1)}[i]^{T}, \tag{6}\] for \(k=0,\ldots,P\), respectively, where \(0\ll\lambda_{\mathbf{r}}<1\) is the forgetting factor. 
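As a numerical sanity check on the equivalence of the two CMI expressions in (2) and (4), the following sketch evaluates both forms from sample correlation matrices of two synthetic layers; the layer dimensions, the synthetic activations, and \(\epsilon\) are arbitrary placeholders, and plain batch averages are used in place of the exponentially weighted estimates in (6).

```python
import numpy as np

def corinfomax_both_forms(R_k, R_k1, R_cross, eps):
    """Forward and backward forms of the correlative mutual information.

    R_k, R_k1 : autocorrelation matrices of r^(k) and r^(k+1)
    R_cross   : cross-correlation E[r^(k) r^(k+1)^T]"""
    def logdet(M):
        sign, val = np.linalg.slogdet(M)
        assert sign > 0
        return val

    I_k, I_k1 = np.eye(R_k.shape[0]), np.eye(R_k1.shape[0])
    # error autocorrelation of the regularized forward predictor of r^(k+1) from r^(k)
    E_fwd = R_k1 - R_cross.T @ np.linalg.solve(R_k + eps * I_k, R_cross)
    # error autocorrelation of the regularized backward predictor of r^(k) from r^(k+1)
    E_bwd = R_k - R_cross @ np.linalg.solve(R_k1 + eps * I_k1, R_cross.T)
    I_forward = 0.5 * (logdet(R_k1 + eps * I_k1) - logdet(E_fwd + eps * I_k1))
    I_backward = 0.5 * (logdet(R_k + eps * I_k) - logdet(E_bwd + eps * I_k))
    return I_forward, I_backward

# Synthetic correlated layer activations (placeholders), T samples in columns.
rng = np.random.default_rng(0)
T, n_k, n_k1, eps = 5000, 8, 6, 0.1
r_k = rng.uniform(0.0, 1.0, size=(n_k, T))
W = rng.standard_normal((n_k1, n_k)) / np.sqrt(n_k)
r_k1 = np.clip(W @ r_k + 0.3 * rng.standard_normal((n_k1, T)), 0.0, 1.0)

R_k, R_k1 = r_k @ r_k.T / T, r_k1 @ r_k1.T / T
R_cross = r_k @ r_k1.T / T
fwd, bwd = corinfomax_both_forms(R_k, R_k1, R_cross, eps)
print(f"forward form = {fwd:.6f}, backward form = {bwd:.6f}")   # agree up to rounding error
```

Both values coincide because each equals one half of \(\log\det(\mathbf{R}_{\mathbf{r}^{(k)}}+\epsilon\mathbf{I})+\log\det(\mathbf{R}_{\mathbf{r}^{(k+1)}}+\epsilon\mathbf{I})\) minus the log-determinant of the joint (regularized) correlation matrix, by the Schur-complement determinant identity.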
Next, we define two equivalent forms of the sample-based CMI, \(\hat{I}^{(\epsilon)}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})[t]\): \[\hat{I}^{\overset{\rightarrow}{(\epsilon_{k})}}(\mathbf{r}^{(k)},\mathbf{r} ^{(k+1)})[t] =\frac{1}{2}\log\det(\hat{\mathbf{R}}_{\mathbf{r}^{(k+1)}}[t]+ \epsilon_{k}\mathbf{I})-\frac{1}{2}\log\det(\hat{\mathbf{R}}_{\overset{\leftarrow}{ \mathbf{e}}^{(k+1)}}[t]+\epsilon_{k}\mathbf{I}), \tag{7}\] \[\hat{I}^{\overset{\leftarrow}{(\epsilon_{k})}}(\mathbf{r}^{(k)}, \mathbf{r}^{(k+1)})[t] =\frac{1}{2}\log\det(\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]+ \epsilon_{k}\mathbf{I})-\frac{1}{2}\log\det(\hat{\mathbf{R}}_{\overset{\leftarrow}{ \mathbf{e}}^{(k)}}[t]+\epsilon_{k}\mathbf{I}), \tag{8}\] where \(\hat{\mathbf{R}}_{\overset{\leftarrow}{\mathbf{e}}^{(k+1)}}[t]=\hat{\mathbf{R}}_{ \mathbf{r}^{(k+1)}}[t]-\hat{\mathbf{R}}_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}[ t]^{T}(\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]+\epsilon_{k}\mathbf{I})^{-1}\hat{ \mathbf{R}}_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}[t]\) is the autocorrelation matrix for the forward prediction error at level-\((k+1)\), \(\overset{\rightarrow}{\mathbf{e}}^{(k+1)}[t]\), corresponding to the best linear weighted regularized least squares predictor of \(\mathbf{r}^{(k+1)}[t]\) from the lower level activations \(\mathbf{r}^{(k)}[t]\). Similarly, \(\hat{\mathbf{R}}_{\overset{\leftarrow}{\mathbf{e}}^{(k)}}[t]=\hat{\mathbf{R}}_{ \mathbf{r}^{(k)}}[t]-\hat{\mathbf{R}}_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}[t]( \hat{\mathbf{R}}_{\mathbf{r}^{(k+1)}}[t]+\epsilon_{k}\mathbf{I})^{-1}\hat{\mathbf{R }}_{\mathbf{r}^{(k)}\mathbf{r}^{(k+1)}}[t]^{T}\) is the autocorrelation matrix for the backward prediction error at level-\((k)\), \(\overset{\leftarrow}{\mathbf{e}}^{(k)}[t]\), corresponding to the best linear weighted regularized least squares predictor of \(\mathbf{r}^{(k)}[t]\) from the higher level activations \(\mathbf{r}^{(k+1)}[t]\). The sample-based CorInfoMax optimization can be written as: \[\underset{\mathbf{r}^{(k)}[t],k=0,\dots,P}{\operatorname{maximize}} \sum_{k=0}^{P-1}\hat{I}^{(\epsilon_{k})}(\mathbf{r}^{(k)},\mathbf{r}^ {(k+1)})[t]-\frac{\beta}{2}\|\mathbf{y}_{T}[t]-\mathbf{r}^{(P)}[t]\|_{2}^{2} \tag{9a}\] \[\operatorname{subject\ to} \mathbf{r}^{(k)}[t]\in\mathcal{P}^{(k)},k=1,\dots,P,\] (9b) \[\mathbf{r}^{(0)}[t]=\mathbf{x}[t], \tag{9c}\] The first-order Taylor series approximation of the \(\log\det\) terms on the right of (7) and (8) are: \[\log\det\left(\hat{\mathbf{R}}_{\overline{\mathbf{e}}^{(k+1)}}[t]+ \epsilon_{k}\mathbf{I}\right)\approx\frac{1}{\epsilon_{k}}\operatorname{Tr} \left(\hat{\mathbf{R}}_{\overline{\mathbf{e}}^{(k+1)}}[t]\right)+N_{k+1}\log( \epsilon_{k})\] \[=\frac{1}{\epsilon_{k}}\sum_{i=1}^{t}\lambda_{\mathbf{r}}^{t-i} \|\mathbf{r}^{(k+1)}[i]-\mathbf{W}_{ff,*}^{(k)}[t]\mathbf{r}^{(k)}[i]\|_{2}^{ 2}+\epsilon_{k}\|\mathbf{W}_{ff,*}^{(k)}[t]\|_{F}^{2}+N_{k+1}\log(\epsilon_{k}), \tag{10}\] \[\quad\log\det\left(\hat{\mathbf{R}}_{\overline{\mathbf{e}}^{(k)}}[t] +\epsilon_{k}\mathbf{I}\right)\approx\frac{1}{\epsilon_{k}}\operatorname{Tr} \left(\hat{\mathbf{R}}_{\overline{\mathbf{e}}^{(k)}}[t]\right)+N_{k}\log(\epsilon_ {k})\] \[=\frac{1}{\epsilon_{k}}\sum_{i=1}^{t}\lambda_{\mathbf{r}}^{t-i} \|\mathbf{r}^{(k)}[i]-\mathbf{W}_{fb,*}^{(k)}[t]\mathbf{r}^{(k+1)}[i]\|_{2}^{ 2}+\epsilon_{k}\|\mathbf{W}_{fb,*}^{(k)}[t]\|_{F}^{2}+N_{k}\log(\epsilon_{k}). 
\tag{11}\] Note that in (10), \(\mathbf{W}_{ff,*}^{(k)}[t]\) denotes the optimal linear regularized weighted least squares forward predictor coefficients in predicting \(\mathbf{r}^{(k+1)}[i]\) from \(\mathbf{r}^{(k)}[i]\) for \(i=1,\dots,t\). Likewise, \(\mathbf{W}_{fb,*}^{(k)}[t]\) in (11) represents the optimal linear regularized weighted least squares backward predictor coefficients in predicting \(\mathbf{r}^{(k)}[i]\) from \(\mathbf{r}^{(k+1)}[i]\) for \(i=1,\dots,t\). Consequently, the optimal choices of forward and backward predictor coefficients are coupled with the optimal choices of layer activations. In the online optimization process, we initially relax this requirement and start with random predictor coefficient selections. During the learning process, we apply a coordinate ascent-based procedure on activation signals and predictor coefficients. Specifically, at time step-\(t\), we first optimize with respect to the activations \(\{\mathbf{r}^{(k)}[t],k=1,\dots,P\}\), where we assume predictor coefficients to be fixed. Next, we update the forward and backward predictor coefficients \(\mathbf{W}_{ff}^{(k)}\) and \(\mathbf{W}_{fb}^{(k)}\), for \(k=1,\dots,P\), to reduce the corresponding forward and backward prediction errors, respectively. As the algorithm iterations progress, the predictor coefficients converge to the vicinity of their optimal values. For the first phase of the online optimization, we employ a projected gradient ascent-based approach for activations: for \(k=1,\dots,P-1\), the layer activation vector \(\mathbf{r}^{(k)}[t]\) is included in the objective function terms \(\hat{I}^{(\epsilon)}(\mathbf{r}^{(k-1)},\mathbf{r}^{(k)})[t]\) and \(\hat{I}^{(\epsilon)}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})[t]\). Therefore, to calculate the gradient with respect to \(\mathbf{r}^{(k)}[t]\), we can use the modified form of expressions in (7) and (8), where we use the approximations in (10)-(11), and the optimal predictors are replaced with their current estimates: \[\nabla_{\mathbf{r}^{(k)}}\hat{J}_{k}(\mathbf{r}^{(k)})[t]=\nabla_ {\mathbf{r}^{(k)}}\hat{I}^{\overset{\rightarrow}{\leftarrow}}_{(\epsilon_{k-1 })}(\mathbf{r}^{(k-1)},\mathbf{r}^{(k)})[t]+\nabla_{\mathbf{r}^{(k)}}\hat{I}^{ \overset{\leftarrow}{\leftarrow}}_{(\epsilon_{k})}(\mathbf{r}^{(k)},\mathbf{ r}^{(k+1)})[t]\] \[=\tfrac{1}{2}\nabla_{\mathbf{r}^{(k)}}(\log\det(\hat{\mathbf{R}} _{\mathbf{r}^{(k)}}[t]+\epsilon_{k-1}\mathbf{I})+\log\det(\hat{\mathbf{R}}_{ \mathbf{r}^{(k)}}[t]+\epsilon_{k}\mathbf{I}))-\tfrac{1}{\epsilon_{k-1}}\overline{ \mathbf{e}}^{(k)}[t]-\tfrac{1}{\epsilon_{k}}\overline{\mathbf{e}}^{(k)}[t], \tag{12}\] where \[\overline{\mathbf{e}}^{(k)}[t]=\mathbf{r}^{(k)}[t]-\mathbf{W}_{ff}^{(k-1)}[t] \mathbf{r}^{(k-1)}[t],\quad\overset{\leftarrow}{\mathbf{e}}^{(k)}[t]= \mathbf{r}^{(k)}[t]-\mathbf{W}_{fb}^{(k)}[t]\mathbf{r}^{(k+1)}[t] \tag{13}\] are forward and backward prediction errors at level-\(k\), respectively. Following the procedure in Bozkurt et al. 
(2023), for the gradient term in (12), we can write: \[\frac{1}{2}\nabla_{\mathbf{r}^{(k)}}(\log\det(\hat{\mathbf{R}}_{ \mathbf{r}^{(k)}}[t]+\epsilon_{k-1}\mathbf{I})+\log\det(\hat{\mathbf{R}}_{\mathbf{ r}^{(k)}}[t]+\epsilon_{k}\mathbf{I}))=2\gamma\mathbf{B}_{\mathbf{r}^{(k)}}[t]\mathbf{r}^{(k)}[t], \tag{14}\] where \(\mathbf{B}_{\mathbf{r}^{(k)}}[t]=(\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]+\epsilon_ {k-1}\mathbf{I})^{-1}\approx(\hat{\mathbf{R}}_{\mathbf{r}^{(k)}}[t]+\epsilon_{k} \mathbf{I})^{-1}\) and \(\gamma=\frac{1-\lambda_{\mathbf{r}}}{\lambda_{\mathbf{r}}}\). The gradient of the objective for the final layer can be expressed as: \[\nabla_{\mathbf{r}^{(P)}}(\hat{I}^{\overset{\rightarrow}{\leftarrow}}( \mathbf{r}^{(P-1)},\mathbf{r}^{(P)})[t]-\frac{\beta}{2}\|\mathbf{r}^{(P)}[t]- \mathbf{y}_{T}[t]\|_{2}^{2})\] \[=\gamma\mathbf{B}_{\mathbf{r}^{(P)}}[t]\mathbf{r}^{(P)}[t]-\frac{1}{ \epsilon_{P-1}}\overline{\mathbf{e}}^{(P)}[t]-\beta(\mathbf{r}^{(P)}[t]-\mathbf{y}_ {T}[t]).\] ### Neural network formulation based on information maximization In this section, we develop a biologically plausible neural network grounded on the correlative information maximization-based network propagation model outlined in Section 2.2. To achieve this, we employ projected gradient ascent optimization for determining layer activations \(\mathbf{r}^{(1)}[t],\mathbf{r}^{(2)}[t],\ldots,\mathbf{r}^{(P)}[t]\), which shape the network structure and dynamics, as well as updating the corresponding synapses that govern the learning dynamics. #### 2.3.1 Network structure and neural dynamics In this section, we show that the projected gradient ascent solution to the optimization in (9) defines a multilayer recurrent neural network. To this end, we introduce the intermediate variable \(\mathbf{u}^{(k)}\) as the updated layer-\(k\) activations prior to the projection onto the domain set \(\mathcal{P}^{(k)}\). Utilizing the gradient expressions in (12)-(14), we can express the network dynamics for layers \(k=1,\ldots,P-1\) as follows: \[\tau_{\mathbf{u}}\frac{d\mathbf{u}^{(k)}[t;s]}{ds} =-g_{lk}\mathbf{u}^{(k)}[t;s]+\frac{1}{\epsilon_{k}}\boldsymbol{ M}^{(k)}[t]\mathbf{r}^{(k)}[t;s]-\frac{1}{\epsilon_{k-1}}\overset{\rightarrow}{ \mathbf{e}}_{u}^{(k)}[t;s]-\frac{1}{\epsilon_{k}}\overset{\leftarrow}{ \mathbf{e}}_{u}^{(k)}[t;s], \tag{15}\] \[\overset{\rightarrow}{\mathbf{e}}_{u}^{(k)}[t;s] =\mathbf{u}^{(k)}[t;s]-\boldsymbol{W}_{ff}^{(k-1)}[t]\mathbf{r}^ {(k-1)}[t;s],\quad\overset{\leftarrow}{\mathbf{e}}_{u}^{(k)}[t;s]=\mathbf{u} ^{(k)}[t;s]-\boldsymbol{W}_{fb}^{(k)}[t]\mathbf{r}^{(k+1)}[t;s],\] (16) \[\mathbf{r}^{(k)}[t;s] =\sigma_{+}(\mathbf{u}^{(k)}[t;s]), \tag{17}\] where \(\tau_{\mathbf{u}}\) is the update time constant, \(\boldsymbol{M}^{(k)}[t]=2\epsilon_{k}(\gamma\boldsymbol{B}^{(k)}[t]+g_{lk} \boldsymbol{I})\), and \(\sigma_{+}\) represents the elementwise clipped-ReLU function corresponding to the projection onto the nonnegative unit-hypercube \(\mathcal{B}_{\infty,+}\), defined as \(\sigma_{+}(u)=\min(1,\max(u,0))\). 
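As an illustration of how the dynamics in (15)-(17) can be iterated numerically for a single hidden layer, the following sketch applies a simple Euler discretization with the neighboring-layer activations held fixed, as within one inner phase; the layer sizes, synaptic matrices, conductances, and step size are arbitrary placeholders rather than trained values.

```python
import numpy as np

def clipped_relu(u):
    """sigma_+ in (17): elementwise projection onto the nonnegative unit hypercube."""
    return np.clip(u, 0.0, 1.0)

def run_layer_dynamics(u, r_prev, r_next, W_ff, W_fb, M,
                       g_leak, eps_prev, eps_k, tau_u=1.0, ds=0.05, n_steps=300):
    """Euler iteration of (15)-(17) for one hidden layer, with r_prev and r_next fixed."""
    r = clipped_relu(u)
    for _ in range(n_steps):
        e_fwd = u - W_ff @ r_prev            # forward prediction error, eq. (16)
        e_bwd = u - W_fb @ r_next            # backward prediction error, eq. (16)
        du = (-g_leak * u + (1.0 / eps_k) * (M @ r)
              - (1.0 / eps_prev) * e_fwd - (1.0 / eps_k) * e_bwd)
        u = u + (ds / tau_u) * du            # eq. (15)
        r = clipped_relu(u)                  # eq. (17)
    return u, r

# Placeholder sizes, synapses, and constants for illustration only (not trained values).
rng = np.random.default_rng(0)
n_prev, n_k, n_next = 10, 8, 4
W_ff = 0.1 * rng.standard_normal((n_k, n_prev))
W_fb = 0.1 * rng.standard_normal((n_k, n_next))
eps_prev = eps_k = 0.5
gamma, g_leak = 0.05, 0.1
B = np.eye(n_k)                              # stand-in for (R_r + eps I)^{-1}
M = 2.0 * eps_k * (gamma * B + g_leak * np.eye(n_k))
u0 = rng.uniform(0.0, 1.0, size=n_k)
r_prev = rng.uniform(0.0, 1.0, size=n_prev)
r_next = rng.uniform(0.0, 1.0, size=n_next)
u_eq, r_eq = run_layer_dynamics(u0, r_prev, r_next, W_ff, W_fb, M, g_leak, eps_prev, eps_k)
print("equilibrium firing rates:", np.round(r_eq, 3))
```

In a full network these inner iterations would be run jointly over all layers until equilibrium, first in the free phase and then in the nudge phase, before the synaptic updates described below are applied.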
To reinterpret the dynamics in (15) to (17) as a multi-compartmental neural network, for \(k=1,\ldots,P-1\), we define the signals: \[\mathbf{v}_{A}^{(k)}[t;s]=\boldsymbol{M}^{(k)}[t]\boldsymbol{r}^{(k)}[t;s]+ \boldsymbol{W}_{fb}^{(k)}[t]\mathbf{r}^{(k+1)}[t;s],\quad\mathbf{v}_{B}^{(k)} [t;s]=\boldsymbol{W}_{ff}^{(k-1)}[t]\mathbf{r}^{(k-1)}[t;s], \tag{18}\] which allow us to rewrite the network activation dynamics (15) to (17) as: \[\tau_{\mathbf{u}}\frac{d\mathbf{u}^{(k)}[t;s]}{ds}=-g_{lk} \mathbf{u}^{(k)}[t;s]+g_{A,k}(\mathbf{v}_{A}^{(k)}[t;s]-\mathbf{u}^{(k)}[t;s] )+g_{B,k}(\mathbf{v}_{B}^{(k)}[t;s]-\mathbf{u}^{(k)}[t;s]), \tag{19}\] \[\mathbf{r}^{(k)}[t;s]=\sigma_{+}(\mathbf{u}^{(k)}[t;s]), \tag{20}\] where \(g_{A,k}=\frac{1}{\epsilon_{k-1}}\) and \(g_{B,k}=\frac{1}{\epsilon_{k}}\). Similarly, for the output layer, we employ the same expressions as (19) and (20) with \(k=P\), except that in this case we have: \[\mathbf{v}_{A}^{(P)}[t;s]=\boldsymbol{M}^{(P)}[t]\mathbf{r}^{(k)}[t;s]-( \mathbf{r}^{(P)}[t;s]-\boldsymbol{y}_{T}[t]),\quad\mathbf{v}_{B}^{(P)}[t;s]= \boldsymbol{W}_{ff}^{(P-1)}[t]\mathbf{r}^{(P-1)}[t;s], \tag{21}\] where \(g_{B,P}=\frac{1}{\epsilon_{P-1}}\), \(g_{A,P}=\beta\) and \(\boldsymbol{M}^{(P)}[t]=\beta^{-1}(\gamma\boldsymbol{B}^{(P)}[t]+g_{lk} \boldsymbol{I})\). Remarkably, the equations (18) to (21) reveal a biologically plausible neural network that incorporates three-compartmental pyramid neuron models, as presented in (Sacramento et al., 2018; Golkar et al., 2022). This intricate architecture, of which two-layer segment is demonstrated in Figure 1, naturally emerges from the proposed correlative information maximization framework. In this network structure: * \(\mathbf{u}^{(k)}\) embodies the membrane potentials for neuronal somatic compartments of the neurons at layer-\(k\), where \(\tau_{\mathbf{u}}\) is the membrane leak time constant of soma. * \(\mathbf{v}_{B}^{(k)}\) corresponds to membrane potentials for basal dendrite compartments, receiving feedforward input originating from the previous layer. * \(\mathbf{v}_{A}^{(k)}\) denotes the membrane potentials for distal apical dendrite compartments, which gather top-down input from the subsequent layer and lateral inputs represented by \(\boldsymbol{M}^{(k)}[t]\mathbf{r}^{(k)}\) in (18) and (21). Decomposing \(\boldsymbol{M}^{(k)}\) into \(\boldsymbol{D}^{(k)}-\boldsymbol{O}^{(k)}\), we find that \(\boldsymbol{D}^{(k)}\) mirrors autapses (Lubke et al., 1996), and the off-diagonal component \(\boldsymbol{O}^{(k)}\) corresponds to lateral inhibition synapses. We use \(\mathbf{i}^{(k)}=-\boldsymbol{O}^{(k)}\mathbf{r}^{(k)}\) to represent the activations of SST interneurons (Urban-Ciecko and Barth, 2016) that generate lateral inhibitions to the apical dendrites. * Forward (backward) prediction errors manifest in the membrane voltage differences between soma and basal (distal) compartments of the pyramidal neurons. * Forward (backward) prediction coefficients \(\mathbf{W}_{ff}^{(k)}\) (\(\mathbf{W}_{fb}^{(k)}\)) are associated with feedforward (top-down) synapses connecting layers \((k)\) and \((k+1)\). * The inverse of the regularization coefficient \(\epsilon_{k}\) is related to the conductance between soma and dendritic compartments. In contrast, at the output layer, the augmentation constant \(\beta\) corresponds to the conductance between soma and distal compartments. 
This relationship can be motivated by modifying the objective in (9a) as \[\sum_{k=0}^{P-1}\hat{I}^{(\epsilon_{k})}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})[ t]+\frac{1}{2}\hat{I}^{\overset{\leftarrow}{(\beta^{-1})}}(\mathbf{r}^{(P)}, \mathbf{y}_{T})[t],\] (22) where, through the first-order approximation, the \(\mathbf{r}^{(P)}[t]\) dependent portion of \(\hat{I}^{(\beta^{-1})}(\mathbf{r}^{(P)},\mathbf{y}_{T})[t]\) can be expressed as \(-\beta\|\mathbf{r}^{(P)}[t]-\mathbf{W}_{fb}^{(P)}\mathbf{y}_{T}[t]\|_{2}^{2}\). For accuracy, we enforce \(\mathbf{W}_{fb}^{(P)}=\mathbf{I}\). ### Learning dynamics Network parameters consists of feedforward \(\mathbf{W}_{ff}^{(k)}\), feedback \(\mathbf{W}_{fb}^{(k)}\) and lateral \(\mathbf{B}^{(k)}\) coefficients.The learning dynamics of these coefficients are elaborated below: * _Feedforward Coefficients_ are connected to the forward prediction problem defined by the optimization in (3). We can define the corresponding online optimization objective function as \[C_{ff}(\mathbf{W}_{ff}^{(k)})=\epsilon_{k}\|\mathbf{W}_{ff}^{(k)}\|_{F}^{2}+\| \overset{\leftarrow}{\mathbf{e}}^{(k+1)}[t]\|_{2}^{2}\text{ for which the the partial derivative is given by }\] \[\frac{\partial C_{ff}(\mathbf{W}_{ff}^{(k)}[t])}{\partial\mathbf{W}_{ff}^{(k)}}=2 \epsilon_{k}\mathbf{W}_{ff}^{(k)}[t]-2\overset{\rightarrow}{\mathbf{e}}^{(k+1)}[ t]\mathbf{r}^{(k)}[t]^{T}.\] (23) In Appendix C, we provide a discussion on rewriting (23) in terms of the membrane voltage difference between the distal apical and soma compartments of the neuron, based on the equilibrium condition for the neuronal dynamics: \[-\overset{\rightarrow}{\mathbf{e}}^{(k+1)}[t]\mathbf{r}^{(k)}[t]^{T}=g_{B,k} ^{-1}(g_{A,k}\mathbf{v}_{A}^{(k)}[t]-(g_{lk}+g_{A_{k}})\mathbf{u}_{*}^{(k)}[t] +\mathbf{h}_{*}[t])\mathbf{r}^{(k)}[t]^{T},\] (24) where \(\mathbf{h}_{*}[t]\) is nonzero only for neurons that are silent or firing at the maximum rate. * Similarly, _Feedback Coefficients_ are connected to the backward prediction problem defined by the optimization in (5), and the corresponding online optimization objective function as \(C_{fb}(\mathbf{W}_{fb}^{(k)})=\epsilon_{k}\|\mathbf{W}_{ff}^{(k)}\|_{F}^{2}+\|\overset{ \leftarrow}{\mathbf{e}}^{(k)}[t]\|_{2}^{2}\) for which the partial derivative is given by \[\frac{\partial C_{fb}(\mathbf{W}_{fb}^{(k)}[t])}{\partial\mathbf{W}_{fb}^{(k)}}=2 \epsilon_{k}\mathbf{W}_{fb}^{(k)}[t]-2\overset{\leftarrow}{\mathbf{e}}^{(k)}[t] \mathbf{r}^{(k+1)}[t]^{T}.\] (25) To compute the updates of both feedforward and feedback coefficients, we use the EP approach [Scellier and Bengio, 2017b], where the update terms are obtained based on the contrastive expressions of partial derivatives in (23) and (25) for the nudge phase, i.e., \(\beta=\beta^{\prime}>0\), and the free phase, i.e., \(\beta=0\), : \[\delta\mathbf{W}_{ff}^{(k)}[t]\propto\frac{1}{\beta^{\prime}}\left(( \overset{\leftarrow}{\mathbf{e}}^{(k+1)}[t]\mathbf{r}^{(k)}[t]^{T})\bigg{|}_ {\beta=\beta^{\prime}}-(\overset{\leftarrow}{\mathbf{e}}^{(k+1)}[t]\mathbf{ r}^{(k)}[t]^{T})\bigg{|}_{\beta=0}\right),\] (26) \[\delta\mathbf{W}_{fb}^{(k)}[t]\propto\frac{1}{\beta^{\prime}}\left(( \overset{\leftarrow}{\mathbf{e}}^{(k)}[t]\mathbf{r}^{(k+1)}[t]^{T})\bigg{|}_ {\beta=\beta^{\prime}}-(\overset{\leftarrow}{\mathbf{e}}^{(k+1)}[t]\mathbf{ r}^{(k+1)}[t]^{T})\bigg{|}_{\beta=0}\right).\] (27) * _Lateral Coefficients_, \(\mathbf{B}^{(k)}\) are the inverses of the \(\epsilon\mathbf{I}\) perturbed correlation matrices. 
We can use the update rule in [Bozkurt et al., 2023] for their learning dynamics after the nudge phase: \[\mathbf{B}^{(k)}[t+1]=\lambda_{\mathbf{r}}^{-1}(\mathbf{B}^{(k)}[t]-\gamma\mathbf{z} ^{(k)}[t]\mathbf{z}^{(k)}[t]^{T}),\text{ where }\mathbf{z}^{(k)}=\mathbf{B}^{(k)}[t]\mathbf{r}^{(k)}[t].\] ## 3 Discussion of results * In (12), we devise an update for layer activation \(\mathbf{r}^{(k)}\) by employing two distinct forms of the CMI associated with \(\mathbf{r}^{(k)}\): \(\widehat{I}^{(\epsilon_{k-1})}(\mathbf{r}^{(k-1)},\mathbf{r}^{(k)})[t]\), the CMI with the preceding layer, encompassing the forward prediction error for estimating \(\mathbf{r}^{(k)}\), and \(\widehat{I}^{(\epsilon_{k})}(\mathbf{r}^{(k)},\mathbf{r}^{(k+1)})[t]\), the CMI with the subsequent layer, incorporating the backward prediction error for estimating \(\mathbf{r}^{(k)}\). Employing these alternative expressions is crucial in circumventing the weight transport problem and offering a more biologically plausible framework. For further discussion, please refer to Appendix B. * In the context of the proposed correlative information maximization framework, predictive coding naturally emerges as a crucial mechanism. By incorporating both alternative expressions of CMI, the framework focuses on minimizing both forward and backward prediction errors between adjacent layers via feedforward and feedback connections. These connections foster bidirectional information flow, thereby enhancing the overall learning process. * Figure 1 depicts the interplay between the CorInfoMax objective and the corresponding network architecture. The emergence of lateral connections and autapses can be attributed to the maximization of the unconditional layer entropy component of the CMI, which allows for efficient utilization of the available representation dimensions. Simultaneously, the minimization of conditional entropies between adjacent layers gives rise to feedforward and feedback connections, effectively reducing redundancy within representations. * We employ time-contrastive learning, as in GenRec (O'Reilly, 1996), EP (Scellier and Bengio, 2017b) and CSM (Qin et al., 2021), by implementing separate phases with Hebbian and anti-Hebbian updates, governed by an assumed teaching signal. It has been conjectured that the teaching signal in biological networks can be modeled by the oscillations in the brain (Whittington and Bogacz, 2019; Baldi and Pineda, 1991; Ketz et al., 2013). Although the oscillatory rhythms and their synchronization in the brain are elusive, they are believed to play a crucial role in adaptive processes such as learning and predicting upcoming events (Fell and Axmacher, 2011; Engel et al., 2001). ## 4 Numerical experiments In this section, we evaluate the performance of our CorInfoMax framework on image classification tasks using three popular datasets: MNIST (LeCun and Cortes, 2010), Fashion-MNIST (Xiao et al., 2017), and CIFAR10 (Krizhevsky et al., 2009). We compare the effectiveness of our approach against other contrastive methods, such as EP (Scellier and Bengio, 2017b) and CSM (Qin et al., 2021), as well as explicit methods, including PC (Whittington and Bogacz, 2017) and PC-Nudge (Millipore et al., 2023), when training multilayer perceptron (MLP) architectures. 
We examine two distinct constraints on the activations of CorInfoMax Networks: (i) \(\mathcal{B}_{\infty,+}\), representing the nonnegative part of the unit hypercube, and (ii) \(\mathcal{B}_{1,+}=\{\mathbf{r}:\mathbf{r}\geq 0,\|\mathbf{r}\|_{1}\leq 1\}\), denoting the nonnegative part of the unit \(\ell_{1}\)-norm ball (Tatli and Erdogan, 2021). Table 1 presents the test accuracy results for each algorithm, averaged over 10 realizations along with the corresponding standard deviations. \begin{table} \begin{tabular}{l l l l} \hline \hline & MNIST & FashionMNIST & CIFAR10 \\ \hline **CorInfoMax-\(\mathcal{B}_{\infty,+}\)** (Appendix E.3) & \(97.62\pm 0.1\) & \(88.14\pm 0.3\) & \(51.86\pm 0.3\) \\ **CorInfoMax-\(\mathcal{B}_{1,+}\)** (Appendix E.5) & \(97.71\pm 0.1\) & \(88.09\pm 0.1\) & \(51.19\pm 0.4\) \\ EP & \(97.61\pm 0.1\) & \(88.06\pm 0.7\) & \(49.28\pm 0.5\) \\ CSM & \(98.08\pm 0.1\) & \(88.73\pm 0.2\) & \(40.79^{*}\) \\ PC & \(98.17\pm 0.2\) & \(89.31\pm 0.4\) & - \\ PC-Nudge & \(97.71\pm 0.1\) & \(88.49\pm 0.3\) & \(48.58\pm 0.7\) \\ \hline \hline \end{tabular} \end{table} Table 1: Test accuracy results (mean \(\pm\) standard deviation from \(n=10\) runs) for CorInfoMax networks are compared with other biologically-plausible algorithms. The performance of CSM on the CIFAR10 dataset is taken from (Qin et al., 2021), while the remaining results stem from our own simulations. These findings demonstrate that CorInfoMax networks can achieve comparable or superior performance in relation to the state-of-the-art methods for the selected tasks. Additional information regarding these experiments, as well as further experiments, can be found in the Appendix. We also provide the code used for these experiments in the supplementary document. ## 5 Conclusion In this article, we have presented the correlative information maximization (CorInfoMax) framework as a biologically plausible approach to constructing supervised neural network models. Our proposed method addresses the long-standing weight symmetry issue by providing a principled solution, which results in asymmetric forward and backward prediction networks. Furthermore, the CorInfoMax framework offers a normative approach for developing network models that incorporate multi-compartment pyramidal neuron models, aligning more closely with the experimental findings about the biological neural networks. One potential limitation of our framework, shared by other supervised approaches, is the necessity for model parameter search to improve accuracy. We discuss this issue in detail in Appendix F.
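To make the learning dynamics above concrete, the following is a minimal NumPy sketch of the three update rules: the contrastive feedforward and feedback updates (26)-(27) and the lateral update for \(\mathbf{B}^{(k)}\). This is an illustrative reconstruction rather than the authors' released implementation; the function names, the linear prediction errors, and the constants are assumptions made only for the example.

```python
import numpy as np

def prediction_errors(r_prev, r_next, W_ff, W_fb):
    # Forward error for layer k+1 and backward error for layer k (illustrative,
    # purely linear predictors; the paper's predictors may differ in detail).
    e_fwd = r_next - W_ff @ r_prev
    e_bwd = r_prev - W_fb @ r_next
    return e_fwd, e_bwd

def ep_contrastive_updates(free, nudge, W_ff, W_fb, beta_prime, lr):
    # Contrastive EP-style estimates of (26)-(27): outer products of prediction
    # errors and activations, evaluated at the nudge phase (beta = beta') and
    # at the free phase (beta = 0), divided by beta'.
    ef_n, eb_n = prediction_errors(nudge["prev"], nudge["next"], W_ff, W_fb)
    ef_f, eb_f = prediction_errors(free["prev"], free["next"], W_ff, W_fb)
    dW_ff = (np.outer(ef_n, nudge["prev"]) - np.outer(ef_f, free["prev"])) / beta_prime
    dW_fb = (np.outer(eb_n, nudge["next"]) - np.outer(eb_f, free["next"])) / beta_prime
    return W_ff + lr * dW_ff, W_fb + lr * dW_fb

def lateral_update(B, r, lam=0.999, gamma=0.01):
    # B^{(k)}[t+1] = lam^{-1} (B^{(k)}[t] - gamma z z^T), with z = B^{(k)}[t] r^{(k)}[t].
    z = B @ r
    return (B - gamma * np.outer(z, z)) / lam
```

In an actual run, the activation dictionaries `free` and `nudge` would be obtained by letting the neuronal dynamics settle with \(\beta=0\) and \(\beta=\beta^{\prime}\), respectively, before the weight updates are applied.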
2303.10063
Poiseuille Flow of Carreau-Yasuda Fluid at Variable Pressure Gradient
The unsteady Poiseuille flow of Carreau-Yasuda fluid in a pipe, caused by a variable pressure gradient, is studied theoretically. In a particular case, the steady flow is considered separately. It is proved that at some values of the viscosity model parameters, the problem has a generalized solution, while at others - classical solution. For the latter, a necessary and sufficient condition is found, which depends on the maximum pressure gradient and on the Carreau-Yasuda model parameters.
Nikolay Kutev, Sonia Tabakova
2023-03-17T15:46:18Z
http://arxiv.org/abs/2303.10063v1
# Poiseuille flow of Carreau-Yasuda fluid at variable pressure gradient ###### Abstract. The unsteady Poiseuille flow of Carreau-Yasuda fluid in a pipe, caused by a variable pressure gradient, is studied theoretically. In a particular case, the steady flow is considered separately. It is proved that at some values of the viscosity model parameters, the problem has a generalized solution, while at others - classical solution. For the latter, a necessary and sufficient condition is found, which depends on the maximum pressure gradient and on the Carreau-Yasuda model parameters. Key words and phrases:Carreau-Yasuda fluid, Poiseuille flow, variable pressure gradient, classical solution, negative power index 2010 Mathematics Subject Classification: 76A05, 35J66, 35Q35 *corresponding author ## 1. Introduction Only a small group of fluids refer to the so-called Newtonian fluids, possessing constant viscosity. All other fluids are known as non-Newtonian fluids, whose properties are complicated and usually described by different nonlinear rheological models for shear stress or viscosity [1], [2]. Their rheological complexity permits them to be used in a large range of applications, such as biology, energy, additive manufacturing, etc. Usually, the viscosity (or stress) is described by nonlinear models as a function of shear rate [1], such as the power law model, Carreau model, Carreau-Yasuda model and others. These fluids can be shear-thickening, when their viscosity increases with the shear rate (power index \(n>1\)) or shear thinning in the decreasing case (power index \(n<1\)). However, the complex shear-thinning fluids are often unstable at high shear rates, which results in a negative slope of stress [3]-[5], usually expressed by \(n\lesssim 0\). Then, for the Poiseuille flow in pipes, the pressure axial gradient is no more constant and becomes a function of the radial coordinate, which means that the flow can not be described by the well-known Weissenberg-Rabinowitsch-Mooney theory [6]. This paper is a prolongation of our previous works [7]-[9], which treat the general flow problems of shear-thinning fluid flow in a pipe. The first two papers concern the unsteady flow case, while the third - the steady case. In these works, the Carreau-Yasuda model is used for the fluid viscosity of the flow, caused by a constant pressure gradient in the axial direction, which is time-dependent or steady. As a result, the flow is governed by a single non-linear PDE of parabolic type in the unsteady case or elliptic type in the steady case for the axial velocity. In [8] it was proved that the unsteady problem becomes uniformly parabolic, nonuniformly parabolic, degenerate parabolic or backward parabolic at different values of \(n\). In [9] the problem has a classical solution, for which a necessary and sufficient condition is found depending on the model parameters. In the present paper, the pressure gradient is considered as a function of the radial coordinate \(r\) in the pipe flow of the Carreau-Yasuda fluid with arbitrary power index \(n\). The existence of a classical solution will be proved separately for the steady and unsteady cases. 
The dimensionless velocity equation of Carreau-Yasuda flow in an infinite circular pipe at axial pressure gradient \(b(Y)\) in cylindrical coordinates is given by [8], [9]: \[8\beta^{2}U_{T}-\frac{1}{Y}\frac{\partial}{\partial Y}\left[(1+\kappa^{\alpha}\mid U_{Y}\mid^{\alpha})^{\frac{n-1}{\alpha}}\,YU_{Y}\right]=b(Y),\] \[\mbox{in}\quad Q=\{(T,Y);\quad T>0;\quad Y\in(0,R)\}, \tag{1}\] \[U_{Y}(T,0)=U(T,R)=0\quad\mbox{for}\quad T\geq 0,\quad U(0,Y)=\Psi(Y)\quad\mbox{for}\quad Y\in[0,R], \tag{2}\] where \(U=U(T,Y)\) is the dimensionless axial velocity, \(T\) - dimensionless time, \(Y\) - dimensionless radial coordinate, \(R\) - dimensionless radius, \(8\beta^{2}=ReSt\) with \(Re\) and \(St\) as Reynolds and Strouhal numbers, \(\kappa\) - Carreau number (Weissenberg number), \(\alpha\) and \(n\) are empirically determined. The function \(\Psi(Y)\in C^{4}([0,R])\) satisfies the compatibility conditions: \[\Psi^{\prime}(0)=\Psi(R)=0,\quad\Psi^{\prime}(R)=0,\quad\Psi^{\prime\prime}(R)+b(R)=0 \tag{4}\] \[\mbox{and}\quad b(Y)\in C^{2}([0,R]),\quad 0\leq b(Y)\leq b_{0}\quad\mbox{for}\quad Y\in[0,R],\] \[b_{0}=const.,\quad b(Y)\not\equiv\ 0\quad\mbox{for}\quad Y\in(0,R). \tag{3}\] In non-divergence form eq. (1) becomes: \[P_{0}(U)=8\beta^{2}U_{T}-\Phi(\mid U_{Y}\mid)U_{YY}-\frac{1}{Y}\left(1+\kappa^{\alpha}\mid U_{Y}\mid^{\alpha}\right)^{\frac{n-1}{\alpha}}U_{Y}=b(Y), \tag{5}\] where \[\Phi(\eta) =(1-n)\left(1+\kappa^{\alpha}\eta^{\alpha}\right)^{\frac{n-1-\alpha}{\alpha}}+n\left(1+\kappa^{\alpha}\eta^{\alpha}\right)^{\frac{n-1}{\alpha}}\] \[=(1+n\kappa^{\alpha}\eta^{\alpha})\left(1+\kappa^{\alpha}\eta^{\alpha}\right)^{\frac{n-1-\alpha}{\alpha}}\quad\mbox{for}\quad\eta\geq 0. \tag{6}\] Since \[\Phi^{\prime}(\eta)=(n-1)\kappa^{\alpha}\left(1+\kappa^{\alpha}\eta^{\alpha}\right)^{\frac{n-1-2\alpha}{\alpha}}\left(1+\alpha+n\kappa^{\alpha}\eta^{\alpha}\right)\eta^{\alpha-1}, \tag{8}\] \[\Phi(0)=1,\quad\lim_{\eta\to\infty}\Phi(\eta)=\infty,\quad\mbox{for}\quad n>1,\quad\alpha>0,\quad\kappa\neq 0,\] \[\lim_{\eta\to\infty}\Phi(\eta)=0,\quad\mbox{for}\quad n<1,\quad\alpha>0,\quad\kappa\neq 0 \tag{7}\] it follows that for \(\alpha>0\), \(\eta\geq 0\): \[\mbox{(i)}\quad\Phi(\eta)\geq 1\quad\mbox{when}\quad n>1,\quad\kappa\neq 0;\] \[\mbox{(ii)}\quad\Phi(\eta)\equiv 1\quad\mbox{when}\quad n=1\quad\mbox{or}\quad\kappa=0;\] \[\mbox{(iii)}\quad\Phi(\eta)\in(0,1]\quad\mbox{when}\quad n\in[0,1),\quad\kappa\neq 0. \tag{9}\] When \(n<0\), \(\kappa\neq 0\), \(\alpha>0\) and \(\eta_{0}=\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}}\), then \[\mbox{(i)}\quad\Phi(\eta)\in[0,1]\quad\mbox{for}\quad\eta\in[0,\eta_{0}]\quad\mbox{and} \tag{11}\] \[\mbox{(ii)}\quad\Phi(\eta)<0\quad\mbox{for}\quad\eta>\eta_{0}. \tag{10}\] Thus equation (1) becomes for \(\alpha>0\), \(\beta>0\): \[\begin{array}{ll}\mbox{(i)}&n>1,\quad\kappa\neq 0\quad\mbox{- singular, strictly nonuniformly quasilinear parabolic one;}\\ \mbox{(ii)}&n=1\quad\mbox{or}\quad\kappa=0\quad\mbox{- singular, linear parabolic one;}\\ \mbox{(iii)}&n\in[0,1)\quad\kappa\neq 0\quad\mbox{- singular, degenerate at infinity, quasilinear parabolic one.}\end{array} \tag{11}\] When \(n<0\), \(\alpha>0\), \(\beta>0\) and \(\kappa\neq 0\), the structure of (1) is more complicated. If \[0\leq\mid U_{Y}(T,Y)\mid<\eta_{0}=\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}} \tag{12}\] then (1) is singular, quasilinear uniformly parabolic one, while for \[\mid U_{Y}(T,Y)\mid\geq\eta_{0} \tag{13}\] it is singular, degenerate backward quasilinear parabolic one.
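As a quick numerical illustration of this classification, the short Python sketch below evaluates \(\Phi(\eta)\) from (6) and the threshold \(\eta_{0}=\kappa^{-1}(-1/n)^{1/\alpha}\) from (12) for a negative power index; the parameter values are illustrative choices and are not taken from the paper.

```python
import numpy as np

def Phi(eta, n, alpha, kappa):
    # Effective diffusion coefficient Phi(eta) from eq. (6).
    base = 1.0 + kappa**alpha * eta**alpha
    return (1.0 + n * kappa**alpha * eta**alpha) * base**((n - 1 - alpha) / alpha)

# Illustrative shear-thinning parameters with n < 0.
n, alpha, kappa = -0.5, 2.0, 1.0
eta0 = (1.0 / kappa) * (-1.0 / n) ** (1.0 / alpha)   # threshold from eq. (12)

for eta in np.linspace(0.0, 3.0, 7):
    regime = "uniformly parabolic" if eta < eta0 else "degenerate/backward parabolic"
    print(f"eta = {eta:4.2f}   Phi = {Phi(eta, n, alpha, kappa):+8.4f}   ({regime})")
```

The sign change of \(\Phi\) at \(\eta_{0}\) is exactly the loss of parabolicity described in (12)-(13).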
Further on we consider the regularized problem \[P_{\varepsilon}U^{\varepsilon}=8\beta^{2}U_{T}^{\varepsilon}-\frac{1}{Y+ \varepsilon}\frac{\partial}{\partial Y}\left[\left(1+\kappa^{\alpha}\mid U_{ Y}^{\varepsilon}\mid^{\alpha}\right)^{\frac{n-1}{\alpha}}(Y+\varepsilon)U_{Y}^{ \varepsilon}\right]=b(Y)\quad\mbox{in}\quad Q \tag{14}\] for every sufficiently small positive \(\varepsilon\in(0,\varepsilon_{0}]\), where \(\varepsilon_{0}\ll R\). ## 2. Steady Poiseuille flow of Carreau-Yasuda fluid In this section we prove necessary and sufficient conditions for existence and uniqueness of classical solution of the stationary part of equation (1). For convenience we consider the regularized problem: \[LV^{\varepsilon}=\frac{1}{Y+\varepsilon}\frac{\partial}{\partial Y}\left[ \left(1+\kappa^{\alpha}\mid V_{Y}^{\varepsilon}\mid^{\alpha}\right)^{\frac{n- 1}{\alpha}}(Y+\varepsilon)V_{Y}^{\varepsilon}\right]=b(Y),\quad Y\in(0,R) \tag{15}\] \[V_{Y}^{\varepsilon}(0)=0,\quad V^{\varepsilon}(R)=0\] for every \(\varepsilon\in(0,\varepsilon_{0}]\), sufficiently small positive \(\varepsilon_{0}\ll R\). The solutions of (15) are crucial for the gradient estimate of the solutions of (14) with constants independent of \(\varepsilon\). If \[B_{\varepsilon}(Y)=\frac{1}{Y+\varepsilon}\int_{0}^{Y}(s+\varepsilon)b(s)ds \quad\mbox{for}\quad Y\in[0,R] \tag{16}\] then from the l'Hopital rule for \(\varepsilon=0\), we get \[\lim_{Y\to 0}B_{0}(Y)=\lim_{Y\to 0}Yb(Y)=0,\quad B_{\varepsilon}(0)=0\quad \mbox{for}\quad\varepsilon\in(0,\varepsilon_{0}],\quad\varepsilon_{0}\ll R, \tag{17}\] \[B_{\varepsilon}^{\prime}(Y)=b(Y)-\frac{1}{(Y+\varepsilon)^{2}}\int_{0}^{Y}(s+ \varepsilon)b(s)ds, \tag{18}\] \[\lim_{Y\to 0}B_{0}^{\prime}(Y)=b(0)-\lim_{Y\to 0}\frac{Y.b(Y)}{2Y}=\frac{1}{2}b_{0}(0)\] \[B_{\varepsilon}^{\prime}(0)=b(0),\quad\mbox{for}\quad\varepsilon\in(0, \varepsilon_{0}],\] \[B_{\varepsilon}^{\prime\prime}(Y)=b^{\prime}(Y)-\frac{1}{Y+\varepsilon}b(Y)+ \frac{2}{(Y+\varepsilon)^{3}}\int_{0}^{Y}(s+\varepsilon)b(s)ds\] \[\mbox{and}\quad B_{\varepsilon}(Y)\in C^{2}([0,R])\quad\mbox{for}\quad \varepsilon\in(0,\varepsilon_{0}],\quad\varepsilon_{0}\ll R,\] \[B_{0}(Y)\in C^{1}([0,R])\cap C^{2}((0,R])\] We also define the function \[F(\eta)=(1+\kappa^{\alpha}\mid\eta\mid^{\alpha})^{\frac{n-1}{\alpha}}\eta\quad \text{for}\quad\eta\in\mathbb{R}. \tag{19}\] **Theorem 2.1**.: _Suppose \(n>0\),\(\kappa\neq 0\) or \(n\in\mathbb{R}\), \(\kappa=0\) and \(\alpha>0\). Then problem (15) has a unique classical solution \(V^{\varepsilon}(Y)\in C^{2}([0,R])\)_ \[V^{\varepsilon}(Y)=-\int_{Y}^{R}F^{-1}(B_{\varepsilon}(s))ds\quad\text{for} \quad Y\in[0,R] \tag{20}\] _and the estimate_ \[0\leq V^{\varepsilon}_{Y}(Y)\leq F^{-1}(B_{\varepsilon_{0}}(Y))\leq F^{-1} \left(\frac{b_{0}(R+\varepsilon_{0})}{2}\right) \tag{21}\] _holds for every \(Y\in[0,R]\) and \(\varepsilon\in(0,\varepsilon_{0}],\quad\varepsilon_{0}\ll R\)._ Proof.: Integrating (15) we get the identity \[F(V^{\varepsilon}_{Y})=(1+\kappa^{\alpha}\mid V^{\varepsilon}_{Y}\mid^{\alpha })^{\frac{n-1}{\alpha}}\,V^{\varepsilon}_{Y}=B_{\varepsilon}(Y) \tag{22}\] for every \(Y\in[0,R]\) and \(\varepsilon\in(0,\varepsilon_{0}],\quad\varepsilon_{0}\ll R\). 
For the function \(F(\eta)\) defined in (19) we get the identities \[F^{\prime}(\eta)=(1+\kappa^{\alpha}\mid\eta\mid^{\alpha})^{\frac{n-1-\alpha}{\alpha}}\,(1+n\kappa^{\alpha}\mid\eta\mid^{\alpha}) \tag{23}\] \[F^{\prime\prime}(\eta)=(n-1)\kappa^{\alpha}\,(1+\kappa^{\alpha}\mid\eta\mid^{\alpha})^{\frac{n-1-2\alpha}{\alpha}}\mid\eta\mid^{\alpha-2}\eta\,(1+\alpha+n\kappa^{\alpha}\mid\eta\mid^{\alpha})\quad\text{for}\quad\eta\in\mathbb{R}. \tag{24}\] Since \(F^{\prime}(\eta)>0\) for every \(\eta\in\mathbb{R}\), it follows that \(F(\eta)\) is a strictly monotone increasing function \(F(\eta):[0,\infty)\to[0,\infty)\) because \(F(0)=0\), \(\lim_{\eta\to\infty}F(\eta)=\infty\). Hence, there exists the inverse function \(F^{-1}(\zeta):[0,\infty)\to[0,\infty)\) and from (22), (15) we get \[V^{\varepsilon}(Y)=-\int_{Y}^{R}F^{-1}(B_{\varepsilon}(s))ds\quad\text{for}\quad Y\in[0,R].\] Since \[\frac{\partial}{\partial\varepsilon}B_{\varepsilon}(Y)=\frac{1}{(Y+\varepsilon)^{2}}\int_{0}^{Y}(Y-s)\,b(s)ds\geq 0\quad\text{for}\quad\varepsilon\in(0,\varepsilon_{0}],\quad\varepsilon_{0}\ll R\] we have from (4) the estimate \[B_{\varepsilon}(Y)\leq B_{\varepsilon_{0}}(Y)=\frac{1}{Y+\varepsilon_{0}}\int_{0}^{Y}(s+\varepsilon_{0})b(s)ds\leq\frac{b_{0}}{Y+\varepsilon_{0}}\int_{0}^{Y}(s+\varepsilon_{0})ds\leq\frac{b_{0}}{2}(R+\varepsilon_{0}) \tag{25}\] and from the monotonicity of \(F^{-1}(\zeta)\) we get \[0\leq F^{-1}(B_{\varepsilon}(Y))\leq F^{-1}\left(\frac{b_{0}(R+\varepsilon_{0})}{2}\right)\] which proves (21). **Remark 2.1**.: _If \(n=1\) or \(\kappa=0\) then \(F(\eta)=\eta\), \(F^{-1}(\zeta)=\zeta\) and from (20) it follows that_ \[V^{\varepsilon}(Y)=-\int_{Y}^{R}\frac{1}{s+\varepsilon}\int_{0}^{s}(t+\varepsilon)b(t)dtds\] \[\text{for every}\quad Y\in[0,R]\quad\text{and}\quad\varepsilon\in(0,\varepsilon_{0}],\quad\varepsilon_{0}\ll R.\] **Theorem 2.2**.: _Suppose \(\alpha>0\), \(n\leq 0\), \(\varepsilon\in(0,\varepsilon_{0}],\quad\varepsilon_{0}\ll R\). Then problem (15) has a unique classical solution \(V^{\varepsilon}(Y)\in C^{2}([0,R))\cap C^{1}([0,R])\)_ \[V^{\varepsilon}(Y)=-\int_{Y}^{R}F^{-1}(B_{\varepsilon}(s))ds\quad\text{for}\quad Y\in[0,R] \tag{26}\] _(i) for \(n=0\) iff_ \[B_{\varepsilon}(Y)\leq\kappa^{-1}=\lim_{\eta\to\infty}F(\eta); \tag{27}\] _(ii) for \(n<0\) iff_ \[B_{\varepsilon}(Y)\leq\left(\frac{n-1}{n}\right)^{\frac{n-1}{\alpha}}\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}}=F\left(\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}}\right) \tag{28}\] _for every \(Y\in[0,R]\)._ _Moreover, the estimate (21) holds._ **Remark 2.2**.: _If_ \[\text{(i)}\quad n=0\quad\text{and}\quad B_{\varepsilon}(Y_{1})=\kappa^{-1},\quad Y_{1}\in(0,R)\quad\text{then}\] \[V^{\varepsilon}(Y)\in C^{2}([0,R]\setminus\{Y_{1}\})\cap C^{1}([0,R]);\] _(ii) \[n<0\quad\text{and}\quad B_{\varepsilon}(Y_{2})=\left(\frac{n-1}{n}\right)^{\frac{n-1}{\alpha}}\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}},\quad Y_{2}\in(0,R)\quad\text{then}\] \[V^{\varepsilon}(Y)\in C^{2}([0,R]\setminus\{Y_{2}\})\cap C^{1}([0,R])\]_ _and \(V^{\varepsilon}(Y)\) is a \(C^{1}([0,R])\) generalized solution of (15)._ Proof of Theorem 2.2.: (i) From (23) we have \(F^{\prime}(\eta)=\left(1+\kappa^{\alpha}\eta^{\alpha}\right)^{-\frac{1+\alpha}{\alpha}}>0\) for \(\eta\geq 0\) and \(F(0)=0\), \(\lim_{\eta\to\infty}F(\eta)=\kappa^{-1}\).
Hence \(F(\eta)\) is a strictly increasing function, \[F(\eta):[0,\infty)\to[0,\kappa^{-1}) \tag{29}\] and there exists the inverse function \[F^{-1}(\zeta):[0,\kappa^{-1})\to[0,\infty) \tag{30}\] Since \[(F^{-1})^{\prime}(\zeta)=\frac{1}{F^{\prime}(F^{-1}(\zeta))}>0 \tag{31}\] the inverse function \(F^{-1}(\zeta)\) is strictly monotone increasing. **Sufficiency:** If (27) holds, then from (22), (30) and (27), problem (15) is equivalent to (20). After integration of (20) from the boundary condition \(V^{\varepsilon}(R)=0\), we get (26). **Necessity:** If \(V^{\varepsilon}(Y)\in C^{1}([0,R])\cap C^{2}([0,R))\) is a classical solution of (15), we suppose by contradiction that (27) fails, i.e., there exists \(Y_{0}\in[0,R]\) such that \[B_{\varepsilon}(Y_{0})>\kappa^{-1}\quad\text{for some}\quad\varepsilon\in(0,\varepsilon_{0}],\quad\varepsilon_{0}\ll R. \tag{32}\] From (15), (30) and (32) at the point \(Y_{0}\), we get the following impossible chain of inequalities \[\kappa^{-1}\geq\sup_{Y\in[0,R]}F(V^{\varepsilon}_{Y})=B_{\varepsilon}(Y_{0})>\kappa^{-1}, \tag{33}\] which proves the necessity of (27). (ii) From (23) for \(n<0\) it follows that \[F^{\prime}(\eta)>0\quad\text{for}\quad 0\leq\eta<\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}},\quad F^{\prime}(\eta_{0})=0, \tag{34}\] \[\eta_{0}=\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}}\quad\text{and}\quad F^{\prime}(\eta)<0\quad\text{for}\quad\eta>\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}}\] \[F(0)=0,\quad F(\eta_{0})=\left(\frac{n-1}{n}\right)^{\frac{n-1}{\alpha}}\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}},\quad\lim_{\eta\to\infty}F(\eta)=0. \tag{35}\] Hence the function \[F(\eta):[0,\eta_{0})\to[0,F(\eta_{0})) \tag{36}\] is a strictly monotone increasing one, while \[F(\eta):[\eta_{0},\infty)\to[F(\eta_{0}),0) \tag{37}\] is a strictly monotone decreasing one. From (36) and (31) the inverse function \(F^{-1}(\zeta)\) of \(F(\eta)\) in \([0,\eta_{0}]\) exists \[F^{-1}(\zeta):[0,\zeta_{0}]\to[0,\eta_{0}],\quad\zeta_{0}=F(\eta_{0})=\left(\frac{n-1}{n}\right)^{\frac{n-1}{\alpha}}\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}} \tag{38}\] and is strictly monotone increasing one, \[F^{-1}(\zeta)\in C^{2}([0,\eta_{0}))\cap C([0,\eta_{0}]). \tag{39}\] **Sufficiency:** If (28) holds then \(F^{-1}(B_{\varepsilon}(Y))\) is well defined for \(Y\in[0,R]\) and integrating (22), we get (26). From (39) it follows that \(V^{\varepsilon}(Y)\in C^{2}([0,R))\cap C^{1}([0,R])\). **Necessity:** If \(V^{\varepsilon}(Y)\in C^{2}([0,R))\cap C^{1}([0,R])\) is a classical solution of (15), we suppose by contradiction that (28) fails, i.e., there exists \(Y_{0}\in[0,R]\) such that \[B_{\varepsilon}(Y_{0})>\left(\frac{n-1}{n}\right)^{\frac{n-1}{\alpha}}\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}} \tag{40}\] for some \(\varepsilon\in(0,\varepsilon_{0}],\quad\varepsilon_{0}\ll R\). From the continuity of \(B_{\varepsilon}(Y)\), without loss of generality, we assume that \(Y_{0}\in(0,R)\). From (22) at the point \(Y_{0}\) and (35), (36), (40) we get the following impossible chain of inequalities \[\left(\frac{n-1}{n}\right)^{\frac{n-1}{\alpha}}\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}}\geq\sup_{Y\in[0,R]}F(V^{\varepsilon}_{Y}(Y))\geq F(V^{\varepsilon}_{Y}(Y_{0}))\] \[=B_{\varepsilon}(Y_{0})>\left(\frac{n-1}{n}\right)^{\frac{n-1}{\alpha}}\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}}.\] The estimate (21) follows from (25) and the monotonicity of \(F^{-1}(\zeta)\).
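Before turning to the unsteady problem, note that the representation (20)/(26) can be evaluated numerically: compute \(B_{\varepsilon}(Y)\) by quadrature, invert \(F\) on its monotone increasing branch, and integrate inward from the wall. The SciPy sketch below does this for an illustrative constant pressure gradient; it is a demonstration under assumed parameter values, not part of the paper's analysis.

```python
import numpy as np
from scipy.optimize import brentq

def F(eta, n, alpha, kappa):
    # Flux function F(eta) from eq. (19).
    return (1.0 + kappa**alpha * abs(eta)**alpha) ** ((n - 1) / alpha) * eta

def steady_profile(b, R=1.0, n=0.5, alpha=2.0, kappa=1.0, eps=1e-3, m=401):
    # Evaluate V^eps(Y) = -int_Y^R F^{-1}(B_eps(s)) ds on a uniform grid, eq. (20).
    Y = np.linspace(0.0, R, m)
    h = Y[1] - Y[0]
    f = (Y + eps) * b(Y)
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * h)))
    B = cum / (Y + eps)                      # B_eps(Y) from eq. (16), trapezoidal rule
    # Upper bracket for the inversion; this assumes B stays below the admissible
    # bound in (27)/(28), otherwise no classical profile exists.
    if n > 0 or kappa == 0:
        eta_hi = 50.0                        # arbitrary large bracket for the example
    elif n == 0:
        eta_hi = 1e6                         # F increases towards its supremum 1/kappa
    else:
        eta_hi = (1.0 / kappa) * (-1.0 / n) ** (1.0 / alpha)   # eta_0, cf. (28)
    VY = np.array([0.0 if v <= 0.0 else
                   brentq(lambda e, v=v: F(e, n, alpha, kappa) - v, 0.0, eta_hi)
                   for v in B])              # V_Y = F^{-1}(B_eps(Y)), cf. eq. (22)
    V = np.zeros_like(Y)                     # integrate -V_Y inward from V(R) = 0
    for i in range(m - 2, -1, -1):
        V[i] = V[i + 1] - 0.5 * (VY[i] + VY[i + 1]) * h
    return Y, V

# Illustrative run: constant dimensionless pressure gradient b(Y) = 1.
Y, V = steady_profile(lambda y: np.ones_like(y))
```

The computed profile satisfies \(V^{\varepsilon}(R)=0\), \(V^{\varepsilon}\leq 0\) and \(0\leq V^{\varepsilon}_{Y}\), in agreement with the estimate (21).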
## 3. Unsteady Poiseuille flow of Carreau-Yasuda fluid In this section we formulate and prove the main results in this paper for (1)-(4) for different values of the parameters \(\alpha>0\), \(\beta>0\), \(n\in\mathbb{R}\), \(\kappa\geq 0\). For this purpose we consider the regularized problem (14) in non-divergence form: \[P_{\varepsilon}U^{\varepsilon}=8\beta^{2}U^{\varepsilon}_{T}-\Phi\left(\mid U^{\varepsilon}_{Y}\mid\right)U^{\varepsilon}_{YY}-\frac{1}{Y+\varepsilon}\left(1+\kappa^{\alpha}\mid U^{\varepsilon}_{Y}\mid^{\alpha}\right)^{\frac{n-1}{\alpha}}U^{\varepsilon}_{Y}=b(Y)\quad\text{in}\quad Q\] \[U^{\varepsilon}_{Y}(T,0)=U^{\varepsilon}(T,R)=0\quad\text{for}\quad T\geq 0,\quad U^{\varepsilon}(0,Y)=\Psi(Y)\quad\text{for}\quad Y\in[0,R]\] \[\text{and}\quad\varepsilon\in(0,\varepsilon_{0}],\quad\varepsilon_{0}\ll R. \tag{41}\] In the following lemmas we prove a priori estimates for \(\mid U^{\varepsilon}(T,Y)\mid\) and \(\mid U^{\varepsilon}_{Y}(T,Y)\mid\) in \(\overline{Q}\), \(Q=\left\{(T,Y);T>0,Y\in(0,R)\right\}\) with constants independent of \(\varepsilon\). **Lemma 3.1**.: _Suppose \(U^{\varepsilon}(T,Y)\in C^{2}(Q)\cap C^{1}(\overline{Q})\) is a solution of (41), \(\alpha>0\), \(\beta>0\) and either \(n\geq 1\), \(\kappa\neq 0\), or \(\kappa=0\), \(n\in\mathbb{R}\). Then the estimates_ \[\mid U^{\varepsilon}(T,Y)\mid\leq K_{1}(R^{2}-Y^{2})\leq K_{1}R^{2}, \tag{42}\] \[\mid U^{\varepsilon}_{Y}(T,R)\mid\leq 2K_{1}R \tag{43}\] _hold for \(T\geq 0\), \(Y\in[0,R]\), where_ \[K_{1}=\max\left\{\sup_{Y\in[0,R]}\left|\frac{\Psi(Y)}{R^{2}-Y^{2}}\right|,\frac{1}{2}b_{0}\right\}. \tag{44}\] Proof.: For the function \(H(T,Y)=K_{1}(R^{2}-Y^{2})\) and the operator \[PW=8\beta^{2}W_{T}-\Phi\left(\mid U^{\varepsilon}_{Y}\mid\right)W_{YY}-\frac{1}{Y+\varepsilon}\left(1+\kappa^{\alpha}\mid U^{\varepsilon}_{Y}\mid^{\alpha}\right)^{\frac{n-1}{\alpha}}W_{Y} \tag{45}\] we get from (44), (9)\({}_{i}\) and (9)\({}_{ii}\) the estimate \[PH= 2\Phi\left(\mid U^{\varepsilon}_{Y}\mid\right)K_{1}+\frac{2K_{1}Y}{Y+\varepsilon}\left(1+\kappa^{\alpha}\mid U^{\varepsilon}_{Y}\mid^{\alpha}\right)^{\frac{n-1}{\alpha}}\geq 2K_{1}\geq b_{0}\geq b(Y)=PU^{\varepsilon}\] \[\text{for}\quad(T,Y)\in Q.\] Hence \(P(H-U^{\varepsilon})\geq 0\) in \(Q\), \(H(0,Y)-U^{\varepsilon}(0,Y)=K_{1}(R^{2}-Y^{2})-\Psi(Y)\geq 0\) for \(Y\in[0,R]\), \(H(T,R)-U^{\varepsilon}(T,R)=0\) and \(H_{Y}(T,0)-U^{\varepsilon}_{Y}(T,0)=0\) for \(T\geq 0\). From the strong interior maximum principle \(H(T,Y)-U^{\varepsilon}(T,Y)\) does not attain a maximum or minimum at any interior point of \(Q\) and from the strong boundary maximum principle also on \(\Gamma_{1}=\left\{(T,0);T>0\right\}\)[10], [11]. Hence \(H(T,Y)-U^{\varepsilon}(T,Y)\) attains its maximum and minimum on the rest of the parabolic boundary \(\Gamma_{2}\cup\Gamma_{3}\): \[\Gamma_{2}=\left\{(0,Y);Y\in[0,R]\right\},\quad\Gamma_{3}=\left\{(T,R);T\geq 0\right\}. \tag{46}\] The estimate (42) follows from the choice of \(K_{1}\) and the zero boundary condition on \(\Gamma_{3}\). From (42) the boundary gradient estimate becomes: \[\mid U^{\varepsilon}_{Y}(T,R)\mid\leq 2K_{1}R\quad\text{for}\quad T\geq 0.
\tag{47}\] **Lemma 3.2**.: _Suppose \(U^{\varepsilon}(T,Y)\in C^{2}(Q)\cap C^{1}(\overline{Q})\) is a solution of (41), \(\alpha>0\), \(\beta>0\) and one of the following conditions holds_ \[\text{(i)}\quad n\in(0,1),\kappa\neq 0\quad\text{or}\] \[\text{(ii)}\quad n=0,\kappa\neq 0,\sup_{Y\in[0,R]}B_{0}(Y)<\kappa^{-1}\quad\text{or}\] \[\text{(iii)}\quad n<0,\kappa\neq 0,\sup_{Y\in[0,R]}B_{0}(Y)<\left(\frac{n-1}{n}\right)^{\frac{n-1}{\alpha}}\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{\alpha}}. \tag{48}\] _If_ \[\mid\Psi(Y)\mid<-V^{0}(Y)\quad\text{for}\quad Y\in[0,R], \tag{49}\] _where \(V^{0}(Y)\) is defined in Theorem 2.1 in case (48)\({}_{i}\) and in Theorem 2.2 in cases (48)\({}_{ii}\) and (48)\({}_{iii}\), then the estimates_ \[\mid U^{\varepsilon}(T,Y)\mid\leq(R-Y)F^{-1}\left(\frac{b_{0}(R+\varepsilon_{0})}{2}\right) \tag{50}\] \[\mid U^{\varepsilon}_{Y}(T,R)\mid\leq F^{-1}\left(\frac{b_{0}(R+\varepsilon_{0})}{2}\right)\quad\text{hold for}\quad T\geq 0,Y\in[0,R]. \tag{51}\] Proof.: If \(V^{\varepsilon}(Y)\) is defined in Theorem 2.1 for (48)\({}_{i}\) and in Theorem 2.2 for (48)\({}_{ii}\) and (48)\({}_{iii}\), then for the operator \(P\) given in (45), we have \[PV^{\varepsilon} =-b(Y)-A_{1}\left[(U^{\varepsilon}_{Y})^{2}-(V^{\varepsilon}_{Y})^{2}\right]\quad\text{in Q, where}\] \[A_{1} =\frac{1}{2}\kappa^{\alpha}\int_{0}^{1}\left[\theta(U^{\varepsilon}_{Y})^{2}+(1-\theta)(V^{\varepsilon}_{Y})^{2}\right]^{\frac{\alpha-2}{2}}d\theta\Big{\{}n(n-1)\left(V^{\varepsilon}_{YY}+\frac{V^{\varepsilon}_{Y}}{Y+\varepsilon}\right)\] \[\int_{0}^{1}\left[\theta\left(1+\kappa^{\alpha}\mid U^{\varepsilon}_{Y}\mid^{\alpha}\right)+(1-\theta)\left(1+\kappa^{\alpha}\mid V^{\varepsilon}_{Y}\mid^{\alpha}\right)\right]^{\frac{n-1-\alpha}{\alpha}}d\theta\] \[+V^{\varepsilon}_{YY}(1-n)(n-1-\alpha)\int_{0}^{1}\left[\theta\left(1+\kappa^{\alpha}\mid U^{\varepsilon}_{Y}\mid^{\alpha}\right)+(1-\theta)\left(1+\kappa^{\alpha}\mid V^{\varepsilon}_{Y}\mid^{\alpha}\right)\right]^{\frac{n-1-2\alpha}{\alpha}}d\theta\Big{\}}\] In the above calculations, we use the identity \[\frac{V^{\varepsilon}_{Y}}{Y+\varepsilon}\left[(1+\kappa^{\alpha}\mid U^{\varepsilon}_{Y}\mid^{\alpha})^{\frac{n-1}{\alpha}}-(1+\kappa^{\alpha}\mid V^{\varepsilon}_{Y}\mid^{\alpha})^{\frac{n-1}{\alpha}}\right]+\left[\Phi(\mid U^{\varepsilon}_{Y}\mid)-\Phi(\mid V^{\varepsilon}_{Y}\mid)\right]V^{\varepsilon}_{YY}\] \[=-A_{1}\left[(U^{\varepsilon}_{Y})^{2}-(V^{\varepsilon}_{Y})^{2}\right]\] Thus the function \(W=U^{\varepsilon}(T,Y)+V^{\varepsilon}(Y)\) satisfies the problem \[PW-A_{1}\left(U^{\varepsilon}_{Y}-V^{\varepsilon}_{Y}\right)W_{Y}=0\quad\text{in Q}\] \[W_{Y}(T,0)=W(T,R)=0\quad\text{for}\quad T\geq 0,W(0,Y)=\Psi(Y)+V^{\varepsilon}(Y)\leq 0\quad\text{for}\quad Y\in[0,R]\] \[\text{and}\quad\varepsilon\in(0,\varepsilon_{0}],\varepsilon_{0}\ll R.
\tag{52}\] Hence, from the strong interior and boundary maximum principle for classical solutions [10], [11], it follows that \(W(T,Y)\) does not attain a positive maximum in \(\overline{Q}\), i.e., \[W(T,Y)\leq 0\quad\text{in}\ \overline{Q}\quad\text{and}\quad U^{\varepsilon}(T,Y)\leq-V^{\varepsilon}(Y)\quad\text{for}\quad T\geq 0,Y\in[0,R].\] The opposite inequality \(U^{\varepsilon}(T,Y)\geq V^{\varepsilon}(Y)\) follows in the same way by means of the function \(W_{1}=-U^{\varepsilon}(T,Y)+V^{\varepsilon}(Y)\), which satisfies (52) with \(W_{1}(0,Y)=-\Psi(Y)+V^{\varepsilon}(Y)\leq 0\) for \(Y\in[0,R]\) and sufficiently small positive \(\varepsilon\). Thus (50) follows from (28) and the monotonicity of \(F^{-1}(\zeta)\). The estimate (51) is a trivial corollary of (50). **Lemma 3.3**.: _Suppose \(U^{\varepsilon}(T,Y)\in C^{3}(Q)\cap C^{2}(\overline{Q})\) is a solution of (41), \(\alpha>0\), \(\beta>0\), \(n\in\mathbb{R}\), \(\kappa\geq 0\).
Then the estimate_ \[\mid U^{\varepsilon}_{T}(T,Y)\mid\leq K_{2}\quad\text{for}\quad T\geq 0,Y\in[0,R],\varepsilon\in(0,\varepsilon_{0}],\varepsilon_{0}\ll R, \tag{53}\] _holds, where_ \[K_{2}=\frac{1}{8\beta^{2}}\Big{[}\sup_{Y\in[0,R]}\left|\Phi\left(\left|\Psi^{ \prime}(Y)\right|\right)\Psi^{\prime\prime}(Y)\right|+\sup_{Y\in[0,R]}\left(1+ \kappa^{\alpha}|\Psi^{\prime}(Y)|^{\alpha}\right)^{\frac{n-1}{\alpha}}\left| \frac{\Psi^{\prime}(Y)}{Y}\right|+b_{0}\Big{]} \tag{54}\] Proof.: Differentiating (41) with respect to \(T\), we obtain that \(U^{\varepsilon}_{T}\) satisfies the problem \[P_{3}U^{\varepsilon}_{T}=0\quad\text{in Q},\quad U^{\varepsilon}_{ TY}(T,0)=0,\quad U^{\varepsilon}_{T}(T,R)=0\quad\text{for}\quad T\geq 0\] \[U^{\varepsilon}_{T}(0,Y)=\frac{1}{8\beta^{2}}\Big{[}\Phi\left( \left|\Psi^{\prime}(Y)\right|\right)\Psi^{\prime\prime}(Y)+\frac{1}{Y+ \varepsilon}\left(1+\kappa^{\alpha}|\Psi^{\prime}(Y)|^{\alpha}\right)^{\frac{ n-1}{\alpha}}\Psi^{\prime}(Y)+b(Y)\Big{]}, \tag{55}\] where \[P_{3}W=8\beta^{2}W_{T}-\Phi(|U^{\varepsilon}_{Y}|)W_{YY}-\Big{[} \frac{1}{Y+\varepsilon}\left(1+\kappa^{\alpha}\mid U^{\varepsilon}_{Y}\mid^{ \alpha}\right)^{\frac{n-1-\alpha}{\alpha}}\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\ _(i) If \(n>0\), \(\kappa\neq 0\) or \(\kappa=0\), \(n\in\mathbb{R}\) then the estimate_ \[|\;U_{Y}^{\varepsilon}(T,Y)\;|\leq\max\Bigl{\{}K_{3}(R+\varepsilon_ {0}),\bigl{[}K_{3}(R+\varepsilon_{0})\kappa^{-\alpha}\bigr{]}^{\frac{1}{n}} \Bigr{\}}=K_{4} \text{ holds for}\quad(T,Y)\in\overline{Q}\] \[\text{ and}\quad\varepsilon\in(0,\varepsilon_{0}],\varepsilon_{0}\ll R; \tag{58}\] _(ii) If (49) and (48)\({}_{ii}\) are satisfied, then the estimate_ \[|\;U_{Y}^{\varepsilon}(T,Y)\;|\leq K_{5}\exp(T)\quad\text{holds for}\quad(T,Y)\in \overline{Q}\quad\text{and}\quad\varepsilon\in(0,\varepsilon_{0}],\varepsilon_ {0}\ll R, \tag{59}\] _where_ \[K_{5}=\max\Bigl{\{}\frac{1}{8\beta^{2}}\sup_{Y\in[0,R]}\;|\;b_{Y}(Y)\;|,\sup_{ Y\in[0,R]}\;|\;\Psi^{\prime}(Y)\;|,F^{-1}\Bigl{(}\frac{b_{0}(R+\varepsilon_{0})}{2 }\Bigr{)}\Bigr{\}} \tag{60}\] _(iii) If (49), (48)\({}_{iii}\) and \(K_{5}<\eta_{0}\) are satisfied, then the estimate (59) holds in \(\overline{Q_{\lambda}}\), \(Q_{\lambda}=\{(T,Y);0<T<\lambda;0<Y<R\}\), where_ 
\[\lambda<\ln\frac{\eta_{0}}{K_{5}},\quad\eta_{0}=\kappa^{-1}\left(-\frac{1}{n} \right)^{\frac{1}{\alpha}} \tag{61}\] Proof.: Differentiating (41) with respect to \(Y\), we obtain that \(U_{Y}^{\varepsilon}\) satisfies the problem \[P_{2}U_{Y}^{\varepsilon}=b_{Y}(Y) \text{ in }\quad Q,\quad U_{Y}^{\varepsilon}(T,0)=0\quad\text{for} \quad T\geq 0,\] \[U_{Y}^{\varepsilon}(0,Y)=\Psi^{\prime}(Y)\quad\text{for}\quad Y \in[0,R] \tag{62}\] where \[P_{2}W =8\beta^{2}W_{T}-\Phi(|U_{Y}^{\varepsilon}|)W_{YY}-A_{2}W_{Y}+ \frac{1}{(Y+\varepsilon)^{2}}\left(1+\kappa^{\alpha}\;|\;U_{Y}^{\varepsilon} \;|^{\alpha}\right)^{\frac{n-1}{\alpha}}W,\] \[A_{2} =\frac{1}{Y+\varepsilon}\left(1+\kappa^{\alpha}\;|\;U_{Y}^{ \varepsilon}\;|^{\alpha}\right)^{\frac{n-1-\alpha}{\alpha}}\left(1+n\kappa^{ \alpha}\;|\;U_{Y}^{\varepsilon}\;|^{\alpha}\right)^{\frac{1}{\alpha}-1-\alpha {\alpha}}\] \[+(n-1)\kappa^{\alpha}\left(\alpha+1+n\kappa^{\alpha}\;|\;U_{Y}^{ \varepsilon}\;|^{\alpha}\right)U_{Y}^{\varepsilon}\;|^{\alpha-2}\left(1+ \kappa^{\alpha}\;|\;U_{Y}^{\varepsilon}\;|^{\alpha}\right)^{\frac{n-1-2\alpha }{\alpha}} \tag{63}\] (i) Estimate (58) follows from (56) and the inequalities \[(1+\kappa^{\alpha}\;|\;U_{Y}^{\varepsilon}\;|^{\alpha})^{\frac{n-1}{\alpha}} \;|\;U_{Y}^{\varepsilon}\;|\geq|\;U_{Y}^{\varepsilon}\;|\quad\text{for}\quad n \geq 1,\quad\kappa\neq 0\quad\text{or}\quad\kappa=0\quad\text{and}\quad n\in \mathbb{R};\] \[(1+\kappa^{\alpha}\;|\;U_{Y}^{\varepsilon}\;|^{\alpha})^{\frac{n-1}{\alpha}} \;|\;U_{Y}^{\varepsilon}\;|\geq\left(\kappa^{\alpha}+\;|\;U_{Y}^{\varepsilon}\;| ^{-\alpha}\right)\;|\;U_{Y}^{\varepsilon}\;|^{n}\geq\kappa^{\alpha}\;|\;U_{Y}^{ \varepsilon}\;|^{n}\] ii) Under the conditions in Lemma 3.5\({}_{ii}\) boundary gradient estimate (51) holds from Lemma 3.2. Simple computations give us \[P_{2}(K_{5}\exp(T))=8\beta^{2}K_{5}\exp(T)+\frac{K_{5}}{(Y+\varepsilon)^{2}} \left(1+\kappa^{\alpha}\;|\;U_{Y}^{\varepsilon}\;|^{\alpha}\right)^{\frac{n-1}{ \alpha}}\exp(T)\geq 8\beta^{2}K_{5}\geq b(Y)\quad\text{in}\quad Q.\] Thus the function \(W(T,Y)=U_{Y}^{\varepsilon}(T,Y)-8\beta^{2}K_{5}\exp(T)\) is a solution of the problem \[P_{2}W=b_{Y}-P_{2}(K_{5}\exp(T))\leq 0\quad\text{in}\quad Q\] \[W(T,0)=-K_{5}\exp(T)\leq 0,\quad W(T,R)=-K_{5}\exp(T)\leq 0\quad\text{for}\quad T \geq 0,\] \[W(0,Y)=\Psi^{\prime}(Y)-K_{5}\leq 0\quad\text{for}\quad Y\in[0,R].\] From the maximum principle [10], [11] we get the estimate \(W(T,Y)\leq 0\) in \(\overline{Q}\), i.e., \(U_{Y}^{\varepsilon}(T,Y)\leq 8\beta^{2}K_{5}\exp(T)\) for \(T\geq 0\), \(Y\in[0,R]\). Analogously, by means of the function \(U^{\varepsilon}_{Y}(T,Y)+8\beta^{2}K_{5}\exp(T)\) we obtain the opposite estimate, which proves (59). (iii) The proof of (iii) is the same as the proof of (ii) in \(\overline{Q_{\lambda}}\).The only difference is that the operator \(P_{2}\) is uniformly parabolic in \(\overline{Q_{\lambda}}\) and the maximum principle is applicable [10], [11]. **Lemma 3.6**.: _Suppose \(U^{\varepsilon}(T,Y)\in C^{3}(Q)\cap C^{1}(\overline{Q})\), \(\alpha>0\), \(\beta>0\). 
If_ \[n>0,\quad\kappa\neq 0,\quad\text{or}\quad\kappa=0,\quad n\in\mathbb{R}, \tag{64}\] _then the estimate_ \[\mid U^{\varepsilon}_{YY}(T,Y)\mid\leq K_{6},\quad\text{holds for}\quad T\geq 0,\quad Y\in[0,R]\quad\varepsilon\in(0,\varepsilon_{0}],\quad\varepsilon_{0} \ll R\quad\text{with} \tag{65}\] \[K_{6}=\begin{cases}&\left(12\beta^{2}K_{3}+\frac{3}{2}b_{0}\right)\Big{/} \Phi(K_{4})\text{for}\quad 0<n<1\\ &\left(12\beta^{2}K_{3}+\frac{3}{2}b_{0}\right)\text{for}\quad n>1\end{cases} \tag{66}\] _If_ \[n=0,\quad\kappa\neq 0,\quad\sup_{Y\in[0,R]}B_{0}(Y)<\kappa^{-1},\quad\text{and \eqref{eq:10} holds,} \tag{67}\] _then the estimate_ \[\mid U^{\varepsilon}_{YY}(T,Y)\mid\leq\left(12\beta^{2}K_{3}+\frac{3}{2}b_{0} \right)\Big{/}\Phi(K_{5}\exp(T_{0}))=K_{7}, \tag{68}\] _is satisfied for \(T\in[0,T_{0}]\), \(T_{0}<\infty\), \(Y\in[0,R]\), \(\varepsilon\in(0,\varepsilon_{0}]\), \(\varepsilon_{0}\ll R\)._ _If_ \[n<0,\quad\kappa\neq 0,\quad\sup_{Y\in[0,R]}B_{0}(Y)<\left(\frac{n-1}{n} \right)^{\frac{n-1}{\alpha}}\kappa^{-1}\left(-\frac{1}{n}\right)^{\frac{1}{ \alpha}},\quad K_{5}<\eta_{0}=\kappa^{-1}\left(-\frac{1}{n}\right) \tag{69}\] _and (49) holds, then the estimate (68) is obtained for \(T\in[0,\lambda]\), \(Y\in[0,R]\), where \(\lambda\) is defined in (61)._ Proof.: Estimates (65), (68) follow immediately from (41), (53), (56), (58) and (59). **Lemma 3.7**.: _Suppose \(U^{\varepsilon}(T,Y)\in C^{3}(Q)\cap C^{1}(\overline{Q})\) is a solution of (41), \(\alpha>0\), \(\beta>0\)_ _(i) If (64) holds then the estimate_ \[\Big{|}\frac{\partial^{\gamma}}{\partial Y}\Big{(}\frac{\partial^{\mu}}{ \partial T}U^{\varepsilon}(T,Y)\Big{)}\Big{|}\leq K_{8}, \tag{70}\] _holds for \(0\leq\gamma+\mu\leq 3\), \(Y\in[\delta,R]\), \(R>\delta>0\), \(T\geq 0\), \(\varepsilon\in(0,\varepsilon_{0}]\), \(\varepsilon_{0}\ll R\) and \(K_{8}\) depending on \(\gamma\), \(\mu\), \(\delta\), \(R\), \(K_{1}\), \(K_{2}\), \(K_{3}\), \(K_{6}\), but is independent of \(\varepsilon\);_ _(ii) If (67) holds then the estimate (70) is satisfied for \(0\leq\gamma+\mu\leq 3\), \(Y\in[\delta,R]\), \(\delta>0\), \(T\in[0,T_{0}]\), \(T_{0}<\infty\), \(\varepsilon\in(0,\varepsilon_{0}]\), \(\varepsilon_{0}\ll R\) and \(K_{8}\) depends on \(\gamma\), \(\mu\), \(\delta\), \(R\), \(K_{1}\), \(K_{2}\), \(K_{3}\), \(K_{6}\), but is independent of \(\varepsilon\). If (69) holds then (70) is satisfied for \(0<T<\lambda\), \(Y\in[\delta,R]\), \(\lambda\) is defined in (61)._ Proof.: Estimate (70) follows from the Schauder estimates for equation (41) and Lemmas 3.1 - 3.6. **Theorem 3.1**.: _Suppose \(\alpha>0\), \(\beta>0\)._ _(i) If \(n>0\), \(\kappa\neq 0\) or \(\kappa=0\), \(n\in\mathbb{R}\), then problem (1)-(4) has a unique classical solution \(U(T,Y)\in C^{2}(Q)\cap C^{1}(\overline{Q})\);_ _(ii) If (67) holds, then problem (1)-(4) has a unique classical solution \(U(T,Y)\in C^{2}(Q_{0})\cap C^{1}(\overline{Q_{0}})\), where \(Q_{0}=\{(T,Y);T\in[0,T_{0}],Y\in[0,R]\}\) for every \(T_{0}<\infty\)._ _(iii) If (69) holds, then problem (1)-(4) has a unique local classical solution \(U(T,Y)\in C^{2}(Q_{\lambda})\cap C^{1}(\overline{Q_{\lambda}})\), where \(Q_{\lambda}=\{(T,Y);0<T<\lambda,0<Y<R\}\) and \(\lambda\) is defined in (61)._ Proof.: From Lemma 3.5 the equation (41) becomes uniformly parabolic in \(\overline{Q}\) for case (i), in \(\overline{Q_{0}}\) for case (ii) and in \(\overline{Q_{\lambda}}\) in case (iii), respectively. 
Existence of a classical \(C^{4}(Q)\cap C^{2}(\overline{Q})\), respectively, \(C^{4}(Q_{0})\cap C^{2}(\overline{Q_{0}})\) or \(C^{2}(Q_{\lambda})\cap C^{1}(\overline{Q_{\lambda}})\), solution to (41) follows by means of the method of continuity on parameter and the Schauder theory [10]. From Lemma 3.7 the sequences \(\{U^{\varepsilon}(T,Y)\}\), \(\{U^{\varepsilon}_{Y}(T,Y)\}\), \(\{U^{\varepsilon}_{T}(T,Y)\}\), \(\{U^{\varepsilon}_{TY}(T,Y)\}\), \(\{U^{\varepsilon}_{YY}(T,Y)\}\) for \(\varepsilon\longrightarrow 0\) are equicontinuous and uniformly bounded for \(T\geq 0\), \(Y\in[\delta,R]\), \(\delta>0\) in case (i), for \(T\in[0,T_{0}]\), \(T_{0}<\infty\), \(Y\in[\delta,R]\), \(\delta>0\) for case (ii) and for \(T\in[0,\lambda],Y\in[\delta,R]\) in case (iii). Moreover, \(\{U^{\varepsilon}(T,Y)\}\) and \(\{U^{\varepsilon}_{Y}(T,Y)\}\) are equicontinuous and uniformly bounded in \((\overline{Q})\) in case (i), in \((\overline{Q_{0}})\) in case (ii) and in \(\overline{Q_{\lambda}}\) in case (iii) with constants independent of \(\varepsilon\). By means of the Arzela-Ascoli theorem and a diagonalization argument, there exists a subsequence \(\{U^{\varepsilon_{i}}(T,Y)\}\), which converges to the desired solution for \(\varepsilon_{i}\longrightarrow 0\) ### Acknowledgments N.K. has been supported by the Grant No BG05M2OP001-1.001-0003-C01, financed by the Science and Education for Smart Growth Operational Program (2018-2023). ### Conflict of interest The authors declare no potential conflict of interests.
2307.05827
Relational Extraction on Wikipedia Tables using Convolutional and Memory Networks
Relation extraction (RE) is the task of extracting relations between entities in text. Most RE methods extract relations from free-form running text and leave out other rich data sources, such as tables. We explore RE from the perspective of applying neural methods on tabularly organized data. We introduce a new model consisting of Convolutional Neural Network (CNN) and Bidirectional-Long Short Term Memory (BiLSTM) network to encode entities and learn dependencies among them, respectively. We evaluate our model on a large and recent dataset and compare results with previous neural methods. Experimental results show that our model consistently outperforms the previous model for the task of relation extraction on tabular data. We perform comprehensive error analyses and ablation study to show the contribution of various components of our model. Finally, we discuss the usefulness and trade-offs of our approach, and provide suggestions for fostering further research.
Arif Shahriar, Rohan Saha, Denilson Barbosa
2023-07-11T22:36:47Z
http://arxiv.org/abs/2307.05827v1
# Relational Extraction on Wikipedia Tables using Convolutional and Memory Networks ###### Abstract Relation extraction (RE) is the task of extracting relations between entities in text. Most RE methods extract relations from free-form running text and leave out other rich data sources, such as tables. We explore RE from the perspective of applying neural methods on tabularly organized data. We introduce a new model consisting of Convolutional Neural Network (CNN) and Bidirectional-Long Short Term Memory (BiLSTM) network to encode entities and learn dependencies among them, respectively. We evaluate our model on a large and recent dataset and compare results with previous neural methods. Experimental results show that our model consistently outperforms the previous model for the task of relation extraction on tabular data. We perform comprehensive error analyses and ablation study to show the contribution of various components of our model. Finally, we discuss the usefulness and trade-offs of our approach, and provide suggestions for fostering further research. ## 1 Introduction Knowledge graphs (KG) are important lexical resources for various applications involving natural language, such as web searches, question answering, etc. However, KGs quickly become incomplete as the world changes. Therefore, adding new facts to a KG is crucial for maintaining its relevance. Relation extraction (RE) is the task of extracting relations between two entities in a piece of text. RE has been widely used as a way of KG completion. Although there is a plethora of work in relation extraction, most methods process continuous free-form text (e.g., complete sentences) mentioning entities, leaving out other important data sources such as tables. Unlike previous works that used neural networks on continuous text Lin et al. (2016); Zheng et al. (2017); Su et al. (2018); Xing and Luo (2019); Lee et al. (2019); Zeng et al. (2015), we focus on extracting relations from tabular data. We use a neural model for our analysis as neural methods have been shown to outperform traditional RE approaches that require feature engineering; Wang et al. (2022) give a recent review of neural methods in relation extraction. The model extracts relations between a pair of entities in different columns inside a table and, for encyclopedic and biographical articles, between the subject of the article and an entity in a table inside that article. The model uses a combination of convolutions and memory networks to automatically extract useful features and model dependencies among features, respectively. We show that our approach can consistently outperform and makes fewer errors than a previous model. Our main contributions are as follows. 1. We outperform a state-of-the-art neural model for extracting relations from table data. 2. We perform a comprehensive error analysis to highlight the cost of model parameters for a comparable performance gain. 3. Analyze the model performance for individual relations and investigate the strengths and limitations of the proposed method. All of our code is provided in this repository: [https://github.com/simpleParadox/RE_656](https://github.com/simpleParadox/RE_656) ## 2 Related Work Most prior works have mainly focused on sentence-level RE where deep neural networks have been used to assign relations for a pair of entities Lin et al. (2016); Zeng et al. (2015); Zheng et al. (2017); Xing and Luo (2019); Lee et al. (2019). 
Recent works have also moved the research direction from sentence level to document level RE to utilize richer information in documents and perform relation extraction across sentences. For document-level relation extraction, recent works have also used techniques such as constructing a document-level graph using dependency trees, coreference information, rule-based heuristics, and Graph Convolutional Networks (GCN) (Sahu et al., 2019; Christopoulou et al., 2019; Nan et al., 2020) for reasoning and predicting relations. As evident, RE from continuous text is explored widely, but only a few papers have addressed the task of RE from data that is non-free form, such as data organized into tables (Macdonald and Barbosa, 2020; Munoz et al., 2014). We need features that accurately describe the input data for the relation classification task. These features can be manually created or automatically learned from the input. Munoz et al. used manual feature-engineering techniques and traditional machine-learning models to extract relations in the form of Resource Description Framework (RDF) triples from tabular data. Although their method achieved an F1-score of 79.40%, it requires complicated manual feature engineering. On the contrary, most recent works overcome the task of manual feature engineering using end-to-end deep learning techniques, and we use a similar motivation to use neural models for automating feature extraction for relation classification. The most notable work related to ours is the one by Macdonald and Barbosa, looking at extracting relations from a given pair of entities in Wikipedia tables. They used embeddings from BERT (Devlin et al., 2019) and a simple neural network with 1 LSTM unit to classify relations. Although a highly effective approach, we found the method to be over-simplistic to properly capture many relations. We show that a more sophisticated model involving convolutions and bidirectional-LSTM may be a better approach for the task of classifying relations for entity pairs from tabular data. The choice of convolution networks here is justified by the many previous works showing that CNNs perform significantly better than traditional feature-based methods for relation extraction. Each instance in our data is composed of multiple components such as table headers, table caption, section title containing the table etc. A CNN will automatically learn the useful features, and then finally, max-pooling merges them to perform predictions globally. Previous works such as Zeng et al. (2015) introduced the convolutional architecture with piece-wise max pooling (PCNN) to capture structural information between entities and adopted multi-instance learning into PCNN for a dataset that was built using distant supervision (Mintz et al., 2009). They divided the input sentence into three segments and applied a max-pooling operation on each segment instead of the entire sentence. Secondly, Lin et al. (2016) used a CNN model for an RE task with sentence-level attention for multi-instance learning, where the model used informative sentences and de-emphasized noisy samples. Finally, Xing and Luo (2019) proposed a novel framework that uses separate head-tail convolution and pooling to encode input sentences and classified relations from coarse to fine to filter out negative instances. Therefore, the papers mentioned above have shown the effectiveness of CNN for automatically learning features from sentences. Hybrid neural models have also been shown to perform well in RE tasks. 
Zheng et al. (2017) introduced a hybrid neural network (NN) that consists of a bidirectional encoder-decoder LSTM module (BILSTM-ED) for named entity recognition and a CNN module for relation classification. Initially, they used BILSTM-ED to capture context and then fed obtained contextual information to the CNN module to improve relation classification. Furthermore, an encoder-decoder-based CNN+LSTM approach has been presented by Su et al. (2018) for distant supervised RE. Their CNN encoder captured sentence features from a bag of sentences and merged them into a bag representation, and the LSTM decoder predicted relations sequentially by modelling relations' dependencies. As hybrid networks have shown their utility for the RE task, we utilize a hybrid architecture for relation classification from tabular data. The utility of BiLSTM is also evident in tackling the task of RE. Lee et al. (2019) proposed an end-to-end recurrent neural model incorporating an entity-aware attention mechanism with latent entity typing. They applied BiLSTM to build recurrent neural architecture to encode the context of the sentence. We also include a BiLSTM as a component of our model since it has been shown to perform well on RE tasks by modelling contextual information and leveraging long-term dependencies. ## 3 Methods Here, we describe our task and our model in detail. ### Task The task is to extract relations between a pair of entities in which one or both appear inside a table. This task has been studied in the context of Wikipedia, so we use that encyclopedia in our discussion for clarity. Recall that each Wikipedia article is about a single entity, which is called the (entity) _subject_ of that article. Our task is then to find relations either between a pair of entities appearing on the same row (but different columns) of a table inside an article, or between an entity appearing inside a table and the subject entity of the article. For example, consider a table from the Wikipedia article "Nishan-e-Haider" shown in Figure 1. Each entity under the "name of the recipient" column ("Raja Muhammad Sarwar") is a recipient of the award "Nishan-e-Haider"1. Therefore, the article subject has a relation (award-nominee) with the recipient entity in the table cell. Furthermore, elements of the article besides table cell values, like a column header ("Name of the recipient"), table section title, and caption ("Recipients") provide additional contextual information to identify the relation "award-nominee" between corresponding entity pairs. Footnote 1: [https://en.wikipedia.org/wiki/Nishan-e-Haider](https://en.wikipedia.org/wiki/Nishan-e-Haider) ### Embeddings Before training our model, we obtain vector representations of our input. For each table in the dataset, we tokenize the table cell values representing the subject and object entities. We also use contextual information from the table, including the title of the section containing the table and table headers and captions (if present). In addition, we use the subject and object column indices to obtain related entity pairs for a table row. We do not use the table section paragraphs as Macdonald and Barbosa (2020) found no gain in performance by including them. We concatenate the entity pairs and the contextual information to obtain a training sample for a given relation. We then preprocess the sample and remove all non-alphanumeric characters (e.g. \(<\)SEP\(>\) token, brackets []) using Python's regex module. 
Then we use the pretrained BERT tokenizer2 based on WordPiece to tokenize the inputs. To obtain a vector representation of the concatenated input, we use HuggingFace's implementation of BERT (base_uncased) Devlin et al. (2019) pretrained on Wikipedia and BookCorpus and trained in an uncased fashion. We set the max length of the input to consist of 80 tokens, compared to the previous work by Macdonald and Barbosa (2020), which used 50 tokens. We retrieve a 768-dimensional word embedding for each token and then concatenate all the embeddings to represent the sample. We used BERT embeddings because they have been shown to perform well in various NLP tasks Baldini Soares et al. (2019); Wang et al. (2019); Nan et al. (2020); Tang et al. (2020). Moreover, we use contextual clues for tables for relation extraction, which justifies the use of contextual word embeddings. Footnote 2: [https://github.com/google-research/bert/blob/master/tokenization.py](https://github.com/google-research/bert/blob/master/tokenization.py) Figure 1: Table from the Wikipedia article for “Nishan-e-Haider”. The head and tail entities can be the cell values in the same row but different columns, or, the article title (Nishan-e-Haider) and any of the cell values of the table. ### Convolutional Neural Network As customary Lin et al. (2016); Xing and Luo (2019); Zeng et al. (2015), we fed the instance embeddings to a convolutional layer as it is capable of merging all the local features in input sentences. Since we are considering all surrounding information around the table, important information can appear anywhere in the input sentence. Therefore, it is necessary to leverage all local features and contextual clues in input samples. Convolution involves a dot product of the weight matrix with every k grams in the sequence S to obtain latent feature \(C^{(i)}\), which is shown in equation 1. \(W_{c}^{(i)}\in\mathbb{R}^{k\times d}\) indicates the \(i\)th convolutional filter, k indicates the context window size of the learnable filter and \(\mathit{b}^{(i)}\) indicates the bias term. To ensure input dimensions are consistent, we padded with zeros evenly to the left and right of the input sequence. Moreover, we employed 8 filters in the convolution process to learn different features. We applied the ReLU non-linear activation to the output for incorporating non-linearity. \[C^{(i)}\ =\ W_{c}^{(i)}\times\ S_{l:l+k-1}\ +\ b^{(i)} \tag{1}\] Finally, we used max-pooling to preserve the most prominent features derived from each filter, which is defined in the following equation. The max-pooling operation combines all local features to obtain a fixed-size representation of each input sentence. \[C^{(i)}_{max}=max\{C^{(i)}\} \tag{2}\] ### Long-Short-Term-Memory Network We have used bidirectional long short-term memory networks (BiLSTM) because both earlier and later information can be considered for sequentially modeling contextual information in forward and reverse order. Moreover, LSTM models were successfully applied for relation extraction tasks [20, 17] as they use memory blocks to capture long-term temporal dependencies. Macdonald and Barbosa (2019) also achieved high performance by using LSTMs to predict relations between pairs of entities in Wikipedia tables. Inspired by their work, we have experimented with BiLSTM to observe any performance increment. We use BiLSTM to capture interactions among hidden representations obtained from the pooling layer.
So, the input to the BiLSTM layer is a sequence obtained from the previous layer \(C_{max}=\{c_{1},c_{2},\ldots,c_{n}\}\). Here, \(n\) indicates half of the maximum token length preserved after downsampling the convolutional output representation using the max-pooling operation. \[\overrightarrow{h_{t}}=ForwardLSTM(c_{t},h_{t-1}) \tag{3}\] \[\overleftarrow{h_{t}}=BackwardLSTM(c_{t},h_{t-1}) \tag{4}\] \[x_{t}=[\overrightarrow{h_{t}};\overleftarrow{h_{t}}] \tag{5}\] The BiLSTM consists of two sub-LSTM networks: a forward LSTM and a backward LSTM for modeling dependencies in forward and backward order, respectively. \(\overrightarrow{h_{t}}\) and \(\overleftarrow{h_{t}}\) are the computed outputs at the \(t^{\text{th}}\) time step from the forward and backward LSTM. Then, we concatenate the hidden states \(\overrightarrow{h_{t}}\) and \(\overleftarrow{h_{t}}\) to obtain the final hidden representation \(h_{t}\). ### Dropout We use dropout at the BiLSTM layer for regularization to prevent overfitting. Dropout randomly _turns off_ a fraction of hidden units during the forward pass. It ensures that hidden units can identify features independently of each other rather than showing co-adaptation, and enables the model to learn a more general representation. ### Classification Layer We feed the output of the LSTM/BiLSTM layer into a fully connected layer. We then take the output of the fully connected layer and apply a softmax function to obtain the probability for each class. \[z_{k} =W\times X\] \[\hat{y} =softmax(z_{k})\] where X is the output of the LSTM/BiLSTM layer. We show the architecture of our proposed model in Figure 2. ## 4 Experiments ### Dataset We use the data from Macdonald and Barbosa (2019) in all of our experiments. The dataset contains individual JSON files for each relation. These JSON files were obtained from a Wikidata dump from March 2019. We used the subject and object column indexes present in the dataset to retrieve the subject and object entity pairs from Wikipedia articles. These subject and object entities indicate related entity pairs in the same row of a table, or an article subject and an associated table cell value. Moreover, the dataset also includes table information such as the title of the table section, table caption and headers, and the table section paragraph. To the best of our knowledge, this is the most recent and the largest dataset created specifically for the task of RE on tabular data. The dataset was annotated using distant supervision by aligning Freebase entities with mentions of pairs of entities appearing in the table row or article subject and table cell value. The dataset contains 217,834 tables and 29 relations (28 relation types and one _none_ relation). The dataset is highly imbalanced, with some relation classes having fewer than 500 examples. This results in a long-tailed dataset. We do not remove these long-tailed relations. ### Model Training and Evaluation To train and evaluate our model, we split the dataset into train and test splits. We follow the configurations used by Macdonald and Barbosa (2020), where 40% of the data was used for training the model, 40% for validation (for hyperparameter tuning), and 20% for testing. We use five seeds to obtain train, validation, and test splits and report our results, which are the average over the five seeds. We use sparse categorical cross-entropy loss3 to train the model. We used one Nvidia A100 GPU (40GB Memory) for model training.
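For orientation, a minimal Keras sketch of the architecture described in Section 3 is given below. The layer sizes follow Table 2 (8 convolutional filters, 8 BiLSTM units, dropout 0.2, learning rate 2e-5) and the input shape follows Section 3.2 (80 tokens of 768-dimensional BERT embeddings), but the kernel size and pooling width are illustrative assumptions; this is not the authors' released code.

```python
import tensorflow as tf

MAX_LEN, EMB_DIM, NUM_CLASSES = 80, 768, 29   # 28 relation types + one 'none' relation

def build_cnn_bilstm(kernel_size=3):
    # CNN + BiLSTM relation classifier over concatenated BERT token embeddings.
    inputs = tf.keras.Input(shape=(MAX_LEN, EMB_DIM))
    x = tf.keras.layers.Conv1D(filters=8, kernel_size=kernel_size,
                               padding="same", activation="relu")(inputs)  # eq. (1) + ReLU
    x = tf.keras.layers.MaxPooling1D(pool_size=2)(x)                       # eq. (2); halves the length
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(8, dropout=0.2))(x)                           # eqs. (3)-(5) with dropout
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)  # classification layer
    return tf.keras.Model(inputs, outputs)

model = build_cnn_bilstm()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Swapping the `Bidirectional` wrapper for a plain `LSTM(8)` layer gives the CNN+LSTM variant compared in Section 5.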
Footnote 3: [https://www.tensorflow.org/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy](https://www.tensorflow.org/api_docs/python/tf/keras/losses/SparseCategoricalCrossentropy) ### Comparison with Baseline Model We use the neural relation extraction model proposed by Macdonald and Barbosa (2020), consisting of a single LSTM unit, as the baseline. In order to have a fair comparison with the model introduced by Macdonald and Barbosa, we use F1 and accuracy to measure the performance of our model. We trained the model for forty epochs (as suggested by Macdonald and Barbosa). We summarize the number of training parameters of our model and compare it to that of the baseline in Table 1. We also performed an ablation study where we removed the convolutional layer and investigated the performance of the task for the BiLSTM model only. We show the differences between the hyperparameters of our model and the baseline model in Table 2. ## 5 Results We show the results in Table 3. For relation extraction on tabular data, the previous best model was proposed by Macdonald and Barbosa (2020). Although the performance of the baseline model is significantly high, it may benefit from leveraging automated feature extraction methods, such as using a CNN to extract features. We also add more LSTM units to increase the learning capability of the model. We refer to the upgraded model as CNN+LSTM or CNN+BiLSTM (based on whether we use LSTM or BiLSTM). As we see in Table 3, both CNN+LSTM and CNN+BiLSTM outperform the baseline model and are the current state-of-the-art model for relation extraction on tabular data. The accuracy of the CNN+LSTM model is 5.57% points higher, and the accuracy of the CNN+BiLSTM model is 5.8% points higher than the baseline. A higher accuracy will result in more accurately assigning a relation class to an entity pair. We believe that our model performed better because we used 8 BiLSTM units for capturing context and learning dependencies, and 8 CNN filters as a feature extractor. In contrast, Macdonald and Barbosa (2020) used only a single LSTM unit for modeling dependencies among input tokens. \begin{table} \begin{tabular}{|c|c|} \hline **Model** & **Parameters** \\ \hline \hline Macdonald and Barbosa (2020) & 4,559 \\ \hline CNN-LSTM (ours) & 40,581 \\ \hline CNN-BiLSTM (ours) & 50,405 \\ \hline BiLSTM (8 units) & 86,877 \\ \hline \end{tabular} \end{table} Table 1: Comparison of trainable model parameters for baseline (Macdonald and Barbosa, 2020), our proposed model, and the BiLSTM only model which we use for comparison with our proposed model. \begin{table} \begin{tabular}{|c|c|c|} \hline **Hyperparameter** & **Ours** & **Baseline** \\ \hline \hline CNN Filters & 8 & None \\ \hline LSTM/BiLSTM units & 8 & 1 \\ \hline Batch Size & 16 & 16 \\ \hline Optimizer & Adam & RMSProp \\ \hline Max token length & 80 & 50 \\ \hline Learning rate & 2e-5 & 0.001 \\ \hline LSTM/BiLSTM Dropout & 0.2 & None \\ \hline \end{tabular} \end{table} Table 2: Comparison of hyperparameters between the baseline model and our proposed model. Figure 2: **Proposed neural architecture.** Concatenated BERT embeddings are passed through a CNN layer, max-pooling layer followed by a ReLU activation function. An LSTM / biLSTM block is used to learn dependencies which is followed by a softmax activation to obtain probabilities for each relation label.
In comparison to the baseline method that used a maximum token length of 50, we used a maximum token length of 80 to capture more information for each instance. Furthermore, we use dropout that benefits the model, preventing overfitting and ensuring generalizability. Interestingly, our model was not able to outperform the baseline in terms of F1 score but was still able to provide comparable performance of around 92.46%. Although a model with better performance will lead to improvements in downstream tasks, for applications such as building knowledge graphs, the performance achieved by our model is sufficient. ### Ablation Study To understand the effectiveness of the convolution layer, we perform an ablation study. We perform the relation extraction on the dataset without using the CNN module, which we refer to as the BiLSTM-only model (with 8 units). The number of training parameters is shown in Table 1. Interestingly, removing the CNN module improves the performance on the task by 6.19% points more than the baseline. This improvement is likely due to the increase in the number of trainable parameters to over twice that of the CNN+LSTM model. This increase in the number of trainable parameters also leads to a more complex model. Such a result reinforces the prevalent idea that increasing the number of parameters is helpful for the model to learn information from the data. However, this comes at the cost of requiring more computing resources. ### Performance vs. Parameters Tradeoff For the dataset, a combination of convolution and memory networks performs better for the relation classification task. The number of trainable parameters for CNN+LSTM is almost ten times that of the baseline model. Although the cost of training increases, this increment in the number of parameters leads to more information being learned by the deep learning model, which results in better performance over the baseline. Moreover, the CNN+BiLSTM outperforms the CNN+LSTM model as it holds the capacity to learn more information from the data due to more trainable parameters in the BiLSTM (10,000 parameters more than CNN+LSTM model). In addition, BiLSTM equips the model with the capability of learning context in both forward and reverse order. In fact, when we train models by increasing the number of parameters, the classification accuracy increases. However, the F1 score does not follow a similar trend. Our model has a comparable F1 score which should be sufficient for relation extractions, although the baseline model performs better in terms of F1 score. As model complexity increases, so do the resources required for training the model. Compared to the baseline model, which has only 4,559 trainable parameters, our proposed model has a much higher number of parameters, significantly increasing training time. Although we do not investigate avenues of model interpretability in this work, models with more parameters generally tend to be less interpretable than models with fewer parameters. These factors should be considered when designing models for any task. Keeping this in mind, we used a max pooling layer after the CNN model to reduce the number of trainable parameters compared to the BiLSTM model without significant loss in generalizable performance. As the CNN+LSTM/BiLSTM model has a higher performance, this will directly translate into more relations being accurately added to an existing knowledge graph. 
Our model also converges faster than the baseline model (outperforming the previous model in terms of accuracy in about five epochs). This performance increase is likely due to the complexity of the model and more trainable parameters. \begin{table} \begin{tabular}{|c|c|c|} \hline **Model** & **Accuracy** & **F1** \\ \hline \hline Baseline & 92\% & **95\%** \\ \hline CNN-LSTM & **97.57\%** & 91.44\% \\ \hline CNN-BiLSTM & **97.80\%** & 92.46\% \\ \hline BiLSTM-only (8 units) & **98.19\%** & 94.35\% \\ \hline \end{tabular} \end{table} Table 3: Performance measures of our approach compared to the previous model. From the ablation study in section 5.1, we observe that using just the BiLSTM model leads to a performance gain over the CNN+BiLSTM model. However, the slight performance gain of 0.39% points in accuracy and 1.89% points in F1 score comes with the cost of a significant increase in the number of trainable parameters (36,472 more parameters than CNN+BiLSTM). This BiLSTM-only model leads to higher training time and a less interpretable architecture. Therefore, considering the computing cost and performance trade-off, we advocate for the CNN+BiLSTM for extracting relations from tabular data as a balance between the two extremes. Fine-tuning BERT may also be beneficial for our task, as fine-tuning approaches for language models have been shown to benefit the task at hand Xue et al. (2019); Su and Vijay-Shanker (2022); Liu et al. (2021). However, fine-tuning can be computationally expensive and may be impractical for scenarios where time is of importance. Moreover, fine-tuning BERT results in an increase in the number of trainable parameters, thus increasing the complexity of the model. Although beneficial for relation extraction, we used the embeddings from the pre-trained model in the interest of training and computation time. ### Difficult Relations We also wanted to investigate our model's ability to distinguish between difficult relations. We show a confusion matrix in Figure 3 that depicts the accuracy of our proposed model for all the relation classes (we chose the model for the best performing seed value). Relations such as director-film, actor-film, writer-film, and producer-film are some of the most confusing examples for the model. This may be due to the fact that such relations are very similar to each other, and it is thus difficult for the model to distinguish one from the other. One may choose to provide extra information from the Wikipedia article or the table to the model for a better understanding of the relations. More research is required to explore this idea. As model complexity increases, so does the performance, leading to a better ability to distinguish between relations. However, this may not directly translate to high classification accuracy for difficult relations. Figure 3: Confusion Matrix for CNN+BiLSTM. The y-axis are the predicted relation labels, and the x-axis are the true relation labels. Off-diagonal accuracy values show misclassifications for specific relations. A worthwhile direction to explore would be to design intelligent model training strategies that focus specifically on difficult relations without compromising performance on the rest of the classes. ## 6 Conclusion and Future Work In this work, we proposed a neural method that uses a combination of convolution and memory networks to extract relations from Wikipedia tables, which we evaluate on a benchmark dataset.
We also showed that combining convolution and max pooling helps to learn more about the data without a significant increase in the number of training parameters. We analyze our results and discuss the trade-off between the number of training parameters and model performance. Finally, we show how our model performs on relations that are difficult to distinguish between and suggest some possible improvements for such cases. We also conducted an ablation study to show the usefulness of the CNN layer. An extension of the ablation approach would be to remove certain input fields, like table cell values, headers, and captions, to evaluate model performance. An impactful idea in the space of relation extraction is the usage of the attention mechanism. Using the attention mechanism to identify tokens in the input that better represent a relation is a promising approach that may significantly improve tabular relation extraction. We also highlight the trade-offs between parameters and the performance of the model as a first step toward probing relation extraction models. As neural network models grow larger with more training parameters, it becomes even more crucial to provide explanations about the inner workings of the model and to keep it interpretable. In the future, we want to use sophisticated tools such as LIME Ribeiro et al. (2016) and SHAP Lundberg and Lee (2017) to explain how complex relation extraction models _understand_ the input to classify them into correct categories.
2302.11054
Conversational Text-to-SQL: An Odyssey into State-of-the-Art and Challenges Ahead
Conversational, multi-turn, text-to-SQL (CoSQL) tasks map natural language utterances in a dialogue to SQL queries. State-of-the-art (SOTA) systems use large, pre-trained and finetuned language models, such as the T5-family, in conjunction with constrained decoding. With multi-tasking (MT) over coherent tasks with discrete prompts during training, we improve over specialized text-to-SQL T5-family models. Based on Oracle analyses over n-best hypotheses, we apply a query plan model and a schema linking algorithm as rerankers. Combining MT and reranking, our results using T5-3B show absolute accuracy improvements of 1.0% in exact match and 3.4% in execution match over a SOTA baseline on CoSQL. While these gains consistently manifest at turn level, context dependent turns are considerably harder. We conduct studies to tease apart errors attributable to domain and compositional generalization, with the latter remaining a challenge for multi-turn conversations, especially in generating SQL with unseen parse trees.
Sree Hari Krishnan Parthasarathi, Lu Zeng, Dilek Hakkani-Tur
2023-02-21T23:15:33Z
http://arxiv.org/abs/2302.11054v1
# Conversational Text-to-SQL: An Odyssey into State-of-the-Art and Challenges Ahead ###### Abstract Conversational, multi-turn, text-to-SQL (CoSQL) tasks map natural language utterances in a dialogue to SQL queries. State-of-the-art (SOTA) systems use large, pre-trained and finetuned language models, such as the T5-family, in conjunction with constrained decoding. With multi-tasking (MT) over coherent tasks with discrete prompts during training, we improve over specialized text-to-SQL T5-family models. Based on Oracle analyses over n-best hypotheses, we apply a query plan model and a schema linking algorithm as rerankers. Combining MT and reranking, our results using T5-3B show absolute accuracy improvements of 1.0% in exact match and 3.4% in execution match over a SOTA baseline on CoSQL. While these gains consistently manifest at turn level, context dependent turns are considerably harder. We conduct studies to tease apart errors attributable to domain and compositional generalization, with the latter remaining a challenge for multi-turn conversations, especially in generating SQL with unseen parse trees. Sree Hari Krishnan Parthasarathi, Lu Zeng, Dilek Hakkani-Tur, Alexa AI, Amazon ## 1 Introduction Text-to-SQL is an important research topic in semantic parsing [1, 2, 3, 4, 5, 6, 7]. The Spider [3] and CoSQL [5] datasets allow for progress on complex, cross-domain, single-turn and multi-turn text-to-SQL tasks, respectively; they utilize a common set of databases and have competitive leaderboards, demonstrating the difficulty of the tasks. In contrast to Spider, CoSQL was collected as entire dialogues, and hence includes additional challenges for the text-to-SQL task in terms of integrating dialogue context. In addition to the challenges in general-purpose code generation [8, 9], where the output of the system is constrained to follow a grammar, the text-to-SQL problem is underspecified without a schema. Since public text-to-SQL tasks use relatively small datasets, previous solutions employ: a) small encoder/decoder models, with constraints on the decoder [10, 11]; b) large pretrained language models (LMs) without constraints on the decoder [12, 13], pruning finalized hypotheses. PICARD [14], a top entry on the text-to-SQL leaderboard, finetunes a pretrained T5 model and imposes SQL syntax during beam search. Our focus is on multi-turn, conversational text-to-SQL (CoSQL), and we build a system utilizing PICARD. Motivated by the notion that multi-task training (MT) [15] using coherent tasks and utilizing inductive biases can improve accuracy, we aggregate coherent task data (from CoSQL, Spider, and SParC1) along with task-specific discrete prompts during training. Next, persuaded by our previous research [16], we conduct Oracle studies on n-best lists from the PICARD CoSQL system, inferring that reranking can be helpful. We adapt the two reranking methods from [16], query plan (QP) and schema linking (SL), and show that both methods can help improve multi-turn text-to-SQL.
With accuracy on CoSQL being reported using exact-set-match accuracy (EM) and execution accuracy (EX), with T5-Large we observed: a) MT leads to 2.4% and 1.7% absolute improvement on EM and EX; b) combined reranking approaches yield 1.9% and 2.2% improvements; c) combining MT with reranking, with T5-Large we obtain improvements of 2.1% in EM and 3.7% in EX over a T5-Large PICARD baseline. This improvement is consistent on larger models, using T5-3B yielded about 1.0% in EM and 3.4% in EX over SOTA baseline. We also submitted our system to the CoSQL leaderboard, our system consistently improves over PICARD baseline on the held out test set on question match (1.2% absolute) and interaction match (1.1% absolute). Lastly, we analyze errors in terms of zero-shot domain and compositional generalization. All improvements presented in this paper are absolute gains (i.e., not relative). Footnote 1: SParC is a multi-turn text-to-SQL dataset, while CoSQL is conversational (both based on the same databases as Spider) – meaning that there are non-SQL interactions where the system can ask clarifications. **Contributions.** The contributions of this paper are: a) proposing MT and combining with n-best reranking to improve over SOTA on a competitive multi-turn, conversational text-to-SQL task; b) analysis of gains at turn level, zero-shot domain generalization, and compositional generalization - showing challenges in compositional generalization for multi-turn conversations. Figure 1: Proposed text-to-SQL system consists of 3 parts: (a) multi-tasking on coherent tasks with discrete prompts; (b) constrained decoding with PICARD; (c) N-best list reranking with SL and QP. ## 2 Conversational Text-to-SQL ### Related Work **Conversational text-to-SQL**: A comprehensive survey of text-to-SQL is provided in [17]. Historically, much more research has been undertaken in single-turn text-to-SQL [1, 3, 18]. SParC and CoSQL are recent multi-turn text-to-SQL datasets, with CoSQL being more realistic dialogues. Most previous works on multi-turn text-to-SQL attempt to encode context in utterances and/or previous SQL queries into the model [19, 20, 21, 22, 23]. On the other hand, PICARD [14] does not treat utterance/SQL context in a special fashion, and relies on large pretrained LMs (PLM) with constrained decoding, obtaining SOTA results. Our work brings context modeling into [14] in two ways: a) using contextual information in the two n-best reranking methods[16, 22]; b) better context learning via data augmentation and prompting to reduce cross-task interactions in MT [24, 25]. **Compositional generalization** is an active area of research in semantic parsing [26], with the focus primarily on single-turn utterances [27]. On the other hand, multi-turn text-to-SQL benchmarks are set up to evaluate cross-domain generalization [4, 5]. In this work, we attempt to bridge both aspects: compositionality analysis in multi-turn cross-domain text-to-SQL. ### Proposed Approach The overview of the proposed system is shown in Fig 1. PICARD is part (b) in the figure, while multi-task prompts and n-best list reranking methods are shown in parts (a) and (c). **A) Proposed Multi-task Prompt (MT) Approach**: We ensemble data from similar semantic parsing tasks (CoSQL, Spider, and SParC), and inject an inductive bias to the model by using discrete, task-specific, natural language prompts in input. We simply use the task names as the prompts, and the CoSQL example shown in Fig 1 has a prompt of "cosql :". 
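To make the prompting scheme concrete, the snippet below sketches one plausible way of serializing a prompted model input: the task-name prompt just described is prepended to the current turn, followed by the database name, the serialized schema, and the previous turns in reverse chronological order (the input fields used by the underlying PICARD-style model, cf. Section 3.2). The exact delimiters and field order are assumptions, and the database id in the example is hypothetical.

```python
# Illustrative serialization of a prompted model input (not the authors' exact format).
# Delimiters ('|', ':') and field order are assumptions.
def build_input(task, turns, db_id, schema):
    current, previous = turns[-1], turns[:-1]
    schema_str = " | ".join(f"{table} : {', '.join(cols)}" for table, cols in schema.items())
    history = " | ".join(reversed(previous))          # most recent previous turn first
    parts = [f"{task} : {current}", db_id, schema_str]
    if history:
        parts.append(history)
    return " | ".join(parts)

print(build_input(
    task="cosql",
    turns=["What are all the airlines?", "Of those, which is the oldest?"],
    db_id="flight_2",                                  # hypothetical database id
    schema={"airlines": ["airline_id", "name", "country"]},
))
```

The same template with a "spider :" or "sparc :" prefix would cover the other two tasks in the multi-task mixture.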
**B) Proposed Reranking with MT Query Plan (QP) Model**: Starting from [16], we build a model that focuses specifically on improving long span coherence: a multi-label classification model that generates a query plan predicting whether the predicted SQL query should contain any of the 8 clauses (WHERE, EXCEPT, UNION, and INTERSECT, GROUP BY, HAVING, ORDER BY, LIMIT). We adapt the approach so that the QP model is trained in an MT fashion. **C) Proposed Reranking with Schema Linking (SL) on conversational context**: We adapt the heuristic algorithm in [16] for the multi-turn setting. For each predicted SQL query in n-best list, we follow three steps: a) extract slot names and their respective values from the conditions in the WHERE clause; then check if the slot value exists in any of the referenced tables in the FROM clause. b) obtain a list of candidate slot names/values from the current and previous turns in the interaction, which are exact/partial occurrences of the column/table names and string values in the question with name-based and value-based linking described in RAT-SQL [10]; c) For value linking, we next consider prefix/abbreviation matches on slot values with categorical types. ## 3 Experimental Setup ### Datasets and Metrics We briefly describe the datasets and metrics used in this paper [3]. **A) Text-to-SQL Datasets**: All 3 datasets (CoSQL, Spider, and SParC) are based on 200 databases (covering 138 domains), each with multiple tables. The standard protocol splits them into 140 databases for training, 20 databases for development (DEV), and 40 databases are held back for evaluation. Databases have no overlaps across the splits. CoSQL is a conversational dataset, containing 3k dialogues, with 30k+ turns and 10k+ annotated SQL queries (collected in Wizard-of-Oz fashion). Dialogues are split into 2,164 for training, and 292 for DEV, and 551 for evaluation. Spider is a single-turn dataset, containing 10,181 questions with 5,693 SQL queries. The examples are split into 7,000 for training, 1,034 for DEV, and 2,147 for evaluation. SParC is a sequential text-to-SQL dataset containing 4,298 coherent question sequences; these are split into train (3034), DEV (422), and evaluation (842). We mainly present results on CoSQL (with some discussion on Spider to analyze generalization properties). **B) Text-to-SQL Metrics**: As mentioned previously, performance is evaluated using EM and EX on the CoSQL and Spider (DEV), with DEV being used for eval, and no hyperparameter tuning being done on them. EM compares each clause between a prediction and its corresponding groundtruth SQL query. The predicted SQL query is correct only if all of the components match. This metric does not take values into account. EX compares the execution output of the predicted SQL query and its corresponding groundtruth SQL queries. Note that both EM and EX can lead to false positives and false negatives. ### Models **A) Baseline Model**: PICARD [14] is our baseline, and it is trained on Spider and CoSQL; we focus on two T5 model sizes, T5-Large and T5-3B. Input to the model includes current natural language turn, database name, and serialized database schema (table_name : coll,..., coln) with database content, and previous turns from the dialogue in reverse chronological order. During inference, constrained decoding (CD) is integrated into beam search, with a beam size of 10. 
**B) Multi-tasking Prompting (MT) T5 Model**: We introduced two changes: a) augment CoSQL and Spider with SParC and weight them equally during training; b) to reduce variance in estimated parameters, we inject inductive biases with task specific discrete prompts, extending the input. We finetune the model on p3dn_24xlarge instances (8 NVIDIA Tesla V100 GPUs) with teacher forcing and cross-entropy loss for 3000 epochs using a batch size of 2000 and a learning rate of \(1e^{-4}\). **C) MT Query Plan (QP) Model**: We finetune RoBERTa-Large models with a sequence classification head on p3.2xlarge instances (1 NVIDIA Tesla V100 GPU). We reused the input from MT T5 model (without database content), and output 1-hot encoded to predict labels extracted from groundtruth queries based on the existence of the 8 clauses. Models are finetuned with binary cross entropy loss for 100 epochs using a batch size of 5 and a learning rate of \(1e^{-5}\). ## 4 Results ### Multi-Tasking Approach While we follow a restrictive MT strategy (3 coherent tasks, Spider, CoSQL, and SParC), UnifiedSKG [25] follows a generalist MT strategy of training on 21 structured knowledge grounding (SKG) tasks (including the 3 tasks above). Table 1 shows results on CoSQL without CD against baseline. Note that the baseline (52.5%) performs better than UnifiedSKG (51.6%). Furthermore, the proposed MT approach performs significantly better than both models. \begin{table} \begin{tabular}{|c|c|} \hline MT Methods & EM\% \\ \hline Baseline & 52.5 \\ UnifiedSKG MT-P [25] & 51.6 \\ Proposed MT & 54.6 \\ \hline \end{tabular} \end{table} Table 1: T5-large: MT performance on CoSQL without CD. ### Oracle Analysis and Reranking Approaches We enable CD and perform Oracle analysis on 10-best hypotheses. The study is done on the baseline and proposed MT model at two selected T5 model sizes: T5-Large and T5-3B. In Table 2, for each row block, the first row is the 1-best, while the other row shows Oracle accuracies. As can be observed, both EM and EX improve significantly: for example, the T5-Large proposed MT model gains 12.6% and 10.8% absolute for EM and EX respectively. Similar gains can be seen for the baseline at the same model size, as well as for the T5-3B models, following trends observed in our paper [16]. Note that with T5-large models, we observed that with CD the first and third rows in Table 1 yielded an EM of 54.4% and 56.8% respectively. Table 3 lists reranking results on 10-best hypotheses obtained from baseline using a T5-Large model. SL contributes 0.4% and 2.0% absolute improvement on EM and EX, while QP yields smaller gains, with the gains from SL and QP being additive. ### Combined Results with MT and Reranking We present results combining MT and reranking approaches (SL and QP) in Table 4. Compared to the baseline (T5-Large PICARD CoSQL), the proposed system achieves significant improvement: we observe 2.1% and 3.7% absolute improvement on EM and EX. The table also shows the effect of each component: MT affects the model performance the most on EM and SL leads the most improvement on EX, while QP contributes the least. Note that the overall gains also carry over to T5-3B, with the EM of 58.0% and EX of 70.0% representing improvements over the SOTA baseline model. We also submitted our system to CoSQL leaderboard, on the held out test set, PICARD baseline obtained a question match of 54.6%, while our method achieved 55.8%. We also improved the interaction match from 23.7% to 24.8% with our proposed approach. 
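For reference, the Oracle numbers and reranking gains discussed in Sections 4.2 and 4.3 follow a simple recipe over n-best lists, sketched below: a turn counts as correct under the Oracle if any hypothesis in the beam matches the gold query, while after reranking only the top-scored hypothesis is evaluated. The functions `matches` and `score` are stand-ins for the official EM/EX evaluators and for a reranker such as QP or SL, neither of which is reproduced here.

```python
# Sketch of Oracle and reranked accuracy over n-best lists; `matches` and `score`
# are placeholders for the EM/EX evaluator and an external reranker, respectively.
def oracle_accuracy(nbest_lists, gold_queries, matches):
    hits = sum(any(matches(hyp, gold) for hyp in nbest)
               for nbest, gold in zip(nbest_lists, gold_queries))
    return hits / len(gold_queries)

def reranked_accuracy(nbest_lists, gold_queries, matches, score):
    hits = sum(matches(max(nbest, key=score), gold)
               for nbest, gold in zip(nbest_lists, gold_queries))
    return hits / len(gold_queries)
```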
**Task difficulty levels**: We now analyze the gains over difficulty levels. Table 5 shows that the gains from the proposed system (T5-Large model combining MT and reranking) consistently carry over across all the pre-defined difficulty levels in the CoSQL task. **Effect of additional data on MT**: The proposed MT approach has two parts: a) utilizing additional SParC data compared to the baseline; b) attaching task specific prompts to model inputs. For T5-Large, the effect of adding extra data is presented in Table 6 with CD (1.2% and 0.4% absolute improvement on EM and EX). Adding task specific prompts gives another 1.2% and 1.3% improvement on the metrics. ### Turn Level Analysis Table 7 shows the distribution of difficulty levels across turns. It confirms that later turns have more complex SQL queries. Since the sample counts are extremely small after Turn 5, we merged all subsequent turns into that bin, so that each row in Table 7 has at least 120 examples. These bins are then used in Fig 2 to present the performance of the baseline and proposed systems at each turn. The green line shows sample counts of the bins, and numbers can be read from the y-axis on the right side. From the bar chart, we can clearly see that the proposed system consistently performs better than the baseline (never worse than the baseline). **Dialogue Context Dependency**: Some turns are highly dependent on the context provided by previous turns in the dialogue, while others are independent of the context. To understand the performance of the proposed system in both cases, we manually annotated each turn in the CoSQL DEV set as to whether the context is needed. We then divide the DEV examples into two groups: a) 712 context-independent examples; b) 294 context-dependent examples. Table 8 shows that for T5-Large the proposed system achieves 2.1% and 3.2% absolute improvement on EM and EX on the context-independent group, while yielding 2% and 4.8% improvement on the context-dependent group. We also present ablations removing one component at a time, with the context-independent group yielding a similar conclusion as in Section 4.3. Meanwhile, the context-dependent group shows that MT contributes the most on both metrics. \begin{table} \begin{tabular}{|l|c c|} \hline Data & EM\% & EX\% \\ \hline Baseline & 54.4 & 63.7 \\ W/ extra data & 55.6 & 64.1 \\ + MT prompt & 56.8 & 65.4 \\ \hline \end{tabular} \end{table} Table 6: T5-large: Additional data effect. \begin{table} \begin{tabular}{|l|c c|} \hline Methods & EM\% & EX\% \\ \hline Baseline & 54.4 & 63.7 \\ + SL & 54.8 & 65.7 \\ + QP & 54.9 & 63.9 \\ + QP + SL & 55.3 & 65.9 \\ \hline \end{tabular} \end{table} Table 3: T5-Large: Reranking approaches QP and SL on CoSQL.
\begin{table} \begin{tabular}{|l|c c|c c|} \hline Method & \multicolumn{2}{c|}{T5-Large} & \multicolumn{2}{c|}{T5-3B} \\ & EM\% & EX\% & EM\% & EX\% \\ \hline Baseline & 54.4 & 63.7 & 57.1 & 66.6 \\ \hline Proposed & _56.5_ & _67.4_ & **58.0** & **70.0** \\ \(-\) MT & 55.3 & 65.9 & 57.8 & 69.1 \\ \(-\) SL & 56.6 & 65.6 & 58.2 & 69.2 \\ \(-\) QP & 56.7 & 67.2 & 58.1 & 69.5 \\ \hline \end{tabular} \end{table} Table 4: CoSQL: Baseline, combined results using MT and reranking, and ablations removing one component at a time. \begin{table} \begin{tabular}{|l|c|c c|c c|} \hline Diff & count & \multicolumn{2}{c|}{Baseline} & \multicolumn{2}{c|}{Proposed} \\ & & EM\% & EX\% & EM\% & EX\% \\ \hline Easy & 417 & 74.1 & 77.9 & 75.3 & 82.3 \\ Med & 320 & 51.2 & 60.0 & 53.8 & 63.8 \\ Hard & 162 & 34.0 & 59.3 & 37.7 & 61.1 \\ Extra & 107 & 17.8 & 26.2 & 19.6 & 29.9 \\ \hline Total & 1006 & 54.4 & 63.7 & 56.5 & 67.4 \\ \hline \end{tabular} \end{table} Table 5: T5-large: Performance across difficulty levels on CoSQL. ## 5 Discussion ### Performance on Spider Table 9 shows that the proposed approach also improves over the baseline on the Spider task. For example, compared to the baseline Spider task-specific model (with T5-Large), we observed 0.6% and 1.7% absolute improvement on EM and EX respectively. The T5-Large proposed system even outperforms the SOTA baseline Spider model (a T5-3B model). The table also presents ablations removing one component at a time, showing similar trends as previously in Section 4.3. ### Zero-Shot Generalization (ZSG) CoSQL and Spider tasks are designed to be cross-domain (without database overlaps over splits), so system performances are reported in a zero-shot setting. However, we wanted to tease apart generalization performance separately due to compositionality and zero-shot domain. To attribute zero-shot "only" generalization performance, we ignore DEV examples with parse trees2 unseen in MT training data (meaning that all the remaining parse trees were observed in the training data). However, this still contains examples with unseen DB schema: we then report ZSG for the proposed and baseline systems in Table 10. The performance reported for all systems is now better (because these are slightly easier examples, having removed unseen parse trees). The system generalizes well on both the CoSQL and Spider tasks, with at least 2% improvement in the metrics in each case. Consistent with previous results, QP contributes the least in this study. A challenge in performing inference on unseen DB schema is the primary/foreign key relationships among tables, such as the example below (note that the turns are concatenated with a separator). Footnote 2: To make the parse trees meaningful for this exercise, we prune them so that the leaves for each clause are ignored (values, column and table names).
### Compositional Generalization (CG) To analyze CG, we mix the train and DEV sets of the 3 datasets and re-do the split (meaning that the new train and DEV overlap in DB schema, and therefore are not zero-shot). We re-train the baseline and the proposed approach, and then present the systems with held-out examples having unseen parse trees. Table 11 shows the results: the performance of all systems are poor (even more so on CoSQL, than on Spider), showing that CG is a challenge, especially in conversational multi-turn tasks. The proposed system obtains a 1% absolute improvement on EM and EX on both CoSQL and Spider. Based on the ablation studies, QP hurts performance. Below is an example of a SQL with a novel parse tree that the model has to construct now and gets wrong. ## 6 Conclusions Using task-specific prompts and aggregating coherent task data from Spider, SParC, and CoSQL, we built a T5-family of text-to-SQL models. Generating 10-best lists from this system, and reranking them using query plan model and schema linking algorithm, we achieve significant improvement over the SOTA baseline. Using T5-3B, we obtain absolute improvements of \(1.0\%\) in EM and \(3.4\%\) in EX on the development set over a baseline SOTA system on the competitive CoSQL leaderboard. We achieved consistent improvements on the held out test set of the leaderboard as well. Proposed approach generalizes well to other tasks. Teasing apart generalization performance in terms of zero-shot only and compositionality, while the proposed approach improves over the baseline in both aspects, our study shows that compositionality remains a huge challenge, especially in conversational settings.
2301.07366
Revisiting HESS J1809$-$193 -- a very-high-energy gamma-ray source in a fascinating environment
HESS J1809$-$193 is one of the unidentified very-high-energy gamma-ray sources in the H.E.S.S. Galactic Plane Survey (HGPS). It is located in a rich environment, with an energetic pulsar and associated X-ray pulsar wind nebula, several supernova remnants, and molecular clouds in the vicinity. Furthermore, HESS J1809$-$193 was recently detected at energies above 56 TeV with HAWC, which makes it a PeVatron candidate, that is, a source capable of accelerating cosmic rays up to PeV energies. We present a new analysis of the TeV gamma-ray emission of HESS J1809$-$193 with H.E.S.S., based on improved analysis techniques. We find that the emission is best described by two components with distinct morphologies and energy spectra. We complement this study with an analysis of Fermi-LAT data in the same region. Finally, taking into account further multi-wavelength data, we interpret our results both in a hadronic and leptonic framework.
Lars Mohrmann, Vikas Joshi, Jim Hinton, Stefan Funk
2023-01-18T08:19:58Z
http://arxiv.org/abs/2301.07366v1
# Revisiting HESS J1809\(-\)193 -- a very-high-energy gamma-ray source in a fascinating environment ###### Abstract: HESS J1809\(-\)193 is one of the unidentified very-high-energy gamma-ray sources in the H.E.S.S. Galactic Plane Survey (HGPS). It is located in a rich environment, with an energetic pulsar and associated X-ray pulsar wind nebula, several supernova remnants, and molecular clouds in the vicinity. Furthermore, HESS J1809\(-\)193 was recently detected at energies above 56 TeV with HAWC, which makes it a PeVatron candidate, that is, a source capable of accelerating cosmic rays up to PeV energies. We present a new analysis of the TeV gamma-ray emission of HESS J1809\(-\)193 with H.E.S.S., based on improved analysis techniques. We find that the emission is best described by two components with distinct morphologies and energy spectra. We complement this study with an analysis of Fermi-LAT data in the same region. Finally, taking into account further multi-wavelength data, we interpret our results both in a hadronic and leptonic framework. ## 1 Introduction HESS J1809\(-\)193 is an unassociated very-high-energy (VHE; \(E>100\) GeV) \(\gamma\)-ray source that was discovered in 2007 [1] as part of the H.E.S.S. Galactic Plane Survey (HGPS; [2]). It is located close to the energetic pulsar PSR J1809\(-\)1917 (spin-down power \(\dot{E}=1.8\times 10^{36}\) erg s\({}^{-1}\), characteristic age \(\tau_{c}=51\) kyr [3], distance \(d\approx 3.3\) kpc [4]), which powers an X-ray pulsar wind nebula (PWN; see e.g. [5]). Initially, HESS J1809\(-\)193 was interpreted as being connected to this PWN, that is, due to inverse Compton (IC) emission from high-energy electrons accelerated in the pulsar wind (the "leptonic scenario"; [1]). However, the region also harbours several supernova remnants (e.g. G011.0\(-\)00.0, at a distance of \(d\approx 3\) kpc [6], which has been proposed as the progenitor SNR of PSR J1809\(-\)1917 [7], although the association is not firm) as well as dense molecular clouds [6, 7]. This has motivated an interpretation of HESS J1809\(-\)193 in a "hadronic scenario", in which the \(\gamma\)-ray emission is due to the interaction of cosmic-ray nuclei - accelerated at the SNR shock front - with gas in the molecular clouds [6, 8]. Recently, the HAWC experiment has detected \(\gamma\)-ray emission from HESS J1809\(-\)193 up to energies of \(\sim\)100 TeV [9]. Here, we present a summary of a new H.E.S.S. analysis of HESS J1809\(-\)193, which is complemented by a _Fermi_-LAT analysis of the same region. For further details, we refer to the full publication about this study, which is currently under journal review [10]. ## 2 Data analysis H.E.S.S. is an array of Cherenkov telescopes sensitive to \(\gamma\) rays in the 100 GeV\(-\)100 TeV energy range, located in Namibia [11]. Here, we used 93.2 h of data taken on HESS J1809\(-\)193 with the four 12 m diameter telescopes. For the high-level analysis, we have employed the Gammapy package (v0.17; [12, 13]) and carried out a spectro-morphological likelihood analysis that uses as input a background model constructed from archival H.E.S.S. observations (see [14] for details). The energy threshold of the combined data set is 0.27 TeV. _Fermi_-LAT is a pair conversion detector onboard the _Fermi_ satellite, sensitive to \(\gamma\) rays between \(\sim\)20 MeV and \(\sim\)300 GeV [15]. For the _Fermi_-LAT analysis, we have used 12.4 yr of data and employed the Fermitools1 (v2.2.0) and Fermipy2 (v1.1.5) packages. 
We analysed events passing the P8R3_SOURCE event selection (event class 128, event type 3), using a binned analysis. Footnote 1: [https://fermi.gsfc.nasa.gov/ssc/data/analysis/software](https://fermi.gsfc.nasa.gov/ssc/data/analysis/software) Footnote 2: [https://fermipy.readthedocs.io](https://fermipy.readthedocs.io) ## 3 Results In Fig. 1 we show flux maps of the HESS J1809\(-\)193 region. The source is extended on a scale of about 1\({}^{\circ}\), and shows a bright peak of emission close to its centre. The significance maps in Fig. 2 illustrate our modelling of HESS J1809\(-\)193. First, we have attempted to model the source with a single component that uses an elongated Gaussian as the spatial model. However, as is evident from Fig. 2(b), the model is not capable of describing the extended emission and the bright peak simultaneously. We therefore adopted a 2-component model, in which a second component is added to describe the compact bright peak (using a symmetric Gaussian as the spatial model). Fig. 2(c) shows that this model yields a satisfactory description of the data (statistically, it is preferred by 13.3\(\sigma\) over the 1-component model). We refer to the two components as component A and B, respectively. Component A has a 1-\(\sigma\) major-axis extent of \(\sigma_{\rm A}=(0.62\pm 0.03_{\rm stat}\pm 0.02_{\rm sys})\) deg and an eccentricity of \(e_{\rm A}=0.82\pm 0.03_{\rm stat}\), whereas for component B \(\sigma_{\rm B}=(0.095\pm 0.007_{\rm stat}\pm 0.003_{\rm sys})\) deg. Figure 1: Map showing the \(\gamma\)-ray flux above 0.27 TeV from HESS J1809\(-\)193. (a) full region. (b) zoom-in on core region. The position of PSR J1809\(-\)1917 is marked with a black triangle, cyan circles denote the positions of SNRs. The green/purple dot and lines display the position and extent of the two components (A/B) of HESS J1809\(-\)193 (cf. also Fig. 2). The grey dashed line marks the Galactic plane. Figure 2: H.E.S.S. significance maps for HESS J1809\(-\)193. Panel (a) shows the pre-modelling map, whereas panels (b) and (c) show the residual significance map for the 1-component and the 2-component model, respectively. White dashed circles denote regions excluded from the analysis. The energy spectra of the two components - which are modelled simultaneously - are shown in Fig. 3. When fitting power-law (PL) models, \({\rm d}N/{\rm d}E\propto(E/1\,{\rm TeV})^{-\Gamma}\), to both components, we obtained spectral indices of \(\Gamma_{\rm A}=2.24\pm 0.03_{\rm stat}\pm 0.02_{\rm sys}\) and \(\Gamma_{\rm B}=1.98\pm 0.05_{\rm stat}\pm 0.03_{\rm sys}\) for component A and B, respectively. However, the upper limits at high energies for component A indicate that the spectrum may cut off before reaching \(100\,{\rm TeV}\). Indeed, a power law with exponential cut-off (ECPL), \({\rm d}N/{\rm d}E\propto(E/1\,{\rm TeV})^{-\Gamma}\cdot\exp(-E/E_{c})\), is preferred (by \(8\sigma\)) for this component, in which case we obtained a spectral index \(\Gamma_{\rm A}=1.90\pm 0.05_{\rm stat}\pm 0.05_{\rm sys}\) and a cut-off energy of \(E_{c}^{A}=(12.7^{+2.7}_{-2.1}|_{\rm stat}\,{}^{+2.6}_{-1.9}|_{\rm sys})\,{\rm TeV}\). For component B, an ECPL model is not significantly preferred over the PL model. In Fig. 4, we illustrate the results of the _Fermi_-LAT analysis. Similarly to the case of H.E.S.S., extended emission around PSR J1809\(-\)1917 is visible, although no bright peak that would correspond to component B of HESS J1809\(-\)193 can be identified. 
Following the _Fermi_-LAT 4FGL-DR2 catalogue [17, 18], we modelled the emission with two sources: J1811.5\(-\)1925, which is modelled as a point source and connected to the nearby pulsar PSR J1811\(-\)1925 (i.e. unrelated to HESS J1809\(-\)193), and J1810.3\(-\)1925e, which is modelled as an extended source. The energy spectrum of J1810.3\(-\)1925e, exhibiting a spectral index of \(\Gamma\approx 2.5\pm 0.1\), is displayed in Fig. 5. Figure 4: Significance maps for the _Fermi_-LAT analysis. (a) Pre-modelling map. (b) With J1811.5\(-\)1925 in the model. (c) With J1811.5\(-\)1925 and J1810.3\(-\)1925e in the model. The two components of HESS J1809\(-\)193 are displayed as well. The grey dashed line marks the Galactic plane. Figure 3: Energy spectrum of HESS J1809\(-\)193. The spectra of component A and B are shown in green and purple, respectively. The solid lines show the best-fit PL models for each component, and the dashed green line the best-fit ECPL model for component A. Published spectra are taken from [1, 2, 9, 16]. ## 4 Discussion The similarity of the spatial models of component A of HESS J1809\(-\)193 and the _Fermi_-LAT source J1810.3\(-\)1925e (cf. Fig. 4) suggests a connection between these two components. However, the energy spectrum of J1810.3\(-\)1925e below 10 GeV is considerably steeper than that of component A, implying the need for a spectral break at around 0.1 TeV if both are connected. On the other hand, the spectrum of J1810.3\(-\)1925e could be connected to that of component B more smoothly (although a break would still be required), but in this case its spatial extent would greatly exceed that of its counterpart. This illustrates that a joint modelling of the emission detected with H.E.S.S. and _Fermi_-LAT is very challenging. We focus here on modelling the H.E.S.S. components. First, we have modelled the entire emission of HESS J1809\(-\)193 in a leptonic (PWN) scenario. We performed a time-dependent modelling that takes into account the pulsar braking, employing the GAMERA library [19]. To describe both H.E.S.S. components and the X-ray nebula (which is offset from the peak in \(\gamma\)-ray emission, cf. Fig. 1), we invoked three "generations" of electrons: (i) "relic" electrons, associated with component A and injected over the system life time (\(\approx 33\) kyr); (ii) "medium-age" electrons, associated with component B and injected within the last \(\approx 4.7\) kyr; (iii) "young" electrons, associated with the X-ray nebula and injected within the last \(\approx 1.2\) kyr. The results of the model are displayed in Fig. 5. From the approximate age of the system and the measured extent of component A, it is possible to derive a diffusion coefficient for the "relic" electrons. We obtained \(D\approx 1\times 10^{28}\) cm\({}^{2}\) s\({}^{-1}\), which is of the same order as the coefficient measured in the vicinity of the Geminga PWN [20]. In such a scenario, one would furthermore expect a cut-off in the energy spectrum of component A, as the highest-energy electrons should have cooled due to IC scattering by now. This is consistent with the measured cut-off for this component at \(\approx 13\) TeV. In summary, the PWN model shows that the \(\gamma\)-ray emission of HESS J1809\(-\)193 can be modelled in a PWN scenario, and that in particular component A of HESS J1809\(-\)193 can be well described as a halo of old electrons that surround the compact PWN. Figure 5: SED of HESS J1809\(-\)193, with results of the PWN model. The thick lines display the best-fit model curves, whereas the thin lines display individual solutions of the MCMC sampling. The Suzaku data are from [5] and the radio data for G011.0\(-\)00.0 (not used in the fit) are from [21].
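As a rough cross-check of the diffusion coefficient quoted above, one can combine the 0.62 deg extent of component A, the 3.3 kpc distance, and the approximate 33 kyr system age in a simple diffusion relation. The sketch below assumes \(R^{2}\approx 4Dt\); the exact prescription used in the full analysis is not given here, so only the order of magnitude is meaningful.

```python
# Back-of-the-envelope check of the diffusion coefficient for the "relic" electrons.
# Assumption: a simple 2D diffusion relation R^2 ~ 4 D t (the paper's exact relation
# may differ), so only the order of magnitude should be compared.
import math

DEG2RAD = math.pi / 180.0
PC_TO_CM = 3.086e18
YR_TO_S = 3.156e7

sigma_A_deg = 0.62        # 1-sigma major-axis extent of component A
distance_pc = 3.3e3       # distance of PSR J1809-1917
age_yr = 33e3             # approximate system age used in the PWN model

R_cm = distance_pc * PC_TO_CM * sigma_A_deg * DEG2RAD   # physical extent, roughly 36 pc
t_s = age_yr * YR_TO_S

D = R_cm**2 / (4.0 * t_s)
print(f"D ~ {D:.1e} cm^2/s")   # a few times 1e27, same order as the quoted 1e28
```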
The presence of SNRs and molecular clouds in the region motivates us to also consider a hadronic scenario in which (part of) the emission is due to cosmic-ray nuclei accelerated by the SNRs and interacting with gas in the clouds. We focus here in particular on component B of HESS J1809\(-\)193, which coincides in position with the edge of G011.0\(-\)00.0 and several of the dense molecular clouds (cf. Fig. 1). Using the Naima package [22], we have fitted a proton-proton model to component B, obtaining a required energy in primary protons of \(W_{p}\sim 4\times 10^{49}(n/1\ \mathrm{cm}^{-3})^{-1}\) erg. Considering that gas densities \(\gg\)1 cm\({}^{-3}\) are expected in the clouds [6], this presents a viable alternative interpretation. ## 5 Conclusion We have presented a new H.E.S.S. analysis of the unassociated \(\gamma\)-ray source HESS J1809\(-\)193. For the first time, we were able to resolve the emission into two components that exhibit distinct spectra and morphologies. Our _Fermi_-LAT analysis has confirmed the presence of extended emission also in the GeV energy range, which is however challenging to associate with either of the components of HESS J1809\(-\)193. The extended component A of HESS J1809\(-\)193 is compatible with a halo of old electrons around the compact PWN. The compact component B could plausibly be of either leptonic or hadronic origin.
2305.15295
A Mean-Field Method for Generic Conductance-Based Integrate-and-Fire Neurons with Finite Timescales
The construction of transfer functions in theoretical neuroscience plays an important role in determining the spiking rate behavior of neurons in networks. These functions can be obtained through various fitting methods, but the biological relevance of the parameters is not always clear. However, for stationary inputs, such functions can be obtained without the adjustment of free parameters by using mean-field methods. In this work, we expand current Fokker-Planck approaches to account for the concurrent influence of colored and multiplicative noise terms on generic conductance-based integrate-and-fire neurons. We reduce the resulting stochastic system from the application of the diffusion approximation to a one-dimensional Langevin equation. An effective Fokker-Planck is then constructed using Fox Theory, which is solved numerically to obtain the transfer function. The solution is capable of reproducing the transfer function behavior of simulated neurons across a wide range of parameters. The method can also be easily extended to account for different sources of noise with various multiplicative terms, and it can be used in other types of problems in principle.
Marcelo P. Becker, Marco A. P. Idiart
2023-05-24T16:18:35Z
http://arxiv.org/abs/2305.15295v1
# A Mean-Field Method for Generic Conductance-Based Integrate-and-Fire Neurons with Finite Timescales ###### Abstract The construction of transfer functions in theoretical neuroscience plays an important role in determining the spiking rate behavior of neurons in networks. These functions can be obtained through various fitting methods, but the biological relevance of the parameters is not always clear. However, for stationary inputs, such functions can be obtained without the adjustment of free parameters by using mean-field methods. In this work, we expand current Fokker-Planck approaches to account for the concurrent influence of colored and multiplicative noise terms on generic conductance-based integrate-and-fire neurons. We reduce the resulting stochastic system from the application of the diffusion approximation to a one-dimensional Langevin equation. An effective Fokker-Planck is then constructed using Fox Theory, which is solved numerically to obtain the transfer function. The solution is capable of reproducing the transfer function behavior of simulated neurons across a wide range of parameters. The method can also be easily extended to account for different sources of noise with various multiplicative terms, and it can be used in other types of problems in principle. ## I Introduction The brain is a complex system that organizes itself into structures ranging in size from fine-scale components [1] to large-scale arrangements involving the whole organ [2]. These structures are composed of neurons and supportive cells that interact with each other in complex ways [3], presenting a typical phenomenon of complex systems. As a result, the study of these systems poses significant challenges, requiring the development of theoretical models and frameworks to bridge the multiple scales. Theoretical models and frameworks are essential to address the multi-scale challenges found in complex systems. Nature has many examples of such systems, and condensed matter physics has developed a variety of methods to treat these types of problems [4; 5; 6; 7]. These methods have been co-opted by theoretical neuroscientists to understand the complex interactions of neurons and supportive cells in the brain [8]. Mean-field methods are an effective tool for investigating collective behaviors in neural networks, as neurons are often influenced by numerous stochastic inputs. By modeling neurons as simple transfer functions, attractor dynamics, network oscillations, synchronization, pattern formation, and phase transitions can be better analyzed [9; 10; 11; 12]. Through mean-field approaches, a connection can be made between the individual spiking neuron at the microscopic scale and the population's rate descriptions at the mesoscopic scale [12; 13]. Amit and Brunel's seminal work introduced a typical approach using the Fokker-Planck mean-field formalism to model the simple leaky integrate-and-fire model [14]. However, the mean-field treatment of conductance-based integrate-and-fire neurons introduces a new level of complexity due to the conductance variables, which increase the dimensionality of the stochastic system. These complexities can generate measurable differences, such as the temporal correlations introduced by synaptic filtering, which have been shown to alter the spiking statistics [15] and the scaling properties of neurons [16]. Dealing with these temporal correlations presents a challenge for mean-field methods. 
Perturbative solutions have been found for a single source of additive exponentially correlated noise [17; 18]. For linear multiplicative noise, it is possible to use the same perturbative methods by employing the effective time-constant approximation [19; 20]. By combining these two methods, a large set of problems with linear multiplicative terms can be treated. But what happens if we want to expand those methods to include more generic forms of neurons that include colored noise with nonlinear multiplicative terms? Looking back at physics, the presence of temporal correlations in colored noise processes is known to prevent the construction of an exact Fokker-Planck equation. Some forms of approximation were derived in the 80s. The best Fokker-Planck approximation, proposed by Sancho and Lindenberg [21; 22], proposes a differential form for the diffusion term that is exact in the white noise limit but only solvable in particular cases. Therefore, perturbative methods are required in most applications. The Fox theory, which uses functional calculus to derive an approximation for small correlations [23; 24], yields the same result as the best Fokker-Planck approximation at first order in the time correlation but diverges in higher orders [25]. Other methods, such as the projection operator method [25; 26], the adiabatic elimination procedure used by Jung and Hanggi [27], and the renormalized operator cumulant expansion of Der [28], have also been developed to deal with similar problems. In this work, we adapted the Fox Theory to the case of a conductance-based integrate-and-fire neuron under the influence of stochastic inputs. We numerically solved the resulting stationary Fokker-Planck equation and extracted the transfer function with different assumptions on the boundary conditions, comparing the results with proper simulations. ## II General model We consider the behavior of a point-like generic leaky integrate-and-fire neuron with conductance-based input embedded in a network of similar units. The neuron is described by the membrane potential \(V\) that follows from \[\tau_{L}\frac{dV}{dt}=-(V-E_{L})-\sum_{i}g_{i}(t)s_{i}(V)(V-E_{i})\,, \tag{1}\] where \(\tau_{L}\) is the membrane time constant, \(E_{L}\) is the resting potential, \(E_{i}\) is the reversal potential of the corresponding channel \(i\), and \(s_{i}(V)\) is a modulating function that can depend on \(V\). It is important to note that the addition of nonlinear functions of \(V\) to the equation (a quadratic or exponential function for example) is possible in principle, although we will not deal with this case here. The conductances \(g_{i}(t)\) behave as linear filters of the input signal. Specifically, we have \[\tau_{i}\frac{dg_{i}}{dt}=-g_{i}+w_{i}\sum_{j,k}\delta(t-t_{j}^{k})\,. \tag{2}\] The summation here is performed over all pre-synaptic sites \(j\) and all spikes \(k\) emitted in that site. \(w_{i}\)'s are the synaptic weights, which are kept the same for all neurons belonging to the same population. The membrane potential \(V\) evolves according to (1) until it reaches the threshold \(\theta\) when a formal spike is emitted. The potential is then reset to \(V_{r}\) and is not updated for the extent of the refractory interval \(\tau_{r}\). ## III Mean-field analysis ### Conductance As a starting point, we suppose that the neuron receives inputs from separate populations of neurons corresponding to different channels in the equation, each of them making \(K_{i}\) connections. 
We assume that the inputs from each population come from Poisson rate neurons with fixed rate \(\nu_{i}\). If the number of connections is large (\(K_{i}\gg 1\)) and the connection weights small (\(w_{i}\ll 1\)) the diffusion approximation can be used [29; 14] and equation (2) becomes \[\tau_{i}\frac{dg_{i}}{dt}=-g_{i}+\mu_{i}+\sqrt{\tau_{i}}\sigma_{i}\xi_{i}(t)\,, \tag{3}\] with \[\mu_{i} = w_{i}K_{i}\nu\tau_{i}\,, \tag{4}\] \[\sigma_{i}^{2} = w_{i}^{2}K_{i}\nu\tau_{i}\,. \tag{5}\] The \(\xi_{i}(t)\)'s here are uncorrelated Gaussian variables with zero mean and unit variance. ### Membrane Potential The membrane potential equation (1) can then be written as \[\frac{dV}{dt}=-\frac{(V-\mu)}{\tau}+\sum_{i}h_{i}(V)\eta_{i}(t)\,, \tag{6}\] where \[\tau=\tau_{L}/(1+\sum_{i}s_{i}(V)\mu_{i})\,, \tag{7}\] \[\mu=\frac{\tau}{\tau_{L}}(E_{L}+\sum_{i}s_{i}(V)\mu_{i}E_{i})\,, \tag{8}\] \[h_{i}(V)=s_{i}(V)\frac{\sqrt{\tau_{i}}}{\tau_{L}}\sigma_{i}(E_{i}-V)\,, \tag{9}\] and the noise variables are now \[\eta_{i}(t)=\frac{1}{\tau_{i}}\int_{0}^{t}e^{-\frac{T}{\tau_{i}}}\xi_{i}(t-T) dT\,, \tag{10}\] with the following correlations \[\langle\;\eta_{i}(t)\;\eta_{j}(t^{\prime})\;\rangle=\frac{1}{2\tau_{i}}e^{- \frac{|t-t^{\prime}|}{\tau_{i}}}\delta_{ij}\;\;. \tag{11}\] What we have now is a Langevin equation with distinct sources of colored noise. The \(n\) dimensional stochastic system is now reduced to a single SDE. This happens, however, with the cost that the noise is no longer Markovian, i.e., the fluctuations at time \(t\) depend on the fluctuations at previous times \(t^{\prime}<t\). It makes it impossible to obtain an exact Fokker-Planck equation since it adds an additional temporal integration. Therefore, in order to be able to use the Fokker-Planck approach [14], it is necessary to build an approximate Fokker-Planck equation. ### Effective Time-Constant Approximation In some cases, it might also be useful to simplify the problem by finding an approximation that allows us to eliminate the multiplicative noise. Observe that in Eq. (6) the stochastic variable \(\eta_{i}(t)\) is multiplied by a function of \(V\) implying that the noise level depends on voltage values. This complexity can be avoided by using the effective time-constant approximation [19][20] where the membrane potential is replaced by the equilibrium potential \(\mu\), resulting in \[h_{i}(V)\to h_{i}(\mu)\] This approximation implies that the modulation of noise can be seen as dependent on the distance of the equilibrium potential from its reversal potential, at least at first order. In fact, it can be argued that using this approximation, when the terms \(h_{i}(V)\) are linear, leads to a more consistent treatment of the problem, since the error generated by this approach is of the same order as the error introduced by the diffusion approximation [20]. But the most important fact here is that this approach simplifies considerably the treatment of the resulting Fokker-Planck equation. A direct consequence of this is that it lends to more easily interpretable parameters. We will compare the results with and without the use of this approximation when it is applicable. ### Fox Theory Temporal correlations in the noise of Langevin equations are known to impede the construction of an exact Fokker-Planck equation. So our task is to find an appropriate approximation that leads to a differential equation of the probability distribution of the membrane potential. 
From the different options available, we use here the Fox theory, since it is the one with the most direct application and is the easiest to generalize for multiple noise sources. It also possesses some relevant properties for this work. First of all, it is important to note that the approach followed in this method is non-perturbative. In fact, under a certain condition, the convergence of the approximation for \(\tau_{i}\to 0\) is uniform, that is, it converges to the white noise case for all values of \(V\) in the domain of interest [24]. The uniformity condition is \[1-\tau_{i}\left(W^{\prime}(V)-\frac{h_{i}^{\prime}(V)}{h_{i}(V)}W(V)\right)>0\,, \tag{12}\] where \(W(V)\) is the drift term in the Langevin (in our case \((\mu-V)/\tau\)) and primes indicate derivatives with respect to the argument \(V\). This condition sets a scale for \(\tau_{i}\) for which the approximation behaves reasonably well. Jung and Hanggi adiabatic method [27] is an approximation that is valid for small and large values \(\tau_{i}\), whose stationary solution agrees with the stationary solution of the Fox theory. Therefore, even though the resulting effective Fokker-Planck obtained by the Fox theory was derived for small \(\tau_{i}\) values, the stationary solution is valid also for large \(\tau_{i}\) (given that condition (12) is obeyed). The validity for both limits suggests that the Fox theory is a good interpolation between both stationary results. A small derivation of the validity of the stationary solution of the Fox theory for \(\tau_{i}\to\infty\) can also be found in [25]. The application of the Fox Theory to equation (6) results in the effective Fokker-Planck \[\frac{\partial P}{\partial t}=-\frac{\partial}{\partial V}\left[W(V)P-\sum_{i }h_{i}(V)\frac{\partial}{\partial V}(S_{i}(V)P)\right] \tag{13}\] with \[S_{i}(V)=\frac{1}{2}\left[\frac{h_{i}(V)}{1-\tau_{i}(W^{\prime}(V)-\frac{h_{i }^{\prime}(V)}{h_{i}(V)}W(V))}\right] \tag{14}\] The expansion for multiple noise sources can be simply done by using the same assumptions as in the original papers [23; 24]. ### Transfer Function First, we will explore the result of the application of the effective time-constant approximation. The resulting effective Fokker-Planck is then \[\frac{\partial P(V,t)}{\partial t}=\frac{\partial}{\partial V}\left[\frac{(V- \mu)}{\tau}P(V,t)\right]+\frac{\sigma_{V}^{2}}{2\tau}\frac{\partial^{2}P(V,t )}{\partial V^{2}}\,, \tag{15}\] where \[\sigma_{V}^{2}=\sum_{i}\sigma_{V_{i}}^{2}=\sum_{i}\frac{\tau^{2}}{\tau+\tau_{i }}h_{i}^{2}\,. \tag{16}\] and the notation was simplified by calling \(h_{i}(\mu)=h_{i}\). The resulting effective Fokker-Planck is the same as the one obtained from the simpler Langevin \[\frac{dV}{dt}=-\frac{(V-\mu)}{\tau}+\sigma_{V}\;\xi(t) \tag{17}\] with the zero mean unit variance Gaussian white noise \(\xi(t)\). The independence of the terms in the sum suggests the separation of the full variance into two independent components. There is, then, a simple interpretation of the parameters of the effective Fokker-Planck equation. The first-order term (drift) corresponds to the deterministic drive of the membrane potential. The second-order term (diffusion) can be seen as the sum of the variance of the noise sources, where the noise sources are treated as white. The combination of both approximations (effective time-constant and the Fox theory), therefore, produces the same behavior as a Langevin system with white noise and summed variances. 
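To make Eqs. (12)-(14) concrete, the sketch below (our own helper functions, for a single linear channel with assumed constants, not values taken from the text) evaluates the Fox diffusion factor \(S_i(V)\) and the uniformity condition on a voltage grid. For a linear \(h_i(V)\) the condition stays positive over the physiological range, consistent with the discussion later in the paper.

```python
import numpy as np

# Illustrative linear-channel setup (assumed values): W(V) = (mu - V)/tau and h(V) = a*(E_rev - V).
tau, mu, E_rev, a, tau_i = 5.0, -55.0, 0.0, 0.035, 5.0

def W(V):               # drift term of the Langevin equation (6)
    return (mu - V) / tau

def h(V):               # multiplicative noise amplitude, Eq. (9) with s_i(V) = 1
    return a * (E_rev - V)

def uniformity(V):      # left-hand side of condition (12); must remain positive
    dW = -1.0 / tau                       # W'(V)
    dh_over_h = -1.0 / (E_rev - V)        # h'(V)/h(V)
    return 1.0 - tau_i * (dW - dh_over_h * W(V))

def S(V):               # Fox effective diffusion factor, Eq. (14)
    return 0.5 * h(V) / uniformity(V)

for V in np.linspace(-80.0, -50.0, 7):
    print(f"V = {V:6.1f} mV   condition (12) = {uniformity(V):6.3f}   S(V) = {S(V):7.4f}")
```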
The transfer function can be calculated, resulting in \[\frac{1}{\nu}=\tau_{r}+\tau\sqrt{\pi}\int_{\frac{V_{r}-\mu}{\sigma_{V}}}^{\frac{\theta-\mu}{\sigma_{V}}}e^{x^{2}}(1+\mbox{erf}(x))dx\,, \tag{18}\] where \(\mbox{erf}(x)\) is the error function. We can also get the stationary probability distribution \[P_{S}(V)=\frac{2\nu\tau}{\sigma_{V}}\exp\left(-\frac{(V-\mu)^{2}}{\sigma_{V}^{2}}\right)\times\int_{\frac{V-\mu}{\sigma_{V}}}^{\frac{\theta-\mu}{\sigma_{V}}}\Theta\left(x-\frac{V_{r}-\mu}{\sigma_{V}}\right)e^{x^{2}}dx\,, \tag{19}\] where \(\Theta(x)\) is the Heaviside function. In these results, there is an implicit assumption of continuity of the distribution at the threshold, which implies \(P_{S}(\theta)=0\). This assumption is not reasonable, since perturbative results for single-channel conductance-based models show a discontinuity at that point [17]. We can instead estimate the value of the distribution at the threshold numerically, using a procedure described below. We will also provide a comparison between results obtained with and without this assumption. ### Multiplicative Noise The complicated form of (13) allows only a formal stationary solution. Using again the continuity of the distribution and standard procedures [29], we arrive at \[\frac{1}{\nu}=\tau_{r}+\int_{V_{r}}^{\theta}\int_{-\infty}^{x}\frac{e^{F(x)-F(V)}}{\chi(x)}\;dVdx\,, \tag{20}\] where \[F(V)=\int\frac{\sum_{i}h_{i}(V)\frac{dS_{i}}{dV}-W(V)}{\chi(V)}\;dV \tag{21}\] and \[\chi(V)=\sum_{i}h_{i}(V)S_{i}(V)\,. \tag{22}\] The lower limit of the inner integral can be replaced by the lowest of the reversal potentials, since the membrane potential cannot go below that value; we keep the more general form with \(-\infty\). This formal solution, unfortunately, is not straightforward to use, since a closed form for the integrating factor is generally not obtainable. Therefore, we have to rely on numerical methods. ### Numerical Methods and Simulations Efficient results can be obtained by using the numerical approach developed by Richardson [30]. It takes advantage of the formal solution to achieve faster convergence than a typical Euler integration and uses \(P_{S}(\theta)=0\) as the initial condition for the integration. However, since this assumption is not well founded in our case, we need a different approach. We opted for a double integration procedure, starting from the more reasonable assumption \(P(E_{I})=0\) and integrating forward. This yields an estimated value for \(P_{S}(\theta)\), from which we can integrate backward to obtain the distribution and the firing rate. The simulation data were generated using Brian2 [31], treating the input layer as Poisson neurons. Firing rates were computed as the time average of the spike count over a period of 10 s, after waiting 5 s to eliminate transients. The same strategy was used for the generation of the distribution. ## IV Conductance-Based Integrate-and-Fire Neuron We will first apply our method to a simple conductance-based integrate-and-fire neuron with two input channels: one excitatory, \(g_{E}(t)\), and one inhibitory, \(g_{I}(t)\). The equations describing this system are \[\tau_{L}\frac{dV}{dt} = -(V-E_{L})-\sum_{i=E,I}g_{i}(t)(V-E_{i})\,, \tag{23}\] \[\tau_{E}\frac{dg_{E}}{dt} = -g_{E}+\sum_{j,k}w_{E}\delta(t-t_{j}^{k})\,, \tag{24}\] \[\tau_{I}\frac{dg_{I}}{dt} = -g_{I}+\sum_{j,k}w_{I}\delta(t-t_{j}^{k})\,. \tag{25}\] The neuron receives input from \(K_{E}\) excitatory input neurons and \(K_{I}\) inhibitory ones.
Both populations fire at a fixed firing rate \(\nu_{i}\). The parameter values are chosen to be physiologically plausible and are in the range typically used in simulation works (for instance [32]); see Table 1. To illustrate the validity of the diffusion approximation for the range of values used here, we plotted the mean, the standard deviation, and the skewness of the analytical conductance and the simulated one (Fig. 1). As expected by construction, the expressions for the mean and the standard deviation are a good representation of the simulated values even when the synaptic weight \(w_{E}\) is large. In contrast, we can see deviations in the skewness for small values of \(\tau_{E}\). This is somewhat expected, since the values of \(g(t)\) cannot be negative and the diffusion approximation does not take this into consideration. For small \(\tau_{E}\), we have small \(\mu_{E}\), and the Gaussian form of the approximation fails to account for the asymmetric shape of the distribution with a hard boundary at \(g=0\). Therefore, as stated in [20], the diffusion approximation introduces errors at the third-order moment of the distribution. The resulting form of the Langevin equation for the conductance-based integrate-and-fire neuron is \[\frac{dV}{dt}=-\frac{(V-\mu)}{\tau}+h_{E}(V)\;\eta_{E}(t)+h_{I}(V)\;\eta_{I}(t)\,, \tag{26}\] where \[\tau=\frac{\tau_{L}}{1+\mu_{E}+\mu_{I}}\,,\] \[\mu=\frac{\tau}{\tau_{L}}(E_{L}+\mu_{E}E_{E}+\mu_{I}E_{I})\,,\] \[h_{E,I}(V)=\frac{\sqrt{\tau_{E,I}}}{\tau_{L}}\sigma_{E,I}(E_{E,I}-V)\,.\] \begin{table} \begin{tabular}{c c} Parameter & Value \\ \hline \(E_{L}\) & -60mV \\ \(E_{E}\) & 0mV \\ \(E_{I}\) & -80mV \\ \(w_{E}\) & \{0.1, 0.5\} \\ \(w_{I}\) & \{0.1, 0.4, 1.0, 10.0\} \\ \(\tau_{L}\) & 20ms \\ \(\tau_{E}\) & variable \\ \(\tau_{I}\) & 10ms \\ \(\tau_{R}\) & 2ms \\ \(K_{E}\) & 400 \\ \(K_{I}\) & 100 \\ \(\theta\) & -50mV \\ \(V_{r}\) & -60mV \\ \(\nu_{i}\) & \{5, 20, 50\}Hz \\ \end{tabular} \end{table} Table 1: Parameters used for the simple conductance-based integrate-and-fire model. We can now use the effective time-constant approximation or deal with the full multiplicative problem. ### Additive Noise With the effective time-constant approximation, the Langevin equation simplifies to \[\frac{dV}{dt}=-\frac{(V-\mu)}{\tau}+h_{E}\eta_{E}(t)+h_{I}\eta_{I}(t)\,, \tag{27}\] where the constant coefficients are \(h_{E}=h_{E}(\mu)\) and \(h_{I}=h_{I}(\mu)\). The application of the Fox theory results in the transfer function (18) and the probability distribution (19), with the expression for \(\sigma_{V}\) given by \[\sigma_{V}^{2}=\frac{\tau^{2}}{\tau+\tau_{E}}h_{E}^{2}+\frac{\tau^{2}}{\tau+\tau_{I}}h_{I}^{2}\,. \tag{28}\] A comparison of the analytical results (calculated numerically) with the simulations can be seen in Fig. 2. Figs. 2A and 2C assume \(P(\theta)=0\), while Figs. 2B and 2D do not. Good agreement is present for most sets of parameters tested, the exception being the high-inhibition regime (\(w_{I}=10\)). Figs. 2A and 2B show that the input rate \(\nu_{i}\) has little effect on the stationary potential \(\mu\) but changes the noise variance \(\sigma_{V}^{2}\) of the neuron without threshold. Therefore, in this case, the firing rate behavior is mostly determined by the changes in the input variance. A higher noise variance makes the transition from silent to firing smoother, as can be seen by the slower convergence of the firing rate curves with higher input variance.
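The additive-noise prediction can be reproduced end to end in a few lines. The sketch below (our own helper, using one illustrative parameter set from Table 1) computes \(\mu\), \(\tau\), and \(\sigma_{V}\) from Eqs. (4)-(9) and (28) and then evaluates the firing-rate integral of Eq. (18), taking the reset potential as the lower limit; the same helper can be reused to scan \(\tau_{E}\) and produce curves of the kind shown in Fig. 2.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

# One illustrative parameter set from Table 1 (rates in spikes/ms).
E_L, E_E, E_I = -60.0, 0.0, -80.0
tau_L, tau_E, tau_I, tau_ref = 20.0, 5.0, 10.0, 2.0
theta, V_r = -50.0, -60.0
w_E, w_I, K_E, K_I, nu_in = 0.1, 0.4, 400, 100, 5e-3

mu_E, sig_E = w_E * K_E * nu_in * tau_E, np.sqrt(w_E**2 * K_E * nu_in * tau_E)   # Eqs. (4)-(5)
mu_I, sig_I = w_I * K_I * nu_in * tau_I, np.sqrt(w_I**2 * K_I * nu_in * tau_I)

tau = tau_L / (1.0 + mu_E + mu_I)
mu = tau / tau_L * (E_L + mu_E * E_E + mu_I * E_I)
h_E = np.sqrt(tau_E) / tau_L * sig_E * (E_E - mu)          # Eq. (9) evaluated at V = mu
h_I = np.sqrt(tau_I) / tau_L * sig_I * (E_I - mu)
sigma_V = np.sqrt(tau**2 / (tau + tau_E) * h_E**2 + tau**2 / (tau + tau_I) * h_I**2)   # Eq. (28)

# Firing rate from Eq. (18); times in ms, so the rate comes out in kHz.
integral, _ = quad(lambda x: np.exp(x**2) * (1.0 + erf(x)),
                   (V_r - mu) / sigma_V, (theta - mu) / sigma_V)
nu_out = 1.0 / (tau_ref + tau * np.sqrt(np.pi) * integral)
print(f"mu = {mu:.1f} mV, sigma_V = {sigma_V:.2f} mV, predicted rate = {1e3 * nu_out:.1f} Hz")
```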
Figs.2C and 2D explore the effect of varying the inhibitory synaptic weight ( \(w_{I}=0.1\), \(w_{I}=1.0\), and \(w_{I}=10.0\)) for a constant excitatory weight \(w_{E}=0.5\). For low inhibition, there is a good agreement between theory and simulation. But for high inhibition ( \(w_{I}=10.0\)) the theory produces a sharp transition of firing rate that is not observed in the simulations. The discrepancy (see the error in Fig.2C, third row) is larger in the region where \(\mu\) is between \(E_{L}\) and \(\theta\), that is, in the sub-threshold regime. In this region, spikes are driven by membrane potential fluctuations. The sharpness of the transition compared to the data suggests that, for those parameter values, the model underestimates fluctuations. The high inhibition case also allows us to see that the \(P(\theta)=0\) brings the transition to lower \(\tau_{E}\) values. The better result of the double integration procedure stems from the improvement of the estimation of the transition region since the shape of the curve is almost the same. ### Multiplicative Noise The full-multiplicative noise treatment results in a Fokker-Planck equation with the form \[\frac{\partial P}{\partial t}=-\frac{\partial}{\partial V}\left[W(V)P\right]+ \sum_{i=E,I}\frac{\partial}{\partial V}h_{i}(V)\frac{\partial}{\partial V} \left(S_{i}(V)P\right), \tag{29}\] where the functions \(S_{E}(V)\) and \(S_{I}(V)\) are given by the generic expression in (14). The stationary differential equation that needs to be solved numerically is then \[\frac{\partial P_{s}}{\partial V}+B(V)P_{s}=-\nu H(V)\,, \tag{30}\] with \[B(V)=\frac{h_{E}(V)S_{E}^{\prime}(V)+h_{I}(V)S_{I}^{\prime}(V)- W(V)}{\chi(V)}\,, \tag{31}\] \[H(V)=\frac{\Theta(V-V_{r})}{\chi(V)}\,,\] (32) \[\chi(V)=h_{E}(V)S_{E}(V)+h_{I}(V)S_{I}(V)\,. \tag{33}\] We used this approach to solve the stationary Fokker-Planck equation numerically for the same set of parameters as in the last subsection.No appreciable differences were found between the additive and multiplicative models for different input firing rates (see Figs. 3A and 3C). However, for high inhibition (\(w_{I}=10.0\)), the full multiplicative treatment with the continuity assumption (\(P(\theta)=0\)) produced a worse quantitative result but with a better overall shape of the curve. This error was mostly corrected when we dropped this assumption and used the double integration procedure (see Fig. 3D), concentrating on the region where \(\mu\) is around the threshold value. Figure 1: Comparison of the analytical expressions obtained for the statistics of \(g_{E}\) using the diffusion approximation (lines) with simulations (circles). The analytical expressions are good descriptions of the simulations for all the range of parameters tested for the first and second moments. The skewness, however, exhibits a deviation from the Gaussian approximation for small \(\tau_{E}\). Parameters for first column, \(w_{E}=0.1\), \(w_{I}=0.4\); second column, \(w_{I}=0.8\), \(\nu_{i}=5Hz\) ### Stationary Probability Distribution To complete the analysis of the conductance-based integrate-and-fire neuron, we look at the stationary probability distributions with and without the effective time-constant approximation and the \(P_{S}(\theta)=0\) assumption. Fig.4 top row corresponds to the parameters of the blue curves that appear in Figs.2 and 3. Fig.4 bottom row corresponds to the yellow curves of the same figures. Each column corresponds to progressive values of \(\tau_{E}\) (1, 5, 10, 20, 70 ms, respectively). 
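Before turning to the stationary distributions in Fig. 4, it is worth sketching how the first-order equation (30) is actually integrated. The snippet below is our own minimal version of the backward (threshold) pass shared by Richardson's scheme and the double integration procedure; the latter differs in seeding \(p(\theta)\) with the value estimated by the forward pass rather than zero. The additive-noise coefficients are used as a sanity check, with assumed values for \(\mu\), \(\tau\), and \(\sigma_V\).

```python
import numpy as np

def threshold_integration(B, H, E_I, V_r, theta, tau_ref, p_theta=0.0, n=20_000):
    """Backward pass shared by Richardson's threshold integration and the double
    integration procedure: solve dp/dV = -B(V) p - H(V) for the scaled density
    p = P/nu from V = theta down to V = E_I, then recover nu from normalization.
    p_theta = 0 reproduces the continuity assumption; a positive value plays the
    role of the seed estimated by the forward pass.  B and H are callables
    implementing Eqs. (31)-(33)."""
    V = np.linspace(E_I, theta, n)
    dV = V[1] - V[0]
    p = np.zeros(n)
    p[-1] = p_theta
    for k in range(n - 1, 0, -1):                 # step from theta towards E_I
        p[k - 1] = p[k] + dV * (B(V[k]) * p[k] + H(V[k]))
    nu = 1.0 / (np.sum(p) * dV + tau_ref)         # integral of P plus refractory mass nu*tau_ref = 1
    return V, nu * p, nu

# Sanity check with additive-noise coefficients: B(V) = (V-mu)/(tau*D), H(V) = step/D, D = sigma_V^2/(2*tau)
mu, tau, sigma_V, tau_ref, E_I, V_r, theta = -55.0, 5.0, 5.5, 2.0, -80.0, -60.0, -50.0
D = sigma_V**2 / (2.0 * tau)
V, P, nu = threshold_integration(lambda v: (v - mu) / (tau * D),
                                 lambda v: (v >= V_r) / D,
                                 E_I, V_r, theta, tau_ref)
print(f"rate from threshold integration: {1e3 * nu:.1f} Hz")
```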
As the excitatory time constant increases, the voltage distribution evolves from a Gaussian shape with a sharp peak far from the threshold, towards an increasingly more distorted distribution as it interacts with the threshold. The area under the curve diminishes (since we are omitting the refractory period in the graph) corresponding to the higher firing rate of the neuron. Comparing the mean-field solutions, we can see that the major difference comes from the continuity assumption for intermediary values of \(\tau_{E}\). It is clear that the simulated distributions are not continuous. In fact, when in the mean-driven regime (\(\mu>\theta\)), the distributions concentrate between the reset and the threshold. The double integration procedure gives good results for the distribution in most cases, with an artificial low-end tail for some values. The difference generated by the effective time-constant approximation is minor, generating almost superimposed curves in some cases. ## V NMDA integrate-and-fire In the standard conductance-based integrate-and-fire neuron model, there is no \(V\) nonlinearity in the resulting Langevin equation. In principle, our mean-field method should be able to handle nonlinearities in either the drift term, the diffusion terms, or both. Here we will introduce a nonlinearity by adding NMDA channels, which are excitatory channels whose activation depends on the membrane potential. A convenient way to model its behavior is by adding an appropriately tuned sigmoidal factor to the conductance term [33]. The complete model can be written as \[\tau_{L}\frac{dV}{dt} = -(V-E_{L})-(1-\alpha)g_{A}(t)(V-E_{E})- \tag{34}\] \[-\alpha s(V)g_{N}(t)(V-E_{E})-g_{I}(t)(V-E_{I})\,,\] Figure 2: Comparison of the analytical model (lines) with simulations (circles) for the simple conductance-based integrate-and-fire neuron using the effective time-constant approximation. In columns (A) and (B), we use three different values of input firing rate \(\nu_{i}\) as a function of the excitatory time constant and set \(w_{E}=0.1\) and \(w_{I}=0.4\). In columns (C) and (D), we compare three values of inhibitory weights \(w_{I}\) and set \(w_{E}=0.5\), \(\nu_{i}=5Hz\). In the first row and columns (A) and (C) the mean potential for a thresholdless model is plotted. Columns (B) and (D) show the standard deviation of the membrane potential for the same thresholdless model. In the second row, columns (A) and (C) display the firing rates for the model with the continuous distribution assumption, and columns (B) and (D) display the firing rates using the double integration procedure. The absolute errors for the corresponding models are plotted in the third row. Figure 4: The stationary probability distributions for the Voltage of conductance-based integrate-and-fire neurons. Column (A) \(\tau_{E}=1ms\), (B) \(\tau_{E}=5ms\), (C) \(\tau_{E}=10ms\), (D) \(\tau_{E}=20ms\), (E) \(\tau_{E}=70ms\). The first row corresponds to blue curves in Fig. 2, that is, \(\nu_{i}=5Hz\) and parameters from table 1. The second row corresponds to the yellow curves on the same figure (\(w_{I}=10\), \(w_{E}=5\), and \(\nu_{i}=5Hz\)). The different lines correspond to different assumptions in the model, as can be seen in the legend in column (E). Figure 3: We compare the analytical model with simulations for the simple conductance-based integrate-and-fire neuron, incorporating full multiplicative noise. 
In columns (A) and (B), we vary the input firing rate \(\nu_{i}\) as a function of the excitatory time constant and set \(w_{E}=0.1\) and \(w_{I}=0.4\). In columns (C) and (D), we compare different values of the inhibitory weight \(w_{I}\) and set \(w_{E}=0.5\), \(\nu_{i}=5\)Hz. In the first row, columns (A) and (C) display the firing rate for the model assuming a continuous distribution, and columns (B) and (D) display the firing rates using the double integration procedure. The absolute errors for the corresponding models are plotted in the second row. where \[\tau_{i}\frac{dg_{i}}{dt}=-g_{i}+\sum_{j,k}w_{i}\delta(t-t_{j}^{k})\,, \tag{35}\] \[s(V)=\frac{1}{1+([\text{Mg}^{2+}]/\gamma)\exp{(-\beta V)}}\,. \tag{36}\] Here \(i\) can be \(A\), \(N\), or \(I\), representing the AMPA, NMDA, and inhibitory channels, respectively. We also set \(w_{A}=w_{N}=w_{E}\). \(s(V)\) is the sigmoidal modulating function, \([\text{Mg}^{2+}]\) is the concentration of magnesium ions, and \(\gamma\) and \(\beta\) are fitting parameters. For \(\alpha\) to represent the proportion of AMPA and NMDA channels, we kept the number of their inputs equal, i.e., \(K_{A}=K_{N}=K_{E}\). As before, the input rate \(\nu_{i}\) is the same for the whole input population. Table 2 displays the values of all the model parameters. In the diffusion approximation and reduced to a one-dimensional Langevin equation, this model produces the following set of equations: \[\frac{dV}{dt}=-\frac{(V-\mu(V))}{\tau(V)}+\sum_{i=A,N,I}h_{i}(V)\eta_{i}(t)\,, \tag{37}\] where \[\tau(V) = \frac{\tau_{L}}{1+(1-\alpha)\mu_{A}+\alpha s(V)\mu_{N}+\mu_{I}}\,,\] \[\mu(V) = \frac{\tau(V)}{\tau_{L}}(E_{L}+(1-\alpha)\mu_{A}E_{E}+\alpha s(V)\mu_{N}E_{E}+\mu_{I}E_{I})\,,\] \[h_{A}(V) = (1-\alpha)\frac{\sqrt{\tau_{A}}}{\tau_{L}}\sigma_{A}(E_{E}-V)\,,\] \[h_{N}(V) = \alpha s(V)\frac{\sqrt{\tau_{N}}}{\tau_{L}}\sigma_{N}(E_{E}-V)\,,\] \[h_{I}(V) = \frac{\sqrt{\tau_{I}}}{\tau_{L}}\sigma_{I}(E_{I}-V)\,.\] Direct use of the effective time-constant approximation is no longer possible, since \(h_{N}(V)\) is no longer linear and the approximation loses its justification [20]. However, it is worth mentioning that for the NMDA model it is possible to linearize the term \(s(V)(V-E_{E})\) around the average membrane potential, as was done by Brunel and Wang [34]. Since we are trying to see how well the method performs for nonlinear multiplicative noise, we will not perform the linearization and the approximation. We start with the Fokker-Planck equation, \[\frac{\partial P}{\partial t} = -\frac{\partial}{\partial V}\left[W(V)P\right]+\sum_{i=A,N,I}\frac{\partial}{\partial V}h_{i}(V)\frac{\partial}{\partial V}(S_{i}(V)P)\,, \tag{38}\] where the functions \(S_{A}(V)\), \(S_{N}(V)\), and \(S_{I}(V)\) are given by (14). \(W(V)\) is no longer linear in \(V\), and this results in different functional forms for the \(S\) functions. The linear differential equation in \(V\) is then \[\frac{\partial P_{s}}{\partial V}+B(V)P_{s}=-\nu H(V)\,, \tag{39}\] with coefficients \[B(V) = \frac{\sum_{i=A,N,I}h_{i}(V)S_{i}^{\prime}(V)-W(V)}{\chi(V)}\,, \tag{40}\] \[H(V) = \frac{\Theta(V-V_{r})}{\chi(V)}\,, \tag{41}\] \[\chi(V) = \sum_{i=A,N,I}h_{i}(V)S_{i}(V)\,. \tag{42}\] We proceed to test this model against simulation data for different input rates (Figs. 5A and 5B) and for different inhibitory weights (Figs. 5C and 5D). Given how nonlinear the model is, it is remarkable how good the agreement between the mean-field results and the simulations is for the majority of the cases.
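The ingredients of Eqs. (39)-(42) are easy to tabulate numerically. The sketch below (our own; the \(s(V)\) constants are taken from Table 2, but the drift and noise amplitude are assumed stand-in values, not the paper's mean-field quantities) evaluates the NMDA gate of Eq. (36) and the uniformity condition (12) on a voltage grid with numerical derivatives, anticipating the discussion of its failure below.

```python
import numpy as np

Mg, gamma, beta = 1.0, 3.57, 0.062            # Table 2 values (mM, mM, 1/mV)

def s(V):                                     # NMDA modulation, Eq. (36)
    return 1.0 / (1.0 + (Mg / gamma) * np.exp(-beta * V))

def uniformity(V, W, h, tau_i):
    """Left-hand side of condition (12), evaluated numerically on a grid."""
    dW = np.gradient(W, V)
    dh = np.gradient(h, V)
    return 1.0 - tau_i * (dW - dh / h * W)

# Illustrative stand-ins for the NMDA channel (assumed numbers).
V = np.linspace(-79.0, -51.0, 400)
tau_N, E_E, tau_eff, mu_eff, a_N = 100.0, 0.0, 5.0, -55.0, 0.02
W = (mu_eff - V) / tau_eff                    # drift, treated as linear here for illustration
h_N = a_N * s(V) * (E_E - V)                  # noise amplitude with the s(V) factor, cf. Eq. (9)
cond = uniformity(V, W, h_N, tau_N)
print("condition (12) positive everywhere:", bool(np.all(cond > 0)))
print("minimum of the left-hand side:", float(cond.min()))
```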
The transition region is where most of the error is concentrated. We can also observe an interesting behavior in this model. The use of the continuity assumption can result in smaller errors in some situations, especially when \(\alpha\approx 1\). However, in the high-inhibition regime, the double integration procedure produces a more accurate result, with the calculated firing rate closely tracking the almost quiescent simulated neuron. Transitions are then slightly better estimated with the double integration procedure, which can be important for phase transition calculations. A crucial point has to be made regarding the condition for uniform convergence (12), which is usually used as a metric for good behavior in approximation methods [24]. For the linear conductance-based models, it is obeyed for all parameter values. However, the introduction of the NMDA nonlinearity makes this condition break down for a large range of parameters, as can be seen by numerical evaluation of (12). Notably, the failure of the uniform convergence condition did not affect the ability of the model to describe the system. But for the method to work correctly, it is necessary to deal with the divergence point at \(1-\tau_{i}\left(W^{\prime}(V)-\frac{h_{i}^{\prime}(V)}{h_{i}(V)}W(V)\right)=0\). We were able to remove the divergence by excluding from the numerical integration an interval of \(\pm 0.5\) around the value of \(V\) that contains the divergence. No significant impact can be observed in the results when using this procedure, which indicates that the divergence cancels out in the integration. \begin{table} \begin{tabular}{c c} **Parameter** & **Value** \\ \hline \(\alpha\) & variable \\ \(E_{L}\) & -60mV \\ \(E_{E}\) & 0mV \\ \(E_{I}\) & -80mV \\ \(w_{E}\) & \{0.1, 0.5\} \\ \(w_{I}\) & \{0.1, 0.4, 1.0, 10.0\} \\ \(\tau_{L}\) & 20ms \\ \(\tau_{A}\) & 1ms \\ \(\tau_{N}\) & 100ms \\ \(\tau_{I}\) & 10ms \\ \(\tau_{R}\) & 2ms \\ \(K_{E}\) & 400 \\ \(K_{I}\) & 100 \\ \(\theta\) & -50mV \\ \(V_{r}\) & -60mV \\ \(\nu_{i}\) & \{5, 20, 50\}Hz \\ \([\text{Mg}^{2+}]\) & 1mM \\ \(\gamma\) & 3.57mM \\ \(\beta\) & 0.062(mV)\({}^{-1}\) \\ \end{tabular} \end{table} Table 2: Parameters used for the NMDA model. Fig. 6 displays the estimated probability distributions for the NMDA model. The top row (corresponding to the blue curve in Figs. 5A and 5B) indicates that the estimation is good when the distribution does not interact heavily with the threshold but degrades when the interaction is significant. We see, nevertheless, that the continuity assumption causes a large error at the end of the left tail of the distribution, which does not occur as much with the double integration procedure. It still overestimates the probability of values slightly smaller than the reset potential. The bottom row (corresponding to the yellow curve in Figs. 5C and 5D) shows that the distribution remains mostly below the reset potential and interacts little with the threshold. The distribution spreads with higher \(\alpha\), but the peak barely moves. The area of the red curve remains higher than that of the yellow one, which helps explain the higher firing rate observed in Fig. 5C. ## VI Conclusion We developed a new method for constructing a transfer function for conductance-based integrate-and-fire neurons. This method is based on the mean-field Fokker-Planck approach and incorporates colored noise through the Fox theory.
We reduced the N-dimensional system into a single Langevin equation with colored and multiplicative noise and used the Fox theory to construct an effective Fokker-Planck equation, which was solved to obtain stationary firing rates. We tested the method on two neuron models of increasing complexity: a standard conductance-based integrate-and-fire neuron and a conductance-based integrate-and-fire neuron with nonlinear NMDA channels. For the standard conductance-based integrate-and-fire neuron, we compared mean-field results with firing rate data from simulations as a function of the excitatory time constant. We found good agreement between the data and the mean-field results in most scenarios, but the continuity assumption generated substantial errors in the transition regions. To correct that, we developed a double integration procedure, which consists of estimating the probability density at the threshold with a first integration and using this value to start a backward integration that better estimates the distribution and the firing rate. We also observed that the effective time-constant approximation produced sharper transitions, and the double integration procedure shifted the curve to better match the transition point. We then added a nonlinear NMDA channel to the model to test the method's effectiveness for nonlinear multiplicative terms. The method produced a good description of the simulated data even outside the range of validity given by equation (12), but required extra care for some sets of parameters. Additionally, we discovered that in this particular scenario the application of the double integration procedure does not always yield superior outcomes compared to the continuity assumption. While it generates a more accurate estimation in the transition region, it slightly underperforms for high \(\alpha\) values. When the membrane potential mean-field probability distributions are compared with simulation data, the effects of the continuity assumption become very clear. As expected, the distributions observed in the simulations are discontinuous. In fact, the discontinuity can occur not only at the threshold but also at the reset potential. Therefore, the continuous mean-field solution does not correctly estimate the distribution in most cases. The double integration procedure, however, produces better results, with \(P(\theta)\) approximating the simulated value more accurately. We can also see that the effective time-constant approximation generates only small deviations from the full treatment and is generally a good approximation. From the analysis of the probability distributions, it is clear that the double integration procedure produces good estimations of the distribution at the threshold for most cases. However, when the neuron is in the mean-driven regime, the discontinuity at the reset potential is not taken into account. Figure 5: Comparison of the NMDA analytical model with simulations for three different values of the input rate \(\nu_{i}\) (columns (A) and (B)) and of the inhibitory synaptic weight \(w_{I}\) (columns (C) and (D)) as a function of the interpolation parameter \(\alpha\). In (A) and (B), we set \(w_{E}=0.1\) and \(w_{I}=0.4\), while in (C) and (D), we set \(w_{E}=0.5\) and \(\nu_{i}=5\)Hz. Columns (A) and (C) assume the continuity of the distribution, while (B) and (D) use the double integration procedure. We calculate the error as the absolute distance between the simulation and analytical results.
A proper study of the correct boundary conditions of the 1-D reduced Fokker-Planck equation would likely yield better results at the discontinuity points. Since the area of the distribution is related to the firing rate, this would probably also improve the transfer function produced by the method. We have only tested our method on two types of neuron models, but it has the potential to be applied to a wider range of models. One possible extension, mentioned in the methods section, is the introduction of a nonlinear term in the drift expression. This nonlinear term can represent a quadratic [35] or an exponential integrate-and-fire neuron [36], for example. It is also possible to introduce adaptation currents that depend on the spiking times of the modeled neuron. This would introduce the firing rate into the resulting Langevin equation, requiring a self-consistent treatment. It is also possible to look for non-stationary solutions when the input changes over time. For example, if we introduce an oscillatory Poissonian input, it is possible to construct a Fokker-Planck equation with the Fox theory, as the noise terms still have the same form but are now modulated by the input firing rate. However, only the stationary solutions of the Fox theory are valid over the full range of time constants [27]. The non-stationary solutions are valid in the small-\(\tau_{i}\) limit, and careful consideration of this fact is necessary when putting the method into practice.
2303.09532
Variational Principles for Mirror Descent and Mirror Langevin Dynamics
Mirror descent, introduced by Nemirovski and Yudin in the 1970s, is a primal-dual convex optimization method that can be tailored to the geometry of the optimization problem at hand through the choice of a strongly convex potential function. It arises as a basic primitive in a variety of applications, including large-scale optimization, machine learning, and control. This paper proposes a variational formulation of mirror descent and of its stochastic variant, mirror Langevin dynamics. The main idea, inspired by the classic work of Brezis and Ekeland on variational principles for gradient flows, is to show that mirror descent emerges as a closed-loop solution for a certain optimal control problem, and the Bellman value function is given by the Bregman divergence between the initial condition and the global minimizer of the objective function.
Belinda Tzen, Anant Raj, Maxim Raginsky, Francis Bach
2023-03-16T17:48:39Z
http://arxiv.org/abs/2303.09532v1
# Variational Principles for Mirror Descent and Mirror Langevin Dynamics ###### Abstract Mirror descent, introduced by Nemirovski and Yudin in the 1970s, is a primal-dual convex optimization method that can be tailored to the geometry of the optimization problem at hand through the choice of a strongly convex potential function. It arises as a basic primitive in a variety of applications, including large-scale optimization, machine learning, and control. This paper proposes a variational formulation of mirror descent and of its stochastic variant, mirror Langevin dynamics. The main idea, inspired by the classic work of Brezis and Ekeland on variational principles for gradient flows, is to show that mirror descent emerges as a closed-loop solution for a certain optimal control problem, and the Bellman value function is given by the Bregman divergence between the initial condition and the global minimizer of the objective function. convex optimization, mirror descent, deterministic and stochastic optimal control. ## I Introduction The continuous-time gradient flow \[\dot{x}(t)=-\nabla f(x(t)),\qquad x(0)=x_{0} \tag{1}\] for a \(C^{1}\) objective function \(f:\mathbb{R}^{n}\to\mathbb{R}\) is a basic primitive in continuous-time optimization and control. Under additional assumptions on the objective \(f\), the trajectory of (1) converges to a minimizer of \(f\), which justifies thinking of the gradient flow as a method for asymptotically solving the optimization problem \[\text{minimize }f(x),\quad x\in\mathbb{R}^{n}. \tag{2}\] However, apart from the local characterization of \(-\nabla f(x)\) as the "direction of steepest descent," there appears to be little discussion of the sense, if any, in which (1) is "optimal" among all dynamical systems that asymptotically solve (2). One of the few exceptions is the variational principle of Brezis and Ekeland [1, 2]: Fix an arbitrary time horizon \(T>0\). Then, among all absolutely continuous curves \(x:[0,T]\to\mathbb{R}^{n}\) with \(x(0)=x_{0}\), the trajectory of (1) on \([0,T]\) minimizes the action functional \[S(x(\cdot)):=\int_{0}^{T}\{f(x(t))+f^{*}(-\dot{x}(t))\}\,\mathrm{d}t+\frac{1} {2}|x(T)|^{2}, \tag{3}\] where \[f^{*}(v):=\sup_{x\in\mathbb{R}^{n}}\{\langle v,x\rangle-f(x)\}\] is the Legendre-Fenchel conjugate of \(f\) and \(|\cdot|\) denotes the Euclidean norm on \(\mathbb{R}^{n}\). The minimum value of \(S\) over all such \(x(\cdot)\) is equal to \(\frac{1}{2}|x_{0}|^{2}\). The underlying idea is simple and boils down to a careful analysis of the equality cases of the Fenchel-Young inequality \[f(x)+f^{*}(v)\geq\langle v,x\rangle.\] However, the Brezis-Ekeland variational principle does not say anything about the asymptotic behavior of the extremal trajectories or about the curious fact that a finite-horizon problem of minimizing the action (3) has a solution given by the flow of a time-invariant dynamical system. In this paper, we revisit this problem from a control-theoretic point of view and provide a new variational interpretation of the gradient flow as an _infinite-horizon stabilizing optimal control_[3, SS8.5]. Moreover, we consider a more general method of _mirror descent_. This method, introduced in the 1970s by Nemirovski and Yudin [4, Ch. 3], can be tailored to the geometry of the optimization problem at hand through the choice of a strongly convex _potential function_. 
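As a quick numerical aside (a toy example of ours, not from the paper), the Brezis-Ekeland principle can be checked by discretizing the action (3) for \(f(x)=\frac{1}{2}x^{2}\) in one dimension: the gradient-flow trajectory attains the value \(\frac{1}{2}|x_{0}|^{2}\), while any perturbed curve with the same initial point does worse.

```python
import numpy as np

# f(x) = x^2/2, so f*(v) = v^2/2; the gradient flow (1) gives x(t) = x0 * exp(-t).
x0, T, n = 2.0, 5.0, 100_000
t = np.linspace(0.0, T, n)

def action(x):
    """Discretized Brezis-Ekeland action (3) for f(x) = x^2/2 in one dimension."""
    xdot = np.gradient(x, t)
    lagrangian = 0.5 * x**2 + 0.5 * xdot**2            # f(x(t)) + f*(-xdot(t))
    integral = np.sum(0.5 * (lagrangian[1:] + lagrangian[:-1]) * np.diff(t))
    return integral + 0.5 * x[-1]**2

x_flow = x0 * np.exp(-t)                               # gradient-flow trajectory
x_pert = x_flow + 0.3 * np.sin(np.pi * t / T)          # same initial point, different curve

print(f"S(gradient flow) = {action(x_flow):.4f}   (theory: {0.5 * x0**2:.4f})")
print(f"S(perturbed)     = {action(x_pert):.4f}   (strictly larger)")
```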
In continuous time, mirror descent is implemented by a time-invariant dynamical system whose state \(x(t)\in\mathbb{R}^{n}\) and output \(y(t)\in\mathbb{R}^{n}\) evolve according to \[\begin{split}\dot{x}(t)&=-\nabla f(\nabla\varphi^{* }(x(t))),\qquad x(0)=x_{0}\\ y(t)&=\nabla\varphi^{*}(x(t))\end{split} \tag{4}\] where \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\) the potential function and \(\varphi^{*}\) is its Legendre-Fenchel conjugate. It is common to refer to \(x(t)\) and \(y(t)\), respectively, as the _dual-space_ and the _primal-space_ trajectories. (The Euclidean gradient flow (1) is a special case of (4) with \(\varphi(x)=\frac{1}{2}|x|^{2}\).) ### _Brief summary of contributions_ We show the following: Suppose that the objective \(f\) is strictly convex. For the controlled system \(\dot{x}(t)=u(t),y(t)=\nabla\varphi^{*}(x(t))\), we consider the class of all _stabilizing controls_[3, SS8.5], i.e., appropriately well-behaved functions \(u:[0,\infty)\to\mathbb{R}^{n}\) such that the output \(y(t)\) converges, as \(t\to\infty\), to the unique minimizer of \(f\). This class contains, among others, sufficiently smooth state feedback controls of the form \(u(t)=k(x(t))\). We then identify an instantaneous cost function \(q(x,u)\), closely related to the Lagrangian in (3), such that the state feedback law \(k(x)=-\nabla f(\nabla\varphi^{*}(x))\) [or, equivalently, the output feedback law \(\dot{k}(y)=-\nabla f(y)\)] gives a control that minimizes the infinite-horizon cost \[\int_{0}^{\infty}q(x(t),u(t))\,\mathrm{d}t \tag{5}\] over all stabilizing controls. Moreover, the value function \(V(x_{0})\), i.e., the minimum value of this cost as a function of the initial state \(x_{0}\), is given by a certain "distance," induced by the potential \(\varphi\), between the initial output \(y_{0}=\nabla\varphi^{*}(x_{0})\) and the unique minimizer of \(f\). At the same time, \(V(x)\) is the Lyapunov function for the closed-loop system \(\dot{x}(t)=-\nabla f(\nabla\varphi^{*}(x(t)))\). We also consider a stochastic variant of mirror descent, the so-called _mirror Langevin dynamics_[5, 6, 7], which is implemented by an Ito stochastic differential equation \[\begin{split}\mathrm{d}X_{t}&=-\nabla f(\nabla \varphi^{*}(X_{t}))\,\mathrm{d}t+\sqrt{2\varepsilon(\nabla^{2}\varphi^{*}(X_{ t}))^{-1}}\,\mathrm{d}W_{t}\\ Y_{t}&=\nabla\varphi^{*}(X_{t})\end{split} \tag{6}\] driven by a standard \(n\)-dimensional Brownian motion \((W_{t})_{t\geq 0}\), where \(\varepsilon>0\) is a small "temperature" parameter. In this setting, we consider a finite-horizon optimal control problem of minimizing the expected cost \[\mathbf{E}\Bigg{[}\int_{0}^{T}q(X_{t},u_{t})\,\mathrm{d}t+r(X_{T})\Bigg{|}X_{0 }=x_{0}\Bigg{]},\] over all admissible control processes \((u_{t})_{0\leq t\leq T}\) entering into the controlled SDE \[\mathrm{d}X_{t}=u_{t}\,\mathrm{d}t+\sqrt{2\varepsilon(\nabla^{2}\varphi^{*}(X_ {t}))^{-1}}\,\mathrm{d}W_{t}.\] Here, \(q\) is the same cost as in (5) and \(r\) is an appropriately chosen terminal cost. As in the deterministic case, the mirror Langevin dynamics (6) emerges as the closed-loop system corresponding to the optimal control, although the value function is now time-dependent. ## II The deterministic problem ### _Some preliminaries_ We assume that the objective function \(f:\mathbb{R}^{n}\to\mathbb{R}\) is \(C^{1}\) and strictly convex and that the potential function \(\varphi:\mathbb{R}^{n}\to\mathbb{R}\) is \(C^{2}\) and strictly convex. We let \(\bar{y}\) denote the unique global minimizer of \(f\). 
Both \(f\) and \(\varphi\) are assumed to be of _Legendre type_, i.e., \(|\nabla f(x)|,|\nabla\varphi(x)|\to+\infty\) as \(|x|\to+\infty\). As a consequence (see, e.g., [8, Thm. 26.5]), the gradient map \(\nabla\varphi:\mathbb{R}^{n}\to\mathbb{R}^{n}\) is bijective, with \((\nabla\varphi)^{-1}=\nabla\varphi^{*}\). The potential \(\varphi\) and its conjugate \(\varphi^{*}\) induce the so-called _Bregman divergences_ \[\begin{split} D_{\varphi}(y,y^{\prime})&:=\varphi(y )-\varphi(y^{\prime})-\langle\nabla\varphi(y^{\prime}),y-y^{\prime}\rangle\\ D_{\varphi^{*}}(x,x^{\prime})&:=\varphi^{*}(x)- \varphi^{*}(x^{\prime})-\langle\nabla\varphi^{*}(x^{\prime}),x-x^{\prime} \rangle\end{split} \tag{7}\] and the following relation holds for any pair of points \(y,y^{\prime}\in\mathbb{R}^{n}\) and their "mirror images" \(x=\nabla\varphi(y),x^{\prime}=\nabla\varphi(y^{\prime})\): \[D_{\varphi}(y,y^{\prime})=D_{\varphi^{*}}(x^{\prime},x)\] (cf. [9, SS11.2] for details). We also require that, for each fixed \(x^{\prime}\in\mathbb{R}^{n}\), the map \(x\mapsto D_{\varphi^{*}}(x,x^{\prime})\) is _radially unbounded_, i.e., \(D_{\varphi^{*}}(x,x^{\prime})\to+\infty\) as \(|x|\to+\infty\). This will be satisfied, for example, if \(\varphi\) is strongly convex, i.e., there exists some \(\alpha>0\), such that \[\varphi(y^{\prime})\geq\varphi(y)+\langle\nabla\varphi(y),y^{\prime}-y\rangle +\frac{\alpha}{2}|y^{\prime}-y|^{2} \tag{8}\] for all \(y,y^{\prime}\in\mathbb{R}^{n}\). Radial unboundedness is needed for the invocation of the Lyapunov criterion for global asymptotic stability [3, Sec. 5.7]. **Remark 1**.: We assume that \(\varphi\) is finite on all of \(\mathbb{R}^{n}\) mainly to keep the exposition simple. It is not hard to adapt the analysis to the case when the potential function \(\varphi\) is defined on a closed convex set \(\mathsf{X}\subseteq\mathbb{R}^{n}\) with nonempty interior, and \(|\nabla\varphi(x)|\to+\infty\) as \(x\) approaches any point on the boundary of \(\mathsf{X}\). This corresponds to the problem of minimizing \(f(x)\) subject to the constraint \(x\in\mathsf{X}\). ### _Infinite-horizon optimal stabilizing controls_ Consider the time-invariant controlled dynamical system \[\dot{x}(t)=u(t). \tag{9}\] For any \(x_{0}\in\mathbb{R}^{n}\), let \(\mathcal{U}_{x_{0}}\) denote the class of all _stabilizing controls_ at \(x_{0}\), i.e., all locally essentially bounded maps \(u:[0,\infty)\to\mathbb{R}^{n}\), such that the trajectory \(x(t)\) of (9) with \(x(0)=x_{0}\) is defined for all \(t\geq 0\) and \(x(t)\to\bar{x}\) as \(t\to\infty\), where \(\bar{x}:=\nabla\varphi(\bar{y})\). We would like to minimize the cost \[J_{\infty}(x_{0},u(\cdot)):=\int_{0}^{\infty}q(x(t),u(t))\,\mathrm{d}t,\] over all \(u(\cdot)\in\mathcal{U}_{x_{0}}\), where \[q(x,u):=f(\nabla\varphi^{*}(x))+f^{*}(-u)+\langle u,\bar{y}\rangle. \tag{10}\] The class \(\mathcal{U}_{x_{0}}\) is evidently nonempty since, for example, the control \(u(t)=(\bar{x}-x_{0})\mathbf{1}_{\{0\leq t\leq 1\}}\) is stabilizing at \(x_{0}\). We denote by \(V(x_{0})\) the _value function_, i.e., infimum of \(J_{\infty}(x_{0},u(\cdot))\) over all \(u(\cdot)\in\mathcal{U}_{x_{0}}\). ### _The main result_ Let \(V(x):=D_{\varphi^{*}}(x,\bar{x})\). Theorem 1, stated and proved below, states that \(V\) is the value function for the above infinite-horizon optimal control problem, and that the mirror descent dynamics (4) is the closed-loop system corresponding to an optimal stabilizing control. 
Moreover, the value function \(V\) is also a global Lyapunov function for (4), and the point \(\bar{x}\) is its global asymptotically stable equilibrium. The proof makes essential use of the following lemma, which we will also need at several points in the sequel: **Lemma 1**.: _The function \(V\) has the following properties:_ 1. _It is_ \(C^{2}\) _and strictly convex._ 2. \(V(\bar{x})=0\)_, and_ \(V(x)>0\) _for_ \(x\neq\bar{x}\)_._ 3. \(V(x)\to+\infty\) _as_ \(|x|\to+\infty\)_._ _Moreover, the following inequality holds for \(\dot{V}(x,u):=\langle\nabla V(x),u\rangle\):_ \[\dot{V}(x,u)+q(x,u)\geq 0,\qquad x,u\in\mathbb{R}^{n} \tag{11}\] _and equality is attained iff \(u=-\nabla f(\nabla\varphi^{*}(x))\)._ Proof.: Items 1)-3) are immediate consequences of our assumptions on \(\varphi\). Moreover, a simple computation shows that \[\dot{V}(x,u)+q(x,u)=f(\nabla\varphi^{*}(x))+f^{*}(-u)+\langle u,\nabla\varphi^{*} (x)\rangle,\] which is nonnegative by the Fenchel-Young inequality. The equality condition in (11) follows by [8, Thm. 23.5]. **Theorem 1**.: _We have the following:_ 1. _For any stabilizing control_ \(u(\cdot)\in\mathcal{U}_{x_{0}}\) _and for all_ \(t\geq 0\)_,_ \[\int_{0}^{t}q(x(t),u(t))\,\mathrm{d}t\geq V(x_{0})-V(x(t)).\] (12) _In particular,_ \(J_{\infty}(x_{0},u(\cdot))\geq V(x_{0})\)_._ 2. _For each_ \(x_{0}\)_, the closed-loop system_ \(\dot{x}(t)=-\nabla f(\nabla\varphi^{*}(x(t)))\) _gives rise to an optimal stabilizing control_ \(u(t)=-\nabla f(\nabla\varphi^{*}(x(t)))\)_, such that_ \[J_{\infty}(x_{0},u(\cdot))=V(x_{0})=D_{\varphi}(x_{0},\bar{x}).\] _Moreover,_ \(V(x)\) _is a global Lyapunov function for the closed-loop system._ Proof.: Let \(x_{0}\in\mathbb{R}^{n}\) be given and consider an arbitrary stabilizing control \(u(\cdot)\in\mathcal{U}_{x_{0}}\). Then, for \(\dot{x}(t)=u(t)\) with \(x(0)=x_{0}\) we have \[V(x(t))-V(x_{0}) =\int_{0}^{t}\frac{\mathrm{d}}{\mathrm{d}s}V(x(s))\,\mathrm{d}s\] \[=\int_{0}^{t}\dot{V}(x(s),u(s))\,\mathrm{d}s\] \[\geq-\int_{0}^{t}q(x(s),u(s))\,\mathrm{d}s,\] where the last step follows from (11). Rearranging gives (12). Moreover, taking the limit as \(t\to\infty\) and using the fact that \(V(x(t))\to 0\) as \(t\to\infty\) since \(u(\cdot)\) is stabilizing, we get the inequality \(J_{\infty}(x_{0},u(\cdot))\geq V(x)\). Next, consider the closed-loop system \(\dot{x}(t)=-\nabla f(\nabla\varphi^{*}(x(t)))\), \(x(0)=x_{0}\), that generates the mirror descent flow. Then \(\bar{x}\) is evidently an equilibrium point since \(\nabla f(\nabla\varphi^{*}(\bar{x}))=\nabla f(\bar{y})=0\). Letting \(y(t):=\nabla\varphi^{*}(x(t))\) and using (11), we have \[\frac{\mathrm{d}}{\mathrm{d}t}V(x(t)) =\dot{V}(x(t),-\nabla f(y(t)))\] \[=-q(x(t),-\nabla f(y(t)))\] \[=\langle\nabla f(y(t)),\bar{y}\rangle-f^{*}(\nabla f(y(t)))-f(y (t))\] \[\leq f(\bar{y})-f(y(t)),\] which is strictly negative whenever \(y(t)\neq\bar{y}\) by the strict convexity of \(f\), or, equivalently, whenever \(x(t)\neq\bar{x}\) since \(\nabla\varphi^{*}\) is a bijection. Together with Lemma 1, this shows that \(V\) is a global Lyapunov function [3, Def. 5.7.1] for the above closed-loop system, so \(\bar{x}\) is a globally asymptotically stable equilibrium [3, Thm. 17]. Thus, the control \(u(t)=-\nabla f(\nabla\varphi^{*}(x(t)))\) is stabilizing at \(x_{0}\), and \(J_{\infty}(x_{0},u(\cdot))=V(x_{0})\) from the equality condition in (11). ### _Quantitative estimates_ Theorem 1 allows us to obtain quantitative estimates on the approach of the trajectory of (4) to equilibrium. 
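Before stating these estimates, the claims of Theorem 1 are easy to probe numerically. The sketch below is our own toy example (with \(\varphi(y)=\sum_{j}\cosh(y_{j})\), so that \(\nabla\varphi^{*}=\operatorname{arcsinh}\), and a quadratic objective); it integrates the closed-loop flow (4) and checks that \(V(x(t))=D_{\varphi^{*}}(x(t),\bar{x})\) decreases monotonically and that the accumulated cost \(\int q\,\mathrm{d}t\) approaches \(V(x_{0})\).

```python
import numpy as np

# Toy setup (ours): phi(y) = sum_j cosh(y_j), so grad(phi) = sinh, grad(phi*) = arcsinh,
# phi*(x) = sum_j [x_j arcsinh(x_j) - sqrt(1 + x_j^2)]; f(y) = 0.5 |y - y_bar|^2.
y_bar = np.array([1.0, -0.5])
x_bar = np.sinh(y_bar)                      # mirror image of the minimizer

def phi_star(x):
    return np.sum(x * np.arcsinh(x) - np.sqrt(1.0 + x**2))

def D_phi_star(x, xp):                      # Bregman divergence of phi*, cf. Eq. (7)
    return phi_star(x) - phi_star(xp) - np.dot(np.arcsinh(xp), x - xp)

def q(x, u):                                # instantaneous cost (10): f(grad phi*(x)) + f*(-u) + <u, y_bar>
    y = np.arcsinh(x)
    return 0.5 * np.sum((y - y_bar)**2) + (0.5 * np.sum(u**2) - np.dot(u, y_bar)) + np.dot(u, y_bar)

x = np.array([3.0, 2.0])                    # initial dual state x_0
V0 = D_phi_star(x, x_bar)
dt, cost, V_prev, monotone = 1e-3, 0.0, V0, True
for _ in range(40_000):
    u = -(np.arcsinh(x) - y_bar)            # optimal feedback u = -grad f(grad phi*(x))
    cost += q(x, u) * dt
    x = x + dt * u                          # Euler step of the closed-loop flow (4)
    V_now = D_phi_star(x, x_bar)
    monotone = monotone and (V_now <= V_prev + 1e-12)
    V_prev = V_now

print(f"V(x0) = {V0:.4f},  accumulated cost = {cost:.4f}")   # Theorem 1: these should agree
print("V(x(t)) decreased monotonically:", monotone)
```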
While similar estimates have been given in some earlier works [10, 11], the appeal of our optimal control perspective is that it allows to obtain such guarantees in a unified manner. It will be useful to introduce the following definition [12]: We say that the objective function \(f\) is \(\mu\)_-strongly convex_ (\(\mu\geq 0\)) w.r.t. the potential function \(\varphi\) if \[f(y^{\prime})\geq f(y)+\langle\nabla f(y),y^{\prime}-y\rangle+\mu D_{\varphi}( y^{\prime},y),\;y,y^{\prime}\in\mathbb{R}^{n}.\] (Obviously, if \(\mu=0\), this is simply convexity; when \(\mu>0\), the function \(f\) has some nonzero "curvature" in some neighborhood of each point \(x\), where the "geometry" is determined by the potential \(\varphi\).) **Theorem 2**.: _Let \((x(t),y(t))\), \(t\geq 0\), be the state and the output trajectories of the mirror descent dynamics (4) starting from \(x(0)=x_{0}\) and \(y(0)=y_{0}=\nabla\varphi^{*}(x_{0})\). Then the following holds for every \(t>0\):_ 1. _If_ \(f\) _is convex, then_ \[f(y(t))-f(\bar{y})\leq\frac{1}{t}D_{\varphi}(\bar{y},y_{0}).\] (13) 2. _If_ \(f\) _is_ \(\mu\)_-strongly convex w.r.t._ \(\varphi\) _then_ \[D_{\varphi}(\bar{y},y(t))\leq D_{\varphi}(\bar{y},y_{0})e^{-\mu t},\] (14) _and in that case the system (_4_) is exponentially stable._ Proof.: Let \(u(t)=-\nabla f(\nabla\varphi^{*}(x(t)))\) be the state feedback law that achieves \(V(x_{0})\). Then \[q(x(t),u(t)) =-\dot{V}(x(t),u(t))\] \[=\langle\nabla f(y(t)),y(t)-\bar{y}\rangle\] \[=f(y(t))-f(\bar{y})+D_{f}(\bar{y},y(t)).\] where the first equality is by Lemma 1 and the last equality follows by rearranging and using the definition \(D_{f}(y,y^{\prime})=f(y)-f(y^{\prime})-\langle f(y^{\prime}),y-y^{\prime}\rangle\). Therefore, using Theorem 1 and the fact that \(D_{f}(\cdot,\cdot)\geq 0\), we have \[V(x_{0}) \geq\int_{0}^{t}q(x(s),u(s))\,\mathrm{d}s\] \[=\int_{0}^{t}\{f(y(s))-f(\bar{y})\}\,\mathrm{d}s\] \[\geq t\big{(}f(y(t))-f(\bar{y})\big{)},\] where the last inequality follows from the fact that the value of the objective \(f\) decreases along the output trajectory \(y(t)\): \[\frac{\mathrm{d}}{\mathrm{d}t}f(y(t)) =\langle\nabla f(y(t)),\dot{y}(t)\rangle\] \[=\langle\nabla f(y(t)),\nabla^{2}\varphi^{*}(x(t))\dot{x}(t)\rangle\] \[=-\langle\nabla f(y(t)),\nabla^{2}\varphi^{*}(x(t))\nabla f(y(t))\rangle\leq 0\] -- since \(\varphi^{*}\) is \(C^{2}\) and strictly convex, its Hessian \(\nabla^{2}\varphi^{*}(x)\) is positive definite for all \(x\in\mathbb{R}^{n}\). Dividing by \(t\) and using the fact that \(V(x_{0})=D_{\varphi^{*}}(x_{0},\bar{x})=D_{\varphi}(\bar{y},y_{0})\), we get (13). When \(f\) is \(\mu\)-strongly convex, we have \[\frac{\mathrm{d}}{\mathrm{d}t}V(x(t)) =\dot{V}(x(t),u(t))\] \[=\langle\nabla f(y(t)),\bar{y}-y(t)\rangle\] \[\leq f(\bar{y})-f(y(t))-\mu D_{\varphi}(\bar{y},y(t))\] \[=f(\bar{y})-f(y(t))-\mu D_{\varphi^{*}}(x(t),\bar{x})\] \[=f(\bar{y})-f(y(t))-\mu V(x(t)).\] Integrating gives the estimate \[V(x(t))\leq e^{-\mu t}V(x_{0})+\int_{0}^{t}e^{-\mu(t-s)}\{f(\bar{y})-f(y(s))\} \,\mathrm{d}s\] which yields (14) since \(f(\bar{y})\leq f(y)\) for all \(y\). ### _A simple example_ As a simple illustration, consider the quadratic objective \(f(x)=\frac{1}{2}|Ax-b|^{2}\) with \(A\in\mathbb{R}^{p\times n}\) and \(b\in\mathbb{R}^{p}\) and the quadratic potential \(\varphi(x)=\frac{1}{2}|x|^{2}\). Assume that \(A^{r}A\) is nonsingular. 
Then the instantaneous cost \(q(x,u)\) in (10) takes the form \[q(x,u)=\frac{1}{2}|Ax-b|^{2}+\frac{1}{2}\langle u,(A^{r}A)^{-1}u\rangle-\frac{ 1}{2}|A\bar{y}-b|^{2},\] where \(\bar{y}=(A^{r}A)^{-1}A^{r}b\) is the unique minimizer of \(f\). Thus, the control-theoretic interpretation of the gradient flow for this problem naturally leads to infinite-horizon optimal stabilization of a linear system with a quadratic cost. ## III The stochastic problem We now consider a stochastic version of continuous-time mirror descent, dubbed _mirror Langevin dynamics_, or MLD [5, 6, 7]. The MLD generates a pair of random trajectories \((X_{t},Y_{t})_{t\geq 0}\) according to (6). The words "Langevin dynamics" allude to the fact that, with the quadratic potential \(\varphi(x)=\frac{1}{2}|x|^{2}\), (6) reduces to the usual Langevin dynamics \[\mathrm{d}X_{t}=-\nabla f(X_{t})\,\mathrm{d}t+\sqrt{2\varepsilon}\,\mathrm{d }W_{t}.\] The use of MLD is mainly in the context of sampling, where one makes use of the fact that the steady-state probability density of \(Y_{t}\) is proportional to \(e^{-f/\varepsilon}\). Moreover, since this limiting density concentrates on the set of global minimizers of \(f\) as \(\varepsilon\downarrow 0\), the sampling problem is intimately related to the problem of minimizing \(f\). ### _Some preliminaries_ In addition to the conditions imposed on \(f\) and \(\varphi\) in Sec. II-A, we also assume the following: * the objective function \(f\) has a Lipschitz-continuous gradient; * the potential function \(\varphi\) is \(C^{2}\), strongly convex, cf. (8), and has the _modified self-concordance property_[7], i.e., there exists some constant \(c>0\), such that \[\|\sqrt{\nabla^{2}\varphi(x)}-\sqrt{\nabla^{2}\varphi(x^{\prime})}\|_{2}\leq c |x-x^{\prime}|\] for all \(x,x^{\prime}\in\mathbb{R}^{n}\), where \(\|\cdot\|_{2}\) is the \(2\)-Schatten (or Hilbert-Schmidt) norm. In particular, the above assumption on \(\varphi\) implies that \(\varphi^{*}\) has a Lipschitz-continuous gradient [13, Thm. 4.2.1] and that the map \(x\mapsto\sqrt{(\nabla^{2}\varphi^{*}(x))^{-1}}\) is Lipschitz-continuous [7]. ### _A finite-horizon optimal control problem_ We work in the usual setting of controlled diffusion processes [14, SSVI.3-4]. Let \((\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\geq 0},\mathbf{P})\) be a probability space with a complete and right-continuous filtration, and let \((W_{t})_{t\geq 0}\) be a standard \(n\)-dimensional \((\mathcal{F}_{t})\)-Brownian motion. Let a finite horizon \(0<T<\infty\) be given. An _admissible control_ (of state feedback type) is any measurable function \(u:\mathbb{R}^{n}\times[0,T]\to\mathbb{R}^{n}\), such that the Ito SDE \[\mathrm{d}X_{t}=u(X_{t},t)\,\mathrm{d}t+\sqrt{2\varepsilon(\nabla^{2}\varphi^ {*}(X_{t}))^{-1}}\,\mathrm{d}W_{t} \tag{15}\] has a unique strong solution for all \(t\in[0,T]\) and for any deterministic initial condition \(X_{0}=x_{0}\) (cf. [15, SS5.2] for details). For each \(t\in[0,T]\) define the _expected cost-to-go_ \[J(x,t;u(\cdot))\] \[:=\mathbb{E}\Bigg{[}\int_{t}^{T}q(X_{s},u_{s})\,\mathrm{d}s+D_{ \varphi^{*}}(X_{T},\bar{x})\Bigg{|}X_{t}=x\Bigg{]}, \tag{16}\] with the instantaneous cost \(q\) the same as in (10), where \(u_{t}\) is shorthand for \(u(X_{t},t)\), and let \[V(x,t):=\inf_{u(\cdot)\text{ admissible}}J(x,t;u(\cdot)) \tag{17}\] be the value function. We say that an admissible control \(u(\cdot)\) is optimal if \(J(x,t;u(\cdot))=V(x,t)\) for all \(x\in\mathbb{R}^{n}\) and all \(t\in[0,T]\). 
Observe that, in contrast with the deterministic infinite-horizon problem posed in Sec. II-B, here we are dealing with a finite-horizon stochastic problem, and there is, in addition to the instantaneous cost \(q\), also a terminal cost \(D_{\varphi^{*}}(\cdot,\bar{x})\). The form of the performance criterion in (16) is reminiscent of the Brezis-Ekeland action functional (3). ### _The main result_ **Theorem 3**.: _The value function in (17) is equal to_ \[V(x,t)=D_{\varphi^{*}}(x,\bar{x})+\varepsilon n(T-t), \tag{18}\] _and the feedback control \(u(x,t)=-\nabla f(\nabla\varphi(x))\) is optimal._ Proof.: We use the verification theorem from the theory of controlled diffusions [14, SSVI.4]. We associate to the controlled diffusion process (15) a family of infinitesimal generators \((\mathcal{L}^{u}:u\in\mathbb{R}^{n})\), where \(\mathcal{L}^{u}\) is the second-order linear differential operator \[\mathcal{L}^{u}:=\sum_{i=1}^{n}u_{i}\frac{\partial}{\partial x_{i}}+\varepsilon \sum_{i,j=1}^{n}\big{(}\nabla^{2}\varphi^{*}(x)\big{)}_{ij}^{-1}\frac{ \partial^{2}}{\partial x_{i}\partial x_{j}}.\] Then it is readily verified that the function \(V\) defined in (18) is a solution of the _Hamilton-Jacobi-Bellman equation_ \[\frac{\partial}{\partial t}V(x,t)+\min_{u\in\mathbb{R}^{d}}\big{\{}\mathcal{L}^{ u}V(x,t)+q(x,u)\big{\}}=0 \tag{19}\] on \(\mathbb{R}^{n}\times[0,T]\) with the terminal condition \(V(x,T)=D_{\varphi^{*}}(x,\bar{x})\). Indeed, since \[\frac{\partial}{\partial t}V(x,t) =-\varepsilon n,\] \[\nabla V(x,t) =\nabla\varphi^{*}(x)-\nabla\varphi^{*}(\bar{x}),\] \[\nabla^{2}V(x,t) =\nabla^{2}\varphi^{*}(x)\] we can follow the same argument as in the proof of Lemma 1 to show that, for any \(u\in\mathbb{R}^{n}\), we have \[\frac{\partial}{\partial t}V(x,t)+\mathcal{L}^{u}V(x,t)+q(x,u)\] \[=-\varepsilon n+\langle u,\nabla V(x,t)\rangle+\varepsilon\, \mathrm{tr}\big{\{}(\nabla^{2}\varphi^{*}(x))^{-1}\nabla^{2}V(x,t)\big{\}}\] \[\qquad\qquad\qquad+f(\nabla\varphi^{*}(x))+f^{*}(-u)+\langle u, \nabla\varphi^{*}(\bar{x})\rangle\] \[=f(\nabla\varphi^{*}(x))+f^{*}(-u)+\langle u,\nabla\varphi^{*}(x )\rangle\geq 0,\] with equality iff \(u=-\nabla f(\nabla\varphi^{*}(x))\). Thus, \(V(x,t)\) is a solution of the HJB equation (19), and evidently \(V(x,T)=D_{\varphi^{*}}(x,\bar{x})\). Then, by Theorem 4.1 in [14, SSVI.4], \(V(x,t)\) is the value function in (17), and the control given by \[u(x,t) =\operatorname*{arg\,min}_{u\in\mathbb{R}^{n}}\big{\{}\mathcal{L }^{u}V(x,t)+q(x,u)\big{\}}\] \[=\operatorname*{arg\,min}_{u\in\mathbb{R}^{n}}\big{\{}f(\nabla \varphi^{*}(x))+f^{*}(-u)+\langle u,\nabla\varphi^{*}(x)\big{\}}\] \[=-\nabla f(\nabla\varphi^{*}(x))\] is optimal (note that it is also time-invariant). This control is admissible since, by our assumptions on \(f\) and \(\varphi\), the maps \(b(x):=-\nabla f(\nabla\varphi^{*}(x))\) and \(\sigma(x):=\sqrt{2\varepsilon(\nabla^{2}\varphi^{*}(x))^{-1}}\) are Lipschitz-continuous and have at most linear growth, i.e., there exists a constant \(K>0\), such that \[|b(x)-b(x^{\prime})|+\|\sigma(x)-\sigma(x^{\prime})\|_{2}\leq K|x- x^{\prime}|,\] \[|b(x)|^{2}+\|\sigma(x)\|_{2}^{2}\leq K(1+|x|^{2})\] for all \(x,x^{\prime}\in\mathbb{R}^{n}\). Consequently, with the choice of \(u(x,t)=b(x)\), the SDE (15) has a unique strong solution [15, SS5.2, Thm. 2.5], so \(u(\cdot)\) is indeed admissible. 
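A small Monte Carlo experiment (our own toy setup, reusing \(\varphi(y)=\sum_{j}\cosh(y_{j})\) and the quadratic objective from the deterministic sketch) can be used to check Theorem 3: along the closed-loop dynamics (6), the empirical average of \(\int_{0}^{T}q\,\mathrm{d}t+D_{\varphi^{*}}(X_{T},\bar{x})\) should approach \(V(x_{0},0)=D_{\varphi^{*}}(x_{0},\bar{x})+\varepsilon nT\), up to Monte Carlo and discretization error.

```python
import numpy as np

rng = np.random.default_rng(3)

# Same toy setup as the deterministic sketch: phi(y) = sum_j cosh(y_j), f(y) = 0.5 |y - y_bar|^2.
y_bar = np.array([1.0, -0.5])
x_bar = np.sinh(y_bar)
eps, T, dt, n_paths = 0.05, 5.0, 1e-3, 200
n, steps = len(y_bar), int(round(T / dt))

def phi_star(x):
    return np.sum(x * np.arcsinh(x) - np.sqrt(1.0 + x**2))

def D_phi_star(x, xp):
    return phi_star(x) - phi_star(xp) - np.dot(np.arcsinh(xp), x - xp)

x0 = np.array([2.0, 1.0])
total = 0.0
for _ in range(n_paths):
    x, cost = x0.copy(), 0.0
    for _ in range(steps):
        y = np.arcsinh(x)
        cost += np.sum((y - y_bar)**2) * dt          # q(x, u) along the optimal feedback (see above)
        drift = -(y - y_bar)                         # u(x, t) = -grad f(grad phi*(x))
        # Hessian of phi* is diag(1/sqrt(1+x^2)), so 2*eps*(Hess phi*)^{-1} = 2*eps*diag(sqrt(1+x^2))
        noise = np.sqrt(2.0 * eps * np.sqrt(1.0 + x**2) * dt) * rng.standard_normal(n)
        x = x + drift * dt + noise                   # Euler-Maruyama step of the closed-loop SDE (6)
    total += cost + D_phi_star(x, x_bar)             # running cost plus terminal cost of (16)

print(f"empirical expected cost : {total / n_paths:.3f}")
print(f"value function, Eq.(18) : {D_phi_star(x0, x_bar) + eps * n * T:.3f}")
```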
### _Quantitative estimates_ **Theorem 4**.: _Let \((X_{t},Y_{t})\), \(t\geq 0\), be the random state and output trajectories of the mirror Langevin dynamics (6) with deterministic initial condition \(X_{0}=x_{0}\) and \(Y_{0}=y_{0}=\nabla\varphi^{*}(x_{0})\). Then the following holds for every \(T>0\):_ 1. _If_ \(f\) _is convex, then_ \[\frac{1}{T}\mathbf{E}\Bigg{[}\int_{0}^{T}\{f(Y_{t})-f(\bar{y})\} \,\mathrm{d}t\Bigg{|}Y_{0}=y_{0}\Bigg{]}\] \[\qquad\leq\frac{1}{T}D_{\varphi}(\bar{y},y_{0})+\varepsilon n.\] (20) 2. _If_ \(f\) _is_ \(\mu\)_-strongly convex w.r.t._ \(\varphi\)_, for_ \(\mu>0\)_, then_ \[\mathbf{E}[D_{\varphi}(\bar{y},Y_{t})|Y_{0}=y_{0}]\] \[\qquad\leq D_{\varphi}(\bar{y},y_{0})e^{-\mu t}+\frac{\varepsilon n }{\mu}(1-e^{-\mu T}).\] (21) Proof.: Let \(u_{t}:=-\nabla f(\nabla\varphi^{*}(X_{t}))\). Then, proceeding just like in the proof of Theorem 2, we can write \[q(X_{t},u_{t}) =f(Y_{t})+f^{*}(-u_{t})+\langle u_{t},\bar{y}\rangle\] \[=f(Y_{t})-f(\bar{y})+D_{f}(\bar{y},Y_{t})\] \[\geq f(Y_{t})-f(\bar{y}).\] Using this together with Theorem 3 gives \[D_{\varphi}(\bar{y},y_{0})+\varepsilon nT=D_{\varphi}(x_{0},\bar{ x})+\varepsilon nT\] \[=\mathbf{E}\Bigg{[}\int_{0}^{T}q(X_{t},u_{t})\,\mathrm{d}t+D_{ \varphi}(X_{T},\bar{x})\Bigg{|}X_{0}=x_{0}\Bigg{]}\] \[\geq\mathbf{E}\Bigg{[}\int_{0}^{T}\{f(Y_{t})-f(\bar{y})\}\, \mathrm{d}t\Bigg{|}X_{0}=x_{0}\Bigg{]}.\] Dividing both sides by \(T>0\) and using the fact the \(\sigma\)-algebras \(\sigma(X_{t}:t\in[0,T])\) and \(\sigma(Y_{t}:t\in[0,T])\) coincide since \(\nabla\varphi^{*}\) is a bijection, we obtain (20). When \(f\) is \(\mu\)-strongly convex, we have \[f(\bar{y})-f(Y_{t})\geq\langle\nabla f(Y_{t}),\bar{y}-Y_{t}\rangle+\mu D_{ \varphi}(\bar{y},Y_{t}). \tag{22}\] On the other hand, by Ito's lemma and by (19), \[V(X_{t},t)=V(X_{0},0)+\int_{0}^{t}\langle\nabla f(Y_{s}),\bar{y}-Y_{s}\rangle\, \mathrm{d}s+M_{t}, \tag{23}\] where \(M_{t}\) is a zero-mean \((\mathcal{F}_{t})\)-martingale. Since \[V(X_{s},s) =D_{\varphi*}(X_{s},\bar{x})+\varepsilon n(T-s)\] \[=D_{\varphi}(\bar{y},Y_{s})+\varepsilon n(T-s),\] combining (22) and (23) and then taking expectations given \(Y_{0}=y_{0}\) yields \[\mathbf{E}[D_{\varphi}(\bar{y},Y_{t})|Y_{0}=y_{0}]\] \[\leq D_{\varphi}(\bar{y},y_{0})+\varepsilon nt-\mu\int_{0}^{t} \mathbf{E}[D_{\varphi}(\bar{y},Y_{s})|Y_{0}=y_{0}]\,\mathrm{d}s\] for all \(t\in[0,T]\). Gronwall's inequality gives (21). Note that, in contrast with the deterministic setting (cf. 
Theorem 2), when the objective function \(f\) is not strongly convex, we only have guarantees on the expected average objective \(\mathbf{E}[\frac{1}{T}\int_{0}^{T}f(Y_{t})\,\mathrm{d}t|Y_{0}=y_{0}]\), which, owing to the convexity of \(f\), translates into an optimization error estimate for the time average of the trajectory, \(\tilde{Y}_{T}:=\frac{1}{T}\int_{0}^{T}Y_{t}\,\mathrm{d}t\): \[\mathbf{E}[f(\tilde{Y}_{T})-f(\bar{y})|Y_{0}=y_{0}]\] \[\leq\frac{1}{T}\mathbf{E}\Bigg{[}\int_{0}^{T}\{f(Y_{t})-f(\bar{y})\}\,\mathrm{d}t\Bigg{|}Y_{0}=y_{0}\Bigg{]}\] \[\leq\frac{1}{T}D_{\varphi}(\bar{y},y_{0})+\varepsilon n.\] However, as the following result shows, in the low-noise regime (i.e., for all sufficiently small \(\varepsilon\)), with high probability, the MLD output trajectory \((Y_{t})_{0\leq t\leq T}\) closely tracks the deterministic mirror-descent output trajectory \((y(t))_{0\leq t\leq T}\) with the same initial condition \(Y_{0}=y(0)=y_{0}\): **Theorem 5**.: _There exist positive time-independent constants \(C_{i}\), \(i=1,2,3\), such that, for every \(0<\varepsilon\leq\frac{1}{C_{1}T}e^{-C_{2}T}\), the following estimate holds with probability at least \(1-\delta\):_ \[\sup_{0\leq t\leq T}|f(Y_{t})-f(y(t))|\leq\frac{C_{3}}{T}\sqrt{n\log\frac{n}{\delta}}. \tag{24}\] Proof.: Let \(\Delta_{t}\coloneqq|Y_{t}-y(t)|\). The following estimate holds by the Lipschitz continuity of \(\nabla f\): \[f(Y_{t})-f(y(t))\leq\langle\nabla f(y(t)),Y_{t}-y(t)\rangle+\frac{L_{f}}{2}|Y_{t}-y(t)|^{2},\] where \(L_{f}\) is the Lipschitz constant of \(\nabla f\). Moreover, the gradient norms \(|\nabla f(y(t))|\) are uniformly bounded, since \[|\nabla f(y(t))| \leq|\nabla f(y(t))-\nabla f(y(0))|+|\nabla f(y(0))|\] \[\leq L_{f}|y(t)-y(0)|+|\nabla f(y(0))|\] \[\leq L_{f}|y(t)-\bar{y}|+L_{f}|y(0)-\bar{y}|+|\nabla f(y(0))|\] \[\leq 2L_{f}\sqrt{\frac{2}{\alpha}D_{\varphi}(\bar{y},y(0))}+|\nabla f(y(0))|\eqqcolon K_{0},\] which in turn implies that \[\sup_{0\leq t\leq T}|f(Y_{t})-f(y(t))|\leq K_{0}\sup_{0\leq t\leq T}\Delta_{t}+\frac{L_{f}}{2}\sup_{0\leq t\leq T}\Delta_{t}^{2}. \tag{25}\] Define the matrix-valued process \((\xi_{t})_{0\leq t\leq T}\) by \(\xi_{t}\coloneqq\sqrt{(\nabla^{2}\varphi^{*}(X_{t}))^{-1}}\). For each \(t\in[0,T]\), we have \[|X_{t}-x(t)|\] \[\leq L_{f}\int_{0}^{t}|Y_{s}-y(s)|\,\mathrm{d}s+\sqrt{2\varepsilon}\sup_{0\leq t\leq T}\bigg{|}\int_{0}^{t}\xi_{s}\,\mathrm{d}W_{s}\bigg{|}.\] By our assumptions on \(\varphi\), there exist positive constants \(\kappa_{2}\geq\kappa_{1}>0\), such that the eigenvalues of \(\nabla^{2}\varphi^{*}(x)\) lie in the interval \([\kappa_{1},\kappa_{2}]\). Hence, the process \(\xi_{t}\) is uniformly bounded, so the quadratic variations of the matrix entries \([\xi^{ij}]_{t}\), \(1\leq i,j\leq n\), are uniformly bounded by a positive multiple of \(t\). Hence, by the time-change theorem for martingales [15, §3.4, Thm.
4.6], there exist a constant \(\kappa>0\) and a standard \(n\)-dimensional Brownian motion \((B_{t})_{t\geq 0}\), such that \[\sup_{0\leq t\leq T}\bigg{|}\int_{0}^{t}\xi_{s}\,\mathrm{d}W_{s}\bigg{|}\leq\sup_{0\leq t\leq\kappa T}|B_{t}|.\] Since \(\nabla\varphi^{*}\) is Lipschitz-continuous, we have \[\Delta_{t} \leq L_{\varphi^{*}}|X_{t}-x(t)|\] \[\leq L_{\varphi^{*}}L_{f}\int_{0}^{t}\Delta_{s}\,\mathrm{d}s+\sqrt{2\varepsilon}L_{\varphi^{*}}\sup_{0\leq t\leq\kappa T}|B_{t}|.\] Gronwall's inequality therefore gives \[\sup_{0\leq t\leq T}\Delta_{t}\leq\sqrt{2\varepsilon}L_{\varphi^{*}}\sup_{0\leq t\leq\kappa T}|B_{t}|\,e^{L_{\varphi^{*}}L_{f}T}. \tag{26}\] If \(\varepsilon\leq\frac{1}{L_{\varphi^{*}}^{6}T^{3}}e^{-2L_{\varphi^{*}}L_{f}T}\), then, using (26) in (25), we obtain \[\sup_{0\leq t\leq T}|f(Y_{t})-f(y(t))|\] \[\leq\frac{\sqrt{2}K_{0}}{T^{3/2}}\sup_{0\leq t\leq\kappa T}|B_{t}|+\frac{L_{f}}{T^{3}}\sup_{0\leq t\leq\kappa T}|B_{t}|^{2}.\] By the reflection principle for the Brownian motion [15, p. 96], for every \(r>0\), \[\mathbf{P}\left\{\sup_{0\leq t\leq\kappa T}|B_{t}|\geq r\right\}\leq 2\mathbf{P}\left\{|B_{\kappa T}|\geq r\right\}\leq 4ne^{-r^{2}/(2n\kappa T)},\] and therefore \[\sup_{0\leq t\leq T}|f(Y_{t})-f(y(t))|\leq\frac{\tilde{C}}{T}\sqrt{n\log\frac{n}{\delta}}\] with probability at least \(1-\delta\), where \(\tilde{C}\) is a constant that depends on \(K_{0},L_{f},\kappa\). ## IV Conclusion and future directions In this paper, we have presented an interpretation of deterministic and stochastic continuous-time mirror descent methods in the framework of "inverse optimal control" [16]--that is, given an autonomous (i.e., control-free) dynamical system, identify a controlled dynamical system and a cost criterion, such that the autonomous dynamics can be viewed as the closed-loop system corresponding to an optimal control. An intriguing direction for future research is to interpret other optimization methods, such as the heavy-ball method [17], through the inverse optimal control lens. ## Acknowledgments The authors would like to thank Jelena Diakonikolas for pointing them to Ref. [2], which was the initial inspiration for this work, and also Anatoli Juditsky, Philippe Rigollet, and Matus Telgarsky for insightful comments and suggestions.
2308.13618
Exponential mixing for singular skew-products
We study skew-products of the form $(x,u) \mapsto (fx, u + \varphi(x))$ where $f$ is a non-uniformly expanding map on a manifold $X$ and $\varphi: X \to \mathbb{S}^1$ is piecewise $\mathcal{C}^1$. If the system satisfies mild assumptions (in particular, singular behaviour of $\varphi$ is permitted), then we prove that the map mixes exponentially with respect to the unique SRB measure. This extends previous results by allowing singular behaviour in the fibre map.
Oliver Butterley
2023-08-25T18:30:31Z
http://arxiv.org/abs/2308.13618v2
# Exponential mixing for singular skew-products ###### Abstract. We study skew-products of the form \((x,u)\mapsto(fx,u+\varphi(x))\) where \(f\) is a non-uniformly expanding map on a manifold \(X\) and \(\varphi:X\to\mathbb{S}^{1}\) is piecewise \(\mathcal{C}^{1}\). If the systems satisfies mild assumptions (in particular singular behaviour of \(\varphi\) is permitted) then we prove that the map mixes exponentially with respect to the unique SRB measure. This extends previous results by allowing singular behaviour in the fibre map. ## 1. Introduction A deceptively simple transformation on \([0,1)^{2}\) is defined as the skew-product \((x,u)\mapsto([2x],[u+\operatorname{dist}(x,b)^{a}])\) for some \(a\in(0,1)\), \(b\in[0,1)\). Although simple to write this example presents an interesting and non-trivial difficulty. This is the motivating example for this work. The present work is devoted to exploring the technology which can be used to prove exponential mixing for this and other settings. We will permit a rather general setting (arbitrary dimension, general classes of maps). Relatively few concrete examples which have a neutral direction are known to mix exponentially, here we increase this collection. Skew products with some resemblance to this example have been studied by observing that there is an invariant unstable cone field (in [15] this was explicit, in [3, 10] this idea was still present in the assumptions). However this isn't satisfied by the above mentioned system. The singular behaviour which we permit in this present work is a significant problem for the established methods because it is impossible to have an invariant unstable cone field which is uniformly bounded away from the neutral direction. For want of a better expression we call this "unbounded twist". Nevertheless we show that the singular behaviour does not prevent good limit theorems and, in particular, our results prove that the motivating example mixes exponentially. In this present work we solve the unbounded twist issue by inducing. We present a general framework and then develop the application to a specific example. In cases like the above example we will induce even though it seems that the base map doesn't need any inducing. This "over-inducing" suffices to solve the unbounded twist issue. The key idea is to, if required, view the singularities of the fibre map as artificial singularities of the map in the construction of the induced system. All of this means that we demonstrate that a very large class of partially hyperbolic systems mix exponentially. Let \(f\) be a transformation on a compact manifold \(X\) and let \(\varphi:X\to\mathbb{S}^{1}\) be a function from manifold to the circle \(\mathbb{S}^{1}\). We will call \(f\) the _base map_ and \(\varphi\) the _fibre map_. This article is devoted to the study of partially hyperbolic systems defined as the skew-product \(f_{\varphi}:X\times\mathbb{S}^{1}\to X\times\mathbb{S}^{1}\), \[f_{\varphi}:(x,u)\mapsto(fx,u+\varphi(x)).\] In order to prove the results we will use the existence of an induced system (Young tower with exponential tails) in the sense that there exists a connected subset \(Y\subset X\) and a piecewise constant inducing time \(R:Y\to\mathbb{N}\) such that the induced map \(F:x\to f^{R(x)}(x)\) is a full branch Markov map, \(\mathcal{C}^{2}\) on partition elements. 
Defining the induced fibre map as \(\Phi(x)=\sum_{k=0}^{R(x)-1}\varphi(f^{k}x)\) we consider the induced skew-product \(F_{\Phi}:Y\times\mathbb{S}^{1}\to Y\times\mathbb{S}^{1}\), \[F_{\Phi}:(x,u)\mapsto(Fx,u+\Phi(x)).\] The principal aim of this work is to allow weak control on the fibre map, in particular to allow \(D(\varphi\circ f|_{\omega}^{-1})\) to be unbounded. In one part of this work we will show that if the induced skew-product satisfies certain assumptions then the original system mixes exponentially. In the other part we introduce assumptions (with the emphasis on verifiability) which suffice to show that the induced system satisfies the previously mentioned assumptions. The problem of exponential mixing for partially hyperbolic maps like these skew-products (similar to flows) is rather difficult because of the neutral direction and the singularities which we permit in the fibre map. Because of the neutral direction, in order to study the rate of mixing and other strong statistical properties, we must use some form of Dolgopyat estimate and observe oscillatory cancellations [19]. Recent years have seen significant progress for such results for systems with a neutral direction (e.g., [15, 17, 18, 33, 36, 37]), all using methods based on the work of Dolgopyat to some extent. In particular the results have been extended to the Lorenz flow and systems inspired by it [6, 7, 8, 11, 14]. Our purpose here is to give a general result for skew-products with (possibly) some degree of singular behaviour and in the process clarify exactly the reach and limits of the notions. Similar systems to the ones studied here were previously introduced [31] as a model for the Lorenz flow (in this reference it was claimed that these systems mix exponentially but there was a gap in the proof). Although the neutral direction causes significant technical difficulty, in our case we fortunately can use (to a large extent) the work of Gouezel [28] which includes the study of induced skew-products, in particular used to obtain results concerning Farey sequences. As observed before, a key point here is that we induce even if the base map is already uniformly hyperbolic, in order to obtain the required property for the induced fibre map. In some applications, including the motivating example, a large deviations result for the expanding map (for a function based on distance to singularity) implies that the (artificially) induced system has trajectories which behave well with respect to the singularities of the fibre map. Using this together with the fact that the singularities of the fibre map aren't worse than distance to some negative power, means that the induced fibre map has the required regularity. _Remark 1.1_.: Most likely these results would extend easily to the hyperbolic case when the stable foliation is at least \(\mathcal{C}^{1}\) by disintegration along the stable foliation (e.g., using [16] in a similar way as was done in [12]). However such regularity is arguably not typical in relevant settings [29]. _Remark 1.2_.: We will mostly avoid the details related to possible discontinuities in the system. Discontinuities require delicate control, particularly in higher dimension when oscillatory cancellation arguments are required (e.g., [21, 22, 23, 35]). However, inducing, as we do here for other motives, can be useful for avoiding the problems related to discontinuities (e.g., [32]). 
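As a concrete aside before the general setting, the motivating example is easy to experiment with numerically. The following Monte Carlo sketch was written for this text and is not part of the paper; the observable, the parameter values and the sample size are arbitrary choices, and \(\operatorname{dist}(x,1)\) is read as \(1-x\) on \([0,1]\), as in Theorem 3 below. Since the doubling map preserves Lebesgue measure and the fibre map acts by rotations, the relevant invariant measure here is \(\nu=\operatorname{Leb}\times\operatorname{Leb}\).

```python
# Illustrative Monte Carlo estimate of correlation decay for the motivating
# example (not part of the paper; parameters and observables are arbitrary).
# Base map: x -> 2x mod 1.  Fibre map: phi(x) = (1 - x)^a, added mod 1.
import numpy as np

rng = np.random.default_rng(1)
a = 0.5
N = 400_000                                   # initial points drawn from Leb x Leb

g = lambda u: np.cos(2.0 * np.pi * u)         # Holder observable of the fibre coordinate
x = rng.random(N)
u = rng.random(N)
h0 = g(u)                                     # observable evaluated at time 0

print(" n   |correlation|")
for n in range(1, 13):
    u = (u + (1.0 - x) ** a) % 1.0            # fibre update uses the current base point
    x = (2.0 * x) % 1.0                       # base update
    corr = np.mean(g(u) * h0) - np.mean(g(u)) * np.mean(h0)
    print(f"{n:2d}   {abs(corr):.2e}")        # expected to decay until the O(1/sqrt(N)) noise floor
```

The printed correlations should shrink roughly geometrically in \(n\) (until Monte Carlo noise dominates), which is the behaviour that Theorem 3 below establishes rigorously.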
_Remark 1.3_.: Suspension semiflows and skew products as we study here share many similarities, in particular the investigation of the rate of mixing requires exactly the same estimates (the "twisted transfer operators" are identical). As such, the results here can be relatively easily transferred to the corresponding suspension semiflow setting. Alternatively, the extension to general compact group extensions (see e.g., [20]) is feasible although non-trivial and in this work we choose not to take this road. _Remark 1.4_.: In this present work we assume that the base map is at least \(\mathcal{C}^{2}\). Most likely the results can be extended to the \(\mathcal{C}^{1}\) plus Holder derivative case by already established ideas (e.g., [6, 17]). _Remark 1.5_.: In this present work our focus is on exponential mixing but there are various other relevant statistical properties, for example, central limit theorem, local limit theorem, almost-sure invariance principal, etc. It is to be expected that the main estimates we obtain during this work and which are used for proving the exponential mixing can also be used for proving the other statistical limit laws (see e.g., arguments contained in [6, 8]). (See also [24, 25, 26] for some related results.) ## 2. Setting & results Although not part of the motivating example, technology developed for non-uniformly expanding (NUE) systems will be invaluable for tackling the problem. The following four conditions are as introduced by Young [38, 39] and this structure is often called a _Young tower_ structure. **Definition 2.1**.: We say that \(f:X\to X\) is _NUE in the sense of Young_ to mean the tuple \((f,X,\mu,Y,R)\) where: * \(X\) is a compact Riemannian manifold (possibly with boundary), endowed with a Borel measure \(\mu\) (called the reference measure); * \(f\) is a nonsingular transformation on \(X\); * \(Y\) (called the base of the induced system) is a connected open subset of \(X\) with finite measure and finite diameter and there exists a finite or countable partition \(\{Y_{\ell}\}_{\ell\in\Lambda}\) of a full measure subset of \(Y\); * \(R:Y\to\mathbb{N}\) (called the return time) is a function constant on each partition element \(Y_{\ell}\); Such that the following properties are satisfied: **Y1:**: For each \(\ell\in\Lambda\) let \(R_{\ell}\) denote the constant value that \(R\) takes on \(Y_{\ell}\). For all \(\ell\in\Lambda\), the restriction of \(f^{R_{\ell}}\) to \(Y_{\ell}\) is a diffeomorphism between \(Y_{\ell}\) and \(Y\), satisfying \(\kappa\,\|v\|\leq\|Df^{R_{\ell}}(x)v\|\leq C_{\ell}\,\|v\|\) for any \(x\in Y_{\ell}\) and for any tangent vector \(v\) at \(x\), for some constants \(\kappa>1\) (independent of \(\ell\) and \(C_{\ell}\). We denote by \(F:Y\to Y\) the map which is equal to \(f^{R_{\ell}}\) on each set \(Y_{\ell}\). **Y2:**: Let \(\mathcal{H}_{F}^{n}\) denote the set of inverse branches of \(F^{n}\). Let \(J(x)\) be the inverse of the Jacobian of \(F\) at \(x\) with respect to \(\mu\). We assume that there exists a constant \(C>0\) such that, for any \(\xi\in\mathcal{H}_{F}^{1}\), \(\|D((\log J)\circ\xi)\|\leq C\). **Y3:**: There exists a constant \(C>0\) such that, for any \(\ell\), if \(\xi_{\ell}:Y\to Y_{\ell}\) denotes the corresponding inverse branch of \(F\), for any \(k\leq R_{\ell}\), then \(\|f^{k}\circ\xi_{\ell}\|_{\mathcal{C}^{1}(Y)}\leq C\). **Definition 2.2**.: Suppose that \(f:X\to X\) is NUE in the sense of Young. 
Then \(f\) is said to have _exponential tails_ if there exists \(\sigma_{0}>0\) such that \(\int_{Y}e^{\sigma_{0}R}\;d\mu<\infty\). If \(f:X\to X\) is NUE in the sense of Young with exponential tails then it is known [38] that there exists a probability measure \(\tilde{\mu}\) on \(X\) which is absolutely continuous with respect to \(\mu\), invariant under \(f\) and ergodic. Moreover, if \(f\) is mixing for \(\tilde{\mu}\), then it is exponentially mixing (for Holder continuous observables). **Definition 2.3**.: An open subset of a Riemannian manifold \(U\) is said to have the _weak Federer property_ with respect to a finite Borel measure \(\nu\), if, for any \(\gamma>1\), there exists \(D=D(U,\gamma)>1\) and \(\eta_{0}(\gamma)>0\) such that, for any \(\eta\in(0,\eta_{0}(\gamma))\), * There exists a set of points \(\{x_{j}\}_{j=1}^{k}\) such that the balls \(B(x_{j},\gamma\eta)\) are disjoint and compactly included in \(U\); * There exists a set of sets \(\{A_{j}\}_{j=1}^{k}\) whose union covers a full measure subset of \(U\) and \(A_{j}\subset B(x_{j},\gamma\eta D)\); * For any \(y_{j}\in B(x_{j},(\gamma-1)\eta)\), we have \(\nu(B(y_{j},\eta))\geq D^{-1}\nu(A_{j})\). **Definition 2.4**.: A family of open sets \(\{U_{n}\}_{n}\) is said to uniformly have the weak Federer property for the measure \(\nu\) if, for all \(\gamma>1\), \(\sup_{n}D(U_{n},\gamma)\) is finite. **Definition 2.5**.: Suppose that \(f:X\to X\) is NUE in the sense of Young. We say that the transformation has the _weak Federer property_ if, for each \(h\in\bigcup_{n\in\mathbb{N}}\mathcal{H}_{F}^{n}\), the sets \(h(Y)\) uniformly have the weak Federer property with respect to \(\mu_{Y}\) (the probability measure induced by \(\mu\) on \(Y\)). If \(\{U_{n}\}_{n}\) is a family of open intervals then the uniform Federer property is trivially satisfied by Lebesgue measure (see [28, SS6.1] for a general criterion for the weak Federer property and see [10, Remark 2.1] for additional comments). In order to prove exponential mixing uniformity in the Federer assumption is not required [28, Remark 2.5]. **Definition 2.6**.: Let \(Y\) be a set as above with partition \(\left\{Y_{\ell}\right\}_{\ell\in\Lambda}\). A function \(\Phi:Y\to\mathbb{S}^{1}\) is said to be _cohomologous to a locally constant function_ if there exists a \(\mathcal{C}^{1}\) function \(\psi:Y\to\mathbb{S}^{1}\) such that \(\Phi-\psi+\psi\circ F\) is constant on each set \(Y_{\ell}\), \(\ell\in\Lambda\). The skew product transformation \(f_{\varphi}:(x,u)\mapsto(fx,u+\varphi(x))\) is an isometry in the fibres and hence preserves the measure \(\nu=\tilde{\mu}\times m\) (where \(m\) denotes Lebesgue measure on \(\mathbb{S}^{1}\)). **Theorem 1**.: _Suppose that \(f:X\to X\) is NUE in the sense of Young with exponential tails and and satisfying the weak Federer property. 
Suppose that \(\varphi:X\to\mathbb{R}\) is \(\mathcal{C}^{1}\) on the interior of \(X\), that the induced function \(\Phi(x)=\sum_{k=0}^{R(x)-1}\varphi(f^{k}x)\) is not cohomologous to a locally constant function and \(\|D\Phi(x)DF(x)^{-1}\|\) is uniformly bounded for \(x\in Y\)._ _Then \(f_{\varphi}\) mixes exponentially for observables on \(Y\times\mathbb{S}^{1}\) in the sense that: For any \(\alpha>0\) there exists \(\theta\in(0,1)\), \(C>0\) such that, for all functions \(g\), \(h\) from \(Y\times\mathbb{S}^{1}\) to \(\mathbb{C}\), bounded and Holder continuous with exponent \(\alpha\), and for all \(n\in\mathbb{N}\),_ \[\left|\int g\circ f_{\varphi}^{n}\cdot h\ d\nu-\left(\int g\ d\nu\right)\left( \int h\ d\nu\right)\right|\leq C\theta^{n}\left\|g\right\|_{L^{\infty}}\left\| h\right\|_{\mathcal{C}^{\alpha}}.\] This theorem is essentially the work of Gouezel [28, Theorem 1.7] although the work of the reference requires the fibre map \(\varphi\) to be \(\mathcal{C}^{1}\) on \(X\) whereas we allow unbounded derivative. However we require the induced fibre map \(\Phi\) to satisfy the same conditions as are actually required during the proof in the reference. On the other hand the result stated here is only for observables supported on the base of the tower \(Y\). Details concerning the modification of the reference in order to prove the theorem are given in Section 3. _Remark 2.7_.: The restriction that the observables are supported on \(Y\times\mathbb{S}^{1}\) can, to some extent, be mitigated in a standard way by considering observables which map to observables supported on \(Y\times\mathbb{S}^{1}\) in finite steps. As such, typically exponential mixing results can be extended to observables supported on the complement of the singular set (depending on the exact construction of the inducing scheme). The assumptions of the above theorem are overly abstract from our point of view and so we would like to obtain some more verifiable conditions. (For the origin of the following assumptions see [1, 4, 27].) We use the following notation: For \(\delta>0\), set \(\operatorname{dist}_{\delta}(x,S)=\operatorname{dist}(x,S)\) if \(\operatorname{dist}(x,S)<\delta\), and \(\operatorname{dist}_{\delta}(x,S)=1\) otherwise. **Definition 2.8**.: Let \(f\) be a map on a compact Riemannian manifold \(X\) (possibly with boundary). We assume that there exists a closed subset \(S\subset M\), with zero Lebesgue measure (containing possibly discontinuities or critical points of \(f\) and with \(\partial X\subset S\)), such that \(f\) is a \(C^{2}\) local diffeomorphism on \(X\setminus S\). 
We say that \(f\) is _NUE in the sense of a controlled singular set_ if the following assumptions are satisfied: * (non-degeneracy close to \(S\)) We assume that there exist \(B>1\) and \(\beta>0\) such that, for any \(x\in M\setminus S\) and every \(v\in T_{x}M\setminus\{0\}\), \[\frac{1}{B}\operatorname{dist}(x,S)^{\beta}\leq\frac{\|Df(x)v\|}{\|v\|}\leq B \operatorname{dist}(x,S)^{-\beta}.\] Assume also that, for all \(x,y\in X\) with \(\operatorname{dist}(x,y)<\operatorname{dist}(x,S)/2\), \[\left|\log\|Df(x)^{-1}\|-\log\|Df(y)^{-1}\|\right|\leq B\frac{\operatorname{ dist}(x,y)}{\operatorname{dist}(x,S)^{\beta}}\] and \[\left|\log|\det Df(x)^{-1}|-\log|\det Df(y)^{-1}|\right|\leq B\frac{ \operatorname{dist}(x,y)}{\operatorname{dist}(x,S)^{\beta}}.\] **(S2):**: (points which are too close to \(S\) or haven't yet experienced expansion) Let \(\delta:(0,\epsilon_{0})\to\mathbb{R}_{+}\), \(\lambda>0\), \[\mathcal{P}_{\epsilon,N} =\left\{x\in X:\frac{1}{n}\sum_{k=0}^{n-1}-\log\mathrm{dist}_{ \delta(\epsilon)}(f^{k}x,S)>\epsilon,\text{ for some }n\geq N\right\},\] \[\mathcal{Q}_{\epsilon,N} =\left\{x\in X:\frac{1}{n}\sum_{k=0}^{n-1}\log\left\|Df(f^{k}x)^{ -1}\right\|^{-1}<\lambda,\text{ for some }n\geq N\right\}.\] We assume that there exists \(C>0\) and \(\theta\in(0,1)\) such that, for all \(\epsilon\in(0,\epsilon_{0})\), the Lebesgue measure of \(\mathcal{P}_{\epsilon,N}\cup\mathcal{Q}_{\epsilon,N}\) is not greater than \(C\theta^{N}\). In order to take advantage of the above assumptions and build Young tower structures, the key concept of _hyperbolic times_ is used. In this present work we further take advantage of this notion to deal with potential problems in the fibre map. As such, let us now recall the definition of hyperbolic times. For the purpose of this definition \(f:X\to X\) is a differentiable transformation and \(S\subset X\) is the _singularity set_. The following definition is exactly as used by Alves, Luzzatto & Pinheiro [4], including the same notation. **Definition 2.9**.: Let \(b>0\), \(\sigma\in(0,1)\), \(\delta>0\). We say that \(n\in\mathbb{N}\) is a _\((b,\sigma,\delta)\)-hyperbolic time1_ for \(x\) if, for all \(1\leq k\leq n\) Footnote 1: In the reference the terminology “\((\sigma,\delta)\)-hyperbolic time” is used and dependence on \(b\) is suppressed. \[\prod_{j=n-k}^{n-1}\left\|Df(f^{j}x)^{-1}\right\|\leq\sigma^{k}\quad\text{and }\quad\mathrm{dist}_{\delta}(f^{n-k}x,S)\geq\sigma^{bk}\] We will denote by \(H_{n}(b,\sigma,\delta)\) the set of points for which \(n\) is a \((b,\sigma,\delta)\)-hyperbolic time. The following is due to Gouezel (result described in [28, Proposition 1.16] with a proof which uses mostly [27]). **Theorem** ([28, Proposition 1.16]).: _Suppose that \(X\) is a compact Riemannian manifold and that \(f:X\to X\) is NUE in the sense of a controlled singular set (Definition 2.8) and let \(b>0\) sufficiently small. There exists \(\sigma\in(0,1)\), \(\delta>0\) and there exists an open and connected subset \(Y\) of \(X\) such that \(f\) is NUE in the sense of Young (Definition 2.1) (on base \(Y\) with respect to Lebesgue measure) with exponential tails and and satisfying the weak Federer property. Moreover the return times for the Young tower are \((b,\sigma,\delta)\)-hyperbolic times._ Note that the final statement isn't highlighted in the statement of the result in the cited reference although it is described in the argument [27, SS2]. 
For us this detail is important as already hinted, since we will use these hyperbolic times to control the singular behaviour of the fibre maps. **Theorem 2**.: _Let \(f:X\to X\) be NUE in the sense of a controlled singular set \(S\subset X\) (Definition 2.8). Suppose that \(\varphi:X\to\mathbb{R}\) is \(\mathcal{C}^{1}\) on the connected components of \(X\setminus S\) and that there exists \(C>0\), \(s\geq 0\) such that, for all \(x\), \(\left\|D\varphi(x)Df(x)^{-1}\right\|\leq C\operatorname{dist}(x,S)^{-s}\). Further suppose that the induced function \(\Phi(x)=\sum_{k=0}^{R(x)-1}\varphi(f^{k}x)\) is not cohomologous to a locally constant function. Then there exists a subset \(Y\subset X\) such that the assumptions of Theorem 1 are satisfied._ The above result is proven in Section 4, using an argument based on hyperbolic times which controls the regularity of the induced fibre map \(\Phi\). In the case of our motivating example \((x,u)\mapsto([2x],[u+\operatorname{dist}(x,1)^{a}])\) we choose the singularity set \(S=\{1\}\) even though \(1\) isn't a singular point in any sense for the base transformation \(x\mapsto 2x\). In cases like this, one needs to know that the NUE property of the system still holds, even when the singularity set is enlarged in order to consider also the singularities of the fibre map. This can be done with minor restrictions and follows from a type of large deviations estimate. The statement and proof of this in general settings is the content of Section 5 (as done in [9]). Returning to our motivating example, we have the following result. **Theorem 3**.: _Let \(X=[0,1]\), let \(f:X\to X\) be defined as \(f:x\mapsto 2x\mod 1\) and let \(\varphi:X\to\mathbb{S}^{1}\) be defined as \(\varphi:x\mapsto\operatorname{dist}(x,1)^{a}\) for some \(a\in(0,1)\). Then the skew-product \(f_{\varphi}:(x,u)\mapsto(fx,u+\varphi(x))\) mixes exponentially for observables supported on the complement of a neighbourhood of \(\{1\}\times\mathbb{S}^{1}\)._ Section 6 contains the proof of the above and discussion related to showing the property of the fibre map not being cohomologous to a locally constant function in diverse settings. Since we can do so for this specific example, we take a very hands-on approach to the argument and explicitly construct the tower and prove the required properties so that the above results can be applied. ## 3. Singular skew-products In this section we assume that the induced skew-product map satisfies the assumptions and show that this implies exponential mixing for the original skew-product. This means that we prove Theorem 1. As made clear by Ruelle [34], it is important to have some condition for the fibre map. A common way to prove the results that we would like is to obtain a spectral gap for the transfer operator of the system (acting on a suitable Banach space) but the neutral direction complicates this problem (except for the work of Tsujii [36, 37]). Let \(f\) be NUE in the sense of Young with exponential tails, preserving the probability measure \(\mu\). Assume that \(\mu_{Y}\) has full support in \(Y\). Let \(\varphi:X\to\mathbb{S}^{1}\) be a \(\mathcal{C}^{1}\) function such that the induced fibre map \(\Phi(x)\) is not cohomologous to a locally constant function. Let \(\nu=\mu\otimes\operatorname{Leb}\). Since \(f_{\varphi}\) is an isometry in the fibre, the measure \(\nu\) is \(f_{\varphi}\)-invariant. We consider the skew-product \(f_{\varphi}:(x,u)\mapsto(fx,u+\varphi(x))\).
This result is essentially what Gouezel proved [28, §3]; however, there is something that needs to be observed about their assumption on the fibre map. Their results are stated for the case when the fibre map is \(\mathcal{C}^{1}\) but such a strong condition isn't required in the proof. Here we demonstrate how their argument suffices for the result which is required in this present context. As described previously, we started with a skew-product \(f_{\varphi}:X\times\mathbb{S}^{1}\to X\times\mathbb{S}^{1}\), \[f_{\varphi}:(x,u)\mapsto(fx,u+\varphi(x))\] and then introduced the induced skew-product \(F_{\Phi}:Y\times\mathbb{S}^{1}\to Y\times\mathbb{S}^{1}\), \[F_{\Phi}:(x,u)\mapsto(Fx,u+\Phi(x))\] where \(\Phi(x)=\sum_{k=0}^{R(x)-1}\varphi(f^{k}x)\) and \(F:x\to f^{R(x)}(x)\) is a full branch Markov map. Since \(F\) is uniformly expanding and \(\Phi\) satisfies the bounded twist property, we have good estimates on the associated transfer operators. This holds precisely for our setting since it only requires the properties of the induced system. Following Young we will introduce a map on the tower which is a model for \(f\) and then, following Gouezel [28, §3.1], we introduce a skew-product version of the model. (See Table 1 for a comparison between the notation of the reference and that of the present text.) Let \[\widetilde{X}=\left\{(x,\ell):x\in Y,0\leq\ell<R(x)\right\},\] together with the tower map \[\tilde{f}:(x,\ell)\mapsto\begin{cases}(x,\ell+1)&\text{if }\ell+1<R(x)\\ (Fx,0)&\text{if }\ell+1=R(x).\end{cases}\] We therefore define the tower skew-product \(\tilde{f}_{\varphi}:\widetilde{X}\times\mathbb{S}^{1}\to\widetilde{X}\times\mathbb{S}^{1}\) as \[\tilde{f}_{\varphi}:(x,\ell,u)\mapsto\begin{cases}(x,\ell+1,u)&\text{if }\ell+1<R(x)\\ (Fx,0,u+\Phi(x))&\text{if }\ell+1=R(x).\end{cases}\] We also write the same definition as \[\tilde{f}_{\varphi}:(x,\ell,u)\mapsto(\tilde{f}(x,\ell),u+\tilde{\varphi}(x,\ell))\] where \(\tilde{\varphi}(x,\ell)\) is equal to \(\Phi(x)\) when \(\ell+1=R(x)\) and equal to \(0\) otherwise. Using the tower and the above transfer operator estimates we prove the exponential mixing result. This requires modification of Gouezel's argument because of our weaker assumptions on the fibre map \(\varphi\). In particular we prefer to see the action in the fibre only when we arrive at the top of the tower, not incrementally at each step as is done in the reference. \begin{table} \begin{tabular}{l|l l} \hline \hline & Gouezel [28, §3] & Present text \\ \hline NUE map & \(T:X\to X\) & \(f:X\to X\) \\ Fibre map & \(\phi:X\to\mathbb{R}\) & \(\varphi:X\to\mathbb{R}\) \\ Skew-product & \(\mathcal{T}:X\times\mathbb{S}^{1}\to X\times\mathbb{S}^{1}\) & \(f_{\varphi}:X\times\mathbb{S}^{1}\to X\times\mathbb{S}^{1}\) \\ Base of tower & \(Y\subset X\) & \(Y\subset X\) \\ Partition of base & \(\{W_{\ell}\}\) & \(\{Y_{\ell}\}\) \\ Inducing times & \(r_{\ell}\) & \(R_{\ell}\) \\ Induced map & \(T_{Y}:Y\to Y\) & \(F:Y\to Y\) \\ Induced fibre map & \(\phi_{Y}:Y\to\mathbb{R}\) & \(\Phi:Y\to\mathbb{R}\) \\ Tower & \(X^{(n)}\) & \(\widetilde{X}\) \\ Tower map & \(U^{(n)}:X^{(n)}\to X^{(n)}\) & \(\tilde{f}:\widetilde{X}\to\widetilde{X}\) \\ Tower skew-product & \(\mathcal{U}^{(n)}:X^{(n)}\times\mathbb{S}^{1}\to X^{(n)}\times\mathbb{S}^{1}\) & \(\tilde{f}_{\varphi}:\widetilde{X}\times\mathbb{S}^{1}\to\widetilde{X}\times\mathbb{S}^{1}\) \\ \hline \hline \end{tabular} \end{table} Table 1. Comparison of notation
The (possibly) many-to-one map \(\pi:\widetilde{X}\to X\) is defined as \((x,\ell)\mapsto f^{\ell}x\). This has the consequence that \(\pi\circ\tilde{f}=f\circ\pi\). For convenience, here and subsequently, we use the notation \(S_{n}\varphi=\sum_{\ell=0}^{n-1}\varphi\circ f^{\ell}\) and, similarly, \(S_{n}\tilde{\varphi}=\sum_{\ell=0}^{n-1}\varphi\circ\tilde{f}^{\ell}\). Abusing notation since no confusion can arise, let \(\pi:\widetilde{X}\times\mathbb{S}^{1}\to X\times\mathbb{S}^{1}\) be defined as \((x,\ell,u)\mapsto(f^{\ell}x,u+S_{\ell}\varphi(x))\). This has the consequence that \(\pi\circ\tilde{f}_{\varphi}=f_{\varphi}\circ\pi\). For each \(\ell\) we define \(\widetilde{X}_{\ell}\) to be the subset of \(\widetilde{X}\) such that the second coordinate is equal to \(\ell\). In each case this is a copy of \(\{x\in X:R(x)>\ell\}\). There is a \(f^{R}\)-invariant (\(F\)- invariant) measure \(\tilde{\nu}_{0}\) on \(\widetilde{X}_{0}\) (which is a copy of \(Y\)). This then extends to \(\tilde{\nu}\) on \(\widetilde{X}\). We then define the \(f\)-invariant measure \(\nu\) on \(X\) as \(\nu=\pi_{*}\tilde{\nu}\). We denote by \(m\) Lebesgue measure on \(\mathbb{S}^{1}\). The \(f_{\varphi}\)-invariant measure on \(X\times\mathbb{S}^{1}\) is given by \(\nu\times m\). That it is invariant is a simple consequence of \(f_{\varphi}\) being an isometry in the second coordinate. By definition of \(\pi\), \(\tilde{\nu}\) and \(\tilde{f}_{\varphi}\), \[\left|\nu\left(g\circ f_{\varphi}^{n}\cdot h\right)-\nu(g)\nu(h)\right|=\left| \tilde{\nu}\left(\tilde{g}\circ\tilde{f}_{\varphi}^{n}\cdot\tilde{h}\right)- \tilde{\nu}(\tilde{g})\tilde{\nu}(\tilde{h})\right|\] where \(\tilde{g}=g\circ\pi\) and similarly for \(h\) (observables on the tower). Let \(\mathcal{L}\) denote the transfer operator associated to \(\tilde{f}\). To take advantage of the possibility of a Fourier decomposition in the neutral direction we write \[g(x,\ell,u)=\sum_{k\in\mathbb{Z}}\hat{g}_{k}(x,\ell),\quad\text{where}\quad \hat{g}_{k}(x,\ell)=\int g(x,\ell,u)e^{-iku}\ du.\] Denote by \(J_{n}\) the Jacobian associated to \(\tilde{f}\). 
The twisted transfer operator is equal to, for any \(h:\widetilde{X}\to\mathbb{R}\), \[\mathcal{M}_{k}^{n}h(x)=\sum_{f^{n}y=x}J_{n}(y)h(y)e^{-ikS_{n}\tilde{\varphi} (y)}.\] In order to study correlation one considers the full transfer operator but then only the diagonal terms remain [28, (3.4),(3.5)] and so, \[\tilde{\nu}\left(\tilde{g}\circ\tilde{f}_{\varphi}^{n}\cdot\tilde{h}\right)= \sum_{k\in\mathbb{Z}}\tilde{\nu}\left(\tilde{g}_{-k}\cdot\mathcal{M}_{k}^{n} \tilde{h}_{k}\right)\] Following Gouezel [28, SS3] (the operators \(R\), \(T\), \(A\), \(B\), \(C\) are identical with identical notation as the reference) we define the following operators which we later use to reconstruct \(\mathcal{M}_{n,k}\): \[R_{n,k}h(x) =\sum_{\begin{subarray}{c}\tilde{f}^{n}y=x\\ y\in Y,\tilde{f}y,...,\tilde{f}^{n-1}y\notin Y,\tilde{f}^{n}y\in Y\end{subarray}}J _{n}(y)h(y)e^{-ikS_{n}\tilde{\varphi}(y)},\] \[T_{n,k}h(x) =\sum_{\begin{subarray}{c}\tilde{f}^{n}y=x\\ y\in Y,\tilde{f}^{n}y\in Y\end{subarray}}J_{n}(y)h(y)e^{-ikS_{n}\tilde{\varphi} (y)},\] \[A_{n,k}h(x) =\sum_{\begin{subarray}{c}\tilde{f}^{n}y=x\\ y\in Y,\tilde{f}y,...,\tilde{f}^{n}y\notin Y\end{subarray}}J_{n}(y)h(y)e^{-ikS_ {n}\tilde{\varphi}(y)},\] \[B_{n,k}h(x)= \sum_{\begin{subarray}{c}\tilde{f}^{n}y=x\\ y,\ldots,\tilde{f}^{n-1}y\not\in Y,\tilde{f}^{n}y\in Y\end{subarray}}J_{n}(y)h(y)e ^{-ikS_{n}\tilde{\varphi}(y)},\] \[C_{n,k}h(x)= \sum_{\begin{subarray}{c}\tilde{f}^{n}y=x\\ y,\ldots,\tilde{f}^{n}y\not\in Y\end{subarray}}J_{n}(y)h(y)e^{-ikS_{n}\tilde{ \varphi}(y)}.\] In words these are, respectively, the cases where: \((R)\) The orbit \(y,\tilde{f}y\ldots,\tilde{f}^{n}y\) starts and ends in \(Y\) but isn't in \(Y\) in the meantime; \((T)\) The orbit starts and ends in \(Y\); \((A)\) The orbit starts in \(Y\) but doesn't finish in \(Y\); \((B)\) The orbit is not in \(Y\) until the last iterate when it is in \(Y\); \((C)\) The orbit is never in \(Y\). Consequently, cutting the orbit at the first and last time it belongs to \(Y\) means that \[\mathcal{M}_{k}^{n}h(x)=C_{n,k}+\sum_{a+i+b=n}A_{a,k}T_{i,k}B_{b,k}\] and cutting according to each time the orbit belongs to \(Y\) implies that \[T_{n,k}=\sum_{p=1}^{\infty}\sum_{j_{1}+\cdots+j_{p}=n}R_{j_{1},k}\cdots R_{j_ {p},k}.\] Since we will work with observables \(h\) which are supported in \(Y\) we can discard the operators \(B_{n,k}\) and \(C_{n,k}\). The major part of the argument is the study of the operators \(T_{n,k}h\)[28, SS3.3] and consequently the result of exponential mixing on the tower [28, Theorem 3.6]. The same result holds in the present setting because we defined the dynamics on the tower in such a way that we see the action in the fibre only when we arrive at the top of the tower, not incrementally at each step as is done in the reference. Moreover the assumption on \(\Phi\) match those required in the reference. Finally we must understand the argument which deduces exponential mixing for \(f_{\varphi}:X\times\mathbb{S}^{1}\to X\times\mathbb{S}^{1}\) from exponential mixing of \(\tilde{f}_{\varphi}:\widetilde{X}\times\mathbb{S}^{1}\to\widetilde{X}\times \mathbb{S}^{1}\). Consider some \(\mathcal{C}^{r}\) observable \(g:Y\times\mathbb{S}^{1}\to\mathbb{C}\) and the corresponding observable on the tower \(g\circ\pi:\widetilde{X}\times\mathbb{S}^{1}\to\mathbb{C}\). 
Note that \(\pi\) is defined differently here compared to in the reference since it must compensate for the fact that the tower only sees the action in the fibre at the top of the tower (i.e., \((x,\ell,u)\mapsto(f^{\ell}x,u+S_{\ell}\varphi(x))\)). However, since we work with observables supported on \(Y\), the base of the tower, we don't see this discrepancy. Consequently the argument of the reference, with the modifications of the present setting, proves Theorem 1. ## 4. Twist control In this section we show that the mild control of singular behaviour of the fibre map suffices to give good control for the twist of iterates at hyperbolic times. This type of argument was previously used by Araujo & Varandas [9, SS4.2.2] and the later results on Lorenz flows relied on it [7, 8]. In this section we use it in order to prove Theorem 2. Let \(X\) be a compact Riemannian manifold (possibly with boundary), endowed with a Borel measure \(\mu\) (called the reference measure). For the purposes of this section we suppose that the base transformation \(f:X\to X\) is a nonsingular transformation and that the fibre map \(\varphi:X\to\mathbb{S}^{1}\) is piecewise \(\mathcal{C}^{1}\) in the sense of being \(\mathcal{C}^{1}\) on the open partition elements. The object of interest is the skew-product \(f_{\varphi}:X\times\mathbb{S}^{1}\to X\times\mathbb{S}^{1}\), \[f_{\varphi}:(x,u)\mapsto(fx,u+\varphi(x)).\] It would be convenient to assume that \(\left\|D(\varphi\circ f|_{\omega}^{-1})\right\|\) is uniformly bounded (where \(\omega\subset X\) is any open set such that \(f:\omega\to X\) is invertible) because this would imply the existence of a cone field which is forward invariant under \(f_{\varphi}\) (in the sense that the cones are mapped within themselves) and uniformly bounded away from the neutral direction (see e.g., [15]). Unfortunately, in cases like the one we wish to consider here, it would be impossible to have such a uniform bound. The reality is that, at a full measure set of points, any tangent vector will approach arbitrarily close to the neutral direction under the action of the partially hyperbolic dynamics. There is no reason to believe that this hinders good statistical properties but it causes a difficulty we some of the machinery we would like to use. We consider the singularity points of the fibre map as artificial singularities of the base map and use the notion of hyperbolic times (Definition 2.9). At hyperbolic times we know that the orbit hasn't been too often too close to any singularity and this is sufficient for our purposes. **Lemma 4.1**.: _Suppose that there exists \(C>0\), \(s\geq 0\) such that, for all \(x\),_ \[\left\|D\varphi(x)Df(x)^{-1}\right\|\leq C\operatorname{dist}(x,S)^{-s}.\] _Suppose that \(b\in(0,s^{-1})\), \(\sigma\in(0,1)\), \(\delta>0\). There exists \(C^{\prime}>0\) such that, whenever \(n\) is a \((b,\sigma,\delta)\)-hyperbolic time for \(x\) then, letting \(\Phi(x)=\sum_{k=0}^{n-1}\varphi(f^{k}x)\),_ \[\left\|D\Phi(x)Df^{n}(x)^{-1}\right\|\leq C^{\prime}.\] Proof.: We observe that, since \(\Phi(x)=\sum_{\ell=0}^{n-1}\varphi(f^{\ell}x)\), \[D\Phi(x)Df^{n}(x)^{-1}=\sum_{\ell=0}^{n-1}D\varphi(f^{\ell}x)Df(f^{\ell}x)^{ -1}Df^{n-\ell-1}(f^{\ell+1}x)^{-1}.\] We can use the assumption of the theorem which controls the quantity \(D\varphi(f^{\ell}x)Df(f^{\ell}x)^{-1}\). 
Consequently \[\left\|D\Phi(x)Df^{n}(x)^{-1}\right\|\leq C\sum_{\ell=0}^{n-1}\operatorname{ dist}(f^{\ell+1}x,S)^{-s}\left\|Df^{n-\ell-1}(f^{\ell+1}x)^{-1}\right\|.\] Observe that \(\left\|Df^{n-\ell-1}(f^{\ell+1}x)^{-1}\right\|\leq\prod_{j=\ell+1}^{n-1} \left\|Df(f^{j}x)^{-1}\right\|\). Since, by assumption, \(n\) is a \((b,\sigma,\delta)\)-hyperbolic time for \(x\), this quantity is bounded from above as \(\prod_{j=\ell+1}^{n-1}\left\|Df(f^{j}x)^{-1}\right\|\leq\sigma^{n-\ell-1}\). Additionally \(\operatorname{dist}_{\delta}(f^{\ell+1}x,S)\geq\sigma^{b(n-\ell-1)}\). We observe that \(\operatorname{dist}(\cdot,\cdot)\geq C\operatorname{dist}_{\delta}(\cdot,\cdot)\) for some \(C>0\) and immediately absorb this quantity into the previous \(C\). Since, by assumption, \(bs<1\), \[\left\|D\Phi(x)Df^{n}(x)^{-1}\right\| \leq C\sum_{\ell=0}^{n-1}\sigma^{-b(n-\ell-1)s}\sigma^{n-\ell-1}\] \[=C\sum_{k=0}^{n-1}\sigma^{(1-bs)k}\leq\frac{C}{1-\sigma^{1-bs}}.\] This estimate is independent of \(x\) and \(n\), as required by the statement of the lemma. Proof of Theorem 2.: The combination of Lemma 4.1 and Theorem [28, Proposition 1.16] is the claimed result. _Remark 4.2_.: As per the following example, \(\left\|D\varphi(x)Df(x)^{-1}\right\|\) might be unbounded even when \(\varphi\) is a bounded function. Let \(X=\mathbb{R}/\mathbb{Z}\) and let \(f:X\to X\) be defined as \(x\mapsto 2x\). The fibre map \(\varphi:X\to\mathbb{R}\) is defined for the fundamental domain, \(x\in[0,1)\), \[\varphi(x)=\begin{cases}2-2x-(1-2x)^{\frac{1}{2}}&\text{if }x<\frac{1}{2}\\ 2-2x+(2x-1)^{\frac{1}{2}}&\text{if }x\geq\frac{1}{2}.\end{cases}\] Observe that \(\varphi\) is smooth and continuous. However \(\left\|D\varphi(x)Df(x)^{-1}\right\|\) is unbounded at \(x=\frac{1}{2}\). ## 5. Enlarging the singularity set In this section we describe the relevant argument to use if we have a uniformly expanding transformation and then we need to enlarge the singularity set because of the singularities of the fibre map. We can then use a large deviations argument to show that the system is NUE in the sense of a controlled singular set (Definition 2.8) even with the enlarged singular set. **Lemma 5.1**.: _Suppose that \(f:X\to X\) is NUE in the sense of a controlled singular set\(S\subset X\) (Definition 2.8). Further suppose that \(S^{\prime}\subset X\setminus S\) is a finite union of \(\mathcal{C}^{1}\) manifolds with boundary of dimension strictly less that the dimension of \(X\). Let \(S^{\prime\prime}=S\cup S^{\prime}\subset X\)._ _Then \(f:X\to X\) is NUE in the sense of a controlled singular set, with respect to the set \(S^{\prime\prime}\subset X\)._ Proof.: Observe that \(S\cap S^{\prime}=\emptyset\). Consequently all the inequalities of assumption (S1) remain satisfied. Also, the second part of assumption (S2) (relating to \(\mathcal{Q}_{\epsilon,N}\)) remains satisfied since it doesn't depend on the singular set. It remains to consider the property which is sometimes described as _slow recurrence to the singular/critical set_. We must show that the set \[\left\{x\in X:\frac{1}{n}\sum_{k=0}^{n-1}-\log\operatorname{dist}_{\delta( \epsilon)}(f^{k}x,S^{\prime})>\epsilon,\text{ for some }n\geq N\right\}\] is small in Lebesgue measure. We can see this estimate as a question of _large deviations_ where our "observable" is \(x\mapsto-\log\operatorname{dist}_{\delta(\epsilon)}(x,S^{\prime})\). Consequently the desired estimate follows from the relevant large deviations result [5, Theorem E]. ## 6. 
Uniform non-integrability In this section we show how information about the original map can be used to show that the induced fibre map is not a coboundary with respect to the induced base map. In the terminology used in several of the key references, we show that the _uniform non-integrability_ (UNI) condition holds. As far as this author is aware, there are just two different ways to show that the fibre map is not cohomologous to a locally constant function. One approach is to check using periodic orbits and obtain a contradiction (e.g., [28, Remark 1.15 / Lemma 6.5 / Lemma A.8 / 1st paragraph of §1.4]). Such arguments are also convenient for establishing that the fibre map not being cohomologous to a locally constant function can be obtained by arbitrarily small perturbations (e.g., [2, 9, 17]). An alternative approach is to take advantage of the unbounded nature of the fibre map (or its derivative) in order to obtain a contradiction and hence prove that the fibre map is not cohomologous to a locally constant function. (See e.g., [9, §4.2.3 & erratum], [8, Prop 3.4] and [6, Lem 4.2, Cor 4.3], often using a Livsic type argument [13].) In this section we take this point of view and try to exploit the unbounded nature of the fibre map in order to prove the required property. Firstly we introduce an example to remind ourselves that we need to take care in this argument. Let \(X=\mathbb{R}/\mathbb{Z}\) and let \(f:X\to X\) be defined as \(x\mapsto 2x\). Fix some \(a>0\) and define, for \(x\in[0,1]\), the fibre map \(\varphi(x)=x^{-a}-(fx)^{-a}\). Observe that \(\varphi(\epsilon)\to+\infty\) as \(\epsilon\to 0\) and \(\varphi(\frac{1}{2}+\epsilon)\to-\infty\) as \(\epsilon\to 0\). Consequently \(\varphi\) is unbounded yet, by definition, is cohomologous to a constant. The skew-product, although rather disguised, is simply the identity in the fibre. _Remark 6.1_.: We can also construct an example which is inspired by the Lorenz flow. Let \(f:[-1,1]\to[-1,1]\) be a "Lorenz-like" map [30]. In particular, it has unbounded derivative at \(x=0\). Furthermore, for \(x\in[-1,1]\), let \(\varphi(x)=\log|fx|-\log|x|\). The function goes to \(+\infty\) at \(x=0\), as seen in the Lorenz case, but, differently to the Lorenz case, goes to \(-\infty\) at the two preimages, \(f^{-1}(0)\). (The proof of [8, Theorem 3.4] considers \(f(0^{+})\), \(f^{2}(0^{+})\) and so would not see the difference with the present example, and so it is subtle where such examples are ruled out in that work, where not being cohomologous to a locally constant function is proved. However there they take advantage of the possibility of a Young tower where the base is an open interval containing the singularity [9, Theorem 4.3].) For the remainder of this section we consider the setting assumed in Theorem 3. In particular, \(X=[0,1]\) and \(f:X\to X\) is defined as \(f:x\mapsto 2x\mod 1\). Furthermore \(\varphi(x)=\operatorname{dist}(x,1)^{a}\) for some \(a\in(0,1)\). We will take advantage of the fact that in this setting we are able to define an explicit inducing scheme. Let \(Y=(0,\frac{1}{2})\subset X\), and, for all \(\ell\in\mathbb{N}\), let \(a_{\ell}=2^{-1}-2^{-\ell}\), \(Y_{\ell}=(a_{\ell},a_{\ell+1})\). By definition, \(\{Y_{\ell}\}_{\ell\in\mathbb{N}}\) is a partition of a full measure subset of \(Y\). Moreover \(f^{j}Y_{\ell}\cap Y=\emptyset\) whenever \(1\leq j\leq\ell-1\) and \(f^{\ell}:Y_{\ell}\to Y\) is a bijection. In words, \(\ell\) is the first return time to \(Y\) for each \(x\in Y_{\ell}\).
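These claims are easy to confirm numerically. The sketch below (illustrative code written for this text, not taken from the paper) checks in exact arithmetic that sample points of \(Y_{\ell}\) have first return time \(\ell\) to \(Y\) and that \(f^{\ell}\) sends the endpoints of \(Y_{\ell}\) to those of \(Y\); it then evaluates the induced twist \(|\Phi'(x)|/|(f^{R(x)})'(x)|\) at points with large return time, illustrating the uniform bound provided by Lemma 4.1 (the hyperbolic-time property it relies on is verified in Lemma 6.2 below). The particular sample points are arbitrary choices.

```python
# Illustrative check (written for this text, not from the paper) of the explicit
# inducing scheme for f(x) = 2x mod 1, Y = (0, 1/2), Y_l = (a_l, a_{l+1}) with
# a_l = 1/2 - 2^{-l}, and of the boundedness of the induced twist.
from fractions import Fraction

half = Fraction(1, 2)
f = lambda x: (2 * x) % 1                        # doubling map, exact on Fractions
a = lambda l: half - Fraction(1, 2 ** l)         # endpoints of the partition elements

def first_return(x):
    """First return time to Y = (0, 1/2) of a point x in Y."""
    n, y = 1, f(x)
    while not (0 < y < half):
        n, y = n + 1, f(y)
    return n

# (i) the return time on Y_l equals l, and f^l maps the endpoints of Y_l to those of Y
for l in range(1, 12):
    lo, hi = a(l), a(l + 1)
    assert all(first_return(lo + Fraction(j, 10) * (hi - lo)) == l for j in (1, 5, 9))
    for _ in range(l):
        lo, hi = f(lo), f(hi)
    assert (lo, hi) == (0, half)

# (ii) the induced twist |Phi'(x)| / |(f^R)'(x)| stays bounded as R grows, although
#      |phi'(x)| / |f'(x)| for phi(x) = (1 - x)^aa blows up as x -> 1
aa = 0.5
dphi = lambda x: -aa * (1.0 - x) ** (aa - 1.0)
for l in (5, 10, 20, 30):
    x = float(a(l)) + 0.75 * 2.0 ** (-l - 1)     # a point of Y_l (hypothetical choice)
    total, y = 0.0, x
    for k in range(l):                           # Phi'(x) = sum_k phi'(f^k x) * 2^k
        total += dphi(y) * 2.0 ** k
        y = (2.0 * y) % 1.0
    print(f"R = {l:2d}   induced twist = {abs(total) / 2.0 ** l:.3f}")
print("original twist at x = 0.9999:", abs(dphi(0.9999)) / 2.0)
```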
Following the notation earlier in this work, \(R(x)\) is defined to be \(\ell\) for each \(x\in Y_{\ell}\) and \(F:Y\to Y\) is defined as \(F:x\mapsto f^{R(x)}x\). Let \(S=\{1\}\subset X\). Although this point is not a singularity in any sense for \(f\) it is a singularity for the fibre map \(\varphi\). **Lemma 6.2**.: _For all \(b>1\) there exists \(\delta>0\) such that, for all \(\ell\in\mathbb{N}\), \(\ell\) is a \((b,\frac{1}{2},\delta)\)-hyperbolic time for \(x\in Y_{\ell}\) (with respect to the map \(f:X\to X\) and the singularity set \(S=\{1\}\subset X\))._ Proof.: Let \(x\in Y_{\ell}\). For the first property of hyperbolic times, we observe that \(\prod_{j=\ell-k}^{\ell-1}\left\|Df(f^{j}x)^{-1}\right\|=2^{-k}\), consistent with the choice of \(\sigma=\frac{1}{2}\). For the other property we must consider how close orbits can approach the singularity set. It remains to show that \(\operatorname{dist}_{\delta}(f^{\ell-k}x,1)\geq\sigma^{bk}\). It suffices to consider \(k<\ell\) since \(k=\ell\) implies that \(\operatorname{dist}(f^{\ell-k}x,1)=\operatorname{dist}(x,1)\geq\frac{1}{2}\). We calculate that, \[\operatorname{dist}(f^{\ell-k}x,1) \geq\operatorname{dist}(f^{\ell-k}a_{\ell+1},1)=1-f^{\ell-k}(2^{- 1}-2^{-(\ell+1)})\] \[=1-(1-2^{-(k+1)})=\tfrac{1}{2}2^{-k}.\] Since we assumed that \(b>1\) by choosing \(\delta>0\) we obtain, uniformly, the required estimate (i.e., \(\frac{1}{2}2^{-k}\geq 2^{-bk}\) whenever \(k\) is sufficiently large that \(\frac{1}{2}2^{-k}<\delta\)). **Lemma 6.3**.: _The induced fibre map \(\Phi:Y\to\mathbb{R}\) is not cohomologous to a function constant on each \(Y_{\ell}\)._ Proof.: Let \(x\in\overline{Y}_{1}\), \(x^{\prime}\in\overline{Y}_{2}\) be defined as the points which satisfy \(Fx=x\), \(Fx^{\prime}=x^{\prime}\). Let \(y\in Y_{1}\) be such that \(y^{\prime}=Fy\in Y_{2}\) and \(Fy^{\prime}=F^{2}y=y\). Explicitly, \[x =0,\quad fx=x,\] \[x^{\prime} =\tfrac{1}{3},\quad fx^{\prime}=\tfrac{2}{3},f^{2}x^{\prime}=x^{ \prime},\] \[y =\tfrac{1}{7},\quad y^{\prime}=fy=\tfrac{2}{7},\quad f^{2}y= \tfrac{4}{7},\quad f^{3}y=y.\] Suppose, for the sake of contradiction, that \(\Phi:Y\to\mathbb{R}\) is cohomologous to a locally constant function in the sense that there exists a \(\mathcal{C}^{1}\) function \(\widetilde{\Phi}:Y\to\mathbb{R}\) such that \(\Phi-\widetilde{\Phi}+\widetilde{\Phi}\circ F\) is constant on each set \(Y_{\ell}\) (and this extends to \(\overline{Y}_{\ell}\)). Considering the three periodic orbits introduced above, this implies that \(\Phi(x)+\Phi(x^{\prime})=\Phi(y)+\Phi(y^{\prime})\) and so, \[\varphi(x)+\varphi(x^{\prime})+\varphi(fx^{\prime})=\varphi(y)+\varphi(y^{ \prime})+\varphi(fy^{\prime}) \tag{1}\] Since \(\varphi(x)=\operatorname{dist}(x,1)^{a}\) this implies that \[\varphi(0)+\varphi(\tfrac{1}{3})+\varphi(\tfrac{2}{3})-\varphi(\tfrac{1}{7})- \varphi(\tfrac{2}{7})-\varphi(\tfrac{4}{7})=\chi(a)=0\] where, for convenience we defined \[\chi(a)=1+(\tfrac{2}{3})^{a}+(\tfrac{1}{3})^{a}-(\tfrac{6}{7})^{a}-(\tfrac{5}{ 7})^{a}-(\tfrac{3}{7})^{a}.\] An analysis of this function shows that \(\chi(0)=\chi(1)=0\) but that \(\chi(a)<0\) for all \(a\in(0,1)\). This completes the required contradiction (1). _Remark 6.4_.: Contradicting the equality (1) obtained during the previous proof was the crucial step. 
The same argument could be performed choosing \(x\in\overline{Y}_{n}\), \(x^{\prime}\in\overline{Y}_{m}\); then the equation to contradict would become \[\sum_{j=0}^{n-1}\big{(}\varphi(f^{j}x)-\varphi(f^{j}y)\big{)}=\sum_{j=0}^{m-1}\big{(}\varphi(f^{j}y^{\prime})-\varphi(f^{j}x^{\prime})\big{)}.\] However this, or similar, can be contradicted by many different choices of assumptions on \(\varphi\). For example, if \(\varphi\) were constant on \(Y\) but monotone elsewhere and strictly increasing in a neighbourhood of \(1\), the required contradiction would also hold. Proof of Theorem 3.: Lemma 6.2 implies that, for any \(b>1\), there exists \(\delta>0\) such that the induced system has return times which are \((b,\frac{1}{2},\delta)\)-hyperbolic times. Since \(\varphi(x)=\operatorname{dist}(x,1)^{a}\) we know that \(\left\|D\varphi(x)Df(x)^{-1}\right\|\leq\frac{1}{2}\operatorname{dist}(x,S)^{-(1-a)}\). We may choose \(b\in(1,(1-a)^{-1})\) and so Lemma 4.1 applies and proves the required control on \(D\Phi\). This, together with the proof that \(\Phi\) is not cohomologous to a locally constant function, as shown in Lemma 6.3, means that Theorem 2 applies in this setting and gives the proof of exponential mixing. ## Acknowledgements Massive thanks to Zeze Pacifico, this work would not have existed if it wasn't for them. Thanks to Roberto Castorrini, Stefano Galatolo and Carlangelo Liverani for several helpful discussions and comments. This work was partially supported by PRIN Grant "Regular and stochastic behaviour in dynamical systems" (PRIN 2017S35EHN) and the MIUR Excellence Department Project MatMod@TOV awarded to the Department of Mathematics, University of Rome Tor Vergata.
2310.19619
Towards A Holistic Landscape of Situated Theory of Mind in Large Language Models
Large Language Models (LLMs) have generated considerable interest and debate regarding their potential emergence of Theory of Mind (ToM). Several recent inquiries reveal a lack of robust ToM in these models and pose a pressing demand to develop new benchmarks, as current ones primarily focus on different aspects of ToM and are prone to shortcuts and data leakage. In this position paper, we seek to answer two road-blocking questions: (1) How can we taxonomize a holistic landscape of machine ToM? (2) What is a more effective evaluation protocol for machine ToM? Following psychological studies, we taxonomize machine ToM into 7 mental state categories and delineate existing benchmarks to identify under-explored aspects of ToM. We argue for a holistic and situated evaluation of ToM to break ToM into individual components and treat LLMs as an agent who is physically situated in environments and socially situated in interactions with humans. Such situated evaluation provides a more comprehensive assessment of mental states and potentially mitigates the risk of shortcuts and data leakage. We further present a pilot study in a grid world setup as a proof of concept. We hope this position paper can facilitate future research to integrate ToM with LLMs and offer an intuitive means for researchers to better position their work in the landscape of ToM. Project page: https://github.com/Mars-tin/awesome-theory-of-mind
Ziqiao Ma, Jacob Sansom, Run Peng, Joyce Chai
2023-10-30T15:12:09Z
http://arxiv.org/abs/2310.19619v1
# Towards A Holistic Landscape of ###### Abstract Large Language Models (LLMs) have generated considerable interest and debate regarding their potential emergence of Theory of Mind (ToM). Several recent inquiries reveal a lack of robust ToM in these models and pose a pressing demand to develop new benchmarks, as current ones primarily focus on different aspects of ToM and are prone to shortcuts and data leakage. In this position paper, we seek to answer two road-blocking questions: (1) How can we taxonomize a holistic landscape of machine ToM? (2) What is a more effective evaluation protocol for machine ToM? Following psychological studies, we taxonomize machine ToM into 7 mental state categories and delineate existing benchmarks to identify under-explored aspects of ToM. We argue for a holistic and situated evaluation of ToM to break ToM into individual components and treat LLMs as an agent who is physically situated in environments and socially situated in interactions with humans. Such situated evaluation provides a more comprehensive assessment of mental states and potentially mitigates the risk of shortcuts and data leakage. We further present a pilot study in a grid world setup as a proof of concept. We hope this position paper can facilitate future research to integrate ToM with LLMs and offer an intuitive means for researchers to better position their work in the landscape of ToM. ## 1 Introduction The term _theory of mind_ (ToM, sometimes also referred to as _mentalization_ or _mindreading_) was first introduced by Premack and Woodruff (1978) as agents' ability to impute _mental states_ to themselves and others. Many aspects of human cognition and social reasoning rely on ToM modeling of others' mental states (Gopnik and Wellman, 1992; Baron-Cohen, 1997; Gunning, 2018). This is crucial for understanding and predicting others' actions (Dennett, 1988), planning over others' beliefs and next actions (Ho et al., 2022), and various forms of reasoning and decision-making (Pereira et al., 2016; Rusch et al., 2020). Inspired by human ToM, AI researchers have made explicit and implicit efforts to develop a machine ToM for _social intelligence_: AI agents that engage in social interactions with humans (Kramer et al., 2012; Kennington, 2022) and other agents (Albrecht and Stone, 2018). A machine ToM enables an interactive paradigm of language processing (Wang et al., 2023), enhancing agents' capacity for interactions (Wang et al., 2021), explainable decision-making (Akula et al., 2022), dialogue communication (Qiu et al., 2022; Takmaz et al., 2023), and collaborative task planning (Bara et al., 2023). Machine ToM has received an increasing amount of attention, especially as the field is reshaped by _large language models_ (LLMs) such as Chat-GPT (OpenAI, 2022) and GPT-4 (OpenAI, 2023). This highlights an ongoing debate and discussion on whether a machine ToM has emerged in LLMs. While LLMs have demonstrated some capability of inferring communicative intentions, beliefs, and desires (Andreas, 2022; Kosinski, 2023; Bubeck et al., 2023), researchers also reported concerns regarding a lack of robust _agency_ in LLMs for complex social and belief reasoning tasks (Sap et al., 2022; Shapira et al., 2023) and in-context pragmatic communication (Ruis et al., 2022). Emerged or not emerged, that remains a question (or may not even be the central question to ask). In our view, existing evaluation protocols do not fully resolve this debate. 
Most current benchmarks focus only on a (few) aspect(s) of ToM, in the form of written stories, and are prone to data contamination, shortcuts, and spurious correlations (Trott et al., 2022; Aru et al., 2023; Shapira et al., 2023). Prior to embarking on extensive data collection for new ToM benchmarks, it is crucial to address two key questions: (1) How can we taxonomize a holistic landscape of machine ToM? (2) What is a more effective evaluation protocol for machine ToM? To embrace the transformation brought by LLMs and explore their full potential in understanding and modeling ToM, this position paper calls for a holistic investigation that taxonomizes ToM using the _Abilities in Theory of Mind Space_ (ATOMS) framework (Beaudoin et al., 2020). After a review of existing benchmarks under this framework, we put forward a situated evaluation of ToM, one that treats LLMs as agents who are physically situated in environments and socially situated in interactions with humans. We hope this paper will offer an intuitive means to identify research priorities and to help gain a deeper understanding of, as well as to effectively utilize, LLMs in ToM modeling for AI agents in the future. ## 2 Large Language Models as Theory of Mind Agents Since the advent of pre-trained language models, the research community has questioned whether they possess intrinsic mental states to represent the environment (Li et al., 2021; Storks et al., 2021; Hase et al., 2023) and comprehend the mental states of others (Sap et al., 2019; Zhang et al., 2021) through the textual description (observation) of behavioral cues. The relatively recent breakthroughs of LLMs have created many discussions and debates, primarily concerning the extent to which LLMs possess various capabilities required for a machine ToM. In this section, we first survey recent research presenting evidence and counter-evidence for the emergence of ToM in LLMs. We conclude the discussion with the limitations of current evaluation protocols. ### Do Machine ToM Emerge in LLMs? Evidence for emergent ToM in LLMs.Prior to the rise of large language models, there has been growing evidence and acknowledgment of a narrow and limited sense of agency in smaller language models. Andreas (2022) argues that language models have the capacity to predict relations between agents' observations, mental states, actions, and utterances, as they infer approximate representations of beliefs, desires, and intentions of agents mentioned in the context. These representations have a causal influence on the generated text, similar to an intentional agent's state influencing its communicative actions under a Belief-Desire-Intention (BDI) agent model (Bratman, 1987). Amidst the excitement surrounding the release of GPT-4 (OpenAI, 2023), researchers have searched for evidence of an emergent ToM in LLMs. Kosinski (2023) presents 20 case studies each of the unexpected contents task (Perner et al., 1987) and the unexpected transfer (Sally-Anne) task (Baron-Cohen et al., 1985). With direct comparisons to children's performance, the findings have been cited as potential evidence for a spontaneous emergence of ToM in LLMs. Bubeck et al. (2023) present a similar behavioral study with 10 cases of belief, emotion, and intention understanding, concluding that GPT-4 has an advanced level of ToM after qualitative comparison with predecessors. Other case studies have also shown aspects of machine ToM (Li et al., 2023; Holterman and van Deemter, 2023). 
Limitations of ToM capabilities in LLMs.The above findings contradict the conclusions drawn in Sap et al. (2022)'s earlier study, which shows a clear lack of ToM in GPT-3 (Brown et al., 2020) on SocialIQA (Sap et al., 2019) and ToM! (Le et al., 2019) benchmarks. As a potential account, there has been criticism that the cognitive inquiries are anecdotal and inadequate for evaluating ToM in LLMs (Marcus and Davis, 2023; Mitchell and Krakauer, 2023; Shapira et al., 2023). Following the same evaluation protocol, Ullman (2023) demonstrates that simple adversarial alternatives to Kosinski (2023) can fail LLMs. To further understand if the most recent variants of LLMs possess a robust ToM, Shapira et al. (2023) present a comprehensive evaluation over 6 tasks and 3 probing methods, showing that a robust machine ToM is absent even in GPT-4 and that LLMs are prone to shortcuts and spurious correlations. Based on the ongoing debate, it can be concluded that, while LLMs exhibit some level of sensitivity at understanding others' mental states, this capability is limited and falls short of achieving robust human-level ToM (Trott et al., 2022; Shapira et al., 2023). ### Roadblocks in ToM Evaluation in LLMs Given the pressing need for a robust machine ToM in LLMs and large-scale ToM benchmarks, researchers echo several difficulties in the evaluation protocol. Presently, ToM benchmarks suffer from three primary issues summarized as follows. Limited aspects of ToM.The evaluation of machine ToM lacks consistency in the literature due to the ambiguity surrounding the specific mental states being targeted. Existing benchmarks often focus on limited numbers of mental states, such as the _intention_(Yoshida et al., 2008), _belief_(Grant et al., 2017), _emotion_(Sap et al., 2019), and _knowledge_(Bara et al., 2021) of another agent. While all of these are necessary building blocks of machine ToM, we echo Shapira et al. (2023)'s concern that the ToM capability of LLMs may have been overclaimed based on evaluations from only a specific aspect of ToM. To give a comprehensive assessment of a holistic machine ToM, a taxonomy is essential to enable researchers to effectively position their work with different focuses and priorities, which may be orthogonal to each other. Data contamination.Data contamination refers to the lack of a verifiable train-test split that is typically established to test the ability of machine learning models to generalize (Magar and Schwartz, 2022). LLMs typically learn from internet-scale data, potentially giving them access during training to the data used to test them (Bubeck et al., 2023; Hagedorff, 2023). For ToM evaluation specifically, the training corpora of LLMs may contain research papers detailing these psychological studies. Many past studies used identical or slightly altered language prompts to test LLMs, leading to potential contamination issues (Ullman, 2023). To critically evaluate the performance of LLMs on ToM tasks, researchers must have access to the datasets used to train them (Dodge et al., 2021), which are unfortunately not available. Shortcuts and spurious correlations.The availability of shortcuts and spurious features has triggered many concerns that a model may leverage them to perform highly on a benchmark without robustly acquiring the desired skill (Sclar et al., 2023; Ullman, 2023; Shapira et al., 2023). 
Recent findings suggest that LLMs tend to learn surface-level statistical correlations in compositional tasks, potentially leading to an illusion of systematic learning (Dziri et al., 2023). In all likelihood, LLMs are capable of learning ToM shortcuts in a similar manner. ## 3 Towards A Holistic Landscape of Machine Theory of Mind ### Abilities in Theory of Mind Space (ATOMS) Framework The evaluation of machine ToM lacks clarity and consistency across various literature, primarily due to the ambiguity surrounding the specific _mental states_ being targeted. This ambiguity is not unique to the field of AI but is rooted in the complicated cognitive underpinnings of ToM. At the core of this ambiguity is the latent nature of _mental states_, the subject has privileged access to them while others can only infer the existence of these mental states based on observable behaviors or expressions (Dretske, 1979; Blakemore and Decety, 2001; Zaki et al., 2009). Thus, it is impossible to directly access and assess the mental states of a human, and ToM must be tested indirectly through humans' ability to understand the relationship between mental states and behaviors, especially by predicting how agents behave based on their mental states (Swettenham, 1996; Phillips et al., 2002). While the exact definition of ToM remains a central debate, the AI community can benefit from looking at what psychologists have viewed as an initial step. In this paper, we follow Beaudoin et al. (2020)'s taxonomy of ToM sub-domains, _i.e.,_ the Abilities in Theory of Mind Space (ATOMS). As shown in Figure 1, the space consists of 7 categories of mental states, including _beliefs_, _intentions_, _desires_, _emotions_, _knowledge_, _percepts_, and _non-literal communication_. We selected this taxonomy because it was derived from a comprehensive meta-analysis of ToM studies. The meta-analysis focused on young children aged 0-5 years at the early stage of cognitive development, such that the setups are simpler and more comparable, avoiding complicated physical and social engagements that cannot be trivially deployed on LLMs. Beliefs.Beliefs are informational states that people judge to be true, usually decoupled from motivational states (Dennett, 1995; Eccles and Wigfield, 2002). Beliefs, the most studied mental states in the field of ToM, are usually tested in the form of false belief tasks, including the unexpected contents test (Perner et al., 1987), the unexpected transfer (Sally-Anne) Test (Baron-Cohen et al., 1985), the second-order false belief (Ice-cream Van) Test (Perner and Wimmer, 1985). Researchers also studied their connection to actions and emotions (Swettenham, 1996). Intentions.Intentions are choices with commitment, usually associated with concrete actions towards a goal (Cohen and Levesque, 1990). As a critical component of ToM, Kennington (2022) has called for a more explicit treatment of intentions. Intentions have been extensively explored in psychology tests, e.g., behavioral re-enactment (Meltzoff, 1995), action prediction (Phillips et al., 2002), intention explanation (Smiley, 2001), and intention attribution to abstract figures (Castelli, 2006). Desires.Desires are motivational states that do not necessarily imply commitment, though they are usually emotionally charged and affect actions [12, 13]. 
Typical studies along this line include the Yummy-Yucky Task [11] for discrepant preferences from different individuals, the multiple desires within one individual [12], and the relationship between desires and emotions/actions [13, 14].

**Emotions.** Emotions are mental states associated with an individual's feelings and affective experiences, which could impact beliefs and behaviors [15, 16]. Most ToM studies on emotions focus on typical [12] and atypical [15] emotional reactions to situations. Other studies also encompass affective perspective taking [13], understanding hidden emotions [17], and morally related emotions [14].

Table 1: A taxonomized review of existing machine ToM benchmarks, characterized by task formulation, input modality, physical and social situatedness, and the ATOMS mental states covered (belief, intention, desire, emotion, knowledge, percepts, non-literal communication).

**Knowledge.** Many controversies revolve around the definition of knowledge as justified true beliefs [1]. In the context of AI, knowledge typically consists of information and organized representations of the world, which can be used to simplify understanding and address intricate reasoning and planning [13]. ToM studies usually involve understanding the absence of knowledge [1] as well as the connection between knowledge and perception [12] and attention [11].

**Percepts.** Humans are situated in the physical and social environments. To enable AI agents to operate in the world and communicate with humans, the sensory and social aspects of perception are crucial in a machine ToM. Along this line, psychological studies have investigated perceptual perspective taking [10] and understanding the influence of limited perception on actions [1].

**Non-literal communication.** Being able to understand non-literal and figurative communication helps humans to perform pragmatic inference and reason about hidden words behind their written meanings [1]. Non-literal communication has been recognized as an advanced ToM capability, spanning a wide spectrum of humor and deceptions [1], sarcasm [13], and faux-pas (social gaffe) situations [1].

### A Taxonomized Review of Benchmarks

The ATOMS framework can serve as an intuitive reference for researchers to identify their research priorities and situate their work better in the landscape of literature. We further take the initiative to provide a systematic review of existing benchmarks for machine ToM under the umbrella of ATOMS.1 Although there are independent research initiatives on certain ToM facets like intention classification, emotion modeling, and aspects of non-literal communications, we primarily focus on those that explicitly target ToM or inferences of latent mental states. Besides the ToM dimensions in ATOMS, we further characterize the benchmarks on their task formulation, input modalities, physical and social situatedness, and symmetricity (whether the tested agent is co-situated and engaged in mutual interactions with other ToM agents).
We summarize our review in Table 1 and discuss our observations and under-explored aspects of ToM evaluation. Footnote 1: We maintain a repository for relevant literature at [https://github.com/Mars-tin/awesome-theory-of-mind](https://github.com/Mars-tin/awesome-theory-of-mind). Many aspects of ToM are under-explored.As shown in Figure 2, we notice an overwhelming research focus on the intention and belief aspects of machine ToM. Several other aspects of ToM have not received enough attention. While the field of NLP has thoroughly explored different facets of emotion and non-literal communication, e.g., in the context of dialogue systems, ToM has rarely been explicitly mentioned as motivation. More connections and integrative efforts are clearly needed. Lack of clear targeted mental states.Explicitly mentioning the Sally-Anne Test [1] as inspiration, Grant et al. (2017) developed the predecessor of ToM1 [10]. Similarly, Nematzadeh et al. (2018) cited the Icceream Van Test [23] as motivation and the FauxPas-EAI [1] benchmark followed the study of Baron-Cohen et al. (1999). While these benchmarks are cognitively grounded and target one particular aspect of ToM, the majority often incorporate multiple mental states without clear descriptions, which could make it challenging to measure the actual progress [11]. Lack of situatedness in a physical and social environment.Figure 3 illustrates the configurations of benchmarks. Each bar in the chart represents a distinct benchmark characteristic, and each segment within the bar illustrates the proportion of benchmarks with one specific setting. An immediate observation is a noticeable lack of benchmarks that encompass both physical and social environments, which highlights an existing research disparity in the field. We notice that many existing benchmarks are story-based, which verbalize the agent's perception of the environment and the behaviors of other agents in the form of story episodes, usually with language templates. The semantics of the environment are given by high-level events (e.g., Sally entered the kitchen). Many aspects of physical and social situatedness are overlooked in these benchmarks, e.g., spatial relations, the task and motivation of agents, and their action trajectories. Lack of engagement in environment.We point out that existing benchmarks primarily adopt a passive observer role to test language agents. Yet the crucial aspects of interaction and engagement between the agent and other entities involved have been overlooked. Among all the benchmarks we reviewed, only three of them treat the tested model as an active agent, one that perceives the physical and social context, reasons about others' mental states, communicates with other agents, and interacts with the environment to complete pre-defined tasks (Sclar et al., 2022; Bara et al., 2021, 2023). ## 4 Towards A Situated Theory of Mind ### Why A Situated ToM? There have been concerns that cognitive inquiries are inadequate for gaining insight into understanding ToM for LLMs (Mitchell and Krakauer, 2023; Shapira et al., 2023). However, we believe that the primary problem lies in using story-based probing as proxies for psychological tests, which situate human subjects in specific physical or social environments and record their responses to various cues. We, therefore, call for a situated evaluation of ToM, in which the tested LLMs are treated like agents who are physically situated in environments and socially situated in interactions with others. 
**Situated evaluation covers more aspects of ToM.** Although it is possible to frame the situations as narratives and cover all mental states using text-only benchmarks, certain aspects of ToM can only be effectively studied within specific physical or social environment (Carruthers, 2015). This is because humans have the ability to infer the mental states of others through various modalities such as visual perception, actions, attention (gazes or gestures), and speech (Stack et al., 2022). For instance, studying perceptual disparities can be challenging with text-only datasets, as they often reduce complex scenarios to rule-based manipulations over negations in the prompts (Sileo and Lernould, 2023). Benchmarks that are not situated also face challenges when it comes to implementing coordination between agents, e.g., aligning intentions towards joint actions (Jain et al., 2019) and pragmatic generation (Zhu et al., 2021; Bao et al., 2022). **Situated evaluation mitigates data contamination.** A situated ToM evaluation can mitigate data contamination, as researchers can design scenarios in simulated settings that are unlikely to be part of the LLM's training data. Carefully designed benchmarks can also incorporate seen and unseen environments to assess generalization to new tasks and new environments, fundamentally addressing the issue of data contamination (Gandhi et al., 2021). **Situated evaluation mitigates shortcuts.** By employing situated evaluation, the risk of taking shortcuts can be mitigated. Many of the existing ToM benchmarks are either limited in scale or adopt text templates to verbalize a (few) predefined scenario(s) and prompt LLMs for answers, giving answers away from syntactic structures and positional information (Le et al., 2019; Sclar et al., 2023). In a situated setting, on the contrary, we rely on simulated environments to manipulate evaluation data at scale, so that the environment, the states, and the action traces in the environment can be randomized to avoid the statistical spurious correlations. While situated evaluation can mitigate shortcuts, it does not eliminate the issue completely. For example, Aru et al. (2023) have reported that shortcuts can emerge in grid world setups if the design is not careful enough and randomness is limited. We emphasize that careful design and consideration are still required to curate any ToM benchmark. ### A Preliminary Exploration in Grid World In this section, we present a proof-of-concept study on a situated evaluation of ToM on LLMs. We choose to conduct our pilot study in MiniGrid (Chevalier-Boisvert et al., 2018), a simple and commonly used environment for ToM studies in the machine learning community (Rabinowitz et al., 2018; Sclar et al., 2022). Through basic grid world representation, we can create tasks to challenge LLMs to reason about many aspects of physical and social situatedness, e.g., spatial relations, partial observability, agent's action trajectories, and from there, their beliefs, intent, emotions, etc. This is in stark contrast to existing story-based ToM benchmarks, which only contain high-level event episodes. We demonstrate that a diverse range of challenging ToM tests, covering all mental states from ATOMS, can be effectively created in a situated manner using a simple 2D grid world. **Environment and Task Setups** We introduced 9 different ToM evaluation tasks for each mental state under ATOMS, and 1 reality-checking task to test LLMs' understanding of the world. 
It is important to acknowledge that our experiment serves as a proof of concept and does not aim to cover the entire spectrum of machine ToM, as our case studies are far from being exhaustive or systematic.

* **Reality Check**: Given the sequence of actions, predict the closest object at the end of the trajectory. The task is designed to test LLMs' understanding of relocations in the grid world.
* **Short-term Intention**: Given an incomplete trajectory and a goal, predict the next action.
* **Long-term Intention**: Given an incomplete trajectory and a list of subgoals, predict the next subgoal that the agent is planning to achieve.
* **Desire**: Given a complete trajectory, predict if the agent demonstrates a preference for objects.
* **Percepts**: Given a complete trajectory, predict if the agent has a partial or full observation.
* **Belief**: The classic unexpected transfer task with possible first and second order false belief.
* **Non-literal Communication**: Given a trajectory and a statement from the agent, judge whether the agent is being deceptive.
* **Knowledge**: Given a trajectory, predict the object whose location is unknown to the agent.
* **Emotion**: The classic perception-emotion link test, where emotions are evoked in response to witnessing an emotionally stimulating situation.

We detail two case studies and leave examples of each task in Appendix A.

**Case Study 1: Beliefs.** Our belief experiments emulate the classic unexpected transfer tasks [1, 12]. As is shown in Figure 4, we simulate this disparity of belief state and world state in MiniGrid. The first-order belief task features a main room with three connected side rooms, two agents named Red and Green, and a ball. Each instance of the belief experiment begins with Green placing the ball in Room#2 while Red watches. Red then enters a separate Room#1 and shuts the door. While Red is inside of this closed room, Green transfers the ball to Room#3. Red presumably holds a _false belief_ about the location of the ball, believing it is in Room#2 though it is now in Room#3. Similarly, we implement the second-order belief task to test an incorrect belief that one agent holds about the belief of another. After Green has finished transferring the ball, it navigates to the room originally containing the ball and shuts the door. Red then navigates to the room now containing the ball and sees the true location of the ball. Still, Green presumably possesses a false belief about Red's belief. In both tasks, LLMs are queried with two versions of the world: a false one with the ball in the original room, and a true one with the ball in the third room (its actual location). LLMs must correctly respond that the agents hold a false belief.

Figure 4: An overview of the first and second order false belief task illustrated in a grid world setup. We simulate the unexpected transfer scenarios with two agents, and verbalize the environment and action traces to test if LLMs hold a correct understanding of the agents' false beliefs.

Figure 5: An overview of the morally related emotional reaction tasks illustrated in a grid world setup. We simulate scenarios where an agent either directly witnesses or is ignorant of a morally related event, and verbalize the environment and action traces to test if LLMs hold a correct prediction of the agent's emotional reaction.

**Case Study 2: Emotions.** While the belief tasks highlight the importance of physical situatedness, we further demonstrate that social interactions can be simulated in the grid world.
As is shown in Figure 5, we design morally related events that stimulate emotions (e.g., fear, appreciation). In this task, LLMs are queried to predict the emotional response of Agent-White, who either directly witnesses or is ignorant of this event. LLMs must correctly respond that the agent holds an emotional reaction only if it observes the event.

**Experiment Setups.** For each task, we create 100 instances following a prompt template that consists of [environment description], [agent description], [observability statement], [task statement], [actions sequences], [QA]. We select GPT-4 (gpt-4-0314) and ChatGPT (gpt-3.5-turbo-0613) for evaluation on the 9 tasks.2 Following prior work (Hu et al., 2022; Shapira et al., 2023a), we adopt MC-probing for LLMs that don't produce probabilities, which directly instructs LLMs to generate only the letter corresponding to the answer. Besides zero-shot evaluation, we also explored one-shot learning and Chain-of-Thought (CoT) prompting (Wei et al., 2022). More details are available in Appendix B.

Footnote 2: We use the ChatCompletion.create function from the openai package.

**Results and Discussion.** We observe that LLMs exhibit some level of sensitivity for some mental states. Especially, GPT-4 scores up to 91% zero-shot accuracy and 96% one-shot accuracy in the long-term intention task. However, we also highlight the shortcomings of LLMs in some mental states of ATOMS to varying degrees, especially in terms of predicting preferences, perception limitations, missing knowledge, and higher-order beliefs. These findings align with previous research (Sap et al., 2022; Trott et al., 2022; Shapira et al., 2023a), further confirming that LLMs are not yet reliable and comprehensive ToM agents. From the reality-checking task, we observe that GPT-3.5 reaches 78% accuracy with CoT prompting and GPT-4 significantly surpasses its predecessors with 83% zero-shot accuracy and 95% one-shot accuracy. Solving this reality check by no means implies that LLMs have a general perception ability of the real world, but that, as a proof of concept, they demonstrate a certain (but still limited) level of situated awareness within the context of a basic abstract grid world. This implies that researchers can begin utilizing them as powerful building blocks for situated agents in complex ToM tasks. We note that it is always possible to come up with more challenging reality-checking questions to expose the limitations of LLMs, or to provide more guided prompts to assist LLMs in successfully completing ToM tasks. Undoubtedly, further research is required along this exciting yet challenging trajectory to advance ToM in LLMs and AI agents built upon LLMs.

Figure 6: The LLMs' performance across the 10 tasks is illustrated. Each bar shows how one LLM performed with a specific prompting method. Overall, the tasks are tough for all LLMs tested. The effectiveness of one-shot and CoT prompting is not consistent across the board. Some results are N/A as the prompt went out of the context window.

## 5 Discussions and Action Items

### The Scope of Machine Theory of Mind

**Be specific about the mental states studied.** Existing benchmarks often lack a clear target mental state, making it challenging to interpret the results and measure the actual progress. To mitigate the risk of overestimating LLMs' ToM capabilities, it is recommended that future benchmark developers provide specific details regarding the targeted mental state(s) they intend to assess.
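To make this kind of task-level reporting concrete, the following is a minimal sketch of one evaluation step from the protocol in SS4.2: a verbalized grid-world episode is assembled with the prompt template above and MC-probed for a single answer letter. The episode text and the helper names (`build_prompt`, `mc_probe`) are illustrative rather than taken from our benchmark, and the API call assumes the pre-1.0 openai package interface referenced in Footnote 2.

```python
# Minimal sketch of one situated evaluation step (illustrative; not the released benchmark code).
# Assumes the pre-1.0 `openai` package (ChatCompletion.create, see Footnote 2) and that
# OPENAI_API_KEY is set in the environment.
import openai

def build_prompt(episode: dict) -> str:
    """Assemble the templated prompt: [environment description], [agent description],
    [observability statement], [task statement], [actions sequences], [QA]."""
    return "\n\n".join([
        episode["environment_description"],
        episode["agent_description"],
        episode["observability_statement"],
        episode["task_statement"],
        episode["action_sequences"],
        episode["question"] + "\nAnswer with only the letter of your choice.",  # MC-probing instruction
    ])

def mc_probe(episode: dict, model: str = "gpt-4-0314") -> str:
    """Query the LLM and return the single answer letter (zero-shot MC-probing)."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(episode)}],
        temperature=0,
        max_tokens=1,
    )
    return response["choices"][0]["message"]["content"].strip()

# Hypothetical first-order false-belief instance (episode text is illustrative only).
episode = {
    "environment_description": "A main room is connected to Room#1, Room#2 and Room#3.",
    "agent_description": "Two agents, Red and Green, act in the environment.",
    "observability_statement": "Red cannot see through closed doors.",
    "task_statement": "Decide where Red believes the ball is located.",
    "action_sequences": ("Green places the ball in Room#2 while Red watches. "
                         "Red enters Room#1 and closes the door. "
                         "Green moves the ball to Room#3."),
    "question": "Where does Red believe the ball is? (A) Room#2 (B) Room#3",
}
prediction = mc_probe(episode)                    # a correct ToM inference answers "A"
is_correct = prediction.upper().startswith("A")
```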
Broaden the Scope of Machine ToM.A breadth of mental states and their sub-domains have already been covered by AI benchmarks (Table 1). We observed an overwhelming emphasis on the benchmarks and modeling of _beliefs_ and _intentions_, while other aspects have received insufficient attention. Still, there are considerably many blank spaces in the landscape of machine ToM, especially for more complicated forms of knowledge, desires, perspective-tasking, and emotional experiences beyond typical social situations. ### Design New Theory of Mind Benchmarks Avoid shortcuts and spurious correlations.The evaluation of LLMs itself presents significant challenges, not only in the case of ToM. Existing benchmarks suffer from issues such as data leakage and spurious correlations. Especially, shortcut solutions have been consistently reported in recent years (Le et al., 2019; Shapira et al., 2023; Aru et al., 2023). We are in pressing need of new benchmarks with scalable sizes, high-quality human annotations, and privately held-out sets for evaluation. Avoid unfair evaluations from prompting.Previous work has shown that CoT prompting can improve the performance of LLMs in ToM tasks (Li et al., 2023; Moghaddam and Honey, 2023; Shapira et al., 2023). Various recent prompting mechanisms have also been developed to improve LLM's capability on ToM tasks (Zhou et al., 2023; Leer et al., 2023). In the evaluation of LLMs' ToM capabilities, we recommend the careful documentation of prompts used and the avoidance of implicit human guidance to ensure a fair comparison. Move on to a situated ToM.We call for a situated evaluation of ToM, in which the tested LLMs are treated like agents who are physically situated in environments and socially situated in interactions with others. A situated setup covers a wider range of ToM aspects. With carefully designed benchmarks with diverse environments and unseen test sets, a situated setup can help address data contamination issues and assess generalization to new tasks and environments. Furthermore, a situated setup allows for more complicated evaluation protocols than simple inference and QA tasks. Consider a mutual and symmetric ToM.ToM is symmetric and mutual in nature, as it originally imputes the mental states of self and others. Prior research is largely limited to passive observer roles (Grant et al., 2017; Nematzadeh et al., 2018; Le et al., 2019; Rabinowitz et al., 2018) or speaker in a speaker-listener relationship (Zhu et al., 2021; Zhou et al., 2023). We encourage more studies on how humans and agents build and maintain common ground with a human ToM and a machine ToM through situated communication (Bara et al., 2021; Sclar et al., 2022). Besides, more research is needed to understand if LLMs possess early forms of intrinsic mental states given observation cues of the world. While we need to develop machines that impute the mental states of humans, humans should also develop a theory of AI's mind (ToAIM) (Chandrasekaran et al., 2017) by understanding the strengths, weaknesses, beliefs, and quirks of these black box language models. ### Neural Language Acquisition and ToM Both psychological studies (Bloom, 2002; Tomasello, 2005) and computational simulations (Liu et al., 2023) have demonstrated the effectiveness of ToM, especially intention, in language acquisition. Instead of concentrating on eliciting ToM in LLMs, we should contemplate whether certain ToM elements should be inherently present in LLMs or perhaps introduced alongside language pretraining. 
More research is needed to understand the connection between neural word acquisition and ToM development in machines. ## 6 Conclusion In this position paper, we survey and summarize the ongoing debate regarding the presence of a machine ToM within LLMs, and identify the inadequate evaluation protocols as the roadblock. Many benchmarks focus only on a few aspects of ToM, and are prone to shortcuts. To mediate this issue, we follow the ATOMS framework to offer a holistic review of existing benchmarks and identify under-explored aspects of ToM. We further call for a situated evaluation of ToM, one that is physically situated in environments and socially situated in interactions with humans. We hope this work can facilitate future research towards LLMs as ToM agents, and offer an intuitive means for researchers to position their work in the landscape of ToM. ### Ethical Statement The dataset created in this study includes instances that are synthetically generated from planners and RL algorithms, as well as ones created by humans. Human subjects research is approved by the University of Michigan Health Sciences and Behavioral Sciences Institutional Review Board (IRB-HSBS) under eResearch ID HUM00234647. The text generated by LLMs could potentially contain harmful, toxic, or offensive content. The authors have ensured that the data does not contain personally identifiable information or offensive content. ### Limitations Our current benchmark only covers 100 instances for each task, adding up to only 1000 instances. Our experiment serves as a proof of concept and does not aim to cover the entire spectrum of machine ToM, as our case studies are far from being exhaustive or systematic. In the future, we plan to create a more systematic benchmark with a larger scale and various forms of evaluation. Additionally, it is worth noting that the ATOMS framework is derived from human ToM studies conducted with children under the age of 5. Consequently, this framework primarily focuses on the early developmental stages of ToM, capturing the naive and potentially rudimentary aspects of ToM. For more advanced ToM capability, we point to some recent frameworks proposed by Osterhaus and Bosacki (2022) and Stack et al. (2022). ## Acknowledgements This work was supported in part by NSF IIS-1949634, NSF SES-2128623, and by the Automotive Research Center at the University of Michigan. Without implying any agreement with the contents as presented in this work, the authors extend their appreciation to Susan Gelman for her valuable feedback. The authors would like to thank all anonymous reviewers for their valuable feedback.
2310.01056
Low-Frequency Intensity Modulation of High-Frequency Rotor Noise
Acoustic spectra of rotor noise yield frequency-distributions of energy within pressure time series. However, they are unable to reveal phase-relations between different frequency components, while the latter play a role in low-frequency intensity modulation of higher-frequency rotor noise. Baars et al. (AIAA Paper 2021-0713) outlined a methodology to quantify inter-frequency modulation, which in the current work is applied to a comprehensive acoustic dataset of a rotor operating at low Reynolds number at advance ratios ranging from $J = 0$ to $0.61$. The findings strengthen earlier observations for the case of a hovering rotor, in which the modulation of the high-frequency noise is strongest at angles of $\theta \approx -20^\circ$ (below the rotor plane). For the non-zero advance ratios, modulation becomes dominant in the sector $-45^\circ \lesssim \theta \lesssim 0^\circ$, and is maximum in strength for the highest advance ratio tested ($J = 0.61$). Intensity-modulation of high-frequency noise is primarily the consequence of a far-field observer experiencing a cyclic sweep through the noise directivity patterns of the relatively directive trailing-edge/shedding noise component. This noise becomes more intense with increasing J and is associated with the broadband features of the (partially) separated flow over the rotor blades.
Woutijn J. Baars, Daniele Ragni
2023-10-02T10:05:35Z
http://arxiv.org/abs/2310.01056v1
# Low-Frequency Intensity Modulation of High-Frequency Rotor Noise

###### Abstract

Acoustic spectra of rotor noise yield frequency-distributions of energy within pressure time series. However, they are unable to reveal phase-relations between different frequency components, while the latter play a role in low-frequency intensity modulation of higher-frequency rotor noise. Baars _et al._ (AIAA Paper 2021-0713) outlined a methodology to quantify inter-frequency modulation, which in the current work is applied to a comprehensive acoustic dataset of a rotor operating at low Reynolds number at advance ratios ranging from \(J=0\) to \(0.61\). The findings strengthen earlier observations for the case of a hovering rotor, in which the modulation of the high-frequency noise is strongest at angles of \(\theta\approx-20^{\circ}\) (below the rotor plane). For the non-zero advance ratios, modulation becomes dominant in the sector \(-45^{\circ}\lesssim\theta\lesssim 0^{\circ}\), and is maximum in strength for the highest advance ratio tested (\(J=0.61\)). Intensity-modulation of high-frequency noise is primarily the consequence of a far-field observer experiencing a cyclic sweep through the noise directivity patterns of the relatively directive trailing-edge/shedding noise component. This noise becomes more intense with increasing \(J\) and is associated with the broadband features of the (partially) separated flow over the rotor blades.

Faculty of Aerospace Engineering, Delft University of Technology, 2629 HS Delft, The Netherlands

Presented as AIAA Paper 2023-3215 at the AIAA AVIATION 2023 Forum, San Diego, CA, 12-16 June 2023. Copyright © 2023 by Baars and Ragni. Submitted to the AIAA Journal.

## 1 Introduction and context

Urban air mobility (UAM) vehicles and drones comprise rotors that are typically smaller than the single-rotor technology of conventional helicopters. For instance, it is not uncommon that the many electric vertical takeoff and landing (eVTOL) prototype vehicles contain a multitude of rotors, _e.g._, the Joby Aviation vehicle includes 6 rotors, the EHang 216 autonomous aerial vehicle has 6 rotors, the Supernal SA-1 eVTOL aircraft has 4 tiltrotors and 4 sets of stacked co-rotating rotors, and the VoloDrone and VoloCity vehicles of Volocopter include 18 rotors each. Assessing the rotor noise of new advanced air mobility (AAM) vehicles has gained a high priority, due to their envisioned operation in densely populated areas [1]. At the same time, engineering studies on the noise impact of rotors should be revisited because time-varying aspects of acoustic waveforms are rarely addressed, while these influence the human perception of rotor noise. Most studies on acoustic aspects of small-scale rotors consider standard characterization schemes that rely on time- and/or ensemble-averaging [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]: results are typically condensed to a set of acoustic spectra, their integrated energy (overall sound pressure level), as well as the directivity patterns of that acoustic energy. The time-dependent _"wop-wop"_ noise component from larger-size rotorcraft, or even the higher-frequency _"buzzing"_ noise of drone propellers, is a phenomenon of _time-varying noise intensity_ and not necessarily a direct perception of the blade passing frequency (BPF), denoted as \(f_{b}\).
Current noise certification standards fall short in capturing time-varying aspects of a noise signal (_e.g._, the tone-corrected, effective perceived noise level (EPNL) and/or the A-weighted sound exposure level (SEL) in 14 CFR Part 36). Paradoxically, there is a growing body of knowledge that such time-varying aspects are highly relevant for the level of (psycho-acoustic) annoyance [14, 15, 16, 17, 18, 19]. Because noise levels that comply with a certification standard are not acceptable to the public per say, it is evident that complementary assessments of noise-perception aspects are needed. ### A. Rotor noise modulation Understanding the low-frequency modulation of higher-frequency rotor noise is important for the development of low-order modelling- and auralization-algorithms [20, 21, 22], as well as psychoacoustic modelling [23, 24, 25]. Before considering the temporal variation of rotor noise, a short review of noise sources is provided. Periodic rotor-noise components are classified as thickness noise and blade loading noise [26, 27, 28, 29]. A superimposed component of broadband noise originates from turbulence ingestion (a leading-edge mechanism) and vortical turbulent-boundary layer motions convecting past the trailing edge. In addition, for low Reynolds-number propellers, an additional near-wake source comes from the vortex shedding behind laminar and/or turbulent separation regions [30]. And finally, even when the rotor operates in a clean flow, the turbulence-ingestion noise can become dominant through the onset of blade-vortex interaction (BVI). This interaction is determined by the distance between the tip-vortex deployed by consecutive blades and the blades themselves. Temporal variations in the acoustic intensity and characteristic frequency of the noise are dubbed intensity (or amplitude) and frequency modulations, respectively. In this work, we consider the intensity modulation and refer to this as _BPF modulation_ (BPFM), because it will become evident that its time-scale is prescribed by the rotating motion of the blade. Note that this is fundamentally different from the variations in noise amplitude and Doppler-frequency shift that occur for transient helicopter flyover manoeuvres. In those cases, studies do occasionally employ time-preserving schemes when dealing with non-stationary acoustic signals [31, 32]; they focus on the acoustic footprints affiliated with _very_ long-timescale variations in noise, relative to the BPF, associated with manoeuvres of the flight vehicle. Gan _et al._[19] studied temporal variations of rotor noise from a Bell 206 helicopter in level and descending flight, including effects of aerodynamic interactions and the noise of the tail rotor. Here we study the effect in a more fundamental setting, using an isolated rotor. To illustrate BPFM, consider an acoustic spectrum of rotor noise shown in Fig. 1(a). The acoustic pressure time series associated with the BPF can be generated using a narrow band-pass filter and is shown in Fig. 1(b). When the high-frequency content of the signal is _unmodulated_, the time series after high-pass filtering (in this example \(f>10f_{b}\)) has a time-invariant envelope of the intensity (Fig. 1c). However, due to the rotating nature of the blade's noise sources, a _modulated_ intensity-envelope may arise. This modulated high-frequency noise is illustrated in Fig. 1(d) after artificially modulating the carrier signal. 
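To make the decomposition of Fig. 1 explicit, a minimal sketch on a synthetic signal (assumed parameters; not the measured data) of the band-pass extraction of the BPF component, the high-pass filtered carrier, and an artificially imposed intensity modulation at the blade-passage time scale is:

```python
# Minimal sketch of the Fig. 1 decomposition on a synthetic signal (assumed parameters, not measured data).
import numpy as np
from scipy import signal

fs, fb = 51_200, 262.0                     # sampling rate [Hz] and BPF [Hz], as used in this study
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
p = np.sin(2 * np.pi * fb * t) + 0.3 * rng.standard_normal(t.size)   # BPF tone plus broadband noise

# (b) narrow band-pass filter encompassing the BPF
sos_bp = signal.butter(4, [0.9 * fb, 1.1 * fb], btype="bandpass", fs=fs, output="sos")
p_bpf = signal.sosfiltfilt(sos_bp, p)

# (c) high-pass filtered content at f > 10 fb (unmodulated carrier)
sos_hp = signal.butter(4, 10 * fb, btype="highpass", fs=fs, output="sos")
p_hf = signal.sosfiltfilt(sos_hp, p)

# (d) artificially modulated carrier: intensity envelope varying at the blade-passage period 1/fb
envelope = 1 + 0.5 * np.cos(2 * np.pi * fb * t)
p_hf_mod = envelope * p_hf

# the intensity envelope can be recovered from the analytic signal (Hilbert transform)
recovered_envelope = np.abs(signal.hilbert(p_hf_mod))
```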
It is important to realize that BPFM does not affect an ensemble-averaged spectrum (or any other second-order statistics). However, the intensity variations are audible, even when the BPF is inaudible to the human ear (this is the case especially for large-scale rotorcraft systems for which typically \(f_{b}<20\,\mathrm{Hz}\)). Preserving the temporal dimension of the data can be done explicitly by performing a time-frequency analysis. By way of a wavelet transform using a Morlet wavelet, a wavelet power-spectrum (WPS) of a noise signal can be generated. Such a time-frequency representation of the acoustic energy is shown in Fig. 2(a) for a time series that is subject to BPFM (details of the implementation can be found here [33]). Alongside, in Fig. 2(b), the ensemble-averaged Fourier spectrum and time-averaged WPS are shown for this stationary noise signal. Note that the Morlet wavelet has a relatively high temporal resolution--at the expense of a fine spectral resolution--with a small cone-of-influence (COI) region. With sufficient resolution in time, it is evident that the noise at \(f>10f_{b}\) exhibits a strong intensity modulation with a time-scale equal to that of the blade passages (indicated by the alternating attenuation and intensification of energy). An in-depth route to separate tonal and broadband components is also useful to explore (particularly when harmonics appear at very high frequencies). In this regard, cyclo-stationary spectral analysis [34] or wavelet-based methods [35] can be considered, but these are beyond the scope of the current paper. Fig. 1: (a) Typical acoustic spectrum of rotor noise used to illustrate BPF modulation (BPFM). This spectrum corresponds to a rotor noise time series measured in the current study (microphone B, indicated in Fig. 3(b), for an advance ratio of \(J=0\)). (b,c,d) Time series of the narrow band-pass filtered signal encompassing the BPF, and the high-pass filtered (un)modulated noise residing at \(f>10f_{b}\). ### Present contribution and outline In this study, the time- and/or ensemble-averaged characterization of rotor noise is augmented by utilizing metrics that preserve the temporal variation in the intensity of high-frequency noise [36]. This intensity-variation is a nonlinear frequency-interaction and refers, in the context of our current work, to the phase-consistencies between the low-frequency BPF and higher-frequency noise. A primary outcome of the study by Baars _et al._[36] was that for a rotor in hover, BPFM is strongest at angles of \(\theta\approx-20^{\circ}\). A low-fidelity listening experiment also indicated that the modulation metrics are well-correlated with the degree of modulation that is audible. In terms of source mechanisms, the working principle of BPFM may initially be thought of as a result of the harmonic variation in the source-receiver distance (as a direct result of the rotor spinning). But, if that were to be the case, the normalized modulation strength would be maximum at sideline angles; this is not observed in measurements [36]. As such, a proper quantification of BPFM and an identification of its source mechanisms is needed. It will inspire new measurement and post-processing procedures to support noise regulations and the assessment of the noise impact on communities [37, 38]. For instance, this can be achieved by including BPFM aspects in realistic auralizations of rotorcraft noise [20, 39] and future noise prediction tools [21, 40, 41, 42]. 
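Before turning to the experimental campaign, the Morlet-based wavelet power spectrum of Fig. 2 can be sketched in a similarly minimal fashion; the example below uses the PyWavelets package and is not the implementation referenced in [33]:

```python
# Sketch of a time-frequency (wavelet) analysis of a rotor-noise time series with a Morlet wavelet.
# Assumes PyWavelets; parameters and inputs are illustrative.
import numpy as np
import pywt

def wavelet_power_spectrum(p, fs, fmin, fmax, n_freq=200):
    """Continuous wavelet transform with a Morlet wavelet; returns WPS(f, t) and the frequency axis."""
    freqs_target = np.geomspace(fmin, fmax, n_freq)
    # pywt scales relate to frequency via the wavelet's central frequency
    scales = pywt.central_frequency("morl") * fs / freqs_target
    coeffs, freqs = pywt.cwt(p, scales, "morl", sampling_period=1.0 / fs)
    wps = np.abs(coeffs) ** 2                      # wavelet power, one row per frequency
    return wps, freqs

# usage on the synthetic modulated signal from the previous sketch:
# wps, freqs = wavelet_power_spectrum(p_hf_mod, fs=51_200, fmin=262.0, fmax=10_000)
# time_averaged_wps = wps.mean(axis=1)             # comparable to a Fourier spectrum (cf. Fig. 2b)
```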
This paper starts by describing an aeroacoustic experiment of a laboratory-scale rotor at advance ratios ranging from \(J=0\) to \(0.61\) (SS II). Then, SS III covers a brief review of quantifying BPFM, after which directivity patterns of BPFM are described in tandem with the results of the rotor's inflow disturbances. Since certain noise sources become more (or less) dominant with variations in \(J\)[30], due to the changing separated flow-features around the blade at relatively low Reynolds-numbers, these data allow for identifying the source mechanisms responsible for BPFM, which are described in SS IV. Figure 2: (a) Wavelet power spectrum (WPS) of a rotor noise time series, corresponding to a case with a relatively high degree of BPFM (microphone C, indicated in Fig. 3(b), for an advance ratio of \(J=0.61\)). (b) Global, time-averaged WPS associated with the same data used to construct sub-figure (a), in comparison to a conventional Fourier-based spectrum. **II. Experimental data of a small-scale rotor** **A. Experimental setup and rotor operating conditions** Two experimental campaigns were conducted: (1) acoustic measurements in the rotor's near- and far-field regions, and (2) flow-field measurements using Particle Image Velocimetry (PIV).* All aeroacoustic measurements were conducted in the anechoic A-Tunnel [43] of the Delft University of Technology. This facility is anechoic at frequencies above 200 Hz. Internal dimensions are roughly \(6.4\) m (L) \(\times\)\(6.4\) m (W) \(\times\)\(3.2\) m (H). The wind tunnel inlet measures \(0.6\) m in diameter and provides a uniform, low-turbulence intensity inflow velocity (\(\sqrt{u^{2}}/U_{\infty}\lesssim 0.05\) %). Atmospheric pressure, temperature and relative humidity were practically constant throughout the measurement duration and were taken as \(p_{\infty}=101\) 325 Pa, \(T_{\infty}=293.15\) K and RH = 40 %, respectively, yielding a density of \(\rho_{\infty}=1.207\) kg/m\({}^{3}\) and a sound speed of \(a_{\infty}=343.2\) m/s. Footnote *: All acoustic data are archived open-access and are available online at: [https://doi.org/10.4121/9c8cf649-7617-42e2-a9b1-a32d5f483964](https://doi.org/10.4121/9c8cf649-7617-42e2-a9b1-a32d5f483964). Flow-field data are available upon request: please email the authors at [email protected]. A rotor test rig was mounted to the circular wind tunnel inlet, supporting a small-scale rotor in hover (see Fig. 3c). A circular nacelle of 5 cm in diameter embedded a compact 6-axis ATI Mini40 sensor (with maximum thrust and torque capacities of 40 N and 1 Nm, respectively), providing rotor thrust and torque readings. An LMT 2280 brushless motor was used in combination with a TDK-Lambda power supply to drive the rotor, comprising a voltage range of 0-60 V and a current range of 0-80 A. A US Digital EM1 transmissive optical encoder, coupled with a US Digital disk of 25 mm in diameter, gave a one-per-revolution (1P) signal of the rotor shaft for an accurate reading of its rotational speed and angular position. The induced flow direction was physically upward in the facility, but in the plots throughout the paper the orientation is flipped upside down to represent a rotor-in-hover scenario when \(J=0\). Finally, any flow recirculation was not observed qualitatively in this relatively large chamber with a large anechoically-treated exhaust slit located roughly \(6D_{p}\) downstream of the rotor (Fig. 
3c); it is therefore expected that an intensification of BPF harmonics--that have previously been linked to a flow recirculation in anechoic chambers [44]--are absent. The rotor itself was derived from an APC propeller (model 9x6e); this rotor has a diameter of 9 inch and a pitch of 6 inch. For the current setup, the diameter was scaled up to \(D_{p}=2R=0.30\) m, while all blade elements were re-shaped with a NACA 4412 airfoil. The rotor, made of an aluminum alloy, was manufactured in-house using CNC machining and with a 0.4 to 0.8 \(\upmu\)m Ra finish. This rotor is identical to the one used in benchmarking studies (_i.e._, BANC X) focusing on the flow transition over the blades and its influence on the aeroacoustic performance [30, 45, 46]. The rotor spun at a nominal rate of \(\omega=131.0\) rev/s (7 860 RPM), resulting in a BPF of \(f_{b}=262.0\) Hz. Rotor-rotational speed was kept constant to within \(\pm 0.1\%\) with the aid of a closed-loop PID-type controller working with the 1P signal. For the hover condition, the Reynolds number was \(Re_{c75}\equiv c_{75}2\pi\omega 0.75R/\nu=1.35\cdot 10^{5}\), based on the blade chord of \(c_{75}=22.4\) mm at \(r=0.75R\), and the tip Mach number was \(M_{\rm tip}\equiv 2\pi\omega R/a_{\infty}=0.358\) (note that the rotational speed of the rotor in rad/s is denoted as \(\Omega=2\pi\omega\)). Predicting the noise can be difficult at low Reynolds-numbers [45, 47, 48, 49], when the rotor operates with a reduced efficiency. Hence, the experimental data can aid the validation of simulations. To consider a variation in rotor-blade loading (as a representation of varying forward flight velocity with fixed RPM rotors), various advance ratios were considered. The advance ratio is defined as \(J\equiv U_{\infty}/(\omega 2R)=\pi U_{\infty}/U_{\text{tip}}\), and four different values were considered by fixing the rotational speed of the rotor, while changing the tunnel inflow (freestream) velocity, \(U_{\infty}\). For the four advance ratios considered, being \(J=0\) (hover), \(0.24\), \(0.41\) and \(0.61\), the corresponding freestream velocities were \(U_{\infty}\approx 0\), \(9.6\), \(16\) and \(24\,\text{m/s}\), respectively. Tip Mach numbers and Reynolds numbers were nearly constant (listed in Table 1) and considered a total velocity composed of the rotor's rotational speed component and the incoming freestream velocity component (_e.g.,_\(M_{\text{tip}}=U_{\text{tip}}/a_{\infty}\), with \(U_{\text{tip}}=\sqrt{(\Omega R)^{2}+U_{\infty}^{2}}\)). Measurements of the thrust force (\(F_{z}\)) and rotor torque (\(\tau_{z}\)) were performed for each advance ratio to yield rotor-performance data (Table 1). Coefficients of thrust (\(C_{T}\)) and torque (\(C_{\tau}\)), and the propulsive efficiency (\(\eta_{p}\)), were calculated following the relations, \[C_{T}=\frac{F_{z}}{\rho A\Omega^{2}R^{2}},\quad C_{\tau}=\frac{\tau_{z}}{\rho A \Omega^{2}R^{3}},\quad\eta_{p}=\frac{JC_{T}}{\pi C_{\tau}}. \tag{1}\] Here \(A=\pi R^{2}\) is the rotor disk area; rotor power in Watts is taken as \(P=\Omega\tau_{z}\). Finally, for the hover condition only, the Figure 3: **(a,c) Setup with the rotor axis \(z\) and radial coordinate \(r\); the rotor hub is at \((r,z)=(0,0)\).****(b) Acoustic grid of 1 120 microphone positions mapped out with a linear microphone boom (\(m=1:40\) microphone positions, and \(b=1:28\) boom positions). The acoustic contour is described in the text. 
Microphones A, B and C are used later and are situated at \(\rho\approx 4.2D_{p}\) and \(\theta\approx 19.7^{\circ}\), \(\theta=0^{\circ}\) and \(\theta\approx-19.7^{\circ}\), respectively.**

rotor's figure of merit (FM) was computed following the conventional definition, \[\text{FM}=\frac{C_{T}^{3/2}}{\sqrt{2}C_{\tau}}. \tag{2}\] Thrust and torque coefficients, as well as the propulsive efficiency, are compared to the literature in Fig. 4. Data from the literature (black symbols) considered the same rotor and facility, except for lower rotational frequencies. As such, our current measurements also considered a lower rotational frequency for a direct comparison (\(\omega\approx 66\,\text{Hz}\), with the open red markers) and these match well with the previous data. At the higher rotational speed considered in this paper (solid red markers), the rotor operates at a higher thrust coefficient as the consequence of a more turbulent (slightly less separated) flow; this comes at the expense of a larger torque coefficient although the propulsive efficiency remains equal. At hover, the thrust and torque coefficients, as well as the FM, can be compared to a parametric study on small-scale rotors by Tinney & Sirohi [7]. Even though the FM is equal, our thrust and torque coefficients are higher by about \(30\,\%\) and \(40\,\%\), respectively. This is presumably caused by the more aggressive rotor pitch (roughly \(9\,\text{inch}\) in our study, compared to \(4.5\,\text{inch}\) in the study by Tinney & Sirohi [7]). Deviations in thrust (\(3\,\%\)) and torque (\(33\,\%\)), from the manufacturer's performance data, are attributed to simplifications in the theoretical predictions used to generate those performance data. When considering non-zero advance ratios, the thrust coefficient is known to decrease with an increase in \(J\), due to the lower rotor disk loading. The torque coefficient stays roughly constant for \(J<0.4\) due to the separated flow close to the root of the blades, resulting in large drag values as reported by Grande _et al._[30]. The propulsive efficiency does increase with \(J\)[3, 30], since the blade section angle of attack reduces below the stall angle and the torque decreases sufficiently fast so that the rotor efficiency peaks at \(J\approx 0.6\) (these studies considered \(\omega\approx 67\,\)rev/s, but the performance trends are thus similar for our current rotational speed of \(\omega=131.0\,\)rev/s).

Figure 4: Rotor operating performance data. (a) Thrust coefficient, \(C_{T}\), (b) torque coefficient, \(C_{\tau}\), and (c) propulsive efficiency, \(\eta_{p}\), as a function of the advance ratio, \(J\). Current performance data are compared to studies of the same rotor and within the same facility, but at lower rotational frequencies (G22: Grande _et al._[30] and C22: Casalino _et al._[49]).

### Measurements of the acoustic field and rotor-induced flow field

Acoustic data were acquired using a linear microphone boom with 40 sensors, comprising an equidistant spacing of \(60\,\)mm. The vertical boom was mounted to a horizontal beam so that it could be traversed in \(r\). Free-field microphones were oriented such that their measuring diaphragms were co-planar with the measurement plane (this orientation avoids having to point the normal vector of the diaphragm to an aeroacoustic sound source location that can be ambiguous [50, 51]).
A free-field microphone correction was applied in all spectral analyses to account for the intrusive nature and form factor of the microphone (\(90^{\circ}\) grazing incidence waves), although it only marginally affects the intensity at \(f>10\,\)kHz. For each advance ratio \(J\), the acoustic field was mapped out by translating the microphone boom to 28 radial positions (traversing step of \(80\,\)mm), resulting in a total of \(28\times 40=1\,120\) positions for which pressure time series \(p(r,z;t)\) are available (see Fig. 3b). The sensors used were \(\nicefrac{{1}}{{4}}\) in. free-field microphones (GRAS 40PH), with a frequency response range of \(5\,\)Hz to \(20\,\)kHz with a \(\pm 2\,\)dB accuracy (and a \(\pm 1\,\)dB accuracy from \(50\,\)Hz up to \(5\,\)kHz) and with a dynamic range of \(32\,\)dBA to \(135\,\)dB, with a sensitivity of \(50\,\)mV/Pa. Microphones were calibrated in situ with a GRAS 42AA piston-phone. All 40 microphones were IEPE powered and simultaneously sampled with several NI PXIe-4499 sound and vibration modules (on-board filtering prior to digitization with a 24-bit accuracy). All signals were sampled at a rate of \(f_{s}=51.2\,\)kHz for a duration of \(T=40\,\)seconds (\(2T\omega\approx 10\,480\) blade passages); this was confirmed to be more than sufficient for converged bispectral statistics at the lowest frequency of interest (see [52] and Appendix A). For spectral-based analysis, the one-sided spectrum is taken as \(\phi_{pp}(r,z;f)=2\langle P(r,z;f)P^{*}(r,z;f)\rangle\), where \(P(r,z;f)=\mathcal{F}\left[p(r,z;t)\right]\) is the temporal FFT and \(\langle\cdot\rangle\) denotes ensemble-averaging. Sound pressure spectrum levels (SPSL) in dB/Hz follow SPSL\((r,z;f)=10\log_{10}(\phi_{pp}(r,z;f)/p_{\mathrm{ref}}^{2})\), with \(p_{\mathrm{ref}}=20\,\)\(\mathrm{\SIUnitSymbolMicro Pa}\). Ensemble-averaging was conducted using FFT partitions of \(N=16f_{s}/\omega\) samples, to ensure that the discrete frequencies align with the BPF and its harmonics; this reduces the leakage of tonal energies into neighbouring frequencies [7]. The value of \(N\) yields a spectral resolution of \(\mathrm{d}f=8.2\,\)Hz and \(653\) ensembles with \(50\,\)% overlap. \begin{table} \begin{tabular}{c|c c c c|c c c c c c} \(J\) & \(U_{\infty}\) (m/s) & \(f_{b}\) (Hz) & \(M_{\mathrm{tip}}\) & \(Re_{c75}\) & \(F_{z}\) (N) & \(\tau_{z}\) (Nm) & \(P\) (W) & \(C_{T}\cdot 10^{2}\) & \(C_{\tau}\cdot 10^{3}\) & \(\eta_{p}\) & FM \\ \hline 0 & 0 & 262.0 & 0.358 & \(1.36\cdot 10^{5}\) & 21.3 & 0.41 & 339 & 1.66 & 2.14 & 0 & 0.70 \\ 0.24 & 9.6 & 262.0 & 0.359 & \(1.36\cdot 10^{5}\) & 17.9 & 0.44 & 361 & 1.39 & 2.28 & 0.48 & – \\ 0.41 & 16.0 & 262.0 & 0.361 & \(1.38\cdot 10^{5}\) & 14.0 & 0.41 & 341 & 1.09 & 2.15 & 0.66 & – \\ 0.61 & 24.0 & 262.0 & 0.365 & \(1.40\cdot 10^{5}\) & 8.3 & 0.41 & 253 & 0.65 & 1.60 & 0.78 & – \\ \end{tabular} \end{table} Table 1: Operating conditions of the \(D_{p}=0.30\,\)m diameter rotor, for each of the four advance ratios, \(J\) (the rotor is spinning at the same nominal rate of \(7\,\)860 RPM, while the freestream velocity \(U_{\infty}\) is varied to change \(J\)). For all analyses, the raw pressure time series were subject to a band-pass filter, with a flat response between \(60\,\mathrm{Hz}\) and \(15\,\mathrm{kHz}\), suppressing the non-anechoic, low-frequency content and the energy beyond the highest frequency range of the microphone. 
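A minimal sketch of this spectral estimation, with a synthetic stand-in for the measured pressure time series so that the example is self-contained, is:

```python
# Sketch of the ensemble-averaged spectrum and SPSL, following the partitioning described above.
import numpy as np
from scipy import signal

fs = 51_200.0          # sampling rate [Hz]
omega = 131.0          # rotor rotational rate [rev/s]
p_ref = 20e-6          # reference pressure [Pa]

rng = np.random.default_rng(1)
p = rng.standard_normal(int(40 * fs))     # stand-in for a measured, band-pass filtered 40 s time series [Pa]

N = int(16 * fs / omega)                  # partition length, so that df = omega/16 ~ 8.2 Hz aligns with the BPF
f, phi_pp = signal.welch(p, fs=fs, window="hann", nperseg=N, noverlap=N // 2)   # one-sided PSD [Pa^2/Hz]
spsl = 10.0 * np.log10(phi_pp / p_ref**2)                                       # SPSL [dB/Hz]
```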
The SPSL was also corrected for atmospheric absorption (ANSI S1.26-1996) with an assumed propagation distance from the rotor hub (primarily affecting \(f>10\,\mathrm{kHz}\)). In addition, an A-weighting was applied (ANSI S1.6-1967) to account for the relative loudness perceived by the human ear [53]; this weighting attenuates energy at \(f\lesssim 3f_{b}\). Finally, it was confirmed that motor-only noise was far less intense than the rotor noise. As an example, the acoustic pressure spectra shown in Figs. 5(a,b) correspond to microphone B, for \(J=0\) and \(0.41\), respectively. Raw and corrected pressure spectra of the rotor noise are shown, as well as the corrected noise spectra of only the motor (spinning without rotor, but at the same RPM) and the noise floor of the facility (with only the wind tunnel running for the \(J\neq 0\) cases). The motor noise alone has the expected spectral peak at \(f=\omega=f_{b}/2\) and is also relatively dominant in the vicinity of \(f=14\omega=7f_{b}\), which is caused by the 14 magnetic poles of the motor (note that the motor noise is also known to arise from structural vibrations and harmonic interference [54, 55]). Nevertheless, the magnitude of the noise floor and motor-only noise are low compared to the rotor noise.

Figure 5: Acoustic spectra for (a) \(J=0\), and (b) \(J=0.41\), corresponding to microphone B (indicated in Fig. 3b). Plots include spectra of the raw acoustic time series (dashed), a spectrum after correction for atmospheric absorption (dash-dotted), and a spectrum with an additional A-weighting applied (solid). SPSL magnitudes of the BPF are indicated with horizontal bars and correspond to the amplitude of a pure tone at \(f=f_{b}\) (see text). Spectra corresponding to the motor-only scenario and the noise floor of the facility (and wind tunnel noise in the case of \(J\neq 0\)) are also shown.

Flow fields were captured using a stereoscopic PIV setup (Fig. 6) that is similar to the one reported by Grande _et al._[30, 46]. With a double cavity Quantel Evergreen EVG00200 Nd:YAG laser, a sheet was created to illuminate a relatively large region on one side of the rotor axis. Our current PIV field-of-view (FOV) aids in detailing the vortical flow structures in the near-vicinity of the rotor blade. Two Imager sCMOS cameras were used with \(2560\times 2160\,\mathrm{px}^{2}\) sensors. Two Nikon lenses were mounted with Scheimpflug adapters, each with a \(60\,\mathrm{mm}\) focal length and an \(f\)# of 11. In this work, we only consider 500 image-pairs that were acquired in a phase-locked sense using the 1P signal. A sample result of the PIV campaign is shown in Fig. 6(b) and is described in the caption.

### Sound pressure level statistics

Basic features of the acoustic data are documented here to generate an appreciation of the different noise characteristics of the rotor, with varying advance ratio \(J\). For brevity, the case of \(J=0\) is chosen to present a few intermediate results. First, recall that a spatial topography of the integrated SPSL over \(f>10f_{b}\) was shown in Fig. 3(b). This integrated, high-pass filtered noise content is referred to as \(\overline{p}_{10f_{b}}\) and its spatial contour clearly illustrates that the higher-frequency broadband noise exhibits the well-known quadrupole directivity pattern [26, 56]. Throughout the remainder of the paper the \(f>10f_{b}\) range is used in defining the high-frequency noise content. Changes in the \(10f_{b}\) threshold do not affect the conclusions made.
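Two of the operations referred to above can be sketched compactly: (i) the A-weighting, for which the standard analytic curve is used here (the atmospheric-absorption correction per ANSI S1.26 is not reproduced), and (ii) the band-integrated level over \(f>10f_{b}\) that defines \(\overline{p}_{10f_{b}}\). The trapezoidal integration is an implementation choice of this sketch.

```python
# Sketch of the A-weighting correction and the integrated level over f > 10*f_b.
import numpy as np

def a_weighting_db(f):
    """Standard analytic A-weighting in dB, added to an un-weighted level."""
    f2 = np.asarray(f, dtype=float) ** 2
    ra = (12194.0**2 * f2**2) / (
        (f2 + 20.6**2)
        * np.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194.0**2)
    )
    return 20.0 * np.log10(ra) + 2.0     # ~0 dB at 1 kHz

def band_level_dba(f, phi_pp, f_lo, p_ref=20e-6):
    """Integrated level [dBA] of the A-weighted spectrum above f_lo."""
    m = f > f_lo
    w = 10.0 ** (a_weighting_db(f[m]) / 10.0)   # A-weighting as a power ratio
    return 10.0 * np.log10(np.trapz(w * phi_pp[m], f[m]) / p_ref**2)

# Example (f_b = 262 Hz): p10fb_dBA = band_level_dba(f, phi_pp, 10 * 262.0)
```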
The acoustic data can be condensed to directivity arcs for ease of interpretation: SPSL statistics corresponding to all \(1\,120\) measurement points were projected to a rotor-hub-centered arc with a virtual radius of \(\rho=5D_{p}\). In this projection, a spherical spreading law was adopted, according to \(p\propto 1/\rho\). Thus, a data collapse implies that all measurement points were located in the acoustic far-field. Results for \(\overline{p}_{10f_{b}}\) are presented in Fig. 7(c), showing an excellent collapse of the data. This is further accentuated with the set of three lines (solid, dash-dotted, dashed), resulting from a fit to the projected data from the three most outward upper-vertical-lower perimeters of locations spanned by the microphone grid, respectively. Variations are less than \(\approx 1\,\mathrm{dBA}\) and these variations are more pronounced for large angles \(\theta\), for which the microphones on the vertical boom were relatively close to the walls of the anechoic chamber. Withal, collapse of \(\overline{p}_{10f_{b}}\) data is expected: the acoustic wavelength corresponding to \(10f_{b}\) is \(\lambda/D_{p}=a_{\infty}/(10f_{b})/D_{p}\approx 0.43\), meaning that all microphone locations were situated in the acoustic far-field, except for the closest measurement points at a region around \(\theta=0^{\circ}\) (this is the angle at which the collapse of the data markers in Fig. 7(c) is least good).

Figure 6: (a,c) Setup with the PIV field-of-view. (b) Contour of the phase-averaged vertical velocity \(U_{z}\) at a near-zero phase angle behind the blade, for \(J=0\); vectors show the in-plane velocity with a vector-skip of 15. Superimposed are two red iso-contours of in-plane vorticity at magnitudes of \(\omega_{\phi}=1\,500\,\mathrm{s}^{-1}\) and \(7\,500\,\mathrm{s}^{-1}\).

When performing the same data-projection procedure for the OASPL (integral of the spectra over the entire frequency range), we obtain the directivity shown in Fig. 7(a). The absence of collapse suggests that some locations, in terms of the OASPL, were not yet situated in the acoustic far-field. When considering the SPSLs of only the BPF content, and performing again a spherical projection to a virtual arc with a radius \(\rho=5D_{p}\), we obtain the SPSL pressure denoted as \(\overline{p}_{b}\) (Fig. 7b). Rotor-thickness noise, as well as the loading noise from thrust and torque, classifies primarily as an acoustic dipole source, and its radiated noise is strongly confined to a region around the rotor plane. For the thrust-producing rotor this periodic noise has a maximum intensity that is oriented slightly towards the downstream direction. Since the BPF has an acoustic wavelength of \(\lambda/D_{p}=a_{\infty}/f_{b}/D_{p}\approx 4.3\), only a subset of data points lie within the acoustic far-field. Data points closer to the rotor are subject to evanescent pressure waves from the source.

Figure 7: **Noise directivity patterns for advance ratio \(J=0\), obtained via spherically projecting all 1 120 data points to a rotor-hub-centered arc of radius \(\rho=5D_{p}\). Sub-figures correspond to the (a) OASPL, (b) SPSL of the BPF, and (c) integrated SPSL over \(f>10f_{b}\) (points are colored per the unprojected noise levels).**

Noise directivity patterns of \(\overline{p}_{b}\) and \(\overline{p}_{10f_{b}}\) are now generated for all four advance ratios \(J\), following the same procedure as described above for \(J=0\).
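A minimal sketch of this projection procedure follows, assuming the measurement grid is expressed in hub-centered coordinates \((r,z)\) and adopting the spherical spreading law \(p\propto 1/\rho\); the function and variable names are illustrative and the sign convention for \(\theta\) (positive above the rotor plane) is an assumption of this sketch.

```python
# Sketch of projecting measured levels onto a virtual arc of radius 5*D_p.
import numpy as np

def project_to_arc(r, z, level_db, D_p=0.30, rho_arc_over_Dp=5.0):
    """Map levels at (r, z) to a hub-centered arc assuming p ~ 1/rho."""
    rho = np.sqrt(r**2 + z**2)                  # distance from the rotor hub
    theta = np.degrees(np.arctan2(z, r))        # polar angle w.r.t. rotor plane
    # p scales with 1/rho, so the level shifts by 20*log10(rho / rho_arc)
    level_arc = level_db + 20.0 * np.log10(rho / (rho_arc_over_Dp * D_p))
    return theta, level_arc
```

If all points are truly in the acoustic far-field, the projected levels collapse onto a single curve in \(\theta\), which is exactly the behaviour discussed above for \(\overline{p}_{10f_{b}}\).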
In order to visualize the result, only the curves found through a fitting procedure to the projected data from the most far-field locations are considered. Again, these fits are done to the three most outward upper-vertical-lower perimeters of locations spanned by the microphone grid. Results are shown in Figs. 8(a,b) for the BPF content and high-frequency noise, respectively. Two primary trends are observed. Firstly, the BPF tone reduces in amplitude due to the lower intensity of the thickness/loading noise (an increase in advance ratio causes a direct decrease in disk loading). Secondly, the high-frequency noise reduces in overall amplitude too, but does not exhibit a monotonic decrease with an increase in \(J\). That is, the high-frequency noise content decreases for \(J=0\to 0.24\to 0.41\), but then increases again for the highest advance ratio of \(J=0.61\). This complex behaviour is related to noise sources associated with the change in separated-flow features over the blade [30, 49]. With increasing \(J\), the separation goes from a fully laminar separation (\(J=0\)), to one that re-attaches and forms a laminar separation bubble (\(J=0.24\) and \(0.41\)), to one that fully separates in a turbulent state (\(J=0.61\)). For the latter case, the (trailing-edge) noise of the shedding is more intense than for the laminar separation, resulting in an increase in noise intensity compared to the two intermediate advance ratios. The reason why the high-frequency noise is so dominant for the \(J=0\) case (recall the magnitude of the directivity pattern shown in Fig. 7c) is the turbulence-ingestion noise through the onset of a weak blade-vortex interaction (in which the blades encounter an imprint of the tip-vortex deployed by the consecutive blade). Finally, this section merely illustrates the quality of the data and the fact that acoustic data were taken in both the acoustic near- and far-field. Note however that the BPFM analysis of § III is unaffected by the pressure obeying (or not obeying) a far-field spreading trend and its amplitude decay rate. Specifically, the modulation metrics are correlation-based and thus energy-normalized.

### Flow field in the blade's near-vicinity

Results of the PIV are briefly described here to provide an overview of the primary flow features induced by (and encountered by) the rotor blade. Phase-locked fields of various flow quantities at a near-zero phase angle behind the blade are shown in Fig. 9, for all four advance ratios \(J\). Fields of the in-plane vorticity \(\omega_{\phi}\) are shown in Figs. 9(a-d). Locations of the spiralling tip vortices are well-identified, as well as the vorticity in the wake sheet of the blade. For the three non-zero advance ratios, this blade wake connects the tip vortex at a wake age of \(180^{\circ}\) to the rotor hub location where a relatively weak root vortex 'folds' around the nacelle. For the \(J=0\) case, the tip vortices and wake sheets have a small axial spacing, causing an interaction of the wake sheets associated with the \(180^{\circ}\) and \(360^{\circ}\) wake ages (as also described in detail in the work by Thurman _et al._[57]). As expected for lower blade loading, the vorticity magnitude decreases with an increasing value of \(J\). Figs. 9(e-h) and Figs. 9(i-l) show filled contours of the horizontal velocity (positive outboard) and vertical velocity \(U_{z}\) (negative downward).
Superimposed on these two sets of plots are two iso-contours of the in-plane vorticity, \(\omega_{\phi}\), to highlight the location of the tip vortices. It becomes apparent that the vortical flow field leading to the onset of the tip-vortex causes an inboard motion of the flow on the blade's suction side and outboard on the blade's pressure side. For \(J=0\), the inboard horizontal velocity reaches more than \(10\,\%\) of the rotor-tip velocity. This high velocity magnitude is the consequence of the contracting slipstream and an inboard flow induced by the preceding blade's tip vortex situated with its core less than \(0.2R\) below the rotor plane. For this advance ratio, turbulence-ingestion noise will be dominant. Even for the \(J=0.24\) case the tip vortex with a \(180^{\circ}\) wake age trails relatively close to the pressure side of the blade and an imprint of this vortical flow still appears clearly at the bottom edge of the blade near \(r/R\approx 0.6\). Only for \(J\geq 0.41\) does the influence of the vortical flow from the preceding blade no longer affect the flow around the successive blade, in terms of a horizontal velocity disturbance in the phase-locked mean-sense. Finally, see Grande _et al._[30] for detailed PIV-based velocity fields in the cross-section of the blades at \(r/R=0.6\).

Figure 8: **Noise directivity patterns, as per Figs. 7(b,c), but now for all four advance ratios \(J\). Each set of three lines (solid, dash-dotted, dashed) resembles a fit to the projected data from the three most outward upper-vertical-lower perimeters of the microphone grid. For sub-figure (b) with the integrated SPSL over \(f>10f_{b}\) the directivity pattern of \(J=0\) is omitted to keep the radial scale condensed (the \(J=0\) case was already shown in Fig. 7c).**

Fig. 9: Filled contours of the (a-d) in-plane vorticity \(\omega_{\phi}\), (e-h) horizontal velocity \(U_{r}\), and (i-l) vertical velocity \(U_{z}\), for all four advance ratios \(J\). Superimposed on all plots are two red iso-contours of in-plane vorticity, \(\omega_{\phi}\), corresponding to normalized vorticity magnitudes of \(\omega_{\phi}D_{p}/U_{\rm tip}=3.5\) and \(7.0\).

## III. Blade passing frequency modulation

### A. Phase-averaged acoustic signature

Here we make a first attempt to visualize BPFM using phase-averaged pressure data, denoted as \(\widetilde{p}(r,z;\tau)\) with \(\tau\) being the time coordinate within one full rotor-revolution. Signatures of the phase-averaged pressure are shown in Fig. 10. First we show for each advance ratio \(J\) the phase-averaged pressure time series for (1) the BPF tone \(\widetilde{p}_{b}\), and (2) the phase-averaged pressure in the absence of the BPF (thus \(\widetilde{p}-\widetilde{p}_{b}\), with the red thick line). Since microphones A, B and C are located at the same radial distance \(\rho\) (Fig. 3b), the time series affiliated with the BPF are nearly identical at all three locations. Signature \(\widetilde{p}-\widetilde{p}_{b}\) is the resultant of the \(2^{\rm nd}\) and higher-order harmonics being phase-consistent with the BPF, thus surviving the phase-average. The finer undulations within the signal (for instance clear in the mic. B signal in Fig. 10b) correspond to roughly the 7\({}^{\text{th}}\) harmonic (postulated to be motor noise as discussed earlier). By construction, the signature does not contain any (phase-inconsistent) broadband noise.
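A sketch of the phase-averaging used to construct \(\widetilde{p}(r,z;\tau)\) is given below, assuming the once-per-revolution (1P) trigger times are available; the resampling onto a fixed number of phase bins is an implementation choice of this sketch, not a detail taken from the paper.

```python
# Sketch: cut the pressure record into single-revolution ensembles at the 1P
# trigger times, resample onto a common phase axis, and average.
import numpy as np

def phase_average(p, t, t_1p, n_bins=2048):
    """Return the phase axis (fraction of one revolution) and p_tilde(tau)."""
    grid = np.linspace(0.0, 1.0, n_bins, endpoint=False)
    cycles = []
    for t0, t1 in zip(t_1p[:-1], t_1p[1:]):
        m = (t >= t0) & (t < t1)
        tau = (t[m] - t0) / (t1 - t0)          # normalized phase in [0, 1)
        cycles.append(np.interp(grid, tau, p[m]))
    cycles = np.asarray(cycles)                # all rotation-ensembles
    return grid, cycles.mean(axis=0)           # phase axis and p_tilde
```

Phase-inconsistent (broadband) content averages out over the ensembles, which is why only the BPF-coherent harmonics survive in \(\widetilde{p}\).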
Nevertheless, BPFM can still be visualized in an easy manner: in the background of the red curves are 15 ensembles of the raw acoustic pressure, high-pass filtered at \(f>10f_{b}\). In addition, upper and lower envelopes to the intensity of these raw time series are shown, which were created using a Hilbert transform and by considering all 5 240 rotation-ensembles. A strong link between the variation in the high-frequency noise intensity and the BPF signal is clearly noticeable. For the mic. C signal (particularly for \(J=0.24\), 0.41 and 0.61), the intensity variation is well-correlated to the BPF signal, while this correlation is less strong for mic. A. To visualize BPFM throughout the spatial extent captured by the acoustic data, similar analyses as the ones performed to construct Fig. 10 can be performed for each microphone. The phase-averaged pressure data allows for visualizing a spatial contour of \(\widetilde{p}(r,z;\tau)\) for one specific value of \(\tau\) (note that movies with temporal evolution, \(\tau\in[0,2/f_{b})\), are available as supplementary material). Fig. 11(a) shows this contour for the case of \(J=0.41\). Evidently, the BPF is dominant with a spatial wavelength of around \(4.3D_{p}\); finer undulations superimposed on the BPF are also visible, as well as the decaying nature of the sound wave amplitude. Alongside, in Fig. 11(b), a contour is shown of the envelope-amplitude to the intensity of the high-pass filtered signal at \(f>10f_{b}\) (thus equal to the zero-mean envelopes shown in Fig. 10). Certainly below the rotor disk plane, a strong link exists between the BPF signal and the variation in the high-frequency noise. Trends follow Fig. 10(c) in that the link appears strongest along a radial path going through microphone C, while being weakest for a path going through microphone A. Moreover, the phase shift between the BPF and the envelope changes drastically when moving from locations above the rotor disk plane to ones below it.

Figure 10: **Phase-averaged acoustic pressure signatures. For all four advance ratios \(J\), from top-to-bottom: BPF tone \(\widetilde{p}_{b}\); phase-averaged pressure signal in the absence of the BPF tone, \(\widetilde{p}-\widetilde{p}_{b}\), in red; phase-averaged envelope of the high-pass filtered signal \(p_{10f_{b}}\) (dark grey) and 15 ensembles superimposed (light grey). All graphs are for one full revolution of the two-bladed rotor.**

Next, BPFM will be quantified with correlation-based scalar metrics.

### B. Scalar metrics and directivity patterns of modulation

Baars _et al._[36] described methods for quantifying BPFM. The various scalar metrics involved in this process are briefly summarized below, and it must be emphasized that these are computed from a single acoustic pressure time series. Then, the metrics are computed for every acoustic pressure time series of the grid measurements to inspect the trends of the modulation in the acoustic near- and far-field regions.

1. Through a bispectral analysis, the dominant quadratic inter-frequency coupling can be found out of all possible frequency combinations present within a signal.
This analysis effectively correlates two frequency components, \(f_{1}\) and \(f_{2}\), to their sum (\(f_{3}=f_{1}+f_{2}\)) or difference and can be expressed as an auto-bicoherence, \(\gamma_{ppp}^{2}(f_{1},f_{2})\), according to: \[\gamma_{ppp}^{2}=\frac{\left|\phi_{ppp}\left(f_{1},f_{2}\right)\right|^{2}}{\phi_{pp}\left(f_{1}\right)\phi_{pp}\left(f_{2}\right)\phi_{pp}\left(f_{1}+f_{2}\right)}\in[0,1]. \tag{3}\] Here, the numerator is the cross-bispectrum, taken as \(\phi_{ppp}\left(f_{1},f_{2}\right)=2\langle P\left(f_{1}+f_{2}\right)P^{*}\left(f_{1}\right)P^{*}\left(f_{2}\right)\rangle\). Coordinates \(r\) and \(z\) are omitted for ease of notation. Note that \(\gamma_{ppp}^{2}(f_{1},f_{2})\) indicates the degree of normalized correlation between the energy at \(f_{1}\) and \(f_{2}\), and the energy at \(f_{1}+f_{2}\) (here we only consider sum-interactions, and not the difference-interactions per \(f_{3}=f_{1}-f_{2}\), as we are interested in how the low-frequency BPF modulates higher-frequency noise). A sample auto-bicoherence spectrum is shown in Appendix A, corresponding to the time series of microphone C at conditions of \(J=0\) (Fig. 17) and \(J=0.41\) (Fig. 18). Typically, a ridge of relatively strong correlation appears along \(f_{2}=f_{b}\), meaning that the BPF is phase-coupled to a broad range of frequencies within the same signal, \(f_{1}>f_{b}\). Said quadratic coupling is suppressed in phase-averaging (§ III.A) because the _phase_ in the cross-bispectrum can still vary per triad. A single metric \(\Gamma_{m}^{2}\) is constructed by averaging \(\gamma_{ppp}^{2}(f_{1},f_{2})\) for the primary frequency \(f_{2}=f_{b}\) and all possible quadratic frequency doublets residing at \(f_{1}>10f_{b}\) (see Appendix A for the full details).

Figure 11: For the case of \(J=0.41\), spatial topography of: (a) the phase-averaged pressure signal \(\widetilde{p}(r,z;\tau)\) for one specific value of \(\tau\), and (b) the phase-averaged zero-mean envelope of the high-pass filtered signal \(p_{10f_{b}}\). Temporal evolution of these spatial iso-contours of acoustic pressure for one full rotor revolution, \(\tau\in[0,2/f_{b})\), are available as movies in the supplementary material.

2. The concept of _modulating_- and _carrier_-signals allows for the application of standard linear correlation methods. The modulating signal \(p_{b}(t)\) is taken as the BPF-associated time series, while a carrier signal \(p_{h}(t)\) is taken as the time series resulting from high-pass filtering at \(f>10f_{b}\). An envelope capturing the time-varying intensity of the latter signal can be generated through a Hilbert transform \(\widehat{p}_{h}(t)=|H[p_{h}(t)]|\). By correlating the modulating signal \(p_{b}(t)\) with the carrier envelope \(\widehat{p}_{h}(t)\), we obtain the temporal cross-correlation \(R_{a}(\tau_{c})=\langle p_{b}(t)\widehat{p}_{h}(t-\tau_{c})\rangle\); when normalised with the standard deviations we obtain the normalised correlation coefficient, \(\rho_{a}(\tau_{c})\in[0,1]\). Finally, two modulation metrics are defined: the correlation strength, \(\rho_{a}=\max[\rho_{a}(\tau_{c})]\), and the phase \(\phi_{a}=\tau_{cm}f_{b}(2\pi)\), where \(\tau_{cm}\) is the temporal shift for which the maximum correlation value occurs.
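A compact sketch of metric (2) for a single pressure time series is given below. The Butterworth filter orders and the band used to isolate the modulating (BPF) component are assumptions of this sketch, not specifications from the paper.

```python
# Sketch of metric (2): modulating signal = BPF component, carrier = f > 10*f_b
# content of the same series; the Hilbert envelope of the carrier is correlated
# with the modulating signal to obtain rho_a and phi_a.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def _corr_at_lag(x, y, k):
    """Normalized correlation of x(t) with y(t - k/fs) at integer sample lag k."""
    if k >= 0:
        xs, ys = x[k:], y[:len(y) - k]
    else:
        xs, ys = x[:len(x) + k], y[-k:]
    return np.corrcoef(xs, ys)[0, 1]

def bpfm_metrics(p, fs, f_b):
    nyq = 0.5 * fs
    b1, a1 = butter(2, [0.8 * f_b / nyq, 1.2 * f_b / nyq], btype="band")
    p_b = filtfilt(b1, a1, p)                     # modulating signal p_b(t)
    b2, a2 = butter(4, 10.0 * f_b / nyq, btype="high")
    env = np.abs(hilbert(filtfilt(b2, a2, p)))    # carrier envelope |H[p_h]|
    env = env - env.mean()
    lags = np.arange(-int(fs / f_b), int(fs / f_b) + 1)   # up to +/- one BPF period
    rho = np.array([_corr_at_lag(p_b, env, k) for k in lags])
    k_m = lags[int(np.argmax(rho))]
    rho_a = float(np.max(rho))                    # modulation strength
    phi_a = (k_m / fs) * f_b * 2.0 * np.pi        # relative phase [rad]
    return rho_a, phi_a
```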
Note that all modulation metrics considered in the current work are correlation-based and energy-normalized (\(\Gamma_{m}^{2}\), \(\rho_{a}\) and \(\phi_{a}\)). Hence the acoustic pressure-decay is irrelevant, and the analysis is applicable to a single acoustic pressure time series anywhere in the acoustic near- or far-field regions. All three metrics are computed for each of the \(1\,120\) acoustic signals and the results for two advance ratios, \(J=0\) and \(0.41\), are shown in Figs. 12 and 13, respectively. It is apparent that the BPFM strength, as captured by the metrics \(\Gamma_{m}^{2}\) and \(\rho_{a}\), shows a similar pattern for \(J=0.41\), thus highlighting the robustness of the two metrics. Moreover, the strength is considerably weaker for \(J=0\). Later on, in this section, we detail the difference in BPFM strength for changes in the advance ratio. When we here focus on the \(J=0.41\) case with its pronounced BPFM strength, it is seen that the metrics are maximum for \(\theta\approx-14^{\circ}\) (grey dashed line in Figs. 13a,b). This will result in a distinguishable _"wop-wop"_ or _"buzzing"_ character of the noise at that polar angle. The strength of BPFM remains constant with outward distance. Even though this is expected for a pure convective acoustic wave field, it was never shown explicitly. It furthermore validates the quality of the acoustic data in the free-field simulated environment: _e.g._, the modulation is _not_ related to nodes or anti-nodes caused by reflections and acoustic waves interfering in an in- and/or out-of-phase manner. The phase relationship between modulating and carrier signals is shown in Fig. 13(c) and is important for auralization methodologies [20]. An out-of-phase behaviour corresponds to a phase of \(\phi_{a}=0.5(2\pi)\), which would mean that the BPF signal leads the occurrence of highest intensity in \(p_{10f_{b}}\) by half a period of the BPF. For the sector where the BPFM strength is large (say \(-45^{\circ}\lesssim\theta\lesssim 0^{\circ}\)), the phase in Fig. 13(c) equals roughly \(\phi_{a}=0.25(2\pi)\): the BPF signal thus leads the associated, periodic intensity variation in the high-frequency noise by a quarter period of the BPF (this is also apparent from the mic. C signal in Fig. 10c). It is important to realize that \(\Gamma_{m}^{2}\) is derived from the magnitude of the auto-bicoherence (and here does not include phase information of the underlying auto-bispectrum). As such, the strength of BPFM in the classical sense (a buzzing/breathing of higher-frequency noise at a rate of the BPF) is quantified by \(\rho_{a}\) and \(\phi_{a}\) in the remainder of this work. That is, the phase of the frequency content in the 'total' carrier signal, relative to the modulating signal, was preserved in the two-point correlation analysis through the identification of \(\phi_{a}\), while \(\Gamma_{m}^{2}\) is a measure of the phase-coupling on a per-frequency basis.

Figure 12: For the case of \(J=0\), spatial topographies of: (a) the auto-bicoherence-based metric \(\Gamma_{m}^{2}\), (b) the modulation strength \(\rho_{a}\), and (c) the relative phase \(\phi_{a}\) between modulating & carrier signals. A reference (dashed) line corresponding to \(\theta=-14^{\circ}\) is indicated in sub-figures (a) and (b).

Figure 13: Similar to Fig. 12, but now for \(J=0.41\).

In order to infer the influence of the advance ratio \(J\) on BPFM, similar results were constructed as the ones shown in Figs. 12 and 13, but now for all four advance ratios. Since it was furthermore confirmed that the metrics were invariant with outward radial distance \(\rho\), the directivity patterns of the metrics are presented with polar plots.
Figs. 14(a) and 14(b) show the directivity patterns of \(\rho_{a}\) and \(\phi_{a}\), respectively. Individual data points are shown with small markers, as well as a fit line through these data for each \(J\). Observations on the BPFM-directivity patterns can be summarized as follows, and will be described in terms of the underlying source mechanisms in § IV.

1. Current directivity patterns confirm earlier observations of BPFM for a hovering rotor (\(J=0\)): modulation is predominantly present within the downstream sector, with a maximum strength towards \(\theta\approx-20^{\circ}\). Still, a small degree of BPFM is present above the rotor disk plane.
2. For the non-zero advance ratios the BPFM remains weak above the rotor. The phase reaches a near-constant value of \(\phi_{a}=0.8(2\pi)\) and is indicative of a slight temporal 'lead' of the high-frequency noise intensity, relative to the BPF.
3. Below the rotor plane, the BPFM is significantly stronger for the non-zero advance ratios. In the sector of \(-45^{\circ}\lesssim\theta\lesssim-5^{\circ}\) the BPFM strength slightly increases from \(J=0.24\) to the highest advance ratio tested, \(J=0.61\), although the overall directivity pattern is very similar. The maximum strength resides around \(\theta=-14^{\circ}\) for all non-zero \(J\), and the associated phase remains roughly constant at a temporal lag of the high-frequency intensity, of \(\phi_{a}\approx 0.25(2\pi)\).

Fig. 14: **Directivity patterns of BPFM, for all four advance ratios \(J\), with in (a) the modulation strength \(\rho_{a}\), and (b) the relative phase \(\phi_{a}\) between modulating & carrier signals.**

## IV. Source mechanisms of BPF modulation

One driving factor involved in the BPFM can be thought of as the periodic variation in the source-receiver distance, due to the advance and retreat of the rotor blades. However, this effect alone would be more effectively felt at sideline angles than in the upstream and downstream regions. Given that the BPFM strength was identified to be maximum in the downstream region, the variation in source-receiver distance cannot be the root cause of BPFM. In an attempt to unravel the mechanisms at play in generating the trends in BPFM, we first focus on the high-frequency noise content of rotor noise signals. High-frequency noise can come from two source mechanisms: (1) noise associated with turbulence-ingestion, primarily affecting the leading-edge noise source mechanisms, and (2) noise associated with trailing-edge mechanisms and the shedding of (coherent) flow features from the separated region over the blade's suction side. Here the leading-edge noise is considered 'high' in frequency since it is in relation to the much lower BPF and the blade relative velocity. Fig. 15 presents a conceptual schematic of the noise directivity patterns associated with these sources, and in the reference frame that is fixed to the rotor blades. In a side view (Fig. 15a), the trailing edge noise signature has a noise directivity that is tilted forward, while leading-edge noise is more omnidirectional. A top view of the noise directivity patterns is drawn in Fig. 15(b) and accentuates the following characteristics of the noise. First, the source mechanism of trailing-edge noise is considered. Its noise generating mechanism (related to flow features shedding past the trailing-edge) is relatively coherent along the span of the blade. Directivity-wise this results (per blade) in a dominant lobe forward (although still noticeable behind the blade).
As a consequence of the rotor blade spinning, the far-field observer experiences a sweep through the directivity pattern and given the directive nature of this noise source, the BPFM of this noise source alone is strong. Secondly, when concentrating on the source mechanism of leading-edge turbulence ingestion, and in particular the outboard part of the blade encountering imprints of the tip vortex from the preceding blade, the noise directivity originates from the blade tip and is relatively more omni-directional due to the relatively small nature of this noise source (not present along the entire blade span). When a far-field observer experiences a sweep through the directivity pattern due to the rotating blades, the relative strength is less than for the trailing edge-noise. When we now first focus on the BPFM trends for non-zero advance ratios, the leading-edge noise source mechanism related to the onset of BVI is absent (particularly for the two highest advance ratios, recall the discussion of Fig. 9). As such, the modulation is strong since the high-frequency noise content is dominated by the very directive trailing-edge noise-generation mechanisms. These mechanisms include the trailing edge diffraction of the shedding of laminar separation bubbles on the blade, and/or of the flow structures in the separated region from a fully turbulent state. Specifically, the bubbles were found for the cases of \(J=0\), \(0.24\) and \(0.41\), and did increase in size when moving towards the trailing edge when the angle of attack was decreasing [30] (when the advance ratio \(J\) was increasing). So for the largest separation bubbles extending all the way up to the tips of the rotor blades, the wake vortex shedding and its associated noise are dominant. For \(J=0.61\) the wake vortex shedding becomes even more dominant when there is no reattachment of the separated flow region. When inspecting the acoustic spectra this can also be observed. Fig. 16 presents acoustic spectra in the upstream (Fig. 16a), sideline (Fig. 16b), and downstream regions (Fig. 16c). When focusing on the latter, the spectrum associated with \(J=0.61\) clearly has a larger amplitude than the \(J=0.24\) and \(0.41\) cases. When concentrating on the hover case (\(J=0\)), the directive trailing-edge noise source mechanisms must still be present. However, the noise-content at the high-frequency portion of the spectrum is dominated by the turbulence-ingestion mechanism. This can be observed from the spectra at upstream, sideline, and downstream angles. In all cases, the spectrum is dominant when considering the range \(f>10f_{b}\), and a multitude of harmonics (spectral-peaks) are still present. Because the high-frequency noise is dominated by this component, the perception of modulation is less due to the omni-directional noise directivity of this leading-edge mechanism, dominated by the rotor-tip blade region.

## V. Concluding remarks

A comprehensive study was presented on how the advance ratio (and the associated change in noise source mechanisms) influences the so-called blade passage frequency modulation (BPFM). Correlation-based metrics were successfully applied to high-fidelity acoustic datasets, spanning both the near- and far-field regions of an isolated rotor. It was hypothesized that the appearance of a strong BPFM is related to an acoustic observer--at a fixed position relative to the rotor--experiencing sweeps through the directivity pattern of trailing-edge/shedding noise (fixed to the spinning rotor blade).
This type of directive noise is dominant at high-frequencies for non-zero advance ratios, and thus results in the largest degree of modulation. For the hover scenario, the high-frequency noise content is dominated by turbulence-ingestion noise, a source mechanism affiliated with the leading-edge of the rotor blade and the tip vortex from the preceding blade residing in close proximity to the successive blade. It was conjectured that the omni-directional noise directivity of this leading-edge mechanism, dominated by the rotor-tip blade region, results in a relatively weaker modulation.

Figure 15: Schematic illustration of directivity patterns in the context of high-frequency noise. Both trailing edge (TE) and flow shedding-type, as well as turbulence-ingestion (TI) noise due to the onset of blade-vortex interaction (BVI) noise source mechanisms are considered. A far-field observer is subject to a higher (relative) degree of high-frequency modulation when the TI source is absent.

Figure 16: Acoustic spectra in the (a) upstream, (b) sideline, and (c) downstream regions of the acoustic field, for all four advance ratios \(J\). All spectra are corrected for atmospheric-absorption and are A-weighted. SPSL magnitudes of the BPF are indicated with horizontal bars and correspond to the amplitude of a pure tone at \(f=f_{b}\) (see text). Spectra corresponding to the noise-floor of the facility (and wind tunnel noise in the case of \(J\neq 0\)) are also shown.

Future work should strengthen our hypotheses, by way of incorporating reduced-order models of the broadband noise sources in predictive tools for the tonal noise content (_e.g._, through implementation of a compact dipole/monopole Ffowcs Williams and Hawkings acoustic analogy [58], and (empirical) directivity patterns of the leading- and trailing-edge noise sources). When concerning human perception and annoyance, future work should explore the connection between the engineering BPFM metrics and psycho-acoustic metrics. Once this connection has been established, BPFM metrics can facilitate the assessment of the noise impact of AAM vehicles (by for instance applying it to data of high-fidelity numerical computations of rotor noise [59, 60, 61] or other noise prediction frameworks [62, 40, 21, 41, 42]).

## Acknowledgements

The authors wish to gratefully acknowledge Mr. Edoardo Grande for assisting in the experiments, and for stimulating discussions about the content of this manuscript. We would also like to give special thanks to Dr. ir. Tomas Sinnige for setting up the power supply and the RPM-control capability of the rotor.

## Appendix A: Bispectral analysis for computing a modulation metric

Through a bispectral analysis, the dominant quadratic inter-frequency coupling can be found out of all possible frequency combinations present within a signal (here taken as \(p\) and its Fourier transform, \(P(f)=\mathcal{F}[p(t)]\)). This analysis effectively correlates two frequency components, \(f_{1}\) and \(f_{2}\), to their sum (\(f_{3}=f_{1}+f_{2}\)) or difference and can be expressed as an auto-bicoherence, \(\gamma_{ppp}^{2}(f_{1},f_{2})\), according to: \[\gamma_{ppp}^{2}=\frac{\left|\phi_{ppp}\left(f_{1},f_{2}\right)\right|^{2}}{\phi_{pp}\left(f_{1}\right)\phi_{pp}\left(f_{2}\right)\phi_{pp}\left(f_{1}+f_{2}\right)}\in[0,1]. \tag{4}\] Here, the numerator is the cross-bispectrum, taken as \(\phi_{ppp}\left(f_{1},f_{2}\right)=2\langle P\left(f_{1}+f_{2}\right)P^{*}\left(f_{1}\right)P^{*}\left(f_{2}\right)\rangle\).
Note that \(\gamma_{ppp}^{2}(f_{1},f_{2})\) indicates the degree of normalized correlation between the energy at \(f_{1}\) and \(f_{2}\), and the energy at \(f_{1}+f_{2}\) (here we only consider sum-interactions). A sample auto-bicoherence spectrum is shown in Fig. 17(a) for microphone C and for \(J=0\). Typically, a ridge of relatively strong bicoherence appears along \(f_{2}=f_{b}\). Since this means that the BPF is phase-coupled to a broad range of frequencies within the same signal (thus to frequencies \(f_{1}>f_{b}\)), this ridge is representative of the degree of BPFM. To infer what content in a time series, or auto-spectrum (plotted in Fig. 17b for the corresponding auto-bicoherence in Fig. 17a), is _involved_ in quadratic sum-interactions, the 2D auto-bicoherence \(\gamma_{ppp}^{2}(f_{1},f_{2})\) can be condensed to a one-dimensional summed bicoherence [63, 64, 65]. This is done by way of averaging along lines of constant \(f=f_{1}+f_{2}\): \[\Gamma_{ppp}^{2}\left(f\right)=\frac{1}{N_{q}(f)}\sum_{f=f_{1}+f_{2}}\gamma_{ppp}^{2}\left(f_{1},f_{2}\right). \tag{5}\] Here \(N_{q}(f)\) is the number of frequency doublets \(f_{1}\), \(f_{2}\). This summed bicoherence spectrum is shown in Fig. 17(c) and its frequency axis is aligned with the auto spectrum in Fig. 17(b). It is evident that \(\Gamma_{ppp}^{2}\) shows the degree of nonlinear interactions that are buried in certain frequency components (but for their \(f_{1}\) and \(f_{2}\) origin it is necessary to refer back to Fig. 17a). On a final note, the summed bicoherence spectrum is also plotted for a generated reference signal comprising random noise (uncorrelated in a linear and nonlinear way), highlighting that the bicoherence in the rotor noise signal is significant; it was furthermore ensured that all results of our bispectral analysis are converged [52]. For further inspection, and in the context that BPFM appears stronger for \(J=0.41\) in comparison to \(J=0\), plots similar to the ones presented in Fig. 17 are shown in Fig. 18, but now for \(J=0.41\). It is apparent that the ridge of bicoherence along \(f_{2}=f_{b}\) is higher in magnitude, particularly for the higher frequencies, \(f_{1}>10f_{b}\). Given that the auto-bicoherence helps in forming a holistic view on the degree of phase coupling, we can define a single metric when considering the BPF tone as one of the primary frequencies forming all possible quadratic frequency doublets. For this we take the mean value of the auto-bicoherence along \(f_{2}=f_{b}\), according to \[\Gamma_{m}^{2}=\frac{1}{N_{a}}\sum_{f_{1}}\gamma_{ppp}^{2}\ (f_{1},f_{2}=f_{b}). \tag{6}\] \(\Gamma_{m}^{2}\) is a measure for the degree of phase coupling between the noise at \(f>f_{b}\) and the BPF tone at \(f=f_{b}\) (\(N_{a}\) is the number of discrete points over which the auto-bicoherence is summed). From a preliminary assessment of the auto-bicoherence and parameter \(\Gamma_{m}^{2}\) at positions A, B and C for \(J=0.41\) (Fig. 19), it is evident that \(\Gamma_{m}^{2}\) is varying in the field (_e.g._, \(\Gamma_{m}^{2}\) is minimum at position A and maximum at position C).

Figure 17: **(a) Contour of the auto-bicoherence \(\log_{10}\left[\gamma_{ppp}^{2}\right]\) generated from the time series of microphone C, for \(J=0\). (b) Corresponding acoustic spectra at microphone position C, with alongside in (c) the summed auto-bicoherence \(\Gamma_{ppp}^{2}(f=f_{1}+f_{2})\) for the rotor noise, and for a generated reference signal comprising random noise.**
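A sketch of how Eqs. (4) and (6) can be estimated from a single time series follows, using the same BPF-aligned FFT partitions as the spectra in § II; the Hann window and the restriction to \(f_{1}>10f_{b}\) are the assumptions stated earlier in the text and in the sketch of the spectral processing.

```python
# Sketch: auto-bicoherence along the ridge f2 = f_b and the scalar metric
# Gamma_m^2 as its mean over f1 > 10*f_b (direct transcription of Eqs. 4 and 6).
import numpy as np

def bicoherence_metric(p, fs, f_b, omega, f1_min_harm=10):
    N = int(round(16 * fs / omega))            # BPF-aligned partition length
    step = N // 2                              # 50 % overlap
    win = np.hanning(N)
    segs = np.asarray([p[i:i + N] * win
                       for i in range(0, len(p) - N + 1, step)])
    P = np.fft.rfft(segs, axis=1)              # (n_ensembles, n_freq)
    f = np.fft.rfftfreq(N, 1.0 / fs)
    phi_pp = 2.0 * np.mean(np.abs(P)**2, axis=0)
    i_b = int(np.argmin(np.abs(f - f_b)))      # index of the BPF
    i1 = np.arange(P.shape[1] - i_b)           # f1 indices with f1 + f_b on grid
    phi_ppp = 2.0 * np.mean(P[:, i1 + i_b] * np.conj(P[:, i1])
                            * np.conj(P[:, i_b:i_b + 1]), axis=0)
    gamma2 = np.abs(phi_ppp)**2 / (phi_pp[i1] * phi_pp[i_b] * phi_pp[i1 + i_b])
    keep = f[i1] > f1_min_harm * f_b           # restrict to f1 > 10*f_b
    return float(np.mean(gamma2[keep]))        # Gamma_m^2
```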
2307.10594
Exploiting Structure for Optimal Multi-Agent Bayesian Decentralized Estimation
A key challenge in Bayesian decentralized data fusion is the `rumor propagation' or `double counting' phenomenon, where previously sent data circulates back to its sender. It is often addressed by approximate methods like covariance intersection (CI) which takes a weighted average of the estimates to compute the bound. The problem is that this bound is not tight, i.e. the estimate is often over-conservative. In this paper, we show that by exploiting the probabilistic independence structure in multi-agent decentralized fusion problems a tighter bound can be found using (i) an expansion to the CI algorithm that uses multiple (non-monolithic) weighting factors instead of one (monolithic) factor in the original CI and (ii) a general optimization scheme that is able to compute optimal bounds and fully exploit an arbitrary dependency structure. We compare our methods and show that on a simple problem, they converge to the same solution. We then test our new non-monolithic CI algorithm on a large-scale target tracking simulation and show that it achieves a tighter bound and a more accurate estimate compared to the original monolithic CI.
Christopher Funk, Ofer Dagan, Benjamin Noack, Nisar R. Ahmed
2023-07-20T05:16:33Z
http://arxiv.org/abs/2307.10594v1
# Exploiting Structure for Optimal Multi-Agent Bayesian Decentralized Estimation

###### Abstract

A key challenge in Bayesian decentralized data fusion is the 'rumor propagation' or 'double counting' phenomenon, where previously sent data circulates back to its sender. It is often addressed by approximate methods like covariance intersection (CI) which takes a weighted average of the estimates to compute the bound. The problem is that this bound is not tight, i.e. the estimate is often over-conservative. In this paper, we show that by exploiting the probabilistic independence structure in multi-agent decentralized fusion problems a tighter bound can be found using (i) an expansion to the CI algorithm that uses multiple (non-monolithic) weighting factors instead of one (monolithic) factor in the original CI and (ii) a general optimization scheme that is able to compute optimal bounds and fully exploit an arbitrary dependency structure. We compare our methods and show that on a simple problem, they converge to the same solution. We then test our new non-monolithic CI algorithm on a large-scale target tracking simulation and show that it achieves a tighter bound and a more accurate estimate compared to the original monolithic CI.

## I Introduction

In many multi-agent applications such as multi-target tracking [8, 18], simultaneous localization and mapping (SLAM) [7], and cooperative localization [5, 13], Bayesian decentralized data fusion (DDF) is used for peer-to-peer fusion of estimates. A key challenge in DDF is the so-called 'rumor propagation', where due to common data between the agents, their estimates might be correlated. Since tracking the common data requires pedigree tracking [15], which might be cumbersome in large networks, or limits the network topology [10], methods that try to bound the true uncertainty in the face of unknown correlation between the estimates gained popularity. One of the most commonly used methods for two probability distribution functions (pdf), represented by their first two moments (mean and covariance), is covariance intersection (CI) [12]. For intuition, consider Fig. 1: the true fused covariance must be enclosed by the intersection of the two prior covariances \(\mathbf{P}_{a}\) and \(\mathbf{P}_{b}\) (dark gray). CI optimizes between all covariances that pass through the intersection points of \(\mathbf{P}_{a}\) and \(\mathbf{P}_{b}\) and finds the optimal bound on the true fused covariance [16]. However, in its most basic form [12], it does not make any use of the underlying independence structure between variables, thus the bound achieved by CI might not be tight. For example, when the variables represent two independent states, the true fused covariance is limited to certain areas of the intersection, as shown in Fig. 1 (light gray). In this paper, we show that this independence structure can be exploited to find a tighter bound using (i) an expansion to the CI algorithm - the non-monolithic CI (nmCI) and (ii) a more general scheme that is able to compute optimal bounds and fully exploit an _arbitrary_ dependency structure. The ability to compute optimal bounds in the latter case, even if not tractable in real-time, establishes a baseline of performance for other fusion approaches and provides a tool to empirically investigate the optimality and conservativeness of new fusion approaches. Previous work on this topic may be found in [9], which uses the same optimization formalism as this work.
However, no algorithmic details are divulged and optimal solutions cannot be guaranteed. In comparison, this work may be seen as the first approach to compute guaranteed asymptotically optimal solutions for the general case.

Fig. 1: Intuitive example of how exploiting independence structure leads to a tighter bound. With no structure, possible fused covariances occupy the intersection of \(\mathbf{P}_{a}\) and \(\mathbf{P}_{b}\) (dark gray). With structure, all possible results are limited to the light gray area, thus allowing nmCI to find a tighter bound.

## II Problem Statement

Consider a decentralized network of \(n_{a}\) autonomous Bayesian agents, tasked with jointly monitoring a set \(\chi\) of random variables (rvs). Each agent \(a\) recursively updates its local prior pdf \(p(\chi)\) with (i) independent sensor measurements, described by \(p(z_{k}^{a}|\chi_{k}^{a})\), the likelihood of observing \(z_{k}^{a}\) conditioned on the subset of rvs \(\chi_{k}^{a}\) at time step \(k\), and (ii) data set \(Z_{k}^{b}\), received from a neighboring agent \(b\in N_{a}^{a}\) via the peer-to-peer distributed variant of Bayes' rule [6], \[p_{f}(\chi|Z_{k}^{a}\cup Z_{k}^{b})\propto\frac{p^{a}(\chi|Z_{k}^{a})p^{b}(\chi|Z_{k}^{b})}{p_{c}^{ab}(\chi|Z_{k}^{a}\cap Z_{k}^{b})}. \tag{1}\] Here \(p_{c}^{ab}(\chi|Z_{k}^{a}\cap Z_{k}^{b})\) denotes the pdf over \(\chi\) given the data common to agents \(a\) and \(b\), and needs to be removed in order to prevent 'double counting' of previously shared data. The main challenge in DDF is to account for the denominator in (1). While the denominator can be tracked explicitly, as discussed in the introduction, in this paper we consider the problem where the dependency in the data, i.e. the common data between the agents (\(Z_{k}^{a}\cap Z_{k}^{b}\)), is unknown. We assume that some independence structure between rvs exists, e.g., that the northern and eastern states of a tracked target are independent of each other, and aim to find an optimal (according to some measure) approximation \(\bar{p}_{f}(\chi|Z_{k}^{a}\cup Z_{k}^{b})\) that is a conservative approximation of the true fused posterior pdf \(p_{f}\) in (1), \[\bar{p}_{f}(\chi|Z_{k}^{a}\cup Z_{k}^{b})\succeq p_{f}(\chi|Z_{k}^{a}\cup Z_{k}^{b}). \tag{2}\] Here we consider \(\bar{p}_{f}(\cdot)\) to be a conservative approximation of \(p_{f}(\cdot)\) if \(\bar{\mathbf{P}}_{f}-\mathbf{P}_{f}\succeq 0\), where \(\bar{\mathbf{P}}_{f}\) and \(\mathbf{P}_{f}\) are the covariances of \(\bar{p}_{f}(\cdot)\) and \(p_{f}(\cdot)\), respectively. Assume then that \(p_{f}(\cdot)\) is described by its first two moments (mean and covariance), and let \(\chi_{a}\), \(\chi_{b}\) denote two means/point estimates with associated covariances \(\mathbf{P}_{a}\), \(\mathbf{P}_{b}\) and sparse correlation \(\mathbf{P}_{ab}\). The sparsity structure is implied by the inherent probabilistic properties of the system being estimated.
In the following, we derive a robust optimization problem to simultaneously determine the optimal gains \(\mathbf{K}_{a}\) and \(\mathbf{K}_{b}\) of the linear fusion rule \(\chi_{f}=\mathbf{K}_{a}\chi_{a}+\mathbf{K}_{b}\chi_{b}\), where \(\chi_{f}\) is the fusion result, and a minimal upper bound \(\bar{\mathbf{P}}_{f}\) on the fusion result error covariance \[\mathbf{P}_{f}=\mathbf{K}_{a}\mathbf{P}_{a}\mathbf{K}_{a}^{\top}+\mathbf{K}_{a}\mathbf{P}_{ab}\mathbf{K}_{b}^{\top}+\mathbf{K}_{b}\mathbf{P}_{ab}^{\top}\mathbf{K}_{a}^{\top}+\mathbf{K}_{b}\mathbf{P}_{b}\mathbf{K}_{b}^{\top},\] which is unknown, due to \(\mathbf{P}_{ab}\) being unknown. We begin by deriving a sufficient condition for \(\bar{\mathbf{P}}_{f}\) to be an upper bound on \(\mathbf{P}_{f}\), i.e., \(\bar{\mathbf{P}}_{f}\succeq\mathbf{P}_{f}\). To do so, note that \(\mathbf{P}_{ab}\) cannot take arbitrary values, but only those that result in the joint covariance \(\mathbf{P}=\begin{bmatrix}\mathbf{P}_{a}&\mathbf{P}_{ab}\\ \mathbf{P}_{ab}^{\top}&\mathbf{P}_{b}\end{bmatrix}\) being positive definite. Hence, we require that \(\bar{\mathbf{P}}_{f}\succeq\mathbf{P}_{f}\) for any such \(\mathbf{P}_{ab}\). In particular, this implies that \(\bar{\mathbf{P}}_{f}\succeq\mathbf{P}_{f}\) holds for the true (but unknown) value of \(\mathbf{P}_{ab}\). Next, we introduce the condition \(\mathbf{K}_{a}+\mathbf{K}_{b}=\mathbf{I}\) to ensure that \(\chi_{f}\) is unbiased, should the means / point estimates \(\chi_{a}\) and \(\chi_{b}\) be unbiased. Finally, together with the objective function \(\mathrm{tr}(\bar{\mathbf{P}}_{f})\), which is strictly matrix increasing, i.e., \(\mathbf{A}\prec\mathbf{B}\implies\mathrm{tr}(\mathbf{A})<\mathrm{tr}(\mathbf{B})\), this gives us the robust optimization problem \[\begin{split}\underset{\mathbf{K}_{a},\mathbf{K}_{b},\bar{\mathbf{P}}_{f}}{\mathrm{minimize}}\quad&\mathrm{tr}(\bar{\mathbf{P}}_{f})\\ \mathrm{subject\;to}\quad&\mathbf{K}_{a}+\mathbf{K}_{b}=\mathbf{I},\\ &\bar{\mathbf{P}}_{f}\succeq\mathbf{K}_{a}\mathbf{P}_{a}\mathbf{K}_{a}^{\top}+\mathbf{K}_{a}\mathbf{P}_{ab}^{\prime}\mathbf{K}_{b}^{\top}+\mathbf{K}_{b}\mathbf{P}_{ab}^{\prime\top}\mathbf{K}_{a}^{\top}+\mathbf{K}_{b}\mathbf{P}_{b}\mathbf{K}_{b}^{\top}\quad\forall\,\mathbf{P}_{ab}^{\prime}\in\mathcal{U},\end{split} \tag{3}\] that minimizes the conservativeness of the bound. This robust optimization problem is equivalent to that presented in [9]. In the above problem, the set \(\mathcal{U}\) in the constraint is given by \[\mathcal{U}=\left\{\mathbf{P}_{ab}^{\prime}\,\middle|\,\begin{bmatrix}\mathbf{P}_{a}&\mathbf{P}_{ab}^{\prime}\\ \mathbf{P}_{ab}^{\prime\top}&\mathbf{P}_{b}\end{bmatrix}\succ\mathbf{0},\ [\mathbf{P}_{ab}^{\prime}]_{ij}=0,\ (i,j)\in\mathcal{I}\right\},\] where \(\mathcal{I}\) is the index set of the known zero elements of \(\mathbf{P}_{ab}\). Based on (3) it becomes clear that the more elements of \(\mathbf{P}_{ab}\) are zero, the tighter the achievable bound \(\bar{\mathbf{P}}_{f}\) becomes. This is due to \(\mathcal{U}\) getting smaller as more elements of \(\mathbf{P}_{ab}\) are known. A smaller \(\mathcal{U}\) corresponds to a larger feasible set, which allows for smaller optimal objective values. When \(\mathbf{P}_{ab}\) is completely unknown, the above problem's solution simplifies to the CI solution [12, 16]. In the following, we are interested in (approximately) solving the general problem given by (3), which results in tighter bounds, as shown in Fig. 1.
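The robust constraint in (3) can also be probed numerically for a candidate triple \((\mathbf{K}_{a},\mathbf{K}_{b},\bar{\mathbf{P}}_{f})\): sample admissible cross-covariances with the assumed sparsity pattern and record the smallest eigenvalue of \(\bar{\mathbf{P}}_{f}-\mathbf{P}_{f}\) (a negative value flags a violated, i.e. non-conservative, bound for that sample). The sketch below assumes a diagonal sparsity pattern, as in the 2D example used later; it is an illustrative check, not part of the proposed algorithms.

```python
# Sketch: empirical conservativeness margin of a candidate bound P_bar.
import numpy as np

def min_margin(P_bar, K_a, K_b, P_a, P_b, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    d = P_a.shape[0]
    sa, sb = np.sqrt(np.diag(P_a)), np.sqrt(np.diag(P_b))
    worst = np.inf
    for _ in range(n_samples):
        # diagonal cross-covariance, kept only if the joint matrix is PD
        c = rng.uniform(-1.0, 1.0, size=d)
        P_ab = np.diag(c * sa * sb)
        joint = np.block([[P_a, P_ab], [P_ab.T, P_b]])
        if np.linalg.eigvalsh(joint).min() <= 0.0:
            continue
        P_f = (K_a @ P_a @ K_a.T + K_a @ P_ab @ K_b.T
               + K_b @ P_ab.T @ K_a.T + K_b @ P_b @ K_b.T)
        worst = min(worst, np.linalg.eigvalsh(P_bar - P_f).min())
    return worst   # >= 0 for all samples indicates no observed violation
```

This is the same type of minimum-eigenvalue check reported later in the Monte Carlo comparison of the two approaches.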
## III Technical Approach

To describe our technical approach consider a simple \(2D\) tracking example, where \(n_{a}=2\) autonomous agents \(a\) and \(b\) are tracking the \(x\) and \(y\) position of \(n_{t}=1\) dynamic target, thus \(\chi=[x,y]^{T}\). Assume the agents have perfect knowledge of their ego position and take noisy measurements \(y_{k}\) of the target position. In this case, assuming that the dynamics of the target are separated along the \(x\) and \(y\) directions, the rvs describing the position of the target, \(x\) and \(y\), are independent, thus \(p(\chi)=p(x,y)=p(x)\cdot p(y)\). The key insight here is that by leveraging the independence structure of the problem we can separate the data fusion problem (1) into two independent fusion problems, for \(x\) and \(y\): \[p_{f}(\chi|Z_{k}^{a}\cup Z_{k}^{b})\propto\frac{p^{a}(x|Z_{k}^{a})p^{b}(x|Z_{k}^{b})}{p_{c}^{ab}(x|Z_{k}^{a}\cap Z_{k}^{b})}\cdot\frac{p^{a}(y|Z_{k}^{a})p^{b}(y|Z_{k}^{b})}{p_{c}^{ab}(y|Z_{k}^{a}\cap Z_{k}^{b})}. \tag{4}\] In the following sections, we exploit the problem independence structure to (i) extend the (monolithic) CI fusion rule to a new, less conservative, _non-monolithic CI_ (nmCI) fusion rule, and (ii) approximate the optimal conservative fusion result using semidefinite programming.

### _Non-Monolithic Fusion_

We start our derivation of the new fusion rule with the geometric mean density (GMD) [11][2], which is a generalization of CI to pdf fusion [14], \[p_{f}(\chi)\propto p^{a}(\chi)^{\omega}\cdot p^{b}(\chi)^{1-\omega},\quad 0\leq\omega\leq 1. \tag{5}\] Here \(\omega\) is a scalar weighting constant, chosen according to a desired cost metric [1], and the dependency on the data \(Z\) is omitted for brevity. Assuming, as described above, that the random state vector \(\chi\) can be divided into two independent random states \(x\) and \(y\), we rewrite (5) as \[\begin{split} p_{f}(\chi)\propto p^{a}(x)^{\omega_{x}}\cdot p^{b}(x)^{1-\omega_{x}}\cdot p^{a}(y)^{\omega_{y}}\cdot p^{b}(y)^{1-\omega_{y}},\\ 0\leq\omega_{x},\omega_{y}\leq 1.\end{split} \tag{6}\] Now the weighting constant \(\vec{\omega}=[\omega_{x},\omega_{y}]\) is no longer a scalar, but a non-monolithic vector of weights. We dub this fusion rule nmGMD. From here we focus our attention on Gaussian pdfs and cases where the non-Gaussian pdf is expressed using the first two moments. It has been shown [11] that for Gaussian pdfs, the GMD fusion is equivalent to the CI fusion rule, \[\begin{split}\mathbf{P}_{f}^{-1}&=\omega\mathbf{P}_{a}^{-1}+(1-\omega)\mathbf{P}_{b}^{-1},\\ \mathbf{P}_{f}^{-1}\mu_{f}&=\omega\mathbf{P}_{a}^{-1}\mu_{a}+(1-\omega)\mathbf{P}_{b}^{-1}\mu_{b},\end{split} \tag{7}\] where \(\{\mu_{a},\mathbf{P}_{a}\}\) (\(\{\mu_{b},\mathbf{P}_{b}\}\)) are the estimate mean and covariance, respectively, of agent \(a\) (\(b\)). In the presence of an unknown degree of correlation between the estimates of agents \(a\) and \(b\), i.e. when \(\mathbf{P}_{ab}\) is unknown, the CI rule produces the optimal fusion result [16]. However, when the parts of the state are independent, we can deduce that the estimates must also be independent or uncorrelated. Consider again the example given at the beginning of Sec. III: if \(x\) and \(y\) are independent rvs, then the correlation between agent \(a\)'s estimate of \(x\) and agent \(b\)'s estimate of \(y\) must be \(0\).
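A minimal sketch of (7) and of its non-monolithic use is given below: standard CI applies one scalar \(\omega\) to the full state, whereas nmCI fuses each independent block with its own weight. The 1D grid search for the trace-minimizing weight is an implementation choice of this sketch, and the numerical covariances are those of the comparison in Sec. IV.

```python
# Sketch: CI fusion (Eq. 7) and a block-wise (non-monolithic) application.
import numpy as np

def ci(mu_a, P_a, mu_b, P_b, omega):
    """Covariance intersection for one block, per Eq. (7)."""
    Ia, Ib = np.linalg.inv(P_a), np.linalg.inv(P_b)
    P_f = np.linalg.inv(omega * Ia + (1.0 - omega) * Ib)
    mu_f = P_f @ (omega * Ia @ mu_a + (1.0 - omega) * Ib @ mu_b)
    return mu_f, P_f

def best_omega(P_a, P_b, grid=np.linspace(0.0, 1.0, 101)):
    """Grid search for the trace-minimizing weight (implementation choice)."""
    tr = [np.trace(np.linalg.inv(w * np.linalg.inv(P_a)
                                 + (1 - w) * np.linalg.inv(P_b))) for w in grid]
    return grid[int(np.argmin(tr))]

# nmCI for the 2D example: fuse the independent x- and y-blocks separately.
P_a, P_b = np.diag([3.0, 1.0]), np.diag([1.0, 4.0])
for i, name in enumerate("xy"):
    w = best_omega(P_a[i:i+1, i:i+1], P_b[i:i+1, i:i+1])
    print(name, w)   # -> x: 0.0 (use agent b's x), y: 1.0 (use agent a's y)
```

This reproduces the nmCI weights \(\vec{\omega}=[0,1]\) and the fused covariance \(\mathrm{diag}(1,1)\) quoted for the example in Sec. IV.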
The independence structure of the underlying estimation problem implies the following cross-covariance matrix, \(\mathbf{P}_{ab}=\mathrm{diag}(\sigma_{ab}^{x},\ \sigma_{ab}^{y})\), where \(\sigma_{ab}^{x}\) (\(\sigma_{ab}^{y}\)) are the cross-correlations between the \(x\) (\(y\)) estimates held by \(a\) and \(b\).

### _Approximately Optimal Fusion Gains and Bounds_

Unfortunately, robust optimization problems are often not computationally tractable [4]. Here, we make (3) manageable by approximating the uncountable uncertainty set \(\mathcal{U}\) by a random \(n\)-element subset \(\mathcal{U}_{n}=\{\mathbf{P}_{ab,1}^{\prime},\ldots,\mathbf{P}_{ab,n}^{\prime}\}\subset\mathcal{U}\). This simplification of (3) comes at the risk of possibly non-conservative optimal solutions, as \(\bar{\mathbf{P}}_{f}\) is no longer enforced to be conservative for every possible \(\mathbf{P}_{ab}\). However, it seems intuitive that for \(n\rightarrow\infty\) the optimal solution of the simplified problem approaches that of (3), which is conservative. This conjecture and approaches to quantify the non-conservativeness are the subjects of ongoing research. Two components are required to make the approach sketched above work: (i) a method to solve the simplified problem efficiently and (ii) a method to randomly draw \(\mathbf{P}_{ab,i}^{\prime}\) from a distribution with support given by \(\mathcal{U}\). The condition on the support guarantees that every subset of \(\mathcal{U}\) will eventually be drawn from. The first component follows from applying the Schur complement condition for positive semidefiniteness to the positive semidefinite constraints in (3), each of which can be written in the required form \(\bar{\mathbf{P}}_{f}-\mathbf{K}\mathbf{P}_{i}^{\prime}\mathbf{K}^{\top}\succeq\mathbf{0}\) where \(\mathbf{K}=[\mathbf{K}_{a},\mathbf{K}_{b}]\) and \(\mathbf{P}_{i}^{\prime}=\begin{bmatrix}\mathbf{P}_{a}&\mathbf{P}_{ab,i}^{\prime}\\ \mathbf{P}_{ab,i}^{\prime\top}&\mathbf{P}_{b}\end{bmatrix}\). This gives the equivalent optimization problem \[\begin{split}\underset{\mathbf{K}_{a},\mathbf{K}_{b},\bar{\mathbf{P}}_{f}}{\mathrm{minimize}}\quad&\mathrm{tr}(\bar{\mathbf{P}}_{f})\\ \mathrm{subject\,to}\quad&\mathbf{K}_{a}+\mathbf{K}_{b}=\mathbf{I},\\ &\begin{bmatrix}\bar{\mathbf{P}}_{f}&\mathbf{K}\\ \mathbf{K}^{\top}&\mathbf{P}_{i}^{\prime-1}\end{bmatrix}\succeq\mathbf{0}\quad i=1,\ldots,n,\end{split} \tag{8}\] which is a standard semidefinite program and can be solved in polynomial time by any off-the-shelf SDP solver. For the second component, we use rejection sampling. Because \(\mathcal{U}\) is unbounded, we do not sample covariance matrix cross-terms \(\mathbf{P}_{ab,i}^{\prime}\) from \(\mathcal{U}\) directly, but instead sample equivalent correlation matrix cross-terms \(\mathbf{C}_{ab,i}^{\prime}\). This has the advantage that the elements of the \(\mathbf{C}_{ab,i}^{\prime}\) are in \([-1,1]\). Hence, the rejection sampling process is as follows: (i) Sample \(\mathbf{C}_{ab,i}^{\prime}\) from the proposal distribution. This corresponds to drawing each element of \(\mathbf{C}_{ab,i}^{\prime}\) that is not fixed to zero by the sparsity pattern of \(\mathbf{P}_{ab}\) uniformly from \([-1,1]\). (ii) Accept \(\mathbf{C}_{ab,i}^{\prime}\) if the resulting joint correlation matrix is positive definite. Reject otherwise and repeat (i) and (ii). (iii) If \(\mathbf{C}_{ab,i}^{\prime}\) was accepted, compute \(\mathbf{P}_{i}^{\prime}\) from the joint correlation matrix that results from \(\mathbf{C}_{ab,i}^{\prime}\).
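Putting the rejection sampling and the semidefinite program together, a minimal sketch is given below, assuming `cvxpy` with an SDP-capable solver (e.g. SCS) is available. This is an illustration of (8), not the authors' implementation; the sample count, solver, and the `mask` argument (a 0/1 array marking the entries of \(\mathbf{P}_{ab}\) allowed to be nonzero) are choices of this sketch.

```python
# Sketch: sampled approximation of (3) solved as the SDP (8).
import numpy as np
import cvxpy as cp

def sample_cross_cov(P_a, P_b, mask, rng):
    """One admissible P_ab' with zeros where mask == 0 (rejection sampling)."""
    Da, Db = np.sqrt(np.diag(P_a)), np.sqrt(np.diag(P_b))
    while True:
        C = rng.uniform(-1.0, 1.0, size=mask.shape) * mask
        P_ab = Da[:, None] * C * Db[None, :]
        joint = np.block([[P_a, P_ab], [P_ab.T, P_b]])
        if np.linalg.eigvalsh(joint).min() > 0.0:
            return P_ab

def fuse_sdp(P_a, P_b, mask, n_samples=200, seed=0):
    rng = np.random.default_rng(seed)
    d = P_a.shape[0]
    K_a, K_b = cp.Variable((d, d)), cp.Variable((d, d))
    P_bar = cp.Variable((d, d), symmetric=True)
    K = cp.hstack([K_a, K_b])
    cons = [K_a + K_b == np.eye(d)]
    for _ in range(n_samples):
        P_ab = sample_cross_cov(P_a, P_b, mask, rng)
        Pi_inv = np.linalg.inv(np.block([[P_a, P_ab], [P_ab.T, P_b]]))
        # Schur complement form of P_bar - K Pi K^T >= 0
        cons.append(cp.bmat([[P_bar, K], [K.T, Pi_inv]]) >> 0)
    cp.Problem(cp.Minimize(cp.trace(P_bar)), cons).solve()
    return K_a.value, K_b.value, P_bar.value
```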
Once \(n\) samples have been collected, (8) can be constructed and subsequently solved. ## IV Results and Discussion In this section, we first compare the optimization solution to the nmCI solution for a \(2D\) problem and show that in this case, the nmCI solution is the _optimal_ solution. Then we test the nmCI and compare it to the monolithic CI algorithm on a larger-scale problem, with 16 agents, 20 targets, and \(112\) states target tracking simulation. ### _Comparison: nmCI and Optimization Approaches_ To compare nmCI and the optimization-based approach we consider the following simple scenario with a 2D state vector and two state estimates. The covariances of the estimates are given by \(\mathbf{P}_{a}=\mathrm{diag}(3,1)\), \(\mathbf{P}_{b}=\mathrm{diag}(1,4)\), and their correlation is assumed to be of the form \(\mathbf{P}_{ab}=\mathrm{diag}(\cdot,\cdot)\). This occurs when the state consists, e.g., of the \(x\) and \(y\) position of a target, where the positions are independent due to the probabilistic model describing the target's motion and the measurements taken of it. Here, the nmCI solution is given by \(\mathbf{P}_{nmCI}=\mathrm{diag}(1,1)\), corresponding to \(\vec{\omega}=[0,1]\). We now apply the optimization approach to the above problem for different cardinalities of the set \(\mathcal{U}_{n}\). For each cardinality we perform \(1000\) Monte Carlo (MC) runs, where the true \(\mathbf{P}_{ab}\) is randomly sampled according to the procedure used in the optimization approach. The deviations \(\|\mathbf{P}_{nmCI}-\mathbf{P}_{opt}\|_{2}\) between the nmCI result and optimization result in \(\mathbf{P}_{opt}\) and the medians, maximums, and minimums of the minimum eigenvalues of \(\mathbf{P}_{nmCI}-\mathbf{P}_{f}\) and \(\mathbf{P}_{opt}-\mathbf{P}_{f}\) over the MC runs are shown in Fig.2. As can be seen, the nmCI and optimized solution do not agree for small \(|\mathcal{U}_{n}|\) but converge to each other for larger values. While nmCI is always conservative, the optimization method results in non-conservativeness for too small \(|\mathcal{U}_{n}|\), and interestingly, eventually becomes conservative at some finite \(|\mathcal{U}_{n}|\). Because Fig. 2: Left: Deviation of nmCI bound to optimized bound. Right: Median, maximum/minimum of the smallest eigenvalue of difference between computed bound and actual fused covariance. the optimized bound is optimal for (8) and identical to the guaranteed conservative nmCI bound for large enough \(|\mathcal{U}_{n}|\), it is optimal for (3) when \(|\mathcal{U}_{n}|\) is large enough. Conversely, the nmCI solution must be optimal for (3). ### _Simulation: Multi-Agent Multi-Target Tracking_ To validate the algorithms and demonstrate the advantage of the new nmCI algorithm over the original CI algorithm, we perform a 16-agent, 20-target localization and tracking simulation. In this scenario, as described by Fig. 3, the agents are split into 4 groups of 4 agents, where each group monitors 5 (mutually exclusive) targets out of the 20. Note that this means that the set of states monitored by one group is independent of the set of states monitored by another group. Assume each tracking agent \(a\ =1,...,16\) has perfect self-position knowledge but with a constant agent-target relative position measurement bias vector in the \(x\) and \(y\) directions \(s^{a}=[b_{x}^{a},b_{y}^{aT}]^{T}\). 
In every time step \(k\), each agent \(a\) takes two measurements \(z_{k}^{a,t}\) to target \(t\), and \(m_{k}^{a}\) to a landmark, \[\begin{split} z_{k}^{a,t}&=H\chi^{t}+s^{a}+v_{k}^{ a,1},\ \ v_{k}^{a,1}\sim\mathcal{N}(0,R^{a,1}),\\ m_{k}^{a}&=s^{a}+v_{k}^{a,2},\ \ v_{k}^{a,2}\sim \mathcal{N}(0,R^{a,2}),\end{split} \tag{9}\] where \(\chi^{t}=[x^{t},\dot{x}^{t},y^{t},\dot{y}^{t}]^{T}\) is the \(x\) and \(y\) position and velocity of target \(t\), and \(H\) is the measurement matrix. The results of 15 MC runs as computed by agent 7 are shown in Fig. 4. In (a) we can see the NEES chi-square consistency test [17, 3] with 95% confidence level for a centralized estimator (black), monolithic CI (blue), and nmCI (red). The results show that (i) both the CI and nmCI algorithms provide consistent results, and (ii) nmCI has a higher NEES value, which hints that nmCI is less conservative, i.e. it provides a _tighter bound_. Since, to the best of our knowledge, no quantitative measure of 'less conservative' exists, we choose to compare the mean root mean squared error (RMSE) in Fig. 4(b). From the graph, we can see that across all MC runs the mean RMSE and the mean average \(2\sigma\) of nmCI are \(33\%\) and \(40\%\) smaller than those of CI, respectively. Another interesting point for comparison and analysis is the \(\omega\) values. Fig. 4(c) compares the monolithic \(\omega\) value of CI to the \(8\) values of nmCI (two values for each group of robots). The graph shows the optimal \(\omega\) (minimum trace) for fusion between agents \(7\) and \(11\), where \(\omega=1\) indicates taking agent 7's estimate and ignoring agent \(11\)'s estimate. The different dynamics of the monolithic \(\omega\) and the non-monolithic \(\omega\) values (computed according to (7)) can be explained by the network topology. From the network topology (Fig. 3) we can see that agent \(7\) 'sits' at a much more centralized location relative to agent \(11\), receiving data from different parts of the network earlier; thus its estimate is much more informative than that of agent \(11\). For the monolithic CI this results in a steady-state value of \(\omega=0.87\), i.e. almost ignoring agent \(11\)'s estimate. On the other hand, for nmCI we can see that (i) for the last two agent groups (agents \(9-16\)), where some data flows from agent \(11\) before reaching agent \(7\), the \(\omega\) values range between \(0.59\) and \(0.7\), i.e. much less of agent \(11\)'s information is discarded, and (ii) for the first two agent groups (agents \(1-8\)), where information must pass through agent \(7\) before reaching agent \(11\), \(\omega=1\), i.e. agent \(11\)'s estimate is completely ignored. ## V Conclusion In this work, we showed how to exploit the probabilistic independence structure in a multi-agent team to (i) extend the original monolithic CI to a new non-monolithic CI algorithm, and (ii) develop a conservative fusion optimization approach. We demonstrated in a \(2D\) scenario that the optimization approach converges to the nmCI solution, suggesting that it is the optimal solution. We then tested the performance of nmCI on a large-scale simulation and showed that it provides \(33\%\) smaller RMSE and is less conservative than CI. 
This work surfaced some interesting theoretical and practical questions that we plan to explore, such as: (i) how to quantitatively compare levels of conservativeness; (ii) how to verify that the optimization solution has converged to the optimal solution; (iii) what other types of robotics applications have a similar independence structure that can be exploited; and (iv) in what scenarios enforcing the independence structure and then using nmCI might have an advantage over using monolithic CI. Fig. 4: MC simulation results showing (a) NEES statistics, (b) mean RMSE, (c) weighting constant value \(\omega\). Fig. 3: Left: Undirected and cyclic network topology, split into 4 groups of 4 agents. Right: Target (\(T_{i}\)) tracking assignments to robots 1–4; other groups follow the same pattern.
2301.03100
Cut-off of transverse waves through the solar transition region
Context. Transverse oscillations are ubiquitously observed in the solar corona, both in coronal loops and open magnetic flux tubes. Numerical simulations suggest that their dissipation could heat coronal loops, counterbalancing radiative losses. These models rely on a continuous driver at the footpoint of the loops. However, analytical works predict that transverse waves are subject to a cut-off in the transition region. It is thus unclear whether they can reach the corona, and indeed heat coronal loops. Aims. Our aims are to determine how the cut-off of kink waves affects their propagation into the corona, and to characterize the variation of the cut-off frequency with altitude. Methods. Using 3D magnetohydrodynamic simulations, we modelled the propagation of kink waves in a magnetic flux tube, embedded in a realistic atmosphere with thermal conduction, that starts in the chromosphere and extends into the corona. We drove kink waves at four different frequencies, and determined whether they experienced a cut-off. We then calculated the altitude at which the waves were cut-off, and compared it to the prediction of several analytical models. Results. We show that kink waves indeed experience a cut-off in the transition region, and we identified the analytical model that gives the best predictions. In addition, we show that waves with periods shorter than approximately 500 s can still reach the corona by tunnelling through the transition region, with little to no attenuation of their amplitude. This means that such waves can still propagate from the footpoints of loop, and result in heating in the corona.
Gabriel Pelouze, Tom Van Doorsselaere, Konstantinos Karampelas, Julia M. Riedl, Timothy Duckenfield
2023-01-08T19:51:39Z
http://arxiv.org/abs/2301.03100v2
# Cut-off of transverse waves through the solar transition region ###### Abstract Context:Transverse oscillations are ubiquitously observed in the solar corona, both in coronal loops and open magnetic flux tubes. Numerical simulations suggest that their dissipation could heat coronal loops, counterbalancing radiative losses. These models rely on a continuous driver at the footpoint of the loops. However, analytical works predict that transverse waves are subject to a cut-off in the transition region. It is thus unclear whether they can reach the corona, and indeed heat coronal loops. Aims:Our aims are to determine how the cut-off of kink waves affects their propagation into the corona, and to characterize the variation of the cut-off frequency with altitude. Methods:Using 3D magnetohydrodynamic simulations, we modelled the propagation of kink waves in a magnetic flux tube, embedded in a realistic atmosphere with thermal conduction, that starts in the chromosphere and extends into the corona. We drove kink waves at four different frequencies, and determined whether they experienced a cut-off. We then calculated the altitude at which the waves were cut-off, and compared it to the prediction of several analytical models. Results:We show that kink waves indeed experience a cut-off in the transition region, and we identified the analytical model that gives the best predictions. In addition, we show that waves with periods shorter than approximately 500 s can still reach the corona by tunnelling through the transition region, with little to no attenuation of their amplitude. This means that such waves can still propagate from the footpoints of loop, and result in heating in the corona. ## 1 Introduction Recent advances in observations and modelling have shown that magnetohydrodynamic (MHD) waves could significantly contribute to the heating of the solar corona (see review by Van Doorsselaere et al.2020). In particular, transverse waves are ubiquitously observed, and they come in several kinds. The type that was first discovered are the transverse waves that are impulsively excited after a flare (Nakariakov et al.1999). However, these transverse waves are only sporadically excited and do not play an important role in the energy budget of the solar corona (Terradas and Arregui2018). Later on, it was discovered that the corona is filled by small-amplitude transverse waves (Tomczyk et al.2007; Tomczyk and McIntosh2009; McIntosh et al.2011; Tian et al.2012). These were observed in coronal loops as propagating (Tiwari et al.2019) or standing waves (Anfinogentov et al.2015). These low-amplitude transverse waves were also observed as propagating waves in open-field regions (Thurgood et al.2014; Morton et al.2015). These low-amplitude waves show little-to-no decay (Morton et al.2021) and are thus named "decayless". Because the flare-excited standing waves are rapidly decaying (Goddard et al.2016; Nechaeva et al.2019) due to resonant absorption (Goossens et al.2002) and non-linear Kelvin-Helmholtz instability (KHI) damping (Terradas et al.2008; Antolin et al.2014; Van Doorsselaere et al.2021; Arregui2021), it is generally thought that the decayless waves must be continuously supplied with energy to counteract its strong damping. 
Several mechanisms for excitation have been proposed: slip-stick driving with steady flows (Nakariakov et al.2016; Karampelas and Van Doorsselaere2020), vortex shedding (Nakariakov et al.2009; Karampelas and Van Doorsselaere2021) or footpoint driving (Nistico et al.2013; Karampelas et al.2017) through p-modes (Morton et al.2019) or convective shuffling. The latter option of footpoint driving has had some success in generating standing mode decayless waves (Afanasyev et al.2020), which counterbalance the non-linear damping through the KHI (Guo et al.2019) and lead to heating of loops (Shi et al.2021). However, for the driving of decayless waves through their footpoints, it is not well understood how the transverse waves propagate through the complicated structure of the chromosphere and transition region. The simulations of transverse-wave induced KHI heating (e.g. Karampelas et al.2019) only take into account the coronal part of the loop, that is imposing a driver at the top of the transition region. To properly model the whole loop evolution due to the wave heating, it is essential to also model the wave driver in the photosphere, and accurately capture its influence on the coronal loop dynamics. In plane-parallel atmospheres, the propagation of fast and slow waves has been well studied. It was found that these modes couple efficiently to Alfven waves through resonant absorption (Hansen and Cally2009; Cally and Andries2010; Khomenko and Cally2012). Currently, investigations are ongoing to what happens if the cross-field structuring is included into the wave propagation model (Cally and Khomenko2019; Riedl et al.2019, 2021). Another crucial ingredient is the wave's behaviour in strong (i.e. non-WKB) stratification. It is well-known that slow waves experience a cut-off while propagating through a stratified medium (Bel & Leroy, 1977). This has been verified observationally (Jess et al., 2013) and numerically (Felipe et al., 2018). Still, up to now, it is unknown if a similar cut-off exists for transverse waves in structured media. For the driving of the observed decayless waves in the corona, this is a crucial property to understand. Several analytical works predict that transverse waves are cut-off in the transition below a given frequency. The first formula was derived by Spruit (1981): \[\omega_{\rm gg1}^{2}=\frac{g}{8H}\frac{1}{2\beta+1}, \tag{1}\] where \(g\) is the gravity projected along the loop, \(H\) the pressure scale height, and \(\beta\) the ratio between the gas and magnetic pressures. For a typical isothermal atmosphere, this corresponds to a cut-off period of \(700\,\)s (Spruit, 1981). However, Lopin et al. (2014) showed that this classical cut-off is suppressed when the radial component of the magnetic field is taken into account. Lopin & Nagorny (2017) later showed that transverse waves can still be cut-off, provided a non-isothermal atmosphere. They predict the following cut-off frequency: \[\omega_{\rm LN17}^{2}=\frac{c_{k0}^{2}}{4H_{0}H(z)}\left(\delta_{B}^{2}\frac{ \mathrm{d}H(z)}{\mathrm{d}z}+\frac{H^{2}(z)}{z^{2}}\right), \tag{2}\] where \(z\) is the altitude, \(c_{k0}\) is the kink speed at the base of atmosphere (\(z=z_{0}\)), \(H\) is the pressure scale height, \(H_{0}=H(z_{0})\), and \(\delta_{B}^{2}=\left(B_{0i}^{2}-B_{0c}^{2}\right)/\left(B_{0i}^{2}+B_{0c}^{2}\right)\) is the relative difference between the magnetic field inside (\(B_{0i}\)) and outside (\(B_{0i,c}\)) the flux tube, at \(z=z_{0}\). Finally, an alternative formula was derived by Snow et al. 
(2017): \[\omega_{\rm gg17}^{2}=\frac{v_{A}^{2}(z)}{4z^{2}}, \tag{3}\] where \(z\) is the altitude, and \(v_{A}\) is the Alfven speed. In this article, we modelled the propagation of kink waves in an open magnetic flux tube, embedded in a non-isothermal atmosphere. The atmosphere extends from the chromosphere to the corona, and includes gravitational stratification and thermal conduction (Sect. 2). We drove kink waves at different periods, and determined whether they experienced a cut-off (Sect. 3). We compare these results to the three analytical formulas given above in Sect. 4, and summarize our conclusions in Sect. 5. ## 2 Numerical model: magnetic flux tube through the transition region We modelled a vertical magnetic flux tube of radius \(R=1\,\)Mm embedded in a stratified atmosphere, starting in the chromosphere (altitude \(z=0\,\)Mm) and extending through the transition region (\(z\approx 4\,\)Mm) into the corona. Kink waves were excited in the flux tube by applying a monoperiodic driver at the bottom of the domain (\(z=0\,\)Mm). In the upper half of the domain (\(z>50\,\)Mm), we implemented a "velocity rewrite layer" to absorb the kink waves. The driver and the velocity rewrite layer are described in Sect. 2.1. A sketch of the domain is shown on Fig. 1. We solved the 3D MHD evolution of this tube using the PLUTO code (Mignone et al., 2007), version 4.3. This code solves the conservative MHD equations (mass continuity, momentum conservation, energy conservation, and induction equation). We used the corner transport upwind finite volume scheme, where characteristic tracing is used for the time stepping, and a linear spatial reconstruction with a monotonized central difference limiter is performed. The magnetic field divergence was kept small using the extended divergence cleaning method (generalized Lagrange multiplier, or GLM), and flux was computed with the linearized Roe Riemann solver. We did not include explicit viscosity, resistivity, or cooling. However, numerical dissipation results in higher effective viscosity and resistivity than what is expected for the solar corona, as discussed by Karampelas et al. (2019). We included a modified thermal conduction, as described below. The transition region between the chromosphere and the corona is characterized by a very sharp temperature gradient. Resolving such gradient requires a very high resolution along the tube (\(\sim 1\,\)km in the transition region). In order to keep computational costs reasonable, we artificially broadened the transition region (thus reducing the temperature gradient). To that end, we modified the thermal conductivity using the method developed by Linker et al. (2001); Lionello et al. (2009); Mikic et al. (2013). Below the cut-off temperature \(T_{c}=2.5\cdot 10^{5}\,\)K, the parallel thermal conductivity was set to \(\kappa_{\parallel}=C_{0}T_{c}^{5/2}\) with \(C_{0}=9\cdot 10^{-12}\,\)Wm\({}^{-1}\)K\({}^{-7/2}\). Above \(T_{c}\), \(\kappa_{\parallel}=C_{0}T^{5/2}\). This allowed us to use a resolution of \(98\,\)km along the tube. This grid allows to fully resolve the broadened transition region, which has a minimum temperature scale length of \(1.6\,\)Mm (see Johnston & Bradshaw, 2019). The dimensions of the domain were \((L_{x},L_{y},L_{z})=(16,6,100)\) Mm. We used a uniform grid of \(400\times 150\times 1024\) cells, with a size of \(40\,\)km in the \(x\) and \(y\) directions, and \(98\,\)km in the \(z\) direction. 
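For reference, the modified parallel thermal conductivity described above reduces to a one-line prescription; the following numpy sketch mirrors that prescription (constant below \(T_{c}\), classical \(T^{5/2}\) scaling above) and is not the actual PLUTO implementation.

```python
import numpy as np

C0 = 9e-12      # W m^-1 K^-7/2
T_C = 2.5e5     # cut-off temperature T_c [K]

def kappa_parallel(T):
    """Modified parallel thermal conductivity: saturated at kappa(T_c) below
    the cut-off temperature, classical T^(5/2) scaling above it."""
    return C0 * np.maximum(np.asarray(T, dtype=float), T_C) ** 2.5

# example: conductivity from chromospheric to coronal temperatures
print(kappa_parallel([2.0e4, 2.5e5, 1.2e6]))
```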
Furthermore, we verified that the results did not change significantly when using a resolution of \(40\,\)km in the \(z\) direction. To that end, we ran a separate simulation and verified that the resulting cut-off altitude and comparison to the analytical formulas (see Sect. 4) were not strongly modified. We note that such resolution is too costly in terms of compute time to be used for all simulations in this work. The strong stratification in the transition region makes it challenging to obtain a relaxed initial state for the model. We first initialized the domain with a field-aligned hydrostatic equilibrium (Sect. 2.2). We then let the simulation relax in 2D for \(47\,\)ks (Sect. 2.3). Finally, we filled the 3D domain with this re Figure 1: Sketch of the simulation domain, showing the magnetic flux tube, the location of the kink wave driver (bottom boundary), chromosphere, transition region, corona, and velocity rewrite layer. laxed state through cylindrical symmetry, where we drove kink waves of different periods for a duration up to \(2.7\,\mathrm{ks}\) (Sect. 2.4). ### Boundary conditions and driver We first describe the boundary conditions used for the relaxation (2D) and kink wave (3D) simulations. Bottom boundaryAt the bottom boundary (base of the chromosphere, \(z=0\)), the density and pressure were extrapolated using the hydrostatic equilibrium equation. The magnetic field was extrapolated using the zero normal-gradient condition described by Karampelas et al. (2019, section 2.4). For \(v_{z}\), we either imposed a reflective boundary condition (2D relaxation, see Sect. 2.3), or imposed \(v_{z}=0\) (in 3D, see Sect. 2.4). We verified that both boundary conditions give the same results in 3D simulations. The parallel velocity components \(v_{x}\) and \(v_{y}\) were set to obey either a zero-gradient boundary condition (2D relaxation), or to follow a driver that excites kink waves (in 3D). We used a monoperiodic, dipole-like, driver developed by Pascoe et al. (2010) and updated by Karampelas et al. (2017). Inside the tube, the driver imposes: \[\left\{v_{x}(x,y,t),v_{y}(x,y,t)\right\}=\left\{v(t),0\right\}, \tag{4}\] where \(v(t)=v_{0}\cos\left(2\pi t/P_{0}\right)\), with \(v_{0}\) the driver amplitude, set to \(2\,\mathrm{km}\,\mathrm{s}^{-1}\). The driver period, \(P_{0}\), was set to different values in order to test the cut-off of kink waves. Outside the tube, the driver imposes: \[\left\{v_{x}(x,y,t),v_{y}(x,y,t)\right\}=v(t)R^{2}\frac{\left\{\left(x-x_{0}( t)\right)^{2}-y^{2},2\left(x-x_{0}(t)\right)y\right\}}{\left(\left(x-x_{0}(t) \right)^{2}+y^{2}\right)^{2}}, \tag{5}\] where \(x_{0}(t)=v_{0}P_{0}/(2\pi)\cdot\sin\left(2\pi t/P_{0}\right)\) is the centre of the tube's footpoint at time \(t\). This driver generates a kink wave polarized in the \(x\) direction. Upper boundaryAt the upper boundary (top of the corona, \(z=100\,\mathrm{Mm}\)), the magnetic field was kept symmetric. All other variables obeyed a reflective boundary condition. In order to absorb the upwards waves excited by the driver, we artificially modified the velocity in the upper half of the domain (\(z>50\,\mathrm{Mm}\)). At each time step, after solving the MHD equations, we decreased each component of the velocity \(v_{i}\) by multiplying it by a quantity \(\alpha_{x}\lesssim 1\): \[v_{i}^{\prime}=\alpha_{v}(t,z)v_{i}. 
\tag{6}\] In the driven 3D simulations \(\alpha_{v}\) was kept constant in time, and varied linearly along the loop, from \(1\) at \(z=z_{e}=50\,\mathrm{Mm}\), to \(\alpha_{x,\mathrm{min}}=0.9995\) at \(z=L=100\,\mathrm{Mm}\): \[\alpha_{x,\mathrm{3D}}(z)=\begin{cases}1&\text{if $z\leq z_{e}$,}\\ 1-\left(1-\alpha_{x,\mathrm{min}}\right)\left(\frac{z-z_{e}}{z-z_{e}}\right)& \text{else}.\end{cases} \tag{7}\] In the 2D relaxation run, the first third of the simulation (\(t_{1/3}=15.7\,\mathrm{ks}\)) was run without modifying the velocity (i.e. \(\alpha_{v}=1\)). During the second third, \(\alpha_{v}\) was linearly ramped down in time to match the profile \(\alpha_{x,\mathrm{3D}}(z)\) described above. Finally, the last third of the simulation was run with the constant \(\alpha_{x,\mathrm{3D}}(z)\): \[\alpha_{x,\mathrm{2D}}(z,t)=\begin{cases}1&\text{if $t\leq t_{1/3}$,}\\ 1-\left(1-\alpha_{x,\mathrm{3D}}(z)\right)\left(\frac{t-t_{1/3}}{t_{1/3}} \right)&\text{if $t_{1/3}<t\leq 2t_{1/3}$,}\\ \alpha_{x,\mathrm{3D}}(z)&\text{else}.\end{cases} \tag{8}\] The evolution of \(\alpha_{v}\) is shown in Fig. 2. This "velocity rewrite layer" can successfully absorb the kink waves that are excited by the driver at the bottom of the chromosphere. As a result, these waves are not reflected at the upper boundary, and do not propagate downwards back into the domain. We stress that the solution obtained inside the velocity rewrite layer (i.e. above \(z=50\,\mathrm{Mm}\)) is not physical, and that this layer should be considered as a part of the upper boundary. Side boundariesAt the side boundaries (\(x\) and \(y\) axes), all variables obeyed a zero-gradient boundary condition. In the 2D relaxation run, we only simulated half of the tube radius (\(x>0\)). For these simulations, we imposed a reflective boundary condition on all variables at the centre of the tube (\(x=0\)). ### Initial conditions: field-aligned hydrostatic equilibrium The simulation was initialized with a uniform vertical magnetic field of magnitude \(B_{0}=42\,\mathrm{G}\). Along the tube, we imposed the following temperature profile, derived from Aschwanden & Schrijver (2002): \[T(x,y,z)=\begin{cases}T_{\mathrm{ch}}&\text{if $z\leq\Delta_{ \mathrm{ch}}$,}\\ T_{\mathrm{ch}}+\left(T_{\mathrm{cor}}(x,y)-T_{\mathrm{ch}}\right)\left(1- \left(\frac{L-z}{L-\Delta_{\mathrm{ch}}}\right)\right)^{2}&\text{else}, \end{cases} \tag{9}\] where \(z\) is the altitude, \(L\) is the height of the computational domain, \(\Delta_{\mathrm{ch}}=4\,\mathrm{Mm}\) is thickness of the chromosphere, and \(T_{\mathrm{ch}}=20\,000\,\mathrm{K}\) is the temperature in the chromosphere. We defined the transverse temperature profile at the top of the domain, \(T_{\mathrm{cor}}(x,y)\), as: \[T_{\mathrm{cor}}(x,y)=T_{\mathrm{cor,ext}}+(T_{\mathrm{cor,int}}-T_{\mathrm{cor,ext}})\zeta(x,y), \tag{10}\] where \(T_{\mathrm{cor,int}}=1.2\,\mathrm{MK}\) is the temperature inside the tube, and \(T_{\mathrm{cor,ext}}=3.6\,\mathrm{MK}\) is the temperature outside the tube. The shape of the profile was set by \(\zeta(x,y)\): \[\zeta(x,y)=\frac{1}{2}\left[1-\tanh\left(\left(\sqrt{x^{2}+y^{2}}/R-1\right)b \right)\right], \tag{11}\] Figure 2: Velocity-rewrite coefficient \(\alpha_{v}\), applied to the velocity above \(50\,\mathrm{Mm}\) so that upper-propagating waves are not reflected back into the domain. \(\alpha_{v}\) is shown for different times of the 2D relaxation run. The last profile (\(t\geq 31.3\,\mathrm{ks}\)) is also applied in the 3D driven simulations. 
where \(R=1\) Mm is the tube radius, and \(b=5\) is a dimensionless number setting the width of the inhomogeneous layer between the interior and exterior of the tube (\(l\approx 6\mathrm{\,}\mathrm{\SIUnitSymbolR}/b\)). \(\zeta(x,y)\) is close to 1 inside the tube, and to 0 outside. We also set the density at the bottom of the chromosphere (\(z=0\)) to: \[\rho_{\mathrm{ch}}(x,y,z=0)=\rho_{\mathrm{ch,ext}}+(\rho_{\mathrm{ch,int}}-\rho _{\mathrm{ch,ext}})\zeta(x,y), \tag{12}\] where \(\rho_{\mathrm{ch,int}}=3.51\cdot 10^{-8}\) kg m\({}^{-3}\) is the density inside the tube, and \(\rho_{\mathrm{ch,ext}}=1.17\cdot 10^{-8}\) kg m\({}^{-3}\) is the density outside. We then integrated the field-aligned hydrostatic equilibrium equation numerically using a Crank-Nicholson scheme. The profiles of the imposed temperature and of the density resulting from the integration are shown in Fig. 3 (a). The temperature contrast (interior temperature divided by exterior temperature) is 1 in the chromosphere, and decreases to \(\nicefrac{{1}}{{3}}\) in the corona. The density contrast is 3 in the chromosphere, increases to around 7 in the transition region, and decreases again to about 4 in the upper corona. The pressure contrast is 3 in the chromosphere, and slowly decreases to reach 1.2 in the upper corona. However, this initial state is not in magnetohydrostatic (MHS) equilibrium, because the pressure varies across the flux tube, while the magnetic field does not. To fix this, we let the tube relax by running a 2D magnetohydrodynamic simulation (Sect. 2.3). We then used this relaxed state to initialize the 3D simulation of kink waves (Sect. 2.4). ### Flux tube relaxation (2D) In order to obtain a flux tube in MHS equilibrium, we first run a 2D simulation, initialized with the initial state described in Sect. 2.2. The MHD equations were solved in a longitudinal plane at \(y=0\) (see Fig. 1), with \(x\in[0,8.56]\) Mm, and \(z\in[0,100]\) Mm. We used a uniform grid of \(64\times 2048\) cells with a size of \(134\) km\(\times 49\) km. The resolution along \(z\) is higher than in the 3D runs in order to resolve the sharper gradients in the transition region (see Fig. 3). We verified that a resolution of \(40\) km in the \(x\) direction yielded the same results, by running a separate 2D simulation followed by a 3D driven simulation (\(P_{0}=200\) s), and verifying that the cut-off altitude and comparison to the analytical formulas (Sect. 4) were not significantly modified. We let the system evolve for \(47\) ks, during which the velocity rewrite parameter \(\alpha_{\mathrm{e}}\) varied as described in Eq. (8). As a result of the relaxation, periodic longitudinal flows with a velocity of about \(15\) km s\({}^{-1}\) develop along the tube. They are damped during the later stages of the simulation, as the velocity rewrite layer is gradually introduced. At the end of the relaxation run, residual velocities are lower than \(0.5\) km s\({}^{-1}\) everywhere in the domain. The resulting temperature, density, and magnetic field profiles are shown on Fig. 3 (b). Compared to the initial state (Fig. 3 a), the transition region is significantly broadened, with a thickness of about \(7\) Mm. This is the direct result of the modified thermal conductivity used in this setup, and allows for a coarser resolution along the loop in the 3D simulations. In addition, the temperature and density decrease, both inside and outside the tube. 
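For orientation, the initial-condition profiles of Eqs. (9)-(12) can be collected into a short numpy sketch; the parameter values are those quoted above, and the exponent placement in the temperature ramp follows Eq. (9) as printed.

```python
import numpy as np

L, DZ_CH = 100.0, 4.0                    # domain height and chromosphere thickness [Mm]
T_CH = 2.0e4                             # chromospheric temperature [K]
T_COR_INT, T_COR_EXT = 1.2e6, 3.6e6      # coronal temperature inside / outside the tube [K]
RHO_INT, RHO_EXT = 3.51e-8, 1.17e-8      # chromospheric density inside / outside [kg m^-3]
R, B_LAYER = 1.0, 5.0                    # tube radius [Mm] and layer-width parameter b

def zeta(x, y):
    """Transverse profile of Eq. (11): ~1 inside the tube, ~0 outside."""
    return 0.5 * (1.0 - np.tanh((np.sqrt(x**2 + y**2) / R - 1.0) * B_LAYER))

def temperature(x, y, z):
    """Imposed temperature profile of Eqs. (9)-(10)."""
    t_cor = T_COR_EXT + (T_COR_INT - T_COR_EXT) * zeta(x, y)
    ramp = np.where(z <= DZ_CH, 0.0, (1.0 - (L - z) / (L - DZ_CH)) ** 2)
    return T_CH + (t_cor - T_CH) * ramp

def rho_chromosphere(x, y):
    """Density imposed at the bottom of the chromosphere, z = 0 (Eq. 12)."""
    return RHO_EXT + (RHO_INT - RHO_EXT) * zeta(x, y)

# example: on-axis values (x = y = 0) at a few altitudes
z = np.array([0.0, 4.0, 50.0, 100.0])
print(temperature(0.0, 0.0, z), rho_chromosphere(0.0, 0.0))
```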
Overall, the density contrast (\(\rho_{\mathrm{int}}/\rho_{\mathrm{ext}}\)) decreases: it reaches 1 in the chromosphere, 1.2 in the transition region, and 1.8 in the corona. The temperature contrast also changes to about 1.3 in the transition, and about 0.8 in the corona. Finally, the magnetic field amplitude contrast remains very close to 1 everywhere in the domain (0.97 in the chromosphere and 1 in the corona), with a magnitude of about \(11\) G everywhere in the domain. Compared to the initial uniform magnetic field, the magnitude is divided by about four, while the contrast remains close to 1. The final temperature and density profile significantly differ from the initial conditions of 2D relaxation run. However, this is not an issue, as the goal of this study is to investigate how the analytical formulas we consider (Spruit, 1981; Lopin & Nagorny, 2017; Snow et al., 2017) predict the cut-off frequency for a given temperature and density profile. By using the relaxed profiles as an input to these analytical formulas, we obtained predictions for the relaxed system. This relaxed 2D simulation was then mapped onto the 3D domain through cylindrical symmetry. We used a rotation about the line \(x=0\) (i.e. the centre of the loop), and a trilinear interpolation to project onto the 3D Cartesian grid. ### Kink waves propagation (3D) In order to simulate the propagation of kink waves from the chromosphere to the corona, we drove the 3D simulations with the monoperiodic, dipole-like, driver described in Eqs. (4) and (5). We ran four simulations, with different driver periods \(P_{0}\): \(200\) s, Figure 3: Temperature (_black_), density (_red_), and magnetic field magnitude (_blue_) profiles inside (\(r=0\) Mm; _solid lines_) and outside (\(r=8\) Mm; _dashed lines_) the flux tube. (a) After solving the field-aligned hydrostatic equilibrium. (b) After the 2D magnetohydrodynamic relaxation. 335 s, 700 s, and 2000 s. The propagating kink waves generated by the driver are absorbed by the velocity rewrite layer at the top of the domain, and are thus not reflected downwards. The first three simulations were run for a duration of \(5P_{0}\). The last simulation was run for \(1.75P_{0}\). At the beginning of the simulations, the system goes through an initial transitory phase before the propagating kink wave is fully established (i.e. its amplitude does not change with time). We waited for \(2P_{0}\) (\(0.42P_{0}\) for \(P_{0}=2000\) s) for the kink wave to enter a stable sinusoidal regime. After this duration, we saved high-cadence snapshots at the centre of the loop (line \(x=y=0\)). For all further analysis, we used the snapshots saved after the transitory phase. The transverse velocity \(v_{x}\) at the loop centre is shown in Fig. 4. As can be seen on this figure, the amplitude of the kink wave decreases as the period increases. For the two longer driver periods (700 and 2000 s), the amplitude of the kink wave is small enough for some perturbations to become visible. They travel at the Alfven speed, and appear to be triggered by the flows remaining after the relaxation (see Sect. 2.3). These perturbations have amplitudes smaller than \(0.2\,\mathrm{km\ s^{-1}}\), and should thus have no effect on the wave. ## 3 Results: cut-off and tunnelling of transverse waves In order to determine whether the kink waves driven in the 3D simulations are experiencing a cut-off, we looked at the evolution of the velocity amplitude (Sect. 3.1), as well as the phase speed (Sect. 3.2) as a function of altitude. 
The analysis of these profiles allows us to establish that the transverse waves are subject to a low-frequency cut-off in the transition region. ### Wave amplitude increases with frequency In order to compute the velocity amplitude of the kink wave, we fitted the function \(A_{x}(z)\sin\left(\omega(z)t+\phi(z)\right)\) to the transverse velocity \(v_{x}(z,t)\), at each altitude (\(z\)). \(A_{x}(z)\) is the velocity amplitude, \(\omega(z)\) is the kink wave frequency, and \(\phi(z)\) is the phase. The frequency varies by less than 1 % with altitude, confirming theoretical understanding. The velocity amplitude is shown in Fig. 5. In all simulations, the wave amplitude increases with altitude, because of the density decreases with altitude and energy conservation. Across simulations, the amplitude at a given altitude increases with the frequency of the wave. This means that kink waves with higher frequencies propagate better from the chromosphere to the corona. This would be consistent with the low-frequency cut-off predicted by analytical models (see Sect. 1). ### Evanescent waves in the transition region To determine the altitude at which the waves are cut-off, we compared their phase speed \(v_{p}(z)\) to the kink speed of the flux tube \(c_{\mathrm{L}}(z)\). The inverse phase speed is equivalent to the phase difference \(\Delta\phi(z)\) between two altitudes separated by \(\Delta z\): \(1/v_{p}(z)=\Delta\phi(z)/(\omega\Delta z)\). The phase difference has been successfully used to determine the cut-off frequency of acoustic and slow-magnetonic waves in observations (Centeno et al., 2006; Felipe et al., 2010; Krishna Prasad et al., 2017; Felipe et al., 2018), and in simulations (Felipe and Sangeetha, 2020). In these articles, the authors determine the phase speed for a wide range of frequencies, but at a limited number of altitude positions. In the present study however, we could only examine four frequencies, because of the high computational cost of a simulation. However, we computed the phase difference at all altitudes of the simulation domain. This allows us to determine the altitude at which the wave is cut-off. Figure 4: Kink waves transverse velocity (\(v_{x}\)) at the loop centre (\(x=y=0\)), as a function of altitude and time. The velocity is shown for four 3D simulations with different driver periods \(P_{0}\), after an initial settling time of \(2P_{0}\) (for \(P_{0}=200\) s, 335 s and 700 s), or \(0.42P_{0}\) (for \(P_{0}=2000\) s). The dashed black lines represent a propagation at the kink speed (see Eq. (13)), and are independent of the driver period. Figure 5: Velocity amplitude of kink waves, as a function of altitude. The velocity is shown for four different driver periods (\(P_{0}\)). The inset has the same axes as the main figure, with a zoom-in on the vertical axis. The phase speed at a given altitude \(z\) was computed from the transverse velocity in the cells above and below, that is \(v_{x}(t,z+\Delta z/2)\) and \(v_{x}(t,z-\Delta z/2)\), where \(\Delta z=98\,\mathrm{km}\) is the cell size. We apodized these velocity time series with a Hann window, and computed the cross-correlation \(C(\tau,z)=v_{x}(t,z+\Delta z/2)\star v_{x}(t,z-\Delta z/2)\). We then determined the time delay \(\Delta\tau(z)\), by finding the maximum of \(C(\tau,z)\). To that end, we fitted the function \(A+B\cos\left(\omega(\tau-\Delta\tau)/\delta\right)\) to \(C(\tau,z)\), with \(\tau\in[-P_{0}/4,+P_{0}/4]\). 
Finally, the phase difference was given by \(\Delta\phi(z)=\omega\Delta\tau(z)\), and the inverse phase speed by \(1/v_{p}(z)=\Delta\tau(z)/\Delta z\). The inverse phase speed is shown on Fig. 6, alongside the inverse kink speed for the simulated flux tube. The kink speed \(c_{k}\) is calculated using: \[c_{k}^{2}(z)=\frac{\rho_{i}(z)v_{A,i}^{2}(z)+\rho_{c}(z)v_{A,e}^{2}(z)}{\rho_{ i}(z)+\rho_{c}(z)}, \tag{13}\] where \(\rho(z)\) is the density, \(v_{A}(z)=B(z)/\sqrt{\mu_{0}\rho(z)}\) is the Alfven speed, \(B(z)\) is the magnetic field amplitude, and \(\mu_{0}\) is the magnetic permittivity of vacuum. The indices \(i\) and \(e\) correspond, respectively, to internal and external quantities relatively to the flux tube, and are taken at \(x=0\) and \(x=8\,\mathrm{Mm}\). In simulations with short driver periods, the inverse phase speed is somewhat smaller than the inverse kink speed in the chromosphere and transition region (\(v_{p}/c_{k}\approx 2\) for \(P_{0}=200\,\mathrm{s}\), and 5 for \(P_{0}=335\,\mathrm{s}\)), and equals the inverse kink speed in the corona. On the other hand, in simulations with longer periods, the inverse phase speeds are much lower than the inverse kink speed below a given altitude. For \(P_{0}=700\,\mathrm{s}\), \(1/v_{p}\) is about 250 times smaller than \(1/c_{k}\) below \(z=1\,\mathrm{Mm}\). For \(P_{0}=2000\,\mathrm{s}\), a similar drop occurs below \(z=20\,\mathrm{Mm}\). For a propagating kink wave, the inverse phase speed is expected to be equal to the inverse kink speed. Conversely, standing and evanescent (i.e. cut-off) waves have inverse phase speeds smaller than the inverse kink speed. Thus, the decreased inverse phase speed for higher periods indicates that the waves are cut-off in at least some regions. To distinguish between the standing and evanescent cases, we have also looked at the wave amplitude (Fig. 5). In the absence of vertical stratification, the amplitude of evanescent waves decreases with altitude. However, in a stratified atmosphere (our case), the amplitude increases with altitude because of the density decrease, even for evanescent waves. On Fig. 5, the amplitude of waves with longer periods (for which \(1/v_{p}\ll 1/c_{k}\)) increases less with altitude compared to waves with shorter periods (for which \(1/v_{p}\lesssim 1/c_{k}\)). We thus conclude that the waves with longer periods are evanescent in parts of the low atmosphere, where their inverse phase speed is much lower than the inverse kink speed. This means that these long-period waves are cut-off in the transition region. ### Wave tunnelling at higher frequencies Waves with shorter periods (\(P_{0}=200\) and \(335\,\mathrm{s}\)) also show signs of cut-off at low altitudes. Below \(z=3\,\mathrm{Mm}\), the inverse phase speed \(1/v_{p}\) is lower than the inverse kink speed \(1/c_{k}\) (Fig. 6), and the amplitude increase with altitude is smaller for \(P_{0}=335\,\mathrm{s}\) than for \(P_{0}=200\,\mathrm{s}\) (Fig. 5). However, this cut-off is significantly weaker than in the long-period case. This is explained by the fact that the cut-off region (where \(1/v_{p}<1/c_{k}\)) is narrower for short periods (\(\sim 1\,\mathrm{Mm}\)) than for long periods (\(\sim 10\,\mathrm{Mm}\)). As a result, short-period waves can tunnel through the cut-off region, and propagate into the corona. Furthermore, the weak attenuation in the cut-off region (\(1/v_{p}\lesssim 1/c_{k}\)) results further reduces the effect of the cut-off. 
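To illustrate the diagnostics used above, the following numpy sketch computes the kink speed of Eq. (13) and estimates the time delay between two neighbouring altitudes from the peak of their cross-correlation; the synthetic signal and the parameter values are placeholders rather than simulation output.

```python
import numpy as np

MU0 = 4e-7 * np.pi

def kink_speed(rho_i, rho_e, B_i, B_e):
    """Kink speed of the flux tube, Eq. (13), in SI units."""
    va2_i, va2_e = B_i**2 / (MU0 * rho_i), B_e**2 / (MU0 * rho_e)
    return np.sqrt((rho_i * va2_i + rho_e * va2_e) / (rho_i + rho_e))

def time_delay(v_lo, v_hi, dt):
    """Delay of the upper signal with respect to the lower one, from the
    peak of their cross-correlation (zero-mean, Hann-apodised)."""
    v_lo = (v_lo - v_lo.mean()) * np.hanning(len(v_lo))
    v_hi = (v_hi - v_hi.mean()) * np.hanning(len(v_hi))
    corr = np.correlate(v_hi, v_lo, mode="full")
    lag = np.argmax(corr) - (len(v_lo) - 1)
    return lag * dt

# synthetic example: a 200 s wave propagating upwards at 100 km/s, sampled every 1 s
dt, dz, speed = 1.0, 98.0, 100.0                   # s, km, km/s
t = np.arange(0.0, 600.0, dt)
v_lo = np.sin(2.0 * np.pi * t / 200.0)
v_hi = np.sin(2.0 * np.pi * (t - dz / speed) / 200.0)
print("inverse phase speed ~", time_delay(v_lo, v_hi, dt) / dz, "s/km")   # ~0.01 s/km
print("kink speed ~", kink_speed(1.0e-12, 5.0e-13, 1.1e-3, 1.1e-3), "m/s")
```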
## 4 Discussion: comparison to analytical formulas In order to compare our simulations to the analytical models, we quantified the cut-off frequency as a function of altitude. We define \(z_{c}\), the altitude at which \(c_{k}/v_{p}\) goes above a given threshold \(t_{r}\). This corresponds to the altitude where the wave leaves the cut-off regime and enters the propagating regime. That is, the cut-off altitude. We computed \(z_{c}\) for four values of \(t_{r}\) between 0.2 and 0.5. Considering the four simulations with different driver frequencies \(\omega\), we obtained the cut-off altitude as a function of the frequency, \(z_{c}(\omega)\). We compare this to the cut-off frequency as a function of altitude, \(\omega_{c}(z)\), predicted by the analytical models presented in Sect. 1. Figure 6: Inverse phase speed of the kink wave (\(1/v_{p}\)), and inverse kink speed of the flux tube (\(1/c_{k}\)), as a function of altitude. The phase speed is given for four different driver periods (\(P_{0}\)). Figure 7: Kink wave cut-off frequency as a function of altitude, from analytical models (left column of the legend), and from our numerical simulations (right column of the legend). We show the analytical predictions of Spruit (1981, SP81), Snow et al. (2017, Sn17), and of Lopin & Nagorny (2017, LN17) (_coloured lines_). For the last model, we computed the cut-off frequency for different values of \(z_{0}\), the “base of the atmosphere”. We show the cut-off altitude (\(z_{c}\)) for the four simulations that we ran with different driver frequencies (_black markers_). The cut-off altitudes are computed with different thresholds \(t_{r}\), indicated on the legend and described in the text. On Fig. 7, we show the cut-off frequency and altitude computed in our simulations, for different values of \(t_{r}\) (_black_ points). On the same figure, we show the predictions of the analytical formulas of Spruit (1981, Eq. (1)), Lopin and Nagorny (2017, Eq. (2)), and Snow et al. (2017, Eq. (3)) (_coloured lines_), computed for the temperature and density profiles used in our simulations. We implement the formula of Lopin and Nagorny (2017) for different values of \(z_{0}\), defined by the authors as "the base of the atmosphere", with no further details. Because this quantity is not accurately defined, we used four values of \(z_{0}\) in the range of 24 km (bottom cell of our simulation domain), to 1978 km. This loosely defined parameter broadens the range for the cut-off frequencies predicted by this formula. While the match is rather loose, the cut-off altitude \(z_{c}(\omega)\) measured in our simulations matches the overall variation the cut-off frequency \(\omega_{c}(z)\) predicted by the Lopin and Nagorny (2017) formula. In particular, the shape of the profiles are in good agreement. On the contrary, the Snow et al. (2017) model correctly predicts the cut-off frequency only in the lower transition region, but fails to do so in the upper transition region and corona. In particular, their model predicts a slower decrease of the cut-off frequency above 20 Mm, while the simulations and the Lopin and Nagorny (2017) show a continued decrease. Finally, the Spruit (1981) predictions are off by almost an order of magnitude at all altitudes. Thus, the formula of Lopin and Nagorny (2017) best predicts the cut-off frequency of transverse waves at different altitudes. 
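For completeness, the three analytical cut-off formulas (Eqs. (1)-(3)) and the threshold-based cut-off altitude used above can be evaluated with a few helper functions; in practice the altitude-dependent profiles passed to them would be the relaxed simulation profiles of Sect. 2.3, which are not reproduced here.

```python
import numpy as np

def omega_spruit(g, H, beta):
    """Cut-off frequency of Spruit (1981), Eq. (1)."""
    return np.sqrt(g / (8.0 * H) / (2.0 * beta + 1.0))

def omega_lopin_nagorny(z, H, ck0, delta_B2, z0_index=0):
    """Cut-off frequency of Lopin & Nagorny (2017), Eq. (2)."""
    dHdz = np.gradient(H, z)
    return np.sqrt(ck0**2 / (4.0 * H[z0_index] * H) * (delta_B2 * dHdz + H**2 / z**2))

def omega_snow(z, vA):
    """Cut-off frequency of Snow et al. (2017), Eq. (3)."""
    return vA / (2.0 * z)

def cutoff_altitude(z, ck_over_vp, threshold=0.3):
    """First altitude at which c_k / v_p exceeds the threshold, i.e. where the
    wave leaves the cut-off regime and enters the propagating regime."""
    above = np.asarray(ck_over_vp) > threshold
    return z[np.argmax(above)] if above.any() else np.nan
```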
While the broadened transition region in our simulations could affect the altitude-dependence of the cut-off frequency, this should have little impact on the validation of the analytical formulas. Indeed, these formulas include the atmospheric stratification through altitude-dependent profiles of either the pressure scale height or the Alfven speed (see Sect. 1). Because they make no hypothesis on these profiles, they should be valid regardless of the atmosphere considered. As such, the agreement with the simulations should not depend on the broadening of the transition region, provided the appropriate profile is fed into the formulas. After validating the Lopin and Nagorny (2017) formula by comparing it to our simulations, it should be applicable to other stratification profiles. We note that while analytical formulas can predict the kink cut-off frequency, this is not sufficient to know whether a kink wave with a given frequency will propagate into the corona. To that end, the thickness of the cut-off region and the strength of the attenuation have to be taken into account. As shown by our simulations, kink waves with higher frequencies (\(\geq 3\) mHz) can propagate into the corona by tunnelling through a region where they are cut-off (Sect. 3.3). Furthermore, these waves only experience a weak attenuation, because their frequency is close to the cut-off frequency. In fact, the cut-off frequency does not constitute a clear-cut boundary between oscillatory and non-oscillatory solutions. This was also reported for sound waves by Felipe and Sangeetaha (2020). Although the question of whether a solution is oscillating is well-defined mathematically, this is not straightforward to translate into a single cut-off frequency (Schmitz and Fleck, 1998). For this reason, there exist several canonical definitions for cut-off frequencies, set within the continuous variation between the oscillating and non-oscillating regimes (see e.g. Schmitz and Fleck, 1998 for sound waves in the solar atmosphere). As a result, cut-off frequencies are bound to be mere indications, rather than strong constraints, on the physical behaviour of a wave (Chae and Litvinenko, 2018). ## 5 Conclusions Transverse waves are a candidate mechanism for heating the solar corona. However, several analytical models predicted that they are cut-off in the transition region. In order to assess whether transverse waves can indeed heat the corona, it is thus crucial to determine whether they can propagate through the transition region. To that end, we have simulated the propagation of transverse kink waves in an open magnetic flux tube, embedded in an atmosphere extending from the chromosphere to the corona. We found that transverse waves are indeed cut-off in the lower solar atmosphere. However, only waves with low frequencies (\(\nu\lesssim 2\) mHz) are significantly affected. At higher frequencies, the cut-off occurs in a very thin layer (\(\sim 1\) Mm), and results in a weak attenuation. In this case, waves can tunnel through the cut-off layer, experiencing little to no amplitude attenuation. This means that transverse waves with high frequencies are able to transport energy from the chromosphere to the corona, where it can be dissipated and result in heating. Furthermore, we compared our simulations to several analytical models that predict the cut-off frequency of transverse waves. We conclude that the formula proposed by Lopin and Nagorny (2017) gives the best prediction. 
While our simulations use a broadened transition, we expect it to have little impact on the validation of analytical formulas. As such, the formula by Lopin and Nagorny (2017) should be able to predict the cut-off frequency for any atmospheric stratification profile. We note that while the cut-off frequency is a good first indicator of whether a wave can propagate into the corona, it cannot alone predict the whole behaviour of the wave. In particular, waves with frequencies just below the cut-off frequency (that should thus be cut-off) can still reach the corona, thanks to a combination of tunnelling, and weak attenuation. ###### Acknowledgements. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 724326). GP was supported by a CNES postdoctoral allocation. TVD was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 724326) and the C1 grant TRACESpace of Internal funds KU Leuven. K.K. recognises support from a postdoctoral mandate from KU Leuven Internal Funds (PDM/2019), from a UK Science and Technology Facilities Council (STFC) grant ST/T000384/1, and from a FWO (Fonds over Wetenschappelijk Onderzoek - Vlaanderen) postdoctoral fellowship (1273221N). The results received support from the FWO senior research project with number G08021N. _Software_: Astropy (Astropy Collaboration et al., 2013, 2018).
2310.03899
CrysFormer: Protein Structure Prediction via 3d Patterson Maps and Partial Structure Attention
Determining the structure of a protein has been a decades-long open question. Computing a protein's three-dimensional structure often poses nontrivial computation costs when classical simulation algorithms are utilized. Advances in the transformer neural network architecture -- such as AlphaFold2 -- achieve significant improvements for this problem, by learning from a large dataset of sequence information and corresponding protein structures. Yet, such methods only focus on sequence information; other available prior knowledge, such as protein crystallography and partial structure of amino acids, could be potentially utilized. To the best of our knowledge, we propose the first transformer-based model that directly utilizes protein crystallography and partial structure information to predict the electron density maps of proteins. Via two new datasets of peptide fragments (2-residue and 15-residue), we demonstrate that our method, dubbed \texttt{CrysFormer}, can achieve accurate predictions based on a much smaller dataset size and with reduced computation costs.
Chen Dun, Qiutai Pan, Shikai Jin, Ria Stevens, Mitchell D. Miller, George N. Phillips, Jr., Anastasios Kyrillidis
2023-10-05T21:10:22Z
http://arxiv.org/abs/2310.03899v1
# Crystormer: Protein Structure Prediction via 3d Patterson Maps and Partial Structure Attention ###### Abstract Determining the structure of a protein has been a decades-long open question. A protein's three-dimensional structure often poses nontrivial computation costs, when classical simulation algorithms are utilized. Advances in the transformer neural network architecture -such as AlphaFold2- achieve significant improvements for this problem, by learning from a large dataset of sequence information and corresponding protein structures. Yet, such methods only focus on sequence information; other available prior knowledge, such as protein crystallography and partial structure of amino acids, could be potentially utilized. To the best of our knowledge, we propose the first transformer-based model that directly utilizes protein crystallography and partial structure information to predict the electron density maps of proteins. Via two new datasets of peptide fragments (2-residue and 15-residue), we demonstrate our method, dubbed CrysFormer, can achieve accurate predictions, based on a much smaller dataset size and with reduced computation costs. ## 1 Introduction Proteins, the biological molecular machines, play a central role in the majority of cellular processes (Tanford & Reynolds, 2004). The investigation of a protein's structure is a classic challenge in biology, given that its function is dictated by its specific conformation. Proteins comprise long chains of linked, relatively small organic molecules called _amino acids_, with a set of twenty of them considered as standard. However, these underlying polypeptide chains fold into complex three-dimensional structures, as well as into larger assemblies thereof. Consequently, biologists aim to establish a standardized approach for experimentally determining and visualizing the overall structure of a protein at a low cost. In the past decades, there have been three general approaches to the protein structure problem: \(i)\) ones that rely on physical experimental measurements, such as X-ray crystallography, NMR, or cryo-electron microscopy; see (Drenth, 2007) for more details; \(ii)\) protein folding simulation tools based on thermodynamic or kinetic simulation of protein physics (Brini et al., 2020; Sippl, 1990); and, \(iii)\) evolutionary programs based on bioinformatics analysis of the evolutionary history of proteins (Sali & Blundell, 1993; Roy et al., 2010). Recent advances in machine learning (ML) algorithms have inspired a fourth direction which is to train a deep neural network model on a combination of a large-scale protein structure data set (i.e., the Protein Data Bank (wwPDB consortium, 2019)) and knowledge of the amino acid sequences of a vast number of homologous proteins, to directly predict the protein structure from the protein's amino acid sequence. Recent research projects -such as Alphafold2 (Jumper et al., 2021)- further show that, with co-evolutionary bioinformatic information (e.g., multiple sequence alignments), deep learning can achieve highly accurate predictions in most cases. **Our hypothesis and contributions.** While it is true that computational methods of predicting structures without experimentally confirming data are improving, they are not yet complete -in terms of the types of structures that can be predicted- and suffer from lack of accuracy in many of the details (Terwilliger et al., 2023). 
X-ray crystallographic data continues to be a gold standard for critical details describing chemical interactions of proteins. Having a robust and accurate way of going directly from an X-ray diffraction pattern to a solved structure would be a strong contribution to the field of X-ray crystallography. Such approaches are missing from the literature, with the exception of Pan et al. (2023), a recent effort on the same problem based on residual convolutional autoencoders. Here, we present the first transformer-based model that utilizes protein crystallography and partial structure information to directly predict the electron density maps of proteins, going one step beyond such recent approaches. While not yet ready to solve real problems, we demonstrate success on a simplified problem. As a highlight, using a new dataset of small peptide fragments of variable unit cell sizes -a byproduct of this work- we demonstrate that our method, named CrysFormer, can achieve more accurate predictions than state of the art (Pan et al., 2023) with less computations. Some of our findings and contributions are: * CrysFormer is able to process the global information in Patterson maps to infer electron density maps; to the best of our knowledge, along with Pan et al. (2023), these are the first works to attempt this setting. * CrysFormer can incorporate "partial structure" information, when available; we also show that such information could be incorporated in existing solutions that neglected this feature, like the convolutional U-Net-based architectures in Pan et al. (2023). However, the CrysFormer architecture still leads to better reconstructions. * In practice, CrysFormer achieves a significant improvement in prediction accuracy in terms of both Pearson coefficient and mean phase error, while requiring both a smaller number of epochs to converge and less time taken per epoch. * This work introduces a new dataset of variable-cell dipeptide fragments, where all of the input Patterson and output electron density maps were derived from the Protein Databank (PDB) (wwPDB consortium, 2019), solved by X-ray Crystallography. We will make this dataset publicly available. ## 2 Problem Setup and Related Work **X-ray crystallography and the crystallographic phase problem.** X-ray crystallography has been the most commonly used method to determine a protein's electron density map1 for over 100 years (Lattman and Loll, 2008). However, there is an open question, called the crystallographic phase problem, that prevents researchers from utilizing it to predict true structures/electron density maps. Footnote 1: The electron density is a measure of the probability of an electron being present just around a particular point in space; a complete electron density map can be used to obtain a molecular model of the unit cell. In review, each spot (known as a reflection) in an X-ray crystallography diffraction pattern is denoted by three indices \(h,k,l\), known as Miller indices (Ashcroft and Mermin, 2022). These correspond to sets of parallel planes within the protein crystal's unit cell that contribute to producing the reflections. The set of possible \(h,k,l\) values is determined by the radial extent of the observed diffraction pattern. Any reflection has an underlying mathematical representation, known as a structure factor, dependent on the locations and scattering factors of all the atoms within the crystal's unit cell. 
In math: \[F(h,k,l)=\sum_{j=1}^{n}f_{j}\cdot e^{2\pi i(hx_{j}+ky_{j}+lz_{j})}, \tag{1}\] where the scattering factor and location of atom \(j\) are \(f_{j}\) and \((x_{j},y_{j},z_{j})\), respectively. A structure factor \(F(h,k,l)\) has both an amplitude and a phase component (denoted by \(\phi\)) and thus can be considered a complex number. Furthermore, suppose we knew both components of the structure factors corresponding to all of the reflections within a crystal's diffraction pattern. Then, in order to produce an accurate estimate of the electron density at any point \((x,y,z)\) within the crystal's unit cell, we would only need to take a Fourier transform of all of these structures, as in: \[\rho(x,y,z)=\tfrac{1}{V}\cdot\sum_{h,k,l}|F(h,k,l)|\cdot e^{-2\pi i(hx+ky+lz- \phi(h,k,l))}, \tag{2}\] where \(V\) is the volume of the unit cell. The amplitude \(|F(h,k,l)|\) of any structure factor is easy to determine, as it is simply proportional to the square root of the measured intensity of the corresponding reflection. However, it is impossible to directly determine the phase \(\phi(h,k,l)\) of a structure factor, and this is what is well-known as the crystallographic phase problem (Lattman and Loll, 2008). **Solving the phase problem.** Various methods have been developed to solve the crystallography phase problem. The three commonly used methods are isomorphous replacement, anomalous scattering, and molecular replacement (Lattman and Loll, 2008; Jin et al., 2020). Also, what is known as direct methods have been successful for small molecules that diffract to atomic resolution, but they rarely work for protein crystallography, due to the difficulty of resolving atoms as separate objects. Alternative methods have been developed to solve the phase problem based on intensity measurements alone, known as phase retrieval (Guo et al., 2021; Kappeler et al., 2017; Rivenson et al., 2018). However, these methods have not been widely used in X-ray crystallography, because they assume different sampling conditions or were designed for non-crystallographic fields of physics. The iterative non-convex Gerchberg-Saxton algorithm (Fienup, 1982; Zalevsky et al., 1996) is a well-known example of such methods, but requires more measurements than is available in crystallography. Although adaptations of the Gerchberg-Saxton algorithm have been proposed for crystallography-like settings, they have not been used to solve the phase problem except in special cases where crystals have very high solvent content (He and Su, 2015; He et al., 2016; Kingston and Millane, 2022). More recently, Candes et al. (2013) introduced the Phaselift method, a convex, complex semidefinite programming approach, and Candes et al. (2015) the Wirtinger flow algorithm (Candes et al., 2015), a non-convex phase retrieval method; both these methods have not been applied practically, due to their computationally intensive nature. ## 3 CrysFormer: Using 3d Maps and Partial Structure Attention Inspired by (Hurwitz, 2020), we rely on deep learning solutions to directly predict the electron density map of a protein. Later in the text, we demonstrate that such a data-centric method achieves both better accuracy and reduced computational cost. 
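As a toy illustration of Eqs. (1) and (2), the following numpy sketch computes structure factors for a handful of point atoms in fractional unit-cell coordinates and then synthesizes the density by the inverse Fourier sum; the atom positions, scattering factors, and Miller-index range are made-up example values.

```python
import numpy as np

# made-up example: three point atoms in fractional unit-cell coordinates
frac_xyz = np.array([[0.10, 0.20, 0.30],
                     [0.45, 0.55, 0.65],
                     [0.80, 0.10, 0.40]])
f_j = np.array([6.0, 7.0, 8.0])   # scattering factors
V = 1.0                           # unit-cell volume (arbitrary units)

def structure_factor(h, k, l):
    """Structure factor F(h,k,l) of Eq. (1)."""
    phase = 2j * np.pi * (frac_xyz @ np.array([h, k, l]))
    return np.sum(f_j * np.exp(phase))

def density(x, y, z, hkl_max=5):
    """Electron density of Eq. (2): inverse Fourier sum of the structure factors
    (amplitude and phase together) over a small box of reflections."""
    rho = 0.0
    for h in range(-hkl_max, hkl_max + 1):
        for k in range(-hkl_max, hkl_max + 1):
            for l in range(-hkl_max, hkl_max + 1):
                rho += structure_factor(h, k, l) * np.exp(-2j * np.pi * (h * x + k * y + l * z))
    return rho.real / V

# the synthesized density peaks near the atom positions
print(density(0.10, 0.20, 0.30), density(0.25, 0.75, 0.90))
```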
**The Patterson function.** We utilize the _Patterson function_(Patterson, 1934), a simplified variation of the Fourier transform from structure factors to electron density, in which all structure factor amplitudes are squared, and all phases are set to zero (i.e., ignored), as in: \[p(u,v,w)=\tfrac{1}{V}\cdot\sum_{h,k,l}|F(h,k,l)|^{2}\cdot e^{-2\pi i(hu+kv+lw )}. \tag{3}\] It is important to note the Patterson map can be directly obtained from raw diffraction data without the need for additional experiments, or any other information. Due to the discrete size of the input and output layers in deep learning models, we can discretize and reformulate the electron density map -and its corresponding Patterson map- as follows: Suppose the electron density map of a molecule in interest is discretized into a \(N_{1}\times N_{2}\times N_{3}\) 3d grid. The electron density map can then be denoted as \(\mathbf{e}\in\mathbb{R}^{N_{1}\times N_{2}\times N_{3}}\). The Patterson map is then formulated as follows, where \(\odot\) means matrix element-wise multiplication: \[\mathbf{p}=\Re\left(\mathcal{F}^{-1}\left(\mathcal{F}(\mathbf{e})\odot \mathcal{F}(\widehat{\mathbf{e}})\right)\right)\approx\Re\left(\mathcal{F}^{- 1}\left(|\mathcal{F}(\mathbf{e})|^{2}\right)\right).\] Breaking down the above expression, \(\mathcal{F}(\mathbf{e})\odot\mathcal{F}(\widehat{\mathbf{e}})\approx| \mathcal{F}(\mathbf{e})|^{2}\) denotes only the magnitude part of the complex signals, as measured through the Fourier transform of the input signal \(\mathbf{e}\). Here, \(\widehat{\mathbf{e}}\) denotes an inverse-shifted version of \(\mathbf{e}\), where its entries follow the shifted rule as in \(\widehat{e}_{i,j,k}=e_{N-i,N-j,N-k}\). **Using deep learning.** We follow a data-centric approach and train a deep learning model, abstractly represented by \(g(\boldsymbol{\theta},\cdot)\), such that given a Patterson map \(\mathbf{p}\) as input, it generates an estimate of an electron density map, that resembles closely the true map \(\mathbf{e}\). Formally, given a data distribution \(\mathcal{D}\) and \(\left\{\mathbf{p}_{i},\mathbf{e}_{i}\right\}_{i=1}^{n}\sim\mathcal{D}\), where \(\mathbf{p}_{i}\in\mathbb{R}^{N_{1}\times N_{2}\times N_{3}}\) is the Patterson map that corresponds to the true data electron density map, \(\mathbf{e}_{i}\in\mathbb{R}^{N_{1}\times N_{2}\times N_{3}}\), deep learning training aims in finding \(\boldsymbol{\theta}^{\star}\) as in: \[\boldsymbol{\theta}^{\star}=\operatorname*{arg\,min}_{\boldsymbol{\theta}}\;\left\{ \mathcal{L}(\boldsymbol{\theta}):=\tfrac{1}{n}\sum_{i=1}^{n}\ell(\boldsymbol{ \theta};\;g,\{\mathbf{p}_{i},\mathbf{e}_{i}\})=\tfrac{1}{n}\sum_{i=1}^{n}\|g( \boldsymbol{\theta},\mathbf{p}_{i})-\mathbf{e}_{i}\|_{2}^{2}\right\}.\] Since we have a regression problem, we use mean squared error as the loss function \(\mathcal{L}(\boldsymbol{\theta})\). **Using partial protein structures.** Due to the well-studied structure of amino acids, we aim to optionally utilize standardized _partial structures_ to aid prediction, when they are available. For example, let \(\mathbf{u}_{i}^{j}\in\mathbb{R}^{N_{1}\times N_{2}\times N_{3}}\) be the known standalone electron density map of the \(j\)-th amino acid of the \(i\)-th protein sample, in a standardized conformation. 
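The discretized Patterson-map construction above amounts to two FFTs; a minimal numpy sketch (with a random grid standing in for a real electron density map) is:

```python
import numpy as np

def patterson_map(e):
    """Patterson map of a discretized electron density grid e:
    p = Re( F^{-1}( |F(e)|^2 ) ), i.e. the (circular) autocorrelation of e."""
    E = np.fft.fftn(e)
    return np.real(np.fft.ifftn(np.abs(E) ** 2))

# toy example: a random non-negative "density" on a small 3d grid
rng = np.random.default_rng(0)
e = rng.random((16, 16, 16))
p = patterson_map(e)

# the Patterson map is centrosymmetric: p(u) == p(-u), up to numerical error
assert np.allclose(p, np.roll(np.flip(p), 1, axis=(0, 1, 2)))
print(p.shape, float(p[0, 0, 0]))
```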
Abstractly, we then aim to optimize: \[\boldsymbol{\theta}^{\star}=\operatorname*{arg\,min}_{\boldsymbol{\theta}}\; \left\{\mathcal{L}(\boldsymbol{\theta}):=\tfrac{1}{n}\sum_{i=1}^{n}\ell( \boldsymbol{\theta};\;g,\{\mathbf{p}_{i},\mathbf{e}_{i},\mathbf{u}_{i}^{j}\} )=\tfrac{1}{n}\sum_{i=1}^{n}\|g(\boldsymbol{\theta},\mathbf{p}_{i},\mathbf{u }_{i}^{j})-\mathbf{e}_{i}\|_{2}^{2}\right\}.\] **Challenges and Design Principles.** We face the difficult learning problem to infer electron density maps \(\mathbf{e}\) from Patterson maps \(\mathbf{p}\), which involves Fourier transformations. _These transformations can be intuitively considered as transforming local information to global information_, which is rare in common deep model use cases. Secondly, it is nontrivial to incorporate the partial structure density maps \(\mathbf{u}_{i}^{j}\) to aid prediction. Thirdly, the 3d data format of both our inputs and outputs often increases substantially the computational requirements. Finally, since part of our contributions is novel datasets on this problem, we need to be data efficient due to the expensive dataset creation cost. Thus, the main design principles for our model can be summarized as: * _Design Principle #1_: Be able to process the global information in Patterson maps to correctly infer the corresponding electron density maps; * _Design Principle #2_: Be able to incorporate partial structure information, when available; * _Design Principle #3_: Learn to fulfill the above, with reduced computational and data-creation costs. **Gap in current knowledge.** As an initial attempt, the well-established convolution-based U-Net model (Ronneberger et al., 2015) could be utilized for this task. This is the path followed in (Pan et al., 2023). However, classical U-Nets cannot fulfill the design principles above, since: \(i)\) they mostly rely on local information within CNN layers; such a setup is not suitable when Patterson maps are available, since the latter do not have meaningful local structures. \(ii)\) It is not clear (or, at best, non-trivial) to incorporate any partial protein structures prior information, since the latter is in a different representation domain, compared to Patterson maps. Finally, \(iii)\) a large 3d U-Net model is computationally expensive and inefficient, due to the 3d filter convolution computation. **Our proposal: CrysFormer**. We propose CrysFormer, a novel, 3d Transformer model (Vaswani et al., 2017; Chen et al., 2021) with a new self-attention mechanism to process Patterson maps and partial protein structures, to directly infer electron density maps with reduced costs. Inspired by recent research on the potential connection between Fourier transforms and the self-attention mechanism, found in the Transformer model (Lee-Thorp et al., 2022), CrysFormer captures the global information in Patterson maps and "translates" it into correct electron density map predictions, via our proposed self-attention mechanism (_Design Principle #1_). CrysFormer does not need an encoder-decoder structure (Vaswani et al., 2017) and artificial information bottlenecks (Cheng et al., 2019) -as in the U-Net architecture- to force the learning of global information. By definition, CrysFormer is able to handle additional partial structure information, which comes from a different domain than the Patterson maps (_Design Principle #2_; more details below). Finally, by using efficient self-attention between 3d image patches, we can significantly reduce the overall computation cost. 
Detaching our model from an encoder-decoder architecture further reduces the required depth of the model and, thus, the overall training cost (_Design Principle #3_). **The architecture of the CrysFormer.** We follow ideas of a 3d visual Transformer (Chen et al., 2021) by partitioning the whole input 3d Patterson map \(\mathbf{p}_{i}\in\mathbb{R}^{N_{1}\times N_{2}\times N_{3}}\) into a set of smaller 3d patches. We embed them into one-dimensional "word tokens", and feed them into a multi-layer, encoder-only Transformer module. If partial structures \(\mathbf{u}_{i}^{j}\) are also available, we partition them into 3d patches and embed them into additional tokens that are sent to each self-attention layer. This way, the tokens in each layer can also "attend" to the electron density of partial structures, as a reference for final global electron density map predictions. Finally, we utilize a 3d convolutional layer to transform "word tokens" back into a 3d electron density map.2 See Figure 1. Footnote 2: We also utilize 3d convolutional layer(s) at the very beginning of the execution to expand the number of channels of the Patterson map (and potentially partial structure) inputs. Mathematically, we report the following: The first part is the preprocessing and partitioning of input Patterson maps \(\mathbf{p}\) and additional partial structures \(\mathbf{u}^{j}\) into 3d patches of size \(d_{1}\times d_{2}\times d_{3}\). We embed those patches into one-dimensional tokens with dimension \(d_{t}\), using a small MLP, and add a learned positional embedding to them; this holds for both Patterson maps and partial structures, as below: \[\text{Patterson map }\mathbf{p}:\qquad\mathbf{X}^{0} =\texttt{3DCNN}_{\mathbf{W}_{e}}(\mathbf{p})\in\mathbb{R}^{c\times N_{1}\times N_{2}\times N_{3}}\] \[\mathbf{X}^{0} =\texttt{Partition}(\mathbf{X}^{0})\in\mathbb{R}^{\frac{N_{1}}{d_{1}}\times\frac{N_{2}}{d_{2}}\times\frac{N_{3}}{d_{3}}\times(cd_{1}d_{2}d_{3})}\] \[\mathbf{X}^{0} =\texttt{Flatten}(\mathbf{X}^{0})\in\mathbb{R}^{\frac{N_{1}N_{2}N_{3}}{d_{1}d_{2}d_{3}}\times(cd_{1}d_{2}d_{3})}\] \[\mathbf{X}^{0} =\texttt{MLP}_{\mathbf{W}_{e}}(\mathbf{X}^{0})\in\mathbb{R}^{\frac{N_{1}N_{2}N_{3}}{d_{1}d_{2}d_{3}}\times d_{t}}\] \[\text{Partial structures }\mathbf{u}^{j}:\qquad\mathbf{U}^{j} =\texttt{MLP}_{\mathbf{W}_{e}}(\mathbf{U}^{j})\in\mathbb{R}^{\frac{N_{1}N_{2}N_{3}}{d_{1}d_{2}d_{3}}\times d_{t}}\] \[\mathbf{X}^{0} =\mathbf{X}^{0}+\texttt{PosEmbedding}(\tfrac{N_{1}N_{2}N_{3}}{d_{1}d_{2}d_{3}})\] where each \(\mathbf{U}^{j}\) is obtained by applying the same 3DCNN-Partition-Flatten pipeline to the partial structure map \(\mathbf{u}^{j}\). As shown in Figure 1, we design an efficient attention mechanism such that \(i)\) only tokens from Patterson maps attend tokens from the partial structures; \(ii)\) the tokens from the additional partial structures are not passed to the next layer. This is based on the idea that the partial structure electron density information should be used by the model as a stable reference to attend to in each layer. This one-way attention also greatly reduces the overall communication cost. In particular, let the token sequence length be \(S=\frac{N_{1}N_{2}N_{3}}{d_{1}d_{2}d_{3}}\) and let \(d_{h}\) denote the dimension of the attention head. Assuming we have \(H\) attention heads and \(L\) layers, \(\texttt{CrysFormer}\) uses the following attention mechanism: Figure 1: Abstract depiction of the \(\texttt{CrysFormer}\), which utilizes a one-way attention mechanism (red and purple arrows) to incorporate the partial structure information. The tokens from the additional partial structure all come from the initial 3d CNN embedding and are not passed to the next layer.
\[\mathbf{U} =\texttt{Concat}^{J}_{j=1}(\mathbf{U}^{j})\in\mathbb{R}^{(SJ)\times d_{t}}\] \[\mathbf{A}^{h} =\texttt{Softmax}\left((\mathbf{W}^{h}_{q}\mathbf{X}^{\ell})^{\top}\left(\texttt{Concat}(\mathbf{W}^{h}_{k}\mathbf{X}^{\ell},\mathbf{W}^{h}_{k^{\prime}}\mathbf{U})\right)\right)\in\mathbb{R}^{S\times(S+SJ)};\] \[\widehat{\mathbf{V}}^{h} =\mathbf{A}^{h}\left(\texttt{Concat}(\mathbf{W}^{h}_{v}\mathbf{X}^{\ell},\mathbf{W}^{h}_{v^{\prime}}\mathbf{U})\right)\in\mathbb{R}^{S\times d_{h}};\] \[\mathbf{O} =\mathbf{W}_{o}\texttt{Concat}\left(\widehat{\mathbf{V}}^{1},\ \widehat{\mathbf{V}}^{2},\ldots,\ \widehat{\mathbf{V}}^{H}\right)\in\mathbb{R}^{S\times d_{t}};\] \[\mathbf{X}^{\ell+1} =\mathbf{W}_{\texttt{H2}}(\texttt{ReLU}(\mathbf{W}_{\texttt{H1}}\mathbf{O})),\] where, omitting the layer index, \(\mathbf{W}^{h}_{q}\), \(\mathbf{W}^{h}_{k}\), \(\mathbf{W}^{h}_{v}\) are the trainable query, key, and value projection matrices of the \(h\)-th attention head for tokens from the Patterson map, and \(\mathbf{W}^{h}_{k^{\prime}}\), \(\mathbf{W}^{h}_{v^{\prime}}\) are the corresponding matrices for tokens from the partial structure, each with dimension \(d_{h}\). Further, \(\mathbf{W}_{\texttt{H1}}\) and \(\mathbf{W}_{\texttt{H2}}\) are the trainable parameters of the fully-connected layers. We omit skip connections and layer normalization modules just to simplify notation, but these are included in practice. As a final step, we transform the output embedding back to a 3d electron density map, as follows: \[g(\boldsymbol{\theta},\mathbf{p})=\texttt{tanh}(\texttt{3DCNN}_{\mathbf{W}_{o}}(\texttt{Rearrange}(\texttt{MLP}(\mathbf{X}^{L}))))\in\mathbb{R}^{N_{1}\times N_{2}\times N_{3}},\] and, as stated previously, we use as our loss function the standard mean squared error loss. ## 4 New Datasets We generate datasets of protein fragments, where input Patterson and output electron density maps are derived from Protein Data Bank (PDB) entries of proteins solved by X-ray crystallography (wwPDB consortium, 2019). We start from a curated basis of \(\sim 24,000\) such protein structures. Then, from a random subset of about half of these structures, we randomly select and store segments of adjacent amino acid residues. These examples consist of dipeptides (two residues) and 15-residue fragments, leading to two datasets that we introduce with this work. In the latter dataset, each example contains 15 residues, and at most 3 residues may be shared between different examples. Using the pdbfixer Python API (Eastman et al., 2017), we remove all examples that either contain nonstandard residues or have missing atoms from our initial set. We also apply a few standardized modifications. For our dipeptide dataset, we then iteratively expand the unit cell dimensions for each example, starting from the raw \(\max-\min\) ranges in each of the three axis directions, attempting to create a minimal-size unit cell where the minimum atomic contact is at least \(2.75\) Angstroms (A).3 For our 15-residue dataset, we instead place atoms in fixed unit cells of size \(41\) A x \(30\) A x \(24\) A to simplify the now much harder problem. After this, all examples that still contain atomic contacts of less than \(2.75\) A are discarded. The examples are then reoriented via a reindexing operation, such that the first axis is always the longest and the third axis is always the shortest. Footnote 3: An Angstrom is a metric unit of length equal to \(10^{-10}\)m.
One issue leading to potential ambiguity in interpreting Patterson maps is their invariance to translation of the entire corresponding electron density (Hurwitz, 2020). To tackle this, we center all atomic coordinates such that the center of mass is in the center of the corresponding unit cell. This means that our model's predicted electron densities would always be more or less centered in the unit cell. We note that this is also the case for the majority of actual protein crystals. Structure factors for each remaining example, as well as those for the corresponding partial structures for each of the present amino acids, are generated using the gemmi sfcalc program (Wojdyr, 2022) to a resolution of \(1.5\) A. An electron density and Patterson map for each example are then obtained from those structure factors with the fft program of the CCP4 program suite (Read and Schierbeek, 1988; Winn et al., 2011); partial structure densities are obtained in the same manner. We specify a grid oversampling factor of \(3.0\), resulting in a \(0.5\) A grid spacing in the produced maps. All these maps are then converted into PyTorch tensors. We then normalize the values in each of the tensors to be in the range \([-1,\ 1]\). Since, in our PyTorch implementation, all examples within a training batch must be of the same size, we remove all examples from the tensor-size bins containing fewer examples than a specified minimum batch size. ## 5 Experiments **Baselines.** There are no readily available off-the-shelf solutions for our setting, as our work is one of the first of this kind. As our baseline, we use a CNN-based U-Net model (Pan et al., 2023); this architecture is widely used in image transformation tasks (Ronneberger et al., 2015; Yan et al., 2021). For comparison, we have further enhanced this vanilla U-Net with \(i)\) additional input channels to incorporate the partial structure information, despite being evidently unsound; and \(ii)\) a refining model procedure, which retrains the U-Net using previous model predictions as additional input channels. Both of these extensions are shown to greatly improve the performance of the vanilla U-Net. We refer the reader to the appendix for more details on our baseline model architecture. **Metrics.** During testing, we calculate the Pearson correlation coefficient between the ground truth targets \(\mathbf{e}\) and model predictions \(g(\mathbf{\theta},\mathbf{p})\); the larger this coefficient is, the better. Let us denote a model prediction as \(\mathbf{e}^{\prime}\). We define \(\bar{\mathbf{e}}=\frac{1}{N_{1}N_{2}N_{3}}\sum_{i,j,k}\mathbf{e}_{i,j,k}\) and \(\bar{\mathbf{e}}^{\prime}=\frac{1}{N_{1}N_{2}N_{3}}\sum_{i,j,k}\mathbf{e}^{\prime}_{i,j,k}\). Then, the Pearson correlation coefficient between \(\mathbf{e}\) and \(\mathbf{e}^{\prime}\) is as below: \[\mathtt{PC}(\mathbf{e},\mathbf{e}^{\prime})=\frac{\sum_{i,j,k=1}^{N_{1},N_{2},N_{3}}(\mathbf{e}^{\prime}_{i,j,k}-\bar{\mathbf{e}}^{\prime})(\mathbf{e}_{i,j,k}-\bar{\mathbf{e}})}{\sqrt{\sum_{i,j,k=1}^{N_{1},N_{2},N_{3}}(\mathbf{e}^{\prime}_{i,j,k}-\bar{\mathbf{e}}^{\prime})^{2}+\epsilon}\cdot\sqrt{\sum_{i,j,k=1}^{N_{1},N_{2},N_{3}}(\mathbf{e}_{i,j,k}-\bar{\mathbf{e}})^{2}+\epsilon}}, \tag{4}\] where \(\epsilon\) is a small constant to prevent division by zero; a minimal code sketch of this metric is given below. To demonstrate how well our methods solve the phase problem, we also perform phase error analysis on our models' final post-training predictions using the cphasematch program of the CCP4 program suite (Cowtan, 2011).
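As referenced above, a minimal sketch of the Pearson correlation metric of Eq. (4); the grid dimensions and the \(\epsilon\) value are arbitrary illustrative choices, not taken from the paper's implementation.

```python
import numpy as np

def pearson_cc(e_true, e_pred, eps=1e-8):
    """Pearson correlation of Eq. (4) between a ground-truth map and a prediction."""
    et = e_true - e_true.mean()
    ep = e_pred - e_pred.mean()
    num = (ep * et).sum()
    den = np.sqrt((ep ** 2).sum() + eps) * np.sqrt((et ** 2).sum() + eps)
    return num / den

# Toy usage on random 3d grids standing in for electron density maps.
e = np.random.rand(48, 32, 24)
print(pearson_cc(e, e))                                    # ~1 for identical maps
print(pearson_cc(e, e + 0.1 * np.random.randn(*e.shape)))  # slightly below 1
```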
We report the mean phase errors of our predictions in degrees, as reported by cphasematch, where a smaller phase error is desirable. Finally, we compare the convergence speed and computation cost of both methods. **Results on two-residues.** A summary of our results on our dipeptide dataset, which consisted of \(1,894,984\) training and \(210,487\) test cases, is provided in Table 1. \begin{table} \begin{tabular}{c c c c c} \hline \hline Method & Mean \(\mathtt{PC}(\mathbf{e},\mathbf{e}^{\prime})\) & Mean Phase Error & Epochs & Time per epoch (mins.) \\ \hline U-Net (Pan et al., 2023) & 0.735 & 67.40\({}^{\circ}\) & 50 & 28.93 \\ U-Net+R (This work) & 0.775 & 58.67\({}^{\circ}\) & 90 & 29.06 \\ U-Net+PS+R (This work) & 0.839 & 51.34\({}^{\circ}\) & 90 & 29.31 \\ CrysFormer (This work) & **0.939** & **35.16\({}^{\circ}\)** & **35** & **12.37** \\ \hline \hline \end{tabular} \end{table} Table 1: CrysFormer versus baselines on the dipeptide dataset. U-Net+R refers to adding the refining procedure to U-Net training; U-Net+PS+R refers to adding further partial structures as additional channels. Overall, CrysFormer achieves a significant improvement in prediction accuracy in terms of both the Pearson coefficient and phase error, while requiring a shorter time (in epochs) to converge. CrysFormer also incurs much less computation cost, which results in significantly reduced wall clock time per epoch. We further visualize some of the predictions in Figure 2, comparing side by side those made by the baselines and the CrysFormer. CrysFormer produces more accurate predictions in terms of both global and local structures. This verifies our hypothesis that \(i)\) the self-attention mechanism can better capture the global information in Patterson maps, and \(ii)\) the removal of the U-Net's encoder-decoder structure prevents loss of information and improves the reproduction of finer details. E.g., the top row of Figure 2 represents a class of examples containing a large aromatic residue, Tryptophan. U-Net+R models consistently produce poor predictions in this case, while the CrysFormer better handles such residues. U-Net+PS+R shows that both providing additional input channels and using the refining procedure improves results even for U-Net architectures; yet, CrysFormer still provides better reconstruction. More visualizations can be found in the appendix. We further plot the calculated average mean phase errors of the predictions of our models against reflection resolution; see the left panel of Figure 3. The predictions made by CrysFormer have lower mean phase error, compared to baselines. This means that the CrysFormer predictions, on average, can better reproduce the general shape, as well as finer details, of the ground truth electron densities. Finally, we generate a chart of the fraction of our models' predictions for which the calculated mean phase error is \(<60^{\circ}\) at various ranges of resolution. We consider such predictions to accurately reproduce the level of detail specified by that resolution range. This is shown on the right panel in Figure 3. At all resolution ranges, CrysFormer predictions are clearly better than those of the U-Net-based models. In particular, for CrysFormer, we still have a majority of predictions with phase error \(<60^{\circ}\) even at the highest ranges of resolution.
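For intuition on what these phase-error numbers measure, the following is a heavily simplified, unweighted stand-in for the cphasematch comparison (the actual program weights reflections and bins them by resolution, which this sketch does not do):

```python
import numpy as np

def mean_phase_error_deg(e_true, e_pred):
    """Mean absolute phase difference (degrees) between the Fourier coefficients
    of two maps; a simplified, unweighted analogue of the cphasematch statistic."""
    dphi = np.angle(np.fft.fftn(e_true)) - np.angle(np.fft.fftn(e_pred))
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi   # wrap differences into (-pi, pi]
    return np.degrees(np.abs(dphi).mean())

e = np.random.rand(16, 16, 16)
print(mean_phase_error_deg(e, e))                            # ~0 for identical maps
print(mean_phase_error_deg(e, np.random.rand(16, 16, 16)))   # ~90 for unrelated maps
```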
Figure 2: Visualization of electron density predictions for baselines and CrysFormer: Ground truth density maps are shown in blue, while predictions are shown in red. The model used to generate the ground truth electron density is shown in stick representation for reference. Figure 3: Dipeptide dataset. **Left:** Average phase error of model predictions against reflection resolution. **Right:** Fraction of model predictions for which phase error is \(<60^{\circ}\) at various ranges of resolution. **Results on 15-residues.** On our dataset of 15-residue examples, which consisted of only \(165,858\) training and \(16,230\) test cases (less than one-tenth the size of our dipeptide dataset), we trained for 80 epochs to a final average test set Pearson correlation of about \(0.747\). We then performed a refining training run of 20 epochs, incorporating the original training run's predictions as additional input channels when training the CrysFormer, and obtained an improved average test set Pearson correlation of about \(0.77\) and a phase error of about \(67.66^{\circ}\). On both of these runs, we used the Nystrom approximate attention mechanism (Xiong et al., 2021) when incorporating our partial structure information to reduce time and space costs. Even so, each training epoch still took about \(6.28\) hours to complete. Thus, due to time considerations, we decided not to attempt to train a U-Net on this dataset for purposes of comparison. We provide visualizations of some model predictions in Figure 4; more can again be found in the appendix. We also plot the average mean phase errors of the predictions of our models against reflection resolution, as well as the fraction of our models' predictions for which the calculated mean phase error is \(<60^{\circ}\) at various ranges of resolution, in Figure 5. These results show that this is a more difficult dataset with reduced sample size; yet CrysFormer predictions tend to accurately reproduce details of the desired electron densities. Furthermore, after automatic map interpretation using the autobuilding routines in _shelxe_ (Uson & Sheldrick, 2018) to obtain a poly-alanine chain from each of the \(16230\) test set predictions, we found that almost \(74\%\) of the resulting models had calculated amplitudes with a Pearson correlation of at least \(0.25\) to the true underlying data. Historical results indicate that further refinement would very likely produce a "correct" model if the initial poly-alanine model has at least such a correlation. Figure 4: Visualization of two successful predictions after a refining training run; ground truth density maps shown in blue and predictions shown in green. Figure 5: **Left:** Average phase error of model predictions on 15-residue dataset against reflection resolution. **Right:** Fraction of model predictions on 15-residue dataset for which phase error is \(<60^{\circ}\) at various ranges of resolution. ## 6 Discussion We have shown that CrysFormer outperforms state-of-the-art models for predicting electron density maps from corresponding Patterson maps in all metrics on a newly introduced dataset (dipeptide). Overall, CrysFormer requires fewer epochs to reasonably converge and has a smaller computational footprint. We used the _Autobuild_ program within the _PHENIX_ suite (Terwilliger et al., 2008; Liebschner et al., 2019) to perform automated model building and crystallographic refinement on a randomly selected subset of \(302\) test set predictions after the refining training run.
We found that \(281\) out of \(302\) (\(\sim 93\%\)) refined to a final atomic model with a crystallographic \(R\)-factor of less than \(0.38\), indicating success, when solvent flattening was applied. Without solvent flattening, \(258\) out of \(302\) (\(\sim 85\%\)) refined to such an \(R\)-factor (performing solvent flattening is known to be especially effective for unit cells with high solvent content, i.e. a large amount of empty space around the atoms). Figure 6 shows these results as scatterplots; clearly, only a small fraction of the subset of predictions did not refine successfully. And even if no refinement was performed at all, and instead an atomic model was repeatedly fit to our predicted electron densities, we found that \(229\) out of \(302\) (\(\sim 76\%\)) of the best such atomic models still had a crystallographic \(R\)-factor of less than \(0.38\). Figure 6: **Left Panel:** Scatterplot of post-refinement model \(R\)-factors, with solvent flattening applied. **Right Panel:** Scatterplot of post-refinement model \(R\)-factors, without solvent flattening applied. Furthermore, our "refining" procedure greatly improves training for the vanilla U-Net architecture on our dipeptide dataset, as well as for training CrysFormer on both our dipeptide and 15-residue datasets. **Limitations and next steps.** Following successful results on our initial 15-residue dataset, we suggest training our model on variable unit cells at that problem size as future work. Eventually, we would also like to handle variable cell angles, moving beyond the orthorhombic crystal system. We will explore changing the formulation of our partial structures to have more than one amino acid residue in a structure, as having each partial structure represent only a single residue may no longer be reasonable, both computationally and from a practical perspective. **Broader Impacts.** Solving the crystallographic phase problem for proteins would dramatically reduce the time and expense of determining a new protein structure, especially if there are no close homologs already in the Protein Data Bank. There exist some methods that sometimes work under special conditions (Jiang et al., 2018), or that work sometimes but only at very low resolutions (David & Subbiah, 1994). The recent line of work on AlphaFold (Jumper et al., 2021; Tunyasuvunakool et al., 2021) definitely helps in these problems; we note, though, that this is true mostly in cases where reliable predictions are possible due to strong homologs and/or extensive sequence data.
2307.01499
Comparing dendritic trees with actual trees
Since they became observable, neuron morphologies have been informally compared with biological trees but they are studied by distinct communities, neuroscientists, and ecologists. The apparent structural similarity suggests there may be common quantitative rules and constraints. However, there are also reasons to believe they should be different. For example, while the environments of trees may be relatively simple, neurons are constructed by a complex iterative program where synapses are made and pruned. This complexity may make neurons less self-similar than trees. Here we test this hypothesis by comparing the features of segmented sub-trees with those of the whole tree. We indeed find more self-similarity within actual trees than neurons. At the same time, we find that many other features are somewhat comparable across the two. Investigation of shapes and behaviors promises new ways of conceptualizing the form-function link.
Roozbeh Farhoodi, Phil Wilkes, Anirudh M. Natarajan, Samantha Ing-Esteves, Julie L. Lefebvre, Mathias Disney, Konrad P. Kording
2023-07-04T06:12:28Z
http://arxiv.org/abs/2307.01499v1
# Comparing Dendritic Trees with Actual Trees ###### Abstract. Since they became observable, neuron morphologies have been informally compared with biological trees but they are studied by distinct communities, neuroscientists, and ecologists. The apparent structural similarity suggests there may be common quantitative rules and constraints. However, there are also reasons to believe they should be different. For example, while the environments of trees may be relatively simple, neurons are constructed by a complex iterative program where synapses are made and pruned. This complexity may make neurons less self-similar than trees. Here we test this hypothesis by comparing the features of segmented sub-trees with those of the whole tree. We indeed find more self-similarity within actual trees than neurons. At the same time, we find that many other features are somewhat comparable across the two. Investigation of shapes and behaviors promises new ways of conceptualizing the form-function link. ## 1. Introduction At first glance, most neuron morphologies remind us of the structure of actual trees, the ones that have green leaves. Trees have been invoked frequently as an analogy for the complex neuronal structures in the brain. Santiago Ramon y Cajal, the father of modern neuroscience and the first person to visualize the breadth of neuronal arborizations, has described this similarity many times: "The cerebral cortex is similar to a garden filled with innumerable trees, the pyramidal cells, that can multiply their branches thanks to an intelligent cultivation, sending their roots deeper and producing more exquisite flowers and fruits every day." [10]. Neuroanatomists (or botanists), use the similarity between the samples of neurons (or trees) to divide them into neuron classes (or species of trees). While neurons and trees have significantly different sizes (from \(\mu\)m for neurons to km for trees) their similarities in structure may reveal some unifying principles underlying their morphology [1]. Exploring these similarities requires detailed 3D structural measurements to compare the arborization structures despite the vast scale differences. The arborization structure of trees and neurons can be seen as the result of processes that act on two different timescales: evolutionary and developmental timescales. Over evolutionary timescales, genetic evolution acts to hardwire arborizations that have survival benefits. This usually directs the development of the overall branch architecture. On the developmental timescale, arborizations are patterned through an intersection of molecular and physical cues. This refines the development of arborization according to local resources. Despite these processes acting on both trees and neurons, there are some clear differences. Neurons grow together and essentially pack the whole brain volume, while trees have free space in their surroundings to respire. The arborizations of neurons touch one another producing an environment of communication. Conversely, trees can, and do, survive and thrive individually, and their communications are limited to their physical interactions and potentially their interactions through fungi. Recent evidence shows that trees living in close proximity do not simply compete with each other, but can develop complex resource-sharing and collaboration networks e.g. the so-called 'wood wide web' [12, 13]. Neurons react to changes in input in the order of milliseconds or seconds through transmission and plasticity [14]. 
Trees do respond on these time scales e.g. diffusion and transpiration at the stomatal level, but on much longer time scales as well, particularly regarding the structural change (weeks to decades) [15]. Tree growth is affected by the amount of light, nutrients, and water they receive as well as gravity, prevailing winds, and temperature. Similarly, the elements that affect a neuron's growth include molecular signals and the activities of other neurons. Both neurons and trees develop in an environment of meaningful stimuli and biological and physical constraints, which affect their arborization structures when they are matured. In spite of the differences between their respective environments, they undoubtedly share many relevant factors. First, both trees and neurons process information [13, 1]. In trees, this information is about the surrounding environment including the availability of light, water, nutrients, and certain survival factors such as competition and ease of reproduction; in neurons, it is about what function they serve and their movement of electrical and chemical signals. Additionally, both trees and neurons can grow in populations, trees in forests, and neurons in the network of the brain and the nervous system. By comparing the two, we might determine the optimal structures and patterns that trees and neurons have acquired to serve their function or survival. Comparing neuron and actual tree arborizations requires a measurement of their structures. For neurons, this structure is captured through single neuron labeling followed by microscopic imaging [16]. This often requires slicing neural tissue, staining and imaging the tissue, and then 3D reconstruction of the morphology by tracing the detected arbors using software (figure 1.a). The structure of trees is most often described using relatively simple whole-tree metrics such as diameter-at-breast height (DBH), tree height, height-to-crown, and crown diameter. More recent measurements from terrestrial laser scanning (TLS) have enabled much more detailed measures of whole-tree branching architecture and topology i.e. branch length, radius, angle, and even path length [1]. These measures are revolutionizing our understanding of both trees and neurons [13, 14, 15]. We thus finally have data to be able to quantitatively compare actual trees and neuronal arborizations. Here, we compare the morphologies of various neuron subtypes to distinct species of trees. Neuron subtype is akin to tree species and here we use 'class' to refer to both neural subtype and/or tree species. Both data types are represented as geometrical graphs, which means that the shape is broken up into nodes with an associated location as well as a map of how these nodes are connected. To compare them, we need to convert the graphical structure to a feature vector. The extracted features can depend on the general structure of the neuron or tree such as its size, its local structures such as the branching angles, or the number of stems attached to the soma/root. Features help us to compare classes of neurons with each other, species of trees with each other, and a class of neurons with a specie of trees. Our analysis finds a large set of morphological aspects to be shared between trees and neurons. Neurons and trees are often self-similar, i.e. their patterns are scaling up with their size. We define a self-similarity measurement by using the histogram of the features. 
In our definition, we first extract all sub-trees of a tree that start from any possible branching points. To avoid the artifacts that may come from small sub-trees, we discard sub-trees that had has less than ten leaves. Each sub-tree can be seen as a new sample such that its cutting branching point is its root. We can extract six aforementioned features for a sub-tree. We find that self-similarity is stronger for trees, suggesting that there is more relevant heterogeneity for the function and structure of neurons. ## 2. Results To ask how common principles and diverse functional objectives are reflected in the shapes of neurons and leaf-carrying trees, we can rely on newly emerging datasets. Advancements in neuroimaging techniques and ecology enable us to collect 3D structure samples of neurons and trees. Here the trees in our dataset belong to five different geological locations (Gabon, Ghana, Aus, and UK), are between ten and forty meters in height, and have a few hundred to a few thousand branching points. We selected five subtypes of neurons from a neuron morphology database (neuromorpho) with the requirement that their shape is completely reconstructed and artifacts of reconstruction are minimal (Figure 1) [1]. We used the same number of neurons and trees per class (twenty) to fairly compare them. We can thus ask if trees exhibit more self-similarity. The apparent similarities between neuron morphologies and trees suggest there may be common quantitative rules and constraints to be discovered. Both tree structures and neuron morphologies are often described as mathematical graphs composed of straight segments, and branch points [13]. This helps us to compare them meaningfully. We extract features such as angles at the branching points and normalize the histogram of the features to find a probability distribution that describes the feature. We can measure the similarity between two samples by measuring the distances between the probability distributions. We use this metric to compare classes of neurons and trees. We find that our features distinguish the samples of trees from the samples of neurons. To summarize this difference, we define a self-similarity measurement for one neuron or one tree. This is defined by considering the feature set and showing that samples of trees are significantly more self-similar than neurons. To enable a comparison of neurons and trees we need to somewhat normalize them. First, the root of neurons is the node in the reconstruction that represents the soma (nucleus). In contrast, in our tree dataset, we only observe the upper-ground part of a tree. This part usually contains a long trunk followed by branches and leaves. We define the lowest point of a tree to be its root. Second, neurons have distinctive axonal and dendritic parts. Since the dendrite often has more arborization, we restrict our analysis to the dendrite part. Third, to simplify the structures, we only consider the skeleton of trees and neurons and neglect finer structures such as spine locations for neurons and leave locations for trees. These modifications enable us to meaningfully compare actual trees and neurons. We look at six features in our sample of neurons and trees. Three features measure local properties: angles at the branching points, lengths of segments, and contraction (a measure of local curvature) of segments (figure 2 right). 
The other three features consider the non-local structure of a sample: distances of nodes from the root, global angle, directness, and the ratio of geodesic and Euclidean distance (figure 2 middle and left). The outcome of one feature is a histogram of values in a given range. We normalize a histogram such that the area under the histogram is equal to one. To quantify a class of neurons or trees, we take the average and standard deviation of the mean value the normalized histogram for all the samples in that class. In figure 3 representative histograms of a few classes and their features are shown. By taking an average of the deviations, we observe the variance of the features is higher for neurons in comparison with trees (figure 3). This reflects that there is more heterogeneity in the neuron classes compared to the relatively homogenous tree structures. To quantify the differences between neurons and trees, we look for the differences in the histogram of their features. To compare the distribution of features within and between trees and neurons, we use earth movers distance (EMD). EMD is a distance on distributions. If the distributions are interpreted as piles of sand over a region, then the EMD is the minimum cost to move one pile to another (figure 3). We define the distance between two samples (of trees or neurons) by summing EMD of the distribution of each pair of their features. By measuring this distance between samples of two classes, we can compute the similarity of the two classes. How similar are the branching patterns of neurons and trees? One way to quantify them is to look at the angles between two outgoing stems. Branching with more than two outgoing segments is rare in both datasets. Indeed, we observe that more than 95 percent of branching points in trees have two outgoing stems. We see that the distributions of angles for neurons have a peak between 45 and 120 degrees (figure 3). This shows that at the branching points of neurons, two outgoing neurites grow in somewhat opposite directions. In trees, branching angles are often acute where the peak of their distributions is less than 30 degrees. The peak at an acute degree shows that outgrowing segments of trees often are almost parallel to each other. Indeed, gravity and the need to be illuminated may force both outgoing segments to be roughly perpendicular to the ground. Therefore the angles at the branching points clearly differentiate neurons and trees. How similar is the growth process of neurons and trees? Ideally, we would trace their developmental process. Instead, we can look at angles after growth. Since both are curves in space, we can measure how far they deviate from straight lines, i.e. measuring the curvature. We compute the starting and ending angles relative to the straight line connecting the points. We observe that histograms of these angles are similar for both trees and neurons. While the process through which trees and neurons grow differ, they are both close to straight. In neurons, the tips of neurites actively sense the local information such as the gradients of their desired molecules [10]. In trees, the tip of a segment, called meristem, actively searches for the light spots [14]. In both trees and neurons, to sense local information, they may extend their tips and then select the one that maximizes their goal (filopodia in neurons and shoot apical meristem in trees). Comparing local features of neurites and segments of trees promises to shed light on their growth process. 
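To illustrate the EMD-based comparison described above, here is a minimal sketch for one angle-like feature; the bin count, value range, and the toy angle distributions are invented for illustration (the paper sums such EMDs over all six features).

```python
import numpy as np
from scipy.stats import wasserstein_distance

def normalized_histogram(values, bins=36, value_range=(0.0, 180.0)):
    """Normalized feature histogram (area under the histogram equals one)."""
    hist, edges = np.histogram(values, bins=bins, range=value_range, density=True)
    return 0.5 * (edges[:-1] + edges[1:]), hist   # bin centers, densities

def emd_between_samples(values_a, values_b):
    """EMD between the normalized histograms of one feature for two samples."""
    ca, ha = normalized_histogram(values_a)
    cb, hb = normalized_histogram(values_b)
    return wasserstein_distance(ca, cb, u_weights=ha, v_weights=hb)

# Toy branching-angle distributions: obtuse-ish for a "neuron", acute for a "tree".
rng = np.random.default_rng(0)
neuron_angles = rng.uniform(45, 120, size=400)
tree_angles = rng.uniform(5, 30, size=400)
print(emd_between_samples(neuron_angles, tree_angles))                      # large distance
print(emd_between_samples(neuron_angles, rng.uniform(45, 120, size=400)))   # small distance
```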
How often do we see perfectly linear outgrowth from the root node? In figure 3 we observe that the peak in the histogram of global angles for neurons is close to 180 degrees, whereas in trees this peak is centered around 90 degrees. This means that the neurites of a neuron look more like straight lines when compared to tree segments. What we see is that trees tend to grow perpendicular to their stem, which may be good for capturing light. This in particular is a survival factor for some tropical trees, where adolescent trees are in a race with one another to reach the canopy, as they are light-limited below it. Neurites of neurons, on the other hand, may best grow straight, as that will shorten the overall wiring length, a criterion seen as worthy of optimization in brains [11]. Clearly, optimization of trees and neurons is distinct along this axis, where neurons minimize length and trees maximize the area covered. Figure 2. **Trees can be characterized by features.** a) Set of features for a tree and a neuron that we use in this paper are presented. The enlarged area of the neuron morphology in the circle demonstrates how branching, local and global angles are calculated. How are neurons and trees filling the surrounding space? Trees and neurons expand their neurites and segments to access the resources in space. To measure their spatial density, we count how many times they intersect with the spheres that are centered at their roots (Sholl analysis). The radius begins at zero and continues up to the sphere that contains almost 90 percent of the total length. We select the upper limit for neurons and trees to be 100 \(\mu m\) and 10 \(m\), respectively. We observe that apart from Pyramidal neurons, the histogram has one peak close to the root. The histogram in Pyramidal neurons is bimodal, as these neurons have distinctive basal and apical dendritic parts. In trees, the peak is relatively far from the root. Indeed, up to a few meters, the spheres intersect with trees only at the trunk part. Among the five classes of trees, we observe that Wytham Woods has a bimodal shape. Dendrites of neurons are mostly concentrated around the soma to gather information from nearby neurons. For trees, the main body is often far from the root to compete with other trees in collecting sunlight. How distant are two consecutive branching points (where there are no other branching points on the curve that connects them) in trees and neurons? To answer that question, we measure the Euclidean distance between all consecutive branching points, called branching segments, and find their histograms (figure 3). To make it unit free, we divide the distances by their mean and only consider the histogram between zero and three. We find that all the tree classes have a sharp peak around 0.3, whereas for neurons the histograms are often monotonically decreasing. Do neurons and trees show signs of arborization length minimization? All things being equal, we would expect both trees and neurons to have relatively short connections. Indeed, the contraction feature captures the overall extra length induced. We can see that for both neurons and trees, connections tend to be close to straight. This finding highlights that the minimization of distances is at least one of the criteria that both trees and neurites are optimized for. Can we use the aforementioned features to define one central feature for comparing neurons and trees? We start with the hypothesis that trees are more self-similar compared to neurons.
We define self-similarity by measuring the distribution of the features of a sample's sub-trees and computing the averaged distance between all sub-trees and the whole tree (figure 4). Lastly, we compute the self-similarity measurement for the tree classes and neuron classes by averaging the self-similarity of the samples in that class. To test our hypothesis, we compared the self-similarity of all six features for classes of neurons and trees. In figure 4, we found that, except for contraction, the other features are significantly more self-similar for trees than for neurons. This helps us conclude that sub-trees retain the features of the whole structure in both cases, but that actual trees are the more self-similar of the two. This of course has been observed for trees (and plants) more generally, going back to Da Vinci's 'rule' [14]. Further, this result is consistent across all classes of neurons and trees, suggesting high self-similarity is a conserved feature of most tree species. Figure 3. **Comparing the features of neurons and trees.** a) The histograms of the features for five classes of neurons and trees are shown. To compare the features, we use earth mover distance (bottom-left). For each feature, we take the distance between all pairs of neurons (left), trees (middle), and trees vs. neurons (right). Figure 4. **Self-similarity can explain high variance in the features.** a) Sub-trees of a neuron (left) and of a tree (right) and their features are shown (color coding is the same as below). In the center, for each sub-tree, its convex volume vs. the overall length of its segments is plotted. b) The top row shows the features of the tree (or neuron) and the bottom row is the mean and standard deviation for the features of all sub-trees of the tree (or neuron). c) For one tree (or neuron) and one feature, we can find the self-similarity by computing the averaged distance between all sub-trees and the tree (or neuron). The self-similarity is compared between neurons and trees. ## 3. Discussion Here we asked how neurons and trees may have distinct or similar morphologies. We found their general branch patterns, as measured by 6 key features, are similar, but that trees tend to branch with more acute angles, have longer segments, and their segments are less bent compared to neurons. Moreover, we found that trees are more conserved in their self-similar structure, meaning that the features of their sub-trees are closer to the entire tree. By comparing neurons and trees we are implicitly asking questions about their structure. One of the main characteristics of a growth process is its self-similarity - the degree of similarity between the small subset and the whole structure. Here we compare neurons and actual trees as two extreme examples in size to test whether their structure is shaped self-similarly. By comparing a set of features, we show that trees and neurons have similar degrees of self-similarity. This may arise from the environments in which these two types of structures are developed. For example, while most trees continuously search for light in their surroundings, neurons' arbors are sensitive to chemical gradients, neuromodulator concentrations, and electrical inputs. We hope that understanding the similarities and differences between neurons and trees may enlighten our search for computational principles. The samples used in this paper are limited to five classes of neurons (all from mice) and five species of trees (from many countries).
To embrace the diversity of trees and neurons, analyzing more and more diverse data would be interesting. There is a slight concern about the methods used to reconstruct neurons and trees which may affect the results [10]. To extract the features we also have to approximate the neurons and trees. This may lead to biases in the features. For example, local tilt angles might be noisy because the reconstruction could not perfectly measure the configuration of local branches of neurons or trees. There is not clear dictionary between the features that we defined and the environmental correspondence. For instance, some features might emerge during the development. As an example, the Wytham Woods trees have a history of management, particularly copping where they were repeatedly pruned low down earlier in their lives, then left alone over the last few decades. That leads to a relatively short trunk and then a very bushy crown which is not a 'natural' shape but obviously one that is still entirely feasible for vigorous growth. This may lead to a bimodal histogram for their distance from the root (figure 3). In another example, being self-similar informs our search for growth models while more involved features may relate to gravity or light, the brain's surface or information. We just reported the differences, to uncover what is this dictionary we may need more studies that address it. We can use this finding to search for the relevant biological processes that may lead to these differences. The approaches we have used here are well-suited to the examination of much larger datasets of these 3D structures. This in turn may allow the testing of mechanistic models of neuron and tree growth, with the aim of potentially uncovering general rules of neuron and tree structural growth and development under different environmental constraints. Some of the constraints are already under study within the field of neuroscience or ecology. Finding the similarities and differences between neurons and trees opens the door for both of these fields to share their vision and bring novel ideas from one field to another. We have shown here a quantitative comparison of the 3D structure of neurons and trees. The motivation for this work was to test the commonly-aired assertion that there are, superficially at least, similarities between the structures of these two biological networks. If so, this may uncover common and general underlying principles of growth and development in these networks. In addition, quantifying similarities between neurons and trees opens the possibility of finding mechanistic explanations as to how these similarities arise and provide new insights into how the specific environmental and evolutionary constraints under which each has developed are manifested in their structures. ## 4. Method ### Selecting neuron morphologies We used the reconstructed neuron morphologies from neuromorpho database (version 8.0.112). We only consider mouse morphologies since the number of neuron morphologies from mice that are deposited in neuromorpho is larger than other species. To ensure that the data is coming from healthy animals, we only use data from the experimental condition 'control'. We only performed the analysis on the dendritic part of neurons. To reduce the artifacts of the reconstruction methods, we only used data where the dendrite reconstruction is labeled as 'complete'. 
We use five classes of neurons: pyramidal, Purkinjeji, Basket, Aspiny and Ganglion as they are representative of many other neurons in the brain. We thus have five sets of neuron morphologies that we can compare to trees. ### Trees Trees are generated in Matthew Disney's lab using Lidar imaging. Lidar generates the full 3D structure of trees (with green leaves) which then can be process to turn it to a set of branches and segments. ### Preprocessing #### 4.3.1. Down-sampling neurons and trees Due to the extraction method and the accuracy that the experimenter is used, the nodes of the morphology are not uniformly distributed across the morphology. To overcome this, we process the morphologies in two steps. We first select a lower and upper bound for the distance between a node and its parents. Here, we set the lower bound to \(0.5\mu m\) and upper to \(1\mu m\). In the first step, we randomly select a node and if the distance between the node and its parent is higher than the upper bound, we add an extra node in the middle of the straight line that connects them. The goal of this step is to have a dense morphology. We continue this step until the Euclidean distance between all nodes and their parent is less than the upper bound. In the second step, we randomly select a node and if the distance between the node and its parent is lower than the lower bound, we remove the parent node. The goal of this step is to ensure that the nodes are uniformly distributed across the morphology. We continue this step until no nodes remain. If the upper bound is twice the lower bound, it is guaranteed that the Euclidean distance between all nodes and their parents is between the lower and upper bound. ### Features We consider six morphological features to quantify the branch organization for trees and neurons. Three features quantify changes in local regions of the arbor, and the other three are global features that are measured relative to the root of the tree. #### 4.4.1. Branch angles In the graphical representation of trees and neurons, each node either has no child (terminated), one child (intermediate), or more than one child (branching points). At each branching point with two children, we can calculate the branching angle by computing the angle between the vectors connecting the branching node to its children. #### 4.4.2. Direction change This feature is defined by the straightness of segments of trees or neurons by calculating the angles between two vectors: the vector connecting the node to its parent and the vector connecting the node to its child. This feature is only defined for nodes with one child. If the segment of the neuron is a flat line locally at the node, this value would be 180. #### 4.4.3. global angles Global angles measure how straight the segments of the neuron are grown away from the root. It is computed for each node by measuring the vector connecting the node to its parent and the vector connecting it to the root. If a tree has the tendency to explore the surrounding space, it is expected that in many nodes this angle is obtuse. #### 4.4.4. Length of segments Two branching points are called consecutive if the shortest path between them does not contain any other branching points. We can measure the Euclidean distance between all pairs of consecutive branching points, and compute its histogram. To make this histogram scale-free, we divide it by its mean. #### 4.4.5. 
Distance from root This feature is performed conceptually by drawing concentric circles around the cell body at incrementally increasing radii and counting the number of times each circle crosses a neuritic segment (shown counting around the circle counterclockwise for demonstration). The number of intersections is graphed as a function of radial distance from the cell body to give a quantitative representation of how neurite density varies spatially. #### 4.4.6. Contraction For each node, the shortest neural path that connects it to the soma is usually close to a straight line. To make it concrete, for each node, the ratio of its shortest path through the neuron to soma divided by the Euclidean distance between the node and soma is calculated. By subtracting one and taking the mean square of this ratio for all nodes we get the Neuronal/Euclidean ratio. #### 4.4.7. Measuring distances between features To measure the distance between two features, we use earth mover distance. In statistics, the earth mover's distance (EMD) is a measure of the distance between two probability distributions over a region. Informally, if the distributions are interpreted as two different ways of piling up a certain amount of dirt over a region, the EMD is the minimum cost of turning one pile into the other; where the cost is assumed to be the amount of dirt moved times the distance by which it is moved. Notice that all six features that we used here are histograms and therefore are a distribution. ### Self-similarity In mathematics, a self-similar object is exactly or approximately similar to a part of itself (i.e., the whole has the same shape as one or more of the parts). Many objects in the real world, such as coastlines, are statistically self-similar: parts of them show the same statistical properties at many scales. We define a self-similarity measurement by using the histogram of the features. In our definition, we first extract all sub-trees of a tree that start from any possible branching points (figure 4). To avoid the artifacts that may come from small subtrees, we discard subtrees that have less than ten leaves. Each sub-tree can be seen as a new sample such that its cutting branching point is its root. Therefore, we can extract six aforementioned features. In figure 4 two samples are shown (one tree and one neuron) with features of a few of their subtrees. We then compute the distance between features of all pairs of subtrees. By taking the average of these distances for one feature, we can compare the tree with its sub-trees. Finally, by doing this process for all features and taking their average, we can define the self-similarity of a neuron or a tree.
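A compact sketch of how the branching-angle feature and the self-similarity measure could be computed on a geometric-graph morphology. The parent-pointer representation, helper names, and thresholds are illustrative assumptions, not the authors' code; only the branching-angle feature is used here, whereas the paper averages over all six features.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# A morphology as a geometric graph: parent[i] is node i's parent (-1 for the root),
# xyz has shape (n, 3). Representation and helper names are illustrative assumptions.

def children_of(parent):
    kids = [[] for _ in parent]
    for i, p in enumerate(parent):
        if p >= 0:
            kids[p].append(i)
    return kids

def branching_angle(node, kids, xyz):
    """Angle (degrees) between the two outgoing segments at a branching point."""
    u = xyz[kids[node][0]] - xyz[node]
    v = xyz[kids[node][1]] - xyz[node]
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def subtree(node, kids):
    stack, out = [node], []
    while stack:
        n = stack.pop()
        out.append(n)
        stack.extend(kids[n])
    return out

def self_similarity(parent, xyz, min_leaves=10, bins=36):
    """Average pairwise EMD between branching-angle histograms of all large sub-trees
    cut at branching points (lower values mean a more self-similar structure)."""
    kids = children_of(parent)
    hists = []
    for r in range(len(parent)):
        if len(kids[r]) != 2:
            continue                                   # sub-trees start at branching points
        nodes = subtree(r, kids)
        if sum(1 for n in nodes if not kids[n]) < min_leaves:
            continue                                   # discard small sub-trees
        angles = [branching_angle(n, kids, xyz) for n in nodes if len(kids[n]) == 2]
        h, edges = np.histogram(angles, bins=bins, range=(0.0, 180.0), density=True)
        hists.append((0.5 * (edges[:-1] + edges[1:]), h))
    dists = [wasserstein_distance(c1, c2, h1, h2)
             for i, (c1, h1) in enumerate(hists) for (c2, h2) in hists[i + 1:]]
    return float(np.mean(dists)) if dists else 0.0
```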
2305.17482
Federated Empirical Risk Minimization via Second-Order Method
Many convex optimization problems with important applications in machine learning are formulated as empirical risk minimization (ERM). There are several examples: linear and logistic regression, LASSO, kernel regression, quantile regression, $p$-norm regression, support vector machines (SVM), and mean-field variational inference. To improve data privacy, federated learning is proposed in machine learning as a framework for training deep learning models on the network edge without sharing data between participating nodes. In this work, we present an interior point method (IPM) to solve a general ERM problem under the federated learning setting. We show that the communication complexity of each iteration of our IPM is $\tilde{O}(d^{3/2})$, where $d$ is the dimension (i.e., number of features) of the dataset.
Song Bian, Zhao Song, Junze Yin
2023-05-27T14:23:14Z
http://arxiv.org/abs/2305.17482v1
# Federated Empirical Risk Minimization via Second-Order Method ###### Abstract Many convex optimization problems with important applications in machine learning are formulated as empirical risk minimization (ERM). There are several examples: linear and logistic regression, LASSO, kernel regression, quantile regression, \(p\)-norm regression, support vector machines (SVM), and mean-field variational inference. To improve data privacy, federated learning is proposed in machine learning as a framework for training deep learning models on the network edge without sharing data between participating nodes. In this work, we present an interior point method (IPM) to solve a general ERM problem under the federated learning setting. We show that the communication complexity of each iteration of our IPM is \(\widetilde{O}(d^{3/2})\), where \(d\) is the dimension (i.e., number of features) of the dataset. ###### Contents * 1 Introduction * 2 Related Work * 3 Background * 3.1 Notations * 3.2 Empirical Risk Minimization * 3.3 Central Path Method * 3.4 Newton Step * 4 IPM under FL * 4.1 Sketching Techniques * 4.2 Our Algorithm * 5 Theoretical Analysis * 6 Compared to Standard Methods * 7 Conclusion and Discussion * A Probability Tools and Basic Properties of Random Sketching Matrices * A.1 Concentration inequalities * A.2 Properties obtained by random projection * B Sketch more than once * C Bounding error of sketching * C.1 Definition of \(P\), \(\widehat{P}\), and \(\widehat{P}\) * C.2 Proof sketch * C.3 Bounding \(|g^{\top}Ph-g^{\top}\widehat{P}h|\) * C.4 Tools for bounding \(|g^{\top}Ph-g^{\top}\widehat{P}h|\) * C.5 Bounding \(|g^{\top}\widehat{P}h-g^{\top}\widehat{P}h|\) * C.6 Tools for Bounding \(|g^{\top}\widehat{P}h-g^{\top}\widehat{P}h|\) * C.7 Bounding \(|g^{\top}Ph-g^{\top}\widehat{P}h|\) * D Main Result * E Central Path * F Initial Point and Termination Condition Introduction Empirical Risk Minimization (ERM) is one of the key problems in machine learning research. ERM appears in many machine learning problems including LASSO [11], logistic regression [12, 13], support vector machines [14], AdaBoost [15], kernel regression [16, 17], etc. Due to its wide applications, a great number of works have considered this problem. They not only study the statistical convergence properties but also investigate how to develop efficient algorithms for ERM. Among these efficient algorithms, Interior Point Methods is one of the most widely-used optimization algorithm. IPM is first proposed by [16]. After that, IPM has become an active area in optimization research. There is a long line of work using IPM to speedup optimization problems, such as linear programming [17, 18, 19], semi-definite programming [15], and cutting plane method [14]. Recently, [14] develops a fast and robust algorithm to solve Empirical Risk Minimization (ERM). However, users are not willing to share data with others. Therefore, Federated Learning, which is a general framework for distributed learning on sensitive data, is paid more attention to recently. Motivated by the Sketched-SGD [17] and FetchSGD [18], there exists a large number of works focus on reducing the communication cost [19, 20, 18, 15]. In addition, some works [14] develop optimization algorithms under federated learning. Nevertheless, all of them develop distributed SGD, which is a first-order optimization algorithm. 
First-order algorithms for ERM depend polynomially on the Lipschitz constant of the gradient, and their running time also depends on the strong convexity of the objective function [14]. In view of this, we focus on developing distributed second-order optimization algorithms in this paper. As for distributed second-order optimization, [16] develops a distributed second-order method, which addresses the communication bottleneck of the distributed setting. However, in order to present a convergence analysis, they make several strong assumptions that are unrealistic in practice. In this work, we mainly study ERM under FL, which we call FERM (Federated Empirical Risk Minimization). We first develop an IPM framework under the FL setting to address FERM. Then, considering the communication issue of the IPM framework under FL, we use sketching techniques to reduce the communication cost. In the end, we present the convergence analysis of our algorithm. Challenges. We have witnessed the success of first-order optimization algorithms under FL. Nevertheless, it is non-trivial to design an IPM under FL. In particular, we need to use the sketching technique to reduce the communication cost and provide a convergence analysis for IPM under FL. In the following sections, we focus on answering the following questions: * _How to design a distributed IPM algorithm without data sharing?_ * _How to use sketch matrices to compress the Hessian information under distributed setting?_ * _Is it possible to present convergence guarantees of IPM under FL?_ Before we show the specific algorithms and analysis, we first state our main result here: **Theorem 1.1** (Informal Main Result, see Appendix D for details).: _If the following conditions hold_ * _Consider a convex problem under the FL setting_ \(\min_{Ax=b,x\in\Pi_{i=1}^{m}K_{i}}c^{\top}x\)_, where each_ \(K_{i}\) _is a compact convex set._ * _For each_ \(i\in[m]\)_, we are given a_ \(\nu_{i}\)_-self concordant barrier function_ \(\phi_{i}\) _for_ \(K_{i}\) _Then, there exists an FL algorithm (see Algorithm 1) that runs in \(O(\sqrt{\nu}\log^{2}m\log(\frac{\nu}{\delta}))\) iterations and each iteration sends \(O(b_{\max}n)\) words to find a vector \(x\) up to \(\delta\) error, where \(b_{\max}\) is determined by the size of the sketch matrices._ Contributions. Our contributions are summarized as follows: * To the best of our knowledge, we are the first to study ERM under the FL setting, and we propose an IPM under FL to solve FERM. * We are also the first to sketch the Hessian information of the IPM algorithm under the FL setting. Previous works either sketch the gradient information under the FL setting or sketch Hessian information under the classical distributed computing setting. * We show convergence guarantees of IPM under FL, which compresses the Hessian via sketching methods to reduce communication costs. Since IPM is a second-order optimization method, it is non-trivial to establish such convergence without making strong assumptions. Organization. We present related work in Section 2. In Section 3, we present the background of this paper. In Section 4, we first formulate the problem, then give the sketching technique used in our algorithm and an overview of our main algorithm. In Section 5, we present the theoretical analysis of our algorithm. In Section 6, we compare our algorithm with some naive models. We conclude this paper in Section 7.
## 2 Related Work Distributed Optimization Methods. Nowadays, distributed optimization methods have gained popularity. As for distributed first-order optimization methods, a large number of works focus on developing communication-efficient distributed SGD. These works include distributed variants of stochastic gradient descent [2, 11, 12], accelerated SGD [13], variance reduction SGD [10, 11], dual coordinate ascent algorithms [14, 15, 16], and stochastic coordinate descent methods [17]. As for distributed second-order optimization methods, DANE [13], AIDE [14], and DiSCO [18] are well-known works. CoCoA [13, 12, 15] is similar to second-order methods, but it does not use any second-order information. Federated Learning. Federated learning is a special case of distributed machine learning. Federated learning allows clients to train machine learning models without data sharing. The applications of federated learning include healthcare [15, 16], finance [11], and autonomous vehicles [17]. Although federated learning has numerous advantages, it is often limited by communication costs. In view of this, a great number of methods [16, 17] have been developed to reduce the communication cost in federated learning. FEDAVG [18] is the first work focusing on the communication-efficiency problem in federated learning. After that, a great number of gradient compression methods [16, 17] have been proposed. Communication-efficient algorithms achieve success in practice. However, it is not easy to present convergence analysis for communication-efficient algorithms. Recently, [17] presents a convergence analysis for SGD under federated learning. Federated learning convergence on one-layer neural networks is investigated in [11]. Furthermore, [12] gives convergence guarantees for general federated learning on neural networks. [13] studies federated learning for convex, Lipschitz, and smooth functions. [13] proposes a federated algorithm for adversarial training in deep learning. Another interesting angle of federated learning is differential privacy; a number of works [12, 13, 14, 15, 16] have studied privacy-related questions in federated learning. Privacy is not the main focus of this paper. Sketching Technique. Sketching techniques have been widely applied in machine learning, for example to low-rank approximation [17, 18, 19, 13, 15], linear regression, distributed problems [14, 15], reinforcement learning [16], tensor decomposition [13], sparsification of attention matrices [10], discrepancy minimization [14], clustering [17], online bipartite matching [13, 18], exponential and softmax regression [11, 12, 13, 15, 16], integral optimization [11], submodular problems [13], generative adversarial networks [16], symmetric norm estimation [14], optimizing neural tangent kernels [1, 13, 15, 16, 17, 18, 19, 20, 21, 22], databases [23], fast attention computation [21], dynamic kernel computation [24, 25, 26], matrix completion [27], and matrix sensing [28, 29, 30]. Count Sketch [20] is used in [22, 23] to reduce the cost of communication at each iteration. Count Sketch is able to approximate every coordinate of a vector with an \(\ell_{2}\) guarantee, and it is also possible to recover an approximate vector from a Count Sketch. In this paper, we use AMS matrices [2] to compress the model updates at each iteration to reduce the communication cost of FL. ## 3 Background In Section 3.1, we explain the notations that we use.
In Section 3.2, we introduce empirical risk minimization. In Section 3.3, we explain the central path method and the properties of the self-concordant barrier function. In Section 3.4, we present the Newton method. ### Notations Given a positive integer \(n\), we use \([n]\) to denote \(\{1,2,\cdots,n\}\). We use \(m\) to denote the number of clients. Each client \(i\in[m]\) holds a dataset \(A_{i}\in\mathbb{R}^{d\times n_{i}}\). We also assume that \(\sum_{i=1}^{m}n_{i}=n\). Moreover, we define \(x\) as the main variable and \(s\) as the slack variable. We use \(x_{i}\), \(s_{i}\), \(W_{i}\) and \(A_{i}\) to denote the variables that are computed in client \(c_{i}\), and we use \(x_{i}^{t}\), \(s_{i}^{t}\), \(W_{i}^{t}\), and \(A_{i}^{t}\) to denote the variables that are computed in client \(c_{i}\) at the \(t\)-th iteration. Next, we define two operations. The operation \(\oplus\) denotes concatenation, which indicates that \(x=\oplus_{i=1}^{m}x_{i}=[x_{1},x_{2},\ldots,x_{m}]^{\top}\) and \(s=\oplus_{i=1}^{m}s_{i}=[s_{1},s_{2},\ldots,s_{m}]^{\top}\). We use \(\otimes\) to denote the following operation: \(W=\otimes_{i=1}^{m}W_{i}=\text{diag}(W_{1},W_{2},\cdots,W_{m})\). Given two functions \(f\) and \(g\), \(f\lesssim g\) means that \(f\leq Cg\), where \(C\) is a constant. Let \(v\) be a vector. \(\|v\|\) represents the standard Euclidean norm. \(\mathbf{E}[\cdot]\) represents the expectation and \(\Pr[\cdot]\) denotes the probability. We use \(\nabla f(x)\) to denote the gradient of \(f\), namely \(\frac{\mathrm{d}f}{\mathrm{d}x}\). For any \(A\in\mathbb{R}^{m\times n}\), \(\|A\|_{2}\) represents its operator norm and \(\|A\|_{F}\) stands for its Frobenius norm. We also use the facts that \(\|AB\|_{2}\leq\|A\|_{2}\cdot\|B\|_{2}\) and \(\|A\|_{F}\leq\sqrt{n}\|A\|_{2}\). Moreover, if the matrix \(A\in\mathbb{R}^{n\times n}\) is a block diagonal matrix, then \(A\) can be expressed as \(\text{diag}(A_{1},A_{2},\cdots,A_{m})\), where \(A_{1}\) is an \(n_{1}\times n_{1}\) matrix, \(A_{2}\) is an \(n_{2}\times n_{2}\) matrix, and \(A_{m}\) is an \(n_{m}\times n_{m}\) matrix, with \(\sum_{i=1}^{m}n_{i}=n\). A matrix \(A\in\mathbb{R}^{n\times n}\) is symmetric positive semi-definite (PSD), written \(A\succeq 0\), if \(x^{\top}Ax\geq 0\) for all vectors \(x\in\mathbb{R}^{n}\); for such \(A\) we use \(\|v\|_{A}\) to denote \((v^{\top}Av)^{1/2}\). If we are given a convex function \(f\), we use \(\|v\|_{x}\) to denote \(\|v\|_{\nabla^{2}f(x)}\) and \(\|v\|_{x}^{*}\) to denote \(\|v\|_{\nabla^{2}f(x)^{-1}}\) for simplicity. In general, we use \(R\in\mathbb{R}^{b\times d}\) or \(S\in\mathbb{R}^{b\times d}\) to denote sketches that are used to compress model updates. In order to distinguish different sketches, we use \(R_{i}\in\mathbb{R}^{b_{i}\times d}\) and \(S_{i}\in\mathbb{R}^{b_{i}\times d}\). Furthermore, in this paper, we take the computation model to be the word RAM model. In this model, each word has \(O(\log n)\) bits and all basic computations can be done in \(O(1)\) time. This is standard in the literature of algorithm design [10] and distributed algorithms [14, 15].
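As a small illustration (not part of the algorithm itself), the following Python snippet shows how the \(\oplus\) and \(\otimes\) operations above can be realized with NumPy/SciPy; the block sizes \(n_{i}\) here are made up only for the example.

```python
import numpy as np
from scipy.linalg import block_diag

# Hypothetical per-client block sizes n_1, ..., n_m (illustrative only).
n_blocks = [2, 3, 4]
rng = np.random.default_rng(0)

# Per-client vectors x_i and per-client PSD matrices W_i.
x_parts = [rng.standard_normal(n_i) for n_i in n_blocks]
W_parts = []
for n_i in n_blocks:
    G = rng.standard_normal((n_i, n_i))
    W_parts.append(G @ G.T + np.eye(n_i))  # make each W_i PSD

# oplus: concatenation of the per-client vectors into one long vector.
x = np.concatenate(x_parts)

# otimes: block-diagonal combination of the per-client matrices.
W = block_diag(*W_parts)

assert x.shape == (sum(n_blocks),)
assert W.shape == (sum(n_blocks), sum(n_blocks))
```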
### Empirical Risk Minimization We give the definition of traditional Empirical Risk Minimization (ERM) below: **Definition 3.1** (Empirical Risk Minimization).: Given a convex function \(f_{i}:\mathbb{R}^{d}\rightarrow\mathbb{R}\), \(A_{i}\in\mathbb{R}^{d\times n_{i}}\), \(x_{i}\in\mathbb{R}^{n_{i}}\) and \(b_{i}\in\mathbb{R}^{d}\), \(\forall i\in[m]\), we call the following optimization problem the Empirical Risk Minimization problem: \(\min_{x}\sum_{i=1}^{m}f_{i}(A_{i}x_{i}+b_{i})\). Then, we can rewrite the original problem by defining \(y_{i}=A_{i}x_{i}+b_{i}\), \(z_{i}=f_{i}(A_{i}x_{i}+b_{i})\). After that, we get the following problem: \[\min_{x,y,z} \sum_{i=1}^{m}z_{i}\] s.t. \[Ax+b=y\] \[(y_{i},z_{i})\in K_{i} =\{(y_{i},z_{i}):f_{i}(y_{i})\leq z_{i}\},\forall i\in[m]\] In this paper, we mainly consider the following problem, in which the dimension of each \(K_{i}\) can be arbitrary: \(\min_{x\in\prod_{i=1}^{m}K_{i},Ax=b}c^{\top}x\). In the next section, we briefly introduce the solutions that address this general form under the centralized setting. ### Central Path Method In this section, we introduce the central path method. First, we recap the problem that we analyze: \[\min_{x\in\prod_{i=1}^{m}K_{i},Ax=b}c^{\top}x \tag{1}\] For each \(i\in[m]\), \(K_{i}\) is a convex set and \(x_{i}\) is the \(i\)-th block of \(x\) with respect to \(K_{i}\). The interior point methods (IPM) consider the following path of solutions: \[x(t)=\arg\min_{Ax=b}c^{\top}x+t\sum_{i=1}^{m}\phi_{i}(x_{i}) \tag{2}\] where \(\phi_{i}(\cdot)\) is a self-concordant barrier function (Fig. 3 shows an example of a barrier function), and the path is called the central path. The IPM solves Eq. (1) by decreasing \(t\to 0\) (see Fig. 1). The running time of a central path based algorithm is determined by the self-concordant barrier function. In view of this, we first present the definition and properties of the self-concordant barrier function here. **Definition 3.2**.: Given a function \(\phi\), if for any \(x\in\mathrm{dom}\phi\) and any \(u\in\mathbb{R}^{n}\) the following inequalities hold \(|\nabla^{3}\phi(x)[u,u,u]|\ \leq 2\|u\|_{x}^{3},\|\nabla\phi(x)\|_{x}^{*}\ \leq\sqrt{\nu}\) where \(\|v\|_{x}:=\|v\|_{\nabla^{2}\phi(x)}\) and \(\|v\|_{x}^{*}:=\|v\|_{\nabla^{2}\phi(x)^{-1}}\), for any vector \(v\), then the function \(\phi\) is called a \(\nu\)-self-concordant barrier for \(K\), where \(K=\mathrm{dom}\phi\). _Remark 3.3_.: In general, \(\nu\geq 1\) for any self-concordant barrier function. [20] demonstrates that for any open convex set \(K\) contained in the Euclidean space \(\mathbb{R}^{n}\), there exists an \(O(n)\) self-concordant barrier function. We focus on a specific convex set \(K_{i}\) which has dimension \(O(1)\) in this paper. We make the assumption that a \(\nu_{i}\)-self-concordant barrier function \(\phi_{i}\) is given, and we can efficiently compute its gradient \(\nabla\phi_{i}\) and Hessian \(\nabla^{2}\phi_{i}\) in constant time (\(O(1)\)). An important result we rely on regarding self-concordance is the stability of the norm \(\|\cdot\|_{x}\) when we alter the value of \(x\). Subsequently, we proceed to present certain properties of the self-concordant barrier function.
**Theorem 3.4** (Theorem 4.1.6 in [20]).: _If the following conditions hold_ * _Suppose_ \(\phi\) _represents a self-concordant barrier function._ * _the norm_ \(\|y-x\|_{x}\) _is less than_ \(1\) _Then, the following inequalities hold true: \(\nabla^{2}\phi(y)\succeq(1-\|y-x\|_{x})^{2}\nabla^{2}\phi(x)\) and \(\nabla^{2}\phi(y)\preceq(1-\|y-x\|_{x})^{-2}\nabla^{2}\phi(x)\)._ Now, we consider how to follow the path from \(x(1)\) to \(x(\epsilon)\), where \(\epsilon\in(0,1)\), in the next section. ### Newton Step In this section, we briefly introduce the Newton method used to follow the central path. It is a standard method; for background details, the reader may refer to [23]. In order to follow the path from \(x(1)\) to \(x(\epsilon)\) and control the error incurred along the way, we consider the following problem \[s/t+\nabla\phi(x) =\mu\] \[Ax =b\] \[A^{\top}y+s =t\mu+c\] Figure 1: Here is an example of the central path. The curve denotes the central path. The hexagon denotes the feasible region. The start point of the central path is \(x(1)\), and the end point of the central path is \(x(0)\). Then we follow the path \(x(t)\) from \(x(1)\) to \(x(0)\). where \(\nabla\phi(x)=(\nabla\phi_{1}(x_{1}),\nabla\phi_{2}(x_{2}),\cdots,\nabla\phi_{m}(x_{m}))\) and \(\mu\) stands for the error accumulated along the path. In order to control the error, the Newton step to move from \(\mu\) to \(\mu+h\) is given below: \[\delta_{s}^{*}/t+\nabla^{2}\phi(x)\cdot\delta_{x}^{*} =h\] \[A\delta_{x}^{*} =0\] \[A^{\top}\delta_{y}^{*}+\delta_{s}^{*} =0\] where \(\nabla^{2}\phi(x)=\operatorname{diag}(\nabla^{2}\phi_{1}(x_{1}),\nabla^{2}\phi_{2}(x_{2}),\cdots,\nabla^{2}\phi_{m}(x_{m}))\). Then, we define \(W:=(\nabla^{2}\phi(x))^{-1}\) and we define the projection matrix \(P\in\mathbb{R}^{n\times n}\) below: \[P:=W^{1/2}A^{\top}(AWA^{\top})^{-1}AW^{1/2} \tag{3}\] We obtain the following solutions: \[\delta_{x}^{*}\ =W^{1/2}(I-P)W^{1/2}h,\quad\delta_{y}^{*}\ =-t\cdot(AWA^{\top})^{-1}AWh,\quad\delta_{s}^{*}\ =t\cdot W^{-1/2}PW^{1/2}h\] ## 4 IPM under FL In this section, we develop the interior point method under FL. Before we introduce our algorithm, we first introduce the sketching technique in Section 4.1. Then, we give an overview of our algorithm in Section 4.2. ### Sketching Techniques In this subsection, we give the definition of the AMS matrix [1] and show the statistical properties of using the AMS matrix to sketch a fixed vector. See Appendix A for rigorous proofs. **Definition 4.1** (AMS sketch matrices [1]).: Let \(h_{1},h_{2},\ldots,h_{b}\) be \(b\) random hash functions. The hash functions are picked from a 4-wise independent hash family \(\mathcal{H}=\{h:[n]\rightarrow\{-1/\sqrt{b},+1/\sqrt{b}\}\}\). Then \(R\in\mathbb{R}^{b\times n}\) is an AMS sketch matrix if we set \(R_{i,j}=h_{i}(j)\). The AMS matrix has good statistical properties for sketching a fixed vector. We provide the statement in the following lemma, which is standard in the literature [14, 15].
**Lemma 4.2** (Statistical properties for sketching a fixed vector).: _If the following conditions hold_ * \(h\in\mathbb{R}^{n}\) _is a fixed vector._ * \(R\) _is defined as in Definition_ 4.1_._ _Then we have_ \[\mathbf{E}[R^{\top}Rh]=h,\quad\mathbf{E}[(R^{\top}Rh)_{i}^{2}] \leq h_{i}^{2}+\frac{1}{b}\|h\|_{2}^{2}\] \[\Pr\left[|(R^{\top}Rh)_{i}-h_{i}|>\|h\|_{2}\frac{\log(n/\delta)}{ \sqrt{b}}\right]\leq\delta\] **Note.** Although the AMS sketch matrix is also used in [14], there are some differences between our paper and that work. Both use AMS sketch matrices; however, the previous work applies the AMS sketch matrix outside of the projection matrix \(P\) to accelerate the whole process, where \(R\in\mathbb{R}^{b\times n}\) is an AMS matrix and \(P\in\mathbb{R}^{n\times n}\) is a projection matrix. In contrast, we apply the AMS sketch matrices inside the projection matrix. The sketched projection matrix used in our paper is defined in Def. 4.3. There is a major issue we need to tackle: how to bound the error that is caused by adding sketching matrices outside the inverse part (\(AWA^{\top}\))? ### Our Algorithm In view of the properties of AMS sketch matrices, we can use AMS sketch matrices to bound the error caused by the sketching techniques. Next, we define the following notations to differentiate the projection matrices used in IPM: **Definition 4.3** (\(\widehat{P}\) and \(\widetilde{P}\)).: Given four independent AMS matrices, \(R_{1}\in\mathbb{R}^{b_{1}\times d}\), \(R_{2}\in\mathbb{R}^{b_{2}\times d}\), \(R_{3}\in\mathbb{R}^{b_{3}\times d}\), \(R_{4}\in\mathbb{R}^{b_{4}\times d}\), the matrices \(\widehat{P}\) and \(\widetilde{P}\) are defined as below: \[\widehat{P}=W^{1/2}A^{\top}(R_{1}^{\top}R_{1}AWA^{\top}R_{2}^{\top}R_{2})^{-1} AW^{1/2}\] and \[\widetilde{P}=W^{1/2}A^{\top}R_{3}^{\top}R_{3}(R_{1}^{\top}R_{1}AWA^{\top}R_{ 2}^{\top}R_{2})^{-1}R_{4}^{\top}R_{4}AW^{1/2}\] We remark that \(\widehat{P}\) is only used for the purpose of analysis, while \(\widetilde{P}\) is used in both the analysis and the algorithm. The algorithm that addresses ERM under FL can be divided into several steps (Fig. 4 gives an overview of the algorithm): Setup. First, we give the server and each client the same random seed. Then, each client \(c_{i}\) generates four independent sketching matrices \(R_{1}\), \(R_{2}\), \(R_{3}\), and \(R_{4}\). Local update. For any \(i\in[m]\), the detailed process of the local update in each client \(c_{i}\) is as below: * Each client \(c_{i}\) updates \(x_{i}^{t-1}\) and \(s_{i}^{t-1}\) with the steps \(\delta_{x_{i}}^{t-1}\) and \(\delta_{s_{i}}^{t-1}\) respectively. Then, we get that \(x_{i}^{t}=x_{i}^{t-1}+\delta_{x_{i}}^{t-1}\) and \(s_{i}^{t}=s_{i}^{t-1}+\delta_{s_{i}}^{t-1}\). * Each client \(c_{i}\) computes \(W_{i}^{t}\). * Each client \(c_{i}\) computes \(\mu_{i}^{t}(x,s)=s_{i}^{t}/\overline{t}+\nabla\phi_{i}(x_{i})\) and \(\gamma_{i}^{t}(x,s)=\|\mu_{i}^{t}(x,s)\|_{\nabla^{2}\phi_{i}(x_{i}^{t})^{-1}}\). * Each client \(c_{i}\) computes \(h_{i}^{t}=-\alpha\cdot c_{i}^{t}(x,s)\mu_{i}^{t}(x,s)\). * Each client \(c_{i}\) sends its \((W_{i}^{t})^{1/2}A_{i}^{\top}R_{1}^{\top}\), \(R_{2}A_{i}W_{i}^{t}A_{i}^{\top}R_{3}^{\top}\), \(R_{4}A_{i}(W_{i}^{t})^{1/2}\) and \(h_{i}^{t}\) to the server. Figure 2: \(P\) is an ideal case of the projection matrix. However, it is infeasible to construct \(P\) under FL. In view of this, we construct \(\widetilde{P}\).
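To make Definition 4.1 and Lemma 4.2 concrete, here is a minimal Python sketch (illustration only; fully independent random signs stand in for the 4-wise independent hash family, and the sizes are arbitrary) that builds an AMS-style sketch matrix and empirically checks the unbiasedness property \(\mathbf{E}[R^{\top}Rh]=h\).

```python
import numpy as np

def ams_sketch(b, n, rng):
    """Return a b x n AMS-style sketch matrix with entries +/- 1/sqrt(b).

    Definition 4.1 only requires 4-wise independent hash functions; for this
    illustration we simply draw fully independent random signs.
    """
    signs = rng.choice([-1.0, 1.0], size=(b, n))
    return signs / np.sqrt(b)

rng = np.random.default_rng(0)
n, b, trials = 64, 16, 5000
h = rng.standard_normal(n)          # the fixed vector being sketched

# Monte Carlo estimate of E[R^T R h]; it should be close to h (Lemma 4.2).
acc = np.zeros(n)
for _ in range(trials):
    R = ams_sketch(b, n, rng)
    acc += R.T @ (R @ h)
est = acc / trials

print("max |E[R^T R h] - h| (empirical):", np.max(np.abs(est - h)))
```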
In order to analyze how close \(P\) is to \(\widetilde{P}\), i.e., to show that \(|g^{\top}Ph-g^{\top}\widetilde{P}h|\) can be bounded in terms of \(g\), \(h\), \(A\), and \(W\), we create an artificial matrix \(\widehat{P}\). Note that \(\widehat{P}\) is only used in the analysis. Our analysis works as follows: we first show that \(P\) is close to \(\widehat{P}\), then we show that \(\widehat{P}\) is close to \(\widetilde{P}\). Combining the two steps, we finally prove that \(P\) is close to \(\widetilde{P}\). Global update. In each global communication round, the detailed process of the global update is as below: * The server constructs \(\widetilde{P}\) as below \[\widetilde{P}=W^{1/2}A^{\top}R_{1}^{\top}R_{1}(R_{2}^{\top}R_{2}AWA^{\top}R_{3}^{\top}R_{3})^{-1}R_{4}^{\top}R_{4}AW^{1/2} \tag{4}\] * The server computes \(\delta_{x}^{t}\) and \(\delta_{s}^{t}\) as below: \[\delta_{x}^{t}=W^{1/2}(I-\widetilde{P})W^{1/2}h^{t},\quad\delta_{s}^{t}= \widetilde{t}\cdot W^{-1/2}\widetilde{P}W^{1/2}h^{t}\] * The server sends \(\delta_{x}^{t}\) and \(\delta_{s}^{t}\) to every client. Communication cost. In this paper, we always assume that \(d\geq n\). Algorithm 1 sends \(O(b_{\max}n)\) words at each iteration, where \(b_{\max}=\max\{b_{1},b_{2},b_{3},b_{4}\}\). Generally, we choose \(b_{\max}=O(\sqrt{n})\). Compared with the naive algorithm mentioned in Model 3, Algorithm 1 is more practical because FL is typically limited by network bandwidth. ## 5 Theoretical Analysis In our algorithm, the main problem is how to handle the matrix \(P\) defined in Eq. (3), which is used in our IPM, where \(W\in\mathbb{R}^{n\times n}\) is a block diagonal matrix and \(A\in\mathbb{R}^{d\times n}\). The core of Theorem 5.3 is to show that the quantity \(|g^{\top}Ph-g^{\top}\widetilde{P}h|\) can be bounded in terms of \(g\), \(h\), \(A\), and \(W\). In order to prove Theorem 5.3, we divide the proof into the following steps. Given two vectors \(g,h\in\mathbb{R}^{d}\) (in the following statements and proofs, we assume that \(g=h\)), we want to prove that * \(|g^{\top}Ph-g^{\top}\widehat{P}h|\) can be bounded by \(g\), \(h\), \(A\) and \(W\). We prove this by using Lemma C.3 with \(C=W^{1/2}A^{\top}\), \(B=(AWA^{\top})^{-1}\), \(R=R_{1}\) and \(S=R_{2}\). * \(|g^{\top}\widetilde{P}h-g^{\top}\widehat{P}h|\) can be bounded by \(g\), \(h\), \(A\), \(W\), and \(\widetilde{B}\). We prove this by using Lemma C.5 with \(C=W^{1/2}A^{\top}\), \(\widetilde{B}=(R_{1}^{\top}R_{1}AWA^{\top}R_{2}^{\top}R_{2})^{-1}\), \(R=R_{3}\), and \(S=R_{4}\). * Finally, we use \[|g^{\top}Ph-g^{\top}\widetilde{P}h|\leq|g^{\top}Ph-g^{\top}\widehat{P}h|+|g^{\top}\widehat{P}h-g^{\top}\widetilde{P}h|\] to obtain that \(|g^{\top}Ph-g^{\top}\widetilde{P}h|\) is bounded by \(g\), \(h\), \(A\) and \(W\). In order to achieve the above-mentioned steps, we need the following lemmas; their detailed proofs are deferred to Appendix C.
**Lemma 5.1**.: _If the following conditions hold_ * \(\widetilde{B}\in\mathbb{R}^{d\times d}\) _and_ \(C\in\mathbb{R}^{n\times d}\) _are two matrices._ * \(R\in\mathbb{R}^{b_{1}\times d}\) _and_ \(S\in\mathbb{R}^{b_{2}\times d}\) _are defined as in Definition_ 4.1_._ * \(g\in\mathbb{R}^{n}\) _and_ \(h\in\mathbb{R}^{n}\) _are vectors._ * \(b_{\min}=\{b_{1},b_{2}\}\)_._ _Then, we have_ \[g^{\top}C(R^{\top}R)\widetilde{B}(S^{\top}S)C^{\top}h-g^{\top}C\widetilde{B}C ^{\top}h\lesssim K_{0},\] _with probability at least \(1-1/\operatorname{poly}(n)\) and \(K_{0}\) is defined as follows:_ \[K_{0}\ :=\frac{\log^{1.5}d}{\sqrt{b_{\min}}}\cdot(\|g^{\top}C\|_{2}\| \widetilde{B}C^{\top}h\|_{2}+\|g^{\top}C\widetilde{B}\|_{2}\|C^{\top}h\|_{2}) +\frac{\log^{3}d}{b_{\min}}\cdot\|g^{\top}C\|_{2}\|C^{\top}h\|_{2}\|\widetilde {B}\|_{F}\] By using the above lemma, we could obtain the following result by setting \(\widetilde{B}=(R_{1}^{\top}R_{1}AWA^{\top}R_{2}^{\top}R_{2})^{-1}\) and \(C=W^{1/2}A^{\top}\), where both \(R_{1}\) and \(R_{2}\) are independent AMS matrices. \[|g^{\top}\widetilde{P}h-g^{\top}\widehat{P}h|\lesssim K_{0}.\] Although the above lemma could be used to bound the error of the second step. However, it does not show that \(\widetilde{B}\) could be bounded by \(A\) and \(W\). It is non-trivial to prove it. In order to bound the error, we first use the above lemma to obtain that \[|x^{\top}R^{\top}RB^{-1}S^{\top}Sx-x^{\top}B^{-1}x|\leq\epsilon_{0}\lambda_{ \min}(B^{-1})\] where \(b_{\min}=\{b_{1},b_{2}\}\), \(\kappa=\lambda_{\max}(B)/\lambda_{\min}(B)\), \(\epsilon_{0}=O(\sqrt{n}\log^{3}d/b_{\min})\kappa\in(0,1/10)\), \(R\in\mathbb{R}^{b_{1}\times d}\) and \(S\in\mathbb{R}^{b_{2}\times d}\) are matrices defined as in Definition 4.1 and \(B=(AWA^{\top})^{-1}\). Then we use the following lemma to bound the error that is caused by adding sketching matrices in the inverse part. **Lemma 5.2**.: _If the following conditions hold:_ * \(B\in\mathbb{R}^{d\times d}\) _is a matrix._ * \(R\in\mathbb{R}^{b_{1}\times d}\) _and_ \(S\in\mathbb{R}^{b_{2}\times d}\) _are defined as in Definition_ 4.1_._ * \(g\in\mathbb{R}^{n}\) _and_ \(h\in\mathbb{R}^{n}\) _are vectors._ * \(\epsilon_{0}\in(0,1/10)\)_._ _Then, we have_ \[(1-2\epsilon_{0})B\preceq(R^{\top}RB^{-1}S^{\top}S)^{-1}\preceq(1+2\epsilon_{ 0})B\] _with probability at least \(1-1/\operatorname{poly}(n)\)._ By using the above lemma, we could obtain that \[|g^{\top}Ph-g^{\top}\widehat{P}h|\leq 2\epsilon_{0}\|g^{\top}C\|_{2}\|C^{ \top}h\|_{2}\|B\|_{2}\] with probability \(1-1/\operatorname{poly}(n)\), where \(C=W^{1/2}A^{\top}\) and \(B=(AWA^{\top})^{-1}\in\mathbb{R}^{d\times d}\). Finally, we combine the result of Lemma 5.1 and Lemma 5.2 together to get the following theorem. **Theorem 5.3**.: _If the following conditions hold_ * _Given_ \(A\in\mathbb{R}^{d\times n}\) _and_ \(W\in\mathbb{R}^{n\times n}\)_._ * _Let_ \(R_{1}\in\mathbb{R}^{b_{1}\times d}\)_,_ \(R_{2}\in\mathbb{R}^{b_{2}\times d}\)_,_ \(R_{3}\in\mathbb{R}^{b_{3}\times d}\) _and_ \(R_{4}\in\mathbb{R}^{b_{4}\times d}\) _be four independent AMS matrices._ * \(P\) _is defined as in Eq._ (3)_._ * \(\widetilde{P}\) _is defined as in Def._ 4.3_._ * _Let_ \(g,h\in\mathbb{R}^{n}\) _be two vectors._ * _Let_ \(b_{\min}:=\min\{b_{1},b_{2}\}\)_._ _Then, we have_ \[|g^{\top}Ph-g^{\top}\widetilde{P}h|\lesssim\log^{6}d\cdot(\frac{1}{\sqrt{b_{ \min}}}+\frac{n}{b_{\min}^{2}})\cdot\kappa\cdot\|g^{\top}C\|_{2}\|C^{\top}h\| _{2}\|B\|_{2}\] _with probability at least \(1-1/\operatorname{poly}(n)\). 
Note that \(C=W^{1/2}A^{\top}\), \(B=(AWA^{\top})^{-1}\), and \(\kappa=\lambda_{\max}(B)/\lambda_{\min}(B)\)._ Since \(\kappa>1\), we can choose \(b_{\min}=\epsilon^{-1}\sqrt{n}\kappa^{2}\log^{3}d\) with \(\epsilon\in(0,1/10)\). Then, we obtain that \(|g^{\top}Ph-g^{\top}\widetilde{P}h|\leq\epsilon\|g^{\top}C\|_{2}\|C^{\top}h\|_{2}\|B\|_{2}\). ## 6 Compared to Standard Methods In order to show the effectiveness and efficiency of our algorithm, we discuss three naive models and point out the disadvantages of each. We introduce the following three straightforward methods: Model 1 and Model 2 cannot obtain the right result under their respective frameworks. Model 3 can obtain the correct result, but it needs to send \(O(n^{2})\) words at each iteration. Moreover, Model 3 also requires clients to share their data with the untrusted server, which is not allowed under the FL setting. **Model 1:** In the \(t\)-th step, each client does the following operations: (1) Compute \(W_{i}^{t}\) and \(h_{i}\); (2) Compute the local \(P_{i}^{t}\), where \(P_{i}^{t}=(W_{i}^{t})^{1/2}A_{i}^{\top}(A_{i}W_{i}^{t}A_{i}^{\top})^{-1}A_{i}(W_{i}^{t})^{1/2}\); (3) Using the local \(P_{i}^{t}\) and \(h_{i}\), the client computes the local updates \(\delta_{x,i}\) and \(\delta_{s,i}\); (4) Finally, the client sends its local updates \(\delta_{x,i}\) and \(\delta_{s,i}\) to the server. The server combines all local updates together. However, the main issue is that \[\oplus_{i=1}^{m}[(W_{i}^{t})^{1/2}(I-P_{i}^{t})(W_{i}^{t})^{1/2}h_{i}]\neq(W^{t})^{1/2}(I-P^{t})(W^{t})^{1/2}h\] and \[\oplus_{i=1}^{m}[(W_{i}^{t})^{-1/2}P_{i}^{t}(W_{i}^{t})^{1/2}h_{i}]\neq(W^{t})^{-1/2}P^{t}(W^{t})^{1/2}h\] where \(P=(W^{t})^{1/2}A^{\top}(AW^{t}A^{\top})^{-1}A(W^{t})^{1/2}\), and \(I\) is an identity matrix. **Model 2:** In the \(t\)-th step, each client does the following operations: (1) Compute \(W_{i}^{t}\) and \(h_{i}\) locally; (2) Send \((A_{i}W_{i}^{t}A_{i}^{\top})^{-1}\) and \(h_{i}\) to the server. However, this method does not work well. The reason is that \[\otimes_{i=1}^{m}[(W_{i}^{t})^{1/2}A_{i}^{\top}(A_{i}W_{i}^{t}A_{i}^{\top})^{-1}A_{i}(W_{i}^{t})^{1/2}]\oplus_{i=1}^{m}h_{i}\] \[\neq[(W^{t})^{1/2}A^{\top}(AW^{t}A^{\top})^{-1}A(W^{t})^{1/2}]h\] where \(W^{t}=\otimes_{i=1}^{m}W_{i}^{t},A=\oplus_{i=1}^{m}A_{i},\quad\text{and}\quad h=\oplus_{i=1}^{m}h_{i}\). **Model 3:** Each client sends its data to the server at the 0-th iteration. Then, in the \(t\)-th step, each client does the following operations: (1) Compute \(W_{i}^{t}\) and \(h_{i}\) locally; (2) Send \(W_{i}^{t}\) and \(h_{i}\) to the server. The server computes \(P\) by the following equation: \[P=(\otimes_{i=1}^{m}(W_{i}^{t})^{1/2}A^{\top})(A\otimes_{i=1}^{m}W_{i}^{t}A^{\top})^{-1}(A\otimes_{i=1}^{m}(W_{i}^{t})^{1/2})\] Compared with the two models above, this method obtains the correct result in the end. However, it has to send \(O(n^{2})\) words at each iteration. In reality, distributed machine learning is often limited by network bandwidth. Moreover, users are usually not willing to share their private data with an untrusted server because of data privacy. In view of this, we propose a communication-efficient distributed interior point method under FL. ## 7 Conclusion and Discussion In a nutshell, we present the first distributed interior point method algorithm (FL-IPM) to address empirical risk minimization under FL.
There are differences between our algorithm and existing algorithms and the novelty of our work is shown below: (1) There exist a large number of works related to the distributed first-order optimization algorithms. However, our algorithm is a second-order optimization problem under federated learning settings. (2) We use the sketching technique to reduce the communication cost of federated learning, which is the bottleneck of federated learning. (3) Compared with the existing distributed second-order optimization algorithms, we can provide convergence analysis for our solution without making strong assumptions. As for future work, there are several things we need to consider, if we want to apply our algorithm in the real system: First, we need to consider the stragglers and device heterogeneity in the real system environment. We need to design robust algorithms to deal with stragglers during the training. In addition, the scalability of large networks is also very important, especially the latency and throughput of the network. Finally, the computational cost of the devices and server should be taken into consideration. We present theoretical results in this paper, and we are not aware of any negative societal impact. ## Appendix Roadmap.The structure of the appendix is outlined as follows: * Section A claims the probability tools used in this paper and shows the properties of random sketching matrix. * Section B presents how to bound the error of adding two sketching matrices. * Section C shows that \(|g^{\top}Ph-g^{\top}\widetilde{P}h|\) is small. * Section D presents the primary outcome of this paper along with its corresponding proof. * Section E shows several basic results of Algorithm 1. * Section F states some basic results of self-concordance function. ## Appendix A Probability Tools and Basic Properties of Random Sketching Matrices In this paper, we care less about the running time of each client in our application. The issue we want to address in this paper is the limitation of the network bandwidth (bandwidth between server and clients). In view of this, we use subsampled randomized Hadamard/Fourier matrix1 and AMS matrices. Footnote 1: We want to remark that SRHT has fast computation advantage compared to AMS. Using SRHT [11] allows multiplying the matrix with \(k\) vectors only takes \(kn\log n\) time. This is much faster compared to AMS. In our application, we only use nice statistical properties of SRHT matrices without using any fast Fourier transform [13], or more fancy sparse Fourier transform [12, 11, 13, 14, 15, 16, 17, 18, 19, 20, 21]. However, in our case, we have more different sketching matrices and also need to apply sketching matrices inside inversion. In Section A.1, we introduce the concentration inequalities. In Section A.2, we present the properties obtained from random projection. ### Concentration inequalities We first state several useful inequalities. 
**Lemma A.1** (Lemma 1 on page 1325 of [11]).: _If the following conditions hold_ * \(X\sim\mathcal{X}_{k}^{2}\) _is a random variable, which is a chi-squared distribution and has_ \(k\) _degrees of freedom._ * _Each of them has a mean of_ \(0\) _and a variance of_ \(\sigma^{2}\)_._ _Then, we have_ \[\Pr[X-k\sigma^{2}\geq(2\sqrt{kt}+2t)\sigma^{2}]\leq\exp(-t)\] _and_ \[\Pr[k\sigma^{2}-X\geq 2\sqrt{kt}\sigma^{2}]\leq\exp(-t).\] **Lemma A.2** (Khintchine's Inequality).: _If the following conditions hold_ * \(\sigma_{1},\cdots,\sigma_{n}\) _are the independent and identically distributed sign random variables._ * \(z_{1},\cdots,z_{n}\) _are real numbers._ _Then, there exists positive constants, namely \(C\) and \(C^{\prime}\), satisfying that:_ \[\Pr\left[\left|\sum_{i=1}^{n}z_{i}\sigma_{i}\right|\geq Ct\|z\|_{2}\right]\leq \exp(-C^{\prime}t^{2})\] **Lemma A.3** (Bernstein Inequality).: _If the following conditions hold_ * \(X_{1},\cdots,X_{n}\) _is a set of independent random variables with zero means._ * _For any arbitrary_ \(1\leq i\leq n\)_, let the absolute value of each_ \(X_{i}\) _is almost surely bounded by a constant_ \(M\)_._ _Then, for any positive value \(t\), the following inequality holds:_ \[\Pr\left[\sum_{i=1}^{n}X_{i}>t\right]\leq\exp\left(-\frac{t^{2}/2}{\sum_{j=1}^ {n}\mathbf{E}[X_{j}^{2}]+Mt/3}\right)\] ### Properties obtained by random projection Here, we formally define the SRHT matrix and AMS sketching matrix and analyze their properties. **Definition A.4** (Subsampled randomized Hadamard/Fourier transform (SRHT) matrix [13]).: The SRHT matrix, denoted as \(R=\sqrt{n/b}\cdot SHD\), where \(R\in\mathbb{R}^{b\times n}\), and \(S\in\mathbb{R}^{b\times n}\) represents a random matrix whose rows are \(b\) uniform samples (without replacement) from the standard basis of \(\mathbb{R}^{n}\), \(H\in\mathbb{R}^{n\times n}\) is a normalized Walsh-Hadamard matrix, and \(D\in\mathbb{R}^{n\times n}\) is a diagonal matrix whose diagonal elements are i.i.d. Rademacher random variables. Figure 3: An example of the barrier function: \(-(\ln(1+x)+\ln(2x))\). The variable is changed from \(0.5\) to \(10\). In this case, \(A=[-1,-2]^{\top}\), and \(b=[1,0]^{\top}\). **Definition A.5** (AMS sketch matrix [1]).: Let \(h_{1},h_{2},\ldots,h_{b}\) be \(b\) random hash functions picking from a \(4\)-wise independent hash family \(\mathcal{H}=\{h:[n]\rightarrow\{-1/\sqrt{b},+1/\sqrt{b}\}\}\). Then, \(R\in\mathbb{R}^{b\times n}\) is a AMS sketch matrix if we set \(R_{i,j}=h_{i}(j)\). **Lemma A.6** (Lemma E.5 in [11]).: _If the following conditions hold_ * _Let_ \(h\in\mathbb{R}^{n}\) _be a fixed vector._ * _Let_ \(R\in\mathbb{R}^{b\times n}\) _be a SRHT or AMS sketch matrix as in Definition_ A.4 _and_ A.5_._ _Then, we have_ \[\mathbf{E}[R^{\top}Rh]=h,\quad\mathbf{E}[(R^{\top}Rh)_{i}^{2}] \leq h_{i}^{2}+\frac{1}{b}\|h\|_{2}^{2}\] \[\Pr\left[|(R^{\top}Rh)_{i}-h_{i}|>\|h\|_{2}\frac{\log(n/\delta)}{ \sqrt{b}}\right]\leq\delta\] ## Appendix B Sketch more than once Now, we can bound the error of adding two sketching matrices. 
**Lemma B.1** (Error bound of adding two sketching matrices).: _If the following conditions hold_ * \(R\in\mathbb{R}^{b_{1}\times n},S\in\mathbb{R}^{b_{2}\times n}\) _are defined as in Def._ A.5_._ * \(B\in\mathbb{R}^{n\times n}\) _is a matrix._ * \(u,v\in\mathbb{R}^{n}\) _are vectors._ _Then, with probability \(1-1/\operatorname{poly}(n)\),_ \[|u^{\top}R^{\top}RBS^{\top}Sv-u^{\top}Sv|\] \[\lesssim\frac{\log^{1.5}n}{\sqrt{b_{1}}}\cdot\|u\|_{2}\|Sv\|_{2}+ \frac{\log^{1.5}n}{\sqrt{b_{2}}}\cdot\|u^{\top}B\|_{2}\|v\|_{2}+\frac{\log^{3} n}{\sqrt{b_{1}b_{2}}}\cdot\|u\|_{2}\|v\|_{2}\|B\|_{F}.\] _holds._ Figure 4: This is an overview of our framework. In our framework, there is no need for clients to share data with the server. Clients share partial Hessian information with the server. And the server computes the update information by using Hessian information, then sends the update information to the client. Proof.: Let \(i\) be in \([n]\). Let the \(i\)-th column of \(R\) be \(R_{i}\in\mathbb{R}^{b_{1}}\). Let the \(i\)-th column of \(S\) be \(S_{i}\in\mathbb{R}^{b_{2}}\). Let \(\sigma_{i}\) be a random sign. Let \(R\) be an AMS matrix. Every column \(R_{i}\) of \(R\) follows the same distribution as \(\sigma_{i}R_{i}\). We have that \(R\) satisfies: \[1. \langle R_{i},R_{i}\rangle=1,\forall i\in[n]. \tag{5}\] \[2. \Pr[\langle R_{i},R_{j}\rangle\leq\frac{\sqrt{\log(n/\delta)}}{ \sqrt{b_{1}}},\forall i\neq j\in[n]]\geq 1-\delta. \tag{6}\] Likewise, \(S\) is an AMS matrix, and the distribution of each column \(S_{i}\) of \(S\) is identical to \(\sigma_{i}^{\prime}S_{i}\), where \(\sigma_{i}^{\prime}\) represents a random sign. Additional information can be found in [1]. Then, we can get \[u^{\top}(R^{\top}R)B(S^{\top}S)v= \sum_{i,j,i^{\prime},j^{\prime}}u_{i}v_{j^{\prime}}\sigma_{i} \sigma_{j}\sigma_{i^{\prime}}^{\prime}\sigma_{j^{\prime}}^{\prime}\langle R_{ i},R_{j}\rangle B_{j,i^{\prime}}\langle S_{i^{\prime}},S_{j^{\prime}}\rangle \tag{7}\] Therefore, we can divide the summation in equation Eq. (7) into three components: 1. The first part involves two pairs of indices being identical: \(j=i\) and \(j^{\prime}=i^{\prime}\). 2. The second part occurs when one pair of indices is the same: either \(j=i\) and \(j^{\prime}\neq i^{\prime}\), or conversely, \(j\neq i\) and \(j^{\prime}=i^{\prime}\). 3. The third part arises when no pair of indices are the same: \(j\neq i\) and \(j^{\prime}\neq i^{\prime}\). **Proof of Part 1.** Suppose \(j=i\) and \(j^{\prime}=i^{\prime}\). We can get \[\sum_{i=j,i^{\prime}=j^{\prime}}u_{i}v_{j^{\prime}}\sigma_{i} \sigma_{j}\sigma_{i^{\prime}}^{\prime}\sigma_{j^{\prime}}^{\prime}\langle R_{ i},R_{j}\rangle B_{j,i^{\prime}}\langle S_{i^{\prime}},S_{j^{\prime}}\rangle \tag{8}\] \[= \sum_{i,i^{\prime}}u_{i}v_{i^{\prime}}B_{i,i^{\prime}}\] \[= u^{\top}Bv\] For the first step, we use the fact that \(\langle R_{i},R_{i}\rangle=\langle S_{i^{\prime}},S_{i^{\prime}}\rangle=1\) for all \(i\) and \(i^{\prime}\) in \([n]\), as shown in Eq. (5). **Proof of Part 2.** Suppose that either \(j=i\) and \(j^{\prime}\neq i^{\prime}\), or conversely, \(j\neq i\) and \(j^{\prime}=i^{\prime}\). Without loss of generality, we suppose \(j=i\) and \(j^{\prime}\neq i^{\prime}\). 
Then, we can get \[\sum_{i=j,i^{\prime}\neq j^{\prime}}u_{i}v_{j^{\prime}}\sigma_{i}\sigma_{j}\sigma_{i^{\prime}}^{\prime}\sigma_{j^{\prime}}^{\prime}\langle R_{i},R_{j}\rangle B_{j,i^{\prime}}\langle S_{i^{\prime}},S_{j^{\prime}}\rangle= \sum_{i,i^{\prime}\neq j^{\prime}}u_{i}v_{j^{\prime}}\sigma_{i^{\prime}}^{\prime}\sigma_{j^{\prime}}^{\prime}B_{i,i^{\prime}}\langle S_{i^{\prime}},S_{j^{\prime}}\rangle= \sum_{j^{\prime}}\sigma_{j^{\prime}}^{\prime}v_{j^{\prime}}\sum_{i^{\prime}\neq j^{\prime}}\sigma_{i^{\prime}}^{\prime}(B^{\top}u)_{i^{\prime}}\langle S_{i^{\prime}},S_{j^{\prime}}\rangle,\] For the first step, we use the fact that \(\langle R_{i},R_{i}\rangle=1\) for all \(i\) in \([n]\), as shown in Eq. (5). For the second step, we use \(\sum_{i}u_{i}B_{i,i^{\prime}}=(B^{\top}u)_{i^{\prime}}\).
By the Union bound and Lemma A.2, we can get \[(\sum_{j^{\prime}}\sigma^{\prime}_{j^{\prime}}v_{j^{\prime}}\sum _{i^{\prime}\neq j^{\prime}}\sigma^{\prime}_{i^{\prime}}(B^{\top}u)_{i^{\prime }}\langle S_{i^{\prime}},S_{j^{\prime}}\rangle)^{2}\] \[\lesssim \log n\cdot\sum_{j^{\prime}}v_{j^{\prime}}^{2}(\sum_{i^{\prime} \neq j^{\prime}}\sigma^{\prime}_{i^{\prime}}(B^{\top}u)_{i^{\prime}}\langle S _{i^{\prime}},S_{j^{\prime}}\rangle)^{2}\] \[\lesssim \log^{2}n\cdot\sum_{j^{\prime}}v_{j^{\prime}}^{2}\sum_{i^{\prime }\neq j^{\prime}}(B^{\top}u)_{i^{\prime}}^{2}\langle S_{i^{\prime}},S_{j^{ \prime}}\rangle^{2}\] \[\lesssim \log^{3}n/b_{2}\cdot\sum_{j^{\prime}}v_{j^{\prime}}^{2}\sum_{i^{ \prime}\neq j^{\prime}}(B^{\top}u)_{i^{\prime}}^{2}\] \[\lesssim \log^{3}n/b_{2}\cdot\|v\|_{2}^{2}\|B^{\top}u\|_{2}^{2},\] with probability at least \(1-1/\operatorname{poly}(n)\), where the first step follows from \(t=O(\sqrt{\log n})\) and Lemma A.2, the second step is obtained by \(t=O(\sqrt{\log n})\) and Lemma A.2 again, and the third step is derived from Eq. (6). Combining the previous two equations, and considering the symmetry of the case where \(i^{\prime}=j^{\prime}\) and \(i\neq j\), we can get that \[\sum_{\begin{subarray}{c}i=j,i^{\prime}\neq j^{\prime}\\ \text{or }i^{\prime}=j^{\prime},i\neq j\end{subarray}}u_{i}v_{j^{\prime}} \sigma_{i}\sigma_{j}\sigma^{\prime}_{i^{\prime}}\sigma^{\prime}_{j^{\prime}} \langle R_{i},R_{j}\rangle B_{j,i^{\prime}}\langle S_{i^{\prime}},S_{j^{\prime }}\rangle\] \[\lesssim \log^{1.5}n/\sqrt{b_{1}}\cdot\|u\|_{2}\|Rv\|_{2}+\log^{1.5}n/ \sqrt{b_{2}}\cdot\|u^{\top}B\|_{2}\|v\|_{2} \tag{9}\] with a probability of at least \(1-1/\operatorname{poly}(n)\). **Proof of Part 3.** Suppose \(j\neq i\) and \(j^{\prime}\neq i^{\prime}\). We can show \[(\sum_{i\neq j,i^{\prime}\neq j^{\prime}}u_{i}v_{j^{\prime}}\sigma _{i}\sigma_{j}\sigma^{\prime}_{i^{\prime}}\sigma^{\prime}_{j^{\prime}}\langle R _{i},R_{j}\rangle B_{j,i^{\prime}}\langle S_{i^{\prime}},S_{j^{\prime}}\rangle) ^{2}\] \[= (\sum_{i}\sigma_{i}u_{i}\sum_{j^{\prime}}\sigma^{\prime}_{j^{ \prime}}v_{j^{\prime}}\sum_{i^{\prime}\neq j^{\prime}}\sigma^{\prime}_{i^{ \prime}}\langle S_{i^{\prime}},S_{j^{\prime}}\rangle\sum_{j\neq i}\sigma_{i} \langle R_{i},R_{j}\rangle B_{j,i^{\prime}}\rangle^{2}\] \[\lesssim \log n\cdot\sum_{i}u_{i}^{2}(\sum_{j^{\prime}}\sigma^{\prime}_{j ^{\prime}}v_{j^{\prime}}\sum_{i^{\prime}\neq j^{\prime}}\sigma^{\prime}_{i^{ \prime}}\langle S_{i^{\prime}},S_{j^{\prime}}\rangle\sum_{j\neq i}\sigma_{i} \langle R_{i},R_{j}\rangle B_{j,i^{\prime}}\rangle^{2}\] \[\lesssim \log^{2}n\cdot\sum_{i}u_{i}^{2}\sum_{j^{\prime}}v_{j^{\prime}}^{2} (\sum_{i^{\prime}\neq j^{\prime}}\sigma^{\prime}_{i^{\prime}}\langle S_{i^{ \prime}},S_{j^{\prime}}\rangle\sum_{j\neq i}\sigma_{i}\langle R_{i},R_{j} \rangle B_{j,i^{\prime}})^{2}\] \[\lesssim \log^{3}n\cdot\sum_{i}u_{i}^{2}\sum_{j^{\prime}}v_{j^{\prime}}^{2} \sum_{i^{\prime}\neq j^{\prime}}\langle S_{i^{\prime}},S_{j^{\prime}}\rangle^{2 }\sum_{j\neq i}\langle R_{i},R_{j}\rangle^{2}B_{j,i^{\prime}}^{2}\] \[\lesssim \log^{6}n/(b_{1}b_{2})\cdot\|u\|_{2}^{2}\|v\|_{2}^{2}\|B\|_{F}^{2},\] with probability \(1-1/\operatorname{poly}(n)\), where 2nd step follows from \(t=O(\sqrt{n})\) and Lemma A.2, the 3rd comes from \(t=O(\sqrt{n})\), Lemma A.2, for all \(i\in[n]\), and employing the Union bound to combine the \(n\) inequalities, the 4th and 5th step can be justified based on the same reasoning as the 3rd step. 
For the 6th step, we use the fact that for all \(i^{\prime}\neq j^{\prime}\in[n]\) and \(i\neq j\in[n]\), with a probability of at least \(1-1/\operatorname{poly}(n)\), \[\langle S_{i^{\prime}},S_{j^{\prime}}\rangle\lesssim\sqrt{(\log n)/b_{2}}\] and \[\langle R_{i},R_{j}\rangle\lesssim\sqrt{(\log n)/b_{1}}.\] For all \(i\), \(j\), \(i^{\prime}\), and \(j^{\prime}\) in \([n]\), we apply the Union bound to combine \(2n^{2}\) such bounds. Therefore, we can get \[\sum_{i\neq j,i^{\prime}\neq j^{\prime}}u_{i}v_{j^{\prime}}\sigma_{i}\sigma_{j }\sigma^{\prime}_{i^{\prime}}\sigma^{\prime}_{j^{\prime}}\langle R_{i},R_{j} \rangle B_{j,i^{\prime}}\langle S_{i^{\prime}},S_{j^{\prime}}\rangle\lesssim \log^{3}n/\sqrt{b_{1}b_{2}}\cdot\|u\|_{2}\|v\|_{2}\|B\|_{F}. \tag{10}\] with probability at least \(1-1/\operatorname{poly}(n)\). **Combining Part 1, Part 2, and Part 3.** First, we add Eq. (8), (9), and (10) together. Then, we plug their sum into Eq. (7). Finally, through Union bound, we can get \[u^{\top}(R^{\top}R)B(S^{\top}S)v-u^{\top}Bv\] \[\lesssim\frac{\log^{1.5}n}{\sqrt{b_{1}}}\cdot\|u\|_{2}\|Bv\|_{2}+ \ \frac{\log^{1.5}n}{\sqrt{b_{2}}}\cdot\|u^{\top}B\|_{2}\|v\|_{2}+\ \frac{\log^{3}n}{\sqrt{b_{1}b_{2}}}\cdot\|u\|_{2}\|v\|_{2}\|B\|_{F},\] with probability at least \(1-1/\operatorname{poly}(n)\). Therefore, we complete the proof. ## Appendix C Bounding error of sketching This section is arranged as follows: * Section C.1 gives the definition of \(P\), \(\widehat{P}\), and \(\widetilde{P}\). * Section C.2 presents the steps to prove that \(P\approx\widetilde{P}\). * Section C.3 shows that \(|g^{\top}Ph-g^{\top}\widehat{P}h|\) is small. * Section C.4 presents the tools that we use to bound \(|g^{\top}Ph-g^{\top}\widehat{P}h|\). * Section C.5 shows that \(|g^{\top}\widetilde{P}h-g^{\top}\widehat{P}h|\) is small. * Section C.6 presents the tools that we use to bound \(|g^{\top}\widetilde{P}h-g^{\top}\widehat{P}h|\). * Section C.7 shows that \(|g^{\top}Ph-g^{\top}\widetilde{P}h|\) is small by combining the result of \(|g^{\top}Ph-g^{\top}\widehat{P}h|\) and \(|g^{\top}\widetilde{P}h-g^{\top}\widehat{P}h|\). ### Definition of \(P\), \(\widetilde{P}\), and \(\widetilde{P}\) In this section, we formally define \(P\), \(\widehat{P}\), and \(\widetilde{P}\). **Definition C.1** (Definition of Projection Matrices).: We define \(P\in\mathbb{R}^{n\times n}\), \(\widehat{P}\in\mathbb{R}^{n\times n}\), and \(\widetilde{P}\in\mathbb{R}^{n\times n}\) as follows: \[P :=W^{1/2}A^{\top}(AWA^{\top})^{-1}AW^{1/2}\] \[\widehat{P} :=W^{1/2}A^{\top}(R_{1}^{\top}R_{1}AWA^{\top}R_{2}^{\top}R_{2})^ {-1}AW^{1/2}\] \[\widetilde{P} :=W^{1/2}A^{\top}R_{3}^{\top}R_{3}(R_{1}^{\top}R_{1}AWA^{\top}R_ {2}^{\top}R_{2})^{-1}R_{4}^{\top}R_{4}AW^{1/2}\] where \(R_{1}\in\mathbb{R}^{b_{1}\times d}\), \(R_{2}\in\mathbb{R}^{b_{2}\times d}\), \(R_{3}\in\mathbb{R}^{b_{3}\times d}\), and \(R_{4}\in\mathbb{R}^{b_{4}\times d}\) are sketching matrices. Among them, \(P\) is the ideal case of the projection matrix. \(\widetilde{P}\) is the projection matrix we use under FL. We construct \(\widehat{P}\) to analyze that \(|g^{\top}Ph-g^{\top}\widetilde{P}h|\) is bounded by \(g\), \(h\), \(A\), and \(W\), for any \(g\in\mathbb{R}^{n}\) and \(h\in\mathbb{R}^{n}\). ### Proof sketch In this section, we show that \(P\approx\widetilde{P}\). Our goal is to show that \[|g^{\top}Ph-g^{\top}\widetilde{P}h|\] is bounded by \(g\), \(h\), \(A\) and \(W\). We split it into following steps. 
For any two vectors \(g,h\in\mathbb{R}^{d}\), we want to prove that * \(|g^{\top}Ph-g^{\top}\widehat{P}h|\) is small, we prove this by using Lemma C.3 with * \(R=R_{1}\), and * \(|g^{\top}\widetilde{P}h-g^{\top}\widehat{P}h|\) is small, we prove this by using Lemma C.5 with * \(R=R_{3}\), and * \(|g^{\top}Ph-g^{\top}\widetilde{P}h|\) is small, we could prove it by using * \(|g^{\top}Ph-g^{\top}\widetilde{P}h|\leq|g^{\top}Ph-g^{\top}\widehat{P}h|+|g^{ \top}\widehat{P}h-g^{\top}\widetilde{P}h|\), * \(\|\widetilde{B}C^{\top}h\|_{2}\leq\|\widetilde{B}\|_{2}\|C^{\top}h\|_{2}\leq(1+ \epsilon_{0})\|B\|_{2}\|C^{\top}h\|_{2}\), and * \(\|\widetilde{B}\|_{F}\leq\sqrt{n}\|\widetilde{B}\|_{2}\leq(1+\epsilon_{0}) \sqrt{n}\|B\|_{2}\) ### Bounding \(|g^{\top}Ph-g^{\top}\widehat{P}h|\) The goal of this section is to prove the following lemma to indicate that we could bound \(|g^{\top}Ph-g^{\top}\widehat{P}h|\). Note that we assume that \(g=h\) in this lemma. However, in order to make other lemma more general, we do not assume that \(g=h\) in other lemma in this section. **Lemma C.2** (\(P\) and \(\widehat{P}\) are close).: _If the following conditions hold_ * _Let_ \(g\in\mathbb{R}^{n}\) _and_ \(h\in\mathbb{R}^{n}\) _be two vectors._ * _Let_ \(\epsilon_{0}\in(0,1/10)\)_._ _Then, we have_ \[|g^{\top}Ph-g^{\top}\widehat{P}h| \leq 2\epsilon_{0}g^{\top}CBC^{\top}h\] \[\leq 2\epsilon_{0}\|g^{\top}C\|_{2}\|C^{\top}h\|_{2}\|B\|_{2}\] _with probability at least \(1-1/\operatorname{poly}(n)\), where \(C=W^{1/2}A^{\top}\) and \(B=(AWA^{\top})^{-1}\in\mathbb{R}^{d\times d}\)._ Proof.: We assume that \(f(B,R,S)=R^{\top}RB^{-1}S^{\top}S\). By using Lemma C.3, we could obtain that \[(1-2\epsilon_{0})B\preceq(f(B,R,S))^{-1}\preceq(1+2\epsilon_{0})B\] Then, for any two vectors \(g,h\in\mathbb{R}^{d}\), we could obtain that \[(1-2\epsilon_{0})g^{\top}CBC^{\top}h \leq g^{\top}C(f(B,R,S))^{-1}C^{\top}h\] \[\leq (1+2\epsilon_{0})g^{\top}CBC^{\top}h\] According to the above inequality, it is easy for us to get that \(|g^{\top}Ph-g^{\top}\widehat{P}h|\leq 2\epsilon_{0}g^{\top}CBC^{\top}h\). We could obtain that \[g^{\top}CBC^{\top}h \leq \sqrt{g^{\top}CBC^{\top}g}\cdot\sqrt{h^{\top}CBC^{\top}h}\] \[= \|g^{\top}C\|_{2}\|C^{\top}h\|_{2}\|B\|_{2}\] by using the Cauchy-Schwarz inequality. Therefore, we could get that \[|g^{\top}Ph-g^{\top}\widehat{P}h|\leq 2\epsilon_{0}\|g^{\top}C\|_{2}\|C^{\top}h \|_{2}\|B\|_{2}\] This finishes the proof. ### Tools for bounding \(|g^{\top}Ph-g^{\top}\widehat{P}h|\) In this section, we present the tools for bounding \(|g^{\top}Ph-g^{\top}\widehat{P}h|\). **Lemma C.3** (Tools for showing \(P\) and \(\widehat{P}\) are close).: _If the following conditions hold_ * \(R\in\mathbb{R}^{b_{1}\times d}\)_,_ \(S\in\mathbb{R}^{b_{2}\times d}\) _are defined as in Definition_ 4.1_._ * \(g,h\in\mathbb{R}^{n}\) _are two vectors._ _Then, we have that_ \[(1-2\epsilon_{0})B\preceq(R^{\top}RB^{-1}S^{\top}S)^{-1}\preceq(1+2\epsilon_{ 0})B\] _with probability at least \(1-1/\operatorname{poly}(n)\), where \(\epsilon_{0}\in(0,1/10)\), and \(B\in\mathbb{R}^{d\times d}\)._ Proof.: Given any \(x\in\mathbb{R}^{d}\) such that \(\|x\|_{2}=1\), we could use Lemma B.1 to prove that \[|x^{\top}R^{\top}RB^{-1}S^{\top}Sx-x^{\top}B^{-1}x|\leq\epsilon_{0} \lambda_{\min}(B^{-1}),\] where \(b_{\min}=\{b_{1},b_{2}\}\), \(\kappa=\lambda_{\max}(B)/\lambda_{\min}(B)\) and \(\epsilon_{0}=O(\sqrt{n}\log^{3}d/b_{\min})\kappa\). 
Then, we have to prove two cases: **Case 1:** From \(|x^{\top}R^{\top}RB^{-1}S^{\top}Sx-x^{\top}B^{-1}x|\leq\epsilon_{0}\lambda_{ \min}(B^{-1})\), we could get that \(\lambda_{\max}(R^{\top}RB^{-1}S^{\top}S-B^{-1})\leq\epsilon_{0}\kappa\lambda_ {\min}(B^{-1})\). Then, we could the following derivation process: \[0 \geq\lambda_{\max}(R^{\top}RB^{-1}S^{\top}S-B^{-1})-\epsilon_{0} \lambda_{\min}(B^{-1})\] \[\geq\lambda_{\max}(R^{\top}RB^{-1}S^{\top}S-(1+\epsilon_{0})B^{-1})\] where the first step holds, because of we use Lemma B.1 to obtain the intermediate result. And the second step holds due to the properties of eigenvalue of the matrix. Finally, we could obtain that \[R^{\top}RB^{-1}S^{\top}S\preceq(1+\epsilon_{0})B^{-1}\] **Case 2:** From \[|x^{\top}R^{\top}RB^{-1}S^{\top}Sx-x^{\top}B^{-1}x|\leq\epsilon_{0} \lambda_{\min}(B^{-1}),\] we could get that \[\lambda_{\min}(R^{\top}RB^{-1}S^{\top}S-B^{-1})\geq-\epsilon_{0} \kappa\lambda_{\min}(B^{-1}).\] Then, we could the following equation: \[0 \leq\lambda_{\min}(R^{\top}RB^{-1}S^{\top}S-B^{-1})+\epsilon_{0} \lambda_{\min}(B^{-1})\] \[\leq\lambda_{\min}(R^{\top}RB^{-1}S^{\top}S-(1-\epsilon_{0})B^{-1})\] where the first step holds, because we use Lemma B.1 to obtain the intermediate result. The second step holds because of the properties of eigenvalue. Finally, according to the above equation, we could obtain that \[(1-\epsilon_{0})B^{-1}\preceq R^{\top}RB^{-1}S^{\top}S\] Combining two above results, due to the reason that \(\epsilon_{0}=O(\sqrt{n}\log^{3}d/b_{\min})\kappa\), we could get that \[(1-\epsilon_{0})B^{-1}\preceq R^{\top}RB^{-1}S^{\top}S\preceq(1+ \epsilon_{0})B^{-1}\] for any vector \(x\in\mathbb{R}^{n}\) and \(\|x\|_{2}=1\). Finally, we could choose \(b_{\min}=\epsilon^{-1}\sqrt{n}\kappa^{2}\log^{3}d\), where \(\epsilon\in(0,1/10)\), to make \(\epsilon_{0}\in(0,1/10)\). This finishes the proof. ### Bounding \(|g^{\top}\widetilde{P}h-g^{\top}\widehat{P}h|\) We show that \(|g^{\top}\widetilde{P}h-g^{\top}\widehat{P}h|\) can be bounded. **Lemma C.4** (\(\widetilde{P}\) and \(\widehat{P}\) are close).: _If the following conditions hold_ * \(\widetilde{B}=(R_{1}^{\top}R_{1}AWA^{\top}R_{2}^{\top}R_{2})^{-1}\)_._ * \(C=W^{1/2}A^{\top}\)_._ * _Let_ \(g\in\mathbb{R}^{n}\) _and_ \(h\in\mathbb{R}^{n}\) _be two vectors._ * _Let_ \(R_{1}\in\mathbb{R}^{b_{1}\times d}\)_,_ \(R_{2}\in\mathbb{R}^{b_{2}\times d}\) _are two sketching matrices._ * _Let_ \(b_{\min}=\{b_{1},b_{2}\}\)_._ _Then, We have_ \[|g^{\top}\widetilde{P}h-g^{\top}\widehat{P}h|\] \[\lesssim \frac{\log^{1.5}d}{\sqrt{b_{\min}}}\cdot(\|g^{\top}C\|_{2}\| \widetilde{B}C^{\top}h\|_{2}+\|g^{\top}C\widetilde{B}\|_{2}\|C^{\top}h\|_{2})\] \[+ \frac{\log^{3}d}{b_{\min}}\cdot\|g^{\top}C\|_{2}\|C^{\top}h\|_{2 }\|\widetilde{B}\|_{F}\] _with probability at least \(1-1/\operatorname{poly}(n)\)._ Proof.: We could using Lemma C.5 to prove the above lemma. By setting \(\widetilde{B}=(R_{1}^{\top}R_{1}AWA^{\top}R_{2}^{\top}R_{2})^{-1}\), and \(C=W^{1/2}A^{\top}\) where \(R_{1}\in\mathbb{R}^{b_{1}\times d}\) and \(R_{2}\in\mathbb{R}^{b_{2}\times d}\) are two sketching matrices. ### Tools for Bounding \(|g^{\top}\widetilde{P}h-g^{\top}\widehat{P}h|\) We present the tools for bounding \(|g^{\top}\widetilde{P}h-g^{\top}\widehat{P}h|\). 
**Lemma C.5** (Tools for showing \(\widetilde{P}\) and \(\widehat{P}\) are close).: _If the following conditions hold_ * _Let_ \(\widetilde{B}\in\mathbb{R}^{d\times d}\) _and_ \(C\in\mathbb{R}^{n\times d}\) _be two matrices._ * \(R\in\mathbb{R}^{b_{1}\times d}\)_,_ \(S\in\mathbb{R}^{b_{2}\times d}\) _are defined as in Definition_ 4.1_._ * \(g,h\in\mathbb{R}^{n}\) _are vectors._ * _Let_ \(b_{\min}=\{b_{1},b_{2}\}\)_._ _Then, we have_ \[g^{\top}C(R^{\top}R)\widetilde{B}(S^{\top}S)C^{\top}h-g^{\top} C\widetilde{B}C^{\top}h\] \[\lesssim \frac{\log^{1.5}d}{\sqrt{b_{\min}}}\cdot(\|g^{\top}C\|_{2}\| \widetilde{B}C^{\top}h\|_{2}+\|g^{\top}C\widetilde{B}\|_{2}\|C^{\top}h\|_{2})\] \[+ \frac{\log^{3}d}{b_{\min}}\cdot\|g^{\top}C\|_{2}\|C^{\top}h\|_{2} \|\widetilde{B}\|_{F}\] _with probability at least \(1-1/\operatorname{poly}(n)\)._ Proof.: This can be proved by using Lemma B.1. ### Bounding \(|g^{\top}Ph-g^{\top}\widetilde{P}h|\) We show that \(|g^{\top}Ph-g^{\top}\widetilde{P}h|\) can be bounded. **Lemma C.6** (\(P\) and \(\widetilde{P}\) are close).: _If the following conditions hold_ * _Given_ \(A\in\mathbb{R}^{d\times n}\) _and_ \(W\in\mathbb{R}^{n\times n}\)_._ * _Let_ \(R_{1}\in\mathbb{R}^{b_{1}\times d}\)_,_ \(R_{2}\in\mathbb{R}^{b_{2}\times d}\)_,_ \(R_{3}\in\mathbb{R}^{b_{3}\times d}\)_, and_ \(R_{4}\in\mathbb{R}^{b_{4}\times d}\) _be four matrices, defined as in Definition_ 4.1_._ * _Let_ \(g\in\mathbb{R}^{n}\) _and_ \(h\in\mathbb{R}^{n}\) _be two vectors._ * _Let_ \(P\) _be defined as Eq. (_3_),_ \(\widehat{P}\) _and_ \(\widetilde{P}\) _be defined as Def._ 4.3_._ * _Let_ \(b_{\min}=\min\{b_{1},b_{2}\}\)_._ _Then, we have that_ \[|g^{\top}Ph-g^{\top}\widetilde{P}h|\lesssim\ \log^{6}d\cdot(\frac{1}{ \sqrt{b_{\min}}}+\frac{n}{b_{\min}^{2}})\kappa\|g^{\top}C\|_{2}\|C^{\top}h\|_ {2}\|B\|_{2}\] _with probability at least \(1-1/\operatorname{poly}(n)\), where \(C=W^{1/2}A^{\top}\), \(B=(AWA^{\top})^{-1}\), and \(\kappa=\lambda_{\max}(B)/\lambda_{\min}(B)\)._ Proof.: In order to simplify the proof, we first define \(B\) as follows: \[\widetilde{B}:=(R_{1}^{\top}R_{1}AWA^{\top}R_{2}^{\top}R_{2})^{-1}.\] We define \(C\) as follows: \[C:=W^{1/2}A^{\top}.\] We define \(B\) as follows: \[B:=(AWA^{\top})^{-1}.\] By using triangle inequality, we could obtain that \[|g^{\top}Ph-g^{\top}\widetilde{P}h|\leq|g^{\top}Ph-g^{\top} \widehat{P}h|+|g^{\top}\widehat{P}h-g^{\top}\widetilde{P}h|.\] By using Lemma C.3 and Lemma C.5, we could obtain that \[|g^{\top}Ph-g^{\top}\widehat{P}h|\leq 2\epsilon_{0}\|g^{\top}C\|_{2} \|C^{\top}h\|_{2}\|B\|_{2}\] and \[|g^{\top}\widehat{P}h-g^{\top}\widetilde{P}h|\] \[\lesssim\frac{\log^{1.5}d}{\sqrt{b_{\min}}}\cdot(\|g^{\top}C\|_{2 }\|\widetilde{B}C^{\top}h\|_{2}+\|g^{\top}C\widetilde{B}\|_{2}\|C^{\top}h\|_{ 2})+\ \frac{\log^{3}d}{b_{\min}}\cdot\|g^{\top}C\|_{2}\|C^{\top}h\|_{2}\| \widetilde{B}\|_{F}\] According to some facts, we could get that \(\|AB\|_{2}\leq\|A\|_{2}\cdot\|B\|_{2}\) and \(\|A\|_{2}\leq\|A\|_{F}\leq\sqrt{n}\|A\|_{2}\), for \(A\in\mathbb{R}^{m\times n}\). 
Then, we get that
\[|g^{\top}Ph-g^{\top}\widetilde{P}h|\]
\[\leq |g^{\top}Ph-g^{\top}\widehat{P}h|+|g^{\top}\widehat{P}h-g^{\top}\widetilde{P}h|\]
\[\lesssim 2\epsilon_{0}\|g^{\top}C\|_{2}\|C^{\top}h\|_{2}\|B\|_{2}\]
\[+ \frac{\log^{1.5}d}{\sqrt{b_{\min}}}\cdot(\|g^{\top}C\|_{2}\|\widetilde{B}C^{\top}h\|_{2}+\|g^{\top}C\widetilde{B}\|_{2}\|C^{\top}h\|_{2})\]
\[+ \frac{\log^{3}d}{b_{\min}}\cdot\|g^{\top}C\|_{2}\|C^{\top}h\|_{2}\|\widetilde{B}\|_{F}\]
\[\lesssim 2\epsilon_{0}\|g^{\top}C\|_{2}\|C^{\top}h\|_{2}\|B\|_{2}\]
\[+ \frac{\log^{1.5}d}{\sqrt{b_{\min}}}\cdot(1+2\epsilon_{0})\|g^{\top}C\|_{2}\|C^{\top}h\|_{2}\|B\|_{2}\]
\[+ \frac{\log^{3}d}{b_{\min}}\cdot(1+2\epsilon_{0})\sqrt{n}\cdot\|g^{\top}C\|_{2}\|C^{\top}h\|_{2}\|B\|_{2}\]
\[\lesssim (\frac{\log^{1.5}d}{\sqrt{b_{\min}}}+\frac{\log^{3}d}{b_{\min}}+\frac{\sqrt{n}\log^{4.5}d}{b_{\min}^{1.5}}+\frac{n\log^{6}d}{b_{\min}^{2}})\cdot\ \kappa\|g^{\top}C\|_{2}\|C^{\top}h\|_{2}\|B\|_{2}\]
\[\lesssim \log^{6}d\cdot(\frac{1}{\sqrt{b_{\min}}}+\frac{n}{b_{\min}^{2}})\cdot\kappa\|g^{\top}C\|_{2}\|C^{\top}h\|_{2}\|B\|_{2}\]
where the first step follows from the triangle inequality and the second step is due to Lemma C.3 and Lemma C.5. The third step comes from \(\|\widetilde{B}\|_{F}\leq\sqrt{n}\|\widetilde{B}\|_{2}\leq(1+\epsilon_{0})\sqrt{n}\|B\|_{2}\). Next, we show why the fourth step holds:
\[\frac{\log^{1.5}d}{\sqrt{b_{\min}}}+\frac{\log^{3}d}{b_{\min}}+\frac{\sqrt{n}\log^{4.5}d}{b_{\min}^{1.5}}+\frac{n\log^{6}d}{b_{\min}^{2}}\]
\[\lesssim \log^{6}d\cdot(\frac{1}{\sqrt{b_{\min}}}+\frac{1}{b_{\min}}+\frac{\sqrt{n}}{b_{\min}^{1.5}}+\frac{n}{b_{\min}^{2}})\]
\[\lesssim \log^{6}d\cdot(\frac{1}{\sqrt{b_{\min}}}+\frac{n}{b_{\min}^{2}})\]
where the first step follows since \(\log^{6}d\) is the dominant logarithmic factor, and the second step follows from \(1/\sqrt{b_{\min}}>1/b_{\min},\forall b_{\min}\geq 1\) and \(\sqrt{n}/b_{\min}^{1.5}<n/b_{\min}^{2},\forall b_{\min}\leq n\).

## Appendix D Main Result

In this section, we state the main result of this paper and then give its proof.

**Theorem D.1** (Formal Main Result).: _If the following conditions hold_

* \(\min_{Ax=b,x\in\Pi_{i=1}^{m}K_{i}}{c^{\top}x}\) _is a convex problem under the federated learning setting, where each_ \(K_{i}\) _is a compact convex set._
* _For each_ \(i\in[m]\)_, we are given a_ \(\nu_{i}\)_-self concordant barrier function_ \(\phi_{i}\) _for_ \(K_{i}\)_._
* _We have_ \(x^{(0)}=\arg\min_{x}\sum_{i=1}^{m}\phi_{i}(x_{i})\)_._
* _For all_ \(x\in\prod_{i=1}^{m}K_{i}\)_, we have that_ \(\|x\|_{2}\) _is bounded by_ \(R\) _(diameter of the set)._
* \(\|c\|_{2}\) _is bounded by_ \(L\) _(Lipschitz constant of the program)._

_Then, there exists a federated learning algorithm (see Algorithm 1) that runs in \(O(\sqrt{\nu}\log^{2}m\log(\frac{\nu}{\delta}))\) iterations and each iteration sends \(O(bn)\) words to find a vector \(x\) such that_
\[c^{\top}x \leq\min_{Ax=b,x\in\Pi_{i=1}^{m}K_{i}}c^{\top}x+LR\cdot\delta\]
\[\|Ax-b\|_{1} \leq 3\delta(R\sum_{i=1}^{d}\sum_{j=1}^{n}|A_{i,j}|+\|b\|_{1})\]
_where \(\|c\|_{2}\leq L\), \(\|x\|_{2}\leq R\), and \(\nu=\sum_{i=1}^{m}\nu_{i}\)._

Proof.: By combining Lemma E.3, Lemma F.2 and Lemma F.3, we obtain a vector \(x\) that satisfies the above conditions after \(O(\sqrt{\nu}\log^{2}m\log(\frac{\nu}{\delta}))\) iterations. In addition, Algorithm 1 sends \(O(bn)\) words at each iteration (Line 14 in Algorithm 1). This finishes the proof.
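As a quick numerical sanity check of the sketching-based bounds in Lemmas C.3–C.6, the following minimal sketch (Python, hypothetical dimensions) compares the bilinear form \(g^{\top}C\widetilde{B}C^{\top}h\) with its sketched counterpart \(g^{\top}C(R^{\top}R)\widetilde{B}(S^{\top}S)C^{\top}h\). Since Definition 4.1 is not reproduced in this appendix, a scaled uniform row-sampling sketch is assumed purely for illustration.

```python
import numpy as np

def sampling_sketch(b, d, rng):
    # One nonzero per row, scaled so that E[R^T R] = I_d.
    # (Assumed sketch type; Definition 4.1 is not reproduced here.)
    R = np.zeros((b, d))
    R[np.arange(b), rng.integers(0, d, size=b)] = np.sqrt(d / b)
    return R

rng = np.random.default_rng(0)
n, d, b1, b2 = 200, 50, 2000, 2000          # hypothetical sizes
C = rng.standard_normal((n, d))
B_tilde = rng.standard_normal((d, d))
g, h = rng.standard_normal(n), rng.standard_normal(n)

R, S = sampling_sketch(b1, d, rng), sampling_sketch(b2, d, rng)
exact = g @ C @ B_tilde @ C.T @ h
sketched = g @ C @ (R.T @ R) @ B_tilde @ (S.T @ S) @ C.T @ h
print("exact:", exact, " sketched:", sketched, " abs gap:", abs(exact - sketched))
```

Increasing the sketch sizes \(b_{1},b_{2}\) shrinks the observed gap roughly like \(1/\sqrt{b_{\min}}\), consistent with the leading term of the bounds above.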
## Appendix E Central Path Here, we introduce some basic result of central path in Algorithm 1, which could be used to prove the guarantee of \(W\) and the main result of this paper. Central path algorithm is a very standard method for solving linear programming [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], semi-definite programming [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]. We first give the definition of some parameters here: **Definition E.1**.: For any \(i\in[m]\), we let \(\phi_{i}(x)\) be defined as in Definition 3.2 and let \(\mu_{i}^{t}(x,s)\in\mathbb{R}^{n_{i}}\), \(\gamma_{i}^{t}(x,s)\in\mathbb{R}\), and \(c_{i}^{t}(x,s)\in\mathbb{R}\) be defined as below: \[\mu_{i}^{t}(x,s) =s_{i}/\widetilde{t}+\nabla\phi_{i}(x_{i}) \tag{11}\] \[\gamma_{i}^{t}(x,s) =\|\mu_{i}^{t}(x,s)\|_{\nabla^{2}\phi_{i}(x_{i})^{-1}}\] (12) \[c_{i}^{t}(x,s) =\begin{cases}\frac{\exp(\gamma_{i}^{t}(x,s))/\gamma_{i}^{t}(x,s) }{(\sum_{i=1}^{m}\exp(2\lambda\gamma_{i}^{t}(x,s)))^{1/2}}&\text{if }\gamma_{i}^{t}(x,s)\geq 96 \sqrt{\alpha}\\ 0&\text{otherwise}\end{cases} \tag{13}\] where \(\lambda=O(\log m)\), \(\widetilde{t}=(1-\xi/\sqrt{\nu})^{t-1}\), and \(\xi=O(\log^{-2}(m))\). According to the Definition E.1 and Algorithm 1, we could obtain that \[h_{i}^{t}=-\alpha\cdot c_{i}^{t}(\overline{x},\overline{s})\mu_{i}^{t}( \overline{x},\overline{s}) \tag{14}\] where \(\alpha=O(1/\log^{2}m)\). In addition, we define that \[\Phi^{t}(x^{t},s^{t})=\sum_{i=1}^{m}\exp(\lambda\|\mu_{i}^{t}(x^{t},s^{t})\|_{ \nabla^{2}\phi_{i}(x_{i}^{t})^{-1}})\] where \(\lambda=O(\log m)\). Then, we could obtain the following lemma. **Lemma E.2** (Bounding \(\alpha_{i}\)).: _If the following conditions hold_ * \(\alpha\) _represents the parameter in Algorithm_ 1_._ * _For any_ \(i\) _in_ \([m]\)_, we have_ \(\alpha_{i}=\|\delta_{x,i}\|_{\overline{x}_{i}}\) _Then, we have_ \[\sum_{i=1}^{m}\alpha_{i}^{2}\leq 4\alpha^{2}.\] Proof.: Note that \[\sum_{i=1}^{m}\alpha_{i}^{2} =\|\delta_{x}\|_{\overline{x}}^{2}\] \[=h^{\top}\widetilde{V}^{1/2}(I-\widetilde{P})\widetilde{V}^{1/2} \nabla^{2}\phi(\overline{x})\widetilde{V}^{1/2}(I-\widetilde{P})\widetilde{V} ^{1/2}h.\] Due to the reason that \[(1-2\alpha)(\nabla^{2}\phi_{i}(\overline{x}_{i}))^{-1}\preceq \widetilde{V}_{i}\preceq(1+2\alpha)(\nabla^{2}\phi_{i}(\overline{x}_{i}))^{-1}\] we have that \[(1-\alpha)(\nabla^{2}\phi(\overline{x}))^{-1}\preceq\widetilde{V}\preceq(1+ \alpha)(\nabla^{2}\phi(\overline{x}))^{-1}.\] Using \(\alpha\leq\frac{1}{10000}\), we have that \[\sum_{i=1}^{m}\alpha_{i}^{2}\leq 2h^{\top}\widetilde{V}^{1/2}(I-\widetilde{P}) (I-\widetilde{P})\widetilde{V}^{1/2}h\leq 2h^{\top}\widetilde{V}h\] where we used that \(I-\widetilde{P}\) is an orthogonal projection at the end. Finally, we note that \[h^{\top}\widetilde{V}h\] \[\leq 2\sum_{i=1}^{m}\|h_{i}^{t}\|_{\overline{x}_{i}}^{2}\] \[= 2\alpha^{2}\sum_{i=1}^{m}c_{i}^{t}(\overline{x},\overline{s})^{ 2}\|\mu_{i}^{t}(\overline{x},\overline{s})\|_{\overline{x}_{i}}^{2}\] \[\leq 2\alpha^{2}\sum_{i=1}^{m}(\frac{\exp(2\lambda\gamma_{i}^{t}( \overline{x},\overline{s}))/\gamma_{i}^{t}(\overline{x},\overline{s})^{2}}{ \sum_{i=1}^{m}\exp(2\lambda\gamma_{i}^{t}(\overline{x},\overline{s}))}\|\mu_{i }^{t}(\overline{x},\overline{s})\|_{\overline{x}_{i}}^{2})\] \[= 2\alpha^{2}\frac{\sum_{i=1}^{m}\exp(2\lambda\gamma_{i}^{t}( \overline{x},\overline{s}))}{\sum_{i=1}^{m}\exp(2\lambda\gamma_{i}^{t}( \overline{x},\overline{s}))}\] \[= 2\alpha^{2}\] where the second step is from the definition of \(h_{i}^{t}\) (Eq. 
(14)), the third step follows from the definition of \(c_{i}^{t}\) (Eq. (13)), and the fourth step follows from the definition of \(\gamma_{i}^{t}\) (see Eq. (12)). Therefore, putting it all together, we can show
\[\sum_{i=1}^{m}\alpha_{i}^{2}\leq 4\alpha^{2}.\]

**Lemma E.3** (Lemma A.8 in [15]).: _If \(\Phi^{t}(x^{t},s^{t})\leq 80\frac{m}{\alpha}\), then_
\[\Phi^{t+1}(x^{t+1},s^{t+1})\leq\left(1-\frac{\alpha\lambda}{40\sqrt{m}}\right)\Phi^{t}(x^{t},s^{t})+\sqrt{m}\lambda\cdot\exp(192\lambda\sqrt{\alpha}).\]
_In particular, we have \(\Phi^{t+1}(x^{t+1},s^{t+1})\leq 80\frac{m}{\alpha}\)._

## Appendix F Initial Point and Termination Condition

Now, we state some basic results on self-concordant functions, which are used to prove the main result of this paper.

**Lemma F.1** (Theorem 4.1.7, Lemma 4.2.4 in [20]).: _Let \(\phi\) be any \(\nu\)-self-concordant barrier. Then, for any \(x,y\in\mathrm{dom}\phi\), we have_
\[\langle\nabla\phi(x),y-x\rangle \leq\nu,\]
\[\langle\nabla\phi(y)-\nabla\phi(x),y-x\rangle \geq\frac{\|y-x\|_{x}^{2}}{1+\|y-x\|_{x}}.\]
_Let \(x^{*}=\arg\min_{x}\phi(x)\). For any \(x\in\mathbb{R}^{n}\) such that \(\|x-x^{*}\|_{x^{*}}\leq 1\), we have that \(x\in\mathrm{dom}\phi\). Moreover, for any \(y\in\mathrm{dom}\phi\),_
\[\|x^{*}-y\|_{x^{*}}\leq\nu+2\sqrt{\nu}.\]

**Lemma F.2** (Lemma D.2 in [21]).: _If the following conditions hold_

* \(\min_{Ax=b,x\in\prod_{i=1}^{m}K_{i}}c^{\top}x\) _is a convex problem where for each_ \(i\)_,_ \(K_{i}\) _is a compact convex set._
* \(\phi_{i}\) _is defined as in Definition_ 3.2 _for_ \(K_{i}\)_, where_ \(i\) _is in_ \([m]\)_._
* _We have_ \(x^{(0)}=\arg\min_{x}\sum_{i=1}^{m}\phi_{i}(x_{i})\)_._
* _Diameter of the set: For any_ \(x\in\prod_{i=1}^{m}K_{i}\)_, we have that_ \(\|x\|_{2}\leq R\)_._
* _Lipschitz constant of the program:_ \(\|c\|_{2}\leq L\)_._

_Then, the modified program \(\min_{\overline{A}\overline{x}=\overline{b},\overline{x}\in\prod_{i=1}^{m}K_{i}\times\mathbb{R}_{+}}\overline{c}^{\top}\overline{x}\) with_
\[\overline{A}=[A\ |\ b-Ax^{(0)}],\overline{b}=b\text{, and }\overline{c}=\left[\begin{array}{c}\frac{\delta}{LR}\cdot c\\ 1\end{array}\right]\]
_satisfies the following, for any \(\delta>0\):_

1. \(\overline{x}=\left[\begin{array}{c}x^{(0)}\\ 1\end{array}\right]\)_,_ \(\overline{y}=0_{d}\) _and_ \(\overline{s}=\left[\begin{array}{c}\frac{\delta}{LR}\cdot c\\ 1\end{array}\right]\) _are feasible primal-dual vectors with_ \(\|\overline{s}+\nabla\overline{\phi}(\overline{x})\|_{\overline{x}}^{*}\leq\delta\) _where_ \(\overline{\phi}(\overline{x})=\sum_{i=1}^{m}\phi_{i}(\overline{x}_{i})-\log(\overline{x}_{m+1})\)_._
2.
_For any_ \(\overline{x}\) _such that_ \(\overline{A}\overline{x}=\overline{b},\overline{x}\in\prod_{i=1}^{m}K_{i}\times\mathbb{R}_{+}\) _and_ \(\overline{c}^{\top}\overline{x}\leq\min_{\overline{A}\overline{x}=\overline{b},\overline{x}\in\prod_{i=1}^{m}K_{i}\times\mathbb{R}_{+}}\overline{c}^{\top}\overline{x}+\delta^{2}\)_, the vector_ \(\overline{x}_{1:n}\) _(_\(\overline{x}_{1:n}\) _is the first_ \(n\) _coordinates of_ \(\overline{x}\)_) is an approximate solution to the original convex program in the following sense_
\[c^{\top}\overline{x}_{1:n} \leq\min_{Ax=b,x\in\prod_{i=1}^{m}K_{i}}c^{\top}x+LR\cdot\delta,\]
\[\|A\overline{x}_{1:n}-b\|_{1} \leq 3\delta\cdot\left(R\sum_{i=1}^{d}\sum_{j=1}^{n}|A_{i,j}|+\|b\|_{1}\right),\]
\[\overline{x}_{1:n} \in\prod_{i=1}^{m}K_{i}.\]

**Lemma F.3** (Lemma D.3 in [21]).: _If the following conditions hold_

* \(\phi_{i}(x_{i})\) _is defined as in Definition_ 3.2_._
* _For any_ \(i\in[m]\)_, we have_ \(\frac{s_{i}}{t}+\nabla\phi_{i}(x_{i})=\mu_{i}\)_,_ \(A^{\top}y+s=c\)_, and_ \(Ax=b\)_._
* \(\left\|\mu_{i}\right\|_{x,i}^{*}\leq 1\) _for all_ \(i\)_._

_Then, we have that_
\[\langle c,x\rangle\leq\langle c,x^{*}\rangle+4t\nu\]
_where \(x^{*}=\operatorname*{arg\,min}_{Ax=b,x\in\prod_{i=1}^{m}K_{i}}c^{\top}x\) and \(\nu=\sum_{i=1}^{m}\nu_{i}\)._
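The reduction in Lemma F.2 can be illustrated with a short, self-contained sketch (Python, hypothetical sizes): the extra coordinate absorbs the initial infeasibility, so the point \((x^{(0)},1)\) satisfies the modified equality constraints exactly. Here \(x^{(0)}\) is a random stand-in for the barrier minimizer, and \(L\), \(R\), \(\delta\) are placeholder constants.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 3, 6                                # hypothetical sizes
A = rng.standard_normal((d, n))
b = rng.standard_normal(d)
x0 = rng.standard_normal(n)                # stand-in for argmin of the barrier sum
L, R, delta = 10.0, 5.0, 1e-3              # Lipschitz constant, diameter, accuracy
c = rng.standard_normal(n)
c *= L / np.linalg.norm(c)                 # enforce ||c||_2 <= L

# Modified program of Lemma F.2: append one column that absorbs infeasibility.
A_bar = np.hstack([A, (b - A @ x0).reshape(-1, 1)])
b_bar = b
c_bar = np.concatenate([delta / (L * R) * c, [1.0]])

# (x0, 1) is feasible for the modified equality constraints: A x0 + (b - A x0) = b.
x_bar = np.concatenate([x0, [1.0]])
print(np.allclose(A_bar @ x_bar, b_bar))   # True
print(c_bar @ x_bar)                       # modified objective at the initial point
```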
2306.10865
Full-Duplex-Enabled Joint Communications and Sensing with Reconfigurable Intelligent Surfaces
The full-duplex (FD) technology has the potential to radically evolve wireless systems, facilitating the integration of both communications and radar functionalities into a single device, thus, enabling joint communication and sensing (JCAS). In this paper, we present a novel approach for JCAS that incorporates a reconfigurable intelligent surface (RIS) in the near-field of an FD multiple-input multiple-output (MIMO) node, which is jointly optimized with the digital beamformers to enable JCAS and efficiently handle self-interference (SI). We propose a novel problem formulation for FD MIMO JCAS systems to jointly minimize the total received power at the FD node's radar receiver while maximizing the sum rate of downlink communications subject to a Cram\'{e}r-Rao bound (CRB) constraint. In contrast to the typically used CRB in the relevant literature, we derive a novel, more accurate, target estimation bound that fully takes into account the RIS deployment. The considered problem is solved using alternating optimization, which is guaranteed to converge to a local optimum. The simulation results demonstrate that the proposed scheme achieves significant performance improvement both for communications and sensing. It is showcased that jointly designing the FD MIMO beamformers and the RIS phase configuration to be SI aware can significantly loosen the requirement for additional SI cancellation.
Chandan Kumar Sheemar, George C. Alexandropoulos, Dirk Slock, Jorge Querol, Symeon Chatzinotas
2023-06-19T11:32:14Z
http://arxiv.org/abs/2306.10865v1
# Full-Duplex-Enabled Joint Communications and Sensing with Reconfigurable Intelligent Surfaces

###### Abstract

The full-duplex (FD) technology has the potential to radically evolve wireless systems, facilitating the integration of both communications and radar functionalities into a single device, thus, enabling joint communication and sensing (JCAS). In this paper, we present a novel approach for JCAS that incorporates a reconfigurable intelligent surface (RIS) in the near-field of an FD multiple-input multiple-output (MIMO) node, which is jointly optimized with the digital beamformers to enable JCAS and efficiently handle self-interference (SI). We propose a novel problem formulation for FD MIMO JCAS systems to jointly minimize the total received power at the FD node's radar receiver, while maximizing the sum rate of downlink communications subject to a Cramer-Rao bound (CRB) constraint. In contrast to the typically used CRB in the relevant literature, we derive a novel, more accurate, target estimation bound that fully takes into account the RIS deployment. The considered problem is solved using alternating optimization, which is guaranteed to converge to a local optimum. The simulation results demonstrate that the proposed scheme achieves significant performance improvement both for communications and sensing. It is showcased that jointly designing the FD MIMO beamformers and the RIS phase configuration to be SI aware can significantly loosen the requirement for additional SI cancellation.

Full duplex, Reconfigurable Intelligent Surface, Joint Communication and Sensing

## I Introduction

Full-duplex (FD) systems refer to wireless systems in which the same frequency band is used for both transmitting and receiving signals simultaneously, allowing for bi-directional communications [1]. This is in contrast to traditional half-duplex systems, in which a separate frequency band is used for transmitting and receiving signals, and the system can only perform one function at a time [2]. FD systems have the potential to significantly improve the performance and capacity of wireless networks by allowing for more efficient use of the available spectrum, thus enabling higher data rates. Self-interference (SI) is a major challenge for FD systems, and SI cancellation (SIC) is vital to make FD a reality [3, 4]. Due to simultaneous transmission and reception, FD is currently receiving significant interest as it has the potential to enable joint communication and sensing (JCAS), implying that next-generation base stations (BSs) and radar functionalities could be integrated into a single device. In [5], a novel JCAS transceiver design for an FD MIMO base station (BS) equipped with hybrid analog and digital beamformers is presented. Recently, reconfigurable intelligent surface (RIS)-assisted JCAS has gained significant interest due to its potential to increase communications and sensing performance [6]. In [7], RIS-assisted JCAS under the Cramer-Rao bound (CRB) constraint was studied. However, the derived CRB neglected the RIS contribution in enhancing the sensing performance. In [8], RIS-assisted JCAS to maximize the sum rate under a radar beam-pattern similarity constraint was investigated. In [9], a CRB minimization-based beamforming design for non-line-of-sight (LoS) RIS-assisted JCAS was studied. Note that the designs presented above do not consider the effect of SI on JCAS, which may overwhelm the receiver.
Moreover, the CRB derived in [7] did not consider any effect of the RIS and, in [9], only the CRB for the non-LoS sensing case was derived. In this paper, we focus on the optimization of an FD JCAS system comprising one MIMO downlink (DL) user and one target to be detected by the FD node's radar receiver. Firstly, we derive the exact CRB for RIS-assisted FD JCAS by considering both the LoS and the non-LoS contributions. Then, we propose a novel formulation to jointly minimize the effective SI power received in uplink (UL) at the radar receiver while maximizing the sum rate for the DL user, subject to the derived exact CRB constraint. However, we remark that the optimization of the RIS with the exact CRB constraint is extremely challenging and would require significant additional space to be elaborated in detail. Due to space limitations, we consider imposing the CRB constraint only on the digital beamformer. The considered joint optimization problem is transformed into an equivalent minimum mean squared error (MMSE) based problem [10], which comprises two terms: the SI power at the radar and the MSE at the DL user. A novel alternating optimization method is devised to minimize the overall objective function. Our goal is to tackle SI with both the digital beamformer at the MIMO transmitter and the passive beamforming offered by the RIS, hence enabling close to ideal FD operation for JCAS. Simulation results show that the proposed scheme achieves significant performance gain compared to the conventional JCAS scheme with no RIS. Moreover, we also show that the RIS can be beneficial to improve both the communications performance and the sensing performance, while also assisting in lowering the SI. The rest of the paper is organized as follows. Section II presents the system model and the exact CRB for FD JCAS. Section III presents a novel joint beamforming design. Finally, Sections IV and V present simulation results and conclusions, respectively.1

Footnote 1: _Notations:_ Boldface lower and upper case characters denote vectors and matrices, respectively. \(\mathbb{E}\{\cdot\}\), \(\text{Tr}\{\cdot\}\), and \(\mathbf{I}\) denote expectation, trace, and identity matrix, respectively. The superscripts \((\cdot)^{T}\) and \((\cdot)^{H}\) denote transpose and conjugate-transpose (Hermitian) operators, respectively.

## II System Model

We consider an FD JCAS system consisting of one FD MIMO BS \(b\) equipped with \(M_{b}\) and \(N_{b}\) transmit and receive antennas, respectively, which communicates with one MIMO DL user \(j\) having \(N_{j}\) antenna elements, as shown in Fig. 1. The DL signal is also used to detect targets/scatterers (via their induced reflections) randomly distributed within the communication/sensing environment at the radar receiver of the FD MIMO node. We assume that the BS is assisted by a RIS placed within its near-field region, whose role is to jointly improve communication and sensing performance while offering strong SIC. Let \(\mathbf{V}_{j}\in\mathbb{C}^{M_{b}\times d_{j}}\) denote the digital beamformer for the unit-variance data stream \(\mathbf{s}_{j}\in\mathbb{C}^{d_{j}\times 1}\) intended for the DL user \(j\). The RIS is assumed to have \(R\) rows and \(C\) columns of reflection-tunable elements, whose phase response is denoted by the diagonal matrix \(\mathbf{\Phi}_{i}\in\mathbb{C}^{RC\times RC}\), containing \(\phi_{i}\in\mathbb{C}^{RC\times 1}\) on its main diagonal.
Let \(\mathbf{H}_{j,b}\in\mathbb{C}^{N_{j}\times M_{b}}\) and \(\mathbf{H}_{j,i}\in\mathbb{C}^{N_{j}\times RC}\) denote the channels from the BS and the RIS to the DL user \(j\), respectively. The channels from the RIS to the BS and from the BS to the RIS are denoted with \(\mathbf{H}_{b,i}\in\mathbb{C}^{N_{b}\times RC}\) and \(\mathbf{H}_{i,b}\in\mathbb{C}^{RC\times M_{b}}\), respectively. Let \(\mathbf{H}_{b,b}\in\mathbb{C}^{N_{b}\times M_{b}}\) denote the SI channel of the MIMO FD JCAS node \(b\), which can be decomposed as \(\mathbf{H}_{b,b}=\mathbf{H}_{b,b}^{l}+\mathbf{H}_{b,b}^{r}\), where \(\mathbf{H}_{b,b}^{l}\) and \(\mathbf{H}_{b,b}^{r}\) denote the LoS and non-LoS contributions of the SI channel, respectively. Since the transmit and receive antennas of the FD JCAS node are in the near-field, we consider a spherical wavefront and model each element of \(\mathbf{H}_{b,b}^{l}\) as
\[\mathbf{H}_{b,b}^{l}(m,n)=\frac{\rho}{d_{m,n}}e^{-j2\pi\frac{d_{m,n}}{\lambda}},\quad\forall m,n, \tag{1}\]
where the scalars \(\lambda\) and \(\rho\) denote the wavelength and the power normalization constant assuring \(\mathbb{E}(\left\|\mathbf{H}_{b,b}^{l}\right\|_{F}^{2})=M_{b}N_{b}\), respectively, and the scalar \(d_{m,n}\) denotes the distance between the \(m\)-th receive and the \(n\)-th transmit antenna. Depending on the size of the RIS, the channels \(\mathbf{H}_{b,i}\) and \(\mathbf{H}_{i,b}\) can also be in the near-field. Therefore, we consider modelling them similarly to (1). We assume perfect CSI, which can be obtained by exploiting channel reciprocity via time division duplexing (TDD).

### _Communication Model_

Let \(\mathbf{y}_{j}\) denote the received signal at the DL user \(j\), which can be written as
\[\mathbf{y}_{j}=(\mathbf{H}_{j,b}+\mathbf{H}_{j,i}\mathbf{\Phi H}_{i,b})\mathbf{V}_{j}\mathbf{s}_{j}+\mathbf{n}_{j}, \tag{2}\]
where \(\mathbf{n}_{j}\sim\mathcal{CN}(\mathbf{0},\sigma_{j}^{2}\mathbf{I})\) denotes the noise with variance \(\sigma_{j}^{2}\) at the DL user \(j\). Let \(\mathbf{R}_{j}=\mathbb{E}[\mathbf{y}_{j}\mathbf{y}_{j}^{H}]\) denote the signal-plus-noise covariance matrix at the DL user \(j\), and let \(\mathbf{R}_{\overline{j}}=\mathbb{E}[\mathbf{n}_{j}\mathbf{n}_{j}^{H}]\) denote its noise covariance matrix. The rate at the DL user \(j\) is given by
\[\mathcal{R}_{j}=\log\big{[}\det(\mathbf{R}_{\overline{j}}^{-1}\mathbf{R}_{j})\big{]}. \tag{3}\]

### _Radar Model_

We consider the radar to estimate one angle of arrival (AoA), denoted as \(\theta_{k}\). Let \(\omega_{0}\) denote the angle between the FD BS and the RIS, which is perfectly known at the BS.
Let \(\mathbf{y}_{b}\) denote its total received signal, and let \(\mathbf{A}\) denote a matrix containing the contributions from all the paths, defined as
\[\begin{split}\mathbf{A=}&\psi_{k}\mathbf{a}_{r}(\theta_{k})\mathbf{a}_{t}(\theta_{k})^{T}+\xi_{1,k}\mathbf{a}_{r}(\omega_{0})\mathbf{a}_{t}(\omega_{0})^{T}\Phi\mathbf{a}_{t}(\theta_{k})\mathbf{a}_{t}(\theta_{k})^{T}\\ &\Phi\mathbf{a}_{t}(\omega_{0})\mathbf{a}_{t}(\omega_{0})^{T}+\xi_{2,k}\ \mathbf{a}_{r}(\theta_{k})\ \mathbf{a}_{t}(\theta_{k})^{T}\Phi\mathbf{a}_{t}(\omega_{0})\mathbf{a}_{t}(\omega_{0})^{T}\\ &+\xi_{3,k}\mathbf{a}_{r}(\omega_{0})\mathbf{a}_{t}(\omega_{0})^{T}\Phi\mathbf{c}_{\overline{j},k}\ \mathbf{a}_{t}(\theta_{k})\ \mathbf{a}_{t}(\theta_{k})^{T},\end{split} \tag{4}\]
where \(\psi_{k}\) denotes the reflection coefficient for the LoS path between the BS and target \(k\), \(\mathbf{n}_{b}\sim\mathcal{CN}(\mathbf{0},\sigma_{r}^{2}\mathbf{I})\) denotes the noise at the radar with variance \(\sigma_{r}^{2}\), and \(\mathbf{a}_{r}\) and \(\mathbf{a}_{t}\) denote the receive and transmit antenna steering vectors of the JCAS node, respectively. The scalars \(\xi_{1,k},\xi_{2,k}\) and \(\xi_{3,k}\) denote the reflection coefficients for the signal from the RIS to target \(k\) and back to the RIS, from the RIS to the radar via target \(k\), and from the transmitter to the RIS via target \(k\), respectively. Given \(\mathbf{A}\), we can write the received signal \(\mathbf{y}_{b}\) as
\[\mathbf{y}_{b}= \mathbf{A}\mathbf{V}_{j}\mathbf{s}_{j}+\mathbf{H}_{b,b}^{l}\mathbf{V}_{j}\mathbf{s}_{j}+\mathbf{H}_{b,i}\mathbf{\Phi H}_{i,b}\mathbf{V}_{j}\mathbf{s}_{j}+\mathbf{n}_{b}, \tag{5}\]
which contains the effective SI. Assuming uniform linear arrays at the FD node, the antenna steering vector \(\mathbf{a}_{r}\) at the receiver, appearing in \(\mathbf{A}\), can be modelled as
\[\mathbf{a}_{r}(\theta_{k})=\frac{1}{\sqrt{N_{b}}}[1,e^{j\frac{2\pi}{\lambda}d\sin(\theta_{k})},...,e^{j\frac{2\pi}{\lambda}d(N_{b}-1)\sin(\theta_{k})}]^{T}, \tag{6}\]
and similar modelling also holds for \(\mathbf{a}_{t}\). The scalars \(d\) and \(\lambda\) denote the antenna spacing and the wavelength, respectively. Let \(\phi_{k}\) and \(\varphi_{k}\) denote the elevation and the azimuth angles between the RIS and the target \(k\). The RIS response can be modelled as a uniform planar array (UPA) as
\[\mathbf{a}_{t}(\phi_{k},\varphi_{k})=\frac{1}{\sqrt{RC}}[1,e^{j\frac{2\pi}{\lambda}\varpi_{1}},...,e^{j\frac{2\pi}{\lambda}\varpi_{RC-1}}]^{T}, \tag{7}\]
where \(\varpi_{i}=d_{i_{x}}\sin(\phi_{k})\cos(\varphi_{k})+d_{i_{z}}\sin(\varphi_{k})\) [11], \(0<i<RC\), and \(d_{i_{x}}\) and \(d_{i_{z}}\) denote the distances of the RIS element \(i\) from its first element along the \(x\) and \(z\) axes, respectively.

Fig. 1: The proposed FD MIMO JCAS system with RIS.

As the position and the orientation of the RIS are known at the FD
BS, the azimuth and elevation angles can be expressed as functions of the \(\theta_{k}\) to be estimated as
\[\phi_{k}=\arccos\big{(}\frac{l_{k}\cos(\theta_{k})-r_{1}\cos(\omega_{0})}{r_{2}}\big{)}, \tag{8a}\]
\[\varphi_{k}=\arccos\big{(}\frac{l_{k}\cos(\theta_{k})-r_{1}\cos(\omega_{0})}{r_{r}\cos(\theta_{k})}\big{)}, \tag{8b}\]
where \(l_{k}\) is the distance of the target \(k\), \(\omega_{0}\) is the angle between the FD BS and the RIS, and the scalars \(r_{1},l_{k},r_{2},r_{r}\), assuming the RIS to be placed on the (x,z) plane, denote the distance between the FD BS and the RIS with the relative angle \(\omega_{0}\), the distance between the RIS and target \(k\), the distance between the RIS and target \(k\) on the (x,y) plane, and the distance on the (x,z) plane, respectively. To achieve an accurate estimation, the CRB for the AoA \(\theta_{k}\) should be below a threshold \(\zeta_{k}\), which imposes the constraint
\[\frac{1}{\zeta_{k}}\leq\frac{1}{\text{CRB}(\theta_{k})}, \tag{9}\]
where CRB(\(\theta_{k}\)) is derived in the Appendix.

### _Problem Formulation_

We embark on the task of jointly optimizing the performance of both the communication and the radar systems. While the concept of communication rate is well-defined for the DL user, no such concept exists for the UL. The sum rate maximization problem can be formulated as a function of the MMSE [10]. Conversely, even though the notion of UL rate in the context of FD JCAS is not well-established, the challenge of SI cancellation can be framed as a minimization problem, specifically minimizing the squared Frobenius norm of the SI, thereby improving the accuracy of radar detection. For the DL user, we assume that it deploys the combiner \(\mathbf{F}_{j}\) to estimate its data streams \(\mathbf{s}_{j}\) as \(\hat{\mathbf{s}}_{j}=\mathbf{F}_{j}\mathbf{y}_{j}\). Assuming that the combiner \(\mathbf{F}_{j}\) is optimized based on the MSE minimization criterion, its closed-form solution is
\[\mathbf{F}_{j}= \mathbf{V}_{j}^{H}(\mathbf{H}_{j,b}+\mathbf{H}_{j,i}\mathbf{\Phi H}_{i,b})^{H}((\mathbf{H}_{j,b}+\mathbf{H}_{j,i}\mathbf{\Phi H}_{i,b})\mathbf{V}_{j}\mathbf{V}_{j}^{H} \tag{10}\]
\[(\mathbf{H}_{j,b}+\mathbf{H}_{j,i}\mathbf{\Phi H}_{i,b})^{H}+\sigma_{j}^{2}\mathbf{I})^{-1}.\]
Given \(\mathbf{F}_{j}\) as in (10), the MSE of the DL user \(j\) becomes
\[\mathbf{E}_{j}=(\mathbf{I}+\mathbf{V}_{j}^{H}\mathbf{H}_{j}^{H}\mathbf{R}_{\overline{j}}^{-1}\mathbf{H}_{j}\mathbf{V}_{j})^{-1}, \tag{11}\]
where \(\mathbf{H}_{j}=\mathbf{H}_{j,b}+\mathbf{H}_{j,i}\mathbf{\Phi H}_{i,b}\) denotes the effective DL channel and \(\mathbf{R}_{\overline{j}}=\sigma_{j}^{2}\mathbf{I}\) is the noise covariance matrix with variance \(\sigma_{j}^{2}\). Let \(\mathbf{W}_{j}\) denote the weight computed as [10]
\[\mathbf{W}_{j}=\frac{w_{j}}{\ln 2}\mathbf{E}_{j}^{-1}. \tag{12}\]
The joint minimization problem under the total sum-power, CRB and unit-modulus constraints for the RIS can be stated as
\[\underset{\mathbf{V}_{j},\mathbf{\Phi}}{\text{min}} \text{Tr}(\mathbf{E}_{SI})+\text{Tr}(\mathbf{W}_{j}\mathbf{E}_{j})\] (13a)
s.t.
\[\text{Tr}\big{(}\mathbf{V}_{j}\mathbf{V}_{j}^{H}\big{)}\leq p_{o}, \tag{13b}\]
\[|\phi(i)|=1,\forall i, \tag{13c}\]
where \(\mathbf{E}_{SI}\) is a matrix defined as
\[\mathbf{E}_{SI}= \mathbf{H}_{b,b}^{l}\mathbf{V}_{j}\mathbf{V}_{j}^{H}(\mathbf{H}_{b,b}^{l})^{H}+\mathbf{H}_{b,b}^{l}\mathbf{V}_{j}\mathbf{V}_{j}^{H}\mathbf{H}_{i,b}^{H}\mathbf{\Phi}^{H}\mathbf{H}_{b,i}^{H}+\mathbf{H}_{b,i}\mathbf{\Phi} \tag{14}\]
\[\mathbf{H}_{i,b}\mathbf{V}_{j}\mathbf{V}_{j}^{H}(\mathbf{H}_{b,b}^{l})^{H}+\mathbf{H}_{b,i}\mathbf{\Phi H}_{i,b}\mathbf{V}_{j}\mathbf{V}_{j}^{H}\mathbf{H}_{i,b}^{H}\mathbf{\Phi}^{H}\mathbf{H}_{b,i}^{H},\]
obtained by writing the squared Frobenius norm as a function of the trace operator, whose diagonal elements capture the effective SI power, and (13b) and (13c) denote the total sum-power constraint \(p_{o}\) and the unit-modulus constraint on the RIS.

## III Novel Algorithm Design for FD JCAS

In this section, we provide a novel algorithm to solve the optimization problem (13) based on alternating optimization. Let \(\mathcal{L}\) denote the Lagrangian of (13) and let \(\lambda_{0}\) and \(\mu_{k}\) denote the Lagrange multipliers for the total sum-power constraint at the FD BS and for the CRB constraint of target \(k\), respectively.

### _Digital Beamformer Optimization_

To optimize the digital beamformer \(\mathbf{V}_{j}\), which jointly optimizes the DL rate and handles the SI for JCAS, we take the derivative of \(\mathcal{L}\) with respect to the conjugate of \(\mathbf{V}_{j}\), which leads to the following optimal WMMSE digital beamformer
\[\mathbf{V}_{j}= \Big{(}(\mathbf{H}_{j,b}+\mathbf{H}_{j,i}\mathbf{\Phi H}_{i,b})^{H}\mathbf{F}_{j}^{H}\mathbf{W}_{j}\mathbf{F}_{j}(\mathbf{H}_{j,b}+\mathbf{H}_{j,i}\mathbf{\Phi H}_{i,b}) \tag{15}\]
\[+(\mathbf{H}_{b,b}^{l}+\mathbf{H}_{b,i}\mathbf{\Phi H}_{i,b})^{H}(\mathbf{H}_{b,b}^{l}+\mathbf{H}_{b,i}\mathbf{\Phi H}_{i,b})\]
\[+\mu_{k}2\tilde{\mathbf{A}}_{\theta_{k}}^{H}\mathbf{\Sigma}^{-1}\tilde{\mathbf{A}}_{\theta_{k}}+\lambda_{0}\mathbf{I}\Big{)}^{-1}(\mathbf{H}_{j,b}+\mathbf{H}_{j,i}\mathbf{\Phi H}_{i,b})^{H}\mathbf{F}_{j}^{H}\mathbf{W}_{j},\]
where \(\tilde{\mathbf{A}}_{\theta_{k}}\) is defined in the Appendix. To find the optimal values of \(\lambda_{0}\) and \(\mu_{k}\), a linear search method can be adopted. In this study, we employ the bisection method.

### _RIS Optimization_

We consider optimizing the RIS to jointly achieve quasi-ideal SIC on the UL side (which can be seen as a virtual UL rate optimization problem) and also maximize the rate at the DL side. For such a purpose, we consider minimizing the squared Frobenius norm of the effective SI. Let \(\mathbf{B},\mathbf{C},\mathbf{D}\) be
Let \(\mathbf{B},\mathbf{C},\mathbf{D}\) be \[\mathbf{B}=\mathbf{H}_{b,i}^{H}\mathbf{H}_{b,i}+\mathbf{H}_{j,t}^{H}\mathbf{F}_ {j}^{H}\mathbf{W}_{j}\mathbf{F}_{j}\mathbf{H}_{j,t}, \tag{16a}\] \[\mathbf{C}=\mathbf{H}_{i,b}\mathbf{H}_{i,b}^{H}+\mathbf{H}_{i,b}\mathbf{V}_{j} \mathbf{V}_{j}^{H}\mathbf{H}_{i,b}^{H},\] (16b) \[\mathbf{D}=\mathbf{H}_{i,b}\mathbf{H}_{b,b}^{H}\mathbf{H}_{b,i}+\mathbf{H}_{i,b}\mathbf{V}_{j}\mathbf{V}_{j}^{H}\mathbf{H}_{j,b}^{H}\mathbf{F}_{j}^{H} \mathbf{W}_{j}\mathbf{F}_{j}\mathbf{H}_{j,t}, \tag{16c}\] Given the auxiliary matrices, problem (13) can be restated with respect to diagonal elements of RIS \(\phi\) as \[\underset{\phi}{\text{min}}\quad\phi^{H}\mathbf{\Lambda}\phi+\mathbf{d}^{T} \phi+\phi^{H}\mathbf{d}^{*}\] (17a) (17b) where \(\mathbf{\Lambda}=(\mathbf{B}\odot\mathbf{C})\), and \(\mathbf{d}\) is a vector made of the diagonal elements of the matrix \(\mathbf{D}\). To render a feasible solution, we adopt the majorization-maximization optimization method [12] by constructing an upper bound denoted as \(g(\cdot)\) for the objective function (17), denoted as \(f(\cdot)\). For a problem of type (17), it has been shown in [12] that at iteration \(n\) the following upper bound can be considered \[g(\mathbf{\phi}|\mathbf{\phi}^{(n)})=2\mathrm{Re}\{\mathbf{s}^{H}\mathbf{q}^{(n)}\}+c, \tag{18}\] where \(\lambda^{max}\) is the maximum eigenvalue \(\mathbf{q}^{(n)}=(\lambda^{max}\mathbf{I}-\mathbf{\Lambda})\mathbf{\phi}^{(n)}-\mathbf{ d}^{*}\). Given the upper bound and \(\mathbf{q}^{(n)}\), our problem can be restated as a minimization of the upper bound as, \[\underset{\phi_{r}}{\text{min}} 2\mathrm{Re}\{\mathbf{d}^{H}\mathbf{q}^{(n)}\},\] (19a) s.t. \[|\mathbf{\phi}(i)|=1,\quad\forall i, \tag{19b}\] By solving problem (19), we get the following \[\mathbf{\phi}^{(n+1)}=e^{i\lambda\mathbf{q}^{(n)}}. \tag{20}\] When the digital beamformer is computed under the CRB constraint at each iteration, the RIS optimization should be carried out by solving the aforementioned problem iteratively until convergence. The procedure for optimizing the phase response of RIS is given in Algorithm 1. The overall procedure to optimize and solve the joint optimization problem under the CRB is formally given in Algorithm 2. ``` Initialize: iteration index \(n\), accuracy \(\epsilon\), digital beamformer and combiner. Repeat until convergence Update \(\mathbf{F}_{j}\) with (10). Update \(\mathbf{W}_{j}\) with (12). Update \(\mathbf{V}_{j}\) with (15). Search for the Lagrange multipliers. Update \(\mathbf{\Phi}_{i}\) with Algorithm 1. if convergence condition is satisfied Stop and return the optimized variables. else repeat. ``` **Algorithm 1** Optimization of RIS The convergence of the proposed scheme is straightforward by combining the reasoning of the well-established WMMSE [10] and the majorization-maximization technique [12]. However, due to space limitations, we omit the extended proof. ## IV Simulation Results In this section, we present simulation results to validate the advantage of the proposed SI-aware FD JCAS transceiver design. We consider the FD BS and the DL user to be equipped with uniform linear arrays (ULA) at 80 m from the FD BS, and the FD BS to be placed in the center of the three-dimensional coordinate system with ULA aligned with the z-axis. The RIS is assumed to be placed on the (x,y) plane, with a relative angle of \(30^{\circ}\) with respect to the FD BS, with its first element being 5 m far from the first transmit antenna of the FD BS. 
We assume that the FD BS is equipped with \(M_{b}=15\) transmit antennas and \(N_{b}=10\) receive antennas, and that the DL user is equipped with \(N_{j}=5\) receive antennas. The RIS is assumed to be of size \(10\times 10\). The channels from the FD BS to the DL user \(j\) and from the RIS to the FD BS, denoted with \(\mathbf{H}_{j,b}\) and \(\mathbf{H}_{b,i}\), are modelled with the line-of-sight (LoS) channel model. The number of data streams to be transmitted to the DL user is set to \(d_{j}=2\). The digital beamformer \(\mathbf{V}_{j}\) is initialized as the dominant eigenvectors of the effective channel covariance matrices, and the response of the near-field RIS is initialized with random phases. We define the signal-to-noise ratio (SNR) of our system as \(SNR=p_{o}/\sigma_{j}^{2}\), where \(p_{o}\) is the total transmit power and \(\sigma_{j}^{2}\) is the noise variance at the DL user. For the CRB constraint, we set \(\zeta_{k}=0.01\), and the AoA to be estimated is randomly distributed on a circle of radius 50 m with the angle range limited to \([-\pi/2,\pi/2]\). For comparison, we define the following benchmark schemes: 1) _RIS-Communications Only_ - a scheme in which the beamformers and the RIS are designed to maximize the performance of the communications and there is no sensing and no SI (half-duplex (HD) mode), 2) _No RIS-Communications Only_ - a scheme similar to scheme 1) but without RIS, and 3) _No RIS-With Sensing_ - a scheme in which the FD BS performs JCAS but without the aid of the RIS. We label our scheme as _RIS-With Sensing_. Fig. 2 shows the communications performance in terms of the user rate achieved with the proposed novel FD JCAS transceiver design, in comparison to the benchmark schemes. We can see that when the BS acts as an HD BS and there is no SI and no sensing, the beamformer \(\mathbf{V}_{j}\) and the RIS are designed to only maximize the DL performance, leading to a higher rate. However, in the FD JCAS case, the beamformers and the RIS are designed to jointly handle SI, enhance the sensing performance and improve the communications.

Fig. 2: Communications performance for FD JCAS with SIC.

In Fig. 3, we compare the performance of the estimation of the AoA \(\theta_{k}\) when using a MUSIC-based estimator. Our approach demonstrates effective management of SI, and as the SNR increases, the estimation performance approaches the CRB. While there remains a slight difference between the estimation performance and the CRB, this gap can be further reduced by incorporating the CRB constraint into the optimization of the RIS, which is a direction for future research.

## V Conclusions

This work introduces a new method to enable FD JCAS by considering the impact of SI. We derive the exact CRB for RIS-assisted JCAS and propose a joint optimization framework based on alternating optimization that satisfies the CRB constraint. Simulation results demonstrate that the proposed beamforming method, which accounts for SI, leads to a substantial performance improvement and effectively manages SI. Furthermore, the reduction in data transmission rate compared to a communications-only approach is negligible, which paves the path toward FD JCAS with accurate and energy-efficient SI management with RIS.
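Before the CRB derivation that follows, the unit-modulus RIS update of Section III can be illustrated with a minimal numerical sketch of a majorization-minimization phase update of the form (18)-(20). The matrix \(\mathbf{\Lambda}\) and the vector \(\mathbf{d}\) below are random placeholders standing in for \(\mathbf{B}\odot\mathbf{C}\) and the diagonal of \(\mathbf{D}\), and the problem size is hypothetical; this is a sketch of the general MM technique, not a reproduction of Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16                                            # hypothetical number of RIS elements
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Lam = M @ M.conj().T                              # Hermitian PSD stand-in for B ⊙ C
dvec = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def objective(phi):
    # f(phi) = phi^H Lam phi + d^T phi + phi^H d^*  (real-valued by construction)
    return np.real(phi.conj() @ Lam @ phi + dvec @ phi + phi.conj() @ dvec.conj())

lam_max = np.max(np.linalg.eigvalsh(Lam))
phi = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, N))   # random unit-modulus start

for _ in range(50):
    # Majorization step: q = (lam_max I - Lam) phi - d^*, then phase-only projection.
    q = (lam_max * np.eye(N) - Lam) @ phi - dvec.conj()
    phi = np.exp(1j * np.angle(q))                # closed-form minimizer of the surrogate

print("objective after MM iterations:", objective(phi))
```

The objective is non-increasing across iterations, which is the monotonicity property exploited when the RIS update is iterated until convergence.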
In this section, we derive the CRB for AoA \(\theta_{k}\), which can be written as [13] \[\text{CRB}(\theta_{k})=\frac{1}{2}\Big{(}\text{Tr}\big{(}\mathbf{V}_{j}^{H} \tilde{\mathbf{A}}_{\theta_{k}}^{H}\mathbf{\Sigma}^{-1}\tilde{\mathbf{A}}_{ \theta_{k}}\mathbf{V}_{j}\big{)}\Big{)}^{-1}. \tag{21}\] where \(\tilde{\mathbf{A}}_{\theta_{k}}=\partial\mathbf{A}/\partial\theta_{k}\). We first define the derivatives of the antenna responses as \[\partial\mathbf{a}_{r}=\frac{1}{\sqrt{N_{b}}}[0,.....,j\frac{2\pi}{ \lambda}d(N_{b}-1)cos(\theta_{k})e^{j\frac{2\pi}{\lambda}d(N_{b}-1)sin(\theta _{k})}]^{T}, \tag{22a}\] \[\partial\mathbf{a}_{t}=\frac{1}{\sqrt{M_{b}}}[0,....,j\frac{2\pi} {\lambda}d(M_{b}-1)cos(\theta_{k})e^{j\frac{2\pi}{\lambda}d(M_{b}-1)sin(\theta _{k})}]^{T}, \tag{22b}\] Let \(\partial\varpi_{i}\) denote the derivative of \(\varpi_{i}\), obtained by expressing \(\phi_{k}\) and \(\varphi_{k}\) as a function of \(\theta_{k}\). Then the derivative of \(\mathbf{a}_{r}^{RIS}\) with respect to \(\theta_{k}\) can be written as \[\partial\mathbf{a}_{t}^{RIS}=\frac{1}{\sqrt{RC}}[0,......,j\frac{2\pi}{ \lambda}\partial\varpi_{i}e^{j\frac{2\pi}{\lambda}\varpi_{RC-1}}] \tag{23}\] By considering the complete deployment effect of RIS, including both the LoS and non-LoS links, \(\tilde{\mathbf{A}}_{\theta_{k}}\) can be written as \[\tilde{\mathbf{A}}_{\theta_{k}}= \psi_{k}\partial\mathbf{a}_{r}(\theta_{k})\mathbf{a}_{t}(\theta_ {k})^{T}+\psi_{k}\mathbf{a}_{r}(\theta_{k})\partial\mathbf{a}_{t}(\theta_{k} )^{T}\] \[+\xi_{1,k}\mathbf{a}_{r}(\omega_{0})\mathbf{a}_{t}(\omega_{0})^{ T}\Phi\partial\mathbf{a}_{t}^{RIS}(\theta_{k})\mathbf{a}_{i}(\theta_{k})^{T}\Phi \mathbf{a}_{r}^{RIS}(\omega_{0})\] \[\mathbf{a}_{t}(\omega_{0})^{T}+\xi_{1,k}\mathbf{a}_{r}(\omega_{0} )\mathbf{a}_{t}(\omega_{0})^{T}\Phi\mathbf{a}_{t}(\theta_{k})\partial\mathbf{ a}_{t}(\theta_{k})^{T}\Phi\mathbf{a}_{t}(\omega_{0})\] \[\mathbf{a}_{t}(\omega_{0})^{T}+\xi_{2,k}\ \partial\mathbf{a}_{r}(\theta_{k})\mathbf{a}_{t}^{RIS}( \theta_{k})^{T}\Phi\mathbf{a}_{t}(\omega_{0})\mathbf{a}_{t}(\omega_{0})^{T}\] \[+\xi_{2,k}\ \mathbf{a}_{r}(\theta_{k})\partial\mathbf{a}_{t}( \theta_{k})^{T}\Phi\mathbf{a}_{t}(\omega_{0})\mathbf{a}_{t}(\omega_{0})^{T}\] \[+\xi_{3,k}\mathbf{a}_{r}(\omega_{0})\mathbf{a}_{t}(\omega_{0})^{ T}\Phi\partial\mathbf{a}_{t}(\theta_{k})\ \mathbf{a}_{t}(\theta_{k})^{T}\] \[+\xi_{3,k}\mathbf{a}_{r}(\omega_{0})\mathbf{a}_{t}(\omega_{0})^{ T}\Phi\mathbf{a}_{t}(\theta_{k})\partial\mathbf{a}_{t}(\theta_{k})^{T}, \tag{24}\] ## Acknowledgement This work has been supported by the SNS JU TERRAMETA project under EU's Horizon Europe research and innovation programme under Grant Agreement number 101097101.
2303.03779
Boosting the 3D thermal-aware floorplanning problem through a master-worker parallel MOEA
The increasing transistor scale integration poses, among others, the thermal-aware floorplanning problem, consisting of how to place the hardware components in order to reduce overheating by dissipation. Due to the huge amount of feasible floorplans, most of the solutions found in the literature include an evolutionary algorithm for, either partially or completely, carrying out the task of floorplanning. Evolutionary algorithms usually have a bottleneck in the fitness evaluation. In the problem of thermal-aware floorplanning, the layout evaluation by the thermal model takes 99.5\% of the computational time for the best floorplanning algorithm proposed so far. The contribution of this paper is to present a parallelization of this evaluation phase in a master-worker model to achieve a dramatic speed-up of the thermal-aware floorplanning process. Exhaustive experimentation was done over three-dimensional integrated circuits, with 48 and 128 cores, outperforming previously published works.
Ignacio Arnaldo, Alfredo Cuesta-Infante, J. Manuel Colmenar, José L. Risco-Martín, José L. Ayala
2023-03-07T10:31:58Z
http://arxiv.org/abs/2303.03779v1
# Boosting the 3D Thermal-Aware Floorplanning Problem through a Master-Worker Parallel MOEA ###### Abstract The increasing transistor scale integration poses, among others, the thermal-aware floorplanning problem; consisting of how to place the hardware components in order to reduce overheating by dissipation. Due to the huge amount of feasible floorplans, most of the solutions found in the literature include an evolutionary algorithm for, either partially or completely, carrying out the task of floorplanning. Evolutionary algorithms usually have a bottleneck in the fitness evaluation. In the problem of thermal-aware floorplanning, the layout evaluation by the thermal model takes 99.5% of the computational time for the best floorplanning algorithm proposed so far. The contribution of this paper is to present a parallelization of this evaluation phase in a master -worker model to achieve a dramatic speed-up of the thermal-aware floorplanning process. Exhaustive experimentation was done over three dimensional integrated circuits, with 48 and 128 cores, outperforming previous published works. Copyright \(\copyright\) 2012 John Wiley & Sons, Ltd. Thermal-aware floorplanning, optimization, multi-objective evolutionary algorithm, parallelization 2012 001-16 ## 1 Introduction Consumers continuously demand faster applications, smaller devices and recently also ubiquitous computing. So far, developments in materials and technology have allowed processor manufacturers to provide chips that attained the expected serial performance. As we approach to the limits of miniaturization, these demands become harder to accomplish. In order to remain competitive, industry has moved to parallel architectures such as integrating more cores in a die, data-parallel execution units, additional register sets for hardware threads, bigger caches and more independent memory controllers to increase memory bandwidth. For instance, multi-core general purpose computers are being shipped for years and data-centers implement heterogeneous many-core systems. Multi-processor systems-on-chip (MPSoCs) are nowadays also considered as many-core systems. Up to now, the top core integration silicon CPU chip is proposed by Intel Labs with an experimental Single-chip Cloud Computer (SCC), a research microprocessor containing 48 Intel Architecture cores [1]. Also, novel 3D multi-processor chips have been recently presented [2] as the alternative to provide the required area of integration and to reduce the communication delay among the large number of cores. While the fabrication techniques have driven the integration of an increased number of transistors to provide the required throughput, these improvements have posed major problems regarding the operating temperature, directly related to the power density [3]. As temperature increases, the carrier mobility degrades, the leakage power consumption increases, gradient temperatures appear on the surface creating electro-migrations and the lifetime of the chip decreases exponentially, all in all reducing dramatically the reliability of the system [4]. In addition, the specific placement of the functional units on the chip surface (floorplan) also affects to the temperature distribution because of the diffusive nature of heat [5]. Besides, in the 3D configuration, the power density increases with the number of layers. This effect is even more negative due to the problematic cooling of inner layers of the 3D stack. 
The impact of power density in the microprocessor is augmented due to the dielectric insulation layers inserted between active layers. The reason is that the thermal conductivity of the formers is very low compared to silicon and metal. Increasing the chip area to reduce the power density has two shortcomings: it is costly and requires to solve all the geometric constraints. Static external coolers reduce the temperature of the chip surface by a constant factor but do not reduce the temperature gradient across the chip. Instead, thermal-aware floorplanning algorithms attempt to place functional units in order to achieve a satisfactory temperature distribution; thus they tackle with both heat dissipation and component placement at a time. Floorplanning proposals are frequently formulated as combinatorial optimization problems that can be smoothly fit to genetic algorithms (GA). Broadly speaking, a GA performs a heuristic search throughout the solution space inspired on Darwin's principle of Natural Selection: The basic features are: (i) each iteration tests a small number of solutions (known as _population_) compared to the cardinality of the solution space, (ii) each solution (referred to as _individual_) is represented in a way suitable both for evaluation and for producing the subset of the next generation, (iii) the next generation is obtained applying genetic operators such as crossover, mutation and selection, and (iv) there is a fitness function that evaluates the individuals. Early floorplanning solutions tackled with GA proposed representations such as Polish notation [6], combined bucket array [7] and O-trees [8] that are not satisfactory in the thermal-aware problem because they were engineered to minimize area. On the contrary, the hottest elements should be spread and placed as far as possible for reducing the heating produced by closer hot units. Thus, in 2D, works like [9] decreased the peak temperature using genetic algorithms, and [10] using simulated annealing; whereas on 3D stacked systems linear programming and simulated annealing combinations may be found [11]. These works solve a single-objective optimization that takes into account only the impact that temperature has on reliability. Therefore, they cannot provide a thermally optimal solution with a minimum impact on the area of the chip and the delay due to wiring. A more comprehensive approach in 3D systems is presented in [12] and [13]. Their proposal consists of a Multi-Objective Evolutionary Algorithm (MOEA) that tackles with the thermal-aware problem (optimal placement of blocks) and also with the performance of the system (minimum wire length delay) satisfying the topological constraints. On the other hand, they show a critical bottleneck in the evaluation phase due to the complexity of the computation, and can lead to very long execution times when complex 3D many-cores architectures are considered. Our contribution in this paper is the parallelization of the thermal-aware floorplanner proposed in [12] and [13], with the aim of reducing the optimization execution time. Evolutionary algorithms (EA) are intrinsically parallel but it is in the fitness evaluation where more speed-up can be gained. Table 1 shows the complete execution time of the sequential version of the algorithm proposed in [12] until a solution is reached. 
The evaluation and reduction phases correspond to methods devoted to compute the fitness and ranking of the individuals, the rest implement the genetic operators of selection, crossover and mutation. In this case, fitness is computed using a thermal model that takes \(83.1\%\) of the execution time. Due to the simple representation chosen for the candidate solutions, a decodification phase is required before the fitness computation. Adding the decodification phase time and the feasibility verification time, the evaluation of individuals takes \(99.5\%\) of the total execution time. It is then clear that this task is by far the most time consuming which justifies the necessity and effort of parallelization. An EA is usually parallelized at two different levels: definition of population or fitness evaluation [14]. At the former, the population is split in a number of non-overlapping subpopulations that evolve independently but with a probability of interaction. The two most popular models are Islands and Grid models [14]. In Islands, some individuals are allowed to migrate with a given frequency. There is a rich variety of Islands topologies, being the most frequent rings, \(n\)-dimensional meshes and stars. When migrating, the worst \(k\) individuals in the destination are replaced by the new-comers, which are the best \(k\) individuals in their original island. In grids, each individual is placed in a cell of a one- or two-dimensional grid. Genetic operations take place in a small neighborhood of a given individual and their implementation is straightforward on clusters. The fitness evaluation parallelization is a much simpler and intuitive approach. All the genetic operations are performed sequentially over the whole population but, once a new generation is obtained, the individuals are evaluated in parallel. Regarding parallel MOEAs; they were early analyzed in [15]. Shortly afterwards, the master-slave paradigm was employed in [16]. Since our baseline sequential algorithm presents a high computational load in the evaluation of individuals, this paper proposes a master-worker algorithm to parallelize that phase. Our approach was tested with a set of master-worker configurations, ranging from 1 to 9 workers, as well as the sequential algorithm, in two experimental multi-core platforms with 48 and 128 3D stacked core processors each. Speedup review, validation of the proposal, study of convergence and thermal analysis are presented in this paper. Results suggest that a new representation could improve future algorithms. The rest of the paper is organized as follows. The parallel Multi-Objective Evolutionary Algorithm proposal is presented in Section 2. Experimental results are shown and discussed in Section 3. Finally, conclusions and future work are detailed in Section 4. ## 2 Parallel Multi-Objective Evolutionary Algorithm In this section we present a parallel thermal-aware floorplanner capable of optimizing many-core heterogeneous platforms under a master-worker schema. ### Details of the MOEA In [12], a Multi-Objective Evolutionary Algorithm (MOEA) is proposed for the floorplanning of 3D stacked multi-processor single-chips. 
This kind of chips consists of a number of layers of fixed area where the functional units (processors, memories and interconnection blocks) must be \begin{table} \begin{tabular}{c c} \hline \hline Method/Operator & Time (ms) \\ \hline Evaluation & 31264762 \\ Selection & 257 \\ Reduction & 251 \\ Mutation & 57.6 \\ Crossover & 1 \\ \hline \hline \end{tabular} \end{table} Table 1: Execution time of the different methods involved in the algorithm proposed in [12]. placed. Our approach considers this scenario performing a thermal-aware placement of the different components while the wiring delay is also minimized. Moreover, it overcomes [12] with a parallel implementation of the floorplanner that avoids the constraints imposed to the optimization time. **Block placement problem.** All the components that model the functional units of the many-core system must be placed in the 3D stack, which imposes the physical boundaries of maximum length \(L\), width \(W\) and height \(H\). Each component is represented by a block \(B_{i}\) with length \(l_{i}\), width \(w_{i}\), height \(h_{i}\) and it is denoted by its left-bottom-back corner, with coordinates \((x_{i},y_{i},z_{i})\), taken the left-bottom-back corner of the chip as origin of coordinates; where \(0\leq x_{i}\leq L-l_{i}\), \(0\leq y_{i}\leq W-w_{i}\), \(0\leq z_{i}\leq H-h_{i}\). These blocks cannot overlap. A schematic representation is shown in Figure 1. Blocks are placed sequentially. Since each component incorporates its coordinates, this method leads to a floorplan whose components are not necessarily adjacent; unlike the state of the art works [6], [7] or [8]. This represents a great advantage because cores, which are the most likely to increase the temperature, can be placed explicitly separated, thus reducing their impact in the overall temperature of the 3D chip. In order to select the best placement coordinate \(r_{i}=(x_{i},y_{i},z_{i})\) for block \(B_{i}\), given those already placed \(B_{j},j<i\), a dominance relationship is established. Therefore, a set of objective functions that evaluate the fitness, as well as a suitable representation and appropriate genetic operators, must be derived for the MOEA approach. The solution is obtained using a Non-dominated Sorting Genetic Algorithm (NSGA-II) implementation [17]. **Fitness.** There are three objective functions. The first objective \(J_{1}\) is the number of topological constraints violated by \(B_{i}\) with respect to the already placed \(B_{j}\). The second objective is the wire length, approximated by the Manhattan distance between blocks with coordinates \(r_{i}\) and \(r_{j}\), \(J_{2}=|x_{i}-x_{j}|+|y_{i}-y_{j}|+|z_{i}-z_{j}|\). Finally the thermal impact is measured through the power consumed by the unitary cells of the chip. A thermal model that considers the power density of such cells and their neighbors is used as an approximation to the steady state of the more accurate thermal model that includes non-linear and differential equations. We evaluate the thermal response of a given individual with the following model: \[J_{3}=\sum_{i<j\in 1..n}(dp_{i}*dp_{j})/(d_{ij}) \tag{1}\] where \(dp\) is the density power of the block considered and \(d_{ij}\) is the euclidean distance between blocks. This model has been shown to be accurate enough and close to the non-linear simulation [18]. Figure 1: Block representation. 
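To make the fitness terms concrete, the following minimal sketch (Python, with a handful of hypothetical blocks, coordinates, power densities, and connections chosen only for illustration) evaluates the Manhattan wire-length objective \(J_{2}\) and the thermal proxy \(J_{3}\) of Eq. (1):

```python
import numpy as np

# Hypothetical placed blocks: (x, y, z) corner coordinates and power densities.
coords = np.array([[0.0, 0.0, 0.0],
                   [4.0, 1.0, 0.0],
                   [1.0, 5.0, 1.0],
                   [6.0, 6.0, 1.0]])
dp = np.array([2.5, 2.5, 0.4, 0.4])          # cores hotter than memories

def manhattan_wirelength(coords, connected_pairs):
    # J2: sum of Manhattan distances over connected block pairs.
    return sum(float(np.abs(coords[i] - coords[j]).sum()) for i, j in connected_pairs)

def thermal_proxy(coords, dp):
    # J3 of Eq. (1): sum over i < j of dp_i * dp_j / euclidean distance d_ij.
    total = 0.0
    for i in range(len(dp)):
        for j in range(i + 1, len(dp)):
            total += dp[i] * dp[j] / np.linalg.norm(coords[i] - coords[j])
    return total

pairs = [(0, 2), (1, 3)]                      # hypothetical core-memory connections
print("J2 =", manhattan_wirelength(coords, pairs))
print("J3 =", thermal_proxy(coords, dp))
```

Spreading the high power-density blocks apart lowers \(J_{3}\) but tends to increase \(J_{2}\), which is precisely the trade-off the multi-objective search explores.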
**Representation.** Each individual is a possible configuration of the system, and each configuration is represented through a chromosome that stores the order in which blocks are being placed. The chromosome is an array whose components contain an identifier of the functional unit that is going to be placed. For instance, if 3 cores \((C_{1},C_{2},C_{3})\) and 3 memories \((L_{1},L_{2},L_{3})\) are to be placed, the search space will have cardinality \(6!\). A possible chromosome would be \[[C_{1},L_{2},C_{3},L_{3},C_{2},L_{1}]\] meaning that \(C_{1}\) will be fist placed, then \(L_{2}\) and so on. This decodification of the chromosome requires considering the size and boundaries of the previously placed blocks. Therefore, the more the number of components, the more the decodification execution time. As we will describe in the future work section of the paper, a different representation encoding the location of the components could help reducing the evaluation phase. **Operators.** Selection was carried out by tournament, taking two random individuals and selecting the best among them. Individuals are mated in order to produce offspring. Crossover must take into account that all the components must appear once and only once in the chromosome. The so called _cycle crossover_ assures that the resulting chromosomes are just permutations of the parents. Mutation consists of swapping the content of two positions inside the chromosome or in the rotation of a randomly chosen component. Both cycle crossover and swap mutation are depicted in Figure 2. ### Details of Parallelization In this paper, we propose a parallel implementation of the MOEA described in the previous section using the master-worker model. As we have previously shown, the evaluation phase of the algorithm takes over 99% of the execution time. This is due to both the fact that the thermal response of all the individuals of the population has to be evaluated in every generation of the process, and the decodification of each individual, previous to its evaluation. The master-worker model satisfies our needs because, even though the fitness is based on a simplified thermal model, the computational cost of this evaluation increases quadratically with the number of components. Therefore, it is interesting to exploit the fact that evolutionary algorithms are intrinsically parallel and carry out the evaluation of the population in a concurrent manner. Figure 3 depicts the approach used in this work. The master distributes the population among \(n\) workers, splitting the computational load in \(n\) ways so it does not carry out any evaluation. Once workers have finished their task, they send the outcome together with the received population subset to the Figure 2: Cycle crossover and Swap mutation for permutations of six elements. master. Although the algorithm stops and waits for all workers to finish, it is clearly much faster than the sequential execution as long as each subset was large enough for compensating communication times. We propose a multi-threaded implementation where only the master executes the main thread of the algorithm. Since only workers execute the evaluation of different subsets of the population, it is expected to obtain a speed-up similar to the number of cores in the processor that executes the algorithm. ## 3 Experimental Results The experimental work will analyze the speedup obtained with the parallel version of the MOEA while making clear that the quality of the solutions remains the same. 
We will also analyze the thermal optimization achieved by the floorplanner. In order to evaluate our floorplanner, we study two heterogeneous 3D architectures where every stacked layer is based on the Niagara platform. They differ from each other in the number of cores. The first architecture is composed of 48 processor cores: 32 SPARC and 12 Power6 cores. Adding 72 memories and 6 crossbars for inter-processor communication, they sum up to a total of 126 components. In the second architecture, 128 cores are included: 96 SPARC plus 32 Power6. In addition, 192 memories and 16 crossbars are considered; therefore, 336 components need to be placed in this scenario. The floorplanner will place the processors, the local memories and the crossbars in 4 and 9 layers respectively. Both architectures represent the current and near-future state of the art in 3D many-core integration.

### Speedup Analysis

In the first analysis, we studied the speedup obtained with the parallel version of our floorplanner. We aim to find the optimal number of workers leading to the maximum speedup. To this end, we perform a parametric sweep of the number of workers, from 1 to 9, both in the 48 and 128 core scenarios. In order to obtain the execution time of these optimizations, we ran each one of the worker configurations five times, obtaining the average execution time and speedup for both scenarios. The experiments were carried out on a dedicated Intel Core i5 machine, a 4-core processor, running at 2.80GHz. We set a fixed population size of 100 individuals and an evolution of 250 generations as the MOEA parameters for the 48 core scenario optimization. Table 2 shows the average execution times and corresponding speedups for these runs, with a number of workers ranging from 1 to 9. Figure 4 shows the obtained speedups in the 48 core scenario. It shows that the speedup increases almost linearly until the number of workers reaches the number of cores of the processor we used for these optimizations (4-core processor).

Figure 3: Master-worker configuration.

In our master-worker scheme, the execution time of each worker depends on the particular evaluation time of the set of individuals that was assigned to the worker. Then, if the worker receives a set of individuals that need more time to be evaluated, the worker will slow down. On the contrary, if the individuals need less time to be evaluated, the worker will speed up. Therefore, in configurations from 2 to 4 workers, the optimization follows this behavior: the population is divided into as many sets as workers, then each set of individuals is sent to a different worker and the evaluation begins. Once workers finish their evaluation task, they wait until the slowest worker finishes, because the master synchronizes all the workers before the next generation. As a result, the slowest worker establishes the execution time of the evaluation of each generation, and the processor cores that run faster workers will be idle waiting for synchronization. Each worker runs in a different thread, and the load assigned to one thread cannot be divided among different processor cores. As a consequence, if the number of workers is higher than 4, the operating system scheduler will distribute the execution of the worker threads among the 4 cores. Then, the cores will swap between the threads, thereby advancing the execution of each one.
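The synchronous master-worker evaluation just described can be sketched as follows. This is an illustrative, process-based Python example (the paper's implementation is multi-threaded), and the fitness function below is only a placeholder for the decodification plus thermal evaluation:

```python
from multiprocessing import Pool

def evaluate(individual):
    """Placeholder for the real fitness: decodification of the chromosome followed
    by the (J1, J2, J3) evaluation. Here it just returns a dummy number."""
    return sum(ord(c) for gene in individual for c in gene)

def evaluate_population(population, n_workers):
    """Master: split the population among n_workers, evaluate the subsets in
    parallel and block until the slowest worker finishes (synchronous generations)."""
    chunk = max(1, len(population) // n_workers)
    with Pool(processes=n_workers) as pool:
        # map() returns only after every worker has handed back its subset
        return pool.map(evaluate, population, chunksize=chunk)

if __name__ == "__main__":
    population = [["C1", "L2", "C3"], ["C3", "L2", "C1"], ["L2", "C1", "C3"]] * 30
    fitness = evaluate_population(population, n_workers=5)
    print(len(fitness), "individuals evaluated")
```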
As can be seen in Figure 4, results for 5 workers and above present an asymptotic trend in speedup because the usage of resources is maximized. The particular case of the 5 workers configuration obtains the maximum speedup because it maximizes the resource occupation with the lowest number of threads. These results confirm that the parallelization of the evaluation phase in the master-worker scheme contributes to the best speedup gains. In addition, there is no remarkable penalty due to the parallelization, because the speedup values above 4 cores tend to be similar. In order to strengthen this hypothesis, the same tests were run for the 128 cores scenario, where the evaluation time for each individual is much longer. Here, the number of generations of the MOEA has to be scaled up because the number of components to be placed has been increased. Therefore we consider a number of generations equal to the total number of components, which is 336: 128 cores, 192 memories and 16 crossbars. The population size remains 100 individuals. Table 3 shows the average execution time and speedup for this scenario for configurations with 1 to 9 workers. Figure 5 displays the speedup trend for these data.

\begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \# workers & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline time (s) & 24171 & 12398 & 8904 & 6918 & 6380 & 6421 & 6665 & 6502 & 6386 \\ speedup & 1 & 1.95 & 2.72 & 3.49 & 3.79 & 3.76 & 3.63 & 3.72 & 3.79 \\ \hline \hline \end{tabular} \end{table} Table 2: Average execution times and speedups obtained in the 48 cores scenario.

Figure 4: Average speedup values obtained in the 48 cores scenario.

As shown, the 128 cores scenario presents the same behavior as the 48 cores one. The resources of the CPU are maximized from the 5 workers configuration on, and higher numbers of workers obtain similar speedup values. However, the performance improvement is lower than in the 48 cores configuration. This behavior occurs because the individual evaluation time is much higher in this 128 cores scenario, and the execution time of the threads does not differ so much. In the 48 cores case, the processor slots available due to the different execution times between threads allow the evaluation of more individuals than in the 128 cores configuration. As a consequence, the workers queuing for processor cores advance more in their execution, obtaining higher speedups. On the contrary, the workers of the 128 cores scenario are not able to exploit the free processor slots to evaluate as many individuals as in the 48 cores case, therefore obtaining lower speedup values.

### Validation of solutions

Conceptually, it is clear that the parallelization of the fitness evaluation should lead to the same results as the sequential version of the algorithm. Nevertheless, we have included in this section a brief consideration regarding the validation of the parallelization that assures such a baseline. Thus, in order to show that the quality of the solutions proposed by the parallel version of the floorplanner remains the same as in the sequential version, we compare the front of non-dominated solutions obtained with the sequential version of the algorithm with the front obtained with the parallel version using 4 and 5 workers in the 48 cores scenario. Figure 6 shows the fronts of non-dominated solutions returned by the floorplanner in these cases.
In Figure 7, we compare the front of non-dominated solutions proposed by the floorplanner working with 4 and 5 nodes for the 128 cores platform. Figure 5: Average speedup values obtained in the 128 cores scenario. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \# workers & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline time (s) & 634070 & 432160 & 264318 & 203749 & 198977 & 248527 & 235313 & 250021 & 221485 \\ speedup & 1 & 1.47 & 2.14 & 3.11 & 3.19 & 2.55 & 2.69 & 2.54 & 2.86 \\ \hline \hline \end{tabular} \end{table} Table 3: Average execution times and speedups obtained in the 128 cores scenario. For every run of the algorithm, the returned non-dominated front is different. In fact each execution explores a different region of the solution space. We can see that the fronts cross each other in at least one point. Therefore, none of the returned fronts dominates the others. Since EA are intrinsically heuristic, two executions will not produce exactly the same results. Hence, in order to prove that our proposal is valid it is necessary to define a measure that analyzes the outputs (solution sets) both from sequential and parallel executions. Such a measure is usually referred to as _Indicator_\(I\). In this work, the _Hypervolume_ indicator, proposed by Zitzler and Thiele [19], has been used. The hypervolume \(I(A)\) measures the total amount of the objective space that has been 'covered' by the solution set \(A\); returning the hypervolume of that portion of the objective space that is weakly dominated by \(A\). To this end, the objective space must be bounded. Otherwise a reference point that must be at least weakly dominated by all solutions in \(A\) is used. Finally higher values of \(I\) correspond to higher quality of the measured set. The comparison has been carried out between the sequential execution, the 4-workers and the 5-workers versions of the parallel implementation. This choice was motivated because these configurations had obtained the highest non-saturated speed-ups. Results are shown in Figure 8. All were obtained after running 30 optimizations, each one with 250 generations in the 48 cores and 366 generation in the 128 cores. As expected, the three boxplots inside each picture show a similar outcome; with 25th and 75th percentiles almost identical within the 48 and 128 core plots. Figure 6: Non-dominated fronts of solutions returned by the floorplanner in the 48 cores scenario. Figure 7: Non-dominated fronts of solutions returned by the floorplanner in the 128 cores scenario. ### Convergence of the MOEA The main benefit of parallelization is the considerable reduction in the fitness evaluation time and, consequently, in the the whole procedure of finding a good floorplan. In addition, we can take advantage of such a speed up for carrying out tests in order to detect possible weak points in the algorithm that, using the sequential version, would take months to complete. As the population in an EA evolves, it is desirable to keep their diversity. Otherwise the exploration of the solution space will be guided towards a region, avoiding others which might be more promising. The analysis of the convergence is a straightforward method for verifying that diversity is maintained. At the same time, it reveals whether the EA is well engineered or there is room for improvement. A slow convergence with good results is usually due to a poor representation or not appropriate genetic operators. 
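Returning briefly to the hypervolume indicator used for the validation above, the following is a minimal two-objective sketch of the idea (our own illustration for a minimization problem; the actual comparison in Figure 8 is computed on the three-objective fronts):

```python
def hypervolume_2d(front, ref):
    """Hypervolume (area) weakly dominated by a 2D front of a minimization problem.

    front: list of (f1, f2) points, assumed mutually non-dominated and all
    dominating the reference point; ref: (r1, r2) reference point.
    """
    pts = sorted(front)                        # ascending in f1, hence descending in f2
    hv = 0.0
    for k, (f1, f2) in enumerate(pts):
        next_f1 = pts[k + 1][0] if k + 1 < len(pts) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)   # vertical slab between consecutive points
    return hv

# Toy usage: higher hypervolume indicates a better front for the same reference point.
ref = (10.0, 10.0)
front_a = [(1.0, 5.0), (2.0, 3.0), (4.0, 2.0)]
front_b = [(1.5, 6.0), (3.0, 4.0)]
print(hypervolume_2d(front_a, ref), hypervolume_2d(front_b, ref))
```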
Although this work tackles a three-objective problem, convergence is studied only for \(J_{2}\) (wire length) and \(J_{3}\) (thermal response) because \(J_{1}>0\) means that no floorplan satisfies the constraints. Thus, the values of \(J_{2}\) and \(J_{3}\) of feasible solutions are extracted into arrays \(W_{r,g}\) and \(T_{r,g}\) respectively, one for each optimization run \(r=1\ldots 30\) and each generation \(g=1\ldots 250\) for 48 cores and \(g=1\ldots 366\) for 128. Then six matrices are constructed in the following way: \[\begin{array}{l}\mathbf{W_{min}}(r,g)=\min\left\{W_{r,g}\right\},\quad\mathbf{W_{mean}}(r,g)=\text{mean}\left\{W_{r,g}\right\},\quad\mathbf{W_{max}}(r,g)=\max\left\{W_{r,g}\right\},\\ \mathbf{T_{min}}(r,g)=\min\left\{T_{r,g}\right\},\quad\mathbf{T_{mean}}(r,g)=\text{mean}\left\{T_{r,g}\right\},\quad\mathbf{T_{max}}(r,g)=\max\left\{T_{r,g}\right\}.\end{array}\] This procedure is done for both the 48 and 128 core configurations (a short sketch of this construction is given below). Finally, all six matrices are scaled between the minimum and maximum wire length and thermal response respectively, and plotted as shown in Figure 9. The left-most pictures, corresponding to the minimum values in each generation and optimization run of \(J_{2}\) and \(J_{3}\), show a decreasing behavior, eventually reaching the global minimum in almost all optimization runs for 48 cores and, at a much slower rate, for 128 cores. The middle pictures show the convergence of the mean values of \(J_{2}\) and \(J_{3}\). Their trend is decreasing, starting at \(1/2\) of the upper bound and going down to \(1/4\) in the best cases and \(1/3\) in the worst. Finally, the convergence of the maximum values of \(J_{2}\) and \(J_{3}\) is shown in the right-most pictures. Values decrease down to \(1/3\) of the upper bound in most of the optimization runs. In the light of these results, it is clear that generations are better fitted as evolution advances. Moreover, since the mean of the last generation is quite close to the maximum in both objectives, less fitted individuals still have a considerable probability of being chosen, attesting that diversity is maintained. On the other hand, the slow convergence of the mean objective values, especially in the larger problem of 128 cores, indicates that the representation could be better engineered. Genetic operators were discarded as the reason because the offspring is always valid; thus, no reconstruction, which might lead to repeating certain schemas, is needed.

Figure 8: Hypervolumes for 48 cores (left) and 128 cores (right) after 30 optimization runs. Both hypervolumes measured in the sequential and in the parallel execution, the latter with 4 and 5 workers. The central line is the median, the edges of the box are the 25th and 75th percentiles, the whiskers extend to the most extreme data points not considered outliers, which are plotted individually (+ mark).

Figure 9: Convergence evolution of the minimum, mean and maximum values for objectives \(J_{2}\) (wire length) and \(J_{3}\) (thermal response), considering only feasible individuals; for 48 cores (above) and 128 cores (below).

### Thermal analysis

Finally, we analyze the thermal optimization obtained with our algorithm. The floorplanner works with a fixed die size and aims to minimize both the total wire length and the maximum temperature of the chip. As we want to perform a thermal optimization of the described platforms, we need to provide the power consumption values and the areas of the different elements of the architectures as inputs to the thermal-aware floorplanner.
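As referenced in the convergence analysis above, a minimal sketch of the convergence-matrix construction and its min-max scaling could look as follows (our own illustration; the array shapes and the random toy data are assumptions):

```python
import numpy as np

def convergence_matrices(values):
    """values[r][g] is a 1D array with the feasible J values of run r, generation g.
    Returns (min, mean, max) matrices of shape (runs, generations)."""
    runs, gens = len(values), len(values[0])
    m_min, m_mean, m_max = (np.empty((runs, gens)) for _ in range(3))
    for r in range(runs):
        for g in range(gens):
            v = np.asarray(values[r][g])
            m_min[r, g], m_mean[r, g], m_max[r, g] = v.min(), v.mean(), v.max()
    return m_min, m_mean, m_max

def scale(mat, lo, hi):
    """Scale a matrix between the global minimum and maximum of the objective."""
    return (mat - lo) / (hi - lo)

# Toy usage: 30 runs x 250 generations of random wire-length values.
rng = np.random.default_rng(0)
wire = [[rng.uniform(5000, 9000, size=20) for _ in range(250)] for _ in range(30)]
w_min, w_mean, w_max = convergence_matrices(wire)
print(scale(w_mean, w_min.min(), w_max.max()).shape)  # (30, 250)
```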
In [20] we find that the power consumption of the sparc is 4W at 1.4GHz. In the case of the Power6, we find that 2.6W is the estimated power dissipation [21]. We consider the following areas: \(3.24mm^{2}\) and \(1.5mm^{2}\) for the sparc and Power6 respectively (see [20] and [21]). The power consumption values and areas of the memories are found with the CACTI software [22]. ### 48-cores configuration We compare an optimized configuration of the 48-cores heterogeneous platform to the 48-cores homogeneous platform represented in Figure 10. In this baseline configuration, an original architecture composed of 12 cores is replicated in all the layers. As a consequence, the SPARC cores (SPC) are placed above the others producing hotspots. On the other hand, Figure 11 shows the thermal maps of the different layers of a non-dominated solution returned by the thermal-aware floorplanner. This figure shows an optimized placement of the SPARC cores (SPC), Power6 cores (P6), memories (L2) and crossbars (Cross) achieved by the floorplanner. In this configuration the hottest elements (SPARC cores) are generally placed in the borders of the chip and in the outer layers, separated as much as possible. In fact, the floorplanner avoids placing cores above the others as vertical heat spread is also taken into account. The crossbars are placed in intermediate layers to minimize the wire length. The metrics considered for the thermal analysis of these two platforms are the maximum and mean temperature of the chip and the maximum thermal gradient. In Table 4 we present the thermal response of these two different configurations. These results show that our floorplanner proposes thermally optimized configurations. The peak temperature of 411.82K found in the original configuration is reduced to 345.30K while the mean temperature is reduced in 12.54K. We can see that the maximum thermal gradient of the optimized configuration is reduced from 109.75K to 31.81K. Therefore, not only the temperature of the chip is reduced but it is also more evenly distributed. On the other hand, the wire length of the optimized configuration is a 2.11% greater than the original which translates into a small performance penalty. ### 128-cores configuration For this larger configuration, we analyze one of the optimal floorplans obtained with our parallel implementation. Figure 12 shows the thermal map of the chosen solution. As for the 48-cores platform, we can see that the SPARC cores tend to be placed in the outer layers and in the borders of the chip. The memories and the crossbars are placed in the inner layers. This way both the chip temperature and the wire length are minimized. Nevertheless hotspots appear in this configuration. Table 5 shows the thermal response of an optimized configuration of the 128 cores platform. The hotspot visible in the first layer of the chip corresponds to the peak temperature of the chip reaching 396.84K. The mean temperature is 362.50K while the maximum thermal gradient is 75.80K. Further research and simulations with cooling techniques are required to study the feasibility of these architectures. \begin{table} \begin{tabular}{c c c c c} \hline \hline & Wire L. & \(T_{MAX}\) & \(T_{MEAN}\) & \(Grad_{MAX}\) \\ \hline 48 baseline & 6733 & 411.82 K & 344.29 K & 109.75 \\ 48 opt. & 6875 & 345.30 K & 331.75 K & 31.81 \\ \hline \hline \end{tabular} \end{table} Table 4: Thermal response of the 48 cores configurations. 
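For clarity, the three metrics reported in Tables 4 and 5 (peak temperature, mean temperature and maximum thermal gradient) can be extracted from a per-cell temperature map as in the sketch below. The map itself would come from the thermal model or simulator, which is not reproduced here, and the definition of the maximum gradient as the difference between the hottest and coldest cells is our assumption, since the paper does not spell it out:

```python
import numpy as np

def thermal_metrics(temperature_map):
    """temperature_map: array of cell temperatures in kelvin (e.g. layers x rows x cols).
    Returns (T_MAX, T_MEAN, Grad_MAX), with Grad_MAX taken here as max - min."""
    t = np.asarray(temperature_map, dtype=float)
    return t.max(), t.mean(), t.max() - t.min()

# Toy usage: a 4-layer chip discretized into 8x8 thermal cells per layer.
rng = np.random.default_rng(1)
chip = 330.0 + 20.0 * rng.random((4, 8, 8))
print(thermal_metrics(chip))
```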
## 4 Conclusions and Future Work Current and short term future state-of-the-art in 3D many-core integration requires thermal-aware floorplanning techniques able to reduce peak and mean temperatures. However, current techniques that take into account thermal issues spend the most of the execution time dealing with decoding and evaluation of solutions. This work has proposed a parallel implementation of a thermal-aware Multi-objective Evolutionary Algorithm for 3D floorplanning using a master-worker scheme. This model has provided optimized configurations for systems composed of 48 and 128 heterogeneous processor cores. We have shown that the highest speedup values are obtained when the number of workers is closer to the number of cores of the processor that runs the algorithm. In our experiments, run on a 4-core processor, we have obtained maximum speedup values of 3.79 and 3.19 respectively for the 48 and 128 core test configurations by selecting 5 workers in the optimization algorithm. Furthermore, the parallelization presented in this work has made possible the study of the convergence of the floorplanner. The performed analysis shows that convergence evolves successfully in our experiments. As a future work, we aim to overcome the drawbacks of the floorplanner presented in this work. First, we plan to replace the current approximated thermal model with a validated thermal simulator. \begin{table} \begin{tabular}{c c c c c} \hline \hline & Wire L. & \(T_{MAX}\) & \(T_{MEAN}\) & \(Grad_{MAX}\) \\ \hline 128 opt. & 31587 & 396.84 K & 362.50 K & 75.80 \\ \hline \hline \end{tabular} \end{table} Table 5: Thermal response of the 128 cores configuration. Figure 10: Thermal map of the 4 layers of the baseline configuration of the 48 cores platform. In fact, the implemented thermal model is motivated by its low computational cost but it might not be obtaining accurately the thermal behavior of the different individuals. A recent work [23] has shown that the integration of a thermal simulator in the floorplanning process leads to a simultaneous reduction of the peak temperature and the wire length. Furthermore, the model proposed in the referred work is claimed to be, not only more accurate, but also faster than the approximated thermal model. Another major improvement could be achieved with the use of a more suitable representation of the solutions. In fact, most of the floorplanning proposals are based on representations that require time consuming heuristics to decode the solutions. For instance, in our work, the decoding of the solutions together with the evaluation remains the bottleneck of the optimization process. A new representation allowing a direct mapping of the individuals into configurations of the architecture would alleviate the computational cost of the algorithm as the decoding step would be avoided. Furthermore, it would eliminate heuristics that might limit the exploration space and cause premature convergence problems. Thus, such a representation would be more suitable for fixed-outline floorplanning problems. To propose a tool consistent with the state of the art of 3D chip design, thermal-aware floorplanners have to be implemented in accordance with current thermal simulators that split the IC into thermal cells (as done by 3D-ICE [24]). This way, the coding of the solutions has to respect the grid-like structure used by this kind of simulators. Therefore, the thermal error due to the different cell sizes used in the optimization and validation processes is eliminated. 
Thus, a better thermal optimization can be achieved. The use of a grid-like representation together with the removal of placement heuristics makes the process well adapted for execution in massively parallel architectures such as GPUs. A deeper study of new representations is our current and future work, and the preliminary results are very promising. Figure 11: Thermal map of the 4 layers of a non-dominated solution of the 48 cores platform.
2303.14205
Hydrodynamic electronic transport
The ``flow'' of electric currents and heat in standard metals is diffusive with electronic motion randomized by impurities. However, for ultraclean metals, electrons can flow like water with their flow being described by the equations of hydrodynamics. While theoretically postulated, this situation was highly elusive for decades. In the last decade, several experimental groups have found strong indications for this type of flow, especially in graphene-based devices. In this review, we give an overview of some of the recent key developments, both on the theoretical and experimental side.
Lars Fritz, Thomas Scaffidi
2023-03-24T18:00:02Z
http://arxiv.org/abs/2303.14205v1
# Hydrodynamic electronic transport ###### Abstract The "flow" of electric currents and heat in standard metals is diffusive with electronic motion randomized by impurities. However, for ultraclean metals, electrons can flow like water with their flow being described by the equations of hydrodynamics. While theoretically postulated, this situation was highly elusive for decades. In the last decade, several experimental groups have found strong indications for this type of flow, especially in graphene-based devices. In this review, we give an overview of some of the recent key developments, both on the theoretical and experimental side. ###### Contents * 1 Introduction * 1.1 Scope * 1.2 Outline * 2 Hydrodynamic behavior in electronic lattice systems * 3 Theoretical background * 3.1 The setup * 3.2 Boltzmann equation * 4 Scenarios of electronic hydrodynamics * 4.1 (I) Electron-hole plasma hydrodynamics (EHPH) * 4.2 (II) Fermi-liquid hydrodynamics (FLH) * 4.3 (III) Fermi-liquid-phonon hydrodynamics (FLPH) * 5 Experiments * 5.1 Favorable conditions * 5.2 (I) Electron hole plasma hydrodynamics * 5.3 (II) Fermi-liquid hydrodynamics (FLH) * 5.4 (III) Fermi-liquid-phonon hydrodynamics * 6 Conclusion and outlook ## 1 Introduction There are very few 'universal truths' in physics. Hydrodynamic behavior is one of them. The motion of any substance at high enough temperature follows the laws of hydrodynamics. Hydrodynamics in its original context describes the viscous motion of water. However, its principles apply in a much wider setting: In the physics of stars and interstellar matter, magnetohydrodynamics of plasmas, but also the dynamics of soft active matter. It can also be encountered in applied disciplines, including engineering: ocean dynamics, weather modeling, aviation, the dynamics of gas flowing through pipes, or traffic flow, to name a few examples. They even apply to the physics of the early universe: At energies high enough to melt protons and neutrons, the constituent quarks form the quark-gluon plasma. When a particle collider creates this state, it only lives a tiny fraction of a second. However, during that short spell, it moves according to the laws of fluid mechanics. The reason for this almost unreasonable versatility is the underlying simplicity and generality: the basis of hydrodynamics is the relaxation of conserved quantities towards local equilibrium. The conserved quantities are fundamental: mass, momentum, and energy (and charge in charged systems). For classical hydrodynamics, the set of differential equations that describes flow phenomena is composed of the continuity equations and the Navier-Stokes equation (1). The role of the latter in the description of fluid motion is comparable to the linear Maxwell-equations in electrodynamics. Taken together with appropriate boundary conditions, they describe all features of viscous flow. While these equations are basic, they are non-linear and their solution is highly non-trivial: Proving some properties of their solutions is one of the seven Millennium Prize problems in Mathematics (2). A fundamental characteristic of fluids is their viscosity: water flows faster than honey due to its lower viscosity while having a similar density. Some classical fluids are so viscous, they appear solid. The viscosity of pitch is \(10^{11}\) times that of water. 
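For orientation, the equations alluded to above can be written down explicitly. In a standard textbook form (quoted here for reference rather than taken verbatim from Ref. (1)), for a fluid with mass density \(\rho\), velocity field \(\vec{v}\), pressure \(p\) and shear viscosity \(\eta\), the continuity and (incompressible) Navier-Stokes equations read \[\partial_{t}\rho+\vec{\nabla}\cdot\left(\rho\vec{v}\right)=0\;,\qquad\rho\left(\partial_{t}\vec{v}+\left(\vec{v}\cdot\vec{\nabla}\right)\vec{v}\right)=-\vec{\nabla}p+\eta\nabla^{2}\vec{v}+\vec{f}\;,\] where \(\vec{f}\) collects external body forces. The ratio \(\nu=\eta/\rho\), the kinematic viscosity, is the combination that actually controls the flow in a given geometry; it is this ratio that enters the comparison between water and the quark-gluon plasma below.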
In the quantum world, one also encounters viscous liquids in strongly interacting systems: the quark-gluon plasma is estimated to have a dynamic viscosity \(\sim 10^{16}\) times that of water, thereby rivaling glass. The corresponding density of the system is enhanced by the same factor compared to water, meaning that the ratio of dynamic viscosity to density is actually comparable in water and the quark-gluon plasma. In the context of condensed matter physics, hydrodynamics has a successful history. It is applied in the description of strongly interacting one-dimensional systems, spin-excitations in insulators, as well as the dynamics in the vicinity of quantum critical points [3]. As a rule of thumb, hydrodynamic behavior is most likely to be encountered and discussed in strongly correlated systems. Metals fall into the class of weakly correlated systems. However, metals are also the systems where we most often talk about the 'flow' of charges and electrical currents. Until quite recently, this 'flow' was fundamentally different from the flow of water: It resembled the erratic movement of balls on a tilted nail board with randomly positioned nails. In this review, we summarize recent theoretical and experimental progress in (semi-)metals in which this is not the case and the flow of electrons resembles the flow of water through a tube with all the associated phenomena.

### Scope

This paper reviews the advances in the field of electronic hydrodynamics that have taken place over the last decade from the point of view of a theorist. It is geared towards a master's/PhD-level student, theoretical and experimental alike, who starts working in the field. Throughout the text, if faced with the choice, we sacrifice mathematical rigor for a more intuitive and concise presentation. There exist a number of recent excellent introductory texts to different subsets of the subject [4, 5, 6, 7, 8, 9, 10]. They are either considerably more or less technical than this review. For more technical details and/or mathematical rigor, we refer the readers to Refs. [4, 5, 6, 7, 8]. For a non-technical bird's eye view on the subject, we recommend Refs. [9, 10].

### Outline

In this paper, we distinguish three different scenarios of hydrodynamic behavior in metallic systems: (I) Hydrodynamics in systems close to charge neutrality with a mixture of electrons and holes as elementary excitations, such as graphene. We refer to the hydrodynamics in those systems as electron-hole plasma hydrodynamics, henceforth EHPH; (II) Hydrodynamics in conventional Fermi-liquids, henceforth called FLH; (III) Hydrodynamics in systems with perfect electron-phonon drag, henceforth called Fermi-liquid-phonon hydrodynamics, FLPH. It is important to note that there is in principle also room for electron-hole plasmas that are drag-coupled to phonons or even to other collective modes. In Sec. 2, we discuss generic features of hydrodynamics in metallic systems, the general obstacles, and how they can be overcome. In Sec. 3, we introduce the Boltzmann equation, our primary framework used throughout this paper. We set it up in a generic manner that allows us to describe the aforementioned three versions of hydrodynamics in Sec. 4: EHPH, FLH, and FLPH. We discuss the respective setups in Sec. 4.1, Sec. 4.2, and Sec. 4.3 and comment on their most prominent signatures (or at least the ones that are accessible in experiments at the moment). Afterwards, we discuss the experimental status in Sec.
5 and conclude with a discussion and some open question in Sec. 6. ## 2 Hydrodynamic behavior in electronic lattice systems Concepts of fluid motion were introduced to the study of transport properties of fermionic many-body systems already 70 years ago: Fluid behavior was first observed in liquid \({}^{3}\)He [11]. The first theoretical description goes back to Abrikosov and Khalatnikov in the late 1950s [12]. They understood that liquid \({}^{3}\)He was an example of the then novel Landau Fermi-liquid and that it exhibits hydrodynamic behavior. The relaxation mechanism in that system is the scattering between fermions, which conserves charge, mass, momentum, and energy. It leads to a length scale that governs the relaxational processes: it is called the inelastic mean free path, \(l_{ee}\). The hydrodynamic description is accurate as long as the system is probed over length scales much larger than \(l_{ee}\). In a Fermi-liquid, the inelastic mean free path diverges at low \(T\) as \(l_{ee}\propto T_{F}/T^{2}\) (\(T_{F}\) is the Fermi temperature), which is a direct consequence of fermion-fermion scattering being strongly suppressed due to phase space constraints. Whereas \(T_{F}\) is of the order of a few kelvin for helium at relevant pressures, it is of the order of \(1-4\times 10^{4}\) kelvin for typical metals, which leads to a strong suppression of electron-electron scattering. We will later find, that in systems like graphene close to charge neutrality, the inelastic mean free path does not suffer from the same suppression: \(l_{ee}\propto 1/T\). Either way, the important message is that having a higher temperature shortens the inelastic mean-free path \(l_{ee}\), and therefore takes the system potentially deeper into the hydrodynamic regime. However, typical electronic solid state systems are different from \({}^{3}\)He in one more crucial aspect: The underlying lattice. It introduces two length/time scales that are absent in \({}^{3}\)He. One is due to structural disorder within the lattice, called \(l_{\rm dis}\), and another one is due to scattering from lattice vibrations (phonons), called \(l_{\rm phon}\). Usually, these length scales, just as previously discussed for \(l_{ee}\), are temperature dependent. At low temperatures, electrons mostly scatter from disorder, leading to the textbook residual resistance in metals that is predominantly temperature-independent. At higher temperatures, the main scattering mechanism is due to electron-phonon interactions. In typical three dimensional metals, one finds \(l_{\rm phon}\ \propto 1/T^{3}\) (or for transport rather \(1/T^{5}\)) [13]. Usually, at relevant temperatures, one of these two scattering mechanisms is more effective in restricting electronic motion than electron-electron interaction. This is sketched in Fig. (1) (a) where the upper panel shows the conventional metallic regime. So why do these relaxation mechanisms prevent hydrodynamic behavior? Both disorder and phonon scattering violate conservation laws: Disorder scattering violates momentum conservation, whereas phonon scattering violates momentum and energy conservation. 1 Footnote 1: It is important to note that while phonons obstruct hydrodynamics in most systems, it can act as a facilitator in some cases, as discussed explicitly in the FLPH scenario, see Sec. 4.3. In the 1960s, Gurzhi realized that the absence of impurities and phonons is not a strict requirement for hydrodynamic behavior [14]. 
If electron-electron interactions provide the dominant scattering mechanism in a given temperature window, _i.e._, \(l_{ee}\ll l_{\rm phon},l_{\rm dis},W\) (\(W\) is the sample width), one can still speak of approximate conservation laws, opening the door for the observation of hydrodynamic behavior, see Fig. (1) lower panel. To summarize, two conditions are favorable to render the lattice sufficiently 'invisible': strong interactions, which can be 'boosted' by increasing temperature, and exceptional sample purity. Concerning phonons there needs to be a high characteristic phonon onset temperature or phonons have to drag-lock with the electrons to form a more complex fluid, as in the FLPH scenario. If all these factors come together, a hydrodynamic window can open at intermediate temperatures. For decades, lattice systems with those characteristics were not accessible. As a consequence, the field of hydrodynamics received little attention in the study of electronic transport properties in traditional solid state physics. In recent years, however, the situation has improved significantly and very pure materials have become accessible making electronic hydrodynamics an experimental reality. We will discuss this growing list of systems in Sec. 5. ## 3 Theoretical background There are two cornerstones of hydrodynamic behavior: conserved quantities and local thermal equilibrium reached through relaxational mechanisms conserving said quantities. Both can be described in the framework of the Boltzmann equation in a very elegant way. This makes the Boltzmann equation our method of choice. In the below section we lay the foundation for all the technical discussions that follow. ### The setup We start with a setup that allows describing all three types of hydrodynamic behavior discussed in this review in one unified framework. Our setup consists of electrons (\(+\)) and holes (\(-\)) as well as phonons. 2 Electrons have dispersion \(\epsilon_{+}(\vec{k})\) and holes \(\epsilon_{-}(\vec{k})=-\epsilon_{+}(\vec{k})\). A chemical potential \(\mu\) allows to tune from an electron-hole plasma to a Fermi-liquid at a given temperature \(T\): For \(|\mu|/T\leq 1\) we have an electron-hole plasma, whereas for \(|\mu|/T\gg 1\) we are in a Fermi-liquid limit, see also discussion in Sec. 4.1. Throughout the text, we make Figure 1: (a) In conventional metals relaxation is disorder dominated, whereas in hydrodynamic metals it is interaction dominated. (b) Two-band system consisting of electrons and holes. A chemical potential allows to ’tune’ from Fermi-liquid (\(|\mu|/T\gg 1\)) to an Electron-hole plasma \(|\mu|/T\ll 1\). Throughout this paper we assume \(\epsilon_{+}(\vec{k})=-\epsilon_{-}(\vec{k})\) the simplifying assumption of isotropy, _i.e._, \(\epsilon_{\pm}(\vec{k})=\epsilon_{\pm}(|\vec{k}|)\). The formalism, however, can easily accommodate a more generic dispersion, it only leads to more complicated expressions. A sketch of such a two-band model is shown in Fig. (1) (b). The phonons have a dispersion \(\omega(\vec{k})\). ### Boltzmann equation The Boltzmann equation is a fundamental equation of statistical physics [15]. It accommodates the two most important aspects of hydrodynamic behavior: (1) One can easily identify conserved quantities and derive continuity equations; (2) It describes the slow relaxation toward local equilibrium in the presence of conserved quantities. At its core, it is a differential equation for the distribution functions of the (quasi-) particles in the system. 
We introduce the distribution functions \(f_{+}\left(\epsilon_{+}(\vec{k}),\vec{k}\right)=f\left(\epsilon_{+}(\vec{k}), \vec{k},\vec{x}\right)\) for the electrons and \(f_{-}\left(\epsilon_{-}(\vec{k}),\vec{k},\vec{x}\right)=f\left(\epsilon_{-}( \vec{k}),\vec{k},\vec{x}\right)-1\) for the holes. Subtracting '1' from the hole distribution amounts to subtracting the filled lower band. This ensures that we can refrain from using a cutoff when calculating physical quantities from the distribution functions. In equilibrium, the distribution functions for the electrons and holes reduce to the standard Fermi-Dirac distributions \(f_{\pm}^{0}\left(\epsilon_{\pm}(\vec{k})\right)=\pm\left(\exp\left(\pm\left( \epsilon_{\pm}(\vec{k})-\mu\right)/\left(k_{B}T\right)\right)+1\right)^{-1}\), where \(k_{B}\) is the Boltzmann constant. Furthermore, we introduce the distribution function \(b(\omega(\vec{k}))\) for the phonons. In equilibrium, it is the Bose-Einstein distribution \(b^{0}(\omega(\vec{k}))=\left(\exp\left(\left(\omega(\vec{k})-\mu\right)/\left( k_{B}T\right)\right)+1\right)^{-1}\). We find three coupled Boltzmann equations (from now on we use \(\hbar=k_{B}=1\) ) \[\partial_{t}f_{+}+\partial_{\vec{k}}\epsilon_{+}(\vec{k})\partial _{\vec{r}}f_{+}+\partial_{\vec{r}}\epsilon_{+}(\vec{k})\partial_{\vec{k}}f_{+} = \mathcal{C}_{+}^{ee}+\mathcal{C}_{+}^{eh}+\mathcal{C}_{+}^{\text{ phon}}+\mathcal{C}_{+}^{\text{dis}}\;,\] \[\partial_{t}f_{-}+\partial_{\vec{k}}\epsilon_{-}(\vec{k}) \partial_{\vec{r}}f_{-}+\partial_{\vec{r}}\epsilon_{-}(\vec{k})\partial_{\vec{ k}}f_{-} = \mathcal{C}_{-}^{hh}+\mathcal{C}_{-}^{he}+\mathcal{C}_{-}^{\text{ phon}}+\mathcal{C}_{-}^{\text{dis}}\;,\] \[\partial_{t}b+\vec{\nabla}_{\vec{k}}\omega(\vec{k})\partial_{ \vec{r}}b+\partial_{\vec{r}}\omega(\vec{k})\partial_{\vec{k}}b = \mathcal{C}_{\text{phon}}^{\text{int}}+\mathcal{C}_{\text{phon}}^{ +}+\mathcal{C}_{\text{phon}}^{-}\;. \tag{1}\] The left-hand sides are the so-called streaming terms resulting from forces, inhomogeneities, and temporal changes (we neglect Berry-phase terms that are potentially present in two-band systems since we are interested in diagonal response and'metallic' systems). The right-hand sides describe the collisions of distinct physical origin, all encoded in the collision integrals \(\mathcal{C}\). Collisions enable the system to relax towards local thermal equilibrium, a requirement of hydrodynamic behavior. They also couple the three Boltzmann equations and allow the three subsystems, electrons, holes, and phonons, to exchange charge, particles, momentum, and energy. 3 Footnote 3: The magnetic part of the Lorentz force is included in Eq. 1. if the momentum is replaced by the canonical momentum according to the minimal coupling prescription [13]. #### 3.2.1 Relaxational processes Eq. 1. is a coupled set of integrodifferential equations. There are two main difficulties in solving them, all rooted in the collision terms: The collision terms are non-linear and couple all three equations. We split the collision terms into four groups of physically distinct scattering processes: 1. \(\mathcal{C}_{+}^{ee}\) and \(\mathcal{C}_{-}^{hh}\) describe scattering events in which either only electrons or only holes are involved. The processes conserve the number of electrons and holes, as well as the momentum and energy of both the electron and hole subsystems, respectively. These terms are the terms conventionally associated with hydrodynamic behavior in Fermi-liquids, see Sec. 4.2. 2. 
\(\mathcal{C}^{eh}_{+}\) and \(\mathcal{C}^{he}_{-}\) describe scattering events between electrons and holes. The processes do not necessarily conserve the number of electrons and holes, individually. Furthermore, they transfer momentum and energy between the electron and hole subsystems. Physically, they correspond to drag terms between the electrons and holes. Overall, the combined system of electrons and holes still conserves total charge, total momentum, and total energy. These terms allow for drag-coupled hydrodynamics in multi-component systems, such as electron-hole plasmas, Sec. 4.1. 3. \(\mathcal{C}^{\text{phon}}_{\pm}\) and \(\mathcal{C}^{\pm}_{\text{phon}}\) describe the scattering between electrons, holes, and phonons. They allow transferring momentum and energy from electrons and holes to phonons and vice versa. Taken together, they conserve total charge, total momentum, and total energy. Again, these terms allow for drag-coupled hydrodynamics in multi-component systems and are important in Fermi-liquid-phonon setups, Sec. 4.3. 4. \(\mathcal{C}^{\text{dis}}_{\pm}\) and \(\mathcal{C}^{\text{int}}_{\text{phon}}\) describe the scattering of electrons and holes from the disorder as well as the internal relaxation of the phonon system. In the case of \(\mathcal{C}^{\text{dis}}_{\pm}\), it conserves the individual number of electrons and holes and consequently total charge as well as energy, but it breaks momentum conservation. The internal phonon term, \(\mathcal{C}^{\text{int}}_{\text{phon}}\), accounts for a variety of effects: non-linear phonon-scattering, Umklapp scattering, as well as the scattering from disorder in the phonon sector. Potentially, it breaks all conservation laws associated with the phonons. These terms are classified as 'non-hydrodynamic'. A graphical representation of the role of the aforementioned scattering terms is shown in Fig. 2. We explicitly distinguish 'hydrodynamic' terms from 'non-hydrodynamic' terms. One of the critical features of the Boltzmann equation description is that it allows for identifying conserved quantities in a straightforward manner. Formally, one needs to construct the so-called collisional invariants. For a pedagogical discussion of this subject, we recommend consulting Ref. [15] or similar textbooks about kinetic theory. In hydrodynamics, we are always concerned with the conservation of particles/charge, momentum, and energy. The collisional invariants that correspond to these quantities are integrals over the respective quantity and the Boltzmann equation. Concretely, they read\(\int\frac{d^{d}k}{(2\pi)^{d}}(1,\vec{k},\text{Energy}(\vec{k}))\mathcal{C}=(N,K,E)\) and if a quantity is conserved, they equate to Figure 2: Graphical representation of how the different scattering terms act inside and between particle species. The presence or absence of certain terms determines the type of hydrodynamic or conventional transport and flow behavior. zero. The first integral refers to particle number (\(N\)), the second to momentum (\(K\)), and the last one to energy (\(E\)). For our setup, not all of those quantities have to be zero for the system to be hydrodynamic. Instead, they have to obey sum rules. The quantities are summarized in **Table 1** for the respective particle type and scattering process. These collisional invariants directly connect to the discussion above and the sketch in Fig. 2. We observe that some of the quantities are not generically zero, meaning they correspond to a quantity that is not conserved. 
To make this very concrete, we consider \(K_{+}^{eh}\neq 0\). This implies that in a collision between electrons and holes, the momentum in the electron sector is not conserved. However, there is a sum rule which reads that \(K_{+}^{eh}+K_{-}^{he}=0\). This implies that the total momentum of the combined system of electrons and holes is conserved, in analogy with the previous discussion. We find a set of six sum rules according to \[X_{+}^{eh}+X_{-}^{he} = 0\quad\text{and}\] \[X_{+}^{\text{phon}}+X_{-}^{\text{phon}}+X_{\text{phon}}^{+}+X_{ \text{phon}}^{-} = 0\, \tag{2.2}\] where \(X\) is \((N,K,E)\). These sum rules can be derived explicitly for rather generic interactions but we content ourselves with interpreting them: (a) The combined system of electrons and holes can exchange particles, momentum, and energy between the subsystems. However, the combined system conserves charge, total momentum, and total energy. (b) The combined system of electrons, holes, and phonons can exchange particles, momentum, and energy between them. Again, charge, phonon number, total momentum, and total energy are conserved in the combined system. There are only three terms that break these important conservation laws: \(\mathcal{C}_{\pm}^{\text{dis}}\) breaks momentum conservation of the electrons and holes and \(\mathcal{C}_{\text{phon}}^{\text{int}}\) which, depending on details, breaks phonon number conservation, momentum, and energy. #### 3.2.2 Densities and currents Besides the collisional invariants and associated conservation laws, we can also use the Boltzmann equations to derive continuity equations. This can be achieved by integrating the streaming terms of Eq. 1. over the same quantities. There, we find densities, currents, forces, and heating terms, see **Table 2**. In the same way, we find generalized currents shown in **Table 3**. ## 4 Scenarios of electronic hydrodynamics In this review, we consider three different scenarios of electronic hydrodynamics. Our setup can in principle accommodate more complex scenarios, but we focus on three scenarios \begin{table} \begin{tabular}{l|c|c|c} \hline & Particle number & Momentum & Energy \\ \hline Electron & \(N_{+}^{ee},N_{+}^{\text{dis}}=0\) & \(K_{+}^{ee}=0\) & \(E_{+}^{ee},E_{+}^{\text{dis}}=0\) \\ & \(N_{+}^{\text{phon}},N_{+}^{eh}\neq 0\) & \(K_{+}^{eh},K_{+}^{\text{dis}},K_{-}^{\text{phon}}\neq 0\) & \(E_{+}^{eh},E_{+}^{\text{phon}}\neq 0\) \\ \hline Hole & \(N_{-}^{hh},N_{-}^{\text{dis}}=0\) & \(K_{-}^{hh}=0\) & \(E_{-}^{hh},E_{+}^{\text{dis}}=0\) \\ & \(N_{-}^{\text{phon}},N_{-}^{he}\neq 0\) & \(K_{-}^{he},K_{-}^{\text{dis}},K_{-}^{\text{phon}}\neq 0\) & \(E_{-}^{he},E_{-}^{\text{phon}}\neq 0\) \\ \hline Phonon & \(N_{\text{phon}}^{\text{int}},N_{\text{phon}}^{\pm}\neq 0\) & \(K_{\text{phon}}^{\text{int}},K_{\text{phon}}^{\pm}\neq 0\) & \(E_{\text{phon}}^{\text{int}},E_{\text{phon}}^{\pm}\neq 0\) \\ \hline \end{tabular} \end{table} Table 1: Collisional invariants that are currently most discussed. The three scenarios are: (I) electron-hole plasma hydrodynamics, (II) Fermi-liquid hydrodynamics, and (III) Fermi-liquid-phonon hydrodynamics. The difference between the three scenarios can be made quite pictorial, see Fig. 3: In (I), the fluid is composed of electrons and holes, in (II) just electrons (or just holes), whereas in (III) it is electrons (or holes) and phonons. The individual constituents in the multicomponent fluid cases (I) and (III) are glued together by the aforementioned drag effects. 
In the following we review the three scenarios in some detail using the Boltzmann equation as the workhorse. In each of the scenarios we also discuss one or two key signatures in detail. Those signatures usually also apply to the other scenarios but we assign them in a way which is mostly motivated by the main experiments in the specific group. ### (I) Electron-hole plasma hydrodynamics (Ehph) The prime representatives of the class of electron-hole plasmas are mono- and bilayer graphene in the vicinity of their charge neutrality point. However, all Dirac type systems and potentially even semiconductors at elevated temperatures fall into this category, albeit with modifications. In the following discussion we mainly concentrate on graphene as the most commonly studied system. Monolayer graphene is a two-dimensional system of carbon atoms on the honeycomb lattice. In its undoped state, it is neither a metal nor an insulator, but a semimetal [16, 17]. \begin{table} \begin{tabular}{l|c|c|c} \hline Density & Momentum density & Energy Density & Force \\ \hline \(n_{+}=\int f_{+}\) & \(\vec{n}_{+}^{\vec{k}}=\int\vec{k}\;f_{+}\) & \(n_{+}^{\epsilon}=\int\epsilon_{+}(\vec{k})f_{+}\) & \(\vec{F}_{+}=\int\vec{k}\;f_{+}\) \\ \hline \(n_{-}=\int f_{-}\) & \(\vec{n}_{-}^{k}=\int\vec{k}\;f_{-}\) & \(n_{-}^{\epsilon}=\int\epsilon_{-}(\vec{k})f_{-}\) & \(\vec{F}_{-}=\int\vec{k}\;f_{-}\) \\ \hline \(n_{\text{phon}}=\int b\) & \(\vec{n}_{\text{phon}}^{\vec{k}}=\int\vec{k}\;b\) & \(n_{\text{phon}}^{\epsilon}=\int\omega(\vec{k})b\) & \(\vec{F}_{\text{phon}}=\int\vec{k}\;b\) \\ \hline \end{tabular} \end{table} Table 2: Densities, forces, and Joule heating \begin{table} \begin{tabular}{l|c|c|c} \hline Particle Current & Momentum Flux & Energy Current & Heating \\ \hline \(\vec{j}_{+}=\int\partial_{\vec{k}}\epsilon_{+}(\vec{k})\;f_{+}\) & \(\Pi_{ij}^{+}=\int k_{i}\partial_{k_{j}}\epsilon_{+}(\vec{k})f_{+}\) & \(\vec{j}_{+}^{\epsilon}=\int\partial_{\vec{k}}\epsilon_{+}(\vec{k})\epsilon_{+}( \vec{k})\;f_{+}\) & \(h_{+}^{\epsilon}=\int\partial_{\vec{k}}\epsilon_{+}(\vec{k})\cdot\vec{k}f_{+}\) \\ \(\vec{j}_{-}=\int\partial_{\vec{k}}\epsilon_{-}(\vec{k})\;f_{-}\) & \(\Pi_{ij}^{-}=\int k_{i}\partial_{k_{j}}\epsilon_{-}(\vec{k})f_{-}\) & \(\vec{j}_{-}^{\epsilon}=\int\partial_{\vec{k}}\epsilon_{-}(\vec{k})\epsilon_{-}( \vec{k})\;f_{-}\) & \(h_{-}^{\epsilon}=\int\partial_{\vec{k}}\epsilon_{-}(\vec{k})\cdot\vec{k}f_{-}\) \\ \hline \(\vec{j}_{\text{phon}}=\int\partial_{\vec{k}}\omega(\vec{k})\;b\) & \(\Pi_{ij}^{\text{phon}}=\int k_{i}\partial_{k_{j}}\omega(\vec{k})b\) & \(\vec{j}_{\text{phon}}^{\epsilon}=\int\partial_{\vec{k}}\omega(\vec{k})\omega( \vec{k})\;b\) & \(h_{\text{phon}}^{\epsilon}=\int\partial_{\vec{k}}\omega(\vec{k})\cdot\vec{k}b\) \\ \hline \end{tabular} \end{table} Table 3: Currents Figure 3: Composition of the fluid in the three different scenarios. Its density of states is linear in the deviation from the Dirac point. This originates from the low-energy bandstructure, shown in Fig. 4 a). Two bands, of electron and hole type, touch in isolated points in the Brillouin zone. In the vicinity of these points, the system can effectively be described by the massless Dirac equation. Consequently, the spectrum is linear in momentum according to \(\epsilon_{\pm}=\pm v_{F}|\vec{k}|\) where \(+\) refers to electrons and \(-\) to holes, and \(v_{F}\) is the Fermi velocity, see Fig. (4) a). Concerning the plasma character of charge-neutral graphene, the key insight came in a paper by Sheehy and Schmalian in 2007 [(18)]. 
The essence is summarized in Fig. 4 b). It shows the 'phase diagram' of graphene as a function of the chemical potential \(\mu\) (\(x\)-axis) and temperature \(T\) (\(y\)-axis). The chemical potential controls the filling of the Dirac cones: The charge density \(n_{c}\propto\mu^{2}\). Consequently, at \(\mu=0\) we have \(n_{c}=0\). However, there are still excitations at finite temperature \(T\). A quantity that is sensitive to that is the imbalance density \(n_{\rm imb}=n_{+}-n_{-}\) which behaves according to \(n_{\rm imb}\propto T^{2}\). This quantity is a measure for the density of excitations, in that case a thermal cloud of electrons and one of holes, both of equal density, which ensures \(n_{c}=0\). The finite temperature region above \(\mu\) has been dubbed the 'Dirac liquid' and it has thermodynamic properties that are very different from Fermi-liquids. The crossover region is defined by the condition \(|\mu|\approx T\). For \(|\mu|\gg T\), the system behaves like a Fermi-liquid of electron or hole type.4 Footnote 4: This discussion is not only valid in graphene, but in any Dirac-type two-band system including bilayer graphene. If the temperature is larger than the respective gap, it even applies to semiconductors. The Dirac liquid or Dirac plasma has a number of curious experimental signatures. Some of them become apparent in the bulk thermodynamic quantities, whereas others can be observed in transport probes. Concerning this review, in the case of the electron-hole plasma we mostly concentrate on bulk transport properties. #### 4.1.1 Theoretical description The starting point is the Boltzmann equation, Eq. 1.. To describe the EHPH scenario, we consider the Boltzmann equation of electrons and holes and disregard the contribution due to phonons (a justification of this is mostly of experimental Figure 4: a) Schematic of the dispersion relation of graphene near its Dirac point. b) ’Phase diagram’ of clean graphene at finite temperature. The region above \(\mu=0\) is referred to as Dirac liquid. At \(\mu\approx T\) it crosses over to a Fermi-liquid. nature and discussed in Sec. 5), _i.e._, \({\cal C}^{\rm phon}_{\pm}\approx 0\). The remaining coupled Boltzmann equations read \[\partial_{t}f_{+}+\partial_{\vec{k}}\epsilon_{+}(\vec{k})\partial_ {r}f_{+}+\partial_{r}\epsilon_{+}(\vec{k})\partial_{\vec{k}}f_{+} = {\cal C}^{ee}_{+}+{\cal C}^{eh}_{+}+{\cal C}^{\rm dis}_{+}\;,\] \[\partial_{t}f_{-}+\partial_{\vec{k}}\epsilon_{-}(\vec{k}) \partial_{r}f_{-}+\partial_{r}\epsilon_{-}(\vec{k})\partial_{\vec{k}}f_{-} = {\cal C}^{hh}_{-}+{\cal C}^{he}_{-}+{\cal C}^{\rm dis}_{-}\;. \tag{3.2}\] The corresponding conservation laws are shown in **Table 4**. It shows that electrons and holes, individually, are not conserved, whereas the total charge density \(n_{c}=n_{+}+n_{-}\) is. The same statement applies to the energy, where only the total energy \(n_{c}^{\epsilon}=n_{+}^{\epsilon}+n_{-}^{\epsilon}\) is conserved, whereas energy can be exchanged between the subsystems. The momentum density is different in that the total momentum, \(n_{c}^{\vec{k}}=n_{+}^{\vec{k}}+n_{-}^{\vec{k}}\), is not conserved if disorder is present. In the case of the EHPH, it is useful to introduce imbalance densities according to \(n_{\rm imb}=n_{+}-n_{-}\) and correspondingly momentum and energy. This leads to the continuity equations shown in **Table 5** #### 4.1.2 The thermoelectric response of EHPH One of the hallmarks of the hydrodynamic behavior of EHPH can be seen in the bulk thermoelectric response. 
We will discuss below that it has two key properties: an interaction-dominated bulk electric conductivity (impossible in Fermi-liquids) and an extreme violation of the Wiedemann-Franz law (13). The thermoelectric response of a system is the combined response of the system to an applied electric field \(\vec{E}\) and a temperature gradient \(\vec{\nabla}T\) captured by the thermoelectric response tensor according to (13) \[\left(\begin{array}{c}\vec{j}_{c}\\ \vec{j}^{Q}\end{array}\right)=\left(\begin{array}{cc}\sigma&\alpha\\ T\alpha&\overline{\kappa}\end{array}\right)\left(\begin{array}{c}\vec{E}\\ -\vec{\nabla}T\end{array}\right)\;. \tag{4.1}\] Below we will discuss how to calculate the response coefficients \(\sigma\), \(\alpha\), and \(\overline{\kappa}\). #### 4.1.2.1 The relaxation time approximation Solving the Boltzmann equation is very tedious: The collision terms are integral expressions involving the distribution functions themselves. Usually, there is no analytical solution, apart from in thermal equilibrium. Here, we consider near-equilibrium transport phenomena. Those can usually be described \begin{table} \begin{tabular}{l|c|c} \hline & Electrons & Holes \\ \hline Particle number & \(\partial_{t}n_{+}+\vec{\nabla}\vec{j}_{+}=N^{eh}_{+}\) & \(\partial_{t}n_{-}+\vec{\nabla}\vec{j}_{-}=N^{eh}_{+}\) \\ Momentum & \(\partial_{t}\vec{n}_{+}^{\vec{k}}+\vec{\nabla}\Pi^{+}-\vec{F}_{+}=K^{eh}_{+}+K^{ \rm dis}_{+}\) & \(\partial_{t}\vec{n}_{-}^{\vec{k}}+\vec{\nabla}\Pi^{-}-\vec{F}_{-}=K^{eh}_{-}+K^{ \rm dis}_{-}\) \\ Energy & \(\partial_{t}n_{+}^{\epsilon}+\vec{\nabla}\cdot\vec{j}_{+}^{\epsilon}-h_{+}^{ \epsilon}=E^{eh}_{+}\) & \(\partial_{t}n_{-}^{\epsilon}+\vec{\nabla}\vec{j}_{-}^{\epsilon}-h_{-}^{\epsilon}= E^{he}_{-}\) \\ \hline \end{tabular} \end{table} Table 4: Conservation laws of electrons and holes \begin{table} \begin{tabular}{l|c|c} \hline & Charge (I)+(II) & Imbalance (I) \\ \hline Particle number & \(\partial_{t}n_{c}+\vec{\nabla}\vec{j}_{c}=0\) & \(\partial_{t}n_{\rm imb}+\vec{\nabla}\vec{j}_{\rm imb}=N^{ee}_{\rm imb}\) \\ Momentum & \(\partial_{t}\vec{n}_{c}^{\vec{k}}+\vec{\nabla}\Pi^{c}-\vec{F}_{c}=K^{\rm dis}_{ c}\) & \(\partial_{t}\vec{n}_{\rm imb}^{\vec{k}}+\vec{\nabla}\Pi^{\rm imb}-\vec{F}_{\rm imb }=K^{eh}_{\rm imb}+K^{\rm dis}_{\rm imb}\) \\ Energy & \(\partial_{t}n_{c}^{\epsilon}+\vec{\nabla}\cdot\vec{j}_{c}^{\epsilon}-h_{c}^{ \epsilon}=0\) & \(\partial_{t}n_{\rm imb}^{\epsilon}+\vec{\nabla}\vec{j}_{\rm imb}-h_{\rm imb}^{ \epsilon}=E^{eh}_{\rm imb}\) \\ \hline \end{tabular} \end{table} Table 5: Conservation laws of charge and imbalance within linear-response theory which allows to make progress and eventually leads to solving linear equations. To that end, one linearizes the distribution functions according to \(f_{\pm}\approx f_{\pm}^{0}+\delta f_{\pm}\) and \(b\approx b^{0}+\delta b\) where \(\delta f_{\pm}\) and \(\delta b\) are deviations from equilibrium. These deviations are linear in the applied perturbations. In the case of thermoelectric transport, Eq. 4., those are the field \(\vec{E}\) and the temperature gradient \(\vec{\nabla}T\). The solution of the linearized coupled Boltzmann equations, while standard, is still a rather technical exercise that usually requires mode-expansion involving potentially complicated numerics [(19)]. While this is required in some situations, for instance discussed in Sec. 4.2, we can here use the relaxation time approximation [(15)]. 
In this approximation, all the specifics of the collision process are condensed into one quantity: The relaxation time \(\tau\). In general, the relaxation time approximation violates conservation laws. For the purpose of this discussion, we set it up such that it respects all the conservation laws and collisional invariants introduced in **Table 4** and **Table 5** in a straightforward way. Furthermore, we checked that it reproduces the characteristic qualitative features of an actual numerical solution. Assuming an applied electric field \(\vec{E}\) and a temperature gradient \(\vec{\nabla}T\) we find linearized Boltzmann equations in the relaxation time approximation according to \[\partial_{t}\delta f_{+}-\vec{\nabla}T\frac{\epsilon_{+}-\mu}{T} \vec{\nabla}_{\vec{k}}f_{+}^{0}-e\vec{E}\vec{\nabla}_{\vec{k}}f_{+}^{0} = -\frac{\delta f_{+}}{\tau_{+-}}+\frac{\delta f_{-}}{\tau_{-+}}- \frac{\delta f_{+}}{\tau^{\rm dis}}\] \[\partial_{t}\delta f_{-}-\vec{\nabla}T\frac{\epsilon_{-}-\mu}{T} \vec{\nabla}_{\vec{k}}f_{-}^{0}-e\vec{E}\vec{\nabla}_{\vec{k}}f_{+}^{0} = -\frac{\delta f_{-}}{\tau_{-+}}+\frac{\delta f_{+}}{\tau_{+-}}- \frac{\delta f_{-}}{\tau^{\rm dis}}\;. \tag{5.1}\] The relaxation times play distinct physical roles: \(1/\tau_{+-}\) and \(1/\tau_{-+}\) refer to electron-hole drag, mediated by interactions (we specify this in Sec. 5), whereas \(1/\tau_{\pm}^{\rm dis}\) accounts for disorder scattering of electrons and holes, respectively.5 Footnote 5: From Eq. 5. one can explicitly check that the sum rules, Eq. 2., hold. Using the expressions introduced in **Table 3**, we can formulate the charge current \(\vec{j}_{c}\) and energy current \(\vec{j}^{\epsilon}\) (the heat current follows from this according to \(\vec{j}^{Q}=\vec{j}^{\epsilon}-\mu/e\vec{j}_{c}\)). To that end, we combine both Boltzmann equations and assume momentum-independent scattering times. We exploit \(\vec{\nabla}_{\vec{k}}\epsilon_{+}=-\vec{\nabla}_{\vec{k}}\epsilon_{-}\) and \(\vec{\nabla}\epsilon_{+}=\vec{\nabla}\epsilon_{-}\) and integrate the Boltzmann equation. In total, we find \[\partial_{t}\vec{j}_{c}-\vec{\nabla}T\mathcal{T}_{c}-e\vec{E} \mathcal{E}_{c} = -\frac{\vec{j}_{c}}{\tau_{+}}-\frac{\vec{j}_{\rm limb}}{\tau_{-}} -\frac{\vec{j}_{c}}{\tau^{\rm dis}}\;,\] \[\partial_{t}\vec{j}_{\rm limb}-\vec{\nabla}T\mathcal{T}_{\rm limb }-e\vec{E}\mathcal{E}_{\rm limb} = -\frac{\vec{j}_{\rm limb}}{\tau^{\rm dis}}\;. \tag{5.2}\] We can do the same for the energy current, which leads to \[\partial_{t}\vec{j}^{\epsilon}-\vec{\nabla}T\mathcal{T}^{\epsilon }-e\vec{E}\mathcal{E}^{\epsilon} = -\frac{\vec{j}^{\epsilon}}{\tau^{\rm dis}}\;,\] \[\partial_{t}\vec{j}_{\rm limb}^{\epsilon}-\vec{\nabla}T\mathcal{ T}_{\rm limb}^{\epsilon}-e\vec{E}\mathcal{E}_{\rm limb}^{\epsilon} = -\frac{\vec{j}^{\epsilon}}{\tau_{-}}-\frac{\vec{j}_{\rm limb}^{ \epsilon}}{\tau_{+}}-\frac{\vec{j}_{\rm limb}^{\epsilon}}{\tau^{\rm dis}}\;. \tag{5.3}\] In the above expressions we introduced \(1/\tau_{+}=1/\tau_{+-}+1/\tau_{-+}\), \(1/\tau_{-}=1/\tau_{+-}-1/\tau_{-+}\). The quantities \(\mathcal{E}\) and \(\mathcal{T}\) can be obtained from straightforward integrals over the Boltzmann equation and are shown in **Table** (6) (note that for notational convenience we have introduced \(E_{\pm}=\epsilon_{\pm}-\mu\)). The hydrodynamic limit is reached for \(\frac{1}{\tau_{+}}\gg\frac{1}{\tau^{\rm dis}}\) (this is equivalent to \(l_{ee}\ll l_{\rm dis}\)). One can bring these four equations into the more conventional form, Eq. 
(4) \[\left(\begin{array}{c}\vec{j}_{c}\\ \vec{j}^{Q}\end{array}\right)=\left(\begin{array}{cc}\frac{e\mathcal{E}_{c} +\frac{\tau_{+}}{\tau_{-}}e\mathcal{E}_{\rm imb}}{-i\omega+\frac{1}{\tau_{+}} }+\frac{\tau_{+}}{\tau_{-}}\frac{e\mathcal{E}_{\rm imb}}{-i\omega+\frac{1}{ \tau^{\rm dis}}}&\frac{\mathcal{T}_{c}+\frac{\tau_{+}}{\tau_{-}}\mathcal{T}_{ \rm imb}}{-i\omega+\frac{1}{\tau_{+}}}+\frac{\tau_{+}}{\tau_{-}}-i\omega+ \frac{1}{\tau_{\rm dis}}\\ T\left(\frac{\tau_{c}+\frac{\tau_{+}}{\tau_{-}}\mathcal{T}_{\rm imb}}{-i \omega+\frac{1}{\tau_{+}}}+\frac{\tau_{+}}{\tau_{-}}\frac{\mathcal{T}_{\rm imb }}{-i\omega+\frac{1}{\tau_{\rm dis}}}\right)&\frac{\mathcal{T}^{*}-\frac{\mu \tau_{+}}{\tau_{-}}\mathcal{T}_{\rm imb}}{-i\omega+\frac{1}{\tau_{\rm dis}}} -\frac{\mu}{e}\frac{\tau_{c}+\frac{\tau_{+}}{\tau_{-}}\mathcal{T}_{\rm imb}} {-i\omega+\frac{1}{\tau_{+}}}\end{array}\right)\left(\begin{array}{c}\vec{ E}\\ \vec{\nabla}T\end{array}\right)\quad 8.\] We are interested in two particular transport coefficients: the electrical conductivity \(\sigma\) and the thermal conductivity \(\kappa=\overline{\kappa}-T\alpha^{2}\sigma\). The latter corresponds to \(\vec{j}^{Q}=-\kappa\vec{\nabla}T\) under the condition of no electric current flow. #### 4.1.2.2 **The hydrodynamic electrical conductivity** A key signature of electron-hole plasmas is their electrical conductivity which becomes most apparent at charge neutrality. It revolves around a seemingly paradoxical situation. The system has a total charge zero, \(n_{c}=0\). Nevertheless, there is a finite d.c. conductivity. Most surprisingly, this even holds true in the clean limit with no disorder at all. On the level of Eq. (8) and **Table** (6) this has the following origin: \(\mathcal{E}_{\rm imb}=0\) and \(1/\tau_{-}=0\). On the other hand, \(\mathcal{E}_{c}\neq 0\), which implies \[\sigma_{d.c.}(\mu=0,T)=e^{2}\mathcal{E}_{c}\tau_{+} \tag{9}\] where \(1/\tau_{+}\) is the inverse drag scattering time. As mentioned, this expression is finite even in the absence of disorder, _i.e._, for \(1/\tau^{\rm dis}=0\). The key to understanding this situation is summarized as a sketch in Fig. 5 a) and b). At the Dirac point, lower panel Fig. 5 a), the charge density \(n_{c}=0\). However, the imbalance density is \(n_{\rm imb}\neq 0\): At finite temperature, there are two thermal clouds of equal density, one of the electrons and one of the holes. Thus, the total charge is zero. However, an applied electric field pulls electrons and holes in opposite directions. Consequently, the total momentum of the system remains zero, but there is a current induced. Since there is no net momentum induced, there is no disorder required to relax momentum. The electric current, on the other hand, can decay. This is sketched in Fig. 5 b) where an electron-hole pair scatters into another electron-hole pair of opposite 'current'. The momentum, as discussed, is unchanged in this process. Overall, interactions provide a drag mechanism between electrons and holes that effectively 'glues' them together and makes them behave as one fluid. This is sufficient to establish a finite electric current, even without disorder. This is markedly different in a Fermi-liquid, see Fig. 5 a) upper panel. There, an applied field induces momentum and current at the same time (in Sec. 
4.2 we will see that they \begin{table} \begin{tabular}{l|c|c} \hline & Electric Field & Thermal gradient \\ \hline Electrical & \(\mathcal{E}_{c}=\int_{\vec{K}}\vec{\nabla}_{\vec{k}}\epsilon_{+}\vec{\nabla}_ {\vec{k}}\left(f_{+}^{0}-f_{-}^{0}\right)\) & \(\mathcal{T}_{c}=\int_{\vec{k}}\vec{\nabla}_{\vec{k}}\epsilon_{+}\left(E_{+} \vec{\nabla}_{\vec{k}}f_{+}^{0}-E_{-}\vec{\nabla}_{\vec{k}}f_{-}^{0}\right)\) \\ & \(\mathcal{E}_{\rm imb}=\int_{\vec{k}}\vec{\nabla}_{\vec{k}}\epsilon_{+}\vec{ \nabla}_{\vec{k}}\left(f_{+}^{0}+f_{-}^{0}\right)\) & \(\mathcal{T}_{\rm imb}=\int_{\vec{k}}\vec{\nabla}_{\vec{k}}\epsilon_{+}\left(E_ {+}\vec{\nabla}_{\vec{k}}f_{+}^{0}+E_{-}\vec{\nabla}_{\vec{k}}f_{-}^{0}\right)\) \\ \hline Thermal & \(\mathcal{E}_{c}^{\epsilon}=\int_{\vec{k}}\vec{\nabla}_{\vec{k}}\epsilon_{+} \epsilon_{+}\vec{\nabla}_{\vec{k}}\left(f_{+}^{0}+f_{-}^{0}\right)\) & \(\mathcal{T}^{\epsilon}=\int_{\vec{k}}\vec{\nabla}_{\vec{k}}\epsilon_{+} \epsilon_{+}\left(E_{+}\vec{\nabla}_{\vec{k}}f_{-}^{0}+E_{-}\vec{\nabla}_{ \vec{k}}f_{-}^{0}\right)\) \\ & \(\mathcal{E}_{\rm imb}=\int_{\vec{k}}\vec{\nabla}_{\vec{k}}\epsilon_{+} \epsilon_{+}\vec{\nabla}_{\vec{k}}\left(f_{+}^{0}-f_{-}^{0}\right)\) & \(\mathcal{T}_{\rm imb}^{\epsilon}=\int_{\vec{k}}\vec{\nabla}_{\vec{k}}\epsilon_{+ }\epsilon_{+}\left(E_{+}\vec{\nabla}_{\vec{k}}f_{-}^{0}-E_{+}\vec{\nabla}_{ \vec{k}}f_{-}^{0}\right)\) \\ \hline \end{tabular} \end{table} Table 6: Driving term integrals are directly proportional). This discussion strictly speaking only applies to the charge-neutrality point. Tuning away from charge neutrality, \(\mu\neq 0\), both \(\mathcal{E}_{c},\mathcal{E}_{imb}\neq 0\) become finite. There is also an effect on the scattering times: \(1/\tau_{+-}\neq 1/\tau_{-+}\), which implies that \(1/\tau_{-}\neq 0\). Consequently, the d.c. conductivity reads \[\sigma_{d.c.}(\mu=0,T)=e^{2}\left(\mathcal{E}_{c}+\frac{\tau_{+}}{\tau_{-}} \mathcal{E}_{\rm imb}\right)\tau_{+}+e^{2}\frac{\tau_{+}}{\tau_{-}}\mathcal{E }_{\rm imb}\tau^{\rm dis}\;. \tag{10}\] It is easy to see that this diverges in the absence of disorder (\(1/\tau^{\rm dis}\to 0\)) as one would expect. The reason is that the finite charge density, \(n_{c}\neq 0\) activates the Drude peak which is associated with momentum transport. Consequently, disorder is required to relax the current, just like in a Fermi-liquid. ##### 4.1.2.3 The heat conductivity Here, we discuss the response \(\vec{j}^{Q}=-\kappa\vec{\nabla}T\) which involves the coefficient \(\kappa\) that is not part of Eq. 4. It is related to heat conductivity in the absence of current flow. At the charge neutrality point, we can again consider Eq. (8) in combination with **Table** (6): Realizing that \(\mathcal{T}_{c}=0\), \(\mathcal{T}^{\epsilon}\neq 0\), as well as \(1/\tau_{-}=0\) directly leads to \[\kappa=\mathcal{T}^{\epsilon}\tau^{\rm dis}\;. \tag{11}\] Just like in the case of the electric current, there is a finite current induced despite having \(n_{c}=0\). The reason is the same as before, \(n_{\rm imb}\neq 0\). However, contrary to the case of an electric field, a temperature gradient makes both thermal clouds, electrons and holes, diffuse into the same direction. Consequently, it excites momentum (but no electric current). This implies that a finite response coefficient \(\kappa\neq 0\) requires a momentum-relaxing process. This Figure 5: a) In a Fermi-liquid, an applied electric field and well as a temperature gradient excite finite momentum. 
In the Dirac liquid, a temperature gradient excites a finite momentum, whereas an electric field does not. b) In the Dirac liquid, momentum and current decouple. One can relax current without relaxing momentum. is exactly the interpretation of Eq. 11. which becomes infinite in the clean limit where \(\tau^{\rm dis}\to\infty\). This situation is again depicted in Fig. (5). #### 4.1.2.4 The Wiedemann-Franz ratio An important quantity in the study of metals is the ratio between heat conductivity \(\kappa\) and the electric conductivity \(T\sigma\). The was already established in 1853 [20] by Wiedemann and Franz. They observed that for a variety of metals, the ratio \(\kappa/(T\sigma)\) tends to a constant value at low temperatures. Later, this was called the Lorenz number [13]. It was experimentally found that it is universal and given by \[L=\frac{\kappa}{T\sigma}=L_{0}=\frac{\pi^{2}}{3}\left(\frac{k_{B}}{e}\right)^ {2} \tag{12}\] which was later explained by Sommerfeldt [13]. Whether a system tends to this value or not is often taken as empirical evidence of whether the system is a Fermi-liquid or not. The intuitive understanding of the universality of this ratio is that both heat and electric currents are transported by the same type of (quasi-)particle. Additionally, both heat and electric current undergo the same relaxational mechanisms. In the case of a standard metal, this means that both heat and electrical current are limited by the same scattering time, \(1/\tau^{\rm dis}\), which is due to the disorder. Considering the above situation, we find \[L=\frac{\mathcal{T}^{c}}{e^{2}\mathcal{E}_{c}}\frac{\tau^{\rm dis}}{\tau_{+}} \tag{13}\] at the charge neutrality point. Not only does this ratio diverge for a clean system, but it is also not a universal quantity: in general, one should expect a strong violation of the Wiedemann-Franz law close to charge neutrality as well as a strong variation across different samples. However, it is a very good measure of the relative strength of elastic and inelastic scattering in the system. To finish this discussion, it is worthwhile mentioning that the bulk thermoelectric response measures properties of the homogenous flow of the degrees of freedom of the fluid. Therefore, it is not related to the viscosity which is sensitive to the friction of adjacent fluid layers moving at different speeds. Consequently, it is not entirely straightforward to express the viscosity in terms of the scattering times introduced above which are tailored to describe the thermoelectric response. #### 4.1.2.5 Navier Stokes Most experiments on EHPH so far target bulk thermoelectric transport properties and therefore do not directly probe the viscosity (we discuss one exception in Sec. 5). We proceed to sketch the derivation of the Navier-Stokes equation in an EHPH system. In the case of classical hydrodynamics, the Navier-Stokes equation can be derived from the set of equations introduced in **Table 4**. The missing ingredient is to assume that there is a slow uniform flow \(\vec{u}\) of the fluid which is related to a local equilibrium distribution function \[f_{\pm}=\frac{1}{e^{\frac{\epsilon_{+}-\mu-\vec{u}\cdot\vec{k}}{T}}+1}. \tag{14}\] Expanding this to linear order in \(\vec{u}\), we find \(\vec{j}_{c}=n_{c}\vec{u}\), \(\vec{j}^{\epsilon}=(n^{\epsilon}+\Pi^{c})\vec{u}\), \(\vec{j}_{\rm imb}=n_{\rm imb}\vec{u}\), and \(\vec{j}_{\rm imb}^{\epsilon}=(n_{\rm imb}^{\epsilon}+\Pi^{\rm imb})\vec{u}\) which is true for any type of electronic dispersion. 
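The statement can be checked directly with a small numerical experiment. The sketch below (not taken from the review; all parameter values are arbitrary illustrative choices, in units with \(\hbar=k_B=1\)) evaluates the particle current obtained from the drifted distribution for the conduction band of a two-dimensional Dirac cone and compares it with \(n_{+}\vec{u}\).

```python
import numpy as np

# Illustrative check of the statement above: for the conduction band of a 2D Dirac
# cone, eps(k) = vF*|k|, the particle current computed from the drifted distribution
# f(eps - u.k - mu) equals n_+ * u to linear order in the drift velocity u.
# T, mu and u are arbitrary illustrative values; units with hbar = k_B = 1.

vF, T, mu = 1.0, 1.0, 0.5
ux = 1e-3                                # small drift velocity along x

def fermi(E):
    return 1.0 / (np.exp(E / T) + 1.0)

# polar k-grid; integration measure d^2k/(2*pi)^2 = k dk dtheta / (2*pi)^2
k = np.linspace(1e-4, 30.0 * T / vF, 4000)
th = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
K, TH = np.meshgrid(k, th, indexing="ij")
eps = vF * K                             # conduction-band dispersion
kx = K * np.cos(TH)
vx = vF * np.cos(TH)                     # x-component of the group velocity
w = K / (2.0 * np.pi) ** 2 * (k[1] - k[0]) * (th[1] - th[0])   # quadrature weights

n_plus = np.sum(w * fermi(eps - mu))                 # conduction-band density
jx = np.sum(w * vx * fermi(eps - ux * kx - mu))      # current from the drifted distribution

print("n_+ * u_x =", n_plus * ux)
print("j_x       =", jx)   # the two agree up to O(u^2) and quadrature errors
```

The same band-by-band identity holds for any isotropic dispersion, which is what makes the relation between currents and densities dispersion-independent to this order.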
However, the specifies of the dispersion enter in the momentum densities \(\vec{n}_{c}^{\vec{k}}=-\vec{u}\int_{\vec{k}}\vec{k}\cdot\vec{k}\left(\partial_{ \epsilon_{+}}f_{+}^{0}+\partial_{\epsilon_{-}}f_{-}^{0}\right)\) and \(\vec{n}_{\rm imb}^{\vec{k}}=-\vec{u}\int_{\vec{k}}\vec{k}\cdot\vec{k}\left( \partial_{\epsilon_{+}}f_{+}^{0}-\partial_{\epsilon_{-}}f_{-}^{0}\right)\) which are not a priori related to a specific thermodynamic quantity. This implies that the Navier-Stokes equations must be derived on a case-by-case basis for different dispersions. The general strategy, however, is that one uses the expressions shown in **Table** (4) and closes the system of equations by relating momentum current to either charge or heat currents, or combinations thereof. For graphene close to the charge-neutrality point as well as in the FL regime, this procedure is presented in great details in Refs. [4, 5, 6]. At the time of writing this review, we are not aware of an explicit derivation in the case of bilayer graphene or other dispersions. ### (I1) Fermi-liquid hydrodynamics (FLH) Hydrodynamics for Fermi-liquids was first discussed by Abrikosov and Khalatnikov in the context of liquid helium (Ref. [12]) and by Gurzhi [14] in the context of electrons in solids. We focus here on the latter case which has attracted renewed interest in the last few years [7, 4, 5, 7, 14, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61]. #### 4.2.1 Boltzmann equation The starting point is again Eq. 1. In a conventional Fermi-liquid, we have \(|\mu|/T\gg 1\) and only one type of charge carrier, either electrons or holes. Without loss of generality, we henceforth concentrate on electrons and consequently drop the \(\pm\) index. The equation of interest in that case is \[\partial_{t}f+\vec{\nabla}_{\vec{k}}\epsilon(\vec{k})\partial_{r}f+\partial_{ r}e_{+}(\vec{k})\partial_{\vec{k}}f=\mathcal{C}^{ee}+\mathcal{C}^{\rm phon}+ \mathcal{C}^{\rm dis}\;. \tag{15}\] We will now discuss some of the properties of this equation. Contrary to the case of EHPH for which the essence of Coulomb scattering could be simplified down to the electron-hole drag term in the relaxation time approximation (see Eq. 5), in this case we will need to include the electron-electron scattering term and to keep track of its momentum dependence in order to ensure momentum conservation. Let us adapt the notation slightly and rewrite the Boltzmann equation as \[\partial_{t}f+{\bf v_{k}}\cdot\nabla_{\bf r}f-|e|{\bf E}\cdot\nabla_{\bf k}f= \mathcal{C}[f] \tag{16}\] where \({\bf E}\) is an external electric field, and \(\mathcal{C}=\mathcal{C}^{ee}+\mathcal{C}^{\rm phon}+\mathcal{C}^{\rm dis}\). In linear response, using \(f=f_{0}+\delta f\), the force term is approximated as \({\bf E}\cdot\nabla_{\bf k}f\simeq{\bf E}\cdot\nabla_{\bf k}f_{0}=\frac{df_{0}} {d\epsilon}{\bf E}\cdot v_{\bf k}\). This leads to the linearized Boltzmann equation \[\partial_{t}\chi+{\bf v_{k}}\cdot\nabla_{\bf r}\chi+\frac{|e|{\bf E}}{mv_{F}} \cdot{\bf v_{k}}=\mathcal{C}[\chi] \tag{17}\] where we have defined \(\chi\) by \(\delta f=\frac{df_{0}}{d\epsilon}mv_{F}\)\(\chi\) for later convenience. Note that the mass \(m\) is defined here as \(m\equiv k_{F}/v_{F}\) and thus behaves at \(\sqrt{n}\) for graphene, whereas it is a constant for a parabolic band. In the limit \(T\ll T_{F}\), one approximates \(\frac{df_{0}}{d\epsilon}\simeq-\delta(\epsilon-\epsilon_{F})\). 
This means \(\chi\) only needs to be defined at the Fermi surface. For the sake of simplicity, we use the example of a 2D circular Fermi surface in the rest of the discussion. In this case, one can parametrize the Fermi surface by an angle \(\theta\), with \({\bf k}=k_{F}(\cos(\theta),\sin(\theta))\) and \({\bf v}=v_{F}(\cos(\theta),\sin(\theta))\). It is then advantageous to decompose \(\chi\) in a Fourier series (see Fig. 6), \[\chi({\bf r},\theta)=\chi_{0}({\bf r})+\sum_{n>0}(\chi_{n,x}({\bf r})\cos(n \theta)+\chi_{n,y}({\bf r})\sin(n\theta)). \tag{18}\] Three Fourier components are of note: \(\chi_{0}\), which gives the density and is conserved due to charge conservation, and \(\chi_{1,x}\) and \(\chi_{1,y}\), which give the drift velocity \(\vec{u}\) (and thus the current \(\vec{j}=ne\vec{u}\)) along \(x\) and \(y\): \[u_{x}=\chi_{1,x},\ u_{y}=\chi_{1,y}. \tag{19}\] Note that we will use interchangeably current and momentum in this section, since they are proportional in the case of a highly degenerate Fermi-liquid (\(T\ll T_{F}\)) with a circular Fermi surface (because \({\bf v_{k}}\propto{\bf k}\) for all \({\bf k}\) at the Fermi surface). Physically, we know that a perturbation \(\chi(\theta)\) away from the Fermi-Dirac distribution will tend to decay with time due to scattering. However, certain Fourier components of \(\chi\) might decay faster than others. This is captured by writing the scattering integral as \({\cal C}[\chi]=-\sum_{n>0}\gamma_{n}\chi_{n}\), where \(\gamma_{n}\) is the decay rate of the \(n\)-th harmonic. The set of rates \(\gamma_{n}\) makes it possible to define the scattering integral of any rotationally invariant system, regardless of the microscopic source of scattering. For concreteness, let us consider a channel of finite width \(W\) along \(y\) and of infinite length along \(x\), where one applies an electric field \(\vec{E}=E\hat{x}\). In the Fourier basis, Eq. 17 takes the form of an infinite set of equations: \[\partial_{t}\chi_{1}+\frac{1}{2}v_{F}\partial_{y}(\chi_{2}) = -\gamma_{1}\ \chi_{1}+\frac{eE}{m}\] \[\partial_{t}\chi_{2}-\frac{1}{2}v_{F}\partial_{y}(\chi_{3}-\chi_ {1}) = -\gamma_{2}\ \chi_{2}\] \[\partial_{t}\chi_{3}+\frac{1}{2}v_{F}\partial_{y}(\chi_{4}-\chi_ {2}) = -\gamma_{3}\ \chi_{3}\] \[\vdots\] \[\partial_{t}\chi_{n}-\frac{(-1)^{n}}{2}v_{F}\partial_{y}(\chi_{n +1}-\chi_{n-1}) = -\gamma_{n}\ \chi_{n} \tag{20}\] where \(\chi_{n}\) is shorthand notation for \(\chi_{n,x}\) (resp. \(\chi_{n,y}\)) if \(n\) is odd (resp. even). This means the contributing Fourier components are \(\chi_{1,x},\chi_{2,y},\chi_{3,x},\dots\). Also, we dropped all the \(\partial_{x}\) terms, since we assume an infinitely long channel. Naively, one might expect that the only relaxation rate relevant to charge transport is \(\gamma_{1}\), since \(\chi_{1}\) is the mode corresponding to the charge current. This is indeed the case for a spatially uniform system for which one can neglect all \(\partial_{y}\) terms in Eq. 20. In that case, the first line of Eq. 20 is decoupled from the others and gives a closed formula for \(\chi_{1}\) from Figure 6: Examples of Fourier components \((\chi_{0},\chi_{1,x},\chi_{2,y},\chi_{3,x})\) contributing to the out-of-equilibrium distribution of electrons at the Fermi surface in a channel geometry. Pink and grey areas denote positive and negative values, respectively. which the conductivity is found to be \(\sigma=ne^{2}/m\gamma_{1}\). We thus recover the Drude formula in that case. However, in a spatially non-uniform case (e.g. 
for the finite-momentum conductivity \(\sigma(q)\) or for a sample with boundaries), the spatial gradient terms in Eq. 20 generate a coupling to higher harmonics. In that case, a knowledge of all the \(\gamma_{n>1}\) becomes crucial to understand transport properties [62]. It is thus the interplay between the spatial non-uniformity and the relaxation of higher harmonics through scattering which leads to interesting effects for Fermi-liquid hydrodynamics. This is in contrast to the electron-hole plasma of the previous section for which bulk properties are already hydrodynamic. After having motivated the importance of \(\gamma_{n>1}\), let us discuss their values. The simplest approximation is the textbook relaxation time approximation, which assumes a single relaxation rate \(\gamma_{n}=\gamma\). This assumes that an electron can be scattered anywhere on the Fermi surface with equal probability. Although this approximation is standard, it actually misses a very important piece of the physics in several important cases. Notably, electron-electron scattering only contributes to \(\gamma_{n>1}\) and not to \(\gamma_{1}\) due to momentum conservation. This had led to a two-rate model [63, 12] which separates momentum-relaxing scattering from momentum-conserving scattering: \[\gamma_{1} = \gamma_{\rm MR}\] \[\gamma_{n>1} = \gamma_{\rm MR}+\gamma_{\rm MC} \tag{21}\] where \(\gamma_{MR}\) is the momentum-relaxing scattering rate and receives contribution from impurity, phonon, and electron umklapp scattering, where as \(\gamma_{MC}\) is the momentum-conserving rate and receives contribution from electron non-umklapp scattering. When \(\gamma_{\rm MR}\ll\gamma_{\rm MC}\), one finds a separation of time scales between the slow relaxation of current and the fast relaxation of higher harmonics (\(\gamma_{1}\ll\gamma_{n>1}\)), which justifies a hydrodynamic expansion as explained below. Remarkably, strong electron-electron scattering is not the only way to realize a substantial separation between \(\gamma_{1}\) and \(\gamma_{n>1}\). For example, small-angle impurity scattering or certain types of boundary scattering can also lead to a sizable \(\gamma_{n>1}/\gamma_{1}\) ratio, leading to a "para-hydrodynamic" regime [64, 65]. We should note that even the two-rate model given above is a fairly crude approximation to electron-electron scattering close to a 2D Fermi surface. As shown in Refs. [66, 67], kinetic constraints in 2D lead to an anomalously long lifetime for all odd harmonics, which has important consequences for transport. Further, the special form of the collision integral at the charge neutrality point of graphene also leads to an anomalous kinetic theory[68]. #### 4.2.2 From Boltzmann to Navier-Stokes Let us now show how to go from the Boltzmann equation 20 to the Navier-Stokes equation, using an expansion which relies on the fast relaxation of higher harmonics. As long as we probe the system at scales much larger than \(l_{MC}=v_{F}\gamma_{MC}^{-1}\), we can do an expansion in a small parameter \(\epsilon=l_{MC}/W\), and use an ansatz of the form \(\chi_{n}\propto\epsilon^{n}\). 
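Before carrying out this expansion, it may help to look ahead at the kind of flow it produces. In the hydrodynamic limit the steady-state channel flow obeys a damped diffusion equation for the drift velocity, \(\eta u''(y)-\gamma_{\rm MR}u(y)+eE/m=0\), with no-slip walls. The short sketch below evaluates the resulting Poiseuille-like profile and the super-linear growth of the integrated flow with channel width; every parameter value is an illustrative assumption, and the viscous term is written so that it damps gradients.

```python
import numpy as np

# Look-ahead sketch of hydrodynamic channel flow: steady state of
#   eta * u''(y) - gamma_MR * u(y) + (e/m) E = 0,  u(+-W/2) = 0  (no-slip walls).
# All parameter values are illustrative assumptions.

vF = 1.0
gamma_mc, gamma_mr = 1.0, 1e-3           # gamma_2 ~ gamma_MC and gamma_1 = gamma_MR
eta = 0.25 * vF**2 / gamma_mc            # viscosity eta = vF^2 * tau_2 / 4
force = 1.0                              # stands for (e/m) E
D_G = np.sqrt(eta / gamma_mr)            # Gurzhi length

def channel_flow(W, npts=2001):
    """Width-integrated drift velocity for a channel of width W."""
    y = np.linspace(-W / 2.0, W / 2.0, npts)
    u = (force / gamma_mr) * (1.0 - np.cosh(y / D_G) / np.cosh(W / (2.0 * D_G)))
    return np.mean(u) * W                # ~ conductance per unit channel length

print(f"Gurzhi length D_G = {D_G:.1f}")
for W in [0.05, 0.1, 0.2, 0.4]:          # widths well below D_G
    print(f"W = {W:4.2f}   integrated flow = {channel_flow(W):.3e}")
# In this (Gurzhi) regime the flow grows roughly as W**3 (Poiseuille-like profile);
# for W >> D_G the same expression crosses over to Ohmic scaling, flow ~ W.
```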
To leading order in \(\epsilon\), one finds that the first two equations decouple from the rest, giving \[\partial_{t}\chi_{1}+\frac{1}{2}v_{F}\partial_{y}(\chi_{2}) = -\gamma_{1}\ \chi_{1}+\frac{eE}{m}\] \[\frac{1}{2}v_{F}\partial_{y}(\chi_{1}) = -\gamma_{2}\ \chi_{2} \tag{22}\] Based on the second equation, we can recognize \(\chi_{2}\) as a component of the viscous stress tensor, which is proportional to the spatial derivative of the flow, as in all Newtonian fluids. By plugging it into the first equation, we obtain a closed equation for the current \(\chi_{1}\): \[\partial_{t}\chi_{1}+\eta\partial_{y}^{2}(\chi_{1})\,=-\gamma_{1}\ \chi_{1}+ \tfrac{e}{m}E \tag{23}\] where the viscosity \(\eta=\tfrac{1}{4}v_{F}^{2}\tau_{2}\) is proportional to the relaxation time of the second harmonic, \(\tau_{2}=1/\gamma_{2}\). Finally, by using \(u_{x}=\chi_{1}\) and generalizing the previous analysis to a two-dimensional flow \(\vec{u}\), we find: \[\partial_{t}\vec{u}+\eta\nabla^{2}\vec{u}=-\gamma_{1}\ \vec{u}+\frac{e}{m}\vec{E} \tag{24}\] which is almost the standard Navier-Stokes equation except for the addition of a momentum-relaxing term \(-\gamma_{1}\vec{u}\). The viscous term dominates over the momentum-relaxing one if the channel width \(W\) is much smaller than the Gurzhi length: \(W\ll\sqrt{l\ l_{MR}}\). When that is the case, the resistivity is proportional to the viscosity: \(\rho\propto\eta/W^{2}\propto\tau_{2}\). (14) By "integrating out" the fast microscopic dynamics of the non-conserved higher harmonics \(\chi_{n>1}\), we have obtained a closed equation for the slow dynamics of the current \(\chi_{1}\). The only remaining information about the higher harmonics in this equation is \(\tau_{2}\), which enters the viscosity. In theory, one could now simply solve Navier-Stokes with the appropriate geometry to study experimental systems, and forget about Boltzmann altogether. In practice, this is not always justified, since experimental setups do not always have such a large separation of scale between \(W\) and \(l_{MC}\). It is therefore sometimes necessary to solve the full Boltzmann equation. Further, the question of appropriate boundary conditions for the velocity field also requires going back to a kinetic theory [69, 70]. ### (iii) Fermi-liquid-phonon hydrodynamics (FLPH) The phenomenon of electron-phonon drag is in principle not restricted to the situation with only electrons and phonons, there could also be a mixture of electrons, holes, and phonons coupled together. In view of the experimental situation, however, we concentrate on electrons and phonons here, the other situation could, however, easily be described within the outlined framework. This case of hydrodynamic behavior, FLPH, goes against the standard lore of hydrodynamics in electronic solid-state systems: In conventional metals, phonons prevent hydrodynamic behavior. They usually invalidate momentum and energy conservation because they have faster relaxation mechanisms within their subsystem, see discussion below. There is, however, a way out and phonons can act as facilitators that conspire with the electrons and make up one perfectly drag-coupled fluid. The conditions for that are the subject of this section. There have been relatively recent review-style articles on the theoretical status of this scenario in Refs. [7, 8]. We keep our discussion more superficial and concentrate on a couple of, in our view, important aspects. #### 4.3.1 Theoretical description As usual, we start from Eq. 1.. 
In a system with electrons and phonons, we have to consider two coupled Boltzmann equations, one for the electrons and one for the phonons (note that we again drop the subscript without loss of generality): \[\partial_{t}f+\vec{\nabla}_{\vec{k}}\epsilon(\vec{k})\partial_{ \vec{r}}f+\partial_{\vec{r}}\epsilon(\vec{k})\partial_{\vec{k}}f = \mathcal{C}^{ee}+\mathcal{C}^{\text{phon}}_{\text{$\mathrm{c}$}}+ \mathcal{C}^{\text{dis}}\] \[\partial_{t}b+\vec{\nabla}_{\vec{k}}\omega(\vec{k})\partial_{ \vec{r}}b+\partial_{\vec{r}}\omega(\vec{k})\partial_{\vec{k}}b = \mathcal{C}^{\text{int}}_{\text{phon}}+\mathcal{C}^{\text{e}}_{ \text{phon}}\;. \tag{25}\] The scattering terms in the electron sector fall into three classes: \(\mathcal{C}^{ee}\) corresponds to electron-electron scattering that conserves all quantities. \(\mathcal{C}^{\text{phon}}_{\text{$\mathrm{c}$}}\) corresponds to electron-phonon scattering and can transfer momentum and energy between electrons and phonons. Finally, there is \(\mathcal{C}^{\text{dis}}\) which relaxes momentum due to disorder breaking translational symmetry. The scattering terms for phonons fall into two classes: there is scattering between phonons and electrons, called \(\mathcal{C}^{\text{e}}_{\text{phon}}\) which is the counterpart to \(\mathcal{C}^{\text{phon}}_{\text{$\mathrm{c}$}}\) and plays the same role. There is also scattering within the phonon system itself, \(\mathcal{C}^{\text{internal}}\). The latter comprises a number of different effects: non-linear terms in the phonon sector, phonon-disorder coupling, and Umklapp scattering. In the conventional picture of the electron-phonon problem (see Ref. [13]), \(\mathcal{C}^{\text{int}}_{\text{phon}}\gg\mathcal{C}^{\text{e}}_{\text{phon}}\). This means the following: phonons relax very quickly on the time scale of electronic processes. Consequently, the phonon system effectively decouples from the electrons and relaxes on its own time scale, usually a lot faster than electronic time scales. This implies that one can treat the phonons as effectively equilibrated in the Boltzmann equation of the electrons. As a consequence, the electronic system has no momentum and no energy conservation since it can dissipate both in the phonon subsystem. Consequently, the electrons will not show hydrodynamic behavior and the transport characteristics are diffusive. From the point of view of hydrodynamics, there is another very interesting limit: \(\mathcal{C}^{\text{int}}_{\text{phon}}\ll\mathcal{C}^{\text{e}}_{\text{phon}}\) meaning electron-phonon scattering together with electron-electron interaction determines the equilibration of the combined system. In the clean limit, \(\mathcal{C}^{\text{dis}}=0\), the combined system of electrons and phonons conserves charge, total momentum, and total energy. Hence, the behavior can be expected to be hydrodynamic. Pictorially speaking, the combined system of electrons and phonons is drag-locked into one fluid, see sketch in Fig. 3. #### 4.3.2 Signatures of Flph We again concentrate here on bulk transport properties, especially thermoelectric transport since this gives a rather direct signal. ##### 4.3.2.1 Bulk signatures The bulk signatures are again best seen in thermoelectric measurements that measure both electric and heat conductivity. There is an intricate competition of effects that comes to life in that situation. First of all, an electric field only couples to the electrons and not the phonons. 
The electric current is entirely carried by electrons, although phonons are drag-coupled and also move (this is quite similar to the effect of Coulomb drag in double-layer setups). The heat current, on the other hand, is carried by both the electrons and the phonons. Explicitly, we find that the charge current \(\vec{j}_{c}\) and the heat current \(\vec{j}_{Q}\) are given by \[\vec{j}_{c}=-e\int_{\vec{k}}\partial_{\vec{k}}\epsilon\,\delta f\quad\text{and}\quad\vec{j}_{Q}=\int_{\vec{k}}\partial_{\vec{k}}\epsilon\left(\epsilon-\mu\right)\delta f+\int_{\vec{k}}\partial_{\vec{k}}\omega\,\omega\,\delta b\;. \tag{26}\] It is important to note that in a Fermi-liquid, the charge current is proportional to the momentum current, see discussion in Sec. 4.2. This implies that an electric field not only excites an electrical current, it immediately excites momentum. The electron-electron interaction, encoded in \(\mathcal{C}^{ee}\), is consequently ineffective in relaxing the electrical current. However, electrons can transfer momentum to phonons in the above setup. The phonons themselves cannot dissipate it and all they can do is give it back to the electrons before the cycle repeats. This implies that in the absence of disorder the electrical conductivity is infinite. For a finite electrical conductivity in a Fermi-liquid with perfect phonon drag, one thus needs disorder to relax momentum. To summarize this, the electrical conductivity is expected to be of the form \(\sigma=\sigma_{0}\tau_{e}^{\rm dis}\) where \(\sigma_{0}\) is a prefactor. The thermal conductivity is more complicated than that. The reason is that the thermal current of the electrons is not conserved in inelastic collisions as well as in electron-phonon collisions. The total inverse scattering time of the electronic heat current consists of three terms: \(1/\tau_{e}^{\rm eff}=1/\tau_{e}^{\rm dis}+1/\tau_{e}^{ee}+1/\tau_{e}^{\rm phon}\). There is another direct contribution of the phonons to the heat current. Through drag effects, the phonons experience the effects of disorder and electron-electron interaction, meaning the effective scattering time reads \(1/\tau_{\rm phon}^{\rm eff}=1/\tau_{\rm phon}^{\rm dis}+1/\tau_{\rm phon}^{ee}+1/\tau_{\rm phon}^{\rm phon}\). In total this implies that the Wiedemann-Franz ratio reads \[L=\frac{\kappa_{0e}\left(\frac{1}{\tau_{e}^{\rm dis}}+\frac{1}{\tau_{e}^{ee}}+\frac{1}{\tau_{e}^{\rm phon}}\right)^{-1}+\kappa_{0\rm phon}\left(\frac{1}{\tau_{\rm phon}^{\rm dis}}+\frac{1}{\tau_{\rm phon}^{ee}}+\frac{1}{\tau_{\rm phon}^{\rm phon}}\right)^{-1}}{T\sigma_{0}\tau_{e}^{\rm dis}}\;, \tag{27}\] where we defined \(\kappa_{0e}\) and \(\sigma_{0}\) such that \(L_{0}=\kappa_{0e}/(T\sigma_{0})\) is the Lorenz number. In the very clean limit, this is given by \[L\approx\frac{\kappa_{0e}\left(\frac{1}{\tau_{e}^{ee}}+\frac{1}{\tau_{e}^{\rm phon}}\right)^{-1}+\kappa_{0\rm phon}\left(\frac{1}{\tau_{\rm phon}^{ee}}+\frac{1}{\tau_{\rm phon}^{\rm phon}}\right)^{-1}}{T\sigma_{0}\tau_{e}^{\rm dis}}. \tag{28}\] While the exact value of \(L\) obviously depends on all kinds of details, one must expect that the Wiedemann-Franz law is, possibly strongly, violated. The two main reasons are: (1) the heat current undergoes relaxational processes differently from the charge current and (2) there is an additional, potentially very big, direct contribution to the heat conductivity coming from the phonons.
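A minimal numerical illustration of these trends is given below. It evaluates the ratio \(L/L_{0}\) following the structure of Eq. 27 for a few assumed combinations of scattering times and of the phonon-to-electron heat-conduction ratio; all numbers are toy values chosen only to expose the qualitative behavior.

```python
# Toy evaluation of a Wiedemann-Franz ratio with the structure of Eq. 27.
# All scattering times and the heat-conduction ratio are assumed, dimensionless
# illustration values -- only the trends are meaningful.

def lorenz_ratio(tau_e_dis, tau_e_ee, tau_e_ph,
                 tau_ph_dis, tau_ph_ee, tau_ph_ph,
                 kappa_ratio):
    """L / L_0 with L_0 = kappa0_e / (T sigma0); kappa_ratio = kappa0_phon / kappa0_e."""
    tau_e_eff = 1.0 / (1.0 / tau_e_dis + 1.0 / tau_e_ee + 1.0 / tau_e_ph)
    tau_ph_eff = 1.0 / (1.0 / tau_ph_dis + 1.0 / tau_ph_ee + 1.0 / tau_ph_ph)
    return (tau_e_eff + kappa_ratio * tau_ph_eff) / tau_e_dis

big = 1e9   # stands in for a negligible ("switched off") scattering channel

# (i) dirty metal: disorder dominates, no phonon heat channel -> L ~ L_0 (WF law)
print(lorenz_ratio(1.0, big, big, 1.0, big, big, kappa_ratio=0.0))

# (ii) clean, interaction-dominated electron fluid: inelastic scattering degrades
#      the electronic heat current -> L well below L_0
print(lorenz_ratio(100.0, 1.0, 1.0, 100.0, 1.0, big, kappa_ratio=0.0))

# (iii) drag-coupled phonons carrying a large share of the heat -> L can exceed L_0
print(lorenz_ratio(100.0, 1.0, 1.0, 100.0, 1.0, big, kappa_ratio=500.0))
```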
It is important to point out that the discussion about the violation of the Wiedemann-Franz law also applies in the case of FLH, Sec. 4.2. In that case the Wiedemann-Franz ratio reads \[L=\frac{\kappa_{0e}\left(\frac{1}{\tau_{e}^{\rm dis}}+\frac{1}{\tau_{e}^{ee}} \right)^{-1}}{T\sigma_{0}\tau_{e}^{\rm dis}}. \tag{29}\] Importantly, this implies that \(L<L_{0}\), as discussed in Refs. [71, 72]. #### 4.3.2 Boundary signatures In the picture of a strongly coupled electron-phonon fluid, it is obvious that all the quantities involving energy and momentum are heavily influenced by the presence of the phonons. Consequently, the phonons will make a direct contribution to the viscosity, and thus all quantities that are sensitive to viscosity will be different from a Fermi-liquid, even if the Fermi-liquid is hydrodynamic.6. Footnote 6: All the mentioned properties of FLPH can also be observed in a scenario of electrons coupled to their own internal collective modes, see Refs.[73, 74, 75] _www.annualreviews.org_\(\bullet\)_Hydrodynamic phenomena in electronic transport_ ## 5 Experiments In the following discussion we will first discuss the conditions for interactions, disorder, and phonons in some of the relevant systems. Eventually, we discuss the key experiments in the respective groups. ### Favorable conditions From the discussions in Sec. 3 and the ensuing ones in Secs. 4.1-4.3 it has become obvious that one key requirement for the experimental observation of solid state hydrodynamics is to have a very clean system (this is important in all three scenarios (I)-(III)). Furthermore, the electrons should be strongly interacting. We will find that this is easier to achieve in scenario (I) than in scenarios (II) and (III). Finally, there is the question about phonons. While in scenarios (I) and (II) their absence is required, in scenario (III) they are explicitly part of the fluid. #### 5.1.1 Coulomb interaction In all the scenarios, Coulomb interactions are vital to achieve hydrodynamic behavior. The Coulomb interaction is fundamental to all charged fermionic systems. Most importantly, overall it conserves total charge, total momentum, and total energy. Nevertheless, it leads to relaxation of the underlying electronic system. The associated relaxation time, \(\tau\), can be very large in Fermi liquids. The reason for that is found in phase space arguments that make relaxation through interactions inefficient, see Ref. [13] or similar textbooks. Without going through the details of the derivation, usually one has \(\tau\propto T_{F}/T^{2}\) where \(T_{F}\) is the Fermi temperature. This quantity is huge in typical metals, often on the order \(1-4\times 10^{4}\) kelvin. However, low-density metals can have a much lower \(T_{F}\), which makes them very attractive. Two of the frontrunners are mono- and bilayer graphene. In those systems, the filling and consequently \(T_{F}\) can be controlled with great precision. It can even be tuned to zero. In that situation temperature itself takes over the role of \(T_{F}\) since it controls the number of thermally excited charge carriers. In that case the scattering time behaves according to \(\tau\propto 1/T\). This allows to realize the Planckian limit which is the limit in which relaxation is entirely determined by temperature itself [3, 76]. We will see that also in the other systems that are currently investigated, the electron density is usually quite low (except for a few cases like PdCoO\({}_{2}\)) which allows to boost the role of Coulomb scattering. 
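To get a feeling for the numbers, the short sketch below evaluates the textbook phase-space estimate \(\tau\propto T_F/T^2\) quoted above for a conventional metal and for a low-density system. The Fermi temperatures are round illustrative values and numerical prefactors of order one are ignored.

```python
# Order-of-magnitude comparison of the Fermi-liquid e-e scattering time,
#   1/tau ~ (k_B T)^2 / (hbar k_B T_F),
# for a conventional metal versus a low-density system such as doped graphene.
# The T_F values are assumed round numbers for illustration only.

hbar = 1.054571817e-34   # J s
k_B = 1.380649e-23       # J / K

def tau_ee(T, T_F):
    return hbar * T_F / (k_B * T**2)

T = 100.0                                             # kelvin
for label, T_F in [("conventional metal", 2e4), ("low-density graphene", 2e2)]:
    print(f"{label:22s} T_F = {T_F:7.0f} K  ->  tau_ee ~ {tau_ee(T, T_F)*1e12:7.3f} ps")
# The lower T_F, the shorter tau_ee, i.e. the stronger the role of Coulomb collisions.
```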
Additionally, it is beneficial to consider two-dimensional systems which also increases the relative interaction strength when compared to kinetic energy. #### 5.1.2 The role of disorder Disorder is the number one enemy of hydrodynamic behavior. Usually, disorder provides the shortest relaxation/scattering time in metals and the only way to increase that is to make the system ever cleaner. There has been tremendous progress in that regard. Especially remarkable is the progress in the field of graphene based devices. Throughout the last six years, it was possible to fabricate encapsulated devices in which mono- or bilayer (or even twisted versions of it) are sandwiched in-between boron-nitrid. This leads to very clean devices in which the effects of disorder scattering can be subdominant to for instance Coulomb scattering, especially if the temperature exceeds \(\approx 80\) kelvin. #### 5.1.3 The role of phonons Lastly, we have to worry about phonons. As explained in detail before, they play a special role in this review. There are two scenarios. One in which phonons equilibrate within their own subsystem and their relaxation is decoupled. In that situation, they are in principle detrimental to hydrodynamic behavior. In the case of EHPH and FLH, phonons should be absent. Again, graphene has very favorable properties. Among other things, graphene and bilayer graphene have gained fame for their structural properties. They are very stiff which implies that phonons become important at relatively elevated temperatures around \(T=100\) kelvin [16]. In the scenario of FLPH, phonons fail to relax within their own subsystem and they build a fluid in which the electrons and phonons are locked into one fluid, a scenario which has for instance been discussed in antimony and PtSn\({}_{4}\). To summarize this discussion, graphene takes a leading role, mostly due to three properties. (1) \(T_{F}\) can be made very small; (2) Disorder levels can be suppressed and (3) the onset of phonon scattering is above 100 kelvin. Nevertheless, there are also other, possibly three dimensional material systems and we will discuss some of the key experiments below. Taken together, this leads to an unusual situation. In condensed matter systems, the most spectacular things usually happen upon cooling down the system. Here, the most interesting transport window opens up at elevated temperatures. It has by now been demonstrated by several groups that the hydrodynamic window sits firmly between 10 and 100 kelvin [77, 78]. This statement applies to both mono- and bilayer graphene. ### (I) Electron hole plasma hydrodynamics Over the last decade, mono- and bilayer graphene have taken a leading role in the quest for the observation of hydrodynamic electronic flow phenomena of the EHPH type. There have been a number of measurements in which the hydrodynamics of the electron-hole-plasma was probed. #### 5.2.1 The hydrodynamic conductivity We discussed in Sec. 4.1 that systems of the EHPH type have a finite electric conductivity even in the perfect system. This is due to the electron-hole drag from Coulomb interactions that enables a current relaxation process. This is a very direct manifestation of hydrodynamics since the hydrodynamic relaxation time is directly related to a measureable bulk quantity. That way it allows a direct measurement of the Planckian dissipation time \(\tau\propto 1/T\) (Ref. [3]), a quantity that has received a lot of attention in the context of non-Fermi-liquids in recent years (Ref. [76]). 
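As a quick point of reference, the sketch below evaluates the Planckian time scale \(\hbar/(k_B T)\) in the temperature window in which the hydrodynamic regime is typically reported; the chosen temperatures are just examples.

```python
# Planckian dissipation time tau_P = hbar / (k_B T) at a few example temperatures.
hbar = 1.054571817e-34   # J s
k_B = 1.380649e-23       # J / K

for T in [10, 50, 100, 300]:   # kelvin
    tau_P = hbar / (k_B * T)
    print(f"T = {T:3d} K  ->  tau_P = {tau_P * 1e15:6.1f} fs")
```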
#### 5.2.1.1 Monolayer graphene In the case of monolayer graphene, the strength of Coulomb scattering is controlled by a dimensionless coupling constant. In analogy with the fine structure constant in QED, it is denoted \(\alpha=e^{2}/(4\pi\varepsilon v_{F})\). In QED, this is a small number, but not in graphene. There are two reasons for that: the Fermi velocity \(v_{F}\) is much smaller than the speed of light, _i.e._, \(v_{F}/c\approx 1/300\), and the dielectric constant \(\varepsilon\) depends on the substrate. For practical purposes, this leads to a value of \(\alpha=0.2-2\) (2 is the extreme case of suspended samples), depending on the details of the setup. The resulting drag scattering rate \(1/\tau\) at the Dirac point is found to be of the general form \(1/\tau=C\alpha^{2}T\) where \(C={\cal O}(1)\)[79, 80, 81, 82, 83, 84, 85]. The constant \(C\) can be calculated from either a solution of the Boltzmann equation or the Kubo formula. Contrary to a standard metal, it is not suppressed by a factor \(T/T_{F}\), where \(T_{F}\) is the Fermi temperature. Gallagher _et al._ managed to measure the critical conductivity in monolayer graphene in an optical conductivity measurement and obtained remarkable agreement with the theory results of \(\sigma\approx 0.7e^{2}/(\alpha^{2}h)\)[86]. #### 5.2.1.2 Bilayer graphene In bilayer graphene in Bernal stacking, the situation is different. This is rooted in the fact that bilayer graphene has a finite density of states at charge neutrality. As a consequence, there is temperature independent Thomas-Fermi screening in the limit of low momenta and the strength of Coulomb interaction does not depend on the fine structure constant. Contrary to the monolayer case, this leads to a temperature independent universal conductivity at the charge neutrality point [87, 88, 89, 70]. Universal in this context means that it is independent of details of the sample. To our knowledge, the first direct measurement of the interaction dominated conductivity which matched theory predictions was done in the context of encapsulated bilayer graphene [91]. The result was a temperature independent interaction limited conductivity of \(\sigma\approx 20e^{2}/h\) which compared favorably to theory predictions from 2013 [87] (see also [88, 89, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 88, 90]). This was confirmed in a 2019 paper by Tan _et al._. In that paper, it was furthermore shown, that the plasma type physics can even be observed if the bilayer is gapped through the application of an electric field, provided the temperature is large enough. In parallel, This measurement also allowed to deduce the dielectric constant of the BN-C-BN structure, again in good agreement with theory. This has been measured in two independent experiments which are in excellent agreement and also observed this universality across a number of samples [78]. As expected from theory, the findings are consistent with a Planckian relaxation time \(\tau\propto\hbar/(k_{B}T)\) which is entirely determined by the constants of nature, \(\hbar\) and \(k_{B}\)[3]. #### 5.2.2 Thermoelectric response A unique signature of the thermoelectric response of electron-hole plasmas can be found in a maximal violation of the Wiedemann-Franz law, as discussed in Sec. 4.1. In 2016, Crossno _et al._ managed to measure the thermoelectric response of monolayer graphene [40, 6] at and in the vicinity of the charge neutrality point. 
One of the key findings was a strong violation of the Wiedemann-Franz law which was later fit with a two-fluid hydrodynamic theory. In order to find quantitative agreement a careful modelling of the electron-hole puddle disorder structure was required. Nevertheless, this was a strong indication, that the hydrodynamic regime was indeed reached (however, see Ref. [92]). Since then, a new series of thermoelectric transport measurement in monolayer and bilayer graphene are under way [93]. ### (Ii) Fermi-liquid hydrodynamics (FLH) In contrast to the bulk hydrodynamics observed for the electron-hole fluid, one needs to consider finite-size samples to see hydrodynamic effects in transport in the Fermi-liquid regime. Indeed, since the viscous term in the Navier-Stokes equation (see Eq. 24) is given by \(\eta\nabla^{2}\vec{v}\), viscosity can only contribute to the resistance when the flow is non-uniform in space, and will reveal itself through size-dependent contributions to transport properties. The non-local relation between electric field and current means that the sample geometry also has a drastic impact on transport properties. We will therefore classify experiments according to their geometry. There is one general caveat though, which is that transport in the ballistic regime (for which all mean free paths are much longer than the sample size) also leads to a non-local relationship between current and electric-field. In fact, in both regimes, momentum loss and thus resistance occurs due to boundary scattering. Hydrodynamics however distinguishes itself by the fact that momentum needs to diffuse through the bulk (which happens microscopically due to frequent momentum-conserving collisions) before reaching the boundary where it can be relaxed. A challenge for most experiments has therefore been to clearly distinguish between ballistics and hydrodynamics. #### 5.3.1 Channel The channel geometry is probably the simplest one and was used in a series of experiments [50, 53, 55, 94, 41, 50, 95]. In this geometry, one can either measure how the conductance \(G\) scales with the width \(W\) of the channel (with \(G\sim W^{3}\) in the hydrodynamic regime), or use local probes to visualize the Poiseuille flow. Size-dependent effects in the (negative) magnetoresistance and in the Hall effect also allow in principle a measurement of the shear and Hall viscosities, respectively [30, 44]. The profile of the Hall electric field across the channel is actually a very useful signature which can discriminate between ballistic and hydrodynamic [56], and which was measured with a local voltage probe (single-electron transistor) [53]. #### 5.3.2 Widening In the hydrodynamic regime, injecting current through a narrow aperture into a wide chamber can lead to vorticity laterally from the current. This vorticity can be measured as a non-local negative resistance[39, 32, 96]. A similar geometry was also used to measure the Hall viscosity of graphene [52]. More recently, the emergence of vorticity was also directly imaged with a scanning SQUID, with the appearance of multiple vortices in a circular chamber placed laterally to the flow given as a unique signature of hydrodynamics [64]. One can even consider more complicated geometries for which a channel forks into several subchannels forming a non-simply connected geometry, like the Tesla valve studied in Ref. [97]. #### 5.3.3 Construction A striking hydrodynamic effect is the appearance of superballistic conductance for the flow through constrictions [57, 43]. 
For a constriction of length \(L\) and width \(W\), one can distinguish different regimes depending on the scaling of the resistance \(R\) with \(L\) and \(W\). In the presence of strong momentum relaxation, the Ohmic regime is of course given by \(R\propto L/W\). In the absence of any scattering, one finds the Landauer-Sharvin resistance: \(R\propto 1/W\), since the number of conducting channels is proportional to \(W\). Remarkably, adding strong electron-electron scattering to the latter case leads to a hydrodynamic regime for which the resistance is inversely proportional to the constriction length: \(R_{hydro}\propto l_{ee}/LW(98)\). Superballistic flows were first studied in the case of sharp constrictions (\(L\to 0\)), for which the resistance goes like \(R\sim l_{ee}/W^{2}(42,43)\)(see also related earlier work in Refs [99, 100, 101]). These formulas for the resistance have a simple explanation, since in the hydrodynamic regime, the resistance comes from the viscous term \(\eta\nabla^{2}\vec{v}\). Whereas the viscosity always leads to a factor of \(l_{ee}\), \(\nabla^{2}\) leads to a factor of \(1/LW\) or \(1/W^{2}\) for a smooth or sharp constriction, respectively. #### 5.3.4 Corbino The Corbino (or annular) geometry leads to remarkable manifestations of the hydrodynamic regime of transport. For currents flowing radially, a Corbino device has effectively no edges (i.e. no boundaries lateral to the flow), which means the usual Poiseuille profile due to non-slip boundary conditions is absent, and the bulk resistance in the hydrodynamic regime is actually zero [54]. However, the total resistance is actually non-zero due to a voltage drop localized at the contacts which gives a contribution of the type \(R\propto\eta/r_{min}^{2}\), where \(r_{min}\) is the smaller radius of the annulus. The Corbino geometry makes the physical origin of superballistic flows particularly transparent. In this geometry, the Landauer-Sharvin resistance is delocalized into the bulk, since the number of channels decreases gradually with the radial coordinates when going from the outer to the inner contact. In the ballistic regime, only transmitted channels carry current, whereas in the hydrodynamic regime, strong scattering between electrons makes it possible for electrons to hop from reflected to transmitted channels, leading to an increased conductance [98, 102]. Additionally, the annular geometry leads to unique effects in thermoelectric [103] as well as non-linear [104] transport. #### 5.3.5 Skin effect It is possible to probe the non-local conductivity at varying length scales without varying the size of the device by studying AC currents, for which the current density is localized within a frequency-dependent skin depth of the sample edges. Ohmic, ballistic, and hydrodynamic regimes exist for the skin effect, with differing power laws for the surface resistance dependence on frequency. A study of these various regimes in PdCoO\({}_{2}\) was recently reported in Ref. [105]. #### 5.3.6 Non-linear transport Although there already exists theoretical work on the prospect of reaching higher Reynolds number flows and the instabilities which can result [106, 107, 108, 109, 110, 111], only few experiments have studied this non-linear regime so far. An early example was given in GaAs [24], in which Ohmic heating due a large DC current was used to increase the electronic temperature without increasing the lattice temperature, thereby taking the system deeper in the hydrodynamic regime (\(l_{ee}\ll l_{phon}\)). 
However, a more recent experiment in graphene showed that electron-phonon coupling can actually become dominant for non-linear transport, leading to a "phonon Cerenkov" instability which creates a striking exponential dependence of the resistivity along the current direction [112]. This electron-phonon instability shows that the physics of non-linear electron hydrodynamics is even richer than previously thought, and certainly deserves further study in the years to come. #### 5.3.7 Crossover between FLH and EHPH A few experiments have also studied the crossover between the Fermi-liquid and electron-hole plasma regimes, by measuring either channel flow [55] or the Wiedemann-Franz ratio [40]. ### (III) Fermi-liquid-phonon hydrodynamics As we mentioned before, the FLHP scenario has a very peculiar setup in which electrons and phonons "lock" into one fluid. The main proponents of this unusual type of hydrodynamics are currently Sb [113, 114], PtSn\({}_{4}\)[115], and WP\({}_{2}\)[116]. This scenario is complicated to prove or rule out in experiments and requires a careful separation of phonon and electron contributions. One theoretical prediction is a strong temperature dependence of the viscosity. Another important feature shows up in thermoelectric measurements together with thermodynamic measurements. According to theory, a hallmark experimental observation that is consistent with an electron-phonon scenario is to see a reduction of the Wiedemann-Franz ratio \(L\) below \(L_{0}\), see the discussion in Sec. 4.3. However, this requires a good knowledge of the phonon contribution to the thermal current. ## 6 Conclusion and outlook Recent years have seen spectacular progress in the study of the hydrodynamic regime in electronic solid state systems. Monolayer and bilayer graphene have taken a leading role but many other systems have emerged. Within this article, we concentrate on three different scenarios of hydrodynamics that are currently most discussed. Electron-hole-plasmas, Fermi-liquids, as well as drag-coupled Fermi-liquid-phonon systems. We base the technical parts of the discussion on a phenomenological Boltzmann equation. We devise an easy-to-follow recipe which allows the derivation of the thermoelectric response as well as the Navier-Stokes equation for a relatively generic setup that can accommodate electrons and holes as well as phonons (and in principle other collective modes). The central ingredients in this setup are global conservation laws and drag which transfers momentum and energy between the individual components of the fluid. In the first example, we discussed the electron-hole plasma that is relevant for Dirac-type systems, such as graphene, bilayer graphene, and also Weyl system. One of the main findings is that such systems have finite bulk electric conductivity at charge neutrality, even in the clean limit which is directly related to the relaxation time of hydrodynamic processes, the Planckian time. The second example was that of a strongly coupled electron-phonon system. We also discuss the more conventional Fermi-liquid type hydrodynamics that can best be observed in systems with restricted geometries since they are sensitive to the viscosity. In the future, it will be interesting to see to which extent experiments can make smoking gun observations in the respective systems. Ideally, one would like to see real space images of turbulences or other non-linear effects that cannot be explained with more conventional transport theories. Summary Points 1. 
A number of recent experiments find indications for hydrodynamic flow in electronic systems.
2. When comparisons are possible, the agreement between theory and experiment is quite convincing.
3. Monolayer and bilayer graphene are the prime candidates for the observation of hydrodynamic flow phenomena, but many new materials are joining the list.
4. Contrary to common lore, lattice degrees of freedom can help to reach the hydrodynamic electronic limit.

Future Issues

1. Can one create and detect turbulence or other nonlinear effects, maybe using novel material systems?
2. What are smoking-gun signatures of hydrodynamic behavior?
3. Are there interesting novel effects in the crossover between the Fermi-liquid and electron-hole plasma regimes?

## Disclosure Statement

The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review.

## Acknowledgments

We would like to thank numerous collaborators that accompanied and guided us over the years. In no particular order, LF acknowledges Subir Sachdev, Markus Muller, Jorg Schmalian, Jonathan Lux, Simonas Grubinskas, Kitinan Pongsangangan, Sean Hartnoll, Achim Rosch, Dirk Schuricht, Michael Schutt, Henk Stoof, Matthias Vojta, and Jonah Waissman. This work is part of the D-ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). TS acknowledges Andrew Mackenzie, Shahal Ilani and Ady Stern for collaboration on related projects, and the financial support of NSERC, in particular the Discovery Grant [RGPIN-2020-05842], the Accelerator Supplement [RGPAS-2020-00060], and the Discovery Launch Supplement [DGECR-2020-00222].
2309.01135
Face-on Map of the Molecular Disc and 3-kpc Expanding Ring of the Galaxy based on a High-Accuracy Rotation Curve
We analyze the longitude-velocity diagram (LVD) of the CO-line emission from archival data and use the most accurate rotation curve (RC) of the Milky Way to transform radial velocity to face-on position in the Galactic plane. We point out that the face-on transformation is highly sensitive to the adopted RC, especially in the inner Milky Way, in the sense that deviations of the RC from the true rotation velocity yield an artifact hole or overcrowded concentration along the tangent circle for over- or under-estimated RC. Even if the RC is sufficiently accurate, non-circular motion such as with the 3 kpc expanding ring causes significant artifacts in the resulting face-on-map as long as a circular rotation is assumed. On the other hand, if we properly take into account the non-circular motion, it can be used to solve the near-far degeneracy problem of determination of kinematic distance. We thus propose a new method to solve the degeneracy by incorporating the expanding motion of a ring or arms. We apply the method to the LVD of the 3-kpc expanding ring and present its face-on map projected onto the galactic plane for the first time.
Yoshiaki Sofue
2023-09-03T11:03:04Z
http://arxiv.org/abs/2309.01135v1
Face-on Map of the Molecular Disc and 3-kpc Expanding Ring of the Galaxy based on a High-Accuracy Rotation Curve ###### Abstract We analyze the longitude-velocity diagram (LVD) of \({}^{12}\)CO line emission from archival data and use the most accurate rotation curve (RC) of the Milky Way to transform radial velocity to face-on position in the galactic plane. We point out that the face-on transformation is highly sensitive to the adopted RC, especially in the inner Milky Way, in the sense that deviations of the RC from the true rotation velocity lead either to an artifact hole or overcrowded concentration along the tangent circle for over- or under-estimated RC, respectively. Even if the RC is sufficiently accurate, non-circular motion such as with the 3 kpc expanding ring introduces significant artifacts in the resulting face-on-map, as long as a circular rotation is assumed. On the other hand, if we properly take into account the non-circular motion, it can be used to solve the near-far degeneracy problem of determination of kinematic distance. We thus propose a new method to solve the degeneracy by incorporating the expanding motion of a ring or arms. We apply the method to the LVD of the 3-kpc expanding ring and present its face-on map projected onto the galactic plane for the first time. keywords: ISM: molecules -- Galaxy: centre -- Galaxy: disc -- Galaxy: kinematics and dynamics -- radio lines: ISM ## 1 Introduction The 3-kpc expanding ring of interstellar gas in the Milky Way is recognized as a tilted elliptical feature in the longitude-velocity diagram (LVD) of the HI and CO line emissions in the Galactic plane (Shane, 1972; Sanders & Prendergast, 1974; Oort, 1977; Dame & Thaddeus, 2008). The oval LV feature has been attributed to arm or ring structure formed either by an expanding motion of a shock front produced by an explosive event in the Galactic Center (Sanders & Prendergast, 1974; Sofue, 1977) or by non-circular motion of gas in an oval potential of a stellar bar (Fux, 1999). Due to the highly non-circular velocities, the kinematic method to determine the distance by transformation of the radial velocity cannot be applied to this particular structure. Since the pioneering work by Oort et al. (1958), there have been extensive studies for mapping the face-on structure of the Milky Way using the radial velocities of the HI and molecular gases on the assumption of circular rotation (hereafter, "face-on transformation, or POT") (Burton & Lintel Hekkert, 1986; Nakanishi & Sofue, 2003, 2006; Kalberla, 2007; Nakanishi & Sofue, 2016; Sofue & Nakanishi, 2016). Because these works have aimed at mapping the global spiral structure in the entire Galaxy, the inner arm structures have not necessarily been resolved well, although the molecular map can trace the 4-kpc molecular ring (Nakanishi & Sofue, 2016; Sofue & Nakanishi, 2016). Recently, Fujita et al. (2023) have analyzed the molecular gas distribution in the inner Galaxy between \(l=10^{\circ}\) and \(62^{\circ}\) using the high-resolution \({}^{12}\)CO -line data from FUGIN (Four-receiver system Unbiased Galactic plane Imaging survey with the Nobeyama 45-m telescope)(Umemoto et al., 2017). By applying a machine-learning near-far distinguishing method of molecular clouds, they have obtained a high-resolution face-on map of the molecular gas using FOT for a flat circular rotation. They have identified several inner spiral arms, and noticed an artificial hole in the central region. 
Besides the identified arms, an arm-like structure is evident in their face-on map, which apparently extends from the Scutum arm, crosses the far-side solar circle at \(l\sim 12^{\circ}\), and extends beyond the circle, composing a massive "leading" spiral structure. In this paper, we seek the reason why such irregular structures often appear in the current face-on maps of the Milky Way by considering the non-circular motion associated with the 3-kpc expanding arm. We further derive a face-on distribution of the molecular gas in the Galactic plane by analyzing the archival \({}^{12}\)CO line data.

## 2 Longitude-velocity Diagrams

We make use of the archival \({}^{12}\)CO line data cubes from the Columbia and FUGIN surveys. The radial velocity \(V_{\rm lsr}\) used in this paper is the LSR (local standard of rest) velocity. The LVD of the entire disc is taken from the Columbia survey (Dame et al., 2001), which had an angular resolution of \(9^{\prime}\) and a velocity coverage of \(\pm 300\) km s\({}^{-1}\). The LVD used at \(b=0^{\circ}\) had \(2881\times 493\) pixels in the longitude and velocity directions, and was resized to a \(1000\times 1000\) pixel LVD with pixel sizes of \(0^{\circ}.36\) and 0.6 km s\({}^{-1}\). LVDs from the FUGIN survey integrated between \(b=-1^{\circ}\) and \(+1^{\circ}\) have been published, in which various known galactic structures and molecular clouds have been identified (Torii et al., 2019; Kohno et al., 2021). An LVD along the Galactic plane at \(b=0^{\circ}\) has been published by Sofue (2021) in order to derive the rotation curve. In this paper, we use FOT to cover the entire galactic disk, including the distant region outside the solar circle \(\sim 20\) kpc away. Since an LVD integrated in the \(b\) direction weights near and far clouds equally, the face-on map would give more weight to the far-side disc, and the disc thickness effectively sampled would also increase with distance. To avoid this inconvenience, and for the convenience of data processing, we use the LVD at \(b=0^{\circ}\) here. This eliminates the need to consider the height distribution of the gas, but does not allow us to discuss the 3D structure of the disk. The full beam width at half maximum of the 45-m telescope was \(15\arcsec\) at the \({}^{12}\)CO (\(J=1-0\))-line frequency, the velocity coverage and resolution were \(\pm 250\) km s\({}^{-1}\) and 1.3 km s\({}^{-1}\), respectively, and the rms noise levels were \(\sim 1\) K. The original pixel size was \((\Delta l,\Delta b,\Delta V_{\rm lsr})=(8.5\arcsec,\ 8.5\arcsec,\ 0.65\ {\rm km\ s^{-1}})\). The FUGIN LVD is then combined with that in the Galactic Centre between \(l=-1^{\circ}\) and \(1^{\circ}.4\), with the same angular and spectral resolutions, from the GC CO line survey (Tokuyama et al., 2019). The combined 45-m LVD was then resized to a \(1000\times 1000\) pixel LVD between \(l=-10^{\circ}\) and \(+50^{\circ}\), with pixel sizes of \(0^{\circ}.06\) and 0.5 km s\({}^{-1}\). The LVDs thus obtained are shown in Fig. 1, and are used individually to obtain face-on maps using FOT. The resulting face-on maps from the three data sets (Columbia, FUGIN and GC) will be merged on the \((X,Y)\) plane, where \(X\) and \(Y\) are the Cartesian coordinate axes with the origin at the Galactic Centre, positive toward the \(l=90^{\circ}\) direction and toward the Sun, respectively.
The distance to the Galactic center from the Sun (\(R_{0}\)) is assumed to be \(R_{0}=8.0\) kpc (VERA Collaboration et al., 2020; Reid et al., 2019).

Figure 1: [Top] LVD of \({}^{12}\)CO-line \(T_{\rm B}\) in K at \(b=0^{\circ}\) from the Columbia survey (Dame et al., 2001). [Middle] Same, but from \(l=-10^{\circ}\) to \(+50^{\circ}\). [Bottom] LVD from FUGIN with the 45-m telescope in the same longitude range as above (Umemoto et al., 2017) and the GC survey (Tokuyama et al., 2019). All diagrams are smoothed, not presenting the original resolutions.

## 3 Rotation Curve

The rotation curve (RC) is the most important and sensitive parameter for the conversion of radial velocity to kinematic distance. As mentioned in the previous section, the current studies have used fairly simple analytical models that approximate the overall behavior of the observed rotation velocities. While acceptable, though not accurate enough, in the mid to outer disc, the currently adopted rotation curves do not necessarily represent the Galactic rotation with sufficient accuracy in the inner disc and the Galactic Centre (GC) region. Deviations between the assumed RC and the actual rotation speed will artificially increase or decrease the calculated density. If the model RC is underestimated compared to the true velocity, FOT produces an overestimated density near the tangent circle. This occurs because gas observed at higher velocities than the model collects gas from the surrounding region, and, furthermore, gas that does not satisfy the equation is "abandoned" (forgotten) from the solution. This means that the total mass of the gas is not conserved during the FOT. Conversely, if the RC is overestimated, the gas is "swept" away from the right position to the surrounding area and the density is underestimated. As a consequence, it creates an artificial "hole" on the tangent circle. In fact, a flat rotation curve fixed at \(V=240\) km s\({}^{-1}\), about \(\sim 40\) km s\({}^{-1}\) faster than the actual rotation at \(R\sim 2-4\) kpc, has yielded an artificial hole along the tangent circle (Fujita et al., 2023). To avoid or minimize such artifacts, we adopt here the most accurate rotation curve obtained so far for the Milky Way. Figure 2 shows the rotation curve, where the black dots represent the internal rotation curve obtained by terminal-velocity fitting using the FUGIN CO-line LVD (Sofue, 2021). Triangles are taken from the grand rotation curve from the central black hole of the GC to the galactic outskirts, which was constructed by compiling currently published RC data and fitting to the central CO line LVDs (Sofue, 2013). The red line shows the empirically fitted curve adopted for the FOT in this paper.

## 4 Face-on map for circular rotation by FOT-C

We first apply the face-on map transformation assuming circular rotation, which we call the FOT-C method. The radial velocity is related to the rotation velocity by

\[v_{\rm r}=\left(V_{\rm rot}\frac{R_{0}}{R}-V_{0}\right)\sin\ l. \tag{1}\]

The distance \(s\) of the object from the Sun is given by

\[s=R_{0}\cos\ l\pm\sqrt{R^{2}-R_{0}^{2}\sin^{2}l}. \tag{2}\]

The \(+\) and \(-\) signs stand for the far- and near-side solutions, respectively.
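For concreteness, the two FOT-C steps of Eqs. (1)-(2) can be sketched as follows. This is a minimal illustration rather than the pipeline actually used in this paper: the constants \(R_{0}=8\) kpc and \(V_{0}=238\) km s\({}^{-1}\) follow Sections 2 and 4, while the grid-based inversion of Eq. (1) and all function names are our own simplifications.

```python
import numpy as np

R0 = 8.0      # kpc, Sun-GC distance adopted in Sec. 2
V0 = 238.0    # km/s, solar rotation velocity (Sec. 4)

def galactocentric_radius(v_r, l_deg, v_rot, R_grid=np.linspace(0.1, 25.0, 5000)):
    """Numerically invert Eq. (1): find R such that (V_rot(R) R0/R - V0) sin(l) = v_r."""
    sin_l = np.sin(np.radians(l_deg))
    resid = (v_rot(R_grid) * R0 / R_grid - V0) * sin_l - v_r
    return float(R_grid[np.argmin(np.abs(resid))])

def kinematic_distances(v_r, l_deg, v_rot):
    """Eq. (2): near/far heliocentric distances for the radius returned by Eq. (1)."""
    R = galactocentric_radius(v_r, l_deg, v_rot)
    sin_l, cos_l = np.sin(np.radians(l_deg)), np.cos(np.radians(l_deg))
    disc = R**2 - (R0 * sin_l)**2
    if disc < 0:                        # |v_r| beyond the terminal velocity: no solution
        return R, None, None
    root = np.sqrt(disc)
    return R, R0 * cos_l - root, R0 * cos_l + root   # (R, s_near, s_far)

# example: flat 238 km/s rotation curve, l = 20 deg, v_r = +80 km/s
flat = lambda R: np.full_like(R, 238.0)
print(kinematic_distances(80.0, 20.0, flat))
```

For a flat 238 km s\({}^{-1}\) curve at \(l=20^{\circ}\) and \(v_{\rm r}=+80\) km s\({}^{-1}\), the sketch returns \(R\simeq 4\) kpc with near/far distances of roughly 4.6 and 10.5 kpc, illustrating the near-far degeneracy discussed below.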
At the distance \(s\), the volume density of the H\({}_{2}\) gas is calculated by

\[n_{\rm H_{2}}=dN_{\rm H_{2}}/ds=X_{\rm CO}T_{\rm B}dv/ds, \tag{3}\]

where \(N_{\rm H_{2}}=X_{\rm CO}\int T_{\rm B}\,dv\) is the column density, \(T_{\rm B}\) is the \({}^{12}\)CO brightness temperature, \(X_{\rm CO}=2\times 10^{20}\) H\({}_{2}\) cm\({}^{-2}\) [K km s\({}^{-1}\)]\({}^{-1}\) is the CO-to-H\({}_{2}\) conversion factor, and the velocity gradient along the line of sight, \(dv/ds\), is calculated by

\[dv/ds=\sqrt{(dv_{1}/ds)^{2}+(dv_{2}/ds)^{2}}, \tag{4}\]

where

\[\frac{dv_{1}}{ds}=\left|\frac{R_{0}}{R}\left(\frac{dV_{\rm rot}}{dR}-\frac{V_{\rm rot}}{R}\right)\sin\ l\ \cos\ (l+\theta)\right| \tag{5}\]

is the gradient due to Galactic rotation, and

\[dv_{2}/ds=v_{\rm\sigma}/\Delta s\sim 5\ {\rm km\ s^{-1}\ kpc^{-1}} \tag{6}\]

is the internal gradient due to turbulent motion of the gas of the order of \(v_{\rm\sigma}\sim 5\) km s\({}^{-1}\), which gives the minimum value of the gradient along each line of sight. Here, \(R\) and \(R_{0}=8\) kpc are the Galactocentric radius and the solar distance from the GC, respectively, \(l\) and \(\theta\) are the Galactic longitude and the Galactocentric longitude, and \(V_{\rm rot}\) and \(V_{0}=238\) km s\({}^{-1}\) are the rotation velocities at \(R\) and \(R_{0}\), as explained in Fig. 3.

Figure 2: Rotation curves from FUGIN (black dots: Sofue, 2021), from a compilation for the entire Galaxy from the nucleus to the halo (triangles: Sofue, 2013), and a model curve (red line) used for the distance determination, which approximately fits the observations.

Figure 3: Illustration of the velocities in rotation and expansion, and LVD from the Columbia survey (Dame et al., 2001) with explanation of the structures discussed in this paper.

Figure 4 shows the face-on maps of the volume density of H\({}_{2}\) gas obtained by FOT-C using the rotation curve in Fig. 2 and the LVDs in Fig. 1 under an assumption of pure circular rotation. The results from both surveys are merged according to their longitudinal coverage. The near and far emissions are duplicated in this map, so that the map is not appropriate for discussing non-axisymmetric structures such as spiral arms. The circular assumption results in duplicated artifacts of nearby features, such as the sharp ring-like arm along the far-side solar circle, running coherently from \(l\sim 20^{\circ}\) to \(\sim 50^{\circ}\), which is the consequence of the erroneous location of the structure associated with the Aquila Rift. However, it still provides some basic information about the axisymmetric structures such as the 4-kpc molecular ring and some arms as identified in the earlier works (Reid et al., 2016). Due to the high accuracy of the rotation curve in the 1st and 4th quadrants (northern disc of the Milky Way), the artifact "hole" no longer appears on the tangent circle on the right side of the GC in this map. However, hole-like regions still remain on the left side in the 2nd and 3rd quadrants (southern disc). Such lopsidedness of the holes is due to the asymmetric rotation curves between the northern and southern Milky Way, which are represented here by a single RC model for convenience. Another notable artifact feature in this map is the two long bow-shaped arms symmetric about the \(Y\) axis, as indicated by the two dashed white lines.
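The density assignment of Eqs. (3)-(6) can likewise be sketched in a few lines. Again, this is only an illustrative implementation under the stated assumptions: the \(X_{\rm CO}\) factor and the 5 km s\({}^{-1}\) kpc\({}^{-1}\) turbulent floor are taken from the text, while the function names and the example values are ours.

```python
import numpy as np

X_CO = 2.0e20          # H2 cm^-2 (K km/s)^-1, CO-to-H2 conversion factor (Eq. 3)
KPC_CM = 3.0857e21     # cm per kpc

def velocity_gradient(R, l_deg, theta_deg, v_rot, dv_rot_dR, R0=8.0, sigma_term=5.0):
    """Eqs. (4)-(6): quadrature sum of the rotational and turbulent gradients [km/s/kpc]."""
    dv1 = abs(R0 / R * (dv_rot_dR - v_rot / R)
              * np.sin(np.radians(l_deg)) * np.cos(np.radians(l_deg + theta_deg)))
    return np.hypot(dv1, sigma_term)

def h2_volume_density(T_B, dv_channel, dvds):
    """Eq. (3): n_H2 = X_CO * T_B * dv / ds, returned in H2 cm^-3."""
    dN = X_CO * T_B * dv_channel        # column density sampled by one channel [cm^-2]
    ds = dv_channel / dvds * KPC_CM     # path length corresponding to that channel [cm]
    return dN / ds

# example: T_B = 2 K in a 0.65 km/s channel, flat rotation (dV/dR = 0) at R = 4 kpc, l = 20 deg
dvds = velocity_gradient(R=4.0, l_deg=20.0, theta_deg=30.0, v_rot=238.0, dv_rot_dR=0.0)
print(h2_volume_density(T_B=2.0, dv_channel=0.65, dvds=dvds))
```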
The positive \(X\) side bow runs towards \(l\sim 20^{\circ}\) near the Sun, crosses the 4 kpc molecular ring at \((X,Y)\sim(2,+4)\) kpc, returns almost parallel to the \(Y\) axis, crosses the 4kpc ring in the far side again at \((3,-4)\) kpc, and further extends to \((2,-9)\) kpc, finally crossing the distant solar circle. This bow feature is also visible in the map by Fujita et al. (2023) as a massive "leading arm" across the circle beyond the solar circle. The bow on the negative \(X\) side is also traced symmetrically. Such a bow structure extending for \(\sim 10-20\) kpc across the inner disc and the solar circle is difficult to explain by any astrophysical mechanism in the Galactic disc. The bow cannot be removed even using the sophisticated near-far deconstruction procedure, as demonstrated in the recent detailed study using the Nobeyama high-resolution data (Fujita et al., 2023). In the next section, we argue that the bow structures are artifacts due to the assumption of circular rotation applied to the 3-kpc expanding ring in a highly non-circular motion. We also mention that the narrow fan region around the \(Y\) axis (Sun-GC line) filled with straight features extending from the Sun in the GC direction is composed mainly of artifacts of strongly deformed central molecular zone (CMZ) due to the insufficient resolution and too much crowded equal-velocity lines (Fig. 5). So, this central fan region is out of the analysis hereafter. ## 5 Expanding ring with near-far separation by rot-ex In addition to the accuracy of rotation curve, non-circular motion also affects the face-on transformation. In this section we propose a new method to solve the degeneracy of near-far distances incorporating the expanding motion, which we call the FOT-Ex method. The radial velocity \(v_{\rm r}\) superposed by an expanding motion of \(V_{\rm e}\) (positive outward) is given by \[v_{\rm r}=\left(V_{\rm rot}\frac{R_{0}}{R}-V_{0}\right)\sin\ l\pm V_{\rm e}^{ 0}\cos\ (l+\theta). \tag{7}\] In Fig. 5 we show the distribution of \(v_{\rm r}\), or a velocity field, in the Galactic plane in the \((X,Y)\) coordinates (Fig. 3) as calculated for a flat rotation curve superposed by an expanding ring of radius 3 kpc, width 1 kpc, and expanding velocity \(V_{\rm e}=50\) km s\({}^{-1}\). The velocity field is significantly deformed from the general symmetric butterfly pattern with respect to the Sun-GC line (\(Y\) axis). The complex velocity field shows that FOT-C, assuming simple circular rotation, leads to highly deformed maps that are quite different from the actual distribution. This complex behavior of the velocity field offers a unique opportunity to resolve the degeneracy of near-far solutions on the same line of sight. However, it is impractical to transform the measured LVD into a face-on map without knowing the object's motion, whether it follows a pure circular rotation or a combination of circular and extensional motions. So we perform a separation on the LV diagram, where the 3 kpc ring is recognized and traced as a tilted LV ellipse as in Fig. 3. The 3-kpc expanding ring shows up as a tilted ellipse in the LV diagrams in Fig 6 as reproduced from the Columbia CO survey Dame et al. (2001). Panel (a) shows the ellipse by a red oval, and (b) shows the LVD after subtracting the emission along the ellipse. Panel (c) shows an LVD of the subtracted elliptical component. We first apply the simple FOT-C with circular rotation to the LVD in Fig. 6(b) using Eq. 
(1), where the ellipse component (3 kpc ring) has been subtracted. The result is shown in Fig. 7(a), where the dark bows represent the eliminated disc gas corresponding to the LV ellipse caused by the 3-kpc expanding ring. We then apply FOT-Ex using Eq. (7) to the 3 kpc ring component represented by the tilted LV ellipse in Fig. 6(c). This solves the degeneracy problem by choosing the corresponding side of the ellipse depending on the approaching (near) and the receding (far) arms. The results are shown in Fig. 7(b) for the Columbia data and in (c) for the Nobeyama data. Figure 8 enlarges the resulting face-on maps of the 3 kpc ring and adds descriptions of the visible components. Considering the radial motion of the gas on the expanding ring, the emission of the lower half of the tilted LV ellipse is assigned to the near side in FOT-Ex, and the upper half to the far side. This separation eliminates the near-far degeneracy. On the other hand, the resulting map shows a clear cut along the tangent circles, as indicated by the dashed half circle in Fig. 8.

Figure 4: [Top left] FOT-C maps of H\({}_{2}\) density in H\({}_{2}\) cm\({}^{-3}\) from the Columbia and [top right] Nobeyama CO-line LV diagrams for circular rotation. The bottom panel shows the composite, where the region of panel (a) which is observed at Nobeyama has been replaced with map (b). Near and far emissions are duplicated in these maps. The white dashed lines trace artifact "bows" corresponding to the 3-kpc expanding ring caused by the assumption of circular rotation. The far-side massive "leading" arm is due to the (erroneous) location of the "forbidden" LV belt (Fig. 3), which is actually the near-side expanding (approaching) ring.

## 6 Discussion

### Accuracy of the maps

The accuracy of the FOT-C method has been discussed in detail by Sofue (2011), where the rotation curve has been assumed to be valid, while the measurement of the radial velocity includes errors. In the present study, the accuracy of velocity measurement of individual molecular clouds is on the order of the resolution of the \({}^{12}\)CO spectrometer, \(\sim 1\) km s\({}^{-1}\), sufficiently small compared to the observed velocity \(V_{\rm far}\) of \(\sim 100\) km s\({}^{-1}\). The uncertainty of the FOT map in this study arises mainly from the uncertainty of the assumed rotation velocity \(V_{\rm rot}\) and the expansion velocity of the 3-kpc ring \(V_{\rm exp}^{0}\). We here examine the effects of these quantities, \(\delta V_{\rm rot}\) and \(\delta V_{\rm exp}^{0}\), respectively, on the distance determination, whose error (uncertainty) is denoted by \(\delta s\). Using equations 7 and 2, the uncertainties propagate to \(\delta s\) as follows1.

Footnote 1: A small deviation of a multi-variable function \(f(x_{i})\) due to small deviations \(\delta x_{i}\) is assumed to be expressed by \(\delta f^{2}\simeq\Sigma_{i}\left(\frac{\partial f}{\partial x_{i}}\,\delta x_{i}\right)^{2}\).

\[\delta s\simeq A\left[\left(\frac{\delta V_{\rm rot}}{B}\right)^{2}+\left(\frac{V_{\rm rot}\,\delta V_{\rm exp}^{0}}{B^{2}}\right)^{2}\right]^{1/2}, \tag{8}\]
On the other hand, the effect of the rotation curve is minimized as \(\propto\,\sin\,l\) near the Sun-GC line, where the radial velocity is degenerated to zero. We then estimate a typical value of distance error (uncertainty) in the FOT map in a representative region around \(R\sim 3\) kpc, Figure 5: (a) Radial velocity field of the Milky Way with an expanding ring of a radius 3 kpc, and (b) close up in the inner region. The complicated velocity field demonstrates that the radial-velocity to space transformation on an assumption of simple circular rotation results in a considerably deformed map from the true distribution. Figure 6: [Top] LVD at \(b=0^{\circ}\) from Columbia CO survey (Dame et al., 2001) at longitude within \(\pm 50^{\circ}\) (green) superposed by a tilted ellipse representing the 3-kpc expanding ring (red). [Middle] Same, but the LV ellipse has been removed. [Bottom] Same, but only the elliptical region representing the 3-kpc ring. \(l\sim 20^{\circ}\), and \(v_{\rm r}\sim 80\) km s\({}^{-1}\), assuming that the rotation curve has uncertainty of \(\delta V_{\rm rot}=10\) km s\({}^{-1}\), as read from Fig. 2. First, we consider a case of purely circular rotation, so that the second term of Eq. 8 is ignored, or \(\delta s\sim A\delta V_{\rm rot}/B\). Then we obtain \(\delta s\sim 0.41\) kpc, which is proportional to the uncertainty \(\delta V_{\rm rot}\). If the rotation curve is sufficiently accurate with \(\delta V_{\rm rot}\lesssim 5\) km s\({}^{-1}\), the \(\delta V_{\rm rot}\) curve is not affected by the rotation curve. The \(\delta V_{\rm rot}\) curve is not affected by the rotation curve. which is the case in the present study in the 1st and 2nd quadrants (\(l=0^{\circ}\) to \(90^{\circ}\)) of the disc, where the RC determination was made. The distance uncertainty is only \(\delta s\sim 0.2\) kpc here, and the FOT map is reasonably accurate. However, in the 3rd and 4th quadrants (\(l=270^{\circ}\) to \(360^{\circ}\)), the RC may not be accurate enough, which causes the hole-like dark region in the left side of the GC and around \((X,Y)\sim(-4,+3)\) kpc in Fig. 4. If we adopt a flat rotation curve which overestimates the rotation velocity by \(\delta V_{\rm rot}\sim+30\), the distance error is as large as \(\delta s\sim 1.3\) km s\({}^{-1}\), yielding significant under/over estimation of the distance in the near/far side. This is the reason for the artifact 'hole' in the FOT map in the inner disc obtained for the flat rotation curve (Fujita et al., 2023), where the gas has been swept away from the right position for \(\pm 1.3\) kpc, making a hole of diameter \(\sim 2.6\) kpc. We next consider the case including the expanding ring, which is assumed to have expanding velocity of \(V_{\rm exp}^{0}=50\) km s\({}^{-1}\). The distance uncertainty is estimated to be \(\delta s\sim 0.55\) for \(\delta V_{\rm exp}^{0}\sim 2\) km s\({}^{-1}\), and 0.76 kpc for \(\sim 4\), for the same uncertainty of rotation curve \(\delta V_{\rm rot}=10\) km s\({}^{-1}\)as above, while it varies according to Eq. 8. To summarize, the uncertainty of the distance determination by FOT-Ex is about \(\sim 0.5\) kpc for \(\delta V_{\rm rot}\sim 10\) km s\({}^{-1}\)around the 3-kpc ring. It increases toward the tangent circle attaining the maximum there, and is minimum along the Sun-GC line. 
However, it must be recalled that the distance determination has a maximum uncertainty along the Sun-GC line for another reason: the radial velocity there degenerates to zero, so that the error due to \(\delta V_{\rm far}\) increases to infinity (Sofue, 2021). For this reason, the plot near the Sun-GC line (\(l\sim 0^{\circ}\)) is avoided in Fig. 8.

### Comparison with trigonometric parallax distances of star-forming regions associated with the 3-kpc expanding arms

A number of maser sources have been detected along the tilted ellipse on the LVD corresponding to the 3-kpc expanding ring (Green et al., 2011). It is reported that the trigonometric parallaxes of maser sources associated with the star-forming region G9.62+0.20 correspond to a distance of \(5.2\pm 0.6\) kpc, while its LSR velocity is \(\sim 2\) km s\({}^{-1}\) (Sanna et al., 2009). This velocity corresponds to kinematic distances of 0.5 or 16 kpc if circular rotation is assumed, which contradicts the trigonometric distance. The trigonometric distance is consistent with the distance derived here for the molecular 3-kpc arm at \(l\sim 10^{\circ}\) in Fig. 8, where the LSR velocity is \(\sim 2\) km s\({}^{-1}\). Therefore, we may reasonably consider that G9.62+0.20 is associated with the 3-kpc expanding molecular ring. Several more maser sources with trigonometric distances are located at \(R\sim 3\) kpc (\(s\sim 5\) kpc), associated with the near 3-kpc expanding arm, and one source lies near the far-side arm (Reid et al., 2019), as plotted by yellow circles in Fig. 8. Given their circular alignment along the 3-kpc ring, the maser sources are considered to be physically associated with the molecular ring. However, a closer look at the figure suggests that the maser sources are systematically displaced from the molecular ring toward the Sun by about \(\sim 0.5\) kpc, which is also demonstrated by an off-center circle fitted to the sources by Reid et al. (2019). If we rely on the maser parallax distances and consider that the sources are physically associated with the 3-kpc molecular ring, we need a modification of the ring model, so that it produces an off-center ring and asymmetric expansion. In fact, a slightly lopsided expansion is observed, with the far-side ring expanding faster (\(\sim 60\) km s\({}^{-1}\); Dame & Thaddeus, 2008) than the near-side ring (50 km s\({}^{-1}\)). However, this poses a puzzling problem as to why the galactocentric radius of the far-side arm is smaller (\(\sim 2.5\) kpc) than that of the near-side arm (\(\sim 3.2-3.5\) kpc) despite the faster expansion on the far side.

### Expanding ring vs non-circular motion by a bar

The present result does not exclude the possibility that the expanding "ring-like" feature on the LVD is due to non-circular motion induced by the oval orbits in a barred potential (Binney et al., 1991; Athanassoula & Bureau, 1999; Li et al., 2022). The FOT-Ex method assumes that the 3-kpc ring is expanding at \(\sim 50\) km s\({}^{-1}\). If the same amount of radial motion can be assumed, any model, such as the bar model, would lead to a face-on map similar to the one obtained here. However, the nearly perfect elliptical nature of the 3-kpc ring on the LVD, without nodal crossing near \(l\sim 0^{\circ}\), seems to be in favor of the expanding ring model.
The bar potential model predicts that the arms on the LVD have either nodes around \(l\sim 0\), drawing a tilted figure-of-eight shape (not an oval), or a parallelogram structure, as often observed in barred spiral galaxies, including the 4-kpc molecular ring of the Milky Way. In any case, the present analysis is not accurate enough to distinguish a ring from an ellipse. Namely, we cannot exclude the possibility that the 3-kpc expanding "ring" is due to a bar potential, and, vice versa, the bar theory cannot rule out the expanding ring model at the present accuracy.

## 7 Summary

We have shown that face-on transformations (FOT) of molecular-line radial velocities yield maps showing circular structures representing molecular arms when suitable rotation curves are used. However, when the model RC deviates from the true rotational velocity, an artifact hole along the tangent circle occurs for an over-estimated RC, and anomalous gas concentrations for an underestimated RC. Even if the RC is sufficiently accurate, the non-circular motion due to the 3 kpc expanding ring causes considerable artifacts in the resulting face-on map as long as circular rotation is assumed. Such artifacts are seen as giant leading arms across the far side of the solar circle. This is a result of the erroneous positioning of the nearby 3 kpc expanding ring, which includes the "forbidden velocity" belt with negative velocity at \(l\sim 0^{\circ}\) to \(+12^{\circ}\) as shown in Fig. 3. The approaching LV ridge is transformed to a near-side 3-kpc ring when we extract the lower half of the tilted elliptical LV component and apply FOT-Ex considering the expanding motion. The heavy leading arm on the far side, which resulted from the assumption of circular rotation, has disappeared in this FOT-Ex map. The 3-kpc ring on the far side (Dame & Thaddeus, 2008) is also clearly visible in the face-on map as the opposite ring beyond the tangent circle, obtained by transforming the upper half of the LV ellipse. Finally, we comment on the limitations and future prospects of this study. This paper is the first to point out, mainly through the analysis of large-scale LVDs in the Columbia survey, that the problem of near-far degeneracy around the tangent circle can be solved by using non-circular motions of the arm and ring. The LVD from Nobeyama's high-resolution observations has been smoothed to an angular resolution of \(0^{\circ}.06\), so the original resolution (\(20^{\prime\prime}\)) is not properly incorporated into the current analysis. The Nobeyama CO LVD at the Galactic Centre is also significantly smoothed and thus did not provide useful information about the gas distribution in the CMZ. Since we used the LVD only in the Galactic plane at \(b=0^{\circ}\), we were unable to address the vertical (3D) structure of the disk and ring. Higher-resolution and 3D analyses of the Nobeyama LVDs inside the Milky Way galaxy as well as in the CMZ, which was avoided here as the artifact fan region, would be a subject for future work.

## Acknowledgments

The author is indebted to the FUGIN team (Prof. T. Umemoto, et al.) for the CO data observed with the Nobeyama 45-m telescope operated by the NAOJ (National Astronomical Observatory of Japan). The data analysis was carried out on the computer system at the Astronomy Data Center of the National Astronomical Observatory of Japan. The author is indebted to the anonymous referee for the valuable comments to improve the paper.
## Data Availability The FUGIN data were retrieved from the JVO portal at [http://jvo.nao.ac.jp/portal](http://jvo.nao.ac.jp/portal). ## Conflict of interest The author declares that there is no conflict of interest.
2304.04099
Unsupervised Story Discovery from Continuous News Streams via Scalable Thematic Embedding
Unsupervised discovery of stories with correlated news articles in real-time helps people digest massive news streams without expensive human annotations. A common approach of the existing studies for unsupervised online story discovery is to represent news articles with symbolic- or graph-based embedding and incrementally cluster them into stories. Recent large language models are expected to improve the embedding further, but a straightforward adoption of the models by indiscriminately encoding all information in articles is ineffective to deal with text-rich and evolving news streams. In this work, we propose a novel thematic embedding with an off-the-shelf pretrained sentence encoder to dynamically represent articles and stories by considering their shared temporal themes. To realize the idea for unsupervised online story discovery, a scalable framework USTORY is introduced with two main techniques, theme- and time-aware dynamic embedding and novelty-aware adaptive clustering, fueled by lightweight story summaries. A thorough evaluation with real news data sets demonstrates that USTORY achieves higher story discovery performances than baselines while being robust and scalable to various streaming settings.
Susik Yoon, Dongha Lee, Yunyi Zhang, Jiawei Han
2023-04-08T20:41:15Z
http://arxiv.org/abs/2304.04099v3
# Unsupervised Story Discovery from Continuous News Streams via Scalable Thematic Embedding

###### Abstract.

Unsupervised discovery of stories with correlated news articles in real-time helps people digest massive news streams without expensive human annotations. A common approach of the existing studies for unsupervised online story discovery is to represent news articles with symbolic- or graph-based embedding and incrementally cluster them into stories. Recent large language models are expected to improve the embedding further, but a straightforward adoption of the models by indiscriminately encoding all information in articles is ineffective to deal with text-rich and evolving news streams. In this work, we propose a novel _thematic embedding_ with an off-the-shelf pretrained sentence encoder to dynamically represent articles and stories by considering their shared temporal themes. To realize the idea for unsupervised online story discovery, a scalable framework USTORY is introduced with two main techniques, _theme- and time-aware dynamic embedding_ and _novelty-aware adaptive clustering_, fueled by lightweight story summaries. A thorough evaluation with real news data sets demonstrates that USTORY achieves higher story discovery performances than baselines while being robust and scalable to various streaming settings.

News Stream Mining, News Story Discovery, Document Embedding

Our approach to this problem is **thematic embedding** with a pretrained sentence encoder (PSE), which dynamically embeds articles and stories with their shared themes timely captured in the latest news streams. The temporal themes help focus on only story-relevant parts of articles for story discovery so that articles in the same story can be represented closer to one another while being further from those in other stories. As demonstrated in Figure 2, the thematic embedding identifies only story-relevant sentences, which are more clearly clustered into distinct stories, and this naturally results in better article similarities than indiscriminative sentence-level embedding as well as word- or article-level embeddings. However, implementing the idea of thematic embedding for unsupervised online story discovery poses considerable technical challenges: (1) unique themes of stories should be automatically identified (i.e., unsupervised) and timely updated (i.e., online) to derive up-to-date representations of articles and stories, (2) stories arbitrarily emerge and expire over time, so an adaptive mechanism is required to continuously cluster articles into stories, and (3) the embedding and clustering should be efficient and scalable to deal with massive news streams and support real-time applications. **Summary.** To this end, we propose a novel framework, **USTORY**, for Unsupervised Story discovery with scalable thematic embedding.
USTORY can be instantiated with any existing PSEs and employs two main techniques: _theme- and time-aware dynamic embedding_ and _novelty-aware adaptive clustering_; the former systematically identifies temporal themes from diverse contexts of news streams and embed articles and stories by considering their theme and time relevance (Section 4). Then, the latter estimates article-story confidence scores to assign articles to existing stories or initiate novel stories. In the meantime, USTORY manages only the minimum but sufficient summary of stories so as to guarantee single-pass processing of the assigned articles (Section 5). We summarize the main contributions of this work as follows: * To the best of our knowledge, this is the first work to propose _thematic embedding_ with off-the-shelf pretrained sentence encoders for unsupervised online story discovery from news streams. * We propose a framework USTORY, implementing the idea of thematic embedding, which is _scalable_ with single-pass processing and _compatible_ with any pretrained sentence encoders. The source code is available at [https://github.com/cliveyn/USTORY](https://github.com/cliveyn/USTORY). * We demonstrate that USTORY _outperforms_ existing baselines by up to 51.7% in \(B^{3}\)-F1 and their PSE-variants by up to 12.5% in \(B^{3}\)-F1 in three news data sets. The _scalability_ and _robustness_ of USTORY are also verified through in-depth analyses. ## 2. Related Work Early work by Allan et al. (2018) introduced topic detection and tracking (TDT) concepts for organizing and mining news articles. News story discovery is one of the most widely studied relevant research topics in TDT, which is crucial for various downstream applications such as news recommendation (Allan et al., 2017), summarization (Zhu et al., 2018), and fine-grained event mining (Zhu et al., 2019). Previous efforts in the news story discovery can be classified as retrospective (offline) story discovery or prospective (online) story discovery, where the former generates a story timeline (Beng et al., 2019) or structure (Kumar et al., 2019) from a given set of articles usually for analyzing domain-specific events while the latter processes continuous and unbounded news streams to discover and track stories in real-time, which is the main scope of this paper. For online news story discovery, clustering-based approaches have been widely used for embedded articles. One line of studies adopted a _supervised_ approach assuming some labeled articles and external knowledge (e.g., entity labels) are available to facilitate the embedding and clustering procedures. Specifically, labeled training data sets were used to learn an adjacency matrix of articles (Zhu et al., 2019; Zhu et al., 2019; Zhu et al., 2019) or to learn a similarity threshold for cluster assignment (Zhu et al., 2019; Zhu et al., 2019; Zhu et al., 2019). For instance, Miranda et al. (Mranda et al., 2019) introduced a multilingual clustering algorithm for news article streams, where an article-cluster similarity was measured by the weighted sum of TF-IDF sub-vectors similarities and temporal similarities modeled in a Gaussian distribution. The weights for the various types of similarities were learned from labeled training data sets. Some work fine-tuned an LIM with the labeled training data sets for embedding articles along with external entity knowledge (Zhu et al., 2019; Zhu et al., 2019). However, in an online setting, the labeled articles are rarely available and quickly outdated. 
The other line of studies thus tried to embed and cluster articles in an _unsupervised_ manner. ConStream (2018) is one of the popular clustering algorithms for document streams and has been widely used as a competitive baseline. It manages keyword frequencies of articles and clusters them with a notion of micro-clusters. A recent work, NewsLens (Newkens, 2018), found overlapping keywords of articles to build a local topic graph and applied a community detection algorithm to it to detect stories. Staykovski et al. (Staykovski et al., 2018) improved NewsLens further by using the sparse TF-IDF embedding of articles (which outperformed the dense doc2vec embedding (Newkens, 2018) alternatively proposed in that work) to build a local topic graph. The existing methods, however, focus on explicit keyword statistics of articles, limiting the capability to capture implicit local contexts inside the articles. Furthermore, their fixed article representations do not fully capture the evolving global contexts of news streams. In this work, we capture the local contexts of articles with a PSE, while dynamically exploiting it through thematic embedding to adapt to the global contexts of news streams. Besides, some studies (Zhu et al., 2019; Zhu et al., 2019) proposed _offline_ and _supervised_ fine-tuning of language models to specifically model news articles with relevant labeled data sets and tasks. Such fine-tuned models inherently output a fixed and deterministic representation of an input article, i.e., indiscriminative embedding, without considering the global contexts of news streams. However, their pretrained models can also be equipped in our framework for unsupervised online story discovery by being dynamically exploited over news streams through thematic embedding.

## 3. Problem Setting

Let \(a=[s_{1},s_{2},\ldots,s_{|a|}]\) be a news article (or simply _article_) composed of sentences \(s\) and \(C=[a_{1},a_{2},\ldots,a_{|C|}]\) be a news story (or simply _story_) composed of correlated articles with a unique theme, such as California Floods, and consecutively published for a certain duration of time. We assume that each article belongs to a single story, and each story has at least \(M\) articles. A news stream \(\mathcal{A}=(\ldots,a_{i-1},a_{i},a_{i+1},\ldots)\) is a continuous and unbounded sequence of articles, each published at a timestamp \(t_{a_{i}}\). A _sliding window_ \(\mathcal{W}\) of size \(W\), slid by \(S\), determines a context of the latest articles and ongoing stories in \(\mathcal{A}\). We set \(W=7\) days and \(S=1\) day by default (i.e., if no articles are added to a story for a week, it is considered expired), while they can be alternatively set as the number of articles. Then, Definition 3.1 gives a formal definition of the problem considered in this work.

Definition 3.1 (Unsupervised Online Story Discovery). Given a news stream \(\mathcal{A}\), the unsupervised online story discovery is to incrementally update a set \(\mathcal{C}_{W}\) of stories from the articles in every sliding window \(\mathcal{W}\) without any human supervision or story labels.

## 4.
Thematic Embedding ### Motivation The efforts to encode texts have been long made from symbolic-based models (e.g., bag-of-words) to recent LLMs (Srivastava et al., 2016; Krizhevsky et al., 2017; Krizhevsky et al., 2017). As demonstrated in Figure 2, using individual word-level or entire article-level granularity for embedding an article with an LLM may not be optimal since they are either too fine-grained (former) or too coarse-grained (latter) to get its specific story-indicative semantics, which is shared within the same story but not in different stories. Exploiting sentence-level information, on the other hand, effectively balances the abstractness of semantics and naturally meets the input constraints for typical LLMs (e.g., 512 tokens). Recent PSEs, that fine-tune LLMs with benchmark data sets and tasks for specifically embedding sentences, have shown state-of-the-art sentence embedding capabilities across various domains (Srivastava et al., 2016; Krizhevsky et al., 2017). In this work, we exploit off-the-shelf PSEs by regarding sentences as building blocks for embedding articles. Exploiting a PSE to embed a long article (i.e., tens of sentences) gives various design choices; for instance, concatenating or mean-pooling of sentences can be straightforward alternatives1. PSEs can effectively capture the local semantics of individual sentences in an article, but as shown in Figure 1, the sentences are not necessarily relevant to the theme of the article's story. Thus, USTORY employs _thematic embedding_ with a PSE, which first identifies the temporal themes of articles given a particular context of news streams (Section 4.2) and then dynamically embeds the articles and stories considering their theme- and time-relevance (Section 4.3). Footnote 1: Refer to Section 6.3 for the comparison of different alternative embedding strategies. ### Temporal Theme Identification Let _corpus_ be a set of articles collected over a period of time and _corpora_ be a set of the corpus. Then, each corpus must have a _temporal theme_ that uniquely represents the corpus especially in the latest temporal context of corpora. We model the temporal theme through a keywords retrieval process because it is efficient to mine keywords with a simple tokenization pipeline and it is effective to represent a theme explicitly with a diverse combination of keywords. Specifically, we identify thematic keywords of a corpus in context corpora by conjunctively considering (1) _recency_, (2) _popularity_, and (3) _distinctness_ of keywords. For example, let a context corpora \(\mathbb{D}\) be the latest articles in news streams and a target corpus \(d\) be the articles about 2023 California Floods in \(\mathbb{D}\). The keywords such as _'Northern California', 'Evacuation order', 'Pacific Storm'_, 'Die', and _'Rescue'_ may collectively describe the temporal theme of \(d\), while their compositions and importances will change over time. Suppose that the term _'Pacific Storm'_ has consistently appeared in \(d\) for the last 7 days by \(L\) times in each day, while _'Evacuation order'_ has appeared more actively for the last 3 days by \(2L\) times in each day. While _'Evacuation order'_ was less popular than _'Pacific Storm'_ for the last week (i.e., \(2L\times 3\) days \(<L\times 7\) days), it would be more valuable to represent the recent theme of \(d\). 
At the same time, other recent popular keywords such as _'Die'_ or _'Rescue'_ would be elevated if they are also frequently observed in another corpus in \(\mathbb{D}\) (e.g., Russia-Ukraine Conflict). To this end, we naturally incorporate a _time-decaying property_ into popular ranking functions for discriminative information retrieval (e.g., TF-IDF (Bordes and Krizhevsky et al., 2017) and BM25L (Krizhevsky et al., 2017))2 to identify thematic keywords.

Definition 4.1 (Thematic Keywords). Given a target corpus \(d\) in a context corpora \(\mathbb{D}\), a set \(\mathcal{K}_{d}\) of the top \(N\) thematic keywords that best describe the temporal theme of \(d\) at a time \(t_{c}\) is

Footnote 2: Any frequency-based function can be adopted with the time-decaying property. We used TF-IDF as the default, which showed better results than the other functions.

\[\begin{split}\mathcal{K}_{d}&=\{(k_{1},w_{k_{1}}),(k_{2},w_{k_{2}}),\ldots,(k_{N},w_{k_{N}})\},\ \text{where}\\ w_{k}&=rec\text{-}pop(k,d,t_{c})\cdot dist(k,\mathbb{D})\\ &=\sum_{k^{j}\in\mathcal{T}_{d}}\exp\left(-\frac{|t_{k^{j}}-t_{c}|}{\delta_{k}}\right)\cdot\log\left(\frac{|d_{i}\in\mathbb{D}|+1}{|d_{i}\in\mathbb{D}:k\in\mathcal{T}_{d_{i}}|+1}+1\right),\end{split} \tag{1}\]

where \(k_{i}\) is a single- or multi-token term appearing in \(d\) ranked by its importance \(w_{k_{i}}\), and \(\mathcal{T}_{d}\) is the set of all term appearances in \(d\). The score function \(rec\text{-}pop(k,d,t_{c})\) is a time-decaying term frequency, where each term appearance \(k^{j}\) at time \(t_{k^{j}}\) is counted (for _popularity_) while being exponentially decayed by its temporal difference from \(t_{c}\) (for _recency_). The score function \(dist(k,\mathbb{D})\) is an inverse corpus frequency measuring how unique \(k\) is in \(\mathbb{D}\) (for _distinctness_). The decaying factor \(\delta_{k}\) controls the degree of decaying and can be set to the total time span of \(\mathbb{D}\) (i.e., \(\delta_{k}=\max(t_{k^{j}})-\min(t_{k^{j}})+1\) for \(k^{j}\in\mathcal{T}_{\mathbb{D}}\)).

### Theme/Time-aware Dynamic Embedding

**Article Embedding**. A temporal theme can act as key guidance in embedding an article with a PSE; the article is best represented with the theme by focusing on only the theme-relevant parts of the article. Given a certain temporal theme, in the form of a thematic keywords set, we dynamically represent an article by pooling the representations of the sentences in the article weighted by their _theme relevance_, which considers both the frequency and the importance of the thematic keywords found in each sentence.

Definition 4.2 (Article Representation). Given a thematic keywords set \(\mathcal{K}_{d}\), derived from a target corpus \(d\) in context corpora \(\mathbb{D}\), a representation \(E_{a|d}\) of a target article \(a\) given \(d\) is

\[E_{a|d}=\sum_{s_{t}\in a}\frac{\sum_{k_{i}\in\mathcal{K}_{d}}|k_{i}^{j}\in\mathcal{T}_{s_{t}}|\,w_{k_{i}}}{\sum_{k_{i}\in\mathcal{K}_{d}}|k_{i}^{j}\in\mathcal{T}_{a}|\,w_{k_{i}}}\,enc(s_{t}), \tag{2}\]

where \(\mathcal{T}_{a}\) and \(\mathcal{T}_{s}\) are the term appearance sets of \(a\) and its sentence \(s\), respectively, and \(enc(s)\) is a representation of \(s\) by a PSE.

**Story Embedding.** As a story is basically a cluster of articles, a typical way to represent it is to average all the incorporated article representations (i.e., a cluster center). However, such a static story embedding does not correctly capture the temporal theme of the story, which gradually evolves with newly added articles.
Thus, we dynamically represent a story given a target article (i.e., at a specific time of the article) by pooling the representations of articles in the story weighted by their _time relevance_ to the target article. **Definition 4.3** (Story Representation): A representation \(E_{C|a}\) of a target story \(\mathcal{C}\) given a target article \(a\) is \[E_{C|a}=\sum_{a_{i}\in C}\frac{\exp(-|t_{a}-t_{a_{i}}|/\delta_{C})}{\sum_{a_{j }\in C}\exp(-|t_{a}-t_{a_{j}}|/\delta_{C})}E_{a_{i}|C}, \tag{3}\] where the time-decaying property is applied to the temporal distance and the decaying factor \(\delta_{C}\) can be set to the total time span of the story (i.e., \(\delta_{C}=max(t_{a_{i}})-min(t_{a_{i}})\) for \(\forall a_{i}\in C\)). Figure 3 illustrates an example of thematic embedding. Suppose that an article in Figure 1 is a target article \(a\), and a story \(C\) of articles about 2023 California Floods is a target corpus \(d\). Then, the thematic keywords set \(\mathcal{K}_{C}\) is identified as a temporal theme of \(C\). When embedding \(a\) given \(C\), the theme-relevant sentences (e.g., s01, s22, and s34) served as key ingredients for representing \(a\). At the same time, when embedding \(C\) given \(a\), the articles in \(C\) temporally close to \(a\) contribute more to represent \(C\). Note that articles and stories can be dynamically represented depending on the target articles and stories, facilitating more effective story discovery. ## 5. Novelty-aware Adaptive Clustering ### Overview Using the thematic embedding in Section 4, USTORY incrementally clusters articles into stories, while adaptively discovering novel stories. The overall procedure of USTORY is illustrated in Figure 4 and outlined in Algorithm 1. For every sliding window of a news stream, USTORY gets sentence representations of new articles from a PSE (Lines 1-3) and conducts two steps. First, if there are no existing stories in the current window, as _nowel story discovery_ step, new articles are represented with their own theme, and seed stories are identified by cluster center initialization (Lines 4-5). Then, as _confidence-based story assignment_ step (Lines 6-10), unassigned articles (including the new articles) are inspected if they can be confidently added to one of the existing stories. Each article-story pair is dynamically embedded and their confidence scores are derived from their thematic similarity. An article is assigned to the most confident story if the score exceeds a threshold. In the meanwhile, a summary of existing stories is utilized and updated. Finally, the novel story discovery step is conducted for the remaining unassigned articles to form new stories (Line 11) and all the discovered current stories are reported (Line 12). The two steps are detailed in the following sections. 
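Before turning to the algorithmic details, the three building blocks defined so far (Eqs. (1)-(3)) can be summarized in the following sketch. It is a simplified illustration, not USTORY's released implementation: terms are assumed to be pre-tokenized with day-level timestamps, sentence and article vectors are assumed to come from a PSE, the floor on the decay factor is our own guard against a zero time span, and all function names are ours.

```python
import numpy as np
from collections import Counter

def thematic_keywords(term_appearances, corpora_term_sets, t_c, top_n=10):
    """Eq. (1): score every term of a corpus d by time-decayed frequency (recency/popularity)
    times inverse corpus frequency (distinctness), and keep the top-N (term, weight) pairs.
    term_appearances: list of (term, timestamp) pairs observed in d (timestamps in days)."""
    times = [t for _, t in term_appearances]
    delta = max(times) - min(times) + 1
    rec_pop = Counter()
    for term, t in term_appearances:
        rec_pop[term] += np.exp(-abs(t - t_c) / delta)
    n_corpora = len(corpora_term_sets)
    scores = {term: rp * np.log((n_corpora + 1) / (sum(term in s for s in corpora_term_sets) + 1) + 1)
              for term, rp in rec_pop.items()}
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

def embed_article(sentence_vecs, sentence_terms, theme):
    """Eq. (2): pool sentence embeddings weighted by each sentence's thematic-keyword mass."""
    w = dict(theme)
    mass = np.array([sum(w.get(term, 0.0) for term in terms) for terms in sentence_terms])
    if mass.sum() == 0.0:               # no keyword hit: fall back to plain mean pooling
        mass = np.ones(len(sentence_vecs))
    return (mass / mass.sum()) @ np.asarray(sentence_vecs)

def embed_story(article_vecs, article_times, t_target):
    """Eq. (3): pool a story's article representations weighted by time relevance to t_target."""
    times = np.asarray(article_times, dtype=float)
    delta = max(times.max() - times.min(), 1.0)   # assumed floor to avoid division by zero
    w = np.exp(-np.abs(t_target - times) / delta)
    return (w / w.sum()) @ np.asarray(article_vecs)
```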
```
Input: a news stream \(\mathcal{A}\), a pre-trained sentence encoder \(enc\)
Output: a set \(\mathbb{C}_{W}\) of stories in every sliding window \(\mathcal{W}\)

 1  for every sliding window \(\mathcal{W}\) from \(\mathcal{A}\) do
 2      for new articles \(a_{j}\) in \(\mathcal{W}\) do
 3          \([enc(s)\,|\,s\in a_{j}]\leftarrow\) sentence representations of \(a_{j}\)
 4      if \(\mathbb{C}_{W}\) is empty then
            /* Novel (initial) Story Discovery (Section 5.2) */
 5          \(\mathbb{C}_{W}\leftarrow\) initialize seed stories from new articles
        /* Confidence-based Story Assignment (Section 5.3) */
 6      for unassigned article \(a_{j}\) in \(\mathcal{W}\) do
 7          \(C_{i}^{*}\leftarrow\arg\max_{C_{i}\in\mathbb{C}_{W}}conf_{a_{j}|C_{i}}\)   // the most confident story
 8          if \(conf_{a_{j}|C_{i}^{*}}\geq\gamma\) then
 9              assign \(a_{j}\) to \(C_{i}^{*}\)
10              update \(PSS_{C_{i}^{*}}\)   // story summary (Section 5.4)
        /* Novel Story Discovery (Section 5.2) */
11      \(\mathbb{C}_{W}\leftarrow\) add seed stories from unassigned articles
12      return \(\mathbb{C}_{W}\)
```

**Algorithm 1** Overall Procedure of USTORY

### Novel Story Discovery

**Initial Article Embedding.** When there are no existing stories or no confident stories to be assigned to, unassigned articles (e.g., newly published or previously unassigned) are used to find novel seed stories. Since there are no existing themes or adequate themes to be considered, the unique theme of each article itself is identified for the thematic embedding of the article. Specifically, in Definition 4.1, the context corpora \(\mathbb{D}\) becomes all articles in a sliding window \(\mathcal{W}\) and the target corpus \(d\) becomes a target article \(a\). Then, the _article-indicative_ thematic keywords set \(\mathcal{K}_{a}\) is derived, leading to the representation \(E_{a|\{a\}}\) of the article \(a\) by Definition 4.2.

**Seed Stories Discovery.** Once the articles are embedded to better reveal their own themes, the articles under similar themes are more likely to be closer than those under different themes. Thus, a typical cluster center initialization technique can be applied to the thematically embedded initial representations of unassigned articles to find novel seed stories. Motivated by the popularly used k-means++ initialization (Beng et al., 2017), USTORY finds the seed centers with the lowest inertia (i.e., the sum of cosine similarities between each article and its centers) to get the most thematically distinctive seed stories. The number of seeds can be decided by dividing the number of unassigned articles by the minimum number \(M\) of articles to initiate a story, which is dependent on user preferences and application requirements3. If there are no such constraints available, an existing data-driven approach for deciding the number of clusters such as LOG-Means (Beng et al., 2017) or the Silhouette Coefficient (Sel et al., 2018) can be applied.

Footnote 3: Typically, a news story has 5 to 20 articles on its first day in the real data sets (Stein et al., 2018; Stein et al., 2018).

### Confidence-based Story Assignment

When there are existing stories, each unassigned article is evaluated to be assigned to one of the stories. Specifically, the unique temporal themes of existing stories are updated, and then the thematic similarity between each pair of stories and articles is estimated to get the robust article-story confidence score.

Figure 3. Thematic embedding with an article in Figure 1 and its story _2023 California Floods_ as the target article and story.
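The seed-story initialization described in Section 5.2 can be sketched as below. We interpret the "lowest inertia" criterion as maximizing the total cosine similarity of articles to their closest seed; the trial-based greedy seeding, the default of \(M=5\), and the function names are our own illustrative choices rather than USTORY's exact procedure.

```python
import numpy as np

def init_seed_stories(article_vecs, M=5, n_trials=8, rng=np.random.default_rng(0)):
    """Greedy k-means++-style seeding on L2-normalized article representations.
    The number of seeds follows the |unassigned articles| / M heuristic of Sec. 5.2."""
    X = np.asarray(article_vecs, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    n_seeds = max(1, len(X) // M)
    best_centers, best_score = None, -np.inf
    for _ in range(n_trials):
        centers = [X[rng.integers(len(X))]]
        while len(centers) < n_seeds:
            sim = np.max(X @ np.array(centers).T, axis=1)       # best similarity to a chosen seed
            probs = (1.0 - sim).clip(min=1e-9)
            probs = probs / probs.sum()
            centers.append(X[rng.choice(len(X), p=probs)])      # prefer dissimilar articles
        score = np.sum(np.max(X @ np.array(centers).T, axis=1)) # total similarity to closest seed
        if score > best_score:
            best_centers, best_score = np.array(centers), score
    labels = np.argmax(X @ best_centers.T, axis=1)              # tentative seed-story memberships
    return best_centers, labels
```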
**Thematic Embedding.** A _story-indicative_ thematic keywords set \(\mathcal{K}_{C}\) is derived by Definition 4.1, by setting the context corpora \(\mathbb{D}\) to a set \(\mathbb{C}_{\mathcal{W}}\) of stories in a sliding window \(\mathcal{W}\) and the target corpus \(d\) to the articles in a story \(C\). Then, for each pair of article \(a\) and story \(C\), the article representation \(E_{a|C}\) and the story representation \(E_{C|a}\) are derived by Definitions 4.2 and 4.3, respectively.

**Article-Story Thematic Similarity.** Given a pair of an article and a story, USTORY quantifies their _thematic similarity_ by conjunctively considering their semantic themes and symbolic themes. In brief, the former is estimated by the cosine similarity between their thematic representations, and the latter is estimated by the divergence of their thematic keyword distributions. These two types of similarities complement each other to estimate more robust thematic similarities. For instance, some articles about 2023 California Floods might have a few sentences of slightly different thematic semantics according to the writers' perspectives (e.g., one may focus on the rescue while another may focus more on the victims), but the overall thematic keyword distributions of these articles would be similar as they are describing the same event. On the other hand, some articles about 2023 California Floods and other articles about Russia-Ukraine Conflict might happen to have a few sentences of similar thematic semantics describing casualties, but their overall thematic keyword distributions would be different (e.g., for articles in 2023 California Floods, _'die'_ co-occurs more frequently with _'floods'_ or _'storm'_). Definition 5.1 formally formulates the thematic similarity between an article and a story.

**Definition 5.1** (Thematic Similarity): A thematic similarity between an article \(a\) and a story \(C\) is calculated as \[sim_{theme}(a,C)=\max(0,\cos(E_{a|C},E_{C|a}))\cdot\big(1-JSD(P_{a;\mathcal{K}_{C}}\,\|\,P_{C;\mathcal{K}_{C}})\big). \tag{4}\] The first term is a cosine similarity, with the negative values truncated to zero, between the thematic representations of \(a\) and \(C\). The second term is one minus the JS-divergence (Stein et al., 2018)4 between the thematic keyword probability distributions of \(a\) and \(C\), so that similar keyword distributions yield values close to one. The keyword probability \(P(k_{i}|a,\mathcal{K}_{C})\) in \(P_{a;\mathcal{K}_{C}}\) is estimated as \(\frac{|k_{i}\in\mathcal{T}_{a}|}{\sum_{k_{j}\in\mathcal{K}_{C}}|k_{j}\in\mathcal{T}_{a}|}\), where \(\mathcal{T}_{a}\) is the term appearance set of \(a\) (and similarly for \(P(k_{i}|C,\mathcal{K}_{C})\) in \(P_{C;\mathcal{K}_{C}}\)). Footnote 4: Among popular divergence measures (Stein et al., 2018; Stein et al., 2018; Stein et al., 2018), we chose JS-divergence (Stein et al., 2018) since it is bounded within a finite interval \([0,1]\), can be defined even when a keyword exists only in a story, and has a low time complexity, i.e., \(O(|\mathcal{K}_{C}|)\).

**Article-Story Assignment.** Finally, USTORY derives a confidence score \(conf_{a,C}\in[0,1]\) for an article \(a\) to be assigned to a story \(C\) by comparing their thematic similarity with the other thematic similarities of all possible candidate assignments.
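As a rough numerical illustration of Definition 5.1, the sketch below combines the truncated cosine similarity with the complement of the Jensen-Shannon divergence of the keyword distributions; the helper for keyword counts and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def keyword_distribution(term_counts, keywords):
    """P(k | ., K_C): appearance counts of the thematic keywords, normalized."""
    counts = np.array([term_counts.get(k, 0) for k in keywords], dtype=float)
    total = counts.sum()
    return counts / total if total > 0 else np.full(len(keywords), 1.0 / len(keywords))

def thematic_similarity(E_a_given_C, E_C_given_a, article_counts, story_counts, keywords):
    cos = np.dot(E_a_given_C, E_C_given_a) / (
        np.linalg.norm(E_a_given_C) * np.linalg.norm(E_C_given_a) + 1e-12)
    p_a = keyword_distribution(article_counts, keywords)
    p_c = keyword_distribution(story_counts, keywords)
    jsd = jensenshannon(p_a, p_c, base=2) ** 2   # squared JS distance = JS divergence in [0, 1]
    return max(0.0, cos) * (1.0 - jsd)
```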
**Definition 5.2** (Article-Story Confidence): Given a target article \(a\) and a set of candidate stories \(C_{i}\in\mathbb{C}_{\mathcal{W}}\), the article-story confidence score for \(a\) to be assigned to \(C_{i}\) is \[conf_{a,C_{i}}=\frac{\exp\left(T\cdot sim_{theme}(a,C_{i})\right)}{\sum_{C_{j}\in\mathbb{C}_{\mathcal{W}}}\exp\left(T\cdot sim_{theme}(a,C_{j})\right)}, \tag{5}\] where \(T\) is a temperature for scaling the score distribution. Then, the article is assigned to the story with the highest confidence score if it exceeds the threshold \(\gamma=1-(1-1/|\mathbb{C}_{\mathcal{W}}|)^{T}\), indicating an adjusted random assignment probability (Stein et al., 2018), or otherwise remains unassigned5. The unassigned articles are used to find seed stories as introduced in Section 5.2 and repeatedly inspected to be assigned to the updated stories in later sliding windows. Footnote 5: The sensitivity analysis and the guidance on \(T\) are given in Section 6.4.

### Scalable Processing with Story Summary

**Story Summary.** Identifying temporal themes and dynamically embedding articles and stories from scratch in every sliding window cause considerable computation overhead, which is not practical in an online scenario. To realize scalable online processing, USTORY uses a novel data structure, called _pane-based story summary_, motivated by the pane-based aggregation (Stein et al., 2018) that has been widely adopted for efficient and scalable stream algorithms (Stein et al., 2018; Stein et al., 2018; Stein et al., 2018; Stein et al., 2018).

Figure 4. The overall procedure of USTORY.

**Definition 5.3** (Pane-based Story Summary (PSS)): Let panes \(p_{i}\) be non-overlapping subsets of consecutive articles in a news stream and story panes \(p_{i}^{C}\) of a story \(C\) be a set of articles in both \(C\) and \(p_{i}\). Then, a pane-based story summary of \(C\), \[PSS_{C}=\big\{\,p_{i}:\big\langle\,|p_{i}^{C}|,\;tf(\cdot,p_{i}^{C}),\;\Sigma_{a_{j}\in p_{i}^{C}}E_{a_{j}|C}\,\big\rangle\,\big\}, \tag{6}\] maps a pane to the triplet of the number of articles, the term frequencies, and the sum of article representations in \(p_{i}^{C}\). The size of a pane determines the granularity of the story summary. While any common divisor of the window size and the slide size can be used, we set the pane size to be the slide size (i.e., the greatest common divisor) to maximize efficiency. In the confidence-based story assignment step in every sliding window, USTORY uses PSS to identify temporal themes and derive dynamic article and story representations, without accessing all previous articles in stories. Note that USTORY updates the triplet in PSS in an _additive_ manner whenever articles in stories are updated as the window slides. The sufficiency of PSS and the complexity of USTORY realized by utilizing PSS are analyzed as follows.

**Efficiency Analysis.** With the help of PSS, USTORY guarantees _single-pass processing_ on the assigned articles; once an article is assigned to a story, it can be discarded and only PSS is used for the following procedures. Theorem 5.4 and Theorem 5.5 respectively prove the sufficiency of PSS for the story assignment and show the time and space complexities of USTORY when utilizing PSS.
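To illustrate the pane-based story summary (Definition 5.3) and its additive maintenance, here is a minimal sketch using plain dictionaries; the structure and names are illustrative, not the authors' implementation.

```python
from collections import Counter, defaultdict
import numpy as np

def make_pss():
    """PSS_C: pane id -> [article count, term frequencies, sum of article reps]."""
    return defaultdict(lambda: [0, Counter(), None])

def add_article_to_pss(pss, pane_id, terms, rep):
    """Additive update when an article is assigned to the story."""
    entry = pss[pane_id]
    entry[0] += 1
    entry[1].update(terms)                                   # tf(., p_i^C)
    entry[2] = rep.copy() if entry[2] is None else entry[2] + rep

def evict_expired_panes(pss, oldest_pane_in_window):
    """Drop panes that have slid out of the current window."""
    for pane_id in [p for p in pss if p < oldest_pane_in_window]:
        del pss[pane_id]
```

Once an article has been folded into the summary this way, it no longer needs to be kept, which is the single-pass property highlighted in the efficiency analysis above.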
**Theorem 5.4** (Sufficiency of PSS): _The confidence-based story assignment step requires identifying thematic keywords \(\mathcal{K}_{C}\) and a story representation \(E_{C|a}\) for each story \(C\) and each unassigned article \(a\) (note that \(E_{a|C}\) is directly derived from \(\mathcal{K}_{C}\) and \(a\)). A pane-based story summary \(PSS_{C}\) is sufficient for deriving \(\mathcal{K}_{C}\) and \(E_{C|a}\)._ Proof.: \(\mathcal{K}_{C}\) and \(E_{C|a}\) can be accurately derived by only using the triplet information in \(PSS_{C}\). First, by Definition 4.1, the importance \(w_{k}\) of a thematic keyword \(k\) with a story \(C\) in a sliding window \(\mathcal{W}\) at a current timestamp \(t_{c}\) is computed from \(rec\_pop(k,C,t_{c})\) and \(dist(k,\mathbb{C}_{\mathcal{W}})\). Since a pane \(p_{i}\) represents the articles in the same slide in \(\mathcal{W}\), i.e., \(\forall a\in p_{i}:t_{a}=t_{p_{i}}\), \(rec\_pop(k,C,t_{c})\) and \(dist(k,\mathbb{C}_{\mathcal{W}})\) are calculated with the term frequency \(tf(k,p_{i}^{C})\) in \(PSS_{C}\). \[\begin{split} rec\_pop(k,C,t_{c})&=\sum_{p_{i}\in W}\left(\exp(-\frac{|t_{p_{i}}-t_{c}|}{\delta_{k}})\cdot tf(k,p_{i}^{C})\right)\text{ and }\\ dist(k,\mathbb{C}_{\mathcal{W}})&=\log\Big(\frac{|\mathbb{C}_{\mathcal{W}}|+1}{\sum_{p_{i}\in W}tf(k,p_{i}^{C})+1}+1\Big).\end{split} \tag{7}\] Then, in Definition 4.3, \(E_{C|a}\) can be reformulated as the sum of time-decaying article representations divided by the time-decaying count of articles. Thus, it can be calculated with the article count \(|p_{i}^{C}|\) and the article representation sum \(\sum_{a_{j}\in p_{i}^{C}}E_{a_{j}|C}\) in \(PSS_{C}\). \[E_{C|a}=\frac{\sum_{p_{i}\in W}\left(\exp(-|t_{a}-t_{p_{i}}|/\delta_{C})\cdot\Sigma_{a_{j}\in p_{i}^{C}}E_{a_{j}|C}\right)}{\sum_{p_{i}\in W}\left(\exp(-|t_{a}-t_{p_{i}}|/\delta_{C})\cdot|p_{i}^{C}|\right)}. \tag{8}\]

**Theorem 5.5** (Complexities of USTORY): _Let \(N_{W}\) and \(N_{S}\) be the numbers of articles in a window and a slide, respectively, and \(N_{C}\) be the number of existing stories. Then, the time and space complexities of USTORY are \(O(N_{C}N_{W}+N_{S})\) and \(O(N_{W}+\frac{N_{W}}{N_{S}}N_{C})\), respectively._ Proof.: The time complexity of USTORY is specifically divided into that for each step: \(O(N_{S})\) for embedding articles in every sliding window with a PSE, \(O(N_{C}N_{W})\) for the novel story discovery step, \(O(N_{C}N_{W})\) for the confidence-based story assignment step, and \(O(N_{S})\) for updating PSS. Thus, the total time complexity of USTORY is \(O(N_{C}N_{W}+N_{S})\). Similarly, the space complexity of USTORY is specifically divided into that for managing stories and articles: \(O(\frac{N_{W}}{N_{S}}N_{C})\) for managing PSS, where \(\frac{N_{W}}{N_{S}}\) is the number of panes, and \(O(N_{W})\) for managing articles. Thus, the total space complexity of USTORY is \(O(N_{W}+\frac{N_{W}}{N_{S}}N_{C})\). Since \(N_{C},N_{S}\ll N_{W}\) in practice, the time and space complexities of USTORY are _linear_ in \(N_{W}\), mainly affected by the window size \(W\).

## 6. Experiments

We conducted extensive experiments to evaluate the performance of USTORY, of which the results are summarized as follows.

* USTORY outperformed the existing unsupervised online story discovery algorithms and their variants with a PSE in benchmark news data sets in terms of \(B^{3}\)-F1, AMI, and ARI (Section 6.2).
* The main idea employed in USTORY was demonstrated to be effective through ablation studies on the theme- and time-aware components (Section 6.2) and comparison with alternative embedding strategies (Section 6.3).
* USTORY was scalable to the variation of sliding window sizes and robust to hyperparameters (Section 6.4).
* USTORY discovered higher-quality embedding spaces and news stories than baselines from a real news stream (Section 6.5).

### Experiment Setting

**News Streams.** We used four real news data sets for evaluation, summarized in Table 1. Newsfeed (Newsfeed, 2018; Newsfeed, 2018) is a multilingual news data set collected from Newsfeed Service (Newsfeed, 2018) in 2014, and we used English news articles with story labels. WCEP (Newsfeed, 2018) is a benchmark news data set collected from Wikipedia Current Event Portal and the Common Crawl archive. We used articles in stories with at least 50 articles, published in 2018 and 2019 (i.e., WCEP18 and WCEP19). For a qualitative case study, we prepared USNews by collecting news articles through NewsAPI (Newsfeed, 2018) with a query of 'United States' for a month. Each data set is simulated as a news stream by feeding articles into sliding windows in chronological order. The window size \(W\) and the slide size \(S\) of the sliding window are set to 7 days and a day, respectively.

**Compared Algorithms.** We compared USTORY with five online story discovery algorithms, _ConStream_, _NewsLens_, _StoryForest_, _Miranda_, and _Staykovski_, as well as their variants that use a PSE for embedding articles by averaging their sentence representations. We used two PSEs for USTORY and the variants of baselines: Sentence-BERT (Srivastava et al., 2017)7 (i.e., AlgName-SenRB) and Sentence-T5 (Srivastava et al., 2017)8 (i.e., AlgName-SenT5). Please note that for USTORY and the PSE-variants of baselines, Sentence-BERT (i.e., -SenRB) is used as a default PSE unless otherwise specified. Footnote 7: [https://huggingface.co/sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) Footnote 8: [https://huggingface.co/sentence-transformers/sentence-t5-large](https://huggingface.co/sentence-transformers/sentence-t5-large)

**Implementation Details.** All compared algorithms were implemented with _Python 3.8.8_ and evaluated on a Linux server with an AMD EPYC 7502 32-Core CPU and 1TB RAM. We tokenized sentences and terms in articles by _SpaCy 3.2.0_ with the _en_core_web_lg_ pipeline (Krishnan et al., 2017) and counted (1,2)-gram terms by _Scikit-learn 0.24.2_. For each algorithm, we used the default hyperparameters following the original work or tuned them to get the best results; specifically, the keywords size \(N\in[1,15]\) and the temperature \(T\in[1,5]\) for USTORY (\(N=10\) and \(T=2\) were used by default); the standard score threshold for the micro-cluster assignment \(\in[0,3]\) for _ConStream_; the number of overlapping keywords for creating an edge \(\in[1,5]\) and the similarity threshold for merging communities \(\in[0,1]\) for _NewsLens_; the number of keywords for an article \(\in[1,20]\) to form a keyword graph for _StoryForest_; the similarity threshold for cluster assignment \(\in[1,5]\) for _Miranda_; and the similarity threshold for creating an edge \(\in[0,0.5]\) and that for merging clusters \(\in[0.5,1]\) for _Staykovski_. The minimum number \(M\) of articles to form a valid story was set to the average number of articles in a story in a day, which was 8 for Newsfeed and 18 for WCEP18 and WCEP19.
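The stream simulation protocol above (window \(W\) = 7 days, slide \(S\) = 1 day, chronological feeding) can be sketched as follows; the dataframe column name is an assumption for illustration, not part of the released data sets.

```python
import pandas as pd

def sliding_windows(articles: pd.DataFrame, window_days=7, slide_days=1):
    """Yield (window_start, window_articles) over a chronologically sorted stream.

    articles: DataFrame with a 'date' column of dtype datetime64
              (the column name is an illustrative assumption).
    """
    articles = articles.sort_values("date")
    start = articles["date"].min().normalize()
    end = articles["date"].max()
    while start <= end:
        stop = start + pd.Timedelta(days=window_days)
        in_window = (articles["date"] >= start) & (articles["date"] < stop)
        yield start, articles[in_window]
        start += pd.Timedelta(days=slide_days)
```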
**Evaluation Metrics:** We used B-cubed F1 score (\(B^{3}\)-F1) (Barbani et al., 2017) for evaluating article-wise cluster quality and used Adjusted Mutual Information (_AMI_) (Barbani et al., 2017) and Adjusted Rand Index (_ARI_) (Krishnan et al., 2017) for evaluating the mutual information and similarity of clustering results, respectively, adjusted for chance. For each metric, the average score over all sliding windows is reported to evaluate the overall performances over the entire news streams. ### Story Discovery Accuracy Table 2 shows the overall evaluation results of all algorithms. **Comparison with Baselines.** USTORY achieved higher \(B^{3}\)-F1, AMI, and ARI scores than baselines for all cases. For instance, USTORY-SenRB achieved 44.9% higher \(B^{3}\)-F1, 76.6% higher AMI, and 231.1% higher ARI than baselines when averaged over all cases. We also compared the PSE-variants of the top two baselines, Miranda and Staykovski (note that the PSE-variants of the other algorithms consistently performed worse than them). The PSE-variants improved all scores from their original versions, which shows the generic embedding power of a PSE. USTORY, however, still outperformed them consistently in all cases for both SenT5 and SenRB. Specifically, USTORY-SenRB achieved 6.8% higher \(B^{3}\)-F1, 6.2% higher AMI, and 15.2% higher ARI than the SenRB-variants of the top two baselines when averaged over all cases. This clearly shows that USTORY exploits a PSE more effectively with the help of thematic embedding. **Ablation Study of USTORY.** We verified the efficacy of the theme and time-aware components employed in USTORY by preparing the three variants: * _w/o time-aware_ does not consider the recency in thematic keywords and the time relevance in embedding articles and stories. * _w/o theme-aware_ does not consider the theme relevance in embedding articles and stories. * _w/o both_ does not consider both the time- and theme-aware components described above. USTORY consistently achieved higher scores than the above three variants, which indicates that both components jointly help to discover stories. More specifically, the theme-aware component contributed more than the time-aware component to the performance. These results show that temporally close articles in a story do not necessarily share thematically similar content only, so it is more critical to identify and filter out theme-irrelevant parts in each article for online story discovery. 
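For reference, here is a minimal sketch of the evaluation protocol used above: a generic per-article B-cubed F1, with AMI and ARI taken from scikit-learn; this is a standard implementation of the metrics, not the authors' evaluation code.

```python
import numpy as np
from sklearn.metrics import adjusted_mutual_info_score, adjusted_rand_score

def b_cubed_f1(true_labels, pred_labels):
    """Article-wise B-cubed F1 for one sliding window's clustering."""
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    precisions, recalls = [], []
    for i in range(len(true_labels)):
        same_pred = pred_labels == pred_labels[i]
        same_true = true_labels == true_labels[i]
        correct = np.logical_and(same_pred, same_true).sum()
        precisions.append(correct / same_pred.sum())
        recalls.append(correct / same_true.sum())
    p, r = np.mean(precisions), np.mean(recalls)
    return 2 * p * r / (p + r)

# ami = adjusted_mutual_info_score(true_labels, pred_labels)
# ari = adjusted_rand_score(true_labels, pred_labels)
# The reported scores are averaged over all sliding windows.
```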
\begin{table} \begin{tabular}{c|l|c c c|c c c|c c} \hline \hline & & \multicolumn{3}{c|}{Newsfeed} & \multicolumn{3}{c|}{WCEP18} & \multicolumn{3}{c}{WCEP19} \\ & & \(B^{3}\)-F1 & AMI & ARI & \(B^{3}\)-F1 & AMI & ARI & \(B^{3}\)-F1 & AMI & ARI \\ \hline \multirow{4}{*}{Baselines (ordered by the average \(B^{3}\)-F1)} & ConStream & 0.314 & 0.128 & 0.069 & 0.408 & 0.444 & 0.222 & 0.400 & 0.497 & 0.292 \\ & NewsLens & 0.481 & 0.309 & 0.077 & 0.527 & 0.490 & 0.117 & 0.554 & 0.529 & 0.141 \\ & StoryForest & 0.696 & 0.725 & 0.592 & 0.673 & 0.765 & 0.523 & 0.697 & 0.798 & 0.596 \\ & Miranda & 0.706 & 0.726 & 0.572 & 0.694 & 0.786 & 0.571 & 0.698 & 0.791 & 0.574 \\ & Staykovski & 0.669 & 0.602 & 0.358 & 0.697 & 0.759 & 0.487 & 0.701 & 0.765 & 0.487 \\ & Miranda-SenT5 & 0.732 & 0.753 & 0.617 & 0.710 & 0.798 & 0.629 & 0.717 & 0.805 & 0.644 \\ & Staykovski-SenT5 & 0.684 & 0.631 & 0.415 & 0.735 & 0.798 & 0.582 & 0.704 & 0.782 & 0.537 \\ & Miranda-SenRB & 0.764 & 0.785 & 0.648 & 0.751 & 0.835 & 0.656 & 0.759 & 0.837 & 0.657 \\ & Staykovski-SenRB & 0.750 & 0.720 & 0.567 & 0.754 & 0.824 & 0.642 & 0.762 & 0.830 & 0.660 \\ \hline \multirow{4}{*}{Proposed} & **USTORY-SenT5** & 0.751\({}^{*}\) & 0.763\({}^{*}\) & 0.638\({}^{*}\) & 0.780\({}^{*}\) & 0.846\({}^{*}\) & 0.694\({}^{*}\) & 0.799\({}^{*}\) & 0.861\({}^{*}\) & 0.733\({}^{*}\) \\ & over baselines & 44.44\% & 0.136\% & 0.330\% & 0.359\% & 0.386\% & 0.160\% & 0.374\% & 0.33\% & 0.4134.4\% \\ & over baselines-SenT5 & **4.62**\% & **41.11**\% & **42.86**\% & **48.0**\% & **46.0**\% & **41.48**\% & **412.5**\% & **48.5**\% & **42.52**\% \\ & **USTORY-SenRB** & **0.789\({}^{*}\)** & **0.812\({}^{*}\)** & **0.699\({}^{*}\)** & **0.810\({}^{*}\)** & **0.871\({}^{*}\)** & **0.739\({}^{*}\)** & **0.825\({}^{*}\)** & **0.880\({}^{*}\)** & **0.765\({}^{*}\)** \\ & over baselines & 0.517\% & **0.51**\% & **0.51**\% & **0.51**\% & **0.50**\% & **0.50**\% & **0.50**\% & **0.50**\% \\ & over baselines-SenRB & **4.25**\% & **4.15**\% & **41.5**\% & **41.15**\% & **42.7**\% & **41.77**\% & **44.19**\% & 0.36\% & **414.6**\% \\ & **w/o time-aware** & 0.780 & 0.801 & 0.682 & 0.801 & 0.864 & 0.736 & 0.817 & 0.875 & 0.760 \\ & **w/o theme-aware** & 0.767 & 0.792 & 0.665 & 0.771 & 0.842 & 0.679 & 0.790 & 0.856 & 0.703 \\ & **w/o both** & 0.752 & 0.777 & 0.651 & 0.753 & 0.826 & 0.677 & 0.770 & 0.843 & 0.709 \\ \hline \hline \end{tabular} * denotes statistically significant improvement (p=0.05 with the t-test) over the compared algorithms and \(\Delta\), \(\Delta\) indicate the average improvement ratio. \end{table} Table 2. Performance comparison results (the highest scores are highlighted in bold). ### Embedding Strategies Comparison We evaluated the story discovery performance of USTORY with four embedding strategies to exploit a PSE: * _IndSentMean_: mean pooling of sentence representations * _IndSentConc_: a representation of concatenated sentences * _ThemSentMean_ (default in USTORY): weighted pooling of thematically prioritized sentence representations * _ThemSentConc_: a representation of the concatenation of thematically prioritized sentences. Note that USTORY automatically identifies temporal themes of news streams to prioritize sentences for ThemSentMean and ThemSentConc. We also varied the number of sentences used for embedding articles for further understanding of the compared strategies. Figure 5 shows the \(B^{3}\)-F1 results, while the results of AMI and ARI showed similar trends. 
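A rough sketch contrasting the pooling-based strategies compared above; the sentence representations are assumed to be precomputed by a PSE, and the theme-relevance weights stand in for the sentence weighting of Definition 4.2 (names are illustrative).

```python
import numpy as np

def ind_sent_mean(sent_reps):
    """IndSentMean: plain mean pooling of sentence representations."""
    return np.mean(np.asarray(sent_reps), axis=0)

def them_sent_mean(sent_reps, theme_relevance):
    """ThemSentMean: pooling weighted by each sentence's theme relevance."""
    w = np.asarray(theme_relevance, dtype=float)
    w = w / (w.sum() + 1e-12)
    return w @ np.asarray(sent_reps)
```

The Concatenate strategies instead join (possibly prioritized) sentences into one long input for the PSE, which is limited by the encoder's maximum input length.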
The proposed ThemSentMean strategy in USTORY showed the highest scores across the varying number of sentences. This indicates that thematic embedding effectively exploits all information in the article to discover stories. In general, the Mean strategies, IndSentMean and ThemSentMean, outperformed the Concatenate strategies, IndSentConc and ThemSentConc, because the concatenated sentences cannot incorporate all information in the article due to the input sequence limit of a PSE and also fail to capture local semantics. Meanwhile, it is notable that ThemSentConc still took advantage of the thematic prioritization of sentences and resulted in higher performance than IndSentConc.

### Scalability and Sensitivity Analysis

**Scalability of USTORY.** We analyzed the scalability of USTORY and the top two baselines on varying window sizes (\(W\)), i.e., the number of articles in a window, which is the most critical factor affecting scalability. Figure 6 shows the average wall clock time for the compared algorithms to process each sliding window and their \(B^{3}\)-F1 results. Note that the AMI and ARI results showed similar trends. While the top two baselines resulted in higher processing time due to the high computation cost for community detection (Staykovski) and clustering (Miranda), USTORY took a much shorter processing time for each sliding window. The increasing rate of processing time over larger windows for USTORY was lower than Miranda's and comparable to Staykovski's in most cases. USTORY also consistently achieved higher \(B^{3}\)-F1 scores than the other algorithms. This demonstrates the efficacy of the story summary (PSS) for efficient and accurate story discovery.

**Processing Time Breakdown of USTORY.** We further broke down the processing time of USTORY into the four main steps in Section 5. As shown in Figure 7, the initial article embedding step took the most processing time, followed by the story assignment step, the story summary (PSS) update step, and the seed stories discovery step. To further improve the scalability of USTORY, especially for time-critical applications, the initial embedding step can alternatively be substituted with the IndSentMean strategy discussed in Section 6.3, which is expected to be faster but still effective for initialization purposes. Meanwhile, the last three steps in Newsfeed accounted for a larger portion of the total processing time than in the other data sets. This also conforms to the complexity analysis result, as Newsfeed has more concurrent stories in sliding windows.

**Sensitivity of USTORY.** Figure 8 shows the sensitivity of USTORY to the two main hyperparameters, the keywords size \(N\) and the temperature \(T\), for the three performance metrics. The performances of USTORY converge early and become near optimal around the default values (i.e., \(N=10\) and \(T=2\)). These trends show that setting an adequate number of keywords for identifying temporal themes (e.g., \(N\geq 3\)) and flattening the article-story confidence score distribution to at least some degree (e.g., \(T\geq 2\)) can lead to a robust story discovery performance.

Figure 5. \(B^{3}\)-F1 scores of USTORY with four strategies.
Figure 6. Varying sliding window sizes.
Figure 7. Processing time breakdown of USTORY.
Figure 8. Varying keywords size \(N\) and temperature \(T\).

### Case Study

We conducted a qualitative case study on USNews with USTORY and the PSE-variants of the top two baselines.
Figure 9 visualizes the ten discovered stories and their article embeddings with USTORY and the PSE-variants of baselines. The leftmost sub-figure shows a timeline of the sizes of stories with their titles (manually given by the authors for convenience) and their top five thematic keywords (automatically identified by USTORY). USTORY successfully identified and tracked the important real news stories. For instance, the long-term stories such as Russia-Ukraine Conflict and Covid 19, which have been constant issues in recent years, were continuously discovered by USTORY throughout the whole period. At the same time, the short-term stories such as North Korea Missile Test and Capitol Attack Reflects that have been discussed for some days were also timely discovered by USTORY. The resulting embedding space also shows that the articles are distinctively clustered by their stories. Following the two embedding strategies discussed in Section 6.3, the right two sub-figures respectively visualizes the article representations embedded by Them-SentMean (used by default in USTORY) and IndSentMean (used in the PSE-variants of baselines) in a 2D-space through t-SNE (S
2303.17724
Ab initio simulation of the ultrafast circular dichroism spectrum of provitamin D ring-opening
We present a method to simulate ultrafast pump-probe time-resolved circular dichroism (TRCD) spectra based on time-dependent density functional theory trajectory surface hopping. The method is applied to simulate the TRCD spectrum along the photoinduced ring-opening of provitamin D. Simulations reveal that the initial decay of the signal is due to excited state relaxation, forming the rotationally flexible previtamin D. We further show that oscillations in the experimental TRCD spectrum arise from isomerizations between previtamin D rotamers with different chirality, which are associated with the helical conformation of the triene unit. We give a detailed description of the formation dynamics of different rotamers, playing a key role in the natural regulation of vitamin D photosynthesis. Going beyond the sole extraction of decay rates, simulations greatly increase the amount of information that can be retrieved from ultrafast TRCD, making it a sensitive tool to unravel details in the sub-picosecond dynamics of photoinduced chirality changes.
Enrico Tapavicza, Trevor Reutershan, Travis Thompson
2023-03-30T21:49:09Z
http://arxiv.org/abs/2303.17724v2
# Ab initio simulation of the ultrafast circular dichroism spectrum of provitamin D ring-opening

###### Abstract

We present a method to simulate ultrafast pump-probe time-resolved circular dichroism (TRCD) spectra based on time-dependent density functional theory trajectory surface hopping. The method is applied to simulate the TRCD spectrum along the photoinduced ring-opening of provitamin D. Simulations reveal that the initial decay of the signal is due to excited state relaxation, forming the rotationally flexible previtamin D. We further show that oscillations in the experimental TRCD spectrum arise from isomerizations between previtamin D rotamers with different chirality, which are associated with the helical conformation of the triene unit. We give a detailed description of the formation dynamics of different rotamers, playing a key role in the natural regulation of vitamin D photosynthesis. Going beyond the sole extraction of decay rates, simulations greatly increase the amount of information that can be retrieved from ultrafast TRCD, making it a sensitive tool to unravel details in the sub-picosecond dynamics of photoinduced chirality changes.

Ultrafast time-resolved (TR) pump-probe spectroscopy offers the possibility to monitor chemical reactions, such as the making and breaking of bonds, electron transfer, and conformational changes on the femto- to picosecond timescale [1]. A vast number of different techniques, including TR transient absorption (TA) and TR photoelectron ionization, have been developed in the last decades [2, 3]. Conformational changes in chiral molecules can in principle be detected by circular dichroism (CD) and optical rotation spectroscopy. CD spectroscopy measures the difference in absorption of left- and right-circularly polarized light \(\Delta\epsilon=\epsilon_{L}-\epsilon_{R}\). Since the electronic CD signal depends on angular relations between the electric and magnetic transition dipole moment, it is highly sensitive to small changes in the electronic density induced by changes in the three-dimensional structure of chiral molecules. On the microsecond to second time-scale, this is a standard tool to study conformational changes in biomolecules, such as proteins and nucleic acids [4, 5]. However, due to noise caused by density fluctuations of the achiral background, the overall sensitivity of CD is rather small, which makes its application for ultrafast pump-probe spectroscopy challenging [6, 7]. Thanks to advances in the experimental set-up, large progress has been made in recent years, leading to pump-probe TRCD spectroscopy with time-resolutions of one picosecond and below [6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. While the first ultrafast TRCD measurements with sub-picosecond resolution were restricted to fixed wavelengths [12], recent advances in increasing the sensitivity also allow the broadband measurement of the TRCD [6, 16]. As is common in pump-probe spectroscopy, the TRCD signal can be fitted to decay functions, yielding overall relaxation rates [12, 17]. However, oftentimes the TRCD contains an oscillatory fine structure, which is usually neglected in the analysis, but may potentially give more information about the structural dynamics.
To relate the oscillatory structure of the TRCD to the underlying structural dynamics, we apply non-adiabatic excited state molecular dynamics simulations [18, 19, 20], which have been shown to provide structural information of the photodynamics in a variety of organic systems [21, 22] and are therefore well-suited to complement pump-probe experiments [23]. Here, we apply time-dependent density functional theory surface hopping (TDDFT-SH) molecular dynamics simulations to model the TRCD along the photoinduced ring-opening reaction of provitamin D, which constitutes the initial step in natural vitamin D photosynthesis, and for which an experimental TRCD has been measured.[12] The photoinduced electrocyclic ring-opening reaction of 7-Dehydrocholesterol (Figure 1), also known as provitamin D (Pro), has been extensively studied by time-resolved transient absorption spectroscopy in different solvents[24, 25] and phospholipid bilayers.[26, 27] After the ring-opening, which occurs within hundreds of femtoseconds to a few picoseconds after photoexcitation,[19, 28] the formed rotationally flexible seco-steroid1 previtamin D (Pre) undergoes several stages of rotational isomerizations (Figure 1), as non-adiabatic simulations[19] and TRTA spectroscopy[26] reveal.

Figure 1: Photoinduced ring-opening of Pro followed by rotational isomerization of Pre. Dihedral angles \(\phi_{1}\) and \(\phi_{2}\) are defined by the atoms C10-C5-C6-C7 and C6-C7-C8-C9, respectively.

Rotational isomers of vitamin D seco-steroids are of crucial importance in the intrinsic self-regulation of vitamin D synthesis [29]: On one hand, they give rise to a pronounced wavelength-dependent, conformationally controlled photochemistry of vitamin D derivatives [29, 30, 31, 32, 33]. On the other hand, rotational isomerization possibly affects the thermal formation of vitamin D via a [1, 7]-sigmatropic hydrogen shift from C19 to C9, which is thought to be possible only in helical gZg Pre isomers, where the hydrogen donor (C19) is in close vicinity to the acceptor atom (C9). This last step in vitamin D formation has been found to be enhanced in biological membranes compared to isotropic solutions, possibly by trapping the gZg conformer due to steric interactions with the phospholipid molecules [34, 35]. Understanding natural vitamin D formation in the skin therefore requires detailed knowledge about the dynamics and distribution of rotational isomers. Besides TRTA spectroscopy, the ring-opening reaction in Pro has been investigated by ultrafast time-resolved circular dichroism spectroscopy (TRCD) by Dietzek and coworkers with a 120 fs time-resolution along fixed probe wavelengths in the UV region [12]. Being a biological paradigm of an ultrafast photoinduced electrocyclic reaction, this reaction also constitutes a perfect test case for ultrafast pump-probe TRCD spectroscopy due to its expected chirality changes on a femtosecond time-scale: The ring-opening causes the molecule to lose one asymmetric carbon center, and furthermore, the central triene unit of Pre can adopt a left- or right-handed helical conformation, two effects that are expected to cause a change in the CD signal. The cited study constitutes an important step in the application of ultrafast TRCD and made it possible to confirm the previously measured excited state lifetime of Pro of about 1 ps [24, 25, 36]. However, a closer look at the time-dependent CD signal measured by Dietzek et al. reveals an oscillatory structure besides the approximate exponential decay.
In the original work, this feature was not further discussed [12]. To obtain more structural information from TRCD and to determine the cause of the oscillatory structure in the TRCD of Pro, we reinvestigate TDDFT-SH trajectories from our previous study [20]. In TDDFT-SH, 62 % of the trajectories successfully form the open-ring photoisomer Pre. The rotational degrees of freedom created by the ring-opening allow the initially formed g-Zg- conformers to relax further, adopting a distribution of different rotamers characterized by the values of \(\phi_{1}\) and \(\phi_{2}\) defined in Fig. 1. Simulations show that first a rotation around the C5-C6 bond occurs, forming t+Zg- Pre, which is superimposed by a simultaneous slower rotation around the C7-C8 bond, giving access to other rotamers. Initially, isomerization occurs coherently among the trajectories, which then increasingly dephase in their rotational motion until they eventually form a Boltzmann ensemble of rotational isomers that does not exhibit any memory of the excited state ring-opening process. The rotational dephasing time amounts to approximately 4-5 ps in gas phase simulations. In solution, this redistribution process has been found to be almost completed within tens of picoseconds to up to more than 100 ps, depending on the viscosity of the solvent [24, 27]. The TDDFT-SH approach is described in detail elsewhere [20]. In brief, initial structures of Pro are obtained from a Boltzmann ensemble at room temperature, generated by Born-Oppenheimer molecular dynamics (BOMD). The nuclear coordinates of the initial structures were propagated using TDDFT nuclear forces of the first singlet excited state (S\({}_{1}\)). Non-adiabatic coupling vectors between S\({}_{1}\) and the ground state (S\({}_{0}\)) were computed at each timestep and used to compute the _fewest switches_ probability of non-adiabatic transitions between electronic states according to Tully [37]. TDDFT-SH trajectories from our previous study [19], which all decayed to S\({}_{0}\) within 2 ps of simulation time, were extended by BOMD in S\({}_{0}\) to a total simulation time of 4.8 ps to obtain information about the hot ground state dynamics that followed the excited state relaxation. CD spectra can be efficiently calculated by linear response theory [38, 39, 40]. The central quantity of electronic CD is the electric dipole-magnetic dipole polarizability tensor \(G_{jk}\). In isotropic systems only its average value \[G(\omega)=\frac{1}{3}\sum_{j}G_{jj}(\omega) \tag{1}\] is measured. The imaginary part of \(G(\omega)\) at real frequencies is related to the shape of the CD spectrum. Within linear response TDDFT, it can be calculated from the response vector \(|X,Y\rangle\), resulting from the solution of the time-dependent Kohn-Sham eigenvalue problem [38, 41, 42], \[\mathrm{Im}[G(\omega)]=-\frac{c}{3\omega}\sum_{j}\langle\mu^{(j)}|X^{(j)}(\omega),Y^{(j)}(\omega)\rangle. \tag{2}\] According to \[\mathrm{Im}[G(\omega)]=\frac{c\pi}{3}\sum_{n}\frac{1}{\Omega_{0n}}\big(R_{0n}\delta(\omega-\Omega_{0n})-R_{0n}\delta(\omega+\Omega_{0n})\big)\,, \tag{3}\] this quantity is related to the rotatory strength \[R_{0n}=-\mathrm{Im}[\mathbf{\mu}_{0n}\cdot\mathbf{m}_{0n}]\,, \tag{4}\] where \(\mathbf{\mu}_{0n}\) and \(\mathbf{m}_{0n}\) are the electric and magnetic transition dipole moments, respectively. In practice, the rotatory strengths are obtained as standard output from TDDFT calculations [40].
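As a small numerical illustration of Eq. (4), the sketch below evaluates rotatory strengths from electric and magnetic transition dipole moments; array names and units are illustrative assumptions.

```python
import numpy as np

def rotatory_strengths(mu, m):
    """R_0n = -Im[mu_0n . m_0n] for each excited state n (Eq. 4).

    mu, m: (n_states, 3) complex arrays of electric and magnetic
    transition dipole moments in consistent (illustrative) units.
    """
    mu = np.asarray(mu, dtype=complex)
    m = np.asarray(m, dtype=complex)
    return -np.imag(np.sum(mu * m, axis=1))
```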
Applying Gaussian broadening with a given linewidth (LW) they can be converted to the \(\Delta\epsilon\) signal [43], measured in CD spectroscopy. To simulate the CD spectra, we averaged CD spectra of single molecular structures obtained from Gaussian broadening of rotatory strengths computed by TDDFT to obtain the macroscopic spectrum of the ensemble of rotational isomers. To compute the TRCD signal, we assume that the CD signal is caused by ground state absorption, rather than excited state absorption. Provided that the UV pump-pulse induces a \(S_{1}\gets S_{0}\) transition, this is a reasonable assumption if the probe wavelength is also in the UV region, since higher \(S_{n}\gets S_{1}\) absorption energies usually appear at lower energies than the excitation energy of S\({}_{1}\). According to this assumption, only the fraction of molecules that have already been relaxed to the ground state after initial excitation gives rise to the CD signal at delay time \(\tau\). Within surface hopping, the CD signal at time delay time \(\tau\) is then calculated as average over the number of trajectories in the ground state (\(N_{0}(\tau)\)): \[\Delta\epsilon(\tau)=\epsilon_{l}(\tau)-\epsilon_{r}(\tau)=1/N\sum_{i}^{N_{0}( \tau)}\Delta\epsilon_{i}(\tau)\,, \tag{5}\] where \(\Delta\epsilon_{i}(\tau)\) denotes the instantaneous CD spectrum of trajectory \(i\) at time \(\tau\), obtained from the rotatory strengths of the corresponding molecular structures by Gaussian broadening; \(N\) denotes the total number of trajectories. The \(\Delta\)CD signal at time \(\tau\) is obtained by subtracting the instantaneous spectrum \(\Delta\epsilon(\tau)\) from the static CD spectrum of the parent molecule Pro. In the experimental study [12], however, due to the unknown sign of the instantaneous spectrum, the instantaneous spectrum was added to the static spectrum of Pro and not subtracted. To achieve best comparability between simulated and measured spectrum, we also added the instantaneous spectrum to the static spectrum in our calculation. Before we present the TRCD, we assess the dependency of the rotatory strengths on the dihedral angles \(\phi_{1}/\phi_{2}\) in Pre. To this end, we computed the excitation energies and rotatory strengths for the ground state ensemble of Pre rotamers obtained from replica exchange molecular dynamics (REMD) (Figure 2). From the overall symmetry of this plot, it is visible that the rotatory strengths are sensitive to the dihedral angle conformation of Pre. In particular, the helicality affects the sign and magnitude of the rotatory strengths. Most obviously, this effect emerges in the comparison between the rotatory strengths of t+Zg- and t-Zg+ conformers, which have opposite helicality: values of t+Zg- conformers exhibit positive values ranging from 100-250\(\times 10^{-40}\)erg, whereas t-Zg+ conformers exhibit negative rotatory strength of similar magnitude. A similar opposite relationship appears in the comparison between t+Zt+ and t-Zt- conformers. For gZg conformers, in contrast, rotatory strengths are less sensitive to the dihedral angles, exhibiting values close to zero. Since the positive rotatory strengths in the upper half of the plot in Fig. 2 are dominating at 300 K, the overall CD spectrum of Pre obtained from Gaussian broadening of the rotatory strengths exhibits a positive band in the 240-380 nm region (Figure 3, yellow), which is opposite in sign compared to the static spectrum of Pro (purple). 
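To make the construction of the instantaneous ensemble spectrum (Eq. 5) concrete, here is a minimal sketch under simplifying assumptions: per-trajectory rotatory strengths are Gaussian-broadened on a common energy grid and averaged over the trajectories that have already relaxed to the ground state; all names, units, and the normalization are illustrative, not the authors' exact conversion.

```python
import numpy as np

def broadened_cd(energies, R, grid, linewidth=0.5):
    """Gaussian-broadened CD curve of one structure (arbitrary units)."""
    E = np.asarray(energies, dtype=float)[:, None]
    g = np.exp(-((np.asarray(grid)[None, :] - E) ** 2) / (2.0 * linewidth ** 2))
    return (np.asarray(R, dtype=float)[:, None] * g).sum(axis=0)

def instantaneous_cd(per_traj_energies, per_traj_R, in_ground_state, grid):
    """Ensemble average over trajectories already in S0 (Eq. 5)."""
    n_total = len(per_traj_energies)
    spectrum = np.zeros_like(np.asarray(grid, dtype=float))
    for E, R, relaxed in zip(per_traj_energies, per_traj_R, in_ground_state):
        if relaxed:
            spectrum += broadened_cd(E, R, grid)
    return spectrum / n_total

# The Delta-CD signal at delay tau is then obtained by combining this
# instantaneous spectrum with the static Pro spectrum (added here,
# following the sign convention discussed in the text).
```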
This qualitative trend, a positive CD band of Pre opposite in sign to that of Pro, is confirmed by experimental measurements [12, 44]. However, neither experimental source reports the concentration at which the measurements were carried out; therefore, we cannot compare the absolute values of \(\Delta\epsilon\) of the molecules. To simulate the TRCD, we investigate the dynamics of the first 4.8 picoseconds after initial photoexcitation in terms of the rotatory strengths and the instantaneous CD spectrum (Fig. 4), allowing us to construct the TRCD spectrum (Fig. 5). In the time window 600-960 fs (Fig. 4, **A**), over 90 % of the trajectories have decayed to S\({}_{0}\) and a substantial amount of the initially formed g-Zg- conformers has isomerized to t+Zg- conformers, exhibiting strongly positive rotatory strengths (Fig. 4, **A**, left column), causing an instantaneous CD spectrum with strong intensity and positive sign (Fig. 4, **A**, right column, black).

Figure 2: TDDFT rotatory strengths (10\({}^{-40}\) cgs, length representation) of Pre as function of the dihedral angles \(\phi_{1}\) and \(\phi_{2}\) (defined in Fig. 1), computed for snapshot structures of Pre generated via REMD.

Figure 3: Comparison between static CD spectra of Pro and Pre. Experimental spectrum of Pre from Maessen et al. [44], measured in ether/iso-pentane/alcohol at 92 K; experimental spectrum of Pro from [12]. TDDFT spectra of Pre and Pro at 300 K with a LW of 0.1 eV were computed using the 10 lowest excited states, whereas the TDDFT spectrum of Pro with LW 0.5 eV was computed with the lowest 2 excited states. The latter spectrum (red) was used as static reference in the calculation of the TRCD spectrum. All spectra have been normalized to a maximum intensity of 1.

The strong positive band overcompensates the negative band of the static CD spectrum of Pro (Fig. 4, **A**, right column, red), leading to a dip in the broadband TRCD spectrum (Fig. 5, upper panel), as well as in its 280 nm and 320 nm traces (Fig. 5, lower panel, **A**), that reaches its maximum at approximately 0.8 ps. After reaching this maximum, the TRCD signal partially recovers (Fig. 5, lower panel, **B**), due to conversion of t+Zg- conformers to g+Zg+ and g+Zt- conformers with low-magnitude rotatory strengths (Fig. 4, **B**, left), giving rise to a less pronounced positive band in the instantaneous spectrum (Fig. 4, **B**, right panel, black). Subsequently, the band of the instantaneous spectrum increases due to the formation of t+Zt+ conformers, exhibiting strongly positive rotatory strengths (Fig. 4, **C**, left). The following decrease of the instantaneous spectrum is caused by depopulation of t+Zt+ conformers and simultaneous formation of t-Zg+ conformers with negative rotatory strengths (Fig. 4, **D**, left). The latter oscillation (Fig. 5, **C** and **D**) appears with lower amplitude than the first one (**A** and **B**), due to an overall dephasing of the ensemble in the \(\phi_{1}/\phi_{2}\) space until a relatively constant signal is reached (Fig. 4, lower panel, **E**). The constant spectrum (Fig. 4, **E**, right, yellow) is caused by the superposition of the CD spectra of the equilibrium ensemble of the product Pre, its remaining parent molecule Pro, and the static spectrum of Pro used as reference. However, since the total simulation time only amounts to 4.8 ps, we cannot ultimately determine if the full equilibrium has been reached.
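To follow how trajectories populate the different rotamers over time (as in the \(\phi_{1}/\phi_{2}\) distributions discussed above), one can bin the two dihedral angles into g±/t± labels; the angle ranges below are a common convention and are an assumption of this sketch, not values taken from the paper.

```python
import numpy as np

def dihedral_label(phi_deg):
    """Classify a single dihedral angle (degrees, wrapped to [-180, 180))."""
    phi = (phi_deg + 180.0) % 360.0 - 180.0
    if -90.0 <= phi < 0.0:
        return "g-"
    if 0.0 <= phi < 90.0:
        return "g+"
    return "t-" if phi < 0.0 else "t+"

def rotamer_label(phi1_deg, phi2_deg):
    """E.g. ('t+', 'g-') -> 't+Zg-' in the naming used in the text."""
    return f"{dihedral_label(phi1_deg)}Z{dihedral_label(phi2_deg)}"

# Example: count rotamers in a time window from arrays of (phi1, phi2) values
# labels = [rotamer_label(p1, p2) for p1, p2 in zip(phi1_window, phi2_window)]
```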
Figure 4: Left column: Distribution of structures in the \(\phi_{1}/\phi_{2}\) conformational space for the time windows **A** (633–960 fs), **B** (1180–1392 fs), **C** (1516–1862 fs), **D** (2179–2563 fs), and **E** (3053–4800 fs). Red: positive rotatory strengths, blue: negative rotatory strengths, green: rotatory strengths close to zero. Right column: Instantaneous CD spectrum \(\Delta\epsilon\) (black), static spectrum of Pro (red), and difference spectrum \(\Delta\)CD (yellow), all averaged over the given time interval. Blue and green vertical lines indicate the wavelengths at which traces were taken to generate Fig. 5.

Similar oscillations are visible in the experimentally measured TRCD, albeit at lower frequency. The time in our simulations until the oscillations approximately disappear and a relatively constant TRCD signal is reached (\(\approx 4\) ps) is much shorter than in the experimental spectrum (\(\approx 14\) ps). This difference might be caused by two reasons: Most likely the solvent viscosity in the experimental spectrum decelerates the process of rotational isomerization. Furthermore, TDDFT-SH simulations are known to underestimate excited state lifetimes,[45] leading to a steeper initial decay of the simulated signal compared to the initial decay in the experimental TRCD. Nevertheless, our simulations suggest that the oscillations in the experimental spectrum are due to rotational isomerizations, which allows us to give a refined picture of this process in solution: In solution the first dip is reached at 2-2.5 ps; comparison with the simulated spectrum allows us to assign this time window to the formation of a high density of t+Zg- structures. Similarly, the second maximum in the experimental spectrum at 7-8 ps (corresponding to time window **C** in the simulations) indicates the formation of the t+Zt+ structures. At approximately 10 ps, t-Zg+ conformers are formed, which corresponds to time window **D** in the simulations. After about 14 ps, the experimental TRCD appears more or less constant, indicating that an equilibrium ensemble has been reached. However, since the total time in the TRCD measurement amounts to only 18 ps, it cannot be determined if the remaining oscillations are due to rotational isomerization or due to noise in the measurements. Despite the differences in timescales, the simulations allow us to give a detailed description of the isomerization process in solution. To include solvent viscosity, the simulations could be carried out using QM/MM, adopting a classical description of the solvent. To achieve better accuracy of the initial decay, more sophisticated non-adiabatic molecular dynamics methods, such as multiple spawning[46] or decoherence-corrected surface hopping,[47] could be applied. Nevertheless, our study shows that TRCD in combination with non-adiabatic molecular dynamics simulations is a viable tool to investigate chirality changes on a femto- to picosecond time scale. The synergy between experiment and simulations has the potential to yield information about structural changes that cannot be obtained by the experiment alone.

## 2 Computational Details

All DFT and TDDFT calculations employ split valence, triple-\(\zeta\) SVP basis sets [48] and the hybrid PBE0 [49] functional. For the non-adiabatic dynamics, Tully's fewest switches surface hopping [37] was employed as previously described [19, 20]. Excited state nuclear gradients and non-adiabatic couplings were computed analytically [39, 50].
To prevent imaginary excitation energies, TDDFT-SH was applied within the Tamm-Dancoff approximation [51]. The nuclear degrees of freedom were integrated using the Verlet algorithm [52]. An NVT ensemble of initial structures and velocities of Pro was generated using BOMD, employing a Nosé-Hoover thermostat with a target temperature of 300 K and a characteristic response time of 500 au. For BOMD, a time step of 50 au was used for the propagation of the nuclear positions; for TDDFT-SH, a time step of 40 au was used. More details for the generation of the TDDFT-SH trajectories can be found in our earlier paper [19]. The calculation of the static CD spectrum of Pro and Pre was done using an average of 200 and 500 snapshot geometries, respectively. The lowest ten excitation energies and rotatory strengths were computed for each snapshot. For Pro, BOMD was used to generate the ensemble of structures, whereas for Pre, enhanced sampling using REMD [53] was applied, as described elsewhere [3, 27, 31]. For the static spectra, Gaussian line broadening was applied with a LW of 0.1 eV, yielding CD-spectra of the individual snapshot geometries. For the TRCD spectrum, a LW of 0.5 eV was applied to convert rotatory strengths to \(\Delta\epsilon\), for both the static spectrum used as reference, as well as the instantaneous spectrum. This was necessary to reduce noise due to limited sampling. For computational efficiency, only the lowest two excited states were considered in the case of the TRCD. The macroscopic spectrum was then calculated as an average over the 116 TDDFT-SH trajectories (Eq. 5), evaluated at every 10th time step. This results in a time resolution of 9.6 fs, which is below the experimental resolution of 120 fs. For the TRCD spectra, the full TDDFT response equations were solved. At each of these time steps the CD spectrum was calculated, but only trajectories that had relaxed to the ground state were taken into account. The \(\Delta\)CD spectrum is usually obtained by subtraction of the instantaneous spectrum from the static spectrum. However, in the experimental measurement, probe-pulses with opposite circular polarization are alternated using Pockel cells [12], but it is unknown which one of a pair of consecutive pulses is left-circularly polarized and which one is right-circularly polarized. It is thus impossible to gauge the absolute sign of the instantaneous spectrum; therefore, it is also possible that the \(\Delta\)CD spectrum in the experimental work [12] was computed by adding the instantaneous spectrum to the static spectrum of Pro. We applied both procedures and obtained better agreement with the experimental TRCD spectrum when summation was applied. This is also consistent with the fact that the long-time limit of the TRCD is less negative than at time zero: Due to the positive sign of the CD spectrum of Pre, the long-time limit of the TRCD should be less negative than the static spectrum if addition of the two spectra is applied; this is the case in the experimental spectrum. All electronic structure calculations were performed with Turbomole 6.4 [54, 55]. The construction of the TRCD was done with MATLAB [56]. We acknowledge useful discussions with Benjamin Dietzek and thank him for providing the data from his experiments. Research reported in this paper was supported by the National Institute of General Medical Sciences of the National Institutes of Health (NIH) under award numbers R15GM126524, UL1GM118979-02, TL4GM118980, and RL5GM118978.
The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. We acknowledge technical support from the Division of Information Technology of CSULB. Figure 5: Upper panel: Simulated broadband time-resolved CD spectrum along the photoinduced Pro ring-opening reaction. The colorbar indicates the \(\Delta\)CD signal in L/(mol\(\times\)cm). Lower panel: Traces along 280 and 300 nm taken from the broadband spectrum. Blue and green circles indicate the average \(\Delta\)CD signal over the time windows A, B, C, D, and E, respectively. The distribution of conformers for these time windows is indicated in Fig. 4.
2309.02414
Cohen-Macaulay weighted chordal graphs
In this paper I give a combinatorial characterization of all the Cohen-Macaulay weighted chordal graphs. In particular, it is shown that a weighted chordal graph is Cohen-Macaulay if and only if it is unmixed.
Shuai Wei
2023-09-05T17:46:18Z
http://arxiv.org/abs/2309.02414v1
# Cohen-Macaulay weighted chordal graphs ###### Abstract. In this paper I give a combinatorial characterization of all the Cohen-Macaulay weighted chordal graphs. In particular, it is shown that a weighted chordal graph is Cohen-Macaulay if and only if it is unmixed. ## 1. Introduction **Convention.** Throughout this paper, let \(\mathbb{N}=\{1,2,\dots,\}\), \(n\in\mathbb{N}\), \(\mathbb{K}\) a field, and \(G=(V,E)\) a (finite simple) graph with vertex set \(V=V(G)=[n]=\{1,\dots,n\}\) and edge set \(E=E(G)\). An edge between vertices \(i\) and \(j\) is denoted \(ij\). Combinatorial commutative algebra is a branch of mathematics that uses combinatorics and graph theory to understand certain algebraic constructions; it also uses algebra to understand certain objects in combinatorics and graph theory. To the graph \(G\) one associates the positive integer-valued function \(\lambda:E\rightarrow\mathbb{N}\), producing a weighted graph \(G_{\lambda}\). For a weighted graph \(G_{\lambda}\) Paulsen and Sather-Wagstaff [4] introduce the weighted edge ideal \(I(G_{\lambda})\subseteq\mathbb{K}[X_{1},\dots,X_{n}]\) which is generated by all monomials \(X_{i}^{\lambda(ij)}X_{j}^{\lambda(ij)}\) such that \(ij\in E\). In particular, if \(\lambda\) is the constant function defined by \(\lambda(ij)=1\) for \(ij\in E\), then \(I(G_{\lambda})=I(G)\), where \(I(G)\) is the edge ideal associated to \(G\) in \(\mathbb{K}[X_{1},\dots,X_{n}]\), given by \(I(G)=(X_{i}X_{j}\mid ij\in E)\). A weighted graph \(G_{\lambda}\) is called Cohen-Macaulay over \(\mathbb{K}\) if \(\mathbb{K}[X_{1},\dots,X_{n}]/I(G_{\lambda})\) is a Cohen-Macaulay ring, and is called Cohen-Macaulay if it is Cohen-Macaulay over any field. The general problem is to classify the weighted graphs which are Cohen-Macaulay over \(\mathbb{K}\). As for unweighted graphs one cannot expect a general classification theorem. Paulsen and Sather-Wagstaff [4] characterized all Cohen-Macaulay weighted \(K_{1}\)-corona of graphs and in particular all Cohen-Macaulay weighted trees. In this paper we classify all Cohen-Macaulay weighted chordal graphs following the classification of all Cohen-Macaulay chordal graphs by Herzog, Hibi, and Zheng[2]. The characterization is purely graph-theoretical, and it turns out that for weighted chordal graphs the Cohen-Macaulay property is independent of the field \(\mathbb{K}\). In Section 2, we recall some definitions and notations from [1], [2], and [4]. We also prove a lemma used in proving the sufficient condition for the Cohen-Macaulay property to hold for weighted chordal graphs. In Section 3, we classify all Cohen-Macaulay weighted chordal graphs. Theorem 3.1 gives a sufficient condition and Theorem 3.3 says that the sufficient condition is also a necessary condition. ## 2. Preliminaries In subsequent sections, let \(\lambda:E\rightarrow\mathbb{N}\) be a positive integer-valued function and \(G_{\lambda}\) a weighted graph. **Definition 2.1**.: [1] A _path_ in \(G\) is a non-empty subgraph \(P=(V^{\prime},E^{\prime})\) of the form \(V^{\prime}=\{v_{0},v_{1},\ldots,v_{r}\}\) and \(E^{\prime}=\{v_{0}v_{1},v_{1}v_{2},\ldots,v_{r-1}v_{r}\}\), where \(r\in\mathbb{N}\sqcup\{0\}\), we denote the path by \(P=v_{0}v_{1}\cdots v_{r}\) for simplicity and define \(\hat{P}_{0}=v_{1}\cdots v_{r}\). 
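As a small computational illustration of the weighted edge ideal from the introduction, the sketch below lists the monomial generators \(X_{i}^{\lambda(ij)}X_{j}^{\lambda(ij)}\) of \(I(G_{\lambda})\) for a toy weighted graph; the example graph and its weights are hypothetical, chosen only to show the construction.

```python
from sympy import symbols

def weighted_edge_ideal_gens(n, weighted_edges):
    """Generators X_i^{lambda(ij)} * X_j^{lambda(ij)} of I(G_lambda) in K[X_1,...,X_n].

    weighted_edges: dict mapping an edge (i, j) with i < j to its weight lambda(ij).
    """
    X = symbols(f"X1:{n + 1}")          # X1, ..., Xn
    return [X[i - 1] ** w * X[j - 1] ** w for (i, j), w in weighted_edges.items()]

# Toy example (hypothetical weights): a path 1-2-3 with lambda(12)=2, lambda(23)=1
print(weighted_edge_ideal_gens(3, {(1, 2): 2, (2, 3): 1}))
# [X1**2*X2**2, X2*X3]
```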
**Definition 2.2**.: [1] If \(G^{\prime}=(V^{\prime},E^{\prime})\) is a subgraph of \(G\) and \(E^{\prime}=\{ij\in E\mid i,j\in V^{\prime}\}\), then \(G^{\prime}\) is an _induced subgraph_ of \(G\); we say that \(V^{\prime}\)_induces_ or _spans_\(G^{\prime}\) in \(G\), and write \(G^{\prime}=G[V^{\prime}]\). A subgraph \(G^{\prime}=(V^{\prime},E^{\prime})\) of \(G\) is a _spanning_ subgraph of \(G\) if \(V^{\prime}\) spans all of \(G\), i.e., if \(V^{\prime}=V\). If \(U\subseteq V\), we write \(G-U\) for \(G[V\smallsetminus U]\). A maximal connected subgraph of \(G\) is a _component_ of \(G\). If \(G_{1},\ldots,G_{n}\) are components of \(G\), then \(G_{i}=G[V(G_{i})]\) for \(i=1,\ldots,n\) and \(V(G_{1}),\ldots,V(G_{n})\) partition \(V\). **Definition 2.3**.: A _rooted tree_\(T\) is a tree with a special vertex \(r\in V(T)\) labelled as the "root" of the tree. A _rooted forest_ is a graph whose components are rooted trees. A subgraph \(G^{\prime}=(V^{\prime},E^{\prime})\) of \(G\) is called a _rooted spanning forest_ of \(G\) if \(G^{\prime}\) is a rooted forest and \(V^{\prime}\) spans \(G\). **Definition 2.4**.: Let \(T\) be a rooted tree with root \(r\). For any \(v\in T\smallsetminus\{r\}\), there is a unique path from \(r\) to \(v\), say \(v_{1}\ldots v_{k}\) with \(v_{1}=r\) and \(v_{k}=v\), we say that \(v_{i}\) is a _\(T\)-parent_ of \(v_{i+1}\) and \(v_{i+1}\) is a _\(T\)-child_ of \(v_{i}\) for \(i=1,\ldots,k-1\). If \(\ell\in V(T)\) has no \(T\)-child, we say that \(\ell\) is a _\(T\)-leaf_, otherwise, it is a _\(T\)-inner vertex_. In particular, if \(V(T)=\{r\}\), then \(r\) is a _\(T\)-leaf_ but not a \(T\)-inner vertex. **Definition 2.5**.: [2] A _stable subset_ or _clique_ of the graph \(G\) is a subset \(F\) of \([n]\) such that \(ij\in E\) for all \(i,j\in F\) with \(i\neq j\). We write \(\Delta(G)\) for the simplicial complex on \([n]\) whose faces are the stable subsets of \(G\). A vertex of \(\Delta(G)\) is _free_ if it belongs to exactly one facet of \(\Delta(G)\), otherwise it is called _nonfree_. **Definition 2.6**.: A _chord_ in the graph \(G\) refers to an edge that connects two non-adjacent vertices within a cycle. The graph \(G\) is called _chordal_ if every cycle of length \(>3\) has a chord. The weighted graph \(G_{\lambda}\) is called a _weighted chordal graph_ if the underlying graph \(G\) is chordal. **Definition 2.7**.: [4, Definition 1.4] A _weighted vertex cover_ of \(G_{\lambda}\) is an ordered pair \((V^{\prime},\delta^{\prime})\) with a subset \(V^{\prime}\subseteq V\) and a function \(\delta^{\prime}:V^{\prime}\rightarrow\mathbb{N}\) such that for each edge \(ij\in E\), we have that 1. \(i\in V^{\prime}\) and \(\delta^{\prime}(i)\leq\lambda(ij)\), or 2. \(j\in V^{\prime}\) and \(\delta^{\prime}(j)\leq\lambda(ij)\). The number \(\delta^{\prime}(i)\) is the _weight_ of \(v_{i}\). **Note 2.8**.: If \((V^{\prime},\delta^{\prime})\) is a weighted vector cover of \(G_{\lambda}\), then \(V^{\prime}\) is a vector cover of \(G\) by definition. **Definition 2.9**.: [4, Definition 1.9] Given two weighted vertex covers \((V^{\prime}_{1},\delta^{\prime}_{1})\) and \((V^{\prime}_{2},\delta^{\prime}_{2})\) of \(G_{\lambda}\), we write \((V^{\prime}_{2},\delta^{\prime}_{2})\leq(V^{\prime}_{1},\delta^{\prime}_{1})\) if \(V^{\prime}_{2}\subseteq V^{\prime}_{1}\) and \(\delta^{\prime}_{2}\geq\delta^{\prime}_{1}|_{V^{\prime}_{2}}\). 
A weighted vertex cover \((V^{\prime},\delta^{\prime})\) is _minimal_ if there does not exist another weighted vertex cover \((V^{\prime\prime},\delta^{\prime\prime})\) such that \((V^{\prime\prime},\delta^{\prime\prime})<(V^{\prime},\delta^{\prime})\). The _cardinality_ of \((V^{\prime},\delta^{\prime})\) is defined to be the cardinality of \(V^{\prime}\), in symbols, \(|(V^{\prime},\delta^{\prime})|=|V^{\prime}|\). A weighted graph \(G_{\lambda}\) is called _unmixed_ if all of the minimal weighted vertex covers of \(G_{\lambda}\) have the same cardinality, otherwise it is called _mixed_. For the proof of our first main theorem we need the following algebraic fact: **Lemma 2.10**.: Let \(R\) be a Noetherian ring, \(S=R[X_{1},\ldots,X_{n}]\), \(k\in\{0,\ldots,n-1\}\) and \(J=(I_{1}X_{1},\ldots,I_{k}X_{k},\{X_{i}^{m_{ij}}X_{j}^{m_{ij}}\}_{1\leq i<j\leq n})\) an ideal of \(S\), where \(I_{1},\ldots,I_{k}\) are ideals of \(R\), no \(I_{j}\)'s exists in \(J\) if \(k=0\), and \(m_{ij}\in\mathbb{N}\) for any \(i,j\in\mathbb{N}\). Then \(X=\sum_{i=1}^{n}X_{i}\) is a non-zero divisor on \(S/J\). Proof.: Let \(A\subseteq[n]\) be nonempty. Let \((K_{A})_{\lambda_{A}}\) be the weighted complete graph on \(A\) with the function \(\lambda_{A}:E(K_{A})\to\mathbb{N}\) given by \(\lambda_{A}(ij)=m_{ij}\) for \(i,j\in A\) such that \(1\leq i<j\leq n\). For any weighted vertex cover \((V^{\prime},\delta^{\prime})\) of \((K_{A})_{\lambda_{A}}\), define an ideal \(P^{A}(V^{\prime},\delta^{\prime}):=(X_{i}^{\delta^{\prime}(i)}\mid i\in V^{ \prime})\) in \(\mathbb{K}[X_{j}\mid j\in A]\). Let \(I((K_{A})_{\lambda_{A}})\) be the weighted edge ideal of \((K_{A})_{\lambda_{A}}\) in \(\mathbb{K}[X_{j}\mid j\in A]\). Then by [4, Theorem 3.5], we have that \[I((K_{A})_{\lambda_{A}})=\bigcap_{\min.\ (V^{\prime},\delta^{\prime})}P^{A}(V^{ \prime},\delta^{\prime}),\] where the intersection is taken over all minimal weighted vertex covers of \((K_{A})_{\lambda_{A}}\). For any \(B\subseteq[n]\), let \(\mathfrak{X}_{B}=(X_{j}\mid j\in B)S\). For any \(T\subseteq[k]\), set \(I_{T}=\sum_{j\in T}I_{j}\). Then \[J =(I_{1}X_{1},\ldots,I_{k}X_{k},I((K_{[n]})_{\lambda_{[n]}}))S\] \[=\bigcap_{T\subseteq[k]}(I_{T},\mathfrak{X}_{[k]\smallsetminus T})+ \bigcap_{(W^{\prime},\gamma^{\prime})}P^{[n]}(W^{\prime},\gamma^{\prime})\] \[=\bigcap_{T\subseteq[k]}\bigcap_{(W^{\prime},\gamma^{\prime})} \bigl{(}I_{T},\mathfrak{X}_{[k]\smallsetminus T},P^{[n]}(W^{\prime},\gamma^{ \prime})\bigr{)}S\] \[=\bigcap_{T\subseteq[k]}\bigcap_{(V^{\prime},\delta^{\prime})} \bigl{(}I_{T},\mathfrak{X}_{[k]\smallsetminus T},P^{[n]\smallsetminus([k] \smallsetminus T)}(V^{\prime},\delta^{\prime})\bigr{)}S,\] where \((W,\gamma^{\prime})\) runs through all minimal weighted vertex covers of \((K_{[n]})_{\lambda_{[n]}}\), and \((V^{\prime},\delta^{\prime})\) runs through all minimal weighted vertex covers of \((K_{[n]\smallsetminus([k]\smallsetminus T)})_{\lambda_{[n]\smallsetminus([k]\smallsetminus T )}}\). The third equality follows from [3, Lemma 7.3.2] since \(R\) is Noetherian. To prove that \(X\) is a non-zero divisor modulo \(J\) it suffices to show that \(X\) is a non-zero divisor modulo each of the ideals \((I_{T},\mathfrak{X}_{[k]\smallsetminus T},P^{[n]\smallsetminus([k]\smallsetminus T)}(V ^{\prime},\delta^{\prime}))S\). 
It is equivalent to show that \(X\) is a non-zero divisor on \[\frac{\overline{R}[X_{1},\ldots,X_{n}]}{(\mathfrak{X}_{[k]\smallsetminus T},P^{[n]\smallsetminus([k]\smallsetminus T)}(V^{\prime},\delta^{\prime}))},\] where \(\overline{R}=\frac{R}{I_{T}}\). The associated prime of the primary ideal \((\mathfrak{X}_{[k]\smallsetminus T},P^{[n]\smallsetminus([k]\smallsetminus T)}(V^{\prime},\delta^{\prime}))\), which is generated by pure powers, is \((\mathfrak{X}_{[k]\smallsetminus T},\mathfrak{X}_{V^{\prime}})\) in \(\overline{R}[X_{1},\ldots,X_{n}]\). Since \((V^{\prime},\delta^{\prime})\) is a minimal weighted vertex cover of the weighted complete graph \((K_{[n]\smallsetminus([k]\smallsetminus T)})_{\lambda_{[n]\smallsetminus([k]\smallsetminus T)}}\), by [4, Proposition 4.6] there exists an \(\ell\in[n]\smallsetminus([k]\smallsetminus T)\) with \(\ell\not\in V^{\prime}\). Thus, \(X\) is a non-zero divisor on this quotient. 

The following example illustrates the decomposition in the previous lemma. 

**Example 2.11**.: Let \(R=\mathbb{K}[Y]\) be a polynomial ring and \(S=R[X_{1},X_{2},X_{3}]\). Consider the ideal \(J=(YX_{1},YX_{2},X_{1}^{2}X_{2}^{2},X_{1}^{2}X_{3}^{2},X_{2}^{2}X_{3}^{2})S\) of \(S\). Then with \(K=(X_{1}^{2}X_{2}^{2},X_{1}^{2}X_{3}^{2},X_{2}^{2}X_{3}^{2})S\), \[J =(YX_{1},YX_{2})S+K\] \[=(X_{1},X_{2})S\cap(Y,X_{2})S\cap(Y,X_{1})S\cap(Y)S+K\] \[=(X_{1},X_{2},K)S\cap(Y,X_{2},K)S\cap(Y,X_{1},K)S\cap(Y,K)S\] \[=(X_{1},X_{2})S\cap(Y,X_{2},X_{1}^{2}X_{3}^{2})S\cap(Y,X_{1},X_{2}^{2}X_{3}^{2})S\cap(Y,K)S\] \[=(X_{1},X_{2})S\cap(Y,K)S\] \[=(X_{1},X_{2},K)S\cap(Y,K)S\] \[=(YX_{1},YX_{2})S+K.\] In this paper, we will use Lemma 2.10 in the context of \(R=\mathbb{K}[X_{1},\dots,X_{n}]\). Before proving the two main theorems, let us look at two particular examples. 

**Example 2.12**.: The weighted edge ideal of the following weighted graph \(G_{\lambda}\) is mixed. 

[Figure: the weighted graph \(G_{\lambda}\) of Example 2.12, on the vertices \(v_{0},\ldots,v_{4}\), with the weights \(\lambda\) indicated on the edges.] 

By [4, Theorem 3.5] it suffices to find two minimal weighted vertex covers of \(G_{\lambda}\) of different cardinality. Note that there always exists a minimal weighted vertex cover of size \(3+1-2=2\). For example, \((V^{\prime},\delta^{\prime})=\{v_{1}^{1},v_{2}^{1}\}\) is a weighted vertex cover of \(G_{\lambda}\), and it is cardinality-minimal in the sense that there does not exist any weighted vertex cover \((W^{\prime},\gamma^{\prime})\) of \(G_{\lambda}\) such that \((W^{\prime},\gamma^{\prime})\leq(V^{\prime},\delta^{\prime})\) and \(|W^{\prime}|<|V^{\prime}|\). By [4, Proposition 1.12], \((V^{\prime},\delta^{\prime})\) induces a minimal weighted vertex cover \((V^{\prime},\delta^{\prime\prime})\) for some \(\delta^{\prime\prime}\geq\delta^{\prime}\). But there exists another minimal weighted vertex cover \(\{v_{3}^{\min\{2,a\}},v_{2}^{4},v_{1}^{6}\}\), whose size is \(3\). 

**Example 2.13**.: The weighted edge ideal of the following weighted graph \(G_{\lambda}\) is mixed.
[Figure: the weighted graph \(G_{\lambda}\) of Example 2.13, on the vertices \(v_{0},\ldots,v_{9}\), with the weights \(\lambda\) indicated on the edges.]

## 3. Cohen-Macaulay weighted chordal graphs

**Theorem 3.1**.: _Let \(G_{\lambda}\) be a weighted chordal graph and let \(F_{1},\ldots,F_{m}\) be the facets of \(\Delta(G)\) which admit a free vertex. Suppose that for \(i=1,\ldots,m\) there does not exist a rooted spanning forest \(\mathfrak{F}\) of \(G[F_{i}]\) such that each component \(T\) of \(\mathfrak{F}\) has a nonfree vertex \(\nu_{1}^{T}\) as its root and there is a nonfree vertex \(\nu_{0}^{T}\) in \(V\smallsetminus F_{i}\) with \(\nu_{0}^{T}\nu_{1}^{T}\in E\) satisfying the following conditions: for each path \(\nu_{1}^{T}u_{2}\cdots u_{j}\) in the tree \(T\) with \(u_{j}\) a \(T\)-leaf and \(j\geq 2\),_ \[\lambda(\nu_{0}^{T}\nu_{1}^{T})>\lambda(\nu_{1}^{T}u_{2})>\cdots>\lambda(u_{j-1}u_{j});\] _for any components \(S,T\) of \(\mathfrak{F}\), if \(u\) is an \(S\)-inner vertex and \(v\) is a \(T\)-inner vertex with \(u\neq v\), then_ \[\lambda(uv)>\min\{\max\{\lambda(uw)\mid u\text{ is the $S$-parent of }w\},\max\{\lambda(vx)\mid v\text{ is the $T$-parent of }x\}\};\] _if \(u\) is an \(S\)-inner vertex such that \(u\nu_{0}^{T}\in E\), then_ \[\lambda(u\nu_{0}^{T})>\min\{\max\{\lambda(uw)\mid u\text{ is the $S$-parent of }w\},\] \[\max\{\lambda(\nu_{0}^{T}\nu_{1}^{Y})\mid\text{$Y$ is a component of }\mathfrak{F}\text{ such that }\nu_{0}^{T}=\nu_{0}^{Y}\}\};\] _and if \(\nu_{0}^{S}\nu_{0}^{T}\in E\), then_ \[\lambda(\nu_{0}^{S}\nu_{0}^{T})>\min\{\max\{\lambda(\nu_{0}^{S}\nu_{1}^{Y})\mid\text{$Y$ is a component of }\mathfrak{F}\text{ such that }\nu_{0}^{S}=\nu_{0}^{Y}\},\] \[\max\{\lambda(\nu_{0}^{T}\nu_{1}^{Z})\mid\text{$Z$ is a component of }\mathfrak{F}\text{ such that }\nu_{0}^{T}=\nu_{0}^{Z}\}\}.\] _Then the following conditions are equivalent._ (a) \(G_{\lambda}\) _is Cohen-Macaulay;_ (b) \(G_{\lambda}\) _is Cohen-Macaulay over_ \(\mathbb{K}\)_;_ (c) \(G_{\lambda}\) _is unmixed;_ (d) \([n]\) _is the disjoint union of_ \(F_{1},\ldots,F_{m}\)_._ 

Proof.: (a) \(\Longrightarrow\) (b) is trivial. 

(b) \(\Longrightarrow\) (c) Since \(\mathbb{K}[X_{1},\ldots,X_{n}]/I(G_{\lambda})\) is Cohen-Macaulay, the ideal \(I(G_{\lambda})\) is unmixed. So \(G_{\lambda}\) is unmixed by [4, Theorem 3.5 and Proposition 3.13]. 

(c) \(\Longrightarrow\) (d) Since \(G_{\lambda}\) is unmixed, \(G\) is unmixed by [4, Proposition 1.14]. So (d) holds by [2, Theorem 2.1]. 

(d) \(\Longrightarrow\) (c) Let \((V^{\prime},\delta^{\prime})\) be a minimal weighted vertex cover of \(G_{\lambda}\) with \(V^{\prime}\subseteq[n]\) and \(\delta^{\prime}:V^{\prime}\to\mathbb{N}\). Then for \(i=1,\ldots,m\) we have \(|V^{\prime}\cap F_{i}|\geq|F_{i}|-1\) since \(F_{i}\) is a clique of \(G\). Suppose for some \(i\in\{1,\ldots,m\}\), we have \(|V^{\prime}\cap F_{i}|=|F_{i}|\), i.e., \(F_{i}\subseteq V^{\prime}\). Then \(F_{i}\) contains a nonfree vertex by [4, Proposition 4.6]. Let \(v\in F_{i}\). 

Claim. There exists a path \(\ell_{0}\ell_{1}\cdots\ell_{k-1}\ell_{k}\) in \(G\) with \(\ell_{0}\not\in F_{i}\), \(\ell_{1},\ldots,\ell_{k-1}\in F_{i}\) and \(\ell_{k}=v\) satisfying that if \(k\geq 2\), then \(\lambda(\ell_{0}\ell_{1})>\cdots>\lambda(\ell_{k-2}\ell_{k-1})>\lambda(\ell_{k-1}\ell_{k})\). 

We will use an algorithm to find such a path. Since \(v\in F_{i}\subseteq V^{\prime}\), there exists \(w_{1}\in V\smallsetminus\{v\}\) such that \(\delta^{\prime}(v)\leq\lambda(vw_{1})\underset{\text{if }w_{1}\in V^{\prime}}{<\delta^{\prime}(w_{1})}\). We then go through the following steps: 1. Initially, let \(j:=1\). 2. If \(w_{j}\not\in F_{i}\) and \(j=1\), then we have a path \(w_{1}v\) in \(G\) with \(w_{1}\not\in F_{i}\) and \(v\in F_{i}\), so the claim is justified.
If \(w_{j}\not\in F_{i}\) and \(j\geq 2\), then by induction we have a path \(w_{j}w_{j-1}\cdots w_{1}v\) in \(G\) with \(w_{j}\not\in F_{i}\) and \(w_{j-1},\ldots,w_{1},v\in F_{i}\) such that \[\delta^{\prime}(v)\leq\lambda(vw_{1})<\delta^{\prime}(w_{1})\leq\lambda(w_{1}w _{2})<\delta^{\prime}(w_{2})\leq\cdots<\delta^{\prime}(w_{j-1})\leq\lambda(w_{j- 1}w_{j})\underset{\text{if }w_{j}\in V^{\prime}}{<\delta^{\prime}(w_{j})},\] implying that \(\lambda(w_{j}w_{j-1})>\lambda(w_{j-1}w_{j-2})>\cdots>\lambda(w_{2}w_{1})> \lambda(w_{1}v)\) and so the claim is justified. Hence in either case we jump out of the loop. 3. If \(w_{j}\in F_{i}\), then there exists \(w_{j+1}\in V\smallsetminus\{w_{j}\}\) such that \(\delta^{\prime}(w_{j})\leq\lambda(w_{j}w_{j+1})\underset{\text{if }w_{j+1}\in V^{\prime}}{< \delta^{\prime}(w_{j+1})}\), so by induction there exists a path \(w_{j+1}w_{j}w_{j-1}\ldots w_{1}v\) with \(w_{j},\ldots,w_{1},v\in F_{i}\) such that \[\delta^{\prime}(v)\leq\lambda(vw_{1})<\delta^{\prime}(w_{1})\leq\lambda(w_{1}w _{2})<\delta^{\prime}(w_{2})\leq\cdots<\delta^{\prime}(w_{j})\leq\lambda(w_{j}w _{j+1})\underset{\text{if }w_{j+1}\in V^{\prime}}{<\delta^{\prime}(w_{j+1})}.\] 1. Re-define \(j:=j+1\). If \(w_{j}\not\in F_{i}\), then go back to Step 2. If \(w_{j}\in F_{i}\), then go back to Step 3. Since \(|F_{i}|\) is finite and \(w_{j},\ldots,w_{1},v\in F_{i}\) are distinct to each other and \(F_{i}\) contains a nonfree vertex, we have after some finite loops, it will enter Step 2 and the claim will be proved. Let \(v_{1}\in F_{i}\). Then by the claim there exists a path \(P_{1}:=\ell_{0}\ell_{1}\cdots\ell_{k-1}v_{1}\) in \(G\) with \(\ell_{0}\not\in F_{i}\) and \(\ell_{1},\ldots,\ell_{k-1},v_{1}\in F_{i}\) satisfying that if \(k\geq 2\), then \(\lambda(\ell_{0}\ell_{1})>\cdots>\lambda(\ell_{k-2}\ell_{k-1})>\lambda(\ell_{k- 1}v_{1})\). Assume \(V(\mathring{P}_{1})=F_{i}\). Then \(k\geq 2\) since \(F_{i}\) contains a free vertex. So there exists a rooted spanning forest \(\mathring{P}_{1}\) such that the unique component \(\mathring{P}_{1}\) has a nonfree vertex \(\nu_{1}^{\mathring{P}_{1}}:=\ell_{1}\) as the root and there is a nonfree vertex \(\nu_{0}^{\mathring{P}_{1}}:=\ell_{0}\) in \(V\smallsetminus F_{i}\) with \(\ell_{0}\ell_{1}\in E\) which satisfies that for the unique path \(\ell_{1}\cdots\ell_{k-1}v_{1}\) in the tree \(\mathring{P}_{1}\) with \(v_{1}\) a \(\mathring{P}_{1}\)-leaf if \(k\geq 2\), then \(\lambda(\ell_{0}\ell_{1})>\cdots>\lambda(\ell_{k-2}\ell_{k-1})>\lambda(\ell_{k -1}v_{1})\). 
Moreover, if \(\ell_{\alpha}\) and \(\ell_{\beta}\) are \(\mathring{P}_{1}\)-inner vertices with \(1\leq\alpha<\beta\leq k-1\), then since \((V^{\prime},\delta^{\prime})\) is a weighted vertex cover of \(G_{\lambda}\) and \(\ell_{\alpha},\ell_{\beta}\in V(\mathring{P}_{1})=F_{i}\subseteq V^{\prime}\), after settting \(\ell_{k}=v_{1}\) we have \[\lambda(\ell_{\alpha}\ell_{\beta}) \geq\min\{\delta^{\prime}(\ell_{\alpha}),\delta^{\prime}(\ell_{ \beta})\}\] \[>\min\{\lambda(\ell_{\alpha}\ell_{\alpha+1}),\lambda(\ell_{\beta} \ell_{\beta+1})\}\] \[=\min\{\max\{\lambda(\ell_{\alpha}w)\mid\ell_{\alpha}\text{ is the $\mathring{P}_{1}$-parent of $w$}\},\max\{\lambda(\ell_{\beta}x)\mid\ell_{\beta}\text{ is the $\mathring{P}_{1}$-parent of $x$}\}\};\] and if \(\ell_{\alpha}\) is an \(\mathring{P}_{1}\)-inner vertex such that \(u\nu_{0}^{\mathring{P}_{1}}\in E\), then \(\ell_{\alpha}=\ell_{1}\), and so \[\lambda(\ell_{\alpha}\nu_{0}^{\mathring{P}_{1}}) =\lambda(\ell_{0}\ell_{1})\] \[>\max\{\lambda(\ell_{1}w)\mid\ell_{1}\text{ is the $\mathring{P}_{1}$-parent of $w$}\}\] \[\geq\min\{\max\{\lambda(\ell_{\alpha}w)\mid\ell_{\alpha}\text{ is the $\mathring{P}_{1}$-parent of $w$}\},\lambda(\nu_{0}^{\mathring{P}_{1}}\nu_{1}^{\mathring{P}_{1}})\},\] a contradiction. On the other hand, we assume \(V(\mathring{P}_{1})\subsetneq F_{i}\). We then go through the following steps: 1. Initially, let \(b:=1\). 2. If there exists a vertex \(v\in F_{i}\smallsetminus(V(\mathring{P}_{1})\cup\cdots\cup V(\mathring{P}_{b}))\), then there exists a path \(P_{b+1}:=h_{0}h_{1}\cdots h_{k^{\prime}-1}v\) in \(G\) with \(h_{0}\not\in F_{i}\) and \(h_{1},\ldots,h_{k^{\prime}-1},v\in F_{i}\) satisfying that if \(k^{\prime}\geq 2\), then \(\lambda(h_{0}h_{1})>\cdots>\lambda(h_{k^{\prime}-2}\lambda_{k^{\prime}-1})> \lambda(h_{k^{\prime}-1}v)\). If \(V(\mathring{P}_{b+1})\cap(V(\mathring{P}_{1})\cup\cdots\cup V(\mathring{P}_{b}))=\emptyset\), then we put \(\mathring{P}_{b+1}\) into the rooted forest formed by \(\mathring{P}_{1},\ldots,\mathring{P}_{b}\) while making \(\mathring{P}_{b+1}\) a rooted tree with root \(h_{1}\) and making \(\mathring{P}_{b+1}\) a component of the rooted forest. Assume \(V(\mathring{P}_{b+1})\cap(V(\mathring{P}_{1})\cup\cdots\cup V(\mathring{P}_{b}))\neq\emptyset\). Let \(t=\max\{c\geq 1\mid h_{c}\in V(\mathring{P}_{1})\cup\cdots\cup V(\mathring{P}_{b})\}\). Assume \(h_{t}=u_{s}\in V(\mathring{P}_{d})\) with \(P_{d}:=u_{0}u_{1}\cdots u_{j}\) for some \(d\in\{1,\ldots,b\}\) and \(s\in\{1,\ldots,j\}\). Set \(P_{b+1}^{\prime}:=u_{0}u_{1}\cdots u_{s-1}h_{t}h_{t+1}\cdots h_{k^{\prime}-1}v\). Then \(P_{b+1}^{\prime}\) is a path in \(G\) with \(u_{0}\not\in F_{i}\) and \(u_{1},\ldots,u_{s-1},h_{t},h_{t+1},\ldots,h_{k^{\prime}-1},v\in F_{i}\). Since \(\lambda(u_{s-1}h_{t})=\lambda(u_{s-1}u_{s})\geq\delta^{\prime}(u_{s})=\delta^{ \prime}(h_{t})>\lambda(h_{t}h_{t+1})\), we have \(\lambda(u_{0}u_{1})>\cdots>\lambda(u_{s-1}h_{t})>\lambda(h_{t}h_{t+1})>\cdots> \lambda(h_{k^{\prime}-1}v)\). Re-define \(P_{b+1}:=P_{b+1}^{\prime}\), then \(\mathring{P}_{b+1}\) is merged into the component whose root is \(u_{1}\), in the rooted forest formed by \(\mathring{P}_{1},\ldots,\mathring{P}_{b}\). 3. If \(V(\mathring{P}_{1})\cup\cdots\cup V(\mathring{P}_{b+1})\subsetneq F_{i}\), then re-define \(b:=b+1\) and go back to Step 2. 4. 
If \(V(\mathring{P}_{1})\cup\cdots\cup V(\mathring{P}_{b+1})=F_{i}\), then there exists a rooted spanning forest \(\mathfrak{F}\) such that by induction each component \(T\) in \(\mathfrak{F}\), which is formed by a subset of the paths \(\mathring{P}_{1},\ldots,\mathring{P}_{b+1}\) say \(\mathring{P}_{i_{1}},\ldots,\mathring{P}_{i_{t}}\), has the nonfree vertex \(\nu_{1}^{T}\) as the root, which is the first vertex of any one of the paths \(\mathring{P}_{i_{1}},\ldots,\mathring{P}_{i_{t}}\), such that there is a nonfree vertex \(\nu_{0}^{T}\) in \(V\smallsetminus F_{i}\) with \(\nu_{0}^{T}\nu_{1}^{T}\in E\), which is the first vertex of any one of the paths \(P_{i_{1}},\ldots,P_{i_{t}}\), which satisfies that for the each path \(\nu_{1}^{T}u_{2}\cdots u_{j-1}u_{j}\) in the tree \(T\) with \(u_{j}\) a \(T\)-leaf if \(j\geq 2\), then \(\lambda(\nu_{0}^{T}\nu_{1}^{T})>\lambda(\nu_{1}^{T}u_{2})>\cdots>\lambda(u_{j-1}u_ {j})\). Moreover, for any component(s) \(S,T\) of \(\mathfrak{F}\), if \(u\) is an \(S\)-inner vertex and \(v\) is a \(T\)-inner vertex with \(u\neq v\), then since \((V^{\prime},\delta^{\prime})\) is a weighted vertex cover of \(G_{\lambda}\) and \(u,v\in(V(\hat{P}_{1})\cup\dots\cup V(\hat{P}_{b+1}))=F_{i}\subseteq V^{\prime}\), we have \[\lambda(uv) \geq\min\{\delta^{\prime}(u),\delta^{\prime}(v)\}\] \[>\min\{\max_{1\leq c\leq b+1}\{\lambda(uw)\mid uw\in E(\hat{P}_{c} )\},\max_{1\leq d\leq b+1}\{\lambda(vx)\mid vx\in E(\hat{P}_{d})\}\}\] \[=\min\{\max\{\lambda(uw)\mid u\text{ is the $S$-parent of $w$}\},\max\{ \lambda(vx)\mid v\text{ is the $T$-parent of $x$}\}\};\] if \(u\) is an \(S\)-inner vertex such that \(uu_{0}^{T}\in E\), then since \((V^{\prime},\delta^{\prime})\) is a weighted vertex cover of \(G_{\lambda}\), \[\lambda(u\nu_{0}^{T}) \geq\min\{\delta^{\prime}(u),\underbrace{\delta^{\prime}(\nu_{0} ^{T})}_{\text{if }\nu_{0}^{T}\in V^{\prime}}\}\] \[>\min\{\max_{1\leq c\leq b+1}\{\lambda(uw)\mid uw\in E(\hat{P}_{c })\},\] \[\qquad\max\{\lambda(\nu_{0}^{T}\nu_{1}^{Y})\mid Y\text{ is a component of $\mathfrak{F}$ such that $\nu_{0}^{T}=\nu_{0}^{Y}$}\}\}\] \[=\min\{\max\{\lambda(uw)\mid u\text{ is the $S$-parent of $w$}\},\] \[\qquad\max\{\lambda(\nu_{0}^{T}\nu_{1}^{Y})\mid Y\text{ is a component of $\mathfrak{F}$ such that $\nu_{0}^{T}=\nu_{0}^{Y}$}\}\};\] and if \(\nu_{0}^{S}\nu_{0}^{T}\in E\), then since \((V^{\prime},\delta^{\prime})\) is a weighted vertex cover of \(G_{\lambda}\), by symmetry we assume \(\nu_{0}^{S}\in V^{\prime}\) and \(\delta^{\prime}(\nu_{0}^{S})\leq\lambda(\nu_{0}^{S}\nu_{0}^{T})\), so \[\lambda(\nu_{0}^{S}\nu_{0}^{T}) \geq\delta^{\prime}(\nu_{0}^{S})\] \[>\max\{\lambda(\nu_{0}^{S}\nu_{1}^{Y})\mid Y\text{ is a component of $\mathfrak{F}$ such that $\nu_{0}^{S}=\nu_{0}^{Y}$}\}\] \[\geq\min\{\max\{\lambda(\nu_{0}^{S}\nu_{1}^{Y})\mid Y\text{ is a component of $\mathfrak{F}$ such that $\nu_{0}^{S}=\nu_{0}^{Y}$}\},\] \[\qquad\max\{\lambda(\nu_{0}^{T}\nu_{1}^{Z})\mid Z\text{ is a component of $\mathfrak{F}$ such that $\nu_{0}^{T}=\nu_{0}^{Z}$}\}\},\] a contradiction. Since \(|F_{i}|\) is finite, after some finite loops it will enter Step 4 and there will be a contradiction. Thus, \(|V^{\prime}\cap F_{i}|=|F_{i}|-1\) for \(i=1,\dots,m\). Since \([n]\) is the disjoint union \([n]=\bigcup_{i=1}^{m}F_{i}\), it follows that \(|(V^{\prime},\delta^{\prime})|=|V^{\prime}|=n-m\) and \(G_{\lambda}\) is unmixed. 
(c) and (d) \(\Longrightarrow\) (a) We have that \(G_{\lambda}\) is unmixed and each minimal weighted vertex cover of \(G_{\lambda}\) has cardinality \(n-m\). Let \(S=\mathbb{K}[X_{1},\dots,X_{n}]\). Then \(\dim S/I(G_{\lambda})=m\). Let \(Y_{i}=\sum_{j\in F_{i}}X_{j}\) for \(i=1,\dots,m\). We will show that \(Y_{1},\dots,Y_{m}\) is a regular sequence on \(S/I(G_{\lambda})\). This then yields that \(G_{\lambda}\) is Cohen-Macaulay. Let \(i\in\{2,\dots,m\}\), \(F_{i}:=\{i_{1},\dots,i_{\ell}\}\) and assume that \(i_{1},\dots,i_{k}\) are the nonfree vertices of \(G[F_{i}]\). Then \(k\in\{0,\dots,n-1\}\). Let \(G^{\prime}=G[[n]\smallsetminus\{i_{1},\dots,i_{\ell}\}]\) and \(\lambda^{\prime}=\lambda|_{E(G^{\prime})}\). Then \(I(G_{\lambda})=(I(G^{\prime}_{\lambda^{\prime}}),J_{1}X_{i_{1}},\dots,J_{k}X_ {i_{k}},J)\), where \(J_{j}=(X_{i_{j}}^{\lambda(i_{j}r)-1}X_{r}^{\lambda(i_{j}r)}\mid i_{j}r\in E)\) for \(j=1,\dots,k\), and \(J=(X_{i_{r}}^{\lambda(i_{r}i_{s})}X_{i_{s}}^{\lambda(i_{r}i_{s})}\mid 1\leq r<s \leq\ell)\). Since \([n]\) is the disjoint union of \(F_{1},\dots,F_{m}\) we have all generators of the ideal \((I(G^{\prime}_{\lambda^{\prime}}),Y_{1},\dots,Y_{i-1})\) belong to \(\mathbb{K}[X_{j}\mid j\in[n]\smallsetminus F_{i}]\). Thus, if we set \(R=\frac{\mathbb{K}[X_{i}|j\in[n]\smallsetminus F_{i}]}{(I(G^{\prime}_{\lambda^{ \prime}}),Y_{1},\dots,Y_{i-1})}\), then \[\frac{\frac{S}{I(G_{\lambda})}}{(Y_{1},\dots,Y_{i-1})\frac{S}{I(G_{\lambda})}} \cong\frac{R[X_{i_{1}},\dots,X_{i_{\ell}}]}{(I_{1}X_{i_{1}},\dots,I_{k}X_{i_{k} },\{X_{i_{r}}^{\lambda(i_{r}i_{s})}X_{i_{s}}^{\lambda(i_{r}i_{s})}\mid 1\leq r<s\leq\ell \})},\] where for each \(j\), the ideal \(I_{j}\) is the image of \(J_{j}\) under the residue class map onto \(R\). Thus, Lemma 2.10 implies that \(Y_{i}\) is regular on \((S/I(G_{\lambda}))/(Y_{1},\dots,Y_{i-1})(S/I(G_{\lambda}))\). We use the following example to illustrate the previous theorem and its proof. **Example 3.2**.: The following weighted chordal graph \(G_{\lambda}\) where we only give part weights of \(\lambda\) is not Cohen-Macaulay. In the drawing, let \(\nu^{\tau}\) denote an element of \((V^{\prime},\delta^{\prime})\) with \(\nu\in V^{\prime}\) and \(\delta^{\prime}(\nu)=\tau\). 
[Figure: the weighted chordal graph \(G_{\lambda}\) of Example 3.2; part of the weights \(\lambda\) and the labels \(\nu^{\tau}\) recording the values of \(\delta\) are indicated in the drawing.]
Let \(\nu_{0}^{T_{1}}:=m\), \(\nu_{0}^{T_{2}}:=s\), \(\nu_{0}^{T_{3}}:=m\), \(\nu_{0}^{T_{4}}:=n\) and \(\nu_{0}^{T_{5}}:=r\). For example, for the path \(gbl\) in \(T_{1}\) with \(l\) a \(T_{1}\)-leaf, we have \(\lambda(\nu_{0}^{T_{1}}\nu_{1}^{T_{1}})>\lambda(\nu_{1}^{T_{1}}b)>\lambda(bl)\) since \(6>4>3\). Let \(\mathcal{S}=\{m,n,s,r\}\), \(i=3\), \(V_{2}=F_{2}\cap\mathcal{S}=\{\nu_{0}^{T_{1}}=m=\nu_{0}^{T_{3}},\nu_{0}^{T_{4}}=n,\nu_{0}^{T_{5}}=r\}\), and \(V_{3}=F_{3}\cap\mathcal{S}=\{\nu_{0}^{T_{2}}=s\}\). Let \[W=F_{2}\sqcup F_{3}\sqcup(F_{4}\smallsetminus\{z\})=\{m,\dots,y\}.\] Let \(\delta:F_{1}\sqcup W\to\mathbb{N}\). The definition for \(\delta\) is given in the drawing for \(G_{\lambda}\).
For example, since \(m\in V_{2}\), \[\delta(m) =\gamma_{2}(m)\] \[=1+\max_{j\in\{1,2,3,4,5\}}\{\lambda(m\nu_{1}^{T_{j}})\mid m=\nu _{0}^{T_{j}}\}\] \[=1+\max_{j\in\{1,3,4,5\}}\{\lambda(m\nu_{1}^{T_{j}})\mid m=\nu_{0 }^{T_{j}}\}\] \[=1+\max\{\lambda(m\nu_{1}^{T_{1}}),\lambda(m\nu_{1}^{T_{3}})\}\] \[=1+\max\{\lambda(mg),\lambda(mi)\}\] \[=1+\max\{6,7\}\] \[=8,\] For \(T_{1}\)-inner vertex \(b\), \[\delta(b) =1+\max\{\lambda(b\alpha)\mid b\text{ is the $T_{1}$-parent of $\alpha$}\}\] \[=1+\max\{\lambda(ba),\lambda(bl)\}.\] \[=1+\max\{2,3\}\] \[=4.\] We have for example, for \(T_{1}\)-inner vertex \(b\) and \(T_{3}\)-inner vertex \(i\), \[\lambda(bi) =6\] \[>\min\{\max\{2,3\},5\}\] \[=\min\{\max\{\lambda(ba),\lambda(bl)\},\max\{\lambda(if)\}\}\] \[=\min\{\max\{\lambda(b\alpha)\mid b\text{ is the $T_{1}$-parent of $\alpha$}\},\] \[\max\{\lambda(i\beta)\mid i\text{ is the $T_{3}$-parent of $\beta$}\}\};\] for \(T_{1}\)-inner vertex \(b\) and \(b\nu_{0}^{T_{3}}=bm\in E\), we have \[\lambda(bm) =5\] \[>\min\{\max\{2,3\},\max\{6,7\}\}\] \[=\min\{\max\{\lambda(ba),\lambda(bl)\},\max\{\lambda(mg),\lambda(mi)\}\}\] \[=\min\{\max\{\lambda(ba),\lambda(bl)\},\max\{\lambda(m\nu_{1}^{T_{ 1}}),\lambda(m\nu_{1}^{T_{3}})\}\}\] \[=\min\{\max\{b\alpha\mid b\text{ is the $T_{1}$-parent of $\alpha$}\},\] \[\max_{j\in\{1,2,3,4,5\}}\{\lambda(m\nu_{1}^{T_{j}})\mid m=\nu_{0} ^{T_{j}}\}\};\] for \(ms\in E\), we have \[\lambda(ms) =5\] \[>\min\{\max\{6,7\},\max\{4\}\}\] \[=\min\{\max\{\lambda(mg),\lambda(mi)\},\max\{\lambda(se)\}\}\] \[=\min\{\max\{\lambda(m\nu_{1}^{T_{1}}),\lambda(m\nu_{1}^{T_{3}}) \},\max\{\lambda(s\nu_{1}^{T_{2}})\}\}\] \[=\min\{\max_{j\in\{1,2,3,4,5\}}\{\lambda(m\nu_{1}^{T_{j}})\mid m= \nu_{0}^{T_{j}}\},\] \[\max_{j\in\{1,2,3,4,5\}}\{\lambda(s\nu_{1}^{T_{j}})\mid s=\nu_{0} ^{T_{j}}\}\}.\] One can check that the given weighted vertex cover \((F_{1}\sqcup W=\{a,\ldots,y\},\delta)\) is a weighted vertex cover of \(G_{\lambda}\) but \(((F_{1}\sqcup W)\smallsetminus\{v\},\delta|_{(F_{1}\sqcup W)\smallsetminus\{v\}})\) is not a weighted vertex cover of \(G_{\lambda}\) for any \(v\in F_{1}\). Hence there exists a minimal weighted vertex cover of \(G_{\lambda}\) with cardinality \(\geq 23\). The next result says that the sufficient condition in Theorem 3.1 is also a necessary condition. **Theorem 3.3**.: _Let \(G_{\lambda}\) be a weighted chordal graph. Let \(F_{1},\ldots,F_{m}\) be the facets of \(\Delta(G)\) which admit a free vertex satisfying that \([n]\) is the disjoint union of \(F_{1},\ldots,F_{m}\). If \(G_{\lambda}\) is Cohen-Macaulay, then \(G\) and \(\lambda\) satisfies the condition in Theorem 3.1._ Proof.: Proof by contrapositive. Without loss of generality, we assume that there exists such a rooted spanning forest \(\mathfrak{F}\) of \(G[F_{1}]\) consisting of components \(T_{1},\ldots,T_{k}\) in which each rooted tree \(T_{i}\) has a nonfree vertex \(\nu_{1}^{T_{i}}\) as a root such that there is a nonfree vertex \(\nu_{0}^{T_{i}}\) in \(V\smallsetminus F_{1}\) with \(\nu_{0}^{T_{i}}\nu_{1}^{T_{i}}\in E\) satisfying the conditions as in the statement of the theorem. Let \(\mathcal{S}\) be the minimal set containing \(\nu_{0}^{T_{1}},\ldots,\nu_{0}^{T_{k}}\). Without loss of generality, we assume \(\mathcal{S}\subseteq F_{2}\sqcup\cdots\sqcup F_{i}\) for some \(i\in\mathbb{N}\smallsetminus\{1\}\) and \(F_{j}\cap\mathcal{S}\neq\emptyset\) for \(j=2,\ldots,i\). 
For \(j=2,\ldots,i\), assume \(F_{j}\cap\mathcal{S}=\{\nu_{0}^{T_{j_{1}}},\ldots,\nu_{0}^{T_{j_{\ell}}}\}=:V_ {j}\) for some subset \(\{j_{1},\ldots,j_{\ell}\}\subseteq\{1,\ldots,k\}\), and define \(\gamma_{j}:V_{j}\to\mathbb{N}\) by \[\gamma_{j}(\nu_{0}^{T_{j_{a}}}) =1+\max_{j^{\prime}\in\{1,\ldots,k\}}\{\lambda(\nu_{0}^{T_{j_{a}} }\nu_{1}^{T_{j^{\prime}}})\mid\nu_{0}^{T_{j_{a}}}=\nu_{0}^{T_{j^{\prime}}}\}\] \[=1+\max_{j_{a^{\prime}}\in\{j_{1},\ldots,j_{\ell}\}}\{\lambda(\nu_ {0}^{T_{j_{a}}}\nu_{1}^{T_{j_{a^{\prime}}}})\mid\nu_{0}^{T_{j_{a}}}=\nu_{0}^{T_ {j_{a^{\prime}}}}\}.\] For \(j=i+1,\ldots,m\), assume that \(v_{j}\in F_{j}\) is a free vertex. Let \[W:=F_{2}\sqcup\cdots\sqcup F_{i}\sqcup(F_{i+1}\smallsetminus\{v_{i+1}\})\sqcup \cdots\sqcup(F_{m}\smallsetminus\{v_{m}\}).\] Let \(\delta:F_{1}\sqcup W\to\mathbb{N}\). For \(j=2,\ldots,i\), set \(\delta(v)=\gamma_{j}(v)\) for any \(v\in V_{j}\). Set \(\delta(v)=1\) for any \(v\in W\smallsetminus(V_{2}\sqcup\cdots\sqcup V_{i})\). If \(\nu_{0}^{T_{j_{c}}}\in V_{j}\) and \(\nu_{0}^{T_{j_{d}^{\prime}}}\in V_{j^{\prime}}\) such that \(\nu_{0}^{T_{j_{c}}}\nu_{0}^{T_{j_{d}^{\prime}}}\in E\) for some \(j,j^{\prime}\in\{2,\ldots,i\}\), then by assumption, \[\lambda(\nu_{0}^{T_{j_{c}}}\nu_{0}^{T_{j_{d}^{\prime}}}) >\min\{\max\{\lambda(\nu_{0}^{T_{j_{c}}}\nu_{1}^{Y})\mid\text{$Y$ is a component of $\mathfrak{F}$ such that $\nu_{0}^{T_{j_{c}}}=\nu_{0}^{Y}$}\},\] \[\qquad\max\{\lambda(\nu_{0}^{T_{j_{d}}}\nu_{1}^{Z})\mid\text{$Z$ is a component of $\mathfrak{F}$ such that $\nu_{0}^{T_{j_{d}^{\prime}}}=\nu_{0}^{Z}$}\}\}\] \[=\min\{\max_{j_{c^{\prime}}\in\{j_{1},\ldots,j_{d}\}}\{\lambda( \nu_{0}^{T_{j_{c}}}\nu_{1}^{T_{j_{c^{\prime}}}})\mid\nu_{0}^{T_{j_{c}}}\nu_{1 }^{T_{j_{c^{\prime}}}}\},\] \[\qquad\max_{j_{d^{\prime}}^{\prime}\in\{j_{1}^{\prime},\ldots,j_{ d}^{\prime}\}}\{\lambda(\nu_{0}^{T_{j_{d}^{\prime}}}\nu_{1}^{T_{j_{d^{\prime}}}}) \mid\nu_{0}^{T_{j_{d}^{\prime}}}=\nu_{1}^{T_{j_{d^{\prime}}}^{\prime}}\}\}\] \[=\min\{\gamma_{j}(\nu_{0}^{T_{j_{c}}})-1,\gamma_{j^{\prime}}(\nu_ {0}^{T_{j_{d}^{\prime}}})-1\}\] \[=\min\{\delta(\nu_{0}^{T_{j_{c}}})-1,\delta(\nu_{0}^{T_{j_{d}^{ \prime}}})-1\},\] implying that \(\min\{\delta(\nu_{0}^{T_{j_{c}}}),\delta(\nu_{0}^{T_{j_{d}^{\prime}}})\}\leq \lambda(\nu_{0}^{T_{j_{c}}}\nu_{0}^{T_{j_{d}^{\prime}}})\). So we have \((W,\delta|_{W})\) is a weighted vertex cover of \((G-F_{1})_{\lambda|_{E(G-F_{1})}}\). For \(j=1,\ldots,k\) set \(\delta(v)=1\) for any \(T_{j}\)-leaf \(v\) and set \(\delta(v)=1+\max\{\lambda(vw)\mid\text{$v$ is the $T_{j}$-parent of $w$}\}\) for any \(T_{j}\)-inner vertex \(v\). Since \(\mathfrak{F}\) is a spanning forest of \(G[F_{1}]\), we have defined \(\delta(v)\) for all \(v\in F_{1}\). If \(u\) is a \(T_{c}\)-inner vertex and \(v\) is a \(T_{d}\)-inner vertex for some \(c,d\in\{1,\ldots,k\}\) with \(c\neq d\). Then by assumption, \[\lambda(uv) >\min\{\max\{\lambda(uw)\mid\text{$u$ is the $T_{c}$-parent of $w$}\},\max\{\lambda(vx)\mid\text{$v$ is the $T_{d}$-parent of $x$}\}\}\] \[=\min\{\delta(u)-1,\delta(v)-1\}\] \[=\min\{\delta(u),\delta(v)\}-1,\] implying that \(\min\{\delta(u),\delta(v)\}\leq\lambda(uv)\). For \(j=1,\ldots,k\), we have \[\lambda(\nu_{0}^{T_{j}}\nu_{1}^{T_{j}}) >\max\{\lambda(\nu_{1}^{T_{j}}w)\mid\nu_{1}^{T_{j}}\text{ is the $T_{j}$-parent of $w$}\}\] \[=\delta(\nu_{1}^{T_{j}})-1,\] implying that \(\delta(\nu_{1}^{T_{j}})\leq\lambda(\nu_{0}^{T_{j}}\nu_{1}^{T_{j}})\). 
Furthermore, if \(u\) is an \(T_{c}\)-inner vertex such that \(u\nu_{0}^{T_{j_{d}}}\in E\) for some \(c\in\{1,\ldots,k\}\) and \(j\in\{2,\ldots,i\}\), then by assumption, \[\lambda(u\nu_{0}^{T_{j_{d}}}) >\min\{\max\{\lambda(uw)\mid\text{$u$ is the $T_{c}$-parent of $w$}\},\] \[\qquad\max\{\lambda(\nu_{0}^{T_{j_{d}}}\nu_{1}^{Y})\mid\text{$Y$ is a component of $\mathfrak{F}$ such that $\nu_{0}^{T_{j_{d}}}=\nu_{0}^{Y}$}\}\}\] \[=\min\{\delta(u)-1,\max_{j_{d^{\prime}}\in\{j_{1},\ldots,j_{d}\}} \{\lambda(\nu_{0}^{T_{j_{d}}}\nu_{1}^{T_{j_{d^{\prime}}}})\mid\nu_{0}^{T_{j_{ d}}}\nu_{1}^{T_{j_{d^{\prime}}}}\in E\}\}\] \[=\min\{\delta(u)-1,\gamma_{j}(\nu_{0}^{T_{j_{d}}})-1\}\] \[=\min\{\delta(u)-1,\delta(\nu_{0}^{T_{j_{d}}})-1\},\] implying that \(\min\{\delta(u),\delta(\nu_{0}^{T_{j_{d}}})\}\leq\lambda(u\nu_{0}^{T_{j_{d}}})\). Hence \((F_{1}\sqcup W,\delta)\) is a weighted vertex cover of \(G_{\lambda}\). For \(j=1,\ldots,k\), let \(v\in F_{1}\smallsetminus\{\nu_{1}^{T_{j}}\}\) and \(p_{v}\) its \(T_{j}\)-parent, since \((F_{1}\sqcup W,\delta)\) is a weighted vertex cover of \(G_{\lambda}\) and \[\delta(p_{v}) =1+\max\{\lambda(p_{v}w)\mid p_{v}\text{ is the $T_{j}$-parent of $w$}\}\] \[\geq 1+\lambda(p_{v}v),\] we have \(\delta(v)\leq\lambda(p_{v}v)\), hence \(((F_{1}\sqcup W)\smallsetminus\{v\},\delta|_{(F_{1}\sqcup W)\smallsetminus\{v\}})\) is no longer a weighted vertex cover of \(G_{\lambda}\). For \(j=1,\ldots,k\), there exists \(j^{\prime}\in\{2,\ldots,i\}\) and \(j^{\prime}_{d}\in\{j^{\prime}_{1},\ldots,j^{\prime}_{\ell}\}\) such that \(j^{\prime}_{d}=j\), since \((F_{1}\sqcup W,\delta)\) is a weighted vertex cover of \(G_{\lambda}\) and \[\delta(\nu_{0}^{T_{j}}) =\delta(\nu_{0}^{T_{j^{\prime}_{d}}})\] \[=\gamma_{j^{\prime}}(\nu_{0}^{T_{j^{\prime}_{d}}})\] \[=1+\max_{j^{\prime}_{d^{\prime}}\in\{j^{\prime}_{1},\cdots,j^{ \prime}_{\ell}\}}\{\lambda(\nu_{0}^{T_{j^{\prime}_{d}}}\nu_{1}^{T_{j^{\prime} _{d^{\prime}}}})\mid\nu_{0}^{T_{j^{\prime}_{d}}}\nu_{1}^{T_{j^{\prime}_{d^{ \prime}}}}\in E\}\] \[\geq 1+\lambda(\nu_{0}^{T_{j^{\prime}_{d}}}\nu_{1}^{T_{j^{\prime} _{d}}}),\] we have \(\delta(\nu_{1}^{T_{j}})=\delta(\nu_{1}^{T_{j^{\prime}_{d}}})\leq\lambda(\nu_{0 }^{T_{j^{\prime}_{d}}}\nu_{1}^{T_{j^{\prime}_{d}}})\), hence \(((F_{1}\sqcup W)\smallsetminus\{\nu_{1}^{T_{j}}\},\delta|_{(F_{1}\sqcup W) \smallsetminus\{\nu_{1}^{T_{j}}\}})\) is not a weighted vertex cover of \(G_{\lambda}\). So \(((F_{1}\sqcup W)\smallsetminus\{v\},\delta|_{(F_{1}\sqcup W)\smallsetminus\{v\}})\) is not a weighted vertex cover of \(G_{\lambda}\) for any \(v\in F_{1}\). Thus, there exists a minimal weighted vertex \((F_{1}\sqcup W^{\prime},\delta^{\prime})\) of \(G_{\lambda}\) such that \(W^{\prime}\subseteq W\) and \(\delta^{\prime}\geq\delta\) by [4, Proposition 1.12]. Since \(F_{2},\ldots,F_{m}\) are cliques of \(G\), we have \(|W^{\prime}|\geq|W|-(i-1)\). So \[|(F_{1}\sqcup W^{\prime},\delta^{\prime})|\geq|F_{1}|+|W|-(i-1)=n-(m-i)-(i-1)= n-m+1.\] On the other hand, assume that \(v_{j}\in F_{j}\) is free for \(j=1,\ldots,m\). Let \(U=V\smallsetminus\{v_{1},\ldots,v_{m}\}\) and \(\delta^{\prime}:U\to\mathbb{N}\) defined by \(\delta^{\prime}(v)=1\). Then \((U,\delta^{\prime})\) is a weighted vertex cover of \(G_{\lambda}\). For any \(v\in U\), we have \(v\in F_{j}\) for some \(j\in\{1,\ldots,m\}\) and so \(vv_{j}\in E\), hence \((U\smallsetminus\{v\},\delta^{\prime}|_{U\smallsetminus\{v\}})\) is not a weighted vertex cover of \(G_{\lambda}\). 
So there is a minimal weighted vertex cover \((U,\delta^{\prime\prime})\) of \(G_{\lambda}\) such that \(\delta^{\prime\prime}\geq\delta^{\prime}\) by [4, Proposition 1.12], but \(|(U,\delta^{\prime\prime})|=|U|=n-m\). Therefore, \(G_{\lambda}\) is mixed and so \(G_{\lambda}\) is not Cohen-Macaulay by Theorem 3.1. ## Acknowledgments I am grateful for the insightful comments and feedback provided by Keri Sather-Wagstaff and Janet Vassilev.
2307.14902
CodeLens: An Interactive Tool for Visualizing Code Representations
Representing source code in a generic input format is crucial to automate software engineering tasks, e.g., applying machine learning algorithms to extract information. Visualizing code representations can further enable human experts to gain an intuitive insight into the code. Unfortunately, as of today, there is no universal tool that can simultaneously visualise different types of code representations. In this paper, we introduce a tool, CodeLens, which provides a visual interaction environment that supports various representation methods and helps developers understand and explore them. CodeLens is designed to support multiple programming languages, such as Java, Python, and JavaScript, and four types of code representations, including sequence of tokens, abstract syntax tree (AST), data flow graph (DFG), and control flow graph (CFG). By using CodeLens, developers can quickly visualize the specific code representation and also obtain the represented inputs for models of code. The Web-based interface of CodeLens is available at http://www.codelens.org. The demonstration video can be found at http://www.codelens.org/demo.
Yuejun Guo, Seifeddine Bettaieb, Qiang Hu, Yves Le Traon, Qiang Tang
2023-07-27T14:46:09Z
http://arxiv.org/abs/2307.14902v1
# CodeLens: An Interactive Tool for Visualizing Code Representations ###### Abstract Representing source code in a generic input format is crucial to automate software engineering tasks, e.g., applying machine learning algorithms to extract information. Visualizing code representations can further enable human experts to gain an intuitive insight into the code. Unfortunately, as of today, there is no universal tool that can simultaneously visualise different types of code representations. In this paper, we introduce a tool, CodeLens, which provides a visual interaction environment that supports various representation methods and helps developers understand and explore them. CodeLens is designed to support multiple programming languages, such as Java, Python, and JavaScript, and four types of code representations, including sequence of tokens, abstract syntax tree (AST), data flow graph (DFG), and control flow graph (CFG). By using CodeLens, developers can quickly visualize the specific code representation and also obtain the represented inputs for models of code. The Web-based interface of CodeLens is available at [http://www.codelens.org/](http://www.codelens.org/). The demonstration video can be found at [http://www.codelens.org/demo](http://www.codelens.org/demo). code representation, interactive visualization ## I Introduction The development of machine learning (ML) has enabled the automation of a wide range of Software Engineering (SE) tasks [1]. For example, large language models (LLMs), e.g., CodeBERT [2] and CodeX [3], have proven to achieve the state-of-the-art performance in clone detection, vulnerability detection, and code generation. One prerequisite to using ML models is the code representation [4], which involves the transformation of source code into analyzable data by the ML models. In the domain of SE, different code representations have been developed and studied, such as the text-based sequence of tokens [2], tree-based abstract syntax tree (AST) [5, 6], graph-based data flow graph (DFG) [7, 8] and control flow graph (CFG) [9]. Before adopting code representations for downstream tasks, human experts often need to interpret them so that proper actions can be taken. Without visualization, interpretation is a challenging task even for experienced developers. In addition, when using these representations (e.g., AST and CFG) in ML models, complex parsing libraries and toolkits are required to be installed to process source code. For example, to convert source code to the AST format, there are the tree-sitter [10] for multiple programming languages, Joern [11] for C/C++, and Python parser [12] for Python. Installing and understanding all these libraries can be time-consuming and complex, as a result, the whole process of code learning can not be easily automated. An additional challenge is that there is no library for some formats such as DFG and CFG. In the literature, several tools have been proposed to visualize code representations and help developers understand code. A comparison of them is shown in Table I. Among them, those from [6, 13, 15] receive source code as input and then visualize it as an AST graph. Such graphs can be downloaded for further analysis. Besides, the Java Parser from [14] can transfer Java code to AST with JSON format for understanding Java programs. Other tools such as [16, 17] can transfer and visualize programs as the text of tokens or ASTs. However, there are a few limitations w.r.t. 
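To make this pipeline concrete, the following is a minimal Python sketch of whitespace splitting followed by vocabulary-ID lookup (the function name and the toy snippet are ours and serve only as an illustration; the tokenizer actually shipped with CodeLens is the BPE tokenizer discussed below, not plain whitespace splitting):

```python
# Simplified illustration of tokenization: split a snippet on whitespace and
# map each piece to an integer ID in a vocabulary built on the fly.
def tokenize(source: str):
    pieces = source.split()            # splits on tabs, spaces, and newlines
    vocab, ids = {}, []
    for piece in pieces:
        if piece not in vocab:
            vocab[piece] = len(vocab)  # assign the next unused ID
        ids.append(vocab[piece])
    return pieces, ids, vocab

pieces, ids, vocab = tokenize("int add(int a, int b) { return a + b; }")
print(pieces)  # ['int', 'add(int', 'a,', 'int', 'b)', '{', 'return', 'a', '+', 'b;', '}']
print(ids)     # [0, 1, 2, 0, 3, 4, 5, 6, 7, 8, 9]
```

Real tokenizers refine this scheme, e.g., by splitting punctuation separately and, in the BPE case, by breaking rare identifiers into subword units.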
the existing tools: 1) they can only support one specific programming language (e.g., Java or JavaScript), 2) they only support one visualization type (e.g., graph or text), and 3) after analyzing code, only one format of data can be downloaded (image or JSON). These issues seriously limit the usage of these tools in practical deployment. To address these limitations, in this paper, we introduce CodeLens, an interactive tool for visualizing different code representations for different programming languages. In more detail, CodeLens supports three popular programming languages, including Java, Python, and JavaScript. It supports four types of code representation, including sequence of tokens, AST, DFG, and CFG. To our knowledge, this is the first tool that supports visualization of both DFG and CFG. Furthermore, CodeLens supports two types of visualization, i.e., text and graph, and provides two formats of outputs, i.e., image data and JSON files, which save all the represented code information and can be downloaded for further usage. In the evaluation part, we demonstrate the usefulness of CodeLens with two straightforward use cases, 1) visualizing different types of code representations to help users understand code intuitively, and 2) providing different types of pre-processed inputs for machine learning models. It is worth noting that CodeLens can be used in many other use cases by providing the necessary input for different code analysis tasks. Due to the space limitation, we omit the details in this paper. ## II CodeVis Overview Figure 1 provides a comprehensive overview of the CodeLens architecture, which consists of a Web-based frontend (written in JavaScript using React) and a server-side backend (written in Python) connected through the Flask microframework. The frontend (client) provides a user-friendly interface, allowing developers and users to interact with different code representations seamlessly. The backend (server) of CodeLens plays a crucial role in processing code and generating graphical code representations of different formats. ### _User Interface_ The user interface prioritizes simplicity, incorporating two distinct boxes. On the left-hand side, users are provided with a console-like interactive environment where they can input their code. To cater to the diverse programming preferences of users, our interface offers a selection of three programming languages: Python, JavaScript, and Java. Within each language category, users can explore and experiment with five different code examples, providing them ample opportunities for practice. On the right-hand side, users can visualize or download the resulting output, ensuring a comprehensive understanding of the code's execution. To facilitate efficient processing, a **Convert** button is positioned between the two boxes, enabling users to initiate the transformation of their input. For a visual depiction of this intuitive interface, please refer to Figure 2. ### _Back-End Implementation_ The backend of CodeLens implements the transformations from source code (i.e., Java, Python, and JavaScript) to code representations (i.e., text-based sequence of tokens, AST, DFG, CFG). Fig. 1: Architecture of CodeLens. Fig. 2: User interface design of CodeLens. Sequence of tokensA source code is treated as plain text and processed into a linear sequence of tokens via a tokenizer. Each line of code is chopped into pieces by looking for the whitespace (tabs, spaces, newlines). 
Each piece is finally represented by an integer that refers to the ID of the piece in a so-called vocabulary. A piece can be a word, a subword, or a character depending on different tokenizers [18]. The subword-based tokenizer, Byte-Pair Encoding (BPE) [19] is implemented in CodeLens due to its popularity in code-related DL models. Abstract syntax tree (AST)An AST is a tree representation of the abstract syntactic structure of a piece of source code. Each node in the tree represents a construct occurring in the source code. When converting a piece of source code to an AST, only structural information is preserved, such as variable types, order and definition of executable statements, and identifiers. In CodeLens, the tree-sitter [10] library is used to parse source code in different programming languages. Data flow graph (DFG)As the name suggests, DFG [7] is a data-oriented graph representation that shows the flow of data through a piece of source code. In a DFG, each node represents a variable or an expression, and each edge represents the flow of data between them. In CodeLens, the DFG is extracted from the AST of the given source code by tracing the variable or expression statement according to the programming grammar in the underlying programming language. Control flow graph (CFG)Similar to DFG, CFG [9] is a graph-based representation. While CFG is process-oriented, it represents all paths that might be traversed through the execution. In a CFG, nodes portray basic blocks of statements and conditions, and edges describe the transfer of control and subsequent access or modification onto the same variable. For instance, a for-loop is a basic control flow statement for specifying iteration. Note that, a CFG includes two designated blocks, an entry block and an exit block where the control enters and leaves the flow. In CodeLens, the CFG is extracted from the AST of the given source code by tracing all control statements defined in the underlying programming language. ## III Use Cases In this section, we present two use cases to demonstrate the usage of CodeLens. Code representation visualizationThe first usage of CodeLens is to visualize code representation. CodeLens receives raw code snippets and then draws the code representations in the right panel as shown in Figure 3. The figures can be downloaded into different formats for further analysis. For example, a user can use them to check the change of representation after the change of code snippets or to compare two representations of two programs. Inputs to ML models extractionIn addition to representation visualization, CodeLens also supports producing pre-processed inputs for ML models. This means that users can easily test their machine-learning models of code without preparing other packages like Java/Python Parser. As shown in Figure 4, the transformed code data (e.g., tokens, AST) can be downloaded and fed to the tokenizer provided by pre-trained code models, for example, CodeBERT [2]. ## IV Related Work Code representations in ML modelsMachine learning (ML), particularly deep learning (DL), models have been proven successful in automating various software engineering tasks, such as problem classification [20], clone detection [21], and vulnerability detection [22]. Code representation is a preliminary step to convert source code into a readable format for all models. 
Among existing code representations, the representation of a sequence of tokens is mostly used in large language models (LLMs) for code that supports multiple downstream tasks. For example, CodeBERT [2], CodeT5 [23], and CodeX [3] take the sequence of tokens of source code by the BPE tokenizer as input. In addition to the sequence of tokens by the BPE tokenizer, GraphCodeBERT [7] adds the data flow graph (DFG) representation of source code to capture the relation of "where-the-value-comes-from" between variables. The task-specific GraphSearchNet [8] model also considers the DFG representation of source code to undertake the code search task. code2seq[6] and code2vec [5] extract a set of syntactic paths from the AST representation of source code. Online tools for code visualizationMultiple tools with Web-based interface are available, such as the AST Visualization on browser [13] and Java Parser [14]. However, as shown in the comparison in Table I, these tools have four main limitations. First, only one programming language is supported. For example, Java Parser [14], code2seq[6], and code2vec [5] only support Java. Second, the support code Fig. 4: An example of transformed code data that can be used for tokenizer. Fig. 3: Example of a Java DFG visualized by CodeLens. representation mainly focuses on AST. None of the seven tools support graph-based representation. Third, the visualization type is limited. Visualizing the nodes and edges of an AST in a tree diagram is intuitive but challenging. The Swift AST Explorer [16] and JavaScript AST Visualiser [17] simply visualize an AST in a tree structure, which is less helpful to understand the source code. Last, the output of most tools is limited to images. If a developer is willing to use the representation, e.g., AST generated by a tool, a JSON file recording the nodes and edges should be provided. In contrast, CodeLens has addressed all four limitations. ## V Conclusion In this paper, we have introduced an interactive tool named CodeLens, which visualizes four most popular code representations widely used by machine learning (ML) models, including sequence of tokens, AST, DFG, and CFG. The tool allows software and application developers to understand how source code is represented when applying ML models to automate software engineering tasks. In addition, it can also serve as an interface between source code and numer code representations in ML models. This eliminates the necessity for developers to install complex parsing libraries and toolkits, through making the code representation process more user-friendly. CodeLens opens many directions for future work. For example, CodeLens can be extended to support more programming languages (e.g., C, C++ and PHP) and other useful code representations. Another direction is to develop applications on the basis of CodeLens, e.g., integrating a vulnerability detection module into CodeLens to perform vulnerability detection tasks without the need to pre-processing the code in advance. ## Acknowledgment An experienced developer Bowen Liu (who is currently working on ML-based malware detection) was invited to test CodeLens. We thank him for his valuable feedback.
2301.06050
Probing active-sterile neutrino transition magnetic moments at LEP and CEPC
We consider the sterile neutrino, that is also know as heavy neutral lepton (HNL), interacting with the Standard Model (SM) active neutrino and photon via a transition magnetic moment, the so-called dipole portal, which can be induced from the more general dipole couplings which respect the full gauge symmetries of the SM. Depending on the interactions with ${\rm SU}(2)_L$ and ${\rm U(1)}_Y$ field strength tensors ${\cal W}_{\mu \nu}^a$ and $B^{\mu \nu}$, we consider four typical scenarios and probe the constraints on the couplings with photon $d_\gamma$ at LEP using the analyses to search monophoton signature and the measurement of $Z$ decay. We find that in the considered scenarios assuming the coupling with $Z$ boson $d_Z\neq 0$, the measurement of $Z$ decaying into photon plus invisible particles can provide stricter constraints than the monophoton searches at the LEP1. The complementary constraints to existing experiments can be provided by the LEP. We also investigate the sensitivity on the dipole portal coupling $d_\gamma$ from the monophoton searches at future electron colliders, such as CEPC, and find that CEPC can explore the previously unconstrained parameter space by current experiments.
Yu Zhang, Wei Liu
2023-01-15T09:06:47Z
http://arxiv.org/abs/2301.06050v3
# Probing active-sterile neutrino transition magnetic moments at LEP and CEPC ###### Abstract We consider the sterile neutrino, that is also know as heavy neutral lepton (HNL), interacting with the Standard Model (SM) active neutrino and photon via a transition magnetic moment, the so-called dipole portal, which can be induced from the more general dipole couplings which respect the full gauge symmetries of the SM. Depending on the interactions with \(\mathrm{SU}(2)_{L}\) and \(\mathrm{U}(1)_{Y}\) field strength tensors \(\mathcal{W}_{\mu\nu}^{a}\) and \(B^{\mu\nu}\), we consider four typical scenarios and probe the constraints on the couplings with photon \(d_{\gamma}\) at LEP using the analyses to search monophoton signature and the measurement of \(Z\) decay. We find that in the considered scenarios assuming the coupling with \(Z\) boson \(d_{Z}\neq 0\), the measurement of \(Z\) decaying into photon plus invisible particles can provide stricter constraints than the monophoton searches at the LEP1. The complementary constraints to existing experiments can be provided by the LEP. We also investigate the sensitivity on the dipole portal coupling \(d_{\gamma}\) from the monophoton searches at future electron colliders, such as CEPC, and find that CEPC can explore the previously unconstrained parameter space by current experiments. Introduction The discovery that neutrinos oscillate, and therefore have distinct mass and flavor eigenstates, has proven to be one of the most definitive pieces of evidence for physics beyond the Standard Model (BSM) in the last two decades [1; 2]. Given that the Standard Model (SM) does not predict the observed small and nonzero neutrino masses, it is reasonable to introduce new physics which is typically organized in terms of the new particles and/or interactions. One feature present in many theories explain neutrino masses is the addition of one or more heavy neutral leptons (HNLs) \(N\), that can connect with neutrino masses via a Yukawa interaction, \(\mathcal{L}\supset NHL\), have attracted significant attentions in the last few years [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. These neutral fermionic states \(N\) are singlet under the SM gauge groups, and often referred to as the so-called sterile neutrinos or right-handed neutrinos. One of the consequences of extending the SM with additional sterile neutrinos, is that the neutrino magnetic moment is generated with a tiny value proportional to the neutrino mass [14; 15; 16; 17]. Recently, such scenarios predicting HNLs with the dipole coupling to SM active neutrinos, which are allowed to offer novel signatures and features in the production and decay of \(N\) if the traditional neutrino portal coupling \(NHL\) is assumed to be absent or subdominant, have received renewed attention and have been studied in the context of colliders, beam-dump and neutrino experiments, astrophysics, cosmology, and direct searches at dark matter experiments [18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]. 
At the effective low-energy level, the neutrino dipole portal to HNLs is described by the Lagrangian \[\mathcal{L}\supset d_{k}\bar{\nu}_{L}^{k}\sigma_{\mu\nu}F^{\mu\nu}N+\mathrm{H.c.}, \tag{1}\] where \(k=e,\mu,\tau\) denotes the lepton flavor index, \(\nu_{L}\) is a SM left-handed (active) neutrino field, \(\sigma_{\mu\nu}=\frac{i}{2}[\gamma_{\mu},\gamma_{\nu}]\), \(F^{\mu\nu}\) is the electromagnetic field strength tensor, and \(d\) is the active-sterile neutrino transition magnetic moment, which controls the strength of the interaction and has units of (mass)\({}^{-1}\). If the typical momentum exchange is much smaller than the electroweak scale, it is sufficient to consider only the dipole portal coupling of the sterile-active neutrinos to the electromagnetic field strength tensor in the simplified model of Eq. (1). When the scattering energy is comparable with or above the electroweak scale, however, the SM gauge-invariant dipole couplings should be taken into account [24; 45]. The main aim of this study is to investigate the active-sterile neutrino transition magnetic moments that respect the full gauge symmetries of the SM, and to probe the corresponding sensitivity at electron colliders with center-of-mass (CM) energy \(\sqrt{s}\gtrsim M_{Z}\), such as LEP and the future Circular Electron Positron Collider (CEPC) [46; 47]. The paper is organized as follows. In section II, we describe the model, including the effective Lagrangian for the dipole portal coupling to HNLs. The production of the sterile neutrino \(N\) at electron colliders is investigated in section III. We then discuss the constraints from LEP in section IV, and from the future CEPC in section V. Section VI contains our discussion and conclusion. ## II The model It is worth noting that the effective Lagrangian in Eq. (1), describing the active-sterile neutrino transition magnetic moments, is not gauge invariant under the \(\mathrm{SU}(2)_{L}\times\mathrm{U}(1)_{Y}\) gauge group. In order to describe new physics above the EW scale, neutrino dipole couplings that respect the full gauge symmetries of the SM need to be considered, and can be written as [24] \[\mathcal{L}\supset\bar{L}^{k}(d^{k}_{\mathcal{W}}\mathcal{W}^{a}_{\mu\nu}\tau^{a}+d^{k}_{B}B^{\mu\nu})\tilde{H}\sigma_{\mu\nu}N+\mathrm{H.c.}, \tag{2}\] where \(\tilde{H}=i\sigma_{2}H^{*}\), \(\tau^{a}=\sigma^{a}/2\) with \(\sigma^{a}\) being the Pauli matrices, and \(\mathcal{W}^{a}_{\mu\nu}\) and \(B^{\mu\nu}\) denote the \(\mathrm{SU}(2)_{L}\) and \(\mathrm{U}(1)_{Y}\) field strength tensors, with \(\mathcal{W}^{a}_{\mu\nu}\equiv\partial_{\mu}\mathcal{W}^{a}_{\nu}-\partial_{\nu}\mathcal{W}^{a}_{\mu}\) and \(B_{\mu\nu}\equiv\partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}\). Because of the Higgs insertion, the dipole interaction in Eq. (2) is in fact a dimension-6 operator. After spontaneous symmetry breaking with the Higgs vacuum expectation value \(v\), one obtains \[\mathcal{L}\supset d^{k}_{W}(\bar{\ell}^{k}W^{-}_{\mu\nu}\sigma_{\mu\nu}N)+\bar{\nu}^{k}_{L}(d^{k}_{\gamma}F_{\mu\nu}-d^{k}_{Z}Z_{\mu\nu})\sigma_{\mu\nu}N+\mathrm{H.c.}, \tag{3}\] which induces dipole operators to the SM photon and the weak bosons \(Z\) and \(W\), with couplings \(d^{k}_{\gamma}\), \(d^{k}_{Z}\) and \(d^{k}_{W}\). One sees that the photon field strength term in Eq. (3) reproduces the interaction of Eq. (1) and fixes its normalization. 
For a given lepton flavor, the dipole couplings \(d_{\gamma}\), \(d_{Z}\) and \(d_{W}\) in the broken phase are linearly dependent, and are determined by only two parameters, \(d_{\mathcal{W}}\) and \(d_{B}\), in the unbroken phase via 1 Footnote 1: As we will mostly assume that only one of the active neutrino flavors participates in the magnetic moment interactions, the superscript \(k\) of the lepton flavor will be omitted in the following to simplify our notation, unless otherwise stated. \[d_{\gamma} = \frac{v}{\sqrt{2}}\left(d_{B}\cos\theta_{w}+\frac{d_{\mathcal{W}}}{2}\sin\theta_{w}\right),\] \[d_{Z} = \frac{v}{\sqrt{2}}\left(\frac{d_{\mathcal{W}}}{2}\cos\theta_{w}-d_{B}\sin\theta_{w}\right),\] \[d_{W} = \frac{v}{\sqrt{2}}\frac{d_{\mathcal{W}}}{2}\sqrt{2}. \tag{4}\] One finds that there are three free parameters in this model, \[\{m_{N},d_{\mathcal{W}},d_{B}\}, \tag{5}\] where \(m_{N}\) is the mass of the HNL. Assuming \(d_{\mathcal{W}}=a\times d_{B}\), we have \[d_{Z} = \frac{d_{\gamma}(a\cos\theta_{w}-2\sin\theta_{w})}{2\cos\theta_{w}+a\sin\theta_{w}},\] \[d_{W} = \frac{\sqrt{2}ad_{\gamma}}{2\cos\theta_{w}+a\sin\theta_{w}}. \tag{6}\] The independent parameters of (5) can then be replaced by the parameters \[\{m_{N},d_{\gamma},a\}. \tag{7}\] In this work, we will focus on four typical scenarios, which are listed in Table 1. In scenario I, \(a=0\) means that the HNL does not couple to the isotriplet \(\mathcal{W}_{\mu}^{a}\) of the group \(\mathrm{SU}(2)_{L}\), so that the coupling to the \(W\) boson vanishes, \(d_{W}=0\). In scenario II, the HNL does not couple to the isosinglet \(B_{\mu}\) of the group \(\mathrm{U}(1)_{Y}\), which can be understood as \(d_{B}/d_{\mathcal{W}}=0\), corresponding to \(a\rightarrow\infty\). In scenario III, \(a=2\tan\theta_{w}\) makes the coupling to the \(Z\) boson vanish, \(d_{Z}=0\). For comparison, in scenario IV we set \(a=-2\tan\theta_{w}\), the negative of the value in scenario III. The sterile neutrino \(N\) can decay into an on-shell vector boson and a SM lepton through the dipole operators in Eq. (3), with the decay rates given by \[\Gamma_{N\rightarrow\nu\gamma} = \frac{d_{\gamma}^{2}m_{N}^{3}}{4\pi},\] \[\Gamma_{N\rightarrow\nu Z} = \frac{d_{Z}^{2}(m_{N}^{2}-M_{Z}^{2})^{2}(2m_{N}^{2}+M_{Z}^{2})}{8\pi m_{N}^{3}}\Theta(m_{N}>M_{Z}),\] \[\Gamma_{N\rightarrow{W\ell}} = \frac{d_{W}^{2}}{8\pi m_{N}^{3}}\sqrt{(m_{N}^{2}-(M_{W}-m_{\ell})^{2})(m_{N}^{2}+(M_{W}-m_{\ell})^{2})} \tag{8}\] \[\times \left(2m_{\ell}^{2}(2m_{\ell}^{2}-4m_{N}^{2}-M_{W}^{2})+(m_{N}^{2}-M_{W}^{2})(2m_{N}^{2}+M_{W}^{2})\right)\Theta(m_{N}>M_{W}+m_{\ell}).\] In addition, there are three-body decay channels of the HNL via off-shell \(W\) and \(Z\) bosons, such as \(N\to W^{*}\ell\rightarrow\ell+ff^{\prime}\) and \(N\rightarrow\nu Z^{*}\rightarrow\nu f\bar{f}\), which are suppressed by a factor of the fine structure constant \(\alpha\). However, when \(d_{W}\) or \(d_{Z}\) is much larger than \(d_{\gamma}\), these three-body decay channels can play an important role for \(m_{N}<M_{W}\), reducing the branching ratio of \(N\rightarrow\nu\gamma\). 
\begin{table} \begin{tabular}{c|c|c} \hline \hline Scenario & Assumptions & Relations \\ \hline \hline I & \(d_{\mathcal{W}}=0\) & \(d_{Z}=-d_{\gamma}\tan\theta_{w}\); \(d_{W}=0\) \\ \hline II & \(d_{B}=0\) & \(d_{Z}=d_{\gamma}\cot\theta_{w}\); \(d_{W}=\sqrt{2}d_{\gamma}/\sin\theta_{w}\) \\ \hline III & \(d_{\mathcal{W}}=2\tan\theta_{w}\times d_{B}\) & \(d_{Z}=0\); \(d_{W}=\sqrt{2}d_{\gamma}\sin\theta_{w}\) \\ \hline IV & \(d_{\mathcal{W}}=-2\tan\theta_{w}\times d_{B}\) & \(d_{Z}=-d_{\gamma}\tan(2\theta_{w})\); \(d_{W}=-\sqrt{2}d_{\gamma}\sin\theta_{w}/\cos(2\theta_{w})\) \\ \hline \hline \end{tabular} \end{table} Table 1: Four typical scenarios considered in this work. The Universal FeynRules Output (UFO) [48; 49] of the neutrino dipole model is used and fed to MadGraph5_aMC@NLO v2.6.7 [50] to calculate the widths of the three-body decay channels of the HNL. In Fig. 1, we present the branching ratio for \(N\) decaying to a photon plus an active neutrino, \[\mathrm{Br}(N\to\nu\gamma)\equiv\frac{\Gamma_{N\to\nu\gamma}}{\Gamma_{N\to\nu\gamma}+\Gamma_{N\to\nu Z}+\Gamma_{N\to W\ell}+\Gamma_{N\to 3\text{-body}}}, \tag{9}\] as a function of \(m_{N}\) and of the ratio \(a=d_{\mathcal{W}}/d_{B}\). In the \(a\)–\(\mathrm{Br}(N\to\nu\gamma)\) plane, the curves show a very conspicuous sharp valley, because \(d_{Z}\) and \(d_{W}\) have a singularity at \(a=-2\cot\theta_{w}\sim-3.7\). Around this singularity point, the three-body decay channels of the HNL via off-shell \(W\) and \(Z\) bosons can be sizable and even dominant when \(m_{N}<M_{W}\). Since the four scenarios listed in Table 1 all lie away from the singularity point \(a\sim-3.7\), the three-body decay channels can be safely ignored in the following calculations. From the curves in the \(m_{N}\)–\(\mathrm{Br}(N\to\nu\gamma)\) plane, it can be seen that the branching ratio always decreases as \(m_{N}\) increases when \(m_{N}>M_{W}\). Since the widths of the three-body decay channels can be neglected compared with the two-body decays when \(m_{N}>M_{W}\), \(\mathrm{Br}(N\to\nu\gamma)\) tends to \(d_{\gamma}^{2}/(d_{\gamma}^{2}+d_{Z}^{2}+d_{W}^{2})=(2\cos\theta_{w}+a\sin\theta_{w})^{2}/(4+3a^{2})\). For a heavy neutrino with \(m_{N}\gg M_{Z}\), the branching ratio \(\mathrm{Br}(N\to\nu\gamma)\) reaches its maximum near \(a=0\), then decreases as \(a\) increases, and finally tends to the value obtained in scenario II with \(d_{B}=0\). ## III Electron Collider Signals In this section, we investigate Dirac sterile neutrino \(N\) production via the dipole portal at high-energy \(e^{+}e^{-}\) colliders, such as LEP and the future CEPC. At electron colliders, HNL production proceeds through the process \(e^{+}e^{-}\to N\bar{\nu}_{k}+\mathrm{H.c.}\) via either a \(Z\) or a \(\gamma\) mediator in the \(s\)-channel, depending on the dipole portal couplings \(d_{Z}^{k}\), \(d_{\gamma}^{k}\) with \(k=e,\mu,\tau\), or via a \(W\) mediator in the \(t\)-channel, depending on the electron-neutrino dipole portal coupling \(d_{W}^{e}\) in Eq. (3). Figure 1: Branching ratio of the radiative HNL decay process \(N\to\nu\gamma\) as a function of the ratio \(a=d_{\mathcal{W}}/d_{B}\) (left) and of the HNL mass \(m_{N}\) (right). With the subsequent decay channel \(N\rightarrow\nu\gamma\) in the detector, the signature of a single-photon final state with missing energy can be searched for at electron colliders. 
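Before turning to the production cross sections, a minimal Python sketch of how Eqs. (6), (8) and (9) fit together is given below. This is an illustrative reimplementation, not the UFO/MadGraph workflow described above: the electroweak inputs (sin\({}^{2}\theta_{w}\), \(M_{Z}\), \(M_{W}\)) are assumed values, the \(W\ell\) width is coded exactly as printed in Eq. (8), and the three-body modes are neglected, which is valid away from \(a\sim-2\cot\theta_{w}\).

```python
# Illustrative sketch (not the authors' code) of Eqs. (6), (8) and (9).
# Assumed inputs: sin^2(theta_w) = 0.231, M_Z = 91.19 GeV, M_W = 80.38 GeV.
import numpy as np

SW2 = 0.231
SW, CW = np.sqrt(SW2), np.sqrt(1.0 - SW2)
MZ, MW = 91.19, 80.38  # GeV

def couplings(d_gamma, a):
    """Eq. (6): d_Z and d_W in terms of d_gamma and a = d_W/d_B (singular at a = -2*cot)."""
    den = 2.0 * CW + a * SW
    return d_gamma * (a * CW - 2.0 * SW) / den, np.sqrt(2.0) * a * d_gamma / den

def widths(mN, d_gamma, d_Z, d_W, ml=0.0):
    """Two-body widths of Eq. (8), in GeV, for couplings given in GeV^-1."""
    G_gam = d_gamma**2 * mN**3 / (4.0 * np.pi)
    G_Z = (d_Z**2 * (mN**2 - MZ**2)**2 * (2*mN**2 + MZ**2) / (8.0*np.pi*mN**3)
           if mN > MZ else 0.0)
    if mN > MW + ml:
        kin = np.sqrt((mN**2 - (MW - ml)**2) * (mN**2 + (MW - ml)**2))  # as printed in Eq. (8)
        G_W = d_W**2 / (8.0*np.pi*mN**3) * kin * (
            2*ml**2*(2*ml**2 - 4*mN**2 - MW**2) + (mN**2 - MW**2)*(2*mN**2 + MW**2))
    else:
        G_W = 0.0
    return G_gam, G_Z, G_W

def br_nu_gamma(mN, d_gamma, a):
    """Eq. (9) with the three-body modes dropped (valid away from a ~ -2*cot(theta_w))."""
    G_gam, G_Z, G_W = widths(mN, d_gamma, *couplings(d_gamma, a))
    return G_gam / (G_gam + G_Z + G_W)

# Example: scenario I (a = 0) for m_N = 200 GeV and d_gamma = 1e-5 GeV^-1
print(br_nu_gamma(200.0, 1e-5, 0.0))
```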
The total production cross section for \(e^{+}e^{-}\to N\bar{\nu}\) after integrating over all angles from \(\gamma\) and \(Z\) mediators in \(s\)-channel and \(W\) mediator in \(t\)-channel can be respectively expressed as \[\sigma^{\gamma}(e^{+}e^{-}\to N\bar{\nu}) = \frac{\alpha d_{\gamma}^{2}(s-m_{N}^{2})^{2}(s+2m_{N}^{2})}{3s^{3}}, \tag{10}\] \[\sigma^{Z}(e^{+}e^{-}\to N\bar{\nu}) = \frac{\alpha d_{Z}^{2}(s-m_{N}^{2})^{2}(s+2m_{N}^{2})}{24c_{w}^{2} s^{2}\left(\Gamma_{Z}^{2}M_{Z}^{2}+\left(M_{Z}^{2}-s\right)^{2}\right)}\Big{[} \left(8s_{w}^{2}-4s_{w}+1\right)\Big{]},\] (11) \[\sigma^{\gamma Z}(e^{+}e^{-}\to N\bar{\nu}) = \frac{\alpha d_{\gamma}d_{Z}(s-m_{N}^{2})^{2}(s+2m_{N}^{2})}{6c_{ w}s_{w}s^{2}\left(\Gamma_{Z}^{2}M_{Z}^{2}+\left(M_{Z}^{2}-s\right)^{2}\right)} \Big{[}(-4s_{w}+1)\left(M_{Z}^{2}-s\right)\Big{]},\] (12) \[\sigma^{W}(e^{+}e^{-}\to N\bar{\nu}_{e}) = \frac{\alpha(d_{W}^{e})^{2}}{2s_{w}^{2}s}\Bigg{[}-2s-\left(2M_{W} ^{2}+s\right)\log\left(\frac{M_{W}^{2}}{-m_{N}^{2}+M_{W}^{2}+s}\right)\] (13) \[+ m_{N}^{2}\left(\frac{M_{W}^{2}}{-m_{N}^{2}+M_{W}^{2}+s}+1\right) \Bigg{]},\] where \(\sigma^{\gamma Z}\) denotes the interference term between \(\gamma\) and \(Z\) mediators in \(s\)-channel, and the interference between \(s\)-channel and \(t\)-channel for electron neutrino vanished. One sees that at the low-energy electron colliders with \(\sqrt{s}\ll M_{Z}\), the contribution from \(Z\) or \(W\) mediator can be neglected comparing with the one from \(\gamma\) mediator in the condition of \(d_{Z,W}/d_{\gamma}\sim\mathcal{O}(1)\), which has been discussed in Ref.[43] at the low-energy electron colliders, such as BESIII, Belle II and future STCF. In Fig. 2, we present the cross sections of the HNL associated with electron neutrino production as the function of CM energy \(\sqrt{s}\) for \(m_{N}=0.1\) GeV (left) and \(m_{N}=100\) GeV (right) from \(\gamma\), \(Z\) and \(W\) mediators with \(d_{\gamma}=d_{Z}=d_{W}=10^{-5}\), separately. Noted that, when \(\sqrt{s}>M_{Z}\), there will be \(\sigma^{\gamma Z}/(d_{\gamma}d_{Z})<0\), thus absolute values of \(\sigma^{\gamma Z}\) are plotted in Fig. 2. We can see that the contribution from \(\gamma\) mediator \(\sigma^{\gamma}\) has little to do with the CM energy when \(m_{N}\ll\sqrt{s}\). The contribution \(\sigma^{Z}\) from \(Z\) mediator for \(m_{N}=0.1\) GeV reaches its maximum when \(\sqrt{s}=M_{Z}\) due to the \(Z\) resonance. The contribution \(\sigma^{W}\) from \(W\) mediator only appears in \(N\bar{\nu}_{e}\) production, and can be ignored comparing with \(\sigma^{Z}\) when \(\sqrt{s}\) around \(Z\)-pole. While \(\sigma^{W}\) always increase with the increment of \(\sqrt{s}\), and will be dominant when \(\sqrt{s}\gg M_{Z}\). Just thanks to the additional contribution \(\sigma^{W}\) to electron neutrino production, the sensitivity on dipole coupling \(d_{\gamma}^{e}\) will be different from \(d_{\gamma}^{\mu}\) and \(d_{\gamma}^{\tau}\) at electron colliders when \(\sqrt{s}>M_{Z}\). To make sure that there exists visible photon in the final state, the subsequent decay of \(N\) must occur inside the fiducial volume of the detector. The probability of the heavy neutrino to decay radiatively in the fiducial volume after traveling a distance \(l\) from the primary vertex is given by \[P_{dec}(l)=(1-e^{-l/l_{dec}}){\rm Br}(N\rightarrow\nu\gamma). 
\tag{14}\] The decay length of \(N\), \(l_{dec}\), scales as \[l_{dec}=c\tau\beta\gamma=\frac{4\pi}{d_{\gamma}^{2}m_{N}^{4}}\sqrt{E_{N}^{2}-m_{N}^{2}} \tag{15}\] in the case of \(\mathrm{Br}(N\rightarrow\nu\gamma)\simeq 1\), where \(E_{N}\) is the energy of \(N\), with \(E_{N}=\frac{s+m_{N}^{2}}{2\sqrt{s}}\) in the process \(e^{+}e^{-}\to N\bar{\nu}\). Then, the signal production rate from new physics can be written as \[\sigma^{\mathrm{NP}}(e^{+}e^{-}\rightarrow\gamma+\mathrm{INV})=\sigma(e^{+}e^{-}\to N\nu)\mathrm{Br}(N\rightarrow\nu\gamma)\epsilon_{cuts}\epsilon_{det}P_{dec}(l_{D}), \tag{16}\] where \(l_{D}\) is the detector length, and \(\epsilon_{cuts}\) and \(\epsilon_{det}\) are the efficiencies of the kinematic cuts and of the detection for the final photon, respectively. Since \(N\) is usually produced on-shell and travels some distance before decaying, we employ the narrow width approximation to derive the kinematic information of the final-state photon. The \(1-\cos\theta\) distribution is used for the photon from \(N\) decay, where \(\theta\) is the photon angle in the rest frame of \(N\)[51; 52]. ## IV Constraints from LEP There are abundant analyses searching for the monophoton signature at LEP, which can be used to set constraints on the dipole portal coupling to HNLs. In this section, we consider single-photon events with missing energy at CM energies around the \(Z\) pole at LEP1 and at larger CM energies above the \(Z\) pole at LEP2. If the coupling between the HNL and the \(Z\) boson exists, there are additional constraints from \(Z\) decay, which has been measured accurately at LEP. ### LEP1 For CM energies around the \(Z\) pole at LEP1, the 95% confidence level (C.L.) upper limits on the integrated cross section for production of a single photon with \(E_{\gamma}>E_{\rm min}\) and \(|\cos\theta_{\gamma}|<0.7\) are presented as a function of a specified minimum energy \(E_{\rm min}\) by the OPAL Collaboration [53]. We adopt the 95% C.L. limit of 0.15 pb on the cross section for production of a single photon with energy exceeding \(E_{\rm min}=23\) GeV in the \(|\cos\theta_{\gamma}|<0.7\) angular region to derive the corresponding 95% C.L. limit on the dipole portal to HNLs. The corresponding upper bounds on the dipole coupling \(d_{\gamma}\) are shown in Fig. 3 with green lines for the four scenarios listed in Table 1, respectively. The overall detection efficiency of the photon is estimated to be \(65.7\)% [53]. ### LEP2 For the larger CM energies above the \(Z\) pole at LEP2, we use the DELPHI single-photon data at \(\sqrt{s}=200\sim 209\) GeV (average \(\sqrt{s}=205.4\) GeV) in the angular region of \(45^{\circ}<\theta_{\gamma}<135^{\circ}\) and \(0.06<x_{\gamma}<1.1\) with \(x_{\gamma}=2E_{\gamma}/\sqrt{s}\)[54]. The constraints on the dipole coupling can be obtained by performing a simple \(\chi^{2}\) analysis with \[\chi^{2}=\left(\frac{\sigma^{\rm SM}+\sigma^{N\nu}-\sigma^{\rm exp}}{\delta\sigma^{\rm exp}}\right)^{2}, \tag{17}\] with \(\sigma^{\rm SM}=1.61\) pb, \(\sigma^{\rm exp}=1.50\) pb, and \(\delta\sigma^{\rm exp}=0.11\) pb from Ref. [54]. Estimated from the Monte Carlo cross sections and the expected numbers of events, the overall detection efficiency of the photon is set to be 65%. The 95% C.L. upper limits on the dipole coupling \(d_{\gamma}\) are shown in Fig. 3 with blue lines for the four scenarios. ### Z decay Negative evidence for the single photon with missing energy signal at the L3 detector of LEP1 [55] sets an upper limit at the 95% C.L. 
lying in the range of about \((3.2\sim 1.1)\times 10^{-6}\) on the branching ratio for \(Z\) decaying to invisible particles and a photon with energy greater than \(E_{\rm min}\) in the range of \((15\sim 40)\) GeV. The measurable decay width \(\Gamma_{Z\rightarrow\gamma+{\rm invisible}}\) at LEP from the dipole portal to HNLs can be expressed as \[\Gamma^{\rm NP}_{Z\rightarrow\gamma+{\rm invisible}}=(\Gamma_{Z\to N\bar{\nu}}+\Gamma_{Z\rightarrow\bar{N}\nu}){\rm Br}(N\rightarrow\nu\gamma)\epsilon_{cuts}(1-P_{dec}(l_{D})). \tag{18}\] Here the decay width of \(Z\to N\bar{\nu}\) or \(Z\rightarrow\bar{N}\nu\) is given by \[\Gamma_{Z\to N\bar{\nu}}=\Gamma_{Z\rightarrow\bar{N}\nu}=\frac{d_{Z}^{2}(M_{Z}^{2}-m_{N}^{2})^{2}(2m_{N}^{2}+M_{Z}^{2})}{12\pi M_{Z}^{3}}\Theta(M_{Z}>m_{N}), \tag{19}\] and \(P_{dec}(l_{D})\) denotes the probability of the heavy neutrino \(N\) to decay radiatively outside the detector at LEP, with detector length \(l_{D}=1\) m, while \(\epsilon_{cuts}\) is the efficiency of the kinematic cut \(E_{\gamma}>E_{\rm min}\) for the final photon. We use the 95% C.L. upper limit of \(3.2\times 10^{-6}\) on the branching ratio \({\rm Br}(Z\rightarrow\gamma+{\rm invisible})\) with \(E_{\gamma}>15\) GeV to provide the corresponding 95% C.L. constraint on the dipole portal coupling \(d_{\gamma}\), which is presented in Fig. 3 with black lines. Besides, \(N\) decaying outside the detector contributes to the \(Z\)-boson invisible decay as \[\Gamma^{\rm NP}_{Z\rightarrow{\rm invisible}}=(\Gamma_{Z\to N\bar{\nu}}+\Gamma_{Z\rightarrow\bar{N}\nu})P_{dec}(l_{D}). \tag{20}\] The total width of the \(Z\) boson has been measured accurately by the LEP experiments, which place a strong bound on new physics contributions, \(\Gamma^{\rm NP}_{Z\rightarrow{\rm invisible}}<2.0\) MeV at 95% C.L. [56]. The 95% C.L. upper limits from \(Z\) invisible decay on the dipole coupling are given in Fig. 3 with red lines. Figure 3: The 95% C.L. upper bounds on the dipole portal coupling \(d_{\gamma}\) under the four assumptions listed in Table 1 from the monophoton searches at LEP1 [53] (green lines) and LEP2 [54] (blue solid lines for \(d_{\gamma}^{e}\) and blue dotted lines for \(d_{\gamma}^{\mu,\tau}\)), the decay \(Z\rightarrow\gamma+{\rm invisible}\)[55] (black lines) and \(Z\) invisible decay [56] (red lines), respectively. ### Results One sees that the constraints from monophoton searches at LEP1 and LEP2, and from \(Z\) decaying into invisible particles associated with a photon, have a characteristic "U" shape. The right boundary of the "U"-shaped region, at larger \(m_{N}\), is controlled by the kinematic reach, and in the case of LEP2 extends beyond 100 GeV. The left boundary of the excluded "U"-shaped region, at small \(m_{N}\), is controlled by the lifetime of \(N\). Since a smaller \(m_{N}\) leads to a longer lifetime of \(N\), \(N\) is more likely to decay outside the detector, with the loss of the \(\gamma\) signal. The \(Z\) invisible decay can provide complementary constraints for HNLs with small mass. Note that, because of the additional contribution from the \(W\)-mediator diagram in the \(t\)-channel for \(N\nu_{e}\) production, the constraints on \(d_{\gamma}^{e}\) (blue solid lines) are stricter than those on \(d_{\gamma}^{\mu,\tau}\) (blue dotted lines) from monophoton searches at LEP2 with \(\sqrt{s}>M_{Z}\) when \(d_{W}\neq 0\), as can be seen in scenarios II, III and IV. In scenarios I, II and IV with \(d_{Z}\neq 0\), there are additional constraints from \(Z\) decay. 
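The limit-setting chain used above can be summarized in a short, schematic Python sketch of Eqs. (10) and (14)-(17). It is not the analysis behind Figs. 3-5: only the photon-mediated production of Eq. (10) is kept for brevity, and the cut efficiency, detection efficiency and detector length are illustrative placeholders rather than values taken from the experiments.

```python
# Schematic sketch (not the actual LEP analysis) chaining Eqs. (10) and (14)-(17).
# ALPHA, HBARC and the unit conversion are assumed constants.
import numpy as np

ALPHA = 1.0 / 137.0        # fine-structure constant (assumed value)
HBARC = 1.973e-16          # GeV*m, converts a decay length from GeV^-1 to meters
GEV2_TO_PB = 3.894e8       # 1 GeV^-2 in pb

def sigma_gamma(s, mN, d_gamma):
    """Eq. (10): sigma(e+e- -> N nubar) via the photon mediator, in GeV^-2."""
    return ALPHA * d_gamma**2 * (s - mN**2)**2 * (s + 2*mN**2) / (3.0 * s**3)

def decay_length(s, mN, d_gamma):
    """Eq. (15): lab-frame decay length for Br(N -> nu gamma) ~ 1, in meters."""
    EN = (s + mN**2) / (2.0 * np.sqrt(s))
    return 4.0*np.pi / (d_gamma**2 * mN**4) * np.sqrt(EN**2 - mN**2) * HBARC

def sigma_monophoton_pb(s, mN, d_gamma, br=1.0, eps_cuts=1.0, eps_det=0.65, lD=1.0):
    """Eqs. (14) and (16): gamma + invisible rate for N decaying inside length lD, in pb."""
    P_dec = (1.0 - np.exp(-lD / decay_length(s, mN, d_gamma))) * br
    return sigma_gamma(s, mN, d_gamma) * GEV2_TO_PB * br * eps_cuts * eps_det * P_dec

def chi2_lep2(sigma_np_pb, sig_sm=1.61, sig_exp=1.50, d_exp=0.11):
    """Eq. (17) with the DELPHI numbers quoted above (all cross sections in pb)."""
    return ((sig_sm + sigma_np_pb - sig_exp) / d_exp)**2

# Example: a 10 GeV HNL with d_gamma = 1e-5 GeV^-1 at sqrt(s) = 205.4 GeV
s = 205.4**2
print(chi2_lep2(sigma_monophoton_pb(s, 10.0, 1e-5)))
```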
The measurements of \(Z\) decay yield the same sensitivity for all three lepton flavors, and so, to a good approximation, do the monophoton searches at LEP1, since \(\sigma^{W}\) can be ignored compared with \(\sigma^{Z}\) around the \(Z\)-pole. Since LEP1 with \(\sqrt{s}\simeq M_{Z}\) provides very competitive HNL production rates due to the \(Z\)-resonance of the \(s\)-channel \(Z\)-mediator when \(d_{Z}\neq 0\), the sensitivities on the dipole portal coupling \(d_{\gamma}\) are much better at LEP1, by about one order of magnitude, than the ones at LEP2 for \(m_{N}\lesssim 90\) GeV in scenarios I, II and IV. In scenario III with \(d_{Z}=0\), on the other hand, LEP2 always gives the leading constraints in the whole plotted mass region. Though there is no \(Z\)-resonance enhancement in scenario III with \(d_{Z}=0\) at LEP1, the limits for different lepton flavors are still almost the same, since \(\sigma^{\gamma}\) dominates over \(\sigma^{W}\) around the \(Z\)-pole. The constraints from the measurement of the branching ratio for \(Z\to\gamma+{\rm invisible}\) are always found to be more stringent than the ones from monophoton searches at LEP1 in the scenarios with \(d_{Z}\neq 0\). In Fig. 4, we present the production rate for the sterile neutrino produced in association with an active neutrino at electron colliders with \(\sqrt{s}=M_{Z}\) (left), and the branching ratio of \(Z\to\gamma+{\rm invisible}\) (right), as functions of \(a=d_{\cal W}/d_{B}\), both labeled with red lines. Here we set \(m_{N}=10\) GeV and \(d_{\gamma}=10^{-7}\). Since \(d_{Z}\) and \(d_{W}\) have a singularity at \(a=-2\cot\theta_{w}\sim-3.7\), the production rate of \(N\nu\) and the branching ratio of \(Z\to\gamma+{\rm invisible}\) increase with \(a\) when \(a<-2\cot\theta_{w}\), and then decrease until \(a=2\tan\theta_{w}\sim 1.1\). At \(a=2\tan\theta_{w}\), the dipole coupling to the \(Z\) boson \(d_{Z}\) becomes zero, the production rate reaches its minimum, and the branching ratio goes to zero. The corresponding 95% C.L. upper limits on the dipole coupling \(d_{\gamma}\) as a function of \(a\) from the monophoton searches (left) and the measurement of \(Z\to\gamma+{\rm invisible}\) (right) at LEP1 are also shown in Fig. 4 with black solid lines. For \(a>0\), the upper bounds on \(d_{\gamma}\) from monophoton searches at LEP1 lie in the range of \((5.8\times 10^{-7},6.0\times 10^{-6})\). Except for the region very near \(a=2\tan\theta_{w}\), where \(d_{Z}\) is close to zero, the measurement of \(Z\) decaying into a photon plus invisible particles provides stricter constraints than the monophoton searches at LEP1. The graph on the left of Fig. 5 shows the production rates of the processes \(e^{+}e^{-}\to N\nu_{e}\to\nu_{e}\bar{\nu}_{e}\gamma\) (red solid line) and \(e^{+}e^{-}\to N\nu_{\mu,\tau}\to\nu_{\mu,\tau}\bar{\nu}_{\mu,\tau}\gamma\) (red dashed line) as a function of \(a\) with \(m_{N}=100~{\rm GeV}\) and \(d_{\gamma}=10^{-5}\) at LEP2 with \(\sqrt{s}=205.4\) GeV. The kinematic cuts and the detection efficiency for the final photon in the DELPHI data [54] are also taken into account. Interestingly, the monophoton cross sections due to \(N\) production for \(m_{N}=100\) GeV at an electron collider are observed to change relatively little around \(a=-2\cot\theta_{w}\), rather than as drastically as in Fig. 4 for \(m_{N}=10\) GeV. 
This is because, when \(m_{N}>M_{Z}\), the opening of the decay channels \(N\to\ell W\) and \(N\to\nu Z\) reduces the branching ratio of \(N\to\nu\gamma\), which is inversely proportional to \((d_{Z}^{2}+d_{W}^{2})\) and thereby offsets the dependence of the monophoton production rate on the couplings \(d_{Z}\) or \(d_{W}\). From the difference between the production rates of \(\nu_{e}\bar{\nu}_{e}\gamma\) and \(\nu_{\mu,\tau}\bar{\nu}_{\mu,\tau}\gamma\) in Fig. 5, one can see the additional contribution from the \(W\)-mediator diagram in the \(t\)-channel to the production of \(N\). The corresponding 95% C.L. upper limits on the dipole portal couplings \(d_{\gamma}^{e}\) (black solid line) and \(d_{\gamma}^{\mu,\tau}\) (black dashed line) for \(m_{N}=100\) GeV, using the DELPHI data of the monophoton search [54] at LEP2, are also shown in the graph on the left of Fig. 5. Figure 4: Left: The production rates for the process \(e^{+}e^{-}\to N\bar{\nu}\) (red line) with \(m_{N}=10\) GeV, \(d_{\gamma}=10^{-7}\) and \(\sqrt{s}=M_{Z}\), and the 95% C.L. upper limits on the neutrino dipole portal coupling to HNLs \(d_{\gamma}\) as a function of the ratio \(a=d_{\mathcal{W}}/d_{B}\) with the CM energy on the \(Z\)-pole, using the monophoton searches by the OPAL Collaboration at LEP1 [53] (black solid line) and the future CEPC (black dashed line) with a luminosity of 100 ab\({}^{-1}\). Right: The branching ratio for \(Z\) decaying into invisible particles and a photon, \({\rm Br}(Z\to\gamma+{\rm invisible})\) (red line), with \(m_{N}=10\) GeV and \(d_{\gamma}=10^{-7}\), and the constraints on \(d_{\gamma}\) as a function of the ratio \(a=d_{\mathcal{W}}/d_{B}\), assuming \({\rm Br}(Z\to\gamma+{\rm invisible})=10^{-7}\) in the future (black dashed line) and the 95% C.L. upper limit of \({\rm Br}(Z\to\gamma+{\rm invisible})=3.2\times 10^{-6}\) with \(E_{\gamma}>15\) GeV from LEP (black solid line). For \(|a|\leq 10\), the upper limits on the dipole portal couplings to an HNL with a mass of 100 GeV, \(d_{\gamma}^{e}\) and \(d_{\gamma}^{\mu,\tau}\), lie in the range of \((7.1\times 10^{-5},3.5\times 10^{-4})\) and \((2.5\times 10^{-4},4.1\times 10^{-4})\) from monophoton searches at LEP2, respectively. When \(a=0\) (\(d_{W}=0\)), the upper limit on \(d_{\gamma}^{e}\) reaches its maximum. ## V Constraints from CEPC In the following, we investigate the sensitivity to the dipole portal coupling to HNLs at the future electron collider CEPC [46; 47]. The CEPC, proposed by the Chinese high energy physics community in 2012, is designed to run primarily at a CM energy of 240 GeV as a Higgs factory (\(H\)-mode) with a total luminosity of \(20~{\rm ab}^{-1}\) for ten years of running [57]. In addition, on the \(Z\)-pole as a \(Z\) factory (\(Z\)-mode), it will also be operated with a total luminosity of \(100~{\rm ab}^{-1}\) for two years; it will perform a precise \(WW\) threshold scan (\(WW\)-mode) with a total luminosity of \(\sim 6~{\rm ab}^{-1}\) for one year of running at \(\sqrt{s}\sim 160~{\rm GeV}\), and will be upgraded to a CM energy of 360 GeV, close to the \(t\bar{t}\) threshold (\(t\bar{t}\)-mode), with a total luminosity of \(\sim 1~{\rm ab}^{-1}\) for five years [57]. In the search for the monophoton signature at CEPC, the backgrounds can be classified into two categories: the irreducible background and the reducible background. The irreducible background arises from neutrino pair production in association with one photon in the SM, \(e^{+}e^{-}\to\nu\bar{\nu}\gamma\). 
The reducible background comes from any SM process with a single photon in the final state and all other visible particles undetected due to limitations of the detector acceptance. In particular, radiative Bhabha scattering, \(e^{+}e^{-}\to e^{+}e^{-}\gamma\), should be considered carefully: it has a huge cross section and can mimic the signal if both the final-state electron and positron escape undetected, for example through the beam pipes [58; 59]. For the monophoton signature at CEPC, we use the cuts for the final detected photon following the CEPC CDR [47]: \(|z_{\gamma}|<0.99\) and \(E_{\gamma}>0.1\) GeV. Due to the SM \(Z\) boson, the irreducible background from SM neutrino pair production \(e^{+}e^{-}\to\nu\bar{\nu}\gamma\) exhibits a resonance in the monophoton energy spectrum, with a peak at the photon energy \(E_{\gamma}^{Z}=(s-M_{Z}^{2})/2\sqrt{s}\) and a full width at half maximum of \(\Gamma_{\gamma}^{Z}=M_{Z}\Gamma_{Z}/\sqrt{s}\). To suppress the irreducible background contribution, we veto the events within \(E_{\gamma}\in(E_{\gamma}^{Z}\pm 5\Gamma_{\gamma}^{Z})\) in the monophoton energy spectrum [60]. We apply the cut \[E_{\gamma}>E_{\gamma}^{m}(\theta_{\gamma})=\frac{\sqrt{s}}{(1+\sin\theta_{\gamma}/\sin\theta_{b})}, \tag{21}\] on the final-state photon to remove the main reducible background from the processes \(e^{+}e^{-}\to e^{+}e^{-}\gamma\) and \(e^{+}e^{-}\to\gamma\gamma\gamma\), following Ref. [60], where \(\theta_{b}\) denotes the angle at the boundary of the sub-detectors with \(\cos\theta_{b}=0.99\). The simple criterion \(S^{2}/B=2.71\) is used to probe the 95% C.L. upper bounds on the neutrino dipole portal couplings \(d_{\gamma}\) at CEPC, which are shown in Figure 6. Here we consider the four scenarios with the assumptions \(d_{\mathcal{W}}=0\), \(d_{B}=0\) and \(d_{\mathcal{W}}=\pm 2\tan\theta_{w}d_{B}\), respectively, which are listed in Table 1. The limits are calculated based on a total luminosity of \(20~\mathrm{ab}^{-1}\) of data in the \(H\)-mode (orange lines), \(6~\mathrm{ab}^{-1}\) in the \(WW\)-mode (blue lines), \(100~\mathrm{ab}^{-1}\) in the \(Z\)-mode (green lines), and \(1~\mathrm{ab}^{-1}\) in the \(t\bar{t}\)-mode (red lines). Because of the additional contribution from the \(W\) boson, the constraints on \(d_{\gamma}^{e}\) (plotted with solid lines) are always more stringent than those on \(d_{\gamma}^{\mu,\tau}\) (plotted with dotted lines), except in scenario I with \(d_{W}=0\), where the sensitivities for all three lepton flavors are the same. In the \(Z\)-mode at CEPC, the constraints on \(d_{\gamma}\) for the different lepton flavors are almost the same, because the additional \(t\)-channel contribution for the electron flavor can be neglected compared to the \(s\)-channel due to the \(Z\) resonance when \(d_{Z}\neq 0\). One can see that the \(Z\)-mode has the best sensitivity in all four scenarios for the HNL with small mass. Especially in scenarios I, II, and IV with \(d_{Z}\neq 0\), the upper limits on \(d_{\gamma}^{e}\) probed by the \(Z\)-mode are stronger than the ones from the other three running modes at CEPC by more than one order of magnitude in the mass region of about \((1\sim 50)\) GeV. What is more, in scenarios II and IV, the \(Z\)-mode can give about two orders of magnitude of improvement over the other three running modes at CEPC in the sensitivity to \(d_{\gamma}^{\mu,\tau}\). This is because the \(Z\)-resonance with \(\sqrt{s}\simeq M_{Z}\) can significantly improve the production rate for HNLs at electron colliders. In Fig. 
4, we show the 95% C.L. upper limits on the dipole coupling \(d_{\gamma}\) with \(m_{N}=10\) GeV as a function of \(a\) in the \(Z\)-mode at CEPC via the monophoton searches, plotted with a black dashed line. One sees that the curve behaves similarly to the one from LEP1. The upper bounds on \(d_{\gamma}\) with \(a>0\) lie in the range of \((5.8\times 10^{-7},6.0\times 10^{-6})\), about two orders of magnitude stronger than at LEP1. We also present a projection for \(d_{\gamma}\) as a function of \(a\) with an assumed future limit on the branching ratio of \(10^{-7}\), for the HNL with a mass of 10 GeV at 95% C.L., on the right of Fig. 4 with a black dashed line. The graph on the right of Fig. 5 shows the production rates of the processes \(e^{+}e^{-}\to N\nu_{e}\to\nu_{e}\bar{\nu}_{e}\gamma\) (red solid line) and \(e^{+}e^{-}\to N\nu_{\mu,\tau}\to\nu_{\mu,\tau}\bar{\nu}_{\mu,\tau}\gamma\) (red dashed line) as a function of \(a\) with \(m_{N}=100\) GeV and \(d_{\gamma}=10^{-5}\) at CEPC with \(\sqrt{s}=240\) GeV in \(H\)-mode. The corresponding 95% C.L. upper limits on the dipole portal couplings \(d_{\gamma}^{e}\) (black solid line) and \(d_{\gamma}^{\mu,\tau}\) (black dashed line) for \(m_{N}=100\) GeV with a luminosity of 20 ab\({}^{-1}\) are also shown there. With \(|a|\leq 10\) and \(m_{N}=100\) GeV, the upper limits on the dipole portal couplings to the HNL, \(d_{\gamma}^{e}\) and \(d_{\gamma}^{\mu,\tau}\), lie in the range of \((2.3\times 10^{-6},1.2\times 10^{-5})\) and \((9.4\times 10^{-6},1.5\times 10^{-5})\) from monophoton searches at the future CEPC in \(H\)-mode, respectively. Figure 6: The expected 95% C.L. upper limits on the electron-neutrino \(d_{\gamma}^{e}\) (solid lines), and muon- or tau-neutrino \(d_{\gamma}^{\mu,\tau}\) (dotted lines) dipole portal coupling to HNLs under the four assumptions listed in Table 1, at CEPC in the \(Z\)-mode with 100 ab\({}^{-1}\) luminosity (green lines), in the \(H\)-mode with 20 ab\({}^{-1}\) luminosity (black lines), in the \(WW\)-mode with 6 ab\({}^{-1}\) luminosity (blue lines), and in the \(t\bar{t}\)-mode with 1 ab\({}^{-1}\) luminosity (red lines), respectively. ## VI Discussion and conclusion The landscape of current constraints on active-sterile neutrino transition magnetic moments \(d_{\gamma}^{k}\) with \(k=e,\mu,\tau\), coming from terrestrial experiments such as Borexino [25], Xenon-1T [25], CHARM-II [21], MiniBooNE [24], LSND [24], NOMAD [24; 61], and DONUT [62], and from the astrophysical supernova SN 1987A [24], is summarized in Fig. 7 with gray shaded regions. These constraints basically do not depend on the ratio \(a=d_{\mathcal{W}}/d_{B}\), since the typical scattering energies are far below the electroweak scale. It is noted that the constraints from XENON-1T, Borexino [25] and SN 1987A [24] are flavor-universal. The blue shaded regions in Fig. 7 present the sensitivities to \(d_{\gamma}\) at LEP in the four scenarios listed in Table 1, which are the combination of the best constraints shown in Fig. 3, using the monophoton data by the OPAL Collaboration at LEP1 [53] and by the DELPHI Collaboration at LEP2 [54], and the measurement of \(Z\) decay [55; 56]. The combination of the best constraints from the four running modes at CEPC in Fig. 6 is also shown in Fig. 7 with red lines. It can be seen that the region constrained by the \(Z\) invisible decay measured at LEP is already excluded by Xenon-1T. 
The constraints on the transition magnetic moments of the three SM active neutrinos (\(\nu_{e,\mu,\tau}\)) are the same for the measurement of \(Z\) decay, while the constraints from the monophoton searches at LEP and the future CEPC are in principle different for \(d_{\gamma}^{e}\) and for \(d_{\gamma}^{\mu,\tau}\) when \(a\neq 0\) (\(d_{\mathcal{W}}\neq 0\)), because of the additional contributions from the \(W\) mediator. For \(d_{\gamma}^{e}\), besides the flavor-universal constraints from XENON-1T, Borexino [25] and SN 1987A [24], there are also complementary limits from LSND [24] for \(m_{N}\lesssim 0.07\) GeV. For a heavier \(N\) coupling only to \(\nu_{e}\), LEP can explore previously unconstrained parameter regions, and the reach will be greatly improved by CEPC. In scenario III, with \(d_{Z}=0\), the contribution from the \(Z\) boson vanishes, leading to the weakest limits on \(d_{\gamma}^{e}\) among the four scenarios, which can reach down to about \(3.3\times 10^{-4}\) at LEP and \(6.4\times 10^{-6}\) at CEPC. In scenario II, with \(d_{B}=0\), CEPC can probe the limit on \(d_{\gamma}^{e}\) down to about \(1.7\times 10^{-7}\), which is about two orders of magnitude stronger than LEP, where the limit reaches down to about \(1.3\times 10^{-5}\) from the measurement of \(\mathrm{Br}(Z\rightarrow\gamma+\mathrm{invisible})\). For \(d_{\gamma}^{\mu}\), there are terrestrial constraints from CHARM-II [21], MiniBooNE, and NOMAD [24]. In scenario III, the best limit on \(d_{\gamma}^{\mu}\) at LEP in the plotted region is from the monophoton searches by the DELPHI Collaboration at LEP2 [54]; it is weaker than the one on \(d_{\gamma}^{e}\) due to the absence of the \(W\)-exchange channel, but can still reach unexplored parameter regions when \(m_{N}\gtrsim 1\) GeV. In the other three scenarios with \(d_{Z}\neq 0\), the limits on \(d_{\gamma}^{\mu}\) with \(m_{N}\lesssim 90\) GeV shown in Fig. 7 are from the \(Z\) decay measurements at LEP, and are thus the same as for \(d_{\gamma}^{e}\); they surpass the current limits from NOMAD [24] for \(m_{N}\) larger than 5.0 GeV, 3.6 GeV, and 3.9 GeV in scenarios I, II, and IV, respectively. The expected limits from the monophoton searches at CEPC are complementary to the current limits when \(m_{N}\gtrsim 3.5\) GeV in scenario III and \(m_{N}\gtrsim 0.4\) GeV in the other three scenarios. For \(d_{\gamma}^{\tau}\), there is a 90% C.L. upper limit of \(5.8\times 10^{-5}~{\rm GeV}^{-1}\) given by DONUT [62] for \(m_{N}<0.3\) GeV. The constraints on \(d_{\gamma}^{\tau}\) from LEP and CEPC are the same as those for \(d_{\gamma}^{\mu}\). The monophoton searches at LEP2 can provide constraints complementary to DONUT on \(d_{\gamma}^{\tau}\) when \(m_{N}>0.3\) GeV in scenario III with \(d_{Z}=0\). In the other scenarios with \(d_{Z}\neq 0\), the measurement of \(Z\to\gamma+{\rm invisible}\) at LEP provides the leading sensitivity to \(d_{\gamma}^{\tau}\) for \(m_{N}\gtrsim 0.05~(0.03)\) GeV in scenario I (II/IV), respectively. The monophoton searches at CEPC can fill a huge unconstrained void of the \(d_{\gamma}^{\tau}-m_{N}\) parameter space when \(m_{N}\gtrsim 0.2\) GeV. Figure 7: The expected 95% C.L. exclusion limits on the active-sterile neutrino transition magnetic moment \(d_{\gamma}\) in the four scenarios listed in Table 1 at LEP (blue shaded regions), which are the combination of the best constraints shown in Fig. 
3 using the monophoton data by the OPAL Collaboration at LEP1 [53] and by the DELPHI Collaboration at LEP2 [54], and the measurement of \(Z\) decay [55; 56], and at CEPC (red lines), which are the combination of the best constraints from the four running modes at CEPC in Fig. 6, for the three lepton flavors, respectively. The landscape of current leading constraints is also shown with shaded regions, from Borexino [25], Xenon1T [25], LEP [24], and SN-1987A [24], which are relevant for all three SM neutrinos; LSND [24] only for \(d_{\gamma}^{e}\); CHARM-II [21], MiniBooNE [24], and NOMAD [24; 61] only for \(d_{\gamma}^{\mu}\); and DONUT [62] only for \(d_{\gamma}^{\tau}\). ###### Acknowledgements. This work was supported in part by the National Natural Science Foundation of China (Grant No. 12205153, No. 11805001) and the 2021 Jiangsu Shuangchuang (Mass Innovation and Entrepreneurship) Talent Program (JSSCBS20210213).
2308.05052
Designing Cellular Networks for UAV Corridors via Bayesian Optimization
As traditional cellular base stations (BSs) are optimized for 2D ground service, providing 3D connectivity to uncrewed aerial vehicles (UAVs) requires re-engineering of the existing infrastructure. In this paper, we propose a new methodology for designing cellular networks that cater for both ground users and UAV corridors based on Bayesian optimization. We present a case study in which we maximize the signal-to-interference-plus-noise ratio (SINR) for both populations of users by optimizing the electrical antenna tilts and the transmit power employed at each BS. Our proposed optimized network significantly boosts the UAV performance, with a 23.4dB gain in mean SINR compared to an all-downtilt, full-power baseline. At the same time, this optimal tradeoff nearly preserves the performance on the ground, even attaining a gain of 1.3dB in mean SINR with respect to said baseline. Thanks to its ability to optimize black-box stochastic functions, the proposed framework is amenable to maximize any desired function of the SINR or even the capacity per area.
Mohamed Benzaghta, Giovanni Geraci, David Lopez-Perez, Alvaro Valcarce
2023-08-09T16:32:39Z
http://arxiv.org/abs/2308.05052v1
# Designing Cellular Networks for UAV Corridors via Bayesian Optimization ###### Abstract As traditional cellular base stations (BSs) are optimized for 2D ground service, providing 3D connectivity to un-crewed aerial vehicles (UAVs) requires re-engineering of the existing infrastructure. In this paper, we propose a new methodology for designing cellular networks that cater for both ground users and UAV corridors based on Bayesian optimization. We present a case study in which we maximize the signal-to-interference-plus-noise ratio (SINR) for both populations of users by optimizing the electrical antenna tilts and the transmit power employed at each BS. Our proposed optimized network significantly boosts the UAV performance, with a 23.4 dB gain in mean SINR compared to an all-downtilt, full-power baseline. At the same time, this optimal tradeoff nearly preserves the performance on the ground, even attaining a gain of 1.3 dB in mean SINR with respect to said baseline. Thanks to its ability to optimize black-box stochastic functions, the proposed framework is amenable to maximize any desired function of the SINR or even the capacity per area. ## I Introduction Next-generation mobile networks are expected to provide reliable connectivity to UAVs1 for low-latency control and mission-specific data payloads [1, 2, 3]. However, cellular base stations (BSs) are traditionally designed to optimize _2D connectivity_ on the ground, which results in UAVs not being reached by their primary antenna lobes. Furthermore, UAVs flying above buildings experience line-of-sight (LoS) interference from numerous BSs, causing a degraded signal-to-interference-plus-noise ratio (SINR) [4, 5]. Achieving _3D connectivity_ requires re-engineering existing ground-focused deployments. Recent proposals for ubiquitous aerial connectivity rely on network densification [6, 7], dedicated infrastructure for aerial services [8, 9], or utilizing satellites to supplement the ground network [10], all of which may require costly hardware or signal processing upgrades. Footnote 1: Short for uncrewed aerial vehicles, commonly known as _drones_. Pessimistic conclusions from the above stem from the assumption that UAVs will fly uncontrollably and that cellular networks must provide coverage in every 3D location. However, as the number of UAVs increases, they could be restricted to specific air routes, known as _UAV corridors_, regulated by the appropriate authorities [11]. With the concept of UAV corridors gaining acceptance, researchers have started studying UAV trajectory optimization by matching a UAV's path to the best network coverage pattern [12, 13, 14, 15]. However, the definition of UAV corridors will likely prioritize safety over network coverage, limiting the scope of coverage-based UAV trajectory optimization and requiring instead a 3D network design tailored to UAV corridors. Recent research has focused on fine-tuning cellular deployments for UAV corridors using ad-hoc system-level optimization [16, 17, 18, 19], as well as theoretical analysis [20, 21, 22]. Despite these promising contributions, a scalable optimization framework is still needed to maximize performance functions that are mathematically intractable. In this paper, we propose a new methodology based on Bayesian optimization (BO) to design a cellular deployment for both ground users (GUE) and UAVs flying along corridors. 
For traditional ground-focused networks, BO has proven useful in achieving coverage/capacity tradeoffs [23], optimal radio resource allocation [24, 25], and mobility management [26]. BO can effectively maximize expensive-to-evaluate stochastic performance functions, and unlike other non-probabilistic methods, converge rapidly without requiring a large amount of data. As a case study, we maximize the mean SINR perceived by GUEs as well as UAVs on corridors by optimizing the electrical antenna tilts and the transmit power employed by each BS. We do so under realistic 3GPP assumptions for the network deployment and propagation channel model. Our main findings can be summarized as follows: * The proposed algorithm reaches convergence in less than 170 iterations for all scenarios tested. In all cases, after as few as 80 iterations, the algorithm only falls short of its final performance by less than 10 %. * Unlike a traditional cellular configuration where all BSs are downtilted and transmit at full power, pursuing a signal quality tradeoff between the ground and the UAV corridors results in a subset of the BSs being uptilted, with the rest remaining downtilted or turned-off. Such configuration is highly non-obvious and difficult to design heuristically. * The proposed optimized network boosts the SINR on the UAV corridor, with a 23.4 dB gain in mean compared to an all-downtilt, full-power baseline. Meanwhile, it nearly preserves the SINR on the ground, even attaining a gain of 1.3 dB in mean SINR with respect to said baseline. ## II System Model We now introduce the deployment, channel model, and performance metric considered. (Also see Table I.) ### _Network Deployment_ We consider the downlink of a cellular network as specified by the 3GPP [27, 28]. A total of 57 BSs are deployed at a height of 25 m. BSs are deployed on a wrapped-around hexagonal layout consisting of 19 sites with a 500 m inter-site distance (ISD). A site comprises three co-located BSs, each creating a sector (i.e., a cell) spanning a \(120^{\circ}\) angle in azimuth. Let \(\mathcal{B}\) denote the set of BSs. We set the transmit power \(p_{b}\leq 46\) dBm and vertical antenna tilt \(\theta_{b}\in[-90^{\circ},90^{\circ}]\) of each BS \(b\in\mathcal{B}\) as the object of optimization, with negative and positive angles denoting downtilts and uptilts, respectively. The network serves all user equipment (UE), i.e., both GUEs and UAVs, whose sets are denoted as \(\mathcal{G}\) and \(\mathcal{U}\), respectively. All GUEs are distributed uniformly across the entire cellular layout at a height of 1.5 m, with an average of 15 GUEs per sector. UAVs are uniformly distributed along a predefined aerial region consisting of four corridors arranged as specified in Table I and illustrated in Fig. 5, with an average of 50 uniformly deployed UAVs per corridor. ### _Propagation Channel_ The network operates on a 10 MHz band in the 2 GHz spectrum, with the available bandwidth fully reused across all cells. All radio links experience path loss and lognormal shadow fading. BSs are equipped with a directive antenna with a maximum gain of 8 dBi and a vertical (resp. horizontal) half-power beamwidth of 10\({}^{\circ}\) (resp. 65\({}^{\circ}\)). All UEs are equipped with a single omnidirectional antenna. We denote \(G_{b,k}\) as the large-scale power gain between BS \(b\) and UE \(k\), comprising path loss, shadow fading, and antenna gain, with the latter depending on the antenna tilt \(\theta_{b}\). 
We denote \(h_{b,k}\) as the small-scale block fading between cell \(b\) and UE \(k\). We assume that the GUEs undergo Rayleigh fading and that the UAV links experience pure LoS propagation conditions, given their elevated position with respect to the clutter of buildings.2 Each UE \(k\) is associated with the BS \(b_{k}\) providing the largest average received signal strength (RSS). Footnote 2: The small-scale fading model does not affect the conclusions drawn herein. ### _Performance Metric_ The downlink SINR in decibels (dB) experienced by UE \(k\) from its serving BS \(b_{k}\) on a given time-frequency physical resource block is given by \[\mathtt{SINR}_{\mathtt{dB},k}=10\,\log_{10}\left(\frac{p_{b_{k}}\cdot G_{b_{k },k}\cdot|h_{b_{k},k}|^{2}}{\sum\limits_{b\in\mathcal{B}\setminus b_{k}}p_{b} \cdot G_{b,k}\cdot|h_{b,k}|^{2}+\sigma_{\text{T}}^{2}}\right), \tag{1}\] where \(\sigma_{\text{T}}^{2}\) denotes the thermal noise power. The SINR in (1) depends on the vertical antenna tilts \(\theta_{b}\) as well as on the transmit powers \(p_{b}\) of all BSs--the former through the large-scale gains \(G_{b,k},b\in\mathcal{B}\). Our goal is to determine the set of BS antenna tilts and transmit powers that maximize the downlink SINR in (1) averaged over all UEs in the network.3 We therefore define the following objective function \(f(\cdot)\) to be maximized: Footnote 3: Note that the proposed framework is amenable to maximize any desired function—not necessarily the mean—of the RSS, SINR, or the capacity. \[f\left(\boldsymbol{\theta},\boldsymbol{p}\right)=\frac{\lambda}{\|\mathcal{U }\|}\cdot\sum_{k\in\mathcal{U}}\mathtt{SINR}_{\mathtt{dB},k}+\frac{1-\lambda} {\|\mathcal{G}\|}\cdot\sum_{k\in\mathcal{G}}\mathtt{SINR}_{\mathtt{dB},k}, \tag{2}\] where the vectors \(\boldsymbol{\theta}\) and \(\boldsymbol{p}\) respectively contain the antenna tilts \(\theta_{b}\) and transmit powers \(p_{b}\) of all BSs \(b\in\mathcal{B}\) and \(\|\cdot\|\) denotes the cardinality of a set. The parameter \(\lambda\in[0,1]\) is a mixing ratio that trades off GUE and UAV performance. As special cases, \(\lambda=0\) and \(\lambda=1\) optimize the cellular network for GUEs only and UAVs only, respectively. ## III Proposed Methodology In this paper, we use Bayesian optimization to determine the set of BS antenna tilts and transmit powers that maximize the objective function defined in (2). BO is a framework suitable for black-box optimization, where the objective function \(f(\cdot)\) Fig. 1: Cellular network with downtilted and uptilted BSs supporting GUEs and UAVs flying along corridors (blurred green) [21]. is non-convex, non-linear, stochastic, high-dimensional and/or computationally expensive to evaluate. In essence, BO uses the Bayes theorem to perform an informed search over the solution space, and works by iteratively constructing a probabilistic _surrogate model_ of the function being optimized based on prior evaluations of such function at a number of points in the search space [29]. The surrogate model is easier to evaluate than the function being optimized and is updated with each point evaluated. An _acquisition function_\(\alpha(\cdot)\) is then used to interpret and score the response from the surrogate to decide which point in the search space should be evaluated next. The acquisition function balances exploration (searching for new and potentially better solutions) and exploitation (focusing on the currently best-performing solutions). Further details on our methodology are provided as follows. 
### _Evaluation of the Objective Function and Surrogate Model_ In this paper, a query point \(\textbf{x}=[\boldsymbol{\theta}^{\top},\boldsymbol{p}^{\top}]^{\top}\) is defined by a configuration of antenna tilts \(\theta_{b}\) and transmit powers \(p_{b}\) of all BSs \(b\in\mathscr{B}\). The corresponding value of the objective function \(f(\textbf{x})\) is the mean SINR over all UEs under given antenna tilts \(\boldsymbol{\theta}\) and transmit powers \(\boldsymbol{p}\) and is obtained from (2). For convenience, let us define \(\textbf{X}=[\textbf{x}_{1},\ldots,\textbf{x}_{N}]\) as a set of \(N\) points and \(\textbf{f}(\textbf{X})=[f_{1},\ldots,f_{N}]^{\top}\) as the set of corresponding objective function evaluations, with \(f_{i}=f(\textbf{x}_{i})\), \(i=1,\ldots,N\). As described in Section II, the objective \(f(\cdot)\) being optimized is a mathematically intractable stochastic function driven by the models and assumptions further detailed in Table I, which may be obtained by a cellular operator in real-time. To validate our proposed framework, we evaluate \(f(\cdot)\) through system-level simulations. Such simulations are affected by the inherent randomness of the UE locations and the probabilistic channel model in (1), thus yielding a noisy sample \(\tilde{f}(\textbf{x})\) when evaluating a given point **x**. Following a standard BO framework, we use a Gaussian process (GP) regressor to create a surrogate model that approximates the objective function, denoted as \(\widehat{f}(\cdot)\)[29]. The resulting GP model allows to predict the value of \(\widehat{f}(\textbf{x})\) for a queried point **x** given the previous observations \(\tilde{\textbf{f}}(\textbf{X})=\tilde{\textbf{f}}\) over which the model is constructed. Formally, the GP prior on the objective \(\tilde{f}(\textbf{x})\) prescribes that, for any set of inputs **X**, the corresponding objectives \(\tilde{\textbf{f}}\) are jointly distributed as \[p(\tilde{\textbf{f}})=\mathcal{N}(\tilde{\textbf{f}}|\,\boldsymbol{\mu}( \textbf{X}),\textbf{K}(\textbf{X})\,), \tag{3}\] where \(\boldsymbol{\mu}(\textbf{X})=[\mu(\textbf{x}_{1}),\ldots,\mu(\textbf{x}_{N}) ]^{\top}\) is the \(N\times 1\) mean vector, and \(\textbf{K}(\textbf{X})\) is the \(N\times N\) covariance matrix, whose entry \((i,j)\) is given as \([\textbf{K}(\textbf{X})]_{i,j}=k(\textbf{x}_{i},\textbf{x}_{j})\) with \(i,j\in\{1,\ldots,N\}\). For any point **x**, the mean \(\mu(\textbf{x})\) provides a prior knowledge on the objective \(f(\textbf{x})\), while the kernel \(\textbf{K}(\textbf{X})\) indicates the uncertainty across pairs of values of **x**. 
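Before moving to the posterior update, the prior in (3) can be made concrete with a minimal Python sketch. This is not the BoTorch model used in Section IV: it builds the covariance matrix for a handful of random tilt/power configurations with a Matern-5/2 kernel (the kernel adopted later for \(\textbf{K}(\textbf{X})\)) and draws one joint sample of the objective values; the variance, length-scale and dataset size are illustrative assumptions.

```python
# Minimal sketch of the GP prior in Eq. (3): zero prior mean, Matern-5/2 kernel.
# Hyperparameters and dataset size are illustrative, not fitted values.
import numpy as np

def matern52(X1, X2, variance=1.0, lengthscale=10.0):
    """Covariance matrix with [K]_ij = k(x_i, x_j) for the Matern-5/2 kernel."""
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    a = np.sqrt(5.0) * d / lengthscale
    return variance * (1.0 + a + a**2 / 3.0) * np.exp(-a)

rng = np.random.default_rng(0)
N, B = 5, 57                                   # 5 prior points, 57 BSs
tilts = rng.uniform(-90.0, 90.0, size=(N, B))  # electrical tilts in degrees
powers = rng.uniform(6.0, 46.0, size=(N, B))   # transmit powers in dBm
X = np.hstack([tilts, powers])                 # each row is one query point x = [theta^T, p^T]^T

mu = np.zeros(N)                               # prior mean vector mu(X)
K = matern52(X, X) + 1e-6 * np.eye(N)          # K(X) with a small jitter for stability
f_tilde = rng.multivariate_normal(mu, K)       # one joint draw from Eq. (3)
print(f_tilde)
```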
Given a set of observed noisy samples \(\tilde{\textbf{f}}\) at previously sampled points **X**, the posterior distribution of \(\widehat{f}(\textbf{x})\) at point **x** can be obtained as \[p(\widehat{f}(\textbf{x})=\widehat{f}\,|\,\textbf{X},\tilde{\textbf{f}})= \mathcal{N}(\widehat{f}\,|\,\mu(\textbf{x}\,|\,\textbf{X},\tilde{\textbf{f}}),\sigma^{2}(\textbf{x}\,|\,\textbf{X},\tilde{\textbf{f}})), \tag{4}\] with \[\mu(\textbf{x}\,|\,\textbf{X},\tilde{\textbf{f}})=\mu(\textbf{x})+\tilde{ \textbf{k}}(\textbf{x})^{\top}(\tilde{\textbf{K}}(\textbf{X}))^{-1}(\tilde{ \textbf{f}}-\boldsymbol{\mu}(\textbf{X})), \tag{5}\] \[\sigma^{2}(\textbf{x}\,|\,\textbf{X},\tilde{\textbf{f}})=k(\textbf{x},\textbf {x})-\tilde{\textbf{k}}(\textbf{x})^{\top}(\tilde{\textbf{K}}(\textbf{X}))^{ -1}\,\tilde{\textbf{k}}(\textbf{x}), \tag{6}\] where \(\tilde{\textbf{k}}(\textbf{x})=[k(\textbf{x},\textbf{x}_{1}),\ldots,k(\textbf{ x},\textbf{x}_{N})]^{\top}\) is the \(N\times 1\) covariance vector and \(\tilde{\textbf{K}}(\textbf{X})=\textbf{K}(\textbf{X})+\sigma^{2}\textbf{I}_{ \textbf{N}}\), with \(\sigma^{2}\) denoting the observation noise represented by the variance of the Gaussian distribution, and \(\textbf{I}_{\textbf{N}}\) denoting the \(N\times N\) identity matrix. Note that (5) and (6) represent the mean and variance of the estimation, the latter indicating the degree of confidence. ### _Proposed BO Algorithm and Acquisition Function_ The proposed BO algorithm starts by creating a GP prior \(\{\mu(\cdot),k(\cdot,\cdot)\}\) based on a dataset \(\mathcal{D}=\{\textbf{x}_{1},\ldots,\textbf{x}_{N_{\text{o}}},\tilde{f}_{1}, \ldots,\tilde{f}_{N_{\text{o}}}\}\) containing \(N_{\text{o}}\) initial observations. The dataset is constructed via system-level simulations according to the model and objective function defined in Section II. The antenna tilts \(\boldsymbol{\theta}_{i}\) and transmit powers \(\boldsymbol{p}_{i}\) for every observation point \(\textbf{x}_{i}=[\boldsymbol{\theta}_{i}^{\top},\boldsymbol{p}_{i}^{\top}]^{\top}\) in \(\mathcal{D}\) are chosen randomly in \([-90^{\circ},90^{\circ}]\) and \([6\,\text{dB},46\,\text{dB}]\), respectively. Once the initial GP prior is constructed, the vectors \(\boldsymbol{\theta}_{0}\) and \(\boldsymbol{p}_{0}\) are initialized with all entries set to \(0^{\circ}\) and \(46\,\text{dB}\)m, respectively. We denote \(\hat{f}^{*}\) as the best observed objective value, which is initialized to \(\tilde{f}_{0}^{*}=-\infty\). The algorithm then iterates over each BS \(b\in\mathscr{B}\), one at a time.4 At each such iteration \(n\), only the antenna tilt and transmit power of the BS \(b_{n}\) under consideration are updated, while keeping the remaining entries of \(\boldsymbol{\theta}_{n}\) and \(\boldsymbol{p}_{n}\) fixed to their values from the previous iteration. The query point under optimization is thus reduced to a two-dimensional vector that we will denote as \(\widehat{\textbf{x}}_{n}=[\theta_{b_{n}},p_{b_{n}}]\). Footnote 4: At iteration \(n\), the BS considered is thus \(b_{n}=((n-1)\mod\|\boldsymbol{\mathbb{B}}\|)+1\). The algorithm then leverages the observations in \(\mathcal{D}\) to choose \(\widehat{\textbf{x}}_{n}\). This is performed via an acquisition function \(\alpha(\cdot)\), which is designed to trade off the exploration of new points in less favorable regions of the search space with the exploitation of well-performing ones. 
The former prevents getting caught in local maxima, whereas the latter minimizes the risk of excessively degrading performance.5 In what follows, we adopt the expected improvement (EI) as the acquisition function, which has been shown to perform well in balancing the trade-off between exploration and exploitation [24, 29]. At every iteration \(n\), the EI tests and scores a set of \(N_{\text{c}}\) randomly drawn candidate points \(\{\widehat{\textbf{x}}_{\text{cand}_{1}},\ldots,\widehat{\textbf{x}}_{\text{cand}_{N_{\text{c}}}}\}\) through the surrogate model. The EI is defined as [24, 30] Footnote 5: While in this paper we run the proposed optimization on system-level simulations, its practical implementation requires testing the performance (mean SINR) of each candidate point (BS tilt and power) in a real network, whereby it becomes undesirable to explore poorly performing points. \[\alpha\left(\widehat{\textbf{x}}_{\text{cand}}\,|\,\mathcal{D}\right)=\left[\,\mu\left(\widehat{\textbf{x}}_{\text{cand}}\,|\,\mathcal{D}\right)-\widehat{f}^{*}-\xi\,\right]\cdot\Phi(\delta)+\sigma^{2}\left(\widehat{\textbf{x}}_{\text{cand}}\,|\,\mathcal{D}\right)\cdot\phi(\delta), \tag{7}\] where \(\widehat{f}^{*}=\text{max}_{i}\,\{\widehat{f}_{\text{cand}_{i}}\}\) denotes the current best approximated objective value according to the surrogate model, \(\Phi\) (resp. \(\phi\)) is the standard Gaussian cumulative (resp. density) distribution function, and \[\delta=\frac{\mu\left(\mathbf{\widehat{x}}_{\text{cand}}\,|\,\mathcal{D}\right)-\widehat{f}^{*}-\xi}{\sigma^{2}\left(\mathbf{\widehat{x}}_{\text{cand}}\,|\,\mathcal{D}\right)}, \tag{8}\] with \(\mu(\mathbf{\widehat{x}}_{\text{cand}}\,|\,\mathcal{D})\) and \(\sigma^{2}\left(\mathbf{\widehat{x}}_{\text{cand}}\,|\,\mathcal{D}\right)\) given in (5) and (6), respectively. The parameter \(\xi\in[0,1)\) in (7) and (8) regulates the exploration vs. exploitation tradeoff, with larger values promoting the former, and vice versa. In this paper, we aim for a risk-sensitive EI acquisition function and set \(\xi=0.01\). Leveraging batch evaluation, which allows for automatic dispatch of independent operations across multiple computational resources (e.g., GPUs), at each iteration we evaluate a set of \(N_{\text{c}}=500\) candidate points through the surrogate model, using 10 batches each consisting of 50 points. The query point \(\mathbf{\widehat{x}}_{n}\) is then chosen as \[\mathbf{\widehat{x}}_{n}=\underset{i}{\text{arg max}}\ \ \alpha\left(\mathbf{\widehat{x}}_{\text{cand}_{i}}\,|\,\mathcal{D}\right). \tag{9}\] Once \(\mathbf{\widehat{x}}_{n}=[\theta_{b_{n}},p_{b_{n}}]\) is determined, the vectors \(\boldsymbol{\theta}_{n}\) and \(\boldsymbol{p}_{n}\) are obtained from \(\boldsymbol{\theta}_{n-1}\) and \(\boldsymbol{p}_{n-1}\) by replacing their \(b_{n}\)-th entries with \(\theta_{b_{n}}\) and \(p_{b_{n}}\), respectively, yielding \(\mathbf{x}_{n}=[\boldsymbol{\theta}_{n}^{\top},\boldsymbol{p}_{n}^{\top}]^{\top}\). A new observation of the objective function \(\tilde{f}(\mathbf{x}_{n})\) is then produced, and the dataset \(\mathcal{D}\), the GP prior, and the best observed objective value \(\tilde{f}^{*}\) are all updated accordingly. The algorithm then moves on to optimizing the antenna tilt and transmit power of BS \(b_{n+1}\), until all BSs in \(\mathcal{B}\) have been optimized.
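One such inner step, comprising the posterior of (5)-(6), the EI score of (7)-(8) and the candidate selection of (9), can be sketched in plain NumPy as follows. This is a schematic illustration rather than the BoTorch implementation of Section IV: the kernel hyperparameters, noise level and toy data are assumptions, \(\xi=0.01\) and \(N_{\text{c}}=500\) follow the text, and the best value observed so far is used in place of the surrogate maximum \(\widehat{f}^{*}\) for simplicity.

```python
# Schematic NumPy sketch of one per-BS step: Eqs. (5)-(9), as written above
# (note the posterior variance is used in (7)-(8), following the text).
import numpy as np
from scipy.stats import norm

def kernel(X1, X2, variance=1.0, lengthscale=10.0):
    """Matern-5/2 covariance, the kernel adopted for K(X) in Sec. IV."""
    d = np.linalg.norm(X1[:, None, :] - X2[None, :, :], axis=-1)
    a = np.sqrt(5.0) * d / lengthscale
    return variance * (1.0 + a + a**2 / 3.0) * np.exp(-a)

def gp_posterior(x, X, f_tilde, noise=0.1):
    """Eqs. (5)-(6): posterior mean and variance at x, with a zero prior mean."""
    K_inv = np.linalg.inv(kernel(X, X) + noise * np.eye(len(X)))
    k_vec = kernel(x[None, :], X)[0]
    mean = k_vec @ K_inv @ f_tilde
    var = kernel(x[None, :], x[None, :])[0, 0] - k_vec @ K_inv @ k_vec
    return mean, max(var, 1e-12)

def expected_improvement(x, X, f_tilde, f_best, xi=0.01):
    """Eqs. (7)-(8): EI of one candidate tilt/power pair for the BS under optimization."""
    mean, var = gp_posterior(x, X, f_tilde)
    delta = (mean - f_best - xi) / var
    return (mean - f_best - xi) * norm.cdf(delta) + var * norm.pdf(delta)

# Eq. (9): score Nc = 500 random candidates [theta_b, p_b] and keep the best one.
rng = np.random.default_rng(1)
X = rng.uniform([-90.0, 6.0], [90.0, 46.0], size=(20, 2))  # past (tilt, power) queries
f_tilde = rng.normal(size=20)                              # their noisy objective values
f_best = f_tilde.max()                                     # best value observed so far
cands = rng.uniform([-90.0, 6.0], [90.0, 46.0], size=(500, 2))
scores = [expected_improvement(c, X, f_tilde, f_best) for c in cands]
print(cands[int(np.argmax(scores))])
```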
This loop over all BSs is then repeated until the best observed value \(\tilde{f}^{*}\) has remained unchanged for \(\ell_{\text{max}}\) consecutive loops, after which the algorithm recommends the point \(\mathbf{x}^{*}\) that produced the best observation \(\tilde{f}^{*}\). The proposed approach is summarized in Algorithm 1. ``` Input: Initial dataset \(\mathcal{D}=\{\mathbf{x}_{1},\dots,\mathbf{x}_{N_{\text{c}}},\tilde{f}_{1}, \dots,\tilde{f}_{N_{\text{c}}}\}\); Output: Optimal configuration \(\mathbf{x}^{*}\); Initialization: Create a GP prior \(\{\mu(\cdot),k(\cdot,\cdot)\}\) using \(\mathcal{D}\) and (3); Set all entries of \(\boldsymbol{\theta}_{0}\) to \(0^{\circ}\) and all entries of \(\boldsymbol{p}_{0}\) to \(46\) dBm; Set \(\mathbf{x}_{0}=[\boldsymbol{\theta}_{0}^{\top},\boldsymbol{p}_{0}^{\top}]^{\top}\), \(n=\ell=1\), \(\ell_{\text{max}}=3\), \(\tilde{f}^{*}=\tilde{f}_{0}^{*}=-\infty\); while\(\ell\leq\ell_{\text{max}}\)do \(b_{n}=((n-1)\mod\|\mathcal{B}\|)+1\); Draw \(N_{\text{c}}\) random candidate points \(\{\mathbf{\widehat{x}}_{\text{cand}_{1}},\dots,\mathbf{\widehat{x}}_{\text{ cand}_{N_{\text{c}}}}\}\); Evaluate all candidate points using (7); Obtain \(\mathbf{\widehat{x}}_{n}=[\theta_{b_{n}},p_{b_{n}}]\) from (9); Update \(\mathbf{x}_{n}\) with \(\theta_{b_{n}}\) and \(p_{b_{n}}\); Obtain observation \(\tilde{f}_{n}=\tilde{f}(\mathbf{x}_{n})\) using (2); Augment \(\mathcal{D}\) by including \(\mathbf{x}_{n}\) and \(\tilde{f}_{n}\); Update the GP prior \(\{\mu(\cdot),k(\cdot,\cdot)\}\) using \(\mathcal{D}\) and (3); if\(\tilde{f}_{n}>\tilde{f}_{n-1}^{*}\)then \(\tilde{f}_{n}^{*}\leftarrow\tilde{f}_{n}\); \(\mathbf{x}_{n}^{*}\leftarrow\mathbf{x}_{n}\); end if else \(\tilde{f}_{n}^{*}\leftarrow\tilde{f}_{n-1}^{*}\); end if if\(b_{n}=\|\mathcal{B}\|\)then if\(\tilde{f}_{n}^{*}>\tilde{f}^{*}\)then \(\tilde{f}^{*}\leftarrow\tilde{f}_{n}^{*}\); \(\mathbf{x}^{*}\leftarrow\mathbf{x}_{n}^{*}\); \(\ell\gets 0\); end if \(\ell\leftarrow\ell+1\); end if \(n\gets n+1\); end while ``` **Algorithm 1** Proposed BO algorithm ## IV Numerical Results In this section, we present the results obtained when applying our proposed framework introduced in Section III on the system model defined in Section II, for three values of \(\lambda\), namely 0, 1, and 0.5. We recall that these values correspond to optimizing the cellular network for GUEs only, for UAVs only, and for both with equal weight, respectively. The BO algorithm is run on BoTorch, an open-source library built upon PyTorch [31]. We use the Matern-5/2 kernel for \(\mathbf{K}(\mathbf{X})\) and fit the GP hyperparameters using maximum posterior estimation. _Convergence of the BO framework:_ Fig. 2 shows the convergence of the proposed BO algorithm by illustrating the best observed objective at each iteration \(n\). Convergence is reached in less than 170 iterations for all three values of \(\lambda\). In all cases, after as few as 80 iterations the algorithm only falls short of its final performance by less than 10%. In the remainder of this section, we discuss the network configuration recommended by the algorithm and quantify its final performance. _Optimal network configuration:_ Fig. 3 and Fig. 4 respectively show the optimal values of the vertical electrical antenna tilts \(\boldsymbol{\theta}\) and transmit powers \(\boldsymbol{p}\) for the case \(\lambda=0.5\), where a tradeoff is sought between SINR on the ground and along the aerial corridors. In both figures, the BS index denotes the deployment site (black dots in Fig. 
5), each comprising three sectors (cells). Markers indicate whether each cell is serving GUEs (green circles), UAVs (blue diamonds), or it is switched off to mitigate unnecessary interference (red crosses). The figures show that, unlike a traditional cellular network configuration where all BSs are downtilted (e.g., to \(-12^{\circ}\)[27]) and transmit at full power, pursuing an SINR tradeoff between the ground and the UAV corridors results in a subset of the BSs being uptilted (i.e., a total of 13 BSs), with the rest remaining downtilted or turned off. Such a configuration is non-obvious and would be difficult to design heuristically. Fig. 2: Convergence of the proposed algorithm, showing the evolution of the best observed objective vs. the number of iterations \(n\). _Connectivity along UAV corridors:_ Fig. 5 shows the resulting cell partitioning for the UAV corridors when the network is optimized for both populations of UEs with the recommended values for BS tilts and transmit powers given in Fig. 3 and Fig. 4 for \(\lambda=0.5\). Note that only the 13 up-tilted BSs (blue diamonds in Fig. 3) are exploited to provide service along the UAV corridors, each covering a different segment according to their geographical location and orientation. _Resulting SINR performance:_ Fig. 6 shows the cumulative distribution function (CDF) of the SINR perceived by GUEs (solid lines) and UAVs (dashed lines) when the cellular network is optimized for GUEs only (\(\lambda=0\), red), UAVs only (\(\lambda=1\), green), and both (\(\lambda=0.5\), blue). The performance of a traditional cellular network (black) is also shown as a baseline for comparison, where all BSs are downtilted to \(-12^{\circ}\) and transmit at full power as per 3GPP recommendations [27]. The following observations help interpret Fig. 6: * The curves labeled as {GUE, \(\lambda=0\)} (solid red) and {UAV, \(\lambda=1\)} (dashed green) can be regarded as performance upper bounds for GUEs and UAVs, respectively. This is the performance achieved when BS tilts and powers are optimized for the mean SINR of GUEs only and UAVs only, respectively. * The curves for \(\lambda=0.5\) (solid and dashed blue) show the optimal tradeoff reached by the proposed BO framework when the cellular network is designed to cater for both GUEs and UAV corridors, with equal weight. Fig. 6 demonstrates that the proposed framework can optimize the cellular network in a way that significantly boosts the UAV SINR, with a 23.4 dB gain in mean compared to the all-downtilt, full-power baseline (dashed blue vs. dashed black). The UAV SINR even approaches the upper bound obtained when the network disregards the performance on the ground, falling short by only 1.2 dB in mean (dashed blue vs. dashed green). At the same time, the solution nearly preserves the GUE SINR (solid blue), incurring a loss of 2.6 dB in mean with respect to the upper bound (solid red). When compared to the 3GPP all-downtilt, full-power baseline [27] (solid black), the optimal solution even attains a gain of 1.3 dB in mean GUE SINR. Indeed, said baseline was not designed for SINR, but rather for spatial reuse and capacity. It should thus come as no surprise that it slightly underperforms the proposed framework in terms of mean SINR. ## V Conclusion In this paper, we proposed a new methodology to design a cellular deployment for both ground and aerial service based on Bayesian optimization. Fig. 4: Optimized BS power for both GUEs and UAVs (\(\lambda=0.5\)).
Fig. 5: Cell partitioning for UAV corridors when the cellular network is optimized for both GUEs and UAVs (\(\lambda=0.5\)). Fig. 3: Optimized BS tilts for both GUEs and UAVs (\(\lambda=0.5\)). Green circles, blue diamonds, and red crosses respectively denote BSs serving GUEs, serving UAVs, and switched off. _Summary of results_: As a case study, we maximized the mean SINR perceived by GUEs as well as UAVs on corridors by optimizing the electrical antenna tilts and the transmit power employed at each BS. Unlike a traditional cellular network configuration in which all BSs are downtilted and transmit at full power, pursuing a signal quality tradeoff between the GUEs and UAVs on corridors results in a subset of the BSs being uptilted, with the rest remaining downtilted or turned off. Under this setting, our algorithm finds an optimal configuration that significantly boosts the UAV SINR, with a 23.4 dB gain in mean compared to an all-downtilt, full-power baseline. Meanwhile, this tradeoff nearly preserves the performance on the ground, even attaining a gain of 1.3 dB in mean SINR with respect to said baseline. _Future research directions:_ Thanks to its ability to optimize intractable stochastic functions, the proposed framework is amenable to maximizing other objectives of interest, such as an arbitrary function of the RSS, SINR, or the channel capacity. In particular, we conjecture that maximizing the capacity per area would lead to a different network configuration than the one obtained for the present case study. Furthermore, while in this article we defined a single objective function capturing the performance on the ground and along UAV corridors, an extension of this work could consider multi-objective BO by defining separate performance functions for the GUEs and UAVs on corridors. The goal would then be to find the Pareto front: a set of non-dominated solutions such that no objective can be improved without deteriorating another.
2302.06500
Imaging the inner astronomical unit of Herbig Be star HD 190073
Inner regions of protoplanetary disks host many complex physical processes such as star-disk interactions, magnetic fields, planet formation, and the migration of new planets. To directly study this region requires milli-arcsecond angular resolution, beyond the diffraction limit of the world's largest optical telescopes and even too small for the mm-wave interferometer ALMA. However, we can use infrared interferometers to image the inner astronomical unit. Here, we present new results from the CHARA and VLTI arrays for the young and luminous Herbig Be star HD 190073. We detect a sub-AU cavity surrounded by a ring-like structure that we interpret as the dust destruction front. We model the shape with 6 radial profiles, 3 symmetric and 3 asymmetric, and present a model-free image reconstruction. All the models are consistent with a near face-on disk with inclination $\lesssim 20^\circ$, and we measure an average ring radius of 1.4 $\pm 0.2$ mas (1.14 AU). Around $48\%$ of the total flux comes from the disk with ~$15\%$ of that emission appearing to emerge from inside the inner rim. The cause of emission is still unclear, perhaps due to different dust grain compositions or gas emission. The skewed models and the imaging point to an off-center star, possibly due to binarity. Our image shows a sub-AU structure, which seems to move between the two epochs inconsistently with Keplerian motion and we discuss possible explanations for this apparent change.
Nour Ibrahim, John D. Monnier, Stefan Kraus, Jean-Baptiste Le Bouquin, Narsireddy Anugu, Fabien Baron, Theo Ten Brummelaar, Claire L. Davies, Jacob Ennis, Tyler Gardner, Aaron Labdon, Cyprien Lanthermann, Antoine Mérand, Evan Rich, Gail H. Schaefer, Benjamin R. Setterholm
2023-02-13T16:25:49Z
http://arxiv.org/abs/2302.06500v1
# Imaging the inner astronomical unit of Herbig Be star HD 190073 ###### Abstract Inner regions of protoplanetary disks host many complex physical processes such as star-disk interactions, magnetic fields, planet formation, and the migration of new planets. To directly study this region requires milli-arcsecond angular resolution, beyond the diffraction limit of the world's largest optical telescopes and even too small for the mm-wave interferometer ALMA. However, we can use infrared interferometers to image the inner astronomical unit. Here, we present new results from the CHARA and VLTI arrays for the young and luminous Herbig Be star HD 190073. We detect a sub-AU cavity surrounded by a ring-like structure that we interpret as the dust destruction front. We model the shape with 6 radial profiles, 3 symmetric and 3 asymmetric, and present a model-free image reconstruction. All the models are consistent with a near face-on disk with inclination \(\lesssim 20^{\circ}\), and we measure an average ring radius of 1.4\(\pm\)0.2 mas (1.14 AU). Around 48% of the total flux comes from the disk with 15% of that emission appearing to emerge from inside the inner rim. The cause of emission is still unclear, perhaps due to different dust grain compositions or gas emission. The skewed models and the imaging point to an off-center star, possibly due to binarity. Our image shows a sub-AU structure, which seems to move between the two epochs inconsistently with Keplerian motion and we discuss possible explanations for this apparent change. ## 1 Introduction As the catalogs of exoplanets grow, we increasingly see evidence of otherworldly environments that we do not observe in our local solar system (see Cacciapuoti et al., 2022; Rein, 2012). Observing the early formation stages of planetary systems is key to understanding and eventually predicting the worlds we continually find. The planet formation process is still not well understood, especially around massive stars. Herbig Ae/Be stars are a class of intermediate to high-mass (2-10 M\({}_{\odot}\)) pre-main sequence stars of spectral type A or earlier that exhibit excess near-infrared (NIR) emission, which is associated with circumstellar disks. Disks around the more massive B stars are harder to study because they disappear on shorter timescales, around a few \(10^{5}\) yr, likely due to photoevaporation caused by UV radiation from the star. Herbig Be stars, in particular, are often observed with a factor of 5-10 lower disk masses than Herbig Ae and T Tauri stars, or in some cases, are not detected at all (Alonso-Albi et al., 2009). The inner regions of these disks are particularly interesting because they host many complex processes that go into planet formation, yet they are not well studied. Imaging the sub-astronomical unit (sub-AU) requires milli-arcsecond angular resolution, which is not achievable using the world's largest optical telescopes and is even too small for the mm-wave interferometer ALMA. However, infrared long-baseline interferometers can probe the disks at the sub-AU scale and give us a better look at the inner regions of the disks. Modeling has been a key tool for studying the structure of circumstellar accretion disks. Early models by Hillenbrand et al. (1992), which assumed a flat and optically thick disk extending a few stellar radii close to the star, were able to reproduce photometric near-infrared excess emission measurements. 
However, this theoretical picture was not confirmed by the interferometric observations carried out by Millan-Gabet et al. (1999), which found that the disk sizes had to be much larger than what was predicted by the thin and optically thick disk models. In 2001, Natta et al. (2001) and Dullemond et al. (2001) proposed a new model for the inner disk region, suggesting an optically thin cavity around the star and a "puffed-up" inner rim wall that is truncated at a sub-AU radius where the rim temperature is equal to the dust sublimation temperature. Further observations by Millan-Gabet et al. (2001) and Tuthill et al. (2001) showed that this model where the majority of the NIR excess originates from the dust sublimation radius, explained the large photometric NIR bump. Monnier & Millan-Gabet (2002) validated the "puffed-up" inner rim with the introduction of the size-luminosity diagram where they also showed that the dust sublimation temperature was between 1500-2000 K for their sample. However, more recent sub-milliarcsec interferometry observations of AB Aurigae and MWC 275 (HD 163296) carried out by Tannirkulam et al. (2008a) showed that models in which the dust evaporation rim solely produces the NIR excess, fail to explain the data. They found that a significant amount of the inner emission emerges from within the dust sublimation radius which likely does not have a sharp edge. Lazareff et al. (2017)'s PIONIER survey confirmed this general picture in addition to adding 27 new objects to the size-luminosity diagram by Monnier & Millan-Gabet (2002) and found some variations between sources. Most of what we know about Herbigs comes from observations of the more common and close-by Herbig Ae stars. Due to the rareness of massive stars and the very short disk lifetimes, only a handful of Herbig Be stars with detected disks are close enough for us to study, one of which is the B9 star HD 190073, also known as V1295 Aquila (see stellar parameters in Table 1) (Rich et al., 2022). HD 190073 is a pre-main sequence star with a spectral type of B9 and a temperature of approximately 9750K. It has a luminosity of \(\sim 760\) L\({}_{\odot}\) and a model-derived mass of 6.0 \(\pm 0.2\) M\({}_{\odot}\). Based on its mass, this star is expected to rapidly contract onto the main sequence with a spectral type of B5 and a temperature of 15500 K in \(\sim 10^{5}\) years (Cox, 2000). While HD 190073 shares some similarities with other well-studied pre-main sequence stars such as Herbigs AB Aur and MWC 275, it has more than twice the mass and will likely have a significantly different experience in terms of its disk evolution. It also has a detected magnetic field which is uncommon among Herbigs (Alecian et al., 2013). The magnetic field is much weaker than those of T Tauri stars, but it could be measured due to the star's narrow spectral lines. The low \(vsini\) 8.6 km/s is partly explained by a face-on geometry (Catala et al., 2007). Unfortunately, ALMA has not observed it yet so we cannot confirm through imaging that the disk is face-on. 2D models from the PIONIER/VLTI survey found that HD 190073 is nearly face-on and symmetric, but they did not have enough resolution to see any inner emission (Lazareff et al., 2017). The results from this PIONIER study revealed that the inclination of the HD 190073 disk is \(<20^{\circ}\), and therefore it is plausible that the slow \(v\sin i\) is due to the low inclination. Setterholm et al. 
(2018) recently presented the first CHARA results on HD 190073 with 3 times higher resolution. They used broadband data taken over many years and combined them as a single epoch. The resulting visibility curves lacked the expected bounce at larger baselines associated with a ring-like structure. The HD 190073 models showed a face-on disk that favored a Gaussian shape rather than a thin ring. This conclusion was similar to results from Tannirkulam et al. (2008a) for the older, less massive, and less luminous Herbig Ae stars AB Aur and MWC 275. Here in this work, we will revisit this analysis using new "snapshot" observations taken over a much shorter period of three months and with a much denser (u,v) coverage. We obtained the new H-band interferometry using the updated Michigan InfraRed Combiner - eXeter (MIRC-X) (Anugu et al., 2020) instrument at the Center for High Angular Resolution Astronomy (CHARA) Array (ten Brummelaar et al., 2005) and Precision Integrated-Optics Near-infrared Imaging ExpeRiment (PIONIER) (Le Bouquin et al., 2011) at the Very Large Telescope Interferometer (VLTI). With improved angular resolution and increased (u,v) coverage, we present new models with six different radial profiles along with a model-free image reconstruction of the sub-AU ring. The enhanced data quality allows us to model finer details in the disk structure and interpret the inner region dynamics more accurately. We start by describing our observation routines to collect the CHARA and VLTI data, as well as discussing our data reduction techniques in section SS 2. Next, we explain our methods for combining the data from the two instruments into two epochs, and we show a basic presentation of the (u,v) coverage, visibility, and closure phase measurements for each epoch in section SS 3. We present our symmetric and asymmetric models in SS 4 and describe them extensively. Then in section SS 5 we present model-free images and compare them to the models. Finally, we discuss in section SS 6 what the results of the modeling and imaging reveal about the inner disk of HD 190073. ## 2 Observations and Data Reduction This paper relies on new infrared interferometry from two different facilities, CHARA and VLTI. We coordinated observations between Mt.Wilson in California and Paranal Mountain in Chile using similar wavelengths for one month to get better-quality data. It was also important to get data around the same time due to time variability because of the short rotational period of the sub-AU region of the disk \(\sim 0.8\) years, which we calculated from previous estimations of the inner radius. Interferometric time variability has been reported for other young stellar objects (YSOs), and it was shown to be present in \(>10\%\) of the accretion disk sample studied by Kobus et al. (2020). The goal of the experiment is to make the best quality images and models by combining the baselines and (u,v) coverage of the two facilities. We were granted observing time over a three-month period, so we expect to be able to resolve some sub-AU disk motion if present. We have six observations using the CHARA array's MIRC-X instrument, and seven observations using the VLTI array's PIONIER instrument. In this section, we describe the observations at the two facilities. ### Chara HD 190073 was recorded using the MIRC-X instrument in the summer of 2019 as listed in Table 2. MIRC-X is an infrared six-telescope beam combiner at the CHARA Array. The CHARA interferometer is located at Mt. 
Wilson in California, USA, with an array of six 1-meter telescopes disposed in a Y shape, optimizing the imaging capability. The telescopes combine to have baselines of up to 331 meters. The maximum angular resolution is \(\frac{\lambda}{2B}\) which corresponds to 0.51 mas in H-band (\(\lambda=1.64\,\mu m\)). MIRC-X uses single-mode fibers to coherently combine the light from six telescopes simultaneously with an image-plane combination scheme and typically delivers a visibility precision better than 5%, and closure phase precision better than 1\({}^{\circ}\). The data were collected using the spectral resolution \(R=50\) mode using a prism dispersive element giving 8 spectral channels spread over \(\Delta\lambda=0.27\mu m\). We reduced the data using the MIRC-X reduction pipeline 1 (version 1.3.3) and calibrated the data using a custom IDL routine 2. The MIRC-X data reduction pipeline produces science-ready visibilities and closure phases written in OIFITS format (Duvert et al., 2017; Pauls et al., 2005). The raw visibility is calibrated using stars with known diameters. Typical observations execute a standard calibrator-science-calibrator (CAL-SCI-CAL) cycle. Calibrators are usually chosen, using the SearchCal software, to be close to the science target, both in terms of sky position and magnitude, and have a smaller angular diameter so that their visibility on a given baseline is less dependent on the diameter (Chelli et al., 2016). \begin{table} \begin{tabular}{l l} \hline \hline Property & HD 190073 \\ \hline \(\alpha\) (J2000) & 20\({}^{h}\)03\({}^{m}\)02\({}^{s}\).51 \\ \(\delta\) (J2000) & +5’44’16”.66 \\ Spectral Type & B9 \\ T\({}_{eff}\) & 9750 \(\pm\)125 K \\ R\({}_{\star}\) & 9.68 \(\pm\)0.44 R\({}_{\odot}\) \\ Age & 0.30 \(\pm\)0.02 Myr \\ Distance & 824.16 \(\pm\)21.83 pc \\ Log(L\({}_{\star}\)) L\({}_{\odot}\) & 2.88 \(\pm\) 0.03 \\ Mass & 6.0 \(\pm\)0.2 M\({}_{\odot}\) \\ Vmag & 7.79 \(\pm\)0.06 \\ Hmag & 6.61 \(\pm\)0.07 \\ \end{tabular} \end{table} Table 1: HD 190073 Stellar properties from Guzmán-Diaz et al. (2021) ### Vlt HD 190073 was recorded with the PIONIER instrument on multiple occasions throughout 2019, as listed in Table 3. PIONIER is a four-telescope beam combiner operating in the H-band (\(\lambda=1.64\,\mu m\)) at the VLTI. Data were obtained using the four auxiliary telescopes (ATs) in multiple baseline configurations giving baselines ranging from 11.3 m to 140.0 m, allowing for a maximum spatial resolution (\(\lambda/B\)) of 1.12 mas. Additionally, a prism dispersive element was used giving 6 spectral channels spread over \(\Delta\lambda=0.30\,\mu m\). Observations were taken in concatenations of calibrator (CAL) and science (SCI) targets in blocks of either CAL-SCI-CAL or CAL-SCI-CAL-SCI-CAL, to allow for effective monitoring of the transfer function and precise calibration. The data were reduced and calibrated using the standard PNDRS package v3.52 (Le Bouquin et al., 2011). ## 3 Basic Data Presentation ### Combining VLTI and CHARA Data Using the angular resolution of the instruments and the Keplerian velocity of the inner disk, we calculated the timescale on which each instrument is able to resolve a change in the disk (eg. rotation). CHARA has a maximum angular resolution of \(\frac{\lambda}{2B}=0.51\) mas for \(\lambda=1.65\ \mu m\) and \(B=331\) m. We estimated the Keplerian velocity of the dust rim using R \(\sim 1.64\) AU from Setterholm et al. 
(2018) and the star's mass \(M=6\ \mathrm{M_{\odot}}\) which gave us a rotational period of T = 304.5 days or 0.83 years. That means that it takes 13 days for us to be able to resolve the motion of one resolution element using MIRC-X. Doing the same calculation for the large VLTI baseline, we found that it takes 28 days to resolve a change in one resolution element using PIONIER. With that information, we aimed to split the data \begin{table} \begin{tabular}{c c c c c c c c} UT Date & Configuration & Band & no. \(\mathcal{V}^{2}\) (\(\times\) 8) & no. CP (\(\times\) 8) & Calibrator(s) & Diameter (mas) & Epoch \\ \hline 2019-05-09 & E1-W2-W1-S1-E2 & H & 9 & 7 & HD 190753 & 0.323\(\pm\)0.008 & A* \\ \hline 2019-06-05 & E1-W2-W1-S2-S1-E2 & H & 67 & 77 & HD 190753 & 0.323\(\pm\)0.008 & A \\ & & & & & HD 191656 & 0.430\(\pm\)0.01 & \\ \hline 2019-06-09 & E1-W2-W1-E2 & H & 26 & 24 & HD 197551 & 0.644\(\pm\)0.02 & A \\ \hline 2019-07-11 & E1-W2-W1-S2-S1-E2 & H & 66 & 74 & HD 190753 & 0.323\(\pm\)0.008 & B \\ \hline 2019-07-12 & E1-W2-W1-S2-E2 & H & 25 & 23 & HD 190753 & 0.323\(\pm\)0.008 & B \\ \hline 2019-08-27 & E1-W2-W1-S2-S1-E2 & H & 36 & 46 & HD 191656 & 0.430\(\pm\)0.01 & B* \\ & & & & & HD 190753 & 0.323\(\pm\)0.008 & \\ \hline \end{tabular} *For imaging, this epoch was not included; see § 5 \end{table} Table 2: CHARA/MIRC-X Observations of HD190073 \begin{table} \begin{tabular}{c c c c c c c} UT Date & Configuration & Band & no. \(\mathcal{V}^{2}\) (\(\times\) 6) & no. CP (\(\times\) 6) & Calibrator(s) & Diameter (mas) & Epoch \\ \hline 2019-06-09 & D0-G2-J3-K0 (Medium) & H & 12 & 8 & HD 188385 & 0.221\(\pm\)0.01 & A \\ & & & & HD 189509 & 0.247\(\pm\)0.03 & \\ \hline 2019-07-10 & D0-G2-J3-K0 (Medium) & H & 24 & 16 & HD 190753 & 0.323\(\pm\)0.02 & A \\ & & & & HD 191840 & 0.288\(\pm\)0.01 & \\ \hline 2019-07-20 & A0-G1-J2-J3 (Large) & H & 30 & 20 & HD 190753 & 0.323\(\pm\)0.02 & A\& B \\ & & & & HD 191840 & 0.288\(\pm\)0.01 & \\ \hline 2019-07-30 & A0-B2-C1-D0 (small) & H & 24 & 16 & HD 190753 & 0.323\(\pm\)0.02 & A \\ & & & & HD 191840 & 0.288\(\pm\)0.01 & \\ \hline 2019-08-04 & A0-B2-C1-D0 (small) & H & 12 & 8 & HD 190753 & 0.221\(\pm\)0.01 & B \\ & & & & HD 191840 & 0.247\(\pm\)0.03 & \\ \hline 2019-08-05 & D0-G2-J3-K0 (Medium) & H & 6 & 4 & HD 188385 & 0.221\(\pm\)0.01 & B \\ & & & & HD 189509 & 0.247\(\pm\)0.03 & \\ \hline 2019-08-06 & D0-G2-J3-K0 (Medium) & H & 6 & 4 & HD 188385 & 0.221\(\pm\)0.01 & B \\ & & & & HD 191840 & 0.288\(\pm\)0.01 & \\ \hline \end{tabular} \end{table} Table 3: VLTI/PIONIER Observations of HD190073 into two epochs, such that the amount of time in which we expect each epoch to change is minimized. We grouped the earlier three CHARA nights to make epoch A and the later 3 nights to make epoch B. When combining VLTI nights, we had to consider the baseline configuration as well as the dates. The different configurations (small, medium, and large) correspond to different (u,v) coverage. The small configuration contributes to the short baselines which show us the large-scale components of the disk, while the large configuration, which contributes the long baselines, allows us to constrain the small-scale components of the inner region. To get the most coverage, we included the first 4 nights (2019-06-09, 2019-07-10, 2019-07-20, 2019-07-30) into epoch A. Even though the fourth night, 2019-07-30, is more than 28 days away from the first night, 2019-06-09, we included it in epoch A because it was the closest night with a small baseline configuration. 
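As a rough check of the timescales quoted above, the ~13-day (MIRC-X) and ~28-day (PIONIER) figures can be reproduced by asking how long the rim, orbiting at the Keplerian rate, takes to move by one resolution element along its circumference. The short Python sketch below is our back-of-the-envelope reconstruction (the circumference-per-period relation is our assumption), using the numbers quoted in the text and Table 1.

```python
import numpy as np

M_star = 6.0        # stellar mass [M_sun]
R_rim_au = 1.64     # rim radius [AU], from Setterholm et al. (2018)
d_pc = 824.16       # distance [pc], Table 1

# Kepler's third law (P in yr for a in AU and M in M_sun): ~0.86 yr here,
# close to the 0.83 yr (304.5 d) quoted in the text.
P_days = 365.25 * np.sqrt(R_rim_au**3 / M_star)

# Angular radius of the rim: 1 AU at 1 pc subtends 1 arcsec = 1000 mas.
R_rim_mas = 1000.0 * R_rim_au / d_pc

def days_per_resolution_element(theta_res_mas):
    """Days for the rim to rotate by one resolution element theta_res_mas."""
    return P_days * theta_res_mas / (2.0 * np.pi * R_rim_mas)

t_mircx = days_per_resolution_element(0.51)    # lambda/(2B) for CHARA  -> ~13 d
t_pionier = days_per_resolution_element(1.12)  # lambda/B for VLTI      -> ~28 d
```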
The shorter the baseline, the longer the timescale for resolving a significant change. That timescale for the small configuration is \(>1\) month, so adding 2019-07-30 to epoch A is justified. The last 3 nights (2019-08-04, 2019-08-05, 2019-08-06) went into epoch B, as well as 2019-07-20 which is shared between the two epochs as it is the only large baseline configuration we have. The last column of Table 2 and Table 3 indicates the epoch to which each night corresponds. We show the combined full (u,v) coverage, squared visibilities (\(\mathcal{V}^{2}\)), and closure phases (CP) from both instruments for each epoch in Figure 1. Based on early studies of MIRC (Monnier et al., 2012), we adopted the same minimum systematic errors. There are two types of errors associated with (\(\mathcal{V}^{2}\)) measurements. There are multiplicative errors due to the effects of seeing, and there are additive errors correcting for biases due to the limitations of the pipeline. The multiplicative error is 6% while the the additive error is \(2\times 10^{-4}\). These are used to calculate a new minimum error. If the systematic error is larger than the statistical one, then we adopt the larger one. The same thing is done for triple amplitudes but the associated errors are 10% for multiplicative and \(1\times 10^{-5}\) for additive. Closure phases have an error floor of \(0.1^{\circ}\). In section SS 3, we discuss our method of reducing the data and combining the CHARA and VLTI nights into two epochs. ### Wavelength Interpolation Since the wavelength channels vary between MIRC-X and PIONIER, and in fact, even between nights on the same instrument, we fit each the square visibility and closure phase measurement as a function of wavelength to a quadratic with a linear regression and re-sampled our data at standard wavelengths in the range 1.5-1.7 \(\mu m\) at 50 nm increments. This wavelength "smoothing" produces evenly spaced visibility measurements in 5 wavelength channels which we then use for modeling and imaging in the following sections. The amplitude of the error estimates was also fit to a quadratic and re-sampled at the chosen wavelengths. We then applied the same minimum error calibration as described in SS 3.1 ### Squared Visibilities The combination of CHARA and VLTI allows us to sample the (u,v) coverage well. Just looking at the non-wavelength-smoothed visibility curve in Figure 1, we can visually detect a bounce that was not as clear in previous studies (Setterholm et al., 2018). Due to the short time between observation nights, we are able to avoid some smearing effects that the rotation of the disk can cause on longer time scales. As mentioned in SS 3.2 above, due to the mismatch in wavelengths between nights and instruments, we will be using the wavelength-smoothed data for modeling in the next section. An initial qualitative look at the visibility curves can already point to features that we can predict. The short baseline measurements from PIONIER appear to be slightly less than unity at 0 baseline, which could indicate the presence of an over-resolved large-scale halo structure. As the baseline increases, the visibility drops off rapidly at the short baselines. The drop-off in visibility tells us about the disk diameter. Towards longer baselines from MIRC-X, we see a bounce in the visibility. A bounce in the intermediate baselines indicates the existence of a cavity in the disk, and the severity of the bounce characterizes the sharpness of the rim. 
A shallow bounce points to a fuzzy rim, while a steep bounce points to a sharper transition (Dullemond and Monnier, 2010). Fits by Lazareff et al. (2017) of HD 190073 seemed to favor a ring structure like we expect to see from the visibility curves, while fits by Setterholm et al. (2018) were not able to recover a cavity and concluded a Gaussian shape instead. ### Closure Phases Closure phase measurements from both instruments and for both epochs can be seen in the bottom two panels of Figure 1. The majority of the closure phases are relatively small \(\lesssim 10^{\circ}\) which tell us that the disk is not far from symmetric. On the shorter baselines, the closure phase measurements are closest to zero, which tells us that the larger scale brightness distribution is nearly symmetric. However, since the closure phases at longer baselines are non-zero and non-\(180^{\circ}\) we expect the disk to have skewness on the finer spatial scale, albeit small. Similar to the visibility plots, we are not showing the error bars on the measurements. That was done mainly to show a simplistic visualization of what the data looks like. Typical errors are \(\lesssim 5^{\circ}\). Figure 1: Top row: The (u,v) coverage of the combined data for epochs A (left) and B (right). Data from MIRC-X are shown in black, and PIONNER are shown in red. Middle row: Unsmoothed combined squared visibility measurements from both instruments. The colors correspond to the H-band wavelengths with the darkest being the longest wavelength. Bottom row: Closure phase measurements from both instruments ## 4 Modeling We split our simple geometric models into two categories in this section. First, we assume point symmetry and fit the squared visibilities and not the closure phases since the disk is nearly face-on and we have small closure phase measurements. The symmetrical models will allow us to compare our modeling to previous work, as well as constrain some physical parameters. Second, we allow the models to be asymmetric by fitting both the squared visibilities and closure phases and adding more fitting parameters to test which model best fits the closure phases. All modeling was done using PMOIRED 3(Merand, 2022). In order to combine data from all wavelengths, we modeled the star as a point source with a fixed power spectrum proportional to \(\lambda^{-4}\). The second component that went into the model is the disk with a power spectrum \(\beta_{disk}\lambda^{\alpha_{disk}}\) and we let \(\alpha_{disk}\) and \(\beta_{disk}\) be free parameters. We added a resolved halo component to the models to account for any large-scale scattered light which helps us constrain the flux contributions better. This halo component had a similar power spectrum to the disk \(\beta_{halo}\lambda^{\alpha_{halo}}\) and we let \(\alpha_{halo}\) and \(\beta_{halo}\) be free parameters. The \(\alpha_{disk}\) and \(\alpha_{halo}\) are what we are going to refer to as spectral slopes in the next section. Footnote 3: [https://github.com/amerand/PMOIRED](https://github.com/amerand/PMOIRED) ### Symmetric Modeling We introduce two geometrical models with different profiles that we refer to as the Doughnut model and the Double Sigmoid model. The Doughnut model is a simple parabolic model that allows us to constrain the flux, inclination, projection angle which indicates the direction of the major axis, spectral slopes, outer radius of the ring, and thickness. 
The brightness profile has the following functional form: \[f(r)=1-\left(\frac{r-\bar{r}}{2(r_{max}-r_{min})}\right)^{2} \tag{1}\] where \(\bar{r}\), \(r_{max}\), and \(r_{min}\) are the mean, maximum, and minimum radii, respectively. The Double Sigmoid model adds more complexity to allow us to fit and scale the inner and outer radii independently, and the profile takes the following functional form: \[f(r)=\frac{1}{1+e^{-(r-R_{in})/\sigma_{in}}}\cdot\frac{1}{1+e^{(r-R_{out})/\sigma_{out}}} \tag{2}\] where \(R_{in}\) and \(R_{out}\) are the inner and outer radii, respectively, and \(\sigma_{in}\) and \(\sigma_{out}\) are their respective scale factors that set the sharpness of the edges. By definition, the Double Sigmoid model forces the inner cavity to be dark. To investigate the possibility of flux coming from the inner cavity, we modified the Double Sigmoid profile to take the following form: \[f(r)=\left(\alpha+\frac{1-\alpha}{1+e^{-(r-R_{in})/\sigma_{in}}}\right)\cdot\frac{1}{1+e^{(r-R_{out})/\sigma_{out}}} \tag{3}\] where the \(\alpha\) term is allowed to vary to add flux to the center or go to zero if a completely dark center is the best fit. Figure 2 shows an example of what these intensity profiles look like using \(R_{in}=1\), \(R_{out}=3.4\), \(\sigma_{in}=0.2\), \(\sigma_{out}=0.3\), and \(\alpha=0.2\). The full list of best-fit parameters from the three models is compiled in Table 4. Starting with the Doughnut model, as seen in the top two panels of Figure 3, we notice that nearly all fitting parameters are consistent between epochs A and B. The disk is nearly face-on with a small inclination angle. The inner radii are very small, indicating a transition that is well beyond the expected dust wall. We found that around half of the flux originates from the disk, \(\sim 42.5\%\) comes from the star, and the rest is from an over-resolved halo component. We fixed the star's spectrum to be \(\propto\lambda^{-4}\), and fit the spectra of the disk and halo as power laws. The spectral slope of the disk was fit to be \(-0.10\pm 0.14\), while the halo's slope was fit to be much steeper, \(2.08\pm 1.16\). The goodness of these fits can be seen in Fig. 4, where the model is fitted to the squared visibility. The reduced (abbreviated 'red') \(\chi^{2}\) for epoch A was calculated to be \(\chi^{2}_{red,A}=1.49\) and for epoch B \(\chi^{2}_{red,B}=1.72\), which is also indicated on the plots. Note that the \(\chi^{2}_{red}\) values are based on \(\mathcal{V}^{2}\) fits only. We see in the top two panels of Figure 4 that the model fits the longer baselines well, following the bounce and recovering an inner cavity as we expected, as shown in the top two panels of Figure 3. The shorter baselines, on the other hand, are not fit as well using this model. The Doughnut model relates the outer and inner radii and doesn't allow us to change the sharpness of the inner and outer rims. Seeing that the shorter baselines weren't fit very well using this model points to the larger scale features not being properly represented by the Doughnut. To get around this, we introduced the more elaborate Double Sigmoid model as discussed above. Similar to the Doughnut model, the Double Sigmoid model allows us to fit for the flux, inclination, and projection angle, spectral slopes, as well as inner and outer radii and their sharpness. The middle two panels of Figure 3 show the results of the symmetric Double Sigmoid model for epochs A and B.
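To make the three profiles concrete, the sketch below evaluates Equations (1)-(3) as written above. Eq. (1) is coded exactly as printed, without any truncation outside \([r_{min},r_{max}]\), and \(\bar{r}\) is taken as the midpoint of that interval; these choices and all function names are our assumptions. The example parameters are those quoted for Figure 2.

```python
import numpy as np

def doughnut(r, r_min, r_max):
    """Parabolic 'Doughnut' profile, Eq. (1), as printed.
    r_bar is assumed to be the midpoint of [r_min, r_max]."""
    r_bar = 0.5 * (r_min + r_max)
    return 1.0 - ((r - r_bar) / (2.0 * (r_max - r_min))) ** 2

def double_sigmoid(r, R_in, R_out, sig_in, sig_out):
    """Double Sigmoid ring profile, Eq. (2): dark cavity inside R_in."""
    rise = 1.0 / (1.0 + np.exp(-(r - R_in) / sig_in))
    fall = 1.0 / (1.0 + np.exp((r - R_out) / sig_out))
    return rise * fall

def alpha_double_sigmoid(r, R_in, R_out, sig_in, sig_out, alpha):
    """Modified profile, Eq. (3): a fraction alpha of the flux fills the cavity."""
    rise = alpha + (1.0 - alpha) / (1.0 + np.exp(-(r - R_in) / sig_in))
    fall = 1.0 / (1.0 + np.exp((r - R_out) / sig_out))
    return rise * fall

r = np.linspace(0.0, 5.0, 500)
example = alpha_double_sigmoid(r, R_in=1.0, R_out=3.4, sig_in=0.2, sig_out=0.3, alpha=0.2)
```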
Similar to the Doughnut, the disk seems almost face-on, with a clear inner cavity. While the fitted inclination angle for A \(i_{A}=10.1^{\circ}\pm 2.9^{\circ}\) matches closely to the angle from the Doughnut fits, the inclination angle for epoch B \(i_{B}=18.1^{\circ}\pm 1.8^{\circ}\) is higher than expected. The inclination of the disk cannot change between epochs so we estimate the inclination to generally be \(\lesssim 20^{\circ}\). The projection angles are significantly lower than the ones from the Doughnut model. That could be attributed to the almost face-on nature of the disk making it harder to constrain the projection angle. The inner rim being less diffuse and the outer rim being more diffuse in this model leads to a larger inner radius and smaller outer radius compared to the previous model. It is important to note that the \(R_{in}\) and \(R_{out}\) parameters do not necessarily reflect the size of the physical inner and outer radii. They are simply the best-fit parameters that reproduce the best brightness profile, and from those profiles, we estimate the physical radii based on the intensity transition. Figure 2: We introduce a new function to describe the radial profile of the ring. The Double Sigmoid allows us to constrain the radii and their sharpness of the inner and outer rims separately. The blue curve corresponds to Equation 2 which, by definition, has no flux inside the inner rim. The orange dashed curve corresponds to the \(\alpha\) Double Sigmoid from Equation 3 which allows the center to have flux. The black dashed lines indicate the inner \(r_{1}\) and outer \(r_{2}\) radii of the profiles. The red horizontal lines indicate the \(\sigma_{1}\) and \(\sigma_{2}\) scale factors. We show the normalized brightness profiles in Figure 5 for both models and both epochs to get a better idea of how they compare. The Double Sigmoid profile is shown in orange while the Doughnut is shown in green. Indeed, we see the much smoother inner and outer rims of the Double Sigmoid compared to the sharp drops of the Doughnut model. The Double Sigmoid shows a slightly sharper inner edge compared to the outer edge. While the inner radius of the Double Sigmoid seems larger, and the outer seems smaller, due to the smoother edges, we can see from the profiles that the physical size of the disk from both models doesn't vary drastically, and the radii are not as different as the fitting parameters might lead us to believe. Other notable differences include the disk flux contribution increasing in the Double Sigmoid model, while the spectral slope decreases indicating an even smaller dependence on color. On the other hand, the halo flux contributions decreased, while the spectral slope grew redder, showing a stronger dependence on color. The reduced \(\chi^{2}\) for epoch A was calculated to be \(\chi^{2}_{red,A}=1.13\) and for epoch B \(\chi^{2}_{red,B}=1.35\), which is also indicated on the plots. We see in the middle two panels of Figure 4 that the model fits the longer baselines well, following the bounce and recovering an inner cavity like we expected as shown in the top middle panels of Figure 3. Unlike the Doughnut model, the Double Sigmoid fits the shorter baselines much better. We clearly see that the \(\chi^{2}_{red}\) significantly improves with the Double Sigmoid model where we see a bigger inner cavity and sharper inner rim. The \(\alpha\) Double Sigmoid models can be seen in the bottom row of Figure 3.
The initial global fit showed that adding the \(\alpha\) term does improve the fit as evident by the \(\chi^{2}_{red}\) decreasing for both epochs, \(\chi^{2}_{red,A}=1.09\) and \(\chi^{2}_{red,B}=1.33\) which can be seen on the bottom row of Figure 4. This model tells us that \(\sim 15\%\) of the light of the disk emerges from inside the rim. The normalized brightness profile can be seen in blue in Figure 5. Figure 3: Top row: Epoch A and B symmetric Doughnut models Epoch A \(\chi^{2}_{red}=1.49\) and Epoch B \(\chi^{2}_{red}=1.72\). Middle row: Epoch A and B symmetric Double Sigmoid models \(\chi^{2}_{red}=1.13\) and Epoch B \(\chi^{2}_{red}=1.35\). Bottom row: Epoch A and B symmetric Double Sigmoid models \(\chi^{2}_{red}=1.09\) and Epoch B \(\chi^{2}_{red}=1.33\). Figure 4: Global fits of \(\mathcal{V}^{2}\) for the Doughnut model (top row), the Double Sigmoid (middle row), and the \(\alpha\) Double Sigmoid (bottom row). We see that the Doughnut model does the poorest job of fitting the short baselines while both Double Sigmoid models do a much better job. All three models fit the bounce at the longer baselines well which was expected since we see an inner cavity in all of them Next, to investigate the inner flux dependence on color, we fit each of the 5 wavelength channels independently using the same modified Double Sigmoid profile. We see in Figure 6 that all 5 wavelengths take a similar overall shape with varying flux contributions to the center. We normalized the profiles by the total flux and calculated the fractions of flux contributed by \(\alpha\) to the total flux of the disk, which we show as the filled areas under the profile curves. These fractions allow us to calculate the spectrum of the inner disk independently of the spectrum of the outer disk. To calculate the inner disk spectrum, we used the inner flux fractions to calculate how much flux they contribute to the total light from the model that consists of the star, disk, and halo. We also calculated the spectra of the disk excluding the inner emission and the halo. We assumed the star's spectrum to be \(\lambda^{-4}\). To test out the plausibility of the spectra, we added up the flux contributions from the star, disk, inner emission, and halo and compared the total to previously obtained photometry which we show in Figure 7. The spectral energy distribution (SED), which was built using SEDBYS 4(Davies, 2021), shows photometry data at J, H (shaded), and K band with the spectra that we calculated for the four components of HD 190073 over-plotted. We see the slope of the total spectrum matches the data closely. The spectrum of the disk emission is rather flat which is more consistent with dust emission rather than the steeper free-free spectrum we would expect from gas emission. Figure 5: Shown in green are the epoch A and B symmetric Doughnut brightness profiles normalized by the total flux. Both the inner and outer edges have some abruptness in the transition. We see a very narrow inner radius and a wide disk overall. Shown in orange are epochs A and B symmetric Double Sigmoid brightness profiles normalized by the total flux. Over-plotted in blue is the normalized profile of the \(\alpha\) Double Sigmoid model. We see that the outer part of the profiles roughly matches while the inner parts differ. The \(\alpha\) Double Sigmoid shows that 15% of the disk flux comes from the inner region and then shows a sharp inner edge transition Figure 6: Normalized flux profile of disk emission. 
The filled areas correspond to the fraction of flux contributed by inner emission we define as \(\alpha\) Figure 7: Spectral energy distribution in J, H (shaded in gray), and K bands shown in the black dots. Over-plotted are the fitted spectra from epoch A. The star was assumed to have a \(\lambda^{-4}\) spectrum which is shown in orange. The lighter orange line shows the extension of the slope across J and K bands. The blue line indicates the flux contribution from the ring, not including the inner emission. The inner emission flux is shown in purple. The over-resolved halo flux contribution is shown in red. The green line indicates the sum of all four contributions and it matches the observed photometry flux (SED data and references are in Table 5 in Appendix A). Figure 8: Top row: Epoch A and B Skewed Double Sigmoid ring model. Second row: Epoch A and B higher order Skewed Double Sigmoid ring model. Third row: Epoch A and B off-center star Double Sigmoid model. Bottom row: Epoch A and B model-free images. The white circle in the left bottom corner of the left panel corresponds to the effective beam size \(\lambda/(2B_{max})=0.55\) mas where \(\lambda=1.65\)\(\mu\)m and \(B_{max}=300\) m ### Asymmetric Modeling So far, we have been assuming that the disk is symmetric and therefore only fitting the squared visibilities. However, as mentioned before, the closure phases are non-zero and non-180\({}^{\circ}\) which means that there is asymmetry in the disk. In this section, we use 3 different simple geometric models to attempt to characterize the asymmetry. These simple models are no longer valid in describing the geometry of the disk so we have to introduce asymmetry parameters. Azimuthal asymmetry is expected from flared disks seen at an inclination. Since the Double Sigmoid performed better than the Doughnut in the symmetrical case, we used a Double Sigmoid skewed ring model with azimuthal asymmetry (Monnier et al., 2006) as the base of our modeling and added different asymmetries to it. We tested multiple azimuthal asymmetries such as adding a point source free parameter, adding a higher order multipole for azimuthal asymmetry, considering an off-center star, and more. These skewed models allow us to vary the brightness distribution across the disk. In this paper, we present three of the models we tested and show how their closure phase \(\chi^{2}_{red,CP}\) compare to give us a better picture of the disk. A full list of the best-fit parameters from these three models can be found compiled in Table 6 in Appendix B. Starting with a simple skewed Double Sigmoid ring, we see in the top two panels of Figure 8 that one side of the disk tends to be brighter. This model introduces two additional fitting parameters to represent the harmonic azimuthal variation. The amplitude is referred to as "Az Amp\({}_{1}\)" in Table 6 in Appendix B, and the projection angle, which is defined with respect to the global projection angle is denoted as "Az PA\({}_{1}\)". Both parameters have a subscript that represents their order. The amplitude of variation is stronger in epoch A compared to B. The inclination of the flared disk causes the side farther to us to appear brighter but this skewness does not produce a good fit as indicated by \(\chi^{2}_{red,CP,A}=7.33\) and \(\chi^{2}_{red,CP,B}=4.68\). This model is not complex enough to model the asymmetries, so in the next model, we introduced a higher order of asymmetry, a \(\sin(2\theta)\), to the ring. 
The higher order allows us to model simple structures that might be in the disk. This adds an extra pair of fitting parameters, Az Amp\({}_{2}\) and Az PA\({}_{2}\), which we list in Table 6 as well. Models of the disk, which can be seen in the second row of Figure 8, produce two distinct bright regions, and the CP \(\chi^{2}_{red,CP}\) has decreased but not by much. \(\chi^{2}_{red,CP,A}=7.24\) and \(\chi^{2}_{red,CP,B}=4.49\) for epochs A and B respectively. The first-order azimuthal amplitudes stay consistent with the previous model. Interestingly enough, the second-order amplitudes have the opposite strengths for the two epochs. Epoch A Az Amp\({}_{A,2}\) is lower than Az Amp\({}_{A,1}\) and epoch B Az Amp\({}_{B,2}\) is higher than Az Amp\({}_{B,1}\). We are not giving significant physical meanings to these models because they are meant to describe a complex structure with simple parameters. Instead of introducing even higher orders, we went with a less complex approach by testing a skewed Double Sigmoid with an off-center star, which can be seen in the third row of Figure 8. This model favored a slightly off-center star and dropped the CP \(\chi^{2}_{red,CP}\) significantly. The simple skewed ring started with a \(\chi^{2}_{red,CP,A}=7.33\) and \(\chi^{2}_{red,CP,B}=4.68\), which dropped slightly by adding the second azimuthal parameter to be \(\chi^{2}_{red,CP,A}=7.24\) and \(\chi^{2}_{red,CP,B}=4.49\), and then dropped sharply with the addition of the off-center star to be \(\chi^{2}_{red,CP,A}=1.54\) and \(\chi^{2}_{red,CP,B}=3.44\). The amplitude of skewness of the two epochs almost matches in this model. Two extra parameters were introduced to characterize how much the star was fit away from the center, an x offset and a y offset. The x offsets are similar for both epochs and they place the star \(~{}0.1\pm 0.01\) mas west of the center. The y offsets both placed the star \(~{}0.1\pm 0.01\) mas south of the center. While none of the asymmetric models fit to \(\chi^{2}_{red}\sim 1\), we are inclined to favor the off-center star as being at least a plausible component. The reason for the star's off-center location has not been established yet. One of our leading theories is the presence of a binary in the center, which will need further studies. ## 5 Imaging Since we were not getting a good fit using simple models, we decided to try imaging in the hopes of unlocking some of the mystery. As mentioned in SS 4, the closure phase data show non-zero and non-180\({}^{\circ}\) measurements that indicate some form of asymmetry in the disk. However, our asymmetric models did not succeed in characterizing the asymmetry. One possibility is an off-center star, but the physical reason behind that is still not established. Since the goal of imaging is to search for finer-scale details, we wanted to fine-tune the data binning in the two epochs. As mentioned in SS 3.1, CHARA can resolve the motion of one resolution element every 13 days. However, the first night in epoch A is more than 13 days away from the other two nights. Similarly, the last night in epoch B is more than a month away from the first two nights. Therefore, we removed the 2019-05-09 night from epoch A and 2019-08-27 from epoch B. Images were reconstructed using conventional regularized maximum likelihood and the open-source OITOOLS.jl package 5. Only powerspectra and closure phases were used for the reconstructions. In line with previous mixed modeling and imaging works (e.g. 
SPARCO in Kluska et al., 2014), the star was modeled as a point source in the center of the field of view, of spectrum proportional to \(\lambda^{-4}\), while the environment was assumed to follow a \(\lambda^{\alpha}\) dependency, with \(\alpha\) free to vary. In addition, we modeled potential extended or background emission as a zero visibility component, also following \(\lambda^{-4}\). Thus there were three free parameters in addition to image pixels: \(\alpha\) and the flux ratios disk/star and extended emission/star. Footnote 5: [https://github.com/fabienbaron/OITOOLS.jl](https://github.com/fabienbaron/OITOOLS.jl) Minimization was ensured by the gradient-based VMLM-B (Thiebaut, 2002) algorithm from OptimPack, enforcing positivity of the disk intensity. Three other priors were employed in addition to positivity: compactness to regularize large-scale emission, total variation to handle small-scale pixel correlations (see e.g. Thiebaut & Young, 2017, for the mathematical expression of these), as well as a novel azimuthal regularization to "circularize" YSO disks at medium scales. Given four hyperparameters (the centroid, inclination and position angle of the disk), this regularizer simply computes the location of concentric elliptical rings and then sums the azimuthal pixel variances along them. Minimizing the regularizer enforces relative smoothness along the rings, but unlike total variation. Since the expression can be written as a squared \(\ell_{2}\) norm of a linear operator on the image, it lends itself well to the gradient-based approach. Hyperpriors for the four hyperparameters were uniform and an initial hyperparameter range was roughly determined from the data by directly minimizing the variance of the de-rotated powerspectra. The Nelder-Mead simplex (Nelder & Mead, 1965) was used to optimize the four hyperparameters, new reconstructions were run with different sets of hyperparameters until a global minimum was found. Values for the other regularization hyperparameters were chosen based on quick simulations (copying the (u,v) coverage and signal-to-noise of the actual data and reconstructing disks), within the expected range (Renard et al., 2011). We see a face-on thin ring with some inner emission, similar to the \(\alpha\) Double Sigmoid model, and an off-center star. We clearly recover a cavity in the center where we see some fine-scale structure that seems to change. The last row of Figure 8 shows the images that we produced without priors from modeling. The bright spots in the ring seem to correspond with the location of the bright areas seen in a combination of the skewed models. Note that in the images, the point source representing the star is shown in the center of the image while the disk's center is off-center. The final reduced \(\chi^{2}\) for the images is \(\chi^{2}_{\rm v2}\simeq 1.09\) and \(\chi^{2}_{\rm t3phi}\simeq 0.72\) for epoch A and \(\chi^{2}_{\rm v2}\simeq 1.01\) and \(\chi^{2}_{\rm t3phi}\simeq 1.19\) for epoch B. A closer look at the images can be seen in Figure 9. We explored if the changes might be due to rotation. We performed a qualitative analysis of the apparent rotation between epochs A and B by manually rotating the disk to match the bright feature in the bottom right quadrant in epoch A to the bright spot on the middle right edge of the ring in epoch B which is indicated in Figure 9. The arrows represent a \(27^{\circ}\) counterclockwise rotation on the disks. 
Through modeling we found that the \(\alpha\) Double Sigmoid intensity peaked at a radius \(R_{A}=1.23\pm 0.02\) mas and \(R_{B}=1.5\pm 0.02\) mas. The Keplerian rotational period at this smaller radius that we average to be \(R=1.35\) mas is 182.7 days. The two epochs are 32 days apart which corresponds to a \(63^{\circ}\) rotation based on the calculated period. However, this motion is not consistent with the \(27^{\circ}\) rotation we estimate from the images which is more than twice as slow as Keplerian. The origin of such features is not well-known but we speculate that the motion could be caused by interactions in the outer disk versus an object embedded in the inner disk, for example. This structure could be related to planet formation, instabilities in the accretion flow, or even magnetic fields. This change could also be an artifact of the (u,v) coverage and not a physical process at all so additional epochs are needed to confirm the origin of these changes. ## 6 Discussion The symmetric \(\mathcal{V}^{2}\)-only models revealed a clear inner cavity in the disk, along with rough radii estimates. We are taking the radius at which the intensity peaks in the Double Sigmoid and \(\alpha\) Double Sigmoid models to be representative of the evaporation front. Since the inner radius is set by the dust evaporation temperature, we can use the average radius from the models to can estimate the temperature of the dust wall using the following equation from Dullemond & Monnier (2010). \[T_{\rm dust}\ =T_{*}\frac{1}{\epsilon^{1/4}}\sqrt{\frac{R_{*}}{2R}} \tag{4}\] This equation treats the inner rim as an optically thick wall that radiates like a blackbody, assumes that the gas inside the rim is transparent, and includes the "backwarming" of dust grains. The \(\epsilon\) term is the ratio of the effectiveness of emission at the wavelength at which the dust radiates away its heat and absorption at stellar wavelengths. Assuming large dust grains, we can set \(\epsilon=1\) and solve for the temperature of the inner edge using values from Table 1 and \(R=1.35\pm 0.2\) mas for the radius of the inner rim. We calculate an inner rim temperature of T\({}_{dust}=1367.13\pm 107\) K which is a reasonable estimate. If the dust grains have a larger \(\epsilon\), meaning that they cool more efficiently, then the grains can exist closer in to the star. Our modified \(\alpha\) Double Sigmoid model showed that \(\sim 15\%\) of the disk flux comes from the inside of the ring. The sub-AU inner emission could be due to such grains with \(\epsilon>1\), gas emission, or grains that can survive at higher temperatures. We have non-zero closure phases which indicate that we have some asymmetry in the ring. The closure phases are relatively small so the asymmetry is not huge. From our fits, it seems that the asymmetry is best fit by including an off-center star. While not certain about the physical meaning of that fit, one speculation is that we are looking at a binary star system. One reassuring confirmation that the model is probable, is that the imaging produced an off-center star as well. The imaging also confirmed the cavity in the center of the ring and showed new evidence of small-scale structures with rotation slightly slower than Keplerian at that radius. There is already evidence of this temporal variation on the inner few AU scale in protoplanetary disks. Kobus et al. (2020) showed evidence of interferometric temporal variation occurring in \(>10\%\) of their sample of 68 accretion disks. 
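As a quick numerical check of this section, Equation (4) and the Keplerian period at the ring radius can be evaluated directly. The sketch below uses the Table 1 values and the average radius \(R=1.35\) mas quoted above; the small-angle conversion and the rounding are ours, so the numbers agree with the quoted 1367 K, 182.7 d, and 63\({}^{\circ}\) only approximately.

```python
import numpy as np

T_eff = 9750.0        # [K], Table 1
R_star_rsun = 9.68    # [R_sun], Table 1
M_star = 6.0          # [M_sun], Table 1
d_pc = 824.16         # [pc], Table 1
R_rim_mas = 1.35      # average radius of peak ring intensity [mas]
epsilon = 1.0         # cooling efficiency assumed for large grains

R_SUN_AU = 0.00465047                 # solar radius in AU
R_rim_au = 1.0e-3 * R_rim_mas * d_pc  # 1 mas at d pc corresponds to 1e-3 * d AU

# Inner-rim temperature, Eq. (4): ~1.4e3 K (text quotes 1367 +/- 107 K)
T_dust = T_eff * epsilon**-0.25 * np.sqrt(R_star_rsun * R_SUN_AU / (2.0 * R_rim_au))

# Keplerian period at the same radius and the rotation expected over 32 days:
P_days = 365.25 * np.sqrt(R_rim_au**3 / M_star)   # roughly 175-185 d
rotation_deg = 360.0 * 32.0 / P_days              # ~60-65 deg, vs ~27 deg in the images
```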
The asymmetric variations could be attributed to companions, forming exoplanets, asymmetries in the dust density distribution, asymmetric illumination of the circumstellar material as a result of stellar spots or obscuration, or artifacts. Figure 9: Epochs A and B images. The dashed crosshairs show the center of the image. While the star is in the center, the disk is not, which matches our off-center star model. The green arrow indicates a 27\({}^{\circ}\) rotation. The spot starts at the bottom of the green arrow in epoch A and moves to the top in epoch B. The white circle in the bottom left corner of the left panel corresponds to the effective beam size \(\lambda/(2B_{max})=0.55\) mas, where \(\lambda=1.65\)\(\mu\)m and \(B_{max}\) = 300 m. There are a few inner disk dynamics that have been observed that could help unlock the mystery of the variations we see in our disk. Some possible inner disk dynamics include self-shadowing by the inner disk on the outer disk (Garufi et al., 2022), dippers caused by turbulent accretion (Alencar et al., 2010), or even a small planet actively accreting close to the inner rim with accompanying spirals, vortices, or misalignment (see Benisty et al. (2017), Marr & Dong (2022)). Characterizing these structures will be essential to furthering our understanding of planet formation. ## 7 Conclusions We tested multiple simple geometric models, both symmetric and asymmetric, in hopes of getting a better look at the young Herbig Be star HD 190073's accretion disk. With more sensitive data, better (u,v) coverage, and a shorter time span to avoid smearing effects, we were able to model and image the disk with unprecedented accuracy. CHARA's long baselines probe the finer structures of the disk, while the short baselines of VLTI look at the larger-scale structures. For a rapidly changing object like HD 190073, it is important to obtain near-simultaneous observations with CHARA and VLTI, as close to an instantaneous snapshot of the disk as possible. While there is still some mystery surrounding HD 190073, we were able to produce convincing evidence of a ring-like structure with an inner cavity, and some evidence of changes. The modified \(\alpha\) Double Sigmoid model showed that the inner and outer rims are relatively smooth, with the inner rim having a sharper profile due to the dust destruction front. We also see that the disk is almost face-on with inclination angles \(i\lesssim 20^{\circ}\). The modeling revealed the fractions of flux contributed by each component. The star contributes 41%, the halo 4.6%, and the disk \(\sim 56\%\), with \(\sim 15\%\) of that disk flux coming from inside the inner rim. We found that the best simple model to fit the closure phase measurements is a skewed Double Sigmoid ring with an off-center star, which was backed up by the imaging. The imaging revealed possible evidence of rotating sub-AU features. They seem to rotate \(27^{\circ}\) counterclockwise in the span of the 32 days between epochs A and B, which would correspond to a rotational period more than two times slower than Keplerian. This change could be caused by dynamics in the outer disk, or it could be an artifact. We plan on continuing the study of HD 190073 in the future. We have new MYSTIC (Monnier et al., 2018) K-band data that we will be adding to get a wider look at the disk. We also plan on quantitatively analyzing the temporal rotation to get a better estimate of the rotation angle and to measure the correlation between the two epochs. 
The correlation will further confirm that the rotation is not an artifact of the (u,v) coverage. Future data will also reveal whether the star is off-center due to an inner binary. This work is based upon observations obtained with the Georgia State University Center for High Angular Resolution Astronomy Array at Mount Wilson Observatory. The CHARA Array is supported by the National Science Foundation under Grant No. AST-1636624 and AST-2034336. Institutional support has been provided by the GSU College of Arts and Sciences and the GSU Office of the Vice President for Research and Economic Development. MIRC-X received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant No. 639889). JDM acknowledges funding for the development of MIRC-X (NASA-XRP NNX16AD43G, NSF-AST 1909165) and MYSTIC (NSF-ATI 1506540, NSF-AST 1909165). This research has made use of the Jean-Marie Mariotti Center Aspro and SearchCal services. S.K., N.A., and C.L.D. acknowledge support from an ERC Starting Grant ("ImagePlanetFormDiscs", grant agreement No. 639889), ERC Consolidator Grant ("GAIA-BIFROST", grant agreement No. 101003096), and STFC Consolidated Grant (ST/V000721/1). A.L. received funding from STFC studentship No. 630008203. Observing travel support was provided by STFC PATT grant ST/S005293/1. Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programmes 0101.C-0896(A) and 0103.C-0915(A,B,C). E.A.R. and J.D.M. acknowledge support from NSF AST 1830728. Facilities: CHARA, VLTI. Software: Astropy (Astropy Collaboration et al., 2013, 2018, 2022), PMOIRED (Merand, 2022), SEDBYS (Davies, 2021), PNDRS (v3.52; Le Bouquin et al., 2011), OITOOLS.jl ([https://github.com/fabienbaron/OITOOLS.jl](https://github.com/fabienbaron/OITOOLS.jl))
2303.11355
Here comes the SU(N): multivariate quantum gates and gradients
Variational quantum algorithms use non-convex optimization methods to find the optimal parameters for a parametrized quantum circuit in order to solve a computational problem. The choice of the circuit ansatz, which consists of parameterized gates, is crucial to the success of these algorithms. Here, we propose a gate which fully parameterizes the special unitary group $\mathrm{SU}(N)$. This gate is generated by a sum of non-commuting operators, and we provide a method for calculating its gradient on quantum hardware. In addition, we provide a theorem for the computational complexity of calculating these gradients by using results from Lie algebra theory. In doing so, we further generalize previous parameter-shift methods. We show that the proposed gate and its optimization satisfy the quantum speed limit, resulting in geodesics on the unitary group. Finally, we give numerical evidence to support the feasibility of our approach and show the advantage of our gate over a standard gate decomposition scheme. In doing so, we show that not only the expressibility of an ansatz matters, but also how it's explicitly parameterized.
Roeland Wiersema, Dylan Lewis, David Wierichs, Juan Carrasquilla, Nathan Killoran
2023-03-20T18:00:04Z
http://arxiv.org/abs/2303.11355v2
# Here comes the \(\mathrm{SU}(N)\): multivariate quantum gates and gradients ###### Abstract Variational quantum algorithms use non-convex optimization methods to find the optimal parameters for a parametrized quantum circuit in order to solve a computational problem. The choice of the circuit ansatz, which consists of parameterized gates, is crucial to the success of these algorithms. Here, we propose a gate which fully parameterizes the special unitary group \(\mathrm{SU}(N)\). This gate is generated by a sum of non-commuting operators, and we provide a method for calculating its gradient on quantum hardware. In addition, we provide a theorem for the computational complexity of calculating these gradients by using results from Lie algebra theory. In doing so, we further generalize previous parameter-shift methods. We show that the proposed gate and its optimization satisfy the quantum speed limit, resulting in geodesics on the unitary group. Finally, we give numerical evidence to support the feasibility of our approach and show the advantage of our gate over a standard gate decomposition scheme. In doing so, we show that not only the expressibility of an ansatz matters, but also how it's explicitly parameterized. ## I Introduction Variational quantum computing is a paradigm of quantum computing that uses optimization algorithms to find the optimal parameters for a parameterized quantum circuit [1; 2]. Crucial for the success of such algorithms is the choice of circuit ansatz, which usually consists of multiple parameterized one and two-qubit gates. Typically, these gates are parameterized unitary matrices generated by single Pauli-string operators that can locally rotate a state around some axis: \(U(t)=\exp\{itG\}\), where \(t\) is a gate parameter and \(G\) a Pauli string. For a specific family of cost functions, there exist a variety of methods that allow one to obtain the gradient with respect to \(t\)[3; 4; 5; 6; 7; 8; 9] on quantum hardware. With these gradients, the cost function can be minimized via any gradient-descent-based algorithm. Instead of considering a gate generated by a single Pauli string, one can construct more general parameterized gates that can perform an arbitrary rotation in \(\mathrm{SU}(N)\), the special unitary group. These general \(\mathrm{SU}(N)\) rotations are used in a variety of quantum algorithms [10; 11; 12; 13]. In practice, rotations in \(\mathrm{SU}(N)\) can be implemented by composing several simple parameterized gates together into a more complicated one. For example, for single and two-qubit gates (where \(N=2,4\), respectively), there exist several general decomposition schemes of such gates into products of single-qubit gates and CNOTs [14; 15; 16; 17; 18; 19]. In practice, this compilation comes with hardware-specific challenges, since quantum hardware usually has a set of native gates into which all others have to be decomposed [20; 21]. Choosing the right parameterization for a function is important because it can significantly affect the properties of its gradients. Reparameterizing functions to obtain more useful gradients is a well-known method in statistics and machine learning. For example, in restricted maximum likelihood methods one can ensure numerical stability of quasi-Newton methods by decomposing covariance matrices into Cholesky factors [22]. In addition, methods like auxiliary linear transformations [23], batch normalization [24] and weight normalization [25] are used to improve the gradients in neural networks. 
In variational inference, the reparameterization trick [26] is at the core of variational autoencoder approaches and allows for gradients for stochastic back-propagation [27; 28]. In light of this, it may be worthwhile to investigate alternative parameterizations of quantum gates for variational quantum algorithms. In this work, we propose a family of parameterized unitaries called \(\mathrm{SU}(N)\) gates and provide a method to evaluate their gradients on quantum hardware. In doing so, we generalize the prior literature one step further, since many past schemes can be understood as special cases of our proposal [3; 4; 5; 6; 7; 8; 9]. We provide numerical results to support the validity of our approach and give several examples to illustrate the capabilities of the \(\mathrm{SU}(N)\) gate. We show that this gate satisfies the quantum speed limit and that it is easier to optimize compared to \(\mathrm{SU}(N)\) parameterizations that consist of products of gates. We argue that this is the case because the product of unitaries creates a "bias" in the Lie algebra that deforms the cost landscape. In addition, we highlight the connections between our formalism and the properties of semisimple Lie algebras and establish a bound on the computational complexity of the gradient estimation using tools from representation theory. ## II \(\mathrm{SU}(N)\) gates A quantum gate is a unitary operation \(U\) that acts on a quantum state \(\rho\) in a complex Hilbert space. If we ignore a global phase, then a gate \(U\) acting on \(N_{\mathrm{qubits}}\) qubits is an element of the special unitary group \(\mathrm{SU}(N)\) (see App. A), where \(N=2^{N_{\mathrm{qubits}}}\). Note that all of the following works for any \(N>1\), but here we restrict ourselves to the qubit case. We are interested in constructing a quantum gate that parameterizes all of \(\mathrm{SU}(N)\). To achieve this, we make use of the theory of Lie algebras. We will not be concerned with the formal treatment of this topic, which can be found in many excellent textbooks [29; 30; 31]. To construct our gate, we realize that \(\mathrm{SU}(N)\) is a (semisimple) Lie group and so there exists a unique connection between its elements and the Lie algebra \(\mathfrak{su}(N)\) via the so-called Lie correspondence, or Lie's third theorem [31; 32]. In particular, each \(g\in\mathrm{SU}(N)\) can be identified with an \(A\in\mathfrak{su}(N)\) via the exponential map \(g=\exp\{A\}\). For our purposes, we can understand the Lie algebra \(\mathfrak{su}(N)\) as a vector space of dimension \(N^{2}-1\) that is closed under the commutator, \([A,B]=AB-BA\in\mathfrak{su}(N)\) for \(A,B\in\mathfrak{su}(N)\). For \(\mathfrak{su}(N)\), we choose as a basis the tensor products of Pauli matrices multiplied by the imaginary unit \(i\): \[\mathcal{P}^{(N_{\mathrm{qubits}})}=\big{\{}i(\sigma_{1}\otimes\ldots\otimes \sigma_{N_{\mathrm{qubits}}})\big{\}}\setminus\{I_{N_{\mathrm{qubits}}}\}, \tag{1}\] where \(\sigma_{i}\in\{I,X,Y,Z\}\) and \(I_{N_{\mathrm{qubits}}}=iI^{\otimes N_{\mathrm{qubits}}}\). We choose the following parameterization of \(\mathrm{SU}(N)\): \begin{tabular}{|c|} \hline SU(\(N\)) gate parameterization \\ \hline \(U(\mathbf{\theta})=\exp\{A(\mathbf{\theta})\},\quad A(\mathbf{\theta})=\sum_{m}\theta_{m} G_{m},\quad\) (2) \\ \hline \end{tabular} where \(\mathbf{\theta}=(\theta_{1},\theta_{2},\ldots,\theta_{N^{2}-1})\in\mathbb{R}^{N^{2 }-1}\) and \(\{G_{m}\}\in\mathcal{P}^{(N_{\mathrm{qubits}})}\). 
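As a concrete illustration of Eq. (2), the following Python sketch (ours, not the authors' released code) builds the Pauli-string basis of Eq. (1) and assembles the gate by matrix exponentiation; the helper names are our own.

```python
# Minimal sketch (ours) of the SU(N) gate of Eq. (2) in the Pauli-string basis of Eq. (1).
from functools import reduce
from itertools import product
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def pauli_basis(n_qubits):
    """All i * (sigma_1 x ... x sigma_n), dropping the identity string: 4^n - 1 generators."""
    strings = product([I2, X, Y, Z], repeat=n_qubits)
    mats = [1j * reduce(np.kron, s) for s in strings]
    return mats[1:]                                   # remove i * I^{(x)n}

def su_n_gate(theta, generators):
    """U(theta) = exp(A(theta)), with A(theta) = sum_m theta_m G_m  (Eq. 2)."""
    A = sum(t * G for t, G in zip(theta, generators))
    return expm(A)

gens = pauli_basis(2)                                 # 15 generators for N = 4
theta = np.random.default_rng(0).normal(size=len(gens))
U = su_n_gate(theta, gens)
assert np.allclose(U.conj().T @ U, np.eye(4))         # U is unitary
```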
The coordinates \(\mathbf{\theta}\) are called the canonical coordinates, which uniquely parameterize \(U\) through the Lie algebra \(\mathfrak{su}(N)\). Note that this type of gate is quite natural for many quantum hardware platforms, since the physical control of qubits is achieved by evolving the system under a controllable Hamiltonian. These Hamiltonians often have multiple independently tunable fields which can be active at the same time and do not necessarily commute. To use this gate in a gradient-based variational quantum algorithm, we have to be able to obtain partial derivatives of \(U(\mathbf{\theta})\) with respect to each parameter \(\theta_{l}\). Although there exist a variety of works that provide analytical expressions for gradients through quantum circuits via the parameter-shift rule [33; 34; 35; 36; 37; 38; 39; 4], these works almost uniformly assume that the gate is of the form \(U(\theta)=\exp\{i\theta P\}\), where \(P\) is a Hermitian operator. As far as we are aware, the only methods to obtain gradients of Eq. (2) with respect to \(\mathbf{\theta}\) are the stochastic and Nyquist parameter-shift rules of [6] and [10], respectively. The first approach relies on an integral identity for bounded operators that is estimated via Monte Carlo [34], whereas the latter is based on a theorem in Fourier analysis [35]. ## III Obtaining the gradient Here, we provide a new approach to obtain the gradient of Eq. (2) that makes use of differentiable programming, which is efficient for gates acting on a small number of qubits. To start, we note that the partial derivative with respect to a parameter \(\theta_{l}\) is given by \[\frac{\partial}{\partial\theta_{l}}U(\mathbf{\theta}) =\frac{\partial}{\partial\theta_{l}}\exp\{A(\mathbf{\theta})\}\] \[=U(\mathbf{\theta})\sum_{p=0}^{\infty}\frac{(-1)^{p}}{(p+1)!}(\mathrm{ad}_{A(\mathbf{\theta})})^{p}\frac{\partial}{\partial\theta_{l}}A(\mathbf{\theta}). \tag{3}\] Here, \(\mathrm{ad}_{X}\) denotes the adjoint action of the Lie algebra given by the commutator \(\mathrm{ad}_{X}\left(Y\right)=[X,Y]\)[31]. Furthermore, we write \((\mathrm{ad}_{X})^{p}(Y)=[X,[X,\ldots[X,Y]]]\), hence \((\mathrm{ad}_{X})^{p}\) denotes a nested commutator of \(p\) terms. For more details, see App. B. Note that the term on the right of \(U(\mathbf{\theta})\) in Eq. (3) is an element of the Lie algebra, since \(\partial/\partial\theta_{l}A(\mathbf{\theta})=G_{l}\in\mathfrak{su}(N)\) and so the commutator keeps the entire sum in the algebra. For notational clarity we define \[\Omega_{l}(\mathbf{\theta})=\sum_{p=0}^{\infty}\frac{(-1)^{p}}{(p+1)!}(\mathrm{ad}_{A(\mathbf{\theta})})^{p}\frac{\partial}{\partial\theta_{l}}A(\mathbf{\theta}), \tag{4}\] where \(\Omega_{l}(\mathbf{\theta})\in\mathfrak{su}(N)\) is a skew-Hermitian operator that generates a unitary, which we call the _effective generator_. Given that Eq. (4) is an infinite series of nested commutators, it is not clear how \(\Omega_{l}(\mathbf{\theta})\in\mathfrak{su}(N)\) can be calculated in practice without truncating the sum. We can think of \(U\) as a function \(U:\mathbb{R}^{N^{2}-1}\rightarrow\mathrm{SU}(N)\) that we evaluate at the point \(\mathbf{\theta}\). Since \(\mathrm{SU}(N)\) is a differentiable manifold, we can define a set of local coordinates on the group and represent \(U(\mathbf{x})\) as a matrix described by \(N^{2}-1\) real numbers. 
Hence, we can think of our gate as a coordinate transformation between the parameters \(\mathbf{x}\) and the entries of the matrix representing the unitary. Since \(U(\mathbf{x})\) depends smoothly on \(x_{l}\) via the matrix exponential, this coordinate transformation comes with a corresponding Jacobian (or more accurately, pushforward) \(dU(\mathbf{x}):T_{\mathbf{x}}\mathbb{R}^{N^{2}-1}\to T_{U(\mathbf{x})}\mathrm{SU}(N)\) that maps vectors tangential to \(\mathbb{R}^{N^{2}-1}\) to vectors tangential to \(\mathrm{SU}(N)\). We can obtain this Jacobian by differentiating the elements \(U_{nm}(\mathbf{x})\) with respect to \(x_{l}\): \[\frac{\partial}{\partial x_{l}}U_{nm}(\mathbf{x})=\partial_{x_{l}}\mathfrak{Re}[U_{ nm}(\mathbf{x})]+i\partial_{x_{l}}\mathfrak{Im}[U_{nm}(\mathbf{x})]. \tag{5}\] To obtain the above matrix function numerically, we rely on the fact that the matrix exponential and its derivative are implemented in differentiable programming frameworks such as JAX [36], PyTorch [37] and Tensorflow [38] through automatic differentiation. Here we make use of the JAX implementation, which provides the matrix exponential through a differentiable Pade approximation [39; 40]. Continuing, we note that evaluating \(\partial U(\mathbf{x})/\partial x_{l}\) at a point \(\mathbf{\theta}\) produces an element of the tangent space \(T_{U(\mathbf{\theta})}\mathrm{SU}(N)\). We can move from the tangent space to the Lie algebra by left multiplying the elementwise derivative of Eq. (5) in Eq. (3) with \(U^{\dagger}(\mathbf{\theta})\) (see App. A), \[U^{\dagger}(\mathbf{\theta})\left(\frac{\partial}{\partial x_{l}}U(\mathbf{x})\bigg{|} _{\mathbf{\theta}}\right)=U^{\dagger}(\mathbf{\theta})U(\mathbf{\theta})\Omega_{l}(\mathbf{ \theta})=\Omega_{l}(\mathbf{\theta}), \tag{6}\] which allows us to obtain \(\Omega_{l}(\mathbf{\theta})\) exactly, up to machine precision. We emphasize that these steps can be performed on a classical computer, with a cost that is only dependent on the number of qubits the gate acts on, not the number of qubits in the circuit. We now make the following observation: \(\Omega_{l}(\mathbf{\theta})\) corresponds to a tangent vector on \(\mathrm{SU}(N)\) and generates the one-parameter subgroup \(V(t)=\exp\{t\Omega_{l}(\mathbf{\theta})\}\) such that \[\Omega_{l}(\mathbf{\theta})=\frac{d}{dt}\exp\{t\Omega_{l}(\mathbf{\theta})\}\big{|}_{t =0} \tag{7}\] and \[\frac{\partial}{\partial\theta_{l}}U(\mathbf{\theta})=U(\mathbf{\theta})\frac{d}{dt }\exp\{t\Omega_{l}(\mathbf{\theta})\}\big{|}_{t=0}. \tag{8}\] We sketch this procedure schematically in Fig. 1. We now consider a typical variational setting, where we are interested in minimizing the following cost function: \[C(\mathbf{\theta})=\mathrm{Tr}\big{\{}U(\mathbf{\theta})\rho U^{\dagger}(\mathbf{\theta})H \big{\}}, \tag{9}\] where \(H\) is some Hermitian operator and \(\rho\) the initial state of the system. For simplicity we consider a circuit consisting of a single \(\mathrm{SU}(N)\) gate. Differentiating the cost function with respect to \(\theta_{l}\) gives \[\frac{\partial}{\partial\theta_{l}}C(\mathbf{\theta})=\mathrm{Tr}\bigg{\{}\bigg{(} \frac{\partial}{\partial\theta_{l}}U(\mathbf{\theta})\bigg{)}\,\rho U^{\dagger}( \mathbf{\theta})H\bigg{\}}+\mathrm{h.c.} \tag{10}\] Then, plugging in Eq. 
(8) we find, \[\frac{\partial}{\partial\theta_{l}}C(\mathbf{\theta})=\] \[\frac{d}{dt}\left.\mathrm{Tr}\Big{\{}\Big{(}U(\mathbf{\theta})e^{t\Omega_{l}(\mathbf{\theta})}\rho e^{-t\Omega_{l}(\mathbf{\theta})}U^{\dagger}(\mathbf{\theta})\Big{)}\,H\Big{\}}\right|_{t=0}, \tag{11}\] where we used the skew-Hermitian property of the tangent vector \(\Omega_{l}^{\dagger}(\mathbf{\theta})=-\Omega_{l}(\mathbf{\theta})\). Note that Eq. (11) corresponds to a new circuit with the gate \(\exp\{t\Omega_{l}(\mathbf{\theta})\}\) inserted before \(U(\mathbf{\theta})\) (see Fig. 2). The gradient of this new circuit can be computed on quantum hardware with a generalized parameter-shift rule (GPSR) [7; 8; 9]. In Algorithm 1, we outline the entire algorithm for our gradient estimation and we denote the GPSR subroutine with gpsr. An alternative to the generalized shift rule is to decompose the effective generators and apply the original two-term parameter-shift rule to the constituents (see App. E.3 for details). In [41], the authors proposed the so-called stochastic parameter-shift rule for multivariate gates, which is based on the Monte Carlo approximation of an operator identity. In Fig. 3 we consider a toy example using a random Hamiltonian on a single qubit and compare the exact derivative of an \(\mathrm{SU}(2)\) gate with our generalized parameter-shift method (Algorithm 1), the stochastic parameter-shift rule and the central finite difference derivative with shifts \(\pm\frac{\delta}{2}\). In particular, we consider the gate \(U(\mathbf{\theta})=\exp(iaX+ibY)\) with \(\mathbf{\theta}=(a,b)\) and compute the partial derivative with respect to \(a\) over the range \(a\in[0,\pi]\) for three fixed values of \(b\) on a state vector simulator (without shot noise). For the finite difference recipe we use \(\delta=0.75\), which we found to be a reasonable choice for a shot budget of 100 shots per cost function evaluation (see App. E.2). We observe that the generalized \(\mathrm{SU}(N)\) derivative reproduces the exact value while the finite difference derivative is slightly biased. This is to be expected because the latter is an approximate method. While decreasing the shift size \(\delta\) reduces the deterministic approximation error, it leads to larger overall estimation errors in shot-based computations like on quantum computers (see App. E.2 and e.g., [42]). Figure 1: Schematic depiction of our approach. We move to the Lie algebra from the tangent space by left multiplication with \(U^{\dagger}(\mathbf{\theta})\) and obtain \(\Omega_{l}(\mathbf{\theta})\). The orbit generated by \(\Omega_{l}(\mathbf{\theta})\) corresponds to the gate we have to insert in the circuit to compute the gradient. Figure 2: The partial derivative with respect to the gate parameter \(\theta_{l}\) can be obtained by adding a gate to the circuit that is generated by \(\Omega_{l}(\mathbf{\theta})\). Calculating the derivative with respect to \(t\) and evaluating at \(t=0\) then provides one with the correct gradient. Finally, the stochastic parameter-shift rule yields an unbiased estimator for the exact derivative but has a finite variance, which we estimated using 100 samples (see App. E.1). We stress that this variance is a property of the differentiation method itself and not due to sampling on the quantum computer. All methods require two unique circuits per derivative but the stochastic shift rule needs additional circuits in order to suppress the variance. We provide the code for all our numerical experiments at [43]. 
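The classical step of this procedure, i.e., obtaining the effective generator of Eq. (6) by differentiating the matrix exponential, can be sketched in a few lines. The snippet below is our own illustration and not the released code [43]; it follows Eq. (5) by differentiating the real and imaginary parts of \(U(\mathbf{\theta})\) separately, and all function names are ours.

```python
# Sketch (ours, not the authors' released code [43]) of the classical part of Algorithm 1:
# build U(theta) = exp(sum_m theta_m G_m) and form the effective generator of Eq. (6),
# Omega_l = U(theta)^dagger dU/dtheta_l, by differentiating the matrix exponential in JAX.
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

def su_n_unitary(theta, generators):
    """U(theta) of Eq. (2); `generators` is a stacked (M, N, N) array of i * Pauli strings."""
    return jax.scipy.linalg.expm(jnp.tensordot(theta, generators, axes=1))

def effective_generator(theta, generators, l):
    """Omega_l(theta) in su(N), exact up to machine precision (no series truncation)."""
    U = su_n_unitary(theta, generators)
    # Eq. (5): differentiate real and imaginary parts with respect to theta_l.
    d_re = jax.jacfwd(lambda t: jnp.real(su_n_unitary(t, generators)))(theta)[..., l]
    d_im = jax.jacfwd(lambda t: jnp.imag(su_n_unitary(t, generators)))(theta)[..., l]
    return U.conj().T @ (d_re + 1j * d_im)            # Eq. (6)

# One-qubit toy model of Fig. 3: A(theta) = i a X + i b Y.
X = jnp.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
Y = jnp.array([[0.0, -1j], [1j, 0.0]])
gens = jnp.stack([1j * X, 1j * Y])
omega_a = effective_generator(jnp.array([0.7, 0.3]), gens, l=0)
assert jnp.allclose(omega_a, -omega_a.conj().T)       # skew-Hermitian, as expected
```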
In addition, we compare the three methods in the presence of shot noise in Fig. 4. We show the means and _single-shot_ errors estimated with 1000 shots, which we split over 100 samples for the stochastic shift rule. We observe that the generalized SU(\(N\)) shift rule systematically performs best. It is not only unbiased but also has the smallest variance. Note that for smaller parameters \(b\), the SU(\(N\)) shift rule and the stochastic shift rule show very similar variances. This is because \(U(\mathbf{\theta})\) approaches the gate \(R_{X}(a)=\exp(iaX)\), which can be differentiated with the original parameter-shift rule, and both rules indeed reduce to the two-term shift rule for \(R_{X}\). Finally, we note that Eq. (11) is closely related to the Riemannian gradient on SU(\(N\)) [44, 45]. However, instead of a gradient flow on a Lie group, we have defined a flow on the Lie algebra \(\mathfrak{su}(N)\), which we retract back to the manifold via the exponential map. This subtle difference induces a different flow from the SU(\(N\)) one, as we illustrate in App. C. Figure 4: Gradients of \(C(\mathbf{\theta})\) as in Fig. 3 but for finitely many shots on quantum hardware. We show the single-shot error for each method, estimated with 1000 shots, which varies with the gate parameters as noted e.g., in [9]. Our generalized SU(\(N\)) shift rule systematically outperforms the other methods. For small \(b\), the SU(\(N\)) and the stochastic shift rule approach the single-parameter shift rule and hence behave similarly. The finite difference shift \(\delta=0.75\) is chosen such that the bias and variance are traded off reasonably for 100 shots (see App. E.1 and e.g., [42]). For other shot numbers, \(\delta\) needs to be optimized anew, whereas the parameter-shift rules are known to perform optimally at fixed shifts. Figure 3: Gradients of \(C(\mathbf{\theta})\) for a single SU(2) gate and a random single-qubit Hamiltonian, in the limit of infinitely many shots on quantum hardware. We take \(A(\mathbf{\theta})=iaX+ibY\) where \(\mathbf{\theta}=(a,b)\) and consider the fixed values \(b=0.5,1.0,2.0\) together with \(a\in[0,\pi]\). Our generalized shift rule (dotted) reproduces the exact value (solid), whereas the central finite difference (dashed) is biased and the stochastic shift rule (solid, shaded) comes with a finite statistical error even without shot noise from the quantum measurements. Since we look at a single-qubit operation, \(\Omega_{a}(\mathbf{\theta})\) has a single spectral gap, so we require two shifted circuits to calculate the gradient entry (see App. D for details). The finite difference and the stochastic shift rule require two circuits as well, but additional executions are needed for the latter to reduce the shown single-sample error. ## IV Comparison with decomposed unitaries Previous parameterizations of \(\mathrm{SU}(N)\) unitaries consist of products of single-qubit gates and CNOTs [14; 15; 16; 17; 18; 19]. We refer to this parameterization as _decomposed_ \(\mathrm{SU}(N)\) gates. On the other hand, Eq. (2) describes a general \(\mathrm{SU}(N)\) unitary by exponentiating a parameterization of the Lie algebra \(\mathfrak{su}(N)\). Here, we investigate the effects of this alternative parameterization. ### Gate speed limit First, we investigate a speed limit in terms of the gate time. We slightly modify the definition of Eq. 
(2) for a unitary evolution of the system, \(U(\mathbf{\theta};t)\in\mathrm{SU}(N)\), to include a time \(t\in\mathbb{R}^{+}\), \[U(\mathbf{\theta};t)=\exp\bigl{\{}\bar{A}(\mathbf{\theta})t\bigr{\}}, \tag{12}\] where \(\bar{A}(\mathbf{\theta})=A(\mathbf{\theta})/\sqrt{\mathrm{Tr}\{A(\mathbf{\theta})^{\dagger}A(\mathbf{\theta})\}}\) is a normalized time-independent Hamiltonian (the imaginary unit \(i\) is included in \(A(\mathbf{\theta})\)). The normalization of \(\bar{A}(\mathbf{\theta})\) is equivalent to the normalization of \(\mathbf{\theta}\) in Euclidean norm, see Lemma 3 in App. F. The normalization of the Hamiltonian (or, equivalently, \(\mathbf{\theta}\)) means that the total path length covered by the evolution is directly proportional to the evolution time \(t\), since we are effectively setting the speed of the evolution to \(1\). The Lie group \(\mathrm{SU}(N)\) can be turned into a Riemannian manifold by equipping it with the Hilbert-Schmidt inner product \(g(x,y)=\mathrm{Tr}\{x^{\dagger}y\}\). A geodesic between two points on \(\mathrm{SU}(N)\) is a curve of minimal arc length. The unitary evolution \(U(\mathbf{\theta};t)\), parameterized by \(t\), is a one-parameter subgroup that gives the geodesic [44, Theorem III.6] from the identity element at time \(t=0\) to the final evolution \(U(\mathbf{\theta};t_{g})\) at time \(t=t_{g}\). Using Lemma 4 (App. F), the length of the path after time \(t_{g}\) is constant for time-independent normalized Hamiltonians with \(|\mathbf{\theta}|=1\), \[L[U(\mathbf{\theta};t),t_{g}]=\sqrt{N}t_{g}, \tag{13}\] and \(t_{g}\) is the minimum time required to generate the evolution \(U(\mathbf{\theta};t_{g})\). For an initial state \(\rho\) and final state \(\rho_{f}\), the Fubini-Study metric is used to find a minimum evolution time \[t_{g}=\frac{1}{\sqrt{N}}\arccos\biggl{(}\sqrt{\mathrm{Tr}\{\rho\rho_{f}\}}\biggr{)}, \tag{14}\] giving the Mandelstam-Tamm bound for time-independent normalized Hamiltonians. In practice, we may only have access to a restricted family of gates within \(\mathrm{SU}(N)\), for example due to hardware limitations, in which case we require a decomposition of a desired gate in \(\mathrm{SU}(N)\) into gates from this family. Here we want to compute the additional evolution time required by such a decomposition. The simplest gate decomposition is to break the unitary into two terms, \(U(\mathbf{\theta};t_{g})=U(\mathbf{\phi}^{(2)};t_{2})U(\mathbf{\phi}^{(1)};t_{1})\). The parameters \(\mathbf{\phi}^{(1)}\) and \(\mathbf{\phi}^{(2)}\) also correspond to normalized Hamiltonians, i.e., they have the norm \(|\mathbf{\phi}^{(1)}|=|\mathbf{\phi}^{(2)}|=1\). The following theorem gives the additional evolution time required when using the decomposed circuit instead of the geodesic evolution of a general \(\mathrm{SU}(N)\) gate. **Theorem 1**.: _Consider a general \(\mathrm{SU}(N)\) gate \(U(\mathbf{\theta};t_{g})\) with geodesic evolution time \(t_{g}\) together with a decomposition \(U(\mathbf{\phi}^{(2)};t_{2})U(\mathbf{\phi}^{(1)};t_{1})=U(\mathbf{\theta};t_{g})\). Let \(t_{d}=t_{1}+t_{2}\) be the total decomposed unitary evolution time. The additional time \(\Delta t=t_{d}-t_{g}\) required by the decomposition is then given by_ \[\Delta t=t_{d}-\arccos\big{(}\cos(t_{1})\cos(t_{2})\\ -\mathbf{\phi}^{(1)}\cdot\mathbf{\phi}^{(2)}\sin(t_{1})\sin(t_{2})\big{)} \geq 0. \tag{15}\] The proof of the theorem is in App. F. As expected, the additional time turns out to be positive, \(\Delta t>0\), for all \(\mathbf{\phi}^{(1)}\cdot\mathbf{\phi}^{(2)}<1\). 
Due to the normalization of \(\mathbf{\phi}^{(1)}\) and \(\mathbf{\phi}^{(2)}\), if \(\mathbf{\phi}^{(1)}\cdot\mathbf{\phi}^{(2)}=1\), then \(\mathbf{\phi}^{(1)}=\mathbf{\phi}^{(2)}=\mathbf{\theta}\). In this case, the restricted evolution is along the geodesic, \(t_{g}=t_{1}+t_{2}\), and \(\Delta t=0\). Theorem 1 gives the additional time for a decomposition into two gates. A decomposition into more gates would lead to an even greater evolution time. A corollary of Theorem 1 is that any circuit with multiple non-commuting layers of gates cannot be optimal in total time. ### Unbiased cost landscapes An additional advantage of the \(\mathrm{SU}(N)\) gate is that it weighs all optimization directions equally. In contrast, a parameterization of \(\mathrm{SU}(N)\) in terms of a product of gates will create a bias in the parameter space. We illustrate this point with the following example. Consider the decomposed \(\mathrm{SU}(2)\) gate \(V(\mathbf{\theta})=R_{Z}(\theta_{3})R_{Y}(\theta_{2})R_{Z}(\theta_{1})\) where \(R_{A}(\theta)=\exp\{i\theta A\}\) and \(A=X,Y,Z\). This is the ZYZ decomposition. Using similar techniques as in App. F, we can rewrite \(V(\mathbf{\theta})\) to be parameterized in terms of the Lie algebra: \[V(\mathbf{\theta})=\exp\{i\mathbf{\phi}\cdot\mathbf{\sigma}\}, \tag{16}\] where \(\mathbf{\sigma}=(X,Y,Z)\) and \[\mathbf{\phi}=\frac{\arccos(\cos(\theta_{2})\cos(\theta_{1}+\theta_{3}))}{\sqrt{1- \cos^{2}(\theta_{2})\cos^{2}(\theta_{1}+\theta_{3})}}\begin{pmatrix}\sin( \theta_{2})\sin(\theta_{1}-\theta_{3})\\ \sin(\theta_{2})\cos(\theta_{1}-\theta_{3})\\ \cos(\theta_{2})\sin(\theta_{1}+\theta_{3})\end{pmatrix}. \tag{17}\] If we look at the components of \(\mathbf{\phi}\), we see that the different directions in the Lie algebra are stretched or compressed as a result of the particular choice of parameterization. Consider the normalization \(|\theta_{1}|+|\theta_{2}|+|\theta_{3}|=1\) for the ZYZ decomposition and \(|\mathbf{\theta}|=1\) for the SU(\(N\)) gate. With each Hamiltonian term normalized to 1, the prefactor gives the evolution time. These choices of norm give equal total evolution times for the ZYZ decomposition and SU(2) gate, \(T_{\text{ZYZ}}=T_{\text{SU($N$)}}=\sqrt{2}\), irrespective of the specific parameters chosen. In Fig. 5, we graphically illustrate the Lie algebra deformation by showing the \(\mathbf{\phi}\) surface for both the ZYZ decomposition and SU(2) gate. Note that we have not considered any cost function here; the bias occurs at the level of the parameterization of a specific unitary. The effect of this bias is demonstrated in Fig. 6 for the simplest case of a single-qubit system with an SU(2) gate. The optimal parameters of the circuit are those that produce the state that gives the minimum of the cost function \(C(\mathbf{\theta})=-\langle Y\rangle\) (green star). We consider various initial parameters acting on the reference state \(\rho=|0\rangle\langle 0|\). The corresponding training paths are shown for each initial parameter vector. The training paths for the decomposed ZYZ circuit are depicted in Fig. 6(a). As the initial parameter \(\mathbf{\theta}_{0}\) acting on the reference state \(\rho\) (purple dots) moves closer to an unstable equilibrium point (orange diamond) the training path becomes increasingly suboptimal. At the unstable equilibrium the only gradient information is directly away from the instability rather than providing information about the direction towards the global minimum. 
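The deformation of the canonical coordinates described above (Fig. 5 and Eq. (17)) is easy to reproduce numerically. The sketch below is our own: it samples ZYZ parameters with \(|\theta_{1}|+|\theta_{2}|+|\theta_{3}|=1\) and shows that the norm of \(\mathbf{\phi}\) is not constant, whereas the SU(2) gate with \(|\mathbf{\theta}|=1\) always satisfies \(|\mathbf{\phi}|=1\).

```python
# Sketch (ours) of the bias illustrated in Fig. 5: with |theta_1|+|theta_2|+|theta_3| = 1,
# the ZYZ decomposition reaches canonical coordinates phi of varying norm, while the
# SU(2) gate with |theta| = 1 always gives |phi| = 1.
import numpy as np

def phi_zyz(t1, t2, t3):
    """Canonical coordinates of the ZYZ product, Eq. (17)."""
    c = np.cos(t2) * np.cos(t1 + t3)
    pref = np.arccos(c) / np.sqrt(1.0 - c**2)
    return pref * np.array([np.sin(t2) * np.sin(t1 - t3),
                            np.sin(t2) * np.cos(t1 - t3),
                            np.cos(t2) * np.sin(t1 + t3)])

rng = np.random.default_rng(1)
thetas = rng.dirichlet(np.ones(3), size=5000)     # nonnegative parameters summing to 1
norms = np.array([np.linalg.norm(phi_zyz(*t)) for t in thetas])
print(norms.min(), norms.max())   # |phi| dips below 1 in the interior of the simplex
```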
This behavior is further illustrated by the gradient vector field on the Bloch sphere in Fig. 6(c). For the SU(\(N\)) gate, we see in Fig. 6(b) that the optimization trajectories follow a direct path to the minimum. ### Numerical experiment To investigate the effect on the performance of a typical optimization, we study how an SU(\(N\)) gate compares with a decomposed gate in a larger circuit. In Fig. 7 we provide a non-trivial example, where we incorporate our gates into a circuit and show that it performs better than a decomposed SU(4) gate on a set of random problem instances. We show the individual optimization trajectories in Fig. 8, which illustrate the faster optimization of SU(\(N\)) gates compared to decomposed gates. Like for the examples in Fig. 3 and Fig. 4, we assume that there is no gate or measurement noise. Additionally, we assume that we can always implement the gate generated by \(\Omega_{l}(\mathbf{\theta})\), and have control over all Pauli operators \(G_{m}\). In practice, we typically only have access to a fixed set of generators \(\text{span}(\{G_{m}\})<\text{span}(\mathfrak{su}(N))\). If this is the case, then we require a decomposition of \(\exp\{t\Omega_{l}(\mathbf{\theta})\}\) in terms of the available operations on the device [14; 19]. All numerical results were obtained with PennyLane[46], and the SU(\(N\)) gates can be accessed via the qml.SpecialUnitary class. Although we do not explore this here, one could make use of sparse optimization methods such as stochastic optimization [47; 48] and frugal optimization [49] for the GPSR subroutine in our algorithm. Figure 5: The total unitary evolution for the ZYZ decomposition (red) and the SU(2) gate (blue) can be expressed in the form \(\exp\{i\mathbf{\phi}\cdot\mathbf{\sigma}\}\). The components \(\mathbf{\phi}=(\phi_{1},\phi_{2},\phi_{3})\) give the magnitude of the respective basis generators \(\mathbf{\sigma}=(X,Y,Z)\). The original parameterization in \(\mathbf{\theta}\) with norm \(|\theta_{1}|+|\theta_{2}|+|\theta_{3}|=1\) gives a surface of possible values of \(\mathbf{\phi}\) and therefore possible unitary evolutions. The SU(2) gate (blue) is unbiased because its parameterization gives the correspondence \(\mathbf{\theta}=\mathbf{\phi}\) with normalization \(\mathbf{\phi}_{1}^{2}+\mathbf{\phi}_{2}^{2}+\mathbf{\phi}_{3}^{2}=1\). The unitary evolution for the ZYZ decomposition (red) is biased because the surface in the \(\mathbf{\phi}\) coordinates does not maintain an equal magnitude in all directions. ## V Resource estimation To obtain the partial derivative in Eq. (11) in practice we need to estimate the gradient of a circuit that contains a gate generated by \(\Omega_{l}(\mathbf{\theta})\). As noted in recent works on GPSR rules [7; 8; 9], the computational cost of estimating this gradient is related to the spectral gaps of \(\Omega_{l}(\mathbf{\theta})\). In particular, if \(\{\lambda_{j}\}\) is the set of (possibly degenerate) eigenvalues of \(\Omega_{l}(\mathbf{\theta})\), we define the set of unique spectral gaps as \(\Gamma=\{|\lambda_{j}-\lambda_{j^{\prime}}|\}\) where \(j^{\prime}>j\). Note that for \(d\) distinct eigenvalues, the number of unique spectral gaps \(R\) is at most \(R\leq d(d-1)/2\). The total number of parameter-shifted circuits is then \(2R\) for a single partial derivative \(\partial_{\theta_{l}}C(\mathbf{\theta})\). Depending on the generator \(\Omega_{l}(\mathbf{\theta})\), this complexity can be improved. 
For instance, in [7], a Cartan decomposition is used to improve the number of circuits required from polynomial to linear or even logarithmic in \(N\). Additionally, in [8], the different costs for first- and second-order gradients are determined for specific variational quantum algorithms like QAOA [50] and RootSolve [51, 52, 53, 54]. Finally, in [9], the computational cost of a variety of different gates is investigated in detail and the variance across the parameter regime is studied. Instead of focusing on specific instances of the generator \(\Omega_{l}(\mathbf{\theta})\), we make a more general observation about the computational complexity of parameter-shift gradient rules. In general, \(\Omega_{l}(\mathbf{\theta})\) has full support on \(\mathfrak{su}(N)\), since the consecutive applications of \(\mathrm{ad}_{A(\mathbf{\theta})}\) in Eq. (3) typically generate all of \(\mathfrak{su}(N)\)[55]. However, for specific choices of \(A(\mathbf{\theta})\), the application of \(\mathrm{ad}_{A(\mathbf{\theta})^{p}}\) to \(\partial_{\theta_{l}}A(\mathbf{\theta})\) closes to form a subalgebra, called the dynamical Lie algebra of \(A(\mathbf{\theta})\), that is contained in \(\mathfrak{su}(N)\). These algebras are well-known in the context of quantum optimal control [56, 57], and have recently been studied in the context of variational quantum algorithms [58, 59]. For our purposes, we define the dynamical Lie algebra (DLA) \(\mathcal{L}(A(\mathbf{\theta}))\) as the subalgebra formed under the closure of the non-zero terms in \(A(\mathbf{\theta})\) under the commutator and ignore trivial subalgebras consisting of a single element. For example, given \(A(\mathbf{\theta})=i(aX+bY)\), \(\forall a,b\in\mathbb{R}\), we have \(\mathcal{L}(A(\mathbf{\theta}))=\mathrm{span}\{iX,iY,iZ\}\), since \(\mathrm{ad}_{X}\left(Y\right)=[X,Y]=iZ\) and successive commutators generate no new contributions. Note that for this example the DLA equals the full Lie algebra \(\mathfrak{su}(2)\). In particular, for \(N_{\mathrm{qubits}}=1\) there is only one possible DLA, the one that spans \(\mathfrak{su}(2)\). If \(N_{\mathrm{qubits}}>1\), there exist several nontrivial cases. For example, in [60] explicit constructions are given for DLAs that span \(\mathfrak{so}(N)\) and \(\mathfrak{sp}(N)\). Figure 6: Comparison of the update of circuit parameters from various initial parameters acting on the initial state \(\rho=|0\rangle\langle 0|\). The training paths are depicted on the Bloch sphere for: (a) parameterized single-qubit rotations for the ZYZ ansatz; and (b) using the \(\mathrm{SU}(N)\) gate. The purple dots represent initial states generated by applying \(U(\mathbf{\theta}_{0})\) with \(\mathbf{\theta}_{0}=(0,a,0)\) where \(a\in\{\frac{\pi}{64},\frac{\pi}{8},\frac{2\pi}{8},\frac{3\pi}{8},\frac{3\pi}{2}\}\) to \(\rho\). Note that for this choice of initial parameters, \(U(\mathbf{\theta}_{0})=V(\mathbf{\theta}_{0})\). The objective function is \(C(\mathbf{\theta})=-\langle Y\rangle\), giving the target final state at the green star—the state that gives the global minimum of \(C(\mathbf{\theta})\). The unstable equilibrium points are given by orange diamonds, at \((0,0,1)\) and \((0,0,-1)\), and the black point is at the maximum of the cost function, \((0,1,0)\). (c) shows the gradient vector field of the decomposed ZYZ ansatz. The vector field for the \(\mathrm{SU}(2)\) gate, shown in (d), coincides with the geodesic flow towards the target final state at all points, which satisfies the gate speed limit. Figure 7: Comparison of decomposed gates versus \(\mathrm{SU}(N)\) gates in brick-layer circuits for random 10-qubit Hamiltonians and various depths. We consider the brick-layer circuit indicated with \(\ell=2\) in the inset, with general two-qubit gates acting on the even and odd qubits in each layer. The decomposed gate is the \(\mathrm{SU}(4)\) parameterization of [16], which is optimal in the number of CNOTs required. For each instance, we sample a Hamiltonian from the Gaussian unitary ensemble and minimize the cost in Eq. (9) via standard gradient descent. We show the difference of the relative errors in energy \(\bar{E}=(E-E_{\mathrm{min}})/(E_{\mathrm{max}}-E_{\mathrm{min}})\) between the decomposed gates and the \(\mathrm{SU}(N)\) gates, that is \(\Delta\bar{E}=\bar{E}_{\mathrm{SU}(N)}-\bar{E}_{\mathrm{Decomp.}}\). The plotted lines are the mean \(\bar{E}\), averaged over 50 random Hamiltonians for each circuit depth \(\ell\). We see that for all depths \(\Delta\bar{E}<0\) at all points during the optimization, hence the brick-layer circuit with the \(\mathrm{SU}(N)\) gates outperforms the circuit where the two-qubit interactions are parametrized as a gate composition. In a more recent work, the DLAs of several typical quantum many-body Hamiltonians are studied and their properties are used to prepare efficient time-evolution circuits [61]. These Hamiltonians could be used as generators \(A(\mathbf{\theta})\) for \(\mathrm{SU}(N)\) gates. Interestingly, if the DLA is maximal, i.e., there exists no smaller non-trivial subalgebra within \(\mathcal{L}(A(\mathbf{\theta}))\), then the _roots_ of the Lie algebra can be related directly to the computational cost of estimating the gradients in Eq. (11). We formally establish this connection with the following theorem: **Theorem 2**.: _The number of unique spectral gaps \(R\) of \(\Omega_{l}(\mathbf{\theta})\) is upper bounded by the number of roots \(|\Phi|\) of any maximal semi-simple DLA,_ \[R\leq|\Phi|/2. \tag{18}\] We provide the proof of Theorem 2 in App. G. We make use of the fact that any semisimple Lie algebra can be written as a direct sum of its weight spaces, which can be identified with its root system [62]. The number of roots \(|\Phi|\) can then be used to bound the total number of unique spectral gaps of \(\Omega_{l}(\mathbf{\theta})\). We can thus use Theorem 2 to assess the run time of Algorithm 1. We give several examples of \(\mathrm{SU}(N)\) gates in App. G together with the corresponding values of \(R\). Depending on the physical system or hardware that we are working with, we have to choose a representation for \(\mathfrak{su}(N)\), which is a map \(\mathfrak{su}(N)\to\mathfrak{gl}(N,\mathbb{C})\). In Eq. (1) we chose this representation to be the tensor product of the fundamental representation, i.e., Pauli monomials. Note however, that Eq. (11) and Theorem 2 hold for any irreducible representation of \(\mathfrak{su}(N)\). Additionally, by connecting the spectral gaps to the root system of the DLA, we can make use of a beautiful result in representation theory: the classification of all maximal subalgebras of the classical Lie algebras [63]. Each root system can be uniquely identified with a particular subalgebra of a Lie algebra and it can be shown that there exists a finite number of root systems. Since a DLA is a subalgebra of \(\mathfrak{su}(N)\), we can identify all possible DLAs and by extension all possible families of \(\mathrm{SU}(N)\) gates. We provide examples of this procedure in App. G. 
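The counting that enters this resource estimate is easy to reproduce numerically. The sketch below is our own illustration: it lists the unique spectral gaps of an effective generator, and for the single-qubit toy model there is a single gap, so two shifted circuits suffice, as used for Fig. 3.

```python
# Sketch (ours) of the gap counting behind Sec. V: the number R of unique spectral gaps
# of Omega_l(theta) sets the number 2R of parameter-shifted circuits in the GPSR rule.
import numpy as np

def unique_spectral_gaps(omega, decimals=10):
    """Unique positive gaps |lambda_j - lambda_j'| of a skew-Hermitian Omega."""
    evals = np.linalg.eigvalsh(1j * omega)            # 1j * Omega is Hermitian
    gaps = {round(abs(a - b), decimals) for a in evals for b in evals}
    gaps.discard(0.0)
    return sorted(gaps)

# Single-qubit example Omega = i(aX + bY): eigenvalues +-sqrt(a^2 + b^2),
# hence R = 1 and two shifted circuits per partial derivative.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
gaps = unique_spectral_gaps(1j * (0.4 * X + 0.9 * Y))
print(len(gaps), "unique gap(s):", gaps)
```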
## VI Conclusion We have proposed an alternative parameterization of general \(\mathrm{SU}(N)\) gates and a method of optimizing these gates in prototypical variational quantum algorithms. We have shown that our gates are more powerful in toy example settings, and motivated why we believe this is the case based on quantum speed-limit arguments. A natural extension of our work would be to test our method in experimental settings, both on gate-based quantum computers and on quantum simulators [64; 65; 66]. With regards to the latter, several methods have been investigated that could provide pulse-level optimization of energy cost functions [67; 68]. This would obviate the need for a gate-based model of quantum computing to prepare specific states on quantum hardware. Instead, we work on the Hamiltonian level and control the system directly. Our algorithm could be applied to this setting as well, since we're effectively learning the parameters of some fixed Hamiltonian. We have shown that the \(\mathrm{SU}(N)\) gate in a circuit outperforms a decomposed gate. Our proposed gate requires \(4^{N_{\mathrm{qubits}}}\) parameters, and it is not clear for which problems one would rather have a deeper circuit with simple gates as opposed to a shallow circuit with more powerful gates. This also begs another question: will our gates suffer from barren plateaus [69]? It is likely that a circuit of 2-qubit \(\mathrm{SU}(N)\) gates that has linear depth in \(N\) will lead to a circuit that forms an approximate 2-design, which will suffer from vanishing gradients. However, appropriate choices of the generators \(A(\mathbf{\theta})\) of our gate could keep the circuit in a polynomially scaling DLA of the entire circuit, which can avoid barren plateaus [58; 59]. Additionally, we can consider parameter initialization strategies that can improve the optimization [70; 71]. Finally, we believe that the connections between variational quantum circuits and representation theory merit further investigation. We connected the classification of all \(\mathrm{SU}(N)\) gates with the classification of semisimple Lie algebras. However, this could possibly be extended to a classification of all variational quantum circuits based on the DLA of an ansatz. It seems that the tools to provide such a classification are available and could provide one with a method to assess the trainability and expressivity of variational circuits without explicitly referring to specific ansatze. Figure 8: Trajectories from the optimizations in Fig. 7 for 50 random 10-qubit Hamiltonians sampled from the Gaussian unitary ensemble and an \(\ell=5\) brick-layer circuit of 2-qubit building blocks. We compare the relative error energy (see Fig. 7 for the definition of \(\bar{E}\)) when using a standard gate composition to that when using \(\mathrm{SU}(4)\) gates as building blocks. The optimization is performed with vanilla gradient descent using a learning rate of \(\eta=10^{-3}\). The \(\mathrm{SU}(4)\) gate consistently leads to faster optimization and better approximations of the ground state energy throughout all \(10^{5}\) optimization steps. ## VII Acknowledgements We want to thank Los Alamos National Lab for their hospitality during the Quantum Computing Summer School where the initial stages of this project took place. RW wants to thank Lex Kemper and Efekan Kokcu for discussions on the subject of dynamical Lie algebras and Matt Duchenes for his suggestions with regards to the experimental implications of our work. 
DL acknowledges support from the EPSRC Centre for Doctoral Training in Delivering Quantum Technologies, grant ref. EP/S021582/1. JFCA acknowledges support from the Natural Sciences and Engineering Research Council (NSERC), the Shared Hierarchical Academic Research Computing Network (SHARCNET), Compute Canada, and the Canadian Institute for Advanced Research (CIFAR) AI chair program. Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector Institute [https://vectorinstitute.ai/#partners](https://vectorinstitute.ai/#partners).
2305.13814
Leveraging BEV Representation for 360-degree Visual Place Recognition
This paper investigates the advantages of using Bird's Eye View (BEV) representation in 360-degree visual place recognition (VPR). We propose a novel network architecture that utilizes the BEV representation in feature extraction, feature aggregation, and vision-LiDAR fusion, which bridges visual cues and spatial awareness. Our method extracts image features using standard convolutional networks and combines the features according to pre-defined 3D grid spatial points. To alleviate the mechanical and time misalignments between cameras, we further introduce deformable attention to learn the compensation. Upon the BEV feature representation, we then employ the polar transform and the Discrete Fourier transform for aggregation, which is shown to be rotation-invariant. In addition, the image and point cloud cues can be easily stated in the same coordinates, which benefits sensor fusion for place recognition. The proposed BEV-based method is evaluated in ablation and comparative studies on two datasets, including on-the-road and off-the-road scenarios. The experimental results verify the hypothesis that BEV can benefit VPR by its superior performance compared to baseline methods. To the best of our knowledge, this is the first trial of employing BEV representation in this task.
Xuecheng Xu, Yanmei Jiao, Sha Lu, Xiaqing Ding, Rong Xiong, Yue Wang
2023-05-23T08:29:42Z
http://arxiv.org/abs/2305.13814v1
# Leveraging BEV Representation for 360-degree Visual Place Recognition ###### Abstract This paper investigates the advantages of using Bird's Eye View (BEV) representation in 360-degree visual place recognition (VPR). We propose a novel network architecture that utilizes the BEV representation in feature extraction, feature aggregation, and vision-LiDAR fusion, which bridges visual cues and spatial awareness. Our method extracts image features using standard convolutional networks and combines the features according to pre-defined 3D grid spatial points. To alleviate the mechanical and time misalignments between cameras, we further introduce deformable attention to learn the compensation. Upon the BEV feature representation, we then employ the polar transform and the Discrete Fourier transform for aggregation, which is shown to be rotation-invariant. In addition, the image and point cloud cues can be easily stated in the same coordinates, which benefits sensor fusion for place recognition. The proposed BEV-based method is evaluated in ablation and comparative studies on two datasets, including on-the-road and off-the-road scenarios. The experimental results verify the hypothesis that BEV can benefit VPR by its superior performance compared to baseline methods. To the best of our knowledge, this is the first trial of employing BEV representation in this task. ## I Introduction Visual place recognition (VPR) is an essential component of global localization in the field of autonomous driving. The VPR task is challenging because it must deal with the entangled variation from environmental changes such as illumination and seasonal transitions, as well as large perspective differences. Thus, learning a robust place representation becomes a long-standing problem in the community [1, 2, 3, 4, 5, 6, 7, 8]. In the VPR literature, both classification [4, 9] and contrastive representation learning [1, 10] algorithms have been used to generate a single image-based place feature for efficient retrieval. Some of these methods have already been employed in autonomous systems that operate in well-conditioned situations [11, 12]. As the images taken from a single perspective camera have narrow fields of view, such a solution is incapable of covering the surrounding scene. Therefore, place recognition only happens if the place is revisited from a viewpoint that is quite similar to the original one. Fortunately, with the increasing expansion of autonomous driving, multi-camera setups are gaining popularity and becoming more affordable [13, 14, 15]. Multiple cameras deployed on autonomous vehicles are able to capture the nearly 360-degree view, equivalent to LiDAR, thus promising robust VPR when a large perspective change occurs. The most straightforward way to tackle the VPR task in a multi-camera setup is to generate panoramic images from multi-camera images and employ the existing VPR methods, which mainly aggregate features from the image plane. Alternatively, we note that the bird's-eye view (BEV) representation has attracted recent attention. Several pioneering works [16, 17] demonstrate that features from multi-camera images can be aggregated in BEV representation for better performance in detection and segmentation compared with image-based methods. Such a trend raises a question: _Can BEV representation benefit visual place recognition?_ In this paper, we set out to study the effective pathways by which BEV representation can promote visual place recognition. 
We consider that the BEV representation has advantages in three aspects: feature extraction, feature aggregation, and the vision-LiDAR feature fusion, since these parts can be endowed with spatial awareness. Specifically, as shown in Fig. 1, we first extract features from images using standard convolutional networks. After the image feature extraction, pre-defined 3D spatial points are then projected onto the images to further extract the spatial features. To alleviate the possible misalignment brought by noisy calibration or synchronization, we further employ deformable attention. Fig. 1: The BEV representation for 360-degree visual place recognition is introduced. The BEV representation benefits the VPR in three ways. First, in the feature extraction part, a standard convolutional network is better suited to multi-view images than panoramic images. Second, rotation invariance on the BEV representation can be easily achieved by the combination of the polar transform and the Discrete Fourier transform. Third, it is more plausible to fuse the image and point cloud features within the same coordinate system. Then the consolidated features in 3D space are converted to polar BEV, where the Discrete Fourier transform is used to achieve rotation-invariant feature aggregation. Upon BEV representation, we state the image and point cloud cues in the same coordinates, which benefits the sensor fusion for place recognition. In the experiments, we evaluate the proposed BEV-based method in ablation and comparative studies. The results show that superior performance can be achieved using our methods, verifying the hypothesis. In summary, the contributions are three-fold: * We investigate the benefits of BEV representation in 360-degree visual place recognition. To the best of our knowledge, this is the first trial of employing such representation in this task. * We propose a network architecture to utilize the BEV representation in feature extraction, feature aggregation, and vision-LiDAR fusion, bridging the visual cues and spatial awareness. * We verify the hypothesis that BEV benefits VPR in two datasets including off-the-road and on-the-road scenarios. The method shows competitive performance compared with the baseline methods, demonstrating its effectiveness. Our code is available at [https://github.com/MaverickPeter/vDiSCO](https://github.com/MaverickPeter/vDiSCO). ## II Related Works ### _Visual Place Recognition_ Early VPR methods often use local features such as SIFT [18], SURF [19] and ORB [20] with aggregation strategies like Bag of Words (BoW) [21], Vector of Locally Aggregated Descriptors (VLAD) [22], or Aggregated Selective Match Kernels (ASMK) [23]. These local features can be easily applied to panoramic images. [24] proposes visual loop closing methods that adopt SIFT as local features and BoW as aggregation. Such an implementation is an imitation of the common VPR pipeline, which also has issues with feature discrimination. With the prevalence of deep learning, remarkable progress has been made in local features [25, 26, 27]. Most known local features are reviewed in [28, 29]. In recent local feature learning frameworks, DELF [30] applies an attention layer to select local keypoints, and DOLG [31] further utilizes multi-atrous convolution layers to include multi-scale feature maps. Beyond aggregation of local features, global descriptors can also be generated by several differentiable aggregation operations, such as sum-pooling [32], GeM pooling [33] and NetVLAD [1]. 
Thanks to the strong ability provided by the end-to-end CNN model, NetVLAD [1] and its variants [34, 35, 36] outperform early methods. Several methods [37, 38] adopt the CNN model to extract discriminative features on panoramic images. However, convolutional networks may not be the best option because objects in panoramic images tend to be distorted. Besides feature extraction and aggregation, many losses, including ranking-based triplet [39], quadruplet [40], and listwise losses [10] are proposed in order to improve the training of neural networks. More recently, some approaches focus on re-ranking the place matches using sequential information [41, 42], query expansion [43] or geometric verification [6, 30]. With the fast development of LiDAR place recognition, researchers are also interested in multi-modal fusion methods. Multi-modal methods [44, 45] bring LiDAR to vision and thus improve the performance in some degraded scenarios. These approaches, however, merely combine descriptors from different modalities and ignore the connections between their distinct features. As mentioned above, the majority of previous works concentrate on feature extraction and aggregation. In contrast to these approaches, we concentrate on feature representation. Leveraging the BEV representation in feature extraction, feature aggregation, and vision-LiDAR fusion, we bring the spatial awareness to the visual cues. ### _BEV Representation_ The majority of previous VPR techniques are designed to handle retrieval tasks with single-view images. With the advancement of autonomous robots, multi-view images are becoming more accessible. However, the corresponding VPR algorithms using multi-view images have not yet been developed. In contrast to single-camera setups, representation in multi-camera scenarios can be diverse. Panorama as an intuitive representation of multi-view images is only occasionally mentioned in the literature [24], not to mention other representations. On the contrary, in the field of autonomous driving, various spatial representations have recently gained popularity due to the rapid growth of 360-degree vision suites. Representations such as BEV and 3D volume perform better than separate images because features in the image plane are transformed into features in a spatial representation, bringing spatial awareness across 360-degree images. Among all representations, BEV is a commonly used one since it clearly presents the location and scale of objects and is suitable for various autonomous driving tasks. In the related LiDAR place recognition, several methods [46, 47, 48, 49] utilize the BEV representation to achieve rotation invariance and estimation. In [50], authors further prove that, in contrast to other representations like range images, the translation variance in BEV can be decoupled and eliminated. Some works also explore the possibility of leveraging multi-modal information using unified BEV representation. CORAL [51] creates a multi-modal BEV that works better than LiDAR-only techniques by projecting image features onto the elevation BEV to include appearance information. Radar-to-LiDAR [52] applies a joint learning strategy to acquire robust features that can be used in cross-modality localization. Other than BEV representation, 3D volume is also an effective spatial representation. In [53], 3D feature volume is used to fuse multi-modal information, but the high demand for computational memory prevents its application. 
In this study, we introduce BEV representation to tackle the VPR task. With all features represented in a unified BEV representation, our method can be easily extended to multi-modal scenarios. ## III Overview Our BEV-based 360-degree visual place recognition network takes observations of a place \(p\), including multi-camera images, denoted as \(I_{p}=\{I_{p}^{i}\}_{i=1}^{N_{view}}\), where \(I_{p}^{i}\) is the image of the \(i\)-th view, \(N_{view}\) is the total number of camera views. After feature extraction, the multi-image cues are combined into a BEV representation, which is then aggregated to generate the place feature \(\mathcal{M}_{p}\). The architecture is demonstrated in Fig. 2. Following this way, we generate the place features for all places in the existing map to build a database \(\{\mathcal{M}_{i}\}\). We also generate the place feature for the current query place, say \(\mathcal{M}_{p}\). By comparing the Euclidean distance between the query place feature and the features in the database, we finally recognize the current place as the one in the map having the minimal difference, say the \(i^{*}\)th place as: \[i^{*}=\arg\min_{i}\|\mathcal{M}_{p}-\mathcal{M}_{i}\| \tag{1}\] When a LiDAR measurement is available simultaneously, we further utilize a point cloud feature extractor to encode the structural feature and project the feature into BEV representation. As both modalities are represented in the same spatial coordinates, we simply concatenate the BEV representations for place feature aggregation. We assume that the sensory data is synchronized and that the intrinsic and extrinsic parameters are known. ## IV BEV Representation Encoder The type of camera used in multi-camera platforms may differ since each camera serves a different role. For example, in the Oxford Radar Dataset [14], the city autonomous driving scenario demands that the front view is significantly more important than other views, therefore, the front view is covered by a high-resolution stereo camera, whereas the three side-view cameras are fisheye. With the known intrinsic parameters, the fisheye images can be easily undistorted. As for the front stereo camera, the short baseline resulted in a huge overlap of view, therefore, we only utilize one of the stereo images to cover the area in the front. All images are cropped and resized to fit the GPU memory. ### _Image Feature Extraction_ After the image processing step, we then extract features \(F_{p}=\{F_{p}^{i}\}_{i=1}^{N_{view}}\) from each one of the processed multi-camera images \(I_{p}\). The visual feature could be extracted by any commonly used 2D backbone, such as ResNet [54] or EfficientNet [55]. Note that we do not stitch the images into one panorama but keep images undistorted separately, the reason is that such image formation is the best match for existing standard convolutional network-based feature extraction backbones, while for panorama, special architecture is required, preventing the utilization of pre-trained model. ### _Vanilla BEV Representation_ To integrate the multi-image features into a unified feature, we employ BEV representation, which endows the feature with spatial awareness. Compared with the concatenation of multi-image features, we consider that such spatial awareness provides a unified frame for multi-images fusion, bringing a stronger inductive bias on geometric constraints. We define a 3D volume representation \(\mathcal{G}\) fixed to the center of the multi-camera system to gather all image features. 
Its horizontal plane is aligned with the BEV plane. After the image feature extraction, each center \(g_{j}\) of these voxel grids in \(\mathcal{G}\) is projected to the image plane using the known camera parameters \(\mathcal{K}\) and retrieves the image features. To avoid quantization error, bi-linear sampling is utilized to handle subpixel retrieval. We denote the views having a valid point projection as \(\mathcal{V}_{p}\) i.e. point lies inside the camera view frustum. The process steps can be formulated as: \[\mathcal{G}(g_{j})=\frac{1}{|\mathcal{V}_{p}|}\sum_{k\in\mathcal{V}_{p}}F_{p}^{k}(\mathcal{P}(g_{j},k,\mathcal{K})) \tag{2}\] where \(k\) indexes the projected camera view, \(F_{p}^{k}\) is the feature map of the \(k\)-th camera image. We use camera parameters \(\mathcal{K}\) to form a projection function \(\mathcal{P}(g_{j},k,\mathcal{K})\) to get the image position of a voxel grid on the \(k\)-th image. For the voxel having multiple features retrieved from different camera views, we average these features. The averaged 3D features are then compressed to a BEV using convolution layers along the height dimension. Fig. 2: Overview of our framework. Given multi-view images, an image encoder is applied to acquire image features and then sampled points in the BEV representation are projected onto the image plane and associated features are gathered in a BEV. To make use of the DFT for rotation invariance, the Polar transform is leveraged to convert rotation to translation dimension. We use this BEV feature representation as our vanilla version. Since the retrieval step requires accurate camera parameters, inaccurate calibration may have negative effects on the final results. Synchronization error is another negative source for the vanilla version feature construction. To alleviate these negative effects, we adopt deformable attention, which is an efficient attention layer where each query can interact with different parts of the image by a learned offset compensating the misalignment. ### _Deformable Attention for BEV Representation_ We present the spatial cross-attention based on deformable attention to address the misalignment as well as the high computational cost imposed by the large input scale of multi-camera images. As shown in Fig. 3, instead of a fixed projected pixel, the deformable attention allows each voxel grid to project on different areas, which can fall on some views determined by the camera parameters and learned offsets. Specifically, the deformable attention mechanism is formulated as: \[\Psi(q_{j},p,F)=\sum_{i=1}^{N_{head}}W_{i}\sum_{j=1}^{N_{key}}A_{ij}\cdot W_{i}^{\prime}F(p+\Delta p_{ij}) \tag{3}\] where \(q_{j}\), \(p\), \(F\) represent the query, sampled pixel and feature, respectively. Precisely, the query is a learnable parameter and sampled pixel is the corresponding pixel of the query. \(N_{head}\) is the total number of attention heads and \(N_{key}\) is the total number of sampled keys for each head. \(W_{i}\) and \(W_{i}^{\prime}\) are the learnable weights. \(A_{ij}\) is the normalized attention weight and \(\Delta p_{ij}\) is the predicted offsets for this sampled point. \(F(p+\Delta p_{ij})\) is the retrieved feature at positions \(p+\Delta p_{ij}\). Note that \(\Delta p_{ij}\) allows for the compensation of the calibration and synchronization error. To avoid large memory consumption of 3D volume representation \(\mathcal{G}\), we eliminate the height dimension of the \(\mathcal{G}\) to get a lightweight BEV representation, \(\mathcal{B}\).
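To make the gathering step of the vanilla version in (2) more concrete, the following is a minimal PyTorch-style sketch of projecting voxel-grid centers into each camera view, bilinearly sampling the per-view feature maps, and averaging over the views with a valid projection. It is an illustration only, not the authors' implementation: the pinhole `project_to_image` helper, the tensor shapes, and all variable names are assumptions.

```python
import torch
import torch.nn.functional as F

def project_to_image(points_xyz, extrinsic, intrinsic):
    """Assumed pinhole model: world points (N, 3) -> pixel coords (N, 2) and depth (N,)."""
    pts_h = torch.cat([points_xyz, torch.ones_like(points_xyz[:, :1])], dim=1)   # (N, 4)
    cam = (extrinsic @ pts_h.T).T[:, :3]                   # points in the camera frame
    uvw = (intrinsic @ cam.T).T                            # (N, 3)
    return uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-6), uvw[:, 2]

def gather_voxel_features(feat_maps, voxel_centers, extrinsics, intrinsics, img_hw):
    """feat_maps: (V, C, Hf, Wf) per-view feature maps; voxel_centers: (N, 3) grid centers."""
    V, C = feat_maps.shape[:2]
    H, W = img_hw
    acc = torch.zeros(voxel_centers.shape[0], C)
    cnt = torch.zeros(voxel_centers.shape[0], 1)
    for k in range(V):
        uv, depth = project_to_image(voxel_centers, extrinsics[k], intrinsics[k])
        valid = (depth > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
        # normalize pixel coordinates to [-1, 1] and bilinearly sample the feature map
        grid = torch.stack([uv[:, 0] / (W - 1) * 2 - 1, uv[:, 1] / (H - 1) * 2 - 1], dim=-1)
        sampled = F.grid_sample(feat_maps[k:k + 1], grid.view(1, 1, -1, 2), align_corners=True)
        sampled = sampled.reshape(C, -1).T                 # (N, C)
        acc[valid] += sampled[valid]
        cnt[valid] += 1
    return acc / cnt.clamp(min=1)                          # average over views with a valid projection
```

A deformable variant, in the spirit of (3), would additionally predict per-query pixel offsets and attention weights rather than sampling only at the fixed projected location.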
Same as the definition in [16], in the BEV representation, we first predefine a 2D BEV grids and a group of learnable parameters \(Q=\{q_{j}\}\) as the queries, and each query is located at each grid \(b_{j}\) of the BEV. To avoid information loss in the height dimension, each query on the BEV plane has \(N_{h}\) sampled 3D points. These sampled points act the same as the grid centers in \(\mathcal{G}\). As we sum the features retrieved by \(N_{h}\) sampled points and discard the middle 3D representation, we save a significant amount of memory. The feature retrieval process is formulated as: \[\mathcal{B}(q_{j},b_{j})=\frac{1}{|\mathcal{V}_{p}|}\sum_{k\in\mathcal{V}_{p}} \sum_{h=1}^{N_{h}}\Psi(q_{j},\mathcal{P}(b_{j},h,k,\mathcal{K}),F_{p}^{k}) \tag{4}\] where \(k\) indexes the projected camera view and \(h\) indexes the sampled point in the height dimension. For each BEV query \(q_{j}\), we use the projection function \(\mathcal{P}(b_{j},h,k,\mathcal{K})\) to get the \(h\)-th sampled points on the \(k\)-th camera view. The possible misalignment in \(\mathcal{P}(b_{j},h,k,\mathcal{K})\) can be alleviated by the learned offset \(\Delta p_{ij}\) in the (3). ## V Aggregation and Fusion Aggregation aims at building the place-level feature. At one place, two visits may happen at different times and vary in perspective. The first factor, visits at different times result in appearance change which is mainly dealt with in the section on feature learning above. For the second factor, perspective change, we introduce our aggregation method, which keeps the place-level feature invariant to perspective change without learning. Moreover, the aggregation method is differentiable, enabling the back-propagation to the upstream feature learning, and the perspective change will not influence the learning of appearance features. ### _Rotation Invariance_ Note that perspective change is the rotation in BEV representation, we thus apply the discrete Fourier transform (DFT) \(\mathcal{F}_{t}\) to the polar BEV representation to achieve rotation invariance [47]. Specifically, the rotation invariance is realized by the translation invariant property of the magnitude spectrum on polar BEV, where the translation indicates the rotation in the original BEV. Denote the polarized BEV image as \(\mathcal{B}_{\rho}(\theta,r)\), where \(\theta\) is the rotation of the original BEV \(\mathcal{B}\), \(r\) is the range along the ray \(\theta\), the invariance property can be formulated as: \[|\mathcal{F}_{t}(\mathcal{B}_{\rho}(\theta,r))|=|\mathcal{F}_{t}(\mathcal{B}_ {\rho}(\theta-\alpha,r))| \tag{5}\] where \(\alpha\) is an arbitrary rotation perturbation, \(|\cdot|\) is the magnitude operation. In our method, image features are extracted on the image plane and projected to the BEV plane, forming a BEV feature representation. To improve efficiency, we first use convolution layers to reduce the BEV features' channel. With the single-layer BEV features acquired, we apply the polar transform, the DFT and magnitude, finally arriving at the place feature \(\mathcal{M}\): \[\mathcal{M}=|\mathcal{F}_{t}(\mathcal{B}_{\rho}(\theta,r))| \tag{6}\] According to (5), we know that if two visits to the same place have different perspectives but with no other changes, their place features \(\mathcal{M}\)s are the same. A case study is shown in Fig. 4. For the other changes, say environment change, Fig. 3: Demonstration of the vanilla and deformable attention. 
In the vanilla version, the centers of voxel grids are projected onto fixed image pixels as blue dots. In the deformable version, the sampled points in BEV are first projected onto the image shown as blue dots, and then the learnable offsets (blue arrows) finetune the position of the pixels to the red dots. measurement noise and occlusion, we leave them to the invariant feature learning. When comparing the distance between two place features, we only use the centering \(16\times 16\) region of \(\mathcal{M}\) for efficiency, because we find that the majority of useful data exists in the low frequency of the magnitude of the frequency spectrum. In addition, note that the Euclidean distance-based search can be implemented by the KD-tree, and the efficiency of database-level comparison, i.e. place recognition (1), becomes tractable. ### _Yaw Estimation_ Different from the methods that only generate global features, in our BEV representation, it is also possible to estimate the perturbation \(\alpha\), according to (5). Given the current place observation and the retrieved features, the perturbation is actually the relative yaw angle, i.e. perspective change, between the two visits. Formally, given the two polar BEV visiting the same place with only perspective change, \(\mathcal{B}_{p,\rho}\) and \(\mathcal{B}_{\mathrm{i^{*}},\rho}\), we estimate yaw angle by the cross-correlation \(\otimes\) following [47, 56, 49]: \[\begin{split} S=&\mathcal{B}_{p,\rho}(\theta,r) \otimes\mathcal{B}_{\mathrm{i^{*}},\rho}(\theta,r)\\ =&\mathcal{B}_{p,\rho}(\theta,r)\otimes\mathcal{B} _{p,\rho}(\theta-\alpha,r)\end{split} \tag{7}\] As the rotation of BEV is the translation in polar BEV, the translation of the peak in \(S\) tells the relative yaw angle \(\alpha\). To accelerate the computation of cross-correlation, we apply DFT on both sides of (7): \[\mathcal{F}_{t}(S)=\mathcal{F}_{t}(\mathcal{B}_{p,\rho})\odot\mathcal{F}_{t}( \mathcal{B}_{\mathrm{i^{*}},\rho}) \tag{8}\] where \(\odot\) is the element-wise multiplication. By extracting the phase of \(\mathcal{F}_{t}(S)\), denotes \(\angle\mathcal{F}_{t}(S)\), and applying the inverse DFT, we should have a Dirac function peaking at \(\alpha\). This process is called phase correlation. In practice, the change between two visits cannot be perfect as an assumption, so we apply a \(softmax\) layer to build a distribution of \(\alpha\): \[p(\alpha)=softmax(W\cdot\mathcal{F}_{t}^{-1}(\angle\mathcal{F}_{t}(S))+b) \tag{9}\] where \(W\) and \(b\) are two temperature parameters for \(softmax\). Unlike the \(\max\) operation, another advantage of \(softmax\) is that it is differentiable, allowing for the back-propagation from loss to upstream feature learning. ### _Vision-LiDAR Fusion_ BEV representation is popular in recent LiDAR-based place recognition methods [47, 49, 57] and achieves superior performance. When LiDAR is available, thanks to the BEV feature representation, we can simply concatenate the LiDAR feature as different modalities share the same spatial coordinates. This can be an additional advantage of BEV representation compared with the place feature using vector-level fusion [44, 45]. Specifically, we first transform the input LiDAR points into a polar coordinate and then adopt a sparse convolutional network to acquire cylindrical features. A height compression module is further applied to form a polar BEV that is consistent with the visual BEV. Then, we concatenate the features in each grid of the polar BEV and get the multi-modal BEV. 
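Before moving on to the fusion architecture, here is a rough NumPy illustration of the rotation-invariant aggregation in (5)-(6) and the phase-correlation yaw estimate in (7)-(9). It assumes a pre-computed single-channel polar BEV of shape (n_theta, n_r); the 16x16 low-frequency crop follows the description above, while the function names and exact normalization are illustrative rather than the paper's implementation.

```python
import numpy as np

def place_descriptor(polar_bev):
    """polar_bev: (n_theta, n_r) polar BEV; returns the low-frequency block of the DFT magnitude."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(polar_bev)))    # magnitude is invariant to shifts along theta
    ct, cr = mag.shape[0] // 2, mag.shape[1] // 2
    return mag[ct - 8:ct + 8, cr - 8:cr + 8].ravel()          # keep the centered 16x16 region

def estimate_yaw(polar_query, polar_match):
    """Relative yaw (degrees) between two polar BEVs via phase correlation along the angular axis."""
    cross = np.fft.fft2(polar_query) * np.conj(np.fft.fft2(polar_match))
    phase = cross / (np.abs(cross) + 1e-9)                    # keep only the phase of the spectrum
    corr = np.real(np.fft.ifft2(phase))                       # Dirac-like peak at the relative shift
    shift_theta = np.unravel_index(np.argmax(corr), corr.shape)[0]
    return shift_theta * 360.0 / polar_query.shape[0]

# Descriptors of two visits are then compared with the Euclidean distance, e.g.
# np.linalg.norm(place_descriptor(bev_a) - place_descriptor(bev_b)).
```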
The architecture of the vision-LiDAR fusion pipeline is shown in Fig. 2. With the unified BEV representation, the aggregation part of the vision-LiDAR fusion remains the same. We verify the advantage of fusion in the same coordinates in the experiments. ## VI Network Training We crop the original image and only keep the center part because the images from datasets contain a large portion of the sky (especially in the NCLT dataset) that is useless for the VPR task. The image is then resized to \(224\times 384\) for the NCLT dataset. As for the Oxford dataset, we use the center view of the front stereo camera and three other view images. These images are undistort and resized to \(320\times 640\). Note that, as the image is cropped, the given camera parameters should be modified correspondingly. Panoramic images are generated using sampling methods. We first define spherical grids and then project those grids onto the multi-view images using camera parameters to acquire the corresponding pixel value. Two generated panoramic images are shown in Fig. 5. Due to the inaccurate calibration results, the panoramic images generated from the Oxford Radar dataset suffer from misalignment in the seams between images. The size of the panoramic image used in comparative methods is \(160\times 768\) for the NCLT dataset and \(200\times 1280\) for the Oxford dataset. ### _Loss and Training Settings_ To train the network for discriminative features learning, we follow the common practice [44, 58, 47] to adopt metric learning with triplet margin loss. Multi-view images and optionally a corresponding 3D point cloud form a mini-batch. Each batch consists of several mini-batches that can be divided into an anchor, positive and negative examples. Positive examples are those that are within \(2m\) of the anchor, whereas negative examples are at least \(3m\) apart. We use batch-hard negative mining to eliminate triplets with zero losses in order to improve training efficiency. The loss term is given as: \[\mathcal{L}_{P}(s_{i},s_{i}^{+},s_{i}^{-})=max\{||s_{i}-s_{i}^{+}||_{2}-||s_{i }-s_{i}^{-}||_{2}+m,0\} \tag{10}\] Fig. 4: Case study for rotation invariance. In this case, multi-images were captured with large perspective changes (about 180-degree). The resultant place feature difference is almost zero. where \(s_{i}\), \(s_{i}^{+}\), \(s_{i}^{-}\) are place features of the anchor, positive and negative examples in the \(i\)-th batch. \(m\) is the margin of the triplet loss, which is set to \(0.2\) in our experiments. As for yaw estimation, we use the loss term as KL divergence between the probability distribution \(p(\alpha)\) and the supervision of a one-hot distribution \(\textbf{1}(\alpha^{*})\) peaking at the ground truth \(\alpha^{*}\). \[\mathcal{L}_{yaw}=KLD(p(\alpha),\textbf{1}(\alpha^{*})) \tag{11}\] We train our network in a joint manner. Two losses (10) and (11) are combined into a joint loss \(\mathcal{L}\): \[\mathcal{L}=\mathcal{L}_{P}+\lambda\mathcal{L}_{yaw} \tag{12}\] In our experiment, the \(\lambda\) is set to \(0.001\). In all learning-based experiments, we train the network for 30 epochs, reducing the learning rate by 10 at the end of 20 epochs. In the VPR task, the dimension of the image feature is set to 256, while in the multi-modal experiments, the dimensions of the image and point cloud features are set to 128. ## VII Experiments ### _Dataset and Evaluation_ We apply our method to multi-session datasets with multi-modal information, the NCLT, and the Oxford Radar RobotCar datasets. 
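Before describing the two datasets in detail, the joint objective in (10)-(12) can be sketched as follows. This is PyTorch-style pseudocode under assumed tensor shapes; batch-hard mining and the softmax temperature parameters of (9) are omitted, and all names are illustrative rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def joint_loss(anchor, positive, negative, yaw_logits, yaw_gt_bin, margin=0.2, lam=0.001):
    """anchor/positive/negative: (B, D) place features; yaw_logits: (B, n_theta) correlation scores;
    yaw_gt_bin: (B,) index of the ground-truth yaw bin."""
    # Triplet margin loss of (10)
    d_pos = torch.norm(anchor - positive, dim=1)
    d_neg = torch.norm(anchor - negative, dim=1)
    l_p = torch.clamp(d_pos - d_neg + margin, min=0.0).mean()
    # KL divergence of (11) against a one-hot distribution peaking at the true yaw bin
    log_p = F.log_softmax(yaw_logits, dim=1)
    target = F.one_hot(yaw_gt_bin, num_classes=yaw_logits.size(1)).float()
    l_yaw = F.kl_div(log_p, target, reduction='batchmean')
    # Joint objective of (12)
    return l_p + lam * l_yaw
```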
Both datasets provide multi-view images to offer 360-degree VPR. Furthermore, we utilize different sequences for multi-session place recognition evaluation. The characteristics of these sequences are introduced in the subsections. #### Vii-A1 NCLT Dataset [59] A large-scale and long-term dataset collected on the University of Michigan's North Campus by a Segway robot. It includes 27 separate sessions that were recorded biweekly between January 8, 2012, and April 5, 2013. The dataset, which spans 15 months and includes a wide range of environmental changes, includes dynamic items such as moving individuals, seasonal changes such as winter and summer, and structural changes such as building construction. We choose "2012-02-04" and "2012-03-17" sessions for training and testing, with "2012-02-04" serving as the database session and "2012-03-17" serving as the query session. #### Vii-A2 Oxford Radar RobotCar Dataset [14] An addition to the _Oxford RobotCar Dataset [60]_ for the study of multi-modal tasks. In January 2019, 32 traversals of a central Oxford route were recorded. This dataset includes a wide range of weather and lighting situations. To increase 3D scene understanding performance, a pair of Velodyne HDL-32E 3D LiDARs are mounted on the vehicle's left and right sides. The 360-degree vision is provided by a front stereo camera and three fisheye cameras. We concatenate the point clouds recorded by these two LiDARs into a single scan for simplicity in place recognition evaluation. We use "2019-01-11-13-24-51" as a database session and "2019-01-15-13-06-37" as a trajectory for training and testing, where "2019-01-11-13-24-51" serves as a database session and "2019-01-15-13-06-37" serves as a query session. Note that the given camera parameters and the synchronization across sensors in the dataset are not perfect. To avoid repeating the process in the same place when the vehicle does not move, we follow the strategy in [61] which ignores consecutive scans with less than \(20cm\) intervals. The characteristics of the two datasets are summarized in Tab. I. #### Vii-A3 Evaluation Metrics In the experiment section, we evaluate the performance of the global features and yaw estimation. We follow the similar evaluation protocol as in [1, 62]. As mentioned in the description of each dataset, the evaluation dataset is made up of a query and a database set that cover the same trajectory but are from different sessions. We use \(Recall@N\) to evaluate the performance. It measures the percentage of successfully localized queries using the top \(N\) candidates retrieved from the database. Localization is successful if one of the top \(N\) retrieved candidates is within \(d\) meters of the ground truth. In most of our experiments, \(d\) is set to 2m. ### _Comparative Methods_ * **VPR methods:** We compare a series of learning-free VPR methods that use handcrafted local features (ORB [20], SIFT [18], DenseSIFT [63]) with ASMK [23] and learning-based VPR methods NetVLAD [1], DOLG [31] and HowNet with ASMK [32]. There are two inputs of all comparative VPR methods, panoramic and front-camera image. * **Vision-LiDAR fusion methods:** We further evaluate our methods on the vision-LiDAR fusion scenarios to confirm the capability to fuse various sensor data. We also assess the performance of some recent vision-LiDAR fusion methods, such as Minkloc++ [44] and AdaFusion [45]. For fast convergence, we utilize pre-trained feature extraction modules in all methods. 
In the NetVLAD experiments, we also finetune the model pre-trained on the Pittsburgh dataset [64]. In addition, we evaluate the vanilla version of our method, which adopts 3D volume as the middle representation and simply retrieves the grids' features by directly projecting the grids' locations onto the image plane, the detailed description can be found in Section IV-B. Fig. 5: Generated panoramic images from the Oxford Radar dataset (top) and the NCLT dataset (bottom). ### _Verification of BEV Representation_ We hypothesize that BEV representation can play an important role in feature extraction, feature aggregation and vision-LiDAR fusion. Thus we first design the study to show the efficacy of the BEV representation in these three aspects. We build a panorama pipeline with standard CNN-based feature extraction to suppress the influence of network architecture. We use panoramic and range images as input for the sensor fusion experiment and use the same feature extraction and aggregation components. The outputs of two branches are then concatenated to create a global place feature, as in [44]. Specifically, the feature extraction module is ResNet-50, and the aggregation method is the image level GeM pooling [33]. The implemented pipeline is demonstrated in Fig. 6. Therefore, the image-level representation and the BEV representation can be compared. We report the performance of our method with deformable attention and the panorama pipeline, in terms of place recognition. As shown in Tab II, the BEV representation with \(Recall@1\) of 63.6% outperforms the panorama representation with \(Recall@1\) of 58.7%. This enhancement verifies that the spatial awareness of BEV representation can benefit the feature extraction and aggregation in 360-degree VPR. We can also find the same result in vision-LiDAR fusion experiments. As for the comparison between the vision-only and vision-LiDAR fusion, it is natural that all pipelines are improved significantly. When features are represented in BEV, both GeM pooling and DFT provide invariance, but the information loss in GeM pooling is serious. DFT, on the other hand, preserves the low-frequency information which corresponds to the primary portion of the image, thus achieving a good performance in both vision-only and fusion pipelines. ### _Comparison with Baseline Methods_ We evaluate all methods using identical evaluation protocols. The evaluation results on the NCLT and the Oxford Radar RobotCar dataset are shown in Tab. III. The top two parts of the table are results from learning-free methods and the left are learning-based methods. In the Input Mode column. _Front_ represents Front-camera image and _Pano_ represents the Panoramic image. A quick finding is that 360-degree sensing brings improvement to the front camera image only. When we compare the performances between datasets, most methods perform better on the Oxford dataset. The Oxford dataset was collected in a city scenario with no significant environmental changes and rotation variance within a week. The rich texture benefits the feature extraction module, resulting in better overall place recognition performance. As the results demonstrate, in such simple scenarios, handcrafted methods based on the human experience are competitive with learning-based methods. Among the learning-based methods, ours performs better in \(Recall@1\) and about the same in \(Recall@5\). 
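For reference, the Recall@N protocol quoted above can be computed with a simple nearest-neighbour search over the place features; the sketch below uses a KD-tree, as mentioned earlier, with assumed array shapes and a 2 m revisit threshold by default. It is illustrative code, not the evaluation script behind the reported tables.

```python
import numpy as np
from scipy.spatial import cKDTree

def recall_at_n(db_desc, db_xy, q_desc, q_xy, n=1, d=2.0):
    """db_desc: (M, D) database place features, db_xy: (M, 2) positions; q_*: query counterparts."""
    tree = cKDTree(db_desc)                    # KD-tree over place features for fast retrieval
    hits = 0
    for desc, xy in zip(q_desc, q_xy):
        _, idx = tree.query(desc, k=n)         # indices of the top-n closest descriptors
        idx = np.atleast_1d(idx)
        dists = np.linalg.norm(db_xy[idx] - xy, axis=1)
        hits += np.any(dists <= d)             # success if any candidate is within d meters
    return hits / len(q_desc)
```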
The NCLT dataset, on the other hand, is more challenging, as there exist significant environmental changes and contains many hard scenarios with sparse texture. As a consequence, all competing methods exhibit severely degraded performance, while ours demonstrates almost similar performance to that in the Oxford dataset. We explain this as the inductive bias brought by spatial awareness, which enforces feature learning by suppressing the impact of perspective change. Some retrieval examples are shown in Fig. 7. The chosen scenarios include severe illumination changes, perspective changes as well as environmental changes involving seasonal change and dynamic objects. The third row, in particular, demonstrates that our method is not sensitive to dynamic objects and the fourth case shows the strong rotation invariance of our method. However, in the last scenario, there are plants everywhere, making all methods difficult. For other cases, thanks to spatial awareness, the retrievals of our method are closer to the query place than those of comparative methods. In addition, the improvement between the vanilla version and the deformable version verifies the compensation for hardware errors. Deformable attention provides a solution to the problem of inaccurate synchronization and calibration. As shown in Tab. III, in the NCLT dataset, the deformable attention helps find better corresponding features, while in the Oxford dataset, the deformable attention is used to alleviate the negative effects brought by the inaccurate calibration. **Revisit criterion** The revisit criterion determines whether the query is successfully recognized. To verify the robustness against the translation of place recognition, we use different revisit criteria ranging from \(2m\) to \(20m\). We notice that our methods outperform others across all thresholds. Our method with deformable attention has a smoother trend than others, which infers that the incorrectly retrieved places are far away from the query place. Based on this finding, we consider that our methods are insensitive to the selection of places in the map database. **Vision-LiDAR fusion** The BEV representation is often used in point cloud learning. As stated in the same coordinates, we further incorporate LiDAR BEV features into our visual BEV Fig. 6: Demonstration of the panorama pipeline. representation and thus achieve sensor fusion. We further compare our multimodal methods with previous works. Minkloc++ [44] and AdaFusion [45] were originally evaluated in the Oxford RobotCar dataset [60] which similar places are defined within \(25m\). We retrain and evaluate these methods with our experiment settings, where the revisit criterion is \(2m\). The two modalities in Minkloc++ and AdaFusion are concatenated at the place feature level using a late fusion strategy, whereas our methods combine the modalities at the feature level using the same coordinate, which can be regarded as middle fusion. To make a fair comparison, we decrease the dimension of each modality feature to 128, forming a final multi-modal feature with the same dimension (256) of RGB-only methods. As shown in the Vision-LiDAR Fusion part of Tab. III, with the LiDAR features involved, the overall performance of our methods improves a lot. Our explanation is that we properly combine the two modalities with a unified coordinate in the feature level, which also allows for the advantage of middle fusion [17]. 
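The difference between the middle fusion used here and a descriptor-level late fusion can be stated in a few lines; the snippet below is only a schematic contrast with assumed shapes, not the fusion module itself.

```python
import torch

def middle_fusion(vis_bev: torch.Tensor, lidar_bev: torch.Tensor) -> torch.Tensor:
    """Both modalities live on the same polar BEV grid (B, C, n_theta, n_r), so per-cell
    concatenation keeps their spatial correspondence; the polar-DFT aggregation is then
    applied once to the fused map, exactly as in the vision-only case."""
    assert vis_bev.shape[-2:] == lidar_bev.shape[-2:], "BEV grids must be aligned"
    return torch.cat([vis_bev, lidar_bev], dim=1)

# A late-fusion scheme would instead aggregate each modality into a global vector first and
# concatenate the two vectors, discarding the cell-level correspondence between the modalities.
```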
The inconsistent coordinates at the feature level and simple late fusion, however, negatively affect other methods. In addition, referring to Tab. III, it is a little surprising that with BEV-based extraction and aggregation, our vision-only method also achieves competitive performance with the fusion method. ### _Yaw Estimation Evaluation_ As a side product, with the help of the correlation estimator, our method can provide yaw estimation between two BEVs. Fig. 7: Examples of top-1 retrieval of different methods. The panoramic images are used for better visualization. We demonstrate some retrieval cases with significant perspective and environmental changes. Note that the revisit criterion is \(2m\), so some of the false retrievals may also be similar to the query. We assess the yaw estimation results following [50], we demonstrate the yaw estimation errors using a quartile, which reflects the errors at the percentages of 25%, 50% and 75% of the whole results. We notice that the BEV representation not only captures the spatial relationship between features but also provides coarse estimation. With the LiDAR features utilized, the performance is further improved. Such a result is promising for compensating the yaw difference before the full degree of freedom pose estimation. ### _Backbone Ablation_ In this section, we investigate the effects of the different backbones on our method's performance. In all experiments, unless noted, we keep the other variables the same. We evaluate the performance of different image backbones. Fig. 9 shows the performance of commonly used image backbones. We find that the larger backbone does not equal better performance in RGB-only experiments. But in the fusion experiments, the larger backbone provides the better discriminative ability. We note that sometimes, the benefit of a specific backbone is tied to the input resolution which is fixed in this study. ### _Runtime Evaluation_ We assess the time cost of various modules in our approach on a GeForce RTX 2080Ti GPU with an Intel Xeon Platinum 8163 CPU. The detailed runtime analysis is listed in Tab. V. The feature extraction and aggregation modules of both methods are the same. The generation time of representation is significantly reduced in the deformable pipeline due to the lightweight BEV representation and efficient spatial deformable attention. In the localization step, the place feature can be easily retrieved by using the KD-tree structure which only takes logarithm times. Therefore, our method can be incorporated into online systems. ## VIII Conclusion In conclusion, this paper proposes the use of BEV representation for effective visual place recognition and investigates its benefits in feature extraction, feature aggregation, and vision-LiDAR feature fusion. Our proposed network architecture utilizes BEV representation to extract spatial features from images and point clouds, and achieves rotation-invariant feature aggregation through the Discrete Fourier transform. By stating image and point cloud cues in the same coordinates, our method also benefits sensor fusion for place recognition. The experimental results demonstrate that our BEV-based method outperforms baseline methods in off-the-road and on-the-road scenarios, verifying the effectiveness of BEV representation in VPR task. Therefore, our contributions in this paper demonstrate the potential of BEV representation in VPR task which can be easily integrated into the current autonomous driving framework.
2301.10632
EFX Exists for Four Agents with Three Types of Valuations
In this paper, we address the problem of determining an envy-free allocation of indivisible goods among multiple agents. EFX, which stands for envy-free up to any good, is a well-studied problem that has been shown to exist for specific scenarios, such as when there are only three agents with MMS valuations, as demonstrated by Chaudhury et al. (2020), and for any number of agents when there are only two types of valuations, as shown by Mahara (2020). Our contribution is to extend these results by showing that EFX exists for four agents with three distinct valuations. We further generalize this to show the existence of EFX allocations for n agents when n-2 of them have identical valuations.
Pratik Ghosal, Vishwa Prakash H. V., Prajakta Nimbhorkar, Nithin Varma
2023-01-25T15:15:59Z
http://arxiv.org/abs/2301.10632v1
# EFX Exists for Four Agents with Three Types of Valuations ###### Abstract In this paper, we address the problem of determining an envy-free allocation of indivisible goods among multiple agents. EFX, which stands for envy-free up to any good, is a well-studied problem that has been shown to exist for specific scenarios, such as when there are only three agents with MMS valuations, as demonstrated by Chaudhury et al. (2020), and for any number of agents when there are only two types of valuations as shown by Mahara (2020). Our contribution is to extend these results by showing that EFX exists for four agents with three distinct valuations. We further generalize this to show the existance of EFX allocations for \(n\) agents when \(n-2\) of them have identical valuations. ## 1 Introduction Fair division of indivisible goods is a fundamental problem in the field of multiagent systems. The problem is to allocate a set \(\mathcal{G}=\{g_{1},\ldots g_{m}\}\) of \(m\) goods to a group \(\mathcal{A}=\{a_{1},\ldots a_{n}\}\) of \(n\) agents such that each agent thinks of the allocation as being _fair_. One of the most well-studied fairness notions is that of _envy-freeness_. To quantify this notion, we model each agent \(a_{i}\), \(i\in[n]\), as having a valuation function \(v_{i}:2^{\theta}\rightarrow\mathbb{R}_{\geq 0}\) on bundles of goods. An allocation \((X_{1},X_{2},\ldots X_{n})\)1 is said to be _envy-free_ (EF) if all agents value their own bundle at least as much as that of any other agent, i.e., \(v_{i}(X_{i})\geq v_{i}(X_{j})\) for all \(i,j\in[n]\). It is well-known that EF allocations may not exist in general and various relaxations of such allocations have been proposed. Budish (2011) proposed the concept of _envy-freeness up to one good_ (EF1), where the goal is to find an allocation such that, for each agent \(a_{i}\), there exists some good \(g\) in each bundle \(X_{j}\) such that \(a_{i}\) values \(X_{i}\) at least as much as \(X_{j}\setminus\{g\}\). It is known that EF1 allocations always exist and can be found in polynomial time Lipton et al. (2004). In between the notions of EF and EF1 allocations, lie envy-freeness up to any good (EFX), which was introduced by Caragiannis et al. (2019). Given an allocation, an agent \(a_{i}\)_strongly envies_ another agent \(a_{j}\) if there exists \(g\in X_{j}\) such that \(a_{i}\) values \(X_{j}\setminus\{g\}\) over their own bundle \(X_{i}\). An allocation is EFX if no agent strongly envies another agent. In other words, each agent \(a_{i}\) values \(X_{i}\) at least as much as \(X_{j}\setminus\{g\}\) for any good \(g\) present in any \(X_{j}\). Footnote 1: Our convention is to allocate the bundle \(X_{i}\) to agent \(a_{i}\) for all \(i\in[n]\). Additionally, we only consider allocations that are complete, i.e., where \(\bigcup_{i\in[n]}X_{i}=\mathcal{G}\). Contrary to both EF and EF1, the question of whether EFX allocations always exist or not is far from settled and is one of the important questions in contemporary research on fair allocation. Plaut and Roughgarden (2020) show that when all agents have the same valuation function on bundles, then EFX always exists. They also showed that when there are only two agents, EFX always exists. Mahara (2020, 2021) improved upon this result and showed the existence of EFX for multiple agents when there are only two valuation functions. In a recent breakthrough, Chaudhury et al. 
(2020) showed that EFX always exists for 3 agents when the valuation functions of agents are additive.2 In addition to improving the state of the art, their contributions also include several new technical ideas to reason about EFX allocations. Footnote 2: A valuation \(v:2^{\theta}\rightarrow\mathbb{R}_{\geq 0}\) is additive if, for each bundle \(S\subseteq\mathcal{G}\) of goods, \(v(S)=\sum_{g\in S}v(\{g\})\). The result of Akrami et al. (2022) holds for slightly more general valuation functions, which they call MMS-feasible valuations (see Definition 2.3). In this work, we show the following improvement over the state of the art. **Theorem 1.1**.: _Consider a set of \(n\) agents with additive valuations where at least \(n-2\) agents have identical valuations. Then, for any set of goods, an EFX allocation always exists. Moreover, this holds even when all the agents have more general MMS-feasible valuations._ When \(n=4\), the above theorem implies the following corollary **Corollary 1.2**.: _Consider a set of \(4\) agents with at most \(3\) distinct additive valuations Then, for any set of goods, an EFX allocation always exists. Moreover, this holds even when all the agents have more general MMS-feasible valuations._ Theorem 1.1 is the first result for the existence of EFX for an arbitrary number of agents with more than two distinct valuations and is, in this sense, an improvement over the work of Mahara (2020, 2021). ### Overview of Our Techniques Several of the ideas that we use in our proofs of Theorem 1.1 are attributed to the work of Akrami et al. who give a simplified proof for the existence of EFX allocations for three agents. Our proof of Theorem 1.1 begins by considering, what we refer to as an almost feasible EFX allocation. An almost EFX feasible allocation ensures that the first \(n-1\) bundles are EFX feasible for the first \(n-2\) agents with identical valuations and that the last bundle is EFX feasible for one of the remaining two agents. Our procedure modifies such an allocation carefully to get to an EFX allocation, in which case we are done, or to another almost EFX feasible allocation. The termination of our procedure is ensured by the fact that the resulting almost EFX feasible allocation is strictly better than the previous one in a concrete sense. The novel challenge arising in our case is the fact that maintaining the above mentioned invariant and arguing about the increase in potential is more involved due to a higher number of dependencies caused by a larger number of agents. ### Related Work The notion of envy-free allocations was introduced by Gamow and Stern and Foley. For indivisible goods, Lipton et al. and Budish consider a relaxed notion of envy-freeness known as _envy-freeness up to one good (EF1)_. The notion of envy-freeness up to any good (EFX) was introduced by Caragiannis et al.. The existence of EFX allocations has been shown in various restricted settings like \(2\) agents with arbitrary valuations and any number of agents with identical valuations Plaut and Roughgarden (2020), for additive valuations with \(3\) agents Chaudhury et al. (2020), at most two valuations for an arbitrary number of agents Mahara (2020, 2021), for the case when each value of each agent can take one of the two possible values Amanatidis et al. (2021), etc. EFX allocations for the case when some goods can be left unallocated have been considered in several papers Brams et al. (2022); Cole et al. (2013); Caragiannis et al. (2019) etc. Caragiannis et al. 
(2019) show that discarding some items can achieve at least half of the maximum Nash Welfare whereas Chaudhury et al. show that an EFX allocation always exists for \(n\) agents with arbitrary valuations with at most \(n-1\) unallocated items, Berger et al. improve this to EFX for \(4\) agents with at most one unallocated item. ## 2 Preliminaries Let \(\mathcal{A}=\{a_{1},a_{2},\cdots,a_{n}\}\) be a set of \(n\) agents and let \(\mathcal{G}=\{g_{1},g_{2},\cdots,g_{m}\}\) be a set of \(m\) indivisible goods. An instance of discrete fair division is specified by the tuple \(\langle\mathcal{A},\mathcal{G},\mathcal{V}\rangle\), where \(\mathcal{V}=\{v_{1}(\cdot),v_{2}(\cdot),\cdots,v_{n}(\cdot)\}\) is such that for \(i\in[n]\), the function \(v_{i}:2^{\mathcal{G}}\to\mathbb{R}_{\geq 0}\) denotes the valuation of agent \(a_{i}\) on subsets of goods. Let \(a\in\mathcal{A},g\in G,S,T\subseteq\mathcal{G},v:2^{\mathcal{G}}\to \mathbb{R}_{\geq 0}\). To simplify notation, we write \(v(g)\) to denote \(v(\{g\})\) and use \(S\setminus g\), \(S\cup g\) to denote \(S\setminus\{g\}\), \(S\cup\{g\}\), respectively. We also write \(S>_{a}T\) to denote \(v_{a}(S)>v_{a}(T)\) and similarly for \(<_{a},\geq{}_{a},\leq{}_{a}\) and \(=_{a}\). We use \(\min_{a}(S,T)\) and \(\max_{a}(S,T)\) to denote \(\arg\min_{Y\in\{S,T\}}v_{a}(Y)\) and \(\arg\max_{Y\in\{S,T\}}v_{a}(Y)\). We often use the term _bundle_ to denote a subset of goods. An _allocation_ is a tuple \(X=\langle X_{1},X_{2},\ldots,X_{n}\rangle\) of \(n\) bundles such that bundle \(X_{i}\) is assigned to agent \(a_{i}\) for all \(i\in[n]\) and \(\bigcup_{i\in[n]}X_{i}=\mathcal{G}\). Given an allocation \(X=\langle X_{1},X_{2},\cdots,X_{n}\rangle\), we say that agent \(a_{i}\)_envies_ another agent \(a_{j}\) if \(v_{i}(X_{j})>v_{i}(X_{i})\). As a shorthand, we sometimes simply say that agent \(a_{i}\)_envies the bundle_\(X_{j}\). **Definition 2.1** (Strong Envy).: _Given an allocation \(X=\langle X_{1},X_{2},\cdots,X_{n}\rangle\), an agent \(a_{i}\) strongly envies an agent \(a_{j}\) if \(v_{i}(X_{i})<v_{i}(X_{j}\setminus g)\) for some \(g\in X_{j}\)._ An allocation is EFX if there is no strong envy between any pair of agents. **Definition 2.2** (Efx-Feasibility).: _A bundle \(S\subseteq\mathcal{G}\) is said to be EFX-feasible w.r.t. a disjoint bundle \(T\) according to valuation \(v\), if for all \(h\in T\), \(v(T\setminus h)<v(S)\). Given an allocation \(X=\langle X_{1},X_{2},\cdots,X_{n}\rangle\), bundle \(X_{i}\) is EFX-feasible_ for an agent \(a_{j}\)_if \(X_{i}\) is EFX-feasible w.r.t. all other bundles in \(X\) according to valuation \(v_{j}\)._ An allocation \(X=\langle X_{1},X_{2},\cdots,X_{n}\rangle\) is said to be EFX if for all \(i\in[n]\), the bundle \(X_{i}\) is EFX-feasible for agent \(a_{i}\). Minimally Envied Subset.If agent \(a_{i}\) with bundle \(X_{i}\) envies an agent \(a_{j}\) with bundle \(X_{j}\), we call a subset \(S\subseteq X_{j}\) a _minimally envied subset_ of \(X_{j}\) for agent \(a_{i}\) if both the following conditions hold. 1. \(v_{i}(X_{i})<v_{i}(S)\) 2. \(v_{i}(X_{i})\geq v_{i}(S\setminus h)\ \ \forall h\in S\) Non-Degenerate Instances Chaudhury et al. (2020); Akrami et al. (2022)An instance \(\mathcal{I}=\langle\mathcal{A},\mathcal{G},\mathcal{V}\rangle\) is said to be _non-degenerate_ if and only if no agent values two different bundles equally. That is, \(\forall a_{i}\in\mathcal{A}\) we have \(v_{i}(S)\neq v_{i}(T)\) for all \(S\neq T\), where \(S,T\subseteq\mathcal{G}\). Akrami et al. 
(2022) showed that it suffices to deal with non-degenerate instances when there are \(n\) agents with general valuation functions, i.e., if each non-degenerate instance has an EFX allocation, each general instance has an EFX allocation. In the rest of the paper, we only consider non-degenerate instances. This implies that all goods are positively valued by all agents as value of the empty bundle is assumed to be zero. Properties of Valuation FunctionsA valuation \(v\) is said to be _monotone_ if \(S\subseteq T\) implies \(v(S)\leq v_{i}(T)\) for all \(S,T\subseteq\mathcal{G}\). Monotonicity is a natural restriction on valuation functions and occurs frequently in real-world instances of fair division. A valuation \(v\) is _additive_ if \(v(S)=\sum_{g\in S}v(\{g\})\) for all \(S\subseteq\mathcal{G}\). Additive valuation functions are, by definition, also monotone. Akrami et al. (2022) introduced a new class of valuation functions called MMS-feasible valuations which are natural extensions of additive valuations. **Definition 2.3**.: _A valuation \(v:2^{\mathcal{G}}\to\mathbb{R}_{\geq 0}\) is MMS-feasible if for every subset of goods \(S\subseteq\mathcal{G}\) and every partitions \(A=(A_{1},A_{2})\) and \(B=(B_{1},B_{2})\) of \(S\), we have_ \[\max(v(B_{1}),v(B_{2}))>\min(v(A_{1}),v(A_{2}))\] Plaut and Roughgarden AlgorithmIn 2020, Plaut and Roughgarden (2020) gave an algorithm to compute an EFX-allocation when all agents have the same valuation \(v(\cdot)\), where the only assumption on \(v(\cdot)\) is that it is monotone. Throughout this paper, we refer to this algorithm as the PR algorithm. Let \(M\subseteq\mathcal{G}\) be a subset of goods and let \(a\) be an agent with valuation \(v\). Let \(X=\{X_{1},X_{2},\cdots,X_{k}\}\) be a \(k\)-partition of \(M\). In its most general form, the PR algorithm takes \((X,v,k)\) as input and outputs a (possibly different) \(k\)-partition \(Y=\{Y_{1},Y_{2},\cdots,Y_{k}\}\). We crucially use the following properties Plaut and Roughgarden (2020) of the output of the PR algorithm. 1. If \(Y_{i}\) is allocated to agent \(a\) then agent \(a\) does not strongly envy any other bundle in \(Y\). 2. The value of the least valued bundle does not decrease, i.e., \[\min(v(Y_{1}),v(Y_{2}),\cdots,v(Y_{k}))\geq\min(v(X_{1}),v(X_{2}),\cdots,v(X_{ k})).\] EFX for four agents with three valuations In this section, we show that EFX allocation always exists for \(n\) agents when \(n-2\) of the agents have identical valuations thus prove Theorem1.1. Consider a set of \(n\) agents \(\mathcal{A}=\{a_{1},a_{2},\cdots,a_{n-2}b_{1},c_{1}\}\), a set of \(m\) goods \(\mathcal{G}=\{g_{1},g_{2},\cdots,g_{m}\}\) and a set of three valuation functions \(\mathcal{V}=\{v_{a},v_{b},v_{c}\}\) such that agents \(a_{1},a_{2},\cdots,a_{n-2}\) have valuation \(v_{a}\) and agents \(b_{1}\) and \(c_{1}\) have valuations \(v_{b}\) and \(v_{c}\) respectively. The valuations \(v_{a}\) and \(v_{b}\) are assumed to be monotone and \(v_{c}\) is assumed to be MMS-feasible. **Definition 3.1**.: _We say that an allocation \(X=\langle X_{1},X_{2},\cdots,X_{n}\rangle\) is almost EFX-feasible if it satisfies the following conditions:_ 1. _The first_ \(n-1\) _bundles_ \(X_{1},X_{2},\cdots,X_{n-1}\) _are EFX-feasible for agents_ \(a_{1},a_{2},\cdots,a_{n-2}\)_._ 2. 
\(X_{n}\) _is EFX-feasible for either agent_ \(b_{1}\) _or agent_ \(c_{1}\)_._ We define a potential function \(\phi\) which assigns a real value for each allocation \(X=\langle X_{1},X_{2},\cdots,X_{n}\rangle\) as follows: \[\phi(X)=\min\{v_{a}(X_{1}),v_{a}(X_{2}),\cdots,v_{a}(X_{n-1})\}.\] To prove Theorem 1.1, we first show that almost EFX-feasible allocations always exist. Then we show that, if an allocation \(X\) is almost EFX-feasible, then either \(X\) is an EFX allocation or there exists another almost EFX-feasible allocation \(X^{\prime}\) with a strictly higher potential value, i.e., \(\phi(X^{\prime})>\phi(X)\). Since \(\phi(X)\) cannot grow arbitrarily as \(\phi(X)<v_{a}(\mathcal{G})\), there must exist an almost EFX-feasible allocation which is also an EFX allocation. Proof of Theorem 1.1: For any given instance with \(n\) agents such that \(n-2\) agents have identical valuations, an almost EFX-feasible allocation always exists. This can be obtained by running the PR algorithm on \(\mathcal{G}\) with the valuation \(v_{a}\) for all \(n\) agents. Let us call this initial allocation \(X=\langle X_{1},X_{2},\cdots,X_{n}\rangle\). From property 1 of the PR algorithm, all the bundles are EFX-feasible for agents \(a_{1},a_{2},\cdots,a_{n-2}\). Let agent \(c_{1}\) pick the most valued bundle from \(X\) according to their valuation \(v_{c}\). Without loss of generality, we can assume that the bundle picked by agent \(c_{1}\) is \(X_{n}\). It is clear that \(X_{n}\) is EFX-feasible for \(c_{1}\). Hence \(X\) is almost EFX-feasible. If either one among the agents \(b_{1}\) or \(c_{1}\) has at least one EFX-feasible bundle other than \(X_{n}\), say \(X_{k}\), then we are done. We allocate \(\langle X_{n},X_{k}\rangle\) to agents \(c_{1}\) and \(b_{1}\) respectively, and the remaining bundles to agents \(a_{1},a_{2},\cdots,a_{n-2}\) arbitrarily. The resulting allocation is EFX. In the remainder of the proof, we consider the case that \(X_{n}\) is the only EFX-feasible bundle for both \(b_{1}\) and \(c_{1}\). Let \(g_{b}\) and \(g_{c}\) be the least valuable good(s) in \(X_{n}\) according to agents \(b_{1}\) and \(c_{1}\), respectively. Since \(X_{n}\) is the most valued bundle and also the _only_ EFX-feasible bundle in \(X\) for agent \(b_{1}\) (or \(c_{1}\)), even if we give the maximum valued bundle from \(\{X_{1},X_{2},\cdots,X_{n-1}\}\) according to \(v_{b}\) (\(v_{c}\), respectively) to agent \(b_{1}\) (\(c_{1}\), respectively), they would strongly envy the bundle \(X_{n}\). That is \[X_{n}\setminus g_{b}>_{b}\max_{b}(X_{1},X_{2},\cdots,X_{n-1}) \tag{1}\] \[X_{n}\setminus g_{c}>_{c}\max_{c}(X_{1},X_{2},\cdots,X_{n-1}) \tag{2}\] Without loss of generality, assume \[X_{1}<_{a}X_{2}<_{a}\cdots<_{a}X_{n-1} \tag{3}\] Now, we consider the cases which arise when we move the least valued good from \(X_{n}\) (according to \(b_{1}\) or \(c_{1}\)) and add it to the bundle \(X_{1}\). **Case 1:**: The bundle \(X_{n}\setminus g_{b}\) remains to be the most favorite bundle for agent \(b_{1}\) _or_ the bundle \(X_{n}\setminus g_{c}\) remains to be the most favorite bundle for agent \(c_{1}\). That is, \[X_{n}\setminus g_{b}>_{b}X_{1}\cup g_{b},\text{ or }\] \[X_{n}\setminus g_{c}>_{c}X_{1}\cup g_{c}\] Here we assume that \(X_{n}\setminus g_{b}>_{b}X_{1}\cup g_{b}\). The procedure is analogous if we consider \(X_{n}\setminus g_{c}>_{c}X_{1}\cup g_{c}\), as we are only using the monotonicity of the valuation functions for Case 1.
The new allocation is \(X^{\prime}=\langle X_{1}\cup g_{b},X_{2},\cdots,X_{n}\setminus g_{b}\rangle\). Combining \(X_{n}\setminus g_{b}>_{b}X_{1}\cup g_{b}\) with (1), we get that the bundle \(X_{n}\setminus g_{b}\) is the most valuable according to \(v_{b}\) and hence EFX-feasible for agent \(b_{1}\) in the new allocation. **Case 1.1:**: \(X_{1}\cup g_{b}<_{a}X_{2}\). Combining \(X_{1}\cup g_{b}>_{a}X_{1}\) and (3), we can see that \[\phi(X^{\prime})=v_{a}(X_{1}\cup g_{b})>v_{a}(X_{1})=\phi(X).\] Thus there is an increase in the potential. For agents \(a_{1},a_{2},\cdots,a_{n-2}\), the bundle \(X_{1}\cup g_{b}\) remains EFX-feasible as no other bundle has increased in value. Furthermore, For agents \(a_{1},a_{2},\cdots,a_{n-2}\), the bundles \(X_{2},X_{3},\cdots,X_{n-1}\) are EFX-feasible when compared to \(X_{1}\cup g_{b}\) as they are more valuable than \(X_{1}\cup g_{b}\) according to \(v_{a}\). They are also EFX-feasible when compared to \(X_{n}\setminus g_{b}\) because they were EFX-feasible against a higher valued bundle \(X_{n}\). Thus, bundles \(X_{1}\cup g_{b},X_{2},\cdots,X_{n-1}\) are EFX-feasible for agents \(a_{1},a_{2},\cdots,a_{n-2}\). Therefore, the new allocation is almost EFX-feasible and has an increased potential. **Case 1.23**: \(X_{1}\cup g_{b}>_{a}X_{2}\). Let \((X_{1}\cup g_{b})\setminus Z\) be a _minimally envied subset_ with respect to \(X_{2}\) under valuation \(v_{a}\). That is, Footnote 3: Note that we do not have to consider the case that \(X_{1}\cup g_{b}=_{a}X_{2}\) since the instance is assumed to be non-degenerate. \[(X_{1}\cup g_{b})\setminus Z>_{a}X_{2}\,and \tag{4}\] \[((X_{1}\cup g_{b})\setminus Z)\setminus h<_{a}X_{2}\quad\forall h \in(X_{1}\cup g_{b})\setminus Z\] Now, let the new allocation be \[X^{\prime} =\langle X^{\prime}_{1},X^{\prime}_{2},\cdots,X^{\prime}_{n}\rangle\] \[=\langle(X_{1}\cup g_{b})\setminus Z,\ X_{2},\cdots,(X_{n} \setminus\{g_{b}\})\cup Z\rangle\] Since \((X_{1}\cup g_{b})\setminus Z>_{a}X_{2}\), it holds that \(\phi(X^{\prime})=v_{a}(X_{2})>v_{a}(X_{1})=\phi(X)\). Thus the potential has strictly increased. From (1), we have \(X_{n}\setminus g_{b}>_{b}\max_{b}(X_{1},X_{2},\cdots,X_{n-1})\). From the Case 1 assumption, we also have \(X_{n}\setminus g_{b}>_{b}X_{1}\cup g_{b}\). Therefore, \[X^{\prime}_{n}=(X_{n}\setminus g_{b})\cup Z>_{b}\max_{b}(X^{\prime}_{1},X^{ \prime}_{2},X^{\prime}_{n-1})\] Thus \(X^{\prime}_{n}\) is EFX-feasible for agent \(b_{1}\). Next, we show that the bundles \(X^{\prime}_{1},X^{\prime}_{2},\cdots,X^{\prime}_{n-1}\) are EFX-feasible _among themselves_ (i.e, not compared with \(X^{\prime}_{n}\)) to agents \(a_{1},a_{2},\cdots,a_{n-2}\). The bundle \(X_{1}\) was EFX-feasible _w.r.t._\(X_{2},\cdots,X_{n-1}\) in \(X\). Therefore, \(X^{\prime}_{1}>_{a}X_{1}\) is also EFX-feasible _w.r.t._\(X^{\prime}_{2},\cdots,X^{\prime}_{n-1}\). Bundles \(X^{\prime}_{2},\cdots,X^{\prime}_{n-1}\) are EFX-feasible _w.r.t._ each other as they remain unchanged. From (4) we know that \(X^{\prime}_{1}\setminus h=((X_{1}\cup g_{b})\setminus Z)\setminus h<_{a}X_{2}\ \forall h\in((X_{1}\cup g_{b})\setminus Z)\), and from (3) we have \(X_{2}<_{a}\cdots<_{a}X_{n-1}\). Therefore, both \(X^{\prime}_{2},\cdots,X^{\prime}_{n-1}\) are EFX-feasible _w.r.t._\(X^{\prime}_{1}\) for agents \(a_{1},a_{2},\cdots,a_{n-2}\). Therefore, the bundles \(X^{\prime}_{1},X^{\prime}_{2},\cdots,X^{\prime}_{n-1}\) are EFX-feasible among themselves to agents \(a_{1},a_{2},\cdots,a_{n-2}\). 
All that remains is to check the EFX-feasibility of bundles \(X^{\prime}_{1},X^{\prime}_{2},\cdots,X^{\prime}_{n-1}\)_w.r.t._\(X^{\prime}_{n}\). If the bundles \(X^{\prime}_{1},X^{\prime}_{2},\cdots,X^{\prime}_{n-1}\) are EFX-feasible _w.r.t._\(X^{\prime}_{n}\), then we meet all the conditions of the invariant and hence \(X^{\prime}\) is almost EFX-feasible. Since \(\phi(X^{\prime})>\phi(X)\), we have an almost EFX-feasible solution with increased potential and we are done. Now, consider the case that one of the bundles in \(\{X^{\prime}_{1},X^{\prime}_{2},\cdots,X^{\prime}_{n-1}\}\) is _not_ EFX-feasible _w.r.t._\(X^{\prime}_{n}\). That is, \[\exists h\in X^{\prime}_{n}\ \ \text{such that}\ \ X^{\prime}_{n} \setminus h>_{a}\min_{a}(X^{\prime}_{1},X^{\prime}_{2},\cdots,X^{\prime}_{n-1})\] \[\implies X^{\prime}_{n}>_{a}\min_{a}(X^{\prime}_{1},X^{\prime}_{2}, \cdots,X^{\prime}_{n-1})\] \[\implies \min_{a}(X^{\prime}_{1},X^{\prime}_{2},\cdots,X^{\prime}_{n})= \min_{a}(X^{\prime}_{1},X^{\prime}_{2},\cdots,X^{\prime}_{n-1})=X_{2}>_{a}X_ {1}\] Now, we apply the PR algorithm on \(X^{\prime}\) under the valuation \(v_{a}\) to get a new allocation \(X^{\prime\prime}\). We can see that \(X^{\prime\prime}\) is almost EFX-feasible by relabeling the bundles appropriately if needed. From the property 2 of the PR algorithm, we also know that \(\min_{a}(X^{\prime\prime})>_{a}\min_{a}(X^{\prime})>_{a}X_{1}\). Therefore, \(\phi(X^{\prime\prime})>v_{a}(X_{1})=\phi(X)\). Thus we obtain a new almost EFX-feasible allocation with increased potential. Case 2:The bundle \(X_{n}\setminus g_{b}\) is not the most favorite bundle of agent \(b_{1}\)_and_ bundle \(X_{n}\setminus g_{c}\) is not the most favorite bundle of agent \(c_{1}\). That is, \[X_{n}\setminus\{g_{b}\}<_{b}X_{1}\cup\{g_{b}\},\text{and}\] \[X_{n}\setminus\{g_{c}\}<_{c}X_{1}\cup\{g_{c}\}\] In this case, we run the PR algorithm on \(\langle X_{1}\cup g_{b},\ X_{n}\setminus g_{b}\rangle\) under valuation \(v_{b}\) to get bundles \(Y_{n-1},Y_{n}\). Now the new allocation is \(X^{\prime}=\langle X^{\prime}_{1},X^{\prime}_{2},\cdots,X^{\prime}_{n-1},X^{ \prime}_{n}\rangle=\langle X_{2},X_{3},\cdots,Y_{n-1},Y_{n}\rangle\). We first show that bundles \(Y_{n-1}\) and \(Y_{n}\) are EFX-feasible for agents \(b_{1}\) and \(c_{1}\) respectively. \[\min_{b}(Y_{n-1},Y_{n}) >_{b}\min_{b}((X_{1}\cup g_{b}),(X_{n}\setminus g_{b}))\] \[=X_{n}\setminus\{g_{b}\}\] ( _Case 2_ assumption) \[>_{b}\max_{b}(X_{2},\cdots,X_{n-1})\] ( ( 1 )) Therefore, the bundles \(Y_{n-1}\) and \(Y_{n}\) are both EFX-feasible for agent \(b_{1}\). We let agent \(c_{1}\) choose their favorite bundle among \(Y_{n-1}\) and \(Y_{n}\). _w.l.o.g_ let \(Y_{n}>_{c}Y_{n-1}\). From the _maximin_ property of \(v_{c}\), we know the following: \[Y_{n} =\max_{c}(Y_{n-1},Y_{n}) (\because Y_{n}>_{c}Y_{n-1})\] \[\geq_{c}\min_{c}(X_{1}\cup\{g_{c}\},X_{n}\setminus\{g_{c}\}) (v_{c}\text{ is MMS-feasible})\] \[=X_{n}\setminus\{g_{c}\} (\text{{Case 2 assumption}})\] \[>_{c}\max_{c}(X_{2},\cdots,X_{n-1}) (\text{From \ \ (\ref{eq:c})})\] Therefore, the bundle \(Y_{n}\) is EFX-feasible for agent \(c_{1}\). Now, recall that the current allocation is \(X^{\prime}=\langle X_{2},X_{3},\cdots,Y_{n-1},Y_{n}\rangle\). Depending on the envy from agent \(a_{1}\), we have the following three cases: **Case 2.1:**: Agent \(a_{1}\) does not strongly envy \(Y_{n-1}\) or \(Y_{n}\). Since \(X_{2}<_{a}\cdots<_{a}X_{n-1}\), agents \(a_{2},a_{3},\cdots,a_{n-2}\) also does not strongly envy \(Y_{n-1}\) or \(Y_{n}\). 
Thus, \(X^{\prime}\) is an EFX allocation.

**Case 2.2:**: Agent \(a_{1}\) strongly envies both \(Y_{n-1}\) and \(Y_{n}\). Then, \[Y_{n} >_{a}X_{2}\] \[Y_{n-1} >_{a}X_{2}\] \[X_{3} >_{a}X_{2}\quad(\text{From (3)})\] Therefore, \(\min_{a}(X^{\prime})=X_{2}>_{a}X_{1}\). That is, the minimum has strictly increased. Now we run the PR algorithm on \(X^{\prime}\) with the valuation \(v_{a}\) to get an almost EFX-feasible allocation \(X^{\prime\prime}\) with a potential value \(\phi(X^{\prime\prime})>\phi(X)\).

**Case 2.3:**: Agent \(a_{1}\) strongly envies \(Y_{n-1}\) but not \(Y_{n}\). The other case is similar.4 Footnote 4: If agent \(a_{1}\) strongly envies \(Y_{n}\), then give \(Y_{n}\) to agent \(b_{1}\) and \(Y_{n-1}\) to agent \(c_{1}\). We know both \(Y_{n}\) and \(Y_{n-1}\) are EFX-feasible for agent \(b_{1}\). Thus we meet the invariant by making \(X^{\prime\prime}_{n}\) EFX-feasible for agent \(b_{1}\) instead of agent \(c_{1}\). Let \(Y^{\prime}_{n-1}\subseteq Y_{n-1}\) be such that \(Y^{\prime}_{n-1}>_{a}X_{2}\) but \(Y^{\prime}_{n-1}\setminus h<_{a}X_{2}\ \forall h\in Y^{\prime}_{n-1}\). Now consider the new allocation \(X^{\prime\prime}=\langle X^{\prime\prime}_{1},\cdots,X^{\prime\prime}_{n-1},X^{\prime\prime}_{n}\rangle=\langle X_{2},\cdots,Y^{\prime}_{n-1},\ Y_{n}\cup(Y_{n-1}\setminus Y^{\prime}_{n-1})\rangle\). Previously, \(Y_{n}\) was EFX-feasible for agent \(c_{1}\). Now, the value of this bundle has increased and the values of the other bundles have not increased. Therefore, the new bundle \(X^{\prime\prime}_{n}\) is EFX-feasible for agent \(c_{1}\). The potential of the new allocation \(X^{\prime\prime}\) is \(\phi(X^{\prime\prime})=v_{a}(X_{2})>v_{a}(X_{1})=\phi(X)\), since \(\min_{a}(X^{\prime\prime}_{1},X^{\prime\prime}_{2},\cdots,Y^{\prime}_{n-1})=X_{2}\). That is, the potential value has increased. Now, if the bundles \(X^{\prime\prime}_{1},X^{\prime\prime}_{2},\cdots,X^{\prime\prime}_{n-1}\) are EFX-feasible for agents \(a_{1},a_{2},\cdots,a_{n-2}\), we are done. We know that bundles \(X^{\prime\prime}_{1},\cdots,X^{\prime\prime}_{n-2}\) are EFX-feasible among themselves for agents \(a_{1},a_{2},\cdots,a_{n-2}\). By the construction of \(Y^{\prime}_{n-1}\), it is clear that \(X^{\prime\prime}_{1},X^{\prime\prime}_{2},\cdots,X^{\prime\prime}_{n-1}=Y^{\prime}_{n-1}\) are EFX-feasible among themselves for agents \(a_{1},a_{2},\cdots,a_{n-2}\). Now, if \(X^{\prime\prime}_{1},X^{\prime\prime}_{2},\cdots,X^{\prime\prime}_{n-1}\) are EFX-feasible with respect to \(X^{\prime\prime}_{n}\), then all the invariant constraints are met and \(X^{\prime\prime}\) is a new almost EFX-feasible allocation with a higher potential value. Otherwise, if one of \(X^{\prime\prime}_{1},X^{\prime\prime}_{2},\cdots,X^{\prime\prime}_{n-1}\) is not EFX-feasible _w.r.t._\(X^{\prime\prime}_{n}\) according to valuation \(v_{a}\), then we have: \[\exists h\in X^{\prime\prime}_{n}\text{ such that }X^{\prime\prime}_{n}\setminus h >_{a}\min_{a}(X^{\prime\prime}_{1},X^{\prime\prime}_{2},\cdots,X^{\prime\prime}_{n-1})\] \[=X_{2}\] \[>_{a}\min_{a}(X_{1},X_{2},\cdots,X_{n})\] That is, the overall minimum has increased. Now, we run the PR algorithm on \(X^{\prime\prime}\) with the valuation \(v_{a}\) to get a new allocation \(Z\). Let agent \(c_{1}\) pick their favorite bundle. From the property of the PR algorithm, we know that \(\phi(Z)>\phi(X)\). Thus, we have a new almost EFX-feasible allocation with higher potential. This concludes the proof.
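The case analysis above repeatedly relies on two elementary subroutines: checking whether a bundle is EFX-feasible against a collection of other bundles, and extracting a minimally envied subset as in (4). The following Python sketch illustrates both; it assumes additive valuations and the function names are our own illustrative choices, not part of the proof.

```python
def value(v, bundle):
    """Additive valuation: v maps each good to a non-negative number (illustrative assumption)."""
    return sum(v[g] for g in bundle)

def efx_feasible(v, bundle, others):
    """`bundle` is EFX-feasible w.r.t. `others` under valuation v if no other bundle
    remains strictly more valuable after removing any single good from it."""
    for other in others:
        for g in other:
            if value(v, set(other) - {g}) > value(v, bundle):
                return False
    return True

def minimally_envied_subset(v, envied, reference):
    """Shrink `envied` (playing the role of X_1 ∪ {g_b}) while it still beats
    `reference` (playing the role of X_2) under v; stop when removing any
    further good would drop it below `reference`.  The result is one subset
    satisfying both conditions of (4)."""
    current = set(envied)
    assert value(v, current) > value(v, reference)
    changed = True
    while changed:
        changed = False
        for g in sorted(current):            # deterministic order, assumes comparable good labels
            smaller = current - {g}
            if value(v, smaller) > value(v, reference):
                current = smaller
                changed = True
                break
    return current                            # corresponds to (X_1 ∪ {g_b}) \ Z
```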
2310.19631
On particle collisions in the vicinity of the charged black holes
The process of particle collision in the vicinity of black holes is known to generate unbounded energies in the center-of-mass frame (the Banados-Silk-West (BSW) effect) under specific conditions. We consider this process in the charged black hole metrics, namely, the Reissner-Nordstrom (RN) and Majumdar-Papapetrou (MP) metrics. We consider the energy extraction from Bardeen regular black hole due to BSW effect. Like in RN case, we show that there is no restriction on energy extraction, but for real charged particles this effect is negligible. We derive necessary and sufficient conditions for this process. The conditions for the BSW effect in RN and MP metrics are shown to be identical, which is explained by the asymptotic equivalence of the two metrics near the horizons. Energy extraction in the RN metric is discussed. It is shown that if two real particles collide while falling onto a black hole, they are extremely unlikely to generate an ultra-massive particle. For the case of head-on collisions, we derive an upper bound on extracted mass, which depends on the lapse function of the metric at the point of collision.
Timur Pryadilin, Daniil Zhitov, Vitalii Vertogradov
2023-10-30T15:24:01Z
http://arxiv.org/abs/2310.19631v1
###### Abstract The process of particle collision in the vicinity of black holes is known to generate unbounded energies in the center-of-mass frame (the Banados-Silk-West (BSW) effect) under specific conditions. We consider this process in the charged black hole metrics, namely, the Reissner-Nordstrom (RN) and Majumdar-Papapetrou (MP) metrics. We consider the energy extraction from Bardeen regular black hole due to BSW effect. Like in RN case, we show that there is no restriction on energy extraction, but for real charged particles this effect is negligible. We derive necessary and sufficient conditions for this process. The conditions for the BSW effect in RN and MP metrics are shown to be identical, which is explained by the asymptotic equivalence of the two metrics near the horizons. Energy extraction in the RN metric is discussed. It is shown that if two real particles collide while falling onto a black hole, they are extremely unlikely to generate an ultra-massive particle. For the case of head-on collisions, we derive an upper bound on extracted mass, which depends on the lapse function of the metric at the point of collision. _Grav. Cosmol. Nos. issue, year_ **On particle collisions in the vicinity of the charged black holes** **Timur Pryadilin,\({}^{a,}\)1** **Daniil Zhitov,\({}^{b,}\)2** **and Vitalii Vertogradov\({}^{c,}\)3** Footnote 1: e-mail: [email protected] Footnote 2: e-mail: [email protected] Footnote 3: e-mail: [email protected] (Corresponding author) \({}^{a}\) _Department of Applied Mathematics and Theoretical Physics, University of Cambridge, CB3 0WA, UK_ \({}^{b}\) _Cavendish Laboratory, University of Cambridge, Cambridge, CB3 0HE, UK_ \({}^{c}\) _Physics department, Herzen state Pedagogical University of Russia, 48 Moika Emb., Saint Petersburg 191186, Russia SPb branch of SAO RAS, 65 Pulkovskoe Rd, Saint Petersburg 196140, Russia_

## 1 Introduction

Although solutions of the Einstein equations describing black holes have been known for more than a century, only several years ago did the direct observation of gravitational waves [7] and of a black hole shadow [8] turn them into real astrophysical objects. Although light cannot escape a black hole, one can learn its properties through its influence on the surrounding matter. Black holes can serve as an arena for high energy physics [9]. Penrose [10] showed that there might be particles with negative energy in the ergosphere of a rotating black hole that can be used to extract its rotational energy. Another example is charged particles moving in a charged black hole background [11, 12]. Thus, a Penrose-like process for charged black holes is also possible. This question has been studied for the Reissner-Nordstrom [13] and Majumdar-Papapetrou [14] cases. Banados, Silk, and West demonstrated that particle collisions near the event horizon of an extremal Kerr black hole [15] can achieve arbitrarily high center-of-mass energy if the angular momentum of one of the incident particles is fine-tuned. Energy extraction is also possible due to this effect [16]. In the Schwarzschild case, one cannot obtain unbounded center-of-mass energy from this process. This process for a naked singularity is investigated in [17, 18]. However, in the Reissner-Nordstrom-anti-de-Sitter case [19], there is an innermost stable equilibrium point, and the center-of-mass energy of two particles colliding at this point can be unbounded. 
Energy extraction has been considered in [20], and it has been shown that one can extract an arbitrarily large amount of energy. The generalization of this process to the charged Vaidya dynamical black hole has been studied in [22]. It was shown that negative-energy particles in the ergoregion of a rotating black hole follow so-called white hole geodesics, i.e. they appear in the ergoregion from the gravitational radius [23, 24]. The same is valid for trajectories of charged particles with negative energy in Reissner-Nordstrom [13] and partly in Majumdar-Papapetrou [14] spacetimes. This means that one can consider a head-on collision near the event horizon, which can lead to an unbounded center-of-mass energy \(E_{c.m.}\)[25]. This process has been considered for a collapsing matter cloud [26]. In this paper, we consider the energy extraction from the Bardeen regular black hole [27] due to the BSW effect. We demonstrate that the extracted energy can be arbitrarily large, as in the extremal Reissner-Nordstrom case [20]. We consider the BSW effect and head-on collisions for the charged Reissner-Nordstrom black hole and for a system of two charged Majumdar-Papapetrou black holes [30]. The energy extraction in these processes is also studied for real particles such as the electron and the proton, and we show that for real particles the extracted energy is small.

This paper is organized as follows. In Sec. 2 we describe the method and consider the center-of-mass energy of two-particle collisions and the energy extraction in the Reissner-Nordstrom case. In Sec. 3 the same effects are considered for the binary system of two charged black holes in the Majumdar-Papapetrou case and for the regular Bardeen black hole. Sec. 4 is the conclusion. The system of units \(c=G=1\) will be used throughout the paper. The signature is \(-\,+\,+\,+\,\).

## 2 Methods

### Basic definitions

We begin by formulating a general method for analyzing particle kinematics in different backgrounds. We introduce it by applying it to the Reissner-Nordstrom solution of Einstein-Maxwell theory, which describes a static spherically symmetric charged black hole: \[ds^{2}=-fdt^{2}+\frac{1}{f}dr^{2}+r^{2}d\Omega^{2}\,, \tag{1}\] where the lapse function \(f\) is given by: \[f=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}=\frac{(r-r_{-})(r-r_{+})}{r^{2}}\,, \tag{2}\] \[r_{\pm}=M\pm\sqrt{M^{2}-Q^{2}}, \tag{3}\] and the electrostatic potential is \[\varphi(r)=\frac{Q}{r}\,. \tag{4}\] Here \(M\) is the mass of the black hole, \(Q\) is its electric charge, and \(r_{+}\) and \(r_{-}\) are the radial coordinates of the event and Cauchy horizons, respectively. The black hole is extremal if \(Q=M\), and if \(|Q|>M\) the singularity is naked. The latter case is not considered in this article. Without loss of generality, we assume that \(Q>0\). In this background, the motion of charged particles is non-geodesic due to the electrostatic force. Still, spherical symmetry implies that the motion is planar; for convenience, we consider the equatorial plane \(\theta=\frac{\pi}{2}\). Temporal and spherical symmetries also provide constants of motion \(E\) and \(L\), the energy and angular momentum, both defined per unit mass. We also specify \(\gamma\), the charge-to-mass ratio of the particle. 
The equations of motion give the four-velocity \(u^{\mu}=dx^{\mu}/d\tau\): \[u^{0}=\frac{dt}{d\tau}=f^{-1}\left(E-\frac{\gamma Q}{r}\right)\,, \tag{5}\] \[u^{1}=\frac{dr}{d\tau}=\pm\sqrt{\left(E-\frac{\gamma Q}{r}\right)^{2}-\left(\frac{L^{2}}{r^{2}}+1\right)f}\,, \tag{6}\] \[u^{2}=\frac{d\theta}{d\tau}=0 \tag{7}\] \[u^{3}=\frac{d\varphi}{d\tau}=\frac{L}{r^{2}}\,. \tag{8}\] The sign in the expression for \(u^{1}\) is determined by whether the particle moves inward or outward. Moreover, a necessary condition is \(u^{0}>0\) outside the event horizon (the so-called "forward-in-time" condition). In the BSW effect, there are two particles of the same mass \(m\) with four-velocities \(u^{\mu}_{(1)}\) and \(u^{\mu}_{(2)}\). The collision energy measured in the center-of-mass frame is given by [15] \[\frac{E^{2}_{\text{cm}}}{2m^{2}}=1-g_{ik}u^{i}_{1}u^{k}_{2}\,. \tag{9}\] Using equations (5)-(8), we get \[\frac{E^{2}_{\text{cm}}}{2m^{2}}-1=\frac{1}{f}\left(X_{1}X_{2}\pm\sqrt{X_{1}^{2}-Y_{1}f}\sqrt{X_{2}^{2}-Y_{2}f}\right)-\frac{L_{1}L_{2}}{r^{2}}\,. \tag{10}\] Here we have introduced new quantities: \[X_{i}=E_{i}-\frac{\gamma_{i}Q}{r},\quad Y_{i}=\frac{L_{i}^{2}}{r^{2}}+1,\quad i=1,2\,. \tag{11}\] The physical interpretation of \(X\) is that it behaves similarly to the kinetic energy in classical electromagnetism. From (5) we see that \(u^{0}=dt/d\tau=X/f\). Note that the "forward-in-time" condition restricts the accessible particle radii by the condition \(X>0\), which is similar to the positivity of the classical kinetic energy. Also, the \(\pm\) sign corresponds to essentially different types of collision, determined by the relative signs of \(dr/d\tau\) of the two particles. We consider the two cases separately.

### Near-horizon expansions

The later analysis requires a certain care for detail, so we first discuss the approximations. A reliable technique is Taylor expanding various quantities in powers of the dimensionless quantity \(\delta=(r-r_{+})/r_{+}\). \[X=E-\frac{\gamma Q}{r}=E-\frac{\gamma Q}{r_{+}}\left(1-\delta+\delta^{2}+...\right)=X_{h}+\frac{\gamma Q}{r_{+}}\delta-\frac{\gamma Q}{r_{+}}\delta^{2}+... \tag{12}\] where \(X_{h}=E-\frac{\gamma Q}{r_{+}}\) is the value of \(X\) on the horizon. We also need to expand \(X^{2}\): \[X^{2}=X_{h}^{2}+\frac{2\gamma QX_{h}}{r_{+}}\delta+\left(\left(\frac{\gamma Q}{r_{+}}\right)^{2}-\frac{2\gamma QX_{h}}{r_{+}}\right)\delta^{2}+... \tag{13}\] Providing an expansion for \(f\) is slightly trickier due to the presence of another scale \((r_{+}-r_{-})/r_{+}\). Then: \[f=\alpha\delta+..., \tag{14}\] where \(\alpha\geq 0\) is a constant coefficient, the exact value of which is irrelevant. The subtlety is that at distances much larger than \(r_{+}-r_{-}\) the quadratic term would be larger than the linear one. Then, when one considers extremal black holes, for which \(r_{+}=r_{-}\), the leading term is quadratic: \[f=\delta^{2}+... \tag{15}\] In practical terms, the condition of extremality that we impose later is not quite that \(r_{+}-r_{-}=0\), but that it is much smaller than the collision-horizon distance \(r_{+}\delta\).

### Restrictions from particle kinematics

This discussion already provides us with some useful information. The region accessible to a particle is determined by the conditions: 1. \(X>0\) ("forward-in-time" outside the outer horizon) 2. \(X^{2}-Yf\geq 0\) (turning points) In the vicinity of the horizon, as long as \(X_{h}\) is finite, both conditions are easily satisfied. 
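These conditions and the behaviour of the collision energy (10) near the horizon are easy to explore numerically. The following sketch (in geometric units \(G=c=1\); the particle parameters are our own illustrative choices, not values used in the paper) evaluates (10) for two radially infalling particles in the extremal RN metric as the collision point approaches the horizon.

```python
import numpy as np

M, Q = 1.0, 1.0                      # extremal black hole, so r_+ = M
r_plus = M + np.sqrt(M**2 - Q**2)

def f(r):                            # lapse function (2)
    return 1.0 - 2.0 * M / r + Q**2 / r**2

def X(r, E, gamma):                  # "kinetic" quantity (11)
    return E - gamma * Q / r

def Y(r, L):
    return L**2 / r**2 + 1.0

def E_cm_sq_over_2m2(r, p1, p2, same_direction=True):
    """Right-hand side of (10) plus 1; a particle is described by p = (E, gamma, L)."""
    X1, X2 = X(r, p1[0], p1[1]), X(r, p2[0], p2[1])
    Y1, Y2 = Y(r, p1[2]), Y(r, p2[2])
    sign = -1.0 if same_direction else +1.0   # "-" branch of (10): both particles infalling
    radial = (X1 * X2 + sign * np.sqrt(X1**2 - Y1 * f(r)) * np.sqrt(X2**2 - Y2 * f(r))) / f(r)
    return 1.0 + radial - p1[2] * p2[2] / r**2

critical = (1.2, 1.2, 0.0)           # E = gamma, so X_h = 0 (critical particle)
usual    = (1.0, 0.0, 0.0)           # neutral particle falling from rest at infinity

for delta in (1e-1, 1e-2, 1e-3):
    r = r_plus * (1.0 + delta)
    print(delta, E_cm_sq_over_2m2(r, critical, usual, same_direction=True))
# The printed values grow roughly like 1/delta, illustrating the divergence
# for a critical particle discussed in the next subsection.
```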
However, we would also be interested in "critical" particles, such that \(X_{h}=0\). Then, since \(X(r_{+})=E-\gamma Q/r_{+}=X_{h}=0\), condition 1 implies \(\gamma>0\). Hence, a critical particle must have the same charge sign as the black hole. In condition 2, in the vicinity of the horizon, the first two terms of the expansion vanish, and the leading term is quadratic in \(\delta\). At the same time \(f\) could be linear (with \(\alpha>0\)) for non-extremal holes. This means that critical particles cannot approach the vicinity of the horizon in this case due to violation of the second condition. If the hole is extremal (\(r_{-}=r_{+}=Q=M\)), the condition (to the leading order) transforms to \(\gamma^{2}-\left(\frac{L^{2}}{r_{+}^{2}}+1\right)\geq 0\). Therefore, _critical particles can reach the horizon only if the hole is extremal_, and their angular momentum is bounded \[|L|\leq Q\sqrt{\gamma^{2}-1}\,. \tag{16}\] Of course, this also implies that \(\gamma\geq 1\). Moreover, for an extremal black hole with \(r_{+}=Q\), the condition of particle criticality gives \(E_{1}=\gamma_{1}\). We thus conclude that a critical particle has \(E_{1}\geq 1\), so its trajectory always originates at infinity (except for the particular case \(E_{1}=1\), where the particle is in a state of indifferent equilibrium at every point).

### Centre-of-mass collision energy

Now let us analyse the expression for the collision energy (10). The possibility of an infinite energy arises from the \(1/f\) factor, which becomes unbounded in the vicinity of the event horizon (where \(g_{00}=-f=0\)).

* "+" case: this corresponds to the particles moving in opposite directions. As \(f\to 0\) near the horizon, the expression in brackets tends to \(2X_{1h}X_{2h}\), which is finite unless either particle is critical. Thus, an infinite CoM energy arises for all non-critical particles that can reach the horizon.
* "-" case: this case is more nuanced, since the numerator tends to zero as well.

Finally, we can proceed to analyze the singular term of (10). Assume \(X_{1h},X_{2h}\) are both finite. Then \(X_{1},X_{2}\) are also finite in the vicinity of the outer horizon. Then for collision at a point such that \(f_{c}\ll X_{1c},X_{2c}\) (subscript 'c' for collision): \[\frac{1}{f_{c}}\left(X_{1c}X_{2c}-\sqrt{X_{1c}^{2}-Y_{1c}f_{c}}\sqrt{X_{2c}^{2}-Y_{2c}f_{c}}\right)\approx\frac{1}{2}\left(\frac{X_{1c}Y_{2c}}{X_{2c}}+\frac{X_{2c}Y_{1c}}{X_{1c}}\right), \tag{17}\] a result essentially obtained before [21]. However, the intricacies of the approximation were not explicitly explained there. This expression suggests that if one of the particles is critical, \(X_{h}\approx X_{c}=0\), an infinite collision energy is achievable. This is true, but it is not so obvious: \(X_{c}=0\) violates the condition \(f_{c}\ll X_{c}\) implied in this derivation. In addition, as previously noted, a critical particle can only reach the horizon if the hole is extremal. Let us analyse this more thoroughly. Assume \(X_{1h}=0\) and \(X_{2h}\) is finite. 
Expanding the expression in brackets to the first order in \(\delta\), we obtain the singular contribution: \[\approx\frac{1}{f}\left((\gamma_{1}-\sqrt{\gamma_{1}^{2}-1})X_{2h}\delta+O(\delta^{2})\right) \tag{18}\] Given that \(f=\delta^{2}+O(\delta^{3})\), the singular contribution to \(E_{cm}^{2}/2m^{2}\) is \[\frac{(\gamma_{1}-\sqrt{\gamma_{1}^{2}-1})X_{2h}}{\delta} \tag{19}\] We finish this section by writing the condition \(\gamma\geq 1\) with SI units included: \[\tilde{\gamma}=\frac{|q|}{m\sqrt{4\pi\varepsilon_{0}G}}\geq 1, \tag{20}\] where \(\varepsilon_{0}\approx 8.854\times 10^{-12}\,\mathrm{F}\,\mathrm{m}^{-1}\) is the vacuum permittivity, and \(G\approx 6.674\times 10^{-11}\,\mathrm{N}\,\mathrm{m}^{2}\,\mathrm{kg}^{-2}\) is the gravitational constant. We call this dimensionless quantity \(\tilde{\gamma}\) the _electric ratio_ of a particle. It is numerically equal to \(|\gamma|\) in the units used previously.

### Energy extraction

It was shown in [20] that when particles with opposite signs of radial momentum collide in the vicinity of an extremal charged black hole, the energy and mass of one of the outgoing particles admit lower, but not upper, bounds. Namely, in a specific scenario when the observed particle moves towards the black hole, reaches a turning point just outside the horizon, and then flies to infinity, the following inequalities hold: \[E\geq m\geq m_{\mathrm{min}}=|q_{0}|-\sqrt{q_{0}^{2}-m_{0}^{2}}, \tag{21}\] where \(E\) and \(m\) are the energy and mass of the observed particle (in this section \(E\) denotes the absolute energy, not the energy per unit mass), and \(q_{0},m_{0}\) are the charge and mass of the critical particle. Thus, the authors conclude that ultra-heavy particles detectable at infinity can in principle be created. However, it is important to note that for real particles the lower bound is exceptionally small, which means that it is highly unlikely that the mass of the outgoing particle even exceeds the mass of the critical particle. Table 1 shows the value of \(m_{\mathrm{min}}\) as well as the electric ratio \(\tilde{\gamma}\) defined in (20) for some subatomic particles in SI units, for which we used the formula \[m_{\mathrm{min}}=\frac{|q|}{\sqrt{4\pi\varepsilon_{0}G}}-\sqrt{\frac{q^{2}}{4\pi\varepsilon_{0}G}-m^{2}}. \tag{22}\] As we can see from Table 1, all real subatomic particles satisfy \(\tilde{\gamma}>1\) (in fact we can say that \(\tilde{\gamma}\gg 1\)), so they can in principle be critical particles. However, even for relatively heavy particles like the nucleus of the gold atom, the lower bound on the mass is many orders of magnitude smaller than the electron mass, so it only tells us that these processes cannot emit ultra-light particles such as neutrinos. It is reasonable to assume that even if collisions with creation of ultra-heavy particles escaping to infinity do not violate conservation laws, and hence are physically possible, they must be severely outnumbered by collisions that produce usual particles with masses smaller than the masses of the colliding constituents. We will now proceed to calculate energy extraction for the case when one of the particles is falling onto the black hole, and the other one is moving outward. We will ignore the question of how the latter particle could arise. As mentioned above, it follows from (10) that in this case the collision energy can be arbitrarily large when measured in the center-of-mass frame for all charged black holes (not necessarily extremal). 
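As a quick sanity check on Table 1, the electric ratio (20) and the mass bound (22) can be reproduced with a few lines of Python; the script below is only an illustrative sketch using standard CODATA-level values for the constants and particle data.

```python
import math

eps0 = 8.854e-12      # F / m
G    = 6.674e-11      # N m^2 / kg^2
e    = 1.602e-19      # C
m_e  = 9.109e-31      # kg
m_p  = 1.673e-27      # kg

def electric_ratio(q, m):
    """Dimensionless ratio (20): |q| / (m * sqrt(4 pi eps0 G))."""
    return abs(q) / (m * math.sqrt(4.0 * math.pi * eps0 * G))

def mass_bound(q, m):
    """Bound (22), m_min = A - sqrt(A^2 - m^2) with A = |q| / sqrt(4 pi eps0 G).
    Since A >> m for real particles, the direct difference underflows in double
    precision, so we use the algebraically equivalent form m^2 / (A + sqrt(A^2 - m^2))."""
    A = abs(q) / math.sqrt(4.0 * math.pi * eps0 * G)
    return m * m / (A + math.sqrt(A * A - m * m))

for name, q, m in [("electron", -e, m_e), ("proton", e, m_p)]:
    print(name,
          "gamma ~ %.2e" % electric_ratio(q, m),
          "m_min/m_e ~ %.2e" % (mass_bound(q, m) / m_e))
# Output is consistent with Table 1: gamma ~ 2.04e+21 and m_min/m_e ~ 2.45e-22
# for the electron; gamma ~ 1.11e+18 and m_min/m_e ~ 8.3e-16 for the proton.
```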
Likewise, in this head-on case there are no longer any criticality conditions. Hence, we can expect a more significant energy extraction to be possible in this case. As usual, we assume that there are 4 particles involved in the collision. Particles 1 and 2 collide to produce particles 3 and 4. We assume that particles 1 and 3 move outward, while particles 2 and 4 move inward. The conservation laws of charge, energy and radial momentum then read \[q_{1}+q_{2}=q_{3}+q_{4}, \tag{23}\] \[X_{1}+X_{2}=X_{3}+X_{4}, \tag{24}\] \[Z_{1}-Z_{2}=Z_{3}-Z_{4}. \tag{25}\] Here, \(X_{i}=E_{i}-q_{i}Q/r\), and \(Z_{i}=\sqrt{X_{i}^{2}-m_{i}^{2}f}\). We assume that the motion is purely radial, i.e., there is no angular momentum. We interpret \(r\) as the radial coordinate of the collision process, which satisfies \(r>r_{+}\), but can be arbitrarily close to the horizon if needed. We formulate the problem as follows: with fixed parameters of the colliding particles, is it possible for particle 3 to escape to infinity with arbitrarily large energy and/or mass? As before, particle kinematics enforces the "forward-in-time" condition \(X_{i}>0\) (see sec. 2.3) for all \(i=1,2,3,4\). In particular, it implies that \(X_{3}<X_{1}+X_{2}\). We can thus state the following inequality: \[q_{3}\geq\frac{q_{3}Q}{r_{+}}>\frac{q_{3}Q}{r}=E_{3}-X_{3}>E_{3}-X_{1}-X_{2} \tag{26}\] Here we have used the fact that \(Q\leq r_{+}\), which follows from the definition (3). This inequality tells us that if we want to make \(E_{3}\) arbitrarily large, it forces \(q_{3}\) to be comparably large. In particular, energy extraction is impossible with electrically neutral particles (\(q_{3}=0\)), as for them (26) becomes \(E_{3}<X_{1}+X_{2}\). This conclusion is valid for any collision process involving four particles in the RN metric. If we are only interested in extracting energy, and not in creating a heavy particle, the problem admits a trivial solution. We can imagine two electrically neutral particles that collide and precisely exchange momentum, mass, and "kinetic energy" \(X\), and the new particles have opposite charges of large absolute value: \[\begin{cases}q_{1}=q_{2}=0,\;q_{3}=-q_{4}=q,\\ X_{1}=X_{3},\;X_{2}=X_{4},\\ Z_{1}=Z_{3},\;Z_{2}=Z_{4},\\ m_{1}=m_{3},\;m_{2}=m_{4}.\end{cases} \tag{27}\] Clearly, this combination of parameters satisfies all conservation laws (23)-(25), as well as the forward-in-time conditions. The energy of particle 3 in this case is given by \[E_{3}=X_{3}+\frac{q_{3}Q}{r}=X_{1}+\frac{qQ}{r}. \tag{28}\] Since \(q\) is an arbitrary parameter, we can make it arbitrarily large in absolute value (with the same sign as \(Q\)). We thus see that in principle unbounded energy extraction is possible in this case. However, this fact is not particularly significant, because it does not even require curvature of spacetime. Moreover, the exact same process can be considered even in Newtonian mechanics with particles moving in a Coulomb potential, giving the same results. Thus, it will prove to be more interesting to consider extraction of ultra-heavy particles, which will be shown to depend heavily on the curvature of spacetime. We will assume that the collision occurs close to the event horizon, so that \(f\ll 1\). We will assume that the particles 1, 2, 4 are "usual" in the sense that for them \(X_{i}^{2}\gg m_{i}^{2}f\), so \(Z_{i}\approx X_{i}\) for \(i=1,2,4\). 
We may not assume that this condition holds for particle 3, as will be clear from the result.

\begin{table} \begin{tabular}{l|l|l|l|l} Particle & Mass \(m/m_{e}\) & Charge \(q/e\) & Electric ratio \(\tilde{\gamma}\) & Mass bound \(m_{\min}/m_{e}\) \\ \hline electron & \(1.00\times 10^{0}\) & -1 & \(2.04\times 10^{21}\) & \(2.45\times 10^{-22}\) \\ proton & \(1.84\times 10^{3}\) & 1 & \(1.11\times 10^{18}\) & \(8.25\times 10^{-16}\) \\ \(\alpha\)-particle & \(7.29\times 10^{3}\) & 2 & \(5.60\times 10^{17}\) & \(6.52\times 10^{-15}\) \\ \(\frac{197}{79}\)Au & \(3.59\times 10^{5}\) & 79 & \(4.49\times 10^{17}\) & \(4.00\times 10^{-13}\) \\ \end{tabular} \end{table} Table 1: Lower bound of energy extraction for some subatomic particles

Summing the energy and momentum conservation equations (24) and (25) yields in this case \[2X_{1}=X_{3}+\sqrt{X_{3}^{2}-m_{3}^{2}f}, \tag{29}\] which allows us to solve for \(X_{3}\): \[X_{3}=X_{1}+\frac{m_{3}^{2}f}{4X_{1}} \tag{30}\] However, we have from \(X_{4}>0\) that \(X_{3}<X_{1}+X_{2}\). Therefore, we can deduce an upper bound on \(m_{3}\): \[m_{3}<\sqrt{\frac{4X_{1}X_{2}}{f}} \tag{31}\] This inequality differs substantially from inequalities such as (21) derived in [20], because it explicitly includes \(f\), which is related to the spacetime curvature. We claim that for a fixed \(f\) it is possible to find a process that generates a particle of mass \(m_{3}\) arbitrarily close to this upper bound. Therefore, if the collision occurs sufficiently close to the horizon (in other words, if \(f\) is made sufficiently small), an ultra-massive particle detectable at infinity can be created.

## 3 Other spacetimes

### Majumdar-Papapetrou

The Majumdar-Papapetrou spacetime describes a system of extremal black holes whose mutual gravitational attraction is cancelled by their mutual electromagnetic repulsion. Here, we consider the system of two extremal black holes separated by a distance \(2a\). The line element in Weyl's cylindrical coordinates \(\{t,\rho,\varphi,z\}\) has the following form: \[ds^{2}=-\frac{dt^{2}}{U^{2}}+U^{2}(d\rho^{2}+\rho^{2}d\phi^{2}+dz^{2})\,, \tag{32}\] where \[U=1+\frac{M_{1}}{\sqrt{\rho^{2}+(z-a)^{2}}}+\frac{M_{2}}{\sqrt{\rho^{2}+(z+a)^{2}}}\,. \tag{33}\] Here \(M_{1}\) and \(M_{2}\) are the black hole masses, which are equal to their electric charges: \(Q_{1}=M_{1}\) and \(Q_{2}=M_{2}\). The electric potential of (32) is given by: \[\varphi=1-\frac{1}{U}\,. \tag{34}\] One should note that the metric in Weyl's coordinates (32) describes the exterior of the black holes. The event horizons are collapsed into the points \(\rho=0\,,z=\pm a\). The analogous description of a black hole as a point in the Schwarzschild spacetime is given in [31]. The Lagrangian of a massive charged test particle (with charge-to-mass ratio \(\gamma\)) is given by: \[2\mathcal{L}=-U^{-2}\left(\frac{dt}{d\tau}\right)^{2}+U^{2}\left(\frac{d\rho}{d\tau}\right)^{2}+U^{2}\left(\frac{dz}{d\tau}\right)^{2}+\rho^{2}U^{2}\left(\frac{d\varphi}{d\tau}\right)^{2}-2\gamma\left(1-\frac{1}{U}\right)\frac{dt}{d\tau}\,, \tag{35}\] where the last (interaction) term is written so as to be consistent with the conserved energy in (36) below. The MP spacetime does not depend on the \(t\) and \(\varphi\) coordinates. 
This fact allows us to define two constants of motion measured by an observer at infinity: the energy \(E\) and the projection of the angular momentum \(L_{z}\): \[\begin{split} E=U^{-2}\frac{dt}{d\tau}+\gamma\left(1-\frac{1}{U}\right)\,,\\ L_{z}=\rho^{2}U^{2}\frac{d\varphi}{d\tau}\,.\end{split} \tag{36}\] We define \(X=E-\gamma(1-\frac{1}{U})\), which plays a similar role in this metric as in the Reissner-Nordstrom one (see (11)). For a particle moving along the \(z\)-axis, we can determine the components of the 4-velocity from the normalization condition \(g_{ik}u^{i}u^{k}=-1\): \[\begin{split} u^{0}&=\frac{dt}{d\tau}=U^{2}X\,,\\ u^{1}&=\frac{d\rho}{d\tau}=0\,,\\ u^{2}&=\frac{d\varphi}{d\tau}=0\,,\\ u^{3}&=\frac{dz}{d\tau}=\pm\sqrt{X^{2}-\frac{1}{U^{2}}}\,.\end{split} \tag{37}\] Substituting (37) into (9), one obtains: \[\frac{E_{c.m.}^{2}}{2m^{2}}-1=U^{2}\left(X_{1}X_{2}\pm\sqrt{X_{1}^{2}-\frac{1}{U^{2}}}\sqrt{X_{2}^{2}-\frac{1}{U^{2}}}\right)\,, \tag{38}\] where the sign depends on the relative directions of the two \(z\)-axis velocities. This analysis shows that the BSW effect is possible in the MP metric and produces certain restrictions on the particles, which precisely coincide with the conditions obtained for the RN metric. It is reasonable to expect that the MP metric near each of the horizons should be equivalent to the extremal RN metric. We have already derived the expansions for the RN case. Now we analyse the near-horizon behaviour of the MP metric near \(z=+a\). For that, introduce the parameter \(\delta_{MP}=z-a\) (written simply as \(\delta\) below). We demonstrate the equivalence along the \(z\)-axis by deriving expressions for the metric coefficient, the electrostatic potential, and \(X\).

* Metric. The MP value corresponding to \(f\) is \(U^{-2}\). We can express it explicitly \[U^{-2}=\frac{(\delta(2a+\delta))^{2}}{(\delta(2a+\delta)+M_{1}(2a+\delta)+M_{2}\delta)^{2}}\approx\frac{4a^{2}}{4a^{2}M_{1}^{2}}\delta^{2}=\frac{\delta^{2}}{M_{1}^{2}}\] (39) just as in (15), with \(M_{1}=r_{+}\) identified.
* Potential. In extremal RN: \[\phi=\frac{Q}{r}=\left(1+\frac{\delta}{M}\right)^{-1}\approx 1-\frac{\delta}{M}+\frac{\delta^{2}}{M^{2}}+...\] (40) In MP: \[\phi=\frac{U-1}{U}=\left(1+\frac{M_{1}+M_{2}}{2aM_{1}}\delta\right)\times\left(1+\frac{M_{1}+M_{2}+2a}{2aM_{1}}\delta+\frac{1}{2aM_{1}}\delta^{2}\right)^{-1}=1-\frac{\delta}{M_{1}}+\frac{M_{2}+2a}{2aM_{1}^{2}}\delta^{2}+...\] (41) Note that in the case \(a\rightarrow\infty\) or \(M_{2}\to 0\) this reduces to the RN expression (either of these conditions makes the second black hole irrelevant to the neighbourhood of the first one). \[X=E-\gamma\phi\] (42) Then \(X=E-\gamma\phi\) is expanded trivially. Also, \[X^{2}=X_{h}^{2}+2X_{h}\frac{\gamma}{M_{1}}\delta+\left(\frac{\gamma^{2}}{M_{1}^{2}}-2X_{h}\gamma\frac{2a+M_{2}}{2aM_{1}^{2}}\right)\delta^{2}+...\] (43)

One can clearly see that the 2nd kinematic condition \(X^{2}-Yf\geq 0\) (see sec. 2.3) leads to the same result as before for \(L=0\), i.e. \(|\gamma|\geq 1\), and the 1st condition implies that it is positive. These results demonstrate definitively that the BSW effect for motion along the \(z\)-axis in the MP metric is exactly identical to that for radial motion in the extremal RN metric. 
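The Taylor expansions (39)-(41) are easy to double-check symbolically; the short sympy script below is our own illustrative verification of the leading near-horizon behaviour on the \(z\)-axis, not part of the original derivation.

```python
import sympy as sp

# Near-horizon expansions for the two-centre Majumdar-Papapetrou metric
# on the axis rho = 0, z > a, with delta = z - a.
delta, a, M1, M2 = sp.symbols('delta a M1 M2', positive=True)

U = 1 + M1 / delta + M2 / (2 * a + delta)

# g_tt = -U**(-2): leading term should be delta**2 / M1**2, cf. (39).
g_tt_inv = sp.cancel(U**-2)          # rewrite as a rational function, analytic at delta = 0
print(sp.series(g_tt_inv, delta, 0, 3))

# Electrostatic potential phi = 1 - 1/U: expansion (41),
# 1 - delta/M1 + (M2 + 2*a)/(2*a*M1**2) * delta**2 + ...
phi = sp.cancel(1 - 1 / U)
print(sp.series(phi, delta, 0, 3))
```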
### Bardeen regular black hole

The first solution of the Einstein equations describing a regular black hole was obtained by Bardeen [27]; it is given by [28] \[\begin{split} ds^{2}=-f(r)dt^{2}+f(r)^{-1}dr^{2}+r^{2}d\Omega^{2}\,,\\ f(r)\equiv 1-\frac{2Mr^{2}}{(r^{2}+g^{2})^{\frac{3}{2}}}\,.\end{split} \tag{44}\] Here \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\varphi^{2}\) is the metric on the unit two-sphere, \(M\) is the mass of the black hole, and \(g\) is the magnetic monopole charge of the non-linear self-gravitating magnetic field. The solution (44) is interpreted as the gravitational field of a nonlinear monopole, i.e., as a magnetic solution to the Einstein field equations coupled to non-linear electrodynamics. Our consideration will follow [20] closely, but with several additions:

* first of all, we do not assume the black hole to be a critical (extremal) one, i.e. \(f^{\prime}(r_{h})\neq 0\) is allowed (where \(r_{h}\) is the event horizon location);
* we do not specify the potential \(\phi\) of the external magnetic field.

The BSW effect has been considered for Bardeen and other regular black holes in the paper [29]. We consider the near-horizon collision of two particles 1 and 2 which have been injected from infinity. In the process of the collision we obtain particle 3, which goes away from the black hole, and particle 4, which falls into the black hole. We do not keep track of the particle masses, assuming that they are equal; this is not important for the analysis below. Also, we assume radial motion of the particles, i.e. \(L_{i}=0\), \(i=1,2,3,4\), where \(L\) is the angular momentum per unit mass. The conservation of magnetic charge \(\mu\), energy per unit mass \(E\) and radial momentum gives us \[\mu_{1}+\mu_{2}=\mu_{3}+\mu_{4}\,, \tag{45}\] \[X_{1}+X_{2}=X_{3}+X_{4}\,,\qquad X_{i}\equiv E_{i}-\mu_{i}\phi\,, \tag{46}\] \[-Z_{1}-Z_{2}=Z_{3}-Z_{4}\,,\qquad Z_{i}=\sqrt{X_{i}^{2}-f(r)}\,. \tag{47}\] Following the classification given in [19], we consider three types of particles:

1. Usual particle. We consider a near-horizon collision where \(f(r)\ll 1\), which gives us \[Z\approx X_{h}-\frac{f}{2X_{h}}\,. \tag{48}\] By the subscript \(h\) we denote quantities evaluated on the event horizon.
2. Critical particle. For a critical particle \(X_{h}=0\); in this case we put \(Z=0\).
3. Slightly non-critical particle. In this case we assume \[\mu=\frac{E(1-\delta)}{\phi}\,,\quad\delta>0\,. \tag{49}\] Then one can write \[Z\approx E\delta-\frac{f}{2E\delta}\,. \tag{50}\]

We assume that particles 1 and 4 are usual particles, particle 2 is critical, and particle 3 is slightly non-critical. Substituting (48), (49), (50) into (47) and using (46) to eliminate \(X_{4}\) (\(X_{4}=X_{1}-X_{3}\), since \(X_{2}\approx 0\) for the critical particle), one obtains the following expression: \[\frac{f}{2X_{h,1}}=2E_{3}\delta-\frac{f}{2E_{3}\delta}+\frac{f}{2X_{h,1}-2E_{3}\delta}\,. \tag{51}\] Reducing the expression (51) to a common denominator (we multiply (51) by \(8X_{h,1}E_{3}\delta(X_{h,1}-E_{3}\delta)\)) and neglecting \(O(\delta^{3})\), one obtains the following inequality: \[\alpha E_{3}^{2}+X_{h,1}\delta fE_{3}-X_{h,1}^{2}f\geq 0\,, \tag{52}\] \[\alpha\equiv 4X_{h,1}^{2}\delta^{2}+\delta^{2}f>0\,.\] From this inequality we obtain a lower bound on \(E_{3}\): \[E_{3}\geq\frac{X_{h,1}\delta f+\sqrt{X_{h,1}^{2}\delta^{2}f^{2}+4X_{h,1}^{2}\alpha}}{2\alpha}\,. \tag{53}\] As in [20], we did not find an upper limit for \(E_{3}\). However, as we have shown in Sec. 2.5, in the RN spacetime the energy extraction for real particles is negligibly small. 
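The behaviour of the lower bound (53) as the escaping particle approaches criticality can be seen from a quick numerical evaluation; the parameter values used in the following sketch are our own illustrative choices (with \(f\ll 1\) mimicking a near-horizon collision).

```python
import math

def E3_lower_bound(X_h1, delta, f):
    """Right-hand side of (53)."""
    alpha = 4.0 * X_h1**2 * delta**2 + delta**2 * f
    return (X_h1 * delta * f
            + math.sqrt(X_h1**2 * delta**2 * f**2 + 4.0 * X_h1**2 * alpha)) / (2.0 * alpha)

X_h1, f = 1.0, 1e-8          # illustrative values
for delta in (1e-1, 1e-2, 1e-3, 1e-4):
    print(delta, E3_lower_bound(X_h1, delta, f))
# For f << X_h1**2 the bound behaves like 1/(2*delta), so the required energy of
# the escaping, slightly non-critical particle grows without limit as delta -> 0,
# in line with the absence of an upper limit on E_3 noted above.
```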
As noted at the end of the previous subsection, energy extraction with real particles is expected to be negligibly small, and we expect this to be true for the Bardeen metric as well.

## 4 Conclusion

The conditions for divergent center-of-mass energy in the RN metric and in the MP metric are found to be very strict: the work suggests conditions on the criticality of the particles and the extremality of the black holes. It seems that satisfying these conditions is very unlikely. There is a possibility that the models used do not fully capture all the important features of real black holes (for example, the accretion disk was not included). Therefore, the obtained conditions could be inapplicable to real black holes, and high-energy collisions could be more likely. Therefore, it may be beneficial to perform a similar analysis for a wider range of black hole models, searching for a metric where such conditions would be more realistic, so that these effects would have a chance of being observed experimentally.

We have shown that energy extraction is also possible in the Bardeen regular black hole spacetime. As in the usual Reissner-Nordstrom spacetime [20], we were able to find only a lower restriction on the extracted energy. However, for the RN black hole we have shown in Table 1 that energy extraction in collisions of real particles falling into the black hole has extremely low lower bounds on the extracted masses: even for relatively heavy particles like the nucleus of the gold atom the bound is many orders of magnitude lower than the electron mass, so it only tells us that these processes cannot emit ultra-light particles such as neutrinos. We expect the same results for the Bardeen regular black hole, even though an upper bound is absent there. It is reasonable to assume that even if collisions with creation of ultra-heavy particles escaping to infinity are physically possible, they must be severely outnumbered by collisions that produce usual particles with masses smaller than the masses of the colliding constituents. Therefore, this result (the existence of a lower bound) is probably not useful for actual astrophysical purposes. On the other hand, the derived inequality (31) differs substantially from inequalities such as (21) derived in [20], because it explicitly includes \(f\), which is related to the spacetime curvature. We claim that for a fixed \(f\) it is possible to find a process that generates a particle of mass \(m_{3}\) arbitrarily close to this upper bound. Therefore, if the collision occurs sufficiently close to the horizon (in other words, if \(f\) is made sufficiently small), an ultra-massive particle detectable at infinity can be created. However, in the derivation we avoided the problem of the existence of particles moving outward in the close vicinity of the event horizon. It deserves further investigation to understand whether such particles actually exist near black holes in our universe.

### Acknowledgments

We thank Dr. Oleg Zaslavskii (Kharkiv National University) for comments and suggestions.

### Funding

This work was funded by RSF grant No. 22-22-00112 and supported by the Letovo School Charity Fund.
2306.15935
Partial Data Inverse Problems for the Nonlinear Schrödinger Equation
In this paper we prove the uniqueness and stability in determining a time-dependent nonlinear coefficient $\beta(t, x)$ in the Schr\"odinger equation $(i\partial_t + \Delta + q(t, x))u + \beta u^2 = 0$, from the boundary Dirichlet-to-Neumann (DN) map. In particular, we are interested in the partial data problem, in which the DN-map is measured on a proper subset of the boundary. We show two results: a local uniqueness of the coefficient at the points where certain type of geometric optics (GO) solutions can reach; and a stability estimate based on the unique continuation property for the linear equation.
Ru-Yu Lai, Xuezhu Lu, Ting Zhou
2023-06-28T05:34:57Z
http://arxiv.org/abs/2306.15935v3
# Partial data inverse problems for the nonlinear time-dependent Schrodinger equation ###### Abstract. In this paper we prove the uniqueness and stability in determining a time-dependent nonlinear coefficient \(\beta(t,x)\) in the Schrodinger equation \((i\partial_{t}+\Delta+q(t,x))u+\beta u^{2}=0\), from the boundary Dirichlet-to-Neumann (DN) map. In particular, we are interested in the partial data problem, in which the DN-map is measured on a proper subset of the boundary. We show two results: a local uniqueness of the coefficient at the points where certain type of geometric optics (GO) solutions can reach; and a stability estimate based on the unique continuation property for the linear equation. **Key words**: Nonlinearity, Inverse problems, Time-dependent Schrodinger equation ###### Contents * 1 Introduction * 1.1 Main results * 2 Well-posedness of the Dirichlet problem * 2.1 Notations * 2.2 Well-posedness * 3 Proof of Theorem 1.1 * 3.1 Geometrical optics solutions based on gaussian beam quasimodes * 3.2 Finite difference * 3.3 An integral identity * 3.4 Proof of Theorem 1.1 * 4 Proof of Theorem 1.2 and Theorem 1.3 * 4.1 Geometric optics * 4.2 Unique continuation property (UCP) * 4.3 The integral identity * 4.4 Proof of the stability estimate (Theorem 1.2) * 4.5 Proof of Theorem 1.3

## 1. Introduction

We investigate a partial data inverse problem for the time-dependent Schrodinger equation with a nonlinear term, for example, in modeling the recovery of the nonlinear electromagnetic second-order polarization potential from the partial boundary measurements of electromagnetic fields. Let \(\Omega\subset\mathbb{R}^{n}\), \(n\geq 3\) be a bounded and convex domain with smooth boundary \(\partial\Omega\). For \(T>0\), we denote \(Q:=(0,T)\times\Omega\) and \(\Sigma:=(0,T)\times\partial\Omega\). Suppose \(\Gamma\) is an open proper subset of the boundary \(\partial\Omega\) and denote \[\Sigma^{\sharp}:=(0,T)\times\Gamma.\] For \(q(t,x)\in C^{\infty}(Q)\) and \(\beta(t,x)\in C^{\infty}(Q)\), we consider the nonlinear dynamic Schrodinger equation \[\left\{\begin{array}{rcll}\left(i\partial_{t}+\Delta+q(t,x)\right)u(t,x)+\beta(t,x)u(t,x)^{2}&=&0&\text{ on }Q,\\ u(t,x)&=&f&\text{ on }\Sigma,\\ u(t,x)&=&0&\text{ on }\{0\}\times\Omega,\end{array}\right. \tag{1.1}\] where \(\Delta u:=\sum_{j=1}^{n}\frac{\partial^{2}u}{\partial x_{j}^{2}}\) is the spatial Laplacian. Based on the well-posedness result in Proposition 2.2, the Dirichlet-to-Neumann (DN) map \(\Lambda_{q,\beta}\) is well-defined by \[\Lambda_{q,\beta}:f\mapsto\partial_{\nu}u\big{|}_{\Sigma^{\sharp}},\quad\quad\quad f\in\mathcal{S}_{\lambda}(\Sigma)\] for \(\lambda>0\) sufficiently small (see (2.1) for the definition of \(\mathcal{S}_{\lambda}(\Sigma)\)), where \(\partial_{\nu}u:=\frac{\partial u}{\partial\nu}\) and \(\nu(x)\) is the unit outer normal to \(\partial\Omega\) at the point \(x\in\partial\Omega\). The inverse problem we consider in this paper is the determination of the nonlinear potential \(\beta(t,x)\) from the partial DN-map \(\Lambda_{q,\beta}\).

### Main results

For a set \(B\subset\Omega\), we define \(\mathcal{M}_{B}\) by \[\mathcal{M}_{B}:=\{g\in C^{\infty}(Q):\,\|g\|_{C^{\infty}(Q)}\leq m_{0},\quad\text{and }g=0\text{ on }(0,T)\times B\}\] for some positive constant \(m_{0}\). Let \(\mathcal{O}\subset\Omega\) be an open neighborhood of the boundary \(\partial\Omega\) and \(\mathcal{O}^{\prime}\subset\Omega\) be an open neighborhood of \(\Gamma^{c}:=\partial\Omega\setminus\Gamma\). 
We define an open subset \(\Omega_{\Gamma}\) of \(\Omega\) as \[\Omega_{\Gamma}:=\big{\{}p\in\Omega\ :((\gamma_{p,\omega_{1}}\cup\gamma_{p,\omega_{2}}\cup\gamma_{p,\omega_{1}+\omega_{2}})\cap\partial\Omega)\subset\Gamma\text{ for some }\omega_{1},\omega_{2}\in\mathbb{S}^{n-1},\omega_{1}\perp\omega_{2}\big{\}}, \tag{1.2}\] where \(\gamma_{p,\omega}\) denotes the straight line through a point \(p\) in a direction \(\omega\) in \(\mathbb{R}^{n}\) and \(\mathbb{S}^{n-1}\) is the unit sphere centered at the origin. Our main results are stated as follows:

**Theorem 1.1** (Local uniqueness).: _Assume \(q\) and \(\beta_{j}\) are in \(C^{\infty}(Q)\) for \(j=1,\,2\). Suppose \(\Lambda_{q,\beta_{1}}(f)=\Lambda_{q,\beta_{2}}(f)\) for all \(f\in\mathcal{S}_{\lambda}(\Sigma)\) with support satisfying \(\text{supp}(f)\subset\Sigma^{\sharp}\). Then \(\beta_{1}(t,x)=\beta_{2}(t,x)\) for all \((t,x)\in(0,T)\times\Omega_{\Gamma}\)._

**Theorem 1.2** (Stability estimate).: _Assume \(\beta_{j}\in C^{\infty}(Q)\) for \(j=1,\,2\). Suppose that \((q,\beta_{1}-\beta_{2})\in\mathcal{M}_{\mathcal{O}}\times\mathcal{M}_{\mathcal{O}}\). Let \(\Lambda_{q,\beta_{j}}:\mathcal{S}_{\lambda}(\Sigma)\to L^{2}(\Sigma^{\sharp})\) be the Dirichlet-to-Neumann maps of the nonlinear Schrodinger equation (1.1) associated with \(\beta_{j}\) for \(j=1,\,2\). There exists a sufficiently small \(\delta_{0}>0\) so that if the DN maps satisfy_ \[\|(\Lambda_{q,\beta_{1}}-\Lambda_{q,\beta_{2}})f\|_{L^{2}(\Sigma^{\sharp})}\leq\delta\quad\quad\text{ for all }f\in\mathcal{S}_{\lambda}(\Sigma),\] _for some \(\delta\in(0,\delta_{0})\), then for any \(0<T^{*}<T\), there exist constants \(C>0\) independent of \(\delta\) and \(0<\sigma<1\) such that the following stability estimate holds:_ \[\|\beta_{1}-\beta_{2}\|_{L^{2}((0,T^{*})\times\Omega)}\leq C\left(\delta^{\frac{1}{12}}+|\log(\delta)|^{-\sigma}\right).\]

The logarithmic-type stability estimate here is expected since we only take measurements on a partial region of the boundary of the domain. The uniqueness result of Theorem 1.3 follows directly from Theorem 1.1 and Theorem 1.2 by letting \(\delta\to 0\). In particular, due to Theorem 1.1, the assumption on \(\beta_{1}-\beta_{2}\) can be relaxed to \(\mathcal{M}_{\mathcal{O}^{\prime}}\).

**Theorem 1.3** (Global uniqueness).: _Assume \(\beta_{j}\in C^{\infty}(Q)\) for \(j=1,\,2\). Suppose that \((q,\beta_{1}-\beta_{2})\in\mathcal{M}_{\mathcal{O}}\times\mathcal{M}_{\mathcal{O}^{\prime}}\). Let \(\Lambda_{q,\beta_{j}}:\mathcal{S}_{\lambda}(\Sigma)\to L^{2}(\Sigma^{\sharp})\) be the Dirichlet-to-Neumann maps of the nonlinear Schrodinger equation (1.1) with \(\beta_{j}\) for \(j=1,\,2\). If \(\Lambda_{q,\beta_{1}}(f)=\Lambda_{q,\beta_{2}}(f)\) for all \(f\in\mathcal{S}_{\lambda}(\Sigma)\), then_ \[\beta_{1}=\beta_{2}\quad\text{ in }Q.\]

The nonlinear Schrodinger equation (NLS) in (1.1) can be used to model a basic second harmonic generation process in nonlinear optics. A similar NLS is the Gross-Pitaevskii (GP) equation \[(i\partial_{t}+\Delta+q)u+\beta(t,x)|u|^{2}u=0\] for the single-atom wave function, used in a mean-field description of Bose-Einstein condensates. See [44] for discussions of various NLS models based on integrability and the existence of stable soliton solutions, for instance with a saturable nonlinear term \(|u|^{2}(1+|u|^{2}/u_{0}^{2})^{-1}\), with \(u_{0}\) a constant, or with the term \((|u|^{2}-|u|^{4})u\). We remark in Remark 4.3 that our approach can be generalized to power-type nonlinearities other than the quadratic one. 
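To indicate informally why the DN-map sees the nonlinear coefficient \(\beta\), consider boundary data \(\varepsilon f\) with a small parameter \(\varepsilon>0\) and expand the corresponding solution of (1.1) formally as \(u_{\varepsilon}=\varepsilon u^{(1)}+\varepsilon^{2}u^{(2)}+O(\varepsilon^{3})\). Collecting powers of \(\varepsilon\) gives, purely formally, \[(i\partial_{t}+\Delta+q)\,u^{(1)}=0,\qquad u^{(1)}\big{|}_{\Sigma}=f,\qquad u^{(1)}\big{|}_{t=0}=0,\] \[(i\partial_{t}+\Delta+q)\,u^{(2)}=-\beta\,\big{(}u^{(1)}\big{)}^{2},\qquad u^{(2)}\big{|}_{\Sigma}=0,\qquad u^{(2)}\big{|}_{t=0}=0,\] so that \(\partial_{\varepsilon}^{2}\Lambda_{q,\beta}(\varepsilon f)\big{|}_{\varepsilon=0}=2\,\partial_{\nu}u^{(2)}\big{|}_{\Sigma^{\sharp}}\). The Neumann data of \(u^{(2)}\) on \(\Sigma^{\sharp}\) thus carry information on \(\beta\) tested against products of solutions of the linear equation; this heuristic second-order linearization is made rigorous via finite differences and the integral identity of Section 3.3.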
Similar discussions of such nonlinear models can be found in [39] for the GP equation. Similarly to the case of hyperbolic equations, results on the determination of coefficients for dynamic Schrodinger equations are usually classified into two categories: time-independent and time-dependent coefficients. For the linear equation, stability estimates for recovering the time-independent electric potential or the magnetic field from the knowledge of the dynamical Dirichlet-to-Neumann map were shown in [1, 4, 5, 6, 8, 11]. A vast literature is devoted to the inverse problems associated with the stationary Schrodinger equation, known under the name of Calderon problem; see [45, 47] for the major results when the DN-map is measured on the whole boundary and see [12, 14, 15, 23] when measured on part of the boundary. The paper [13] by Eskin is known to be the first to show the unique determination of time-dependent electric and magnetic potentials of the Schrodinger equation from the DN-map. Stability for the inverse problem with full boundary measurement was shown in [24, 25, 43]. The stable determination of time-dependent coefficients appearing in the linear Schrodinger equation from a partial DN map is then given in [7]. The stability estimate for the problem of determining the time-dependent zeroth order coefficient in a parabolic equation from a partial parabolic Dirichlet-to-Neumann map can be found in [10].

In dealing with inverse problems for nonlinear PDEs, the first-order linearization of the DN-map was introduced to recover the linear coefficient of the medium, and sometimes the nonlinear coefficients. See [17, 18, 19, 20, 21, 46] for demonstrations for certain semilinear and quasilinear elliptic and parabolic equations. Recently the higher-order linearization, also called the multifold linearization, of the measurement operators (e.g., the Dirichlet-to-Neumann map or the source-to-solution map) has been applied to determine nonlinear coefficients in more general nonlinear differential equations. For example, based on this scheme, the nonlinear interactions of distorted plane waves were analyzed to recover the metric of a Lorentzian space-time manifold and nonlinear coefficients using the measurements of solutions to nonlinear hyperbolic equations [28, 40, 48]. In contrast, the corresponding problems for linear hyperbolic equations are still open; see also [9, 40] and the references therein. The method has also been applied to study elliptic equations with power-type nonlinearities, including stationary nonlinear Schrodinger equations and magnetic Schrodinger equations, see [26, 27, 29, 30, 31, 36, 37, 41]. A demonstration of the method can be found in [3, 2] on nonlinear Maxwell's equations, in [32, 33] on nonlinear kinetic equations, and in [38] on semilinear wave equations. In [34], we solved an inverse problem for the magnetic Schrodinger equation with nonlinearity in both the magnetic and electric potentials using a partial DN-map, and its nonlocal fractional diffusion version in [35]. For the nonlinear dynamic Schrodinger equation considered in this paper, unique determination of time-dependent linear and nonlinear potentials from the knowledge of a source-to-solution map was discussed in [39].

The paper is organized as follows. In Section 2, we establish the well-posedness of the direct problem, the initial boundary value problem for our nonlinear time-dependent Schrodinger equation in a bounded domain for well-chosen boundary conditions. 
Then we prove the local uniqueness result Theorem 1.1 in Section 3 by constructing the geometrical optics (GO) solutions for the linear Schrodinger equation that concentrate near straight lines intersecting at a point. The higher-order (multifold) linearization step is conducted via finite difference expansions in this section to derive the needed integral identity. Then we prove the stability estimate Theorem 1.2 in Section 4, where we implement a more standard type of linear GO solutions and adopt a unique continuation argument to control the boundary term that is not accessible to the partial data measurement. Finally, we present the short proof of Theorem 1.3 for a global uniqueness result by combining the assumptions of the previous two theorems.

## Acknowledgements

R.-Y. Lai is partially supported by the National Science Foundation through grant DMS-2006731.

## 2. Well-posedness of the Dirichlet problem

### Notations

Let \(r\) and \(s\) be two non-negative real numbers, \(m\) be a non-negative integer and let \(X\) be one of \(\Omega\), \(\partial\Omega\) and \(\Gamma\). We introduce the following Hilbert spaces:

* the space \(L^{2}(0,T;H^{s}(X))\) that consists of all measurable functions \(f:[0,T]\to H^{s}(X)\) with norm \[\|f\|_{L^{2}(0,T;H^{s}(X))}:=\left(\int_{0}^{T}\|f(t,\cdot)\|_{H^{s}(X)}^{2}\,dt\right)^{1/2}<\infty;\]
* the Sobolev space \[H^{m}(0,T;L^{2}(X)):=\{f:\,\partial_{t}^{\alpha}f\in L^{2}(0,T;L^{2}(X))\quad\text{for }\alpha=0,1,\ldots,m\};\] and the interpolation \[H^{r}(0,T;L^{2}(X))=[H^{m}(0,T;L^{2}(X)),L^{2}(0,T;L^{2}(X))]_{\theta},\quad(1-\theta)m=r.\]

We also define the Hilbert space \[H^{r,s}((0,T)\times X):=H^{r}(0,T;L^{2}(X))\cap L^{2}(0,T;H^{s}(X)),\] whose norm is given by \[\|f\|_{H^{r,s}((0,T)\times X)}:=\left(\int_{0}^{T}\|f(t,\cdot)\|_{H^{s}(X)}^{2}dt+\|f\|_{H^{r}(0,T;L^{2}(X))}^{2}\right)^{1/2}.\] For more details on these definitions, we refer to Chapter 1 and Chapter 4 in [42]. In particular, for integer \(m\geq 1\), we define \[\mathcal{H}_{0}^{m}(Q):=\{f\in H^{m}(Q):\,\partial_{t}^{\alpha}f|_{t=0}=0,\quad\alpha=0,\cdots,m-1\}.\] For \(\lambda>0\) we define the subset \(\mathcal{S}_{\lambda}(\Sigma)\) of \(H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)\) by \[\mathcal{S}_{\lambda}(\Sigma):=\Big{\{}f\in H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma):\,\,\partial_{t}^{m}f(0,\cdot)=0\text{ on }\partial\Omega\text{ for integers }m<2\kappa+\frac{3}{2},\] \[\text{ and }\quad\|f\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}\leq\lambda\Big{\}}. \tag{2.1}\]

### Well-posedness

We first show the unique existence of the solution to the linear equation and, based on this, apply the contraction mapping principle to deduce the well-posedness of the nonlinear equation.

**Proposition 2.1**.: _(Well-posedness for the linear equations) Let \(2\kappa>\frac{n+1}{2}\) be an integer. Suppose \(q\in C^{\infty}(Q)\). For any \(f\in H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)\) satisfying \(\partial_{t}^{m}f(0,\cdot)=0\) for \(m<2\kappa+\frac{3}{2}\), there exists a unique solution \(u_{f}\in H^{2\kappa}(Q)\) to the linear system:_ \[\left\{\begin{array}{rclcl}(i\partial_{t}+\Delta+q)\,u_{f}&=&0&&\mbox{in }Q,\\ u_{f}&=&f&&\mbox{on }\Sigma,\\ u_{f}&=&0&&\mbox{on }\{0\}\times\Omega,\end{array}\right. \tag{2.2}\] _and \(u_{f}\) satisfies the estimate_ \[\|u_{f}\|_{H^{2\kappa}(Q)}\leq C\|f\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}. 
\tag{2.3}\] Proof.: In light of [[42], Chapter 4, Theorem 2.3], there exists a function \(\tilde{u}\in H^{2\kappa+2,2\kappa+2}(Q)\) such that for \(0\leq\alpha<2\kappa+\frac{3}{2}\), \[\partial_{t}^{\alpha}\tilde{u}(0,\cdot)=0\quad\mbox{ in }\Omega,\qquad\tilde{u} |_{\Sigma}=f, \tag{2.4}\] and \[\|\tilde{u}\|_{H^{2\kappa+2}(Q)}\leq C\|\tilde{u}\|_{H^{2\kappa+2,2\kappa+2}(Q )}\leq C\|f\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}\] for some positive constant \(C\), depending only on \(\Omega\) and \(T\), where the first inequality holds by noticing Proposition 2.3 in Chapter 4 in [42]. Let \[F:=-(i\partial_{t}+\Delta+q)\tilde{u}.\] Since \(\tilde{u}\in H^{2\kappa+2}(Q)\), we get \(F\in H^{2\kappa+1,2\kappa}(Q)\subset H^{2\kappa,2\kappa}(Q)\) implying \(F\in H^{2\kappa}(Q)\) by using Proposition 2.3 in Chapter 4 in [42] again. In addition, due to (2.4), \(F\) has zero initial condition up to \(2\kappa\) derivative w.r.t. \(t\), which makes \(F\in\mathcal{H}_{0}^{2\kappa}(Q)\). From Lemma 4 of [39], there exists a unique solution \(u_{*}\) to the Schrodinger equation \((i\partial_{t}+\Delta+q)u_{*}=F\) with \(F|_{t=0}=0\) and \(u_{*}|_{t=0}=u_{*}|_{\Sigma}=0\). We denote by \(\mathcal{L}^{-1}\) the solution operator of this inhomogeneous Dirichlet problem for the linear Schrodinger equation, that is, \(\mathcal{L}^{-1}(F)=u_{*}\). In particular, we have that \(\mathcal{L}^{-1}:\ \mathcal{H}_{0}^{2\kappa}(Q)\ \rightarrow\ \mathcal{H}_{0}^{2\kappa}(Q)\) is a bounded linear operator. Therefore, we obtain \[\|u_{*}\|_{H^{2\kappa}(Q)}\leq C\|F\|_{\mathcal{H}_{0}^{2\kappa}(Q)}\leq C\|f \|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)},\] and \(u_{f}=\tilde{u}+u_{*}\in H^{2\kappa}(Q)\) satisfies \[\|u_{f}\|_{H^{2\kappa}(Q)}\leq\|\tilde{u}\|_{H^{2\kappa}(Q)}+\|u_{*}\|_{H^{2 \kappa}(Q)}\leq C\|f\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}.\] **Proposition 2.2**.: _(Well-posedness for the nonlinear equation) Let \(2\kappa>\frac{n+1}{2}\) be an integer. Suppose \(q\) and \(\beta\) are in \(C^{\infty}(Q)\). For any \(f\in\mathcal{S}_{\lambda}(\Sigma)\) (defined in (2.1)) with \(\lambda>0\) sufficiently small, there exists a unique solution \(u\in H^{2\kappa}(Q)\) to the problem (1.1) and it satisfies the estimate_ \[\|u\|_{H^{2\kappa}(Q)}\leq C\|f\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2} }(\Sigma)}, \tag{2.5}\] _where the constant \(C>0\) is independent of \(f\)._ Proof.: If \(u\) is a solution to (1.1), we set \(w:=u-u_{f}\) which will solve \[\left\{\begin{array}{rclcl}(i\partial_{t}+\Delta+q)w&=&-\beta(t,x)(w+u_{f}) ^{2}&&\mbox{in }Q,\\ w&=&0&&\mbox{on }\Sigma,\\ w&=&0&&\mbox{on }\{0\}\times\Omega,\end{array}\right. \tag{2.6}\] where \(u_{f}\) is the solution to (2.2). Or equivalently, \(w\) is the solution to \[w-\mathcal{L}^{-1}\circ\mathcal{K}w=0,\] where \(\mathcal{K}w:=-\beta(t,x)(w+u_{f})^{2}\). For \(2\kappa>\frac{n+1}{2}\), using the facts that \(H^{2\kappa}(Q)\) is a Banach algebra and that \(u_{f}\in\mathcal{H}_{0}^{2\kappa}(Q)\), we have that \(\mathcal{K}:\ \mathcal{H}_{0}^{2\kappa}\to\ \mathcal{H}_{0}^{2\kappa}\) is bounded. 
We define for \(a>0\) the subset \[X_{a}(Q):=\{u\in\mathcal{H}_{0}^{2\kappa}(Q);\ \|u\|_{H^{2\kappa}(Q)}\leq a\}.\] From (2.3), we deduce \[\|(\mathcal{L}^{-1}\circ\mathcal{K})w\|_{H^{2\kappa}(Q)}\leq C\|\mathcal{K}w\| _{H^{2\kappa}(Q)}\leq C\left(\|w\|_{H^{2\kappa}(Q)}^{2}+\|u_{f}\|_{H^{2\kappa}( Q)}^{2}\right)\leq C(a^{2}+\lambda^{2})\leq a\] for \(w\in X_{a}(Q)\) and \[\|(\mathcal{L}^{-1}\circ\mathcal{K})w_{1}-(\mathcal{L}^{-1}\circ \mathcal{K})w_{2}\|_{H^{2\kappa}(Q)}\leq C\|\mathcal{K}w_{1}-\mathcal{K}w_{2 }\|_{H^{2\kappa}(Q)}\] \[\leq C\left(\|w_{1}\|_{H^{2\kappa}(Q)}+\|w_{2}\|_{H^{2\kappa}(Q) }+\|u_{f}\|_{H^{2\kappa}(Q)}\right)\|w_{1}-w_{2}\|_{H^{2\kappa}(Q)}\] \[\leq C(a+\lambda)\|w_{1}-w_{2}\|_{H^{2\kappa}(Q)}\] \[\leq K\|w_{1}-w_{2}\|_{H^{2\kappa}(Q)},\quad\text{ for }w_{1},\,w_{2}\in X_{a}(Q)\] with \(K\in(0,1)\) provided that we choose \(0<\lambda<a<1\) and \(a\) small enough. This proves that \(\mathcal{L}^{-1}\circ\mathcal{K}\) is a contraction map on \(X_{a}(Q)\), hence there exists a fixed point \(w\in X_{a}(Q)\) as the solution to (2.6). Moreover, \[\|w\|_{H^{2\kappa}(Q)}=\|(\mathcal{L}^{-1}\circ\mathcal{K})w\|_{H ^{2\kappa}(Q)} \leq C\|\mathcal{K}w\|_{H^{2\kappa}(Q)}\] \[\leq C(\|w\|_{H^{2\kappa}(Q)}^{2}+\|u_{f}\|_{H^{2\kappa}(Q)}^{2})\] \[\leq Ca\|w\|_{H^{2\kappa}(Q)}+C\lambda\|u_{f}\|_{H^{2\kappa}(Q)},\] which further implies \[\|w\|_{H^{2\kappa}(Q)}\leq C\lambda\|u_{f}\|_{H^{2\kappa}(Q)}\] by choosing \(a\) sufficiently small. Combined with (2.3), we eventually obtain (2.5). ## 3. Proof of Theorem 1.1 ### Geometrical optics solutions based on gaussian beam quasimodes In this section we construct the geometrical optics solutions to the linear Schrodinger equation \[(i\partial_{t}+\Delta+q)u=0,\] in \(Q\), having the form \[u(t,x)=e^{i\rho(\Theta(x)-|\omega|^{2}\rho t)}a(t,x)+r(t,x)\] and vanishing on part of the boundary, where the leading part \(e^{i\rho(\Theta(x)-|\omega|^{2}\rho t)}a(t,x)\) follows the construction of gaussian beam approximate solutions concentrated near a straight line in direction \(\omega\) as \(\rho\to\infty\). For completeness, we present a detailed adaptation, to our equation, of the construction in [16], which was for the operator \(-\Delta_{g}-s^{2}\) on its transversal manifold \((M,g)\) and for large complex frequency \(s\). The analogous construction for the wave equation can be found in [22]. For other similar WKB type constructions, we refer the readers to [16, 24, 39]. Let \(p\) be a point in \(\Omega\) and \(\omega\in\mathbb{R}^{n}\) be a nonzero direction. Denote by \(\gamma_{p,\omega}\) the straight line through \(p\) in direction \(\omega\), parametrized by \(\gamma_{p,\omega}(s)=p+s\hat{\omega}\) for \(s\in\mathbb{R}\), where \(\hat{\omega}:=\omega/|\omega|\). We can choose \(\omega_{2},\ldots,\omega_{n}\in\mathbb{R}^{n}\) such that \(\mathcal{A}=\{\hat{\omega},\omega_{2},\ldots,\omega_{n}\}\) forms an orthonormal basis of \(\mathbb{R}^{n}\). Under this basis, we identify \(x\in\mathbb{R}^{n}\) by the new coordinate \(z=(s,z^{\prime})\) where \(z^{\prime}:=(z_{2},\ldots,z_{n})\), that is, \[x=p+s\hat{\omega}+z_{2}\omega_{2}+\ldots+z_{n}\omega_{n}.\] In particular, \(\gamma_{p,\omega}(s)=(s,0,\ldots,0)\). We consider the gaussian beam approximate solutions \(v\) with ansatz \[v(t,z)=e^{i\rho(\varphi(z)-|\omega|^{2}\rho t)}a(t,z;\rho),\quad\rho>0, \tag{3.1}\] in the coordinate \((t,z)\in\mathbb{R}^{n+1}\). The aim is to find smooth complex functions \(\varphi\) and \(a\). 
Let the Schrodinger operator act on \(v\) and get \[e^{-i\rho(\varphi(z)-|\omega|^{2}\rho t)}(i\partial_{t}+\Delta+q)v(t,z)=\rho^{ 2}(|\omega|^{2}-|\nabla\varphi|^{2})a+i\rho(2\nabla\varphi\cdot\nabla a+a \Delta\varphi)+(i\partial_{t}+\Delta+q)a. \tag{3.2}\] We _first choose the phase function_\(\varphi(z)\). The equation (3.2) suggests that we will choose the complex phase function \(\varphi\) satisfying the eikonal equation \[\mathcal{E}(\varphi):=|\nabla\varphi|^{2}-|\omega|^{2}=0\quad\text{up to $N$-th order of $z^{\prime}$ on $\gamma_{p,\omega}$},\] that is, \(\mathcal{E}(\varphi)=O(|z^{\prime}|^{N+1}).\) We substitute \(\varphi\) of the form \[\varphi(s,z^{\prime})=\sum_{k=0}^{N}\varphi_{k}(s,z^{\prime}),\quad\text{ where }\varphi_{k}(s,z^{\prime})=\sum_{|\alpha^{\prime}|=k}\frac{\varphi_{k,\alpha^{ \prime}}(s)}{\alpha^{\prime}!}(z^{\prime})^{\alpha^{\prime}}.\] Here \(\alpha\) is an \(n\)-dim multi-index \(\alpha=(\alpha_{1},\alpha^{\prime})\in\mathbb{Z}_{+}^{n}\) with \(\alpha^{\prime}=(\alpha_{2},\ldots,\alpha_{n})\), and \[\varphi_{0}(z)=|\omega|s,\quad\varphi_{1}(z)=0.\] We obtain \[|\nabla\varphi|^{2}-|\omega|^{2} =\underbrace{(2|\omega|\partial_{s}\varphi_{2}+\nabla_{z^{ \prime}}\varphi_{2}\cdot\nabla_{z^{\prime}}\varphi_{2})}_{O(|z^{\prime}|^{2})} +\underbrace{(2|\omega|\partial_{s}\varphi_{3}+2\nabla_{z^{\prime}}\varphi_{2} \cdot\nabla_{z^{\prime}}\varphi_{3})}_{O(|z^{\prime}|^{3})}\] \[\quad+\underbrace{\big{(}2|\omega|\partial_{s}\varphi_{4}+2\nabla _{z^{\prime}}\varphi_{2}\cdot\nabla_{z^{\prime}}\varphi_{4}+F_{4}(s,z^{\prime })\big{)}}_{O(|z^{\prime}|^{4})}+\cdots+O(|z^{\prime}|^{N+1}),\] where \(F_{j}(s,z^{\prime})\) is a \(j^{th}\) order homogeneous polynomial in \(z^{\prime}\) depending only on \(\varphi_{2},\ldots,\varphi_{j-1}\). Next we look for \(\varphi_{2}\) such that the first \(O(|z^{\prime}|^{2})\) term vanish. Writing \[\varphi_{2}(s,z^{\prime})=\frac{1}{2}H(s)z^{\prime}\cdot z^{\prime},\] where \(H(s)=(H_{ij}(s))_{2\leq i,j\leq n}\) is a smooth complex symmetric matrix. Then \(H\) satisfy the matrix Riccati equation \[|\omega|\frac{d}{ds}H(s)+H^{2}(s)=0. \tag{3.3}\] Imposing an initial condition \(H(0)=H_{0}\), where \(H_{0}\) is a complex symmetric matrix with positive definite imaginary part \(\text{Im}H_{0}\), by [[22] Lemma 2.56], there exists a unique smooth complex symmetric solution \(H(s)\) to (3.3) with positive definite \(\text{Im}H(s)\) for all \(s\in\mathbb{R}\). For \(|\alpha|\geq 3\), in order to make the \(O(|z^{\prime}|^{3}),\ldots,O(|z^{\prime}|^{N})\) terms vanish, one derives first order ODE's for the Taylor coefficients \(\varphi_{k,\alpha^{\prime}}\). By imposing well-chosen initial conditions at \(s=0\), we may find all the \(\varphi_{j}\), \(j=3,\ldots,N\). _Next we construct the amplitude function \(a(t,z;\rho)\)_. Let \(\chi_{\eta}\in C_{c}^{\infty}(\mathbb{R}^{n-1})\) be a smooth function with \(\chi_{\eta}=1\) for \(|z^{\prime}|\leq\frac{\eta}{2}\) and \(\chi_{\eta}=0\) for \(|z^{\prime}|\geq\eta\). Let \(\iota\in C_{0}^{\infty}(0,T)\) be a smooth cut-off function of the time variable. 
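As a brief aside before we continue with the amplitude: one can check by differentiation that \(H(s)=(H_{0}^{-1}+s|\omega|^{-1}I)^{-1}\) solves the Riccati equation (3.3) with \(H(0)=H_{0}\) wherever the inverse exists. The following minimal numerical sketch (our own illustration, with an arbitrarily chosen admissible \(H_{0}\)) confirms that \(\mathrm{Im}H(s)\) remains positive definite along the whole line, in agreement with [[22], Lemma 2.56].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                                         # size of H (the transversal dimension), just an example
A = rng.standard_normal((n, n)); A = (A + A.T) / 2            # arbitrary real symmetric part of H_0
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)  # Im H_0: symmetric positive definite
H0, w = A + 1j * B, 2.0                                       # H_0 complex symmetric with Im H_0 > 0, |omega| = 2

def H(s):
    # explicit solution of |omega| H'(s) + H(s)^2 = 0 with H(0) = H_0
    return np.linalg.inv(np.linalg.inv(H0) + (s / w) * np.eye(n))

ds = 1e-6
for s in np.linspace(-5.0, 5.0, 21):
    Hs = H(s)
    assert np.allclose(Hs, Hs.T)                                   # H(s) stays complex symmetric
    assert np.all(np.linalg.eigvalsh(Hs.imag) > 0)                 # Im H(s) stays positive definite
    residual = w * (H(s + ds) - H(s - ds)) / (2 * ds) + Hs @ Hs    # Riccati residual via a centered difference
    assert np.linalg.norm(residual) < 1e-3
```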
We make the ansatz for the amplitude as \[a(t,s,z^{\prime};\rho)=\sum_{j=0}^{N}\rho^{-j}a_{j}(t,s,z^{\prime})\chi_{\eta} (z^{\prime})=(a_{0}+\rho^{-1}a_{1}+\cdots+\rho^{-N}a_{N})\chi_{\eta}(z^{ \prime}).\] From (3.2), we should determine \(a_{j}\) from \[\begin{array}{ll}2\nabla\varphi\cdot\nabla a_{0}+a_{0}\Delta\varphi&=0&\text{ up to $N$-th order of $z^{\prime}$ on $\gamma_{p,\omega}$,}\\ 2\nabla\varphi\cdot\nabla a_{1}+a_{1}\Delta\varphi&=i(i\partial_{t}+\Delta+q) a_{0}&\text{ up to $N$-th order of $z^{\prime}$ on $\gamma_{p,\omega}$,}\\ \vdots&\\ 2\nabla\varphi\cdot\nabla a_{N}+a_{N}\Delta\varphi&=i(i\partial_{t}+\Delta+q )a_{N-1}&\text{ up to $N$-th order of $z^{\prime}$ on $\gamma_{p,\omega}$.}\end{array} \tag{3.4}\] so that the terms of \(O(\rho^{-k})\) (\(k=0,\ldots,N\)) vanish up to \(N\)-th order of \(z^{\prime}\) on \(\gamma_{p,\omega}\). Therefore, we write \(a_{0}\) to have the form \[a_{0}(t,s,z^{\prime})=\sum_{k=0}^{N}a_{0}^{k}(s,z^{\prime})\iota(t),\quad \text{where}\quad a_{0}^{k}(s,z^{\prime})=\sum_{|\alpha^{\prime}|=k}\frac{a_{0 }^{k,\alpha^{\prime}}(s)}{\alpha^{\prime}!}(z^{\prime})^{\alpha^{\prime}}.\] Here \(a_{0}^{k}\) is a \(k^{th}\) order homogeneous polynomial in \(z^{\prime}\). The first equation in (3.4) becomes \[2\nabla\varphi\cdot\nabla a_{0}+a_{0}\Delta\varphi =\iota(t)\left(2|\omega|\partial_{s}a_{0}^{0}+a_{0}^{0}\Delta_{z^ {\prime}}\varphi_{2}\right)\] \[\quad+\iota(t)\left(2|\omega|\partial_{s}a_{0}^{1}+2\nabla_{z^{ \prime}}\varphi_{2}\cdot\nabla_{z^{\prime}}a_{0}^{1}+a_{0}^{1}\Delta_{z^{ \prime}}\varphi_{2}+a_{0}^{0}\Delta_{z^{\prime}}\varphi_{3}\right)+\cdots+O(|z ^{\prime}|^{N+1}). \tag{3.5}\] Note that \(\Delta_{z^{\prime}}\varphi_{2}=tr(H(s))\). In order to let the first bracket vanish, we solve \(2|\omega|\partial_{s}a_{0}^{0}(s)+tr(H(s))a_{0}^{0}(s)=0\) with a given initial condition \(a_{0}^{0}(0)=c_{0}\) for some constant \(c_{0}\). For later purpose, we choose \(c_{0}=1\) to get \[a_{0}^{0}(s)=e^{-\frac{1}{2|\omega|}\int_{0}^{s}tr(H(t))dt}.\] Similarly, the coefficients of \(a_{0}^{1},\ldots,a_{0}^{N}\) can be determined for the other brackets in (3.5) to vanish. Lastly, we can also construct \(a_{2},\ldots,a_{N}\), which have similar forms as \(a_{0}\), in a similar way. Here we note that \(a_{0}^{k,\alpha^{\prime}}\) is smooth which further implies that \(a(t,z;\rho)\) is smooth. So far we have constructed a gaussian beam \(v(t,z)\) localized near \(\{(z_{1},0,\ldots,0),z_{1}\in\mathbb{R}\}\) of the form (3.1) with \[\varphi(s,z^{\prime})=|\omega|s+\frac{1}{2}H(s)z^{\prime}\cdot z^{\prime}+O(| z^{\prime}|^{3}),\quad a(t,s,z^{\prime})=\chi_{\eta}(z^{\prime})(a_{0}+\rho^{-1}a _{1}+\cdots+\rho^{-N}a_{N})\] with positive definite \(\mathrm{Im}H(s)\). It is easy to verify that by translation and rotation \(\Psi(x)=z\), the function defined by \(v(t,\Psi(x))\) with \(a(t,\Psi(x))\), still denoted by \(v(t,x)\) and \(a(t,x)\) respectively, is indeed the gaussian beam localized near the line \(\gamma_{p,\omega}\) and satisfy \[(i\partial_{t}+\Delta_{x}+q(t,x))v(t,x)=(i\partial_{t}+\Delta_{z}+q )v(t,z)\] \[= e^{i\rho(\varphi(z)-|\omega|^{2}\rho t)}\bigg{(}\chi_{\eta}(z^{ \prime})\left(O(|z^{\prime}|^{N+1})\rho^{2}+O(|z^{\prime}|^{N+1})\rho+(i \partial_{t}+\Delta+q)a_{N}\rho^{-N}\right)+\rho\widehat{\chi}_{\eta}(z^{ \prime})\vartheta\bigg{)}, \tag{3.6}\] where \(q(t,x)\) here is the above \(q(t,z)\) with \(z=\Psi(x)\) (We do not distinguish the names of the functions, e.g. 
\(q(t,x)\) and \(q(t,z)\), but only indicate the difference due to transformation by notations of variables \((t,x)\) and \((t,z)\)) and \(\widehat{\chi}_{\eta}(z^{\prime})\) is a smooth function with \(\widehat{\chi}_{\eta}=0\) for \(|z^{\prime}|<\frac{\eta}{2}\) and \(|z^{\prime}|\geq\eta\), and \(\vartheta\) vanishes near the geodesic \(\gamma_{p,\omega}.\) This last term accounts for those derivatives landing on \(\chi_{\eta}\). More specifically, we have \[v(t,x)=e^{i\rho(\Theta(x)-|\omega|^{2}\rho t)}a(t,x), \tag{3.7}\] where the phase function is explicitly given by \[\Theta(x)=\varphi(\Psi(x))=\omega\cdot(x-p)+\frac{1}{2}\mathcal{H}(x)(x-p) \cdot(x-p)+O(\mathrm{dist}(x,\gamma_{p,\omega})^{3}),\] where \(\mathcal{H}(x)\) is an \(n\times n\) matrix, defined by \[\mathcal{H}(x)=D\Psi(x)\left(\begin{array}{cc}0&0\\ 0&H((x-p)\cdot\widehat{\omega})\end{array}\right)(D\Psi(x))^{T},\] and the notation \(\mathrm{dist}(x,\gamma_{p,\omega})\) represents the distance between the point \(x\) and the line \(\gamma_{p,\omega}\). Moreover, based on the properties of \(H\), that is, \(\mathrm{Im}H(s)\) is positive definite, combined with the fact that \(D\Psi\) is a unitary matrix, we have that there exists a constant \(c_{0}>0\) such that \[\frac{1}{2}\mathrm{Im}\mathcal{H}(x)(x-p)\cdot(x-p)\geq c_{0}(\mathrm{dist}(x,\gamma_{p,\omega})^{2})\qquad\text{ for all }x. \tag{3.8}\] To summarize, we obtain **Proposition 3.1**.: _Let \(q\in C^{\infty}(Q)\) and \(\gamma_{p,\omega}\) be a straight line through a point \(p\in\Omega\) in direction \(\omega\in\mathbb{R}^{n}\). For any \(N>0\) and \(\eta>0\), there exists a family of approximate solutions \(\{v_{\rho}\in C^{\infty}(Q),\ \rho>1\}\), supported in \((0,T)\times N_{\eta}(\gamma_{p,\omega})\) where \(N_{\eta}(\gamma_{p,\omega})\) is an \(\eta\)-neighborhood of \(\gamma_{p,\omega}\), such that_ \[\|(i\partial_{t}+\Delta_{x}+q)v_{\rho}\|_{H^{1}(0,T;L^{2}(\Omega))}\leq C\rho ^{-\frac{N+1}{2}-\frac{n-1}{4}+4}, \tag{3.9}\] _and, for integer \(m\geq 0\),_ \[\|(i\partial_{t}+\Delta_{x}+q)v_{\rho}\|_{H^{m}(Q)}\leq C\rho^{-\frac{N+1}{2} -\frac{n-1}{4}+2m+2}, \tag{3.10}\] _where \(C\) is a positive constant independent of \(\rho\)._ Proof.: Take \(v_{\rho}\) as in (3.7). It remains to show (3.9) and (3.10). To begin with, since \(\mathrm{Im}(H(s))\) is positive definite, there exists \(c_{1}>0\) so that \(\mathrm{Im}(H(s))z^{\prime}\cdot z^{\prime}\geq c_{1}|z^{\prime}|^{2}\). 
Therefore, for \(\eta<1\) sufficiently small, in the neighborhood \(\{|z^{\prime}|<\eta\}\) one has \[|e^{i\rho(\varphi(s,z^{\prime})-|\omega|^{2}\rho t)}|\leq e^{-\frac{1}{4}c_{1 }\rho|z^{\prime}|^{2}}.\] The equation (3.6) implies \[|(i\partial_{t}+\Delta_{x}+q)v_{\rho}| \leq Ce^{-\frac{1}{4}c_{1}\rho|z^{\prime}|^{2}}\left(|z^{\prime} |^{N+1}\rho^{2}\chi_{\eta}(z^{\prime})+\rho^{-N}\chi_{\eta}(z^{\prime})+\rho \widehat{\chi}_{\eta}(z^{\prime})\vartheta\right),\] \[|\partial_{t}(i\partial_{t}+\Delta_{x}+q)v_{\rho}| \leq Ce^{-\frac{1}{4}c_{1}\rho|z^{\prime}|^{2}}\left(|z^{\prime} |^{N+1}\rho^{4}\chi_{\eta}(z^{\prime})+\rho^{2-N}\chi_{\eta}(z^{\prime})+\rho ^{3}\widehat{\chi}_{\eta}(z^{\prime})\vartheta\right).\] Hence it follows that \[\|(i\partial_{t}+\Delta_{x}+q)v_{\rho}\|_{H^{1}(0,T;L^{2}(\Omega))}^ {2}\] \[\leq C\rho^{8}\int_{0}^{T}\|e^{-\frac{1}{4}c_{1}\rho|z^{\prime}|^ {2}}|z^{\prime}|^{N+1}\chi_{\eta}(z^{\prime})\|_{L^{2}(\Omega)}^{2}dt+C\rho^{-2 N+4}\int_{0}^{T}\|e^{-\frac{1}{4}c_{1}\rho|z^{\prime}|^{2}}\chi_{\eta}(z^{\prime})\|_{L^{2}( \Omega)}^{2}dt \tag{3.11}\] \[\quad+C\rho^{6}\int_{0}^{T}\|e^{-\frac{1}{4}c_{1}\rho|z^{\prime}|^ {2}}\widehat{\chi}_{\eta}(z^{\prime})\vartheta\|_{L^{2}(\Omega)}^{2}dt=:J_{1} +J_{2}+J_{3}.\] Now by changing of variable \(z^{\prime}=\rho^{-\frac{1}{2}}y\) and applying integration by parts, we obtain \[J_{1} \leq C\rho^{8}\int_{|z^{\prime}|\leq\eta}e^{-\frac{1}{2}c_{1}\rho |z^{\prime}|^{2}}|z^{\prime}|^{2N+2}dz^{\prime}\] \[\leq C\rho^{-N-1-\frac{n-1}{2}+8}\int_{\mathbb{R}^{n-1}}e^{- \frac{1}{2}c_{1}|y|^{2}}|y|^{2N+2}dy \tag{3.12}\] \[\leq C\rho^{-N-1-\frac{n-1}{2}+8},\] where the constant \(C>0\) is independent of \(\rho\). Likewise, we can also deduce \[J_{2}\leq C\rho^{-2N-\frac{n-1}{2}+4}, \tag{3.13}\] which is controlled by (3.12) provided \(\rho\) is sufficiently large. Moreover, since \(\widehat{\chi}_{\eta}\) is supported in \(\frac{\eta}{2}\leq|z^{\prime}|\leq\eta\), by performing the change of variable \(z^{\prime}=\rho^{-\frac{1}{2}}y\) again, we derive \[J_{3} \leq C\rho^{6}\int_{\frac{\eta}{2}\leq|z^{\prime}|\leq\eta}e^{- \frac{1}{2}c_{1}\rho|z^{\prime}|^{2}}dz^{\prime}\] \[\leq C\rho^{-\frac{n-1}{2}+6}\int_{\frac{\eta}{2}\rho^{\frac{1}{ 2}}\leq|y|\leq\eta\rho^{\frac{1}{2}}}e^{-\frac{1}{2}c_{1}|y|^{2}}dy\] \[\leq C\rho^{-\frac{n-1}{2}+6}e^{-\frac{1}{2}c_{1}\eta^{2}\rho}( \eta\rho^{\frac{1}{2}})^{n-1} \tag{3.14}\] \[\leq C\eta^{n-1}e^{-\frac{1}{8}c_{1}\eta^{2}\rho}\rho^{6},\] which decays exponentially in \(\rho\) (for a fixed \(\eta\)) and is also controlled by (3.12) provided \(\rho\) is sufficiently large. Therefore, (3.9) holds by combining (3.11), (3.12), (3.13) and (3.14). Similarly, we have the following higher regularity estimate \[\|(i\partial_{t}+\Delta_{x}+q)v\|_{H^{m}(Q)}^{2}\] \[\leq C\rho^{4m+4}\int_{0}^{T}\|e^{-\frac{1}{4}c_{1}\rho|z^{\prime }|^{2}}|z^{\prime}|^{N+1}\chi_{\eta}(z^{\prime})\|_{L^{2}(\Omega)}^{2}dt+C \rho^{-2N+4m}\int_{0}^{T}\|e^{-\frac{1}{4}c_{1}\rho|z^{\prime}|^{2}}\chi_{\eta }(z^{\prime})\|_{L^{2}(\Omega)}^{2}dt\] \[\quad+C\rho^{4m+2}\int_{0}^{T}\|e^{-\frac{1}{4}c_{1}\rho|z^{\prime }|^{2}}\widehat{\chi}_{\eta}(z^{\prime})\vartheta\|_{L^{2}(\Omega)}^{2}dt\] \[\leq C\rho^{-N-1-\frac{n-1}{2}+4m+4},\] provided \(\rho\) is sufficiently large. This completes the proof of (3.10). With Proposition 3.1, we can construct the geometrical optics solutions now. **Proposition 3.2**.: _Let \(m>0\) be an even integer and \(q\in C^{\infty}(Q)\). 
Given \(p\in\Omega\) and \(\omega\in\mathbb{R}^{n}\), suppose that the straight line \(\gamma_{p,\omega}\) through \(p\) in direction \(\omega\) satisfies \((\gamma_{p,\omega}\cap\partial\Omega)\subset\Gamma\). Then there exists \(\rho_{0}>1\) such that when \(\rho>\rho_{0}\), the Schrodinger equation \((i\partial_{t}+\Delta+q)u=0\) admits a solution \(u\in H^{m}(Q)\) of the form_ \[u(t,x)=e^{i\rho(\Theta(x)-|\omega|^{2}\rho t)}a(t,x)+r(t,x)\] _with boundary value \(\text{supp}(u|_{(0,T)\times\partial\Omega})\subset\Sigma^{\sharp}\) and initial data \(u|_{t=0}=0\) in \(\Omega\) (or the final condition \(u|_{t=T}=0\) in \(\Omega\)). Here \(\Theta(x)\) and \(a(t,x)\) are as in (3.7) and satisfy Proposition 3.1, and the remainder \(r\) satisfies the following estimates:_ \[\|r\|_{H^{m}(Q)}\leq C\rho^{-\frac{N+1}{2}-\frac{n-1}{4}+2m+2} \tag{3.15}\] _and_ \[\|r\|_{C([0,T],H^{2}(\Omega))}+\|r\|_{C^{1}([0,T],L^{2}(\Omega))}\leq C\rho^{-\frac{N+1}{2}-\frac{n-1}{4}+4}.\] Proof.: We can choose \(\eta>0\) small enough such that \((N_{\eta}(\gamma_{p,\omega})\cap\partial\Omega)\subset\Gamma\). By Proposition 3.1, for \(\rho>\rho_{0}\), we obtain \(\Theta(x)\) and \(a(t,x)\) correspondingly. By Proposition 3 and Lemma 4 in [39], we obtain the existence of the solution \(r\in H^{m}(Q)\) to \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)r&=&-(i\partial_{t}+\Delta+q)v&\text{ in }Q,\\ r&=&0&\text{ on }\Sigma,\\ r&=&0&\text{ on }\{0\}\times\Omega,\end{array}\right.\] and the estimate \[\|r\|_{H^{m}(Q)}\leq C\|(i\partial_{t}+\Delta+q)v\|_{H^{m}(Q)}\leq C\rho^{-\frac{N+1}{2}-\frac{n-1}{4}+2m+2}.\] Here the last inequality follows from Proposition 3.1. Also, with (3.9), [[24], Lemma 2.3] gives \[\|r\|_{C([0,T],H^{2}(\Omega))}+\|r\|_{C^{1}([0,T],L^{2}(\Omega))}\leq C\|(i\partial_{t}+\Delta+q)v\|_{H^{1}(0,T;L^{2}(\Omega))}\leq C\rho^{-\frac{N+1}{2}-\frac{n-1}{4}+4}.\] ### Finite difference We introduce multivariate finite differences, which approximate derivatives of the solution with respect to the small parameters \(\varepsilon_{1},\varepsilon_{2}\). We define the second-order mixed finite difference operator \(D^{2}\) about the zero solution as follows: \[D^{2}u_{\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}}:=\frac{1}{\varepsilon_{1}\varepsilon_{2}}(u_{\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}}-u_{\varepsilon_{1}f_{1}}-u_{\varepsilon_{2}f_{2}}).\] Note that when \(\varepsilon_{1}=\varepsilon_{2}=0\), \(u_{\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}}=0\). We refer the interested readers to [39] for the definitions of higher order finite difference operators. For the purpose of our paper, we only need \(D^{2}\). To simplify the notation, we denote \(u_{\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}}\) by \(u_{\varepsilon f}\) and define \(|\varepsilon|:=|\varepsilon_{1}|+|\varepsilon_{2}|\). Then we have the following second order expansion. **Proposition 3.3**.: _Let \(2\kappa>\frac{n+1}{2}\) be an integer and \(f_{j}\in H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)\) for \(j=1,2\)._
For \(|\varepsilon|:=|\varepsilon_{1}|+|\varepsilon_{2}|\) small enough, there exists a unique solution \(u_{\varepsilon f}\in H^{2\kappa}(Q)\) to the problem_ \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)u_{\varepsilon f}+\beta u _{\varepsilon f}^{2}&=&0&\text{ in }Q,\\ u_{\varepsilon f}&=&\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}&\text{ on } \Sigma,\\ u_{\varepsilon f}&=&0&\text{ on }\{0\}\times\Omega.\end{array}\right.\] _In particular, it admits the following expression:_ \[u_{\varepsilon f}=\varepsilon_{1}U_{1}+\varepsilon_{2}U_{2}+\frac{1}{2}\left( \varepsilon_{1}^{2}W_{(2,0)}+\varepsilon_{2}^{2}W_{(0,2)}+2\varepsilon_{1} \varepsilon_{2}W_{(1,1)}\right)+\mathcal{R},\] _where for \(j=1,2\), \(U_{j}\in H^{2\kappa}(Q)\) satisfies the linear equation:_ \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)U_{j}&=&0&\text{ in }Q,\\ U_{j}&=&f_{j}&\text{ on }\Sigma,\\ U_{j}&=&0&\text{ on }\{0\}\times\Omega,\end{array}\right. \tag{3.16}\] _and for \(k_{j}\in\{0,1,2\}\) satisfying \(k_{1}+k_{2}=2\), \(W_{(k_{1},k_{2})}\in H^{2\kappa}(Q)\) is the solution to_ \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)W_{(k_{1},k_{2})}&=&-2 \beta U_{1}^{k_{1}}U_{2}^{k_{2}}&\text{ in }Q,\\ W_{(k_{1},k_{2})}&=&0&\text{ on }\Sigma,\\ W_{(k_{1},k_{2})}&=&0&\text{ on }\{0\}\times\Omega.\end{array}\right. \tag{3.17}\] _Moreover, the remainder term \(\mathcal{R}\in H^{2\kappa}(Q)\) satisfies_ \[\|\mathcal{R}\|_{H^{2\kappa}(Q)}\leq C\|\varepsilon_{1}f_{1}+\varepsilon_{2} f_{2}\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}^{3}. \tag{3.18}\] Proof.: The existence of \(u_{\varepsilon f}\in H^{2\kappa}(Q)\) is given by Proposition 2.2 when \(|\varepsilon|:=|\varepsilon_{1}|+|\varepsilon_{2}|\) sufficiently small such that \(\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}\in\mathcal{S}_{\lambda}(\Sigma)\). Also, equations (3.16) and (3.17) are both well-posed in \(H^{2\kappa}(Q)\), for example by Proposition 4 in [39], for \(2\kappa\) as in the assumption (\(H^{2\kappa}(Q)\) is a Banach algebra). We denote \[\tilde{u}:=u_{\varepsilon f}-(\varepsilon_{1}U_{1}+\varepsilon_{2}U_{2}).\] Then it solves \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)\tilde{u}&=&-\beta u_{ \varepsilon f}^{2}&\text{ in }Q,\\ \tilde{u}&=&0&\text{ on }\Sigma,\\ \tilde{u}&=&0&\text{ on }\{0\}\times\Omega,\end{array}\right.\] Applying Lemma 4 in [39] and (2.3) gives that \[\|\tilde{u}\|_{H^{2\kappa}(Q)}\leq C\|\beta u_{ef}^{2}\|_{H^{2\kappa}(Q)}\leq C\|u _{ef}\|_{H^{2\kappa}(Q)}^{2}\leq C\|\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}\| _{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}^{2}. 
\tag{3.19}\] From (3.16) and (3.17), the remainder \(\mathcal{R}\) satisfies \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)\mathcal{R}&=&-\beta u_{ ef}^{2}+\beta(\varepsilon_{1}U_{1}+\varepsilon_{2}U_{2})^{2}&\text{ in }Q,\\ \mathcal{R}&=&0&\text{ on }\Sigma,\\ \mathcal{R}&=&0&\text{ on }\{0\}\times\Omega.\end{array}\right.\] Then we have that \(\mathcal{R}\in H^{2\kappa}(Q)\) exists and satisfies \[\|\mathcal{R}\|_{H^{2\kappa}(Q)} \leq C\|-\beta u_{ef}^{2}+\beta(\varepsilon_{1}U_{1}+\varepsilon_ {2}U_{2})^{2}\|_{H^{2\kappa}(Q)}\] \[\leq C\|\tilde{u}\|_{H^{2\kappa}(Q)}\|u_{ef}+(\varepsilon_{1}U_{ 1}+\varepsilon_{2}U_{2})\|_{H^{2\kappa}(Q)}\] \[\leq C\|\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}\|_{H^{2\kappa+ \frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}^{2}\|\varepsilon_{1}f_{1}+ \varepsilon_{2}f_{2}\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}\] \[\leq C\|\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}\|_{H^{2\kappa+ \frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}^{3}\] by using the fact that \(H^{2\kappa}(Q)\) is a Banach algebra, the equations (3.19), (2.3) and the well-posedness of (3.16). **Remark 3.1**.: _Based on Proposition 3.3, when one of \(\varepsilon_{1}\) and \(\varepsilon_{2}\) is zero, we have_ \[u_{\varepsilon_{1}f_{1}}=\varepsilon_{1}U_{1}+\frac{1}{2}\varepsilon_{1}^{2}W_ {(2,0)}+\mathcal{R}^{(1)},\quad u_{\varepsilon_{2}f_{2}}=\varepsilon_{2}U_{2} +\frac{1}{2}\varepsilon_{2}^{2}W_{(0,2)}+\mathcal{R}^{(2)},\] _where \(\mathcal{R}^{(j)}\) is the remainder term of order \(O(\varepsilon_{j}^{3})\) for \(j=1,\,2\). We can rewrite \(u_{ef}\) as_ \[u_{ef}=u_{\varepsilon_{1}f_{1}}+u_{\varepsilon_{2}f_{2}}+\varepsilon_{1} \varepsilon_{2}W_{(1,1)}+\widetilde{\mathcal{R}}, \tag{3.20}\] _where \(\widetilde{\mathcal{R}}:=\mathcal{R}-\mathcal{R}^{(1)}-\mathcal{R}^{(2)}\). Moreover, we have_ \[W_{(1,1)}=D^{2}u_{\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}}-\frac{1}{ \varepsilon_{1}\varepsilon_{2}}\widetilde{\mathcal{R}}\] _and also the Neumann data_ \[\partial_{\nu}W_{(1,1)}|_{\Sigma^{\sharp}}=\frac{1}{\varepsilon_{1}\varepsilon _{2}}\left(\Lambda_{q,\beta}(\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2})- \Lambda_{q,\beta}(\varepsilon_{1}f_{1})-\Lambda_{q,\beta}(\varepsilon_{2}f_{2 })\right)-\frac{1}{\varepsilon_{1}\varepsilon_{2}}\partial_{\nu}\widetilde{ \mathcal{R}}|_{\Sigma^{\sharp}}.\] _Through the rest of the paper, we only need to assume \(|\varepsilon_{1}|\sim|\varepsilon_{2}|\sim|\varepsilon|\), in which case we have \(\widetilde{\mathcal{R}}=o(\varepsilon_{1}\varepsilon_{2})\). 
In fact, from (3.18) we have_ \[\|\widetilde{R}\|_{H^{2\kappa}(Q)}\leq C(\varepsilon_{1}+\varepsilon_{2})^{3} \left(\|f_{1}\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}^{3}+\|f_ {2}\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}^{3}\right)^{3}.\] _In the case that \(\varepsilon_{1}\) and \(\varepsilon_{2}\) are of different scales such as \(|\varepsilon_{2}|\sim|\varepsilon_{1}|^{k}\) for some positive \(k>1\) (or vice versa), more terms can be taken in the expansions of \(u_{ef}\), \(u_{\varepsilon_{1}f_{1}}\) and \(u_{\varepsilon_{2}f_{2}}\) to eventually verify that \(\widetilde{\mathcal{R}}\) has the norm of order \(o(\varepsilon_{1}\varepsilon_{2})\)._ _Since \(W_{(k_{1},k_{2})}\) is independent of \(\varepsilon_{1}\) and \(\varepsilon_{2}\), this implies_ \[W_{(1,1)}=\lim_{\varepsilon_{1},\varepsilon_{2}\to 0}D^{2}u_{\varepsilon_{1}f_{1}+ \varepsilon_{2}f_{2}},\quad\partial_{\nu}W_{(1,1)}|_{\Sigma^{\sharp}}=\lim_{ \varepsilon_{1},\varepsilon_{2}\to 0}\frac{1}{\varepsilon_{1}\varepsilon_{2}} \left(\Lambda_{q,\beta}(\varepsilon f)-\Lambda_{q,\beta}(\varepsilon_{1}f_{1})- \Lambda_{q,\beta}(\varepsilon_{2}f_{2})\right). \tag{3.21}\] _in proper norms. For example, in \(L^{2}(\Sigma^{\sharp})\), we can derive_ \[\left\|\partial_{\nu}W_{(1,1)}|_{\Sigma^{\sharp}}-\frac{1}{\varepsilon _{1}\varepsilon_{2}}\left(\Lambda_{q,\beta}(\varepsilon f)-\Lambda_{q,\beta}( \varepsilon_{1}f_{1})-\Lambda_{q,\beta}(\varepsilon_{2}f_{2})\right)\right\|_{L ^{2}(\Sigma^{\sharp})}\] \[=\frac{1}{\varepsilon_{1}\varepsilon_{2}}\|\partial_{\nu} \widetilde{\mathcal{R}}\|_{L^{2}(\Sigma^{\sharp})}\leq\frac{1}{\varepsilon_{1 }\varepsilon_{2}}\|\partial_{\nu}\widetilde{\mathcal{R}}\|_{H^{2\kappa-\frac{3 }{2},2\kappa-\frac{3}{2}}(\Sigma^{\sharp})}\leq C\frac{1}{\varepsilon_{1} \varepsilon_{2}}\|\widetilde{\mathcal{R}}\|_{H^{2\kappa,2\kappa}(Q)}\leq C \frac{1}{\varepsilon_{1}\varepsilon_{2}}\|\widetilde{\mathcal{R}}\|_{H^{2 \kappa}(Q)} \tag{3.22}\] \[\leq C\frac{(\varepsilon_{1}+\varepsilon_{2})^{3}}{\varepsilon_{ 1}\varepsilon_{2}}\left(\|f_{1}\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2 }}(\Sigma)}+\|f_{2}\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)} \right)^{3}.\] ### An integral identity Let \(u_{\ell,\varepsilon f}\) (\(\ell=1,2\)) be the small unique solution to the initial boundary value problem for the Schrodinger equation: \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)u_{\ell,\varepsilon f}+ \beta_{\ell}u_{\ell,\varepsilon f}^{2}&=&0&\text{ in }Q,\\ u_{\ell,\varepsilon f}&=&\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}&\text{ on }\Sigma,\\ u_{\ell,\varepsilon f}&=&0&\text{ on }\{0\}\times\Omega\end{array}\right.\] with \(\text{supp}(f_{j})\subset(0,T)\times\Gamma\) for \(j=1,2\). For \(|\varepsilon|:=|\varepsilon_{1}|+|\varepsilon_{2}|\) small enough, they admit the expansion \[u_{\ell,\varepsilon f}=\varepsilon_{1}U_{\ell,1}+\varepsilon_{2}U_{\ell,2}+ \frac{1}{2}\left(\varepsilon_{1}^{2}W_{\ell,(2,0)}+\varepsilon_{2}^{2}W_{ \ell,(0,2)}+2\varepsilon_{1}\varepsilon_{2}W_{\ell,(1,1)}\right)+\mathcal{R}_ {\ell},\] where \(U_{\ell,j}\), \(W_{\ell,(k_{1},k_{2})}\) and \(\mathcal{R}_{\ell}\) are as in Proposition 3.3. Since the linearized equations for both \(\ell\) are the same with the same boundary data \(f_{j}\), we have \[U_{1,j}=U_{2,j},\qquad j=1,2,\] denoted by \(U_{j}\) for the rest of the paper. 
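As an aside, the mechanism behind the second-order finite difference of the previous subsection is transparent on a toy scalar model of the expansion (3.20): a map with a linear part, a quadratic part whose mixed coefficient plays the role of \(W_{(1,1)}\), and a cubic remainder. The minimal sketch below (the numerical values are our own and purely illustrative) shows that \(D^{2}\) recovers the mixed coefficient up to an \(O(|\varepsilon|)\) error, mirroring (3.21).

```python
# Toy scalar stand-in for the expansion (3.20): a "solution map" with a linear part,
# a quadratic part whose mixed coefficient plays the role of W_{(1,1)}, and a cubic remainder.
U1, U2 = 1.3, -0.7             # toy values, not data from the paper
W20, W02, W11 = 0.4, 2.1, -1.5
c = 0.9                        # strength of the O(|eps|^3) remainder (hypothetical)

def u(e1, e2):
    return (e1 * U1 + e2 * U2
            + 0.5 * (e1**2 * W20 + e2**2 * W02 + 2.0 * e1 * e2 * W11)
            + c * (e1 + e2)**3)

def D2(e1, e2):
    # second-order mixed finite difference about the zero solution, as in the subsection above
    return (u(e1, e2) - u(e1, 0.0) - u(0.0, e2)) / (e1 * e2)

for eps in (1e-1, 1e-2, 1e-3):
    # here D2 - W11 = 3*c*(e1 + e2) = O(|eps|), consistent with the remainder being o(eps1*eps2)
    print(f"eps = {eps:.0e}   D2 = {D2(eps, eps):+.6f}   |D2 - W11| = {abs(D2(eps, eps) - W11):.2e}")
```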
In addition, let \(U_{0}\) be the solution of the adjoint problem: \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)U_{0}&=&0&\text{ in }Q,\\ U_{0}&=&f_{0}&\text{ on }\Sigma,\\ U_{0}&=&0&\text{ on }\{T\}\times\Omega\end{array}\right. \tag{3.23}\] with \(\text{supp}(f_{0})\subset(0,T)\times\Gamma\). **Lemma 3.1**.: _Let \(q,\,\beta_{\ell}\in C^{\infty}(Q)\) (\(\ell=1,2\)) and \(\beta:=\beta_{1}-\beta_{2}\). Suppose that_ \[\Lambda_{q,\beta_{1}}(f)=\Lambda_{q,\beta_{2}}(f)\] _for all \(f\in\mathcal{S}_{\lambda}(\Sigma)\) with \(\text{supp}(f)\subset\Sigma^{\sharp}\). Then_ \[\int_{Q}\beta U_{1}U_{2}\overline{U}_{0}\,dxdt=0. \tag{3.24}\] Proof.: We denote \[W:=W_{2,(1,1)}-W_{1,(1,1)}.\] By (3.21), we have \(\partial_{\nu}W|_{\Sigma^{\sharp}}=0\). After multiplying the equation in (3.17) by \(\overline{U}_{0}\), subtracting and integrating over \(Q\), we have \[\int_{Q}2\beta U_{1}U_{2}\overline{U}_{0}\,dxdt=\int_{\Sigma}(\overline{U}_{0} \partial_{\nu}W-W\partial_{\nu}\overline{U}_{0})\,d\sigma(x)dt=0\] due to that \(U_{0}(T,\cdot)=W(0,\cdot)=0\), \(W|_{\Sigma}=\partial_{\nu}W|_{\Sigma^{\sharp}}=0\) and \(U_{0}|_{\Sigma}\) has the support in \(\Sigma^{\sharp}\). ### Proof of Theorem 1.1 We will show that the coefficient \(\beta(t,x)\) can be recovered uniquely for all the points in \((0,T)\times\Omega_{\Gamma}\). Proof of Theorem 1.1.: For each \(p\in\Omega_{\Gamma}\), choose \(\omega_{1},\,\omega_{2}\in\mathbb{S}^{n-1}\) satisfying the condition in the description of \(\Omega_{\Gamma}\) in (1.2). Set \(\omega_{0}:=\omega_{1}+\omega_{2}\). Based on Proposition 3.2, we can find geometrical optics solutions \(U_{j}=v_{j}+r_{j}\), \(j=1,2\) for the problem (3.16) and \(U_{0}=v_{0}+r_{0}\) for its adjoint problem (3.23) associated to three lines \(\gamma_{p,\omega_{1}},\gamma_{p,\omega_{2}}\) and \(\gamma_{p,\omega_{0}}\) respectively. More specifically, we have \[v_{j}(t,x)=e^{i\rho(\Theta_{j}(x)-|\omega_{j}|^{2}\rho t)}a^{(j)}(t,x),\quad j =0,1,2\] with the phase function \[\Theta_{j}(x)=\omega_{j}\cdot(x-p)+\frac{1}{2}\mathcal{H}_{j}(x)(x-p)\cdot(x- p)+O(\operatorname{dist}(x,\gamma_{p,\omega})^{3}).\] The amplitude functions \(a^{(j)}(t,x)|_{\partial\Omega}\) are supported in \(\Gamma\) given \(\eta<\eta_{0}\) for some \(\eta_{0}>0\) and the remainder functions \(r_{j}(t,x)\) satisfy (3.15). Let \(f_{j}:=U_{j}|_{\partial\Omega}\) (\(j=0,1,2\)). From \(\Lambda_{q,\beta_{1}}(f)=\Lambda_{q,\beta_{2}}(f)\) on \(\mathcal{S}_{\lambda}(\Sigma)\) with \(\operatorname{supp}(f)\subset\Sigma^{\sharp}\) and Lemma 3.1, we obtain the integral identity (3.24). Plugging in above \(U_{j}\) (\(j=0,1,2\)), we obtain \[0=\int_{Q}\beta U_{1}U_{2}\overline{U}_{0}\,dxdt=\int_{Q}\beta v_{1}v_{2} \overline{v}_{0}\,dxdt+R_{1}+R_{2}+R_{3},\] where the remainder terms are grouped as \[R_{1}:=\int_{Q}\beta(\overline{r}_{0}v_{1}v_{2}+r_{1}v_{2}\overline{v}_{0}+r _{2}v_{1}\overline{v}_{0})\,dxdt,\] \[R_{2}:=\int_{Q}\beta(\overline{r}_{0}r_{1}v_{2}+\overline{r}_{0}r_{2}v_{1}+r _{1}r_{2}\overline{v}_{0})\,dxdt,\] \[R_{3}:=\int_{Q}\beta r_{1}r_{2}\overline{r}_{0}\,dxdt.\] When \(2\kappa>\frac{n+1}{2}\), Proposition 3.2 shows that \[R_{1}+R_{2}+R_{3}=O(\rho^{-K})\] for a large \(K>\frac{n}{2}\) by choosing \(N\) sufficiently large. Note that \(|\omega_{1}|^{2}+|\omega_{2}|^{2}=|\omega_{0}|^{2}\). 
The phase of the product is then given by \[\Theta_{1}(x)+\Theta_{2}(x)-\overline{\Theta}_{0}(x)=\frac{1}{2}\mathcal{H}(x )(x-p)\cdot(x-p)+\widetilde{h}(x),\] where \(\mathcal{H}(x):=\mathcal{H}_{1}(x)+\mathcal{H}_{2}(x)-\overline{\mathcal{H}_{ 0}}(x)\) whose imaginary part \[\operatorname{Im}\mathcal{H}(x)=\operatorname{Im}\mathcal{H}_{1}(x)+ \operatorname{Im}\mathcal{H}_{2}(x)+\operatorname{Im}\mathcal{H}_{0}(x).\] By (3.8), we have \[\frac{1}{2}\operatorname{Im}\mathcal{H}(x)(x-p)\cdot(x-p)\geq c_{0}( \operatorname{dist}(x,\gamma_{p,\omega_{1}})^{2}+\operatorname{dist}(x,\gamma _{p,\omega_{2}})^{2})\geq c_{0}|x-p|^{2},\] which implies \(\operatorname{Im}\mathcal{H}\) is positive definite. Also, we have for \(|x-p|\) small, \[|\widetilde{h}(x)|=O(\operatorname{dist}(x,\gamma_{p,\omega_{1}})^{3}+ \operatorname{dist}(x,\gamma_{p,\omega_{2}})^{3}+\operatorname{dist}(x,\gamma _{p,\omega_{0}})^{3})=O(|x-p|^{3}). \tag{3.25}\] Therefore, for \(\eta<\eta_{0}\) sufficiently small, we shall have \[\operatorname{Im}(\Theta_{1}(x)+\Theta_{2}(x)-\overline{\Theta}_{0}(x))\geq \widetilde{c}_{0}|x-p|^{2}\qquad\text{when }|x-p|<\eta. \tag{3.26}\] Finally, standing on these, we derive \[O(\rho^{-K})= \int_{Q}\beta v_{1}v_{2}\overline{v}_{0}\,dxdt\] \[= \int_{0}^{T}\int_{B_{2\eta}(p)}\beta e^{i\rho(\frac{1}{2}\mathcal{H }(x)(x-p)\cdot(x-p)+\widetilde{k}(x))}(\widetilde{a}_{0}(t,x)+O(\rho^{-1})) \widetilde{\chi}_{\eta}(x)\,dxdt,\] where \(\widetilde{a}_{0}(t,x)=a_{0}^{(1)}a_{0}^{(2)}\overline{a}_{0}^{(0)}(t,x)\) and \(\widetilde{\chi}_{\eta}(x):=\prod_{j=0,1,2}\chi_{\eta}(z_{p,\omega_{j}}^{\prime }(x))\) with \(z_{p,\omega_{j}}^{\prime}(x)\) being the projection of \(x-p\) onto the orthogonal \((n-1)\)-dim subspace \(\omega_{j}^{\perp}=\{\xi\in\mathbb{R}^{n}:\ \xi\cdot\omega_{j}=0\}\). By the change of variable \(\tilde{x}=\rho^{\frac{1}{2}}(x-p)\), we have \[O(\rho^{-K+\frac{n}{2}})=\int_{0}^{T}\int_{B_{2\eta}\sqrt{\rho}(0)}e^{i(\frac {1}{2}\mathcal{H}(\rho^{-\frac{1}{2}}\tilde{x}+p)\tilde{x}\cdot\tilde{x}+\rho \widetilde{h}(\rho^{-\frac{1}{2}}\tilde{x}+p))}(\beta\widetilde{a}_{0}(t,\rho ^{-\frac{1}{2}}\tilde{x}+p)+O(\rho^{-1}))\widetilde{\chi}_{\eta}(\rho^{-\frac {1}{2}}\tilde{x}+p)\,d\tilde{x}dt.\] Applying (3.25) and (3.26), and by the dominated convergence theorem, we obtain the limit as \(\rho\to\infty\) \[\left(\int_{\mathbb{R}^{n}}e^{\frac{i}{2}\mathcal{H}(p)\tilde{x}\cdot\tilde{x }}\,d\tilde{x}\right)\left(\int_{0}^{T}\beta\widetilde{a}_{0}(t,p)\,dt\right)=0,\] where we use that the pointwise limit of \(\rho\widetilde{h}(\rho^{-1/2}\widetilde{x}+p)\) is zero. We can choose the initial condition for \(H\) in the matrix Riccati equation such that the first integral is nonzero. Also recall that \(a_{0}^{(j)}(t,p)=\iota(t)\) for \(j=0,1,2\) in the constructions of \(a_{0}^{(j)}\), thus \[\int_{0}^{T}\beta(t,p)\iota^{3}(t)\,dt=0.\] Since \(\iota\) can be chosen to be any smooth cut-off function at the time variable, this leads to \(\beta(t,p)=0\) for arbitrary \(t\in(0,T)\). ## 4. Proof of Theorem 1.2 and Theorem 1.3 ### Geometric optics In this section, we will construct the geometric optics (GO) solutions to the Schrodinger equation, similar to the ones used in [24] and [39], and introduce its associated unique continuation principle. Compared to the GO solutions in Proposition 3.2, these are not localized near a straight line. 
Following the same ansatz for a GO solution under the global coordinate \[u(t,x)=e^{i\Phi(t,x)}\left(\sum_{k=0}^{N}\rho^{-k}a_{k}(t,x)\right)+r(t,x),\] where we take a simple linear (in \(x\)) phase \[\Phi(t,x):=\rho(x\cdot\omega-\rho|\omega|^{2}t),\] with \(\rho>0\) and \(\omega\in\mathbb{R}^{n}\). Then the terms in the amplitude naturally satisfy \[\omega\cdot\nabla a_{0}=0,\] \[2i\omega\cdot\nabla a_{1}=-(i\partial_{t}+\Delta+q)a_{0},\] \[\vdots\] \[2i\omega\cdot\nabla a_{N}=-(i\partial_{t}+\Delta+q)a_{N-1}, \tag{4.1}\] and the remainder term \(r\) satisfies \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)r&=&-\rho^{-N}e^{i\Phi(t,x)}(i \partial_{t}+\Delta+q)a_{N}&\quad\text{in }Q,\\ r&=&0&\quad\text{on }\Sigma,\\ r&=&0&\quad\text{on }\{0\}\times\Omega.\end{array}\right. \tag{4.2}\] We construct \(a_{0}\) as follows. Let \(0<T^{*}<T\) and the function \(\theta_{h}\in C_{0}^{\infty}(\mathbb{R})\) satisfy \(0\leq\theta_{h}\leq 1\) and for \(0<h<\frac{T^{*}}{4}\), \[\theta_{h}(t)=\left\{\begin{array}{rl}0&\text{in }[0,h]\cup[T^{*}-h,T^{*}],\\ 1&\text{in }[2h,\,T^{*}-2h],\end{array}\right. \tag{4.3}\] with support in \((h,T^{*}-h)\) and, moreover, for all \(j\in\mathbb{N}\), there exist constants \(C_{j}>0\) such that \[\|\theta_{h}\|_{W^{j,\infty}(\mathbb{R})}\leq C_{j}h^{-j}. \tag{4.4}\] We choose \[a_{0}(t,x):=\theta_{h}(t)e^{i(t\tau+x\cdot\xi)}\] with \(\xi\in\omega^{\perp}\). Then it satisfies \[a_{0}(t,x)=0\quad\text{ for all }(t,x)\in((0,h)\cup(T^{*}-h,T^{*}))\times\Omega,\] and the first equation in (4.1). Let \(y\in\partial\Omega\) and \(L:=\{x:\omega\cdot(x-y)=0\}\). Set \[a_{k}(t,x+s\omega)=\frac{i}{2}\int_{0}^{s}(i\partial_{t}+\Delta+q)a_{k-1}(t,x+ \tilde{s}\omega)\,d\tilde{s},\qquad x\in L,\quad j=1,\ldots,N. \tag{4.5}\] Then \(a_{j}\) (\(j=1,\ldots,N\)) satisfies (4.1) and vanishes on \(L\). The regularity of \(a_{j}\) inherits from \(a_{0}\), which is smooth both in \(t\) and \(x\). We introduce the notation \[\langle\tau,\xi\rangle:=(1+\tau^{2}+|\xi|^{2})^{1/2},\quad\tau\in\mathbb{R},\, \xi\in\mathbb{R}^{n}.\] **Proposition 4.1**.: _Let \(\omega\in\mathbb{R}^{n}\), \(N>0\) and \(m>\frac{n+1}{2}\) be an integer. Suppose that \(q\in C^{\infty}(Q)\). Then there exist GO solutions to the Schrodinger equation \((i\partial_{t}+\Delta+q)u=0\) in \(Q\) of the form_ \[u(t,x)=e^{i\Phi(t,x)}\left(a_{0}(t,x)+\sum_{k=1}^{N}\rho^{-k}a_{k}(t,x)\right) +r(t,x),\quad a_{0}(t,x)=\theta_{h}(t)e^{i(t\tau+x\cdot\xi)}\] _satisfying the initial condition \(u|_{t=0}=0\) in \(\Omega\) (or the final condition \(u|_{t=T}=0\) in \(\Omega\)). Here \(a_{k}\in H^{m}(Q)\) (\(k=1,\ldots,N\)) are given by (4.5) and satisfy_ \[\|a_{k}\|_{H^{m}(Q)}\leq C\langle\tau,\xi\rangle^{2k+m}h^{-k-m},\,\,0\leq k\leq N \tag{4.6}\] _for any \(\tau\in\mathbb{R}\), \(h\in(0,\frac{T^{*}}{4})\) small enough and \(\xi\in\omega^{\perp}\), where the constant \(C>0\) depending only on \(\Omega\) and \(T\). The remainder term \(r\) satisfies_ \[\|r\|_{H^{m}(Q)}\leq C\rho^{-N+2m}\langle\tau,\xi\rangle^{2N+m+2}h^{-(N+m+1)} \tag{4.7}\] _and_ \[\|r\|_{C^{1}([0,T],L^{2}(\Omega))\cap C([0,T],H^{2}(\Omega))}\leq C\rho^{-N+2} \langle\tau,\xi\rangle^{2N+2}h^{-N-2}\] _for some constant \(C>0\) depending only on \(\Omega\) and \(T\)._ Proof.: We show the proof for the case with zero initial condition. The case with zero final condition at \(T\) can be justified similarly. For \(k=0\), the estimate (4.6) clearly holds for \(m=0\). 
For \(m=1\), it is easy to check that \(\|\nabla a_{0}\|_{L^{2}(Q)}\leq C|\xi|\) and \(\|\partial_{t}a_{0}\|_{L^{2}(Q)}\leq C(|\tau|+|\xi|+h^{-1})\) and, therefore, when \(h\) is small, \[\|a_{0}\|_{H^{1}(Q)}\leq C\langle\tau,\xi\rangle h^{-1}.\] Similarly, we can also deduce the bound for \(\|a_{0}\|_{H^{m}(Q)}\). By induction, assuming that \(a_{k-1}\) satisfies \[\|a_{k-1}\|_{H^{m}(Q)}\leq C\langle\tau,\xi\rangle^{2k+m-2}h^{-k-m+1}.\] From (4.5), since we take \(x\)-derivative twice and \(t\)-derivative on \(a_{k-1}\), the estimate of \(\|a_{k}\|_{H^{m}(Q)}\) will receive extra \(\langle\tau,\xi\rangle^{2}\) and \(h^{-1}\) on top of \(\|a_{k-1}\|_{H^{m}(Q)}\). This leads to (4.6). Note that (4.6) holds for all integer \(m\geq 0\). Now we discuss the existence and estimates of \(r\) to the problem (4.2). From Proposition 3 and Lemma 4 in [39], since \(e^{i\Phi}(i\partial_{t}+\Delta+q)a_{N}\in\mathcal{H}_{0}^{m}\) with even integer \(m>\frac{n+1}{2}\), there exists a solution \(r\) to (4.2) so that \[\|r\|_{H^{m}(Q)}\leq C\rho^{-N}\|e^{i\Phi}(i\partial_{t}+\Delta+q)a_{N}\|_{H^{ m}(Q)}\leq C\rho^{-N+2m}\langle\tau,\xi\rangle^{2N+m+2}h^{-(N+m+1)}.\] In addition, from Lemma 2.3 in [24], one can also derive \[\|r\|_{C^{1}([0,T],L^{2}(\Omega))\cap C([0,T],H^{2}(\Omega))}\leq \,C\rho^{-N}\|e^{i\Phi}(i\partial_{t}+\Delta+q)a_{N}\|_{H^{1}(0,T, L^{2}(\Omega))}\] \[\leq \,C\rho^{-N}\rho^{2}(\|a_{N}\|_{H^{2}(0,T,L^{2}(\Omega))}+\|a_{N} \|_{H^{1}(0,T,H^{2}(\Omega))})\] \[\leq \,C\rho^{-N+2}\langle\tau,\xi\rangle^{2N+2}h^{-N-2}.\] **Remark 4.1**.: _The choice of \(a_{0}\) is quite flexible as long as \(\omega\cdot\nabla a_{0}=0\) is fulfilled. This flexibility is essential in the reconstruction of the unknown coefficient \(\beta\) since it will help eliminate the unwanted terms in the integral identity in order to obtain the Fourier transform of \(\beta\), see Section 4.4 for more detailed computations and explanations._ _For our purpose, we will also need the GO solution with a simple choice \(a_{0}(t,x)=\theta_{h}(t)\) where \(\theta_{h}(t)\) is given by (4.3). That is, there exist GO solutions to the Schrodinger equation \((i\partial_{t}+\Delta+q)u=0\) in \(Q\) of the form_ \[u(t,x)=e^{i\Phi(t,x)}\left(\theta_{h}(t)+\sum_{k=1}^{N}\rho^{-k}a_{k}(t,x) \right)+r(t,x),\] _satisfying the initial condition \(u|_{t=0}=0\) in \(\Omega\) (or the final condition \(u|_{t=T}=0\) in \(\Omega\)). From (4.1), we obtain \(\omega\cdot\nabla a_{1}=-\frac{1}{2}\partial_{t}\theta_{h}(t)\), implying_ \[a_{1}(t,x)=-\frac{1}{2|\omega|^{2}}\partial_{t}\theta_{h}(t)x\cdot\omega\] _with \(a_{1}(t,x)=0\) on \(\omega^{\perp}\). The rest of \(a_{k}\in H^{m}(Q)\) (\(k=2,\ldots,N\)) are given by (4.5) and one can verify_ \[\|a_{k}\|_{H^{m}(Q)}\leq Ch^{-k-m},\ 0\leq k\leq N \tag{4.8}\] _and_ \[\|r\|_{H^{m}(Q)}\leq C\rho^{-N+2m}h^{-(N+m+1)},\qquad\|r\|_{C^{1}([0,T],L^{2} (\Omega))\cap C([0,T],H^{2}(\Omega))}\leq C\rho^{-N+2}h^{-N-2} \tag{4.9}\] _for some constant \(C>0\) depending only on \(\Omega\) and \(T\). Note that under this construction \(a_{k}\) (\(k=0,\cdots,N\)) all vanish on \(((0,h)\cup(T-h,T))\times\Omega\)._ ### Unique continuation property (UCP) Recall that \(\mathcal{O}\subset\Omega\) is an open neighborhood of \(\partial\Omega\). 
Let \(\mathcal{O}_{j}\) (\(j=1,2,3\)) denote the open subsets of \(\mathcal{O}\) such that \(\overline{\mathcal{O}}_{j+1}\subset\mathcal{O}_{j},\ \overline{\mathcal{O}}_{j}\subset \mathcal{O}.\) Set \(\Omega_{j}:=\Omega\setminus\overline{\mathcal{O}}_{j}\) and \(Q_{j}:=(0,T)\times\Omega_{j}.\) We will need the following lemma of UCP and its corollary for the linear Schrodinger equation. The lemma follows directly from [7] by setting the magnetic potential to be zero. **Lemma 4.1** (Unique continuation property).: _Suppose that \(q\in\mathcal{M}_{\mathcal{O}}\). Let \(\tilde{w}\in H^{1,2}(Q)\) be a solution to the following system_ \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)\tilde{w}&=&g_{0}&\text{ in }Q,\\ \tilde{w}&=&0&\text{ on }\Sigma,\\ \tilde{w}&=&0&\text{ on }\{0\}\times\Omega,\end{array}\right. \tag{4.10}\] _where \(g_{0}\in L^{2}(Q)\) and \(\operatorname{supp}(g_{0})\subset(0,T)\times(\Omega\setminus\mathcal{O})\). Then for any \(T^{*}\in(0,T)\), there exist \(C>0,\gamma^{*}>0,m_{1}>0,\mu_{1}<1\) such that the following estimate holds_ \[\|\tilde{w}\|_{L^{2}((0,T^{*})\times(\Omega_{3}\setminus\Omega_{2}))}\leq C \left(\gamma^{-\mu_{1}}\|\tilde{w}\|_{H^{1,1}(Q)}+e^{m_{1}\gamma}\|\partial_{ \nu}\tilde{w}\|_{L^{2}(\Sigma^{\sharp})}\right),\] _for any \(\gamma>\gamma^{*}\). Here the constants \(C,\)\(m_{1}\) and \(\mu_{1}\) depend on \(\Omega,\)\(\mathcal{O},\)\(T^{*}\) and \(T.\)_ **Corollary 4.1**.: _Let \(q\in\mathcal{M}_{\mathcal{O}}\), and \(\tilde{w}\in H^{1,2}(Q)\) a solution of (4.10) where \(g_{0}\in L^{2}(Q)\) and \(\operatorname{supp}(g_{0})\subset(0,T)\times(\Omega\setminus\mathcal{O})\) such that \(\partial_{\nu}\tilde{w}=0\) on \(\Sigma^{\sharp}\). Then \(\tilde{w}=0\) in \((0,T)\times(\Omega_{3}\setminus\Omega_{2})\)._ ### The integral identity In this section, we derive the needed integral identity to prove the stability estimate in Theorem 1.2. We denote \[Q^{*}:=(0,T^{*})\times\Omega\quad\text{ for }0<T^{*}<T.\] Recall the notation \(u_{\ell,\varepsilon f}\) (\(\ell=1,2\)) that denotes the small unique solution to the initial boundary value problem \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)u_{\ell,\varepsilon f}+ \beta_{\ell}u_{\ell,\varepsilon f}^{2}&=&0&\text{ in }Q^{*},\\ u_{\ell,\varepsilon f}&=&\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}&\text{ on }\Sigma,\\ u_{\ell,\varepsilon f}&=&0&\text{ on }\{0\}\times\Omega,\end{array}\right.\] where \(f_{1},f_{2}\in H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)\) and \(|\varepsilon|:=|\varepsilon_{1}|+|\varepsilon_{2}|\) is sufficiently small such that \(\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}\in\mathcal{S}_{\lambda}(\Sigma)\). Also, let \(U_{j}\) and \(W_{\ell,(1,1)}\) be the solutions to the equations (3.16) and (3.17), respectively. In addition, let \(U_{0}\) be the solution of the adjoint problem, \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)U_{0}&=&0&\text{ in }Q^{*},\\ U_{0}&=&f_{0}&\text{ on }\Sigma,\\ U_{0}&=&0&\text{ on }\{T^{*}\}\times\Omega\end{array}\right.\] for some \(f_{0}\in H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)\) to be specified later. 
We also introduce a smooth cut-off function \(\chi\in C^{\infty}(\overline{\Omega})\) satisfying \(0\leq\chi\leq 1\) and \[\chi(x)=\left\{\begin{array}{rcll}0&\text{ in }\mathcal{O}_{3},\\ 1&\text{ in }\overline{\Omega}\setminus\mathcal{O}_{2},\end{array}\right.\] and denote \(W:=W_{2,(1,1)}-W_{1,(1,1)}\), which solves \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)W&=&2\beta U_{1}U_{2}& \text{ in }Q^{*},\\ W&=&0&\text{ on }\Sigma,\\ W&=&0&\text{ on }\{0\}\times\Omega,\end{array}\right. \tag{4.11}\] where \(\beta=\beta_{1}-\beta_{2}\). As we will see below, by applying this cut-off function \(\chi\) to \(W\), whose Neumann data is not necessary zero, we have a control of the energy near the boundary using UCP. First, we obtain the following key integral identity. **Lemma 4.2**.: _Suppose that \(\beta:=\beta_{1}-\beta_{2}\in\mathcal{M}_{\mathcal{O}}\). Let \(U_{j}\) and \(W\) be as above. Then_ \[\int_{Q^{*}}2\beta U_{1}U_{2}\overline{U}_{0}+[\Delta,\chi]W\overline{U}_{0}\, dxdt=0, \tag{4.12}\] _where \([\Delta,\chi]:=\Delta\chi-\chi\Delta\) is the commutator bracket._ Proof.: Let \(W^{*}(t,x):=\chi(x)W(t,x)\). Note that since \(\beta_{1}-\beta_{2}=0\) in \([0,T]\times\mathcal{O}\) and \(\chi=1\) in \(\overline{\Omega}\setminus\mathcal{O}\) (a subset of \(\overline{\Omega}\setminus\mathcal{O}_{2}\)), we have \[\chi(\beta_{1}-\beta_{2})=\beta_{1}-\beta_{2}\quad\text{ in }Q.\] This implies that the function \(W^{*}\) satisfies \[\left\{\begin{array}{rcll}(i\partial_{t}+\Delta+q)W^{*}&=&2\beta U_{1}U_{2} +[\Delta,\chi]W&\text{ in }Q^{*},\\ W^{*}&=&0&\text{ on }\Sigma,\\ W^{*}&=&0&\text{ on }\{0\}\times\Omega.\end{array}\right.\] In particular, we have \[W^{*}|_{\Sigma}=\partial_{\nu}W^{*}|_{\Sigma}=0.\] We multiply the first equation in (3.17) by \(\overline{U}_{0}\) and then integrate over \(Q^{*}\). Using the condition \(\overline{U}_{0}|_{t=T^{*}}=W|_{t=0}=0\), we finally obtain \[\int_{Q^{*}}2\beta U_{1}U_{2}\overline{U}_{0}+[\Delta,\chi]W\overline{U}_{0} \,dxdt=\int_{\Sigma}(\overline{U}_{0}\partial_{\nu}W^{*}-W^{*}\partial_{\nu} \overline{U}_{0})\,d\sigma(x)dt=0.\] ### Proof of the stability estimate (Theorem 1.2) Below we derive a series of estimates to prove the final stability result in Theorem 1.2. We choose to plug in GO solutions \(U_{j}\), \(j=0,1,2\) as in Proposition 4.1 and Remark 4.1. Specifically, we take \[U_{j}(t,x):=v_{j}(t,x)+r_{j}(t,x)=e^{i\Phi_{j}(t,x)}\left(a_{0}^{(j)}+\sum_{k= 1}^{N}\rho^{-k}a_{k}^{(j)}(t,x)\right)+r_{j}(t,x),\qquad j=0,1,2,\] where the phase function \(\Phi_{j}\) are of the form \[\Phi_{j}(t,x)=\rho\left(x\cdot\omega_{j}-\rho|\omega_{j}|^{2}t\right)\] with the vectors \(\omega_{1}\), \(\omega_{2}\) and \(\omega_{0}\) satisfying \[\omega_{1}+\omega_{2}=\omega_{0},\quad|\omega_{1}|^{2}+|\omega_{2}|^{2}=| \omega_{0}|^{2}. \tag{4.13}\] The leading amplitudes \(a_{0}^{(j)}\) are given by \[a_{0}^{(1)}(t,x)=a_{0}^{(2)}(t,x)=\theta_{h}(t),\quad a_{0}^{(0)}(t,x)=\theta_ {h}(t)e^{i(\tau t+x\cdot\xi)},\] where \(\tau\in\mathbb{R}\) and \(\xi\in\omega_{0}^{\perp}\). 
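For orientation, the two conditions in (4.13) simply say that \(\omega_{1}\) and \(\omega_{2}\) are orthogonal: expanding \[|\omega_{0}|^{2}=|\omega_{1}+\omega_{2}|^{2}=|\omega_{1}|^{2}+2\,\omega_{1}\cdot\omega_{2}+|\omega_{2}|^{2},\] we see that, given the first condition, the second one holds precisely when \(\omega_{1}\cdot\omega_{2}=0\). For instance \(\omega_{1}=e_{1}\), \(\omega_{2}=e_{2}\), \(\omega_{0}=e_{1}+e_{2}\) is an admissible choice, and then \(\xi\in\omega_{0}^{\perp}\) ranges over the hyperplane orthogonal to \(e_{1}+e_{2}\).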
Substituting \(U_{j}=v_{j}+r_{j}\) (\(j=0,1,2\)) into the first term on the left-hand side of the identity (4.12), we get \[\int_{Q^{*}}2\beta U_{1}U_{2}\overline{U}_{0}\,dxdt=\int_{Q^{*}}2\beta v_{1}v_{2}\overline{v}_{0}\,dxdt+R_{1}+R_{2}+R_{3}, \tag{4.14}\] where the remainder terms are grouped into \[R_{1}:=2\int_{Q^{*}}\beta(\overline{r}_{0}v_{1}v_{2}+r_{1}v_{2}\overline{v}_{0}+r_{2}v_{1}\overline{v}_{0})\,dxdt,\] \[R_{2}:=2\int_{Q^{*}}\beta(\overline{r}_{0}r_{1}v_{2}+\overline{r}_{0}r_{2}v_{1}+r_{1}r_{2}\overline{v}_{0})\,dxdt,\] \[R_{3}:=2\int_{Q^{*}}\beta r_{1}r_{2}\overline{r}_{0}\,dxdt.\] We have the following asymptotics. **Lemma 4.3**.: _Let \(m>\frac{n+1}{2}\). There exist \(\rho_{0}>1\) and \(1>h_{0}>0\) such that, for \(\rho>\rho_{0}\) and \(0<h<h_{0}\),_ \[2\int_{Q^{*}}\beta v_{1}v_{2}\overline{v}_{0}\,dxdt=2\int_{Q^{*}}\beta a_{0}^{(1)}a_{0}^{(2)}\overline{a}_{0}^{(0)}\,dxdt+I, \tag{4.15}\] _where_ \[|I|\leq C\left(\rho^{-1}\langle\tau,\xi\rangle^{2N}h^{-N}+\rho^{-2}\langle\tau,\xi\rangle^{2N}h^{-2N}+\rho^{-3}\langle\tau,\xi\rangle^{2N+m}h^{-3N-3m}\right),\] _for any \(\tau\in\mathbb{R}\) and \(\xi\in\omega_{0}^{\perp}\). Here the positive constant \(C\) depends on \(Q^{*},\)\(N\), and \(\beta\)._ Proof.: By the definition of \(v_{j}\), we have the identity \[2\int_{Q^{*}}\beta v_{1}v_{2}\overline{v}_{0}\,dxdt=2\int_{Q^{*}}\beta a_{0}^{(1)}a_{0}^{(2)}\overline{a}_{0}^{(0)}\,dxdt+I_{1}+I_{2}+I_{3},\] where we used the conditions (4.13) to get \(\Phi_{1}+\Phi_{2}-\Phi_{0}=0\). Here the remaining \(O(\rho^{-1})\) terms are grouped into \[I_{1}:=2\int_{Q^{*}}\beta\left[a_{0}^{(1)}a_{0}^{(2)}\left(\sum_{k=1}^{N}\rho^{-k}\overline{a}_{k}^{(0)}\right)+a_{0}^{(1)}\overline{a}_{0}^{(0)}\left(\sum_{k=1}^{N}\rho^{-k}a_{k}^{(2)}\right)+a_{0}^{(2)}\overline{a}_{0}^{(0)}\left(\sum_{k=1}^{N}\rho^{-k}a_{k}^{(1)}\right)\right]\,dxdt,\] \[I_{2}:=2\int_{Q^{*}}\beta\Bigg{[}a_{0}^{(1)}\left(\sum_{k=1}^{N}\rho^{-k}a_{k}^{(2)}\right)\left(\sum_{k=1}^{N}\rho^{-k}\overline{a}_{k}^{(0)}\right)+a_{0}^{(2)}\left(\sum_{k=1}^{N}\rho^{-k}a_{k}^{(1)}\right)\left(\sum_{k=1}^{N}\rho^{-k}\overline{a}_{k}^{(0)}\right)+\overline{a}_{0}^{(0)}\left(\sum_{k=1}^{N}\rho^{-k}a_{k}^{(1)}\right)\left(\sum_{k=1}^{N}\rho^{-k}a_{k}^{(2)}\right)\Bigg{]}\,dxdt,\] and \[I_{3}:=2\int_{Q^{*}}\beta\left(\sum_{k=1}^{N}\rho^{-k}a_{k}^{(1)}\right)\left(\sum_{k=1}^{N}\rho^{-k}a_{k}^{(2)}\right)\left(\sum_{k=1}^{N}\rho^{-k}\overline{a}_{k}^{(0)}\right)\,dxdt.\] Let us estimate each \(I_{j}\). To this end, it is sufficient to control the first term in each \(I_{j}\) since the other terms can be handled similarly. The first term in \(I_{1}\) is controlled by \[2\left|\int_{Q^{*}}\beta a_{0}^{(1)}a_{0}^{(2)}\left(\sum_{k=1}^{N}\rho^{-k}\overline{a}_{k}^{(0)}\right)\,dxdt\right|\] \[\leq C\|\beta\|_{L^{\infty}(Q^{*})}\|a_{0}^{(1)}\|_{L^{\infty}(Q^{*})}\|a_{0}^{(2)}\|_{L^{\infty}(Q^{*})}\left(\sum_{k=1}^{N}\rho^{-k}\|\overline{a}_{k}^{(0)}\|_{L^{2}(Q^{*})}\right)\] \[\leq C\sum_{k=1}^{N}\rho^{-k}\|\overline{a}_{k}^{(0)}\|_{L^{2}(Q^{*})}\leq C\rho^{-1}\|\overline{a}_{N}^{(0)}\|_{L^{2}(Q^{*})}\leq C\rho^{-1}\langle\tau,\xi\rangle^{2N}h^{-N},\] by (4.6) for sufficiently large \(\rho>1\) and small \(h<1\), where \(C\) depends on \(Q^{*}\), \(N\) and \(\beta\). Similarly, the second term and the third term are less than \(C\rho^{-1}h^{-N}\) by applying (4.8) instead. Combining these estimates gives \[|I_{1}|\leq C\rho^{-1}\langle\tau,\xi\rangle^{2N}h^{-N}.
\tag{4.16}\] For \(I_{2}\), applying Holder's inequality, the first term is controlled by \[2\left|\int_{Q^{*}}\beta a_{0}^{(1)}\left(\sum_{k=1}^{N}\rho^{-k}a_ {k}^{(2)}\right)\left(\sum_{k=1}^{N}\rho^{-k}\overline{a}_{k}^{(0)}\right)\,dxdt\right|\] \[\leq C\|\beta\|_{L^{\infty}(Q^{*})}\|a_{0}^{(1)}\|_{L^{\infty}(Q^{*})} \left(\sum_{k=1}^{N}\rho^{-k}\|a_{k}^{(2)}\|_{L^{2}(Q^{*})}\right)\left(\sum_{k =1}^{N}\rho^{-k}\|\overline{a}_{k}^{(0)}\|_{L^{2}(Q^{*})}\right)\] \[\leq C\left(\sum_{k=1}^{N}\rho^{-k}h^{-k}\right)\left(\sum_{k=1}^{N} \rho^{-k}\langle\tau,\xi\rangle^{2k}h^{-k}\right)\] \[\leq C\rho^{-2}\langle\tau,\xi\rangle^{2N}h^{-2N},\] by using (4.6) and (4.8) again. Similarly, the second and the third terms share the same bound. Therefore we have \[|I_{2}|\leq C\rho^{-2}\langle\tau,\xi\rangle^{2N}h^{-2N}. \tag{4.17}\] Finally, since \(m>\frac{n+1}{2}\), we can control \(I_{3}\) by \[|I_{3}|\leq C\|\beta\|_{L^{\infty}(Q^{*})}\left(\sum_{k=1}^{N}\rho^{-k}\|a_{k} ^{(1)}\|_{H^{m}(Q^{*})}\right)\left(\sum_{k=1}^{N}\rho^{-k}\|a_{k}^{(2)}\|_{H^ {m}(Q^{*})}\right)\left(\sum_{k=1}^{N}\rho^{-k}\|\overline{a}_{k}^{(0)}\|_{H^ {m}(Q^{*})}\right)\] \[\leq C\rho^{-3}\|a_{N}^{(1)}\|_{H^{m}(Q^{*})}\|a_{N}^{(2)}\|_{H^{m}(Q ^{*})}\|a_{N}^{(0)}\|_{H^{m}(Q^{*})} \tag{4.18}\] \[\leq C\rho^{-3}\langle\tau,\xi\rangle^{2N+m}h^{-3N-3m}\] Combining (4.16), (4.17), and (4.18) completes the proof. **Lemma 4.4**.: _Let \(m>\frac{n+1}{2}\). Then there exists \(\rho_{0}>1\) and \(0<h_{0}<1\) such that the three remainder terms satisfy the following estimates:_ \[|R_{1}|\leq C\rho^{-N+2m}\langle\tau,\xi\rangle^{2N+m+2}h^{-3N-3m-1},\] \[|R_{2}|\leq C\rho^{-2N+4m}\langle\tau,\xi\rangle^{2N+m+2}h^{-3N-3m-2},\] _and_ \[|R_{3}|\leq C\rho^{-3N+6m}\langle\tau,\xi\rangle^{2N+m+2}h^{-3N-3m-3}\] _for \(\rho>\rho_{0}\), \(0<h<h_{0}\), \(\tau\in\mathbb{R}\) and \(\xi\in\omega_{0}^{\perp}\), where the positive constant \(C\) depends on \(Q^{*},\,N\), and \(\beta\)._ Proof.: Again it is sufficient to evaluate the first term in each \(R_{j}\). Substituting \(v_{1},\,v_{2}\), and \(r_{0}\) into the first term of \(R_{1}\), we get \[\int_{Q^{*}}\beta\overline{\tau}_{0}v_{1}v_{2}dxdt=\int_{Q^{*}}\beta\overline {\tau}_{0}e^{i\Phi_{0}}\left(\sum_{k=0}^{N}\rho^{-k}a_{k}^{(1)}\right)\left( \sum_{k=0}^{N}\rho^{-k}a_{k}^{(2)}\right)dxdt.\] Since \(H^{m}(Q)\) is an algebra, by (4.7) and (4.8), we have \[\left|\int_{Q^{*}}\beta\overline{\tau}_{0}v_{1}v_{2}dxdt\right|\leq C\|\beta\|_{L^{\infty}(Q^{*})}\|r_{0}\|_{H^{m}(Q^{*})}\left(\sum_{k=0}^{N} \rho^{-k}\|a_{k}^{(1)}\|_{H^{m}(Q^{*})}\right)\left(\sum_{k=0}^{N}\rho^{-k}\|a _{k}^{(2)}\|_{H^{m}(Q)}\right)\] \[\leq C\rho^{-N+2m}\langle\tau,\xi\rangle^{2N+m+2}h^{-N-m-1}\|a_{N}^{( 1)}\|_{H^{m}(Q^{*})}\|a_{N}^{(2)}\|_{H^{m}(Q^{*})}\] \[\leq C\rho^{-N+2m}\langle\tau,\xi\rangle^{2N+m+2}h^{-3N-3m-1}.\] The rest terms in \(R_{1}\) satisfy the same estimate similarly. The same argument also gives the corresponding bounds for \(R_{2}\) and \(R_{3}\), using (4.6), (4.7), (4.8) and (4.9). This completes the proof of this lemma. Now we are ready to prove an estimate for the Fourier transform of \(\beta\theta_{h}^{3}(t)\) below. **Lemma 4.5**.: _Let \(2\kappa>\frac{n+1}{2}\), \(N>4\kappa+1\) and \(\beta=\beta_{1}-\beta_{2}\in\mathcal{M}_{\mathcal{O}}\). 
For \(\rho>\rho_{0}>1\) and \(1>h_{0}>h>0\), we have_ \[2\left|\int_{Q^{*}}\beta\theta_{h}^{3}(t)e^{-i(\tau t+x\cdot\xi)}\,dxdt\right| \leq\left|\int_{Q^{*}}[\Delta,\chi]W\overline{U}_{0}\,dxdt\right|+C\rho^{-1} \langle\tau,\xi\rangle^{2N+2\kappa+2}h^{-3N-6\kappa-3} \tag{4.19}\] _for \(\tau\in\mathbb{R}\) and \(\xi\in\omega_{0}^{\perp}\). Here the constant \(C>0\) is independent of \(\rho,\,\tau,\,\xi\) and \(h\)._ Proof.: We derive from (4.14), (4.15) with \(m=2\kappa\) and the identity (4.12) that \[2\int_{Q^{*}}\beta a_{0}^{(1)}a_{0}^{(2)}\overline{a}_{0}^{(0)}\,dxdt=-\int_{ Q^{*}}[\Delta,\chi]W\overline{U}_{0}\,dxdt-(I+R_{1}+R_{2}+R_{3}). \tag{4.20}\] With the estimates in Lemma 4.3 and Lemma 4.4, we can further simplify the estimate of \(I+R_{1}+R_{2}+R_{3}\) into \[|I+R_{1}+R_{2}+R_{3}|\leq C\rho^{-1}\langle\tau,\xi\rangle^{2N+2\kappa+2}h^{-3 N-6\kappa-3}\] by noting that \(N>4\kappa+1\), \(\rho>1\), and \(\langle\tau,\xi\rangle\geq 1\). The lemma is then proved by recalling that \(a_{0}^{(0)}=\theta_{h}(t)e^{i(\tau t+x\cdot\xi)}\) and \(a_{0}^{(1)}=a_{0}^{(2)}=\theta_{h}(t)\). Next we try to estimate the first term on the right hand side of (4.19) in terms of the boundary measurements difference. **Lemma 4.6**.: _Let \(2\kappa>\frac{n+1}{2}\) and \(\beta=\beta_{1}-\beta_{2}\in\mathcal{M}_{\mathcal{O}}\). Suppose_ \[\|(\Lambda_{q,\beta_{1}}-\Lambda_{q,\beta_{2}})f\|_{L^{2}(\Sigma^{\sharp})} \leq\delta\qquad\text{ for all }f\in\mathcal{S}_{\lambda}(\Sigma).\] _Then for \(\rho>\rho_{0}>1\), \(1>h_{0}>h>0\) and \(|\varepsilon_{1}|+|\varepsilon_{2}|\) sufficiently small, we have_ \[\|W\|_{H^{1,1}(Q)}\leq C\rho^{8\kappa}h^{-2N-4\kappa-2}\] _and_ \[\|\partial_{\nu}W\|_{L^{2}(\Sigma^{\sharp})}\leq\frac{C}{\varepsilon_{1} \varepsilon_{2}}\left(\delta+(\varepsilon_{1}+\varepsilon_{2})^{3}\rho^{12 \kappa+12}h^{-3N-6\kappa-9}\right).\] Proof.: Recall that from Remark 4.1, we can derive \[\|U_{j}\|_{H^{2s}(Q)}\leq C\rho^{4s}h^{-N-2s-1},\qquad j=1,2 \tag{4.21}\] when \(2s>\frac{n+1}{2}\). We first take \(s=\kappa\). Since the non-homogeneous term of (4.11) is \(2\beta U_{1}U_{2}\in H^{2\kappa}\), applying Lemma 4 in [39] yields that \[\|W\|_{H^{1,1}(Q)}\leq\|W\|_{H^{2\kappa}(Q)}\leq C\|U_{1}\|_{H^{2\kappa}(Q)}\| U_{2}\|_{H^{2\kappa}(Q)}\leq C\rho^{8\kappa}h^{-2N-4\kappa-2},\] where \(C\) depends on \(\beta\), \(\Omega\) and \(T\). Below we will estimate \(\partial_{\nu}W=\partial_{\nu}W_{2,(1,1)}-\partial_{\nu}W_{1,(1,1)}\). From \(f_{j}=U_{j}|_{\Sigma}\), according to (4.21) with \(s=\kappa+1\) and Theorem 2.1 (the trace theorem) in [42], we obtain \[\|f_{j}\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}\leq C\|U_{j}\| _{H^{2\kappa+2}(Q)}\leq C\rho^{4\kappa+4}h^{-N-2\kappa-3}, \tag{4.22}\] for \(j=1,\,2\), where the constant \(C\) is independent of \(f_{j}\). Denote \(\widetilde{\mathcal{R}}=\widetilde{\mathcal{R}}_{2}-\widetilde{\mathcal{R}}_{1}\) where \(\widetilde{\mathcal{R}}_{\ell}\) is the remainder as in (3.20) for \(u_{\ell,\varepsilon f}\) (\(\ell=1,2\)). 
From (3.22) and (4.22), we obtain \[\|\partial_{\nu}W\|_{L^{2}(\Sigma^{\sharp})}\] \[\leq\frac{1}{\varepsilon_{1}\varepsilon_{2}}\|\widetilde{ \Lambda}(\varepsilon_{1}f_{1}-\varepsilon_{2}f_{2})-\widetilde{\Lambda}( \varepsilon_{1}f_{1})-\widetilde{\Lambda}(\varepsilon_{2}f_{2})\|_{L^{2}( \Sigma^{\sharp})}+\frac{1}{\varepsilon_{1}\varepsilon_{2}}\|\partial_{\nu} \widetilde{\mathcal{R}}\|_{L^{2}(\Sigma^{\sharp})}\] \[\leq\frac{3}{\varepsilon_{1}\varepsilon_{2}}\delta+C\frac{1}{ \varepsilon_{1}\varepsilon_{2}}(\varepsilon_{1}+\varepsilon_{2})^{3}\left(\|f _{1}\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}+\|f_{2}\|_{H^{2 \kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}\right)^{3}\] \[\leq\frac{C}{\varepsilon_{1}\varepsilon_{2}}\left(\delta+( \varepsilon_{1}+\varepsilon_{2})^{3}\rho^{12\kappa+12}h^{-3N-6\kappa-9} \right).\] where \(\widetilde{\Lambda}:=\Lambda_{q,\beta_{1}}-\Lambda_{q,\beta_{2}}\). **Lemma 4.7**.: _Suppose that \(q\in\mathcal{M}_{\mathcal{O}}\) and \(\beta_{1}-\beta_{2}\in\mathcal{M}_{\mathcal{O}}\). Then for \(N>0\) large enough there exist \(\gamma^{*}>0\), \(m_{1}>0\), \(\rho_{0}>1\) and \(0<h_{0}<1\) such that_ \[\left|\int_{Q^{*}}[\Delta,\chi]W\overline{U}_{0}\,dxdt\right|\leq C\langle \tau,\xi\rangle^{2N+4}h^{-4N-6\kappa-12}\left(\gamma^{-\mu_{1}}\rho^{8\kappa +4}+e^{m_{1}\gamma}\rho^{12\kappa+16}\left(\varepsilon^{-2}\delta+\varepsilon \right)\right).\] _for \(\gamma>\gamma^{*}\), \(\tau\in\mathbb{R}\), \(\xi\in\omega_{0}^{\perp}\), \(\rho>\rho_{0}\) and \(0<h<h_{0}\). Moreover, for each \((\tau,\xi)\in\mathbb{R}^{n+1}\), the Fourier transform of \(\beta\theta_{h}^{3}\) (extended by zero outside \(Q^{*}\)) satisfies_ \[|\widetilde{\beta\theta_{h}^{3}}(\tau,\xi)| \leq C\Big{(}\rho^{-1}\langle\tau,\xi\rangle^{2N+2\kappa+2}h^{-3 N-6\kappa-3}+\langle\tau,\xi\rangle^{2N+4}\gamma^{-\mu_{1}}\rho^{8\kappa+4}h^{-4N-6 \kappa-12} \tag{4.23}\] \[\quad+\langle\tau,\xi\rangle^{2N+4}e^{m_{1}\gamma}\rho^{12\kappa +16}h^{-4N-6\kappa-12}\left(\varepsilon^{-2}\delta+\varepsilon\right)\Big{)}.\] Proof.: We choose \(\varepsilon_{1}=\varepsilon_{2}=:\varepsilon\). From Lemma 4.6, we obtain \[\|\partial_{\nu}W\|_{L^{2}(\Sigma^{\sharp})}\leq C\left(\varepsilon^{-2}\delta+ \varepsilon\rho^{12\kappa+12}h^{-3N-6\kappa-9}\right).\] By the UCP in Lemma 4.1, there exist \(\gamma^{*}>0\), \(m_{1}>0\) and \(\mu_{1}<1\) such that \[\left|\int_{Q^{*}}[\Delta,\chi]W\overline{U}_{0}\,dxdt\right|\] \[\leq C\|[\Delta,\chi]W\|_{L^{2}(0,T^{*};H^{-1}(\Omega_{3}\setminus \Omega_{2}))}\|\overline{U}_{0}\|_{L^{2}(0,T^{*};H^{1}(\Omega))}\] \[\leq C\|W\|_{L^{2}((0,T^{*})\times(\Omega_{3}\setminus\Omega_{2}) )}\|\overline{U}_{0}\|_{L^{2}(0,T^{*};H^{2}(\Omega))}\] \[\leq C\left(\gamma^{-\mu_{1}}\|W\|_{H^{1,1}(Q)}+e^{m_{1}\gamma} \|\partial_{\nu}W\|_{L^{2}(\Sigma^{\sharp})}\right)\rho^{4}\langle\tau,\xi \rangle^{2N+4}h^{-N-3}\] \[\leq C\langle\tau,\xi\rangle^{2N+4}\left(\gamma^{-\mu_{1}}\rho^{8 \kappa+4}h^{-3N-4\kappa-5}+e^{m_{1}\gamma}\left(\varepsilon^{-2}\rho^{4}h^{-N -3}\delta+\varepsilon\rho^{12\kappa+16}h^{-4N-6\kappa-12}\right)\right)\] for any \(\gamma>\gamma^{*}\). Together with Lemma 4.5 this leads to (4.23) for \(\xi\in\omega_{0}^{\perp}\). Choosing enough \(\omega_{0}\), this ends the proof. 
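Regarding the last step ("choosing enough \(\omega_{0}\)"): for every spatial frequency \(\xi\neq 0\) one needs some \(\omega_{0}\) with \(\xi\in\omega_{0}^{\perp}\), together with a splitting \(\omega_{0}=\omega_{1}+\omega_{2}\) satisfying (4.13). One concrete construction (ours; the argument only uses that such vectors exist) is to take any nonzero \(\omega_{0}\perp\xi\) and any unit vector \(e\perp\omega_{0}\), and to set \(\omega_{1}=\frac{1}{2}(\omega_{0}+|\omega_{0}|e)\), \(\omega_{2}=\frac{1}{2}(\omega_{0}-|\omega_{0}|e)\). A minimal numerical check:

```python
import numpy as np

def admissible_triple(xi, seed=1):
    """Given a spatial frequency xi != 0 in R^n (n >= 2), return (omega0, omega1, omega2) with
    omega1 + omega2 = omega0, |omega1|^2 + |omega2|^2 = |omega0|^2 and xi . omega0 = 0.
    One possible construction; degenerate random draws are not guarded against in this sketch."""
    xi = np.asarray(xi, float)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(xi.size)
    omega0 = v - (v @ xi) / (xi @ xi) * xi             # a nonzero vector orthogonal to xi
    w = rng.standard_normal(xi.size)
    e = w - (w @ omega0) / (omega0 @ omega0) * omega0  # direction orthogonal to omega0
    e /= np.linalg.norm(e)
    omega1 = 0.5 * (omega0 + np.linalg.norm(omega0) * e)
    omega2 = 0.5 * (omega0 - np.linalg.norm(omega0) * e)
    return omega0, omega1, omega2

xi = np.array([3.0, -1.0, 2.0])
w0, w1, w2 = admissible_triple(xi)
assert abs(w0 @ xi) < 1e-12                              # xi lies in omega0^perp
assert np.allclose(w1 + w2, w0)                          # first condition in (4.13)
assert abs(w1 @ w1 + w2 @ w2 - w0 @ w0) < 1e-12          # second condition in (4.13)
```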
Proof of Theorem 1.2.: Let \(\rho=\gamma^{\frac{\mu_{1}}{8\kappa+5}}\) so that \[\rho^{-1}=\gamma^{-\mu_{1}}\rho^{8\kappa+4}.\] We denote \[\alpha_{1}:=4N+6\kappa+12,\quad\alpha_{2}:=2N+2\kappa+2,\quad\mu:=\frac{\mu_{1}} {8\kappa+5}.\] Then from (4.23), it is not hard to see \[|\widetilde{\beta\theta_{h}^{3}}(\tau,\xi)|\leq C\langle\tau,\xi\rangle^{\alpha _{2}}h^{-\alpha_{1}}\Big{(}\gamma^{-\mu}+e^{m_{2}\gamma}(\varepsilon+ \varepsilon^{-2}\delta)\Big{)}, \tag{4.24}\] with some index \(m_{2}>m_{1}>0\). For a fixed \(M>1\), by (4.24) and Plancherel theorem, we deduce \[\|\beta\theta_{h}^{3}\|_{H^{-1}(\mathbb{R}^{n+1})}^{2} =\int_{|(\tau,\xi)|\leq M}\langle\tau,\xi\rangle^{-2}|\widehat{ \beta\theta_{h}^{3}}(\tau,\xi)|^{2}d\tau d\xi+\int_{|(\tau,\xi)|>M}\langle\tau, \xi\rangle^{-2}|\widehat{\beta\theta_{h}^{3}}(\tau,\xi)|^{2}d\tau d\xi\] \[\leq C\left(\int_{|(\tau,\xi)|\leq M}\langle\tau,\xi\rangle^{2 \alpha_{2}}h^{-2\alpha_{1}}\left(\gamma^{-2\mu}+e^{2m_{2}\gamma}(\varepsilon^{ 2}+\varepsilon^{-4}\delta^{2})\right)\,d\tau d\xi+M^{-2}\|\beta\theta_{h}^{3} \|_{L^{2}(\mathbb{R}^{n+1})}^{2}\right)\] \[\leq CM^{2\alpha_{2}+n+1}h^{-2\alpha_{1}}\left(\gamma^{-2\mu}+e^{ 2m_{2}\gamma}(\varepsilon^{2}+\varepsilon^{-4}\delta^{2})\right)+CM^{-2}m_{0}^ {2},\] by recalling that \(|\beta|\leq m_{0}\). Thus, \[\|\beta\theta_{h}^{3}\|_{H^{-1}(\mathbb{R}^{n+1})}\leq CM^{\alpha_{2}+\frac{n+ 1}{2}}h^{-\alpha_{1}}\left(\gamma^{-\mu}+e^{m_{2}\gamma}(\varepsilon+ \varepsilon^{-2}\delta)\right)+CM^{-1}.\] By interpolating and (4.4), \[\|\beta\theta_{h}^{3}\|_{L^{2}(Q^{*})}^{2} \leq\|\beta\theta_{h}^{3}\|_{H^{-1}(Q^{*})}\|\beta\theta_{h}^{3} \|_{H^{1}(Q^{*})}\leq C\|\beta\theta_{h}^{3}\|_{H^{-1}(Q^{*})}h^{-1}\] \[\leq CM^{\alpha_{2}+\frac{n+1}{2}}h^{-\alpha_{1}-1}\left(\gamma^{- \mu}+e^{m_{2}\gamma}(\varepsilon+\varepsilon^{-2}\delta)\right)+CM^{-1}h^{-1}.\] In addition, we write \[\beta=\beta\theta_{h}^{3}+\beta(1-\theta_{h}^{3}).\] Note that \(1-\theta_{h}^{3}=0\) in \([2h,T^{*}-2h]\), which leads to \[\|1-\theta_{h}^{3}\|_{L^{2}(0,T^{*})}^{2}\leq\int_{0}^{2h}(1-\theta_{h}^{3})^ {2}dt+\int_{T^{*}-2h}^{T^{*}}(1-\theta_{h}^{3})^{2}dt\leq 4h.\] Hence, \[\|\beta\|_{L^{2}(Q^{*})}^{2} \leq C(\|\beta\theta_{h}^{3}\|_{L^{2}(Q^{*})}^{2}+\|\beta(1-\theta _{h}^{3})\|_{L^{2}(Q^{*})}^{2})\] \[\leq CM^{\alpha_{2}+\frac{n+1}{2}}h^{-\alpha_{1}-1}\left(\gamma^{ -\mu}+e^{m_{2}\gamma}(\varepsilon+\varepsilon^{-2}\delta)\right)+CM^{-1}h^{-1 }+Ch.\] Choose \(h<T^{*}/4\) satisfying \(M^{-1}h^{-1}=h\) (i.e., \(h=M^{-\frac{1}{2}}\)) such that the last two terms above have the same order. This results in \[\|\beta\|_{L^{2}(Q^{*})}^{2}\leq CM^{\alpha_{3}}\left(\gamma^{-\mu}+e^{m_{2} \gamma}(\varepsilon+\varepsilon^{-2}\delta)\right)+CM^{-\frac{1}{2}},\] where \(\alpha_{3}:=\alpha_{2}+\frac{1}{2}(\alpha_{1}+n+2)\). We also further choose \(M=\gamma^{\frac{\mu}{2+\alpha_{3}}}\) such that \[\gamma^{-\mu}M^{\alpha_{3}}=M^{-\frac{1}{2}},\] which implies that there exist constants \(0<\mu^{\prime}<1\) and \(m_{3}>m_{2}>0\) such that \[\|\beta\|_{L^{2}(Q^{*})}^{2}\leq C\left(e^{m_{3}\gamma}\varepsilon^{-2}\delta+ e^{m_{3}\gamma}\varepsilon+\gamma^{-\mu^{\prime}}\right). 
\tag{4.25}\] For \(\delta\in(0,\min\{1,\,e^{-6m_{3}\gamma^{*}},\,\Lambda^{\frac{1}{2}}\})\) with \(\Lambda>1\), we take \[\varepsilon=\frac{\lambda}{4}\Lambda^{\frac{-1}{6}}\delta^{\frac{1}{3}}\quad \text{ and }\quad\gamma=\frac{1}{6m_{3}}|\log(\delta)|.\] Then (4.25) becomes \[\|\beta\|_{L^{2}(Q^{*})}^{2}\leq C\left(\delta^{\frac{1}{6}}+|\log(\delta)|^{ -\mu^{\prime}}\right).\] where \(C\) depends on \(\Omega\), \(T\), \(T^{*}\), \(m_{0}\), and \(\lambda\) and \(\Lambda\). Now we verify the small condition in the well-posedness. **Remark 4.2**.: _From the above proof, the parameters are defined by_ \[\rho=\gamma^{\mu},\ M=\gamma^{\frac{\mu}{\alpha_{3}+\frac{1}{2}}},\ h=M^{-\frac{1} {2}}=\gamma^{-\frac{\mu}{2\alpha_{3}+1}},\ \gamma=\frac{1}{6m_{3}}|\log(\delta)|.\] _From (4.22), for \(j=1,2\),_ \[\|f_{j}\|_{H^{2\kappa+\frac{3}{2},2\kappa+\frac{3}{2}}(\Sigma)}\leq C\rho^{4 \kappa+4}h^{-N-2\kappa-3}\leq C\gamma^{(4\kappa+4)\mu+\frac{1}{2\alpha_{3}+1}( N+2\kappa+3)\mu}\leq Ce^{m_{3}\gamma}.\] _We took \(\varepsilon_{j}=\varepsilon\) above. Due to \(\delta<\Lambda^{\frac{1}{2}}\), it follows that_ \[|\varepsilon_{j}|\leq\frac{\lambda}{4}\Lambda^{\frac{-1}{6}}\delta^{\frac{1}{ 3}}<\frac{\lambda}{4},\] _and_ \[\|\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}\|_{H^{2\kappa+\frac{3}{2},2 \kappa+\frac{3}{2}}(\Sigma)}\leq C\frac{\lambda}{2}\Lambda^{\frac{-1}{6}} \delta^{\frac{1}{3}}e^{m_{3}\gamma}=C\frac{\lambda}{2}\Lambda^{\frac{-1}{6}} \delta^{\frac{1}{6}}<C\frac{\lambda}{2}\Lambda^{\frac{-1}{12}}<\lambda,\] _provided \(\Lambda\) is sufficiently large. Hence, the Dirichlet data \(\varepsilon_{1}f_{1}+\varepsilon_{2}f_{2}\) belongs to \(\mathcal{S}_{\lambda}(\Sigma)\). This justifies the well-posedness and our procedures discussed above._ ### Proof of Theorem 1.3 Proof of Theorem 1.3.: From Theorem 1.1, we obtain that \(\beta_{1}=\beta_{2}\) in a neighborhood of \(\Gamma\). Combining with the hypothesis that \(\beta_{1}-\beta_{2}=0\) on \((0,T)\times\mathcal{O}^{\prime}\) yields that \(\beta_{1}-\beta_{2}=0\) near the boundary \(\partial\Omega\). Thus one can assume that \(\beta=0\) in some open neighborhood \(\mathcal{O}\) of \(\partial\Omega\). Applying the result in Theorem 1.2, for any \(T^{*}\in(0,T)\), we derive that \(\beta_{1}=\beta_{2}\) in \((0,T^{*})\times\Omega\) by letting \(\delta\to 0\), which completes the proof. **Remark 4.3**.: _Theorem 1.1 and Theorem 1.2 hold true for more general nonlinearity, such as \(\beta(t,x)u^{m}\) or \(\beta(t,x)|u|^{2m}u\). For the former case, the integral identity becomes \(\int\beta U_{1}U_{2}\ldots U_{m}\overline{U}_{0}\,dxdt=0\), where \(U_{j}\) is the solution to the linear equation. Like the setting \(m=2\) discussed above, the vectors \(\omega_{j}\) in the phase functions of GO solutions are chosen to satisfy_ \[\omega_{0}=\omega_{1}+\ldots+\omega_{m}\quad\text{ and }\quad|\omega_{0}|^{2}=| \omega_{1}|^{2}+\ldots+|\omega_{m}|^{2}\] _so that the leading complex phase functions vanish eventually in the integral identity. 
Once the phase functions are determined, arguing as in the proofs of the theorems leads to the unique and stable determination of \(\beta\)._ _For the case of the Gross-Pitaevskii equation with nonlinearity \(\beta|u|^{2}u\) and the generalized \(\beta|u|^{2m}u\), we can argue similarly to obtain the integral identity_ \[\int\beta U_{1}\overline{U_{2}}U_{3}\overline{U_{4}}\ldots U_{2m-1}\overline{U_{2m}}U_{2m+1}\overline{U_{0}}\ dxdt=0\] _and choose_ \[\omega_{1}-\omega_{2}+\omega_{3}-\omega_{4}+\ldots+\omega_{2m+1}-\omega_{0}=0,\] \[|\omega_{1}|^{2}-|\omega_{2}|^{2}+|\omega_{3}|^{2}-|\omega_{4}|^{2}+\ldots+|\omega_{2m+1}|^{2}-|\omega_{0}|^{2}=0.\] _We can choose \(U_{1},\overline{U_{2}},U_{2m+1}\) and \(\overline{U}_{0}\) to be GO-solutions supported near the four straight lines \(\gamma_{p,\omega_{1}}\), \(\gamma_{p,\omega_{2}}\), \(\gamma_{p,\omega_{2m+1}}\), and \(\gamma_{p,\omega_{0}}\), respectively, and let \(U_{j}\) and \(U_{j+1}\) be GO-solutions supported near \(\gamma_{p,\omega_{1}}\) for \(j=3,5,\ldots,2m-1\) so that their complex phases cancel each other in pairs. Hence, \(\omega_{1},\omega_{2},\omega_{2m+1},\omega_{0}\) should satisfy_ \[\omega_{0}+\omega_{2}=\omega_{1}+\omega_{2m+1},\] \[|\omega_{0}|^{2}+|\omega_{2}|^{2}=|\omega_{1}|^{2}+|\omega_{2m+1}|^{2},\] _which can be achieved, for instance, by choosing_ \[\omega_{0}=(1,-1,\ldots,0),\quad\omega_{1}=(\sqrt{1-r^{2}},-1,\ldots,r),\] \[\omega_{2}=(\sqrt{1-r^{2}},\sqrt{1-r^{2}},0,\ldots,r),\quad\omega_{2m+1}=(1,\sqrt{1-r^{2}},\ldots,0),\quad 0<r<1.\]
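As a quick sanity check (assuming the omitted middle components of these vectors are zero, which the displayed pattern suggests but does not spell out), the last choice indeed satisfies both constraints: \[\omega_{0}+\omega_{2}=\bigl(1+\sqrt{1-r^{2}},\,-1+\sqrt{1-r^{2}},\ldots,r\bigr)=\omega_{1}+\omega_{2m+1},\qquad|\omega_{0}|^{2}+|\omega_{2}|^{2}=2+(2-r^{2})=|\omega_{1}|^{2}+|\omega_{2m+1}|^{2}.\] More generally, the conditions \(\omega_{0}=\omega_{1}+\ldots+\omega_{m}\) and \(|\omega_{0}|^{2}=|\omega_{1}|^{2}+\ldots+|\omega_{m}|^{2}\) from the first part of the remark hold whenever the \(\omega_{j}\) are pairwise orthogonal, since \(|\omega_{1}+\ldots+\omega_{m}|^{2}=\sum_{j}|\omega_{j}|^{2}+2\sum_{i<j}\omega_{i}\cdot\omega_{j}\).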
2304.05387
MOST: Multiple Object localization with Self-supervised Transformers for object discovery
We tackle the challenging task of unsupervised object localization in this work. Recently, transformers trained with self-supervised learning have been shown to exhibit object localization properties without being trained for this task. In this work, we present Multiple Object localization with Self-supervised Transformers (MOST), which uses features of transformers trained with self-supervised learning to localize multiple objects in real-world images. MOST analyzes the similarity maps of the features using box counting, a fractal analysis tool, to identify tokens lying on foreground patches. The identified tokens are then clustered together, and the tokens of each cluster are used to generate bounding boxes on foreground regions. Unlike recent state-of-the-art object localization methods, MOST can localize multiple objects per image and outperforms SOTA algorithms on several object localization and discovery benchmarks on the PASCAL-VOC 07, 12 and COCO20k datasets. Additionally, we show that MOST can be used for self-supervised pre-training of object detectors, and yields consistent improvements on fully supervised and semi-supervised object detection and unsupervised region proposal generation.
Sai Saketh Rambhatla, Ishan Misra, Rama Chellappa, Abhinav Shrivastava
2023-04-11T17:57:27Z
http://arxiv.org/abs/2304.05387v2
# MOST: Multiple Object localization with Self-supervised Transformers for object discovery ###### Abstract We tackle the challenging task of unsupervised object localization in this work. Recently, transformers trained with self-supervised learning have been shown to exhibit object localization properties without being trained for this task. In this work, we present **M**ultiple **O**bject localization with **S**elf-supervised **T**ransformers (MOST) that uses features of transformers trained using self-supervised learning to localize multiple objects in real world images. MOST analyzes the similarity maps of the features using box counting; a fractal analysis tool to identify tokens lying on foreground patches. The identified tokens are then clustered together, and tokens of each cluster are used to generate bounding boxes on foreground regions. Unlike recent state-of-the-art object localization methods, MOST can localize multiple objects per image and outperforms SOTA algorithms on several object localization and discovery benchmarks on PASCAL-VOC 07, 12 and COCO20k datasets. Additionally, we show that MOST can be used for self-supervised pre-training of object detectors, and yields consistent improvements on fully, semi-supervised object detection and unsupervised region proposal generation. ## 1 Introduction Object detectors are important components of several computer vision systems like visual relationship detection [20, 27], human-object interaction detection [1, 12, 41, 47], scene graph generation [51] and object tracking [49, 50] etc. Performance of object detectors is heavily reliant on the availability of training data. However, annotating large object detection datasets can be expensive and time consuming [13, 25]. Additionally, the vocabulary of object detectors is limited by the training datasets and such detectors often fail to generalize to new categories [6]. This strategy is not scalable and a more effective approach is warranted. Object discovery is one such task that has the potential to address these concerns. Object discovery is the problem of identifying and grouping objects/parts in a large collection of images without human intervention [22, 23, 33, 39]. The first step in object discovery is to obtain region proposals and subsequently group them semantically. Previous works on discovery used Selective Search [42], randomized Prim's [28] or a region proposal network (RPN) [32] to get object proposals. [44, 45, 46] used inter-image similarity to construct a graph and performed optimization or ranking, to localize objects without any supervision. Such methods are computationally expensive and often fail to scale to datasets larger than 20000 images. [31] used region proposals from an RPN and proposed a never ending learning approach and is the first method shown to scale to \(\sim\)100000 images. However, these region proposal methods are often of low quality, and therefore reduce the performance of discovery systems. Recently, LOST [37] and TokenCut [48] leveraged the object segmentation properties of transformers [43] trained using self-supervised learning (DINO [3]) to obtain high quality object proposals. They demonstrate significant improvements over state-of-the-art on object discovery, salient object detection and weakly supervised object localization benchmarks. However, both LOST [37] and TokenCut [48] assume the presence of a single salient object per image and hence, can localize only one object as shown in Fig 1 (top). 
This Figure 1: **Top**: Methods like LOST [37] (shown in figure), TokenCut [48] identify and localize the most salient foreground object and hence can detect only one object per image. **Bottom**: MOST is a simple, yet effective method that localizes multiple objects per image without training. assumption may hold for object centric datasets like ImageNet [34] but is not true for scene-centric real world datasets like PASCAL-VOC [11] and COCO [25]. In this work, we address the problem of localizing multiple objects per image and demonstrate the effectiveness of our approach on the task of unsupervised object localization and discovery on several standard benchmarks. We propose a new object localization method called "Multiple Object localization with Self-supervised Transformers" (MOST) which is capable of localizing multiple objects per image without using any labels. We use the features extracted from a transformer [43] network trained with DINO [3]. Our method is based on two empirical observations; 1) Patches within foreground objects have higher correlation with each other than the ones on the background [37] and 2) The similarity map computed using the features of a foreground object with all the features in the image is usually more localized and less noisier than the one computed using the feature of a background. Our algorithm analyzes the similarities between patches exhaustively using a fractal analysis tool called box counting [26]. This analysis picks a set of patches that most likely lie on foreground objects. Next, we perform clustering on the patch locations to group patches belonging to a foreground object together. Each of these clusters are called _pool_s. A binary mask is then computed for each _pool_ and a bounding box is extracted. This capability enables the algorithm to extract multiple bounding boxes per image as shown in Fig.1 (bottom). We demonstrate that **without any training**, our method can outperform state-of-the-art object localization methods that train class agnostic detectors to detect multiple objects. To prove the effectiveness of MOST, we demonstrate results on several object localization and discovery benchmarks. On self-supervised pre-training for object detectors, using MOST yields consistent improvement across multiple downstream tasks using \(6\times\) fewer boxes. When compared against other self-supervised transformer-based localization methods, MOST achieves higher recall with and without additional training. We summarize the contributions of our work below. * We propose MOST, an effective method to localize and discover multiple objects per image without supervision using transformers trained with DINO. * We perform exhaustive experiments to assess the performance of MOST on several localization and discovery benchmarks and show significant improvements over the baselines. The paper is organized as follows. In Section 2 we discuss related works on object localization and discovery. We describe our approach in detail in Section 3. We describe our experimental setup and present results in Section 4 and conclude in Section 5. ## 2 Related Works **Unsupervised Object Localization and Discovery**: Object category discovery can be broadly segregated into image-based [14, 15, 17, 18, 19, 38] and region-based methods [4, 7, 21, 22, 23, 31, 37, 44, 45, 46, 48]. Region-based methods start by generating object proposals and later group them into coherent semantic groups. 
Image-based approaches on the other hand, assume the localization task to be solved and focus solely on the semantic grouping. Our current method is closer to the former. Vo _et al._, [44, 45, 46] localize regions in images by constructing an inter-image similarity graph between regions and partitioning it using optimization or ranking. Due to the quadratic nature of the graph, these methods cannot scale to large datasets beyond tens of thousands of images. Our current work does not compute inter-image similarity between regions and scales linearly with number of images. Lee _et al._, [23] proposed object graphs that incorporates knowledge about a few known categories to facilitate the discovery of novel categories. MOST doesn't assume any prior knowledge and has the ability to discover objects from scratch. Lee _et al._, [22] extend object graphs to a curriculum based discovery pipeline, that learns to discover easy objects first and progressively proceeds to discover the harder ones. Along similar lines, Rambhatla _et al._, [31] proposed a large scale discovery pipeline, similar to [23] that leverages prior knowledge about a few categories. Authors of [31] use an RPN [32] trained on known categories as the localization method. In contrast to that, MOST localizes objects in images using features of a transformer [43] trained with DINO [3]. **Object localization using self-supervised networks**: Recently, CNNs [16] and Transformers [43] trained in a self-supervised fashion, have been shown to exhibit object localization/segmentation properties [3, 9]. This property is of particular interest as it has the potential to facilitate research on unsupervised localization, detection and segmentation. [37] is a simple method, based on the observation that the key features of the last self attention layer of a transformer, trained using DINO, has certain affinity. They constructed a map of inverse degree to extract bounding boxes on objects in an unsupervised fashion. This method was shown to outperform recent state-of-the-art methods by a significant margin. [48] proposed an alternate method for localizing objects using self-supervised transformers, based on normalized cut [36]. TokenCut [48] constructed an undirected graph based on token feature similarities and uses normalized cut to cluster foreground and background patches. Spectral clustering was used to solve the graph-cut and they show that the eigen vector corresponding to the second smallest eigenvalue provides a good cutting solution. TokenCut outperforms LOST on unsupervised object discovery. In addition to discovery, [48] also demonstrated impressive results on unsupervised saliency detection and weakly supervised object localization. Kyriazi _et al._, [29] proposed deep spectral methods, that performs normalized cut [36] but using an affinity matrix computed using semantic and color features. Since this method is very similar to TokenCut and achieves lower performance, we only compare with the latter in this work. However, one limitation of LOST and TokenCut is that they can localize only one object per image. Our proposed method, MOST can automatically localize multiple objects per image and outperforms LOST and TokenCut on standard discovery benchmarks. ## 3 Approach: MOST MOST can be used to localize multiple objects in an image. Our approach, illustrated in Fig. 3, first identifies a set of tokens that are computed from patches on foreground objects. 
These tokens are then clustered and each cluster, named _pool_, is used to obtain a bounding box. Next, we discuss a few preliminaries in Section 3.1 followed by the motivation and proposed solution in Section 3.2. ### Preliminaries **Box Counting**: Box counting is a method of analyzing a pattern by breaking and analyzing it at smaller scales. This is done by performing a raster scan of the pattern at different scales and computing appropriate metrics to assess its fractal nature. In this work, we use a sliding window scan. **Vision Transformers**: ViTs [8] operate on learned embeddings, called tokens, generated from non-overlapping image patches of resolution P\(\times\)P (typically P=8/16) that form a sequence. To be precise, an image I of shape \(H\times W\times 3\) is first divided into non-overlapping patches of resolution \(P\times P\times 3\), generating a total of N = \(HW/P^{2}\) patches. Next, each patch is embedded via a trainable linear layer to generate a token of dimension \(d\) to form a sequence of patches. An extra [CLS] token [5] is added to this sequence, whose purpose is to aggregate the information from the tokens of the sequence. Typically, a projection head is attached to the [CLS] to train the network for classification. **DINO**: DINO [3] combines self-training and knowledge distillation without labels for self supervised learning. DINO constructs two global views and several local views of lower resolution, from an image. DINO consists of a teacher and a student network. The student processes all the crops while the teacher is operated only on the global crops. The teacher network then distills its dark knowledge to the student. This encourages the student network to learn local to global correspondences. In contrast to other knowledge distillation methods, DINO's teacher network is updated dynamically during training using exponential moving average. DINO was shown to perform on par or better than several baselines on several tasks of image retrieval, copy detection, instance segmentation etc. The property of importance to the current work, is the ability of DINO to localize foreground regions of semantic entities in an image. [37, 48] leverage this property to localize the salient object in an image by using the key features from the last self-attention layer. Similar to [37, 48], we concatenate the key features of all the heads in the last self-attention layer to obtain the input to MOST. ### Multiple object localization **Motivation**: Consider the example shown in Fig. 2. We show three examples of the similarity maps of a token (shown in red) picked on the background (column 2) and foreground (columns 3, 4). Tokens within foreground patches have higher correlation than the ones on background [37]. This results in the similarity maps of foreground patches being _less_ "spatially" random than the ones on the background. The task then becomes to analyze the similarity maps and identify the ones with less spatial randomness. Box counting [24, 30] is a popular technique in fractal analysis that analyzes spatial patterns at different scales to extract desired properties. Hence, we adopt box counting for our case and since, we are interested in randomness, we adopt entropy as the metric. **Preprocessing**: The input to our method is a \(d\)-dimensional feature \(F\in\mathbb{R}^{N\times d}\), extracted from an image using a neural network. 
Here, \(N\) denotes the number of spatial locations in the feature map, in case of a CNN, or number of tokens, in case of a transformer network. The aim is to identify subsets of tokens, which we call _pools_, that can be used to localize all the objects in an image. We do not make any assumption on the number of objects present in the image. Given the feature \(F\), we compute an outer product matrix \(A=FF^{T}\in\mathbb{R}^{N\times N}\). Row \(i\) of matrix \(A\), i.e., \(A[i,:]\) encodes a similarity map of a token at location \(i\) with all the other tokens in \(F\). Next, each row of \(A\) is processed by the Entropy-based Box Analysis (EBA) module. **Entropy-based Box Analysis (EBA)**: The proposed entropy based box analysis module performs a fractal analysis method, called box counting to segregate similarity maps of tokens on foreground patches from those of background. As shown in Fig. 3, we perform a raster scan with increasing box (used interchangeably with kernel in this work) sizes on each map. Traditionally, measures like lacunar Figure 2: **Motivation for MOST**: Example showing similarity maps of tokens within background and foreground for an image from the COCO dataset. Similarity maps of tokens within foreground patches are less random spatially. ity [40] are computed within each box to analyze the pattern. In this work, we average the elements within each box. This can be implemented efficiently using pooling operations. The resulting downsampled map is flattened and the entropy is computed using the pmf computed as follows: \(p(x=x_{i})=\sum_{i=1}^{h.w}\frac{1(f_{i}=x_{i})}{h.w}\), where \(f_{i}\) is the \(i^{\text{th}}\) index in the feature map. A downsampled map belongs to a token on a foreground patch if its entropy is less than a threshold \(\tau\). Using \(K\) boxes in the EBA module results in \(K\) entropy values \(e_{k}(k\in\{1,2,\cdots,K\})\). Finally, we perform a majority voting among the entropies of all the downsampled maps, _i.e._, \(\sum_{i=1}^{K}\frac{1(e_{i}\leq\tau)}{K}>0.5\), to decide if the original similarity map belongs to a token on a foreground patch. A map of dimension \(n\times n\) has a maximum entropy of \(log(n^{2})\). We use a threshold of the form \(\tau=a+blog(n^{2})\) (we use \(a=1,b=0.5\) in this work). We do not consider \(\tau\) as a hyperparameter and we pick a value that is mid-way between the minimum and maximum permissible value (b=0.5). To prevent a threshold of \(0\) for \(n=1\), we add a constant (a=1). **Clustering**: The EBA module, identifies a set \(\mathcal{S}=\{p|p\in\{1,2,\cdots,N\}\}\), that contains the spatial locations of tokens computed from foreground patches. Often, highly redundant neighboring tokens are identified. We group neighboring tokens with the help of a clustering step to obtain _pools_. We convert the linear index \(p\) of the tokens to cartesian coordinates \((x,y)\), and use that as the feature for clustering. Manhattan distance is used as the dissimilarity metric with a threshold \(\epsilon\) (\(\epsilon=2\) i.e. Moore neighborhood). Since, the number of pools is not known a-priori, we use a density-based clustering method, DBSCAN [10] which automatically identifies the number of clusters from the data. _Pools_ identified by the clustering step are then post-processed to obtain bounding boxes on foreground objects. **Post-processing**: The clusters, called _pools_, obtained from the clustering step are then post-processed to obtain one box per _pool_. 
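Before the per-_pool_ post-processing is detailed below, here is a minimal NumPy/scikit-learn sketch of the EBA and clustering steps just described. It assumes `F` holds the concatenated key features for an `h x w` token grid; the non-overlapping pooling (the paper describes a sliding-window scan), the histogram binning used to form the pmf, the kernel sizes, and the DBSCAN `min_samples` value are illustrative simplifications rather than details taken from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def avg_pool(m, k):
    """Non-overlapping k x k average pooling (edges cropped for simplicity)."""
    h, w = m.shape
    h2, w2 = h // k, w // k
    return m[:h2 * k, :w2 * k].reshape(h2, k, w2, k).mean(axis=(1, 3))

def entropy(m, bins=32):
    """Shannon entropy of a pooled map; values are binned to form the pmf."""
    hist, _ = np.histogram(m.ravel(), bins=bins)
    p = hist[hist > 0] / m.size
    return -(p * np.log(p)).sum()

def eba_foreground_tokens(F, h, w, kernels=(1, 2, 3, 4, 5), a=1.0, b=0.5):
    """Return indices of tokens voted as foreground by the EBA module."""
    A = F @ F.T                                 # token-to-token similarity
    keep = []
    for i in range(F.shape[0]):
        sim = A[i].reshape(h, w)                # similarity map of token i
        votes = 0
        for k in kernels:
            pooled = avg_pool(sim, k)
            tau = a + b * np.log(pooled.size)   # threshold tau = a + b*log(n^2)
            votes += entropy(pooled) <= tau
        if votes / len(kernels) > 0.5:          # majority vote over kernel sizes
            keep.append(i)
    return np.array(keep)

def cluster_pools(token_ids, w, eps=2):
    """Group foreground tokens into pools by their (row, col) grid positions."""
    coords = np.stack([token_ids // w, token_ids % w], axis=1)
    labels = DBSCAN(eps=eps, min_samples=1, metric="manhattan").fit_predict(coords)
    return [token_ids[labels == c] for c in np.unique(labels)]
```

Each returned _pool_ is then reduced to a single bounding box by the post-processing detailed next.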
Consider \(M\)_pools_ identified by the clustering step, i.e. \(\mathcal{C}_{i}\), where \(i\in\{1,2,\cdots M\}\). Each _pool_\(\mathcal{C}_{i}\) is a set of token locations \(\mathcal{C}_{i}=\{p^{i}|p^{i}\in\{1,2,\cdots,N\}\}\). We leverage the first observation mentioned above to obtain a bounding box from the _pool_ as follows. First, we build a binary similarity matrix \(\hat{A}=A>0\). Next, within the tokens in the pool, we identify the one with lowest degree in \(\hat{A}\), called the _core_ token, \(c^{*}\). \[c^{*}=\operatorname*{arg\,min}_{c\in\mathcal{C}_{i}}d_{c}\quad\text{where} \quad d_{c}=\sum_{j=1}^{N}\hat{A}[c,j]\] Authors of LOST [37] report that tokens with low degrees most likely fall within an object. Next, we remove the tokens from the pool that do not correlate positively with \(c^{*}\) to form a reduced _pool_\(\mathcal{C}_{i}^{*}\). This ensures that all the tokens in the current pool lie on the same foreground object. Next, a binary mask is constructed by computing the sum of similarities of token features in \(\mathcal{C}_{i}^{*}\) with the features of all the tokens, i.e. \(m_{k}^{i}=\mathds{1}(\sum_{c\in\mathcal{C}_{i}^{*}}f_{k}^{T}f_{c}\geq 0)\). Finally, connected component analysis is performed on the binary mask and the bounding box of the island that contains \(c^{*}\) is selected as the region containing the object. We repeat this process Figure 3: **Overview of MOST**: MOST operates on features extracted from transformers trained using DINO. The features are used to compute the outer product \(A\). Each row of \(A\) is analyzed by the entropy-based box analysis (EBA) module that identifies tokens extracted from foreground patches. These patches are clustered using spatial locations as features to form _pools_. Each _pool_ is then post-processed to generate a bounding box. for all the \(M\)_pools_ to generate \(M\) bounding boxes per image. Note that, \(M\) is not assumed to be known a-priori and is decided automatically by our method. Additionally, we remove trivial boxes i.e., boxes which have area less than than a threshold (256) or cover the whole image. **Implementation Details**: For all our experiments, we use the ViT-S/16 and ViT-B/8 [8] models trained with DINO [3] to extract the features. We concatenate the key features of all the heads from the last self-attention layer to use as the input to our method. ## 4 Experiments In this section we describe, in detail, the experimental setup used for evaluation. We evaluate our method on two setups, namely the localization setup, and the discovery setup. We begin by describing the datasets and metrics in Sec. 4.1. We describe the evaluation setups in Sec. 4.2. Sec. 4.3 compares our method against contemporary work. We then describe ablation experiments in Sec. 4.4 and show qualitative results in Sec. 4.5. ### Datasets and Metrics We use the PASCAL-VOC [11] (2007, 2012 splits) and the COCO [25] (COCO20k [45] and COCO splits) datasets in our experiments. The PASCAL VOC [11] 2007 and 2012 trainval sets consists of 5011, 11540 images respectively, spanning twenty objects. The PASCAL VOC [11] test set consists of 4952 images. The COCO [25] 2014 train set consists of \(\sim\) 110k images containing over eighty objects and the COCO minival set consists of 5000 images. We do not use any class or bounding box annotations for our method except for evaluation. 
For the _localization_ setup, we use the average precision at different thresholds ([0.5:0.95], 0.5 and 0.75), average recall (AR1, AR10 and AR100) and Correct Localization (CorLoc) metrics for evaluation. CorLoc is defined as the fraction of the images in which atleast one object is localized with an IoU greater than a threshold (0.5 in this work). AP, AR are defined in the usual way. For the object discovery setup, we report both the PASCAL VOC style AP\({}_{50}\) and COCO style AP\({}_{[50:95]}\) metrics along with area under the purity-coverage plots [7, 31]. We refer the interested readers to [31] for definitions of purity and coverage. ### Setups **Localization setup**: This setup evaluates the localization performance of methods. We evaluate models on a) unsupervised pre-training, b) Multiple Object Localization, and c) single object localization. For unsupervised pre-training, localization methods are used to train object detectors in an unsupervised fashion and their performance is evaluated on the downstream task of object detection. In this work, we use the recently proposed DETReg [2] as the pre-training strategy which uses a Deformable DeTR [52] architecture. DETReg uses an object localization method and pre-trains an object detector in an unsupervised fashion. We evaluate the pre-trained model on the downstream tasks of semi-supervised, fully-supervised and class-agnostic object proposal generation. In the semi-supervised setting, models are trained on the PASCAL-VOC(07+12) and COCO train sets without labels and are fine-tuned on \(k\%\) of labeled data similar to [2]. In the fully supervised setting, pre-trained models are fine-tuned on the full PASCAL-VOC and COCO dataset using all the labels. For the class-agnostic object proposal generation, models are pre-trained on COCO dataset without labels and the generated object proposals are evaluated on the COCO validation set similar to [2]. We follow the settings used in [46] for multiple-object localization and evaluate on PASCAL-VOC 2007 and COCO20k. 
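Since CorLoc (defined above) is the metric used in the single-object localization comparisons that follow, a small reference implementation may be helpful; the `(x1, y1, x2, y2)` box format is an assumption for illustration.

```python
def iou(a, b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-12)

def corloc(pred_boxes_per_image, gt_boxes_per_image, thr=0.5):
    """Fraction of images in which at least one predicted box matches a GT box."""
    hits = sum(
        any(iou(p, g) > thr for p in preds for g in gts)
        for preds, gts in zip(pred_boxes_per_image, gt_boxes_per_image)
    )
    return hits / len(gt_boxes_per_image)
```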
For single-object localization, we follow the \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{3}{*}{Method} & \multirow{3}{*}{ \begin{tabular}{c} Boxes \\ per \\ image \\ \end{tabular} } & \multicolumn{6}{c}{**VOC 07+12**} & \multicolumn{6}{c}{**COCO**} \\ \cline{3-12} & & \multicolumn{3}{c}{\(k=10\%\)} & \multicolumn{3}{c}{fully supervised} & \multicolumn{1}{c}{\(k=1\%\)} & \multicolumn{1}{c}{\(k=2\%\)} & \multicolumn{3}{c}{fully supervised} \\ \cline{3-12} & & AP & AP\({}_{50}\) & AP\({}_{75}\) & AP & AP\({}_{50}\) & AP\({}_{75}\) & \multicolumn{3}{c}{AP} & AP & AP & AP\({}_{50}\) & AP\({}_{75}\) \\ \hline LOST [37] & 1 & 40.88 & 60.31 & 44.36 & 63.58 & 83.27 & 70.48 & 12.83 \(\pm\) 0.32 & 17.23 \(\pm\) 0.30 & 23.43 \(\pm\) 0.38 & 44.30 & 62.80 & 48.40 \\ TCut [48] & 1 & 41.14 & 60.59 & 44.35 & 63.79 & 83.56 & 70.70 & 13.13 \(\pm\) 0.38 & 17.27 \(\pm\) 0.21 & 23.27 \(\pm\) 0.23 & 43.80 & 62.30 & 47.50 \\ & 5 & 39.12 & 57.51 & 42.29 & 63.44 & 83.14 & 70.35 & 13.57 \(\pm\) 0.38 & 17.87 \(\pm\) 0.32 & 23.17 \(\pm\) 0.40 & 44.30 & 62.80 & 48.10 \\ SS [42] & 10 & 40.76 & 60.00 & 44.46 & 64.23 & 83.44 & 71.55 & 13.73 \(\pm\) 0.29 & 18.00 \(\pm\) 0.26 & 22.83 \(\pm\) 0.25 & 43.90 & 62.60 & 47.60 \\ & 15 & 42.14 & 61.41 & 45.86 & 64.24 & 83.74 & 71.41 & 13.87 \(\pm\) 0.29 & 18.23 \(\pm\) 0.40 & 23.13 \(\pm\) 0.11 & 44.30 & 62.60 & 48.30 \\ MOST & 4.65 & 43.03 & 63.29 & 46.61 & 64.34 & 84.12 & 71.77 & 13.93 \(\pm\) 0.38 & 18.13 \(\pm\) 0.25 & 22.63 \(\pm\) 0.11 & 44.80 & 63.50 & 49.00 \\ \hline SS & 30 & 42.12 & 61.20 & 45.71 & 64.84 & 83.98 & 71.76 & 14.47 \(\pm\) 0.35 & 18.23 \(\pm\) 0.42 & **23.57 \(\pm\) 0.21** & 44.00 & 62.30 & 47.80 \\ MOST & 13.09 & **44.40** & **63.83** & **48.28** & **65.24** & **84.24** & **72.37** & **14.83 \(\pm\) 0.21** & **18.30 \(\pm\) 0.17** & 23.43 \(\pm\) 0.45 & **45.20** & **64.00** & **49.00** \\ \hline \hline \end{tabular} \end{table} Table 1: **Results on unsupervised pre-training of object detectors**. We train object detectors in a self-supervised fashion on COCO dataset using different localization methods and compare their performance on the downstream tasks of semi and fully supervised object detection. COCO train set is used for fine-tuning and \(k\%\) refers to the number of labeled samples used for training. Results are reported using AP[0.50:0.95] (denoted as AP), AP\({}_{0.50}\), and AP\({}_{0.75}\) on COCO validation set. settingss in [37, 48] and evaluate on PASCAL-VOC 2007, PASCAL-VOC 2012 and COCO20k. **Discovery setup**: This setup evaluates the object discovery performance. Similar to [37] we use the regions obtained by our localization method, to perform K-means clustering and use the resulting cluster labels to train Faster-RCNN object detectors on PASCAL-VOC 2007, 2012 trainval and COCO20k train sets. We report results of these experiments on the PASCAL-VOC 2007 test and COCO minival sets respectively. In addition to this, we report the performance of our discovery method on COCO train set, similar to the large scale discovery in [31]. ### Comparison with contemporary methods In this section we compare our method against contemporary works the _localization_ and _discovery_ setups. #### 4.3.1 Localization setup **Unsupervised Pre-training**: Table 1 compares the results of all the localization methods on unsupervised pre-training of object detectors. We use average precision at different IoU thresholds ([0.50:0.95]: AP, 0.5: AP\({}_{0.50}\), 0.75: AP\({}_{0.75}\)) for evaluation. 
On the semi-supervised setting, on VOC 07+12 (\(k=10\%\)), the self-supervised transformer based methods (LOST, TokenCut and MOST) outperform SS [42] with fewer boxes per image. In particular, TokenCut (denoted as TCut in Table 1) which outputs only one box per image, outperforms SS, using ten boxes per image, by \(\sim\)\(0.4\) points on mAP. MOST which outputs an average of \(4.65\) boxes per image outperforms TokenCut (the best performing self-supervised transformer based method) by \(1.89\), \(2.7\) and \(2.26\) percentage points on AP, AP\({}_{50}\), and AP\({}_{75}\) respectively. This can be attributed to the ability of MOST to output multiple foreground regions resulting in more samples for pre-training which is not possible in the case of TokenCut. MOST outperforms SS, that outputs 30 boxes per image, by \(0.91\), \(2.09\) and \(0.9\) points on AP, AP\({}_{50}\), and AP\({}_{75}\) respectively using almost \(6\times\) fewer boxes per image and this can be attributed to the ability of MOST to generate high quality proposals. On COCO, MOST outperforms TokenCut by \(0.8\) and \(0.86\) on the 1% and 2% setting of semi-supervised learning. MOST using ViT-B/8, that outputs 13.09 boxes on average per image outperforms SS (with 30 boxes) by \(0.36\), \(0.17\) on 1% and 2% respectively. On the fully supervised setting, MOST outperforms LOST and TokenCut by 0.76 and 0.55 (AP) percentage points respectively on VOC 07+12. On COCO, MOST outperforms them by 0.50 and 1 points respectively. On VOC 07+12, MOST using ViT-B/8 (13.09 boxes per image) outperforms SS (with 30 boxes per image) by 0.40. On the much harder COCO dataset, MOST outperforms SS 1.20 (AP) percentage points using \(2\times\) fewer boxes per image. In Table 2 we report the class agnostic object proposal evaluation of DETReg trained using different localization methods. We report average precision at different IoU thresholds (AP, AP\({}_{50}\), AP\({}_{75}\)) and recall \(@\) 1, 10 and 100 proposals per image (denoted as R1, R10, and R100) to evaluate the quality of region proposals. Note that the numbers in the table are low because of the unsupervised nature of training. All the self-supervised transformer-based methods achieve performance better than SS with far fewer boxes. In recall, TokenCut and MOST perform on par with each other and outperform rest of the methods with significant improvements. MOST achieves the highest performance on average precision among all the methods. It can achieve higher precision and recall because of its ability to output multiple high quality regions per image. While LOST and TokenCut output high quality boxes, they cannot output more than one box per image. SS on the other hand, outputs multiple boxes but with poor quality. **Multiple Object Localization**: We compare with LOD, the state-of-the-art method on the multi-object localization benchmark proposed by LOST using the code released by authors. On VOC2007, we attain an odAP[0.5:0.95] of 6.43 compared to 5.35 attained by LOD, an improvement of 1.09 percentage points. On the COCO20k dataset, we attain a performance of 1.70 (compared to 1.53 achieved by LOD) on the harder odAP[0.5:0.95] metric. 
Note, we do not \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{**VOC 2007**} & \multicolumn{3}{c}{**VOC 07+12**} & \multicolumn{3}{c}{**COCO20k**} \\ \cline{2-7} & \multirow{2}{*}{Clusters} & \multirow{2}{*}{\(\rightarrow\)} & \multirow{2}{*}{20} & \multirow{2}{*}{30} & \multirow{2}{*}{40} & \multirow{2}{*}{20} & \multirow{2}{*}{30} & \multirow{2}{*}{40} & \multirow{2}{*}{80} & \multirow{2}{*}{90} & \multirow{2}{*}{100} \\ \cline{1-1} \cline{5-7} & & & & & & & & & \\ \hline \hline LOST [37] & 9.15 & **9.6** & 9.64 & 10.11 & **10.95** & 12.14 & 12.97 & 2.66 & 2.91 & 2.86 \\ MOST & **9.20** & **10.07** & **11.09** & 10.12 & **12.89** & **13.30** & **13.31** & **13.38** & **13.32** \\ \hline \hline LOST [37] & **26.32** & **23.78** & 29.46 & **29.38** & **23.37** & **34.80** & 7.17 & 7.72 & 7.87 \\ MOST & 25.35 & **28.19** & **31.31** & 27.04 & **34.40** & 34.54 & **8.13** & **8.14** & **8.76** \\ \hline \hline \end{tabular} \end{table} Table 4: **Results on single-object localization**: Comparison of MOST with recent object discovery methods on VOC 07, 12 and COCO20k using CorLoc. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{ \begin{tabular}{c} Boxes per image \\ \end{tabular} } & AP & AP\({}_{50}\) & AP\({}_{75}\) & R1 & R10 & R100 \\ \hline LOST [37] & 1 & 0.1 & 0.5 & 0 & 0.4 & 1.4 & 3.9 \\ Tcut [48] & 1 & 0.3 & 1 & 0.1 & **0.6** & **1.9** & **4.6** \\ & 5 & 0.1 & 0.4 & 0.1 & 1 & 4.2 \\ SS [42] & 10 & 0.1 & 0.3 & 0 & 0.1 & 1.1 & 4.4 \\ & 15 & 0.1 & 0.3 & 0 & 0.1 & 1.1 & 4.1 \\ & 30 & 0.1 & 0.3 & 0 & 0.1 & 1.1 & 4.1 \\ MOST & 4.65 & **0.8** & **1.4** & **1** & **0.6** & **1.9** & 4.4 \\ \hline \hline \end{tabular} \end{table} Table 2: **Unsupervised class agnostic region proposal evaluation on COCO validation set**: We compare the performance of region proposals for training DETReg. R\(k\) is Recall\(@k\) compare with rOSD [45] as LOD [46] outperforms it. **Single Object Localization**: Table 4 compares the results of our method on single object localization with recent methods on PASCAL VOC 2007, 2012 and COCO20k respectively. We use the CorLoc metric to evaluate methods. Note that MOST is a multiple object localization method and this setup evaluates the ability of methods to output a single region. Since MOST outputs multiple boxes, we use the heuristic, average best overlap (for evaluating object proposals in [42]), to select one region per image. The numbers reported for MOST in this table are the "best" case scenario. We outperform LOST by \(12.9\), \(13.4\) and \(16.4\) percentage points on VOC 2007, 2012 and COCO20k respectively. We outperform TokenCut [48] by 6, 5.3 and 8.3 percentage points on the three datasets respectively. To obtain multiple regions per image, authors of LOST train a foreground object detector using the regions obtained by their method as supervision, called LOST+CAD [37]. This method can output multiple boxes per image and from Table 4, even without any training, our method outperforms LOST+CAD and TokenCut+CAD by 9.1, 7, 9.6 and 3.4, 2.1, 4.5 percentage points on VOC 2007, 2012 and COCO20k respectively. **Discovery Setup**: This setup evaluates the true object discovery performance as the localized boxes are used to discover semantic groups. Following LOST [37], we first cluster the features of the localized objects using K-means clustering. For VOC 2007 and 2007+2012 trainval splits, we use 20, 30 and 40 clusters. 
We use 80, 90 and 100 clusters for COCO20k train split. We report the results of experiments on VOC 2007, 07+12 trainval sets on VOC 2007 test set. For experiments on COCO20k, we report results on the COCO validation set. Results are tabulated in Table 3. MOST outperforms LOST in most settings with the margin of improvement higher for more number of clusters and more cluttered datasets like COCO. For more details on clustering and training refer to supplementary. Finally, we evaluate the performance of MOST on large-scale object discovery setup introduced in [31]. For this setup, we use the area under the purity coverage plot as the metric. [31] automatically identifies the number of clusters and obtains an [email protected] of 3.6% on the COCO 2014 train set. We extract the DINO [CLS] token features of regions obtained from MOST for K-Means clustering. To avoid specifying the number of clusters manually, we employ the "kneedle" method [35] to get the optimal number of clusters (more details in supplementary). Next, we randomly sample 10000 features from the whole dataset and cluster them using K-means with the optimal number of clusters. This subsampling avoids loading all the features into memory. MOST + optimal K-means achieves an [email protected] of 8.74% on COCO 2014 train set. We use the cluster labels to train an object detector on the COCO train set and achieve an AP/AP\({}_{50}\) of 3.9/9.5% compared to 5.2% AP\({}_{50}\) obtained by [31] on COCO validation set. For more experiments on unsupervised saliency detection and weakly supervised localization, refer to the supplementary. ### Ablation Experiments **Recall of boxes**: To analyze the object localization performance of MOST, we compare its recall with LOST and TokenCut on VOC 07, 12 and COCO20k datasets in Fig. 7. The x-axis represents the maximum number of boxes allowed per image and the y axis plots the recall. LOST and TokenCut generate only one box per image and hence have fixed recall in all the plots. MOST can generate more boxes and hence have higher recall than LOST and TokenCut. [37] trains a class agnostic detector (CAD) to output multiple boxes per image using the output of LOST as supervision. Without a single step of training, MOST performs competitively against LOST+CAD on all the datasets. With a class agnostic detector, MOST+CAD outperforms LOST, TokenCut and their CAD counterparts comfortably on all the datasets. On COCO20k, a much harder dataset, MOST+CAD outperforms all the methods with a significant margin demonstrating its superior localization abilities. **Effect of EBA**: We study the effect of EBA on single-object Figure 4: **Qualitative results of MOST on VOC 07, 12 and COCO**: MOST can localize multiple objects per image without training. Localization ability of MOST is not limited by the biases of annotators and can localize rocks, branches, water bodies _etc._ localization. The task of the EBA module is to identify tokens on foreground instances from similarity maps. We replace the EBA module with the strategy used by LOST [37], effectively giving LOST the ability to localize multiple objects. We use top-100 patches and this system achieves a CorLoc of 63.66 (compared to 74.84 of MOST). The EBA module can automatically pick the right tokens, unlike LOST to localize multiple objects. This experiment demonstrates the benefit of the proposed EBA module. **Effect of kernel size**: The EBA module performs box analysis in a sliding window fashion using boxes (or kernels) of different sizes. 
We implement this efficiently using a pooling operation. We visualize the effect of the size of pooling kernels on the final output in Fig. 5. We observe that the majority voting performed in EBA, helps in removing noisy predictions in the first triplet, where a box identified by kernel of size 1 is eliminated by majority voting of kernels with larger receptive field. In the second triplet in Fig. 5, an object which was missed by the lower order kernels (k=[1,4]), can be picked up with a higher order kernel (k=5). We refer interested reader to the supplementary material section for more analyses on the effect of kernel sizes, clustering and timing. **Effect of clustering**: MOST performs clustering with the token locations as features to obtain _pools_. Each pool contains tokens belonging to a foreground object. We show the effect of clustering qualitatively in Fig. 6. We observe that each pool focuses on one foreground object and illustrate the bounding boxes extracted from each _pool_. ### Qualitative Results: We illustrate qualitative results of MOST on VOC2007, 2012 and COCO datasets in Fig. 4. Fig. 3(a) shows results on VOC 2007 and 2012. MOST is capable of localizing fairly complex scenes in all the three datasets. We observe that, such unsupervised localization methods are not limited by the categories annotated by humans but can localize regions of "stuff" like rocks (last image of last row in Fig. 3(b)), water bodies (first image in second row of Fig. 3(b)), sign boards (third image in the first row of Fig. 3(a) right). ## 5 Conclusion We present MOST, an effective method for localizing multiple objects in complex images without a single annotation. MOST leverages object segmentation properties of transformers trained using DINO [3]. We show that the ability of MOST to localize multiple objects in an image is very effective on several object localization and discovery benchmarks. In particular, MOST outperforms recent state-of-the-art methods that train a class agnostic detector, on the task of single object localization, without any training. Further, we show that MOST achieves higher recall and covers more ground truth objects for a fixed set of boxes than LOST [37], a contemporary work on object localization. Finally, we extend MOST to the task of unsupervised saliency detection and report competitive results with recent works. Figure 5: **Effect of kernel size**: Different kernel sizes can identify different tokens as belonging to the foreground. Multiple kernels help eliminate noisy predictions (first triplet) and missed predictions (second triplet). Figure 6: Figure demonstrating the effect of clustering in MOST: Each image consists of a bounding box generated from a _pool_. We observe that each _pool_ focuses on different foreground instance. Figure 7: **Recall analysis**: Comparison of recall values of MOST, MOST+CAD with LOST and LOST+CAD. LOST generates one bounding box per image. MOST+CAD, MOST have higher recall and cover more ground-truth objects for a fixed set of boxes.
2302.02766
Generalization Bounds with Data-dependent Fractal Dimensions
Providing generalization guarantees for modern neural networks has been a crucial task in statistical learning. Recently, several studies have attempted to analyze the generalization error in such settings by using tools from fractal geometry. While these works have successfully introduced new mathematical tools to apprehend generalization, they heavily rely on a Lipschitz continuity assumption, which in general does not hold for neural networks and might make the bounds vacuous. In this work, we address this issue and prove fractal geometry-based generalization bounds without requiring any Lipschitz assumption. To achieve this goal, we build up on a classical covering argument in learning theory and introduce a data-dependent fractal dimension. Despite introducing a significant amount of technical complications, this new notion lets us control the generalization error (over either fixed or random hypothesis spaces) along with certain mutual information (MI) terms. To provide a clearer interpretation to the newly introduced MI terms, as a next step, we introduce a notion of "geometric stability" and link our bounds to the prior art. Finally, we make a rigorous connection between the proposed data-dependent dimension and topological data analysis tools, which then enables us to compute the dimension in a numerically efficient way. We support our theory with experiments conducted on various settings.
Benjamin Dupuis, George Deligiannidis, Umut Şimşekli
2023-02-06T13:24:48Z
http://arxiv.org/abs/2302.02766v2
# Generalization Bounds with Data-dependent Fractal Dimensions ###### Abstract Providing generalization guarantees for modern neural networks has been a crucial task in statistical learning. Recently, several studies have attempted to analyze the generalization error in such settings by using tools from fractal geometry. While these works have successfully introduced new mathematical tools to apprehend generalization, they heavily rely on a Lipschitz continuity assumption, which in general does not hold for neural networks and might make the bounds vacuous. In this work, we address this issue and prove fractal geometry-based generalization bounds _without_ requiring any Lipschitz assumption. To achieve this goal, we build up on a classical covering argument in learning theory and introduce a _data-dependent fractal dimension_. Despite introducing a significant amount of technical complications, this new notion lets us control the generalization error (over either fixed or random hypothesis spaces) along with certain mutual information (MI) terms. To provide a clearer interpretation to the newly introduced MI terms, as a next step, we introduce a notion of 'geometric stability' and link our bounds to the prior art. Finally, we make a rigorous connection between the proposed data-dependent dimension and topological data analysis tools, which then enables us to compute the dimension in a numerically efficient way. We support our theory with experiments conducted on various settings. Generalization bounds, Fractal geometry, Persistent homology ## 1 Introduction Understanding the generalization properties of modern neural networks has been one of the major challenges in statistical learning theory over the last decade. In a classical supervised learning setting, this task boils down to understanding the so-called _generalization error_, which arises from the population risk minimization problem, given as follows: \[\min_{w\in\mathds{R}^{d}}\Bigl{\{}\mathcal{R}(w):=\operatorname*{\mathds{E}} _{z\sim\mu_{z}}[\ell(w,z)]:=\operatorname*{\mathds{E}}_{(x,y)\sim\mu_{z}}[ \mathcal{L}(h_{w}(x),y)]\Bigr{\}},\] where \(x\in\mathcal{X}\) denotes the features, \(y\in\mathcal{Y}\) denotes the labels, \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\) denotes the data space endowed with an unknown probability measure \(\mu_{z}\), referred to as the data distribution, \(h_{w}:\mathcal{X}\longrightarrow\mathcal{Y}\) denotes a parametric predictor with \(w\in\mathds{R}^{d}\) being its parameter vector, \(\mathcal{L}:\mathcal{Y}\times\mathcal{Y}\longrightarrow\mathds{R}\) denotes the loss function, and \(\ell\) is the composition of the loss and the predictor, i.e. \(\ell(w,z)=\ell(w,(x,y))=\mathcal{L}(h_{w}(x),y)\). As \(\mu_{z}\) is unknown, in practice one resorts to the minimization of the empirical risk, given as follows: \[\hat{\mathcal{R}}_{S}(w):=\frac{1}{n}\sum_{i=1}^{n}\ell(w,z_{i}), \tag{1}\] where \(S:=(z_{i})_{1\leq i\leq n}\sim\mu_{z}^{\otimes n}\) is a set of independent and identically distributed (i.i.d.) data points. Then, our goal is to bound the worst-case generalization error that is defined as the gap between the population and empirical risk over a (potentially random) hypothesis set \(\mathcal{W}\subset\mathds{R}^{d}\): \[\mathcal{G}(S):=\sup_{w\in\mathcal{W}}\bigl{(}\mathcal{R}(w)-\hat{\mathcal{R} }_{S}(w)\bigr{)}. 
\tag{2}\] In the context of neural networks, one peculiar observation has been that, even when a network contains millions of parameters (i.e., \(d\gg 1\)), it might still generalize well (Zhang et al., 2017), despite accepted wisdom suggesting that typically \(\mathcal{G}\approx\sqrt{d/n}\)(Anthony and Barlett, 1999). To provide a theoretical understanding for this behavior, several directions have been explored, such as compression-based approaches (Arora et al., 2018; Suzuki et al., 2020; Barsbey et al., 2021) and the approaches focusing on the double-descent phenomenon (Belkin et al., 2019; Nakkiran et al., 2019). Recently, there has been an increasing interest in examining the role of 'algorithm dynamics' on this phenomenon. In particular, it has been illustrated that, in the case where a stochastic optimization algorithm is used for minimizing (1), the optimization trajectories can exhibit a fractal structure (Simsekli et al., 2021; Camuto et al., 2021; Birdal et al., 2021; Hodgkinson et al., 2022). Under the assumption that \(\ell\) is uniformly bounded by some \(B\) and uniformly \(L\)-Lipschitz with respect to \(w\), their results informally implies the following: with probability \(1-\zeta\), we have that \[\mathcal{G}\lesssim LB\sqrt{\frac{\bar{d}(\mathcal{W})+I_{\infty}(\mathcal{W},S)+\log(1/\zeta)}{n}}, \tag{3}\] where \(\mathcal{W}\) is a _data-dependent hypothesis set_, which is provided by the learning algorithm, \(\bar{d}(\mathcal{W})\) is a notion of _fractal dimension_ of \(\mathcal{W}\), and \(I_{\infty}(\mathcal{W},S)\) denotes the _total mutual information_ between the data \(S\) and the hypothesis set \(\mathcal{W}\). These notions will be formally defined in Section 21. In the case where the intrinsic dimension \(\bar{d}(\mathcal{W})\) is significantly smaller than the ambient dimension \(d\) (which has been empirically illustrated in (Simsekli et al., 2021; Birdal et al., 2021)), the bound in (3) provides an explanation on why overparameterized networks might not overfit in practice. Footnote 1: In (Simsekli et al., 2021; Camuto et al., 2021) the bound is logarithmic in \(L\). (Simsekli et al., 2021) only requires sub-gaussian losses while (Camuto et al., 2021) requires sub-exponential losses. Their common points is to require a Lipschitz assumption. While these bounds have brought a new perspective on understanding generalization, they also possess an important drawback, that is they all rely on a _uniform Lipschitz continuity_ assumption on \(\ell\) (with respect to the parameters), which is too strict to hold for deep learning models. While it is clear that we cannot expect Lipschitz continuity of a neural network when the parameter space is unbounded, Herre that, even for the bounded domains, the Lipschitz constants of fully connected networks are typically polynomial in the width, exponential in depth which may be excessively large in practical settings; hence might make the bounds vacuous. The Lipschitz assumption is required in (Simsekli et al., 2021; Birdal et al., 2021; Camuto et al., 2021) as it enables the use of a fractal dimension defined through _the Euclidean distance_ on the hypothesis set \(\mathcal{W}\) (which is independent of the data). Hence, another downside of the Lipschitz assumption is that, the Euclidean distance-based dimension unfortunately ignores certain important components of the learning problem, such as the how the loss \(\ell\) behaves over \(\mathcal{W}\). 
As shown in (Jiang et al., 2019) in the case sharpness measures (Keskar et al., 2017), which measure the sensitivity of the empirical risk around local minima and correlate well with generalization, the data-dependence may improve the ability of a complexity measure to explain generalization. ### Contributions In this study, our main goal is to address the aforementioned issues by proving fractal geometric generalization bounds without requiring any Lipschitz assumptions. Inspired by a classical approach for bounding the Rademacher complexity (defined formally in Appendix A.2), we achieve this goal by making use of a _data-dependent_ pseudo-metric on the hypothesis set \(\mathcal{W}\). Our contributions are as follows: * We prove bounds (Theorems 4 and 5) on the worst-case generalization error of the following form: \[\mathcal{G}\lesssim B\sqrt{\frac{\bar{d}_{S}(\mathcal{W})+I+\log(1/\zeta)}{n}},\] (4) where \(\bar{d}_{S}\) denotes a notion of _data-dependent_ fractal dimension and \(I\) is a (total) mutual information term (see Section 2.2). As opposed to prior work, this bound does not require any Lipschitz assumption and therefore applies to more general settings. However, this improvement comes with the expense of having a more complicated mutual information term compared to the one in (3). * To provide more understanding about the newly introduced mutual information term \(I\) and highlight its links to prior work, we introduce a notion of 'geometric stability' and without requiring Lipschitz continuity, we prove an almost identical bound to the one in Equation (3) (with a slightly worse rate in \(n\)). * In order to be able to compute the data-dependent fractal dimension, we build on (Birdal et al., 2021) and prove that our dimension can also be computed by using numerically efficient topological data analysis tools (Carlsson, 2014; Perez-Fernandez et al., 2021). Finally, we illustrate our bounds on experiments using various neural networks. In addition to not requiring Lipschitz continuity, we show that our data-dependent dimension provides improved correlations with the actual generalization error. All the proofs are provided in the Appendix. ## 2 Technical Background ### Learning framework We formalize the learning algorithm as follows. The data (probability) space is denoted by \((\mathcal{Z},\mathcal{F},\mu_{z})\)2. A learning algorithm \(\mathcal{A}\) is a map generating a random closed set \(\mathcal{W}_{S,U}\)(see (Molchanov, 2017, Definition 1.1.1)) from the data \(S\) and an external random variable \(U\) accounting for the randomness of the learning algorithm. The external randomness \(U\) takes values in some probability space \((\Omega_{U},\mathcal{F}_{U},\mu_{u})\), which means that \(U\) is \(\mathcal{F}_{U}\)-measurable and has distribution \(\mu_{u}\). Moreover, we assume that \(U\) is independent of \(S\). Therefore if we write \(\mathbf{CL}(\mathds{R}^{d})\) for the class of closed sets of \(\mathds{R}^{d}\) endowed with the Effros \(\sigma\)-algebra, as in (Molchanov, 2017), the algorithm will be thought as a measurable map: Footnote 2: For technical measure-theoretic reasons (see Section B.5), it is best to assume \(\mathcal{Z}\subseteq\mathds{R}^{N}\) for some \(N\). \[\mathcal{A}:\bigcup_{n=0}^{\infty}\mathcal{Z}^{n}\times\Omega_{U}\to\mathbf{ CL}(\mathds{R}^{d})\ni\mathcal{W}_{S,U}. \tag{5}\] This formulation encompasses several settings, such as the following two examples. 
**Example 1**: _Given a continuous time process of the form \(\text{dW}_{t}=-\nabla f(W_{t})\text{dt}+\Sigma(W_{t})\text{dX}_{t}\) where \(X_{t}\) is typically a Brownian motion or a Levy process, as considered in various studies (Mandt et al., 2016; Chaudhari and Soatto, 2018; Hu et al., 2018; Jastrzebski et al., 2018; Simsekli et al., 2021), we can view \(\mathcal{W}_{S,U}\) as the set of points of the trajectory \(\{W_{t},\ t\in[0,T]\}\), where \(U\) accounts for randomness coming from quantities defining the model like \(X_{t}\)._ **Example 2**: _Consider a neural network \(h_{w}(\cdot)\) and denote the output of the stochastic gradient descent (SGD) iterates by \(A(x_{0},S,U)\), where \(U\) accounts for random batch indices and \(x_{0}\) is the initialization. This induces a learning algorithm \(\mathcal{W}_{S,U}=\bigcup_{x_{0}\in X_{0}}\{A(x_{0},S,U)\}\), which is closed if \(X_{0}\) is compact under a continuity assumption on \(A\)._ ### Information theoretic quantities Recently, one popular approach to prove generalization bounds has been based on information theory. In this context, Xu and Raginsky (2017); Russo and Zou (2019) proved particularly interesting generalization bounds in terms of the _mutual information_ between input and output of the model. Other authors refined this argument in various settings (Pensia et al., 2018; Negrea et al., 2019; Steinke and Zakynthinou, 2020; Harutyunyan et al., 2021) while Asadi et al. (2019) combined mutual information and chaining to tighten the bounds. In our work we will use the total mutual information to specify the dependence between the data and the fractal properties of the hypothesis set. The classic mutual information between two random elements \(X\) and \(Y\) is defined in terms of the Kullback-Leibler (KL) divergence \(I(X,Y):=\text{KL}(\mathds{P}_{X,Y}||\mathds{P}_{X}\otimes\mathds{P}_{Y})\). It is well known that mutual information can be used as a decoupling tool (Xu and Raginsky, 2017); yet, in our setup, we will need to consider the _total mutual information_, which is defined as follows: \[I_{\infty}(X,Y):=\log\bigg{(}\sup_{B}\frac{\mathds{P}_{X,Y}(B)}{\mathds{P}_{X} \otimes\mathds{P}_{Y}(B)}\bigg{)}. \tag{6}\] Hodgkinson et al. (2022) used total mutual information to decouple the data and the optimization trajectory, they defined it as a limit of \(\alpha\)-mutual information, which is equivalent, see (van Erven and Harremoes, 2014, Theorem 6). ### The upper box-counting dimension Fractal geometry (Falconer, 2014) and dimension theory have been successful tools in the study of dynamical systems and stochastic processes (Pesin, 1997; Xiao, 2004). In our setting, we will be interested in the _upper box-counting dimension_ defined as follows. Given a (pseudo-)metric space \((X,\rho)\) and \(\delta>0\), we first define the closed \(\delta\)-ball centered in \(x\in X\) by \(B^{\rho}_{\delta}(x)=\{y\in X,\ \rho(x,y)\leq\delta\}\) and a _minimal covering_\(N^{\rho}_{\delta}(X)\) as a minimal set of points of \(X\) such that \(X\subset\bigcup_{y\in N^{\rho}_{\delta}(X)}B^{\rho}_{\delta}(y)\). We can then define the upper box-counting dimension as follows: \[\overline{\dim}^{\rho}_{B}(X):=\limsup_{\delta\to 0}\frac{\log|N^{\rho}_{ \delta}(X)|}{\log(1/\delta)}, \tag{7}\] where \(|A|\) denotes the cardinality of a set \(A\). 
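In practice, the upper box-counting dimension in (7) is estimated from a finite point cloud by counting (approximately minimal) coverings at several scales and reading off the slope of \(\log|N^{\rho}_{\delta}|\) against \(\log(1/\delta)\). The following Python sketch illustrates this idea under a generic metric; the greedy covering heuristic, the point cloud, and the scale grid are illustrative assumptions rather than the procedure used later in the paper.

```python
import numpy as np

def greedy_cover_size(points, dist, delta):
    """Greedy heuristic for a delta-covering: repeatedly pick an uncovered
    point as a new center until every point is within delta of some center.
    Returns the number of centers, an upper bound on |N_delta|."""
    n = len(points)
    covered = np.zeros(n, dtype=bool)
    n_centers = 0
    for i in range(n):
        if not covered[i]:
            n_centers += 1
            d = np.array([dist(points[i], points[j]) for j in range(n)])
            covered |= d <= delta
    return n_centers

def box_counting_dimension(points, dist, deltas):
    """Estimate the upper box-counting dimension of Eq. (7) as the slope of
    log N_delta versus log(1/delta) over a range of scales."""
    log_counts = [np.log(greedy_cover_size(points, dist, d)) for d in deltas]
    log_scales = [np.log(1.0 / d) for d in deltas]
    slope, _ = np.polyfit(log_scales, log_counts, 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.random((800, 2))                       # fills the unit square
    euclid = lambda x, y: float(np.linalg.norm(x - y))
    print(box_counting_dimension(cloud, euclid, np.geomspace(0.04, 0.2, 6)))
    # The estimate should be roughly 2 for this two-dimensional cloud.
```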
Under the Lipschitz loss assumption, (Simsekli et al., 2021; Birdal et al., 2021; Camuto et al., 2021; Hodgkinson et al., 2022), related different kinds of fractal dimensions, computed with _the Euclidean distance_\(\rho(w,w^{\prime})=\operatorname{Eucl}(w,w^{\prime}):=\|w-w^{\prime}\|_{2}\), to the generalization error. Our approach in this study will be based on using a _data-dependent_ pseudo-metric \(\rho\), which will enable us to remove the Lipschitz assumption. ## 3 Main Results In this section we present our main theoretical results; our aim is to relate the worst-case generalization error of (5) with the upper box-counting dimension computed based on the following random pseudo-metric: \[\rho_{S}(w,w^{\prime}):=\frac{1}{n}\sum_{i=1}^{n}|\ell(w,z_{i})-\ell(w^{\prime},z_{i})|. \tag{8}\] We insist on the fact that it is only a pseudo-metric because in practice we can have \(\rho_{S}(w,w^{\prime})=0\) while \(w\neq w^{\prime}\), for example due to the internal symmetries of a neural network. ### Main assumptions A key component of our work is that we do not use any Lipschitz assumption on \(\ell\) as for example in (Simsekli et al., 2021; Hodgkinson et al., 2022). The only regularity assumption we impose is the following: **Assumption 1**: _The loss \(\ell:\mathds{R}^{d}\times\mathcal{Z}\longrightarrow\mathds{R}\) is continuous in both variables and uniformly bounded by some \(B>0\)._ We note that the box-counting dimension with respect to the pseudo-metric (8) involves minimal coverings, which we denote \(N^{\rho_{S}}_{\delta}(A)\) for some set \(A\). The boundedness assumption is essential to ensure that minimal coverings are finite and \(\overline{\dim}^{\rho_{S}}_{B}\) is also finite. Therefore our boundedness assumption cannot be replaced with a subgaussian assumption, as opposed to (Simsekli et al., 2021). We also assume that we can construct minimal coverings which are random closed (finite) sets in the sense of (Molchanov, 2017, Definition 1.1.1); this is made precise with the following assumption: **Assumption 2**: _Let \(C\subset\mathds{R}^{d}\) be any closed set, \(\delta>0\), \(S\in\mathcal{Z}^{n}\) and \(S^{\prime}\in\mathcal{Z}^{m}\). We can construct minimal \(\delta\)-coverings \(N_{\delta}^{\rho_{S^{\prime}}}(C\cap\mathcal{W}_{S,U})\) which are random finite sets with respect to the product \(\sigma\)-algebra \(\mathcal{F}^{\otimes n}\otimes\mathcal{F}^{\otimes n}\otimes\mathcal{F}_{U}\) (measurability with respect to \(S,S^{\prime},U\)). We denote by \(\mathcal{N}_{\delta}(C\cap\mathcal{W}_{S,U})\) the family of all those random minimal coverings._ **Remark 3**: _Assumption 2 essentially enables us to avoid technical measurability complications. The main message is that we assume that we are able to construct "measurable coverings". This assumption can be cast as a selection property; indeed for each realization of \((S,S^{\prime},U)\) there may be a wide range of possible minimal coverings: what we assume is that we can select one of them for each \((S,S^{\prime},U)\) so that the obtained random set is measurable. This could be achieved via Kuratowski-Ryll-Nardzewski's theorem (see (Kechris, 1995, Section 12)) applied on a proper topology on \(\textbf{CL}(\mathds{R}^{d})\), see Appendix B.5.2 for some details._ As the upper box-counting dimension (7) may be written as a countable limit, the measurability assumption 2 also implies that \(\overline{\dim}_{B}^{\rho_{S}}(\mathcal{W}_{S,U})\) is a random variable. 
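As a concrete illustration of the pseudo-metric in (8), the sketch below computes \(\rho_{S}\) between two hypotheses and the full pairwise pseudo-distance matrix over a finite hypothesis set; it also exposes the sub-sampled variant \(\rho_{T}\) that reappears in the robustness analysis of Section 5. The function names and array layout are assumptions made for illustration only.

```python
import numpy as np

def rho_S(w1, w2, data, loss, subset=None):
    """Data-dependent pseudo-metric of Eq. (8):
    rho_S(w, w') = (1/n) * sum_i |loss(w, z_i) - loss(w', z_i)|.
    Passing `subset` (a list of indices) yields the sub-sampled variant
    rho_T used later in the robustness analysis of Section 5."""
    idx = range(len(data)) if subset is None else subset
    diffs = [abs(loss(w1, data[i]) - loss(w2, data[i])) for i in idx]
    return sum(diffs) / len(diffs)

def pairwise_rho_S(weights, data, loss):
    """Pairwise pseudo-distance matrix over a finite hypothesis set
    (e.g., an SGD trajectory); this matrix is all that a covering or a
    degree-0 persistent homology computation needs as input."""
    # Pre-compute the |W|-by-n matrix of per-sample losses once.
    L = np.array([[loss(w, z) for z in data] for w in weights])
    # rho_S(w_a, w_b) is the mean over samples of |L[a] - L[b]|.
    return np.mean(np.abs(L[:, None, :] - L[None, :, :]), axis=2)
```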
Continuity of the loss in Assumption 1 is there for technical purposes, e.g., to make quantities of the form \(\sup_{w\in\mathcal{W}_{S,U}}\big{(}\mathcal{R}(w)-\tilde{\mathcal{R}}_{S}(w) \big{)}\) well-defined random variables (see (Molchanov, 2017, Theorem 1.3.28) and Section B.5 in the Appendix). ### Warm-up: fixed hypothesis spaces In this subsection we fix a _deterministic_ closed set \(\mathcal{W}\subset\mathds{R}^{d}\) and consider its upper box-counting dimension with respect to the data-dependent pseudo-metric (8), which we denote by \(d(S):=\overline{\dim}_{B}^{\rho_{S}}(\mathcal{W})\). Our goal is to bound the worst-case generalization error as defined in (2). The next theorem is an extension of the classical covering bounds of Rademacher complexity (Barlett and Mendelson, 2002; Rebeschini, 2020). **Theorem 4**: _For all \(\epsilon,\gamma,\eta>0\) and \(n\in\mathds{N}_{+}\) there exists \(\delta_{n,\gamma,\epsilon}>0\) such that with probability at least \(1-2\eta-\gamma\) under \(\mu_{z}^{\otimes n}\), for all \(\delta<\delta_{n,\gamma,\epsilon}\) we have:_ \[\mathcal{G}(S)\leq 2B\sqrt{\frac{4(d(S)+\epsilon)\log(1/\delta)+9\log(1/ \eta)}{n}}+2\delta.\] Theorem 4 is therefore similar to (Simsekli et al., 2021, Theorem 1), which used a fractal dimension based on the Euclidean distance on \(\mathds{R}^{d}\), \(\|w-w^{\prime}\|_{2}\) and a fixed hypothesis space. The improvement here is in the absence of Lipschitz assumption. However, Theorem 4 might not be sufficiently satisfying. The proof involves techniques that do not hold in the case of random hypothesis spaces, an issue which we address in the next subsection. ### Random hypothesis spaces Theorem 4 is interesting because it gives a bound similar to (Simsekli et al., 2021) in the case of a fixed hypothesis set but with a new notion of data dependent intrinsic dimension. Now we come to the case where the hypothesis set \(\mathcal{W}_{S,U}\) generated by the learning algorithm (5) is a random set. For notational purposes let us denote the upper box-counting dimension of \(\mathcal{W}_{S,U}\) induced by pseudo-metric (8) by \(d(S,U):=\overline{\dim_{B}^{\rho_{S}}}(\mathcal{W}_{S,U})\), and denote the worst-case generalization error by \[\mathcal{G}(S,U):=\sup_{w\in\mathcal{W}_{S,U}}(\mathcal{R}(w)-\hat{\mathcal{ R}}_{S}(w)). \tag{9}\] Here again, note that \(d(S,U)\) can be written as a countable limit of random variables and therefore defines a random variable thanks to Assumption 2. The main difficulty here is that classical arguments based on the Rademacher complexity cannot be applied in this case as \(\mathcal{W}_{S,U}\) depends on the data sample \(S\). Hence, to be able to develop a covering argument, we first cover the set \(\mathcal{W}_{S,U}\) by using the pseudo-metric \(\rho_{S}\) (cf. Section 2.3) and rely on the following decomposition: for any \(\delta>0\) and \(w^{\prime}\in N_{\delta}^{\rho_{S}}(\mathcal{W}_{S,U})\) we have that \[\mathcal{R}(w)-\hat{\mathcal{R}}_{S}(w)\leq\mathcal{R}\left(w^{\prime}\right) -\hat{\mathcal{R}}_{S}\left(w^{\prime}\right)+\left|\hat{\mathcal{R}}_{S}(w) -\hat{\mathcal{R}}_{S}\left(w^{\prime}\right)\right|+\left|\mathcal{R}(w)- \mathcal{R}\left(w^{\prime}\right)\right|.\] In the above inequality, the first term can be controlled by standard techniques as \(w^{\prime}\) lives in a finite set \(N_{\delta}^{\rho_{S}}(\mathcal{W}_{S,U})\) and the second term is trivially less than \(\delta\) by the definition of coverings. However, the last term cannot be bounded in an obvious way. 
To overcome this issue we introduce 'approximate level-sets' of the population risk, defined as follows3 for some \(K\in\mathds{N}_{+}\): Footnote 3: As \(U\) is independent of \(S\), we drop the dependence on it to ease the notation. \[R_{S}^{j}:=\mathcal{W}_{S,U}\cap\mathcal{R}^{-1}\bigg{(}\bigg{[}\frac{jB}{K}, \frac{(j+1)B}{K}\bigg{]}\bigg{)}, \tag{10}\] where \(j=0,\ldots,K-1\) and \(\mathcal{R}^{-1}\) denotes the inverse image of \(\mathcal{R}\). Let \(N_{\delta,j}\) collect the centers of a minimal \(\delta\)-cover of \(R_{S}^{j}\) relatively to \(\rho_{S}\)4. The next theorem provides a generalization bound for random hypothesis sets. Footnote 4: Assumption 2 extends to the randomness of those sets \(N_{\delta,j}\). **Theorem 5**: _Let us set \(K=\lfloor\sqrt{n}\rfloor\) and define \(I_{n,\delta}:=\max_{0\leq j\leq\lfloor\sqrt{n}\rfloor}I_{\infty}(S,N_{\delta, j})\). Then, for all \(\epsilon,\gamma,\eta>0\), there exists \(\delta_{n,\gamma,\epsilon}>0\) such that with probability at least \(1-\eta-\gamma\) under \(\mu_{z}^{\otimes n}\otimes\mu_{u}\), for all \(\delta<\delta_{n,\gamma,\epsilon}\) we have:_ \[\mathcal{G}(S,U)\leq\frac{B}{\sqrt{n}-1}+\delta+\sqrt{2}B\sqrt{\frac{(d(S,U)+ \epsilon)\log(2/\delta)+\log(\sqrt{n}/\eta)+I_{n,\delta}}{n}}.\] This theorem gives us a bound in the general case similar to (Simsekli et al., 2021, Theorem 2), yet without requiring Lipschitz continuity. Moreover, also similar to (Simsekli et al., 2021; Hodgkinson et al., 2022), Theorem 5 introduces a mutual information term \(I_{n,\delta}\), which intuitively measures the local mutual dependence between the data and the coverings. This can be seen as how the data influences the 'local fractal behavior' of the the hypothesis set. On the other hand, despite the similarity to prior work, \(I_{n,\delta}\) might be more complex because the dependence of \(N_{\delta,j}\) on \(S\) comes both from the pseudo-metric \(\rho_{S}\) and the hypothesis set \(\mathcal{W}_{S,U}\). In the next subsection, we show that we can modify our theory in a way that it involves the simpler mutual information term proposed in (Hodgkinson et al., 2022). ### Geometric stability and mutual information The intricate dependence between \(N_{\delta,j}\) and \(S\) makes it hard to express the term \(I_{n,\delta}\) in Theorem 3.1 or bound it with standard methods (e.g. data-processing inequality). In this subsection, we introduce a notion of 'geometric stability' to obtain a more interpretable bound. Algorithmic stability is a key notion in learning theory and has been shown to imply good generalization properties (Bousquet, 2002; Bousquet et al., 2020; Chandramoorthy et al., 2022). Recently, Foster et al. (2020) extended this notion to the stability of _hypothesis sets_, and proposed a notion of stability as a bound on a kind of Hausdorff distance between the hypothesis sets generated by neighboring datasets. In our setting this would mean that there exists some \(\bar{\beta}>0\) such that for all \(S,S^{\prime}\in\mathcal{Z}^{n}\) differing only by one element, for all \(u\in\mathcal{U}\), we have: \[\forall w\in\mathcal{W}_{S,U},\ \exists w^{\prime}\in\mathcal{W}_{S^{\prime},U},\ \forall z\in\mathcal{Z},\ |\ell(w,z)-\ell(w^{\prime},z)|\leq\bar{\beta}. \tag{11}\] Foster et al. (2020) argue that in many situations \(\bar{\beta}=\mathcal{O}(1/n)\). 
Inspired by (Foster et al., 2020), we introduce a stability notion, coined _geometric stability_, on the minimal coverings that will allow us to reduce the statistical dependence between the dataset \(S\sim\mu_{z}^{\otimes n}\) and those coverings. To state our stability notion, we need to refine our definition of coverings. Let \(A\subset\mathds{R}^{d}\) be some closed set, potentially random. For any \(\delta>0\) we define \(N_{\delta}(A,S)\) to be the random minimal coverings of \(A\) by closed \(\delta\)-balls under the pseudo-metric \(\rho_{S}\) (8) with centers in \(A\). Note that the dependence on \(S\) in \(N_{\delta}(A,S)\) only refers to the _pseudo-metric_ used. In addition to Assumption 2, which states that we can make such a selection of \(N_{\delta}(A,S)\), making it a well-defined random set, we add the fact that this selection can be made regular enough in the following sense. **Definition 6**: _We say that a set \(A\) is geometrically stable if there exist some \(\beta>0\) and \(\alpha>0\) such that for \(\delta\) small enough we can find a random covering \(S\mapsto N_{\delta}(A,S)\) such that for all \(S\in\mathcal{Z}^{n}\) and \(S^{\prime}\in\mathcal{Z}^{n-1}\) with \(S^{\prime}=S\setminus\{z_{i}\}\) for some \(i\), the coverings \(N_{\delta}(A,S)\) and \(N_{\delta}(A,S^{\prime})\) are within \(\beta/n^{\alpha}\) of each other for a uniform data-dependent Hausdorff distance, i.e.,_ \[\forall w\in N_{\delta}(S,A),\ \exists w^{\prime}\in N_{\delta}(S^{\prime},A),\ \sup_{z\in\mathcal{Z}}|\ell(w,z)-\ell(w^{\prime},z)|\leq\frac{\beta}{n^{ \alpha}}. \tag{12}\] Based on this definition, we assume the following condition. **Assumption 7**: _Let \(K\in\mathds{N}_{+}\). There exist \(\alpha\in(0,3/2)\) and \(\beta>0\) (potentially depending on \(K\)) such that all sets of the form \(\mathcal{W}_{S,U}\cap\mathcal{R}^{-1}\big{(}\big{[}\frac{jB}{K},\frac{(j+1)B}{ K}\big{]}\big{)}\) are geometrically stable with parameters \((\alpha,\beta)\)._ Assumption 7 essentially imposes a _local_ regularity condition on the fractal behavior of \(\mathcal{W}_{S,U}\) with respect to the pseudo-metric \(\rho_{S}\). Intuitively it means that we can select a regular enough covering among all coverings. Note that geometric stability is a condition on how the coverings vary with respect to the pseudo-metric, which is fundamentally different from (Foster et al., 2020). The next theorem provides a generalization bound under the geometric stability condition. **Theorem 8**: _Let \(d(S,U)\) and \(\mathcal{G}(S,U)\) be as in Theorem 5 and further define \(I:=I_{\infty}(S,\mathcal{W}_{S,U})\). Suppose that Assumption 7 holds. Then there exist constants \(n_{\alpha},\delta_{\gamma,\epsilon,n}>0\) such that for all \(n\geq n_{\alpha}\), with probability \(1-\gamma-\eta\), and for all \(\delta\leq\delta_{\gamma,\epsilon,n}\), the following inequality holds:_ \[\mathcal{G}(S,U)\leq\frac{3B+2\beta}{n^{\alpha/3}}+\delta+B\sqrt{\frac{\left( \epsilon+d(S,U)\right)\log(4/\delta)+\log(1/\eta)+\log(n)+I}{2n^{\frac{2\alpha }{3}}}}.\] _Moreover, we have that \(n_{\alpha}=\max\{2^{\frac{3}{2\alpha}},2^{1+\frac{3}{3-2\alpha}}\}\)._ While Assumption 7 might be restrictive, our goal here is to highlight how such geometric regularity can help us deal with the statistical dependence between the data and the hypothesis set. Note that the mutual information term appearing in Theorem 8 is much more interpretable compared to the corresponding terms in Theorem 5, and has the exact same form as the term presented in (Hodgkinson et al., 2022). 
We also note that this way of controlling the dependence between the data and the hypothesis set comes at the expense of potentially losing in the convergence rate of our bound. More precisely, for a stability index of \(\alpha\), we get a convergence rate of \(n^{-\alpha/3}\). By examining the value of the constant \(n_{\alpha}\) in Theorem 8, we observe that getting closer to an optimal rate (\(\alpha\approx\frac{3}{2}\)) implies a larger \(n_{\alpha}\), rendering our bound asymptotic. ## 4 Computational Aspects In this section, we will illustrate how the proposed data-dependent dimension can be numerically computed by making a rigorous connection to topological data analysis (TDA) tools (Boissonat et al., 2018). ### Persistent homology Persistent homology (PH) is a well-known notion in TDA typically used for point cloud analysis (Edelsbrunner and Harer, 2010; Carlsson, 2014). Previous works have linked neural networks and algebraic topology (Rieck et al., 2019; Perez-Fernandez et al., 2021), notably (Corneanu et al., 2020), who established experimental evidence of a link between homology and generalization. Important progress was made in (Birdal et al., 2021), who used PH tools to estimate the upper box-counting dimension induced by the Euclidean distance on \(\mathcal{W}_{S,U}\). Here we extend their approach to the case of data-dependent pseudo-metrics, which lays the groundwork for our experimental analysis. The formal definition of PH is rather technical and is not essential to our purposes; hence, we only provide a high-level description here, and provide a more detailed description in Section A.4 (for a formal introduction, see (Boissonat et al., 2018; Memoli and Singhal, 2019)). In essence, given a point cloud \(W\subset\mathds{R}^{d}\), 'PH of degree 0', denoted by \(\mathrm{PH}^{0}\), keeps track of the _connected components_ in \(W\), as we examine \(W\) at a gradually decreasing resolution. Given a bounded (pseudo-)metric space \((X,\rho)\), by using \(\mathrm{PH}^{0}\), one can introduce another notion of fractal dimension, called the _persistent homology dimension_, which we denote by \(\mathrm{dim}^{\rho}_{\mathrm{PH}^{0}}(X)\) (see Section A.4 and (Schweinhart, 2019, Definition 4)). Our particular interest in \(\mathrm{dim}^{\rho}_{\mathrm{PH}^{0}}(X)\) in the case where \(\rho\) is a proper metric comes from an important result (Kozma et al., 2005; Schweinhart, 2020) stating that for any bounded metric space \((X,\rho)\) we have the following identity: \[\overline{\mathrm{dim}}^{\rho}_{B}(X)=\mathrm{dim}^{\rho}_{\mathrm{PH}^{0}}( X). \tag{13}\] Several studies used this property to numerically evaluate the upper box-counting dimension (Adams et al., 2020; Birdal et al., 2021). In particular, Birdal et al. (2021) combined it with the results from (Simsekli et al., 2021) and showed that \(\mathrm{dim}^{\mathrm{Eucl}}_{\mathrm{PH}^{0}}(X)\), associated with the Euclidean metric on the parameter space, can be linked to the generalization error under the Lipschitz loss condition. 
### PH dimension in pseudo-metric spaces In order to extend the aforementioned analysis to our data-dependent dimension, we must first prove that the equality (13) extends to pseudo-metric spaces, which is established in the following theorem: **Theorem 9**: _Let \((X,\rho)\) be a bounded pseudo-metric space; then we have \(\overline{\mathrm{dim}}^{\rho}_{B}(X)=\mathrm{dim}^{\rho}_{PH^{0}}(X)\)._ This theorem shows that, similar to \(\mathrm{dim}^{\mathrm{Eucl}}_{\mathrm{PH}^{0}}(\mathcal{W}_{S,U})\), our proposed dimension \(\mathrm{dim}^{\rho_{S}}_{\mathrm{PH}^{0}}(\mathcal{W}_{S,U})\) can also be computed by using numerically efficient TDA tools. Moreover, Theorem 4 now informally implies that with probability \(1-\zeta\): \[\mathcal{G}(S)\lesssim\sqrt{\frac{\mathrm{dim}^{\rho_{S}}_{\mathrm{PH}^{0}}( \mathcal{W})\log(1/\delta)+\log(1/\zeta)}{n}}+\delta. \tag{14}\] Theorems 5 and 8 can be adapted similarly. ## 5 Experiments **Experimental setup.** In our experiments, we closely follow the setting used in (Birdal et al., 2021). In particular, we consider learning a neural network by using SGD, and choose the hypothesis set \(\mathcal{W}_{S,U}\) as the _optimization trajectory_ near the local minimum found by SGD5. Then, we numerically estimate \(\mathrm{dim}^{\rho_{S}}_{\mathrm{PH}^{0}}(\mathcal{W}_{S,U})\) by using the PH software provided in (Perez et al., 2021). The main difference between our approach and (Birdal et al., 2021) is that we replace the Euclidean metric with the pseudo-metric \(\rho_{S}\) to compute the PH dimension. Here is a brief description of the method: given a neural network, its loss \(\ell(w,z)\), and a dataset \(S=(z_{1},\ldots,z_{n})\), we run SGD for \(K^{\star}\) iterations, \((w_{k})_{k=0}^{K^{\star}}\), such that \(w_{K^{\star}}\) reaches near a local minimum. We then run SGD for 5000 more iterations and set \(\mathcal{W}_{S,U}\) to \(\{w_{K^{\star}+1},\ldots,w_{K^{\star}+5000}\}\). We then approximate \(\dim_{\text{PH}^{0}}^{\rho_{S}}(\mathcal{W}_{S,U})\) by using the algorithm proposed in (Birdal et al., 2021), replacing the Euclidean distance with \(\rho_{S}\). We experimentally evaluate \(\dim_{\text{PH}^{0}}^{\rho_{S}}(\mathcal{W}_{S,U})\) in different settings: (i) a regression experiment with Fully Connected Networks of 5 (FCN-5) and 7 (FCN-7) layers trained on the California Housing Dataset (CHD) (Kelley Pace and Barry, 1997), (ii) training FCN-5 and FCN-7 networks on the MNIST dataset (Lecun et al., 1998) and (iii) training AlexNet (Krizhevsky et al., 2017) on the CIFAR-10 dataset (Krizhevsky et al., 2014). More experiments are shown in the appendix Section D.
Figure 1: \(\dim_{\text{PH}^{0}}^{\rho_{S}}\) (denoted \(\dim_{\text{PH}^{0}}^{S}\) in the figure) versus accuracy gap for FCN-5 (_top_), FCN-7 (_middle_) on MNIST and AlexNet (_bottom_) on CIFAR-10. Different colors indicate different learning rates and different markers indicate different batch sizes.
All the experiments use standard ReLU activation and vanilla SGD with constant step-size. We made both learning rate and batch size vary across a \(6\times 6\) grid. For experiments on CHD and MNIST we also used 10 different random seeds. All hyperparameter configurations are available in Section C. Note that in the case of a classification experiment, one could not compute \(\dim_{\mathrm{PH}^{0}}^{\rho_{S}}\) using a zero-one loss in (8). 
Indeed, it would be equivalent to computing PH on the _finite_ set \(\{0,1\}^{n}\subset\mathds{R}^{n}\), which trivially gives an upper box-counting dimension of 0. To overcome this issue, we compute \(\dim_{\mathrm{PH}^{0}}^{\rho_{S}}\) using the surrogate loss (cross entropy in our case) and illustrate that it is still a good predictor of the gap between the training and testing accuracies. For the sake of completeness, we also report how \(\dim_{\mathrm{PH}^{0}}^{\rho_{S}}\) behaves with respect to the actual _loss gap_ in Section D. **Results.** In order to compare our data-dependent intrinsic dimension with the one introduced in (Birdal et al., 2021), which is the PH dimension induced by the Euclidean distance on the trajectory and denoted \(\dim_{\mathrm{PH}^{0}}^{\mathrm{Eucl}}\), we compute various correlation statistics, namely Spearman's rank correlation coefficient \(\rho\) (Kendall and Stuart, 1973) and Kendall's coefficient \(\tau\) (Kendall, 1938). We also use the _mean Granulated Kendall's Coefficient_ \(\boldsymbol{\Psi}\) introduced in (Jiang et al., 2019), which aims at isolating the influence of each hyperparameter and, according to the authors, could better capture the causal relationships between the generalization and the proposed complexity metric (the intrinsic dimension in our case). For more details on the exact computation of these coefficients, please refer to Section C.1. Therefore, \((\rho,\mathbf{\Psi},\tau)\) are our main indicators of performance. The values of each granulated Kendall's coefficient are reported in Section D6. Footnote 6: All those coefficients are between \(-1\) and \(1\), with a value of \(1\) indicating a perfect positive correlation.
Figure 2: \(\dim_{\mathrm{PH}^{0}}^{\rho_{S}}\) (denoted \(\dim_{\mathrm{PH}^{0}}^{S}\) in the figure) versus generalization gap for FCN-5 (_top_) and FCN-7 (_bottom_) trained on CHD. Different colors indicate different learning rates and different markers indicate different batch sizes.
Figures 1 and 2 depict the data-dependent dimension versus the generalization gap, as computed in different settings. We observe that, in all cases, we have a strong correlation between \(\mathrm{dim}_{\mathrm{PH}^{0}}^{\rho_{S}}(\mathcal{W}_{S,U})\) and the generalization gap, for a wide range of hyperparameters. We also observe that the highest learning rates and lowest batch sizes seem to yield weaker correlation, which is similar to what was observed in (Birdal et al., 2021) as well. This might be caused by the increased noise, as we suspect that the point clouds in those settings show more complex fractal structures and hence require more points for a precise computation of the PH dimension. Next, we report the correlation coefficients for the same experiments in Tables 1, 2 and 3. The results show that on average our proposed dimension always yields improved metrics compared to the dimension introduced in (Birdal et al., 2021). The improvement is particularly clear in the regression experiment we performed (as the classification task yields larger variations in the metrics, see Table 2). This indicates that the proposed dimension may be particularly pertinent in specific settings. Moreover, increasing the size of the model, in all experiments, seems to have a positive impact on the correlation. We suspect that this might be due to the increasing local-Lipschitz constant of the network. We provide more experimental results in Section D. 
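For completeness, a minimal sketch of how such correlation statistics can be computed is given below; the granulated coefficient follows our reading of (Jiang et al., 2019) in a simplified form, and the column names of the hypothetical `runs` table are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import spearmanr, kendalltau

def granulated_kendall(runs, dim_col, gap_col, hyperparams):
    """Simplified sketch of the mean granulated Kendall coefficient of
    Jiang et al. (2019): for each hyperparameter, compute Kendall's tau
    within groups where all *other* hyperparameters are held fixed,
    average those taus, then average over hyperparameters.
    `runs` is assumed to be a pandas DataFrame with one row per training run."""
    per_hp = []
    for hp in hyperparams:
        others = [h for h in hyperparams if h != hp]
        taus = [kendalltau(g[dim_col], g[gap_col])[0]
                for _, g in runs.groupby(others) if g[hp].nunique() > 1]
        per_hp.append(np.nanmean(taus))
    return float(np.mean(per_hp))

# Hypothetical usage, with columns "lr", "batch_size", "ph_dim" and "gap":
# rho = spearmanr(runs["ph_dim"], runs["gap"])[0]
# tau = kendalltau(runs["ph_dim"], runs["gap"])[0]
# psi = granulated_kendall(runs, "ph_dim", "gap", ["lr", "batch_size"])
```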
\begin{table} \begin{tabular}{l l l l l} \hline \hline Model & Dim. & \(\rho\) & \(\mathbf{\Psi}\) & \(\tau\) \\ \hline FCN-5 & \(\mathrm{dim}_{\mathrm{PH}^{0}}^{\mathrm{Eucl}}\) & \(0.77_{\pm 0.08}\) & \(0.54_{\pm 0.11}\) & \(0.59_{\pm 0.07}\) \\ FCN-5 & \(\mathrm{dim}_{\mathrm{PH}^{0}}^{\rho_{S}}\) & \(\mathbf{0.87}_{\pm 0.05}\) & \(\mathbf{0.68}_{\pm 0.10}\) & \(\mathbf{0.71}_{\pm 0.09}\) \\ \hline FCN-7 & \(\mathrm{dim}_{\mathrm{PH}^{0}}^{\mathrm{Eucl}}\) & \(0.40_{\pm 0.09}\) & \(0.16_{\pm 0.08}\) & \(0.28_{\pm 0.07}\) \\ FCN-7 & \(\mathrm{dim}_{\mathrm{PH}^{0}}^{\rho_{S}}\) & \(\mathbf{0.77}_{\pm 0.08}\) & \(\mathbf{0.62}_{\pm 0.06}\) & \(\mathbf{0.77}_{\pm 0.08}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Correlation coefficients on CHD
\begin{table} \begin{tabular}{l l l l l} \hline \hline Model & Dim. & \(\rho\) & \(\mathbf{\Psi}\) & \(\tau\) \\ \hline FCN-5 & \(\mathrm{dim}_{\mathrm{PH}^{0}}^{\mathrm{Eucl}}\) & \(0.62_{\pm 0.10}\) & \(0.78_{\pm 0.08}\) & \(0.47_{\pm 0.07}\) \\ FCN-5 & \(\mathrm{dim}_{\mathrm{PH}^{0}}^{\rho_{S}}\) & \(\mathbf{0.73}_{\pm 0.07}\) & \(\mathbf{0.81}_{\pm 0.07}\) & \(\mathbf{0.56}_{\pm 0.06}\) \\ \hline FCN-7 & \(\mathrm{dim}_{\mathrm{PH}^{0}}^{\mathrm{Eucl}}\) & \(0.80_{\pm 0.04}\) & \(0.88_{\pm 0.04}\) & \(0.62_{\pm 0.04}\) \\ FCN-7 & \(\mathrm{dim}_{\mathrm{PH}^{0}}^{\rho_{S}}\) & \(\mathbf{0.89}_{\pm 0.02}\) & \(\mathbf{0.90}_{\pm 0.04}\) & \(\mathbf{0.73}_{\pm 0.03}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Correlation coefficients on MNIST
\begin{table} \begin{tabular}{l l l l l} \hline \hline Model & Dim. & \(\rho\) & \(\mathbf{\Psi}\) & \(\tau\) \\ \hline AlexNet & \(\mathrm{dim}_{\mathrm{PH}^{0}}^{\mathrm{Eucl}}\) & \(0.86\) & \(0.81\) & \(0.68\) \\ AlexNet & \(\mathrm{dim}_{\mathrm{PH}^{0}}^{\rho_{S}}\) & \(\mathbf{0.93}\) & \(\mathbf{0.84}\) & \(\mathbf{0.78}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Correlation coefficients with AlexNet on CIFAR-10
**Robustness analysis.** The computation of \(\rho_{S}(w,w^{\prime})\) requires the exact evaluation of the loss function on every data point \(\{z_{1},\ldots,z_{n}\}\) for every \(w,w^{\prime}\in\mathcal{W}_{S,U}\). This introduces a computational bottleneck in the case where \(n\) is excessively large. To address this issue, in this section we explore an approximate way of computing \(\dim_{\mathrm{PH}^{0}}^{\rho_{S}}\). Similar to the computation of a stochastic gradient, instead of computing the distance on every data point, we first draw a random subset of data points \(T\subset S\), with \(|T|\ll n\), and use the following approximation: \(\rho_{S}(w,w^{\prime})\approx\rho_{T}(w,w^{\prime}):=\frac{1}{|T|}\sum_{z\in T }|\ell(w,z)-\ell(w^{\prime},z)|\). We now conduct experiments to analyze the robustness of the computation of \(\dim_{\mathrm{PH}^{0}}^{\rho_{S}}\) with respect to the varying size of the random subset \(T\). More precisely, we randomly select a subset \(T\subset S\) whose size varies between \(2\%\) and \(99\%\) of the size of the dataset \(S\) and compute the PH dimension using the approximate pseudo-metric. Note that the whole dataset \(S\) is of course still used to produce the SGD iterates. Figure 3 presents results on the MNIST and CHD datasets in terms of the relative error, i.e., \(|\dim_{\mathrm{PH}^{0}}^{\rho_{T}}-\dim_{\mathrm{PH}^{0}}^{\rho_{S}}|/\dim_{ \mathrm{PH}^{0}}^{\rho_{S}}\). 
Figure 3: Robustness experiment using a FCNN trained on MNIST (_Left_) and CHD (_Right_). The \(x\)-axis represents the proportion of the data \(T\) used to compute the metric; the \(y\)-axis is the relative error with respect to the full-dataset-based dimension.
The results show that the proposed dimension is significantly robust to the approximation of the pseudo-metric: even with \(40\%\) of the data, we achieve almost identical results to those obtained with the full dataset. ## 6 Conclusion In this paper, we proved generalization bounds that do not require the Lipschitz continuity of the loss, which can be crucial in modern neural network settings. We linked the generalization error to a data-dependent fractal dimension of the random hypothesis set. We first extended some classical covering arguments to state a bound in the case of a fixed hypothesis set and then proved a result in a general learning setting. While some intricate mutual information terms between the geometry and the data appeared in this bound, we presented a possible workaround by introducing a stability property for the coverings of the hypothesis set. Finally, we made a connection to persistent homology, which allowed us to numerically approximate the intrinsic dimension and thus support our theory with experiments. Certain points remain to be studied concerning our results. First, the existence of differentiable persistent homology libraries (Hofer et al., 2018, 2019) opens the door to the use of our intrinsic dimension as a regularization term as in (Birdal et al., 2021). Refining our proof techniques, for example using the chaining method (Ledoux and Talagrand, 1991; Clerico et al., 2022), could help us improve our theoretical results or weaken the assumptions. ## Acknowledgments U.S. is partially supported by the French government under management of Agence Nationale de la Recherche as part of the "Investissements d'avenir" program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute). B.D. and U.S. are partially supported by the European Research Council Starting Grant DYNASTY - 101039676.
2305.09694
Isotropic stellar model in mimetic theory
We investigate how to derive an isotropic stellar model in the framework of mimetic gravitational theory. Recently, this theory has gained big interest due to its difference from Einstein's general relativity (GR), especially in the domain non-vacuum solutions. In this regard, we apply the field equation of mimetic gravitational theory to a spherically symmetric ansatz and obtain an over determined system of non-linear differential equations in which differential equations are less than the unknown functions. To overcome the over determined system we suppose a specific form of the temporal component of the metric potential, $g_{tt}$, and assume the vanishing of the anisotropic condition to derive the form of the spatial component of the metric, $g_{rr}$. In this regard, we discuss the possibility to derive a stellar isotropic model that is in agreement with observed pulsars. To examine the stability of the isotropic model we use the Tolman-Oppenheimer-Volkoff equation and the adiabatic index. Furthermore, we assess the model's validity by evaluating its compatibility with a broad range of observed pulsar masses and radii. We demonstrate that the model provides a good fit to these observations.
G. G. L. Nashed
2023-05-16T06:07:01Z
http://arxiv.org/abs/2305.09694v1
# Isotropic stellar model in mimetic theory ###### Abstract We investigate how to derive an isotropic stellar model in the framework of mimetic gravitational theory. Recently, this theory has gained big interest due to its difference from Einstein's general relativity (GR), especially in the domain non-vacuum solutions. In this regard, we apply the field equation of mimetic gravitational theory to a spherically symmetric ansatz and obtain an over determined system of non-linear differential equations in which differential equations are less than the unknown functions. To overcome the over determined system we suppose a specific form of the temporal component of the metric potential, \(g_{tt}\), and assume the vanishing of the anisotropic condition to derive the form of the spatial component of the metric, \(g_{rr}\). In this regard, we discuss the possibility to derive a stellar isotropic model that is in agreement with observed pulsars. To examine the stability of the isotropic model we use the Tolman-Oppenheimer-Volkoff equation and the adiabatic index. Furthermore, we assess the model's validity by evaluating its compatibility with a broad range of observed pulsar masses and radii. We demonstrate that the model provides a good fit to these observations. Introduction The theory of General Relativity (GR) was constructed by Einstein in (1915) and is considered one of the basic theories of modern physics as well as the quantum field theory [1]. Up to date, GR has approved many successful tests in experimental as well as observational like gravitational time dilation, bending of light, the precession of the Mercury orbit, gravitational lensing, etc [2], and the discovery of the gravitational waves [3]. In spite the huge progress of GR, it endures investigating the issues of cosmological observations like the flat galaxy's rotation curves (dark matter), the black holes singularities as well as the accelerated expansion era of the universe (dark energy). Thus, new components of matter-energy or modified theories of gravity should be proposed to investigate the observed events. Mimetic gravitational theory is a scalar-tensor one where the conformal mode can be isolated through a scalar field [4]. On the other hand, we can think of the setup of the mimetic as a special class of general conformal or disformal transformation where the transformation between the new and old metrics is degenerate. Using the non-invertible conformal or disformal transformation one can prove that the number of degrees of freedom can be increased so that the longitudinal mode becomes dynamical [5; 6; 7; 8]. The conformal transformation which relates the auxiliary metric \(\bar{g}_{\alpha\beta}\) to the physical metric \(g_{\alpha\beta}\) and the scalar field is defined as: \[g_{\alpha\beta}=\pm\left(\bar{g}^{\mu\nu}\partial_{\mu}\zeta\partial_{\nu} \zeta\right)\bar{g}_{\alpha\beta}\,. \tag{1}\] We stress that the physical metric \(g_{\alpha\beta}\) is invariant using the conformal transformation of the auxiliary metric \(\bar{g}^{\alpha\beta}\). This invariance fixes in a unique way the form of the conformal factor w.r.t. the auxiliary metric \(\bar{g}^{\mu\nu}\) and the scalar field \(\zeta\) however such transformation cannot fix the sign. Equation (1) yields that the following condition: \[g^{\alpha\beta}\partial_{\alpha}\zeta\partial_{\beta}\zeta=\pm 1\,. \tag{2}\] Equation (2) shows that \(\partial_{\beta}\zeta\) is a timelike for the \(-\) sign and spacelike for the \(+\) sign. The \(-\) sign in Eqs. 
(1) and (2) is the original sign of standard mimetic gravity [4], whereas the \(+\) sign is a generalization of mimetic gravity. An amended type of mimetic gravity can address the cosmological singularities [9] and the singularity in the core of a black hole [10]. Furthermore, the original formulation of mimetic theory guarantees that gravitational waves (GW) travel at the speed of light, consistent with recent findings such as the event GW170817 and its optical counterpart [11; 12; 13]. Moreover, mimetic theory can account for the flat rotation curves of spiral galaxies without the need for dark matter [14; 15]. From a cosmological point of view, mimetic theory has been discussed in many interesting research papers in the past few years [16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34], as well as in black hole physics [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53]. It has been shown that for a spherically symmetric spacetime the only vacuum solution is the Schwarzschild spacetime, which means that Birkhoff's theorem holds. Moreover, mimetic theory has been extended to \(f(R)\) gravity [54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67] and to the Gauss-Bonnet gravitational theory [68; 69; 70; 71; 72]. More specifically, a unified scenario of early inflation and late-time acceleration in the framework of mimetic \(f(R)\) gravity was formulated in [73]. Moreover, it was shown that the inflationary epoch can be discussed in the frame of mimetic \(f(R)\) gravity [73]. In the present study we discuss interior spherically symmetric solutions within mimetic gravitational theory1. Because of the non-trivial contribution of the mimetic field \(\zeta\), the Einstein field equation becomes: Footnote 1: Here in this study we will take the sign of Eq. (2) as the one used in the original mimetic theory. \[G_{\mu\nu}+\frac{1}{2}g_{\mu\nu}(\partial^{\mu}\zeta\partial^{\nu}\zeta+1)= \kappa T_{\mu\nu}\,, \tag{3}\] where \(T_{\mu\nu}\) is the energy-momentum tensor, \(\kappa=\frac{8\pi\,G}{c^{4}}\) is Einstein's gravitational constant, and \(G_{\mu\nu}\) is the Einstein tensor defined as: \[G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R\,, \tag{4}\] where \(R_{\mu\nu}\) is the Ricci tensor defined as: \[R_{\mu\nu}=R^{\alpha}{}_{\mu\alpha\nu}=\Gamma^{\alpha}{}_{\mu\nu,\alpha}- \Gamma^{\alpha}{}_{\mu\alpha,\nu}+\Gamma^{\alpha}{}_{\nu\mu}\Gamma^{\beta}{}_ {\alpha\beta}-\Gamma^{\alpha}{}_{\nu\beta}\Gamma^{\beta}{}_{\alpha\mu}\,,\] with \(\Gamma^{\alpha}{}_{\nu\mu}\) being the Christoffel symbols of the second kind, \(R^{\alpha}{}_{\mu\alpha\nu}\) the Riemann tensor of the fourth order, and \(R\) the Ricci scalar defined as \(R=g^{\mu\nu}R_{\mu\nu}\). Equation (3) coincides with Einstein's GR when the scalar field has a constant value, i.e., \(\partial^{\mu}\zeta\partial^{\nu}\zeta+1=0\). There are many applications of mimetic theory in cosmology as well as in the solar system [see for example 74; 75; 76; 77; 78]. The current study is structured as follows: In Section II, we utilize mimetic field equations, specifically equation (3), to analyze a spherically symmetric object with an anisotropic matter source. This results in a system of three nonlinear differential equations with five unknown functions, including two metric potentials, energy density, radial pressure, and tangential pressure. 
To close the system, we impose two additional constraints: we assume a specific form for one of the metric potentials, \(g_{tt}\), which is commonly done in interior solutions, and we assume the vanishing of anisotropy and derive the form of the spatial component of the metric potential, \(g_{rr}\). Collecting this information, we obtain the analytic expressions for the energy density and pressure that satisfy the mimetic equation of motion. In a subsection of Section II, we delineate the physical requirements that any isotropic stellar model must meet to be in agreement with a genuine compact star. In Section III, we discuss the applicability of the derived solution under the conditions presented in Section II. In Section IV, we match our model with the exterior vacuum Schwarzschild solution and adjust the model parameters based on the properties of the pulsar _Cen X-3_, which has a mass estimate of \(M=1.49\pm 0.49\,M_{\odot}\) and a radius of \(R=9.178\pm 0.13\) km. In Section V, we investigate the stability of the model using the TOV equation of hydrostatic equilibrium and the adiabatic index. Finally, we summarize our findings in Section VI. ## II Spherically symmetric interior solution To be able to derive an interior solution we will use a spherically symmetric spacetime, which makes the calculations and discussion easier. For this aim, we assume the spherically symmetric spacetime to have the form: \[ds^{2}=E^{2}(r)dt^{2}-\frac{1}{E_{1}(r)}\,dr^{2}-r^{2}(d\theta^{2}+\sin^{2} \theta d\phi^{2})\,, \tag{5}\] where \(E(r)\) and \(E_{1}(r)\) are unknown functions. When \(E=E_{1}\) one can recover the Schwarzschild solution of exterior Einstein GR. Using Eq. (5), we get the Ricci tensor and Ricci scalar in the form: \[\mathcal{R}_{tt}(r)=\frac{E(2rE_{1}E^{\prime\prime}+E^{\prime}[rE _{1}^{\prime}+4E_{1}])}{2r}\,,\] \[\mathcal{R}_{rr}(r)=-\frac{2rE_{1}E^{\prime\prime}+E_{1}^{\prime }[rE^{\prime}+2E]}{2EE_{1}r}\,,\] \[\mathcal{R}_{\theta\theta}(r)=-\frac{2rE_{1}E^{\prime}+rEE_{1}^{ \prime}-2E(1-E_{1})}{2E}\,,\qquad\mathcal{R}_{\phi\phi}(r)=\mathcal{R}_{ \theta\theta}(r)\sin^{2}\theta\,,\] \[\mathcal{R}(r)=-\frac{E^{\prime}E_{1}^{\prime}r^{2}+2\,E_{1}E^{ \prime\prime}r^{2}+4\,E_{1}E^{\prime}r+2\,E_{1}^{\prime}Er-2\,E+2\,EE_{1}}{Er ^{2}}\,, \tag{6}\] where \(E\equiv E(r)\), \(E_{1}\equiv E_{1}(r)\), \(E^{\prime}=\frac{dE}{dr}\), \(E^{\prime\prime}=\frac{d^{2}E}{dr^{2}}\) and \(E_{1}^{\prime}=\frac{dE_{1}}{dr}\). Plugging Eq. (5) into Eq. (3) and using Eq. (6), we get the following. The \(t\,t\) component of the mimetic field equation is: \[\rho=\frac{1-rE_{1}^{\prime}-E_{1}}{r^{2}}\,,\] the \(r\,r\) component of the mimetic field equation is: \[p=\frac{2\,\zeta^{\prime 2}E_{1}^{\prime}r^{2}E^{\prime\prime}+ \left[r\left(4\,E_{1}+E_{1}^{\prime}r\right)E^{\prime}+E\left(2E_{1}-2+r^{2} \rho-3r^{2}p+2E_{1}^{\prime}r\right)\right]E_{1}\zeta^{\prime 2}+2E^{\prime}rE_{1}+E \left(E_{1}-1\right)}{Er^{2}}\,,\] and the \(\theta\,\theta=\phi\,\phi\) component of the mimetic field equation is: \[p_{1}=\frac{2rE_{1}E^{\prime\prime}+2E_{1}E^{\prime}+E_{1}^{ \prime}(E+rE^{\prime})}{2Er}\,, \tag{7}\] where we have set the Einstein gravitational constant, i.e., \(\kappa\), to unity. For an anisotropic fluid with spherical symmetry, we assume the energy-momentum tensor to take the form \[T^{\alpha}{}_{\beta}=(p_{1}+\rho)u^{\alpha}u_{\beta}+p_{1}\delta_{\beta}^{ \alpha}+(p-p_{1})\chi^{\alpha}\chi_{\beta}\,. 
\tag{8}\] Here, \(\rho=\rho(r)\) represents the energy density of the fluid, \(p=p(r)\) denotes its radial pressure and \(p_{1}=p_{1}(r)\) represents the tangential pressure. As a result, the energy-momentum tensor takes the form \(T^{\alpha}\beta=diag(-\rho,\,p,\,p1,\,p_{1})\). If the mimetic scalar field has a constant value, or \(\zeta=C\), then equations (7) will be equivalent to the interior differential equations of Einstein's general relativity [80; 81].. The differential equations (7) are three non-linear in six unknowns \(E\), \(E_{1}\), \(\rho\), \(p\), \(p_{1}\) and the mimetic field \(\zeta\) which we can fix it form the use of Eq. (2), i.e., \[\zeta=\frac{1}{\sqrt{-E_{1}}}.\] Therefore, to put the above system in a solvable form we need two extra conditions. The first one is to suppose the temporal component of the metric potential \(E\) in the form [82; 83]: \[E(r)=\frac{a_{0}\left(5+4\,a_{1}r^{2}\right)}{\sqrt{1+a_{1}r^{2}}}\,, \tag{9}\] where \(a_{0}\) is a constant that has no dimension and \(a_{1}\) is another constant that has dimension of inverse length square, i.e., \(L^{-2}\). The second condition is the use of r r and \(\theta\,\theta\) components of Eq. (7), i.e., the anisotropy equation, and imposing of Eq. (9) yields: \[E_{1}(r)=\frac{\left(1+2\,a_{1}r^{2}+{a_{1}}^{2}r^{4}\right)}{\varepsilon^{3} }\bigg{\{}[5+6\,a_{1}r^{2}]\varepsilon-4\,a_{1}[1+a_{1}r^{2}]r^{2}\varepsilon _{1}+a_{2}r^{2}+a_{2}\,a_{1}r^{4}\bigg{\}}\,. \tag{10}\] Here \(a_{2}\) is a constant of integration with inverse length square dimension, i.e., \(L^{-2}\), \(\varepsilon=\sqrt{5+12\,a_{1}r^{2}+8\,{a_{1}}^{2}r^{4}}\) and \(\varepsilon_{1}=\operatorname{arctanh}\left(\frac{1+2\,a_{1}r^{2}}{\varepsilon}\right)\). Using Eqs. (9) and (10) in the system of differential Eqs. (7), we obtain the components of the energy-momentum in the form: \[\begin{split}&\rho=\frac{1}{\varepsilon^{6}}\left\{12\,\left(1+ ar^{2}\right)^{2}\left(\left(3a_{1}r^{2}+1\right)\varepsilon^{3}-4\left(3+4a_{1}r^ {2}\right)\left(1+a_{1}r^{2}\right)\varepsilon a_{1}r^{2}\right)a_{1} \varepsilon_{1}-3\,\left(1+3a_{1}r^{2}\right)\left(1+a_{1}r^{2}\right)^{2}a_{2 }\,\varepsilon^{3}\right.\\ &\left.+\bigg{(}12\left(3+4a_{1}r^{2}\right)\left(1+a_{1}r^{2} \right)^{3}r^{2}a_{2}\,\varepsilon+\varepsilon\left(144a_{1}{}^{4}r^{8}+424\, a_{1}{}^{3}r^{6}+486\,{a_{1}}^{2}r^{4}+265\,a_{1}r^{2}+60\right)\bigg{)}a_{1}\right\},\\ &\left.p=\frac{1}{\varepsilon^{6}\left(5+4a_{1}r^{2}\right)} \left\{\left(36{a_{1}}^{2}r^{4}+33\,a_{1}r^{2}+5\right)\left(1+a_{1}r^{2} \right)^{2}a_{2}\,\varepsilon^{3}-4\left(1+ar^{2}\right)^{2}a_{1}\left(12{a_{ 1}}^{2}r^{4}+15\,a_{1}r^{2}+5\right)\varepsilon^{3}\varepsilon_{1}\right.\\ &\left.+\varepsilon^{3}\bigg{(}\varepsilon^{1/2}\left(72{a_{1}}^ {3}r^{6}+190\,{a_{1}}^{2}r^{4}+167\,a_{1}r^{2}+50\right)-6\left(3+4a_{1}r^{2} \right)\left(1+a_{1}r^{2}\right)^{2}r^{2}a_{2}\,\bigg{)}a_{1}\right\}.\end{split} \tag{11}\] The energy density of Eq. (11) is the same as of GR for isotropic solution [82] however, the pressure is different. This difference is due to the contribution of the mimetic scalar field. It should be noted that if the mimetic scalar field is set equal zero in Eq. (7) and solving the system using ansatz (9) we get the form of density and pressure presented in [82]. Moreover, it is important to stress that the use of metric potentials (9) and (10) in the system (7) gives \(p=p_{1}\) which insure the isotropy of our model. 
The mass contained in a sphere of radius \(r\) is given by: \[M(r)=4\pi{\int_{0}^{r}}\rho(\eta)\eta^{2}d\eta\,. \tag{12}\] By employing the expression for the energy density provided in Equation (11) and substituting it into Equation (12), we obtain the asymptotic representation of the mass as: \[M(r)\approx(-0.3139182118a_{1}-0.04472135955a_{2})r^{3}+a_{1}(0.08835092710{a_ {1}}^{2}+0.02683281574a_{1}a_{2})r^{5}\] \[-{a_{1}}^{2}(0.0883509271a_{1}+0.02683281574a_{2})r^{7}+{a_{1}}^{3}(0.031145734 67a_{1}+0.0196773982a_{2})r^{9}\,. \tag{13}\] The compactness parameter of a spherically symmetric source of radius \(r\) is defined as [84; 85]: \[C(r)=\frac{2M(r)}{r}. \tag{14}\] In the next subsection, we present the physical conditions required for a viable isotropic stellar structure and examine whether model (11) satisfies them or not. ### Necessary criteria for a physically viable stellar isotropic model Before we proceed we are going to use the following dimensionless substitution: \[r=xR\,,\] where \(R\) is the radius of the star and \(x\) is a dimensionless variable that equals one when \(r=R\) and zero at the center of the star. Also, we assume the dimensional constants \(a_{1}\) and \(a_{2}\) to take the form: \[a_{1}=\frac{u}{R^{2}}\,,\qquad a_{2}=\frac{w}{R^{2}}\,, \tag{15}\] where \(u\) and \(w\) are dimensionless quantities. By substituting \(a_{1}\), \(a_{2}\) and \(r\) into the physical components of the model, Eqs. (9), (10) and (11), we obtain dimensionless physical quantities. Now we are ready to discuss the necessary criteria that we apply to the isotropic model. A physical isotropic model must verify the following:
\(\bullet\) The metric potentials \(E(x)\) and \(E_{1}(x)\), and \(\rho\) and \(p\), must be well behaved at the core of the stellar object and regular throughout the structure of the star, without singularities.
\(\bullet\) The energy density, denoted as \(\rho\), is required to be positive within the internal structure of the star. Additionally, it should possess a finite positive value and exhibit a monotonically decreasing trend towards the surface of the stellar interior, i.e., \(\frac{d\rho}{dx}\leq 0\).
\(\bullet\) The pressure, denoted as \(p\), must maintain a positive value throughout the fluid structure, meaning \(p\geq 0\). Furthermore, the derivative of the pressure with respect to the spatial variable must be negative, i.e., \(\frac{dp}{dx}<0\), indicating a decreasing pressure profile. Additionally, at the surface \(x=1\) (corresponding to \(r=R\)), the pressure \(p\) should vanish.
\(\bullet\) The energy conditions of an isotropic star require the following inequalities: (i) the weak energy condition (WEC): \(p+\rho>0\), \(\rho>0\); (ii) the dominant energy condition (DEC): \(\rho\geq|p|\); (iii) the strong energy condition (SEC): \(p+\rho>0\), \(\rho+3p>0\).
\(\bullet\) To have a viable model, the causality condition must be verified, i.e., \(v<1\), where \(v\) is the speed of sound.
\(\bullet\) The interior metric potentials, \(E\) and \(E_{1}\), must be joined smoothly to the exterior metric potentials (Schwarzschild metric) at the surface of the star, i.e., \(x=1\).
\(\bullet\) For a true star, the adiabatic index must be greater than \(\frac{4}{3}\).
Now, we are ready to examine the above-listed physical criteria on our model to see if it satisfies all of them or not. 
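For a numerical check of Eqs. (12) and (14), the mass and compactness profiles can be obtained by direct quadrature once a density profile is specified. The sketch below is a minimal illustration in Python; the toy density profile and the geometric units (\(G=c=1\)) are assumptions made for illustration and do not reproduce the exact expressions of Eq. (11).

```python
import numpy as np
from scipy.integrate import quad

def mass_and_compactness(rho, radii):
    """Numerically evaluate the mass of Eq. (12),
    M(r) = 4*pi * int_0^r rho(eta) * eta**2 d(eta),
    and the compactness C(r) = 2*M(r)/r of Eq. (14) on a grid of radii.
    `rho` is any callable density profile; geometric units (G = c = 1)
    are assumed."""
    masses = np.array([4.0 * np.pi * quad(lambda eta: rho(eta) * eta**2, 0.0, r)[0]
                       for r in radii])
    return masses, 2.0 * masses / radii

# Illustrative toy profile only (not the density of Eq. (11)):
# a simple monotonically decreasing density that vanishes at the surface.
R_star = 9.308                                  # km, the radius used for Cen X-3
rho_toy = lambda r: 1.0e-4 * (1.0 - (r / R_star) ** 2)
radii = np.linspace(0.1, R_star, 50)
M, C = mass_and_compactness(rho_toy, radii)
```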
## III The physical behaviors of model (11) ### The free singularity of the model a- The metric potentials given by Eqs (11) and (10) fulfill: \[E_{x\to 0}=25{a_{0}}^{2}\qquad\qquad\text{and}\qquad\qquad E_{1 \,x\to 0}=1\,. \tag{16}\] Equation (16) guarantees that the lapse functions possess finite values at the core of the stellar configuration. Additionally, the derivatives of the metric potentials with respect to x must also have finite values at the core, i.e., \(f^{\prime}(x=0)=f^{\prime}_{1}(x=0)=0\). Equations (16) ensures that the laps functions are regular at the core and have good behavior throughout the center of the star. ii-The density and pressure of Eq. (11) take the following form at core: \[\rho_{{}_{x\to 0}} = \frac{12\,u\sqrt{5}\text{arctanh}\left(1/\sqrt{5}\right)-3\,w \sqrt{5}-60\,u}{25R^{2}}\,,\] \[p_{{}_{x\to 0}} = \frac{50\,u-4\,u\sqrt{5}\text{arctanh}\left(1/\sqrt{5}\right)+w \sqrt{5}}{25R^{2}}\,. \tag{17}\] Equation (III.1) ensures the positivity of density and pressure assuming \[12\,u\sqrt{5}\text{arctanh}\left(1/\sqrt{5}\right)-3\,w\sqrt{5}-60\,u>0\,, \qquad\text{and}\qquad 50\,u-4\,u\sqrt{5}\text{arctanh}\left(1/\sqrt{5} \right)+w\sqrt{5}>0\,.\] Moreover, the Zeldovich condition [86] that connects the density and pressure at the center of the star through the inequality, i.e., \(\frac{p(0)}{\rho(0)}\leq 1\). Applying Zeldovich condition in Eq. (17), we get: \[\frac{50\,u-4\,u\sqrt{5}\text{arctanh}\left(1/\sqrt{5}\right)+w\sqrt{5}}{12\,u \sqrt{5}\text{arctanh}\left(1/\sqrt{5}\right)-3\,w\sqrt{5}-60\,u}\leq 1\,, \tag{18}\] which yields: \[\Rightarrow w\leq\frac{\left[8\sqrt{5}\text{arctanh}\left(1/\sqrt{5}\right)-5 5\right]u}{2\sqrt{5}}\,.\] iii-The derivatives of density, \(\rho\), and pressure, \(p\), of Eq. (11) are respectively: \[\rho^{\prime}=-\frac{2\,xu^{2}}{R^{2}\varepsilon^{7/2}}\bigg{\{}192\,w\,u^{5} x^{10}+192\,\varepsilon_{1}u^{5}x^{10}+1248\,\varepsilon_{1}u^{4}x^{8}-32\, \varepsilon x^{4}u^{4}x^{8}+1248\,w\,u^{4}x^{8}-96\,\varepsilon u^{3}x^{6}+30 48\,w\,u^{3}x^{6}\] \[+3048\,\varepsilon_{1}u^{3}x^{6}-372\,\varepsilon u^{2}x^{4}+337 2\,\varepsilon_{1}u^{2}x^{4}+3372\,w\,u^{2}x^{4}-480\,\varepsilon\,ux^{2}+168 0\,\varepsilon_{1}ux^{2}+1680\,w\,ux^{2}+300\,w+300\,\varepsilon_{1}-175\, \varepsilon\bigg{\}}\,,\] \[p^{\prime}=\frac{2\,xw}{\left(5+4\,ux^{2}\right)^{2}R^{2} \varepsilon^{5/2}}\bigg{\{}96\,w\,u^{5}x^{10}-384\,w^{6}\varepsilon_{1}x^{10 }+64\,u^{5}\zeta x^{8}-1792\,u^{5}\varepsilon_{1}x^{8}+448\,w\,u^{4}x^{8}+16 0\,u^{4}\varepsilon x^{6}-3448\,u^{4}\varepsilon_{1}x^{6}\] \[+862\,w\,u^{3}x^{6}-3340\,u^{3}\varepsilon_{1}x^{4}+244\,u^{3} \varepsilon x^{4}+835\,w\,u^{2}x^{4}-1600\,u^{2}\varepsilon_{1}x^{2}+220\,u^ {2}\varepsilon x^{2}+400\,w\,ux^{2}-300\,u\varepsilon_{1}+75\,u\varepsilon+7 5\,w\bigg{\}}\,, \tag{19}\] where \(\rho^{\prime}=\frac{d\rho}{dx}\) and \(p^{\prime}=\frac{dp_{r}}{dx}\). Fig. 2 (a) shows that the gradients of the components of energy-momentum tensor behave in negative way. 
iv-The speed of sound (when c = 1) yields: \[\begin{split}& v^{2}=\frac{dp}{d\rho}=\frac{\varepsilon^{2}}{ \left(5+4\,ux^{2}\right)^{2}}\bigg{\{}96\,w\,u^{5}x^{10}-384\,u^{6}\varepsilon_ {1}x^{10}+64\,u^{5}\varepsilon x^{8}-1792\,u^{5}\varepsilon_{1}x^{8}+448\,w\,u ^{4}x^{8}+160\,u^{4}\varepsilon x^{6}-3448\,u^{4}\varepsilon_{1}x^{6}\\ +862\,w\,u^{3}x^{6}-3340\,u^{3}\varepsilon_{1}x^{4}+244\,u^{3} \varepsilon x^{4}+835\,w\,u^{2}x^{4}-1600\,u^{2}\varepsilon_{1}x^{2}+220\,u^ {2}\varepsilon x^{2}+400\,w\,ux^{2}-300\,u\varepsilon_{1}+75\,u\varepsilon+7 5\,w\bigg{\}}\\ &\bigg{\{}48\,w\,u^{5}x^{10}-192\,u^{6}\varepsilon_{1}x^{10}+32 \,u^{5}\varepsilon x^{8}-1248\,u^{5}\varepsilon_{1}x^{8}+312\,w\,u^{4}x^{8}+9 6\,u^{4}\varepsilon x^{6}-3048\,u^{4}\varepsilon_{1}x^{6}+75\,w\\ +762\,w\,u^{3}x^{6}-3372\,u^{3}\varepsilon_{1}x^{4}+372\,u^{3} \varepsilon x^{4}+843\,w\,u^{2}x^{4}+480\,u^{2}\varepsilon x^{2}-1680\,u^{2} \varepsilon_{1}x^{2}+420\,w\,ux^{2}+175\,u\varepsilon-300\,u\varepsilon_{1} \bigg{\}}^{-1}\,.\end{split} \tag{20}\] which is less than unity as Fig. 2 (b) shows. ### Junction conditions We make the assumption that the exterior solution of the star is a vacuum, described by the Schwarzschild solution. This is because, in the mimetic theory, the Schwarzschild solution is the only exterior spherically symmetric solution [80; 81]. The form of the Schwarzschild solution is given by2 Footnote 2: The isotropic Schwarzschild solution is given by [87] \[ds^{2}=\frac{(1-M/2r)^{2}}{(1+M/2r)^{2}}dt^{2}-\frac{1}{(1+M/2r)^{4}}\,[dr^{2} +r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2})]\,. \tag{21}\] : The above metric is the one that we use in the junction conditions but due to the nature of the line-element (5), so it is logic to match it with the asymptotic form of line-element (21) which is give by: \[ds^{2}\approx\Big{(}1-\frac{2M}{r}\Big{)}dt^{2}-\Big{(}1+\frac{2M}{r}\Big{)}dr^{2 }-r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2})\,.\] (22) : The form of the Schwarzschild solution is given by [2]: \[ds^{2}=-\Big{(}1-\frac{2M}{r}\Big{)}dt^{2}+\Big{(}1-\frac{2M}{r}\Big{)}dr^{2 }+x^{2}R^{2}d\Omega^{2},\] (23) where \(M\) is the mass of the star. The junction of the laps functions at \(x=1\) gives: \[E(x\to 1)=\left(1-\frac{2M}{R}\right),\qquad\qquad E_{1}(x\to 1)=\left(1-\frac{2M}{R} \right)^{-1}\,, \tag{24}\] in addition to the constrain of the vanishing of pressure at the surface we fix the dimensionless constants of solution (9) and (10) as: \[a_{0}=\pm\frac{\sqrt{R\left(R+Ru^{2}-2\,M-2\,Mu^{2}\right)}}{R \left(5+4\,u^{2}\right)}\,,\] \[u=-\frac{48\,\sqrt{2}M+R\left[19\sqrt{2}-9\,R\,\varrho\right]- \sqrt{M\left[4608\,M-864\,\sqrt{2}R+3648R\right]+\left[351\,\varrho^{2}-1482 \,\varrho\sqrt{2}+2702\right]R^{2}}}{24R\left(3\,\sqrt{2}-\varrho\right)}\,,\] \[w=\frac{u\left[4\left(u+1\right)^{2}\left(12u^{2}+15u+5\right) \varepsilon_{1}-\left(72u^{3}+190\,u^{2}+167\,u+50\right)\varepsilon\right]}{ \left(47\,u^{2}+39\,u^{3}+12\,u^{4}+25\,u+5\right)}\,, \tag{25}\] where \(\varrho=\text{arctanh}\left(\frac{1}{\sqrt{2}}\right)\). ## IV Examination of the model (11) with true compact stars Now, we are ready to use the previously listed conditions in Eq. (11) to examine the physical masses and radii of the stars. To extract more information of the model (11), we use the pulsar Cen X-3 which has mass \(M=1.49\pm 0.49M_{\odot}\) and radius \(R=9.178\pm 0.13\) km, respectively [88]. In this study the value of mass is \(M=1.98M_{\odot}\) and the radius is \(R=9.308km\). 
These conditions fix the dimensionless constants \(a_{0}\), \(u\) and \(w\) as3:

Footnote 3: When the constants \(a_{0}\), \(a_{1}\) and \(a_{2}\) are equal to zero, the density and pressure vanish, and in that case we obtain a vacuum solution, namely the Schwarzschild solution.

\[a_{0}=-0.1509026812\,,\qquad\qquad u=0.1241376545\,,\qquad w=-2.382227207\,. \tag{26}\]

Using the above values of the constants we plot the physical quantities of the model (11). In Figs. 1 (a) and (b) we depict the energy-density and pressure of the star Cen X-3, which show that the density and pressure take positive values, as required for a true stellar configuration; moreover, the density is high at the center and decreases toward the surface of the star. Additionally, Fig. 1 (b) shows that the pressure vanishes at the surface of the star. The behaviors of the density and pressure presented in Figs. 1 (a) and (b) are appropriate for a realistic model.

Figure 1: Plots of (a) the density and (b) the pressure of (11) versus the dimensionless \(x\) using the constants fixed from Cen X-3 [89].

Figure 2 (a) illustrates that both the gradients of density and pressure are negative. Furthermore, Figure 2 (b) demonstrates that the speed of sound is indeed less than unity, which is a necessary condition for a valid stellar model. Additionally, Figures 2 (c), (d), and (e) exhibit the adherence to the energy conditions. Hence, all the criteria associated with the energy conditions are fulfilled within the model configuration of Cen X-3, meeting the requirements for a true, isotropic, physically meaningful stellar model.

In Fig. 3 (a) we depict the EoS parameter against the dimensionless \(x\), which shows a nonlinear behavior. In Fig. 3 (b) we plot the pressure as a function of the density, which also shows a nonlinear behavior due to the isotropy of model (11). Figs. 3 (a) and 3 (b) indicate that the source of the non-linearity of the EoS is not only the mimetic scalar field but also the isotropy of the stellar model under consideration. The mass function given by Eq. (12) is depicted in Fig. 3 (c), which shows that the mass and the compactness are monotonically increasing functions of \(x\) with \(M_{x=0}=0\). Fig. 3 (c) also shows that the maximum value of the compactness of Cen X-3 is 0.00015, which is smaller than the GR value of 0.2035 [82]. Finally, Fig. 3 (d) shows the behavior of the red shift of the star. Bohmer and Harko [90] limited the boundary red-shift to \(Z\leq 5\); the boundary redshift of the model under consideration is found to be 0.278269891.

Figure 2: Plots of (a) gradients of density and pressure, (b) speed of sound, (c) weak, (d) dominant and (e) strong energy conditions of model (11), versus the dimensionless \(x\) using the constants constrained from Cen X-3.

## V Stability of the model

We will examine the matter of stability through two approaches: the Tolman-Oppenheimer-Volkoff (TOV) equation and the adiabatic index.
### Equilibrium using the Tolman-Oppenheimer-Volkoff equation

Now, we discuss the stability of the model (11) by assuming hydrostatic equilibrium through the TOV equation [91; 92], which, as presented in [93], takes the following form for an isotropic model: \[-\frac{M_{g}(x)[\rho(x)+p(x)]E}{x\sqrt{E_{1}}}-\frac{dp}{dx}=0, \tag{27}\] where \(M_{g}(x)\) is the gravitational mass, given by: \[M_{g}(x)=4\pi\int_{0}^{x}\Bigl{(}T_{t}^{\;t}-T_{r}^{\;r}-T_{\theta}^{\;\theta}-T_{\phi}^{\;\phi}\Bigr{)}\eta^{2}E\sqrt{E_{1}}d\eta=\frac{xE^{\prime}\sqrt{E_{1}}}{2E^{2}}\,, \tag{28}\]

Figure 3: Plots of (a) the EoS \(\omega=\frac{p(x)}{\rho(x)}\) versus the dimensionless \(x\) using the constants constrained from Cen X-3, (b) the behavior of the pressure as a function of the energy-density, (c) the behavior of the mass and compactness and (d) the behavior of the red shift.

Inserting Eq. (28) into (27), we get \[-\frac{dp}{dx}-\frac{E^{\prime}[\rho(x)+p(x)]}{2E}=F_{g}+F_{h}=0\,, \tag{29}\] with \(F_{g}=-\frac{E^{\prime}[\rho(x)+p(x)]}{2E}\) and \(F_{h}=-\frac{dp(x)}{dx}\) the gravitational and the hydrostatic forces, respectively. These two forces are plotted in Fig. 4. The TOV equation therefore shows that the pulsar is in stable static equilibrium.

### Adiabatic index

Another way to examine the stability of the model under consideration is to study the stability configuration using the adiabatic index, which is considered an essential test. The adiabatic index \(\Gamma\) is given by [94, 95, 96]: \[\Gamma=\left(\frac{\rho(x)+p(x)}{p(x)}\right)\left(\frac{dp(x)}{d\rho(x)}\right)\,. \tag{30}\] For a stable equilibrium the adiabatic index must satisfy \(\Gamma>\frac{4}{3}\) [97]; for \(\Gamma=\frac{4}{3}\), the isotropic sphere possesses a neutral equilibrium. From Eq. (30), we obtain the adiabatic index of the model (11) as: \[\Gamma=\frac{3\left(1+ux^{2}\right)\left(3+ux^{2}\right)\varepsilon}{\left(5+ux^{2}\right)^{2}}\left\{\left(12u^{2}x^{4}+15ux^{2}+5\right)\left(4u\varepsilon+w\right)\left(1+ux^{2}\right)^{2}\varepsilon+\left(72u^{3}x^{6}+190\,u^{2}x^{4}+167ux^{2}+50\right)\!\varepsilon^{2}u\right\}^{-1}\] \[\times\left(16u^{4}x^{8}+88u^{3}x^{6}+166u^{2}x^{4}+115ux^{2}+25\right)+u\varepsilon\!\left(372u^{2}x^{4}+32u^{4}x^{8}+96u^{3}x^{6}+480ux^{2}+175\right)\!\right\}^{-1}. \tag{31}\]

Figure 4 (b) displays the parameter \(\Gamma\), indicating that its values surpass the threshold of \(4/3\) throughout the interior of the model. This observation confirms that the stability condition is met, as required. In addition to the pulsar Cen X-3, a comparable analysis can be conducted for other pulsars as well. We present concise outcomes for the remaining observed pulsars in Tables 1 and 2.

## VI Discussion and conclusions

In the present study, we have derived an isotropic model of the mimetic gravitational theory, for the first time, without assuming any specific form of the EoS. The construction of this model is based on an assumption for the temporal component of the metric potential and on the vanishing of the anisotropy.

Figure 4: Plots of (a) the TOV equation and (b) the adiabatic index versus the dimensionless \(x\).

The main feature of this model is its dependence on three dimensionless constants, which we fixed through the matching condition with the exterior vacuum solution of this theory, i.e., the Schwarzschild solution [46], and the vanishing of the pressure on the surface of the star.
The physical tests carried out can be summarized as follows: a-The density and pressure must be finite at the center of the stellar configuration, and the pressure must vanish at the surface of the star, Figs. 1 (a) and 1 (b). b-The gradients of the density and pressure are negative, Fig. 2 (a); causality is satisfied, Fig. 2 (b); and the energy conditions are verified, Figs. 2 (c), (d) and (e). c-Moreover, we have shown that the EoS parameter, \(\omega=\frac{p(x)}{\rho(x)}\), as well as the EoS, \(p(\rho)=\omega\rho\), behave in a non-linear form, which is a feature of the isotropic model, Figs. 3 (a) and 3 (b). Furthermore, we have shown that the mass and compactness are increasing functions, and that the red-shift of this model takes the value \(Z=0.2782\) on the star's surface, as shown in Figs. 3 (c) and 3 (d). d-One of the merits of this model is that it satisfies the TOV equation, as shown in Fig. 4 (a), and the behavior of its adiabatic index is shown in Fig. 4 (b).

Additionally, we have examined our model with six other pulsars and derived the numerical values of their constants. Finally, we have derived the numerical values of the density at the center and at the star's surface, the EoS parameter, \(\omega\), at the center and the surface of the star, the strong energy condition, and the red-shift at the surface of the stellar configuration. In Tables 1 and 2, we tabulate all those data.

To conclude, as far as we know, this is the first time an isotropic model has been derived in the framework of the mimetic gravitational theory without assuming any specific form of the EoS. Can this procedure be applied to other modified gravitational theories like \(f(R)\) or \(f(T)\)? This task will be addressed in our coming study.

## Data availability statement

No data are associated with this manuscript.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multicolumn{1}{c}{Star} & Reference & Mass \([M_{\odot}]\) & Radius [km] & \(a_{0}\) & \(u\) & \(w\) \\ \hline Her X-1 & [98] & \(0.85\pm 0.15\) & \(8.1\pm 0.41\) & \(-0.1738264671\) & \(0.1453714459\) & \(-2.764746699\) \\ Cen X-3 & [98] & \(1.49\pm 0.49\) & \(9.178\pm 0.13\) & \(-0.1509026812\) & \(0.1241376545\) & \(-2.382227207\) \\ RX J185635-3754 & [99] & \(0.9\pm 0.2\) & \(\simeq 6\) & \(-0.1583616821\) & \(0.1300851651\) & \(-2.489973265\) \\ 4U1608 - 52 & [100] & \(1.57\pm 0.3\) & \(9.8\pm 1.8\) & \(-0.1637386840\) & \(0.1349013201\) & \(-2.576874315\) \\ EXO 1745-268 & [101] & \(1.65\pm 0.25\) & \(10.5\pm 1.8\) & \(-0.1653378835\) & \(0.1364292317\) & \(-2.604379240\) \\ 4U 1820-30 & [102] & \(1.46\pm 0.2\) & \(11.1\pm 1.8\) & \(-0.1713098595\) & \(0.1425649058\) & \(-2.714525397\) \\ \hline \hline \end{tabular} \end{table} Table 1: Values of model parameters

## Ethics Declarations

Conflict of interest: The author declares that there is no conflict of interests regarding the publication of this paper.
2305.18941
A Game of Competition for Risk
In this study, we present models where participants strategically select their risk levels and earn corresponding rewards, mirroring real-world competition across various sectors. Our analysis starts with a normal form game involving two players in a continuous action space, confirming the existence and uniqueness of a Nash equilibrium and providing an analytical solution. We then extend this analysis to multi-player scenarios, introducing a new numerical algorithm for its calculation. A key novelty of our work lies in using regret minimization algorithms to solve continuous games through discretization. This groundbreaking approach enables us to incorporate additional real-world factors like market frictions and risk correlations among firms. We also experimentally validate that the Nash equilibrium in our model also serves as a correlated equilibrium. Our findings illuminate how market frictions and risk correlations affect strategic risk-taking. We also explore how policy measures can impact risk-taking and its associated rewards, with our model providing broader applicability than the Diamond-Dybvig framework. We make our methodology and open-source code available at https://github.com/louisabraham/cfrgame. Finally, we contribute methodologically by advocating the use of algorithms in economics, shifting focus from finite games to games with continuous action sets. Our study provides a solid framework for analyzing strategic interactions in continuous action games, emphasizing the importance of market frictions, risk correlations, and policy measures in strategic risk-taking dynamics.
Louis Abraham
2023-05-30T11:14:39Z
http://arxiv.org/abs/2305.18941v1
# A Game of Competition for Risk ###### Abstract In this study, we present models where participants strategically select their risk levels and earn corresponding rewards, mirroring real-world competition across various sectors. Our analysis starts with a normal form game involving two players in a continuous action space, confirming the existence and uniqueness of a Nash equilibrium and providing an analytical solution. We then extend this analysis to multi-player scenarios, introducing a new numerical algorithm for its calculation. A key novelty of our work lies in using regret minimization algorithms to solve continuous games through discretization. This groundbreaking approach enables us to incorporate additional real-world factors like market frictions and risk correlations among firms. We also experimentally validate that the Nash equilibrium in our model also serves as a correlated equilibrium. Our findings illuminate how market frictions and risk correlations affect strategic risk-taking. We also explore how policy measures can impact risk-taking and its associated rewards, with our model providing broader applicability than the Diamond-Dybvig framework. We make our methodology and code open-source1. Finally, we contribute methodologically by advocating the use of algorithms in economics, shifting focus from finite games to games with continuous action sets. Our study provides a solid framework for analyzing strategic interactions in continuous action games, emphasizing the importance of market frictions, risk correlations, and policy measures in strategic risk-taking dynamics. Footnote 1: available at [https://github.com/louisabraham/cfrgame](https://github.com/louisabraham/cfrgame) + Footnote †: journal: Games and Economic Behaviour ## 1 Introduction Risk-taking during competition is an everyday occurrence, spanning numerous scenarios from financial markets to environmental policies. In these settings, individuals and organizations must balance the lure of potential rewards against the potential for negative outcomes such as bankruptcy or ecological disasters. Understanding and predicting behaviors in these contexts is crucial for an array of parties, including policymakers, regulators, and investors. Game theory provides a compelling lens for analyzing these situations. It helps model strategic interactions among players and outlines the incentives prompting their actions. Our work focuses on normal form games - situations where each player selects a strategy and earns a payoff based on the collective actions of all players. In this article, we explore continuous models of competition, where players can choose their level of risk, receiving higher rewards for taking on more risk. Nash equilibrium is a key concept for our study. It's a state of stability in the game, where no player sees an advantage in deviating from their chosen strategy. In a normal form game, a Nash equilibrium consists of strategies where each player's strategy is the best response to the strategies of others. This concept is fundamental to game theory and has been widely used in various fields like economics and political science to model strategic behavior (Moulin, 1986; Varoufakis, 2008). Our exploration starts with a straightforward normal form game involving just two players. For this setup, we provide solid proof for both the existence and uniqueness of a Nash equilibrium, and we go further by presenting an analytical solution. 
This simple model serves as our fundamental building block, a starting point that offers a solid base of understanding. Subsequently, we enhance our model to incorporate the complexity of multiple players. This extension allows us to probe deeper into the strategic dynamics in more realistic, multi-actor competitive environments. Even with the additional complexity, we manage to maintain the uniqueness of the Nash equilibrium and solve the game analytically. The third stage of our investigation introduces two vital real-world components: market frictions and risk correlations among firms. We begin by defining these elements in a two-player context, paving the way for more complex scenarios. The final phase of our study marks a significant departure from conventional approaches. Given the complexities introduced by market frictions and risk correlations, we adopt a novel technique: using regret minimization algorithms to discretize and solve our game. This innovation, which opens new vistas in the study of strategic interactions, proves especially valuable in the face of the potentially intractable analytical solutions that these intricate scenarios might present. Our experimental validation establishes that the Nash equilibria in our model also function as correlated equilibria, endorsing the use of correlated equilibria to model strategic behavior. To compute these equilibria, we employ an array of algorithms, prominently featuring regret matching and counterfactual regret minimization, thus highlighting the expanding potential of algorithmic solutions for tackling complex strategic interactions. Next, we examine the impact of penalties and market frictions on strategic behavior and results in our continuous model. We find that penalties reduce both the average risk taken by players and their total rewards. Market frictions, on the other hand, lower average risk but increase total rewards. These frictions have a more significant effect on total rewards in high-penalty environments. In especially inefficient markets with high market frictions, raising penalties can promote cooperation and increase total rewards. We also assess the effects of risk correlations among firms on strategic behavior and performance. We find that players take more risks in negative correlation situations, which boosts their payoff compared to a no-correlation scenario. On the flip side, in positively correlated settings, risk-taking is reduced. The impact on performance varies, being negative in efficient markets but potentially positive in less predictable markets. Our model interestingly aligns with the Diamond-Dybvig framework, where financial institutions can choose a parameter affecting their utility function and their likelihood of bankruptcy. This parallel allows our model to explore situations such as competition among banks over deposit contract interest rates, akin to the dynamic modeled by Diamond and Dybvig. But our model is distinct and more generalized, focusing not on specific financial metrics, but on a broader notion of failure probability, enabling us to explore strategic competition dynamics in a broader array of scenarios beyond banking. Our findings offer valuable insights for policymakers, regulators, and investors who need to understand behavior in competitive, risk-laden situations.
We highlight the significant influence of penalties and market frictions on strategic behavior and outcomes, and show how risk correlations can considerably alter strategic behavior and performance in competitive dynamics. By clarifying these elements, we contribute to the discussion on how to design effective interventions and policies that encourage cooperation and improve outcomes in competitive situations involving risk.

## 2 A simple model of competition for risk

### Description

In this section, we introduce a simple model of competition for risk that serves as a backbone for our study. We consider a situation where two actors, denoted as Player 1 and Player 2, engage in competition by taking actions that make them more attractive to customers but also increase their risk of failure. For example, firms may choose to lower their prices to attract more customers but in doing so, they increase the likelihood of not being able to repay their loans. Similarly, insurance companies may lower their premiums to attract more customers but this comes at the cost of a higher risk of failure. Banks may increase their deposit rates to attract more customers but this also increases their vulnerability to liquidity crises.

In our model, each player directly sets their failure probability, denoted as \(r_{p}\). While this assumption may not be realistic in practice, we note that in many situations, firms use models that map real-world actions, such as setting prices or premiums, to failure probabilities. This mapping is often a monotone function that can be inverted to yield real-world actions from failure probabilities, making our model practical. Based on the failure probabilities set by each player, the players can randomly "lose" the game. In our simple model, this translates into a penalty, denoted as \(P\), being applied. We assume \(P>0\). We assume that the failure events are independent, meaning that each player draws a uniform random variable \(f_{p}\) from the interval \([0,1]\) and fails if \(f_{p}<r_{p}\). We will later introduce correlations between the variables \(f_{p}\) to model real-life situations where correlations may be positive or negative. After the failure events are determined, the players that did not fail compare their risk levels, and the player that played the highest risk level is rewarded with a payoff, denoted as \(R\). Since the game is unchanged when scaling both \(P\) and \(R\), we assume \(R=1\). In the case of ties between risk levels, we consider several ways of resolving them, such as none of the players receiving the reward, the reward being shared equally between them, or the reward being randomly given to one of them. We note that, as shown later, the optimal strategies in our model are described by continuous distributions, which means that the probability of ties is zero. However, when we use discrete action sets to compute approximate Nash equilibria, the action sets can overlap, and we implement the first two variations of resolving ties (the last two are equivalent in expectation). This simple model serves as a foundation for our study, and we will extend it by introducing correlations between the players' failure probabilities and market frictions in subsequent sections.
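To make these mechanics concrete, one round of the two-player game can be simulated directly. The following is a minimal sketch (sharing the reward in case of a tie); the function and parameter names are illustrative and not taken from the cited code base:

```python
import numpy as np

rng = np.random.default_rng(0)

def play_round(r1, r2, P=1.0, R=1.0):
    """Simulate one round: independent failures, penalty P, reward R to the riskiest survivor."""
    f1, f2 = rng.uniform(size=2)
    fail1, fail2 = f1 < r1, f2 < r2
    u1 = -P if fail1 else 0.0
    u2 = -P if fail2 else 0.0
    if not fail1 and not fail2:   # both survive: the highest risk level wins
        if r1 > r2:
            u1 += R
        elif r2 > r1:
            u2 += R
        else:                     # tie: share the reward
            u1 += R / 2
            u2 += R / 2
    elif not fail1:               # only player 1 survives
        u1 += R
    elif not fail2:               # only player 2 survives
        u2 += R
    return u1, u2

payoffs = np.array([play_round(0.3, 0.2) for _ in range(100_000)])
print("empirical expected payoffs:", payoffs.mean(axis=0))
```

The empirical means converge to the closed-form expected utilities derived in the next subsection.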
For two players, assuming \(r_{1}>r_{2}\), the outcome matrix will be: \begin{tabular}{|c|c|c|} \cline{2-3} \multicolumn{1}{c|}{} & \(f_{1}\geq r_{1}\) & \(f_{1}<r_{1}\) \\ \hline \(f_{2}\geq r_{2}\) & \(R=1,0\) & \(-P,1\) \\ \hline \(f_{2}<r_{2}\) & \(1,-P\) & \(-P,-P\) \\ \hline \end{tabular} Each cell contains the rewards to each player. For example, in the upper left cell, no failure happens. Since we assumed \(r_{1}>r_{2}\), player 1 gets \(R=1\) and player 2 gets 0.

### Equivalence to a normal-form game

We can represent the game described above in the framework of extensive-form games (Hart, 1992) by modeling the drawing of the random variables \(f_{p}\) using Chance nodes. Since the outcomes are subject to randomness, it is natural to assume that the actors operate under the expected utility hypothesis, which implies that they possess a von Neumann-Morgenstern utility function (Neumann et al., 1944). Consequently, we can define a normal-form game with payoffs equal to the expected payoffs of the corresponding extensive-form game. By doing so, we can leverage the theory of normal-form games and apply various solution concepts, such as Nash equilibria, to analyze the competition between the actors.

**Proposition 2.1**.: _The expected utilities \(u_{p}\) are computed as follows in the 2-player game:_ \[u_{2}(r_{1},r_{2}) =u_{1}(r_{2},r_{1})\text{ (symmetry)}\] \[u_{1}(r_{1},r_{2}) =r_{2}(1-r_{1})R-r_{1}P+[r_{1}>r_{2}](1-r_{1})(1-r_{2})R\] _where \([\cdot]\) is the Iverson bracket._

Proof.: Player 1 can fail with probability \(r_{1}\), in which case they lose \(P\). If Player 2 loses and Player 1 does not, which happens with probability \(r_{2}(1-r_{1})\), Player 1 wins \(R\). Finally, if none of the players fails, when \(r_{1}>r_{2}\), Player 1 can win \(R\).

It is possible to encompass the shared payoff in case of ties by defining the Iverson bracket to be \(\frac{1}{2}\) when \(r_{1}=r_{2}\). Figure 1 shows what the reward function of Player 1 looks like when Player 2 adopts the fixed strategy \(r_{2}=0.2\).

Figure 1: Reward function

The discontinuity of our game is similar to two games: the War of Attrition game from Smith (1974) and the visibility game from Lotker et al. (2008). In the War of Attrition game, each player independently chooses a time to quit the game. The player who stays in the game for the longest time wins a prize. However, both players incur a cost that increases over time while they are still in the game. In the visibility game, the payoff of each player is the difference with the next player, or 1 for the player that plays the largest move. A major difference between our game and those two games is that we model the probability of failure. This means, for example, that the player taking less risk can still win the reward if the first player fails. However, the structure of our problem and the analytical solution of the Nash equilibrium are similar to Lotker et al. (2008). We name our game the Competition for Risk game and will write it CfR in the rest of the article.

### Nash equilibrium

A Nash equilibrium is a set of strategies, one for each player, such that no player can improve their payoff by unilaterally changing their strategy, given the strategies of the other players. In other words, each player's strategy is the best response to the strategies chosen by the other players. Nash equilibria are important because they provide a way to predict the outcome of a game if each player acts rationally and selfishly.
They can also help explain why certain outcomes occur in real-world situations. In our model of competition for risk, finding Nash equilibria can help us understand how firms, banks, and insurance companies behave when they compete for prices and take different levels of risk. By analyzing the Nash equilibria of our model, we can predict how different players will act and what the resulting outcomes will be. Moreover, we can compare the efficiency of different equilibria and use them as a benchmark to evaluate the performance of different strategies. As in the game of Lotker et al. (2008), we can prove that there is no pure Nash equilibrium, that is, a deterministic optimal strategy.

**Theorem 2.2**.: _The CfR game does not admit any pure Nash equilibrium._

Proof.: Suppose the existence of an equilibrium \(s_{1},s_{2}\). Suppose that \(s_{1}>s_{2}\). Then Player 1 can improve their payoff by playing \(s_{1}-\varepsilon\) since they still get the reward and take less risk. By symmetry, this implies that \(s_{1}=s_{2}\). If \(s_{1}<1\), then player 1 can improve their situation by playing \(s_{1}+\varepsilon\) since they get \(R\) (or \(\frac{R}{2}\) if the reward is shared). If \(s_{1}=1\) then the payoff is \(-P<0\) with probability 1 so it is better to play 0 which gives payoff 0 with probability 1.

**Definition 2.1**.: A strategy \(s\) (a pair of strategies) is Pareto optimal if there is no other strategy \(s^{\prime}\) such that \(\forall p,u_{p}(s)\leq u_{p}(s^{\prime})\) and \(\exists p,u_{p}(s)<u_{p}(s^{\prime})\). It is \(\varepsilon\)-Pareto optimal if there is no strategy \(s^{\prime}\) such that \(\forall p,u_{p}(s)\leq u_{p}(s^{\prime})\) and \(\exists p,u_{p}(s)+\varepsilon<u_{p}(s^{\prime})\).

**Remark**.: _If the reward is shared in case of tie, the pure strategy \((0,0)\) gives reward \(\frac{R}{2}\) to each player. This strategy is Pareto-optimal._

**Theorem 2.3**.: _For every \(\varepsilon\), there is an \(\varepsilon\)-Pareto optimal strategy that gives \(\frac{R-\varepsilon}{2}\) to each player._

Proof.: Let us consider the joint mixed strategy where each player plays uniformly at random in the interval \([0,2\varepsilon]\). The payoff is \[\mathbb{E}[u_{1}] =\mathbb{E}\left[r_{2}(1-r_{1})R-r_{1}P+[r_{1}>r_{2}](1-r_{1})(1-r_{2})R\right]\] \[=\varepsilon(1-\varepsilon)R-\varepsilon P+\frac{(1-\varepsilon)^{2}}{2}R\] \[\rightarrow_{\varepsilon\to 0}\frac{R}{2}\] so by taking \(\varepsilon\) small enough we can get as close to \(\frac{R}{2}\) as we want. If each player gets payoff \(\frac{R-\varepsilon}{2}\) then no player can get \(\varepsilon\) more without degrading the other's performance, else the total payoff would be more than \(R\).

However, the \(\varepsilon\)-Pareto strategy is highly concentrated around \(0\), incentivizing players to deviate and increase their chances of winning \(R\) without taking on additional risk. Thus, this strategy fails to form a Nash equilibrium. Fortunately, the CfR game possesses a unique Nash equilibrium, a powerful property that showcases the strength of our approach. Moreover, this equilibrium is symmetric. For finite games, Nash (1950) proved the existence of mixed Nash equilibria, while Glicksberg's theorem (Glicksberg, 1952) extended this result to continuous reward functions. Dasgupta and Maskin (1986) established conditions under which discontinuous games can possess Nash equilibria and symmetric games can admit symmetric equilibria.
The uniqueness of the Nash equilibrium is a highly desirable property, with most models using concave reward functions to ensure it. Therefore, it is noteworthy that the CfR game exhibits a unique Nash equilibrium. We recall Theorem 2.1 from Lotker et al. (2008):

**Theorem 2.4**.: _Let \((f_{1},\ldots,f_{n})\) be a Nash equilibrium point, with expected payoff \(u_{i}^{*}\) to Player \(i\) at the equilibrium point. Let \(u_{i}(x)\) (as an abuse of notation) denote the expected payoff for Player \(i\) when he plays the pure strategy \(x\) and all other players play their equilibrium mixed strategy. Then \(u_{i}(x)\leq u_{i}^{*}\) for all \(x\in[0,1]\), and furthermore, there exists a set \(\mathcal{Z}\) of measure \(0\) such that \(u_{i}(x)=u_{i}^{*}\) for all \(x\in support(f_{i})\setminus\mathcal{Z}\)._

This theorem means that at the Nash equilibrium, almost any move that is in the support of a player's strategy should give them the same (maximal) payoff. This theorem is crucial to find the equilibrium in the CfR game.

**Theorem 2.5**.: _Up to a set of measure zero, the CfR game admits a unique Nash equilibrium. This equilibrium is symmetric and its distribution is \(f(x)=\left[x<1-\sqrt{\frac{k-1}{k+1}}\right]\frac{k-1}{(1-x)^{3}}\) with \(k:=\sqrt{(P+1)^{2}+1}\). The average move is \(\bar{r}=k-(P+1)\) and the utility of each player is \(u^{*}=\bar{r}\)._

Proof.: See Appendix A for a full proof. For a less rigorous treatment, refer to the proof of the more general Theorem 3.1.

At \(P=1\), the cutoff value is \(1-\sqrt{\frac{\sqrt{5}-1}{\sqrt{5}+1}}=2-\phi\approx 0.382\) with \(\phi\) the Golden ratio. We plot the distribution in Figure 2. The behavior of the cutoff \(r_{max}\) is displayed in Figure 3. Unsurprisingly, when \(P\to\infty\), the penalty becomes much larger than the reward and the players play closer to \(0\). The case when \(P\to 0\) is more surprising: the maximal cutoff value at \(P=0\) is \(h=1-\sqrt{\frac{\sqrt{2}}{\sqrt{2}+2}}\approx 0.356\). This is because even if the penalty is \(0\), the players cannot get the reward if they "lose", which prevents them from taking too much risk.

Figure 2: Nash equilibrium

## 3 Generalization to multiple players

Quite naturally, we wonder what the Nash equilibrium looks like for multiple players. The visibility game of Lotker et al. (2008) probably does not admit an analytical solution and they instead give an algorithm to produce approximate solutions. We show that the CfR game for multiple players admits a unique symmetric equilibrium and present a new numerical algorithm to compute it. We finally study the asymptotic behavior of the equilibrium.

### Nash equilibrium

Interestingly, our Competition for Risk game admits an analytical solution even for multiple players. More precisely:

**Theorem 3.1**.: _There is a unique symmetric Nash equilibrium in the CfR game with \(n\) players defined by_ \[f(x)=\frac{P+w}{(n-1)(1-x)^{2+\frac{1}{n-1}}(Px+w)^{1-\frac{1}{n-1}}}\] _for some constants \(r_{max}\) and \(w:=\bar{r}^{n-1}\) (the probability of winning when taking no risk) such that_ \[\int_{0}^{r_{max}}f(x)dx =1\] \[\int_{0}^{r_{max}}xf(x)dx =\bar{r}\]

Figure 3: The cutoff goes to zero when \(P\to\infty\).

Proof.: We adapt the proof of Theorem 2.5 and start by assuming the existence of a symmetric mixed equilibrium defined by the probability density \(f\).
First we derive a nice expression for \(u(x)\), defined as the utility of one player choosing move \(x\) while the others play according to \(f\). For all \(x\in support(f)\): \[u(x)=-xP+(1-x)\left(\int_{0}^{x}f(y)dy+\int_{x}^{1}yf(y)dy\right)^{n-1}\] This equation is quite natural: the player loses \(P\) with probability \(x\). If they survive, with probability \(1-x\), they need the \(n-1\) other players to either play a lower value or play a higher value and fail. We can suppose as previously that \(0\) is in the support to subtract \(u(0)\). We write \(\bar{r}\) for the expectation of the action \(r\) under \(f\). \[\left(\frac{\bar{r}^{n-1}+xP}{1-x}\right)^{\frac{1}{n-1}}=\int_{0}^{x}f(y)dy+\int_{x}^{1}yf(y)dy\] We define \(w:=\bar{r}^{n-1}\) to be the probability of winning when taking no risk, we differentiate and divide by \(1-x\) to obtain: \[f(x)=\frac{P+w}{(n-1)(1-x)^{2+\frac{1}{n-1}}(Px+w)^{1-\frac{1}{n-1}}}\] Finally we can solve \(\int_{0}^{r_{max}}f(x)dx=1\) and \(\int_{0}^{r_{max}}xf(x)dx=\bar{r}\).

We relegate the description of the numerical estimation of \(r_{max}\) and \(w\) to Appendix B. We display the behavior of the solution for multiple players in Figure 4.

### Asymptotic behavior

We are interested in studying the equilibrium when the number of players goes to infinity. For fixed \(P\), we have the following:

**Proposition 3.2**.: _When \(n\to\infty\), \(\lim r_{max}=\frac{1}{1+P}\) and \(\bar{r}\sim\frac{1}{nP}\)_

Proof.: We verify experimentally that \(r_{max}\) is never close to \(0\) or \(1\) and that \(\bar{r}\to 0\). Equation B.1 gives \[\frac{w+nP(1-r_{max})+Pr_{max}}{n(1-r_{max})(P+w)}\sqrt[n-1]{\frac{Pr_{max}+w}{1-r_{max}}}=1+\frac{w+nP}{n(P+w)}\bar{r}\] \[\sqrt[n-1]{\frac{Pr_{max}+w}{1-r_{max}}}\to 1\] Equation B.2 gives \[\frac{w-nw(1-r_{max})+Pr_{max}}{n(1-r_{max})(P+w)}\sqrt[n-1]{\frac{Pr_{max}+w}{1-r_{max}}}=\bar{r}\frac{w+nP}{n(P+w)}\] \[\frac{w}{P}+\frac{r_{max}}{n(1-r_{max})}\sim\frac{r_{max}}{n(1-r_{max})}\sim\bar{r}\] using \(w=\bar{r}^{n-1}=o(\bar{r})\). \(\sqrt[n-1]{\frac{Pr_{max}+w}{1-r_{max}}}\to 1\) implies \(\frac{r_{max}}{1-r_{max}}\rightarrow\frac{1}{P}\) and \(r_{max}\rightarrow\frac{1}{1+P}\). Finally, \(\bar{r}\sim\frac{1}{nP}\).

We illustrate this behavior in Figure 5. A common concept in game theory is the price of anarchy \(PoA\) (Koutsoupias and Papadimitriou, 1999). The price of anarchy is the ratio between the Pareto optimum and the Nash equilibrium. It is easy to generalize Theorem 2.3 for multiple players and show that the reward can be split almost perfectly to obtain a Pareto optimal utility \(\frac{R}{n}\). The utility of our symmetric equilibrium is \(R\bar{r}^{n-1}=Rw\). Hence, \(PoA=1/nw\). We will instead compute the efficiency \(E=\frac{1}{PoA}=nw\in[0,1]\). We observe that when \(P=\frac{1}{n^{e}}\) with \(e\geq 0\), the efficiency \(E=nw\) of the Nash equilibrium goes to \(0\) if \(e\leq 1\) and it goes to \(1\) if \(e>1\). We plot the behavior of \(E\) in Figure 6. We interpret this as an indication that resources, here modeled by the ratio \(\frac{1}{P}=\frac{R}{P}\) of rewards to penalties, need to scale faster than the number of players for them to adopt an efficient behavior. Scarcity of resources creates an inefficient Nash equilibrium.

Figure 4: We observe a clear difference between the cases \(P=0\) (no penalty) and \(P=1\) (presence of a penalty). In both cases, the cutoff increases. However, the average risk seems to decrease sharply when there is a nonzero penalty, with a mode at \(r=0\).

Figure 6: The efficiency clearly goes to 0 even when \(e=1\). When \(e>1\), it seems that \(E\to 1\). Values for greater values of \(n\) suffer from numerical precision issues as \(\log w\to 0\).
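The numerical estimation of \(r_{max}\) and \(w\) deferred to Appendix B amounts to solving the two integral conditions of Theorem 3.1. The following is a minimal sketch using off-the-shelf SciPy routines rather than the specific algorithm of Appendix B; it reproduces the two-player values of Theorem 2.5 as a sanity check:

```python
from scipy.integrate import quad
from scipy.optimize import fsolve

def solve_equilibrium(n, P):
    """Solve for (r_max, w) in the symmetric equilibrium of Theorem 3.1."""
    def f(x, w):
        return (P + w) / ((n - 1) * (1 - x) ** (2 + 1 / (n - 1))
                          * (P * x + w) ** (1 - 1 / (n - 1)))

    def residuals(params):
        r_max, w = params
        mass = quad(f, 0, r_max, args=(w,))[0] - 1                            # density integrates to 1
        mean = quad(lambda x: x * f(x, w), 0, r_max)[0] - w ** (1 / (n - 1))  # mean equals r_bar
        return mass, mean

    return fsolve(residuals, x0=[0.3, 0.2])

r_max, w = solve_equilibrium(n=2, P=1.0)
print(r_max, w)  # ~0.382 and ~0.236, i.e. 2 - phi and sqrt(5) - 2 as in Theorem 2.5
```

For larger \(n\), \(w=\bar{r}^{n-1}\) becomes tiny, and a logarithmic parametrization (or the procedure of Appendix B) is numerically safer.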
## 4 Extensions of the Competition for Risk Game

### Market frictions

One limitation of our model is the assumption that the utility functions are discontinuous at a certain threshold. While this is appropriate for certain scenarios such as call for bids, it may not hold in other real-life situations that involve noisy evaluations or aggregate many individual choices. To address this limitation, we propose replacing the threshold \([r_{1}>r_{2}]\) with a smooth choice model using the logistic function, \(\sigma_{\tau}(r_{1}-r_{2})\), where \(\sigma_{\tau}\) is the scaled sigmoid: \[\sigma_{\tau}(x):=\frac{1}{1+\exp\left(-\frac{x}{\tau}\right)}\] Recall that the failure events are \(f_{p}<r_{p}\). For a game between two players, the expected utilities then take the following form:

**Proposition 4.1**.: _The expected utilities \(u_{p}\) for the 2-player game with frictions are computed as follows:_ \[u_{2}(r_{1},r_{2}) =u_{1}(r_{2},r_{1})\text{ (symmetry)}\] \[u_{1}(r_{1},r_{2}) =r_{2}(1-r_{1})R-r_{1}P+(1-r_{1})(1-r_{2})\sigma_{\tau}(r_{1}-r_{2})R\]

As \(\tau\to 0\), \(\sigma_{\tau}\) approaches the Heaviside step function and market frictions disappear.

### Correlation between risks

In the real world, risks are often correlated, which is not accounted for in our current model. To incorporate correlation between risks, we can introduce joint distributions for the failure events \(f_{p}\), which occur according to latent variables. In our model, we assume that \(f_{p}\) follows a uniform distribution. To introduce correlation between \(f_{1}\) and \(f_{2}\), we use the well-known NORTA (NORmal To Anything) method (Cario and Nelson, 1997). This method allows us to create a joint distribution \((f_{1},f_{2})\) such that the marginals are uniform distributions and the Pearson correlation between \(f_{1}\) and \(f_{2}\) can be set to any arbitrary value. Following NORTA, we define \(f_{p}=\Phi(z_{p})\), where \(\Phi\) is the cumulative distribution function of the Normal distribution, and \[\begin{pmatrix}z_{1}\\ z_{2}\end{pmatrix}\sim\mathcal{N}\left(\mu,\Sigma\right)\] with \(\mu=\begin{pmatrix}0\\ 0\end{pmatrix}\) and \(\Sigma=\begin{pmatrix}1&\rho(z_{1},z_{2})\\ \rho(z_{1},z_{2})&1\end{pmatrix}\). Here, \(\rho\) is the Pearson correlation coefficient between \(z_{1}\) and \(z_{2}\), which determines the correlation between \(f_{1}\) and \(f_{2}\). As shown in Cario and Nelson (1997), specifying the correlation between the \(z_{p}\) is equivalent to specifying \(\rho(f_{1},f_{2})\). Specifically, we have \[\rho(f_{1},f_{2})=\frac{6}{\pi}\sin^{-1}\left(\frac{\rho(z_{1},z_{2})}{2}\right)\] Hence, we use \(\rho\) to denote \(\rho(z_{1},z_{2})\) throughout the rest of the document. This model is well-suited to real-world scenarios, such as financial portfolios, where \(z_{p}\) can represent the returns on investments. In such cases, joint distributions of portfolios are typically modeled as multivariate normal distributions, and \(r_{p}\) corresponds to the Value at Risk \(v_{p}=\Phi^{-1}(r_{p})\) through the bijective function \(\Phi\), such that the failure event \(z_{p}<v_{p}\) is equivalent to \(f_{p}<r_{p}\).
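A minimal sketch of this NORTA construction (the sampler below is illustrative and not taken from the cited code base); it also checks the correlation identity above and estimates a joint failure frequency empirically:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def sample_failure_variables(rho, size):
    """Draw correlated uniform variables (f1, f2) via NORTA."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=size)
    return norm.cdf(z)  # uniform marginals with correlation (6/pi) arcsin(rho/2)

rho, r1, r2 = 0.6, 0.3, 0.2
f = sample_failure_variables(rho, 500_000)
print("corr(f1, f2), empirical:", np.corrcoef(f[:, 0], f[:, 1])[0, 1])
print("corr(f1, f2), predicted:", 6 / np.pi * np.arcsin(rho / 2))
print("joint failure frequency:", np.mean((f[:, 0] < r1) & (f[:, 1] < r2)))
```

The last line estimates the probability that both players fail, which is exactly the quantity entering the expected utilities of the next proposition.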
**Proposition 4.2**.: _The expected utilities \(u_{p}\) for the 2-player game with frictions and correlated risks are computed as follows:_ \[u_{2}(r_{1},r_{2}) =u_{1}(r_{2},r_{1})\text{ (symmetry)} \tag{1}\] \[u_{1}(r_{1},r_{2}) =(r_{2}-\tilde{r})R-r_{1}P+(1-r_{1}-r_{2}+\tilde{r})\sigma_{\tau}(r_{1}-r_{2})R \tag{2}\] _where \(\tilde{r}:=\Phi_{\rho}(\Phi^{-1}(r_{1}),\Phi^{-1}(r_{2}))\) is the probability of joint failure, with_ \[\Phi_{\rho}(v_{1},v_{2})=\frac{1}{2\pi\sqrt{1-\rho^{2}}}\int_{-\infty}^{v_{1}}\int_{-\infty}^{v_{2}}\exp\left(-\frac{x^{2}-2\rho xy+y^{2}}{2(1-\rho^{2})}\right)dy\ dx\] _the cumulative distribution of the bivariate normal distribution with correlation \(\rho\)._

In the absence of noise, when \(\rho=\pm 1\), it is also possible to calculate the Nash equilibrium analytically:

**Theorem 4.3**.: _For \(\rho=1\), the equilibrium is given by:_ \[p(x)=\frac{1+P}{1-x}\left[x<1-\exp\left(-\frac{1}{P+1}\right)\right]\] _We have_ \[\bar{r}=1-(P+1)\left(1-\exp\left(-\frac{1}{P+1}\right)\right)\] _For \(\rho=-1\), the equilibrium is given by:_ \[p(x)=\frac{P}{(1-2x)^{3/2}}\left[x<\frac{1}{2}-\frac{P^{2}}{2(P+1)^{2}}\right]\] _We have_ \[\bar{r}=\frac{1}{2P+2}\]

Proof.: See Appendix C.

## 5 Computing approximate Nash equilibrium

### Approximations to games and equilibria

In this section, we define some key concepts and metrics related to games and equilibria. For a given game with \(n\) players, we use \(u_{i}(\sigma)\) to denote the reward of player \(i\) when all players follow the strategy \(\sigma=(\sigma_{1},\ldots,\sigma_{n})\). A strategy \(\sigma\) is said to be a Nash equilibrium if it satisfies the following condition for all players \(i\) and all alternative strategies \(\sigma_{i}^{\prime}\in\Sigma_{i}\): \[u_{i}(\sigma)\geq u_{i}(\sigma_{i}^{\prime},\sigma_{-i})\] where \(\sigma_{-i}\) is the strategy of all players but \(i\), and \(\Sigma_{i}\) is the set of actions available to player \(i\). A game is said to be continuous if the action space \(\Sigma_{i}\) is compact and \(u_{i}\) is continuous. In such games, it is possible to approximate the Nash equilibria using a sequence of games over a reduced finite support, which leads to Glicksberg's theorem, without relying on Kakutani's theorem (Myerson, 1997). In our CfR game, which has a few points of discontinuity, it is also possible to approximate the Nash equilibria using a similar method. However, we do not provide a proof of this here, as the introduction of frictions makes our game continuous anyway. To measure the closeness of a strategy \(\sigma\) to a Nash equilibrium, we use the NashConv metric (Lanctot et al., 2017): \[\textsc{NashConv}(\sigma)=\sum_{i=1}^{n}\max_{s_{i}\in\Sigma_{i}}u_{i}(s_{i},\sigma_{-i})-u_{i}(\sigma)\] Note that this metric only considers pure strategies \(s_{i}\in\Sigma_{i}\), due to the linearity of the payoff function for mixed strategies. The NashConv metric satisfies NashConv\((\sigma)\geq 0\), with equality holding only for a Nash equilibrium. This implies that NashConv\((\sigma)\) corresponds to the notion of \(\varepsilon\)-Nash equilibrium, where an \(\varepsilon\)-Nash equilibrium \(\sigma\) has NashConv\((\sigma)=n\varepsilon\). For a finite action space, NashConv is easy to compute since \(\Sigma_{i}\) is finite. However, for a continuous action space, no such metric is known. Nonetheless, we can approximate NashConv by taking the maximum over a finite sample of points from \(\Sigma_{i}\).
This sample can be chosen randomly, or if \(\Sigma_{i}\) is an interval or a product of intervals of \(\mathbb{R}\), we can use a grid. In our CfR game, the action space is \([0,1]\). Here, we use quasi-random numbers to measure the closeness to a Nash equilibrium, inspired by the literature on hyperparameter sampling (Bousquet et al., 2017) and the efficiency of quasi-Monte-Carlo methods (Sobol', 1990). Specifically, we define the QuasiNashConv metric as: \[\textsc{QuasiNashConv}(\sigma,m)=\sum_{i=1}^{n}\max_{s_{i}\in\textsc{Sobol}(m) }u_{i}(s_{i},\sigma_{-i})-u_{i}(\sigma)\] where Sobol\((m)\) is a set of \(m\) quasi random numbers drawn using Sobol's method (Sobol', 1967). ### Correlated Equilibria A Nash equilibrium is a set of strategies where no player can improve their payoff by unilaterally changing their strategy, assuming that all other players' strategies remain unchanged. However, in some games, players may benefit from coordinating their actions in ways not captured by traditional Nash equilibrium. This is where the concept of correlated equilibrium comes in. A correlated Nash equilibrium is a set of correlated strategies where no player can improve their expected payoff by unilaterally changing their strategy, given that they observe the correlation signal. This correlation signal is not necessarily a message or communication between the players, but rather a shared random variable that affects each player's strategy consistently. **Definition 5.1**.: A correlated Nash equilibrium is a joint distribution \(\sigma\) over all moves \(\Sigma_{1}\times\Sigma_{2}\times\ldots\times\Sigma_{n}\) such that for any player \(i\) and any strategy modification \(\phi:\Sigma_{i}\rightarrow\Sigma_{i}\), \[u_{i}(\sigma_{i},\sigma_{-i})\geq u_{i}(\phi(\sigma_{i}),\sigma_{-i})\] Thus, a Nash equilibrium can be viewed as a correlated Nash equilibrium that can be decomposed into independent strategies for each player. It is evident that any Nash equilibrium is a correlated Nash equilibrium. Correlated equilibria are more suitable for the real world because they allow for a broader range of possible outcomes that can arise through coordination among the players, without necessarily requiring communication or binding agreements between them. In many real-world scenarios, it is challenging or impossible for players to communicate and make binding agreements, or they may not have complete information about the strategies of the other players. Correlated equilibria provide a way for players to achieve coordination and cooperation without requiring such communication or information, by relying on shared random variables that affect each player's strategies consistently. Finally, correlated equilibria can also capture situations where players have some degree of trust or social norms that encourage them to coordinate their actions in a specific way. For instance, in a repeated game where players interact with each other over a long period, they may develop a sense of reciprocity or reputation that encourages them to follow a certain coordinated strategy. ### Finding Correlated Equilibria with Linear Solvers Correlated equilibria are of interest because they can be computed more easily for a finite action set. A joint strategy can be represented by a mapping of probabilities: \[\Pr_{\sigma}(s_{1},s_{2},\ldots,s_{n}):=\Pr[\sigma=(s_{1},s_{2},\ldots,s_{n})]\] for all joint actions \((s_{1},s_{2},\ldots,s_{n})\). Therefore, the equation from Definition 5.1 is linear in these probabilities. 
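A minimal sketch of this quasi-random evaluation for the two-player game, using for simplicity the friction-free closed-form utilities of Proposition 2.1 (strict inequality on ties) and exploiting the symmetry of the game; the strategy profile is assumed to be given as probability vectors over a common action grid, and all names are illustrative:

```python
import numpy as np
from scipy.stats import qmc

def u1(r1, r2, P=1.0, R=1.0):
    """Expected utility of Proposition 2.1 (broadcasts over arrays)."""
    return r2 * (1 - r1) * R - r1 * P + (r1 > r2) * (1 - r1) * (1 - r2) * R

def quasi_nash_conv(strat1, strat2, grid, m=2**15, P=1.0, R=1.0, seed=0):
    """Approximate NashConv with Sobol probe points, as in the definition above."""
    probe = qmc.Sobol(d=1, scramble=True, seed=seed).random(m).ravel()
    U = u1(grid[:, None], grid[None, :], P, R)  # utility matrix on the grid
    total = 0.0
    # symmetric game: player 2's utility is u1 with its arguments swapped
    for own, opp in ((strat1, strat2), (strat2, strat1)):
        best = np.max(u1(probe[:, None], grid[None, :], P, R) @ opp)  # best pure deviation
        current = own @ U @ opp                                       # value of the profile
        total += best - current
    return total
```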
An additional constraint is that the probabilities must sum to 1, and all probabilities are constrained to be non-negative. For two players, the equations are: \[\forall(s_{1},s^{\prime}_{1}),\sum_{s_{2}}\Pr_{\sigma}(s_{1},s_{2})u_{1}(s_{1},s_{2})\geq\sum_{s_{2}}\Pr_{\sigma}(s_{1},s_{2})u_{1}(s^{\prime}_{1},s_{2})\] \[\forall(s_{2},s^{\prime}_{2}),\sum_{s_{1}}\Pr_{\sigma}(s_{1},s_{2})u_{2}(s_{1},s_{2})\geq\sum_{s_{1}}\Pr_{\sigma}(s_{1},s_{2})u_{2}(s_{1},s^{\prime}_{2})\] \[\forall(s_{1},s_{2}),\Pr_{\sigma}(s_{1},s_{2})\geq 0\] \[\sum_{s_{1},s_{2}}\Pr_{\sigma}(s_{1},s_{2})=1\] The set of correlated equilibria is thus a convex polytope \(P\). It is possible to find the boundary in any direction using a linear programming solver. It is also possible to check that the correlated equilibrium is unique and is a Nash equilibrium by trying to maximize and minimize each variable over the polytope. If the maximum and minimum are equal for each variable, then the polytope only contains one point. Another method described in Appa (2002) checks the uniqueness of a solution to a linear program by solving a new linear program. However, that method requires a reformulation of the linear program as \(\max cx\) s.t. \(Ax=b,x\geq 0\), which is cumbersome in our case. We propose a simple randomized method (Algorithm 1) that can produce confidence intervals for any confidence level (or p-value).

**Theorem 5.1**.: _Given a polytope \(P\) defined by constraints \(c_{1},\ldots,c_{m}\)_ \[\Pr\left[\textsc{SumDiamSquared}(K,c_{1},\ldots,c_{m})<\varepsilon\right]\leq F_{\chi^{2}}\left(\frac{\varepsilon}{diam(P)},K\right)\] _with \(F_{\chi^{2}}(\cdot,K)\) the cumulative distribution function of the \(\chi^{2}\) distribution with \(K\) degrees of freedom._

```
Algorithm 1: Confidence interval on diam(P)

Input: iterations K, constraints c_1, ..., c_m defining a polytope P in R^n
function SumDiamSquared(K, c_1, ..., c_m)
    for i = 1, ..., K do
        Sample v_j ~ N(0, 1) for j = 1, ..., n
        a_i <- LinProg(v, c)      # min_{x in P} v . x
        b_i <- LinProg(-v, c)     # max_{x in P} v . x
        d_i <- b_i - a_i
    end for
    return sum_i d_i^2
end function

Input: p-value p
function MaxDiameter(p, K, c_1, ..., c_m)
    eps <- SumDiamSquared(K, c_1, ..., c_m)
    q <- Chi2.ppf(p, K)
    d <- eps / q
    return d
end function
```

We used the HiGHS solver (Huangfu and Hall, 2018) to solve the linear optimization subproblems (calls to LinProg). In numerical experiments, we use \(K=5\), confidence \(p=0.95\), and report \[d_{max}:=\textsc{MaxDiameter}(p,K,c_{1},\ldots,c_{m})=\frac{\textsc{SumDiamSquared}(K,c_{1},\ldots,c_{m})}{Q_{\chi^{2}}\left(1-p,K\right)}\] where \(Q_{\chi^{2}}(\cdot,K)\) is the quantile function of the \(\chi^{2}\) distribution with \(K\) degrees of freedom. When the polytope describes probability distributions, we have the bound \(d_{max}\leq 2\). Finally, we make the following trivial remark:

**Proposition 5.2**.: _A correlated equilibrium \(\sigma\) is a Nash equilibrium iff the matrix \((\Pr_{\sigma}(i,j))_{i,j}\) has rank 1._

This gives us another numerical method to check that a correlated equilibrium is a Nash equilibrium: compute the second highest eigenvalue and check that it is 0.
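For small action grids, these constraints can be passed directly to an LP solver. Below is a minimal sketch using SciPy's HiGHS backend; the utility matrices `U1` and `U2` are assumed to be precomputed on the grid (e.g. from Proposition 2.1), it maximizes the total expected reward over the polytope, and it includes the rank-1 check of Proposition 5.2. Names are illustrative and this is not the cited implementation:

```python
import numpy as np
from scipy.optimize import linprog

def correlated_equilibrium(U1, U2):
    """Correlated equilibrium maximizing the total expected reward."""
    a = U1.shape[0]                  # actions per player
    rows = []
    for s1 in range(a):              # incentive constraints of player 1
        for s1p in range(a):
            if s1p == s1:
                continue
            row = np.zeros((a, a))
            row[s1, :] = U1[s1p, :] - U1[s1, :]   # deviation gain must be <= 0
            rows.append(row.ravel())
    for s2 in range(a):              # incentive constraints of player 2
        for s2p in range(a):
            if s2p == s2:
                continue
            row = np.zeros((a, a))
            row[:, s2] = U2[:, s2p] - U2[:, s2]
            rows.append(row.ravel())
    res = linprog(c=-(U1 + U2).ravel(),            # maximize total reward
                  A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=np.ones((1, a * a)), b_eq=[1.0],
                  bounds=(0, None), method="highs")
    return res.x.reshape(a, a)

def eig_ratio(pi):
    """lambda_2 / lambda_1 of Proposition 5.2: 0 iff the joint distribution factorizes."""
    lam = np.sort(np.abs(np.linalg.eigvals(pi)))[::-1]
    return lam[1] / lam[0]
```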
In numerical experiments, we define \(\lambda_{1}\) and \(\lambda_{2}\) as the highest and second highest eigenvalues and report the value \[\lambda:=\frac{\lambda_{2}}{\lambda_{1}}\] ### Related works solving continuous games Our exploration of regret-minimization algorithms in the context of continuous games has led us to a wide range of approaches. From these, a few distinct groups emerge, each characterized by their unique methods, assumptions, and requirements. One group includes the work of Perkins and Leslie (2014), Ganzfried (2021), and Kroupa and Votroubek (2023). Perkins and Leslie (2014) extended stochastic fictitious play to the continuous action space framework, showing convergence to an equilibrium point in two-player zero-sum games. However, this method assumes specific linear or quadratic utility functions, which limits its applicability to a narrow set of games. Similarly, Ganzfried (2021) proposed a novel algorithm for approximating Nash equilibria in continuous games, yet the scalability of their method is a concern due to the storage of all previous moves and the use of mixed integer linear programs for best response computation. Lastly, Kroupa and Votroubek (2023) presented an iterative strategy generation technique for finding mixed strategy equilibria in multiplayer general-sum continuous games, but this method requires an oracle for best response computation, a requirement that may not be met in all practical scenarios. Another group consists of Raghunathan et al. (2019) and Dou et al. (2019). Raghunathan et al. (2019) introduced the Gradient-based Nikaido-Isoda function, a merit function providing error bounds to a stationary Nash point. They showed that gradient descent converges sublinearly to a first-order stationary point of this function, making it a potential method for steady convergence towards equilibrium. Extending this work, Dou et al. (2019) offered a deep learning-based approach for approximating Nash equilibria in continuous games, allowing the finding of mixed equilibria. They utilized the pushforward measure technique to represent mixed strategies in continuous spaces and applied gradient descent for convergence to a stationary Nash equilibrium. However, we found this method to be slow and unable to converge in our game. The third group encompasses the works of Bichler et al. (2021) and Martin and Sandholm (2022). Bichler et al. (2021) proposed using artificial neural networks to learn equilibria in symmetric auction games through gradient dynamics in self-play. This work was later extended by Martin and Sandholm (2022), who used the same pushforward trick as Dou et al. (2019), adding random noise as input. They applied zeroth-order optimization techniques to compute approximate Nash equilibria in continuous-action games without access to gradients. However, we found that the success of this method is very sensitive to hyperparameters, and its dynamics can be unstable. ### Applying regret-minimization algorithms to continuous games Our study delves into the application of regret-minimization algorithms to continuous games, a distinct approach given that it does not demand any differentiability assumptions on the reward function. This approach pivots around algorithms known for identifying correlated equilibria, such as regret matching (Hart and Mas-Colell, 1997, 2001a,b), Counterfactual Regret Minimization (CFR) (Neller and Lanctot, 2013), and stochastic fictitious play (Fudenberg and Kreps, 1993). 
Regret matching is a notable algorithm in the realm of game theory for finding correlated equilibria. It operates by having each player select a distribution of moves at each step of the algorithm, wherein this selection is geared towards maximizing their expected utility, given the past moves of their opponent. When run for an ample number of iterations, the distributions converge to a correlated equilibrium. Building on this, we bring into play the CFR algorithm. This algorithm accumulates the expected regrets of each action played by a player at every information set. It then updates the regrets based on the counterfactual outcomes of the game - that is, what the result would have been if a different action had been taken. As these regrets are iteratively updated and actions chosen based on these revised regrets, CFR converges to a Nash equilibrium in extensive form games. In our normal form game, we incorporate CFR as a deterministic variant of Regret Matching. Further enriching our methodology is stochastic fictitious play, another variant of regret matching. It involves the computation of probability distributions using the softmax function. We also test this variant along with CFR in our experiments, thereby evaluating their performance in approximating the Nash equilibrium of our game.

To approximate the CfR game, we resort to a finite grid of actions, which are evenly distributed within the interval \([0,1]\) for both players. In doing so, we differentiate between two settings, governed by a boolean variable, shift. When shift = false, both players are offered the same set of actions. However, when shift = true, the two players are presented with non-intersecting sets of actions, with each action in the interval \([0,1]\) alternately attributed to each player. Despite the fact that the action set of the resulting approximate Nash equilibrium is necessarily a subset of a predefined finite action set, our method brings a fresh perspective to the field, with its emphasis on regret minimization for both players and the removal of any differentiability assumption on the reward function. Through this research, we hope to enrich our understanding of the behavior of continuous games and lay a sturdy foundation for future exploration and improvements in this domain.

## 6 Experimental results

We employ a Numba (Lam et al., 2015) implementation of the numerical method proposed by Genz (2004) to compute the bivariate normal probability in the utility function, which is the computational bottleneck in equation 1. Our experiments show that the Nash equilibrium in finite approximations of the CfR game can be obtained by computing a correlated Nash equilibrium. Moreover, we verify that either the correlated equilibrium is unique or the correlated equilibrium maximizing the total reward is a Nash equilibrium. We present the results of our experiments with different values of \(P\), \(\tau\), and \(\rho\), as well as different numbers of actions, in Figure 7, where we plot the maximum distance \(d_{max}\) and the parameter \(\lambda\) computed according to Section 5.3 with shift = true. We also present the results with shift = false in Appendix E. We implement and evaluate all algorithms to solve a discrete version of our game where the action space is reduced to a grid, with and without a shift. We then report the value of QuasiNashConv as defined in Section 5.1.
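To illustrate, here is a minimal sketch of sampled regret matching on the discretized game, using the utilities of Proposition 2.1 with the reward shared on ties. The open-source implementation referenced above is vectorized with Numba and differs in details, so the names below are purely illustrative:

```python
import numpy as np

def utility_matrix(P=1.0, R=1.0, a=500):
    """u1 of Proposition 2.1 on a uniform action grid (reward shared on ties)."""
    r = np.linspace(0.0, 1.0, a)
    r1, r2 = r[:, None], r[None, :]
    win = (r1 > r2) + 0.5 * (r1 == r2)
    return r2 * (1 - r1) * R - r1 * P + win * (1 - r1) * (1 - r2) * R, r

def regret_matching(U, iters=200_000, seed=0):
    """Average play frequencies after sampled regret matching (U is player 1's payoff matrix)."""
    rng = np.random.default_rng(seed)
    a = U.shape[0]
    regrets = np.zeros((2, a))
    counts = np.zeros((2, a))
    for _ in range(iters):
        actions = []
        for p in range(2):
            pos = np.maximum(regrets[p], 0.0)
            probs = pos / pos.sum() if pos.sum() > 0 else np.full(a, 1.0 / a)
            actions.append(rng.choice(a, p=probs))
        a1, a2 = actions
        counts[0, a1] += 1
        counts[1, a2] += 1
        # regret updates; the game is symmetric, so u2(i, j) = U[j, i]
        regrets[0] += U[:, a2] - U[a1, a2]
        regrets[1] += U[:, a1] - U[a2, a1]
    return counts / iters

U, grid = utility_matrix(a=200)
strategy = regret_matching(U)
print("average risk of player 1:", strategy[0] @ grid)  # compare with r_bar = k - (P + 1) from Theorem 2.5
```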
We display the solution found by Regret Matching in Figure 8 and check that it matches the analytical solution from Figure 2. We observe in Figure 9 that vanilla Regret Matching outperforms all other methods. In general, Regret Matching performs much better than CFR, even when using fewer actions, using softmax is detrimental, and shifting the action space does not seem to impact performance much. CFR runs faster as it does not involve any random sampling. Therefore, we use vanilla Regret Matching with shifting in the rest of the experiments for the sake of simplicity. Figure 10 shows that the number of sampled actions is the main factor that drives the quality of solutions. For our experiments on equilibria in various settings, we use 500 actions and 2000 iterations (hence \(10^{6}\) steps) of vanilla Regret Matching with shifting. Our efficient implementation computes the equilibrium in just a few seconds, allowing us to explore the effects of penalties, market frictions, and correlation on risk-taking behavior and performance, as well as to evaluate the effectiveness of various interventions and policies in a competitive environment.

Our results demonstrate that penalties and market frictions have a significant impact on the strategic behavior of actors in competition. Specifically, we find that penalties \(P\) decrease both the average risk taken \(\bar{r}\) by the players and their total reward \(u\), while market frictions \(\tau\) not only decrease the average risk taken but also increase the total reward. We also find that market frictions have a greater impact on the total reward in high-penalty environments. In particular, in especially inefficient markets with high \(\tau\), increasing penalties can improve cooperation and the total reward, as illustrated in Figure 11.

The correlation between firm risks also has a significant impact on risk-taking behavior and performance. Specifically, players take more risks in environments of negative correlation, which can improve their payoff, while they take fewer risks in environments of positive correlation. The impact of correlation on performance is negative in efficient markets, but it can become positive in noisy markets, as illustrated in Figure 12. These findings provide valuable insights into the complex interplay between market structure, risk-taking behavior, and performance in a competitive environment.

Figure 7: Our experiments with different values of \(P\), \(\tau\), and \(\rho\), as well as different numbers of actions, for the setting with shift = true show that, for almost all values of the parameters, \(d_{max}\) is numerically zero. We cross out cases where the solver failed to find a solution, most probably due to rounding errors which caused the small solution set to disappear. In the few cases where \(d_{max}\) was not observed to be zero, \(\lambda\) is clearly zero, indicating that the best correlated equilibrium is a Nash equilibrium.

Figure 8: Approximate and analytical solutions for \(P=1,\rho=0,\tau=0\). Actions were discretized to a grid of \(2^{12}\) with shift = true. The approximate solution was computed by \(10^{4}\) iterations of regret matching.

Figure 9: We evaluate the performance of different algorithms in finding the Nash equilibrium of the CfR game in the standard setting with \(P=R=1\), \(\tau=0\), and \(\rho=0\), without sharing. We reduce the game to a grid of \(a=2^{12}\) actions, and evaluate the QuasiNashConv metric on \(2^{15}\) points. The algorithms are run for \(10^{4}\) iterations, where an iteration is defined as an update to the strategy. For CFR, an iteration involves updating the regrets based on the counterfactual outcomes of the game, while for regret matching, an iteration is defined as \(a\) steps of the sampling, play, and update process. Both CFR and regret matching do \(\mathcal{O}(a^{2})\) operations per iteration. We do not include the computation of the utility matrix in the computation time.

Figure 10: We evaluate the performance of the vanilla regret matching algorithm in finding the Nash equilibrium of the CfR game in the standard setting with \(P=R=1\), \(\tau=0\), and \(\rho=0\), without sharing. The quality of the resulting equilibrium depends mainly on the number of actions \(a\). We run the algorithm for \(t\) iterations, where the time complexity of the algorithm is \(\mathcal{O}(ta^{2})\) and the memory complexity is \(\mathcal{O}(a^{2})\) as it requires computing the reward matrix. We evaluate the QuasiNashConv metric on \(8a\) points.

Figure 11: The two figures depict the relationship between the total reward \(u\) and the average risk taken \(\bar{r}\) under different penalty \(P\) and market friction \(\tau\) settings. The left figure shows constant \(P\) level lines, while the right figure shows constant \(\tau\) level lines. We observe a linear relationship between \(u\) and \(\bar{r}\) when changing \(\tau\) at constant \(P\). We find that this behavior holds for all values of \(\rho\). Interestingly, there seems to exist a linear relationship between \(\bar{r}\) and \(u\) when changing \(\tau\) for fixed levels of \(P\).

Figure 12: We plot the average risk taken \(\bar{r}\) and the total utility \(u\) as functions of the correlation \(\rho\) between firm risks.

## 7 Economic Policy Implications and Real-World Case Studies

In this section, we aim to connect the insights gleaned from our model with broader economic literature and real-world policy implications. Drawing on established economic theories, particularly the Diamond-Dybvig model, we offer a new perspective on how financial institutions manage risk and competition. Further, we delve into how our model's findings can inform current policy debates, such as the regulation of financial advisory commissions in the European Union and contract transferability among life insurers in France. Through this discussion, we illustrate the practical applicability and potential impact of our theoretical framework on shaping economic policies and practices.
This feature makes our model particularly useful for studying phenomena such as competition among banks over deposit contract interest rates, a scenario that mirrors the dynamic modeled by Diamond and Dybvig. Despite its parallels with the Diamond-Dybvig framework, our model is distinct, original, and more expansive in scope. It is unconcerned with specific financial metrics or measures, making it more generalizable across different settings. Instead, our model pivots around the broadly applicable concept of "failure probability". This fundamental characteristic allows us to abstract away from the complexities of real-world financial instruments and focus on the core strategic interactions of players. By viewing the competition through the lens of failure probability, we can derive insights that are not confined to specific financial instruments or markets, but instead provide a versatile theoretical tool that can be applied across various sectors and scenarios. This innovative feature enhances the relevance and applicability of our model in analyzing strategic risk-taking behavior. Under certain circumstances, as we show in our study, market frictions, such as customers not optimizing their choice of bank, can increase bank profitability. Although this finding may seem intuitive, our formal model provides a rigorous foundation for this result, focusing not on explicit interest rates but rather on failure probabilities. Interestingly, our results also shed light on the implications of risk correlation for the equilibrium. In cases where the failure events are negatively correlated, market frictions appear to have little effect on the equilibrium. In contrast, when risks are positively correlated - mirroring the reality of endogenously generated financial risks tied to financial markets - a lack of market friction can spur banks to take on greater risk. This suggests that the optimal situation for both customers and banks arises when the correlation between endogenous risks is zero or negative. Under these conditions, lower frictions can benefit customers without harming banks. On the other hand, if risks are positively correlated, market frictions can harm customers while benefiting banks, creating an incentive for banks to increase these frictions (e.g., by fostering customer loyalty or withholding information). Moreover, the impact of failure penalties (or the absence of a safety net) can exacerbate this dynamic, suggesting that regulators could mitigate perverse incentives by reducing the penalty parameter \(P\) in a positively correlated environment, such as through bailouts or other protective measures (e.g., deposit insurance, liquidity provision, and bank resolution mechanisms). This perspective aligns with a strand of literature that extrapolates from the Diamond-Dybvig model to inform policy making (Bhattacharya et al., 1985; Ennis and Keister, 2009). ### Practical applications in current policy debates Outside of the banking sector, our framework has implications for other policy debates. In the European Union, for instance, the Financial Services Commissioner, Mairead McGuinness, recently proposed banning "inducements" - commissions paid by banks or insurers to financial advisors who sell their products (Jones, 2023). Proponents argue this would enhance transparency and reduce costs. Critics, however, fear it could inhibit access to financial advice. 
Within our model, increasing transparency equates to reducing the friction parameter \(\tau\), which alters the equilibrium, potentially heightening risk for financial institutions but also augmenting returns for consumers. Our findings indicate that regulators might safeguard consumers and influence prices simply by mandating transparency. However, we caution that the impact of such a policy on market frictions is uncertain, as it could inadvertently limit access to information via advisors. Furthermore, in France, a legislative proposal by MPs Husson and Montgolfier (Husson and de Montgolfier, 2023) seeks to enhance the transferability of contracts between life insurers, effectively reducing market frictions. Our framework suggests that, in response, insurers could take on more risk due to increased competition. However, the relationship between risk and interest rates may also change, as performance becomes costlier in terms of risk when the duration of contracts decreases because of enhanced transferability. Consequently, while our model predicts an uptick in the risks assumed by insurers, the ultimate benefit to savers remains ambiguous. ## 8 Conclusion In this study, we thoroughly examined competition models where participants strategically choose their risk levels, with those who take on more risk potentially outperforming their competitors. We devised and tested multiple algorithms to solve our game in its discrete form, with vanilla Regret Matching proving to be the most effective. We utilized this efficient implementation to delve into the impacts of penalties, market frictions, and risk correlations on strategic behavior and overall performance. Additionally, we scrutinized the effectiveness of diverse interventions and policies within this competitive landscape. Our research revealed that market frictions tend to lower the average risk taken while boosting the total reward. Moreover, we found that enhancing failure penalties can foster cooperation and augment the total reward, particularly in inefficient markets. Our exploration also showed that negative correlations among failure events stimulate risk-taking, while positive correlations may discourage it in efficient markets but potentially encourage it in less predictable, noisy markets. One noteworthy aspect of our study is its parallel with the Diamond-Dybvig framework. Our model, similar to Diamond-Dybvig, examines how financial institutions select parameters influencing their utility functions and likelihood of failure. However, our model is more generalized, focusing on universally applicable notions of failure probabilities, thereby enabling us to study the dynamics of strategic competition in a broader array of scenarios. We also showcased the adaptability of our model for policy exploration. By imposing policy measures such as transparency requirements or facilitating contract transferability, we demonstrated how policy changes can influence the equilibrium of risk-taking and consequently, the rewards. Our findings offer substantial insights for economics, finance, and policymaking. By understanding how market frictions and penalties influence competition, firms and governments can make more informed strategic decisions leading to more efficient markets. Moreover, our use of algorithmic solvers for games with continuous action sets illustrates the potential for handling more intricate models lacking closed-form solutions. 
In conclusion, our work provides a robust framework for modeling and analyzing strategic interactions in continuous action games, extending its implications far beyond to enrich economic research and practice. ## Acknowledgement We thank Aurelie Coursimault, Professor Helyette Geman, Marc Lanctot, Jules Pondard and Professor Philippe Raimbourg for helpful discussions and useful comments.
2303.15133
Synia: Displaying data from Wikibases
I present an agile method and a tool to display data from Wikidata and other Wikibase instances via SPARQL queries. The work-in-progress combines ideas from the Scholia Web application and the Listeria tool.
Finn Årup Nielsen
2023-03-27T12:09:48Z
http://arxiv.org/abs/2303.15133v1
# Synia: Displaying data from Wikibases ###### Abstract I present an agile method and a tool to display data from Wikidata and other Wikibase instances via SPARQL queries. The work-in-progress combines ideas from the Scholia Web application and the Listeria tool. Wikidata, Wikibase, SPARQL ## Introduction Scholia is a Web application running from the Wikimedia Foundation Toolforge server at [http://scholia.toolforge.org](http://scholia.toolforge.org). It displays data from Wikidata via SPARQL queries to the Wikidata Query Service (WDQS), particularly showing metadata about scientific publications (Nielsen et al., 2017), chemical information (Willighagen et al., 2018), and software (Rasberry and Mietchen, 2022). The Web application is implemented with the Python Flask framework and SPARQL templates are defined with Jinja2 templates that are read during the application startup and interpolated based on the Scholia user browsing. Two other tools use a similar Flask/SPARQL template approach to display Wikidata data: Ordia is specialized for the lexicographic part of Wikidata (Nielsen, 2019) and CVRminer1 on Danish companies. Common limitations for these tools are currently Footnote 1: [https://cvrminer.toolforge.org/](https://cvrminer.toolforge.org/). 1. The tools are bound to the Wikidata WDQS endpoint 2. The language is fixed to English 3. Development of new panels and aspects requires the involvement of software developers. For Magnus Manske's Listeria tool, wiki editors define MediaWiki templates with SPARQL queries on wikipages. The Listeria bot then edits on behalf of the user and generate tables on the wikipage according to the SPARQL query.2 Footnote 2: [https://listeria.toolforge.org/](https://listeria.toolforge.org/). The approach I will describe here was first explored in a specific instance of a Wikibase for data related to environmental impact assessment reports (Nielsen et al., 2023). In this abstract, I describe the extension of the approach, so it can be used more widely with only slight changes in configurations in and across different Wikibases, -- including Wikidata. ## Methods I call the tool _Synia_ with the canonical homepage set up at [https://synia.toolforge.org/](https://synia.toolforge.org/). The implementation is a serverless single-page application (SPA) consisting of a simple HTML page and some JavaScript. Instead of storing the SPARQL templates along with the Web application, the templates are stored on wikipages. The URL pattern of Scholia is borrowed and changed to use URI fragments to control which wikipage should be read and what values should be interpolated in the template. Table 1 shows some of the mapping between the URI fragment and the wikipage. A pseudo-namespace, Wikidata:Synia, is used as the default for grouping the templates. If the template is not defined on the wiki Synia creates a link, so a user/editor can create the template. Faceted search is supported, e.g., "#venue/Q15817015/topic/Q2013" shows information about the topic _Wikidata_ occurring in the journal _Semantic Web_. Aspects with multiple items, e.g., handling "#authorSq20980928,Q20895241,Q20895785" is not yet supported. When wikipages are used for templates there are at least two important issues to consider: The template should be humanly readable as a wikipage and the information read should be untrusted as wikis are usually openly editable. Currently, a limited set of components are handled, see Table 2. The parsing of the components is based on a series of regular expressions. 
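As a rough illustration of the fragment-to-template mapping, the Python sketch below resolves a URI fragment such as #venue/Q15817015/topic/Q2013 to a template wikipage title and collects the item identifiers to interpolate. Table 1 is not reproduced here, so the page-naming scheme (joining aspect names) and the {{q1}}-style placeholders are guesses for illustration only; the actual Synia tool is a JavaScript single-page application.

```python
import re

TEMPLATE_BASE = "Wikidata:Synia:"  # pseudo-namespace used for grouping templates


def fragment_to_page(fragment: str) -> tuple[str, list[str]]:
    """Map a URI fragment like '#author/Q18618629' or '#venue/Q15817015/topic/Q2013'
    to an assumed template wikipage title and the identifiers to interpolate
    (supporting simple faceted browsing)."""
    parts = fragment.lstrip("#").split("/")
    aspects = parts[0::2]   # e.g. ['venue', 'topic']
    items = parts[1::2]     # e.g. ['Q15817015', 'Q2013']
    return TEMPLATE_BASE + "-".join(aspects), items


def interpolate(sparql_template: str, items: list[str]) -> str:
    """Replace hypothetical numbered placeholders ({{q1}}, {{q2}}, ...) with the
    Q- or L-identifiers extracted from the fragment."""
    for n, item in enumerate(items, start=1):
        sparql_template = sparql_template.replace("{{q%d}}" % n, item)
    return sparql_template


page, items = fragment_to_page("#venue/Q15817015/topic/Q2013")
# page == 'Wikidata:Synia:venue-topic', items == ['Q15817015', 'Q2013']
```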
Synia will recognize MediaWiki headings and render them with h1, h2, and h3 HTML tags. SPARQL templates for Synia are stored on the wikipage in the _Template:SPARQL_ MediaWiki template. Synia extracts the SPARQL code, interpolates the Q- and L- identifier(s), and sends the interpolated SPARQL to the SPARQL endpoint. The response is rendered as a table in the SPA using the DataTables JavaScript library or it may be rendered as a graph in an iframe with the graphing capabilities of the query service. For the ordinary wiki user, the template wikipage appears as ordinary wikipages with SPARQL as code examples, see Figure 1. The wikipage may have multiple headings and SPARQL templates. Other endpoints than the configured default can be queried. Currently Synia abuses an _endpoint_ parameter for the _Template:SPARQL_ Media-Wiki template on Wikidata to specify the other endpoint. An example using the approach is currently displayed at [https://www.wikidata.org/wiki/Wikidata:Synia:compound](https://www.wikidata.org/wiki/Wikidata:Synia:compound) where a panel for a SPARQL query goes to the endpoint of the [https://wikifcd.wikibase.cloud](https://wikifcd.wikibase.cloud) wiki (Thornton et al., 2021). This wiki has a Wikidata mapping property, so the Q-identifier can be matched across Wikibases to a Wikidata identifier. Bootstrap, jQuery, and DataTables libraries are used. To avoid leaking browsing behavior the static files are hosted along with the SPA. Configuration, e.g., about the location of templates and the default endpoint is maintained in a separate JavaScript file. A few aspects have so far been defined for Synia each with a few panels, e.g., author, work, venue, film, actor, compound, and lexeme. Figure 2 shows a screenshot of the actor aspect for the Wikidata entity Q294647 with two panels: a table and a bar chart. To demonstrate that it is possible to use other template sites and other endpoints, I set up a template page at [https://www.wikidata.org/wiki/User:Fnielsen:Synia:index](https://www.wikidata.org/wiki/User:Fnielsen:Synia:index) copying a query from Wiki-FCD and reconfigured a cloned version of Synia to use "[https://www.wikidata.org/wiki/User:Fnielsen:Synia](https://www.wikidata.org/wiki/User:Fnielsen:Synia):" as the template base URL and [https://wikifcd.wikibase.cloud/query](https://wikifcd.wikibase.cloud/query) as the query service URL. ## Discussion/Conclusions The approach for the creation of new aspects and panels with Synia is more agile and wiki-like than Scholia's method. While the creation of a new panel in Scholia usually involves the creation of a new issue in GitHub, creation of a new branch, editing SPARQL and jinja2 code, commiting, pushing, merging the branch, testing, and deploying to Toolforge, a new panel with Synia is created by just editing a wikipage. Creating a new aspect with Synia can be done by creating a new wikipage, while for Scholia it would entail editing Python code as well as all the other steps involved in creating a panel. Discussions about new aspects or changes in Scholia take place on GitHub issue pages, while for Synia, discussions could take place on the wiki, e.g., the talk page associated with the templates. Wikis with open editing, such as Wikidata, can be vandal and security is an issue. If a malicious wiki editor adds a third-party endpoint then the browsing behavior of a Synia user will leak to the third-party site. The problem could be alleviated by having a set of allowed endpoints, e.g., Wikidata and Wikibase.cloud instances. 
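The template extraction and endpoint handling could look roughly like the following Python sketch; the regular expression, the parameter names, and the endpoint allow-list are assumptions for illustration, and the production tool (written in JavaScript) parses more cases. Note that a simple pattern of this kind is also why pipe and equality characters inside queries are problematic, as discussed below.

```python
import re
import requests

ALLOWED_ENDPOINTS = {
    "https://query.wikidata.org/sparql",
    "https://wikifcd.wikibase.cloud/query/sparql",
}

# Assumed shape of a Template:SPARQL call: {{SPARQL|query=...|endpoint=...}}
SPARQL_TEMPLATE = re.compile(
    r"\{\{\s*SPARQL\s*\|\s*query\s*=\s*(?P<query>.*?)\s*"
    r"(?:\|\s*endpoint\s*=\s*(?P<endpoint>\S+)\s*)?\}\}",
    re.DOTALL | re.IGNORECASE,
)


def run_templates(wikitext: str, item: str, default_endpoint: str):
    """Find Template:SPARQL calls in the wikitext, interpolate the item
    identifier, and query only allow-listed endpoints."""
    results = []
    for match in SPARQL_TEMPLATE.finditer(wikitext):
        endpoint = match.group("endpoint") or default_endpoint
        if endpoint not in ALLOWED_ENDPOINTS:
            continue  # skip untrusted endpoints to avoid leaking browsing data
        sparql = match.group("query").replace("{{q}}", item)
        response = requests.get(
            endpoint, params={"query": sparql, "format": "json"}, timeout=30
        )
        results.append(response.json()["results"]["bindings"])
    return results
```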
How language should best be handled is not clear. Figure 3 shows an aspect in Danish for a Danish company, so it is possible to control the language from a template. However, this approach "occupies" a specific URI pattern and a change of language is not possible without redoing much of the template. Navigation with a menu and search is currently missing in Synia, as are redirects and aspect-switching, all of which are available in Scholia. Instead of hardcoding such components in the Web application, it is envisioned that components in the templates on the wiki could control the placement of menus and search forms. SPARQL in MediaWiki templates may cause a problem, as the pipe and equality characters in SPARQL collide with the use of these characters to handle parameters in MediaWiki templates. Synia's simple regular expression parsing of the wikitext does not handle "{{!}}", which may be used to escape the pipe character in a MediaWiki template. A more elaborate parsing may be needed.

## Acknowledgment

Thanks to the Scholia team, particularly Daniel Mietchen and Egon Willighagen, for continued inspiration.
2310.05218
Accelerating Machine Learning Primitives on Commodity Hardware
Sliding Window Sum algorithms have been successfully used for training and inference of Deep Neural Networks. We have shown before how both pooling and convolution 1-D primitives could be expressed as sliding sums and evaluated by the compute kernels with a shared structure. In this paper, we present an extensive study of the Sliding Window convolution technique as a more efficient alternative to the commonly used General Matrix Multiplication (GEMM) based convolution in Deep Neural Networks (DNNs). The Sliding Window technique addresses the memory bloating problem and demonstrates a significant speedup in 2-D convolution. We explore the performance of this technique on a range of implementations, including custom kernels for specific filter sizes. Our results suggest that the Sliding Window computation kernels can outperform GEMM-based convolution on a CPU and even on dedicated hardware accelerators. This could promote a wider adoption of AI on low-power and low-memory devices without the need for specialized hardware. We also discuss the compatibility of model compression methods and optimized network architectures with the Sliding Window technique, encouraging further research in these areas.
Roman Snytsar
2023-10-08T16:26:18Z
http://arxiv.org/abs/2310.05218v1
# Accelerating Machine Learning Primitives on Commodity Hardware ###### Abstract Sliding Window Sum algorithms have been successfully used for training and inference of Deep Neural Networks. We have shown before how both pooling and convolution 1-D primitives could be expressed as sliding sums and evaluated by the compute kernels with a shared structure. In this paper, we present an extensive study of the Sliding Window convolution technique as a more efficient alternative to the commonly used General Matrix Multiplication (GEMM) based convolution in Deep Neural Networks (DNNs). The Sliding Window technique addresses the memory bloating problem and demonstrates a significant speedup in 2-D convolution. We explore the performance of this technique on a range of implementations, including custom kernels for specific filter sizes. Our results suggest that the Sliding Window computation kernels can outperform GEMM-based convolution on a CPU and even on dedicated hardware accelerators. This could promote a wider adoption of AI on low-power and low-memory devices without the need for specialized hardware. We also discuss the compatibility of model compression methods and optimized network architectures with the Sliding Window technique, encouraging further research in these areas. ## 1 Introduction In recent years, there has been significant progress in machine learning (ML) research, with breakthroughs in deep learning, natural language processing, and computer vision. A Deep Neural Network (DNN) is one of the most significant tools of a ML scholar [17]. DNNs are constructed from multiple layers that transform the data sequentially via operations such as pooling, convolution, and activation. In most successful DNNs, the greater portion of computational resources is consumed by performing convolution. A popular implementation of convolutional layers is expanding the input into a column matrix form (im2col) and then calling a highly tuned General Matrix Multiplication (GEMM) procedure from the existing linear algebra library such as BLIS [26] or MKL [28]. Since the hardware optimized GEMM implementations exist for every standard CPU, graphics processing unit (GPU), or digital signal processor (DSP), the im2col approach has been highly successful in DNN frameworks such as Caffe [15], Torch [5] and ONNX [3]. However, these advances have primarily benefited large corporations, and research institutions with access to massive computational resources. The democratization of AI on low power and edge devices aims to bring the benefits of AI to a wider audience, including small businesses, individual users, and the billions of Internet of Things (IoT) devices. Edge devices, such as smartphones, wearables, and IoT sensors, are often resource-constrained, with limited processing power, memory, and battery life. One major challenge in deploying AI on edge devices is the size of deep learning models, which can be hundreds of megabytes or even gigabytes. The im2col conversion further increases the memory footprint of the input matrix and reduces data locality. For a convolution with a filter size k, the column matrix is k times larger than the original input tensor. A lot of research effort has been put into applying the GEMM routines to the smaller intermediate data structures [1][27] or even to the original input data [29]. 
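As a rough illustration of the memory bloat discussed above, the sketch below builds the im2col matrix for a 1-D convolution and compares it with a direct sliding-window evaluation on the unmodified input. It is a simplified NumPy model for exposition, not the vectorized kernels studied later in the paper.

```python
import numpy as np

def im2col_1d(x: np.ndarray, k: int) -> np.ndarray:
    """Expand a length-n signal into an (n-k+1) x k matrix of windows.
    The expanded matrix is roughly k times larger than the input."""
    n = len(x)
    return np.stack([x[i:i + k] for i in range(n - k + 1)])

def conv_gemm(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """GEMM-style convolution: materialize im2col, then one matrix product."""
    return im2col_1d(x, len(w)) @ w

def conv_sliding(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Direct evaluation on the original input, with no intermediate matrix."""
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

x = np.random.rand(1_000)
w = np.random.rand(9)
assert np.allclose(conv_gemm(x, w), conv_sliding(x, w))
print("im2col bytes:", im2col_1d(x, len(w)).nbytes, "input bytes:", x.nbytes)
```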
To reduce the memory requirements on edge devices and improve performance, researchers have been exploring various techniques, including model compression, network optimization, and hardware acceleration. ### Model Compression Model compression techniques aim to reduce the size of these models while maintaining their accuracy. Various methods have been proposed, including quantization, weight pruning, and knowledge distillation. Quantization is a technique that reduces the numerical precision of model weights and activations to the low precision floating point or even integer representation [7]. The quantization reduces the memory footprint and latency by an order of magnitude with only a minor degradation in accuracy for many tasks. This is particularly important for edge devices, where memory and computational resources are limited. There are also several types of extreme quantization, such as binary and ternary quantization, which use only two or three discrete values for weights and activations [6][12], and mixed-precision quantization, which uses different levels of precision for different parts of the network [4][33]. It is important to note however, that quantization is not entangled with GEMM and could be equally successful when applied to the original convolution problem. Pruning is a method that removes redundant or less important parameters from the model, resulting in a sparse model with less parameters [9]). This can lead to significant reductions in memory and computational requirements, making the model more suitable for deployment on edge devices. Pruning techniques can be broadly classified into unstructured pruning, which removes individual weights, and structured pruning, removing entire neurons, filters, or channels [30]. Unstructured pruning can achieve high levels of sparsity but may not lead to actual speedup on hardware, whereas structured pruning can result in a more efficient hardware implementations. Dynamic pruning methods have also been developed, which adjust the sparsity of the model during training or inference, further improving efficiency [8][18]. Knowledge distillation is a technique that trains a smaller model (student) to mimic the behavior of a larger, pre-trained model (teacher), effectively transferring knowledge from the teacher to the student [10]. This can result in a compact model that achieves similar accuracy to the larger model, making it more suitable for deployment on edge devices. Knowledge distillation typically involves training the student model using a combination of the original task loss and a distillation loss, which measures the discrepancy between the teacher and student model outputs. Various distillation losses have been proposed, such as the Kullback-Leibler divergence between the softened output probabilities, or the mean squared error between feature maps or logits. ### Optimized Network Architectures Further reducing the number of parameters and computations, optimized network architectures help bring the benefits of AI to a wider range of applications and devices, while addressing challenges such as latency, privacy, and energy efficiency. MobileNets are a family of efficient convolutional neural networks designed for mobile and embedded vision applications. They are based on depthwise separable convolutions, which significantly reduce the number of parameters and computations compared to standard convolutions [11]. MobileNetV2 [22] incorporates inverted residual structures and linear bottlenecks for improved efficiency. 
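To make the parameter savings of depthwise separable convolutions concrete, a small back-of-the-envelope calculation is shown below; the layer sizes are illustrative and biases are ignored.

```python
def standard_conv_params(c_in: int, c_out: int, k: int) -> int:
    # A standard k x k convolution mixes channels and space in one step.
    return k * k * c_in * c_out

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise: one k x k filter per input channel; pointwise: 1 x 1 channel mixing.
    return k * k * c_in + c_in * c_out

c_in, c_out, k = 128, 256, 3
std = standard_conv_params(c_in, c_out, k)          # 294,912 parameters
sep = depthwise_separable_params(c_in, c_out, k)    # 33,920 parameters
print(f"reduction factor: {std / sep:.1f}x")        # roughly 8.7x fewer parameters
```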
EfficientNets are a series of neural networks derived using a combination of neural architecture search and compound scaling. They achieve state-of-the-art accuracy with significantly fewer parameters compared to other models of similar accuracy. EfficientNets have been successfully used for various computer vision tasks [25]. ShuffleNet is an efficient neural network architecture that employs pointwise group convolutions and channel shuffling to reduce computation while maintaining accuracy. Pointwise convolution is equivalent to matrix multiplication and thus extremely efficient as has been proven by the ShuffleNet application to the assortment of image classification tasks [32]. SqueezeNet is a compact neural network architecture that achieves AlexNet-level accuracy on the ImageNet dataset with 50x fewer parameters [13]. It uses "fire modules" consisting of a squeeze convolution layer followed by an expansion layer to reduce the number of parameters while maintaining accuracy. ### Hardware Accelerators Custom accelerators are specialized hardware units designed specifically for the efficient execution of neural networks. These accelerators often feature optimizations such as low-precision arithmetic, dataflow-based execution, and on-chip memory hierarchies to minimize energy consumption and latency. Examples of custom accelerators for edge devices that borrow their designs from the high-power applications include Google's Edge TPU [16] and NVIDIA's Deep Learning Accelerator (DLA) [19]. These are in fact the GEMM accelerators as the popularity of the im2col tactics significantly influences the design of the custom hardware. Graphics Processing Units (GPUs) have evolved from specialized hardware for rendering graphics to highly parallel and programmable architectures suitable for a wide range of high-performance computing. They are widely used for accelerating neural network. In recent years, GPU manufacturers have introduced low-power GPUs specifically designed for edge devices, such as NVIDIA's Jetson platform [20] and the ARM Mali-G series [2]. Following the high-power GPU designs, the trend is reversing from general purpose compute to more specialized "tensor cores" for GEMM operations. Field-Programmable Gate Arrays (FPGAs) are reconfigurable hardware platforms that can be programmed to implement custom logic circuits, making them suitable for accelerating neural networks on edge devices. FPGAs offer flexibility and can be reprogrammed to adapt to different models or tasks. Examples of FPGA platforms for edge devices include Intel's Movidius Myriad X [14] and Xilinx Zynq UltraScale+ [31]. ## 2 Experiments Earlier we proposed a new algorithm for performing convolution. The Sliding Window technique [24] replaces GEMM with a novel computation kernel that operates on the unmodified input and eradicates the memory bloating problem. The speedup of 1-D convolution we have observed when compared to the baseline _MlasConv_ procedure was roughly proportional to the logarithm on the filter width [23]. In this paper we present the extension of the Sliding Window algorithms to the more practical 2-D cases. There are three different implementations on the 2-D sliding convolution. The kernel sizes up to 17 are handled by the straightforward version of the Vector Slide algorithm. Kernels of larger width do not fit into the hardware vector and require a special version that operates on multiple hardware vectors treating them as a single long compound vector. 
Both generic versions perform redundant shuffles, so for filter widths 3 and 5 we implemented custom kernels with an optimal number of operations. We have run experiments on an Azure node with 16 cores of an Intel(R) Xeon(R) Platinum 8272CL CPU and 32 GB of RAM. The speedup is measured relative to the ONNX _MlasConv_ calls. All tests have been run in a single-core configuration to exclude the effects of task scheduling delays. The 2-D Sliding Window convolution exhibits the same roughly logarithmic speedup with respect to the filter size. The zigzag pattern at the larger filter sizes is related to the alignment of the compound vector to the hardware vector length. Custom implementations are indeed faster than their generic counterparts. Generating custom kernels at run time might improve the performance for every filter size. An interesting observation happens at filter size 17, as it could be handled by either the hardware-specific or the compound implementation. The compound variation is significantly faster. It is worth studying this phenomenon closer in hopes of improving the performance of the hardware-specific code and bringing the whole left part of the graph higher.

Figure 1: Speedup of the 2-D Convolution.

The number of arithmetic operations performed by the sliding convolution is the same as for the naive or GEMM-based algorithms. Our observations hint that the speedup comes from better memory access patterns. We have also measured the arithmetic throughput of different kernels using the Intel Advisor [21]. As the filter size increases, the throughput of the Sliding Window convolution kernels approaches the hardware limits. It is also interesting to see that the filter size misalignment with the hardware vector length results in similar performance patterns for both Sliding Window and GEMM kernels.

Figure 2: 2-D Convolution throughput.

## 3 Conclusion

We have measured the performance and throughput of the Sliding Window convolution kernels. They are more efficient than the commonly used GEMM kernels on the CPU and could even outperform dedicated hardware accelerators. Wider adoption of the Sliding Window sum algorithm could promote AI usage on low-power and low-memory devices, avoiding the expense of specialized hardware. All the model compression techniques described earlier are equally applicable to Sliding Window computation. Pruning and distillation reduce the work required for ML inference. Quantization delivers the same benefits of memory and power savings, and better vector performance. Optimized network architectures tend to use small convolution filters that diminish the advantages of the Sliding Window convolution. In the extreme case of ShuffleNet, its pointwise convolutions do not benefit from the new algorithm at all. In general, small-filter convolutions are memory bound, which equally limits the performance of custom accelerators and CPU solutions. We encourage new research into network architectures that use fewer layers with larger convolution filters. In many cases the hardware accelerators can be repurposed for Sliding Window algorithms with varying degrees of success, depending on how specialized the hardware is. The pipelined nature of the Sliding Window algorithm enables a straightforward FPGA implementation. Limited on-chip memory and logic resources can be a constraint for implementing large-scale deep learning networks. Combining Sliding Window techniques with optimized network architectures and model compression results in fast and energy-efficient solutions.
The algorithms are easily portable to GPU as well. The benefits of stream-lined memory access are less pronounced since explicitly controlled on-chip memory hierarchies make GPUs already highly efficient in GEMM computation. Since the accelerators for matrix multiplication are already present in the current generation of hardware and are likely to stay in future devices, they could improve throughput and performance of many computational tasks beyond GEMM. Thus, it is important to re-formulate our algorithms in terms of the small matrix multiplication, completing the circle. Competition between CPU algorithms and hardware accelerators would lead to advances in both directions, and the most spectacular results are expected at the intersection of the two research fields. Sliding Window convolution algorithms exhibit excellent performance using commodity hardware. They deliver the benefits of AI to more low-cost and low-power devices.
2308.07645
Steering Language Generation: Harnessing Contrastive Expert Guidance and Negative Prompting for Coherent and Diverse Synthetic Data Generation
Large Language Models (LLMs) hold immense potential to generate synthetic data of high quality and utility, which has numerous applications from downstream model training to practical data utilisation. However, contemporary models, despite their impressive capacities, consistently struggle to produce both coherent and diverse data. To address the coherency issue, we introduce contrastive expert guidance, where the difference between the logit distributions of fine-tuned and base language models is emphasised to ensure domain adherence. In order to ensure diversity, we utilise existing real and synthetic examples as negative prompts to the model. We deem this dual-pronged approach to logit reshaping as STEER: Semantic Text Enhancement via Embedding Repositioning. STEER operates at inference-time and systematically guides the LLMs to strike a balance between adherence to the data distribution (ensuring semantic fidelity) and deviation from prior synthetic examples or existing real datasets (ensuring diversity and authenticity). This delicate balancing act is achieved by dynamically moving towards or away from chosen representations in the latent space. STEER demonstrates improved performance over previous synthetic data generation techniques, exhibiting better balance between data diversity and coherency across three distinct tasks: hypothesis generation, toxic and non-toxic comment generation, and commonsense reasoning task generation. We demonstrate how STEER allows for fine-tuned control over the diversity-coherency trade-off via its hyperparameters, highlighting its versatility.
Charles O'Neill, Yuan-Sen Ting, Ioana Ciuca, Jack Miller, Thang Bui
2023-08-15T08:49:14Z
http://arxiv.org/abs/2308.07645v2
Steering Language Generation: Harnessing Contrastive Expert Guidance and Negative Prompting for Coherent and Diverse Synthetic Data Generation ###### Abstract Large Language Models (LLMs) hold immense potential to generate synthetic data of high quality and utility, which has numerous applications from downstream model training to practical data utilisation. However, contemporary models, despite their impressive capacities, consistently struggle to produce both coherent and diverse data. To address the coherency issue, we introduce contrastive expert guidance, where the difference between the logit distributions of fine-tuned and base language models is emphasised to ensure domain adherence. In order to ensure diversity, we utilise existing real and synthetic examples as negative prompts to the model. We deem this dual-proposed approach to logit reshaping as STEER: Semantic Text Enhancement via Embedding Repositioning. STEER operates at inference-time and systematically guides the LLMs to strike a balance between adherence to the data distribution (ensuring semantic fidelity) and deviation from prior synthetic examples or existing real datasets (ensuring diversity and authenticity). This delicate balancing act is achieved by dynamically moving towards or away from chosen representations in the latent space. STEER demonstrates improved performance over previous synthetic data generation techniques, exhibiting better balance between data diversity and coherency across three distinct tasks: hypothesis generation, toxic and non-toxic comment generation, and commonsense reasoning task generation. We demonstrate how STEER allows for fine-tuned control over the diversity-coherency trade-off via its hyperparameters, highlighting its versatility. 1Mathematical Sciences Institute, Australian National University 2Research School of Astronomy & Astrophysics, Australian National University 3School of Computing, Australian National University 4Department of Astronomy, The Ohio State University ## Introduction Large language models (LLMs) learn a compressed representation of human knowledge through understanding language constructs, grammar and semantics as conditional probability distributions over subword tokens Wei et al. (2022); Shanahan (2023); Zhao et al. (2023). The prowess of LLMs in autoregressively generating text indicates a profound internalisation of our world view expressed through language Gilbert et al. (2023). This internalised knowledge brings the potential of LLMs to not only parse, understand, and respond to data, but also to synthesise such data autonomously Veselovsky et al. (2023); Halterman (2023); Platzer and Krchova (2022); Tan et al. (2022). Despite their prowess, the generation of synthetic data by these models presents a significant challenge, specifically in terms of data coherence and diversity Alaa et al. (2022); Lu et al. (2023). Specifically, there seems to be a trade-off between two measures of data quality: (1) how well synthetic samples resemble real samples (**fidelity**); and (2) how broadly the synthetic samples cover the real distribution (**diversity**); see Figure 5. The current state-of-the-art often struggles to generate data that maintains semantic fidelity, embraces diversity, and transcends mere reproduction of the training set Li et al. (2022). 
Whilst larger models and optimised training may eventually mitigate this somewhat, a more intriguing question remains: _how can we reshape the probabilistic token selection at inference time in order to achieve better synthetic data generation?_ To this end, we address the challenge of synthetic data generation with **S**emantic **T**ext **E**n**hancement via **E**mbedding **R**epositioning (**STEER**). The main contribution of STEER is _contrastive expert guidance_, a logit reshaping technique that emphasises the distinctions between a fine-tuned (domain) model and the base model, attracting the generator towards producing text characteristic of the specific domain. Simultaneously, it utilises _negative prompting_, another logit modification approach, which discourages the production of tokens found in previously generated synthetic or real examples, thereby ensuring the generation of synthetic examples that are both novel and representative. STEER operates at inference time and is architecture agnostic, using fine-tuning to delineate the semantic space of interest. Our proposed STEER method offers several key contributions: 1. We propose STEER, a strategy for generating synthetic data that navigates the trade-off between coherency and diversity. STEER relies on the controllability of conditional generation induced by contrastive expert guidance and negative prompting. 2. We evaluate STEER with metrics that capture the diversity and coherency of generate data. Our approach demonstrates superior performance over current decoding methods on three tasks: scientific hypothesis generation, toxic social media comment classification, and commonsense question-answering. Further, we validate the efficacy of STEER by training models on the synthetic data for downstream tasks such as classification, and assessing human preference of STEER data over data generated with other methods. 3. Finally, we provide ablation studies determining how STEER enables better management of the trade-off between coherency and diversity by tweaking the hyperparameters that control the contrastive expert guidance and negative prompting, respectively. ## Related work ### Evolution of Generative Models and Control Over Conditional Generation Recent years have witnessed a remarkable advancement in generative artificial intelligence models, which typically ingest context, frequently presented as prompts, and subsequently generate text, images, videos, or audio conditioned upon this provided context [14, 15]. The extent to which a model attends to such context is determined during the training phase, thereby granting users minimal control [13]. This issue first garnered attention with the emergence of generative adversarial models (GANs) and then diffusion models [1]. A series of techniques and methodologies have been proposed to better control this conditioning, forming the basis of the following sections. Traditional efforts to guide language models, such as PPLM [1] and GeDI [12], relied on external classifiers for specific attributes [23]. These methods utilise the gradient of a classifier, trained to recognise certain attributes, to adjust the logits during the generation process. By computing the gradient with respect to the desired attribute and applying it to the model's hidden states, they steer the generative process towards or away from particular characteristics. This proved expensive and complex, requiring additional models and continuous adjustments during implementation. 
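The gradient-steering idea behind these classifier-guided methods can be sketched with a toy PyTorch example; the tiny linear "language model" head and attribute classifier below are stand-ins, and the single ascent step is a simplification of what PPLM actually does.

```python
import torch

torch.manual_seed(0)
hidden = torch.randn(1, 16, requires_grad=True)   # current hidden state
lm_head = torch.nn.Linear(16, 100)                # toy vocabulary of 100 tokens
attr_clf = torch.nn.Linear(16, 1)                 # toy attribute classifier

# Take one gradient step on the hidden state so that the desired attribute
# becomes more likely, then recompute the next-token distribution.
attr_log_prob = torch.nn.functional.logsigmoid(attr_clf(hidden)).sum()
grad = torch.autograd.grad(attr_log_prob, hidden)[0]
steered_hidden = hidden + 0.02 * grad
next_token_probs = torch.softmax(lm_head(steered_hidden), dim=-1)
```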
Logit guidance emerged as an alternative approach for governing aspects of text generation, focusing on the manipulation of token distributions rather than semantic control, thereby signaling a progression towards architecture-agnostic guidance without additional classifiers. In the context of autoregressive language models, the token generation process unfolds sequentially, where the probability of each token is conditioned on its preceding tokens. Mathematically, the joint probability of a token sequence \(w\) is expressed as: \[\mathrm{P}_{\theta}(w)=\prod_{i}^{N}\mathrm{P}_{\theta}(w_{i}|w_{j<i}). \tag{1}\] Here, \(\mathrm{P}_{\theta}(w_{i}|w_{j<i})\) denotes the conditional probability of the \(i\)-th token given its predecessors, and is modeled as a distribution over the entire vocabulary. This distribution, represented as logits, is a function of the model's current state and can be directly manipulated. By selectively adjusting the logits' values, it's possible to exert influence over the generated text, steering it towards specific characteristics or themes. Logit guidance, therefore, leverages this property to offer fine-grained control over the generation process, fulfilling targeted objectives without the need for external models or classifiers. For instance, one challenge we face is preserving the fidelity of synthetic examples, meaning the generated content should reflect the intended domain. Contrastive Decoding (CD) [11] sought to overcome this challenge by enhancing the distinction between the log-probabilities of a sophisticated, high-capacity language model (LM) and a less capable, smaller counterpart. The log-probabilities of the tokens from the smaller model, \(\mathrm{P}_{\phi}(w_{i}|w_{j<i})\), are subtracted from those of a larger, "expert" \(\mathrm{P}_{\theta}(w_{i}|w_{j<i})\) model as a form of regularisation. This is similar to the DExpert approach undertaken by Liu et al. (2021), who perform interpolation in the output space as a form of ensembling. Intuitively, under the ensemble, tokens only get high probability if they are considered likely by the experts and unlikely by the anti-experts. Drawing parallels to other logit adjustment methods, Context-Aware Decoding (CAD) also recalibrates token probabilities [22]. CAD achieves this by dividing the log-probability of the upcoming token, when considering the model with input context, by the log-probability of the same token under the model sans context--effectively adjusting its contextual awareness. Advancing on this premise, Malkin, Wang, and Joic (2022) introduced a coherence-boosting (CB) strategy, differentially balancing the contributions of both full and partial contexts. This method illuminates early tokens by assessing the log difference between token distributions--with and without the inclusion of this early context. In doing so, the distribution hones in on tokens exhibiting high probability in the presence of early context but low probability when devoid of it. Similarly, Su et al. (2022) used a degeneration penalty, defined as the maximum cosine similarity between the representation of a continuation and that of all previous tokens, to prevent model degeneration. They refer to this method as contrastive search. Building upon the innovative efforts to exert control over generative models, the concept of classifier-free guidance (CFG) emerged as a further refinement to the field. 
It embodies a methodology aligning closely with previous advancements but distinctively eliminating the reliance on separate Figure 1: **Synthetic data generation with STEER**. We first use a real dataset \(\mathcal{D}\mathcal{\mathrm{\mathrm{\mathrm{\mathrm{\tau}}}}}\) to fine-tune a generative language model \(\mathrm{P}_{\theta}\). **A:** Contrastive expert guidance modifies the sampling distribution \(\mathrm{\widetilde{P}}_{\theta}\) (green). **B:** Negative prompting downweights tokens in real/generated examples, leading to \(\mathrm{\widetilde{P}}_{\theta}\) (yellow). **C:** Final sampling distribution combines contrastive guidance and negative prompting. **D:** Sampling, with each generated example fed into negative prompts, creates a coherent and diverse synthetic dataset (magenta dots) closely resembling the real distribution (blue circle). classifier models. The genesis of CFG can be traced back to diffusion models, where techniques were developed to reshape the latent sampling distribution to synchronise more harmoniously with the prompt [14]. Addressing earlier complexities, [15] synthesized the classifier's role into the model's training process itself, paving a new path for efficiency. In the context of autoregressive language models, which inherently excel in unconditional generation, CFG represents a natural evolution [10]. By manipulating the generation of subsequent tokens to accentuate the conditioning on the prompt, it builds upon the existing framework of logit guidance and log-probability adjustment. This manipulation can be formally expressed as follows: \[\log\overline{P_{\theta}}\left(w_{i}\mid w_{j<i},c\right) =\log\mathrm{P_{\theta}}\left(w_{i}\mid w_{j<i}\right)+\gamma \big{(}\log\mathrm{P_{\theta}}\left(w_{i}\mid w_{i<j},c\right)\] \[-\log\mathrm{P_{\theta}}\left(w_{i}\mid w_{j<i}\right)\big{)}. \tag{2}\] CFG also allows for the avoidance of specific aspects of generation through the use of negative prompting. By adjusting \(\gamma\) to be negative in the equation above, control is exerted to guide generation away from a given prompt. This methodology has found exceptional efficacy in diffusion models [1, 1, 14, 15, 16], further enriching the spectrum of control over generative processes. Both CFG and negative prompting exploit the latent semantic information in token predictions [17, 18, 19, 15, 16]. ### Synthetic data generation leveraging language models In the time before today's large language models (LLMs), traditional augmentation of text data was practiced through techniques like random text edits, synonym replacements, masked word predictions, reinforcement learning, lossy translation, and text blending [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. Further advancements involved fine-tuning of models and conditional generation implementation, but the discrete nature of language hindered traditional methods' effectiveness [25, 16, 17, 18, 19]. Challenges led to the utilisation of LLMs as synthetic data generators, using methods like fine-tuning conditional generation and zero-shot/few-shot prompting. This included training with prompts, fine-tuning with specific text structures, and leveraging both techniques with additional real data [15, 16, 17, 18, 19, 20, 21, 22, 23, 24]. Many studies acknowledge a distribution difference between real and synthetic examples, reflecting poor fidelity. The generation of low fidelity data often requires filtration, using classifiers to weed out synthetic instances, or costly human evaluation [21, 22, 23]. 
As for diversity, existing techniques like logit suppression and temperature raising struggle with a diversity-fidelity tradeoff, leading to low-quality generations that need human quality control [22]. Some strategies leverage taxonomy-based generation or multiple inference runs with different seeds to encourage diversity, although at increased computational costs [21, 22]. As fine-tuning existing models for the purpose of synthetic text generation becomes easier and more efficient, it's clear that more controllable and rigorous methods are required in order to navigate the trade-off between fidelity and diversity. ### Evaluation frameworks for synthetic textual data The evaluation of the quality of generated data has traditionally been simplistic, often relying on downstream task performance or employing statistical divergence measures like Kullback-Leibler (KL) divergence and Frechet distance, as well as coherency measures like MAUVE and cosine similarity [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. These approaches are limited in their ability to quantify fidelity and diversity comprehensively, are often inapplicable to certain models like GANs due to inaccessible likelihoods or suffer from problems in high-dimensional spaces [23, 24, 25, 26]. Previous precision-recall analyses have shown shortcomings like mode collapse, lack of robustness, and inability to evaluate authenticity [21, 22, 23, 24, 25, 26]. Attempts have been made to propose improved metrics, but no evaluation schema has emerged as the gold-standard, emphasising the need for nuanced, sample-level evaluation to effectively triage quality and automate culling of low-quality instances [1]. As such, the evaluation of synthetic data is typically conducted by employing a wide range of metrics for each dimension of quality we wish to assess. ## 3 Steer We denote a real datapoint as \(X_{r}\) from the real distribution \(\mathbb{P}_{r}\) and a synthetic example as \(X_{s}\) from the learned synthetic distribution \(\mathbb{P}_{s}\). This then induces the definitions of the real dataset \(\mathcal{D}_{r}=\{X_{r,i}\}_{i=1}^{n}\) with \(n\) distinct data points and \(\mathcal{D}_{s}=\{X_{s,j}\}_{j=1}^{m}\) with \(m\) distinct data points. In order to use both contrastive logit reshaping and negative prompting, we must first have the ability to unconditionally sample tokens from the distribution of interest i.e. the distribution of the real data, \(\mathbb{P}_{r}\). We leverage transfer learning by starting with a pre-trained transformer architecture. The fine-tuning process employs a next-word prediction paradigm, wherein the model is conditioned on prompts such as 'Generate a scientific hypothesis:' and asked to autoregressively predict the next word in real examples. This provides us with a model \(\mathrm{P}_{\theta}\) that is proficient at generating semantically similar examples in response to the given prompt, effectively sampling from the distribution \(\mathbb{P}_{r}\). We denote \(\mathrm{P}_{\theta}\) as the _domain model_. We also retain the model before fine-tuning as the _base model_\(\mathrm{P}_{\phi}\). **Contrastive expert guidance for fidelity** STEER first seeks to reweight the importance of the target distribution itself in order to generate high-fidelity text that is from said distribution. The contrastive objective \(\widetilde{\mathrm{P}_{\theta}}\) seeks to maximise the likelihood of the domain model's sequence, while minimising the likelihood of the same sequence under the base model's distribution. 
```
Contrastive Expert Guidance
Let \(\mathrm{P}_{\theta}\) be the domain model capable of sampling from \(\mathbb{P}_{r}\), and \(\mathrm{P}_{\phi}\) be the base model. The modified logit sampling distribution, which we denote \(\widetilde{\mathrm{P}_{\theta}}\), is defined as:
\(\log\widetilde{\mathrm{P}_{\theta}}(w_{i}|w_{j<i})=\log\mathrm{P}_{\theta}(w_{i}|w_{j<i})-\gamma\log\mathrm{P}_{\phi}(w_{i}|w_{j<i})\),
where \(\gamma\in[0,1]\) is a hyperparameter that controls the strength of the contrastive expert guidance.
```

In our approach, the fine-tuned domain model \(\mathrm{P}_{\theta}\) is sensitive to the specifics of the target domain, in contrast with the base model \(\mathrm{P}_{\phi}\), which is trained on a general language task. The contrastive objective leverages this difference to steer the generation towards text that aligns more with the domain distribution \(\mathbb{P}_{r}\) than with the broader distribution of the base model, with the emphasis controlled by the hyperparameter \(\gamma\). Consequently, our synthetic examples, generated via contrastive guidance, are not only plausible--reflecting high probability under the expert language model--but also distinctive to the target domain--exhibiting lower probability under the less-specialised language model. This balance guides synthetic example generation towards text that faithfully represents the target domain distribution \(\mathcal{D}_{r}\) while still exploring the semantic space comprehensively.

**Negative prompting for diverse and authentic generation** To encourage diversity and originality in synthetic examples, we complement our contrastive objective with a negative prompting mechanism. By integrating a negative prompt \(\bar{c}\)--comprising tokens from previously generated synthetic examples and real examples from \(\mathcal{D}_{r}\)--we steer the model towards novel sequence generation. This is achieved by creating another logit distribution \(\widehat{\mathrm{P}_{\theta}}\):

```
Negative prompting
Let \(\mathrm{P}_{\theta}\) be the domain model and \(\bar{c}\) be the negative prompt. The logit distribution \(\widehat{\mathrm{P}_{\theta}}\) is defined as:
\(\log\widehat{\mathrm{P}_{\theta}}(w_{i}|w_{j<i},\bar{c})=\log\mathrm{P}_{\theta}(w_{i}|w_{j<i},\bar{c})+\eta\left(\log\mathrm{P}_{\theta}(w_{i}|w_{j<i})-\log\mathrm{P}_{\theta}(w_{i}|w_{j<i},\bar{c})\right)\),
where \(\eta\in[0,1]\) is a hyperparameter that controls the strength of the negative prompting.
```

The negative prompt \(\bar{c}\) adjusts the next token's log probability, diminishing the likelihood of tokens found in \(\bar{c}\). The novelty level of the synthetic examples is regulated by the parameter \(\eta\). In practice, we maintain a dynamic set of real and synthetic examples, with the tokens therein constituting our negative prompt \(\bar{c}\). This strategy, while maintaining domain-specificity via contrastive decoding, promotes diversity and novelty by discouraging the generation of previously encountered or existing examples.

**Final modified logits** This combination of contrastive expert guidance with negative prompting ensures a fine balance between adhering to the domain distribution and maintaining diversity, resulting in the generation of synthetic examples that are both novel and representative of the real dataset \(\mathcal{D}_{r}\).
The final distribution used to sample the next token during decoding is obtained by modifying the logits (i.e., the inputs to the softmax function that calculates the probabilities of the next token) of the domain model according to both the contrastive objective and the negative prompting. The final STEER distribution, which we denote \(\mathrm{P}^{*}_{\theta}\), is given by: ``` STEER Let \(\widetilde{\mathrm{P}_{\theta}}\) be the contrastive expert guidance-modified logit distribution, and \(\widehat{\mathrm{P}_{\theta}}\) be the logit distribution with negative prompting. The final STEER distribution \(\mathrm{P}^{*}_{\theta}\) is defined as: \(\log\mathrm{P}^{*}_{\theta}(w_{i}|w_{j<i},\bar{c})=\log\widetilde{\mathrm{P}_{\theta}}(w_{i}|w_{j<i})+\log\widehat{\mathrm{P}_{\theta}}(w_{i}|w_{j<i},\bar{c})\). ``` Once we have the modified logits, we can perform nucleus sampling to generate the next token in the sequence. We continue this process until we generate a complete synthetic example. Repeating this process, sampling negative prompts from the real and current synthetic examples each time, yields a synthetic text dataset. This process is outlined in Algorithm 1 in the Appendix. ## 4 Methodology ### Datasets Description Three distinct datasets were chosen to validate the generality and flexibility of the proposed method. The _Arxiv Hypotheses_ dataset consists of 10,000 scientific hypotheses extracted from Arxiv astronomy abstracts (that is, abstracts with the astro-ph tag) using GPT-3.5. This provides a complex semantic space suitable for assessing generative model fidelity. The _Jigsaw Toxic Comments_ dataset from Kaggle comprises user comments labeled for toxicity and facilitates conditional data generation, enabling downstream classification. We selected 15,000 comments with a positive toxicity label and 15,000 comments with a negative toxicity label for a total of 30,000 examples. The third dataset, _CommonsenseQA_, developed by the Allen Institute for AI, contains 12,247 multiple-choice questions and offers a challenging platform to evaluate the model's understanding of intricate semantics and general world knowledge. We extracted the questions, multiple-choice options and answers to form one string for each example. For more detail on data curation, see the Appendix. ### Evaluation metrics The evaluation process for our proposed method begins with the instruction fine-tuning of the Falcon-7B open-source model to perform generation using the three datasets. Specifically, the model is given the real dataset of examples as instruction-completion pairs. For instance, the astronomy hypothesis generation task had the instruction "Generate a scientific hypothesis about astronomy". These fine-tuned models are used to generate 1000 examples for each dataset using the same instruction as in fine-tuning, using greedy decoding, nucleus sampling [10], contrastive search sampling [11] and STEER sampling. STEER itself uses nucleus sampling with its additional logit reshaping. To ensure balance for downstream tasks and comparison, uniform frequencies of each label are generated for the _Jigsaw Toxic Comments_ and _CommonsenseQA_ datasets. For CommonsenseQA, we generated not only the multiple-choice question and options but also the answer at the end of the generation, in the form <Question> A. <Option A> B. <Option B>... E. <Option E>. Answer: <answer>. 
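Putting the pieces together, a hedged sketch of a single STEER decoding step might look as follows: the two reshaped log-probability vectors are summed and the result is passed through standard nucleus (top-\(p\)) sampling. The helper name, the default hyperparameter values, and the top-\(p\) threshold are illustrative assumptions, not the paper's reference implementation (Algorithm 1 in the Appendix).

```python
import torch

def steer_sample_step(domain_logprobs: torch.Tensor,
                      base_logprobs: torch.Tensor,
                      neg_cond_logprobs: torch.Tensor,
                      gamma: float = 0.5, eta: float = 0.5,
                      top_p: float = 0.9) -> int:
    """One STEER decoding step: combine the reshaped scores, then nucleus-sample a token id."""
    guided = domain_logprobs - gamma * base_logprobs                                # contrastive expert guidance
    diversified = neg_cond_logprobs + eta * (domain_logprobs - neg_cond_logprobs)   # negative prompting
    combined = guided + diversified                                                  # final STEER scores

    # Nucleus (top-p) sampling over the combined scores.
    probs = torch.softmax(combined, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    keep = (cumulative - sorted_probs) < top_p          # smallest set of tokens covering top_p mass
    sorted_probs = sorted_probs * keep
    sorted_probs = sorted_probs / sorted_probs.sum()
    choice = torch.multinomial(sorted_probs, num_samples=1)
    return int(sorted_idx[choice])
```

The same step would be repeated autoregressively until an end-of-sequence token is produced, with the negative prompt refreshed from the growing pool of real and synthetic examples.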
In parallel, 1000 samples are randomly selected from a corresponding holdout test set of the real dataset (with the same uniform distribution of labels for the latter two tasks). All comparative metrics presented below refer to these 1000 samples, both real and synthetic. #### Diversity The metrics used to assess diversity are _normalised \(n\)-grams_ (in our case we choose \(n=3\)) and the _diversity score_. _Normalised \(n\)-grams_. This is calculated as the proportion of duplicated \(n\)-grams in the text: norm-\(n=100\times\Big{(}1.0-\frac{|\text{unique $n$-grams}|}{|\text{total $n$-grams}|}\Big{)}\) [14]. This metric is an indicator of repetition: a lower value corresponds to a greater variety in language use. _Diversity score_. This is a product of repetition measures at different \(n\)-gram levels, typically defined by \(\prod_{n=2}^{4}\big{(}1.0-\frac{\text{norm-}n}{100}\big{)}\). #### Coherence We use three metrics to assess coherence: _cosine similarity_ between real and synthetic datasets, _MAUVE_, and _adversarial AUROC_. These measures rely on embeddings for each example (real and synthetic), calculated using OpenAI's text-embedding-ada-002 model, which produces an embedding of length 1536 for each sample. _Cosine similarity_. As a measure of semantic coherence between datasets, we calculate the cosine similarity between the mean real embedding and the mean synthetic embedding: \(v_{r}^{\top}v_{s}/(||v_{r}||\cdot||v_{s}||)\). A larger cosine similarity suggests greater coherence with the real dataset. _MAUVE_. MAUVE calculates information divergences in a quantised embedding space and thus measures token-distribution similarity between real and synthetic data [10]. MAUVE first quantises the embedding space into a finite number of bins, and then calculates the Kullback-Leibler divergence between the real and synthetic data distributions in each bin. The average of the Kullback-Leibler divergences across all bins is the MAUVE score. A higher MAUVE score means a closer similarity between distributions. _Adversarial AUROC_. We train an adversarial classifier to distinguish between real and synthetic data. The idea is that more coherent synthetic data will lead to a lower AUROC score for this classifier, as it becomes more difficult to distinguish between real and synthetic examples. #### Downstream performance For downstream performance evaluation, the type of validation is determined by the downstream task. For the _Arxiv Hypotheses_, eight expert annotators were shown a sequence of hypothesis pairs, one being an example generated with nucleus sampling and the other with STEER. They were asked to annotate which one they preferred in terms of creativity and plausibility, without knowing which was which. The performance recorded in Figure 2 refers to the proportion of times the STEER sample was deemed to be better than the nucleus sample. A one-sided \(Z\)-test for proportions was used, testing the null hypothesis that the win rate was \(0.5\) against the alternative that it was above \(0.5\). For the _Jigsaw Toxic Comments_ and _CommonsenseQA_ datasets we train a classifier and a question-answering model, respectively, on the synthetic data. The classifier for Jigsaw was an XGBoost model trained on the synthetic embeddings. The question-answering model for CommonsenseQA was a fine-tuned BERT model with a classification head. The aim is to demonstrate superior downstream performance of the STEER dataset. 
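As a concrete reference, the following is a minimal sketch of the diversity and mean-embedding cosine similarity computations described above; the whitespace tokenisation and the function names are our own simplifications rather than the exact pipeline used in the paper.

```python
import numpy as np

def norm_n(text: str, n: int = 3) -> float:
    """Percentage of duplicated n-grams: 100 * (1 - |unique n-grams| / |total n-grams|)."""
    tokens = text.split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return 100.0 * (1.0 - len(set(ngrams)) / len(ngrams))

def diversity_score(text: str) -> float:
    """Product of (1 - norm_n / 100) over n = 2..4."""
    score = 1.0
    for n in range(2, 5):
        score *= 1.0 - norm_n(text, n) / 100.0
    return score

def mean_embedding_cosine(real_emb: np.ndarray, synth_emb: np.ndarray) -> float:
    """Cosine similarity between the mean real and mean synthetic embedding vectors."""
    v_r, v_s = real_emb.mean(axis=0), synth_emb.mean(axis=0)
    return float(v_r @ v_s / (np.linalg.norm(v_r) * np.linalg.norm(v_s)))
```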
## Results Table 1 presents a comparative study of STEER with alternative decoding and sampling methods, namely greedy decoding, nucleus sampling, and contrastive search sampling. We evaluate the synthetic datasets of 1000 samples generated using these decoding methods on the basis of normalised \(n\)-grams, diversity, cosine similarity, MAUVE, and adversarial AUROC for a fine-tuned Falcon-7B model. The downstream performance comparison of STEER against other methods for four different models is encapsulated in Figure 2 and Table 2. These evaluations provide a comparative analysis of STEER's performance with real data and data generated with several alternative methods. Figure 2: Win rate of STEER against nucleus sampling in the hypothesis generation task. The levels of significance are marked as follows: \(***\) denotes \(p<0.001\), \(**\) denotes \(0.001\leq p<0.01\), and \(*\) denotes \(0.01\leq p<0.05\). Left denotes human evaluation, right is GPT-4. STEER outperformed all methods in the _Jigsaw_ and _CommonsenseQA_ tasks for classification accuracy. Further, STEER had a significantly positive win rate over nucleus-generated samples in the _Hypothesis_ task for certain values of \(\gamma\) and \(\eta\), for both human and GPT-4 evaluators. We noticed that astronomy domain experts tended to have lower win rates for STEER compared to general evaluators. Further comparisons between astronomy domain experts and general human evaluators are presented in the appendix. Finally, we investigated the impact of the contrastive expert guidance hyperparameter \(\gamma\) and the negative prompting hyperparameter \(\eta\) on both diversity and coherence. Figure 3 demonstrates how key diversity and coherence metrics are affected as we vary each of the hyperparameters. 50 synthetic Arxiv hypothesis examples were generated for each combination of \(\gamma\) and \(\eta\), and the adversarial AUROC, normalised \(n\)-grams and MAUVE scores were calculated. In Figure 4, we provide a notion of how generation diversity (approximated by diversity score) and generation coherency (approximated by MAUVE) vary as we vary one hyperparameter at a time (including the number of negative prompts), keeping the other constant. Whilst this is limited by the context size of the model (2048 tokens for Falcon-7B), we use the maximum number of negative prompts that fit into the context size of the model when the prompt is also included. ## Discussion In this study, we offer a comprehensive analysis of STEER, examining its capabilities and performance across diverse model architectures and various downstream tasks. Our investigation not only sheds light on its potential advantages but also uncovers constraints that open new avenues for research in logit manipulation, with the ultimate goal of harnessing STEER for synthetic text generation across multiple domains and downstream tasks. Table 1 summarises the main results, demonstrating that STEER, even with minimal hyperparameter tuning, surpasses conventional out-of-the-box sampling methods and the state-of-the-art logit-manipulation technique known as contrastive sampling. The key strength of STEER lies in its ability to significantly enhance the diversity of generated text (quantified by normalised \(n\)-grams and the diversity score) without compromising coherence (as assessed by MAUVE, adversarial AUROC, and cosine similarity). 
Interestingly, contrastive sampling was found to underperform other methods, a phenomenon that persisted despite extensive tuning of the penalty term \(\alpha\), which controls the strength of the degeneration penalty (see Su et al. (2022)). An evaluation of the data quality produced by STEER was also conducted by training models on downstream tasks derived from the synthetic data, specifically on the _Jigsaw_ and _CommonsenseQA_ datasets. Both tasks involved classifiers and multiple-choice question-answering models respectively trained on synthetic data and evaluated on real data. Although the improvements were marginal, STEER consistently outperformed other methods (Table 2). We also engaged expert evaluators to assess human preferences on generated _Arxiv Hypotheses_. Although STEER achieved a win rate above 0.5 for low values of \(\gamma\) and \(\eta\) compared to the nucleus sampling method, this advantage diminished rapidly as both parameters exceeded 0.4 (Figure 2). Along with the performance curves shown in Figure 4, these results hint at a narrow optimal range for hyperparameters, highlighting the need for specialised tuning techniques, including grid search on logarithmic scales [11]. This figure also appears to show that good performance can be achieved with only 5-10 negative prompts. Further, MAUVE, measuring fidelity, declines as the number of negative samples increases. We hypothesise that this is because the conditional distribution becomes too constrained to produce coherent text. Interestingly, the same blind evaluation with GPT-4 revealed more consistent win rates of STEER over nucleus, appearing to have a weaker gradient with respect to the magnitude of \(\gamma\) and being much more sensitive to \(\eta\) (Figure 2). This observation, coupled with the lower STEER preferences among domain experts compared with general annotators, raises the conjecture that STEER may produce synthetically diverse and coherent examples that, however, may lack depth in semantic structure and plausibility. Further assessments across a wider task spectrum and more refined evaluation metrics will be crucial to validate or refute this hypothesis. For instance, filtering poor hypotheses from the initial fine-tuning dataset or performing additional quality ranking with fine-tuned models might align hypothesis generation better with domain experts. ### Limitations and future directions In this study, several limitations have been identified that must be acknowledged. One significant concern is the possible superficial optimisation of the evaluation metrics employed. The methods applied to gauge performance may not encompass the full complexity of the underlying processes. Furthermore, human preference evaluations have exhibited a lower alignment with STEER among astronomy domain experts for the hypothesis generation, pointing towards a potential disconnect between machine-optimised objectives and human-centric goals. Quantitative results on this discrepancy are presented in the appendix. The lack of variability across cosine similarity as a measure in our evaluation also presents a challenge, as it may not adequately capture the semantic nuances within the text. Moreover, the effectiveness of the negative prompting component of STEER is limited by the context window of the model, which constrains the number of negative prompts that can be used. 
STEER is also twice as expensive at inference time as typical decoding methods due to the two forward passes: one through the base model \(\mathrm{P}_{\phi}\) and one through the fine-tuned model \(\mathrm{P}_{\theta}\), at each autoregressive step. As such, STEER may be more useful in low-data regimes where the issue is not compute but rather the lack of data to learn from: STEER emphasises quality over speed. Looking forward, several promising avenues can be explored to build upon this work. Investigating the effect of STEER on smaller models, such as GPT-2, might reveal insights into how logit-manipulation techniques can lift the performance of smaller models to match or even surpass their larger counterparts. Implementing STEER in the latent space of hidden-layer representations rather than the token space could also provide more nuanced control over text generation. Allowing the decoder to deviate from the highest-probability path to explore more creative continuations, combined with chain-of-thought style back-and-forth verification, could supplement this. Further, the application of STEER to bigger models warrants investigation, providing a comprehensive understanding of the method's scalability. Contrastive expert guidance functions as a regularisation technique during generation, compelling the model to create outputs that are more characteristic of the target domain. This approach can be viewed as an efficient method for enhancing fidelity, especially in scenarios where data is sparse. A valuable extension to this work would be a comparative analysis between the performance of contrastive expert guidance and the scaling of performance as the size of the real dataset available for fine-tuning increases. Such an analysis would shed light on whether contrastive guidance can serve as an advanced and efficient form of knowledge distillation, transcending the need for additional data collection and curation. ## Conclusion In this work, we illustrate that challenges with data coherency and diversity in Large Language Models (LLMs) can be mitigated through strategic logit reshaping during inference. Our novel method, STEER, leverages contrastive expert guidance and negative prompts drawn from real and synthetic examples to balance adherence to the data distribution with diversity. With its dynamic adjustments in the logit space, STEER outperforms previous approaches on the distinct tasks of hypothesis, toxic comment and commonsense question generation, as per conventional synthetic data generation metrics. Notably, STEER also exhibits a superior ability to capture semantic information in its synthetic examples, allowing other models to achieve heightened downstream performance (human preference, binary classification accuracy, and question-answering accuracy) across all tasks. Additionally, our ablation studies underscore the control STEER provides over the coherency-diversity trade-off via its hyperparameters. This work not only addresses an existing gap in LLMs but also paves the path for more tailored applications of synthetic data generation in diverse domains. 
\begin{table} \begin{tabular}{l c c c c c} \hline \hline & STEER & Greedy & Nucleus & Contrastive & _Real_ \\ \hline _Jigsaw_ & **0.94**\(\pm\) 0.02 & 0.91 \(\pm\) 0.03 & 0.90 \(\pm\) 0.02 & 0.89 \(\pm\) 0.01 & _0.98_ \\ _QA_ & **0.41**\(\pm\) 0.03 & 0.35 \(\pm\) 0.04 & 0.40 \(\pm\) 0.03 & 0.29 \(\pm\) 0.02 & _0.55_ \\ \hline \hline \end{tabular} \end{table} Table 2: Downstream performance comparison for Falcon-7B across two datasets: Jigsaw Toxic Comments and CommonsenseQA. Models were evaluated on five different splits of the real data.
2303.03989
Spectroscopic and evolutionary analyses of the binary system AzV 14 outline paths toward the WR stage at low metallicity
The origin of the observed population of Wolf-Rayet (WR) stars in low-metallicity (low-Z) galaxies, such as the Small Magellanic Cloud (SMC), is not yet understood. Standard, single-star evolutionary models predict that WR stars should stem from very massive O-type star progenitors, but these are very rare. On the other hand, binary evolutionary models predict that WR stars could originate from primary stars in close binaries. We conduct an analysis of the massive O star, AzV 14, to spectroscopically determine its fundamental and stellar wind parameters, which are then used to investigate evolutionary paths from the O-type to the WR stage with stellar evolutionary models. Multi-epoch UV and optical spectra of AzV 14 are analyzed using the non-LTE stellar atmosphere code PoWR. An optical TESS light curve was extracted and analyzed using the PHOEBE code. The obtained parameters are put into an evolutionary context, using the MESA code. AzV 14 is a close binary system consisting of two similar main sequence stars with masses of 32 Msol. Both stars have weak stellar winds with mass-loss rates of log $\dot{M}$ = -7.7. Binary evolutionary models can explain the empirically derived stellar and orbital parameters. The model predicts that the primary will evolve into a WR star with T = 100 kK, while the secondary, which will accrete significant amounts of mass during the first mass transfer phase, will become a cooler WR star with T = 50 kK and is predicted to have increased oxygen abundances compared to other WR stars. This model prediction is supported by a spectroscopic analysis of a WR star in the SMC. We hypothesize that the populations of WR stars in low-Z galaxies may have bimodal temperature distributions. Hotter WR stars might originate from primary stars, while cooler WR stars are the evolutionary descendants of the secondary stars if they accreted a significant amount of mass.
D. Pauli, L. M. Oskinova, W. -R. Hamann, D. M. Bowman, H. Todt, T. Shenar, A. A. C. Sander, C. Erba, V. M. A. Gómez-González, C. Kehrig, J. Klencki, R. Kuiper, A. Mehner, S. E. de Mink, M. S. Oey, V. Ramachandran, A. Schootemeijer, S. Reyero Serantes, A. Wofford
2023-03-07T15:40:22Z
http://arxiv.org/abs/2303.03989v1
Spectroscopic and evolutionary analyses of the binary system AzV 14 outline paths toward the WR stage at low metallicity ###### Abstract Context: The origin of the observed population of Wolf-Rayet (WR) stars in low-metallicity galaxies, such as the Small Magellanic Cloud (SMC), is not yet understood. Standard, single-star evolutionary models predict that WR stars should stem from very massive O-type star progenitors, but these are very rare. On the other hand, binary evolutionary models predict that WR stars could originate from primary stars in close binaries. Aims: We conduct an analysis of the massive O star, AzV 14, to spectroscopically determine its fundamental and stellar wind parameters, which are then used to investigate evolutionary paths from the O-type to the WR stage with stellar evolutionary models. Methods: Multi-epoch UV and optical spectra of AzV 14 are analyzed using the non-local thermodynamic equilibrium (non-LTE) stellar atmosphere code PoWR. An optical TESS light curve was extracted and analyzed using the PHOEBE code. The obtained parameters are put into an evolutionary context, using the MESA code. Results: AzV 14 is a close binary system with a period of \(P=3.7058\pm 0.0013\) d. The binary consists of two similar main sequence stars with masses of \(M_{1,2}\approx 32\,M_{\odot}\). Both stars have weak stellar winds with mass-loss rates of \(\log(\dot{M}/(M_{\odot}\,{\rm yr}^{-1}))=-7.7\pm 0.2\). Binary evolutionary models can explain the empirically derived stellar and orbital parameters, including the position of the AzV 14 components on the Hertzsprung-Russell diagram, revealing its current age of 3.3 Myr. The model predicts that the primary will evolve into a WR star with \(T_{\rm eff}\approx 100\) kK, while the secondary, which will accrete significant amounts of mass during the first mass transfer phase, will become a cooler WR star with \(T_{\rm eff}\approx 50\) kK. Furthermore, WR stars that descend from binary components that have accreted significant amounts of mass are predicted to have increased oxygen abundances compared to other WR stars. This model prediction is supported by a spectroscopic analysis of a WR star in the SMC. Conclusions: Inspired by the binary evolutionary models, we hypothesize that the populations of WR stars in low-metallicity galaxies may have bimodal temperature distributions. Hotter WR stars might originate from primary stars, while cooler WR stars are the evolutionary descendants of the secondary stars if they accreted a significant amount of mass. These results may have wide-ranging implications for our understanding of massive star feedback and binary evolution channels at low metallicity. ## 1 Introduction Massive stars (\(M_{\rm initial}>8\,M_{\odot}\)) in all evolutionary phases strongly affect their galactic neighborhood via stellar winds and ionizing fluxes. During core-H burning on the main sequence, they have spectral types O and early B. As massive stars evolve, their outer hydrogen (H) rich envelopes could be removed by stellar winds. The majority of massive stars are born in binary systems (e.g., Sana et al., 2012, 2014; Moe & Di Stefano, 2017), and interactions with a companion star may also lead to a removal of the outer envelope. Highly evolved massive stars that lost a large portion of their outer H-rich layers and have optically thick winds are spectroscopically classified as Wolf-Rayet (WR) stars. 
The WR stars are typically hotter and have stronger, opti cally thick winds when compared with their evolutionary predecessors. These stars come in two major subtypes, WN and WC, which are spectroscopically identified by strong emission lines of nitrogen and carbon, respectively. WR stars end their lives with a core-collapse event, likely leading to the formation of black holes (BHs; e.g., Sukhbold et al., 2016; Gal-Yam et al., 2022). Understanding the formation pathways of WR stars in nearby low-metallicity galaxies is needed for quantifying stellar feedback and compact object populations in conditions resembling the early Universe. The Small Magellanic Cloud (SMC) galaxy has a metallicity of \(Z\approx 1/7\,Z_{\odot}\)(Hunter et al., 2007; Trundle et al., 2007) and is nearby (\(d\approx 61\) kpc; Hilditch et al., 2005), thus providing an excellent test bed for investigating stars at low metallicity. The proximity of the SMC and its low foreground and intrinsic extinction allows for a detailed study of stellar and wind parameters of low-metallicity populations of O and WR stars. This enables us to obtain a realistic understanding of massive star evolution and feedback at low metallicity in general. Yet what we have learned so far is perplexing. Only 12 WR stars exist in the SMC. All of them are very luminous and (except for one WO-type star) contain some hydrogen (Hainich et al., 2015; Shenar et al., 2016). According to the standard, single-star evolutionary tracks, their progenitors are O-type stars with masses \(\gtrsim 40\,M_{\odot}\)(Shenar et al., 2020, and references therein). For an order of magnitude estimate, one can assume that a massive star spends \(~{}10\,\%\) of its life in a WR stage. This implies that a galaxy containing approximately ten WR stars should contain approximately one-hundred massive O stars. However, there is a severe paucity of such O-stars in the SMC (Ramachandran et al., 2019; Schootemeijer et al., 2021). Close binary evolution is expected to play a principal role in the formation of WR stars (Paczynski, 1967; Kippenhahn & Weigert, 1967). However, the importance of binarity is still under debate and different studies have come to different conclusions (e.g., Vanbeveren et al., 2007; Eldridge et al., 2017; Shenar et al., 2020; Pauli et al., 2022). Up to now, the evolution toward the WR stage in binaries has chiefly been studied for primary stars, while the evolution of secondary stars has been largely neglected. In this paper, we aim to gain insights into the evolution of both components in a binary with well-established stellar parameters. One of the youngest and earliest O-type stars in the SMC is AzV 14, the main ionizing source of the H ii region NGC 261 (see Fig. 1). Previously, single epoch optical spectra of AzV 14 have been analyzed by Massey et al. (2004) and Mokiem et al. (2006). The star was classified as O5 V with a reported spectroscopic mass of \(74\,M_{\odot}-90\,M_{\odot}\) being in the mass range of potential WR single-star progenitors. From a newly obtained light curve, we know now that AzV 14 is in fact a binary. In this paper, we complement recently obtained spectra of AzV 14 in the UV and optical by archival data, aiming at securely determining stellar parameters and, on this basis, to model the evolution of the AzV 14 components toward WR stages. ## 2 Observations Presently, six spectra of AzV 14 obtained at different epochs covering the far-UV, UV and optical wavelength ranges exist in telescope archives. 
One far-UV spectrum was taken with the Far Ultraviolet Spectroscopic Explorer (FUSE; Oegerle et al., 2000), two UV spectra with the Hubble Space Telescope's (HST) Faint Object Spectrograph (FOS; Keyes et al., 1995) and Space Telescope Imaging Spectrograph (STIS; Branton et al., 2021), and three optical spectra with the European Southern Observatory (ESO) Very Large Telescopes (VLT) Ultraviolet and Visual Echelle Spectrograph (UVES; Dekker et al., 2000) and the X-Shooter (Vernet et al., 2011) spectrograph. One X-Shooter spectrum was taken as part of the XShootU project1 (from here on "X-Shooter (2020)") and one as part of our program, ID 109.22V0.001 (from here on "X-Shooter (2022)"). A detailed description of the individual spectra and photometry can be found in Appendix A. Footnote 1: [https://massivestars.org/xshootu/](https://massivestars.org/xshootu/) AzV 14 is close to the SMC bar, thus we adopt a distance modulus of \(DM=18.9\,\)mag (Westerlund, 1997; Harries et al., 2003; Hilditch et al., 2005). The radial velocities (RV) of the different regions in the SMC are not uniform (e.g., De Propris et al., 2010). We estimate the RV shift of the NGC 261 complex by fitting Gaussians to interstellar absorption lines as \(v_{\rm NGC261}=148\pm 2\,\)km s\({}^{-1}\). All spectra shown in this work are in the rest frame of NGC 261. AzV 14 was observed by the NASA Transiting Exoplanet Survey Satellite (TESS; Ricker et al., 2015) mission (TIC 180206579) during its sectors 1, 27 and 28 in full-frame image (FFI) mode with a cadence of 30, 10 and 10 min, respectively. TESS has a low spatial resolution of 21'' px\({}^{-1}\). However, since AzV 14 is the brightest optical object in this region we can extract its light curve (see Appendix C). Furthermore, AzV 14 was detected by the _Chandra_ X-ray telescope (Weisskopf et al., 2000). ## 3 Spectral analysis ### Spectral line variability reveals AzV 14 as a close binary The multi-epoch spectroscopy allows us to search for signs of binarity. In Fig. 2, we show selected He i and He ii lines in the Figure 1: False-color HST image composed of the images in the F475W (blue) and F657N (yellow) filters. The position of AzV 14 and the approximate size of the NGC 261 H ii region are indicated by white circles. UVES and X-Shooter spectra, taken at different epochs. Massey et al. (2004) and Mokiem et al. (2006) attributed the apparent emission feature in the middle of helium absorption lines of the UVES to a nebular contamination. New multi-epoch X-Shooter spectra clearly show that the profiles of helium lines are complex and variable. They are well explained as composite lines originating from two components of a binary system. The UVES and X-Shooter (2022) spectra must have been taken at a comparable orbital phase, as the line profile is similar in both spectra, while the X-Shooter (2020) spectrum must have been obtained close to conjunction. This is confirmed by the binary ephemeris derived in the Sect. 4. ### Establishing projected rotation velocity and RVs The projected rotation velocity as well as the RV shifts of the individual binary components are necessary ingredients of spectral modeling. In the following we describe how we derived these quantities. #### 3.2.1 Projected rotation velocities The helium lines in UVES and X-Shooter (2022) spectra are resolved into two components with lines of similar depths and widths, indicating that the components of the binary system are similar stars. 
We employ the iacob broad tool (Simon-Diaz & Herrero 2014) to measure the projected rotational velocities for each binary component. For the fitting procedure the lines He ii \(\lambda 4200\), He i \(\lambda 4471\), He ii \(\lambda 4542\), and He ii \(\lambda 4686\) were used, while cutting off the parts of the lines that are blended by the companion. Although helium lines are not only rotationally, but also pressure broadened, they were analyzed because of the lack of available metal lines. The iacob broad tool derives two different values for the projected rotational velocities, one from a Fourier transformation (FT) and one from the goodness of the fit (GoF). We quote the average value obtained by the different methods. As the estimates on the projected rotation velocities are based on blended lines, we adopt conservative error margins. We find that both binary components have similar projected rotational velocities of \(v_{\rm rot}\sin i=90\pm 20\) km s\({}^{-1}\). #### 3.2.2 RVs of the binary components in each spectrum The RVs of all spectra are determined by shifting the synthetic spectra until they fit the observations using a Markov chain Monte Carlo method combined with a least square fitting (Pauli et al. 2022b, their section 3.1.2 and their appendix A). The UVES spectrum shows RVs of \(v_{1}=-101.9\pm 2.6\) km s\({}^{-1}\) and \(v_{2}=80.5\pm 3.2\) km s\({}^{-1}\) for the primary and the secondary, respectively. This is comparable to those of the X-Shooter (2022) spectrum, yielding RV shifts of \(v_{1}=-112.4\pm 2.6\) km s\({}^{-1}\) and \(v_{2}=98.6\pm 2.9\) km s\({}^{-1}\). In the X-Shooter (2020) spectrum the lines are not split due to an orbital phase close to conjunction. We estimated \(v_{1}=-46.1\pm 5.8\) km s\({}^{-1}\) and \(v_{2}=22.3\pm 5.9\) km s\({}^{-1}\) for the primary and secondary, respectively. Accurate measurements of RV shifts in the optical spectra allow us to estimate the temperature and surface gravity of the individual binary components (see Sect. 3.3.1). In the STIS spectrum, the He ii \(\lambda 1640\) line is split into two individual components of similar strength (see Fig. 11), yielding RVs of \(v_{1}=-128.6\pm 2.5\) km s\({}^{-1}\) for the primary and \(v_{2}=113.2\pm 2.3\) km s\({}^{-1}\) for the secondary. Measuring RVs in the UV spectra allows to determine the terminal wind velocity (\(v_{\infty}\)) and thus the mass-loss rate (\(\dot{M}\)) more precisely (see Sect. 3.3.4). We estimated the RVs in the low resolution FOS spectrum such that the width of the oxygen lines in the range of 1330 A - 1420 A can be matched, arriving at \(v_{1}=92.9\pm 6.2\) km s\({}^{-1}\) for the primary and \(v_{2}=113.2\pm 2.8\) km s\({}^{-1}\) for the secondary. ### Stellar atmosphere modeling To analyze the spectra, we employ the Potsdam Wolf-Rayet (PoWR) model atmosphere code (Grafener et al. 2002; Hamann & Grafener 2003, 2004; Todt et al. 2015; Sander et al. 2015). A short characterization of the code is given in Appendix B. #### 3.3.1 Temperatures and surface gravities We measure the stellar temperatures using the ratio between He i to He ii lines (see Fig. 12). To constrain the surface gravities, we used the wings of the Balmer, H\(\beta\), H\(\gamma\), and H\(\delta\) lines in the UVES and X-Shooter (2022) spectra. For the primary we obtained an effective temperature of \(T_{1}=43\pm 2\) kK, while the secondary is slightly cooler with \(T_{2}=42\pm 2\) kK. The surface gravity is \(\log g=4.0\pm 0.2\) for both binary components. 
#### 3.3.2 Stellar luminosities and spectroscopic masses Luminosity \(L\) and color excess \(E_{\rm B-V}\) are determined by fitting the composite spectral energy distribution (SED) -- containing the synthetic flux of both stellar components -- to photometry (see Fig. 13). The color excess is modeled as a combined effect of Galactic foreground, using the reddening law Figure 13: Comparison of selected lines in the UVES (solid blue) and the X-Shooter (dashed pink and dotted yellow) optical spectra. The high resolution UVES spectrum is binned by 0.15 Å to match the X-Shooter spectra. All spectra are convolved with a Gaussian having a FWHM = 0.05 Å for a better comparison. The lines in the UVES and X-Shooter (2022) spectra are separated, while they are blended in the X-Shooter (2020) spectrum. This provides a clear indication for binarity. by Seaton (1979) with \(E_{\rm B-V,\,Gal}=0.03\) mag, and SMC background, using the reddening law by Bouchet et al. (1985) with \(E_{\rm B-V,\,SMC}=0.11\) mag. The luminosities of the two binary components are chosen such that the ratio of the depths of the synthetic helium lines match all spectra. Additionally, we use the C iii \(\lambda 1175\) line in the UV, which is sensitive to changes in the light ratio. Both stars have comparable luminosities in the optical and UV. The final luminosities are \(\log(L_{1}/L_{\odot})=5.41\pm 0.15\) and \(\log(L_{2}/L_{\odot})=5.38\pm 0.15\) for the primary and secondary, respectively. Accordingly, the masses of the primary and secondary are \(M_{\rm spec,\,1}=32^{+8}_{-7}\,M_{\odot}\) and \(M_{\rm spec,\,2}=31^{+8}_{-7}\,M_{\odot}\). The spectroscopic mass ratio is \(q_{\rm spec}=0.97\). When employing the method of Wilson (1941), one can derive an independent measure of the mass ratio from the RVs. This yields \(q_{\rm Wisen}=0.95\pm 0.02\) which is in agreement with the spectroscopic results. #### 3.3.3 CNO surface abundances All metal lines are well-matched with standard initial abundances of the SMC (see Table 2 and Fig. 3). The only line which is not well reproduced is the N iv \(\lambda 3479\) line, which is deeper than predicted by the synthetic spectrum. This absorption line is quite narrow, suggesting that the N iv \(\lambda 3479\) line might be blended with an unidentified ISM line. Pristine abundances prompt us to conclude that the AzV 14 components are young unevolved stars which have not yet interacted. #### 3.3.4 Wind mass-loss rates In the optical spectra of AzV 14, H\(\alpha\) and He ii \(\lambda 4686\) do not show any contribution from winds. However, in the UV one can see wind lines with P Cygni profiles. Key diagnostic lines are the N v \(\lambda\lambda 1239,1243\) and C iv \(\lambda 1548,1551\) resonance doublets and the He ii \(\lambda 1640\) line. By fitting the observed N v and C iv resonance doublets in the STIS and FOS spectra consistently yields a terminal wind velocity of \(v_{\infty}=1600\pm 200\) km s\({}^{-1}\) and a mass loss rate of \(\log(\dot{M}/(\,M_{\odot}\,{\rm yr}^{-1}))=-7.7\) for each of the stars (see Fig. 4). To match the O vi \(\lambda 1032,1038\) resonance doublet in the FUSE spectrum, we include in the model a hot plasma component with \(T_{\rm X}=3\) MK emitting X-rays with \(L_{\rm X,mod}=2\times 10^{30}\) erg s\({}^{-1}\). 
According to the _Chandra_ Point Source Catalog v.2.0 (Evans et al., 2010), the X-ray flux of AzV 14 in the 0.5 keV \(-7.0\) keV band is \(F_{\rm X}=7.3\times 10^{-16}\) erg s\({}^{-1}\) cm\({}^{-2}\), corresponding to an X-ray luminosity corrected for reddening (see Fig. 4). \(L_{\rm X}\approx 5\times 10^{32}\) erg s\({}^{-1}\) or \(\log(L_{\rm X}/L_{\rm bol})\approx-6\). We suggest that the majority of the observed X-rays originate from a colliding wind zone between two binary components in common with O-type binaries in the Galaxy (Rauw & Naze, 2016). A summary of the empirically determined stellar parameters is given in Table 1. ## 4 Orbital analysis The TESS light curve of AzV 14 displays periodic variability (originating from ellipsoidal variability, see Fig. 3). We employed a frequency analysis and phase folding techniques to determine the dominant periodicity in the TESS light curve, yielding an orbital period of \(P_{\rm orb}=3.7058\pm 0.0013\) d (see Appendix C). Orbital parameters from the light and RV curve are determined consistently with the Physics of Eclipsing Binaries (PHOEBE) code version 2.4.5 (Prsa & Zwitter, 2005; Prsa, 2011; \begin{table} \begin{tabular}{l c c c} \hline \hline parameter & primary & secondary & unit \\ \hline \(T_{\rm eff}^{\ (a)}\) & \(42.8\pm 2.0\) & \(41.8\pm 2.0\) & [kK] \\ \(\log\,g\) & \(4.0\pm 0.2\) & \(4.0\pm 0.2\) & [cm s\({}^{-2}\)] \\ \(\log\,L\) & \(5.41\pm 0.15\) & \(5.38\pm 0.15\) & [\(L_{\odot}\)] \\ \(R\) & \(9.3\pm 0.5\) & \(9.2\pm 0.5\) & [\(R_{\odot}\)] \\ \(R_{\rm RL}^{\ (b)}\) & \(15.6\pm 0.8\) & \(15.2\pm 0.7\) & [\(R_{\odot}\)] \\ \(R/R_{\rm RL}\) & \(0.60\pm 0.04\) & \(0.61\pm 0.04\) & \\ \(M_{\rm spec}\) & \(32^{+8}_{-7}\) & \(31^{+8}_{-7}\) & [\(M_{\odot}\)] \\ \(\log\,\dot{M}\) & \(-7.7\pm 0.2\) & \(-7.7\pm 0.2\) & [\(M_{\odot}\) yr\({}^{-1}\)] \\ \(v_{\infty}\) & \(1600\pm 200\) & \(1600\pm 200\) & [km s\({}^{-1}\)] \\ \(v_{\rm tot}\sin i\ ^{(c)}\) & \(90\pm 20\) & \(90\pm 20\) & [km s\({}^{-1}\)] \\ \(v_{\rm tot}^{\ (d)}\) & \(157\pm 40\) & \(157\pm 40\) & [km s\({}^{-1}\)] \\ \(P_{\rm rot}^{\ (d)}\) & \(3.0\pm 0.8\) & \(3.0\pm 0.8\) & [d] \\ \(\log\,Q_{\rm H}\) & \(49.13\) & \(49.10\) & [s\({}^{-1}\)] \\ \(\log\,Q_{\rm Hei}\) & \(48.41\) & \(48.35\) & [s\({}^{-1}\)] \\ \(\log\,Q_{\rm Hei}\) & \(43.41\) & \(43.27\) & [s\({}^{-1}\)] \\ \hline \end{tabular} \({}^{(a)}\) For a definition, see Appendix B. \({}^{(b)}\) Calculated using the orbital parameters obtained with the PHOEBE code (see Sect. 4) and the formula of Eggleton (1983). \({}^{(c)}\) Obtained with the iacob broad tool. \({}^{(d)}\) Based on the inclination \(i=35^{\circ}\) obtained with the PHOEBE code (see Sect. 4). \end{table} Table 1: Summary of the stellar parameters of both binary components obtained from spectroscopic analysis conducted with the PoWR code. Figure 3: Comparison of the phased observed and synthetic light and RV curve of AzV 14. _Upper panel_: Phased TESS light curve (dots) and best fit obtained with the PHOEBE code (red line). For the PHOEBE model we show the \(1\sigma\) and \(2\sigma\) deviations as pink shaded areas. _Lower panel_: Observed (triangles) and fitted RV curves (solid lines) of the primary (green) and secondary (red) component. Prsa et al. 2016; Horvat et al. 2018; Jones et al. 2020; Conroy et al. 2020). The input parameters are provided by the spectral analysis (Table 1). To reduce the free parameter space, the orbital period is fixed to the period measured from the TESS light curve. 
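For illustration, a minimal sketch of the frequency analysis and phase folding mentioned above could be implemented with a Lomb-Scargle periodogram, as below. The file names are placeholders, the frequency search range is an assumption, and the factor of two between the photometric and orbital periods reflects the assumption that the ellipsoidal light curve shows two minima per orbit; the actual orbital solution in this work is obtained with the PHOEBE code.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Time (days) and normalized flux of the extracted TESS light curve (hypothetical file names)
t = np.loadtxt("azv14_tess_time.txt")
flux = np.loadtxt("azv14_tess_flux.txt")

# Frequency analysis: Lomb-Scargle periodogram over a plausible range of frequencies (1/d)
frequency, power = LombScargle(t, flux).autopower(minimum_frequency=1.0 / 20.0,
                                                  maximum_frequency=5.0)
f_peak = frequency[np.argmax(power)]

# Ellipsoidal variability produces two minima per orbit, so the orbital period is
# taken as twice the photometric period at the strongest peak (assumption).
p_orb = 2.0 / f_peak

# Phase-fold the light curve on the adopted orbital period
phase = ((t - t[0]) / p_orb) % 1.0
```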
Since the light curve is sinusoidal and has minima at phases \(\Phi=0.0\) and \(\Phi=0.5\), the binary orbit must be close to circular (\(e=0\)). We fit the remaining orbital parameters, including the inclination. A more detailed description can be found in Appendix E. The best fit is achieved with an inclination of \(i=35\pm 5\degr\) and an epoch of the primary eclipse of \(T_{0}=2\,459\,036.101\pm 0.004\) HJD. The orbital masses are \(M_{\rm orb,1}=33.6^{+50}_{-3,7}\) and \(M_{\rm orb,1}=31.9^{+4.8}_{-3,7}\), being in agreement with the spectroscopic analysis. The remaining orbital parameters are listed in Table 1 and 2. The best fitting light and RV curves are shown in Fig. 3, including the phases at which each spectrum was taken. The X-Shooter (2020) spectrum was obtained in the orbital phase close to conjunction, while the UVES and X-Shooter (2022) spectra were measured at comparable phases out of conjunction. Given the inclination of the system, the rotation velocities of the two components are \(v_{\rm rot}=157\pm 40\) km s\({}^{-1}\). Hence, the rotation period of each star is \(P_{\rm rot}=3.0\pm 0.8\) d which is close to the orbital period, implying that the binary is tidally locked. Using the orbital period and the mass-ratio, we calculated the Roche radii for each star (see Table 1). By comparison to the previously determined stellar radii we conclude that both stars are underfilling their Roche lobe (\(R/R_{\rm RL}=0.6\)), further supporting our conclusion that both stars have not interacted yet. ## 5 Binary evolution models predict the formation of WR stars Empirically derived stellar parameters of the binary components of AzV 14 (Table 1) are used to anchor the position of the system on the Hertzsprung-Russell diagram (HRD), and on this basis to investigate their possible future evolution. We calculated binary evolution models with the Modules for Experiments in Stellar Astrophysics (MESA) code (Paxton et al. 2011, 2013, 2015, 2018, 2019). The methods and basic assumptions made in our models are described in Appendix F. In our models, a star enters WR evolutionary stages when the optical depth at its surface is \(\tau\geq 0.2\). For further information we refer to Appendix G. The current positions of the binary components in the HRD can be explained best by a binary evolutionary model with initial masses of \(M_{\rm 1,ini}=35.0\,M_{\odot}\) and \(M_{\rm 2,ini}=33.5\,M_{\odot}\), and an initial period of \(P_{\rm orb,ini}=3.7\) d. The model predicts that AzV 14 formed 3.3 Myr ago. The corresponding evolutionary tracks are shown in Fig. 4. According to the evolutionary model the binary components have not interacted yet, but will exchange mass in the future. During the future mass-transfer event, it is predicted that the secondary will accrete about \(15\,M_{\odot}\). Both binary components evolve successively into WR stars. Both components will become H-poor WN type stars similar to WN stars observed in the SMC. For comparison the HRD shown in Fig. 4 includes the positions of the apparently single WR stars SMC AB 10 and AB 4. The primary will enter the WN stage at an age of 5.9 Myr and spends most of its WR lifetime (0.35 Myr) close to the helium zero-age main sequence, which is roughly at \(\log(T_{\rm eff}/{\rm K})\approx 5.0\) (SMC AB 10 matches this position). 
During the WR stage, the primary will have a mass of \(17\,M_{\odot}\), while being accompanied by a \(~{}50\,M_{\odot}\) main sequence Figure 4: Evolutionary tracks of the primary (left) and secondary (right) compared to the empirical positions of AzV 14 (triangles with error ellipses). The evolutionary tracks are color-coded by the surface hydrogen abundance of the model and are overlayed by black dots corresponding to equidistant time-steps of 0.3 Myr. In the background, single star tracks are shown as dashed black lines. The instance of time in which the model is closest to the observations is marked by blue stars. The tracks are labeled by their initial stellar masses. The phases during which the model is expected to be observed as WR star, this is where \(\tau\geq 0.2\), are highlighted by bold black frames. In addition, the positions of the WR stars AB 10 and AB 4, are marked by squares which are color-coded according to their observed surface-hydrogen abundances. star in an \(\approx 8.5\) d orbit. The mass ratio has only a second-order effect on the amount of mass removed during the mass-transfer phase (e.g., Pauli et al. 2022a, their section 3.2), meaning that the primary would be located (if it goes through a stable mass-transfer event) at a comparable position in the HRD, even if it had a less massive companion that could prevent or complicate a detection. We presume that at the end of its evolution, the primary will directly collapse into a BH and remains bound. The remaining secondary star, will continue its evolution on the main sequence. Shortly after that stage, it will initiate mass-transfer onto the BH, stripping off parts of its H-rich envelope and entering the WR phase at an age of 6.9 Myr. The resulting secondary WR star will spend most of its lifetime (also 0.35 Myr) at much lower temperatures (\(\log(T_{\rm eff}/{\rm K})\approx 4.75\)) than the primary WR star while being similarly massive with \(~{}25~{}M_{\odot}\) (SMC AB 4 is at this position). This relatively cool WR star is accompanied by a BH with a mass of \(~{}16~{}M_{\odot}\) in an orbit of 3.3 d. Our binary evolutionary models of AzV 14 predict the formation of WR stars with different temperatures. Indeed, the WR population in the SMC (Hainich et al. 2015; Shenar et al. 2016, 2018) has a bimodal temperature distribution which is comparable to the predicted temperature regimes of the evolutionary models of AzV 14 (see Fig. 5). In the following, we elaborate on this idea further and discuss the implications and robustness of our findings. ## 6 Discussion ### Exploring the parameter space by computing a small grid of binary evolutionary models Inspired by the insights gained from the binary evolutionary models of AzV 14 (see Sect. 5), we explore if the bimodal temperature distribution of WR stars at low metallicity is also predicted by binary evolutionary models with different initial masses and initial orbital periods. Therefore, we calculated a small grid covering initial primary masses of 30, 45, and 60 \(M_{\odot}\), while having a fixed mass ratio of \(q=0.85\). The initial primary masses are chosen to roughly represent the full luminosity distribution of the observed WR population of the SMC. The models are calculated for initial orbital periods of 5, 50, and 500 d. A HRD containing the evolutionary tracks of all the models is shown in Fig. 6. Figure 7 depicts a simplified picture of the different formation channels of hotter and cooler WR stars. 
The flow chart is based on those models in our small grid that have a stable mass-transfer phase. The evolutionary tracks of the primaries with initial masses of \(30~{}M_{\odot}\) are similar to those of AzV 14. All of the primary stars expand and initiate mass-transfer events, during which they lose most of the H-rich envelope, resulting in the formation of hotter WR stars (\(\log T_{\rm eff}>4.9\)). The evolution of the secondaries depends on the initial parameters. The secondary in the system with initial period of 5 d accretes about \(\sim 5~{}M_{\odot}\) of material, leading to a rejuvenation of the core. After hydrogen is depleted in its core, the stellar model expands and quickly initiates mass transfer, stripping off parts of the accreted envelope. After the mass-transfer event the secondary has a temperature of \(\log T_{\rm eff}\approx 4.7\) and a high surface H-abundance of \(X_{\rm H}=0.5\) (left side of Fig. 7). On the other hand, the secondary in the system with initial period of 50 d accretes less than \(0.5~{}M_{\odot}\). This is a big difference to the short-period model, which is explained by the fact that in our models accretion is only allowed when the accretor can avoid rapid rotation. In the short-period binaries, the stars are tidally locked, slowing down the rotation sufficiently for the accretor to stay below critical rotation during the accretion process. In a long-period binary, the star spins up quickly, limiting the amount of mass that can be accreted (Petrovic et al. 2005; de Mink et al. 2007; Shao & Li 2016). In such long-period systems (right side of Fig. 7) the secondary initiates mass transfer after core-H burning during its evolution toward the blue (BSG) and yellow supergiant (YSG) phase, leading to the formation of a hotter WR star with \(\log T_{\rm eff}\approx 4.9\) (i.e., the same temperature regime that is also populated by the primary models). In even longer period systems (\(P=500\) d) mass transfer is initiates when the star is evolving toward a red supergiant (RSG). The model has already formed a large convective envelope which expands adiabatically, making mass-transfer unstable. Potentially, this leads to a common envelope evolution which is not modeled here. The models with initial primary masses of \(45~{}M_{\odot}\) and initial periods of 5 and 50 d also predict the formation of hotter WR stars (\(\log T_{\rm eff}>4.85\)). However, the primary model with initial period of 500 d is on its way to becoming a RSG and has already formed a large outer convection region. Similar to the models \begin{table} \begin{tabular}{l c c c} \hline \hline parameter & primary & secondary & unit \\ \hline \(T_{\rm eff}\) & 42.5 & 42.2 & [kK] \\ \(\log~{}g\) & 4.04 & 4.05 & [cm s\({}^{-2}\)] \\ \(\log~{}L\) & 5.41 & 5.36 & [\(L_{\odot}\)] \\ \(R\) & 9.5 & 9.1 & [\(R_{\odot}\)] \\ \(M_{\rm ini}\) & 35.0 & 33.5 & [\(M_{\odot}\)] \\ \(M_{\rm evo}\) & 33.7 & 32.4 & [\(M_{\odot}\)] \\ \(\log\dot{M}^{(a)}\) & \(-\)6.5 & \(-\)6.6 & [\(M_{\odot}\) yr\({}^{-1}\)] \\ \(v_{\rm rot}\) & 129 & 123 & [km s\({}^{-1}\)] \\ \hline \end{tabular} \({}^{(a)}\) According to the mass-loss recipes used in our evolutionary models (see Appendix F). \end{table} Table 2: Summary of the stellar parameters of both binary components reproduced with the MESA stellar evolution code. 
Figure 5: Observed temperature distribution of the WR stars in the SMC (gray area) compared to the time our evolutionary model of the primary (blue) and secondary (orange) are predicted to spend in the different temperature ranges during their WR phase. The observations are shown as Gaussians that have standard deviations corresponding to the observational uncertainties (see Table 6.1). We excluded the binary SMC AB5 from this plot, as it has a different evolutionary origin. presented above, this leads to an unstable mass transfer and possibly a common envelope evolution. In the model with an initial period of 5 d the secondary accretes a significant amount of mass (\(>10\,M_{\odot}\)) and initiates mass transfer after its main sequence evolution. After the mass transfer event the surface H fraction drops to \(X_{\rm H}=0.35\) and the temperature is \(\log T_{\rm eff}>4.7\). The secondary will end its life as a cooler WR (left side of Fig. 7). For an initially wider orbit (50 d) the secondary accretes only negligible amounts of material and, similar to the primary models, evolves into a hotter WR star (right side of Fig. 7). The primary models with initial masses of 60 \(M_{\odot}\) populate a wide range of effective temperatures (\(\log T_{\rm eff}=4.65-5.0\)). The broad temperature range can be explained by the changing efficiency of envelope stripping and its dependence on initial period and mass ratio. The efficiency is linked to the point in the primary's evolution when it transitions into the core-He burning stage, which happens before or during the mass transfer (see also Klencki et al. 2022, their figure 3). In the primary model of the system with initial period of 5 d the mass transfer is very efficient in removing the H-rich envelope and with the help of the strong WR winds the model is able to remove a significant amount of the envelope, making the star appear hot. On the other hand, in the primary model with initial period of 500 d the mass-transfer is less efficient, making the WR star appear cooler. The secondary in the system with an initial period of 5 d accretes a significant amount of mass (\(>10\,M_{\odot}\)). It initiates mass transfer after the main sequence, stripping off parts of the accreted envelope, resulting in the formation of a cooler WR. The secondary in the system with an initial period of 50 d behaves similarly to its corresponding primary model and forms a hotter WR star. The secondary in the system with initial period 500 d initiates mass transfer when it is on its way to becoming a RSG and has already formed an extended outer convection zone. Mass-transfer in this model is unstable. ### Bimodal temperature distribution of WR stars at low metallicity From considering different evolutionary pathways probed by our exploratory model grid, we learned that primaries and secondaries which have not accreted a significant amount of material evolve to hotter WR stars. Only secondaries that have accreted a fair amount of mass (i.e., those with the shortest periods) evolve into cooler WR stars. There are two main factors that may be responsible for this behavior. Firstly, evolved accretors are characterized by a steep chemical gradient at the core-envelope boundary, producing cooler WR stars once the outer envelope is lost (Schootemeijer and Langer, 2018, their figure 9). Secondly, accretors that are not fully rejuvenated tend to remain compact after the end of MS and begin the core-He burning phase as BSGs (Vanbeveren et al., 2013; Justham et al., 2014). 
Mass transfer from such stars leads to less efficient envelope stripping and cooler WR products (Klencki et al., 2022). This is different from the single star models (e.g., Georgy et al., 2012; Choi et al., 2016; Figure 6: Evolutionary tracks of primary (left) and secondary (right) models with initial primary masses of 30 (green gray), 45 (blue gray), and 60 \(M_{\odot}\) (red gray), fixed mass ratio \(q=0.85\) and initial orbital periods of 5 (solid), 50 (dashed), and 500 d (dotted). We highlighted the WR phase (i.e., \(r\geq 0.2\)) in bold colors. Equidistant time steps of 0.3 Myr are marked by black dots during the WR stage. In the background, we marked the positions of all observed WR stars in the SMC. Eldridge et al., 2017; Limongi and Chieffi, 2018) which can only explain the hottest and most luminous WR stars. Based on our models, we hypothesize that there must be a bimodal temperature distribution of faint WR stars at low metallicity. First, hotter WR stars of any luminosity can be well explained by the primary models as well as by the secondary models that did accrete negligible amounts of material (i.e., those in wide orbits). Second, cool and faint WR stars (\(\log T_{\rm eff}=4.65-4.7\) and \(\log(L/L_{\odot})\simeq 5.5-5.9\)) must arise from secondaries that have accreted a significant amount of material, leading to a rejuvenation and high envelope-to-core mass ratios. Third, hotter and luminous WR stars (\(\log T_{\rm eff}=4.65-4.7\) and \(\log(L/L_{\odot})\simeq 5.9-6.2\)) can be explained by primary, secondary, and single star models. We note that according to our models the cool and faint WR stars should all be accompanied by a compact companion, which can avoid detection and can help to explain observed apparently single cooler WR stars. We sketched the morphology of the WR population in the SMC in Fig. 8. It is less clear whether cooler WR stars at higher metallicities are formed in the same way as in low-metallicity galaxies. Two reasons should be considered. First, at high metallicity, the observed effective temperatures of WR stars can be lower, due to stronger winds (e.g., Sander and Vink, 2020; Sander et al., 2023), also termed "dynamic inflation" (Grassitelli et al., 2018). Second, at higher metallicity WR stars additionally suffer from the effect of hydrostatically inflated envelopes (e.g., Grafener et al., 2012; Sanyal et al., 2015), yielding again lower effective temperatures. Third, the stellar winds in the pre-WR stage at higher metallicity are stronger, stripping off more of the H-rich envelope, which is left after mass transfer. While the latter leads to the formation of hotter WR stars (as shown in Sect. 6.4.1), the effect on the observed temperature might partially be counter-balanced by the higher wind densities. All these effects make it difficult to distinguish between WR stars formed from stars which have accreted significant amounts of material in the past and intrinsically inflated WR stars. Hence, in order to clearly see a bimodal temperature distribution of WR stars originating from post interaction binaries, Figure 8: HRD containing the positions of the observed WR stars in the SMC, over-plotted by a sketch stating the different evolutionary channels of single stars and binaries undergoing stable mass transfer, leading to the different marked regions. Figure 7: Sketch of the evolutionary stages for possible formation channels of hotter and cooler WR stars under the assumption of stable mass transfer. 
populations at low metallicity must be considered. However, the number of WR stars decreases with decreasing metallicity (e.g., Shenar et al. 2020a), enforcing us to rely on small number statistics. In the SMC, the sample of WR stars is complete which minimizes selection biases. Therefore, despite the small number statistics, it remains the best representative sample of an WR population at low metallicity. In order to confirm or falsify our predictions, further observations of complete populations of WR stars in other low metallicity galaxies in combination with calculations of binary evolutionary models are necessary. Our findings and other recent results (Renzo & Gotberg 2021; Renzo et al. 2022) indicate that evolution of past accretors may be systematically different to those of normal (primary) stars. This impacts our understanding of stellar evolution and feedback. For instance, in the SMC hotter WR stars like AB 10 (\(\log(Q_{\rm He\,n}/{\rm s}^{-1})=48.1\)), have strong He ii ionizing fluxes, while cooler WR stars like AB 4 (\(\log(Q_{\rm He\,n}/{\rm s}^{-1})=37.5\)) have five orders of magnitude lower ionizing flux. Broader implications for binary evolution channels, proposed in this work, are yet to be explored. ### Observational fingerprints of previous evolutionary channels Our model predicts that the observed effective temperature is one of the key diagnostics to differentiate between the evolutionary origin of WN type stars at low metallicity, as explained in Sect. 6.2. However, for WN stars originating from past accretors, a low effective temperature is not the only diagnostic. The surface H-abundance can be considered as another clue, as all of our rejuvenated secondary model show \(X_{\rm H}\gtrsim 0.3\), while for their corresponding primary models the surface H-abundance are noticeably lower \(X_{\rm H}\sim 0.2\). However, for our most luminous models (\(M_{\rm ini,\,1}=60\,M_{\odot}\)) in the systems with the widest orbits, the primaries have low temperatures and the H-abundances are comparable to those of the rejuvenated secondaries. Hence, a high hydrogen abundance and a low temperature are not a robust criterion to identify the previous evolutionary path. In our search for additional fingerprints to detect past accretors, we identified a difference in the predicted CNO surface composition of normal (primary) stars and the rejuvenated secondaries. It is expected that both, the primaries and non-rejuvenated secondaries, have CNO equilibrium composition (\(X_{\rm C}=1-2\times 10^{-5},\ X_{\rm N}=139\times 10^{-5},\ \rm and\ \ to test how the evolution would be affected if the WR mass-loss rate is increased by a factor of 3. Figure 10 shows the resulting tracks. By comparison to Fig. 4, the predicted temperature regimes and the surface chemical composition during the WR phases changes. The primary loses a larger fraction of its H-poor envelope and spends more time at higher temperatures. At the end of core-He burning the stellar evolution model starts to contract instead of expanding. On the other hand, the stellar evolution model of the secondary shows even stronger response to the increased mass-loss rate. Instead of becoming a cooler WR star with plenty of hydrogen in the envelope, it is now able to remove large fractions of this envelope and position itself at higher temperatures in the HRD. We note that the region in the HRD where the stellar evolution model of the secondary is located is not populated by an observational counterpart. 
The above example shows that the choice of the mass-loss recipe during different evolutionary phases can have a drastic impact on the evolution of massive stars. Only when using adequate mass-loss recipes, one can explain the cooler WR stars observed in the SMC. We note that in the case of WR winds being weaker than we assume in this work, the prediction on the morphology of the temperature distribution of WR stars would shift to lower metallicities. #### 6.4.2 Example 2: More efficient mixing There are several mixing processes within stellar interiors, including convective mixing, semiconvective mixing, overshooting, rotational mixing, pulsational mixing, etc. Typically, these mixing processes are parameterized by free parameters, for example the mean free path a photon can travel, or the efficiency of a specific mixing process which cannot be predicted by theory. In the literature there are many works trying to limit the parameter space of the different free parameters (e.g., Schootemeijer et al., 2019; Higgins & Vink, 2019; Gilkis et al., 2021; Michielsen et al., 2021). In this section, we want to showcase how changing one of the free parameters can impact the evolution of a star. One of the least constrained, yet important, mixing process is the efficiency of semiconvective mixing. Schootemeijer et al. (2019) used the population of the BSGs and RGSs to constrain semiconvective mixing efficiencies, but were able only to limit it to \(\alpha_{\rm sc}\gtrsim 1\). They report that higher values only barely impact the number ratio of blue and red supergiants in galaxies. In our models presented above we used \(\alpha_{\rm sc}=1\). For the models presented in this section, however, we increased the efficiency to \(\alpha_{\rm sc}=10\) in order to see how it changes our understanding of the evolution of the binary. The evolutionary tracks of the primary and secondary with the more efficient semiconvective mixing are shown in Fig. 11. By comparing these tracks to those shown in Fig. 4, one can see that the evolution of the primary barely changes. Semiconvective mixing becomes important only in evolutionary stages after the main sequence. The primary begins mass transfer already during its main sequence evolution and spends its post-main sequence evolution as a WR star, leaving the semiconvective mixing no time to impact the evolution. On the other hand, the evolution of the secondary changes because during the accretion phase it forms extensive convective and semiconvective regions in its envelope. The secondary initiates mass-transfer after the main sequence evolution, giving semiconvection some time to efficiently mix material. In addition, due to more efficient semiconvection, the secondary expands less in the transition to the core-He burning phase and initiates mass transfer at a somewhat more advanced evolutionary stage, consequently losing less mass during mass-transfer and producing a cooler WR star with an even higher surface H abundance. This is reflected in the higher surface H-abundance and the cooler temperature. Klencki et al. (2022) recently reported something similar: Binary models at low metallicity with highly efficient semiconvective mixing will not become WR/helium stars after Case B mass transfer. Their models managed to stay at the position in Figure 10: Same as Fig. 4, but now with the WR mass-loss rates enhanced by a factor 3. the HRD at which they initiated the mass transfer and hence are core-He burning stars, disguised as O- or B-type stars. 
It is worth mentioning that according to our criterion based on the optical depth, the secondary of the binary evolutionary model with efficient semiconvective mixing would spend only a short time as a WR star at low temperature. These two examples illustrate that binary evolution models are quite sensitive to the input parameters. A more detailed study based on grids of detailed binary evolutionary models and population synthesis are needed to confirm or disprove our predictions on the morphology of WR populations at low metallicity. ## 7 Summary and conclusions A consistent analysis of multi-epoch optical and UV spectra of one of the earliest O-type stars in the SMC, AzV 14, reveals its binary nature. Furthermore, our analysis uncovered that the systems' two components are very similar. The primary and secondary have temperatures of \(T_{\rm eff,1}=42.8\pm 2.0\,\rm{kK}\) and \(T_{\rm eff,2}=41.8\pm 2.0\,\rm{kK}\), luminosities of \(\log(L_{1}/L_{\odot})=5.41\pm 0.15\) and \(\log(L_{2}/L_{\odot})=5.38\pm 0.15\) and surface gravities of \(\log(g_{1.2}/(\rm{cm\,s^{-2}}))=4.0\pm 0.2\), respectively. The analysis of a TESS light curve confirms the binary nature of AzV 14 with an orbital period of \(3.7058\pm 0.0013\,\rm{d}\). Their spectroscopic masses of \(M_{1}=32\pm 8\,M_{\odot}\) and \(M_{2}=31\pm 8\,M_{\odot}\) are confirmed by the orbital analysis. Both binary components drive weak stellar winds with mass-loss rates of about \(\log(\dot{M}/(M_{\odot}\,\rm{yr^{-1}}))=-7.7\pm 0.2\). The relatively high observed X-ray emission of AzV 14 is attributed to colliding winds. The new empirically derived stellar parameters and the current orbital period of AzV 14 are well explained by our evolutionary models. In particular, the spectroscopic and orbital masses of the binary components are in agreement with the evolutionary ones. According to the evolutionary models, the components of AzV 14 did not yet exchange mass. Most interestingly, the binary evolutionary model of AzV 14 predicts that the primary will evolve into a hotter WR star, while the secondary is destined to evolve into a cooler WR star. Inspired by these results we calculated a small evolutionary model grid to investigate the conditions for the formation of hotter and cooler WR stars. Guided by these calculations, we anticipate that WR populations in low metallicity galaxies should show bimodal temperature distributions. According to our models, stars born in long-period binaries both evolve to hotter WR stars. On the other hand, for stars born in short-period binaries only the primary is destined to become a hot WR star, while the secondary evolves into a cooler WR star. In our models we assume that accretion in short-period binaries is efficient, leading to a rejuvenation of the core and a different envelope-to-core mass ratio. This eventually allows the past accretors to stay at lower temperatures during their WR stage. These results are sensitive to the input physics, but if turned out to be correct they would have a vast impact on our understanding of binary evolution at low metallicity, stellar feedback, and, hence, on galaxy formation and evolution. To test if our models are reliable and cooler WR stars originate from past accretors we introduced an additional criterion: the surface oxygen abundance. From standard evolutionary models one expects that a WR star has a surface composition which corresponds to the CNO equilibrium value. 
However, in our models the cooler WR stars, formed by stars that accreted a significant amount of material in the past, all have surface oxygen abundance which are increased by a factor of about ten. We tested this prediction with archival spectra of the apparently single WN star AB4 in the SMC. We detect oxygen lines in its optical spectrum, and measure the high oxygen abundance which is in agreement with our evolutionary model predictions. This empirically supports the new evolutionary pathway to the formation of WR stars at low metallcity proposed in this paper. ###### Acknowledgements. The authors are appreciative to the reviewer for useful comments which helped to improve the paper, and for their suggestions. The results presented in this paper are based on observations obtained with the NASA/ESA Figure 11: Same as Fig. 4, but now with an increased efficiency of semiconvection. Hubble Space Telescope, retrieved from MAST at the Space Telescope Science Institute (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc. under NASA contract NAS 5-26555. Support to MAST for these data are provided by the NASA Office of Space Science via grant NASG-7584 and by other grants and contracts. The TESS data presented in this paper were obtained from MAST at the STScI. Funding for the TESS mission was provided by the NASA Explorer Program. Furthermore, its conclusions are based on observations collected at the European Southern Observatory (ESO) under the program 09A-A0049. The authors thank the managing committee of XSchowett and Andrea Mehner for preparing the OBs of the XSchowett project. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. DP and SRS acknowledge financial support by the Deutsches Zentrum fur Luft und Raumfunk (DLR) grants FKZ SOROR2005 and 50OR2108. AACS and VR acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in the form of an Emmy Noether Research Group - Project-ID 445674056 (SA406/1-1, PI Sander). DMB gratefully acknowledges a senior postdoctoral fellowship from the Research Foundation Flanders (FWO) with grant agreement number 1826521N. RK acknowledges financial support via the Heisenberg Research Grant funded by the German Research Foundation (DFG) under grant no. KU 2849/9. CK acknowledges financial support from the Spanish Ministerio de Economia y Competitividad under grants AYA2016-792744-C4-P and PID2019-107408GB-C44, from Junta de Andalucia Excellence Project P18-FR-2664, and from the State Agency for Research of the Spanish MCIU through the 'Center of Excellence Severo Ochoa' award for the Institute de Astrofisica de Andalucia (SEV-2017-0709). TS acknowledges support from the European Union's Horizon 2020 under the Marie Sklodowska-Curie grant agreement No 101024065. The collaboration of coauthors was facilitated by support from the International Space Science Institute (ISSI, Bern). The authors are grateful to the Lorentz Center (Leiden) for organizing a stimulating workshop. 
We thank Paul Crowther for useful discussions as well as advice regarding the UVES spectrum and for providing the updated IR photometry.
2308.08576
Artistic control over the glitch in AI-generated motion capture
Artificial intelligence (AI) models are prevalent today and provide a valuable tool for artists. However, a lesser-known artifact that comes with AI models that is not always discussed is the glitch. Glitches occur for various reasons; sometimes, they are known, and sometimes they are a mystery. Artists who use AI models to generate art might not understand the reason for the glitch but often want to experiment and explore novel ways of augmenting the output of the glitch. This paper discusses some of the questions artists have when leveraging the glitch in AI art production. It explores the unexpected positive outcomes produced by glitches in the specific context of motion capture and performance art.
Jamal Knight, Andrew Johnston, Adam Berry
2023-08-16T02:55:19Z
http://arxiv.org/abs/2308.08576v1
# Artistic control over the glitch in AI-generated motion capture ###### Abstract. Artificial intelligence (AI) models are prevalent today and provide a valuable tool for artists. However, a lesser-known artifact that comes with AI models that is not always discussed is the glitch. Glitches occur for various reasons; sometimes, they are known, and sometimes they are a mystery. Artists who use AI models to generate art might not understand the reason for the glitch but often want to experiment and explore novel ways of augmenting the output of the glitch. This paper discusses some of the questions artists have when leveraging the glitch in AI art production. It explores the unexpected positive outcomes produced by glitches in the specific context of motion capture and performance art. Motion capture, machine learning, AI, performance art, keypoint detection, animation. 2020 rights rights 100 ## 1. Introduction Motion capture is used in many industries, but is perhaps most known for the entertainment industries. The movement of the apes in 'Dawn of the Planet of the Apes', Neytiri in 'Avatar' or the Hulk in 'The Avengers' were all driven by motion capture technology. The motion of performers is accurately calculated by a myriad of cameras surrounding a capture stage. This style of art needs to be accurate to the millimetre, or the performance is at risk of falling into the 'uncanny valley' of animated motion leading to the audience becoming disengaged. There are other styles of art whose representation of animation does not require accuracy to this detail, however. Performance art is one domain where lower cost, faster-setup, reduced-accuracy motion capture, for instance, has found a home, where approximated outputs are sufficient to drive (often abstract) animations of dancer movement, suitable for projection or integration into the performance space. An emerging technology in this domain is AI-powered human pose detection, which can identify coarse skeletal movement using only a small number of consumer-grade RGB cameras - an output of sufficient quality for building a fully animated mesh that reflects human movement [7]. This research explores the application of machine learning in this area, using the single-camera VIBE [8] and multi-camera EasyMocap [5] models to drive performer-driven abstract animation. While conducting this research, there has been a realisation of the value and power of embracing the glitches such systems deliver. Glitches are valued by artists; they are historically important and motivate artists to experiment [1][2][3][6][10]. But understanding the form and function of those glitches, what drives them, and how to replicate them, is critical to enabling artists to harness these unpredictable surprises in an artistic environment. While we emphasize the artistic value of glitches in AI-generated motion capture, we acknowledge the need to address the lack of transparency and understanding when glitches occur due to the black box nature of AI models. To better support artists in their creative process, we propose the exploration of explainable features that would shed light on the occurrence and nature of glitches, enabling performers to have a deeper understanding of these phenomena. By incorporating XAI techniques, we can empower artists to have more control over glitches, allowing them to intentionally exploit and manipulate unexpected outcomes to achieve their desired artistic goals. 
In the context of this workshop, we speculate on the potential of XAI to enhance the creative exploration of glitches in AI art production and foster a deeper connection between artists and the underlying AI models. ## 2 Errors vs the glitch The glitch can be described as '... that which creates minor disturbances without actually damaging its major functioning. Glitches do not stop transmission: they merely make it scrappy, dirty or noisy' [4]. In the book Glitch Art in Theory and Practice, [2] mentions that glitch art originated before 1939, ranging from generative music and sculpture to modern digital art. [3] says, 'Today's digital technology enables artists to explore new territories for content by capturing and examining the area beyond the boundary of "normal" functions and uses of software.' It is essential to distinguish between when a model fails to run and when the output is unexpected. Unexpected outputs in AI-generated art can be a delightful surprise by bringing new meaning to a piece or revealing an interesting and unexpected aspect. These rare occurrences could be an error in the code, a quirk of the AI model or a user error by the artist. Sometimes there is no way to know where the glitch comes from besides further experiments and re-runs. The original authors of the model have probably encountered the same unexpected results, but rarely are these documented or publicised (perhaps because they are likely viewed as errors to be suppressed or corrected). To artists, however, these accidental errors and strange outputs can sometimes be the most exciting output the model can produce. ## 3 Examples of the glitch in AI motion capture ### Controlling the ghost In this paper, we describe creative projects which result in motion capture driven animations that are displayed as a performer dances. The machine learning models that are used output an animated mesh, which is editable. The keyframes can be smoothed, and the animation can be fine-tuned after the mesh is generated. This is ideal for artists as they have some degree of control of the mesh before abstract animation is applied. When tracking performer movements to create animated meshes for performance art, interesting glitches have emerged on occasion: If four cameras capture a subject's motion, and the subject walks out of view of two or three cameras, the animated mesh will behave in a way that floats around the screen. The animated mesh can glitch and behave strangely if the camera calibration step is miscalculated. The camera calibration step in detecting human pose is important because it enables calculation of where the physical cameras are in space. If one or more of the cameras are miscalculated, the animated mesh floats and dances around the frame uncontrollably. The above can result in an ethereal embodiment of a floating ghost-like motion. It is as if the motion capture subject is haunted by their animated ghost floating nearby. The animated figure will occasionally respond if Figure 1: Visual representations of key stages in the VIBE model, the raw video input of the subject dancing (top), the mesh generation of the VIBE model superimposed over the input video before temporal smoothness is applied (centre), and a frame of the abstract animation applied to the output mesh (lower). the subject rotated suddenly or changed direction. The animation felt like it was trying to match the subject's movement for a second before giving up and continuing on its floaty path. A video example is available here [A]. 
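Such a mis-calibration can also be emulated deliberately by injecting noise into the extrinsic calibration before the pose is reconstructed. The sketch below is purely illustrative: the data layout, function name, and single noise parameter are our assumptions and are not part of VIBE or EasyMocap, but they convey how one adjustable 'amount' could scale how far the recovered mesh detaches from the performer.

```python
import numpy as np

def perturb_extrinsics(rotations, translations, amount=0.05, seed=None):
    """Deliberately corrupt multi-camera calibration to dial in 'glitchiness'.

    rotations    : list of 3x3 camera rotation matrices (assumed layout)
    translations : list of 3-vectors (camera positions)
    amount       : 0 keeps the correct calibration; larger values push the
                   reconstructed mesh further away from the performer
    """
    rng = np.random.default_rng(seed)
    noisy_R, noisy_t = [], []
    for R, t in zip(rotations, translations):
        # Small random rotation built from a random axis-angle perturbation
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        angle = amount * rng.normal()
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        dR = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
        noisy_R.append(dR @ R)
        noisy_t.append(np.asarray(t) + amount * rng.normal(size=3))
    return noisy_R, noisy_t
```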
Noting the potential value in this context, an attempt was made to control the number and type of glitches by purposefully adding incorrect data for the camera calibration. This was essentially an informed trial-and-error process, where the impacts of varying calibration parameters on the movement of the ghostly mesh we observed. The more incorrect calibration data was used for the cameras, the more detached the mesh became from the subject. A video example of the glitches merged with the correct pose detection output is available here [B]. Without deep knowledge of the underlying AI model, the ability to control and manipulate glitches to achieve artistic outcomes is constrained to exactly this type of trial-and-error methodology. If insight into the causes and drivers of the glitch were more readily available and their relationship to controllable parameters known, then it would both open up the types of artistic experimentation available and increase the efficiency of generating useful outputs. In the context of pose detection glitches, exaggerating or reducing unexpected movement and the ability to balance the mix between expected and unexpected outputs would prove particularly powerful and enable greater artistic control. ### Testing the ghost Feedback from choreographers gathered during collaboration indicates that they have been captivated by the glitches that AI pose detection models produce. Running Machine [2], an Australian and Japanese co-production produced by Sam McGilp, Harrison Hall, Yuiko Masukawa, Makoto Uemura and Kazuhiko Hiwa, featured the glitchy nature of AI-generated pose detection. The producers appreciated the glitch to the extent that they would try to force or exaggerate the effect. Two subjects would be recorded on video in front of a green screen, with one subject completely covered in green, who would move the other subject in various positions. This also 'confused' the AI model into producing the glitch effect, sometimes detecting the greenscreen subject and sometimes detecting the other subject. Other experiments were conducted in front of a green screen, with the top half of one subject in green and the bottom half of the other in green. A similar effect was produced. The choreographers requested an animated mesh of the glitched result without any smoothing or cleanup. A rendering of the mesh was projected on a screen as part of the performance. Examples of these glitches are available here [C][D]. ### Confusing the ghost An example where glitch artifacts were produced unexpectedly was when AI-based motion capture attempted to capture a performer on slings. The slings were attached to beams on the ceiling, and the performers would swing gracefully in different formations. The expected result would be a near-accurate representation of the performers swinging through the air. However, the AI model produced a very glitchy result. The reason for the glitch became apparent when the performer dismounted the slings when the animated mesh snapped back to the performer. It was as if the model could not understand the performer in the slings but automatically recognised them when they were walking on solid ground. Upon inspection, it was revealed that the data the model was trained on was extensive but did not include performers in slings or similar situations. This new information for the model caused it to glitch. An example of this is available here [E]. 
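Since both the glitched and the 'correct' takes are exported as editable keyframe data, one simple way to explore the balance between expected and unexpected motion is to blend the two streams directly. The sketch below is our illustration only; the array shapes and names are assumptions, not how the works described above were produced.

```python
import numpy as np

def blend_takes(clean, glitched, mix=0.5, smooth_window=1):
    """Blend a 'correct' capture with a glitched one and optionally smooth the result.

    clean, glitched : arrays of shape (n_frames, n_values) holding per-frame
                      keyframe data (e.g. joint rotations or vertex positions)
    mix             : 0 keeps the expected output, 1 keeps the glitch
    smooth_window   : moving-average window applied after blending (1 = off)
    """
    clean = np.asarray(clean, dtype=float)
    glitched = np.asarray(glitched, dtype=float)
    out = (1.0 - mix) * clean + mix * glitched
    if smooth_window > 1:
        kernel = np.ones(smooth_window) / smooth_window
        pad = (smooth_window // 2, smooth_window - 1 - smooth_window // 2)
        padded = np.pad(out, (pad, (0, 0)), mode="edge")
        out = np.stack([np.convolve(padded[:, i], kernel, mode="valid")
                        for i in range(out.shape[1])], axis=1)
    return out

# Example: keep 70% of the glitched motion and lightly smooth the seams
# blended = blend_takes(clean_frames, glitched_frames, mix=0.7, smooth_window=5)
```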
An artistic representation of glitch animation is available here [F]. An entire playlist of all the glitches is available here [G]. ## 4 What artists need No AI model is perfect, and flaws are discovered after some testing. Authors of AI models primarily show examples of their models working seamlessly. However, for artists, it would be helpful to show (with examples) where the model will glitch or behave unexpectedly. In layman's terms, an accompanying explanation describing the reason for the anomaly would help artists avoid or exploit the unexpected result. This information would be valuable when choosing an AI model from the outset. If GitHub repositories were more honest and forthcoming about the various outcomes of their models, artists would be more experimental with them. It's great that a pose detection model can produce an animated mesh of a human walking, but can it produce motion that a human cannot? Which parameters can be adjusted to 'break' the model and cause it to fly around the space like a ragdoll? Unlocking these hidden capabilities of a model is akin to using cheat codes in a video game to access different ways of navigating the game. It may not be what the author intended, but it is often creatively productive to experiment with. The one area where artists currently have no control is the model itself and the glitches it produces. It is not known which environmental, performance, or system parameters can be tuned to shape the types of interesting glitches that might power the art. This relatively untapped source of creativity holds considerable potential for innovative experimentation. Although they are not the intended outcome, glitch artifacts should be embraced, and AI practitioners are encouraged to provide methods for reproducing them. ## Acknowledgments Animal Logic Academy, Cloe Fournier, Box of Birds, Entagma.com and Carlos Barreto. This research is supported by an Australian Government Research Training Program Scholarship.
2302.05009
Network Inspection Using Heterogeneous Sensors for Detecting Strategic Attacks
We consider a two-player network inspection game, in which a defender allocates sensors with potentially heterogeneous detection capabilities in order to detect multiple attacks caused by a strategic attacker. The objective of the defender (resp. attacker) is to minimize (resp. maximize) the expected number of undetected attacks by selecting a potentially randomized inspection (resp. attack) strategy. We analytically characterize Nash equilibria of this large-scale zero-sum game when every vulnerable network component can be monitored from a unique sensor location. We then leverage our equilibrium analysis to design a heuristic solution approach based on minimum set covers for computing inspection strategies in general. Our computational results on a benchmark cyber-physical distribution network illustrate the performance and computational tractability of our solution approach.
Bobak McCann, Mathieu Dahan
2023-02-10T01:45:18Z
http://arxiv.org/abs/2302.05009v1
# Network Inspection Using Heterogeneous Sensors for Detecting Strategic Attacks ###### Abstract We consider a two-player network inspection game, in which a defender allocates sensors with potentially heterogeneous detection capabilities in order to detect multiple attacks caused by a strategic attacker. The objective of the defender (resp. attacker) is to minimize (resp. maximize) the expected number of undetected attacks by selecting a potentially randomized inspection (resp. attack) strategy. We analytically characterize Nash equilibria of this large-scale zero-sum game when every vulnerable network component can be monitored from a unique sensor location. We then leverage our equilibrium analysis to design a heuristic solution approach based on minimum set covers for computing inspection strategies in general. Our computational results on a benchmark cyber-physical distribution network illustrate the performance and computational tractability of our solution approach. ## 1 Introduction Critical infrastructure networks such as electric, gas, and water distribution systems are paramount for the well-being of society. However, these networks regularly face random disruptions as well as attacks from strategic adversaries [1, 2]. In particular, recent incidents have demonstrated that adversarial attackers can disrupt or gain control of the cyber-physical systems deployed in these networks by exploiting cyber insecurities or physical faults. A most recent example is the cyberattack against a major US fuel pipeline, which caused disruptions in the fuel supply of the Eastern United States [3]. Additional examples can be found in [4, 5]. A key part of any defense strategy is to detect attacks using sensors positioned in various locations that continuously monitor the network. If a network is small, this can be done easily by placing a sensor at each location of interest. However, for medium or large networks, it can be infeasible to position a sensor at every location. Thus the problem of how to strategically position a restricted number of sensors is crucial. We employ a game-theoretic approach to study this problem. Game theory has successfully been used to study problems in the domain of cybersecurity (and network security more broadly) [6, 7, 8, 9, 10, 11, 12, 13]. In particular, it has proven successful for sensor allocation problems [13, 14, 15]. In our model, the defender allocates heterogeneous sensors in order to detect multiple attacks caused by a strategic attacker. The sensors may differ in their detection accuracies, which typically depend on the sensing technology utilized. The defender (resp. the attacker) aims to minimize (resp. maximize) the expected number of undetected attacks. Thus we model the interactions between both players using a zero-sum game, in which both players may potentially select randomized strategies. This feature is known to be desirable in security settings in which finite resources are allocated [14, 16]. Previous simultaneous security models, such as in [17, 18, 19, 20], assume that each detection device is homogeneous. In this work, we extend the model in [14] by accounting for the potential heterogeneity in detection accuracy of the sensors available to the defender. In particular, we study how the detection heterogeneity of the defender's resources affects the strategies of both players. We study the mixed Nash Equilibria (NE) of this game. As this is a zero-sum game, NE can be computed by solving a linear program [21]. 
However, as the network's size increases, this linear program becomes too computationally expensive to solve because of the combinatorial nature of the players' action sets. Thus, we analyze equilibrium properties under certain conditions, and leverage our results to provide a computationally tractable heuristic solution approach that computes inspection strategies in the general case with good detection performance. Our contributions are twofold: First, we analytically solve the game and provide equilibrium properties when each component in the network is monitored from a unique sensor location. These results provide us with valuable insight regarding the impact of the detection accuracies, number of attacks, and network topology on the players' equilibrium strategies. Second, we leverage our equilibrium results to design a heuristic solution approach for computing inspection strategies in general. Our approach is based on solutions to a minimum set cover problem, which have been shown to be effective for different inspection games [10, 11, 12, 13]. We then conduct a computational study on a benchmark cyber-physical distribution network and empirically validate the performance and computational tractability of our solution approach. The paper is structured as follows. In Section 2, we introduce the network inspection game. In Section 3, we derive equilibrium properties and solve the game when each component is monitored from a unique sensor location. We then present in Section 4 our heuristic approach for computing inspection strategies in the general case and provide computational results to validate our approach. Finally, we summarize our contributions and plans for future work in Section 5. ## 2 Problem Description We consider a network containing a set of vulnerable components \(E\) that can be targeted by an attacker. A defender has access to \(b_{1}\in\mathbb{N}\) sensors that can be positioned among a set of locations (nodes) \(V\) for network monitoring. A sensor positioned at node \(v\in V\) monitors a subset of components \(E_{v}\subseteq E\), which we refer to as the _monitoring set_ of \(v\). For ease of exposition, we denote \(n\coloneqq|V|\) and \([k]\coloneqq\{1,\ldots,k\}\) for every \(k\in\mathbb{N}\). We consider that sensors can potentially differ in their detection capabilities. Specifically, for each sensor \(k\in[b_{1}]\), we let \(\lambda_{k}\in(0,1]\) denote its accuracy, i.e., the probability that it detects an attack conducted against a given component within the monitoring set of the node at which it is positioned. We order the sensors so that \(\lambda_{1}\geq\cdots\geq\lambda_{b_{1}}\). Without loss of generality, we assume that multiple sensors cannot be simultaneously positioned at the same node. Indeed, positioning additional sensors at a node \(v\in V\) can be equivalently viewed as positioning them among \(b_{1}-1\) different copies of node \(v\), where each copy has an identical monitoring set \(E_{v}\). A _sensor positioning_ is then represented as a vector \(s=(s_{1},\ldots,s_{b_{1}})\in(V\cup\{0\})^{b_{1}}\) such that \(s_{i}\neq s_{j}\) for every \((i,j)\in[b_{1}]^{2}\) with \(i\neq j\) and \(s_{i},s_{j}\neq 0\). Here, \(s_{k}\in V\) represents the node at which sensor \(k\in[b_{1}]\) is positioned by the defender, and \(s_{k}=0\) corresponds to sensor \(k\) not being positioned within the network. For consistency, we let \(E_{0}\coloneqq\emptyset\). We denote the set of all sensor positionings as \(A_{1}\). 
To analyze the problem of strategically positioning sensors in the network, we introduce a zero-sum game \(\Gamma\coloneqq\langle\{1,2\},(\Delta(A_{1}),\Delta(A_{2})),(-U,U)\rangle\). In this game, Player 1 (**P1**) is the defender who selects a sensor positioning \(s\in A_{1}\). Simultaneously, Player 2 (**P2**) is an attacker who selects a subset of components \(T\in 2^{E}\) to target, where \(|T|\leq b_{2}\) and \(b_{2}\in[|E|]\) is the number of attack resources he has at his disposal. We refer to such a subset of components as an _attack plan_, and denote the set of all attack plans as \(A_{2}\). In such security settings, it may be beneficial for one or both players to randomize their strategies. This feature is especially important for applications where sensing resources can be regularly moved throughout a network, which increases the strategic uncertainty faced by the attacker and hence generally achieves a higher protection level [10, 22]. Thus, we allow **P1** and **P2** to select mixed strategies. A _mixed strategy_ for the defender (resp. attacker) is a probability distribution over the set of sensor positionings \(A_{1}\) (resp. the set of attack plans \(A_{2}\)). Namely, we define the set of mixed inspection and attack strategies as \(\Delta(A_{1})\coloneqq\{\sigma^{1}\in[0,1]^{|A_{1}|}\mid\sum_{s\in A_{1}} \sigma^{1}_{s}=1\}\) and \(\Delta(A_{2})\coloneqq\{\sigma^{2}\in[0,1]^{|A_{2}|}\mid\sum_{T\in A_{2}} \sigma^{2}_{T}=1\}\) respectively, where \(\sigma^{1}_{s}\) (resp. \(\sigma^{2}_{T}\)) represents the probability assigned to the sensor positioning \(s\in A_{1}\) (resp. the attack plan \(T\in A_{2}\)) under the inspection strategy \(\sigma^{1}\) (resp. the attack strategy \(\sigma^{2}\)). We assume that the players' strategies are independent randomizations. In this model, we assume that the sensors are safe from possible damage during an attack; only the components in the network can be targeted. Additionally, we assume that detection is independent across attacks and sensors, and that if an attack against a component is detected, then the defender can nullify the damage. Hence, in our model we consider an attack on a component by **P2** to be successful if and only if it is not detected by **P1**. As such, **P1** (resp. **P2**) seeks to minimize (resp. maximize) the expected number of undetected attacks which, for any strategy profile \((\sigma^{1},\sigma^{2})\in\Delta(A_{1})\times\Delta(A_{2})\), is given by \[U(\sigma^{1},\sigma^{2})\coloneqq\mathbb{E}_{(\sigma^{1},\sigma^{2})}\left[ \sum_{e\in T}\prod_{k=1}^{b_{1}}\left(1-\lambda_{k}1_{\{e\in E_{s_{k}}\}} \right)\right],\] where the expectation is taken over all pairs of actions \((s,T)\in A_{1}\times A_{2}\), which are selected with probability \(\sigma^{1}_{s}\cdot\sigma^{2}_{T}\) by the players' strategies. Next, we show an instantiation of the zero-sum game \(\Gamma\) via an example. **Example 1**.: We consider an example of a network represented in Figure 1. In this example, the set of nodes is \(V=\{v_{1},\ldots,v_{5}\}\), the set of components is \(E=\{e_{1},\ldots,e_{9}\}\), and the monitoring sets are \(E_{v_{1}}=\{e_{1},e_{2},e_{3}\}\), \(E_{v_{2}}=\{e_{3},e_{6},e_{7}\},E_{v_{3}}=\{e_{3},e_{4},e_{5}\},E_{v_{4}}=\{e_{ 7},e_{8},e_{9}\}\), and \(E_{v_{5}}=\{e_{8}\}\). The defender has two sensors. Sensor 1 (in green) has accuracy \(\lambda_{1}=0.9\), and sensor 2 (in yellow) has accuracy \(\lambda_{2}=0.5\). 
In this example, the defender selects the randomized inspection strategy \(\sigma^{1}\) defined by \(\sigma^{1}_{s}=0.4\) and \(\sigma^{1}_{s^{\prime}}=0.6\), with \(s=(v_{4},v_{3})\) and \(s^{\prime}=(v_{1},v_{2})\). Simultaneously, the attacker selects the randomized attack strategy \(\sigma^{2}\) defined by \(\sigma^{2}_{T}=0.2\) and \(\sigma^{2}_{T^{\prime}}=0.8\), with \(T=\{e_{1},e_{3}\}\) and \(T^{\prime}=\{e_{4}\}\). For this example, the expected number of undetected attacks is given by \[U(\sigma^{1},\sigma^{2})= \ \sigma^{1}_{s}\sigma^{2}_{T}\left(1+(1-\lambda_{2})\right)\] \[+\sigma^{1}_{s}\sigma^{2}_{T^{\prime}}(1)\] \[+\sigma^{1}_{s^{\prime}}\sigma^{2}_{T}\left((1-\lambda_{1})+(1- \lambda_{1})(1-\lambda_{2})\right)\] \[+\sigma^{1}_{s^{\prime}}\sigma^{2}_{T^{\prime}}(1-\lambda_{2})\] \[=0.698.\] \(\triangle\) In simultaneous games, a solution concept is given by Nash Equilibrium. Specifically, a strategy profile \((\sigma^{1*},\sigma^{2*})\in\Delta(A_{1})\times\Delta(A_{2})\) is a _Nash Equilibrium_ (NE) of \(\Gamma\) if for all \((\sigma^{1},\sigma^{2})\in\Delta(A_{1})\times\Delta(A_{2})\), we have \[U(\sigma^{1*},\sigma^{2})\leq U(\sigma^{1*},\sigma^{2*})\leq U(\sigma^{1}, \sigma^{2*}).\] Equivalently, at a NE, \(\sigma^{1*}\) (resp. \(\sigma^{2*}\)) is a best response to \(\sigma^{2*}\) (resp. \(\sigma^{1*}\)). We refer to \(\sigma^{1*}\) (resp. \(\sigma^{2*}\)) as an equilibrium inspection strategy (resp. equilibrium attack strategy). Additionally, we refer to \(U(\sigma^{1*},\sigma^{2*})\) as the _value of the game_. Since \(\Gamma\) is a finite zero-sum game, the value \(U(\sigma^{1*},\sigma^{2*})\) exists and is identical for every strategy profile \((\sigma^{1*},\sigma^{2*})\in\Delta(A_{1})\times\Delta(A_{2})\) that is a NE. In other words, the value of the game is unique and well-defined. Furthermore, the zero-sum game \(\Gamma\) can be solved using the following linear programming problem [21]: \[(\mathcal{P})\ \min_{\sigma^{1}\in\Delta(A_{1})}\max_{T\in A_{2}}U(\sigma^{1},T).\] Specifically, the equilibrium inspection strategies, equilibrium attack strategies, and value of the game \(\Gamma\) are given by the optimal primal solutions, optimal dual solutions, and optimal value of \((\mathcal{P})\), respectively. However, solving \((\mathcal{P})\) becomes intractable even for medium-sized networks due to the combinatorial nature of the players' sets of actions: the number of variables and constraints in \((\mathcal{P})\) are given by \(1+|A_{1}|=1+\sum_{i=0}^{b_{1}}i!\binom{n}{i}\) and \(1+|A_{2}|=1+\sum_{j=0}^{b_{2}}\binom{|E|}{j}\), respectively. Thus, in this paper, we present an approach to provide approximate solutions to the game \(\Gamma\). We first derive an analytical characterization of a class of NE when the monitoring sets are mutually disjoint. We then leverage this result in Section 4 to derive a heuristic method for computing an approximate solution in general. Henceforth, we assume without loss of generality that \((i)\ b_{1}\leq n\), \((ii)\ b_{2}\leq|E|\), \((iii)\) each monitoring set \(E_{v}\) (\(v\in V\)) is nonempty, and \((iv)\) every component \(e\in E\) belongs to at least one monitoring set. Indeed, if some components do not belong to any monitoring set, then **P2** will always target these components and allocate his remaining resources among the components that belong to at least one monitoring set. 
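For small instances the payoff defined above is easy to evaluate directly. The following sketch is our illustration (the data structures and names are ours): it encodes the monitoring sets as a dictionary and evaluates \(U\) for pure action pairs and for independent mixed strategies, exactly as in the expression above.

```python
from itertools import product

def undetected_expectation(sensor_positions, attack_plan, monitoring_sets, accuracies):
    """U(s, T): expected number of undetected attacks for a pure action pair.

    sensor_positions : tuple, entry k is the node where sensor k sits (or None)
    attack_plan      : iterable of targeted components
    monitoring_sets  : dict node -> set of components it monitors
    accuracies       : list of detection probabilities lambda_k
    """
    total = 0.0
    for e in attack_plan:
        undetected = 1.0
        for lam, node in zip(accuracies, sensor_positions):
            if node is not None and e in monitoring_sets.get(node, set()):
                undetected *= (1.0 - lam)
        total += undetected
    return total

def mixed_utility(sigma1, sigma2, monitoring_sets, accuracies):
    """U(sigma1, sigma2): expectation over independent mixed strategies.

    sigma1 : dict mapping sensor positionings (tuples) to probabilities
    sigma2 : dict mapping attack plans (frozensets) to probabilities
    """
    return sum(p1 * p2 * undetected_expectation(s, T, monitoring_sets, accuracies)
               for (s, p1), (T, p2) in product(sigma1.items(), sigma2.items()))
```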
Our game models scenarios where, for instance, each component represents an asset that can be hacked, and each node represents a computer on which software protocols can be installed to detect cyber attacks. In this scenario, our sensors are the software security protocols, which each have a certain probability of detecting a cyber attack. Stronger protocols are harder to be bypassed, and will detect an intrusion with a higher probability than a weaker protocol, which a hacker can more easily bypass. Finally, we note that in a zero-sum game, no player has a first-mover advantage. This implies that if the players were to play sequentially, the equilibrium solutions would remain valid. Thus, the game \(\Gamma\) can be used to model scenarios where the attacker selects his attack strategy after observing the defender's inspection strategy. This type of situation is frequently encountered Figure 1: Game instance on a network containing 5 nodes and 9 components. in cybersecurity applications and various other security problems more broadly. ## 3 Game-Theoretic Analysis for Mutually Disjoint Monitoring Sets In this section, we study the game \(\Gamma\) when all the monitoring sets are mutually disjoint. That is, when \(E_{v}\cap E_{w}=\emptyset\) for all \((v,w)\in V^{2}\) such that \(v\neq w\). Without loss of generality, we rewrite the set of nodes as \(V=\{v_{1},\ldots,v_{n}\}\) so that \(|E_{v_{1}}|\geq\cdots\geq|E_{v_{n}}|\). Furthermore, to simplify the equilibrium analysis, we define for every \((\sigma^{1},v)\in\Delta(A_{1})\times V\) the _detection probability_ of node \(v\) under \(\sigma^{1}\) as: \[p_{\sigma^{1}}(v)\coloneqq\sum_{j=1}^{b_{1}}\lambda_{j}\sum_{\{s\in A_{1}|s_{j }=v\}}\sigma^{1}_{s}.\] That is, \(p_{\sigma^{1}}(v)\) represents the probability that an attack in the monitoring set \(E_{v}\) is detected under the inspection strategy \(\sigma^{1}\). Similarly, we define for every \((\sigma^{2},e)\in\Delta(A_{2})\times E\) the _attack probability_ of component \(e\) under \(\sigma^{2}\) as \[p_{\sigma^{2}}(e)\coloneqq\sum_{\{T\in A_{2}|e\in T\}}\sigma^{2}_{T}.\] That is, \(p_{\sigma^{2}}(e)\) represents the probability that \(e\) is targeted under the attack strategy \(\sigma^{2}\). In order to maximize the expected number of undetected attacks, **P2**'s incentive is to spread his attacks across the monitoring sets, thus making it more challenging for **P1** to detect the attacks. However, **P2** is constrained by the topology of the network, and more particularly by the sizes of the different monitoring sets. This in turn will impact **P1**'s best-response inspection strategy. More formally, we consider the following quantity: \[k^{*}=\min\left\{k\in[n]\left|\frac{b_{2}-\sum_{j=k+1}^{n}|E_{v_{j}}|}{k}\geq|E _{v_{k+1}}|\right.\right\},\] where we let \(|E_{v_{n+1}}|\coloneqq 0\). Essentially, \(\{E_{v_{1}},\ldots,E_{v_{k^{*}}}\}\) represents the monitoring sets that are not fully targeted by **P2** when he spreads his attacks. 
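Because \(k^{*}\) drives the equilibrium structure below, it may help to spell out its computation procedurally. The following sketch is ours and purely illustrative; for the monitoring-set sizes and attack budget of Example 2 below it returns \(k^{*}=3\).

```python
def k_star(sizes, b2):
    """Smallest k such that the attacker's budget left after fully targeting the
    n - k smallest monitoring sets, spread over the k largest sets, covers at
    least |E_{v_{k+1}}| components per set.

    sizes : monitoring-set sizes sorted in non-increasing order
    b2    : number of attack resources
    """
    n = len(sizes)
    for k in range(1, n + 1):
        remaining = b2 - sum(sizes[k:])          # b2 - sum_{j>k} |E_{v_j}|
        next_size = sizes[k] if k < n else 0     # |E_{v_{k+1}}|, with |E_{v_{n+1}}| = 0
        if remaining / k >= next_size:
            return k

print(k_star([5, 4, 4, 2, 1], 10))   # -> 3, matching Example 2 below
```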
The next theorem then characterizes a class of NE of the game \(\Gamma\) when the monitoring sets are mutually disjoint: **Theorem 1**.: _If \(E_{v}\cap E_{w}=\emptyset\) for all \((v,w)\in V^{2}\) such that \(v\neq w\), then a strategy profile \((\sigma^{1*},\sigma^{2*})\in\Delta(A_{1})\times\Delta(A_{2})\) is a NE if it satisfies the following conditions:_ \[p_{\sigma^{1*}}(v_{i})=\begin{cases}\frac{1}{k^{*}}\sum\limits_{j=1}^{\min\{b_{1},k^{*}\}}\lambda_{j}&\text{if }1\leq i\leq k^{*}\\ \lambda_{i}&\text{if }k^{*}<i\leq b_{1}\\ 0&\text{if }\max\{b_{1},k^{*}\}<i\leq n,\end{cases}\] (1) \[\sum_{e\in E_{v_{i}}}p_{\sigma^{2*}}(e)=\begin{cases}\frac{1}{k^{*}}\left(b_{2}-\sum\limits_{j=k^{*}+1}^{n}|E_{v_{j}}|\right)&\text{if }1\leq i\leq k^{*}\\ |E_{v_{i}}|&\text{if }k^{*}<i\leq n.\end{cases}\] (2) In the following lemma, we construct a strategy profile that satisfies the detection and attack probability conditions (1)-(2) of Theorem 1: **Lemma 1**.: 1. (i) _If \(b_{1}\leq k^{*}\), consider for every \(l\in[k^{*}]\) the following sensor positioning:_ \[s^{l}\coloneqq\begin{cases}(v_{l},\ldots,v_{l+b_{1}-1})&\text{if }1\leq l\leq k^{*}-b_{1}+1\\ (v_{l},\ldots,v_{k^{*}},v_{1},\ldots,v_{l+b_{1}-k^{*}-1})&\text{if }k^{*}-b_{1}+1<l\leq k^{*}.\end{cases}\] (ii) _If \(b_{1}>k^{*}\), consider for every \(l\in[k^{*}]\) the following sensor positioning:_ \[s^{l}\coloneqq\begin{cases}(v_{1},\ldots,v_{k^{*}},v_{k^{*}+1},\ldots,v_{b_{1}})&\text{if }l=1\\ (v_{l},\ldots,v_{k^{*}},v_{1},\ldots,v_{l-1},v_{k^{*}+1},\ldots,v_{b_{1}})&\text{if }1<l\leq k^{*}.\end{cases}\] _Then, \(\sigma^{1*}\in\Delta(A_{1})\) defined by_ \[\sigma^{1*}_{s^{l}}=\frac{1}{k^{*}}\;\forall l\in[k^{*}],\;\text{and}\;\sigma^{1*}_{s}=0\;\text{otherwise},\] _satisfies condition (1) in Theorem 1._ 2.
_Let_ \(b^{\prime}_{2}\coloneqq k^{*}\left\lfloor\frac{1}{k^{*}}\left(b_{2}-\sum_{j=k^{*}+1}^{n}|E_{v_{j}}|\right)\right\rfloor\)_, and for_ \(l\in[k^{*}]\) _let_ \[C^{l}\coloneqq\{1,\ldots,l+b_{2}-b^{\prime}_{2}-k^{*}-1\}\cup\{l,\ldots,\min\{l+b_{2}-b^{\prime}_{2}-1,k^{*}\}\}.\] _Consider attack plans_ \(T^{l}\;(l\in[k^{*}])\) _defined as follows:_ \[\left|T^{l}\cap E_{v_{j}}\right|\coloneqq\begin{cases}\frac{b^{\prime}_{2}}{k^{*}}+1&\text{if }j\in C^{l}\\ \frac{b^{\prime}_{2}}{k^{*}}&\text{if }j\in[k^{*}]\setminus C^{l}\\ \left|E_{v_{j}}\right|&\text{if }k^{*}<j\leq n.\end{cases}\] _Then, \(\sigma^{2*}\in\Delta(A_{2})\) defined by_ \[\sigma^{2*}_{T^{l}}=\frac{1}{k^{*}}\;\forall l\in[k^{*}],\;\text{and}\;\sigma^{2*}_{T}=0\;\text{otherwise},\] _satisfies condition (2) in Theorem 1._ From Lemma 1, we find that an equilibrium inspection strategy can be constructed by "cycling" the positioning of sensors \(1,\ldots,\min\{k^{*},b_{1}\}\) among the nodes \(v_{1},\ldots,v_{k^{*}}\): \(s^{1}\) positions sensor 1 at node \(v_{1}\), sensor 2 at node \(v_{2}\), and so on. Then, \(s^{2}\) positions sensor 1 at node \(v_{2}\), sensor 2 at node \(v_{3}\), and so on. Furthermore, if \(b_{1}>k^{*}\), then **P1** deterministically positions sensors \(k^{*}+1,\ldots,b_{1}\) at the remaining nodes, in decreasing order of their monitoring sets' size: she positions sensor \(k^{*}+1\) at node \(v_{k^{*}+1}\), sensor \(k^{*}+2\) at node \(v_{k^{*}+2}\), and so on. Similarly, an equilibrium attack strategy can be constructed by first deterministically targeting all the components in \(E_{v_{k^{*}+1}},\ldots,E_{v_{n}}\). Then, \(\left\lfloor\frac{1}{k^{*}}\left(b_{2}-\sum_{i=k^{*}+1}^{n}|E_{v_{i}}|\right)\right\rfloor\) components are deterministically targeted within each monitoring set in \(E_{v_{1}},\ldots,E_{v_{k^{*}}}\). Finally, **P2** "cycles" his remaining attack resources (if any are remaining) over the remaining components in \(E_{v_{1}},\ldots,E_{v_{k^{*}}}\). Next, we illustrate Theorem 1 and Lemma 1 with an example. **Example 2**.: Consider the network shown in Figure 2. In this illustration, each square represents a component that can only be monitored from the node indicated below it. Thus, in this example, \(|E_{v_{1}}|=5\), \(|E_{v_{2}}|=4\), \(|E_{v_{3}}|=4\), \(|E_{v_{4}}|=2\), and \(|E_{v_{5}}|=1\). To simplify our equilibrium description, let \(e_{i,j}\;(i\in[n],\;j\in[|E_{v_{i}}|])\) represent the component in layer \(j\) of monitoring set \(E_{v_{i}}\). This example can be used to represent a computer network in which each computer lies within a closed section of the network, and such that each computer in a given closed section can detect cyberattacks conducted against only the components in its section. Suppose that **P1** has 4 sensors. Furthermore, we consider that **P2** has \(b_{2}=10\) attack resources. To spread his attacks in equilibrium, **P2** can first allocate 5 attack resources to target one component in each monitoring set (in layer 1). Then **P2** can allocate 4 attack resources to target one more component in each monitoring set that is not fully targeted (in layer 2). Finally, **P2** can uniformly randomize his remaining attack resource among the remaining 3 monitoring sets that still have untargeted components. Figure 2: Example of a network with 5 nodes, 16 components, and mutually disjoint monitoring sets.
In particular, \(k^{*}=3\) in this example, and an attack strategy \(\sigma^{2*}\) constructed from Lemma 1 is given as follows: \[\sigma_{T}^{2*}=\begin{cases}\frac{1}{3}&\text{if }\,T=T_{0}\cup\{e_{1,3}\}\\ \frac{1}{3}&\text{if }\,T=T_{0}\cup\{e_{2,3}\}\\ \frac{1}{3}&\text{if }\,T=T_{0}\cup\{e_{3,3}\}\\ 0&\text{otherwise,}\end{cases}\] where \(T_{0}=\{e_{1,1},e_{2,1},e_{3,1},e_{4,1},e_{5,1},e_{1,2},e_{2,2},e_{3,2},\\ e_{4,2}\}\). We note that \(\sigma^{2*}\) satisfies conditions (2). Since **P1** has \(4>k^{*}\) sensors, she cycles the positioning of her 3 most accurate sensors among the nodes \(v_{1}\), \(v_{2}\), \(v_{3}\), and deterministically positions her remaining sensor at \(v_{4}\). The construction of such an equilibrium inspection strategy \(\sigma^{1*}\) from Lemma 1 is given as follows: \[\sigma_{s}^{1*}=\begin{cases}\frac{1}{3}&\text{if }\,s=(v_{1},v_{2},v_{3},v_{4}) \\ \frac{1}{3}&\text{if }\,s=(v_{2},v_{3},v_{1},v_{4})\\ \frac{1}{3}&\text{if }\,s=(v_{3},v_{1},v_{2},v_{4})\\ 0&\text{otherwise.}\end{cases}\] The NE \((\sigma^{1*},\sigma^{2*})\) is illustrated in Figure 3. In this example, sensor 1 (in green) has accuracy \(\lambda_{1}=0.9\), sensor 2 (in yellow) has accuracy \(\lambda_{2}=0.5\), sensor 3 (in orange) has accuracy \(\lambda_{3}=0.4\), and sensor 4 (in maroon) has accuracy \(\lambda_{4}=0.2\). In this NE, we observe that the 3 most accurate sensors are randomized so that the detection probability of each node in \(\{v_{1},v_{2},v_{3}\}\) has an identical detection probability given by \(\frac{1}{3}(0.9+0.5+0.4)=0.6\). We also note that node \(v_{5}\) is never monitored in this NE. The expected number of attacks in each of the monitoring sets \(E_{v_{1}}\), \(E_{v_{2}}\), \(E_{v_{3}}\) is given by \(\frac{7}{3}\). Every component in the remaining monitoring sets is deterministically targeted. Thus, the value of the game \(\Gamma\), i.e., the expected number of undetected attacks in equilibrium, for this example is given by \(10-3\times 0.6\times\frac{7}{3}-0.2\times 2=5.4\). \(\triangle\) Theorem 1 demonstrates that there are scenarios where it is beneficial for **P1** to leave some components completely unmonitored and instead allocate her resources on parts of the network where there will be a larger number of attacks. Such scenarios occur when \(k^{*}<n\), i.e., when the number of attack resources \(b_{2}\) is large enough and the monitoring sets are of heterogeneous sizes. Conversely, when \(k^{*}=n\), which occurs if and only if \(b_{2}<n\left|E_{v_{n}}\right|\), **P1** randomizes her sensors over all the nodes in the network and monitors every component with identical probability. In fact, in such cases, we have the following result: **Corollary 1**.: _The set of equilibrium inspection strategies is identical for any number of attack resources satisfying \(b_{2}<n|E_{v_{n}}|\)._ Hence, if **P1** does not know the exact number of attack resources **P2** has at his disposal, but knows that \(b_{2}<n|E_{v_{n}}|\), then she can compute an equilibrium inspection strategy by simply assuming that \(b_{2}=1\). 
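The equilibrium value reported in Example 2 follows mechanically from the detection and attack probabilities of Theorem 1: in the disjoint case, under the construction of Lemma 1, an attack on a component of \(E_{v_{i}}\) goes undetected with probability \(1-p_{\sigma^{1*}}(v_{i})\). The short sketch below is our illustration only (names and data structures are ours); it reproduces the value \(5.4\) for the data of Example 2.

```python
def disjoint_equilibrium_value(sizes, lambdas, b2):
    """Expected number of undetected attacks in equilibrium when the monitoring
    sets are mutually disjoint, following the probabilities of Theorem 1.

    sizes   : monitoring-set sizes |E_{v_1}| >= ... >= |E_{v_n}|
    lambdas : sensor accuracies lambda_1 >= ... >= lambda_{b_1}
    b2      : number of attack resources
    """
    n, b1 = len(sizes), len(lambdas)
    # k* as defined in Section 3 (|E_{v_{n+1}}| := 0)
    ks = next(k for k in range(1, n + 1)
              if b2 - sum(sizes[k:]) >= k * (sizes[k] if k < n else 0))
    p_top = sum(lambdas[:min(b1, ks)]) / ks      # common detection prob. of v_1..v_{k*}
    value = 0.0
    for i in range(1, n + 1):
        if i <= ks:
            p, attacks = p_top, (b2 - sum(sizes[ks:])) / ks
        else:
            p, attacks = (lambdas[i - 1] if i <= b1 else 0.0), sizes[i - 1]
        value += attacks * (1.0 - p)
    return value

# Example 2: |E_v| = (5, 4, 4, 2, 1), accuracies (0.9, 0.5, 0.4, 0.2), b2 = 10
print(disjoint_equilibrium_value([5, 4, 4, 2, 1], [0.9, 0.5, 0.4, 0.2], 10))  # ~5.4
```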
Next, we investigate conditions under which **P2** needs to use all of his \(b_{2}\) resources in equilibrium when the monitoring sets are mutually disjoint: **Proposition 1**.: _If \(b_{1}\geq k^{*}\) and \(\lambda_{j}=1\) for every \(j\in[k^{*}]\), then for any \(b_{2}>k^{*}|E_{v_{k^{*}+1}}|+\sum_{j=k^{*}+1}^{n}|E_{v_{j}}|\), an attack plan \(T^{*}\) of size \(k^{*}|E_{v_{k^{*}+1}}|+\sum_{j=k^{*}+1}^{n}|E_{v_{j}}|\) that satisfies_ \[\forall j\in[n],\;|T^{*}\cap E_{v_{j}}|=\min\{|E_{v_{j}}|,|E_{v_{k^{*}+1}}|\}\] _is an equilibrium attack strategy._ _Otherwise, for any \(b_{2}\leq|E|\), any equilibrium attack strategy \(\sigma^{2*}\) necessarily randomizes over attack plans \(T\) of size exactly \(b_{2}\)._ This proposition shows that if **P1** has at least \(k^{*}\) sensors with perfect detection accuracy, then **P2** does not need to utilize more than \(k^{*}|E_{v_{k^{*}+1}}|+\sum_{j=k^{*}+1}^{n}|E_{v_{j}}|\) attack resources in equilibrium. Indeed, any additional attack resource would be necessarily allocated to components monitored by perfect sensors, and hence will be detected with probability 1. Therefore, simply targeting \(\min\{|E_{v_{j}}|,|E_{v_{k^{*}+1}}|\}\) components within each monitoring set \(E_{v_{j}}\) ensures a maximum expected number of undetected attacks in equilibrium. Figure 3: Example of a NE. Finally, the following proposition shows that **P1** must always use all her sensors in equilibrium: **Proposition 2**.: _For any \(b_{1}\leq n\), any equilibrium inspection strategy \(\sigma^{1*}\) necessarily randomizes over sensor positionings \(s\in A_{1}\) such that \(s_{k}\neq 0\) for all \(k\in[b_{1}]\)._ From this proposition, we conclude that in any NE, **P1**'s inspection strategy must randomize over sensor positionings that utilize all her resources when \(b_{1}\leq n\). ## 4 General Case Approximation ### Solution Approach In this section, we leverage our equilibrium results in the case of disjoint monitoring sets to design a heuristic approach for computing an approximate equilibrium inspection strategy in general. In the general case when monitoring sets are not necessarily disjoint, the main challenge lies in determining the subset of nodes that should receive sensors in equilibrium. As observed in Section 3, **P2** aims to spread his attacks to maximize the number of undetected attacks. Therefore, **P1**'s incentive is to position her sensors on nodes that collectively monitor a large number of network components. One natural candidate set of nodes to receive sensors is given by a _minimum set cover_, i.e., a set of nodes \(S\in 2^{V}\) of minimum size that collectively monitors all network components. Minimum set covers can be obtained by solving the following optimization problem, which can be formulated as an integer program: \[\min_{S\in 2^{V}}\,|S|\,\text{subject to}\,\,\cup_{v\in S}\,E_{v}=E.\] Although the minimum set cover problem is NP-hard, modern mixed-integer optimization solvers can be used to optimally solve large-scale problem instances [14]. To utilize our results in Section 3, we must recreate an instantiation where the monitoring sets are mutually disjoint. To this end, we partition the set of network components by utilizing the monitoring sets of the nodes in a minimum set cover \(S=\{v^{\prime}_{1},\ldots,v^{\prime}_{m}\}\in 2^{V}\). In Theorem 1, we observed that **P2** cannot spread his attacks as much in the disjoint case when the monitoring sets are of heterogeneous sizes, thus leading to a lower expected number of undetected attacks. 
Hence, we partition the set of network components into \(m\) subsets by greedily assigning each component to the largest monitoring set containing that component. Specifically, we first determine the monitoring set \(E_{v}\), \(v\in S\) of maximum size, suppose it is \(E_{v^{\prime}_{1}}\), and then remove every component that belongs to \(E_{v}\cap E_{v^{\prime}_{1}}\) (for all \(v\in S\setminus\{v^{\prime}_{1}\}\)) from \(E_{v}\). We then repeat this process with the second largest monitoring set, and so on until each network component belongs to exactly one set in the partitioning. Once this partitioning is obtained, we have an instance with \(m\) disjoint monitoring sets. From this, we construct an inspection strategy \(\sigma^{1^{\prime}}\) according to Lemma 1 that satisfies (1) in Theorem 1. Since equilibrium inspection strategies are optimal solutions of \((\mathcal{P})\) (see Section 2), we evaluate the performance of our approximate inspection strategy \(\sigma^{1^{\prime}}\) by computing its objective value in \((\mathcal{P})\), i.e., \(\max_{T\in A_{2}}U(\sigma^{1^{\prime}},T)\). This determines the worst-case expected number of undetected attacks if **P1** selects \(\sigma^{1^{\prime}}\) as her inspection strategy. Since for every attack plan \(T\in A_{2}\), \(U(\sigma^{1^{\prime}},T)=\sum_{e\in T}U(\sigma^{1^{\prime}},e)\), the largest number of undetected attacks can be efficiently computed by greedily selecting the \(b_{2}\) components with highest probability of undetection under \(\sigma^{1^{\prime}}\). Our heuristic approach can be summarized as follows: ``` Input:- Set of nodes \(V\) - Set of components \(E\) - Monitoring sets \(E_{v},\,\,v\in V\) - Number of sensors \(b_{1}\in\mathbb{N}\) - Number of attack resources \(b_{2}\in\mathbb{N}\) - Sensors' accuracies \(\lambda_{k}\!\in\!(0,1]\), \(k\!\in\![b_{1}]\) Result:- Inspection strategy \(\sigma^{1^{\prime}}\in\Delta(A_{1})\) 1 Compute a minimum set cover \(S\!=\!\{v^{\prime}_{1},\ldots,v^{\prime}_{m}\}\) Set \(E^{\prime}_{v}\gets E_{v}\), \(\forall v\in S\) Set \(V^{\prime}\gets S\)while\(V^{\prime}\neq\emptyset\)do 2 Select \(v^{\prime}\in\arg\max\{|E^{\prime}_{v}|,v\in V^{\prime}\}\) \(E^{\prime}_{v}\gets E^{\prime}_{v}\setminus(E^{\prime}_{v^{\prime}}\cap E ^{\prime}_{v})\), \(\forall v\in V^{\prime}\setminus\{v^{\prime}\}\)\(V^{\prime}\gets V^{\prime}\setminus\{v^{\prime}\}\) 3 end while 4 Order the nodes in \(S\) so that \(\big{|}E^{\prime}_{v^{\prime}_{1}}\big{|}\!\geq\!\cdots\!\geq\!\big{|}E^{\prime} _{v^{\prime}_{m}}\big{|}\) 5\(k^{*}\!\leftarrow\!\!\min\left\{k\in[m]\,\frac{b_{2}-\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! 
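The greedy partitioning loop and the worst-case evaluation described above can be sketched as follows; the dictionaries and the `undetection_prob` map are illustrative assumptions, not the paper's actual instances.

```python
# Sketch of the greedy partitioning step: repeatedly fix the largest remaining
# monitoring set and remove its components from all other sets in the cover.
def greedy_partition(cover, monitoring_sets):
    remaining = {v: set(monitoring_sets[v]) for v in cover}
    partition = {}
    while remaining:
        v_star = max(remaining, key=lambda v: len(remaining[v]))
        partition[v_star] = remaining.pop(v_star)
        for v in remaining:
            remaining[v] -= partition[v_star]
    return partition

# Worst-case evaluation of an inspection strategy: since the payoff is additive
# over components, P2's best response targets the b2 components with the
# highest probability of not being detected.
def worst_case_undetected(undetection_prob, b2):
    """undetection_prob: dict mapping component e to U(sigma1', e)."""
    return sum(sorted(undetection_prob.values(), reverse=True)[:b2])
```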
This real-world network from Kentucky is composed of 420 nodes that can receive sensors, and 492 components that are vulnerable to cyber-physical attacks, which induce disruptions. To detect these attacks, we consider that the defender has access to flow and pressure sensors that can be deployed at access points and shifted from one to another. These sensors can measure signals which can be used to detect the sudden rate of change of pressure or mass flow at different locations of the network. In our study, we compute the monitoring set of each node through simulations using a threshold-based detection model, as proposed in [23, 24]. All network simulations were implemented in Matlab, and all optimization problems were solved using Gurobi on a computer with a 2.3 GHz 8-Core Intel Core i9 processor and 32 GB of RAM. To evaluate the performance of our heuristic approach we consider 10 game instances where **P2** has \(b_{2}=1\) attack resource and **P1** has \(b_{1}\in[10]\) sensors, with sensor \(k\in[b_{1}]\) having accuracy \(\lambda_{k}=1-0.05(k-1)\). For such instances, \((\mathcal{P})\) only has 494 constraints since \(b_{2}=1\). Therefore, equilibrium inspection strategies of \(\Gamma\) can be obtained by solving \((\mathcal{P})\) using the column generation algorithm. We now implement our heuristic approach: We solve the minimum set cover problem, and obtain a set of 19 nodes. Next, we greedily partition the set of network components into 19 sets. Finally, we construct an inspection strategy \(\sigma^{1^{\prime}}\) according to Lemma 1. The worst-case expected number of undetected attacks under the inspection strategy \(\sigma^{1^{\prime}}\) is then computed by selecting the \(b_{2}\) components with the highest probability of not being detected under \(\sigma^{1^{\prime}}\). In Figure 5, we illustrate for \(b_{1}\in[10]\) the optimality gap achieved by \(\sigma^{1^{\prime}}\), i.e., the relative difference between the worst-case performance of \(\sigma^{1^{\prime}}\) and the value of the game (given by the optimal value of \((\mathcal{P})\)). From Figure 5, we observe that our heuristic solution achieves a detection performance that is close to the detection performance in equilibrium. However, we note that as the number of sensors increases, the optimality gap associated with our heuristic solution increases. This is due to the fact that when **P1** has more sensors, she can strategically coordinate their positioning so as to maximize the detection probabilities of the components that are monitored from multiple locations. In contrast, our heuristic approach assigns such components to a single monitoring set to construct an inspection strategy using a disjoint instance. Next, we compare in Figure 6 the running times of our heuristic method with the running times of the column generation algorithm for computing equilibrium inspection strategies. Interestingly, we observe that our heuristic solution is obtained in 0.11 seconds, and this running time is almost identical for any number of sensors. The reason is that most of the running time is spent computing a minimum set cover. As previously mentioned, although this problem is NP-hard, it can be efficiently solved by modern mixed-integer optimization solvers. In contrast, the time required to compute an equilibrium inspection strategy using column generation increases exponentially with the number of sensors \(b_{1}\). 
This is due to the fact that the number of variables in \((\mathcal{P})\) grows combinatorially with respect to \(b_{1}\). For instance, Figure 4: Benchmark Kentucky distribution network. Figure 5: Optimality gap of the heuristic solution when \(b_{2}=1\). when \(b_{1}=10\), the number of variables in \((\mathcal{P})\) is is approximately \(1.54\cdot 10^{26}\) for this network. Finally, we note that the column generation algorithm for computing equilibrium inspection strategies cannot be used in practice when \(b_{2}>1\), as the number of constraints in \((\mathcal{P})\) grows combinatorially with respect to \(b_{2}\). By leveraging the analytical characterization derived in Section 3, our heuristic approach remains scalable for any value of \(b_{1}\) and \(b_{2}\), and can be implemented for large-scale networks, as minimum set covers have been shown to be efficiently solvable for networks containing more than 100,000 nodes and components [14]. ## 5 Conclusion In this paper, we studied a network inspection game in which a defender allocates sensors with potentially heterogeneous detection capabilities in order to detect multiple attacks caused by a strategic attacker. In this two-person zero-sum game, the defender (resp. attacker) seeks to minimize (resp. maximize) the expected number of undetected attacks by selecting a potentially randomized inspection (resp. attack) strategy. When the monitoring sets are mutually disjoint, we derived an analytical characterization of a class of NE for this game. Additionally, we studied the dependence of these NE on the network topology, sensor accuracies, and the number of resources the attacker has at his disposal. We then leveraged our equilibrium analysis to design a heuristic solution approach for the general case based on minimum set covers. Our computational study on a benchmark cyber-physical distribution network showed that our heuristic approach is computationally tractable and provides inspection strategies with good detection performance. In future work, we aim to refine our heuristic solution approach and provide theoretical performance guarantees.
2308.10339
Production of ${^{180\rm{m}}}$Hf in photoproton reaction ${^{181}}$Ta$(γ,p)$ at energy $E_{\rm{γmax}}$ = 35-95 MeV
The production of the $^{180\rm{m}}\rm{Hf}$ nuclei in the photoproton reaction ${^{181}\rm{Ta}}(\gamma,p)$ was studied at end-point bremsstrahlung energies $E_{\rm{\gamma max}}$ = 35-95 MeV. The experiment was performed at the electron linear accelerator LUE-40 NSC KIPT with the use of the $\gamma$ activation and off-line $\gamma$-ray spectroscopy. The experimental values of the bremsstrahlung flux-averaged cross-sections $\langle{\sigma(E_{\rm{\gamma max}})}\rangle_{\rm{m}}$ for the ${^{181}\rm{Ta}}(\gamma,p)^{180\rm{m}}\rm{Hf}$ reaction were determined; those at $E_{\rm{\gamma max}} > 55$ MeV were obtained for the first time. The measured values, as well as the literature data, significantly exceed the theoretical flux-averaged cross-sections $\langle{\sigma(E_{\rm{\gamma max}})}\rangle_{\rm{th}}$. The $\langle{\sigma(E_{\rm{\gamma max}})}\rangle_{\rm{th}}$ values were calculated using the cross-section $\sigma(E)$ computed with the TALYS1.95 code for six different level density models. A comparative analysis of the calculated total cross-sections for the reactions ${^{181}\rm{Ta}}(\gamma,p)^{180}\rm{Hf}$ and ${^{181}\rm{Ta}}(\gamma,n)^{180}\rm{Ta}$ was performed. It was shown that the photoproton $(\gamma,p)$ to photoneutron $(\gamma,n)$ strength ratio is consistent with the estimates based on the isospin selection rules and the value from the $(e,e'p)$ experiment.
I. S. Timchenko, O. S. Deiev, S. N. Olejnik, S. M. Potin, L. P. Korda, V. A. Kushnir, V. V. Mytrochenko, S. A. Perezhogin, A. Herzáň
2023-08-20T18:54:53Z
http://arxiv.org/abs/2308.10339v1
Production of \({}^{180{\rm m}}\)Hf in photoproton reaction \({}^{181{\rm Ta}}(\gamma,p)\) at energy \(E_{\gamma{\rm max}}\) = 35-95 MeV ###### Abstract The production of the \({}^{180{\rm m}}\)Hf nuclei in the photoproton reaction \({}^{181{\rm Ta}}(\gamma,p)\) was studied at endothermic bremsstrahlung energies \(E_{\gamma{\rm max}}\) = 35-95 MeV. The experiment was performed at the electron linear accelerator LUE-40 NSC KIPT with the use of the \(\gamma\) activation and off-line \(\gamma\)-ray spectroscopy. The experimental values of the bremsstrahlung flux-averaged cross-sections \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm m}\) for the \({}^{181{\rm Ta}}(\gamma,p)^{180{\rm m}}\)Hf reaction were determined, and at \(E_{\gamma{\rm max}}>55\) MeV obtained for the first time. The measured values, also as the literature data, are significantly exceed the theoretical flux-averaged cross-sections \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm th}\). The \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm th}\) values were calculated using the cross-section \(\sigma(E)\) computed with the TALYS1.95 code for six different level density models. A comparative analysis of the calculated total cross-sections for the reactions \({}^{181{\rm Ta}}(\gamma,p)^{180{\rm Hf}}\) and \({}^{181{\rm Ta}}(\gamma,n)^{180{\rm Ta}}\) was performed. It was shown that the photoproton \((\gamma,p)\) to photoneutron \((\gamma,n)\) strength ratio is consistent with the estimates based on the isospin selection rules and the value from the \((e,e^{\prime}p)\) experiment. pacs: 25.20.-xPhotonuclear reactions and 27.70.+q150 \(\leq A\leq 189\) + Footnote †: journal: EPJ ## 1 Introduction Experimental data on cross-sections for photonuclear reactions are important for many fields of science and technology. These data are necessary for traditional studies of the Giant Dipole Resonance (GDR), and mechanisms of its excitation and decay including competition between statistical and direct processes in decay channels, GDR configurational and isospin splitting, sum rule exhaustion, etc. The cross-sections for photonuclear reactions are also widely used in various applications, primarily in astrophysics [1], medicine [2], design of fast reactors [3] and accelerator driven sub-critical systems [4; 5]. Data on the cross-sections can be found in the comprehensive Atlases [6; 7], and these experimental results are included in the international digital databases EXFOR [8], ENDF [9], RIPL [10], and others. It was shown [11; 12; 13; 14; 15] that dicrepancies exist in the data on photoneutron cross-sections obtained in different laboratories. This led to work on the analysis of the reliability of previously measured experimental cross-sections [16], and initiated new measurements, e.g. [17; 18; 19; 20]. In the case of photoproton reactions, an analysis is also required to establish patterns and criteria for data reliability, as, for example, shown in [21]. However, there is a lack of experimental data, especially in the region of nuclei with a mass number \(A>100\) for which the \((\gamma,p)\) reaction yields are strongly suppressed. Previously, photonuclear reactions \({}^{181{\rm Ta}}(\gamma,{\rm x}n)^{180{\rm-x}}\)Ta were studied for reactions with \({\rm x}\leq 8\) at photon energies up to 130 MeV [16; 20; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32]. Absolute photoneutron cross-sections \(\sigma(E)\) were obtained for reactions with \({\rm x}\leq 4\) on quasi-monoenergetic photon beams [16; 24; 25; 26; 27; 28; 29]. 
Flux-averaged cross-section \(\langle\sigma(E_{\gamma{\rm max}})\rangle\) for reactions with a large number of particles in the outlet channel were determined using intense beams of bremsstrahlung \(\gamma\) rays [20; 30; 31; 32]. At the same time, according to the EXFOR nuclear database [8], photonuclear reactions on \({}^{181{\rm Ta}}\) with the yields of charged particles were studied only in a few works [33; 34; 32] due to low cross-section of such reactions. Estimates made in the TALYS1.95 code [35] for the reaction \({}^{181{\rm Ta}}(\gamma,p)^{180{\rm m}}\)Hf using the Generalized superfluid level density model (\(LD3\)) give a value of cross-section \(\sigma(E)\) \(\approx 0.0136\) mb at the maximum (\(E\approx 27\) MeV). For comparison, in the case of the \({}^{181}\)Ta\((\gamma,n)\) reaction the experimental values of \(\sigma(E)\) are about 400 mb at the energy of GDR maximum [16]. Note that values of \(\sigma(E)\) of the \({}^{181}\)Ta\((\gamma,p)\)\({}^{180\rm m}\)Hf reaction recalculated to flux-averaged cross-section \(\langle\sigma(E_{\gamma\rm max})\rangle\) using GEANT4.9.2 [36] decreased to 3.0-4.2 \(\mu\)b in the energy range 35-95 MeV. Previously, the experimental yield for reaction \({}^{181}\)Ta\((\gamma,p)\)\({}^{180\rm m}\)Hf relative to \({}^{181}\)Ta\((\gamma,n)\)\({}^{180}\)Ta were obtained at end-point bremsstrahlung energy \(E_{\gamma\rm max}=67.7\) MeV and was found value \((5\pm 1)\times 10^{-4}\)[32]. This relative yield was compared with the calculation in TALYS, which gives a value of \(3\times 10^{-5}\). Since the calculation of the flux-averaged cross-section for the \({}^{181}\)Ta\((\gamma,n)\)\({}^{180}\)Ta reaction in the TALYS1.95 code agrees well with the experimental data [20; 30] in a wide energy range, the observed discrepancy in the relative yield must be due to the differences between the experimental and calculated cross-sections for the production of the \({}^{180\rm m}\)Hf nuclei. The flux-averaged yields for the \({}^{181}\)Ta\((\gamma,p)\)\({}^{180\rm m}\)Hf reaction were studied at \(E_{\gamma\rm max}=\) 20, 40, and 55 MeV in [33]. The comparison of experimental data with the TALYS1.9 code showed a significant discrepancy with the calculation. In this work, we studied the production of \({}^{180\rm m}\)Hf in a photoprotron reaction by means of \(\gamma\)-ray spectroscopy. The bremsstrahlung flux-averaged cross-section \(\langle\sigma(E_{\gamma\rm max})\rangle_{\rm m}\) was determined in the range of end-point bremsstrahlung energy \(E_{\gamma\rm max}=\) 35-95 MeV. Also, theoretical calculations using the TALYS1.95 code were performed. A comparative analysis of the calculated total cross-sections for the reactions \({}^{181}\)Ta\((\gamma,p)\)\({}^{180}\)Hf and \({}^{181}\)Ta\((\gamma,n)\)\({}^{180}\)Ta was done. The photoproton \((\gamma,p)\) to photoneutron \((\gamma,n)\) strength ratio was calculated according to the isospin selection rules. This data was compared with calculations in TALYS1.95, and with the experimental value obtained using the cross-sections from the \((e,e^{\prime}p)\) experiment [37]. ## 2 Experimental procedure and flux-averaged cross-sections determination ### Experimental setup and method The experiment to study the production of the \({}^{180\rm m}\)Hf nuclei in photoproton reaction \({}^{181}\)Ta\((\gamma,p)\) was carried out using the method of measuring the residual \(\gamma\)-activity of an irradiated sample. 
This technique enables us to obtain simultaneously the data from different channels of photonuclear reactions, e.g. \((\gamma,p)\), \((\gamma,n)\), \((\gamma,2n)\) etc., for example, [38; 39; 40; 41]. The experiment was performed at the National Science Center "Kharkov Institute of Physics and Technology" (NSC KIPT), Ukraine, employing the electron linear accelerator LUE-40 [42; 43]. To generate bremsstrahlung \(\gamma\) quanta, electrons with initial energy \(E_{e}\) impinged on a converter made of a natural tantalum plate with transverse dimensions of 20\(\times\)20 mm and a thickness of 1.05 mm. The flux of bremsstrahlung \(\gamma\) quanta was cleaned from electrons using an Al absorber of cylindrical shape with a diameter of 100 mm and a length of 150 mm. There were two types of targets in the experiment. The \({}^{\rm nat}\)Ta target was used to investigate the production of the \({}^{180\rm m}\)Hf nuclei, while the purpose of the \({}^{\rm nat}\)Mo target-monitor was to control a \(\gamma\)-ray flux. The \({}^{\rm nat}\)Ta and \({}^{\rm nat}\)Mo targets had a shape of a disk with a diameter of 8 mm. Their thicknesses and masses were 50 \(\mu\)m, \(\sim\)43 mg and 100 \(\mu\)m, \(\sim\)60 mg, respectively. Both targets were simultaneously placed in a thin aluminum capsule and transported to/out of the irradiation site using a pneumatic transport system. After that, irradiated samples were taken to the measurement room where, after their removal from the Al capsule, the residual activities were measured. The cooling time for the Ta target, taking into account the transfer and removal of the target from the capsule, was no more than 2-3 min. The irradiation time and the duration of measuring the residual \(\gamma\) activity spectrum were both 30 min long. The scheme of the experiment is shown in Fig. 1. The induced \(\gamma\)-activity of the irradiated targets was measured by the semiconductor high-purity germanium (HPGe) detector, model Canberra GC-2018 with the energy resolution (FWHM) of 0.8 and 1.8 keV at 122 and 1332.5 keV, respectively. Its detection efficiency, \(\varepsilon\), at 1332.5 keV was 20% relative to the NaI(Tl) scintillator, 3 inches in diameter and 3 inches in thickness. Calibration of the detection efficiency was done by using a set of \(\gamma\)-ray radiation sources: \({}^{22}\)Na, \({}^{60}\)Co, \({}^{133}\)Ba, \({}^{137}\)Cs, \({}^{152}\)Eu, \({}^{241}\)Am. The numerical value of \(\varepsilon\) was determined for various \(\gamma\)-ray energies using the analytical curve in the form \({\rm ln}\varepsilon=\sum\limits_{i=1}^{n}a_{i}({\rm ln}E_{\gamma})^{i}\) proposed in [44]. At the end-point bremsstrahlung energies \(E_{\gamma\rm max}=\) 60.4 and 80.5 MeV, additional measurements of the \({}^{181}\)Ta\((\gamma,p)\)\({}^{180\rm m}\)Hf reaction cross-sections were performed Figure 1: The schematic block diagram of the experimental setup. The upper part shows the measurement room, where the exposed target (red colour) and the Mo target-monitor (blue colour) are extracted from the capsule and placed one by one to the HPGe detector for induced \(\gamma\)-activity measurements. The lower part shows the accelerator LUE-40, Ta converter, Al absorber, and the exposure reaction chamber. at the different experimental setup [45; 46]. In this case, a \({}^{\rm nat}\)Ta converter foil with a thickness of 100 \(\mu\)m, and a bending magnet to clean the bremsstrahlung \(\gamma\)-flux from electrons were used. 
The bremsstrahlung spectra of electrons were simulated using the GEANT4.9.2 toolkit [36]. In the simulation, the real relative position of the target, Ta converter, Al absorber, and elements of the experimental equipment, as well as the spatial and energy distribution of the electron beam were used as the input parameters. The monitoring of the calculated bremsstrahlung \(\gamma\) flux was performed with the use of the \({}^{100}\)Mo\((\gamma,n)^{99}\)Mo reaction yield. For this purpose, the experimentally obtained flux-averaged cross-sections were compared with the theoretical values. To determine the experimental \(\langle\sigma(E_{\gamma{\rm max}})\rangle\) values, we have used the number of counts under the \(\gamma\)-ray peak at \(E_{\gamma}=739.50\) keV with the intensity \(I_{\gamma}=12.13\%\)[47]. The theoretical values of the flux-averaged cross-section \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm th}\) were calculated using the cross-sections \(\sigma(E)\) from the TALYS1.95 code [35]. The resulting normalization coefficients \(k_{\rm mo}=\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm th}/\langle\sigma(E_ {\gamma{\rm max}})\rangle\) were used to normalize the cross-sections of the reaction under study. More details about the monitoring procedure can be found in [41; 55; 56]. The irradiated Ta converter and Al absorber generate neutrons that can trigger the reaction \({}^{100}\)Mo\((n,2n)^{99}\)Mo. To evaluate also this option, energy spectra of neutrons above the threshold energies were calculated using the GEANT4.9.2, similarly to [57]. The contribution of the \({}^{100}\)Mo\((n,2n)^{99}\)Mo reaction to the value of the induced activity of the \({}^{99}\)Mo nucleus has been estimated and it has been shown that this contribution is negligible compared to the contribution of \({}^{100}\)Mo\((\gamma,n)^{99}\)Mo. The contribution of the reaction \({}^{100}\)Mo\((\gamma,p)^{99}\)Nb, \({}^{99}\)Nb \(\stackrel{{\beta^{-}}}{{\longrightarrow}}\)\({}^{99}\)Mo is also negligible. ### Calculation of the flux-averaged cross-sections The values of the theoretical cross-section \(\sigma(E)\) computed with the TALYS1.95 code [35] were averaged over the bremsstrahlung \(\gamma\)-flux \(W(E,E_{\gamma{\rm max}})\) from the threshold energy \(E_{\rm thr}\) of the reaction under study to the end-point bremsstrahlung energy \(E_{\gamma{\rm max}}\). As a result of this procedure, the flux-averaged cross-sections \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm th}\) were calculated using the equation: \[\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm th}=\Phi^{-1}(E_{\gamma{\rm max }})\int\limits_{E_{\rm thr}}^{E_{\gamma{\rm max}}}\sigma(E)W(E,E_{\gamma{\rm max }})dE, \tag{1}\] \({\rm where}\,\Phi(E_{\gamma{\rm max}})=\int\limits_{E_{\rm thr}}^{E_{\gamma{ \rm max}}}W(E,E_{\gamma{\rm max}})dE\) is the integrated bremsstrahlung \(\gamma\)-flux. 
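As a minimal numerical sketch of Eq. (1), the averaging can be done by quadrature on a common energy grid; the arrays below stand in for the TALYS1.95 cross-section and the GEANT4-simulated bremsstrahlung spectrum and are placeholders rather than the data used in this work.

```python
# Flux-averaging of a cross-section sigma(E) over a bremsstrahlung spectrum
# W(E, E_gamma_max) between E_thr and E_gamma_max, following Eq. (1).
import numpy as np

def flux_averaged_cross_section(E, sigma, flux):
    """E, sigma, flux: 1-D arrays on the same grid from E_thr to E_gamma_max."""
    phi = np.trapz(flux, E)                # integrated bremsstrahlung flux
    return np.trapz(sigma * flux, E) / phi

# Placeholder inputs (toy shapes, not real TALYS/GEANT4 output):
E = np.linspace(7.09, 35.0, 300)                    # MeV
sigma = 0.0136 * np.exp(-((E - 27.0) / 6.0) ** 2)   # mb, toy peak near 27 MeV
flux = 1.0 / E                                      # toy 1/E bremsstrahlung shape
print(flux_averaged_cross_section(E, sigma, flux))  # flux-averaged value in mb
```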
Theoretical flux-averaged cross-sections were compared with those measured in the experiment and calculated as follows: \[\langle\sigma(E_{\gamma{\rm max}})\rangle=\frac{\lambda\triangle A\Phi^{-1}( E_{\gamma{\rm max}})}{N_{x}I_{\gamma}\ \varepsilon(1-e^{-\lambda t_{\rm irr}})e^{-\lambda t_{\rm cool}}(1-e^{-\lambda t_ {\rm max}})}, \tag{2}\] where \(\triangle A\) is the number of counts in the full absorption \(\gamma\)-ray peak; \(\lambda\) denotes the decay constant (\({\rm ln}2/T_{1/2}\)); \(T_{1/2}\) is the half-life of the nucleus; \(N_{x}\) is the number of target atoms; \(I_{\gamma}\) is the intensity of the analyzed \(\gamma\) ray; \(\varepsilon\) is the detection efficiency at the energy of analyzed \(\gamma\) ray; \(t_{\rm irr}\), \(t_{\rm cool}\) and \(t_{\rm meas}\) are the irradiation time, cooling time and measurement time, respectively. A more detailed description of all the calculation procedures necessary for the determination of \(\langle\sigma(E_{\gamma{\rm max}})\rangle\) can be found in [20; 41]. To determine the experimental \(\langle\sigma(E_{\gamma{\rm max}})\rangle\), we used the parameter values of the reaction \({}^{181}\)Ta\((\gamma,p)^{180}\)Hf listed in Table 1. In addition, parameter values for \({}^{181}\)Ta\((\gamma,6n)^{175}\)Ta, \({}^{181}\)Ta\((\gamma,p)^{180}\)Hf, \({}^{181}\)Ta\((\gamma,n)^{180}\)Ta, \({}^{100}\)Mo\((\gamma,n)^{99}\)Mo reactions are listed as well. Note that if the reaction product has known isomeric state, the total flux-averaged cross-section \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm tot}\) is calculated as the sum of cross-sections for the ground state \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm g}\) and isomeric state \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm m}\), respectively. ### Experimental accuracy of flux-averaged cross-sections The uncertainty of measured flux-averaged cross-sections was determined as a square root of the quadratic sum of statistical and systematic errors. The statistical error in the observed \(\gamma\)-activity is mainly due to statistics in the full absorption peak of the corresponding \(\gamma\)-ray, which varies within 2 to 9%. The measured \(\triangle A\) value of the investigated \(\gamma\) ray depends on the detection efficiency, half-life, and the intensity \(I_{\gamma}\). The background is generally governed by the contribution from the Compton scattering of the emitted \(\gamma\) rays. The systematical errors are due to the following uncertainties of the: 1. exposure time and the electron current \(\sim\)0.5%; 2. \(\gamma\)-ray detection efficiency of the detector - 2-3%. The error is larger at \(E_{\gamma}=50\)-200 keV, this being due to a small number of calibration data points in this energy range and the convoluted shape of the efficiency curve; 3. the half-life \(T_{1/2}\) of the reaction products and the intensity \(I_{\gamma}\) of the analyzed \(\gamma\) rays; 4. normalization of the experimental data to the yield of the monitoring reaction \({}^{100}\)Mo\((\gamma,n)^{99}\)Mo made up 6 %. It should be noted that the systematic error in yield monitoring of the \({}^{100}\)Mo\((\gamma,n)^{99}\)Mo reaction stems from three errors, each reaching up to 1%. These are the statistical error in the determination of the number of counts under the \(\gamma\)-ray peak used for normalization, the uncertainty in the isotopic composition of natural molybdenum and in the intensity \(I_{\gamma}\) used. 
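For illustration, Eq. (2) translates directly into code; every input below is a placeholder, not a measured quantity from this experiment.

```python
# Activation-analysis estimate of the flux-averaged cross-section, Eq. (2).
import math

def activation_cross_section(delta_A, half_life, N_x, I_gamma, eff,
                             t_irr, t_cool, t_meas, phi):
    """All times in the same unit as half_life; phi is the integrated flux."""
    lam = math.log(2) / half_life                 # decay constant
    saturation = 1.0 - math.exp(-lam * t_irr)     # build-up during irradiation
    decay = math.exp(-lam * t_cool)               # loss while cooling
    counting = 1.0 - math.exp(-lam * t_meas)      # decay during measurement
    return lam * delta_A / (N_x * I_gamma * eff * saturation * decay * counting * phi)
```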
In our calculations, we have used the percentage value of \({}^{100}\)Mo isotope abundance equal to 9.63% [36]. The total uncertainties of the measured flux-averaged cross-sections are given in Fig. 3 and Table 2. ## 3 Results and discussion Experimental values of bremsstrahlung flux-averaged cross-section \(\langle\sigma(E_{\gamma\rm max})\rangle_{\rm m}\) To calculate the experimental flux-averaged cross-sections for the \({}^{181}\)Ta\((\gamma,p)^{180{\rm m}}\)Hf reaction, two \(\gamma\)-ray transitions with energies of 443.09 and 500.64 keV were used, see Fig. 2. The 443.09 keV \(\gamma\)-ray peak is preferable for use because of the much higher intensity. However, there is a \(\gamma\)-ray with a similar energy of 443.3 keV corresponding to the \({}^{175}\)Ta nucleus, which is product of the \({}^{181}\)Ta\((\gamma,6n)\) reaction. Because of the almost identical transition energies, the two peaks overlap, thus artificially increasing the intensity of the 443.09 keV peak. This fact had to be taken into account in the analysis. To estimate the magnitude of this contribution, we used the 348.5 and 436.4 keV \(\gamma\)-rays with the intensity \(I_{\gamma}=12.0\%\) and 3.8%, respectively, which correspond to the \({}^{175}\)Ta nucleus. The obtained \(\bigtriangleup A\) in these \(\gamma\)-ray peaks, taking into account the detection efficiency \(\varepsilon\), were recalculated to values of the activity of the \({}^{175}\)Ta nucleus by the 443.3 keV \(\gamma\)-ray. The resulting contribution did not exceed 1%. It should be noted that the calculation of the contribution of the competing reaction cannot be performed with an accuracy better than 29%, which is associated with a large error in the intensity \(I_{\gamma}\) for the 443.3 keV transition, see Table 1. Since the yield of the \({}^{181}\)Ta\((\gamma,6n)^{175}\)Ta reaction is \(<1\) % of the yield of the \({}^{181}\)Ta\((\gamma,p)^{180{\rm m}}\)Hf reaction, even so large error has a negligible effect on the final result. The experimental values of the bremsstrahlung flux-averaged cross-sections \(\langle\sigma(E_{\gamma\rm max})\rangle_{\rm m}\) for the reaction \({}^{181}\)Ta\((\gamma,p)^{180{\rm m}}\)Hf obtained in this work at the end-point bremsstrahlung energies \(E_{\gamma\rm max}\) = 35-95 MeV are shown in Fig. 3 and Table 2. As can be seen, within experimental uncertainties, the obtained cross-sections for the 443.09 and 500.64 keV \(\gamma\)-ray transitions are in agreement. The values of \(\langle\sigma(E_{\gamma\rm max})\rangle_{\rm m}\) obtained in additional measurements at \(E_{\gamma\rm max}\) = 60.4 and 80.5 MeV are in good agreement with all massive data (see Fig. 3). The obtained experimental results were compared with the data from literature [33], which received at \(E_{\gamma\rm max}\) = 20, 40 and 55 MeV. From Fig. 3 it can be seen, that the experimental cross-sections are in good agreement at \(E_{\gamma\rm max}\) = 55 MeV, and don't agree at lower energies. One more comparison was made with the data published in [32]. There, the experimental \({}^{181}\)Ta\((\gamma,p)^{180{\rm m}}\)Hf reaction yield relative to the \({}^{181}\)Ta\((\gamma,n)^{180{\rm g}}\)Ta reaction yield was obtained at \(E_{\gamma\rm max}\) = 67.7 MeV, and equal to \((5\pm 1)\times 10^{-4}\). 
For the comparison, the flux-averaged cross-sections determined in this work in the energy range \(E_{\gamma\rm max}\) = 60-80 MeV were approximated and the value \(\langle\sigma(E_{\gamma\rm max})\rangle_{\rm m}\) = \(0.057\pm 0.005\) mb at 67.7 MeV was calculated. To obtain the relative yield, the experimental values of the flux-averaged cross-section of the \({}^{181}\)Ta\((\gamma,n)^{180{\rm g}}\)Ta reaction from [20, 30] were used, taking into account the difference in bremsstrahlung \(\gamma\)-flux due to the difference in the reaction thresholds \(E_{\rm thr}(\gamma,n)\) = 7.58 MeV and \(E_{\rm thr}(\gamma,p)\) = 7.09 MeV. We get \((7.2\pm 1.4)\times 10^{-4}\) for the relative yield. Within experimental uncertainties, it agrees with the value from [32]. ### Calculated \(\sigma(E)\) and \(\langle\sigma(E_{\gamma\rm max})\rangle\) cross-sections The theoretical values of total and partial (metastable, ground) cross-sections \(\sigma(E)\) for the \({}^{181}\)Ta\((\gamma,p)^{180}\)Hf reaction with monochromatic photons were calculated using the TALYS1.95 code [35]. The calculations were performed for six different level density models denoted as \(LD\) 1-6. There are three phenomenological level density models and three options for microscopic level densities: \(LD1\): Constant temperature + Fermi gas model, introduced by Gilbert and Cameron [48]. In this model, the excitation energy range is divided into a low energy part from \(E_{0}\) up to a matching energy \(E_{\rm M}\), where the so-called constant temperature law applies and a high energy part above, where the Fermi gas model applies. \(LD2\): Back-shifted Fermi gas model [49], where the pairing energy is treated as an adjustable parameter and the Fermi gas expression is used down to \(E_{0}\). \(LD3\): Generalized superfluid model (GSM) [50, 51]. The model takes superconductive pairing correlations into account according to the Bardeen-Cooper-Schrieffer theory. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Nuclear reaction & \(E_{\rm thr}\), MeV & \(J^{\pi}\) & \(T_{1/2}\), h & \(E_{\gamma}\), keV & \(I_{\gamma}\), \% \\ \hline \({}^{181}\)Ta\((\gamma,p)^{180{\rm m}}\)Hf & 7.09 & 8\({}^{-}\) & 5.5 \(\pm\) 0.1 & 443.09 \(\pm\) 0.04 & 81.9 \(\pm\) 0.9 \\ & & & 500.64 \(\pm\) 0.18 & 14.3 \(\pm\) 0.3 \\ \hline \({}^{181}\)Ta\((\gamma,n)^{180{\rm g}}\)Ta & 7.58 & 1\({}^{+}\) & 8.152 \(\pm\) 0.006 & 103.557 \(\pm\) 0.007 & 0.81 \(\pm\) 0.16 \\ \hline \({}^{181}\)Ta\((\gamma,6n)^{175}\)Ta & 44.46 & 7/2\({}^{+}\) & 10.5 \(\pm\) 0.2 & 348.5 \(\pm\) 0.5 & 12.0 \(\pm\) 0.6 \\ & & & & 443.64 \(\pm\) 0.7 & 3.8 \(\pm\) 0.2 \\ & & & & 443.3 \(\pm\) 0.7 & 0.14 \(\pm\) 0.04 \\ \hline \({}^{100}\)Mo\((\gamma,n)^{99}\)Mo & 8.29 & 1/2\({}^{+}\) & 65.94 \(\pm\) 0.01 & 739.50 \(\pm\) 0.02 & \(12.13\pm\) 0.12 \\ \hline \hline \end{tabular} \end{table} Table 1: Spectroscopic data of the products of different reactions adopted from [47]: spin \(J\), parity \(\pi\), half-life \(T_{1/2}\) of the reaction products; \(E_{\gamma}\) and \(I_{\gamma}\) are the energies of the \(\gamma\)-ray transitions and their intensities, respectively. \(E_{\rm thr}\) denotes threshold energy of the reactions. \(LD4\): Microscopic level densities (Skyrme force) from Goriely's tables [52]. Using this model allows reading tables of microscopic level densities from RIPL database [10]. These tables were computed by S. Gorielyon based on Hartree-Fock calculations for excitation energies up to 150 MeV and for spin values up to \(I\) = 30. 
\(LD5\): Microscopic level densities (Skyrme force) from Hilaire's combinatorial tables [53]. The combinatorial model includes a detailed microscopic calculation of the intrinsic state density and collective enhancement. The only phenomenological aspect of the model is a simple damping function for the transition from spherical to deformed. \(LD6\): Microscopic level densities based on temperature-dependent Hartree- Fock-Bogoliubov calculations using the Gogny force [54] from Hilaire's combinatorial tables. Results of the calculations of the total, metastable and ground state cross-sections \(\sigma(E)\) for the \({}^{181}\)Ta\((\gamma,p)^{180}\)Hf reaction are shown in Figs. 4(a)-(c). The bremsstrahlung flux-averaged cross-sections \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm tot}\), \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm g}\), and \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm m}\) are presented in Figs. 4(d)-(f). \begin{table} \begin{tabular}{c c} \hline \hline \(E_{\gamma{\rm max}}\), MeV & \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm m}\), \(\mu\)b \\ \hline [MISSING_PAGE_POST] \hline \end{tabular} * The data were obtained in an additional experiment carried out on the setup described in [45]. \end{table} Table 2: Experimental flux-averaged cross-sections \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm m}\) of the \({}^{181}\)Ta\((\gamma,p)^{180}\)Hf reaction (data for \(E_{\gamma}\) = 443.09 keV). Figure 3: Bremsstrahlung flux-averaged cross-sections \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm m}\) of the \({}^{181}\)Ta\((\gamma,p)^{180{\rm m}}\)Hf reaction. Red and black circles – our experimental data for \(\gamma\)-ray with \(E_{\gamma}\) = 443.09 keV, blue and black empty squares – for \(E_{\gamma}\) = 500.64 keV, brown stars – data taken from [33]. The additional measurements at \(E_{\gamma{\rm max}}\) = 60.4 and 80.5 MeV are denoted as black full circles and black empty squares. Curve - calculation using TALYS1.95 code for model \(LD3\), upscaled by a factor of 13.7. Figure 2: Energy spectrum of \(\gamma\) rays measured by the HPGe detector from the 42.598 mg \({}^{181}\)Ta target after bremsstrahlung flux exposure time of 30 min with \(E_{\gamma{\rm max}}\) = 80.2 MeV. Spectrum fragment ranging from 300 to 600 keV is shown. Figure 4: Theoretical values of the \({}^{181}\)Ta\((\gamma,p)^{180}\)Hf reaction cross-sections calculated using the TALYS1.95 code for the level density models \(LD\) 1–6. The absolute cross-sections \(\sigma(E)\) for total, ground and metastable states for monochromatic photons are shown in panels (a)-(c), respectively. The bremsstrahlung flux-averaged cross-sections \(\langle\sigma(E_{\gamma{\rm max}})\rangle\) for total, ground and metastable states are shown in panels (d)-(f), respectively. According to the calculated cross-sections \(\sigma(E)\) and \(\langle\sigma(E_{\gamma{\rm max}})\rangle\), formation of the \({}^{180}\)Hf nucleus in the metastable state is strongly suppressed. The contribution of the metastable state in the total flux-averaged cross-section does not exceed 5% for all level density models. The largest values of the \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm m}\) and \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm g}\) cross-sections are obtained with the \(LD3\) level density model (see Fig. 4). 
### Comparison of experimental data with theoretical calculations Comparison of obtained experimental cross-sections \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm m}\) for the \({}^{181}\)Ta\((\gamma,p)^{180{\rm m}}\)Hf reaction with the theoretical estimates shows that all calculated variants (\(LD\) 1-6) are underestimated. Using the least squares method it was found that the best agreement is achieved with the \(LD3\) model, where the calculated flux-averaged cross-section are smaller by a factor of 13.7. Rescaled result of the \(LD3\) model calculation is graphically shown and compared with the measured data in Fig. 3. In further text, only calculations with the \(LD3\) model will be discussed. Note that a difference of similar magnitude is also valid for the experimental data from [32; 33]. Due to \({}^{180}\)Hf being stable, it was not possible to determine the cross-section for its production in our experiment. We are not aware of any published experimental data on this topic. Therefore, only indirect estimate of the total cross-section for the \({}^{181}\)Ta\((\gamma,p)^{180}\)Hf reaction could be made based on the isospin selection rules [58]. For photodisintegration in heavy nuclei, the isospin-splitting components of GDR can be approximated by the \((\gamma,n)\) and \((\gamma,p)\) cross-sections, respectively. Let us estimate the strength ratio of the \((\gamma,p)\) to the \((\gamma,n)\) components of GDR using the expression from [59], which can be written as: \[\int\limits_{0}^{\infty}\frac{\sigma_{\gamma p}}{E}dE\Bigg{/}\int\limits_{0}^ {\infty}\frac{\sigma_{\gamma n}}{E}dE=\frac{1}{T_{0}}\times\frac{1-1.5T_{0}A^{ -2/3}}{1+1.5A^{-2/3}}, \tag{3}\] where \(\sigma_{\gamma p}\) and \(\sigma_{\gamma n}\) are total cross-sections corresponding to \((\gamma,p)\) and \((\gamma,n)\) reactions, \(T_{0}=(N-Z)/2\) is isospin of the ground state of the nucleus with \(N\) neutrons and \(Z\) protons, \(A=N+Z\). For the \({}^{181}\)Ta nucleus, \(T_{0}=17.5\), and the expected value of the strength ratio is \(9.81\times 10^{-3}\) (calculated using the right part of Eq. 3). At the same time, calculation using weighted integrals up to 100 MeV gives a ratio equal to \(2.35\times 10^{-3}\). It was shown in [20] that the theoretical cross-section for the \({}^{181}\)Ta\((\gamma,n)\) reaction describes well the experimental results. Therefore, the observed discrepancy between the calculated values is related to the total cross-section for the \({}^{181}\)Ta\((\gamma,p)\) reaction. As shown before (see Fig. 3), the experimental data for the flux-averaged cross-sections for the population of the isomeric state significantly exceed the theoretical values. Since the \(\sigma(E)_{\rm m}\) cross-section is small relative to the \(\sigma(E)_{\rm g}\), then taking into account the found factor of 13.7, slightly increases the weighted integrals ratio (left part of Eq. 3), up to \(3.47\times 10^{-3}\). It was shown in [59], that isospin selection rule approach gives an average value for a set of nuclei, while in case of a single nucleus, a deviation can be significant, e.g, for the \({}^{208}\)Pb nucleus, the expected theoretical strength ratio \(2.6\times 10^{-3}\) is 1.9 times lower than the experimentally measured one; for the \({}^{139}\)La nucleus, theory gives a 2.4 times larger value than measurement. 
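As a quick numerical check of the right-hand side of Eq. (3) for the \({}^{181}\)Ta nucleus (\(Z=73\), \(N=108\)):

```python
# Isospin-selection-rule estimate of the (gamma,p)/(gamma,n) strength ratio
# for 181Ta, right-hand side of Eq. (3).
Z, N = 73, 108
A = Z + N                 # 181
T0 = (N - Z) / 2          # 17.5
ratio = (1.0 / T0) * (1.0 - 1.5 * T0 * A ** (-2.0 / 3.0)) / (1.0 + 1.5 * A ** (-2.0 / 3.0))
print(ratio)              # ~9.8e-3, matching the 9.81e-3 quoted above
```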
The calculation carried out ([37] and references therein) for the \({}^{181}\)Ta nucleus using the experimental cross-sections from the \((e,e^{\prime}p)\) experiment gave the value of the strength ratio \(\sim\)1.7\(\times 10^{-3}\). The error in this estimate may be large, due to the absence of data for the experimental \((e,e^{\prime}p)\) cross-section in high-energy region for a better fitting. Thus, the strength ratio for the \({}^{181}\)Ta nucleus, determined to be \(3.47\times 10^{-3}\), lies between the expected estimate according to the isospin selection rules, and the strength ratio obtained using the experimental cross-sections from the \((e,e^{\prime}p)\) experiment [37]. ## 4 Conclusions The experimental study of the production of metastable \({}^{180}\)Hf nucleus in the photoproton reaction \({}^{181}\)Ta\((\gamma,p)^{180{\rm m}}\)Hf was performed at end-point bremsstrahlung energies \(E_{\gamma{\rm max}}\) = 35-95 MeV. The experiment was carried out at the NSC KIPT, Ukraine, with bremsstrahlung beams generated by the electron linear accelerator LUE-40 and using the \(\gamma\) activation and off-line \(\gamma\)-ray spectrometric technique. There were two different experimental setups used, the results of which are in good agreement within the experimental error. The experimental values of the flux-averaged cross-sections \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm m}\) for the \({}^{181}\)Ta\((\gamma,p)^{180{\rm m}}\)Hf reaction were determined. The \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm m}\) values for the studied reaction at energy \(E_{\gamma{\rm max}}>55\) MeV were obtained for the first time. The calculation of bremsstrahlung flux-averaged cross-section \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm th}\) was carried out using the cross-section values \(\sigma(E)\) computed with the TALYS1.95 code for different level density models \(LD\) 1-6. The theoretical estimates significantly underestimate both our experimental results and data from the literature [32; 33]. The obtained experimental \(\langle\sigma(E_{\gamma{\rm max}})\rangle_{\rm exp}\) are closest to the theoretical calculation using the \(LD3\) level density model - the Generalized superfluid model. A comparative analysis of the calculated total cross-sections for the reactions \({}^{181}\)Ta\((\gamma,p)^{180}\)Hf and \({}^{181}\)Ta\((\gamma,n)^{180}\)Ta was performed. The strength ratio \((\gamma,p)\) and \((\gamma,n)\) of the GDR photodisintegration components for the \({}^{181}\)Ta nucleus was calculated using the isospin selection rules and equals to \(9.81\times 10^{-3}\). This value was compared with the ratio of weighted integrals, in which the total cross-sections for the \({}^{181}\)Ta\((\gamma,p)\) and \({}^{181}\)Ta\((\gamma,n)\) reactions obtained from the TALYS1.95 code with the \(LD3\) model, were used. It was shown that the photoproton \((\gamma,p)\) to photon-tron \((\gamma,n)\) strength ratio of the GDR, found taking into account the experimental values for the reaction \({}^{181}\)Ta(\(\gamma,p\))\({}^{180\mathrm{m}}\)Hf, is consistent with the expected estimate according to the isospin selection rules, and the strength ratio obtained using the experimental cross-sections from the \((e,e^{\prime}p)\) experiment. ## Acknowlegment The authors would like to thank the staff of the linear electron accelerator LUE-40 NSC KIPT, Kharkiv, Ukraine, for their cooperation in the realization of the experiment. This work was supported by the Slovak Research and Development Agency under No. 
APVV-20-0532, and the Slovak grant agency VEGA (Contract No. 2/0067/21). Funded by the EU NextGenerationEU through the Recovery and Resilience Plan for Slovakia under the project No. 09103-03-V01-00069. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2304.06856
Application of the Bell polynomials for the solution of some differential-algebraic equations
The differential transform method is used to find numerical approximations of solutions to a class of nonlinear differential-algebraic equations. The method is based on Taylor's theorem. Coefficients of the Taylor series are determined by constructing a recurrence relation. To deal with the nonlinearity of the problems, Fa\`{a} di Bruno's formula containing the partial ordinary Bell polynomials is applied within the differential transform to avoid the computation of symbolic derivatives. Error estimation results are also presented. Four concrete problems are studied to show the efficiency and reliability of the method. The obtained results are compared with those of other methods.
Hari Mohan Srivastava, Giriraj Methi, Anil Kumar, Mohammad Izadi, Vishnu Narayan Mishra, Brahim Benhammouda
2023-04-13T22:52:20Z
http://arxiv.org/abs/2304.06856v1
# Application of the Bell polynomials for the solution of some differential algebraic equations ###### Abstract The differential transform method is used to find numerical approximation of solution to a class of certain nonlinear differential algebraic equations. The method is based on Taylor's theorem. Coefficients of the Taylor series are determined by constructing a recurrence relation. To deal with nonlinearity of the problems, the Faa di Bruno's formula containing the partial ordinary Bell polynomials is applied within the differential transform to avoid computation of symbolic derivatives. The error estimation results are presented too. Four concrete problems are studied to show efficiency and reliability of the method. The obtained results are compared to other methods. **Key Words**: Differential algebraic equations; Differential transform; Error; Convergence; Bell polynomials; Numerical solutions ## 1 Introduction System of differential algebraic equations (DAEs) are combination of ordinary differential equations (ODEs) together with purely algebraic equations. Many researchers have investigated physical problems involving DAEs in electrical network [33], modelling of constrained mechanical systems [5, 8], optimal control [6, 11] and chemical processes problems [20]. DAEs with higher index (\(>1\)) are difficult to solve. Index reduction techniques can be used to convert them into lower index problems, but this is computationally expensive and sometimes changes properties of the solutions also. Several methods have been implemented to find solutions of DAEs such as Runge-Kutta method [19], Adomian decomposition method [14], Variational iteration method [12], Multi quadric method [34], Homotopy perturbation method [10] and Iterative schemes [24]. But these methods have their limitations when dealing with nonlinear DAEs and sometimes complexity involved in calculations make them unsuitable for solving nonlinear DAEs. The regular form of DAEs is \[G\left(w(v),w^{\prime}(v),v\right)=0,\;G\in C^{1}(R^{2m+1},R^{m}),\;v\in[0,V] \tag{1}\] where Jacobian \(\left[\frac{\partial G}{\partial w^{\prime}}\right]\) is singular on \(R^{2m+1}\). Many DAEs arising in physical applications are in semi explicit form and while some other are in further restricted Hessenberg form [6]. The index-1 semi explicit DAEs are given by \[w^{\prime}(v) = G\left(w(v),u(v),v\right),G\in C(R^{m+k+1},R^{m})\] \[0 = F\left(w(v),u(v),v\right),F\in C^{1}(R^{m+k+1},R^{k}),\;v\in[0,V] \tag{2}\] where \(\frac{\partial F}{\partial u}\) is nonsingular. The index-2 Hessenberg DAEs are given by \[w^{\prime}(v) = G\left(w(v),u(v),v\right),G\in C^{1}(R^{m+k+1},R^{m})\] \[0 = F\left(w(v),v\right),F\in C^{2}(R^{m+1},R^{k}),\;v\in[0,V] \tag{3}\] where \(\left(\frac{\partial F}{\partial u}\right)\left(\frac{\partial G}{\partial w}\right)\) is nonsingular [10, 35]. The index-3 Hessenberg DAEs are given by \[u^{\prime}(v) = G\left(w(v),u(v),s,v\right),G\in C^{1}(R^{m+k+1},R^{m})\] \[w^{\prime}(v) = F\left(w(v),u(v),v\right),F\in C^{2}(R^{m+k+1},R^{m})\] \[0 = H\left(w(v),v\right),H\in C^{3}(R^{m+1},R^{l}),\;v\in(0,V) \tag{4}\] where \(\left(\frac{\partial H}{\partial w}\right)\left(\frac{\partial F}{\partial u }\right)\left(\frac{\partial G}{\partial s}\right)\) is nonsingular [10, 35]. The motivations for present work are the research work of authors [2, 3, 4, 14, 34] who studied DAEs using various semi analytical methods, but these methods are not suitable to deal highly non-linear DAEs. 
Therefore some techniques are needed to overcome limitations of existing techniques and can directly solve nonlinear DAEs in a well defined and reliable algorithm. We propose a simple approach involving the differential transformation in this paper. The differential transformation has been introduced by G. Pukhov as the "Taylor transform" in 1976 and applied to the study of electrical circuits [25]. The differential transformation is closely related to Taylor expansion of real analytic functions. It has applications in solving different types of problems for all classes of differential equations (ordinary, partial, delayed, fractional, fuzzy etc.). The recent developments and applications of DTM are discussed in [1, 7, 13, 15, 16, 17, 18, 22, 23, 26, 27, 30, 31, 36] and references therein. In the present paper, the differential transformation is used to solve nonlinear differenial algebraic equations. The nonlinearity in the problems is addressed by using the partial ordinary Bell polynomials in the Faa di Bruno's formula. The results obtained by this technique are compared to other methods. Error analysis of the method is studied for convergence criterion. To show the efficiency of the method some examples of this class are considered. These examples not only validate the accuracy of the method but also gives results which are more convergent to the exact solution. However, to the best of our knowledge, no researcher has applied the DTM using Bell polynomials on the practical problems discussed in the Section 4. The paper is organized as follows. In Section 2, we introduce the main idea and basic formulae of the differential transformation and provide necessary results for the nonlinearities involving partial ordinary Bell polynomials. In Section 3 we introduce the error estimate result. Numerical results and discussion are presented in Section 4. A conclusion is given in section 5. ## 2 Preliminaries In this section we discuss the main idea and basic formulae of the differential transformation as well as notations and results related to transformation of general nonlinear terms. ### Idea of differential transform Let \(w(v)\) be analytical function in domain \(D\) and \(v=v_{0}\) be any arbitrary point in \(D\). Then, \(w(v)\) can be expanded in series form about the point \(v=v_{0}\). The differential transform of the kth derivative of the function \(w(v)\) is defined as \[W(k)[v_{0}]=\frac{1}{k!}\left[\frac{d^{k}w(v)}{dv^{k}}\right]_{v=v_{0}}. \tag{5}\] The inverse differential transformation is given by \[w(v)=\sum_{k=0}^{\infty}W(k)[v_{0}](v-v_{0})^{k}. \tag{6}\] Using equation (5)-(6) \[w(v)=\sum_{k=0}^{\infty}\frac{1}{k!}\left[\frac{d^{k}w(v)}{dv^{k}}\right]_{v=v _{0}}(v-v_{0})^{k}. \tag{7}\] In real applications, the function \(w(v)\) is expressed by finite sum \[w(v)=\sum_{k=0}^{N}W(k)[v_{0}](v-v_{0})^{k}. \tag{8}\] The results which are used in this paper are listed in table (1) without proofs. ### Faa di Bruno's formula and Bell polynomials One of the principal disadvantages of most papers based on applications of differential transformations is the differential transformation is not applied directly to nonlinear terms like \(w^{n},n\in\mathbb{N}\) or \(e^{w}\). Authors [29] used Adomian polynomials to compute the differential transform of nonlinear terms. 
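Before turning to nonlinear terms, the transform pair (5)-(8) can be sanity-checked numerically; the minimal sketch below uses the known transform of \(e^{\alpha v}\) from Table 1 and is only an illustration.

```python
# Check of the truncated inverse transform, Eq. (8), with W(k) = alpha**k / k!
# (Table 1, row 3, i.e. the transform of exp(alpha*v) at v0 = 0).
import math

def truncated_series(coeffs, v, v0=0.0):
    return sum(W_k * (v - v0) ** k for k, W_k in enumerate(coeffs))

alpha, N = 2.0, 20
W = [alpha ** k / math.factorial(k) for k in range(N + 1)]
print(truncated_series(W, 0.5), math.exp(alpha * 0.5))  # both ~2.71828
```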
However, the differential transformation of nonlinear terms can be determined without calculating and \begin{table} \begin{tabular}{l l l} \hline & Original function & Transformed function \\ \hline 1 & \(\frac{d^{n}w(v)}{dv^{n}}\) & \((k+1)(k+2)(k+3)\ldots(k+n)W(k+n)\) \\ 2 & \(w(v)=v^{n}\) & \(\delta(k-n)\), where \(\delta\left(k-n\right)=\left\{\begin{array}{l}1,k=n\\ 0,k\neq n\end{array}\right.\) \\ 3 & \(e^{\alpha v}\) & \(\frac{\alpha^{k}}{k!}\) \\ 4 & \(w_{1}(v)w_{2}(v)\) & \(\sum_{i=0}^{k}W_{1}(i)W_{2}(k-i)\) \\ \hline \end{tabular} \end{table} Table 1: Formulae of the differential transform method evaluating symbolic derivatives by applying Faa di Bruno's formula to nonlinear terms. Here we present some necessary notations and results obtained in [28]. The proofs are not included since they can be found in the cited paper. **Definition 2.1**: _[_9_]_ _The partial ordinary Bell polynomials are the polynomials \(\hat{B}_{k,l}\left(\hat{x}_{1},\ldots,\hat{x}_{k-l+1}\right)\) in an infinite number of variables \(\hat{x}_{1},\hat{x}_{2},\ldots\) defined by the series expansion_ \[\sum_{k\geq l}\hat{B}_{k,l}\left(\hat{x}_{1},\ldots,\hat{x}_{k-l+1}\right)t^{k }=\left(\sum_{m\geq 1}\hat{x}_{m}t^{m}\right)^{l},l=0,1,2,\ldots \tag{9}\] **Lemma 2.2**: _[_28_]_ _The partial ordinary Bell polynomials \(\hat{B}_{k,l}\left(\hat{x}_{1},\ldots,\hat{x}_{k-l+1}\right),l=0,1,2,\ldots,k \geq l\) satisfy the recurrence relation_ \[\hat{B}_{k,l}\left(\hat{x}_{1},\ldots,\hat{x}_{k-l+1}\right)=\sum_{i=1}^{k-l+1 }\frac{i.l}{k}\hat{x}_{i}\hat{B}_{k-i,l-1}\left(\hat{x}_{1},\ldots,\hat{x}_{k- i-l+2}\right) \tag{10}\] _where \(\hat{B}_{0,0}=1\) and \(\hat{B}_{k,0}=0\) for \(k\geq 1\)._ **Theorem 2.3**: _[_28_]_ _Let \(g\) and \(f\) be real functions analytic near \(t_{0}\) and \(g(t_{0})\) respectively, and let \(h\) be the composition \(h\left(t\right)=\left(fog\right)\left(t\right)=f\left(g\left(t\right)\right)\). Denote \(D\left\{g\left(t\right)\right\}\left[t_{0}\right]=\left\{G\left(k\right)\right\} _{k=0}^{\infty}\), \(D\left\{f\left(t\right)\right\}\left[g\left(t_{0}\right)\right]=\left\{F \left(k\right)\right\}_{k=0}^{\infty}\) and \(D\left\{\left(f\circ g\right)\left(t\right)\right\}\left[t_{0}\right]=\left\{H \left(k\right)\right\}_{k=0}^{\infty}\) the differential transformations of functions \(g\), \(f\) and \(h\) at \(t_{0}\), \(g\left(t_{0}\right)\) and \(t_{0}\) respectively. Then the numbers \(H(k)\) in the sequence \(\left\{H\left(k\right)\right\}_{k=0}^{\infty}\) satisfy the relations \(H(0)=F(0)\) and_ \[H\left(k\right)=\sum_{l=1}^{k}F\left(l\right).\hat{B}_{k,l}\left(G\left(1\right),\ldots,G\left(k-l+1\right)\right)\ \ \text{for}\ k\geq 1. \tag{11}\] ### Implementation of method Consider higher-index Hessenberg DAEs as \[w^{\left(m\right)}\left(v\right) = f\left(w(v),u\left(v\right)\right),\] \[0 = g\left(w(v)\right), \tag{12}\] with initial conditions \[w^{\left(i\right)}\left(0\right)=\eta_{i},i=0,1,\ldots,m-1, \tag{13}\] where \(w^{\left(m\right)}\) is the \(m^{th}\) derivatives of \(w\) and \(\eta_{i}\) are given constants. To solve equations (12) and (13), apply differential transform we get the algebraic system \[\left(k+1\right)\left(k+2\right)\ldots\left(k+m\right)W\left(k+m\right) = F\left(W(k),U\left(k\right)\right),\] \[0 = G\left(W(k)\right), \tag{14}\] and \[W\left(k\right)=\eta_{i},k=0,1,\ldots,m-1, \tag{15}\] where \(W\) and \(U\) are the differential transform of \(w\) and \(u\) respectively and differential trasform of nonlinear term is obtained by Theorem 2.3. 
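The recurrence of Lemma 2.2 and the composition formula of Theorem 2.3 admit a short direct implementation; the sketch below assumes the transform sequences are stored as plain Python lists, and the exponential example at the end is only a consistency check, not one of the problems of Section 4.

```python
# Partial ordinary Bell polynomials via the recurrence of Lemma 2.2, and the
# differential transform of a composition h = f(g(v)) via Theorem 2.3.
import math

def bell_hat(k, l, x):
    """B_{k,l}(x1, ..., x_{k-l+1}); x is the list [x1, x2, ...]."""
    if k == 0 and l == 0:
        return 1.0
    if l == 0 or k < l:
        return 0.0
    return sum(i * l / k * x[i - 1] * bell_hat(k - i, l - 1, x)
               for i in range(1, k - l + 2))

def compose_transform(F, G):
    """H(k) for h = f o g, given F = D{f}[g(t0)] and G = D{g}[t0]."""
    H = [F[0]]
    for k in range(1, len(G)):
        H.append(sum(F[l] * bell_hat(k, l, G[1:]) for l in range(1, k + 1)))
    return H

# Consistency check: g(v) = v (so G = [0, 1, 0, ...]) and f = exp
# (F[l] = 1/l!) must give H(k) = 1/k!, the transform of exp(v).
N = 6
G = [0.0, 1.0] + [0.0] * (N - 1)
F = [1.0 / math.factorial(l) for l in range(N + 1)]
print(compose_transform(F, G))  # [1.0, 1.0, 0.5, 0.1666..., ...]
```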
The solution steps of equations (12)-(15) are explained in figure 1. Now, the series solution is given by \[w(v)=\sum_{k=0}^{\infty}W(k)v^{k}. \tag{16}\] **Remark 2.4**: _Every step of the present method is illustrated through flowchart diagram shown in figure 1 and implemented in example 4.1 in section 4._ Figure 1: Flow chart of present method. ## 3 Error estimation For comparison, absolute error and maximum absolute error are computed and defined as \[E_{N}(v):=\left|w\left(v\right)-w_{N}\left(v\right)\right|\!,\] \[E_{N,\infty}:=\max_{0\leq v\leq 1}E_{N}(v),\] where \(w\left(v\right)\) is the exact solution and \(w_{N}\left(v\right)\) is the truncated series solution with degree \(N\). Furthermore, the relative error between exact and approximate solution is defined by \[R_{N}(v):=\frac{E_{N}(v)}{\left|w(v)\right|}.\] Further following notations have been used in presented Tables: \[w_{i,N}(v) :=\] Approximate solution obtained by present technique, \[w_{i,N}(v)\left[5\right] :=\] Approximate solution obtained by Adomian method, \[w_{i,N}(v)\left[34\right] :=\] Approximate solution obtained by Multi Quadric method, \[w_{i,N}(v)\left[21\right] :=\] Approximate solution obtained by Lie Group method, \[R_{i,N}(v) :=\] Relative error between exact and present solution. ## 4 Numerical results and discussion Four examples of nonlinear higher index Hessenberg DAEs are solved to demonstrate the effectiveness of the proposed method. Mathematica software version 11 is used to perform all numerical computations. For reader's benefit every step of the proposed technique is explained in detail in example 1. ### Example 1 Consider the nonlinear differential algebraic equation [3] \[\frac{dw_{1}}{dv} = 2w_{3},\] \[\frac{dw_{2}}{dv} = 2w_{4},\] \[\frac{dw_{3}}{dv} = -2w_{3}+e^{w_{2}}+w+\phi_{1},\] \[\frac{dw_{4}}{dv} = 2w_{4}+e^{w_{1}}+w+\phi_{2},\] \[0 = w_{1}+w_{2}-\phi_{3},\;0\leq v<1, \tag{17}\] where \[\phi_{1}\left(v\right)=-\frac{2v^{4}+2v^{3}+1}{2\left(1+v\right)^{2}},\;\phi_{2} \left(v\right)=\frac{-2v^{4}+2v^{3}-1}{2\left(1-v\right)^{2}},\;\phi_{3}\left(v \right)=ln\left(1-v^{2}\right),\] initial conditions \[w_{1}\left(0\right)=w_{2}\left(0\right)=0,\;w_{3}\left(0\right)=\frac{1}{2},\;w _{4}\left(0\right)=-\frac{1}{2}. \tag{18}\] The exact solution is given by \[w_{1}\left(v\right)=ln\left(1+v\right),\;w_{2}\left(v\right)=ln \left(1-v\right),\] \[w_{3}\left(v\right)=\frac{1}{2\left(1+v\right)},\;w_{4}\left(v \right)=-\frac{1}{2\left(1-v\right)},\;w\left(v\right)=v^{2}. \tag{19}\] Denoting \[h_{1}\left(v\right) = f_{1}\left(g_{1}\left(v\right)\right),\text{where}\;g_{1}\left(v \right)=w_{2}\left(v\right)\text{and}\;f_{1}\left(x\right)=e^{x},\] \[h_{2}\left(v\right) = f_{2}\left(g_{2}\left(v\right)\right),\text{where}\;g_{2}\left(v \right)=w_{1}\left(v\right)\text{and}\;f_{2}\left(x\right)=e^{x}.\] Differential transformation of \(f_{i}\left(x\right)\) is represented by \(F_{i}\left(x\right)\) where \(i=1,2\). Then, we get \[F_{1}\left(k\right)=F_{2}\left(k\right)=\frac{1}{k!}. \tag{20}\] The differential transform of \(h_{1}\left(v\right)\) and \(h_{2}\left(v\right)\) are represented using theorem 2.3 \[H_{1}\left(0\right) = 1,\;H_{1}\left(k\right)=\sum_{l=1}^{k}F_{1}\left(l\right)\hat{B} _{k,l}\left(W_{2}\left(1\right),\ldots,W_{2}\left(k-l+1\right)\right),\] \[H_{2}\left(0\right) = 1,\;H_{2}\left(k\right)=\sum_{l=1}^{k}F_{2}\left(l\right)\hat{B} _{k,l}\left(W_{1}\left(1\right),\ldots,W_{1}\left(k-l+1\right)\right). 
\tag{21}\] Applying differential transform to equations (17)-(18), we obtain the following recurrence relation \[W_{1}\left(k+1\right) = \frac{2}{\left(k+1\right)}W_{3}\left(k\right),\] \[W_{2}\left(k+1\right) = \frac{2}{\left(k+1\right)}W_{4}\left(k\right),\] \[W_{3}\left(k+1\right) = \frac{1}{\left(k+1\right)}\left(-2W_{3}\left(k\right)+H_{1}\left( k\right)+W\left(k\right)+\Phi_{1}\left(k\right)\right),\] \[W_{4}\left(k+1\right) = \frac{1}{\left(k+1\right)}\left(2W_{4}\left(k\right)+H_{2}\left( k\right)+W\left(k\right)+\Phi_{2}\left(k\right)\right),\] \[0 = W_{1}\left(k\right)+W_{2}\left(k\right)-\Phi_{3}\left(k\right),\] \[W_{1}\left(0\right) = W_{2}\left(0\right)=0,\;W_{3}\left(0\right)=\frac{1}{2},\;W_{4} \left(0\right)=-\frac{1}{2}, \tag{22}\] where \(\Phi_{1}\), \(\Phi_{2}\) and \(\Phi_{3}\) are differential transform of \(\phi_{1}\), \(\phi_{2}\) and \(\phi_{3}\) respectively. Using equations (20)-(22) we obtain the following components \[k=0: \quad W_{1}\left(1\right)=2W_{3}\left(0\right)=1,\] \[W_{2}\left(1\right)=2W_{4}\left(0\right)=-1,\] \[W_{3}\left(1\right)=-2W_{3}\left(0\right)+H_{1}\left(0\right)+W \left(0\right)+\Phi_{1}\left(0\right)=W\left(0\right)-\frac{1}{2},\] \[W_{4}\left(1\right)=2W_{4}\left(0\right)+H_{2}\left(0\right)+W \left(0\right)+\Phi_{2}\left(0\right)=W\left(0\right)-\frac{1}{2},\] \[0=W_{1}\left(0\right)+W_{2}\left(0\right)-\Phi_{3}\left(0\right),\] \[k=1: \quad H_{1}\left(1\right)=F_{1}\left(1\right)\hat{B}_{1,1}\left( W_{2}\left(1\right)\right)=F_{1}\left(1\right)W_{2}\left(1\right)=-1,\] \[H_{2}\left(1\right)=F_{2}\left(1\right)\hat{B}_{1,1}\left(W_{1} \left(1\right)\right)=F_{2}\left(1\right)W_{1}\left(1\right)=1,\] \[W_{1}\left(2\right)=W_{3}\left(1\right),\] \[W_{2}\left(2\right)=W_{4}\left(1\right),\] \[W_{3}\left(2\right)=\frac{1}{2}\left(-2W_{3}\left(1\right)+H_{1 }\left(1\right)+W\left(1\right)+\Phi_{1}\left(1\right)\right)=\frac{1}{2} \left(-2W_{3}\left(1\right)+W\left(1\right)\right),\] \[W_{4}\left(2\right)=\frac{1}{2}\left(2W_{4}\left(1\right)+H_{2} \left(1\right)+W\left(1\right)+\Phi_{2}\left(1\right)\right)=\frac{1}{2} \left(2W_{4}\left(1\right)+W\left(1\right)\right),\] \[0=W_{1}\left(1\right)+W_{2}\left(1\right)-\Phi_{3}\left(1\right),\] \[k=2: \quad H_{1}\left(2\right)=\sum_{l=1}^{2}F_{1}\left(l\right)\hat{B }_{2,1}\left(W_{2}\left(1\right),W_{2}\left(2\right)\right)=F_{1}\left(1\right) W_{2}\left(2\right)+F_{1}\left(2\right)W_{2}^{2}\left(1\right),\] \[H_{2}\left(2\right)=\sum_{l=1}^{2}F_{2}\left(l\right)\hat{B}_{2, 1}\left(W_{1}\left(1\right),W_{1}\left(2\right)\right)=F_{2}\left(1\right)W_{1 }\left(2\right)+F_{2}\left(2\right)W_{1}^{2}\left(1\right),\] \[W_{1}\left(3\right)=\frac{2}{3}W_{3}\left(2\right),\] \[W_{2}\left(3\right)=\frac{2}{3}W_{4}\left(2\right),\] \[W_{3}\left(3\right)=\frac{1}{3}\left(-2W_{3}\left(2\right)+H_{1 }\left(2\right)+W\left(2\right)+\Phi_{1}\left(2\right)\right)=\frac{1}{3} \left(-2W_{3}\left(2\right)+W\left(2\right)-\frac{3}{2}\right),\] \[W_{4}\left(3\right)=\frac{1}{3}\left(2W_{4}\left(2\right)+H_{2} \left(2\right)+W\left(2\right)+\Phi_{2}\left(2\right)\right)=\frac{1}{3} \left(2W_{4}\left(2\right)+W\left(2\right)-\frac{3}{2}\right),\] \[0=W_{1}\left(2\right)+W_{2}\left(2\right)-\Phi_{3}\left(2\right) \text{ and soon.} \tag{23}\] Now, with the help of equation (8), the series solution is given by \[w_{1}\left(v\right) = v-\frac{1}{2}v^{2}+\frac{1}{3}v^{3}-\frac{1}{4}v^{4}+\ldots,\] \[w_{2}\left(v\right) = -v-\frac{1}{2}v^{2}-\frac{1}{3}v^{3}-\frac{1}{4}v^{4}-\ldots,\] \[w_{3}\left(v\right) = 
\frac{1}{2}-\frac{1}{2}v+\frac{1}{2}v^{2}-\frac{1}{2}v^{3}+\frac{1}{ 2}v^{4}-\ldots,\] \[w_{4}\left(v\right) = -\frac{1}{2}-\frac{1}{2}v-\frac{1}{2}v^{2}-\frac{1}{2}v^{3}-\frac{ 1}{2}v^{4}-\ldots,\] \[w\left(v\right) = v^{2},\] which converges to the exact solution given by equation (19). \begin{table} \begin{tabular}{l l l l l} \hline \(v\) & \(w_{1}\left(v\right)\) & \(w_{1,N}(v)\) & \(w_{1,N}(v)\)[3] & \(R_{1,N}(v)\) \\ \hline 0.1 & 0.0953101798 & 0.0953101798 & 0.0953101798 & 7.2E-16 \\ 0.2 & 0.1823215568 & 0.1823215568 & 0.1823215568 & 3.0E-16 \\ 0.3 & 0.2623642645 & 0.2623642645 & 0.2623642645 & 1.4E-12 \\ 0.4 & 0.3364722366 & 0.3364722365 & 0.3364722365 & 4.5E-10 \\ 0.5 & 0.4054651081 & 0.4054650927 & 0.4054650927 & 3.7E-08 \\ 0.6 & 0.4700036292 & 0.4700029649 & 0.4700029649 & 1.4E-06 \\ 0.7 & 0.5306282511 & 0.5306123016 & 0.5306123016 & 3.0E-05 \\ 0.8 & 0.5877866649 & 0.5875375291 & 0.5875375291 & 4.2E-04 \\ 0.9 & 0.6418538862 & 0.6390499221 & 0.6390499221 & 4.3E-03 \\ 1.0 & 0.6931471806 & 0.6687714032 & 0.6687714032 & 3.5E-02 \\ \hline \end{tabular} \end{table} Table 2: Comparison of numerical solution of \(w_{1}(v)\) with exact solution and Benhammouda [3] solution for example 1 (\(N=20\)) \begin{table} \begin{tabular}{l l l l l} \hline \(v\) & \(w_{2}\left(v\right)\) & \(w_{2,N}\left(v\right)\) & \(w_{2,N}\left(v\right)\)[3] & \(R_{2,N}(v)\) \\ \hline 0.1 & -0.1053605157 & -0.1053605157 & -0.1053605157 & 3.9E-16 \\ 0.2 & -0.2231435513 & -0.2231435513 & -0.2231435513 & 2.4E-16 \\ 0.3 & -0.3566749439 & -0.3566749439 & -0.3566749439 & 1.9E-12 \\ 0.4 & -0.5108256238 & -0.5108256234 & -0.5108256234 & 6.6E-10 \\ 0.5 & -0.6931471806 & -0.6931471371 & -0.6931471371 & 6.2E-08 \\ 0.6 & -0.9162907319 & -0.9162882787 & -0.9162882787 & 2.6E-06 \\ 0.7 & -1.2039728040 & -1.2038920520 & -1.2038920520 & 6.7E-05 \\ 0.8 & -1.6094379120 & -1.6075458670 & -1.6075458670 & 1.1E-03 \\ 0.9 & -2.3025850930 & -2.2633497340 & -2.2633497340 & 1.7E-02 \\ \hline \end{tabular} \end{table} Table 3: Comparison of numerical solution of \(w_{2}(v)\) with exact solution and Benhammouda [3] solution for example 1 (\(N=20\)) \begin{table} \begin{tabular}{c c c c c} \hline \hline \(N\) & \(E_{1N,\infty}\) & \(E_{2N,\infty}\) & \(E_{3N,\infty}\) & \(E_{4N,\infty}\) \\ \hline 10 & 1.5E-02 & 1.8E-01 & 8.2E-02 & 1.5E-00 \\ 15 & 6.2E-03 & 8.27E-02 & 4.8E-02 & 9.2E-01 \\ 20 & 2.8E-03 & 3.9E-02 & 2.8E-02 & 5.4E-01 \\ \hline \hline \end{tabular} \end{table} Table 6: Maximum absolute error for \(w_{1}\), \(w_{2}\), \(w_{3}\) and \(w_{4}\) of example 1 \begin{table} \begin{tabular}{c c c c c} \hline \hline \(v\) & \(w_{4}\left(v\right)\) & \(w_{4,N}\left(v\right)\) & \(w_{4,N}\left(v\right)\)[3] & \(R_{4,N}(v)\) \\ \hline 0.1 & -0.5555561000 & -0.5555561000 & -0.5555561000 & 0 \\ 0.2 & -0.6251000000 & -0.6251000000 & -0.6251000000 & 2.1E-15 \\ 0.3 & -0.7142861000 & -0.7142861000 & -0.7142861000 & 1.0E-11 \\ 0.4 & -0.8333331000 & -0.8333331000 & -0.8333331000 & 4.3E-09 \\ 0.5 & -1.1000000000 & -1.1000000000 & -1.1000000000 & 4.7E-07 \\ 0.6 & -1.251000000 & -1.2499710000 & -1.2499710000 & 2.1E-05 \\ 0.7 & -1.6666710000 & -1.6657410000 & -1.6657410000 & 5.5E-04 \\ 0.8 & -2.5100000000 & -2.4769410000 & -2.4769410000 & 9.2E-03 \\ 0.9 & -5.100000000 & -4.4529110000 & -4.4529110000 & 1.0E-01 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison of numerical solution of \(w_{4}(v)\) with exact solution and Benhammouda [3] solution for example 1(N=20) \begin{table} \begin{tabular}{c c c c c} \hline \hline \(v\) & 
\(w_{4}\left(v\right)\) & \(w_{4,N}\left(v\right)\) & \(w_{4,N}\left(v\right)\)[3] & \(R_{4,N}(v)\) \\ \hline 0.1 & -0.5555561000 & -0.5555561000 & -0.5555561000 & 0 \\ 0.2 & -0.6251000000 & -0.6251000000 & -0.6251000000 & 2.1E-15 \\ 0.3 & -0.7142861000 & -0.7142861000 & -0.7142861000 & 1.0E-11 \\ 0.4 & -0.8333331000 & -0.8333331000 & -0.8333331000 & 4.3E-09 \\ 0.5 & -1.1000000000 & -1.1000000000 & -1.1000000000 & 4.7E-07 \\ 0.6 & -1.251000000 & -1.2499710000 & -1.2499710000 & 2.1E-05 \\ 0.7 & -1.6666710000 & -1.6657410000 & -1.6657410000 & 5.5E-04 \\ 0.8 & -2.5100000000 & -2.4769410000 & -2.4769410000 & 9.2E-03 \\ 0.9 & -5.1000000000 & -4.4529110000 & -4.4529110000 & 1.0E-01 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of numerical solution of \(w_{3}(v)\) with exact solution and Benhammouda [3] solution for example 1(N=20) Tables [(2), (3),(4),(5)] compare the proposed solution to the exact and numerical solutions presented in [3] for the variables \(w_{1}\), \(w_{2}\), \(w_{3}\), and \(w_{4}\). It is evident that the present solution conforms well with the exact solution and the solution given in [3]. In addition, the maximum absolute error is determined in table (6), and it decreases as the number of series terms increases. ### Example 2 Consider the nonlinear differential algebraic equation [3] \[\frac{d^{2}w_{1}}{dv^{2}} = 2w_{2}-2w_{2}^{3}-w_{1}w,\] \[\frac{d^{2}w_{2}}{dv^{2}} = 2w_{1}-2w_{1}^{3}-w_{2}w,\] \[0 = w_{1}^{2}+w_{2}^{2}-1,\:v\geqslant 0, \tag{24}\] initial conditions \[w_{1}\left(0\right)=1,\:\frac{dw_{1}\left(0\right)}{dv}=0,\:w_{2}\left(0 \right)=0,\:\frac{dw_{2}\left(0\right)}{dv}=1. \tag{25}\] Figure 2: The absolute error for \(w_{1}\), \(w_{2}\), \(w_{3}\) and \(w_{4}\) of example 1. The exact solution is given by \[w_{1}\left(v\right)=cosv,\:w_{2}\left(v\right)=sinv,\:w\left(v\right)=1+sin(2v). \tag{26}\] The present solution is compared to the exact solution and numerical solution discussed in [3] for \(w_{1}\), \(w_{2}\), and \(w\) in tables [(7), (8),(9)] respectively. The present solution is clearly in good conformity with the exact solution and solution discussed in [3]. The maximum absolute error decreases as the number of terms in the series solutions increases, as shown in table (10). 
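For concreteness, the recurrence (20)-(22) of Example 1 can also be evaluated programmatically. The sketch below is an illustration rather than the authors' Mathematica code: the Taylor coefficients of \(\phi_{1},\phi_{2},\phi_{3}\) are obtained with SymPy instead of by hand, the exponential nonlinearities are transformed through Theorem 2.3, and the transform \(W(k)\) of the algebraic variable is closed by enforcing the constraint at order \(k+2\), which is our reading of how the components in (23) determine \(W(0),W(1),\ldots\). The computed coefficients reproduce the series solution of Example 1.

```python
import sympy as sp

v = sp.symbols('v')
N = 10                                             # truncation order of the series (16)

phi1 = -(2*v**4 + 2*v**3 + 1) / (2*(1 + v)**2)
phi2 = (-2*v**4 + 2*v**3 - 1) / (2*(1 - v)**2)
phi3 = sp.log(1 - v**2)

def taylor(expr, order):
    """Differential-transform coefficients of expr at v = 0."""
    s = sp.series(expr, v, 0, order + 1).removeO()
    return [s.coeff(v, k) for k in range(order + 1)]

P1, P2, P3 = (taylor(e, N + 2) for e in (phi1, phi2, phi3))

def bell_hat(k, l, x):                             # recurrence (10); x[i] = x_i
    if k == 0:
        return sp.Integer(1) if l == 0 else sp.Integer(0)
    if l == 0:
        return sp.Integer(0)
    return sum(sp.Rational(i*l, k) * x[i] * bell_hat(k - i, l - 1, x)
               for i in range(1, k - l + 2))

def exp_transform(Warm, k):                        # H(k) for h = exp(w), Theorem 2.3
    if k == 0:
        return sp.exp(Warm[0])
    x = [None] + list(Warm[1:k + 1])
    return sp.exp(Warm[0]) * sum(bell_hat(k, l, x) / sp.factorial(l)
                                 for l in range(1, k + 1))

W1, W2, W3, W4, W = ([sp.Integer(0)] * (N + 2) for _ in range(5))
W3[0], W4[0] = sp.Rational(1, 2), sp.Rational(-1, 2)   # initial data (18); W1(0)=W2(0)=0

for k in range(N):
    H1 = exp_transform(W2, k)                      # transform of exp(w2)
    H2 = exp_transform(W1, k)                      # transform of exp(w1)
    # Transform of the algebraic variable w: enforce W1(k+2) + W2(k+2) = Phi3(k+2).
    W[k] = (sp.Rational((k + 1)*(k + 2), 2) * P3[k + 2]
            + 2*W3[k] - 2*W4[k] - H1 - H2 - P1[k] - P2[k]) / 2
    W3[k + 1] = (-2*W3[k] + H1 + W[k] + P1[k]) / (k + 1)
    W4[k + 1] = ( 2*W4[k] + H2 + W[k] + P2[k]) / (k + 1)
    W1[k + 1] = 2*W3[k] / (k + 1)
    W2[k + 1] = 2*W4[k] / (k + 1)

print(W1[:5])   # expect [0, 1, -1/2, 1/3, -1/4]   (ln(1+v))
print(W[:4])    # expect [0, 0, 1, 0]              (w = v^2)
```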
\begin{table} \begin{tabular}{l l l l l} \hline \hline \(v\) & \(w_{1}\left(v\right)\) & \(w_{1,N}\left(v\right)\) & \(w_{1,N}\left(v\right)\)[3] & \(R_{1,N}(v)\) \\ \hline 0.1 & 0.9950041000 & 0.9950041000 & 0.9950041000 & 0 \\ 0.2 & 0.9800671000 & 0.9800671000 & 0.9800671000 & 0 \\ 0.3 & 0.9553361000 & 0.9553361000 & 0.9553361000 & 0 \\ 0.4 & 0.9210611000 & 0.9210611000 & 0.9210611000 & 1.2E-16 \\ 0.5 & 0.8775831000 & 0.8775831000 & 0.8775831000 & 1.2E-16 \\ 0.6 & 0.8253361000 & 0.8253361000 & 0.8253361000 & 1.2E-16 \\ 0.7 & 0.7648421000 & 0.7648421000 & 0.7648421000 & 2.9E-16 \\ 0.8 & 0.6967071000 & 0.6967071000 & 0.6967071000 & 2.0E-15 \\ 0.9 & 0.6216110000 & 0.6216110000 & 0.6216110000 & 1.4E-14 \\ \hline \hline \end{tabular} \end{table} Table 7: Comparison of numerical solution of \(w_{1}(v)\) with exact solution and Benhammouda [3] solution for example 2 (\(N=15\)) \begin{table} \begin{tabular}{l l l l l} \hline \hline \(v\) & \(w_{2}\left(v\right)\) & \(w_{2,N}\left(v\right)\) & \(w_{2,N}\left(v\right)\)[3] & \(R_{2,N}(v)\) \\ \hline 0.1 & 0.0998334100 & 0.0998334100 & 0.0998334100 & 1.3E-16 \\ 0.2 & 0.1986691000 & 0.1986691000 & 0.1986691000 & 1.3E-16 \\ 0.3 & 0.2955210000 & 0.2955210000 & 0.2955210000 & 1.3E-16 \\ 0.4 & 0.3894181000 & 0.3894181000 & 0.3894181000 & 1.3E-16 \\ 0.5 & 0.4794261000 & 0.4794261000 & 0.4794261000 & 1.3E-16 \\ 0.6 & 0.5646421000 & 0.5646421000 & 0.5646421000 & 1.3E-16 \\ 0.7 & 0.6442181000 & 0.6442181000 & 0.6442181000 & 1.7E-16 \\ 0.8 & 0.717356100 & 0.7173561000 & 0.7173561000 & 1.5E-16 \\ 0.9 & 0.783327100 & 0.7833271000 & 0.7833271000 & 5.6E-16 \\ \hline \hline \end{tabular} \end{table} Table 8: Comparison of numerical solution of \(w_{2}(v)\) with exact solution and Benhammouda [3] solution for example 2 (\(N=15\)) \begin{table} \begin{tabular}{l l l l l} \hline \hline \(v\) & \(w\left(v\right)\) & \(w_{N}\left(v\right)\) & \(w_{N}\left(v\right)\)[3] & \(R_{N}(v)\) \\ \hline 0.1 & 1.1986710000 & 1.1986710000 & 1.1986710000 & 1.8E-16 \\ 0.2 & 1.3894210000 & 1.3894210000 & 1.3894210000 & 1.8E-16 \\ 0.3 & 1.5646410000 & 1.5646410000 & 1.5646410000 & 1.8E-16 \\ 0.4 & 1.7173610000 & 1.7173610000 & 1.7173610000 & 1.2E-16 \\ 0.5 & 1.8414710000 & 1.8414710000 & 1.8414710000 & 1.5E-15 \\ 0.6 & 1.9320410000 & 1.9320410000 & 1.9320410000 & 3.2E-14 \\ 0.7 & 1.9854510000 & 1.9854510000 & 1.9854510000 & 4.2E-13 \\ 0.8 & 1.9995710000 & 1.9995710000 & 1.9995710000 & 4.1E-12 \\ 0.9 & 1.9738510000 & 1.9738510000 & 1.9738510000 & 3.0E-11 \\ \hline \hline \end{tabular} \end{table} Table 9: Comparison of numerical solution of \(w(v)\) with exact solution and Benhamouda [3] solution for example 2 (\(N=15\)) Figure 3: The absolute error for \(w_{1}\), \(w_{2}\) and \(w\) of example 2. ### Example 3 Consider the nonlinear differential algebraic equation [34] \[\frac{dw_{1}}{dv} = v\lambda\frac{dw_{3}}{dv}+e^{v}-v\lambda\left(e^{v}+e^{-v}\right),\] \[\frac{dw_{2}}{dv} = \left(\lambda-5\right)\frac{dw_{3}}{dv}-e^{-v}-\left(\lambda-5 \right)\left(e^{v}+e^{-v}\right),\] \[0 = v^{2}w_{1}\left(v\right)+w_{2}\sin v-v^{2}e^{v}-\sin ve^{-v},\;0 \leqslant v\leqslant 1, \tag{27}\] where \(\lambda\) is arbitrary parameter and we take \(\lambda=15\) and initial conditions \[w_{1}\left(0\right)=1,\;w_{2}\left(0\right)=1,\;w_{3}\left(0\right)=0. \tag{28}\] The exact solution is given by \[w\left(v\right)=\Bigg{(}\begin{array}{c}e^{v}\\ e^{-v}\\ e^{v}-e^{-v}\end{array}\Bigg{)}. 
\tag{29}\] \begin{table} \begin{tabular}{l l l l} \hline \(v\) & \(w_{1}\left(v\right)\) & \(w_{1,N}\left(v\right)\) & \(w_{1,N}\left(v\right)\)[34] & \(R_{1,N}(v)\) \\ \hline 0.0 & 1.000000000 & 1.000000000 & 1.000000000 & 0 \\ 0.1 & 1.105170918 & 1.105170918 & 1.105170918 & 2.0E-16 \\ 0.2 & 1.221402758 & 1.221402758 & 1.221402758 & 2.0E-16 \\ 0.3 & 1.349858807 & 1.349858807 & 1.349858807 & 1.6E-16 \\ 0.4 & 1.491824697 & 1.491824697 & 1.491824697 & 7.4E-16 \\ 0.5 & 1.648721270 & 1.648721270 & 1.648721270 & 1.2E-14 \\ 0.6 & 1.822118800 & 1.822118800 & 1.822118800 & 1.2E-13 \\ 0.7 & 2.013752707 & 2.013752707 & 2.013752707 & 8.1E-13 \\ 0.8 & 2.225540928 & 2.225540928 & 2.225540928 & 4.2E-12 \\ 0.9 & 2.459603111 & 2.459603111 & 2.459603111 & 1.7E-11 \\ 1.0 & 2.718281828 & 2.718281828 & 2.718281828 & 6.3E-11 \\ \hline \end{tabular} \end{table} Table 10: Maximum absolute error for \(w_{1}\), \(w_{2}\) and \(w\) of example 2 \begin{table} \begin{tabular}{l l l l l} \hline \(N\) & \(E_{1N,\infty}\) & \(E_{2N,\infty}\) & \(E_{N,\infty}\) \\ \hline 5 & 7.2E-04 & 9.3E-05 & 1.1E-02 \\ 10 & 5.8E-10 & 7.8E-09 & 1.5E-05 \\ 15 & 8.7E-15 & 4.4E-16 & 6.0E-11 \\ \hline \end{tabular} \end{table} Table 10: Maximum absolute error for \(w_{1}\), \(w_{2}\) and \(w\) of example 2 \begin{table} \begin{tabular}{l l l l l} \hline \(v\) & \(w_{3}\left(v\right)\) & \(w_{3,N}\left(v\right)\) & \(w_{3,N}\left(v\right)\)[34] & \(R_{3,N}(v)\) \\ \hline 0.0 & 0.000000000 & 0.0000000000 & 0.0000000000 & 0 \\ 0.1 & 0.200333500 & 0.200333500 & 0.200333500 & 6.9E-16 \\ 0.2 & 0.402672005 & 0.402672005 & 0.402672005 & 1.3E-16 \\ 0.3 & 0.609040586 & 0.609040586 & 0.609040586 & 1.3E-16 \\ 0.4 & 0.821504651 & 0.821504651 & 0.821504651 & 2.5E-15 \\ 0.5 & 1.042190610 & 1.042190610 & 1.042190610 & 3.7E-14 \\ 0.6 & 1.273307164 & 1.273307164 & 1.273307164 & 3.2E-13 \\ 0.7 & 1.517167403 & 1.517167403 & 1.517167403 & 2.0E-12 \\ 0.8 & 1.776211964 & 1.776211964 & 1.776211964 & 9.9E-12 \\ 0.9 & 2.053033451 & 2.053033451 & 2.053033451 & 3.9E-11 \\ 1.0 & 2.350402387 & 2.350402387 & 2.350402387 & 1.3E-10 \\ \hline \end{tabular} \end{table} Table 13: Comparison of numerical solution of \(w_{3}(v)\) with exact solution and Vanani [34] solution for example 3 (\(N=12\)) \begin{table} \begin{tabular}{l l l l l} \hline \(v\) & \(w_{2}\left(v\right)\) & \(w_{2,N}\left(v\right)\) & \(w_{2,N}\left(v\right)\)[34] & \(R_{2,N}(v)\) \\ \hline 0.0 & 1.000000000 & 1.000000000 & 1.000000000 & 0 \\ 0.1 & 0.904837418 & 0.904837418 & 0.904837418 & 1.2E-16 \\ 0.2 & 0.818730753 & 0.818730753 & 0.818730753 & 1.3E-16 \\ 0.3 & 0.740818220 & 0.740818220 & 0.740818220 & 1.3E-16 \\ 0.4 & 0.670320046 & 0.670320046 & 0.670320046 & 1.4E-15 \\ 0.5 & 0.606530659 & 0.606530659 & 0.606530659 & 3.1E-14 \\ 0.6 & 0.548811636 & 0.548811636 & 0.548811636 & 3.6E-13 \\ 0.7 & 0.496585303 & 0.496585303 & 0.496585303 & 2.9E-12 \\ 0.8 & 0.449328964 & 0.449328964 & 0.449328964 & 1.8E-11 \\ 0.9 & 0.406569659 & 0.406569659 & 0.406569659 & 9.4E-11 \\ 1.0 & 0.367879441 & 0.367879441 & 0.367879441 & 4.0E-10 \\ \hline \end{tabular} \end{table} Table 12: Comparison of numerical solution of \(w_{2}(v)\) with exact solution and Vanani [34] solution for example 3 (\(N=12\)) Tables [(11),(12),(13)] exhibit numerical results for \(N=12\) for \(w_{1}\), \(w_{2}\), and \(w_{3}\) using the proposed technique. 
The absolute error of numerical results obtained from the present approach is plotted in figure (4) against the multiquadric method [34] and the maximum absolute error of numerical results obtained from the present method is shown in Table (14). Furthermore, the absolute error and maximum absolute error both decline as \(N\) increases, demonstrating that the method is convergent. ### Example 4 Consider the differential algebraic equation that studies the position of particle on a circular track, which is termed as mechanical control problem [21, 32] \[\frac{d^{2}w_{1}}{dv^{2}} = 2w_{2}+w_{1}w_{3},\] \[\frac{d^{2}w_{2}}{dv^{2}} = -2w_{1}+w_{2}w_{3},\] \[0 = w_{1}^{2}+w_{2}^{2}-1,\;0\leqslant v\leqslant 1, \tag{30}\] Figure 4: Maximum absolute error for \(w_{1}\), \(w_{2}\) and \(w_{3}\) of example 3. initial conditions \[w_{1}\left(0\right)=0,\;w_{2}\left(0\right)=1,\;w_{3}\left(0\right)=0. \tag{31}\] The exact solution is given by \[w\left(v\right)=\Bigg{(}\begin{array}{c}\sin{v^{2}}\\ \cos{v^{2}}\\ -4v^{2}\end{array}\Bigg{)}. \tag{32}\] \begin{table} \begin{tabular}{l l l l l} \hline \(v\) & \(w_{2}\left(v\right)\) & \(w_{2,N}\left(v\right)\) & \(w_{2,N}\left(v\right)\)[21] & \(R_{2,N}(v)\) \\ \hline 0.0 & 1.000000000 & 1.000000000 & 1.000000000 & 0 \\ 0.1 & 0.999950000 & 0.999950000 & 0.999950000 & 0 \\ 0.2 & 0.999200107 & 0.999200107 & 0.999200107 & 1.1E-16 \\ 0.3 & 0.995952733 & 0.995952733 & 0.995952733 & 1.1E-16 \\ 0.4 & 0.987227283 & 0.987227283 & 0.987227283 & 1.1E-16 \\ 0.5 & 0.968912422 & 0.968912422 & 0.968912422 & 1.1E-16 \\ 0.6 & 0.935896824 & 0.935896824 & 0.935896824 & 1.0E-14 \\ 0.7 & 0.882332859 & 0.882332859 & 0.882332859 & 4.5E-13 \\ 0.8 & 0.802095755 & 0.802095755 & 0.802095755 & 1.2E-11 \\ 0.9 & 0.689498433 & 0.689498433 & 0.689498433 & 2.4E-10 \\ 1.0 & 0.540302306 & 0.540302306 & 0.540302306 & 3.8E-09 \\ \hline \end{tabular} \end{table} Table 16: Comparison of numerical solution of \(w_{2}(v)\) with exact solution and Liu [21] solution for example 4 (\(N=20\)) \begin{table} \begin{tabular}{l l l l l} \hline \(v\) & \(w_{1}\left(v\right)\) & \(w_{1,N}\left(v\right)\) & \(w_{1,N}\left(v\right)\)[21] & \(R_{1,N}(v)\) \\ \hline 0.1 & 0.00999833 & 0.009999833 & 0.009999833 & 0 \\ 0.2 & 0.039989334 & 0.039989334 & 0.039989334 & 0 \\ 0.3 & 0.089878549 & 0.089878549 & 0.089878549 & 1.5E-16 \\ 0.4 & 0.159318207 & 0.159318207 & 0.159318207 & 1.7E-16 \\ 0.5 & 0.247403959 & 0.247403959 & 0.247403959 & 2.4E-14 \\ 0.6 & 0.352274233 & 0.352274233 & 0.352274233 & 9.3E-13 \\ 0.7 & 0.470625888 & 0.470625888 & 0.470625888 & 2.0E-11 \\ 0.8 & 0.597195441 & 0.597195441 & 0.597195441 & 3.0E-10 \\ 0.9 & 0.724287174 & 0.724287174 & 0.724287174 & 3.3E-09 \\ 1.0 & 0.841470985 & 0.841470984 & 0.841470984 & 2.9E-08 \\ \hline \end{tabular} \end{table} Table 15: Comparison of numerical solution of \(w_{1}(v)\) with exact solution and Liu [21] solution for example 4 (\(N=20\)) Tables (15) and (16) show numerical results for \(N=20\) using the present study. In comparison to the Lie group method employed by [21], the proposed method achieves a better approximation solution, as shown in Tables (15)-(16). Figure (5) depicts the comparison of absolute error results obtained using the present method and the Lie group method discussed in [21] and Table (17) lists the maximum absolute error using the present method. Table (17) shows the convergence of the results. As \(N\) increases (see also Tables (15)-(16)), the error decreases rapidly. 
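The error measures of Section 3 that underlie Tables (6), (10), (14), and (17) can be computed in a few lines. The sketch below is only an illustration: it evaluates \(E_{N}(v)\), \(E_{N,\infty}\), and \(R_{N}(v)\) on a uniform grid over \([0,1]\), using the degree-\(N\) Taylor polynomial of \(e^{v}\) as a stand-in for a truncated series solution \(w_{N}(v)\).

```python
import math

def truncated_exp(v, N):
    # degree-N Taylor polynomial of exp(v), playing the role of w_N(v) in (16)
    return sum(v**k / math.factorial(k) for k in range(N + 1))

def error_measures(exact, approx, N, grid_points=101):
    grid = [i / (grid_points - 1) for i in range(grid_points)]
    E = [abs(exact(v) - approx(v, N)) for v in grid]      # E_N(v) on the grid
    R = [e / abs(exact(v)) for e, v in zip(E, grid)]      # R_N(v) on the grid
    return max(E), max(R)                                 # E_{N,infinity}, max R_N

for N in (5, 10, 15):
    E_inf, R_max = error_measures(math.exp, truncated_exp, N)
    print(f"N={N:2d}  E_N,inf = {E_inf:.2e}  max R_N = {R_max:.2e}")
```

As in the tables above, both measures decay rapidly as the truncation order \(N\) grows.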
### Example 5 Consider the following nonlinear fractional differential algebraic equation \[D^{\alpha}w_{1}+\sqrt{w_{1}} = w_{2}+2e^{2v},\] \[w_{1}-w_{2}^{2} = 0, \tag{33}\] with initial conditions \[w_{1}\left(0\right)=w_{2}\left(0\right)=1. \tag{34}\] Exact solution for \(\alpha=1\) is given by \[w_{1}\left(v\right)=e^{2v},w_{2}\left(v\right)=e^{v}. \tag{35}\] \begin{table} \begin{tabular}{c c c c} \hline \hline \(N\) & \(E_{1N,\infty}\) & \(E_{2N,\infty}\) & \(E_{3N,\infty}\) \\ \hline 10 & 4.4E-05 & 3.8E-04 & 0 \\ 15 & 2.7E-06 & 2.7E-07 & 0 \\ 20 & 2.4E-08 & 2.0E-09 & 0 \\ \hline \hline \end{tabular} \end{table} Table 17: Maximum absolute error for \(w_{1}\), \(w_{2}\) and \(w_{3}\) of example 4 Figure 5: The absolute error for \(w_{1}\) and \(w_{2}\) of example 4. \begin{table} \begin{tabular}{l l l l l l l} \hline \(v\) & Exact solution & \(w_{2,N}\left(v\right)\) for & \(w_{2,N}\left(v\right)\) for & \(w_{2,N}(v)\) for & \(w_{2,N}(v)\) for & \(w_{2,N}(v)\) for \\ & for \(\alpha=1\) & \(\alpha=1\) & \(\alpha=0.9\) & \(\alpha=0.8\) & \(\alpha=0.7\) \\ \hline 0.1 & 1.105170918 & 1.105170918 & 1.110221816 & 1.114817889 & 1.118821064 \\ 0.2 & 1.221402758 & 1.221402758 & 1.233693661 & 1.245327274 & 1.255996824 \\ 0.3 & 1.349858808 & 1.349858808 & 1.371656910 & 1.392838054 & 1.412898962 \\ 0.4 & 1.491824698 & 1.491824698 & 1.525505211 & 1.558844590 & 1.591115024 \\ 0.5 & 1.648721271 & 1.648721271 & 1.696799638 & 1.745046381 & 1.792478585 \\ 0.6 & 1.822118800 & 1.822118800 & 1.887285142 & 1.953368506 & 2.019089728 \\ 0.7 & 2.013752707 & 2.013752707 & 2.098908983 & 2.18598441 & 2.273321617 \\ 0.8 & 2.225540928 & 2.225540926 & 2.333842138 & 2.445345519 & 2.557801126 \\ 0.9 & 2.459603111 & 2.459603103 & 2.594505315 & 2.734224962 & 2.875342155 \\ 1.0 & 2.718281828 & 2.718281801 & 2.883602243 & 3.055790657 & 3.228800260 \\ \hline \end{tabular} \end{table} Table 18: Comparison of numerical solution of \(w_{1}(v)\) with exact solution for \(\alpha=1\) and different values of \(\alpha\) for example 5 (\(N=10\)) \begin{table} \begin{tabular}{l l l l l l} \hline \(v\) & Exact solution & \(w_{2,N}\left(v\right)\) for & \(w_{2,N}\left(v\right)\) for & \(w_{2,N}(v)\) for & \(w_{2,N}(v)\) for \\ & for \(\alpha=1\) & \(\alpha=1\) & \(\alpha=0.9\) & \(\alpha=0.8\) & \(\alpha=0.7\) \\ \hline 0.1 & 1.221402758 & 1.221402758 & 1.232592481 & 1.242818926 & 1.251760574 \\ 0.2 & 1.491824698 & 1.491824698 & 1.522000050 & 1.550840020 & 1.577528022 \\ 0.3 & 1.822118800 & 1.822118800 & 1.881442677 & 1.939997843 & 1.996283505 \\ 0.4 & 2.225540928 & 2.225540926 & 2.327166133 & 2.429996427 & 2.531647704 \\ 0.5 & 2.718281828 & 2.718281801 & 2.879128803 & 3.045186405 & 3.212987402 \\ 0.6 & 3.320116923 & 3.320116716 & 3.561843540 & 3.815644105 & 4.076781955 \\ 0.7 & 4.055199967 & 4.055198820 & 4.405409180 & 4.778498484 & 5.168309651 \\ 0.8 & 4.953032424 & 4.953027348 & 5.446773813 & 5.979561723 & 6.543727135 \\ 0.9 & 6.049647464 & 6.049628566 & 6.731280574 & 7.475331043 & 8.272630664 \\ 1.0 & 7.389056099 & 7.388994709 & 8.314556977 & 9.335443038 & 10.44120618 \\ \hline \end{tabular} \end{table} Table 19: Comparison of numerical solution of \(w_{2}(v)\) with exact solution for \(\alpha=1\) and different values of \(\alpha\) for example 5 (\(N=10\)) Tables (18) and (19) show a comparison of numerical solutions for \(w_{1}(v)\) and \(w_{2}(v)\) with the exact solution and for different values of \(\alpha\). Figure (6) shows various values for \(w_{1}(v)\) and \(w_{2}(v)\) for different \(\alpha\). 
As \(\alpha\) moves away from \(1\), the computed solution deviates further from the integer-order (\(\alpha=1\)) solution.

## 5 Conclusion

The proposed method employs the differential transform and Faa di Bruno's formula to find solutions of nonlinear DAEs. The solutions obtained with the present technique agree closely with the exact solutions and with other published solutions, and they are more accurate than those of competing methods. Faa di Bruno's formula, which generalises the chain rule to the \(n\)th-order derivative of \(f\circ g\), and the Bell polynomials handle the nonlinearity in the DAE problems effectively and circumvent difficulties of earlier methods: the computation of complicated Adomian polynomials in ADM, the determination of the perturbation parameter, trial functions, and Lagrangian multiplier in the perturbation method, HPM, and VIM, respectively, and the need to discretize variables in numerical methods, which are ill-suited to nonlinear DAEs. Other polynomials, such as Legendre polynomials, represent approximations over a given interval but not at a specific point, so only the Bell polynomials are applicable here. In Section 4, the viability and applicability of the present approach are demonstrated on problems including the mechanical control problem and a fractional DAE, and the approach proves effective across all of them.

Figure 6: Values of \(w_{1}\) and \(w_{2}\) of example 5.

## Conflicts of interest

The authors have no conflicts of interest to declare. All authors have seen and agree with the contents of the manuscript.
2301.12025
Cross-Architectural Positive Pairs improve the effectiveness of Self-Supervised Learning
Existing self-supervised techniques have extreme computational requirements and suffer a substantial drop in performance with a reduction in batch size or pretraining epochs. This paper presents Cross Architectural - Self Supervision (CASS), a novel self-supervised learning approach that leverages Transformer and CNN simultaneously. Compared to the existing state-of-the-art self-supervised learning approaches, we empirically show that CASS-trained CNNs and Transformers across four diverse datasets gained an average of 3.8% with 1% labeled data, 5.9% with 10% labeled data, and 10.13% with 100% labeled data while taking 69% less time. We also show that CASS is much more robust to changes in batch size and training epochs than existing state-of-the-art self-supervised learning approaches. We have open-sourced our code at https://github.com/pranavsinghps1/CASS.
Pranav Singh, Jacopo Cirrone
2023-01-27T23:27:24Z
http://arxiv.org/abs/2301.12025v1
# Cross-Architectural Positive Pairs improve the effectiveness of Self-Supervised Learning ###### Abstract Existing self-supervised techniques have extreme computational requirements and suffer a substantial drop in performance with a reduction in batch size or pretraining epochs. This paper presents Cross Architectural - Self Supervision (CASS), a novel self-supervised learning approach that leverages Transformer and CNN simultaneously. Compared to the existing state-of-the-art self-supervised learning approaches, we empirically show that CASS-trained CNNs and Transformers across four diverse datasets gained an average of 3.8% with 1% labeled data, 5.9% with 10% labeled data, and 10.13% with 100% labeled data while taking 69% less time. We also show that CASS is much more robust to changes in batch size and training epochs than existing state-of-the-art self-supervised learning approaches. We have opensourced our code at [https://github.com/pranavsinghps1/CASS](https://github.com/pranavsinghps1/CASS). Machine Learning, Self-Supervised Learning, Self-Supervised Learning ## 1 Introduction Self-supervised learning has emerged as a powerful paradigm for learning representations that can be used for various downstream tasks like classification, object detection, and image segmentation. Pretraining with self-supervised techniques is label-free, allowing us to train even on unlabeled images. This is especially useful in fields with limited labeled data availability or if the cost and effort required to provide annotations are high. Medical Imaging is one field that can benefit from applying self-supervised techniques. Medical imaging is a field characterized by minimal data availability. First, data labeling typically requires domain-specific knowledge. Therefore, the requirement of large-scale clinical supervision may be cost and time prohibitive. Second, due to patient privacy, disease prevalence, and other limitations, it is often difficult to release imaging datasets for secondary analysis, research, and diagnosis. Third, due to an incomplete understanding of diseases. This could be either because the disease is emerging or because no mechanism is in place to systematically collect data about the prevalence and incidence of the disease. An example of the former is COVID-19 when despite collecting chest X-ray data spanning decades, the samples lacked data for COVID-19 (Sriram et al., 2021). An example of the latter is autoimmune diseases. Statistically, autoimmune diseases affect 3% of the US population or 9.9 million US citizens. There are still major outstanding research questions for autoimmune diseases regarding the presence of different cell types and their role in inflammation at the tissue level. The study of autoimmune diseases is critical because autoimmune diseases affect a large part of society and because these conditions have been on the rise recently (Galeotti and Bayry, 2020; Lerner et al., 2015; Ehrenfeld et al., 2020). Other fields like cancer and MRI image analysis have benefited from the application of artificial intelligence (AI). But for autoimmune diseases, the application of AI is particularly challenging due to minimal data availability, with the median dataset size for autoimmune diseases between 99-540 samples (Tsakalidou et al., 2022; Stafford et al., 2020). To overcome the limited availability of annotations, we turn to self-supervised learning. 
Models extract representations that can be fine-tuned even with a small amount of labeled data for various downstream tasks (Sriram et al., 2021). As a result, this learning approach avoids the relatively expensive and human-intensive task of data annotation. But self-supervised learning techniques suffer when limited data is available, especially in cases where the entire dataset size is smaller than the peak performing batch size for some of the leading self-supervised techniques. This calls for a reduction in the batch size; this again causes existing self-supervised techniques to drop performance; for example, state-of-the-art DINO (Caron et al., 2021) drops classification performance by 25% when trained with batch size 8. Furthermore, existing self-supervised techniques are compute-intensive and trained using multiple GPU servers over multiple days. This makes them inaccessible to general practitioners. Existing approaches in the field of self-supervised learning rely purely on Convolutional Neural Networks (CNNs) or Transformers as the feature extraction backbone and learn feature representations by teaching the network to compare the extracted representations. Instead, we propose to combine a CNN and Transformer in a response-based contrastive method. In CASS, the extracted representations of each input image are compared across two branches representing each architecture (see Figure 1). By transferring features sensitive to translation equivariance and locality from CNN to Transformer, our proposed approach - CASS, learns more predictive data representations in limited data scenarios where a Transformer-only model cannot find them. We studied this quantitatively and qualitatively in Section 5. Our contributions are as follows: * **S**elf **S**upervision (CASS), a hybrid CNN-Transformer approach for learning improved data representations in a self-supervised setting in limited data availability problems in the medical image analysis domain 1 Footnote 1: We have opensourced our code at [https://github.com/pranavsinghps1/CASS](https://github.com/pranavsinghps1/CASS) * We propose the use of CASS for analysis of autoimmune diseases such as dermatomyelitis and demonstrate an improvement of 2.55% compared to the existing state-of-the-art self-supervised approaches. To our knowledge, the autoimmune dataset contains 198 images and is the smallest known dataset for self-supervised learning. * Since our focus is to study self-supervised techniques in the context of medical imaging. We evaluate CASS on three challenging medical image analysis problems (autoimmune disease cell classification, brain tumor classification, and skin lesion classification) on three public datasets (Dermoft Project Dataset (Fisher and Rees, 2017), brain tumor MRI Dataset (Cheng, 2017; Kang et al., 2021) and ISIC 2019 (Tschandl et al., 2018; Gutman et al., 2018; Combalia et al., 2019)) and find that CASS improves classification performance (F1 Score and Recall value) over the existing state of the art self-supervised techniques by an average of 3.8% using 1% label fractions, 5.9 % with 10% label fractions and 10.13% with 100% label fractions. * Existing methods also suffer a severe drop in performance when trained for a reduced number of epochs or batch size ((Caron et al., 2021; Grill et al., 2020b; Chen et al., 2020a)). We show that CASS is robust to these changes in Sections 5.3.2 and 5.3.1. * New state-of-the-art self-supervised techniques often require significant computational requirements. 
This is a major hurdle as these methods can take around 20 GPU days to train (Azizi et al., 2021b). This makes them inaccessible in limited computational resource settings. CASS, on average, takes 69% less time than the existing state-of-the-art methods. We further expand on this result in Section 5.2. ## 2 Background ### Neural Network Architectures for Image Analysis CNNs are a famous architecture of choice for many image analysis applications (Khan et al., 2020). CNNs learn more abstract visual concepts with a gradually increasing receptive field. They have two favorable inductive biases: (i) translation equivariance resulting in the ability to learn equally well with shifted object positions, and (ii) locality resulting in the ability to capture pixel-level closeness in the input data. CNNs have been used for many medical image analysis applications, such as disease diagnosis (Yadav and Jadhav, 2019) or semantic segmentation (Ronneberger et al., 2015). To address the requirement of additional context for a more holistic image understanding, the Vision Transformer (ViT) architecture (Dosovitskiy et al., 2020) has been adapted to images from language-related tasks and recently gained popularity (Liu et al., 2021, 2022; Touvron et al., 2021). In a ViT, the input image is split into patches that are treated as tokens in a self-attention mechanism. Compared to CNNs, ViTs can capture additional image context but lack ingrained inductive biases of translation and location. As a result, ViTs typically outperform CNNs on larger datasets (d'Ascoli et al., 2021). #### 2.1.1 Cross-architecture Techniques Cross-architecture techniques aim to combine the features of CNNs and Transformers; they can be classified into two categories (i) Hybrid cross architecture techniques and (ii) pure cross-architecture techniques. Hybrid cross-architecture techniques combine parts of CNNs and Transformers in some capacity, allowing architectures to learn unique representations. ConViT (d'Ascoli et al., 2021) combines CNNs and ViTs using gated positional self-attention (GPSA) to create a soft convolution similar to inductive bias and improve upon the capabilities of Transformers alone. More recently, the training regimes and inferences from ViTs have been used to design a new family of convolutional architectures - ConvNext (Liu et al., 2022), outperforming benchmarks set by ViTs in classification tasks. (Li et al., 2021) further simplified the procedure to create an optimal CNN-Transformer using their self-supervised Neural Architecture Search (NAS) approach. On the other hand, pure cross-architecture techniques combine CNNs and Transformers without any changes to their architecture to help both of them learn better representations. (Gong et al., 2022) used CNN and Transformer pairs in a consistent teaching knowledge distillation format for audio classification and showed that cross-architecture distillation makes distilled models less prone to overfitting and also improves robustness. Compared with the CNN-attention hybrid models, cross-architecture knowledge distillation is more effective and does not require any model architecture change. Similarly, (Guo et al., 2022) also used a 3D-CNN and Transformer to learn strong representations and proposed a self-supervised learning module to predict an edit distance between two video sequences in the temporal order. Although their approach showed encouraging results on two datasets, their approach relies on both positive and negative pairs. 
Furthermore, their proposed approach is batch statistic dependent. ### Self-Supervised Learning Most existing self-supervised techniques can be classified into contrastive and reconstruction-based techniques. Traditionally, contrastive self-supervised techniques have been trained by reducing the distance between representations of different augmented views of the same image ('positive pairs') and increasing the distance between representations of augmented views from different images ('negative pairs') (He et al., 2020; Chen et al., 2020; Caron et al., 2020). But this is highly memory intensive as we need to track positive and negative pairs. Recently, Bootstrap Your Own Latent (BYOL) (Grill et al., 2020) and DINO (Caron et al., 2021) have improved upon this approach by eliminating the memory banks. The premise of using negative pairs is to avoid collapse. Several strategies have been developed with BYOL using a momentum encoder, Simple Siamese (SimSiam) (Chen and He, 2021) a stop gradient, and DINO applying the counterbalancing effects of sharpening and centering on avoiding collapse. Techniques relying only on the positive pairs are much more efficient than the ones using positive and negative pairs. Recently, there has been a surge in reconstruction-based self-supervised pretraining methods with the introduction of MSN (Assran et al., 2022), and MAE (He et al., 2021). These methods learn semantic knowledge of the image by masking a part of it and then predicting the masked portion. #### 2.2.1 Self-supervised Learning and Medical Image Analysis ImageNet is most commonly used for benchmarking and comparing self-supervised techniques. ImageNet is a balanced dataset that is not representative of real-world data, especially in the field of medical imaging, that has been characterized by class imbalance. Self-supervised methods that use batch-level statistics have been found to drop a significant amount of performance in image classification tasks when trained on ImageNet by artificially inducing class imbalance (Assran et al., 2022). This prior of some self-supervised techniques like MSN (Assran et al., 2022), SimCLR (Chen et al., 2020), and VICreg (Bardes et al., 2021) limits their applicability on imbalanced datasets, especially in the case of medical imaging. Existing self-supervised techniques typically require large batch sizes and datasets. When these conditions are not met, a marked reduction in performance is demonstrated (Caron et al., 2021; Chen et al., 2020; Caron et al., 2020; Grill et al., 2020). Self-supervised learning approaches are practical in big data medical applications (Ghesu et al., 2022; Azizi et al., 2021), such as analysis of dermatology and radiology imaging. In more limited data scenarios (3,662 images - 25,333 images), Matsoukas et al. (2021) reported that ViTs outperform their CNN counterparts when self-supervised pre-training is followed by supervised fine-tuning. Transfer learning favors ViTs when applying standard training protocols and settings. Their study included running the DINO (Caron et al., 2021) self-supervised method over 300 epochs with a batch size of 256. However, questions remain about the accuracy and efficiency of using existing self-supervised techniques on datasets whose entire size is smaller than their peak performance batch size. Also, viewing this from the general practitioner's perspective with limited computational power raises the question of how we can make practical self-supervised approaches more accessible. 
Adoption and faster development of self-supervised paradigms will only be possible when they become easy to plug and play with limited computational power. In this work, we explore these questions by designing CASS, a novel self-supervised approach developed with the core values of efficiency and effectiveness. In simple terms, we are combining CNN and Transformer in a response-based contrastive method by reducing similarity to combine the abilities of CNNs and Transformers. This approach was initially designed for a 198-image dataset for muscle biopsies of inflammatory lesions from patients with dermatomyositis - an autoimmune disease. The benefits of this approach are illustrated by challenges in diagnosing autoimmune diseases due to their rarity, limited data availability, and heterogeneous features. Consequently, misdiagnoses are common, and the resulting diagnostic delay plays a significant factor in their high mortality rate. Autoimmune diseases share commonalities with COVID-19 regarding clinical manifestations, immune responses, and pathogenic mechanisms. Moreover, some patients have developed autoimmune diseases after COVID-19 infection (Liu et al., 2020). Despite this increasing prevalence, the representation of autoimmune diseases in medical imaging and deep learning is limited. ## 3 Methodology We start by motivating our method before explaining it in detail (in Section 3.1). Self-supervised methods have been using different augmentations of the same image to create positive pairs. These were then passed through the same architectures but with a different set of parameters (Grill et al., 2020). In (Caron et al., 2021) the authors introduced image cropping of different sizes to add local and global information. They also used different operators and techniques to avoid collapse, as described in Section 2.2. But there can be another way to create positive pairs - through architectural differences. (Raghu et al., 2021) in their study suggested that for the same input, Transformers and CNNs extract different representations. They conducted their study by analyzing the CKA (Centered Kernel Alignment) for CNNs and Transformer using ResNet (He et al., 2016) and ViT (Vision Transformer) (Dosovitskiy et al., 2020) family of encoders, respectively. They found that Transformers have a more uniform representation across all layers as compared to CNNs. They also have self-attention, enabling global information aggregation from shallow layers and skip connections that connect lower layers to higher layers, promising information transfer. Hence, lower and higher layers in Transformers show much more similarity than in CNNs. The receptive field of lower layers for Transformers is more extensive than in CNNs. While this receptive field gradually grows for CNNs, it becomes global for Transformers around the midway point. Transformers don't attend locally in their earlier layers, while CNNs do. Using local information earlier is essential for solid performance. CNNs have a more centered receptive field as opposed to a more globally spread receptive field of Transformers. Hence, representations drawn from the same input will differ for Transformers and CNNs. Until now, self-supervised techniques have used only one kind of architecture at a time, either a CNN or Transformer. But differences in the representations learned with CNN and Transformers inspired us to create positive pairs by different architectures or feature extractors rather than using a different set of augmentations. 
This, by design, avoids collapse as the two architectures will never give the exact representation as output. By contrasting their extracted features at the end, we hope to help the Transformer learn representations from CNN and vice versa. This should help both the architectures to learn better representations and learn from patterns that they would miss. We verify this by studying attention maps and feature maps from supervised and CASS-trained CNN and Transformers in Appendix C.4 and Section 5.3.3. We observed that CASS-trained CNN and Transformer were able to retain a lot more detail about the input image, which pure CNN and Transformers lacked. ### Description of CASS CASS' goal is to extract and learn representations in a self-supervised way. To achieve this, an image is passed through a common set of augmentations. The augmented image is then simultaneously passed through a CNN and Transformer to create positive pairs. The output logits from the CNN and Transformer are then used to find cosine similarity loss (equation 1). This is the same loss function as used in BYOL (Grill et al., 2020). Furthermore, the intuition of CASS is very similar to that of BYOL. In BYOL to avoid collapse to a trivial solution the target and the online arm are differently parameterized and an additional predictor is used with the online arm. They compared this setup to that of GANs where joint of optimization of both arms to a common value was impossible due to differences in the arms. Analogously, In CASS instead of using an additional MLP on top of one of the arms and differently parameterizing them, we use two fundamentally different architectures. Since the two architectures give different output representations as mentioned in (Raghu et al., 2021), the model doesn't collapse. Additionally, to avoid collapse we introduced a condition where if the outputs from the CNN and Transformer are the same, artificial noise sampled from a Gaussian distribution is added to the model outputs and thereby making the loss non-zero. We also report results for CASS using a different set of CNNs and Transformers in Appendix B.6 and Section 5, and not a single case of the model collapse was registered. \[\text{loss}=2-2\times\mathrm{F(R)}\times\mathrm{F(T)} \tag{1}\] \[\text{where, }\mathrm{F(x)}=\sum_{i=1}^{N}\left(\frac{x}{\left(\max\left(\|x \|_{2}\right),\epsilon\right)}\right)\] We use the same parameters for the optimizer and learning schedule for both architectures. We also use stochastic weigh averaging (SWA) (Izmailov et al., 2018) with Adam optimizer and a learning rate of 1e-3. For the learning rate, we use a cosine schedule with a maximum of 16 iterations and a minimum value of 1e-6. ResNets are typically trained with Stochastic Gradient Descent (SGD) and our use of the Adam optimizer is quite unconventional. Furthermore, unlike existing self-supervised techniques there is no parameter sharing between the two architectures. We compare CASS against the state-of-the-art self-supervised technique DINO (DIstilation with NO labels). This choice was made based on two conditions (i) As already explained in Section 2.2.1, some self-supervised techniques use batch-level statistics that makes them less suitable for application on imbalanced datasets and imbalanced datasets are a feature of medical imaging. (ii) The self-supervised technique should be benchmarked for both CNNs and Transformers as both architectures have exciting properties and apriori, it is difficult to predict which architecture will perform better. 
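A minimal PyTorch sketch of one CASS-style pretraining step is given below. It is an illustration rather than the released implementation (available at https://github.com/pranavsinghps1/CASS); in particular, contrasting the backbones' default 1000-dimensional ImageNet heads, the 224x224 input size implied by the torchvision ViT, and the omission of stochastic weight averaging are simplifying assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet-initialized backbones, one CNN arm and one Transformer arm.
cnn = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).to(device)
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1).to(device)

def cass_loss(r, t):
    # Equation (1): 2 - 2 * cos(r, t), averaged over the batch.
    # (The paper also adds small Gaussian noise if the two outputs coincide
    # exactly; omitted here for brevity.)
    return (2.0 - 2.0 * F.cosine_similarity(r, t, dim=-1)).mean()

# Same optimizer settings and schedule for both arms: Adam at 1e-3 with a
# cosine schedule (T_max = 16, minimum 1e-6); the paper additionally uses SWA.
opt = torch.optim.Adam(list(cnn.parameters()) + list(vit.parameters()), lr=1e-3)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=16, eta_min=1e-6)

def pretrain_step(x):
    # x: a batch of images X' that has already gone through ONE common set of
    # augmentations; the same tensor feeds both arms.
    x = x.to(device)
    r, t = cnn(x), vit(x)          # both heads emit 1000-d logits here (an assumption)
    loss = cass_loss(r, t)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# usage sketch:
# for epoch in range(100):
#     for x, _ in loader:
#         pretrain_step(x)
#     sched.step()
```

In the experiments reported below, this pretraining loop runs for 100 epochs with batch size 16 on a single GPU, after which each backbone is fine-tuned end-to-end on the chosen label fraction.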
In Figure 1, we show CASS on top, and DINO (Caron et al., 2021) at the bottom. Comparing the two, CASS does not use any extra mathematical treatment used in DINO to avoid collapse such as centering and applying the softmax function on the output of its student and teacher networks. We also provide an ablation study using a softmax and sigmoid layer for CASS in Appendix B. After training CASS and DINO for one cycle, DINO yields only one kind of trained architecture. In contrast, CASS provides two trained architectures (1 - CNN and 1 - Transformer). CASS-pre-trained architectures perform better than DINO-pre-trained architectures in most cases, as further elaborated in Section 5. ## 4 Experimental Details ### Datasets We split the datasets into three splits - training, validation, and testing following the 70/10/20 split strategy unless specified otherwise. We further expand upon our thought process for choosing datasets in Appendix C.4.5. * **Autoimmune diseases biopsy slides**(Singh and Cirrone, 2022; Van Buren et al., 2022) consists of slides cut from muscle biopsies of dermatomyositis patients stained with different proteins and imaged to generate a dataset of 198 TIFF image set from 7 patients. The presence or absence of these cells helps to diagnose dermatomyositis. Multiple cell classes can be present per image; therefore this is a multi-label classification problem. Our task here was to classify cells based on their protein staining into TFH-1, TFH-217, TFH-Like, B cells, and others. We used F1 score as our metric for evaluation, as employed in previous works by (Singh and Cirrone, 2022; Van Buren et al., 2022). These RGB images have a consistent size of 352 by 469. * **Dermofit dataset**(Fisher and Rees, 2017) contains normal RGB images captured through an SLR camera indoors with ring lightning. There are 1300 image samples, classified into 10 classes: Actinic Keratosis (AK), Basal Cell Carcinoma (BCC), Melanocytic Nevus / Mole (ML), Squamous Cell Carcinoma (SCC), Seborrhoeic Keratosis (SK), Intraepithelial carcinoma (IEC), Pyogenic Granuloma (PYO), Haemangioma (VASC), Dermatofibroma (DF) and Melanoma (MEL). This dataset comprises images of different sizes and no two images are of the same size. They range from 205x205 to 1020x1020 in size. Our pretext task is multi-class classification and we use the F1 score as our evaluation metric on this dataset. * **Brain tumor MRI dataset**(Cheng, 2017; Amin et al., 2022) 7022 images of human brain MRI that are classified into four classes: glioma, meningioma, no tumor, and pituitary. This dataset combines Br35H: Brain tumor Detection 2020 dataset used in "Retrieval of Brain tumors by Adaptive Spatial Pooling and Fisher Vector Representation" and Brain tumor classification curated by Navoneel Chakrabarty and Swati Kanchan. Out of these, the dataset curator created the training and testing splits. We followed their splits, 5,712 images for training and 1,310 for testing. Since this was a combination of multiple datasets, the size of images varies throughout the dataset from 512x512 to 219x234. The pretext of the task is multi-class classification, and we used the F1 score as the metric. * melanoma (MEL), melanocytic nevus (NV), Basal cell carcinoma (BCC), actinic keratosis(AK), benign keratosis(BKL), dermatofibroma(DF), vascular lesion (VASC) and Squamous cell carcinoma(SCC). This dataset contains images of size \(600\times 450\) and \(1024\times 1024\). 
The distribution of these labels is unbalanced across different Figure 1: (Top) In our proposed self-supervised architecture - CASS, R represents ResNet-50, a CNN and T in the other box represents the Transformer used (ViT); X is the input image, which becomes X’ after applying augmentations. Note that CASS applies only one set of augmentations to create X’. X’ is passed through both arms to compute loss, as in Equation 1. This differs from DINO, which passes different augmentation of the same image through networks with the same architecture but different parameters. The output of the teacher network is centered on a mean computed over a batch. Another key difference is that in CASS, the loss is computed over logits; meanwhile, in DINO, it is computed over softmax output. classes. For evaluation, we followed the metric followed in the official competition i.e balanced multi-class accuracy value, which is semantically equal to recall. ### Self-supervised learning We studied and compared results between DINO and CASS-pre-trained self-supervised CNNs and Transformers. For the same, we trained from ImageNet initialization (Matsoukas et al., 2021) for 100 epochs with a batch size of 16. We ran these experiments on an internal cluster with a single GPU unit (NVIDIA RTX8000) with 48 GB video RAM, 2 CPU cores, and 64 GB system RAM. For DINO, we used the hyperparameters and augmentations mentioned in the original implementation. For CASS, we describe the experimentation details in Appendix C.5. ### End-to-end fine-tuning In order to evaluate the utility of the learned representations, we use the self-supervised pre-trained weights for the downstream classification tasks. While performing the downstream fine-tuning, we perform the entire model (E2E fine-tuning). The test set metrics were used as proxies for representation quality. We trained the entire model for a maximum of 50 epochs with an early stopping patience of 5 epochs. For supervised fine-tuning, we used Adam optimizer with a cosine annealing learning rate starting at 3e-04. Since almost all medical datasets have some class imbalance we applied class distribution normalized Focal Loss (Lin et al., 2017) to navigate class imbalance. We fine-tune the models using different label fractions during E2E fine-tuning i.e 1%, 10%, and 100% label fractions. For example, if a model is trained with a 10% label fraction, then that model will have access only to 10% of the training dataset samples and their corresponding labels during the E2E fine-tuning after initializing weights using the CASS or DINO pretraining. ## 5 Results and Discussion ### Compute and Time analysis Analysis We ran all the experiments on a single NVIDIA RTX8000 GPU with 48GB video memory. In Table 2, we compare the cumulative training times for self-supervised training of a CNN and Transformer with DINO and CASS. We observed that CASS took an average of 69% less time compared to DINO. Another point to note is that CASS trained two architectures at the same time or in a single pass. While training a CNN and Transformer with DINO it would take two separate passes. ### Results on the four medical imaging datasets We did not perform 1% finetuning for the autoimmune diseases biopsy slides of 198 images because using 1% images would be too small a number to learn anything meaningful and the results would be highly randomized. Similarly, we also did not perform 1% fine-tuning for the dermatit dataset as the training set was too small to draw meaningful results with just 10 samples. 
We present the results on the four medical imaging datasets in Tables 1, 3, 4, and 5. From these tables, we observe that CASS improves upon the classification performance of existing state-of-the-art self-supervised method DINO by 3.8% with 1% labeled data, 5.9% with 10% labeled data, and 10.13% with 100% labeled data. \begin{table} \begin{tabular}{l l l l} \hline \hline \multirow{2}{*}{Techniques} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c}{Testing F1 score} \\ & & 10\% & 100\% \\ \hline DINO & ResNet-50 & **0.8237\(\pm\)0.001** & 0.84252\(\pm\)0.008 \\ CASS & ResNet-50 & 0.8158\(\pm\)0.0055 & **0.8650\(\pm\)0.0001** \\ Supervised & ResNet-50 & 0.819\(\pm\)0.0216 & 0.83895\(\pm\)0.007 \\ \hline DINO & ViT B/16 & 0.8445\(\pm\)0.0008 & 0.8639\(\pm\) 0.002 \\ CASS & ViT B/16 & **0.8717\(\pm\)0.005** & **0.8894\(\pm\)0.005** \\ Supervised & ViT B/16 & 0.8356\(\pm\)0.007 & 0.8420\(\pm\)0.009 \\ \hline \hline \end{tabular} \end{table} Table 1: Results for autoimmune biopsy slides dataset. In this table, we compare the F1 score on the test set. We observed that CASS outperformed the existing state-of-art self-supervised method using 100% labels for CNN as well as for Transformers. Although DINO outperforms CASS for CNN with 10% labeled fraction. Overall, CASS outperforms DINO by 2.2% for 100% labeled training for CNN and Transformer. For Transformers in 10% labeled training CASS’ performance was 2.7% better than DINO. \begin{table} \begin{tabular}{l l l} \hline \hline Dataset & DINO & CASS \\ \hline Autoimmune & 1 H 13 M & **21 M** \\ Dermofit & 3 H 9 M & **1 H 11 M** \\ Brain MRI & 26 H 21 M & **7 H 11 M** \\ ISIC-2019 & 109 H 21 M & **29 H 58 M** \\ \hline \hline \end{tabular} \end{table} Table 2: Self-supervised pretraining time comparison for 100 epochs on a single RTX8000 GPU. In this table, H represents hour(s), and M represents minute(s). ### Ablation Studies As mentioned in Section 2.2.1, existing self-supervised methods experience a drop in classification performance when trained for a reduced number of pretraining epochs and batch size. We performed ablation studies to study the effect of change in performance for CASS and DINO pretrained ResNet-50 and ViTB/16 on the autoimmune dataset. Additional ablation studies have been provided in Appendix. #### 5.3.1 Change in Epochs In this section, we compare the performance change in CASS and DINO pretrained and then E2E finetuned with 100% labels over the autoimmune dataset. To study the robustness, we compare the mean-variance over CNN and Transformer trained with the two techniques. The recorded mean-variance in performance for ResNet-50 and ViTB-16 trained with CASS and DINO with change in the number of pretraining epochs is 0.0001791 and 0.0002265, respectively. 
Based on these results, we observed that CASS \begin{table} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{Techniques} & \multirow{2}{*}{Backbone} & \multicolumn{2}{l}{Testing F1 score} & \multirow{2}{*}{100\%} \\ & & 1\% & & 100\% \\ \hline DINO & Resnet-50 & **0.63405\(\pm\)0.09** & **0.92325\(\pm\)0.02819** & 0.9900\(\pm\)0.0058 \\ CASS & Resnet-50 & 0.40816\(\pm\)0.13 & 0.8925\(\pm\)0.0254 & **0.9909\(\pm\) 0.0032** \\ Supervised & Resnet-50 & 0.52\(\pm\)0.018 & 0.9022\(\pm\)0.011 & 0.9899\(\pm\) 0.003 \\ \hline DINO & ViT B/16 & 0.3211\(\pm\)0.071 & 0.7529\(\pm\)0.044 & 0.8841\(\pm\) 0.0052 \\ CASS & ViT B/16 & **0.3345\(\pm\)0.11** & **0.7833\(\pm\)0.0259** & **0.9279\(\pm\) 0.0213** \\ Supervised & ViT B/16 & 0.3017 \(\pm\) 0.077 & 0.747\(\pm\)0.0245 & 0.8719\(\pm\) 0.017 \\ \hline \hline \end{tabular} \end{table} Table 4: This table contains results on the brain tumor MRI classification dataset. While DINO outperformed CASS for 1% and 10% labeled training for CNN, CASS maintained its superiority for 100% labeled training, albeit by just 0.09%. Similarly, CASS outperformed DINO for all data regimes for Transformers, incrementally 1.34% in for 1%, 3.04% for 10%, and 4.38% for 100% labeled training. We observe that this margin is more significant than for biopsy images. Such results could be ascribed to the increase in dataset size and increasing learnable information. \begin{table} \begin{tabular}{l l l l} \hline \hline \multirow{2}{*}{Techniques} & \multicolumn{2}{l}{Testing F1 score} & \multirow{2}{*}{100\%} \\ & & 10\% & & 100\% \\ \hline DINO (Resnet-50) & 0.3749\(\pm\)0.0011 & 0.6775\(\pm\)0.0005 \\ CASS (Resnet-50) & **0.4367\(\pm\)0.0002** & **0.7132\(\pm\)0.0003** \\ Supervised (Resnet-50) & 0.33\(\pm\)0.0001 & 0.6341\(\pm\)0.0077 \\ \hline DINO (ViT B/16) & 0.332\(\pm\) 0.0002 & 0.4810\(\pm\)0.0012 \\ CASS (ViT B/16) & **0.3896\(\pm\)0.0013** & **0.6667\(\pm\)0.0002** \\ Supervised (ViT B/16) & 0.299\(\pm\)0.002 & 0.456\(\pm\)0.0077 \\ \hline \hline \end{tabular} \end{table} Table 3: This table contains the results for the dermofit dataset. We observe that CASS outperforms both supervised and existing state-of-the-art self-supervised methods for all label fractions. Parenthesis next to the techniques represents the architecture used, for example, DINO(ViT B/16) represents ViT B/16 trained with DINO. In this table, we compare the F1 score on the test set. We observed that CASS outperformed the existing state-of-art self-supervised method using all label fractions and for both the architectures. \begin{table} \begin{tabular}{l l l l l} \hline \hline \multirow{2}{*}{Techniques} & \multirow{2}{*}{Backbone} & \multicolumn{2}{l}{Testing Balanced multi-class accuracy} \\ & & 1\% & 10\% & 100\% \\ \hline DINO & Resnet-50 & 0.328\(\pm\)0.0016 & 0.3797\(\pm\)0.0027 & 0.493\(\pm\)3.9e-05 \\ CASS & Resnet-50 & **0.3617\(\pm\)0.0047** & **0.41\(\pm\)0.0019** & **0.543\(\pm\)2.85e-05** \\ Supervised & Resnet-50 & 0.2640\(\pm\)0.031 & 0.3070\(\pm\)0.0121 & 0.35\(\pm\)0.006 \\ \hline DINO & ViT B/16 & 0.3676\(\pm\) 0.012 & 0.3998\(\pm\)0.056 & 0.5408\(\pm\)0.001 \\ CASS & ViT B/16 & **0.3973\(\pm\) 0.0465** & **0.4395\(\pm\)0.0179** & **0.5819\(\pm\)0.0015** \\ Supervised & ViT B/16 & 0.3074\(\pm\)0.0005 & 0.3586\(\pm\)0.0314 & 0.42\(\pm\)0.007 \\ \hline \hline \end{tabular} \end{table} Table 5: Results for the ISIC-2019 dataset. 
Comparable to the official metrics used in the challenge [https://challenge.isic-archive.com/landing/2019/](https://challenge.isic-archive.com/landing/2019/). The ISIC-2019 dataset is incredibly challenging, not only because of the class imbalance issue but because it is made of partially processed and inconsistent images with hard-to-classify classes. We use balanced multi-class accuracy as our metric, which is semantically equal to the recall value. We observed that CASS consistently outperforms DINO by approximately 4% for all label fractions with CNN and Transformer. trained models have less variance, i.e., they are more robust to changes in the number of pretraining epochs. #### 5.3.2 Change in Batch Size Similar to Section 5.3.1, in this section, we study the change in performance concerning the batch size. As previously mentioned, existing self-supervised techniques suffer a drop in performance when they are trained for small batch sizes; we studied the change in performance for batch sizes 8, 16, and 32 on the autoimmune dataset with CASS and DINO. We reported these results in Figure 2. We observe that the mean-variance in performance for ResNet-50 and ViTB-16 trained with CASS and DINO with change in batch size is 5.8432e-5 and 0.00015003, respectively. Hence, CASS is much more robust to changes in pretraining batch size than DINO. #### 5.3.3 Attention Maps To study the effect qualitatively, we examine the attention maps of a supervised and a CASS-pretrained Transformer. From Figure 3 we observe that the attention map of the CASS-pretrained Transformer is much more connected than that of a supervised Transformer, due to the transfer of locality information from the CNN. We expand on this further in Appendix C.4. ## 6 Conclusion Based on our experimentation on four diverse medical imaging datasets, we conclude that CASS improves upon the classification performance of the existing state-of-the-art self-supervised method DINO by 3.8% with 1% labeled data, 5.9% with 10% labeled data, and 10.13% with 100% labeled data, while training in 69% less time. Furthermore, we saw that CASS is robust to changes in batch size and to reductions in the number of training epochs. To conclude, for medical image analysis, CASS is computationally efficient, performs better, and overcomes some of the shortcomings of existing self-supervised techniques. This ease of access and better performance will catalyze medical imaging research to help us improve healthcare solutions and propagate these advancements in state-of-the-art techniques to deep practical Figure 3: This figure shows the attention maps over a single test sample image from the autoimmune dataset. The left image is the overall attention map over a single test sample for the supervised Transformer, while the one on the right is for the CASS-trained Transformer. Figure 2: In Figure a, we report the change in performance with respect to the change in the number of pretraining epochs for DINO and CASS for ResNet-50 and ViTB/16, respectively. In Figure b, we report the change in performance with respect to the change in the pretraining batch size for DINO and CASS for ResNet-50 and ViTB/16, respectively. These ablation studies were conducted on the autoimmune dataset, while keeping the other hyper-parameters the same during pretraining and downstream finetuning. learning in developing countries and practitioners with limited resources to develop new solutions for underrepresented and emerging diseases. ## Acknowledgements We would like to thank Prof.
Elena Sizikova (Moore Sloan Faculty Fellow, Center for Data Science (CDS), New York University (NYU)) for her valuable feedback and the NYU HPC team for assisting us with our computational needs.
2302.04317
A lower bound on the overhead of quantum error correction in low dimensions
We show that a quantum architecture with an error correction procedure limited to geometrically local operations incurs an overhead that grows with the system size, even if arbitrary error-free classical computation is allowed. In particular, we prove that in order to operate a quantum error correcting code in 2D at a logical error rate of $\delta$, a space overhead of $\Omega(\sqrt{\log(1/\delta)})$ is needed for any constant depolarizing noise $p > 0$.
Nouédyn Baspin, Omar Fawzi, Ala Shayeghi
2023-02-08T20:19:28Z
http://arxiv.org/abs/2302.04317v1
# A lower bound on the overhead of quantum error correction in low dimensions ###### Abstract We show that a quantum architecture with an error correction procedure limited to geometrically local operations incurs an overhead that grows with the system size, even if arbitrary error-free classical computation is allowed. In particular, we prove that in order to operate a quantum error correcting code in 2D at a logical error rate of \(\delta\), a space overhead of \(\Omega(\sqrt{\log(1/\delta)})\) is needed for any constant depolarizing noise \(p>0\). ## 1 Introduction The feasibility of quantum computing relies heavily on finding efficient quantum error correction (QEC) schemes. From a theoretical perspective, QEC lies at the heart of the Quantum Threshold Theorem [1], and in practice it generally induces costly overheads. Part of this cost can be attributed to the necessity of performing frequent measurements to diagnose whether a system has suffered an error. Depending on the architecture considered, those measurements can be challenging to implement, in particular for systems limited to local interactions. The space of observables one has access to is therefore limited by the space that the computer lives in. This observation leads to the following natural question: what is the tradeoff between geometry and the performance of quantum error correction? How much information can reliably be stored in a volume of space? In this work, we show that an architecture limited to geometrically local operations and classical computation incurs an overhead when using quantum error correction. In particular, when limited to _arbitrary_ 2D local operations and _free classical computation_, we show that in order to operate a quantum code protecting \(k\) logical qubits up to a target error \(\delta\), the number of physical qubits \(m\) required satisfies \[m\in\Omega\left(k\sqrt{\frac{\log(1/\delta)}{\log(1/p)}}\right)\,\] where \(p\in(0,1]\) is the depolarizing noise parameter. In most cases, we are interested in an error that decreases exponentially with the system size, or \(\delta\sim p^{m^{c}}\), which gives \(\frac{\log(1/\delta)}{\log(1/p)}\sim m^{c}\). Our bound therefore yields a lower bound on the overhead \(m/k\in\Omega(m^{c/2})\). In general, for other geometries, our bound reads \[m\in\Omega\left(k\cdot g_{\text{geom}}\left(\frac{\log(1/\delta)}{\log(1/p)}\right)\right),\] where \(g_{\text{geom}}\) depends on the geometry. These bounds differ from existing bounds on _local quantum codes_ [1, 2, 1, 13, 14] because an architecture with local operations is not necessarily limited to local codes. For example, in [1] the authors demonstrate how to measure the syndrome of an arbitrary \(n\)-qubit sparse code in constant time using \(O(n^{2})\) ancillas, by making use of free classical computation. Previous attempts to bound the performance of error correction in those systems assumed that only a specific set of gates were allowed, and were limited to a subset of classical communications [1]. In this work, our bounds hold for all operations that are separable1 between the quantum and the classical system. Finally, the methods we use here apply to any architecture that is slow at generating entanglement between its subsystems, and therefore draw a direct connection between one's ability to correct errors and one's ability to generate entanglement. Footnote 1: Separable operations are a strict superset of classical operations [14]. In what follows, we review the history of no-go theorems addressing quantum error correction in low-dimensional systems.
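As a purely illustrative sanity check of this arithmetic (a sketch, not part of the argument: the implicit constant in the bound is unspecified above and is set to 1 below only for concreteness), one can tabulate how the required number of physical qubits scales when the target error is suppressed exponentially in the system size.

```python
import math

# Illustrative evaluation of the 2D bound m >= C * k * sqrt(log(1/delta) / log(1/p)),
# with the unspecified constant C set to 1 (an assumption made only for this example).
def overhead_lower_bound_2d(k, delta, p, C=1.0):
    return C * k * math.sqrt(math.log(1.0 / delta) / math.log(1.0 / p))

p = 0.01   # example depolarizing noise strength
k = 100    # example number of logical qubits
c = 0.5    # exponent in the target error delta ~ p**(m**c)

for m in [100, 1_000, 10_000]:            # kept small so delta does not underflow
    delta = p ** (m ** c)                 # exponential error suppression
    bound = overhead_lower_bound_2d(k, delta, p)
    # log(1/delta)/log(1/p) = m**c, so the bound scales as k * m**(c/2)
    print(f"m = {m:6d}   bound on m: {bound:10.1f}   k * m**(c/2) = {k * m ** (c / 2):10.1f}")
```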
We will focus on two parameters to capture the performance of a code. First, the dimension \(k\) corresponds to the number of qubits protected by the code. Secondly, the distance \(d\) is defined as the minimum number of qubits that must be erased for the information to be lost. An \(\llbracket n,k,d\rrbracket\) code is defined on \(n\) qubits, has dimension \(k\), and distance \(d\). An infinite family of codes satisfying \(k\in\Omega(n)\) is said to have constant rate. Previous work addressing the tradeoff between locality and error correction has focused on codes corresponding to the ground space of a sparse frustration-free Hamiltonian. If the terms of this Hamiltonian are spatially local 2, then given access to nearest-neighbor interactions, we can easily verify if a system is in the codespace: it is enough to measure the terms of said Hamiltonian. A celebrated example of such codes is the family of topological codes. In the 20 years since their invention, these codes have seen sustained theoretical and experimental interest, but no effort could improve their poor parameters \(\llbracket n,1,O(\sqrt{n})\rrbracket\). This observation begs the question: are their poor parameters inherent to their locality? Footnote 2: A code is defined to be local if the terms of its associated Hamiltonian are local. This question was positively resolved in 2009, ten years after Kitaev's surface code, by Bravyi, Poulin, and Terhal (BPT) [1]. The authors established that, in 2D, no local code can outperform the surface code: any such code is bound to obey \[n/k\in\Omega(d^{2})\.\] This result formalizes a non-trivial constraint on computation with _local codes_. The ratio \(n/k\) should here be understood as an overhead, and quantifies the cost of encoding one qubit. Typically, one would like the distance to grow polynomially with \(n\), and thus the overhead grows as \(n^{\Omega(1)}\). It is worth noting here that this overhead is not intrinsic to the nature of quantum mechanics, and it is possible to construct significantly better codes when relaxing the assumption of locality. In fact, the same year as the BPT paper, a groundbreaking result introduced a family of codes with constant rate and polynomial distance - with parameters \(\llbracket n,\Theta(n),\Theta(\sqrt{n})\rrbracket\) to be precise [14]. By plugging these parameters into the BPT bound, one can easily verify that those codes cannot be local. Worse: it is known that if the Hamiltonians corresponding to those codes were to act on qubits placed on a 2D lattice, they would have \(\widetilde{\Omega}(n)\) terms spanning a distance \(\widetilde{\Omega}(n^{1/4})\) [1]. Unfortunately, that feature is generic: although constant rate and polynomial distance codes bear the promise of reduced overhead, they all suffer from embarrassingly non-local terms. However, all hope is not lost. One might try shuttling qubits around with SWAP gates to emulate these non-local geometries, incurring some time overhead and additional errors. Another option is to trade this time overhead for a space overhead, by using a large number of ancillas. As previously mentioned, [1] demonstrated how to measure the syndrome of an arbitrary \(n\)-qubit sparse code in constant time using \(O(n^{2})\) ancillas by using non-local classical computation. Lastly, quantum LDPC codes have very redundant stabilizers: could one operate such a code in 2D by only measuring a subset of its stabilizers at a time?
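The gap between these two regimes can be made concrete with a quick parameter check (our own illustration in Python; the implicit constant in the BPT bound is set to 1, which is an assumption rather than a value from the text): a surface-code-like family \(\llbracket n,1,\sqrt{n}\rrbracket\) saturates \(n/k\geq d^{2}\), while a constant-rate family with \(d\sim\sqrt{n}\) violates it for every \(n\), so it cannot be realized as a 2D local code.

```python
# Illustrative comparison against the 2D BPT tradeoff n/k >= d^2 (constant set to 1).
def compatible_with_2d_locality(n, k, d):
    """True if the parameters satisfy n/k >= d^2, i.e. they do not contradict the BPT bound."""
    return n / k >= d ** 2

print("surface-code-like family [[n, 1, sqrt(n)]]:")
for n in [100, 400, 1600]:
    k, d = 1, round(n ** 0.5)
    print(f"  n={n:5d}  n/k={n / k:8.1f}  d^2={d ** 2:5d}  2D-local allowed: {compatible_with_2d_locality(n, k, d)}")

print("constant-rate family [[n, n/10, sqrt(n)]]:")
for n in [100, 400, 1600]:
    k, d = n // 10, round(n ** 0.5)
    print(f"  n={n:5d}  n/k={n / k:8.1f}  d^2={d ** 2:5d}  2D-local allowed: {compatible_with_2d_locality(n, k, d)}")
```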
Presently, the limitations weighing on those alternative options are poorly understood. First, it is not clear whether those approaches can be made fault-tolerant and at what cost. Secondly, when deriving bounds on fault-tolerant processes, it is hard to take into account the access to error-free, non-local classical computation. Consequently, we have a rather limited understanding of the resources needed for efficient quantum error correction in low dimensions. In this paper, we address this challenge by answering two questions: 1. Given access to arbitrary local quantum operations, is it possible to lower bound the overhead of QEC in 2D? 2. Does this lower bound hold when given access to free classical computation? The framework we use to address those questions sheds new light on the structure of quantum codes, and naturally leads to generalizations and/or stronger versions of known results. For example, the existing bounds on the encoding/decoding complexity of quantum codes often assume either a unitary circuit [1] or a restricted set of operations. They can also assume a specific structure for the code. For example, [1] considers a subset of topological codes, and [1] assumes that the geometry induced by the stabilizers contains some expansion. Other works have considered the question of the overhead of fault-tolerance, but with no locality restrictions, typically yielding weaker bounds [13]. Here we eschew those limitations and address the following question: 1. Given access to arbitrary local quantum operations and free classical computation, is it possible to lower bound the complexity of encoding/decoding quantum codes? We note that quantum circuits assisted by free classical computation can be surprisingly powerful; see, e.g., the recent work [14] on preparing GHZ states and multivariate trace estimation. We are not aware of any circuit lower bounds for this model. ### Main results For the sake of readability, we state our result for 2D Euclidean dimensions, although it generalizes to other geometries. Our main contribution reads as follows. **Theorem 1** (see Theorem 28 for the formal version).: _Let \(\mathcal{C}\) be a code encoding \(k\) qubits and let \(\mathfrak{W}\) be a 2D-local circuit on \(m\) physical qubits, with non-local, error-free classical computation. If \(\mathfrak{W}\) is subject to depolarizing noise of strength \(p\) every \(O(1)\) steps, while achieving a target error below \(\delta\equiv p^{f}\), then_ \[m/k\in\Omega(\sqrt{f})\.\] This result is the first lower bound on the overhead of fault tolerance in low dimensions. It confirms that obtaining exponential error suppression, i.e., \(f\sim m^{c}\) for a constant \(c\), incurs polynomial overhead in 2D: \(m\in\Omega(k^{2/(2-c)})\). Similarly, constant overhead implies constant error rate, which is in line with previous observations made in [1]: the authors measured the syndrome of a constant rate code using \(m\sim n\sim k\) qubits, and seem to fail to suppress errors with this scheme. This also shows that the non-local operations used in the construction of constant overhead fault-tolerant schemes [12, 13] are necessary. As an alternative, one might imagine implementing a constant rate LDPC code \(\llbracket N,\Theta(N),D\rrbracket\) by concatenating it with a local code of size \(n\)3. Note that although the stabilizers are not local - and therefore the BPT bound no longer applies - one could still imagine operating it with local operations.
In which case our bound gives \(f\in O(n^{2})\): the local code has to grow polynomially for exponential error suppression. Footnote 3: Model suggested by Anirudh Krishna. We also obtain a lower bound on the complexity of encoding circuits for any quantum code, and syndrome extracting circuits for any stabilizer code. Those bounds are non-trivial for any constant rate code. **Theorem 2** (Encoding circuit depth, see Theorem 23 for the formal version).: _Let \(\mathcal{C}\) be an \((\!(n,k,d)\!)\) quantum code and consider an encoding circuit \(\mathfrak{W}\) for \(\mathcal{C}\) on \(m\) qubits using arbitrary local operations and free classical computation. Then the depth \(\Delta\) of \(\mathfrak{W}\) obeys_ \[\Delta\in\Omega(k\sqrt{d}/m)\.\] **Theorem 3** (Syndrome extracting circuit depth, see Theorem 24 for the formal version).: _Let \(\mathcal{C}\) be an \(\llbracket n,k,d\rrbracket\) stabilizer quantum code and consider a syndrome extracting circuit \(\mathfrak{W}\) for \(\mathcal{C}\) on \(m\) qubits using arbitrary local operations and free classical computation. Then the depth \(\Delta\) of \(\mathfrak{W}\) obeys_ \[\Delta\in\Omega(k\sqrt{d}/m)\.\] Note how these bounds parallel Theorem 1. The second bound is tight in the regime \(m\in O(n)\). In fact, in Section VII of [1], it is shown how to measure the syndrome of any LDPC code on \(n\) qubits in \(O(\sqrt{n})\) time. By taking a good LDPC code [10, 11], we obtain \(\Delta\in\Omega(n\sqrt{n}/n)\), or \(\Delta\in\Omega(\sqrt{n})\). Similarly, in the regime \(\Delta\in O(1)\), for a good code [10, 11], Theorem 3 gives \(m\in\Omega(n^{3/2})\), while [1] provides a method to do it in \(m\in O(n^{2})\). ### Open questions 1. A crucial element in the proof of our main theorem is the observation from Section 4.2 that noisy local circuits have a limited ability to create entanglement. Along the same line of thought, one could ask for a precise characterization of entanglement in noisy circuits [15]. As a point of comparison, much has been written regarding the existence of an area law in the ground state of local Hamiltonians [14]: does a similar area law exist for sufficiently deep noisy local circuits? The answer might depend on the metric of choice, but for definiteness, one could ask if for any subset of qubits \(\Lambda\) we have \(E_{R}(\Lambda:\overline{\Lambda})\in O(\sqrt{|\Lambda|})\) when the circuit runs for more than polylog time4? This is reminiscent of the fact that noisy _unitary_ circuits converge to the maximally mixed state in polylog time [1, 1]. 2. Our notion of error rate is quite restrictive: we require the output of the circuit to be close in fidelity to the input state. In practice, for quantum computation, this might not be necessary: for example, instead of recovering the original state, one might want to measure a logical Pauli observable up to a small error. Can our techniques be adapted to a setting where the definition of the error rate is less restrictive? 3. As discussed in the introduction, local codes can only provide \(m/k\sim f^{2}\), while our bound sits at \(m/k\sim\sqrt{f}\), which begs the question: can one actually propose an error correction scheme that is local, yet achieves \(m/k\sim\sqrt{f}\) by making use of classical communications, or can our bound be tightened? 4. In the regime \(\Delta\in O(1)\) and \(m\in O(n)\), our bound in Theorem 3 can be understood as a bound on local stabilizer codes.
In that case, we obtain \(k\sqrt{d}\in O(n)\), which is far from \(kd^{2}\in O(n)\) of [1], or even \(kd\in\tilde{O}(n)\) of [1]. Can one find an intuitive explanation for this difference? In their proofs, [1, 1] only need to focus on two correctable regions, while in this work we typically deal with \(m/d\) correctable regions. ## 2 Preliminaries For a Hilbert space \(\mathcal{H}\), \(\mathcal{B}(\mathcal{H})\) is the space of bounded linear operators on \(\mathcal{H}\) and \(\mathcal{D}(\mathcal{H})\) is the set of density operators on \(\mathcal{H}\), i.e., positive semidefinite operators with unit trace. We write \(\mathds{1}\) for the identity operator in \(\mathcal{H}\). For a linear operator \(\rho\) on \(\mathcal{H}\), its support is the orthogonal complement of its kernel. We write \(\mathsf{CPTP}(\mathcal{H}_{1},\mathcal{H}_{2})\) the set of completely positive trace-preserving linear maps from \(\mathcal{B}(\mathcal{H}_{1})\) to \(\mathcal{B}(\mathcal{H}_{2})\) (also called quantum channels) and \(\mathsf{CP}(\mathcal{H}_{1},\mathcal{H}_{2})\) for completely positive linear maps. When \(\mathcal{H}_{1}=\mathcal{H}_{2}\), we simply write \(\mathsf{CPTP}(\mathcal{H})\) and \(\mathsf{CP}(\mathcal{H})\). We denote \(\mathcal{I}\in\mathsf{CPTP}(\mathcal{H})\) the identity channel on the space of linear operators on \(\mathcal{H}\). We are going to consider quantum states and channels that act on multiple systems. The systems will be labelled \(A,B,X,\dots\) and should be thought of as labels for a collection of information carrying systems. As such, mathematically, these systems are finite sets. The states of such systems are described by a density operator on the corresponding Hilbert spaces, which we write as \(\mathcal{H}_{A},\mathcal{H}_{B},\mathcal{H}_{X},\dots\). For example, a state on the systems \(AX\) (which should be understood as the union of the sets \(A\) and \(X\)) will be described by an element in \(\mathcal{D}(\mathcal{H}_{A}\otimes\mathcal{H}_{X})\). We will often include the systems on which the states or channels act as a subscript, e.g., \(\rho_{AX}\in\mathcal{D}(\mathcal{H}_{A}\otimes\mathcal{H}_{X})\) for the state on \(AX\), \(\rho_{A}\equiv\operatorname{Tr}_{X}(\rho_{AX})\) and \(\mathcal{I}_{A}\in\mathsf{CPTP}(\mathcal{H}_{A})\) the identity channel on the system \(A\). Additionally, we use \(\|\cdot\|_{1}\) to denote the trace norm, and \(F(\rho,\sigma)=\left(\operatorname{Tr}\left(\rho^{\frac{1}{2}}\sigma\rho^{ \frac{1}{2}}\right)^{\frac{1}{2}}\right)^{2}\) for the Uhlmann fidelity [12]. We use the standard inequalities: \[2(1-\sqrt{F(\rho,\sigma)})\leq\|\rho-\sigma\|_{1}\leq 2\sqrt{1-F(\rho,\sigma)}\,\] and when one of the two states is pure the inequality can be improved to \(2(1-F(\rho,\sigma))\leq\|\rho-\sigma\|_{1}\). ### Distance and entropic measures One can draw an equivalence between preserving information and preserving entanglement with another party. This is formalized in the following lemma. **Lemma 4** (Theorem 2 of [1]).: _Let \(\mathcal{C}\) be a subspace of \(\mathcal{H}\). Let \(\mathcal{E}\in\mathsf{CPTP}(\mathcal{H})\) be a quantum channel such that for all states \(|\psi\rangle\in\mathcal{C}\)_ \[F(|\psi\rangle\langle\psi|,\mathcal{E}(|\psi\rangle\langle\psi|))\geq 1- \epsilon\enspace.\] _For any state \(\rho\) with support included in \(\mathcal{C}\), let \(|\rho\rangle\in\mathcal{H}_{R}\otimes\mathcal{H}\) be a purification of \(\rho\). 
Then, we have_ \[F(|\rho\rangle\langle\rho|,(\mathcal{I}_{R}\otimes\mathcal{E})(|\rho\rangle\langle\rho|))\geq 1-\frac{3}{2}\epsilon\enspace.\] We introduce entropic quantities that will be used throughout the paper, in particular the coherent information, which is known to capture the ability of a channel to transmit quantum information. **Definition 5**.: _For a state \(\rho\in\mathcal{D}(\mathcal{H})\) and a positive operator \(\sigma\) on \(\mathcal{H}\), the relative entropy is defined as_ \[D(\rho\|\sigma)\equiv\left\{\begin{array}{ll}\operatorname{Tr}\rho(\log\rho-\log\sigma)&\text{if the support of $\rho$ is included in the support of $\sigma$}\\ +\infty&\text{otherwise}\enspace.\end{array}\right.\] _The conditional von Neumann entropy of a bipartite state \(\rho_{AB}\in\mathcal{D}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})\) is defined as_ \[S(A|B)_{\rho}\equiv-D(\rho_{AB}\|\mathds{1}_{A}\otimes\rho_{B})\enspace,\] _which can also be written as \(S(A|B)_{\rho}=S(AB)-S(B)\). The coherent information is defined as_ \[I(A)B)_{\rho}\equiv-S(A|B)_{\rho}\enspace.\] _The conditional mutual information of a tripartite state \(\rho_{ABC}\in\mathcal{D}(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{C})\) is defined as_ \[I(A:B|C)_{\rho}\equiv S(A|C)_{\rho}-S(A|BC)_{\rho}\enspace.\] Using the monotonicity of the relative entropy (see, e.g., [14]) and the continuity statement [14, Lemma 2] for conditional entropy, we immediately obtain the following statements. **Proposition 6**.: _Let \(\rho_{AB}\) be a bipartite state \(\rho_{AB}\in\mathcal{D}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})\). The coherent information satisfies the following properties:_ 1. \(I(A)B)_{\rho}\) _is right-monotonous, i.e., for any_ \(\mathcal{T}\in\mathsf{CPTP}(\mathcal{H}_{B})\)_, we have_ \(I(A)B)_{\rho}\geq I(A)B)_{(\mathcal{I}_{A}\otimes\mathcal{T})(\rho)}\)_._ 2. _Let_ \(\epsilon\in[0,1]\) _and_ \(\rho,\sigma\) _be such that_ \(F(\rho,\sigma)\geq 1-\epsilon\)_, then_ \(|I(A)B)_{\rho}-I(A)B)_{\sigma}|\leq 2\sqrt{\epsilon}\log\dim\mathcal{H}_{A}+g(\sqrt{\epsilon})\)_, with_ \(g(\epsilon)=(1+\epsilon)h(\frac{\epsilon}{1+\epsilon})\)_, where_ \(h(\cdot)\) _is the binary entropy function. We use the fact that_ \(g(\epsilon)\leq 2\sqrt{\epsilon}\)_._ Note that the coherent information is not an entanglement measure; in particular, \(I(A)B)_{\rho}\) can increase when acting on \(A\). We will need to use an entanglement measure: we choose to use the relative entropy of entanglement (REE). **Definition 7**.: _Let \(\rho\in\mathcal{D}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})\). Then the relative entropy of entanglement (REE) is defined as_ \[E_{R}(A:B)_{\rho}\equiv\min_{\sigma\in\mathsf{SEP}(\mathcal{H}_{A}:\mathcal{H}_{B})}D(\rho\|\sigma)\enspace,\] _where_ \[\mathsf{SEP}(\mathcal{H}_{A}:\mathcal{H}_{B})\equiv\{\sigma\in\mathcal{D}(\mathcal{H}_{A}\otimes\mathcal{H}_{B}):\sigma=\sum_{i}p_{i}\sigma_{A,i}\otimes\sigma_{B,i},\sigma_{A,i}\in\mathcal{D}(\mathcal{H}_{A}),\sigma_{B,i}\in\mathcal{D}(\mathcal{H}_{B})\}\] _is the set of separable states._ The following proposition summarizes useful properties of the relative entropy of entanglement. **Proposition 8**.: _REE satisfies the following properties:_ 1. _Continuity: let_ \(\rho,\sigma\) _be such that_ \(F(\rho,\sigma)\geq 1-\epsilon\)_, then_ \(|E_{R}(A:B)_{\rho}-E_{R}(A:B)_{\sigma}|\leq\sqrt{\epsilon}\log\dim\mathcal{H}_{A}+g(\sqrt{\epsilon})\)_, with_ \(g(\epsilon)=(1+\epsilon)h(\frac{\epsilon}{1+\epsilon})\)_._ 2.
_Monotonicity under separable operations (see Definition_ 9 _below): let_ \(\mathcal{T}\) _be a separable quantum channel with respect to the bipartition_ \(A:B\)_, then_ \(E_{R}(A:B)_{\rho}\geq E_{R}(A:B)_{\mathcal{T}(\rho)}\)_._ 3. \(E_{R}(A:B)\geq I(A)B)\)_._ Proof.: The first point can be found in [25, Corollary 8] and the second one in [24]. A proof of the third point is included in the Appendix as Lemma 31. The set of separable quantum channel is a convenient superset of the set of LOCC operators. We refer to [23, Section 6.1.2] for further details on separable quantum channels. **Definition 9**.: _A bipartite quantum channel \(\mathcal{T}\in\mathsf{CPTP}(\mathcal{H}_{A}\mathop{\otimes}\mathcal{H}_{B})\) is called a separable quantum channel with respect to the bipartition \(A:B\) if it admits a Kraus representation with Kraus operators of the form \(\{T_{A,i}\mathop{\otimes}T_{B,i}\}_{i}\). We denote the set of these quantum channels by \(\mathsf{S EPC}(\mathcal{H}_{A}:\mathcal{H}_{B})\)._ ### Circuit model Our circuits will have two kinds of systems: a classical system denoted \(X\) and quantum systems denoted by the set \(A\). As computation on the classical system will be free, we can think of \(X\) as a single system, i.e., a set with one element called \(X\). On the other hand, \(A\) should be seen as a set of qubit systems. Throughout the paper, we will reserve the notation \(m\) for the size of \(A\). As such \(|A|=m=\log\dim\mathcal{H}_{A}\). In other words, \(A\) is interpreted as the set \([m]\equiv\{1,\ldots,m\}\). In addition for a set \(\Lambda\subset A\), \(\overline{\Lambda}\) denotes the complement of \(\Lambda\) in \(A\), i.e., \(A\setminus\Lambda\) unless otherwise noted (sometimes, it will denote the complement in a subset of \(A\)). For any \(Q\in\mathcal{B}(\mathcal{H}_{A})\), we write \(\mathsf{supp}\,Q\subset A\) for the set of qubits on which \(Q\) acts non-trivially. Note that even though it shares the same name, \(\mathsf{supp}\,Q\) is a subset of \(A\) and has nothing to do with the orthogonal complement of the kernel of \(Q\). It will always be clear from the context which support we are talking about. We consider the following circuit model: **Definition 10**.: _A circuit \(\mathfrak{W}\) of depth \(\Delta\) and width \(m\) is a sequence \((\mathcal{E}_{t})_{t=1}^{\Delta}\), with \(\mathcal{E}_{t}\in\mathsf{S EPC}(\mathcal{H}_{A}:\mathcal{H}_{X})\), where \(A\) is a set of size \(m\) labelling the qubit systems and \(\mathcal{H}_{X}\) is an arbitrary finite dimensional Hilbert space. We denote by \([\mathfrak{W}]\) the quantum channel obtained by composing the channels \(\mathcal{E}_{t}\):_ \[[\mathfrak{W}]=\mathcal{E}_{\Delta}\circ\cdots\circ\mathcal{E}_{1}\.\] As previously mentioned, the system \(A\) should be understood as \(m\) qubits and \(X\) is introduced to model a classical system that may be used to record and process any classical information, for example obtained from quantum measurements. The distinction between classical and quantum systems is important to allow adaptive quantum circuits with hybrid classical and quantum computations. In fact, in our noise model, we allow the system \(X\) to be noise-free as classical computation can be performed with practically perfect accuracy. The circuit width is defined as the number of qubits in the systems \(A\) and the system \(X\) can have a Hilbert space of arbitrary finite dimension. 
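To make the role of the classical register concrete, here is a minimal sketch (assuming NumPy; the specific measure-and-correct layer is our own illustrative choice, not a construction from the paper) of two circuit layers of the kind allowed by Definition 10: each has product Kraus operators acting on the qubit system and on \(X\), so each is separable along the cut \(A:X\).

```python
# A minimal sketch of classically-controlled steps in the sense of Definition 10.
# Both layers below have Kraus operators of the product form K_A (x) K_X, so they are
# separable along the cut A : X. (Illustrative example; assumes numpy.)
import numpy as np

I2 = np.eye(2, dtype=complex)
PX = np.array([[0, 1], [1, 0]], dtype=complex)       # Pauli X
P0 = np.array([[1, 0], [0, 0]], dtype=complex)       # |0><0|
P1 = np.array([[0, 0], [0, 1]], dtype=complex)       # |1><1|

# Layer 1: measure the qubit A in the Z basis and flip the classical bit X on outcome 1.
layer1 = [np.kron(P0, I2), np.kron(P1, PX)]          # tensor ordering: A (x) X
# Layer 2: apply a Pauli-X correction on A, conditioned on the classical bit X being 1.
layer2 = [np.kron(I2, P0), np.kron(PX, P1)]

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def is_trace_preserving(kraus):
    s = sum(K.conj().T @ K for K in kraus)
    return np.allclose(s, np.eye(s.shape[0]))

print(is_trace_preserving(layer1), is_trace_preserving(layer2))   # True True

# Example run: qubit A starts in |1>, classical register X starts in |0>.
rho_in = np.kron(P1, P0)
rho_out = apply_channel(layer2, apply_channel(layer1, rho_in))
# The qubit is corrected back to |0> while the measurement outcome is recorded in X.
print(np.round(rho_out.real, 3))
```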
We also remark that to show our results, we do not need to assume that the system \(X\) is classical: the only thing we use is that operations involving \(A\) and \(X\) have to be separable along the cut \(A:X\). To define the geometric locality of a circuit, we introduce the connectivity graph on the set of qubits \(A\). **Definition 11** (Connectivity graph).: _Let \(A\) be a set of size \(m\). A connectivity graph \(G=(A,E)\) is an undirected graph on vertex set \(A\) and with edge set \(E\). For any \(U\subset A\), define \(\partial U=\partial_{+}U\cup\partial_{-}U\), where_ \[\partial_{-}U\equiv\{u\in U:\exists v\in A\setminus U,(u,v)\in E\}\] \[\partial_{+}U\equiv\{v\in A\setminus U:\exists u\in U,(u,v)\in E\}\.\] **Definition 12**.: _We say that \(\mathfrak{W}=(\mathcal{E}_{t})_{t=1}^{\Delta}\) with qubit systems labelled by the set \(A\) is compatible with a connectivity graph \(G\) with vertex set \(A\) if the following holds: For all \(t\), \(\mathcal{E}_{t}\) admits a Kraus representation with Kraus operators \(\{K_{A,i}^{t}\otimes K_{X,i}^{t}\}_{i}\) with \(K_{A,i}^{t}\in\mathcal{B}(\mathcal{H}_{A})\) and \(K_{X,i}^{t}\in\mathcal{B}(\mathcal{H}_{X})\) where \(K_{A,i}^{t}=\bigotimes_{j}K_{j}^{t,i}\) for some operators \(K_{j}^{t,i}\in\mathcal{B}(\mathcal{H}_{A})\) all satisfying the property: for all \(u,v\in\mathsf{supp}(K_{j}^{t,i})\), \((u,v)\) is an edge of \(G\)._ For example, for unitary circuits with only two-qubit gates, the operators \(K_{A,i}^{t}\) are tensor products of unitary operators on two qubits and the index \(i\) can be understood as selecting which two-qubit unitaries to apply as a function of the classical system \(X\). The condition of the definition requires that each such two-qubit unitary acts on neighboring vertices in the graph \(G\). Particular connectivity graphs of interest are the ones that can be embedded in a \(D\)-dimensional Euclidean space where vertices connected by an edge are close in Euclidean distance. **Definition 13**.: _A connectivity graph \(G=(A,E)\) is said to be \(c\)-local in \(D\) dimensions if there exists \(\eta:A\to\mathbb{R}^{D}\) such that_ \[\forall u,v\in A,u\neq v,\|\eta(u)-\eta(v)\|_{2}\geq 1\] _and_ \[\forall(u,v)\in E,\|\eta(u)-\eta(v)\|_{2}\leq c\.\] _We will say that \(G\) is \(O(1)\)-local in dimension \(D\) if it is \(c\)-local for some constant \(c\) independent of the other parameters in the problem, in particular the number of vertices \(|A|\)._ _We say that a circuit \(\mathfrak{W}\) is \(O(1)\)-local in dimension \(D\) if it has a connectivity graph that is \(O(1)\)-local in dimension \(D\)._ Next, we establish an important lemma that will be used extensively: it states that applying one step of a circuit with a given connectivity graph can increase entanglement between a region and its complement by at most the size of the corresponding boundary. **Lemma 14** (Small incremental entangling).: _Let \(\mathfrak{W}=(\mathcal{E}_{t})_{t}\) be a quantum circuit with a connectivity graph \(G\).
Then for any \(\rho\in\mathcal{D}(\mathcal{H}_{A}\otimes\mathcal{H}_{X})\), and every \(\mathcal{E}_{t}\), we have_ \[E_{R}(U:X\overline{U})_{\mathcal{E}_{t}(\rho)}\leq E_{R}(U:X\overline{U})_{ \rho}+3|\partial U|\] _for any \(U\subset A\), and \(\overline{U}\equiv A\setminus U\)._ Proof.: For \(U\subset A\), let \(\tau\in\mathsf{SEP}(\mathcal{H}_{U}:\mathcal{H}_{\overline{U}X})\) such that \[E_{R}(U:X\overline{U})_{\rho}=D(\rho\|\tau)\.\] Note that, by the fact that \(\mathfrak{W}\) has connectivity graph \(G\), we can rewrite the Kraus elements \(\{\Pi_{i}\}_{i}\) of \(\mathcal{E}_{t}\) as \(\Pi_{X,i}\otimes\Pi_{\mathsf{int}(U),i}\otimes\Pi_{\partial U,i}\otimes\Pi_{ \mathsf{int}(\overline{U}),i}\) such that \(\mathsf{supp}(\Pi_{\mathsf{int}(U),i})\) which we denote as \(\mathsf{int}(U)_{i}\) is a subset of \(U\), and similarly \(\mathsf{supp}(\Pi_{\mathsf{int}(\overline{U}),i})\equiv\mathsf{int}(\overline {U})_{i}\subset\overline{U}\), and \(\mathsf{supp}(\Pi_{\partial U,i})\equiv\partial U_{i}\subset\partial U\). We let \(\tau^{\prime}=\mathds{1}_{\partial U}/2^{|\partial U|}\otimes\mathrm{Tr}_{ \partial U}\,\mathcal{E}_{t}(\tau)\), then note that \(\tau^{\prime}\in\mathsf{SEP}(\mathcal{H}_{U}:\mathcal{H}_{\overline{U}X})\). To convince ourselves of this, remember that \(\tau\) can be written as \[\tau=\sum_{j}p_{j}\tau_{U}^{j}\otimes\tau_{\overline{U}X}^{j}\.\] Then for any element \(\Pi_{X,i}\otimes\Pi_{\mathsf{int}(U),i}\otimes\Pi_{\partial U,i}\otimes\Pi_{ \mathsf{int}(\overline{U}),i}\), we have \[\mathrm{Tr}_{\partial U_{i}}\left(\Pi_{X,i}\otimes\Pi_{\mathsf{int}(U),i} \otimes\Pi_{\partial U,i}\otimes\Pi_{\mathsf{int}(\overline{U}),i}\ \tau\ \Pi_{X,i}^{\dagger}\otimes\Pi_{\mathsf{int}(U),i}^{\dagger}\otimes\Pi_{ \partial U,i}^{\dagger}\otimes\Pi_{\mathsf{int}(\overline{U}),i}^{\dagger} \right)=\sum_{j}\hat{\tau}_{\mathsf{int}(U)_{i}}^{j}\otimes\hat{\tau}_{ \mathsf{int}(\overline{U})_{i}X}^{j}\] for some positive operators \(\{\hat{\tau}_{\mathsf{int}(U)_{i}}^{j}\}_{j},\{\hat{\tau}_{\mathsf{int}( \overline{U})_{i}X}^{j}\}_{j}\). Now since \(\partial U_{i}\subset\partial U\), then \(\mathrm{Tr}_{\partial U}\,\mathcal{E}_{t}(\tau)\in\mathsf{SEP}(\mathcal{H}_{U \setminus\partial U}:\mathcal{H}_{\overline{U}X\setminus\partial U})\). We then naturally obtain \(\tau^{\prime}=\mathds{1}_{\partial U}/2^{|\partial U|}\otimes\mathrm{Tr}_{ \partial U}\,\mathcal{E}_{t}(\tau)\in\mathsf{SEP}(\mathcal{H}_{U}:\mathcal{H }_{\overline{U}X})\). Write \(\rho^{\prime}\equiv\mathcal{E}_{t}(\rho)\). 
We will use the fact that for two states \(\rho_{CD},\sigma_{CD}\in\mathcal{D}(\mathcal{H}_{C}\otimes\mathcal{H}_{D})\), if \(\sigma_{CD}=\sigma_{C}\otimes\sigma_{D}\), then the relative entropy satisfies (Proposition 2 of [1]) \[D(\rho_{CD}\|\sigma_{CD})=D(\rho_{C}\|\sigma_{C})+I(C:D)_{\rho}+D(\rho_{D}\| \sigma_{D})\.\] Applying this relation to \(\rho^{\prime}\) and \(\tau^{\prime}\), we get \[E_{R}(U:\overline{U}X)_{\rho^{\prime}} \leq D(\rho^{\prime}\|\tau^{\prime})\] \[=D(\mathrm{Tr}_{\partial U}\,\rho^{\prime}\|\,\mathrm{Tr}_{ \partial U}\,\tau^{\prime})+I(\overline{\partial U}X:\partial U)_{\rho^{ \prime}}+D(\rho^{\prime}_{\partial U}\|\tau^{\prime}_{\partial U})\] \[\leq D(\mathrm{Tr}_{\partial U}\,\rho^{\prime}\|\,\mathrm{Tr}_{ \partial U}\,\tau^{\prime})+3|\partial U|\] \[\leq D(\mathcal{E}_{t}(\rho)\|\mathcal{E}_{t}(\tau))+3|\partial U|\] \[\leq D(\rho\|\tau)+3|\partial U|\] \[=E_{R}(U:X\overline{U})_{\rho}+3|\partial U|\,\] where we have used the fact that \(I(\overline{\partial U}X:\partial U)_{\rho^{\prime}}\leq 2|\partial U|\) and \(D(\rho^{\prime}_{\partial U}\|\tau^{\prime}_{\partial U})\leq|\partial U|\). ### Quantum codes We introduce some basic definitions about quantum error correcting codes. We refer to [10] for more details. Let \(k,d\leq n\) be positive integers. **Definition 15**.: _The \(n\)-qubit Pauli group \(\mathcal{P}_{n}\) is generated by \(\{i,X,Z\}^{\otimes n}\)._ **Definition 16**.: _A code \(\mathcal{C}\) is a subspace of \((\mathbb{C}^{2})^{\otimes n}\) has parameters \((\!(n,k,d)\!)\) if_ 1. \(\mathcal{C}\cong(\mathbb{C}^{2})^{\otimes k}\)__ 2. \(\forall|\psi\rangle,|\psi^{\prime}\rangle\in\mathcal{C},\forall P\in\mathcal{P }_{n},|\,\mathsf{supp}\,P|<d,\langle\psi|P|\psi^{\prime}\rangle=c(P)\langle \psi|\psi^{\prime}\rangle\)_, for some_ \(c(P)\in\mathbb{C}\)__ _._ **Definition 17**.: _A code \(\mathcal{C}\subset(\mathbb{C}^{2})^{\otimes n}\) is said to be a stabilizer code if there exists an Abelian subgroup \(S\subset\mathcal{P}_{n}\) not containing \(-I\), such that for all \(|\psi\rangle\in\mathcal{H}\),_ \[|\psi\rangle\in\mathcal{C}\quad\Leftrightarrow\quad\forall M\in S,M|\psi \rangle=|\psi\rangle\.\] _Let \(\{M_{i}\}_{i\in\{1,\ldots,n-k\}}\) be independent generators for the group \(S\), without loss of generality we take \(M_{i}\) to be Hermitian. For any state in \(\mathcal{H}\), we write \(s_{i}\in\{-1,+1\}\) the outcome of the measurement of \(M_{i}\). The vector \(s=(s_{i})_{i\in\{1,\ldots,n-k\}}\) is called the syndrome of this state. The Hilbert space then splits as a direct sum of syndrome subspaces: \(\mathcal{H}=\bigoplus\limits_{s\in\{-1,+1\}^{n-k}}\mathcal{C}_{s}\), where \(\mathcal{C}_{s}\) is defined by \(|\psi\rangle\in\mathcal{C}_{s}\Leftrightarrow\forall i\in\{1,\ldots,n-k\},M_ {i}|\psi\rangle=s_{i}|\psi\rangle\). For stabilizer codes, we use the notation \([\![n,k,d]\!]\) when the subspace \(\mathcal{C}\) has dimension \(2^{k}\) and minimum distance \(d\)._ **Definition 18**.: _Let \(\mathcal{C}\) be a code. Then a region \(\Lambda\subset[n]\) is said to be correctable if there exists \(\mathcal{R}_{\Lambda}\) such that \(\forall\rho\in\mathcal{C},\mathcal{R}_{\Lambda}\circ\mathrm{Tr}_{\Lambda}( \rho)=\rho\)._ The following standard lemma shows that any region with size at most \(d-1\) is correctable. **Lemma 19**.: _Let \(\mathcal{C}\) be a code on \(n\) qubits. 
Then any region \(\Lambda\subset[n]\) with \(|\Lambda|<d\) is correctable._ The next lemma shows that for any state in the code, the reduced state on a correctable region is independent of the code state. This even holds in an approximate sense. **Lemma 20** (Approximate indistinguishability).: _Let \(\epsilon\in[0,1]\), \(A^{\prime}\) be an \(n\)-qubit system and let \(\mathcal{C}\) be a code such that for any region \(\Lambda\subset A^{\prime}\) of size \(|\Lambda|<d\) there exists \(\mathcal{R}\) such that for all \(\rho\) supported on \(\mathcal{C}\), we have_ \[F(\mathcal{R}\circ\mathrm{Tr}_{\Lambda}(\rho),\rho)\geq 1-\epsilon\.\] _Then there exists \(\omega_{\Lambda}\in\mathcal{D}(\mathcal{H}_{\Lambda})\) such that for any state \(\rho\) supported on \(\mathcal{C}\), the following is satisfied_ \[F(\omega_{\Lambda},\rho_{\Lambda})\geq 1-\frac{3\epsilon}{2}\.\] Proof.: From the recovery condition, and Lemma 4, we can verify that for any purification \(|\rho\rangle_{A^{\prime}R}\) of \(\rho_{A^{\prime}}\) satisfies \[F(\mathcal{I}_{R}\otimes\mathcal{R}\circ\mathrm{Tr}_{\Lambda}(|\rho\rangle \langle\rho|),|\rho\rangle\langle\rho|)\geq 1-\frac{3}{2}\epsilon\.\] Then from Theorem 3 of [10], we can verify that there exists a state \(\omega_{\Lambda}\) such that for all states \(\rho_{A^{\prime}R}\) the following is satisfied \[\sqrt{1-F(\omega_{\Lambda}\otimes\rho_{R},\rho_{\Lambda R})}\leq\sqrt{\frac{3 \epsilon}{2}}\.\] This gives \(F(\omega_{\Lambda}\otimes\rho_{R},\rho_{\Lambda R})\geq 1-\frac{3\epsilon}{2}\), and by the monotonicity of the fidelity we obtain \(F(\omega_{\Lambda},\rho_{\Lambda})\geq 1-\frac{3\epsilon}{2}\). ## 3 Lower bounds for error correction in low dimensions In this section, we establish lower bounds on the size of geometrically local circuits for preparing a code state for a quantum code with a large minimum distance and measuring the syndrome of such a stabilizer code. To define a quantum circuit that implements such tasks, we have to choose a subset of qubits \(A^{\prime}\subset A\) that contain the desired outcome. Recall that \(A\) denotes the set of all \(m\) qubits used by the circuit and \(A^{\prime}\) will be smaller, typically of size \(n\). ### Entropic properties for code states **Lemma 21**.: _Let \(\mathcal{C}\) be a \((\!(n,k,d)\!)\) code and \(A^{\prime}\) be a set of size \(n\) labelling \(n\) qubits. Then for any region \(\Lambda\subset A^{\prime}\) such that \(|\Lambda|<d\), and for any state \(\rho\in\mathcal{D}(\mathcal{H}_{A^{\prime}})\) that has a support included in \(\mathcal{C}\), we have_ \[I(\Lambda)\overline{\Lambda})_{\rho}=S(\Lambda)_{\rho}\,\] _where \(\overline{\Lambda}\equiv A^{\prime}\setminus\Lambda\)._ Proof.: Let \(|\rho\rangle\in\mathcal{H}_{R}\mathop{\otimes}\mathcal{H}_{A^{\prime}}\) be a purification of \(\rho_{A^{\prime}}\). We write \(\rho_{RA^{\prime}}=|\rho\rangle\langle\rho|\). As Lemma 19 guarantees the existence of \(\mathcal{R}\) a recovery map, we have from Lemma 4 that \(F(\rho_{RA^{\prime}},\mathcal{I}_{R}\otimes\mathcal{R}\circ\mathrm{Tr}_{ \Lambda}(\rho_{RA^{\prime}}))=1\). From the right-monotonicity of the coherent information, we have \[I(R)A^{\prime})_{\rho}\geq I(R)\overline{\Lambda})_{\rho}\geq I(R)A^{\prime} )_{\mathcal{I}_{R}\otimes\mathcal{R}(\rho_{R}\overline{\Lambda})}=I(R)A^{ \prime})_{\rho}\.\] One can then verify that \(I(R)\overline{\Lambda})_{\rho}=I(R)A^{\prime})_{\rho}\) can be rewritten as \(S(\Lambda)_{\rho}=I(\Lambda)\overline{\Lambda})_{\rho}\). 
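Lemma 21 can be checked numerically on a very small example. The sketch below (assuming NumPy; the choice of the \(\llbracket 4,2,2\rrbracket\) code with stabilizers \(XXXX\) and \(ZZZZ\) is ours, purely for illustration) verifies that, for the maximally mixed code state and a single-qubit region \(\Lambda\) (so \(|\Lambda|<d\)), the coherent information of \(\Lambda\) given its complement indeed equals \(S(\Lambda)\).

```python
# Numerical check of Lemma 21 for the [[4,2,2]] code (stabilizers XXXX and ZZZZ).
# For the maximally mixed code state rho and Lambda = {qubit 0} (|Lambda| < d = 2),
# the coherent information S(complement) - S(all) should equal S(Lambda).
# Illustrative sketch; assumes numpy.
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    return reduce(np.kron, ops)

XXXX = kron_all([X] * 4)
ZZZZ = kron_all([Z] * 4)
I16 = np.eye(16, dtype=complex)

# Projector onto the code space, and the maximally mixed code state (k = 2 logical qubits).
Pi = (I16 + XXXX) @ (I16 + ZZZZ) / 4
rho = Pi / np.trace(Pi).real

def entropy(m):
    ev = np.linalg.eigvalsh(m)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

r = rho.reshape(2, 8, 2, 8)                    # split qubit 0 from qubits 1,2,3
rho_L = np.trace(r, axis1=1, axis2=3)          # reduced state on Lambda = {0}
rho_Lbar = np.trace(r, axis1=0, axis2=2)       # reduced state on the complement {1,2,3}

S_L, S_Lbar, S_all = entropy(rho_L), entropy(rho_Lbar), entropy(rho)
coherent_info = S_Lbar - S_all                 # coherent information of Lambda given its complement
print(S_L, coherent_info, S_all)               # expected: 1.0 1.0 2.0
```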
We can then show that **Lemma 22**.: _Let \(\mathcal{C}\) be a \((\!(n,k,d)\!)\) code and \(A^{\prime}\) be a set of size \(n\) labelling \(n\) qubits. Then for any partition \(\{\Lambda_{i}\}_{i}\) of \(A^{\prime}\) such that \(|\Lambda_{i}|<d\), we have, for any state \(\rho\in\mathcal{D}(\mathcal{H}_{A^{\prime}})\) that has a support included in \(\mathcal{C}\)_ \[\sum_{i}E_{R}(\Lambda_{i}:\overline{\Lambda_{i}})_{\rho}\geq k\,\] _where \(\overline{\Lambda}\equiv A^{\prime}\setminus\Lambda\)._ Proof.: We have, from Lemma 21 \[\sum_{i}I(\Lambda_{i})\overline{\Lambda_{i}})_{\rho}=\sum_{i}S(\Lambda_{i})_{ \rho}\.\] Write \(\sigma=2^{-k}\Pi_{\mathcal{C}}\) with \(\Pi_{\mathcal{C}}\) the projector on \(\mathcal{C}\). Since every \(\Lambda_{i}\) satisfies \(|\Lambda_{i}|<d\), we can use the indistinguishability of quantum codes Lemma 20 (with \(\epsilon=0\)), and we have \(\rho_{\Lambda_{i}}=\sigma_{\Lambda_{i}}\). Further, the coherent information lower bounds the REE (Proposition 8). This gives: \[\sum_{i}E_{R}(\Lambda_{i}:\overline{\Lambda_{i}})_{\rho}\geq\sum_{i}I(\Lambda_ {i})\overline{\Lambda_{i}})_{\rho}=\sum_{i}S(\Lambda_{i})_{\sigma}\geq S(A^{ \prime})_{\sigma}=k\,\] where the last inequality stems from the subadditivity of the entropy. ### Lower bounds in terms of the minimum distance In this part, we show how Lemma 22 implies lower bounds on the depth of circuits preparing code states. For simplicity of exposition, in what follows we focus on \(D\)-dimensional Euclidean spaces though this can be generalized to more complicated geometries. In what follows, \(|0\rangle\langle 0|_{X}\) denotes a fixed pure state in \(\mathcal{H}_{X}\) and \(|0\rangle\langle 0|_{A^{\prime}}\) for \(A^{\prime}\subset A\) denotes the product state \(|0\rangle\langle 0|\) on all qubits of \(A^{\prime}\). **Theorem 23** (Encoding circuits).: _Let \(\mathcal{C}\) be an \((\!(n,k,d)\!)\) quantum code. Let \(\mathfrak{W}\) be a \(D\)-dimensional \(O(1)\)-local quantum circuit of depth \(\Delta\) and width \(m\geq n\), and let \(A^{\prime}\subset A\) be a subset of the qubits of \(\mathfrak{W}\) of size \(n\). Assume that the output state \(\rho_{AX}=[\mathfrak{W}](|0\rangle\langle 0|_{A}\otimes|0\rangle\langle 0|_{X})\) of the circuit is such that \(\rho_{A^{\prime}}\) is fully supported on \(\mathcal{C}\). Then, we have_ \[\Delta\in\Omega\left(\frac{kd^{1/D}}{m}\right)\.\] We note that the proof shows more generally that for any circuit \(\mathfrak{W}\) with connectivity graph \(G\) and any partition \(\{\Gamma_{i}\}_{i=1}^{\ell}\) of \(A\), we have \(\Delta\geq\frac{k}{3\sum_{i=1}^{\ell}|\partial\Gamma_{i}|}\). Proof.: On one hand, by Lemma 22, any partition \(\{\Gamma_{i}\}_{i=1}^{\ell}\) of \(A\) such that \(|\Gamma_{i}|<d\) induces a partition \(\{\Lambda_{i}\}_{i=1}^{\ell}\) of \(A^{\prime}\), with \(\Lambda_{i}=\Gamma_{i}\cap A^{\prime}\), and \(|\Lambda_{i}|<d\). 
We can then guarantee that there exists \(i^{\prime}\) such that \[E_{R}(\Lambda_{i^{\prime}}:A^{\prime}\setminus\Lambda_{i^{\prime}})_{\rho}\geq k/\ell\.\] On the other hand, the Small Incremental Entangling Lemma 14 guarantees that for any \(\Gamma_{i}\) we have \[E_{R}(\Gamma_{i}:\overline{\Gamma_{i}}X)_{\rho} =E_{R}(\Gamma_{i}:\overline{\Gamma_{i}}X)_{\mathcal{E}_{\Delta}\circ\dots\circ\mathcal{E}_{1}(|0\rangle\langle 0|_{A}\otimes|0\rangle\langle 0|_{X})}\] \[\leq 3\Delta|\partial\Gamma_{i}|+E_{R}(\Gamma_{i}:\overline{\Gamma_{i}}X)_{|0\rangle\langle 0|_{A}\otimes|0\rangle\langle 0|_{X}}\] \[=3\Delta|\partial\Gamma_{i}|\,\] where we recall the notation \(\overline{\Gamma_{i}}=A\setminus\Gamma_{i}\) and that \(|0\rangle\langle 0|_{A}\) refers to the product state where all qubits of \(A\) are set to \(|0\rangle\langle 0|\). By the monotonicity of the REE under separable operations, we can now obtain \[3\Delta|\partial\Gamma_{i^{\prime}}| \geq E_{R}(\Gamma_{i^{\prime}}:\overline{\Gamma_{i^{\prime}}}X)_{\rho}\] \[\geq E_{R}(\Lambda_{i^{\prime}}:A^{\prime}\setminus\Lambda_{i^{\prime}})_{\rho}\] \[\geq k/\ell\.\] Since the circuit is \(D\)-dimensional, there always exists a partition \(\{\Gamma_{i}\}_{i=1}^{\ell}\) of a \(D\)-dimensional graph such that \(|\Gamma_{i}|\leq\lambda\), \(|\partial\Gamma_{i}|\in O(\lambda^{(D-1)/D})\), \(\ell\in O(m/\lambda)\), for any \(\lambda\); see Lemma 34. Picking \(\lambda=d-1\) and applying the inequality we obtained previously, we have \(O(\Delta\lambda^{(D-1)/D})\geq k\lambda/m\), or \(\Delta\in\Omega(\frac{k\lambda^{1/D}}{m})\). Since \(\lambda=d-1\), we obtain the desired result. Next, we move to the problem of syndrome extraction for stabilizer codes. **Theorem 24** (Syndrome extracting circuit).: _Let \(\mathcal{C}\) be a \(\llbracket n,k,d\rrbracket\) stabilizer code. Assume that \(A^{\prime}\subset A\) and \(\mathfrak{W}=(\mathcal{E}_{t})_{t=1}^{\Delta}\) is a circuit such that for any \(\rho\in\mathcal{D}(\mathcal{H}_{A^{\prime}})\) we have_ \[\operatorname{Tr}_{\overline{A^{\prime}}}\circ[\mathfrak{W}]\left(\rho_{A^{\prime}}\otimes|0\rangle\langle 0|_{\overline{A^{\prime}}}\otimes|0\rangle\langle 0|_{X}\right)=\sum_{s}\Pi_{s}\rho\Pi_{s}\otimes|s\rangle\langle s|_{X}\,\] _where \(\Pi_{s}\) is the projector onto the syndrome subspace \(\mathcal{C}_{s}\). Then \(\Delta\) obeys_ \[\Delta\in\Omega\left(\frac{kd^{1/D}}{m}\right)\.\] Proof.: The Hilbert space on \(n\) qubits naturally splits as \(\mathcal{H}=\bigoplus_{s\in\{-1,+1\}^{n-k}}\mathcal{C}_{s}\). Applying the circuit \(\mathfrak{W}\) to the state \(|0\rangle\langle 0|^{\otimes m}\), we obtain after tracing out \(\overline{A^{\prime}}\) the state \(\sum_{s\in\{-1,+1\}^{n-k}}\Pi_{s}|0\rangle\langle 0|^{\otimes n}\Pi_{s}\otimes|s\rangle\langle s|\) by assumption. Note that for any \(s\in\{-1,+1\}^{n-k}\), there exists an operator \(P_{s}\in\mathcal{P}_{n}\) (which can be seen as a correction operator for error syndrome \(s\)) such that for any \(|\psi\rangle\in\mathcal{C}_{s}\), we have \(P_{s}|\psi\rangle\in\mathcal{C}\). We add one step to this circuit: a recovery operation with Kraus elements \(\{P_{s}\otimes|0\rangle\langle s|\}_{s\in\{-1,+1\}^{n-k}}\). Note that this map is in \(\mathsf{SEPC}(\mathcal{H}_{A}:\mathcal{H}_{X})\) and as a result we obtain a circuit of depth \(\Delta+1\). The state obtained on the register \(A^{\prime}\) is then \(\sum_{s}P_{s}\Pi_{s}|0\rangle\langle 0|^{\otimes n}\Pi_{s}P_{s}\).
As \(P_{s}\Pi_{s}|0\rangle^{\otimes n}\in\mathcal{C}\) for any \(s\), this state is supported on \(\mathcal{C}\) and we can apply Theorem 23: \[\Delta+1\in\Omega(kd^{1/D}/m)\.\] ## 4 Lower bounds for error correction for noisy circuits In this section, instead of making an assumption on the minimum distance of the code, we make an assumption on the logical error rate that is achieved by the error correction module. For a given noise model, we say that a circuit defining an error correction module has logical error rate \(\delta\) if after the (ideal) error correction module is applied, the output remains \(\delta\)-close to the correct state. We show that for any quantum code with an error correction module that is geometrically local, the memory overhead has to grow when the desired logical error rate decreases. For this section, it is convenient to describe a code \(\mathcal{C}\) by an encoding isometry \(U:(\mathbb{C}^{2})^{\otimes k}\rightarrow(\mathbb{C}^{2})^{\otimes n}\), i.e., \(\mathcal{C}=\operatorname{Im}(U)\). We also introduce the systems \(R\) and \(L\) corresponding to \(k\) qubits and let \(\Phi_{RL}\in\mathcal{D}(\mathcal{H}_{R}\otimes\mathcal{H}_{L})\) be a maximally entangled state. In addition let \(\mathcal{U}\in\mathsf{CPTP}(\mathcal{H}_{L},\mathcal{H}_{A^{\prime}})\) be the encoding quantum channel that maps the logical information to the code space: \[\mathcal{U}(\cdot)\equiv U\cdot U^{\dagger}\.\] We also define the preparation map \(\mathcal{P}\in\mathsf{CPTP}(\mathbb{C},\mathcal{H}_{\overline{A^{\prime}}} \otimes\mathcal{H}_{X})\) as \[\mathcal{P}(\cdot)\equiv\operatorname{Tr}(\cdot)|0\rangle\langle 0|_{\overline{A^{ \prime}}}\otimes|0\rangle\langle 0|_{X}\.\] **Definition 25**.: _An error-correction module for the code defined by the isometry \(U\) is a family of circuit \((\mathfrak{W}_{j})_{j=1}^{J}\) with \(\mathfrak{W}_{j}=(\mathcal{E}_{t,j})_{t=1}^{\Delta}\) all acting on the same systems \(AX\), and a choice of subset \(A^{\prime}\subset A\) of size \(n\). We say that such a module has logical error rate \(\delta\) if_ \[F\left(\mathcal{I}_{R}\otimes\left(\operatorname{Tr}_{\overline{A^{\prime}}X} \circ[\mathfrak{W}]_{p}\circ(\mathcal{U}\otimes\mathcal{P})\right)(\Phi_{RL}),\mathcal{I}_{R}\otimes\mathcal{U}(\Phi_{RL})\right)\geq 1-\delta\,\] _where \([\mathfrak{W}]_{p}\) is the map obtained by applying noise before each circuit \(\mathfrak{W}_{j}\) and composing all the circuits:_ \[[\mathfrak{W}]_{p}=\bigcirc_{j=1}^{J}(\mathcal{E}_{\Delta,j}\circ...\circ \mathcal{E}_{1,j}\circ(\mathcal{N}_{p}^{\otimes m}\otimes\mathcal{I}_{X}))_{i}\,\] _with \(\mathcal{N}_{p}\) the \(p\)-depolarizing channel defined as_ \[\mathcal{N}_{p}(\rho)=(1-p)\rho+p\operatorname{Tr}(\rho)\mathds{1}/2\.\] _The number \(\Delta\) is called the depth of the error correction module._ ### Lower bound on entanglement In this section we show how the existence of a good error-correction module implies that the codestates of \(\mathcal{C}\) are highly entangled. The following lemma can be thought of as analogous to Lemma 22 where instead of imposing a constraint on the minimum distance of the code, we assume that the code can correct errors with good accuracy. 
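Its proof relies on splitting the \(m\)-qubit depolarizing noise into a term that completely erases a chosen region and a remainder channel. The following minimal sketch (assuming NumPy; the two-qubit example and the helper functions are our own, chosen only for illustration) checks this decomposition numerically for a single-qubit region.

```python
# Numerical check (illustrative, assumes numpy) of the decomposition
#   N_p (x) N_p  =  p * N_Lambda  +  (1 - p) * M        with Lambda = {qubit 0},
# where N_Lambda completely erases qubit 0 (replaces it by 1/2) while applying
# depolarizing noise to qubit 1, and M is the remainder channel (here id (x) N_p).
import numpy as np

def depolarize(block, p):
    """Single-qubit p-depolarizing map N_p(rho) = (1-p) rho + p Tr(rho) I/2 (extended linearly)."""
    return (1 - p) * block + p * np.trace(block) * np.eye(2) / 2

def erase(block):
    """Complete erasure of a qubit: replace it by the maximally mixed state."""
    return np.trace(block) * np.eye(2) / 2

def apply_on_qubit(single_qubit_map, rho2, qubit):
    """Apply a single-qubit linear map to one qubit of a two-qubit operator."""
    r = rho2.reshape(2, 2, 2, 2)              # indices (i0, i1, j0, j1)
    out = np.zeros_like(r)
    for a in range(2):
        for b in range(2):
            if qubit == 0:
                out[:, a, :, b] = single_qubit_map(r[:, a, :, b])
            else:
                out[a, :, b, :] = single_qubit_map(r[a, :, b, :])
    return out.reshape(4, 4)

p = 0.3
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho)                           # a random two-qubit density matrix

noisy_1 = apply_on_qubit(lambda b: depolarize(b, p), rho, 1)      # N_p on qubit 1
lhs = apply_on_qubit(lambda b: depolarize(b, p), noisy_1, 0)      # N_p on both qubits
n_lambda = apply_on_qubit(erase, noisy_1, 0)                      # erase qubit 0, N_p on qubit 1
remainder = noisy_1                                               # M = id on qubit 0, N_p on qubit 1
print(np.allclose(lhs, p * n_lambda + (1 - p) * remainder))       # True
```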
**Lemma 26**.: _Using the same notation as in the paragraph preceding Definition 25, assume there exists a decoding map \(\mathcal{D}\in\mathsf{CPTP}(\mathcal{H}_{A}\otimes\mathcal{H}_{X},\mathcal{ H}_{A^{\prime}})\) such that_ \[F\left(\mathcal{I}_{R}\otimes\left(\mathcal{D}\circ(\mathcal{N}_{p}^{\otimes m }\otimes\mathcal{I}_{X})\circ(\mathcal{U}\otimes\mathcal{P})\right)(\Phi_{RL} ),\mathcal{I}_{R}\otimes\mathcal{U}(\Phi_{RL})\right)\geq 1-\epsilon\.\] _Then for any partition \(\{\Lambda_{i}\}_{i},\Lambda_{i}\subset A^{\prime}\) of \(A^{\prime}\), we have_ \[\sum_{i}E_{R}(\Lambda_{i}:\overline{\Lambda_{i}})_{\mathcal{I}_{R}\otimes \mathcal{U}(\Phi_{RL})}\geq k-\sum_{i}2\sqrt{\epsilon/p^{|\Lambda_{i}|}}| \Lambda_{i}|+g(\sqrt{\epsilon/p^{|\Lambda_{i}|}})\,\] _where \(\overline{\Lambda_{i}}=A^{\prime}\setminus\Lambda_{i}\)._ Proof.: We can write \(\mathcal{N}_{p}^{\otimes m}=p^{|\Lambda_{i}|}\mathcal{N}_{\Lambda_{i}}+(1-p^{ |\Lambda_{i}|})\mathcal{M}_{i}\), with \(\mathcal{N}_{\Lambda_{i}}=(\mathds{1}_{\Lambda_{i}}/2^{|\Lambda_{i}|}\operatorname {Tr}_{\Lambda_{i}})\otimes\mathcal{N}^{\otimes|\overline{\Lambda_{i}}|}\) and \(\mathcal{M}_{i}\) some quantum channel. Using the assumed bound on the fidelity together with Lemma 29, we get \[F\left(\mathcal{I}_{R}\otimes(\mathcal{D}\circ(\mathcal{N}_{\Lambda_{i}} \otimes\mathcal{I}_{X})\circ(\mathcal{U}\otimes\mathcal{P}))\left(\Phi_{RL} \right),\mathcal{I}_{R}\otimes\mathcal{U}(\Phi_{RL})\right)\geq 1-\epsilon/p^{| \Lambda_{i}|}. \tag{1}\] Let us define the state \(\tau_{RA^{\prime}X}=\mathcal{I}_{R}\otimes\mathcal{U}(\Phi_{RL})\otimes|0 \rangle\langle 0|_{X}\). Then we have \[S(\Lambda_{i})_{\tau}=I(\Lambda_{i})R\overline{\Lambda_{i}})_{\tau} =I(\Lambda_{i})\overline{\Lambda_{i}})_{\tau}+I(\Lambda_{i}:R| \overline{\Lambda_{i}})_{\tau}\] \[=I(\Lambda_{i})\overline{\Lambda_{i}})_{\tau}+I(\Lambda_{i}:R|X \overline{\Lambda_{i}})_{\tau}\.\] In order to upper bound, \(I(\Lambda_{i}:R|X\overline{\Lambda_{i}})_{\tau}\), we use the inequality (1). For that, consider the recovery channel \(\mathcal{R}\in\mathsf{CPTP}(\mathcal{H}_{\overline{\Lambda_{i}}}\otimes \mathcal{H}_{X},\mathcal{H}_{A^{\prime}}\otimes\mathcal{H}_{X})\) defined by \(\mathcal{R}(\omega_{\overline{\Lambda_{i}}X})=\mathcal{D}(\mathds{1}_{\Lambda _{i}}/2^{|\Lambda_{i}|}\otimes\mathcal{N}_{p}^{\otimes|\overline{A^{\prime}}|} (|0\rangle\langle 0|_{\overline{A^{\prime}}})\otimes\mathcal{N}_{p}^{\otimes| \Lambda_{i}|}(\omega_{\overline{\Lambda_{i}}X}))\otimes|0\rangle\langle 0|_{X}\). Then it is easy to see that \[F(\mathcal{I}_{R}\otimes\mathcal{R}(\operatorname{Tr}_{\Lambda_{i}}\circ( \mathcal{U}\otimes\mathcal{P})(\Phi_{RL})),\tau_{RA^{\prime}X})\geq 1-\epsilon/p^{| \Lambda_{i}|}\.\] By Lemma 30, we obtain \[S(\Lambda_{i})_{\tau}\leq I(\Lambda_{i})\overline{\Lambda_{i}})_{\tau}+2\sqrt {\epsilon/p^{|\Lambda_{i}|}}|\Lambda_{i}|+g(\sqrt{\epsilon/p^{|\Lambda_{i}|}} )\.\] Equivalently \[I(\Lambda_{i})\overline{\Lambda_{i}})_{\tau}\geq S(\Lambda_{i})_{\tau}-2\sqrt {\epsilon/p^{|\Lambda_{i}|}}|\Lambda_{i}|-g(\sqrt{\epsilon/p^{|\Lambda_{i}|}} )\equiv S(\Lambda_{i})_{\tau}-h_{|\Lambda_{i}|}\.\] However, as it stands this bound is not very restrictive, as \(S(\Lambda_{i})_{\tau}\) could take any value. To resolve this issue, we sum over the individual contributions, which yields \[\sum_{i}I(\Lambda_{i})\overline{\Lambda_{i}})_{\tau} \geq\sum_{i}S(\Lambda_{i})_{\tau}-h_{|\Lambda_{i}|}\] \[\geq k-\sum_{i}h_{|\Lambda_{i}|}\.\] Since \(\sum_{i}S(\Lambda_{i})\geq S(A^{\prime})=k\). 
The result then follows from Lemma 31. ### Upper bound on entanglement We now describe how a local noisy circuit is limited in its ability to generate highly entangled states. We will later leverage this element in our proof of our main theorem: to preserve information, the circuit needs to produce entangled states, which it cannot, due to its locality. More specifically, this lemma formalizes the following: as the system is affected by noise, a region \(\Gamma\subset A\) can only recover \(O(|\partial\Gamma|)\) qubits of entanglement between any two noise layers. Naturally, this lower bounds the ability of any \(\Lambda\subset\Gamma\) to be entangled with the rest of the system. The motivation behind distinguishing \(\Lambda\) and \(\Gamma\) is the following. Later, we will take part of \(\mathcal{C}\) to live on \(\Lambda\), which is therefore entangled with the rest of the system. However it is hard to partition the data qubits in a manner that guarantees a small boundary to each region, as we have no information regarding how they are arranged vis-a-vis the ancillary qubits. Indeed, it is easier to partition the whole system, and \(\Lambda\subset\Gamma\) inherit the \(O(|\partial\Gamma|)\) bound on the rate at which it can be entangled with the rest of the system. **Lemma 27**.: _Let \(\rho_{RAX}\in\mathcal{D}(\mathcal{H}_{R}\otimes\mathcal{H}_{A}\otimes \mathcal{H}_{X})\) be a state on \(RAX\) of the form_ \[\rho_{RAX}=(\mathcal{I}_{R}\otimes\mathcal{L}\circ(\mathcal{N}_{p}^{\otimes m }\otimes\mathcal{I}_{X}))(\sigma_{RAX})\,\] _for some arbitrary state \(\sigma_{RAX}\) and \(\mathcal{L}\in\mathsf{CPTP}(\mathcal{H}_{A}\otimes\mathcal{H}_{X})\) be the quantum channel representing a quantum circuit of depth \(\Delta\). Let \(A^{\prime}\subset A\) be an arbitrary subset of \(A\) and assume that \(F(\rho_{RA^{\prime}},\xi_{RA^{\prime}})\geq 1-\delta\), where \(\xi_{RA^{\prime}}\in\mathcal{D}(\mathcal{H}_{R}\otimes\mathcal{H}_{A^{\prime}})\) is a pure state on \(RA^{\prime}\). Then for any \(\Gamma\subset A\) qubits, we have_ \[3\Delta|\partial\Gamma|\geq E_{R}(\Lambda:\overline{\Lambda})_{\xi}-\sqrt{ \delta/p^{|\Gamma|}}|\Lambda|-g(\sqrt{\delta/p^{|\Gamma|}})\] _where \(\Lambda\equiv\Gamma\cap A^{\prime}\), \(\overline{\Lambda}\equiv A^{\prime}\setminus\Lambda\)._ Proof.: We write \(\overline{\Gamma}=A\setminus\Gamma\), \(\Lambda\equiv\Gamma\cap A^{\prime}\) and \(\overline{\Lambda}\equiv A^{\prime}\setminus\Lambda\). We can write \(\rho_{RAX}\) as \[\rho_{RAX}=(\mathcal{I}_{R}\otimes\mathcal{L}\circ(\mathcal{N}_ {p}^{\otimes m}\otimes\mathcal{I}_{X}))(\sigma_{RAX}) =p^{|\Gamma|}(\mathcal{I}_{R}\otimes\mathcal{L}\circ(\mathcal{N }_{\Gamma}\otimes\mathcal{I}_{X}))(\sigma_{RAX})\] \[+(1-p^{|\Gamma|})(\mathcal{I}_{R}\otimes\mathcal{L}\circ( \mathcal{M}_{\Gamma}\otimes\mathcal{I}_{X}))(\sigma_{RAX})\] where \(\mathcal{N}_{\Gamma}=(1/2^{|\Gamma|}\operatorname{Tr}_{\Gamma})\otimes \mathcal{N}_{p}^{\otimes|\overline{\Gamma}|}\), \(\mathcal{M}_{\Gamma}\in\mathsf{CPTP}(\mathcal{H}_{A})\) is some quantum channel. For the sake of readability, we write \(\rho^{\mathcal{L}\circ\mathcal{N}_{\Gamma}}\equiv(\mathcal{I}_{R}\otimes \mathcal{L}\circ\mathcal{N}_{\Gamma})(\sigma_{RAX})\). 
We can now show, by using Lemma 29, that the state that suffered complete erasure still has to end up close to the target state: \[F(\rho_{RA^{\prime}}^{\mathcal{L}\circ\mathcal{N}_{\Gamma}},\xi_{RA^{\prime}})\geq 1-\delta/p^{|\Gamma|}\.\] From Properties 1 and 2 of Proposition 8, we are able to show that \(\Gamma\) will be entangled with the rest, because \(\Lambda\) is too, and this will lower bound the entanglement in \(\rho^{\mathcal{L}\circ\mathcal{N}_{\Gamma}}\): \[E_{R}(\Gamma:\overline{\Gamma}X)_{\rho^{\mathcal{L}\circ\mathcal{N}_{\Gamma}}}\geq E_{R}(\Lambda:\overline{\Lambda})_{\rho^{\mathcal{L}\circ\mathcal{N}_{\Gamma}}}\geq E_{R}(\Lambda:\overline{\Lambda})_{\xi}-\sqrt{\delta/p^{|\Gamma|}}|\Lambda|-g(\sqrt{\delta/p^{|\Gamma|}})\.\] On the other hand, from Lemma 14 and the fact that \(\mathcal{L}\) has depth at most \(\Delta\), we have \(E_{R}(\Gamma:\overline{\Gamma}X)_{\rho^{\mathcal{L}\circ\mathcal{N}_{\Gamma}}}\leq 3\Delta|\partial\Gamma|+E_{R}(\Gamma:\overline{\Gamma}X)_{\mathcal{N}_{\Gamma}(\sigma)}=3\Delta|\partial\Gamma|\). We therefore obtain \[3\Delta|\partial\Gamma|\geq E_{R}(\Lambda:\overline{\Lambda})_{\xi}-\sqrt{\delta/p^{|\Gamma|}}|\Lambda|-g(\sqrt{\delta/p^{|\Gamma|}})\.\] ### Overhead theorem Here we prove our main result, which consists mainly in combining Lemma 26 and Lemma 27 harmoniously. **Theorem 28**.: _Let \(\mathcal{C}\) be a quantum code encoding \(k\) qubits into \(n\) qubits. For an error correction module having a \(D\)-dimensional \(O(1)\)-local quantum circuit satisfying Definition 25, with width \(m\) and achieving a logical error rate \(\delta\), we have_ \[\frac{m}{k}\in\Omega\left(\min\left\{\frac{1}{\Delta}\left(\frac{\log(1/\delta)}{\log(1/p)}\right)^{1/D},\frac{1}{\delta^{1/8}}\right\}\right)\.\] We note that the \(\Omega\) notation hides a constant that depends only on the dimension \(D\). Observe that we are interested in the regime where the logical error rate \(\delta\) goes to zero, \(p\) is constant and \(\Delta\) is constant. In this case, the bound becomes \(\Omega\left(\log(1/\delta)^{1/D}\right)\). For an arbitrary connectivity graph \(G\), we would partition \(A\) into \(\sim\log(1/\delta)\) sets of size \(\sim\frac{m}{\log(1/\delta)}\), each having a boundary of size at most \(|\partial|\), and the bound would have the form \(m/k\in\Omega(\log(1/\delta)/|\partial|)\). Proof.: By the definition of the error correction module (Definition 25), we have \[F\left(\mathcal{I}_{R}\otimes\left(\operatorname{Tr}_{\overline{A^{\prime}X}}\circ[\mathfrak{W}]_{p}\circ(\mathcal{U}\otimes\mathcal{P})\right)(\Phi_{RL}),\mathcal{I}_{R}\otimes\mathcal{U}(\Phi_{RL})\right)\geq 1-\delta\, \tag{2}\] Let \(\{\Gamma_{i}\}_{i=1}^{\ell}\) be a partition of \(A\), and let \(\{\Lambda_{i}\}_{i=1}^{\ell}\), where \(\Lambda_{i}\equiv\Gamma_{i}\cap A^{\prime}\), be its induced partition on \(A^{\prime}\). We can apply Lemma 26 by taking \(\mathcal{U}:\mathcal{H}_{L}\rightarrow\mathcal{H}_{A^{\prime}}\) to be the encoding isometry of the code; the existence of a recovery channel \(\mathcal{D}(\,\cdot\,)\) follows from (2). In fact, \(\mathcal{D}\) is simply \(\operatorname{Tr}_{\overline{A^{\prime}X}}\circ[\mathfrak{W}]_{p}\) without the first layer of noise. That gives \[\sum_{i}E_{R}(\Lambda_{i}:\overline{\Lambda_{i}})_{\mathcal{I}_{R}\otimes\mathcal{U}(\Phi_{RL})}\geq k-\sum_{i}2\sqrt{\delta/p^{|\Lambda_{i}|}}|\Lambda_{i}|+g(\sqrt{\delta/p^{|\Lambda_{i}|}}). 
\tag{3}\] Let \(\rho_{RAX}=\mathcal{I}_{R}\otimes([\mathfrak{W}]_{p}\circ(\mathcal{U}\otimes \mathcal{P}))\left(\Phi_{RL}\right)\) and \(\xi_{RA^{\prime}}=\mathcal{I}_{R}\otimes\mathcal{U}(\Phi_{RL})\). The condition (2) together with Lemma 27 implies that we have for all \(i\) \[3\Delta|\partial\Gamma_{i}|\geq E_{R}(\Lambda_{i}:\overline{\Lambda_{i}})_{ \xi}-\sqrt{\delta/p^{|\Gamma_{i}|}}|\Lambda_{i}|-g(\sqrt{\delta/p^{|\Gamma_{i}| }})\.\] This results in \[3\Delta\sum_{i}|\partial\Gamma_{i}|\geq k-\sum_{i}2\sqrt{\delta/p^{|\Lambda_{ i}|}}|\Lambda_{i}|+g(\sqrt{\delta/p^{|\Lambda_{i}|}})+\sqrt{\delta/p^{|\Gamma_{i}| }}|\Lambda_{i}|+g(\sqrt{\delta/p^{|\Gamma_{i}|}})\.\] Since \(|\Gamma_{i}|\geq|\Lambda_{i}|\), and using \(f\equiv\log_{p}(\delta)\), the expression can be further simplified to \[3\Delta\sum_{i}|\partial\Gamma_{i}|\geq k-\sum_{i}3\sqrt{p^{f-|\Gamma_{i}|}}| \Gamma_{i}|+2g(\sqrt{p^{f-|\Gamma_{i}|}})\.\] One can specialize the equation above to \[3\Delta\cdot\ell\cdot\max_{i}|\partial\Gamma_{i}| \geq k-\ell\cdot\max_{i}3p^{(f-|\Gamma_{i}|)/2}|\Gamma_{i}|+4p^{(f -|\Gamma_{i}|)/4}\] \[\geq k-\ell\cdot\max_{i}7p^{(f-|\Gamma_{i}|)/4}|\Gamma_{i}|\] with \(\ell\) the cardinality of the partition \(\{\Gamma_{i}\}_{i=1}^{\ell}\) and using the fact that our choice of \(\Gamma_{i}\) will be such that \(f\geq|\Gamma_{i}|\). In \(D\)-dimensions, it is possible to find a partition \(\{\Gamma_{i}\}_{i=1}^{\ell}\) such that \(|\Gamma_{i}|\leq\lambda\), \(|\partial\Gamma_{i}|\leq c_{1}(D)\lambda^{(D-1)/D}\), and \(\ell\leq c_{2}(D)m/\lambda\) for any \(\lambda\geq 1\), where \(c_{1}(D),c_{2}(D)\) are positive constants depending only on \(D\), see Lemma 34. We take \(\lambda=f/2\). With this choice, we have \[7\ell p^{(f-|\Gamma_{i}|)/4}|\Gamma_{i}|\leq 7c_{2}(D)\frac{m}{f/2}\cdot p^{f/ 8}(f/2)=7c_{2}(D)mp^{f/8}\.\] On the other hand: \[3\Delta\cdot\ell\cdot\max_{i}|\partial\Gamma_{i}| \leq 3\Delta c_{2}(D)\frac{m}{f/2}c_{1}(D)(f/2)^{1-1/D}\] \[=3c_{1}(D)c_{2}(D)\Delta m(f/2)^{-1/D}\.\] Putting these together, we get \[m(3c_{1}(D)c_{2}(D)\Delta(f/2)^{-1/D}+7c_{2}(D)p^{f/8})\geq k\,\] which leads to \[\frac{m}{k}\geq\frac{1}{2}\min\left(\frac{f^{1/D}}{3c_{1}(D)c_{2}(D)\Delta}, \frac{p^{f/8}}{7c_{2}(D)}\right)\.\] ## Acknowledgements We would like to thank Cambyse Rouze for discussions on quantum circuits in low dimension. We acknowledge funding from the European Research Council (ERC Grant AlgoQIP, Agreement No. 851716) and from the Plan France 2030 through the project ANR-22-PETQ-0006.
2308.11027
Split Learning for Distributed Collaborative Training of Deep Learning Models in Health Informatics
Deep learning continues to rapidly evolve and is now demonstrating remarkable potential for numerous medical prediction tasks. However, realizing deep learning models that generalize across healthcare organizations is challenging. This is due, in part, to the inherent siloed nature of these organizations and patient privacy requirements. To address this problem, we illustrate how split learning can enable collaborative training of deep learning models across disparate and privately maintained health datasets, while keeping the original records and model parameters private. We introduce a new privacy-preserving distributed learning framework that offers a higher level of privacy compared to conventional federated learning. We use several biomedical imaging and electronic health record (EHR) datasets to show that deep learning models trained via split learning can achieve highly similar performance to their centralized and federated counterparts while greatly improving computational efficiency and reducing privacy risks.
Zhuohang Li, Chao Yan, Xinmeng Zhang, Gharib Gharibi, Zhijun Yin, Xiaoqian Jiang, Bradley A. Malin
2023-08-21T20:30:51Z
http://arxiv.org/abs/2308.11027v1
# Split Learning for Distributed Collaborative ###### Abstract _Deep learning continues to rapidly evolve and is now demonstrating remarkable potential for numerous medical prediction tasks. However, realizing deep learning models that generalize across healthcare organizations is challenging. This is due, in part, to the inherent siloed nature of these organizations and patient privacy requirements. To address this problem, we illustrate how split learning can enable collaborative training of deep learning models across disparate and privately maintained health datasets, while keeping the original records and model parameters private. We introduce a new privacy-preserving distributed learning framework that offers a higher level of privacy compared to conventional federated learning. We use several biomedical imaging and electronic health record (EHR) datasets to show that deep learning models trained via split learning can achieve highly similar performance to their centralized and federated counterparts while greatly improving computational efficiency and reducing privacy risks._ ## Introduction Recent advances in deep learning algorithms have enabled the development of neural networks with promising performance for a variety of healthcare data types that were previously considered challenging with traditional statistical methods, including medical images [1], natural languages [2], and structured electronic health records (EHR) [3]. Despite the remarkable progress that has been achieved, most current deep learning models in the healthcare domain are developed using data from only a single site. Yet it is evident that no single healthcare organization can collect a sufficient amount of data on diverse populations, as well as variations in organizational practices, to represent the distribution of the general patient population. Consequently, models developed from single-site data oftentimes lack sufficient generalizability and may perform poorly when applied to other sites [4]. To resolve this problem, one natural solution would be to enable disparate healthcare organizations to collaboratively train a model. However, such collaborations have met numerous obstacles, ranging from concerns over data rights to patient privacy. Over the past several years, distributed learning has been investigated as a strategy to enable multiple data holders to contribute to the development of deep learning models while protecting the privacy of the underlying raw records. Specifically, one of the most popular variants of distributed learning is _federated learning_ (FL) [5, 6], which enables data holders to contribute to the training of a learning model under the orchestration of a central server by exchanging only focused model updates while maintaining private data locally. The ability to maintain the privacy of record-level data has drawn particular research interest from the healthcare community [7, 8]. Still, the canonical form of federated learning requires model builders to reveal the details about their local models (i.e., model architecture and model parameters) and such information can be leveraged to make inferences about the privately maintained local data records. As a result, in untrustworthy environments, federated learning needs to be jointly implemented in concert with additional mechanisms to enhance its privacy support. 
There are various approaches for doing so; some of the popular methods include 1) _differential privacy_ (DP) [9, 10] which leverages a randomization mechanism (e.g., additive Laplacian noise) to provide privacy guarantees for individual records for algorithms on aggregate databases, 2) _secure multiparty computation_ (SMC) [11, 12] a cryptographic solution to enable a set of parties to compute a joint function on their private data without revealing anything but the prescribed output and 3) _homomorphic encryption_ (HE) [13, 14], which allows a party to compute certain mathematical operations on ciphertexts without decrypting them. However, in practice, these additional privacy protection measures often come at the cost of either significantly harming model utility (e.g., predictive performance) or increasing computational complexity, which leads to long runtimes and costly computation in cloud computing environments. In this paper, we investigate _split learning_ (SL) as a new paradigm for multi-institutional collaborative learning of deep neural networks across distributed health data. Similar to conventional distributed learning, the split learning framework is composed of a server (i.e., the coordinator) and multiple healthcare organizations (i.e., the data contributors) [15]. Its uniqueness is that under the split learning setting, the deep neural network, also referred to as the global model, is divided into two sub-models (i.e., the _client model_ and the _server model_) according to a specific layer known as the _cut layer_[16]. The healthcare organizations and the server hold only their portion of the model, which they do not share with each other. At each training round, the healthcare organizations only train the first part of a deep neural network and send, what the literature has called, the _smashed data_ (i.e., the latent representations of raw data derived from the client model) to the server. The server then completes the rest of the forward propagation and computes the loss function without accessing the clients' raw data. Finally, the training round is concluded with a backward pass, where the server sends back to the clients the computed gradients, which are then used to update the client model. This process is equivalent to a training epoch in the centralized learning setting and will iterate until the global model converges. Table 1 provides a qualitative comparison between different distributed learning frameworks. Notably, as is in the federated learning setting, no sensitive raw data is shared during the split learning training process, which maintains the privacy of patients' data. Moreover, in split learning, neither the server nor the clients have complete knowledge of the global model's architecture and weights. This is a major difference from federated learning, where the server has full access to the client's model. This notion of incompleteness in knowledge about the global model, combined with the minimal information encoded in the smashed data, further reduces the risk of privacy leakage during training. This is also beneficial to the server in scenarios where the server hopes not to reveal its developed model architecture, which may be considered proprietary. 
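To make the round-by-round mechanics just described concrete, here is a minimal PyTorch-style sketch of a single split-learning training step. The model architectures, layer sizes, and function names are illustrative assumptions, not the implementation used in this work:

```python
import torch
import torch.nn as nn

# Client-side sub-model (up to the cut layer) and server-side sub-model.
client_model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
server_model = nn.Sequential(nn.Flatten(), nn.Linear(16 * 14 * 14, 64), nn.ReLU(), nn.Linear(64, 9))
client_opt = torch.optim.Adam(client_model.parameters(), lr=1e-4)
server_opt = torch.optim.Adam(server_model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def split_training_step(x, y):
    # Client forward pass: only the smashed data (and labels) leave the client.
    smashed = client_model(x)
    smashed_received = smashed.detach().requires_grad_(True)  # what the server sees

    # Server forward/backward pass and server-model update.
    server_opt.zero_grad()
    loss = loss_fn(server_model(smashed_received), y)
    loss.backward()
    server_opt.step()

    # Server returns only d(loss)/d(smashed data); the client finishes backpropagation.
    client_opt.zero_grad()
    smashed.backward(smashed_received.grad)
    client_opt.step()
    return loss.item()

# Example: a batch of 28x28 RGB patches with 9 tissue classes (PathMNIST-like shapes).
x, y = torch.randn(32, 3, 28, 28), torch.randint(0, 9, (32,))
split_training_step(x, y)
```

In a real deployment, the smashed data and the returned gradient would cross the network between the client and the server; everything below the cut layer stays with the healthcare organization and everything above it stays on the server.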
In addition to privacy benefits, split learning can also alleviate the computational burden on the healthcare organizations' side by offloading part of the training process of the deep neural network to the server (e.g., a data center) which typically has access to more computational power at a cheaper rate. In addition to the qualitative study, we further 1) provide a quantitative analysis of the split learning framework by conducting experiments across three biomedical image datasets (PathMNIST [17], OrganAMNIST [18], and BloodMNIST [19]) and two EHR datasets (eICU [20] and a private dataset from Vanderbilt University Medical Center [21]), 2) perform an analysis to compare the privacy risk under the split learning and federated learning framework, and 3) investigate the trade-off between privacy, model utility, and client-side model training efficiency in split learning. Our results suggest that split learning can consistently achieve comparable performance as federated learning while providing enhanced privacy and computational efficiency for the participating health organizations. **Method** _Federated Learning._ Federated learning can be conducted among either a small group of organizations (cross-silo) or a large population of mobile devices (cross-device). In this paper, we focus on the cross-silo setting, since it is more prevalent in the healthcare domain. As depicted in Figure1a, we assume that there are \(k\) clients who participate in the model training. Each client holds \(n_{i}\) data samples (\(i\in\{1,2,...,k\}\)). \(n=\sum_{i}^{k}n_{i}\) is the total data size. The objective of the collaboration is to train a neural network \(F_{\mathbf{w}}\) parameterized by weights \(\mathbf{w}\). At the beginning of the model training, each client initializes its local model \(\mathbf{w}_{i}^{c}\) in parallel. For each training epoch, each client calculates the loss \(\mathcal{L}\big{(}F_{\mathbf{w}_{i}^{c}}(\mathbf{X}_{i}),\mathbf{y}_{i}\big{)}\) using its own data \(\mathbf{X}_{i}\) with labels \(\mathbf{y}_{i}\) in parallel. A local model update is computed through gradient descent and shared with the server: \(\mathbf{w}_{i}^{c}\leftarrow\mathbf{w}_{i}^{c}-\eta\cdot\frac{\partial \mathcal{L}}{\partial\mathbf{w}_{i}^{c}}\). Finally, at the end of each training epoch, the server updates the global model weights by computing a weighted average of the local models, a process known as federated averaging [5]: \(\mathbf{w}\leftarrow\sum_{i}^{k}\frac{n_{i}}{n}\cdot\mathbf{w}_{i}^{c}\). This process is repeated until the global model converges. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{**Framework**} & \multicolumn{4}{c}{**Privacy Implications**} \\ \cline{2-6} & **Protection over** & **Protection over** & **Protection over** & **Model Utility** & **Computational Efficiency** \\ & **Raw Data** & **Model Parameters** & **Model Architecture** & & \\ \hline \hline **FL [5]** & Yes & No & No & High & Moderate \\ \hline **FL+DP [10]** & Yes & Variable\({}^{*}\) & No & Variable\({}^{*}\) & Moderate \\ \hline **FL+SMC [12]** & Yes & No & No & High & Low \\ \hline **FL+SMC+HE [14]** & Yes & Yes & No & High & Low \\ \hline **SL [16]** & Yes & Partial\({}^{\dagger}\) & Partial\({}^{\dagger}\) & High & High \\ \hline \end{tabular} \({}^{*}\) Depends on the choice of privacy parameters and it’s typically a trade-off between privacy and utility. 
\({}^{\dagger}\) Associated with the relative location of the cut layer: only the shallow layers of the model (up to the cut layer) are protected from the server. \end{table} Table 1: A qualitative comparison of different distributed learning schemes. _Split Learning._ As shown in Figure 1(b), in the split learning setting, a neural network \(F\) is partitioned into two separate sub-models. The first is the client model \(h_{\mathbf{w}^{c}}\), which takes the raw data \(\mathbf{X}\) and outputs latent representations of the data known as the _smashed data_. The second is the server model \(f_{\mathbf{w}^{s}}\), which makes predictions based on the smashed data, i.e., \(F(\mathbf{X})=(f_{\mathbf{w}^{s}}\circ h_{\mathbf{w}^{c}})(\mathbf{X})\). The clients and the server only have access to their own part of the model and cannot access the other part. Each training step of a neural network can be described as a _forward pass_ where the loss function is computed and a _backward pass_ where the model parameters are updated by back-propagating the error through gradients. In the canonical version of split learning, at each forward pass, the client computes the smashed data \(h_{\mathbf{w}^{c}}(\mathbf{x})\) and then shares the smashed data along with its label with the server. The server makes a prediction from the smashed data using the server model to compute the loss \(\mathcal{L}\Big{(}f_{\mathbf{w}^{s}}\big{(}h_{\mathbf{w}^{c}}(\mathbf{X})\big{)},\mathbf{y}\Big{)}\). In the backward pass, the server first updates its model according to \(\mathbf{w}^{s}\leftarrow\mathbf{w}^{s}-\eta\cdot\frac{\partial\mathcal{L}}{\partial\mathbf{w}^{s}}\) and sends the gradients of the smashed data \(\frac{\partial\mathcal{L}}{\partial h_{\mathbf{w}^{c}}(\mathbf{X})}\) to the client. The client then computes the gradients of the client model \(\frac{\partial\mathcal{L}}{\partial\mathbf{w}^{c}}\) and updates the model parameters accordingly, i.e., \(\mathbf{w}^{c}\leftarrow\mathbf{w}^{c}-\eta\cdot\frac{\partial\mathcal{L}}{\partial\mathbf{w}^{c}}\), where \(\eta\) is the learning rate. Finally, in the multiple clients' scenario, the client shares its model parameters with the next client, where this process repeats. Note that the raw data is never shared in the process and the server only sees a compact representation of the client's private data (the smashed data). Moreover, the computational cost on the clients' end is greatly reduced since the client is only responsible for the computation of the first part of the model. It should be recognized that there are variants of split learning that do not require sharing labels or allow learning on vertically partitioned data [16]. _Speeding up Split Learning with Federation._ The canonical split learning framework processes each client in a sequential order, which can become very time-consuming if either the number of clients or the amount of data maintained by each client is large. Inspired by federated learning, the split learning framework can be modified to enable clients to compute their updates in a parallel manner and thereby accelerate the learning process [22]. Specifically, at each round of training, \(k\) clients first compute their smashed data in parallel \(h_{\mathbf{w}^{c}_{i}}(\mathbf{X}_{i}),i\in\{1,2,...,k\}\) and send the results to the server. 
Next, the server computes the loss \(\mathcal{L}_{i}\Big{(}f_{\mathbf{w}^{s}_{i}}\big{(}h_{\mathbf{w}^{c}_{i}}(\mathbf{X}_{i})\big{)},\mathbf{y}_{i}\Big{)}\) and the gradients of the smashed data \(\frac{\partial\mathcal{L}_{i}}{\partial h_{\mathbf{w}^{c}_{i}}(\mathbf{X}_{i})}\) and sends back the gradients to each individual client. The server then updates its model by averaging the gradients from all clients: \(\mathbf{w}^{s}\leftarrow\mathbf{w}^{s}-\eta\sum_{i}^{k}\frac{n_{i}}{n}\cdot\frac{\partial\mathcal{L}_{i}}{\partial\mathbf{w}^{s}_{i}}\). Similarly, the clients update their models via federated averaging, i.e., \(\mathbf{w}^{c}\leftarrow\mathbf{w}^{c}-\eta\sum_{i}^{k}\frac{n_{i}}{n}\cdot\frac{\partial\mathcal{L}_{i}}{\partial\mathbf{w}^{c}_{i}}\), with the help of a separate server referred to as the federated server. By allowing parallel computing on both the clients' and the server's side, this variant of split learning (named SplitFedv1 in [22]) can significantly speed up the training process. However, the drawback of this approach is that to protect the client's data privacy, it is required that the federated server is a trustworthy third party that does not collude with the main server. In addition, the federation process may negatively affect the utility of the converged model. Nevertheless, a low computation latency is often desired for deploying distributed learning solutions to large-scale real-world applications. Thus, we use this variant of split learning as the default in our experiments. Figure 1: Illustration of the federated learning and split learning frameworks. ### Evaluation #### Experimental Setup _Datasets._ We utilize five datasets to support evaluations in two types of settings - biomedical image classification and clinical concept predictions from structured EHR data. Specifically, we apply the following three datasets from the MedMNIST [23] benchmark for _biomedical image classification_ tasks: (1) PathMNIST [17]: a colon pathology dataset containing \(100,000\) non-overlapping image patches from hematoxylin & eosin stained histological images for classifying \(9\) types of tissues. We follow the recommended train/test split in our experiment, resulting in a total number of \(89,996\) images for training. An additional \(7,180\) image patches collected from a different clinical center are reserved for testing; (2) OrganAMNIST [18]: a 2D image dataset cropped from the axial view of 3D computed tomography (CT) images from the Liver Tumor Segmentation Benchmark [18] for performing classification of \(11\) body organs. The dataset is partitioned into two disjoint sets, with \(34,581\) images for training and \(17,778\) for testing; and (3) BloodMNIST [19]: a peripheral blood cell image dataset containing individual normal cells organized into \(8\) classes. The dataset is partitioned into training and testing sets, with \(11,959\) and \(3,421\) images, respectively. For the _EHR prediction_ tasks, we use the following two datasets: (1) eICU [20]: a public EHR dataset containing more than \(140,000\) patients' hospital visit records. The task is to predict the risk of being readmitted to the ICU in the next \(15\) days (i.e., binary classification) given the clinical activities during the current ICU stay. Following previous studies [24, 25], we consider five types of events as features, including diagnosis, lab test, medication, physical exam, and treatment. 
We use non-overlapping training and testing sets containing data from \(30,000\) and \(10,000\) patients, respectively; and (2) VUMC [21]: a private EHR dataset collected from Vanderbilt University Medical Center containing all adult inpatient visits in 2019. The task is to predict whether the patient will be discharged the next day to home or other care facilities. Visits shorter than 24h or of patients who died during hospitalization are excluded, resulting in a total number of \(26,283\) patients with an average age of \(52.9\). Table 2 provides a summary of the description and label distribution for each dataset. \begin{table} (Table 2 body not recovered from the source; columns: Dataset, Data Type, Task, Label Distribution.) \end{table} _Deep Learning Models._ For biomedical image classification tasks, we rely on a convolutional neural network containing five convolutional layers, two max-pooling layers, and three fully-connected (FC) layers with ReLU activation. Table 3 provides the details of the network architecture. The client owns the first part of the model, containing two convolutional layers and one max-pooling layer. For the readmission prediction task on the eICU dataset, we apply the same model architecture as described in [25]. The client model utilizes separate encoders for mapping different types of events into a same-length embedding sequence, which is then processed by a Transformer encoder. The server model is a two-layer fully-connected network for making final predictions. For the discharge prediction task with the VUMC dataset, we use a four-layer fully-connected network (\(2808-64-32-32-1\)) with ReLU activation, where the model is split after the first layer. _Centralized/Distributed Learning Setting._ In the centralized learning scenario, we train the deep learning model on the entire training set for \(50\) epochs using the Adam optimizer with a batch size of \(256\). By default, we use a learning rate of \(10^{-4}\) and a weight decay of \(10^{-5}\). Specifically, for the readmission prediction task on the eICU dataset, we set the 
The Kappa ranges from \(-1\) to \(1\), with a larger value indicating a stronger agreement. For each setting in our experiment, we train the neural network on the training dataset with five different random seeds and report its performance as measured on the hold-out testing dataset. \begin{table} \begin{tabular}{c||c|c|c|c|c|c|c|c|c|c|c|c} \hline & \multicolumn{2}{c|}{**PathMNIST**} & \multicolumn{2}{c|}{**OrganAMNIST**} & \multicolumn{2}{c|}{**BloodMNIST**} & \multicolumn{2}{c|}{**eICU**} & \multicolumn{2}{c}{**VUMC**} \\ \cline{2-13} & **Accuracy** & **AUROC** & **Accuracy** & **AUROC** & **Accuracy** & **AUROC** & **AUPRC** & **F1** & **Kappa** & **AUPRC** & **F1** & **Kappa** \\ \hline \hline **CL** & 0.8346 & 0.9734 & 0.8773 & 0.9901 & 0.9379 & 0.9958 & 0.6389 & 0.6083 & 0.4735 & 0.7898 & 0.7147 & 0.6565 \\ & \(\pm\)0.0088 & \(\pm\)0.0071 & \(\pm\)0.0051 & \(\pm\)0.0010 & \(\pm\)0.0022 & \(\pm\)0.0005 & \(\pm\)0.0127 & \(\pm\)0.0057 & \(\pm\)0.0087 & \(\pm\)0.0058 & \(\pm\)0.0037 & \(\pm\)0.0042 \\ \hline **FL** & 0.8225 & 0.9690 & 0.8689 & 0.9980 & 0.9165 & 0.9932 & 0.6384 & 0.5949 & 0.4675 & 0.7884 & 0.7122 & 0.6547 \\ & \(\pm\)0.0175 & \(\pm\)0.0043 & \(\pm\)0.0065 & \(\pm\)0.0010 & \(\pm\)0.0107 & \(\pm\)0.0102 & \(\pm\)0.0101 & \(\pm\)0.0098 & \(\pm\)0.0019 & \(\pm\)0.0029 & \(\pm\)0.0035 \\ \hline **SL** & 0.8127 & 0.9673 & 0.8538 & 0.9864 & 0.9066 & 0.9917 & 0.6331 & 0.6000 & 0.4693 & 0.7840 & 0.7116 & 0.6524 \\ & \(\pm\)0.0129 & \(\pm\)0.0069 & \(\pm\)0.0067 & \(\pm\)0.0012 & \(\pm\)0.0096 & \(\pm\)0.0013 & \(\pm\)0.0109 & \(\pm\)0.0074 & \(\pm\)0.0078 & \(\pm\)0.0028 & \(\pm\)0.0026 & \(\pm\)0.0033 \\ \hline \end{tabular} \end{table} Table 4: A comparison of final performance measured on the test dataset for centralized learning (CL), federated learning (FL), and split learning (SL) (reported with a \(95\%\) confidence interval). Figure 2: Comparison of convergence by measuring the per-epoch performance on the test dataset for federated learning and split learning. #### Utility Analysis _Performance._ We conducted experiments using centralized learning, federated learning, and split learning under the same conditions for \(50\) epochs. To eliminate the effects of overfitting in centralized learning, we assume early stopping is employed and report the best performance measured on the test dataset. As shown in Table 4, the centralized learning consistently achieved the highest performance across all five datasets, which can be considered the upper bound of the performance for distributed learning. While federated learning outperformed split learning slightly on four datasets (excluding eICU), the difference in performance between them was almost negligible (usually \(<1\%\)), with margins of error being similar. _Convergence._ Figure 2 compares the convergence of the global model between federated learning and split learning. The first and second rows plot the model performance measured on the test dataset after each epoch for split and federated learning, respectively. The third row plots the relative approximation error between federated and split learning, defined as \(\delta=|\frac{v_{SL}-v_{FL}}{v_{FL}}|\times 100\%\), where \(v_{SL}\) and \(v_{FL}\) denote the performance of split and federated learning model respectively. We observe that except for the differences caused by the initialization at the beginning of training, both split and federated learning are able to converge at a similar rate. 
For some datasets (e.g., eICU and VUMC), split learning can converge even faster than federated learning. This observation was further verified by the linear regression analysis shown in Figure 3, which demonstrates a high correlation for all datasets (typically with estimated coefficients \(<1.2\) and R\({}^{2}>0.9\)). _Scalability._ To investigate the scalability of federated learning and split learning, we perform a sensitivity analysis on the VUMC dataset with different numbers of participating clients as well as different numbers of training data samples per client. As shown in Figure 4, a similar pattern is observed for both federated learning and split learning, where the performance of the resulting model (measured by AUPRC) continues to improve as the number of clients and the number of samples per client increase. Moreover, the differences in performance between federated learning and split learning as shown in Figure 3(c) (green for positive and orange for negative values) are negligible. This indicates that federated learning and split learning are both scalable to more participating clients and a larger amount of training data. Figure 4: Sensitivity analysis on the VUMC dataset. Figure 3: Linear regression results of the per-epoch performance of federated learning vs. split learning on \(5\) datasets. _Efficiency._ Table 5 compares the efficiency on the client's end in terms of model parameters and the estimated floating point operations (FLOPs) required for computing a model forward pass. Our results indicate that split learning consumes less memory and requires fewer computations on the client side compared to federated learning across all five datasets. Notably, split learning outperforms federated learning significantly in medical image recognition tasks, reducing memory usage by approximately \(99\%\) and computational requirements by \(77\%\). However, the benefits of split learning in EHR tasks are marginal, likely due to the complexity of prediction tasks, the specific model architecture, and the choice of cut layer position used in our experiments. #### Privacy Analysis Both federated learning and split learning mitigate the systemic privacy risks from traditional centralized learning by embedding a _data minimization principle_ in their design. Federated learning achieves this principle by aggregating information collected from multiple data records into a focused model update. By contrast, split learning limits the amount of collected information about each data record by only sharing its smashed data. For the simplicity of analysis, herein we assume that the number of dimensions being revealed is proportional to the privacy risk, i.e., the amount of private information being leaked. We note that this simplified privacy model is by no means a rigorous measure of data privacy, but it can serve as a baseline with room for refinement in the future. More specifically, let us consider a client who contributes \(n_{c}\) private data samples to participate in training. Suppose the federated learning model has a total of \(N_{w}\) parameters (i.e., \(w\in\mathbb{R}^{N_{w}}\)) and the cut layer size/smashed data dimension is \(d\) (i.e., \(h_{w^{c}}(x)\in\mathbb{R}^{d}\)). Then during each training round, the average number of dimensions revealed to the server per training sample is \(\frac{N_{w}}{n_{c}}\) for federated learning and \(d\) for split learning, as illustrated in Figure 4(a). 
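As a back-of-the-envelope sketch of this accounting, the following helper functions reproduce the per-sample exposure counts and the break-even local data size that the next paragraph discusses; the function names are assumptions introduced here for illustration only:

```python
def fl_dims_per_sample(num_params: int, local_samples: int) -> float:
    # Federated learning: one model update of num_params dimensions per round,
    # amortized over the client's local_samples records.
    return num_params / local_samples

def sl_dims_per_sample(cut_layer_size: int) -> int:
    # Split learning: one smashed-data vector per record, independent of local data size.
    return cut_layer_size

def fl_break_even_data_size(num_params: int, cut_layer_size: int) -> float:
    # Local data size N_w / d at which FL's per-sample exposure matches SL's.
    return num_params / cut_layer_size

# A ResNet-50-like model (~23M parameters) split after a 512-dimensional layer would
# need more than ~44,000 local records before FL reveals fewer dimensions per sample.
print(fl_break_even_data_size(23_000_000, 512))  # ~44921.9
```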
Since the model update is for a fixed number of dimensions (i.e., the same dimension as the model parameters), federated learning benefits from having a large size of local training data to average out the privacy risk. By contrast, the amount of information revealed for each data sample in split learning is invariant to the local data size and only dependent on the client's model architecture (i.e., the cut layer size). Since deeper layers tend to produce more compact representations (i.e., smaller \(d\)), it is usually a trade-off between computational efficiency and privacy for the client model in split learning. As such, federated learning appears to be more suitable for scenarios where every client possesses a sufficiently large quantity of data samples (\(\geq\frac{N_{w}}{d}\)) but discourages clients with fewer data from participating due to the higher privacy risk. In our setting, the minimum reasonable data size for federated learning (\(\frac{N_{w}}{d}\)) is \(102\), \(2907\), and \(12176\) for the biomedical image datasets, the VUMC dataset, and the eICU dataset, respectively. However, since modern deep neural networks are experiencing exponential growth in size, the minimum required data size can easily scale up to a number that is very difficult to achieve in practice. For example, if we train a ResNet-50 model [26] that contains over \(23\) million trainable parameters to recognize medical images and choose to split after the adaptive pooling layer (corresponds to a cut layer size of \(512\)), we would need at least over \(44000\) data records from every site to justify the adoption of federated learning. Such a requirement is oftentimes unattainable for healthcare applications, where a client with only a few patient records can also make a crucial contribution to the model. For instance, specialty healthcare facilities (e.g., smaller oncology practices) may have much fewer patients compared to general hospitals, but they typically have a much more focused dataset in terms of certain diseases. Therefore, split learning becomes more desirable in these circumstances as it is more versatile and can provide the same level of privacy benefits regardless of the size of the client's local data. _Risk of Inversion._ Compared to federated learning, split learning further reduces the risks of model/gradient inversion attack [27, 28, 29, 30] by restricting model access for both parties. As is depicted in Figure 4(b), learning can be seen as computing a function \(F\) on the client's private data \(X\) to get result \(M\). In federated learning, although the raw data \(X\) is not directly accessible to the server, the computation function \(F\) and the result \(M\) are both known to the server, which leaves the opportunity for the server to approximate an inverse function to infer \(X\) from \(M\) and break patient privacy. 
Differently, split learning separates the function \(F\) (i.e., the model) into the client's part \(F_{C}\) and the server's \begin{table} \begin{tabular}{c||c|c|c|c} \hline \multirow{2}{*}{**Dataset**} & \multicolumn{2}{c|}{**Split Learning**} & \multicolumn{2}{c}{**Federated Learning**} \\ \cline{2-5} & **\# of params** & **FLOPs** & **\# of params** & **FLOPs** \\ \hline \hline PathMNIST & 2,832 & 1,726,208 & 235,225 & 7,591,817 \\ \hline OrganAMNIST & 2,544 & 1,531,520 & 235,195 & 7,397,387 \\ \hline BloodMNIST & 2,832 & 1,726,208 & 235,096 & 7,591,688 \\ \hline eICU & 1,556,475 & – & 1,558,556 & – \\ \hline VUMC & 179,776 & 179,904 & 186,049 & 186,369 \\ \hline \end{tabular} * * Cannot be estimated since the input is heterogeneous clinical sequences with variable lengths and diverse features. \end{table} Table 5: Comparison on client-side model efficiency. part \(F_{S}\) and only the latter is revealed to the server. Thus, the server in split learning only has imperfect information about the computing process, which makes it quite challenging to compute the exact inverse function. #### 4.2.2 Discussion _Design Trade-offs._ There exist two major trade-offs in designing a deep learning model for split learning. (1) _Privacy-utility trade-off_: Split learning reflects the data minimization principle by allowing clients to extract and send only the relevant features of their local data to the server, while keeping the rest of the data private. The client model serves as a compressor which helps to reduce the amount of information that is transmitted over the network while preserving the most important information for the task at hand. Using a small cut layer size further suppresses the amount of information in the smashed data, but may negatively affect the model performance. Thus, one of the main objectives of designing a split learning model is to choose a proper cut layer size to find a balance between keeping enough information to capture the important aspects of the data and reducing the amount of information to minimize the privacy risk of releasing the smashed data. (2) _Privacy-efficiency trade-off_: the information bottleneck theory [31] suggests that the output of deeper layers in a neural network contains more information about the task label and less information about the original input. As such, choosing to split at a deeper layer may result in less redundant information in the smashed data, thereby reducing the privacy risk. However, setting a shallow layer as the cut layer allows the client to have a more compact model and can thus give full scope to the advantages of split learning to reduce the memory consumption and computational burden on the client side. Overall, these tuning knobs in split learning offer the user great flexibility in terms of configuring the privacy-utility-efficiency trade-off for specific tasks, whereas federated learning lacks such customizability. _Limitations._ Despite the merits of split learning, there are several limitations we would like to highlight. First, the current split learning framework only supports decentralized training of deep neural networks but does not support the training of traditional machine learning models, such as tree-based models. Second, although split learning largely relieves the computational cost on the client side, the communication cost is increased compared to federated learning, as the client needs to communicate more frequently with the server for every mini-batch to perform gradient descent. 
To address this, recent studies propose to utilize an intermediate edge server [32], asynchronous training scheme [33], or an automated software framework [34] to reduce the communication overheads in split learning. Additionally, service providers who do not wish to claim intellectual property over the developed model can share the server-side model with the clients after training to eliminate the communication overheads at inference time. Third, the privacy benefits of split learning heavily rely on the inaccessibility of the client model and thus might be vulnerable under a stronger threat model. For instance, in the insider attack scenario where the server is able to collude with one of the clients, the privacy protection over other clients' data would be greatly diminished. Figure 5: Conceptual comparison of privacy preservation in federated learning (FL) vs. split learning (SL). Future DirectionsThere are several notable problems that should be considered as future research. First, how can we combine split learning with rigorous statistical privacy methods (e.g., differential privacy) or secure multiparty computation to achieve more rigorous privacy protection? Moreover, how can this be done with minimal sacrifice to model utility or significantly increasing the computational cost? Second, how can we design a more nuanced framework for privacy in split learning? Specifically, we need to quantify the amount of leaked private information via smashed data and the potential privacy risks. Third, can we design a systematic framework to decide what the optimal cut layer size and location for a given neural network is to minimize the privacy risk while preserving most model utility? Fourth, how can we realize split learning on heterogeneous data types and varying data distributions? ## Conclusion A lack of sufficient data to cover the distribution of the general population is one of the major impediments to developing practical deep learning models for healthcare applications. In this work, we introduced split learning as a new distributed learning paradigm for enabling multi-institutional collaborative development of deep learning models across data silos without accessing raw patient data. Through both in-depth qualitative analysis as well as systematic quantitative experiments on five health datasets, we illustrated that split learning can achieve similar model utility as federated learning while providing better client-side model efficiency, lower risk of inversion, enhanced protection over the model, and more flexible privacy protection over clients' data. Our findings suggest that split learning is a promising alternative to federated learning for developing deep learning models without violating the privacy of the data contributors in many healthcare tasks. ## Acknowledgments This research was sponsored in part by grant U54HG012510, the NIH Bridge2AI Center. BM has been a paid consultant to TripleBlind AI, but for work unrelated to this investigation.
2303.13542
OntoMath${}^{\mathbf{PRO}}$ 2.0 Ontology: Updates of the Formal Model
This paper is devoted to the problems of ontology-based mathematical knowledge management and representation. The main attention is paid to the development of a formal model for the representation of mathematical statements in the Open Linked Data cloud. The proposed model is intended for applications that extract mathematical facts from natural language mathematical texts and represent these facts as Linked Open Data. The model is used in the development of a new version of the OntoMath${}^{\mathrm{PRO}}$ ontology of professional mathematics, which is described in this paper. OntoMath${}^{\mathrm{PRO}}$ underlies a semantic publishing platform that takes as an input a collection of mathematical papers in LaTeX format and builds their ontology-based Linked Open Data representation. The semantic publishing platform, in turn, is a central component of the OntoMath digital ecosystem, an ecosystem of ontologies, text analytics tools, and applications for mathematical knowledge management, including semantic search for mathematical formulas and a recommender system for mathematical papers. According to the new model, the ontology is organized into three layers: a foundational ontology layer, a domain ontology layer and a linguistic layer. The domain ontology layer contains language-independent math concepts. The linguistic layer provides linguistic grounding for these concepts, and the foundational ontology layer provides them with meta-ontological annotations. The concepts are organized in two main hierarchies: the hierarchy of objects and the hierarchy of reified relationships.
Alexander Kirillovich, Olga Nevzorova, Evgeny Lipachev
2023-03-17T20:29:17Z
http://arxiv.org/abs/2303.13542v1
# OntoMath\({}^{\bf PRO}\) 2.0 Ontology: Updates of the Formal Model ###### Abstract This paper is devoted to the problems of ontology-based mathematical knowledge management and representation. The main attention is paid to the development of a formal model for the representation of mathematical statements in the Open Linked Data cloud. The proposed model is intended for applications that extract mathematical facts from natural language mathematical texts and represent these facts as Linked Open Data. The model is used in the development of a new version of the OntoMath\({}^{\bf PRO}\) ontology of professional mathematics, which is described in this paper. OntoMath\({}^{\bf PRO}\) underlies a semantic publishing platform that takes as an input a collection of mathematical papers in LaTeX format and builds their ontology-based Linked Open Data representation. The semantic publishing platform, in turn, is a central component of the OntoMath digital ecosystem, an ecosystem of ontologies, text analytics tools, and applications for mathematical knowledge management, including semantic search for mathematical formulas and a recommender system for mathematical papers. According to the new model, the ontology is organized into three layers: a foundational ontology layer, a domain ontology layer and a linguistic layer. The domain ontology layer contains language-independent math concepts. The linguistic layer provides linguistic grounding for these concepts, and the foundational ontology layer provides them with meta-ontological annotations. The concepts are organized in two main hierarchies: the hierarchy of objects and the hierarchy of reified relationships. 1 Footnote 1: Please, cite as: Alexander Kirillovich, Olga Nevzorova, and Evgeny Lipachev. OntoMath\({}^{\bf PRO}\) 2.0 Ontology: Updates of Formal Model // Lobachevskii Journal of Mathematics, 2022, Vol. 43, No. 12, pp. 3504–3514. [https://doi.org/10.1134/S1995080222150136](https://doi.org/10.1134/S1995080222150136) Keywords: Formal Model, Ontology, Linked Open Data, Natural Language Processing, Mathematical Knowledge Management, OntoMath\({}^{\bf PRO}\) ## 1 Introduction This paper is devoted to a problem in the field of ontology-based mathematical knowledge management and representation, i.e. representation of mathematical statements in the Linked Open Data (LOD) cloud. There are several formalisms for mathematical knowledge representation [1]. OpenMath [2], [3] and Content MathML [4] are used to represent mathematical statements, while OMDoc [5] is used to represent complex semiformal mathematical documents. However, these formalisms are not LOD-native and require adaptation to be used in LOD. In fact, there are projects for encoding OpenMath documents as LOD-integrated RDF datasets [6], [7]. These encodings, however, represent math statements only indirectly: they assert statements about OpenMath documents, not the math statements themselves. We propose a formal model for representing mathematical statements in the Linked Open Data cloud in a direct way. This model has been tested in the development of OntoMath\({}^{\text{Edu}}\), an experimental educational mathematical ontology [8]-[10]. Now the model is used in developing the new version of OntoMath\({}^{\text{PRO}}\), an ontology of professional mathematics. The new version of the OntoMath\({}^{\text{PRO}}\) ontology is intended to be used for extracting mathematical statements from natural language mathematical texts and representing extracted statements as Linked Open Data (LOD). 
LOD have value in themselves [11], and can be used for navigation, querying, aggregation, etc [12] - [14]. Additionally, via the OpenDreamKit project and Math-in-the-Middle ontology [15], a LOD representation can then be converted to formats of computer algebra systems. So, for example, OntoMath\({}^{\text{PRO}}\) can be used to parse a mathematical task in a natural language, and automatically solve this task by a computer algebra system. Although the first version of the OntoMath\({}^{\text{PRO}}\) ontology [16], [17] has proven to be effective in several specialized services, its architecture has a number of limitations that impede its use for the intended purpose. In this regard, we started a project for developing the new major version of the ontology based on the new architecture, designed to tackle these problems. The rest of the paper is organized as follows. In Section 2 we describe the first version of the ontology and outline its restrictions. In Sections 3 and 4 we describe the architecture of the new version of the ontology under development. In Section 5 we discuss the task of populating the ontology with new concepts. In the Conclusions, we summarize the current status of the project and the directions of future work. ## 2 Ontological representation of math knowledge in OntoMath\({}^{\text{PRO}}\) 1.0 ontology In this section we briefly describe the first version of the OntoMath\({}^{\text{PRO}}\) 1.0 ontology and outline its restrictions. The OntoMath\({}^{\text{PRO}}\) 1.0 ontology is organized in two hierarchies: the hierarchy of fields of mathematics and the hierarchy of objects of mathematical knowledge. The ontology defines five types of relationships between concepts. The concept description contains a name in Russian and English, a definition, links to other concepts and external resources from the Linked Open Data cloud. The ontology can be used to represent individual mathematical objects as instances of the classes from the hierarchy of objects. The ontology is expressed by the OWL DL (Web Ontology Language) formalism which is based on a description logic [18]. OntoMathPRO underlies a semantic publishing platform [19] that takes as an input a collection of mathematical papers in LaTeX format and builds their ontology-based Linked Open Data representation. The semantic publishing platform, in turn, is a central component of the OntoMath digital ecosystem [20], [21], an ecosystem of ontologies, text analytic tools, and applications for mathematical knowledge management, including semantic search for mathematical formulas [22] and a recommender system for mathematical papers [23]. Although OntoMathPRO 1.0 has proven to be effective in several specialized services, its architecture has a number of limitations. Examples of the most significant restrictions are 1. The existing version of the ontology contains a large number of general concepts, but a rather poor set of relationships between them. 2. The existing version of the ontology does not distinguish type concepts and role concepts; accordingly, there are no relations between roles and types. 3. Linguistic information about concepts is expressed using simple rdf labels that do not contain information about their internal structure and linguistic properties. 4. The existing version of the ontology does not contain individuals, but is based solely on the representation of classes. 
Such an ontology is suitable for extracting and representing individual mathematical objects, but not relationships between mathematical objects and mathematical facts. ## 3 Ontology-based representation of mathematical statements in Linked Open Data The key problem that the new version of the ontology is intended to address is representation of mathematical statements as Linked Open Data. Thus the ontology determines a function that translates a mathematical sentence expressed in First-order logic (FOL) to a LOD-compatible RDF graph. For the obtained RDF translation to preserve the meaning of the source FOL sentence, the source sentence and the obtained RDF graph must be semantically equivalent. From the model-theoretic point of view, it means that the source statement and its translation share the same set of models (in this section we use the term 'model' in the technical sense of model theory, where it means an interpretation satisfying a sentence or RDF graph). The full coincidence of the models is not possible however. First, FOL and RDF have different semantics [24]: FOL interpretations contain arbitrary \(n\)-ary predicates while RDF/OWL interpretations contain only unary and binary ones, FOL interpretations support functional terms while RDF/OWL interpretations do not, and so on. Thus, we can only speak of some kind of isomorphism between models of a FOL sentence and its RDF translation. Second, FOL has more expressive power than RDF/OWL and so the models of a FOL statement may be only a subset of the models of its RDF translation. Let us state the condition that the translation function must satisfy in a more formal way. Let \(o\) be the graph of the ontology. Let \(S_{FOL}\) be the set of all FOL sentences. Let \(S_{RDF}\) be the set of all RDF graphs. Let \(*:S_{FOL}\to S_{RDF}\) be a partial function that translates FOL sentences to RDF graphs. Let \(INT_{FOL}\) be the set of all FOL interpretations. Let \(INT_{RDF}\) be the set of all RDF/OWL interpretations. Let \(t:INT_{FOL}\to INT_{RDF}\) be a mapping from FOL interpretations to RDF/OWL interpretations. Define \(t\) on a set \(INTS_{FOL}\subset INT_{FOL}\) of FOL interpretations as \(t(INTS_{FOL})=\{t(i)\mid i\in INTS_{FOL}\}\). Let \(M_{FOL}(s)\) be the set of the models of a FOL sentence \(s\). Let \(M_{RDF}(g)\) be the set of the models of an RDF graph \(g\). Given a FOL sentence \(s\) expressed in terms of a FOL theory \(T\), the translation function \(*\) must satisfy the following condition \[t(M_{FOL}(T\cup\{s\}))\subset M_{RDF}(*(s)\cup o).\] ## 4 Formal model of OntoMath\({}^{\text{PRO}}\) 2.0 ontology In this section we describe the model for the new version of the OntoMath\({}^{\text{PRO}}\) 2.0 ontology. Like the first version, OntoMath\({}^{\text{PRO}}\) 2.0 is expressed by the OWL 2 DL formalism. The examples below are expressed in the Turtle serialization of RDF and OWL. According to the model, OntoMath\({}^{\text{PRO}}\) 2.0 is organized in three layers: 1. **Domain ontology layer**, which contains language-independent math concepts. 2. **Linguistic layer**, containing multilingual lexicons that provide linguistic grounding of the concepts from the domain ontology layer. 3. **Foundational ontology layer**, which provides the concepts with meta-ontological annotations. This three-layered structure is represented in Figure 1. The domain ontology layer is organized in two main hierarchies: the hierarchy of objects and the hierarchy of reified relationships. 
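As a programmatic preview of the reified-relationship encoding detailed below, the following Python/rdflib sketch builds an RDF graph for one relationship instance. The class and property URIs (e.g. `omp:DivisibilityRelationship`) are assumptions modeled on the `omp:` namespace used in the Turtle example later in this section, not the actual ontology identifiers:

```python
from rdflib import Graph, Namespace, BNode
from rdflib.namespace import RDF

OMP = Namespace("http://ontomathpro.org/omp2#")

def reify_atomic_fact(relation_class, arguments):
    """Encode an atomic FOL fact R(c1, ..., cn) as an instance of a reified relationship class."""
    g = Graph()
    g.bind("omp", OMP)
    rel = BNode()  # the reified relationship instance
    g.add((rel, RDF.type, relation_class))   # _:rel rdf:type <relationship class>
    for arg in arguments:
        g.add((rel, OMP.hasArgument, arg))   # _:rel omp:hasArgument <argument>
    return g

# E.g., "the number m divides the number n" as a Divisibility relationship instance:
g = reify_atomic_fact(OMP.DivisibilityRelationship, [OMP.n, OMP.m])
print(g.serialize(format="turtle"))
```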
Figure 2: Description of the role concept _Degree of a polynomial_ in WebProtegé Figure 2 depicts a description of the role concept _Degree of a polynomial_ in the WebProtege editor. The description contains the names of the concept in Russian and English, the metaontological annotation, the parent concept and a relation with the type concept _Polynomial_. There are two meta-ontological types of the concepts: kinds and roles. A kind is a concept that is rigid and ontologically independent [25], [26]. So, for example, the _Integer_ concept is a kind, because any integer is always an integer, regardless of its relationship with other objects. A role is a concept that is anti-rigid and ontologically dependent [25], [26]. An object can be an instance of a role class only by virtue of its relationship with another object. So, for example, the _Degree of a polynomial_ concept is a role, since any integer is a degree of a polynomial not by itself, but only in relation to a certain polynomial. Any role concept is a subclass of some kind concept. For example, the _Degree of a polynomial_ role concept is a subclass of the _Natural number_ kind concept. Relations between concepts are represented in the ontology in a reified form, i.e., as concepts, not as object properties (such representation fits the standard ontological pattern for representing an \(N\)-ary relation with no distinguished participant [27], but is applied to binary relations too). Thus, the relationships between concepts are first-order entities, and can be a subject of a statement. Reified relationships are linked to their participants by _has argument_ object properties and their subproperties. Figure 3: An example of a materialized relationship, and its instance corresponding to the “The number \(m\) divides the number \(n\)” statement (see [28]). Figure 3 shows one of the relations, represented by the _Divisibility relationship_ concept. This relation is linked to its participants, represented by the _Dividend_ and _Divisor_ role concepts. These roles, in turn, are defined as subclasses of the _Natural number_ kind concept. The bottom of the figure depicts an instance of this relation, namely the _Divisibility relationship between the dividend \(n\) and the divisor \(m\)_, that binds the natural number \(n\) and the natural number \(m\) (see also [28]). This instance is a representation of the natural language statement "The number \(m\) divides the number \(n\)". The mappings between ontology concepts and corresponding natural language statements are defined at the linguistic level of the ontology. A construction with a reified relationship is an RDF translation of a mathematical sentence that is expressed in FOL as an atomic formula. Translation of a formula from FOL to RDF is defined as follows. Let \(T\) be a mathematical theory, \(RNames_{math}\) the set of predicate letters of \(T\), and \(Const_{math}\) the set of constants of \(T\). Let \(RNames_{owl}\) be the set of URIs of classes from the hierarchy of reified relationships of the OntoMath\({}^{\text{PRO}}\) ontology and \(Const_{owl}\) the set of URIs denoting mathematical objects. Let \(rmap:RNames_{math}\to RNames_{owl}\) and \(cmap:Const_{math}\to Const_{owl}\) be the corresponding mapping functions. A FOL sentence \(R(c_{1},...,c_{n})\), where \(R\in RNames_{math}\) and \(c_{1},...,c_{n}\in Const_{math}\), is translated to the following RDF graph @prefix rdf: <[http://www.w3.org/1999/02/22-rdf-syntax-ns#](http://www.w3.org/1999/02/22-rdf-syntax-ns#)>. 
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix omp: <http://ontomathpro.org/omp2#> .

_:rel rdf:type pmap(R) .
_:rel omp:hasArgument cmap(c1) .
...
_:rel omp:hasArgument cmap(cn) .

The mappings between ontology concepts and the corresponding natural language statements are defined at the linguistic level of the ontology. The linguistic layer contains multilingual lexicons that provide linguistic grounding of the concepts from the domain ontology layer. Currently we are developing the lexicons for Russian and English. A lexicon consists of:

* Lexical entries, denoting mathematical concepts. Examples of lexical entries are "number", "prime number", "degree of a polynomial", "to intersect", etc.
* Forms of lexical entries (in different numbers, cases, tenses, etc.).
* Syntactic trees of multi-word lexical entries.
* Syntactic frames of lexical entries. A syntactic frame represents the syntactic behavior of a predicate, defining the set of syntactic arguments this predicate requires and their mappings to ontological entities. For example, a syntactic frame of the "to divide" verb determines that in the phrase "\(X\) divides \(Y\)", the subject \(X\) represents the divisor and the direct object \(Y\) represents the dividend (Figures 4, 5).

The lexicons are expressed as Linguistic Linked Open Data (LLOD) datasets in terms of the Lemon [29], [30], LexInfo [31], OLiA [32] and PreMOn [33] ontologies (see also [34]). The Lemon ontology is used to represent complex lexical resources. This ontology is based on the international standard ISO 24613:2008 "Language resource management - Lexical markup framework (LMF)" [35]. The basic elements of this ontology are lexicons, lexical entries, forms of a lexical entry, senses of a lexical entry, and concepts from ontologies of subject domains. To describe language categories (gender, number, case, tense, direct object, indirect object, synonym, antonym, etc.) in the Lemon ontology, the external LexInfo ontology is used. To describe language categories in LLOD, different ontologies are used that are linked to the ISOcat data category registry, which is the implementation of the international standard ISO 12620:2009 "Terminology and other language and content resources - Specification of data categories and management of a Data Category Registry for language resources" [36].

Figure 4: Syntactic frame for the “divide” verb and its mapping to the _Divisibility_ relationship. The subject and the object syntactic arguments of the frame are mapped to the _Dividend_ and _Divisor_ arguments of the relationship.

## 5 Replenishing the ontology by new concepts

When developing the ontology, various sources of mathematical knowledge are used, including articles from mathematical journals [38], [39]. Despite the rather significant number of mathematical terms already covered, the replenishment of the ontology is an urgent task. Mathematical encyclopedias and reference books are good sources for replenishing the ontology. We compared the terminology of the ontology with the terminological index of the "Mathematical Handbook for Scientists and Engineers" by G. Korn and T. Korn [40], using a specially developed program, and obtained quite interesting results. The comparison was carried out only in the fields of mathematics that are represented in the ontology.
In total, there are 2227 terms in Korn's reference book, of which 791 terms overlap with terms from the ontology. When comparing terms, a cosine measure was used with text preprocessing. The preprocessing consisted of removing punctuation marks and stop words, and lemmatizing all words using the pymorphy2 library [41], [42]. The threshold value for an acceptable measure of similarity was chosen to be 0.7.

Figure 5: Syntactic frame for the “divide” verb and its mapping to the _Divisibility_ relationship in the LLOD format

The following situations were identified when comparing the terminology:

* use of incomplete labels in the ontology (_Riemann-Stieltjes integral_ / _Riemann-Stieltjes probability integral_ [40]; _Cesaro summable series_ / _summable series by Cesaro method_ [40], etc.);
* the comparison scores are high (above the threshold value of 0.7), but in fact a specific term and a general term were compared (e.g., _Stormer interpolation formula_ / _interpolation formula_ [40]; _Gaussian interpolation formula_ / _interpolation formula_ [40]; _Adams interpolation formula_ / _Adams formula_ [40]).

In the last example, professional review is required to match the terms. Thus, replenishment of the ontology is a very non-trivial task.

## 6 Conclusions

In this article, we discussed the previous version of the OntoMath\({}^{\text{PRO}}\) ontology and the reasons why it became necessary to improve the formal model for representing mathematical knowledge. We have described the new formal model that underlies the new version of the OntoMath\({}^{\text{PRO}}\) ontology of professional mathematics. According to this formal model, the ontology is organized into three layers: a foundational ontology layer, a domain ontology layer and a linguistic layer. The developed formal model allows representing mathematical statements in the Linked Open Data cloud. Our work on the new version of the ontology is determined by the proposed model and proceeds in the following steps:

1. Internal verification and correction of the taxonomy of the ontology.
2. Providing the concepts with meta-ontological annotations.
3. Development of materialized relationships.
4. Development of the linguistic layer.
5. Replenishing the ontology by new concepts.
6. External verification of the new version of the ontology in such tasks as information extraction and semantic search.

At the current stage, we have performed the internal verification of the taxonomy and provided the concepts with meta-ontological annotations. Currently, we are working on the development of reified relationships and have developed several experimental ones.

**Funding.** The work was funded by the Russian Science Foundation according to research project no. 21-11-00105.
2301.05559
Supercurrent and Electromotive force generations by the Berry connection from many-body wave functions
The velocity field composed of the electromagnetic field vector potential and the Berry connection from many-body wave functions explains supercurrent generation, Faraday's law for the electromotive force (EMF) generation, and other EMF generations whose origins are not electromagnetism. An example calculation for the EMF from the Berry connection is performed using a model for the cuprate superconductivity.
Hiroyasu Koizumi
2023-01-11T07:59:04Z
http://arxiv.org/abs/2301.05559v1
Supercurrent and Electromotive force generations by the Berry connection from many-body wave functions

###### Abstract

The velocity field composed of the electromagnetic field vector potential and the Berry connection from many-body wave functions explains supercurrent generation, Faraday's law for the electromotive force (EMF) generation, and other EMF generations whose origins are not electromagnetism. An example calculation for the EMF from the Berry connection is performed using a model for the cuprate superconductivity.

* January 2023

## 1 Introduction

The Berry phase, first discovered in the context of the adiabatic approximation, now prevails in various fields of physics [1, 2]. In particular, it is now an indispensable mathematical tool to detect topological defects in quantum wave functions [3]. Recently, the Berry connection from many-body wave functions was defined and its usefulness for calculating the supercurrent was demonstrated [4]. A salient feature of such a formalism is that it provides a vector potential directly related to the velocity field for the electric current. In the present work, we consider the supercurrent and electromotive force (EMF) generations based on the same formalism [4, 5]. The EMF is expressed using a non-irrotational 'electric field', \({\bf E}_{\rm non-irrot}\), whose origin may not be a real electric field. It is defined as \[{\cal E}=\oint_{C}{\bf E}_{\rm non-irrot}\cdot d{\bf r} \tag{1}\] where \(C\) is a closed electric circuit. This EMF appears due to various causes, such as chemical reactions in batteries or temperature differences in metals. One of the important EMF generation mechanisms is Faraday's law of magnetic induction. It is expressed as the total time-derivative of the magnetic flux of the magnetic field \({\bf B}\) \[{\cal E}=-\frac{d}{dt}\int_{S}{\bf B}\cdot d{\bf S} \tag{2}\] where \(S\) is a surface whose circumference is \(C\). This EMF formula is often called the "flux rule", since \(\int_{S}{\bf B}\cdot d{\bf S}\) is the magnetic flux through the surface \(S\); it has been regarded as curious since it combines two different fundamental equations of classical theory [6], i.e., Faraday's law of induction and the Lorentz force. The curiosity is increased by the fact that one of them is an equation for fields only, while the other includes particles and is an equation for a force on a particle. This peculiarity disappears in quantum theory using the vector potential \({\bf A}\), which is more fundamental than the magnetic field \({\bf B}\) [7, 8, 9], and the wave function makes the velocity of a particle a velocity field [10]. Then, the two contributions in the "flux rule" are connected by the duality that a \(U(1)\) phase factor added on a wave function describes a whole-system motion, and also plays the role of the vector potential when it is transferred into the Hamiltonian [11]. In the present work, we extend the above vector potential and velocity field approach for electric current generation to cases where the vector potential of the Berry connection from many-body wave functions appears [4]. We show that EMF generation of other than electromagnetic-field origin, such as that due to chemical reactions or temperature gradients, can be expressed by it. The organization of the present work is as follows: we explain the velocity field appearing from the Berry connection from many-body wave functions in Section 2.
We reexamine the Faraday's EMF generation formula using the velocity field from the electromagnetic vector potential in Section 3. We examine the EMF generation by the Berry connection in Section 4, and an example calculation is performed for the Nernst effect in Section 5. Lastly, we conclude the present work by mentioning implications of the present new theory in Section 6. The velocity field from the Berry connection form many-body wave functions and supercurrent generation The key ingredient in the present work is the Berry connection from many-body wave functions for electrons given by \[{\bf A}_{\Psi}^{\rm MB}({\bf r})\!=\!\frac{1}{\hbar\rho({\bf r})}{\rm Re} \left\{\int d\sigma_{1}d{\bf x}_{2}\cdots d{\bf x}_{N}\Psi^{*}({\bf r},\sigma_ {1},\cdots,{\bf x}_{N})(-i\hbar\nabla)\Psi({\bf r},\sigma_{1},\cdots,{\bf x}_ {N})\right\} \tag{3}\] where \(N\) is the total number of electrons in the system, 'Re' denotes the real part, \(\Psi\) is the total wave function, \({\bf x}_{i}\) collectively stands for the coordinate \({\bf r}_{i}\) and the spin \(\sigma_{i}\) of the \(i\)th electron, \(-i\hbar\nabla\) is the Schrodinger's momentum operator for the coordinate vector \({\bf r}\), and \(\rho({\bf r})\) is the number density calculated from \(\Psi\). This Berry connection is obtained by regarding \({\bf r}\) as the "adiabatic parameter"[1]. Let us consider the electron system whose kinetic energy operator in the Schrodinger representation is given by \[\hat{T}=-\sum_{j=1}^{N}\frac{\hbar^{2}}{2m_{e}}\nabla_{j}^{2} \tag{4}\] where \(m_{e}\) is the electron mass. For convenience, we also use the following \(\chi\) defined as \[\chi({\bf r})=-2\int_{0}^{\bf r}{\bf A}_{\Psi}^{\rm MB}({\bf r}^{ \prime})\cdot d{\bf r}^{\prime} \tag{5}\] and express the many-electron wave function \(\Psi\) as \[\Psi({\bf x}_{1},\cdots,{\bf x}_{N})=\exp\left(-\frac{i}{2}\sum_{j =1}^{N}\chi({\bf r}_{j})\right)\Psi_{0}({\bf x}_{1},\cdots,{\bf x}_{N}) \tag{6}\] Then, \(\Psi_{0}=\Psi\exp\left(\frac{i}{2}\sum_{j=1}^{N}\chi({\bf r}_{j})\right)\) is a currentless wave function for the current operator associated with \(\hat{T}\) in Eq. (4) since the contribution from \(\Psi\) and that from \(\exp\left(\frac{i}{2}\sum_{j=1}^{N}\chi({\bf r}_{j})\right)\) cancel out. In other words, a wave function is given as a product of a currentless one, \(\Psi_{0}\), and the factor for the current \(\exp\left(-\frac{i}{2}\sum_{j=1}^{N}\chi({\bf r}_{j})\right)\). The total wave function \(\Psi\) must be a single-valued function of coordinates. This makes \(\chi\) as an angular variable that satisfies some periodicity. This periodicity gives rise to non-trivial topological integer as will be explained, shortly. When electromagnetic field is included, the kinetic energy operator becomes \[\hat{T}^{\prime}=\sum_{j=1}^{N}\frac{1}{2m_{e}}(-i\hbar\nabla_{j} -q{\bf A})^{2} \tag{7}\] where \(q=-e\) is the electron charge, and \({\bf A}\) is the electromagnetic field vector potential. The magnetic field is given by \({\bf B}=\nabla\times{\bf A}\). In the following, we will use the same expression, \(\Psi\), for the total wave function. Then, the current density for \(\Psi\) is given by \[{\bf j}=-e\rho{\bf v} \tag{8}\] with the velocity field \({\bf v}\) given by \[{\bf v} = \frac{e}{m_{e}}\left({\bf A}-\frac{\hbar}{2e}\nabla\chi\right) \tag{9}\] \[= \frac{e}{m_{e}}{\bf A}+\frac{\hbar}{m_{e}}{\bf A}_{\Psi}^{\rm MB}\] The current density in Eq. 
(8) is known to give rise to the Meissner effect if it is a stable one due to the fact that it explicitly depends on \({\bf A}\)[10]. For the stable current case, \(\nabla\chi\) compensates the gauge ambiguity in \({\bf A}\) and makes \({\bf v}\) in Eq. (9) gauge invariant. If the Meissner effect is realize, the magnetic filed is expelled from the bulk of a superconductor [10]. Then, the flux quantization is observed for magnetic flux through a loop \(C\) that goes through the bulk of a ring-shaped superconductor \[\int_{S}{\bf B}\cdot d{\bf S} = \oint_{C}{\bf A}\cdot d{\bf r} \tag{10}\] \[= \frac{\hbar}{2e}\oint_{C}\nabla\chi\cdot d{\bf r}\] \[= \frac{h}{2e}w_{C}[\chi]\] where \(w_{C}[\chi]\) is the topological integer 'winding number' defined by \[w_{C}[\chi]=\frac{1}{2\pi}\oint_{C}\nabla\chi\cdot d{\bf r} \tag{11}\] According to Eq. (9), the presence of non-zero \(w_{C}[\chi]\) means the existence of the stable velocity field that satisfies \[\oint_{C}{\bf v}\cdot d{\bf r}=\frac{h}{2m_{e}}w_{C}[\chi] \tag{12}\] In superconductors, the quantized flux persists. This means that the condition \[\frac{d}{dt}w_{C}[\chi]=0 \tag{13}\] is realized. In normal metals, the time-derivative of the velocity field is often expressed as \[\frac{d{\bf v}}{dt}=-\frac{1}{\tau}{\bf v} \tag{14}\] using a relaxation time approximation, where \(\tau\) is the relaxation time. Combination of this with Eq. (12) yields \[\tau\frac{d}{dt}w_{C}[\chi]=-w_{C}[\chi] \tag{15}\] If the condition in Eq. (13) with nonzero \(w_{C}[\chi]\) is realized, Eq. (15) means that \(\tau\) must be \(\infty\), i.e., an infinite conductivity, or zero resistivity is realized. ## 3 The vorticity field from the vector potential A and Faraday's flux rule In this section, we consider the case where non-trivial \({\bf A}_{\Psi}^{\rm MB}\) is absent. When \({\bf A}_{\Psi}^{\rm MB}\) is trivial, it satisfies \[\nabla\times{\bf A}_{\Psi}^{\rm MB}=0 \tag{16}\] Thus, by applying \(\nabla\times\) on the both sides of Eq. (9) \[\nabla\times{\bf v}=\frac{e}{m_{e}}{\bf B} \tag{17}\] is obtained. Taking the total time-derivative of the above yields \[\nabla\times\frac{d{\bf v}}{dt}=\frac{e}{m_{e}}\partial_{t}{\bf B}+\frac{e}{m _{e}}({\bf v}\cdot\nabla){\bf B} \tag{18}\] where the total time-derivative of the field \({\bf B}\) is the Eulerian time-derivative given by \[\frac{d{\bf B}}{dt}=\partial_{t}{\bf B}+({\bf v}\cdot\nabla){\bf B} \tag{19}\] Integrating Eq. (18) over the surface \(S\), we have \[\oint_{C}\frac{d{\bf v}}{dt}\cdot d{\bf r}=\frac{e}{m_{e}}\int_{S}\partial_{t}{ \bf B}\cdot d{\bf S}+\frac{e}{m_{e}}\int_{S}({\bf v}\cdot\nabla){\bf B}\cdot d{ \bf S} \tag{20}\] where the Stokes theorem is used to convert the surface integral to the line integral. Noting that the electromotive force for an electron is given by \[{\cal E}=\frac{1}{-e}\oint_{C}\frac{d(m_{e}{\bf v})}{dt}\cdot d{\bf r} \tag{21}\] where \(-e\) is the electron charge and \(m_{e}\) is the electron mass, the following relation is obtained \[{\cal E}=-\int_{S}\partial_{t}{\bf B}\cdot d{\bf S}-\int_{S}({\bf v}\cdot \nabla){\bf B}\cdot d{\bf S} \tag{22}\] This is equal to the Faraday's formula in Eq. (2). In the situation where the circuit \(C\) moves with a constant velocity \({\bf v}_{0}\), we have the following relation \[({\bf v}_{0}\cdot\nabla){\bf B} = \nabla\times({\bf B}\times{\bf v}_{0})+{\bf v}_{0}(\nabla\cdot{ \bf B}) \tag{23}\] \[= \nabla\times({\bf B}\times{\bf v}_{0})\] due to the fact that \({\bf B}\) satisfies \(\nabla\cdot{\bf B}=0\)[12]. 
As a consequence, the well-known EMF formula \[{\cal E}=-\int_{S}\partial_{t}{\bf B}\cdot d{\bf S}+\oint_{C}({\bf v}_{0} \times{\bf B})\cdot d{\bf r} \tag{24}\] is obtained. The first term in it is attributed to the Faraday's law of induction, and the second to the Lorentz force. This formula is composed of two different fundamental equations in classical theory [6]. However, in the quantum mechanical formalism, two contributions stem from a single relation in Eq. (9). ## 4 The EMF generation by the Berry connection The velocity field in Eq. (9) contains the vector potential \({\bf A}_{\Psi}^{\rm MB}\) in addition to the electromagnetic vector potential \({\bf A}\). Just like \({\bf A}\), \({\bf A}_{\Psi}^{\rm MB}\) will also give rise to the EMF. We now consider a general case where the Berry connection arises from a set of states \(\{\Psi_{j}\}\) and given by \[{\bf A}^{\rm MB}=\sum_{j}p_{j}{\bf A}_{\Psi_{j}}^{\rm MB} \tag{25}\] where \(p_{j}\)'s are probabilities satisfy \[\sum_{j}p_{j}=1 \tag{26}\] and \({\bf A}_{\Psi_{j}}^{\rm MB}\) is obtained from Eq. (3) by replacing \(\Psi\) with \(\Psi_{j}\). We express \({\bf A}^{\rm MB}\) using the following density matrix \[\hat{d}=\sum_{j}p_{j}|\Psi_{j}\rangle\langle\Psi_{j}| \tag{27}\] where the operator \(\hat{\bf A}^{\rm MB}\) is defined through the relation \[\langle\Psi_{j}|\hat{\bf A}^{\rm MB}|\Psi_{j}\rangle={\bf A}^{\rm MB}_{\Psi_{j}} \tag{28}\] From now on, we allow the time-dependence in \(\Psi_{j}\). When \(\Psi_{j}\) is time-dependent, \({\bf A}^{\rm MB}_{\Psi_{j}}\) is also time-dependent. The distribution probability \(p_{j}\) can be also time and coordinate dependent. Using the density operator \(\hat{d}\) and the operator \(\hat{\bf A}^{\rm MB}\), the vector potential from the Berry connection is given by \[{\bf A}^{\rm MB}={\rm tr}\left(\hat{d}\hat{\bf A}^{\rm MB}\right) \tag{29}\] We define \({\bf B}^{\rm MB}\) by \[{\bf B}^{\rm MB}=\nabla\times{\bf A}^{\rm MB} \tag{30}\] Then, the EMF from the Berry connection is given by \[{\cal E}^{\rm MB}=-\frac{\hbar}{e}\int_{S}\partial_{t}{\bf B}^{\rm MB}\cdot d {\bf S}-\frac{\hbar}{e}\int_{S}({\bf v}\cdot\nabla){\bf B}^{\rm MB}\cdot d{ \bf S} \tag{31}\] The first term in the right hand side can arise from the time-dependence of \(p_{j}\). This means that if \(p_{j}\) varies with time due to chemical reactions, photo excitations, or etc. it will give rise to the EMF. The second term will arise if the temperature depends on the coordinate, \(T({\bf r})\), and \(p_{j}\) contains the Boltzmann factor \(\exp(-\frac{E_{j}}{k_{B}T({\bf r})})\), where \(E_{j}\) is the energy for the state \(\Psi_{j}\). It also arises when \(p_{j}\) depends on the coordinate due, for example, to the concentration gradient of chemical spices. Now we consider the case where the circuit moves with a constant vector \({\bf v}_{0}\). The circuit in this case should be regarded as a region of the system which flows due to the flow existing in the system. Such a motion may arise from a temperature gradient or concentration gradient in the system. In this case, we have the following relation, \[({\bf v}\cdot\nabla){\bf B}^{\rm MB}=-\nabla\times({\bf v}_{0}\times{\bf B}^{ \rm MB}) \tag{32}\] due to the fact that \(\nabla\cdot{\bf B}^{\rm MB}=\nabla\cdot(\nabla\times{\bf A}^{\rm MB})=0\). 
The equation (31) can be cast into the following form \[{\cal E}^{\rm MB}=-\frac{\hbar}{e}\oint_{C}\left[\partial_{t}{\bf A}^{\rm MB}- {\bf v}_{0}\times(\nabla\times{\bf A}^{\rm MB})\right]\cdot d{\bf r} \tag{33}\] that only contains \({\bf A}^{\rm MB}\). However, the above formula may not be convenient to use due to the fact that \({\bf A}^{\rm MB}\) contains topological singularities. A convenient one may be the following \[{\cal E}^{\rm MB}=-\frac{\hbar}{e}\frac{d}{dt}\int_{S}{\bf B}^{\rm MB}\cdot d{ \bf S} \tag{34}\] where \({\bf B}\) in the Faraday's law in Eq. (2) is replaced by \({\bf B}^{\rm MB}\). ## 5 Nernst effect In this section, we examine the Nernst effect observed in cuprate superconductors [13, 14, 15]. We examine this phenomenon using Eq. (34). A theory of superconductivity in the cuprate predicts the appearance of spin-vortices in the CuO\({}_{2}\) plane around doped holes that become small polarons [16, 17, 18]. The spin-vortices generate the vector potential \[{\bf A}^{\rm MB}=-\frac{1}{2}\nabla\chi \tag{35}\] where \(\chi\) is an angular variable with period \(2\pi\). This angular variable appears due to the requirement that the wave function to be a single-valued function of coordinates in the situation where itinerant motion of electrons around the small polaron hole is a spin-twisting one. We can decompose \(\chi\) as a sum over spin-vortices \[\chi=\sum_{j=1}^{N_{h}}\chi_{j} \tag{36}\] where \(\chi_{j}\) is a contribution form the \(j\)th small polaron hole, and \(N_{h}\) is the total number of holes that become small polarons. Each \(\chi_{j}\) is characterized by its winding number \[w_{j}=\frac{1}{2\pi}\oint_{C_{j}}\nabla\chi_{j}\cdot d{\bf r} \tag{37}\] where \(C_{j}\) is a loop that only encircles the center of the \(j\)th spin-vortex. We can assume \(w_{j}\) to be \(+1\) or \(-1\); only odd integers are allowed due to the spin-twisting motion. The numbers \(\pm 1\) are favorable from the energetic point of view. Figure 1: A schematic picture for the EMF appearing from the Berry connection generated by spin-vortices. The Berry connection creates the vector potential proportional to \(\nabla\chi\), which creates vortices (loop currents) denoted by circles with arrows. We consider two loops \(C(t)\) and \(C(t+\Delta t)\), where \(t\) and \(t+\Delta t\) denote two times with interval \(\Delta t\). The loop moves with velocity \(v_{0}\) in the \(x\)-direction due to the temperature gradient in that direction. A constant magnetic field is applied in the \(z\)-direction. A voltage is generated across the \(y\)-direction. The sample exists \(0\leq y\leq L_{y}\). The left edge of the loop at time \(t\) is \(x_{0}\) and that at time \(t+\Delta t\) is \(x_{0}+v_{0}\Delta t\). Let us consider the situation depicted in Fig. 1. We neglect the contribution from \({\bf A}\) assuming that it is small. 
The EMF generated across the sample in the \(y\)-direction is given by \[{\cal E}^{\rm MB} = -\frac{\hbar}{e}\frac{1}{\Delta t}\left[\int_{S(t+\Delta t)}{\bf B}^{\rm MB}\cdot d{\bf S}-\int_{S(t)}{\bf B}^{\rm MB}\cdot d{\bf S}\right] \tag{38}\] \[= -\frac{\hbar}{e}\frac{1}{\Delta t}\left[\oint_{C(t+\Delta t)}{\bf A}^{\rm MB}\cdot d{\bf r}-\oint_{C(t)}{\bf A}^{\rm MB}\cdot d{\bf r}\right]\] \[= \frac{\hbar}{e}\frac{1}{\Delta t}\oint_{\Delta C}{\bf A}^{\rm MB}\cdot d{\bf r}\] where \(S(t+\Delta t)\) and \(S(t)\) are surfaces in the \(xy\)-plane with circumferences \(C(t+\Delta t)\) and \(C(t)\), respectively; \(\Delta C\) is the loop encircling the area \(x_{0}\leq x\leq x_{0}+v_{0}\Delta t\), \(0\leq y\leq L_{y}\), with the counterclockwise direction. We approximate \(\oint_{\Delta C}{\bf A}^{\rm MB}\cdot d{\bf r}\) by \[\oint_{\Delta C}{\bf A}^{\rm MB}\cdot d{\bf r} = -\frac{1}{2}\oint_{\Delta C}\nabla\chi\cdot d{\bf r} \tag{39}\] \[\approx -\frac{1}{2}2\pi(n_{m}-n_{a})L_{y}v_{0}\Delta t\] where \(n_{m}\) and \(n_{a}\) are the average densities of \(w_{j}=1\) ('meron') and \(w_{j}=-1\) ('antimeron') vortices, respectively. Thus, \(n_{m}L_{y}v_{0}\Delta t\) and \(n_{a}L_{y}v_{0}\Delta t\) are the expected numbers of \(w_{j}=1\) and \(w_{j}=-1\) vortices within the loop \(\Delta C\), respectively. From Eqs. (38) and (39), the approximate \({\cal E}^{\rm MB}\) is given by \[{\cal E}^{\rm MB}\approx\frac{hv_{0}}{2e}(n_{a}-n_{m})L_{y} \tag{40}\] Thus, the electric field generated by \({\cal E}^{\rm MB}\) in the \(y\)-direction is given by \[E_{y}\approx\frac{hv_{0}}{2e}(n_{a}-n_{m}) \tag{41}\] In our previous work, \(n_{a}\) is denoted as \(n_{d}\), indicating that it yields a diamagnetic current, and \(n_{m}\) as \(n_{p}\), indicating that it yields a paramagnetic current [17, 18]. Using \(n_{d}\) and \(n_{p}\), the Nernst signal is obtained as \[e_{N}=\frac{E_{y}}{|\partial_{x}T|}=\frac{hv_{0}(n_{d}-n_{p})}{2e|\partial_{x}T|} \tag{42}\] The same formula was obtained previously for the situation where spin-vortices move due to the temperature gradient [17, 18]. Here, the situation is different; the spin-vortices do not move, but the electron system affected by \(\nabla\chi\) moves. Considering that the small polaron movement is negligible at low temperature, the present situation is more realistic than the previous one. The temperature dependence is the same as the one that qualitatively explains the experimental result [18]. Note that there are experiments indicating the presence of loop currents different from ordinary Abrikosov vortices [19] in the cuprate [20, 21]. The present result indicates that the observed Nernst signal can be explained by the presence of spin-vortex-induced loop currents.

## 6 Concluding remarks

Since the EMF from the Berry connection is not of electromagnetic-field origin, it may be more appropriate to call it the Berry-connection motive force (BCMF), given by \[\mathcal{F}^{\mathrm{BCMF}}=-e\mathcal{E}^{\mathrm{MB}}=\hbar\frac{d}{dt}\int_{S}\mathbf{B}^{\mathrm{MB}}\cdot d\mathbf{S} \tag{43}\] The BCMF will arise from quantum mechanical dynamics of particles other than electrons; for example, from proton dynamics through chemical reactions. The non-trivial Berry phase effect has been predicted [22], and observed in hydrogen transfer reactions [23]. Quantum mechanical effects are important in such reactions due to the relatively light mass of protons [24, 25].
It is known that EMF generation by proton pumps is a very important chemical process in biological systems, and the Berry-connection motive force may play some role in the working of the proton pumps. It may also be useful for inventing high-performance batteries.
2301.02236
A Minimization Problem with Free Boundary for $p$-Laplacian weakly coupled System
In this paper we consider a weakly coupled $p$-Laplacian system of a Bernoulli type free boundary problem, through minimization of a corresponding functional. We prove various properties of any local minimizer and the corresponding free boundary.
Morteza Fotouhi, Henrik Shahgholian
2023-01-05T18:59:23Z
http://arxiv.org/abs/2301.02236v1
# A minimization problem with free boundary for \(p\)-Laplacian weakly coupled system

###### Abstract.

In this paper we consider a weakly coupled \(p\)-Laplacian system of a Bernoulli type free boundary problem, through minimization of a corresponding functional. We prove various properties of any local minimizer and the corresponding free boundary.

Key words and phrases: p-Laplacian, minimizers, free boundary regularity, system 2020 Mathematics Subject Classification: 35R35. This project was carried out during the program Geometric aspects of nonlinear PDE at Institut Mittag-Leffler, Stockholm, Sweden. H. Shahgholian was supported by the Swedish Research Council.

Then \(\mathbf{u}_{k}-\mathbf{g}\) is bounded in \(W^{1,p}_{0}(\Omega;\mathbb{R}^{m})\) and up to a subsequence we can assume that \[\mathbf{u}_{k}\rightharpoonup\mathbf{u},\quad\text{ weakly in }W^{1,p}(\Omega;\mathbb{R}^{m}),\] \[\mathbf{u}_{k}\to\mathbf{u},\quad\text{a.e. in }\Omega,\] for some \(\mathbf{u}\in\mathcal{K}\). The latter convergence implies \[\int_{\Omega}\chi_{\{|\mathbf{u}_{k}|>0\}}dx\to\int_{\Omega}\chi_{\{|\mathbf{u}|>0\}}dx,\] and the weak lower semicontinuity of the norm implies that \[\int_{\Omega}\sum_{i=1}^{m}|\nabla u^{i}|^{p}\,dx\leq\liminf_{k\to\infty}\int_{\Omega}\sum_{i=1}^{m}|\nabla u^{i}_{k}|^{p}\,dx,\] so that \(J(\mathbf{u})\leq\liminf_{k\to\infty}J(\mathbf{u}_{k})\) and \(\mathbf{u}\) is a minimizer.

## 3. Regularity of local minimizers

**Lemma 3.1**.: _Let \(\mathbf{u}\) be a (local) minimizer of \(J\), and \(v^{i}\) the \(p\)-harmonic replacement (majorant) for \(u^{i}\) in \(B\subset\Omega\) (for \(B\) a small ball). Then there is a universal constant \(C=C(n,p)\) such that_ \[\int_{B}|\nabla(u^{i}-v^{i})|^{p}\,dx \leq CQ_{\max}^{p}\,|\{|\mathbf{u}|=0\}\cap B|,\qquad\text{ when }2\leq p,\] \[\int_{B}|\nabla(u^{i}-v^{i})|^{p}\,dx \leq C(Q_{\max})^{p^{2}/2}|\{|\mathbf{u}|=0\}\cap B|^{p/2}\left(\int_{B}|\nabla u^{i}|^{p}\,dx\right)^{1-p/2},\text{ when }1<p\leq 2.\] Proof.: Let \(v^{j}=u^{j}\) for all \(j\neq i\), and extend \(v^{i}\) by \(u^{i}\) in \(\Omega\setminus B\). If \(B\) is small enough (when \(\mathbf{u}\) is an absolute minimizer, we do not need this assumption), then we have \(J(\mathbf{u})\leq J(\mathbf{v})\) and consequently \[\int_{B}|\nabla u^{i}|^{p}-|\nabla v^{i}|^{p}\,dx\leq Q_{\max}^{p}\left|\{|\mathbf{u}|=0\}\cap B\right|.\] Set now \(w_{s}(x)=su^{i}(x)+(1-s)v^{i}(x)\) for \(0\leq s\leq 1\).
Then \[\int_{B}|\nabla u^{i}|^{p}-|\nabla v^{i}|^{p}\,dx =\int_{B}|\nabla w_{1}|^{p}-|\nabla w_{0}|^{p}\,dx\] \[= p\int_{0}^{1}ds\int_{B}|\nabla w_{s}|^{p-2}\nabla w_{s}\cdot \nabla(u^{i}-v^{i})\,dx\] \[= p\int_{0}^{1}ds\int_{B}\left(|\nabla w_{s}|^{p-2}\nabla w_{s}-| \nabla v^{i}|^{p-2}\nabla v^{i}\right)\cdot\nabla(u^{i}-v^{i})\,dx\] \[= p\int_{0}^{1}\frac{ds}{s}\int_{B}\left(|\nabla w_{s}|^{p-2} \nabla w_{s}-|\nabla v^{i}|^{p-2}\nabla v^{i}\right)\cdot\nabla(w_{s}-v^{i}) \,dx,\] where for the third equality we have used \(\Delta_{p}v^{i}=0\). Next using \[\left(|b|^{p-2}b-|a|^{p-2}a\right)\cdot(b-a)\geq\gamma\left\{\begin{array}{ ll}|b-a|^{2}(|b|+|a|)^{p-2},&1<p\leq 2,\\ |b-a|^{p},&2\leq p,\end{array}\right.\] we obtain for \(p\geq 2\) \[\int_{B}|\nabla u^{i}|^{p}-|\nabla v^{i}|^{p}\,dx\geq\gamma p\int_{0}^{1} \frac{ds}{s}\int_{B}|\nabla(w_{s}-v^{i})|^{p}\,dx=\gamma p\int_{0}^{1}s^{p-1} ds\int_{B}|\nabla(u^{i}-v^{i})|^{p}\,dx,\] which shows the desired estimate for \(p\geq 2\). In case \(1<p\leq 2\), we have \[\int_{B}|\nabla u^{i}|^{p}-|\nabla v^{i}|^{p}\,dx \geq\gamma p\int_{0}^{1}\frac{ds}{s}\int_{B}|\nabla(w_{s}-v^{i})| ^{2}\left(|\nabla w_{s}|+|\nabla v^{i}|\right)^{p-2}\,dx\] \[\geq\gamma p\int_{0}^{1}sds\int_{B}|\nabla(u^{i}-v^{i})|^{2} \left(s|\nabla u^{i}|+(2-s)|\nabla v^{i}|\right)^{p-2}\,dx,\] \[\geq C\int_{B}|\nabla(u^{i}-v^{i})|^{2}\left(|\nabla u^{i}|+| \nabla v^{i}|\right)^{p-2}\,dx.\] On the other hand, using the Holder inequality, we have \[\int_{B}|\nabla(u^{i}-v^{i})|^{p}\,dx\leq \left(\int_{B}|\nabla(u^{i}-v^{i})|^{2}\left(|\nabla u^{i}|+| \nabla v^{i}|\right)^{p-2}\,dx\right)^{p/2}\left(\int_{B}\left(|\nabla u^{i}|+| \nabla v^{i}|\right)^{p}\right)^{1-p/2}.\] We conclude the proof by applying \(\int_{B}|\nabla v^{i}|^{p}\leq\int_{B}|\nabla u^{i}|^{p}\) (since \(v^{i}\) is \(p\)-harmonic). **Lemma 3.2**.: _(Holder regularity) Let \(\mathbf{u}\) be a (local) minimizer of \(J\) in \(B_{1}\). Then for some \(\alpha=\alpha(n,p)\)_ \[\|\mathbf{u}\|_{C^{\alpha}(B_{3/4})}\leq C(n,p,\|\mathbf{u}\|_{L^{\infty}(B_{1 })}).\] Proof.: Let \(M=\|\mathbf{u}\|_{L^{\infty}(B_{1})}\) and \(B_{r}=B_{r}(y)\) for \(y\in B_{3/4}\) and \(r<1/8\). Since \(u^{i}\) is a \(p\)-subsolution, a Caccioppoli type inequality (see [16], Lemma 3.27) implies that \[\int_{B_{r}}|\nabla u^{i}|^{p}\,dx\leq\frac{C}{r^{p}}\int_{B_{2r}}(u^{i})^{p} \,dx\leq CM^{p}r^{n-p}.\] On the other hand, if \(v^{i}\) is the \(p\)-harmonic replacement of \(u^{i}\) inside \(B_{r}\), we have the gradient estimate (see [17]) \[\sup_{B_{r/2}}|\nabla v^{i}|\leq\left(\frac{C}{r^{n}}\int_{B_{r}}|\nabla v^{i }|^{p}\,dx\right)^{1/p}\leq\frac{CM}{r}.\] Now, let us take some \(\rho<r/2\) which will be specified below and apply Lemma 3.1 in \(B_{r}(y)\) \[\|\nabla u^{i}\|_{L^{p}(B_{\rho})}\leq \|\nabla(u^{i}-v^{i})\|_{L^{p}(B_{\rho})}+\|\nabla v^{i}\|_{L^{p}( B_{\rho})}\] \[\leq \|\nabla(u^{i}-v^{i})\|_{L^{p}(B_{r})}+C\rho^{n/p}\|\nabla v^{i} \|_{L^{\infty}(B_{r/2})}\] \[\leq C\left\{\begin{array}{ll}r^{n/p}+M\rho^{n/p}r^{-1}&\text{for $2 \leq p$,}\\ M^{1-p/2}r^{n/p-1+p/2}+M\rho^{n/p}r^{-1}&\text{for $1<p\leq 2$.}\end{array}\right.\] Thus for \(r=\rho^{1-\alpha}\), if we take \(\alpha=\alpha(n,p)\) sufficiently small, we obtain \[\|\nabla u^{i}\|_{L^{p}(B_{\rho})}\leq C(M,n,p,Q_{\max})\rho^{n/p-1+\alpha}.\] By virtue of Morrey's theorem (see [19]) we conclude the proof of the lemma. The next lemma is essential to prove the Lipschitz regularity of the minimizers. 
**Lemma 3.3**.: _Let \(\mathbf{u}=(u^{1},\ldots,u^{m})\) be a bounded minimizer in \(B_{1}\) and \(u^{i}(0)=0\) for some \(1\leq i\leq m\). Then there exists a constant \(C=C(n,p,Q_{\max})>0\) such that_ \[\|u^{i}\|_{L^{\infty}(B_{1/4})}\leq C.\] We need to remark that the constant \(C\) is independent of the boundary values of \(\mathbf{u}\) on \(\partial\Omega\). In other words, when going away from a free boundary, but staying uniformly inside the domain \(\Omega\), the minimizer cannot grow too large, regardless of the boundary values; in particular, for large enough boundary values, the origin cannot be a free boundary point. Proof.: For the sake of convenience consider \(i=1\). Towards a contradiction, assume that there is a sequence of bounded minimizers \(\mathbf{u}_{k}\) in \(B_{1}\) such that \[\|u_{k}^{1}\|_{L^{\infty}(B_{1/4})}>k.\] Set \[d_{k}(x):=\text{dist}(x,\{u_{k}^{1}=0\})\quad\text{ in }B_{1},\] and define \[\mathcal{O}_{k}:=\left\{x\in B_{1}:d_{k}(x)\leq(1-|x|)/3\right\}.\] Obviously, \(B_{1/4}\subset\mathcal{O}_{k}\). We also have \[m_{k}:=\sup_{\mathcal{O}_{k}}(1-|x|)u_{k}^{1}(x)\geq\frac{3}{4}\max_{B_{1/4}}u_{k}^{1}>\frac{3}{4}k.\] Since \(u_{k}^{1}\) is bounded (for fixed \(k\)), we get \((1-|x|)u_{k}^{1}(x)\to 0\) as \(|x|\to 1\), and therefore \(m_{k}\) is attained at some point \(x_{k}\in\mathcal{O}_{k}\). So, \[u_{k}^{1}(x_{k})=\frac{m_{k}}{1-|x_{k}|}\geq m_{k}>\frac{3}{4}k.\] Now let \(y_{k}\in\partial\{u_{k}^{1}>0\}\cap B_{1}\) be such that \(|y_{k}-x_{k}|=d_{k}(x_{k})=:\delta_{k}\), which satisfies \(\delta_{k}\leq(1-|x_{k}|)/3\) due to \(x_{k}\in\mathcal{O}_{k}\). This implies that \[B_{2\delta_{k}}(y_{k})\subset B_{1}\qquad\text{ and }\qquad B_{\delta_{k}/2}(y_{k})\subset\mathcal{O}_{k}.\] Indeed, if \(z\in B_{2\delta_{k}}(y_{k})\), \[|z|\leq|z-y_{k}|+|y_{k}-x_{k}|+|x_{k}|\leq 2\delta_{k}+\delta_{k}+|x_{k}|\leq 1,\] and if \(z\in B_{\delta_{k}/2}(y_{k})\), \[1-|z|\geq 1-|x_{k}|-|x_{k}-y_{k}|-|y_{k}-z|\geq 1-|x_{k}|-\delta_{k}-\delta_{k}/2\geq 3\delta_{k}/2\geq 3|z-y_{k}|\geq 3d_{k}(z).\] Also, we have \(1-|z|\geq(1-|x_{k}|)/2\) for any \(z\in B_{\delta_{k}/2}(y_{k})\). Then \[\frac{1-|x_{k}|}{2}\max_{B_{\delta_{k}/2}(y_{k})}u_{k}^{1}\leq\max_{z\in B_{\delta_{k}/2}(y_{k})}(1-|z|)u_{k}^{1}(z)\leq\max_{z\in\mathcal{O}_{k}}(1-|z|)u_{k}^{1}(z)=(1-|x_{k}|)u_{k}^{1}(x_{k})\] or \[\max_{B_{\delta_{k}/2}(y_{k})}u_{k}^{1}\leq 2u_{k}^{1}(x_{k}).\] Since \(B_{\delta_{k}}(x_{k})\subset\{u_{k}^{1}>0\}\), then \(u_{k}^{1}\) is \(p\)-harmonic inside \(B_{\delta_{k}}(x_{k})\), i.e. \(\Delta_{p}u_{k}^{1}=0\). By the Harnack inequality for \(p\)-harmonic functions, there is a constant \(c=c(n,p)\) such that \[\min_{B_{\delta_{k}/2}(x_{k})}u_{k}^{1}\geq cu_{k}^{1}(x_{k}).\] In particular, \[\max_{B_{\delta_{k}/4}(y_{k})}u_{k}^{1}\geq cu_{k}^{1}(x_{k}).\] We define the sequence \[\mathbf{w}_{k}(x):=\frac{\mathbf{u}_{k}(y_{k}+(\delta_{k}/2)x)}{u_{k}^{1}(x_{k})},\] whose first component satisfies \[\max_{B_{1}}w_{k}^{1}\leq 2,\qquad\max_{B_{1/2}}w_{k}^{1}\geq c>0,\qquad w_{k}^{1}(0)=0. \tag{3}\] Moreover, \(\mathbf{w}_{k}\) is a minimizer of \[J_{k}(\mathbf{w})=\int_{B_{1}}\sum_{i=1}^{m}|\nabla w^{i}|^{p}+Q_{k}^{p}\chi_{\{|\mathbf{w}|>0\}}dx,\] where \(Q_{k}(x)=\frac{\delta_{k}Q(y_{k}+(\delta_{k}/2)x)}{2u_{k}^{1}(x_{k})}\to 0\). Now consider \(v_{k}^{1}\) to be the \(p\)-harmonic replacement of \(w_{k}^{1}\) in \(B_{3/4}\) and apply Lemma 3.1 \[\int_{B_{3/4}}|\nabla(w_{k}^{1}-v_{k}^{1})|^{p}\,dx\leq C(\max Q_{k})^{p}\to 0, \tag{4}\] when \(2\leq p\).
A similar statement holds for \(1<p\leq 2\); we just need to note that \(\|\nabla w_{k}^{1}\|_{L^{p}}\) is uniformly bounded (\(w_{k}^{1}\) is a \(p\)-subsolution and uniformly bounded in \(B_{1}\)). Furthermore, \(w_{k}^{1}\) and \(v_{k}^{1}\) are uniformly \(C^{\alpha}\) in \(B_{5/8}\) and we can extract a subsequence (still denoted by \(w_{k}^{1}\) and \(v_{k}^{1}\)) such that \(w_{k}^{1}\to w_{0}\) and \(v_{k}^{1}\to v_{0}\) uniformly in \(B_{5/8}\). Observe that \(\Delta_{p}v_{0}=0\) in \(B_{5/8}\) and (4) implies that \(w_{0}=v_{0}+c\). Hence, \(w_{0}\) is also \(p\)-harmonic and by the strong maximum principle, \(w_{0}\equiv 0\) in \(B_{5/8}\), since \(w_{0}\geq 0\) and \(w_{0}(0)=0\). On the other hand, (3) necessitates \[\max_{B_{1/2}}w_{0}\geq c>0,\] which is a contradiction. A direct consequence of the above lemma is the following estimate. **Lemma 3.4**.: _Let \(\mathbf{u}\) be a (local) minimizer in \(\Omega\). If \(\mathrm{dist}(x_{0},\{u^{i}=0\})<\frac{1}{5}\mathrm{dist}(x_{0},\partial\Omega)\) then_ \[u^{i}(x_{0})\leq 4C\mathrm{dist}(x_{0},\{u^{i}=0\}),\] _where \(C\) is the constant defined in Lemma 3.3._ Proof.: Choose \(y_{0}\in\{u^{i}=0\}\) such that \(\mathrm{dist}(x_{0},\{u^{i}=0\})=|x_{0}-y_{0}|=d_{0}\). Now apply Lemma 3.3 to \[\mathbf{v}(x):=\frac{\mathbf{u}(y_{0}+4d_{0}x)}{4d_{0}}\] to get \[u^{i}(x_{0})\leq 4Cd_{0}.\] With the above two results we will obtain uniform Lipschitz regularity for minimizers. **Theorem 3.5**.: _Let \(\mathbf{u}\) be a (local) minimizer in \(\Omega\), then \(\mathbf{u}\) is Lipschitz. Moreover, for every \(K\Subset\Omega\) such that \(K\cap\partial\{u^{i}>0\}\neq\emptyset\) for some \(1\leq i\leq m\), there is a constant \(C=C(n,p,Q_{\max},\mathrm{dist}(K,\partial\Omega),\Omega)>0\) such that_ \[\|\nabla u^{i}\|_{L^{\infty}(K)}\leq C.\] Once again we remark that the constant \(C\) does not depend on the boundary values of the minimizer, as long as we stay uniformly inside the domain. Proof.: **Step 1:** We show that \(u^{i}\) is bounded in \(K\) by a universal constant \(C\) depending only on \(n,p,Q_{\max},\mathrm{dist}(K,\partial\Omega),\Omega\). Let \(r_{0}=\frac{1}{5}\mathrm{dist}(K,\partial\Omega)\). For any point \(x\in K\) there is a sequence of points \(x=x_{0},\dots,x_{k}\in K\) with (we can assume \(K\) is connected, otherwise replace it with a bigger one which is connected) \[x_{j}\in B_{r_{0}/2}(x_{j-1}),\quad\text{ for }j=1,\dots,k,\] \(B_{r_{0}}(x_{j})\subset\{u^{i}>0\}\) for \(j=0,\dots,k-1\) and \(B_{r_{0}}(x_{k})\cap\{u^{i}=0\}\neq\emptyset\). Note that \(k\), the number of points, only depends on \(\Omega\) and \(\mathrm{dist}(K,\partial\Omega)\). From Lemma 3.4, we get \[u^{i}(x_{k})\leq 4Cr_{0}.\] Since \(u^{i}\) is \(p\)-harmonic in \(B_{r_{0}}(x_{j})\), \(j=0,\dots,k-1\), by virtue of Harnack's inequality, there is a constant \(c\) such that \[u^{i}(x_{j+1})\geq cu^{i}(x_{j}).\] Thus \[u^{i}(x)\leq 4c^{-k}Cr_{0}.\] **Step 2:** Here we find a control on \(\nabla u^{i}\) at points close to \(\{u^{i}=0\}\). If \(d=\operatorname{dist}(y,\{u^{i}=0\})<\frac{1}{11}\operatorname{dist}(y,\partial\Omega)\), every point \(x_{0}\in B_{d}(y)\) satisfies the condition of Lemma 3.4. Then \[u^{i}(x_{0})\leq 4C\operatorname{dist}(x_{0},\{u^{i}=0\})\leq 8Cd.\] Let us define \[v(x):=\frac{u^{i}(y+dx)}{d}\] which is \(p\)-harmonic in \(B_{1}\) and satisfies \(\|v\|_{L^{\infty}(B_{1})}\leq 8C\).
By the gradient estimate for \(p\)-harmonic functions, we obtain \[|\nabla v(0)|\leq\tilde{C}(n,p,Q_{\max}),\] that is, \(|\nabla u^{i}(y)|\leq C(n,p,Q_{\max})\). **Step 3:** Let \(r_{1}=\frac{1}{11}\operatorname{dist}(K,\partial\Omega)\). If \(\operatorname{dist}(x,\{u^{i}=0\})\leq r_{1}\), by the result of Step 2 we already have \(|\nabla u^{i}(x)|\leq C\). If \(\operatorname{dist}(x,\{u^{i}=0\})>r_{1}\), then \(u^{i}\) is \(p\)-harmonic inside \(B_{r_{1}}(x)\) and \(\|u^{i}\|_{L^{\infty}(B_{r_{1}}(x))}\) is universally bounded by the result of Step 1. Thus \(|\nabla u^{i}(x)|\) will be universally bounded. A straightforward corollary to this theorem, that can be useful later, is the following **Corollary 3.6**.: _Let \(\mathbf{u}\) be a (local) minimizer for our functional. For every \(K\Subset\Omega\) there exists a constant \(C=C(n,p,Q_{\max},\operatorname{dist}(K,\partial\Omega),\Omega)\) such that_ \[\frac{1}{r}\fint_{\partial B_{r}}u^{i}\,dx>C\ \ \text{implies}\ \ u^{i}>0\ \text{in}\ B_{r}.\] Proof.: If \(B_{r}\subset K\) contains a free boundary point, then by Theorem 3.5, \(u^{i}\leq Cr\) on \(\partial B_{r}\).

## 4. Nondegeneracy

**Lemma 4.1**.: _For any \(0<\kappa<1\) there exists a constant \(c=c(\kappa,n,m,p,Q_{\min})>0\) such that for every minimizer \(\mathbf{u}\) and for any (small) ball \(B_{r}\subset\Omega\)_ \[\|\mathbf{u}\|_{L^{\infty}(B_{r})}<cr\ \ \text{implies}\ \ \mathbf{u}=\mathbf{0}\ \text{in}\ B_{\kappa r}.\] Proof.: Without loss of generality, we may assume \(r=1\). Let \[M=\|\mathbf{u}\|_{L^{\infty}(B_{r})}.\] Let \(\phi(x)=\phi_{\kappa}(|x|)\) be the solution of \[\Delta_{p}\phi=0,\ \ \text{in}\ B_{\sqrt{\kappa}}\setminus B_{\kappa},\qquad\phi=0\ \ \text{on}\ \partial B_{\kappa},\qquad\phi=1\ \ \text{on}\ \partial B_{\sqrt{\kappa}}\] and extend \(\phi=0\) in \(B_{\kappa}\). Set \(v=M\sqrt{\kappa}\phi\) and \(w^{i}=\min(u^{i},v)\) for all \(i=1,\dots,m\). Since \(v\geq u^{i}\) on \(\partial B_{\sqrt{\kappa}}\), we have \(w^{i}=u^{i}\) on \(\partial B_{\sqrt{\kappa}}\). Therefore \(J(\mathbf{u})\leq J(\mathbf{w})\), or equivalently \[\int_{B_{r}}\sum_{i=1}^{m}|\nabla u^{i}|^{p}+Q^{p}\chi_{\{|\mathbf{u}|>0\}}\,dx\leq\int_{B_{r}}\sum_{i=1}^{m}|\nabla w^{i}|^{p}+Q^{p}\chi_{\{|\mathbf{w}|>0\}}\,dx.\] Since \(\{|\mathbf{w}|>0\}\subset\{|\mathbf{u}|>0\}\), we get \[\int_{B_{\kappa}}\sum_{i=1}^{m}|\nabla u^{i}|^{p}+Q^{p}\chi_{\{|\mathbf{u}|>0\}}\,dx \leq\int_{B_{\sqrt{\kappa}}\setminus B_{\kappa}}\sum_{i=1}^{m}\left(|\nabla w^{i}|^{p}-|\nabla u^{i}|^{p}\right)\,dx\] \[\leq p\int_{B_{\sqrt{\kappa}}\setminus B_{\kappa}}\sum_{i=1}^{m}|\nabla w^{i}|^{p-2}\nabla w^{i}\cdot\nabla(w^{i}-u^{i})\,dx\] \[=-p\int_{\partial B_{\kappa}}\sum_{i=1}^{m}|\nabla w^{i}|^{p-2}(w^{i}-u^{i})(\nabla w^{i}\cdot\nu)\,d\mathcal{H}^{n-1}\] \[=p\int_{\partial B_{\kappa}}\sum_{i=1}^{m}|\nabla v|^{p-2}u^{i}(\nabla v\cdot\nu)\,d\mathcal{H}^{n-1}.\] Since \(|\nabla v|\leq C(p,\kappa,n)M\) on \(\partial B_{\kappa}\), we find that \[\int_{B_{\kappa}}\sum_{i=1}^{m}|\nabla u^{i}|^{p}+Q^{p}\chi_{\{|\mathbf{u}|>0\}}\,dx\leq CM^{p-1}\sum_{i=1}^{m}\int_{\partial B_{\kappa}}u^{i}\,d\mathcal{H}^{n-1}. \tag{5}\]
On the other hand, \[\int_{\partial B_{\kappa}}u^{i}\,d\mathcal{H}^{n-1} \leq C(n,\kappa)\int_{B_{\kappa}}u^{i}+|\nabla u^{i}|\,dx\] \[=C(n,\kappa)\int_{B_{\kappa}}\left(u^{i}+|\nabla u^{i}|\right)\chi_{\{u^{i}>0\}}\,dx\] \[\leq C(n,\kappa,p,Q_{\min})\int_{B_{\kappa}}MQ^{p}\chi_{\{u^{i}>0\}}+|\nabla u^{i}|^{p}+Q^{p}\chi_{\{u^{i}>0\}}\,dx\] \[\leq C(n,\kappa,p,Q_{\min})(1+M)\int_{B_{\kappa}}|\nabla u^{i}|^{p}+Q^{p}\chi_{\{u^{i}>0\}}\,dx.\] Comparing with (5), we arrive at \[\int_{B_{\kappa}}\sum_{i=1}^{m}|\nabla u^{i}|^{p}+Q^{p}\chi_{\{|\mathbf{u}|>0\}}\,dx\leq CM^{p-1}(1+M)\int_{B_{\kappa}}\sum_{i=1}^{m}|\nabla u^{i}|^{p}+Q^{p}\chi_{\{|\mathbf{u}|>0\}}\,dx.\] Therefore, if \(M\) is small enough, we obtain that \(\mathbf{u}=\mathbf{0}\) in \(B_{\kappa}\). An immediate consequence of the above lemma is the following. For any \(K\Subset\Omega\) there are positive constants \(c_{0},C_{0}\) such that if \(B_{r}(x)\subset K\cap\{|\mathbf{u}|>0\}\) touches \(\partial\{|\mathbf{u}|>0\}\) then \[c_{0}r\leq|\mathbf{u}(x)|\leq C_{0}r. \tag{6}\] **Theorem 4.2**.: _For \(K\Subset\Omega\) there exists a constant \(0<c=c(n,m,p,K,\Omega)<1\) such that for any (local) minimizer \(\mathbf{u}\) and for any (small) ball \(B_{r}(x)\subset K\) with \(x\in\partial\{|\mathbf{u}|>0\}\),_ \[c<\frac{\mathcal{L}^{n}(B_{r}(x)\cap\{|\mathbf{u}|>0\})}{\mathcal{L}^{n}(B_{r}(x))}<1-c. \tag{7}\] Proof.: By Lemma 4.1, there exists \(y\in B_{r/2}\) such that \(|\mathbf{u}(y)|\geq cr>0\). Using Lipschitz continuity we get \[\fint_{\partial B_{\kappa r}(y)}|\mathbf{u}|\geq\frac{cr}{2},\] provided \(\kappa\) is small enough. Hence \[\frac{1}{\kappa r}\fint_{\partial B_{\kappa r}(y)}|\mathbf{u}|\geq\frac{c}{2\kappa},\] and also for at least one component \(u^{i}\) \[\frac{1}{\kappa r}\fint_{\partial B_{\kappa r}(y)}u^{i}\geq\frac{c}{2\kappa m},\] which by Corollary 3.6 implies \(|\mathbf{u}|>0\) in \(B_{\kappa r}(y)\). This gives the lower estimate in (7). To prove the estimate from above we assume, for simplicity, \(r=1\) and suppose (towards a contradiction) that there is a sequence of minimizers \(\mathbf{u}_{k}\) in \(B_{1}(0)\) such that \(0\in\partial\{|\mathbf{u}_{k}|>0\}\) and \[\mathcal{L}^{n}(\{|\mathbf{u}_{k}|=0\})=:\varepsilon_{k}\to 0.\] Let \(v^{i}_{k}\) be a \(p\)-harmonic function in \(B_{1/2}\) with boundary data \(v^{i}_{k}=u^{i}_{k}\) on \(\partial B_{1/2}\). From Lemma 3.1, we obtain that \[\int_{B_{1/2}}|\nabla(v^{i}_{k}-u^{i}_{k})|^{p}\,dx\leq C(\varepsilon_{k})\to 0. \tag{8}\] Since \(u^{i}_{k}\) and \(v^{i}_{k}\) are both uniformly Lipschitz in \(B_{1/4}\), we may assume that \(u^{i}_{k}\to u^{i}_{0}\) and \(v^{i}_{k}\to v^{i}_{0}\) uniformly in \(B_{1/4}\). Observe that \(\Delta_{p}v^{i}_{0}=0\) and (8) implies that \(u^{i}_{0}=v^{i}_{0}+c\). Thus \(\Delta_{p}u^{i}_{0}=0\) in \(B_{1/4}\) and from the strong minimum principle it follows that \(u^{i}_{0}\equiv 0\) in \(B_{1/4}\), since \(u^{i}_{0}\geq 0\) and \(u^{i}_{0}(0)=0\). On the other hand, from the nondegeneracy property, Lemma 4.1, we know \[\|\mathbf{u}_{k}\|_{L^{\infty}(B_{1/2})}\geq c>0,\] which implies a similar inequality for \(\mathbf{u}_{0}\), and hence a contradiction. **Remark 4.3**.: _Theorem 4.2, along with the Lebesgue density theorem, implies that the free boundary has zero Lebesgue measure_ \[\mathcal{L}^{n}(\partial\{|\mathbf{u}|>0\})=0.\]

## 5. The vector-valued measure \(\Delta_{p}\mathbf{u}\)
Let \(0\leq\zeta\in C_{0}^{\infty}(\Omega)\) be a test function, and define the measure \(\lambda^{i}\) by \[\int\zeta\,d\lambda^{i}=-\int|\nabla u^{i}|^{p-2}\nabla u^{i}\cdot\nabla\zeta\,dx,\] which by virtue of Lemma 2.3 is a bounded non-negative measure, i.e. a Radon measure. Obviously \(\lambda^{i}\) is the formal way of expressing \(\Delta_{p}u^{i}\) in \(\Omega\). Since each \(u^{i}\) is \(p\)-subharmonic in \(\Omega\) and \(u^{i}\geq 0\) we have that \(\lambda^{i}\) is a positive Radon measure. Because \(u^{i}\) is also \(p\)-harmonic in \(\{u^{i}>0\}\) we have that the support of \(\lambda^{i}\) is in \(\Omega\cap\partial\{u^{i}>0\}\subseteq\Omega\cap\partial\{|\mathbf{u}|>0\}\) (observe that \(u^{i}\) may be zero in some component of \(\{|\mathbf{u}|>0\}\)). Let us define \[\Lambda=\Lambda^{\mathbf{u}}:=\sum_{i=1}^{m}\lambda^{i}.\] **Theorem 5.1**.: _For any \(K\Subset\Omega\) there exist constants \(c,C>0\) such that for any (local) minimizer \(\mathbf{u}\)_ \[cr^{n-1}\leq\int_{B_{r}}d\Lambda\leq Cr^{n-1}\] _for any ball \(B_{r}(x)\subset K\) with \(x\in\partial\{|\mathbf{u}|>0\}\)._ Proof.: Let \(0\leq\zeta_{\epsilon}\in C_{0}^{\infty}(B_{r+\epsilon})\) be a suitable test function, such that \(\zeta_{\epsilon}=1\) on \(B_{r}\) and \(|\nabla\zeta_{\epsilon}|\leq 2/\epsilon\). Then \[\int\zeta_{\epsilon}\,d\lambda^{i}=-\int|\nabla u^{i}|^{p-2}\nabla u^{i}\cdot\nabla\zeta_{\epsilon}\,dx=-\int_{B_{r+\epsilon}\setminus B_{r}}|\nabla u^{i}|^{p-2}\nabla u^{i}\cdot\nabla\zeta_{\epsilon}\,dx\leq Cr^{n-1},\] where in the last inequality we have used that \(\mathbf{u}\) is Lipschitz. Letting \(\epsilon\) tend to zero, we arrive at \[\int_{B_{r}}d\lambda^{i}\leq Cr^{n-1}.\] To prove the estimate from below, we argue indirectly. It also suffices to consider the case \(r=1\). Assume there is a sequence of minimizers \(\mathbf{u}_{k}\) in the unit ball \(B_{1}(0)\), such that \(0\in\partial\{|\mathbf{u}_{k}|>0\}\) and for the measures \(\Lambda_{k}:=\Lambda^{\mathbf{u}_{k}}\) we have \[\varepsilon_{k}:=\Lambda_{k}(B_{1})\to 0.\] Since the functions \(\mathbf{u}_{k}\) are uniformly Lipschitz continuous, we may assume that \(\mathbf{u}_{k}\to\mathbf{u}_{0}\) in \(B_{1/2}\), where \(\mathbf{u}_{0}\) is Lipschitz continuous as well. We may also extract a subsequence (still denoted by \(\mathbf{u}_{k}\)) such that \(g_{k}^{i}:=|\nabla u_{k}^{i}|^{p-2}\nabla u_{k}^{i}\to g_{0}^{i}\) weakly-\(\ast\) in \(L^{\infty}(B_{1/2})\) for all \(i=1,\ldots,m\). We claim that \[g_{0}^{i}=|\nabla u_{0}^{i}|^{p-2}\nabla u_{0}^{i},\quad\text{ in }B_{1/2}. \tag{9}\] Suppose this is true; then for every positive test function \(\zeta\in C_{0}^{\infty}(B_{1/2})\) one has \[-\int_{B_{1/2}}|\nabla u_{0}^{i}|^{p-2}\nabla u_{0}^{i}\cdot\nabla\zeta\,dx =-\lim_{k\to\infty}\int_{B_{1/2}}|\nabla u_{k}^{i}|^{p-2}\nabla u_{k}^{i}\cdot\nabla\zeta\,dx\] \[=\lim_{k\to\infty}\int_{B_{1/2}}\zeta\,d\lambda_{k}^{i}\leq\|\zeta\|_{L^{\infty}(B_{1/2})}\lim_{k\to\infty}\varepsilon_{k}=0.\] Thus \(\lambda_{0}^{i}=0\) and \(u_{0}^{i}\) is \(p\)-harmonic for all \(i=1,\ldots,m\) (note that \(u_{0}^{i}\) is the limit of a sequence of \(p\)-subharmonic functions and we already know that it is \(p\)-subharmonic). Since \(u_{0}^{i}\geq 0\) and \(u_{0}^{i}(0)=0\), by the minimum principle, we have \(u_{0}^{i}\equiv 0\) in \(B_{1/2}\).
On the other hand, by the nondegeneracy property (Lemma 4.1) and the fact that \(0\in\partial\{|\mathbf{u}_{k}|>0\}\) we have \[\|\mathbf{u}_{k}\|_{L^{\infty}(B_{1/4})}\geq c>0.\] Therefore, a similar inequality holds for \(\mathbf{u}_{0}\) and we arrive at a contradiction. To close the argument we need to prove (9). In fact, if \(\overline{B_{\rho}}=\overline{B_{\rho}(y)}\subset\{|\mathbf{u}_{0}|>0\}\) then \(B_{\rho}\subset\{|\mathbf{u}_{k}|>0\}\) for sufficiently large \(k\) and \(u_{k}^{i}\) are \(p\)-harmonic in \(B_{\rho}\) for all \(i=1,\ldots,m\) (see Lemma 2.3). Therefore, one can extract a subsequence of \(\mathbf{u}_{k}\) locally converging to \(\mathbf{u}_{0}\) in \(C^{1,\alpha}(B_{\rho})\). Hence, \(g_{0}^{i}=|\nabla u_{0}^{i}|^{p-2}\nabla u_{0}^{i}\) in \(B_{\rho}\) for all \(i=1,\ldots,m\). Next, if \(B_{\rho}\subset\{|\mathbf{u}_{0}|=0\}\) then for any \(\kappa<1\) the nondegeneracy property entails that \(B_{\kappa\rho}\subset\{|\mathbf{u}_{k}|=0\}\) for sufficiently large \(k=k(\kappa)\). Thus \(g_{0}^{i}=0=|\nabla u_{0}^{i}|^{p-2}\nabla u_{0}^{i}\). We just need to show \(\mathcal{L}^{n}(\partial\{|\mathbf{u}_{0}|>0\}\cap B_{1/2})=0\). If \(x_{0}\in\partial\{|\mathbf{u}_{0}|>0\}\cap B_{1/2}\), then \(\mathbf{u}_{0}(x_{0})=0\). Choose \(x_{k}\in\partial\{|\mathbf{u}_{k}|>0\}\cap B_{1}\) such that \(|x_{k}-x_{0}|=\operatorname{dist}(x_{0},\partial\{|\mathbf{u}_{k}|>0\})\); then relation (6) yields that \(|x_{k}-x_{0}|\to 0\). Apply Lemma 4.1 to obtain \[\|\mathbf{u}_{k}\|_{L^{\infty}(B_{2r}(x_{0}))}\geq\|\mathbf{u}_{k}\|_{L^{\infty}(B_{r}(x_{k}))}\geq cr,\] for any ball \(B_{r}(x_{0})\subset B_{1/2}\) and sufficiently large \(k\). Passing to the limit we get the same inequality for \(\mathbf{u}_{0}\), \[\|\mathbf{u}_{0}\|_{L^{\infty}(B_{2r}(x_{0}))}\geq cr.\] This along with the Lipschitz continuity of \(\mathbf{u}_{0}\) is enough to prove that \(\mathcal{L}^{n}(B_{r}(x_{0})\cap\{|\mathbf{u}_{0}|>0\})\geq c\mathcal{L}^{n}(B_{r})\) for some \(c>0\) (see the first part of the proof of Theorem 4.2). This implies that \(\mathcal{L}^{n}(\partial\{|\mathbf{u}_{0}|>0\}\cap B_{1/2})=0\) (see Remark 4.3). The next theorem follows easily from Theorem 5.1. The proof is the same as the proof of Theorem 4.5 in [2]. **Theorem 5.2**.: _Let \(\mathbf{u}\) be a (local) minimizer in \(\Omega\). Then_ 1. _For every_ \(K\Subset\Omega\) _we have_ \(\mathcal{H}^{n-1}(K\cap\partial\{|\mathbf{u}|>0\})<\infty\)_._ 2. _There exist nonnegative Borel functions_ \(q^{i}\) _such that_ \[\Delta_{p}u^{i}=q^{i}\mathcal{H}^{n-1}\lfloor\,\partial\{|\mathbf{u}|>0\},\] _that is, for every_ \(\zeta\in\mathcal{C}_{0}^{\infty}(\Omega)\)__ \[-\int_{\Omega}|\nabla u^{i}|^{p-2}\nabla u^{i}\cdot\nabla\zeta\,dx=\int_{\Omega\cap\partial\{|\mathbf{u}|>0\}}\zeta q^{i}\,d\mathcal{H}^{n-1}.\] 3. _For any_ \(K\Subset\Omega\) _there exist constants_ \(c,C>0\) _such that_ \[c\leq\sum_{i=1}^{m}q^{i}\leq C,\] _and for_ \(B_{r}(x)\subset K\) _with_ \(x\in\partial\{|\mathbf{u}|>0\}\) _we have_ \[cr^{n-1}\leq\mathcal{H}^{n-1}(B_{r}(x)\cap\partial\{|\mathbf{u}|>0\})\leq Cr^{n-1}.\] **Remark 5.3**.: _From (i) in Theorem 5.2 it follows that, locally, the set \(A=\Omega\cap\{|\mathbf{u}|>0\}\) has finite perimeter in \(\Omega\), in the sense that \(\mu_{\mathbf{u}}=-\nabla\chi_{A}\) is a Borel measure and the total variation \(|\mu_{\mathbf{u}}|\) is a Radon measure.
We define the reduced boundary of \(A\) by_ \[\partial_{\mathrm{red}}A=\{x\in\Omega:|\nu_{\mathbf{u}}(x)|=1\},\] _where \(\nu_{\mathbf{u}}(x)\) is the unique unit vector with_ \[\int_{B_{r}(x)}\left|\chi_{A}-\chi_{\{y:(y-x)\cdot\nu_{\mathbf{u}}(x)<0\}}\right|dy=o(r^{n}),\quad\text{ as }r\to 0,\] _if such a vector exists, and \(\nu_{\mathbf{u}}(x)=0\) otherwise. See [14], Chapter 4, for more details._

## 6. Local Analysis

To proceed, we will need some properties of the so-called blow-up limits. **Lemma 6.1**.: _Let \(\mathbf{u}\) be a (local) minimizer in \(\Omega\), \(K\Subset\Omega\) and \(B_{r_{k}}(x_{k})\subset K\) be a sequence of balls with \(r_{k}\to 0\), \(x_{k}\to x_{0}\in\Omega\), and \(\mathbf{u}(x_{k})=0\). Consider the blow-up sequence_ \[\mathbf{u}_{k}(x)=\frac{1}{r_{k}}\mathbf{u}(x_{k}+r_{k}x). \tag{10}\] _For a subsequence, there is a limit \(\mathbf{u}_{0}\) such that_ \[\mathbf{u}_{k}\to\mathbf{u}_{0}\;\text{ in }\;C_{\mathrm{loc}}^{0,\alpha}(\mathbb{R}^{n};\mathbb{R}^{m})\text{ for every }0<\alpha<1, \tag{11}\] \[\nabla\mathbf{u}_{k}\to\nabla\mathbf{u}_{0}\;\text{ a.e. in }\mathbb{R}^{n}, \tag{12}\] \[\partial\{|\mathbf{u}_{k}|>0\}\to\partial\{|\mathbf{u}_{0}|>0\}\;\text{ locally in the Hausdorff distance}, \tag{13}\] \[\chi_{\{|\mathbf{u}_{k}|>0\}}\to\chi_{\{|\mathbf{u}_{0}|>0\}}\;\text{ in }L_{\mathrm{loc}}^{1}(\mathbb{R}^{n};\mathbb{R}^{m}),\] \[\text{ if }x_{k}\in\partial\{|\mathbf{u}|>0\}\;\text{ then }0\in\partial\{|\mathbf{u}_{0}|>0\}.\] Proof.: For the proof we refer to [2] and [3]. The following lemma shows that the blow-up limit is a minimizer in any ball. **Lemma 6.2**.: _If \(\mathbf{u}(x_{k})=0\) and \(x_{k}\to x_{0}\), then any blow-up limit \(\mathbf{u}_{0}=\lim_{k}\mathbf{u}_{k}\) (see (10)) with respect to \(B_{r_{k}}(x_{k})\) is an absolute minimizer of \(J_{0}\) in any ball \(B_{R}=B_{R}(0)\), where_ \[J_{0}(\mathbf{v}):=\int_{B_{R}}\sum_{i=1}^{m}|\nabla v^{i}|^{p}+Q(x_{0})^{p}\chi_{\{|\mathbf{v}|>0\}}\,dx.\] Proof.: Let \(\mathbf{w}\in W^{1,p}(B_{R};\mathbb{R}^{m})\) be such that \(w^{i}\geq 0\) for \(i=1,\ldots,m\) and \(\mathbf{w}=\mathbf{u}_{0}\) on \(\partial B_{R}\). To show \(J_{0}(\mathbf{u}_{0})\leq J_{0}(\mathbf{w})\), we choose a cut-off function \(\eta\in C_{0}^{\infty}(B_{R})\) with \(0\leq\eta\leq 1\) and \(\eta=1\) in \(B_{r}\) for some \(0<r<R\), and define \[\mathbf{w}_{k}=\left(\mathbf{w}+(1-\eta)(\mathbf{u}_{k}-\mathbf{u}_{0})\right)_{+},\] where the positive part is taken separately for each component. We also have \(\mathbf{w}_{k}=\mathbf{u}_{k}\) on \(\partial B_{R}\). Since \(\mathbf{u}\) is a (local) minimizer, for sufficiently large \(k\) such that \(B_{Rr_{k}}(x_{k})\Subset\Omega\), we have \[\int_{B_{R}}\sum_{i=1}^{m}|\nabla u_{k}^{i}|^{p}+Q_{k}^{p}\chi_{\{|\mathbf{u}_{k}|>0\}}\,dx\leq\int_{B_{R}}\sum_{i=1}^{m}|\nabla w_{k}^{i}|^{p}+Q_{k}^{p}\chi_{\{|\mathbf{w}_{k}|>0\}}\,dx,\] where \(Q_{k}(x):=Q(x_{k}+r_{k}x)\). Since \(|\nabla\mathbf{u}_{k}|\leq C\) (due to the Lipschitz continuity of \(\mathbf{u}\)) and convergences (11) and (13), the limit of the left hand side will be \(J_{0}(\mathbf{u}_{0})\).
Hence \[J_{0}(\mathbf{u}_{0}) \leq\liminf_{k\to\infty}\int_{B_{R}}\sum_{i=1}^{m}|\nabla w_{k}^{ i}|^{p}+Q_{k}^{p}\chi_{\{|\mathbf{w}_{k}|>0\}}\,dx\] \[\leq\int_{B_{R}}\sum_{i=1}^{m}|\nabla w^{i}|^{p}\,dx+\int_{B_{r}} Q(x_{0})^{p}\chi_{\{|\mathbf{w}_{k}|>0\}}\,dx+\liminf_{k\to\infty}\int_{B_{R} \setminus B_{r}}Q_{k}^{p}\chi_{\{|\mathbf{w}_{k}|>0\}}\,dx\] \[\leq\int_{B_{R}}\sum_{i=1}^{m}|\nabla w^{i}|^{p}\,dx+\int_{B_{r}} Q(x_{0})^{p}\chi_{\{|\mathbf{w}_{k}|>0\}}\,dx+Q(x_{0})^{p}|B_{R}\setminus B_{ r}|.\] Now let \(r\to R\), we get \(J_{0}(\mathbf{u}_{0})\leq J_{0}(\mathbf{w})\). **Lemma 6.3**.: _Suppose \(\nabla(Q^{p})\in L^{1}(\Omega)\) and \(\mathbf{u}\) is an absolute minimizer. Then_ \[\int_{|\mathbf{u}|>0}\,\mathrm{div}\,\left[\sum_{i=1}^{m}\left(|\nabla u^{i}|^{ p}\Psi-p|\nabla u^{i}|^{p-2}(\nabla u^{i}\cdot\Psi)\nabla u^{i}\right)+Q^{p} \Psi\right]dx=0,\] _for every \(\Psi\in C_{c}^{\infty}(\Omega;\mathbb{R}^{n})\)._ Proof.: Let us define \[\Phi_{t}(x)=x+t\Psi(x)\qquad\text{ and }\qquad\mathbf{u}_{t}(x)=\mathbf{u}( \Phi_{t}(x)).\] One can show that for sufficiently small \(|t|\), \(\Phi_{t}:\Omega\to\Omega\) is a diffeomorphism. We have \(D\Phi_{t}=I+tD\Psi\) and for \(i=1,\ldots,m\) \[\nabla u_{t}^{i}=D\Phi_{t}(x)\nabla u^{i}(\Phi_{t}(x)).\] It follows that \[|\nabla u_{t}^{i}(x)|^{2}=(\nabla u^{i}(\Phi_{t}(x)))^{T}A_{t}(x)\nabla u^{i}( \Phi_{t}(x)),\] where \[A_{t}=(D\Phi_{t})^{T}D\Phi_{t}=I+t((D\Psi)^{T}+D\Psi)+t^{2}(D\Psi)^{T}D\Psi.\] By a change of variables, we have \[J(\mathbf{u}_{t}) =\int_{\Omega}\sum_{i=1}^{m}|\nabla u_{t}^{i}(x)|^{p}+Q^{p}\chi_{ \{|\mathbf{u}|>0\}}\,dx\] \[=\int_{\Omega}\sum_{i=1}^{m}\Big{(}(\nabla u^{i}(\Phi_{t}(x)))^{ T}A_{t}(x)\nabla u^{i}(\Phi_{t}(x))\Big{)}^{p/2}+Q^{p}(x)\chi_{\{|\mathbf{u}|>0\} }(\Phi_{t}(x))\,dx\] \[=\int_{\Omega}\sum_{i=1}^{m}\Big{[}\Big{(}(\nabla u^{i}(y))^{T}A _{t}(\Phi_{t}^{-1}(y))\nabla u^{i}(y)\Big{)}^{p/2}+Q^{p}(\Phi_{t}^{-1}(y))\chi_ {\{|\mathbf{u}|>0\}}(y)\Big{]}\Big{|}\det D_{y}\Phi_{t}^{-1}(y)\big{|}\,dy\] \[=\int_{\{|\mathbf{u}|>0\}}\sum_{i=1}^{m}\Big{[}\Big{(}(\nabla u^{ i}(y))^{T}A_{t}(\Phi_{t}^{-1}(y))\nabla u^{i}(y)\Big{)}^{p/2}+Q^{p}(\Phi_{t}^{-1}(y) )\Big{]}\Big{|}\det D_{y}\Phi_{t}^{-1}(y)\big{|}\,dy.\] We also have \[\frac{d}{dt}A_{t}(\Phi_{t}^{-1}(y))\Big{|}_{t=0}=D\Psi(y)+D\Psi(y)^{T}\] and \[\frac{d}{dt}\Big{|}\det D_{y}\Phi_{t}^{-1}(y)\Big{|}\Big{|}_{t=0}=-\text{div} \,\Psi(y).\] Now differentiate \(J(\mathbf{u}_{t})\) with respect to \(t\) and note that its minimum is attained at \(t=0\), then \[0=\frac{d}{dt}J(\mathbf{u}_{t})\Big{|}_{t=0} =\int_{\{|\mathbf{u}|>0\}}\sum_{i=1}^{m}p|\nabla u^{i}|^{p-2}( \nabla u^{i})^{T}D\Psi\nabla u^{i}-\Psi\cdot\nabla(Q^{p})dx\] \[\quad-\int_{\{|\mathbf{u}|>0\}}\left[\sum_{i=1}^{m}|\nabla u^{i}|^ {p}+Q^{p}\right]\text{div}\,\Psi\,dx\] \[=-\int_{\{|\mathbf{u}|>0\}}\text{div}\,\left[\sum_{i=1}^{m}\Big{(} |\nabla u^{i}|^{p}\Psi-p|\nabla u^{i}|^{p-2}(\nabla u^{i}\cdot\Psi)\nabla u^{ i}\Big{)}+Q^{p}\Psi\right]\] \[\qquad\qquad+\sum_{i=1}^{m}p(\nabla u^{i}\cdot\Psi)\Delta_{p}u^{ i}\,dx.\] Since each \(u^{i}\) is \(p\)-harmonic in \(\{u^{i}>0\}\), see (2), we arrive at the desired claim, in the lemma. 
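For the reader's convenience, here is a short sketch of the two derivative identities invoked in the proof above. Since \(\Phi_{t}^{-1}(y)=y-t\Psi(y)+O(t^{2})\) and \(A_{t}=I+t\big{(}(D\Psi)^{T}+D\Psi\big{)}+t^{2}(D\Psi)^{T}D\Psi\), one has \[A_{t}(\Phi_{t}^{-1}(y))=I+t\big{(}D\Psi(y)+D\Psi(y)^{T}\big{)}+O(t^{2}),\qquad\det D\Phi_{t}=\det(I+tD\Psi)=1+t\,\mathrm{div}\,\Psi+O(t^{2}),\] so that \(\big{|}\det D_{y}\Phi_{t}^{-1}(y)\big{|}=1-t\,\mathrm{div}\,\Psi(y)+O(t^{2})\); differentiating at \(t=0\) yields the formulas used for \(\frac{d}{dt}A_{t}(\Phi_{t}^{-1}(y))\big{|}_{t=0}\) and \(\frac{d}{dt}\big{|}\det D_{y}\Phi_{t}^{-1}(y)\big{|}\big{|}_{t=0}\).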
**Definition 6.4**.: _The upper \(\mathcal{H}^{n-1}\)-density at any point \(x_{0}\in\partial\{|\mathbf{u}|>0\}\) is defined as_ \[\Theta^{n-1}\left(\mathcal{H}^{n-1}\Big{\{}\partial\{|\mathbf{u}|>0\},x_{0} \right):=\limsup_{r\to 0}\frac{\mathcal{H}^{n-1}(B_{r}(x_{0})\cap\partial\{| \mathbf{u}|>0\})}{\omega_{n-1}r^{n-1}},\] _where \(\omega_{n-1}\) denotes the volume of the unit sphere in \(\mathbb{R}^{n-1}\). We already know (see for example Theorem 2.7 in [13]) that for \(\mathcal{H}^{n-1}\)-a.e. point \(x_{0}\in\partial\{|\mathbf{u}|>0\}\), their upper \(\mathcal{H}^{n-1}\)-density satisfy_ \[\Theta^{n-1}\left(\mathcal{H}^{n-1}\Big{\{}\partial\{|\mathbf{u}|>0\},x_{0} \Big{)}\leq 1.\right.\] **Theorem 6.5**.: _Let \(x_{0}\in\partial_{\mathrm{red}}\{|\mathbf{u}|>0\}\) and suppose that_ \[\Theta^{n-1}\left(\mathcal{H}^{n-1}\Big{\{}\partial\{|\mathbf{u}|>0\},x_{0} \Big{)}\leq 1.\right.\] _Then \(\mathrm{Tan}(\partial\{|\mathbf{u}|>0\},x_{0})=\{x:x\cdot\nu(x_{0})=0\}\). If, in addition, \(x_{0}\) is a Lebesgue point for Radon measure \(q^{i}\mathcal{H}^{n-1}\Big{\{}\partial\{|\mathbf{u}|>0\}\), that is_ \[\int_{B_{r}(x_{0})\cap\partial\{|\mathbf{u}|>0\}}|q^{i}-q^{i}(x_{0})|\,d \mathcal{H}^{n-1}=o(r^{n-1}),\quad\text{ as }r\to 0, \tag{14}\] _then \(q^{i}(x_{0})=Q(x_{0})\) and_ \[\mathbf{u}(x_{0}+x)=(-x\cdot\nu(x_{0}))_{+}\,\mathbf{a}_{x_{0}}+o(|x|),\quad \text{ as }x\to 0,\] _for some vector \(\mathbf{a}_{x_{0}}=(\alpha^{1},\ldots,\alpha^{m})\) that_ \[|\mathbf{a}_{x_{0}}|_{p}^{p}=(\alpha^{1})^{p}+\cdots+(\alpha^{m})^{p}=\frac{ 1}{p-1}Q(x_{0})^{p}. \tag{15}\] Proof.: Without loss of generality assume that \(\nu(x_{0})=\mathbf{e}^{n}\). Let \(\mathbf{u}_{k}\) be a blow-up sequence with respect to balls \(B_{r_{k}}(x_{0})\), with blow-up limit \(\mathbf{u}_{0}\). Since \(\nu(x_{0})\) is the normal vector to \(\partial\{|\mathbf{u}|>0\}\) at \(x_{0}\), \[\int_{B_{r}(x_{0})}\left|\chi_{\{|\mathbf{u}|>0\}}\right|-\chi_{\{x:(x-x_{0}) +\nu(x_{0})<0\}}\big{|}\ dx=o(r^{n}),\quad\text{ as }r\to 0.\] This along with (13) implies \(\chi_{\{|\mathbf{u}_{0}|>0\}}=\chi_{\{x_{n}<0\}}\) almost every where in \(\mathbb{R}^{n}\). By Lemma 6.2 we know that \(\mathbf{u}_{0}\) is an absolute minimizer of \(J_{0}\) and so continuous. Then \(\{|\mathbf{u}_{0}|>0\}=\{x_{n}<0\}\). This proves that \(\{x_{n}=0\}\) is the topological tangent plane to \(\partial\{|\mathbf{u}|>0\}\) at \(x_{0}\). Now let \[\phi(x)=\min\left(1,\max(0,2-|x_{n}|)\right)\eta(x^{\prime}),\] where \(0\leq\eta\in C_{0}^{\infty}(B_{R}^{\prime})\) and \(B_{R}^{\prime}\) is \((n-1)\)-dimensional ball with radius \(R\) (\(R\) is arbitrary and fixed). 
Denote \(\phi_{k}(x):=r_{k}\phi(\frac{x-x_{0}}{r_{k}})\) and write \[-\int_{\mathbb{R}^{n}}|\nabla u_{0}^{i}|^{p-2}\nabla u_{0}^{i} \cdot\nabla\phi\,dx=-\lim_{k\to\infty}\int_{\mathbb{R}^{n}}|\nabla u_{k}^{i}|^ {p-2}\nabla u_{k}^{i}\cdot\nabla\phi\,dx\] \[=-\lim_{k\to\infty}r_{k}^{-n}\int_{\mathbb{R}^{n}}|\nabla u^{i}|^ {p-2}\nabla u^{i}\cdot\nabla\phi_{k}\,dx=\lim_{k\to\infty}r_{k}^{-n}\int_{ \mathbb{R}^{n}}\Delta_{p}u^{i}\phi_{k}\,dx\] \[=\lim_{k\to\infty}r_{k}^{-n}\int_{\mathbb{R}^{n}}q^{i}\phi_{k}\, d\mathcal{H}^{n-1}\,\partial\{|\mathbf{u}|>0\}=\lim_{k\to\infty}\int_{ \mathbb{R}^{n}}q^{i}(x_{0}+r_{k}x)\phi(x)\chi_{\partial\{|\mathbf{u}_{k}|>0\} }\,d\mathcal{H}^{n-1}\] \[=\lim_{k\to\infty}\int_{\mathbb{R}^{n}}q^{i}(x_{0})\phi(x)\chi_{ \partial\{|\mathbf{u}_{k}|>0\}}\,d\mathcal{H}^{n-1}=q^{i}(x_{0})\int_{\{x_{n}=0 \}}\eta(x^{\prime})\,dx^{\prime},\] where we have used assumption (14) and property (12). Therefore, for any test function \(\zeta\in C_{0}^{\infty}(B_{R})\) we have \[-\int_{B_{R}\cap\{x_{n}<0\}}|\nabla u_{0}^{i}|^{p-2}\nabla u_{0}^{i}\cdot\nabla \zeta\,dx=q^{i}(x_{0})\int_{B_{R}^{\prime}}\zeta(x^{\prime},0)\,dx^{\prime}.\] Since \(\Delta_{p}u_{0}^{i}=0\) in \(\{x_{n}<0\}\), from boundary regularity it follows that \[|\nabla u_{0}^{i}|^{p-2}\partial_{n}u_{0}^{i}=-q^{i}(x_{0})\quad\text{ on }\{x_{n}=0\},\] in the classical sense. We need to show that \[u_{0}^{i}(x)=\alpha^{i}(-x_{n})_{+},\quad\text{ where }\alpha^{i}:=\left(q^{i}(x_{0}) \right)^{1/(p-1)}. \tag{16}\] To see this, define \(w_{0}\) by \[w_{0}(x):=\left\{\begin{array}{ll}u_{0}^{i}(x),&\text{ in }x_{n}\leq 0,\\ -u_{0}^{i}(x^{\prime},-x_{n}),&\text{ in }x_{n}>0.\end{array}\right.\] It is obvious that \(w_{0}\) is \(p\)-harmonic in whole \(\mathbb{R}^{n}\) as well as \[\|\nabla w_{0}\|_{L^{\infty}(\mathbb{R}^{n})}=\|\nabla u_{0}^{i}\|_{L^{\infty} (\mathbb{R}^{n})}\leq\|\nabla u^{i}\|_{L^{\infty}(B_{r}(x_{0}))},\quad\text{ for any }r>0.\] By Liouville's theorem we conclude that \(w_{0}\) is a linear function. The boundary value on \(x_{n}=0\), \((u_{0}^{i}=0\) and \(\partial_{n}u_{0}^{i}=-\alpha^{i})\) shows that \(w_{0}(x)=-\alpha^{i}x_{n}\). This proves (16) and shows that \[u^{i}(x_{0}+x)=\alpha^{i}(-x_{n})_{+}+o(|x|),\quad\text{ as }x\to 0.\] We just have to show (15). To do this, note that \(\mathbf{u}_{0}\) is an absolute minimizer of \(J_{0}\). Apply Lemma 6.3 for \(\mathbf{u}_{0}\) and some \(\Psi\in C_{c}^{\infty}(\mathbb{R}^{n})\) \[0 =\int_{\{x_{n}<0\}}\operatorname{div}\left[\sum_{i=1}^{m}\left(| \nabla u_{0}^{i}|^{p}\Psi-p|\nabla u_{0}^{i}|^{p-2}(\nabla u_{0}^{i}\cdot\Psi) \nabla u_{0}^{i}\right)+Q(x_{0})^{p}\Psi\right]dx\] \[=\int_{\{x_{n}=0\}}\sum_{i=1}^{m}\left(|\nabla u_{0}^{i}|^{p}( \Psi\cdot\mathbf{e}^{n})-p|\nabla u_{0}^{i}|^{p-2}(\nabla u_{0}^{i}\cdot\Psi) \partial_{n}u_{0}^{i}\right)+Q(x_{0})^{p}(\Psi\cdot\mathbf{e}^{n})\,d\mathcal{ H}^{n-1}\] \[=\int_{\{x_{n}=0\}}\left(\sum_{i=1}^{m}(1-p)(\alpha^{i})^{p}+Q(x_ {0})^{p}\right)(\Psi\cdot\mathbf{e}^{n})\,d\mathcal{H}^{n-1}.\] Thus (15) will be obtained. ## 7. Regularity of free boundary **Definition 7.1**.: _Let \(\mathbf{u}\in C(\Omega,\mathbb{R}^{m})\). 
We say that the boundary condition \(\nabla|\mathbf{u}|_{p}=g\) on \(\partial|\{\mathbf{u}|>0\}|\) holds in viscosity sense, if_ * _For every differentiable function_ \(\phi:\mathbb{R}^{n}\to\mathbb{R}\) _that touches_ \(|\mathbf{u}|_{p}\) _from below in some_ \(x_{0}\in\partial\{|\mathbf{u}|>0\}\)_, that is_ \[|\mathbf{u}(x_{0})|_{p}=\phi(x_{0}),\quad\text{ and }\quad|\mathbf{u}|_{p}\geq\phi\ \text{ in }\{|\mathbf{u}|>0\}\cap B_{r}(x_{0})\] _for some_ \(r>0\)_, we have_ \(|\nabla\phi(x_{0})|\leq g(x_{0})\) * _For every differentiable function_ \(\phi:\mathbb{R}^{n}\to\mathbb{R}\) _that touches_ \(|\mathbf{u}|_{p}\) _from above in some_ \(x_{0}\in\partial\{|\mathbf{u}|>0\}\)_, that is_ \[|\mathbf{u}(x_{0})|_{p}=\phi(x_{0}),\quad\text{ and }\quad|\mathbf{u}|_{p}\leq\phi\ \text{ in }\{|\mathbf{u}|>0\}\cap B_{r}(x_{0})\] _for some_ \(r>0\)_, we have_ \(|\nabla\phi(x_{0})|\geq g(x_{0})\)_._ **Lemma 7.2**.: _Let \(\mathbf{u}\) be a (local) minimizer, then the boundary condition_ \[\nabla|\mathbf{u}|_{p}=\frac{1}{(p-1)^{1/p}}Q,\quad\text{ on }\partial\{| \mathbf{u}|>0\}.\] _holds in the viscosity sense._ Proof.: We show that the boundary condition holds on every point \(x_{0}\in\partial\{|\mathbf{u}|>0\}\). Suppose \(\phi\) touches \(|\mathbf{u}|_{p}\) from below at \(x_{0}\). Consider the blow-up sequences \[\mathbf{u}_{k}(x)=\frac{\mathbf{u}(x_{0}+r_{k}x)}{r_{k}}\quad\text{ and }\quad\phi_{k}(x)=\frac{\phi(x_{0}+r_{k}x)}{r_{k}},\] where \(r_{k}\searrow 0\). Observe that \[\phi_{k}(0)=|\mathbf{u}_{k}(0)|_{p}=0,\quad\phi_{k}(x)\leq|\mathbf{u}_{k}(x)| _{p},\] and up to a subsequence we have \[\nabla\phi(x_{0})\cdot x=\lim_{k\to\infty}\phi_{k}(x)\leq\lim_{k\to\infty}| \mathbf{u}_{k}(x)|_{p}=|\mathbf{u}_{0}(x)|_{p}\qquad\text{ in }\mathbb{R}^{n}. \tag{17}\] If \(\nabla\phi(x_{0})=0\), the viscosity condition holds trivially. Otherwise, the non-coincidence set \(\{|\mathbf{u}_{0}|>0\}\) contains half-space \(\{x:\nabla\phi(x_{0})\cdot x>0\}\). On the other hand, \(\mathbf{u}_{0}\) is minimizer of \(J_{0}\) (Lemma 6.2) and by Lemma 2.3 every nontrivial component of \(\mathbf{u}_{0}\), say \(u_{0}^{i}\), is positive in \(\{x:\nabla\phi(x_{0})\cdot x>0\}\). According to Lemma B.1, \(u_{0}^{i}(x)=\alpha^{i}(\nabla\phi(x_{0})\cdot x)+o(|x|)\) for some \(\alpha^{i}\). Thus any blowup of \(\mathbf{u}_{0}\) at \(x=0\) must be of the form \(\mathbf{u}_{00}(x)=\mathbf{a}_{0}(\nabla\phi(x_{0})\cdot x)\) where \(\mathbf{a}_{0}=(\alpha^{1},\cdots,\alpha^{m})\). Again apply Lemma 6.2 along with Lemma 6.3, we get \[|\mathbf{a}_{0}|_{p}|\nabla\phi(x_{0})|=\frac{1}{(p-1)^{1/p}}Q(x_{0}).\] Thus (17) yields that \[\nabla\phi(x_{0})\cdot x\leq|\mathbf{u}_{0}(x)|_{p}=|\mathbf{a}_{0}|_{p}| \nabla\phi(x_{0})\cdot x|+o(|x|),\] and so \[|\nabla\phi(x_{0})|\leq\frac{1}{(p-1)^{1/p}}Q(x_{0}).\] The same argument holds when \(\phi\) touches \(|\mathbf{u}_{0}|_{p}\) from above. 
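A quick consistency check, sketched here for the reader: at a point \(x_{0}\) as in Theorem 6.5 one has \(\mathbf{u}(x_{0}+x)=(-x\cdot\nu(x_{0}))_{+}\,\mathbf{a}_{x_{0}}+o(|x|)\), hence \[|\mathbf{u}(x_{0}+x)|_{p}=|\mathbf{a}_{x_{0}}|_{p}\,(-x\cdot\nu(x_{0}))_{+}+o(|x|),\qquad|\mathbf{a}_{x_{0}}|_{p}=\Big{(}\tfrac{1}{p-1}Q(x_{0})^{p}\Big{)}^{1/p}=\frac{Q(x_{0})}{(p-1)^{1/p}},\] by (15). The slope of \(|\mathbf{u}|_{p}\) across the free boundary at such a point is therefore exactly the constant appearing in the viscosity boundary condition of Lemma 7.2.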
**Definition 7.3**.: _A domain \(\Omega\subset\mathbb{R}^{n}\) is called non-tangentially accessible domain (NTA) with parameters \(M\geq 1\) and \(R_{0}>0\) if_ * \(\Omega\) _satisfies the corkscrew condition, that is, for any_ \(x\in\partial\Omega\) _and_ \(r\in(0,R_{0})\) _there exists_ \(a_{r}(x)\in\Omega\cap B_{r}(x)\) _such that_ \(M^{-1}r<\mathrm{dist}(a_{r}(x),\partial\Omega)\)_._ * \(\mathbb{R}^{n}\setminus\Omega\) _satisfies the corkscrew condition._ * _If_ \(x\in\partial\Omega\) _and_ \(x_{1},x_{2}\in B_{r}(x)\cap\Omega\) _for_ \(0<r<R_{0}\)_, then there exists a rectifiable curve_ \(\gamma:[0,1]\to\Omega\) _with_ \(\gamma(0)=x_{1}\) _and_ \(\gamma(1)=x_{2}\) _such that_ \(\mathcal{H}^{1}(\gamma)\leq M|x_{1}-x_{2}|\) _and_ \[\min\left\{\mathcal{H}^{1}(\gamma([0,t])),\mathcal{H}^{1}(\gamma([t,1]))\right\} \leq M\mathrm{dist}(\gamma(t),\partial\Omega),\quad\text{ for every }t\in[0,1],\] _where_ \(\mathcal{H}^{1}\) _denotes length or the one-dimensional Hausdorff measure._ **Definition 7.4**.: _We say that \(x_{0}\in\partial\{|\mathbf{u}|>0\}\) is a regular point of the free boundary if for every \(\epsilon>0\) there are \(r<1\) and a vector \(\mathbf{a}\in\mathbb{R}^{m}\) and unit vector \(\nu\in\mathbb{R}^{n}\) such that_ \[\|\mathbf{u}_{r,x_{0}}-(x\cdot\nu)_{+}\mathbf{a}\|_{L^{m}(B_{1})}\leq\epsilon.\] _We denote the set of all regular points by \(\mathcal{R}_{\mathbf{u}}\). Theorem 6.5 proves that \(\mathcal{H}^{n-1}(\partial\{|\mathbf{u}|>0\}\setminus\mathcal{R}_{\mathbf{u}})=0\)._ **Theorem 7.5**.: _Let \(\mathbf{u}=(u^{1},\ldots,u^{m})\) be a (local) minimizer and \(x_{0}\in\mathcal{R}_{\mathbf{u}}\). Furthermore, assume that \(B_{r_{0}}(x_{0})\cap\{|\mathbf{u}|>0\}\) is NTA domain. Then \(\mathcal{R}_{\mathbf{u}}\cap B_{r}(x_{0})\), for some \(0<r\leq r_{0}\), is \(C^{1,\alpha}\) for a universal exponent \(0<\alpha<1\)._ Proof.: We may assume that \(u^{1}>0\) in \(B_{r_{0}}(x_{0})\cap\{|\mathbf{u}|>0\}\). First we show that there is a Holder function \(g:B_{r}(x_{0})\cap\partial\{|\mathbf{u}|>0\}\to[c,1]\) for some \(0<r\leq r_{0}\) and \(0<c\leq 1\) such that \(u^{1}\) is a viscosity solution to the problem \[\Delta_{p}u^{1}=0,\quad\text{in }\{u^{1}>0\}\cap B_{r},\] \[|\nabla u^{1}|=\frac{gQ}{(p-1)^{1/p}},\quad\text{on }\partial\{u^{1}>0\}\cap B_{r}. \tag{18}\] Since \(B_{r_{0}}(x_{0})\cap\{|\mathbf{u}|>0\}\) is NTA domain, the boundary Harnack inequality (see [18]) implies that \(g_{i}:=u^{i}/u^{1}\) is Holder continuous in \(\overline{|\mathbf{u}|>0}\cap B_{r}\) for some \(r\leq r_{0}\). Define \[g:=(1+g_{2}^{p}+\cdots+g_{m}^{p})^{-1/p},\] and observe that \(u^{1}=g|\mathbf{u}|_{p}\). Suppose now the test function \(\phi\) is touching \(u^{1}\) from below in a point \(y\in\partial\{|\mathbf{u}|>0\}\). For \(\rho\) small enough, choose a constant \(C>0\) such that \[\frac{1}{g(x)}\geq\frac{1}{g(y)}-C|x-y|^{\mu}\geq 0,\ \ \text{for every }x\in\overline{|\mathbf{u}|>0}\cap B_{\rho},\] where \(\mu\) is Holder exponent of \(g\). Set \(\psi(x)=\phi(x)(1/g(y)-C|x-y|^{\mu})\), we get \(\psi(y)=|\mathbf{u}(y)|_{p}=0\) (note that since \(\phi(y)=0\), so \(\psi\) is differentiable at \(y\)) and \[\psi(x)\leq u^{1}(x)(\frac{1}{g(y)}-C|x-y|^{\mu})=g(x)(\frac{1}{g(y)}-C|x-y|^{ \mu})|\mathbf{u}(x)|_{p}\leq|\mathbf{u}(x)|_{p}.\] Therefore, \(\psi\) touches \(|\mathbf{u}|_{p}\) at \(y\) from below. By Lemma 7.2 \[|\nabla\psi(y)|\leq\frac{Q(y)}{(p-1)^{1/p}}.\] Note that \(\nabla\psi(y)=\frac{1}{g(y)}\nabla\phi(y)\) to see the boundary condition (18) in viscosity sense. 
The regularity of the free boundary now follows from the known results on the regularity of the one-phase scalar problem (18); see [15]. Our next result improves the theorem above in the sense that, for \(p\) close to \(2\), we can remove the NTA assumption. The main obstruction to verifying the NTA property directly is the lack of an ACF-monotonicity formula, which in the classical setting is used to prove the connectivity argument behind the NTA property; see the proof of Proposition A.3. **Theorem 7.6**.: _Let \(\mathbf{u}=(u^{1},\ldots,u^{m})\) be a (local) minimizer. Then there is \(\epsilon_{0}>0\), such that for any \(p\in(2-\epsilon_{0},2+\epsilon_{0})\) we have_ * _The regular set_ \(\mathcal{R}_{\mathbf{u}}\) _is locally_ \(C^{1,\alpha}\)_._ * _In dimensions_ \(2,3,4\) _the free boundary is_ \(C^{1,\alpha}\)_._ _Here \(0<\alpha<1\) is a universal exponent._ Proof.: To prove (i) it suffices, by virtue of Theorem 7.5, to show that the free boundary is NTA when \(\varepsilon_{0}\) is small enough. This, however, is a consequence of Proposition A.5 in Appendix A, by choosing \(\varepsilon_{0}\) accordingly. Turning to statement (ii), we recall that in these dimensions, and for \(p=2\), free boundaries are locally \(C^{1,\alpha}\); for \(n=2\) this was shown in [2], for \(n=3\) see [6], for \(n=4\) see [12]. Now for \(p\approx 2\), the free boundary has to be close to that of the case \(p=2\), and hence flat. More specifically, given a free boundary point \(x_{0}\) for the \(p\)-problem, and \(p\) close enough to \(2\), the free boundaries of the \(p\)-problem and of the case \(p=2\) have Hausdorff distance \(\delta\ll r\) in \(B_{r}(x_{0})\), and in particular the \(p\)-free boundary is \((\delta/2)\)-flat. ## Appendix A Non-tangentially accessible domain (NTA) We note that in Definition 7.3, the condition (_iii_) can be replaced by the _Harnack chain condition_ (see [4]), i.e., * Given \(\varepsilon>0\) and \(x_{1},x_{2}\in\Omega\) such that \(\mathrm{dist}(x_{i},\partial\Omega)\geq\varepsilon\), \(i=1,2\), and \(|x_{1}-x_{2}|\geq\tilde{C}\varepsilon\), we can find points \(x_{1}=y_{1},y_{2},\cdots,y_{\ell}=x_{2}\) for which * \(B_{\varepsilon}(y_{i})\subset\Omega\) for \(i=1,\cdots,\ell\). * \(B_{\varepsilon}(y_{i})\cap B_{\varepsilon}(y_{i+1})\neq\emptyset\) for \(i=1,\cdots,\ell-1\). * The length of the chain, \(\ell\), depends on \(\tilde{C}\) but not on \(\varepsilon\). We need an analogue of Theorem 4.1 in [1] to show that the non-coincidence set is NTA. **Lemma A.1**.: _Let \(\mathbf{u}\) be a minimizer and suppose \(0<|\mathbf{u}(x_{0})|\). Define \(\delta=\mathrm{dist}(x_{0},\Gamma)\), \(\delta_{1}=\mathrm{dist}(x_{0},\{|\mathbf{u}|\leq\frac{1}{2}|\mathbf{u}(x_{0})|_{\infty}\})\), and suppose \(B(x_{0},\delta)\subset\Omega\). 
Then, there exist universal constants \(\lambda>1>\sigma\) such that_ * \(\sigma\delta\leq\delta_{1}\leq\delta\)_._ * _For some_ \(y\in\partial B_{\delta_{1}}(x_{0})\)_,_ \(|\mathbf{u}(y)|_{\infty}\geq\lambda|\mathbf{u}(x_{0})|_{\infty}\)_._ Proof.: According to relation (6), \[c_{0}\delta\leq|\mathbf{u}(x_{0})|\leq C_{0}\delta,\] and if \(u^{i}(x_{0})=\max_{1\leq j\leq m}u^{j}(x_{0})=|\mathbf{u}(x_{0})|_{\infty}\), \[\frac{c_{0}\delta}{\sqrt{m}}\leq u^{i}(x_{0})\leq C_{0}\delta.\] If \(z\in\partial\{|\mathbf{u}|\leq\frac{1}{2}|\mathbf{u}(x_{0})|_{\infty}\}\) such that \(|z-x_{0}|=\delta_{1}\), then \[\frac{1}{2}u^{i}(x_{0})=\frac{1}{2}|\mathbf{u}(x_{0})|_{\infty}\leq|\mathbf{u }(x_{0})|-|\mathbf{u}(z)|\leq|\mathbf{u}(x_{0})-\mathbf{u}(z)|\leq C_{0}\delta _{1},\] where we have used \(|\mathbf{u}(z)|=\frac{1}{2}|\mathbf{u}(x_{0})|_{\infty}\leq\frac{1}{2}| \mathbf{u}(x_{0})|\). Thus (\(i\)) holds for \(\sigma=\frac{c_{0}}{2C_{0}\sqrt{m}}\). To see (_ii_), note that \(v(x):=u^{i}(x_{0}+\delta_{1}x)/u^{i}(x_{0})\) is a \(p\)-harmonic function in \(B_{\delta/\delta_{1}}\) such that \(v(0)=1\), \(v(z)\leq\frac{1}{2}\) for some \(\hat{z}\in\partial B_{1}\). In addition, the Lipschitz constant of \(v\) is bounded by \(C_{0}\delta_{1}/u^{i}(x_{0})\leq C_{0}\delta/u^{i}(x_{0})\leq C_{0}\sqrt{m}/c_ {0}=1/2\sigma\). We claim that there is a universal constant \(\lambda>1\) such that \(v(\hat{y})\geq\lambda\) for some \(\hat{y}\in\partial B_{1}\). Otherwise, we find a sequence of \(p\)-harmonic functions \(v_{k}\) with \[\|v_{k}\|_{L^{\infty}(B_{1})}\leq 1+\frac{1}{k},\quad\|\nabla v_{k}\|_{L^{ \infty}(B_{1})}\leq\frac{1}{2\sigma},\quad v_{k}(0)=1,\quad v_{k}(z_{k})\leq \frac{1}{2},\] for some \(|z_{k}|=1\). Then there is a subsequence converging in \(C^{1,\alpha}(\overline{B}_{1})\) to a \(p\)-harmonic \(v_{0}\) where \(v_{0}(0)=1\), \(v_{0}(z_{0})\leq 1/2\), \(\|v_{0}\|_{L^{\infty}(B_{1})}=1\). This is a contradiction with the maximum principle. Therefore, we have found \(y\in\partial B_{\delta_{1}}(x_{0})\) such that \[|\mathbf{u}(y)|_{\infty}\geq u^{i}(y)\geq\lambda u^{i}(x_{0})=\lambda| \mathbf{u}(x_{0})|_{\infty}.\qed\] **Lemma A.2**.: _There exists a universal constant \(\tilde{c}_{0}\) such that if \(x_{0}\in\partial|\{|\mathbf{u}|>0\},x_{1}\in B_{r/2}(x_{0})\) and \(A_{r}\) is the connected component of \(\{|\mathbf{u}|>\frac{1}{2}|\mathbf{u}(x_{1})|_{\infty}\}\cap B_{r}(x_{0})\) containing \(x_{1}\), then_ \[\|\mathbf{u}\|_{L^{\infty}(A_{r})}\geq\tilde{c}_{0}r.\] Proof.: We use Lemma A.1 to inductively define a sequence of points \(x_{1},x_{2},\ldots,x_{k},x_{k+1}\) so that for \(j=1,\ldots,k\), 1. \(|x_{j+1}-x_{j}|=\delta_{j}=\operatorname{dist}(x_{j},\{|\mathbf{u}|\leq\frac{ 1}{2}|\mathbf{u}(x_{j})|_{\infty}\})\), 2. \(|\mathbf{u}(x_{i+1})|_{\infty}\geq\lambda|\mathbf{u}(x_{j})|_{\infty}\), 3. \(B_{\delta_{j}}(x_{j})\subset\{|\mathbf{u}|>\frac{1}{2}|\mathbf{u}(x_{1})|_{ \infty}\}\). By \((ii)\), we know that this process cannot continue indefinitely without stepping out of \(B_{r}(x_{0})\). So, we stop at the first \(k\) for which \(B_{\delta_{k+1}}(x_{k+1})\notin B_{r}(x_{0})\). 
Also, by Lemma A.1, we know that \[\delta_{j}\leq\operatorname{dist}(x_{j},\Gamma)\leq\frac{\delta_{j}}{\sigma},\] and therefore by (6), \[c_{0}\delta_{j}\leq|\mathbf{u}(x_{j})|\leq\frac{C_{0}\delta_{j}}{\sigma}.\] Now applying \((ii)\), we obtain (recall that \(\sigma=\frac{c_{0}}{2C_{0}\sqrt{m}}\)) \[\delta_{j}\leq\frac{\sqrt{m}}{c_{0}}|\mathbf{u}(x_{j})|_{\infty}\leq\frac{ \sqrt{m}\lambda^{-\ell}}{c_{0}}|\mathbf{u}(x_{j+\ell})|_{\infty}\leq\frac{ \lambda^{-\ell}}{2\sigma^{2}}\delta_{j+\ell}.\] Therefore, \[|x_{k}-x_{0}|\leq|x_{1}-x_{0}|+\sum_{j=1}^{k-1}|x_{j+1}-x_{j}|\leq\frac{r}{2}+ \sum_{j=1}^{k-1}\delta_{j}\leq\frac{r}{2}+\frac{\delta_{k}}{2\sigma^{2}}\sum _{j=1}^{k-1}\lambda^{j-k}\leq\frac{r}{2}+\frac{\delta_{k}}{2\sigma^{2}(\lambda -1)}.\] On the other hand, \[\delta_{k+1}= \operatorname{dist}(x_{k+1},\{|\mathbf{u}|\leq\frac{1}{2}| \mathbf{u}(x_{k+1})|_{\infty}\})\] \[\leq \delta_{k}+\operatorname{dist}(x_{k},\{|\mathbf{u}|\leq\frac{1}{ 2}|\mathbf{u}(x_{k+1})|_{\infty}\})\] \[\leq \delta_{k}+\operatorname{dist}(x_{k},\{|\mathbf{u}|\leq\frac{1}{ 2}|\mathbf{u}(x_{k})|_{\infty}\})=2\delta_{k}.\] Since \(B_{\delta_{k+1}}(x_{k+1})\not\subset B_{r}(x_{0})\), we get \[r\leq|x_{k+1}-x_{0}|+\delta_{k+1}\leq|x_{k}-x_{0}|+3\delta_{k}\leq\frac{r}{2} +c\delta_{k}.\] Thus \(\gamma r\leq\delta_{k}\) for the universal constant \(\gamma\). It necessitates that \(B_{\gamma r}(x_{k})\subset A_{r}\) and also \[|\mathbf{u}(x_{k})|\geq c_{0}\operatorname{dist}(x_{k},\Gamma)\geq c_{0} \gamma r.\] In particular, \[\max_{A_{r}}|\mathbf{u}|\geq c_{0}\gamma r.\qed\] **Proposition A.3**.: _There is \(\varepsilon_{0}>0\) such that for any \(p\in(2-\varepsilon_{0},2+\varepsilon_{0})\) there is no global minimizer of \(J_{0}\) (Lemma 6.2) that \(\mathbf{u}(0)=0\) and \(\{|\mathbf{u}|>0\}\cap B_{R}\) is disconnected for every \(R>0\)._ Proof.: First, for \(p=2\). We apply the monotonicity formula and Lemma 4.4 in [1]. Let \(A_{1}\) and \(A_{2}\) two different connected components of \(\{|\mathbf{u}_{\infty}|>0\}\). According to Lemma 2.3, we may choose a positive component of vector \(\mathbf{u}_{\infty}\) for each set \(A_{i}\), \(i=1,2\). Thus we find a function \(v\) which is harmonic in \(A_{1}\cup A_{2}\) and vanishes in \(\{|\mathbf{u}_{\infty}|=0\}\). Also, since \(\mathbf{u}_{\infty}\) is a minimizer, we know that (Theorem 4.2) \[|B_{r}\setminus(A_{1}\cup A_{2})|\geq c|B_{r}|.\] From here we infer that \(\Phi(r)/r^{\beta}\) is a non-decreasing function of \(r\) for some positive constant \(\beta>0\)[1, Lemma 4.4], where \[\Phi(r)=\frac{1}{r^{4}}\left(\int_{B_{r}\cap A_{1}}|\nabla v|^{2}|x|^{2-n}\, dx\right)\left(\int_{B_{r}\cap A_{2}}|\nabla v|^{2}|x|^{2-n}\,dx\right).\] Since \(v\) is Lipschitz, say with constant \(C_{0}\), we have the bound \[\Phi(r)\leq C_{0}^{4},\] and thus \[\Phi(1)\leq r^{-\beta}\Phi(r)\leq C_{0}^{4}r^{-\beta},\] which can not be valid for sufficiently large value of \(r\). The contradiction proves the proposition when \(p=2\). To prove the proposition for \(p\) close to \(2\) we argue by contradiction. Assume that there is a sequence of global minimizers \(\mathbf{u}_{i}\) for \(p_{i}\to 2\) that \(\mathbf{u}(0)=0\) and \(\{|\mathbf{u}_{i}|>0\}\cap B_{R}\) is not connected for any \(R>0\). We remark that all our results are stated in a slightly more general form, with constants depending uniformly for all \(p\in[1/2,3]\); see for the details [9]. In fact, we will have a uniform Lipschitz constant for all \(p\) in this compact interval as well as the nondegeneracy constant. 
Also, the constant \(c\) in Theorem 4.2 will be uniform. Therefore, we may choose a convergence subsequence \(\mathbf{u}_{i}\to\tilde{\mathbf{u}}_{0}\). By the same reasoning as the proof of Lemma 6.2 we get that \(\tilde{\mathbf{u}}_{0}\) is a minimizer for \(p=2\). On the other hand, \(\{|\mathbf{u}_{0}|>0\}\cap B_{R}\) is not connected for every \(R\) which contradicts the later part of the proof. **Lemma A.4**.: _Let \(\varepsilon_{0}>0\) be the constant defined in Proposition A.3. Then for any \(p\in(2-\varepsilon_{0},2+\varepsilon_{0})\) there are constants \(M\geq 1\) and \(R_{0}>0\) such that if \(x_{0}\in\partial\{|\mathbf{u}|>0\}\), \(y\in B_{R_{0}}(x_{0})\cap\partial\{|\mathbf{u}|>0\}\) and \(x_{1},x_{2}\in B_{r}(y)\) for some \(r<R_{0}\), then \(\{|\mathbf{u}|>d\}\cap B_{Mr}(y)\) has a connected component containing \(x_{1}\) and \(x_{2}\), where \(d=\frac{1}{2}\min(|\mathbf{u}(x_{1})|_{\infty},|\mathbf{u}(x_{2})|_{\infty})\)._ Proof.: Fix constant \(p\) and assume the contrary there are sequences \(\partial\{|\mathbf{u}|>0\}\ni y_{i}\to x_{0}\), \(x_{1}^{i},x_{2}^{i}\in B_{r_{i}}(y_{i})\), \(r_{i}\to 0\) and \(M_{i}\to\infty\) such that \(x_{1}^{i}\) and \(x_{2}^{i}\) are not connected in \(\{|\mathbf{u}|>d_{i}\}\cap B_{M_{i}r_{i}}(y_{i})\) where \(d_{i}=\frac{1}{2}\min(|\mathbf{u}(x_{1}^{j})|_{\infty},|\mathbf{u}(x_{2}^{i})|_ {\infty})\). Consider the blowup sequence \[\mathbf{u}(y_{i}+r_{i}x)/r_{i}\to\mathbf{u}_{0}(x),\qquad d_{i}/r_{i}\to a,\] Lemma A.2 yields that \(\{|\mathbf{u}_{0}|>a\}\cap B_{R}\) has at least two connected components for any \(R>0\). Note that \(a<\infty\) due to the Lipschitz regularity. Now consider a blowdown of \(\mathbf{u}_{0}\) \[\mathbf{u}_{0}(R_{i}x)/R_{i}\to\mathbf{u}_{\infty}(x),\qquad R_{i}\to\infty,\] then \(\{|\mathbf{u}_{\infty}|>0\}\cap B_{R}\) must have at least two connected components for any \(R>0\). On the other hand, \(\mathbf{u}_{\infty}\) is a minimizer of \(J_{0}\); Lemma 6.2. This contradicts Proposition A.3. **Proposition A.5**.: _Let \(\mathbf{u}\) be a minimizer when \(p\in(2-\varepsilon_{0},2+\varepsilon_{0})\) and \(\varepsilon_{0}\) is the constant defined in Proposition A.3. Suppose \(x_{0}\in\partial\{|\mathbf{u}|>0\}\). Then \(\{|\mathbf{u}|>0\}\) is NTA in a neighborhood of \(x_{0}\)._ Proof.: **Step 1:** (Property \((i)\), the corkscrew condition for \(\{|\mathbf{u}|>0\}\).) Assume that \(M>\|\nabla\mathbf{u}\|_{L^{\infty}(B_{1})}/c\) where \(c\) is the nondegeneracy constant defined in Lemma 4.1. If the condition fails at a point \(x\in\partial\{|\mathbf{u}|>0\}\), then for any \(y\in B_{r}(x)\cap\{|\mathbf{u}|>0\}\) we must have \(\operatorname{dist}(y,\partial\{|\mathbf{u}|>0\})\leq M^{-1}r\). Thus \[|\mathbf{u}(y)|\leq M^{-1}r\|\nabla\mathbf{u}\|_{L^{\infty}(B_{1})}\leq cr\] and by nondegeneracy, Lemma 4.1, \(\mathbf{u}=0\) in \(B_{\kappa r}(x)\). It contradicts that \(x\in\partial\{|\mathbf{u}|>0\}\). **Step 2:** (Property \((ii)\), the corkscrew condition for \(\{|\mathbf{u}|=0\}\).) Assume the contrary, \(\{|\mathbf{u}|=0\}\) does not satisfy the corkscrew condition in any neighborhood of \(x_{0}\) for any constant \(M\). Thus there is a sequence \(x_{j}\to x_{0}\) and \(r_{j}\to 0\) such that we can not find point \(a_{r_{j}}(x_{j})\) with the desired property. On the other hand, Theorem 4.2 infer that the interior of \(\{|\mathbf{u}|=0\}\) is nonempty. Let \(B_{r_{j}}(y_{j})\) be the biggest ball inside \(B_{r_{j}}(x_{j})\cap\{|\mathbf{u}|=0\}\), then we must have \(\tau_{j}/r_{j}\to 0\). 
Now consider the blowup \(\mathbf{u}_{j}(x):=\mathbf{u}(x_{j}+r_{j}x)/r_{j}\to\mathbf{u}_{0}(x)\); it will be a minimizer whose coincidence set has no interior in \(B_{1}\). This contradicts Theorem 4.2. **Step 3:** (Harnack chain condition.) Suppose that \(x_{1}\) and \(x_{2}\) are such that for some \(\tilde{C}>0\) and \(\varepsilon>0\) we have \[|x_{1}-x_{2}|<\tilde{C}\varepsilon,\qquad B_{\varepsilon}(x_{i})\subset\{|\mathbf{u}|>0\},\ i=1,2.\] We may assume, without loss of generality, that \(\operatorname{dist}(x_{1},\partial\{|\mathbf{u}|>0\})\leq\operatorname{dist}(x_{2},\partial\{|\mathbf{u}|>0\})=\delta_{0}\). If \(\delta_{0}\geq\tilde{C}\varepsilon\), then \(x_{1}\in B_{\tilde{C}\varepsilon}(x_{2})\subset\{|\mathbf{u}|>0\}\) and we can easily find the Harnack chain. So, consider the case \(\delta_{0}<\tilde{C}\varepsilon\) and choose \(x\in\partial\{|\mathbf{u}|>0\}\) such that \(|x-x_{2}|=\delta_{0}\). Then \(x_{1},x_{2}\in B_{r}(x)\) for \(r=2\tilde{C}\varepsilon\). By Lemma A.4, \(\{|\mathbf{u}|>d\}\cap B_{Mr}(x)\) has a connected component containing \(x_{1}\) and \(x_{2}\), where \(d=\frac{1}{2}\min(|\mathbf{u}(x_{1})|_{\infty},|\mathbf{u}(x_{2})|_{\infty})\). Now we have a curve \(\gamma:[0,1]\to\{|\mathbf{u}|>d\}\cap B_{Mr}(x)\) having \(x_{1}\) and \(x_{2}\) as endpoints. For every \(t\in[0,1]\) we know that \[|\mathbf{u}(\gamma(t))|\geq d\geq\frac{c_{0}\varepsilon}{2\sqrt{m}},\] where \(c_{0}\) comes from (6). Hence, \[\operatorname{dist}(\gamma(t),\partial\{|\mathbf{u}|>0\})\geq\sigma\varepsilon,\] where \(\sigma=\frac{c_{0}}{2C_{0}\sqrt{m}}\). Now we can find a sequence \(y_{1},\cdots,y_{\ell}\) on the image of \(\gamma\) such that \[\gamma[0,1]\subset\bigcup_{i=1}^{\ell}B_{\sigma\varepsilon}(y_{i})\subset\{|\mathbf{u}|>0\}\cap B_{Mr+\sigma\varepsilon}(x).\] Since \(Mr+\sigma\varepsilon=(2M\tilde{C}+\sigma)\varepsilon\), the number of balls in the covering, \(\ell\), can be bounded by a constant depending only on the dimension and \((2M\tilde{C}+\sigma)/\sigma\), but not on \(x_{1},x_{2}\) or \(\varepsilon\). ## Appendix B An approximation lemma Here we prove a lemma, used in Lemma 7.2, which is a generalization of Lemma A.1 in [5] to any \(1<p<\infty\) (see also Lemma A.1 in [10]). **Lemma B.1**.: _Let \(u\) be a nonnegative Lipschitz function in \(B_{1}^{+}\) and assume that it is \(p\)-harmonic in \(\{u>0\}\) and \(u(0)=0\) \((1<p<\infty)\). Then it has the asymptotic development_ \[u(x)=\alpha x_{n}+o(|x|),\ \ \text{as}\ x\to 0,\] _for some \(\alpha\geq 0\), if either_ 1. \(u\) _vanishes on_ \(\{x_{n}=0\}\)_, or_ 2. \(\{x_{n}>0\}\subset\{u>0\}\)_._ Proof.: Part (_i_) is Lemma A.1 in [10]. The proof of (_ii_) is similar, with a slight modification. Let \(\ell_{k}:=\sup\{l:l\,x_{n}\leq u(x)\ \text{in}\ B_{2^{-k}}^{+}\}\). The sequence \(\ell_{k}\) is nondecreasing and bounded by the Lipschitz constant of \(u\). Let \(\alpha=\lim_{k\to\infty}\ell_{k}\); then \[u(x)\geq\alpha x_{n}+o(|x|).\] If the claim fails, there exists a sequence \(x^{k}\to 0\) such that \[u(x^{k})\geq\alpha x_{n}^{k}+\delta_{0}|x^{k}|,\] for some \(\delta_{0}>0\). Define \(u_{k}(x):=u(r_{k}x)/r_{k}\) where \(r_{k}=|x^{k}|\to 0\). We may also assume that \(r_{k}\leq 2^{-k}\). Since \(u_{k}\) are uniformly Lipschitz, we may consider the blowup \(u_{0}=\lim_{k\to\infty}u_{k}\), as well as \(x^{k}/r_{k}\to x^{0}\), \(|x^{0}|=1\). 
From the construction we will have \(\alpha x_{n}\leq u_{0}(x)\) in \(B_{1}^{+}\), and \[\frac{\delta_{0}}{2}+\alpha x_{n}\leq u_{0}(x)\ \ \ \text{ and }\ \ \ \frac{\delta_{0}}{2}+\ell_{k}x_{n}\leq u_{k}(x)\ \ \ \text{ in}\ B_{\varepsilon}(x^{0}),\] for a sufficiently small \(\varepsilon>0\) and large \(k\). Let now \(w_{k}\) be a \(p\)-harmonic function in \(B_{1}^{+}\) with smooth boundary values \[w_{k}=\ell_{k}x_{n} \text{ on }\partial B_{1}^{+}\setminus B_{\varepsilon/2}(x^{0}),\] \[w_{k}=\ell_{k}x_{n}+\tfrac{\delta_{0}}{4} \text{ on }\partial B_{1}^{+}\cap B_{\varepsilon/4}(x^{0}),\] \[\ell_{k}x_{n}\leq w_{k}\leq\ell_{k}x_{n}+\tfrac{\delta_{0}}{4} \text{ on }\partial B_{1}^{+}\cap B_{\varepsilon/2}(x^{0}),\] \[w_{k}=0 \text{ on }\{x_{n}=0\}\cap B_{1}.\] From the comparison principle we will have \(w_{k}\leq u_{k}\) in \(B_{1}^{+}\). (Note that \(u_{k}(x)\geq\ell_{k}x_{n}\), since \(r_{k}\leq 2^{-k}\)). Furthermore, \(w_{k}\to w_{0}\) in \(C^{1,\sigma}(B_{1/2}^{+})\) where \(w_{0}\) is \(p\)-harmonic with boundary data \(w_{0}=0\) on \(\{x_{n}=0\}\) and \(w_{0}\geq\alpha x_{n}\) on \(\partial B_{1}^{+}\). By Hopf boundary principle, \[w_{0}(x)\geq(\alpha+\mu)x_{n}\ \ \ \text{ in }B_{\gamma}^{+},\] for some small \(\mu\) and \(\gamma\). Thus for \(x\in B_{\gamma}^{+}\), \[u_{k}(x)\geq w_{k}(x) \geq w_{0}(x)-x_{n}\|\nabla(w_{k}-w_{0})\|_{L^{\infty}(B_{1/2}^{ +})}\] \[\geq\left(\alpha+\mu-\|\nabla(w_{k}-w_{0})\|_{L^{\infty}(B_{1/2}^{ +})}\right)x_{n}\] \[\geq\left(\alpha+\mu/2\right)x_{n}\] Now returning to \(u\) we get \(u(x)\geq(\alpha+\mu/2)x_{n}\) in \(B_{\gamma/n}^{+}\). This is a contradiction with the definition of \(\ell_{k}\) when \(k\) is sufficiently large. ## Declarations ### Data availability statement: All data needed are contained in the manuscript. ### Funding and/or Conflicts of interests/Competing interests: The authors declare that there are no financial, competing or conflict of interests.
2310.14315
Photoproduction of doubly heavy baryons at future $e^+e^-$ colliders
The photoproduction of doubly heavy baryons ($\Xi_{cc},\Xi_{bb},\Xi_{bc}$) is investigated in the context of future high-energy and high-luminosity $e^+e^-$ colliders. The study incorporates two sources of initial photons, namely the LBS photon and the WWA photon. Alongside the direct photoproduction via the sub-process $\gamma+\gamma \rightarrow \Xi_{QQ^{'}} +\bar{Q}+\bar{Q^{'}}$ ($Q^{(')}=c,b$), the resolved photoproduction channels are specifically considered, encompassing the sub-processes $\gamma + g \rightarrow \Xi_{QQ^{'}} +\bar{Q}+\bar{Q^{'}}$, $g + g \rightarrow \Xi_{QQ^{'}} +\bar{Q}+\bar{Q^{'}}$, and $q + \bar{q} \rightarrow \Xi_{QQ^{'}} +\bar{Q}+\bar{Q^{'}}$ with $q=u,d,s$. Within the framework of non-relativistic QCD, two $(cc(bb))$-diquark configurations, ${}_{\bar{\textbf{3}}}[{}^3S_1]$ and ${}_{\textbf{6}}[{}^1S_0]$, and four $(bc)$-diquark configurations, $(bc)_{\bar{\textbf{3}}}[{}^3S_1]$, $(bc)_{\textbf{6}}[{}^1S_0]$, $(bc)_{\textbf{6}}[{}^3S_1]$ and $(bc)_{\bar{\textbf{3}}}[{}^1S_0]$, are considered in the calculations. Numerical results show that the single resolved photoproduction processes provide dominant contributions under certain collision configurations. At future $e^+e^-$ colliders, doubly heavy baryons generated via the photoproduction mechanism are promisingly observable and can be well studied.
Xi-Jie Zhan, Xing-Gang Wu, Xu-Chang Zheng
2023-10-22T14:27:31Z
http://arxiv.org/abs/2310.14315v1
# Photoproduction of doubly heavy baryons at future \(e^{+}e^{-}\) colliders ###### Abstract The photoproduction of doubly heavy baryon (\(\Xi_{cc},\Xi_{bb},\Xi_{bc}\)) is investigated in the context of future high-energy and high-luminosity \(e^{+}e^{-}\) colliders. The study incorporates two sources of initial photons, namely the LBS photon and the WWA photon. Alongside the direct photoproduction via the sub-process \(\gamma+\gamma\to\Xi_{QQ^{\prime}}+\bar{Q}+\bar{Q}^{{}^{\prime}}\) (\(Q^{{}^{\prime}(^{\prime})}=c,b\)), the resolved photoproduction channels are specifically considered, encompassing the sub-processes \(\gamma+g\to\Xi_{QQ^{\prime}}+\bar{Q}+\bar{Q}^{{}^{\prime}}\), \(g+g\to\Xi_{QQ^{\prime}}+\bar{Q}+\bar{Q}^{{}^{\prime}}\), and \(q+\bar{q}\to\Xi_{QQ^{\prime}}+\bar{Q}+\bar{Q}^{{}^{\prime}}\) with \(q=u,d,s\). Within the framework of non-relativistic QCD, two (\(cc(bb)\))-diquark configurations, \(\mathfrak{s}[^{3}S_{1}]\) and \(\mathfrak{e}[^{1}S_{0}]\), and four (\(bc\))-diquark configurations, \((bc)_{\mathfrak{a}}[^{3}S_{1}]\), \((bc)_{\mathfrak{e}}[^{1}S_{0}]\), \((bc)_{\mathfrak{e}}[^{3}S_{1}]\) and \((bc)_{\mathfrak{a}}[^{1}S_{0}]\), are considered in the calculations. Numerical results show that the single resolved photoproduction processes provide dominant contributions under certain collision configuration. At the future \(e^{+}e^{-}\) colliders, the doubly heavy baryon generated via the photoproduction mechanism is promisingly observable and can be well studied. ## I Introduction Baryon containing two heavy quarks, referred to as doubly heavy baryons, offers a simplified structure akin to heavy quarkonia, thus enabling rigorous theoretical analysis. The first suspected observation of \(\Xi_{cc}^{+}\) was reported by the SELEX Collaboration [1; 2] in 2002 and 2005. Lately in 2017, the LHCb Collaboration identified another doubly heavy baryon, \(\Xi_{cc^{\prime}}^{++}\), through the decay mode \(\Xi_{cc}^{++}\to\Lambda_{c}^{+}K^{-}\pi^{+}\pi^{+}\), with \(\Lambda_{c}^{+}\to pK^{-}\pi^{+}\)[3]. Further validation came from the LHCb Collaboration, confirming this baryon's existence via the decay channel \(\Xi_{cc}^{++}\to\pi^{+}\Xi_{c}^{+}\)[4; 5]. These observations render the doubly heavy baryon a valuable environment for investigating quantum chromodynamics (QCD). Due to its nonrelativistic nature and the strong interaction confinement, the production of doubly heavy baryons involves nonperturbative effects that cannot be calculated using perturbative QCD. In the work by Ma et al. [6], the nonrelativistic QCD (NRQCD) [7] factorization framework was employed to describe the production process. This framework divides the process into two stages: the perturbative creation of a heavy-quark pair in a certain quantum state, referred to as a diquark, followed by its nonperturbative transition into a baryon. By expanding in the small velocity (\(v_{Q}\)) of the heavy quark in the baryon's rest frame, two leading-order states of (\(cc\))-diquarks were identified: \(\mathfrak{s}[^{3}S_{1}]\) and \(\mathfrak{e}[^{1}S_{0}]\), each associated with a corresponding long-distance matrix element (LDME), namely \(h_{\bar{\mathfrak{s}}}\) and \(h_{\mathfrak{e}}\). \(\mathfrak{s}[^{3}S_{1}](\mathfrak{e}[^{1}S_{0}])\) represents (\(cc\))-diquark is in S-wave \({}^{3}S_{1}(^{1}S_{0})\) and in the \(\overline{\mathfrak{s}}(\mathfrak{6})\) color state, while \(h_{\bar{\mathfrak{s}}}(h_{\mathfrak{e}})\) depicts its nonperturbative transition probability into the baryon. 
Extensive theoretical investigations have delved into the production of doubly heavy baryons [8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36]. These investigations encompass direct production in \(pp\), \(ep\), \(\gamma\gamma\) and \(e^{+}e^{-}\) collisions, as well as indirect production via the decays of Higgs bosons, \(W\) bosons, \(Z\) bosons, and top quarks. A dedicated generator, GENXICC [37; 38; 39], has been developed to simulate hadroproduction in \(pp\) collisions. The \(e^{+}e^{-}\) collider offers two primary avenues for the direct production of the doubly heavy baryon \(\Xi_{QQ^{\prime}}\): production through \(e^{+}e^{-}\) annihilation and via the photoproduction mechanism. In this work, \(\Xi_{QQ^{\prime}}\) represents the baryon \(\Xi_{QQ^{\prime}q^{\prime}}\), where \(Q(Q^{{}^{\prime}})\) stands for either a charm (\(c\)) or bottom (\(b\)) quark, and \(q\) corresponds to an up (\(u\)), down (\(d\)), or strange (\(s\)) quark. As for the photoproduction, \(\bar{\Xi}_{QQ^{\prime}}\) can be produced via direct photon-photon fusion such as \(\gamma+\gamma\to\Xi_{QQ^{\prime}}+\bar{Q}+\bar{Q}^{{}^{\prime}}\). The collision photon may originate from either the bremsstrahlung of the initial \(e^{+}e^{-}\) particles or from the process of laser back-scattering with \(e^{+}e^{-}\). In addition to direct photoproduction, there are also processes called resolved photoproduction [40], where the photon undergoes resolution, and its parton participates in the ensuing hard processes. These resolved photoproduction channels share the same order of perturbative expansion as the direct approach, necessitating their inclusion in calculations. Noteworthy earlier studies [40; 41; 42; 43; 44; 45] have indicated that these resolved channels tend to dominate the photoproduction of heavy quarkonium at \(e^{+}e^{-}\) colliders. Several next-generation \(e^{+}e^{-}\) colliders have been proposed, including the FCCee [46], the CEPC [47; 48], and the ILC [49; 50]. Designed to operate at varying high collision energies, along with unprecedented luminosities, these potent \(e^{+}e^{-}\) colliders hold the potential to serve as exceptional platforms for diverse research subjects. This study is anchored in the NRQCD framework, where we investigate the photoproduction of \(\Xi_{QQ^{\prime}}\) at future \(e^{+}e^{-}\) colliders. In addition to the direct photoproduction channel \(\gamma+\gamma\to\Xi_{QQ^{\prime}}+\bar{Q}+\bar{Q}^{{}^{\prime}}\), we also incorporate the resolved photoproduction processes, en compassing \(\gamma+g\to\Xi_{QQ^{\prime}}+\bar{Q}+\bar{Q^{\prime}}\), \(g+g\to\Xi_{QQ^{\prime}}+\bar{Q}+\bar{Q^{\prime}}\), and \(q+\bar{q}\to\Xi_{QQ^{\prime}}+\bar{Q}+\bar{Q^{\prime}}\), where \(q=u,d,s\). Section II elucidates the calculation's formulation, while Section III presents numerical outcomes and ensuing discussions. Section IV provides a succinct summary. 
## II Formulation Based on the NRQCD factorization framework, the photoproduction cross section of \(\Xi_{QQ^{\prime}}\) at the \(e^{+}e^{-}\) collider can be expressed as, \[\mathrm{d}\sigma\left(e^{+}e^{-}\to e^{+}e^{-}\Xi_{QQ^{\prime}}+ \bar{Q}+\bar{Q^{\prime}}\right)\] \[=\int\mathrm{d}x_{1}f_{\gamma/e}\left(x_{1}\right)\int\mathrm{d}x _{2}f_{\gamma/e}\left(x_{2}\right)\] \[\times\sum_{i,j}\int\mathrm{d}x_{i}f_{i/\gamma}\left(x_{i}\right) \int\mathrm{d}x_{j}f_{j/\gamma}\left(x_{j}\right)\] \[\times\sum_{n}\ \mathrm{d}\hat{\sigma}(ij\to(QQ^{{}^{\prime}})[n]+ \bar{Q}+\bar{Q^{\prime}})\left\langle\mathcal{O}^{\Xi_{QQ^{\prime}}}[n]\right\rangle. \tag{1}\] Here \(f_{\gamma/e}(x)\) is the energy spectrum of the photon. \(f_{i/\gamma}(i=\gamma,g,u,d,s)\) represents the Gluck-Reya-Schienbein (GRS) distribution function of parton \(i\) in photon [52]. \(f_{\gamma/\gamma}(x)=\delta(1-x)\) is for the direct photoproduction process. \(\mathrm{d}\hat{\sigma}(ij\to(QQ^{{}^{\prime}})[n]+\bar{Q}+\bar{Q^{\prime}})\) is the differential partonic cross section, which is calculated perturbatively. For baryon \(\Xi_{cc}\) and \(\Xi_{bb}\), \(n=\mathbbm{s}[^{3}S_{1}]\) and \(\mathfrak{e}[^{1}S_{0}]\). For \(\Xi_{bc}\), \(n=\mathbbm{s}[^{3}S_{1}]\), \(\mathfrak{e}[^{1}S_{0}]\), \(\mathfrak{s}[^{1}S_{0}]\) and \(\mathbbm{s}[^{1}S_{0}]\). \(\left\langle\mathcal{O}^{\Xi_{QQ^{\prime}}}[n]\right\rangle=h_{n}\) is the long distance matrix element(LDME). People usually employ potential model, mimicking the heavy quarkonium case, introduce and relate a wave function to \(h_{\bar{\mathbf{3}}}\)[53; 54; 55; 56; 8], \[h_{\bar{\mathbf{3}}}\simeq|\Psi_{QQ^{\prime}}(0)|^{2}. \tag{2}\] As for \(h_{\mathbf{6}}\), there is no such relation and it is set to equal to \(h_{\bar{\mathbf{3}}}\) for simplicity. This assumption is grounded in NRQCD's power counting with respect to \(v_{c}\), where both \(h_{\mathbf{6}}\) and \(h_{\bar{\mathbf{3}}}\) hold equivalent orders [6]. According to NRQCD, the bound state \(\Xi_{Qc}\) can be expanded into a series of Fock states, \[|\Xi_{QQ}\rangle= c_{1}(v)|(QQ)q\rangle+c_{2}(v)|(QQ)qg\rangle \tag{3}\] \[+c_{3}(v)|(QQ)qgg\rangle+\cdots.\] Since a light quark can readily emit gluons, the constituents in Eq. (3) hold equivalent importance, specifically, \(c_{1}\sim c_{2}\sim c_{3}\). Consider a \(QQ\) pair in the \(\mathbbm{s}[^{3}S_{1}]\) state; one of the heavy quarks can emit a gluon without altering the spin of the heavy quark. Subsequently, this gluon undergoes a splitting into a pair of light quarks \(q\bar{q}\), permitting the heavy \(QQ\) pair to engage with the light \(q\) to compose \(\Xi_{QQ}\). Similarly, for a \(QQ\) pair in the \(\mathfrak{e}[^{1}S_{0}]\) state, one of the heavy quarks can emit a gluon that retains the spin of the heavy quark unchanged. This emitted gluon then segregates into a light \(q\bar{q}\) pair, and the light quarks also exhibit a propensity for gluon emission. Consequently, this heavy \(QQ\) pair can capture a light quark and a gluon to assemble into \(\Xi_{QQ}\). This elucidates why \(h_{\mathbf{6}}\) and \(h_{\bar{\mathbf{3}}}\) hold the same order in \(v_{c}\). For simplicity, we assume \(h_{\mathbf{6}}=h_{\bar{\mathbf{3}}}\) in the ensuing computations. Notably, the LDMEs serve as overarching parameters beyond the perturbative components, implying that the outcomes can be readily refined upon acquisition of novel LDMEs. As previously mentioned, the \(e^{+}e^{-}\) collider presents two primary sources of initial photons. 
The first emanates from the bremsstrahlung of the initial \(e^{+}e^{-}\) pairs, and its energy distribution can be well delineated within the Weizacker-Williams approximation (WWA) [57], \[f_{\gamma/e}(x) = \frac{\alpha}{2\pi}\Bigg{[}\frac{1+(1-x)^{2}}{x}\mathrm{log} \frac{Q_{\mathrm{max}}^{2}}{Q_{\mathrm{min}}^{2}} \tag{4}\] \[+2m_{e}^{2}x\left(\frac{1}{Q_{\mathrm{max}}^{2}}-\frac{1}{Q_{ \mathrm{min}}^{2}}\right)\Bigg{]},\] where \(x=E_{\gamma}/E_{e}\) represents the fraction of longitudinal momentum carried by the photon, while \(\alpha\) denotes the electromagnetic fine structure constant. \(Q_{\mathrm{min}}^{2}=m_{e}^{2}x^{2}/(1-x)\) and \(Q_{\mathrm{max}}^{2}=(E\theta_{c})^{2}(1-x)+Q_{\mathrm{min}}^{2}\), with \(\theta_{c}=32\) mrad defining the maximum scattered angular cut to ensure photon to be real. Here, \(E=E_{e}=\sqrt{s}/2\) reflects the collision energy, defined as \(\sqrt{s}\). Another source is from the laser back-scattering (LBS) with \(e^{+}e^{-}\) and its spectrum function is [58], \[f_{\gamma/e}(x)=\frac{1}{N}\left[1-x+\frac{1}{1-x}-4r(1-r)\right], \tag{5}\] where \(r=x/\left[x_{m}(1-x)\right]\), and the normalization factor, \[N = \left(1-\frac{4}{x_{m}}-\frac{8}{x_{m}^{2}}\right)\log(1+x_{m}) \tag{6}\] \[+\frac{1}{2}+\frac{8}{x_{m}}-\frac{1}{2(1+x_{m})^{2}}.\] Figure 1: The energy spectra of the LBS photon and the WWA photon. Here \(x_{m}=4E_{e}E_{l}\cos^{2}\frac{\theta}{2}\), \(E_{e}\) and \(E_{l}\) are the energies of incident electron and laser beams, respectively. \(\theta\) is the angle between them. The energy of the LBS photon is restricted by \[0\leq x\leq\frac{x_{m}}{1+x_{m}}, \tag{7}\] with optimal value of \(x_{m}\) being 4.83 [59]. These two spectra have quite different behaviors as shown in Fig. 1. Some typical Feynman diagrams for calculating the partonic cross sections are shown in Fig. 2. The well-established system Feynman Diagram Calculation (FDC) [60] is used in the analytical and numerical calculations, where the standard projection method [61] is employed to deal with the amplitudes. ## III Numerical results and discussions In the calculation, we take the wave functions at the origin as[55]\(|\Psi_{cc}(0)|^{2}=0.039\;\mathrm{GeV}^{3}\), \(|\Psi_{bb}(0)|^{2}=0.152\;\mathrm{GeV}^{3}\) and \(|\Psi_{bc}(0)|^{2}=0.065\;\mathrm{GeV}^{3}\). For consistency, we also fix the quark masses as they are given in Ref.[55]: \(m_{c}=M_{\Xi_{cc}}/2=1.8\;\mathrm{GeV}\) and \(m_{b}=M_{\Xi_{bb}}/2=5.1\;\mathrm{GeV}\). The fine structure constant is set to be \(\alpha=1/137\). As for the strong coupling constant, the one-loop running formulation is employed. The renormalization scale is by default taken as the transverse mass of \(\Xi_{QQ^{\prime}}\), \(\mu=\sqrt{M_{\Xi_{QQ^{\prime}}}^{2}+p_{t}^{2}}\) with \(p_{t}\) representing its transverse momentum. For collision energies of \(\sqrt{S}=250\), \(500\), and \(1000\mathrm{GeV}\), along with the default inputs, Table 1 illustrates the cross section for the photoproduction of \(\Xi_{QQ^{\prime}}\) using both LBS and WWA photons (in brackets). This table includes a range of spin and color configurations. 
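Before turning to the numbers in Table 1, it is worth recalling how different the two photon spectra of Eqs. (4) and (5) are, since this difference drives much of the discussion below. The following minimal Python sketch (our own illustration, not part of the original computation; it assumes \(\sqrt{s}=500\) GeV, i.e. \(E_{e}=250\) GeV, and the \(\theta_{c}=32\) mrad cut quoted above) evaluates both distributions:

```python
import numpy as np

alpha   = 1.0 / 137.0      # fine structure constant
m_e     = 0.511e-3         # electron mass in GeV
E_e     = 250.0            # beam energy, assuming sqrt(s) = 500 GeV
theta_c = 32.0e-3          # maximum scattering-angle cut in rad

def f_wwa(x):
    """WWA photon spectrum, Eq. (4)."""
    q2_min = m_e**2 * x**2 / (1.0 - x)
    q2_max = (E_e * theta_c)**2 * (1.0 - x) + q2_min
    return alpha / (2.0 * np.pi) * (
        (1.0 + (1.0 - x)**2) / x * np.log(q2_max / q2_min)
        + 2.0 * m_e**2 * x * (1.0 / q2_max - 1.0 / q2_min)
    )

x_m = 4.83                 # optimal value quoted after Eq. (7)
N   = ((1.0 - 4.0 / x_m - 8.0 / x_m**2) * np.log(1.0 + x_m)
       + 0.5 + 8.0 / x_m - 1.0 / (2.0 * (1.0 + x_m)**2))   # normalization, Eq. (6)

def f_lbs(x):
    """LBS photon spectrum, Eq. (5); valid for 0 <= x <= x_m/(1+x_m) ~ 0.83."""
    r = x / (x_m * (1.0 - x))
    return (1.0 - x + 1.0 / (1.0 - x) - 4.0 * r * (1.0 - r)) / N

x = np.linspace(0.01, 0.80, 80)
print(f_wwa(x)[:5])        # strongly peaked at small x
print(f_lbs(x)[:5])        # of order one all the way up to the cutoff
```

Evaluating this reproduces the qualitative behavior of Fig. 1: the WWA spectrum is sharply peaked at small \(x\), whereas the LBS spectrum remains of order one up to its kinematic endpoint \(x_{m}/(1+x_{m})\approx 0.83\), which is why the LBS cross sections in Table 1 turn out to be much larger.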
The results highlight a consistent trend: as collision energy increases, all cross sections exhibit growth, for the LBS photoproduction, \[\sigma_{\Xi_{cc}}|_{250\mathrm{GeV}}:\,\sigma_{\Xi_{cc}}|_{500 \mathrm{GeV}}:\,\sigma_{\Xi_{cc}}|_{1\mathrm{TeV}} \simeq 1:1.10:1.60, \tag{8}\] \[\sigma_{\Xi_{bc}}|_{250\mathrm{GeV}}:\,\sigma_{\Xi_{bc}}|_{500 \mathrm{GeV}}:\,\sigma_{\Xi_{bc}}|_{1\mathrm{TeV}} \simeq 1:1.15:1.69,\] \[\sigma_{\Xi_{bb}}|_{250\mathrm{GeV}}:\,\sigma_{\Xi_{bb}}|_{500 \mathrm{GeV}}:\,\sigma_{\Xi_{bb}}|_{1\mathrm{TeV}} \simeq 1:1.27:1.99,\] and for the WWA photoproduction, \[\sigma_{\Xi_{cc}}|_{250\mathrm{GeV}}:\,\sigma_{\Xi_{cc}}|_{500 \mathrm{GeV}}:\,\sigma_{\Xi_{cc}}|_{1\mathrm{TeV}} \simeq 1:1.79:2.93, \tag{9}\] \[\sigma_{\Xi_{bc}}|_{250\mathrm{GeV}}:\,\sigma_{\Xi_{bc}}|_{500 \mathrm{GeV}}:\,\sigma_{\Xi_{bc}}|_{1\mathrm{TeV}} \simeq 1:2.03:3.70,\] \[\sigma_{\Xi_{bb}}|_{250\mathrm{GeV}}:\,\sigma_{\Xi_{bb}}|_{500 \mathrm{GeV}}:\,\sigma_{\Xi_{bb}}|_{1\mathrm{TeV}} \simeq 1:2.50:5.05.\] The spin- and color-configurations also give different contributions to the total cross section and under \(\sqrt{S}=500\;\mathrm{GeV}\) for the LBS photoproduction, \[\sigma_{(cc)_{\overline{n}}^{3}S_{1}}:\sigma_{(cc)_{\overline{e}} ^{[S_{0}]}}\simeq 10.63:1, \tag{10}\] \[\sigma_{(bc)_{\overline{n}}^{3}S_{1}}:\sigma_{(bc)_{\overline{e}} ^{[S_{0}]}}:\sigma_{(bc)_{\overline{e}}^{[S_{1}]}}:\sigma_{(bc)_{\overline{n} }^{3}S_{1}}:\sigma_{(bc)_{\overline{n}}^{3}S_{1}}\] \[\simeq 4.71:1:2.36:2.00,\] \[\sigma_{(bb)_{\overline{n}}^{[S_{1}]}}:\sigma_{(bb)_{\overline{e} }^{[S_{0}]}}\simeq 11.14:1.\] The cross sections via the LBS photoproduction are much larger than those of the WWA. This is due to the quite \begin{table} \begin{tabular}{c c c c} \(\sqrt{S}(\mathrm{GeV})\) & \(250\) & \(500\) & \(1000\) \\ \hline \((cc)_{\overline{a}}|^{3}S_{1}\) & \(733.68(62.75)\) & \(801.90(111.74)\) & \(1100.09(182.73)\) \\ \((cc)_{\overline{c}}|^{4}S_{0}\) & \(65.65(2.79)\) & \(75.44(5.35)\) & \(105.89(9.35)\) \\ \((bc)_{\overline{a}}|^{3}S_{1}\) & \(26.27(0.85)\) & \(30.44(1.73)\) & \(44.85(3.14)\) \\ \((bc)_{\overline{e}}|^{4}S_{0}\) & \(5.72(0.18)\) & \(6.46(0.36)\) & \(9.44(0.66)\) \\ \((bc)_{\overline{e}}|^{3}S_{1}\) & \(13.14(0.43)\) & \(15.22(0.86)\) & \(22.43(1.57)\) \\ \((bc)_{\overline{e}}|^{4}S_{0}\) & \(11.45(0.35)\) & \(12.91(0.72)\) & \(18.88(1.32)\) \\ \((bb)_{\overline{a}}|^{3}S_{1}\) & \(1.25(0.02)\) & \(1.56(0.05)\) & \(2.44(0.10)\) \\ \((bb)_{\overline{e}}|^{4}S_{0}\) & \(0.09(0.0008)\) & \(0.14(0.002)\) & \(0.23(0.005)\) \\ \end{tabular} \end{table} Table 1: The integrated cross sections (in unit of fb) under default inputs for the photoproduction of \(\Xi_{QQ^{\prime}}\) via the LBS photon and the WWA photon (in brackets), respectively. Three typical collision energies are taken as example and intermediate diquark at various spin- and color-configurations are listed. Figure 2: Some typical Feynman diagrams for calculating the partonic cross section \(\hat{\sigma}\) of \(\Xi_{QQ^{\prime}}\) photoproduction. The diagrams are drawn by JaxoDraw [51]. different spectra functions of the photon as shown in Fig. 1. 
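A short arithmetic sketch (again our own cross-check, assuming an integrated luminosity of \(10^{4}\ \mathrm{fb}^{-1}\)) sums the \(\sqrt{S}=500\) GeV entries of Table 1 and reproduces the event-yield estimates quoted in the next paragraph:

```python
# sqrt(S) = 500 GeV cross sections from Table 1, in fb (all diquark states summed).
sigma_lbs = {"Xi_cc": 801.90 + 75.44,
             "Xi_bc": 30.44 + 6.46 + 15.22 + 12.91,
             "Xi_bb": 1.56 + 0.14}
sigma_wwa = {"Xi_cc": 111.74 + 5.35,
             "Xi_bc": 1.73 + 0.36 + 0.86 + 0.72,
             "Xi_bb": 0.05 + 0.002}

lumi = 1.0e4  # assumed integrated luminosity in fb^-1

for tag, table in (("LBS", sigma_lbs), ("WWA", sigma_wwa)):
    for baryon, sigma in table.items():
        print(f"{baryon} via {tag} photons: {sigma * lumi:.1e} events")

# Folding in the cascade branching fractions quoted below (about 10% x 5%),
# before any detector reconstruction efficiency is applied:
print(f"reconstructable Xi_cc (LBS): {sigma_lbs['Xi_cc'] * lumi * 0.10 * 0.05:.1e}")
```

The totals, about \(8.8\times 10^{6}\) \(\Xi_{cc}\), \(6.5\times 10^{5}\) \(\Xi_{bc}\) and \(1.7\times 10^{4}\) \(\Xi_{bb}\) for the LBS photons, match the numbers discussed below; the branching fractions alone already cost roughly a factor of \(200\).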
When assuming an integrated luminosity of \({\cal O}(10^{4})\;{\rm fb}^{-1}\) at future \(e^{+}e^{-}\) colliders and aggregating contributions from all diquark configurations, approximately \(8.8\times 10^{6}\) (\(1.2\times 10^{6}\)) \(\Xi_{cc}\), \(6.5\times 10^{5}\) (\(3.7\times 10^{4}\)) \(\Xi_{bc}\), and \(1.7\times 10^{4}\) (\(4.9\times 10^{2}\)) \(\Xi_{bb}\) would be generated via LBS (WWA) photons, given a collision energy of \(\sqrt{S}=500\;{\rm GeV}\). The actual number of experimentally reconstructed events is significantly affected both by the decay rate of \(\Xi_{QQ^{\prime}}\) and the experimental reconstruction efficiency. Considering, for instance, the cascade decay \(\Xi_{cc}^{++}\to\Lambda_{c}^{+}K^{-}\pi^{+}\pi^{+}\simeq 10\%\)[62] and \(\Lambda_{c}^{+}\to pK^{+}\pi^{+}\simeq 5\%\)[63], and accounting for the experiment's reconstruction efficiency, the event count is expected to diminish by approximately three orders of magnitude. Consequently the photoproduction at future \(e^{+}e^{-}\) colliders could provides good opportunity to study \(\Xi_{cc}\), while there would be not enough events for \(\Xi_{bb}\). Table 2 lists the contributions from different channels for the LBS photon. With the increase of collision energy, the cross sections of \(\gamma+\gamma\) and \(q+\bar{q}\) channels decrease, while those of the other two become larger. The \(\gamma+g\) channels provide very important contributions at all these three collision energy and even dominant when the collision energy goes larger. The double resolved channels, \(g+g\) and \(q+\bar{q}\), give very tiny productions which can be ignored compared with other two. Consequently for the LBS photon, the resolved photoproduction channel(\(\gamma+g\)) at future \(e^{+}e^{-}\) colliders should be taken into account in the theoretical investigation. Conversely, in WWA photoproduction, the relative importance among various channels markedly deviates from that in LBS, as highlighted in Table 3. The cross sections across all channels burgeon with increasing collision energy. While the direct \(\gamma+\gamma\) channels consistently dominate, the contributions from other channels remain modest, and in certain cases, negligible. Fig. 3 illustrates the transverse momentum distributions of \(\Xi_{cc}\) and \(\Xi_{bc}\) photoproduction, with separate representations of contributions from different channels and intermediate diquark states. Each \(p_{t}\) distribution exhibits a discernible peak around several GeV, transitioning into a logarithmic decrease in the high \(p_{t}\) range. Throughout the entire \(p_{t}\) spectrum, the \(\underline{s}[^{3}S_{1}]\) configurations consistently hold prominence, rendering contributions from \({}_{\bf 6}[^{1}S_{0}]\) states inconsequential. Specifically, in LBS photoproduction, the \(\gamma+g\) channels dominate the lower \(p_{t}\) region, yielding the baton to the \(\gamma+\gamma\) channels as the \(p_{t}\) value increases significantly. However, in practical experiments, there maybe not enough events in large \(p_{t}\) region to make precise measurements and consequently the single resolved channel \(\gamma+g\) must be considered in the calculation of photoproduction. For the WWA case, the direct channel \(\gamma+\gamma\) is always predominant in whole \(p_{t}\) region. Fig. 4 portrays the rapidity (\(y\)) distributions of \(\Xi_{cc}\) and \(\Xi_{bc}\) photoproduction. 
In the LBS scenario, the curves exhibit distinctive patterns across the central rapidity region, attributed to the prevalence of the \(\gamma+g\) channel. In contrast to the transverse momentum (\(p_{t}\)) distribution, the \(\gamma+g\) and \(\gamma+\gamma\) channels do not intersect throughout the entire \(y\) range for a collision energy of \(\sqrt{S}=500\;{\rm GeV}\). Comparatively, the rapidity distributions of the WWA photoproduction appear conventional when juxtaposed with those of the LBS scenario. Finally, we engage in a concise discussion on the theoretical uncertainties within our calculations, stemming from three primary sources: the heavy quark masses, the renormalization scale \(\mu\), and the LDMEs. Notably, uncertainties arising from \(h_{\bf 3}\) and \(h_{\bf 6}\) are omitted due to the absence of reported errors in the literature. As previously indicated, these coefficients represent overall factors, and their impact on production outcomes can be readily refined with more accurate values. Table 4 illustrates the effects of varying \(m_{c}=1.8\pm 0.1\;{\rm GeV}\) while holding \(m_{b}=5.1{\rm GeV}\) and \(\mu=\sqrt{M_{\Xi_{QQ^{\prime}}}^{2}+p_{t}^{2}}\) con \begin{table} \begin{tabular}{c c c c c} \(channels\) & \(\gamma+\gamma\) & \(\gamma+g\) & \(g+g\) & \(q+\bar{q}\) \\ \hline \(\Xi_{cc}\) & \(392.72(173.84,70.13)\) & \(402.37(693.28,1112.50)\) & \(3.22(9.35,22.50)\) & \(1.02(0.88,0.85)\) \\ \(\Xi_{bc}\) & \(32.85(16.02,6.96)\) & \(23.00(47.31,84.34)\) & \(0.36(1.37,3.99)\) & \(0.37(0.33,0.30)\) \\ \(\Xi_{bb}\) & \(0.80(0.44,0.21)\) & \(0.51(1.20,2.33)\) & \(0.0066(0.033,0.11)\) & \(0.019(0.018,0.017)\) \\ \end{tabular} \end{table} Table 2: The integrated cross sections (in unit of fb) of different channels of the photoproduction of \(\Xi_{QQ^{\prime}}\) via the LBS photon. Three typical collision energies, \(250(500,1000)\;{\rm GeV}\), are taken and contributions of all intermediate diquark states have been summed up. \begin{table} \begin{tabular}{c c c c c} \(channels\) & \(\gamma+\gamma\) & \(\gamma+g\) & \(g+g\) & \(q+\bar{q}\) \\ \hline \(\Xi_{cc}\) & \(62.88(109.17,172.06)\) & \(2.63(7.83,19.77)\) & \(0.0088(0.042,0.16)\) & \(0.023(0.047,0.085)\) \\ \(\Xi_{bc}\) & \(1.71(3.32,5.64)\) & \(0.09(0.34,1.03)\) & \(0.00070(0.0045,0.021)\) & \(0.003(0.008,0.017)\) \\ \(\Xi_{bb}\) & \(0.02(0.04,0.08)\) & \(0.0015(0.0068,0.02)\) & \(9.6\times 10^{-6}(8.4\times 10^{-5},4.8\times 10^{-4})\) & \(9.7\times 10^{-5}(2.8\times 10^{-4},6.2\times 10^{-4})\) \\ \end{tabular} \end{table} Table 3: The integrated cross sections (in unit of fb) of different channels of the photoproduction of \(\Xi_{QQ^{\prime}}\) via the WWA photon. Three typical collision energies, \(250(500,1000)\;{\rm GeV}\), are taken and contributions of all intermediate diquark states have been summed up. stant. Similarly, TableV presents uncertainties resulting from \(m_{b}=5.1\pm 0.2\) GeV alongside \(m_{c}=1.8\)GeV and \(\mu=\sqrt{M_{\Xi_{QQ^{\prime}}}^{2}+p_{t}^{2}}\). From these tables, even slight deviations in heavy quark mass can yield significant fluctuations in cross-section values. For instance, in Table 4, the cross section for \((cc)_{\overline{3}}[^{3}S_{1}]\) varies by approximately 46% for a mere 12% alteration in \(m_{c}\). This pronounced sensitivity is illuminated when examining the pertinent Feynman diagrams, as exemplified in Fig. 2. 
For the photoproduction of \(\Xi_{QQ^{\prime}}\) considered here, the final particles in the short-distance processes encompass solely \(Q\) and \(\bar{Q}\) (\(Q=c\) or \(b\)), with the corresponding internal lines comprising exclusively \(Q\) and gluon propagators. Hence, the profound influence of the heavy quark masses on the cross section appears reasonable. Let us take the second diagram (\(\gamma+g\to c+c+\bar{c}+\bar{c}\)) in the second row of Fig. 2 as an example, which is one of the predominant diagrams, to illustrate the strong dependence on \(m_{c}\). The squared invariant mass of the gluon propagator attached to the final \(c\bar{c}\) pair is \(k^{2}=(p_{c}+p_{\bar{c}})^{2}\). Its dominant region in the phase-space integration is near the threshold, i.e., \(k^{2}\sim 4m_{c}^{2}\). Consequently, when \(m_{c}\) changes from 1.7 GeV to 1.9 GeV, \(1/(k^{2})^{2}\) changes by about 36%.

Table 6 assesses the sensitivity to the renormalization scale (\(\mu=\mathcal{C}\sqrt{M_{\Xi_{QQ^{\prime}}}^{2}+p_{t}^{2}}\), with \(\mathcal{C}=0.5,1,2\)), considering fixed values of \(m_{c}=1.8\) GeV and \(m_{b}=5.1\) GeV. Evidently, there is a substantial dependence on the renormalization scale, potentially signifying the relevance of next-to-leading order corrections in \(\alpha_{s}\). As we confront real-world measurements in the future, higher-order calculations become imperative. Considering the above uncertainties, the results of our leading-order calculation may vary by about one order of magnitude. Within this range of variability, the photoproduction rates of doubly heavy baryons remain appreciable.

Figure 3: The \(p_{t}\) distributions for \(\Xi_{cc}\) and \(\Xi_{bc}\) photoproduction under \(\sqrt{S}=500\) GeV and default values of the parameters. Figures in the first row are for different channels and those in the second row are for different intermediate diquark states. The topmost curve in every figure is their summation. The two columns of figures on the left are for the LBS photon and those on the right are for the WWA photon.

Figure 4: The \(y\) distributions for \(\Xi_{cc}\) and \(\Xi_{bc}\) photoproduction under \(\sqrt{S}=500\) GeV and default values of the parameters. Figures in the first row are for different channels and those in the second row are for different intermediate diquark states. The topmost curve in every figure is their summation. The two columns of figures on the left are for the LBS photon and those on the right are for the WWA photon.

## IV Summary

In this work, we have investigated \(\Xi_{QQ^{\prime}}\) photoproduction within the framework of non-relativistic QCD, focusing specifically on future \(e^{+}e^{-}\) colliders. The investigation encompasses two distinct sources of initial photons: the LBS photon and the WWA photon. Two \((cc)\)- and \((bb)\)-diquark configurations, \(\bar{\bf 3}[^{3}S_{1}]\) and \({\bf 6}[^{1}S_{0}]\), and four \((bc)\)-diquark configurations, \((bc)_{\bar{\bf 3}}[^{3}S_{1}]\), \((bc)_{\bf 6}[^{1}S_{0}]\), \((bc)_{\bf 6}[^{3}S_{1}]\) and \((bc)_{\bar{\bf 3}}[^{1}S_{0}]\), are considered. Upon assuming \(h_{\bf 6}=h_{\bf 3}\), the results demonstrate that the \(\bar{\bf 3}[^{3}S_{1}]\) diquark state gives the dominant cross section, while the other intermediate states also provide notable contributions. Importantly, beyond the direct photoproduction channel \(\gamma+\gamma\), our study particularly integrates the resolved mechanisms, which are not fully considered in previous studies.
Numerical findings underscore the critical role of the single-resolved photoproduction channel \(\gamma+g\) in LBS photoproduction, while its significance diminishes in the WWA scenario. Assuming an integrated luminosity of \({\cal O}(10^{4})\;{\rm fb}^{-1}\) for future \(e^{+}e^{-}\) collisions, about \(8.8\times 10^{6}\) \((1.2\times 10^{6})\) \(\Xi_{cc}\) and \(6.5\times 10^{5}\) \((3.7\times 10^{4})\) \(\Xi_{bc}\) baryons would be generated via LBS (WWA) photons, respectively, at a collision energy of \(\sqrt{S}=500\;{\rm GeV}\). While acknowledging the relatively considerable uncertainties inherent in our calculations, we anticipate that these findings could serve as a valuable preliminary exploration of photoproduction at prospective \(e^{+}e^{-}\) colliders.

###### Acknowledgements.

This work was supported in part by the Natural Science Foundation of China under Grants No. 12147116, No. 12175025, No. 12005028 and No. 12147102, by the China Postdoctoral Science Foundation under Grant No. 2021M693743, and by the graduate research and innovation foundation of Chongqing, China under Grant No. ydstd1912.
2303.03751
Zeroth-Order Optimization Meets Human Feedback: Provable Learning via Ranking Oracles
In this study, we delve into an emerging optimization challenge involving a black-box objective function that can only be gauged via a ranking oracle, a situation frequently encountered in real-world scenarios, especially when the function is evaluated by human judges. Such a challenge is inspired by Reinforcement Learning with Human Feedback (RLHF), an approach recently employed to enhance the performance of Large Language Models (LLMs) using human guidance. We introduce ZO-RankSGD, an innovative zeroth-order optimization algorithm designed to tackle this optimization problem, accompanied by theoretical assurances. Our algorithm utilizes a novel rank-based random estimator to determine the descent direction and guarantees convergence to a stationary point. Moreover, ZO-RankSGD is readily applicable to policy optimization problems in Reinforcement Learning (RL), particularly when only ranking oracles for the episode reward are available. Last but not least, we demonstrate the effectiveness of ZO-RankSGD in a novel application: improving the quality of images generated by a diffusion generative model with human ranking feedback. Throughout our experiments, we found that ZO-RankSGD can significantly enhance the detail of generated images with only a few rounds of human feedback. Overall, our work advances the field of zeroth-order optimization by addressing the problem of optimizing functions with only ranking feedback, and offers a new and effective approach for aligning Artificial Intelligence (AI) with human intentions.
Zhiwei Tang, Dmitry Rybin, Tsung-Hui Chang
2023-03-07T09:20:43Z
http://arxiv.org/abs/2303.03751v3
# Zeroth-Order Optimization Meets Human Feedback: Provable Learning via Ranking Oracles ###### Abstract In this paper, we focus on a novel optimization problem in which the objective function is a black-box and can only be evaluated through a ranking oracle. This problem is common in real-world applications, particularly in cases where the function is assessed by human judges. Reinforcement Learning with Human Feedback (RLHF) is a prominent example of such an application, which is adopted by the recent works [24, 18, 23, 1] to improve the quality of Large Language Models (LLMs) with human guidance. We propose ZO-RankSGD, a first-of-its-kind zeroth-order optimization algorithm, to solve this optimization problem with a theoretical guarantee. Specifically, our algorithm employs a new rank-based random estimator for the descent direction and is proven to converge to a stationary point. ZO-RankSGD can also be directly applied to the policy search problem in reinforcement learning when only a ranking oracle of the episode reward is available. This makes ZO-RankSGD a promising alternative to existing RLHF methods, as it optimizes in an online fashion and thus can work without any pre-collected data. Furthermore, we demonstrate the effectiveness of ZO-RankSGD in a novel application: improving the quality of images generated by a diffusion generative model with human ranking feedback. Throughout experiments, we found that ZO-RankSGD can significantly enhance the detail of generated images with only a few rounds of human feedback. Overall, our work advances the field of zeroth-order optimization by addressing the problem of optimizing functions with only ranking feedback, and offers an effective approach for aligning human and machine intentions in a wide range of domains. Our code is released here [https://github.com/TZW1998/Taming-Stable-Diffusion-with-Human-Ranking-Feedback](https://github.com/TZW1998/Taming-Stable-Diffusion-with-Human-Ranking-Feedback). Figure 1: Application of our proposed algorithm on enhancing the quality of images generated from Stable diffusion [29] with human ranking feedback. At each iteration of this human-in-the-loop optimization, we use Stable Diffusion to generate multiple images by perturbing the latent embedding with random noise, which are then ranked by humans based on their quality. After that, the ranking information is leveraged to update the latent embedding. Introduction Ranking data is an omnipresent feature of the internet, appearing on a variety of platforms and applications, such as search engines, social media feeds, online marketplaces, and review sites. It plays a crucial role in how we navigate and make sense of the vast amount of information available online. Moreover, ranking information has a unique appeal to humans, as it enables them to express their personal preferences in a straightforward and intuitive way [24, 18, 23, 1]. The significance of ranking data becomes even more apparent when some objective functions are evaluated through human beings, which is becoming increasingly common in various applications. Assigning an exact score or rating can often require a significant amount of cognitive burden or domain knowledge, making it impractical for human evaluators to provide precise feedback. In contrast, a ranking-based approach can be more natural and straightforward, allowing human evaluators to express their preferences and judgments with ease. 
In this context, our paper studies an important optimization problem where the objective function can only be accessed via a ranking oracle. ### Problem formulation With an objective function \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\), we focus on the following optimization problem: \[\min_{x\in\mathbb{R}^{d}}f(x), \tag{1}\] where \(f\) is a black-box function, and we can only query it via a ranking oracle that can sort every input based on the values of \(f\). In this work, we focus on a particular family of ranking oracles where only the sorted indexes of top elements are returned. Such oracles are acknowledged to be natural for human decision-making [15]. We formally define this kind of oracle as follows: **Definition 1** (\((m,k)\)-ranking oracle).: _Given a function \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}\) and \(m\) points \(x_{1},...,x_{m}\) to query, an \((m,k)\) ranking oracle \(O_{f}^{(m,k)}\) will return \(k\) smallest points sorted in their order. For example, if \(O_{f}^{(m,k)}(x_{1},...,x_{m})=(i_{1},...,i_{k}),\) then_ \[f(x_{i_{1}})\leq f(x_{i_{2}})\leq...\leq f(x_{i_{k}})\leq\min_{j\notin\{i_{1},...,i_{k}\}}f(x_{j}).\] ### Applications The optimization problem (1) with a \((m,k)\)-ranking oracle is a common feature in many real-world applications, especially when the objective function \(f\) is evaluated by human judges. One prominent example of this type of problem is found in the growing field of Reinforcement Learning with Human Feedback (RLHF) [24, 18, 23, 1], where human evaluators are asked to rank generated text according to their personal preferences, with an aim to improve the generation quality of Large Language Models (LLMs). Inspired by these works, in Section 4, we propose a similar application in which human feedback is used to enhance the quality of images generated by Stable Diffusion [29], a state-of-the-art text-to-image generative model. An overview of this application is demonstrated in Figure 1. Beyond human feedback, ranking oracles have the potential to be useful in many other applications. For instance, in cases where the information in the values of \(f\) must remain private, ranking data may provide a more secure and confidential option for data sharing and analysis. This is particularly relevant in sensitive domains, such as healthcare or finance, where the exact value of personal information must be protected. Moreover, obtaining rank data may be cheaper and easier than obtaining exact function values in many cases, such as when optimizing the hyperparameters of a machine learning model [19, 17]. In these situations, obtaining rank data requires less time and resources than obtaining precise performance measures on validation datasets, making it an attractive option for optimizing complex models with limited computational budgets. ### Related works **Zeroth-Order Optimization.** Zeroth-order optimization, also known as derivative-free or black-box optimization, has been a topic of extensive study in the optimization literature for several decades, with notable examples including the Nelder-Mead Simplex method [21], Bayesian optimization [10], Direct Search [11] and random approximation methods [22]. However, most existing works in this area assume that the objective function value is directly accessible, and the optimization is performed based on this value, which is not appropriate for our setting where only ranking information is available. 
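Before continuing the survey of rank-only methods, it may help to make Definition 1 concrete. The short Python sketch below simulates an \((m,k)\)-ranking oracle by wrapping a hidden objective function; it is our own illustration for testing purposes (in the applications above the ranking would instead come from a human judge), not code from the original work.

```python
# Illustrative simulation of the (m, k)-ranking oracle of Definition 1.
# A hidden objective function stands in for the human judge, purely for prototyping.
import numpy as np

def ranking_oracle(f, points, k):
    """Return the indices of the k smallest f-values, sorted in increasing order of f."""
    values = np.array([f(x) for x in points])
    return tuple(np.argsort(values)[:k])

# Example usage with m = 6 query points and k = 3.
f = lambda x: float(np.sum(x ** 2))               # hidden objective, unknown to the optimizer
pts = [np.random.randn(5) for _ in range(6)]
print(ranking_oracle(f, pts, k=3))                # e.g. (4, 0, 2): the three best points in order
```

With such a wrapper in place, rank-based optimizers can be prototyped offline before a human is put in the loop.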
Several heuristic algorithms have been proposed that rely solely on ranking information, such as CMA-ES [19], but these methods lack theoretical guarantees and may perform poorly in practice. The most closely related work to ours is the recent study by [3], which investigates zeroth-order optimization via a comparison oracle that returns the sign of the difference in function values between two points. In fact, this comparison oracle is equivalent to \((2,1)\)-ranking oracle, which is a special case of the \((m,k)\)-ranking oracle considered in our work. Generally speaking, the approach in [3] aims to recover the gradient of the objective function using 1-bit compressive sensing techniques [25]. However, this approach is limited to convex objective functions and is not applicable to non-convex functions. In contrast to [3], our work considers a more general \((m,k)\)-ranking oracle and extends the scope to non-convex functions. Besides, our approach does not rely on any compressive sensing techniques and instead provides a novel theoretical analysis that can characterize the expected convergence behavior of our proposed algorithm. **Relationship to RLHF.** Reinforcement Learning with Human Feedback (RLHF) is a relatively new and rapidly growing field that has shown significant promise in aligning the intentions of humans and machines [24; 18; 23; 1]. The general approach in RLHF involves collecting ranking data from humans to train a reward model, which is then used to fine-tune a pre-trained model with policy gradients. However, this method faces two main challenges. First, it requires a large amount of ranking data to be collected before fine-tuning can begin, which can be time-consuming and expensive, particularly for smaller organizations. Second, the interaction with humans, or the ranking oracle, occurs only once and does not allow for the ongoing ranking of the model's output, limiting the potential for continued model improvement. Our proposed zeroth-order optimization algorithm, which can be directly applied to policy search in reinforcement learning, can be a promising alternative to existing RLHF methods as it allows for the collection of ranking data in an online fashion. This overcomes the challenges of the traditional RLHF approach by enabling the model's performance to be continuously improved without any pre-collected data. On the other hand, our algorithm can also serve as a process for data collection while optimizing the objective function, which can better facilitate small organizations to build models from scratch. ### Contributions in this work Our main contributions are summarized as follows: 1. **The first rank-based zeroth-order optimization algorithm with a theoretical guarantee.** We present an innovative method for optimizing objective functions via their ranking oracles. Our proposed algorithm is based on a new rank-based stochastic estimator for descent direction and is proven to converge to a stationary point. Additionally, we provide a rigorous analysis of how various ranking oracles can impact the convergence rate by employing a novel variance analysis. 2. **A promising alternative to RLHF.** We also show that our algorithm is applicable to policy search in reinforcement learning when only a ranking oracle of the environment reward is available. 
In comparison to existing RLHF approaches, Our algorithm facilitates the real-time acquisition of ranking data, which leads to continuous improvement of the model's performance without requiring any pre-collected data. 3. **A way to tame diffusion generative models with human feedback.** We also apply our algorithm to a novel application: improving the quality of images generated by Stable Diffusion [29] with human ranking feedback. Our algorithm is shown to significantly enhance the details of the generated images, providing a new perspective on how to use human feedback to improve the performance of diffusion models. Overall, our work advances the field of zeroth-order optimization by addressing the problem of optimizing functions where only ranking feedback of them is available. Furthermore, our algorithm offers a new and effective approach for aligning human and machine intentions in a wide range of domains, which could open up exciting new avenues for research and development in this emerging field. ### Notations and Assumptions For any \(x\in\mathbb{R}\), we define the sign operator as \(\text{Sign}(x)=1\) if \(x\geq 0\) and \(-1\) otherwise, and extend it to vectors by applying it element-wise. For a \(d\)-dimensional vector \(x\), we denote the \(d\)-dimensional standard Gaussian distribution by \(\mathcal{N}(0,I_{d})\). The notation \(|\mathcal{S}|\) refers to the number of elements in the set \(\mathcal{S}\). **Assumption 1**.: _Throughout this paper, we have the following assumptions on the objective function \(f\):_ 1. \(f\) _is twice continuously differentiable._ 2. \(f\) _is_ \(L\)_-smooth, meaning that_ \(\|\nabla^{2}f(x)\|\leq L\)_._ 3. \(f\) _is lower bounded by a value_ \(f^{*}\)_, that is,_ \(f(x)\geq f^{*}\) _for all_ \(x\) ### Paper organization The rest of this paper is structured as follows: Section 2 introduces a novel approach for estimating descent direction based on ranking information, with a theoretical analysis of how different ranking oracles relate to the variance of the estimated direction. Built on the foundations in Section 2 and 2.2, Section 3 presents the main algorithm, ZO-RankSGD, along with the corresponding convergence analysis. In Section 4, we demonstrate the effectiveness of ZO-RankSGD through various experiments, ranging from synthetic data to real-world applications. Finally, Section 5 concludes the paper by summarizing our findings and suggesting future research directions. ## 2 Finding descent direction from the ranking information ### A comparison-based estimator for descent direction In contrast to the prior work [3], which relies on one-bit compressive sensing to recover the gradient, we propose a simple yet effective estimator for descent direction without requiring solving any compressive sensing problem. Given an objective function \(f\) and a point \(x\), we estimate the descent direction of \(f\) using two independent Gaussian random vectors \(\xi_{1}\) and \(\xi_{2}\) as follows: \[\hat{g}(x)=S_{f}(x,\xi_{1},\xi_{2},\mu)(\xi_{1}-\xi_{2}), \tag{2}\] where \(\mu>0\) is a constant, and \(S_{f}(x,\xi_{1},\xi_{2},\mu):\mathbb{R}^{d}\times\mathbb{R}^{d}\times\mathbb{ R}_{+}\rightarrow\{1,-1\}\) is defined as: \[S_{f}(x,\xi_{1},\xi_{2},\mu)\stackrel{{\text{def}}}{{=}}\mathrm{ Sign}\left((f(x+\mu\xi_{1})-f(x+\mu\xi_{2}))\right). \tag{3}\] We prove in Lemma 1, which is one of the most important technical tools in this work, that \(\hat{g}(x)\) is an effective estimator for descent direction. 
**Lemma 1**.: _For any \(x\in\mathbb{R}^{d}\), we have_ \[\langle\nabla f(x),\mathbb{E}[\hat{g}(x)]\rangle\geq\|\nabla f(x)\|-C_{d}\mu L, \tag{4}\] _where \(C_{d}\geq 0\) is some constant that only depends on \(d\)._ Denote \(\gamma>0\) as the step size. From \(L\)-smoothness of \(f\), we can show that \[\mathop{\mathbb{E}}_{\xi_{1},\xi_{2}}[f(x-\gamma\hat{g}(x))]-f(x) \leq-\gamma\left\langle\nabla f(x),\mathbb{E}[\hat{g}(x)]\right\rangle +\frac{\gamma^{2}L}{2}E\left[\|\hat{g}(x)\|^{2}\right]\] \[\leq-\gamma\|\nabla f(x)\|+\gamma C_{d}\mu L+\gamma^{2}Ld, \tag{5}\] where we use the equality \(\mathbb{E}[\|\hat{g}(x)\|^{2}]=\mathbb{E}[\|\xi_{1}-\xi_{2}\|^{2}]=2d\). Therefore, whenever \(\|\nabla f(x)\|\neq 0\), the value \(\mathbb{E}_{\xi_{1},\xi_{2}}[f(x-\gamma\hat{g}(x))]\) would be strictly smaller than \(f(x)\) with sufficiently small \(\gamma\) and \(\mu\). Besides, unlike the comparison-based gradient estimator proposed in [3], our estimator (2) can be directly incorporated with ranking oracles, as we will see in the next section. ### From ranking information to pairwise comparison We first observe that ranking information can be translated into pairwise comparisons. For instance, knowing that \(x_{1}\) is the best among \(x_{1},x_{2},x_{3}\) can be represented using two pairwise comparisons: \(x_{1}\)_is better than \(x_{2}\)_ and \(x_{1}\)_is better than \(x_{3}\)_. Therefore, we propose to represent the input and output of \((m,k)\)-ranking oracles as a directed acyclic graph (DAG), \(\mathcal{G}=(\mathcal{N},\mathcal{E})\), where the node set \(\mathcal{N}=\{1,\ldots,m\}\) and the directed edge set \(\mathcal{E}=\{(i,j)\mid f(x_{i})<f(x_{j})\}\). An example of such a DAG is shown in Figure 2. Given access to a \((m,k)\)-ranking oracle \(O_{f}^{(m,k)}\) and a starting point \(x\), we can query \(O_{f}^{(m,k)}\) with the inputs \(x_{i}=x+\mu\xi_{i}\), \(\xi_{i}\sim\mathcal{N}(0,I_{d})\), for \(i=1,\ldots,m\). With the DAG \(\mathcal{G}=(\mathcal{N},\mathcal{E})\) constructed from the ranking information of \(O_{f}^{(m,k)}\), we propose the following rank-based gradient estimator: \[\tilde{g}(x)=\frac{1}{|\mathcal{E}|}\sum_{(i,j)\in\mathcal{E}}\frac{x_{j}-x_{i }}{\mu}=\frac{1}{|\mathcal{E}|}\sum_{(i,j)\in\mathcal{E}}(\xi_{j}-\xi_{i}). \tag{6}\] **Remark 1**.: _Notice that (6) can be simply expressed as a linearly weighted combination of \(\xi_{1},...,\xi_{m}\). We provide the specific form in Appendix A._ We note that (2) is a special case of (6) with \(m=2\) and \(k=1\), and it can be easily shown that \(\mathbb{E}[\tilde{g}(x)]=\mathbb{E}[\hat{g}(x)]\) and \(\mathbb{E}[|\tilde{g}(x)|^{2}]\leq\mathbb{E}[|\hat{g}(x)|^{2}]\), indicating that the benefit of using ranking information over a single comparison is a reduced variance of the gradient estimator. However, to determine the extent of variance reduction, we must examine the graph topology of \(\mathcal{G}\). **Graph topology of \(\mathcal{G}\).** The construction of the DAG \(\mathcal{G}\) described above reveals that the graph topology of \(\mathcal{G}\) is uniquely determined by \(m\) and \(k\). Especially, there are two important statistics of this graph topology. The first statistic is the number of edges \(|\mathcal{E}|\), which is related to the number of pairwise comparisons that can be extracted from the ranking result. In the precedent work [3], the number of pairwise comparisons was used to determine the variance of the gradient estimator. 
However, this is insufficient for our case, as the pairwise comparisons in (6) are not independent. Therefore, we require the second statistic of the DAG, which is the number of neighboring edge pairs in \(\mathcal{E}\). We define a neighboring edge pair as a pair of edges that share the same node. For instance, in Figure 2, one neighboring edge pair is \((x_{1},x_{3})\) and \((x_{1},x_{2})\). We denote this number as \(N(\mathcal{E})\) and define it formally as follows: \[N(\mathcal{E})\stackrel{{\text{def.}}}{{=}}|\{((i,j),(i^{ \prime},j))\in\bar{\mathcal{E}}\times\bar{\mathcal{E}}|i\neq i^{\prime}\}|, \tag{7}\] where \(\bar{\mathcal{E}}\) is the undirected version of \(\mathcal{E}\), namely, the edge \((i,j)\) is equivalent to \((j,i)\) in \(\bar{\mathcal{E}}\). As mentioned, the graph topology of \(\mathcal{G}\) is determined by \(m\) and \(k\). Therefore, we can analytically compute \(|\mathcal{E}|\) and \(N(\mathcal{E})\) using \(m\) and \(k\). We state these calculations in the following lemma: **Lemma 2**.: _Let \(\mathcal{G}=(\mathcal{N},\mathcal{E})\) be the DAG constructed from the ranking information of \(O_{f}^{(m,k)}\), then we have:_ \[|\mathcal{E}| =km-(k^{2}+k)/2, \tag{8}\] \[N(\mathcal{E}) =m^{2}k+mk^{2}-k^{3}+k^{2}-4mk+2k. \tag{9}\] **Variance analysis of (6) based on the graph topology.** To analyze the variance of the estimator (6), we introduce two important metrics \(M_{1}(f,\mu)\) and \(M_{2}(f,\mu)\) on the function \(f\). **Definition 2**.: \[M_{1}(f,\mu)\stackrel{{\text{def.}}}{{=}}\max_{x} \Big{\|}\operatorname*{\mathbb{E}}_{\xi_{1},\xi_{2}}\left[S_{f}(x,\xi_{1},\xi_ {2},\mu)(\xi_{1}-\xi_{2})\right]\right\|^{2},\] (10) \[M_{2}(f,\mu)\stackrel{{\text{def.}}}{{=}}\max_{x} \operatorname*{\mathbb{E}}_{\xi_{1},\xi_{2},\xi_{3}}\left[S_{f}(x,\xi_{1},\xi _{2},\mu)S_{f}(x,\xi_{1},\xi_{3},\mu)\langle\xi_{1}-\xi_{2},\xi_{1}-\xi_{3} \rangle\right],\] (11) _where \(\xi_{1}\), \(\xi_{2}\) and \(\xi_{3}\) are three independent random vectors drawn from \(\mathcal{N}(0,I_{d})\)._ Lemma 3 provides some useful upper bounds on \(M_{1}(f,\mu)\) and \(M_{2}(f,\mu)\), which help to understand the scale of these two quantities better. **Lemma 3**.: _For any function \(f\) and \(\mu>0\), we have \(M_{1}(f,\mu)\leq 2d\), \(M_{2}(f,\mu)\leq 2d\). Moreover, if \(f\) satisfies that \(\nabla^{2}f(x)=cI_{d}\) where \(c\in\mathbb{R}\) is some constant, we have \(M_{1}(f,\mu)\leq 32/\pi\)._ With \(M_{1}(f,\mu)\) and \(M_{2}(f,\mu)\), we can bound the second order moment of (6) as shown in the following Lemma 4. **Lemma 4**.: _For any \(x\in\mathbb{R}^{d}\), we have_ \[\mathbb{E}[\|\tilde{g}(x)\|^{2}]\leq\frac{2d}{|\mathcal{E}|}+\frac{N(\mathcal{E} )}{|\mathcal{E}|^{2}}M_{2}(f,\mu)+M_{1}(f,\mu). \tag{12}\] Discussion on Lemma 4.With Lemma 2 and Lemma 3, we observe that the first variance term in (12), i.e., \(\frac{2d}{|\mathcal{E}|}\), is \(\mathcal{O}(\frac{1}{km})\), and thus vanishes as \(m\to\infty\). In contrast, the second variance term \(\frac{N(\mathcal{E})}{|\mathcal{E}|^{2}}M_{2}(f,\mu)\) does not disappear as \(m\) grows, because \[\lim_{m\to\infty}\frac{N(\mathcal{E})}{|\mathcal{E}|^{2}}=\lim_{m\to\infty} \frac{m^{2}k+mk^{2}-k^{3}+k^{2}-4mk+2k}{\left(km-(k^{2}+k)/2\right)^{2}}=\frac {1}{k}, \tag{13}\] and thus only vanishes when both \(k\) and \(m\) tend to infinity. Finally, there is a non-diminishing term \(M_{1}(f,\mu)\) remaining in (12). 
However, as shown in Lemma 3, \(M_{1}(f,\mu)\) is smaller than \(2d\) and can be bounded by a dimension-independent constant for a certain family of functions. **Remark 2**.: _It is worth noting that our analysis can be simply extended to any ranking oracles beyond the \((m,k)\)-ranking oracle. However, we only present the analysis for the \((m,k)\)-ranking oracle in this work, as it is the common one in practice._ ## 3 ZO-RankSGD: Zeroth-Order Rank-based Stochastic Gradient Descent Now that we have presented all of our findings in Sections 2 and 2.2, we are prepared to introduce our proposed algorithm, ZO-RankSGD. The pseudocode for ZO-RankSGD is outlined in Algorithm 1, and it utilizes the gradient estimator (6) derived above. ``` 0: Initial point \(x_{0}\), stepsize \(\eta\), number of iterations \(T\), smoothing parameter \(\mu\), \((m,k)\)-ranking oracle \(O_{f}^{(m,k)}\). 1:for\(t=1\) to \(T\)do 2: Sample \(m\) i.i.d. random vectors \(\{\xi_{(t,1)},\cdots,\xi_{(t,m)}\}\) from \(N(0,I_{d})\). 3: Query the \((m,k)\)-ranking oracle \(O_{f}^{(m,k)}\) with input \(\{x_{t-1}+\mu\xi_{(t,1)},\cdots,x_{t-1}+\mu\xi_{(t,m)}\}\), and constuct the corresponding DAG \(\mathcal{G}=(\mathcal{N},\mathcal{E})\) as described in Section 2.2. 4: Compute the gradient estimator using: \(g_{t}=\frac{1}{|\mathcal{E}|}\sum_{(i,j)\in\mathcal{E}}(\xi_{(t,j)}-\xi_{(t, i)})\) 5:\(x_{t}=x_{t-1}-\eta g_{t}\). 6:endfor ``` **Algorithm 1** ZO-RankSGD ### Theoretical guarantee of ZO-RankSGD Now we present the convergence result of Algorithm 1 in the following Theorem 1. **Theorem 1**.: _For any \(\eta>0\), \(\mu>0\), \(T\in\mathbb{N}\), after running Algorithm 1 for \(T\) iterations, we have:_ \[\mathbb{E}\left[\min_{t\in\{1,\ldots,T\}}\|\nabla f(x_{t-1})\|\right]\leq\frac {f(x_{0})-f^{*}}{\eta T}+C_{d}\mu L+\frac{\eta L}{2}\left(\frac{2d}{|\mathcal{E }|}+\frac{N(\mathcal{E})}{|\mathcal{E}|^{2}}M_{2}(f,\mu)+M_{1}(f,\mu)\right), \tag{14}\] _where \(C_{d}\) is some constant that only depends on \(d\)._ **Corollary 1**.: _By taking \(\eta=\sqrt{\frac{1}{dT}}\) and \(\mu=\sqrt{\frac{d}{C_{d}^{2}T}}\) in Theorem 1, we have_ \[\mathbb{E}\left[\min_{t\in\{1,\ldots,T\}}\|\nabla f(x_{t-1})\|\right]= \mathcal{O}\left(\sqrt{\frac{d}{T}}\right). \tag{15}\] Effect of \(m\) and \(k\) on the convergence speed of Algorithm 1.As we have discussed in Section 2.2, \(m\) and \(k\) affect the convergence speed through the variance of the gradient estimator. Specifically, in the upper bound of (14), we have \(\frac{2d}{|\mathcal{E}|}+\frac{N(\mathcal{E})}{|\mathcal{E}|^{2}}M_{2}(f,\mu)= \mathcal{O}\left(\frac{d}{km}+\frac{d}{k}\right)\). ### Line search via ranking oracle In this section, we discuss two potential issues that may arise when implementing Algorithm 1. Firstly, it can be cumbersome to manually tune the step size \(\eta\) required for each iteration. Secondly, it may be challenging for users to know whether the objective function is decreasing in each iteration as the function values are not accessible. In order to address these challenges, we propose a simple and effective line search method that leverages the \((l,1)\)-ranking oracle to determine the optimal step size for each iteration. The method involves querying the oracle with a set of inputs \(\{x_{t-1},x_{t-1}-\eta\gamma g_{t},...,x_{t-1}-\eta\gamma^{l-1}g_{t}\}\), where \(\gamma\in(0,1)\) represents a scaling factor that controls the rate of step size reduction. 
By monitoring whether or not \(x_{t}\) is equal to \(x_{t-1}\), users can observe the progress of Algorithm 1, while simultaneously selecting a suitable step size to achieve the best results. It is worth noting that this line search technique is not unique to Algorithm 1 and can be applied to any gradient-based optimization algorithm, including those in [22, 3]. To reflect this, we present the proposed line search method as Algorithm 2, under the assumption that the gradient estimator \(g_{t}\) has already been computed. ``` 0: Initial point \(x_{0}\), stepsize \(\eta\), number of iterations \(T\), shrinking rate \(\gamma\in(0,1)\), number of trials \(l\). 1:for\(t=1\) to \(T\)do 2: Compute the gradient estimator \(g_{t}\). 3:\(x_{t}=\arg\min_{x\in\mathcal{X}_{t}}f(x)\), where \(\mathcal{X}_{t}=\{x_{t-1},x_{t-1}-\eta\gamma g_{t},...,x_{t-1}-\eta\gamma^{l-1 }g_{t}\}\). 4:endfor ``` **Algorithm 2** Line search strategy for gradient-based optimization algorithms ## 4 Experiments ### Simple functions In this section, we present experimental results demonstrating the effectiveness of Algorithm 1 on two simple functions: 1. Quadratic function: \(f(x)=\|x\|_{2}^{2}\), \(x\in\mathbb{R}^{100}\). 2. Rosenbrock function: \(f(x)=\sum_{i=1}^{99}\left((1-x_{i})^{2}+100(x_{i+1}-x_{i}^{2})^{2}\right)\), \(x=[x_{1},...,x_{100}]^{\top}\in\mathbb{R}^{100}\). To demonstrate the effectiveness of our algorithm and verify our theoretical claims, we conduct two experiments, and all figures are obtained by averaging over 10 independent runs and are visualized in the form of mean\(\pm\)std. **(1) Comparing Algorithm 1 with existing algortihms.** In this first experiment, we compare Algorithm 1 with the following algorithms in the existing literature: 1. ZO-SGD [22]: A zeroth-order optimization algorithm for valuing oracle. 2. SCOBO [3]: A zeroth-order algorithm for pairwise comparing oracle. 3. GLD-Fast [11]: A direct search algorithm for top-1 oracle, namely, \((m,1)\)-ranking oracle. 4. CMA-ES [19, 12]: A heuristic optimization algorithm for ranking oracle. To ensure a meaningful comparison, we fix the number of queries \(m=15\) at each iteration for all algorithms. For gradient-based algorithms, ZO-SGD, SCOBO, and our ZO-RankSGD, we use query points for gradient estimation and 5 points for the line search. In this experiment, we set \(m=k\) for ZO-RankSGD, i.e. it can receive the full ranking information. Moreover, we tune the hyperparameters such as stepsize, smoothing parameter, and line search parameter via grid search for each algorithm, and the details are provided in Appendix C.1. Our experiment results in Figure 3 on the two functions show that the gradient-based algorithm can outperform the direct search algorithm GLD-Fast and the heuristic algorithm CMA-ES. Besides, Algorithm 1 can outperform SCOBO because the ranking oracle contains more information than the pairwise comparison oracle. Additionally, Algorithm 1 behaves similarly to ZO-SGD, indicating that the ranking oracle can be almost as informative as the valuing oracle for zeroth-order optimization. **(2) Investigating the impact of \(m\) and \(k\) on Algorithm 1.** In this part, we aim to validate the findings presented in Lemma 4 and Theorem 1 by running Algorithm 1 with various values of \(m\) and \(k\). To keep the setup simple, we set the step size \(\eta\) to 50 and the smoothing parameter \(\mu\) to 0.01 for Algorithm 1 with line search (where \(l=5\) and \(\gamma=0.1\)). 
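To make the experimental setup above concrete, the following self-contained Python sketch runs Algorithm 1 with the rank-based estimator (6) on the quadratic test function, using a simulated \((m,k)\)-ranking oracle with \(m=k=15\). The step size, smoothing parameter, and iteration count are our own illustrative choices with a plain constant step size (no line search), not the tuned configuration used for the reported results.

```python
# Minimal sketch of ZO-RankSGD (Algorithm 1) on the quadratic test function f(x) = ||x||^2.
# The (m, k)-ranking oracle is simulated from the hidden objective; hyperparameters are
# illustrative choices, not the tuned values used in the paper's experiments.
import numpy as np

def rank_oracle(f, points, k):
    vals = np.array([f(p) for p in points])
    return list(np.argsort(vals)[:k])              # indices of the k best points, best first

def rank_grad_estimate(ranked, m, xis):
    # Build the DAG edges implied by the (m, k) ranking: edge (i, j) means f(x_i) < f(x_j).
    edges = [(ranked[a], ranked[b]) for a in range(len(ranked)) for b in range(a + 1, len(ranked))]
    unranked = [j for j in range(m) if j not in ranked]
    edges += [(i, j) for i in ranked for j in unranked]
    k = len(ranked)
    assert len(edges) == k * m - (k * k + k) // 2   # edge count of Lemma 2
    return sum(xis[j] - xis[i] for i, j in edges) / len(edges)   # estimator (6)

def zo_rank_sgd(f, x0, m=15, k=15, eta=0.05, mu=0.01, iters=500):
    x = x0.copy()
    for _ in range(iters):
        xis = [np.random.randn(*x.shape) for _ in range(m)]
        ranked = rank_oracle(f, [x + mu * xi for xi in xis], k)
        x = x - eta * rank_grad_estimate(ranked, m, xis)
    return x

f = lambda x: float(np.sum(x ** 2))
x0 = np.ones(100)
x_final = zo_rank_sgd(f, x0)
print(f(x0), "->", f(x_final))                      # the objective should decrease substantially
```

The inline assertion also checks the edge count of Lemma 2, \(|\mathcal{E}|=km-(k^{2}+k)/2\), for the DAG constructed from the ranking.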
Figure 4 illustrates the performance of ZO-RankSGD under seven different combinations of \(m\) and \(k\) on the two functions. The results confirm our theoretical findings presented in Lemma 4. For example, we observe that \((m=10,k=10)\) yields better performance than \((m=100,k=1)\), as predicted by the second variance term in (12), which dominates and scales as \(\mathcal{O}(\frac{1}{k})\). ### Reinforcement Learning with ranking oracle **Motivation.** In this section, we consider the policy optimization problem in reinforcement learning with only a ranking oracle of the episode reward being available. Such a setting especially captures the scenario where human evaluators are asked to rank multiple episodes based on their expertise. In the existing RLHF approaches [24; 18; 23; 1], ranking feedback from humans is collected for every single action to train a reward model, and then the cumulative predicted reward is used as the performance measure for every episode. In contrast, the ranking feedback in our setting is collected directly on the episode level rather than the action level, providing a more precise evaluation of each episode's performance. We demonstrate the efficacy of ZO-RankSGD by showing that it can perform well on policy optimization described above. Specifically, we adopt a similar experimental setup as [3; 7], where the goal is to learn a policy for simulated robot control with several problems from the MuJoCo suite of benchmarks [35], and we restrict that the algorithm can only query the episode reward via a \((5,5)\)-ranking oracle. We compare ZO-RankSGD to the CMA-ES algorithm, which is commonly used as a baseline in reinforcement learning [2] and also relies only on ranking oracle. **Results.** The experiment results are shown in Figure 5, where the x-axis is the number of queries to the ranking oracle, and the y-axis is the ground-truth episode reward. In these experiments, we do not use line search for ZO-RankSGD, and instead, we let \(\eta=\mu\), and decay them exponentially after every rollout. As can be seen from Figure 5, our algorithm can outperform CMA-ES by a significant margin on all three tasks, exhibiting a better ability to incorporate ranking information. Figure 4: Performance of ZO-RankSGD under different combinations of \(m\) and \(k\). Figure 3: Performance of different algorithms. ### Taming Diffusion Generative Model with Human Feedback In recent years, there has been a growing interest in diffusion generative models, which have demonstrated remarkable performance in generating high-quality images [14; 31; 5]. However, these models often fall short of capturing fine details, such as human fingers. To address this issue, we draw inspiration from recent successes in aligning Language Models with human feedback [24; 18; 23; 1], and propose to utilize human ranking feedback to enhance the detail of the generated images. We have noticed that there is a concurrent work [16] sharing a similar motivation with us. However, their method is still based on RLHF and requires a considerable amount of pre-collected data for fine-tuning the diffusion model. In contrast, our proposed method does not require any pre-collected data and does not need to fine-tune the diffusion model. **Experimental Setting.** We focus on the task of text-to-image generation, using the state-of-the-art Stable Diffusion model [29] to generate images based on given text prompts. 
Our goal is to optimize the initial latent embedding using human ranking feedback through our proposed Algorithm 1, in order to produce images that are more appealing to humans. This experimental setting offers several advantages, including: (1) The latent embedding is a low-dimensional vector and thus requires far fewer rounds of human feedback compared to fine-tuning the entire model. (2) It can also serve as a data-collecting step before fine-tuning the model. We note that any continuous parameter, such as the classifier-free guidance scale in the diffusion model, can be optimized using human feedback in a similar way. However, in this study, we focus solely on optimizing the latent embedding as we found that it is the most crucial factor for generating high-quality images. **Denoising diffusion process as a continuous mapping.** Firstly, we remark that only ODE-based diffusion samplers, like DDIM [30] and DPM-solver [20], are used in this study as now the denoising diffusion process will be deterministic and only depends on the latent embedding. We demonstrate that optimizing the latent embedding is a valid continuous optimization problem by showing that, with slight perturbations of the latent embedding, diffusion samplers can usually generate multiple similar images. An example of this phenomenon is in Figure 6, where the first image is generated using a given latent embedding, while the next three images are generated by perturbing this embedding with noise drawn from \(\mathcal{N}(0,0.1I_{d})\). Figure 5: Performance of ZO-RankSGD and CMA-ES on three MuJoCo environments Figure 6: Continuous property of diffusion process. The used text prompt is _A teddy bear is skiing, detailed, realistic, 4K, 3D_. **Human feedback vs. CLIP score.** The CLIP model [28], which is a state-of-the-art language and image contrastive model, can be used to determine the similarity between given texts and images. However, since the model is trained using noisy text-image pairs collected from the internet, it may not always perform well when details matter. In order to demonstrate the advantage of using human feedback, we compare the generated images obtained by optimizing human preference with those obtained by optimizing the CLIP similarity score [28]. **Examples.** We present some examples of optimization results in Figure 7. We use some popular text prompts from the internet1 and provide human feedback ourselves in this experiment. As shown in the figure, our proposed Algorithm 1 can significantly improve the realism and detail of the generated images by leveraging human ranking feedback. Furthermore, we observe that those images obtained by optimizing the CLIP similarity score could produce worse images than the original ones, showing the necessity of human feedback. Footnote 1: [https://mpost.io/best-100-stable-diffusion-prompts-the-most-beautiful-ai-text-to-image-prompts](https://mpost.io/best-100-stable-diffusion-prompts-the-most-beautiful-ai-text-to-image-prompts) For more examples like the ones in Figure 7, and the details of the entire optimization process, we refer the readers to Appendix C.2. ## 5 Conclusion In conclusion, this paper rigorously studies a novel optimization problem where only a ranking oracle of the objective function is available. For this problem, we propose the first provable zeroth-order optimization algorithm, ZO-RankSGD, which has consistently demonstrated its efficacy across a wide range of applications. 
We also present how different ranking oracles can impact optimization performance, providing guidance on designing the user interface for ranking feedback. Figure 7: Examples of optimizing latent embedding in diffusion generative model. Initial: The initial images selected through multiple randomly generated latent embeddings serve as the initial points for the later optimization process. Human: The images obtained by optimizing human preference. CLIP: The images obtained by optimizing the CLIP similarity score. Additionally, our algorithm has been shown to be a practical and effective way to incorporate human feedback, for example, it can be used to improve the detail of images generated by Stable Diffusion with human guidance. Furthermore, our approach, while can also be used for collecting data, offers a more efficient alternative to existing Reinforcement Learning with Human Feedback (RLHF) methods by enabling the online collection of ranking data and continuous improvement of model performance. Possible future directions to this work may include extending the algorithm to handle noise and uncertainty in the ranking feedback, combining ZO-RankSGD with a model-based approach like Bayesian Optimization [10] to further improve the query efficiency, and applying it to other scenarios beyond human feedback.
2310.03669
LumiNet: The Bright Side of Perceptual Knowledge Distillation
In the knowledge distillation literature, feature-based methods have dominated due to their ability to effectively tap into extensive teacher models. In contrast, logit-based approaches, which aim to distill `dark knowledge' from teachers, typically exhibit inferior performance compared to feature-based methods. To bridge this gap, we present LumiNet, a novel knowledge distillation algorithm designed to enhance logit-based distillation. We introduce the concept of 'perception', aiming to calibrate logits based on the model's representation capability. This concept addresses overconfidence issues in logit-based distillation methods while also introducing a novel method to distill knowledge from the teacher. It reconstructs the logits of a sample/instance by considering relationships with other samples in the batch. LumiNet excels on benchmarks like CIFAR-100, ImageNet, and MSCOCO, outperforming leading feature-based methods; e.g., compared to KD with ResNet18 and MobileNetV2 on ImageNet, it shows improvements of 1.5% and 2.05%, respectively.
Md. Ismail Hossain, M M Lutfe Elahi, Sameera Ramasinghe, Ali Cheraghian, Fuad Rahman, Nabeel Mohammed, Shafin Rahman
2023-10-05T16:43:28Z
http://arxiv.org/abs/2310.03669v2
# LumiNet: The Bright Side of Perceptual Knowledge Distillation ###### Abstract In knowledge distillation research, feature-based methods have dominated due to their ability to effectively tap into extensive teacher models. In contrast, logit-based approaches are considered to be less adept at extracting hidden 'dark knowledge' from teachers. To bridge this gap, we present _LumiNet_, a novel knowledge-transfer algorithm designed to enhance logit-based distillation. We introduce a perception matrix that aims to recalibrate logits through adjustments based on the model's representation capability. By meticulously analyzing intra-class dynamics, _LumiNet_ reconstructs more granular inter-class relationships, enabling the student model to learn a richer breadth of knowledge. Both teacher and student models are mapped onto this refined matrix, with the student's goal being to minimize representational discrepancies. Rigorous testing on benchmark datasets (CIFAR-100, ImageNet, and MSCOCO) attests to _LumiNet_'s efficacy, revealing its competitive edge over leading feature-based methods. Moreover, in exploring the realm of transfer learning, we assess how effectively the student model, trained using our method, adapts to downstream tasks. Notably, when applied to Tiny ImageNet, the transferred features exhibit remarkable performance, further underscoring LumNet's versatility and robustness in diverse settings. With _LumiNet_, we hope to steer the research discourse towards a renewed interest in the latent capabilities of logit-based knowledge distillation. ## 1 Introduction The advancement in deep learning models has accelerated significant increases in both complexity and performance. However, this progress brings challenges associated with computational demands and model scalability. To mitigate this, Knowledge Distillation (KD) has been proposed as an efficient strategy (Hinton et al., 2015) to transfer knowledge from a larger, intricate model (teacher) to a more compact, simpler model (student). The primary objective is to trade off performance and computational efficiency. There are two ways of KD: logit- and feature-based strategies (Romero et al., 2014; Tian et al., 2019; r21, 2019; Yim et al., 2017). The logit-based methods aim to match the output distributions of the teacher and student models (Zhang et al., 2018; Mirzadeh et al., 2020; Zhao et al., 2022). In contrast, the feature-based methods are centered around aligning the intermediate layer representations between the two models (Romero et al., 2014). In general, feature-based KD outperforms logit-based KD in performance (Zhao et al., 2022). However, feature-based KD suffers from layer misalignment (Huang and Wang, 2017; Romero et al., 2014) (reducing sample density in this space), privacy concerns (Goodfellow et al., 2014) (intermediate model layers accessible for adversarial attacks revealing training data and posing significant threats), and escalating computational requirements (Vaswani et al., 2017; Zhao et al., 2022). These issues underscore the potential merits of the logit-based KD over feature-based KD. In light of these insights, this paper seeks to amplify the efficacy of the logit-based KD method, capitalizing on its inherent strengths. Several reasons underpin the disparity between logit- and feature-based KD. Firstly, logit-based KD tends to struggle with granularity. 
Feature-based methods leverage a broader spectrum of the teacher's knowledge by aligning intermediate representations, providing richer information to the student (Heo et al., 2019; Bengio et al., 2013; Wang and Yoon, 2021). In contrast, logits provide a more condensed representation, which might not always encapsulate the entirety of the teacher's knowledge (Romero et al., 2014). Secondly, when the teacher model has particularly high confidence in its target class, it can pose challenges. Even though temperature scaling (Hinton et al., 2015) is employed to address this, determining the optimal value to achieve proper alignment in learning remains an issue (Kim et al., 2021; Chen et al., 2021; Wang and Yoon, 2021). Thirdly, most of the logit-based methods often employ a simplistic matching criterion, which might not be robust enough to handle complex data distributions, leading to suboptimal knowledge transfer (Romero et al., 2014; Chen et al., 2021; Wang and Yoon, 2021). Recognizing these inherent issues, we embarked on a journey to address these challenges, aspiring to elevate the performance of logit-based KD beyond that of the feature-based approach. In response to the highlighted challenges inherent in traditional logit-based KD, we present _LumiNet_, a novel approach to knowledge distillation. _LumiNet_ is inspired by human perception: our ability to interpret objects based on context and past encounters Johnson (2021). Central to _LumiNet_ is the objective of holistically reconstructing instance-level distributions. For a given batch of images spanning multiple classes, each image's logits distribution is recalibrated, considering the distributions of its batch counterparts. This reconstruction operates by normalizing intra-class logits using variance and mean. Each class's column is systematically adjusted based on its inherent variance and mean, resulting in a refined 'perception' matrix. Notably, this transformative process is applied to both the teacher and the student models. The reconstructed logits maintain consistent variance scales across the teacher and student, ensuring a harmonized knowledge transfer. However, at the instance level, we attain a fresh logit distribution due to the reconstruction, enabling the student model to harness deeper insights through the KL divergence with the teacher's output. By adopting this innovative strategy, _LumiNet_ effectively addresses the challenges associated with logit-based KD: it surmounts the granularity issue by tapping into a broader knowledge spectrum, realigns the magnitude variations between teacher and student, and offers a robust matching criterion adept at handling intricate data distributions. The performance of _LumiNet_ has been evaluated on three computer vision tasks: image recognition, detection for model compression, and transfer learning for feature transfer ability. Our empirical evaluations solidify _LumiNet_'s efficacy: for instance, employing ResNet8x4 as a student, we achieved a standout 77.5% accuracy and further established benchmark supremacy across tasks like CIFAR100, ImageNet, MS-COCO and TinyImageNet. Our research contributions are: * We present _LumiNet_, a new perspective on knowledge distillation that emphasizes the reconstruction of instance-level distributions, offering a novel logit-based KD approach. * _LumiNet_ distinguishes itself with innovative features, notably its intra-class approach grounded on variance and mean. 
This creates a 'perception' matrix, ensuring harmonized Figure 1: Performance comparison of feature-based and logit-based methods on **(a)** CIFAR-100, **(b)** ImageNet, and **(c)** MS COCO datasets. Our proposed _LumiNet_, a logit-based method, achieves high accuracy without using extra parameters. variance between the teacher and student models, an aspect not previously addressed in the KD landscape. * Through extensive empirical evaluations, we demonstrate that our method consistently enhances performance across diverse datasets (CIFAR100, ImageNet, MS-COCO, and Tiny-ImageNe) and deep learning architectures (ResNet, VGG, ShuffleNet, MobileNet, WRN, and Faster-RCNN-FPN) and tasks (recognition, detection, and transfer learning). ## 2 Related Works **Logit-based KD:** In the domain of KD, logit-based techniques have traditionally emphasized the distillation process utilizing solely the output logits. Historically, the primary focus of research within logit distillation has been developing and refining regularization and optimization strategies rather than exploring novel methodologies. Noteworthy extensions to this conventional framework include the mutual-learning paradigm, frequently referenced as DML (Zhang et al., 2018), and incorporating the teacher assistant module, colloquially termed TAKD (Mirzadeh et al., 2020). Nonetheless, a considerable portion of the existing methodologies remain anchored to the foundational principles of the classical KD paradigm, seldom probing the intricate behaviors and subtleties associated with logits (Zhao et al., 2022). While the versatility of these logit-based methods facilitates their applicability across diverse scenarios, empirical observations suggest that their efficacy often falls short when juxtaposed against feature-level distillation techniques. **Feature-based KD:** Feature distillation, a knowledge transfer strategy, focuses on utilizing intermediate features to relay knowledge from a teacher model to a student model. State-of-the-art methods have commonly employed this technique, with some working to minimize the divergence between features of the teacher and student models (Hoe et al., 2019; Romero et al., 2014). A richer knowledge transfer is facilitated by forcing the student to mimic the teacher at the feature level. Others have extended this approach by distilling input correlations, further enhancing the depth of knowledge transfer (Park et al., 2019; Tian et al., 2019; r21, 2019; Chen et al., 2021). These methods, though high-performing, grapple with substantial computational demands and potential privacy issues, especially with complex models and large datasets. These challenges not only amplify processing time and costs but can also limit their practical applicability in real-world scenarios. Recognizing these challenges, our paper pivots its attention to logit-based distillation techniques. **Applications with KD:** Rooted in foundational work by Hinton et al. (2015) and further enriched by advanced strategies like Attention Transfer (Zagoruyko and Komodakis, 2016), ReviewKd (Chen et al., 2021), Decoupled KD (Zhao et al., 2022) and other methods (Park et al., 2019; Tian et al., 2019), KD has significantly improved performance in core vision tasks, spanning recognition (Krizhevsky et al., 2012; Simonyan and Zisserman, 2014; He et al., 2016), segmentation(Qin et al., 2021; Liu et al., 2019), and detection (Li et al., 2022; Yang et al., 2022; Zheng et al., 2023; Xu et al., 2022). 
Beyond vision, KD has also made notable strides in NLP tasks like machine translation and sentiment analysis (Kim and Rush, 2016; Zhang et al., 2022). KD has also proven valuable in addressing broader AI challenges, such as reducing model biases (Hossain et al., 2022; Chai et al., 2022; Zhou et al., 2021; Jung et al., 2021) and strengthening common-sense reasoning (West et al., 2021). We evaluate our method within the realms of image classification and object detection.

## 3 Methodology

### Knowledge Distillation Revisited

Consider a set of distinct samples denoted as \(\mathcal{X}=\{\mathbf{x}_{i}\}_{i=1}^{n}\), where \(\mathbf{x}_{i}\in\mathbb{R}^{m}\) and \(n\) represents the total number of samples. Given a parametric deep learning model \(f_{\theta}\) with learnable parameters \(\theta\), its output for a sample \(\mathbf{x}_{i}\) is defined as \(\mathbf{z}_{i}=f(\mathbf{x}_{i})\), where \(\mathbf{z}_{i}\in\mathbb{R}^{c}\), and \(c\) denotes the number of classes within the sample set \(\mathcal{X}\). In the knowledge distillation (KD) literature, the model's output \(\mathbf{z}\) is often referred to as the logit of the model. For brevity, we will omit \(\theta\) from the model notation \(f\). To provide more context within the realm of knowledge distillation, we designate \(f_{T}\) as the teacher model and \(f_{S}\) as the student model. The fundamental objective of KD is to minimize the divergence between the logits of the student and teacher for each sample in \(\mathcal{X}\). This can be expressed mathematically as minimizing the objective \(L_{\text{KD}}=\sum_{\mathbf{x}_{i}\in\mathcal{X}}\ell\left(f_{T}(\mathbf{x}_ {i}),f_{S}(\mathbf{x}_{i})\right)\). Here, \(\ell(\cdot,\cdot)\) is a loss function that measures the discrepancy between two vectors. For logit-based distillation, the primary objective is to align the softened logits of the student and teacher models. This alignment is quantified using the Kullback-Leibler (KL) divergence between the softened probabilities of the two models. Formally, the distillation loss, \(L_{KD}\), is defined as:

\[L_{KD}=KL\left(\text{Softmax}\left(\frac{f_{T}(\mathbf{x}_{i})}{\tau}\right) \;\big{|}\big{|}\,\text{Softmax}\left(\frac{f_{S}(\mathbf{x}_{i})}{\tau} \right)\right) \tag{1}\]

Here, \(\tau\) is the temperature parameter that modulates the softmax sharpness. The primary hurdle with logit-based distillation lies in the fact that any logit vector \(\mathbf{z}_{i}=f(\mathbf{x}_{i})\) is considerably more compact than its feature vector counterpart. This makes it difficult to extract the wealth of information embedded in the teacher model (Romero et al., 2014). The following section outlines some potential limitations associated with logit-based knowledge distillation.

**(1) Dilemma of single- vs. multi-instances:** While logit-based distillation primarily addresses single instances (Zhao et al., 2022; Zhang et al., 2018; Mirzadeh et al., 2020), these isolated logits lack insights regarding inter-relationships within a batch. To bridge this gap, Relational Knowledge Distillation (RKD) (Park et al., 2019) harnesses the power of inter-sample relationships by leveraging knowledge across data pairs \((\mathbf{x}_{i},\mathbf{x}_{j})\). However, in emphasizing such a relational perspective, RKD might omit the nuanced knowledge specific to individual samples.
Later, to amplify instance-level knowledge transfer, Decoupled Knowledge Distillation (DKD) (Zhao et al., 2022) refines this approach by segregating logits into targeted (positive class) and non-targeted (negative class) components. Yet, while DKD improves precision, it remains isolated and does not establish interconnections among multiple images, potentially overlooking broader inter-sample dynamics.

**(2) Role of \(\tau\):** In knowledge distillation, temperature scaling softens teacher model outputs, serving as a regularizer to reduce overfitting. Moreover, by preventing premature over-confidence in predictions, \(\tau\) further promotes better generalization and reduces the risk of fitting too closely to training data (Hinton et al., 2015). Because the teacher and student models' outputs have inherent statistical differences, finding a suitable value for \(\tau\) is difficult (Liu et al., 2023). Usually, KD methods require extensive \(\tau\) fine-tuning, leading to additional computational costs (Rafaf et al., 2023).

**(3) Intra-class dependency:** We illustrate this issue in Figure 2. Irrespective of the input image, there is a distribution of the prediction scores of the teacher model for each individual class. This distribution reflects the teacher's positive bias toward the data. For example, as the class 'cat' is very different from the vehicle classes (car, bus, truck, and ship), a teacher can classify a 'cat' instance more confidently than instances of the other classes. No matter the input, the distribution of intra-class prediction (say 'cat') is ignored during the transfer of knowledge.

Figure 2: A toy example illustrating intra-class divergence. Each row of the prediction matrix (left) represents the mean prediction score of all images belonging to the corresponding class. By column-wise averaging, we can calculate an intra-class distribution (right) representing how different/similar a class is across other classes. Here, the 'cat' class differs from the vehicle classes (car, bus, truck, and ship). Thus, 'cat' gets a low intra-class score. We call this score intra-class because it averages the same-class prediction across all images in the dataset. Previous KD methods do not transfer such intra-class variance from teacher to student during the distillation process.

### Introducing _LumiNet_

Based on our previous discussion, the process of knowledge distillation is found to be further enriched for a given instance \(\mathbf{x}_{i}\) when viewed in the context of its batch samples. Formally, a measure of information \(K\) corresponding to \(\mathbf{x}_{i}\) can be obtained as:

\[K(\mathbf{x}_{i})\propto\mathcal{D}(\mathbf{x}_{i})+\sum_{j\neq i}R(\mathbf{x}_{i},\mathbf{x}_{j}) \tag{2}\]

Here, \(\mathcal{D}\) captures the inherent _dark knowledge_, while \(R\) captures the inter-relational dynamics amongst instances. In this paper, we propose a loss function that optimizes this formulation.

**Constructing the perception:** We formulate our approach considering a batch of data samples \(\mathcal{B}=\{\mathbf{x}_{i}\}_{i=1}^{b}\), which is randomly selected from the original dataset \(\mathcal{X}\). Consequently, the logits generated by a model \(f\) for an instance \(\mathbf{x}_{i}\in\mathcal{B}\) across \(c\) classes are represented as: \(\mathbf{z}_{i}=(z_{i1},z_{i2},\ldots,z_{ic})\), where \(z_{ij}\) symbolizes the logit for the \(j^{th}\) class for instance \(\mathbf{x}_{i}\).
We adjust the logits based on the mean \(U_{j}\) and variance \(V_{j}\) computed across each class \(j\) of a batch. This transformed logit is given by:

\[h_{ij}=\frac{z_{ij}-U_{j}}{\sqrt{V_{j}}} \tag{3}\]

Here, \(h_{ij}\) represents the augmented logit for the \(j^{th}\) class for instance \(\mathbf{x}_{i}\). Consequently, the augmented logits for instance \(\mathbf{x}_{i}\) are obtained as:

\[\mathbf{h}_{i}=\left(\frac{z_{i1}-U_{1}}{\sqrt{V_{1}}},\frac{z_{i2}-U_{2}}{\sqrt{V_{2}}},\ldots,\frac{z_{ic}-U_{c}}{\sqrt{V_{c}}}\right) \tag{4}\]

In this context, the reconstructed logits \(\mathbf{h}_{i}\) capture the model's perception. Instead of merely making raw predictions, both the models (teacher and student) try to understand the finer details and differences within the batch of data. Eq. 3 outlines how the 'perceived' logits are constructed. In short, when both the teacher and student models' intra-class predictions are adjusted on the same scale, keeping their variance constant, the probability distribution across all the classes for individual instances changes accordingly. This new set of logits offers us a more insightful perspective of each instance. This set of logits \(\mathbf{h}_{i}\) is referred to as 'perception'.

**The _LumiNet_ Loss:** Classical knowledge distillation seeks to transfer the rich perceptual capabilities of a teacher model onto a smaller student model. To this end, we introduce _LumiNet_, a novel approach emphasizing the alignment of 'perceptions' rather than raw logits. In _LumiNet_, we focus on the perceived logits. Given an instance \(\mathbf{x}_{i}\), we denote the logits from the teacher for class \(c\) as \(h^{t}_{ic}\) and those from the student as \(h^{s}_{ic}\). The softmax operation scaled by a temperature factor \(\tau\) produces probability distributions as: \(P^{T}_{c}(\mathbf{x}_{i})=\frac{\exp(h^{t}_{ic}/\tau)}{\sum_{c^{\prime}}\exp(h^{t}_{ic^{\prime}}/\tau)},\quad P^{S}_{c}(\mathbf{x}_{i})=\frac{\exp(h^{s}_{ic}/\tau)}{\sum_{c^{\prime}}\exp(h^{s}_{ic^{\prime}}/\tau)}.\) With this understanding, the _LumiNet_ loss can be represented as:

\[\mathcal{L}_{LumiNet}=\sum_{\mathbf{x}_{i}\in\mathcal{X}}\sum_{c}P^{T}_{c}(\mathbf{x}_{i})\log\frac{P^{T}_{c}(\mathbf{x}_{i})}{P^{S}_{c}(\mathbf{x}_{i})}, \tag{5}\]

The objective of the _LumiNet_ loss Eq. 5 is to ensure that the student model not only aligns its predictions with the teacher but does so in the transformed 'perception' space. This ensures that the student does not just parrot the teacher's outputs but also learns a deeper understanding of intra-class and inter-class relationships. It also aligns with our intent outlined in Eq. 2. By minimizing the _LumiNet_ loss, we ensure that the student model's perception of data instances closely mirrors that of the teacher, leading to a more robust and nuanced student model.

### The bright side of _LumiNet_ perception

While conventional Knowledge Distillation frameworks face challenges in both logit-based and feature-based implementations, our methodology in _LumiNet_ sheds light on a renewed perspective to tackle these issues. Here is a detailed exploration of the bright side of _LumiNet_'s approach to KD:

1. **Enhanced logit granularity with perception:** Traditional logit-based approaches are restricted by the inherent granularity of their representations, as characterized by the direct logits of any input \(\mathbf{x}_{i}\). In contrast, _LumiNet_, leveraging its perception, refines this representation by introducing a transformation.
Through the utilization of the mean \(U_{j}\) and variance \(V_{j}\) for the logits of each class within a batch, as defined in the perceived logits \(\mathbf{h}_{i}\) in Eq. 4, _LumiNet_ achieves a more nuanced understanding. This mathematical recalibration allows the model to encapsulate subtler distinctions and depth, addressing the limitations inherent to conventional logit presentations.

2. **Balanced softening and overfitting:** In traditional knowledge distillation (KD), the temperature parameter \(\tau\) tempers logits by pushing their values closer to zero, effectively reducing variance and bridging the gap between teacher and student logits for efficient knowledge transfer. In _LumiNet_, the logits \(\mathbf{h}_{i}\) are intra-class normalized, yielding a zero mean and unit variance for each class. Thus, the reliance on \(\tau\) for inter-class adjustments is diminished due to the intrinsically reduced variance and mean of the logits.

3. **Holistic KD approach:** As discussed in the previous section, although effective in its realm, the DKD (Zhao et al., 2022) methodology sometimes fails to wholly capture the essence of the teacher's knowledge. _LumiNet_, with its perception-driven paradigm, seamlessly amalgamates targeted and non-targeted knowledge, as shown in Eq. 5. This holistic approach ensures a broader and deeper transference of knowledge.

4. **Capturing inter-instance relationships:** Recognizing the essence of both intra- and inter-class contexts, _LumiNet_ employs transformations on logits through intra-class mean and variance computations to produce normalized logits \(h_{ic}\). This process intrinsically captures intra-class dynamics. Concurrently, by considering logits across all classes for an instance \(\mathbf{x}_{i}\) within the batch, _LumiNet_ implicitly addresses inter-class relationships as well. Hence, with the formulations and variables defined in previous sections, _LumiNet_ ensures that the nuances within a class and the broader inter-class relationships are effectively captured, enriching the learning context for the student model.

In essence, _LumiNet_ redefines the horizons of logit-based KD. This innovative approach not only rectifies recognized challenges but also pioneers a roadmap on enhancing logit-based KD techniques to potentially overshadow their feature-based counterparts.
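Because Eqs. (3)-(5) only involve batch-level statistics of the logits, the loss is easy to prototype. Below is a minimal PyTorch-style sketch of both the classical objective in Eq. (1) and the _LumiNet_ loss. It is our own illustration rather than the authors' released implementation: the helper names, the \(\epsilon\) added for numerical stability, the use of the biased batch variance, and the batch-mean reduction are assumptions, and the full training objective would additionally include the usual cross-entropy term with weights not shown here.

```python
import torch
import torch.nn.functional as F

def classical_kd_loss(z_s, z_t, tau=4.0):
    """Classical logit distillation, Eq. (1): KL between temperature-softened outputs.
    (The common tau**2 gradient rescaling is omitted to mirror Eq. (1) as written.)"""
    p_t = F.softmax(z_t / tau, dim=1)
    log_p_s = F.log_softmax(z_s / tau, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean")

def perception(z, eps=1e-6):
    """Eqs. (3)-(4): standardize each class's logit with its mean/variance over the batch."""
    u = z.mean(dim=0, keepdim=True)                   # U_j, per-class mean over the batch
    v = z.var(dim=0, unbiased=False, keepdim=True)    # V_j, per-class variance (assumed biased)
    return (z - u) / torch.sqrt(v + eps)              # h_ij

def luminet_loss(z_s, z_t, tau=1.0):
    """Eq. (5): KL divergence computed on the 'perception' logits.
    (Batch-averaged here; Eq. (5) writes the plain sum over instances.)"""
    h_s = perception(z_s)
    h_t = perception(z_t.detach())                    # teacher logits carry no gradient
    p_t = F.softmax(h_t / tau, dim=1)
    log_p_s = F.log_softmax(h_s / tau, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean")
```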
## 4 Experiments

### Setup

**Dataset:** Using benchmark datasets, we conduct experiments on three vision tasks: image classification, object detection, and transfer learning. Our experiments leveraged _four_ widely acknowledged benchmark datasets. First, **CIFAR-100** (Krizhevsky et al., 2009), encapsulating a compact yet comprehensive representation of images, comprises 60,000 32x32 resolution images, segregated into 100 classes with 600 images per class. **ImageNet** (Russakovsky et al., 2015), a more extensive dataset, provides a rigorous testing ground with its collection of over a million images distributed across 1,000 diverse classes, often utilized to probe models for robustness and generalization. Concurrently, the **MS COCO** dataset (Lin et al., 2014), renowned for its rich annotations, is pivotal for intricate tasks, facilitating both object detection and segmentation assessments with 330K images, 1.5 million object instances, and 80 object categories. We strictly adhered to the standard dataset splits for reproducibility and benchmarking compatibility for training, validation, and testing. The **TinyImageNet**1 dataset, although more compact, acts as an invaluable resource for transfer learning experiments due to its wide variety across its 200 classes.

Footnote 1: [https://www.kaggle.com/c/tiny-imagenet](https://www.kaggle.com/c/tiny-imagenet)

**Network architectures:** Various architectures are employed depending on the context. For CIFAR-100, homogeneous configurations use teacher models like ResNet56, ResNet110 (He et al., 2016), and WRN-40-2, paired with corresponding students such as ResNet20 and WRN-16-2 (Table 1a). In heterogeneous settings, architectures such as ResNet32\(\times\)4 and VGG13 (Simonyan and Zisserman, 2014) for teachers are paired with lightweight models like ShuffleNet-V1, ShuffleNet-V2 (Ma et al., 2018) and MobileNet-V2 (Sandler et al., 2018) as students (Table 1b). For ImageNet classification, ResNet34 was employed as the teacher and ResNet18 as the student. Additionally, for object detection on MS-COCO, the Faster RCNN with FPN (Zhang et al., 2022) was utilized as the feature extractor, with predominant teacher models being ResNet variants, while the latter served as a student. A pre-trained WRN-16-2 model is further harnessed for transfer learning.

**Evaluation metric:** We assess methods' performance using Top-1 and Top-5 accuracy for classification tasks. We employ Average Precision (AP, AP50, and AP75) to gauge precision levels in object detection tasks. We calculate a \(\Delta\) that denotes the performance improvement of _LumiNet_ over the classical KD method, underlining our approach's enhancements.

Table 1: Top-1 accuracy (%) on CIFAR-100 for (a) teacher-student pairs with the same architectures and (b) pairs with heterogeneous architectures.

**Implementation details2:** We explore knowledge distillation using two configurations: a homogeneous architecture, where both teacher and student models have identical architectural types (e.g., ResNet56 and ResNet20), and a heterogeneous architecture, where they differ (e.g., ResNet32x4 as the teacher and ShuffleNet-V1 as the student). Our study incorporates a range of neural network architectures such as ResNet, WRN, VGG, ShuffleNet-V1/V2, and MobileNetV2. Training parameters are set as follows: for CIFAR-100, a batch size of 64 and a learning rate of 0.05; for ImageNet, a batch size of 128 and a learning rate of 0.1; and for MS-COCO, a batch size of 8 with a learning rate of 0.01. The value of \(\tau\) varies from 1 to 8, depending on the specific model. We followed the usual training settings of Zhao et al. (2022). Because the logits are already smoothed out, it is beneficial to use higher CE and KD loss weights than in the classical setting. All models are trained on a single GPU.

Footnote 2: Codes and models are available at [https://github.com/ismail31416/LumiNet](https://github.com/ismail31416/LumiNet)

### Main Results

**Comparison methods:** We compare our method with well-established feature- and logit-based distillation methods, underscoring its potential and advantages in the knowledge distillation domain.
Notable methods in the _Feature-Based Methods_ category include FitNet (Romero et al., 2014), which aligns features at certain intermediary layers, RKD (Park et al., 2019), which focuses on preserving pairwise relations of examples, and CRD (Tian et al., 2019), which minimizes the contrastive loss between the representations of the teacher and student models. Other methods in this category include OFD (Cho and Hariharan, 2019) and ReviewKD (Chen et al., 2021), each bringing unique strategies to leverage intermediary network features. _Logit-Based Methods_ include KD (Hinton et al., 2015), DML (Zhang et al., 2018), TAKD (Mirzadeh et al., 2020), and DKD (Zhao et al., 2022), which ensure that the student's logits are similar to the teacher's.

**Recognition tasks:** We perform image recognition tasks on CIFAR-100 and ImageNet. On **CIFAR-100**, when teacher and student models shared identical architectures, shown in Table 1a, _LumiNet_ presented improvements of 2-3%. And when the architectures were from different series, shown in Table 1b, the improvements were between 3-4%, consistently outperforming the baseline, classical KD, and other methods rooted in KD's principles. Similarly, on the intricate **ImageNet** dataset, _LumiNet_ outshined, exceeding all logit-based distillation techniques and beating state-of-the-art feature-based distillation methods, shown in Table 2. These findings robustly demonstrate that regardless of dataset or architectural variations, _LumiNet_ maintains its unparalleled efficacy and highlights its unique capacity to learn based on the 'perception'. In Table 3, we show that _LumiNet_ showcases a superior trade-off between the number of extra parameters/running time vs. accuracy. The necessity for extra parameters in feature-based techniques arises from integrating projection or intermediate layers, which align the teacher's feature space to the student model. With a latency of 11 ms, our method matched the best-performing models in speed and is exceptionally efficient (77.50%) without extra parameters. This combination of low latency, less computation, and high accuracy further reaffirms the unparalleled effectiveness and efficiency of _LumiNet_.

\begin{table} \begin{tabular}{c c c c c c|c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{4}{c|}{**Feature-Based Methods**} & \multicolumn{4}{c}{**Logit-Based Methods**} \\ \hline & Teacher & Student & FitNet & FGFI & ReviewKD & KD & TAKD & DKD & **Ours** \\ \hline AP & 42.04 & 33.26 & 34.13 & 35.44 & **36.75** & 33.97 & 34.59 & 35.05 & 35.34 \\ \(AP_{50}\) & 62.48 & 53.61 & 54.16 & 55.51 & 56.72 & 54.66 & 55.35 & 56.60 & **56.82** \\ \(AP_{75}\) & 45.88 & 35.26 & 36.71 & **38.17** & 34.00 & 36.62 & 37.12 & 37.54 & 37.56 \\ \hline \hline \end{tabular} \end{table} Table 4: Detection results on MS-COCO using Faster-RCNN-FPN (Lin et al., 2017) backbone.

\begin{table} \begin{tabular}{l c|c c|c c c|c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{4}{c|}{**Feature-Based Methods**} & \multicolumn{4}{c}{**Logit-Based Methods**} \\ \hline & Teacher & Student & AT & OFD & CRD & ReviewKD & KD & DKD & **Ours** \\ \hline Top-1 & 73.31 & 69.75 & 70.69 & 70.81 & 71.17 & 71.61 & 70.66 & 71.70 & **71.78** \\ Top-5 & 91.42 & 89.07 & 90.01 & 89.98 & 90.13 & 90.51 & 89.88 & 90.41 & **90.71** \\ \hline \hline \end{tabular} \end{table} Table 2: ImageNet results. ResNet34 and ResNet18 serve as the teacher and student, respectively.
**Detection task:** The quality of deep features is pivotal for accurate object detection. One persistent challenge is the effective knowledge transfer between established teacher models and student detectors (Li et al., 2017). Generally, logits cannot provide knowledge for object localization (Wang et al., 2019). While logit-based techniques have been traditionally used for this, they often do not meet state-of-the-art standards. On the **MS COCO** dataset, _LumiNet_ delivered noticeably better results (Table 4) than other logit-based methods and is comparable to feature-based methods.

**Transfer learning task:** To assess the transferability of deep features, we conduct experiments to verify the superior generalization capability of _LumiNet_. In this context, we utilized the Wide Residual Network (WRN-16-2), distilled from WRN-40-2, as our principal feature extraction apparatus. Sequential linear probing tasks were subsequently performed on the benchmark downstream dataset, notably _Tiny-ImageNet_. Our empirical results, delineated in Figure 4(a), manifestly underscore the exemplary transferability of features cultivated through _LumiNet_.

### Ablation study

**Varying batch sizes:** Figure 4(b) showcases an ablation study comparing the performance of the _LumiNet_ method with both a basic student model and the Knowledge Distillation method across various batch sizes. The batch sizes range from 32 to 256. The student model, serving as a standard baseline, demonstrates a slight decline in performance as the batch size increases. In comparison, _LumiNet_ consistently outperforms both the student and KD methods across all tested batch sizes, suggesting its robustness and superiority in the given context.

**Varying \(\tau\):** The logits within our perception framework are reconstructed with a clear statistical understanding of the intra-class logits. For this, both the teacher and the student models exhibit "softened" values, achieved through normalization by variance and maintaining an intra-class mean of zero. Consequently, the dependency on temperature \(\tau\) is minimal. Empirical evaluations in Figure 4(c) show minimal performance fluctuations across \(\tau\), with values ranging between 1 and 8 all yielding near-optimal results.

**Ensemble of teachers:** We employ an ensemble of two teacher models: ResNet8x4 and WRN-40-2 (labeled in the figure as "8x4" and "40-2"). This ensemble technique, which we term "Logit Averaging Ensemble," involves averaging the logits produced by the two teacher models (Sagi and Rokach, 2018). When training the student model, WRN-16-2 (labeled as "16-2" for the regular student and "16-2(en)" for the student trained with the ensemble technique), we observed a notable improvement in accuracy using this ensemble-derived guidance.
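Under the reading that the averaged teacher logits simply replace the single-teacher logits in the perception-based loss, the ensemble guidance is straightforward to emulate. The following is a minimal sketch of our own (the exact averaging and training schedule used for Figure 4(d) are not spelled out in the text, and the teacher/loss names are placeholders from the earlier sketch):

```python
import torch

@torch.no_grad()
def averaged_teacher_logits(teachers, x):
    """'Logit Averaging Ensemble': mean of the raw logits of the teacher models."""
    return torch.stack([t(x) for t in teachers], dim=0).mean(dim=0)

# usage sketch (hypothetical names):
# z_t = averaged_teacher_logits([teacher_resnet8x4, teacher_wrn40_2], images)
# loss = luminet_loss(student(images), z_t)
```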
As shown in Figure 4(d), when we train conventionally with our _LumiNet_ approach using just the WRN-40-2 teacher, we achieve 76.38% accuracy. However, results improve slightly to 76.52% when the training is augmented with insights from the ensemble technique. This suggests that the ensemble's aggregated information potentially enables the student model to capture more intricate patterns and nuances from the teachers.

Figure 4: **(a)** Transfer learning experiments from CIFAR-100 to Tiny-ImageNet. **(b)** Ablation study on different batch sizes. **(c)** Impact of different \(\tau\) values. **(d)** Performance on ensemble learning.

## 5 Conclusion

We propose a novel logit-based knowledge distillation strategy. Our study in KD underscores the pivotal role of intra-class variations, a dimension often underemphasized in prevailing methods. Within these class structures lie nuanced insights that traditional methods (overly reliant on replicating teacher logits) might overlook. We present _LumiNet_, a strategy imbued with the 'perception' concept, as a new solution to address this challenge. Supported by empirical results from our rigorous experiments across recognition, detection, and transfer learning, _LumiNet_ consistently demonstrates superior efficacy. In essence, for knowledge distillation to truly shine and be comprehensive, it is essential to illuminate and address these intra-class disparities. Our research with _LumiNet_ brings forth the bright side of perceptual knowledge distillation, guiding the way on this path.
2307.00977
Vacuum Zero Point Energy and its Statistical Correlations in dS Background
We study the vacuum zero point energy associated to a scalar field with an arbitrary mass and conformal coupling in a dS background. Employing dimensional regularization scheme, we calculate the regularized zero point energy density, pressure and the trace of the energy momentum tensor. It is shown that the classical relation $\langle T \rangle =-4 \langle \rho \rangle$ for the vacuum stress energy tensor receives anomalous quantum correction which depends on the mass and the conformal coupling while the relation $\langle \rho \rangle = - \langle P \rangle$ does hold. We calculate the density contrast associated to the vacuum zero point energy and show that $\delta \rho \sim \langle \rho \rangle$ indicating an inhomogeneous and non-perturbative distribution of the zero point energy. Finally, we calculate the skewness associated to the distribution of the zero point energy and pressure and show that they are highly non-Gaussian.
Hassan Firouzjahi, Haidar Sheikhahmadi
2023-07-03T12:46:26Z
http://arxiv.org/abs/2307.00977v1
# Vacuum Zero Point Energy and its Statistical Correlations in dS Background

###### Abstract

We study the vacuum zero point energy associated to a scalar field with an arbitrary mass and conformal coupling in a dS background. Employing dimensional regularization scheme, we calculate the regularized zero point energy density, pressure and the trace of the energy momentum tensor. It is shown that the classical relation \(\langle T\rangle=-4\langle\rho\rangle\) for the vacuum stress energy tensor receives anomalous quantum correction which depends on the mass and the conformal coupling while the relation \(\langle\rho\rangle=-\langle P\rangle\) does hold. We calculate the density contrast associated to the vacuum zero point energy and show that \(\delta\rho\sim\langle\rho\rangle\) indicating an inhomogeneous and non-perturbative distribution of the zero point energy. Finally, we calculate the skewness associated to the distribution of the zero point energy and pressure and show that they are highly non-Gaussian.

## I Introduction

Quantum field theory in a curved spacetime is a rich and vastly studied topic which deals with important theoretical and observational phenomena [1; 2; 3; 4; 5]. Since a concrete theory of quantum gravity is not at hand, usually one assumes that the background geometry is governed by classical general relativity and then the quantum fields are quantized in this classical background. This is a simplified and incomplete treatment of the full picture because the quantization of the gravitational degrees of freedom is expected to be essential at very high energy (very short scales). But even within this simplified picture interesting and non-trivial effects emerge if one does not get too close to the quantum gravity scale. For example, as the background may be dynamical, particle creation is a common phenomenon in quantum field theory in a curved spacetime [6; 7; 8; 9; 10; 11; 12]. In addition, the concept of vacua is a non-trivial issue as different observers may define different vacua associated to their quantum fields [13; 14; 15; 16; 17; 18].

An important issue in studying quantum field theory in curved spacetime is the question of regularization and renormalization. Similar to quantum field theories in flat spacetime, physical quantities such as the energy momentum tensor, energy density and pressure suffer from infinities in a curved spacetime as well. The fact that there is no unique vacuum in a curved spacetime while particles can be created adds more complexity to the treatment of regularization and renormalization in a curved spacetime. Therefore, it is an important question how one can regularize the infinities and read off the finite physical quantities. There are various well established schemes for regularization and renormalization in curved spacetimes such as the point splitting regularization method [19; 20; 21; 22; 23], the adiabatic regularization method based on the WKB approximation [24; 25; 26; 27], the zeta function regularization scheme [28; 29; 30; 31; 32] and the dimensional regularization procedure [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47].

Understanding quantum field theory in a dS background is an important question both theoretically and observationally [49; 50; 51; 52; 53]. On the observational side, there is compelling evidence that the early universe experienced a period of inflation in which the background was nearly a dS background.
The simplest models of inflation are based on scalar field dynamics in which a light scalar field rolls slowly on top of its nearly flat potential [54; 55]. While the background expansion is given by the potential, but there are quantum fluctuations associated to inflaton field which are stretched to superhorizon scales. It is believed that these quantum perturbations are the seeds of large scale structures in universe and perturbations in CMB [56; 57] which are well supported by cosmological observations [58; 59]. In addition, various cosmological observations indicate that the universe is undergoing a phase of accelerated expansion now. The origin of dark energy as the source of the late time acceleration is not known but a cosmological constant associated with the vacuum zero point energy of fields is a prime candidate which fits the data well [60; 61; 62; 63; 64]. Beside being a possible candidate for the origin of dark energy, the vacuum zero point energy and its regularization in a dS spacetime is an important question of its own right [39; 40; 41; 42; 43; 44; 45]. In this work we study the quantum fluctuations of a real scalar field with non-minimal coupling to gravity in a dS background, focusing on vacuum zero point energy and its statistical fluctuations. While quantum field theory in a dS background has been studied extensively in the past but here we look at this question in a different perspective. Motivated by ideas from inflationary model building, here we pay particular attention on the statistical variations of the vacuum zero point energy density and pressure and look for their physical implications. The rest of the paper is organized as follows. In section II we present our setup while in section III we calculate the expectation values of the zero point energy and pressure. In section IV we study the statistical fluctuations in energy density and pressure and calculate their two-point and three point correlation functions followed by Summary and Discussions in section V. Some technicalities dealing with higher correlations of the energy density and pressure are presented in Appendix A. ## II The setup We consider a real scalar field \(\Phi\) in a dS spacetime which is non-minimally coupled to gravity with the conformal coupling \(\xi\). The action is given by \[S=\int d^{D}x\sqrt{-g_{{}_{D}}}\left(-\frac{1}{2}\xi\Phi^{2}R- \frac{1}{2}\nabla^{\mu}\Phi\nabla_{\mu}\Phi-\frac{1}{2}m^{2}\Phi^{2}\right)\,, \tag{1}\] in which \(D\) refers to the dimension of the spacetime, \(g_{{}_{D}}\) stands for the determinant of the metric and \(m\) is the mass of the scalar field. Since we employ dimensional regularization to handle the quantum infinities, we keep the spacetime dimension general and only at the end set \(D=4-\epsilon\) with \(\epsilon\to 0\) as in conventional dimensional regularization approach. In four dimensional spacetime the theory is classically conformally invariant if \(m=0\) and \(\xi=\frac{1}{6}\). However, as it is well-known, this classical symmetry is anomalous under quantum perturbations which will be studied in some details below. We work in the test field limit where the background geometry is governed by the Einstein field equation and it is not affected by the presence of the test field. In order for this approximation to be consistent, we require the vacuum zero point energy and pressure associated to the fluctuations of \(\Phi\) to be much smaller than the corresponding background quantities. 
These requirements put constraints on the mass of the test field as we shall study below. To simplify the analysis, we consider a free theory in which there is no interaction in the field sector. It is an interesting question to extend the current analysis to the more physical setup where there is a self-interaction like \(\lambda\Phi^{4}\) in the model. For works studying various aspects of quantum effects in models with \(\lambda\Phi^{4}\) self-interaction see [39; 40; 48].

The background geometry has the form of the FLRW metric,

\[ds^{2}=-dt^{2}+a(t)^{2}d\mathbf{x}^{2}\,, \tag{2}\]

where \(a(t)\) is the scale factor and \(t\) is the cosmic time. It is more convenient to work with the conformal time \(\tau\) related to the cosmic time via \(d\tau=dt/a(t)\) in terms of which the metric becomes conformally flat,

\[ds^{2}=a(\tau)^{2}\big{(}-d\tau^{2}+d\mathbf{x}^{2}\big{)}\,. \tag{3}\]

In terms of conformal time, the relation \(aH\tau=-1\) holds in a dS background which will be used frequently in the following analysis. The dS spacetime is maximally symmetric, so the Ricci tensor and Ricci scalar are given as follows,

\[R_{\mu\nu}=(D-1)H^{2}g_{\mu\nu},\qquad R=D(D-1)H^{2}\,, \tag{4}\]

in which \(H\equiv\frac{\dot{a}}{a}\) is the Hubble expansion rate during inflation. The Klein-Gordon equation governing the dynamics of the field is given by

\[\Box\Phi-\xi R\Phi-m^{2}\Phi=0\,. \tag{5}\]

To study the quantum perturbations of the field, we introduce the canonically normalized field \(\sigma(\tau)\)

\[\sigma(\tau)\equiv a^{\frac{D-2}{2}}\Phi(\tau)\,, \tag{6}\]

in terms of which the action takes the following form,

\[S=\frac{1}{2}\int d\tau d^{D-1}{\bf x}\left[\sigma^{\prime}(\tau)^{2}-(\nabla\sigma)^{2}+\left(\frac{(D-4)(D-2)}{4}\big{(}\frac{a^{\prime}}{a}\big{)}^{2}+\frac{D-2}{2}\frac{a^{\prime\prime}}{a}-(m^{2}+\xi R)a^{2}\right)\sigma^{2}\right], \tag{7}\]

where a prime indicates the derivative with respect to the conformal time. To quantize the field, as usual, we expand it in terms of the creation and annihilation operators in the Fourier space as follows,

\[\sigma\left(x^{\mu}\right)=\int\frac{d^{D-1}{\bf k}}{(2\pi)^{\frac{(D-1)}{2}}}\left(\sigma_{k}(\tau)e^{i{\bf k}\cdot{\bf x}}a_{\bf k}+\sigma_{k}^{*}(\tau)e^{-i{\bf k}\cdot{\bf x}}a_{\bf k}^{\dagger}\right)\,, \tag{8}\]

where \(\sigma_{k}(\tau)\) is the quantum mode function while \(a_{\bf k}\) and \(a_{\bf k}^{\dagger}\) satisfy the following commutation relation in \(D-1\) spatial dimensions,

\[\left[a_{\bf k},a_{\bf k^{\prime}}^{\dagger}\right]=\delta^{D-1}({\bf k}-{\bf k^{\prime}})\,. \tag{9}\]

The equation of motion of the mode function from the action (7) is given by

\[\sigma_{k}^{\prime\prime}(\tau)+\left[k^{2}+\frac{1}{\tau^{2}}\Big{(}\frac{m^{2}}{H^{2}}+D(D-1)\xi-\frac{D(D-2)}{4}\Big{)}\right]\sigma_{k}(\tau)=0\,. \tag{10}\]

Note that the above equation is similar to the Mukhanov-Sasaki equation associated to the inflaton perturbations in an inflationary background. If we set \(m=0\) and \(\xi=\frac{1}{6}\) in \(D=4\), then the second term in the big bracket vanishes and the mode function reduces to its simple flat form. In a general \(D\)-dimensional spacetime with \(m=0\), the conformal limit is attained for the special value of \(\xi=\xi_{D}\equiv\frac{D-2}{4(D-1)}\). Imposing the Bunch-Davies (Minkowski) vacuum deep inside the horizon, the solution of the mode function from Eq.
(10) is given in terms of the Hankel function \[\Phi_{k}(\tau)=a^{\frac{2-D}{2}}\sigma_{k}(\tau)=(-H\tau)^{\frac{D-1}{2}}\Big{(} \frac{\pi}{4H}\Big{)}^{\frac{1}{2}}e^{\frac{i\pi}{2}(\nu+\frac{1}{2})}H_{\nu}^ {(1)}(-k\tau)\,, \tag{11}\] where \[\nu\equiv\frac{1}{2}\sqrt{(D-1)^{2}{-}4D(D-1)\xi-4\beta^{2}}\,,\qquad\beta \equiv\frac{m}{H}\,. \tag{12}\] From the above expression of \(\nu\) we see that it can be either real or pure imaginary depending on the values of \(\xi\) and \(\beta\). For a light field with \(\beta<1\) and with a moderate value of \(\xi\), the index \(\nu\) is real while for a heavy field with \(\beta\gg 1\) we typically have a complex value of \(\nu\). Both cases of real and imaginary \(\nu\) will be considered in the following analysis. In our analysis below, we are mainly interested in the expectation values of the vacuum energy momentum tensor in vacuum \(\langle T_{v}^{\mu\nu}\rangle\), the vacuum zero point energy density \(\langle\rho_{v}\rangle\), the vacuum zero point pressure \(\langle P_{v}\rangle\) and their statistical correlations. In order to simplify the notation, we discard the subscript \(v\) in the rest of analysis unless mentioned specifically. The energy momentum tensor is given by, \[T_{\mu\nu} = (1-2\xi)\partial_{\mu}\Phi\partial_{\nu}\Phi+(2\xi-\frac{1}{2})g _{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\Phi\partial_{\beta}\Phi \tag{13}\] \[+ \xi(R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R)\Phi^{2}+2\xi(g_{\mu\nu} \Phi\Box\Phi-\Phi\nabla_{\nu}\nabla_{\mu}\Phi)-\frac{1}{2}g_{\mu\nu}m_{\Phi} ^{2}\Phi^{2}\,.\] Using the field equation (5) to eliminate \(\Box\Phi\) combined with Eq. (4), \(T_{\mu\nu}\) is simplified as follows \[T_{\mu\nu}=\partial_{\mu}\Phi\partial_{\nu}\Phi+\frac{g_{\mu\nu}}{2}(4\xi-1) \big{(}\partial^{\alpha}\Phi\partial_{\alpha}\Phi+m^{2}\Phi^{2}\big{)}+\frac{ \xi}{2}(D-1)\big{(}2+(4\xi-1)D\big{)}H^{2}g_{\mu\nu}\Phi^{2}-\xi\nabla_{\mu} \nabla_{\nu}\Phi^{2}.\] Similarly, the trace of the energy momentum-tensor \(T\equiv T_{\mu}^{\mu}\) is given by \[T=2\Big{(}(D-1)\xi+\frac{2-D}{4}\Big{)}\Big{(}\partial^{\alpha}\Phi\partial_{ \alpha}\Phi+D(D-1)\xi H^{2}\Phi^{2}\Big{)}+\Big{(}2\xi(D-1)-\frac{D}{2}\Big{)} m^{2}\Phi^{2}\,. \tag{14}\] As we shall show explicitly below, \(\langle\Phi^{2}\rangle\) is independent of \(x^{\mu}\) so the vacuum expectation value of \(\langle T_{\mu\nu}\rangle\) simplifies to \[\langle T_{\mu\nu}\rangle=\langle\partial_{\mu}\Phi\partial_{\nu}\Phi\rangle+ \frac{g_{\mu\nu}}{2}(4\xi-1)\langle\partial^{\alpha}\Phi\partial_{\alpha}\Phi \rangle+\frac{g_{\mu\nu}}{2}\Big{[}(4\xi-1)m^{2}+\frac{\xi}{2}(D-1)\big{(}2+(4 \xi-1)D\big{)}H^{2}\Big{]}\langle\Phi^{2}\rangle.\] The vacuum zero point energy is \(\rho=T_{00}\) so from the above expression we obtain \[\langle\rho\rangle=\frac{(1+4\xi)}{2}\langle\dot{\Phi}^{2}\rangle+\frac{(1-4 \xi)}{2}\langle\nabla^{i}\Phi\nabla_{i}\Phi\rangle+\frac{H^{2}}{2}\Big{[}(1-4 \xi)\big{(}\beta^{2}+D(D-1)\xi)-2(D-1)\xi\Big{]}\langle\Phi^{2}\rangle. \tag{15}\] On the other hand, the pressure \(P\) is given by \[P=\frac{1}{D-1}\bot^{\mu\nu}T_{\mu\nu}\,, \tag{16}\] in which \(\bot^{\mu\nu}\)\(\equiv g^{\mu\nu}+u^{\mu}u^{\nu}\) is projection operator and \(u^{\mu}=(1,0,0,0)\) is the comoving velocity. Consequently, we obtain \[P=\frac{1}{D-1}(T+\rho)\,. \tag{17}\] ## III Dimensional regularizations and expectation values In this section we calculate \(\langle\rho\rangle\) and \(\langle P\rangle\) using dimensional regularization scheme in \(D\) dimension. From Eq. 
(15) we see that \(\langle\rho\rangle\) contains the following three ingredients: \[\rho_{1}\equiv\frac{1}{2}\dot{\Phi}^{2}\,,\qquad\rho_{2}\equiv\frac{1}{2}g^{ij }\nabla_{i}\Phi\nabla_{j}\Phi\,,\qquad\rho_{3}\equiv\frac{1}{2}H^{2}\Phi^{2}\,. \tag{18}\] Let us start with \(\langle\rho_{1}\rangle\). With the mode function given in Eq. (8) and performing a simple contraction using the commutation relation (9) we obtain \[\langle\rho_{1}\rangle=\frac{\mu^{4-D}}{2a^{2}(\tau)}\int\frac{d^{D-1}{\bf k} }{(2\pi)^{D-1}}\left|\Phi^{\prime}_{k}(\tau)\right|^{2}\,, \tag{19}\] in which \(\mu\) is a mass scale to keep track of the dimensionality of physical quantities as usual in dimensional regularization analysis. To proceed further, we decompose the integral into the radial and angular parts as follows \[{\rm d}^{D-1}{\bf k}=k^{D-2}\;{\rm d}k\;{\rm d}^{D-2}\Omega\,, \tag{20}\] in which \({\rm d}^{D-2}\Omega\) represents the \(D-2\)-dimensional angular part with the volume \[\int{\rm d}^{D-2}\Omega=\frac{2\,\pi^{\frac{D-1}{2}}}{\Gamma\left(\frac{D-1}{ 2}\right)}\,. \tag{21}\] Combining all numerical factors and defining the dimensionless variable \(x\equiv-k\tau\) we finally obtain \[\langle\rho_{1}\rangle=\frac{\pi^{\frac{3-D}{2}}\mu^{4-D}H^{D}}{2^{1+D} \Gamma\left(\frac{D-1}{2}\right)}e^{-\pi{\rm Im}(\nu)}\int_{0}^{\infty}dx\;x \left|\frac{d}{dx}\left(x^{\frac{D-1}{2}}H^{(1)}_{\nu}(x)\right)\right|^{2}\,, \tag{22}\] where after integrating it reads [76] \[\langle\rho_{1}\rangle=\frac{\mu^{4-D}\pi^{-\frac{D}{2}-1}}{4}\Gamma\Big{(} \nu+\frac{D}{2}+\frac{1}{2}\Big{)}\Gamma\Big{(}-\nu+\frac{D}{2}+\frac{1}{2} \Big{)}\Gamma\big{(}-\frac{D}{2}\big{)}\cos\big{(}\pi\nu\big{)}\big{(}\frac{H }{2}\big{)}^{D}\,. \tag{23}\] As explained previously, \(\nu\) can be either real or pure imaginary. In the latter case, one simply replaces \(\nu\) by \(i\nu\) in the above and in the following expressions. In addition, the following relation for the complex conjugation \(\overline{H_{i\nu}^{(1)}}(x)\) holds \[\overline{H_{i\nu}^{(1)}}(x)=e^{\pi{\rm Im}(\nu)}H_{i\nu}^{(2)}(x)\,, \tag{24}\] which was used to obtain Eq. (23). Following similar steps, for the remaining components we obtain \[\langle\rho_{2}\rangle=\frac{\pi^{\frac{3-D}{2}}\mu^{4-D}H^{D}}{2^{1+D}\Gamma \left(\frac{D-1}{2}\right)}e^{-\pi{\rm Im}(\nu)}\int_{0}^{\infty}dx\ x^{D} \left|H_{\nu}^{(1)}(x)\right|^{2}\,, \tag{25}\] in which the result can be expressed as \[\langle\rho_{2}\rangle=\frac{\mu^{4-D}\pi^{-\frac{D}{2}-1}}{4}\big{(}D-1) \Gamma\Big{(}\nu+\frac{D}{2}+\frac{1}{2}\Big{)}\Gamma\Big{(}-\nu+\frac{D}{2}+ \frac{1}{2}\Big{)}\Gamma\big{(}-\frac{D}{2}\big{)}\cos\big{(}\pi\nu\big{)} \big{(}\frac{H}{2}\big{)}^{D}\,, \tag{26}\] and \[\langle\rho_{3}\rangle=\frac{\pi^{\frac{3-D}{2}}\mu^{4-D}H^{D}}{2^{1+D}\Gamma \left(\frac{D-1}{2}\right)}e^{-\pi{\rm Im}(\nu)}\int_{0}^{\infty}dx\ x^{D-2} \left|H_{\nu}^{(1)}(x)\right|^{2}\,, \tag{27}\] that yields \[\langle\rho_{3}\rangle=\frac{\mu^{4-D}\pi^{-\frac{D}{2}-1}}{2}\Gamma\Big{(} \nu+\frac{D}{2}-\frac{1}{2}\Big{)}\Gamma\Big{(}-\nu+\frac{D}{2}-\frac{1}{2} \Big{)}\Gamma\Big{(}-\frac{D}{2}+1\Big{)}\cos\big{(}\pi\nu\big{)}\big{(}\frac {H}{2}\big{)}^{D}\,. \tag{28}\] Incidentally, from the above equations we see that \(\langle\rho_{i}\rangle\) are constants so \(\langle\Phi^{2}(x)\rangle\propto\langle\rho_{3}\rangle\) is constant as advertised previously. This is consistent with the fact that the dS background is a maximally symmetric spacetime so \(\langle\Phi^{2}(x)\rangle\) is expected to be a constant. 
With the component of \(\langle\rho_{i}\rangle\) given in Eqs. (23), (26) and (28) in a general \(D\)-dimensional dS spacetime we obtain the following relations among them, \[\langle\rho_{1}\rangle=\Big{[}(D-1)\xi+\frac{\beta^{2}}{D}\Big{]}\langle\rho_ {3}\rangle\,,\qquad\langle\rho_{2}\rangle=-(D-1)\langle\rho_{1}\rangle=- \Big{[}(D-1)^{2}\xi+\frac{(D-1)}{D}\beta^{2}\Big{]}\langle\rho_{3}\rangle\,. \tag{29}\] The above relations between \(\langle\rho_{i}\rangle\) will be very helpful in the following analysis. Having each components of \(\langle\rho_{i}\rangle\) at hand, the zero point energy \(\langle\rho\rangle\) from Eq. (15) is given by \[\langle\rho\rangle=(1+4\xi)\langle\rho_{1}\rangle+(1-4\xi)\langle\rho_{2} \rangle+\Big{[}(1-4\xi)\big{(}\beta^{2}+D(D-1)\xi)-2(D-1)\xi\Big{]}\langle \rho_{3}\rangle\,. \tag{30}\] Using the relation given in Eq. (29) this further simplifies to \[\langle\rho\rangle=\frac{2\beta^{2}}{D}\langle\rho_{3}\rangle\,. \tag{31}\] Similarly, the expectation value \(\langle T\rangle\) from Eq. (14) is obtained to be \[\langle T\rangle = \Big{[}(D-2)-4(D-1)\xi\Big{]}\Big{(}\langle\rho_{1}\rangle-\langle \rho_{2}\rangle-D(D-1)\xi\langle\rho_{3}\rangle\Big{)}+\Big{[}4(D-1)\xi-D\Big{]} \beta^{2}\langle\rho_{3}\rangle \tag{32}\] \[= -2\beta^{2}\langle\rho_{3}\rangle\,.\] Comparing to Eq. (31) we obtain the interesting result that \[\langle T\rangle=-D\langle\rho\rangle\,. \tag{33}\] It is crucial to note that the above relations are valid for a general value of \(D\). In particular, we should not set \(D=4\) in the above relations before performing dimensional regularizations. The reason is that we work in \(D=4-\epsilon\) dimension in which \(\langle\rho_{i}\rangle\), \(\langle\rho\rangle\) and \(\langle T\rangle\) will contain the divergent \(\frac{1}{\epsilon}\) terms plus the regular terms. Consequently, there will be additional finite contributions from the products of a function of \(D\) and each of \(\langle\rho_{i}\rangle\) or \(\langle\rho\rangle\). Intuitively speaking, these are "quantum anomalies" which can not be seen classically. More specifically, consider the relation between \(\langle T\rangle\) and \(\langle\rho\rangle\) in Eq. (33). If we simply set \(D=4\), we obtain the classical result \(\langle T\rangle=-4\langle\rho\rangle\) which is expected for the vacuum zero point energy. However, a careful investigation shows that this is not true. Indeed, as we shall verify below, setting \(D=4-\epsilon\) and performing the dimensional regularization to leading order we have \[\langle T\rangle+4\langle\rho\rangle=\mathcal{A}\,,\hskip 28.452756pt \mathcal{A}\equiv-\frac{H^{4}\beta^{2}}{32\pi^{2}}(\beta^{2}+12\xi-2)\,. \tag{34}\] The quantity \(\mathcal{A}\) is a common effect which signals the quantum "anomalous" contributions. We see that in the massless limit \(\beta=0\) the above anomalous contribution vanishes. Furthermore, for a massive field if \(\beta\) and \(\xi\) arrange such that \(\xi=\frac{1}{6}-\frac{\beta^{2}}{12}\) the anomalous contribution vanishes as well. Similarly, the relation between \(\langle\rho\rangle\) and \(\langle\rho_{3}\rangle\) in Eq. (31) receives the anomalous correction, yielding \[\langle\rho\rangle-\frac{\beta^{2}}{2}\langle\rho_{3}\rangle=\frac{\mathcal{A }}{4}\,. \tag{35}\] On the other hand, from Eq. (17), combined with Eq. (33), we obtain the following relation between \(\langle P\rangle\) and \(\langle\rho\rangle\): \[\langle P\rangle=-\langle\rho\rangle\,. 
\tag{36}\] The above relation between \(\langle P\rangle\) and \(\langle\rho\rangle\) is exact and is anomalous free. It holds for both massive and massless fields. Physically this makes sense since we are dealing with bubble diagrams. As the spacetime is locally Lorentz invariant, then one requires [60; 64; 65]\(\langle T_{\mu\nu}\rangle=-\langle\rho\rangle g_{\mu\nu}\) which also yields \(\langle P\rangle=-\langle\rho\rangle\). Now contracting this tensorial relation with \(g^{\mu\nu}\) we obtain Eq. (33). However, as mentioned previously, Eq. (33) does not mean that \(\langle T\rangle=-\langle\rho\rangle+3\langle P\rangle=-4\langle\rho\rangle\) since \(D=4-\epsilon\) and there are divergent \(\frac{1}{\epsilon}\) terms hiding inside \(\langle\rho\rangle\). Intuitively speaking, local Lorentz invariance in dimensional regularization scheme adds a new "extra dimension" of size \(\epsilon\) which causes the anomalous relation Eq. (34). Now, let us calculate \(\langle\rho\rangle\) from Eq. (31) with the value of \(\rho_{3}\) given in Eq. (28). Performing the dimensional regularization to relevant order we obtain \[\langle\rho\rangle={\cal A}\big{(}\frac{-1}{\epsilon}+\frac{\Delta}{2}\big{)} +\frac{H^{4}\beta^{2}}{128\pi^{2}}(2-8\xi-3\beta^{2})\,, \tag{37}\] in which \(\Delta\) is another common factor defined via \[\Delta\equiv\ln\Big{(}\frac{H^{2}}{4\pi\mu^{2}}\Big{)}+2\Psi(\nu+\frac{1}{2}) -\pi\tan(\nu\pi)\,, \tag{38}\] where \(\Psi(x)\) is the digamma function and we have shifted \(\mu\) by \(\gamma\), the Euler number, which does not affect the physical result. Furthermore, after performing the dimensional regularization analysis we can now set \(D=4\) in which from Eq. (12) we obtain \[\nu=\frac{1}{2}\sqrt{9-4\beta^{2}-48\xi}\,. \tag{39}\] In particular, for the special case of \(\beta=\xi=0\), we obtain the expected result \(\nu=\frac{3}{2}\) for a massless field in the dS background. In addition, for a heavy field with \(\beta\gg 1\), \(\nu\) becomes pure imaginary. Looking at Eq. (37) we see that the parameter \({\cal A}\) is the coefficient of the divergent \(\frac{1}{\epsilon}\) term so that is why we obtain the hidden anomalous contribution in Eqs. (34) and (35) when expanding \(D=4-\epsilon\). Finally, after subtracting the divergent \(\frac{1}{\epsilon}\) term in Eq. (37) via the appropriate counter terms, the regularized value of the zero point energy is obtained to be \[\langle\rho\rangle_{\rm reg} = \frac{{\cal A}\,\Delta}{2}+\frac{H^{4}\beta^{2}}{128\pi^{2}}(2-8 \xi-3\beta^{2})\] \[= \frac{H^{4}\beta^{2}}{64\pi^{2}}\Big{\{}(\beta^{2}+12\xi-2)\Big{[} \ln\Big{(}\frac{H^{2}}{4\pi\mu^{2}}\Big{)}+2\Psi(\nu+\frac{1}{2})-\pi\tan(\nu \pi)\Big{]}+1-4\xi-\frac{3}{2}\beta^{2}\Big{\}}\,.\] As is common in dimensional regularization approach the term \(\ln\Big{(}\frac{H}{\mu}\Big{)}\) originates from the regularization. To read off the physical contribution, one has to further renormalize the above finite term. This can be achieved upon choosing a physical value for the mass scale parameter \(\mu\) or if one compares the values of \(\langle\rho\rangle_{\rm reg}\) at two different energy scales and look for its running with the change of the energy scale. As explained previously, depending on the mass of the field, the index \(\nu\) in Eq. (39) can be either real or imaginary. The former happens typically when the field is light or \(\xi\) is not large while the latter corresponds to the case where the field is heavy with \(\beta\gg 1\). Below we study each case separately. 
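The explicit formula in Eq. (40) is also straightforward to evaluate numerically before specializing to the two branches. The snippet below is our own illustration, not part of the paper, for the real-\(\nu\) branch in units \(H=\mu=1\) (scipy's digamma supplies \(\Psi\)); it confirms that the massless, minimally coupled limit stays finite and approaches \(3H^{4}/32\pi^{2}\) despite the \(\tan(\nu\pi)\) pole, and that the conformally coupled, light-mass case approaches \(-H^{4}\beta^{2}/96\pi^{2}\), in line with the limits discussed in the next subsection.

```python
import numpy as np
from scipy.special import digamma

def rho_reg(beta, xi, H=1.0, mu=1.0):
    """Eq. (40) for the real-nu branch (requires 9 - 4*beta**2 - 48*xi > 0)."""
    nu = 0.5 * np.sqrt(9.0 - 4.0 * beta**2 - 48.0 * xi)   # Eq. (39) at D = 4
    bracket = (np.log(H**2 / (4.0 * np.pi * mu**2))
               + 2.0 * digamma(nu + 0.5)
               - np.pi * np.tan(np.pi * nu))
    return (H**4 * beta**2 / (64.0 * np.pi**2)
            * ((beta**2 + 12.0 * xi - 2.0) * bracket + 1.0 - 4.0 * xi - 1.5 * beta**2))

print(rho_reg(beta=1e-4, xi=0.0))       # ~ 9.50e-03: massless, minimally coupled limit
print(3.0 / (32.0 * np.pi**2))          # ~ 9.50e-03, i.e. 3 H^4 / (32 pi^2)
print(rho_reg(beta=1e-2, xi=1.0/6.0))   # ~ -1.06e-07, close to -beta^2/(96 pi^2)
```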
### Light field with real \(\nu\) Now we consider the case where the field is light enough so \(\nu\) is real. A particular case of interest is the massless limit \(\beta=0\). We may also consider different limit of \(\xi\) as well, such as the special cases \(\xi=0\) and the conformal limit \(\xi=\frac{1}{6}\). From Eq. (40) it may look that for massless field with \(\beta=0\), we obtain \(\langle\rho\rangle_{\rm reg}=0\). However, this is tricky as there is a particular limit in which the function \(\tan(\nu\pi)\) diverges when both \(\beta,\xi\to 0\). Taking the limit \(\beta,\xi\to 0\) properly, we obtain \[\langle\rho\rangle_{\rm reg}=\frac{3H^{4}}{32\pi^{2}}\,,\hskip 28.452756pt(\xi= \beta=0)\,. \tag{41}\] Another limit of interest is \(\xi\ll 1\) such that \(\xi\ll\beta^{2}<1\). In this limit we obtain \[\langle\rho\rangle_{\rm reg}\simeq\frac{3H^{4}}{32\pi^{2}}-\frac{9\xi H^{4}}{8 \pi^{2}\beta^{2}}-\frac{H^{4}\beta^{2}}{32\pi^{2}}\Big{[}\ln\Big{(}\frac{H^{2} }{4\pi\mu^{2}}\Big{)}+\frac{10}{3}\Big{]}+\frac{H^{4}\beta^{4}}{64\pi^{2}} \Big{[}\ln\Big{(}\frac{H^{2}}{4\pi\mu^{2}}\Big{)}-\frac{31}{54}\Big{]}\,,\quad( \xi\ll\beta^{2}),\] in which the subleading terms of orders \(\xi^{2}\beta^{-4}\) or \(\beta^{6}\) and higher orders are neglected in the above expansion. On the other hand, for larger values of \(\xi\), we obtain \(\langle\rho\rangle_{\rm reg}\propto\beta^{2}\) with the coefficient depending on the value of \(\xi\). For example, for the particular limit with \(\xi=\frac{1}{6}\) we obtain \[\langle\rho\rangle_{\rm reg}=-\frac{H^{4}}{96\pi^{2}}\beta^{2}+\frac{H^{4}}{6 4\pi^{2}}\Big{[}\ln\Big{(}\frac{H^{2}}{4\pi\mu^{2}}\Big{)}-\frac{1}{2}\Big{]} \beta^{4}+{\cal O}(\beta^{6})\,,\hskip 28.452756pt(\xi=\frac{1}{6})\,. \tag{42}\] If we further assume that \(\beta=0\) so the theory is classically conformal (with \(m=0\) and \(\xi=\frac{1}{6}\)), then the above expression yields \(\langle\rho\rangle_{\rm reg}=0\). Similarly, for \(\langle T\rangle_{\rm reg}\) we can use the anomalous relation (34) to obtain \[\langle T\rangle_{\rm reg} = -4\langle\rho\rangle_{\rm reg}+{\cal A} \tag{43}\] \[= (1-2\Delta){\cal A}-\frac{H^{4}\beta^{2}}{32\pi^{2}}(2-8\xi-3 \beta^{2})\,.\] For the particular case of \(\xi=\beta=0\) we obtain \[\langle T\rangle_{\rm reg}=-\frac{3H^{4}}{8\pi^{2}}\,,\hskip 28.452756pt(\xi= \beta=0)\,. \tag{44}\] Curiously we see the trace anomaly in which \(\langle T\rangle_{\rm reg}\propto H^{4}\propto R^{2}\neq 0\). This is the hallmark of quantum field theory in a curved spacetime [49]. For small value of \(\xi\) with \(\xi\ll\beta^{2}\), we obtain \[\langle T\rangle_{\rm reg}\simeq\frac{-3H^{4}}{8\pi^{2}}+\frac{9\xi H^{4}}{2 \pi^{2}\beta^{2}}+\frac{H^{4}\beta^{2}}{8\pi^{2}}\Big{[}\ln\Big{(}\frac{H^{2}} {4\pi\mu^{2}}\Big{)}+\frac{23}{6}\Big{]}-\frac{H^{4}\beta^{4}}{16\pi^{2}}\Big{[} \ln\Big{(}\frac{H^{2}}{4\pi\mu^{2}}\Big{)}-\frac{2}{27}\Big{]}\,,\quad(\xi\ll \beta^{2}).\] On the other hand, for the particular case \(\xi\to\frac{1}{6}\) we obtain \[\langle T\rangle_{\rm reg}=\frac{H^{4}\beta^{2}}{24\pi^{2}}-\frac{3(\xi- \frac{1}{6})}{4\pi^{2}}H^{4}\beta^{2}\Big{[}\ln\Big{(}\frac{H^{2}}{4\pi\mu^{2} }\Big{)}+\frac{7}{8}\Big{]}+\mathcal{O}(\big{(}\xi-\frac{1}{6}\big{)}^{2}, \beta^{4})\,,\qquad(\xi\to\frac{1}{6})\,. \tag{45}\] If we further assume \(\beta=0\) so the theory is classically conformal invariant (with \(m=0\) and \(\xi=\frac{1}{6}\)), then \(\langle T\rangle_{\rm reg}=0\). 
This shows that there is no trace anomaly in the quantum level when the theory is classically conformal invariant. This is in contrast with the result of [49] who obtained \(\langle T\rangle_{\rm reg}\propto R^{2}\propto H^{4}\) when \(\xi=\frac{1}{6}\) and \(\beta=0\). ### Heavy Field with Imaginary \(\nu\) For the heavy field with \(\beta\gg 1\), the index \(\nu\) in Eq. (39) becomes pure imaginary. All our results such as Eq. (40) are formally valid with the understanding that \(\nu\equiv i\nu_{0}\) with \[\nu_{0}\equiv\frac{1}{2}\sqrt{4\beta^{2}+48\xi-9}\,\simeq\beta\,. \tag{46}\] Correspondingly, Eq. (39) yields \[\langle\rho\rangle_{\rm reg}=\frac{H^{4}\beta^{2}}{64\pi^{2}}\Big{\{}(\beta^{2 }+12\xi-2)\Big{[}\ln\big{(}\frac{H^{2}}{4\pi\mu^{2}}\big{)}+2\Psi(i\nu_{0}+ \frac{1}{2})-i\pi\tanh(\nu_{0}\pi)\Big{]}+1-4\xi-\frac{3\beta^{2}}{2}\Big{\}} \tag{47}\] In the limit \(\nu_{0}\gg 1\), we have \[2\Psi(i\nu_{0}+\frac{1}{2})-i\pi\tanh(\nu_{0}\pi)\to 2\ln(\nu_{0})+\mathcal{O}( \nu_{0}^{-2})\,. \tag{48}\] Plugging this relation into Eq. (47), assuming that \(\beta\gg\xi\) and shifting the mass scale \(\mu\) by a constant value, we obtain \[\langle\rho\rangle_{\rm reg}=\frac{H^{4}\beta^{4}}{64\pi^{2}}\ln\big{(}\frac{ \nu_{0}^{2}H^{2}}{4\pi\mu^{2}}\big{)}+\mathcal{O}(\beta^{2}H^{4})\,. \tag{49}\] Now noting that \(\nu_{0}\simeq\beta=\frac{m}{H}\), we obtain \[\langle\rho\rangle_{\rm reg}=\frac{m^{4}}{64\pi^{2}}\ln\big{(}\frac{m^{2}}{4 \pi\mu^{2}}\big{)}+\mathcal{O}(m^{2}H^{2})\,. \tag{50}\] The above result agrees with the vacuum zero point energy density in flat background [64; 65; 66; 67; 68; 69]. This result is also obtained in the black hole background [70] when the Compton wavelength of the field is much smaller than the Schwarzschild radius of the black hole. As argued in [64] and [65] one expects that \(\langle\rho_{v}\rangle\) for a heavy field in a curved background agrees with the corresponding result in a flat background. The reason is that the energy density is a local property of the spacetime. Since the Lorentz invariance is a local symmetry in GR, then the equivalence principle requires that \(\langle\rho_{v}\rangle\) for a heavy field in a curved background, with the Compton wavelength much smaller than the curvature radius of the spacetime, agrees with \(\langle\rho_{v}\rangle\) in a flat background. Nonetheless, it is an interesting exercise to demonstrate this physical expectation explicitly as we showed above. Since we work in the test field limit, we have to make sure that the induced vacuum energy density from quantum fluctuations does not affect the background geometry. For this to be the case, we require \(\langle\rho\rangle_{\rm reg}\ll 3M_{P}^{2}H^{2}\) in which \(M_{P}\) is the reduced Planck mass. Correspondingly, this absence of the backreaction imposes the following upper bound on the mass of the quantum field \[\beta<\sqrt{\frac{M_{P}}{H}}\,. \tag{51}\] This is an interesting bound. For example, suppose the background dS represents an inflationary universe. This is a good approximation as during inflation the background is very nearly like a dS spacetime. Upper bound on the amplitude of primordial tensor perturbations from the Planck observation [58; 59] requires that \(H\lesssim 10^{-6}M_{P}\). This imposes the bound \(\beta<10^{3}\) in order for our heavy field to remain a test field during inflation. Superheavy field with \(\beta\) much larger than the bound given in Eq. 
(51) would modify the background geometry and one has to solve the mode function with these corrections included.

## IV Density contrast and skewness

In the previous analysis we have calculated the average physical quantities such as \(\langle\rho\rangle\). However, as the quantum field is fluctuating, there are fluctuations in \(\rho\) as well. In this section we calculate the variance in the energy density and pressure and their contrasts, i.e. \(\frac{\delta\rho}{\langle\rho\rangle}\) and \(\frac{\delta P}{\langle P\rangle}\). As the analysis is complicated, we restrict ourselves to the special case \(\xi=0\) but for an arbitrary value of \(\beta\). In addition, we also calculate the skewness which is a measure of the non-Gaussian distribution of the energy momentum tensor field.

To simplify the notation, let us absorb the parameter \(\beta\) into \(\rho_{3}\) by defining

\[\tilde{\rho}_{3}\equiv\beta^{2}\rho_{3}=\frac{m^{2}}{2}\Phi^{2}\,. \tag{52}\]

Then setting \(\xi=0\) the energy density \(\rho\) is simply given by

\[\rho=\rho_{1}+\rho_{2}+\tilde{\rho}_{3}\,, \tag{53}\]

while Eqs. (29) and (31) yield the following relations among \(\langle\rho_{i}\rangle\),

\[\langle\rho\rangle=2\langle\rho_{1}\rangle=\frac{-2}{D-1}\langle\rho_{2}\rangle=\frac{2}{D}\langle\tilde{\rho}_{3}\rangle\,. \tag{54}\]

As explained previously, it is important that we do not set \(D=4\) at this stage. We set \(D=4\) only at the end of dimensional regularization where the divergent term and the leading finite terms are extracted from the analysis.

### Density contrast

We are interested in the variance \(\delta\rho^{2}\equiv\langle\rho^{2}\rangle-\langle\rho\rangle^{2}\) which is given by

\[\begin{split}\delta\rho^{2}&=\left\langle\rho_{1}^{2}\right\rangle+\left\langle\rho_{2}^{2}\right\rangle+\left\langle\rho_{3}^{2}\right\rangle+\left\langle\rho_{1}\rho_{2}\right\rangle+\left\langle\rho_{2}\rho_{1}\right\rangle+\left\langle\rho_{1}\rho_{3}\right\rangle+\left\langle\rho_{3}\rho_{1}\right\rangle+\left\langle\rho_{2}\rho_{3}\right\rangle+\left\langle\rho_{3}\rho_{2}\right\rangle\\ &-\left(\left\langle\rho_{1}\right\rangle^{2}+\left\langle\rho_{2}\right\rangle^{2}+\left\langle\rho_{3}\right\rangle^{2}\right)-2\left\langle\rho_{1}\right\rangle\left\langle\rho_{2}\right\rangle-2\left\langle\rho_{1}\right\rangle\left\langle\rho_{3}\right\rangle-2\left\langle\rho_{2}\right\rangle\left\langle\rho_{3}\right\rangle.\end{split} \tag{55}\]

To proceed further we need to calculate \(\langle\rho_{i}\rho_{j}\rangle\). Performing various contractions (see Appendix A for further details), one can show that

\[\langle\rho_{1}^{2}\rangle=3\langle\rho_{1}\rangle^{2}\,,\qquad\langle\tilde{\rho}_{3}^{2}\rangle=3\langle\tilde{\rho}_{3}\rangle^{2}\,. \tag{56}\]

The above results are understandable since \(\Phi\) is a Gaussian field while \(\rho_{1}^{2}\) and \(\tilde{\rho}_{3}^{2}\) are quartic in \(\Phi\), so the Wick contractions yield the factor 3 above. On the other hand, the expectation value \(\langle\rho_{2}^{2}\rangle\) is somewhat non-trivial as we have the integration of the components of the momentum in \(D-1\)-dimensional space. Performing the appropriate contractions and the integrations over the momentum, we obtain (see Appendix A for further details)

\[\langle\rho_{2}^{2}\rangle=\big{(}1+\frac{2}{D-1}\big{)}\langle\rho_{2}\rangle^{2}\,.
\tag{57}\] From the above relations for \(\langle\rho_{i}^{2}\rangle\) we obtain \[\delta\rho_{1}^{2}=2\langle\rho_{1}\rangle^{2}\,,\qquad\delta\rho_{2}^{2}=\frac{2}{D-1}\langle\rho_{2}\rangle^{2}\,,\qquad\delta\tilde{\rho}_{3}^{2}=2\langle\tilde{\rho}_{3}\rangle^{2}\,, \tag{58}\] which will be useful later on. On the other hand, one can check that the cross terms \(\langle\rho_{i}\rho_{j}\rangle\) with \(i\neq j\) factorize: \[\langle\rho_{1}\rho_{2}\rangle=\langle\rho_{1}\rangle\langle\rho_{2}\rangle\,,\qquad\langle\rho_{1}\tilde{\rho}_{3}\rangle=\langle\rho_{1}\rangle\langle\tilde{\rho}_{3}\rangle\,,\qquad\langle\rho_{2}\tilde{\rho}_{3}\rangle=\langle\rho_{2}\rangle\langle\tilde{\rho}_{3}\rangle\,. \tag{59}\] Correspondingly, the variance \(\delta\rho^{2}\) is obtained to be the sum of the variances associated with the individual contributions: \[\delta\rho^{2} = \delta\rho_{1}^{2}+\delta\rho_{2}^{2}+\delta\tilde{\rho}_{3}^{2} \tag{60}\] \[= 2\big{(}\langle\rho_{1}\rangle^{2}+\langle\tilde{\rho}_{3}\rangle^{2}\big{)}+\frac{2}{D-1}\langle\rho_{2}\rangle^{2}\] \[= \Big{[}2\big{(}\frac{1}{4}+\frac{D^{2}}{4}\big{)}+\frac{2}{D-1}\frac{(D-1)^{2}}{4}\Big{]}\langle\rho\rangle^{2}\,,\] yielding \[\delta\rho^{2}=\frac{D(D+1)}{2}\langle\rho\rangle^{2}\,. \tag{61}\] Correspondingly, the density contrast is obtained to be \[\frac{\delta\rho}{\langle\rho\rangle}=\pm\sqrt{\frac{D(D+1)}{2}}\,, \tag{62}\] in which the plus sign corresponds to an overdense region while the minus sign represents an underdense region under quantum fluctuations. The regularized density contrast, after setting \(D=4-\epsilon\) and taking \(\epsilon\to 0\), is given by \[\Big{(}\frac{\delta\rho}{\langle\rho\rangle}\Big{)}_{\rm reg}=\pm\sqrt{10}\,. \tag{63}\] This agrees exactly with the results in [65] and [70] obtained for heavy fields in flat as well as in black-hole backgrounds. However, from the above analysis we see that the result (63) is general and is independent of the mass of the field. The fact that the density contrast is independent of the mass and only depends on the dimensionality of the spacetime (as given in Eq. (62)) is an intriguing result. As argued in [65], the fact that \(\delta\rho_{v}\sim\langle\rho_{v}\rangle\) indicates that the distribution of the vacuum zero point energy is non-linear and non-perturbative, leading to an inhomogeneous and anisotropic background on small scales; see also [71; 72; 73; 74] for a similar interpretation. Having calculated the density contrast, it is also instructive to calculate the contrast in pressure \(\frac{\delta P}{\langle P\rangle}\). We have seen that \(\langle P\rangle=-\langle\rho\rangle\) so one may naively expect that the formula (62) should hold for the pressure contrast as well. However, there is a subtlety here, yielding a different result. Decomposing the three components of \(P=P_{1}+P_{2}+P_{3}\) as \[P_{1}\equiv\frac{(\partial_{t}\Phi)^{2}}{2}\,,\qquad P_{2}\equiv\frac{3-D}{2(D-1)}g^{ij}\partial_{i}\Phi\partial_{j}\Phi\,,\qquad\quad P_{3}\equiv-\frac{m_{\Phi}^{2}}{2}\Phi^{2}\,, \tag{64}\] and comparing with Eq. (18), we see that \[P_{1}=\rho_{1}\,,\qquad P_{2}=\frac{3-D}{D-1}\rho_{2}\,,\qquad P_{3}=-\tilde{\rho}_{3}\,. \tag{65}\] Indeed, the change in \(P_{2}\) compared to the corresponding value for \(\rho_{2}\) yields a different result for the pressure contrast.
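The algebra leading from Eqs. (54) and (58) to the contrast of Eqs. (61)-(63) is short enough to check symbolically. The following sketch is ours and not part of the paper; it assumes the sympy package is available, and simply combines the quoted relations to recover the factor \(D(D+1)/2\) and the value \(\sqrt{10}\) at \(D=4\).

```python
# Sketch (not from the paper): combine Eq. (54) and Eq. (58) to recover Eqs. (61)-(63).
import sympy as sp

D, rho = sp.symbols('D rho', positive=True)

# Eq. (54): <rho_1> = rho/2, <rho_2> = -(D-1) rho/2, <rho_3~> = D rho/2
r1, r2, r3 = rho/2, -(D - 1)*rho/2, D*rho/2

# Eq. (58): individual variances, then sum them as in Eq. (60)
var = 2*r1**2 + 2*r2**2/(D - 1) + 2*r3**2

print(sp.factor(sp.simplify(var/rho**2)))                 # D*(D + 1)/2, i.e. Eq. (61)
print(sp.sqrt(sp.simplify((var/rho**2).subs(D, 4))))      # sqrt(10), i.e. Eq. (63)
```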
Following the same steps as above, we obtain \[\delta P^{2} = \delta P_{1}^{2}+\delta P_{2}^{2}+\delta P_{3}^{2} \tag{66}\] \[= 2\big{(}\langle P_{1}\rangle^{2}+\langle P_{3}\rangle^{2}\big{)}+\frac{2}{D-1}\langle P_{2}\rangle^{2}\] \[= \frac{D^{3}-5D+8}{2(D-1)}\langle P\rangle^{2}\,.\] Correspondingly, the pressure contrast is \[\frac{\delta P}{\langle P\rangle}=\pm\sqrt{\frac{D^{3}-5D+8}{2(D-1)}}\,. \tag{67}\] It is instructive to calculate \(\frac{\delta P}{\delta\rho}\). Combining Eqs. (67) and (62), and noting that \(\langle P\rangle=-\langle\rho\rangle\), we obtain \[\frac{\delta P}{\delta\rho}=\pm\sqrt{\frac{D^{3}-5D+8}{D(D^{2}-1)}}\,. \tag{68}\] After setting \(D=4-\epsilon\) with \(\epsilon\to 0\), we obtain \[\Big{(}\frac{\delta P}{\delta\rho}\Big{)}_{\rm reg}=\pm\sqrt{\frac{13}{15}}\,. \tag{69}\] To find a physical interpretation for the above ratio, let us treat the vacuum zero point energy as a cosmic fluid with the equation of state \(w\equiv\frac{P}{\rho}\). At the background level, from Eq. (36) we have \(w=-1\), as expected from a vacuum zero point energy. On the other hand, at the perturbation level Eq. (69) suggests that the effective equation of state is \(w=-\sqrt{\frac{13}{15}}\), so the repulsive strength of the dark energy is slightly reduced. This may be interpreted as a consequence of the quantum nature of the zero point fluctuations, unlike the usual picture in which the vacuum zero point energy (i.e. the cosmological constant) is uniformly distributed in the fabric of spacetime with a uniform equation of state \(w=-1\). Having said this, we note that the above discussion of the effective equation of state of the vacuum zero point energy is only qualitative, and care must be taken about its cosmological implications. While the above analysis indicates that the density contrast of the vacuum zero point energy is large, one should also look at the correlation length \(L\) of these perturbations. The correlation length of zero point fluctuations was studied in [65], where it was found to be of order \(L\sim m^{-1}\). Therefore, for a heavy field with \(m\gg H\), the correlation length is deep inside the horizon. On the other hand, for a light field with \(m\lesssim H\), the correlation length is comparable to or larger than the Hubble radius. The fact that we have a large density contrast with long-wavelength perturbations (for a light field) can have interesting cosmological implications. ### Measure of Skewness and non-Gaussianity As the distribution of the energy momentum tensor field can be asymmetric and non-Gaussian, it is instructive to calculate the skewness associated with vacuum zero point fluctuations, measured by \(\delta\rho^{3}\equiv\langle\rho^{3}\rangle-\langle\rho\rangle^{3}\). A large value of \(\delta\rho^{3}\) indicates that the system is highly non-Gaussian. Since the source of the energy is quantum fluctuations, it would not be surprising if the system were highly non-Gaussian. Following an approach similar to the one employed for calculating the variances, one can calculate \(\delta\rho^{3}\). To do this, one can check that (see Appendix A for further details) \[\langle\rho_{i}^{2}\rho_{j}\rangle=\langle\rho_{i}^{2}\rangle\langle\rho_{j}\rangle\qquad i\neq j\,, \tag{70}\] and \[\langle\rho_{i}\rho_{j}\rho_{k}\rangle=\langle\rho_{i}\rangle\langle\rho_{j}\rangle\langle\rho_{k}\rangle\qquad i\neq j\neq k\,.
\tag{71}\] The above relations simplify \(\delta\rho^{3}\) greatly, yielding \[\delta\rho^{3}=\delta\rho_{1}^{3}+\delta\rho_{2}^{3}+\delta\tilde {\rho}_{3}^{3}+3\langle\rho_{1}\rangle\big{(}\delta\rho_{2}^{2}+\delta\tilde {\rho}_{3}^{2}\big{)}+3\langle\rho_{2}\rangle\big{(}\delta\rho_{1}^{2}+ \delta\tilde{\rho}_{3}^{2}\big{)}+3\langle\tilde{\rho}_{3}\rangle\big{(} \delta\rho_{1}^{2}+\delta\rho_{2}^{2}\big{)}\,, \tag{72}\] with \(\delta\rho_{i}^{2}\) given in Eq. (58). Now our job is to calculate \(\langle\rho_{i}^{3}\rangle\) and then \(\delta\rho_{i}^{3}\). Performing various contractions, one can show that the following relations hold (see Appendix A for further details): \[\big{\langle}\rho_{1}^{3}\big{\rangle}=15\langle\rho_{1}\rangle ^{3},\qquad\big{\langle}\tilde{\rho}_{3}^{3}\big{\rangle}=15\langle\tilde{ \rho}_{3}\rangle^{3}\,, \tag{73}\] while \[\big{\langle}\rho_{2}^{3}\big{\rangle}=\langle\rho_{2}\rangle^{3 }\Big{[}1+\frac{6}{D-1}+\frac{8}{(D-1)^{2}}\Big{]}\,, \tag{74}\] yielding \[\delta\rho_{1}^{3}=14\langle\rho_{1}\rangle^{3}\,,\qquad\delta \rho_{2}^{3}=\Big{(}\frac{6}{D-1}+\frac{8}{(D-1)^{2}}\Big{)}\langle\rho_{2} \rangle^{3}\,,\qquad\delta\tilde{\rho}_{3}^{3}=14\langle\tilde{\rho}_{3} \rangle^{3}\,. \tag{75}\] Plugging the above values of \(\delta\rho_{i}^{3}\) in Eq. (72), using Eq. (58) for \(\delta\rho_{i}^{2}\) and Eq. (54) expressing \(\langle\rho_{i}\rangle\) in terms of \(\langle\rho\rangle\) we finally obtain \[\frac{\delta\rho^{3}}{\langle\rho\rangle^{3}}=\frac{1}{2}(2D^{3} +3D^{2}+D+4)\,. \tag{76}\] Plugging \(D=4\) with \(\epsilon\to 0\) yields \[\frac{\delta\rho^{3}}{\langle\rho\rangle^{3}}=92\,. \tag{77}\] Therefore, we see that the distribution of the energy density is highly non-Gaussian. Similarly, for \(\delta P^{3}\) we obtain \[\frac{\delta P^{3}}{\langle P\rangle^{3}}=\frac{2D^{5}-D^{4}-3D^{3}+D^{2}-11D+28} {2(D-1)^{2}} \tag{78}\] Substituting \(D=4-\epsilon\) with \(\epsilon\to 0\) gives \[\frac{\delta P^{3}}{\langle P\rangle^{3}}=\frac{800}{9}\,. \tag{79}\] We see that the distribution of \(P\) is highly non-Gaussian as well. ## V Summary and discussions In this work we have studied the vacuum zero point energy of scalar field in dS background. We have allowed the scalar field to have an arbitrary mass parameterized via \(\beta=\frac{m}{H}\) and with the conformal coupling \(\xi\). To calculate the vacuum zero point energy and its fluctuations we have employed the dimensional regularization scheme in \(D=4-\epsilon\) spacetime. Performing the analysis in a general \(D\) dimension, the regularized physical quantities are read off after the subtraction of the divergent \(\frac{1}{\epsilon}\) terms via appropriate counter terms. We have calculated \(\langle\rho_{v}\rangle,\langle P_{v}\rangle\) and \(\langle T_{v}\rangle\). We have shown that \(\langle\rho\rangle_{\rm reg}=-\langle P\rangle_{\rm reg}\) as expected from the local Lorentz invariance but the classical relations \(\langle T_{v}\rangle_{\rm reg}=-4\langle P_{v}\rangle_{\rm reg}\) is anomalous under quantum corrections. The anomalous correction is given by the factor \(\mathcal{A}\) defined in Eq. (35) which vanishes only when \(\beta=0\) or \(\xi=\frac{1}{6}-\frac{\beta^{2}}{12}\). We have looked at \(\langle\rho_{v}\rangle_{\rm reg}\) and \(\langle T_{v}\rangle_{\rm reg}\) in various limits of the parameters space \((\beta,\xi)\). 
It is shown that for a massless scalar field with \(\xi=0\), the vacuum zero point energy is \(\langle\rho\rangle_{\rm reg}\propto H^{4}\propto R^{2}\) which is the hallmark of quantum fields in a curved spacetime. In addition, \(\langle T_{v}\rangle_{\rm reg}=-4\langle\rho_{v}\rangle_{\rm reg}\propto H^{4}\propto R^{2}\) so we have the usual trace anomaly when \(m=0\) and \(\xi\neq\frac{1}{6}\). On the other hand, the trace anomaly disappears in the conformal massless limit \(\beta=0,\xi=\frac{1}{6}\). We have shown that for the heavy fields with \(\beta\gg 1\), the value of \(\langle\rho_{v}\rangle_{\rm reg}\sim m^{4}\) agrees with its value in a flat background plus the subleading \(m^{2}H^{2}\) corrections. This is consistent with the physical expectation since the energy density is a local property of the spacetime and the equivalence principle requires that its value should agree with the corresponding value in a flat background up to subleading corrections. We have calculated the energy density contrast and the pressure contrast for the case when \(\xi=0\). In particular, it is shown that for both the massive and the massless cases \(\frac{\delta\rho_{v}}{\langle\rho_{v}\rangle}=\pm\sqrt{10}\) as first observed in [65] in the limit of heavy fields in a dS background. This indicates that the distribution of the vacuum zero point energy is non-linear. This can have interesting implications for the cosmological constant problem, indicating a very inhomogeneous and anisotropic background on small scales. In addition, we have shown that \(\frac{\delta P^{2}}{\delta\rho^{2}}\neq 1\), indicating a complicated picture for the effective equation of state associated with the vacuum zero point energy as a cosmic fluid. Since the statistical distribution of the vacuum zero point energy can be asymmetric and non-Gaussian we have calculated \(\delta\rho^{3}\) as a measure of skewness and non-Gaussianity. It is shown that \(\delta\rho_{v}^{3}\sim\langle\rho_{v}\rangle^{3}\) so the distribution of the vacuum zero point energy is highly non-Gaussian. To simplify the analysis we have worked in the test field limit in which the resultant vacuum zero point energy density is assumed to be much smaller than the background dS energy density. For this picture to hold, we require \(\beta<\sqrt{\frac{M_{P}}{H}}\). If the field is superheavy, then one cannot neglect the backreaction of the vacuum zero point energy on the background geometry. In addition, to solve the mode function analytically, we have neglected the self-coupling of the scalar field such as the \(\lambda\Phi^{4}\) interaction. It would be interesting to extend the above analysis to the case where the field has a self-interaction and to see if the above conclusions about the vacuum energy density and its perturbations hold for an interacting field theory as well. **Acknowledgments:** We are grateful to Richard Woodard, Misao Sasaki, Xingang Chen and Mohammad Ali Gorji for useful discussions and comments on the draft. H. F. is partially supported by the "Saramadan" Federation of Iran. ## Appendix A Multiple Contractions Using Wick Theorem In this Appendix we outline the steps which are required to calculate higher expectation values such as \(\langle\rho_{i}^{2}\rangle\) and \(\langle\rho_{i}^{3}\rangle\) for the variance in section (IV.1) and the skewness in section (IV.2). While the analysis can be done by brute force in Fourier space, the results can also be obtained in real space, using the Wick theorem [75].
The main ingredient is that \(\Phi(x)\) is a Gaussian random field (i.e. a free field) so to calculate higher correlations involving \(\langle\Phi(x)^{n}\rangle\) with \(n>3\), one can use the Wick theorem to reduce them to combinations of \(\langle\Phi(x)^{2}\rangle\). Let us start with \(\rho_{1}=\frac{1}{2}\dot{\Phi}^{2}\), yielding \[\langle\rho_{1}^{2}\rangle=\frac{1}{4}\langle\dot{\Phi}(x)\dot{\Phi}(y)\dot{\Phi}(z)\dot{\Phi}(w)\rangle\,, \tag{10}\] with the understanding that at the end \(x=y=z=w\). Employing the Wick theorem for the Gaussian field \(\dot{\Phi}(x)\), we obtain \[\langle\dot{\Phi}(x)\dot{\Phi}(y)\dot{\Phi}(z)\dot{\Phi}(w)\rangle=\langle\dot{\Phi}(x)\dot{\Phi}(y)\rangle\langle\dot{\Phi}(z)\dot{\Phi}(w)\rangle+\langle\dot{\Phi}(x)\dot{\Phi}(z)\rangle\langle\dot{\Phi}(y)\dot{\Phi}(w)\rangle+\langle\dot{\Phi}(x)\dot{\Phi}(w)\rangle\langle\dot{\Phi}(y)\dot{\Phi}(z)\rangle.\] Now, setting \(x=y=z=w\), from the above three terms we simply obtain \[\langle\rho_{1}^{2}\rangle=3\langle\rho_{1}\rangle^{2}\,. \tag{10}\] In a similar manner, we also obtain \(\langle\tilde{\rho}_{3}^{2}\rangle=3\langle\tilde{\rho}_{3}\rangle^{2}\). On the other hand, the analysis for \(\langle\rho_{2}^{2}\rangle\) is somewhat non-trivial as we have spatial derivatives. More specifically, \[\langle\rho_{2}^{2}\rangle=\frac{1}{4}g^{ij}g^{kl}\langle\Phi_{,i}(x)\Phi_{,j}(x)\Phi_{,k}(y)\Phi_{,l}(y)\rangle\,. \tag{11}\] Using the Wick theorem, the result can be written as follows: \[\langle\rho_{2}^{2}\rangle = \frac{1}{4}g^{ij}g^{kl}\Big{[}\langle\Phi_{,i}(x)\Phi_{,j}(x)\rangle\langle\Phi_{,k}(y)\Phi_{,l}(y)\rangle+2\langle\Phi_{,i}(x)\Phi_{,j}(y)\rangle\langle\Phi_{,k}(x)\Phi_{,l}(y)\rangle\Big{]} \tag{12}\] \[= \langle\rho_{2}\rangle^{2}+\frac{1}{2}\langle\Phi_{,i}(x)\Phi_{,j}(x)\rangle\langle\Phi^{,i}(x)\Phi^{,j}(x)\rangle\,.\] Now, using the isotropy of the background, one can write \[\langle\Phi_{,i}(x)\Phi_{,j}(x)\rangle=c\delta_{ij}\langle(\nabla\Phi)^{2}\rangle\,. \tag{13}\] To obtain the coefficient \(c\), we contract the above expression with \(\delta^{ij}\); noting that the spatial dimension is \(D-1\), we obtain \[c=\frac{1}{D-1}\,, \tag{14}\] and correspondingly, \[\langle\Phi_{,i}(x)\Phi_{,j}(x)\rangle=\frac{\delta_{ij}}{D-1}\langle(\nabla\Phi)^{2}\rangle=\frac{2}{D-1}\langle\rho_{2}\rangle\,. \tag{15}\] Plugging the above result into Eq. (12) we obtain \[\langle\rho_{2}^{2}\rangle=\langle\rho_{2}\rangle^{2}+\frac{2}{(D-1)}\langle\rho_{2}\rangle^{2}\,. \tag{16}\] For higher orders, at coincident points the Wick theorem gives in general \[\left\langle\Phi(x)^{2n}\right\rangle=(2n-1)(2n-3)\cdots 3\cdot 1\,\left\langle\Phi(x)^{2}\right\rangle^{n}\,. \tag{17}\] In particular, to calculate \(\langle\rho_{1}^{3}\rangle\) and \(\langle\tilde{\rho}_{3}^{3}\rangle\) with \(2n=6\), the symmetry factor is \(5\times 3=15\), yielding Eq. (73). Now to calculate \(\langle\rho_{2}^{3}\rangle\), we note that \[\left\langle\left(\nabla\Phi\right)^{6}\right\rangle=\left\langle\Phi_{,i}\Phi^{,i}\,\Phi_{,j}\Phi^{,j}\,\Phi_{,k}\Phi^{,k}\right\rangle. \tag{104}\] There are three types of contractions. The first type is that only identical indices contract with each other. There is only one way for this type of contraction, yielding simply the result \(\langle\rho_{2}\rangle^{3}\). The second type is that one pair of different indices contracts across factors while the remaining pair of identical indices contracts with itself, like \((ii),(jk),(jk)\). There are 6 different ways to do this.
Then using the identity (103) this yields the total contribution \(\frac{6}{D-1}\langle\rho_{2}\rangle^{3}\). The last type is to contract all different indices like \((ij),(jk),(ki)\). There are 8 possible ways to do it, and using the identity (103), this yields the total contribution \(\frac{8}{(D-1)^{2}}\langle\rho_{2}\rangle^{3}\). Combining all, we obtain \[\left\langle\rho_{2}^{3}\right\rangle=\langle\rho_{2}\rangle^{3}\Big{[}1+ \frac{6}{D-1}+\frac{8}{(D-1)^{2}}\Big{]}\,, \tag{105}\] as reported in Eq. (74). The last thing to show is that for \(i\neq j\), \(\langle\rho_{i}\rho_{j}\rangle=\langle\rho_{i}\rangle\langle\rho_{j}\rangle\) and \(\langle\rho_{i}^{2}\rho_{j}\rangle=\langle\rho_{i}^{2}\rangle\langle\rho_{j}\rangle\) while \(\langle\rho_{i}\rho_{j}\rho_{k}\rangle=\langle\rho_{i}\rangle\langle\rho_{j} \rangle\langle\rho_{k}\rangle\) for \(i\neq j\neq k\). To verify these identities, we note that \[\langle\dot{\Phi}(x)\Phi(x)\rangle=\frac{1}{2}\frac{d}{dt}\langle\Phi(x)^{2} \rangle=0. \tag{106}\] The final equality holds because, as we have seen in the main text, \(\langle\Phi(x)^{2}\rangle\propto\langle\rho_{3}\rangle\) is a constant. This is understandable since the dS spacetime is a maximally symmetric space so \(\langle\Phi(x)^{2}\rangle\) is independent of \(x^{\mu}\). Similarly, one has \[\langle\partial_{i}\Phi(x)\Phi(x)\rangle=\frac{1}{2}\partial_{i}\langle\Phi(x )^{2}\rangle=0\,. \tag{107}\] Equipped with the above two identities and using the fact that the background is isotropic one can show by direct examinations that for \(i\neq j\), \(\langle\rho_{i}\rho_{j}\rangle=\langle\rho_{i}\rangle\langle\rho_{j}\rangle\) and \(\langle\rho_{i}^{2}\rho_{j}\rangle=\langle\rho_{i}^{2}\rangle\langle\rho_{j}\rangle\) while for \(i\neq j\neq k\), \(\langle\rho_{i}\rho_{j}\rho_{k}\rangle=\langle\rho_{i}\rangle\langle\rho_{j} \rangle\langle\rho_{k}\rangle\).
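The combinatorial factors used throughout this Appendix (3 and 15 for the scalar moments, and \(1+2/(D-1)\) and \(1+6/(D-1)+8/(D-1)^{2}\) for the gradient terms) can also be confirmed numerically. The short Monte Carlo below is our own illustrative sketch, not part of the paper: it draws Gaussian samples standing in for \(\dot{\Phi}\) and for the \(D-1\) components of \(\nabla\Phi\) (here \(D=4\)) and compares the moment ratios with the analytic values.

```python
# Monte Carlo sketch (not from the paper): check the Wick-contraction factors
# used in Appendix A, for D = 4, i.e. n = D - 1 = 3 spatial dimensions.
import numpy as np

rng = np.random.default_rng(0)
N, n = 1_000_000, 3

# Scalar Gaussian value (stands in for Phi_dot, or Phi, at a single point):
x = rng.normal(size=N)
print(np.mean(x**4) / np.mean(x**2)**2)   # ~ 3,  cf. <rho_1^2> = 3 <rho_1>^2
print(np.mean(x**6) / np.mean(x**2)**3)   # ~ 15, cf. <rho_1^3> = 15 <rho_1>^3

# Isotropic Gaussian gradient with n independent components (stands in for grad Phi):
g2 = np.sum(rng.normal(size=(N, n))**2, axis=1)
print(np.mean(g2**2) / np.mean(g2)**2, 1 + 2/n)              # cf. Eq. (57)
print(np.mean(g2**3) / np.mean(g2)**3, 1 + 6/n + 8/n**2)     # cf. Eq. (74)

# With these factors, the skewness of Eq. (76) evaluates at D = 4 to
D = 4
print((2*D**3 + 3*D**2 + D + 4) / 2)      # 92.0, cf. Eq. (77)
```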
2308.01234
Eccentric Gas Disk Orbiting the White Dwarf SDSS J1228+1040
Metal pollution onto white dwarfs is a wide-spread phenomenon that remains puzzling. Some of these white dwarfs also harbour gaseous debris disks. Emission lines from these disks open a unique window to the physical properties of the polluting material, lending insights to their origin. We model the emission line kinematics for the gas disk around SDSS J1228+1040, a system that has been monitored for over two decades. We show that the disk mass is strongly peaked at 1 solar radius (modulo the unknown inclination), and the disk eccentricity decreases from a value of 0.44 at the inner edge, to nearly zero at the outer edge. This eccentricity profile is exactly what one expects if the disk is in a global eccentric mode, precessing rigidly under general relativity and gas pressure. The precession period is about two decades. We infer that the mass of the gas disk is roughly equivalent to that of a 50-km rocky body, while the mass of the accompanying dust disk is likely insignificant. The disk eccentricity confirms an origin in tidal disruption, while the short disk diffusion time suggests that the disruption event happened a few centuries ago. Moreover, we argue that the initial orbit for the disrupted body, and that of its putative planetary perturber, fall within an AU around the white dwarf. The total mass of the source population is likely orders of magnitude more massive than our own Asteroid belt, and does not seem to exist around main-sequence stars.
Olcay Ates Goksu, Taylor Kutra, Yanqin Wu
2023-08-02T15:42:02Z
http://arxiv.org/abs/2308.01234v2
# Eccentric Gas Disk Orbiting the White Dwarf SDSS J1228+1040 ###### Abstract Metal pollution onto white dwarfs is a widespread phenomenon that remains puzzling. Some of these white dwarfs also harbour gaseous debris disks. Their emission lines open a unique window to the physical properties of the polluting material, lending insights into their origin. Here, we model the emission line kinematics for the gas disk around SDSS J1228+1040, a system that has been continuously monitored for over two decades. Our model shows that the disk mass is strongly peaked at one solar radius (modulo the unknown inclination), and the disk eccentricity decreases from a value of 0.44 at the disk inner edge, to nearly zero at the outer edge. This eccentricity profile is exactly what one expects if the disk is in a global eccentric mode, precessing rigidly under the combined forces of general relativity and gas pressure, and with a period of 20 yrs. The gas disk contains a mass that is roughly equivalent to that of a 100-km rocky body, while the mass of the accompanying dust disk is likely insignificant. The disk eccentricity confirms an origin in tidal disruption, and we suggest that the disrupted body is sourced from a Mars-mass planetesimal disk within a few AU. More detailed analysis of this disk is warranted. Circumstellar Debris Disks -- Near white dwarf environment -- SDSS J1228+1040 -- Ca II triplet emission profile Ates Goksu, Taylor Kutra, Yanqin Wu ## 1 Introduction About a third of all white dwarfs show signs of ongoing or recent accretion of heavy metals (e.g., Zuckerman et al., 2003). And starting from the first example of white dwarf G29-38 (Koester, 1987; Graham et al., 1990; Jura, 2003), many are now known to also exhibit infrared excesses, signs of circumstellar dust disks (e.g. Kilic et al., 2005, 2006; Farihi et al., 2009; Debes et al., 2011, 2012). These disks are likely metallic in composition and are responsible for the pollution. It is now commonly believed that large asteroids (and/or comets) around these stars are, for some reason, excited to high eccentricities and are tidally disrupted when they approach the white dwarfs within the Roche radius. The resulting debris forms the dust disk. However, many key elements in this story, including the source and the orbital excitation for these bodies, remain mysterious (for a review, see Farihi, 2016). Interestingly, a few percent of these dusty white dwarfs are also known to possess gaseous debris disks (Manser et al., 2020). First discovered from SDSS data by Gansicke et al. (2006) around the white dwarf SDSS J122859.93+104032.9 (short-named as J1228 below), about two dozen such disks are now known (Gansicke et al., 2006, 2007, 2008; Farihi et al., 2012; Gentile Fusillo et al., 2021). These disks manifest as double-peaked emission lines in the spectra, most conspicuously in Ca II infrared triplets. They are found exclusively around white dwarfs hotter than \(\sim 13,000\) K, likely because only such stars are bright enough to sublimate dust at a distance of \(\sim 1R_{\odot}\). As these hot stars constitute only a few percent of all white dwarfs, it is likely that most hot white dwarfs with dust disks also possess gas disks. Unlike dust disks which reveal little about their kinematics, compositions, or density distributions (e.g., see a discussion in Nixon et al., 2020), gaseous disks open a lucky window.
The characteristic double-peaked emission lines from these disks contain information about their radial extent, eccentricity, surface density and temperature profiles. Interestingly, many gaseous disks around white dwarfs exhibit asymmetric lines, most easily interpreted as the signature of an eccentric disk. Moreover, these disks also appear to be time variable (e.g., Wilson et al., 2014, 2015). For instance, J1228, the best monitored system (Manser et al., 2016, 2019), shows gradual variations of its line profile over a timescale of decades. This is much longer than the local Keplerian timescale, which is of order hours. This kinematic information offers the hope of understanding the origin of these debris disks. Here, we undertake a study of the J1228 gaseous disk, with an aim to answer the following specific questions. First, how does the disk manage to retain its eccentric shape? This disk is known to extend radially of order unity. If so, general relativistic effects precess the gas differentially, and would have led to its total circularization within a few decades. In fact, such a consideration motivates Hartmann et al. (2011); Metzger et al. (2012); Cauley et al. (2018); Fortin-Archambault et al. (2020) to propose that the observed line profiles are not due to an eccentric disk, but are instead due to non-axisymmetric brightness patterns (a vortex, or a spiral wave) on a circular disk. We resolve this issue by first fitting a physically motivated disk model (SS2) against detailed observations (SS3), and then considering what resists the differential precession (SS4). Second, we hope to shed some light on the origin of such a disk. The gas density distribution and other physical quantities, extracted from our physical model, are analyzed towards this end (SS5). ## 2 Physical Model We will construct a Keplerian disk model to reproduce the observed double-peaked line profiles in the Ca II infrared triplet. In such an exercise, as one only measures the line-of-sight velocity, and is ignorant of the orbital period, one can only determine the length combination, \(a/\sin^{2}i\), where \(i\) is the orbital inclination. This differs from the usual radial velocity literature where one also knows the orbital period and can determine \(a\sin i\). We suppress the factor \(1/\sin^{2}i\) in this section, but re-introduce it in later discussions. Furthermore, in order to translate line flux into gas density, we need to consider the physics of emission. The Ca II triplet are most likely recombination lines, namely, spontaneous emission from excited levels of Ca II ions after their photo-ionization and recombination. The line fluxes should, therefore, scale linearly with the rate of recombination, which, at equilibrium, equals the rate of photo-ionization. So the emissivity \(\propto n_{\gamma}\,n_{\rm CaII}\), where \(n_{\gamma}\) is the number density of ionizing photons from the white dwarf, and \(n_{\rm CaII}\) that of Ca II. The above scaling remains valid even when the disk is very optically thick to the recombination lines (as is the case for our disk). This consideration allows us to determine the local gas density, under some simplifying assumptions. In particular, we will assume that \(n_{\gamma}\) depends on the radial distance from the white dwarf as \(n_{\gamma}\propto r^{-q}\), with \(q\) being a free parameter. We expect \(q=2\) when the disk is optically thin to the ionizing photons. 
We will also assume that the disk has a negligible vertical extent and is not being viewed nearly edge-on. These allow us to determine the local column density from the height-integrated emission. Lastly, we assume that Ca II perfectly traces the local gas density. We model the gas disk around J1228 as an assembly of 20 tightly packed, con-focal elliptical Keplerian rings. Their semi-major axes are evenly distributed between \(a_{\rm in}\) and \(a_{\rm out}\). The gas surface density profile is assumed to be a broken power-law with a transition radius \(a_{\rm break}\): \[\Sigma\propto\begin{cases}a^{p_{1}}\,,&a_{\rm in}\leq a<a_{\rm break}\,;\\ a^{p_{2}}\,,&a_{\rm break}\leq a\leq a_{\rm out}\,.\end{cases} \tag{1}\] Such a broken profile is motivated by the observed line profiles, which also suggest that \(p_{1}>0\) and \(p_{2}<0\). As the simplest approximation, we assume that the ring eccentricities vary linearly with the semi-major axis as \[e(a)=e_{\rm in}+\nabla e\times(a-a_{\rm in})\,. \tag{2}\] We do not specify the sign of the eccentricity gradient (\(\nabla e\)). The rings are assumed to remain apse-aligned at all times. This requires the disk to precess rigidly, a working assumption we justify in SS4. The line emissivity \(\epsilon\), from a ring segment of length \(d\ell\), is therefore \[\epsilon={\rm const}\times\,r^{-q}\,\times\Sigma\,\times\left(\frac{\frac{1}{v }d\ell}{\oint\frac{1}{v}d\ell}\right)\,, \tag{3}\] with \(v\) being the Keplerian velocity of the segment. The first factor (\(r^{-q}\)) describes the radial dependence for the ionizing flux, and the last factor describes the fractional mass within the line segment. This scales inversely with the local Keplerian velocity as mass is conserved along a Keplerian streamline. This behaviour is behind the so-called 'apocentre glow' in debris disks(Pan et al., 2016; MacGregor et al., 2017). The overall normalization constant is discussed in SS5, where we show that our model disk correctly accounts for the observed flux. To assign a Doppler velocity to the above line segment, we introduce a phase angle \(\phi\), where \(\phi=0\) corresponds to the case where the orbital long axis lie on the plane of the sky. The resultant line profile is symmetric at this phase, while the line is at its more asymmetric when \(\phi=\pi/2\). All our model parameters are illustrated in Fig. 1. In the following, we proceed to search for the best model parameters for the observed Ca-II line data from Manser et al. (2016). ## 3 MCMC and Results ### Markov Chain Monte-Carlo Manser et al. (2016) have gathered spectra of J1228 from March 2003 to May 2015 in a total of 18 epochs, using the VLT, the SDSS telescope and the WHT. C. Manser has kindly provided us with the data. Some subsequent observations are presented in Manser et al. (2019) but are not used for fitting. To prepare the emission profiles for analysis, we convert the Ca-II triplet data from wavelength (A) to velocity (km s\({}^{-1}\)), using the atomic rest-wavelengths, and a systemic velocity of \(+22\) km/ s. This value is within the range reported by Manser et al. (2016): \(+19\pm 4\) km s\({}^{-1}\).1 It is chosen so that the emission lines at the June 2007 epoch, which have very similar amplitudes in the blue- and red-shifted peaks, are also symmetric in the velocity space. We co-add the three lines to produce a joint line profile. 
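For readers who wish to experiment with this model, the following sketch (written by us, not the authors' code) turns Eqs. (1)-(3) into a synthetic line profile: each elliptical ring is sampled in true anomaly, each segment is weighted by \(\Sigma\,r^{-q}\,(d\ell/v)\) normalized by the orbital period, and the line-of-sight velocities are histogrammed. The parameter values follow Table 1, and \(\sin i=1\) is an illustrative assumption.

```python
# Sketch of the ring model of Eqs. (1)-(3): a synthetic double-peaked profile from
# apse-aligned eccentric Keplerian rings.  Illustrative values; not the authors' code.
import numpy as np

G, Msun, Rsun = 6.674e-8, 1.989e33, 6.957e10          # cgs
GM = G * 0.705 * Msun                                  # WD mass from Koester et al. (2014)

# Table-1-like parameters (lengths in R_sun / sin^2 i; here we simply set sin i = 1)
a_in, a_break, a_out = 0.57, 1.0, 1.7
e_in, grad_e, q, p1, p2 = 0.44, -0.42, 2.4, 1.8, -1.9

def line_profile(phi, incl=np.pi/2, nrings=20, ntheta=4000,
                 vgrid=np.linspace(-1200.0, 1200.0, 241)):
    """Emission-weighted histogram of line-of-sight velocity (km/s)."""
    hist = np.zeros(len(vgrid) - 1)
    for a in np.linspace(a_in, a_out, nrings):
        e = e_in + grad_e * (a - a_in)                                  # Eq. (2)
        Sigma = a**p1 if a < a_break else a_break**(p1 - p2) * a**p2    # Eq. (1)
        th = np.linspace(0.0, 2*np.pi, ntheta, endpoint=False)          # true anomaly
        p = a * (1 - e**2) * Rsun                                        # semi-latus rectum (cm)
        r = p / (1 + e*np.cos(th))
        vr = np.sqrt(GM/p) * e * np.sin(th)                              # radial velocity
        vt = np.sqrt(GM/p) * (1 + e*np.cos(th))                          # tangential velocity
        vx = vr*np.cos(th) - vt*np.sin(th)                               # x along pericentre
        vy = vr*np.sin(th) + vt*np.cos(th)
        # phi = 0 puts the long axis in the sky plane; project onto the line of sight
        vlos = np.sin(incl) * (vx*np.sin(phi) + vy*np.cos(phi)) / 1e5    # km/s
        dl = np.sqrt(np.gradient(r)**2 + (r*np.gradient(th))**2)         # segment length
        period = 2*np.pi*np.sqrt((a*Rsun)**3/GM)
        w = Sigma * r**(-q) * (dl/np.hypot(vr, vt)) / period             # Eq. (3) weight
        hist += np.histogram(vlos, bins=vgrid, weights=w)[0]
    return hist / hist.max()

vgrid = np.linspace(-1200.0, 1200.0, 241)
prof = line_profile(phi=np.pi/2, vgrid=vgrid)
vc = 0.5*(vgrid[:-1] + vgrid[1:])
print("strongest peak near", vc[np.argmax(prof)], "km/s")
```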
Footnote 1: One can also determine the systemic velocity from the white dwarf spectra, after accounting for a gravitational redshift of 35 km/ s for lines emitted from the surface of the white dwarf. The precession of the disk means we have the good fortune to observe it from different vantage points, each giving some unique constraints on the disk model. We decided, initially, to focus on data from three (equally spaced) epochs: June 2007, June 2011, and May 2015. We assign a phase of \(\phi=\pi/2\) to the first epoch (most symmetric), and phases of 0.31 and \(-0.92\) to the remaining two. This choice is motivated by our inferred precession period of \(\sim 20\) yrs (see below). Our initial attempt did not produce a satisfactory fit to the June 2007 data, so we proceed to include two more epochs, April 2007 and July 2007, with their corresponding \(\phi\) values, into the procedure. This serves to strengthen the model constraint around \(\phi=\pi/2\). We now determine the best fit model parameters, using the _emcee_(Foreman-Mackey et al., 2013) implementation of the Markov Chain Monte-Carlo (_mcmc_) method. This procedure requires appropriate priors. For each of our 8 parameters, we choose a flat prior over a wide range (Table 1). Our prior on the eccentricity profile warrants some comments. First, we posit that \(e\in[0,1)\) everywhere. To avoid streamline crossing within the disk, we further impose the condition (Goldreich & Tremaine, 1979) \[\left|e(a)+\frac{\mathrm{d}\,e(a)}{\mathrm{d}\ln(a)}\right|<1\,. \tag{4}\] We run _emcee_ with 100 walkers and iterate each for 4000 steps. This ensures that the auto-correlation time is a sufficiently small fraction of the total run. We then trim the first 2000 steps to minimize the effects of the initial conditions. The full results are presented in Figure 4. There is a good convergence for all parameters, though some show a slight (but unimportant) bimodal distributions in their posteriors. The maximum-likelihood parameters, and their corresponding uncertainties, are presented in Table 1. Furthermore, the resultant line profiles for the three chosen epochs are presented in Figure 2; while Fig. 3 illustrates them for a continuum of phases. Figure 1: An illustration of our disk model. We represent the disk using a discreet set of elliptical, aligned, Keplerian rings, with a surface density profile (left panel) that is composed of two broken power-laws, and an eccentricity profile (middle panel) that is linear in radius. The right panel is a birds-eye view of the disk with the colour indicating normalized emission. This figure is produced using our best fit parameters. The phase angle \(\phi\) takes the value of 0 if the disk long-axis lies on the plane of the sky. While the overall comparison is satisfactory, we note that the data show a conspicuous shortage of emission in the blue wing at \(\phi=\pi/2\) (bottom panel in Fig. 2, yellow box in Fig. 3). This is the phase where we expect symmetric emission, and indeed the blue and the red peaks do look symmetric. The deficit is only in the wing, and it persists in observations from nearby epochs (April and July 2007). The latter rules out the possibility that the choice of our \(\phi=\pi/2\) epoch is the cause of such an asymmetry. We have no explanation for this deficit. ### Disk Properties We now review the properties of our best-fit model. Gansicke et al. (2006) and Hartmann et al. 
(2016) have previously determined the inner and outer radii for the gas disk: \(a_{\rm in}\sim 0.6\ R_{\odot}\) and \(a_{\rm out}\sim 1.2\,R_{\odot}\). Our solutions are broadly consistent with their values, with \(a_{\rm in}=0.57R_{\odot}\) and \(a_{\rm out}=1.7R_{\odot}\). Bear in mind that these are larger than the physical lengths by \(1/\sin^{2}i\). Our value for the inner eccentricity (\(e_{\rm in}\)) is also consistent with that inferred by Manser et al. (2016, 2019). This value is higher than the \(e=0.021\) value in the original discovery paper (Gansicke et al., 2006), because they happened to catch the lines when they were more symmetric. Our most interesting result is the eccentricity gradient. We find a significant and negative eccentricity gradient: \(\nabla e=-0.42\pm 0.059\). Compared to the inner disk, the outer disk is substantially more circular, and is in fact consistent with being circular. This result can be intuitively understood by looking at the top panel of Fig. 2: the sharp spike in the blue wing comes about because the rings are compressed together at apoapse (both in physical space and in velocity space). To do so, the inner part of the disk has to be more eccentric. We return to this negative eccentricity gradient in §4. With our inferred surface density profile (power-law indexes \(p_{1}\sim 1.8,\,p_{2}\sim-1.9\)), the mass of the disk is strongly concentrated around \(a_{\rm break}\sim 1R_{\odot}\). We also find that \(q\sim 2.4\), slightly steeper than our expectation for an optically thin disk (\(q=2\)). It is worth commenting that, while one naively expects a degeneracy between \(q\) and \((p_{1},p_{2})\), since all of them describe radial dependencies (\(q\) on radius, while \(p\) on semi-major axis), Fig. 4 convincingly shows that the degeneracy is broken, likely by the fact that the rings are substantially eccentric.

Figure 3: Observed and modelled emission profiles as functions of the precession phase (vertical axis). The data plot is generated using epochs from April 2007 to May 2015 (Manser et al., 2016) and summing over the Ca II infrared triplet. While the overall resemblance is good, we highlight one notable exception. Near phase \(\phi=\pi/2\), the observed profile is less symmetric than the model and shows a sharper cutoff at the blue peak (yellow boxes).

Figure 2: Comparing the emission profiles between model and data, for three phases. At \(\phi=\pi/2\), we expect symmetry but the data shows a deficit of emission in the blue wing.

## 4 Rigidly Precessing Disk Here, we first argue that the disk cannot be differentially precessing, as general relativity would have it. We then show, using the tool of linear eigen-mode calculations, that it has the appropriate gas pressure to resist differential GR precession. In fact, both the observed precession period and the eccentricity profile agree with theoretical expectations. We thus firmly establish a long-suspected behaviour, that the J1228 disk is rigidly precessing. We first establish a new estimate for the observed precession period. Previously, Manser et al. (2016) reported a period of 24-30 yrs, based on data up to May 2015. More recent monitoring extends the data to May 2018 (Manser et al., 2019). From these, one infers that the triplet evolves through a symmetric profile around Oct. 2017. The last time they did so was around July 2007. This led us to refine the precession period to a value of 20.5 yrs.
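For orientation, the best-fit numbers quoted above translate into the following eccentricity profile and pericentre/apocentre distances (a small illustrative sketch of ours, not from the paper; all lengths in units of \(R_{\odot}/\sin^{2}i\)).

```python
# Sketch: evaluate the best-fit eccentricity profile of Eq. (2), with Table 1 values.
import numpy as np

a_in, a_break, a_out, e_in, grad_e = 0.57, 1.0, 1.7, 0.44, -0.42
for a in np.linspace(a_in, a_out, 5):
    e = e_in + grad_e * (a - a_in)
    print(f"a = {a:4.2f}  e = {e:5.2f}  peri = {a*(1-e):4.2f}  apo = {a*(1+e):4.2f}")

# The eccentricity where most of the mass sits (a_break) is ~ 0.26; the formal value
# at a_out is slightly negative (~ -0.03), i.e. circular within the quoted uncertainties.
print("e(a_break) =", round(e_in + grad_e*(a_break - a_in), 2))
```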
Figure 4: Corner plot for the 1- and 2-D marginalized posterior distributions for our model parameters, with the first 2000 steps discarded as burn-in. We see good overall convergence for our parameters. We now also insert the factor of \(1/\sin^{2}i\) where necessary. Previously, Gansicke et al. (2006) have assigned an inclination of \(i\sim 70^{\circ}\) to the disk, based on the crude arguments that the disk is far from being face-on (double-peaked emission), and is also not edge-on (no self-absorption). This value remains uncertain, so we keep it as a variable. ### GR Makes It Differential An eccentric ring that is in the proximity of the white dwarf experiences general relativistic (GR) precession. Let the complex eccentricity be \[E=e\,\exp(i\varpi)\,, \tag{5}\] where \(\varpi\) is the longitude of pericentre measured relative to a fixed direction in space. GR acts to advance \(\varpi\) at a rate \[\dot{\varpi}_{\rm GR}\!=\!\frac{3GM_{*}\Omega}{c^{2}a(1-e^{2})}\approx\frac{2\pi}{84\,{\rm yrs}}\,\frac{1}{(1-e^{2})}\,\left(\frac{a}{1R_{\odot}}\right)^{-5/2}\,, \tag{6}\] where \(\Omega=\left(GM_{*}/a^{3}\right)^{1/2}\) is the Keplerian frequency. Since our gas disk extends radially by a factor of order unity, the above equation suggests that the eccentric disk should have been markedly twisted after only 10 yrs, the precession period for the innermost orbit. If so, streamlines from different orbits could have crossed, and the resulting dissipation should have circularized and shrunk the disk. In contrast, the sharp spikes seen in the line profiles suggest a significant eccentricity. In fact, the disk has been observed to retain the same eccentric shape for over 20 yrs. For this reason, previous studies have argued that J1228 and other similar white dwarfs do not harbour eccentric disks, but instead host circular disks with non-axisymmetric brightness patterns (e.g., spiral wave, vortex, Hartmann et al., 2011; Metzger et al., 2012). There are also suggestions of eccentric disks but with misaligned apses (Cauley et al., 2018; Fortin-Archambault et al., 2020). These proposals, while being able to provide reasonable fits to the data, are not physically motivated. An asymmetric pattern on a circular disk can be rapidly sheared out on the Keplerian timescale (even faster than GR); and an eccentric disk with misaligned apses is not known to be self-sustaining. ### Pressure Keeps It Rigid In our work, we opt to model the disk as a series of apse-aligned rings that rigidly precess. We now confirm that this is physically motivated. First, the radial pressure gradient also causes precession, at a rate that is, to order of magnitude, \[\dot{\varpi}_{p}\sim\left(\frac{c_{s}}{v_{kep}}\right)^{2}\Omega\sim\frac{2\pi}{84{\rm yrs}}\,\left(\frac{a}{1R_{\odot}}\right)^{-3/2}\,\left(\frac{c_{s}/v_{\rm kep}}{0.0028}\right)^{2}\,, \tag{7}\] for a ring with width \(\Delta r\sim r\) (Goodchild & Ogilvie, 2006; Teyssandier & Ogilvie, 2016). To be competitive against GR (eq. 6), we only need \(c_{s}/v_{\rm kep}\geq 0.0024\), or a gas temperature2 Footnote 2: Cooler disks may still maintain rigid precession, but will require a very steep eccentricity gradient. \[T\geq 1150\,{\rm K}\times\left(\frac{\mu}{9}\right)\,, \tag{8}\] where we have evaluated at \(a=1R_{\odot}\) and have scaled the mean-molecular weight against that for singly-ionized metallic gas (see below). The temperature of the gas disk is likely controlled by photo-ionization and ranges from 5000 to 9000 K (Melis et al., 2010, also our own CLOUDY experiments).
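Both rates are easy to evaluate numerically. The sketch below (ours, with illustrative numbers and the stellar mass quoted later from Koester et al. 2014) reproduces the \(\sim 84\) yr GR scale of Eq. (6) and shows that, for gas at several thousand K, the order-of-magnitude pressure rate of Eq. (7) exceeds the GR rate.

```python
# Sketch: GR precession (Eq. 6) versus the order-of-magnitude pressure rate (Eq. 7).
import numpy as np

G, c, kB, mH = 6.674e-8, 2.998e10, 1.381e-16, 1.673e-24   # cgs
Msun, Rsun, yr = 1.989e33, 6.957e10, 3.156e7
GM = G * 0.705 * Msun

def gr_precession_period(a_Rsun, e=0.0):
    """GR apsidal precession period in years, from Eq. (6)."""
    a = a_Rsun * Rsun
    Omega = np.sqrt(GM / a**3)
    wdot = 3.0 * GM * Omega / (c**2 * a * (1.0 - e**2))
    return 2*np.pi / wdot / yr

print(gr_precession_period(1.0))                               # ~ 84 yr
print(gr_precession_period(0.57), gr_precession_period(1.7))   # strong differential precession

# Ratio of the pressure rate (Eq. 7) to the GR rate (Eq. 6) at a = 1 R_sun:
T, mu, a = 6000.0, 9.0, 1.0*Rsun
cs2 = kB*T/(mu*mH)
print(cs2 * c**2 * a**2 / (3.0 * GM**2))   # > 1 for T ~ 6000 K, so pressure can win
```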
So the whole disk can easily communicate via pressure, and can smooth out any precessional mis-demeanour. The negative eccentricity gradient we report here supports the hypothesis that the disk is rigidly precessing. Such a configuration means that the rings are more compressed together at their apocentres. The radial pressure gradient there tends to precess the inner streamline backwards, while precess the outer streamlines forwards. This equilibrates their differential GR rates (eq. 6). We now make the above arguments more quantitative. We follow Miranda & Rafikov (2018) to compute the eccentricity eigenmode, the global coherent response of the disk to an eccentricity perturbation. Teyssandier & Ogilvie (2016) have studied the linear response of a locally isothermal, 3-D disk. For the case of a power-law disk, where the surface density scales as \(\Sigma\propto r^{p}\) \begin{table} \begin{tabular}{c c c c} \hline \hline Parameter & Prior & \multicolumn{2}{c}{Solution} \\ & & Mean (\(\mu\)) & Uncertainty (\(\sigma\)) \\ \hline \(a_{\rm in}\left[R_{\odot}\right]\) & \(\in\left[0.3,4\right]\) & 0.57 & 5.5\% \\ \(a_{\rm break}\left[R_{\odot}\right]\) & \(\in\left[a_{\rm in},a_{\rm out}\right]\) & 1.0 & 2.8\% \\ \(a_{\rm out}\left[R_{\odot}\right]\) & \(\in\left[a_{\rm in},7\right]\) & 1.7 & 4.1\% \\ \(e_{\rm in}\) & \(\in\left[0,1\right)\) & 0.44 & 3.9\% \\ \(\nabla e\) & Eqn. 4 & -0.42 & 14\% \\ \(q\) & \(>0\) & 2.4 & 6.8\% \\ \(p_{1}\) & \(\in\left(0,5\right)\) & 1.8 & 17\% \\ \(p_{2}\) & \(\in\left(-5,0\right)\) & -1.9 & 10\% \\ \hline \end{tabular} \end{table} Table 1: Most likely model parameters and their \(1-\sigma\) uncertainties. and where the temperature also obeys a power-law, \[T(r)=T_{\rm in}\left(\frac{r}{r_{\rm in}}\right)^{-\gamma}\,, \tag{9}\] Miranda & Rafikov (2018) simplified the Teyssandier & Ogilvie (2016) equation into \[\frac{\partial^{2}E}{\partial r^{2}}+\frac{(3-p)}{r}\frac{ \partial E}{\partial r}+\left[\frac{6-\gamma(\gamma+2)-p(\gamma+1)}{r^{2}}\right.\] \[\left.+\frac{6r^{2}\Omega^{4}}{c^{2}c_{s}^{2}}-\frac{2\Omega \omega_{prec}}{c_{s}^{2}}\right]E=0\,, \tag{10}\] where \(E=E(r)\) is the (apse-aligned) eccentricity eigenfunction, and \(w_{prec}\) the frequency of global precession. The isothermal sound speed is \(c_{s}=\sqrt{kT/\mu m_{H}}\) and we adopt \(\mu=9\) (see SS5). For our problem, since we only measure the length combination \(\tilde{r}=r/\sin^{2}i\), we transform the above equation to \[\frac{\partial^{2}E}{\partial\tilde{r}^{2}}+\frac{(3-p)}{\tilde{r }}\frac{\partial E}{\partial\tilde{r}}+\left[\frac{6-\gamma(\gamma+2)-p( \gamma+1)}{\tilde{r}^{2}}\right.\] \[\left.+\frac{6\tilde{r}^{2}\tilde{\Omega}^{4}}{c^{2}c_{s}^{2} \sin^{4}i}-\frac{2\tilde{\Omega}\omega_{prec}\sin i}{c_{s}^{2}}\right]E=0\,, \tag{11}\] where \(\tilde{\Omega}=\sqrt{(}GM_{*}/\tilde{r})\) and \(c_{s}^{2}=c_{s0}^{2}(\tilde{r}/\tilde{r}_{0})^{-\gamma}\). One seems to have some flexibilities in choosing the boundary condition (see, e.g. Miranda & Rafikov, 2018). We adopt the following set \[\left.\frac{\partial E}{\partial r}\right|_{\rm a_{in}}=0\,;\qquad\qquad E|_{ \rm a_{out}}=0\,. \tag{12}\] These differ from that in Miranda & Rafikov (2018), where they also took \(\partial E/\partial r=0\) at the outer boundary. Our adopted one is more descriptive of our best-fit solution. In any case, this does not much affect the precession rate. For our broken power-law disk, we integrate Eq. 
11 from the two boundaries towards \(\tilde{a}_{\rm break}\) and insist that \(E\) and \(\partial E/\partial r\) remain continuous across \(\tilde{a}_{\rm break}\). We then look for eigenmodes of the lowest radial order. These have the smoothest eccentricity profiles and hence the lowest dissipation rates. We present in Fig. 5 the results of these calculations, adopting the best-fit model parameters from Table 1. As we do not have information on the values of \(T_{\rm in}\) and \(\gamma\), we experiment within some sensible ranges. These calculations are performed for \(i=90^{o}\), but the conclusion remains largely the same, given our uncertainties in \(T_{\rm in}\) and \(\gamma\), for inclinations as low as \(\sim 60^{o}\). We draw two conclusions. First, within the relevant range of \(T_{\rm in}\), and sensible temperature profiles (\(\gamma\)), we find that the theoretical modes have periods comparable to the observed value (20.5 yrs). Second, even though we have only modelled the disk using a linear profile (the simplest choice), this profile agrees very well with the shape of the linear eigen-functions. These two quantitative agreements strongly support the hypothesis that J1228 hosts a rigidly precessing gas disk, under the combined effects of GR and gas pressure. ## 5 Insights on origin We now discuss what our results imply for the origin of the gas disk around J1228, as well as for the pollution of white dwarfs in general. We base our discussions on the current favourite model for white dwarf pollution, tidal disruption of highly eccentric planetesimals (see review by Farihi, 2016).

Figure 5: Comparing our disk model to the linear eigen-mode calculations, assuming an inclination of 90 deg. Solid curves in the left panel show the calculated precession period as a function of temperature at the inner edge, for a few choices of \(\gamma\) (eq. 9). The grey band corresponds to the precession period determined from data, and the blue dotted line that of GR period at the disk inner edge. The right panel shows the calculated eccentricity eigen-functions in solid lines, corresponding to the three starred positions on the left panel. Our best-fit eccentricity profile is plotted as the thick blue line. The theoretical modes are normalized to have the same eccentricity at the inner edge as the observed one.

### Disk Mass and the Progenitor Mass We estimate a mass for the gas disk, \(M_{\rm gas}\), by assuming that viscous spreading of the gas disk supplies the observed accretion onto J1228. The gas scale height is \[\frac{H}{r}=\frac{c_{s}}{v_{\rm kep}}\sim 0.006\left(\frac{\mu}{9}\right)^{-1/2}\left(\frac{T}{5000\rm K}\right)^{1/2}\,\left(\frac{r}{1R_{\odot}}\right)^{1/2}\,. \tag{13}\] If the disk is accreting under a constant viscosity parameter \(\alpha\) (Shakura & Sunyaev, 1973), the viscous diffusion time is \[\tau_{\rm diff} \approx \alpha^{-1}\Omega^{-1}\left(\frac{H}{r}\right)^{-2} \tag{14}\] \[\sim 200\rm yrs\,\left(\frac{r}{1R_{\odot}}\right)^{3/2}\,\left(\frac{\alpha}{10^{-2}}\right)^{-1}\,\left(\frac{H/r}{0.006}\right)^{-2}\,.\] We adopt the accretion rate as determined by Dwomoh & Bauer (2023)3 and estimate the current disk mass by assuming \(\dot{M}\sim M_{\rm gas}/\tau_{\rm diff}\), to obtain Footnote 3: This updated rate accounts for diffusion from thermohaline mixing and is much greater than the estimate of \(\dot{M}=5.6\times 10^{8}\,\rm g/s\) in Gänsicke et al. (2012).
\[M_{\rm gas} \approx 2\times 10^{21}\rm g\left(\frac{\dot{M}}{2\times 10^{11}\rm g/s}\right)\left(\frac{\alpha}{10^{-2}}\right)^{-1} \tag{15}\] \[\times\left(\frac{T}{5000\rm K}\right)^{-1}\left(\frac{r}{1R_{\odot}}\right)^{1/2}\,.\] We argue that this estimate likely also reflects the total mass of the original disrupted planetesimal. The current disk is significantly eccentric - it avoids rapid streamline crossing and circularization by organizing itself into a coherent eccentric mode. But over the viscous timescale, the eccentricity should be gradually damped as the disk spreads radially. So the current disk has likely weathered no more than a few viscous times. That is, its current mass is close to its original mass. If true, this means the original disrupted planetesimal has a radius \(R_{p}\sim 100\,\rm km\). To substantiate the above estimate for the disk mass, we also consider whether it is consistent with the observed Ca II emissions. Let us adopt the same chemical composition as that for CI chondrite from Palme et al. (2014), and assume that all metals are singly ionized, the mean weight per nucleus of the gas is 15 and the mean molecular weight is 9. So the above mass estimate corresponds to a surface density \(\Sigma\sim M_{\rm gas}/r^{2}\sim 0.13\,\rm g/cm^{2}\), a midplane density of \(\rho\sim 3\times 10^{-10}\,\rm g/cm^{3}\). In the meantime, Ca is only 1/280 of the total nucleus number. We therefore arrive at a radial column density for Ca of \(\sim 3\times 10^{21}\,\rm cm^{-2}\), and a vertical one at \(\sim 2\times 10^{19}\,\rm cm^{-2}\). We find:

* if all Ca is in Ca II,4 with a photo-ionization cross-section of \(5\times 10^{-19}\,\rm cm^{2}\), the optical depth for a Ca II ionizing photon is \(\tau\sim 1600\). So the gas midplane is very opaque to the Ca II ionizing photons. This may explain why we obtain a steeper fall-off of the ionizing flux (\(n_{\gamma}\propto r^{-2.4}\)), than is expected for an optically thin disk (\(n_{\gamma}\propto r^{-2}\)). Footnote 4: Ca I, with an ionization potential of 6.11eV, is easily ionized; while Ca II, with a potential of 11.87eV, is much harder to ionize.

* The vertical optical depth for the Ca II infrared triplet is \(\sim 10^{6}\) (Cloudy result). This explains why the line ratios within the CaII triplet do not reflect their individual strengths, but approach those of a blackbody (Melis et al., 2010).

* the above mass estimate allows us to explain the total flux observed in the Ca II triplet. For J1228, about 5.6% of its energy is in photons that can ionize Ca II (ionization energy 11.87 eV). Let the disk be optically thick to these photons from the midplane up to \(n\) scale heights. The total energy intercepted by Ca II is then \(2n\times(H/r)\times 5.6\%\times L_{\rm wd}\). As Ca II is ionized and recombined, a fraction of the ionization energy is emitted in Ca II triplets (photon energy \(\sim 1.5\) eV). So we expect a total line flux of \(\sim 2n(H/r)\times 5.6\%\times(1.5/11.87)\times L_{\rm wd}\). The observed total line flux is \(\sim 3\times 10^{-4}L_{\rm wd}\) (Manser et al., 2016), or we require \(n\sim 3.5\). In comparison, using the above midplane radial optical depth (\(\tau\sim 1600\)), and assuming that the disk is in vertical hydrostatic equilibrium, we find that the disk can capture these photons up to a height multiple of \(n\sim 3.8\).
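The order-of-magnitude chain of Eqs. (13)-(15) can be reproduced directly (a sketch of ours with the same fiducial numbers as in the text; not the authors' code).

```python
# Sketch: evaluate Eqs. (13)-(15) with the fiducial numbers quoted in the text.
import numpy as np

G, kB, mH = 6.674e-8, 1.381e-16, 1.673e-24    # cgs
Msun, Rsun, yr = 1.989e33, 6.957e10, 3.156e7
GM = G * 0.705 * Msun

r, T, mu, alpha, Mdot = 1.0*Rsun, 5000.0, 9.0, 1e-2, 2e11   # Mdot in g/s

cs = np.sqrt(kB*T/(mu*mH))
Omega = np.sqrt(GM/r**3)
H_over_r = cs/np.sqrt(GM/r)                  # Eq. (13): ~ 0.006
tau_diff = 1.0/(alpha*Omega*H_over_r**2)     # Eq. (14): ~ 200 yr
M_gas = Mdot*tau_diff                        # Eq. (15): ~ 1e21 - 2e21 g

print(H_over_r, tau_diff/yr, M_gas)
```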
In other words, the observed CaII line fluxes can be explained by our inferred disk density. ### The Three Radii We aim to draw clues on the progenitor by considering the three radii we inferred for the disk, \(a_{\rm in}\), \(a_{\rm break}\) and \(a_{\rm out}\). With our values of \(p_{1}\) and \(p_{2}\), most of the disk mass lies closely around \(a=a_{\rm break}\sim 1\times\sin^{2}i\,R_{\odot}\). This suggests that \(a_{\rm break}\) may be a special radius for the progenitor body. Gas deposited here may then viscously spread both outwards and inwards, forming the extended disk. In particular, Metzger et al. (2012); Rafikov (2016) showed that the surface density of an isothermal accretion disk should scale as \(r^{-2}\), similar to our value of \(p_{2}=-1.9\). We consider two physical radii, one for dust sublimation and the other for tidal disruption. Using the most updated stellar parameters from Koester et al. (2014), \(M_{*}=0.705M_{\odot}\), \(R_{*}=0.0117R_{\odot}\), \(T_{\rm eff}=20,713\)K, a blackbody at distance \(r\) is heated to a temperature \[T_{\rm bb}=\left(\frac{1}{4}\right)^{1/4}\left(\frac{R_{\rm wd}}{r}\right)^{1 /2}T_{\rm wd}\sim 1300{\rm K}\,\left(\frac{r}{1.5R_{\odot}}\right)^{-1/2}\,, \tag{16}\] where 1300K is roughly the sublimation temperature of silicate grains under our disk midplane density, \(\rho\sim 3\times 10^{-10}\,{\rm g}\,/{\rm cm}^{3}\)(Pollack et al., 1994).5 Footnote 5: The fit provided by Isella & Natta (2005) is \(T_{\rm evap}\sim 1600\,{\rm K}\left(\frac{\rho}{10^{-9}{\rm gcm}^{-3}} \right)^{0.0195}\). This will place the sublimation radius at around \(1.5R_{\odot}\), much beyond \(a_{\rm break}=1\sin^{2}i\,R_{\odot}\), but close to our inferred outer edge, \(a_{\rm out}=1.7\sin^{2}i\,R_{\odot}\). It is therefore likely that the gas disk is truncated near the sublimation boundary. There may well be a dust component lying beyond it (see dicussion below). On the other hand, the tidal disruption radius, for a body with negligible internal strength and on a parabolic orbit (\(e\approx 1\)), is located at a peri-centre distance of (Sridhar & Tremaine, 1992; Watanabe & Miyama, 1992)6 Footnote 6: A finite internal strength will allow the body to survive closer to the white dwarf (see, e.g. Zhang et al., 2021). \[r_{\rm roche}|_{e\approx 1}\approx 1.7R_{\rm wd}\times\left(\frac{\rho_{\rm wd }}{\rho_{\rm p}}\right)^{1/3}\approx 1.0R_{\odot}\left(\frac{\rho_{\rm p}}{5\,\,{ \rm g}\,/{\rm cm}^{3}}\right)^{-1/3}\,. \tag{17}\] On the other extreme, for a body on a circular orbit, the above factor 1.7 should be replaced by 2.5 (Chandrasekhar, 1961), and \(r_{\rm roche}|_{e=0}\) is moved outwards to \(\sim 1.5R_{\odot}\). Other bound trajectories range in-between these two extremes. Our inferred \(a_{\rm break}\) lies inward of \(r_{\rm roche}|_{e\approx 1}\), supporting the story that the parent body is tidally disrupted by the white dwarf. Around J1228, its debris is vaporized to form the observed gas disk. Lastly, the observed line profiles clearly indicate that the gas disk has an abrupt inner cutoff at \(a_{\rm in}\sim 0.57\sin^{2}i\,R_{\odot}\). Metzger et al. (2012) suggested that stellar magnetic field may be able to truncate the disk, much like that around in T Tauri stars. However, we suspect a different explanation may be at work here. In the inner part of the disk, rigid precession under the stronger GR precession demands a steeper eccentricity gradient. 
If the disk extends closer to the white dwarf than is observed, the implied high eccentricity will be challenging for its survival - nonlinear effects can disrupt the pattern of rigid precession and cause streamline crossing. We hypothesize that disk eccentricity, rather than stellar magnetic field, truncates our disk at the observed \(a_{\rm in}\). More detailed study is required. ### A Co-spatial Dust Disk? The spectral energy distribution of J1228 shows the presence of a dusty component, with a total luminosity of \(L_{\rm dust}\sim 6\times 10^{-3}L_{\rm wd}\), and blackbody temperatures that range from 450 K to 1700 K (Brinkworth et al., 2009). If the dust lies in a geometrically flat disk and sees the central star unobstructed, it should be illuminated to a temperature (Chiang & Goldreich, 1997; Jura, 2003) \[T_{\rm dust}=\left(\frac{2}{3\pi}\right)^{1/4}\left(\frac{R_{\rm wd}}{r} \right)^{3/4}T_{\rm wd}\sim 350\,{\rm K}\left(\frac{\rm r}{1.5{\rm R}_{\odot}} \right)^{-3/4}\,. \tag{18}\] So the above dust temperatures translate to a range of \(0.2R_{\odot}\) to \(1.2R_{\odot}\). But this would place the dust component in the same radius as the gas disk. The eccentricity of the gas disk makes this problematic. The gas and dust components, if orbiting at different eccentricities and precessing independently (i.e., dust experiences GR but not gas pressure), will encounter each other at enormous speeds, of order a few hundred km/s. This leads to evaporation/sputtering of the dust grains, and circularization of the gas disk (if dust mass is high enough). In fact, this un-welcomed prospect led Metzger et al. (2012) to suggest that the gas disk cannot be eccentric, a proposition now amply refuted by our analysis. One way to resolve this is if the grains are hotter than eq. 18 dictates and can therefore lie further away, beyond the gas disk. This is possible if the grains are not in a flat disk and do not block each other's view to the star. Their temperatures will then be described by eq. 16. The observed blackbody may then arise from a region beyond \(0.9R_{\odot}\), largely avoiding the most eccentric part of the gas disk. These grains can lie even further away, if they are smaller than the wavelengths of their own thermal radiation and are thus super-heated. Such a situation (free-floating grains) can arise if the grains are short-lived and have not yet undergone collisional flattening (into a thin disk). Because of the proximity of the gas disk to the sublimation radius, these grains may be in condensation/sublimation equilibrium with the gas disk, and are transiently formed and destroyed. In this case, one can obtain a lower-limit to the dust mass by assuming that the observed dust luminosity is produced by grains of size \(s\), bulk density \(\rho_{\rm bulk}\) and temperature \(T\), \[M_{\rm dust} \geq \frac{L_{\rm dust}\,s\,\rho_{\rm bulk}}{3\sigma T^{4}}\sim 2\times 1 0^{17}\,{\rm g}\,\,\left(\frac{\rm L_{\rm dust}}{6\times 10^{-3}L_{\rm wd}}\right)\times \tag{19}\] \[\left(\frac{T}{1600\,{\rm K}}\right)^{-4}\left(\frac{s}{1\mu m} \right)\left(\frac{\rho_{\rm bulk}}{5\,{\rm g}/\,{\rm cm}^{3}}\right)\,.\] In other words, only a minute amount of dust is needed to reproduce the observed SED. The mass of the gas disk may be close to that of the progenitor body. In this scenario, the dust component (and its Poynting-Robertson drag) will be irrelevant for the evolution and accretion of the gas disk, differing from the proposal by Rafikov (2011); Metzger et al. (2012). 
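The characteristic radii and the dust-mass lower limit quoted in this section (Eqs. 16-19) follow from a few lines of arithmetic. The sketch below is ours; it uses the Koester et al. (2014) stellar parameters and the same fiducial grain properties as the text.

```python
# Sketch: sublimation radius (Eq. 16), tidal disruption radius (Eq. 17),
# flat-disk dust temperature (Eq. 18) and dust-mass lower limit (Eq. 19).
import numpy as np

sigma_sb, Msun, Rsun = 5.670e-5, 1.989e33, 6.957e10      # cgs
M_wd, R_wd, T_wd = 0.705*Msun, 0.0117*Rsun, 20713.0

# Eq. (16): solve T_bb = 1300 K for the grain sublimation radius
T_sub = 1300.0
print("sublimation radius ~", R_wd*(T_wd/(np.sqrt(2.0)*T_sub))**2/Rsun, "R_sun")   # ~ 1.5

# Eq. (17): Roche radius for a strengthless body on a parabolic orbit
rho_wd = M_wd/(4*np.pi/3*R_wd**3)
rho_p = 5.0
print("Roche radius ~", 1.7*R_wd*(rho_wd/rho_p)**(1/3)/Rsun, "R_sun")              # ~ 1.0

# Eq. (18): flat-disk dust temperature at r = 1.5 R_sun
r = 1.5*Rsun
print((2/(3*np.pi))**0.25 * (R_wd/r)**0.75 * T_wd)   # ~ 370 K, cf. the ~350 K scaling

# Eq. (19): minimum dust mass for L_dust ~ 6e-3 L_wd from 1-micron grains at 1600 K
L_wd = 4*np.pi*R_wd**2*sigma_sb*T_wd**4
L_dust, s, rho_bulk, T_d = 6e-3*L_wd, 1e-4, 5.0, 1600.0
print("M_dust >", L_dust*s*rho_bulk/(3*sigma_sb*T_d**4), "g")                      # ~ 2e17 g
```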
### The Progenitor Here, we remark on how the J1228 disk informs on the origin of white dwarf pollution. The observed gas disk is markedly eccentric. Barring the possibility that the eccentricity is excited after formation (see, e.g., proposal from Miranda & Rafikov, 2018), this points to a very eccentric orbit for the progenitor. This is expected in the hypothesis of tidal disruption. We can infer the original peri-centre approach, by assuming that the orbital angular momentum (\(\sqrt{a(1-e^{2})}\)) is largely conserved when the tidal debris is circularized into the observed disk. This yields \(r_{p}=a_{\rm orig}(1-e_{\rm orig})\sim a_{\rm break}(1-e_{\rm break}^{2})/(1+e_{\rm orig})\sim 0.47\sin^{2}i\,R_{\odot}\), for \(e_{\rm orig}\approx 1\) and \(e_{\rm break}\sim 0.26\) at \(a_{\rm break}\). This is at least a couple of times smaller than the Roche radius (eq. 17). Who can place the progenitor on such an odd orbit, with its improbably small peri-centre approach? The most likely scenario to date is planetary perturbations. Moreover, secular perturbations of the Kozai type are likely involved to boost the probability. Furthermore, to prevent suppression by GR precession, the secular perturbations may need to act in concert with mean-motion interactions and/or close encounters. This requires the mean-motion of the progenitor to be within a factor of a few from that of the planet. Such planetary perturbations can also help in circularizing the tidal debris. Unlike the very weak GR precession (which induces an apse advance of a mere \(\sim 3\times 10^{-5}\) radian per peri-centre passage), planetary perturbations can effectively scramble the orbits of different parts of the tidal stream, dissipate their relative kinetic energy and place them on much more compact orbits. If so, the entire process should take much less than the diffusion time (\(\tau_{\rm diff}\sim 200\) yrs), or else the disk that forms would contain only a small fraction of the progenitor mass. This suggests that the perturber should have an orbit not much further than a few AU. Lastly, we muse on the probability of seeing a live disk around a close-by white dwarf. J1228 lies at a distance of \(\sim 127\) pc. Stars of similar temperatures have a space density of \(\sim 4\times 10^{-5}/{\rm pc}^{3}/{\rm mag}\) (Leggett et al., 1998). So we expect \(\sim 300\) similarly hot stars within a similar distance to J1228. Among these, about two dozen gas disks are reported. So the occurrence rate of gas disks among hot white dwarfs is \(\sim 10\%\), not dissimilar to the occurrence rate of dust disks among all white dwarfs. If such a disk accretes in a few diffusion times (\(\tau_{\rm diff}\)), and if all progenitors are similar in size (\(R\sim 100\)km), we then infer that, over the lifetime of hot white dwarfs (\(10^{7}\) to \(10^{8}\) yrs), a source planetesimal disk with a mass of order a Mars mass (\(\sim 6\times 10^{26}\,{\rm g}\)) is required. ## 6 Conclusion We undertake a detailed modelling of the gaseous disk around the white dwarf SDSS J1228+1040. We find that the disk has a surface density profile that peaks around \(1\,\sin^{2}i\,R_{\odot}\), and an eccentricity profile that decreases outward. The latter, we show, uncannily reproduces the theoretical profile of a disk that precesses rigidly under the combined forces of general relativity and gas pressure. In other words, the observed disk is in an eccentric eigen-state. This explains why the disk can be eccentric yet long-lived.
As we expect the eccentricity to be dissipated on the viscous timescale (\(\sim 200\) yrs), the current disk should have formed fairly recently. Based on the high accretion rate onto the white dwarf (current estimate \(\dot{M}\sim 2\times 10^{11}\,{\rm g/s}\)), we infer a mass of \(\sim 10^{21}\,{\rm g}\) for the gaseous disk. Such a mass estimate is also consistent with the emission measures in the Ca II triplets. The young age of the current disk then implies that it still contains most of the source mass, pegging the progenitor at a size of \(R\sim 100\,{\rm km}\). Given the eccentricity of the gas disk, there is unlikely to be a massive dusty debris disk that is co-spatial. Rather, we suggest that the observed dust emission may arise from small amounts of grains that are in condensation/sublimation equilibrium with the gas disk. The progenitor that is tidally stripped apart likely arrives from an orbit of a few AU or less, perturbed by a nearby planetary body to reach a peri-centre distance of \(0.47\sin^{2}iR_{\odot}\). To account for the observed rate of gaseous disks among similarly hot white dwarfs, we estimate that the source disk needs to contain of order a Mars mass. Moreover, this disk and its planetary perturber likely orbit around the white dwarf within a few AU, so that the tidal debris can quickly circularize (downsize) to its current shape. Our study in this work is preliminary in nature. We have not investigated in detail the temperature structure and the emission mechanism of the metallic disk. Emission line diagnostics may be used to constrain the disk inclination, which may lead to further insights. We also fall short of analyzing the 'circularization' process after tidal disruption. The latter seems a promising route to infer the nature of the progenitor and the architecture of the planetary system around J1228. Such careful studies are clearly warranted. J1228 is likely not unusual. First, many white dwarfs with gaseous disks show variable emissions, indicating eccentric, precessing disks (Gansicke et al., 2008; Melis et al., 2010; Wilson et al., 2014; Cauley et al., 2018; Manser et al., 2021). Dynamics similar to those we reveal here for J1228 may be at play in all of these disks. Second, while J1228 is hot and can sublimate rocks at around \(1R_{\odot}\), cooler white dwarfs will only harbour fully dusty disks. Such a disk reveals no information on its kinematics and surface density. J1228 offers us a lucky window into these otherwise obscure disks and may well be the Rosetta stone to decipher the mystery of white dwarf pollution. We acknowledge NSERC for funding. We also thank C. Manser for providing the line data, and Renu Malhotra for discussions.
2310.14473
Optimal exercise decision of American options under model uncertainty
Given the marginal distribution information of the underlying asset price at two future times $T_1$ and $T_2$, we consider the problem of determining a model-free upper bound on the price of a class of American options that must be exercised at either $T_1$ or $T_2$. The model uncertainty consistent with the given marginal information is described as the martingale optimal transport problem. We show that any option exercise scheme associated with any market model that jointly maximizes the expected option payoff must be nonrandomized if the American option payoff satisfies a suitable convexity condition and the model-free price upper bound and its relaxed version coincide. We would like to remove the latter condition by imposing appropriate conditions on the cost and marginals.
Tongseok Lim
2023-10-23T01:04:18Z
http://arxiv.org/abs/2310.14473v2
# Optimal exercise decision of American options under model uncertainty ###### Abstract. Given the marginal distribution information of the underlying asset price at two future times \(T_{1}\) and \(T_{2}\), we consider the problem of determining a model-free upper bound on the price of a class of American options that must be exercised at either \(T_{1}\) or \(T_{2}\). The model uncertainty consistent with the given marginal information is described as the martingale optimal transport problem. We show that any option exercise scheme associated with any market model that jointly maximizes the expected option payoff must be nonrandomized if the American option payoff satisfies a suitable convexity condition and the model-free price upper bound and its relaxed version coincide. We would like to remove the latter condition by imposing appropriate conditions on the cost and marginals. Version 1 of the paper posted on arXiv had an incorrect Proposition 2.1, which was used to erroneously derive the equation \(P_{c}=\overline{P}_{c}\). The proposition was removed in Version 2, and the main theorem now assumes the equation. We would like to find sufficient conditions for the equation. Throughout, \(\mu\) and \(\nu\) denote probability measures on \(\mathbb{R}\), and we say that they are in convex order, written \(\mu\preceq_{c}\nu\), if \(\mu(f)\leq\nu(f)\) for every convex function \(f:\mathbb{R}\to\mathbb{R}\), where \(\mu(f):=\mathbb{E}_{\mu}[f(X)]=\int f(x)\mu(dx)\). We consider market models that are defined by the following set of martingale transports from \(\mu\) to \(\nu\): \[\mathcal{M}(\mu,\nu)=\{\pi\in\mathcal{P}(\mathbb{R}^{2})\,|\,\pi=\mathrm{Law}(X,Y),\mathbb{E}_{\pi}[Y|X]=X,\mathrm{Law}(X)=\mu,\mathrm{Law}(Y)=\nu\}.\] In finance, each \(\pi\in\mathcal{M}(\mu,\nu)\) represents a feasible joint law of the price \((X,Y)\) given the marginal information \(\mu,\nu\) in the (two-period) market, under which \((X,Y)\) is a martingale, written as \(\mathbb{E}_{\pi}[Y|X]=X\). It is well known that the condition \(\mu\preceq_{c}\nu\) is equivalent to \(\mathcal{M}(\mu,\nu)\neq\emptyset\). We refer to [10, 11, 13, 14] for further background. We consider the cost function which describes an American option payoff \[c=(c_{1},c_{2})=(c_{1}(x),c_{2}(x,y)),\quad c_{1},c_{2}\in\mathbb{R}, \tag{1.1}\] such that if an obligee (option holder) selects \(c_{1}\), she receives the payout \(c_{1}(X)\), otherwise she receives the payout \(c_{2}(X,Y)\). Thus, in the former case, her payout is determined at time \(1\), whereas it is determined at time \(2\) in the latter. We assume she can make this choice conditional on the price \(X=x\), and that she can also randomize (or split) her choice, represented by a Borel function \(s:\mathbb{R}\to[0,1]\). This means that given \(X=x\), she exercises \(c_{1}\) with probability (or proportion) \(s(x)\), otherwise \(c_{2}\) with probability \(1-s(x)\). Given a function \(s:\mathbb{R}\to\mathbb{R}\) and a measure \(\mu\) on \(\mathbb{R}\), let the measure \(s\mu\) be given by \(s\mu(B)=\int_{B}s(x)\mu(dx)\). Since \(\mu\) is fixed, the choice of a randomization \(s\) is equivalent to the choice of \(0\leq\mu_{1}\leq\mu\),1 such that with \(\mu_{2}:=\mu-\mu_{1}\), \(s_{1}:=s\), \(s_{2}:=1-s\) equal the Radon-Nikodym derivatives \(\frac{d\mu_{1}}{d\mu},\frac{d\mu_{2}}{d\mu}\)\(\mu\)-a.s., respectively. This leads us to consider the optimization problem Footnote 1: All measures/distributions in this paper are assumed to be non-negative. 
\[P_{c}:=\sup_{\pi\in\mathcal{M}(\mu,\nu)}\sup_{\mu_{1}\leq\mu}\ \ \mathbb{E}_{\gamma_{1}}[c_{1}]+\mathbb{E}_{\gamma_{2}}[c_{2}], \tag{1.2}\] where for a given \(\pi=\pi_{x}\otimes\mu\in\mathcal{M}(\mu,\nu)\),2 we define \(\gamma_{l}=\pi_{x}\otimes\mu_{l}\), \(l=1,2\), such that \(\gamma_{1}+\gamma_{2}=\pi\) and that \(\gamma_{1}\) and \(\gamma_{2}\) share the same kernel \(\{\pi_{x}\}_{x}\) inherited from \(\pi\). Footnote 2: Any \(\pi=\mathrm{Law}(X,Y)\in\mathcal{P}(\mathbb{R}^{2})\), representing the joint law of the random variables \(X\) and \(Y\), can be written as \(\pi=\pi_{x}\otimes\mathrm{Law}(X)\), where \(\pi_{x}\in\mathcal{P}(\mathbb{R})\) is called a kernel of \(\pi\) with respect to \(\mathrm{Law}(X)\). \(\pi_{x}\) represents the conditional distribution of \(Y\) given \(X=x\), i.e., \(\pi_{x}(B)=\mathcal{P}(Y\in B\,|\,X=x)\) for all Borel set \(B\subseteq\mathbb{R}\). Note that \(\pi=\pi_{x}\otimes\mu\in\mathcal{M}(\mu,\nu)\) iff \(\int y\,\pi_{x}(dy)=x\)\(\mu\)-a.e. \(x\). In view of the obligor (the person responsible for the payment of the option), a solution \((\pi,\mu_{1})\) to (1.2) represents a worst-possible market scenario \(\pi\) combined with the option exercise scheme \(\mu_{1}\), yielding the maximum expected payout \(P_{c}\). We will assume the following regularity condition on \(c\) throughout the paper. **[A]** Throughout the paper, we assume that \(c_{1},c_{2}\) are continuous, \(\mu\preceq_{c}\nu\), and that the marginals \(\mu,\nu\) satisfy the following condition: there exist continuous functions \(v\in L^{1}(\mu)\), \(w\in L^{1}(\nu)\) such that \(|c_{1}|+|c_{2}|\leq v(x)+w(y)\). Note that this implies \[\big{|}\sum_{l}\mathbb{E}_{\gamma_{l}}[c_{l}]\big{|}\leq\sum_{l}\mathbb{E}_{ \gamma_{l}}[|c_{l}|]\leq\sum_{l}\mathbb{E}_{\pi}[|c_{l}|]\leq\mu(v)+\nu(w)< \infty\,\,\,\mbox{for any $\pi\in\mathcal{M}(\mu,\nu)$}.\] This in turn implies that the problem (1.2) is attained (i.e., admits an optimizer) by a standard argument in the calculus of variations [22]. [18] considered a specific cost called an American put, whose payoff is given by \[c_{1}(x)=(K_{1}-x)^{+},\quad c_{2}(x,y)=c_{2}(y)=(K_{2}-y)^{+},\quad K_{1}>K_{ 2}, \tag{1.3}\] and considered those option exercise schemes which are _pure_, or _non-randomized_; that is, [18] assumed that the obligee can only choose a Borel set \(B\subseteq\mathbb{R}\) in which she selects \(c_{1}\) if \(x\in B\) and \(c_{2}\) otherwise. In terms of \(\mu_{1}\), notice that this is equivalent to the statement that \(\mu_{1}\) and \(\mu_{2}\) are mutually disjoint, written as \(\mu_{1}\perp\mu_{2}\) (while \(\mu_{1}+\mu_{2}=\mu\)). In other words, [18] assumed that \(\mu_{1},\mu_{2}\) must saturate \(\mu\) on their respective supports. In addition, [18] assumed that \(\mu\) is continuous, i.e., has no atoms. Under these assumptions, [18] showed that an optimal market model \(\pi\) for the problem (1.2) is given by the _left-curtain coupling_ (see [8, 15, 18] for more details about this interesting martingale transport) along with an optimal exercise strategy \(B\), and furthermore, the cheapest superhedge can be derived. Now we would like to shift our focus and ask, "Under what conditions must the optimal option exercise be pure?" That is, when will an optimal \(\mu_{1}\) saturate \(\mu\), or equivalently, achieve \(\mu_{1}\perp\mu_{2}\)? 
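To make the objects in (1.2)-(1.3) concrete, the following self-contained sketch (ours, not from the paper) evaluates the value of (1.2) for one fixed randomization \(s\) (that is, the inner supremum over market models) by linear programming over martingale couplings on a small grid; the marginals, strikes and exercise rule are purely illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative discrete marginals in convex order (both have mean 1).
x = np.array([0.9, 1.0, 1.1]); mu = np.array([1/3, 1/3, 1/3])
y = np.array([0.8, 1.0, 1.2]); nu = np.array([1/3, 1/3, 1/3])

K1, K2 = 1.05, 0.95                      # American put strikes, K1 > K2 (cf. eq. 1.3)
c1 = np.maximum(K1 - x, 0.0)             # payoff if exercised at time 1
c2 = np.maximum(K2 - y, 0.0)             # payoff if exercised at time 2
s = (x <= 0.95).astype(float)            # a fixed (here pure) exercise rule s(x)

# Decision variable: pi[i, j] = P(X = x_i, Y = y_j), flattened row-wise.
n, m = len(x), len(y)
payoff = np.array([[s[i]*c1[i] + (1 - s[i])*c2[j] for j in range(m)] for i in range(n)])

A_eq, b_eq = [], []
for i in range(n):                       # row marginals: sum_j pi[i, j] = mu_i
    row = np.zeros((n, m)); row[i, :] = 1.0
    A_eq.append(row.ravel()); b_eq.append(mu[i])
for j in range(m):                       # column marginals: sum_i pi[i, j] = nu_j
    col = np.zeros((n, m)); col[:, j] = 1.0
    A_eq.append(col.ravel()); b_eq.append(nu[j])
for i in range(n):                       # martingale constraint: E[Y - X | X = x_i] = 0
    mart = np.zeros((n, m)); mart[i, :] = y - x[i]
    A_eq.append(mart.ravel()); b_eq.append(0.0)

# linprog minimizes, so maximize the expected payoff by negating it.
res = linprog(-payoff.ravel(), A_eq=np.vstack(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (n * m), method="highs")
print("value of (1.2) for this fixed s on the grid:", -res.fun)
```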
Note that the problem (1.2) can be rewritten as \[P_{c}=\sup_{\mu_{1}\leq\mu}P_{c}(\mu_{1}),\,\,\,\,\mbox{where}\,\,\,\,P_{c}( \mu_{1}):=\sup_{\pi\in\mathcal{M}(\mu,\nu)}\,\,\mathbb{E}_{\gamma_{1}}[c_{1}] +\mathbb{E}_{\gamma_{2}}[c_{2}], \tag{1.4}\] where \(\gamma_{l}=\pi_{x}\otimes\mu_{l}\), \(l=1,2\). Note that the problem (1.2) has a nonconvex domain in terms of the variable \((\gamma_{1},\gamma_{2})\). This is because even if \((\gamma_{1},\gamma_{2})\), \((\gamma_{1}^{\prime},\gamma_{2}^{\prime})\) are feasible (i.e., sharing the same kernel respectively), the convex combination \((\frac{\gamma_{1}+\gamma_{1}^{\prime}}{2},\frac{\gamma_{2}+\gamma_{2}^{ \prime}}{2})\) may not share the same kernel thus infeasible, unless \(\mu_{1}=\mu_{1}^{\prime}\) and \(\mu_{2}=\mu_{2}^{\prime}\). On the other hand, the subproblem \(P_{c}(\mu_{1})\) has a convex domain in terms of \((\gamma_{1},\gamma_{2})\). This leads us to consider a relaxed problem (2.2) with its optimal value denoted by \(\overline{P}_{c}\). Clearly \(P_{c}\leq\overline{P}_{c}\); see Section 2 for details. Our result is the following. **Theorem 1.1**.: _Assume **[A]** and the cost form (1.1). Suppose \(y\mapsto c_{2}(x,y)\) is strictly convex and \(c_{1}(x)\neq c_{2}(x,x)\) for \(\mu\)-a.e. \(x\), and \(\nu\) is absolutely continuous with respect to the Lebesgue measure. If \(P_{c}=\overline{P}_{c}\), then every solution \((\pi,\mu_{1})\) to the problem (1.4) satisfies \(\mu_{1}\perp\mu-\mu_{1}\). Furthermore, given any optimal candidate model \(\pi\), the \(\mu_{1}\) yielding an optimal pair \((\pi,\mu_{1})\) is unique._ We note that the condition \(c_{1}(x)>c_{2}(x,x)\) is natural because, if \(c_{1}(x)\leq c_{2}(x,x)\) and \(c_{2}\) is convex in \(y\), it is always optimal to choose \(c_{2}(x,y)\) by Jensen's inequality \(c_{2}(x,x)\leq\int c_{2}(x,y)\pi_{x}(dy)\). Theorem 1.1 says that in this case, every optimal exercise, or stopping, is nonrandomized. Evidently, the problem (1.2) can be viewed as an optimal stopping problem, in which the option holder either stops at time \(1\) and receives the sure reward \(c_{1}(x)\), or goes and receives the reward \(c_{2}(x,y)\) (which is stochastic at time \(1\)) at time \(2\). This naturally places the theorem in the context of the vast literature on the Skorokhod embedding problem [7, 16, 21], with the key difference that we now face uncertainty in the family of models \(\mathcal{M}(\mu,\nu)\). Such model uncertainty was also considered in [2, 12] in continuous time setup. For more results on American options and their robust hedging, we refer to [4, 5, 6]. In the optimal transport literature, the absolute continuity of \(\mu\) is typically assumed in order to derive non-randomizing solutions, known as Monge solutions. Continuity of \(\mu\) was also assumed in [18]. In contrast, Theorem 1.1 assumes the absolute continuity of \(\nu\), while making no assumptions about \(\mu\). On the other hand, the equation assumption \(P_{c}=\overline{P}_{c}\) imposed in the theorem appears to be highly restrictive, prompting us to seek a sufficient condition that yields the equation. For example, can the absolute continuity of \(\mu\) with respect to Lebesgue measure imply the equation (with suitable additional conditions on the cost)? Finally, the uniqueness of \(\mu_{1}\) given a fixed model \(\pi\) is obtained by a standard argument in optimal transport through mixing two optimal solutions and invoking the result \(\mu_{1}\perp\mu-\mu_{1}\). 
When \((\pi,\mu_{1})\) and \((\pi^{\prime},\mu_{1}^{\prime})\) are both optimal (with possibly \(\pi\neq\pi^{\prime}\)), it is an open question whether \(\mu_{1}=\mu_{1}^{\prime}\) under suitable conditions. This is due to the nonconvexity of the domain of the problem (1.2) in terms of \((\gamma_{1},\gamma_{2})\). The remainder of the paper is structured as follows. The theorem will be proved utilizing a duality and its attainment result. They will be discussed in Section 2. Section 3 then presents proofs of the results. ## 2. Duality In this section, we consider cost functions more general than (1.1), such as \[\vec{c}=(c_{1},c_{2},...,c_{L}),\ c_{l}=c_{l}(x,y)\in\mathbb{R},\ l=1,2,...,L. \tag{2.1}\] Throughout this section, we assume the following. [**A'**] \(c_{l}\) are continuous for all \(l\), \(\mu\preceq_{c}\nu\), and \(\sum_{l=1}^{L}|c_{l}(x,y)|\leq v(x)+w(y)\) for some continuous functions \(v\in L^{1}(\mu)\), \(w\in L^{1}(\nu)\). As noted, the domain of the problem (1.2), in terms of the variable \((\gamma_{1},\gamma_{2})\), is nonconvex. This leads us to consider a relaxed problem for (1.2); see also [1] for related results. Let \(\mathcal{M}:=\cup_{\mu\preceq_{c}\nu}\mathcal{M}(\mu,\nu)\), that is, \(\mathcal{M}\) is the set of all martingale transports between some probability marginals in convex order, hence \(\mathcal{M}\subseteq\mathcal{P}(\mathbb{R}^{2})\). Let \(\overline{\mathcal{M}}\) be the set of all martingale transports with arbitrary nonnegative finite total mass, that is, \(\gamma\in\overline{\mathcal{M}}\) if \(\gamma\equiv 0\) or \(\gamma/||\gamma||\in\mathcal{M}\) where \(||\gamma||=\int_{\mathbb{R}^{2}}\gamma(dx,dy)\in(0,\infty)\) denotes the total mass. Define \[\mathcal{M}_{L}(\mu,\nu):=\left\{\vec{\gamma}=(\gamma_{1},...,\gamma_{L}) \,\bigg{|}\,\sum_{l=1}^{L}\gamma_{l}\in\mathcal{M}(\mu,\nu)\ \text{and}\ \gamma_{l}\in\overline{\mathcal{M}}\ \text{for all}\ l=1,...,L.\right\}\] \(\mathcal{M}_{L}(\mu,\nu)\) is clearly convex. Now we define the relaxed problem \[\overline{P}_{c}:=\sup_{\vec{\gamma}\in\mathcal{M}_{L}(\mu,\nu)}\sum_{l=1}^{L }\mathbb{E}_{\gamma_{l}}[c_{l}]. \tag{2.2}\] The difference is that in (1.2) (with the generalized cost (2.1)), \(\{\gamma_{l}\}_{l}\) are assumed to have the same kernel \(\pi_{x}\) inherited from a model \(\pi\in\mathcal{M}(\mu,\nu)\), whereas in (2.2), this restriction is relaxed. Both problems satisfy the condition \(\sum_{l}\gamma_{l}\in\mathcal{M}(\mu,\nu)\). Hence, \(P_{c}\leq\overline{P}_{c}\). We turn to the dual problem of (2.2). Define \(\overline{\Psi}_{c}\) to be the space of functions \((\varphi,\psi,\vec{\theta})=(\varphi,\psi,\theta_{1},...,\theta_{L})\) such that \(\varphi\in C(\mathbb{R})\cap L^{1}(\mu)\), \(\psi\in C(\mathbb{R})\cap L^{1}(\nu)\), \(\theta_{l}\in C_{b}(\mathbb{R})\), satisfying \[c_{l}(x,y)\leq\varphi(x)+\psi(y)+\theta_{l}(x)(y-x)\ \ \text{for all}\ l=1,...,L\ \text{and}\ (x,y)\in\mathbb{R}^{2}. \tag{2.3}\] The dual problem to (2.2) is now given by \[\overline{D}_{c}:=\inf_{(\varphi,\psi,\vec{\theta})\in\overline{\Psi}_{c}}\mu( \varphi)+\nu(\psi). \tag{2.4}\] A duality result is the following. **Proposition 2.1**.: _Assume_ **[A']**_. Then \(\overline{P}_{c}=\overline{D}_{c}\)._ For the financial meaning of the dual problems in terms of American option superhedging, we refer to [1, 3, 4, 5, 6, 17, 18, 20]. The additional element required to prove Theorem 1.1 is the dual attainment result, which asserts that there is an appropriate solution to the dual problem (2.4). 
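Before turning to attainment, we record the elementary half of Proposition 2.1 (weak duality, \(\overline{P}_{c}\leq\overline{D}_{c}\)), which also encodes the superhedging interpretation of the dual: for any \((\varphi,\psi,\vec{\theta})\in\overline{\Psi}_{c}\) and any \(\vec{\gamma}=(\gamma_{1},...,\gamma_{L})\in\mathcal{M}_{L}(\mu,\nu)\), \[\sum_{l=1}^{L}\mathbb{E}_{\gamma_{l}}[c_{l}]\leq\sum_{l=1}^{L}\int\big{(}\varphi(x)+\psi(y)+\theta_{l}(x)(y-x)\big{)}d\gamma_{l}=\mu(\varphi)+\nu(\psi),\] since each \(\gamma_{l}\in\overline{\mathcal{M}}\) makes the term \(\int\theta_{l}(x)(y-x)d\gamma_{l}\) vanish and \(\sum_{l}\gamma_{l}\) has marginals \(\mu,\nu\); taking the supremum over \(\vec{\gamma}\) and then the infimum over \(\overline{\Psi}_{c}\) gives \(\overline{P}_{c}\leq\overline{D}_{c}\).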
For \(\xi\in\mathcal{P}(\mathbb{R})\), its potential function is defined by \(u_{\xi}(x):=\int|x-y|d\xi(y)\). Then we say that a pair of probabilities \((\mu,\nu)\) in convex order is irreducible if the set \(I:=\{x\in\mathbb{R}\,|\,u_{\mu}(x)<u_{\nu}(x)\}\) is a connected (open) interval containing the full mass of \(\mu\), i.e., \(\mu(I)=\mu(\mathbb{R})\). **Proposition 2.2**.: _Assume_ **[A']** _and suppose \((\mu,\nu)\) is irreducible. Then there exists a dual optimizer \((\varphi,\psi,\vec{\theta})\), \(\varphi,\psi:\mathbb{R}\to\mathbb{R}\cup\{+\infty\}\), \(\theta_{l}:\mathbb{R}\to\mathbb{R}\), that satisfies (2.3) tightly in the following pathwise sense (but need not be in \(\overline{\Psi}_{c}\)):_ \[c_{l}(x,y)=\varphi(x)+\psi(y)+\theta_{l}(x)(y-x)\ \ \gamma_{l}-a.e.,\ \ \text{for all}\ \,l=1,...,L \tag{2.5}\] _for every solution \(\vec{\gamma}=(\gamma_{1},...,\gamma_{L})\) to the problem (2.2)._ We emphasize that \((\varphi,\psi,\vec{\theta})\) may not be in \(\overline{\Psi}_{c}\) but are only measurable, with \(\varphi,\psi\) real-valued \(\mu,\nu\)-a.s., respectively. They need not be integrable or continuous. ## 3. Proofs Proof of Proposition 2.1.: Let \(\mathcal{N}\) be the set of all nonnegative finite measures on \(\mathbb{R}^{2}\) (which need not be martingales). For \(\gamma\in\mathcal{N}\), let \(\gamma^{X},\gamma^{Y}\) denote its marginal on the \(x,y\)-coordinate respectively. Let \(\varphi\in C(\mathbb{R})\cap L^{1}(\mu)\), \(\psi\in C(\mathbb{R})\cap L^{1}(\nu)\), \(\theta_{l}\in C_{b}(\mathbb{R})\). We assert that the following equalities hold: \[\overline{P}_{c}=\sup_{\vec{\gamma}\in\mathcal{M}_{L}(\mu,\nu)}\sum_{l=1}^{L}\mathbb{E}_{\gamma_{l}}[c_{l}]\] \[=\sup_{\gamma_{l}\in\mathcal{N}\,\forall l}\ \inf_{(\varphi,\psi,\vec{\theta})}\sum_{l}\gamma_{l}(c_{l})+(\mu-\sum_{l}\gamma_{l}^{X})(\varphi)+(\nu-\sum_{l}\gamma_{l}^{Y})(\psi)-\sum_{l}\gamma_{l}(\theta_{l}(x)(y-x))\] \[=\inf_{(\varphi,\psi,\vec{\theta})}\sup_{\gamma_{l}\in\mathcal{N}\,\forall l}\mu(\varphi)+\nu(\psi)+\sum_{l}\gamma_{l}(c_{l}(x,y)-\varphi(x)-\psi(y)-\theta_{l}(x)(y-x))\] \[=\inf_{c_{l}(x,y)\leq\varphi(x)+\psi(y)+\theta_{l}(x)(y-x)\,\forall l}\mu(\varphi)+\nu(\psi)=\overline{D}_{c}.\] The derivation of the equalities is fairly standard: the second equality holds because the infimum achieves \(-\infty\) as soon as \(\sum_{l}\gamma_{l}^{X}\neq\mu\), \(\sum_{l}\gamma_{l}^{Y}\neq\nu\), or \(\gamma_{l}\notin\overline{\mathcal{M}}\), implying that \(\vec{\gamma}\) in the second line must be in \(\mathcal{M}_{L}(\mu,\nu)\) to achieve the first supremum. The third equality is based on a standard minimax theorem, which asserts that the equality holds when the sup and inf are swapped. Because the objective function is bilinear, i.e., linear in each variable (\((\gamma_{l})_{l}\) and \((\varphi,\psi,\vec{\theta})\)), the minimax theorem holds in this case and we omit the detail. The fourth equality is because, if \(c_{l}(x,y)-\varphi(x)-\psi(y)-\theta_{l}(x)(y-x)>0\) for some \((x,y)\in\mathbb{R}^{2}\), one can select \(\gamma_{l}\in\mathcal{N}\) such that the last supremum in the third line achieves \(+\infty\), which prevents such \((\varphi,\psi,\vec{\theta})\) from achieving the first infimum. This implies \(c_{l}(x,y)-\varphi(x)-\psi(y)-\theta_{l}(x)(y-x)\leq 0\) for all \((x,y)\), in which case it is best to choose \(\gamma_{l}\equiv 0\) for the supremum in the third line. Proof of Proposition 2.2.: The proof consists of extending the ideas in [8, 9] to the vectorial cost (2.1). 
We will follow the five steps illustrated in [19], thereby omitting some details here but referring to the corresponding steps in [19]. **Step 1.**\(\sum_{l=1}^{L}|c_{l}(x,y)|\leq v(x)+w(y)\) for some continuous functions \(v\in L^{1}(\mu)\), \(w\in L^{1}(\nu)\). A dual optimizer exists for \(\vec{c}\) iff so does for \(\tilde{c}:=(c_{l}(x,y)+v(x)+w(y))_{l}\). Thus by replacing \(\vec{c}=(c_{1},...,c_{L})\) with \(\tilde{c}\), from now on we assume \(c_{l}\geq 0\) for all \(l\). As \(\overline{P}_{c}=\overline{D}_{c}\in\mathbb{R}\), we can find an approximating dual optimizer \((\varphi_{n},\psi_{n},\theta_{l,n})\in\overline{\Psi}_{c}\), \(n\in\mathbb{N}\), such that the following duality holds (for all \(l=1,...,L\)): \[\varphi_{n}(x)+\psi_{n}(y)+\theta_{l,n}(x)(y-x)\geq c_{l}(x,y)\geq 0, \tag{3.1}\] \[\mu(\varphi_{n})+\nu(\psi_{n})\searrow\overline{P}_{c}\ \ \text{as}\ \ n\to\infty. \tag{3.2}\] Define \(f_{n}=-\varphi_{n}\), \(h_{l,n}=-\theta_{l,n}\), so that (3.1) becomes \[f_{n}(x)+h_{l,n}(x)(y-x)\leq\psi_{n}(y)-c_{l}(x,y)\leq\psi_{n}(y). \tag{3.3}\] Define the convex functions \[\chi_{l,n}(y):=\sup_{x\in\mathbb{R}}f_{n}(x)+h_{l,n}(x)(y-x),\ \ \ \chi_{n}:=\sup_{l=1,...,L}\chi_{l,n}. \tag{3.4}\] Notice \(\chi_{l,n}(y)\geq f_{n}(y)+h_{l,n}(y)(y-y)=f_{n}(y)\) for all \(y\in\mathbb{R}\). Hence, \[f_{n}\leq\chi_{n}\leq\psi_{n}\ \ \text{for all}\ n. \tag{3.5}\] By (3.2), this yields the uniform integral bound \[\int\chi_{n}\,d(\nu-\mu)\leq\nu(\psi_{n})-\mu(f_{n})\leq C\ \ \text{for all}\ l=1,...,L\ \text{and}\ n\in\mathbb{N}. \tag{3.6}\] Using (3.6) and the assumption that \((\mu,\nu)\) is irreducible, a local uniform boundedness of \(\{\chi_{n}\}_{n}\) can be obtained (cf. Step 1 in the proof of [19, Theorem 1.2]): there exists an increasing sequence of compact intervals \(J_{k}:=[c_{k},d_{k}]\) and constants \(M_{k}\geq 0\) for each \(k\in\mathbb{N}\), such that \(\cup_{k=1}^{\infty}J_{k}=J\), and \[0\leq\sup_{n}\chi_{n}\leq M_{k}\,\,\,\text{in}\,\,\,J_{k}. \tag{3.7}\] **Step 2.** Given any approximating dual optimizer \((\varphi_{n},\psi_{n},\theta_{l,n})\) satisfying (3.2), (3.3), the goal is to suitably modify it and deduce pointwise convergence of \(\varphi_{n},\psi_{n}\) to some functions \(\varphi,\psi\)\(\mu,\nu\)-a.s. as \(n\to\infty\), respectively, where \(\varphi,\psi\in\mathbb{R}\cup\{+\infty\}\) are \(\mu,\nu\)-a.s. finite. From convexity of \(\chi_{n}\) with \(\mu\preceq_{c}\nu\), we deduce, for all \(n\), \[C\geq\nu(\psi_{n})-\mu(f_{n})\geq\nu(\chi_{n})-\mu(f_{n})\geq\mu(\chi_{n})-\mu(f_{n})=||\chi_{n}-f_{n}||_{L^{1}(\mu)}. \tag{3.8}\] Meanwhile, (3.3) gives \(f_{n}(x)+h_{l,n}(x)(y-x)-\psi_{n}(y)\leq-c_{l}(x,y)\leq 0\), hence \[f_{n}(x)+h_{l,n}(x)(y-x)-\psi_{n}(y)\leq\chi_{n}(y)-\psi_{n}(y)\leq 0.\] Integrating by any \(\pi\in\mathcal{M}(\mu,\nu)\) implies \[||\psi_{n}-\chi_{n}||_{L^{1}(\nu)}\leq\nu(\psi_{n})-\mu(f_{n})\leq C\,\,\,\,\text{for all}\,\,n. \tag{3.9}\] These uniform \(L^{1}\) bounds, combined with the local uniform bound (3.7) and the Komlos compactness theorem, imply the desired almost sure convergence of \(\{\varphi_{n}\}\) and \(\{\psi_{n}\}\) as presented in [9] and in Step 2 in the proof of [19, Theorem 1.2], thus we omit the detail here. Also, by following Step 3 in the same proof, one can deduce the following pointwise convergence of \(\chi_{n}\) to a convex function \(\chi\) \[\lim_{n\to\infty}\chi_{n}(y)=\chi(y)\in\mathbb{R}\,\,\,\text{for every}\,\,\,y\in J. 
\tag{3.10}\] **Step 3.** We have obtained the almost sure limit functions \(\varphi,\psi\), with \(f:=-\varphi\). We may define \(\varphi:=+\infty\) on a \(\mu\)-null set which includes \(\mathbb{R}\setminus I\), and \(\psi:=+\infty\) on a \(\nu\)-null set which includes \(\mathbb{R}\setminus J\), so that they are defined everywhere on \(\mathbb{R}\). We will show there exists a function \(\theta_{l}:\mathbb{R}\to\mathbb{R}\), with \(h_{l}:=-\theta_{l}\), \(l=1,...,L\), such that \[\varphi(x)+\psi(y)+\theta_{l}(x)(y-x)\geq c_{l}(x,y). \tag{3.11}\] For any function \(f:\mathbb{R}\to\mathbb{R}\cup\{+\infty\}\) which is bounded below by an affine function, let \(\text{conv}[f]:\mathbb{R}\to\mathbb{R}\cup\{+\infty\}\) denote the lower semi-continuous convex envelope of \(f\), that is the supremum of all affine functions \(\lambda\) satisfying \(\lambda\leq f\) (If there is no such \(\lambda\), let \(\operatorname{conv}[f]\equiv-\infty\).) Set \(H_{l,n}(x,y):=\operatorname{conv}[\psi_{n}(\,\cdot\,)-c_{l}(x,\,\cdot\,)](y)\). By (3.3), \[f_{n}(x)+h_{l,n}(x)(y-x)\leq H_{l,n}(x,y)\leq\psi_{n}(y)-c_{l}(x,y), \tag{3.12}\] because the left hand side is affine in \(y\). Letting \(y=x\) gives \(f_{n}(x)\leq H_{l,n}(x,x)\). Next, since the \(\limsup\) of convex functions is convex, we have \[\limsup_{n\to\infty}H_{l,n}(x,y) \leq\operatorname{conv}[\limsup_{n\to\infty}\big{(}\psi_{n}(\, \cdot\,)-c_{l}(x,\,\cdot\,)\big{)}](y)\] \[\leq\operatorname{conv}[\psi(\,\cdot\,)-c_{l}(x,\,\cdot\,)](y)=:H _{l}(x,y).\] Then by the convergence \(f_{n}\to f\) and the definition of \(H_{l}(x,y)\), we get \[f(x)\leq H_{l}(x,x),\text{ and }H_{l}(x,y)\leq\psi(y)-c_{l}(x,y).\] Set \(A:=\{x\in I\,|\,\lim_{n\to\infty}f_{n}(x)=f(x)\in\mathbb{R}\}\), so that \(\mu(A)=1\). Since \(y\mapsto H_{l}(x,y)\) is continuous in \(J\) for every \(x\in A\) due to the convexity of \(y\mapsto H_{l}(x,y)\) and \(\nu\)-a.s. finiteness of \(\psi\), the subdifferential \(\partial H_{l}(x,\,\cdot\,)(y)\) is nonempty, convex and compact for every \(y\in I=\operatorname{int}(J)\). This allows us to choose a measurable function \(h_{l}:A\to\mathbb{R}\) satisfying \(h_{l}(x)\in\partial H_{l}(x,\,\cdot\,)(x)\). Such choice yields (3.11) as follows: \[f(x)+h_{l}(x)(y-x)\leq H_{l}(x,x)+h_{l}(x)(y-x)\leq H_{l}(x,y)\leq\psi(y)-c_{l }(x,y).\] We may define \(h_{l}\equiv 0\) on \(\mathbb{R}\setminus A\), noting that \(f:=-\infty\) on \(\mathbb{R}\setminus A\). **Step 4.** We will show that for any functions \(\theta_{l}:\mathbb{R}\to\mathbb{R}\), \(l=1,...,L\) that satisfies (3.11) (whose existence was shown in the previous step), and for any maximizer \(\vec{\gamma}^{*}=(\gamma_{1}^{*},...,\gamma_{L}^{*})\in\mathcal{M}_{L}(\mu,\nu)\) for the problem (2.2), it holds \[\varphi(x)+\psi(y)+\theta_{l}(x)(y-x)=c_{l}(x,y)\quad\gamma_{l}^{*}-a.e.\text { for all }l=1,...,L. \tag{3.13}\] For any \(\vec{\gamma}=(\gamma_{1},...,\gamma_{L})\in\mathcal{M}_{L}(\mu,\nu)\), Assumption **[A]** yields \(c_{l}\in L^{1}(\gamma_{l})\). We claim \[\liminf_{n\to\infty}\sum_{l=1}^{L}\int\big{(}\varphi_{n}(x)+\psi_{n}(y)+ \theta_{l,n}(x)(y-x)\big{)}d\gamma_{l} \tag{3.14}\] \[\geq\sum_{l=1}^{L}\int\big{(}\varphi(x)+\psi(y)+\theta_{l}(x)(y-x)\big{)}d \gamma_{l}\ \ \text{for every }l.\] To see how the claim implies (3.13), let \(\vec{\gamma}^{*}\) be any maximizer for (2.2). 
Then \[\overline{P}_{c} =\lim_{n\to\infty}\sum_{l=1}^{L}\int\big{(}\varphi_{n}(x)+\psi_{n} (y)+\theta_{l,n}(x)(y-x)\big{)}d\gamma_{l}^{*}\] \[\geq\sum_{l=1}^{L}\liminf_{n\to\infty}\int\big{(}\varphi_{n}(x)+ \psi_{n}(y)+\theta_{l,n}(x)(y-x)\big{)}d\gamma_{l}^{*}\] \[\geq\sum_{l=1}^{L}\int\big{(}\varphi(x)+\psi(y)+\theta_{l}(x)(y-x )\big{)}d\gamma_{l}^{*}\] \[\geq\sum_{l=1}^{L}\int c_{l}(x,y)\,d\gamma_{l}^{*}=\overline{P}_ {c},\] hence equality holds throughout. Notice this yields (3.13), hence the proposition. To prove (3.14), fix any \(\vec{\gamma}=(\gamma_{1},...,\gamma_{L})\in\mathcal{M}_{L}(\mu,\nu)\). The nonnegativity (3.1) gives \(\gamma_{l}^{X}(\varphi_{n})+\gamma_{l}^{Y}(\psi_{n})\geq 0\), and (3.2) gives \(\sum_{l=1}^{L}(\gamma_{l}^{X}(\varphi_{n})+\gamma_{l}^{Y}(\psi_{n}))=\mu( \varphi_{n})+\nu(\psi_{n})\searrow\overline{P}_{c}\). This implies the sequence \(\{\gamma_{l}^{X}(\varphi_{n})+\gamma_{l}^{Y}(\psi_{n})\}_{n}\) is bounded for all \(l\). With this and (3.5), as in Step 2 (but \(\gamma_{l}^{X}\preceq_{c}\gamma_{l}^{Y}\) instead of \(\mu\preceq_{c}\nu\)), we deduce \[\sup_{n}||\chi_{n}+\varphi_{n}||_{L^{1}(\gamma_{l}^{X})}<\infty,\quad\sup_{n} ||\psi_{n}-\chi_{n}||_{L^{1}(\gamma_{l}^{Y})}<\infty,\quad\text{for all $l$}.\] From this, since \(\varphi_{n}\to\varphi\), \(\psi_{n}\to\psi\), \(\chi_{n}\to\chi\), by Fatou's lemma, we get \[\chi+\varphi\in L^{1}(\gamma_{l}^{X}),\quad\psi-\chi\in L^{1}(\gamma_{l}^{Y}),\] \[\liminf_{n\to\infty}\int(\chi_{n}+\varphi_{n})\,d\gamma_{l}^{X}\geq\int(\chi+ \varphi)d\gamma_{l}^{X},\ \liminf_{n\to\infty}\int(\psi_{n}-\chi_{n})\,d\gamma_{l}^{Y}\geq\int(\psi- \chi)d\gamma_{l}^{Y}.\] This allows us to proceed \[\liminf_{n\to\infty}\int\big{(}\varphi_{n}(x)+\psi_{n}(y)+\theta _{l,n}(x)(y-x)\big{)}d\gamma_{l}\] \[=\liminf_{n\to\infty}\int\big{(}\varphi_{n}(x)+\chi_{n}(x)-\chi_ {n}(y)+\psi_{n}(y)-\chi_{n}(x)+\chi_{n}(y)+\theta_{l,n}(x)(y-x)\big{)}d\gamma _{l}\] \[\geq\int(\chi+\varphi)d\gamma_{l}^{X}+\int(\psi-\chi)d\gamma_{l}^ {Y}+\liminf_{n\to\infty}\int\big{(}\chi_{n}(y)-\chi_{n}(x)+\theta_{l,n}(x)(y- x)\big{)}d\gamma_{l}.\] To handle the last term, disintegrate \(\gamma_{l}=(\gamma_{l})_{x}\otimes\gamma_{l}^{X}\), and let \(\xi_{n}:I\to\mathbb{R}\) be a sequence of functions satisfying \(\xi_{n}(x)\in\partial\chi_{n}(x)\). This allows us to proceed \[\int\big{(}\chi_{n}(y)-\chi_{n}(x)+\theta_{l,n}(x)(y-x)\big{)}d \gamma_{l}\] \[= \iint\big{(}\chi_{n}(y)-\chi_{n}(x)+\theta_{l,n}(x)(y-x)\big{)}( \gamma_{l})_{x}(dy)\gamma_{l}^{X}(dx)\] \[= \iint\big{(}\chi_{n}(y)-\chi_{n}(x)+\xi_{n}(x)(y-x)\big{)}( \gamma_{l})_{x}(dy)\gamma_{l}^{X}(dx),\] because \(\int\theta_{l,n}(x)(y-x)(\gamma_{l})_{x}(dy)=\int\xi_{n}(x)(y-x)(\gamma_{l})_ {x}(dy)=0\). Notice that the last integrand is nonnegative. Thus by repeated Fatou's lemma, we deduce \[\liminf_{n\to\infty}\int\big{(}\chi_{n}(y)-\chi_{n}(x)+\theta_{l,n}(x)(y-x)\big{)}d\gamma_{l}\] \[\geq\int\liminf_{n\to\infty}\bigg{(}\int\big{(}\chi_{n}(y)-\chi_ {n}(x)+\xi_{n}(x)(y-x)\big{)}(\gamma_{l})_{x}(dy)\bigg{)}\gamma_{l}^{X}(dx)\] \[\geq\int\bigg{(}\int\big{(}\chi(y)-\chi(x)+\xi(x)(y-x)\big{)}( \gamma_{l})_{x}(dy)\bigg{)}\gamma_{l}^{X}(dx),\] for some \(\xi(x)\in\partial\chi(x)\) which is a limit point of the bounded sequence \(\{\xi_{n}(x)\}_{n}\). Finally, in the last line, the inner integral equals \[\int\big{(}\chi(y)-\chi(x)+\theta_{l}(x)(y-x)\big{)}(\gamma_{l})_{x}(dy).\] This proves the claim, hence the proposition. We are prepared to prove Theorem 1.1. 
Proof of Theorem 1.1.: Fix any optimal pair \((\pi,\mu_{1})\) for the problem (1.2), and let \(\gamma_{l}=\pi_{x}\otimes\mu_{l}\), \(l=1,2\), with \(\mu_{2}=\mu-\mu_{1}\) and the kernel \(\{\pi_{x}\}_{x}\) inherited from \(\pi\). We understand \(c_{1}(x,y)=c_{1}(x)\) in the proof. Let us first assume that \(\mu\preceq_{c}\nu\) is irreducible. Because we assume \(P_{c}=\overline{P}_{c}\), by Proposition 2.2, with \(f=-\varphi\) and \(h_{l}=-\theta_{l}\), we have \[f(x)+h_{l}(x)(y-x)+c_{l}(x,y)\leq\psi(y)\ \ \text{for each}\ l=1,2\ \text{and}\ (x,y)\in\mathbb{R}^{2}, \tag{3.15}\] \[f(x)+h_{l}(x)(y-x)+c_{l}(x,y)=\psi(y)\ \ \gamma_{l}-a.e.\,(x,y)\ \text{for each}\ l=1,2. \tag{3.16}\] Now, saying that an American option holder randomizes her exercise between \(c_{1},c_{2}\) is equivalent to saying that the common mass of \(\mu_{1},\mu_{2}\) (written as \(\mu_{1}\wedge\mu_{2}\)) is nonzero. The common mass of \(\mu_{1},\mu_{2}\) is defined as the largest measure \(\rho\) satisfying \(\rho\leq\mu_{1}\) and \(\rho\leq\mu_{2}\). Since \(\gamma_{1}\) and \(\gamma_{2}\) have the same kernel, (3.16) implies \[f(x)+h_{l}(x)(y-x)+c_{l}(x,y)=\psi(y)\quad\pi_{x}\otimes\rho-a.e.\,(x,y)\text{ for }l=1,2. \tag{3.17}\] Observe that \(\psi\) can be taken as \(\psi:=\max(\psi_{1},\psi_{2})\), where \[\psi_{l}(y):=\sup_{x}f(x)+h_{l}(x)(y-x)+c_{l}(x,y),\] and consequently, \(\psi_{1},\psi_{2},\psi\) are all convex since \(c_{2}\) is convex in \(y\) (while \(c_{1}\) is independent of \(y\)). Now the idea is to differentiate (3.17) by \(y\) for \(\nu\)-a.e.\,\(y\), which is enabled by the fact that \(\psi\) is differentiable \(\nu\)-a.s., since \(\nu\) is assumed to be absolutely continuous with respect to Lebesgue. By the differentiation combined with the first-order optimality condition from (3.15), (3.16) for each \(l=1,2\), we deduce \[h_{1}(x)=\psi^{\prime}(y)=h_{2}(x)+(c_{2})_{y}(x,y)\quad\pi_{x}\otimes\rho-a.e.\,(x,y), \tag{3.18}\] where \((c_{2})_{y}\) denotes the partial derivative of \(c_{2}\) by \(y\), noting that (3.15), (3.16) imply that \((c_{2})_{y}(x,y)\) exists \(\gamma_{2}\)-a.e., since \(\psi\) is differentiable \(\nu\)-a.e. Now since \(c_{1}=c_{1}(x)\), the left hand side of (3.15) is linear in \(y\) when \(l=1\), while \(\psi\) is convex. With this, the first equality in (3.18) implies that for \(\rho\)-a.e.\,\(x\), \(\psi\) is linear in the smallest interval containing \(\operatorname{spt}(\pi_{x})\) which contains \(x\). Hence, \[\psi^{\prime}(y)=\psi^{\prime}(x)\quad\pi_{x}\otimes\rho-a.e.\,(x,y). \tag{3.19}\] The second equality in (3.18) thus becomes \[(c_{2})_{y}(x,y)=\psi^{\prime}(x)-h_{2}(x)\quad\pi_{x}\otimes\rho-a.e.\,(x,y). \tag{3.20}\] Because \(c_{2}\) is assumed to be strictly convex in \(y\), the solution \(y\) to (3.20) must be unique, and hence, \(y=x\) since \(\pi_{x}\) has its barycenter at \(x\). We conclude \[\pi_{x}=\delta_{x}\quad\rho-a.e.\,x, \tag{3.21}\] where \(\delta_{x}\in\mathcal{P}(\mathbb{R})\) is the Dirac mass at \(x\). (3.17) then yields \[c_{1}(x)=c_{2}(x,x)\quad\rho-a.e.\,x. \tag{3.22}\] Now if \(c_{1}(x)\neq c_{2}(x,x)\)\(\mu\)-a.s., then (3.22) implies \(\rho\equiv 0\), yielding \(\mu_{1}\perp\mu-\mu_{1}\) for any optimal pair \((\pi,\mu_{1})\). This proves the disjointness when \(\mu\preceq_{c}\nu\) is irreducible. For general \(\mu\preceq_{c}\nu\), it is well known that any convex-ordered pair \((\mu,\nu)\) can be decomposed as at most countably many irreducible pairs, and the decomposition is uniquely determined by the potential functions \(u_{\mu},u_{\nu}\). 
More precisely, we have: [9, Proposition 2.3] Let \((I_{k})_{1\leq k\leq N}\) be the open components of the open set \(\{u_{\mu}<u_{\nu}\}\) in \(\mathbb{R}\), where \(N\in\mathbb{N}\cup\{+\infty\}\). Let \(I_{0}=\mathbb{R}\setminus\cup_{k\geq 1}I_{k}\) and \(\mu_{k}=\mu\big{|}_{I_{k}}\) for \(k\geq 0\), so that \(\mu=\sum_{k\geq 0}\mu_{k}\). There exists a unique decomposition \(\nu=\sum_{k\geq 0}\nu_{k}\) such that \[\mu_{0}=\nu_{0},\text{ and }(\mu_{k},\nu_{k})\text{ is irreducible for }k\geq 1\text{ with }\,\mu_{k}(I_{k})=\mu_{k}(\mathbb{R}).\] Moreover, any \(\pi\in\mathcal{M}(\mu,\nu)\) admits a unique decomposition \(\pi=\sum_{k\geq 0}\pi_{k}\) such that \(\pi_{k}\in\mathcal{M}(\mu_{k},\nu_{k})\) for all \(k\geq 0\). Here, \(\pi_{0}\) must be the identity transport, i.e., \((\pi_{0})_{x}=\delta_{x}\), since it is a martingale transport between the same marginal. Since the theorem has already been proven for the irreducible pairs \((\mu_{k},\nu_{k})\), \(k\geq 1\), we only need to prove it for the identity transport \(\pi_{0}\). In this case, \(\int c_{2}(x,y)(\pi_{0})_{x}(dy)=c_{2}(x,x)\), yielding that it is optimal to exercise \(c_{1}\) when \(c_{1}(x)>c_{2}(x,x)\), while it is optimal to exercise \(c_{2}\) when \(c_{1}(x)<c_{2}(x,x)\). The assumption \(c_{1}(x)\neq c_{2}(x,x)\)\(\mu\)-a.s. therefore proves \(\mu_{1}\perp\mu-\mu_{1}\). Finally, if \((\pi,\mu_{1})\) and \((\pi,\mu_{1}^{\prime})\) are both optimal, let \(\gamma_{l}=\pi_{x}\otimes\mu_{l}\) and \(\gamma_{l}^{\prime}=\pi_{x}\otimes\mu_{l}^{\prime}\), \(l=1,2\). Let \(\tilde{\gamma}_{l}=(\gamma_{l}+\gamma_{l}^{\prime})/2\). Then \((\tilde{\gamma}_{1},\tilde{\gamma}_{2})\) is an optimal solution to (1.2) since \(\gamma_{l}\) and \(\gamma_{l}^{\prime}\) share the same kernel. Now \(\mu_{1}\neq\mu_{1}^{\prime}\) implies \(\tilde{\gamma}_{1}^{X}\not\perp\tilde{\gamma}_{2}^{X}\), a contradiction.
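As a concrete illustration of the decomposition just invoked, the sketch below (ours; the marginals are arbitrary illustrative choices) tabulates the potential functions \(u_{\mu},u_{\nu}\) of two discrete marginals on a grid and reads off the open set \(\{u_{\mu}<u_{\nu}\}\), whose connected components are the irreducible domains \(I_{k}\).

```python
import numpy as np

def potential(support, weights, grid):
    """u_xi(x) = E_xi |x - Y|, evaluated at every grid point."""
    return np.abs(grid[:, None] - support[None, :]) @ weights

# Illustrative marginals in convex order (same mean, nu more spread out).
x_mu, w_mu = np.array([0.9, 1.0, 1.1]), np.array([1/3, 1/3, 1/3])
x_nu, w_nu = np.array([0.8, 1.0, 1.2]), np.array([1/3, 1/3, 1/3])

grid = np.linspace(0.5, 1.5, 2001)
gap = potential(x_nu, w_nu, grid) - potential(x_mu, w_mu, grid)  # u_nu - u_mu >= 0

# Maximal runs where u_mu < u_nu approximate the irreducible components I_k.
components, current = [], None
for point, positive in zip(grid, gap > 1e-12):
    if positive:
        current = [point, point] if current is None else [current[0], point]
    elif current is not None:
        components.append(tuple(current)); current = None
if current is not None:
    components.append(tuple(current))

print(components)  # here a single interval ~(0.8, 1.2): the pair (mu, nu) is irreducible
```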
2308.01552
InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent
This research paper delves into the integration of OpenAI's ChatGPT into embodied agent systems, evaluating its influence on an interactive decision-making benchmark. Drawing a parallel to the concept of people assuming roles according to their unique strengths, we introduce InterAct. In this approach, we feed ChatGPT with varied prompts, assigning it numerous roles, such as a checker and a sorter, then integrating them with the original language model. Our research shows a remarkable success rate of 98% in AlfWorld, which consists of 6 different tasks in a simulated household environment, emphasizing the significance of proficient prompt engineering. The results highlight ChatGPT's competence in comprehending and performing intricate tasks effectively in real-world settings, thus paving the way for further advancements in task planning.
Po-Lin Chen, Cheng-Shang Chang
2023-08-03T06:19:58Z
http://arxiv.org/abs/2308.01552v1
# InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent ###### Abstract This research paper delves into the integration of OpenAI's ChatGPT into embodied agent systems, evaluating its influence on an interactive decision-making benchmark. Drawing a parallel to the concept of people assuming roles according to their unique strengths, we introduce InterAct. In this approach, we feed ChatGPT with varied prompts, assigning it numerous roles, such as a checker and a sorter, then integrating them with the original language model. Our research shows a remarkable success rate of 98% in AlfWorld, which consists of 6 different tasks in a simulated household environment, emphasizing the significance of proficient prompt engineering. The results highlight ChatGPT's competence in comprehending and performing intricate tasks effectively in real-world settings, thus paving the way for further advancements in task planning. **Keywords:** ChatGPT, AlfWorld, Task planning, InterAct. ## I Introduction The advent of large language models (LLMs), underpinned by transformative advancements in natural language processing (NLP), has stimulated a revolution across a wide range of applications. Exemplified by models such as Transformer [1], T5 [2], and GPT-4 [3], these language models have achieved impressive results in diverse tasks like paragraph summarization, language translation, and code optimization. These achievements can be attributed to their ability to absorb and process massive amounts of data, making sense of the patterns and structures within the text. ChatGPT [4] is an AI language model created by OpenAI, which has been trained using a combination of pretraining and fine-tuning with human feedback. This advanced model is built on the Transformer model, enabling it to produce responses that closely resemble human language. By undergoing extensive training on vast volumes of text data, ChatGPT excels in understanding and generating text in various languages and fields, answering queries, and engaging in dialogues. Unlike its predecessors that operate primarily based on a single prompt, ChatGPT combines text generation with code synthesis, thereby significantly enhancing its interactive abilities. In this paper, we assess the ability of ChatGPT to make decisions within the context of an AlfWorld simulated environment [5]. The aim is to understand the model's proficiency in absorbing and processing data to make rational decisions. Scholarly works such as ReAct [6] and Reflexion [7] showcase the decision-making, action-initiation, and reflective powers of LLMs, paving the way for remarkable progress in a range of text-based performance metrics. However, they all utilize a single language model (InstructGPT) which, despite numerous iterations of thought and reflection, often repeatedly commits the same mistakes. In this research, we devise a novel model, InterAct, which is founded on the architecture of the ReAct model [6]. It alters the prompt formulations and incorporates ChatGPT in different roles for support. In particular, we add a _checker_ module to tackle the issue of object misidentification. The initial basic prompt has also been revised to bolster InterAct's capabilities in constructing comprehensive search paths. This approach effectively addresses the previously mentioned shortcomings of the ReAct model. Consequently, this approach yielded a success rate of 98% in this benchmark, a significant improvement from the base ReAct agent's accuracy of 75%. 
These experiments provide critical insights into the potential benefits and limitations of implementing ChatGPT in AI-driven systems and technologies. In conclusion, the main insight of the paper is the advancement of AI language models like ChatGPT presents an exciting opportunity to revolutionize and reshape our interaction with technology. By leveraging these models, we can build more intuitive, responsive, and smart technologies that can effectively understand and respond to human requirements. The key contributions of our research are summarized below: 1. We introduce InterAct, an improved method where each agent, like ChatGPT, can showcase unique abilities, adeptly rectifying the limitations found in the ReAct model, such as object misidentification and inefficient planning. 2. We have designed new trajectory prompts that enable the agent to flawlessly locate items during its search process. 3. In a decision-making test within the AlfWorld simulated environment, InterAct demonstrated a 98% success rate, significantly higher than the 75% accuracy of the base ReAct agent, suggesting its potential benefits in AI-centric systems and technologies. ## II Related work **Dominance of Transformers for Robots** Transformers have emerged as the dominant architecture in various fields. Initially prominent in NLP [8, 9, 10], they have now extended their influence to include vision-based tasks [11], [12] and even reinforcement learning [13, 14]. In the realm of robotics, Transformers have found practical applications in diverse areas such as path planning [15, 16], object recognition [17], and grasping [18]. One notable example is RT-1 [19], which takes the utilization of Transformers that takes images from a robot's camera and natural language task instructions as inputs and directly outputs tokenized actions. RT-1 can also acquire new skills by observing other robots' experiences, opening opportunities for enhanced robot capabilities through multi-robot datasets. Another instance is SayCan [20], a study conducted by Google's AI team and Everyday Robots. This research employs PaLM [21] and an affordance function to empower robots to carry out complex tasks based on natural language instructions. The resulting system, PaLM-SayCan, transforms user instructions into actionable plans for the robot. Inner Monologue [22] has made further advancements by incorporating injected feedback from the environment. The work in [23] demonstrated that even without any training, sizable language models can be effectively prompted to produce credible action plans driven by goals. They also suggested multiple techniques to enhance the model's ability to generate executable outputs, all without the need for invasive probing or modifications to the underlying model. **GPT for Robotics** Moreover, recent publications, including [24, 25], and [26], have successfully incorporated models such as ChatGPT and GPT3.5 into the realm of robotics applications. These advancements facilitate interaction between the models and the environment or users, allowing for the correction of the robot's behavior. These papers showcase various prompts and outline a pipeline for the implementation of ChatGPT in robotics tasks. Additionally, they conduct experimental evaluations to assess ChatGPT's capability to execute a wide range of robotics tasks while striving to bridge the gap between natural language and actionable robot actions. 
**LLM for Robotics reasoning** The process of reasoning in robotics involves breaking down complex tasks into simpler subtasks that can be more easily solved by the LLM itself or with the aid of tools. Various approaches [27, 28] have been introduced to enable natural language agents to select their next action in text-based environments. One prominent approach is Chain-of-thought (CoT) reasoning, as proposed in [29]. This approach leverages emergent properties, such as reasoning and commonsense, to solve tasks through multiple steps. It enables the LLM to reason through a series of intermediate actions, leading to the desired outcome. Another approach called faithful reasoning, introduced in [30], decomposes multi-step reasoning into three distinct steps, each handled by a dedicated LLM. By dividing the task into these steps, faithful reasoning facilitates the LLM's ability to tackle complex computations effectively. Similar approaches like Scratchpad [31], which involves fine-tuning an LLM on intermediate computation steps, resulting in improved performance on multi-step computation problems. The Describe, Explain, Plan, and Select (DEPS) approach, introduced in [32], specifically developed to tackle the unique challenges of planning in open-ended environments such as Minecraft. This innovative system adeptly manages intricate tasks that demand meticulous, multi-step reasoning, effectively prioritizing sub-goals according to the agent's proximity. Notably, DEPS has exhibited remarkable results in enhancing the success rate of Minecraft tasks by offering insightful explanations for errors encountered during sub-task execution. As a groundbreaking planning agent, DEPS has achieved an unprecedented positive success rate in conquering the formidable ObtainDiamond task, marking a significant milestone in the field. A different strategy called DERA [33] presents an alternative approach by structuring a dialogue as a conversation between two agent types: "Researcher" and "Decider." The Researcher agent analyzes information and identifies key components of the problem, while the Decider agent autonomously combines the Researcher's insights and makes judgments on the final output. This approach has demonstrated notable enhancements compared to the baseline performance of GPT-4 [3] in evaluations conducted by human experts and quantitative metrics. Particularly, DERA has showcased significant advancements in safety-critical domains like healthcare. Additionally, the studies by [7, 34] have also incorporated reflection actions into the model. These reflection actions allow the model to refine its actions based on feedback received during the execution of tasks. By iteratively adjusting its actions and incorporating self-feedback, the model can improve its decision-making process and adapt to changing conditions. Our research aims to provide additional evidence supporting the effectiveness of ChatGPT in language-conditioned robotic learning simultaneously introducing novel architectures that facilitate reasoning through the coordination of various roles performed by LLMs. ## III Method: InterAct Structure In this section, we use the AlfWorld benchmark to test ChatGPT's reasoning capabilities, examining how it accomplishes household tasks step by step when provided only with a few-shot example. We will use not only ChatGPT but also a similar language model called InstructGPT (text-davinci-002). InstructGPT is particularly adept at tasks demanding succinct responses or benefiting from k-shot examples. 
In this particular task, unlike the previous demonstration, the model is required to integrate task-oriented actions with verbal reasoning. The model needs to possess the ability to think and reason like a human. When faced with dead ends, the model should be capable of adjusting its planning based on logical reasoning. ### _AlfWorld Dataset_ AlfWorld is a suite of text-based environments that challenge an agent to solve multi-step tasks in a variety of interactive environments, aligned with the ALFRED [35] benchmark. The ALFRED benchmark focuses on tasks that require an agent to accomplish high-level goals in a simulated household environment by navigating and interacting through text-based actions. In AlfWorld, there are six types of tasks that challenge the agent's ability to plan, track subgoals, and explore systematically. For example, a task in AlfWorld could be to "examine a paper under a desklamp." To achieve this goal, the agent needs to navigate to specific locations within the simulated household and interact with objects using text commands. The agent might need to issue commands like "go to coffeetable 1," "take paper 2," and "use desklamp 1" to complete the task. The complexity of the tasks in AlfWorld is intentionally designed to be challenging. Task instances can have more than 50 locations and may require an expert policy of more than 50 steps to solve. This complexity encourages the agent to effectively plan its actions, keep track of subgoals, and explore the environment systematically. For example, the agent may need to check all desks one by one to find the desklamp. One of the challenges presented in AlfWorld is the need to determine likely locations for common household items. For instance, a desklamp is likely to be found on desks, shelves, or dressers. This aspect of the environment provides an opportunity for LLMs to leverage their pretrained commonsense knowledge to make informed decisions about the likely locations of objects. In each environment of AlfWorld, the agent has the option to select an action from a list of permissible actions, denoted as \(A_{t}\) at time step t. Upon executing an action, the agent receives an observation, \(O_{t}\), and a reward, \(R(s_{t},a_{t})\), from the environment, which then determines the next state of the agent. AlfWorld offers a diverse set of six tasks and a total of over 3000 unique environments. These environments test the agent's ability to understand the task at hand, formulate a sequential plan consisting of subtasks, and carry out the necessary actions within the given environment. In our trials, we utilize the ReAct problem-solving strategy [6], which has demonstrated superior performance across a wide array of sequential decision-making tasks. ReAct is a strategy that allows the agent to reason and act by articulating its current thoughts and performing actions based on these thoughts. At each time step, the agent can either execute a \(<think>\) action to verbalize its internal thought process, or an \(<action>\) to induce a response from the environment. The set of possible actions in each state is not explicitly defined, providing the agent with full autonomy in determining its next moves. To prevent syntactic errors, we provide the agent with two domain-specific few-shot trajectories. ### _Model architecture_ We introduced a novel model called InterAct, which is built upon the foundation of ReAct. The architectural diagram of InterAct can be observed in Figure 1. 
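To make the division of labour concrete before describing each role, the sketch below shows one hypothetical way the checker and sorter calls discussed next could be wired around a ReAct-style loop. The `chat` helper, the prompts, and the environment interface (`env.step`, `env.locations`) are our own illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the InterAct idea: one LLM drives the think/act loop,
# while separate "sorter" and "checker" calls handle location ranking and
# object verification.  Nothing below is taken from the paper's code.
from typing import List

def chat(system: str, user: str) -> str:
    """Placeholder for a ChatGPT call (e.g. an OpenAI chat-completion request)."""
    raise NotImplementedError

def sort_locations(target: str, locations: List[str]) -> List[str]:
    """Sorter role: rank candidate receptacles by how likely they hold `target`."""
    reply = chat("You rank household locations.",
                 f"Rank {locations} by the chance of containing a {target}. "
                 "Answer with a comma-separated list, most likely first.")
    return [loc.strip() for loc in reply.split(",")]

def is_target(observation: str, target: str) -> bool:
    """Checker role: confirm that an observed object really is the target."""
    reply = chat("You verify object identities.",
                 f"Observation: {observation}\nIs a {target} present? Answer yes or no.")
    return reply.strip().lower().startswith("yes")

def run_episode(env, task: str, target: str, max_steps: int = 50) -> bool:
    """ReAct-style loop; the helper roles filter what the actor sees."""
    plan = sort_locations(target, env.locations)      # env.locations: assumed attribute
    history = f"Task: {task}\nSuggested search order: {plan}"
    for _ in range(max_steps):
        action = chat("You are a household agent. Reply with one thought or action.",
                      history)
        observation, done = env.step(action)          # assumed text-world interface
        if target in observation and not is_target(observation, target):
            observation += " (checker: this is not the requested object)"
        history += f"\n> {action}\n{observation}"
        if done:
            return True
    return False
```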
While ReAct has demonstrated impressive accuracy in diverse decision-making and knowledge-intensive tasks, it occasionally encounters common errors, including Perception Error, Object Misidentification, and Inefficient Planning. In simpler terms, although ReAct achieves state-of-the-art performance overall, there exists a small subset of tasks that remain unsolved due to minor imperfections in a single model. To address these challenges, InterAct leverages the combined strength of agents with distinct purposes, such as checker and sorter, to enhance the areas where ReAct is susceptible to errors. In addition, we have modified the original basic prompt to enhance InterAct's ability to plan comprehensive search paths when looking for multiple items, ensuring that no possible locations are overlooked. This optimization greatly improves the efficiency of the tasks being performed. **Sorter** When processing environmental data, ReAct initially needs to determine the likelihood of objects appearing in specific locations. However, this ranking process often falls short, leading to less efficient planning. This inefficiency may arise from the fact that the InstructGPT model (text-davinci-002) is not sufficiently trained in factual knowledge and common-sense reasoning. On the other hand, ChatGPT has been fine-tuned using Reinforcement Learning with Human Feedback (RLHF) and has demonstrated a more nuanced understanding of various situations. It excels at making well-informed decisions, as depicted in Figure 2. To improve the efficiency of predicting object locations, we integrate ChatGPT as a decision-making component. Whenever ReAct requires this procedure, it can autonomously utilize ChatGPT, thus enhancing the effectiveness of its object search operations. **Checker** Another issue with text-davinci-002 is that it tends to mistakenly categorize similar objects as the same. For example, it might treat a pan and a pot as identical items, leading to the problem of Object Misidentification, as depicted in Figure 3. To address this issue, we employ ChatGPT as a checker by providing it with appropriate prompts. We have observed that ChatGPT can successfully distinguish between similar objects. Furthermore, we utilize the results from this checker as observations and feed them back to the LLM, as illustrated in Figure 1. This approach helps us resolve the problem related to object misidentification. Fig. 1: The architecture of both ReAct and InterAct. InterAct involves the integration of LLM with various agents to facilitate smoother interaction with the environment. Fig. 2: The left image was generated using text-davinci-002 for search ranking, while the right image was generated using ChatGPT. It can be observed that ChatGPT exhibits higher logical reasoning in finding objects compared to text-davinci-002. **Trajectory planning** In the AlfWorld environment, we encountered a mission type named "pick 2", where the agent is required to find two identical objects. We observed that ReAct alone tends to forget its previous locations, resulting in inefficient trajectories characterized by frequent revisits to the same place. In some instances, this led to hallucinations, defined as consecutive identical actions with the environment responding similarly. To address this issue, we made changes to the original model's prompt. 
After finding the first object and placing it in the corresponding receptacle, we allow the model to autonomously generate a trajectory while ensuring that this path does not overlook areas where the second object might be present, as shown in Figure 4. For more details about the prompts, we refer the reader to Appendix A.

## IV Evaluation

In this section, we present a comparative analysis of the performance enhancement provided by the helpers (sorter or checker) and the new trajectory planning when compared to the baseline model. Our findings demonstrate that InterAct consistently outperforms ReAct on AlfWorld (as shown in Table I) across all tasks. On AlfWorld, the top-performing InterAct trial achieves an impressive average success rate of 98%, falling short in only 2 out of 134 tasks. This performance is significantly better than the best trials of ReAct (73%) and BUTLER (37%). Indeed, InterAct has demonstrated exceptional proficiency in handling these tasks, as evidenced by achieving a 100% success rate in four out of the six tasks. This performance showcases InterAct's remarkable ability to effectively manage and succeed in various tasks. Notably, even when ReAct is augmented only with a checker or sorter, the overall average performance surpasses that of ReAct without helpers by a significant margin. The tasks that show the most substantial improvement are "pick2" and "clean," with approximate gains of 47% and 41%, respectively. From a qualitative standpoint, we observed that ReAct, without any helper, struggles to accurately determine whether an item is present in a specific location and tends to employ ineffective search strategies.

Fig. 4: Trajectory planning. In the initial scenario, the agent fails to retrieve the second pillow from the armchair after placing the first pillow on the sofa. Consequently, the agent cannot find the second pillow, resulting in an incomplete task. In the revised scenario, InterAct addresses this issue by considering the future search trajectory. It prioritizes returning to the armchair to search for the second pillow before exploring the other areas. This approach improves the chances of successfully locating the second pillow and completing the task.

Fig. 3: Object Misidentification. In this scenario, the objective is to locate a pan; however, ReAct mistakenly misidentifies another object as the pan.

## V Discussion and Limitations

### _Scalability of InterAct_

Our InterAct model is scalable and adaptable to different datasets and scenarios. For instance, if there is a need for a feature similar to 'memories,' we can develop an interpreter to describe the current path, among other things, without having to train numerous different language models. This is possible because ChatGPT serves as an excellent backbone for such extensions.

### _Error assessment with a supervisor module_

Despite achieving an impressive average performance of 98% on the AlfWorld dataset, our analysis of failed trajectories uncovered certain limitations. One notable drawback is the model's heavy reliance on prompt completeness within InterAct. When our examples contain missing or unaddressed components, the model fails to detect these errors, resulting in repetitive actions, even for trivial mistakes. To overcome this issue, we explored the possibility of using an alternative ChatGPT model as a supervisor to identify such errors.
However, it's important to acknowledge that the accuracy of the supervisor's judgment cannot be guaranteed, and there may be occasional misidentifications leading to "action errors." In order to tackle the challenge of error detection, we conducted a comparison between ChatGPT and GPT-4. The results demonstrated a significant improvement in error detection performance with GPT-4. Unfortunately, GPT-4 is currently unavailable as an open-source model and cannot be accessed free of charge. Conducting extensive simulations using GPT-4 requires funding support. ### _Insufficiency of the dataset_ While AlfWorld is a valuable platform for assessing AI performance, it has certain limitations. Primarily, it encompasses only six types of tasks, and even within these categories, the task quantity is quite limited. These restrictions neither fully test nor make optimal use of the AI systems' capabilities. If we move to an environment offering a larger range and diversity of tasks, as well as a broader and more varied set of locations, our model will still need improvement to maintain its current level of accuracy. This aspect will be our focus for future research. ## VI Conclusion Our research is centered on enhancing the task planning capabilities of large language models. We developed a new model, InterAct, built upon the framework of the ReAct model. InterAct is a culmination of various 'helpers' (like checkers and sorters) and aims to improve upon the existing trajectory. We evaluated this framework in the AlfWorld simulated environment, where it showed a substantial increase in decision-making accuracy, soaring from 75% to an impressive 98%. This highlights the vast potential of these models in AI-driven systems and technologies. In essence, this study underscores the revolutionary potential of AI language models like ChatGPT and their pivotal role in shaping future real-world interactions. As we continue to delve into their capabilities, we are on the cusp of a new technological era marked by not only intelligence but also intuitiveness and responsiveness to human needs.
2303.04758
RANG: Reconstructing reproducible R computational environments
A complete declarative description of the computational environment is often missing when researchers share their materials. Without such description, software obsolescence and missing system components can jeopardize computational reproducibility in the future, even when data and computer code are available. The R package rang is a complete solution for generating the declarative description for other researchers to automatically reconstruct the computational environment at a specific time point. The reconstruction process, based on Docker, has been tested for R code as old as 2001. The declarative description generated by rang satisfies the definition of a reproducible research compendium and can be shared as such. In this contribution, we show how rang can be used to make otherwise unexecutable code, spanning from fields such as computational social science and bioinformatics, executable again. We also provide instructions on how to use rang to construct reproducible and shareable research compendia of current research. The package is currently available from CRAN (https://cran.r-project.org/web/packages/rang/index.html) and GitHub (https://github.com/chainsawriot/rang).
Chung-hong Chan, David Schoch
2023-03-08T17:51:05Z
http://arxiv.org/abs/2303.04758v1
# Rang: Reconstructing reproducible R computational environments ###### Abstract A complete declarative description of the computational environment is often missing when researchers share their materials. Without such description, software obsolescence and missing system components can jeopardize computational reproducibility in the future, even when data and computer code are available. The R package rang is a complete solution for generating the declarative description for other researchers to automatically reconstruct the computational environment at a specific time point. The reconstruction process, based on Docker, has been tested for R code as old as 2001. The declarative description generated by rang satisfies the definition of a reproducible research compendium and can be shared as such. In this contribution, we show how rang can be used to make otherwise unexecutable code, spanning from fields such as computational social science and bioinformatics, executable again. We also provide instructions on how to use rang to construct reproducible and shareable research compendium of current research. The package is currently available from CRAN ([https://cran.r-project.org/web/packages/rang/index.html](https://cran.r-project.org/web/packages/rang/index.html)) and GitHub ([https://github.com/chainsawriot/rang](https://github.com/chainsawriot/rang)). R * reproducibility * docker ## 1 Background _"In some cases the polarization estimation will not work... This is NOT a problem in the method, it is entirely dependent on the numpy version (and even the OS's). If you have different versions of numpy or even the same version of numpy on a different OS configuration, different networks will fail randomly... [F]or instance, the 109th Congress will fail, but will work entirely normally on a different numpy version, which will fail on a different Congress network."_ - excerpt of this README file Other than bad programming practices (Trisovic et al. 2022), the main computing barrier to computational reproducibility is the failure to reconstruct the computational environment like the one used by the original researchers. This task looks trivially simple. But as computer science research has shown, this task is incredibly complex (Abate et al. 2015; Dolstra, Loh, and Pierron 2010). In the realm of a usual scripting language such as R 1, that pertains four aspects: a) operating system, b) system components such as libxml2, c) the exact R version, and d) what and which version of the installed R packages. We will call them Component A, B, C, D in the following sections. Any change in these four components can possibly affect the execution of any shared computer code. For example, the lack of the system component libxml2 can impact whether the R package xml2 can be installed on a Linux system. If the shared computer code requires the R package xml2, then the whole execution fails. In reality, the impact of Component A is relatively weak as mainstream, open source programming languages and their software libraries are usually cross platform. In modern computational research, Linux is the de-facto operating system in high performance computing environments (e.g. Slurm). Instead, the impact of Components B, C, and D is much higher. Component D is the most volatile among them all as there are many possible combinations of R packages and versions. 
Software updates with breaking changes (even in a dependency) might render existing shared code using those changed features not executable or not producing the same result anymore. Also, software obsolescence is commonplace, especially since academic software is often not well maintained due to lack of incentives (Merow et al. 2023). The DevOps (software development and IT operations) community is also confronted with this problem. The issue is usually half-jokingly referred to as "it works on my machine"-problem (Valstar, Griswold, and Porter 2020, a software works on someone's local machine but is not working anymore when deployed to the production system, indicates the software tacitly depends on the computational environment of the local machine). A partial solution to this problem from the DevOps community is called _containerization_. In essence, to containerize is to develop and deploy the software together with all the libraries and the operating system in an OS-level virtualization environment. In this way, software dependency issues can be resolved inside the isolated virtualized software environment and independent of what is installed on the local computer. Docker is a popular choice in the DevOps world for containerization. To build a container, one needs to write a plain text declarative description of the required computational environment. Inside this declarative description, it should pin down all four Components mentioned above. For Docker, it is in the form of a plain text file called Dockerfile. This Dockerfile is then used as the recipe to build a Docker image, where the four Components are assembled. Then, one can launch a container with the built Docker image. There has been many papers written on how containerization solutions such as Docker can be helpful also to foster computational reproducibility of science (e.g. Nust and Hinz 2019; Peikert and Brandmaier 2021; Boettiger and Eddelbuettel 2017). Although tutorials are available (e.g. Nust and Hinz 2019), providing a declarative description of the computational environment in the form of Dockerfile is far from the standard code sharing practice. This might be due to a lack of (DevOps) skills of most scientists to create a Dockerfile (Kim, Poline, and Dumas 2018). But there are many tools available to automate the process (e.g. Nust and Hinz 2019). The case in point described in this paper, rang, is one of them. We argue that rang is the only easy-to-use solution available that can pin down and restore all four components without the reliance on any commercial service such as MRAN. ### Existing solutions renv (Ushey 2022) (and its derivatives such as jetpack and its predecessor packrat) takes a similar approach to Python's virtualenv and Ruby's Gem to pin down the exact version of R packages using a "lock file". Other solutions such as checkpoint (Ooi, de Vries, and Microsoft 2022) depend on the availability of The Microsoft R Application Network (MRAN, a time-stamped daily backup of CRAN), which will be shut down on July 1st, 2023. groundhog (Simonsohn and Gruson 2023) used to depend on MRAN but has a plan to switch to their home-grown R package repository. These solution can effectively pin down Component C and D. But they can only restore component D. Also, for solutions depending on MRAN, there is a limit on how far back this reproducibility can go, since MRAN can only go back as far as September 17, 2014. Additionally, it only covers CRAN packages. 
containerit (Nust and Hinz 2019) takes the current state of the computational environment and documents it as a Dockerfile. containerit makes the assumption that Component A has a weak influence on computational reproducibility and therefore defaults to Linux-based Rocker images. In this way, it fixes Component A. But containerit does not pin down the exact version of R packages. Therefore, it can pin down Components A, B, C, but only a part of Component D. dockta is another containerization solution that can potentially pin down all components due to the fact that MRAN is used. But it also suffers from the same limitations mentioned above. It is also worth mentioning that MRAN is not the only archival service. Posit also provides a free (_gratis_) time-stamped daily backup of CRAN and Bioconductor (a series of repositories of R packages for bioinformatics and computational biology) called Posit Public Package Manager ([https://packagemanager.rstudio.com/client/#/repos/2/packages/](https://packagemanager.rstudio.com/client/#/repos/2/packages/)). It can go as far back as October 10, 2017. These solutions are better for prospective usage, i.e. using them now to ensure the reproducibility of the current research for future researchers. rang mostly targets retrospective usage, i.e. using rang to reconstruct historical R computational environments for which the declarative descriptions are not available. One can think of rang as an archaeological tool. In this realm, we could not find any existing solution targeting R specifically which does not currently depend on MRAN.

### Structure of this paper

In Section 2, we will explain how to use rang. In Section 3, rang is used to enable the reproducibility of published literature (with increasing sophistication). However, one can still use rang for prospective usage and arguably can ensure a longer term computational reproducibility than other solutions. In Section 4, rang is used to create an executable research compendium with Docker and Make (Baker 2020).

## 2 Basic usage

There are two important functions of rang: resolve() and dockerize(). resolve() queries various web services from the r-hub project of the R Consortium for information about R packages at a specific time point that is necessary for reconstructing a computational environment, e.g. (deep) dependencies (Component D), R version (Component C), and system requirements (Component B). For instance, if there was a computational environment constructed on 2020-01-16 (called the "snapshot date") with several natural language processing R packages, resolve() can be used to resolve all the dependencies of these R packages. Currently, rang supports CRAN, Bioconductor, GitHub, and local packages.

```
library(rang)
graph <- resolve(pkgs = c("openNLP", "LDAvis", "topicmodels", "quanteda"),
                 snapshot_date = "2020-01-16")
graph
```

The resolved result is an S3 object called rang. The information contained in a rang object can then be used to construct a computational environment in a similar manner as containerit, but with the packages and R versions pinned to the snapshot date. Then, the function dockerize() is used to generate the Dockerfile and other scripts in the output_dir.

```
dockerize(graph, output_dir = "docker")
```

For R >= 3.1, the images from the Rocker project are used (Boettiger and Eddelbuettel 2017). For R < 3.1 but >= 1.3.1, a custom image based on Debian is used. As of writing, rang does not support R < 1.3.1, i.e.
snapshot dates earlier than 2001-08-31 (which is 13 years earlier than all solutions depending on MRAN). There are two features of dockerize() that are important for future reproducibility.

1. By default, the container building process downloads source packages from their sources and then compiles them. This step depends on the future availability of R packages on CRAN (which is extremely likely to be the case in the near future, given the continuous availability since 1997-04-23) 2, Bioconductor, and GitHub. However, it is also possible to cache (or archive) the source packages now. The archived R packages can then be used instead during the building process. The significance of this step in terms of long-term computational reproducibility will be discussed in Section 4.

Footnote 2: [https://stat.ethz.ch/pipermail/r-announce/1997/000001.html](https://stat.ethz.ch/pipermail/r-announce/1997/000001.html)

```
dockerize(graph, output_dir = "docker", cache = TRUE)
```

2. It is also possible to install R packages in a separate library during the building process to isolate all these R packages from the main library.

```
dockerize(graph, output_dir = "docker", cache = TRUE, lib = "anotherlibrary")
```

For the sake of completeness, the instructions for building and running the Docker container on Unix-like systems are included here.

```
cd docker
## might need to sudo
docker build -t rang .
## interactive environment
docker run --rm --name "rangtest" -ti rang
```

### Project scanning

The first argument of resolve() is processed by a separate function called as_pkgrefs(). For interoperability, rang supports the "package references" standard 3 used also in other packages such as renv (Ushey 2022). It is mostly used for converting "shorthands" (e.g. xml2 and S4Vectors) to package references (e.g. cran::xml2 and bioc::S4Vectors).

Footnote 3: [https://r-lib.github.io/pkgdepends/reference/pkg_refs.html](https://r-lib.github.io/pkgdepends/reference/pkg_refs.html)

When as_pkgrefs() is applied to a single path of a directory, it scans all relevant files (DESCRIPTION, R scripts and R Markdown files) for all R packages used (based on renv::dependencies()). How it works is demonstrated in three of the examples below. But an important caveat is that it can only scan CRAN and Bioconductor packages.

## 3 Case Studies

The following are some examples of how rang can be used to make shared, but otherwise unexecutable, R code runnable again. The examples were drawn from various fields spanning from political science, psychological science, and bioinformatics.

### quanteda JOSS paper

The software paper of the text analysis R package quanteda was published on 2018-10-06 (Benoit et al. 2018). In the paper, the following R code snippet is included.

```
library("quanteda")
# construct the feature co-occurrence matrix
examplefcm <-
    tokens(data_corpus_irishbudget2010, remove_punct = TRUE) %>%
    tokens_tolower() %>%
    tokens_remove(stopwords("english"), padding = FALSE) %>%
    fcm(context = "window", window = 5, tri = FALSE)
# choose 30 most frequent features
topfeats <- names(topfeatures(examplefcm, 30))
# select the top 30 features only, plot the network
set.seed(100)
textplot_network(fcm_select(examplefcm, topfeats), min_freq = 0.8)
```

On 2023-02-08, this code snippet is not executable with the current version of quanteda (3.2.4). It is possible to install the "period appropriate" version of quanteda (1.3.4) using remotes on the current version of R (4.2.2). And indeed, the above code snippet can still be executed.
```
remotes::install_version("quanteda", version = "1.3.4")
```

The issue is that installing quanteda 1.3.4 this way installs the latest dependencies from CRAN. quanteda 1.3.4 uses a deprecated (but not yet removed) function of Matrix (as(<dgTMatrix>, "dgCMatrix")). If this function were removed in the future, the above code snippet would not be executable anymore. Using rang, one can query the version of quanteda on 2018-10-06 and create a Docker container with all the "period appropriate" dependencies. Here, the rstudio Rocker image is selected.

```
library(rang)
graph <- resolve(pkgs = "quanteda",
                 snapshot_date = "2018-10-06",
                 os = "ubuntu-18.04")
dockerize(graph, output_dir = "quanteda_docker", image = "rstudio")
```

The above code snippet can be executed with the generated container without any problem (Figure 1).

Figure 1: The code snippet running in an R 3.5.1 container created with rang

### Psychological Science

Crüwell et al. (2023) evaluate the computational reproducibility of 14 articles published in _Psychological Science_. Among these articles, the paper by Hilgard et al. (2019) has been rated as having "package dependency issues". All data and computer code are available from GitHub with the last commit on 2019-01-17 4. The R code contains a list of R packages used in the project as library() statements, including an R package on GitHub that is written by the main author of that paper. However, we identified one package (compute.es) that was not written in those library() statements but used with the namespace operator, i.e. compute.es::tes(). This undocumented package can be detected by renv::dependencies(), which is the provider of the scanning function of rang.

Footnote 4: [https://github.com/Joe-Hilgard/vvg-2d4d](https://github.com/Joe-Hilgard/vvg-2d4d)

Based on the above information, one can run resolve() to obtain the dependency graph of all R packages on 2019-01-17.

```
## scan all packages
r_pkgs <- as_pkgrefs("vvg-2d4d")
## replace cran::hilgard with GitHub
r_pkgs[r_pkgs == "cran::hilgard"] <- "Joe-Hilgard/hilgard"
graph <- resolve(r_pkgs, snapshot_date = "2019-01-17")
```

When running dockerize(), one can take advantage of the materials_dir parameter to transfer the shared materials from Hilgard et al. (2019) into the Docker image.

```
dockerize(graph, "hilgard", materials_dir = "vvg-2d4d", cache = TRUE)
```

We then built the Docker image and launched a Docker container. For this container, we changed the entry point from R to bash so that the container goes to the Linux command shell instead.

```
cd hilgard
docker build -t hilgard .
docker run --rm --name "hilgardcontainer" --entrypoint bash -ti hilgard
```

Inside the container, the materials are located in the materials directory. We used the following shell script to test the reproducibility of all R scripts.

```
cd materials
rfiles=(0_data_aggregation.R 1_data_cleaning.R 2_analysis.R 3_plotting.R)
for i in ${rfiles[@]}
do
    Rscript $i
    code=$?
    if [ $code != 0 ]
    then
        exit 1
    fi
done
```

All R scripts ran fine inside the container and the figures generated are the same as the ones in Hilgard et al. (2019).

### Political Analysis

The study by Trisovic et al. (2022) evaluates the reproducibility of R scripts shared on Dataverse. They found that 75% of R scripts cannot be successfully executed. Among these failed R scripts is an R script shared by Beck (2019). This R script has been "rescued" by the author of the R package groundhog (Simonsohn and Gruson 2023), as demonstrated in a blog post 5. We were wondering if rang can also be used to "rescue" the concerned R script.
The date of the R script, as indicated on Dataverse, is 2018-12-12. This date is used as the snapshot date.

Footnote 5: [http://datacolada.org/100](http://datacolada.org/100)

```
## as_pkgrefs is automatically run in this case
graph <- resolve("nathaniel", snapshot_date = "2018-12-12")
dockerize(graph, output_dir = "nat", materials_dir = "nathaniel")
```

```
cd nat
docker build -t nat .
docker run --rm --name "natcontainer" --entrypoint bash -ti nat
```

Inside the container:

```
cd materials
Rscript fn_5.R
```

The same file can thus also be "rescued" by rang.

### Recover a removed R package: maxent

The R package maxent introduces a machine learning algorithm with a small memory footprint and was available on CRAN until 2019. A software paper was published by the original authors in 2012 (Jurka 2012). The R package was also used in some subsequent automated content analytic papers (e.g. Lorcher and Taddicken 2017). Despite the covert editing of the package by a staffer of CRAN 6, the package was removed from CRAN in 2019 7. We attempted to install the second last (the original submitted version) and last (with covert editing) versions of maxent on R 4.2.2. Neither of them worked.

Footnote 6: [https://github.com/cran/maxent/commit/9d4c6ead27a1f41a78907b170ddd9a586192be9](https://github.com/cran/maxent/commit/9d4c6ead27a1f41a78907b170ddd9a586192be9)

Footnote 7: [https://cran-archive.r-project.org/web/checks/2019/2019-03-05_check_results_maxent.html](https://cran-archive.r-project.org/web/checks/2019/2019-03-05_check_results_maxent.html)

Using rang, we are able to reconstruct a computational environment with R 2.15.0 (2012-03-30) to run all code snippets published in Jurka (2012) 8. For removed CRAN packages, we strongly recommend querying the GitHub read-only mirror of CRAN instead ([https://github.com/cran](https://github.com/cran)). This is because, in this way, the resolved system requirements have a higher chance of being correct.

```
maxent <- resolve("cran/maxent", "2012-06-10")
dockerize(maxent, "maxentdir", cache = TRUE)
```

### Recover a removed R package: ptproc

The software paper of the R package ptproc was published in 2003 and introduced multidimensional point process models (Peng 2003). But the package has been removed from CRAN for over a decade (at least). The only release on CRAN was on 2002-10-10. The package is still listed in the "Handling and Analyzing Spatio-Temporal Data" CRAN Task View 9 despite being uninstallable without modification on any modern R system (see below). As of writing, the package, as a tarball file (tar.gz), is still downloadable from the original author's website 10.

Footnote 9: [https://cran.r-project.org/web/views/SpatioTemporal.html](https://cran.r-project.org/web/views/SpatioTemporal.html)

Footnote 10: [https://www.biostat.jhsph.edu/~rpeng/software/](https://www.biostat.jhsph.edu/~rpeng/software/)

Even with this over-a-decade removal, and although new packages with similar functionalities have been created, there is evidence that ptproc is still being sought after. As late as 2017, there were blog posts on how to install the long obsolete package on modern versions of R 11. The package is extremely challenging to install on a modern R system because the package was written before the introduction of name space management in R 1.7.0 (Tierney 2003). In other words, the available tarball file from the original author's website does not contain a NAMESPACE file as all other modern R packages do.
Footnote 11: [https://blog.mathandpencil.com/installing-ptproc-on-osx](https://blog.mathandpencil.com/installing-ptproc-on-osx) and [https://tomaxent.com/2017/03/16/Installing-ptproc-on-Ubuntu-16-04-LTS/](https://tomaxent.com/2017/03/16/Installing-ptproc-on-Ubuntu-16-04-LTS/)

The oldest version of R that rang can support, as of writing, is R 1.3.1. rang is probably the only solution available that can support the 1.x series of R (i.e. before 2004-10-04). Similar to the case of maxent above, a Dockerfile to assemble a Docker image with ptproc installed can be generated with two lines of code.

```
graph <- resolve("ptproc", snapshot_date = "2004-07-01")
dockerize(graph, "~/dev/misc/ptproc", cache = TRUE)
```

Suppose we have an R script, extracted from Peng (2003), called "peng.R" like this:

```
require(ptproc)
set.seed(1000)
x <- cbind(runif(100), runif(100), runif(100))
hPois.cond.int <- function(params, eval.pts, pts = NULL, data = NULL, TT = NULL) {
    mu <- params[1]
    if (is.null(TT))
        rep(mu, nrow(eval.pts))
    else {
        vol <- prod(apply(TT, 2, diff))
        mu * vol
    }
}
ppm <- ptproc(pts = x, cond.int = hPois.cond.int, params = 50,
              ranges = cbind(c(0, 1), c(0, 1), c(0, 1)))
fit <- ptproc.fit(ppm, optim.control = list(trace = 2), method = "BFGS")
summary(fit)
```

One can integrate rang into a BASH script to completely automate the batch execution of the above R script.

```
Rscript -e "require(rang); dockerize(resolve('ptproc', '2004-07-01'), 'pengdocker', cache = TRUE)"
docker build -t pengimg ./pengdocker
## launching a container in daemon mode
docker run -d --rm --name "pengcontainer" -ti pengimg
docker cp peng.R pengcontainer:/peng.R
docker exec pengcontainer R CMD BATCH peng.R
docker exec pengcontainer cat peng.Rout
docker cp pengcontainer:/peng.Rout peng.Rout
docker stop pengcontainer
```

The file peng.Rout contains the execution results of the script from inside the Docker container. As the random seed was preserved by the original author (Peng 2003), the above BASH script can perfectly reproduce the analysis 12.

Footnote 12: It is also important to note that the random number generator (RNG) of R has been changed several times over the course of the development. In this case, we are using the same generation of RNG as Peng (2003).

### Recover a removed Bioconductor package

Similar to CRAN, packages can also be removed over time from Bioconductor. The Bioconductor package Sushi has been deprecated by the original authors and is removed from Bioconductor version 3.16 (2022-11-02). Sushi is a data visualization tool for genomic data and was used in many online tutorials and scientific papers, including the original paper announcing the package by the original authors (Phanstiel et al. 2014). rang has native support for Bioconductor packages since version 0.2. We obtained the R script "PaperFigure.R" from the GitHub repository of Sushi 13, which generates the figure in Phanstiel et al. (2014). Similar to the above case of ptproc, we made a completely automated BASH script to run "PaperFigure.R" and get the generated figure out of the container (Figure 2). We made no modification to "PaperFigure.R".
Footnote 13: [https://github.com/PhanstielLab/Sushi/blob/master/vignettes/PaperFigure.R](https://github.com/PhanstielLab/Sushi/blob/master/vignettes/PaperFigure.R)

```
Rscript -e "require(rang); dockerize(resolve('Sushi', '2014-06-05'), 'sushidocker', no_rocker = TRUE, cache = TRUE)"
docker build -t sushiimg ./sushidocker
docker run -d --rm --name "sushicontainer" -ti sushiimg
docker cp PaperFigure.R sushicontainer:/PaperFigure.R
docker exec sushicontainer mkdir vignettes
docker exec sushicontainer R CMD BATCH PaperFigure.R
docker cp sushicontainer:/vignettes/Figure_1.pdf sushi_figure1.pdf
docker stop sushicontainer
```

Figure 2: The figure from the batch execution of PaperFigure.R inside a Docker container generated by rang

## 4 Preparing research compendia with long-term computational reproducibility

The above six examples show how powerful rang is for reconstructing tricky computational environments which have not been completely declared in the literature. Although we position rang mostly as an archaeological tool, we think that rang can also be used to prepare research compendia of current research. We can't predict the future, but research compendia generated by rang would probably have long-term computational reproducibility. To demonstrate this point, we took the recent paper by Oser et al. (2022). This paper was selected because 1) the paper was published in _Political Communication_, a high impact journal that awards Open Science Badges; 2) shared data and R code are available; and most importantly, 3) the shared R code is well-written. In the repository of this paper, based on the materials shared by Oser et al. (2022), we prepared a research compendium that should have long-term computational reproducibility. The research compendium is similar to the Executable Compendium suggested by the Turing way. The preparation of the research compendium is easy, as rang can scan a materials directory for all R packages used 14.

Footnote 14: We detected a minor issue in the code base that an undeclared GitHub package is used. But it can be easily solved, as in the Psychological Science example above.

```
require(rang)
## meta-analysis is the directory of all shared materials
cran_pkgs <- as_pkgrefs("meta-analysis")
## dmetar is an undeclared github package: MathiasHarrer/dmetar
cran_pkgs[cran_pkgs == "cran::dmetar"] <- "MathiasHarrer/dmetar"
x <- resolve(cran_pkgs, "2021-08-11", verbose = TRUE)
# print(x, all_pkgs = TRUE)
dockerize(x, "oserdocker", materials_dir = "meta-analysis", cache = TRUE)
```

The above R script is saved as oser.R. The central piece of the executable compendium is the Makefile.
The structure of the entire executable compendium looks like this: Makefile oser.R meta-analysis/ README.md oserdocker/ oserimg.tar.gz In this executable compendium, only the first four elements are essential. The directory oserdocker (116 MB) contains cached R packages, a Dockerfile, and a verbatim copy of the directory meta-analysis/ to be transferred into the Docker image. That can be regenerated by running make resolve. However, having this directory preserved insures against the situations that some R packages used in the project were no longer available or any of the information providers used by rang for resolving the dependency relationships were not available. (Or in the rare circumstance of rang is no longer available.) oserimg.tar.gz (667 MB) is a backup copy of the Docker image. This can be regenerated by running make export. Preserving this file insures against all the situations mentioned above, but also the situations of Docker Hub and the software repositories used by the dockerized operating system being not available. When oserimg.tar.gz is available, it is possible to run make rebuild and make render even without internet access (provided that Docker and make have been installed before). Of course, there is still an extremely rare situation where Docker (the program) itself is no longer available 15. However, it is possible to convert the image file for use on other containerization solutions such as Singularity 16, if Docker is really not available anymore. Footnote 15: We can’t imagine a world without Make, a tool that has been available since 1976. Footnote 16: [https://docs.sylabs.io/guides/3.0/user-guide/singularity_and_docker.html](https://docs.sylabs.io/guides/3.0/user-guide/singularity_and_docker.html) Sharing of research artifacts less than 1G is not as challenging as it used to be. Zenodo, for example, allows the sharing of 50G of files. Therefore, sharing of the last two components of the executable compendium prepared with rang is at least possible on Zenodo 17. However, for data repositories with more restrictions on data size, sharing the executable compendium without the last two parts could be considered sufficient. For that, run make will make the default target all and generate all the things needed for reproducing the analysis inside a container. Footnote 17: The complete version of the executable compendium is available from Zenodo: [https://doi.org/10.5281/zenodo.7708417](https://doi.org/10.5281/zenodo.7708417) The above Makefile is general enough that one can reuse it by just modifying how the R scripts (the r_cmd variable) in the materials directory are executed. This can be a starting point of a standard executable compendium format. ## 5 Concluding remarks This paper presents rang, a solution to (re)construct R computational environments based on Docker. As the six examples in Section 3 show, rang can be used archaeologically to rerun old code, many of them not executable without the analytic and reconstruction processes facilitated by rang. These retrospective use cases demonstrate how versatile rang is. rang is also helpful for prospective usage, as demonstrated in Section 4 whereby an executable compendium is created. There are still many features that we did not mention in this paper. rang is built with interoperability in mind. As of writing, rang is interoperable with existing R packages such as renv and R built-in sessionInfo(). Also, the rang object can be used for network analysis with R packages such as igraph. 
Computational reproducibility is a complex topic and as in all of these complex topics, there is no silver bullet (Canon and Younge 2019). All solutions have their trade-offs. The (re)construction process based on rang takes notably more time than other solutions because all packages are compiled from source. rang trades computational efficiency of this often one-off (re)constructing process for correctness, backward compatibility and independence from any commercial backups of software repositories such as MRAN. There are also other limitations. In the Vignette of rang ([https://cran.r-project.org/web/packages/rang/vignettes/faq.html](https://cran.r-project.org/web/packages/rang/vignettes/faq.html)), we list all of these limitations as well as possible mitigation.
2304.00599
Learning Interacting Theories from Data
One challenge of physics is to explain how collective properties arise from microscopic interactions. Indeed, interactions form the building blocks of almost all physical theories and are described by polynomial terms in the action. The traditional approach is to derive these terms from elementary processes and then use the resulting model to make predictions for the entire system. But what if the underlying processes are unknown? Can we reverse the approach and learn the microscopic action by observing the entire system? We use invertible neural networks (INNs) to first learn the observed data distribution. By the choice of a suitable nonlinearity for the neuronal activation function, we are then able to compute the action from the weights of the trained model; a diagrammatic language expresses the change of the action from layer to layer. This process uncovers how the network hierarchically constructs interactions via nonlinear transformations of pairwise relations. We test this approach on simulated data sets of interacting theories. The network consistently reproduces a broad class of unimodal distributions; outside this class, it finds effective theories that approximate the data statistics up to the third cumulant. We explicitly show how network depth and data quantity jointly improve the agreement between the learned and the true model. This work shows how to leverage the power of machine learning to transparently extract microscopic models from data.
Claudia Merger, Alexandre René, Kirsten Fischer, Peter Bouss, Sandra Nestler, David Dahmen, Carsten Honerkamp, Moritz Helias
2023-04-02T19:01:31Z
http://arxiv.org/abs/2304.00599v2
# Learning Interacting Theories from Data ###### Abstract One challenge of physics is to explain how collective properties arise from microscopic interactions. Indeed, interactions form the building blocks of almost all physical theories and are described by polynomial terms in the action. The traditional approach is to derive these terms from elementary processes and then use the resulting model to make predictions for the entire system. But what if the underlying processes are unknown? Can we reverse the approach and learn the microscopic action by observing the entire system? We use invertible neural networks (INNs) to first learn the observed data distribution. By the choice of a suitable nonlinearity for the neuronal activation function, we are then able to compute the action from the weights of the trained model; a diagrammatic language expresses the change of the action from layer to layer. This process uncovers how the network hierarchically constructs interactions via nonlinear transformations of pairwise relations. We test this approach on simulated data sets of interacting theories. The network consistently reproduces a broad class of unimodal distributions; outside this class, it finds effective theories that approximate the data statistics up to the third cumulant. We explicitly show how network depth and data quantity jointly improve the agreement between the learned and the true model. This work shows how to leverage the power of machine learning to transparently extract microscopic models from data. ## I Introduction Models of physical systems are frequently described on the microscopic scale in terms of interactions between their degrees of freedom. Often one seeks to understand the collective behavior that arises in the system as a whole. The interactions can feature symmetries, such as spatial or temporal translation invariance. Prominent examples of these theories can be found in statistical physics, high energy physics, but also in neuroscience. The nature of the interactions is often derived as an approximation of a more complex theory. The description of systems on the microscopic scale is key to their understanding. In the absence of an underlying theory, the inverse problem has to be solved: one needs to infer the microscopic model by measurements of the collective states. This is typically a hard problem. A recent route towards a solution comes from studies [1, 2, 3, 4, 5, 6, 7, 8] that explore the link between the learned features of artificial neural networks and the statistics of the data they were trained on. This inspection yields insights both into the mechanisms by which artificial neural networks achieve stellar performance on many tasks and into the nature of the data. In this study, we make the link between learned parameters and data statistics explicit by studying generative neural networks. Generative models learn the statistics which underlie the data they are trained on. As such they must possess an internal, learned model of data which is encoded in the network parameters. In this work, we gain insights into the nature of the training data by extracting the model from the network parameters, thus bridging the gap between the learned model and its interpretation. One class of generative models are invertible neural networks (INNs), also called normalizing flows. INNs are invertible mappings trained to approximate the unknown probability distribution of the training set [9, 10]. 
They can be used to generate new samples from the same distribution as the training set, or to manipulate existing data consistent with the features of the training set (for example, transitions between images [11, 12, 13]). This is achieved by mapping the highly structured input data to a completely unstructured latent space. The model learned by the network is expressed through the inverse mapping, as this must generate all interactions in the data. However, the network mapping is typically high-dimensional and depends on many parameters, which does not allow for a direct interpretation. In this work, we derive interpretable microscopic theories from trained INNs. We extract an explicit data distribution, formulated in terms of interactions, from the trained network parameters. These interactions form the building blocks of the microscopic theory that describes the distribution of the training data. Furthermore, the process of extracting the microscopic theory makes the relation between the trained network parameters and the learned theory explicit. We show how interactions are hierarchically built through the composition of the network layers. This approach provides an interpretable relation between the network parameters and the learned model. We illustrate and test this framework on several exam
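As background for the flow-based learning described above (a textbook relation rather than anything specific to this paper's derivation), an invertible map \(f_\theta\) that sends data \(x\) to latent variables \(z=f_\theta(x)\) with a simple latent prior \(p_z\), commonly taken to be a standard normal, is trained by maximizing the exact change-of-variables log-likelihood:

\[
\log p_x(x) \;=\; \log p_z\big(f_\theta(x)\big) \;+\; \log\left|\det\frac{\partial f_\theta(x)}{\partial x}\right|.
\]

The second term is what ties the learned weights to the data statistics: the inverse mapping must reshape the structureless latent distribution into the observed one, which is why the trained parameters implicitly encode a model of the data.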
2303.13224
Multiple magnetic transitions and complex magnetic structures in Fe$_2$SiSe$_4$ with the sawtooth lattice
The sawtooth lattice shares some structural similarities with the kagome lattice and may attract renewed research interest. Here, we report a comprehensive study on the physical properties of Fe$_2$SiSe$_4$, an unexplored member in the olivine chalcogenides with the sawtooth lattice of Fe. Our results show that Fe$_2$SiSe$_4$ is a magnetic semiconductor with band gap of 0.66~eV. It first undergoes an antiferromagnetic transition at T$_{m1}$=110~K, then an ferrimagnetic-like one at T$_{m2}$=50~K and finally a magnetic transition at T$_{m3}$=25~K which is likely driven by the thermal populations of spin-orbit manifold on the Fe site. Neutron diffraction analysis reveals a non-collinear antiferromagnetic structure with propagation vector $\mathbf{q_1}$=(0,0,0) at T$_{m2}$<T<T$_{m1}$. Interestingly, below T$_{m2}$, an additional antiferromagnetic structure with $\mathbf{q_2}$=(0,0.5,0) appears and Fe$_2$SiSe$_4$ exhibits a complex double-$\mathbf{q}$ magnetic structure which has never been observed in sawtooth olivines. Density functional theory calculations suggest this complex noncollinear magnetic structure may originate from the competing antiferromagnetic interactions for both intra- and inter-chain in the sawtooth lattice. Furthermore, band structural calculations show that Fe$_2$SiSe$_4$ has quasi-flat band features near the valence and conduction bands. Based on the above results, we propose Fe$_2$SiSe$_4$ as a new material platform to condensed matter researches.
Feihao Pan, Xunwu Hu, Jiale Huang, Bingxian Shi, Jinchen Wang, Juanjuan Liu, Hongxia Zhang, Daye Xu, Hongliang Wang, Lijie Hao, Peng Cheng, Dao-Xin Yao
2023-03-23T12:43:35Z
http://arxiv.org/abs/2303.13224v1
Multiple magnetic transitions and complex magnetic structures in Fe\({}_{2}\)SiSe\({}_{4}\) with the sawtooth lattice ###### Abstract The sawtooth lattice shares some structural similarities with the kagome lattice and may attract renewed research interest. Here, we report a comprehensive study on the physical properties of Fe\({}_{2}\)SiSe\({}_{4}\), an unexplored member in the olivine chalcogenides with the sawtooth lattice of Fe. Our results show that Fe\({}_{2}\)SiSe\({}_{4}\) is a magnetic semiconductor with band gap of 0.66 eV. It first undergoes an antiferromagnetic transition at T\({}_{m1}\)=110 K, then an ferrimagnetic-like one at T\({}_{m2}\)=50 K and finally a magnetic transition at T\({}_{m3}\)=25 K which is likely driven by the thermal populations of spin-orbit manifold on the Fe site. Neutron diffraction analysis reveals a non-collinear antiferromagnetic structure with propagation vector **q\({}_{1}\)**=(0,0,0) at T\({}_{m2}\)\(<\)T\(<\)T\({}_{m1}\). Interestingly, below T\({}_{m2}\), an additional antiferromagnetic structure with **q\({}_{2}\)**=(0,0.5,0) appears and Fe\({}_{2}\)SiSe\({}_{4}\) exhibits a complex double-**q** magnetic structure which has never been observed in sawtooth olivines. Density functional theory calculations suggest this complex noncollinear magnetic structure may originate from the competing antiferromagnetic interactions for both intra- and inter-chain in the sawtooth lattice. Furthermore, band structural calculations show that Fe\({}_{2}\)SiSe\({}_{4}\) has quasi-flat band features near the valence and conduction bands. Based on the above results, we propose Fe\({}_{2}\)SiSe\({}_{4}\) as a new material platform to condensed matter researches. ## I Introduction Materials with the sawtooth lattice have attracted attentions from the condensed matter physics community for several reasons. Firstly, the sawtooth antiferromagnetic (AFM) chain with corner-sharing triangles of spins represents one of the fundamental models of geometrically frustrated quantum magnetism as that in triangular and kagome lattices[1; 2]. Secondly, the sawtooth lattice also exhibits flat-band feature[3] which may give rise to high thermoelectric performance[4], quantum topological phase[5; 6; 7], flat-band spin dynamics and phonon anomalies[8]. Besides, a two-dimensional kagome lattice can be viewed as the combination of one-dimensional sawtooth chains. Therefore the recent discoveries of novel physical properties in kagome material[9; 10; 11; 12; 13] may generate renewed interest in sawtooth material since these two kinds share structural similarities and connections. The A\({}_{2}\)BX\({}_{4}\) (A=Mn, Fe, Ni; B=Si, Ge; X=O,S,Se,Te) olivines represent a large material family where the transitional-metal atoms form a sawtooth lattice. Most members in this series are ferrimagnets or antiferromagnets whose spin structure could be described by magnetic propagation vector **q**=(0,0,0)[14; 15]. Among them, magnetic frustration is observed for Mn\({}_{2}\)SiSe\({}_{4}\)[16]. Mn\({}_{2}\)SiS\({}_{4}\) and Mn\({}_{2}\)GeS\({}_{4}\) are reported to exhibit anomalous magnetic properties result from the quantum fluctuations near a spin-flop bicritical point[17]. Mn\({}_{2}\)GeO\({}_{4}\) is identified as a functional material that exhibit coupled ferromagnetism and ferroelectricity[18]. Fe\({}_{2}\)GeS\({}_{4}\) and Fe\({}_{2}\)GeSe\({}_{4}\) are theoretically proposed as promising candidates with good thermoelectric performance due to the quasi-flat-band feature[4]. 
These findings highlight the exotic physical properties which are closely related to the peculiar sawtooth lattice. In A\({}_{2}\)BX\({}_{4}\) series, we noticed that although the magnetic properties of Fe\({}_{2}\)SiO\({}_{4}\) and Fe\({}_{2}\)SiS\({}_{4}\) have been reported[19; 15], Fe\({}_{2}\)SiSe\({}_{4}\) remains an unexplored member whose chemical phase is even missing in the current inorganic crystal structure database (ICSD). So it would be interesting to check whether the Fe\({}_{2}\)SiSe\({}_{4}\) phase with sawtooth lattice really exists. In this paper, we report the synthesize of Fe\({}_{2}\)SiSe\({}_{4}\) single crystals which is characterized by sawtooth lattice of Fe. Using magnetization, heat capacity, neutron scattering techniques and band structure calculations, Fe\({}_{2}\)SiSe\({}_{4}\) is identified as a magnetic semiconductor with multiple magnetic transitions, non-collinear double-**q** magnetic structures and quasi-flat-band. The underlying physical mechanism for the magnetic properties and potential applications of Fe\({}_{2}\)SiSe\({}_{4}\) are discussed combined with the results of density functional theory (DFT) calculations. Experimental methods Single crystals of Fe\({}_{2}\)SiSe\({}_{4}\) were grown by chemical vapor transport method. The pure powder of Fe, Si and Se were mixed in molar ratio 1:2:4 (total mass 1 g), put into a quartz tube (inner diameter of 12 mm, length 10 cm) with 100 mg iodine. Then the quartz tube was evacuated to 3.0\(\times\)10\({}^{-3}\)Pa and sealed before put into a two-zone tube furnace. The quartz tube was heated to 590 \({}^{\circ}\)C in the raw material end and 660 \({}^{\circ}\)C in the other end in 750 minutes, then maintained at the temperatures for 5760 minutes. The next step is a so-called "temperature reversing process" in which the two ends switch temperatures in 70 minutes and held in the new temperatures for 12.5 days before cooled with the furnace. The shining black single crystals of Fe\({}_{2}\)SiSe\({}_{4}\) appeared in the final cold end with typical dimensions of 1 mm\(\times\)2 mm\(\times\)0.5 mm. X-ray diffraction (XRD) patterns of powder samples were collected from a Bruker D8 Advance X-ray diffractometer using Cu K\({}_{\alpha}\) radiation. Magnetization measurements were carried out in Quantum Design MPMS3. Resistivity and heat capacity of the samples were measured on Quantum Design PPMS-14T. The powder neutron diffraction experiments were carried out on Xingzhi cold neutron triple-axis spectrometer at the China Advanced Research Reactor (CARR). For neutron experiments on Xingzhi, the incident neutron energy was fixed at 16 meV with a neutron velocity selector used upstream to remove higher order neutrons[20]. About 3 g Fe\({}_{2}\)SiSe\({}_{4}\) powders (crushed from single crystals) were used in neutron experiments. The program FullProf Suite package was used in the Rietveld refinement of neutron powder diffraction data[21; 22]. The electronic structure and magnetic properties calculations were performed using the DFT as implemented in the Vienna _ab initio_ simulation package (VASP) code[23; 24]. The generalized gradient approximation (GGA) in the form of Perdew-Burke-Ernzerhof (PBE)[25] is used for exchange-correlation functional. The projector augmented-wave (PAW) method[26] with a 300 eV plane-wave cutoff energy is employed. The valence electrons configurations for each atom are 3\(d^{7}4s^{1}\) for Fe, 3\(s^{2}3p^{2}\) for Si, and 3\(d^{10}4s^{2}4p^{4}\) for Se. 
A \(\Gamma\)-centered 3 \(\times\) 2 \(\times\) 5 k-points mesh within the Monkhorst-Pack scheme is used for Brillouin zone sampling. The Hubbard _U_[27] for the 3d electrons of Fe is chosen as 2 eV to reproduce the experimental magnetic moments and band gap of Fe\({}_{2}\)SiSe\({}_{4}\). A 1 \(\times\) 2 \(\times\) 1 supercell is adopted due to the non-collinear AFM structure. The direction of the magnetic moment is constrained, that is, the superimposed double-**q** magnetic structure is adopted. All calculations are performed using the experimental structural parameters. Convergence criteria employed for both the electronic self-consistent iteration and the ionic relaxation are set to 10\({}^{-6}\) eV and 0.01 eV/\(\AA\), respectively. Heisenberg exchange interactions are calculated by the four-states method[28]. A 1 \(\times\) 2 \(\times\) 2 supercell is adopted to avoid spurious interactions from the periodic boundary conditions.

## II Results and discussions

### Crystal structure and magnetization

In Fig. 1(c), the XRD patterns on powder samples (crushed from single crystals) and the Rietveld refinement confirm that Fe\({}_{2}\)SiSe\({}_{4}\) adopts the orthorhombic symmetry with space group \(Pnma\) (No.62), which belongs to the olivine-type structure, the same as Fe\({}_{2}\)SiS\({}_{4}\) and Fe\({}_{2}\)SiO\({}_{4}\)[14; 15]. The obtained lattice parameters are \(a\) = 13.032 \(\AA\), \(b\) = 7.549 \(\AA\) and \(c\) = 6.123 \(\AA\). The x-ray reflection pattern from the \(ab\)-plane of the Fe\({}_{2}\)SiSe\({}_{4}\) single crystal in Fig. 1(b) also confirms the above results. As shown in Fig. 1(a), the Fe atoms form infinite sawtooth chains along the \(b\)-axis.

Figure 1: (a) Crystal structure of Fe\({}_{2}\)SiSe\({}_{4}\). (b) The x-ray reflections from the \(ab\)-plane of the Fe\({}_{2}\)SiSe\({}_{4}\) single crystal. The inset shows a photo of one crystal. (c) Room temperature XRD patterns of powders (crushed from single crystals) and the Rietveld refinement results.

The Laue diffraction patterns allow us to distinguish the major crystal axes with respect to the three-dimensional shape of the Fe\({}_{2}\)SiSe\({}_{4}\) single crystal. Therefore, temperature dependent magnetization measurements were performed along the \(a\)-, \(b\)- and \(c\)-axis respectively; the results are shown in Fig. 2. Three successive magnetic transitions could be identified. At T\({}_{m1}\)=110 K, a cusp appears which is most prominent under \(H\parallel b\), indicating an AFM transition. At lower temperatures, a second magnetic transition occurs at T\({}_{m2}\)=50 K. Its temperature dependent magnetization behavior seems to be AFM-like under \(H\parallel b\) while ferromagnetic-like along the other field directions. Then at T\({}_{m3}\)=25 K, the magnetization anomalies along all three directions suggest the existence of a third magnetic transition. The temperature dependent magnetization shows strongly anisotropic behavior along the three different directions. Even in the paramagnetic region, the magnetization under \(H\parallel a\) or \(H\parallel c\) exhibits typical Curie-Weiss (CW) behavior while that under \(H\parallel b\) shows an anomalous linear temperature dependence up to 400 K. The CW fit of the high-temperature magnetization data yields \(\mu_{eff}/Fe=4.02\)\(\mu_{B}\) and \(\theta_{CW}\)=66 K for \(H\parallel a\), and \(\mu_{eff}/Fe=4.08\)\(\mu_{B}\) and \(\theta_{CW}\)=-51 K for \(H\parallel c\). This result indicates dominant antiferromagnetic correlations in Fe\({}_{2}\)SiSe\({}_{4}\).
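For reference, the Curie-Weiss parameters quoted above follow from the standard fit of the high-temperature susceptibility; the relations below are the textbook ones and are not a statement about the authors' exact fitting procedure:

\[
\chi(T)=\frac{C}{T-\theta_{CW}},\qquad
\mu_{eff}=\sqrt{\frac{3k_{B}C}{N_{A}}}\approx\sqrt{8C}\;\mu_{B}\quad(C\ \text{in emu K mol}^{-1}\ \text{per Fe}).
\]

A positive \(\theta_{CW}\) signals dominant ferromagnetic exchange and a negative one antiferromagnetic exchange, which is why the fitted values along the two field directions are discussed together with the magnetic structure below.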
On the other hand, the T-linear magnetization under \(H\parallel b\) is abnormal. Similar behavior has been observed in Fe-based superconductors and parent compound[29; 30; 31], which is explained as a result of AFM spin fluctuations in the normal state[32]. Further theoretical works are needed to check if this explanation applies to Fe\({}_{2}\)SiSe\({}_{4}\). ### Magnetic structure Since the T-dependent magnetization indicates the existence of three possible magnetic transitions, powder neutron diffraction on Fe\({}_{2}\)SiSe\({}_{4}\) was performed at 150 K, 80 K, 35 K and 3.5 K to determine the magnetic structure. The emergence of new peaks and large enhancement of the intensities for some nuclear peaks could be observed in the neutron diffraction spectra at lower temperature comparing with that at 150 K (Fig. S1). This allows us to identify plenty of magnetic Bragg peaks and determine the magnetic structures at different temperatures through Rietveld refinement. The detailed neutron diffraction data, representational analysis and refinement process are presented in Supplemental Material[33]. The main results are shown in Fig. 3. First of all, as shown in Fig. 4(b), the intensities of Bragg peak (0,0,1) could be served as the magnetic order parameter for the first AFM transition at T\({}_{m1}\). Similarly, the temperature dependent intensity of (1,0.5,1) signifies new magnetic order develops below T\({}_{m2}\). It should be noted there is almost no difference for the diffraction patterns at 35 K and 3.5 K within our instrumental resolution. Therefore the magnetic transition T\({}_{m3}\) may not change the zero-field magnetic structure. Secondly, at 80 K, the non-collinear AFM structure with magnetic wave-vector \(\mathbf{q_{1}}\)=(0,0,0) is identified and illustrated in Fig. 3(a). The Fe ions occupy two inequivalent Wyckoff positions. We label the ones at \(4a\) site as Fe1 (black spheres) and that at \(4c\) sites as Fe2 (gray spheres). The magnetic moments all lie in the \(ab\)-plane and the data refinement yields a total moment of \(1.45\mu_{B}\)/Fe for Fe1 and \(4.04\mu_{B}\)/Fe for Fe2. The latter points parallel to the \(b\)-axis while the former is canted to the \(a\)-axis to a certain degree. This magnetic structure is quite similar as that in olivine-type Fe\({}_{2}\)SiO\({}_{4}\) and Fe\({}_{2}\)SiS\({}_{4}\)[14; 15]. Therefore similar physical interpretation of this magnetic structure considering the indirect superexchange interactions between Fe cations via Se atoms could be given. As marked in Fig. 3(a), the Fe2-Se-Fe2 angle is more closer to \(180^{\circ}\) and gives rise to a strong AFM interaction, while the angle of Fe1-Se-Fe1 is much smaller which may largely reduce the AFM interaction[15]. A competition between crystal-field anisotropy and AFM exchange via spin-orbit coupling results in the canting of Fe1 moments[14]. Thirdly, at 3.5 K, the above magnetic model with \(\mathbf{q_{1}}\)=(0,0,0) can only partially fit the neutron diffraction patterns and could not explain new magnetic peaks appear below T\({}_{m2}\) (Fig. S1). The most prominent one is indexed as (1,0.5,1) whose temperature dependent intensities are shown in Fig. 3(b). The new magnetic peaks developed below T\({}_{m2}\) can be well defined by a new propagation vector \(\mathbf{q_{2}}\)=(0,0.5,0). We found that only a combined model which includes both the magnetic structure with \(\mathbf{q_{1}}\) and that with \(\mathbf{q_{2}}\) could achieve a good fit to the data collected at 3.5 K. 
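As a quick consistency check on the indexing described above, the sketch below generates satellite positions \(\mathbf{G}\pm\mathbf{q_{2}}\) around a few low-order nuclear reflections and converts them to d-spacings using the lattice parameters quoted earlier; it is a toy illustration, not the representational analysis used in the actual refinement.

```python
import numpy as np

# lattice parameters of Fe2SiSe4 from the Rietveld refinement (Angstrom)
a, b, c = 13.032, 7.549, 6.123
q2 = np.array([0.0, 0.5, 0.0])          # propagation vector below T_m2

def d_spacing(hkl):
    """Orthorhombic d-spacing: 1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2."""
    h, k, l = hkl
    return 1.0 / np.sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2)

# satellites G +/- q2 around a few low-order nuclear reflections (illustrative choice)
for G in [np.array(v, float) for v in [(1, 0, 1), (0, 0, 1), (2, 0, 0)]]:
    for sign in (+1, -1):
        Q = G + sign * q2
        print(f"G = {tuple(G)}, satellite {tuple(Q)}: d = {d_spacing(Q):.3f} Angstrom")
```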
The two refined magnetic structures with different wave-vectors are illustrated in Fig. 3(c). One can see that the one with \(\mathbf{q_{1}}\) is quite similar to that at 80 K, except for a slightly increased ordered moment and a different canting angle of the Fe1 moment. The magnetic structure with \(\mathbf{q_{2}}\) is also non-collinear and more complex; the magnetic unit cell is doubled along the \(b\)-axis. The moments of Fe2 have two different values and one of them is almost negligible (0.03 \(\mu_{B}\)). Figure 2: Temperature dependent magnetizations with applied field along the \(a\), \(b\) and \(c\)-axis are shown respectively in (a), (b) and (c). The magnetic order at 3.5 K has two different propagation wave-vectors. These two modulations may either reside in different domains independently, known as the multi-domain state, or coexist in a single domain in the form of a superimposed double-**q** state. These two states may be indistinguishable in diffraction experiments performed on powder samples. Multi-**q** magnetic order is not a common case in magnetic materials. Some famous examples include the double-**q** spin-density wave in the Fe-based superconductor Sr\({}_{1-x}\)Na\({}_{x}\)Fe\({}_{2}\)As\({}_{2}\) with tetragonal crystal symmetry[34] and the triple-**q** magnetic state in Na\({}_{2}\)Co\({}_{2}\)TeO\({}_{6}\) with hexagonal crystal symmetry[35]. For Fe\({}_{2}\)SiSe\({}_{4}\), whether it is a double-domain state or a double-**q** state, such a state in an orthorhombic crystal symmetry is particularly rarely observed. To explore the underlying mechanism for the complex magnetic structure of Fe\({}_{2}\)SiSe\({}_{4}\) at 3.5 K, we investigate its exchange couplings based on a simple Heisenberg model: \[H=\sum_{i<j}J_{ij}\mathbf{S}_{i}\cdot\mathbf{S}_{j} \tag{1}\] Our DFT calculations reveal the values of the exchange coupling constants \(J_{i}\) between Fe ions at different sites (illustrated in Fig. 3(c)), and the results are listed in Table 1. We can see that all of the \(J_{i}\) values are positive, which means that both sides of the sawtooth chains are AFM coupled. The intra-chain exchange couplings \(J_{1}\), \(J_{2}\), \(J_{6}\), and \(J_{7}\) are connected by an exchange path Fe-Si-Se-Fe with the distance _d_(\(J_{1}\)) = 3.78 \(\AA\). The inter-chain exchange couplings \(J_{3}\), \(J_{4}\), \(J_{5}\) are instead realized through the exchange path Fe-Se-Se-Fe with the distance _d_(\(J_{4}\)) = 4.89 \(\AA\). Furthermore, the inter-chain exchange coupling values are almost twice as large as those of the intra-chain couplings. This is probably due to the strong hybridization of the Se 4\(p\) and Fe 3\(d\) orbitals near the Fermi level. A very large Fe-Fe distance _d_(\(J_{8}\)) = 7.17 \(\AA\) results in a much weaker magnetic interaction. Therefore, the competition of AFM interactions on the two sides of the sawtooth chain, combined with the inter-chain exchange interaction, may be responsible for the complex noncollinear double-**q** magnetic order in Fe\({}_{2}\)SiSe\({}_{4}\). Based on this model, we employ a superimposed double-**q** magnetic structure to constrain the direction of the magnetic moments. The calculated low-temperature spin configuration of Fe\({}_{2}\)SiSe\({}_{4}\) is shown in Table S9 and is in qualitative agreement with the superimposed double-**q** magnetic structure determined experimentally (Table S8).
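To make the competition between these couplings concrete, the snippet below evaluates the classical energy of the Heisenberg model in Eq. (1) for two trial spin configurations using the \(J\) values of Table 1. The four-site fragment and its bond list are hypothetical placeholders, not the supercell used in the DFT calculations.

```python
import numpy as np

# exchange constants from Table 1 (meV); positive = antiferromagnetic
J = {"J1": 8, "J2": 9, "J3": 15, "J4": 12, "J5": 15, "J6": 9, "J7": 8, "J8": 0.3}

def heisenberg_energy(spins, bonds):
    """Classical energy E = sum_{i<j} J_ij S_i . S_j for unit spins.

    spins: (N, 3) array of unit vectors; bonds: list of (i, j, label)."""
    return sum(J[label] * np.dot(spins[i], spins[j]) for i, j, label in bonds)

# toy four-site sawtooth fragment with a hypothetical bond list (illustrative only)
bonds = [(0, 1, "J1"), (1, 2, "J2"), (2, 3, "J1"), (0, 2, "J4"), (1, 3, "J4")]

up, down = np.array([0.0, 1.0, 0.0]), np.array([0.0, -1.0, 0.0])
collinear = np.array([up, down, up, down])
canted = np.array([[np.sin(0.4), np.cos(0.4), 0.0], down,
                   [-np.sin(0.4), np.cos(0.4), 0.0], down])

print("collinear AFM:", heisenberg_energy(collinear, bonds), "meV")
print("canted:       ", heisenberg_energy(canted, bonds), "meV")
```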
Figure 3: (a) Schematics of the refined magnetic structure of Fe\({}_{2}\)SiSe\({}_{4}\) at 80 K. (b) The temperature-dependent intensities of the magnetic peaks (0,0,1) and (1,0.5,1). (c) The refined magnetic structures of Fe\({}_{2}\)SiSe\({}_{4}\) at 3.5 K with different propagation vectors, illustrated respectively. ### Isothermal magnetization, AC susceptibility and heat capacity Further measurements on the Fe\({}_{2}\)SiSe\({}_{4}\) single crystal were performed to gain insight into its magnetic state. As shown in Fig. 4, the isothermal magnetizations under \(H\parallel a\) and \(H\parallel b\) are linear with increasing field for most temperatures, which is consistent with an AFM state. Nevertheless, for \(H\parallel c\), a nonlinear M(H) curve appears below T\({}_{m2}\)=50 K. In particular, the 30 K and 40 K data first show a ferromagnetic-like fast magnetization to about 0.13 \(\mu_{B}\)/Fe at low field and then increase linearly with field. This fast magnetization seems to gradually disappear below T\({}_{m3}\)=25 K. The above observations suggest that the magnetic transition at T\({}_{m2}\) is a ferrimagnetic one. The magnetic hysteresis behavior at 35 K in Fig. 5(a) and the peak in the imaginary part of the AC susceptibility \(\chi^{\prime\prime}\) at T\({}_{m2}\) (Fig. 5(b)) further support this conclusion. In the previous section, the double-**q** magnetic structure below T\({}_{m2}\) identified from neutron diffraction is actually antiferromagnetic-like, without a net moment component in any direction. This seems to contradict the ferrimagnetic behavior observed in the magnetization measurements. The explanation might be that the neutron experiment is carried out in zero field, so the zero-field ground state may actually have no net magnetization. In addition, the net magnetization of 0.13 \(\mu_{B}\)/Fe observed in Fig. 4(c) is quite small. Such a small ferrimagnetic component is not easily detected given the neutron instrumental resolution. We also need to point out that the Fe moments have no component along the \(c\)-axis at 80 K, but Fe2 actually has a \(c\)-axis component of 1.14 \(\mu_{B}\) at 3.5 K, as can be seen from Table S7 and Table S8. This indicates that the magnetic anisotropy changes notably below T\({}_{m2}\). Another question that needs to be answered is the nature of the magnetic transition at T\({}_{m3}\)=25 K. The experimentally measured specific heat of Fe\({}_{2}\)SiSe\({}_{4}\) is shown in Fig. 6. From the raw data C\({}_{p}\)(T), the magnetic transitions at T\({}_{m1}\) and T\({}_{m2}\) can be identified by the jumps, while there is no detectable feature at T\({}_{m3}\). Using an Einstein-model-based curve to account for the phonon part of the specific heat, we obtain the magnetic specific heat C\({}_{mag}\) shown by the blue curve in Fig. 6. The C\({}_{mag}\)(T) shows a shoulder at T\({}_{m3}\)=25 K. We found that this shoulder-like anomaly also exists in isostructural Fe\({}_{2}\)SiO\({}_{4}\) at a similar temperature (20 K), and its origin has been uncovered as a Schottky anomaly arising from the spin-orbit manifold of the Fe\({}^{2+}\) ion[19]. Figure 4: Isothermal magnetizations under magnetic field applied along the \(a\)-, \(b\)- and \(c\)-axis at selected temperatures are shown respectively. Figure 5: (a) Magnetic hysteresis under \(H\parallel c\) for Fe\({}_{2}\)SiSe\({}_{4}\) at selected temperatures. (b) The temperature dependent AC susceptibilities measured under an oscillating AC field of 5.0 Oe applied along the \(c\)-axis.
\begin{table} \begin{tabular}{c c c c c c c c c} Constant & \(J_{1}\) & \(J_{2}\) & \(J_{3}\) & \(J_{4}\) & \(J_{5}\) & \(J_{6}\) & \(J_{7}\) & \(J_{8}\) \\ \hline Value (meV) & 8 & 9 & 15 & 12 & 15 & 9 & 8 & 0.3 \\ \end{tabular} \end{table} Table 1: Calculated magnetic exchange coupling constants between different Fe sites; a positive value means AFM coupling. To be specific, for Fe\({}_{2}\)SiO\({}_{4}\), Aronson \(et\,al.\) found that the lowest crystal-field-split t\({}_{2g}\) level is further split into five states due to spin-orbit coupling. These spin-orbit energy levels were determined through inelastic neutron experiments and have a dominant influence on the low temperature physical properties. The shoulder in the specific heat can then be quantitatively calculated and simulated by a Schottky anomaly arising from the spin-orbit excitations[19]. This explanation may also apply to the anomalous magnetic transition at T\({}_{m3}\) for Fe\({}_{2}\)SiSe\({}_{4}\). As shown in the inset of Fig. 6, above T\({}_{m1}\) the magnetic entropy of Fe\({}_{2}\)SiSe\({}_{4}\) is close to the limiting value \(2R\ln 5\), which is calculated by assuming that the spin-orbit manifold is fully occupied and there are \(2S+1=5\) equally occupied states for each Fe\({}^{2+}\). This gives further support to the above physical picture. The thermal population of the different spin-orbital states may change drastically across T\({}_{m3}\), which leads to the rich magnetic behavior of Fe\({}_{2}\)SiSe\({}_{4}\). As demonstrated in Fig. 4 and Fig. 5, the magnetization switches from AFM to ferromagnetic behavior and then back to AFM behavior with decreasing temperature. In addition, magnetic-field-induced spin-flop transitions are also observed in the M(H) curves at 15 K and 2 K under \(H\parallel c\). Combined with the semiconducting nature of this compound, which will be discussed in the next section, Fe\({}_{2}\)SiSe\({}_{4}\) may find applications in optoelectronics and magnetic devices. ### Band structure The sawtooth lattice is known to host flat bands similar to the kagome lattice. The band structure of Fe\({}_{2}\)SiSe\({}_{4}\) obtained by DFT calculations is shown in Fig. 7. The result reveals that Fe\({}_{2}\)SiSe\({}_{4}\) is a semiconductor with an indirect band gap of 0.7 eV. The gap value is close to that determined from experiment. As shown in section III of the Supplemental Material[33], the band gap is determined to be 0.40 eV through fitting the resistance curve and 0.66 eV by diffuse reflectance spectroscopy (DRS). Considering the errors of the different methods, 0.66 eV might be a more accurate value for the band gap of Fe\({}_{2}\)SiSe\({}_{4}\). From Fig. 7(a), Fe\({}_{2}\)SiSe\({}_{4}\) is found to have flat bands along the \(\Gamma\)-X crystallographic direction, similar to other sawtooth materials such as Mn\({}_{2}\)SiSe\({}_{4}\) and Fe\({}_{2}\)GeSe\({}_{4}\)[4; 16]. The corresponding total and projected density of states (DOS) for Fe\({}_{2}\)SiSe\({}_{4}\) are shown in Fig. 7(b). The states near the Fermi level mainly come from contributions of the Fe \(3d\) states and Se \(4p\) states. Strong hybridization of the Se \(4p\) and Fe \(3d\) orbitals can be observed below the Fermi level. For kagome metals, where the flat band is quite close to the Fermi level, novel properties including emergent ferromagnetism, anomalous Hall effect, superconductivity and topological phases have been extensively reported in recent years[36; 37; 38; 39; 9; 10; 11].
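The flat band of the sawtooth geometry mentioned above is easy to see in a minimal two-site tight-binding toy model: with base-base hopping \(t_{1}\) and base-apex hopping \(t_{2}=\sqrt{2}\,t_{1}\), one of the two bands becomes exactly dispersionless. The hoppings below are illustrative numbers and are not fitted to Fe\({}_{2}\)SiSe\({}_{4}\).

```python
import numpy as np

# two-site sawtooth chain: base-base hopping t1, base-apex hopping t2;
# an exact flat band at E = -2 t1 appears for t2 = sqrt(2) t1 (toy model)
t1, t2 = 1.0, np.sqrt(2.0)

def bands(k):
    h = np.array([[2 * t1 * np.cos(k), t2 * (1 + np.exp(-1j * k))],
                  [t2 * (1 + np.exp(1j * k)), 0.0]])
    return np.linalg.eigvalsh(h)          # Hermitian 2x2 Bloch Hamiltonian

ks = np.linspace(-np.pi, np.pi, 201)
e = np.array([bands(k) for k in ks])       # (201, 2) band energies
print("lower band spread:", e[:, 0].max() - e[:, 0].min())   # ~0 -> flat band
print("upper band spread:", e[:, 1].max() - e[:, 1].min())
```

In the real material the Fe \(3d\) manifold is of course much richer, but this is the basic mechanism by which the sawtooth motif produces quasi-flat bands.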
Since the band gap of Fe\({}_{2}\)SiSe\({}_{4}\) is not large, it may be possible to tune it into a metal by applying pressure or chemical doping. It would then be worthwhile to check whether a quasi-flat-band near the Fermi level would enable the realization of other versatile quantum phenomena. Figure 6: Temperature dependent specific heat of Fe\({}_{2}\)SiSe\({}_{4}\). The magnetic specific heat C\({}_{mag}\) is derived by subtracting the phonon contributions through an Einstein-model-based curve fit to the data above 110 K. The calculated magnetic entropy is shown in the inset. Figure 7: Calculated band structure (a) and total and projected density of states (b) of Fe\({}_{2}\)SiSe\({}_{4}\). ## IV Conclusions In summary, we present a comprehensive study of the sawtooth-lattice chalcogenide olivine Fe\({}_{2}\)SiSe\({}_{4}\). This material shows intriguing magnetic properties. Three magnetic transitions are identified: an AFM one at 110 K, a ferrimagnetic one at 50 K and a third one at 25 K, possibly due to spin-orbit excitations. We determined the magnetic structures at different temperatures through neutron diffraction and discovered a noncollinear double-**q** magnetic order below 50 K. DFT calculations suggest that this complex magnetic structure may be due to the competition of AFM interactions on the two sides of the sawtooth chain combined with the inter-chain exchange interaction. Through band structure calculations and spectroscopic experiments, Fe\({}_{2}\)SiSe\({}_{4}\) is identified as a magnetic semiconductor with an indirect band gap of 0.66 eV and a quasi-flat-band. We propose that Fe\({}_{2}\)SiSe\({}_{4}\) may provide a new material playground for further research on magnetic devices and flat-band effects through chemical doping. ## Acknowledgement This work was supported by the National Key Research and Development Program of China (No. 2018YFA0306001, 2022YFA1402802), the National Natural Science Foundation of China (No. 12074426, No. 12004426, No. 11974432, No. 92165204), Shenzhen International Quantum Academy (Grant No. SIQA202102), National Key Scientific Instrument and Equipment Development Project of NSFC (No. 11227906), the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China (Grants No. 22XNKJ40), NSAF (Grant No. U2030106).
2308.08210
Neural Spherical Harmonics for structurally coherent continuous representation of diffusion MRI signal
We present a novel way to model diffusion magnetic resonance imaging (dMRI) datasets, that benefits from the structural coherence of the human brain while only using data from a single subject. Current methods model the dMRI signal in individual voxels, disregarding the intervoxel coherence that is present. We use a neural network to parameterize a spherical harmonics series (NeSH) to represent the dMRI signal of a single subject from the Human Connectome Project dataset, continuous in both the angular and spatial domain. The reconstructed dMRI signal using this method shows a more structurally coherent representation of the data. Noise in gradient images is removed and the fiber orientation distribution functions show a smooth change in direction along a fiber tract. We showcase how the reconstruction can be used to calculate mean diffusivity, fractional anisotropy, and total apparent fiber density. These results can be achieved with a single model architecture, tuning only one hyperparameter. In this paper we also demonstrate how upsampling in both the angular and spatial domain yields reconstructions that are on par or better than existing methods.
Tom Hendriks, Anna Vilanova, Maxime Chamberland
2023-08-16T08:28:01Z
http://arxiv.org/abs/2308.08210v2
Neural Spherical Harmonics for structurally coherent continuous representation of diffusion MRI signal ###### Abstract We present a novel way to model diffusion magnetic resonance imaging (dMRI) datasets, that benefits from the structural coherence of the human brain while only using data from a single subject. Current methods model the dMRI signal in individual voxels, disregarding the intervoxel coherence that is present. We use a neural network to parameterize a spherical harmonics series (NeSH) to represent the dMRI signal of a single subject from the Human Connectome Project dataset, continuous in both the angular and spatial domain. The reconstructed dMRI signal using this method shows a more structurally coherent representation of the data. Noise in gradient images is removed and the fiber orientation distribution functions show a smooth change in direction along a fiber tract. We showcase how the reconstruction can be used to calculate mean diffusivity, fractional anisotropy, and total apparent fiber density. These results can be achieved with a single model architecture, tuning only one hyperparameter. In this paper we also demonstrate how upsampling in both the angular and spatial domain yields reconstructions that are on par or better than existing methods. Keywords:Diffusion MRI Implicit Neural Representation Spherical Harmonics. ## 1 Introduction The human brain is a highly structured organ. With the introduction of diffusion magnetic resonance imaging (dMRI) in vivo study of the structure of the brain became a possibility. The spatially coherent structures in the brain imply that spatial coherence should be present when modeling dMRI data. Diffusion tensor imaging (DTI) [3] fits a tensor for every voxel of the volume describing the diffusion in three primary directions. Constrained spherical deconvolution (CSD) [14] can describe the orientation and relative size of fiber bundles using fiber orientation distribution functions (fODFs). These are examples of methods that model the fiber orientation in every voxel independently, disregarding any intervoxel coherence. Interpolating correctly between voxels using classical interpolation methods (e.g. cubic interpolation) is, therefore, difficult and susceptible to noise, and can discard anatomical details. Interpolation in the angular domain has proven to be a difficult task as well, as highlighted by recent challenges in the computational dMRI community [4, 13, 9]. Machine learning approaches for upsampling in both the angular and spatial domains are a promising avenue [1]. However, these methods often rely on a strong prior obtained by training on large amount of data. This is problematic when training data is scarce or if the model is applied to data inherently different from the data it was trained on (e.g. pathological data). Ideally, a continuous and structurally coherent model should be derived at the individual level (i.e., n=1). Neural radiance field (NeRF) [8] models have shown to be extremely effective at creating continuous 3-dimensional representations, known as implicit neural representations, of scenes given a limited number of 2-dimensional input images taken from limited angles. NeRF overfits a multi-layer perceptron to essentially capture a given scene in its parameters. Unseen angles can then be sampled from this network. This concept could translate well to dMRI, as we are trying to create a complete representation from an incompletely sampled angular domain. 
The difference with dMRI is that every angle in a dMRI-acquisition produces a complete 3-dimensional volume of data. In this work, we propose to use a NeRF-like model to create a model of the dMRI data of a single subject that utilizes the structural coherence of the brain, while providing continuity in both the angular and spatial domain. We evaluate the resulting model in a number of downstream tasks, such as calculating microstructural metrics, and fODF estimation. We also demonstrate how the model can be used to upsample dMRI data in both the angular and spatial domain. ## 2 Methods & Experiments ### Data We sourced data from a single participant from the preprocessed Human Connectome Project dataset [18] consisting of \(18\)\(b=0s/mm^{2}\) volumes, \(90\)\(b=1000\)\(s/mm^{2}\) volumes, \(90\)\(b=3000\)\(s/mm^{2}\) volumes, with \(1.25\)mm isotropic voxels. ### Model The neural spherical harmonics model (NeSH) is an adaptation from SH-NeRF [19]. NeSH outputs an approximation \(\hat{S}(x,y,z,\mathbf{b})\) for a diffusion signal \(S(x,y,z,\mathbf{b})\). An input pair \(i\in I\) consists of a voxel midpoint coordinate \((x,y,z)\) and a gradient direction vectors \(\mathbf{b}\), where \(I\) is a set of all possible coordinate-direction pairs. I has size \(N=n_{c}\times n_{d}\) with \(n_{c}\) being the number of coordinates and \(n_{d}\) the number of directions. The input coordinates are scaled to lie in \([-1,1]\) and are positionally encoded using the generalization of the NeRF positional encoding [12] into input vector \(\mathbf{x}\). Direction vector \(\mathbf{b}\) is converted into the corresponding polar angles \(\theta\) (azimuth, \([0,2\pi)\)) and \(\phi\) (elevation, \([0,\pi]\)). A simple multi-layer perceptron (MLP) maps \(\mathbf{x}\) into a coefficient vector \(\mathbf{k}\) that parameterizes a spherical harmonics (SH) series. \[M_{\Psi}:\mathbf{x}\rightarrow\mathbf{k} \tag{1}\] \[\mathbf{k}=(k_{l}^{m})_{l:0\leq l\leq l_{max}}^{m:-l\leq m\leq l} \tag{2}\] where \(k_{l}^{m}\) is the coefficient for the SH component of degree \(l\) and order \(m\), \(l_{max}\) is the maximum degree of the SH-series, and m the order. For a given \(i\) we can now obtain an estimation of the dMRI signal: \[\hat{S}_{i}=\hat{S}(x_{i},y_{i},z_{i},\mathbf{b_{i}})=\sum_{(k_{l}^{m})^{i}\in\mathbf{ k_{i}}}(k_{l}^{m})^{i}Y_{l}^{m}(\theta,\phi) \tag{3}\] where \(Y_{l}^{m}(\theta,\phi)\) is the SH component of degree \(l\) and order \(m\) for azimuth \(\theta\) and elevation \(\phi\) obtained from \(\mathbf{b_{i}}\), \(\mathbf{k_{i}}\) is the coefficient vector given by (1) for input \(i\), and \((k_{l}^{m})^{i}\) is the coefficient of \(Y_{l}^{m}\). The dMRI signal is reconstructed with a simplified real basis SH series, using only odd-numbered degrees. Different methods of simplifying the SH series exist [14, 5]; in this paper, the method described in MRtrix3 is used [17]. The full model is shown in Figure 1. We calculate the loss as an average over all inputs for the smooth L1 loss [6] between the value of \(\hat{S}_{i}\), and the dMRI signal \(S_{i}\) defined as the dMRI signal measured at \((x_{i},y_{i},z_{i})\) in the direction of \(\mathbf{b_{i}}\). Unregularized, NeSH could be susceptible to overfitting on noise, if the maximum degree of the SH series is larger than necessary to model the diffusion data in a given voxel. An L1 regularization Figure 1: A schematic representation of the Neural spherical harmonics (NeSH) model. 
Inputs coordinates are spatially encoded into (\(x\)), directional vector \(\mathbf{b}\) is converted to \(\theta\) and \(\phi\). Vector \(\mathbf{x}\) is passed through the multi-layer perceptron (MLP) to produce \(\mathbf{k}\), which parameterizes the spherical harmonics series. This is sampled in direction \(\mathbf{b}\) to produce the final output \(\hat{S}\). term is added as an incentive to minimize unnecessary coefficients. The resulting loss function is: \[L=\frac{1}{N}\sum_{i\in I}\Bigl{(}smooth_{L1}(S_{i}-\hat{S}_{i})+\lambda\bigl{(} \sum_{k_{l}^{m}\in\mathbf{k_{i}}}|(k_{l}^{m})^{i}|\bigr{)}\Bigr{)} \tag{4}\] where \(|(k_{l}^{m})^{i}|\) is the absolute value of the coefficient. The loss is used to update the MLP parameters \(\Psi\). To reconstruct images from the trained model, a set \(C\) of \((x,y,z)\) coordinates is generated at the desired spatial resolution, as well as a set \(B\) of directions in the desired angular resolution. A dMRI dataset is reconstructed by first positionally encoding, and mapping every coordinate \(\mathbf{c}\in C\) to \(\mathbf{k_{c}}\) using (1), and then sampling the SH-series parameterized with \(\mathbf{k_{c}}\) for every direction \(\mathbf{b}\in B\). Effectively this applies (3) to every coordinate-direction pair, but only calculates \(\mathbf{k_{c}}\) once for every input coordinate. The model has the following hyperparameters: \(l_{max}\) sets the maximum degree of the SH, \(l_{pos}\) sets the number of positional encodings, \(\sigma\) scales the positional encoding, \(n\_layers\) sets the number of layers in the MLP, \(hidden\_dim\) sets the number of neurons in each layer, \(lr\) is the learning rate, \(\lambda\) scales the L1 regularization. ### Implementation The model is implemented in python version 3.9.16, with pytorch version 2.0.0. MRtrix3 version 3.0.4 is used to calculate DTI metrics and fODFs, and to visualize results. Scilpy1 version 1.5 is used (with python version 3.10.10) to calculate fODF based metrics, and to create interpolated spherical functions. The BET of FSL version 6.0.6.4, is used for brain mask segmentation. Footnote 1: [https://github.com/scilus/scilpy](https://github.com/scilus/scilpy) ### Experiments #### 2.4.1 Reconstruction and angular upsampling of the dMRI signal To assess if the proposed model can reproduce the original data, NeSH is fit on 30 gradient directions of the \(b=1000\ s/mm^{2}\) shell. A grid-search is performed over the hyperparameters. Visual inspection of the gradient images, as well as DTI metrics and fODF glyphs, determine which settings produce the best results. Then, to assess if these settings can be applied to a different set of gradient directions, the settings found in part one are used to fit the model on 90, 60, 45, 30, 15, 10 and 3 gradient directions for both the \(b=1000\ s/mm^{2}\) and \(b=3000\ s/mm^{2}\) dMRI acquisitions. As a comparison, spherical harmonics interpolation (SHI) [5] is fit on the same number of gradients. 
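A minimal, self-contained sketch of the forward pass defined by Eqs. (1)-(3) is given below (positional encoding, the MLP \(M_{\Psi}\), and sampling of a real SH basis). The frequency schedule of the encoding, the even-degree basis and the random toy inputs are simplifying assumptions; this is an illustration of the architecture, not the released implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.special import sph_harm

L_MAX, L_POS, SIGMA, HIDDEN, N_LAYERS = 8, 12, 4.0, 2048, 4

def positional_encoding(xyz, n_freq=L_POS, sigma=SIGMA):
    """NeRF-style encoding of coordinates in [-1, 1]: [raw | sin | cos] -> 3 + 6*n_freq dims (75 here)."""
    freqs = sigma * 2.0 ** torch.arange(n_freq, dtype=torch.float32)   # frequency schedule is an assumption
    ang = xyz[..., None] * freqs                                       # (N, 3, n_freq)
    return torch.cat([xyz, torch.sin(ang).flatten(1), torch.cos(ang).flatten(1)], dim=-1)

def real_sh_basis(theta, phi, l_max=L_MAX):
    """Real SH basis at azimuth theta / polar angle phi, even degrees only
    (the usual antipodally symmetric dMRI simplification -- an assumption here)."""
    cols = []
    for l in range(0, l_max + 1, 2):
        for m in range(-l, l + 1):
            y = sph_harm(abs(m), l, theta, phi)       # scipy convention: sph_harm(m, l, azimuth, polar)
            if m < 0:
                cols.append(np.sqrt(2) * (-1) ** m * y.imag)
            elif m == 0:
                cols.append(y.real)
            else:
                cols.append(np.sqrt(2) * (-1) ** m * y.real)
    return torch.tensor(np.stack(cols, axis=-1), dtype=torch.float32)  # (n_dirs, n_coeff)

n_coeff = sum(2 * l + 1 for l in range(0, L_MAX + 1, 2))               # 45 coefficients for l_max = 8
layers, d_in = [], 3 + 2 * 3 * L_POS
for _ in range(N_LAYERS - 1):
    layers += [nn.Linear(d_in, HIDDEN), nn.ReLU()]
    d_in = HIDDEN
mlp = nn.Sequential(*layers, nn.Linear(d_in, n_coeff))                 # M_psi : x -> k

# toy batch: 1000 random voxel centres and 30 random gradient directions
coords = torch.rand(1000, 3) * 2 - 1
theta = np.random.uniform(0, 2 * np.pi, 30)
phi = np.random.uniform(0, np.pi, 30)
Y = real_sh_basis(theta, phi)                                          # (30, 45), built once per direction set

k = mlp(positional_encoding(coords))                                   # (1000, 45) SH coefficients
S_hat = k @ Y.T                                                        # (1000, 30) predicted dMRI signal
print(S_hat.shape)
```

Because the gradient directions are fixed during training, the SH basis matrix only needs to be built once, and the per-batch cost is a single matrix product on top of the MLP.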
The root mean squared error (RMSE) is calculated for each of the models between the input gradient images and the reconstructed gradient images it produces: \[\sqrt{\frac{1}{WHD|B|}\sum_{x=1}^{W}\sum_{y=1}^{H}\sum_{z=1}^{D}\sum_{\mathbf{b} \in B}(S(x,y,z,\mathbf{b})-\hat{S}(x,y,z,\mathbf{b}))^{2}} \tag{5}\] where \(W\), \(H\), and \(D\) are the width, height, and depth of the image, \(B\) is the set of gradient directions with size \(|B|\), \(S(x,y,z,\mathbf{b})\) is the measured signal, and \(\hat{S}(x,y,z,\mathbf{b})\) is the reconstructed signal at location \(x\), \(y\), \(z\) for gradient direction \(\mathbf{b}\). Finally the capabilities of the model to upsample in the angular domain are assessed. The resulting models from the second part are sampled in all 90 gradient directions. The RMSE is calculated between the 90 original gradient images and the 90 reconstructed gradient images using (5). In all experiments the RMSE is only calculated within a brain mask. #### 2.0.2 Spatial upsampling The data modeled with NeSH can be sampled in any spatial resolution. This experiment assesses the quality of the data when upsampled in spatial domain. The HCP dataset is downsampled from 1.25mm to 2.5mm isotropic voxels. NeSH is fit on the downsampled dataset using 90 gradient directions, and then sampled at 1.25mm isotropic resolution. The downsampled dataset is also upsampled to the original 1.25mm isotropic resolution using cubic interpolation. For the resulting datasets a color encoded FA map is calculated and visualized to compare the results. #### 2.0.3 DTI and fODF Metrics In this experiment we assess if the data modeled with NeSH can be used to produce three common dMRI microstrutural metrics. Two DTI metrics: mean diffusivity (MD) and fractional anisotropy (FA), and one fODF metric: total apparent fiber density (AFD, [10]). The metrics are calculated for 90 gradients, 90 gradients reconstructed by NeSH fit on 90 gradients, 30 gradients, 90 gradients reconstructed by NeSH fit on 30 gradients, and 90 gradients reconstructed by SHI fit on 30 gradients. The \(b=1000\)\(s/mm^{2}\) shell was used for the DTI metrics, and the \(b=3000\)\(s/mm^{2}\) shell for AFD. The three measures are compared to the ones obtained from the full 90 gradients set by computing and visualizing a difference map. #### 2.0.4 fODF estimation This experiment is used to assess if fODFs can be generated from data modeled with NeSH. The same datasets as in the previous experiment are used. A response function is first extracted from the dMRI acquisitions using the single shell implementation of the algorithm by Tournier [15]. Secondly, the fODFs are calculated using single shell CSD [14]. For all five datasets, the \(b=3000\)\(s/mm^{2}\) shell is used. Results are visualized by showing fODF glyphs. ## 3 Results #### 3.0.1 Reconstruction and angular upsampling of dMRI signal The grid-search over the parameters resulted in the following hyperparameter settings: \(l_{max}=8\) (for models trained \(\leq 10\) gradient directions \(l_{max}=2\)), \(l_{pos}=12\) resulting in an input size of 75 (12 sine and cosine encodings for each dimension + raw coordinates), \(\sigma=4\), \(n\_layers=4\), \(hidden\_dim=2048\), \(lr=10\times 10^{-5}\), \(\lambda=10\times 10^{-6}\). The Adam optimizer was used with default settings. The model is trained for 5 epochs with a batch size of 1000. 
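Eq. (5) above is a plain voxel-wise RMSE over all coordinate-direction pairs; restricted to the brain mask it can be computed in a few lines of numpy. The nibabel-based loading and the file names below are placeholders, not the actual evaluation script.

```python
import numpy as np
import nibabel as nib

def masked_rmse(ref, rec, mask):
    """RMSE of Eq. (5), computed only inside the brain mask.

    ref, rec: 4-D arrays (x, y, z, gradient); mask: 3-D boolean array."""
    diff = (ref - rec)[mask]               # boolean indexing keeps the gradient axis
    return float(np.sqrt(np.mean(diff ** 2)))

# file names are placeholders for the 90-direction reference and a reconstruction
ref = nib.load("dwi_90dirs.nii.gz").get_fdata()
rec = nib.load("dwi_90dirs_nesh.nii.gz").get_fdata()
mask = nib.load("brain_mask.nii.gz").get_fdata() > 0

print(f"RMSE inside mask: {masked_rmse(ref, rec, mask):.4f}")
```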
Figure 2 shows a slice of dMRI data for a single gradient direction, the output generated by NeSH, as well as the root mean squared error (RMSE) between the two images. NeSH produces a smoother image, removing the noise from the input image. The noise appears to be randomly distributed, without anatomical residuals. Figure 2: A single axial slice of dMRI data, from a single gradient direction, shown as baseline (column one) and as reconstructed by NeSH (column two). The root mean squared error (RMSE) between the two images is shown in column three. A two-fold magnification of the area in the green boxes shows the denoising effect more clearly. The comparisons of reconstruction error between NeSH and SHI are shown in Figure 3(a). NeSH has a higher RMSE when reconstructing a lower number of gradients, which lowers with an increasing number of gradients. SHI has a lower RMSE when reconstructing a lower number of gradients, which increases with an increasing number of gradients. SHI has a consistently lower RMSE compared to NeSH on both \(b=1000\ s/mm^{2}\) and \(b=3000\ s/mm^{2}\). The comparisons of upsampling error between NeSH and SHI are shown in Figure 3(b). For both NeSH and SHI the RMSE of the upsampled data lowers when the model is fit on more gradient directions. SHI has a lower RMSE for all gradient subsets for \(b=1000\ s/mm^{2}\), and for all subsets with more than 15 gradients for \(b=3000\ s/mm^{2}\). Figure 3: Root mean squared error (RMSE) of the reconstructed dMRI volumes by NeSH and spherical harmonics interpolation (SHI) compared to the gradient images used to fit the model (a) and compared to the full set of 90 gradient images (b), for both \(b=1000\ s/mm^{2}\) and \(b=3000\ s/mm^{2}\). #### 3.0.2 Spatial upsampling Figure 4 shows the color encoded FA maps for this experiment. The dataset reconstructed by NeSH fit on 90 gradients is able to reconstruct details in the cerebellar cortex and cerebellar white matter that are lost in cubic interpolation. Finer-grained details of the 1.25mm isotropic voxel data are lost in both upsampling methods. Figure 4: A slice of color encoded FA maps at the cerebellar level, generated for different datasets. Clockwise from top left: the 1.25mm isotropic original image (ground truth), the downsampled 2.5mm isotropic image, the 1.25mm isotropic image upsampled using cubic interpolation, and the 1.25mm isotropic image upsampled using NeSH. The arrows show two areas where NeSH is able to reconstruct details of the cerebellum which are lost in cubic interpolation. #### 3.0.3 DTI Metrics and fODF metrics Figure 5 shows the results of this experiment. The DTI metrics (MD, FA) show low error when downsampling, and in NeSH and SHI reconstructions on both 90 and 30 gradients, when compared to the metrics calculated on 90 gradients. Downsampling and SHI both over- and underestimate the metrics, with most errors located in the white matter areas. NeSH more frequently underestimates the metrics and the errors are located more in the grey matter areas. The AFD is overestimated by NeSH reconstructions, and underestimated when downsampling or reconstructing using SHI. For AFD the distribution of error is similar for all methods, but downsampling and SHI show an underestimation of the AFD, while NeSH shows errors in both directions. #### 3.0.4 fODF estimation The visualization of a group of fODFs in the centrum semi-ovale and the descending part of the CST can be seen in Figure 6.
The glyphs created from the NeSH reconstructions show a smooth, structurally-coherent change, while maintaining the important information, i.e. the crossing of fibers in the centrum semi-ovale. In presence of a big fiber tract such as the CST, the NeSH reconstruction shows a decrease in amplitude in other directions. In other methods, the fODFs exhibit more noise and less alignment between voxels, while the peaks appear sharper. Figure 5: Visualization of mean diffusivity (MD), fractional anisotropy (FA) and total apparant fiber density (AFD). In the first column the metrics are shown as a map for a single axial slice of the volume when calculated on the full set of 90 gradient images. The remaining columns show the difference map for the other datasets when compared to the 90 gradient images. Blue signifies negative difference, red signifies positive difference. ## 4 Discussion We have introduced a novel method to model only a single acquisition of dMRI data using a neural representation of spherical harmonics, called NeSH. We show that dMRI data reconstructed by NeSH appears to be denoised compared to the original data (Figure 2. We hypothesise that the model is able to capture the continuous structures of the brain, but not the erratic nature of the noise. The RMSE lies consistently higher for both the reconstruction and upsampling compared to SHI (Figure 3, which could partly be explained by the removal of noise. For the reconstruction of the input gradients, NeSH performs worse with a decreasing number of input gradients. Possibly this indicates that to find a good representation NeSH needs a minimum number of gradients, which appears to be around 15. Both NeSH and SHI show an increase in RMSE when upsampling to 90 gradients from a decreasing number of gradients, as is to be expected. We also show that NeSH can also be used to upsample in the spatial domain. Figure 4 shows that NeSH is able to reconstruct details that are not clearly visible in the 2.5mm isotropic data, but are present in the 1.25mm ground truth. This strengthens the hypothesis that using multiple gradient directions, NeSH can model a continuous representation of the dMRI data. While the achievable level of detail is lower than achieved by Alexander et al [2], it does not rely on a prior learned from a large dataset. Furthermore, we show that the reconstructed dMRI volumes can be used to calculate MD, FA, and AFD. Compared to the metrics calculated using 90 gradient direction, NeSH differs mostly in the gray matter areas, while downsampled volumes and SHI reconstructed volumes have differences in the white Figure 6: Magnified coronal view showing the fiber orientation distributions glyps for the different datasets on a background of a T1-weighted image. First row: centrum semi-ovale, highlighted region shows the increased intervoxel consistency in NeSH modelled data. Second row: descending part of the corticospinal tract, highlighted region shows increased intervoxel consistency in NeSH modelled data, as well as a decrease in the size of the crossing fibers. matter areas. This supports our hypothesis that NeSH benefits from the structural continuity of the fiber bundles to model the data. As the white matter areas are usually the areas of interest, this could be seems as a benefit of NeSH. The increased brightness in the posterior commissure and surrounding tissue can be explained by the lack of bias field correction in pre-processing. 
Finally, we show that fODFs generated from NeSH reconstructions have a smooth change in fiber directions between voxels. This is also supportive of the structural continuity hypothesis. The 90 gradient, 30 gradient, and SHI reconstructed FODs show a more erratic pattern, in which the FODs are less aligned overall. The decrease in size of the crossing fibers in the descending part of the CST shows that NeSH prioritizes the major bundle in this area. Some information on possible smaller bundles is now lost, however, which is something that should be looked into in future versions. #### 3.2.2 Limitations The lack of a gold-standard in dMRI complicates the interpretation of the experiments. The denoising effect shown in Figure 2 is an example. We cannot be certain if the representation modelled by NeSH is a more realistic one than the more noisy representation or just a smoother one. In the last experiment NeSH consistently shows lower values in grey matter areas. The fODFs in the grey matter area correspondingly show a less peaked, smaller amplitude. Compared to existing techniques this can be interpreted as an error, but anatomically it makes sense as there are no large fiber bundles in the grey matter. Additionally, with no ground truth data, it is difficult to assess how good the representation outside of the voxels actually is. Synthetic datasets with known ground truths can provide a better idea. Furthermore, both the architecture and positional encodings used in this paper are simple. Many developments in the field NeRF have taken place since [20, 12]. Architectural and methodological changes to NeSH could lead to further improvement. Finally, we choose to model the dMRI signal directly through an SH-series, in order to evaluate the data quality with a variety of downstream tasks. This is not a necessity. Anything that can be transformed into dMRI signal can be modelled by NeSH (e.g. peak directions or fODFs, which can be convolved into a diffusion weighted signal). #### 3.2.3 Future work Future work will further investigate the advantages of modelling dMRI data in a continuous space, as well as further evaluate the findings of the experiments. First, the quality and usability of the denoising properties of NeSH should be compared to other existing denoising methods. Second, using clinical datasets of lower angular and spatial resolution can provide insight into the'real-world' clinical applicability of NeSH. This is especially interesting in MRI acquisitions of pathology (e.g. a glial-cell tumor) in the brain, as models relying on a prior learned on outside data might fail here. The harmonization of dMRI datasets across scanners and protocols [13, 9], is another area of research where NeSH can be applied. Third, fiber tracking is a common use-case for dMRI. We have performed fibertracking using iFOD2 [16] with tract masks and begin- and endpoint inclusion for different numbers of gradient directions. This showed no major differences between the different methods for all inspected tracts. An explanation for this is the high spatial resolution of the HCP data, which allows tracts to be generated even with a downsampled angular resolution. Further research on datasets with lower spatial resolution will have to show the value of using NeSH reconstructions for fiber tracking. Fourth, a recent paper by Mancini et al [7] has shown how compression of dMRI data using sinusoidal representation networks (SIRENs) [11] does not lead to reduced quality in downstream tasks. 
Using a SIREN architecture could also prove useful for the SH-based approach we have described. Finally, the generalization of the model to other subjects, protocols, and scanners has to be evaluated. We have performed a preliminary experiment which showed comparable results for signal reconstruction and fODFs. ## 5 Conclusion Modeling dMRI data using NeSH produces results in downstream tasks with similar or possibly better results than established methods. It also shows promising results in the field of angular and spatial upsampling. NeSH can make use of the structural coherence in the brain, and does not rely on a prior learned on other datasets. The experiments in this paper provide an interesting avenue for modeling dMRI data, which should be further explored in future research.
2307.09953
Bulk viscosity of rotating, hot and dense spin 1/2 fermionic systems from correlation functions
In this work we have presented the one-loop calculation of the bulk viscosity of a system of rotating, hot and dense spin 1/2 fermions within the framework of the Kubo formalism, calculated from correlation functions of fields which in turn are used to calculate the spectral function of energy-momentum tensors. The calculation has been done in curved space with the help of the tetrad formalism, where the gamma matrices in this set-up assume their generic structure by becoming space dependent. The techniques of thermal field theory have been employed, which take the three energy scales, viz. temperature, chemical potential and angular velocity, into account in the Matsubara frequency summation. The study has been performed in the regime of very large angular velocities, ranging from 0.1 to 1.0 GeV. The fermion propagator used in this work is appropriate for the regime of large angular velocities.
Sarthak Satapathy
2023-07-19T12:47:59Z
http://arxiv.org/abs/2307.09953v4
# Bulk viscosity of rotating, hot and dense spin 1/2 fermionic systems from correlation functions ###### Abstract In this work we present the bulk viscosity of rotating, hot and dense fermions within the framework of the Kubo formalism, calculated from correlation functions. The calculation has been done in curved space with the help of the tetrad formalism, where the changes in the vertex factor for loop diagrams and in the velocity of the local rest frame occur due to the effect of the curved-space metric. We have provided numerical results describing the behaviour of the bulk viscosity as a function of temperature, angular velocity and chemical potential in a rotating medium. The results are valid when the angular velocity of the fluid is very large, i.e. of the order of \(10^{-1}\) GeV or higher. ## I Introduction The study of the Quark Gluon Plasma (QGP) created in heavy-ion collisions[1; 2] has gained a lot of attention when the plasma is subjected to extreme environments such as extremely high magnetic fields[3] and large angular velocities[4; 5]. Angular velocities of very high magnitude are produced in non-central heavy-ion collisions in the early stages of the formation of the QGP, as reported by the STAR collaboration[6], where the angular momentum is estimated to be of the order of \(1000\hbar\). In Ref.[6] the first measurement of the alignment between the angular momentum of a non-central collision and the spin of the emitted particles was performed, showing that this is the most vortical system[7] ever produced in heavy-ion collisions. The particles emitted from such a highly vortical system[7] are expected to be spin polarized[7] in the direction of their angular momentum. The dynamics of spin polarization has been studied in a variety of scenarios, such as a Gubser-expanding background[8], vorticity and non-local collisions[9], and boost-invariant hydrodynamics[10]. A very detailed study on the collective dynamics of spin-1/2 fermions can be found in Ref.[11]. A brief review of local and global spin polarization is given in Ref.[12]. Theoretical studies have anticipated many interesting phenomena to occur under these conditions, such as the chiral vortical effect[13; 14], the chiral vortical wave[15], the splitting of masses under rotation[16], the magnetic chiral density wave[17], vortical effects in AdS space[18], etc. These are anomalous processes whose experimental signatures can be observed in heavy-ion collision experiments. Under the influence of large angular velocities, the QCD medium offers the scope to investigate these anomalous properties in greater detail, as they will be more clearly exhibited, thus allowing an easier extraction of the experimental signatures. Apart from this, the study of the QCD phase diagram[4; 19; 20] under rotation is yet another interesting topic of research. Particularly in Ref.[20], the effect of rotation has been studied on the confining and deconfining phases of the QGP, where it is found that at a particular distance from the rotation axis the deconfining transition is observed. In addition to the usual confinement and deconfinement phases, there exists a mixed inhomogeneous phase, which contains spatially separated confinement and deconfinement regions; thus two deconfining temperatures are predicted to be present. Studies have also been performed for lattice SU(3) gauge theory[21], where the isothermal moment of inertia for a rigidly rotating QGP has been calculated.
It was found that the moment of inertia takes negative values below the supervortical temperature[21], which is 1.5 times the critical temperature, indicating a thermodynamic instability of rigid rotation beyond a certain temperature. One of the topics concerned with rotating QCD matter is the study of transport coefficients[22], which enter as inputs for performing hydrodynamical simulations. There exist various formalisms to compute the transport coefficients viz. Kinetic theory[23] studied through the Relativistic Boltzmann Equation (RBE) and Kubo formalism[24; 25; 26], based on the calculation of correlation functions and spectral functions from quantum field theory. In this work we have calculated the bulk viscosity from correlation functions of fields[24] which is a first principle approach. The Nonequilibrium Statistical Operator formalism (NESO) is a framework which allows to derive and compute transport coefficients by the help of Kubo formulas[25; 26]. A recent study by Hu[22] provides the Kubo formulas, derived from NESO formalism to calculate the transport coefficients of a rotating, hot and dense relativistic fluid. In order to approach this problem one has to calculate the correlation functions of fields which in turn allow us to compute the spectral function concerned with the corresponding transport coefficient. By comparing the analytic structure of the fermion propagator in magnetic field[27], one can guess that the propagator will not remain translationally invariant anymore due to the fixing of the rotation axis along the \(z\) direction. This indeed happens as shown in Ref.[28], where the translational invariance is recovered in the limit of very large angular velocities. This allows us to employ the propagator for studying transport and thermodynamic properties at very large angular velocities. In this work we have studied the bulk viscosity of spin-1/2 fermions at very large angular velocities. This work is organized as follows. In Section II we briefly review the behaviour of fermions in a rotating environment. Then we calculate the spectral function and the bulk viscosity of spin-1/2 fermions in Section III by using the concepts discussed in Section II. We then discuss the results of our work in Section IV. In the end we have provided calculational details in Appendix A. ## II Fermions in a rigidly rotating environment In this section we briefly give the idea of fermions in a rotating environment. In order to study this system we will work in curved space where the metric tensor resembling that of a curved space time is useful for describing the geometry of the region formed after the non-central collision which is rotating with an angular velocity \(\Omega\) around the \(z\)-axis. The metric tensor \(g^{\mu\nu}\) for this system is given by \[g_{\mu\nu}=\begin{pmatrix}1-(x^{2}+y^{2})\Omega^{2}&y\Omega&-x\Omega&0\\ y\Omega&-1&0&0\\ -x\Omega&0&-1&0\\ 0&0&0&-1\end{pmatrix}.\] (II.1) The Dirac equation describing a fermion in this cylinder[28; 29] is given by \[\big{(}i\widetilde{\gamma}^{\mu}\widetilde{D}_{\mu}-m\big{)}\psi=0\] (II.2) where \(\widetilde{\gamma}^{\mu}\) are the gamma matrices in curved space which are spacetime-dependent, \(\widetilde{D}_{\mu}\) is the covariant derivative in curved space and \(m\) is the mass of the fermion. 
Here \(\widetilde{D}_{\mu}\) is given by \[\widetilde{D}_{\mu}=\partial_{\mu}+\Gamma_{\mu}\] where \(\Gamma_{\mu}=\frac{1}{8}\omega_{\mu ab}\big{[}\gamma^{a},\gamma^{b}\big{]}\) is the affine connection and \(\omega_{\mu ab}\) is the spin connection. To calculate \(\omega_{\mu ab}\) one has to use the definition of vierbein (also called tetrad) and metric tensor \(g_{\mu\nu}\) which are given by \[e_{a}^{\ \mu}=\delta_{a}^{\ \mu}-\delta_{a}^{\ 0}\delta_{i}^{\ \mu}v_{i}, \quad e_{\ \mu}^{a}=\delta_{\ \mu}^{a}+\delta_{i}^{a}\delta_{\ \mu}^{0}v_{i}, \quad(a,\mu=0,1,2,3,\quad i=1,2,3)\] (II.3) \[g_{\mu\nu}=\eta_{ab}e_{\ \mu}^{a}e_{\ \nu}^{b},\] (II.4) which in turn can be used to calculate \[\omega_{\mu ab}=g_{\alpha\beta}e_{a}^{\ \alpha}\big{[}\partial_{\mu}e_{b}^{\ \beta}+\Gamma_{\mu\nu}^{\beta}e_{b}^{\ \nu}\big{]},\ \ \text{where}\ \ \Gamma_{\ \mu\nu}^{\beta}=\frac{g^{\beta\alpha}}{2}\big{[}\partial_{\nu}g_{\alpha\mu}+ \partial_{\mu}g_{\alpha\nu}-\partial_{\alpha}g_{\mu\nu}\big{]}.\] (II.5) Here \(\eta_{ab}=\text{diag}(1,-1,-1,-1)\) is the metric tensor in Minkowski space. In curved spacetime the energy-momentum tensor[29] is given by \[T^{\mu\nu}=\frac{i}{4}\big{(}\overline{\psi}\widetilde{\gamma}^{\mu} \widetilde{D}^{\nu}\psi+\overline{\psi}\widetilde{\gamma}^{\nu}\widetilde{D} ^{\mu}\psi\big{)}+\text{H.C}\] (II.6) where \(\psi\) and \(\overline{\psi}\) are the Dirac field and its conjugate, H.C denotes the Hermitian conjugate. In a uniformly rotating frame, \(\widetilde{D}^{\mu}\) is given by \[D^{\mu}=\big{(}\partial_{t}-i\frac{\Omega\Sigma_{3}}{2},-\partial_{x},- \partial_{y},-\partial_{z}\big{)}\] (II.7) where \(\Sigma_{3}=\frac{i}{2}\big{[}\gamma^{1},\gamma^{2}\big{]}\). In heavy-ion collisions the direction of rotation is perpendicular to the reaction plane which is taken to be the \(x\)-\(y\) plane for our work. Following the above mentioned calculations the free Lagrangian[31] with finite chemical potential \(\mu\) for a medium rotating with constant angular velocity \(\Omega\) is given by \[\mathcal{L}=\overline{\psi}\big{[}i\gamma^{\mu}\partial_{\mu}+\gamma^{0} \big{(}\Omega J_{z}+\mu\big{)}-m\big{]}\psi,\] (II.8) where \(J_{z}\) is the third component of the total angular momentum \(\vec{J}=\vec{x}\times\vec{p}+\vec{S}\) where \(\vec{S}=\frac{1}{2}\begin{pmatrix}\vec{\sigma}&0\\ 0&\vec{\sigma}\end{pmatrix}\) and \(m\) is the mass of the fermion. The fermion propagator[28] in momentum space derived from this Lagrangian by the help of Fock-Schwinger method[30] is given by \[S(p) = \frac{\big{(}p_{0}+\frac{\Omega}{2}-p_{z}+ip_{\perp}\big{)}\big{(} \gamma_{0}+\gamma_{3}\big{)}+m\big{(}1+\gamma_{5}\big{)}}{\big{(}p_{0}+\frac{ \Omega}{2}\big{)}^{2}-\vec{p}^{2}-m^{2}+i\epsilon}\mathcal{O}^{+}+\frac{\big{(} p_{0}-\frac{\Omega}{2}+p_{z}-ip_{\perp}\big{)}\big{(}\gamma_{0}-\gamma_{3}\big{)}+m \big{(}1+\gamma_{5}\big{)}}{\big{(}p_{0}-\frac{\Omega}{2}\big{)}^{2}-\vec{p}^ {2}-m^{2}+i\epsilon}\mathcal{O}^{-}\] (II.9) where \(p_{0}\) is the temporal component, \(p_{z}\) is the \(z\) component which is parallel to the axis of rotation and \(p_{\perp}=\sqrt{p_{x}^{2}+p_{y}^{2}}\) is the transverse component of the 4-momentum and finally \(\mathcal{O}^{\pm}\equiv\frac{1}{2}\big{[}1\pm i\gamma^{1}\gamma^{2}\big{]}\). We have employed Eq.(II.9) to perform calculations at finite temperature by the help of Imaginary Time Formalism(ITF). 
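The geometric quantities entering Eqs. (II.1)-(II.5) can be cross-checked symbolically; the sketch below computes Christoffel symbols \(\Gamma^{\beta}_{\ \mu\nu}\) of the rotating metric with sympy. It is only a verification aid for the definitions above, not part of the original derivation.

```python
import sympy as sp

t, x, y, z, W = sp.symbols("t x y z Omega", real=True)
X = [t, x, y, z]

# rotating metric of Eq. (II.1)
g = sp.Matrix([[1 - (x**2 + y**2) * W**2,  y * W, -x * W, 0],
               [ y * W,                   -1,      0,     0],
               [-x * W,                    0,     -1,     0],
               [ 0,                        0,      0,    -1]])
g_inv = g.inv()

def christoffel(beta, mu, nu):
    """Gamma^beta_{mu nu} = (1/2) g^{beta a} (d_nu g_{a mu} + d_mu g_{a nu} - d_a g_{mu nu})."""
    return sp.simplify(sum(sp.Rational(1, 2) * g_inv[beta, a]
                           * (sp.diff(g[a, mu], X[nu]) + sp.diff(g[a, nu], X[mu]) - sp.diff(g[mu, nu], X[a]))
                           for a in range(4)))

# two non-trivial components as a check of the definitions
print(christoffel(1, 0, 0))   # expected: -Omega**2 * x
print(christoffel(1, 0, 2))   # expected: -Omega
```

From these, the spin connection \(\omega_{\mu ab}\) of Eq. (II.5) follows by contracting with the tetrads of Eq. (II.3).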
## III Spectral function and bulk viscosity of fermionic system The bulk viscous pressure measures the deviation of the pressure from its equilibrium value, which causes the fluid to expand or compress. From first order dissipative hydrodynamics the bulk viscosity \(\zeta\) quantifies the dissipation when expansion or compression occurs in the fluid. The calculation of \(\zeta\) via correlation functions is given by the Kubo formula[22; 24; 25], which is based on the calculation of spectral functions of energy momentum tensor \(T^{\mu\nu}(x)\) and Noether current \(J^{\mu}(x)\) relevant to each transport coefficient. The Kubo formulas for a relativistic fluid are derived from the Nonequilibrium Statistical Operator(NESO) formalism[25; 26] developed by Zubarev, where the nonequilibrium averages of different dissipative quantities such as shear-stress tensor \(\pi^{\mu\nu}\), energy diffusion flux \(q^{\mu}\) and charge diffusion flux \(j^{\mu}(x)\) are calculated in a nonequilibrium scenario. The quantities \(\pi^{\mu\nu},q^{\mu}\) and \(j^{\mu}\) carry \(T^{\mu\nu}(x)\) and charge current \(J^{\mu}(x)\) along with rank-4 and rank-2 projectors, that are projected on to the spectral functions. Here, \(T^{\mu\nu}(x)\) and \(J^{\mu}(x)\) can be calculated from the respective quantum field theory under study. We will be restricting ourselves to the study of spin-1/2 fermions of a \(U(1)\) gauge theory. The estimation of \(\zeta\) through correlation functions at one-loop is calculated by the Kubo formula[24; 25; 26] \[\zeta=-\lim_{q_{0}\to 0}\frac{\rho_{\zeta}(q)}{q_{0}}\] (III.1) where \(\rho_{\zeta}(q)\) is the spectral function for bulk viscosity calculated from two-point correlation function of energy-momentum tensor \(T^{\mu\nu}\), \(q\) is the 4-momentum and \(q_{0}\) is the temporal component of the 4-momentum. The expression of \(\rho_{\zeta}(q)\) is given by \[\rho_{\zeta}(q)=\text{Im}\Pi(q)\] (III.2) where \(\Pi(q)\) is given by (III.3) where the notation \(\big{\langle}\mathcal{O}_{1}(x)\mathcal{O}_{2}(y)\big{\rangle}_{R}\) denotes the retarded thermal average of operators (say \(\mathcal{O}_{1}(x)\) and \(\mathcal{O}_{2}(y)\)) at spacetime points \(x\) and \(y\). Here \(\Pi_{\zeta}(q)\) is calculated in cylindrical coordinates where \(r=(t,\rho,\theta,z)\) and \(\mathcal{P}^{*}(x)\), which is the total pressure of the nonequilibrium system[32] is given by \[\mathcal{P}^{*}(x)=-\frac{T_{i}^{i}}{3}-c_{s}^{2}T^{00},\] (III.4) where \(T_{i}^{i}\) and \(T^{00}\) are various components of \(T^{\mu\nu}\) carrying spatial index \((i)\) and temporal index \((0)\) and \(c_{s}^{2}\) is the speed of sound, which for our case is \(\frac{1}{3}\) as we have considered a system of massless and non-interacting fermions. To further calculate Eq.(III.2), one has to perform the Wick's contraction of \(\psi(0)\) and \(\overline{\psi}(r)\) i.e \[\overline{\psi(0)\overline{\psi}}(r)=S(0,r)=\int\frac{d^{4}p}{(2\pi)^{4}}e^{- ip\cdot x}\big{(}-iS(p)\big{)},\] (III.5) where \(S(0,r)\) is the fermion propagator in coordinate space and \(S(p)\) is the fermion propagator in momentum space. It should be noted that in a rotating environment, the translational invariance of the coordinate space propagator is lost[28], due to the preferred direction of the rotation axis along the \(z\) direction. 
The breaking of translational invariance of the propagator along transverse directions the has also been seen in a background magnetic field[27; 33; 34], where a phase factor shift appears in the expression of the propagator. However for a rotating medium, the translational invariance is recovered by assuming that the fermion is completely dragged by the vortical motion or large \(\Omega\) which is of the order \(10^{-1}\) GeV or \(10^{22}\) sec\({}^{-1}\). In this situation the phase factor appearing in the coordinate space propagator[28] disappears completely, thus making the expression of the propagator depend only on the relative coordinates, which leads to a recovery of translational invariance. This situation is valid for early stages of heavy-ion collisions when the \(\Omega\) produced is very large. In Eq.(III.5) we have represented the coordinate space propagator as a Fourier transformation of the momentum space propagator[28] which is explicitly given by Eq.(II.9). On employing the Wick's contraction given in Eq.(III.5), then integrating over \(r\) and \(k\), \(\rho_{\zeta}(q)\) is given by \[\rho_{\zeta}(q) = {\rm Im}i\int\frac{d^{4}p}{(2\pi)^{4}}\Bigg{\{}-\frac{3k_{x}p_{z} }{8}\,{\rm Tr}\left[\widetilde{\gamma}^{3}S(p)\widetilde{\gamma}^{3}S(k) \right]+\frac{p_{z}^{2}}{4}\,{\rm Tr}\left[\widetilde{\gamma}^{3}S(p) \widetilde{\gamma}_{3}S(k)\right]+\frac{k_{z}^{2}}{4}\,{\rm Tr}\left[ \widetilde{\gamma}^{3}S(p)\widetilde{\gamma}_{3}S(k)\right]\Bigg{\}}\] (III.6) \[+ {\rm Im}i\Big{(}\frac{c_{s}^{2}}{12}\Big{)}\int\frac{d^{4}p}{(2 \pi)^{4}}\Bigg{\{}\,{\rm Tr}\left[\widetilde{\gamma}^{3}S(p)\widetilde{\gamma }^{0}\big{(}k_{0}+\frac{\Omega\Sigma_{3}}{2}\big{)}S(k)\right]-p_{z}\,{\rm Tr }\left[\widetilde{\gamma}^{3}\big{(}p_{0}+\frac{\Omega\Sigma_{3}}{2}\big{)}S( p)\widetilde{\gamma}^{0}S(k)\right]\] \[+ k_{z}\,{\rm Tr}\left[\widetilde{\gamma}^{3}S(p)\widetilde{\gamma }^{0}\big{(}k_{0}+\frac{\Omega\Sigma_{3}}{2}\big{)}S(k)\right]-k_{z}\,{\rm Tr }\left[\widetilde{\gamma}^{3}\big{(}p_{0}+\frac{\Omega\Sigma_{3}}{2}\big{)}S( p)\widetilde{\gamma}^{0}S(k)\right]\] \[+ k_{z}\,{\rm Tr}\left[\widetilde{\gamma}^{0}\big{(}p_{0}+\frac{ \Omega\Sigma_{3}}{2}\big{)}S(p)\widetilde{\gamma}^{3}S(k)\right]+p_{z}\,{\rm Tr }\left[\widetilde{\gamma}^{0}\big{(}p_{0}+\frac{\Omega\Sigma_{3}}{2}\big{)}S( p)\widetilde{\gamma}^{3}S(k)\right]\] \[- k_{z}\,{\rm Tr}\left[\widetilde{\gamma}^{0}S(p)\widetilde{\gamma }^{3}\big{(}k_{0}+\frac{\Omega\Sigma_{3}}{2}\big{)}S(k)\right]-p_{z}\,{\rm Tr }\left[\widetilde{\gamma}^{0}S(p)\widetilde{\gamma}^{3}\big{(}k_{0}+\frac{ \Omega\Sigma_{3}}{2}\big{)}\right]\Bigg{\}}\] \[+ {\rm Im}i\Big{(}\frac{c_{s}^{4}}{2}\Big{)}\int\frac{d^{4}p}{(2\pi) ^{4}}\Bigg{\{}-{\rm Tr}\left[\widetilde{\gamma}^{0}\big{(}p_{0}+\frac{\Omega \Sigma_{3}}{2}\big{)}S(p)\widetilde{\gamma}^{0}\big{(}k_{0}+\frac{\Omega \Sigma_{3}}{2}\big{)}S(k)\right]\] \[+ {\rm Tr}\left[\widetilde{\gamma}^{0}\big{(}p_{0}+\frac{\Omega \Sigma_{3}}{2}\big{)}^{2}S(p)\widetilde{\gamma}^{0}S(k)\right]+{\rm Tr}\left[ \widetilde{\gamma}^{0}S(p)\widetilde{\gamma}^{0}\big{(}k_{0}+\frac{\Omega \Sigma_{3}}{2}\big{)}^{2}S(k)\right]\Bigg{\}}\] In order to calculate Eq.(III.6) we will use ITF, where we will perform Matsubara frequency summation at finite chemical potential and angular velocity. The details on these computations has been provided in Appendix A. From the definition of \(\zeta\) given in Eq.(III.1), we take the long wavelength limit or the \(\vec{q}=\vec{0},q_{0}\to 0\) limit of \(\frac{\rho_{\zeta}(\vec{q},q_{0})}{q_{0}}\). 
Thus we obtain \(\zeta(T,\mu,\Omega)\) from \(\rho_{\zeta}(q)\) which is given by \[\zeta(T,\mu,\Omega)\] \[=-\frac{3}{8T}\Bigg{\{}\int\frac{d^{3}p}{(2\pi)^{3}}\Big{[}\frac{4 E_{p}^{2}p_{z}^{2}-4p_{\perp}^{2}p_{z}^{2}+8E_{p}p_{z}^{3}+4p_{z}^{4}-p_{z}^{2} \Omega^{2}}{4E_{p}^{2}\Gamma}\Big{]}\mathcal{N}(\pm\mu,\pm\Omega/2)\] \[+ \int\frac{d^{3}p}{(2\pi)^{3}}\Big{[}\frac{4E_{p}^{2}p_{z}^{2}-4p_{ \perp}^{2}p_{z}^{2}-8E_{p}p_{z}^{3}+4p_{z}^{4}-p_{z}^{2}\Omega^{2}}{4E_{p}^{2 }\Gamma}\Big{]}\mathcal{N}(\pm\mu,\mp\Omega/2)\] \[+ \frac{1}{2T}\Bigg{\{}\int\frac{d^{3}p}{(2\pi)^{3}}\Big{[}\frac{4E _{p}^{2}p_{z}^{2}-4p_{\perp}^{2}p_{z}^{2}+8E_{p}p_{z}^{3}+4p_{z}^{4}-p_{z}^{2} \Omega^{2}}{4E_{p}^{2}\Gamma}\Big{]}\mathcal{N}(\pm\mu,\pm\Omega/2)\] \[+ \int\frac{d^{3}p}{(2\pi)^{3}}\Big{[}\frac{4E_{p}^{2}p_{z}^{2}-4p_{ \perp}^{2}p_{z}^{2}-8E_{p}p_{z}^{3}+4p_{z}^{4}-p_{z}^{2}\Omega^{2}}{4E_{p}^{2 }\Gamma}\Big{]}\mathcal{N}(\pm\mu,\mp\Omega/2)\] \[+ \Big{(}\frac{c_{s}^{2}}{12T}\Big{)}\Bigg{\{}\int\frac{d^{3}p}{(2 \pi)^{3}}\frac{1}{4E_{p}^{2}\Gamma}\Big{\{}-2E_{p}^{2}p_{z}\Omega+4p_{\perp}^{2} p_{z}\Omega-8E_{p}p_{z}^{2}\Omega-4p_{z}^{3}\Omega+\frac{3}{2}p_{z}^{3}\Omega+4E_{p}^{3}p_{z}\] \[+ 8E_{p}p_{z}^{3}-4E_{p}p_{\perp}^{2}p_{z}+16E_{p}^{2}p_{z}^{2}-4E_{ p}p_{z}\Omega^{2}-\frac{p_{z}^{3}\Omega}{2}\Big{\}}\mathcal{N}(\pm\mu,\mp\Omega/2)\] \[+ \int\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{4E_{p}^{2}\Gamma}\Big{\{}-4 E_{p}^{2}p_{z}\Omega+2p_{\perp}^{2}p_{z}\Omega-4E_{p}p_{z}^{2}\Omega-2p_{z}^{3}\Omega+ \frac{p_{z}\Omega^{3}}{2}\Big{\}}\mathcal{N}(\pm\mu,\mp\Omega/2)\] \[- \frac{c_{s}^{4}}{2T}\Bigg{\{}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{1 }{4E_{p}^{2}\Gamma}\Big{\{}2E_{p}p_{z}^{2}\Omega-E_{p}\Omega^{3}/2-2E_{p}^{2} \Omega^{2}+2p_{\perp}^{2}\Omega^{2}-2p_{z}^{2}\Omega^{2}+\Omega^{4}/2+E_{p} \Omega^{3}/2-2E_{p}p_{z}\Omega^{2}\Big{\}}\mathcal{N}(\pm\mu,\mp\Omega/2)\] \[+ \int\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{4E_{p}^{2}\Gamma}\Big{\{} \Omega^{4}/2-4E_{p}^{2}\Omega^{2}+4p_{\perp}^{2}\Omega^{2}-2E_{p}p_{z}^{2} \Omega-4p_{z}^{2}\Omega^{2}-4E_{p}^{2}\Omega^{2}\Big{\}}\mathcal{N}(\pm\mu,\pm \Omega/2)\] where \[\begin{split}\mathcal{N}(\pm\mu,\pm\Omega/2)&=n_{F}(E_{p} +\mu+\Omega/2)\big{(}1-n_{F}(E_{p}+\mu+\Omega/2)\big{)}+n_{F}(E_{p}-\mu-\Omega/ 2)\big{(}1-n_{F}(E_{p}-\mu-\Omega/2)\big{)},\\ \mathcal{N}(\pm\mu,\mp\Omega/2)&=n_{F}(E_{p}+\mu- \Omega/2)\big{(}1-n_{F}(E_{p}+\mu-\Omega/2)\big{)}+n_{F}(E_{p}-\mu+\Omega/2) \big{(}1-n_{F}(E_{p}-\mu+\Omega/2)\big{)},\end{split}\] (III.8) where \(n_{F}(x)\) is the Fermi-Dirac distribution function, \(E_{p}=\sqrt{p_{z}^{2}+p_{\perp}^{2}+m^{2}}\) is the energy. The final expression of \(\zeta\) for a rotaing, hot and dense fermionic system as a function of \(T,\mu\) and \(\Omega\), obtained from Kubo formalism is given by Eq.(III.7). Here \(\Gamma\) stands for the thermal width of the medium which arises at finite temperature due to interactions in the medium. In general \(\Gamma\) is a function of \(T,\mu,\Omega\) and can be calculated for various interactions. Here we have chosen \(\Gamma\) to be a constant for simplicity. In the next section we will discuss the numerical results of our work. ## IV Results and Discussions In this section we will discuss the numerical results of bulk viscosity for a rotating, hot and dense fermionic system obtained from Kubo formalism. The quantum field theoretical study of \(\zeta\) captures the interplay between \(\Omega\) and \(\mu\) which enter into the distribution function through Matsubara frequency summation. 
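The expression above can be evaluated numerically. The sketch below (Python/SciPy) shows how the thermal weights \(\mathcal{N}(\pm\mu,\pm\Omega/2)\) of Eq. (III.8) and the cylindrical \(d^{3}p/(2\pi)^{3}\) integration of one representative bracketed term of Eq. (III.7) could be set up; the momentum cutoff, the constant thermal width \(\Gamma\), and the restriction to a single term are our own simplifying assumptions, and the full \(\zeta\) sums all terms of Eq. (III.7).

```python
import numpy as np
from scipy import integrate

# Sketch: thermal weights N(+-mu, +-Omega/2) of Eq. (III.8) and the cylindrical
# momentum integration of ONE representative bracketed term of Eq. (III.7).
# The cutoff, constant Gamma and single-term restriction are assumptions.

def n_F(x, T):
    """Fermi-Dirac distribution."""
    return 1.0 / (np.exp(x / T) + 1.0)

def N_weight(E, T, mu, omega_half):
    """N of Eq. (III.8): n_F (1 - n_F) at the shifted arguments E +- (mu + Omega/2)."""
    a = n_F(E + mu + omega_half, T)
    b = n_F(E - mu - omega_half, T)
    return a * (1.0 - a) + b * (1.0 - b)

def one_term(T, mu, Omega, Gamma, m=0.0, cutoff=20.0):
    """Integral of the first bracketed term of Eq. (III.7), without the -3/(8T) prefactor."""
    def integrand(pz, pperp):
        E = np.sqrt(pz**2 + pperp**2 + m**2) + 1e-12      # tiny regulator at the origin
        num = 4*E**2*pz**2 - 4*pperp**2*pz**2 + 8*E*pz**3 + 4*pz**4 - pz**2*Omega**2
        jac = 2.0 * np.pi * pperp / (2.0 * np.pi)**3      # d^3p = 2 pi p_perp dp_perp dp_z
        return jac * num / (4.0 * E**2 * Gamma) * N_weight(E, T, mu, 0.5 * Omega)
    val, _ = integrate.dblquad(integrand, 0.0, cutoff,
                               lambda _: -cutoff, lambda _: cutoff)
    return val

print(one_term(T=0.25, mu=0.05, Omega=0.25, Gamma=0.05))  # illustrative parameters (GeV)
```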
From the expression of \(\zeta\) in Eq.(III.7) it is quite obvious that \(\Omega\) plays the role of an effective chemical potential in the medium. But at the same time we should also note that \(\Omega\) enters into \(\zeta\) through the momentum space propagator given by Eq.(II.9). Therefore it would be erroneous to assume that the behaviour of \(\zeta\) with \(\mu\) and with \(\Omega\) will be the same. The results for bulk viscosity have been shown by plotting the dimensionless quantity \(\zeta/T^{3}\) as a function of temperature, angular velocity and chemical potential for massless fermions. In Fig.1(a) we have shown the variation with \(T\) between 0.15 and 0.4 GeV for \(\Omega=0.2,0.25,0.3\) GeV at \(\mu=0.05\) GeV. The small value of \(\mu\) is appropriate for the QGP, which exists above \(T=150\) MeV, and the highest value of \(T\) is taken to be 0.4 GeV, which is achievable in heavy-ion collision experiments. The results show an increasing behaviour with temperature. It is observed that the values of \(\zeta/T^{3}\) remain larger for higher values of \(\Omega\) in the low temperature range up to around \(T=0.2\) GeV; that is, at any particular temperature in the low temperature regime, \(\zeta/T^{3}\) increases with \(\Omega\). This trend changes as the temperature increases further: somewhere close to \(T=0.2\) GeV the ordering reverses, and the values become higher for smaller values of \(\Omega\) at any particular temperature. The increase in bulk viscosity with temperature has been reported in Ref.[25] for a hot QGP at zero \(\Omega\). We also observe that there is a slight dip in the values at low temperatures, which is similar to the findings in Ref.[25]. It can be attributed to the fact that bulk viscosity is the major channel of dissipation at higher temperatures, whereas it is slightly suppressed at lower temperatures, where \(\Omega\) is the dominant energy scale. The occurrence of higher values of \(\zeta/T^{3}\) for lower values of \(\Omega\) in the high temperature regime is not very straightforward to understand, since Eq.(III.7) shows a non-trivial dependence of \(\zeta\) on \(\Omega\). We have also shown the variation with temperature by fixing \(\Omega=0.25\) GeV at \(\mu=0.05\), 0.075 and 0.1 GeV in Fig.1(b). The results show an increasing trend as the chemical potential increases. Similar to Fig.1(a), we observe a dip at lower values of temperature, but contrary to Fig.1(a), as the chemical potential increases, \(\zeta/T^{3}\) attains higher values at any particular temperature. The increasing behaviour of \(\zeta/T^{3}\) at higher temperature, as the chemical potential increases, is linked to the participation of a larger number of fermions in the dissipation process. The dips encountered at lower values of temperature can be attributed to the fact that \(\Omega\) is the dominant energy scale in that regime and it affects the behaviour of \(\zeta/T^{3}\). To gain some clarity on the behaviour of \(\zeta/T^{3}\) with \(\Omega\), we take a look at Fig.2(a), where we have shown the variation with \(\Omega\) between 0.2 and 0.4 GeV at \(T=0.25,0.30\) and 0.35 GeV at fixed \(\mu=0.05\) GeV. A decreasing behaviour is seen as \(\Omega\) increases, with higher values for higher temperatures at any particular \(\Omega\). This shows that dissipation through the bulk viscosity is minimized with rotation, but is enhanced with temperature. 
To study the behaviour of \(\zeta/T^{3}\) with \(\Omega\) for a fixed temperature at various chemical potentials, we have shown in Fig.2(b) the variation of \(\zeta/T^{3}\) with \(\Omega\) at \(T=0.25\) GeV for \(\mu=0.05,0.075\) and 0.1 GeV. Initially a decreasing trend is observed with \(\Omega\), but as \(\mu\) increases from 0.05 to 0.1 GeV, \(\zeta/T^{3}\) attains higher values, which demonstrates that the bulk viscosity increases due to the participation of a larger number of fermions. It is also clear from Fig.2(b) that the chemical potential plays a role similar to that of the temperature, as both are responsible for increasing the bulk viscosity. Changes in the chemical potential, even by a value as small as 0.025 GeV, have a dominant counter-effect over \(\Omega\), as shown in Fig.2(b). Finally, we have studied the variation of \(\zeta/T^{3}\) with \(\mu\), where a purely monotonically increasing behaviour is observed. From the previous plots in Figs.1 and 2 we have seen that both temperature and chemical potential enhance the bulk viscosity and therefore compete with \(\Omega\), which has the opposite effect. In Fig.3(a) an increasing trend is observed with chemical potential when \(\Omega\) is fixed at 0.25 GeV and \(T\) is increased, which is quite expected. A similar trend is observed in Fig.3(b) when we fix \(T\) at 0.25 GeV and increase \(\Omega\). But here we see that in the regime of low chemical potential, curves with lower values of \(\Omega\) dominate over those with larger values, and this trend gradually reverses when \(\mu\) is increased. This is due to the fact that a larger value of \(\Omega\) suppresses the bulk viscosity in the region of low chemical potential, acting as the dominant energy scale. But when the chemical potential is increased beyond a certain value it dominates over the effect of \(\Omega\), thus reversing the trend of the graph. We see that although the angular velocity plays the role of an effective chemical potential in the medium through the distribution function, its effect on the bulk viscosity is opposite to that of the chemical potential. The reason for the occurrence of such an effect is the presence of \(\Omega\) not just in the distribution function, but also in the fermion propagator. Although it is not straightforward to justify the effect of \(\Omega\) due to the complicated analytic structure of \(\zeta\) given by Eq.(III.7), nevertheless a heuristic interpretation can be made. We know that bulk viscosity is a dissipation that occurs when there is a change in pressure from its equilibrium value during the expansion or contraction of the relativistic fluid. A finite rotation in the medium creates a balancing effect that counters the dissipative effect coming from temperature and chemical potential, thus reducing the bulk viscosity. It may be assumed that the bulk viscosity of a rotating fermionic system will eventually approach conformality[24] when \(\Omega\) significantly exceeds all other energy scales associated with the system. To verify this we have plotted \(\zeta/T^{3}\) vs \(\Omega\) in the range \(\Omega=0.2\) - 0.8 GeV in Fig.4, where we observe that the values decrease with \(\Omega\) up to a certain value and then start increasing. The reason for this sudden increase can be traced back to the relation between the coordinates in inertial and non-inertial frames of reference. 
For a frame uniformly rotating about the \(z\) axis with a uniform angular velocity \(\vec{\Omega}=\Omega\hat{z}\), the coordinates are related to each other by the transformation \[T=t,\ \ X=x\cos\Omega t-y\sin\Omega t,\ \ Y=x\sin\Omega t+y\cos\Omega t,\ \ Z=z\] (IV.1) where \(x^{\mu}\equiv(t,x,y,z)\) are the coordinates of the frame that is uniformly rotating with angular velocity \(\Omega\) relative to the frame whose coordinates are given by \(X^{\mu}\equiv(T,X,Y,Z)\). From Eq.(IV.1) we should note that, for a very large \(\Omega\), the condition of synchronicity[29], i.e., that physical quantities measured in the rest frame of any observer are the same, will not be satisfied. In our work there are different energy scales associated with the system, viz. \(\Omega\), \(T\) and \(\mu\), which have different effects on the system. For a very large \(\Omega\) the breakdown of synchronicity will lead to dissipation through \(\Omega\), as shown in Figs.4 and 5. To maintain this condition, the rest of the energy scales associated with the system, i.e., the temperature and the chemical potential, should also increase accordingly. In this context a very useful discussion by Chernodub _et al._[35] states that in order to maintain causality, i.e., the condition \(\Omega\rho\leq 1\), a rigidly rotating system should be bounded in the directions transverse to the rotation axis. A violation of this condition by fermions which travel beyond \(\rho\) will cause unphysical effects[35] to occur. In Fig.4(a) we see that at different values of temperature and chemical potential the curves start to increase at different values of \(\Omega\), which implies that the threshold value of \(\Omega\) is unique for any particular combination of temperature and chemical potential, beyond which the condition of synchronicity breaks down. It is not easy to deduce the threshold value of \(\Omega\), since \(\zeta\) has a complicated dependence on \(\Omega\). We have also studied the effect of increasing \(\mu\) at fixed \(T\) in Fig.4(b), which is similar to Fig.4(a) in that an increasing trend is observed at high \(\Omega\); the difference is that the threshold value of \(\Omega\) appears at a larger value for low chemical potential, whereas it appears at a smaller value for low temperature, as shown in Fig.4(b). ## V Conclusions and outlook In this work we have studied the bulk viscosity of a system of fermions under rotation at finite temperature and density, which becomes important when various QCD-related phenomena are studied in the presence of large angular velocities of the order of \(10^{22}\) sec\({}^{-1}\). Here we have specifically studied the behaviour of the bulk viscosity of massless spin-1/2 fermions under rotation within the Kubo formalism, which is based on quantum field theory. The calculations have been performed by computing the spectral function of the energy-momentum tensor at finite angular velocity, temperature and chemical potential. The spectral function is useful for calculating the bulk viscosity of the medium. The angular velocity affects the transport coefficients by introducing another energy scale into the system, apart from temperature and chemical potential. Throughout this work the angular velocity is taken to be large, i.e., of the order of \(10^{-1}\) GeV, which corresponds to \(10^{22}\) sec\({}^{-1}\), the highest value of \(\Omega\) that has been attained in the STAR collaboration[6]. 
Accordingly, the fermion propagator[28] that has been employed for our calculations is translationally invariant in the regime of large angular velocities and hence applicable for our study. The analytic calculations show that \(\Omega\) enters into the expression of bulk viscosity through the distribution function and the momentum space propagator. Numerical results show that \(\zeta/T^{3}\) decreases with \(\Omega\), which is opposite to the effect of temperature and chemical potential which have an enhancing effect. In the regime of low \(T\) and \(\mu\), a \(\Omega\) of the order \(10^{-1}\) GeV has a dominating effect tending to compete with \(T\) and \(\mu\) thus minimizing the dissipation coming from bulk viscosity. But with a gradual increase in temperature and chemical potential, the dissipation starts to increase. An assumption might be made regarding the behaviour of bulk viscosity at extremely high \(\Omega\), that the bulk viscosity would approach conformality i.e zero. But this assumption is proved to be incorrect when the curves of \(\zeta/T^{3}\) and \(\zeta\) are analysed as shown in Fig.4 and 5. Initially rotation causes \(\zeta\) to decrease with \(\Omega\), but gradually at very high \(\Omega\) (as compared to \(T\) and \(\mu\)), the relation between the rotating and non-rotating coordinate systems given by Eq.(IV.1) breaks down, following which \(\Omega\) acts as the main source of bulk dissipation. We must also note that, a very high \(\Omega\), exceeding the present value of \(10^{22}\) sec\({}^{-1}\) can be produced only when the temperature and chemical potential produced in heavy-ion collisions are much higher than the ones taken in this work, thus having an effect comparable to that of \(\Omega\). Therefore it might be concluded that a physically valid \(\Omega\) will be the one where the quantities \(\zeta\) and \(\zeta/T^{3}\) decrease with \(\Omega\) at any value of temperature and chemical potential. Our work deals with a general result of one-loop Kubo estimation of bulk viscosity of spin 1/2 fermions under rotation. It is applicable for the study of the bulk viscosity of spin 1/2 baryons in a rotating environment. An extension to the QCD case is non-trivial since the contribution of gluons has to be taken into account. These being spin-1 particles are expected to be affected by rotation which would change the structure of the momentum space propagator. Therefore a study of gauge bosons in a rotating medium would be a starting point before approaching this problem. We should note that both magnetic field and rotation are generated by off-central heavy ion collisions, affecting the dynamics of the particles in the generated medium. While magnetic field affects the charged particles, rotation affects those particle which carry a non-zero spin. The effect of magnetic field and rotation are seen in the propagators when a quantum field theoretical study is performed. A charged particle with non-zero spin would be affected by both rotation and magnetic field. In order to study such a situation one has to employ a propagator in magnetic field and rotation for performing the computations. We should also note that our study in this work is limited to the case of one-loop diagram, which has been successful in the study of transport coefficients such as shear viscosity, bulk viscosity, thermal conductivity and electrical conductivity via NJL model and Kubo formulas. 
But in Ref.[25], a multiloop diagram study has been performed for zero rotation, highlighting its importance for bulk viscosity as it is successful in overcoming the inconsistencies associated with the \(1/N_{c}\) power counting scheme. ## Acknowledgements I thank Prof. Maxim Chernodub for his deep insights and suggestions on the problem. I thank Rajeev Singh for his useful comments and suggestions on the manuscript. I am grateful to my colleagues Pushpa Panday, Abhishek Tiwari, Salman Ahamad Khan, Sumit and Debarshi Dey for the valuable discussions I had with them while working on the problem. This work has been funded by the Institute Post Doctoral Scheme of Indian Institute of Technology Roorkee under the grant IITR/Estt-(A)-Rect-Cell-E-5001(130)18490. ## Appendix A Matsubara frequency sums at finite rotation, temperature and chemical potential In this work we have employed Matsubara frequency summation[36] to calculate the bulk viscosity. Here we provide the details of the Matsubara frequency summation used for calculating the spectral function of energy-momentum tensors. At finite temperature \[p_{0}\rightarrow i\widetilde{\omega}_{N},\ \ q_{0}\to i\nu_{N},\ \ \int\frac{dp_{0}}{2\pi}\to iT\sum_{N},\ \ \ \omega_{N}=(2N+1)\pi T\] where \(p_{0}\) and \(q_{0}\) are the temporal components of the 4-momentum of the fermion and boson in the one-loop diagram. The fermion propagator under rotation is given by Eq.(II.9) which reads as follows \[S(p) = \frac{\big{(}p_{0}+\frac{\Omega}{2}-p_{z}+ip_{\perp}\big{)}\big{(} \gamma_{0}+\gamma_{3}\big{)}+m\big{(}1+\gamma_{5}\big{)}}{\big{(}p_{0}+\frac{ \Omega}{2}\big{)}^{2}-\vec{p}^{2}-m^{2}+i\epsilon}\mathcal{O}^{+}+\frac{\big{(} p_{0}-\frac{\Omega}{2}+p_{z}-ip_{\perp}\big{)}\big{(}\gamma_{0}-\gamma_{3} \big{)}+m\big{(}1+\gamma_{5}\big{)}}{\big{(}p_{0}-\frac{\Omega}{2}\big{)}^{2 }-\vec{p}^{2}-m^{2}+i\epsilon}\mathcal{O}^{-}\.\] In one-loop Kubo calculations we encounter fermionic Matsubara frequency summations of the type 1. \[T\sum_{\{p_{N}\}}\frac{1}{\big{[}i\widetilde{\omega}_{N}+\frac{\Omega}{2}+\mu \big{]}^{2}-\vec{p}^{2}-m^{2}}\frac{1}{\big{[}i\nu_{N}-i\widetilde{\omega}_{N }+\frac{\Omega}{2}-\mu\big{]}^{2}-\vec{k}^{2}-m^{2}}\] (A.2) 2. \[T\sum_{\{p_{N}\}}\frac{1}{\big{[}i\widetilde{\omega}_{N}+\frac{\Omega}{2}+\mu \big{]}^{2}-\vec{p}^{2}-m^{2}}\frac{1}{\big{[}i\nu_{N}-i\widetilde{\omega}_{N }-\frac{\Omega}{2}-\mu\big{]}^{2}-\vec{k}^{2}-m^{2}}\] (A.3) 3. \[T\sum_{\{p_{N}\}}\frac{1}{\big{[}i\widetilde{\omega}_{N}-\frac{\Omega}{2}+\mu \big{]}^{2}-\vec{p}^{2}-m^{2}}\frac{1}{\big{[}i\nu_{N}-i\widetilde{\omega}_{N }+\frac{\Omega}{2}-\mu\big{]}^{2}-\vec{k}^{2}-m^{2}}\] (A.4) 4. 
\[T\sum_{\{p_{N}\}}\frac{1}{\big{[}i\widetilde{\omega}_{N}-\frac{\Omega}{2}+\mu \big{]}^{2}-\vec{p}^{2}-m^{2}}\frac{1}{\big{[}i\nu_{N}-i\widetilde{\omega}_{N }-\frac{\Omega}{2}-\mu\big{]}^{2}-\vec{k}^{2}-m^{2}}\] (A.5) Following the Saclay method[36] of the evaluation of Matsubara frequency sums we obtain the results for the above frequency summations in Eqs.(A.2) -(A.5) in (a), (b), (c) and (d) as follows : (a) \[T\sum_{\{p_{N}\}}\frac{1}{\big{[}i\widetilde{\omega}_{N}+\frac{ \Omega}{2}+\mu\big{]}^{2}-\vec{p}^{2}-m^{2}}\frac{1}{\big{[}i\nu_{N}-i \widetilde{\omega}_{N}+\frac{\Omega}{2}-\mu\big{]}^{2}-\vec{k}^{2}-m^{2}}\] \[= \frac{1}{4E_{p}E_{k}}\Bigg{\{}\frac{n_{F}(E_{p}+\mu+\Omega/2)+n_{ F}(E_{k}-\mu+\Omega/2)-1}{q_{0}-E_{p}-E_{k}}+\frac{n_{F}(E_{k}+\mu+\Omega/2)-n_{F}(E_{p} +\mu+\Omega/2)}{q_{0}+E_{k}-E_{p}}\] \[+ \frac{n_{F}(E_{p}-\mu-\Omega/2)-n_{F}(E_{k}-\mu-\Omega/2)}{q_{0} +E_{p}-E_{k}}+\frac{1-n_{F}(E_{p}-\mu-\Omega/2)-n_{F}(E_{k}+\mu-\Omega/2)}{q_{ 0}+E_{k}+E_{p}}\Bigg{\}}\] (b) \[T\sum_{\{p_{N}\}}\frac{1}{\big{[}i\widetilde{\omega}_{N}+\frac{ \Omega}{2}+\mu\big{]}^{2}-\vec{p}^{2}-m^{2}}\frac{1}{\big{[}i\nu_{N}-i \widetilde{\omega}_{N}-\frac{\Omega}{2}-\mu\big{]}^{2}-\vec{k}^{2}-m^{2}}\] \[= \frac{1}{4E_{p}E_{k}}\Bigg{\{}\frac{n_{F}(E_{p}+\mu+\Omega/2)+n_{ F}(E_{k}-\mu-\Omega/2)-1}{q_{0}-E_{p}-E_{k}}+\frac{n_{F}(E_{k}+\mu+\Omega/2)-n_{F}(E_{p} +\mu+\Omega/2)}{q_{0}+E_{k}-E_{p}}\] \[+ \frac{n_{F}(E_{p}-\mu-\Omega/2)-n_{F}(E_{k}-\mu-\Omega/2)}{q_{0} +E_{p}-E_{k}}+\frac{1-n_{F}(E_{p}-\mu-\Omega/2)-n_{F}(E_{k}+\mu+\Omega/2)}{q_{0} +E_{k}+E_{p}}\Bigg{\}}\] (c) \[T\sum_{\{p_{N}\}}\frac{1}{\left[i\widetilde{\omega}_{N}-\frac{ \Omega}{2}+\mu\right]^{2}-\vec{p}^{2}-m^{2}}\frac{1}{\left[i\nu_{N}-i\widetilde {\omega}_{N}+\frac{\Omega}{2}-\mu\right]^{2}-\vec{k}^{2}-m^{2}}\] \[= \frac{1}{4E_{p}E_{k}}\Bigg{\{}\frac{n_{F}(E_{p}+\mu-\Omega/2)+n_{ F}(E_{k}-\mu+\Omega/2)-1}{q_{0}-E_{p}-E_{k}}+\frac{n_{F}(E_{k}+\mu-\Omega/2)-n_{ F}(E_{k}+\mu-\Omega/2)}{q_{0}+E_{k}-E_{p}}\] \[+ \frac{n_{F}(E_{p}-\mu+\Omega/2)-n_{F}(E_{k}-\mu+\Omega/2)}{q_{0}+ E_{p}-E_{k}}+\frac{1-n_{F}(E_{p}-\mu+\Omega/2)-n_{F}(E_{k}+\mu-\Omega/2)}{q_{0}+E_{k} +E_{p}}\Bigg{\}}\] (d) \[T\sum_{\{p_{N}\}}\frac{1}{\left[i\widetilde{\omega}_{N}-\frac{ \Omega}{2}+\mu\right]^{2}-\vec{p}^{2}-m^{2}}\frac{1}{\left[i\nu_{N}-i \widetilde{\omega}_{N}-\frac{\Omega}{2}-\mu\right]^{2}-\vec{k}^{2}-m^{2}}\] \[= \frac{1}{4E_{p}E_{k}}\Bigg{\{}\frac{n_{F}(E_{p}+\mu-\Omega/2)+n_ {F}(E_{k}-\mu-\Omega/2)-1}{q_{0}-E_{p}-E_{k}}+\frac{n_{F}(E_{k}+\mu-\Omega/2)- n_{F}(E_{k}+\mu-\Omega/2)}{q_{0}+E_{k}-E_{p}}\] \[+ \frac{n_{F}(E_{p}-\mu+\Omega/2)-n_{F}(E_{k}-\mu+\Omega/2)}{q_{0} +E_{p}-E_{k}}+\frac{1-n_{F}(E_{p}-\mu+\Omega/2)-n_{F}(E_{k}+\mu-\Omega/2)}{q_{ 0}+E_{k}+E_{p}}\Bigg{\}}\]
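As a numerical cross-check of how \(\Omega/2\) enters these sums as a shift of the chemical potential, the basic building block of Eqs. (A.2)-(A.5), a single fermionic Matsubara sum, can be compared against its standard closed form. The sketch below (Python/NumPy) uses illustrative values of \(T,\mu,\Omega\) and \(E\); the quoted closed form is the textbook single-propagator result, not Eqs. (a)-(d) themselves.

```python
import numpy as np

# Check of the building block of the sums (A.2)-(A.5):
#   T * sum_N 1 / [ (i w_N + mu + Omega/2)^2 - E^2 ],  w_N = (2N+1) pi T,
# against the standard closed form
#   -(1/(2E)) * [ 1 - n_F(E - mu - Omega/2) - n_F(E + mu + Omega/2) ],
# which shows explicitly that Omega/2 acts as a shift of the chemical potential.

T, mu, Omega, E = 0.25, 0.05, 0.25, 0.8    # GeV, illustrative values

def n_F(x):
    return 1.0 / (np.exp(x / T) + 1.0)

mu_eff = mu + 0.5 * Omega

# truncated sum over fermionic Matsubara frequencies (terms decay like 1/w_N^2)
N_cut = 200_000
N = np.arange(-N_cut, N_cut)
w = (2 * N + 1) * np.pi * T
truncated = T * np.sum(1.0 / ((1j * w + mu_eff) ** 2 - E ** 2)).real

closed_form = -(1.0 / (2.0 * E)) * (1.0 - n_F(E - mu_eff) - n_F(E + mu_eff))

print(truncated, closed_form)              # agree to roughly 1e-6
assert abs(truncated - closed_form) < 1e-4
```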
2304.10952
Simplest fidelity-estimation method for graph states with depolarizing noise
Graph states are entangled states useful for several quantum information processing tasks such as measurement-based quantum computation and quantum metrology. As the size of graph states realized in experiments increases, it becomes more essential to devise efficient methods estimating the fidelity between the ideal graph state and an experimentally-realized actual state. Any efficient fidelity-estimation method, in general, must use multiple experimental settings, i.e., needs to switch between at least two measurements. Recently, it has been shown that a single measurement is sufficient if the noise can be modeled as the phase-flip error. Since the bit-flip error should also occur in several experiments, it is desired to extend this simplest method to noise models that include phase and bit-flip errors. However, it seems to be nontrivial because their result strongly depends on properties of the phase-flip error. In this paper, by analyzing effects of the bit-flip error on stabilizer operators of graph states, we achieve the extension to the depolarizing noise, which is a major noise model including phase and bit-flip errors. We also numerically evaluate our simplest method for noise models interpolating between the phase-flip and depolarizing noises.
Tomonori Tanizawa, Yuki Takeuchi, Shion Yamashika, Ryosuke Yoshii, Shunji Tsuchiya
2023-04-21T13:46:41Z
http://arxiv.org/abs/2304.10952v2
# Simplest fidelity-estimation method for graph states with depolarizing noise ###### Abstract Graph states are entangled states useful for several quantum information processing tasks such as measurement-based quantum computation and quantum metrology. As the size of graph states realized in experiments increases, it becomes more essential to devise efficient methods estimating the fidelity between the ideal graph state and an experimentally-realized actual state. Any efficient fidelity-estimation method, in general, must use multiple experimental settings, i.e., needs to switch between at least two measurements. Recently, it has been shown that a single measurement is sufficient if the noise can be modeled as the phase-flip error. Since the bit-flip error should also occur in several experiments, it is desired to extend this simplest method to noise models that include phase and bit-flip errors. However, it seems to be nontrivial because their result strongly depends on properties of the phase-flip error. In this paper, by analyzing effects of the bit-flip error on stabilizer operators of graph states, we achieve the extension to the depolarizing noise, which is a major noise model including phase and bit-flip errors. We also numerically evaluate our simplest method for noise models interpolating between the phase-flip and depolarizing noises. ## I Introduction Graph states [1] are entangled states useful for several quantum information processing tasks such as measurement-based quantum computation (MBQC) [2], quantum metrology [3], and quantum communication [4]. Given this versatility, tremendous theoretical [5; 6; 7; 8; 9] and experimental [10; 11; 12; 13; 14; 15; 16; 17; 18] efforts have been devoted to increase the size of graph states. As the size \(n\) of graph states realized in experiments increases, it becomes more essential to devise efficient methods estimating the fidelity \(\langle G|\rho|G\rangle\) between the ideal \(n\)-qubit graph state \(|G\rangle\) and an experimentally-realized actual state \(\rho\equiv\mathcal{E}(|G\rangle\langle G|)\) that suffers from some noise \(\mathcal{E}\). This fidelity estimation is also called the verification of graph states. So far, several efficient verification methods have been proposed for graph states [19; 20; 21; 22; 23; 24; 25; 26]. These methods proceed as follows: (i) Each qubit of \(N_{c}\) copies of \(\rho\) is given to a verifier one by one. (ii) The verifier randomly chooses a measurement basis from \(N_{m}\) kinds of measurements and measures the received state \(\rho\) in this basis. He/she repeats the same procedure for all the copies of \(\rho\). (iii) By processing all measurement outcomes with a classical computer, he/she outputs an estimated value of (or a lower bound on) the fidelity \(\langle G|\rho|G\rangle\). In most cases, to reduce the burden on the verifier as much as possible, only non-adaptive single-qubit projective measurements and efficient classical operations are required for the verifier. In this paper, we consider the same restriction on the verifier. In the evaluation of verification protocols, two parameters \(N_{c}\) and \(N_{m}\) are usually considered. So far, several attempts to reduce the number \(N_{c}\) of copies have been done, and Zhu and Hayashi have finally constructed an optimal verification protocol [25] such that \(N_{c}=\Theta(\epsilon^{-1}\log\delta^{-1})\) to guarantee \(\langle G|\rho|G\rangle\geq 1-\epsilon\) with significance level \(\delta\). 
As a remarkable property, the number \(N_{c}\) in their optimal protocol does not depend on the size \(n\) of the graph state \(|G\rangle\). On the other hand, the optimality of the number \(N_{m}\) of measurement settings is less explored. In many practical cases, the switching of measurement settings could be slow, and in some cases, it may be demanding or impossible (e.g., see Ref. [27]). Furthermore, since the measurement error is the most dominant in some state-of-the-art experiments [28], the reduction of the number of measurement settings should be helpful to realize verification protocols with high accuracy. Therefore, it is important to reduce \(N_{m}\) (ultimately to one) under the assumption that the verifier can perform only non-adaptive single-qubit projective measurements. However, under this assumption, it has been shown that at least two measurement settings are required for the verification of any bipartite pure entangled state if \(\mathcal{E}\) is an arbitrary noise [27]. Since bipartite pure entangled states include a subclass of graph states, their result prevents the possibility of \(N_{m}=1\) for general noises. Even if adaptive measurements are allowed for the verifier, at least two measurement settings are still necessary [25]. Recently, by restricting the noise model (i.e., by fixing \(\mathcal{E}\)), a verification protocol achieving \(N_{m}=1\) has been constructed [29]. In this protocol, \(\mathcal{E}\) is assumed to be the phase-flip error, and they have achieved the optimal number of \(N_{m}\) by using properties of the phase-flip error. More precisely, from the commutation relations between Pauli operators and the phase-flip error, they have shown that measurements of a single stabilizer operator of \(|G\rangle\) are sufficient to estimate a lower bound on the fidelity with high accuracy. Therefore, it is nontrivial whether \(N_{m}=1\) can be achieved for other noise models including bit-flip errors. In this paper, we propose a verification protocol achieving \(N_{m}=1\) for graph states in the presence of the depolarizing noise (see also Eq. (4)). Since the depolarizing noise is a major noise model used in several theoretical analyses of quantum error correction [30; 31; 32] and error mitigation [33; 34; 35], our protocol should also be compatible with other methods handling errors. To construct our protocol, we analyze the effect of depolarizing noise on the fidelity \(\langle G|\rho|G\rangle\). As a well-known fact, the depolarizing noise on \(n\) qubits can be written as a classical mixture of Pauli noises [36]. We observe that Pauli noises definitely reduce the fidelity if and only if they do not coincide with any stabilizer operator of \(|G\rangle\). From this observation, we obtain a single measurement from which we can obtain an approximate value of the fidelity. By using this measurement, we propose a verification protocol for graph states with the depolarizing noise that satisfies \(N_{c}=\Theta(\epsilon^{-2}\log\delta^{-1})\) and \(N_{m}=1\). As concrete applications, we apply our verification protocol to \(n\)-qubit fully-connected graph states, which can be converted to \(n\)-qubit Greenberger-Horne-Zeilinger (GHZ) states by local Clifford operations [37], and cluster states. 
Since cluster states are resource states of universal MBQC, and GHZ states can be used to perform quantum sensing achieving the Heisenberg limit [38] and non-adaptive MBQC with linear side-processing (\(\rm{NMQC}_{\oplus}\)) [39], our protocol can be used to make these protocols verifiable. We also evaluate our protocol for noise models other than the depolarizing noise. First, we consider the noise model where the phase-flip or depolarizing noise is randomly applied. We show that although it is unknown which noise is applied, our protocol works well for some cluster states. Then we consider noise models interpolating between the phase-flip and depolarizing noises. We numerically evaluate how well our protocol works for these noise models. Lastly, we compare our protocol with previous protocols. The rest of this paper is organized as follows: in Sec. II, we introduce graph states and the depolarizing noise. In Sec. III, we propose our verification protocol using only a single stabilizer measurement. In Sec. IV, as concrete examples, we apply our protocol in Sec. III to fully-connected graph states and cluster states. We also evaluate our protocol in noise models other than the depolarizing noise. In Sec. V, we compare our protocol with previous protocols. Section VI is devoted to the conclusion and discussion. In Appendices A and B, we give proofs of Lemma 1 and Theorem 1, respectively. ## II Graph states in the depolarizing channel In this section, we introduce graph states in the depolarizing channel. To this end, we first define graph states [1]. A graph \(G\equiv(V,E)\) is a pair of the set \(V\) of \(n\) vertices and the set \(E\) of edges. The \(n\)-qubit graph state \(|G\rangle\) that corresponds to the graph \(G\) is defined as \[|G\rangle\equiv\left(\prod_{(i,j)\in E}CZ_{i,j}\right)|+\rangle^{\otimes n}, \tag{1}\] where \(|+\rangle\equiv(|0\rangle+|1\rangle)/\sqrt{2}\) with \(|0\rangle\) and \(|1\rangle\) being, respectively, eigenstates of the Pauli-\(Z\) operator with eigenvalues \(+1\) and \(-1\), and \(CZ_{i,j}\) is the controlled-\(Z\) (\(CZ\)) gate applied on the \(i\)th and \(j\)th qubits. The stabilizer generators \(\{g_{i}\}_{i=1}^{n}\) for \(|G\rangle\) are defined as \[g_{i}\equiv X_{i}\left(\prod_{j:\ (i,j)\in E}Z_{j}\right). \tag{2}\] Here, \(X_{i}\) and \(Z_{j}\) are the Pauli-\(X\) and \(Z\) operators for the \(i\)th and \(j\)th qubits, respectively, and the product of \(Z_{j}\) is taken over all vertices \(j\) such that \((i,j)\in E\). For any \(i\) and \(j\), two stabilizer generators commute, i.e., \([g_{i},g_{j}]=0\). The graph state \(|G\rangle\) is the unique common eigenstate of \(\{g_{i}\}_{i=1}^{n}\) with eigenvalue \(+1\), i.e., \(g_{i}|G\rangle=|G\rangle\) for any \(i\). A stabilizer \(S_{\ell}\) is a product of stabilizer generators such that \(S_{\ell}\equiv\prod_{i=1}^{n}g_{i}^{\ell_{i}}\), where \(\ell\equiv\ell_{1}\ell_{2}\ldots\ell_{n}\in\{0,1\}^{n}\). It is a tensor product of no single-qubit operators. More precisely, by using \(s\in\{0,1\}\) and \(\tau_{i}\in\{I,X,Y,Z\}\), where \(I\) and \(Y=iXZ\) are the two-dimensional identity operator and Pauli-\(Y\) operator, respectively, it can be written as \[S_{\ell}=(-1)^{s}\bigotimes_{i=1}^{n}\tau_{i}. \tag{3}\] For any \(\ell\), the equality \(S_{\ell}|G\rangle=|G\rangle\) can be easily checked from Eqs. (1) and (2). The depolarizing channel is represented by the superoperator [36] \[\mathcal{E}(\cdot)\equiv(1-p)I(\cdot)I+\frac{p}{3}\left[X(\cdot)X+Y(\cdot)Y+Z( \cdot)Z\right]. 
\tag{4}\] It operates independently on each qubit, where bit-flip (Pauli-\(X\) error), phase-flip (Pauli-\(Z\) error), and bit-phase-flip errors (Pauli-\(Y\) error) occur with equal probability \(p/3\). The depolarizing channel is a major noise model that is used in several analyses as explained in Sec. I. Let \(\big[\psi\big]\equiv|\psi\rangle\langle\psi|\) for any pure state \(|\psi\rangle\). The density operator \(\rho\equiv\mathcal{E}^{\otimes n}(|G\rangle\langle G|)\) for the graph state in the depolarizing channel can be written as \[\rho=(1-p)^{n}\big[G\big]+(1-p)^{n-1}\frac{p}{3}\sum_{i=1}^{n}\sum_{\mu=1}^{3}\big[\sigma_{\mu i}G\big]+(1-p)^{n-2}\left(\frac{p}{3}\right)^{2}\sum_{1\leq i<j\leq n}\sum_{\begin{subarray}{c}1\leq\mu\leq 3\\ 1\leq\nu\leq 3\end{subarray}}\big[\sigma_{\mu i}\sigma_{\nu j}G\big]+\cdots, \tag{5}\] where \(\sigma_{\mu i}\) (\(\mu=1,2,3\)) denotes the Pauli-\(X\), \(Y\), and \(Z\) operator acting on the \(i\)th qubit, \(\big[\sigma_{\mu i}G\big]=\sigma_{\mu i}|G\rangle\langle G|\sigma_{\mu i}\), and the ellipsis stands for the terms with three or more errors. We quantify the quality of \(\rho\) by the fidelity \[F\equiv\langle G|\rho|G\rangle. \tag{6}\] Note that the fidelity is defined as \(\sqrt{F}\) in Ref. [36]. For ease of our argument, we use the definition in Eq. (6). Therefore, the fidelity between the graph state \(\rho\) in the depolarizing channel and the ideal state \(|G\rangle\langle G|\) can be written as \[F= \langle G|\rho|G\rangle\] \[= (1-p)^{n}\] \[+\sum_{m=1}^{n}(1-p)^{n-m}\left(\frac{p}{3}\right)^{m}\sum_{\begin{subarray}{c}i_{1}<\ldots<i_{m}\\ \mu_{1}\ldots\mu_{m}\end{subarray}}\langle G|\left(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\right)|G\rangle^{2}. \tag{7}\] The following lemma is useful for evaluation of Eq. (7). **Lemma 1**.: _Suppose \(|G\rangle\) is any \(n\)-qubit graph state, and \(\sigma_{\mu i}\) is the \(\mu\) component of the Pauli operator for the \(i\)th qubit. Then, for any natural number \(m(\leq n)\),_ \[\langle G|\left(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\right)|G\rangle^{2}=1, \tag{8}\] _if and only if one of \(\pm\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\) coincides with a stabilizer of \(|G\rangle\). Otherwise, it vanishes._ A proof of Lemma 1 is given in Appendix A. From Lemma 1, \[\sum_{\begin{subarray}{c}i_{1}<\ldots<i_{m}\\ \mu_{1}\ldots\mu_{m}\end{subarray}}\langle G|\left(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\right)|G\rangle^{2} \tag{9}\] is equal to the number of the stabilizers that are products of \(m\) Pauli operators. Since the number of stabilizers increases exponentially with \(n\), in general, it would be hard to derive the exact value of \(F\) for large \(n\). However, in Sec. IV.1, we show that \(F\) can be represented by a simple formula (Eq. (26)) for fully-connected graph states. We can assume, without loss of generality, that the graph states we consider in this paper have no isolated single qubits, because verification of isolated single qubits can be performed independently of other connected qubits. Each nontrivial stabilizer of a connected graph state is a product of at least two Pauli operators, so that the first-order error term (\(m=1\)) in Eq. (7) vanishes. 
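These structural statements are easy to confirm by brute force for a small example. The sketch below (Python/NumPy) builds the graph state of a 4-qubit cycle graph, an illustrative choice rather than one of the graphs studied later, applies the depolarizing channel of Eq. (4) to every qubit, computes \(F=\langle G|\rho|G\rangle\) exactly, and checks that all first-order terms \(\langle G|\sigma_{\mu i}|G\rangle\) of Eq. (7) vanish.

```python
import itertools
import numpy as np

# Brute-force check of Eqs. (5)-(7) for a small connected graph: a 4-qubit cycle
# (edges 0-1, 1-2, 2-3, 3-0), chosen only as an example. We build |G>, apply the
# single-qubit depolarizing channel to every qubit, compute F = <G|rho|G> exactly,
# and verify that every first-order term <G| sigma_{mu i} |G> vanishes.

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def op_on(op, i, n):
    """Embed a single-qubit operator on qubit i of n qubits."""
    ops = [I2] * n
    ops[i] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def graph_state(n, edges):
    psi = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)   # |+>^n
    for (i, j) in edges:
        cz = np.diag([(-1.0) ** (((b >> (n - 1 - i)) & 1) * ((b >> (n - 1 - j)) & 1))
                      for b in range(2 ** n)])                # diagonal CZ gate
        psi = cz @ psi
    return psi

n, p = 4, 0.05
G = graph_state(n, [(0, 1), (1, 2), (2, 3), (3, 0)])

# apply the depolarizing channel of Eq. (4) qubit by qubit
rho = np.outer(G, G.conj())
for i in range(n):
    kraus = [np.sqrt(1 - p) * op_on(I2, i, n)] + \
            [np.sqrt(p / 3) * op_on(P, i, n) for P in (X, Y, Z)]
    rho = sum(K @ rho @ K.conj().T for K in kraus)

F_exact = np.real(G.conj() @ rho @ G)
print("F =", F_exact)                      # close to (1-p)^4 for small p

# first-order terms of Eq. (7): <G| sigma_{mu i} |G> = 0 for a connected graph
for i, P in itertools.product(range(n), (X, Y, Z)):
    assert abs(G.conj() @ op_on(P, i, n) @ G) < 1e-12
print("all m = 1 terms vanish")
```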
The second-order error term (\(m=2\)) also vanishes for any graph state that has no stabilizers consisting of two Pauli operators, such as a cluster state with \(n>4\). Using the identity \[|G\rangle\langle G|=\prod_{i=1}^{n}\frac{g_{i}+I_{i}}{2}=\frac{1}{2^{n}}\sum_{\ell}S_{\ell}, \tag{10}\] the fidelity can also be written as \[F=\text{Tr}(\rho|G\rangle\langle G|)=\frac{1}{2^{n}}\sum_{\ell}\text{Tr}(\rho S_{\ell}). \tag{11}\] Equation (11) indicates that the fidelity can be estimated exactly from the averages of all the stabilizers \(\{S_{\ell}\}_{\ell\in\{0,1\}^{n}}\). That is, \(2^{n}\) kinds of measurement settings are required. Note that only a polynomial number of them are chosen uniformly at random and performed in actual experiments. However, since the chosen measurements vary in each experiment, the estimation of the fidelity requires the ability to perform any of the \(2^{n}\) stabilizer measurements. ## III Fidelity estimation by measuring a single stabilizer In this section, we discuss the idea behind our simplest fidelity-estimation protocol for graph states in the depolarizing channel. The average of a stabilizer is given as \[\text{Tr}(\rho S_{\ell})= (1-p)^{n}+\sum_{m=1}^{n}(1-p)^{n-m}\left(\frac{p}{3}\right)^{m}\] \[\cdot\left[\sum_{\begin{subarray}{c}i_{1}<\ldots<i_{m}\\ \mu_{1}\ldots\mu_{m}\end{subarray}}\langle G|\left(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\right)S_{\ell}\left(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\right)|G\rangle\right]. \tag{12}\] Since the first-order error term vanishes in Eq. (7), the fidelity can be well approximated by the average of a stabilizer without the first-order error term when \(p\ll 1\). By comparing Eqs. (7) and (12), for \(p\ll 1\), we expect that the fidelity can be accurately estimated by measuring a single stabilizer for which the first-order error term vanishes. Let us investigate the condition under which the first-order error term in Eq. (12) vanishes. Recall that \(\tau_{i}\) is the single-qubit operator for the \(i\)th qubit of \(S_{\ell}\). \(\tau_{i}\) commutes with each of \(X\), \(Y\), and \(Z\) in the case of \(\tau_{i}=I\), while it commutes with only one of them and anticommutes with the others in the case of \(\tau_{i}\in\{X,Y,Z\}\). We thus obtain \[\sum_{\mu=1}^{3}\langle G|\sigma_{\mu i}S_{\ell}\sigma_{\mu i}|G\rangle=\left\{\begin{array}{ll}3&(\tau_{i}=I)\\ -1&(\tau_{i}\in\{X,Y,Z\}).\end{array}\right. \tag{13}\] Using this relation, we obtain \[\sum_{i=1}^{n}\sum_{\mu=1}^{3}\langle G|\sigma_{\mu i}S_{\ell}\sigma_{\mu i}|G\rangle =3n_{I}-(n-n_{I})\] \[=4n_{I}-n, \tag{14}\] where \(n_{I}\) denotes the number of \(I\) in \(S_{\ell}\). Hence, the condition for the first-order error term to vanish is \[n_{I}=\frac{n}{4}. \tag{15}\] Generalizing the calculation of the first-order error term, the average of a stabilizer can be calculated in general as \[\text{Tr}(\rho S_{\ell})=(1-p)^{n}+\sum_{m=1}^{n}C(m)(1-p)^{n-m}\left(\frac{p}{3}\right)^{m}, \tag{16}\] \[C(m) \equiv \sum_{\begin{subarray}{c}i_{1}<\ldots<i_{m}\\ \mu_{1}\ldots\mu_{m}\end{subarray}}\langle G|\left(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\right)S_{\ell}\left(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\right)|G\rangle \tag{17}\] \[= \sum_{j=w(\ell,m,n)}^{f(\ell,m)}(-1)^{m-j}\,3^{j}\binom{n_{I}}{j}\binom{n-n_{I}}{m-j},\] where \(f(\ell,m)\equiv\min\{m,n_{I}\}\) and \(w(\ell,m,n)\equiv\max\{0,m+n_{I}-n\}\). We have assumed \(\binom{0}{0}=1\). \(C(m)\) corresponds to the sum of the averages of \(S_{\ell}\) in the case of \(m\) errors. 
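The closed form in Eq. (17) can be checked symbolically: for small \(n\) and \(n_{I}\) it reproduces \(C(1)=4n_{I}-n\) of Eq. (14) and agrees with the generating-polynomial representation used in the next paragraph. A minimal sketch (Python/SymPy) follows; the ranges of \(n\), \(n_{I}\), and \(m\) are illustrative.

```python
from math import comb
import sympy as sp

# Check of the combinatorial closed form of Eq. (17): it should reproduce
# C(1) = 4 n_I - n (Eq. (14)) and equal the coefficient of x^(n-m) y^m in
# (x + 3y)^(n_I) (x - y)^(n - n_I), the representation used just below.

def C(m, n, nI):
    lo, hi = max(0, m + nI - n), min(m, nI)
    return sum((-1) ** (m - j) * 3 ** j * comb(nI, j) * comb(n - nI, m - j)
               for j in range(lo, hi + 1))

x, y = sp.symbols("x y")
for n in range(2, 9):
    for nI in range(0, n):            # n_I < n for a nontrivial stabilizer
        poly = sp.Poly(sp.expand((x + 3 * y) ** nI * (x - y) ** (n - nI)), x, y)
        for m in range(0, n + 1):
            assert C(m, n, nI) == poly.coeff_monomial(x ** (n - m) * y ** m)
        assert C(1, n, nI) == 4 * nI - n      # Eq. (14)
print("Eq. (17) is consistent with Eq. (14) and the generating polynomial")
```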
The term \(\binom{n_{I}}{j}\binom{n-n_{I}}{m-j}\) expresses the number of cases in which \(j\) errors occur among the \(n_{I}\) qubits for which \(\tau_{i}=I\), while the other \((m-j)\) errors occur on \((n-n_{I})\) qubits for which \(\tau_{i}\in\{X,Y,Z\}\). From Eq. (17), the coefficient \(C(2)\) is given as \[C(2)=8n_{I}^{2}-4(n+1)n_{I}+\frac{n(n-1)}{2}. \tag{18}\] By comparing the second-order error terms (\(m=2\)) in Eqs. (7) and (16), it should be preferable to set \(C(2)\) as a non-negative number for the verification with high accuracy. Particularly in the case of cluster states, \(C(2)=0\) is desirable. However, in the case that the first-order error term vanishes, substituting \(n_{I}=n/4\) in Eq. (18), we find that the second-order error term is negative as \(C(2)=-3n/2\). From Eq. (17), one finds that \(C(m)\) is equivalent to the coefficient of the term \(x^{n-m}y^{m}\) in the expansion of the polynomial \((x+3y)^{n_{I}}(x-y)^{n-n_{I}}\). Then, substituting \(x=1-p\) and \(y=p/3\) in \((x+3y)^{n_{I}}(x-y)^{n-n_{I}}\), we finally obtain a simple analytical expression for the average of the stabilizer as follows: \[\mathrm{Tr}(\rho S_{\ell}) = \left[(1-p)+3\cdot\frac{p}{3}\right]^{n_{I}}\left[(1-p)-\frac{p} {3}\right]^{n-n_{I}} \tag{19}\] \[= \left(1-\frac{4}{3}p\right)^{n-n_{I}}. \tag{20}\] It is clear from Eq. (20) that for fixed \(n\) and \(p\), \(\mathrm{Tr}(\rho S_{\ell})\) is determined solely by the number of \(I\) in \(S_{\ell}\). The average of the stabilizer decreases from unity as \(p\) increases in Eq. (20), because eigenstates of \(S_{\ell}\) with eigenvalue \(-1\) are mixed to the pure graph state \(|G\rangle\langle G|\) by the depolarizing noise. \(\mathrm{Tr}(\rho S_{\ell})\) becomes negative when \(p>3/4\) in the case of odd \(n-n_{I}\). Equations (11) and (20) lead to a simple expression for the fidelity \[F=\frac{1}{2^{n}}\sum_{\ell\in\{0,1\}^{n}}\left(1-\frac{4}{3}p\right)^{n-n_{I }(\ell)}, \tag{21}\] where \(n_{I}(\ell)\) denotes the number of \(I\) in the stabilizer \(S_{\ell}\). We have shown that \(C(1)=0\) when \(n_{I}=n/4\) in Eq. (14). In fact, setting \(n_{I}=n/4\) in Eqs. (19) and (20), we obtain \[\mathrm{Tr}(\rho S_{\ell})\] \[= \left(1-\frac{4}{3}p\right)^{3n/4}\] \[= \left[(1-p)^{4}-\frac{2}{3}p^{2}(1-p)^{2}+\frac{8}{27}p^{3}(1-p)- \frac{1}{27}p^{4}\right]^{n/4}. \tag{22}\] Expanding the final expression and comparing with Eq. (7), it is clear that there is no first-order error term (\(m=1\)) in Eq. (22). From this observation, we expect that the fidelity can be well estimated by measuring a single stabilizer that satisfies the condition Eq. (15). In fact, the following theorem holds. **Theorem 1**.: _Let \(|G\rangle\) be an \(n\)-qubit ideal graph state with \(n=4k\) for some natural number \(k\). Let \(\mathcal{A}\) be the set of stabilizers \(S\) of \(|G\rangle\) such that \(S=(-1)^{s}\otimes_{i=1}^{n}\tau_{i}\), where \(\tau_{i}\in\{X,Y,Z\}\) for \(3k\) kinds of \(i\)'s, \(\tau_{i}=I\) for other \(i\)'s, and \(s\in\{0,1\}\). Let \(F\equiv\langle G|\rho|G\rangle\) be the fidelity between \(|G\rangle\) and an \(n\)-qubit graph state \(\rho\) (Eq. (5)) in the depolarizing channel with the error probability \(p\). 
The fidelity \(\tilde{F}=(1-p)^{4k}\) up to the first-order error can be approximated by the expectation value \(F_{\mathrm{est}}\equiv\mathrm{Tr}(\rho S)=(1-4p/3)^{3k}\) of any single stabilizer \(S\) in the set \(\mathcal{A}\), such that_ \[0\leq\tilde{F}-F_{\mathrm{est}}<\frac{2}{3k} \tag{23}\] _for \(0\leq p\leq 3/4\)._ Our main contribution is to derive Eq. (20) from which Theorem 1 can be immediately obtained. A rigorous proof of Theorem 1 is given in Appendix B. Note that Theorem 1 holds for any graph state with \(n=4k\) under the condition that the graph state has no isolated single qubits, and the set \(\mathcal{A}\) is not empty. Theorem 1 implies that the more the number \(n\) of qubits increases, the more the precision of the estimation for \(\tilde{F}\) by \(F_{\mathrm{est}}\) improves. When the second-order error term in Eq. (7) vanishes, the fidelity up to the second-order error is also \(\tilde{F}\), and so the estimation of \(F\) by \(F_{\mathrm{est}}\) improves further. This is the case for two-dimensional (2D) cluster states, as we demonstrate it in the next section. Based on Theorem 1, our verification protocol runs as follows: 1. A quantum computer generates \(N\) graph states \(\rho^{\otimes N}\) in the depolarizing channel and sends them to a verifier. 2. The verifier measures \(S_{\ell}\) that satisfies the condition in Eq. (15) on each received state \(\rho\). 3. The verifier outputs \[\tilde{F}_{\mathrm{est}}\equiv\frac{\sum_{i=1}^{N}o_{i}}{N}\] (24) as an estimated value of the fidelity, where \(o_{i}\in\{+1,-1\}\) denotes the \(i\)th outcome for \(1\leq i\leq N\). \(\tilde{F}_{\rm est}\) converges to \(F_{\rm est}\) in the limit of large \(N\). In fact, the Hoeffding inequality [40] guarantees that when \(N=\lceil 2/\epsilon^{2}\log\left(2/\delta\right)\rceil\), the inequality \[\left|F_{\rm est}-\tilde{F}_{\rm est}\right|\leq\epsilon \tag{25}\] holds with probability at least \(1-\delta\). Here, \(\lceil\cdot\rceil\) is the ceiling function. The measurement of \(S_{\ell}\) in step 2 can be realized by single-qubit Pauli measurements because \(S_{\ell}\) is a tensor product of Pauli operators. Furthermore, by sequentially sending qubits one by one in step 1, no quantum memory is required for the verifier. To illustrate our protocol, we give concrete examples in Fig. 1. ## IV Applications In this section, we first discuss the estimation of the fidelity for the fully-connected graph states and the cluster states. Particularly, the cluster states are important resource states for MBQC, which allows universal quantum computation. Theorem 1 just guarantees that our simplest verification protocol outputs the estimated value that is close to the true value \(F\) of the fidelity only when \(p\) is sufficiently small. We numerically show that \(F_{\rm est}\) becomes precise approximations for any \(0\leq p\leq 1/2\) in the case of fully-connected graph and cluster states. Then, we evaluate our protocol for noise models other than the depolarizing noise. ### Fully-connected graph states We consider fully-connected graphs, in which each of the vertices is connected with all the other vertices by the edges, as shown in Fig. 2. Before applying Theorem 1 to fully-connected graph states with \(n=4k\) (\(k\in\mathbb{N}\)) qubits, we first evaluate the fidelity of any fully-connected graph state. According to Eq. (21), since \(n_{I}(\ell)\) for all \(\ell\)'s are required to evaluate the fidelity, it should not be easy to derive it in general. 
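For a small graph the counting of \(n_{I}(\ell)\) can still be done by brute force, which makes the content of Eq. (21) concrete. The sketch below (Python/NumPy) enumerates all \(2^{n}\) stabilizers of a 4-qubit cycle graph, an illustrative example rather than one of the graph families treated below, in the binary picture (the \(X\) part of \(S_{\ell}\) is \(\ell\) and the \(Z\) part is \(A\ell\bmod 2\) with \(A\) the adjacency matrix), and evaluates Eq. (21).

```python
import itertools
import numpy as np

# Brute-force evaluation of Eq. (21), F = 2^{-n} sum_l (1 - 4p/3)^{n - n_I(l)},
# for a 4-qubit cycle graph (illustrative example). Each stabilizer
# S_l = prod_i g_i^{l_i} has X part l and Z part (A l mod 2), so qubit i carries
# the identity exactly when both parts vanish there.

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])            # adjacency matrix of the 4-cycle
n, p = len(A), 0.05

F = 0.0
for ell in itertools.product([0, 1], repeat=n):
    ell = np.array(ell)
    zpart = (A @ ell) % 2
    n_I = int(np.sum((ell == 0) & (zpart == 0)))
    F += (1.0 - 4.0 * p / 3.0) ** (n - n_I)
F /= 2 ** n

print("F from Eq. (21):", F)   # about 0.815 for p = 0.05; matches the value
                               # obtained by applying the channel directly
```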
For the fully-connected graph states, however, Eq. (21) can be easily evaluated. In the case of even \({\rm wt}(\ell)\equiv\sum_{i=1}^{n}\ell_{i}\), i.e., \(S_{\ell}\) is a product of even \(g_{i}\)'s, we obtain \(n_{I}(\ell)=n-{\rm wt}(\ell)\), because \(\tau_{i}=I\) for \(\ell_{i}=0\), and \(\tau_{i}=X\) or \(Y\) for \(\ell_{i}=1\). Meanwhile, \(n_{I}=0\) in the case of odd \({\rm wt}(\ell)\), because \(\tau_{i}=Z\) for \(\ell_{i}=0\), and \(\tau_{i}=X\) or \(Y\) for \(\ell_{i}=1\). We thus obtain \[2^{n}F =\sum_{{\rm wt}:\ {\rm even}}{n\choose{\rm wt}}\left(1-\frac{4}{3}p \right)^{{\rm wt}}+\sum_{{\rm wt}:\ {\rm odd}}{n\choose{\rm wt}}\left(1-\frac{4}{3}p \right)^{n}\] \[=2^{n-1}\left[\left(1-\frac{2}{3}p\right)^{n}+\left(\frac{2}{3}p \right)^{n}+\left(1-\frac{4}{3}p\right)^{n}\right]. \tag{26}\] Figure 1: Schematic diagram of our simplest verification protocol. A quantum computer generates graph states \(\rho\) in the depolarizing channel and sends each qubit one by one. A verifier just measures a single stabilizer \(S_{\ell}\), which has \(3n/4\) Pauli operators, for each received state \(\rho\) by using only single-qubit Pauli measurements. No quantum memory is required for the verifier. Figure 2: Fully-connected graphs with four (left) and eight vertices (right). The black dots and the solid lines are vertices and edges, respectively. Here, we have used the following relations \[\sum_{j:\text{ even}}\binom{n}{j}x^{j}=\frac{1}{2}\left[(1+x)^{n}+(1- x)^{n}\right], \tag{27}\] \[\sum_{j:\text{ odd}}\binom{n}{j}=2^{n-1}. \tag{28}\] Now we discuss the estimation of the fidelity for fully-connected graph states with \(n=4k\) qubits. In the case of even \(k\), any stabilizer \(S_{\ell}\) with \(\mathrm{wt}=3k\) satisfies the condition \(n_{I}=n/4=k\), because \(\tau_{i}=I\) for \(\ell_{i}=0\), and \(\tau_{i}=X\) or \(Y\) for \(\ell_{i}=1\). Meanwhile, in the case of odd \(k\), any fully-connected graph state with \(n=4k\) qubits has no stabilizers that satisfy the condition \(n_{I}=n/4=k\), because any stabilizer \(S_{\ell}\) with \(\mathrm{wt}=3k\) has \(\tau_{i}=Z\) for \(\ell_{i}=0\), and \(\tau_{i}=X\) or \(Y\) for \(\ell_{i}=1\). Theorem 1 can be thus applied to any fully-connected graph state with \(n=8k\) (\(k\in\mathbb{N}\)) qubits. Figure 3 shows the comparison of \(F\) in Eq. (26) and \(F_{\mathrm{est}}=(1-4p/3)^{6k}\) for the fully-connected graph states with \(n=8k\) qubits. It demonstrates that the estimation of the fidelity \(F\) by \(F_{\mathrm{est}}\) improves as \(n\) increases. The second-order error term \(F^{(2)}\) in Eq. (7) is nonzero for the fully-connected graph states. Since any stabilizer with \(\mathrm{wt}=2\) consists of two Pauli operators, it can be written as \[F^{(2)}=\binom{n}{2}(1-p)^{n-2}\left(\frac{p}{3}\right)^{2}. \tag{29}\] The relatively large deviation of \(F_{\mathrm{est}}\) from \(F\) in Fig. 3 reflects the presence of \(F^{(2)}\). In the case of fully-connected graph states, the other error terms can also be derived. First, from Eq. (7), we obtain \[F=\sum_{\ell\in\{0,1\}^{n}}(1-p)^{n_{I}(\ell)}\left(\frac{p}{3}\right)^{n-n_{ I}(\ell)}. \tag{30}\] Then, by following the similar argument as used to derive Eq. (29) and using Eq. (28), we calculate Eq. (30) as follows: \[F= (1-p)^{n}+\sum_{k=1}^{\lfloor n/2\rfloor}\binom{n}{2k}(1-p)^{n-2 k}\left(\frac{p}{3}\right)^{2k}\] \[+2^{n-1}\left(\frac{p}{3}\right)^{n}, \tag{31}\] where \(\lfloor\cdot\rfloor\) is the floor function. Taking the summation over \(k\) as in Eq. 
(16), we can derive the analytical expression of the fidelity in Eq. (26) from Eq. (31). ### Two-dimensional cluster states We discuss the estimation of the fidelity of 2D cluster states, which are universal resource states for MBQC. Here, we focus on rectangular cluster states with \(n=q\times r\) (\(q,r\in\mathbb{N}\), \(q\neq r\)) qubits, because they suffice for universal quantum computation in MBQC. In contrast to fully-connected graph states, it should be difficult to derive a general expression for the fidelity of 2D cluster states. Meanwhile, the fidelity up to the third order error term can be easily obtained. In the case of \(q,r>2\), cluster states have no second-order error terms, because they have no stabilizers that consist of two Pauli operators. As for the third-order error terms, the four generators on the corners of the corresponding rectangular are the only stabilizers that consist of three Pauli operators. Thus, the fidelity up to the third-order error can be written as \[F^{\prime}=(1-p)^{n}+4(1-p)^{n-3}\left(\frac{p}{3}\right)^{3}. \tag{32}\] The fidelity of the cluster state with \(2\times 4\) qubits up to the third-order error is given as \[F^{\prime}=(1-p)^{8}+8(1-p)^{5}\left(\frac{p}{3}\right)^{3}. \tag{33}\] Since \(F\) has no second-order error terms in both cases, it is expected that \(F\) can be well estimated by \(F_{\mathrm{est}}\). Figure 4 shows Figure 3: Comparison of \(F\) (solid lines) and \(F_{\mathrm{est}}\) (dashed lines) as functions of the error probability \(p\) for the fully-connected graph states with \(n=8\), \(24\), and \(96\) qubits. The dotted line represents \(F_{\mathrm{ub}}\) in Eq. (38) with \(n=8\). Figure 4: Comparison of \(F\) (solid lines) and \(F_{\mathrm{est}}\) (dashed lines) as functions of the error probability \(p\) for rectangular cluster states of \(2\times 4\) and \(3\times 4\) qubits. The dotted lines represent \(F_{\mathrm{ub}}\) in Eq. (40). the comparison of the fidelity \(F\) and \(F_{\rm est}\) for the cluster states with \(2\times 4\) and \(3\times 4\) qubits. The stabilizers that satisfy the condition \(n_{I}=n/4\) are shown in Figs. 1 (a) and (b). Here, \(F\) is evaluated numerically by taking the average of all the stabilizers. It also demonstrates that the estimation of \(F\) by \(F_{\rm est}\) improves as \(n\) increases. Compared with Fig. 3, the deviation of \(F_{\rm est}\) from \(F\) is smaller than that for fully-connected graph states. ### Fidelity estimation for cluster states in the presence of either phase-flip or depolarizing noise So far, we have restricted the noise model and fixed \(\mathcal{E}\) to the depolarizing channel. This restriction can be justified in the case where we can specify the noise model based on knowledge of how the cluster state provided to a verifier is generated in an experiment. In this subsection, we relax this restriction and consider the possibility of estimating the fidelity of a 2D cluster state by measuring a single stabilizer in the presence of either phase-flip or depolarizing noise. It may be useful in the case where the phase-flip or depolarizing noise is randomly applied. It may be also useful in the case where the noise model cannot be decided between the phase-flip and depolarizing noises due to the lack of the knowledge of experimental setups. We first consider the cluster state of \(2\times 4\) qubits. 
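Before turning to the phase-flip case, the weight counting behind Eq. (33) can be verified directly for this \(2\times 4\) cluster state. The sketch below (Python/NumPy) enumerates all \(2^{8}\) stabilizers in the binary picture, confirms that there are no weight-1 or weight-2 elements and exactly eight weight-3 elements, and compares the exact fidelity of Eq. (21) with the truncation \(F^{\prime}\) of Eq. (33) and with \(F_{\mathrm{est}}\); the qubit numbering is our own choice and need not match Fig. 1.

```python
import itertools
import numpy as np

# Weight counting behind Eq. (33) for the 2 x 4 cluster state, in the binary picture
# of the stabilizer group (X part = l, Z part = A l mod 2, A = adjacency matrix).
# Qubit numbering: 0..3 top row, 4..7 bottom row (our own choice).

edges = [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7),
         (0, 4), (1, 5), (2, 6), (3, 7)]
n = 8
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = A[j, i] = 1

p = 0.05
weights = np.zeros(n + 1, dtype=int)
F_exact = 0.0
for ell in itertools.product([0, 1], repeat=n):
    ell = np.array(ell)
    zpart = (A @ ell) % 2
    w = int(np.sum((ell == 1) | (zpart == 1)))   # weight = n - n_I of this stabilizer
    weights[w] += 1
    F_exact += (1 - 4 * p / 3) ** w / 2 ** n     # Eq. (21)

print("stabilizers per weight:", dict(enumerate(weights)))
# -> no weight-1 or weight-2 elements and eight weight-3 elements,
#    consistent with the coefficients of Eq. (33)
F_third = (1 - p) ** 8 + 8 * (1 - p) ** 5 * (p / 3) ** 3   # Eq. (33)
F_est = (1 - 4 * p / 3) ** (3 * n // 4)                    # Theorem 1 estimate
print(F_exact, F_third, F_est)
```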
In the presence of the phase-flip error, the fidelity of an \(n\)-qubit graph state can be estimated by measuring a single stabilizer \(S_{\ell}\) that satisfies \({\rm wt}(\ell)=n/2\)[29]. Then, the stabilizer \[S_{\ell}=g_{1}g_{3}g_{5}g_{7}=X_{1}Z_{2}X_{3}I_{4}X_{5}Z_{6}X_{7}I_{8} \tag{34}\] satisfies both the conditions \({\rm wt}(\ell)=n/2\) and \(n_{I}=n/4\), where the indices of qubits in Eq. (34) correspond to those in Fig. 1 (a). Thus, the fidelity can be estimated by measuring it in the presence of either phase-flip or depolarizing noise. Extending the above argument, the fidelity of a large cluster state with \(n=4q\times 2r\) qubits can be estimated by measuring a single stabilizer. For example, choosing generators every \(4\times 2\) qubits analogously to Eq. (34) as shown in Fig. 5, the stabilizer obtained as their product satisfies both the conditions \({\rm wt}(\ell)=n/2\) and \(n_{I}=n/4\). Furthermore, the fidelity of the same cluster state can be estimated by measuring the same stabilizer even in the presence of a more general noise model \[\mathcal{E}(\cdot)= (1-p_{x}-p_{y}-p_{z})(\cdot)\] \[+\left[p_{x}X(\cdot)X+p_{y}Y(\cdot)Y+p_{z}Z(\cdot)Z\right], \tag{35}\] \[p_{x}=p_{y}=\frac{p}{3}-\delta,\quad p_{z}=\frac{p}{3}+2\delta. \tag{36}\] This noise model interpolates the phase-flip and depolarizing noises; it reduces to the depolarizing (phase-flip) noise when \(\delta=0\) (\(\delta=p/3\)). Figure 6 shows the comparison of \(F\) and \(F_{\rm est}\) for the \((2\times 4)\)-qubit cluster state in the presence of the noise model Eq. (35). From this result, we anticipate that the fidelity of a large cluster state with \(n=4q\times 2r\) qubits can also be estimated by measuring the stabilizer specified in Fig. 5. We leave its rigorous analysis for the future work. ## V Comparison with previous protocols Several verification protocols exist for graph states that work for any type of error [19; 20; 21; 22; 23; 24; 25; 26]. The lower bound of the fidelity obtained in some of them becomes loose in general. It has been shown that the necessary number of measurement settings can be improved to \(n\) from \(2^{n}\) by using the union bound [22], where the obtained lower bound of the fidelity is given as \[F_{\rm ub}=1-\sum_{i=1}^{n}\left\{1-{\rm Tr}\left[\rho\left(\frac{I^{\otimes n }+g_{i}}{2}\right)\right]\right\} \tag{37}\] in the limit of large \(N\). Figure 5: Cluster state of \(n=4q\times 2r\) qubits. The fidelity of it in the presence of either phase-flip or depolarizing noise can be estimated by a stabilizer that is a product of the generators indicated by the red circles. Figure 6: Comparison of \(F\) and \(F_{\rm est}\) for the \((2\times 4)\)-qubit cluster state in the presence of the noise model in Eq. (35) as functions of the parameter \(\delta\). We set \(p=0.15\). For any \(n\)-qubit fully-connected graph state, using \(\mathrm{Tr}(\rho g_{i})=(1-4p/3)^{n}\), the lower bound \(F_{\mathrm{ub}}\) is calculated as \[F_{\mathrm{ub}} =1-\frac{n}{2}+\frac{n}{2}\left(1-\frac{4}{3}p\right)^{n} \tag{38}\] \[\simeq 1-\frac{2}{3}n^{2}p\quad(p\ll 1). \tag{39}\] For any \(n=q\times r\) (\(q,r\geq 2\))-qubit cluster state, the lower bound \(F_{\mathrm{ub}}\) is calculated as \[F_{\mathrm{ub}}= 1-\frac{n}{2}+\frac{1}{2}\left[4\left(1-\frac{4}{3}p\right)^{3} +2(q+r-4)\left(1-\frac{4}{3}p\right)^{4}\right.\] \[\left.+(q-2)(r-2)\left(1-\frac{4}{3}p\right)^{5}\right] \tag{40}\] \[\simeq 1-\frac{2}{3}\left[5n-2(q+r)\right]p\quad(p\ll 1). 
\tag{41}\] \(F_{\mathrm{ub}}\) becomes loose as \(n\) increases in both types of graph states, which is in sharp contrast with the fact that \(F_{\mathrm{est}}\) becomes tight as \(n\) increasing. Figures 3 and 4 show that our estimated value \(F_{\mathrm{est}}\) is close to the true value \(F=\langle G|\rho|G\rangle\) even when the number \(n\) of qubits and \(p\) are large. Meanwhile, the deviation of \(F_{\mathrm{ub}}\) from \(F\) increases as \(p\) increasing as shown in Figs. 3 and 4. ## VI Conclusion and Discussion We have proposed a verification protocol for graph states assuming the depolarizing channel. A remarkable feature of our verification protocol is that the fidelity of an \(n\)-qubit graph state can be estimated by just measuring a single stabilizer \(S_{\ell}\) that satisfies the condition \(n_{I}=n/4\), where \(n_{I}\) denotes the number of identity operators in \(S_{\ell}\), and so it requires only one measurement setting. Furthermore, we have shown that the estimation improves as the number \(n\) of qubits increases. We have also derived a simple analytic expression for the average of a stabilizer in the depolarizing channel. We have applied our protocol to fully-connected graph states as well as cluster states and have demonstrated its usefulness. Furthermore, we have evaluated our protocol for other noise models other than the depolarizing noise and have compared it with previous protocols. As a future work, it would be interesting to extend our results to correlated noises such as the global depolarizing noise \[\mathcal{E}(\rho)=(1-p)\rho+p\frac{I^{\otimes n}}{2^{n}}, \tag{42}\] where \(\rho\) is any \(n\)-qubit state, and \(p\) is the error probability. Since this error model is observed in actual experiments such as the quantum supremacy experiment in Ref. [28], by doing so, we can make our results more practical. ###### Acknowledgements. ST is supported by the Japan Society for the Promotion of Science Grant-in-Aid for Scientific Research (KAKENHI Grant No. 19K03691). RY is supported by JSPS Grant-in-Aid for Scientific Research (KAKENHI Grant No. 19K14616 and 20H01838). YT is supported by the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP) Grant Number JPMXS0118067394 and JPMXS0120319794, JST [Moonshot R&D - MILLENNIA Program] Grant Number JPMJMS2061, and the Grant-in-Aid for Scientific Research (A) No.JP22H00522 of JSPS. SY is supported by Grant-in-Aid for JSPS Fellows (Grant No. JP22J22306). ## Appendix A: Proof of Lemma 1 Proof.: Using Eq. (1), the expectation value \(\langle G|(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}})|G\rangle\) can be written as \[\langle G|\left(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\right)|G \rangle=\langle+|^{\otimes n}U_{CZ}^{\dagger}\left(\prod_{k=1}^{m}\sigma_{\mu_ {k}i_{k}}\right)U_{CZ}|+\rangle^{\otimes n}, \tag{43}\] where \(U_{CZ}\equiv\prod_{e\in E}CZ_{e}\). Given that \(CZ\) is a Clifford operator, \(U_{CZ}^{\dagger}(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}})U_{CZ}\) is also a tensor product of Pauli operators with a sign \(+\) or \(-\). A tensor product of Pauli operators averaged by \(|+\rangle^{\otimes n}\) yields \(\pm 1\) if it is a tensor product of \(X\) and/or \(I\) and zero otherwise. Thus we obtain \(\langle G|(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}})|G\rangle^{2}=0\) or \(1\). 
For \(\langle G|(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}})|G\rangle\) being nonzero, there must exist \(s\in\{+1,-1\}\) and the set \(A\subseteq\{1,2,\ldots,n\}\) such that \[U_{CZ}^{\dagger}\left(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\right) U_{CZ} = s\left(\prod_{i\in A}X_{i}\prod_{j\in\bar{A}}I_{j}\right)\] \[s\left(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\right) = U_{CZ}\left(\prod_{i\in A}X_{i}\prod_{j\in\bar{A}}I_{j}\right)U_ {CZ}^{\dagger}, \tag{44}\] where \(\bar{A}\) is the complement of \(A\). When \(\langle G|(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}})|G\rangle\) is nonzero, thus, \(\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\) or \(-\prod_{k=1}^{m}\sigma_{\mu_{k}i_{k}}\) coincides with one of the stabilizers of \(|G\rangle\). ## Appendix B: Proof of Theorem 1 Proof.: Let \(X\equiv(1-p)^{4}\) and \(Y\equiv(1-4p/3)^{3}\), then \(\tilde{F}=X^{k}\) and \(F_{\mathrm{est}}=Y^{k}\). Using \(X\geq Y\) for \(p\in[0,1]\), and \[\tilde{F}-F_{\rm est}\] \[= X^{k}-Y^{k}\] \[= (X-Y)(X^{k-1}+X^{k-2}Y+\cdots+XY^{k-2}+Y^{k-1})\] \[\leq k(X-Y)X^{k-1}\] \[\leq kp^{2}\left(-\frac{1}{2}p+\frac{2}{3}\right)(1-p)^{4(k-1)} \tag{45}\] \[\leq kp_{0}^{2}\left(-\frac{1}{2}p_{0}+\frac{2}{3}\right)(1-p_{0})^{4( k-1)}\leq\frac{2}{3}kp_{0}^{2}<\frac{2}{3k}. \tag{46}\] Here, the right-hand-side of Eq. (45) takes its maximum value at \(p_{0}\), which satisfies \[p_{0}=\frac{16k+1-\sqrt{256k^{2}-352k+97}}{6(4k-1)}. \tag{47}\] The last inequality in Eq. (46) can be obtained from the inequality \(p_{0}<1/k\).
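As a quick numerical sanity check of the closed-form fidelity in Eq. (26), the single-stabilizer estimate \(F_{\rm est}=(1-4p/3)^{3n/4}\), and the gap bound used in the proof above, the expressions can be evaluated directly. The short Python sketch below assumes those formulas; the function names are illustrative.

```python
import numpy as np

def fidelity_exact(n, p):
    # Eq. (26): exact fidelity of an n-qubit fully-connected graph state
    # under single-qubit depolarizing noise with error probability p.
    return 0.5 * ((1 - 2*p/3)**n + (2*p/3)**n + (1 - 4*p/3)**n)

def fidelity_estimate(n, p):
    # F_est = (1 - 4p/3)^(3n/4): average of a single stabilizer with n_I = n/4.
    return (1 - 4*p/3)**(3*n // 4)

# The estimate improves as n grows (cf. Fig. 3).
for n in (8, 24, 96):
    for p in (0.02, 0.05, 0.10):
        F, Fest = fidelity_exact(n, p), fidelity_estimate(n, p)
        print(f"n={n:3d} p={p:.2f}  F={F:.4f}  F_est={Fest:.4f}  diff={F - Fest:+.2e}")

# Bound used in the proof above: with X = (1-p)^4, Y = (1-4p/3)^3 and k = n/4,
# max_p (X^k - Y^k) < 2/(3k), and p_0 of Eq. (47) satisfies p_0 < 1/k.
p = np.linspace(0.0, 1.0, 100001)
for k in (2, 6, 24):
    gap = (1 - p)**(4*k) - (1 - 4*p/3)**(3*k)
    p0 = (16*k + 1 - np.sqrt(256*k**2 - 352*k + 97)) / (6*(4*k - 1))
    print(f"k={k:2d}  max gap={gap.max():.3e}  2/(3k)={2/(3*k):.3e}  p0={p0:.4f} (<1/k: {p0 < 1/k})")
```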
2303.15655
Joint embedding in Hierarchical distance and semantic representation learning for link prediction
The link prediction task aims to predict missing entities or relations in a knowledge graph and is essential for downstream applications. Existing well-known models deal with this task mainly by representing knowledge graph triplets in distance space or semantic space. However, they cannot fully capture the information of head and tail entities, nor make good use of hierarchical level information. Thus, in this paper, we propose a novel knowledge graph embedding model for the link prediction task, namely HIE, which models each triplet ($h$, $r$, $t$) in a distance measurement space and a semantic measurement space simultaneously. Moreover, HIE is mapped into a hierarchical-aware space to leverage rich hierarchical information of entities and relations for better representation learning. Specifically, we apply a distance transformation operation to the head entity in distance space to obtain the tail entity, instead of translation-based or rotation-based approaches. Experimental results on four real-world datasets show that HIE outperforms several existing state-of-the-art knowledge graph embedding methods on the link prediction task and handles complex relations accurately.
Jin Liu, Jianye Chen, Chongfeng Fan, Fengyu Zhou
2023-03-28T00:42:29Z
http://arxiv.org/abs/2303.15655v1
# Joint embedding in Hierarchical distance and semantic representation learning for link prediction ###### Abstract The link prediction task aims to predict missing links in the knowledge graph and is essential for the downstream application. Existing well-known models deal with this task by mainly focusing on representing knowledge graph triplets in the distance space or semantic space. However, they can not fully capture the information of head and tail entities, nor even make good use of hierarchical level information. Thus, in this paper, we propose a novel knowledge graph embedding model for the link prediction task, namely, HIE, which models each triplet (h, r, t) into distance measurement space and semantic measurement space simultaneously. Moreover, to leverage the hierarchical information of entities and relations, HIE is introduced into hierarchical-aware space for better representation learning. Specifically, we apply distance transformation operation on the head entity in distance space to obtain the tail entity instead of translation-based or rotation-based approaches. Experimental results of HIE on four real-world datasets show that HIE outperforms several existing state-of-the-art knowledge graph embedding methods on link prediction task and deals with complex relations accurately. ## 1 Introduction Knowledge graphs (KGs) have been an essential component in artificial intelligence (AI) and applied to many downstream applications, such as question answering[1][2], sentiment analysis[3] and image caption[4]. Numerous well-known large-scale knowledge graphs, such as WordNet[5], Freebase[6] and YAGO[7], composed of structural information from manifold human knowledge, stored in a directed graph, have been widely employed for various domain in the past decades. The form of knowledge fact is typically triplet, i.e. (head entity, relation, tail entity) or (subject entity, relation, object entity), or (\(h\), \(r\), \(t\)) in short. Specifically, the head or tail entity denotes the concepts or definitions, and the relation represents the relationship between entities. Taking a triplet (Allen, Workmate, Wang kai) from Fig.1 as an example, it describes the fact that Wang kai is the workmate of Allen. Although previous KGs, such as Wordnet, Freebase and YAGO are composed of massive facts, they inevitably suffer from the incompleteness of the existing knowledge graphs. As the KG shown in Fig.1, we can get the corresponding facts of the triplets that are circled in black and connected to Allen. However, conditioned on the KG, we do not know what relationship between Allen and Joe Chen, nor the relationships among those are connect to Allen. Thus, accurately predicting the missing links between known entities, i.e., link prediction task, has gained much more attention. Many knowledge graph embedding (KGE) methods are proposed to learn the continuous low-dimensional embedding expressions of entities and relations to fulfill this task. According to the form of score functions, previous KGE methods can be roughly classified into three categories: (1) Translation-based models (2) Semantic matching models (3) Deep neural network-based models. These approaches achieve promising results to some extent. However, since these methods can only capture structural information or semantic information, they lack the ability to extract hierarchical features and take advantage of two kinds of information, which potentially limits their performance on the link prediction task. 
As the seminal work in KGEs, TransE[8] projects knowledge graphs into low-dimensional space efficiently and obtains satisfactory results in 1-to-1 tasks. The modeling ability of TransE may be because it could capture a lot of structural information in the distance measurement space. However, TransE fails to model other complex relation properties of 1-to-N, N-to-1 and N-to-N. To solve such a problem, TransR[9] utilizes the relationship-specific mapping matrix to project head and tail entities into relation-specific space. Nevertheless, there may be a little differences between tail entities and head entities under the same relationship. Besides, TransR ignores the structural information of the dis Figure 1: Illustration of a small KG for Allen, where the relation between Joe Chen and Allen, Peng Sanyuan and Allen are needed to infer via KG embedding methods. tance measurement space and the latent semantic information between entity-relations, which can also impair the expression ability of the model. Another exciting work M-DCN to deal with such a problem was proposed in [10]. The model uses convolutional kernels and embedding combination to capture triplet-level semantic information. However, the structural features are weakened, and the way of entity combination in M-DCN can omit some original embedding information. Thus, to build an expressive KGE model and deal with the link prediction task, we propose a novel knowledge graph embedding model named HIE. Unlike the previous works, we project each entry (h, r, t) into the distance measurement space and semantic measurement space to keep structural and entity-relation's semantic information through the corresponding mapping matrix. Consequently, the model can leverage more complex features between entities and their relationships. Taking the relationship _play as_ in Fig.1 as an example, in the distance measurement space, models may get Emb(Qiao xin) \(\approx\) Emb(Song xiaoqiang). While in the semantic measurement space, the projected tail entities (i.e., Qiao Xin and Song xiaoqiang) may be close via the play_asspecific matrix by using TransR or M-DCN and the relation may not be suitable for the projected entities. Thus, the modeling ability of methods may be limited by only projecting the entities and relations in distance measurement or semantic space. Comparatively, HIE projects the head entity, relation and tail entity into fine-grained semantic space, which is shown in Fig.2. That's to say, we may get the two plausible triplets (Allen(musician), play_in_urban_film, Qiaoxin(rock teenagers)) and (Allen(worker), play_in_narrative_film, Song xiaoqiang(covid-19 fighter)). As for the relationship _Classmate_, the projected embedding in the distance measurement space keeps the structural information of the original triplet. Therefore, HIE can distinguish the triplets easily under the different relations. In addition, motivated by the TKRL model[11], which models entities with hierarchical types. We jointly extract hierarchical geometric information (from distance measurement) and semantic information (from semantic measurement) to capture more triplet features, which can also be treated as a multi-view way to use knowledge to solve the link prediction task. In the empirical experiments, we compare HIE with several state-of-the-art KGE methods, including translation-based, neural network-based and semantic matching-based models. 
The results on the benchmark datasets show that our proposed HIE outperforms the baseline model in the link prediction task and outperforms several complex relations modeling task. In conclusion, our main contributions are summarized as follows: * We propose a novel KGE model, namely, HIE, which keeps structural geometric information and captures semantic information by projecting each triplet (h, r, t) into both distance measurement and semantic measurement space to learn the knowledge representation jointly. * We use distance transformation operation instead of translation-based or rotation-based projection to obtain tail entity. And adopt the semantic extractor to learn fine-grained entity and relation representations in different semantic spaces. Moreover, we extract hierarchical geometric information and semantic information for the representation learning process to capture more information without extra data. * To the best of our knowledge, we are the first to combine distance measurement with semantic measurement and further employ the hierarchical information for the model to address the link prediction task. * We conduct our experiments extensively on four representative benchmark datasets WN18[8], WN18RR[12], FB15k-237[13] and YAGO3-10[14]. The evaluation results illustrate that HIE outperforms the compared several state-of-the-art methods on the link prediction task. More importantly, HIE shows its robustness for empirical results via comprehensive ablation studies. The rest of this article is organized as follows. Related work is briefly reviewed in Sec.2. In Sec.3, the proposed HIE is presented in detail. In Sec.4, extensive experiments are conducted on four real-world datasets. Also, the experimental protocols, evaluation metrics and the results on link prediction and complex relations are reported. The conclusion and future work are discussed in Sec.5. ## 2 Related work Knowledge graph embedding has been widely studied as one of the most critical parts for solving the link prediction task in recent years. According to the score functions and representation approaches, existing KGE models can be roughly divided into three categories: Translation-based models, semantic matching models, deep neural network models. Table.1 reports several state-of-the-art KGE models and their parameters. ### Translation-based models TransE[8] is regarded as the most representative translation-based model, interpreting relationships as a translation from Figure 2: Illustration of the projection from origin space to entity-relation semantic space. the head entity to the tail entity. For a triplet (\(h\),\(r\),\(t\)), it attempts that **h+r\(\star\)t** and employs \(f_{r}\)(**h,t**)=-\(||\textbf{h}+\textbf{r}-\textbf{t}||_{L_{1}/L_{2}}\) as the corresponding score function under \(L_{1}\) or \(L_{2}\) constraints. Despite its simplicity and efficiency, TransE couldn't obtain well results on complex relations, such as 1-to-N, N-to-1 and N-N. In order to address such issues, TransH[15] projects the entities into the relation hyperplane and performs a translation operation on the transformed embeddings. TransR[9] employs projection spaces for entities and relations via a relation-specific mapping matrix M\({}_{r}\) and performs translation into the relation-specific space. To improve the performance of TransR, Ji et al.[16] proposed a more fine-grained model TransD, in which entities and relations are composed of two basic vectors. 
One is for capturing the semantic meanings, and the other one for constructing dynamic mapping matrices M\({}_{r\textbf{h}}\)=\(\textbf{r}_{p}\textbf{h}_{p}^{T}+\textbf{t}^{\text{zero}}\) and M\({}_{r\textbf{r}}\)=\(\textbf{r}_{p}\textbf{t}_{p}^{T}+\textbf{t}^{\text{zero}}\) via the projection vectors \(\textbf{h}_{p}\), \(\textbf{t}_{p}\) and \(\textbf{r}_{p}\). Recently, there have been some variants of the translation-based model by representing the head entities and relations into the complex vector space and manifold space. ComplEx [17] firstly applies complex value embedding approach to entities and relations, which attempts to model symmetric and antisymmetric relations via Hermitian dot product between real vectors and its conjugate. However, it can't deal with the composition relation pattern. To solve this problem, RotatE[14] models the relations as an element-wise rotation rather than a translation from head entities to tail entities in complex space, and the corresponding score function is defined as \(||\textbf{h}\circ\textbf{r}-\textbf{t}||\). Moreover, since TransE exists the problems of regularization and an unchangeable negative sampling ratio, TorusE[18] projects the entities and relations on a compact Lie group in torus space, and the score function is defined as \(min_{(x,y)\in([h]+[r])\times[t]}||\) [\(|\)x - y\(||_{i}\) with the projection: \(\mathbb{R}^{n}\to T^{n}\), \(x\mapsto[x]\), where \([\textbf{h}]\), \([\textbf{r}]\), \([\textbf{t}]\in T^{n}\). Additionally, TorusE adopts the similar relation translation, i.e., \([\textbf{h}]+[\textbf{r}]\approx[\textbf{t}]\). However, the above models simply project both entities and relations into the same embedding space. Thus, they couldn't model the hierarchical characteristics for entities and relations, which leads to weak improvements for link prediction task. ### Semantic matching models Another way to obtain the embeddings is to match the latent semantic similarity between entities and relations in vector space. SE[19], which is regarded as a linear model, projects the specific relation as a square matrix pair M=(M\({}_{1}\), M\({}_{2}\)) and uses the similarity function \(f_{r}\)(**h,t**)=-\(||\textbf{M}_{1}\textbf{h}-\textbf{M}_{2}\textbf{t}||_{L_{1}/L_{2}}\) to determine a plausible triplet. As for the bilinear model, RESCAL[20] is a representative tensor factorization model to capture the latent structure information, where each relation is factored as a dense square matrix M\({}_{r}\) to associated with the entity vector embedding. However, due to the three-way rank-r factorization, it has quadratic growth parameters and tends to overfit[21]. DistMult[22], which extends RESCAL by modifying the bilinear function, proposes a simple embedding approach through a bilinear product and replaces the complex relation matrix with an easy-to-train diagonal matrix. However, when dealing with the symmetric relations, i.e., (h,r,t) and (t,r,h), DistMult exhibits the same results for triplets and lacks adequate expressive power. 
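To make the score functions above concrete, the following is a minimal PyTorch sketch of three representative scorers (TransE, DistMult and RotatE). The embedding shapes, the fixed margin \(\gamma\), and the small constant added before the square root are illustrative assumptions, not values taken from the cited implementations.

```python
import torch

def transe_score(h, r, t, p=1):
    # TransE: -||h + r - t||_p ; a higher score means a more plausible triplet.
    return -torch.norm(h + r - t, p=p, dim=-1)

def distmult_score(h, r, t):
    # DistMult: h^T diag(r) t, a bilinear product with a diagonal relation matrix.
    return torch.sum(h * r * t, dim=-1)

def rotate_score(h, r_phase, t, gamma=9.0):
    # RotatE: the relation acts as an element-wise rotation in the complex plane.
    # h and t are stored as (real, imaginary) halves; r_phase holds the rotation angles.
    h_re, h_im = torch.chunk(h, 2, dim=-1)
    t_re, t_im = torch.chunk(t, 2, dim=-1)
    r_re, r_im = torch.cos(r_phase), torch.sin(r_phase)
    diff_re = h_re * r_re - h_im * r_im - t_re
    diff_im = h_re * r_im + h_im * r_re - t_im
    return gamma - torch.sqrt(diff_re**2 + diff_im**2 + 1e-12).sum(dim=-1)

# Example: a batch of 4 triplets with 100-dimensional embeddings (50 phases for RotatE).
h, r, t = torch.randn(4, 100), torch.randn(4, 100), torch.randn(4, 100)
print(transe_score(h, r, t).shape, distmult_score(h, r, t).shape,
      rotate_score(h, torch.randn(4, 50), t).shape)
```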
By \begin{table} \begin{tabular}{c c c c} \hline \hline Category & Model name & Score function & Parameters \\ \hline \multirow{8}{*}{Translation-based models} & TransE & -\(||\textbf{h}+\textbf{r}-\textbf{t}||_{L_{1}/L_{2}}\) & **h,r,t\(\in\mathbb{R}^{d}\)** \\ & TransD & -\(||(r,\textbf{h}_{p}^{T}+1)\textbf{h}+r(r,\textbf{t}_{p}^{T}+1)\textbf{t}||_{2}^{2}\) & **h,t,w\({}_{u}\),w\(\in\mathbb{R}^{d}\),r,w\({}_{v}\)\(\in\mathbb{R}^{k}\)** \\ & TransR & -\(||\textbf{M},\textbf{h}+r(M,\textbf{t}||_{2}^{2}\) & **h,t,r\(\in\mathbb{R}^{d}\)**,M\({}_{r}\)\(\in\mathbb{R}^{d}\)** \\ & ComplEx & Re(\(<\)\(r\), **h**, \(\mathbb{t}>\)) & **h,r,t\(\in\mathbb{C}^{d}\)** \\ & RotatE & \(||\textbf{h}\circ\textbf{r}-\textbf{t}||\) & **h,t,r,w\({}_{v}\)\(\in\mathbb{R}^{d}\)** \\ & TorusE & \(min_{(x,y)\in([h]+y]}||_{x}>||_{x}\) & \([\textbf{h}]\),\([\textbf{r}]\),\([\textbf{t}]\)\([\textbf{t}]\)\([\textbf{r}^{x}\)** \\ \hline \multirow{8}{*}{Semantic matching models} & DistMult & \(\textbf{h}^{T}\)diag(M,t) & **h,r,t\(\in\mathbb{R}^{d}\)** \\ & RESCAL & \(\textbf{h}^{T}\)M,t & **h,t\(\in\mathbb{R}^{d}\),M,\(\in\mathbb{R}^{d}\)** \\ & HolEx & \(\sum_{i=0}^{j}\rho(\textbf{h},r,c_{j})\)t & **h,r,t\(\in\mathbb{R}^{d}\)** \\ & Simple & \(\frac{1}{2}(\textbf{h}ort+\textbf{t}^{\prime}\textbf{t})\) & **h,r,t\(\in\mathbb{R}^{d}\)** \\ \hline \multirow{8}{*}{Deep neural network models} & KBGAN & \(f_{0}(h,\textbf{r})\) & **h,r,t\(\in\mathbb{R}^{d}\)** \\ & ConvE & \(f(\text{vec}(f([\textbf{h},\textbf{r}]*\Delta))W)\textbf{t}\) & **h,r,t\(\in\mathbb{R}^{d}\)** \\ & ConvKB & concat(\(f([\textbf{h},\textbf{r}]*\Delta))W\) & **h,r,t\(\in\mathbb{R}^{d}\)** \\ \cline{1-1} & M-DCN & \(\sigma(f([\textbf{h}\oplus\textbf{r}]*w^{\prime}))W+b) & **h,r,t\(\in\mathbb{R}^{d}\)**, \(w^{\prime}\)\(\in\mathbb{R}^{1\times 2}\)** \\ \cline{1-1} & InteractE & \(g(\text{vec}(f([\textbf{h},\textbf{r}])\oplus\text{w}))W)\textbf{h}\) & **h,r,t\(\in\mathbb{R}^{d}\)**, \(w^{\prime}\)\(\in\mathbb{R}^{1\times 2}\)** \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of several KGE models’ score function: where \(\tilde{h}\) denotes the conjugate operation over \(h\), \(<h,t>\) denotes the Hermitian dot product and equals \(\tilde{h}^{T}\). \(o\) represents the Hadamard product. \(f_{0}\) denotes the score function of discriminator. \(p(\textbf{h},\textbf{r};\textbf{c})\)=(**cob**)\(\star\)r, where \(\star\) denotes the circular correlation, \(c\) is a fixed vector. \(*\) denotes convolution operation, \(\oplus\) represents the concatenate operation, \(\oplus\) denotes the depthwise circular convolution operation. employing circular correlation of entity and relation embeddings, HolE[23] can capture rich embedding interaction information and maintain the computing efficiency as Dist-Mult. To further improve the representation ability of HolE, Xue et al.[24] proposes HolEx that interpolates the full product matrices to achieve a lower dimensionality. SimplE[25] introduces a simple tensor factorization approach by calculating the average score of the inverse triplets to be full-expressive, i.e., (**h, r, t**) and (**t, r\({}^{-1}\), h**), where **r\({}^{\prime}\)** represents the inverse relation embeddings. Although these semantic models can capture semantic information between entities and relations, they are prone to suffering overfitting due to the model redundancy in low dimension embedding space. Further, the structural information of the triplets can not be fully utilized. 
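Since the circular correlation at the core of HolE may be less familiar than the other operations above, a small sketch of how it can be computed with FFTs is given below; this is an illustrative reconstruction of the operation, not the authors' code.

```python
import torch

def circular_correlation(a, b):
    # (a * b)_k = sum_i a_i * b_{(i+k) mod d}, computed in O(d log d) via the FFT.
    return torch.fft.irfft(torch.conj(torch.fft.rfft(a)) * torch.fft.rfft(b), n=a.shape[-1])

def hole_score(h, r, t):
    # HolE scores a triplet as r^T (h * t), where * is the circular correlation.
    return torch.sum(r * circular_correlation(h, t), dim=-1)
```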
### Deep neural network-based models Recently, deep neural networks have been applied in various domains successfully and attracted more and more attention in solving link prediction task. Representative neural tensor work (NTN)[26] takes entity embeddings and relation vectors as input and measures the plausibility of a triplet associated with a particular relationship by combining multi-layer perceptrons (MLPs) with bilinear models. ConvE[12] and ConvKB[13] adopt a convolution neural network to model interactions between entities and relations for the link prediction task. The difference between them is that ConvE uses 2D convolution on a reshaped entity and relation embedding matrix while ConvKB encodes the concatenation of entity and relation embeddings without reshaping. Apart from CNN, Nguyen et al.[27] proposes CapsE, which takes each triplet as a 3-column matrix and adopts Capsule Net to capture the information of embeddings [28]. However, these models suffer from three problems: (1) it is insufficient in dealing with complex relations; (2) the number of interactions that they can capture is limited; (3) the structural information between entities and relations can be ignored easily. Thus, Zhang et al.[10] presents a multi-scale dynamic convolutional network to explore the entity and relation embedding characteristics for the first problem, namely, M-DCN. As for the second problem, InteractE[29] increases the number of additional interactions via feature permutation, a novel feature reshaping, and circular convolution. For the purpose of modeling various relational patterns, capturing structural information and improving the computation efficiency of deep neural networks, Liu et al.[30] proposes AprilE by introducing a triple-level self-attention approach. Graph convolution networks (GCNs) have been used as another way to capture the graph-structured and embedding information simultaneously. R-GCN[31] uses relation-specific GCN to model the directed knowledge graph. KBGAN[32] combines KGE with generative adversarial networks (GANs) to model the entities and relations, and the score function is defined in Table.1. Although neural network-based models can capture semantic information of triplet more easily than translation-based and semantic matching models, they still suffer from large model parameters, and the inability to make good use of the global structure. ### models with hierarchical structural information To make use of auxiliary information from external knowledge for the link prediction task, several models have introduced hierarchical structures in recent years. TKRL projects entities by type information matrices, which are composed of weighted and recursive hierarchical type encoders[11]. HA-RotatE projects entities with different moduli in the different hierarchical levels [33]. MuRP takes full advantage of hyperbolic embeddings to represent hierarchical relational graph data according to the relation-specific parameters [34]. Comparatively, HAKE learns the hierarchical entity embeddings in the polar coordinate system. The above hierarchical-based learning methods only capture entity or relation features in their embedding space. However, they fail to keep the geometric information or semantic information simultaneously. Thus, they can only show advantages in dealing with complex relation properties or relation patterns. 
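For the convolution-based scorers reviewed above (ConvE and its variants), a stripped-down ConvE-style module is sketched below. The 2D reshape size, channel count and activations are illustrative assumptions rather than the published hyperparameters; the module is meant only to show the reshape-convolve-project-score pattern.

```python
import torch
import torch.nn as nn

class ConvEStyleScorer(nn.Module):
    # Reshape h and r into 2D maps, stack them, convolve, project back to the
    # embedding size, and take a dot product with the tail embedding t.
    def __init__(self, dim=200, h2d=10, w2d=20, channels=32):
        super().__init__()
        assert h2d * w2d == dim
        self.h2d, self.w2d = h2d, w2d
        self.conv = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.fc = nn.Linear(channels * 2 * h2d * w2d, dim)

    def forward(self, h, r, t):
        x = torch.cat([h.view(-1, 1, self.h2d, self.w2d),
                       r.view(-1, 1, self.h2d, self.w2d)], dim=2)   # stacked 2D maps
        x = torch.relu(self.conv(x)).flatten(start_dim=1)
        x = torch.relu(self.fc(x))
        return torch.sum(x * t, dim=-1)

scores = ConvEStyleScorer()(torch.randn(4, 200), torch.randn(4, 200), torch.randn(4, 200))
```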
## 3 Our Model This section presents our proposed model HIE for the link prediction task, which can capture the position and semantic information in hierarchical level representation spaces. Before introducing the HIE model, the basic mathematical notions of model are summarized in Table.2. ### Problem formulation A knowledge graph consists of triplets, which can be expressed as \(\mathcal{G}\)= \((\mathcal{E},R,\text{T})\), where \(h_{0,\ldots,|\mathcal{E}|}\)\(t_{0,\ldots,|\mathcal{E}|}\)\(\in\mathcal{E}\) and \(r_{0,\ldots,|R|}\)\(\in\mathcal{R}\). \(|\mathcal{E}|\) and \(|\mathcal{R}|\) denote the number of entities and relations, respectively. For a link prediction task, the model aims to predict the head entity with a given tail entity (\(?\),\(r\),\(t\)) or the tail entity with a given head entity (\(h\),\(r\),?) to get a valid triplet (\(h\),\(r\),\(t\)). Moreover, the model needs to learn the low-dimension embeddings for entities and relations. \begin{table} \begin{tabular}{c c} \hline \hline Notations & Short explanations \\ \hline \(\mathcal{G}\) & Knowledge graph \\ \(\mathcal{E},R,\text{T}\) & entities, relations and triplets \\ **h,r,t** & head entity, relationship and tail entity \\ \(f_{r}\)(h,t) & score function \\ \(\lambda_{1},\lambda_{2},\)\(\lambda_{-}\) & the weights of different hierarchical level \\ \(\circ\) & Hadamard product \\ \(||\cdot||_{L_{1}/L_{2}}\) & \(L_{1}\) or \(L_{2}\) norm \\ \(diag(\cdot)\) & the diagonal projection matrix \\ \hline \hline \end{tabular} \end{table} Table 2: Notations and Explanations of HIE which are used in this paper. ### HIE model #### 3.2.1 Basic structure space Unlike the TKRL, which captures hierarchical entity type information, and TranR, which projects entities into relation-specific space, comparatively, we model the triplet into distance measurement and semantic measurement space. The basic distance measurement space and semantic measurement space structure are shown in Fig.3. For each entity or relation, we consider that it carries two parts of information. One is the geometric information for the distance measurement of entities in distance space under the specific relation. The other one is used to identify the sub-semantic restrictions in semantic space. It's worth noting that the concept of distance measurement is a general way to determine the position change degree in the embedding space. Take a triplet from Fig.1, (Allen, classmate, Wangshasha), the "classmate" is 1-1 relationship in this sub-graph. Thus, entities and relationship embedding in distance measurement space can easily distinguish the triplet, the effect of the semantic is slightly low. While we choose the 1-N relation "play as", the sub-semantic plays a more important role than distance measurement space. Also, the distance measurement space keeps the geometric for the origin triplet as auxiliary information, which can be useful for determining a plausible long path triplet in dealing with complex relation properties[35]. The steps of the proposed method in basic distance measurement space and semantic measurement space are described as follows. Firstly, the entities and relations are randomly initialized as \(k\)-dimension representation. Then, all of them are equally segmented into two parts. For example head entity h is divided into \(\textbf{h}_{0}\) and \(\textbf{h}_{1}\). Here, we consider that \(\textbf{h}_{0}\) consists of geometric information and \(\textbf{h}_{1}\) consists of semantic information. 
Secondly, we project the \(\textbf{h}_{0}\) and \(\textbf{h}_{1}\) in distance measurement space and semantic measurement space, respectively. **Distance measurement space** In this geometric space, we utilize the corresponding matrices to obtain \(\textbf{h}_{p}^{1}\), \(\textbf{t}_{p}^{1}\) and \(\textbf{r}_{p}^{1}\), respectively. The element-wise projection in the distance measurement space is defined as: \[\left\{\begin{array}{l}\textbf{h}_{p}^{1}=diag(\text{M}_{hp}^{1})\textbf{h} _{0}\\ \textbf{t}_{p}^{1}=diag(\text{M}_{tp}^{1})\textbf{t}_{0}\\ \textbf{r}_{p}^{1}=diag(\text{M}_{rp}^{1})\textbf{\sigma}\textbf{r}_{0}\\ \end{array}\right. \tag{1}\] where \(\textbf{h}_{0}\), \(\textbf{t}_{0}\) and \(\textbf{r}_{0}\) denote the first segment representation of \(h\), \(r\), and \(t\), respectively. \(diag(\text{M}_{hp}^{s})\) is a diagonal matrix that projects h\({}_{0}\) into geometric distance measurement space, \(diag(\text{M}_{tp}^{s})\) and \(diag(\text{M}_{rp}^{s})\) denote the same operation on \(\textbf{t}_{0}\) and \(\textbf{r}_{0}\). Instead of using rotation or translation operations for knowledge graph representation, a flexible transformation matrix \(\text{M}_{r}^{s}\) is used to capture the geometric features among entities and relations. As is shown in Fig.3, the light blue straight line models the relation as a translation, and the purple curve dotted line treats the relation as a rotation. However, these modeling approaches are not flexible, nor capture more transformed information between entities and relations for dealing with complex relations. Thus, a soft transform matrix is proposed to overcome the problems, which is shown as the red curve solid line. In this way, HIE stores all the possible form of transformation and movement operation parameters through the matrix \(\text{M}_{r}^{1}\). Moreover, this approach can ensure that the projecting procedure becomes more flexible. \[\text{M}_{r}^{1}=\text{M}_{0}\textbf{r}_{p}^{1} \tag{2}\] where \(\text{M}_{0}\) is used to form transformation operation matrix and the dimension of \(\text{M}_{0}\) is \(\frac{k}{2}*1\). Hence, the position distance function in the first level of HIE is shown in Eq.3. \[\text{d}_{r}^{p1}=||\textbf{h}_{p}^{1}\text{M}_{r}^{1}-\textbf{t}_{p}^{1}||_{ L_{1}/L_{2}} \tag{3}\] **Semantic measurement space** In this semantic space, we tend to use sub-semantic to project entities and relations in a fine-grained way. Similarly, the projection in this space is defined as Eq.4. \[\left\{\begin{array}{l}\textbf{h}_{s}^{1}=diag(\text{M}_{hs}^{1})\textbf{b} _{1}\\ \textbf{t}_{s}^{1}=diag(\text{M}_{ts}^{1})\textbf{\sigma}\textbf{t}_{1}\\ \textbf{r}_{s}^{1}=diag(\text{M}_{rs}^{1})\textbf{\sigma}\textbf{r}_{1}\\ \end{array}\right. \tag{4}\] where \(\textbf{h}_{1}\), \(\textbf{t}_{1}\), and \(\textbf{r}_{1}\) denote the second segment of h, r, and t, respectively. \(diag(\text{M}_{hs}^{i})\) is a semantic extractor that projects \(\textbf{h}_{1}\) into fine-grained sub-semantic measurement space, \(diag(\text{M}_{ts}^{i})\) and \(diag(\text{M}_{rs}^{i})\) denote the same operation on \(\textbf{t}_{1}\) and \(\textbf{r}_{1}\). As is illustrated in Fig.3(b), \(\textbf{t}_{s,p}^{1}\) is the predicted tail entity and \(\textbf{r}_{h,t}^{1}\) is the ideal relation embedding. Similar to [36][37], we also try to have the following equation: \(\textbf{h}_{s}^{1}+\textbf{r}_{s}^{1}\approx\textbf{t}_{s}^{1}\), i.e. \(\textbf{r}_{s}^{1}\approx\textbf{r}_{h,t}^{1}\). 
And \(\textbf{r}_{s}^{1}\) is a projected composition representation in semantic space, and \(\textbf{r}_{s}^{1}\) here denotes a mixed semantic information representation, including types, concepts, and other Figure 3: The basic structure of HIE. semantic information. The following Eq.5 is used to obtain the score of triplet. \[\mathrm{d}_{r}^{\mathrm{e1}}=||\mathbf{h}_{s}^{\mathrm{h}}+\mathbf{r}_{s}^{ \mathrm{1}}-\mathbf{t}_{s}^{\mathrm{1}}||_{L_{2}} \tag{5}\] Thirdly, according to the above projections in two spaces, the joint score function in the basic structural level is defined as Eq.6. \[\mathrm{d}_{r}^{\mathrm{1}}=\alpha\mathrm{d}_{r}^{\mathrm{p1}}+(1-\alpha) \mathrm{d}_{r}^{\mathrm{e1}} \tag{6}\] where \(\alpha\) is a learnable weight parameter used to take advantage of distance measurement space and semantic measurement space. #### 3.2.2 Hierarchical structure representation In order to capture hierarchical information from different space levels, we encoder the basic structure space representation in hierarchical way, which is shown in Fig.4. Also, the semantic and position extraction matrix are defined as follows: \[\left\{\begin{aligned} \mathbf{h}_{p}^{2}&= \mathbf{h}_{p}^{\mathrm{1}}\mathrm{M}_{pe}+\mathbf{h}_{0}\\ \mathbf{t}_{p}^{2}&=\mathbf{t}_{p}^{\mathrm{1}} \mathrm{M}_{pe}+\mathbf{t}_{0}\\ \mathbf{r}_{p}^{2}&=\mathbf{r}_{p}^{\mathrm{1}} \mathrm{M}_{pe}+\mathbf{r}_{0}\\ \mathbf{h}_{s}^{2}&=\mathbf{h}_{s}^{\mathrm{1}} \mathrm{M}_{se}+\mathbf{h}_{1}\\ \mathbf{t}_{s}^{2}&=\mathbf{t}_{s}^{\mathrm{1}} \mathrm{M}_{se}+\mathbf{t}_{1}\\ \mathbf{r}_{s}^{2}&=\mathbf{r}_{s}^{\mathrm{1}} \mathrm{M}_{se}+\mathbf{r}_{1}\\ \end{aligned}\right. \tag{7}\] where \(\mathrm{M}_{pe}\) denotes the geometric extraction matrix that can obtain latent information from the shallow level, \(\mathrm{M}_{se}\) is the semantic extraction matrix. The deeper information can be captured via the above matrices. Also, through similar operations on the new embeddings in the deep level representation space, the position distance function can be defined as: \[\mathrm{d}_{r}^{\mathrm{e2}}=||\mathbf{h}_{p}^{\mathrm{2}}\mathrm{M}_{r}^{2}- \mathbf{t}_{p}^{2}||_{L_{1}/L_{2}} \tag{8}\] where \(\mathbf{h}_{p}^{2}\) and \(\mathbf{t}_{p}^{2}\) denote corresponding element-wise projection in distance measurement space, \(\mathrm{M}_{r}^{2}\) is the transformation operation parameter matrix as \(\mathrm{M}_{r}^{\mathrm{1}}\) via multiplying \(M_{1}\), which is also a \(\frac{k}{5}\)\({}^{1}\) dimensional matrix. Similarly, at different hierarchical level space, the corresponding measurement score can be obtained as Eq.9. \[\mathrm{d}_{r}^{pk}=||\mathbf{h}_{p}^{\mathrm{k}}\mathrm{M}_{r}^{k}-\mathbf{t }_{p}^{\mathrm{k}}||_{L_{1}/L_{2}} \tag{9}\] Correspondingly, the semantic function of HIE at different deep level is defined as: \[\mathrm{d}_{r}^{\mathrm{i}k}=||\mathbf{h}_{s}^{k}+\mathbf{r}_{s}^{k}-\mathbf{ t}_{s}^{k}||_{L_{2}} \tag{10}\] And the total score function in deep level is defined as: \[\mathrm{d}_{r}^{\mathrm{k}}=\alpha\mathrm{d}_{r}^{\mathrm{pk}}+(1-\alpha) \mathrm{d}_{r}^{\mathrm{i}k} \tag{11}\] Finally, according to Eq.6 and Eq.11, the final score function of HIE is defined as follows. \[f_{r}(\mathrm{h},\mathrm{t})=\sum_{k=1}^{N}\lambda_{k}\mathrm{d}_{r}^{k} \tag{12}\] where \(\lambda_{k}\) is the weights for the tradeoff between hierarchical level embedding scores, under the constrain \(1=\sum\lambda_{k}\). 
### Loss Function Following [14], we use the negative sampling loss with self-adversarial training as the final loss and try to minimize it. The final loss function is illustrated as Eq.13. \[L=-\mathrm{log}\sigma(\gamma-f_{r}(\mathbf{h},\mathbf{t}))-\sum_{j=1}^{n}p( \mathbf{h}_{j}^{\prime},\mathbf{r},\mathbf{t}_{j}^{\prime})\mathrm{log}\sigma (f_{r}(\mathbf{h}_{j}^{\prime},\mathbf{t}_{j}^{\prime})-\gamma) \tag{13}\] where \(\gamma\) is a fixed margin, \(\sigma\) denotes the sigmoid function, \((\mathbf{h}_{j}^{\prime},\mathbf{t}_{j}^{\prime})\) is the \(j\)-th negative triplet, and \(p(\mathbf{h}_{j}^{\prime},\mathbf{t}_{j}^{\prime})\) is the negative triples sampling probability distribution, which is defined in Eq.14. \[p(\mathbf{h}_{j}^{\prime},\mathbf{r},\mathbf{t}_{i}^{\prime}|\{\mathbf{h}_{j },\mathbf{r}_{j},\mathbf{t}_{j}\})=\frac{\mathrm{exp}af_{r}(\mathbf{h}_{j}^{ \prime},\mathbf{t}_{j}^{\prime})}{\sum_{j}\mathrm{exp}af_{r}(\mathbf{h}_{j}^{ \prime},\mathbf{t}_{j}^{\prime})} \tag{14}\] where \(\alpha\) is the temperature of sampling, and treated as the weight of negative samples. ### Connection to other KGE models Although some mathematical forms of our proposed HIE are similar to RotatE[37], in which the author uses modulus and phase information to learn entity and relation embeddings, the aims of the two models are different indeed. Figure 4: The whole architecture of the proposed HIE. RotatE tries to model and infer all relation patterns, while HIE aims to obtain more expressive embeddings via extracting information from position distance space and semantic space. Moreover, RotatE can not deal with complex relations well due to the angle-representation. More specifically, RotatE is only an angle-specific representation of HIE by replacing the transformation matrix with a rotation angle in distance measurement space. Moreover, HIE utilizes hierarchical level information to imitate people's cognitive behavior to improve the expressive ability of the embedding process. ## 4 Experiments ### Datasets Our proposed model HIE was evaluated on the following public benchmark datasets: WN18[8], WN18RR[12], FB15k-237[13], YAGO3-10[14]. All of them are extracted from the real-world knowledge graphs and contain massive entities and relations. The statistics of them are reported in Table.3, and detailed information is listed as follows. 1. WN18 is extracted from WordNet[38], a lexical knowledge graph for English, consisting of 10 relations and 40,943 entities. Its entities represent the word sense, and the relations denote the lexical relationship between entities. Moreover, most the relation patterns are composed of **symmetry/antisymmetry** and **inversion**. 2. WN18RR is a subset of WN18, where all the inverse relations are deleted. It consists of 40,943 entities with 11 different relations. And the main relation patterns are **symmetry/antisymmetry** and **composition**. 3. FB15k-237 is a subset of FB15k with 14,541 entities and 237 relations. Besides, all the inverse relations are deleted. Thus, the main relation patterns are **composition** and **symmetry/antisymmetry**. 4. YAGO3-10 is extracted from YAGO3[39], which contains approximately 123,182 entities and 37 relations. It describes the relations among people and their attributes and has a minimum of 10 relations per. ### Evaluation Metrics Link prediction is an essential task for knowledge graph completion, aiming to predict the missing entity according to a specific relation, i.e. 
(\(h\), \(r\), \(\gamma\)) or (\(?\), \(r\), \(t\)). The real triplet can be obtained by ranking the results of score function \(f_{r}\). Following the experimental settings in TransE[8], for each given triplet \(x_{i}\)=(\(h\),\(r\),\(t\))\(\in\) T\({}_{test}\), we replace the head entity h or the tail entity t by any other entity in the knowledge graph to generate a set of corrupted triplets, i.e. \(\widetilde{x}_{i}^{o}\)=(\(h\)\({}^{coor}\),\(r\))\(\notin\)T or \(\widetilde{x}_{i}^{o}\)=(\(h\),\(r\)\({}^{coor}\))\(\notin\)T respectively, where \(h\)\({}^{coor}\) and \({}^{coor}\) denote a set of corresponding candidate entities. Then we check whether the proposed model HIE obtains a high score for the test triplet and a low score for corrupted triplets. The right ranks rank\({}_{i}^{o}\) and left ranks rank\({}_{i}^{s}\) of the \(i-th\) test triplet \(x_{i}\) are each associated with corrupting either head or tail entities corresponding to the score function Eq.12, i.e. \(f_{r}(\textbf{h},\textbf{t})\), and are defined as follows: \[\begin{split} rank_{i}^{o}&=1+\sum_{\widetilde{x}_{ i}^{o}\notin\textbf{T}}I[\varphi(x_{i})<\varphi(\widetilde{x}_{i}^{o})]\\ rank_{i}^{s}&=1+\sum_{\widetilde{x}_{i}^{o}\notin \textbf{T}}I[\varphi(x_{i})<\varphi(\widetilde{x}_{i}^{s})]\end{split} \tag{15}\] where \(I[P]\) denotes the indicator function. If the condition \(P\) is true, the function returns 1 and returns 0 otherwise. In the experiment, the following three popular ranking metrics are utilized as our evaluation metrics: Mean rank (MRR), Mean reciprocal rank (MRR) and Hits@k (k=1, 3, and 10 in this paper), which are defined in Eq.16. MR and MRR denote the average ranks and the average inverse ranks for all test triplets, respectively. Hits@k is the percentage of ranks lower than or equal to k. In the above three metrics, higher MRR and Hits@k mean better performance, while the lower MR indicates better performance. \[\begin{split}\text{MRR:}\ \frac{1}{2G}\sum_{x_{i}\in\textbf{T}}( rank_{i}^{o}+rank_{i}^{s})\\ \text{MRR:}\ \frac{1}{2G}\sum_{x_{i}\in\textbf{T}}(\frac{1}{rank_{i} ^{o}}+\frac{1}{rank_{i}^{s}})\\ \text{Hits@k:}\ \frac{1}{2G}\sum_{x_{i}\in\textbf{T}}I[rank_{i}^{o} \leq\text{k}]+I[rank_{i}^{s}\leq\text{k}]\end{split} \tag{16}\] where \(G=|\)T\({}_{test}|\) denotes the size of T\({}_{test}\). ### Implementation details For this experiment, we implement our proposed model by Pytorch[40] with an adam optimizer[41], and fine-tune the hyperparameters to obtain the optimal configuration via grid search approach. The hyperparameter settings are as follows: entity and relation embedding dimension \(\in\) [250,500], batch size \(\in\) [256,512], fixed margin \(\gamma\in\) [3, 6, 9, 12, 18], self-adversarial sampling temperature \(\alpha\) [0.5, 1.0], \(l_{1}\)-norm or \(l_{2}\)-norm, and test batch size \(\in\) [8, 16], \(\lambda_{1}\) and \(\lambda_{2}\)\(\in\) [ 0.2, 0.4, 0.6, 0.8]. Moreover, we compare the proposed model HIE with three categories of state-of-the-art baselines: (1) translation-based models including TransE, TransR, TransD(unif), RotatE, TorusE and ComplEx. 
(2) semantic matching models, \begin{table} \begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{Datasets} & \multirow{2}{*}{Rels} & \multirow{2}{*}{Ents} & \multicolumn{4}{c}{Triples} \\ \cline{3-6} & & & Train & Valid & Test \\ \hline WN18 & 18 & 40,943 & 141,442 & 5,000 & 5,000 \\ WN18RR & 11 & 40,943 & 86,835 & 3,034 & 3,134 \\ FB15k-237 & 237 & 14,541 & 272,115 & 17,535 & 20,466 \\ YAGO3-10 & 37 & 123,182 & 1,079,040 & 5,000 & 5,000 \\ \hline \hline \end{tabular} \end{table} Table 3: The statistics of datasets in the experiment. Rels denotes the number of relations. Ents is the number of entities. Train, Valid and Test denote the number of training, validation and test triples, respectively. including DistMult. (3) deep neural network models including M-DCN, ConvE, ConvKB, KBGAN, R-GCN, HARN[42] and InteractE. There are two criteria for the above algorithms to be selected, i.e., they achieve high performance within a certain range of comparison and provide the source code, or their architecture is easy for reproducibility. ### Optimal hierarchical level Due to the limited computing resources, we need to determine the number of hierarchical levels at first. In this experiment, the link prediction results for hierarchical levels of HIE on WN18RR and WN18 are displayed in Fig.5. As the hierarchical level grows, the corresponding batch size and embedding dimension decrease, and the hierarchical weights are initialized equally. HIE with four different levels are implemented, as shown in Fig.5. As can be observed from Fig.5, all the metrics are optimal when two levels are set. This is mainly due to HIE can obtain structural information and semantic information from joint spaces efficiently. However, only one level for HIE is not adequate to capture enough information for modeling. Too many levels could also introduce noises, and may lead to poor performance for link prediction. Although the metrics in four level spaces are higher than that of three level spaces, the values of four evaluation metrics are still lower than that of two levels. In the next experiments, HIE with two levels are expressed as shallow level (level 1) and deep level (level 2). In conclusion, to balance the accuracy and resources utilization, HIE with two levels are chosen in the following experiments. Also, we can get the corresponding score function as defined in Eq.17. \[f_{r}(\mathrm{h},\mathrm{t})=\lambda_{1}\mathrm{d}_{r}^{1}+\lambda_{2} \mathrm{d}_{r}^{2} \tag{17}\] ### Link prediction In this part, the link prediction results of HIE on four public datasets are summarized in Table.4 and Table.5. The best score is in **bold** and the second-best score is underlined. The evaluation results on WN18 and YGAO3-10 are reported in Table.4 and the results on WN18RR and FB15k-237 are shown in Table.5. Moreover, all the compared results are based on the optimal performance of the algorithm. The observations from Table.4 can be summarized as follows: * Our proposed model can obtain significant overall evaluation scores from the empirical results compared with three kinds of KGE models on WN18 and YAGO3-10. Due to the massive inversion relation patterns existing in WN18 where the ratio reaches 94%[43]. To meet the inversion equation, the transformation matrix in HIE will be an identity matrix. That's to say, more information on distance measurement is omitted. Thus, the proposed HIE does not achieve best scores of MRR and Hits@1 on WN18. 
* On WN18, the proposed HIE outperforms the compared algorithms and achieves the best MR, Hits@3 and Hits@10. In terms of MRR and Hits@1, HIE is worse than that of several state-of-the-art models, while it obtains a 30% higher Hits@1 than R-GCN[31] which uses more additional neighbor information. * On YAGO3-10, in terms of MRR, Hits@3 and Hits@10, HIE achieves the best score. As for the MR and Hits@1, HIE is slightly worse than the state-of-the-art RotatE and InteractE in the corresponding aspects. Notably, HIE obtains a 7% higher MRR than that of neural network-based model M-DCN, 59% higher MRR than that of semantic matching-based model DistMult, and 9% higher than MRR that of translation based model RotatE. Strikingly, our proposed HIE achieves better performance scores on Hits@3 and Hits@10 than that of ConvE, RotatE and DistMult. From Table.5, we can conclude that: * From the comparison of the experimental results on WN18RR and FB15k-237, compared with neural network-based models and semantic matching based models, we can observe that HIE achieves significant improvements for all the evaluation metrics. Also, it outperforms most translation-based models and obtains the best scores on MRR and Hits@3. That's to say. Our proposed HIE can capture more deep information than the previous neural network-based models and semantic information than translation-based and semantic matching based models. * On WN18RR, HIE achieves promising scores for each evaluation criteria. Compared with the neural network-based models, HIE obtains a 118% higher MRR, 14% higher Hits@10 than ConvKB, respectively. Strikingly, it obtains a 123% higher MRR, 23.5% higher Hits@10 than KBGAN. Also, HIE outperforms than DistMult and obtains a 18.2% higher Hits@10. In terms of state-of-the-art translation-based algorithm RotatE, HIE is slightly worse in Hits@1. In contrast, HIE obtains better scores on MRR, MR, Hits@3 and Hits@10 than RotatE. * On FB15k-237, HIE is the best model in terms of MRR, Hits@1 and Hits@3. Besides, HIE obtains a 2.4% higher MRR, 6% higher Hits@1, and 1.9% higher Hits@3 Figure 5: The results of different hierarchical level size on WN18 and WN18RR than that of RotatE. As for MR and Hits@10, HIE is slightly worse than RotatE. In a short summary, with the advantages of structural and semantic information, HIE can outperform the state-of-the-art algorithms on most of metrics. ### Complex Relations Modeling In this part, the performance of HIE on several typical complex relations is reported. According to TransE[8], relations can be classified into 1-to-1, N-to-1, 1-to-N and N-to-N. For each relation, it needs to compute the average number of head entities that are connected with specific tail entity \(hco_{r}\) and the average number of tail entities that are connected with specific head entity \(tcs_{r}\). Also, the complex relation patterns can be calculated as follows: \[\left\{\begin{array}{l}\mathrm{hco}_{r}<\eta\ \mathrm{and\ tcs}_{r}<\eta\implies 1- \mathrm{to}-1\\ \mathrm{hco}_{r}<\eta\ \mathrm{and\ tcs}_{r}\geqslant\eta\implies\mathrm{N-to-1}\\ \mathrm{hco}_{r}\geqslant\eta\ \mathrm{and\ tcs}_{r}<\eta\implies 1- \mathrm{to}-\mathrm{N}\\ \mathrm{hco}_{r}\geqslant\eta\ \mathrm{and\ tcs}_{r}\geqslant\eta\implies \mathrm{N-to-N}\end{array}\right. \tag{18}\] where \(\eta=1.5\) is adopted the same as TransE[8]. 
Specifically, we extract four typical relationships in total corresponding to the specific relation patterns from WN18, i.e., \(similar\_to\)(1-to-1), \(member\_meronym\) (N-to-1), \(part\_of\) (1-to-N), \(also\_see\) (N-to-N). The results of MRR on the above four different relationships are displayed in Fig.6. Our proposed HIE is represented by orange, and the other baseline models are by light orange. Representative results can also be reported as follows: \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{WN18} & \multicolumn{4}{c}{YAGO3-10} \\ \cline{2-11} & \multirow{2}{*}{MRR\(\uparrow\)} & \multirow{2}{*}{MR\(\downarrow\)} & \multicolumn{2}{c}{Hits@\(\uparrow\)} & \multirow{2}{*}{MRR\(\uparrow\)} & \multirow{2}{*}{MR\(\downarrow\)} & \multirow{2}{*}{Hits@\(\uparrow\)} \\ \cline{3-3} \cline{8-12} & & & & & & & & & & & \\ \cline{3-3} \cline{8-12} & & & 1 & 3 & 10 & & & & & \\ \hline TransE\({}^{[*]}\) & 0.454 & - & 0.089 & 0.823 & 0.934 & 0.238 & - & 0.212 & 0.361 & 0.447 \\ TransR\({}^{[*]}\) & 0.605 & - & 0.335 & 0.876 & 0.940 & 0.256 & - & 0.223 & 0.356 & 0.478 \\ TransD(unif) & - & 229 & - & - & 0.925 & - & - & - & - & - \\ RotatE & 0.949 & 309 & 0.944 & 0.952 & 0.959 & 0.495 & **1767** & 0.402 & 0.550 & 0.670 \\ DistMulti\({}^{[*]}\) & 0.822 & 902 & 0.728 & 0.914 & 0.936 & 0.340 & 5926 & 0.237 & 0.379 & 0.540 \\ ComplEx\({}^{[*]}\) & 0.941 & - & 0.936 & 0.945 & 0.947 & 0.355 & 6351 & 0.258 & 0.399 & 0.547 \\ M-DCN & **0.950** & - & 0.946 & **0.954** & 0.958 & 0.505 & - & 0.423 & 0.587 & 0.682 \\ InteractE & - & - & - & - & - & 0.541 & 2375 & **0.462** & - & 0.687 \\ ConvE & 0.942 & 504 & **0.955** & 0.947 & 0.935 & 0.523 & 2792 & 0.448 & 0.564 & 0.658 \\ R-GCN & 0.819 & - & 0.697 & 0.929 & 0.964 & - & - & - & - & - \\ \hline HIE & 0.930 & **131** & 0.913 & **0.954** & **0.970** & **0.542** & 2042 & 0.452 & **0.593** & **0.695** \\ \hline \hline \end{tabular} \end{table} Table 4: Results of the link prediction on WN18 and YAGO3-10 datasets. Results [*] are taken from [10]. Results [\(\blacktriangle\)] are from [12]. Other results are taken from the corresponding original papers. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{WN18RR} & \multicolumn{4}{c}{FB15k-237} \\ \cline{3-12} & \multirow{2}{*}{MRR\(\uparrow\)} & \multirow{2}{*}{MR\(\downarrow\)} & \multirow{2}{*}{MRR\(\uparrow\)} & \multirow{2}{*}{MRR\(\downarrow\)} & \multirow{2}{*}{MR\(\uparrow\)} & \multirow{2}{*}{MR\(\downarrow\)} & \multirow{2}{*}{Hits@\(\uparrow\)} \\ \cline{3-3} \cline{8-12} & & & & & & & & & 1 & 3 & 10 \\ \hline TransE\({}^{[*]}\) & 0.226 & - & - & - & 0.501 & 0.294 & 357 & - & - & 0.465 \\ MuRP & 0.477 & - & **0.438** & 0.489 & 0.555 & 0.324 & - & 0.235 & 0.356 & 0.506 \\ RotatE & 0.476 & 3340 & 0.428 & 0.492 & 0.571 & 0.338 & **177** & 0.241 & 0.375 & **0.533** \\ DistMult & 0.430 & 5110 & 0.390 & 0.440 & 0.490 & 0.241 & 254 & 0.155 & 0.263 & 0.419 \\ TorusE & 0.452 & - & 0.422 & 0.464 & 0.512 & 0.305 & - & 0.219 & 0.337 & 0.485 \\ ConvKB\({}^{[*]}\) & 0.220 & **2741** & - & - & 0.508 & 0.302 & 196 & - & - & 0.483 \\ KBGAN & 0.215 & - & - & - & 0.469 & 0.277 & - & - & - & 0.458 \\ R-GCN & - & - & - & - & - & 0.249 & - & 0.151 & 0.264 & 0.417 \\ \hline HIE & **0.480** & 2821 & 0.430 & **0.499** & **0.580** & **0.346** & 215 & **0.255** & **0.380** & 0.523 \\ \hline \hline \end{tabular} \end{table} Table 5: Results of the link prediction on WN18 and YAGO3-10 datasets. 
Results [\(\blacktriangledown\)] are taken from [14]. Results [\(\blacktriangle\)] are from the author’s re-publication. Other results are taken from the corresponding original papers. * As can be seen from Fig.6, HIE can model the four kinds of relations on WN18. Compared with other models, HIE outperforms the compared algorithms with four relation categories. * In addition, HIE obtains a 0.758 higher MRR than TransE on \(similar\_to\), a 19.3% higher MRR than DistMult on \(member\_meronym\), a 7.6% higher MRR than ConvE on \(part\_of\) and a 0.05 higher MRR than ComplEx on \(also\_see\). * Translation-based and semantic matching-based models can not achieve promising results due to the lack of semantic information. However, neural network-based methods can capture more features of entities and relations via the convolution process. Thus, it achieves better performance than other baselines except for HIE. Take \(also\_see\) as an example. HARN obtains a 0.383 higher MRR and 0.07 higher MRR than TransE and DistMult, respectively. However, with the advantage of deep level information and semantic information, HIE outperforms HARN and other neural network-based models as well. ### Parameter Sensitivity Analysis #### 4.7.1 study of \(\lambda\) The coefficient \(\lambda_{1}\) and \(\lambda_{2}\) are utilized to balance the shallow and deep level information and meet the condition \(\lambda_{1}+\lambda_{2}=1\). Thus, we only investigated the effect of \(\lambda_{1}\) for our proposed HIE on MRR and Hits@10 on WN18RR. The results are illustrated in Fig.7. The orange dotted line represents the MRR value of TorusE, and the green dotted line is the Hits@10 value of RotatE. We can observe that: * When the \(\lambda_{1}=0.5\), HIE achieves the best performance on MRR and Hits@10, and the values are 0.472 and 0.579, respectively. * Compared with TorusE, HIE obtains slightly higher MRR and Hits@10 when the \(\lambda_{1}=0.2,0.4,0.6\) and 0.8. In terms of \(\lambda_{1}=0.5\), HIE outperforms TorusE by 5.5%. In addition, HIE performs as good as RotatE when the \(\lambda_{1}=0.2,0.4\) and 0.6, and slightly worse than RotatE. Also, HIE obtains a 1.4% higher Hits@10 than that of RotatE. * The change of \(\lambda_{1}\) performs a slight effect on MRR and Hits@10 of HIE. The reason is that HIE can capture more information from deep and shallow levels simultaneously and ensure the stability and accuracy. #### 4.7.2 study of batch size To explore the influence of batch size, we set different sizes as 64, 128, 256, 512 and 1024 with \(\lambda_{1}=0.5\) on WN18RR. The results are reported in Fig.8. The red dotted line represents the best score on MRR, Hits@3 and Hits@10. We can see that the outstanding results can be achieved with batch size 512. Strikingly, as the batch size increases, HIE can obtain more information to improve itself. However, when the batch size is 1024, the MRR, Hits@3 and Hits@10 of HIE are worse than that of batch size 512, which is mainly because the proposed HIE capture more and more information, including useful information as well as invalid information, even harmful information, to guide the optimization of the model. Simultaneously, larger information may limit the optimization capacity of HIE. Thus, batch size 512 is utilized in HIE to project entities and relations into low-dimensional space under the premise of measuring resources and accuracy. 
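Since MR, MRR and Hits@k are used for every comparison in this section, a short self-contained sketch of how these ranking metrics are typically computed from the ranks assigned to the true entities may be helpful; this is a generic illustration rather than the evaluation code used for HIE, and the example ranks are invented.

```python
def ranking_metrics(ranks, ks=(1, 3, 10)):
    """Compute MR, MRR and Hits@k from the ranks of the correct entities.

    ranks: list of integer ranks (1 means the correct entity was ranked first).
    Lower MR and higher MRR / Hits@k indicate better link prediction.
    """
    n = len(ranks)
    mr = sum(ranks) / n
    mrr = sum(1.0 / r for r in ranks) / n
    hits = {k: sum(r <= k for r in ranks) / n for k in ks}
    return mr, mrr, hits

# Example with made-up ranks:
mr, mrr, hits = ranking_metrics([1, 2, 5, 1, 40])
print(f"MR={mr:.1f}  MRR={mrr:.3f}  Hits@10={hits[10]:.2f}")
```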
Figure 8: The results of different batch size on MRR for WN18RR Figure 6: The results of part complex relations on WN18 Figure 7: The results of different \(\lambda_{1}\) on MRR and Hits@10 for WN18RR ### Ablation study In this part, several ablation studies on WN18RR for HIE are carried out empirically. #### 4.8.1 The impact of distance and semantic measurement Fig.9 shows the ablation study results on WN18RR of the impact of distance and semantic measurement when we only omit the distance measurement (-no_distance) and omit the semantic measurement (-no_semantic) from HIE. As can be seen from Fig.9, three results can be reported as follows: * Compared with the ablated model, the results demonstrate HIE achieves the best performance on all the evaluation metrics. Specifically, HIE obtains a 0.373 higher Hits@1 and a 0.21 higher Hits@10 than HIE-no_distance and HIE-no_semantic, respectively. * In terms of MRR and Hits@1, distance measurement significantly impacts the performance of HIE. Also, HIE-no_distance obtains a 16.2% lower Hits@3 and a 2% lower Hits@10 than HIE-no_semantic. * From the comparison of the empirical results, we can see that HIE-no_semantic or HIE-no_distance could achieve promising scores on Hits@3 and Hits@10, which is mainly because of the use of deep level information. However, due to the lack of too much position information, HIE-no_distance can not be well constrained in distance space. Thus, HIE-no_distance has poor performance on Hits@1 and MRR. In comparison, HIE-no_semantic is slightly worse than HIE. #### 4.8.2 The cross effects of both improvements Fig.10 shows the ablation study results on WN18RR of the cross effects of both improvements, i.e., distance measurement and its deep level information, and semantic measurement and its deep level information. As the results illustrated in Fig.10, we can also see that: * Distance measurement and deep level information play a crucial role in link prediction task. In contrast, semantic measurement can further mine the semantic information and improve the ability of the model. * In terms of MRR, HIE achieves 9% higher than HIE-no_distance_deep, while only 5.6% higher than HIE-no_distance. Also, without deep level information, HIE-no_distance is even worse. * Moreover, HIE obtains a 5.8% higher Hits@10 than HIE-no_semantic_deep, while only a 3.7% higher than that of HIE-no_semantic, which can also demonstrate that deep level information could help improve the ability of HIE at a higher level. ## 5 Conclusion In this paper, we propose a novel KGE model to solve the link prediction task, namely, HIE. Each segmented entry of triplet (h, r, t) is projected into the distance and semantic space via the corresponding extractor. In addition, to capture more flexible structural information, a more general transformation operation matrix is utilized to obtain tail entity instead of translation-based operation. More importantly, we introduce hierarchical level information to learn knowledge representations from hierarchical spaces jointly. Our experiments on WN18, WN18RR, FB15k-237 and YAGO3-10 for the link prediction task show that HIE outperforms several state-of-the-art methods. Besides, the results of the complex relation modeling illustrate that HIE can better model the four typical relationships than existing neural network-based models. 
Specifically, the ablation studies further show that hierarchical level information significantly improves the performance of HIE, and that both the distance measurement and the semantic measurement affect the expressive ability of the model. In future work, we will focus on multi-task learning methods that capture more interactive information between entities and relations, so as to further improve the efficiency and performance of HIE.
Figure 10: The ablation results of the cross effects of both improvements
Figure 9: The ablation results of distance and semantic measurement
2307.08495
$k$-Dependent Dark Matter
With the emersion of precise cosmology and the emergence of cosmic tensions, we are faced with the question of whether the simple model of cold dark matter needs to be extended and whether doing so can alleviate the tensions and improve our understanding of the properties of dark matter. In this study, we investigate one of the generalized models of dark matter so that the behavior of this dark matter changes according to the scale of $k$. In large scales (small $k$'s), the dark matter is cold, while it becomes warm for small scales (large $k$'s). This behavior is modeled phenomenologically for two different scenarios. We show that the $S_8$ tension can be alleviated, but the $H_0$ tension becomes milder while not too much.
Parisa Arabameri, Zahra Davari, Nima Khosravi
2023-07-17T13:57:20Z
http://arxiv.org/abs/2307.08495v1
# \(k-\)Dependent Dark Matter ###### Abstract With the emersion of precise cosmology and the emergence of cosmic tensions, we are faced with the question of whether the simple model of cold dark matter needs to be extended and whether doing so can alleviate the tensions and improve our understanding of the properties of dark matter. In this study, we investigate one of the generalized models of dark matter so that the behavior of this dark matter changes according to the scale of \(k\). In large scales (small \(k\)'s), the dark matter is cold, while it becomes warm for small scales (large \(k\)'s). This behavior is modeled phenomenologically for two different scenarios. We show that the \(S_{8}\) tension can be alleviated, but the \(H_{0}\) tension becomes milder while not too much. keywords: Cold Dark Matter, Warm Dark Matter ## 1 Introduction The cold dark matter (CDM) paradigm is an important feature in particle physics and cosmology, assuming cold and collisionless dark matter particles interact only gravitationally. This component is one of the main bases in the standard \(\Lambda\)CDM model and it is responsible for about 26% of the energy density of Universe (Scott, 2020; Scolnic et al., 2018). A wide range of cosmological observations from many different epochs and at large and small scales, including CMB missions, BAO data, observations of galaxy clusters, and weak lensing experiments, supported this paradigm. However, the physical nature of DM particles remains unclear and a mystery after decades of research. On the other hand, the CDM paradigm is remarkably successful in many aspects, especially in explaining the observed properties of large-scale structures (LSS) in the Universe (in the range \(\sim 1\) Gpc down to \(\sim 10\) Mpc); however, it conflicts with observations on galactic and sub-galactic scales (\(\leq\)1 Mpc). For instance, we can point to: * The'missing satellites problem,' which refers to the fact that there is an overestimation of dwarf galaxies by the CDM model than observed in the Universe (Rubin & Ford, 1970; Rubin et al., 1980; Moore et al., 1999). * The 'cusp-core problem,' which refers to the fact that the CDM model predicts that dark matter halos should have a cuspy density profile at their centers, while observations suggest that they have a more constant density profile (Gentile et al., 2004). * The 'too big to fail problem,' which refers to the lack of observation of the most massive halos, which are predicted to be luminous (Purcell & Zentner, 2012). The small scale crisis motivated the study of scenarios that predict damped matter fluctuations below a characteristic free-streaming scale through either modification of the primordial power spectrum or non-cold dark matter models, which modify (suppress) the power spectrum at late times. Furthermore, the recent high-precision cosmological data has shown a statistically significant discrepancy in the estimation of the current values of the Hubble parameter (\(H_{0}\)) and the fluctuations amplitude of density perturbations at 8 h\({}^{-1}\)Mpc scale (\(\sigma_{8}\)) between early-time and late-time observations, which poses another challenge to the standard \(\Lambda\)CDM model. Early universe measurements like CMB Planck collaboration (Aghanim et al., 2020) estimate \(H_{0}\sim(67.0-68.5)\) km/s/Mpc, while late-time distance ladder measurements like SH0ES and H0LiCOW collaborations report \(H_{0}=(74.03\pm 1.42)\)(Riess et al., 2019). 
The mentioned problems, together with the lack of understanding of the nature, mass, and dynamics of dark matter particles, have sparked several extensions and alternatives to standard dark matter models of particle physics, which are theoretically well-motivated and inspire new search strategies. There are many approaches in order to investigate dark matter, such as warm dark matter (WDM), cannibal Dark Matter (Buen-Abad et al., 2018), decaying dark matter (Davari & Khosravi, 2022), dynamical dark matter, fuzzy dark matter and interacting dark matter (Loeb & Weiner, 2011; Archidiacono et al., 2019). If dark matter particles decouple from the primordial plasma when still relativistic and soon become non-relativistic, the particles are called "warm dark matter". These WDM particles would have a smaller free-streaming length than cold dark matter particles, preventing them from clustering on small scales and potentially solving the missing satellite prob lem. Furthermore, the WDM particles significantly affect the clustering of matter on large \(k\) limit and could flat the inner regions of most galaxies more than the CDM model, reconciling these values with observation and alleviating the core-cusp problem. At the large \(k\) limit, DM behaves as WDM as it slightly reduces the DM preferred mass range to a size that includes a moderate initial velocity dispersion and free streaming, sufficient to erase some small scale structures. The suppression in WDM models has a variety of observable implications: abundances of galaxies at high redshift (Pacucci et al., 2013; Menci et al., 2016), high-redshift gamma-ray bursts (GRBs) (de Souza et al., 2013), strong gravitational lensing (Gilman et al., 2020; Hsueh et al., 2020). One extension of WDM is to assume that DM comes in two components, a cold one and a warm one, which can be produced via two co-existing mechanisms. These models are called mixed dark matter (MDM) (Maccio et al., 2013; Diamanti et al., 2017; Parmbelli et al., 2021). In this paper, we decided to investigate the case that dark matter consists of only one component, but its behavior depends on \(k\)-scale such that in small \(k\) it behaves like cold DM, and in large \(k\) it shows the properties of warm DM. This scale dependent transition in the behavior can have some motivations in the physics of critical phenomena. The \(k\)-Dependent dark energy has been studied in Farhang and Khosravi (2023) based on a phenomenological gravitational phase transition model (Khosravi and Farhang, 2022; Farhang and Khosravi, 2021). The outline of this paper is as follows: in section 2, we derive Boltzmann equations governing the evolution at the perturbation level. Then, we implement the related equations in the publicly available numerical code CLASS1(the Cosmic Linear Anisotropy Solving System) (Lesgourgues and Tram, 2011) and using the code MONTEPTTION-v32(Audren et al., 2013; Brinckmann and Lesgourgues, 2019) to perform a Monte Carlo Markov chain (MCMC) analysis with a Metropolis-Hasting algorithm against the high- CMB TT, TE, EE +low- TT, EE+lensing data from Planck 2018 (Aghanim et al., 2020) in combination with other probes such as the Baryon acoustic oscillations, BAO ( BOSS DR12 (Alam et al., 2017), eBOSS Ly-\(\alpha\) combined correlations). 
Footnote 1: [https://github.com/lesgourg/class_public](https://github.com/lesgourg/class_public) Footnote 2: [https://github.com/baudren/montepython_public](https://github.com/baudren/montepython_public) ## 2 Phenomenology of \(k\)-dependent DM model in perturbation level In the framework of general relativity, let us consider the flat, homogeneous, and isotropic universe with energy density \(\rho(\tau)\) and pressure \(P(\tau)\) that is described by the FLRW metric. Using the Einstein equations, we can obtain the following evolution equations for the expansion factor \(a(\tau)\). \[\left(\frac{\dot{a}}{a}\right)^{2}=\frac{8\pi G}{3}a^{2}\rho, \tag{1}\] \[\frac{d}{d\tau}\left(\frac{\dot{a}}{a}\right)=-\frac{4\pi G}{3}a^ {2}(\rho+3P), \tag{2}\] where the dots denote derivatives with respect to conformal time, \(\tau\). The most convenient way to solve the linearized Einstein equations is in the two gauges in the Fourier space \(k\). In the synchronous gauge, the scalar perturbations are characterized by \(h(\vec{k},\tau)\) and \(\eta(\vec{k},\tau)\). The scalar mode of \(h_{ij}\) is given as a Fourier integral \[h_{ij}(\vec{x},\tau)=\int d^{3}k\left(h(\vec{k},\tau)\hat{k}_{i}\hat{k}_{j}+6 \eta(\vec{k},\tau)(\hat{k}_{i}\hat{k}_{j}-\frac{1}{3}\delta_{ij})\right)e^{i \vec{k}.\vec{x}}, \tag{3}\] where, h is used to denote the trace of \(h_{ij}\) in both the real space and the Fourier space (Aoyama et al., 2014). The perturbations are characterized by two scalar potentials \(\psi(\tau,\vec{x})\) and \(\phi(\tau,\vec{x})\) which appear in the line element as \[ds^{2}=a^{2}(\tau)\bigg{(}-(1+2\psi)d\tau^{2}+(1-2\phi)d\vec{x}^{2}\bigg{)}, \tag{4}\] and for a perfect fluid of energy density, \(\rho\), and pressure, \(P\), the energy-momentum tensor has the form \[T^{\mu}_{\nu}=Pg^{\mu}_{\nu}+(\rho+P)u^{\mu}u_{\nu}, \tag{5}\] where \(u^{\mu}\) is the four-velocity of the fluid. The perturbed part of energy-momentum conservation equations in \(k\)-space implies the synchronous gauge as \[\dot{\delta}=-3\mathcal{H}(c_{s}^{2}-w)\delta-(1+w)(\theta+\frac{\dot{h}}{2}), \tag{6}\] \[\dot{\theta}=-\mathcal{H}(1-3c_{g}^{2})\theta+\frac{c_{s}^{2}}{1+w}k^{2}\delta- k^{2}\sigma, \tag{7}\] and for the conformal Newtonian gauge as \[\dot{\delta}=-3\mathcal{H}(c_{s}^{2}-w)\delta-(1+w)(\theta-3\dot{ \phi}),\] \[\dot{\theta}=-\mathcal{H}(1-3c_{g}^{2})\theta+\frac{c_{s}^{2}}{1+w }k^{2}\delta-k^{2}\sigma+k^{2}\psi. \tag{8}\] The evolution equation for the shear can be obtained as \[\dot{\sigma}=-3[\frac{1}{\tau}+\mathcal{H}(\frac{2}{3}-c_{g}^{2}-\frac{1}{3} \frac{\mathcal{P}}{p})]\sigma+\frac{4}{3}\frac{c_{\rm vis}^{2}}{1+w}(2\theta+ \dot{h}). \tag{9}\] \(c_{s}\) and \(c_{g}\) in the above equations are the effective sound speed and the adiabatic sound speed, respectively. In equation 9, \(c_{\rm vis}^{2}\) is a new parameter named viscosity speed, and in implementation of CLASS, it is assumed as \(c_{\rm vis}^{2}=3wc_{g}^{2}\)(Lesgourgues and Tram, 2011). The adiabatic sound speed can be expressed as \[c_{g}^{2}=\frac{\dot{p}}{\dot{\rho}}=w-\frac{\dot{w}}{3\mathcal{H}(1+w)}, \tag{10}\] or in another form \(c_{g}^{2}=\frac{\rho}{\rho}=-w\frac{\dot{p}}{p}(\frac{\dot{a}}{a})^{-1}\frac{1}{ 3(1+w)}\), that it is stated in Tram et al. (2019), \(\frac{\dot{p}}{p}=(\frac{\dot{a}}{a})(5-\frac{\mathcal{P}}{p})\). 
So, the adiabatic sound speed can be rewrote as \[c_{g}^{2}=\frac{w}{3(1+w)}(5-\frac{\mathcal{P}}{p}), \tag{11}\] here \(\mathcal{P}\) is the pseudo-pressure that for any pressureless species, \(\mathcal{P}\simeq p\simeq 0\) and for relativistic species we have \(\mathcal{P}\simeq p\) since in a higher moment pressure \(\frac{p}{p}\simeq 1\). Obtaining an analytical expression for \(c_{s}^{2}\) is more complicated since there is no dynamic equation for pressure perturbation, so in Abellan et al. (2021) and Lesgourgues and Tram (2011), it is supposed that \(c_{s}^{2}\) is scale-independent and approximately equal to \(c_{g}^{2}\). Nevertheless, the full Boltzmann hierarchy calculations show that \(c_{s}^{2}\) represents a specific \(k\)-dependence and cannot be obtained with a background quantity such as \(c_{g}^{2}\) and it increases slightly on the scales \(k\). We follow the prescription in Abellan et al. (2021) for the synchronous sound speed as \[c_{s}^{2}(k)=c_{g}^{2}\left[1+\frac{1}{5}\sqrt{\frac{k}{k_{fs}}}\right], \tag{12}\] where \(k_{fs}=\sqrt{\frac{3}{2}\mathcal{H}(a)/c_{g}(a)}\) is the free-streaming length of the WDM particles. Equations 6-9 are valid for a single uncoupled fluid or for the net (mass-averaged) \(\delta\) and \(\theta\) for all fluids. They need to be modified for individual components if the components interact with each other. The CDM particles can be used to define the synchronous coordinates and therefore have zero peculiar velocities in this gauge. Setting \(\theta=\sigma=0\) and \(w=\dot{w}=0\) in equation 6 for synchronous gauge lead to \[\dot{\delta}_{\rm CDM}=-\frac{1}{2}\dot{h}. \tag{13}\] However, the CDM fluid velocity in the conformal Newtonian gauge is not zero in general. In \(k\)-space, equation 8 gives \[\dot{\delta}_{\rm CDM}=-\theta_{\rm CDM}+3\dot{\phi},\qquad\dot{\theta}_{\rm CDM }=-\frac{\dot{a}}{a}\theta_{\rm CDM}+k^{2}\psi. \tag{14}\] As we mentioned, in this study, we intend to consider dark matter such that its behavior changes in terms of scale, so it behaves as relativistic such as warm dark matter particles in large \(k\) scales, and as non- relativistic such as cold dark matter in small \(k\) scales. Therefore, we introduce a step function, \(\mathcal{S}(k)\), for switching between these two boundary conditions. \(\mathcal{S}(k)\) could be any kind of step (switching) function; for example we consider it as \[\mathcal{S}(k)=\frac{1+\tanh[\alpha(k-k_{0})]}{2}. \tag{15}\] \(\alpha\) and \(k_{0}\) are free parameters that \(\alpha\) control the smoothness of the transition between cold and warm dark matter. We rewrite 6 by using \(\mathcal{S}(k)\) as \[\dot{\delta}=\frac{\dot{h}}{2}+\mathcal{S}(k)[-3\mathcal{H}(c_{s}^{2}-w) \delta-(1+w)\theta], \tag{16}\] \[\dot{\theta}=\mathcal{S}(k)[-\mathcal{H}(1-3c_{g}^{2})\theta+\frac{c_{s}^{2} }{1+w}k^{2}\delta-k^{2}\sigma], \tag{17}\] \[\dot{\sigma}=\mathcal{S}(k)\biggl{[}-3(\frac{1}{\tau}+\mathcal{H}(\frac{2}{3} -c_{g}^{2}-\frac{1}{3}\frac{\mathcal{P}}{p}))\sigma+\frac{4}{3}\frac{c_{\rm vis }^{2}}{1+w}(2\theta+\dot{h})\biggr{]}. \tag{18}\] It is obvious that if \(\mathcal{S}(k)\) vanishes, the above equations reduce to the CDM. This case happens for \(k<k_{0}\) and is more precise for larger \(\alpha\)'s. We implement the above equations in the public Boltzmann solver CLASS. Since we expect this model to behave similarly to CDM in the cosmological background, we only change the perturbation equations in module perturbation.c. 
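The scale-dependent ingredients introduced above are simple enough to prototype outside of CLASS. The sketch below is a stand-alone Python illustration (not the authors' modification of perturbation.c) of the switching function of Eq. (15) together with the adiabatic and effective sound speeds of Eqs. (11)-(12); the best-fit values of \(\alpha\) and \(k_{0}\) are taken from the \(k\)-DM(1), Planck+Other column of Table 1, the conformal Hubble rate is left as a user-supplied input, and the reading of \(k_{fs}=\sqrt{3/2}\,\mathcal{H}/c_{g}\) is an assumption about the intended grouping of the square root.

```python
import numpy as np

def switch_S(k, alpha, k0):
    """Step function of Eq. (15): ~0 for k << k0 (CDM-like), ~1 for k >> k0 (WDM-like)."""
    return 0.5 * (1.0 + np.tanh(alpha * (k - k0)))

def c_g2(w, pseudo_p_over_p):
    """Adiabatic sound speed squared, Eq. (11)."""
    return w / (3.0 * (1.0 + w)) * (5.0 - pseudo_p_over_p)

def c_s2(k, w, pseudo_p_over_p, calH):
    """Effective synchronous-gauge sound speed squared, Eq. (12).

    calH: conformal Hubble rate at the scale factor of interest.
    k_fs: free-streaming scale, assumed to read sqrt(3/2) * calH / c_g.
    """
    cg2 = c_g2(w, pseudo_p_over_p)
    k_fs = np.sqrt(1.5) * calH / np.sqrt(cg2)
    return cg2 * (1.0 + 0.2 * np.sqrt(k / k_fs))

# Best-fit switch parameters from Table 1 (k-DM(1), Planck+Other):
alpha, k0 = 10**7.72, 0.988
for k in (0.1, 0.5, 1.0, 2.0, 5.0):        # k in h/Mpc
    print(f"k = {k:4.1f}   S(k) = {switch_S(k, alpha, k0):.3f}")
```

With \(\alpha\) of order \(10^{7}\) the hyperbolic tangent is effectively a sharp step at \(k_{0}\), so the fluid equations reduce to the CDM form for \(k<k_{0}\) and to the warm form for \(k>k_{0}\).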
We analyzed this model in two cases: i) the sound speed behaves independently of \(k\) as a constant parameter (\(k\)-DM(1)) and ii) the case where it changes depending on \(k\) given to equation 12 (\(k\)-DM(2)). ## 3 \(k\)-Dependent DM model Verse Data In this section, we present constraints on the \(k\)-Dependent dark matter model we have introduced. For MCMC analysis, we use the Metropolis-Hastings algorithm of the cosmological sampling package MONTEPTHON-v3, connected to an altered version of the Boltzmann Solver CLASS. We use the following dataset combination to perform statistical inference: * CMB: We use the CMB temperature and polarization auto- and cross-correlation measurements of the most recent Planck 2018 legacy release, including the full temperature power spectrum at multipoles \(2\leq 1\leq 2500\) and the polarization power spectra in the range \(2\leq 1\leq 29\) (lowP). We also include information on the gravitational lensing power spectrum estimated from the CMB trispectrum analysis. (Aghanim et al., 2020). * BAO: We use the BAO measurements from the Baryon Oscillation Spectroscopic Survey Data Release 12 (BOSS DR12) (Alam et al., 2017),SS DR14-Ly-\(\alpha\) combined correlations (de Sainte Agathe et al., 2019), Lyman-\(\alpha\) forest autocorrelation de Sainte Agathe et al. (2019), and the cross correlation of Lyman-\(\alpha\) and QSO (Blomqvist et al., 2019). * LSS: We use three different sets of LSS data in order to check whether \(k\)- Dependent dark matter model leads to a suppression in the matter power spectrum relative to the CDM: **1-** KiDS + Viking 450 (KV450) matter power spectrum shape data; this combined analysis of data from the KiloDegree Survey (KiDS) and the VISTA Kilo-Degree Infrared Galaxy Survey (VIKING) includes photometric redshift measurements with cosmic shear/weak-lensing observations to measure the matter power spectrum over a wide range of \(k\)-scales at redshifts between 0.1 and 1.2 (Hildebrandt et al., 2020). **2-** Planck SZ (2013): Another independent LSS dataset is the Planck SZ which studies the properties of galaxy clusters by measuring the Sunyaev-Zeldovich effect. But we should note that the measurements of galaxy distribution from the SZ effect depend on a mass bias factor \((1-b)\) that relates the observed SZ signal to the true mass of galaxy clusters. In Planck SZ (2013), a numerical simulation of the \(S_{8}\) measurement is reported by fixing the mass bias to its central value \((1-b)=0.8\). Later, the Planck SZ (2015) report allowed \((1-b)\) to vary with a Gaussian prior centered at 0.79. The central value of the resulting \(S_{8}^{\rm SZ}\) becomes smaller but has a much larger uncertainty, \(S_{8}^{\rm SZ}=0.744\pm 0.034\), and less tension to CMB measurements (Ade et al., 2016). For our analysis, we chose this data set since the central value \(\sigma_{8}\) of the SZ (2013) analysis is consistent with many low-redshift measurements (Zu et al., 2023). **3-**WiggleZ \(P(k)\) data: Since dark energy has an effect on the expansion history of the Universe and on the growth of cosmological structures, we also use WiggleZ data in this study. The WiggleZ Dark Energy Survey is a survey to measure the large scale structure of the Universe by mapping the distance-redshift relation with baryon acoustic oscillations (Kazin et al., 2014). 
We employ the \(\chi^{2}\) statistics to constrain our theoretical model as: \[\chi^{2}=\frac{(\mathcal{P}_{\rm obs}-\mathcal{P}_{\rm th})^{2}}{\sigma_{\cal P }^{2}}, \tag{19}\] here \({\cal P}_{\rm obs}\), \({\cal P}_{\rm th}\) and \(\sigma_{\cal P}^{2}\) indicate the observed values, the predicted values and the standard deviation, respectively. Note that in addition to the six free parameters of the standard model, i.e, \((\Omega_{b},\Omega_{DM},10\theta_{\rm MC},\ln 10^{10}A_{s},n_{s},\tau_{\rm riolo})\), the \(k\)-Dependent dark matter model introduced in the previous section includes for the first case: \((\alpha,k_{0},w,c_{g}^{2})\) and the second case: \((\alpha,k_{0},w)\). To span the \(\alpha\) parameter's space, we work with \(\log_{10}\alpha\) instead of \(\alpha\). The flat priors we assumed for the parameters are given by \(k_{0}\in[0,10]\), \(\alpha\in[0,10^{10}]\), \(w\in[0,1]\), and \(c_{g}^{2}\in[0,1]\). The convergence of chains for each parameter is measured by the Gelman-Rubin criterion, and one can obtain acceptable \(R-1\) values (i.e., below 0.01 for every parameter) with an iterative strategy (Gelman & Rubin, 1992) and the average acceptance rate (acc) is around 0.2. \begin{table} \begin{tabular}{c c c c c c} \hline Model & Parameter & \multicolumn{2}{c}{\(Planck\)} & \multicolumn{2}{c}{\(Planck+Other\)} \\ \cline{3-6} & & best-fit & \(mean\pm\sigma\) & best-fit & \(mean\pm\sigma\) \\ \hline \multirow{8}{*}{\(k\)-DM(1)} & \(\mathbf{\Omega_{m}}\) & 0.3113 & \(0.3148\pm 0.0067\) & 0.2903 & \(0.2888\pm 0.0044\) \\ & \(\mathbf{log_{10}\alpha}\) & 7.59 & \(7.88^{+0.15}_{-0.19}\) & 7.72 & \(7.14^{+0.61}_{-0.34}\) \\ & \(\mathbf{k_{0}}\) & 1.68 & \(1.70^{+0.19}_{-0.12}\) & 0.988 & \(1.003^{+0.049}_{-0.065}\) \\ & \(\mathbf{c_{g}^{2}}\) & 0.052 & \(<0.014\) & \(0.0116(10^{-4})\) & \(<0.0373(10^{-4})\) \\ & \(\mathbf{w}\) & 0.011 & \(<0.009\) & \(0.181(10^{-4})\) & \(0.126\pm 0.049(10^{-4})\) \\ & \(\mathbf{H_{0}}\) & 67.67 & \(67.40\pm 0.49\) & 69.25 & \(69.36\pm 0.36\) \\ & \(\mathbf{S_{8}}\) & 0.8280 & \(0.829\pm 0.012\) & 0.7651 & \(0.7649^{+0.0067}_{-0.0083}\) \\ \hline \multirow{8}{*}{\(k\)-DM(2)} & \(\mathbf{\Omega_{m}}\) & 0.3107 & \(0.3129\pm 0.0073\) & 0.2954 & \(0.2969^{+0.0054}_{-0.0066}\) \\ & \(\mathbf{log_{10}\alpha}\) & 7.65 & \(7.7^{+1.4}_{-1.1}\) & 6.54 & \(6.49^{+0.77}_{-0.63}\) \\ \cline{1-1} & \(\mathbf{k_{0}}\) & 2.51 & \(12^{+15}_{-10}\) & 1.66 & \(1.93^{+0.11}_{-0.30}\) \\ \cline{1-1} & \(\mathbf{w}\) & 0.570 & \(<0.532\) & \(1.18(10^{-7})\) & \(<5.48(10^{-5})\) \\ \cline{1-1} & \(\mathbf{H_{0}}\) & 67.66 & \(67.54\pm 0.53\) & 68.84 & \(68.67^{+0.55}_{-0.48}\) \\ \cline{1-1} & \(\mathbf{S_{8}}\) & 0.822 & \(0.826\pm 0.013\) & 0.7795 & \(0.7875\pm 0.0099\) \\ \hline \multirow{8}{*}{CDM} & \(\mathbf{\Omega_{m}}\) & 0.313328 & \(0.3142\pm 0.0065\) & 0.2929 & \(0.2933\pm 0.0045\) \\ & \(\mathbf{H_{0}}\) & 67.52 & \(67.46\pm 0.47\) & 69.47 & \(69.02\pm 0.37\) \\ \cline{1-1} & \(\mathbf{S_{8}}\) & 0.828789 & \(0.831\pm 0.0012\) & 0.7872 & \(0.7833\pm 0.0074\) \\ \hline \end{tabular} \end{table} Table 1: The best and mean values and 68% confidence limit (CL) constraints for the free parameters of CDM and two \(k\)-DM models. They are given using Planck and Planck+Other datasets described in the paper. Figure 1: 1D likelihoods and 2D contours in 68% and 95% CL marginalized joint regions for chosen free parameters while they are constrained by using \(Planck+Other\) datasets. 
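For reference, the Gelman-Rubin convergence criterion quoted above (\(R-1\) below 0.01 for every parameter) can be evaluated from a set of chains as in the following minimal sketch; this is the textbook form of the diagnostic and not necessarily the exact implementation used inside MontePython, and the synthetic chains are for illustration only.

```python
import numpy as np

def gelman_rubin_minus_one(chains):
    """Return R - 1 for one parameter, given `chains` of shape (n_chains, n_samples).

    Compares the between-chain variance B with the mean within-chain variance W;
    values of R - 1 close to zero indicate converged chains.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    B = n * chain_means.var(ddof=1)           # between-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled posterior variance estimate
    return np.sqrt(var_hat / W) - 1.0

# Example with synthetic, well-mixed chains:
rng = np.random.default_rng(0)
chains = rng.normal(0.0, 1.0, size=(4, 10_000))
print(gelman_rubin_minus_one(chains))         # close to 0 for converged chains
```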
It seems the \(k\)-DM models predict lower \(S_{8}\) to alleviate this tension while the \(H_{0}\) value is not affected too much. In order to check the cosmic tensions in these models, we added data step by step in two MCMC scans as: Planck and then Planck+Other. This can provide us with further intuition as a starting point, given that Planck's data has provided the most precise measurements of the early universe. In the Table 1, we report the best and the mean values and 68% CL intervals for the main parameters, including the total matter density parameter (\(\Omega_{m}=\Omega_{B}+\Omega_{DM}\)), the present-day expansion rate of the Universe or the Hubble constant, \(H_{0}\), and \(S_{8}=\sigma_{8}\sqrt{\Omega_{m}/0.3}\) in different scenarios for two MCMC analyzes. We also show posterior distributions (\(1\sigma\) and \(2\sigma\) intervals) as dark and light-shaded contours for MCMC analysis, respectively, in the plots of the Figure 1. Some points in these plots need to be stressed. First, it is clear that by considering all different data sets, the parameters are bound more tightly than the analysis with Planck data3. Second, we can see that the decrease of \(H_{0}\) values are associated with the increase of \(\Omega_{m}\) values and vice versa in both \(k\)-Dependent and cold dark matter scenarios. As we see in Table 1, assuming \(k\) dependence of dark matter behavior for the Planck+Other analysis seems to improve the \(S_{8}\) tension for the \(k\)-Dependent dark matter scenario. However, we do not see any significant improvement in addressing the \(H_{0}\) tension. Note that the results show a small deviation from CDM due to non-zero values for \(w\) and \(c_{g}^{2}\) when we have Planck+Other datasets. Their values are at order \(\mathcal{O}(10^{-5})\), which are in agreement with generalized dark matter models (Ilic et al., 2021; Kopp et al., 2016). Footnote 3: The only Planck-constrained parameters are not shown in the figures, but it has checked that Planck and Planck+Other are consistent. This means the contours for the latter are inside the only Planck contours. Next, to check whether the fit is good and also to choose the best and most compatible model with the observational data, we employ the simplest method that is usually used in cosmology, which is called the least squares method, \(\chi^{2}_{\rm tot}\). In this case, the model with smaller \(\chi^{2}_{\rm tot}\) is taken to be a better fit to the data (Davari and Rahvar, 2021). Comparing \(k\)-Dependent model to the CDM scenario, we note that \(k\)-Dependent DM model does better than the CDM model. However, one can have the impression that the model with the lowest \(\chi^{2}_{\rm tot}\) is not necessarily the best because adding more flexibility with extra parameters will normally lead to a lower \(\chi^{2}_{\rm tot}\). In this work, the \(k\)-Dependent model has three more parameters than the CDM scenario. In order to deal with model selection, a standard approach is to compute the Akaike Information Criterion (AIC). It is defined as \[AIC=\chi^{2}_{\rm min}+2M+\frac{2M(M+1)}{N-M+1}, \tag{20}\] where \(M\) is the number of free parameters in the model and \(N\) is the number of data points; thus, \(\Delta AIC=\Delta\chi^{2}_{\rm min}+2\Delta M\). We neglect the third term in the Equation 20 for large sample sizes, \(N\geq M\). We report the result of MCMC analysis for the best-fit \(\chi^{2}_{min}\) for observational Planck and total data sets and for both models in Table 2. 
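As a cross-check, the \(\Delta AIC\) entries of Table 2 can be reproduced directly from the quoted best-fit \(\chi^{2}_{\rm min}\) values. The sketch below uses the large-sample form of Eq. (20) (the finite-sample term dropped, as in the text) and takes the number of extra free parameters relative to CDM from the parameter lists quoted earlier: four for \(k\)-DM(1) (\(\alpha,k_{0},w,c_{g}^{2}\)) and three for \(k\)-DM(2) (\(\alpha,k_{0},w\)).

```python
def delta_aic(chi2_model, chi2_cdm, extra_params):
    """Large-sample AIC difference of Eq. (20): dAIC = dchi2_min + 2 * dM."""
    return (chi2_model - chi2_cdm) + 2 * extra_params

chi2_min = {  # best-fit chi^2_min values from Table 2
    "CDM":     {"Planck": 2780.9,  "Planck+Other": 3824.0},
    "k-DM(1)": {"Planck": 2781.02, "Planck+Other": 3812.2},
    "k-DM(2)": {"Planck": 2780.44, "Planck+Other": 3816.7},
}
extra = {"k-DM(1)": 4, "k-DM(2)": 3}   # extra free parameters with respect to CDM

for model, dM in extra.items():
    for data in ("Planck", "Planck+Other"):
        d = delta_aic(chi2_min[model][data], chi2_min["CDM"][data], dM)
        print(f"{model:8s} {data:13s} dAIC = {d:+.2f}")
# Reproduces Table 2: +8.12 and -3.8 for k-DM(1); +5.54 and -1.3 for k-DM(2).
```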
The results of this analysis can be interpreted with the Jeffreys' scale as follows: among all models, the one that minimizes the AIC is considered to be the best one, and if the difference between the AIC of a given model and the best model is smaller than 4, one concludes that the data equally support the best fitted model and a given model. In the case of \(4<|\Delta AIC|<10\), observations still support the given model but less than the best one. Finally, for \(|\Delta AIC|>10\), observations basically do not support the given model compared to the best model (Davari et al., 2018). According to Table 2, the only Planck data prefers CDM with respect to \(k\)-DM models. However, adding the other datasets make the situation in favor of \(k\)-DM models. This may mean that \(k\)-DM models have more space to include all the datasets altogether. In order to have a better understanding of the aspects of obtaining from MCMC scans, in the following, we discuss the features of the \(k\)-Dependent model in the CMB and the matter power spectrum. In Figure 2, we show the matter power spectrum, \(P(k)\equiv\left<\delta_{m}(k)\right>^{2}\), in the \(k\)-Dependent model relative to the CDM model for the best obtained values using Planck+Other data. As we see, \(k\)-Dependent dark matter case mimics the CDM scenario to \(k\simeq 1.3\) for the \(k\)-DM(1) and \(k\simeq 2.2hMpc^{-1}\) for the \(k\)-DM(2), but then starts to deviate at larger \(k\)'s (i.e., small scales) and suppresses the power spectrum of matter by a large difference compared to the standard model. We include the information embedded in the Ly\(\alpha\) forest measured with the eBOSS-DR14 data release on scales of a few Mpc. One reason for this difference could be the lack of observational data in this range. In Figure 3, we notice that considering \(k\) dependence for dark matter has the influence of slowing down the evolution rate of the dark matter perturbations. This means that structures cluster slower, as we predicted from the Figure 2 for \(k>1\) with a slight difference \(P_{k-DM(i)}(k)<P_{CDM}(k)\). Since the Planck collaboration has measured the temperature and polarization maps of the CMB very precisely, it has placed stringent limits on the parameter space of the CDM model. This motivates us to study \(k\)-Dependent DM signatures in CMB maps. In Figure 4, we show how \(k\) dependency affects the temperature power spectra, including the variation with respect to the CDM model. We can see a suppression in the amplitude of the lower multipoles in the temperature power spectrum. As we know, the integrated Sachs-Wolfe (ISW) effect is important on such scales. Also, the small \(I\)s of CMB (TT and, even better, EE) give information on the reionization history. We obtain the redshift of the reionization, \(z_{\rm relo}\), using the best values of parameters to be 6.02, 6.41 and 7.38 for \(k\)-DM(1), \(k\)-DM(2) and CDM respectively. The \(z_{\rm relo}\) of the \(k\)-DM(1) model has the biggest differences from the standard model. A crucial quantity in determining the age and evolution of the universe in cosmology is \(H_{0}\). It represents the current rate of expansion of the universe. Because of the impact of Hubble's expansion on the growth of matter perturbations, it is significant to survey the behavior of H(z) in various DM cosmologies. We plot the evolution of H(z)/1+z in Figure 5. 
Our results in Tables 1 and Figure 6 show that the assumption of dependence dark matter to \(k\) scale can only reduce the \(S_{8}\) tension and not the \(H_{0}\) tension. In general, \(k\)-DM(1) model, which considered the equation of state, \(w\), and adiabatic sound speed, \(c_{g}^{2}\) independent of \(k\) scale, reduces \(S_{8}\) tension more than other models. ## 4 Discussion The warm dark matter model has always been of interest mainly because of the possible need to alleviate the small-scale problems of the \(\Lambda\)CDM. With such insight and also motivated by the effect of adding this cosmological component to reduce current cosmological tensions, in this work we considered a scenario in which the behavior of dark matter depends on the scale. It mimics CDM for small \(k\)'s and \begin{table} \begin{tabular}{c c c c c c c} \hline Parameters & \multicolumn{2}{c}{CDM} & \multicolumn{2}{c}{k-DM(1)} & \multicolumn{2}{c}{k-DM(2)} \\ \cline{2-7} & \(Planck\) & \(Planck+Other\) & \(Planck\) & \(Planck+Other\) & \(Planck\) & \(Planck+Other\) \\ \hline \(\chi^{2}_{\rm min}\) & 2780.9 & 3824 & 2781.02 & 3812.2 & 2780.44 & 3816.7 \\ \hline \(\mathbf{AIC_{k-DM(1)}-AIC_{CDM}}\) & 0 & 0 & 8.12 & \(-3.8\) & 5.54 & \(-1.3\) \\ \hline \end{tabular} \end{table} Table 2: The result of MCMC analysis for the best-fit \(\chi^{2}\), and AIC. It shows that \(k\)-DM models have more space to have Planck+Other datasets altogether consistently. Figure 4: Temperature anisotropies in the CMB. The bottom part of the panel displays the relative temperature differences between the \(k\)-DM and the CDM model. Figure 3: The growth rate of matter fluctuations for \(k\)-Dependent DM model compared to CDM model. The observational constraints are taken from (Kazantzidis & Perivolaropoulos, 2018). Figure 2: The matter power spectrum for \(k\)-Dependent DM and CDM models, and the fractional difference between them. The behavior of \(k\)-DM models mimics the CDM for small \(k\)’s. However, we see a transition for large \(k\)’s in \(k\)-DM models. However, there is no very precise data points at those scales. WDM for large \(k\)'s. A motivation for us was to check if the trace of WDM, which can be seen in very small scales to address e.g., can the core-cusp problem show itself in the (very short) cosmological scales? Our results show that this transition can affect the amplitude of the matter fluctuations, such that reducing the \(S_{8}\) tension. However, it seems the lack of cosmological data for very large \(k\)'s makes it hard to answer to the above question. For future analysis, we can think of a more theoretical framework and also find cosmological datas at very small scales which are cleaned from the baryonic physics. One way can be tracing the effects of our model in non-linear structure formation and the dark matter halo distributions. ## 5 Acknowledgments NK would like to thank Marzieh Farhang for instructive discussions during working on Farhang & Khosravi (2023). This work has been supported financially by a grant from Basic Sciences Research Fund under grant number BSRF-phys-399-06. ZD also acknowledges support from Iran Science Elites Federation under grant number M401543. ## 6 Data Availability No new data were generated or analysed in support of this research.
2304.11494
Compatibility between stability and strategy-proofness with single-peaked preferences on trees
This paper studies the stability and strategy-proofness aspect of the two-sided one-to-one matching market. Agents have single-peaked preferences on trees. In this setting, we characterize all rich anonymous tree-single-peaked domains where a stable and (weakly group) strategy-proof matching rule exists. We also show that whenever there exists a stable and strategy-proof matching rule on a rich anonymous tree-single-peaked domain, one or both of the deferred acceptance rules (Gale and Shapley, 1962) satisfy stability and weak group strategy-proofness on that domain. Finally, we show that for markets with a size of at least five, there is no rich anonymous domain where a stable and non-bossy matching rule exists. As a corollary, we show incompatibility between stability and group strategy-proofness on rich anonymous tree-single-peaked domains for markets with a size of at least five.
Pinaki Mandal
2023-04-22T23:10:24Z
http://arxiv.org/abs/2304.11494v1
# Compatibility between stability and strategy-proofness with single-peaked preferences on trees ###### Abstract This paper studies the stability and strategy-proofness aspect of the two-sided one-to-one matching market. Agents have single-peaked preferences on trees. In this setting, we characterize all rich anonymous tree-single-peaked domains where a stable and (weakly group) strategy-proof matching rule exists. We also show that whenever there exists a stable and strategy-proof matching rule on a rich anonymous tree-single-peaked domain, one or both of the deferred acceptance rules (Gale and Shapley, 1962) satisfy stability and weak group strategy-proofness on that domain. Finally, we show that for markets with a size of at least five, there is no rich anonymous domain where a stable and non-bossy matching rule exists. As a corollary, we show incompatibility between stability and group strategy-proofness on rich anonymous tree-single-peaked domains for markets with a size of at least five. **Keywords:** Two-sided matching; Single-peaked preferences; Stability; Strategy-proofness; (Weak) group strategy-proofness; Non-bossiness **JEL Classification:** C78; D47; D63; D82 Introduction This paper deals with the _marriage problem_(Gale and Shapley, 1962), a well-known two-sided one-to-one matching market. In this market, there are two disjoint sets of agents, men and women (of equal number). Each agent on one side of the market has a strict preference over the agents on the other side, and a matching between men and women is selected based on the agents' preferences. _Stability_(Gale and Shapley, 1962) has been considered the main property to be satisfied by any sensible matching.1 A matching is stable if there is no pair of agents who prefer being matched to each other to accepting the current matching. Gale and Shapley (1962) provide an algorithm called _deferred acceptance (DA) algorithm_ that produces a stable matching at every preference profile. Footnote 1: In real-world applications, empirical studies have shown that stable mechanisms often succeed whereas unstable ones often fail. For a summary of this evidence, see Roth (2002). Apart from stability, _strategy-proofness_ is another important desideratum for a matching rule. A matching rule is strategy-proof if truthful revelation of preferences is a weakly dominant strategy for every agent. Roth (1982) shows that on the unrestricted domain, no stable matching rule is strategy-proof. Later, Alcalde and Barbera (1994) introduce the notion of the _top dominance (TD)_ property, and show that the men-proposing DA rule is strategy-proof whenever the sets of admissible preferences for women satisfy the TD property.2 Footnote 2: Alcalde and Barberá (1994) work in a setting with outside options (the choice of remaining unmatched), and with arbitrary values of the number of men and the number of women. ### Our motivation and contribution Roth (1982) and Alcalde and Barbera (1994) assume that the agents can have strict but otherwise arbitrary preferences. However, it is well-known that in many circumstances preferences of agents are restricted in a particular way. _Single-peakedness_(Black, 1948) is known as one of the most common such restrictions. This motivates us to study the compatibility between stability and strategy-proofness with single-peaked preferences. 
In its classical form, single-peakedness arises when the choices can be ordered based on certain criteria and agents' preferences respect that order in the sense that as one moves away from his/her most preferred choice, his/her preference declines. Demange (1982) generalizes the classical single-peakedness to _single-peakedness on trees_. Instead of focusing only on the maximal tree-single-peaked domains, we do our analysis on a class of tree-single-peaked domains that we call _rich_ and _anonymous_. A domain is rich if every agent appears as the most-preferred choice of some preference in the domain. A domain is anonymous if each agent on one side of the market has the same set of admissible preferences. Our first result characterizes all rich anonymous tree-single-peaked domains where a stable and strategy proof matching rule exists (Theorem 3.1). As a corollary, we show that there is no stable and strategy-proof matching rule on the maximal tree-single-peaked domains (Corollary 3.1). We next focus on the compatibility of stability with _weak group strategy-proofness_ - a stronger condition than strategy-proofness. Weak group strategy-proofness ensures that no group of agents can be strictly better-off by misreporting their preferences. Mandal (2023) shows that the men-proposing DA rule is weakly group strategy-proof whenever the sets of admissible preferences for women satisfy the TD property.3 In view of his result, we characterize all rich anonymous tree-single-peaked domains where a stable and weakly group strategy-proof matching rule exists (Corollary 3.2). We also show that whenever there exists a stable and strategy-proof matching rule on a rich anonymous tree-single-peaked domain, one or both of the DA rules satisfy stability and weak group strategy-proofness on that domain (Corollary 3.3). Footnote 3: Like Alcalde and Barberá (1994), Mandal (2023) also works in a setting with outside options, and with arbitrary values of the number of men and the number of women. Finally, we focus on the (in)compatibility between stability and _group strategy-proofness_. Group strategy-proofness ensures that no group of agents can be weakly better-off by misreporting their preferences. We show that whenever there are at least five men (or women) in the market, there is no rich anonymous tree-single-peaked domain where a stable and group strategy-proof matching rule exists (Corollary 3.4). In fact, we prove a stronger result, which says whenever there are at least five men (or women) in the market, there is no rich anonymous domain where a stable and _non-bossy_ matching rule exists (Theorem 3.2).4\({}^{,}\)5 Non-bossiness is a weaker condition than group strategy-proofness, and it ensures that an agent cannot change the match of any other agent without changing his/her own match. Footnote 4: The concept of non-bossiness is due to Satterthwaite and Sonnenschein (1981). Footnote 5: In a setting with outside options, and with arbitrary values of the number of men and the number of women, Kojima (2010) shows that there is no stable and non-bossy matching rule on the unrestricted domain. The paper is organized as follows. In Section 2, we introduce basic notions and notations that we use throughout the paper, define matching rules and discuss their standard properties, and present DA rules and the notion of single-peakedness on trees. In Section 3, we present our results. 
## 2 Preliminaries There are two finite disjoint sets of agents, the set of _men_\(M=\{m_{1},\ldots,m_{n}\}\), and the set of equally many _women_\(W=\{w_{1},\ldots,w_{n}\}\). Let \(A=M\cup W\) be the set of all agents. Throughout the paper, we assume \(n\geq 3\). A _matching_ (between \(M\) and \(W\)) is a one-to-one function \(\mu:A\to A\) such that * \(\mu(m)\in W\) for all \(m\in M\), * \(\mu(w)\in M\) for all \(w\in W\), and 3. \(\mu(m)=w\) if and only if \(\mu(w)=m\).6 Footnote 6: It is worth noting that Condition (ii) is redundant in our framework. Since \(|M|=|W|\) and \(\mu\) is a one-to-one function, Condition (ii) follows from Condition (i). Here, \(\mu(m)=w\) means man \(m\) and woman \(w\) are matched to each other under the matching \(\mu\). We denote by \(\mathcal{M}\) the set of all matchings. ### Basic notions and notations For a finite set \(X\), let \(\mathbb{L}(X)\) denote the set of all (strict) linear orders over \(X\).7 An element of \(\mathbb{L}(X)\) is called a _preference_ over \(X\). For \(P\in\mathbb{L}(X)\) and distinct \(x,y\in X\), \(xPy\) is interpreted as "\(x\) is strictly preferred to \(y\) according to \(P\)". For \(P\in\mathbb{L}(X)\), let \(R\) denote the weak part of \(P\), i.e., for all \(x,y\in X\), \(xRy\) if and only if \(\big{[}xPy\) or \(x=y\big{]}\). For \(P\in\mathbb{L}(X)\), let \(\tau(P)\) denote the most preferred element of \(X\) according to \(P\), i.e., \(\tau(P)=x\) if and only if \(\big{[}x\in X\) and \(xPy\) for all \(y\in X\setminus\{x\}\big{]}\). Furthermore, we use the following convention throughout the paper: for a finite set \(X\) and a set of preferences \(\mathcal{D}\subseteq\mathbb{L}(X)\), whenever we write \((\cdot x\cdot y\cdot z\cdot)\in\mathcal{D}\), we mean that there exists a preference \(P\in\mathcal{D}\) such that \(xPyPz\), where \(x,y,z\in X\).8 Footnote 7: A _linear order_ is a semiconnex, asymmetric, and transitive binary relation. Footnote 8: Here, \(x\) and \(y\) need not be consecutively ranked in \(P\), and neither are \(y\) and \(z\). ### Domains and their properties Each man \(m\) has a preference \(P_{m}\) over \(W\) and each woman \(w\) has a preference \(P_{w}\) over \(M\). We denote by \(\mathcal{P}_{a}\) the set of admissible preferences for agent \(a\).9 A _preference profile_, denoted by \(P_{A}=(P_{m_{1}},\ldots,P_{m_{n}},P_{w_{1}},\ldots,P_{w_{n}})\), is an element of the Cartesian product \(\mathcal{P}_{A}:=\prod\limits_{i=1}^{n}\mathcal{P}_{m_{i}}\times\prod\limits _{j=1}^{n}\mathcal{P}_{w_{j}}\), that represents a collection of preferences - one for each agent. Following our notational convention, we denote the Cartesian product \(\prod\limits_{i=1}^{n}\mathcal{P}_{m_{i}}\) by \(\mathcal{P}_{M}\), and the Cartesian product \(\prod\limits_{j=1}^{n}\mathcal{P}_{w_{j}}\) by \(\mathcal{P}_{W}\). Furthermore, as is the convention, \(P_{-a}\) denotes a collection of preferences of all the agents except for \(a\). Also, for \(A^{\prime}\subseteq A\), let \(P_{A^{\prime}}\) denote a collection of preferences of all agents in \(A^{\prime}\), and let \(P_{-A^{\prime}}\) denote a collection of preferences of all the agents not in \(A^{\prime}\). Footnote 9: Note that \(\mathcal{P}_{m}\subseteq\mathbb{L}(W)\) for all \(m\in M\) and \(\mathcal{P}_{w}\subseteq\mathbb{L}(M)\) for all \(w\in W\). **Definition 2.1** (Richness).: For a finite set \(X\), a set of preferences \(\mathcal{D}\subseteq\mathbb{L}(X)\) is _rich_ if for each \(x\in X\), there is a preference \(P\in\mathcal{D}\) such that \(\tau(P)=x\). 
Throughout the paper, whenever we write \(\mathcal{P}_{A}\) is a rich domain (of preference profiles), we mean \(\mathcal{P}_{a}\) is rich for every agent \(a\). Note that since \(|M|=|W|=n\), for a rich domain \(\mathcal{P}_{A}\), there are at least \(n\) preferences in \(\mathcal{P}_{a}\) for every agent \(a\). **Definition 2.2** (Anonymity).: A domain of preference profiles \(\mathcal{P}_{A}\) is _anonymous_ if 1. \(\mathcal{P}_{m}=\mathcal{P}_{m^{\prime}}\) for all \(m,m^{\prime}\in M\), and 2. \(\mathcal{P}_{w}=\mathcal{P}_{w^{\prime}}\) for all \(w,w^{\prime}\in W\). For ease of presentation, for an anonymous domain \(\mathcal{P}_{A}\), we denote the common sets of admissible preferences for men and women by \(\mathcal{P}_{men}\) and \(\mathcal{P}_{women}\), respectively. ### Matching rules and their stability A _matching rule_ is a function \(\varphi:\mathcal{P}_{A}\rightarrow\mathcal{M}\). For a matching rule \(\varphi:\mathcal{P}_{A}\rightarrow\mathcal{M}\) and a preference profile \(P_{A}\in\mathcal{P}_{A}\), let \(\varphi_{a}(P_{A})\) denote the match of agent \(a\) by \(\varphi\) at \(P_{A}\). _Stability_ has been considered an important desideratum to be satisfied by any sensible matching rule. We use the following notions and notations to present it. A matching \(\mu\) is _blocked_ by a pair \((m,w)\in M\times W\) at a preference profile \(P_{A}\) if \(wP_{m}\mu(m)\) and \(mP_{w}\mu(w)\). A matching \(\mu\) is _stable_ at a preference profile \(P_{A}\) if it is not blocked by any pair. We denote by \(\mathcal{C}(P_{A})\) the set of all stable matchings at a preference profile \(P_{A}\). **Definition 2.3**.: A matching rule \(\varphi:\mathcal{P}_{A}\rightarrow\mathcal{M}\) is _stable_ if for all \(P_{A}\in\mathcal{P}_{A}\), we have \(\varphi(P_{A})\in\mathcal{C}(P_{A})\). #### 2.3.1 Deferred acceptance The celebrated _deferred acceptance (DA) rules_ are introduced by Gale and Shapley (1962). These rules are stable (see Remark 2.1), and will play a key role in our results. And hence, we present a brief description of these rules for completeness' sake. There are two types of DA rules, namely _men-proposing DA (MPDA) rule_ and _women-proposing DA (WPDA) rule_. We denote the MPDA rule by \(DA^{MP}\), and the WPDA rule by \(DA^{WP}\). In the following, we provide a description of the MPDA rule at a preference profile \(P_{A}\). The same of the WPDA rule can be obtained by interchanging the roles of men and women in the MPDA rule. _Step 1_.: Each man \(m\) proposes to his most preferred woman (according to \(P_{m}\)). Every woman \(w\), who has at least one proposal, tentatively keeps her most preferred man (according to \(P_{w}\)) among these proposals and rejects the rest. _Step 2_.: Every man \(m\), who was rejected at the previous step, proposes to his next preferred woman. Every woman \(w\), who has at least one proposal including any proposal tentatively kept from the earlier steps, tentatively keeps her most preferred man among these proposals and rejects the rest. This procedure is then repeated from Step 2 till a step such that each woman has a proposal. At this step, the tentative proposal accepted by a woman becomes permanent. This completes the description of the MPDA rule. **Remark 2.1** (Gale and Shapley, 1962).: Suppose \(\mathcal{P}_{A}\) is an arbitrary domain of preference profiles.10 On domain \(\mathcal{P}_{A}\), both DA rules are stable. Footnote 10: Here, \(\mathcal{P}_{A}\) need not be a rich domain or an anonymous domain. 
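A compact Python sketch of the men-proposing procedure described above is given below; preferences are encoded as ranked lists, and the women-proposing variant is obtained by interchanging the two sides. This is a generic textbook rendering of the Gale-Shapley algorithm, not code taken from the cited papers, and the small example profile is invented for illustration.

```python
def mpda(men_prefs, women_prefs):
    """Men-proposing deferred acceptance (Gale and Shapley, 1962).

    men_prefs[m]  : list of women, most preferred first.
    women_prefs[w]: list of men, most preferred first.
    Returns a dict mapping each man to the woman he is matched with.
    """
    # rank[w][m] = position of m in w's preference (smaller = better)
    rank = {w: {m: i for i, m in enumerate(pref)} for w, pref in women_prefs.items()}
    next_proposal = {m: 0 for m in men_prefs}      # index of the next woman m proposes to
    engaged_to = {}                                # w -> man tentatively kept by w
    free_men = list(men_prefs)

    while free_men:
        m = free_men.pop()
        w = men_prefs[m][next_proposal[m]]
        next_proposal[m] += 1
        if w not in engaged_to:                    # w tentatively keeps her first proposal
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:  # w upgrades to a preferred proposer
            free_men.append(engaged_to[w])
            engaged_to[w] = m
        else:                                      # w rejects m, who remains free
            free_men.append(m)
    return {m: w for w, m in engaged_to.items()}

# Example with three men and three women:
men = {"m1": ["w2", "w1", "w3"], "m2": ["w1", "w2", "w3"], "m3": ["w1", "w3", "w2"]}
women = {"w1": ["m1", "m3", "m2"], "w2": ["m3", "m1", "m2"], "w3": ["m1", "m2", "m3"]}
print(mpda(men, women))   # {'m3': 'w1', 'm1': 'w2', 'm2': 'w3'}, a stable matching
```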
### Matching rules and their incentive properties In practice, matching rules are often designed to satisfy incentive properties. Here are a few well-known desiderata for matching rules. **Definition 2.4**.: A matching rule \(\varphi:\mathcal{P}_{A}\rightarrow\mathcal{M}\) is 1. _strategy-proof_ if for all \(P_{A}\in\mathcal{P}_{A}\), all \(a\in A\), and all \(\tilde{P}_{a}\in\mathcal{P}_{a}\), we have \(\varphi_{a}(P_{A})R_{a}\varphi_{a}(\tilde{P}_{a},P_{-a})\). 2. _weakly group strategy-proof_ if for all \(P_{A}\in\mathcal{P}_{A}\), there do not exist a set of agents \(A^{\prime}\subseteq A\) and a preference profile \(\tilde{P}_{A^{\prime}}\) of the agents in \(A^{\prime}\) such that \(\varphi_{a}(\tilde{P}_{A^{\prime}},P_{-A^{\prime}})P_{a}\varphi_{a}(P_{A})\) for all \(a\in A^{\prime}\). 3. _group strategy-proof_ if for all \(P_{A}\in\mathcal{P}_{A}\), there do not exist a set of agents \(A^{\prime}\subseteq A\) and a preference profile \(\tilde{P}_{A^{\prime}}\) of the agents in \(A^{\prime}\) such that 1. \(\varphi_{a}(\tilde{P}_{A^{\prime}},P_{-A^{\prime}})R_{a}\varphi_{a}(P_{A})\) for all \(a\in A^{\prime}\), and 2. \(\varphi_{b}(\tilde{P}_{A^{\prime}},P_{-A^{\prime}})P_{b}\varphi_{b}(P_{A})\) for at least one \(b\in A^{\prime}\). **Remark 2.2**.: By definition, group strategy-proofness is stronger than weak group strategy-proofness, and both are stronger than strategy-proofness. **Remark 2.3** (Roth, 1982).: No stable matching rule is strategy-proof on the unrestricted domain \(\mathbb{L}^{n}(W)\times\mathbb{L}^{n}(M)\). ### Single-peakedness on trees We begin by presenting the concept of _classical single-peakedness_, i.e., single-peakedness on a line. **Definition 2.5** (Classical single-peakedness).: For a finite set \(X\), a preference \(P\in\mathbb{L}(X)\) is _single-peaked_ with respect to a linear order \(\prec_{X}\)_over \(X\) if for all \(x,y\in X\), \(\big{[}\tau(P)\preceq_{X}x\prec_{X}y\) or \(y\prec_{X}x\preceq_{X}\tau(P)\big{]}\) implies \(xPy\).11 Footnote 11: \(\preceq_{X}\) denotes the weak part of \(\prec_{X}\). Demange (1982) generalizes the classical single-peakedness to _single-peakedness on trees_. Before defining it formally, we first present some notions and notations related to graph theory that we will be using throughout the paper. A _path_ in a graph is a sequence of distinct nodes such that every two consecutive nodes form an edge. A _tree_, denoted by \(T\), is an undirected graph in which any two nodes are connected by exactly one path. For a tree \(T\), we denote its set of nodes by \(V(T)\). For ease of presentation, for a finite set \(X\), sometimes we denote by \(T_{X}\) a tree with \(X\) as its set of nodes. **Definition 2.6** (Single-peakedness on trees).: For a finite set \(X\), a preference \(P\in\mathbb{L}(X)\) is _single-peaked with respect to a tree \(T_{X}\)_ if, for all \(x,y\in X\) with \(x\) being on the (unique) path between \(\tau(P)\) and \(y\) in \(T_{X}\), we have \(xRy\). A set of preferences is called _maximal single-peaked_ _with respect to a tree \(T_{X}\)_ if it contains all single-peaked preferences with respect to \(T_{X}\). We denote such a set of preferences by \(\mathbb{S}(T_{X})\). A set of preferences \(\mathcal{D}\) (\(\subseteq\mathbb{L}(X)\)) is called _single-peaked_ _with respect to a tree \(T_{X}\)_ if \(\mathcal{D}\subseteq\mathbb{S}(T_{X})\). The following property can be easily verified: Consider a finite set \(X\), a tree \(T_{X}\), and three distinct elements \(x,y,z\in X\). 
Suppose \(y\) is on the path between \(x\) and \(z\) in \(T_{X}\). Then, for any single-peaked preference with respect to \(T_{X}\), \(y\) is not the worst element among \(x,y\), and \(z\) (see Demange (1982) for details). We present this property formally as a remark for later reference. **Remark 2.4**.: Given a finite set \(X\) and a tree \(T_{X}\), for all distinct \(x,y,z\in X\) with \(y\) being on the path between \(x\) and \(z\) in \(T_{X}\), we have \((\cdot x\cdot z\cdot y\cdot)\notin\mathbb{S}(T_{X})\) and \((\cdot z\cdot x\cdot y\cdot)\notin\mathbb{S}(T_{X})\). The following example explains how the classical single-peakedness is a special case of single-peakedness on trees. **Example 2.1** (Classical single-peakedness).: Suppose \(X=\{x_{1},x_{2},x_{3},x_{4}\}\). Consider the linear tree \(T_{X}\) presented in Figure 1. It is straightforward to verify that every single-peaked preference with respect to \(T_{X}\) is also a single-peaked preference with respect to the linear order \(\prec_{X}\) over \(X\), where \(x_{1}\prec_{X}x_{2}\prec_{X}x_{3}\prec_{X}x_{4}\), and vice versa. \(\lozenge\) **Remark 2.5**.: For a set \(X\) with \(k\) elements and a given linear order \(\prec_{X}\) over \(X\), there are exactly \(2^{k-1}\) preferences which are single-peaked in the classical sense. Formally, for a set \(X\) with \(k\) elements and a given linear tree \(T_{X}\), there are exactly \(2^{k-1}\) preferences in the maximal single-peaked set of preferences \(\mathbb{S}(T_{X})\). In the following example, we discuss the structure of single-peaked preferences with respect to a non-linear tree. **Example 2.2**.: Suppose \(X=\{x_{1},x_{2},x_{3},x_{4}\}\). Consider the tree \(T_{X}\) presented in Figure 2. Note that tree \(T_{X}\) is a _star_.12 Figure 1: Linear tree \(T_{X}\) for Example 2.1 It is straightforward to verify that \(\mathbb{S}(T_{X})\) consists of all the preferences which rank \(x_{2}\) first or second. \(\Diamond\) **Remark 2.6**.: For a set \(X\) with \(k\) elements and a given tree \(T_{X}\), the number of preferences in the maximal single-peaked set of preferences \(\mathbb{S}(T_{X})\) can range from \(2^{k-1}\) (when \(T_{X}\) is linear) to \(2(k-1)!\) (when \(T_{X}\) is a star). Let \(T_{M}\) be a tree with \(M\) as its set of nodes and let \(T_{W}\) be a tree with \(W\) as its set of nodes. Throughout the paper, whenever we write \(\mathcal{P}_{A}\) is a _tree-single-peaked domain_, we mean \(\mathcal{P}_{M}\subseteq\mathbb{S}^{n}(T_{W})\) and \(\mathcal{P}_{W}\subseteq\mathbb{S}^{n}(T_{M})\). ## 3 Results: Characterizations and impossibilities ### Compatibility between stability and strategy-proofness In this subsection, we study the structure of tree-single-peaked domains where a stable and strategy-proof matching rule exists. For this purpose, we first present the notion of the _top dominance_ property (Alcalde and Barbera, 1994).13 Footnote 13: One internal node and \(k\) leaves (terminal nodes) when \(k>1\), or no internal nodes and \(k+1\) leaves when \(k\leq 1\). See Star (graph theory) for details. Footnote 13: Alcalde and Barberá (1994) define top dominance property in a setting with outside options. We reformulated the property for our setting (i.e., without outside options). 
**Definition 3.1** (Top dominance).: For a finite set \(X\), a set of preferences \(\mathcal{D}\subseteq\mathbb{L}(X)\) satisfies the _top dominance (TD)_ property if for all \(x,y,z\in X\), \[(\cdot x\cdot y\cdot z\cdot)\in\mathcal{D}\implies(\cdot x\cdot z\cdot y\cdot)\notin\mathcal{D}.\] Footnote 14: In other words, if there exists a preference \(P\in\mathcal{D}\) with \(xPyPz\), then there is no preference \(\tilde{P}\in\mathcal{D}\) such that \(x\tilde{P}z\tilde{P}y\). **Remark 3.1**.: Given a finite set \(X\), let \(\mathcal{D}\subseteq\mathbb{L}(X)\) be a set of preferences that satisfies the TD property. Then, there cannot be two preferences in \(\mathcal{D}\) with the same most preferred element. In particular, if \(|X|=k\), then there are at most \(k\) preferences in \(\mathcal{D}\). Figure 2: Tree \(T_{X}\) for Example 2.2 Throughout the paper, whenever we write \(\mathcal{P}_{M}\) satisfies the TD property, we mean \(\mathcal{P}_{m}\) satisfies the TD property for every man \(m\). Similarly, whenever we write \(\mathcal{P}_{W}\) satisfies the TD property, we mean \(\mathcal{P}_{w}\) satisfies the TD property for every woman \(w\). We next present a result that follows from Alcalde and Barbera (1994). **Proposition 3.1**.: _Suppose \(\mathcal{P}_{A}\) is an arbitrary domain of preference profiles.15_ Footnote 15: Here, \(\mathcal{P}_{A}\) need not be a rich domain, an anonymous domain, or a tree-single-peaked domain. 1. \(\mathcal{P}_{W}\) _satisfying the TD property is a sufficient condition for the MPDA rule to be a stable and strategy-proof matching rule on_ \(\mathcal{P}_{A}\)_._ 2. \(\mathcal{P}_{M}\) _satisfying the TD property is a sufficient condition for the WPDA rule to be a stable and strategy-proof matching rule on_ \(\mathcal{P}_{A}\)_._ In what follows, we present the main result of this subsection. It characterizes the rich anonymous tree-single-peaked domains where a stable and strategy-proof matching rule exists. **Theorem 3.1**.: _Suppose \(\mathcal{P}_{A}\) is a rich anonymous tree-single-peaked domain. Then, there exists a stable and strategy-proof matching rule on \(\mathcal{P}_{A}\) if and only if either \(\mathcal{P}_{men}\), or \(\mathcal{P}_{women}\), or both satisfy the TD property._ The proof of this theorem is relegated to Appendix B. **Note 3.1**.: It should be mentioned that for a rich anonymous tree-single-peaked domain \(\mathcal{P}_{A}\) on which a stable and strategy-proof matching rule exists, at least one of \(\mathcal{P}_{men}\) and \(\mathcal{P}_{women}\) has exactly \(n\) preferences. To see this, fix a rich anonymous tree-single-peaked domain \(\mathcal{P}_{A}\) on which a stable and strategy-proof matching rule exists. Then, by Theorem 3.1, at least one of \(\mathcal{P}_{men}\) and \(\mathcal{P}_{women}\) satisfies the TD property. Without loss of generality, assume that \(\mathcal{P}_{men}\) satisfies the TD property. Since \(\mathcal{P}_{men}\) satisfies the TD property, there are at most \(n\) preferences in \(\mathcal{P}_{men}\) (see Remark 3.1). Furthermore, it follows from the richness of \(\mathcal{P}_{A}\) that there are at least \(n\) preferences in \(\mathcal{P}_{men}\). Combining all these facts, it follows that there are exactly \(n\) preferences in \(\mathcal{P}_{men}\). Since neither of \(\mathbb{S}(T_{W})\) and \(\mathbb{S}(T_{M})\) satisfies the TD property, we obtain the following corollary from Theorem 3.1. 
**Corollary 3.1**.: _There does not exist a stable and strategy-proof matching rule on the maximal tree-single-peaked domain \(\mathbb{S}^{n}(T_{W})\times\mathbb{S}^{n}(T_{M})\)._ It follows from Proposition 3.1 and Theorem 3.1 that on a rich anonymous tree-single-peaked domain, whenever there exists a stable and strategy-proof matching rule, one or both of the DA rules satisfy stability and strategy-proofness on that domain. We present a stronger result (Corollary 3.3) in Section 3.2. ### Compatibility between stability and weak group strategy-proofness In this subsection, we study the structure of tree-single-peaked domains where a stable and weakly group strategy-proof matching rule exists. For this purpose, we first present a result that follows from Mandal (2023). **Proposition 3.2**.: _Suppose \(\mathcal{P}_{A}\) is an arbitrary domain of preference profiles._ 1. \(\mathcal{P}_{W}\) _satisfying the TD property is a sufficient condition for the MPDA rule to be a stable and weakly group strategy-proof matching rule on_ \(\mathcal{P}_{A}\)_._ 2. \(\mathcal{P}_{M}\) _satisfying the TD property is a sufficient condition for the WPDA rule to be a stable and weakly group strategy-proof matching rule on_ \(\mathcal{P}_{A}\)_._ In what follows, we present the main result of this subsection, which we obtain from Theorem 3.1 and Proposition 3.2. It characterizes the rich anonymous tree-single-peaked domains where a stable and weakly group strategy-proof matching rule exists. **Corollary 3.2**.: _Suppose \(\mathcal{P}_{A}\) is a rich anonymous tree-single-peaked domain. Then, there exists a stable and weakly group strategy-proof matching rule on \(\mathcal{P}_{A}\) if and only if either \(\mathcal{P}_{men}\), or \(\mathcal{P}_{women}\), or both satisfy the TD property._ We also obtain the following corollary from Theorem 3.1 and Proposition 3.2. It says whenever there exists a stable and strategy-proof matching rule on a rich anonymous tree-single-peaked domain, one or both of the DA rules satisfy stability and weak group strategy-proofness on that domain. **Corollary 3.3**.: _Suppose \(\mathcal{P}_{A}\) is a rich anonymous tree-single-peaked domain on which a stable and strategy-proof matching rule exists. Then, at least one of the DA rules is a stable and weakly group strategy-proof matching rule on \(\mathcal{P}_{A}\)._ Corollary 3.3 arises a natural question: _Whenever a stable and strategy-proof matching rule exists on a rich anonymous tree-single-peaked domain, does that matching rule also satisfy weakly group strategy-proofness?_ We leave this question for further studies. ### (In)compatibility between stability and group strategy-proofness We begin by presenting the concept of _non-bossiness_ - a weaker condition than group strategy-proofness, and study the (in)compatibility between stability and non-bossiness on tree-single-peaked domains. A matching rule is non-bossy if an agent cannot change the match of another agent without changing his/her own match. **Definition 3.2**.: A matching rule \(\varphi:\mathcal{P}_{A}\to\mathcal{M}\) is _non-bossy_ if for all \(P_{A}\in\mathcal{P}_{A}\), all \(a\in A\), and all \(\tilde{P}_{a}\in\mathcal{P}_{a}\), \(\varphi_{a}(P_{A})=\varphi_{a}(\tilde{P}_{a},P_{-a})\) implies \(\varphi(P_{A})=\varphi(\tilde{P}_{a},P_{-a})\). **Remark 3.2**.: Suppose \(\mathcal{P}_{A}\) is an arbitrary domain of preference profiles. On domain \(\mathcal{P}_{A}\), every group strategy-proof matching rule is non-bossy. 
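Non-bossiness, like strategy-proofness, is a property that can be verified profile by profile on a finite domain. The brute-force checker below is an added illustration of Definition 3.2 (it is not from the paper; `rule` stands for any matching rule given as a Python function and `domain` for the dictionary of admissible preferences per agent): it scans every unilateral deviation and flags a violation whenever an agent keeps the same partner while someone else's partner changes.

```python
# Illustrative brute-force check of non-bossiness (Definition 3.2).
from itertools import product

def is_non_bossy(rule, domain):
    """rule(profile) returns a matching {agent: partner}; domain[a] lists the
    admissible preferences of agent a; the whole domain P_A is their product."""
    agents = list(domain)
    for combo in product(*(domain[a] for a in agents)):
        profile = dict(zip(agents, combo))
        base = rule(profile)
        for a in agents:
            for alt in domain[a]:
                new = rule(dict(profile, **{a: alt}))
                # a's own match is unchanged but the matching differs -> bossy
                if new[a] == base[a] and new != base:
                    return False
    return True
```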
We next introduce the notion of the _rotation_ property, and show that it is a necessary condition for the existence of a stable and non-bossy matching rule on a rich anonymous domain. **Definition 3.3**.: For a finite set \(X\), a set of preferences \(\mathcal{D}\subseteq\mathbb{L}(X)\) satisfies the _rotation_ property if for all \(x,y,z,t\in X\), \[(\cdot x\cdot y\cdot z\cdot t\cdot)\in\mathcal{D}\implies(\cdot z\cdot x\cdot t\cdot)\notin\mathcal{D}.\] Footnote 16: In other words, if there exists a preference \(P\in\mathcal{D}\) with \(xPyPzPt\), then there is no preference \(\tilde{P}\in\mathcal{D}\) such that \(z\tilde{P}x\tilde{P}t\). **Proposition 3.3**.: _Suppose \(\mathcal{P}_{A}\) is a rich anonymous domain.17 Then, there exists a stable and non-bossy matching rule on \(\mathcal{P}_{A}\) only if either \(\mathcal{P}_{men}\), or \(\mathcal{P}_{women}\), or both satisfy the rotation property._ Footnote 17: Here, \(\mathcal{P}_{A}\) need not be a tree-single-peaked domain. The proof of this proposition is relegated to Appendix C. Finally, we present the main result of this subsection. It says that whenever there are at least five men (or women) in the market, there is no rich anonymous tree-single-peaked domain where a stable and non-bossy matching rule exists. **Theorem 3.2**.: _Suppose \(\mathcal{P}_{A}\) is a rich anonymous tree-single-peaked domain. If \(n\geq 5\), then there does not exist a stable and non-bossy matching rule on \(\mathcal{P}_{A}\)._ The proof of this theorem is relegated to Appendix D. Since group strategy-proofness implies non-bossiness (see Remark 3.2), we obtain the following corollary from Theorem 3.2. **Corollary 3.4**.: _Suppose \(\mathcal{P}_{A}\) is a rich anonymous tree-single-peaked domain. If \(n\geq 5\), then there does not exist a stable and group strategy-proof matching rule on \(\mathcal{P}_{A}\)._ ## Appendix A Preliminaries Before we formally start proving our results, we first present a remark that summarizes some properties of the DA rules (see Gale and Shapley (1962) and McVitie and Wilson (1971) for details), which we will use in the proofs. **Remark A.1**.: Suppose \(\mathcal{P}_{A}\) is an arbitrary domain of preference profiles. For all \(P_{A}\in\mathcal{P}_{A}\) and all \(\mu\in\mathcal{C}(P_{A})\), 1. \(DA_{m}^{MP}(P_{A})R_{m}\mu(m)R_{m}DA_{m}^{WP}(P_{A})\) for all \(m\in M\), and 2. \(DA_{w}^{WP}(P_{A})R_{w}\mu(w)R_{w}DA_{w}^{MP}(P_{A})\) for all \(w\in W\). ## Appendix B Proof of Theorem 3.1 We first prove a lemma which we will use in the proof of Theorem 3.1. **Lemma B.1**.: _Given a finite set \(X\) and a tree \(T_{X}\), consider a rich single-peaked set of preferences \(\mathcal{D}\subseteq\mathbb{L}(X)\) with respect to \(T_{X}\). Suppose \(\mathcal{D}\) does not satisfy the TD property. Then, there exist \(x,y,z\in X\) such that_ \[\left\{(\cdot x\cdot y\cdot z\cdot),(\cdot x\cdot z\cdot y\cdot),(\cdot y\cdot x\cdot z\cdot)\right\}\subseteq\mathcal{D}.\] Proof of Lemma B.1.: Since \(\mathcal{D}\) does not satisfy the TD property, there exist distinct \(x_{1},x_{2},x_{3}\in X\) and two preferences \(P_{1},P_{2}\in\mathcal{D}\) such that \[x_{1}P_{1}x_{2}P_{1}x_{3},\text{ and }\] (B.1a) \[x_{1}P_{2}x_{3}P_{2}x_{2}.\] (B.1b) Moreover, since \(\mathcal{D}\) is rich, there exists a preference \(P_{3}\in\mathcal{D}\) such that \(\tau(P_{3})=x_{2}\). Let \(\pi\) denote the path between \(x_{2}\) and \(x_{1}\) (in \(T_{X}\)), and \(\tilde{\pi}\) denote the path between \(x_{1}\) and \(x_{3}\). We distinguish the following two cases. 
**Case 1**: Suppose \(\pi\) and \(\tilde{\pi}\) have no common node other than \(x_{1}\). Since \(\pi\) and \(\tilde{\pi}\) have no common node other than \(x_{1}\), the path between \(x_{2}\) and \(x_{3}\) is the concatenation of \(\pi\) and \(\tilde{\pi}\), and consequently, \(x_{1}\) is on the path between \(x_{2}\) and \(x_{3}\). This, along with the construction of \(P_{3}\), implies that \[x_{2}P_{3}x_{1}P_{3}x_{3}.\] (B.2) (B.1) and (B.2) together complete the proof for Case 1. **Case 2**: Suppose \(\pi\) and \(\tilde{\pi}\) have common nodes other than \(x_{1}\). Let \(x_{4}\) be the first node from \(x_{2}\) on \(\pi\) such that \(x_{4}\) is on \(\tilde{\pi}\). **Claim B.1**.: \(x_{1}\)_, \(x_{2}\), \(x_{3}\), and \(x_{4}\) are distinct._ Proof of Claim b.1.: 1. Since \(\pi\) and \(\tilde{\pi}\) have common nodes other than \(x_{1}\), by the construction of \(x_{4}\), we have \(x_{4}\neq x_{1}\). 2. Suppose \(x_{4}=x_{2}\). Then, \(x_{2}\) is on the path between \(x_{1}\) and \(x_{3}\). This, along with Remark 2.4, implies \((\cdot x_{1}\cdot x_{3}\cdot x_{2}\cdot)\notin\mathbb{S}(T_{X})\). However, since \(\mathcal{D}\subseteq\mathbb{S}(T_{X})\), this contradicts (B.1b). 3. Suppose \(x_{4}=x_{3}\). Then, \(x_{3}\) is on the path between \(x_{2}\) and \(x_{1}\). This, along with Remark 2.4, implies \((\cdot x_{1}\cdot x_{2}\cdot x_{3}\cdot)\notin\mathbb{S}(T_{X})\). However, since \(\mathcal{D}\subseteq\mathbb{S}(T_{X})\), this contradicts (B.1a). Since \(x_{1}\), \(x_{2}\), and \(x_{3}\) are distinct, (i) - (iii) together complete the proof of Claim B.1. By the construction of \(x_{4}\), it is on the path between \(x_{2}\) and \(x_{1}\). This, along with Claim B.1 and Remark 2.4, implies \((\cdot x_{1}\cdot x_{2}\cdot x_{4}\cdot)\notin\mathbb{S}(T_{X})\). Since \(P_{1}\in\mathcal{D}\subseteq\mathbb{S}(T_{X})\), this, together with (B.1a), implies \[x_{4}P_{1}x_{2}P_{1}x_{3}.\] (B.3) Similarly, by the construction of \(x_{4}\), it is on the path between \(x_{1}\) and \(x_{3}\). This, along with Claim B.1 and Remark 2.4, implies \((\cdot x_{1}\cdot x_{3}\cdot x_{4}\cdot)\notin\mathbb{S}(T_{X})\). Since \(P_{2}\in\mathcal{D}\subseteq\mathbb{S}(T_{X})\), this, together with (B.1b), implies \[x_{4}P_{2}x_{3}P_{2}x_{2}.\] (B.4) Furthermore, the path between \(x_{2}\) and \(x_{3}\) is obtained by concatenating the paths \(\pi_{s}\) and \(\tilde{\pi}_{s}\), where \(\pi_{s}\) is the sub-path of \(\pi\) between \(x_{2}\) and \(x_{4}\), and \(\tilde{\pi}_{s}\) is the sub-path of \(\tilde{\pi}\) between \(x_{4}\) and \(x_{3}\), and consequently, node \(x_{4}\) is on the path between \(x_{2}\) and \(x_{3}\). This, along with construction of \(P_{3}\), implies that \[x_{2}P_{3}x_{4}P_{3}x_{3}.\] (B.5) (B.3), (B.4), and (B.5) together complete the proof for Case 2. Since Cases 1 and 2 are exhaustive, this completes the proof of Lemma B.1. _Completion of the proof of Theorem 3.1._ The "if" part of the theorem follows from Proposition 3.1. We proceed to prove the "only-if" part. Assume for contradiction that neither of \(\mathcal{P}_{men}\) and \(\mathcal{P}_{women}\) satisfies the TD property. 
Since \(\mathcal{P}_{A}\) is a rich anonymous tree-single-peaked domain and neither of \(\mathcal{P}_{men}\) and \(\mathcal{P}_{women}\) satisfies the TD property, by Lemma B.1, without loss of generality, assume that there exist preferences \(P_{1},P_{2},P_{3}\in\mathcal{P}_{men}\) and \(\tilde{P}_{1},\tilde{P}_{2},\tilde{P}_{3}\in\mathcal{P}_{women}\) such that \[w_{2}P_{1}w_{1}P_{1}w_{3},\,w_{2}P_{2}w_{3}P_{2}w_{1},\,w_{1}P_{3 }w_{2}P_{3}w_{3},\text{ and}\] \[m_{2}\tilde{P}_{1}m_{1}\tilde{P}_{1}m_{3},\,m_{2}\tilde{P}_{2}m_ {3}\tilde{P}_{2}m_{1},\,m_{1}\tilde{P}_{3}m_{2}\tilde{P}_{3}m_{3}.\] Since \(\mathcal{P}_{A}\) is a rich anonymous domain, we can construct the preference profiles presented in Table B.1 (the dots indicate that all preferences for the corresponding parts are irrelevant and can be chosen arbitrarily).18 Here, \(m_{k}\) denotes a man other than \(m_{1},m_{2},m_{3}\) (if any), and \(w_{k}\) denotes a woman other than \(w_{1},w_{2},w_{3}\) (if any). Note that such an agent does not change his/her preference across the mentioned preference profiles. It is straightforward to verify the following facts. 1. 1. \(DA^{MP}(P_{A}^{1})=[(m_{1},w_{1}),(m_{2},w_{2}),(m_{k},w_{k})\;\;\forall\;\;k \geq 3]\). We denote this matching by \(\mu_{1}\) in this proof. 2. \(DA^{WP}(P_{A}^{1})=[(m_{1},w_{2}),(m_{2},w_{1}),(m_{k},w_{k})\;\;\forall\;\;k \geq 3]\). We denote this matching by \(\mu_{2}\) in this proof. 2. \(DA^{MP}(P_{A}^{2})=DA^{WP}(P_{A}^{2})=\mu_{2}\). 3. \(DA^{MP}(P_{A}^{3})=DA^{WP}(P_{A}^{3})=\mu_{1}\). These facts, together with Remark A.1, imply \[\mathcal{C}(P_{A}^{1})=\{\mu_{1},\mu_{2}\},\mathcal{C}(P_{A}^{2})=\{\mu_{2}\}, \text{ and }\mathcal{C}(P_{A}^{3})=\{\mu_{1}\},\] Fix a stable matching rule \(\varphi\) on \(\mathcal{P}_{A}\). If \(\varphi(P_{A}^{1})=\mu_{1}\), then \(w_{1}\) can manipulate at \(P_{A}^{1}\) via \(\tilde{P}_{2}\). If \(\varphi(P_{A}^{1})=\mu_{2}\), then \(m_{2}\) can manipulate at \(P_{A}^{1}\) via \(P_{2}\). This implies \(\varphi\) is not strategy-proof on \(\mathcal{P}_{A}\), which completes the proof of Theorem 3.1. \(\blacksquare\) ## Appendix C Proof of Proposition 3.3 Proof of Proposition 3.3.: Suppose neither of \(\mathcal{P}_{men}\) and \(\mathcal{P}_{women}\) satisfies the rotation property. We show that there is no stable and non-bossy matching rule on \(\mathcal{P}_{A}\). Since neither of \(\mathcal{P}_{men}\) and \(\mathcal{P}_{women}\) satisfies the rotation property, without loss of generality, assume that there exist preferences \(P_{1},P_{2}\in\mathcal{P}_{men}\) and \(\tilde{P}_{1},\tilde{P}_{2}\in\mathcal{P}_{women}\) such that \[w_{1}P_{1}w_{2}P_{1}w_{4}\;\;\text{and}\;\;w_{2}P_{2}w_{3}P_{2}w_{1}P_{2}w_{4},\;\;\text{and}\] \[m_{1}\tilde{P}_{1}m_{2}\tilde{P}_{1}m_{4}\;\;\text{and}\;\;m_{2}\tilde{P}_{2}m _{3}\tilde{P}_{2}m_{1}\tilde{P}_{2}m_{4}.\] We distinguish the following four cases. **Case 1**: Suppose \(w_{1}P_{1}w_{3}\) and \(m_{1}\tilde{P}_{1}m_{3}\). Since \(\mathcal{P}_{A}\) is a rich domain, there exist four preferences \(P_{3},P_{4}\in\mathcal{P}_{men}\) and \(\tilde{P}_{3},\tilde{P}_{4}\in\mathcal{P}_{women}\) such that \(\tau(P_{3})=w_{3}\), \(\tau(P_{4})=w_{4}\), \(\tau(\tilde{P}_{3})=m_{3}\), and \(\tau(\tilde{P}_{4})=m_{4}\). 
\begin{table} \begin{tabular}{c|c c c c|c c c c} \hline Preference profiles & \(m_{1}\) & \(m_{2}\) & \(m_{3}\) & \(m_{k}\) & \(w_{1}\) & \(w_{2}\) & \(w_{3}\) & \(w_{k}\) \\ \hline \hline \(P_{A}^{1}\) & \(P_{3}\) & \(P_{1}\) & \(P_{3}\) & \(w_{k}\ldots\) & \(\tilde{P}_{1}\) & \(\tilde{P}_{3}\) & \(\tilde{P}_{2}\) & \(m_{k}\ldots\) \\ \(P_{A}^{2}\) & \(P_{3}\) & \(P_{1}\) & \(P_{3}\) & \(w_{k}\ldots\) & \(\tilde{P}_{2}\) & \(\tilde{P}_{3}\) & \(\tilde{P}_{2}\) & \(m_{k}\ldots\) \\ \(P_{A}^{3}\) & \(P_{3}\) & \(P_{2}\) & \(P_{3}\) & \(w_{k}\ldots\) & \(\tilde{P}_{1}\) & \(\tilde{P}_{3}\) & \(\tilde{P}_{2}\) & \(m_{k}\ldots\) \\ \hline \end{tabular} \end{table} Table B.1: Preference profiles for Theorem 3.1

Moreover, since \(\mathcal{P}_{A}\) is a rich domain, we can construct the preference profiles presented in Table C.1. Here, \(m_{k}\) denotes a man other than \(m_{1},m_{2},m_{3},m_{4}\) (if any), and \(w_{k}\) denotes a woman other than \(w_{1},w_{2},w_{3},w_{4}\) (if any). Note that such an agent does not change his/her preference across the mentioned preference profiles. It is straightforward to verify the following facts. 1. \(DA^{MP}(P_{A}^{1})=[(m_{1},w_{1}),(m_{2},w_{2}),(m_{3},w_{4}),(m_{4},w_{3}),(m_{k},w_{k})\;\;\forall\;\;k\geq 5]\). We denote this matching by \(\mu_{1}\) in this proof. 2. \(DA^{WP}(P_{A}^{1})=[(m_{1},w_{2}),(m_{2},w_{1}),(m_{3},w_{4}),(m_{4},w_{3}),(m_{k},w_{k})\;\;\forall\;\;k\geq 5]\). We denote this matching by \(\mu_{2}\) in this proof. 3. \(DA^{MP}(P_{A}^{2})=DA^{WP}(P_{A}^{2})=\mu_{2}\). 4. \(DA^{MP}(P_{A}^{3})=DA^{WP}(P_{A}^{3})=\mu_{1}\). These facts, together with Remark A.1, imply \[\mathcal{C}(P_{A}^{1})=\{\mu_{1},\mu_{2}\},\ \mathcal{C}(P_{A}^{2})=\{\mu_{2}\},\text{ and }\mathcal{C}(P_{A}^{3})=\{\mu_{1}\}.\] Fix a stable matching rule \(\varphi\) on \(\mathcal{P}_{A}\). 1. Suppose \(\varphi(P_{A}^{1})=\mu_{1}\). Note that only \(m_{3}\) changes his preference from \(P_{A}^{1}\) to \(P_{A}^{2}\). This, together with the facts \(\varphi(P_{A}^{1})=\mu_{1}\) and \(\varphi(P_{A}^{2})=\mu_{2}\), implies that \(\varphi\) violates non-bossiness. 2. Suppose \(\varphi(P_{A}^{1})=\mu_{2}\). Note that only \(w_{3}\) changes her preference from \(P_{A}^{1}\) to \(P_{A}^{3}\). This, together with the facts \(\varphi(P_{A}^{1})=\mu_{2}\) and \(\varphi(P_{A}^{3})=\mu_{1}\), implies that \(\varphi\) violates non-bossiness. **Case 2**: Suppose \(w_{1}P_{1}w_{3}\) and \(m_{3}\tilde{P}_{1}m_{1}\). By renaming men \(m_{1},m_{2},m_{3}\) as \(m_{3}^{\prime},m_{1}^{\prime},m_{2}^{\prime}\), respectively, and renaming preferences \(\tilde{P}_{1},\tilde{P}_{2}\) as \(\tilde{P}_{2}^{\prime},\tilde{P}_{1}^{\prime}\), respectively, we obtain an identical situation to Case 1. 
\begin{table} \begin{tabular}{c|c c c c c|c c c c} \hline Preference profiles & \(m_{1}\) & \(m_{2}\) & \(m_{3}\) & \(m_{4}\) & \(m_{k}\) & \(w_{1}\) & \(w_{2}\) & \(w_{3}\) & \(w_{4}\) & \(w_{k}\) \\ \hline \hline \(P_{A}^{1}\) & \(P_{1}\) & \(P_{2}\) & \(P_{4}\) & \(P_{3}\) & \(w_{k}\ldots\) & \(\tilde{P}_{2}\) & \(\tilde{P}_{1}\) & \(\tilde{P}_{4}\) & \(\tilde{P}_{3}\) & \(m_{k}\ldots\) \\ \(P_{A}^{2}\) & \(P_{1}\) & \(P_{2}\) & \(P_{1}\) & \(P_{3}\) & \(w_{k}\ldots\) & \(\tilde{P}_{2}\) & \(\tilde{P}_{1}\) & \(\tilde{P}_{4}\) & \(\tilde{P}_{3}\) & \(m_{k}\ldots\) \\ \(P_{A}^{3}\) & \(P_{1}\) & \(P_{2}\) & \(P_{4}\) & \(P_{3}\) & \(w_{k}\ldots\) & \(\tilde{P}_{2}\) & \(\tilde{P}_{1}\) & \(\tilde{P}_{1}\) & \(\tilde{P}_{3}\) & \(m_{k}\ldots\) \\ \hline \end{tabular} \end{table} Table C.1: Preference profiles for Case 1 of Proposition 3.3 **Case 3**: Suppose \(w_{3}P_{1}w_{1}\) and \(m_{1}\tilde{P}_{1}m_{3}\). By renaming women \(w_{1},w_{2},w_{3}\) as \(w^{\prime}_{3},w^{\prime}_{1},w^{\prime}_{2}\), respectively, and renaming preferences \(P_{1},P_{2}\) as \(P^{\prime}_{2},P^{\prime}_{1}\), respectively, we obtain an identical situation to Case 1. **Case 4**: Suppose \(w_{3}P_{1}w_{1}\) and \(m_{3}\tilde{P}_{1}m_{1}\). By renaming men \(m_{1},m_{2},m_{3}\) as \(m^{\prime}_{3},m^{\prime}_{1},m^{\prime}_{2}\), respectively, renaming women \(w_{1},w_{2},w_{3}\) as \(w^{\prime}_{3},w^{\prime}_{1},w^{\prime}_{2}\), respectively, and renaming preferences \(P_{1},P_{2},\tilde{P}_{1},\tilde{P}_{2}\) as \(P^{\prime}_{2},P^{\prime}_{1},\tilde{P}^{\prime}_{2},\tilde{P}^{\prime}_{1}\), respectively, we obtain an identical situation to Case 1. Since Cases 1 - 4 are exhaustive, this completes the proof of Proposition 3.3. ## Appendix D Proof of Theorem 3.2 We first prove a lemma which we will use in the proof of Theorem 3.2. **Lemma D.1**.: _Given a finite set \(X\) and a tree \(T_{X}\), consider a rich single-peaked set of preferences \(\mathcal{D}\subseteq\mathbb{L}(X)\) with respect to \(T_{X}\). If \(|X|\geq 5\), then \(\mathcal{D}\) does not satisfy the rotation property._ Proof of Lemma d.1.: Consider a subtree \(\tilde{T}\) of \(T_{X}\) such that \(|V(\tilde{T})|=5\). \(\tilde{T}\) has exactly one of the following three tree structures. We distinguish the following three cases. **Case 1**: Suppose \(\tilde{T}\) has the tree structure given in Figure D.1a. Without loss of generality, assume that \(V(\tilde{P})=\{x_{1},x_{2},x_{3},x_{4},x_{5}\}\) and \(\tilde{T}\) is as given in Figure D.2. Figure D.2: Subtree \(\tilde{T}\) for Case 1 Figure D.1: Possible tree structures for \(\tilde{T}\) Because \(\mathcal{D}\) is rich, there exist preferences \(P_{1},P_{3},P_{5}\in\mathcal{D}\) such that \(\tau(P_{1})=x_{1}\), \(\tau(P_{3})=x_{3}\), and \(\tau(P_{5})=x_{5}\). Furthermore, since \(\mathcal{D}\subseteq\mathbb{S}(T_{X})\) and \(\tilde{T}\) is a subtree of \(T_{X}\), the facts \(\tau(P_{1})=x_{1}\) and \(\tau(P_{5})=x_{5}\), together with the structure of \(\tilde{T}\), imply \[x_{1}P_{1}x_{2}P_{1}x_{3}P_{1}x_{5},\text{ and }\] (D.1a) \[x_{5}P_{5}x_{4}P_{5}x_{3}P_{5}x_{1}.\] (D.1b) 1. Suppose \(x_{3}P_{3}x_{1}P_{3}x_{5}\). The fact \(x_{3}P_{3}x_{1}P_{3}x_{5}\) and (D.1a) together imply that \(\mathcal{D}\) violates the rotation property. 2. Suppose \(x_{3}P_{3}x_{5}P_{3}x_{1}\). The fact \(x_{3}P_{3}x_{5}P_{3}x_{1}\) and (D.1b) together imply that \(\mathcal{D}\) violates the rotation property. **Case 2**: Suppose \(\tilde{T}\) has the tree structure given in Figure D.1b. 
Without loss of generality, assume that \(V(\tilde{P})=\{x_{1},x_{2},x_{3},x_{4},x_{5}\}\) and \(\tilde{T}\) is as given in Figure D.3. Because \(\mathcal{D}\) is rich, there exist preferences \(P_{1},P_{4},P_{5}\in\mathcal{D}\) such that \(\tau(P_{1})=x_{1}\), \(\tau(P_{4})=x_{4}\), and \(\tau(P_{5})=x_{5}\). Furthermore, since \(\mathcal{D}\subseteq\mathbb{S}(T_{X})\) and \(\tilde{T}\) is a subtree of \(T_{X}\), the fact \(\tau(P_{1})=x_{1}\), together with the structure of \(\tilde{T}\), implies \[x_{1}P_{1}x_{2}P_{1}x_{3}P_{1}x_{4}P_{1}x_{5}\text{ or }\ x_{1}P_{1}x_{2}P_{1}x_{3}P_{1}x_{5}P_{1}x_{4}.\] (D.2) 1. Suppose \(x_{1}P_{1}x_{2}P_{1}x_{3}P_{1}x_{4}P_{1}x_{5}\). Assume for contradiction that \(\mathcal{D}\) satisfies the rotation property. Since \(\mathcal{D}\) satisfies the rotation property, the fact \(x_{2}P_{1}x_{3}P_{1}x_{4}P_{1}x_{5}\) implies \((\cdot x_{4}\cdot x_{2}\cdot x_{5}\cdot)\notin\mathcal{D}\). This, together with the fact \(\tau(P_{4})=x_{4}\) and the structure of \(\tilde{T}\), implies \(x_{4}P_{4}x_{3}P_{4}x_{5}P_{4}x_{5}P_{4}x_{2}P_{4}x_{1}\). Moreover, since \(\mathcal{D}\) satisfies the rotation property, the fact \(x_{4}P_{4}x_{3}P_{4}x_{5}P_{4}x_{1}\) implies \((\cdot x_{5}\cdot x_{4}\cdot x_{1}\cdot)\notin\mathcal{D}\). This, together with the fact \(\tau(P_{5})=x_{5}\) and the structure of \(\tilde{T}\), implies \(x_{5}P_{5}x_{3}P_{5}x_{2}P_{5}x_{1}P_{5}x_{4}\). However, \(x_{1}P_{1}x_{3}P_{1}x_{4}\) and \(x_{3}P_{5}x_{2}P_{5}x_{1}P_{5}x_{4}\) together contradict the rotation property of \(\mathcal{D}\). 2. Suppose \(x_{1}P_{1}x_{2}P_{1}x_{3}P_{1}x_{5}P_{1}x_{4}\). Figure D.3: Subtree \(\tilde{T}\) for Case 2 Assume for contradiction that \(\mathcal{D}\) satisfies the rotation property. Since \(\mathcal{D}\) satisfies the rotation property, the fact \(x_{2}P_{1}x_{3}P_{1}x_{5}P_{1}x_{4}\) implies \((\cdot x_{5}\cdot x_{2}\cdot x_{4}\cdot)\notin\mathcal{D}\). This, together with the fact \(\tau(P_{5})=x_{5}\) and the structure of \(\tilde{T}\), implies \(x_{5}P_{5}x_{3}P_{5}x_{4}P_{5}x_{2}P_{5}x_{1}\). Moreover, since \(\mathcal{D}\) satisfies the rotation property, the fact \(x_{5}P_{5}x_{3}P_{5}x_{4}P_{5}x_{1}\) implies \((\cdot x_{4}\cdot x_{5}\cdot x_{1}\cdot)\notin\mathcal{D}\). This, together with the fact \(\tau(P_{4})=x_{4}\) and the structure of \(\tilde{T}\), implies \(x_{4}P_{4}x_{3}P_{4}x_{2}P_{4}x_{1}P_{4}x_{5}\). However, \(x_{1}P_{1}x_{3}P_{1}x_{5}\) and \(x_{3}P_{4}x_{2}P_{4}x_{1}P_{4}x_{5}\) together contradict the rotation property of \(\mathcal{D}\). **Case 3**: Suppose \(\tilde{T}\) has the tree structure given in Figure D.1c. Without loss of generality, assume that \(V(\tilde{P})=\{x_{1},x_{2},x_{3},x_{4},x_{5}\}\) and \(\tilde{T}\) is as given in Figure D.4. Because \(\mathcal{D}\) is rich, there exist a preference \(P_{1}\in\mathcal{D}\) such that \(\tau(P_{1})=x_{1}\). Notice that \(x_{2}\), \(x_{3}\), \(x_{4}\), and \(x_{5}\) can appear in any order in \(P_{1}\) after \(x_{1}\). Without loss of generality, assume that \(x_{1}P_{1}x_{2}P_{1}x_{3}P_{1}x_{4}P_{1}x_{5}\). Moreover, because \(\mathcal{D}\) is rich, there exist a preference \(P_{3}\in\mathcal{D}\) such that \(\tau(P_{3})=x_{3}\). Since \(\mathcal{D}\subseteq\mathbb{S}(T_{X})\) and \(\tilde{T}\) is a subtree of \(T_{X}\), the fact \(\tau(P_{3})=x_{3}\), together with the structure of \(\tilde{T}\), implies \(x_{3}P_{3}x_{1}P_{3}x_{4}\). 
However, this, along with the fact \(x_{1}P_{1}x_{2}P_{1}x_{3}P_{1}x_{4}\), implies that \(\mathcal{D}\) violates the rotation property. Since Cases 1 - 3 are exhaustive, this completes the proof of Lemma D.1. \(\blacksquare\) _Completion of the proof of Theorem 3.2._ Suppose \(\mathcal{P}_{A}\) is a rich anonymous tree-single-peaked domain and let \(n\geq 5\). By Lemma D.1, neither of \(\mathcal{P}_{men}\) and \(\mathcal{P}_{women}\) satisfies the rotation property, and consequently, Theorem 3.2 follows from Proposition 3.3. \(\blacksquare\)
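As a quick empirical companion to Remarks 2.5, 2.6 and 3.1 and to Lemma D.1, the following sketch is an added illustration (it is not part of the paper; it uses networkx and brute force, so it is only practical for small trees): it enumerates the maximal single-peaked set \(\mathbb{S}(T_{X})\) for a linear tree on five nodes and then tests the TD and rotation properties directly from their definitions.

```python
# Illustrative check of single-peakedness on trees and of the TD / rotation properties.
from itertools import permutations
import networkx as nx

def is_single_peaked(pref, T):
    """Definition 2.6: every x on the tree path from the peak tau(P) to y satisfies x R y."""
    pos = {a: k for k, a in enumerate(pref)}          # smaller index = more preferred
    return all(pos[x] <= pos[y]
               for y in pref
               for x in nx.shortest_path(T, pref[0], y))

def maximal_single_peaked(T):
    return [p for p in permutations(T.nodes) if is_single_peaked(p, T)]

def satisfies_td(prefs):                               # Definition 3.1
    for P1 in prefs:
        r1 = {a: k for k, a in enumerate(P1)}
        for P2 in prefs:
            r2 = {a: k for k, a in enumerate(P2)}
            for x, y, z in permutations(P1, 3):
                if r1[x] < r1[y] < r1[z] and r2[x] < r2[z] < r2[y]:
                    return False
    return True

def satisfies_rotation(prefs):                         # Definition 3.3
    for P1 in prefs:
        r1 = {a: k for k, a in enumerate(P1)}
        for P2 in prefs:
            r2 = {a: k for k, a in enumerate(P2)}
            for x, y, z, t in permutations(P1, 4):
                if r1[x] < r1[y] < r1[z] < r1[t] and r2[z] < r2[x] < r2[t]:
                    return False
    return True

line5 = nx.path_graph(5)                               # a linear tree on 5 nodes
S = maximal_single_peaked(line5)
print(len(S))                                          # 2^(5-1) = 16, as in Remark 2.5
print(satisfies_td(S), satisfies_rotation(S))          # False, False (cf. Remark 3.1, Lemma D.1)
```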
2310.02211
Fast Localization and Tracking in City-Scale UWB Networks
Localization of networked nodes is an essential problem in emerging applications, including first-responder navigation, automated manufacturing lines, vehicular and drone navigation, asset navigation and tracking, Internet of Things and 5G communication networks. In this paper, we present Locate3D, a novel system for peer-to-peer node localization and orientation estimation in large networks. Unlike traditional range-only methods, Locate3D introduces angle-of-arrival (AoA) data as an added network topology constraint. The system solves three key challenges: it uses angles to reduce the number of measurements required by 4x and jointly use range and angle data for location estimation. We develop a spanning-tree approach for fast location updates, and to ensure the output graphs are rigid and uniquely realizable, even in occluded or weakly connected areas. Locate3D cuts down latency by up to 75% without compromising accuracy, surpassing standard range-only solutions. It has a 10.2 meters median localization error for large-scale networks (30,000 nodes, 15 anchors spread across 14km square) and 0.5 meters for small-scale networks (10 nodes).
Nakul Garg, Irtaza Shahid, Ramanujan K Sheshadri, Karthikeyan Sundaresan, Nirupam Roy
2023-10-03T17:08:45Z
http://arxiv.org/abs/2310.02211v1
# Fast Localization and Tracking in City-Scale UWB Networks ###### Abstract Localization of networked nodes is an essential problem in emerging applications, including first-responder navigation, automated manufacturing lines, vehicular and drone navigation, asset navigation and tracking, Internet of Things and 5G communication networks. In this paper, we present _Locate3D_, a novel system for peer-to-peer node localization and orientation estimation in large networks. Unlike traditional range-only methods, _Locate3D_ introduces angle-of-arrival (AoA) data as an added network topology constraint. The system solves three key challenges: it uses angles to reduce the number of measurements required by \(4\times\) and jointly use range and angle data for location estimation. We develop a spanning-tree approach for fast location updates, and to ensure the output graphs are rigid and uniquely realizable, even in occluded or weakly connected areas. _Locate3D_ cuts down latency by up to 75% without compromising accuracy, surpassing standard range-only solutions. It has a 10.2 meters median localization error for large-scale networks (\(30,000\) nodes, 15 anchors spread across \(14km^{2}\)) and 0.5 meters for small-scale networks (10 nodes). ## 1 Introduction A swarm of connected nodes is the underlying architecture for many emerging applications. Miniaturized sensing modules are scattered like seeds [25, 65] or carried by insects [26] to scale a vast region for fine grained sensor networks toward border protection [7], animal migration tracking [31], or precision agriculture [66]. Flocks of drones can localize each other to fly in formation for charting inaccessible regions [23, 57] or for airshows [2]. A smart network of tags can enable city-scale tracking of deliveries or missing objects in real-time [39]. Future of the cellular [54] and vehicular networks [4] can pave the path toward autonomy [71, 79] and road safety [75]. Localization of the nodes in such large networks is an essential requirement and at the same time a challenging problem when the number of nodes is large. In this paper, we present _Locate3D_, a system that is grounded on both theoretical and practical foundations, providing a reliable framework for fast peer-to-peer localization as well as orientation of nodes on a large network. Multidimensional scaling (MDS) has been the essence of nearly all large-scale localization algorithms for wireless networks. The primary reason MDS is adapted so widely is its ability to reconstruct the relative map of the network nodes with little infrastructural support, even in the form of the anchor nodes with known locations. MDS-based algorithms are fundamentally centralized ranging-based systems that consider inter-node distances in the Euclidean space to optimize for sensor locations. It, however, suited the capacity of the mobile nodes that can use the time of flight of the signal or RSSI-based model to estimate ranges without additional hardware requirements or computational complexity. However, recent mobile nodes have evolved not only to accurately sense the ranges but are also equipped with multi-dimensional antenna arrays to estimate reliable angles of the nodes. For instance, around 20 years after being released for commercial applications in 2002 [16], off-the-shelf UWB sensors can now sense the angle of the received signal [51, 52]. 
Unfortunately, the entire class of the MDS-based network localization algorithms can not take advantage of this new-found capability of the mobile nodes. Intuitively the angle information of the peer nodes can serve as additional constraints of the network topology leading to faster convergence to the location estimation. Moreover, the range and angles are estimated simultaneously per exchange of signals between nodes without incurring additional measurement latency. This convenient information Figure 1: Adding angles to the edges between nodes introduces additional constraints to the topology, significantly decreasing the number of edges necessary to achieve a _rigid_ and _unique_ graph realization. Furthermore, this provides more information to accurately estimate the relative 3D orientations between nodes. is wasted as the MDS objective function can not jointly optimize on the Euclidean distance and angle plane. We develop a new network localization algorithm with a redefined objective function to include range and angle together for optimization to bridge this gap. The scalability of a network localization solution depends on several practical factors beyond the correctness of the theoretical formulations. The level of dependency on the infrastructure is the most crucial of them. In addition, a dynamic topology of mobile nodes requires short update latency of location estimations. Like any peer-to-peer localization system, _Locate3D_ requires at least four anchor nodes for the unambiguous global 3D location of the nodes. However, the relative locations of the entire topology are correct with no assumption on the anchors. We enable our algorithms to incorporate any number of available anchor nodes and other infrastructural opportunities to create virtual anchors that enhance overall localization accuracy. In the proposed system, the joint range and angle-based optimization reduce the measurement and initial topology estimation latency, then a spanning-tree-based optimal edge selection expedites updates on locations after initial estimation, and finally, a graph rigidity-based solution makes the estimation robust to local occlusions or poor peer-to-peer connectivity. A class of solutions for network localization resort to multimodal data to improve accuracy and reduce the update latency of the system. While effective in a small number of nodes and within restricted environmental conditions, the multimodality assumption limits the scalability of the system. It is infeasible to maintain homogeneous data quality with thousands of nodes spread across a large geographical region. For instance, some recent papers [38] use Visual Inertial Odometry (VIO) - a camera-based solution to track orientation - to improve localization accuracy. As shown in Figure 2, in an experiment with VIO and UWB localization the accuracy falters with varying lighting conditions. We argue that an unimodal solution is ideal for large networks in terms of consistency and ease of practical deployment and maintenance. It makes our solution equally applicable to the networks of resource-constrained low-power nodes. The core localization algorithm is, however, applicable to any modality that can measure the peer-to-peer range and relative angles, and naturally, the location accuracy is defined by the accuracy of the measurements. For instantiating the algorithm in a prototype and realistic large-scale simulations, we consider off-the-shelf Ultra-WideBand (UWB) wireless sensing nodes. 
This paper strives to improve the accuracy of 3D localization of UWB-enabled nodes on a large single-modality network as shown in Figure 1. To this end, we have made the following three specific contributions at the current stage of this project: * Developed a novel 3D network localization algorithm that jointly incorporates range and angle in topology estimation. It leads to a fast localization algorithm with 75% latency improvement for localization of a 30,000-node network spanning several kilometers with 15 static anchors spread across \(14km^{2}\). The median accuracy of location is 10.2 meters. * Developed a supporting algorithm for estimating the optimal spanning tree of the network for robust localization with occlusions and limited field of view of sensor nodes. The algorithm also includes a decomposition technique that splits a non-rigid graph into smaller rigid graphs, for both range and angle constraints. * Implemented a working prototype with UWB-enabled nodes running the proposed algorithm. We used real-world UWB measurement traces to evaluate system performance in large network models. ## 2 System Design Our system, _Locate3D_, uses pair-wise Ultra-Wideband (UWB) RF measurements, specifically range and angle-of-arrival information, to determine the precise 3D positions of interconnected nodes in relation to one another. The system is designed using four key components: **Joint Range-Angle Localization:** We develop an optimization method that uses range and angle measurements as constraints to build a network of localized nodes. Incorporating angles ensures an accurate location estimate with fewer edges than a range-only method, effectively reducing latency by up to 4.2\(\times\). **Reference Frame Transformation:** Each node's angle measurement is a composite value of both the relative orientation (determined by the antenna angle with respect to the peer's antenna) and the angle-of-arrival. _Locate3D_ efficiently decomposes this combined measurement to transform all node axes to a common frame of reference. **Scaling to Large Networks:** We develop an edge selection strategy that ensures the formulation of the optimal spanning tree, facilitating the interconnection of large topologies. Large networks often contain flexible or unconstrained edges, which can result in isolated but structurally rigid subgraphs. We aim to identify and resolve these subgraphs separately. **Opportunistic Integration with Infrastructure:** _Locate3D_ can leverage any pre-existing infrastructure, incorporating it as anchor points to refine localization accuracy. We further extend our system's capability by introducing the concept of virtual anchors. This mechanism temporarily designates specific mobile nodes as anchors and adjusts edge weights, disseminating the accuracy throughout the network. Below, we discuss the individual algorithmic contributions in detail. ### Joint Range-Angle Localization We formulate the localization problem as an optimization problem that aims to minimize the difference between measured pairwise distances and the distances corresponding to the estimated coordinates. There is a plethora of prior work on network localization using range measurements only.
Examples include SMACOF [12, 22] and non-metric MDS [6, 47, 64]. Some recent works also account for missing edges [58, 59] and NLOS cases [11, 14, 67]. However, none of these works jointly incorporates angle-of-arrival information in the network. **Problem formulation:** Consider a topology of N nodes (mobile devices), with unknown locations \(X=[(x_{1},y_{1},z_{1}),\cdots(x_{N},y_{N},z_{N})]\). Suppose the nodes measure the ranges between each other, \(\hat{r_{ij}}\), where \((i,j)\) denotes the pair of nodes \(i\) and \(j\) in the topology. We aim to solve for \(X\) while minimizing the cost function: \[\min_{X}\sum_{i,j}w_{ij}(\hat{r_{ij}}-r_{ij}(X))^{2} \tag{1}\] where \(w_{ij}\) is the weight assigned to the edge between nodes \(i\) and \(j\) and \(r_{ij}(X)\) is the Euclidean distance between them given by \[r_{ij}(X)=\sqrt{(x_{i}-x_{j})^{2}+(y_{i}-y_{j})^{2}+(z_{i}-z_{j})^{2}} \tag{2}\] We use \(w_{ij}=0\) for missing edges. **Adding angles in the topology:** Incorporating angles into the network topology offers the benefit of reducing the necessary number of edges, which subsequently decreases latency. This approach, however, is not as straightforward due to the highly non-convex and discontinuous nature of angular loss functions when simply added with the range-based loss function. Existing works [8, 37, 50] which use L1 and L2 losses directly on this loss function show that the objective function has many local minima. We can compute the angular loss using the function \(f(X)\), expressed as: \[f(X)=\Big{[}\hat{\theta_{ij}}-\arctan\Big{[}\frac{y_{i}-y_{j}}{x_{i}-x_{j}}\Big{]}\Big{]}^{2} \tag{3}\] This function encapsulates the difference between the measured angles, denoted by \(\hat{\theta_{ij}}\), and the angles calculated from the estimated coordinates. The non-convex nature of \(f(X)\) occurs due to the arctangent and the least-squares operation on angles. This results in multiple local minima, making the optimization highly prone to generating inaccurate topologies. The primary contributor to this issue is the restrictive interval \([-\pi/2,\pi/2]\) that the arctangent function operates in, failing to account for points in the left quadrants of the plane, thereby resulting in the same angles for coordinates \((x,y)\) and \((-x,-y)\). To address this, we first use the 2-argument arctangent, a variant of the arctangent function that considers both x and y inputs and adjusts for the signs, hence returning angles within the interval \([-\pi,\pi]\). Unfortunately, this transformation alone is not enough, as the resulting loss still contains non-differentiable points, making the optimization prone to getting ensnared in local minima. To overcome this, we apply another transformation to the loss function. We take the negative cosine of the angles, creating a smoother, continuous, and differentiable function restricted within the \([0,1]\) range. The transformed loss function is defined as: \[f(X)=1-cos(\hat{\theta_{ij}}-arctan2(y_{i}-y_{j},x_{i}-x_{j})) \tag{4}\] Finally, we combine the range and angular loss functions to efficiently integrate angles in the network topology optimization, formulating a joint optimization problem: \[\min_{X}\Big{[}\frac{\sum_{i,j}w_{ij}^{r}(\hat{r_{ij}}-r_{ij}(X))^{2}}{\sum_{i,j}\hat{r_{ij}}}+\sum_{i,j}w_{ij}^{\theta}(1-cos(\hat{\theta_{ij}}-\theta_{ij}(X)))\Big{]} \tag{5}\] where \(\theta_{ij}(X)=arctan2(y_{i}-y_{j},x_{i}-x_{j})\). Note that we scale the range loss function with \((\sum_{i}\sum_{j}\hat{r_{ij}})\).
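To make the joint objective in Eq. (5) concrete, the sketch below is an added illustration rather than the paper's implementation: it builds a small synthetic 2D topology, simulates noisy range and AoA measurements, and minimizes the normalized range loss plus the cosine angular loss of Eq. (4) with SciPy's L-BFGS-B optimizer. Without anchors the recovered coordinates are determined only up to a global translation.

```python
# Sketch of Eqs. (4)-(5): joint range + angle localization on synthetic 2D data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N = 10
true_xy = rng.uniform(0.0, 100.0, size=(N, 2))                 # ground-truth node locations
edges = [(i, j) for i in range(N) for j in range(i + 1, N)]    # all pairs measured here

def range_and_angle(xy, i, j):
    d = xy[j] - xy[i]
    return np.hypot(d[0], d[1]), np.arctan2(d[1], d[0])

r_hat  = {e: range_and_angle(true_xy, *e)[0] + rng.normal(0, 0.10) for e in edges}
th_hat = {e: range_and_angle(true_xy, *e)[1] + rng.normal(0, 0.02) for e in edges}
r_scale = sum(r_hat.values())                                  # normalizer for the range term

def joint_loss(flat_xy):
    xy = flat_xy.reshape(N, 2)
    r_loss = a_loss = 0.0
    for e in edges:
        r, th = range_and_angle(xy, *e)
        r_loss += (r_hat[e] - r) ** 2
        a_loss += 1.0 - np.cos(th_hat[e] - th)                 # Eq. (4): smooth angular loss
    return r_loss / r_scale + a_loss                           # Eq. (5): normalized joint loss

res = minimize(joint_loss, rng.uniform(0, 100, 2 * N), method="L-BFGS-B")
est_xy = res.x.reshape(N, 2)
print(res.fun)   # small residual; est_xy matches true_xy up to a common translation
```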
The range loss can be much higher compared to the angular loss, which are in the range \([0,1]\), and can dominate the overall gradient/overshadow the angular loss. Hence, we normalize the range loss function before adding the angular loss part. We conducted a simulation analysis to assess the benefit of integrating angles with ranges in topology constraints. Through the simulation involving a 50-node topology within a \(100m\times 100m\) region, we see the benefits of this integration. In the simulation we progressively add edges to the graph and run the optimization it using two distinct constraints: ranges only, and a combination of ranges and angles. The results, as shown in Figure 3, reveal that utilizing only ranges necessitated more than triple the number of edges to achieve a level of accuracy comparable to that of our method of combining both ranges and angles. Figure 4 shows a 4.5\(\times\) improvement in latency over a range-only methos with no compromise in accuracy. ### Scaling to Large networks In this section, we detail how _Locate3D_ can localize large node networks efficiently. To adapt to city-scale networks, we address four significant challenges linked with large networks. \(\bullet\)_Optimal Edge Selection_: Identifying the optimal edges to sample in the graph topology is vital to form a unique, rigid structure, while optimizing latency and accuracy. \(\bullet\)_Handling Erroneous Angles_: While in theory all measured edges should have angle data, the limited FOV of commercial antenna arrays can sometimes result in inaccurate angle information. Correcting these errors is vital as they can adversely affect the graph's rigidity constraints. \(\bullet\)_Rigidity Decomposition_: Large networks frequently consist of flexible or unconstrained edges which leads to smaller but rigid subgraphs. We need to identify and solve these subgraphs individually. Next, we will elaborate on how _Locate3D_ mitigates these challenges to scale to larger networks, introducing techniques that could reduce _Locate3D_'s latency by up to \(4.2\times\). #### 2.2.1 Optimal Edge Selection During any measurement time iteration, _Locate3D_ compiles a list of optimal edges that are to be measured within the graph. These edges are chosen based on three criteria - (1) They form a rigid and unique topology, eliminating flexibility or ambiguity. (2) They are minimal in number to decrease overall latency. (3) They are feasible, meaning the edges fall within the necessary range and angle FoV. In a topology comprising n nodes, a maximum of \(n(n-1)\) edges are potentially available. However, acquiring data for each possible edge isn't viable as it is time-consuming and the increases exponentially as \(n\) increases. Hence, rather than overconstraining the topology, we can create the topology with significantly fewer edges. According to Laman's condition [29], for a system that doesn't utilize angles, the minimum number of edges equals to \(2n-3\) and \(3n-4\) for 2D and 3D setups respectively. Interestingly, our approach necessitates only a _single_ edge per node to fully constrain it, amounting to only \(n-1\) edges in total. This efficiency stems from the fact that a single edge encapsulates three constraints - range, azimuth angle, and elevation angle. Moreover, any random \(2n-3\) edges may not always be enough, as some edges could be redundant. Rather we need _well-distributed_\(2n-3\) edges to make the topology accurate. 
This essentially simplifies our problem, rendering it closely analogous to the _Minimum Spanning Tree problem_ found in graph theory [19]. **Minimum Spanning Tree:** The Minimum Spanning Tree (MST) is a subset of the graph's edges connecting all nodes with the least total edge weight. In our system, we utilize Kruskal's algorithm, a greedy algorithm that arranges the graph's edges in increasing order of their weights and adds edges to the MST as long as they do not form a cycle, thus guaranteeing the minimum combined edge weights. **Edge Weight Calculation:** To minimize the localization error in our MST, we have devised a sophisticated weight assignment algorithm that advances beyond existing noise distribution-based methods [14]. This algorithm considers not only range but also the availability of angles to more accurately determine the _circle of uncertainty_ -- a metric that encapsulates the potential location uncertainty induced by each edge. Accordingly, the weight of the edge, denoted as \(w\), is assigned. The radius of this circle, denoted as \(r\), is formulated by integrating factors such as the edge range value, the angle FOV binary factor (\(f_{\theta}\)) capturing the antenna-array's field-of-view, and the line-of-sight noise distribution (\(\sigma_{\text{los}}\)). Figure 4: Using angles significantly improves the latency of the system while maintaining the accuracy. Figure 5: Median localization error for varying number of edge measurements per node. Figure 3: Comparative analysis of constraints: Incorporating angles markedly reduces the number of edges required to attain the same level of accuracy as the ”ranges only” approach Thus, the edge weight is defined as \(w=\pi r^{2}f_{\theta}\sigma_{\text{los}}\). We describe the process in Algorithm 1. It should be noted that in the proposed algorithm, edges with shorter distances inherently have a smaller circle of uncertainty. Moreover, edges with available angle information are favored above those restricted to range data only. Furthermore, Figure 6 present a comparison of the localization error between our optimal spanning tree and all other possible spanning trees, showing that our algorithm's resulting graph ranks within the top 1% of all spanning trees. #### 2.2.2 Handling Erroneous Angles In practical settings, nodes may not consistently measure each other's ranges or angles due to limited sensing range or angular FOV. This is due to the limited FOV of antenna arrays present in commonly used UWB modules like Decawave [51, 52] and NXP [45].These modules typically restrict the angle-of-arrival field of view (FOV) to between 90\({}^{\circ}\) and 120\({}^{\circ}\) to maintain a respectable accuracy in measured AoA, primarily because the AoA estimation algorithm, which depends on the phase difference of the incoming signal, can result in significant errors when the AoA approaches the broadside of the antenna. Figure 8 shows the reported angles by an off-the-shelf UWB sensor [45]. To address this, _Locate3D_ tracks the rotation of each node using its inertial sensors. Utilizing this rotation data we can determine if any two nodes are within each other's FOV and then flag the angles as valid or erroneous accordingly. #### 2.2.3 Rigidity Decomposition While previous sections operate under the presumption of network topology rigidity, ensuring a unique realization of topology with given constraints, this isn't always the case. Notably, the optimal spanning tree derived earlier doesn't inherently ensure rigidity. 
Even fully connected graphs can remain non-rigid due to absent constraints, as illustrated in Figure 7 where spanning trees lack rigidity due to missing angle constraints. To address this, we develop a rigidity based graph decomposition technique that segments the graph into smaller _rigid subgraphs_. These subgraphs can then be solved with refined edge weights ensuring that the topology is both rigid and uniquely realizable. **Quantifying Rigidity:** Rigidity of a topology determines whether it is possible to uniquely determine the location of all nodes in the topology with respect to other node. This Figure 8: Reported angles by an off-the-shelf UWB sensor. Figure 6: Histogram of localization errors for all spanning-trees. Figure 7: Different spanning trees representing rigid and non-rigid graphs. Solid lines indicate both range+angle edge, dashed lines indicate range only edge. (a) A connected but non-rigid graph due to missing angle information in an edge. (b) The subgraph is free to rotate. (c) Adding a range measurement makes the graph rigid. leaves us with some trivial transformations of topology like translations and rotations in the space. The rigidity can be quantified using the degree of freedom (DoF) of the graph. Each node has three DoFs, representing movements in the x, y, and z directions. Thus, a graph with \(n\) nodes inherently has \(3n\) DoFs. As constraints are imposed, the available DoFs decrease. A graph is deemed rigid if its DoFs are \(\leq 3\), with these residual DoFs accounting for the whole-graph translational motions. **Eigenanalysis of Rigidity Matrix:** To identify independent and unconstrained motions that the topology can perform without violating the constraints, we perform the eigenvector analysis of the topology. To capture the constraints in a compact matrix form, we first introduce the rigidity matrix, \(R\). Every row of \(R\) denotes a unique constraint equation, which could be a distance or angle constraint between nodes. Each column of \(R\) corresponds to the \(x\), \(y\) and \(z\) coordinates for each node in the graph. Thus, for a graph having n nodes, the matrix expands to have \(3n\) columns. Each entry \(r_{ij}\) in this matrix is determined by the partial derivatives of the constraint equations 2 and 3, as given by: \[r_{ij}=\begin{cases}\frac{\partial d_{ij}}{\partial x_{m}}&\text{if edge $ij$ is a distance constraint}\\ \frac{\partial\partial y_{i}}{\partial y_{m}}&\text{if edge $ij$ is an angle constraint}\\ 0&\text{otherwise}\end{cases}\] \[R=\begin{bmatrix}\frac{\partial d_{1,2}}{\partial x_{1}}&\frac{\partial d_{1,2 }}{\partial y_{1}}&\dots&\frac{\partial d_{1,2}}{\partial x_{n}}&\frac{ \partial d_{1,2}}{\partial y_{n}}\\ \frac{\partial d_{2,3}}{\partial x_{1}}&\frac{\partial d_{2,3}}{\partial y_{1} }&\dots&\frac{\partial d_{2,3}}{\partial x_{n}}&\frac{\partial d_{2,3}}{ \partial y_{n}}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ \hline\partial\theta_{1,2}&\frac{\partial\theta_{1,2}}{\partial x_{1}}&\frac{ \partial\theta_{1,2}}{\partial y_{1}}&\dots&\frac{\partial\theta_{1,2}}{ \partial x_{n}}&\frac{\partial\theta_{1,2}}{\partial y_{n}}\\ \frac{\partial\theta_{2,3}}{\partial x_{1}}&\frac{\partial\theta_{2,3}}{ \partial y_{1}}&\dots&\frac{\partial\theta_{2,3}}{\partial x_{n}}&\frac{ \partial\theta_{2,3}}{\partial y_{n}}\\ \vdots&\vdots&\ddots&\vdots&\vdots\end{bmatrix}\] Let \(\lambda_{1},\lambda_{2},\dots,\lambda_{m}\) be the eigenvalues of the matrix \(M=RR^{T}\), where \(M\) is the symmetric and positive semi-definite matrix of \(R\). 
The topology is uniquely rigid if the number of zero eigenvalues of \(M\) is equal to \(3n-3\). The count of zero eigenvalues, or the degree of freedom, can be represented as: \[DoF=\sum_{i=1}^{m}\delta(\lambda_{i}) \tag{6}\] where, \[\delta(\lambda)=\begin{cases}1&\text{if $|\lambda|<\epsilon$}\\ 0&\text{otherwise}\end{cases}\] Here, \(\epsilon\) is a small positive number close to zero, chosen to account for numerical inaccuracies (e.g., due to floating-point representation limits in computers). If \(|\lambda_{i}|<\epsilon\), it is considered a zero eigenvalue. Figure 9 illustrates the eigenvectors corresponding to the three zero eigenvalues \(\lambda_{1},\lambda_{2},\) and \(\lambda_{3}\) showing the motion of each node. **Constructing Subgraphs:** By examining the zero eigenvalues, we gain insights into the _displacement vectors_ of each node. For a fully rigid topology, these vectors maintain consistent magnitudes and directions across nodes. In contrast, non-rigid graphs display varied displacements. Our core intuition is that displacement vectors can be used for decomposition: nodes that have identical displacement vectors in terms of magnitude and direction inherently form a rigid subset. This means they can only undergo collective motion. To extract and group these subsets, we first reshape the eigenvector into a 3xn matrix to extract the displacement vectors of each node. Each row within this matrix signifies the displacement of a node across the three axes. Next, we identify the unique displacement vectors in this matrix where every unique vector signals the presence of a distinct "rigid subgraph" within the primary graph. By grouping the nodes corresponding to each unique displacement vector within the matrix, we find the vertices belonging to the associated "rigid subgraph". **Critical Edges List:** Beyond recognizing rigid subgraphs, our approach also pinpoints the edges connecting the separate subgraphs. We term these edges as 'critical edges', as these edges are crucial for rigidity, serving as the connectors between independent rigid substructures. For enhanced analysis in subsequent iterations, we also prioritize these critical edges based on their latest computed locations. This prioritized list of edges underlines those edges that are more likely to come into proximity with one another, thus enhancing the overall scalability and rigidity of the graph. ### Reference Frame Transformation In the above section, we assume that all nodes shared a common frame of reference. Here, we relax this assumption and address the challenges arising from differing local frames of reference for each node. Every node measures angles based on two factors: its relative orientation (defined by its antenna angle concerning another node's antenna) and the angle-of-arrival. The challenge lies in the fact that these angle measurements are in the node's local frame of reference, which moves according to the node's orientation within a broader, global context. Consider the 2D example in Figure 10. In scenario (a), Node 1 identifies Node 2 at an angle 0. But in scenario (b), after Node 1 turns by an angle \(\gamma\), it sees Node 2 at a different angle \(\theta^{\prime}\). This change in reported angles is due to Node 1's rotation, even though the positions of the nodes didn't change. This issue gets complex in 3D, where nodes can rotate in any direction. Our goal is to deconstruct these angle measurements from orientation offsets and align all nodes to a singular, shared frame of reference. 
#### 2.3.1 Estimating Orientation Offsets

Every node's orientation in a 3D space is defined by a set of three rotation angles. Specifically, they depict the node's rotations about its X, Y, and Z axes and are recognized as the Roll (\(\alpha\)), Pitch (\(\beta\)) and Yaw (\(\gamma\)) [74]. To estimate a node's orientation, we leverage a fundamental observation: when the orientations of two nodes are aligned with the same global frame, the azimuth angles-of-arrival (\(\theta\)) and elevation angles-of-arrival (\(\phi\)) they report for each other are tied together, obeying the conditions \(\theta_{ij}-\theta_{ji}=\pi\) and \(\phi_{ij}=-\phi_{ji}\). Using this insight, we formulate an optimization problem with the goal of estimating the rotation offsets \((\alpha,\beta,\gamma)\). The objective is to minimize the deviation from the above-mentioned constraints:

\[\min_{\alpha,\beta,\gamma}\sum_{i,j}(\hat{\theta}_{ij}-\hat{\theta}_{ji}-\pi)+(\hat{\phi}_{ij}+\hat{\phi}_{ji}) \tag{7}\]

In this equation, \(\hat{\theta}_{ij}\) and \(\hat{\phi}_{ij}\) denote the rotated azimuth and elevation angles after node \(i\)'s measurements have been rotated by \(\alpha_{i}\), \(\beta_{i}\), and \(\gamma_{i}\). Solving this optimization provides the orientation offsets required to align all nodes in the system with the global frame of reference. The residual angles, \(\hat{\theta}\) and \(\hat{\phi}\), thus correspond to the AoAs in the global frame. Next, we break down the steps of the optimization, detailing how we compute \(\hat{\theta}\) and \(\hat{\phi}\) by rotating the local frames. A rotation in 3D can be decomposed into three unique rotations around the x, y, and z basis axes.

_Step 1 -_ Determine the unit vector based on the estimated azimuth and elevation angles:

\[U=\begin{bmatrix}\hat{x}\\ \hat{y}\\ \hat{z}\end{bmatrix}=\begin{bmatrix}\cos(\phi)\cos(\theta)\\ \cos(\phi)\sin(\theta)\\ \sin(\phi)\end{bmatrix}\]

_Step 2 -_ Apply the rotation matrices [62] to adjust the unit vector using the computed roll, pitch, and yaw angles (\(\alpha\), \(\beta\), and \(\gamma\)). This results in a new unit vector, \(V\), which reflects the node's orientation after the adjustment.

\[V=R_{z}(\gamma)R_{y}(\beta)R_{x}(\alpha)U\]

\[V=\begin{bmatrix}\cos\gamma&-\sin\gamma&0\\ \sin\gamma&\cos\gamma&0\\ 0&0&1\end{bmatrix}\begin{bmatrix}\cos\beta&0&\sin\beta\\ 0&1&0\\ -\sin\beta&0&\cos\beta\end{bmatrix}\begin{bmatrix}1&0&0\\ 0&\cos\alpha&-\sin\alpha\\ 0&\sin\alpha&\cos\alpha\end{bmatrix}U\]

_Step 3 -_ Finally, compute the azimuth and elevation angles from the rotated vector \(V=(x,y,z)^{T}\) to obtain the corrected angle measurements:

\[\begin{bmatrix}\hat{\theta}\\ \hat{\phi}\end{bmatrix}=\begin{bmatrix}\tan^{-1}(y/x)\\ \tan^{-1}\!\big{(}z/\sqrt{x^{2}+y^{2}}\big{)}\end{bmatrix}\]

If we already have roll and pitch data from another source, the optimization becomes more straightforward. Instead of considering all three rotation angles, we only solve for the yaw angle, represented by \(\gamma\). As shown in Figure 11, when two nodes are perfectly aligned with the global frame of reference, the difference between their reported AoAs is a constant, \(\pi\). However, if these nodes are rotated by certain yaw angles, their reported AoAs shift by exactly those orientation offsets.

\[\min_{\gamma}\sum_{i,j}\big{(}(\theta_{ij}+\gamma_{i})-(\theta_{ji}+\gamma_{j})-\pi\big{)} \tag{8}\]

Using these new offsets, we can now adjust the reported AoA readings to align with the global frame of reference.
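A minimal numpy sketch of Steps 1-3 follows; the specific angle and offset values are illustrative assumptions. It forms the unit vector from a measured azimuth/elevation pair, rotates it by candidate roll/pitch/yaw offsets using the standard rotation matrices, and reads back the corrected angles.

```python
import numpy as np

def rotate_aoa(theta, phi, roll, pitch, yaw):
    # Step 1: unit vector from azimuth (theta) and elevation (phi).
    u = np.array([np.cos(phi) * np.cos(theta),
                  np.cos(phi) * np.sin(theta),
                  np.sin(phi)])
    # Step 2: V = Rz(yaw) Ry(pitch) Rx(roll) U.
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(roll), -np.sin(roll)],
                   [0, np.sin(roll),  np.cos(roll)]])
    Ry = np.array([[ np.cos(pitch), 0, np.sin(pitch)],
                   [0, 1, 0],
                   [-np.sin(pitch), 0, np.cos(pitch)]])
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0, 0, 1]])
    v = Rz @ Ry @ Rx @ u
    # Step 3: recover the corrected azimuth and elevation from the rotated vector.
    theta_hat = np.arctan2(v[1], v[0])
    phi_hat = np.arctan2(v[2], np.linalg.norm(v[:2]))
    return theta_hat, phi_hat

# Example: an AoA of (60 deg azimuth, 10 deg elevation) measured by a node that
# is yawed 25 deg away from the global frame (illustrative values).
print(np.degrees(rotate_aoa(np.radians(60), np.radians(10), 0.0, 0.0, np.radians(25))))
```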
The simplification is grounded in the insight that when two nodes are aligned with the global frame of reference (implying that the \(\gamma\) angles are zero), the AoAs reported by these nodes follow the constraint \(\theta_{ij}-\theta_{ji}=\pi\). However, when the nodes are rotated by arbitrary yaw angles \(\gamma\), their reported AoAs adhere to a new relationship that accounts for the nodes' orientations: \((\theta_{ij}+\gamma_{i})-(\theta_{ji}+\gamma_{j})=\pi\), as also shown in Figure 11. By initializing the optimization process with \(\gamma_{1}=0\), all subsequent orientations are determined within the reference frame of the first node, resulting in precise orientation estimates for all nodes. Subsequently, by subtracting the estimated yaw angle from the reported AoA readings (\(\hat{\theta}_{ij}=\theta_{ij}-\gamma_{i}\)), we obtain AoAs that are relative to the global frame of reference.

Figure 9: The displacements corresponding to zero eigenvalues represent the translational and rotational motions that the nodes can undergo without violating any constraint.

## 3 Opportunistic Anchor Integration

_Locate3D_ is specifically designed not to require a static anchor, but it can seamlessly integrate an existing anchor to refine the location and orientation results and to transition from relative to global coordinates. This capability also offers an edge over range-only systems, which require at least three anchors for global coordinates, a challenging requirement in highly mobile environments. We categorize anchors into two types: (1) Static Anchors and (2) Virtual Anchors.

### Static Anchors

Static anchors are conventional infrastructure anchors with known locations. These anchors are given increased edge weight, enhancing their likelihood of being selected over volatile mobile edges. The impact of these anchors is further elaborated in the Evaluation section.

### Virtual Anchors

Since many new infrastructure cameras are connected over 5G, we take advantage of them to perform accurate localization in their vicinity and designate the users within a camera's field of view as _virtual anchors_. A virtual anchor is simply a user whose location is known with high confidence from another modality, such as an infrastructure camera.

**Technique:** Our registration technique uses correlation between user trajectories and incorporates human motion dynamics such as varying walking speeds and stationary periods. The matching draws on heading direction, cosine similarity of motion, and speed analysis. In Figure 12, we observe a scenario where some users are equipped with our node while others are not, all within the camera's field of view (FoV). Our primary objective is to accurately register the system's users in the camera's FoV. It is crucial to avoid misclassifications here, as any incorrect virtual anchor can misdirect our system's optimizer. To address this, we adopt a stringent approach to trajectory similarity; our aim is to ensure zero false positives (FP). As depicted in Figure 13, employing a higher threshold for registration effectively achieves zero FPs (a minimal sketch of this trajectory test is given below).
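The sketch below illustrates the trajectory test just described. The similarity score (a blend of per-step cosine similarity and speed agreement) and the threshold value are illustrative assumptions rather than the deployed pipeline.

```python
import numpy as np

def trajectory_similarity(track_a, track_b):
    """track_*: (T, 2) or (T, 3) positions sampled at the same timestamps."""
    va, vb = np.diff(track_a, axis=0), np.diff(track_b, axis=0)   # per-step motion vectors
    na, nb = np.linalg.norm(va, axis=1), np.linalg.norm(vb, axis=1)
    moving = (na > 1e-3) & (nb > 1e-3)                            # ignore stationary periods
    if not np.any(moving):
        return 0.0
    cos = np.sum(va[moving] * vb[moving], axis=1) / (na[moving] * nb[moving])
    speed = 1.0 - np.abs(na[moving] - nb[moving]) / np.maximum(na[moving], nb[moving])
    return float(np.mean(0.5 * cos + 0.5 * speed))                # blend direction and speed

def register_virtual_anchors(node_tracks, camera_tracks, threshold=0.95):
    """Return (node_id, camera_track_id) pairs that pass the strict threshold."""
    matches = []
    for nid, nt in node_tracks.items():
        scores = {cid: trajectory_similarity(nt, ct) for cid, ct in camera_tracks.items()}
        cid, best = max(scores.items(), key=lambda kv: kv[1])
        if best >= threshold:                                     # favour zero false positives
            matches.append((nid, cid))
    return matches

# Synthetic example: node 0's odometry matches the camera track "c1".
t = np.linspace(0, 45, 90)[:, None]
node_tracks = {0: np.hstack([t, np.sin(t)])}
camera_tracks = {"c1": np.hstack([t, np.sin(t)]) + 0.02, "c2": np.hstack([t, np.cos(t)])}
print(register_virtual_anchors(node_tracks, camera_tracks))
```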
By analyzing 45 seconds of motion data, we successfully register approximately 30% of visible users, ensuring the reliability of our system.

Figure 10: (a) When Node 1 is perfectly aligned with the global frame of reference, it reports that Node 2 is positioned at angle \(\theta\). (b) However, when Node 1 is rotated by \(\gamma\) relative to the global frame of reference in a 3D space, it reports a distinctly different angle \(\theta^{\prime}\) for the same Node 2.

Figure 11: The relation between AoAs reported by two nodes (a) when nodes are perfectly aligned, and (b) when nodes are oriented with angle \(\gamma\) relative to the global frame of reference.

Figure 12: Different scenarios of users visible to a static camera anchor.

Figure 13: Users registered for different thresholds of trajectory matching.

## 4 Implementation

To this end, we implement a prototype of _Locate3D_ and perform experiments in different small and large scale scenarios.

**Node prototype:** We developed prototype mobile nodes of _Locate3D_ to evaluate the performance. Each node consists of a Raspberry Pi 3 [49] for computation and an Intel RealSense T261 [24] to collect visual-inertial odometry data. We use the NXP SR150 UWB evaluation boards [45] to collect the range, azimuth and elevation AoA data. We collect the UWB measurements at a rate of 20Hz; the overall measurement rate depends on the number of nodes in the topology. All the nodes are time-synced using NTP.

**Software:** All the prototype nodes are part of a network and communicate with each other over a UDP socket. We collect and store the UWB range and AoA data and the VIO data from all nodes using a Python script. We then process all measurements offline using Matlab, implementing all our algorithms with Matlab's optimization toolbox.

**Ground truth:** We use a Vicon motion capture setup [68] consisting of 12 Vantage V8 cameras mounted on the ceiling to track and record the ground-truth locations and orientations of all nodes. The cameras use IR light to track reflective markers attached to the nodes. We use 9 markers per node to reliably track the nodes in all orientations. The measurement accuracy of the setup is under 1mm and the frame rate is 200Hz.

**Evaluation environments:** We evaluate the system's localization performance in various types of environments, including LOS and NLOS scenarios and varying lighting conditions. We evaluate the system in an indoor setting with up to 6 users walking naturally along random paths. The room dimensions of our test locations are 6m x 8m and 4m x 20m, with various kinds of furniture and equipment in the environment. During data collection, we also have additional users who are not part of the system walking in the environment to emulate a crowded and dynamic setting.

**Large scale simulation analysis:** We have also implemented a simulation testbed that draws from real-world peer-to-peer measurements, spanning various conditions like range variations, AoA discrepancies, and LOS/NLOS situations. Our dataset comprises both precise ground-truth values and their corresponding noisy range and AoA readings. During all our simulations, we adhere to these real-world measurements. For instance, in NLOS scenarios, nodes select an AoA reading from the NLOS subset that aligns with the particular distance and angle, incorporating contextual noise. We will open-source our new simulation software and the real-world datasets.
## 5 Evaluation

We conduct experiments under different conditions such as line-of-sight (LOS) and non-line-of-sight (NLOS), and with stationary and moving nodes. We evaluate the impact of varying numbers of edges and numbers of virtual anchors on localization performance. We have collected a total of 2 hours of real-world data with a motion capture setup to extensively evaluate the centimeter-level localization accuracy and sub-degree-level orientation accuracy of our system. We use the closest related work [38], which uses UWB+VIO for multi-user localization, as our baseline and compare the two in practical scenarios. We also perform large city-scale simulation analysis using the real-world data to evaluate how our system scales. Next, we provide further details of our evaluations.

Figure 14: Our evaluation setup. (a) The mobile sensor node is built on a Raspberry Pi, with a UWB antenna array, Intel RealSense VIO, IR markers for ground truth, and a battery pack. (b) One of the test environments with Vicon motion capture.

Figure 15: 3D localization performance of _Locate3D_ compared to Cappella [38], which depends on visual odometry along with UWB. Results show performance in different lighting conditions and in (a) dynamic and (b) static networks.

### Localization accuracy

**Comparison with Baseline (Camera based system):** Figure 15(a) shows the median localization error for _Locate3D_ and the baseline [38]. We compare the errors for varying lighting conditions to show that our system is agnostic to lighting conditions, whereas Cappella's localization errors increase as illumination decreases. This is mainly due to the baseline system's heavy reliance on visual features. Our approach depends on RF-based peer-to-peer measurements and is not affected by visual conditions. Similarly, we show in Figure 15(b) that when nodes are static and the baseline system does not have a trajectory tail, it fails to estimate 3D locations. _Locate3D_, on the other hand, does not rely on odometry and can estimate the complete 6DoF location and orientation for all nodes, whether static or mobile.

**City-level simulation:** We evaluate the performance of _Locate3D_ in city-scale scenarios by emulating a 3800 x 3800 x 200 meter 3D environment with real-world collected data infused into the simulation. We simulate a total of \(30,000\) nodes distributed randomly, as described in Section 4. Figure 16 shows the localization accuracy of the nodes with different numbers of infrastructure anchors. The median localization error is 1.3 meters. Furthermore, even when we reduce the number of anchors to a mere 0.05% of the total nodes, which is 15 anchors in the city, the localization error remains well within the decimeter range.

**Varying anchor density:** We investigate the impact of anchor density on the localization performance of _Locate3D_. To achieve this, we set up an emulated scenario consisting of 1,000 nodes distributed within a 3D space measuring 200 x 200 x 50 meters. Figure 17 reveals an exponential decay in localization error as the number of anchors increases. With only 4 anchors, the localization error drops below 2 meters.

**2D vs 3D accuracy:** We evaluate the 2D and 3D location estimation performance of _Locate3D_ by comparing the Euclidean distance errors of each node in the topology from the ground truth. Since our system is infrastructure-free and does not require any anchors for localization, it estimates the node locations in a local reference frame.
To compute the errors against the ground truth, we rotate and translate the estimated topology into a global reference frame and then evaluate the location error for each node in the topology. Figure 18(a) shows the cumulative density function (CDF) of the 3D and 2D localization errors. The median error for 2D is lower than for 3D; this is due to the additional height dimension that the optimization algorithm must solve for. This evaluation contains 83 minutes of real-world data covering different scenarios of LOS, NLOS, static and mobile nodes. Overall, the median errors are \(18cm\) and \(30cm\), and the \(90^{th}\) percentile errors are \(38cm\) and \(65cm\), for 2D and 3D respectively.

**Impact of virtual anchors:** Figure 18(b) shows the localization error with a varying number of virtual anchors registered by the system. Virtual anchors help in orienting the topology in an absolute coordinate space instead of a local coordinate space. Ideally, we need at least three anchors to remove ambiguities in the 3D orientation of the topology. But in our system, since we obtain the angles of the topology along with the locations, we can also peg the entire topology using only two anchors. When we have only one anchor to peg the topology, the main source of error is the rotation of the topology: registering the virtual anchor reduces the location error at the anchor itself, but any error in the angle of the virtual anchor rotates the entire topology. Hence the error in the one-anchor case is higher. As we register more anchors, the accuracy improves.

**LOS vs NLOS:** Figure 19(a) presents the CDF of the 2D and 3D localization errors in both line-of-sight and non-line-of-sight scenarios. In the NLOS scenario, there is a slight increase in localization error compared to the LOS scenario. This increase is primarily attributed to the higher noise levels in the signals received at the nodes due to obstructed signal paths and reflections. Specifically, in the 2D scenario, the median localization error rises from \(13cm\) in LOS to \(28cm\) in NLOS. Similarly, in the 3D scenario, the median localization error increases from \(31cm\) in LOS to \(39cm\) in NLOS.

Figure 16: City-scale analysis: Localization errors of \(30,000\) nodes in \(3800\times 3800\times 200\) meter 3D space. (a) Using 10% of nodes as anchors. (b) Using 0.05% of nodes as anchors.

Figure 17: City-scale analysis: Localization errors of a 1000-node topology in a \(200\times 200\times 50\) meter 3D space for varying numbers of static anchors.

Figure 18: (a) CDF of location errors. (b) 3D localization error for varying numbers of virtual anchors registered.

**Static vs Mobile nodes:** To assess the impact of node mobility on the performance of _Locate3D_, we conduct experiments comparing static and mobile node scenarios. Figure 19(b) presents the CDF of 2D and 3D localization errors in both static and mobile node configurations. In the mobile scenario, the system experiences a slight degradation in performance due to the changing environment and node vibrations. In the 2D scenario, the median localization error increases from a low of 6\(cm\) in the static setup to 13\(cm\) in the mobile scenario. Similarly, in the 3D scenario, the median localization error rises from 6.5\(cm\) in the static scenario to 26\(cm\) in the mobile scenario.

### Orientation Estimation accuracy

Figure 20(a) presents the CDF of the orientation estimation errors.
Accurate orientation estimation is highly important for our system, as these orientations play a critical role in transforming AoAs from the nodes' local frames of reference to the global frame of reference, which is then used for the localization of nodes within a 3D space. Figure 20(a) shows that _Locate3D_ consistently estimates node orientations accurately, with a median error of 4 degrees, which is competitive with an inertial sensor.

### Virtual Anchor Registration

_Locate3D_ opportunistically integrates virtual anchors by computing the correlation of trajectories, which facilitates the precise localization of nodes within a 3D space. In Figure 13, we demonstrate the selection of an optimal trajectory-matching threshold, which maximizes user registration while minimizing registration time. Figure 20(b) depicts the dynamic process of user registration over time. It is evident that, within the first 40 seconds, _Locate3D_ successfully registers 40% of the users, resolving any ambiguity in node locations.

### Ranging and AoA Performance

Localization errors inherently depend on the ranging and AoA accuracies. While our framework can be applied to any ranging technology, our implementation uses a UWB radio to obtain distances and AoAs, so here we report the ranging and angle estimation performance of the UWB sensor used. Figure 21 shows the ranging and AoA performance of our node for varying distances and incoming angles. We observe biases at certain angles, likely due to multipath, but the errors still remain within \(\pm\)5\({}^{\circ}\).

## 6 Discussion

**Fusing with Visual Inertial Odometry:** _Locate3D_ gives an option to opportunistically use VIO when available. While _Locate3D_ provides accurate 3D locations and orientations at slower intervals, VIO can provide odometry at 30Hz. Each node obtains fine-grained 3D tracking using on-device VIO, and we fuse the two, for example with Kalman or Bayesian filters, to obtain a more fine-grained trajectory.

**Cases when the subgraphs don't come in vicinity for a long time:** This case is largely handled by our critical-edge sampling method explained in Section 2.2; however, the subgraphs will accumulate drift errors if they rely on the IMU for a long period of time. One solution is to re-trigger the bootstrap from scratch and look for all possible edges to measure. In applications where anchor infrastructure is available, this problem is solved instantly, since each subgraph then has a higher probability of containing one of the static anchors in its network, thereby pegging the subgraphs relative to each other.

Figure 19: (a) LOS and NLOS localization errors. (b) Static vs Mobile nodes localization errors.

Figure 20: (a) CDF of orientation errors. (b) User registration over time.

**No practical limitation of angle field-of-view:** New advances in engineering these sensors can yield better 360\({}^{\circ}\) field-of-view sensors, for example using larger array sizes or more antennas to cover the full range. In such cases, the spanning-tree algorithm could assume a binary case of either a range+angle edge or no edge. It could also account for different noise distributions for different incoming angles. This does not require any major modification, and _Locate3D_ handles it as well, giving the optimal spanning tree.
**Assumption of initial locations:** We assume that for any subsequent time iteration, the system has the previous iteration's locations and orientations of the nodes in the network. While this assumption holds true most of the time, it may not be possible to create a complete graph at the initial t=0 instant, or the system may not have enough time to create the first dense graph. This kind of scenario depends on the application in which the system is deployed, and we leave this part for future work.

**Other RF modalities:** Our work is essentially a framework that can be applied to any modality, not just UWB; any RF or acoustic signals are also possible, since they have the capability to measure pairwise ranges [34, 35, 48, 73].

## 7 Related Work

Over the past few decades, indoor positioning has attracted significant attention, and numerous studies have been conducted in this area. Existing work on indoor positioning can be broadly classified into two categories: infrastructure-based systems and infrastructure-free systems. In this section, we provide an overview of related work in both categories.

**Infrastructure-based systems:** Motion tracking cameras such as OptiTrack [1] and Vicon [3] are used to estimate the locations of several users simultaneously. These systems are often expensive and limited to a small pre-defined area of operation. In addition to motion cameras, there are passive markers such as ARTags [28, 41] and AprilTags [72, 78, 27], which are commonly used in AR technology to precisely localize various users and objects in the environment. These markers can be localized precisely using only a camera and do not demand high computational power. However, these tags are only functional when the camera has a clear view of the tag, which means a large number of tags must be deployed in an environment to achieve broader coverage. Apart from camera-based solutions, beacon-based solutions utilize UWB [43, 46, 61], Bluetooth [10, 30, 60], and ultrasound [32, 18, 30] ranging for accurate localization. These methods are often combined with odometry from either an IMU or VIO [13, 17, 36, 46, 55, 63, 70] to refine the location output and decrease the number of beacons necessary to cover the whole environment. Although these solutions can localize multiple users simultaneously, infrastructure-based systems are fundamentally limited to prepared environments where the systems have already been installed. _Locate3D_ adopts a strategy similar to these solutions; however, instead of depending on stationary beacons in the environment, it leverages peer-to-peer UWB ranges and angles among multiple users, which eliminates the need for any infrastructure.

**Infrastructure-free systems:** The idea of infrastructure-free localization is to determine the relative positions between users in a common coordinate system. Absolute localization in a global coordinate system inherently requires some infrastructure or reference, which is infeasible in many AR scenarios. Relative localization has been explored in sensor networks to locate both stationary [42, 56] and mobile [15, 40, 53] nodes. Infrastructure-free localization has also been studied in robotics for the localization of drones or robots, using visual object detection [44, 69, 80] or a combination of odometry and distance measurements [76, 77, 5, 20, 21, 33].
These techniques primarily use windowed graph-based optimization or online filtering to merge sensor data. Recent research [76, 77] has employed visual-inertial features and UWB ranges to solve windowed optimization problems for localization. However, most of these methods have only been evaluated in small environments with limited trajectories, where devices are mostly in line-of-sight (LOS) which makes them impractical for tracking over large areas. The work that is most closely related is Cappella [38], which proposes an infrastructure-free positioning system for multi-user AR applications. Cappella uses motion estimates from VIO and range measurements from UWB between users to achieve precise localization in a relative coordinate system. In contrast, _Locate3D_ utilizes both range and angle information from UWB to achieve 6DOF location estimation. Unlike Cappella, _Locate3D_ does not heavily rely on VIO, which can fail in low lighting conditions. Additionally, _Locate3D_ achieves localization in a single shot and does not depend on past motion estimates, which helps to prevent error accumulation. To accomplish _Locate3D_, a new optimization algorithm for localization has been developed that jointly optimizes both range and angle measurements from UWB. ## 8 Conclusion This paper presents _Locate3D_, a state-of-the-art system designed for 3D localization and orientation estimation for large networks. Our novel optimization algorithm integrates both range and angle-of-arrival measurements for enhanced network topology estimation, demanding fewer edge measurements. As a result, _Locate3D_ improves the latency by 75% without sacrificing accuracy, outperforming conventional range-only solutions.
2303.01620
Estimating Heterogeneous Causal Mediation Effects with Bayesian Decision Tree Ensembles
The causal inference literature has increasingly recognized that explicitly targeting treatment effect heterogeneity can lead to improved scientific understanding and policy recommendations. Towards the same ends, studying the causal pathway connecting the treatment to the outcome can be also useful. This paper addresses these problems in the context of \emph{causal mediation analysis}. We introduce a varying coefficient model based on Bayesian additive regression trees to identify and regularize heterogeneous causal mediation effects; analogously with linear structural equation models, these effects correspond to covariate-dependent products of coefficients. We show that, even on large datasets with few covariates, LSEMs can produce highly unstable estimates of the conditional average direct and indirect effects, while our \emph{Bayesian causal mediation forests} model produces estimates that are stable. We find that our approach is conservative, with effect estimates ``shrunk towards homogeneity.'' We examine the salient properties of our method using both data from the Medical Expenditure Panel Survey and empirically-grounded simulated data. Finally, we show how our model can be combined with posterior summarization strategies to identify interesting subgroups and interpret the model fit.
Angela Ting, Antonio R. Linero
2023-03-02T22:52:45Z
http://arxiv.org/abs/2303.01620v1
# Estimating Heterogeneous Causal Mediation Effects with Bayesian Decision Tree Ensembles ###### Abstract The causal inference literature has increasingly recognized that explicitly targeting treatment effect heterogeneity can lead to improved scientific understanding and policy recommendations. Towards the same ends, studying the causal pathway connecting the treatment to the outcome can be also useful. This paper addresses these problems in the context of _causal mediation analysis_. We introduce a varying coefficient model based on Bayesian additive regression trees to identify and regularize heterogeneous causal mediation effects; analogously with linear structural equation models, these effects correspond to covariate-dependent products of coefficients. We show that, even on large datasets with few covariates, LSEMs can produce highly unstable estimates of the conditional average direct and indirect effects, while our _Bayesian causal mediation forests_ model produces estimates that are stable. We find that our approach is conservative, with effect estimates "shrunk towards homogeneity." We examine the salient properties of our method using both data from the Medical Expenditure Panel Survey and empirically-grounded simulated data. Finally, we show how our model can be combined with posterior summarization strategies to identify interesting subgroups and interpret the model fit. ## 1 Introduction Estimation of heterogeneous causal effects from observational data is a topic of fundamental importance, with applications in personalized medicine (Obermeyer and Emanuel, 2016), policy recommendation (Athey, 2017), and social science (Yeager et al., 2019). A question of great recent interest in the causal inference literature is how best to leverage state-of-the-art prediction algorithms developed in the machine learning community to estimate heterogeneous treatment effects (Kunzel et al., 2019; Nie and Wager, 2021; Hahn et al., 2020). Much of this literature has focused on the question of how best to modify the estimation strategies used in the machine learning literature to be appropriate for inferring heterogeneous causal effects. A complementary approach to making better policy recommendations is to learn how a treatment of interest influences the outcome via its effects on downstream variables that are themselves causally linked to the outcome; this is referred to as _causal mediation analysis_ and the intermediate variables are referred to as _mediators_(Robins and Greenland, 1992; Pearl, 2001; Rubin, 2004). In addition to providing a sharper understanding of the causal mechanisms at play, we will see that causal mediation analysis can in some cases increase our power to detect causal effects. Similar questions about how to effectively leverage predictive algorithms have emerged in this field, with much of the focus on estimating _average_, rather than heterogeneous, mediation effects (Farbmacher et al., 2022; Linero and Zhang, 2022; Zheng and van der Laan, 2012; Tchetgen and Shpitser, 2012; Kim et al., 2017). To the best of our knowledge, there has been limited work at the intersection of these two settings, i.e., where one is interested in estimating treatment effect heterogeneity at the level of direct and indirect causal mediation effects using machine learning. The issue of estimating heterogeneity in treatment effects in the context of mediation analysis is referred to as _moderated mediation_(Muller et al., 2005). 
This topic has garnered significant attention in the social science literature, often utilizing linear structural equation modeling (LSEM). For example, (Preacher et al., 2007) and (Kershaw et al., 2010) applied moderated mediation using LSEMs to problems in education and health psychology, respectively. Estimating heterogeneous mediation effects in a nonparametric manner is a challenging task that relies on both strong assumptions regarding confounding and requires large amounts of data to reliably estimate the causal effects. There are important challenges in this context that need to be addressed, including: (i) determining how to properly regularize both the nuisance parameters and parameters of interst to ensure sensible results; (ii) developing methods to summarize the results of black-box fitting procedures in a meaningful way; and (iii) establishing reliable techniques to identify subgroups for which there is evidence of moderated mediation and to determine which variables are acting as effect modifiers. This paper proposes a two-layer extension of the Bayesian causal forests (BCF) algorithm for estimating heterogeneous mediation effects, which combines a standard BCF model for the mediator with a varying coefficient BART model for the outcome (Hahn et al., 2020; Deshpande et al., 2020). Our approach is motivated by the strong performance of BCFs in causal inference competitions and in practical applications (Dorie et al., 2019). Our approach directly parameterizes the models in terms of the direct and indirect effects of the treatment on the outcome. This allows us to "shrink towards homogeneity," stabilizing the estimation of the mediation effects. Our approach performs extremely well in regimes where treatment effects are nearly homogeneous, with small root-mean squared errors for individual-level mediation effects and credible intervals that attain close to the nominal rate of coverage for most individuals. Hence, our proposed approach provides a powerful tool for estimating heterogeneous mediation effects. ### The Medical Expenditure Panel Survey The Medical Expenditure Panel Survey (MEPS) is an ongoing large-scale survey administered by the Agency for Healthcare Research and Quality that aims to measure the healthcare system's use by patients, hospitals, and insurance companies. To demonstrate our proposed methodology, we employ the MEPS to investigate the health consequences of smoking. Specifically, we aim to answer the following questions: (i) does smoking have a causal impact on healthcare expenses overall? (ii) to what extent is this impact mediated (or not) by smoking's effect on overall health? and (iii) are there any moderating variables that affect the association between smoking and medical expenditures? In Section 4, we present an analysis of this dataset which yields a surprising finding: the total causal effect of smoking on medical expenditures can be masked by instability resulting from the estimation of the direct effect of smoking on healthcare costs. Although one might intuitively assume that the effect of smoking on medical expenditures is fully mediated by its impact on health, our analysis under sequential ignorability shows that the estimated direct effect of smoking on expenses is negative and largely counteracts the positive indirect effect of smoking on expenses; this direct effect is likely due to additional variables that we have not incorporated in the analysis. 
Additionally, we identify several variables, with age being the most important, that moderate the indirect effect of smoking on expenditures.

### Outline

In Section 2 we review the potential outcomes framework for mediation, the sequential ignorability assumption, the Bayesian additive regression trees (BART) framework, and Bayesian causal forests (BCFs). In Section 3 we define our Bayesian causal mediation forests model, and show how to use it to stably estimate the direct and indirect effects. In Section 4 we use our methodology to analyze the MEPS data to study mediation effect heterogeneity in the effect of smoking on health care expenditures as mediated by the effect of smoking on health, and conduct an empirically-designed simulation study to show that our method performs well in terms of coverage and estimation error for estimating both average and conditional average mediation effects. We conclude in Section 5 with a discussion and possible extensions. Computational details and further simulation results are given in the supplementary material.

## 2 Review of Mediation Analysis and BART

### Overview of Mediation Analysis

Mediation refers to the process through which a treatment (\(A\)) influences an outcome (\(Y\)) by acting through an intermediate _mediator_ variable (\(M\)), which occurs between the treatment and the outcome; a graphical representation is given in Figure 1.

Figure 1: A schematic representation of a treatment \(A\), mediator \(M\), outcome \(Y\), and confounders/effect modifiers \(X\). Arrows depict the direction of causality.

For example, let us consider the question of whether smoking affects medical expenditures directly and indirectly through its effect on health. Here, smoking status is a binary treatment (\(A\)), and the outcome of interest is the logarithm of medical expenditure (\(Y\)). Our aim is to break down the effect of smoking on medical expenditures into a _direct effect_ of smoking and an _indirect effect_ that is mediated by smoking's effect on overall health (measured as an individual's self-perceived quality of health). A natural hypothesis is that smoking does not directly cause higher medical expenditures but rather does so by reducing a person's overall health. Health is on the causal path between the treatment (smoking) and the outcome (medical expenditures) and hence is a mediator. Mediation analysis has been applied in many scientific fields including epidemiology, medicine, economics, and the social sciences (MacKinnon and Dwyer, 1993; Rubin, 2004; MacKinnon, 2008; Albert, 2008; Imai et al., 2010; VanderWeele, 2016). Much of this literature has focused on structural equation models (SEMs) to quantify mediation effects as products of coefficients in parametric models. In particular, linear structural equation models (LSEMs) have been widely used (Baron and Kenny, 1986; MacKinnon and Dwyer, 1993; MacKinnon, 2008). LSEMs have a major limitation in that the identification of the mediation effects is tied to the choice and correct specification of a particular parametric model, limiting their applicability. To address this limitation, Imai et al. (2010) proposed a nonparametric approach based on _potential outcomes_ (Rubin, 2004, 1974) that allows for the identification of average causal mediation effects under the assumption of _sequential ignorability_. This assumption
states that the treatment is independent of all potential values of the outcome and mediator given the covariates, and the observed mediator is independent of all potential outcomes given the observed treatment and covariates. By avoiding parametric assumptions, this framework provides a general estimation procedure that is agnostic to the choice of model for the outcome and mediator, making it applicable in a wide range of settings. For individuals \(i=1,...,n\) and treatment \(a\in\{0,1\}\), define the potential outcome \(M_{i}(a)\) as the value of the mediator that would have been observed had the individual received treatment \(a\). Note that for each individual, only one of \(M_{i}(0)\) or \(M_{i}(1)\) is actually observed. For treated individuals (\(A_{i}=1\)), \(M_{i}(0)\) is a _counterfactual_, i.e., the value of the mediator that would have been observed had the individual been untreated instead. Similarly, the potential outcome \(Y_{i}(a,m)\) is the value of the outcome that would have been observed had the individual received treatment \(a\) and had a mediator at level \(m\). For example, \(Y_{i}\{0,M_{i}(1)\}\) is the value of the outcome that would have been observed if the individual was not treated and had a value of the mediator at the same level they would have had if they were treated. We link the potential outcomes to the observed data through the _consistency_ assumption, which states that we observe the mediator \(M_{i}=M_{i}(A_{i})\) and the outcome \(Y_{i}=Y_{i}\{A_{i},M_{i}(A_{i})\}\). Because the values of \(Y_{i}\) and \(M_{i}\) are defined only in terms of the treatment \(a\) potentially received by individual \(i\) (and not on the treatment received by other individuals), this notation also implicitly states that there is no interference between units, which is known as the _Stable Unit Treatment Value_ (SUTVA) assumption. Using these potential outcomes, we can define the causal estimates of interest. In causal mediation analysis, we are particularly interested in estimating the natural direct and natural indirect effects (Pearl, 2001; Robins and Greenland, 1992). The _natural direct effect_ is defined as \[\zeta_{a}=E[Y_{i}\{1,M_{i}(a)\}-Y_{i}\{0,M_{i}(a)\}] \tag{1}\] and the _natural indirect effect_ is defined as \[\delta_{a}=E[Y_{i}\{a,M_{i}(1)\}-Y_{i}\{a,M_{i}(0)\}]. \tag{2}\] The natural direct effect isolates the effect of the treatment while keeping the potential mediator fixed, and can be interpreted as the effect that the treatment has directly on the outcome \(Y_{i}\). Conversely, the natural indirect effect isolates the effect of the potential mediator in response to different treatment values while keeping the treatment fixed, and can be interpreted as the effect that the treatment has indirectly on the outcome \(Y_{i}\) through the mediator \(M_{i}\). The total effect of the treatment on the outcome is a sum of the direct and indirect effects, and can be defined as \[\tau=\zeta_{0}+\delta_{1}=\zeta_{1}+\delta_{0}=E[Y_{i}\{1,M_{i}(1)\}-Y_{i}\{0, M_{i}(0)\}]. \tag{3}\] We can similarly define _conditional average_ variants of both the direct and indirect effects as \[\begin{split}\zeta_{a}(x)&=E[Y_{i}\{1,M_{i}(a)\}- Y_{i}\{0,M_{i}(a)\}\mid X_{i}=x]\qquad\text{and}\\ \delta_{a}(x)&=E[Y_{i}\{a,M_{i}(1)\}-Y_{i}\{a,M_{i}( 0)\}\mid X_{i}=x].\end{split} \tag{4}\] Most of our attention will be on the conditional average direct and indirect effects, as defined in (4). 
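To make these estimands concrete, the following minimal Python sketch simulates potential outcomes from a toy linear data-generating process (an assumption for illustration only, not the MEPS model or the model developed below) and evaluates the contrasts in (1)-(3) by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.uniform(0, 1, n)

def M_pot(a, X):                       # potential mediator M_i(a), toy model
    return 1.0 + 0.8 * X + 0.5 * a + rng.normal(0, 0.3, X.shape)

def Y_pot(a, m, X):                    # potential outcome Y_i(a, m), toy model
    return 2.0 * X + 0.4 * a + 0.7 * m + rng.normal(0, 0.3, X.shape)

M0, M1 = M_pot(0, X), M_pot(1, X)
zeta_0 = np.mean(Y_pot(1, M0, X) - Y_pot(0, M0, X))   # natural direct effect, Eq. (1)
delta_1 = np.mean(Y_pot(1, M1, X) - Y_pot(1, M0, X))  # natural indirect effect, Eq. (2)
tau = np.mean(Y_pot(1, M1, X) - Y_pot(0, M0, X))      # total effect, Eq. (3)

# Approximately 0.40, 0.35 (= 0.7 * 0.5), and 0.75; note tau = zeta_0 + delta_1.
print(zeta_0, delta_1, tau)
```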
### Assumptions

Let the statement \([A\perp\!\!\!\perp B\mid C=c]\) mean that \(A\) is conditionally independent of \(B\) given that \(C=c\), let \(\mathcal{X}\) denote the sample space of \(X_{i}\), and let \(\mathcal{M}\) denote the sample space of \(M_{i}\). Following Imai et al. (2010), we make the following _sequential ignorability_ assumption throughout, allowing for the identification of the direct and indirect effects.

**SI1**: \(\{Y_{i}(a^{\prime},m),M_{i}(a)\}\perp\!\!\!\perp A_{i}\mid X_{i}=x\) for \(a,a^{\prime}=0,1\) and all \(x\in\mathcal{X}\).

**SI2**: \(Y_{i}(a^{\prime},m)\perp\!\!\!\perp M_{i}(a)\mid A_{i}=a,X_{i}=x\) for \(a,a^{\prime}=0,1\) and all \(x\in\mathcal{X}\).

**SI3**: \(\Pr(A_{i}=a\mid X_{i}=x)>0\) and \(f\{M_{i}(a)=m\mid A_{i}=a,X_{i}=x\}>0\) for \(a=0,1\) and all \(x\in\mathcal{X}\) and \(m\in\mathcal{M}\).

The first assumption states that, given the covariates, the treatment assignment is ignorable, i.e., it is independent of potential outcomes and potential mediators. This assumption is automatically satisfied when individuals are randomly assigned to treatment and control groups, but is not guaranteed to hold in observational studies, in which case researchers often collect as many pre-treatment confounders as possible so that treatment assignment ignorability is plausible after the differences in covariates between treatment groups are accounted for. The second assumption states that, given the observed treatment and covariates, the mediator is ignorable, i.e., it is independent of potential outcomes. This assumption, however, is not guaranteed to hold even in randomized experiments. In general, it cannot be directly tested from the data. The third assumption is a positivity assumption for the treatment and mediator, stating that the probability of receiving the treatment and control should be nonzero. Under SI1-SI3, we can identify the distribution of any counterfactual outcome \(Y_{i}\{a,M_{i}(a^{\prime})\}\) nonparametrically as

\[\begin{split}& f(Y_{i}\{a,M_{i}(a^{\prime})\}=y\mid X_{i}=x)\\ &=\int_{\mathcal{M}}f(Y_{i}=y\mid M_{i}=m,A_{i}=a,X_{i}=x)\,f(M_{i}=m\mid A_{i}=a^{\prime},X_{i}=x)\ dm\end{split} \tag{5}\]

for any \(x\in\mathcal{X}\) and \(a,a^{\prime}=0,1\) (Imai et al., 2010, Theorem 1). This allows us to make inferences about unobserved counterfactuals (left-hand side) using observed outcomes and mediators (right-hand side). Moreover, (5) is not dependent on a specific parametric model, and so can be applied to flexible (nonparametric) models.

### A Review of Bayesian Additive Regression Trees

We will use the Bayesian Additive Regression Trees (BART) model proposed by Chipman et al. (2010). Consider an unknown function \(r\) that predicts an output \(Y_{i}\) using a vector of inputs \(X_{i}\):

\[Y_{i}=r(X_{i})+\epsilon_{i},\quad\epsilon_{i}\sim N(0,\sigma^{2}). \tag{6}\]

BART models \(r(x)\) as a sum of \(m\) regression trees \(r(x)=\sum_{j=1}^{m}g(x;T_{j},M_{j})\) where \(T_{j}\) is a binary decision tree consisting of interior node decision rules as well as a set of terminal nodes, and \(M_{j}=\{\mu_{j1},\ldots,\mu_{jb_{j}}\}\) is a set of parameter values associated with each of the \(b_{j}\) terminal nodes of tree \(T_{j}\). Each \(x\) is associated with a single terminal node \(k\) of \(T_{j}\) and is then assigned the value \(g(x;T_{j},M_{j})=\mu_{jk}\). Under (6), \(E(Y_{i}\mid X_{i}=x)\) equals the sum of all the terminal node \(\mu_{jk}\)'s assigned to \(x\) by the \(g(x;T_{j},M_{j})\)'s.
For a comprehensive review of BART and its applications, see Hill et al. (2020). To apply BART it is necessary to specify a prior distribution over all the parameters of the sum-of-trees model, i.e., \((T_{j},M_{j})\) for \(j=1,\ldots,m\). This prior should regularize the fit by keeping individual tree effects from being disproportionately influential. The prior consists of two components: a prior for each tree \(T_{j}\) and a prior on the terminal nodes \(M_{j}\mid T_{j}\) where \(\pi(T_{j},M_{j})=\pi_{T}(T_{j})\pi_{M}(M_{j}\mid T_{j})\). The BART model then sets \((T_{j},M_{j})\stackrel{{\text{iid}}}{{\sim}}\pi(T,M)\). The prior \(\pi_{T}\) is determined by three variables: (i) the probability that a given node is an interior node, (ii) the distribution of the splitting variable assignments at each interior node, and (iii) the distribution of the splitting rule assignment in each interior node conditional on the splitting variable. For (i), the probability that a node at depth \(d\) is an interior node is \[\alpha(1+d)^{-\beta},\quad\alpha\in(0,1),\beta\in[0,\infty) \tag{7}\] with \(\alpha=0.95\) and \(\beta=2\) being a default that favors small trees. For (ii) and (iii), the distribution of the splitting variable assignments at each interior node and the distribution of the splitting rule assignment in each interior node conditional on the splitting variable are both given a uniform prior. For the prior on the terminal nodes, we assume \(\pi(M_{j}\mid T_{j})=\prod_{k=1}^{b_{j}}\pi_{\mu}(\mu_{jk})\). To specify \(\sigma_{\mu}\), in this paper we first shift and rescale the \(Y_{i}\)'s so that \(Y_{i}\) has mean 0 and variance 1. We then use the prior \[\pi_{\mu}(\mu_{jk})=N(\mu_{jk}\mid 0,\sigma_{\mu}^{2})\qquad\text{where} \qquad\sigma_{\mu}=\frac{3}{k\sqrt{m}}\] for a suitable value of \(k\), with default \(k=2\). Note that this prior shrinks the terminal node values \(\mu_{jk}\) towards zero and applies greater shrinkage as the number of trees \(m\) is increased, ensuring that each tree is a weak learner in the ensemble of trees. ### Bayesian Decision Tree Ensembles for Causal Inference BART has been seen to perform particularly well in causal inference problems for inferring heterogeneous and average treatment effects (Hill, 2011; Wendling et al., 2018; Dorie et al., 2019). For an outcome \(Y_{i}\), binary treatment \(A_{i}\), and confounder/modifier variables \(X_{i}\), Hill (2011) proposes the model \[Y_{i}(a)=\mu(X_{i},a)+\epsilon_{i},\quad\epsilon_{i}\sim N(0,\sigma^{2}). \tag{8}\] The effect of receiving the treatment is therefore given by \[E\{Y_{i}(1)-Y_{i}(0)\mid X_{i}=x\}=\tau(x)=\mu(x,1)-\mu(x,0).\] Given BART's strong predictive performance, Hill (2011) suggests using a BART prior for \(\mu(\cdot,\cdot)\) to flexibly model the outcome and hence obtain flexible treatment effect estimates. Hahn et al. (2020) note that successful predictive modeling depends largely on careful regularization, and extend the work of Hill (2011) by noting two shortcomings of the model (8): first, the correlation between the propensity score and \(\mu(x,a)\) can induce _regularization induced confounding_ (RIC), leading to highly biased causal estimates and, second, priors based on the parameterization (8) encode prior information that treatment effects are highly non-homogeneous. 
To mitigate RIC they develop a prior that depends on an estimate of the propensity score \(\widehat{\pi}_{i}\) as a 1-dimensional summary of the covariates, while to address non-homogeneity they reparameterize the regression as \[Y_{i}(a)=\mu(X_{i},\widehat{\pi}_{i})+a\,\tau(X_{i})+\epsilon_{i}\] where \(\mu(x,\widehat{\pi})\) captures the prognostic effect of the control variables \(X_{i}\) and \(\tau(x)\) is exactly the treatment effect. Independent BART priors are then placed on \(\mu(\cdot)\) and \(\tau(\cdot)\), with the prior on \(\tau(\cdot)\) encoding our prior beliefs about the degree of treatment effect heterogeneity. Linero and Zhang (2022) consider estimation of direct and indirect effects in causal mediation using BART models. Linked with the concept of RIC, they also show that naively specified priors can be highly _dogmatic_(Linero, 2021) in the sense of encoding a prior belief that, on average, the mediator and outcome potential outcomes are unconfounded (and hence that inference for the average mediation effects can proceed as though there were no confounding present). Prior dogmatism induces regularization-induced confounding by giving a strong prior preference to encourage the model to attribute causal effects on the outcome as being due to the treatment rather than the confounders. To address this issue, they include "clever covariates" \(\widehat{m}_{ai}=\widehat{E}(M_{i}\mid A_{i}=a,X_{i})\) into the outcome model for \(a\in\{0,1\}\). These clever covariates are analagous to the propensity score estimate \(\widehat{\pi}_{i}\) used to correct for RIC in the BCF. Linero and Zhang (2022) then introduce the _Bayesian causal mediation forests_ (BCMF) model \[\begin{split} Y_{i}(a,m)&=\mu_{y}(m,a,X_{i})+\epsilon_{ i}\\ M_{i}(a)&=\mu_{m}(a,X_{i})+\epsilon_{i}\end{split} \tag{9}\] where the functions \(\mu_{y}(\cdot,0,\cdot)\), \(\mu_{y}(\cdot,1,\cdot)\), \(\mu_{m}(0,\cdot)\), and \(\mu_{m}(1,\cdot)\) are given independent BART priors and the clever covariates \(\widehat{m}_{0i}\) and \(\widehat{m}_{1i}\) are included as predictors into the BART model for the outcome. While the model (9) accomplishes the goal of estimating average mediation effects well (i.e., it solves the problem of RIC), it does not appropriately control the degree of heterogeneity in the conditional average mediation effects. A contribution of this work is to use the insights behind the parameterization of BCFs to develop a model that applies seperate regularization to the direct and indirect effects. ## 3 BART for Heterogeneous Mediation Effects We now introduce our causal mediation analysis model; a "Bayesian backfitting" algorithm for fitting this model is given in the supplementary material. Analogous to BCFs, the models presented enable direct regularization of \(\delta_{a}(x)\) and \(\zeta_{a}(x)\). This type of direct regularization has been shown to be crucial in generating dependable estimates of heterogeneous causal effects in other contexts (Hahn et al., 2020; Nie and Wager, 2021). For numeric outcomes and mediators, we specify the models \[Y_{i}(a,m) =\mu(X_{i})+a\,\zeta(X_{i})+m\,d(X_{i})+\epsilon_{i}, \tag{10}\] \[M_{i}(a) =\mu_{m}(X_{i})+a\,\tau_{m}(X_{i})+\nu_{i} \tag{11}\] where independent BART priors are specified for \((\mu,\zeta,d,\mu_{m},\tau_{m})\). The mediator model (11) simply corresponds to a BCF model as proposed by Hahn et al. (2020), with \(\tau_{m}(x)\) corresponding to a heterogeneous causal effect of the treatment on the outcome. 
The outcome model (10), on the other hand, corresponds to a _varying coefficient BART_ (VC-BART) model as proposed by Deshpande et al. (2020), with the treatment \(A_{i}\) and mediator \(M_{i}\) entering linearly. The model (10)-(11) is a varying coefficient version of commonly used LSEMs, with the coefficients modeled nonparametrically as a function of \(X_{i}\). Because of this, the conditional average mediation effects are also expressible as products of coefficients as \[\zeta_{a}(x) =\zeta(x),\qquad\qquad\text{and}\] \[\delta_{a}(x) =\tau_{m}(x)\,d(x).\] Hence, this parameterization allows us to isolate the components \(\zeta(x)\) and \(\delta(x)\) and apply differing amounts of regularization to them. Note that (10)-(11) assumes that no interaction exists between the mediator and treatment in the outcome model, and hence \(\delta_{a}(x)\) and \(\zeta_{a}(x)\) do not depend on the treatment level \(a\), i.e., \(\delta_{0}(x)=\delta_{1}(x)\) and \(\zeta_{0}(x)=\zeta_{1}(x)\). For average effects, note that the marginal distribution of \(Y_{i}\{a,M_{i}(a^{\prime})\}\) is given by \[f\big{(}Y_{i}\{a,M_{i}(a^{\prime})\}=y\big{)}=\int f\big{(}Y_{i}\{a,M_{i}(a^{ \prime})\}=y\mid X_{i}=x\big{)}\,f(X_{i}=x)\ dx. \tag{12}\] It is therefore necessary to specify a model for the distribution of the covariates. Often, when this distribution is not modeled explicitly, the empirical distribution is used instead as an estimate, i.e. \(F_{X}(dx)=\sum_{i}\omega_{i}\,\delta_{X_{i}}(dx)\) where \(\delta_{x}(\cdot)\) denotes a point-mass distribution at \(x\) and \(\omega_{i}=n^{-1}\). An alternative to the empirical distribution is the _Bayesian Bootstrap_(BB Rubin, 1981), which respects our inherent uncertainty in \(F_{X}\) while cleanly avoiding the need to model the distribution of the covariates. The BB is similar to the empirical distribution, but instead of setting \(\omega_{i}=n^{-1}\) we use an improper prior \(\pi(\omega)=\prod_{i}\omega_{i}^{-1}\); this leads to the posterior distribution \(\omega\sim\text{Dirichlet}(1,\ldots,1)\) for the weights. Under the BB the average effects are identified as \(\bar{\delta}=\sum_{i}\omega_{i}\,\delta(X_{i})\qquad\text{and}\qquad\bar{ \zeta}=\sum_{i}\omega_{i}\,\zeta(X_{i})\). ### Controlling Heterogeneity Through Prior Specification An important advantage of the models (10)-(11) relative to the model of Linero and Zhang (2022) is that we can shrink the model fits towards _homogeneous_ mediator and treatment effects through judicious choice of hyperparameters; after doing this, we can be confident that any heterogeneity we _do_ detect is well-supported by the data, rather than being the result of instability due to the use of nonparametric estimators. The degree of heterogeneity of the direct effect can be controlled via the prior specification for \(\zeta(x)\). Specifically, we can shrink \(\zeta(x)\) to a constant function, with few effect moderators, by (i) setting the parameter \(\alpha\) in (7) to a small value (say, \(\alpha=0.5\)) so that most trees do not include covariates and (ii) using a smaller number of trees (say, \(m=20\)). Using the same strategies, we can control the degree of heterogeneity in \(\tau_{m}(x)\), which represents the causal effect of the treatment on the mediator. The considerations for the indirect effects are slightly more complicated, as \(\delta(x)=d(x)\,\tau_{m}(x)\) consists of two components. Note that heterogeneity in \(\delta(x)\) is inevitable if \(\tau_{m}(x)\) is non-constant. 
However, if \(\tau_{m}(x)\) is constant then \(\delta(x)\) can be made homogeneous by shrinking \(d(x)\) towards a constant function. Accordingly, we adopt the same strategy for \(d(x)\) as we adopt for \(\zeta(x)\) and \(\tau_{m}(x)\): using a small number of trees and setting \(\alpha\) small. ### Modeling Non-Numeric Data Binary mediators can also be easily incorporated by using the nonparametric probit regression model \[[M_{i}(a)\mid X_{i}=x]\sim\text{Bernoulli}[\Phi\{\mu_{m}(x)+a\,\tau_{m}(x)\}],\] with BART priors again used for \((\mu_{m},\tau_{m})\). Under sequential ignorability and (10), we can identify the mediation effects as \[\zeta_{a}(x)=\zeta(x)\qquad\text{and}\qquad\delta_{a}(x)=d(x)\left[\Phi\{\mu_{m}( x)+\tau_{m}(x)\}-\Phi\{\mu_{m}(x)\}\right]\] where \(\Phi(\cdot)\) is the cumulative distribution function of a standard normal random variable. Similar expressions for the direct and indirect effects can also be computed when \([Y_{i}(a,m)\mid X_{i}=x]\sim\text{Bernoulli}[\Phi\{\mu(x)+a\,\zeta(x)+m\,d(x)\}]\) and \([M_{i}(a)\mid X_{i}=x]\sim N\{\mu_{m}(x)+a\,\tau_{m}(x),\sigma_{m}^{2}\}\) by noting that \[E[Y_{i}\{a,M_{i}(a^{\prime})\}\mid X_{i}=x]=\Phi\left(\frac{\mu(x)+a\,\zeta(x) +\{\mu_{m}(x)+a^{\prime}\,\tau_{m}(x)\}\,d(x)}{\sqrt{1+d^{2}(x)\,\sigma_{m}^{ 2}}}\right),\] which can be derived by noting that the probit model implies that \(E[Y_{i}\{a,M_{i}(a^{\prime})\}\mid X_{i}=x]=\Pr(\epsilon_{i}\leq\mu(x)+a\, \zeta(x)+M_{i}(a^{\prime})\,d(x)\mid X_{i}=x)\) where \(\epsilon_{i}\sim N(0,1)\) and \(\epsilon_{i}\) is independent of \(M_{i}(a^{\prime})\). This implies, for example, that when \(Y_{i}\) is binary and \(M_{i}\) is continuous we have \[\delta_{a}(x)=\Phi\left(\frac{\mu(x)+a\,\zeta(x)+d(x)\,\{\mu_{m}(x)+\tau_{m}(x) \}}{\sqrt{1+d^{2}(x)\,\sigma_{m}^{2}}}\right)-\Phi\left(\frac{\mu(x)+a\,\zeta( x)+d(x)\,\mu_{m}(x)}{\sqrt{1+d^{2}(x)\,\sigma_{m}^{2}}}\right).\] ### Corrections for Regularization Induced Confounding As shown by Linero and Zhang (2022), Bayesian nonparametric models for mediation are also subject to the same RIC phenomenon as models for observational data. They make the following recommendations to combat this: 1. Add an estimate of the propensity score \(\widehat{\pi}_{i}=\widehat{\Pr}(A_{i}=1\mid X_{i}=x)\) to both the outcome model and mediator model. 2. Add an estimate of the mediator regression function \(\widehat{m}_{ai}=\widehat{E}(M_{i}\mid A_{i}=a,X_{i}=x)\) to the outcome regression for \(a\in\{0,1\}\). See Linero and Zhang (2022) for an extensive discussion of why it is necessary to include these variables and a thorough simulation experiment. In principle it does not matter how \((\widehat{\pi}_{i},\widehat{m}_{0i},\widehat{m}_{1i})\) are obtained, aside from the fact that \(\widehat{\pi}_{i}\) should depend only on \((A_{i},X_{i})\) and \(\widehat{m}_{ai}\) should depend only on \((M_{i},A_{i},X_{i})\); we use BART to estimate these quantities. ### Summarizing the Posterior In addition to the (conditional) average mediation effects, it is also of interest to produce interpretable summaries of the fit of the BCMF model to the data. These summaries can help identify subpopulations that respond differently to the treatment, help us interpret the impact of the effect moderators, and provide insight into BCMF's predictive process. Woody et al. (2021) propose a general framework for posterior summarization based on projecting complex models onto interpretable surrogate models. 
For example, we might project the samples of \(\delta(x)\) onto an _additive function_ \(\gamma(x)=\alpha+\sum_{j=1}^{p}\gamma_{j}(x_{j})\), the idea being that if \(\gamma(x)\) is a good approximation to \(\delta(x)\) then we can use the interpretable structure of \(\gamma(x)\) to understand how \(\delta(x)\) makes predictions. For simplicity we focus on the indirect effect \(\delta(x)\) and consider two classes of summaries:

* An additive function, constructed as \(\widehat{\gamma}=\arg\min_{\gamma}\sum_{i}\{\delta(X_{i})-\gamma(X_{i})\}^{2}+q_{\lambda}(\gamma)\) where \(\gamma(x)=\alpha+\sum_{j}\gamma_{j}(x_{j})\) with each \(\gamma_{j}(x_{j})\) being a univariate spline. Here, \(q_{\lambda}(\gamma)=\sum_{j}q_{\lambda}(\gamma_{j})\) is a roughness penalty for the individual additive components. This summary can be computed by fitting a _generalized additive model_ (GAM; see Wood, 2006, for a review) with \(\{\delta(X_{i}):i=1,\ldots,n\}\) as the outcome and \(\{X_{i}:i=1,\ldots,n\}\) as the predictors.
* A decision tree summary, where \(\widehat{\gamma}\) is constructed by running the CART algorithm (Breiman et al., 1984), treating \(\{\delta(X_{i}):i=1,\ldots,n\}\) as the outcome and \(\{X_{i}:i=1,\ldots,n\}\) as the predictors.

CART summaries are useful for identifying subpopulations with substantially different treatment effects, while GAM summaries are useful for understanding the impact of the different predictors on the estimated effects in isolation. See Section 4 for an illustration of this approach on the MEPS dataset. As an overall measure of the quality of the summaries we use the squared correlation between \(\delta(x)\) and \(\gamma(x)\) given by

\[R^{2}=1-\frac{\sum_{i}\{\delta(X_{i})-\widehat{\gamma}(X_{i})\}^{2}}{\sum_{i}\{\delta(X_{i})-\widehat{\delta}\}^{2}}\]

where \(\widehat{\delta}=n^{-1}\sum_{i}\delta(X_{i})\); Woody et al. (2021) refer to \(R^{2}\) as the "summary \(R^{2}\)."

## 4 Medical Expenditure Panel Survey Data

We now apply our model to a subset of the Medical Expenditure Panel Survey (MEPS). We focus on the questions of whether (i) there is a causal effect of smoking on an individual's expected annual medical expenditures, (ii) there is evidence that the effect is entirely mediated by the effect of smoking on health, and (iii) whether any of the proposed confounders also act as modifiers of the indirect effect. We take the outcome \(Y_{i}\) to be the logarithm of an individual's annual net medical expenditure reported in the 2012 survey, the treatment \(A_{i}\) to be whether an individual is a smoker (yes or no), and the mediator \(M_{i}\) to be an ordinal measure of overall self-perceived health (1: excellent, 2: very good, 3: good, 4: fair, 5: poor). At the outset, we note that a naive two-sample \(t\)-test for a difference in medical expenditures between smokers and non-smokers shows that there is strong evidence (\(P\)-value \(<0.0005\)) that non-smokers pay _more_ in medical expenditures than smokers. Accordingly, it is important to control for confounders in assessing any causal relationships. Our model includes the following patient attributes as possible confounders:

* age: Age in years.
* bmi: Body mass index, which may act as a post-treatment confounder of health and medical expenditures.
* education_level: Education in years.
* income: Total family income per year.
* poverty_level: Family income as percentage of the poverty line.
* region: Northeast, West, South, or Midwest.
* sex: Male or female.
* marital_status: Married, divorced, separated, or widowed.
* race: White, Pacific Islander, Indigenous, Black, Asian, or multiple races.
* seatbelt: whether an individual wears a seatbelt in a car (always, almost always, sometimes, seldom, never, never drives/rides in a car).

The posterior distribution of the average direct and indirect effect is shown in Figure 2. We see that, under the sequential ignorability assumption, there is evidence of both a direct and indirect effect of smoking on expenditures. Interestingly, these effects are in _opposite directions_ and cancel each other out to a large extent. As a result, the sign of the total effect is uncertain. This illustrates an important potential benefit of a mediation analysis: we can establish a causal relationship between smoking and medical expenditures that we could not if we restricted attention strictly to the total effect.

### Posterior Summarization

We use the summarization strategies outlined in Section 3.4 to interpret the model fit and better understand the covariates and interactions contributing to the heterogeneity in the indirect effect; specifically, we project the indirect effect function \(\delta(x)\) onto a single regression tree and an additive function. We first consider a CART summary of the posterior mean of \(\delta(x)\), which was obtained on a preliminary model fit to the MEPS dataset. According to the regression tree summary in Figure 3, race, age, and sex are the most significant effect modifiers for the indirect effect.

Figure 3: Posterior summarization of the indirect effect using a single regression tree.

Motivated by the subgroups found in Figure 3, in Figure 4 we display the average indirect effects within various subgroups formed by age and race. We find that the largest indirect effects occur for white middle-aged individuals, while the smallest effects are for non-white young adults.

Figure 4: Posterior density for the average indirect effect within subgroups \(G_{1},G_{2},...,G_{5}\) from the terminal nodes in Figure 3.

Figure 5 and Figure 6 display the results obtained from the GAM summary for continuous and discrete variables, respectively. These figures again highlight the importance of age and race as effect modifiers, indicating that older and white individuals have higher indirect effects on medical expenditures mediated by perceived health status. In summary, both the CART and GAM summaries reveal that age and race are important effect modifiers. To measure the adequacy of the summary function approximations, Figure 7 presents both the posterior distribution of \(R^{2}\) obtained from fitting the summaries to each posterior sample of \(\delta(\cdot)\) and a single \(R^{2}\) obtained from fitting the summaries to the posterior mean of \(\delta(\cdot)\). Our analysis shows that the regression tree is slightly better than the GAM, suggesting that the interactions detected in Figure 3 provide important insight into the model's predictive process for \(\delta(\cdot)\).
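To make the projection step concrete, the following minimal Python sketch (not the authors' code; `X` and `delta_draws` are placeholders for the covariate matrix and posterior draws of \(\delta(X_{i})\)) fits a shallow CART-style summary and computes the summary \(R^{2}\) defined above; the GAM summary can be obtained analogously by swapping in a spline-based regressor.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def summary_r2(delta, gamma_hat):
    # Summary R^2 between delta(X_i) and its projection gamma_hat(X_i).
    return 1.0 - np.sum((delta - gamma_hat) ** 2) / np.sum((delta - delta.mean()) ** 2)

def cart_summary(X, delta, max_depth=3):
    # CART summary: a shallow tree fit with delta(X_i) as the outcome and X_i as
    # predictors (categorical covariates are assumed to be numerically encoded).
    tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, delta)
    return tree, summary_r2(delta, tree.predict(X))

# Summary of the posterior mean (a single tree, as in Figure 3):
# tree_mean, r2_mean = cart_summary(X, delta_draws.mean(axis=0))
# Posterior distribution of the summary R^2 (one projection per posterior draw):
# r2_post = np.array([cart_summary(X, d)[1] for d in delta_draws])
```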
### Comparison of BCMF and a LSEM

To evaluate the practical usefulness of the BCMF model (10)-(11), we compare its predictive performance to that of an LSEM with interactions between the treatment, mediator, and covariates,

\[\begin{split} Y_{i}(a,m)&=\beta_{0Y}+X_{i}^{T}\beta_{Y}+a(\gamma_{0Y}+X_{i}^{T}\gamma_{Y})+m(\xi_{0}+X_{i}^{T}\xi)+\epsilon_{i}\\ M_{i}(a)&=\beta_{0M}+X_{i}^{T}\beta_{M}+a(\gamma_{0M}+X_{i}^{T}\gamma_{M})+\nu_{i}.\end{split} \tag{13}\]

This model allows for heterogeneous mediation effects, but restricts them to linear functions of the confounders. To quantify the uncertainty of the LSEM estimates, we use the residual bootstrap. By comparing the predictive performance of these two models, we can assess whether the added complexity of the BCMF model is warranted.

Figure 5: Posterior summarization of the indirect effect using a GAM for the continuous variables. The projection of the posterior mean is given by the dashed line while the shaded area gives a posterior 95% credible band of the projection.

Figure 6: Posterior distributions of the impact of categorical variables on the indirect effect using the GAM projection. Solid lines give the posterior mean of the associated coefficient in the GAM model, thick bars are 66% credible bands, thin bars are 95% credible bands.

To understand the salient differences between the predictions made by the BCMF and LSEM, Figure 8 presents the estimates of \(\delta(X_{i})\) and \(\zeta(X_{i})\) for each individual, both in aggregate and stratified by race. The effect estimates of the BCMF are substantially less heterogeneous than the LSEM, with the LSEM estimating a substantial number of both positive and negative effects for both \(\delta(X_{i})\) and \(\zeta(X_{i})\). Additionally, we see substantially less heterogeneity across race; for example, the LSEM makes counterintuitive predictions about both the direct and indirect effect of smoking within the group of Pacific Islanders. While the MEPS dataset is large, there are relatively few Pacific Islanders in the data, and in the subset we analyzed only 13 of them smoke. By applying regularization, the BCMF shrinks the direct and indirect effects within this subpopulation closer to those of the other races.

Figure 7: Posterior distribution of the summary \(R^{2}\) for the regression tree and GAM summaries of \(\delta(x)\). The black line indicates the summary \(R^{2}\) for the posterior mean of \(\delta(x)\).

Figure 8: Top: Boxplot displaying the distribution of the estimated individual direct and indirect effects for the BCMF and LSEM models fit to the original MEPS dataset. Bottom: Boxplot displaying the distribution of the estimated individual effects for the BCMF and LSEM, stratified by race.

Next, we fit the BCMF and LSEM models to the same training set of \(n=8056\) individuals and compute predictions \((\widehat{Y}_{i,\text{lsem}},\widehat{M}_{i,\text{lsem}},\widehat{M}_{i,\text{bcmf}},\widehat{Y}_{i,\text{bcmf}})\) on the test set of \(n=8057\) individuals using the fitted models. We use these predictions on the test set to evaluate the performance of the models in three ways. First, we consider the correlation between \((M_{i},Y_{i})\) and their predictions on the test set.
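A minimal sketch of this first comparison (a hedged illustration, not the authors' code; the variable names are placeholders, with the prediction arrays assumed to come from the fitted BCMF and bootstrap-summarized LSEM):

```python
import numpy as np

def heldout_corr(y_true, y_pred):
    # Correlation between held-out observations and model predictions.
    return np.corrcoef(y_true, y_pred)[0, 1]

# R_test for the outcome and the mediator under each model (cf. Table 1):
# corr_outcome = {"bcmf": heldout_corr(Y_test, Y_hat_bcmf),
#                 "lsem": heldout_corr(Y_test, Y_hat_lsem)}
# corr_mediator = {"bcmf": heldout_corr(M_test, M_hat_bcmf),
#                  "lsem": heldout_corr(M_test, M_hat_lsem)}
```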
Second, we perform a paired Wilcoxon signed-rank test comparing the squared difference \((Y_{i}-\widehat{Y}_{i,\text{lsem}})^{2}\) to \((Y_{i}-\widehat{Y}_{i,\text{bcmf}})^{2}\) (and similarly for \(M_{i}\)). Results are given in Table 1, and we see both that the correlation is somewhat higher for the BCMF model than the LSEM model, and that the difference in performance was highly statistically significant according to the signed-rank test. Our third comparison considers _stacking_ (Wolpert, 1992) the predictions of the BCMF and LSEM by fitting the linear models \(Y_{i}=\beta_{0}+\beta_{1}\,\widehat{Y}_{i,\text{lsem}}+\beta_{2}\,\widehat{Y}_{i,\text{bcmf}}+\epsilon_{i}\) (and similarly for \(M_{i}\)). Results of the stacking procedure are given in Table 2. From this fit, we see that the stacked model relies much more heavily on the predictions from the BCMF than on those from the LSEM, and that the BCMF predictions are much more statistically significant than the predictions from the LSEM (in the sense that there is strong evidence that the BCMF predictor improves upon the LSEM predictor, while there is only weak evidence of the converse). Interestingly, the LSEM predictions _are_ found to be statistically significant, suggesting that a modification of the BCMF that also includes _linear_ adjustments for the confounders (i.e., includes linear terms \(x^{\top}b\) in the functions \((\mu(x),\zeta(x),d(x),\mu_{m}(x),\tau_{m}(x))\)) may improve the fit of the model.

\begin{table}
\begin{tabular}{l r r r} \hline \hline Variable & \(R_{\text{test}}\) (BART) & \(R_{\text{test}}\) (LSEM) & \(P\)-value \\ \hline \hline phealth & 0.445 & 0.419 & 0.0001 \\ log(Y) & 0.454 & 0.431 & 0.0005 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Held-out correlation for the mediator (phealth) and outcome (logY) across all individuals for the BART and LSEM fits, and the \(p\)-value for a paired Wilcoxon matched pairs signed-rank test comparing the predictive performance on held-out data for the two models.

\begin{table}
\begin{tabular}{l r r r r} \hline \hline Term & Estimate & Standard Error & Statistic & \(P\)-value \\ \hline \(\widehat{M}_{\text{lsem}}\) & 0.1561 & 0.0803 & 1.9447 & 0.0518 \\ \(\widehat{M}_{\text{bcmf}}\) & 0.8477 & 0.0809 & 10.4835 & \(<0.0001\) \\ \(\widehat{Y}_{\text{lsem}}\) & 0.1509 & 0.0725 & 2.0824 & 0.0373 \\ \(\widehat{Y}_{\text{bcmf}}\) & 0.9030 & 0.0747 & 12.0840 & \(<0.0001\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Coefficient estimates for the linear model that aggregates the linear and BART fits on the MEPS test data.

### Simulation Study

We now conduct a simulation study to better understand the operating characteristics of the BCMF model. Our study aims to answer the following questions: (i) Does the BCMF model perform better in terms of predictive accuracy in estimating the mediation effects? (ii) Do the credible intervals for \(\delta(X_{i})\) and \(\zeta(X_{i})\) attain coverage rates close to their nominal levels? (iii) Can the BCMF model estimate the effects accurately within the subgroups of the data identified by the CART summary?

#### Data Generating Mechanism

We use a data generating mechanism in which the confounders and treatment assignment are sampled directly from the MEPS dataset, while the mediator and outcome ground truths are obtained by fitting both our model and an LSEM to the data. To assess the performance of both methods, we replicated each simulation setting 200 times, with 8056 observations in the training set and 8057 in the testing set.
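To make the data-generating mechanism concrete, one replication can be sketched as follows (a Python sketch, not the authors' code; `ground_truth` and the entries of `fitters` are placeholder objects standing in for the fitted ground-truth model and the BCMF/LSEM fitting routines):

```python
import numpy as np

def one_replication(X, A, ground_truth, fitters, train_idx, test_idx, rng):
    # Keep the real MEPS covariates X and treatment A; simulate the mediator and
    # outcome from the chosen ground truth (BCMF, LSEM, or R-Learner fit).
    M = ground_truth.sample_mediator(X, A, rng)
    Y = ground_truth.sample_outcome(X, A, M, rng)
    truth = ground_truth.true_indirect_effect(X[test_idx])

    results = {}
    for name, fit in fitters.items():            # e.g. {"bcmf": ..., "lsem": ...}
        model = fit(X[train_idx], A[train_idx], M[train_idx], Y[train_idx])
        est, lo, hi = model.indirect_effect_intervals(X[test_idx])
        results[name] = {
            "bias": np.abs(est - truth).mean(),
            "rmse": np.sqrt(((est - truth) ** 2).mean()),
            "coverage": ((lo <= truth) & (truth <= hi)).mean(),
            "length": (hi - lo).mean(),
        }
    return results
```

Repeating this over the 200 replications, and analogously for the direct effect, yields the coverage, RMSE, absolute bias, and interval-length summaries reported below.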
We used the same training/testing split across all simulated datasets to evaluate the coverage probability of the confidence/credible intervals generated by each method. A crucial difference between the LSEM and BCMF models is that the LSEM does not regularize the mediation effects. Consequently, the LSEM produces a ground truth for the mediation effects that is more heterogeneous than is expected in practice, especially for subgroups of the population with a small sample size. For example, since the MEPS dataset includes few Pacific Islanders, the LSEM's estimate of the effect of race as an effect modifier is unstable for this group. To account for this, we also consider a third ground truth that is also an LSEM but with the parameters of (13) instead estimated using the R-Learner approach of Nie and Wager (2021), which uses the lasso to reduce the amount of heterogeneity.

#### Results: Individual Effects

We fit our model and the LSEM to each simulated dataset and measure point estimates of the effects, the limits and width of 95% credible intervals, and whether or not the interval captures the true parameter for each replication. Using the 200 replications, we then compute the root mean square error, absolute bias, average width of the intervals, and the coverage probability. We present the results of our simulation study in Figure 9 and Figure 10, where we compare the BCMF and LSEM models under different combinations of ground truth and fitted models.

Figure 9: Individual simulation results for \(\delta(x)\) under all combinations of fitting the BART/LSEM model under the BART/LSEM/RLEARN ground truths. Top left gives the coverage probability of nominal 95% credible intervals among all individuals, top right gives the root mean squared error, bottom left gives the absolute bias, and bottom right gives the average interval length.

\begin{table}
\begin{tabular}{l c r r r r} \hline \hline Setting & Method & Coverage & RMSE & Bias & Length \\ \hline BART & BART & 0.91 & 0.12 & 0.06 & 0.50 \\ BART & LSEM & 0.93 & 0.23 & 0.07 & 0.83 \\ LSEM & BART & 0.94 & 0.19 & 0.10 & 0.77 \\ LSEM & LSEM & 0.94 & 0.22 & 0.01 & 0.84 \\ RLEARN & BART & 0.88 & 0.13 & 0.07 & 0.50 \\ RLEARN & LSEM & 0.94 & 0.21 & 0.03 & 0.80 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Coverage probability, root mean square error, and absolute bias for \(\zeta(X_{i})\) across all individuals in the test set.

Figure 10: Individual simulation results for \(\zeta(x)\) under all combinations of fitting the BART/LSEM model under the BART/LSEM/RLEARN ground truths. Top left gives the coverage probability of nominal 95% credible intervals among all individuals, top right gives the root mean squared error, bottom left gives the absolute bias, and bottom right gives the average interval length.

Table 3 and Table 4 summarize the results from Figure 9 and Figure 10, respectively, across all individuals in the test set. When the BCMF is fitted to the BCMF ground truth, it outperforms the LSEM in terms of achieving close to the nominal coverage on interval estimates with substantially lower interval lengths, root mean squared error, and absolute bias. Interestingly, we also observed that the BCMF model is competitive in terms of root mean squared error when the LSEM is used to generate the data.
We conjecture that this is due to the fact that the data generating mechanism estimated by the LSEM fit to the original data, while still quite heterogeneous, is homogeneous enough (and the effects are small enough) that the benefits of the regularization of the BCMF outweigh the fact that the LSEM is correctly specified. We observe similar behavior, which is even more pronounced, when the R-Learner is used to generate the ground truth. While this shrinkage may reduce the power of the BCMF to detect strongly heterogeneous effects, it does not inflate the Type I error in detecting heterogeneity, making the BCMF model conservative in detecting heterogeneity.

#### Results: Average and Subgroup Average Effects

The BCMF also produces reliable estimates of the average mediation effects within subpopulations. We consider here both fixed and data-dependent subgroups obtained under the BCMF ground truth. The fixed subgroups are the groups identified by the terminal nodes in Figure 3: age \(\geq 67\), non-white and \(34\leq\) age \(<67\), non-white and age \(<34\), white and \(34\leq\) age \(<67\), and white and age \(<34\). The data-dependent subgroups are determined through posterior projection summarization, by fitting a tree and identifying the terminal node groups for each simulated dataset. A comparison of the inferences for the average effects under each simulation scenario is given in the supplementary material.

Table 5 shows the results of the simulation for both the fixed subgroups and for the data-dependent subgroups (labeled "Dynamic"). We see that the BCMF produces intervals whose coverage is close to the nominal level, with slightly poorer results in the non-white groups. Interestingly, the data-dependent groups have _higher_ coverage for the credible intervals, and in fact attain exact 95% coverage for both the direct and indirect effects. The intervals for the average effects \(\bar{\delta}\) and \(\bar{\zeta}\) also attain close to nominal coverage.

\begin{table}
\begin{tabular}{l r r} \hline Group & Indirect Effect & Direct Effect \\ \hline \(\texttt{age}\geq 67\) & 0.99 & 0.92 \\ \(\texttt{non-white},34\leq\texttt{age}<67\) & 0.96 & 0.88 \\ \(\texttt{non-white},\texttt{age}<34\) & 0.86 & 0.94 \\ \(\texttt{white},34\leq\texttt{age}<67\) & 0.88 & 0.92 \\ \(\texttt{white},\texttt{age}<34\) & 0.96 & 0.96 \\ \hline Average & 0.93 & 0.93 \\ Dynamic & 0.95 & 0.95 \\ \hline \end{tabular}
\end{table}
Table 5: Subgroup coverage probability of \(\zeta_{a}(X_{i})\) and \(\delta_{a}(X_{i})\) using the subgroups in Figure 3.

## 5 Discussion

In this paper we introduced a Bayesian causal mediation forest (BCMF) model that can separately identify and regularize the conditional average natural direct and indirect effects using varying coefficient models. Our approach is reminiscent of LSEMs, making it easy to identify these effects as products of varying coefficients. Additionally, we demonstrate that our model produces lower prediction error than a comparable LSEM on both real and simulated MEPS data. Furthermore, we argue that our model is conservative in estimating heterogeneity since it assumes small and mostly homogeneous mediation effects. We also provide posterior summarization methods for interpreting model fit and subgroup detection. To improve our methods and analysis, there are several directions one could take. First, we can improve the models for the outcome and mediator.
For instance, log medical expenditure exhibits heteroskedasticity, with the variance of \(Y_{i}\) depending on \(X_{i}\) in a complex way, as demonstrated by Linero et al. (2020). Additionally, the mediator in this problem is ordinal and empirically is well-approximated by a rounded normal distribution; we could therefore improve our model by using a cumulative probit model for \(M_{i}\) rather than a normal model. The impact of using a continuous model for \(M_{i}\) rather than an ordinal model is unclear, and warrants further investigation. Exclusion of individuals with no medical expenditure from the analysis (which we have done here) is problematic, as the likelihood of incurring medical expenditure is likely to be linked with smoking status. As a further improvement to our analysis, a better approach would be to use principal stratification (Frangakis and Rubin, 2002). This approach would estimate the causal effect of smoking on medical expenditures within the strata of individuals who incur medical expenditures, irrespective of their smoking status. In such an analysis, it is assumed that all individuals who would incur medical expenses if they did not smoke would also incur medical expenses if they did smoke. This would enable a more honest evaluation of the causal effect of smoking on medical expenditures. Lastly, while our model performs well in terms of root mean squared error, for some individuals it does not quite reach the nominal coverage level for credible intervals. In the supplementary material, we show that our BCMF under-covers for individuals whose conditional mediation effect differs significantly from the average effects \(\bar{\delta}\) and \(\bar{\zeta}\). Whether this is a problem that can be fixed or simply a consequence of using a model that shrinks towards homogeneous effects warrants further investigation. Code reproducing our analysis and simulation results is available at www.github.com/vcbcmf/vcbcmf.
2310.01718
On the Peak-to-Average Power Ratio of Vibration Signals: Analysis and Signal Companding for an Efficient Remote Vibration-Based Condition Monitoring
Vibration-based condition monitoring (VBCM) is widely utilized in various applications due to its non-destructive nature. Recent advancements in sensor technology, the Internet of Things (IoT), and computing have enabled the facilitation of reliable distributed VBCM where sensor nodes are deployed at multiple locations and connected wirelessly to monitoring centers. However, sensor nodes are typically constrained by limited power resources, necessitating control over the peak-to-average power ratio (PAPR) of the generated vibration signals. Effective control of PAPR is crucial to prevent nonlinear distortion and reduce power consumption within the node. Additionally, avoiding nonlinear distortion in the vibration signal and preserving its waveform is essential to ensure the reliability of condition monitoring. This paper conducts an in-depth analysis of the PAPR of vibration signals in VBCM systems, evaluates the impact of nonlinear power amplification on the system performance, and proposes a lightweight autoencoder-based signal companding scheme to control the PAPR to improve power efficiency and mitigate the impact of nonlinear distortion. The proposed scheme employs a lightweight reconstruction autoencoder with a compression-based activation function in the source to compress the vibration signals and avoid increasing the average power of the compressed signal. In the destination, the proposed scheme uses a denoising-expansion autoencoder to expand the compressed signals while minimizing noise enhancement during the expansion process. The experimental results demonstrate the effectiveness of the proposed companding scheme in preventing nonlinear distortion, improving the efficiency of power amplification in the source, and restoring the PAPR characteristics in the destination while avoiding the undesired effect of noise expansion.
Sulaiman Aburakhia, Abdallah Shami
2023-10-03T01:04:41Z
http://arxiv.org/abs/2310.01718v1
On the Peak-to-Average Power Ratio of Vibration Signals: Analysis and Signal Companding for an Efficient Remote Vibration-Based Condition Monitoring ###### Abstract Vibration-based condition monitoring (VBCM) is widely utilized in various applications due to its non-destructive nature. Recent advancements in sensor technology, the Internet of Things (IoT), and computing have enabled the facilitation of reliable distributed VBCM where sensor nodes are deployed at multiple locations and connected wirelessly to monitoring centers. However, sensor nodes are typically constrained by limited power resources, necessitating control over the peak-to-average power ratio (PAPR) of the generated vibration signals. Effective control of PAPR is crucial to prevent nonlinear distortion and reduce power consumption within the node. Additionally, avoiding nonlinear distortion in the vibration signal and preserving its waveform is essential to ensure the reliability of condition monitoring. This paper conducts an in-depth analysis of the PAPR of vibration signals in VBCM systems, evaluates the impact of nonlinear power amplification on the system performance, and proposes a lightweight autoencoder-based signal companding scheme to control the PAPR to improve power efficiency and mitigate the impact of nonlinear distortion. The proposed scheme employs a lightweight reconstruction autoencoder with a compression-based activation function in the source to compress the vibration signals and avoid increasing the average power of the compressed signal. In the destination, the proposed scheme uses a denoising-expansion autoencoder to expand the compressed signals while minimizing noise enhancement during the expansion process. The experimental results demonstrate the effectiveness of the proposed companding scheme in preventing nonlinear distortion, improving the efficiency of power amplification in the source, and restoring the PAPR characteristics in the destination while avoiding the undesired effect of noise expansion. vibration-based condition monitoring (VBCM), sensors, power efficiency, peak-to-average power ratio (PAPR), signal companding ## I Introduction Vibration-based condition monitoring (VBCM) has been widely used in predictive maintenance (PdM) [12] and structural health monitoring (SHM) [13][13]. Furthermore, it is becoming increasingly popular, primarily due to its inherent advantages over alternative forms of condition monitoring. The main advantages of VBCM include [14][15]: * Vibration sensors are non-intrusive and can be contactless, facilitating non-destructive condition monitoring. * Real-time acquisition of vibration signals can be conducted in situ, allowing for online local condition monitoring. * Trending vibration analysis can be utilized to identify relevant conditions and conduct comparative analysis across diverse conditions or objects. * Vibration sensors are cost-effective and widely available, offering various specifications to suit a wide range of requirements. * Vibration waveform responds instantly to changes in the monitored condition and, therefore, is suitable for continuous and intermittent monitoring applications. * Of paramount significance, signal processing techniques can be applied to vibration signals to mitigate corrupting noise and extract weak condition indications from other masking signals. 
The rapid evolution of sensor fabrication, coupled with advancements in the Internet of Things (IoT) and computing technologies, has enabled large-scale remote VBCM systems comprising distributed sensor nodes. These systems are being widely utilized across diverse domains, including civil applications, industry, wildlife monitoring, agriculture, transportation, and healthcare. However, the sensor nodes in such systems are typically constrained by limited power resources. Compressive sensing (CS) has been considered as a means of reducing the amount of acquired and transmitted data, but reconstructing the signal from compressive measurements involves time and power-consuming algorithms, making CS unsuitable for real-time condition monitoring [38]. This paper tackles the problem of power efficiency from a signal waveform perspective.
Specifically, the paper suggests reducing the power consumption in the sensor nodes by controlling the peak-to-average power ratio (PAPR) of the acquired vibration signal. The PAPR, which is the ratio of the peak power to the signal's average power, has a direct impact on the node's power consumption since it determines the required resolution for analog-digital conversions [39] as well as the required linear range of the power amplification circuit [40], which accounts for the major part of the total power consumption in many systems [41, 42]. To the best of our knowledge, this paper is the first work that addresses the issue of PAPR in vibration signals and tackles the related problem of nonlinear power amplification in VBCM systems. Specifically, the paper statistically investigates the PAPR characteristics of vibration signals, evaluates the impact of nonlinear power amplification on the system, and proposes a lightweight framework based on signal companding to reduce the PAPR and ensure linear power amplification of the signals. Companding1 is a well-known technique in signal processing; it involves signal compression at the source and subsequent expansion at the destination. Signal companding has demonstrated its effectiveness in controlling the PAPR of multi-carrier communication signals. Nevertheless, conventional companding techniques encounter two significant limitations. Firstly, the compression mechanism increases the average power of the compressed signal. Secondly, the expansion operation amplifies the accumulated noise in the compressed signal. In order to effectively control the PAPR and address these limitations, the proposed framework adopts a two-fold approach. Firstly, it compresses the signal with a reconstruction autoencoder with a compression-based activation function. Secondly, it employs a denoising-expanding autoencoder to expand the compressed signal. This combined approach ensures efficient PAPR control while mitigating the aforementioned limitations. The paper makes the following key contributions: Footnote 1: The name COMPANDING is a composite of the words COMPressing and expandING. * To the best of our knowledge, this paper is the first contribution to the VBCM literature that addresses the PAPR of generated vibration waveform, examines the impact of nonlinear power amplifications, and proposes controlling the PAPR to improve power efficiency and mitigate nonlinear distortion. * Statistically analyzes the PAPR of vibration signals. Accordingly, a closed-form formula is derived to accurately model the statistical distribution of the PAPR of vibration Signals. * Introduces a framework based on signal companding to effectively reduce the PAPR of vibration signals and mitigate the impact of nonlinear power amplification. * At the sensor node (source), prior to the power amplification stage, the framework employs a lightweight reconstruction autoencoder that utilizes a compression-based activation function. The autoencoder function facilitates the simultaneous smoothing and compression of the vibration signal without causing an increase in the average power of the compressed signal. * At the monitoring end (destination), the proposed framework utilizes a denoising-expansion autoencoder to effectively remove noise from the compressed signals prior to the expansion process to avoid enhancement (expansion) of the accumulated noise by the expansion operation. 
* Comprehensively evaluates the performance of the proposed framework in the presence of nonlinear power amplification and additive white Gaussian noise (AWGN), employing a real-world vibration dataset. * Adapts the concepts of signal constellation diagrams and error vector magnitudes (EVM) as new metrics to evaluate power efficiency and quantify the nonlinear distortion resulting from the nonlinear power amplification. The remainder of the paper is structured as follows: The next section provides background information and motivation for the problem. Section 3 presents the statistical analysis of the PAPR of vibration signals. Section 4 reviews signal companding techniques that have been proposed in the literature. Section 5 introduces the proposed autoencoder-based companding framework. Section 6 presents the model of nonlinear power amplification used in the experimentation. The experimental setup and performance evaluation metrics are introduced in Section 7, while Section 8 discusses the obtained results. The paper is finally concluded in Section 9. ## II Background and Motivation Fig. 1 shows a high-level architecture of a typical remote VBCM system. In these systems, sensor nodes are deployed across various locations, either embedded in objects, placed beneath surfaces, attached to mobile or airborne objects and connected to a cloud or a processing center. Typically, sensor nodes use wireless connectivity through available cellular networks or dedicated wireless links to collecting nodes in cases where cellular coverage is unavailable or unstable [11, 12, 22, 43, 44, 45, 23]. Subsequently, the collection node transmits the accumulated signals to the cloud or the processing center via the cellular network. As mentioned earlier, these nodes are typically power-constrained, which makes efficient power utilization in key components of the node, such as signal accusation, amplification, and transmission, critical for low power consumption in these nodes. As stated earlier, the PAPR is crucial in determining how much power the system needs to operate effectively. A smaller PAPR value requires fewer bits and allows HPA to operate more efficiently, saving the battery in the system [46]. To achieve maximum power efficiency, the HPA's operating point should be positioned as close as possible to HPA's saturation point [47] as illustrated in Fig 2. When input signal peaks exceed this designated operating point, the HPA becomes prone to saturation, leading to power wastage, nonlinear amplitude distortion, and spectral spreading induced by abrupt fluctuations in the distorted amplitudes. To prevent these consequences, the HPA circuit must be designed to operate linearly over the PAPR range of the input signal, which tends to be a costly and inefficient solution [48]. Alternatively, a significant input power backoff (IBO) from the HPA's operating point should be applied to restrict the HPA's input power level, ensuring that the entire signal falls within the HPA's linear region. While this approach mitigates nonlinear distortion, it significantly reduces power efficiency, as the HPA operates in a lower-power region. Therefore, it has a high cost in terms of energy efficiency, particularly in battery-power applications [47] such as remotely deployed sensor nodes. In VBCM systems, the amplitude of the acquired vibration waveform fluctuates according to the condition of the monitored object/process. 
Hence, the waveform is anticipated to exhibit a high PAPR due to these fluctuations, which can reach significant magnitudes depending on the monitored condition [10][11]. Based on the preceding discussion, the remainder of the paper attempts to facilitate power-efficient remote VBCM by addressing the following key aspects:

1. analyse the PAPR characteristics of acquired vibration signals,
2. evaluate the impact of uncontrolled PAPR on the VBCM performance in the presence of nonlinear power amplification,
3. and, accordingly, propose appropriate remedy solutions to control the PAPR.

Fig. 1: Overview of a typical remote VBCM system.

Fig. 2: HPA range curve.

## III PAPR of Vibration Signals

The PAPR is a widely employed metric for quantifying the power ratio between the peak and average values of a signal. For a given vibration signal \(x(t)\), its PAPR can be expressed as:

\[PAPR_{x(t)}=\frac{Max\left|x(t)\right|^{2}}{E\left\{\left|x(t)\right|^{2}\right\}} \tag{1}\]

where \(E\left\{\cdot\right\}\) denotes the expectation operator. For the finite sampled signal \(x(n)\), the PAPR is:

\[PAPR_{x(n)}=\frac{Max_{n\in\left[0,N-1\right]}\left|x(n)\right|^{2}}{\frac{1}{N}\sum_{0}^{N-1}\left|x(n)\right|^{2}} \tag{2}\]

where \(N\) is the number of samples in the vibration signal \(x(n)\). The PAPR is usually expressed in dB:

\[PAPR\left(\text{dB}\right)=10\times log\left(PAPR\right)\text{dB} \tag{3}\]

Crest Factor (CF) is another common signal parameter that quantifies the ratio of a signal's peak amplitude to its root-mean-square
Although generalizing the aforesaid assumption of Gaussian nature may not be entirely accurate, the histograms presented in Fig. 3 show that this assumption would be valid for a broad range of vibration patterns. ### _Statistical Analysis of the PAPR_ Following the assumption that a vibration signal \(x(n)\) follows a Gaussian distribution, its probability density function (PDF) can be expressed as: \[p(x,\sigma)=\frac{1}{\sigma\sqrt{2\pi}}\times exp\left(-\frac{x^{2}}{2\sigma^{2 }}\right) \tag{5}\] Accordingly. the signal's envelope \(|x(n)|\) has a one-sided Gaussian distribution; its PDF is given by: \[p_{e}(x,\sigma)=\sqrt{\frac{2}{\pi\sigma^{2}}}\times exp\left(-\frac{x^{2}}{2 \sigma^{2}}\right),x\geq 0 \tag{6}\] The cumulative distribution function (CDF) of the signal's envelope is then obtained by: \[F_{e}(x,\sigma)=\int_{0}^{x}\frac{\sqrt{2}}{\sigma\sqrt{\pi}}\times exp\left(- \frac{u^{2}}{2\sigma^{2}}\right)du \tag{7}\] using \[t=\sqrt{\frac{u^{2}}{2\sigma^{2}}} \tag{8}\] the CDF can be written as: \[\begin{split} F_{e}(x,\sigma)&=\frac{2}{\sqrt{\pi} }\int^{\sqrt{x^{2}/2\sigma^{2}}}exp\left(-t^{2}\right)dt\\ &=erf\left(\sqrt{\frac{x^{2}}{2\sigma^{2}}}\right)\end{split} \tag{9}\] where \(erf\left(\cdot\right)\) is the error function. Accordingly, the probability that the signal's power ratio \(P=\frac{x^{2}}{2\sigma^{2}}\) is above a given Fig. 3: Histograms of sample vibration signals: (a) Gaussian random vibration, (b) acceleration of a flying aircraft, (c) acceleration measurements of a flying UAV, (d) vibration from an SHM setup, (e) vibration generated by a wind turbine gearbox, and (f) vibration generated by rolling bearings. PAPR threshold \(P_{o}\) can be obtained using the complementary cumulative distribution function (CCDF): \[Prob(P>P_{o}) =\text{CCDF}=1-(\text{CDF})^{N} \tag{10}\] \[=1-erf\left(\sqrt{\frac{P_{o}}{2}}\right)^{N}\] where \(N\) is the number of samples in the vibration signal \(x(n)\). The analytical formula of the CCDF in Eq. 10 is helpful in studying the PAPR of vibration generated in various VBCM systems. Furthermore, the CCDF is a useful metric for evaluating the effectiveness of a PAPR reduction method. Typically, a simulated CCDF is obtained using PAPR-reduced signals and compared to a simulated CCDF of the original signals to evaluate the reduction achieved in the PAPR. A closed-form approximation of Eq. 10 can be obtained using the asymptotic series expansion of the complementary error function \(erfc\left(\cdot\right)\): \[erf\left(\cdot\right)=1-erfc\left(\cdot\right), \tag{11}\] Eq. 10 can be expressed in terms of \(erfc\left(\cdot\right)\) as follows: \[Prob(P>P_{o})=\text{CCDF}=1-\left(1-erfc\left(\sqrt{\frac{P_{o}}{2}}\right) \right)^{N} \tag{12}\] For large values of \(\sqrt{P_{o}/2}\), the complementary error function may be approximated by the asymptotic series expansion: \[\begin{split} erfc\left(\sqrt{\frac{P_{o}}{2}}\right)\approx \frac{e^{-P_{o}/2}}{\sqrt{\frac{P_{o}\pi}{2}}}\times\\ \left(1-\frac{1}{P_{o}}+\frac{1\cdot 3}{P_{o}^{2}}-\cdot\cdot +(-1)^{n}\frac{(2n-1)!!}{P_{o}^{n}}+\cdot\cdot\cdot\right)\end{split} \tag{13}\] For \(P_{o}\gg 1\), \[erfc\left(\sqrt{\frac{P_{o}}{2}}\right)\approx\frac{e^{-P_{o}/2}}{\sqrt{\frac {P_{o}\pi}{2}}} \tag{14}\] Accordingly, a closed-form approximation of the CCDF can be obtained by substituting Eq. 14 into Eq. 12: \[Prob(P>P_{o})=\text{CCDF}=1-\left(1-\frac{e^{-P_{o}/2}}{\sqrt{\frac{P_{o}\pi} {2}}}\right)^{N} \tag{15}\] Fig. 4 shows plots of the CCDF in Eq. 
10 and its closed-form approximation in Eq. 15 for different values of \(N\). The plotted CCDF curves show an exact match between the CCDF and its closed-form approximation. Fig. 5 shows simulated CCDFs of the aforementioned vibration sets (refer to Fig. 3) along with theoretical CCDFs (Eq. 15). It is evident that the simulated CCDFs align with their corresponding theoretical CCDFs, except for the rotating machinery. The mismatch in the case of rotating machinery could be due to the rotating nature of the bearings and speed fluctuations [11, 53]. Additionally, the graphs depicted in Fig. 4 and Fig. 5 demonstrate that as the number of samples \(N\) increases, the likelihood of experiencing a high PAPR increases. Specifically, with the number of samples \(N\geq 500\), a PAPR in the range of \(10\)-\(13\) dB is likely to occur. Thus, it can be concluded that vibration signals generally tend to have high PAPR, where peak vibrations that are 10-20 times higher than the average vibrations occur commonly. ## IV Review of signal companding techniques To the best of our knowledge, The PAPR of vibration singles and the associated problem of nonlinear power amplification have not been addressed yet in the literature. Nevertheless, this issue has been extensively studied in multi-carrier communications, particularly in orthogonal frequency-division multiplexing (OFDM) systems. In such systems, information is split into parallel streams and carried across orthogonal sub-carriers during each transmission period. These sub-carriers are then summed together to form the transmitted symbol. Transmitting the data over orthogonal sub-carriers helps to reduce interference, minimize the required bandwidth, and increase the data transmission rate. However, the summation of modulated orthogonal sub-carriers would result in very high peaks in the transmitted symbol, which makes the OFDM signal exhibit a high PAPR. Numerous methods have been suggested in the literature to reduce the high PAPR of OFDM signals to prevent nonlinear distortion and increase the efficiency of the power amplification. These techniques can be broadly categorized into three main categories: Symbol structure modification, peak clipping, and signal companding. Structure modification techniques include block coding [54], selective mapping (SLM) [55], partial transmission sequence (PTS) [55], and tone reservation [56]. These techniques reduce the PAPR by modifying the structure of the transmitted OFDM symbol. They generally impose restrictions on its parameters and require transmitting side information to reconstruct the symbol at the destination. Therefore, the reduction in PAPR comes at the cost of increased complexity and reduced data rates due to the transmission of side information. Clipping [57] offers a simple approach to reducing the PAPR by hard-limiting the peaks to a pre-defined threshold. Despite its simplicity, clipping introduces amplitude distortion and spectral Fig. 4: Theoretical CCDF and its closed-form approximation for different values of \(N\). spreading. While amplitude distortion is unrecoverable, filtering would reduce spectral spreading. However, the peaks of the filtered-clipped signal could exceed the clipping threshold due to peak power regrowth after filtering. Alternative solutions that help to reduce the clipping distortion involve repeated or iterative clipping [57] and peak windowing [58]. 
In contrast to clipping, peak windowing applies soft-limiting to the peaks by multiplying the signal with a window-weighting function. As a result, distortion is reduced since the peaks are smoothly and softly limited. Signal companding is a well-known method in signal processing that involves two steps. First, the signal is transformed into a compressed form at the source. Second, the inverse transform expands the compressed signal at the destination. The compression reduces the signal's dynamic range (DR) and allows for efficient processing of the signal. \(\mu\)-law and A-law [59] are the most common companding transforms that are typically applied to speech signals to reduce quantization noise and optimize the required number of bits per sample for analog-to digital conversion. In \(\mu\)-law and A-law transforms, signal compression is achieved by applying a logarithmic-based transform to enlarge small amplitudes in the signal. Signal companding has gained wide popularity in the OFDM domain compared to other techniques due to its low complexity. Companding has no restrictions on the symbol's parameters and does not require the transmission of side information. Further, companding has better error performance compared to clipping. Using signal companding to reduce the PAPR of OFDM signals was first introduced in [60]. However, the scope of analysis was limited to addressing the effect of \(\mu\)-law companding on the quantization noise. Reducing the PAPR of the signal by applying \(\mu\)-law companding will increase the signal's average power. This, in turn, improves the signal-to-quantization noise ratio since the small amplitudes are enlarged. However, considering the non-linearity of the HPA, reducing the PAPR by increasing the signal's average power will not prevent the nonlinear distortion since the large peaks are not reduced. In fact, it would lead to more distortion in the signal. The main attention of the ongoing research is directed toward addressing this problem by designing the companding function so that the increase in the signal's average power is avoided [61, 62, 63, 64, 65, 66]. The published work in this area can be grouped under two main approaches. The first approach involves using additional transforms and/or optimization algorithms, which obviously increases computational complexity. The second approach involves introducing inflexion points in the signal. This allows for independent scaling of large peaks and small amplitudes, which helps to maintain the signal's average power. However, this approach reduces the data rate since the signal's indexes must be transmitted to apply the inverse operations at the destination. A considerable amount of the recent work focuses on utilizing deep learning (DL) models to tackle the problem of PAPR in OFDM [67, 68, 69, 70, 71]. DL-based approaches are centered around designing and training the DL models to optimally, sub-optimally, or efficiently learn the function of the corresponding conventional PAPR reduction scheme while mitigating the associated drawbacks. Choosing the appropriate technique among the options mentioned above for reducing the PAPR of vibration signals starts by understanding the distinctions between OFDM and vibration signals. Regarding structure modification approaches, vibration signals are generated by sensors as raw data, unlike OFDM symbols, which are formed based on a predetermined structure. Therefore, such techniques are not applicable to vibration signals. 
Clipping introduces unrecoverable distortion in the clipped signal; this can be tolerated in the OFDM signal due to the error correction mechanisms. In VBCM systems, the characteristics of the monitored process/object are described by the waveform and the spectrum of the generated vibration Fig. 5: Analytical and simulated CCDFs of (a) Gaussian random vibration (\(N=5000\)), (b) acceleration of a flying aircraft (\(N=5000\)), (c) acceleration measurements of a flying UAV (\(N=50\)), (d) vibration from an SHM setup (\(N=5000\)), (e) vibration generated by a wind turbine gearbox (\(N=5000\)), and (f) vibration generated by rolling bearings (\(N=5000\)). signal. This makes clipping distortion critical and intolerable since it introduces distortion in the generated waveform and alters its spectral contents. Compared to structure modification and clipping, signal companding presents a practical solution for reducing the PAPR of vibration signals without affecting the monitoring process. However, in order to be adopted for VBCM applications, the companding transform should fulfill the following three requirements: 1. Avoiding the increase in the signal's average power. 2. Employ effective signal denoising to eliminate noise from the compressed signal before the expansion stage to mitigate the effects of noise enhancement during the expansion process 3. Avoid transmission side information as this will increase the amount of the transmitted data and will lead to more power consumption in the sensor node. Considering these requirements, the upcoming section introduces the proposed autoencoder-based companding framework. ## V Signal Companding for Reduction of PAPR As stated earlier, signal commanding provides a practical solution for reducing the PAPR of vibration signals without affecting the VBCM process. The key requirements in companding are preventing any increase in the average power of the signal and avoiding the transmission of side information. In this section, we present the lightweight companding-based framework proposed for reducing the PAPR of vibration signals. However, it is convenient first to provide a brief overview of conventional signal companding. ### _Conventional Signal Companding_ The most commonly used type of signal companding is the \(\mu\)-law companding. its compression function \(C(x)\) can be expressed as: \[y=C(x)=A\,sgn(x)\,\frac{ln(1+\mu|\frac{x}{A}|)}{ln(1+\mu)} \tag{16}\] where, \(x\) is the input signal, \(sgn(\cdot)\) is the sign function, \(A\) is a normalization constant such that \(0<|\frac{x}{A}|<1\), and \(\mu\) is the compression parameter. The expansion (inverse) function is expressed as: \[\begin{split}& x^{\prime}=C^{-1}(y)\\ &=A\left[\frac{\exp\left\{\frac{|y|}{Asgn(y)}ln(1+\mu)\right\}-1 }{\mu\,sgn(y)}\right]\end{split} \tag{17}\] Fig. 9.a displays the compression profile of \(\mu\)-law for different values of compression parameter \(\mu\). Increasing the \(\mu\) value leads to more enlargements of small amplitudes, resulting in a higher average power of the signal. Hence, the signal's average power increases as a function of \(\mu\) as illustrated in Fig. 9.b. Since the signal's peaks are maintained unchanged in \(\mu\)-law companding, the reduction in the PAPR of the signal is achieved solely by increasing its average power. However, to avoid nonlinear distortion in the vibration signal and improve the power efficiency of the HPA, it is required to reduce the PAPR by reducing the signal's peaks instead of increasing its small amplitudes. 
In other words, it is required to reduce the PAPR of the signal while avoiding any increase in its average power. Another issue with conventional companding is the undesired effect of enhancing the accumulated noise at the destination due to expansion operation [60]-[65]. Thus, it is crucial to reduce the effects of noise enhancement by applying effective denoising to the compressed signal prior to expanding it. ### _Proposed Signal Companding_ Here, we introduce our proposed framework for efficient companding of vibration signals that addresses the aforementioned issues of conventional companding. The main aspects of the framework are illustrated in Fig. 6. Specifically, signal compression at the source is achieved using a lightweight reconstruction autoencoder with a compression-based activation function. At the destination, signal denoising and expansion operations are combined in one process by using a denoising-reconstruction autoencoder. #### V-B1 Signal Compression at the Source In the first place, the raw vibration signals are first smoothed to remove measurement noise. Then, a reconstruction autoencoder is trained to learn the smoothing function and reconstruct these smoothed signals (target signals). By using a compression-based activation function in the autoencoder layers, the autoencoder will learn how to reconstruct the input signals based on the target signal and, at the same time, compresses the learned presentations of the input signal. As a result, the output of the trained autoencoder is a smoothed and compressed version of the input signal. **Average power of the compressed signal:** During the training process, the autoencoder compresses the signal's representations in each layer while, at the same time, it learns to minimize the loss between the input signal and the target (smoothed) signal. Here, the average power of the target signal represents an upper bound on the average power of the reconstructed signal. This will avoid any increase in the average power of the output (smoothed-compressed) signal. Further, the joint mechanism of reconstruction-compression will maintain the average power of the output signal as close as possible to the average power of the target signal. **Compression loss as a lower bound on the training loss:** To efficiently train the source autoencoder, it is important to consider the following factors: * The training objective of the source autoencoder is to reconstruct a smoothed and compressed version of the input signal rather than reconstructing the signal in its original form. * The compression-based activation function in the autoencoder layers implies that there will be a compression loss or a minimum error floor between the output signal and the target signal caused by the compression mechanism. Considering these factors, it is not required to minimize the loss until maximum convergence. Instead, it is desirable to train the autoencoder until reaching this error floor, which is determined by compression loss. Theoretically, the compression loss is the difference between the reconstructed signal and its "perfectly reconstructed" compressed form. Mathematically, the compression loss (\(CL\)) can be calculated as the difference between the target signal \(x\) and its compressed and power-preserved form \(x_{pc}\): \[\begin{split} CL=error(x,x_{pc}),\\ x_{pc}=P\left(AF(x)\right),\end{split} \tag{18}\] where \(AF\) is the compress-based activation function, and \(P\) is the power scaling operation. 
The \(error\) function can be either the mean absolute error (MAE) or the mean squared error (MSE). To efficiently train the autoencoder, the compression loss can be utilized to set a baseline for the loss during the training. It can be empirically determined using the input and the target training signals to obtain the average \(CL\) according to Eq. 18. Generally, the effect of compressing a signal and preserving its average power can be approximated by applying a hard limiter to the signal with a peak-limiting threshold equal to the maximum peak of its compressed form. Accordingly, the clipping noise, which is the power of the clipped portion, can be used as an estimate of the compression loss. Given a target PARR (\(PAPR_{t}\)), the maximum peak \(Peak_{c}\) of the compressed and power preserved signal \(x\) equals to: \[Peak_{c}=\sqrt{PAPR_{t}\times P_{in}}, \tag{19}\] where \(P_{in}\) is average power of the signal. Following the assumption of the Gaussian nature of \(x\), it can be modeled as Fig. 6: Proposed framework for autoencoder-based companding of vibration signals. Fig. 7: Structures of the reconstruction autoencoders used in the proposed framework. a Gaussian random process with a zero mean and a variance \(\sigma^{2}=P_{in}\). Thus, the probability that, at any given time, the signal \(x\) takes the value \(Peak_{c}\) is given by: \[\begin{split} Prob\left\{x(t)=Peak_{c}\right\}\\ =p(x)=\frac{1}{\sqrt{2\pi P_{in}}}\times\text{exp}\left(-\frac{x^ {2}}{2P_{in}}\right)\end{split} \tag{20}\] Since maximum peak of the signal is limited to \(Peak_{c}\), the clipping noise (\(CN\)) is given by: \[CN=2\int_{Peak_{c}}^{\infty}\left(x-Peak_{c}\right)^{2}p(x)dx \tag{21}\] Using the analysis presented in [40], \(CN\) can be approximated as: \[CN\cong 2\sqrt{\frac{2}{\pi}}\times\sigma^{2}\times(\sqrt{PAPR_{t}})^{-3} \times\text{exp}\left(-\frac{PAPR_{t}}{2}\right) \tag{22}\] An alternative way for training the source autoencoder involves utilizing conventional activation functions like the rectified linear unit (ReLU) and using compressed and power-reserved forms of the input signals as the training targets. However, the proposed method, in contrast, requires less training time and involves less signal processing, as highlighted in Table I, which compares the proposed method and conventional autoencoder training. The comparison demonstrates the lightweight nature of the proposed framework, which requires fewer computations and reduced signal processing, making it more power-efficient and less complex than conventional methods. #### V-B2 Signal Denoising and Expansion at the Destination In order to train the denoising-expansion autoencoder at the destination, the vibration signals of the training set are firstly compressed using the compression function \(AF\). Then, they are corrupted with AWGN noise at a desired signal-to-noise ratio (SNR). The autoencoder is then trained with the noisy, compressed signals as the input and the original signals as the target. By training the autoencoder to minimize the loss between input and target signals, it will learn the expansion mechanism. Additionally, the autoencoder weights will be tuned to remove the noise. Hence, it simultaneously acts as an expanding function and a denoising filter. ## VI Modeling Nonlinear Power Amplification A practical HPA has a limited linear range and exhibits a nonlinear behavior at its saturation point, as deposited in Fig. 2. 
As previously stated, to achieve linear amplification of the signal and avoid nonlinear distortion, it is essential that the peaks of the signal remain within the linear range of the HPA. Otherwise, a large IBO should be used, which as mentioned earlier, reduces the HPA efficiency. Reliable modeling of the HPA is crucial for the accurate evaluation of nonlinear power amplification effects on the signal. A power amplifier is typically modeled by its amplitude-to-amplitude (AM/AM) and amplitude-to-phase (AM/PM) conversion functions. The (AM/AM) conversion is used to characterize the amplitude distortion, which is the relationship between the input power (amplitude) and the output power (amplitude). (AM/PM) conversion is used to characterize phase deviation (distortion) caused by amplitude variations. A widely accepted solid-state power amplifier (SSPA) model is the Rapp model [72]. It has a frequency-nonselective response with a smooth transition from linearity to saturation as input amplitude approaches the saturation level. Its (AM/AM) conversion function is: \[\begin{split} A_{out}&=a\frac{A_{in}}{\left(1+\left[ \left(\frac{aA_{in}}{A_{sat}}\right)^{2}\right]^{p}\right)^{1/2p}}\\ &\text{with},A_{sat}\geq 0,\,a\geq 0,\,\text{and}\,p\geq 0\end{split} \tag{23}\] where \(A_{in}\) is the input amplitude, \(A_{sat}\) is the saturation level, \(a\) is the gain, and \(p\) is a positive number to control the nonlinearity characteristics of the HPA. The (AM/PM) conversion of the SSPA is small enough and can be neglected [72]. Fig. 8 shows the (AM/AM) conversion curve of the model with different values of \(p\). As it is shown, as the value of \(p\) increases, the model converges to a hard limiting amplifier. For large values, the model becomes precisely linear until it reaches its output saturation level. A good approximation of existing amplifiers is obtained by choosing \(p\) to be in the range of \(2\) to \(3\)[73]. In this paper, the Rapp model with \(p=2\) and \(a=1\) is used to simulate the nonlinear power amplification of vibration signals. ## VII Performance Evaluation The vibration signals of the Paderborn University (PU) bearing dataset [52] (Vibration set (f) in Section III) are used to demonstrate the effectiveness of the proposed framework Fig. 8: AM-AM and AM-PM conversions of the Rapp SSPA model. in reducing the PAPR and mitigating the effects of HPA nonlinearity. This dataset is selected because it includes actual vibration signals from a real system during both healthy and faulty operations. The fault types include Inner Race (IR) defects, Outer Race (OR) defects, and combined defects. More information about the PU dataset can be found in [2]. To create the training and testing sets, the vibration measurements of the PU dataset are segmented into segments of \(0.1\) seconds (\(6,400\) data points). This results in \(16,005\) vibration signals in total. Accordingly, the dataset is split into \(11,202\) samples for training (\(70\%\)) and \(4,803\) samples for testing (\(30\%\)). Adam optimizer (learning rate = \(0.001\)) is used to train the autoencoders and minimize the MSE. The framework is implemented using Python, Keras library [74], TensorFlow [75], and SciPy library [76]. ### _Experimental Setup_ One-dimensional convolutional (Conv1D) layers are used to implement the autoencoders. The structures and the parameters of the autoencoders are shown in Fig. 7. 
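Since the exact layer parameters of Fig. 7 are not listed in the text, the sketch below uses purely illustrative layer widths and kernel sizes; what it takes from the description are the ingredients stated explicitly: Conv1D layers, a compression-based (μ-law, μ = 255) activation applied in the layers (given as Eq. (24) just below), the Adam optimizer with learning rate 0.001, the MSE loss, and the Keras/TensorFlow implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def mu_law_activation(x):
    """Compression-based activation of Eq. (24): sgn(x)*ln(1 + 255|x|)/ln(256)."""
    return tf.sign(x) * tf.math.log(1.0 + 255.0 * tf.abs(x)) / tf.math.log(256.0)

def build_source_autoencoder(n_samples=6400):
    """Hypothetical Conv1D reconstruction autoencoder (layer sizes are illustrative)."""
    inp = layers.Input(shape=(n_samples, 1))
    x = layers.Conv1D(16, 9, padding="same", activation=mu_law_activation)(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(8, 9, padding="same", activation=mu_law_activation)(x)
    x = layers.UpSampling1D(2)(x)
    out = layers.Conv1D(1, 9, padding="same", activation=mu_law_activation)(x)
    model = models.Model(inp, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="mse")
    return model

# Source side: raw segments as inputs, smoothed segments as targets; training is
# stopped once the loss approaches the compression-loss floor of Eq. (18).
# model = build_source_autoencoder(); model.fit(x_raw, x_smooth, epochs=..., batch_size=...)
```

The destination autoencoder is trained analogously, with compressed (and, in the noisy scenario, AWGN-corrupted) signals as inputs and the original signals as targets.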
The activation function (\(AF\)) used in the source autoencoder is based on the \(\mu\)-law compression and expressed as: \[\text{$AF$}=sgn(x)\,\frac{ln(1+255\times|x|)}{ln(1+255)} \tag{24}\] #### V-A1 Training the Source Autoencoder To train the source autoencoder, the raw signals of the training set are smoothed in the first place. Then, the autoencoder is trained using Eq. 24 as the activation function, with the raw signals as the inputs and the smoothed signals as the targets. #### V-A2 Training the destination autoencoder Two training scenarios are considered for the destination autoencoder: Noise-free scenarioThe destination autoencoder is trained in reverse order compared to the source autoencoder. First, the signals of the training set (training targets of the destination autoencoder) are used as inputs to the source autoencoder _that has already been trained_. Subsequently, the compressed-smoothed signals obtained from the source autoencoder are employed as the training input for the destination autoencoder. Noisy scenarioTo count for the accumulated noise in practical situations, the obtained (smoothed-compressed) signals from the source autoencoder are randomly and equally corrupted with a zero-mean AWGN of \(-5\) dB and \(0\) dB SNR levels, and the destination autoencoder is trained accordingly. ### _Performance Metrics_ To show the effectiveness of the proposed framework, its performance is evaluated in the presence of nonlinear power amplification and AWGN against the cases of \(\mu\)-law compression and no compression. The saturation level (\(A_{sat}\)) of the HPA is set to the mean average power of the original (uncompressed) vibration signals. The performance of the proposed framework is evaluated in terms of the following five aspects: #### V-B1 **PAPR reduction** The CCDF is used to measure the PAPR reduction capability of the proposed framework. #### V-B2 **Average power of compressed signals** This will assess the increase in the average power resulting from the compression. #### V-B3 **Distortion due to the nonlinear power amplification** To assess the nonlinear distortion and HPA power efficiency, we Fig. 10: Signal’s amplitude constellation of the PU dataset. Fig. 9: \(\mu\)-law: (a) compression profile with different values of \(\mu\), and (b) peak and average powers as a function of \(\mu\). adapt the concepts of _amplitude constellation of vibration signals and Error Vector Magnitude (EVM)_. Signal constellation diagrams and EVM are widely used in telecommunications systems to represent modulated signals and evaluate system-level performance. **Amplitude constellation of vibration signals:** To obtain the amplitude constellation of a given set of vibration signals, each signal \(s_{i}\) is expressed in terms of its peak and mean amplitudes as follows: \[s_{i}=(A_{i_{p}},A_{i_{m}}),i=0,....,v-1, \tag{25}\] where \(A_{i_{p}}\) is the signal's peak amplitude, \(A_{i_{m}}\) is the signal's, mean amplitude, and \(v\) is the number of vibration signals in the set. Accordingly, the constellation can be displayed as a scatter plot on the \(x-y\) plane where \(x\) and \(y\) represent the signal peak amplitudes \(A_{i_{p}}\) and average amplitudes \(A_{i_{m}}\), respectively. Fig. 10 displays the amplitude constellation of the PU dataset; the points are color-coded according to their health condition so that the signals of the same health condition share the same color. 
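In code, Eq. (25) and the scatter of Fig. 10 reduce to a few lines; the sketch below assumes that the "mean amplitude" is the mean absolute amplitude of the segment, and the helper names are illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

def amplitude_constellation(signals):
    """Eq. (25): map each vibration signal to a (peak amplitude, mean amplitude) point."""
    return np.array([(np.max(np.abs(s)), np.mean(np.abs(s))) for s in signals])

def plot_constellation(signals, labels):
    """Scatter in the (peak, mean) plane, color-coded by health condition as in Fig. 10."""
    con = amplitude_constellation(signals)
    for lab in np.unique(labels):
        sel = np.asarray(labels) == lab
        plt.scatter(con[sel, 0], con[sel, 1], s=8, label=str(lab))
    plt.xlabel("peak amplitude")
    plt.ylabel("mean amplitude")
    plt.legend()
```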
The position of a given signal in the constellation indicates both its peak and average amplitude and the distance-- in terms of these amplitudes-- between the signal and the other signals in the constellation. As mentioned previously, amplitudes of vibration signals are directly related to the monitored process and/or object. Therefore, the constellation can offer crucial information and insight into the health of the monitored system. Further, as the distortion impacts both the peak and average amplitudes of the signal, its position in the constellation will be altered accordingly. This offers the opportunity to visually evaluate the nonlinear distortion by comparing the amplitude constellation of the amplified-expanded tested vibration signals, denoted as \(con_{amp}\) to the reference constellation of the original test signals, denoted as \(con_{ref}\). **Error Vector Magnitude (EVM):** The EVM can be utilized to quantify nonlinear distortion, assess HPA efficiency, and evaluate the effectiveness of the signal companding scheme. To obtain the EVM, the error vectors of the amplified-expanded test signals with respect to their reference test signals are first calculated from the corresponding constellations \(con_{amp}\) and \(con_{ref}\). The error vector \(error_{v}\) between two points \(s_{1}=(A_{1_{p}},A_{1_{m}})\) and \(s_{2}=(A_{2_{p}},A_{2_{m}})\) on the constellation is given by: \[error_{v}=[err_{p},err_{m}], \tag{26}\] \[err_{p}=A_{1_{p}}-A_{2_{p}},\] \[err_{m}=A_{1_{m}}-A_{2_{m}}\] Accordingly, the EVM can be calculated as the mean or the RMS value of the magnitudes of these obtained error vectors. It can be expressed as: \[EVM=\frac{\sqrt{\frac{1}{V}\sum_{i=0}^{V-1}\left|error_{v}[i]\right|^{2}}}{ \textit{EVM Normalization Reference}}\times 100 \tag{27}\] where \(V\) is the number of vibration signals in the test set and \(\left|error_{v}[i]\right|\) is the magnitude of the \(i\)-th error vector. In the above equation, EVM is normalized by _EVM Normalization Reference_, which equals the maximum magnitude in the reference constellation \(con_{ref}\). Hence, the EVM quantifies the power loss and amplitude distortion caused by HPA nonlinearity. In practical situations, the EVM quantifies the combined impact of all signal impairments within a VBCM system (such as distortion and noise effects), enabling measuring the overall system degradation using a single value. #### Vii-B4 **Spectral spreading** Spectral spreading or spectral broadening refers to situations when a signal's spectrum becomes wider due to nonlinear processing, such as logarithmic-based compression and nonlinear power amplification. Nonlinearity imposed on the signal's envelope causes an undesirable increase in the power of the side lopes of the power spectral density (PSD). This makes PSD an appropriate measure of spectral regrowth. Accordingly, we use the mean PSD to evaluate the compressed-amplified signals' spectral spreading. Welch's overlapped segment averaging method [77] is used to estimate the PSD. The method involves segmenting the signal using a moving window and computing each segment's fast Fourier transform (FFT). The PSD is then estimated as the average of the computed FFTs over all segments. In this paper, we use the following settings for Welch's PSD estimation: * Window: Hamming window of a length equals to \(N/2\), where \(N\) is the length of the vibration signal. This length is selected to obtain a PSD with a good resolution since reducing the window length would affect the resolution. 
* Overlap between segments: \(50\%\) overlap. With a window length of \(N/2\), an overlap of \(50\%\) results in a total of 3 segments, reducing the averaging-error variance compared to using two segments only and, simultaneously, avoiding introducing a high correlation between the segments. * Number of discrete Fourier transform points (NFFT): \(NFFT=8192\). This is calculated using the conventional method where NFFT is set to be equal to \(2^{p}\), where \(p\) is the smallest power of \(2\) that is greater than or equal to \(N\); which in this case equals \(13\). #### Vii-B5 **Signal denoising** The SNR of the vibration signals after denoising, denoted as \(SNR_{d}\) is used to assess the effectiveness of the proposed framework in reducing noise. \(SNR_{d}\) is expressed as: \[SNR_{d}\text{ (dB)}=10\times\text{log}_{10}\left(\frac{\sum_{n=0}^{N-1}\left| x(n)\right|^{2}}{\sum_{n=0}^{N-1}\left(\left|x(n)-x^{\prime}(n)\right|^{2} \right)}\right) \tag{28}\] where, \(x\): is the original vibration signal, \(x^{\prime}\): is the denoised signal, \(N\): is the length of the vibration signal. ## VIII Results and Discussion Before presenting and discussing the obtained results, it is convenient to demonstrate the important role of proper signal companding in mitigating the effects of nonlinear power amplification. In Fig. 11.a, it can be seen that the uncompressed vibration signal experiences high-amplitude excitations between the signal's indices \(350\) and \(550\). These amplitude excitations exceed the HPA saturation level, plotted as a dashed horizontal line in the Figure. In the absence of a proper signal companding mechanism, such excitations in the signal's waveform- directly related to the monitored system and would carry vital information about its current condition- are subject to the nonlinear distortion of the HPA. #### V-A1 **PAPR reduction** As shown in Fig. 11.b, both \(\mu\)-law compression, and the proposed compression are effective in reducing the PAPR of the test vibration signals. Specifically, the proposed compression and \(\mu\)-law compression have reduced the probability of exhibiting a PAPR of \(8\) dB in the test vibration signals from \(100\%\) to \(0.02\%\) and \(0.06\%\), respectively. However, as previously mentioned, \(\mu\)-law compression relies on preserving the signal's peak amplitude while increasing its small amplitudes. Consequently, all amplitudes in the \(\mu\)-law compressed form of the vibration signal surpass the HPA saturation level, as illustrated in Fig. 11.a, leading to significant distortion in the compressed signal, as shown later. In contrast, in the proposed compression, the source autoencoder learns how to reconstruct and compress the signal while avoiding the increase in its average power as explained in Section V.B. This is demonstrated in Fig. 11.a, which shows that the majority of the amplitudes of the proposed-compressed signals are compressed and maintained below the saturation level. #### V-A2 **Average power of compressed signals** Fig. 12 depicts ratios of the mean average power of the compressed test signals with respect to the normalized average power of the original signals. As shown, the \(\mu\)-law compressed form of the vibration signal, on average, exhibits more than a 13-fold increment in its average power due to the enlargement of its small amplitudes. While this would reduce the quantization noise in analog-to-digital conversion, it would cause severe nonlinear distortion in the signal when passing through the HPA. 
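For reference, the distortion part of this evaluation can be sketched as follows: each signal is passed through the Rapp HPA of Eq. (23) with p = 2 and a = 1, its amplitude constellation is rebuilt, and the result is compared with the reference constellation through Eqs. (26)-(27). Constellations are obtained as in the snippet after Eq. (25); the function names below are illustrative.

```python
import numpy as np

def rapp_hpa(x, a_sat, gain=1.0, p=2.0):
    """Sample-wise Rapp AM/AM response of Eq. (23); AM/PM distortion is neglected."""
    a = gain * np.abs(x)
    return np.sign(x) * a / (1.0 + (a / a_sat) ** (2 * p)) ** (1.0 / (2 * p))

def evm_percent(con_test, con_ref):
    """RMS EVM of Eq. (27), normalized by the largest magnitude in the reference set."""
    err = con_test - con_ref                            # error vectors, Eq. (26)
    rms = np.sqrt(np.mean(np.sum(err ** 2, axis=1)))    # RMS of |error_v|
    return 100.0 * rms / np.max(np.linalg.norm(con_ref, axis=1))
```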
Regarding the proposed compression, it slightly reduces the average power of the compressed form of the vibration signal. In terms of HPA nonlinearity, the slight decrease in average power shifts the input power of the compressed signals towards the linear region of the HPA. This is demonstrated in Fig. 13, where the amplitude constellations of both the original and compressed test signals are displayed alongside the HPA's response curve. The HPA saturation level is also indicated as a dashed line on both axes. This visual setup provides useful insights into PAPR characteristics of the generated vibration signals, the behavior of the HPA, and the design requirements of the signal companding. Specifically, considering the peak amplitudes in the plot, it is obvious that all uncompressed signals and their \(\mu\)-law compressed forms will experience peak distortion after passing through the HPA since their peaks exceed the saturation level of the HPA. As for the mean amplitudes, all of the \(\mu\)-law compressed forms will experience significant and frequent amplitude distortion after passing through the HPA since their mean amplitude exceeds the saturation level. This is also the case for a considerable part of the uncompressed signals. On the contrary, the proposed framework compressed all the test signals so that their peaks and mean amplitudes fall below the saturation level. As a result, the compressed forms of the vibration signals are not subject to nonlinear distortion of the HPA. However, a slight amplitude distortion is still expected due to the soft limiting nature of the HPA and the imperfect reconstruction of the autoencoders. It is worth mentioning that the visual setup of Fig. 13 can be adapted to various VECM systems to gain more insights into the PAPR characteristics, nonlinear behavior, and requirements for PAPR reduction. Fig. 11: (a) Original and compressed vibration signals and (b) CCDFs of original and compressed signals. Fig. 12: Average power ratios of original and compressed signals with respect to normalized average power of original signals. #### Vi-B3 **Distortion due to nonlinear power amplification** Fig. 14 shows an uncomppanded vibration signal before and after the HPA, its \(\mu\)-law commanded form (_the word commanded is used here to refer to the signal that is compressed, passed through the HPA at the source, and expanded at the destination_), and its proposed-companded form. As shown, the uncompressed signal and its \(\mu\)-law commanded form experienced a significant nonlinear distortion as their peak values are restricted to the HPA saturation level. While on the other hand, the amplitudes of its proposed-companded form are free of nonlinear distortion. Since the proposed framework compresses the signal so that its amplitudes fall in the linear region of the HPA, all amplitudes-- even the ones that exceed the HPA saturation level-- are restored after the expansion. The amplitude consultations of the uncomppanded test signals, their \(\mu\)-law commanded forms, and their proposed-companded forms are displayed in Fig. 15. By co-locating these constellations with the reference constellation as shown in Fig. 15.d, a convenient visual comparison to asses the nonlinear distortion can be made. 
The comparison clearly shows that the uncompanded signals and their \(\mu\)-law commanded forms experience severe nonlinear distortion at the destination while, on the other hand, the proposed companding framework avoids nonlinear distortion and successfully restores the reference constellation--to a large extent-- at the destination. A comparison among the obtained EVM values is shown in Fig. 16. The comparison demonstrates, in a quantified manner, the effectiveness of the proposed framework in mitigating the effects of nonlinear distortion. Specifically, while the uncompanded vibration signals and their \(\mu\)-law commanded forms suffered from a very high distortion (\(>60\%\) EVM), the proposed-companded forms experienced very low distortion (\(<7\%\) EVM). As previously stated, the EVM quantifies the total system degradation experienced by the signals. For the uncompanded vibration signals and their \(\mu\)-law commanded forms, the exhibited distortion is exclusively caused by the nonlinear power amplification. Regarding the proposed framework, the factors that contributed to the total system degradation are: * Soft limiting nature of the HPA: with \(p=2\), the used SSPA model acts as a soft limiter. * Autoencoder error: due to imperfect signal reconstruction of the autoencoders during compression/expansion stages. This error can be reduced by conducting more fine-tuning for hyperparameters of the autoencoders. * Noise presence and channel effects: While these impairments are not considered in the evaluation setup related to the obtained EVM results, they have a strong influence on the total system degradation in practical situations. The obtained results from the EVM evaluation show that in the presence of nonlinear devices and the absence of a proper Fig. 14: Original and expanded vibration signals after passing through HPA. Fig. 13: Constellations of uncompressed and compressed signals along with HPA response curve. mechanism to reduce the PAPR, VBCM systems could suffer from severe nonlinear distortion. The results also confirm the effectiveness of the proposed framework in mitigating the effects of such distortion. #### V-B4 **Spectral spreading** Fig. 17 shows the mean PSD plots of the test vibration signals, their \(\mu\)-law-compressed forms, and their proposed-companaded forms. The mean PSD of each of these three sets is calculated by estimating the individual PSD of each signal in the set after passing through the HPA. Accordingly, the mean PSD is obtained by averaging the estimated PSDs. The \(\mu\)-law-compressed forms experienced higher spectral broadening than the proposed-compressed forms. This regrowth in the spectrum is attributed mainly to the nonlinear distortion caused by the HPA. However, it should be mentioned that the logarithmic-based nature of the compression mechanism leads to spectral regrowth in the spectrum of the compressed signal. #### V-B5 **Signal denoising** To evaluate the denoising capability of the proposed framework, the test vibration signals are first compressed using the proposed compression, passed through Fig. 16: EVM values of uncompressed and expanded signals. Fig. 17: Normalized mean PSD plots of original signals, proposed-compressed signals, and \(\mu\)-law compressed signals after passing through the HPA. Fig. 15: Amplitude constellations after passing through the HPA: (a) uncompressed signals, b) \(\mu\)-law expanded, (c) proposed-expanded signals, and (d) original signals of the PU dataset along with uncompressed and expanded signals.
2308.03345
On the shape of correlation matrices for unitaries
For a positive integer $n$, we study the collection $\mathcal{F}_{\mathrm{fin}}(n)$ formed of all $n\times n$ matrices whose entries $a_{ij}$, $1\leq i,j\leq n$, can be written as $a_{ij}=\tau(U_j^*U_i)$ for some $n$-tuple $U_1, U_2, \ldots, U_n$ of unitaries in a finite-dimensional von Neumann algebra $\mathcal{M}$ with tracial state $\tau$. We show that $\mathcal{F}_{\mathrm{fin}}(n)$ is not closed for every $n\geq 8$. This improves a result by Musat and R{\o}rdam which states the same for $n\geq 11$.
Michiya Mori
2023-08-07T06:55:19Z
http://arxiv.org/abs/2308.03345v2
# On the shape of correlation matrices for unitaries ###### Abstract. For a positive integer \(n\), we study the collection \(\mathcal{F}_{\mathrm{fin}}(n)\) formed of all \(n\times n\) matrices whose entries \(a_{ij}\), \(1\leq i,j\leq n\), can be written as \(a_{ij}=\tau(U_{j}^{*}U_{i})\) for some \(n\)-tuple \(U_{1},U_{2},\ldots,U_{n}\) of unitaries in a finite-dimensional von Neumann algebra \(\mathcal{M}\) with tracial state \(\tau\). We show that \(\mathcal{F}_{\mathrm{fin}}(n)\) is not closed for every \(n\geq 8\). This improves a result by Musat and Rordam which states the same for \(n\geq 11\). Key words and phrases:quantum correlation matrix 2020 Mathematics Subject Classification: Primary 46L10, 81P40 The author is supported by JSPS KAKENHI Grant Number 22K13934. ## 1. Introduction The Connes Embedding Problem concerning "approximation" of a type II\({}_{1}\) von Neumann algebra by finite-dimensional ones has long been regarded as one of the most important problems in the theory of operator algebras. By now this problem is known to be equivalent to various other problems. One of these problems is claimed to be resolved in the preprint [4], and it implies that the Connes Embedding Problem has a negative answer. However, the difference between the world of general type II\({}_{1}\) von Neumann algebras and the finite-dimensional world is still quite mysterious. Let \(n\) be a positive integer. We consider the collection \(\mathcal{D}(n)\) (resp. \(\mathcal{D}_{\mathrm{fin}}(n)\)) formed of all \(n\times n\) matrices whose entries \(a_{ij}\), \(1\leq i,j\leq n\), can be written as \(a_{ij}=\tau(P_{i}P_{j})\) for some \(n\)-tuple \(P_{1},P_{2},\ldots,P_{n}\) of projections in a finite von Neumann algebra (resp. finite-dimensional von Neumann algebra) \(\mathcal{M}\) with (normal faithful) tracial state \(\tau\). It is known that the existence of \(n\) such that \(\mathcal{D}_{\mathrm{fin}}(n)\) is not dense in \(\mathcal{D}(n)\) is equivalent to the negative answer to the Connes Embedding Problem. Likewise, we may consider unitaries instead of projections. Let \(\mathcal{G}(n)\) (resp. \(\mathcal{F}_{\mathrm{fin}}(n)\)) denote the set of all \(n\times n\) matrices whose entries \(a_{ij}\), \(1\leq i,j\leq n\), can be written as \(a_{ij}=\tau(U_{j}^{*}U_{i})\) for some \(n\)-tuple \(U_{1},U_{2},\ldots,U_{n}\) of unitaries in a finite von Neumann algebra (resp. finite-dimensional von Neumann algebra) \(\mathcal{M}\) with tracial state \(\tau\). Then the existence of \(n\) such that \(\mathcal{F}_{\mathrm{fin}}(n)\) is not dense in \(\mathcal{G}(n)\) is equivalent to the negative answer to the Connes Embedding Problem. More information concerning the sets \(\mathcal{D}(n)\), \(\mathcal{D}_{\mathrm{fin}}(n)\), \(\mathcal{G}(n)\), and \(\mathcal{F}_{\mathrm{fin}}(n)\) (their motivation, the relation to the Connes Embedding Problem and quantum information theory, and applications to the theory of von Neumann algebras) is detailed in [6]. It is natural to pursue a better understanding of the shapes of \(\mathcal{D}(n)\), \(\mathcal{D}_{\mathrm{fin}}(n)\), \(\mathcal{G}(n)\), and \(\mathcal{F}_{\mathrm{fin}}(n)\) for a small \(n\). It is easily seen that these sets are bounded and convex. A technique of ultraproduct enables us to show that \(\mathcal{D}(n)\) and \(\mathcal{G}(n)\) are compact. The following problems are of particular interest. 
**Problem 1.1**.: _What is the smallest \(n\) such that \(\mathcal{D}_{\mathrm{fin}}(n)\) is not compact?_ **Problem 1.2**.: _What is the smallest \(n\) such that \(\mathcal{D}_{\mathrm{fin}}(n)\) is not dense in \(\mathcal{D}(n)\)?_ **Problem 1.3**.: _What is the smallest \(n\) such that \(\mathcal{F}_{\mathrm{fin}}(n)\) is not compact?_ **Problem 1.4**.: _What is the smallest \(n\) such that \(\mathcal{F}_{\mathrm{fin}}(n)\) is not dense in \(\mathcal{G}(n)\)?_ Let \(n_{k}\) denote the solution of Problem 1.\(k\), \(k=1,2,3,4\). Recall that the negative answer to the Connes Embedding Problem implies \(n_{2},n_{4}<\infty\). Dykema and Juschenko proved that every point of \(\mathcal{G}(n)\) comes from an \(n\)-tuple of commuting unitaries if \(n\leq 3\)[2, Corollary 2.8]. This implies \(n_{3},n_{4}\geq 4\). As for \(n_{1}\) and \(n_{2}\), Dykema-Paulsen-Prakash obtained the upper bound \(n_{1}\leq 5\) in [3] (see also [6]). Musat and Rordam applied it to show \(n_{3}\leq 11\)[6, Theorem 3.6]. They also gave \(n_{1},n_{2}\geq 3\)[6, Proposition 2.1]. In [7, Theorem 4.8], Russell proved that \(n_{1},n_{2}\geq 4\). Our main result is that \(n_{3}\leq 8\). Thus, after our contribution we know \[4\leq n_{1}\leq 5,\quad 4\leq n_{2},\quad 4\leq n_{3}\leq 8,\quad\text{and} \quad 4\leq n_{4}.\] Note that the proof of \(n_{1}\leq 5\) in [3, 6] depends on the structure of finite sums projections given by Kruglyak, Rabanovich, and Samoilenko [5]. Our proof of \(n_{3}\leq 8\) uses the structure of finite products of self-adjoint unitaries. ## 2. Results Our goal in the rest of this article is to show that \(\mathcal{F}_{\mathrm{fin}}(n)\) is not closed if \(n\geq 8\). **Lemma 2.1**.: _Let \(\mathcal{M}\) be a von Neumann algebra with faithful tracial state \(\tau\). Let \(U\in\mathcal{M}\) be a unitary. Then the following are equivalent._ 1. \(U=U^{*}\)_._ 2. _The operator_ \((U+iI)/\sqrt{2}\) _is also a unitary._ 3. _There is a unitary operator_ \(V\in M\) _such that_ \[\sqrt{2}\operatorname{Re}\tau(U^{*}V)-\operatorname{Im}\tau(U-\sqrt{2}V)=2.\] Proof.: \((1)\Leftrightarrow(2)\) This is clear by looking at the spectrum. \((2)\Leftrightarrow(3)\) If \(V\in\mathcal{M}\) is unitary, then \[(U+iI-\sqrt{2}V)^{*}(U+iI-\sqrt{2}V)=4I-iU+iU^{*}+\sqrt{2}iV-\sqrt{2}iV^{*}- \sqrt{2}U^{*}V-\sqrt{2}V^{*}U.\] It follows that \[\frac{1}{2}\tau((U+iI-\sqrt{2}V)^{*}(U+iI-\sqrt{2}V))=2+\operatorname{Re}\tau( -iU+\sqrt{2}iV-\sqrt{2}U^{*}V).\] Since \(\tau\) is faithful, the condition \(U+iI-\sqrt{2}V=0\) is equivalent to \(2+\operatorname{Re}\tau(-iU+\sqrt{2}iV-\sqrt{2}U^{*}V)=0\), and this leads to the desired conclusion. We use the following well-known fact (see [1, Proposition 3.12]). **Lemma 2.2**.: _If \(\mathcal{M}\) is a von Neumann algebra of type II\({}_{1}\) and \(\kappa\in\mathbb{R}\), then there are self-adjoint unitaries \(S_{1},S_{2},S_{3},S_{4}\in\mathcal{M}\) such that \(S_{1}S_{2}S_{3}S_{4}=e^{2\pi i\kappa}I\)._ On the other hand, in the matrix algebra \(\mathbb{M}_{n}\), any finite product of self-adjoint unitaries has determinant \(\pm 1\). Therefore, in a finite-dimensional von Neumann algebra, a finite product of self-adjoint unitaries is never equal to the operator \(e^{2\pi i\kappa}I\) for an irrational \(\kappa\). 
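The finite-dimensional obstruction used below is easy to observe numerically: in \(\mathbb{M}_{n}\) a self-adjoint unitary can be written as \(S=I-2P\) with \(P\) an orthogonal projection, so any finite product of such operators has determinant \(\pm 1\), whereas \(\det(e^{2\pi i\kappa}I)=e^{2\pi i\kappa n}\) is not \(\pm 1\) when \(\kappa\) is irrational. A small NumPy check (helper names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_self_adjoint_unitary(n, rank):
    """S = I - 2P for a random rank-`rank` orthogonal projection P; then S = S*, S^2 = I."""
    m = rng.standard_normal((n, rank)) + 1j * rng.standard_normal((n, rank))
    q, _ = np.linalg.qr(m)                    # orthonormal columns spanning ran(P)
    return np.eye(n, dtype=complex) - 2.0 * (q @ q.conj().T)

n = 6
product = np.eye(n, dtype=complex)
for _ in range(4):
    product = product @ random_self_adjoint_unitary(n, int(rng.integers(1, n)))

det = np.linalg.det(product)
print(det)                                    # numerically +1 or -1
assert abs(det.imag) < 1e-9 and abs(abs(det.real) - 1.0) < 1e-9

kappa = np.sqrt(2.0) - 1.0                    # an irrational number
print(np.linalg.det(np.exp(2j * np.pi * kappa) * np.eye(n)))   # e^{2 pi i kappa n}, not +-1
```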
**Theorem 2.3**.: _If \(n\geq 8\), then the set \(\mathcal{F}_{\mathrm{fin}}(n)\) is not closed._ Proof.: Fix an irrational number \(\kappa\) and self-adjoint unitaries \(S_{1},S_{2},S_{3},S_{4}\) in the AFD II\({}_{1}\) factor \((\mathcal{R},\tau_{\mathcal{R}})\) with \(S_{1}S_{2}S_{3}S_{4}=e^{2\pi i\kappa}I\). We set \[U_{1}:=I,\quad U_{2}:=S_{1},\quad U_{3}:=S_{1}S_{2},\quad U_{4}:=S_{1}S_{2}S_{3 }=e^{2\pi i\kappa}S_{4},\] \[U_{5}:=\frac{S_{1}+iI}{\sqrt{2}},\quad U_{6}:=\frac{S_{1}(S_{2}+iI)}{\sqrt{2}},\quad U_{7}:=\frac{S_{1}S_{2}(S_{3}+iI)}{\sqrt{2}},\quad U_{8}:=\frac{S_{4}+iI }{\sqrt{2}},\] and \(U_{k}:=I\) for all \(9\leq k\leq n\). Since \(\mathcal{R}\) is AFD, the matrix \((\tau(U_{j}^{*}U_{i}))_{1\leq i,j\leq n}\) belongs to the closure of \(\mathcal{F}_{\mathrm{fin}}(n)\). We show that this matrix is not in \(\mathcal{F}_{\mathrm{fin}}(n)\). Assume (towards a contradiction) that there are unitaries \(V_{j}\), \(j=1,2,\ldots,8\), in a finite-dimensional von Neumann algebra \(\mathcal{M}\) with faithful tracial state \(\tau\) such that \(\tau(V_{j}^{*}V_{i})=\tau_{\mathcal{R}}(U_{j}^{*}U_{i})\), \(1\leq i,j\leq 8\). For \(j=1,2,3\), we have \[\sqrt{2}\operatorname{Re}\tau((V_{j}^{*}V_{j+1})^{*}(V_{j}^{*}V_ {j+4}))-\operatorname{Im}\tau((V_{j}^{*}V_{j+1})-\sqrt{2}(V_{j}^{*}V_{j+4}))\] \[=\sqrt{2}\operatorname{Re}\tau(V_{j+1}^{*}V_{j+4})-\operatorname{ Im}(\tau(V_{j}^{*}V_{j+1})-\sqrt{2}\tau(V_{j}^{*}V_{j+4}))\] \[=\sqrt{2}\operatorname{Re}\tau_{\mathcal{R}}(U_{j+1}^{*}U_{j+4})- \operatorname{Im}(\tau_{\mathcal{R}}(U_{j}^{*}U_{j+1})-\sqrt{2}\tau_{\mathcal{ R}}(U_{j}^{*}U_{j+4}))\] \[=\sqrt{2}\operatorname{Re}\tau_{\mathcal{R}}\left(\frac{I+iS_{j} }{\sqrt{2}}\right)-\operatorname{Im}\left(\tau_{\mathcal{R}}(S_{j})-\sqrt{2} \tau_{\mathcal{R}}\left(\frac{S_{j}+iI}{\sqrt{2}}\right)\right)\] \[=2.\] By Lemma 2.1, \(V_{1}^{*}V_{2}\), \(V_{2}^{*}V_{3}\), \(V_{3}^{*}V_{4}\) are self-adjoint. Similarly, the equality \[\sqrt{2}\operatorname{Re}\tau((e^{-2\pi i\kappa}V_{1}^{*}V_{4})^ {*}(V_{1}^{*}V_{8}))-\operatorname{Im}\tau((e^{-2\pi i\kappa}V_{1}^{*}V_{4})- \sqrt{2}(V_{1}^{*}V_{8}))\] \[=\sqrt{2}\operatorname{Re}e^{2\pi i\kappa}\tau(V_{4}^{*}V_{8})- \operatorname{Im}(e^{-2\pi i\kappa}\tau(V_{1}^{*}V_{4})-\sqrt{2}\tau(V_{1}^{*} V_{8}))\] \[=\sqrt{2}\operatorname{Re}e^{2\pi i\kappa}\tau_{\mathcal{R}}(U_ {4}^{*}U_{8})-\operatorname{Im}(e^{-2\pi i\kappa}\tau_{\mathcal{R}}(U_{1}^{*} U_{4})-\sqrt{2}\tau_{\mathcal{R}}(U_{1}^{*}U_{8}))\] \[=\sqrt{2}\operatorname{Re}e^{2\pi i\kappa}\tau_{\mathcal{R}}\left( \frac{e^{-2\pi i\kappa}(I+iS_{4})}{\sqrt{2}}\right)-\operatorname{Im}\left(e^{- 2\pi i\kappa}\tau_{\mathcal{R}}(e^{2\pi i\kappa}S_{4})-\sqrt{2}\tau_{ \mathcal{R}}\left(\frac{S_{4}+iI}{\sqrt{2}}\right)\right)\] \[=2\] implies that \(e^{-2\pi i\kappa}V_{1}^{*}V_{4}\) is also self-adjoint. Therefore, the operator \(e^{2\pi i\kappa}I\) decomposes into the product \((V_{1}^{*}V_{2})\cdot(V_{2}^{*}V_{3})\cdot(V_{3}^{*}V_{4})\cdot(e^{-2\pi i \kappa}V_{1}^{*}V_{4})^{*}\) of four self-adjoint unitaries. This is impossible because \(\mathcal{M}\) is finite-dimensional. By imitating the proof of [6, Theorem 4.1], we see that Theorem 2.3 implies certain unfactorizability of some unital completely positive trace preserving mapping on \(\mathbb{M}_{n}\) for every \(n\geq 8\). See [6] for the details. It seems plausible for the author to guess that \(n_{1}\in\{4,5\}\) is close to \(n_{3}\) (and \(n_{2}\) is close to \(n_{4}\)). 
Unfortunately, the above discussion, which heavily relies on the use of self-adjoint unitaries, apparently does not give an upper bound of \(n_{3}\) that is better than \(8\). ### Acknowledgements The author appreciates Travis B. Russell (Texas Christian University) for notifying the author that part of the results in the previous version of this article was already given in [7] with similar techniques.
2310.19157
Accelerations of large inertial particles in turbulence
Understanding the dynamics of material objects advected by turbulent flows is a long-standing question in fluid dynamics. In this perspective article we focus on the characterization of the statistical properties of non-interacting finite-sized massive spherical particles advected by a vigorous turbulent flow. We study the fluctuations and temporal correlations of particle accelerations and explore their behaviours with respect to the particle size and the particle mass density by means of fully-resolved numerical simulations. We observe that the measured trends cannot be interpreted as the simple multiplicative combination of the two dominant effects: the spatial filtering of fluid accelerations and the added-mass-adjusted fluid-to-particle density ratio. We argue that other hydrodynamical forces or effects, e.g. preferential flow sampling, still play a significant role even at the largest particle sizes, which here reach the integral scale of turbulence.
Yaning Fan, Cheng Wang, Linfeng Jiang, Chao Sun, Enrico Calzavarini
2023-10-29T21:15:48Z
http://arxiv.org/abs/2310.19157v2
# Accelerations of large inertial particles in turbulence ###### Abstract Understanding the dynamics of material objects advected by turbulent flows is a long standing question in fluid dynamics. In this perspective article we focus on the characterization of the statistical properties of non-interacting finite-sized massive spherical particles advected by a vigorous turbulent flow. We study the fluctuations and temporal correlations of particle accelerations and explore their behaviours with respect to both the particle size and the particle mass density by means of fully-resolved numerical simulations. We observe that the measured trends can not be interpreted as the simple multiplicative combination of the two dominant effects: the spatial filtering of fluid accelerations and the added-mass-adjusted fluid-to-particle density ratio. We argue that other hydrodynamical forces or effects (e.g. preferential flow sampling) have still a significant role even at the largest particle sizes, which are here of the order of the integral scale of turbulence. ## 1 Introduction Fluid dynamics turbulence is a lasting challenge in science. Rather than representing a single fundamental question - the problem's definition itself changed over epochs and disciplines [1] - it is a faceted topic with ramifications into a plethora of open issues and applications. Among the many problems connected to turbulent flows a long standing one concerns the description of the transport by the flow of material objects [2]. Clearly, the study of the forces exerted on a body immersed in a flow is a classic topic in the broader field of fluid dynamics, and many researchers have devoted their attention to it. When considering the case of spherical bodies, we can refer to Stokes's work on drag force in creeping flows, followed by subsequent studies by Oseen, Boussinesq, and Basset on inertial and unsteady corrections (history force) to drag. Tchen, Corrsin, and Lumley conducted studies on the pressure gradient force, while Autom focused on addder-mass and lift forces in the context of rotational potential flows (see [3] for an historical overview). The unified modern formulation on the dynamics of a spherical particle in an unsteady and non uniform viscous flow is due to Maxey & Riley and Gatignol (MRG equation) [4, 5]1. However, the complexity of the problem increases considerably when the carrying flow is turbulent, due to its non-smooth and erratic characteristics both in time and space. In this case, any kind of quantitative investigation must embrace the statistical approach [8, 9]. Footnote 1: The same equation, without Faxén corrections [6], was derived by Shu-Tang Tsai in 1957 and published in Chinese language, see [7] for its recent translation. In the last decades, a large body of studies has been dedicated to the problem of particles advected by turbulence. This is largely attributed to the emerging methods of experimental fluid/particle tracking by digital cameras [10, 11, 12] and by the ever increasing high-performance numerical simulations [8, 13]. While the phenomenology is presently relatively well explored for particles whose size is of the order of the dissipative scale and the associated particle Reynolds number is small, there is only a limited number of studies for the cases of less idealized objects of larger size [14, 15, 16, 17, 18] or with non regular shape [19, 20] or inhomogeneous mass density [21]. 
The question is important for applications, among the many we wish to mention the highly topical issue of transport and dispersion in turbulent ocean of plastic debris [22]. Such debris encompass ranges of scales from the dissipative to the inertial ones, have a variety of shapes and buoyancy properties [23] and can undergo turbulence induced fragmentation [24]. Advances in the comprehension of the problem are crucially linked to a better insight into their translational dynamics. In this article we aim at performing a step forward into the understanding of this complex phenomenon. We aim at highlighting how the particle mass density affects the acceleration properties for particles whose size is larger than the dissipative scale of turbulence. Our approach is computational, and makes uses of numerical simulations that are capable to resolve all the spatio-temporal scales active in the problem. Due to the wide range of scales at work, these simulations are possible at the price of considering only single (or at best few non-interacting) particles and moderate turbulent flow intensities. Despite this limitation, our study allows to identify trends that are expected to be valid also at larger turbulent flow Reynolds numbers and it provides new benchmark data for future studies. ## 2 Theoretical considerations Recent experimental and numerical studies have shown that the translational [17, 25], and in part the rotational [26], statistical properties of inertial-scale-sized neutrally buoyant particles advected by a turbulent flow can be explained in terms of a _coarse-graining_ effect of the underlying turbulent flow. In other words the particle acceleration behaves - statistically - the same as the spatially-filtered fluid acceleration field unperturbed by the particle. This mechanism corresponds to assuming that the particle feedback on the carrying turbulent flow is negligible. A neutral particle has on average a small slip velocity compared to the surrounding fluid [27], it is therefore reasonable to assume that the dissipative (surface) force associated to viscous drag is sub-leading with respect to the inertial (volumetric) one. In summary it seems reasonable to state that the acceleration of a particle of typical size \(d\) goes as \(\mathbf{a}_{d}\sim\langle D_{t}\mathbf{u}\rangle_{V}\) where \(D_{t}\mathbf{u}=\partial_{t}\mathbf{u}+\mathbf{u}\cdot\nabla\mathbf{u}\) is the fluid acceleration field and \(\langle\dots\rangle_{V}\) denotes a spatial average over a volume (\(V\)) equivalent to the one of the particle. When the particle is non-neutrally buoyant this scenario is complicated by two factors, on one hand the different inertia between the particle and the fluid tends to suppress/enhance (respectively for heavy/light particles) the fluid accelerations of the surrounding carrying flow. This effect is not plainly proportional to fluid-to-particle density ratio, \(\rho_{f}/\rho_{p}\), but is instead proportional to the parameter \(\beta=3\rho_{f}/(\rho_{f}+2\rho_{p})\) because of the added-mass force exerted by the fluid on the particle [28]. Note that \(\beta\) varies in the interval \([0,3]\), and the limits correspond to the cases of very massive particles (ballistic limit) and to that of very light particles where the inertia is all in the displaced surrounding fluid (such as for the case of air bubbles in water); \(\beta=1\) identifies the neutrally buoyant case. 
The second factor is the occurrence of Archimedes buoyancy, which leads to an extra acceleration term of the form \((1-\beta)\mathbf{g}\). In this study for simplicity we neglect the effect of gravity, and this is always possible for sufficiently intense turbulent flows2. Adding together the mentioned volume and surface forces the following model as been put forward for the motion of finite-sized particles [29, 30]: Footnote 2: However, note that at increasing the particle size in a fixed intensity turbulent flow, the relative importance of the coarse grained fluid acceleration decrease with respect to buoyancy. \[\ddot{\mathbf{x}}=\beta\left(\langle D_{t}\mathbf{u}\rangle_{V}+\frac{12\ \nu\ c(Re_{d})}{d^{2}}(\langle\mathbf{u}\rangle_{S}-\dot{\mathbf{x}})\right)+(1- \beta)\mathbf{g}. \tag{1}\] This is an adaptation of the MRG equation, which retains the so called Faxen terms [5], with the addition of the empirical Shiller-Naumann drag correction \(c(Re_{d})=1+0.15Re_{d}^{0.687}\) where \(Re_{d}=||\langle\mathbf{u}\rangle_{S}-\dot{\mathbf{x}}||d/\nu\) is the instantaneous particle Reynolds number and the omission of the history force. The symbols \(\langle\dots\rangle_{S}\) denotes, similarly to the volume average, a spatial average over a surface (S) equivalent to the one of the particle. These average terms account for the effect of the local flow non uniformity at the scale of the particle. If the particle is small they account simply for the effect of the curvature (i.e. the laplacian) of the local flow velocity and acceleration fields, if the particle is large they include higher even-order spatial-derivatives, see also [31]. This model, known as Faxen corrected (FC) model [29, 30], predicts qualitatively a series of trends in the single- and two-time acceleration statistics, that will be discussed later on in this article. We can now ask, what are the statistical features of the particle acceleration in a turbulent flow for the non-neutral (\(\beta\neq 1\)) case? When particles are below the dissipative scale, both the Faxen and the Shiller-Naumann corrections can be neglected in eq. (1) and expanding in the small parameter \(d^{2}/(12\nu\beta)\ll 1\)(the drag response time) one obtains: \[\ddot{\mathbf{x}}\simeq D_{t}\mathbf{u}+\frac{d^{2}}{12\nu\beta}(\beta-1)\left(D_{t} \mathbf{u}\cdot\nabla\mathbf{u}+D_{t}^{2}\mathbf{u}\right). \tag{2}\] This tells that the particle acceleration variance begins to deviate from the one of a fluid tracer quadratically with its size \(d\) and only when \(\beta\neq 1\). For finite-sized particles a similar perturbative estimate is not possible. However, the response time associated to the drag becomes long and one can assume that the associated force becomes negligible. Taken together, all these considerations lead us to guess that the inertia term \(\beta\langle D_{t}\mathbf{u}\rangle_{V}\) is the most important in determining the statistics of the acceleration of large particles. This study aims at testing this hypothesis and discussing its implications. ## 3 Methods ### Fully resolved numerical study We address the above questions by solving the fluid-particle coupled problem which comprises the incompressible Navier-Stokes equations for the fluid dynamics and the Newton-Euler equation for the particle motion, with the addition of no-slip boundary conditions at their interfaces. The numerical methods, based on the coupled Lattice-Boltzmann and Immersed-Boundary algorithms, have been already described elsewhere [26, 32]. 
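For later reference, the right-hand side of the Faxén-corrected model (1) is compact enough to be written out; the sketch below assumes that the volume- and surface-averaged fluid quantities \(\langle D_{t}\mathbf{u}\rangle_{V}\) and \(\langle\mathbf{u}\rangle_{S}\) are provided by the flow solver (how those averages are computed is not shown, and the function names are illustrative).

```python
import numpy as np

def beta_factor(rho_p, rho_f):
    """Added-mass density factor beta = 3*rho_f / (rho_f + 2*rho_p)."""
    return 3.0 * rho_f / (rho_f + 2.0 * rho_p)

def fc_acceleration(v_p, u_surf, dtu_vol, d, nu, rho_p, rho_f, g=np.zeros(3)):
    """Right-hand side of Eq. (1).

    v_p     : particle velocity
    u_surf  : fluid velocity averaged over the particle surface, <u>_S
    dtu_vol : fluid acceleration D_t u averaged over the particle volume, <D_t u>_V
    """
    b = beta_factor(rho_p, rho_f)
    u_rel = u_surf - v_p
    re_d = np.linalg.norm(u_rel) * d / nu            # instantaneous particle Reynolds number
    c = 1.0 + 0.15 * re_d ** 0.687                   # Schiller-Naumann drag correction
    return b * (dtu_vol + 12.0 * nu * c / d ** 2 * u_rel) + (1.0 - b) * g
```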
For the present study, we carried on new validations for the dynamics of non neutrally buoyant particles. Besides resolution convergence checks we verified that in the presence of gravity the particle trajectory in still fluid agrees with the expected Stokes flow dynamics. The spatial domain where the turbulent flow takes place is a tri-periodic cube. The Taylor-scale based Reynolds number is kept constant at \(Re_{\lambda}=32\), the values of the relevant numerical/physical turbulent flow scales are reported in Table 1. The control particle parameters are varied in the range \(d/\eta=[6.5,18.7]\), note that these represents inertial scales of the turbulent flow, and \(\rho_{p}/\rho_{f}=[0.4,10]\) (corresponding to \(\beta=[0.14,1.67]\)). The particle are evolved for \(O(10^{2})\) large-eddy turnover times. Each simulations contains just one particle, but multiple independent simulations are performed for each particle case; the informations along their trajectory are saved for the analysis. ## 3 Results on acceleration statistics ### Acceleration variance The first quantity we focus on is the single-component acceleration variance, \(\langle a_{d,i}^{2}\rangle\) where \(\langle\dots\rangle\) denotes here the time and ensemble average over many particle trajectories (all the measurements are also given in Table 2). Figure 1(a) displays the trends for such a quantity (normalized by the fluid flow acceleration variance) with respect to the particle diameter (in dissipative scale units) and for particle sets with different density ratios (\(\beta\)). For the neutral case, \(\beta=1\), the acceleration intensity progressively decreases from the unit value for growing particle diameters, and this is due to the spatial filtering effect mentioned above. This feature has been already highlighted in experiments [25, 17] and also reproduced in fully resolved simulations [33, 26]. A one dimensional estimate based on the Kolmogorov 1941 (K41) turbulence theory suggests that \(\langle a_{d,i}^{2}\rangle\sim\langle(D_{t}u_{i})^{2}\rangle\sim(\delta_{d}u)^ {2}/d\sim d^{-2/3}\), where \(\delta_{d}u\) stands for a typical increment of velocity over a scale \(d\). It has been shown both experimentally and numerically that \(\langle a_{d}^{2}\rangle\sim d^{\alpha}\) with the exponent \(\alpha\) varying between \(-4/3\) at low Reynolds (i.e. \(Re_{\lambda}\leq 100\) as here) [33, 26] and \(-2/3\) for developed turbulence [25, 17]. The measurements for \(\beta\neq 1\), which represent the main novelty of the present study, show that the particle acceleration grows with \(\beta\). In order to contrast the density-ratio and the particle-size effect we also trace on Fig.1(a) the results for point-like particles (solid lines). The point-particle (PP) model includes the added mass but does not account for the spatial filtering. It can be obtained from (1) by replacing the volume and surface averages by the point values and by setting \(c(Re_{d})=1\)[29]. While in the limit of small particles the PP model departs weakly from the fluid acceleration value (as suggested by eq. (2)), in the large size limit it converges to \(\langle a_{d}^{2}\rangle\simeq\beta^{2}\langle(D_{t}u_{i})^{2}\rangle\)[29]. This is better evidenced in Fig.1(b), where the acceleration variances are normalized by \(\beta^{2}\langle(D_{t}u_{i})^{2}\rangle\) and all the PP data converge to the same plateau. 
Conversely, the fully resolved simulation results tend to vanish in the asymptotic limit due to the progressively enhanced smoothing by filtering of turbulent fluctuations. The effect of fluid-to-particle density ratio on the acceleration variance \(\langle a_{d}^{2}\rangle\) can be examined by dividing it by the acceleration variance of the neutral particle of the corresponding size \(\langle a_{d,\beta=1}^{2}\rangle\). This is shown in Fig. 2 (a), one can notice that the data still depend on the values of \(\beta\). At this point is worth testing the hypothesis, \[\langle a_{d}^{2}\rangle\simeq\beta^{2}\langle a_{d,\beta=1}^{2}\rangle. \tag{3}\] This is done in Fig.2(b). Although the trend suggests that the curves might overlap for very large-particles, beyond the integral scale of turbulence, \(L\simeq 24\eta\), the lack of collapse of the curves can be interpreted as an evidence of the non-multiplicative effect of the added mass and spatial filtering on inertial-scale particles. We note that this multiplicative effect was instead true in the FC model studied in [29], see in particular their Fig. 1(a) where with a similar rescaling all data collapse for large particle diameters. The origin of this dicrepancy in the FC model remains to be understood. In particular it would be interesting to check if it is due to the non-linear drag. Indeed [29] did not take into account the Shiller-Naumann correction in their study. However, non-linear drag was considered in the FC model studied in [30] for neutrally buoyant particles, where it was shown to have a negligible effect on the acceleration properties. ### Acceleration temporal correlations The study of the temporal correlation functions of the acceleration allows \begin{table} \begin{tabular}{c c c c c c c} \(N^{3}\) & \(\eta/\Delta x\) & \(\tau_{\eta}/\Delta t\) & \(L/\eta\) & \(T_{L}/\tau_{\eta}\) & \(\lambda/\eta\) & \(Re_{\lambda}\) \\ \hline \(128^{3}(256^{3})\) & \(2.1(4.2)\) & \(153(607)\) & \(24.1\) & \(12.4\) & \(11.3\) & \(32\) \\ \end{tabular} \end{table} Table 1: Parameter and relevant scales of the simulated turbulent flow. \(N^{3}\): number of spatial grid points (the larger resolution is used for \(\beta>1\) particles); \(\eta=(\nu^{3}/\epsilon)^{1/4}\): Kolmogorov dissipation length scale in grid space units \(\Delta x\); \(\tau_{\eta}\): Kolmogorov time scale in time-step units \(\Delta t\), \(L=u^{\prime 3}/\epsilon\): integral scale; \(T_{L}=L/u^{\prime}\): large-eddy turnover time; \(\lambda=(15\nu u^{\prime}/\epsilon)^{1/2}\): Taylor micro-scale; \(Re_{\lambda}\): Taylor-Reynolds number. 
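The entries of Table 1 follow from \(\nu\), \(\epsilon\) and \(u^{\prime}\) through the definitions recalled in the caption; a minimal sketch (the Taylor micro-scale is written here in its standard form \(\lambda=(15\nu u^{\prime 2}/\epsilon)^{1/2}\), which is presumably what the caption intends):

```python
import numpy as np

def turbulence_scales(nu, eps, u_rms):
    """Dissipative, integral and Taylor scales from viscosity, dissipation and rms velocity."""
    eta = (nu ** 3 / eps) ** 0.25                  # Kolmogorov length scale
    tau_eta = np.sqrt(nu / eps)                    # Kolmogorov time scale
    L = u_rms ** 3 / eps                           # integral scale
    T_L = L / u_rms                                # large-eddy turnover time
    lam = np.sqrt(15.0 * nu * u_rms ** 2 / eps)    # Taylor micro-scale
    Re_lam = u_rms * lam / nu                      # Taylor-Reynolds number
    return {"eta": eta, "tau_eta": tau_eta, "L": L, "T_L": T_L,
            "lambda": lam, "Re_lambda": Re_lam}
```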
\begin{table} \begin{tabular}{c|c c c c c c} \(\beta\,\backslash\,d/\eta\) & \(6.5419\) & \(9.3444\) & \(11.2119\) & \(13.0785\) & \(15.8774\) & \(18.6739\) \\ \hline \(1.6667\) & \(0.4538\) & \(0.4121\) & \(0.2400\) & \(0.2499\) & \(0.2185\) & \(0.1821\) \\ \(1.3636\) & \(0.3761\) & \(0.3265\) & \(0.2517\) & \(0.2448\) & \(0.1505\) & \(0.1610\) \\ \(1.0000\) & \(0.2749\) & \(0.2118\) & \(0.1736\) & \(0.1475\) & \(0.1204\) & \(0.0974\) \\ \(0.7500\) & \(0.2241\) & \(0.1572\) & \(0.1311\) & \(0.1045\) & \(0.0905\) & \(0.0683\) \\ \(0.5000\) & \(0.1768\) & \(0.1326\) & \(0.1020\) & \(0.0871\) & \(0.0641\) & \(0.0530\) \\ \(0.2727\) & \(0.1547\) & \(0.0977\) & \(0.0788\) & \(0.0605\) & \(0.0453\) & \(0.0351\) \\ \(0.1429\) & \(0.1308\) & \(0.0740\) & \(0.0540\) & \(0.0449\) & \(0.0337\) & \(0.0248\) \\ \end{tabular} \end{table} Table 2: \(\langle a_{i}^{2}\rangle/\langle D_{t}u_{i}^{2}\rangle\), single cartesian component particle acceleration variance normalized by the fluid-tracer acceleration variance, for various \(\beta\) and \(d/\eta\) values at \(Re_{\lambda}=32\). to further reinforce the observations made for the instantaneous acceleration variance. We focus on the integral time, \(T_{I}\), here defined as the integral of the autocorrelation function from time zero to the time it first reaches the null value (i.e. first zero-crossing time \(T_{0}\)): \[T_{I}\equiv\int_{0}^{T_{0}}C(\tau)d\tau,\ \ C(\tau)\equiv\frac{\langle a_{i}(t+\tau)a_{i}(t)\rangle}{\langle(a_{i}(t))^{2}\rangle} \tag{4}\] Figure 3(a,b) shows the \(C(\tau)\) correlation functions for the two small/large particle limiting cases in this study, i.e. for (a) \(d/\eta=6.5\) and (b) \(d/\eta=18.7\). The integral correlation times computed from these curves grow with the particle size and decrease with \(\beta\), see Fig. 3(c) (all \(T_{I}\) measurements are also reported in Table 3). The first trend is usually rationalized in terms of the coarse-graining hypothesis [34]. A particle of size \(d\) is subjected to turbulence fluctuations of that scale; this corresponds to an eddy turnover time \(\tau_{d}=d/\delta_{d}u\sim d^{2/3}\). This prediction is only approximately true for neutrally buoyant particles. In fact, similarly to the trends observed for the acceleration variance, also in the case of the correlation time the measured scaling \(\tau_{d}\sim d^{\gamma}\) has a sensitive Reynolds dependence: it is observed that \(\gamma\leq 2/3\) at small Taylor-Reynolds number [26, 33], while it is \(2/3\leq\gamma\leq 1\) at large Reynolds [17]. Figure 1: (a) Particle acceleration variance normalized by fluid-tracer acceleration variance as a function of the particle diameter (\(d\)) in dissipative units \(\eta\). Data for density ratios \(\beta\) are displayed. The corresponding results for the point-particle (PP) model are reported (solid line). (b) Same data with the normalization \(\beta^{2}\langle(D_{t}u_{i})^{2}\rangle\) for the acceleration variance; this way the PP model tends asymptotically to the unit value. Note that the fully-resolved data with different \(\beta\) do not overlap. Figure 2: (a) Particle acceleration variance normalized by the acceleration variance of the neutrally buoyant particle of the corresponding size \(\langle a_{d,\beta=1}^{2}\rangle\) as a function of the particle diameter (\(d\)) in dissipative units \(\eta\). Note that the curves are approximately horizontal for same-\(\beta\) particle families. (b) Same data with the normalization \(\beta^{2}\langle a_{d,\beta=1}^{2}\rangle\) for the acceleration variance.
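The integral time of Eq. (4) can be estimated directly from discrete, uniformly sampled acceleration signals along a trajectory; a minimal sketch (the simple biased autocovariance estimator used here is adequate for lags much shorter than the record length):

```python
import numpy as np

def integral_time(a, dt):
    """T_I of Eq. (4): integral of C(tau) from zero to its first zero crossing."""
    a = np.asarray(a) - np.mean(a)
    c = np.correlate(a, a, mode="full")[a.size - 1:]   # autocovariance at lags >= 0
    c = c / c[0]                                       # C(0) = 1
    crossing = np.argmax(c < 0.0)                      # index of the first negative value
    if crossing == 0:                                  # no zero crossing in the record
        crossing = c.size - 1
    # trapezoidal rule up to the crossing
    return dt * (0.5 * c[0] + np.sum(c[1:crossing]) + 0.5 * c[crossing])
```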
Although curves get closer, they do not collapse on each other. The above argument can not be straightforwardly extended to non-neutral particles. A possible approximate adaptation is presented in the following. We indicate with \(X_{a}=\frac{1}{2}\langle D_{t}u^{2}\rangle^{1/2}\tau_{a}^{2}\) the length spanned by a fluid particle over the time \((\tau_{a})\) during which the acceleration is correlated (and so approximately constant). Such a length can be travelled by a finite-sized inertial particle over a time \(\tau_{d}=\sqrt{2X_{a}/\langle a_{d}^{2}\rangle^{1/2}}\). Now using the hypothesis (3) one obtains \[\tau_{d}=\frac{\tau_{d,\beta=1}}{\sqrt{\beta}}\sim\frac{d^{2/3}}{\sqrt{\beta}}\,. \tag{5}\] The above prediction is tested in Fig. 3(d): we observe that, while the size dependency of the correlation time for any particle seems to be properly normalized by the neutral case (i.e. the coarse-graining hypothesis holds true), the density dependence is only approximately explained in terms of the above scaling. The collapse is better for the case of light particles, where the above argument is more fitting. ### Acceleration's higher statistical moments Last, we consider statistical properties beyond the second order. This can be done by examining the trends in the shape of the probability density functions (PDF) of the acceleration normalized by its standard deviation. These functions are reported in Figure 4; panel (a) shows the curves for the smallest particles, while (b) for the largest particles explored in this study. The trend towards a Gaussianization of the accelerations when the particle size is increased is clear; this feature was also predicted by the FC model simulations [29]. A similar tendency is observed at increasing the particle mass density: PDFs of heavy particles have shorter tails than neutral particles of the same size; on the contrary, light particles tend to have more extreme accelerations. This trend is evident for the smallest particles, and agrees with former results from PP model simulations [35], while it is here negligible for the largest particles. More robust conclusions can be drawn from the flatness of the acceleration, \(F(a_{d})=\langle a_{d}^{4}\rangle/\langle a_{d}^{2}\rangle^{2}\). Theoretical considerations suggest that \(\langle a_{d}^{4}\rangle/\langle a_{d}^{2}\rangle^{2}\sim\langle(\delta_{d}u)^{8}\rangle/\langle(\delta_{d}u)^{4}\rangle^{2}\sim d^{\zeta_{8}-2\zeta_{4}}\), with \(\zeta_{8}-2\zeta_{4}<0\), where \(\zeta_{p}\) indicates the scaling exponent of the velocity structure functions of order \(p\), i.e., \(\langle(\delta_{d}u)^{p}\rangle\sim d^{\zeta_{p}}\). Figure 3: Single-component acceleration correlation function for particle diameters (a) \(d/\eta=6.5\) and (b) \(d/\eta=18.7\). (c) Integral correlation time of the particle acceleration versus the particle size, both expressed in dissipative units, for different \(\beta\) families. (d) Integral correlation time divided by \(T_{I,\beta=1}/\sqrt{\beta}\) versus the particle size.
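The flatness and its dependence on the particle size can be extracted with a few lines, given acceleration samples for each particle family; a sketch (the log-log slope estimated below corresponds to the scaling exponent discussed next):

```python
import numpy as np

def flatness(a):
    """F = <a^4> / <a^2>^2 for a zero-mean sample of accelerations."""
    a = np.asarray(a) - np.mean(a)
    return np.mean(a ** 4) / np.mean(a ** 2) ** 2

def flatness_exponent(diameters, flatnesses):
    """Least-squares slope of log F versus log d."""
    slope, _ = np.polyfit(np.log(diameters), np.log(flatnesses), 1)
    return slope
```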
\begin{table} \begin{tabular}{|c|c c c c c c|} \hline \(\beta\,\backslash\,d/\eta\) & 6.5419 & 9.3444 & 11.2119 & 13.0785 & 15.8774 & 18.6739 \\ \hline 1.6667 & 1.2126 & 1.5015 & 1.6117 & 1.6425 & 1.7564 & 2.0205 \\ 1.3636 & 1.4170 & 1.5184 & 1.6928 & 1.9592 & 1.9589 & 2.1629 \\ 1.0000 & 1.6263 & 1.7011 & 1.9216 & 2.0704 & 2.2102 & 2.4419 \\ 0.7500 & 1.7408 & 1.8971 & 2.0315 & 2.2539 & 2.5249 & 2.6728 \\ 0.5000 & 1.8806 & 2.0344 & 2.2258 & 2.4176 & 2.6733 & 3.0411 \\ 0.2727 & 2.1082 & 2.5053 & 2.6268 & 2.7740 & 3.2323 & 3.6138 \\ 0.1429 & 2.7127 & 3.1644 & 3.3539 & 3.5064 & 4.1050 & 4.5701 \\ \hline \end{tabular} \end{table} Table 3: \(T_{I}/\tau_{\eta}\), integral correlation time of the particle acceleration normalized by the Kolmogorov time scale, for various \(\beta\) and \(d/\eta\) values at \(Re_{\lambda}=32\). The value of the exponent \(F(a_{d})\sim d^{\phi}\) can be estimated in various ways: it is \(\phi\simeq-0.44\) from the Kolmogorov-Obukhov 1962 model [36] and \(\phi\simeq-0.56\) with the She-Leveque parametrization [37]. Measuring a scaling behaviour for \(F(a_{d})\) is delicate as it requires large datasets. Furthermore, similar to the acceleration variance, large Reynolds numbers are needed in order to have a large scale separation. Although some experiments and simulations did not observe any scaling trend (\(\phi=0\)) [16, 25, 33, 34], there is recently increasing evidence of a reduction of the flatness with size: [17] measured \(\phi\simeq-0.5\pm 0.1\) for inertial-range neutrally buoyant particles, and [38] observed it for bubbles in turbulence. This trend is also supported by the FC model. The reduction of flatness with increasing particle size is confirmed by the present fully resolved simulations (see Figure 4, panel (c)). Furthermore, we observe a weak but detectable flatness amplification with \(\beta\). This was previously observed only for dissipative-scale particles, both in experiments and simulations [35]. Its physical interpretation lies in the phenomenon of preferential sampling: light particles preferentially explore vortex cores, where the fluid centripetal acceleration is large, while heavy particles sample the outside of vortices, which are calmer regions as far as acceleration is concerned, and this is reflected in their normalized acceleration PDFs [39, 40]. This points to the fact that preferential sampling is a small but non-vanishing effect for inertial-scale particles. 
## 5 Conclusions and Perspectives 
Non-interacting finite-sized massive spherical particles advected by a vigorous turbulent flow have been studied by means of fully-resolved numerical simulations. We examined the single- and two-time statistical properties of particle accelerations and explored their behaviours with respect to the particle size and to the particle mass density. In these conditions the inertial forces dominate over the dissipative ones. This study confirms all the tendencies predicted by the coarse-graining picture, namely the fact that both the acceleration variance and its flatness decrease with the particle size, while the opposite holds for the acceleration correlation time. Given the limited extension of the inertial range at the present Reynolds number (\(Re_{\lambda}=32\)) it is difficult to estimate reliable scaling laws as a function of the particle diameter \(d\). However, it seems that the observed behaviours deviate systematically from the expected scaling laws derived from the values of the velocity structure function exponents \(\zeta_{p}\).
This point is delicate and requires further attention. It might be that the scaling laws derived from \(\zeta_{p}\) only hold asymptotically in \(Re_{\lambda}\), or it could be that systematic subleading deviations are present due to the influence of other hydrodynamic effects (such as drag). Looking at the particle density dependence, we have tried to assess the hypothesis \(a_{d}\sim\beta\langle D_{t}u_{i}\rangle_{V}\). Even if the particle acceleration increases with \(\beta\), the observed trends cannot be interpreted as the simple multiplicative combination of the two dominant terms: the spatial filtering of fluid accelerations and the added-mass-adjusted fluid-to-particle density ratio. A similar mismatch is observed in the dependence of the correlation time on \(\beta\), for which we provided a simple model. The study of the acceleration flatness indicates that light particles are more intermittent than heavy ones, and this also when their size is large. Figure 4: PDF of particle acceleration normalized by their standard deviation for (a) \(d/\eta=6.5\) and (b) \(d/\eta=18.7\). PDF of fluid tracer acceleration (bold solid line) and Gaussian (traced for comparison). (c) Flatness of the acceleration of particles versus their size, for different \(\beta\). The dashed line reports the fluid tracer acceleration flatness in the same flow. Note that here \(F(D_{t}u_{i})=6\). This feature suggests a role of preferential sampling of the flow by the particles. This interpretation shall be put under scrutiny in further studies; in fact, in the context of inertial-scale particles the evidence of preferential sampling and of the related preferential clustering is still not univocal [41, 42]. In the future it will be interesting to perform simulations at larger Reynolds numbers and in larger domain sizes in order to reduce the impact of the finite inertial-range and finite-domain effects present in our simulations. It will also be interesting to develop new analysis techniques for the detection of preferential sampling by inertial-scale particles in turbulent flows. Furthermore, the statistical relevance of forces such as history [43] or lift [44] for large particles in turbulence remains to be clearly assessed. These results may help in developing effective models for the dynamics of large particles in different contexts where particles are large with respect to the typical variation scale in the flow, such as drifters and floaters in the ocean [45] or rock crystals in magmatic chambers and primordial magma oceans [46]. 
## 7 Acknowledgement 
This work was supported by the National Natural Science Foundation of China under grant no. 11988102, and the New Cornerstone Science Foundation through the Xplorer Prize.
2301.07428
New constructive counterexamples to additivity of minimum output Rényi p-entropy of quantum channels
In this paper, we present new families of quantum channels for which corresponding minimum output R\'enyi $p$-entropy is not additive. Our manuscript is motivated by the results of Grudka et al., J. Phys. A: Math. Theor. 43 425304 and we focus on channels characterized by both extensions and subspaces of the antisymmetric subspace in $\mathbb{C}^d \otimes \mathbb{C}^d$, which exhibit additivity breaking for $p>2$.
Krzysztof Szczygielski, Michał Studziński
2023-01-18T10:55:30Z
http://arxiv.org/abs/2301.07428v3
# New constructive counterexamples to additivity of minimum output Renyi entropy of quantum channels 
###### Abstract 
In this paper we present new families of quantum channels for which the corresponding minimum output Renyi entropy is not additive. In the first part of the manuscript, motivated by the work of Grudka et al. in [1], we focus on extensions of the antisymmetric space and its subspaces. Later on we analyze a special case of the completely entangled subspace proposed by Parthasarathy in [2]. Our construction works for every \(p>2\) and for sufficiently high dimension. 
## I Introduction 
For a long time it was conjectured that the minimum output entropy, or more generally the minimum output Renyi \(p\)-entropies, of quantum channels is additive. This expectation was supported by the many additivity results obtained for certain classes of quantum channels, such as entanglement breaking channels [3], unital qubit channels [4], the depolarizing channel [5], the transpose depolarizing channel [6], and many others [7; 8; 9; 10]. The question of additivity in the above sense is a fundamental one due to a result by Shor [11], since it is equivalent to the question of additivity of the Holevo quantity [12]. Assuming additivity of the Holevo quantity we can get rid of its regularization in the definition of the classical capacity of a quantum channel and obtain a single-letter formula. Today we know that the structure of the set of quantum channels is much more complex in this regard. In the general situation we have examples of quantum channels for which the minimum output Renyi \(p\)-entropies are not additive for various values of the parameter \(p\), including the case \(p=1\) [13], which reduces the problem to the question of additivity of the minimum output von Neumann entropy. However, most of the existing results are non-constructive, showing additivity violation for a randomly picked channel [13; 14; 15]. Despite the progress in this area, only a few explicit constructions of quantum channels violating additivity are known up to now. We can mention here papers by Werner and Holevo [16], Grudka et al. [1], and Cubitt et al. [17]. The goal of this paper is to extend the known results and present new explicit constructions of additivity breaking for quantum channels. Our approach is based on the analysis of the maximal Schmidt coefficients of the considered subspaces, which translates into respective bounds on the corresponding Renyi \(p\)-entropies. In particular, our contribution is the following: 
1. Quantum channels from extensions of the antisymmetric subspace. Here as a starting point we use the results of Grudka et al. [1] and ask what kind of subspace can be 'added' to the antisymmetric one while preserving additivity breaking of the quantum channels generated by the considered subspaces. Our construction relies on generalized Bell states [18] and gives new classes of additivity breaking quantum channels for any dimension \(d\) and \(p>2\). 
2. Quantum channels from subspaces of the antisymmetric space. This part is complementary to the first point above. Namely, we consider subspaces of the antisymmetric space that still produce quantum channels with the considered property of additivity breaking. Our construction works for any \(p>2\) and dimensions \(d\) greater than a certain transition dimension \(d_{0}=d_{0}(p)\), for which an expression is provided. 
3. Quantum channels from completely entangled subspaces. We consider a special case of such a subspace proposed by Parthasarathy in [2].
By evaluating the respective bound on Schmidt coefficients we are able to derive an expression giving the value of the dimension \(d_{0}=d_{0}(m,p)\) for which additivity breaking occurs. Here \(m\) is a real parameter discussed later in the main text. The structure of the paper is as follows. In Section II we briefly introduce the problem of additivity breaking of the minimal output Renyi \(p\)-entropies. In Section III we introduce the most important tools used further in this manuscript. Next, Section IV contains the main results of this paper concerning points 1-3 above. The paper is concluded with two appendices presenting technical findings used in the main text. 
## II Definition of the problem 
We start with some basic notions concerning quantum channels, the Renyi entropy and additivity breaking. Let \(B_{1}(H)\), \(B_{1}(K)\) be Banach spaces of trace class operators over Hilbert spaces \(H\) and \(K\), respectively, and let \(\rho\in B_{1}(H)\) be a _density operator_ of some quantum system, namely \(\rho\geqslant 0\) and \(\operatorname{Tr}\rho=1\). Denote also by \(D(H)\) the set of all density operators on \(H\). By compactness of \(\rho\), its spectrum is necessarily pure point, \(\sigma(\rho)=\{\lambda_{i}:i\in\mathbb{N}\}\subset[0,1]\). We define the _Renyi \(p\)-entropy_ of \(\rho\) via the formula \[S_{p}(\rho)=\frac{1}{1-p}\log_{2}\operatorname{Tr}\rho^{p}=\frac{1}{1-p}\log_{2}\sum_{i}\lambda_{i}^{p}, \tag{1}\] for \(p\geqslant 1\), where the \(p\)-th power of \(\rho\), as well as the last equality, is to be understood via the usual functional calculus and spectral theorem. Let now \(\mathcal{N}:B_{1}(H)\to B_{1}(K)\) be a completely positive and trace preserving map, i.e. a _quantum channel_. The _minimal output entropy of \(\mathcal{N}\)_ is defined as \[S_{p}^{\min}(\mathcal{N})=\min_{\rho\in D(H)}S_{p}(\mathcal{N}(\rho))=\min_{h\in H,\left\lVert h\right\rVert=1}S_{p}(\mathcal{N}(|h\rangle\langle h|)), \tag{2}\] where it suffices to minimize exclusively over pure states. The minimum output entropy is _subadditive_, i.e. for any two quantum channels \(\mathcal{N}_{1},\mathcal{N}_{2}:B_{1}(H)\to B_{1}(K)\) we have \(S_{p}^{\min}(\mathcal{N}_{1}\otimes\mathcal{N}_{2})\leqslant S_{p}^{\min}(\mathcal{N}_{1})+S_{p}^{\min}(\mathcal{N}_{2})\), and, if for some \(p\) one has equality for all pairs \((\mathcal{N}_{1},\mathcal{N}_{2})\) of channels in some dimension, one says that the minimum output entropy is additive. It was shown already in numerous sources (see the introduction) that there exist pairs of channels breaking the additivity condition, i.e. such that the strict inequality \[S_{p}^{\min}(\mathcal{N}_{1}\otimes\mathcal{N}_{2})<S_{p}^{\min}(\mathcal{N}_{1})+S_{p}^{\min}(\mathcal{N}_{2}) \tag{3}\] is satisfied. In particular, a constructive class of channels was provided in article [1], where the channels are induced by the antisymmetric space \(\mathbb{C}^{d}\wedge\mathbb{C}^{d}\). 
## III Methodology and formalism 
In this paper we will be considering a class of channels similar to the one introduced in [13; 15] and further developed in [1]. Let \(H\simeq\mathbb{C}^{m}\), \(K\simeq\mathbb{C}^{d}\) be finite dimensional Hilbert spaces.
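Before proceeding, a purely illustrative aside (not part of the paper's formal development): the objects defined in Eqs. (1)-(2) are easy to experiment with numerically. Given an orthonormal basis of a subspace \(W\subset K\otimes E\), one can assemble an isometry \(V\) with \(\operatorname{ran}V=W\), evaluate the channel \(\rho\mapsto\operatorname{Tr}_{E}V\rho V^{*}\) (cf. Eq. (5) below) on pure inputs, and estimate the minimal output Renyi \(p\)-entropy by random sampling, which only gives an upper bound on the true minimum. The helper names in the NumPy sketch below are ad hoc.

```python
import numpy as np

def channel_from_subspace(basis_W):
    """Return N(rho) = Tr_E V rho V^* for the isometry V whose columns are the
    given orthonormal basis of W in K (x) E (here dim K = dim E = d)."""
    V = np.array(basis_W, dtype=complex).T            # (d*d) x m isometry
    d = int(round(np.sqrt(V.shape[0])))
    def N(rho):
        out = (V @ rho @ V.conj().T).reshape(d, d, d, d)
        return np.trace(out, axis1=1, axis2=3)        # partial trace over E
    return N, V.shape[1]

def renyi(rho, p):
    """Renyi p-entropy of a density matrix, Eq. (1); p > 1 assumed."""
    lam = np.linalg.eigvalsh(rho)
    return np.log2(np.sum(lam[lam > 1e-12] ** p)) / (1 - p)

def sampled_min_output_entropy(N, m, p, samples=3000, seed=1):
    """Crude upper bound on Eq. (2): minimize over random pure input states."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(samples):
        h = rng.standard_normal(m) + 1j * rng.standard_normal(m)
        h /= np.linalg.norm(h)
        best = min(best, renyi(N(np.outer(h, h.conj())), p))
    return best

# toy check: W = antisymmetric subspace of C^3 (x) C^3, cf. Eq. (6) below
d = 3
e = np.eye(d)
basis_W = [(np.kron(e[i], e[j]) - np.kron(e[j], e[i])) / np.sqrt(2)
           for i in range(d) for j in range(i + 1, d)]
N, m = channel_from_subspace(basis_W)
print(sampled_min_output_entropy(N, m, p=3))   # ~1 for the antisymmetric subspace
```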
We will be considering quantum channels \[\mathcal{N},\overline{\mathcal{N}}:B(H)\to B(K), \tag{4}\] where \(m\leqslant d^{2}\), whose action on a density matrix can be represented, via the Stinespring dilation theorem [19], as \[\mathcal{N}(\rho)=\operatorname{Tr}_{E}V\rho V^{*},\quad\overline{\mathcal{N}}(\rho)=\operatorname{Tr}_{E}\overline{V}\rho V^{T}, \tag{5}\] for some appropriately chosen Hilbert space \(E\) and an isometry \(V:H\to K\otimes E\). For simplicity, we assume the Stinespring representation of both channels to be _minimal_, and so one can conveniently choose the space \(E\) isomorphic to \(K\), i.e. \(E\simeq\mathbb{C}^{d}\). Since it is known [20] that the minimum output entropy \(S_{p}^{\min}(\mathcal{N})\) does not depend on the particular choice of the isometry \(V\), only on its range \(W=\operatorname{ran}V\), of dimension \(m\), one can divide the set of all channels from \(B(H)\) to \(B(K)\) into equivalence classes characterized by the subspace \(W\). It was shown in [1] that choosing \(W\) as (being isomorphic to) the antisymmetric subspace of \(\mathbb{C}^{d}\otimes\mathbb{C}^{d}\) of dimension \(\dim W=\binom{d}{2}=\frac{1}{2}d(d-1)\), \[W\simeq\mathbb{C}^{d}\wedge\mathbb{C}^{d}=\operatorname{span}\bigg{\{}\frac{1}{\sqrt{2}}(e_{i}\otimes e_{j}-e_{j}\otimes e_{i}):i<j\bigg{\}}, \tag{6}\] where \(\{e_{i}\}_{i=1}^{d}\) is an orthonormal basis spanning \(\mathbb{C}^{d}\), produces a pair \(\mathcal{N},\overline{\mathcal{N}}:B(\mathbb{C}^{\binom{d}{2}})\to B(\mathbb{C}^{d})\) of channels generating additivity breaking. In what follows, we will show that the additivity condition is broken also for differently chosen subspaces \(W\), i.e. we provide prescriptions for constructing additivity breaking channels beyond the results in [1]. For the composite channel \(\mathcal{N}\otimes\overline{\mathcal{N}}\) we introduce, for computational convenience, a distinction between the channels \(\mathcal{N}\) and \(\overline{\mathcal{N}}\) by adding subscripts \(1\) and \(2\), respectively, to the appropriate Hilbert spaces \(H\), \(K\) and \(E\). Likewise, the channel \(\mathcal{N}\otimes\overline{\mathcal{N}}:B(H_{1}\otimes H_{2})\to B(K_{1}\otimes K_{2})\) can be put in a Stinespring form \[\mathcal{N}\otimes\overline{\mathcal{N}}(\rho)=\operatorname{Tr}_{E_{1}E_{2}}\big{(}V\otimes\overline{V}\big{)}\rho(V^{*}\otimes V^{T}), \tag{7}\] where \(V\otimes\overline{V}:H_{1}\otimes H_{2}\to K_{1}\otimes E_{1}\otimes K_{2}\otimes E_{2}\) is naturally also an isometry. Similarly, one can now identify the channel \(\mathcal{N}\otimes\overline{\mathcal{N}}\) with a subspace \(\mathcal{W}=\operatorname{ran}\big{(}V\otimes\overline{V}\big{)}=W_{1}\otimes W_{2}\) (with \(W_{1,2}\) isomorphic to \(W\)) of dimension \(m^{2}\) in the full tensor product space \(K_{1}\otimes E_{1}\otimes K_{2}\otimes E_{2}\). Our method of showing the additivity breaking is based on the following theorem: **Theorem 1**.: _Let \(\mathcal{N},\overline{\mathcal{N}}:B(H)\to B(K)\), where \(H\simeq\mathbb{C}^{m}\), \(K\simeq\mathbb{C}^{d}\), \(m\leqslant d^{2}\), be quantum channels of the form (5) and denote again \(\mathcal{W}=\operatorname{ran}\left(V\otimes\overline{V}\right)\). Assume that there exists a constant \(C>0\) such that \(S_{p}^{\min}(\mathcal{N})\geqslant C\). Also, assume that there exists a vector \(\psi\in\mathcal{W}\), \(\left\|\psi\right\|=1\) and a constant \(c>0\) such that \(S_{p}(\operatorname{Tr}_{E_{1}E_{2}}\left|\psi\right\rangle\left\langle\psi\right|)\leqslant c\) for some \(p\geqslant 1\).
Then, if_ \[c<2C, \tag{8}\] _a pair of channels \((\mathcal{N},\overline{\mathcal{N}})\) generates additivity breaking for given \(p\)._ Proof.: Notice that mapping \(\rho\mapsto(V\otimes\overline{V})\rho(V^{*}\otimes V^{T})\) is a completely positive and trace preserving bijection, which uniquely identifies states in \(D(H_{1}\otimes H_{2})\) and \(D(\mathcal{W})\). Therefore, one immediately notices that, for all \(a\in D(\mathcal{W})\) we have \[S_{p}^{\min}(\mathcal{N}\otimes\overline{\mathcal{N}})\leqslant S_{p}( \operatorname{Tr}_{E_{1}E_{2}}a). \tag{9}\] In particular, we may take \(a=\left|\psi\right\rangle\left\langle\psi\right|\) being a rank one projection in \(\mathcal{W}\), where \(\psi\in\mathcal{W}\) satisfies all the assumptions; then, the minimum output entropy of composite channel is bounded from above, \(S_{p}^{\min}(\mathcal{N}\otimes\overline{\mathcal{N}})\leqslant c\). Now, since \(S_{p}^{\min}(\mathcal{N})=S_{p}^{\min}(\overline{\mathcal{N}})\), we have \[S_{p}^{\min}(\mathcal{N}\otimes\overline{\mathcal{N}})\leqslant S_{p}( \operatorname{Tr}_{E_{1}E_{2}}\left|\psi\right\rangle\left\langle\psi\right| )\leqslant c<2C\leqslant S_{p}^{\min}(\mathcal{N})+S_{p}^{\min}(\overline{ \mathcal{N}}), \tag{10}\] which concludes the proof. It is sometimes more convenient, from conceptual point of view, to look at the full tensor product space in Stinespring representation of channel \(\mathcal{N}\otimes\overline{\mathcal{N}}\) as a composite space of four distinct subsystems, \(K_{1}\), \(K_{2}\), \(E_{1}\) and \(E_{2}\) and consider states computed in different _cuts_, so to speak. For this, we introduce two Hilbert tensor product spaces \[\mathcal{H}=K_{1}\otimes E_{1}\otimes K_{2}\otimes E_{2},\quad\hat{\mathcal{ H}}=K_{1}\otimes K_{2}\otimes E_{1}\otimes E_{2}, \tag{11}\] which are identified by an isometric isomorphism \(\eta:\mathcal{H}\rightarrow\hat{\mathcal{H}}\) acting on simple tensors by switching vectors in second and third position, \(\eta(k_{1}\otimes f_{1}\otimes k_{2}\otimes f_{2})=k_{1}\otimes k_{2}\otimes f _{1}\otimes f_{2}\). In such case, one can look at space \(\mathcal{H}\) as a composite Hilbert space of two subsystems described by spaces \(K_{1}\otimes E_{1}\) and \(K_{2}\otimes E_{2}\), which may be called the cut \(K_{1}E_{1}:K_{2}E_{2}\). On the other hand, writing \(\hat{\mathcal{H}}=K_{12}\otimes E_{12}\) for \(K_{12}=K_{1}\otimes K_{2}\), \(E_{12}=E_{1}\otimes E_{2}\) provides different cut \(K_{1}K_{2}:E_{1}E_{2}\). By this distinction, one can easily see the following fact: **Lemma 2**.: _Let \(\xi\in\hat{\mathcal{H}}\) and let \(\rho=\operatorname{Tr}_{E_{1}E_{2}}\left|\xi\right\rangle\left\langle\xi\right|\). Then, Renyi \(p\)-entropy of \(\rho\) is bounded from above,_ \[S_{p}(\rho)\leqslant\frac{p}{1-p}\log_{2}\mu_{1}^{2}, \tag{12}\] _where \(\mu_{1}\) is the largest coefficient of the Schmidt decomposition of \(\xi\)._ Proof.: Let \(\{u_{i}\}\), \(\{v_{i}\}\) be orthonormal bases spanning \(K_{1}\otimes K_{2}\) and \(E_{1}\otimes E_{2}\), respectively. Let the Schmidt decomposition of \(\xi\) be given by \[\xi=\sum_{i}\mu_{i}u_{i}\otimes v_{i}, \tag{13}\] where \(\mu_{i}\geqslant 0\), and denote \(\mu_{1}=\max\left\{\mu_{i}\right\}\). By straightforward algebra, one easily obtains \[\rho=\operatorname{Tr}_{E_{1}E_{2}}\left|\xi\right\rangle\left\langle\xi\right| =\sum_{i}\mu_{i}^{2}\left|u_{i}\right\rangle\left\langle u_{i}\right|. 
\tag{14}\] Then, since \(\sum_{i}\mu_{i}^{2p}\geqslant\mu_{1}^{2p}\), the correct inequality for \(S_{p}(\rho)\) follows immediately from equation (1). **Lemma 3**.: _Let \(\mathcal{W}=W_{1}\otimes W_{2}\subset\mathcal{H}\) and let \(\psi^{+}\) be a maximally entangled state between \(W_{1}\) and \(W_{2}\) (i.e. in cut \(K_{1}E_{1}:K_{2}E_{2}\)). Then, the largest coefficient \(\mu_{1}\) of Schmidt decomposition of vector \(\eta(\psi^{+})\in\hat{\mathcal{W}}\), i.e. in cut \(K_{1}K_{2}:E_{1}E_{2}\), satisfies inequality_ \[\mu_{1}^{2}\geqslant\frac{\dim W}{\dim K\dim E}. \tag{15}\] Proof.: Let us define \(\rho\in B(K_{1}\otimes K_{2})\) as \(\rho=\operatorname{Tr}_{E_{1}E_{2}}|\eta(\psi^{+})\rangle\langle\eta(\psi^{+})|\) and introduce Schmidt decomposition of \(\eta(\psi^{+})\), i.e. in cut \(K_{1}K_{2}:E_{1}E_{2}\), as \(\eta(\psi^{+})=\sum_{i}\mu_{i}u_{i}\otimes v_{i}\) for orthonormal bases \(\{u_{i}\}\) and \(\{v_{i}\}\) spanning \(K_{1}\otimes K_{2}\) and \(E_{1}\otimes E_{2}\), respectively, and coefficients \(\mu_{i}\geqslant 0\); denote also \(\mu_{1}=\max\,\{\mu_{i}\}\). Then we again have \(\rho=\sum_{i}\mu_{i}^{2}\,|u_{i}\rangle\langle u_{i}|\) and finding the lower bound on largest Schmidt coefficient of \(\eta(\psi^{+})\) is equivalent to finding bound on largest eigenvalue of \(\rho\), i.e. its spectral radius, \(\mu_{1}^{2}=\sup_{\|x\|=1}\,\langle x,\rho(x)\rangle\). In particular, the following inequality holds, \[\mu_{1}^{2}=\sup_{\|x\|=1}\,\langle x,\rho(x)\rangle\geqslant\langle\phi^{+}, \rho(\phi^{+})\rangle=\operatorname{Tr}_{K_{1}K_{2}}P^{+}\rho, \tag{16}\] where \(\phi^{+}\) is a maximally entangled state between spaces \(K_{1}\), \(K_{2}\) and \(P^{+}=|\phi^{+}\rangle\langle\phi^{+}|\) is a associated rank one projection. Notice, that since the only effect of bijection \(\eta\) is a rearrangement of vectors in tensor product, we clearly have \(\operatorname{Tr}_{E_{1}E_{2}}|\eta(\psi^{+})\rangle\langle\eta(\psi^{+})|= \operatorname{Tr}_{E_{1}E_{2}}|\psi^{+}\rangle\langle\psi^{+}|\) and so \[\mu_{1}^{2}\geqslant\operatorname{Tr}_{K_{1}K_{2}}P^{+}\rho=\operatorname{Tr} _{K_{1}K_{2}}\big{[}P^{+}\left(\operatorname{Tr}_{E_{1}E_{2}}|\psi^{+}\rangle \langle\psi^{+}|\right)\big{]}=\operatorname{Tr}\big{[}\big{(}P^{+}\otimes \operatorname{id}_{E_{1}E_{2}}\big{)}\,|\psi^{+}\rangle\langle\psi^{+}|\big{]}. \tag{17}\] Let \(Q^{+}\) be a rank one projection onto a maximally entangled state between spaces \(E_{1}\), \(E_{2}\). Since its orthogonal complement \(\operatorname{id}_{E_{1}E_{2}}-Q^{+}\) is also a projection, it is positive semi-definite and so we have \(\operatorname{id}_{E_{1}E_{2}}\geqslant Q^{+}\). This yields, after simple algebra, to \[\mu_{1}^{2}\geqslant\operatorname{Tr}\big{[}\big{(}P^{+}\otimes\operatorname{ id}_{E_{1}E_{2}}\big{)}\,|\psi^{+}\rangle\langle\psi^{+}|\big{]}\geqslant \operatorname{Tr}\big{[}\big{(}P^{+}\otimes Q^{+}\big{)}\,|\psi^{+}\rangle \langle\psi^{+}|\big{]}. \tag{18}\] Let now \(\Psi^{+}\) denote a maximally entangled state between spaces \(K_{1}\otimes E_{1}\) and \(K_{2}\otimes E_{2}\). Immediately, we notice \(P^{+}\otimes Q^{+}=|\Psi^{+}\rangle\langle\Psi^{+}|\), which gives \[\mu_{1}^{2}\geqslant\operatorname{Tr}\big{[}\big{(}P^{+}\otimes Q^{+}\big{)} \,|\psi^{+}\rangle\langle\psi^{+}|\big{]}=\big{|}\langle\psi^{+},\Psi^{+} \rangle\big{|}^{2}. 
\tag{19}\] For any possible choice of \(W\) and its basis \(\{w_{i}\}\), one can always select a basis \(\{b_{i}\}\) in \(K\otimes E\) in such a way that \(\{w_{i}\}\) is a subset of \(\{b_{i}\}\). Then, all the remaining vectors \(b_{i}\notin W\) constitute a basis of the orthogonal complement \(W^{\perp}\); denote them by \(w_{i}^{\perp}\). Let \[\theta=\sum_{i=1}^{\dim W}w_{i}\otimes w_{i},\quad\theta^{\perp}=\sum_{i=1}^{\dim W^{\perp}}w_{i}^{\perp}\otimes w_{i}^{\perp}, \tag{20}\] with \(\theta^{\perp}\) clearly orthogonal to \(\theta\). Then, the two maximally entangled states \(\psi^{+}\) and \(\Psi^{+}\) may be expressed as \[\psi^{+}=\frac{\theta}{\sqrt{\dim W}},\quad\Psi^{+}=\frac{\theta+\theta^{\perp}}{\sqrt{\dim K\dim E}}. \tag{21}\] By direct check, the inner product in (19) is then simply proportional to \(\|\theta\|^{2}=\dim W\). We immediately obtain \[\mu_{1}^{2}\geqslant\big{|}\langle\psi^{+},\Psi^{+}\rangle\big{|}^{2}=\frac{\|\theta\|^{4}}{\dim W\dim K\dim E}=\frac{\dim W}{\dim K\dim E}. \tag{22}\] This concludes the proof. **Lemma 4**.: _Let \(W\) be chosen in such a way that the largest Schmidt coefficient \(\mu_{1}\) for every unit vector \(\xi\in W\) satisfies the inequality \(\mu_{1}^{2}\leqslant A\) for some constant \(A\in(0,1)\). Then, the lower bound \(C\) for the minimal output entropy \(S_{p}^{\min}(\mathcal{N})\) can be chosen as_ \[C=\frac{1}{1-p}\log_{2}\big{[}(1-A)^{p}+A^{p}\big{]}. \tag{23}\] Proof.: Denote by \(D(H)\) and \(D(W)\) the convex sets of density operators in \(B(H)\) and \(B(W)\), respectively. Clearly, the mapping \(\rho\mapsto V\rho V^{*}\) is a completely positive, trace preserving continuous bijection, therefore we can identify states in these two spaces. This implies that \[S_{p}^{\min}(\mathcal{N})=\min_{h\in H,\|h\|=1}S_{p}(\operatorname{Tr}_{E}V\,|h\rangle\langle h|\,V^{*})=\min_{w\in W,\|w\|=1}S_{p}(\operatorname{Tr}_{E}|w\rangle\langle w|). \tag{24}\] Now, let \(w\in W\) be given by its Schmidt decomposition, \(w=\sum_{i}\mu_{i}k_{i}\otimes f_{i}\) for \(\{k_{i}\}\), \(\{f_{i}\}\) orthonormal bases in \(K\) and \(E\), respectively, and let \(\rho=\operatorname{Tr}_{E}|w\rangle\langle w|=\sum_{i}\mu_{i}^{2}|k_{i}\rangle\langle k_{i}|\). Let also \(\vec{r}=(\mu_{i}^{2})\in\mathbb{R}_{+}^{\dim K}\) be a vector of its eigenvalues, such that \(\rho=\operatorname{diag}\,(\mu_{i}^{2})\). Define also \[\vec{a}=(\max\,\{A,1-A\},\min\,\{A,1-A\},0,\,\ldots,\,0)\in\mathbb{R}_{+}^{\dim K} \tag{25}\] and let \(\alpha=\operatorname{diag}\vec{a}\) denote a diagonal density matrix with spectrum given by the components of \(\vec{a}\) (in any order). Then, by the assumed inequality \(\mu_{1}^{2}\leqslant A\) we have, after easy algebra, \(\vec{r}\preceq\vec{a}\), where \(\preceq\) denotes the preorder of _majorization_ in \(\mathbb{R}_{+}^{\dim K}\). Since the Renyi \(p\)-entropy is a _Schur concave function_, we have \(S_{p}(\alpha)\leqslant S_{p}(\rho)\) whenever \(\vec{r}\preceq\vec{a}\). However, computing directly, we obtain \[S_{p}(\alpha)=\frac{1}{1-p}\log_{2}\operatorname{Tr}\alpha^{p}=\frac{1}{1-p}\log_{2}\left[(1-A)^{p}+A^{p}\right]=C. \tag{26}\] Then \(S_{p}(\operatorname{Tr}_{E}|w\rangle\langle w|)\geqslant C\) for every \(w\in W\), and so the minimal output entropy is also bounded by \(C\). 
## IV Results 
In this section, we present a number of different classes of additivity breaking channels, each one characterized by a different choice of the subspace \(W\).
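Throughout this section the test of theorem 1 will always take the same form: a lower bound \(C\) from lemma 4 (with some constant \(A\)) is compared with an upper bound \(c\) obtained from lemmas 2 and 3, and additivity breaking follows whenever \(c<2C\). As a side illustration only (not part of the original argument), this comparison can be scripted in a few lines of Python; the inputs \(A\), \(\dim W\), \(d\) and \(p\) are the quantities appearing in the lemmas, and the function names are ad hoc.

```python
from math import log2

def lower_bound_C(A, p):
    """Lemma 4: S_p^min(N) >= C when mu_1^2 <= A for every unit vector in W."""
    return log2((1 - A)**p + A**p) / (1 - p)

def upper_bound_c(dim_W, d, p):
    """Lemmas 2-3: S_p^min(N x Nbar) <= c, since mu_1^2 >= dim W / d^2."""
    return (p / (1 - p)) * log2(dim_W / d**2)

def breaks_additivity(A, dim_W, d, p):
    """Theorem 1 criterion: additivity is broken whenever c < 2C."""
    return upper_bound_c(dim_W, d, p) < 2 * lower_bound_C(A, p)

# antisymmetric subspace of [1]: A = 1/2, dim W = d(d-1)/2
d, p = 30, 3
print(breaks_additivity(0.5, d * (d - 1) // 2, d, p))   # True
```

For the antisymmetric subspace of [1] (\(A=\frac{1}{2}\), \(\dim W=\binom{d}{2}\)) the check returns True for, e.g., \(d=30\) and \(p=3\), consistently with the known result.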
The key point in every case will be to find a "sufficiently well" developed bounds \(c\) and \(C\), as stated in theorem 1. For sake of brevity, we will be denoting \(d=\dim K=\dim E\) from now on. It is a known fact, that a square of the largest Schmidt coefficient of any vector of unit norm in antisymmetric space \(H_{\text{as.}}\subset K\otimes E\) is bounded from above by \(\frac{1}{2}\) (see Prop. 1 in [1]). Below, we will present, for reader's convenience, a short proof of this statement. Let \(P_{\text{as.}}:K\otimes E\to H_{\text{as.}}\) be an orthogonal projection onto antisymmetric subspace. One checks directly, that \(P_{\text{as.}}\) may be represented as \[P_{\text{as.}}=\frac{1}{2}(I-V), \tag{27}\] where \(I\) is the identity and \(V:K\otimes E\to K\otimes E\) is an operator, which swaps vector order in simple tensors, i.e. it acts as \(V(h\otimes f)=f\otimes h\) (for isomorphic spaces \(K\) and \(E\)). **Lemma 5**.: _Projection \(P_{\text{as.}}:K\otimes E\to H_{\text{as.}}\) satisfies_ \[\sup_{\|x\otimes y\|=1}\|P_{\text{as.}}(x\otimes y)\|^{2}=\frac{1}{2}. \tag{28}\] _In result, the largest Schmidt coefficient \(\mu_{1}\) of any unit vector in \(H_{\text{as.}}\) satisfies \(\mu_{1}^{2}\leqslant\frac{1}{2}\)._ Proof.: Applying (27) we have, after easy algebra, \[\sup_{\|x\otimes y\|=1}\|P_{\text{as.}}(x\otimes y)\|^{2} =\sup_{\|x\|,\|y\|=1}\,\langle x\otimes y,\frac{1}{2}(I-V)(x \otimes y)\rangle \tag{29}\] \[=\frac{1}{2}\sup_{\|x\otimes y\|=1}\,(\|x\otimes y\|^{2}-|\, \langle x,y\rangle|^{2})\] \[=\frac{1}{2}\left(1-\inf_{\|x\otimes y\|=1}|\langle x,y\rangle|^ {2}\right)\] which comes by continuity of mapping \((x,y)\mapsto|\langle x,y\rangle|^{2}\). This however is immediately found to be \(\frac{1}{2}\), since taking any two mutually orthogonal vectors \(x\), \(y\) automatically yields the infimum in above expression to be \(0\). Then \(\mu_{1}^{2}\leqslant\frac{1}{2}\) comes from lemma 14 from Appendix A. ### Extensions of antisymmetric subspace The key idea here is to extend the antisymmetric subspace \(H_{\text{as.}}\simeq\mathbb{C}^{d}\wedge\mathbb{C}^{d}\) in a way, which does not modify the value of maximal Schmidt coefficient too much, so to say. In particular, we seek for a space \[W=H_{\text{as.}}\oplus X, \tag{30}\] where \(X\) is of dimension \(n\in\{1,\,\ldots,\,\frac{1}{2}d(d+1)\}\), perhaps for \(d\) large enough. Let then \(X=\operatorname{span}\left\{\phi_{i}\right\}\), where \(\left\{\phi_{i}\right\}\) stands for some orthonormal system in \(K\otimes E\) of vectors orthogonal to \(H_{\text{as}}\). Also, let \(H_{\text{as}}=\operatorname{span}\left\{a_{i}\right\}\) and introduce orthogonal projection operators \(P_{W}\), \(P_{\text{as}}\), and \(P_{X}\) onto \(W\), \(H_{\text{as}}\), and \(X\), such that \(P_{W}=P_{\text{as}}+P_{X}\). Observe that, by lemmas 14 and 15, we have \[\mu_{1}^{2} \leqslant\sup_{\|x\otimes y\|=1}\|P_{W}(x\otimes y)\|^{2}\leqslant \sup_{\|x\otimes y\|=1}\|P_{\text{as}}(x\otimes y)\|^{2}+\sup_{\|x\otimes y\|= 1}\|P_{X}(x\otimes y)\|^{2} \tag{31}\] \[\leqslant\frac{1}{2}+\sup_{\|x\otimes y\|=1}\sum_{i=1}^{n}| \langle\phi_{i},x\otimes y\rangle|^{2}\leqslant\frac{1}{2}+\sum_{i=1}^{n}\mu_ {1}^{2}(\phi_{i}),\] where \(\mu_{1}^{2}(\phi_{i})\) is the maximal Schmidt coefficient of basis vector \(\phi_{i}\). 
This means, that in order to minimize the impact of adding space \(X\) to \(H_{\text{as}}\), on the upper bound of maximal Schmidt coefficient of \(\xi\in W\) (and on the lower bound \(C\) for \(S_{p}^{\min}(\mathcal{N})\)), one should seek for basis \(\left\{\phi_{i}\right\}\) of possibly smallest value of \(\mu_{1}^{2}(\phi_{i})\). #### iii.2.1 Construction via generalized Bell states A natural candidate for such basis could be a one given exclusively by maximally entangled vectors, since it is well known that a maximally entangled unit vector provides a lowest possible value of \(\mu_{1}^{2}\), equal to reciprocal of a dimension. For this, we construct an extending space \(X\) as a linear span of maximally entangled vectors and first, we show that there always exists such space. We call a state \(\psi\in K\otimes E\) a _generalized Bell state_ if and only if there exist orthonormal bases \(\left\{h_{i}\right\}\subset K\) and \(\left\{f_{i}\right\}\in E\) such that \[\psi=\frac{1}{\sqrt{d}}\sum_{j,k=1}^{d}\lambda_{jk}h_{j}\otimes f_{k}, \tag{32}\] where \(\lambda_{jk}=e^{i\alpha_{jk}}\), \(\alpha_{jk}\in\mathbb{R}\) are arbitrary, and a matrix \(\hat{\lambda}=[\lambda_{jk}]\) contains only one non-zero element in each column and each row (i.e. it resembles a permutation matrix). We show the following **Lemma 6**.: _For each choice of real sequence \((\varphi_{i})\in\mathbb{R}^{d}\), there exist \(s(d)\) generalized Bell states \(\psi_{i}\) in space \(H_{\text{as}}^{\perp}\), where_ \[s(d)=d!\sum_{k=0}^{\lfloor d/2\rfloor}\frac{1}{2^{k}k!(d-2k)!}. \tag{33}\] _Set of all such states is not linearly independent, though. In particular, it happens that \(\dim\operatorname{span}\left\{\psi_{i}\right\}=1+\binom{d}{2}<s(d)\)._ Proof.: Note, that every vector \(x=\sum_{ij}x_{ij}h_{i}\otimes f_{j}\in K\otimes E\) is uniquely identified with \(n\times n\) matrix \([x_{ij}]\), so it is enough to analyze appropriate matrices instead. Likewise, the full tensor product \(K\otimes E\) is isomorphic, as a Hilbert space, with \(M_{d}(\mathbb{C})\) equipped with Frobenius (Hilbert-Schmidt) inner product. Then, it is easy to notice that space \(H_{\text{as}}\), is identified with subspace of all _antisymmetric matrices_. i.e. matrices \(m\in M_{d}(\mathbb{C})\) satisfying \(m^{T}=-m\), and its orthogonal complement \(H_{\text{as}}^{\perp}\) is isomorphic to subspace of all _symmetric matrices_, i.e. such, that \(m^{T}=m\). Then, question of existence of generalized Bell states in \(H_{\text{as}}^{\perp}\) can be rephrased as a problem of finding Bell states described by symmetric matrices. For this, let \(\hat{M}=\text{diag}\left\{e^{i\varphi_{1}},\,\ldots,\,e^{i\varphi_{d}}\right\}\), where we chose an arbitrary, real sequence \((\varphi_{i})_{i=1}^{d}\). Then, by permuting rows (or columns) of \(\hat{M}\), i.e. by applying some permutation matrix \(\hat{\Omega}\) to \(\hat{M}\), one obtains matrix \(\hat{\lambda}=\hat{\Omega}\hat{M}\) populated with phase factors, describing some generalized Bell state \(\psi(\hat{\lambda})\). Notice, that for a condition \(\psi(\hat{\lambda})\in H_{\text{as}}^{\perp}\) to hold it is necessary and sufficient that \(\hat{\lambda}\) is symmetric, i.e. \((\hat{\Omega}\hat{M})^{T}=\hat{\Omega}\hat{M}\), which, by diagonality of \(\hat{M}\), yields \(\hat{\Omega}=\hat{\Omega}^{T}\) and each symmetric permutation matrix will generate some maximally entangled generalized Bell state orthogonal to \(H_{\text{as}}\). 
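As a purely illustrative aside, the count invoked next — the number of symmetric permutation matrices, i.e. of involutions, appearing in formula (33) — can be verified by brute force for small \(d\); the sketch below assumes only Python's standard library.

```python
from itertools import permutations
from math import factorial

def s_formula(d):
    """Right-hand side of Eq. (33): number of involutions of d elements."""
    return factorial(d) * sum(1 / (2**k * factorial(k) * factorial(d - 2 * k))
                              for k in range(d // 2 + 1))

def s_bruteforce(d):
    """Count permutation matrices equal to their own transpose,
    i.e. permutations sigma with sigma^2 = id."""
    return sum(1 for sigma in permutations(range(d))
               if all(sigma[sigma[i]] == i for i in range(d)))

for d in range(2, 8):
    assert round(s_formula(d)) == s_bruteforce(d)
print("Eq. (33) matches the brute-force count for d = 2,...,7")
```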
The total number of such Bell states is then equal to the number of all symmetric permutation matrices, which is known [21] to be given by equation (33). It remains to find the dimension of the linear space spanned by such a set of Bell states; one easily notices that linear independence of the set \(\left\{\psi(\hat{\lambda}_{i})\right\}\) is equivalent to linear independence of the set \(\left\{\hat{\Omega}_{i}\right\}\). We conclude that \(\dim\operatorname{span}\left\{\psi(\hat{\lambda}_{i})\right\}\) must be equal to the dimension of the space spanned by the set of all symmetric permutation matrices, which is \(1+\binom{d}{2}\) [22]. Below we formulate the main result of this section: **Theorem 7**.: _There exists an unbounded region \(\mathcal{R}=\{(p,d)\}\subset(2,\infty)^{2}\) such that for every pair \((p,d)\in\mathcal{R}\) there exists an increasing sequence of extending subspaces \((X_{n})_{n=1}^{r}\) of length \(r\leqslant\lfloor\frac{d}{2}\rfloor\), where \(\dim X_{n}=n\), each one orthogonal to the space \(H_{\text{as.}}\simeq\mathbb{C}^{d}\wedge\mathbb{C}^{d}\), such that for each \(n\), the space \(W_{n}=H_{\text{as.}}\oplus X_{n}\) defines a pair of quantum channels \(\mathcal{N}_{n},\overline{\mathcal{N}}_{n}:B(H_{n})\to B(K)\), \(\dim H_{n}=\dim W_{n}\), \(K\simeq\mathbb{C}^{d}\), providing additivity breaking of the minimal Renyi \(p\)-entropy._ Proof.: Let \(d>2\). By lemma 6, one can construct \(s(d)\) generalized Bell states \(\psi_{i}\) in the space \(H_{\text{as.}}^{\perp}\simeq(\mathbb{C}^{d}\wedge\mathbb{C}^{d})^{\perp}\), whereas the space \(X=\text{span}\left\{\psi_{i}\right\}\) is of dimension \(1+\binom{d}{2}\). We will show that there exists a limiting dimension \(r<1+\binom{d}{2}\) such that all subspaces \(X_{n}\subset X\) of dimension \(n\leqslant r\) give rise to additivity breaking channels \(\mathcal{N}_{n}\), \(\overline{\mathcal{N}}_{n}\) defined by the spaces \(W_{n}=H_{\text{as.}}\oplus X_{n}\). Let then \(X_{n}=\text{span}\left\{\psi_{i}:1\leqslant i\leqslant n\right\}\) and take \(\xi\in W_{n}\), \(\|\xi\|=1\). By (31), the square of the largest Schmidt coefficient of \(\xi\) satisfies the inequality \[\mu_{1}^{2}\leqslant\frac{1}{2}+\sum_{i=1}^{n}\mu_{1}^{2}(\psi_{i})=\frac{d+2n}{2d}. \tag{34}\] Therefore, applying lemma 4 for \(A=(d+2n)/2d\) we have the lower bound for the minimal output entropy \(S_{p}^{\min}(\mathcal{N}_{n})\) of the channel \(\mathcal{N}_{n}\) defined by \(W_{n}\), \[S_{p}^{\min}(\mathcal{N}_{n})\geqslant\frac{1}{1-p}\log_{2}\left[\left(\frac{d+2n}{2d}\right)^{p}+\left(\frac{d-2n}{2d}\right)^{p}\right]=C. \tag{35}\] We emphasize here that the condition \(\frac{n}{d}<\frac{1}{2}\) must be satisfied for consistency of the majorization scheme used in lemma 4. This shows that the possible values of \(n\), i.e. the possible dimensions of the extending space \(X_{n}\), can be no larger than \(\frac{d}{2}\), and therefore \(r\leqslant\lfloor\frac{d}{2}\rfloor\). Next, we estimate the upper bound \(c\) of \(S_{p}^{\min}(\mathcal{N}_{n}\otimes\overline{\mathcal{N}}_{n})\). Let \(\psi^{+}\) denote the maximally entangled state in the space \(\mathcal{W}_{n}=W_{n}\otimes W_{n}\). Then, the largest Schmidt coefficient of the vector \(\eta(\psi^{+})\in\hat{\mathcal{W}}_{n}\) satisfies \[\mu_{1}^{2}\geqslant\frac{1}{d^{2}}\left(n+\frac{d(d-1)}{2}\right), \tag{36}\] which comes from lemma 3 after taking \(W=W_{n}\) (and therefore \(\dim W=n+\binom{d}{2}\)).
As a result, the Renyi entropy of the state \(\rho=\text{Tr}_{E_{1}E_{2}}\left|\psi^{+}\right\rangle\!\left\langle\psi^{+}\right|\) is, by taking \(\xi=\eta(\psi^{+})\) in lemma 2, bounded from above, \[S_{p}(\rho)\leqslant\frac{p}{1-p}\log_{2}\left[\frac{1}{d^{2}}\left(n+\frac{d(d-1)}{2}\right)\right]=c. \tag{37}\] However, after choosing \(\psi=\psi^{+}\) in theorem 1, one concludes that \(c\) is also an upper bound of \(S_{p}^{\min}(\mathcal{N}_{n}\otimes\overline{\mathcal{N}}_{n})\). Now it suffices to check whether the condition \(c<2C\) is satisfied, which, after simple algebra, may be seen to be equivalent to the strict positivity of the function \(f_{p,d}:\{1,\,\ldots,\,\lfloor\frac{d}{2}\rfloor\}\to\mathbb{R}\), \[f_{p,d}(n)=\left[d(d-1)+2n\right]^{p}-\frac{1}{2^{p}}\left[(d+2n)^{p}+(d-2n)^{p}\right]^{2}. \tag{38}\] For convenience, consider an extension of \(f_{p,d}\) to the entire interval \([1,\lfloor\frac{d}{2}\rfloor]\) and denote it by the same symbol. Then, the limiting dimension \(r\leqslant\lfloor\frac{d}{2}\rfloor\) will simply be \(r=\lfloor x_{0}\rfloor\), where \(x_{0}\in[1,\lfloor\frac{d}{2}\rfloor]\) is a root of \(f_{p,d}\). Showing the mere existence (and the approximate location) of \(x_{0}\) by means of an analytical approach is easy. First, one checks directly that for all \(p\in\mathbb{R}\), \(d>2\) we have \[f_{p,d}\left(\frac{d}{4}\right)=8^{-p}\left[4^{p}d^{p}(2d-1)^{p}-d^{2p}(1+3^{p})^{2}\right]<0, \tag{39}\] so \(x_{0}\) exists in the interval \([1,\lfloor\frac{d}{2}\rfloor]\) whenever \(f_{p,d}(1)>0\). This condition, however, may be checked numerically to be satisfied for all pairs \((p,d)\) in some unbounded region \(\mathcal{R}\) of the real plane, so an appropriate root \(x_{0}\), and the limiting dimension \(r\), exist for a wide variety of \(p\) and \(d\). In consequence, the region \(\mathcal{R}\) provides non-trivial sequences of extending spaces \(X_{n}\). The length of such a sequence, i.e. the number of possible extensions of \(H_{\text{as.}}\), is then given by \(\lfloor x_{0}\rfloor\). In order to find its approximate value one can seek some well-behaved bounds \(\alpha_{p,d}\) and \(\beta_{p,d}\) of the function \(f_{p,d}\), such that \(\alpha_{p,d}\leqslant f_{p,d}\leqslant\beta_{p,d}\). For the upper bound, notice \[f_{p,d}(x)\leqslant\beta_{p,d}(x)=(d(d-1)+2x)^{p}-\frac{1}{2^{p}}(d+2x)^{2p}. \tag{40}\] For the lower bound, note that \(f_{p,d}\) is concave, so one can choose for \(\alpha_{p,d}\) the affine function connecting the points \((1,f_{p,d}(1))\) and \((\frac{d}{4},f_{p,d}(\frac{d}{4}))\). Then, the root \(x_{0}\) will be constrained by the roots \(a_{p,d}\) and \(b_{p,d}\) of \(\alpha_{p,d}\) and \(\beta_{p,d}\), respectively, i.e. \(x_{0}\in[a_{p,d},\,b_{p,d}]\), and one finds \[a_{p,d}=\frac{d-4}{4}\left(f_{p,d}(1)-f_{p,d}(\frac{d}{4})\right)^{-1},\qquad b_{p,d}=\frac{1}{2}\left(1-d+\sqrt{1-4d+2d^{2}}\right). \tag{41}\] Remark 8: _We stress here that our theorem is limited to Renyi \(p\)-entropies for \(p\) strictly larger than 2. In the case \(p=2\), one shows by direct check that \(f_{2,d}(n)<0\) for every \(d\), \(n\), and so the additivity breaking criterion is inconclusive. We hope, however, that this limitation could be lifted by the application of more sophisticated methods._ 
### Subspaces of antisymmetric space 
Here we show that an appropriate space \(W\) may also be chosen to be a subspace of the antisymmetric space \(H_{\text{as}}\). Let then \(W\subset H_{\text{as}}\) be of dimension \(\dim W<{d\choose 2}\).
The following technical lemma will be of importance: **Lemma 9**.: _Let \(W\) be any subspace of \(H_{\text{as}}\). and let \(P_{W}\) be an associated orthogonal projection. Then,_ \[\sup_{\|x\otimes y\|=1}\|P_{W}(x\otimes y)\|^{2}=\frac{1}{2}. \tag{42}\] _In result, the largest Schmidt coefficient \(\mu_{1}\) of any \(\xi\in W\), \(\|\xi\|=1\), satisfies \(\mu_{1}^{2}\leq\frac{1}{2}\)._ Proof.: Let \(P_{\text{as}}:K\otimes E\to H_{\text{as}}\) be the orthogonal projection onto antisymmetric space. Employing lemmas 5 and 16, we automatically have, for any \(\xi\in W\), \(\|\xi\|=1\), \[\mu_{1}^{2}(\xi)\leq\sup_{\|x\otimes y\|=1}\|P_{W}(x\otimes y)\|^{2}\leq\sup_ {\|x\otimes y\|=1}\|P_{\text{as}}.(x\otimes y)\|^{2}=\frac{1}{2}. \tag{43}\] We will show that in fact, the supremum \(\frac{1}{2}\) is attainable. For this, choose any basis \(\{w_{i}\}\) spanning \(W\) and observe, that for any such choice, there always exist some bases \(\{h_{i}\}\), \(\{f_{i}\}\) spanning spaces \(K\) and \(E\), respectively, and some bijection \(\sigma:\{1,\,\ldots,\,\dim W\}\to\{(\sigma_{1},\sigma_{2}):\sigma_{1}<\sigma_{2}\}\) into an ordered set of \(\dim W\) pairs (note, that the exact order used in the set of pairs is irrelevant), that every \(w_{i}\) may be expressed as an antisymmetrized combination of simple tensors of a form \(\{h_{\sigma_{1}}\otimes f_{\sigma_{2}}\}\), i.e. \[w_{i}=\frac{1}{\sqrt{2}}\left(h_{\sigma(i)_{1}}\otimes f_{\sigma(i)_{2}}-h_{ \sigma(i)_{2}}\otimes f_{\sigma(i)_{1}}\right), \tag{44}\] where subscripts 1 and 2 indicate a specified element of a pair \(\sigma(i)=(\sigma(i)_{1},\sigma(i)_{2})\). Expressing vectors \(x\) and \(y\) as appropriate ordered tuples \(x=(x_{i})\), \(y=(y_{i})\) with respect to bases \(\{h_{i}\}\), \(\{f_{i}\}\) we obtain \[\langle x\otimes y,P_{W}(x\otimes y)\rangle=\sum_{i=1}^{\dim W}|\langle w_{i},x\otimes y\rangle|^{2}=\frac{1}{2}\sum_{s}|x_{s_{1}}y_{s_{2}}-x_{s_{2}}y_{s_{ 1}}|^{2}, \tag{45}\] where we sum over all pairs \(s=(s_{1},s_{2})\) laying in the image of bijection \(i\mapsto\sigma(i)\). Now it is enough to note, that taking \(x\), \(y\) such that, say, \(x_{s_{1}}=y_{s_{2}}=1\), \(x_{s_{2}}=y_{s_{1}}=0\) for some specific pair \((s_{1},s_{2})\) and \(x_{p_{1,2}}=y_{p_{1,2}}=0\) for every other pair \((p_{1},p_{2})\) we have \(\langle x\otimes y,P_{W}(x\otimes y)\rangle=\frac{1}{2}\). This concludes the proof. **Theorem 10**.: _For all \(p>2\), there exists a dimension \(d_{0}\) such that for all dimensions \(d>d_{0}\) there exists an increasing sequence of subspaces \((W_{n})\) in antisymmetric space \(H_{\text{as}}\simeq\mathbb{C}^{d}\wedge\mathbb{C}^{d}\) of length \(l_{p,d}\) given via formula_ \[l_{p,d}=1-\lceil 4^{\frac{1}{p}-1}d^{2}\rceil+{d\choose 2}, \tag{46}\] _such that every subspace \(W_{n}\) defines a pair of quantum channels \(\mathcal{N}_{n}\), \(\overline{\mathcal{N}}_{n}:B(H)\to B(K)\), \(\dim H=\dim W_{n}\), \(K\simeq\mathbb{C}^{d}\), providing additivity breaking of minimal Renyi \(p\)-entropy._ Proof.: We will again find appropriate expressions for bounds \(c\) and \(C\). Let \(W\subset H_{\text{as.}}\) be any subspace of dimension \(\dim W<\frac{1}{2}d(d-1)\) and take any \(\xi\in W\), \(\|\xi\|=1\). Lemma 9 then yields, that largest Schmidt coefficient of \(\xi\) is bounded by \(\frac{1}{2}\). 
This means, that the lower bound \(C\) for minimal entropy \(S_{p}(\mathcal{N})\) of channel \(\mathcal{N}\) can be found, by putting \(A=\frac{1}{2}\) in lemma 4, to be 1: \[S_{p}^{\min.}(\mathcal{N})\geqslant\frac{1}{1-p}\log_{2}\left(2\cdot\frac{1}{ 2^{p}}\right)=1=C. \tag{47}\] Let \(\psi^{+}\) be maximally entangled state in \(W\otimes W\). We will denote \(\dim W=n\) for brevity. The remaining upper bound \(c\) of \(S_{p}^{\min}(\mathcal{N}\otimes\overline{\mathcal{N}})\) is again found by estimating the largest Schmidt coefficient \(\mu_{1}\) of vector \(\eta(\psi^{+})\) which, by lemma 3, satisfies \(\mu_{1}^{2}\geqslant\frac{n}{d^{2}}\) and in result \[S_{p}^{\min}(\mathcal{N}\otimes\overline{\mathcal{N}})\leqslant\frac{p}{1-p} \log_{2}\frac{n}{d^{2}}=c \tag{48}\] which comes from lemma 2. Then, the additivity breaking condition \(c<2C\) as provided in theorem 1 is equivalent to strict positivity of a function \[f_{p,d}(n)=\frac{n^{p}}{d^{2p}}-4^{1-p}. \tag{49}\] For given \(p>2\) this yields \(n>4^{\frac{1}{p}-1}d^{2}\). Now, since we seek for \(n\)-dimensional subspaces of the space of dimension \(\frac{1}{2}d(d-1)\), we have to ask if there exist any natural numbers \(n<\frac{1}{2}d(d-1)\) such that the inequality \(n>4^{\frac{1}{p}-1}d^{2}\) is satisfied, i.e. if set \(\mathbb{N}\cap\left(4^{\frac{1}{p}-1}d^{2},\frac{1}{2}d(d-1)\right)\) is nonempty. This will be guaranteed under two conditions: first, we necessarily need \(4^{\frac{1}{p}-1}d^{2}<\frac{1}{2}d(d-1)\), which however can be quickly found to be assured by all dimensions \(d\) satisfying \[d>\left\lceil(1-2^{\frac{2}{p}-1})^{-1}\right\rceil. \tag{50}\] Second, the interval \(\mathcal{I}_{p,d}=\left(4^{\frac{1}{p}-1}d^{2},\frac{1}{2}d(d-1)\right)\) must be of length greater than 1 in order to contain at least one natural number \(n\), i.e. \[\frac{d(d-1)}{2}-4^{\frac{1}{p}-1}d^{2}>1, \tag{51}\] which is satisfied by all dimensions \(d>d_{0}\) for \(d_{0}\) given via formula \[d_{0}=\left\lceil 4\left(-1+\sqrt{9-4^{\frac{1}{p}+1}}\right)^{-1}\right\rceil. \tag{52}\] Direct check then shows that all \(d>d_{0}\) also automatically satisfy the necessary inequality in (50). In consequence, for all \(p>2\) and all \(d>d_{0}\) there will exist a sequence of natural numbers \((n_{k})\) such that \(f_{p,d}(n_{k})>0\), i.e. there will exist a sequence of additivity breaking subspaces \((W_{k})\), \(\dim W_{k}=n_{k}\), \(W_{k}\subset W_{k+1}\). Number of such spaces, i.e. a length \(l_{p,d}\) of this sequence is equal to total number of all natural numbers \(n_{k}\) in the interval \(\mathcal{I}_{p,d}\); this however is easily seen to be given by claimed equation (46). Moreover, monotonicity of function \(n\mapsto f_{p,d}(n)\) yields that in each case the maximal dimension \(n_{k}\), i.e. dimension of the largest subspace \(W_{k}\) in \(H_{\text{as.}}\), is always found to be \(\frac{1}{2}d(d-1)-1\). ### Completely entangled subspace In 2004, K. Parthasarathy proposed in [2] the so-called _completely entangled subspace_\(S\) of tensor product \(H_{1}\otimes\ldots\otimes H_{k}\) of \(k\) finite dimensional Hilbert spaces, each one of dimension \(d_{k}\), as subspace, which contains no non-zero simple tensors of a form \(u_{1}\otimes\ldots\otimes u_{k}\) for \(u_{i}\in H_{i}\). Here, we will show that in special case of \(k=2\) and \(d_{1}=d_{2}=d\), i.e. in case of bipartite quantum system, the completely entangled subspace can be also used to define additivity breaking quantum channels. 
We will start with a general construction of such space, as presented originally in [2]. Let \(\mathcal{H}\simeq\mathbb{C}^{d}\) be spanned by a standard orthonormal basis \(\{e_{k}\}_{k=0}^{d-1}\) and let \(G=\{\lambda_{1},\lambda_{2},\,\ldots,\,\lambda_{2d-1}\}\subset\mathbb{C}\) be an arbitrary set of complex numbers, of cardinality \(2d-1\). Define vectors \[u_{\lambda}=\sum_{k=0}^{d-1}\lambda^{k}e_{k}=\left(1,\,\lambda,\, \lambda^{2},\,\ldots,\,\lambda^{d-1}\right)^{T}, \tag{53}\] as well as a subspace \[L=\text{span}\{u_{\lambda}\otimes u_{\lambda}:\lambda\in G\}. \tag{54}\] Then, we set the completely entangled space to be its orthogonal complement, \(S=L^{\perp}\). As was shown in [2], set \(\{u_{\lambda}\otimes u_{\lambda}:\lambda\in G\}\) is linearly independent and therefore \(\dim L=2d-1\) and \(\dim S=d^{2}-\dim L=(d-1)^{2}\). Subspace \(S\) then does not contain any non-zero simple tensors \(u\otimes v\) for \(u,v\in\mathcal{H}\). Having the above we are in position to prove the following: **Proposition 11**.: _For every \(d\geqslant 2\) there exists \(M_{d}>0\) such that the largest Schmidt coefficient \(\mu_{1}\) of any unit vector in completely entangled subspace \(S\subset\mathcal{H}\otimes\mathcal{H}\), \(\mathcal{H}\simeq\mathbb{C}^{d}\), satisfies inequality \(\mu_{1}^{2}\leqslant 1-M_{d}\). Moreover, one has \(\max M_{d}=M_{2}=\frac{1}{2}\), a sequence \((M_{d})_{d\geqslant 2}\) is non-increasing and \(\lim_{d\to\infty}M_{d}=0\)._ Proof.: Let \(d\geqslant 2\), \(\mathcal{H}\simeq\mathbb{C}^{d}\) and construct spaces \(L\), \(S\) and respective orthogonal projections \(P_{L}\) and \(P_{S}=\text{id}\,-P_{L}\). Let us define a function \(f:\mathcal{H}\otimes\mathcal{H}\to[0,\infty)\) via \[f(h)=\|P_{L}(h)\|^{2},\quad h\in\mathcal{H}\otimes\mathcal{H}, \tag{55}\] which is clearly continuous everywhere in \(\mathcal{H}\otimes\mathcal{H}\). By lemma 14 we have \[\mu_{1}^{2}\leqslant\sup_{\|x\otimes y\|=1}\|P_{S}(x\otimes y)\|^{2}=1-\inf_{ \|x\otimes y\|=1}f(x\otimes y). \tag{56}\] One can show, that a set of all simple tensors \(s=\{x\otimes y:x,y\in\mathcal{H}\}\) is isometrically isomorphic with a subset of all \(d\)-dimensional square matrices of rank at most \(1\). This set however is known as the _determinantal variety_ of rank \(1\) and is closed. Therefore, set \(s_{1}=\{x\otimes y:\|x\otimes y\|=1\}\) is also closed as an intersection of \(s\) with a closed unit sphere in \(\mathcal{H}\otimes\mathcal{H}\), and compact in consequence. Then, Weierstrass theorem for metric spaces asserts that \(f(s_{1})\) is also compact in \(\mathbb{R}\), i.e. function \(f\), being continuous, attains its maximum and minimum in \(s_{1}\). In particular, there exists \(x_{0}\otimes y_{0}\in s_{1}\) such that \(f(x_{0}\otimes y_{0})=\inf_{\|x\otimes y\|=1}\|P_{L}(x\otimes y)\|^{2}\) and so \[\mu_{1}^{2}\leqslant 1-f(x_{0}\otimes y_{0}). \tag{57}\] Assume indirectly, that \(f(x_{0}\otimes y_{0})=0\), i.e. \(x_{0}\otimes y_{0}\in\ker P_{L}\). However, since spaces \(L\) and \(S\) are mutually orthogonal, this yields \(x_{0}\otimes y_{0}\in\text{im}\,P_{S}=S\), which is impossible by the fact, that \(S\) contains no non-zero simple tensors. Therefore there exists \(M_{d}>0\), say \(M_{d}=f(x_{0}\otimes y_{0})\), such that \(\mu_{1}^{2}\leqslant 1-M_{d}\). For properties of sequence \((M_{d})_{d\geqslant 2}\) see proposition 18 in Appendix B.2. 
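Proposition 11 establishes \(M_{d}>0\) without providing its value. As an illustration only (not part of the argument), \(M_{d}\) can be probed numerically: sampling unit simple tensors yields upper bounds on the infimum \(\inf_{\|x\otimes y\|=1}\|P_{L}(x\otimes y)\|^{2}\), and for \(d=2\) the sampled values approach the exact constant \(M_{2}=\frac{1}{2}\). The NumPy sketch below uses an arbitrary choice of the set \(G\) and ad hoc function names.

```python
import numpy as np

def projector_onto_L(d, lambdas):
    """Orthogonal projection onto L = span{u_lam (x) u_lam}, Eq. (54),
    with u_lam = (1, lam, ..., lam^(d-1)) as in Eq. (53)."""
    cols = [np.kron(lam ** np.arange(d), lam ** np.arange(d)) for lam in lambdas]
    B = np.array(cols, dtype=complex).T            # d^2 x (2d-1), full column rank
    Q, _ = np.linalg.qr(B)                         # orthonormal basis of L
    return Q @ Q.conj().T

def sampled_upper_bound_on_M(d, n_samples=20000, seed=0):
    """Minimum of ||P_L(x (x) y)||^2 over sampled unit simple tensors (>= M_d)."""
    rng = np.random.default_rng(seed)
    P = projector_onto_L(d, np.linspace(0.1, 1.0, 2 * d - 1))
    best = np.inf
    for _ in range(n_samples):
        x = rng.standard_normal(d) + 1j * rng.standard_normal(d)
        y = rng.standard_normal(d) + 1j * rng.standard_normal(d)
        v = np.kron(x / np.linalg.norm(x), y / np.linalg.norm(y))
        best = min(best, np.linalg.norm(P @ v) ** 2)
    return best

for d in (2, 3):
    print(d, sampled_upper_bound_on_M(d))   # for d = 2 the value stays above 1/2
```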
We formulate our main result in this section as the following theorem (we will denote \(\mathbb{N}_{k}=\{n\in\mathbb{N}:k\leqslant n\}\) for brevity): **Theorem 12**.: _For every \(p>1\) and every finite subset of dimensions \(\mathcal{D}\subset\mathbb{N}_{2}\) there exists \(d_{0}\in\mathbb{N}\) such that for every \(d\in\mathcal{D}\cap\mathbb{N}_{d_{0}+1}\) the completely entangled subspace \(S\) of the space \(\mathcal{H}\otimes\mathcal{H}\), \(\mathcal{H}\simeq\mathbb{C}^{d}\), defines a pair of quantum channels \(\mathcal{N},\overline{\mathcal{N}}:B(H)\to B(K)\), \(H\simeq S\), \(K\simeq\mathbb{C}^{d}\), providing additivity breaking of the minimal Renyi \(p\)-entropy._ Proof.: Let \(p>1\), \(d\geqslant 2\) and \(\mathcal{H}\simeq\mathbb{C}^{d}\). Construct spaces \(L,S\subset\mathcal{H}\otimes\mathcal{H}\) and denote by \(P_{L}\), \(P_{S}\) the respective orthogonal projections. Notice that the largest Schmidt coefficient \(\mu_{1}\) of any unit vector \(\xi\in S\) satisfies, due to proposition 11, \[\mu_{1}^{2}\leqslant\sup_{\|x\otimes y\|=1}\|P_{S}(x\otimes y)\|^{2}=1-\inf_{\|x\otimes y\|=1}\|P_{L}(x\otimes y)\|^{2}=1-M_{d} \tag{58}\] for some constant \(M_{d}>0\). Then, lemma 4 yields the lower bound \(C\) for the minimal \(p\)-entropy of the channel \(\mathcal{N}\), \[S_{p}^{\min}(\mathcal{N})\geqslant\frac{1}{1-p}\log_{2}\left[(1-M_{d})^{p}+M_{d}^{p}\right]=C. \tag{59}\] Let \(\psi^{+}\) be a maximally entangled state in \(S\otimes S\). Utilizing lemmas 2 and 3 we find the lower bound for the largest Schmidt coefficient of the vector \(\eta(\psi^{+})\) as well as the remaining upper bound \(c\) of the minimal entropy of the product channel, \[S_{p}^{\min}(\mathcal{N}\otimes\overline{\mathcal{N}})\leqslant\frac{2p}{1-p}\log_{2}\frac{d-1}{d}=c. \tag{60}\] The additivity breaking condition \(c<2C\) as demanded by theorem 1 can then be easily seen to be satisfied if and only if \[\left(\frac{d-1}{d}\right)^{p}>(1-M_{d})^{p}+M_{d}^{p}. \tag{61}\] Although we have no knowledge of the exact form of the function \(d\mapsto M_{d}\), suitable bounds will suffice. By virtue of proposition 18 we see that \(M_{d}\in(0,\frac{1}{2}]\) and the sequence \((M_{d})_{d\in\mathcal{D}}\) is non-increasing and bounded by \(0\) from below; as a result, the sequence \((g_{d})\) for \(g_{d}=(1-M_{d})^{p}+M_{d}^{p}\) is non-decreasing, which can be easily seen from the monotonicity of the function \(x\mapsto(1-x)^{p}+x^{p}\) for \(x\in(0,\frac{1}{2}]\). Let us define \(m=\min_{d\in\mathcal{D}}M_{d}\), so \(m\in(0,\frac{1}{2}]\). Then, we have \[(1-M_{d})^{p}+M_{d}^{p}=g_{d}\leqslant\max_{d\in\mathcal{D}}g_{d}=(1-m)^{p}+m^{p} \tag{62}\] and additivity breaking will take place if \[\left(\frac{d-1}{d}\right)^{p}>(1-m)^{p}+m^{p}. \tag{63}\] However, by solving the inequality for \(d\) we see that this condition holds for all \(d\in\mathcal{D}\) greater than the limiting dimension \[d_{0}=\left\lceil\left(1-[(1-m)^{p}+m^{p}]^{1/p}\right)^{-1}\right\rceil. \tag{64}\] By direct check, \(d_{0}>0\) for all \(m\in(0,\frac{1}{2}]\). 
## V Conclusions and open problems 
In this paper we address the problem of finding more examples of subspaces for which the quantum channels generated by them exhibit the property of additivity breaking of the minimum output Renyi entropy. Our construction is based on three subsequent steps: considering extensions of the antisymmetric subspace, considering its subspaces, and finally exploiting properties of the completely entangled subspace constructed by Parthasarathy.
To show that the considered spaces generate quantum channels that are interesting in our context, one has to resolve the main technical obstacle, which is the derivation of respective bounds on the maximal Schmidt coefficient. In turn, such a bound leads us to a bound on the minimum output entropy for two copies of the channel. Finding a bound on the maximal Schmidt coefficient is, however, in general a very hard and complex problem, and it is probably one of the main reasons why the number of examples of quantum channels with the considered property is so small. We would like to stress here that we are interested in having analytical, possibly tight, bounds, to ensure as low a dimensionality of the resulting channels as possible. This, of course, at least potentially leaves a huge area for possible contributions by considering spaces other than the antisymmetric one. An interesting problem would be to consider subspaces structurally very different from the antisymmetric one but still having some operative properties allowing for the derivations crucial for the topic of this manuscript. One could, for example, consider spaces coming from group/algebra theoretical considerations. Such additional symmetries could in principle lead to easier analytical work and possibly result in analytical solutions. Good candidates are subspaces defined through the famous Schur-Weyl duality [23; 24] or the algebra of partially transposed permutation operators [25]. These subspaces have been extensively studied recently in the context of port-based teleportation [26; 27]. Especially in the latter case one could expect interesting results, since the characterization of these spaces is very recent and we do not yet have any checks in this regard. However, this is still a very complicated technical problem and we leave it for possible further research. 
## Acknowledgments 
The authors are indebted to Professor Michal Horodecki for discussions and comments. MS acknowledges support by grant Sonata 16, UMO-2020/39/D/ST2/01234, from the Polish National Science Centre. 
## Appendix A Schmidt decomposition and some bounds on its largest coefficient 
Let \(H_{1}\), \(H_{2}\) be finite-dimensional Hilbert spaces. Then, it is true that for every vector \(h\in H_{1}\otimes H_{2}\) there exist a sequence \((\mu_{i})\in\mathbb{R}_{+}^{n}\) and orthonormal systems \((u_{i})_{i=1}^{n}\subset H_{1}\), \((v_{i})_{i=1}^{n}\subset H_{2}\) for \(n\leqslant\min\left\{\dim H_{1},\dim H_{2}\right\}\) such that \(h\) admits the so-called _Schmidt decomposition_ \[h=\sum_{i=1}^{n}\mu_{i}u_{i}\otimes v_{i}. \tag{A1}\] Then the numbers \(\mu_{i}\geqslant 0\) are sometimes called the _Schmidt coefficients_. In all that follows, the notation \(\mu_{1}(\xi)\) will denote the largest Schmidt coefficient of some vector \(\xi\) (usually of unit length). **Lemma 13**.: _Let \((H_{1},\langle\cdot,\cdot\rangle_{1})\), \((H_{2},\langle\cdot,\cdot\rangle_{2})\) be Hilbert spaces and let \(h\in H_{1}\otimes H_{2}\), \(\|h\|=1\). Then, the largest Schmidt coefficient \(\mu_{1}(h)\) can be computed via the formula_ \[\mu_{1}^{2}(h)=\sup_{\|x\otimes y\|=1}|\langle h,x\otimes y\rangle|^{2}. \tag{A2}\] Proof.: Let \(h\in H_{1}\otimes H_{2}\) be given via its Schmidt decomposition \[h=\sum_{i}\mu_{i}u_{i}\otimes v_{i} \tag{A3}\] for some orthonormal systems \(\{u_{i}\}\subset H_{1}\), \(\{v_{i}\}\subset H_{2}\) and \(\mu_{i}\geqslant 0\). Let \(\mu_{1}=\max\left\{\mu_{i}\right\}\).
For \(\|x\otimes y\|=1\) we have \[|\langle h,x\otimes y\rangle|^{2} =\left|\sum_{i}\mu_{i}\langle u_{i},x\rangle_{1}\langle v_{i},y\rangle_{2}\right|^{2}\leqslant\left(\sum_{i}\mu_{i}|\langle u_{i},x\rangle_{1}||\langle v_{i},y\rangle_{2}|\right)^{2} \tag{13}\] \[\leqslant\mu_{1}^{2}\sum_{i}|\langle u_{i},x\rangle_{1}|^{2}\sum_{j}|\langle v_{j},y\rangle_{2}|^{2}\leqslant\mu_{1}^{2}\|x\|^{2}\|y\|^{2}\] \[=\mu_{1}^{2},\] where we employed Hölder's and Bessel's inequalities. Now it suffices to take \(x=u_{1}\), \(y=v_{1}\) to check that the upper bound \(\mu_{1}^{2}\) is attainable. **Lemma 14**.: _Let \(X\) be a subspace of a Hilbert space \(H\) and let \(P:H\to X\) be the associated orthogonal projection. Then, for every vector \(\xi\in X\), \(\|\xi\|=1\), we have_ \[\mu_{1}^{2}(\xi)\leqslant\sup_{\|x\otimes y\|=1}\|P(x\otimes y)\|^{2}. \tag{14}\] Proof.: Let \(\{e_{i}\}\) be an orthonormal basis spanning \(X\), so \(P=\sum_{i}|e_{i}\rangle\langle e_{i}|\). Let \(\xi\in X\), \(\|\xi\|=1\), be given as \(\xi=\sum_{i}c_{i}e_{i}\) for coefficients satisfying \(\sum_{i}|c_{i}|^{2}=1\). By lemma 13 we have \[\mu_{1}^{2}(\xi) =\sup_{\|x\otimes y\|=1}|\langle\xi,x\otimes y\rangle|^{2}=\sup_{\|x\otimes y\|=1}\left|\sum_{i}\overline{c_{i}}\langle e_{i},x\otimes y\rangle\right|^{2} \tag{15}\] \[\leqslant\sup_{\|x\otimes y\|=1}\left(\sum_{i}|c_{i}||\langle e_{i},x\otimes y\rangle|\right)^{2}\leqslant\sup_{\|x\otimes y\|=1}\sum_{i}|c_{i}|^{2}\sum_{j}|\langle e_{j},x\otimes y\rangle|^{2}\] \[=\sup_{\|x\otimes y\|=1}\sum_{i}|\langle e_{i},x\otimes y\rangle|^{2}=\sup_{\|x\otimes y\|=1}\left\|\sum_{i}\langle e_{i},x\otimes y\rangle e_{i}\right\|^{2}\] \[=\sup_{\|x\otimes y\|=1}\|P(x\otimes y)\|^{2},\] which follows by Hölder's inequality and orthonormality of the basis \(\{e_{i}\}\). **Lemma 15**.: _Let \(X\) be a subspace of a Hilbert space \(H\) such that \(X=X_{1}\oplus X_{2}\) for some mutually orthogonal subspaces \(X_{1}\), \(X_{2}\) and denote by \(P\), \(P_{1}\) and \(P_{2}\) the orthogonal projections onto \(X\), \(X_{1}\) and \(X_{2}\), respectively. Then, for all unit vectors \(\xi\in X\) we have_ \[\mu_{1}^{2}(\xi)\leqslant\sup_{\|x\otimes y\|=1}\|P(x\otimes y)\|^{2}\leqslant\sup_{\|x\otimes y\|=1}\|P_{1}(x\otimes y)\|^{2}+\sup_{\|x\otimes y\|=1}\|P_{2}(x\otimes y)\|^{2}. \tag{10}\] Proof.: Take any \(\xi\in X\), \(\|\xi\|=1\). The left-most inequality comes directly from lemma 14. For the remaining one, write \(P=P_{1}+P_{2}\) and use orthogonality of \(\operatorname{ran}P_{1}\) and \(\operatorname{ran}P_{2}\), so that \(\|P(x\otimes y)\|^{2}=\|P_{1}(x\otimes y)\|^{2}+\|P_{2}(x\otimes y)\|^{2}\). The result then follows from the property \(\sup_{x}\left\{f(x)+g(x)\right\}\leqslant\sup_{x}f(x)+\sup_{x}g(x)\). **Lemma 16**.: _Let \(X\), \(Y\) be subspaces of a Hilbert space \(H\) such that \(X\subset Y\subset H\) and denote by \(P_{X}\) and \(P_{Y}\) the respective orthogonal projections. Then, for all vectors \(\xi\in X\), \(\|\xi\|=1\), we have_ \[\mu_{1}^{2}(\xi)\leqslant\sup_{\|x\otimes y\|=1}\|P_{X}(x\otimes y)\|^{2}\leqslant\sup_{\|x\otimes y\|=1}\|P_{Y}(x\otimes y)\|^{2}. \tag{11}\] Proof.: Again, the left-most inequality is satisfied by lemma 14. For the second one, notice that \(Y=X\oplus X^{\perp}\) with \(X^{\perp}\) orthogonal to \(X\) and denote by \(P^{\perp}\) the corresponding projection; then \(P^{\perp}=P_{Y}-P_{X}\) and it is a positive semi-definite operator, i.e. 
\[0 \leqslant\langle x\otimes y,(P_{Y}-P_{X})(x\otimes y)\rangle= \langle x\otimes y,P_{Y}(x\otimes y)\rangle-\langle x\otimes y,P_{X}(x\otimes y)\rangle \tag{12}\] \[=\|P_{Y}(x\otimes y)\|^{2}-\|P_{X}(x\otimes y)\|^{2}\] and the claim follows. ## Appendix B Properties of Parthasarathy's completely entangled subspace ### Sum-representation Here we will provide a computationally convenient orthonormal basis spanning space \(L\), as given in section IV.3. Let \(G=\{\lambda_{1},\lambda_{2},\,\ldots,\,\lambda_{2d-1}\}\subset\mathbb{C}\) be arbitrary and of cardinality \(2d-1\). Construct linearly independent family of vectors \(u_{\lambda}\) for \(\lambda\in G\) using equation (53) and the space \(L=\operatorname{span}\left\{u_{\lambda}\otimes u_{\lambda}:\lambda\in G\right\}\). Any vector \(w\in L\) is then expressible as \[w=\sum_{i=0}^{2d-1}c_{i}u_{\lambda_{i}}\otimes u_{\lambda_{i}}=\sum_{i=1}^{2d- 1}c_{i}\sum_{kJ=0}^{d-1}\lambda_{i}^{k+l}e_{k}\otimes e_{l}. \tag{13}\] One can then rearrange the above sum after noticing, that \[\sum_{kJ=0}^{d-1}\lambda^{k+l}e_{k}\otimes e_{l} =e_{0}\otimes e_{0}+\lambda(e_{0}\otimes e_{1}+e_{1}\otimes e_{0 })+\lambda^{2}(e_{0}\otimes e_{2}+e_{1}\otimes e_{1}+e_{2}\otimes e_{0})+\ldots \tag{14}\] \[\qquad\ldots+\lambda^{2d-3}(e_{d-1}\otimes e_{d-2}+e_{d-2} \otimes e_{d-1})+\lambda^{2d-2}e_{d-1}\otimes e_{d-1}\] \[=\sum_{s=0}^{2d-2}\lambda^{s}\sum_{(kJ)\sim s}e_{k}\otimes e_{l},\] where \(\sum_{(kJ)\sim s}\) denotes summation over all such pairs \((k,l)\in\{0,\,\ldots,\,d-1\}^{2}\), that \(k+l=s\). Let us then define a family of vectors \(v_{s}\) for \(s\in\{0,\,\ldots,\,2d-2\}\) by \[v_{s}=\sum_{(k,l)\sim s}e_{k}\otimes e_{l}. \tag{15}\] Then, we can re-express \(w\) as \[w=\sum_{s=0}^{2d-2}\xi_{s}v_{s},\qquad\text{for coefficients}\quad\xi_{s}= \sum_{l=1}^{2d-1}c_{i}\lambda_{i}^{s}. \tag{16}\] We will show, that family \(\{w_{s}\}\) is linearly independent and spans \(L\): **Proposition 17**.: _Set of vectors \(\{v_{s}\}\) is an orthogonal basis in \(L\)._ Proof.: We will show that \(\operatorname{span}\left\{u_{\lambda}\otimes u_{\lambda}:\lambda\in G\right\}= \operatorname{span}\left\{v_{s}:0\leqslant s\leqslant 2d-2\right\}\). Since family \(\{u_{\lambda}\otimes u_{\lambda}\}\) can be shown [Parthasarathy] to be linearly independent, it is a (non-orthogonal) basis in \(L\); therefore vector \(w\in\operatorname{span}\left\{u_{\lambda}\otimes u_{\lambda}\right\}\) as given in (B1) is uniquely identified by sequence \((c_{i})\). Equation (B4) then shows, that \(w\in\operatorname{span}\left\{v_{s}\right\}\) and \(\operatorname{span}\left\{u_{\lambda}\otimes u_{\lambda}\right\}\subset \operatorname{span}\left\{v_{s}\right\}\). To obtain the reverse inclusion, take any \(w\in\operatorname{span}\left\{v_{s}\right\}\) and note that stacking all coefficients \(\xi_{s}\) and \(c_{i}\) into column vectors \(\vec{\xi},\vec{c}\in\mathbb{C}^{2d-1}\) leads to equation \[\vec{\xi}=\hat{M}\vec{c}\] (B5) for matrix \(\hat{M}=[\lambda_{i}^{s}]_{s,i}\in M_{2d-1}(\mathbb{C})\) being the transposed _Vandermonde matrix_. However, since all numbers \(\lambda_{i}\) are pairwise different, \(\hat{M}\) is invertible and so is a mapping \(\vec{c}\mapsto\vec{\xi}\). Therefore, any \(w\) as given by formula (B4) is uniquely identified by vector of coefficients \(\vec{c}=\hat{M}^{-1}\vec{\xi}\) and so \(w\) takes a form (B1), i.e. 
\(w\in\operatorname{span}\left\{u_{\lambda}\otimes u_{\lambda}\right\}\); in consequence, \(\operatorname{span}\left\{v_{s}\right\}\subset\operatorname{span}\left\{u_{ \lambda}\otimes u_{\lambda}\right\}\) and \(\{v_{s}\}\) spans \(L\). Orthogonality of this family then comes by simple algebra. Properly normalizing, we also introduce a corresponding orthonormal basis \(\{w_{s}\}\) which can be checked to be given by expression \[w_{s}=\frac{v_{s}}{\|v_{s}\|}=\frac{1}{\sqrt{d-|s-d+1|}}\sum_{(k,l)\sim s}e_{k }\otimes e_{l}.\] (B6) We will call the orthonormal basis \(\{w_{s}\}\) the _sum-representation of \(L\)_. ### Bounds on Schmidt coefficients in arbitrary dimension **Proposition 18**.: _Let \(M_{d}=\inf_{\|x\otimes y\|=1}\|P_{L}(x\otimes y)\|^{2}\) for \(P_{L}\) being a projection onto space \(L=S^{\perp}\subset\mathcal{H}\otimes\mathcal{H}\), \(\mathcal{H}\simeq\mathbb{C}^{d}\). Then we have \(M_{d}\in(0,\frac{1}{2}]\), sequence \((M_{d})_{d\geqslant 2}\) is non-increasing and \(\lim_{d\to\infty}M_{d}=0\)._ Proof.: From proposition 11 we know, that in each dimension \(d\geqslant 2\) there exists a strictly positive bound \(M_{d}\). We will show that such bounds are disallowed to grow as \(d\) increases, and so sequence \((M_{d})_{d\geqslant 2}\) is non-increasing, bounded and convergent in consequence. For this, let us define two spaces \(\mathcal{H}\simeq\mathbb{C}^{d}\) and \(\tilde{\mathcal{H}}\simeq\mathbb{C}^{d+1}\). In space \(\mathcal{H}\), choose a computational basis \(\{e_{i}\}_{i=0}^{d-1}\). Applying proposition 17 we can construct orthogonal projections \(P_{L}\) and \(P_{S}\) onto subspaces \(L\), \(S\subset\mathcal{H}\otimes\mathcal{H}\) utilizing the sum-representation of \(L\), \[P_{L}=\sum_{s=0}^{2d-1}|w_{s}\rangle\langle w_{s}|\,\qquad P_{S}=\operatorname{id} \,-P_{L}.\] (B7) Next, let \(U:\mathcal{H}\to\tilde{\mathcal{H}}\) to be any injective isometry. Define map \(g:\mathcal{H}\otimes\mathcal{H}\to\tilde{\mathcal{H}}\otimes\tilde{\mathcal{H}}\) by setting, say, \(g=U\otimes U\). Then, \(g\) is an isometrical embedding, which identifies space \(\mathcal{H}\otimes\mathcal{H}\) with a subspace \(\operatorname{ran}g\subset\tilde{\mathcal{H}}\otimes\tilde{\mathcal{H}}\). By its properties, isometry \(U\) then defines a natural, orthonormal computational basis in \(\tilde{\mathcal{H}}\), \[\tilde{\mathcal{H}}=\operatorname{span}\left\{\tilde{e}_{i}:0\leqslant i \leqslant d\right\},\] (B8) where \(\tilde{e}_{i}=U(e_{i})\) for \(i\in\{0,\,\dots,\,d-1\}\) and \(\tilde{e}_{d}\in\operatorname{span}\left\{U(e_{i}):0\leqslant i\leqslant d-1 \right\}^{\perp}\) is arbitrary. Having such basis \(\{\tilde{e}_{i}\}\), we can construct vectors \(\tilde{w}_{s}\in\tilde{\mathcal{H}}\otimes\tilde{\mathcal{H}}\) as \[\tilde{w}_{s}=\begin{cases}g(w_{s}),&0\leqslant s\leqslant d-1\\ \frac{1}{\sqrt{2d-s+1}}\left[g(v_{s})+\tilde{e}_{d}\otimes\tilde{e}_{-d}+\tilde {e}_{-d-1}\otimes\tilde{e}_{d}\right],&d\leqslant s\leqslant 2d-2,\\ \frac{1}{\sqrt{2}}\left[e_{d}\otimes\tilde{e}_{d-1}+\tilde{e}_{d-1}\otimes \tilde{e}_{d}\right],&s=2d-1,\\ \tilde{e}_{d}\otimes\tilde{e}_{d},&s=2d\end{cases}\] (B9) as well as orthogonal projections \(P_{L}\), \(P_{S}\) acting on \(\tilde{\mathcal{H}}\otimes\tilde{\mathcal{H}}\), \[P_{L}=\sum_{s=0}^{2d}|\tilde{w}_{s}\rangle\langle\tilde{w}_{s}|\qquad P_{S}= \operatorname{id}\,-P_{L}.\] (B10) Naturally, spaces \(\tilde{L}=\operatorname{ran}P_{L}\), \(\tilde{S}=\operatorname{ran}P_{\tilde{S}}\) are then higher-dimensional counterparts of \(L\) and \(S\). 
Now, let \(Q:\tilde{\mathcal{H}}\otimes\tilde{\mathcal{H}}\to\operatorname{ran}g\) be a projection onto the embedded subspace \(g(\mathcal{H}\otimes\mathcal{H})\) in such a way, that \(Q(\tilde{e}_{d}\otimes\tilde{e}_{i})=Q(\tilde{e}_{j}\otimes\tilde{e}_{d})=0\) for all \(i\), \(j\). We will show, that \(g(S)\subset Q(\tilde{S})\). Construction of completely entangled spaces \(S\), \(\tilde{S}\) infers that any vectors \(w\in S\) and \(\tilde{w}\in\tilde{S}\) must be of a form \(w=\xi-P_{L}(\xi)\) and \(\tilde{w}=\psi-P_{L}(\psi)\) for some \(\xi\in\mathcal{H}\otimes\mathcal{H}\) and \(\psi\in\tilde{\mathcal{H}}\otimes\tilde{\mathcal{H}}\). Therefore, utilizing equation (B7) and isometry properties of \(g\) we have \[g(w)=g(\xi)-\sum_{s=0}^{2d-2}\langle g(w_{s}),g(\xi)\rangle g(w_{s}),\qquad\xi \in\mathcal{H}\otimes\mathcal{H}. \tag{101}\] On the other hand, take any \(\tilde{w}\in\tilde{S}\). We notice, that for any \(s\in\{0,\,\ldots\,,2d-2\}\) we have \(Q(\tilde{w}_{s})=g(w_{s})\) and \(Q(\tilde{w}_{s})=0\) for \(s\in\{2d-1,2d\}\). This allows to compute \(Q(\tilde{w})\), with application of (100), \[Q(\tilde{w})=Q(\psi)-\sum_{s=0}^{2d}\langle\tilde{w}_{s},\psi\rangle\,Q( \tilde{w}_{s})=Q(\psi)-\sum_{s=0}^{2d-2}\langle\tilde{w}_{s},\psi\rangle\,g( w_{s}),\qquad\psi\in\tilde{\mathcal{H}}\otimes\tilde{\mathcal{H}}. \tag{102}\] Now, in order to show that \(g(w)\in Q(\tilde{S})\) it is enough to find such \(\psi\), that \(g(w)\) is of a form (102). Take therefore \(\psi=g(\xi)\in\operatorname{span}\left\{\xi_{i}\otimes\tilde{e}_{j}:i,j<d\right\}\). Then, \(Q\) acts trivially on \(\psi\), i.e. \(Q(\psi)=\psi=g(\xi)\) and, by hermiticity of \(Q\), we have for \(s\in\{0,\,\ldots\,,\,2d-2\}\), \[\langle\tilde{w}_{s},\psi\rangle=\langle\tilde{w}_{s},(Q\circ g)(\xi)\rangle= \langle Q(\tilde{w}_{s}),g(\xi)\rangle=\langle g(w_{s}),g(\xi)\rangle \tag{103}\] and so \(g(w)\) is of a form (102) for \(\psi=g(\xi)\) and \(g(S)\) is a subspace of \(Q(\tilde{S})\). Now, by lemma 16 we have \[\sup_{\|x\otimes y\|=1}\|P_{g(S)}(x\otimes y)\|^{2} \leqslant\sup_{\|x\otimes y\|=1}\|P_{Q(S)}(x\otimes y)\|^{2} \leqslant\sup_{\|x\otimes y\|=1}\|P_{S}(x\otimes y)\|^{2} \tag{104}\] \[=1-M_{d+1}\] where the last equality comes by proposition 11. Let us introduce a subset \(Z\) of unit sphere in \(\tilde{\mathcal{H}}\otimes\tilde{\mathcal{H}}\) by \[Z=\{x\otimes y\in\operatorname{ran}Q:\|x\|=\|y\|=1\}. \tag{105}\] Then, by properties of supremum we have \[\sup_{\|x\otimes y\|=1}\|P_{g(S)}(x\otimes y)\|^{2}\geqslant\sup_{x\otimes y \in Z}\|P_{g(S)}(x\otimes y)\|^{2}. \tag{106}\] Since clearly \(\operatorname{ran}Q=\operatorname{ran}g\), we have, for any \(x\otimes y\in Z\) that \(g^{-1}(x\otimes y)=a\otimes b\) for unique \(a\otimes b\in\mathcal{H}\otimes\mathcal{H}\) such that \(\|a\|=\|b\|=1\) and clearly \(g^{-1}(Z)\) is the whole set of all simple tensors of unit norm in \(\mathcal{H}\otimes\mathcal{H}\). 
This yields \[\sup_{x\otimes y\in Z}\|P_{g(S)}(x\otimes y)\|^{2} =\sup_{x\otimes y\in Z}\langle x\otimes y,\sum_{s=0}^{2d-2} \langle g(w_{s}),x\otimes y\rangle g(w_{s})\rangle \tag{107}\] \[=\sup_{x\otimes y\in Z}\langle g^{-1}(x\otimes y),\sum_{s=0}^{2d -2}\langle w_{s},g^{-1}(x\otimes y)\rangle w_{s}\rangle\] \[=\sup_{a\otimes b\in g^{-1}(Z)}\langle a\otimes b,\sum_{s=0}^{2d -2}\langle w_{s},a\otimes b\rangle w_{s}\rangle\] \[=\sup_{\|a\otimes b\|=1}\|P_{S}(a\otimes b)\|^{2}=1-M_{d}.\] Returning to (104) we obtain, for every \(d\geqslant 2\), \[1-M_{d}\leqslant\sup_{\|x\otimes y\|=1}\|P_{g(S)}(x\otimes y)\|^{2}\leqslant 1-M_{d +1}, \tag{108}\] so \(M_{d+1}\leqslant M_{d}\) and sequence \((M_{d})_{d\geqslant 2}\) is non-increasing and bounded from below by \(0\); this shows, that also \(M_{d}\to 0\) as \(d\to\infty\). The upper bound of \(M_{d}\) is then simply \(M_{2}\), which by results of lemma 9 can be quickly shown to be \(\frac{1}{2}\). This concludes the proof.
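To make the quantities appearing in Theorem 12 and Proposition 18 concrete, the following minimal numerical sketch (ours, not part of the original derivation) builds the sum-representation basis \(\{w_{s}\}\) of (B6), estimates \(M_{d}\) by minimizing \(\|P_{L}(x\otimes y)\|^{2}\) over random product states, and evaluates the limiting dimension \(d_{0}\) of (64). The helper names and the choices \(d=4\), \(p=2\) are ours; since random sampling can only over-estimate the infimum \(M_{d}\), the resulting \(d_{0}\) is indicative rather than a rigorous threshold.

```python
import numpy as np

def sum_basis(d):
    """Orthonormal sum-representation basis {w_s}, s = 0, ..., 2d-2, of eq. (B6)."""
    W = []
    for s in range(2 * d - 1):
        v = np.zeros((d, d))
        for k in range(d):
            if 0 <= s - k < d:
                v[k, s - k] = 1.0          # e_k ⊗ e_l with k + l = s
        W.append(v.reshape(d * d) / np.linalg.norm(v))
    return np.array(W)                     # shape (2d-1, d^2)

def estimate_Md(d, n_samples=20000, seed=0):
    """Crude estimate of M_d = inf ||P_L(x ⊗ y)||^2 over unit product vectors.
    Sampling can only over-estimate the infimum."""
    W = sum_basis(d)
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(n_samples):
        x = rng.standard_normal(d) + 1j * rng.standard_normal(d)
        y = rng.standard_normal(d) + 1j * rng.standard_normal(d)
        xy = np.kron(x / np.linalg.norm(x), y / np.linalg.norm(y))
        best = min(best, float(np.sum(np.abs(W @ xy) ** 2)))  # ||P_L(x⊗y)||^2
    return best

def limiting_dimension(m, p):
    """Dimension threshold d_0 of eq. (64)."""
    return int(np.ceil(1.0 / (1.0 - ((1 - m) ** p + m ** p) ** (1.0 / p))))

d, p = 4, 2.0
m_hat = estimate_Md(d)                     # indicative value only
print(f"estimated M_{d} (upper estimate): {m_hat:.4f}")
print(f"indicative d_0 for p = {p}: {limiting_dimension(m_hat, p)}")
```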
2303.11433
High Resolution Finite Difference Schemes for a Size Structured Coagulation-Fragmentation Model in the Space of Radon Measures
In this paper we develop explicit and semi-implicit second-order high-resolution finite difference schemes for a structured coagulation-fragmentation model formulated on the space of Radon measures. We prove the convergence of each of the two schemes to the unique weak solution of the model. We perform numerical simulations to demonstrate that the second order accuracy is achieved by both schemes.
Azmy S. Ackleh, Rainey Lyons, Nicolas Saintier
2023-03-20T20:27:08Z
http://arxiv.org/abs/2303.11433v1
High Resolution Finite Difference Schemes for a Size Structured Coagulation-Fragmentation Model in the Space of Radon Measures ###### Abstract In this paper we develop explicit and semi-implicit second-order high-resolution finite difference schemes for a structured coagulation-fragmentation model formulated on the space of Radon measures. We prove the convergence of each of the two schemes to the unique weak solution of the model. We perform numerical simulations to demonstrate that the second order accuracy is achieved by both schemes. Keywords:A Coagulation-Fragmentation Equation, Size Structured Populations, Radon Measures Equipped with Bounded-Lipschitz Norm, Finite Difference Schemes, High Resolution Methods AMS Subject Classification:65M06, 35L60, 92D25 ## 1 Introduction Coagulation-fragmentation (CF) equations have been used to model many physical and biological phenomenon [3, 11]. In particular, when combined with transport terms, these equations can be used to model the population dynamics of oceanic phytoplankton [2, 1, 4]. Setting such models in the space of Radon measures allows for the unified study of both discrete and continuous structures. Not only are the classical discrete and continuous CF equations special cases of the measure valued model (as shown in [6]), but this setting allows for a mixing of the two structures which has become of interest in particular applications [8, 9]. With the above applications in mind, numerical schemes to solve CF equations are of great importance to researchers. In particular, finite difference methods offer numerical schemes which are easy to implement and approximate the solution with a high order of accuracy. The latter benefit is especially important in the study of stability and optimal control of such equations. The purpose of this article is to make improvements on the two of the schemes presented in [7], namely the fully explicit and semi-implicit schemes. These schemes are shown to have certain advantages and disadvantages discussed in the aforementioned study. In particular, the fully explicit scheme has the qualitative property of conservation of mass through coagulation. On the other hand, the semi-implicit scheme has a more relaxed Courant-Friedrichs-Lewy (CFL) condition which does not depend on the initial condition. We have decided not to attempt to improve the third scheme presented in [7] as there does not seem to be a significant advantage of the named conservation law scheme to outweigh the drastic computational cost. As the state space is highly irregular, the improvement of these schemes must be handled with care. As shown in [10], discontinuities and singularities in the solution can cause drastic changes in not only the order of convergence of the scheme, but also in the behavior of the scheme. Such phenomenon is demonstrated in [5, 10]. To handle these issues, we turn to a high resolution scheme studied with classical structured population models (i.e. without coagulation-fragmentation) in [20, 12, 5]. This scheme makes use of a minmod flux limiter to control any oscillatory behavior of the scheme caused by irregularities. The layout of the paper is as follows. In Section 2 we present any notation and preliminary results about the model and state space used throughout the paper. In Section 3 we describe the model and state all assumptions imposed on the model parameters. In Section 4, we present the numerical schemes, their CFL conditions and state the main Theorem of the paper. 
Finally, we test the convergence rate of the schemes against well-known examples in Section 5. ## 2 Notation We make use of standard notations for function spaces. The most common examples of these are \(C^{1}(\mathbb{R}^{+})\) for the space of real-valued continuously differentiable functions and \(W^{1,\infty}(\mathbb{R}^{+})\) for the usual Sobolev space. The space of Radon measures will be denoted by \(\mathcal{M}(\mathbb{R}^{+})\), with \(\mathcal{M}^{+}(\mathbb{R}^{+})\) representing its positive cone. This space will be equipped with the Bounded-Lipschitz (BL) norm given by \[\|\mu\|_{BL}:=\sup_{\|\phi\|_{W^{1,\infty}}\leq 1}\left\{\int_{\mathbb{R}^{+}} \phi(x)\mu(dx):\phi\in W^{1,\infty}(\mathbb{R}^{+})\right\}.\] Another norm of interest on this space is the well-studied Total Variation (TV) norm given by \[\|\nu\|_{TV}=|\nu|(\mathbb{R}^{+})=\sup_{\|f\|_{\infty}\leq 1}\left\{\int_{ \mathbb{R}^{+}}f(x)\nu(dx):f\in C_{c}(\mathbb{R}^{+})\right\}.\] For more information about these particular norms and their relationship we direct the reader to [14, 13]. For lucidity, we use operator notation in place of integration when we believe it necessary, namely \[(\mu,f):=\int_{A}f(x)\mu(dx),\] where the set \(A\) is the support of the measure \(\mu\). Finally, we denote the minmod function by \(\operatorname{mm}(a,b)\) and use the following definition \[\operatorname{mm}(a,b):=\frac{\operatorname{sign}(a)+\operatorname{sign}(b)}{2}\min(|a|,|b|).\] ## 3 Model and Assumptions The model of interest is the size-structured coagulation-fragmentation model given by \[\begin{cases}\partial_{t}\mu+\partial_{x}(g(t,\mu)\mu)+d(t,\mu)\mu=K[\mu]+F[ \mu],&\quad(t,x)\in(0,T)\times(0,\infty),\\ g(t,\mu)(0)D_{dx}\mu(0)=\int_{\mathbb{R}^{+}}\beta(t,\mu)(y)\mu(dy),&\quad t \in[0,T],\\ \mu(0)=\mu_{0}\in\mathcal{M}^{+}(\mathbb{R}^{+}),\end{cases}. \tag{1}\] where \(\mu(t)\in\mathcal{M}^{+}(\mathbb{R}^{+})\) represents individuals' size distribution at time \(t\) and the functions \(g,d,\beta\) are their growth, death, and reproduction rates. The coagulation and fragmentation processes of a population distributed according to \(\mu\in\mathcal{M}^{+}(\mathbb{R}^{+})\) are modeled by the measures \(K[\mu]\) and \(F[\mu]\) given by \[(K[\mu],\phi)=\frac{1}{2}\int_{\mathbb{R}^{+}}\int_{\mathbb{R}^{+}}\kappa(y,x )\phi(x+y)\,\mu(dx)\,\mu(dy)-\int_{\mathbb{R}^{+}}\int_{\mathbb{R}^{+}}\kappa (y,x)\phi(x)\,\mu(dy)\,\mu(dx)\] and \[(F[\mu],\phi)=\int_{\mathbb{R}^{+}}(b(y,\cdot),\phi)a(y)\,\mu(dy)-\int_{ \mathbb{R}^{+}}a(y)\phi(y)\mu(dy)\] for any test function \(\phi\). Here \(\kappa(x,y)\) is the rate at which individuals of size \(x\) coalesce with individuals of size \(y\), \(a(y)\) is the global fragmentation rate of individuals of size \(y\), and \(b(y,\cdot)\) is a measure supported on \([0,y]\) such that \(b(y,A)\) represents the probability a particle of size \(y\) fragments to a particle with size in the Borel set \(A\). 
**Definition 3.1**.: _Given \(T\geq 0\), we say a function \(\mu\in C([0,T],\mathcal{M}^{+}(\mathbb{R}^{+}))\) is a weak solution to (1) if for all \(\phi\in(C^{1}\cap W^{1,\infty})([0,T]\times\mathbb{R}^{+})\) and for all \(t\in[0,T]\), the following holds:_ \[\begin{split}&\int_{0}^{\infty}\phi(t,x)\mu_{t}(dx)-\int_{0}^{ \infty}\phi(0,x)\mu_{0}(dx)=\\ &\int_{0}^{t}\int_{0}^{\infty}\left[\partial_{t}\phi(s,x)+g(s, \mu_{s})(x)\partial_{x}\phi(s,x)-d(s,\mu_{s})(x)\phi(s,x)\right]\mu_{s}(dx)ds \\ &\quad+\int_{0}^{t}(K[\mu_{s}]+F[\mu_{s}],\phi(s,\cdot))\,ds+\int _{0}^{t}\int_{0}^{\infty}\phi(s,0)\beta(s,\mu_{s})(x)\mu_{s}(dx)ds.\end{split} \tag{2}\] For the numerical scheme, we will restrict ourselves to a finite domain, \([0,x_{\max}]\). Thus, we impose the following assumptions on the growth, death and birth functions: 1. For any \(R>0\), there exists \(L_{R}>0\) such that for all \(\|\mu_{i}\|_{TV}\leq R\) and \(t_{i}\in[0,\infty)\) (\(i=1,2\)) the following hold for \(f=g,d,\beta\) \[\|f(t_{1},\mu_{1})-f(t_{2},\mu_{2})\|_{\infty}\leq L_{R}(|t_{1}-t_{2}|+\|\mu_{1} -\mu_{2}\|_{BL}),\] 2. There exists \(\zeta>0\) such that for all \(T>0\) \[\sup_{t\in[0,T]}\sup_{\mu\in\mathcal{M}^{+}(\mathbb{R}^{+})}\|g(t,\mu)\|_{W^{1, \infty}}+\|d(t,\mu)\|_{W^{1,\infty}}+\|\beta(t,\mu)\|_{W^{1,\infty}}<\zeta,\] 3. For all \((t,\mu)\in[0,\infty)\times\mathcal{M}^{+}(\mathbb{R}^{+})\), \[g(t,\mu)(0)>0\quad\text{and}\quad g(t,\mu)(x_{\max})=0\] for some large \(x_{\max}>0\). We assume that the coagulation kernel \(\kappa\) satisfies the following assumption: 1. \(\kappa\) is symmetric, nonnegative, bounded by a constant \(C_{\kappa}\), and globally Lipschitz with Lipschitz constant \(L_{\kappa}\). 2. \(\kappa(x,y)=0\) whenever \(x+y>x_{\max}\). We assume that the fragmentation kernel satisfies the following assumptions: 1. \(a\in W^{1,\infty}(\mathbb{R}^{+})\) is non-negative, 2. for any \(y\geq 0\), \(b(y,dx)\) is a measure such that 1. \(b(y,dx)\) is non-negative and supported in \([0,y]\), and there exist a \(C_{b}>0\) such that \(b(y,\mathbb{R}^{+})\leq C_{b}\) for all \(y>0\), 2. there exists \(L_{b}\) such that for any \(y,\bar{y}\geq 0\), \[\|b(y,\cdot)-b(\bar{y},\cdot)\|_{BL}\leq L_{b}|y-\bar{y}|\] 3. for any \(y\geq 0\), \[(b(y,dx),x)=\int_{0}^{y}x\,b(y,dx)=y.\] The existence and uniqueness of mass conserving solutions of model (1) under these assumptions were established in [6]. ## 4 Numerical Method We adopt the numerical discretization presented in [6]. For some fixed mesh sizes \(\Delta x,\Delta t>0\), we discretize the size domain \([0,x_{\max}]\) with the cells \[\Lambda_{j}^{\Delta x}:=[(j-\frac{1}{2})\Delta x,(j+\frac{1}{2})\Delta x),\ \text{for}\ J=1,\ldots,J,\] and \[\Lambda_{0}^{\Delta x}:=[0,\frac{\Delta x}{2}).\] We denote the midpoints of these grids by \(x_{j}\). The initial condition \(\mu_{0}\in\mathcal{M}^{+}(\mathbb{R}^{+})\) will be approximated by a combination of Dirac measures \[\mu_{0}^{\Delta x}=\sum_{j=0}^{J}m_{j}^{0}\delta_{x_{j}},\text{ where }m_{j}^{0}:=\mu_{0}(\Lambda_{j}^{\Delta x}).\] We first approximate the model coefficients \(\kappa\), \(a\), \(b\) as follow. 
For the physical ingredients, we define \[a_{i}^{\Delta x}=\frac{1}{\Delta x}\int_{\Lambda_{i}^{\Delta x}}a(y)dy,\qquad \kappa_{i,j}^{\Delta x}=\frac{1}{\Delta x^{2}}\int_{\Lambda_{i}^{\Delta x} \times\Lambda_{j}^{\Delta x}}\kappa(x,y)dxdy\] for \(i,j\geq 1\), and \[a_{0}^{\Delta x}=\frac{2}{\Delta x}\int_{\Lambda_{0}^{\Delta x}}a(y)dy,\qquad \kappa_{0,0}^{\Delta x}=\frac{4}{\Delta x^{2}}\int_{\Lambda_{0}^{\Delta x} \times\Lambda_{0}^{\Delta x}}\kappa(x,y)dxdy\] (with the natural modifications for \(\kappa_{0,j}^{\Delta x}\) and \(\kappa_{i,0}^{\Delta x}\), \(i\geq 1\)). We then let \(a^{\Delta x}\in W^{1,\infty}(\mathbb{R}^{+})\) and \(\kappa^{\Delta x}\in W^{1,\infty}(\mathbb{R}^{+}\times\mathbb{R}^{+})\) be the linear interpolation of the \(a_{i}^{\Delta x}\) and \(\kappa_{i,j}^{\Delta x}\) respectively. Finally, we define the measure \(b^{\Delta x}(x_{j},\cdot)\in\mathcal{M}^{+}(\Delta x\mathbb{N})\) by \[b^{\Delta x}(x_{j},\cdot)=\sum_{i\leq j}b(x_{j},\Lambda_{i}^{\Delta x})\delta_ {x_{j}}=:\sum_{i\leq j}b_{j,i}^{\Delta x}\delta_{x_{j}}\] and then \(b^{\Delta x}(x,\cdot)\in\mathcal{M}^{+}(\Delta x\mathbb{N}_{0})\) for \(x\geq 0\) as the linear interpolate between the \(b^{\Delta x}(x_{j},\cdot)\). When the context is clear, we omit the \(\Delta x\) from the notation above. We make use of these approximations to combine the high-resolution scheme presented in [5] with the fully explicit and semi-implicit schemes presented in [7]. Together these schemes give us the numerical scheme \[\begin{cases}m_{j}^{k+1}=m_{j}^{k}-\frac{\Delta t}{\Delta x}(f_{j+\frac{1}{2 }}^{k}-f_{j-\frac{1}{2}}^{k})-\Delta td_{j}^{k}m_{j}^{k}+\Delta t\left( \mathcal{C}_{j,k}+\mathcal{F}_{j,k}\right),\qquad j=1,..,J,\\ g_{0}^{k}m_{0}^{k}=\Delta x\sum_{j=1}^{J}{}^{*}\beta_{j}^{k}m_{j}^{k}:=\Delta x \left(\frac{3}{2}\beta_{1}^{k}m_{1}^{k}+\frac{1}{2}\beta_{J}^{k}m_{J}^{k}+ \sum_{j=2}^{J-1}\beta_{j}^{k}m_{j}^{k}\right).\end{cases}. \tag{3}\] Where the flux term is given by \[f_{j+\frac{1}{2}}^{k}=\begin{cases}g_{j}^{k}m_{j}^{k}+\frac{1}{2}(g_{j+1}^{k}- g_{j}^{k})m_{j}^{k}+\frac{1}{2}g_{j}^{k}\text{ mm}(\Delta_{+}m_{j}^{k},\Delta_{-}m_{j}^{k})&j=2,3,\ldots,J-2\\ g_{j}^{k}m_{j}^{k}&j=0,1,J-1,J\end{cases}, \tag{4}\] the fragmentation term, \(\mathcal{F}_{j,k}\), is given by \[\mathcal{F}_{j,k}:=\sum_{i=j}^{J}b_{i,j}a_{i}m_{i}^{k}-a_{j}m_{j}^{k}, \tag{5}\] and the coagulation term, \(\mathcal{C}_{j}\), is either given by an explicit discretization as \[\mathcal{C}_{j,k}^{\text{exp}}:=\frac{1}{2}\sum_{i=1}^{j-1}\kappa_{i,j-i}m_{i}^ {k}m_{j-i}^{k}-\sum_{i=1}^{J}\kappa_{i,j}m_{i}^{k}m_{j}^{k}, \tag{6}\] or by an implicit one as \[\mathcal{C}_{j,k}^{\text{imp}}:=\frac{1}{2}\sum_{i=1}^{j-1}\kappa_{i,j-i}m_{i}^{k +1}m_{j-i}^{k}-\sum_{i=1}^{J}\kappa_{i,j}m_{i}^{k}m_{j}^{k+1}. \tag{7}\] As discussed in [7], the explicit and semi-implicit schemes behave differently with respect to the mass conservation and have different Courant-Friedrichs-Lewy (CFL) conditions. The assumed CFL condition for the schemes are Explicit: \[\Delta t\Big{(}C_{\kappa}\|\mu_{0}\|_{TV}\exp((\zeta+C_{b}C_{a})T)+C_{a}\max \{1,C_{b}\}+(1+\tfrac{3}{2\Delta x})\zeta\Big{)}\leq 1\] Semi-Implicit: \[\bar{\zeta}(2+\tfrac{3}{2\Delta x})\Delta t\leq 1,\] (8) where \(\bar{\zeta}=\max\{\zeta,\|a\|_{W^{1,\infty}}\}\), \(C_{a}=\|a\|_{\infty}\). It is clear that the semi-implicit scheme has a less restrictive and simpler CFL condition than the explicit scheme. 
In particular, the CFL condition of the semi-implicit scheme is independent on the initial condition unlike its counterpart. The trade off for this is a loss of qualitative behavior of the scheme in the sense of mass conservation. Indeed as shown in [7], when \(\beta=d=g=0\), the semi-implicit coagulation term does not conserve mass whereas the explicit term does. It is useful to define the following coefficients: \[A_{j}^{k}=\begin{cases}g_{j}^{k}&j=1,J,\\ \frac{1}{2}\left(g_{j+1}^{k}+g_{j}^{k}+g_{j}^{k}\,\frac{\text{mm}(\Delta_{+}m_ {j}^{k},\Delta_{-}m_{j}^{k})}{\Delta_{-}m_{j}^{k}}\right)&j=2,\\ \frac{1}{2}\left(g_{j+1}^{k}+g_{j}^{k}+g_{j}^{k}\,\frac{\text{mm}(\Delta_{+}m_ {j}^{k},\Delta_{-}m_{j}^{k})}{\Delta_{-}m_{j}^{k}}-g_{j-1}^{k}\frac{\text{mm}( \Delta_{-}m_{j}^{k},\Delta_{-}m_{j-1}^{k})}{\Delta_{-}m_{j}^{k}}\right)&j=3, \ldots,J-2,\\ \frac{1}{2}\left(2g_{j}^{k}-g_{j-1}^{k}\frac{\text{mm}(\Delta_{-}m_{j}^{k}, \Delta_{-}m_{j-1}^{k})}{\Delta_{-}m_{j}^{k}}\right)&j=J-1,\end{cases}\] and \[B_{j}^{k}=\begin{cases}\Delta_{-}g_{j}^{k}&j=1,J,\\ \frac{1}{2}\Delta_{+}g_{j}^{k}&j=2,\\ \frac{1}{2}(\Delta_{+}g_{j}^{k}+\Delta_{-}g_{j}^{k})&j=3,\ldots,J-2,\\ \frac{1}{2}\Delta_{-}g_{j}^{k}&j=J-1.\end{cases}.\] Notice, \(|A_{j}^{k}|\leq\frac{3\Delta t}{2\Delta x}\zeta\) and \(A_{j}^{k}-B_{j}^{k}\geq 0\) as \[2(A_{j}^{k}-B_{j}^{k})=\begin{cases}2g_{j-1}^{k}&j=1,J,\\ g_{j}^{k}\left(2+\frac{\text{mm}(\Delta_{+}m_{j}^{k},\Delta_{-}m_{j}^{k})}{ \Delta_{-}m_{j}^{k}}\right)&j=2,\\ g_{j}^{k}\left(1+\frac{\text{mm}(\Delta_{+}m_{j}^{k},\Delta_{-}m_{j}^{k})}{ \Delta_{-}m_{j}^{k}}\right)+g_{j-1}^{k}\left(1-\frac{\text{mm}(\Delta_{-}m_{ j}^{k},\Delta_{-}m_{j-1}^{k})}{\Delta_{-}m_{j}^{k}}\right)&j=3,\ldots,J-2,\\ g_{j}^{n}+g_{j-1}^{n}\left(1-\frac{\text{mm}(\Delta_{-}m_{j}^{n},\Delta_{-}m_{j- 1}^{n})}{\Delta_{-}m_{j}^{n}}\right)&j=J-1.\end{cases}.\] Scheme (3) can then be rewritten as \[\begin{cases}m_{j}^{k+1}=(1-\dfrac{\Delta t}{\Delta x}A_{j}^{k}-\Delta t(d_{j}^{k }+a_{j}))m_{j}^{k}+\dfrac{\Delta t}{\Delta x}(A_{j}^{k}-B_{j}^{k})m_{j-1}^{k}\\ \hskip 14.226378pt+\Delta t\sum_{i=j}^{J}b_{i,j}a_{i}m_{i}^{k}+\Delta t.{\cal C }_{j,k}\\ \\ g_{0}^{k}m_{0}^{k}=\Delta x\sum_{j=1}^{J}{}^{*}\beta_{j}^{k}m_{j}^{k}\;.\end{cases}. \tag{9}\] Depending on the choice of coagulation term, this formulation leads to either \[\begin{cases}m_{j}^{k+1}=(1-\dfrac{\Delta t}{\Delta x}A_{j}^{k}-\Delta t(d_{j} ^{k}+a_{j})-\Delta t\sum_{i=1}^{J}\kappa_{i,j}m_{i}^{k})m_{j}^{k}+\dfrac{ \Delta t}{\Delta x}(A_{j}^{k}-B_{j}^{k})m_{j-1}^{k}\\ \hskip 14.226378pt+\Delta t\sum_{i=j}^{J}b_{i,j}a_{i}m_{i}^{k}+ \dfrac{\Delta t}{2}\sum_{i=1}^{j-1}\kappa_{i,j-i}m_{i}^{k}m_{j-i}^{k}\\ \\ g_{0}^{k}m_{0}^{k}=\Delta x\sum_{j=1}^{J}{}^{*}\beta_{j}^{k}m_{j}^{k}\end{cases}, \tag{10}\] for the explicit term, \({\cal C}_{j,k}^{\rm exp}\) or \[\begin{cases}(1+\Delta t\sum_{i=1}^{J}\kappa_{i,j}m_{i}^{k})m_{j}^{k+1}=(1- \dfrac{\Delta t}{\Delta x}A_{j}^{k}-\Delta t(d_{j}^{k}+a_{j}))m_{j}^{k}+\dfrac {\Delta t}{\Delta x}(A_{j}^{k}-B_{j}^{k})m_{j-1}^{k}\\ \hskip 14.226378pt+\Delta t\sum_{i=j}^{J}b_{i,j}a_{i}m_{i}^{k}+ \dfrac{\Delta t}{2}\sum_{i=1}^{j-1}\kappa_{i,j-i}m_{i}^{k+1}m_{j-i}^{k}\\ \hskip 14.226378ptg_{0}^{k}m_{0}^{k}=\Delta x\sum_{j=1}^{J}{}^{*} \beta_{j}^{k}m_{j}^{k}\;,\end{cases}. \tag{11}\] for the implicit term, \({\cal C}_{j,k}^{\rm imp}\). 
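To make the two variants concrete, the sketch below (ours; the array layout and helper names are assumptions, and the remaining right-hand-side terms of (11) are assumed to be pre-assembled in `rhs`) evaluates the explicit source terms (5)-(6) and performs one semi-implicit update of (11). The key observation is that the implicit coagulation gain at index \(j\) only involves \(m_{i}^{k+1}\) with \(i<j\), so (11) can be solved by a forward sweep in \(j\) without forming or inverting a matrix.

```python
import numpy as np

def explicit_sources(m, kappa, a, b):
    """Explicit coagulation term C^exp_j of (6) and fragmentation term F_j of (5).
    m, a : arrays of length J+1 (index 0 unused here); kappa, b : (J+1, J+1) arrays."""
    J = len(m) - 1
    C = np.zeros(J + 1)
    F = np.zeros(J + 1)
    for j in range(1, J + 1):
        gain = 0.5 * sum(kappa[i, j - i] * m[i] * m[j - i] for i in range(1, j))
        loss = m[j] * sum(kappa[i, j] * m[i] for i in range(1, J + 1))
        C[j] = gain - loss
        F[j] = sum(b[i, j] * a[i] * m[i] for i in range(j, J + 1)) - a[j] * m[j]
    return C, F

def semi_implicit_step(m_old, rhs, kappa, dt):
    """One update of (11) by forward substitution in j.
    rhs[j] collects all remaining right-hand-side terms of (11)
    (transport, death, fragmentation), assumed precomputed."""
    J = len(m_old) - 1
    m_new = np.zeros_like(m_old)
    for j in range(1, J + 1):
        # implicit coagulation gain uses already-computed m_new[i], i < j
        gain = 0.5 * dt * sum(kappa[i, j - i] * m_new[i] * m_old[j - i]
                              for i in range(1, j))
        denom = 1.0 + dt * sum(kappa[i, j] * m_old[i] for i in range(1, J + 1))
        m_new[j] = (rhs[j] + gain) / denom
    return m_new
```

In a full implementation these routines are combined with the limited flux (4), the boundary condition, and the time discretization to advance (3).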
For these schemes, we have the following lemmas which are proven in the appendix: **Lemma 4.1**.: _For each \(k=1,2,\ldots,\bar{k}\),_ * \(m_{j}^{k}\geq 0\) _for all_ \(j=1,2,\ldots,J\)_,_ * \(\|\mu_{\Delta x}^{k}\|_{TV}\leq\|\mu_{0}\|_{TV}\exp((\zeta+C_{b}C_{a})T)\)_._ **Lemma 4.2**.: _For any \(l,p=1,2,\ldots,\bar{k}\),_ \[\|\mu_{\Delta x}^{l}-\mu_{\Delta x}^{p}\|_{BL}\leq{\cal L}_{T}|l-p|.\] Using the above two lemmas, we can arrive at analogous results for the linear interpolation (12): \[\mu_{\Delta x}^{\Delta t}(t):=\mu_{\Delta x}^{0}\chi_{\{0\}}(t)+\sum_{k=0}^{\bar{k}-1}\left[(1-\dfrac{t-k\Delta t}{\Delta t})\mu_{\Delta x}^{k}+\dfrac{t-k\Delta t}{\Delta t}\mu_{\Delta x}^{k+1}\right]\chi_{(k\Delta t,(k+1)\Delta t)}(t). \tag{12}\] Thus, by the well-known Ascoli-Arzelà theorem, we have the existence of a convergent subsequence of the net \(\{\mu^{\Delta t}_{\Delta x}(t)\}\) in \(C([0,T],\mathcal{M}^{+}([0,x_{\max}]))\). We now need only show that any convergent subsequence converges to the unique solution (2). **Theorem 4.1**.: _As \(\Delta x,\Delta t\to 0\) the sequence \(\mu^{\Delta t}_{\Delta x}\) converges in \(C([0,T],\mathcal{M}^{+}([0,x_{\max}]))\) to the solution of (1)._ Proof.: By multiplying (3) by a sufficiently smooth test function \(\phi\in(W^{1,\infty}\cap C^{2})([0,T]\times\mathbb{R})\), denoting \(\phi^{k}_{j}:=\phi(k\Delta t,x_{j})\), summing over all \(j\) and \(k\), and rearranging we arrive at \[\sum_{k=0}^{\bar{k}-1}\sum_{j=1}^{J}\left((m^{k+1}_{j}-m^{k}_{j})\phi^{k}_{j}+\frac{\Delta t}{\Delta x}(f^{k}_{j+\frac{1}{2}}-f^{k}_{j-\frac{1}{2}})\phi^{k}_{j}\right)+\Delta t\sum_{k=0}^{\bar{k}-1}\sum_{j=1}^{J}d^{k}_{j}m^{k}_{j}\phi^{k}_{j} \tag{13}\] \[=\Delta t\sum_{k=1}^{\bar{k}-1}\sum_{j=1}^{J}\phi^{k}_{j}\left(\frac{1}{2}\sum_{i=1}^{j-1}\kappa_{i,j-i}m^{k}_{i}m^{k}_{j-i}-\sum_{i=1}^{J}\kappa_{i,j}m^{k}_{i}m^{k}_{j}+\sum_{i=j}^{J}b_{i,j}a_{i}m^{k}_{i}-a_{j}m^{k}_{j}\right).\] The left-hand side of equation (13) was shown in [5] to be equivalent to \[\int_{0}^{x_{\max}}\phi(T,x)d\mu^{\bar{k}}_{\Delta x}(x)-\int_{0}^{x_{\max}}\phi(0,x)d\mu^{0}_{\Delta x}(x)\] \[-\Delta t\sum_{k=0}^{\bar{k}-1}\left(\int_{0}^{x_{\max}}\partial_{t}\phi(t_{k},x)d\mu^{k}_{\Delta x}(x)+\int_{0}^{x_{\max}}\partial_{x}\phi(t_{k},x)g(t_{k},\mu^{k}_{\Delta x})(x)d\mu^{k}_{\Delta x}(x)\right.\] \[\quad\left.-\int_{\mathbb{R}^{+}}d(t_{k},\mu^{k}_{\Delta x})(x)\phi(t_{k},x)d\mu^{k}_{\Delta x}(x)+\int_{0}^{x_{\max}}\phi(t_{k},\Delta x)\beta(t_{k},\mu^{k}_{\Delta x})(x)d\mu^{k}_{\Delta x}(x)\right)+o(1),\] where \(o(1)\longrightarrow 0\) as \(\Delta t,\Delta x\longrightarrow 0\). 
The right-hand side of (13) was shown in [7] to be equal to \[\Delta t\sum_{k=1}^{\bar{k}-1}\left\{(K[\mu^{\Delta t}_{\Delta x}(t_{k})], \phi(t_{k},\cdot))+(F[\mu^{\Delta t}_{\Delta x}(t_{k})],\phi(t_{k},\cdot)) \right\}+O(\Delta x).\] Making use of results, it is then easy to see (13) is equivalent to \[\int_{0}^{x_{\max}}\phi(T,x)d\mu^{\Delta t}_{\Delta x}(T)(x)-\int _{0}^{x_{\max}}\phi(0,x)d\mu^{0}_{\Delta x}(x)\] \[=\int_{0}^{T}\left(\int_{0}^{x_{\max}}\partial_{t}\phi(t,x)+ \partial_{x}\phi(t,x)g(t,\mu^{\Delta t}_{\Delta x}(t))(x)d\mu^{\Delta t}_{ \Delta x}(t)(x)\right.\] \[\quad-\int_{0}^{x_{\max}}d(t,\mu^{\Delta t}_{\Delta x}(t))(x) \phi(t,x)d\mu^{\Delta t}_{\Delta x}(t)(x)+\int_{0}^{x_{\max}}\phi(t,\Delta x) \beta(t,\mu^{\Delta t}_{\Delta x}(t))(x)d\mu^{\Delta t}_{\Delta x}(t)(x) \right)dt\] \[\quad+\int_{0}^{T}(K[\mu^{\Delta t}_{\Delta x}(t)],\phi(t,\cdot))+ (F[\mu^{\Delta t}_{\Delta x}(t)],\phi(t,\cdot))\,dt+o(1).\] Passing the limit as \(\Delta t,\Delta x\longrightarrow 0\) along a converging subsequence, we then obtain that equation (2) holds for any \(\phi\in(C^{2}\cap W^{1,\infty})([0,T]\times\mathbb{R}^{+})\) with compact support. A standard density argument shows that equation (2) holds for any \(\phi\in(C^{1}\cap W^{1,\infty})([0,T]\times\mathbb{R}^{+})\). As the weak solution is unique [6], we conclude the net \(\{\mu^{\Delta t}_{\Delta x}\}\) converges to the solution of model (1). We point out that while these schemes are higher-order in space, they are only first order in time. To lift these schemes into a second-order in time as well, we make use of the second-order Runge-Kutta time discretization [21] for the explicit scheme and second-order Richardson extrapolation [16] for the semi-implicit scheme. ## 5 Numerical Examples In this section, we provide numerical simulations which test the order of the explicit and semi-implicit schemes developed in the previous sections. For each example, we give the BL error and the order of convergence. To appreciate the gain in the order of convergence compared to those studied in [7] which are based on first order approximation of the transport term, we add some of the numerical results from the scheme presented in [7]. In some of the following examples, the exact solution of the model problem is given. In these cases, we approximate the order of accuracy, \(q\), with the standard calculation: \[q=\log_{2}\left(\frac{\rho(\mu_{\Delta x}^{\Delta t}(T),\mu(T))}{\rho(\mu_{0.5 \Delta x}^{0.5\Delta t}(T),\mu(T))}\right)\] where \(\mu\) represents the exact solution of the examples considered. In the cases where the exact solutions are unknown, we approximate the order by \[q=\log_{2}\left(\frac{\rho(\mu_{\Delta x}^{\Delta t}(T),\mu_{2\Delta x}^{2 \Delta t}(T))}{\rho(\mu_{0.5\Delta x}^{0.5\Delta t}(T),\mu_{\Delta x}^{\Delta t }(T))}\right)\] and we report the numerator of the log argument as the error. The metric \(\rho\) we use here was introduced in [17] and is equivalent to the BL metric, namely \[C\rho(\mu,\nu)\leq\|\mu-\nu\|_{BL}\leq\rho(\mu,\nu)\] for some constant \(C\) (dependent on the finite domain). As discussed in [17], this metric is more efficient to compute than the BL norm and maintains the same order of convergence. An alternative to this algorithm would be to make use of the algorithms presented in [19] where convergence in the Fortet-Mourier distance is considered. Example 1In this example, we test the quality of the finite difference schemes against coagulation equations. 
To this end, we take \(\kappa(x,y)\equiv 1\) and \(\mu_{0}=e^{-x}dx\) with all other ingredients set to \(0\). This example has exact solution given by \[\mu_{t}=\left(\frac{2}{2+t}\right)^{2}\exp\left(-\frac{2}{2+t}x\right)dx\] see [18] for more details. The simulation is performed over the finite domain \((x,t)\in[0,20]\times[0,0.5]\). We present the BL error and the numerical order of convergence for both schemes in Table 1. **Example 2** In this example, we test the quality of the finite difference scheme against fragmentation only equations. We point out that in this case, the two schemes are identical in the spacial component. For this demonstration, we take \(\mu_{0}=e^{-x}dx\), \(b(y,\cdot)=\frac{2}{y}dx\) and \(a(x)=x\). As given in [15], this problem has an exact solution of \[\mu_{t}=(1+t)^{2}\exp(-x(1+t))dx.\] The simulation is performed over the finite domain \((x,t)\in[0,20]\times[0,0.5]\). We present the BL error and the numerical order of convergence for both schemes in Table 2. Note as compared to coagulation, the fragmentation process is more affected by the truncation of the domain. This results in the numerical order of the scheme being further from 2 than example 1. **Example 3** In this example, we test the schemes against the complete model i.e. with all biological and physical processes. To this end, we take \(\mu_{0}=e^{-x}dx\), \(g(x)=2-2e^{x-20}\), \(\beta(x)=2\), \(d(x)=1\), \(\kappa(x,y)=1\), \(a(x)=x\), and \(b(y,\cdot)=\frac{2}{y}\). The simulation is performed over the finite domain \((x,t)\in[0,20]\times[0,0.5]\). To our knowledge, the solution of this problem is unknown. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{4}{|c|}{**Explicit**} & \multicolumn{2}{c|}{**Semi-Implicit**} \\ \hline **Nx** & **Nt** & **BL Error** & **Order** & **BL Error** & **Order** \\ \hline 100 & 250 & 0.0020733 & & 0.0020886 & \\ \hline 200 & 500 & 0.00054068 & 1.9391 & 0.00054408 & 1.9407 \\ \hline 400 & 1000 & 0.00013802 & 1.9699 & 0.00013883 & 1.9705 \\ \hline 800 & 2000 & 3.4842e-05 & 1.9860 & 3.5040e-05 & 1.9862 \\ \hline 1600 & 4000 & 8.7417e-06 & 1.9948 & 8.7906e-06 & 1.9950 \\ \hline & & **Explicit (1st order)** & **Semi-Implicit (1st order)** \\ \hline 800 & 2000 & 0.015675 & 0.96974 & 0.010996 & 0.97418 \\ \hline \end{tabular} \end{table} Table 1: Error and order of convergence for example 1. Here Nx and Nt represent the number of points in \(x\) and \(t\), respectively. The numerical result in the last row for the 1st order variant is generated from the scheme presented in [7]. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{4}{|c|}{**Explicit**} & \multicolumn{2}{c|}{**Semi-Implicit**} \\ \hline **Nx** & **Nt** & **BL Error** & **Order** & **BL Error** & **Order** \\ \hline 100 & 250 & 0.0053857 & & 0.0053836 & \\ \hline 200 & 500 & 0.0014548 & 1.8883 & 0.0014536 & 1.8890 \\ \hline 400 & 1000 & 0.00037786 & 1.9449 & 0.00037753 & 1.9449 \\ \hline 800 & 2000 & 9.6317e-05 & 1.9720 & 9.6322e-05 & 1.9707 \\ \hline 1600 & 4000 & 2.4468e-05 & 1.9769 & 2.4514e-05 & 1.9743 \\ \hline \multicolumn{4}{|c|}{**Explicit (1st order)**} & **Semi-Implicit (1st order)** \\ \hline 800 & 2000 & 0.059804 & 0.9128 & 0.096943 & 0.86667 \\ \hline \end{tabular} \end{table} Table 2: Error and order of convergence for example 2. Here Nx and Nt represent the number of points in \(x\) and \(t\), respectively. The numerical result in the last row for the 1st order variant is generated from the scheme presented in [7]. 
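The orders reported in the tables are obtained from the refinement formula for \(q\) given above. As a small illustration (ours), the explicit-scheme BL errors of Table 1 reproduce the tabulated orders:

```python
import numpy as np

def observed_order(errors):
    """Observed order q = log2(e_h / e_{h/2}) for successive mesh refinements (Section 5)."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# BL errors of the explicit scheme from Table 1 (Example 1)
errors_table1 = [0.0020733, 0.00054068, 0.00013802, 3.4842e-05, 8.7417e-06]
print(observed_order(errors_table1))   # ~ [1.94, 1.97, 1.99, 1.99]
```

When the exact solution is unknown, the same formula is applied to the distances \(\rho\) between numerical solutions on successive refinements, as described above.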
**Example 4**: As mentioned in [7], the mixed discrete and continuous fragmentation model studied in [8, 9], with adjusted assumptions, is a special case of model (1). Indeed, by removing the biological and coagulation terms and letting the kernel \[(b(y,\cdot),\phi)=\sum_{i=1}^{N}b_{i}(y)\phi(ih)+\int_{Nh}^{y}\phi(x)b^{c}(y,x)dx\] with \(\text{supp }b^{c}(y,\cdot)\subset[Nh,y]\) for some \(h>0\), we have the mixed model in question. We wish to demonstrate that the finite difference scheme presented here maintains this mixed structure. To this end, we take the fragmentation kernel \[b^{c}(y,x)=\frac{2}{y},\quad b_{i}(y)=\frac{2}{y},\text{ and }a(x)=x^{-1},\] with initial condition \(\mu=\sum_{i=1}^{5}\delta_{i}+\chi_{[5,15]}(x)dx\), where \(\chi_{A}\) represents the characteristic function over the set \(A\). This is similar to some examples in [9] where more detail and analysis are provided. In Figure 1, we present the simulation of this example. Notice that the mixed structure is preserved in finite time. For examples of this type, the scheme could be improved upon by the inclusion of mass-conservative fragmentation terms similar to those presented in [6]. ## 6 Conclusion In this paper, we have lifted two of the first-order finite difference schemes presented in [7] to second-order high-resolution schemes using flux limiter methods. The difference between both schemes is only found in the coagulation term, where the semi-implicit scheme is made linear. In the context of standard structured population models (i.e. without coagulation or fragmentation), these types of schemes have been shown to be well-behaved in the presence of discontinuities and singularities. This quality makes them a well-suited tool for studying PDEs in spaces of measures. We prove the convergence of both schemes under the assumption of natural CFL conditions. The order of convergence of both schemes is then tested numerically with previously used examples. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{5}{|c|}{**Explicit**} & \multicolumn{2}{c|}{**Semi-Implicit**} \\ \hline **Nx** & **Nt** & **BL Error** & **Order** & **BL Error** & **Order** \\ \hline 100 & 250 & 0.0023026 & & 0.0028799 & \\ \hline 200 & 500 & 0.00085562 & 1.4282 & 0.00076654 & 1.9096 \\ \hline 400 & 1000 & 0.0002743 & 1.6412 & 0.00076654 & 1.9549 \\ \hline 800 & 2000 & 7.5404e-05 & 1.8631 & 5.021e-05 & 1.9775 \\ \hline 1600 & 4000 & 1.9495e-05 & 1.9515 & 1.2651e-05 & 1.9887 \\ \hline \multicolumn{5}{|c|}{**Explicit (1st order)**} & **Semi-Implicit (1st order)** \\ \hline 800 & 2000 & 0.0092432 & 0.97728 & 0.0014192 & 0.98355 \\ \hline \end{tabular} \end{table} Table 3: Error and order of convergence for example 3. Here Nx and Nt represent the number of points in \(x\) and \(t\), respectively. The numerical result in the last row for the 1st order variant is generated from the scheme presented in [7]. In summary, the schemes perform as expected in the presence of smooth initial conditions. In all such simulations, the numerical schemes presented demonstrate a convergence rate of order 2. For simulations with biological terms, this convergence rate is expected to drop when singularities and discontinuities occur, as demonstrated in [5]. Mass conservation of the schemes, an important property for coagulation/fragmentation processes, is discussed in detail in [6, 7]. Acknowledgments: The research of ASA is supported in part by funds from R.P. 
Authement Eminent Scholar and Endowed Chair in Computational Mathematics at the University of Louisiana at Lafayette. RL is grateful for the support of the Carl Tryggers Stiftelse via the grant CTS 21-1656. ## 7 Appendix ### Proof of Lemmas 4.1 and 4.2 In this section, we present the proofs of Lemmas 4.1 and 4.2 for the explicit coagulation term. The semi-implicit term follows from similar arguments in the same fashion as [7]. Proof of Lemma 4.1 Proof.: We first prove via induction that for any \(k=1,2,\ldots,\bar{k}\), \(\mu^{k}_{\Delta x}\) satisfies the following: * \(\mu^{k}_{\Delta x}\in\mathcal{M}^{+}(\mathbb{R}^{+})\) i.e. \(m^{k}_{j}\geq 0\) for all \(j=1,\ldots,J\), * \(\|\mu^{k}_{\Delta x}\|_{TV}\leq\|\mu^{0}_{\Delta x}\|_{TV}(1+(\zeta+C_{b}C_{a}) \Delta t)^{k}\). Then, the TV bound in the Lemma follows from standard arguments (see e.g. Lemma 4.1 in [7]). We prove this Theorem for the choice of the explicit coagulation term, \(\mathcal{C}^{\exp}_{j,k}\), as the implicit case is similar and more straight forward. Figure 1: Initial condition and numerical solution at time \(T=4\) of example 4. We begin by showing that \(m_{j}^{k+1}\geq 0\) for every \(j=1,2,\ldots,J\). Notice by way of (10), this reduces down to showing \[\frac{\Delta t}{\Delta x}A_{j}^{k}+\Delta t(d_{j}^{k}+a_{j})+\Delta t\sum_{i=1} ^{J}\kappa_{i,j}m_{i}^{k}\leq 1.\] Indeed, by the CFL condition (8), induction hypothesis, and \[\sum_{i=1}^{J}\kappa_{i,j}m_{i}^{k}\leq C_{\kappa}\sum_{i=1}^{J}m_{i}^{k}=C_{ \kappa}\|\mu_{\Delta x}^{k}\|_{TV}\leq C_{\kappa}\|\mu_{\Delta x}^{0}\|_{TV} \exp((\zeta+C_{b}C_{a})T),\] we arrive at the result. For the TV bound, we have since the \(m_{j}^{k}\) are non-negative, \(\|\mu_{\Delta x}^{k}\|_{TV}=\sum_{j=1}^{J}m_{j}^{k}\). By rearranging (10) and summing over \(j=1,2,\ldots,J\) we have \[\|\mu_{\Delta x}^{k+1}\|_{TV}\leq\sum_{j=1}^{J}m_{j}^{k}+\frac{ \Delta t}{\Delta x} \sum_{j=1}^{J}\Big{(}f_{j-\frac{1}{2}}^{k}-f_{j+\frac{1}{2}}^{k} \Big{)}+\Delta t\sum_{j=1}^{J}\sum_{i=j}^{J}b_{i,j}a_{i}m_{i}^{k} \tag{14}\] \[+\Delta t\Big{(}\frac{1}{2}\sum_{j=1}^{J}\sum_{i=1}^{j-1}\kappa_{ i,j-i}m_{i}^{k}m_{j-i}^{k}-\sum_{j=1}^{J}\sum_{i=1}^{J}\kappa_{i,j}m_{i}^{k}m_{j }^{k}\Big{)}.\] To bound the right-hand side of equation (14), we directly follow the arguments of Lemma 4.1 in [7] which yields \[\|\mu_{\Delta x}^{k+1}\|_{TV}\leq(1+(\zeta+C_{a}C_{b})\Delta t)\sum_{j=1}^{J}m _{j}^{k}=(1+(\zeta+C_{a}C_{b})\Delta t)\|\mu_{\Delta x}^{k}\|_{TV}.\] Using the induction hypothesis, we obtain \(\|\mu_{\Delta x}^{k+1}\|_{TV}\leq\|\mu_{\Delta x}^{0}\|_{TV}(1+(\zeta+C_{b}C_{ a})\Delta t)^{k+1}\) as desired. 
### Proof of Lemma 4.2 Proof.: For \(\phi\in W^{1,\infty}(\mathbb{R}^{+})\) with \(\|\phi\|_{W^{1,\infty}}\leq 1\), and denoting \(\phi_{j}:=\phi(x_{j})\), we have for any \(k\), \[(\mu_{\Delta x}^{k+1}-\mu_{\Delta x}^{k},\phi)= \sum_{j=1}^{J}(m_{j}^{k+1}-m_{j}^{k})\phi_{j}\] \[\leq \Delta t\sum_{j=1}^{J}\phi_{j}\Big{(}\frac{1}{\Delta x}(f_{j- \frac{1}{2}}^{k}-f_{j+\frac{1}{2}}^{k})-d_{j}^{k}m_{j}^{k}-a_{j}m_{j}^{k}\] \[+\frac{1}{2}\sum_{i=1}^{j-1}\kappa_{i,j-i}m_{i}^{k}m_{j-i}^{k}- \sum_{i=1}^{J}\kappa_{i,j}m_{i}^{k}m_{j}^{k}+\sum_{i=j}^{J}b_{i,j}a_{i}m_{i}^{ k}\Big{)}.\] Let \(C\) be the right-hand side of the TV-bound from Lemma 4.1, we then see \[(\mu_{\Delta x}^{k+1}-\mu_{\Delta x}^{k},\phi)\leq\frac{\Delta t}{\Delta x} \sum_{j=1}^{J}\phi_{j}(f_{j-\frac{1}{2}}^{k}-f_{j+\frac{1}{2}}^{k})+\Delta t( \zeta+C_{a}+C_{b}C_{a}+\frac{3}{2}C_{\kappa}C^{*})C^{*}.\] Moreover, since \(g_{J}^{k}=0\) the sum in the right-hand side takes the form \[\phi_{1}g_{0}^{k}m_{0}^{k}+\sum_{j=1}^{J-1}(\phi_{j+1}-\phi_{j})f_{j+\frac{1}{2} }^{k}=\Delta x\phi_{1}\sum_{j=1}^{J}{}^{*}\beta_{j}^{k}m_{j}^{k}+\sum_{j=1}^{J- 1}(\phi_{j+1}-\phi_{j})f_{j+\frac{1}{2}}^{k}\leq 3.5\Delta x\zeta C^{*}.\] We thus obtain \[(\mu_{\Delta x}^{k+1}-\mu_{\Delta x}^{k},\phi)\leq L\Delta t,\qquad L:=(3.5 \zeta+C_{a}+C_{b}C_{a}+\frac{3}{2}C_{\kappa}C^{*})C^{*}.\] Taking the supremum over \(\phi\) gives \(\|\mu_{\Delta x}^{k+1}-\mu_{\Delta x}^{k}\|_{BL}\leq L\Delta t\) for any \(k\). The result follows.
2304.04927
Data-Driven Fast Frequency Control using Inverter-Based Resources
To address the control challenges associated with the increasing share of inverter-connected renewable energy resources, this paper proposes a direct data-driven approach for fast frequency control in the bulk power system. The proposed control scheme partitions the power system into control areas, and leverages local dispatchable inverter-based resources to rapidly mitigate local power imbalances upon events. The controller design is based directly on historical measurement sequences, and does not require identification of a parametric power system model. Theoretical results are provided to support the approach. Simulation studies on a nonlinear three-area test system demonstrate that the controller provides fast and localized frequency control under several types of contingencies.
Etinosa Ekomwenrenren, John W. Simpson-Porco, Evangelos Farantatos, Mahendra Patel, Aboutaleb Haddadi, Lin Zhu
2023-04-11T01:51:39Z
http://arxiv.org/abs/2304.04927v1
# Data-Driven Fast Frequency Control using Inverter-Based Resources ###### Abstract To address the control challenges associated with the increasing share of inverter-connected renewable energy resources, this paper proposes a direct data-driven approach for fast frequency control in the bulk power system. The proposed control scheme partitions the power system into control areas, and leverages local dispatchable inverter-based resources to rapidly mitigate local power imbalances upon events. The controller design is based directly on historical measurement sequences, and does not require identification of a parametric power system model. Theoretical results are provided to support the approach. Simulation studies on a nonlinear three-area test system demonstrate that the controller provides fast and localized frequency control under several types of contingencies. frequency control, low inertia, distributed control, renewable energy, smart grid, next generation control. ## I Introduction AKey objective in power system operations is the maintenance of a stable system frequency and the quick restoration of power balance [1]. However, the increasing penetration of inverter-connected renewable energy resources (RESs) [2] is resulting in adverse effects on power system frequency regulation [3]. By replacing conventional synchronous generators, along with their synchronized rotational mass, the increasing proliferation of RESs reduces the system inertia, resulting in a faster rate of change of frequency (ROCOF) and lower frequency nadir during contingencies (i.e., a deeper drop in frequency). Furthermore, with some RESs not participating in primary frequency control[4] while displacing synchronous generators which do, the aggregate effective primary control response in the system is reduced [5, 6]. When combined with the net load variability these intermittent and variable RESs introduce, the upshot is larger and more frequency deviations, making it increasingly difficult for system operators to maintain frequency within acceptable limits, such as the extreme variations noted by the California Independent System Operator in the so-called 'duck chart' [7]. There has been extensive research into the negative dynamic effects of reduced inertia in the power system due to increased renewables, with suggested solutions such as virtual inertia emulation and services [8, 9]. Equally important to consider is the problem of maintaining the average frequency close to nominal during normal operation, with regulation performance being quantified by regulatory authorities in Control Performance Standards (CPS) [10, 11]. For this, it is essential to consider the local system inertia, primary control response, and primary control deadband, as these have the greatest effect on the average frequency deviations [6]. This fosters the need for localized fast frequency control strategies, which take into cognizance local system model and parameter information. Traditionally, the Automatic Generation Control (AGC) system employs a centralized approach to maintain average frequency deviations within desired limits for each balancing authority area. This is achieved by generating control signals at a central control center. However, due to the sheer size of the balancing authority area, maintaining an accurate dynamic system model becomes an arduous task. 
Consequently, the AGC system relies on classical frequency bias constant methods [12], which, while effective to some extent, limit its speed and utility for rapid frequency control. If accurate parametric models are available, then modern model-based controller design approaches can be successfully used to enable IBR participation in local fast frequency control. In [13] the authors developed and validated such a controller based on the principles of active disturbance estimation and rejection; as our work here builds directly on this, further details are deferred to Section II. This scheme and other model-based frequency control approaches, such as model predictive control [14, 15] and robust optimal control [16, 17] can provide good closed-loop control performance and reduce average frequency deviations. However, obtaining accurate parametric models may be prohibitively difficult in practice [18], ultimately limiting the performance of model-based designs. For example, simulations in [13] show deterioration in control performance (e.g., post- disturbance settling time and overshoot) when there are parametric mismatches between the true system model and the model used for design. Data-driven or data-based controller design methods provide a promising alternative in this regard. Proposals to address the issue of power system frequency control using data-driven control can be broadly divided into two categories: indirect and direct. In an indirect approach, historical data from the power system is used to explicitly identify a system model, and a controller is then designed based on that model (e.g., [19, 20]). The indirect approach has the advantage of providing an explicit, interpretable model of the system, which can aid in understanding the particular frequency response dynamics. However, even selecting an appropriate parametric model to fit is a difficult trial-and-error process, particularly in modern power systems with diverse and quickly evolving components. Additionally, there is evidence that the intermediate identification step may lead to poorer closed-loop performance than recent direct approaches [21]. In a direct data-driven or model-free approach, a frequency controller is designed directly based on recorded or online real-world data, without explicit identification of a system model. One broad approach in this category is adaptive dynamic programming or reinforcement learning [22, 23, 24]. Here, control actions are taken to maximize some form of cumulative reward. However, reinforcement learning approaches are limited by their sensitivity to hyper-parameter selection, the complex training process required to determine the weight coefficients of the trained agent, which in turn relies on a significant amount of historical sampled data [25, 26]. In contrast with reinforcement learning, a suite of alternative direct data-driven control approaches have recently appeared [27, 25, 28, 29], and derive from a branch of control theory called behavioral systems [30]. These techniques allow for direct control while being sample efficient, and often come with rigorous performance guarantees. While the specific controllers differ between approaches (e.g., model-predictive [25], linear-quadratic, [28] etc.), these approaches all rely on the so-called _fundamental lemma_ of behavioral systems, which states that a single recorded trajectory is sufficient to capture the underlying dynamic model of the system if the input signal is rich enough to excite all system modes [30]. 
Our proposed controller is also based on this principle. ContributionsThis paper provides direct data-driven controller designs which enable IBRs to participate in providing geographically localized fast frequency control. The key components in our approach are novel designs for _data-driven disturbance estimators_: dynamic algorithms which provide online estimates of the net real power imbalance within a specified control area. Local IBRs are then quickly redispatched within their operating limits to eliminate the imbalance. In Section III we present two data-driven disturbance estimator designs. Both designs are based directly on recorded system data, and do not require a parametric system model. The two designs trade off between simplicity and robustness/performance. The first design, proposed in our preliminary work [31], uses a simple linear update law to estimate the disturbance, and requires tuning of only a single parameter. Extending [31], here we provide theoretical guarantees supporting this design. The first design serves as a stepping stone to our second approach, which is an optimization-based estimation procedure. The second design has a higher computational burden, requiring the solution of a convex optimization problem at each time-step, but (i) is less sensitive to noise in the recorded data, (ii) is less sensitive to strong nonlinearity in the system dynamics (e.g., governor deadbands), and (iii) shows superior performance in simulation studies. As the simulations are general, we outline specifically how these methods are applied to the frequency control problem under consideration. Compared, for instance, to the recent data-driven load-frequency controller proposed in [29], we do not make the strong assumption that a measurement of net load demand is available; our approach is based only on direct measurements of area frequency and net power flow out of the control area. In Section IV we extensively validate our designs via simulations on a detailed nonlinear three-area power system. Several scenarios are examined, included load increases, heavy renewable penetration, generation trips, and three-phase faults. The tests demonstrate that our approach provides fast and effective frequency control for the bulk grid, and outperforms our recent model-based design [13]. ## II Review: Model-Based Fast Frequency Controller [13] We briefly review the key architectural aspects of the model-based hierarchical fast frequency controller developed and extensively tested in [13]. The key criteria driving the design are a desire for (i) design simplicity, (ii) fast localized response to achieve frequency regulation, and (iii) localized use of system data and measurements to minimize latency and maximize data privacy. In the scheme of [13], the grid is divided into small local control areas (LCAs). Within each LCA, a _disturbance estimator_ processes frequency and area tie power flow measurements \(\Delta\omega\) and \(\Delta P_{\mathrm{tie}}\) (incorporating estimated delays) in order to detect frequency events. The estimator generates a real-time estimate \(\Delta\widehat{P}_{\mathrm{u}}\) of the net unmeasured active power imbalance \(\Delta P_{\mathrm{u}}\) within the LCA, and an allocation mechanism optimally redispatches local IBRs to correct the imbalance; see [13, Section III-A] for further details on the power allocator. A block diagram of this LCA controller is shown in Figure 1. 
In situations where local resources are insufficient, a higher-level coordinating controller facilitates the provision of additional power support from neighboring LCAs; as this higher-level coordinating controller is not our focus in this article, we refer to [13, Section III-B] for further details. Fig. 1: Block diagram of area control structure for each LCA. Dashed lines denote sampled signals. The disturbance estimator within the LCA controller is the core design component of the approach. The estimator is a Luenberger-type state observer, and its design requires a parametric model describing the LCA frequency dynamics. For practical reasons, a simple model with few parameters is strongly preferred, and the designs in [13, Section II], used the second-order _frequency response model_ \[\begin{split} 2H\Delta\dot{\omega}&=-\tfrac{1}{R_{ \mathrm{i}}}\Delta\omega+\Delta P_{\mathrm{m}}-\Delta P_{\mathrm{u}}-\Delta P _{\mathrm{tie}}+\Delta P_{\mathrm{lbr}}\\ T_{\mathrm{R}}\Delta\dot{P}_{\mathrm{m}}&=- \Delta P_{\mathrm{m}}-R_{\mathrm{g}}^{-1}(\Delta\omega+T_{\mathrm{R}}F_{ \mathrm{H}}\Delta\dot{\omega})\end{split} \tag{1}\] which considers local aggregated parameters, such as total LCA inertia \(H\), total IBR and generator primary control gains \(R_{\mathrm{l}}\) and \(R_{\mathrm{g}}\), aggregated turbine-governor time constant \(T_{\mathrm{R}}\), and aggregated high-pressure turbine fraction \(F_{\mathrm{H}}\). ## III A Data-Driven Control Approach for Area-Based Fast Frequency Control The model-based design of Section II requires an explicit and accurate model of the frequency dynamics of each LCA. In practice, this requirement poses at least two major challenges. First, an appropriate class of parametric models must be selected; this step balances simplicity vs. accuracy, and will become increasingly difficult as RESs with black-box power electronic controls proliferate. Second, the parameters of the model must be selected or fit; this procedure itself is challenging, with associated bias-variance trade-offs [32]. To address these issues, in this section we develop two direct data-driven design approaches to supplant the model-based design approach described in Section II. In essence, the idea is to replace the crude parametric LCA model (1) with a non-parametric model based on time-series data collected from the system. This time-series data is directly used to design a disturbance estimation scheme, without passing through an explicit system identification step. Section III-B describes our first data-driven disturbance estimation approach, which fuses ideas from linear estimator design and behavioral systems theory. The resulting disturbance estimator is described by a linear update rule, and requires tuning of only one scalar gain. To improve robustness to grid nonlinearities and inexact data collection procedures, our second design approach in Section III-C extends this linear estimation procedure with an optimization-based estimation procedure. Finally, in Section III-D we describe how these general estimation ideas are adapted for the particulars of power system frequency control and integrated into the hierarchical control framework outlined in Section II. 
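Before introducing the data-driven machinery, the following minimal sketch simulates a forward-Euler discretization of the frequency response model (1). The parameter values are illustrative assumptions rather than the values used in [13]; the sketch only serves to make concrete the kind of parametric model that the data-driven designs below avoid having to specify.

```python
import numpy as np

# Illustrative (assumed) aggregate LCA parameters; these are NOT the values from [13].
H, R_i, R_g, T_R, F_H = 4.0, 0.05, 0.05, 8.0, 0.3
dt, n_steps = 0.05, 600                 # 0.05 s step, 30 s horizon

dP_u, dP_tie, dP_ibr = 0.05, 0.0, 0.0   # assumed 0.05 p.u. unmeasured load step, no control
dw, dP_m = 0.0, 0.0                     # frequency and mechanical-power deviations
freq = np.zeros(n_steps)

for k in range(n_steps):
    # 2*H*dw_dot = -(1/R_i)*dw + dP_m - dP_u - dP_tie + dP_ibr     (first equation of (1))
    dw_dot = (-(1.0 / R_i) * dw + dP_m - dP_u - dP_tie + dP_ibr) / (2.0 * H)
    # T_R*dP_m_dot = -dP_m - (1/R_g)*(dw + T_R*F_H*dw_dot)         (second equation of (1))
    dP_m_dot = (-dP_m - (1.0 / R_g) * (dw + T_R * F_H * dw_dot)) / T_R
    dw, dP_m = dw + dt * dw_dot, dP_m + dt * dP_m_dot
    freq[k] = dw

print("quasi-steady-state frequency deviation (p.u.):", freq[-1])
```

With these assumed numbers, the droop terms leave a small nonzero frequency deviation after the load step, which is precisely the residual imbalance the disturbance estimator is designed to detect and correct.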
### _Background on Data-Driven System Representation_ Consider the controllable finite-dimensional discrete-time linear time-invariant (LTI) model \[\begin{split} x(t+1)&=Ax(t)+Bu(t)+B_{d}d(t)\\ y(t)&=Cx(t)+Du(t)\end{split} \tag{2}\] with time \(t\in\mathbb{Z}_{\geq 1}\), state \(x(t)\in\mathbb{R}^{n}\), control input \(u(t)\in\mathbb{R}^{m}\), disturbance input \(d(t)\in\mathbb{R}^{q}\), and measured output \(y(t)\in\mathbb{R}^{p}\). We assume the matrices \((A,B,B_{d},C,D)\) of (2) are _unknown_, and hence the model (2) cannot be directly used for simulation, analysis, and feedback design purposes. _Behavioral systems theory_ provides a set of tools for constructing a data-based representation of the dynamic system (2) using input and output measurements. We refer to [33] for a recent survey, and mention only the essential concepts here. As notation, if \((z(1),z(2),z(3),\ldots)\) is a \(\mathbb{R}^{m}\)-valued signal defined for positive time, we write \(z\in(\mathbb{R}^{m})^{\mathbb{Z}_{\geq 1}}\). The starting point is to diminish the role of the state, and consider all possible input-output sequences \((u(t),d(t),y(t))\) which are compatible with (2), called the _behaviour_: \[\begin{split}\mathscr{B}=\left\{(u,d,y)\in(\mathbb{R}^{m+q+p})^{ \mathbb{Z}_{\geq 1}}\ :\ \exists\,x\in(\mathbb{R}^{n})^{\mathbb{Z}_{\geq 1}}\text{ s.t. }\right.\\ \left.\sigma x=Ax+Bu+B_{d}d,\ y=Cx+Du\right\},\end{split} \tag{3}\] where \((\sigma x)(t)=x(t+1)\) is the shift operation. The behaviour (3) describes the system (2) as a subspace of the vector space of all possible input-output signals, and (2) is a _state-space representation_ of \(\mathscr{B}\). The _order_ of the system, denoted by \(n(\mathscr{B})\), is the smallest possible state dimension of the representation (2). Given a representation of minimal order, the _lag_ of \(\mathscr{B}\), denoted by \(\ell(\mathscr{B})\) is the smallest integer \(\ell\) such that the matrix \(\mathcal{O}_{\ell}=\mathrm{col}(C,CA,\ldots,CA^{\ell-1})\) has rank \(n(\mathscr{B})\). Let \(\mathscr{B}_{T}\) denote the restriction of the behaviour to trajectories of finite length \(T\in\mathbb{Z}_{\geq 1}\), i.e., input-output sequences of length \(T\). Suppose that we have collected \(T\)-samples of _input-output data_\(w^{d}=(u^{\mathrm{d}},d^{\mathrm{d}},y^{\mathrm{d}})\in\mathscr{B}_{T}\) from the system. This data may be directly used to create a non-parametric representation of the model (3). To do this, let \(L\leq T\) be a positive integer, and organize the data into the _Hankel matrix of depth_\(L\), given as \[\mathscr{H}_{L}(u^{\mathrm{d}})=\begin{bmatrix}u^{\mathrm{d}}(1)&\cdots&u^{ \mathrm{d}}(T-L+1)\\ \vdots&\ddots&\vdots\\ u^{\mathrm{d}}(L)&\cdots&u^{\mathrm{d}}(T)\end{bmatrix}\in\mathbb{R}^{mL\times (T-L+1)},\] with analogous definitions for \(\mathscr{H}_{L}(d^{\mathrm{d}})\) and \(\mathscr{H}_{L}(y^{\mathrm{d}})\). The input data \((u^{\mathrm{d}},d^{\mathrm{d}})\) is said to be _persistently exciting of order_\(L\) if \(\mathrm{col}(\mathscr{H}_{L}(u^{\mathrm{d}}),\mathscr{H}_{L}(d^{\mathrm{d}}))\) has full row rank; this captures the idea that the inputs are sufficiently rich and sufficiently long. 
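As a concrete illustration of the Hankel construction and the rank condition just described, here is a minimal NumPy sketch; the signal lengths and the random placeholder inputs are assumptions chosen only so that the persistency-of-excitation check can be run.

```python
import numpy as np

def hankel_matrix(z, L):
    """Hankel matrix of depth L built from a T-sample signal z (shape (T, m) or (T,))."""
    z = np.atleast_2d(z.T).T          # ensure shape (T, m)
    T, m = z.shape
    cols = T - L + 1
    H = np.zeros((m * L, cols))
    for j in range(cols):
        H[:, j] = z[j:j + L, :].reshape(-1)   # column j stacks the window z(j),...,z(j+L-1)
    return H

# Placeholder input data: random inputs are persistently exciting with high probability.
rng = np.random.default_rng(0)
T, L = 101, 8
u_d = rng.standard_normal((T, 1))     # recorded control-input samples
d_d = rng.standard_normal((T, 1))     # recorded disturbance-input samples

H_in = np.vstack([hankel_matrix(u_d, L), hankel_matrix(d_d, L)])
print("persistently exciting of order", L, ":", np.linalg.matrix_rank(H_in) == H_in.shape[0])
```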
The _Fundamental Lemma_[30] states that if the input data is persistently exciting of order \(L+n(\mathscr{B})\), then _any possible_ length \(L\) input-output sequence \((u,d,y)\in(\mathbb{R}^{(m+q+p)})^{L}\) can be expressed as \[\begin{bmatrix}\mathscr{H}_{L}(u^{\mathrm{d}})\\ \mathscr{H}_{L}(d^{\mathrm{d}})\\ \mathscr{H}_{L}(y^{\mathrm{d}})\end{bmatrix}g=\begin{bmatrix}u\\ d\\ y\end{bmatrix} \tag{4}\] for some vector \(g\in\mathbb{R}^{T-L+1}\). The linear equation (4) is a _data-based representation_ of the system (2), and can be leveraged for prediction and control [33]. ### _Design #1: Linear Data-Driven Disturbance Estimator_ We now consider (2) as a model for each LCA. We assume that \(d(t)\) is a constant unknown disturbance signal, which for us will model mismatch between generation and load. In our context, \(x(t)\) would consist of states of generators, converters, loads, and associated control systems, \(u(t)\) would be commands to IBRs, and \(y(t)\) would be available measurements such as frequency deviation. Since (2) would represent the system _including_ the action of primary controllers, the model (2) will be assumed to be internally exponentially stable, i.e., \(A\) will have eigenvalues within the unit circle. The design goal is to produce a real-time estimate \(\hat{d}(t)\) of the unknown disturbance \(d(t)\). Our proposed estimator design consists of two steps: 1. a _data-driven forward prediction_\(\hat{y}(t)\) of the output \(y(t)\); 2. a _linear update rule_ for \(\hat{d}(t)\) using \(\hat{y}(t)\) and the true system measurement \(y(t)\). To generate a prediction of the output for time \(t\), we will leverage (4), and assume that historical data \((u^{\mathrm{d}},d^{\mathrm{d}},y^{\mathrm{d}})\) is available. Model-based prediction using, e.g., (2) would require the specification of an initial condition. In the data-driven setting, the initial condition is implicitly defined by using recent online samples of input and output data [27]. Let \(T_{\mathrm{ini}}\geq\ell(\mathscr{B})\) be the length of the initialization data, and define the vectors \[\begin{split} u_{\mathrm{ini}}&=\mathrm{col}(u(t-T_ {\mathrm{ini}}),\ldots,u(t-1))\\ \hat{d}_{\mathrm{ini}}&=\mathrm{col}(\hat{d}(t-T_ {\mathrm{ini}}),\ldots,\hat{d}(t-1))\\ \hat{y}_{\mathrm{ini}}&=\mathrm{col}(\hat{y}(t-T_ {\mathrm{ini}}),\ldots,\hat{y}(t-1)).\end{split} \tag{5}\] Note that \(\hat{d}_{\mathrm{ini}}\) and \(\hat{y}_{\mathrm{ini}}\) are formed based on our past _estimates_ of the disturbance and output. In (4), we consider trajectories of length \(L=T_{\mathrm{ini}}+1\). We partition \(u,d,y\) in (4) as \[u=\begin{bmatrix}u_{\mathrm{ini}}\\ u(t)\end{bmatrix},\quad\hat{d}=\begin{bmatrix}d_{\mathrm{ini}}\\ \hat{d}(t)\end{bmatrix},\quad y=\begin{bmatrix}y_{\mathrm{ini}}\\ \hat{y}(t)\end{bmatrix},\] and correspondingly partitioning the rows of the Hankel matrices in the same fashion as \[\mathscr{H}_{L}(u^{\mathrm{d}})=\begin{bmatrix}U_{\mathrm{ini}}\\ U_{f}\end{bmatrix},\ \mathscr{H}_{L}(d^{\mathrm{d}})=\begin{bmatrix}D_{\mathrm{ini}}\\ D_{f}\end{bmatrix},\ \mathscr{H}_{L}(y^{\mathrm{d}})=\begin{bmatrix}Y_{\mathrm{ini}}\\ Y_{f}\end{bmatrix}.\] With these choices, (4) can be re-expressed as \[\mathscr{H}_{\mathrm{red}}g:=\begin{bmatrix}U_{p}\\ D_{p}\\ Y_{p}\\ U_{f}\\ D_{f}\end{bmatrix}g=\begin{bmatrix}u_{\mathrm{ini}}\\ \hat{d}_{\mathrm{ini}}\\ \hat{y}_{\mathrm{ini}}\\ u(t)\\ \hat{d}(t)\end{bmatrix},\qquad\hat{y}(t)=Y_{f}g. 
\tag{6}\] The first set of equations is solved for the unknown \(g\), and the prediction \(\hat{y}(t)=Y_{f}g\) is immediately obtained. If the underlying data-generating system is LTI and the collected data are exact, the Fundamental Lemma guarantees that (6) is consistent and the computed response matches the system's response exactly, provided \(T_{\mathrm{ini}}\geq\ell(\mathscr{B})\)[30]. With the output estimate generated, the disturbance estimate is now updated according to the feedback rule \[\hat{d}(t+1)=\hat{d}(t)-\varepsilon L(\hat{y}(t)-y(t)),\] where \(L\in\mathbb{R}^{q\times p}\) is the estimation gain and \(\varepsilon\in(0,1)\) is a tunable parameter which controls the rate of adjustment. Putting everything together, we can compactly express the overall disturbance estimator as \[\hat{y}(t) =\mathcal{P}\cdot\mathrm{col}(u_{\mathrm{ini}},\hat{d}_{\mathrm{ini }},\hat{y}_{\mathrm{ini}},u(t),\hat{d}(t)) \tag{7a}\] \[\hat{d}(t+1) =\hat{d}(t)-\varepsilon L(\hat{y}(t)-y(t)) \tag{7b}\] where \(\mathcal{P}=Y_{f}\mathscr{H}_{\mathrm{red}}^{\dagger}\) is the _prediction matrix_ and \(\mathscr{H}_{\mathrm{red}}^{\dagger}\) denotes the pseudoinverse of \(\mathscr{H}_{\mathrm{red}}\). As \(\mathcal{P}\) depends only on historical data, it can be computed once and stored, and thus implementing (7) simply amounts to matrix-vector multiplication. The final issue to address concerns the tuning of the estimator gain \(L\) and parameter \(\varepsilon\) in (7). Our tuning recommendation is \(L=G(1)^{\dagger}\), where \(G(1)=C(zI_{n}-A)^{-1}B_{d}|_{z=1}\) is the _DC gain_ of the system (2) from input \(d\) to output \(y\). This selection will be justified in our theory to follow, and the matrix \(G(1)\) can be obtained directly from the same historical data used to construct \(\mathcal{P}\) in (7); see [34, Thm. 4.1] for details on that construction. We can now give a theoretical result concerning convergence of the disturbance estimator (7). **Theorem III.1** (Data-Driven Disturbance Estimator).: _Consider the disturbance estimator (7) for the system (2) under all previous assumptions. Assume further that \(G(1)=C(I_{n}-A)^{-1}B_{d}\) has full column rank, and set the estimator gain as \(L=G(1)^{\dagger}\). Then there exists \(\varepsilon^{\star}>0\) such that for all \(\varepsilon\in(0,\varepsilon^{\star})\), \(\hat{d}(t)\to d(t)\) exponentially as \(t\to\infty\)._ The disturbance estimator (7) provides a _completely model-free_ solution to disturbance estimation problem; the only required tuning is the single scalar parameter \(\varepsilon\in(0,1)\). An implication of Theorem III.1 is that one may tune the estimator (7) by starting \(\varepsilon\) small and slowly increasing it; the proof can be found in Appendix A. **Remark III.2** (Singular Value Thresholding).: _In practice, the system generating the data which is used to build \(\mathscr{H}_{\mathrm{red}}\) in (7) may contain nonlinearity, and the measurements will be corrupted by measurement noise; this will be the case in our subsequent case studies. Both of these effects will compromise performance of the design (7). It has however been observed that low-rank approximations of Hankel matrices reduce the effects of noise in data-driven control, and enhance generalization [25]. In implementation, we compute the singular value decomposition of \(\mathscr{H}_{\mathrm{red}}\) and retain only the dominant singular values and vectors, to obtain a low-rank approximation \(\mathscr{H}_{\mathrm{red}}\)[35]. 
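A minimal sketch of how the estimator (7) could be implemented, including the singular-value truncation just mentioned, is given below. The helper names, the truncation rank, and the scalar-output simplification are illustrative assumptions; the blocks \(U_p,D_p,Y_p,U_f,D_f,Y_f\) are assumed to have been built from historical data by partitioning the Hankel matrices as in (6).

```python
import numpy as np

def truncated_pinv_predictor(H_red, Y_f, rank):
    """Prediction matrix P = Y_f @ pinv(H_tilde), where H_tilde is a rank-`rank`
    SVD truncation of H_red = col(U_p, D_p, Y_p, U_f, D_f) (cf. Remark III.2)."""
    U, s, Vt = np.linalg.svd(H_red, full_matrices=False)
    H_tilde = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # keep only the dominant singular triplets
    return Y_f @ np.linalg.pinv(H_tilde)

def estimator_step(P, L_gain, eps, u_ini, d_ini, y_ini, u_t, d_hat, y_meas):
    """One step of the linear disturbance estimator (7), scalar-output/-disturbance case."""
    xi = np.concatenate([u_ini, d_ini, y_ini, np.atleast_1d(u_t), np.atleast_1d(d_hat)])
    y_hat = (P @ xi).item()                               # data-driven prediction (7a)
    d_next = d_hat - eps * L_gain * (y_hat - y_meas)      # update rule (7b)
    return y_hat, d_next
```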
We then use \(\mathcal{P}=Y_{f}\mathscr{\tilde{H}}_{\mathrm{red}}^{\dagger}\) in (7), which, empirically, greatly increases the robustness of the approach. \(\square\)_ ### _Design #2: Optimization-Based Data-Driven Disturbance Estimator_ The advantage of (7) is simplicity, as it involves only linear update rules at each time step. We now outline a more flexible optimization-based disturbance estimation procedure which can achieve improved performance at the cost of higher implementation complexity. The key idea is to formulate the disturbance estimation problem as a regularized optimization problem. In particular, the use of regularization affords us more flexibility to select a better model class in terms of behaviour and complexity to better capture the dynamics of the true underlying system [33]. To begin, consider the previous development leading up to equation (6). Even if the system of equations (6) is consistent, it will generally have infinitely many solutions [27, 21]. The prediction matrix \(\mathcal{P}\) in (7) is given by \(\mathcal{P}=Y_{f}\mathscr{H}_{\mathrm{red}}^{\dagger}\), and corresponds precisely to taking the _least squares_ solution of the first equation in (6) as \[g^{*}=\operatorname*{argmin}_{g} \|g\|_{2}^{2}\] \[\operatorname{subject\ to} \mathcal{H}_{\text{red}}g=\operatorname{col}(u_{\text{ini}},\hat{d }_{\text{ini}},\hat{y}_{\text{ini}},u(t),\hat{d}(t))\] and then substituting to obtain \(\hat{y}(t)=Y_{f}g^{*}\). When using noisy data from a non-LTI data-generating system, it is advantageous to robustify this least-squares problem by adding regularization [21]. To this end, for the equation \(\mathcal{H}_{\text{red}}g=\xi\), we have \[g=\mathcal{H}_{\text{red}}^{\dagger}\xi\qquad\Longleftrightarrow\qquad(I- \mathcal{H}_{\text{red}}^{\dagger}\mathcal{H}_{\text{red}})g=0.\] Thus, with \(\mathcal{Q}=\mathcal{H}_{\text{red}}^{\dagger}\mathcal{H}_{\text{red}}\), a least squares solution for \(g\) also arises from minimizing the objective function \(\|(I-\mathcal{Q})g\|_{2}^{2}\) subject to the linear constraint \(\mathcal{H}_{\text{red}}g=\xi\). Our disturbance estimation approach is now to _intentionally bias_ this least squares solution, by introducing additional objective functions quantifying the prediction error along with regularization on \(g\). This intentional biasing exploits the bias-variance trade-off from system identification [32], leading to reduced overfitting in the estimation procedure. With the same notation and set-up as in Section III-B, at time \(t\) we solve the convex optimization problem \[\min_{\hat{d}(t),\hat{g}(t),g} \|y(t)-\hat{y}(t)\|_{2}^{2}+\lambda_{1}\|(I-\mathcal{Q})g\|_{2}^{ 2}+\lambda_{2}\|g\|_{2}\] (8) s.t. \[\begin{bmatrix}U_{\text{p}}\\ D_{\text{p}}\\ Y_{\text{p}}\\ U_{f}\\ D_{f}\\ Y_{f}\end{bmatrix}g=\begin{bmatrix}u_{\text{ini}}\\ \hat{d}_{\text{ini}}\\ y_{\text{ini}}\\ u(t)\\ \hat{d}(t)\\ \hat{y}(t)\end{bmatrix},\] where \(\lambda_{1},\lambda_{2}\geq 0\) are tuning parameters. The problem (8) combines the prediction and estimation steps from (7) into one formulation, jointly generating the output prediction \(\hat{y}(t)\) and the disturbance estimate \(\hat{d}(t)\). The first objective function term attempts to match the prediction \(\hat{y}(t)\) to the measured output \(y(t)\). 
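The roles of the penalty weights are discussed next; first, the following sketch shows one way problem (8) could be posed with an off-the-shelf convex-optimization modelling layer. CVXPY is assumed here purely for illustration, and the scalar-disturbance, scalar-output shapes and the helper signature are assumptions, not taken from the paper.

```python
import cvxpy as cp
import numpy as np

def odde_step(blocks, Q, u_ini, d_ini, y_ini, u_t, y_meas, lam1, lam2):
    """Solve one instance of problem (8) for the scalar-disturbance, scalar-output case.
    blocks = (U_p, D_p, Y_p, U_f, D_f, Y_f); Q = pinv(H_red) @ H_red."""
    U_p, D_p, Y_p, U_f, D_f, Y_f = blocks
    n_g = U_p.shape[1]
    g = cp.Variable(n_g)
    d_hat, y_hat = cp.Variable(1), cp.Variable(1)
    objective = (cp.sum_squares(y_meas - y_hat)
                 + lam1 * cp.sum_squares((np.eye(n_g) - Q) @ g)
                 + lam2 * cp.norm(g, 2))
    constraints = [U_p @ g == u_ini, D_p @ g == d_ini, Y_p @ g == y_ini,
                   U_f @ g == u_t, D_f @ g == d_hat, Y_f @ g == y_hat]
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return float(d_hat.value[0]), float(y_hat.value[0])
```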
Increasing \(\lambda_{1}\) encourages a least-squares solution for \(g\), similar to that used in (7), while increasing \(\lambda_{2}\) regularizes the solution; this reduces overfitting [32] and improves estimation robustness for noisy measurements and non-LTI dynamics. While a theoretical estimation error analysis for (8) is outside the scope of this article, the approach is strongly justified by recent advances in regularized data-driven control [33], and performance will be extensively tested in Section IV. ### _Specialization to Area-Based Fast Frequency Control using Inverter-Based Resources_ We now describe the adaptation of our general data-driven disturbance estimation methods to the fast frequency control architecture described in Section II. Consider a large interconnected power system which is divided into several small LCAs. Each LCA has local IBR's that can be re-dispatched by the operator, subject to their real-time capacity limits. Since each LCA is geographically small, the effect of a power imbalance within the LCA on the frequency is approximately independent of the specific nodal location of the imbalance within the LCA. Therefore, it is assumed that power disturbances and generation are aggregate, and effectively lumped at a single bus. Put differently, disturbance and control signals enter through the same channel, and thus \(B=B_{d}\in\mathbb{R}^{n\times 1}\) in (2). The following selections are made for inputs and outputs: the measurement \(y(t)=\Delta\omega(t)\in\mathbb{R}\) is a single local measurement of frequency deviation, and the disturbance \(d(t)=\Delta P_{\text{u}}\in\mathbb{R}\) models aggregate unmeasured generation-load imbalance within the LCA. The input \(u(t)\) to the system consists of the measured tie-line flow \(\Delta P_{\text{tie}}(t)\) out of the LCA, as well as the sum of all IBR power set-points \(\Delta P_{\text{ibr}}(t)\). Historical data must be used to build the Hankel matrices used in both estimators. As the control and disturbance channels are lumped, during the collection of historical data, the sum of IBR set-point changes, exogenous load/generation changes, and inter-LCA tie-line flow changes must be recorded. Further discussion of options for data collection is deferred to Section IV-A. As a result of the above, the estimator (7) simplifies to \[\Delta\hat{f}(t) =\mathcal{P}\cdot\operatorname{col}(\Delta v_{\text{ini}},\Delta \hat{f}_{\text{ini}},\Delta v(t)) \tag{9a}\] \[\Delta\hat{P}_{\text{u}}(t+1) =\Delta\hat{P}_{\text{u}}(t)-\varepsilon L(\Delta\hat{f}(t)- \Delta f(t)), \tag{9b}\] where \(\Delta v=\Delta P_{\text{ibr}}-\Delta P_{\text{tie}}-\Delta P_{\text{u}}\), is the aggregated input, \(\mathcal{P}=Y_{f}\left[\begin{smallmatrix}U_{\text{p}};Y_{\text{p}};U_{f} \end{smallmatrix}\right]^{\dagger}\), is the prediction matrix, and \(L\in\mathbb{R}\) is now a scalar. Analogously, the optimization-based estimator (8) becomes \[\min \|\Delta f(t)-\Delta\hat{f}(t)\|_{2}^{2}+\lambda_{1}\|(I-\mathcal{ Q})g\|_{2}^{2}+\lambda_{2}\|g\|_{2}\] (10) s.t. 
\[\begin{bmatrix}U_{\text{p}}\\ Y_{\text{p}}\\ U_{f}\\ Y_{f}\end{bmatrix}g=\begin{bmatrix}\Delta P_{\text{ibr,ini}}-\Delta P_{\text{tie,ini}}-\Delta\hat{P}_{\text{u,ini}}\\ \Delta f_{\text{ini}}\\ \Delta P_{\text{ibr}}(t)-\Delta P_{\text{tie}}(t)-\Delta\hat{P}_{\text{u}}(t)\\ \Delta\hat{f}(t)\end{bmatrix},\] The imbalance estimate \(\Delta\hat{P}_{\text{u}}(t)\) from either method is then used to redispatch the local IBRs in the LCA via the optimal power allocation algorithm presented in [13]. ## IV Simulation Studies We validate our designs by applying them to the three-area nonlinear test system illustrated in Figure 2. Each LCA of the test system is based on the IEEE 3-machine 9-bus system given in [1], with the interconnection parameters and active power dispatch similar to [13]. In the modified test model, two synchronous generators (SGs) in area one have been replaced with a photovoltaic (PV) array and a wind turbine (WT) plant. Similarly, one SG each in areas two and three is replaced with a PV farm. The PV array and wind turbine plant are simplified models represented by non-dispatchable converter-based units, which are parameterized using wind power and solar irradiance data from [36, 37]. To facilitate frequency control, two dispatchable IBRs have been added in each LCA. In addition, static var compensators (SVCs) and synchronous condensers have been added to areas 1 and 2/3 to support the voltage. All SGs and dispatchable IBRs in the system are set to have a 5% speed droop curve on their respective bases, with a 36 mHz primary control deadband. The pre-disturbance generation/demand in the system is approximately 800 MW. ### _Offline Data Collection and Controller Tuning_ As described in Section III, our estimators require a library of historical data generated from a _persistently exciting input_ that must be collected _before_ the online implementation of the control. Examples of common persistently exciting inputs from the literature include pseudo-random binary sequences, autoregressive moving average sequences, sums of sinusoids, and white noise [38, 39]. Among these, white-noise derivatives are most commonly used in power system identification studies, such as measured ambient power fluctuations [40], and low-power injected probing signals such as the low-level pseudo-random noise (LLPRN) in [41] and the band-limited white noise in [42]. In terms of which sources should be actuated for this data collection, there are several theoretically-equivalent options for the purposes of this work, including (i) applying low-power probing modulations to IBRs during calm system conditions (i.e., during times of minimal unmeasured generation/load changes) and metering the resulting frequency and tie-line power changes, or (ii) holding IBR set-points constant, recording ambient load power consumption changes (or injected pseudo-random white noise that mimics such changes), and metering the resulting frequency and tie-line power changes, along with obvious variations/combinations of these. Still other possibilities, such as using historical load estimates as proxy data, are of interest, but are outside of our scope in this study. In our testing to follow, we pursue option (i); we refer the reader to Remark III.1 in [13] for a discussion on the feasibility and market incentives that make this choice viable. We now turn to the design of the low-power probing signal for IBR set-point changes. 
In this work, we modeled our probing injection signal after the LLPRN in [41], combining a sum of sinusoids and band-limited white noise. Each IBR within each LCA is commanded with the set-point changes shown in Figure 3, given by \[\Delta P_{\mathrm{ibr}}(t)=\sin(12\pi t)+w(t)\qquad\text{(in MW)}. \tag{11}\] The signal consists of a sinusoidal perturbation of 1 MW (\(1.76\times 10^{-3}\) p.u.), plus band-limited white noise \(w(t)\) with noise power of \(\approx 0.2\times 10^{-3}\) p.u.1 Footnote 1: Further investigation into excitation signal design is outside our scope, but see [43, 44] for recent theoretical results. While we stress that the choice of probing signal is not unique, with our choice of signal we are able to utilize the aggregated power input \(u=\Delta P_{\mathrm{ibr}}-\Delta P_{\mathrm{tie}}\) and output \(y=\Delta\omega\) data for each LCA recorded for only \(10\) seconds at a sampling rate of \(0.1\) seconds, which is significantly shorter than the duration of 1200 seconds for ambient data and 600 seconds for LLPRN reported in the literature [41]. Regarding the tuning parameters, we used \(T=101\) historical data points for each LCA, collected sequentially with only one LCA being excited at a time. The length of recent past data used in (9) and (10) was \(T_{\mathrm{ini}}=7\); larger values were found to produce no benefit. The controller gain \(\varepsilon\) in (9) was set via tuning to \(\varepsilon=0.2\) by starting from a small value and increasing until satisfactory performance was reached. For the penalty parameters in (10), we set \(\lambda_{1}\) to a large value of \(1\times 10^{8}\), according to the insights from Section III-C, and \(\lambda_{2}=1\times 10^{2}\) was set via tuning by gradually increasing its value until no noticeable improvement in performance was observed. ### _Simulation Scenarios_ We consider four different testing scenarios, which aim to highlight the diverse challenges that can arise in a power system, including renewable resource variability, sudden changes in load demand, and equipment failures. The scenarios are 1. response to sudden load changes of different sizes, 2. response to solar and wind farm variability, 3. response to a three-phase-to-ground fault, and 4. response after loss of a conventional generation unit. For all scenarios, we integrate our disturbance estimators into the hierarchical fast frequency control architecture proposed in [13] and compare the model-based disturbance estimator of that work against the data-driven disturbance estimators presented in this paper. We term the controller described in (9) the Linear Data-Driven Disturbance Estimator (LDDE) and that in (10) the Optimization-Based Data-Driven Disturbance Estimator (ODDE); the ODDE is the default data-driven controller presented in the figures when no other context is given. As a baseline, we compare to the response without any supplementary control scheme, where frequency support is provided only through the primary droop control action of both generators and IBRs. Fig. 2: Three-LCA test system. Fig. 3: Persistently exciting IBR set-point change for data collection phase. Additionally, we compare against the response obtained by implementing standard automatic generation control (AGC) on the three-area system in Figure 2.2 In Scenario #1, we have compared the ODDE against all the alternatives listed above, and demonstrate its performance premium relative to the LDDE. 
In the remainder of the scenarios, we focus the plots on comparing the better estimator (ODDE) against the model-based approach presented in [13]. Finally, the data collection and real-time simulation steps include measurement noise modelled as zero-mean white noise of standard deviation \(10^{-1}\) p.u. for frequency deviation and \(2\times 10^{-2}\) p.u. for inter-area power flow measurements; these represent realistic noise of the variables scaled for their typical values (e.g., [42]). Footnote 2: See [13, Remark II.4] for extensive discussion on the distinctions between the proposed approach and traditional AGC. ### _Scenario #1: Step Load Changes_ This scenario evaluates the performance of our controller in response to step load changes of two magnitudes, 14 MW and 60 MW, applied in Area 2. At \(t=10\)s, a small load change of 14 MW is applied at bus 8 in Area 2. The size of the disturbance is chosen such that the resulting frequency deviation is below the 36 mHz dead band of the generator primary control systems. The frequency response and disturbance estimate of the system are plotted in Figure 4, while the net tie-line deviations and IBR power outputs are displayed in Figure 6. For clarity in differentiating the alternative approaches, a zoomed-in frequency response plot is shown in Figure 5. Using both the model-based and data-driven disturbance estimators, the disturbance was quickly identified to originate in Area 2 and promptly corrected by adjusting the setpoints of the local IBRs, with minimal impact on other areas. Overall, the frequency was restored quickly, and the variables in the non-contingent areas returned to their pre-disturbance state due to the decentralized nature of the control scheme. The plots also demonstrate that the proposed optimization-based data-driven estimator outperforms the linear data-driven and model-based estimators in terms of a higher nadir and faster settling time for the post-contingency frequency. We believe the improved performance of the optimization-based estimator relative to the linear estimator is due to its ability to better capture the dynamics of the true underlying system in terms of behaviour and complexity. The plots in Figures 7, 8, and 9, which display the frequency response, disturbance estimate, net tie-line deviations, and IBR outputs in response to a step load change of 60 MW applied at bus 8 in Area 2 at \(t=10\)s, lead to the same conclusion about the controller's performance as the previous scenario with a 14 MW load change. This demonstrates that the proposed controller exhibits superior performance for step load changes both inside and outside the governor deadband range. In general, the results show that the robust data-driven approach presented in this study outperforms the model-based approach and other alternatives, quickly localizing and compensating for both large and small disturbances. Fig. 4: Frequency and disturbance estimate during a 14 MW load change at bus 8 in Area 2. Fig. 5: Zoomed-in frequency plot of the contingent area showing the control alternatives during a 14 MW load change at bus 8 in Area 2. Fig. 6: Tie-line deviation and active power profiles during a 14 MW load change; dashed lines in the lower plots indicate the responses under model-based estimation. ### _Scenario #2: High Renewable Resource Fluctuations_ In this scenario, we aim to demonstrate the effectiveness of our data-driven approach in the presence of renewable variability using realistic wind and solar irradiance data. 
To simulate the solar irradiance component, we use data from the Oahu solar measurement grid 1-year archive [37], containing 1-second measurements of solar irradiance. We select a slice of data from July 31, 2010 (see Figure 10). These values are used to simulate a converter-interfaced PV farm in Area 2. regularization performs the best. ### _Scenario #3: Symmetric Three-Phase Fault_ The essence of this scenario is to assess the performance of our control approach under a severe contingency such as a symmetrical three-phase line-to-ground fault. The response of the system during the fault, introduced at bus 8 in Area 2 at \(t=2\)s and cleared after \(0.1\)s, is shown in Figures 14 and 15. Note that despite the transients, the controller is able to discern that there is no net load imbalance within the area. The results indicate that the controller is able to effectively detect and respond to frequency events, and the data-driven estimator's performance is satisfactory and similar to that of the model-based estimator. ### _Scenario #4: Loss of Generator_ This scenario assesses the performance of our control approach under the loss of generator G2 in Area 2 at \(t=10\)s. When generator G2 is lost, the system experiences a disturbance, and the controllers respond to correct the resulting power imbalance. The response of the system under both the model-based and data-driven control is plotted in Figure 16. According to the findings, the data-driven controller outperforms the model-based controller, as demonstrated by its faster frequency settling time and lower overshoot. Importantly, despite the data used to design the LCA controller having been collected while G2 was online, the response indicates that the data-driven controller is effective even in the face of significant changes in power system composition and frequency response dynamics. This illustrates the robustness of the design, and provides flexibility for system operators in deciding how frequently they wish to collect new data to update the controller. Fig. 11: Frequency and disturbance estimate during high renewable resource fluctuations in multiple areas. Fig. 12: Zoomed-in frequency plot during high renewable resource fluctuations. Fig. 13: Tie-line deviation and active power profiles during high renewable resource fluctuations in multiple areas; dashed lines in the lower plots indicate the responses under model-based estimation. Fig. 14: Frequency and disturbance estimate during a three-phase fault in Area 2. ## V Conclusions We have proposed and validated through detailed simulations a robust data-driven disturbance estimator that allows us to reliably compute the real-time power imbalance in a highly nonlinear power system, in the presence of measurement noise. This data-driven estimate has been integrated into the hierarchical frequency control architecture initially proposed in [13], to provide a completely model-free approach to fast, localized frequency regulation in the power system. An important direction for future research is further investigation into the design of improved excitation input signals for data collection, and integration of this approach with transmission and distribution coordination schemes. ## Appendix A Proofs Proof of Theorem III.1: Under the stated assumptions of controllability, input data persistency of excitation of order \(T_{\rm{ini}}+1+n(\mathscr{B})\), and sufficient initialization length \(T_{\rm{ini}}\geq\ell(\mathscr{B})\), it follows from [27, Prop. 
6] that the output predictor (7a) produces precisely the same values \(\hat{y}(t)\) as the LTI system \[\hat{x}(t+1) =A\hat{x}(t)+Bu(t)+B_{d}\hat{d}(t) \tag{12}\] \[\hat{y}(t) =C\hat{x}(t)+Du(t)\] where the matrices may be taken to be the same as those in (2). The disturbance estimator (7) may therefore be expressed as (12) with (7b), which we rewrite together as \[\begin{split}\begin{bmatrix}\hat{x}(t+1)\\ \hat{d}(t+1)\end{bmatrix}&=\underbrace{\begin{bmatrix}A&B_{d}\\ 0&I_{q}\end{bmatrix}}_{:=\mathcal{A}}\begin{bmatrix}\hat{x}(t)\\ \hat{d}(t)\end{bmatrix}+\begin{bmatrix}B\\ 0\end{bmatrix}u(t)-\underbrace{\begin{bmatrix}0\\ \varepsilon L\end{bmatrix}}_{:=\varepsilon\mathcal{L}}\big(\hat{y}(t)-y(t)\big)\\ \hat{y}(t)&=\underbrace{\begin{bmatrix}C&0\end{bmatrix}}_{:=\mathcal{C}}\begin{bmatrix}\hat{x}(t)\\ \hat{d}(t)\end{bmatrix}+Du(t)\end{split}\] where \(y(t)\) is the measured output of (2). The above has the form of a Luenberger observer, and standard estimation error analysis (e.g., [45]) implies that we will have \(\hat{d}(t)\to d(t)\) exponentially, and irrespective of the initial conditions, if \[\mathcal{A}-\varepsilon\mathcal{L}\mathcal{C}=\begin{bmatrix}A&B_{d}\\ -\varepsilon LC&I_{q}\end{bmatrix}\] is Schur stable. Recall that, by assumption, \(A\) is Schur stable, so \(I_{n}-A\) is invertible; based on this define the invertible matrix \[T=\begin{bmatrix}I_{n}&(I_{n}-A)^{-1}B_{d}\\ 0&I_{q}\end{bmatrix}.\] By similarity, \(\mathcal{A}-\varepsilon\mathcal{L}\mathcal{C}\) is Schur stable if and only if \(\mathcal{M}(\varepsilon):=T^{-1}(\mathcal{A}-\varepsilon\mathcal{L}\mathcal{C})T\) is as well. Simple calculations show that \(\mathcal{M}(\varepsilon)\) evaluates to \[\mathcal{M}(\varepsilon)=\begin{bmatrix}A+\varepsilon M_{1}&\varepsilon M_{2}\\ \varepsilon M_{3}&I_{q}-\varepsilon LG(1)\end{bmatrix},\] where \(M_{1},M_{2},M_{3}\) are constant matrices and \(G(1)=C(I_{n}-A)^{-1}B_{d}\). By assumption, \(G(1)\) has full column rank and \(L=G(1)^{\dagger}\); thus, we have that \(LG(1)=I_{q}\), and the \((2,2)\) block of the above simplifies to \((1-\varepsilon)I_{q}\). Since \(A\) is Schur stable, by standard linear Lyapunov theory there exists a matrix \(P\succ 0\) such that \(A^{\mathsf{T}}PA-P\prec 0\). Defining \(\mathcal{P}=\mathrm{diag}(P,I_{q})\), straightforward calculations and a use of the Schur complement lemma show that \(\mathcal{M}(\varepsilon)^{\mathsf{T}}\mathcal{P}\mathcal{M}(\varepsilon)-\mathcal{P}\prec 0\) for all sufficiently small \(\varepsilon>0\), which establishes that \(\mathcal{M}(\varepsilon)\) is Schur stable and completes the proof.
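As an informal numerical sanity check of this argument (not part of the original proof), the sketch below draws a random Schur-stable system, sets \(L=G(1)^{\dagger}\), and evaluates the spectral radius of \(\mathcal{A}-\varepsilon\mathcal{L}\mathcal{C}\); the dimensions, random seed, and value of \(\varepsilon\) are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, p = 5, 1, 1                                  # assumed dimensions

A = rng.standard_normal((n, n))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))          # rescale so that A is Schur stable
B_d = rng.standard_normal((n, q))
C = rng.standard_normal((p, n))

G1 = C @ np.linalg.inv(np.eye(n) - A) @ B_d        # DC gain from d to y
L = np.linalg.pinv(G1)                             # estimator gain L = G(1)^dagger

def rho(eps):
    """Spectral radius of the error-dynamics matrix [[A, B_d], [-eps*L*C, I_q]]."""
    M = np.block([[A, B_d], [-eps * (L @ C), np.eye(q)]])
    return max(abs(np.linalg.eigvals(M)))

print("rho at eps = 0    :", rho(0.0))    # equals 1: the disturbance state is a pure integrator
print("rho at eps = 0.05 :", rho(0.05))   # expected to drop below 1 for small enough eps (Theorem III.1)
```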
2301.06736
Syllable Subword Tokens for Open Vocabulary Speech Recognition in Malayalam
In a hybrid automatic speech recognition (ASR) system, a pronunciation lexicon (PL) and a language model (LM) are essential to correctly retrieve spoken word sequences. Being a morphologically complex language, the vocabulary of Malayalam is so huge and it is impossible to build a PL and an LM that cover all diverse word forms. Usage of subword tokens to build PL and LM, and combining them to form words after decoding, enables the recovery of many out of vocabulary words. In this work we investigate the impact of using syllables as subword tokens instead of words in Malayalam ASR, and evaluate the relative improvement in lexicon size, model memory requirement and word error rate.
Kavya Manohar, A. R. Jayan, Rajeev Rajan
2023-01-17T07:29:47Z
http://arxiv.org/abs/2301.06736v1
# Syllable Subword Tokens for Open Vocabulary Speech Recognition in Malayalam ###### Abstract In a hybrid automatic speech recognition (ASR) system, a pronunciation lexicon (PL) and a language model (LM) are essential to correctly retrieve spoken word sequences. Being a morphologically complex language, the vocabulary of Malayalam is so huge and it is impossible to build a PL and an LM that cover all diverse word forms. Usage of subword tokens to build PL and LM, and combining them to form words after decoding, enables the recovery of many out of vocabulary words. In this work we investigate the impact of using syllables as subword tokens instead of words in Malayalam ASR, and evaluate the relative improvement in lexicon size, model memory requirement and word error rate. ## 1 Introduction Malayalam belongs to the Dravidian family of languages with high morphological complexity (Manohar et al., 2020). Productive word formation in Malayalam by agglutination, inflection, and compounding leads to very long words with phonetic and orthographic changes at morpheme boundaries. This creates a large number of low frequency words and it is practically impossible to build a pronunciation lexicon that covers all complex wordforms. A hybrid automatic speech recognition (ASR) decoder is built using an acoustic model, a language model (LM) and a pronunciation lexicon (PL). The acoustic model is a mapping between the acoustic features and the phonemes of the language (Georgescu et al., 2021). The LM is a learnt representation of word sequence probabilities. The PL is a dictionary where the pronunciation of each word or subword is described as a sequence of phonemes. These are composed into a weighted finite state transducer in a typical hybrid ASR decoder (Povey et al., 2011). Words not covered in the LM and the PL are called the out of vocabulary (OOV) words and they can not be recovered by the ASR decoder (Braun et al., 2021; Smit et al., 2021). However the use of subword tokens in an ASR for morphologically complex languages can recover a portion of OOV words by combining subword tokens to words. Figure 1 illustrates a hybrid open vocabulary ASR system. Special marker symbol '+' at subword boundaries enables the recovery of words. Subword tokenization is carried out either through linguistically motivated rule based approaches or language independent data-driven approaches (Smit et al., 2021). However, there is no single algorithm that works fine for all languages. Even though the usage of subword tokenization for open vocabulary ASR has been thoroughly investigated (Hirsimaki et al., 2006; Choueiter et al., 2006; Wang et al., 2020; Zhou et al., 2021), there has not been much exploration in this regard in Malayalam language. Figure 1: An open vocabulary hybrid ASR system, with subword based LM and PL. ## 2 Related Works Morpheme based subword tokenization has been proposed for ASR in many morphologically complex languages including Finnish, Arabic and Swedish (Choueiter et al., 2006; Smit et al., 2021). Syllable like units called vowel segments have been proposed to improve the ASR performance of Sanskrit, which is an inflectional language (Adiga et al., 2021). Data driven methods of tokenization using byte pair encoding (BPE) and Morfessor has been employed in the development of bilingual Hindi-Marathi ASR for improved performance and reduced complexity (Yadavalli et al., 2022). 
The sole work on the usage of subword tokens for Malayalam ASR (Manghat et al., 2022) applies linguistic information to a data-driven method to improve the word error rate (WER). In the current work, we investigate the improvement that can be brought in by linguistically motivated syllable subword tokens to address the issue of OOV recovery in Malayalam ASR. We evaluate the syllable subword ASR in terms of the WER, the lexicon size and the model memory requirement, and compare it with the conventional word based PL and LM. We plan to extend this work to analyse the impact of other data-driven methods for subword tokenization in future. ## 3 Tokenization Algorithm The characters in the Malayalam script can be classified as: (i) vowels, (ii) vowel signs, (iii) consonants, (iv) special consonants (_anuswara_, _visarga_ and _chillu_) and (v) the multi-functional character _virama_. A conjunct in Malayalam is a sequence of consonants with a _virama_ between them. The writing system of Malayalam is alphasyllabary in nature (Bright, 1999), which means that each standalone pronunciation unit is a syllable. If words are randomly split during tokenization, as in **SOPHIA** /soofia/ being tokenized as **SOP** and **HIA**, the pronunciation cannot be segmented in a valid way. Since syllable tokens are valid pronunciation units, they can be described as sequences of phonemes in the PL. A syllable in Malayalam can be a consonant or a conjunct, followed by an optional vowel sign. A standalone vowel is also a syllable, which occurs only at word beginnings. Whenever a special consonant appears, it becomes the syllable-ending consonant (Nair, 2016). These linguistic rules for syllable tokenization have been computationally implemented, as in Algorithm 1, by Manohar et al. (2022) and made available as part of the Mlphon Python library1. Footnote 1: [https://pypi.org/project/mlphon/](https://pypi.org/project/mlphon/) ## 4 Datasets We use publicly available, openly licensed Malayalam read speech datasets in our experiments. Every speech recording is associated with a textual transcript in the Malayalam script. As shown in Table 1, we divide the available speech into train and test sets, ensuring that speakers and speech transcripts do not overlap. The train datasets are combined to get 1125 minutes (\(\approx\) 19 hours) of speech for acoustic model training. T1, T2 and T3 are the datasets used for testing. Except for T3, all datasets are studio recorded read speech of formal sentences belonging to the same domain. T3 consists mostly of conversational sentences, recorded by volunteers in natural home environments, making it an out-of-domain test set. To create the LM, we use the sentences from the speech transcripts and combine them with the curated text corpus published by SMC (Computing, 2020), which amounts to 205k unique sentences. From this, every sentence that appears in our test speech dataset is explicitly removed to prevent overfitting. ## 5 Methodology To develop a hybrid ASR system, we need to build an acoustic model, an LM and a PL. The acoustic model is set as a common component in both the word and syllable token based ASR. The LM is a statistical ngram model of words or syllables. To study the impact of lexicon size, we create word and syllable token based PLs of different sizes. Each of these components is explained in the following subsections. ### Acoustic Model The acoustic model is trained using time delay neural networks (TDNNs) with the Kaldi toolkit (Povey et al., 2011). 
Acoustic features used in TDNN training are: (i) 40-dimensional high-resolution MFCCs extracted from frames of 25 ms length and 10 ms shift and (ii) 100-dimensional i-vectors computed from chunks of 150 consecutive frames (Saon et al., 2013). Three consecutive MFCC vectors and the i-vector corresponding to a chunk are concatenated, resulting in a 220-dimensional feature vector for a frame (Georgescu et al., 2021). This acoustic model is trained on a single NVIDIA Tesla T4 GPU. ### Language Models A statistical view of how words are combined to form valid sentences is provided by the ngram model. Word sequence probabilities could be computed by analysing a large volume of text. In a 2-gram, a history of one previous word is required. We build ngram language models of orders n=2, 3 and 4 on the text corpus described in section 4 using SRILM toolkit (Stolcke, 2002). Building LM using word tokens is straightforward, as _space_ is considered as the default delimiter between words. However to build LM using syllable tokens instead of words, we need to syllabify the text corpus. Using Mlphon Python library, the text corpus is tokenized to syllables (Manohar et al., 2022). In order to identify syllables that occur at word medial positions, we have used '+' as a marker symbol. In this approach, reconstruction of words is straightforward, as the marker indicates the positions for joining the following syllable. In the syllabified text, _space_ is the delimiter between syllable tokens. Excerpts from the text corpora used for training word and syllable token based LM are shown in Table 2. ### Pronunciation Lexicons \begin{table} \begin{tabular}{l l l l l l} \hline \hline **Name** & **Corpus** & **\#Speakers** & **\#Utterances** & **Duration** & **Environment** \\ & & & & (minutes) & \\ \hline Train 1 & (Baby et al., 2016) & 2 & 8601 & 838 & Studio \\ Train 2 & (He et al., 2020) & 37 & 3346 & 287 & Studio \\ \hline T1 & (Prahallad et al., 2012) & 1 & 1000 & 98 & Studio \\ T2 & (He et al., 2020) & 7 & 679 & 48 & Studio \\ T3 & (Computing, 2020) & 75 & 1541 & 98 & Natural, Noisy \\ \hline \hline \end{tabular} \end{table} Table 1: Details of Speech datasets used in our experiments. 
Table 2: Excerpts from the text corpora used for training the word and syllable token based LMs. Sample entries in word PL and corresponding syllable PL are described in Table 3. To begin with, we create a word based PL that contains all unique words in the train audio transcripts, which amounts to 25604 entries. This first lexicon is referred to as \(PL1_{W}\). To study the impact of lexicon size on OOV rate and corresponding changes in WER, we expand \(PL1_{W}\). New words are added to the lexicon based on their frequencies in the LM training corpus. When words in this corpus are ranked in the order of their frequencies, we obtain a word frequency profile, as shown in Figure 2. It can be seen that a huge portion of the corpus is covered by filling the PL with high frequency words. 
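To make the marker convention of Section 5.2 and the lexicon derivation of Section 5.3 concrete, here is a small illustrative sketch. The function `syllabify` stands in for the rule-based syllabifier of the Mlphon library, whose actual interface may differ; all helper names here are hypothetical.

```python
# A minimal sketch of the '+' marker convention (Section 5.2) and the derivation of a
# syllable lexicon from a word lexicon (Section 5.3). `syllabify(word)` is assumed to
# return the list of syllables of a Malayalam word, as produced by Mlphon.

def mark_syllables(word, syllabify):
    """Syllabify a word and append '+' to every non-final syllable, so that the marker
    indicates where the following syllable must be joined during word reconstruction."""
    syls = syllabify(word)
    return [s + "+" if i < len(syls) - 1 else s for i, s in enumerate(syls)]

def syllable_corpus_line(sentence, syllabify):
    """Rewrite one corpus sentence as space-separated syllable tokens for LM training."""
    tokens = []
    for word in sentence.split():
        tokens.extend(mark_syllables(word, syllabify))
    return " ".join(tokens)

def syllable_lexicon(word_lexicon, syllabify):
    """Unique syllable tokens (with markers) derived from a word lexicon PLi_W."""
    return sorted({tok for w in word_lexicon for tok in mark_syllables(w, syllabify)})

def join_decoded(tokens):
    """Reconstruct words from a decoded syllable sequence: a trailing '+' joins a token
    to the one that follows it."""
    words, current = [], ""
    for tok in tokens:
        if tok.endswith("+"):
            current += tok[:-1]
        else:
            words.append(current + tok)
            current = ""
    return " ".join(words)
```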
We add words with at least 5, 4, and 3 occurrences to \(PL1_{W}\) to obtain the pronunciation lexicons \(PL2_{W}\), \(PL3_{W}\) and \(PL4_{W}\) respectively. Subword lexicons \(PLi_{S}\), with syllables as entries, are derived from \(PLi_{W}\), where \(i=1,2,3,4\). The unique list of syllable tokens from every word PL is obtained to create the corresponding syllable PL. The numbers of entries in the syllable and word PLs are presented in Table 4. The syllable tokens corresponding to each word in \(PLi_{W}\) are created with the marker symbol '+', as described in section 5.2. The pronunciations of the word and syllable tokens in \(PLi_{W}\) and \(PLi_{S}\) are derived using the Mlphon Python library (Manohar et al., 2022). ## 6 Experimental Results Combining the common acoustic model with the word LM, we build four different word based ASR systems by choosing one of \(PLi_{W}\). The percentage of OOV words in the different test datasets decreases with increasing vocabulary size, as expected, and is illustrated in Figure 3. Based on this, T1 can be considered a low OOV test dataset, T2 a medium OOV test dataset and T3 a large OOV test dataset. We repeat the above experiments with the LM training corpus and lexicons in syllabified form. Lexicons with syllables as entries are significantly smaller than word based lexicons, as indicated in Table 4, and are able to decode speech with improved WER on test datasets with medium to large OOV word rates. WER is computed by equation (1), based on the number of words inserted (I), deleted (D) and substituted (S) in the predicted transcript when compared to the number of words (N) in the ground truth transcript. \[WER=\frac{I+D+S}{N} \tag{1}\] We report the WER on the different test datasets in Figure 4. On the test set T1, where OOV rates are very low (less than 6%), word based models perform well irrespective of ngram order, the best being 9.8%, while the best WER given by syllable models on T1 is only 12%. This shows that syllable tokens are not advantageous in terms of WER in low OOV scenarios. The WER is generally high, as expected, on the out of domain test set T3, where almost half the words are OOV and the recording environment is drastically different from the train datasets. \begin{table} \begin{tabular}{c c|c c} \hline **Lexicon** & **Size** & **Lexicon** & **Size** \\ \hline \(PL1_{W}\) & 25604 & \(PL1_{S}\) & 3524 \\ \(PL2_{W}\) & 53240 & \(PL2_{S}\) & 5247 \\ \(PL3_{W}\) & 62483 & \(PL3_{S}\) & 5643 \\ \(PL4_{W}\) & 79950 & \(PL4_{S}\) & 6351 \\ \hline \end{tabular} \end{table} Table 4: The size of lexicons used in word and syllable based experiments Figure 3: Lexicon size and OOV rate of test datasets Figure 2: Logarithmic plot of word rank versus word frequency in the text corpus. Comparing the best WERs, syllable based lexicons show an improvement of 10% on T2 and 7% on T3 over the corresponding word models. Since the previously published work on subword ASR for Malayalam (Manghat et al., 2022) was tested on a private dataset, a comparison of results is not meaningful and hence not attempted. ### Ngram order and WER For the word PL, increasing the ngram order imparts only nominal improvement in WER. This could be attributed to the sparse distribution of words due to the morphological complexity of Malayalam. The WER of the syllable PL does not show an improvement over the word PL for an ngram order of 2. But the WER of the syllable PL reduces drastically, by 12% on T2 and by 6% on T3, when the ngram order is increased from 2 to 3, and then it stabilizes. 
The mean word length in our test datasets is 3.2 syllables, which explains why the greatest improvement occurs at this ngram transition. ### Ngram order and Model Size To study the model memory requirement, we compute the size of the weighted FST graph (_HCLG.fst_) used for decoding. The model sizes corresponding to the largest word and syllable lexicons \(PL4_{W}\) and \(PL4_{S}\), where the WERs are the best, are presented in Figure 5. The memory requirement is generally high for word based models, and it increases with the ngram order. The syllable models, with much lower memory requirements at smaller ngram orders, show a rapid rise in model size with increasing ngram order. There is a trade-off between the model size and the WER when choosing the ngram order. For an ngram order of 3, the syllable token ASR, with half the model size, performs better in WER by 6% than the best word based model, as illustrated in Figures 4 and 5. ### Lexicon Size and WER There is a substantial WER improvement when switching from \(PL1_{W}\) to \(PL2_{W}\) and \(PL1_{S}\) to \(PL2_{S}\), where the reduction in OOVs is the largest. The improvement in WER with subsequent lexicon expansions is nominal, as the added entries in the lexicons are low frequency words. ## 7 Conclusions The comprehensive evaluation of syllables as subword tokens for building an open vocabulary hybrid ASR model is a pioneering attempt of its kind for the Malayalam language. The proposed syllable based LM and PL in Malayalam demonstrate remarkable improvement in WER on the medium and large OOV test sets, by 10% and 7% respectively. If the test datasets are free from OOV words, word based models outperform syllable models. Furthermore, syllable models with about half the model size have better WER than the corresponding word based ones, proving the effectiveness of syllable token based subword modelling on a morphologically complex language like Malayalam. Figure 4: WER on different test datasets Figure 5: Model Size for word and syllable ASR. The optimal choice of ngram order, based on the trade-off between model size and WER, depends on the subword tokenization technique. This study opens scope for investigating the impact of other subword tokenization methods for Malayalam ASR.
2302.07178
Deformations and abelian extensions of compatible pre-Lie algebras
In this paper, we first give the notation of a compatible pre-Lie algebra and its representation. We study the relation between compatible Lie algebras and compatible pre-Lie algebras. We also construct a new bidifferential graded Lie algebra whose Maurer-Cartan elements are compatible pre-Lie structures. We give the bidifferential graded Lie algebra which controls deformations of a compatible pre-Lie algebra. Then, we introduce a cohomology of a compatible pre-Lie algebra with coefficients in itself. We study infinitesimal deformations of compatible pre-Lie algebras and show that equivalent infinitesimal deformations are in the same second cohomology group. We further give the notion of a Nijenhuis operator on a compatible pre-Lie algebra. We study formal deformations of compatible pre-Lie algebras. If the second cohomology group $\huaH^2(\g;\g)$ is trivial, then the compatible pre-Lie algebra is rigid. Finally, we give a cohomology of a compatible pre-Lie algebra with coefficients in arbitrary representation and study abelian extensions of compatible pre-Lie algebras using this cohomology. We show that abelian extensions are classified by the second cohomology group.
Shanshan Liu, Liangyun Chen
2023-02-07T09:37:45Z
http://arxiv.org/abs/2302.07178v1
# Deformations and abelian extensions of compatible pre-Lie algebras ###### Abstract. In this paper, we first give the notion of a compatible pre-Lie algebra and its representation. We study the relation between compatible Lie algebras and compatible pre-Lie algebras. We also construct a new bidifferential graded Lie algebra whose Maurer-Cartan elements are compatible pre-Lie structures. We give the bidifferential graded Lie algebra which controls deformations of a compatible pre-Lie algebra. Then, we introduce a cohomology of a compatible pre-Lie algebra with coefficients in itself. We study infinitesimal deformations of compatible pre-Lie algebras and show that equivalent infinitesimal deformations are in the same second cohomology group. We further give the notion of a Nijenhuis operator on a compatible pre-Lie algebra. We study formal deformations of compatible pre-Lie algebras. If the second cohomology group \(\mathcal{H}^{2}(\mathfrak{g};\mathfrak{g})\) is trivial, then the compatible pre-Lie algebra is rigid. Finally, we give a cohomology of a compatible pre-Lie algebra with coefficients in an arbitrary representation and study abelian extensions of compatible pre-Lie algebras using this cohomology. We show that abelian extensions are classified by the second cohomology group. Key words and phrases: compatible pre-Lie algebra, Maurer-Cartan element, cohomology, deformation, abelian extension. _MSC 2020_: 17A36, 17A40, 17B10, 17B40, 17B60, 17B63, 17D25 ###### Contents * 1 Introduction * 2 Maurer-Cartan characterizations of compatible pre-Lie algebras * 3 Formal deformations of compatible pre-Lie algebras * 3.1 Cohomologies of compatible pre-Lie algebras * 3.2 Infinitesimal deformations of compatible pre-Lie algebras * 3.3 Formal deformations of compatible pre-Lie algebras * 4 Abelian extensions of compatible pre-Lie algebras
## 1. Introduction Given a representation \((V,\rho,\mu,\tilde{\rho},\tilde{\mu})\) of a compatible pre-Lie algebra \((\mathfrak{g},\pi_{1},\pi_{2})\), the pair \((\pi_{1}+\rho+\mu,\pi_{2}+\tilde{\rho}+\tilde{\mu})\) is a Maurer-Cartan element of the bidifferential graded Lie algebra \((C^{*}(\mathfrak{g}\oplus V;\mathfrak{g}\oplus V),[\cdot,\cdot]^{MN},\mathrm{d}_{1}=0,\mathrm{d}_{2}=0)\) (see Proposition 4.4 below). ## 2. Maurer-Cartan characterizations of compatible pre-Lie algebras In this section, first, we give the notion of a compatible pre-Lie algebra and its representation. Then, we study the relation between compatible Lie algebras and compatible pre-Lie algebras. Finally, we construct a new bidifferential graded Lie algebra whose Maurer-Cartan elements are compatible pre-Lie structures. We give the bidifferential graded Lie algebra which controls deformations of a compatible pre-Lie algebra.
**Definition 2.1**.: _([1]) \(A\)_ **pre-Lie algebra** (\(\mathfrak{g},\cdot\)) _is a vector space \(\mathfrak{g}\) equipped with a bilinear product \(\cdot:\mathfrak{g}\otimes\mathfrak{g}\longrightarrow\mathfrak{g}\) such that for all \(x,y,z\in\mathfrak{g}\), the following equality is satisfied:_ \[(x\cdot y)\cdot z-x\cdot(y\cdot z)=(y\cdot x)\cdot z-y\cdot(x\cdot z).\] Let (\(\mathfrak{g},\cdot\)) be a pre-Lie algebra. The commutator \([x,y]_{C}=x\cdot y-y\cdot x\) gives a Lie algebra (\(\mathfrak{g},[\cdot,\cdot]_{C}\)), which is denoted by \(\mathfrak{g}^{C}\) and called the **sub-adjacent Lie algebra** of (\(\mathfrak{g},\cdot\)). **Definition 2.2**.: _([2]) \(A\)_ **representation** _of a pre-Lie algebra (\(\mathfrak{g},\cdot\)) on a vector space \(V\) consists of a pair \((\rho,\mu)\), where \(\rho:\mathfrak{g}\longrightarrow\mathfrak{gl}(V)\) is a representation of the sub-adjacent Lie algebra \(\mathfrak{g}^{C}\) on \(V\), and \(\mu:\mathfrak{g}\longrightarrow\mathfrak{gl}(V)\) is a linear map, such that for all \(x,y\in\mathfrak{g}\):_ \[\mu(y)\circ\mu(x)-\mu(x\cdot y)=\mu(y)\circ\rho(x)-\rho(x)\circ\mu(y).\] We denote a representation of a pre-Lie algebra (\(\mathfrak{g},\cdot\)) by \((V,\rho,\mu)\). Furthermore, let \(L,R:\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{g})\) be linear maps, where \(L_{x}y=x\cdot y,R_{x}y=y\cdot x\). Then (\(\mathfrak{g},L,R\)) is also a representation, which is called the regular representation. A permutation \(\sigma\in\mathbb{S}_{n}\) is called an \((i,n-i)\)-unshuffle if \(\sigma(1)<\cdots<\sigma(i)\) and \(\sigma(i+1)<\cdots<\sigma(n)\). If \(i=0\) and \(i=n\), we assume \(\sigma=\mathrm{Id}\). The set of all \((i,n-i)\)-unshuffles will be denoted by \(\mathbb{S}_{(i,n-i)}\). The notion of an \((i_{1},\ldots,i_{k})\)-unshuffle and the set \(\mathbb{S}_{(i_{1},\ldots,i_{k})}\) are defined similarly. Let \(\mathfrak{g}\) be a vector space. We denote \(C^{n}(\mathfrak{g};\mathfrak{g})=\mathrm{Hom}(\Lambda^{n-1}\mathfrak{g} \otimes\mathfrak{g},\mathfrak{g})\) and consider the graded vector space \(C^{*}(\mathfrak{g};\mathfrak{g})=\oplus_{n=1}^{+\infty}C^{n}(\mathfrak{g}; \mathfrak{g})=\oplus_{n=1}^{+\infty}\mathrm{Hom}(\Lambda^{n-1}\mathfrak{g} \otimes\mathfrak{g},\mathfrak{g})\). It was shown in [4, 18, 23] that \(C^{*}(\mathfrak{g};\mathfrak{g})\) equipped with the Matsushima-Nijenhuis bracket \[[P,Q]^{MN}=P\circ Q-(-1)^{pq}Q\circ P,\quad\forall P\in C^{p+1}(\mathfrak{g}; \mathfrak{g}),Q\in C^{q+1}(\mathfrak{g};\mathfrak{g})\] ia a graded Lie algebra, where \(P\circ Q\in C^{p+q+1}(\mathfrak{g};\mathfrak{g})\) is defined by \[P\circ Q(x_{1},\ldots,x_{p+q+1})\] \[= \sum_{\sigma\in\mathbb{S}(q,1,p-1)}\mathrm{sgn}(\sigma)P(Q(x_{ \sigma(1)},\ldots,x_{\sigma(q)},x_{\sigma(q+1)}),x_{\sigma(q+2)},\ldots,x_{ \sigma(p+q)},x_{p+q+1})\] \[+(-1)^{pq}\sum_{\sigma\in\mathbb{S}(p,q)}\mathrm{sgn}(\sigma)P(x_{ \sigma(1)},\ldots,x_{\sigma(p)},Q(x_{\sigma(p+1)},\ldots,x_{\sigma(p+q)},x_{ p+q+1})).\] In particular, \(\pi\in\mathrm{Hom}(\otimes^{2}\mathfrak{g},\mathfrak{g})\) defines a pre-Lie algebra if and only if \([\pi,\pi]^{MN}=0\). If \(\pi\) is a pre-Lie algebra structure, then \(d_{\pi}:=[\pi,\cdot]^{MN}\) is a graded derivation of the graded Lie algebra (\(C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN}\)) satisfying \(d_{\pi}\circ d_{\pi}=0\), so that \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},d_{\pi})\) becomes a differential graded Lie algebra. 
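A quick numerical illustration of Definition 2.1 (not taken from the paper): any associative product has identically vanishing associator, so it is in particular left-symmetric and hence pre-Lie. The minimal sketch below, assuming NumPy and using matrix multiplication as the product, checks the identity on random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def prod(x, y):
    # Product used for this check: matrix multiplication (associative, hence pre-Lie).
    return x @ y

def associator(x, y, z):
    # (x . y) . z - x . (y . z)
    return prod(prod(x, y), z) - prod(x, prod(y, z))

def left_symmetry_defect(x, y, z):
    # Definition 2.1 asks the associator to be symmetric in its first two arguments.
    return associator(x, y, z) - associator(y, x, z)

x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))
print(np.allclose(left_symmetry_defect(x, y, z), 0))  # expected: True
```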
**Definition 2.3**.: \(A\) **compatible pre-Lie algebra** _is a triple (\(\mathfrak{g},\cdot,\ast\)), where \(\mathfrak{g}\) is a vector space, "\(\cdot\)" and "\(\ast\)" are pre-Lie structures on \(\mathfrak{g}\), such that for all \(x,y,z\in\mathfrak{g}\), the following equality is satisfied:_ 1. \((x\ast y)\cdot z+(x\cdot y)\ast z-x\cdot(y\ast z)-x\ast(y\cdot z)-(y\ast x) \cdot z-(y\cdot x)\ast z+y\cdot(x\ast z)+y\ast(x\cdot z)=0\) **Proposition 2.4**.: _A triple \((\mathfrak{g},\cdot,\ast)\) is a compatible pre-Lie algebra if and only if "\(\cdot\)" and "\(\ast\)" are pre-Lie structures on \(\mathfrak{g}\), such that for all \(k_{1},k_{2}\in\mathbb{K}\), the following bilinear operation_ \[x\diamond y=k_{1}x\cdot y+k_{2}x\ast y,\quad\forall x,y\in\mathfrak{g}. \tag{2}\] _defines a pre-Lie algebra structure on \(\mathfrak{g}\)._ Proof.: It is straightforward. **Definition 2.5**.: _Let \((\mathfrak{g},\cdot,\ast)\) and \((\mathfrak{g}^{\prime},\cdot^{\prime},\ast^{\prime})\) be two compatible pre-Lie algebras. A_ **homomorphism**_\(\varphi:(\mathfrak{g},\cdot,\ast)\longrightarrow(\mathfrak{g}^{\prime}, \cdot^{\prime},\ast^{\prime})\) is both a pre-Lie homomorphism from \((\mathfrak{g},\cdot)\) to \((\mathfrak{g}^{\prime},\cdot^{\prime})\) and a pre-Lie homomorphism from \((\mathfrak{g},\ast)\) to \((\mathfrak{g}^{\prime},\ast^{\prime})\)._ **Definition 2.6**.: \(A\) **representation** _of a compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\) on a vector space \(V\) consists of a quadruple \((\rho,\mu,\tilde{\rho},\tilde{\mu})\), where \((V,\rho,\mu)\) is a representation of the pre Lie algebra \((\mathfrak{g},\cdot)\) and \((V,\tilde{\rho},\tilde{\mu})\) is a representation of the pre Lie algebra \((\mathfrak{g},\ast)\), such that for all \(x,y\in\mathfrak{g}\):_ \[\rho(x\ast y)+\tilde{\rho}(x\cdot y)-\rho(x)\tilde{\rho}(y)- \tilde{\rho}(x)\rho(y) = \rho(y\ast x)+\tilde{\rho}(y\cdot x)-\rho(y)\tilde{\rho}(x)- \tilde{\rho}(y)\rho(x), \tag{4}\] \[\mu(y)\tilde{\rho}(x)-\rho(x)\tilde{\mu}(y)-\mu(y)\tilde{\mu}(x) +\mu(x\ast y) = -\tilde{\mu}(y)\rho(x)+\tilde{\rho}(x)\mu(y)+\tilde{\mu}(y)\mu(x) -\tilde{\mu}(x\cdot y). \tag{3}\] We denote a representation of a compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\) by \((V,\rho,\mu,\tilde{\rho},\tilde{\mu})\). Furthermore, let \(L,R,\tilde{L},\tilde{R}:\mathfrak{g}\longrightarrow\mathfrak{gl}(\mathfrak{g})\) be linear maps, where \(L_{x}y=x\cdot y,R_{x}y=y\cdot x,\tilde{L}_{x}y=x\ast y,\tilde{R}_{x}y=y\ast x\). Then \((\mathfrak{g},L,R,\tilde{L},\tilde{R})\) is also a representation, which is called the **regular representation**. 
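As a concrete, if simple, illustration of Definition 2.3 and Proposition 2.4 (again an illustration, not from the paper): on a matrix algebra the products \(x\cdot y=xy\) and \(x\ast y=xay\), for a fixed matrix \(a\), are both pre-Lie, and any linear combination \(k_{1}x\cdot y+k_{2}x\ast y=x(k_{1}\mathrm{Id}+k_{2}a)y\) is again associative, hence pre-Lie; so the two structures are compatible. The sketch below checks condition (1) numerically under these assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((3, 3))   # fixed matrix defining the second product

def dot(x, y):
    return x @ y                  # x . y = xy

def star(x, y):
    return x @ a @ y              # x * y = x a y

def compatibility_defect(x, y, z):
    # Left-hand side of condition (1) in Definition 2.3.
    return (dot(star(x, y), z) + star(dot(x, y), z)
            - dot(x, star(y, z)) - star(x, dot(y, z))
            - dot(star(y, x), z) - star(dot(y, x), z)
            + dot(y, star(x, z)) + star(y, dot(x, z)))

x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))
print(np.allclose(compatibility_defect(x, y, z), 0))  # expected: True
```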
We define two bilinear operations \(\cdot_{\mathfrak{g}\oplus V}:\otimes^{2}(\mathfrak{g}\oplus V)\ \to\ ( \mathfrak{g}\oplus V)\) and \(\ast_{\mathfrak{g}\oplus V}:\otimes^{2}(\mathfrak{g}\oplus V)\to(\mathfrak{g }\oplus V)\) respectively by \[(x+u)\cdot_{\mathfrak{g}\oplus V}(y+v) = x\cdot y+\rho(x)(v)+\mu(y)(u),\quad\forall x,y\in\mathfrak{g},u,v\in V,\] \[(x+u)\ast_{\mathfrak{g}\oplus V}(y+v) = x\ast y+\tilde{\rho}(x)(v)+\tilde{\mu}(y)(u),\quad\forall x,y \in\mathfrak{g},u,v\in V.\] **Proposition 2.7**.: _With the above notation, \((\mathfrak{g}\oplus V,\ast_{\mathfrak{g}\oplus V})\) is a compatible pre-Lie algebra, which is denoted by \(\mathfrak{g}\ltimes_{(\rho,\mu,\tilde{\rho},\tilde{\mu})}V\) and called the_ **semi-direct product** _of the compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\) and the representation \((V,\rho,\mu,\tilde{\rho},\tilde{\mu})\)._ Proof.: Obviously, \((\mathfrak{g}\oplus V,\ast_{\mathfrak{g}\oplus V})\) and \((\mathfrak{g}\oplus V,\ast_{\mathfrak{g}\oplus V})\) are pre-Lie algebras. For all \(x,y,z\in\mathfrak{g},u,v,w\in V\), by (1), (3) and (4), we have \[((x+u)\ast_{\mathfrak{g}\oplus V}(y+v))\cdot_{\mathfrak{g}\oplus V }(z+w)+((x+u)\cdot_{\mathfrak{g}\oplus V}(y+v))\ast_{\mathfrak{g}\oplus V}(z+w)\] \[-(x+u)\cdot_{\mathfrak{g}\oplus V}((y+v)\ast_{\mathfrak{g}\oplus V }(z+w))-(x+u)\ast_{\mathfrak{g}\oplus V}((y+v)\cdot_{\mathfrak{g}\oplus V}(z+ w))\] \[-((y+v)\ast_{\mathfrak{g}\oplus V}(x+u))\cdot_{\mathfrak{g}\oplus V }(z+w)-((y+v)\cdot_{\mathfrak{g}\oplus V}(x+u))\ast_{\mathfrak{g}\oplus V}(z+w)\] \[+(y+v)\cdot_{\mathfrak{g}\oplus V}((x+u)\ast_{\mathfrak{g}\oplus V }(z+w))+(y+v)\ast_{\mathfrak{g}\oplus V}((x+u)\cdot_{\mathfrak{g}\oplus V}(z+ w))\] \[= (x\ast y)\cdot z+\rho(x\ast y)w+\mu(z)\tilde{\rho}(x)v+\mu(z) \tilde{\mu}(y)u+(x\cdot y)\ast z+\tilde{\rho}(x\cdot y)w+\tilde{\mu}(z)\rho(x)v +\tilde{\mu}(z)\mu(y)u\] \[-x\cdot(y\ast z)-\rho(x)\tilde{\rho}(y)w-\rho(x)\tilde{\mu}(z)v- \mu(y\ast z)u-x\ast(y\cdot z)-\tilde{\rho}(x)\rho(y)w-\tilde{\rho}(x)\mu(z)v -\tilde{\mu}(y\cdot z)u\] \[-(y\ast x)\cdot z-\rho(y\ast x)w-\mu(z)\tilde{\rho}(y)u-\mu(z) \tilde{\mu}(x)v-(y\cdot x)\ast z-\tilde{\rho}(y\cdot x)w-\tilde{\mu}(z)\rho(y)u -\tilde{\mu}(z)\mu(x)v\] \[+y\cdot(x\ast z)+\rho(y)\tilde{\rho}(x)w+\rho(y)\tilde{\mu}(z)u+ \mu(x\ast z)v+y\ast(x\cdot z)+\tilde{\rho}(y)\rho(x)w+\tilde{\rho}(y)\mu(z)u +\tilde{\mu}(x\cdot z)v\] \[= 0.\] This finishes the proof. Now, we will give the relation between compatible Lie algebras and compatible pre-Lie algebras. First, we will recall the notation of a compatible Lie algebra and its representation. 
**Definition 2.8**.: ([10, 11, 19]) \(A\) **compatible Lie algebra** _is a triple \((\mathfrak{g},[\cdot,\cdot],\{\cdot,\cdot\})\), where \(\mathfrak{g}\) is a vector space, \([\cdot,\cdot]\) and \(\{\cdot,\cdot\}\) are Lie algebra structures on \(\mathfrak{g}\), such that for all \(x,y,z\in\mathfrak{g}\), the following equality is satisfied:_ \[[\{x,y\},z]+[\{y,z\},x]+[\{z,x\},y]+\{[x,y],z\}+\{[y,z],x\}+\{[z,x],y\}=0.\] **Definition 2.9**.: ([24]) \(A\) **representation** _of a compatible Lie algebra \((\mathfrak{g},[\cdot,\cdot],\{\cdot,\cdot\})\) on a vector space \(V\) consists of a pair \((\rho,\mu)\), where \((V,\rho)\) is a representation of the Lie algebra \((\mathfrak{g},[\cdot,\cdot])\) and \((V,\mu)\) is a representation of the Lie algebra \((\mathfrak{g},\{\cdot,\cdot\})\) such that for all \(x,y\in\mathfrak{g}\):_ \[\rho(\{x,y\})+\mu([x,y])=[\rho(x),\mu(y)]-[\rho(y),\mu(x)].\] We denote a representation of a compatible Lie algebra \((\mathfrak{g},[\cdot,\cdot],\{\cdot,\cdot\})\) by \((V,\rho,\mu)\). **Proposition 2.10**.: _Let \((\mathfrak{g},\cdot,\ast)\) be a compatible pre-Lie algebra. Define two brackets \([\cdot,\cdot]\) and \(\{\cdot,\cdot\}\) respectively by_ \[[x,y]=x\cdot y-y\cdot x,\quad\{x,y\}=x\ast y-y\ast x.\] _Then, \((\mathfrak{g},[\cdot,\cdot],\{\cdot,\cdot\})\) is a compatible Lie algebra, which is denoted by \(\mathfrak{g}^{C}\) and called the_ **sub-adjacent compatible Lie algebra** _of \((\mathfrak{g},\cdot,\ast)\). Moreover, let \(L,\tilde{L}\) be linear maps, where \(L_{x}y=x\cdot y,\tilde{L}_{x}y=x\ast y\). Then \((V,L,\tilde{L})\) is a representation of the sub-adjacent compatible Lie algebra \((\mathfrak{g},[\cdot,\cdot],\{\cdot,\cdot\})\)._ Proof.: Obviously, \((\mathfrak{g},[\cdot,\cdot])\) and \((\mathfrak{g},\{\cdot,\cdot\})\) are Lie algebras. For all \(x,y,z\in\mathfrak{g}\), by (1), we have \[[\{x,y\},z]+[\{y,z\},x]+[\{z,x\},y]+\{[x,y],z\}+\{[y,z],x\}+\{[z,x ],y\}\] \[= [x\ast y-y\ast x,z]+[y\ast z-z\ast y,x]+[z\ast x-x\ast z,y]\] \[+\{x\cdot y-y\cdot x,z\}+\{y\cdot z-z\cdot y,x\}+\{z\cdot x-x \cdot z,y\}\] \[= (x\ast y)\cdot z-(y\ast x)\cdot z-z\cdot(x\ast y)+z\cdot(y\ast x )+(y\ast z)\cdot x-(z\ast y)\cdot x\] \[-x\cdot(y\ast z)+x\cdot(z\ast y)+(z\ast x)\cdot y-(x\ast z) \cdot y-y\cdot(z\ast x)+y\cdot(x\ast z)\] \[+(x\cdot y)\ast z-(y\cdot x)\ast z-z\ast(x\cdot y)+z\ast(y\cdot x )+(y\cdot z)\ast x-(z\cdot y)\ast x\] \[-x\ast(y\cdot z)+x\ast(z\cdot y)+(z\cdot x)\ast y-(x\cdot z) \ast y-y\ast(z\cdot x)+y\ast(x\cdot z)\] \[= 0.\] Thus, \((\mathfrak{g},[\cdot,\cdot],\{\cdot,\cdot\})\) is a compatible Lie algebra. Obviously, \((V,L)\) is a representation of \((\mathfrak{g},[\cdot,\cdot])\) and \((V,\tilde{L})\) is a representation of \((\mathfrak{g},\{\cdot,\cdot\})\). For all \(x,y,z\in\mathfrak{g}\), by (1), we have \[L_{\{x,y\}}z+\tilde{L}_{\{x,y\}}z-[L_{x},\tilde{L}_{y}]z+[L_{y},\tilde{L}_{x}]z\] \[= \{x,y\}\cdot z+[x,y]\ast z-L_{x}\tilde{L}_{y}z+\tilde{L}_{y}L_{x} z+L_{y}\tilde{L}_{x}z-\tilde{L}_{x}L_{y}z\] \[= (x\ast y)\cdot z-(y\ast x)\cdot z+(x\cdot y)\ast z-(y\cdot x) \ast z-x\cdot(y\ast z)+y\ast(x\cdot z)+y\cdot(x\ast z)-x\ast(y\cdot z)\] \[= 0,\] which implies that \[L_{\{x,y\}}+\tilde{L}_{\{x,y\}}=[L_{x},\tilde{L}_{y}]-[L_{y},\tilde{L}_{x}].\] Thus, \((V,L,\tilde{L})\) is a representation of the sub-adjacent compatible Lie algebra \((\mathfrak{g},[\cdot,\cdot],\{\cdot,\cdot\})\). **Proposition 2.11**.: _Let \((V,\rho,\mu,\tilde{\rho},\tilde{\mu})\) be a representation of a compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\). 
Then \((V,\rho-\mu,\tilde{\rho}-\tilde{\mu})\) is a representation of the sub-adjacent compatible Lie algebra \((\mathfrak{g},[\cdot,\cdot],\{\cdot,\cdot\})\)._ Proof.: Obviously, \((V,\rho-\mu)\) is a representation of the Lie algebra \((\mathfrak{g},[\cdot,\cdot])\) and \((V,\tilde{\rho}-\tilde{\mu})\) is a representation of the Lie algebra \((\mathfrak{g},[\cdot,\cdot])\). For all \(x,y\in\mathfrak{g},u\in V\), by (3) and (4), we have \[(\rho-\mu)(\{x,y\})+(\tilde{\rho}-\tilde{\mu})([x,y])-[(\rho-\mu) (x),(\tilde{\rho}-\tilde{\mu})(y)]+[(\rho-\mu)(y),(\tilde{\rho}-\tilde{\mu})( x)]\] \[= \rho\{x,y\}-\mu\{x,y\}+\tilde{\rho}[x,y]-\tilde{\mu}[x,y]-[\rho(x ),\tilde{\rho}(y)]+[\rho(x),\tilde{\mu}(y)]+[\mu(x),\tilde{\rho}(y)]\] \[-[\mu(x),\tilde{\mu}(y)]+[\rho(y),\tilde{\rho}(x)]-[\rho(y), \tilde{\mu}(x)]-[\mu(y),\tilde{\rho}(x)]+[\mu(y),\tilde{\mu}(x)]\] \[= \rho(x*y)-\rho(y*x)-\mu(x*y)+\mu(y*x)+\tilde{\rho}(x*y)-\tilde{ \rho}(y*x)-\tilde{\mu}(x*y)+\tilde{\mu}(y*x)\] \[-\rho(x)\tilde{\rho}(y)+\tilde{\rho}(y)\rho(x)+\rho(x)\tilde{\mu} (y)-\tilde{\mu}(y)\rho(x)+\mu(x)\tilde{\rho}(y)-\tilde{\rho}(y)\mu(x)-\mu(x) \tilde{\mu}(y)+\tilde{\mu}(y)\mu(x)\] \[+\rho(y)\tilde{\rho}(x)-\tilde{\rho}(x)\rho(y)-\rho(y)\tilde{\mu }(x)+\tilde{\mu}(x)\rho(y)-\mu(y)\tilde{\rho}(x)+\tilde{\rho}(x)\mu(y)+\mu(y) \tilde{\mu}(x)-\tilde{\mu}(x)\mu(y)\] \[= 0,\] which implies that \[(\rho-\mu)(\{x,y\})+(\tilde{\rho}-\tilde{\mu})([x,y])=[(\rho-\mu)(x),(\tilde {\rho}-\tilde{\mu})(y)]-[(\rho-\mu)(y),(\tilde{\rho}-\tilde{\mu})(x)].\] This finishes the proof. **Definition 2.12**.: ([17]) _Let \((\mathcal{G},[\cdot,\cdot],\mathrm{d}_{1})\) and \((\mathcal{G},[\cdot,\cdot],\mathrm{d}_{2})\) be two differential graded Lie algebras, where \(\mathcal{G}=\oplus_{i=0}^{\infty}\mathfrak{g}_{i}\). We call the quadruple \((\mathcal{G},[\cdot,\cdot],\mathrm{d}_{1},\mathrm{d}_{2})\) a_ **bidifferential graded Lie algebra** _if \(\mathrm{d}_{1}\) and \(\mathrm{d}_{2}\) satisfy_ \[\mathrm{d}_{1}\circ\mathrm{d}_{2}+\mathrm{d}_{2}\circ\mathrm{d}_{1}=0. \tag{5}\] **Definition 2.13**.: ([17]) _Let \((\mathcal{G},[\cdot,\cdot],\mathrm{d}_{1},\mathrm{d}_{2})\) be a bidifferential graded Lie algebra. A pair \((P_{1},P_{2})\in\mathfrak{g}_{1}\oplus\mathfrak{g}_{1}\) is called a_ **Maurer-Cartan element** _of \((\mathcal{G},[\cdot,\cdot],\mathrm{d}_{1},\mathrm{d}_{2})\) if \(P_{1}\) and \(P_{2}\) are Maurer-Cartan elements of the differential graded Lie algebra \((\mathcal{G},[\cdot,\cdot],\mathrm{d}_{1})\) and \((\mathcal{G},[\cdot,\cdot],\mathrm{d}_{2})\) respectively, such that_ \[\mathrm{d}_{1}P_{2}+\mathrm{d}_{2}P_{1}+[P_{1},P_{2}]=0. \tag{6}\] Let \((\mathcal{G},[\cdot,\cdot])\) be a graded Lie algebra. It is obviously that \((\mathcal{G},[\cdot,\cdot],\mathrm{d}_{1}=0,\mathrm{d}_{2}=0)\) is a bidifferential graded Lie algebra. Consider the graded Lie algebra \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN})\), we obtain the following main result. **Theorem 2.14**.: _Let \(\mathfrak{g}\) be a vector space and \(\pi_{1},\pi_{2}\in\mathrm{Hom}(\otimes^{2}\mathfrak{g},\mathfrak{g})\). 
Then \((\mathfrak{g},\pi_{1},\pi_{2})\) is a compatible pre-Lie algebra if and only if \((\pi_{1},\pi_{2})\) is a Maurer-Cartan element of the bidifferential graded Lie algebra \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},\mathrm{d}_{1}=0,\mathrm{ d}_{2}=0)\)._ Proof.: Obviously, \((\mathfrak{g},\pi_{1})\) and \((\mathfrak{g},\pi_{2})\) are pre-Lie algebra if and only if \[[\pi_{1},\pi_{1}]^{MN}=0,\quad[\pi_{2},\pi_{2}]^{MN}=0.\] For all \(x,y,z\in\mathfrak{g}\), we have \[[\pi_{1},\pi_{2}]^{MN}(x,y,z) = \pi_{1}(\pi_{2}(x,y),z)-\pi_{1}(\pi_{2}(y,x),z)-\pi_{1}(x,\pi_{2} (y,z))+\pi_{1}(y,\pi_{2}(x,z))\] \[+\pi_{2}(\pi_{1}(x,y),z)-\pi_{2}(\pi_{1}(y,x),z)-\pi_{2}(x,\pi_{1} (y,z))+\pi_{2}(y,\pi_{1}(x,z)),\] which implies that equation (1) is equivalent to \([\pi_{1},\pi_{2}]^{MN}=0\). Thus, \((\mathfrak{g},\pi_{1},\pi_{2})\) is a compatible pre-Lie algebra if and only if \((\pi_{1},\pi_{2})\) is a Maurer-Cartan element of the bidifferential graded Lie algebra \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},\mathrm{d}_{1}=0,\mathrm{ d}_{2}=0)\). Now, we give a new bidifferential graded Lie algebra that controls deformations of a compatible pre-Lie algebra. **Proposition 2.15**.: _Let \((\mathfrak{g},\pi_{1},\pi_{2})\) be a compatible pre-Lie algebra. Then \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},\mathrm{d}_{\pi_{1}}, \mathrm{d}_{\pi_{2}})\) is a bidifferential graded Lie algebra. Moreover, \((\mathfrak{g},\pi_{1}+\pi^{\prime}_{1},\pi_{2}+\pi^{\prime}_{2})\) is a compatible pre-Lie algebra for all \(\pi^{\prime}_{1},\pi^{\prime}_{2}\in\mathrm{Hom}(\mathfrak{so}^{2}\mathfrak{g},\mathfrak{g})\) if and only if \((\pi^{\prime}_{1},\pi^{\prime}_{2})\) is a Maurer-Cartan element of the bidifferential graded Lie algebra \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},\mathrm{d}_{\pi_{1}}, \mathrm{d}_{\pi_{2}})\)._ Proof.: Since \((\mathfrak{g},\pi_{1},\pi_{2})\) is a compatible pre-Lie algebra, by Theorem 2.14, \((\pi_{1},\pi_{2})\) is a Maurer-Cartan element of the bidifferential graded Lie algebra \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},\mathrm{d}_{1}=0,\mathrm{ d}_{2}=0)\). Thus, we have \[[\pi_{1},\pi_{1}]^{MN}=0,\quad[\pi_{2},\pi_{2}]^{MN}=0,\quad[\pi_{1},\pi_{2}]^{ MN}=0.\] Thus, \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},\mathrm{d}_{\pi_{1}})\) and \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},\mathrm{d}_{\pi_{2}})\) are differential graded Lie algebras. For all \(P\in C^{p+1}(\mathfrak{g},\mathfrak{g})\), by graded Jacobi identity, we have \[\mathrm{d}_{\pi_{1}}(\mathrm{d}_{\pi_{2}}P)+\mathrm{d}_{\pi_{2}}(\mathrm{d}_{ \pi_{1}}P)=[\pi_{1},[\pi_{2},P]^{MN}]^{MN}+[\pi_{2},[\pi_{1},P]^{MN}]^{MN}=[[ \pi_{1},\pi_{2}]^{MN},p]^{MN}=0,\] which implies that \(\mathrm{d}_{\pi_{1}}\circ\mathrm{d}_{\pi_{2}}+\mathrm{d}_{\pi_{2}}\circ \mathrm{d}_{\pi_{1}}=0\). Thus, \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},\mathrm{d}_{\pi_{1}}, \mathrm{d}_{\pi_{2}})\) is a bidifferential graded Lie algebra. If \((\mathfrak{g},\pi_{1}+\pi^{\prime}_{1},\pi_{2}+\pi^{\prime}_{2})\) is a compatible pre-Lie algebra, by Theorem 2.14, \((\pi_{1}+\pi^{\prime}_{1},\pi_{2}+\pi^{\prime}_{2})\) is a Maurer-Cartan element of the bidifferential graded Lie algebra \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},\mathrm{d}_{1}=0,\mathrm{ d}_{2}=0)\). 
Thus, we have \[[\pi_{1}+\pi^{\prime}_{1},\pi_{1}+\pi^{\prime}_{1}]^{MN} = 0, \tag{7}\] \[[\pi_{2}+\pi^{\prime}_{2},\pi_{2}+\pi^{\prime}_{2}]^{MN} = 0, \tag{8}\] \[[\pi_{1}+\pi^{\prime}_{1},\pi_{2}+\pi^{\prime}_{2}]^{MN} = 0. \tag{9}\] By (7), (8) and (9), we have \[\mathrm{d}_{\pi_{1}}\pi^{\prime}_{1}+\frac{1}{2}[\pi^{\prime}_{1},\pi^{\prime}_{1}]^{MN} = 0,\] \[\mathrm{d}_{\pi_{2}}\pi^{\prime}_{2}+\frac{1}{2}[\pi^{\prime}_{2},\pi^{\prime}_{2}]^{MN} = 0,\] \[\mathrm{d}_{\pi_{1}}\pi^{\prime}_{2}+\mathrm{d}_{\pi_{2}}\pi^{\prime}_{1}+[\pi^{\prime}_{1},\pi^{\prime}_{2}]^{MN} = 0.\] Thus, \((\pi^{\prime}_{1},\pi^{\prime}_{2})\) is a Maurer-Cartan element of \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},\mathrm{d}_{\pi_{1}},\mathrm{d}_{\pi_{2}})\). The converse part can be proved similarly. We omit details. The proof is finished. ## 3. Formal deformations of compatible pre-Lie algebras In this section, first, we introduce a cohomology of a compatible pre-Lie algebra with coefficients in itself. Then, we study infinitesimal deformations of compatible pre-Lie algebras using this cohomology, and we show that equivalent infinitesimal deformations are in the same second cohomology group. We give the notion of a Nijenhuis operator on a compatible pre-Lie algebra and show that a Nijenhuis operator gives rise to a trivial deformation. Finally, we study formal deformations of compatible pre-Lie algebras. If the second cohomology group \(\mathcal{H}^{2}(\mathfrak{g};\mathfrak{g})\) is trivial, then the compatible pre-Lie algebra is rigid. ### Cohomologies of compatible pre-Lie algebras Let \((\mathfrak{g},\pi)\) be a pre-Lie algebra, where \(\pi(x,y)=x\cdot y\). Because of the graded Jacobi identity, we define a coboundary operator \(\delta_{\pi}:C^{n}(\mathfrak{g},\mathfrak{g})\longrightarrow C^{n+1}(\mathfrak{g},\mathfrak{g})\) by \[\delta_{\pi}f=(-1)^{n-1}[\pi,f]^{MN},\quad\forall f\in C^{n}(\mathfrak{g};\mathfrak{g}).\] Thus, we obtain a cochain complex \((\oplus_{n=1}^{+\infty}C^{n}(\mathfrak{g};\mathfrak{g}),\delta_{\pi})\). More precisely, for all \(x_{1},\ldots,x_{n+1}\in\mathfrak{g}\), we have \[(\delta_{\pi}f)(x_{1},\ldots,x_{n+1}) = \sum_{i=1}^{n}(-1)^{i+1}x_{i}\cdot f(x_{1},\ldots,\hat{x}_{i},\ldots,x_{n+1})\] \[+\sum_{i=1}^{n}(-1)^{i+1}f(x_{1},\ldots,\hat{x}_{i},\ldots,x_{n},x_{i})\cdot x_{n+1}\] \[-\sum_{i=1}^{n}(-1)^{i+1}f(x_{1},\ldots,\hat{x}_{i},\ldots,x_{n},x_{i}\cdot x_{n+1})\] \[+\sum_{1\leq i<j\leq n}(-1)^{i+j}f([x_{i},x_{j}]_{C},x_{1},\ldots,\hat{x}_{i},\ldots,\hat{x}_{j},\ldots,x_{n+1}),\] which is the coboundary operator of the pre-Lie algebra \((\mathfrak{g},\pi)\) with coefficients in the regular representation \((\mathfrak{g},L,R)\). See [8] for more details. Let \((\mathfrak{g},\cdot,\ast)\) be a compatible pre-Lie algebra with \(\pi_{1}(x,y)=x\cdot y\), \(\pi_{2}(x,y)=x\ast y\).
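As an aside (an illustration, not part of the paper), the coboundary operator \(\delta_{\pi}\) above can be spot-checked numerically for small \(n\): with matrix multiplication as the pre-Lie product and the regular representation, a random \(1\)-cochain \(f\) should satisfy \(\delta_{\pi}(\delta_{\pi}f)=0\). The sketch below specialises the displayed formula to \(n=1\) and \(n=2\) and assumes NumPy.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 3

def prod(x, y):            # the pre-Lie product used here: matrix multiplication
    return x @ y

def lie(x, y):             # sub-adjacent Lie bracket [x, y]_C = x.y - y.x
    return prod(x, y) - prod(y, x)

F = rng.standard_normal((dim * dim, dim * dim))

def f(x):                  # a random linear map f: g -> g, i.e. a 1-cochain
    return (F @ x.reshape(-1)).reshape(dim, dim)

def delta1(f, x1, x2):
    # (delta f)(x1, x2) = x1 . f(x2) + f(x1) . x2 - f(x1 . x2)
    return prod(x1, f(x2)) + prod(f(x1), x2) - f(prod(x1, x2))

def delta2(g, x1, x2, x3):
    # The displayed coboundary formula specialised to n = 2.
    return (prod(x1, g(x2, x3)) - prod(x2, g(x1, x3))
            + prod(g(x2, x1), x3) - prod(g(x1, x2), x3)
            - g(x2, prod(x1, x3)) + g(x1, prod(x2, x3))
            - g(lie(x1, x2), x3))

df = lambda x1, x2: delta1(f, x1, x2)      # the 2-coboundary delta f
x1, x2, x3 = (rng.standard_normal((dim, dim)) for _ in range(3))
print(np.allclose(delta2(df, x1, x2, x3), 0))  # expected: True
```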
We define the set of \(n\)-cochains \((n\geq 1)\) by \[\mathfrak{C}^{n}(\mathfrak{g};\mathfrak{g})=\underbrace{C^{n}(\mathfrak{g}; \mathfrak{g})\oplus C^{n}(\mathfrak{g};\mathfrak{g})\oplus\cdots\oplus C^{n}( \mathfrak{g};\mathfrak{g})}_{n\ copies}.\] Define the operator \(\delta:\mathfrak{C}^{n}(\mathfrak{g};\mathfrak{g})\longrightarrow\mathfrak{C} ^{n+1}(\mathfrak{g};\mathfrak{g})\) by \[\delta^{1}f = (\delta_{\pi_{1}}f,\delta_{\pi_{2}}f),\quad\forall f\in\operatorname {Hom}(\mathfrak{g},\mathfrak{g}),n=1,\] \[\delta^{n}(f_{1},\ldots,f_{n}) = (\delta_{\pi_{1}}f_{1},\ldots,\underbrace{\delta_{\pi_{2}}f_{i-1} +\delta_{\pi_{1}}f_{i}}_{i},\ldots,\delta_{\pi_{2}}f_{n}),\quad\forall(f_{1}, \ldots,f_{n})\in\mathfrak{C}^{n}(\mathfrak{g},\mathfrak{g}),2\leq i\leq n.\] **Theorem 3.1**.: _The operator \(\delta:\mathfrak{C}^{n}(\mathfrak{g};\mathfrak{g})\longrightarrow\mathfrak{C} ^{n+1}(\mathfrak{g};\mathfrak{g})\) defined as above satisfies \(\delta\circ\delta=0\)._ Proof.: By Theorem 2.14, we obtain that \((\pi_{1},\pi_{2})\) is a Maurer-Cartan element of the bidifferential graded Lie algebra \((C^{*}(\mathfrak{g};\mathfrak{g}),[\cdot,\cdot]^{MN},\mathrm{d}_{1}=0,\mathrm{ d}_{2}=0)\). Thus, by the fact that \([\pi_{1},\pi_{1}]^{MN}=[\pi_{2},\pi_{2}]^{MN}=[\pi_{1},\pi_{2}]^{MN}=0\) and the graded Jacobi identity, for all \(f\in\operatorname{Hom}(\mathfrak{g},\mathfrak{g})\), we have \[\delta^{2}(\delta^{1}f)\] \[= \delta^{2}([\pi_{1},f]^{MN},[\pi_{2},f]^{MN})\] \[= -([\pi_{1},[\pi_{1},f]^{MN}]^{MN},[\pi_{2},[\pi_{1},f]^{MN}]^{MN} +[\pi_{1},[\pi_{2},f]^{MN}]^{MN},[\pi_{2},[\pi_{2},f]^{MN}]^{MN})\] \[= -(\frac{1}{2}[[\pi_{1},\pi_{1}]^{MN},f]^{MN},[[\pi_{1},\pi_{2}]^{ MN},f]^{MN},\frac{1}{2}[[\pi_{2},\pi_{2}]^{MN},f]^{MN})\] \[= (0,0,0).\] By \([\pi_{1},\pi_{1}]^{MN}=[\pi_{2},\pi_{2}]^{MN}=[\pi_{1},\pi_{2}]^{MN}=0\) and the graded Jacobi identity, for all \((f_{1},\ldots,f_{n})\in\mathfrak{C}^{n}(\mathfrak{g},\mathfrak{g}),2\leq i\leq n\), we have \[\delta^{n+1}\delta^{n}(f_{1},\ldots,f_{n})\] \[= (-1)^{n-1}\delta^{n+1}([\pi_{1},f_{1}]^{MN},\ldots,\underbrace{[ \pi_{2},f_{i-1}]^{MN}+[\pi_{1},f_{i}]^{MN}}_{i},\ldots,[\pi_{2},f_{n}]^{MN})\] \[= -([\pi_{1},[\pi_{1},f_{1}]^{MN}]^{MN},[\pi_{2},[\pi_{1},f_{1}]^{ MN}]^{MN}+[\pi_{1},[\pi_{2},f_{1}]^{MN}]^{MN}+[\pi_{1},[\pi_{1},f_{2}]^{MN}]^{ MN},\ldots,\] \[\underbrace{[\pi_{2},[\pi_{2},f_{i-2}]^{MN}]^{MN}+[\pi_{2},[\pi_{1 },f_{i-1}]^{MN}]^{MN}+[\pi_{1},[\pi_{2},f_{i-1}]^{MN}]^{MN}+[\pi_{1},[\pi_{1},f_{ i}]^{MN}]^{MN}}_{3\leq i\leq n-1},\] \[[\pi_{2},[\pi_{2},f_{n-1}]^{MN}]^{MN}+[\pi_{2},[\pi_{1},f_{n}]^{MN}]^{ MN}+[\pi_{1},[\pi_{2},f_{n}]^{MN}]^{MN},[\pi_{2},[\pi_{2},f_{n}]^{MN}]^{MN})\] \[= -(\frac{1}{2}[[\pi_{1},\pi_{1}]^{MN},f_{1}]^{MN},[[\pi_{1},\pi_{2}] ^{MN},f_{1}]^{MN}+\frac{1}{2}[[\pi_{1},\pi_{1}]^{MN},f_{2}]^{MN},\ldots,\] \[\underbrace{\frac{1}{2}[[\pi_{2},\pi_{2}]^{MN},f_{i-2}]^{MN}+[[\pi _{1},\pi_{2}]^{MN},f_{i-1}]^{MN}+\frac{1}{2}[[\pi_{1},\pi_{1}]^{MN},f_{i}]^{MN} }_{3\leq i\leq n-1},\ldots,\] \[\frac{1}{2}[[\pi_{2},\pi_{2}]^{MN},f_{n-1}]^{MN}+[[\pi_{1},\pi_{2 }]^{MN},f_{n}]^{MN},\frac{1}{2}[[\pi_{2},\pi_{2}]^{MN},f_{n}]^{MN})\] \[= (0,0,\ldots,0).\] Thus, we have \(\delta\circ\delta=0\). **Definition 3.2**.: _Let \((\mathfrak{g},\cdot,\ast)\) be a compatible pre-Lie algebra. The cohomology of the cochain complex \((\oplus_{n=1}^{+\infty}\mathfrak{C}^{\mathfrak{g}}(\mathfrak{g};\mathfrak{g}),\delta)\) is called the cohomology of \((\mathfrak{g},\cdot,\ast)\). 
The corresponding \(n\)-th cohomology group is denoted by \(\mathcal{H}^{n}(\mathfrak{g};\mathfrak{g})\)._ ### Infinitesimal deformations of compatible pre-Lie algebras **Definition 3.3**.: _Let \((\mathfrak{g},\cdot,\ast)\) be a compatible pre-Lie algebra and \((\omega_{1},\omega_{2})\in\mathfrak{C}^{2}(\mathfrak{g},\mathfrak{g})\). Define_ \[x\cdot_{t}y=x\cdot y+t\omega_{1}(x,y),\quad x\ast_{t}y=x\ast y+t\omega_{2}(x,y),\quad\forall x,y\in\mathfrak{g}.\] _If for all \(t\in\mathbb{K}\), \((\mathfrak{g},\cdot_{t},\ast_{t})\) is still a compatible pre-Lie algebra, then we say that \((\omega_{1},\omega_{2})\) generates an infinitesimal deformation of \((\mathfrak{g},\cdot,\ast)\)._ It is straightforward to verify that \((\omega_{1},\omega_{2})\) generates an infinitesimal deformation of a compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\) if and only if for all \(k_{1},k_{2}\in\mathbb{K}\), \(k_{1}\omega_{1}+k_{2}\omega_{2}\) generates an infinitesimal deformation of the pre-Lie algebra \((\mathfrak{g},\diamond)\), where "\(\diamond\)" is given by (2). By Theorem 2.14, \((\omega_{1},\omega_{2})\) generates an infinitesimal deformation of a compatible pre-Lie algebra \((\mathfrak{g},\pi_{1},\pi_{2})\), where \(x\cdot y=\pi_{1}(x,y)\) and \(x\ast y=\pi_{2}(x,y)\), if and only if \[[\pi_{1},\omega_{1}]^{MN}=0,\quad[\pi_{2},\omega_{2}]^{MN}=0,\quad[\pi_{1},\omega_{2}]^{MN}+[\pi_{2},\omega_{1}]^{MN}=0, \tag{10}\] \[[\omega_{1},\omega_{1}]^{MN}=0,\quad[\omega_{2},\omega_{2}]^{MN}=0,\quad[\omega_{1},\omega_{2}]^{MN}=0. \tag{11}\] Obviously, (10) means that \((\omega_{1},\omega_{2})\) is a \(2\)-cocycle of the compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\), i.e. \(\delta(\omega_{1},\omega_{2})=0\), and (11) means that \((\mathfrak{g},\omega_{1},\omega_{2})\) is a compatible pre-Lie algebra. **Theorem 3.4**.: _With the above notation, \((\omega_{1},\omega_{2})\) is a \(2\)-cocycle of the compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\)._ **Definition 3.5**.: _Two infinitesimal deformations \((\mathfrak{g},\cdot_{t},\ast_{t})\) and \((\mathfrak{g}^{\prime},\cdot_{t}^{\prime},\ast_{t}^{\prime})\) generated by \((\omega_{1},\omega_{2})\) and \((\omega_{1}^{\prime},\omega_{2}^{\prime})\) respectively are said to be_ **equivalent** _if there exists a linear operator \(N\in\mathfrak{gl}(\mathfrak{g})\) such that \(\mathrm{Id}+tN\) is a compatible pre-Lie algebra homomorphism from \((\mathfrak{g}^{\prime},\cdot_{t}^{\prime},\ast_{t}^{\prime})\) to \((\mathfrak{g},\cdot_{t},\ast_{t})\)._ Two infinitesimal deformations \((\mathfrak{g},\cdot_{t},\ast_{t})\) and \((\mathfrak{g}^{\prime},\cdot_{t}^{\prime},\ast_{t}^{\prime})\) generated by \((\omega_{1},\omega_{2})\) and \((\omega_{1}^{\prime},\omega_{2}^{\prime})\) respectively are equivalent if and only if for all \(x,y\in\mathfrak{g}\), the following equalities hold: \[\omega_{1}^{\prime}(x,y)-\omega_{1}(x,y) = N(x)\cdot y+x\cdot N(y)-N(x\cdot y), \tag{12}\] \[N(\omega_{1}^{\prime}(x,y)) = \omega_{1}(x,N(y))+\omega_{1}(N(x),y)+N(x)\cdot N(y), \tag{13}\] \[\omega_{1}(N(x),N(y)) = 0, \tag{14}\] \[\omega_{2}^{\prime}(x,y)-\omega_{2}(x,y) = N(x)\ast y+x\ast N(y)-N(x\ast y), \tag{15}\] \[N(\omega_{2}^{\prime}(x,y)) = \omega_{2}(x,N(y))+\omega_{2}(N(x),y)+N(x)\ast N(y), \tag{16}\] \[\omega_{2}(N(x),N(y)) = 0. \tag{17}\] Note that (12) and (15) mean that \((\omega_{1}^{\prime}-\omega_{1},\omega_{2}^{\prime}-\omega_{2})=\delta N=([\pi_{1},N]^{MN},[\pi_{2},N]^{MN})\).
Thus, we have **Theorem 3.6**.: _Let \((\mathfrak{g},\cdot,\ast)\) be a compatible pre-Lie algebra. If two infinitesimal deformations \((\mathfrak{g},\cdot_{t},\ast_{t})\) and \((\mathfrak{g}^{\prime},\cdot_{t}^{\prime},\ast_{t}^{\prime})\) generated by \((\omega_{1},\omega_{2})\) and \((\omega_{1}^{\prime},\omega_{2}^{\prime})\) respectively are equivalent, then \((\omega_{1},\omega_{2})\) and \((\omega_{1}^{\prime},\omega_{2}^{\prime})\) are in the same cohomology class of \(\mathcal{H}^{2}(\mathfrak{g};\mathfrak{g})\)._ **Definition 3.7**.: ([23]) _Let \((\mathfrak{g},\cdot)\) be a pre-Lie algebra. A linear operator \(N\in\mathfrak{gl}(\mathfrak{g})\) is called a Nijenhuis operator on \((\mathfrak{g},\cdot)\) if for all \(x,y\in\mathfrak{g}\)_ \[N(x)\cdot N(y)=N(x\cdot_{N}y),\] _where the product "\(\cdot_{N}\)" is defined by_ \[x\cdot_{N}y\triangleq N(x)\cdot y+x\cdot N(y)-N(x\cdot y).\] **Proposition 3.8**.: ([23]) _Let \(N\) be a Nijenhuis operator on a pre-Lie algebra \((\mathfrak{g},\cdot)\), then \((\mathfrak{g},\cdot_{N})\) is a pre-Lie algebra, and \(N\) is a homomorphism from \((\mathfrak{g},\cdot_{N})\) to \((\mathfrak{g},\cdot)\)._ **Definition 3.9**.: _Let \((\mathfrak{g},\cdot,\ast)\) be a compatible pre-Lie algebra. A linear operator \(N\in\mathfrak{gl}(\mathfrak{g})\) is called a Nijenhuis operator on \((\mathfrak{g},\cdot,\ast)\) if \(N\) is both a Nijenhuis operator on the pre-Lie algebra \((\mathfrak{g},\cdot)\) and a Nijenhuis operator on the pre-Lie algebra \((\mathfrak{g},\ast)\)._ **Proposition 3.10**.: _Let \((\mathfrak{g},\cdot,\ast)\) be a compatible pre-Lie algebra and \(N\in\mathfrak{gl}(\mathfrak{g})\) a linear map. Then \(N\) is a Nijenhuis operator on the compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\) if and only if \(N\) is a Nijenhuis operator on the pre-Lie algebra \((\mathfrak{g},\diamond)\), where "\(\diamond\)" is given by (2)._ Proof.: If \(N\) is a Nijenhuis operator on the compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\), for all \(x,y\in\mathfrak{g}\), we have \[N(x)\diamond N(y)-N(N(x)\diamond y+x\diamond N(y)-N(x\diamond y))\] \[= k_{1}N(x)\cdot N(y)+k_{2}N(x)\ast N(y)-N(k_{1}N(x)\cdot y+k_{2} N(x)\ast y\] \[+k_{1}x\cdot N(y)+k_{2}x\ast N(y)-N(k_{1}x\cdot y+k_{2}x\ast y))\] \[= k_{1}(N(x)\cdot N(y)-N(N(x)\cdot y+x\cdot N(y)-N(x\cdot y)))\] \[+k_{2}(N(x)\ast N(y)-N(N(x)\ast y+x\ast N(y)-N(x\ast y)))\] \[= 0,\] which implies that \(N\) is a Nijenhuis operator on the pre-Lie algebra \((\mathfrak{g},\diamond)\). The converse part can be proved similarly. We omit details. The proof is finished. **Proposition 3.11**.: _Let \(N\in\mathfrak{gl}(\mathfrak{g})\) be a Nijenhuis operator on the compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\). Then \((\mathfrak{g},\cdot_{N},\ast_{N})\) is a compatible pre-Lie algebra and \(N\) is a homomorphism from \((\mathfrak{g},\cdot_{N},\ast_{N})\) to \((\mathfrak{g},\cdot,\ast)\)._ Proof.: By Proposition 3.10, \(N\) is a Nijenhuis operator on the pre-Lie algebra \((\mathfrak{g},\diamond)\). For all \(x,y\in\mathfrak{g}\), we have \[x\diamond_{N}y = N(x)\diamond y+x\diamond N(y)-N(x\diamond y)\] \[= k_{1}N(x)\cdot y+k_{2}N(x)\ast y+k_{1}x\cdot N(y)+k_{2}x\ast N(y )-k_{1}N(x\cdot y)-k_{2}N(x\ast y)\] \[= k_{1}(x\cdot_{N}y)+k_{2}(x\ast_{N}y).\] By Proposition 3.8, \((\mathfrak{g},\cdot_{n},\ast_{n})\) is a pre-Lie algebra. By Proposition 2.4, \((\mathfrak{g},\cdot_{N},\ast_{N})\) is a compatible pre-Lie algebra. 
By Proposition 3.8, \(N\) is a homomorphism from \((\mathfrak{g},\cdot_{N})\) to \((\mathfrak{g},\cdot)\) and a homomorphism from \((\mathfrak{g},\ast_{N})\) to \((\mathfrak{g},\ast)\). Thus, \(N\) is a homomorphism from \((\mathfrak{g},\cdot_{N},\ast_{N})\) to \((\mathfrak{g},\cdot,\ast)\). **Definition 3.12**.: _An infinitesimal deformation \((\mathfrak{g},\cdot_{t},\ast_{t})\) of a compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\) generated by \((\omega_{1},\omega_{2})\) is_ **trivial** _if there exists a linear operator \(N\in\mathfrak{gl}(\mathfrak{g})\) such that \(\mathrm{Id}+tN\) is a compatible pre-Lie algebra homomorphism from \((\mathfrak{g},\cdot_{t},\ast_{t})\) to \((\mathfrak{g},\cdot,\ast)\)._ \((\mathfrak{g},\cdot_{t},\ast_{t})\) is a trivial infinitesimal deformation if and only if for all \(x,y\in\mathfrak{g}\), the following equalities hold: \[\omega_{1}(x,y) = N(x)\cdot y+x\cdot N(y)-N(x\cdot y), \tag{18}\] \[N(\omega_{1}(x,y)) = N(x)\cdot N(y), \tag{19}\] \[\omega_{2}(x,y) = N(x)\ast y+x\ast N(y)-N(x\ast y), \tag{20}\] \[N(\omega_{2}(x,y)) = N(x)\ast N(y). \tag{21}\] By (18) and (19), we obtain that \(N\) is a Nijenhuis operator on the pre-Lie algebra \((\mathfrak{g},\cdot)\). By (20) and (21), we obtain that \(N\) is a Nijenhuis operator on the pre-Lie algebra \((\mathfrak{g},\ast)\). Thus, by (18), (19), (20) and (21), we obtain that a trivial infinitesimal deformation of a compatible pre-Lie algebra gives rise to a Nijenhuis operator \(N\) on the compatible pre-Lie algebra. Conversely, a Nijenhuis operator can also generate a trivial infinitesimal deformation, as the following theorem shows. **Theorem 3.13**.: _Let \(N\) be a Nijenhuis operator on a compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\). Then an infinitesimal deformation \((\mathfrak{g},\cdot_{t},\ast_{t})\) of the compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\) can be obtained by putting_ \[\omega_{1}(x,y) = N(x)\cdot y+x\cdot N(y)-N(x\cdot y), \tag{22}\] \[\omega_{2}(x,y) = N(x)\ast y+x\ast N(y)-N(x\ast y). \tag{23}\] _Furthermore, this infinitesimal deformation \((\mathfrak{g},\cdot_{t},\ast_{t})\) is trivial._ Proof.: By (22) and (23), we obtain that \((\omega_{1},\omega_{2})=\delta N\). Thus, \((\omega_{1},\omega_{2})\) is a \(2\)-cocycle of the compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\). Since \(N\) is a Nijenhuis operator on the compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\), by Proposition 3.11, \((\mathfrak{g},\omega_{1},\omega_{2})\) is a compatible pre-Lie algebra. Thus, \((\mathfrak{g},\cdot_{t},\ast_{t})\) is an infinitesimal deformation of \((\mathfrak{g},\cdot,\ast)\). It is straightforward to deduce that \(\mathrm{Id}+tN\) is a compatible pre-Lie algebra homomorphism from \((\mathfrak{g},\cdot_{t},\ast_{t})\) to \((\mathfrak{g},\cdot,\ast)\). Thus, this infinitesimal deformation is trivial. ### Formal deformations of compatible pre-Lie algebras **Definition 3.14**.: _Let \((\mathfrak{g},\pi_{1},\pi_{2})\) be a compatible pre-Lie algebra, and let \(\pi_{1}^{\prime}=\pi_{1}+\sum_{i=1}^{+\infty}\pi_{1}^{i}t^{i},\pi_{2}^{\prime}=\pi_{2}+\sum_{i=1}^{+\infty}\pi_{2}^{i}t^{i}:\mathfrak{g}[[t]]\otimes\mathfrak{g}[[t]]\longrightarrow\mathfrak{g}[[t]]\) be \(\mathbb{K}[[t]]\)-bilinear maps, where \(\pi_{1}^{i},\pi_{2}^{i}:\mathfrak{g}\otimes\mathfrak{g}\longrightarrow\mathfrak{g}\) are linear maps. 
If \((\mathfrak{g}[[t]],\pi_{1}^{\prime},\pi_{2}^{\prime})\) is still a compatible pre-Lie algebra, we say that \(\{\pi_{1}^{i},\pi_{2}^{i}\}_{i\geq 1}\) generates a \(1\)**-parameter formal deformation** of the compatible pre-Lie algebra \((\mathfrak{g},\pi_{1},\pi_{2})\)._ If \(\{\pi_{1}^{i},\pi_{2}^{i}\}_{i\geq 1}\) generates a \(1\)-parameter formal deformation of a compatible pre-Lie algebra \((\mathfrak{g},\pi_{1},\pi_{2})\), then for all \(x,y,z\in\mathfrak{g}\) and \(n=1,2,\dots\), we have \[\sum_{\begin{subarray}{c}i+j=n\\ i,j\geq 0\end{subarray}}\pi_{1}^{i}(\pi_{1}^{j}(x,y),z)-\pi_{1}^{i}(x,\pi_{1}^{j}(y,z))-\pi_{1}^{i}(\pi_{1}^{j}(y,x),z)+\pi_{1}^{i}(y,\pi_{1}^{j}(x,z))=0.\] Moreover, we have \[\sum_{\begin{subarray}{c}i+j=n\\ 0<i,j\leq n-1\end{subarray}}\pi^{i}_{1}(\pi^{j}_{1}(x,y),z)-\pi^{i}_{1}(x,\pi^{j}_{1}(y,z))-\pi^{i}_{1}(\pi^{j}_{1}(y,x),z)+\pi^{i}_{1}(y,\pi^{j}_{1}(x,z))=-[\pi_{1},\pi^{n}_{1}]^{MN}(x,y,z). \tag{24}\] Similarly, we have \[\sum_{\begin{subarray}{c}i+j=n\\ 0<i,j\leq n-1\end{subarray}}\pi^{i}_{2}(\pi^{j}_{2}(x,y),z)-\pi^{i}_{2}(x,\pi^{j}_{2}(y,z))-\pi^{i}_{2}(\pi^{j}_{2}(y,x),z)+\pi^{i}_{2}(y,\pi^{j}_{2}(x,z))=-[\pi_{2},\pi^{n}_{2}]^{MN}(x,y,z). \tag{25}\] For all \(x,y,z\in\mathfrak{g}\) and \(n=1,2,\dots\), we also have \[\sum_{\begin{subarray}{c}i+j=n\\ i,j\geq 0\end{subarray}}\pi^{i}_{1}(\pi^{j}_{2}(x,y),z)+\pi^{j}_{2}(\pi^{i}_{1}(x,y),z)-\pi^{i}_{1}(x,\pi^{j}_{2}(y,z))-\pi^{j}_{2}(x,\pi^{i}_{1}(y,z))-\pi^{i}_{1}(\pi^{j}_{2}(y,x),z)-\pi^{j}_{2}(\pi^{i}_{1}(y,x),z)+\pi^{i}_{1}(y,\pi^{j}_{2}(x,z))+\pi^{j}_{2}(y,\pi^{i}_{1}(x,z))=0.\] Moreover, we have \[\sum_{\begin{subarray}{c}i+j=n\\ 0<i,j\leq n-1\end{subarray}}\pi^{i}_{1}(\pi^{j}_{2}(x,y),z)+\pi^{j}_{2}(\pi^{i}_{1}(x,y),z)-\pi^{i}_{1}(x,\pi^{j}_{2}(y,z))-\pi^{j}_{2}(x,\pi^{i}_{1}(y,z))-\pi^{i}_{1}(\pi^{j}_{2}(y,x),z)-\pi^{j}_{2}(\pi^{i}_{1}(y,x),z)+\pi^{i}_{1}(y,\pi^{j}_{2}(x,z))+\pi^{j}_{2}(y,\pi^{i}_{1}(x,z))=-([\pi_{1},\pi^{n}_{2}]^{MN}+[\pi_{2},\pi^{n}_{1}]^{MN})(x,y,z). \tag{26}\] **Definition 3.15**.: _Let \((\pi_{1}^{\prime\prime}=\pi_{1}+\sum_{i=1}^{+\infty}\pi_{1}^{i\prime}t^{i},\pi_{2}^{\prime\prime}=\pi_{2}+\sum_{i=1}^{+\infty}\pi_{2}^{i\prime}t^{i})\) and \((\pi_{1}^{\prime}=\pi_{1}+\sum_{i=1}^{+\infty}\pi_{1}^{i}t^{i},\pi_{2}^{\prime}=\pi_{2}+\sum_{i=1}^{+\infty}\pi_{2}^{i}t^{i})\) be two \(1\)-parameter formal deformations of a compatible pre-Lie algebra \((\mathfrak{g},\pi_{1},\pi_{2})\). 
A_ **formal isomorphism** _from \((\mathfrak{g}[[t]],\pi^{\prime}_{1}\,,\pi^{\prime}_{2}\,^{\prime})\) to \((\mathfrak{g}[[t]],\pi^{\prime}_{1}\,,\pi^{\prime}_{2}\,)\) is a power series \(\Phi_{t}=\sum_{i=0}^{+\infty}\varphi_{i}i^{\prime}\), where \(\varphi_{i}:A\longrightarrow A\) are linear maps with \(\varphi_{0}=\mathrm{Id}\), such that_ \[\Phi_{t}\circ\pi^{\prime}_{1}\,^{\prime} = \pi^{\prime}_{1}\circ(\Phi_{t}\times\Phi_{t}),\] \[\Phi_{t}\circ\pi^{\prime}_{2}\,^{\prime} = \pi^{\prime}_{2}\circ(\Phi_{t}\times\Phi_{t}).\] _Two \(1\)-parameter formal deformations \((\mathfrak{g}[[t]],\pi^{\prime}_{1}\,,\pi^{\prime}_{2}\,^{\prime})\) and \((\mathfrak{g}[[t]],\pi_{1},\pi_{2})\) are said to be_ **equivalent** _if there exists a formal isomorphism \(\Phi_{t}=\sum_{i=0}^{+\infty}\varphi_{i}i^{\prime}\) from \((\mathfrak{g}[[t]],\pi^{\prime}_{1}\,,\pi^{\prime}_{2}\,^{\prime})\) to \((\mathfrak{g}[[t]],\pi_{1},\pi_{2})\)._ **Definition 3.16**.: _A \(1\)-parameter formal deformation \((\mathfrak{g}[[t]],\pi^{\prime}_{1},\pi^{\prime}_{2})\) of a compatible pre-Lie algebra \((\mathfrak{g},\pi_{1},\pi_{2})\) is said to be_ **trivial** _if it is equivalent to \((\mathfrak{g},\pi_{1},\pi_{2})\), i.e. there exists \(\Phi_{t}=\sum_{i=0}^{+\infty}\varphi_{i}i^{\prime}\), where \(\varphi_{i}:A\longrightarrow A\) are linear maps with \(\varphi_{0}=\mathrm{Id}\), such that_ \[\Phi_{t}\circ\pi^{\prime}_{1} = \pi_{1}\circ(\Phi_{t}\times\Phi_{t}),\] \[\Phi_{t}\circ\pi^{\prime}_{2} = \pi_{2}\circ(\Phi_{t}\times\Phi_{t}).\] **Definition 3.17**.: _Let \((\mathfrak{g},\pi_{1},\pi_{2})\) be a compatible pre-Lie algebra. If all \(1\)-parameter formal deformations are trivial, we say that \((\mathfrak{g},\pi_{1},\pi_{2})\) is_ **rigid**_._ **Theorem 3.18**.: _Let \((\mathfrak{g},\pi_{1},\pi_{2})\) be a compatible pre-Lie algebra. If \(\mathcal{H}^{2}(\mathfrak{g};\mathfrak{g})=0\), then \((\mathfrak{g},\pi_{1},\pi_{2})\) is rigid._ Proof.: Let \((\pi^{\prime}_{1}=\pi_{1}+\sum_{i=1}^{+\infty}\pi^{i}_{1}i^{\prime},\pi^{ \prime}_{2}=\pi_{2}+\sum_{i=1}^{+\infty}\pi^{i}_{2}t^{i})\) be a \(1\)-parameter formal deformation and assume that \(n\geq 1\) is the minimal number such that \((\pi^{n}_{1},\pi^{n}_{2})\) is not zero. By (24), (25), (26) and \(\mathcal{H}^{2}(\mathfrak{g};\mathfrak{g})=0\), we have \((\pi^{n}_{1},\pi^{n}_{2})\in B^{2}(A;A)\). Thus, there exists \(\varphi_{n}\in\mathfrak{C}^{1}(\mathfrak{g};\mathfrak{g})\) such that \((\pi^{n}_{1},\pi^{n}_{2})=\delta(-\varphi_{n})=(\delta_{\pi_{1}}(-\varphi_{n}), \delta_{\pi_{2}}(-\varphi_{n}))\). Let \(\Phi_{t}=\mathrm{Id}+\varphi_{n}t^{n}\) and define a new formal deformation \((\pi^{\prime}_{1}\,,\pi^{\prime}_{2}\,^{\prime})\) by \(\pi_{1}^{\prime}(x,y)=\Phi_{t}^{-1}\circ\pi_{1}^{\prime}(\Phi_{t}(x),\Phi_{t}(y))\), \(\pi_{2}^{\prime}(x,y)=\Phi_{t}^{-1}\circ\pi_{2}^{\prime}(\Phi_{t}(x),\Phi_{t}(y))\). Then \((\pi_{1}^{\prime},\pi_{2}^{\prime})\) and \((\pi_{1}^{\prime},\pi_{2}^{\prime})\) are equivalent. 
By straightforward computation, for all \(x,y\in\mathfrak{g}\), we have \[\pi_{1}^{\prime}(x,y) = \Phi_{t}^{-1}\circ\pi_{1}^{\prime}(\Phi_{t}(x),\Phi_{t}(y))\] \[= (\mathrm{Id}-\varphi_{n}t^{n}+\ldots)\pi_{1}^{\prime}(x+\varphi_{ n}(x)t^{n},y+\varphi_{n}(y)t^{n})\] \[= (\mathrm{Id}-\varphi_{n}t^{n}+\ldots)\Big{(}x\cdot y+\big{(}x \cdot\varphi_{n}(y)+\varphi_{n}(x)\cdot y+\pi_{1}^{n}(x,y)\big{)}t^{n}+\ldots \Big{)}\] \[= x\cdot y+\Big{(}x\cdot\varphi_{n}(y)+\varphi_{n}(x)\cdot y+\pi_ {1}^{n}(x,y)-\varphi_{n}(x\cdot y)\Big{)}t^{n}+\ldots.\] Thus, we have \(\pi_{1}^{1\prime}=\pi_{1}^{2\prime}=\cdots=\pi_{1}^{n-1\prime}=0\). Moreover, we have \[\pi_{1}^{n\prime}(x,y) = x\cdot\varphi_{n}(y)+\varphi_{n}(x)\cdot y+\pi_{1}^{n}(x,y)- \varphi_{n}(x\cdot y)\] \[= \delta_{\pi_{1}}\varphi_{n}(x,y)+\pi_{1}^{n}(x,y)\] \[= 0.\] Similarly, we have \(\pi_{2}^{n\prime}(x,y)=0\). Keep repeating the process, we obtain that \((\mathfrak{g}[[t]],\pi_{1}^{\prime},\pi_{2}^{\prime})\) is equivalent to \((\mathfrak{g},\pi_{1},\pi_{2})\). The proof is finished. ## 4. Abelian extensions of compatible pre-Lie algebras In this section, first, we give a compatible pre-Lie algebra \((\mathfrak{g},\pi_{1},\pi_{2})\) and its representation \((V,\rho,\mu,\tilde{\rho},\tilde{\mu})\). We construct a bidifferential graded Lie algebra whose Maurer-Cartan elements is \((\pi_{1}+\rho+\mu,\pi_{2}+\tilde{\rho}+\tilde{\mu})\). Then, we give a cohomology of a compatible pre-Lie algebra with coefficients in arbitrary representation. Finally, we study abelian extensions of compatible pre-Lie algebras using this cohomological approach. We show that abelian extensions are classified by the second cohomology group. ### Cohomologies of compatible pre-Lie algebras with coefficients in arbitrary representation Let \(\mathfrak{g}_{1}\) and \(\mathfrak{g}_{2}\) be vector spaces and elements in \(\mathfrak{g}_{1}\) will be denoted by \(x,y,x_{i}\) and elements in \(\mathfrak{g}_{2}\) will be denoted by \(u,v,v_{i}\). Let \(c:\wedge^{n-1}\mathfrak{g}_{1}\otimes\mathfrak{g}_{1}\longrightarrow\mathfrak{g }_{2}\) be a linear map. We can construct a linear map \(\hat{c}\in C^{n}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1} \oplus\mathfrak{g}_{2})\) by \[\hat{c}(x_{1}+v_{1},\ldots,x_{n}+v_{n}):=c(x_{1},\ldots,x_{n}).\] In general, for a given linear map \(f:\wedge^{k-1}\mathfrak{g}_{1}\otimes\wedge^{l}\mathfrak{g}_{2}\otimes \mathfrak{g}_{1}\longrightarrow\mathfrak{g}_{j}\) for \(j\in\{1,2\}\), we define a linear map \(\hat{f}\in C^{k+l}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1} \oplus\mathfrak{g}_{2})\) by \[\hat{f}(x_{1}+v_{1},\ldots,x_{k+l}+v_{k+l})=\sum_{\sigma\in\mathbb{S}(k-1,l)} \mathrm{sgn}(\sigma)f(x_{\sigma(1)},\ldots,x_{\sigma(k-l)},v_{\sigma(k)}, \ldots,v_{\sigma(k+l-1)},x_{k+l}).\] Similarly, for \(f:\wedge^{k}\mathfrak{g}_{1}\otimes\wedge^{l-1}\mathfrak{g}_{2}\otimes \mathfrak{g}_{2}\longrightarrow\mathfrak{g}_{j}\) for \(j\in\{1,2\}\), we define a linear map \(\hat{f}\in C^{k+l}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1} \oplus\mathfrak{g}_{2})\) by \[\hat{f}(x_{1}+v_{1},\ldots,x_{k+l}+v_{k+l})=\sum_{\sigma\in\mathbb{S}(k-1)} \mathrm{sgn}(\sigma)f(x_{\sigma(1)},\ldots,x_{\sigma(k)},v_{\sigma(k+1)}, \ldots,v_{\sigma(k+l-1)},v_{k+l}).\] We call the linear map \(\hat{f}\) a **horizontal lift** of \(f\), or simply a lift. 
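The unshuffle sums appearing in the lift formulas above are easy to enumerate explicitly. The following sketch (an illustration only, assuming Python's standard library) lists all \((i,n-i)\)-unshuffles of \(\{1,\dots,n\}\) together with their signs; there are \(\binom{n}{i}\) of them.

```python
from itertools import permutations
from math import comb

def unshuffles(i, n):
    # All (i, n-i)-unshuffles of {1, ..., n}: sigma(1) < ... < sigma(i) and
    # sigma(i+1) < ... < sigma(n), returned together with their signs.
    result = []
    for sigma in permutations(range(1, n + 1)):
        if all(sigma[k] < sigma[k + 1] for k in range(i - 1)) and \
           all(sigma[k] < sigma[k + 1] for k in range(i, n - 1)):
            inversions = sum(1 for p in range(n) for q in range(p + 1, n)
                             if sigma[p] > sigma[q])
            result.append((sigma, (-1) ** inversions))
    return result

# There are C(n, i) such unshuffles, e.g. 10 for i = 2, n = 5.
print(len(unshuffles(2, 5)), comb(5, 2))
for sigma, sign in unshuffles(2, 4):
    print(sigma, sign)
```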
We define \(\mathcal{G}^{k,l}=\wedge^{k-1}\mathfrak{g}_{1}\otimes\wedge^{l}\mathfrak{g}_{2} \otimes\mathfrak{g}_{1}+\wedge^{k}\mathfrak{g}_{1}\otimes\wedge^{l-1}\mathfrak{g }_{2}\otimes\mathfrak{g}_{2}\). The vector space \(\wedge^{n-1}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2})\otimes(\mathfrak{g}_{1} \oplus\mathfrak{g}_{2})\) is isomorphic to the direct sum of \(\mathcal{G}^{k,l},k+l=n\). In the sequel, we will omit the notation \(\hat{\cdot}\). **Definition 4.1**.: ([15]) _A linear map \(f\in\mathrm{Hom}(\wedge^{n-1}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2})\otimes( \mathfrak{g}_{1}\oplus\mathfrak{g}_{2}),\mathfrak{g}_{1}\oplus\mathfrak{g}_{2})\) has a **bidegree \(k|l\)** if the following four conditions hold:_ 1. \(k+l+1=n\)_;_ 2. _If_ \(X\) _is an element in_ \(\mathcal{G}^{k+1,l}\)_, then_ \(f(X)\in\mathfrak{g}_{1}\)_;_ 3. _If_ \(X\) _is an element in_ \(\mathcal{G}^{k,l+1}\)_, then_ \(f(X)\in\mathfrak{g}_{2}\)_;_ 4. _All the other case,_ \(f(X)=0\)_._ _We denote a linear map \(f\) with bidegree \(k|l\) by \(\|f\|=k|l\)._ We call a linear map \(f\)**homogeneous** if \(f\) has a bidegree. We denote the set of homogeneous linear maps of bidegree \(k|l\) by \(C^{k|l}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1}\oplus\mathfrak{ g}_{2})\). We have \(k+l\geq 0,k,l\geq-1\) because \(n\geq 1\) and \(k+1,l+1\geq 0\). By the above lift, we have the following isomorphisms: \[C^{k|0}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1} \oplus\mathfrak{g}_{2}) \cong \mathrm{Hom}(\wedge^{k}\mathfrak{g}_{1}\otimes\mathfrak{g}_{1}, \mathfrak{g}_{1})\oplus\mathrm{Hom}(\wedge^{k}\mathfrak{g}_{1}\otimes \mathfrak{g}_{2},\mathfrak{g}_{2})\oplus\mathrm{Hom}(\wedge^{k-1} \mathfrak{g}_{1}\otimes\mathfrak{g}_{2}\otimes\mathfrak{g}_{1},\mathfrak{g}_{ 2});\] \[C^{l|-1}(\mathfrak{g}_{1}\oplus\mathfrak{g}_{2},\mathfrak{g}_{1} \oplus\mathfrak{g}_{2}) \cong \mathrm{Hom}(\wedge^{l-1}\mathfrak{g}_{1}\otimes\mathfrak{g}_{1}, \mathfrak{g}_{2}).\] **Lemma 4.2**.: ([15]) _If \(\|f\|=k_{f}|l_{f}\) and \(\|g\|=k_{g}|l_{g}\), then \([f,g]^{MN}\) has the bidegree \(k_{f}+k_{g}|l_{f}+l_{g}\)._ **Proposition 4.3**.: ([16]) _Let \((V,\rho,\mu)\) be a representation of the pre-Lie algebra \((\mathfrak{g},\pi)\). Then we have_ \[[\pi+\rho+\mu,\pi+\rho+\mu]^{MN}=0.\] Let \((V,\rho,\mu)\) be a representation of the pre-Lie algebra \((\mathfrak{g},\pi)\), where \(\pi(x,y)=x\cdot y\). Denote the set of \(n\)-cochains by \[C^{n}(\mathfrak{g};V)=\mathrm{Hom}(\wedge^{n-1}\mathfrak{g}\otimes\mathfrak{g },V),\quad n\geq 1.\] By Proposition 4.3 and graded Jacobi identity, we define a coboundary operator \(\partial_{\pi+\rho+\mu}:C^{n}(\mathfrak{g},V)\longrightarrow C^{n+1}( \mathfrak{g},V)\) by \[\partial_{\pi+\rho+\mu}f=(-1)^{n-1}[\pi+\rho+\mu,f]^{MN},\quad\forall f\in C^{ n}(\mathfrak{g},V).\] In facet, since \(\pi+\rho+\mu\in C^{1|0}(\mathfrak{g}\oplus V,\mathfrak{g}\oplus V)\) and \(f\in C^{n-1}(\mathfrak{g}\oplus V,\mathfrak{g}\oplus V)\), by Lemma 4.2, we obtain that \([\pi+\rho+\mu,f]^{MN}\in C^{n+1|-1}(\mathfrak{g}\oplus V,\mathfrak{g}\oplus V)\). Thus, \([\pi+\rho+\mu,f]^{MN}\in C^{n+1}(\mathfrak{g},V)\), we obtain a well-defined cochain complex \((\oplus_{n=1}^{+\infty}C^{n}(\mathfrak{g};V),\partial_{\pi+\rho+\mu})\). 
More precisely, for all \(x_{1},\ldots,x_{n+1}\in\mathfrak{g}\), we have \[(\partial_{\pi+\rho+\mu}f)(x_{1},\ldots,x_{n+1}) = \sum_{i=1}^{n}(-1)^{i+1}\rho(x_{i})f(x_{1},\ldots,\hat{x_{i}}, \ldots,x_{n+1})\] \[+\sum_{i=1}^{n}(-1)^{i+1}\mu(x_{n+1})f(x_{1},\ldots,\hat{x_{i}}, \ldots,x_{n},x_{i})\] \[-\sum_{i=1}^{n}(-1)^{i+1}f(x_{1},\ldots,\hat{x_{i}}\ldots,x_{n},x _{i}\cdot x_{n+1})\] \[+\sum_{1\leq i<j\leq n}(-1)^{i+j}f([x_{i},x_{j}]_{C},x_{1},\ldots, \hat{x_{i}},\ldots,\hat{x_{j}},\ldots,x_{n+1}),\] which is a coboundary operator of pre-Lie algebra \((\mathfrak{g},\pi)\) with coefficients in the representation \((V,\rho,\mu)\). We can see more details in [8]. Let \((\mathfrak{g},\cdot,\ast)\) be a compatible pre-Lie algebra with \(\pi_{1}(x,y)=x\cdot y\), \(\pi_{2}(x,y)=x\ast y\) and \((V,\rho,\mu,\tilde{\rho},\tilde{\mu})\) a representation of \((\mathfrak{g},\cdot,\ast)\). **Proposition 4.4**.: _With the above notation, \((\pi_{1}+\rho+\mu,\pi_{2}+\tilde{\rho}+\tilde{\mu})\) is a Maurer-Cartan element of the bidifferential graded Lie algebra \((C^{*}(\mathfrak{g}\oplus V;\mathfrak{g}\oplus V),[\cdot,\cdot]^{MN},\mathrm{d }_{1}=0,\mathrm{d}_{2}=0)\), i.e._ \[[\pi_{1}+\rho+\mu,\pi_{1}+\rho+\mu]^{MN}=0,\quad[\pi_{2}+\tilde{\rho}+\tilde{ \mu},\pi_{2}+\tilde{\rho}+\tilde{\mu}]^{MN}=0,\quad[\pi_{1}+\rho+\mu,\pi_{2}+ \tilde{\rho}+\tilde{\mu}]^{MN}=0.\] Proof.: Since \((V,\rho,\mu)\) is a representation of the pre-Lie algebra \((\mathfrak{g},\cdot)\), by Proposition 4.3, we have \[[\pi_{1}+\rho+\mu,\pi_{1}+\rho+\mu]^{MN}=0.\] Similarly, since \((V,\tilde{\rho},\tilde{\mu})\) is a representation of the pre-Lie algebra \((\mathfrak{g},\ast)\), by Proposition 4.3, we have \[[\pi_{2}+\tilde{\rho}+\tilde{\mu},\pi_{2}+\tilde{\rho}+\tilde{\mu}]^{MN}=0.\] For all \(x,y,z\in\mathfrak{g},u,v,w\in V\), by (1), (3) and (4), we have \[[\pi_{1}+\rho+\mu,\pi_{2}+\tilde{\rho}+\tilde{\mu}]^{MN}(x+u,y+v,z+w)\] \[= (\pi_{1}+\rho+\mu)(\pi_{2}(x,y)+\tilde{\rho}(x)v+\tilde{\mu}(y)u, z+w)-(\pi_{1}+\rho+\mu)(\pi_{2}(y,x)+\tilde{\rho}(y)u+\tilde{\mu}(x)v,z+w)\] \[-(\pi_{1}+\rho+\mu)(x+u,\pi_{2}(y,z)+\tilde{\rho}(y)w+\tilde{\mu} (z)v)+(\pi_{1}+\rho+\mu)(y+v,\pi_{2}(x,z)+\tilde{\rho}(x)w+\tilde{\mu}(z)u)\] \[+(\pi_{2}+\tilde{\rho}+\tilde{\mu})(\pi_{1}(x,y)+\rho(x)v+\mu(y)u,z+w)-(\pi_{2}+\tilde{\rho}+\tilde{\mu})(\pi_{1}(y,x)+\rho(y)u+\mu(x)v,z+w)\] \[-(\pi_{2}+\tilde{\rho}+\tilde{\mu})(x+u,\pi_{1}(y,z)+\rho(y)w+\mu( z)v)+(\pi_{2}+\tilde{\rho}+\tilde{\mu})(y+v,\pi_{1}(x,z)+\rho(x)w+\mu(z)u)\] \[= \pi_{1}(\pi_{2}(x,y),z)+\rho(\pi_{2}(x,y))w+\mu(z)\tilde{\rho}(x) v+\mu(z)\tilde{\mu}(y)u\] \[-\pi_{1}(\pi_{2}(y,x),z)-\rho(\pi_{2}(y,x))w-\mu(z)\tilde{\rho}(y )u-\mu(z)\tilde{\mu}(x)v\] \[-\pi_{1}(x,\pi_{2}(y,z))-\rho(x)\tilde{\rho}(y)w-\rho(x)\tilde{ \mu}(z)v-\mu(\pi_{2}(y,z))u\] \[+\pi_{1}(y,\pi_{2}(x,z))+\rho(y)\tilde{\rho}(x)w+\rho(y)\tilde{ \mu}(z)u+\mu(\pi_{2}(x,z))v\] \[+\pi_{2}(\pi_{1}(x,y),z)+\tilde{\rho}(\pi_{1}(x,y))w+\tilde{\mu} (z)\rho(x)v+\tilde{\mu}(z)\mu(y)u\] \[-\pi_{2}(\pi_{1}(y,x),z)-\tilde{\rho}(\pi_{1}(y,x))w-\tilde{\mu} (z)\rho(y)u-\tilde{\mu}(z)\mu(x)v\] \[-\pi_{2}(x,\pi_{1}(y,z))-\tilde{\rho}(x)\rho(y)w-\tilde{\rho}(x) \mu(z)v-\tilde{\mu}(\pi_{1}(y,z))u\] \[+\pi_{2}(y,\pi_{1}(x,z))+\tilde{\rho}(y)\rho(x)w+\tilde{\rho}(y) \mu(z)u+\tilde{\mu}(\pi_{1}(x,z))v\] \[= 0,\] which implies that \[[\pi_{1}+\rho+\mu,\pi_{2}+\tilde{\rho}+\tilde{\mu}]^{MN}=0.\] This finishes the proof. 
We define the set of \(n\)-cochains (\(n\geq 1\)) by \[\mathfrak{C}^{n}(\mathfrak{g};V)=\underbrace{C^{n}(\mathfrak{g};V)\oplus C^{n} (\mathfrak{g};V)\oplus\cdots\oplus C^{n}(\mathfrak{g};V)}_{n\ copies}.\] Define the operator \(\partial:\mathfrak{C}^{n}(\mathfrak{g};V)\longrightarrow\mathfrak{C}^{n+1}( \mathfrak{g};V)\) by \[\partial^{1}f=(\partial_{\pi_{1}+\rho+\mu}f,\partial_{\pi_{2}+\tilde{\rho}+ \tilde{\mu}}f),\quad\forall f\in\operatorname{Hom}(\mathfrak{g},V),n=1.\] And for all \((f_{1},\ldots,f_{n})\in\mathfrak{C}^{n}(\mathfrak{g},V)\), \(2\leq i\leq n\), we have \[\partial^{n}(f_{1},\ldots,f_{n})=(\partial_{\pi_{1}+\rho+\mu}f_{1},\ldots, \underbrace{\partial_{\pi_{2}+\tilde{\rho}+\tilde{\mu}}f_{1}+\partial_{\pi_{1} +\rho+\mu}f_{i}}_{i},\ldots,\partial_{\pi_{2}+\tilde{\rho}+\tilde{\mu}}f_{n}).\] **Theorem 4.5**.: _The operator \(\partial:\mathfrak{C}^{n}(\mathfrak{g};V)\longrightarrow\mathfrak{C}^{n+1}( \mathfrak{g};V)\) defined as above satisfies \(\partial\circ\partial=0\)._ Proof.: By Proposition 4.4 and the graded Jacobi identity, similarly to the proof of Theorem 3.1, we have \(\partial\circ\partial=0\). **Definition 4.6**.: _Let \((V,\rho,\mu,\tilde{\rho},\tilde{\mu})\) be a representation of the compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\). The cohomology of the cochain complex \((\mathfrak{G}_{n=1}^{+\infty}(\mathfrak{C}^{n}(\mathfrak{g};V),\partial)\) is called the cohomology of \((\mathfrak{g},\cdot,\ast)\) with coefficients in the representation \((V,\rho,\mu,\tilde{\rho},\tilde{\mu})\). The corresponding \(n\)-th cohomology group is denoted by \(\mathcal{H}^{n}(\mathfrak{g};V)\)._ ### Abelian extensions of compatible pre-Lie algebras **Definition 4.7**.: _Let \((\mathfrak{g},\cdot,\ast)\) and \((V,\cdot_{V},\ast_{V})\) be two compatible pre-Lie algebras. An_ **extension** _of \((\mathfrak{g},\cdot,\ast)\) by \((V,\cdot_{V},\ast_{V})\) is a short exact sequence of compatible pre-Lie algebras morphisms:_ \[0\longrightarrow V\stackrel{{\tau}}{{\longrightarrow}}\hat{ \mathfrak{g}}\stackrel{{ p}}{{\longrightarrow}}\mathfrak{g} \longrightarrow 0,\] _where \((\hat{\mathfrak{g}},\cdot_{\mathfrak{g}},\ast_{\mathfrak{g}})\) is a compatible pre-Lie algebra._ _It is called an_ **abelian extension** _if \((V,\cdot_{V},\ast_{V})\) is an abelian compatible pre-Lie algebra, i.e. for all \(u,v\in V,u\cdot_{V}v=u\ast_{V}v=0\)._ **Definition 4.8**.: \(A\) **section** _of an extension \((\hat{\mathfrak{g}},\cdot_{\mathfrak{g}},\ast_{\mathfrak{g}})\) of a compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\) by \((V,\cdot_{V},\ast_{V})\) is a linear map \(s:\mathfrak{g}\longrightarrow\hat{\mathfrak{g}}\) such that \(p\circ s=\operatorname{Id}_{\mathfrak{g}}\)._ Let \((\hat{\mathfrak{g}},\cdot_{\mathfrak{g}},\ast_{\mathfrak{g}})\) be an abelian extension of a compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\) by \((V,\cdot_{V},\ast_{V})\) and \(s:\mathfrak{g}\longrightarrow\hat{\mathfrak{g}}\) a section. For all \(x,y\in\mathfrak{g}\), define linear maps \(\theta,\tilde{\theta}:\mathfrak{g}\otimes\mathfrak{g}\longrightarrow V\) respectively by \[\theta(x,y) = s(x)\cdot_{\hat{\mathfrak{g}}}s(y)-s(x\cdot y), \tag{27}\] \[\tilde{\theta}(x,y) = s(x)\ast_{\hat{\mathfrak{g}}}s(y)-s(x\ast y). 
\tag{26}\] And for all \(x,y\in\mathfrak{g},u\in V\), define \(\rho,\mu,\tilde{\rho},\tilde{\mu}:\mathfrak{g}\longrightarrow\mathfrak{gl}(V)\) respectively by \[\rho(x)(u)=s(x)\cdot_{\hat{\mathfrak{g}}}u,\quad\mu(x)(u)=u\cdot_{\hat{ \mathfrak{g}}}s(x), \tag{29}\] \[\tilde{\rho}(x)(u)=s(x)\ast_{\hat{\mathfrak{g}}}u,\quad\tilde{ \mu}(x)(u)=u\ast_{\hat{\mathfrak{g}}}s(x). \tag{28}\] Obviously, \(\hat{\mathfrak{g}}\) is isomorphic to \(\mathfrak{g}\oplus V\) as vector spaces. Transfer the compatible pre-Lie algebra structure on \(\hat{\mathfrak{g}}\) to that on \(\mathfrak{g}\oplus V\), we obtain a compatible pre-Lie algebra \((\mathfrak{g}\oplus V,\cdot_{(\theta,\rho,\mu)},\ast_{(\tilde{\theta},\tilde {\rho},\tilde{\mu})})\), where "\(\cdot_{(\theta,\rho,\mu)}\)" and "\(\ast_{(\tilde{\theta},\tilde{\rho},\tilde{\mu})}\)" are given by \[(x+u)\cdot_{(\theta,\rho,\mu)}(y+v) = x\cdot y+\theta(x,y)+\rho(x)(v)+\mu(y)(u),\quad\forall\ x,y\in \mathfrak{g},u,v\in V, \tag{31}\] \[(x+u)\ast_{(\tilde{\rho},\tilde{\rho},\tilde{\mu})}(y+v) = x\ast y+\tilde{\theta}(x,y)+\tilde{\rho}(x)(v)+\tilde{\mu}(y)(u ),\quad\forall\ x,y\in\mathfrak{g},u,v\in V. \tag{30}\] **Theorem 4.9**.: _With the above notation, \((V,\rho,\mu,\tilde{\rho},\tilde{\mu})\) is a representation of the compatible pre-Lie algebra \((\mathfrak{g},\cdot,\ast)\). Moreover, this representation is independent of the choice of sections._ Proof.: For all \(x,y\in\mathfrak{g}\), \(u\in V\), by the definition of a pre-Lie algebra, we have \[0 = (x\cdot_{(\theta,\rho,\mu)}y)\cdot_{(\theta,\rho,\mu)}u-x\cdot_{ (\theta,\rho,\mu)}(y\cdot_{(\theta,\rho,\mu)}u)-(y\cdot_{(\theta,\rho,\mu)}x) \cdot_{(\theta,\rho,\mu)}u+y\cdot_{(\theta,\rho,\mu)}(x\cdot_{(\theta,\rho, \mu)}u)\] \[= (x\cdot y+\theta(x,y))\cdot_{(\theta,\rho,\mu)}u-x\cdot_{(\theta,\rho,\mu)}\rho(y)u-(y\cdot x+\theta(y,x))\cdot_{(\theta,\rho,\mu)}u+y\cdot_{ (\theta,\rho,\mu)}\rho(x)u\] \[= \rho(x\cdot y)u-\rho(x)\rho(y)u-\rho(y\cdot x)u+\rho(y)\rho(x)u,\] and \[0 = (u\cdot_{(\theta,\rho,\mu)}x)\cdot_{(\theta,\rho,\mu)}y-u\cdot_{ (\theta,\rho,\mu)}(x\cdot_{(\theta,\rho,\mu)}y)-(x\cdot_{(\theta,\rho,\mu)}u) \cdot_{(\theta,\rho,\mu)}y+x\cdot_{(\theta,\rho,\mu)}(u\cdot_{(\theta,\rho, \mu)}y)\] \[= \mu(x)u\cdot_{(\theta,\rho,\mu)}y-u\cdot_{(\theta,\rho,\mu)}(x \cdot y+\theta(x,y))-\rho(x)u\cdot_{(\theta,\rho,\mu)}y+x\cdot_{(\theta,\rho, \mu)}\mu(y)u\] \[= \mu(y)\mu(x)u-\mu(x\cdot y)u-\mu(y)\rho(x)u+\rho(x)\mu(y)u,\] which implies that \[\rho([x,y]_{C} = \rho(x)\circ\rho(y)-\rho(y)\circ\rho(x),\] \[\mu(y)\circ\mu(x)-\mu(x\cdot y) = \mu(y)\circ\rho(x)-\rho(x)\circ\mu(y).\] Thus, \((V,\rho,\mu)\) is a representation of the pre-Lie algebra \((\mathfrak{g},\cdot)\). Similarly, \((V,\tilde{\rho},\tilde{\mu})\) is a representation of the pre-Lie algebra \((\mathfrak{g},\ast)\). 
\[0 = (x\ast_{(\tilde{\theta},\tilde{\rho},\tilde{\mu})}y)\cdot_{( \theta,\rho,\mu)}u+(x\cdot_{(\theta,\rho,\mu)}y)\ast_{(\tilde{\theta}, \tilde{\rho},\tilde{\mu})}u-x\cdot_{(\theta,\rho,\mu)}(y\ast_{(\tilde{\theta}, \tilde{\rho},\tilde{\mu})}u)-x\ast_{(\tilde{\theta},\tilde{\rho},\tilde{\mu})}(y \cdot_{(\theta,\rho,\mu)}u)\] \[-(y*_{(\tilde{\theta},\tilde{\varphi},\tilde{\mu})}x)\cdot_{(\theta, \varphi,\mu)}u-(y\cdot_{(\theta,\varphi,\mu)}x)*_{(\tilde{\theta},\tilde{\varphi}, \tilde{\mu})}u+y\cdot_{(\theta,\varphi,\mu)}(x*_{(\tilde{\theta},\tilde{\varphi}, \tilde{\mu})}u)+y*_{(\tilde{\theta},\tilde{\varphi},\tilde{\mu})}(x\cdot_{( \theta,\varphi,\mu)}u)\] \[= (x*y+\tilde{\theta}(x,y))\cdot_{(\theta,\varphi,\mu)}z-x\cdot_{( \theta,\varphi,\mu)}(y\cdot z+\theta(x,z))\] \[= \theta(x\cdot y,z)+\mu(z)\theta(x,y)-\theta(x,y\cdot z)-\rho(x) \theta(y,z)\] \[-\theta(y\cdot x,z)-\mu(z)\theta(y,x)+\theta(y,x\cdot z)+\rho(y) \theta(x,z)\] \[= -\partial_{\pi_{1}+\varphi+\mu}\theta(x,y)\] which implies that \(\partial_{\pi_{1}+\varphi+\mu}\theta=0.\) Similarly, we have \(\partial_{\pi_{2}+\tilde{\varphi}+\tilde{\mu}}\tilde{\theta}=0.\) For all \(x,y,z\in\mathfrak{g}\), by (1), we have \[0 = (x*_{(\tilde{\theta},\tilde{\varphi},\tilde{\mu})}y)\cdot_{( \theta,\varphi,\mu)}z+(x*_{(\theta,\varphi,\mu)}y)*_{(\tilde{\theta},\tilde{ \varphi},\tilde{\mu})}z-x\cdot_{(\theta,\varphi,\mu)}(y*_{(\tilde{\theta}, \tilde{\varphi},\tilde{\mu})}z)-x*_{(\tilde{\theta},\tilde{\varphi},\tilde{ \mu})}(y\cdot_{(\theta,\varphi,\mu)}z)\] \[-(y*_{(\tilde{\theta},\tilde{\varphi},\tilde{\mu})}x)\cdot_{( \theta,\varphi,\mu)}z-(y*_{(\theta,\varphi,\mu)}x)*_{(\tilde{\theta},\tilde{ \varphi},\tilde{\mu})}z+y\cdot_{(\theta,\varphi,\mu)}(x*_{(\tilde{\theta}, \tilde{\varphi},\tilde{\mu})}z)+y*_{(\tilde{\theta},\tilde{\varphi},\tilde{ \mu})}(x\cdot_{(\theta,\varphi,\mu)}z)\] \[= (x*y+\tilde{\theta}(x,y))\cdot_{(\theta,\varphi,\mu)}z+(x*y+ \theta(x,y))*_{(\tilde{\theta},\tilde{\varphi},\tilde{\mu})}z\] \[-x\cdot_{(\theta,\varphi,\mu)}(y*z+\tilde{\theta}(y,z))-x*_{( \tilde{\theta},\tilde{\varphi},\tilde{\mu})}(y\cdot z+\theta(y,z))\] \[-(y*_{(\theta,\varphi,\mu)}x)\cdot_{(\theta,\varphi,\mu)}z-(y*_ {(\theta,\varphi,\mu)}x)*_{(\tilde{\theta},\tilde{\varphi},\tilde{\mu})}(x \cdot_{(\theta,\varphi,\mu)}z)+y*_{(\tilde{\theta},\tilde{\varphi},\tilde{ \mu})}(x\cdot_{(\theta,\varphi,\mu)}z)\] \[= (x*y+\tilde{\theta}(x,y))\cdot_{(\theta,\varphi,\mu)}z+(x*y+ \theta(x,y))*_{(\tilde{\theta},\tilde{\varphi},\tilde{\mu})}z\] \[-x\cdot_{(\theta,\varphi,\mu)}(y*z+\tilde{\theta}(y,z))-x*_{( \tilde{\theta},\tilde{\varphi},\tilde{\mu})}(y\cdot z+\theta(y,z))\] \[-(y*_{(\theta,\varphi,\mu)}x)\cdot_{(\theta,\varphi,\mu)}z-(y*_ {(\theta,\varphi,\mu)}x)*_{(\tilde{\theta},\tilde{\varphi},\tilde{\mu})}(x \cdot_{(\theta,\varphi,\mu)}z)\] \[= (x*y)\theta(x*_{(\theta,\varphi,\mu)}y)-(y*_{(\theta,\varphi,\mu)} x)*_{(\tilde{\theta},\tilde{\varphi},\tilde{\mu})}(x\cdot_{(\theta,\varphi,\mu)}y)\] \[= (x*y)\theta(x*_{(\theta,\varphi,\mu)}y)-(y*_{(\theta,\varphi,\mu)} x)*_{(\tilde{\theta},\tilde{\varphi},\ \[= \theta(x*y,z)+\mu(z)\tilde{\partial}(x,y)+\tilde{\partial}(x\cdot y,z)+ \tilde{\mu}(z)\theta(x,y)\] \[-\theta(x,y*z)-\rho(x)\tilde{\theta}(y,z)-\tilde{\theta}(x,y\cdot z )-\tilde{\rho}(x)\theta(y,z)\] \[-\theta(y*x,z)-\mu(z)\tilde{\theta}(y,x)-\tilde{\theta}(y\cdot x,z)-\tilde{\mu}(z)\theta(y,x)\] \[+\theta(y,x*z)+\rho(y)\tilde{\theta}(x,z)+\tilde{\theta}(y,x\cdot z )+\tilde{\rho}(y)\theta(x,z)\] \[= -(\partial_{\pi_{2}+\tilde{\rho}+\tilde{\mu}}\theta+\partial_{\pi 
_{1}+\rho+\mu}\tilde{\theta})(x,y,z),\] which implies that \(\partial_{\pi_{2}+\tilde{\rho}+\tilde{\mu}}\theta+\partial_{\pi_{1}+\rho+\mu} \tilde{\theta}=0\). Thus, we have \[\partial(\theta,\tilde{\theta})=(\partial_{\pi_{1}+\rho+\mu}\theta,\partial_{ \pi_{2}+\tilde{\rho}+\tilde{\mu}}\theta+\partial_{\pi_{1}+\rho+\mu}\tilde{ \theta},\partial_{\pi_{2}+\tilde{\rho}+\tilde{\mu}}\tilde{\theta})=0.\] This finishes the proof. **Definition 4.11**.: _Let \((\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},*_{\hat{\mathfrak{g}}_{ 1}})\) and \((\hat{\mathfrak{g}}_{2},\cdot_{\hat{\mathfrak{g}}_{2}},*_{\hat{\mathfrak{g}}_{ 2}})\) be two abelian extensions of a compatible pre-Lie algebra \((\mathfrak{g},\cdot,*)\) by \((V,\cdot_{V},*_{V})\). They are said to be_ **isomorphic** _if there exists a compatible pre-Lie algebra isomorphism \(\zeta:(\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},*_{\hat{ \mathfrak{g}}_{1}})\longrightarrow(\hat{\mathfrak{g}}_{2},\cdot_{\hat{ \mathfrak{g}}_{2}},*_{\hat{\mathfrak{g}}_{2}})\) such that the following diagram is commutative:_ **Lemma 4.12**.: _Let \((\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},*_{\hat{\mathfrak{g}}_{ 1}})\) and \((\hat{\mathfrak{g}}_{2},\cdot_{\hat{\mathfrak{g}}_{2}},*_{\hat{\mathfrak{g}}_{ 2}})\) be two isomorphic abelian extensions of a compatible pre-Lie algebra \((\mathfrak{g},\cdot,*)\) by \((V,\cdot_{V},*_{V})\). Then they are give rise to the same representation of \((\mathfrak{g},\cdot,*)\)._ Proof.: Let \(s_{1}:\mathfrak{g}_{1}\longrightarrow\hat{\mathfrak{g}}_{1}\) and \(s_{2}:\mathfrak{g}_{2}\longrightarrow\hat{\mathfrak{g}}_{2}\) be two sections of \((\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},*_{\hat{\mathfrak{g}}_{ 1}})\) and \((\hat{\mathfrak{g}}_{2},\cdot_{\hat{\mathfrak{g}}_{2}},*_{\hat{\mathfrak{g}}_{ 2}})\) respectively. By Theorem 4.9, we obtain that \((V,\rho_{1},\mu_{1},\tilde{\rho}_{1},\tilde{\mu}_{1})\) and \((V,\rho_{2},\mu_{2},\tilde{\rho}_{2},\tilde{\mu}_{2})\) are their representations respectively. Define \(s^{\prime}_{1}:\mathfrak{g}_{1}\longrightarrow\hat{\mathfrak{g}}_{1}\) by \(s^{\prime}_{1}=\zeta^{-1}\circ s_{2}\). Since \(\zeta:(\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},*_{\hat{ \mathfrak{g}}_{1}})\longrightarrow(\hat{\mathfrak{g}}_{2},\cdot_{\hat{ \mathfrak{g}}_{2}},*_{\hat{\mathfrak{g}}_{2}})\) is a compatible pre-Lie algebra isomorphism satisfying the commutative diagram in Definition 4.11, by \(p_{2}\circ\zeta=p_{1}\), we have \[p_{1}\circ s^{\prime}_{1}=p_{2}\circ\zeta\circ\zeta^{-1}\circ s_{2}=\mathrm{Id} _{\mathfrak{g}}.\] Thus, we obtain that \(s^{\prime}_{1}\) is a section of \((\hat{\mathfrak{g}}_{1},\cdot_{\hat{\mathfrak{g}}_{1}},*_{\hat{\mathfrak{g}}_{ 1}})\). For all \(x\in\mathfrak{g},u\in V\), by \(\zeta\mid_{V}=\mathrm{Id}_{V}\), we have \[\rho_{1}(x)(u)=s^{\prime}_{1}(x)\cdot_{\hat{\mathfrak{g}}_{1}}u=( \zeta^{-1}\circ s_{2})(x)\cdot_{\hat{\mathfrak{g}}_{1}}u=\zeta^{-1}(s_{2}(x) \cdot_{\hat{\mathfrak{g}}_{2}}u)=\rho_{2}(x)(u),\] \[\mu_{1}(x)(u)=u\cdot_{\hat{\mathfrak{g}}_{1}}s^{\prime}_{1}(x)=u \cdot_{\hat{\mathfrak{g}}_{1}}(\zeta^{-1}\circ s_{2})(x)=\zeta^{-1}(u\cdot_{ \hat{\mathfrak{g}}_{2}}s_{2}(x))=\mu_{2}(x)(u),\] which implies that \(\rho_{1}=\rho_{2}\) and \(\mu_{1}=\mu_{2}\). Similarly, we have \(\tilde{\rho}_{1}=\tilde{\rho}_{2}\) and \(\tilde{\mu}_{1}=\tilde{\mu}_{2}\). This finishes the proof. 
So in the sequel, we fixed a representation \((V,\rho,\mu,\tilde{\rho},\tilde{\mu})\) of a compatible pre-Lie algebra \((\mathfrak{g},\cdot,*)\) and consider abelian extensions that induce the given representation. **Theorem 4.13**.: _Abelian extensions of a compatible pre-Lie algebra \((\mathfrak{g},\cdot,*)\) by \((V,\cdot_{V},*_{V})\) are classified by \(\mathcal{H}^{2}(\mathfrak{g};V)\)._ Proof.: Let \((\hat{\mathfrak{g}},\cdot_{\hat{\mathfrak{g}}},*_{\hat{\mathfrak{g}}})\) be an abelian extension of a compatible pre-Lie algebra \((\mathfrak{g},\cdot,*)\) by \((V,\cdot_{V},*_{V})\). Choosing a section \(s:\mathfrak{g}\longrightarrow\hat{\mathfrak{g}}\), by Theorem 4.10, we obtain that \((\theta,\tilde{\theta})\) is a 2-cocycle. Now we show that the cohomological class of \((\theta,\tilde{\theta})\) does not depend on the choice of sections. In fact, let \(s\) and \(s^{\prime}\) be two different sections. Define \(\varphi:\mathfrak{g}\longrightarrow V\) by \(\varphi(x)=s(x)-s^{\prime}(x)\). Then for all \(x,y\in\mathfrak{g}\), we have \[\theta(x,y) = s(x)\cdot_{\hat{\mathfrak{g}}}s(y)-s(x\cdot y)\] \[= \zeta(x\cdot y+\theta_{1}(x,y)+\rho(x)v+\mu(y)u)-(x+u+\varphi(x)) \cdot_{(\theta_{2},\rho,\mu)}(y+v+\varphi(y))\] \[= \theta_{1}(x,y)+\varphi(x\cdot y)-\theta_{2}(x,y)-\rho(x)\varphi(y)- \mu(y)\varphi(x)\] \[= \theta_{1}(x,y)-\theta_{2}(x,y)-\partial_{\pi_{1}+\rho+\mu}\varphi\] \[= 0,\] which implies that \[\zeta\big{(}(x+u)\cdot_{(\theta_{1},\rho,\mu)}(y+v)\big{)}=\zeta(x+u) \cdot_{(\theta_{2},\rho,\mu)}\zeta(y+v). \tag{32}\] Similarly, we have \[\zeta\big{(}(x+u)*_{(\tilde{\theta}_{1},\tilde{\rho},\tilde{\mu})}(y+v)\big{)} =\zeta(x+u)*_{(\tilde{\theta}_{2},\tilde{\rho},\tilde{\mu})}\zeta(y+v). \tag{33}\] Thus, by (32) and (33), \(\zeta\) is a compatible pre-Lie algebra isomorphism from \((\mathfrak{g}\oplus V,\cdot_{(\theta_{1},\rho,\mu)},*_{(\tilde{\theta}_{1}, \tilde{\rho},\tilde{\mu})})\) to \((\mathfrak{g}\oplus V,\cdot_{(\theta_{2},\rho,\mu)},*_{(\tilde{\theta}_{2}, \tilde{\rho},\tilde{\mu})})\). Moreover, it is obvious that the diagram in Definition 4.11 is commutative. This finishes the proof. **Acknowledgement:** This work is supported by NSF of Jilin Province (No. YDZJ202201ZYTS589), NNSF of China (Nos. 12271085, 12071405) and the Fundamental Research Funds for the Central Universities.
2308.12248
Holographic RIS Empowered THz Communications with Hardware Imperfections under Adverse Weather Conditions
This paper focuses on providing a theoretical framework for the evaluation of the performance of holographic reconfigurable intelligent surface (HRIS) empowered terahertz (THz) wireless systems under fog conditions. In more detail, we present a comprehensive methodology for evaluating the geometric losses of the end-to-end channel. Moreover, the stochastic nature of the end-to-end channel is characterized by novel closed-form probability density and cumulative distribution functions. Building upon them, the outage probability and the throughput of the system are extracted in closed form. These formulas account for the impact of transceiver hardware imperfections and are expected to become useful tools for the design of HRIS empowered THz wireless systems due to its remarkable engineering insights.
Alexandros-Apostolos A. Boulogeorgos, Stylianos Trevlakis, Theodoros A. Tsiftsis
2023-08-23T16:48:23Z
http://arxiv.org/abs/2308.12248v1
Holographic RIS Empowered THz Communications with Hardware Imperfections under Adverse Weather Conditions ###### Abstract This paper focuses on providing a theoretical framework for the evaluation of the performance of holographic reconfigurable intelligent surface (HRIS) empowered terahertz (THz) wireless systems under fog conditions. In more detail, we present a comprehensive methodology for evaluating the geometric losses of the end-to-end channel. Moreover, the stochastic nature of the end-to-end channel is characterized by novel closed-form probability density and cumulative distribution functions. Building upon them, the outage probability and the throughput of the system are extracted in closed form. These formulas account for the impact of transceiver hardware imperfections and are expected to become useful tools for the design of HRIS empowered THz wireless systems due to its remarkable engineering insights. Fog, hardware imperfections, holographic reconfigurable intelligent surface (HRIS), outage probability, stochastic characterization, throughput. ## I Introduction ### _State of the Art & Motivation_ Bringing the fiber quality of experience into the wireless world through the development of wireless fiber extenders capable of supporting data rates in the Tb/s region is a fundamental promise of terahertz (THz) technologies [1]. THz wireless fiber extenders aim to establish low-cost medium to long range high-bandwidth links for mid- and back-hauling without demanding the time-consuming processes of optical fiber installation [2, 3]. As a consequence, from a business point of view, flexible almost-instance high-data-rate connectivity is expected to bring important economic benefits, while allowing internet democratization [4, 5]. Nevertheless, THz wireless communication systems are susceptible to a number of drawbacks. First of all, they suffer from high pathloss due to the high-transmission frequencies as well as molecular resonances that lead to molecular adsorptions [6]. Motivated by this, a great amount of research effort was put on modeling the deterministic path loss of THz wireless links [7, 8, 9, 10]. In more detail, in [7], the authors employed the radiation theory in order to extract the channel characteristics of nano-scale THz wireless links. In [8], an electromagnetic analysis was conducted for the propagation characterization of in-body THz links, whilst in [9], the authors presented a path loss model for in-body nano-scale THz wireless systems. A simplified path loss model for THz systems operating in the \(275-400\,\mathrm{GHz}\) region was presented in [10]. Another important drawback of THz wireless systems is the signal fluctuation due to atmospheric conditions. The source of this phenomenon lies in the variation of the reflection index, as a result of inhomogeneities in atmospheric pressure as well as temperature and water molecular density across the propagation path [11]. Inspired by this, the authors of [12] used a weather emulating chamber in order to quantify in laboratory conditions the impact of rain in THz and infrared links. In [13], the authors quantified the joint impact of rain and misalignment in THz wireless systems in terms of outage probability. The detrimental impact of turbulence in THz wireless systems was discussed in [14]. The joint impact of snow and misalignment in THz wireless systems was evaluated in terms of bit error rate and channel capacity in [15]. 
The authors of [16] quantified the impact of fog in THz wireless systems in terms of bit error rate in laboratory environment. Finally, in [17], the authors experimentally demonstrated the distortion to the THz signal that was caused due to fog. To counterbalance the significant path loss as well as the signal fluctuation due to the atmospheric conditions and establish long-range connections, high-directional antennas, such as Cassegrain antennas, are employed at both the transmitter (TX) and the receiver (RX) ends, establishing high-directional links [18]. However, these links are sensitive to blockage, due to the high penetration loss of the THz transmission. Therefore, the need to create alternative paths between the TX and RX arises. On another front, with the explosive growth of data and bandwidth requirements in beyond fifth generation (5G) and sixth generation (6G) wireless communication networks, there is a unprecedented demand for improving key performance indicators (KPIs) such as energy efficiency, spectral efficiency, coverage and serviceability at very high levels [19, 20]. To this end, reconfigurable intelligent surfaces (RIS) are emerging as the revolutionary technology based on programmable metamaterial-based surfaces and seem to be the fundamental component of 6G communication infrastructures in improving the above mentioned KPIs [21, 22]. In particular, conventional RIS plates comprises an array of nearly passive reflecting meta-elements, which are able to control impinging signals through a field programmable gate array (FPGA) microcontroller. As a result, RIS adapts both the amplitude and phase of incident signals to the specific wireless environment conditions in order to either increase the coverage or cancel interference, and highlight its effectiveness in various applications such as drone-based communications, integrated sensing and communications, full-duplex communications, etc. [23, 24, 25]. Above and beyond all other considerations, due to the adaptability of RIS in creating alternative wireless paths in highly penetrated path loss environments i.e., THz communication systems, RIS appears to be a reliable technology to to combat communication blockage [26, 27]. Specifically, the authors in [26] presented for the first time key performance results (path loss and error rate) of RIS-enabled wireless communication systems, and conducted theoretical comparisons with well-established technologies such as such as relaying, multiple-input multiple-output (MIMO) beamforming, and backscatter communications. In [27], the path loss of RIS-assisted wireless systems was analytically investigated based on the vector generalization of Green's theorem. In [28], an improved path loss model for RIS-enabled wireless communications at mmWave bands was proposed. In particular, practical electromagnetic phenomena (e.g., gain patterns, received power, phase errors, and specular reflection loss per meta-element), were investigated. A similar approach is followed in [29] where an angle-dependent path loss model for RISs was proposed based on the radiation patterns of all involved parts of the communication system. However, the proposed prototype in [28, 29] are based on PIN diodes, which is impractical for sub-THz bands [30]. Furthermore, RIS is highly affected by the frequency range of the incident signal to the RIS meta-elements. 
Therefore, RIS's reflection and phase adjustment capability employing either varactors or PIN diodes is limited over RF frequency range and, specifically, between 100 MHz - 10 GHz. Therefore, for much higher frequencies at the sub-THz bands, i.e., 0.1-0.3 THz, micro-electromechanical system (MEMS), mechanical approach, liquid crystal, and microfluidics appear to be the most suitable tuning RIS technologies [31]. It is worth mentioning that the latter two tuning technologies cover a wide range of THz bandwidths compared to the two first mentioned tuning ones [30, 32]. The operation of RIS at sub-THz is also feasible with tuning technologies involving complementary metal-oxide-semiconductor (CMOS) transistors, Schottky diodes, or high electron mobility transistors (HEMTs) [30, 33]. Very recently, the authors in [34], have demonstrated for the first time a THz point-to-point RIS-aided transmission based on real-time beam tracking. In this experiment, the THz signals near \(0.34\) THz are controlled via a RIS plate consisted of GaN HEMTs Despite the fact that RIS is potentially considered as the key enabling technology to drive the evolution of beyond 5G and 6G era, it is true that confronts critical challenges. For example, its nearly passive nature and the absence of any active components makes conventional RIS to perform channel estimation and beam tracking as one of the most challenging issues. Consequently, in real life setups, conventional RISs constraint the bandwidth used for transmission and the achievable data rate. To address the above issues in conventional RIS, the holographic RIS (HRIS) (a.k.a. as reconfigurable holographic surface), has been emerged recently as an alternate technology [31, 35, 36]. HRIS is considered as a limited surface area that integrates a virtually infinite number of tiny metamaterial-based radiation elements in order to form a spatially continuous transceiver aperture that can achieve the holographic beamforming. HRIS is able to support channel estimation and act like continuous surface for higher frequencies up to the THz bandwidth compared to conventional RIS [31]. In one of the most representative works on HRIS [37], the authors proposed the concept of HRIS and studied its application to massive MIMO in THz frequency range. More, in this pioneer work, the authors have addressed the transmission design of nearly-passive HRISs with spatially continuous apertures by designing the beam pattern of a RIS with discrete meta-elements and generalize their beamforming framework to a surface with closely spaced (i.e. continuous) meta-elements yielding the HRIS design. In [38], the authors studied the problem of beam pattern design of RIS-assisted THz communication systems with two-dimensional finite impulse response filter design, and the above method proved to facilitate applications such as wireless positioning and over-the-air computation. In [39], the joint phase and delay wideband precoding was investigated to tackle the severe problem of array gain loss of RIS-aided THz communications due to beam split at RIS. Additionally, the authors in [40] have investigated the performance of HRIS non-orthogonal multiple access (NOMA) networks, whereas the same authors in [41] have very recently evaluated the outage of HRISs-enabled THz NOMA systems in the presence of misalignment errors and for either perfect or imperfect successive interference cancellation. 
Motivated by the fact that HRIS supports the same functionalities as conventional RIS, but most importantly due to its holographic pattern is an attractive candidate for RIS-aided wireless communication systems working at the sub-THz band, in this paper we study the performance of HRIS-empowered THz wireless systems with hardware imperfections at both ends and under foggy weather conditions. ### _Our Contributions_ In contrast that most of the published works study the application of conventional RIS in either sub-6GHz bands or mmwave, in this paper, the communication theoretical framework of HRIS in sub-THz bands over severe weather conditions (fog) is analytically studied under the presence of transceiver hardware imperfections. The contributions of our work can be summarized as follows: * The end-to-end geometric losses of the proposed system are calculated by considering the following channel coefficients: free-space geometric channel gain, molecular absorption gain, and fog gain. Our analysis shows that the geometric losses are quite affected by the following parameters: the relative position of HRIS in respect to the TX, HRIS size and its orientation, TX/RX antenna gains, the signal frequency, as well as the weather conditions. * The statistics of the end-to-end channel coefficients of HRIS-enabled THz systems is analytically evaluated in closed form. Based on the latter derivations, new closed-form expressions for the outage probability and data throughput are extracted when hardware imperfections take place at both ends of the considered system in foggy weather conditions. * New engineering insights are extracted such as how the geometric loss is affected by various frequencies in the sub-THZ band, and what is the optimal placement of HRIS between transmitter and receiver. Additionally, our paper provides a comprehensive study on the impact of the placement and orientation of HRIS to both outage and throughput performance metrics under fixed values of frequency range, spectral efficiency, and transceiver hardware imperfection parameters. ### _Organization of the paper_ This paper is structured as follows. In the sequel, a few basic mathematical symbols used in this paper are given. The system and signal model studied in this paper are introduced in Section II. In Section III, the geometric losses and the corresponding statistical analysis of the end-to-end channels are analytically presented, while both the outage probability and throughput are obtained in closed form. In Section IV, the analytical expressions are corroborated and cross-compared with Monte Carlo simulation results. In the same Section, some useful engineering insights are revealed and discussed in much detail. Finally, some concluding remarks are given in Section V. ### _Notations_ In this paper, \(\sqrt{x}\) returns the square root of \(x\). The cosine and exponential functions are denoted by \(\cos\left(\cdot\right)\) and \(\exp\left(\cdot\right)\), respectively. The natural logarithm is represented as \(\ln(\cdot)\), while \(\log_{a}(\cdot)\) is the logarithm to base \(a\). The operator \(\min\left(a,b\right)\) returns the minimum between \(a\) and \(b\). The gamma and the upper incomplete gamma functions are respectively represented as \(\Gamma(\cdot)\)[42, eq. (8.310)] and \(\Gamma\left(\cdot,\cdot\right)\)[42, eq. (8.350/2)]. The Kummer confluent hypergeometric function is denoted by \({}_{1}F_{1}\left(\cdot,\cdot,\cdot\right)\)[42, eq. (9.14/1)]. 
Moreover, \(H_{A}\left(a,b;c;x_{1},x_{2}\right)\) stands for the Lauricella hypergeometric function [43]. Finally, the Pochhammer operator is denoted by \((n)_{m}\). ## II System and signal model As demonstrated in Fig. 1, we consider a long-range THz wireless fiber extender that is established through a HRIS. By assuming that the transmitter (TX)-HRIS and HRIS-receiver (RX) links experience fog, the baseband equivalent received signal can be expressed as \[r=h_{g}\,h_{1}\,h_{2}\left(s+n_{t}\right)+n_{r}+n, \tag{1}\] where \(h_{g}\) stands for the end-to-end geometric gain channel coefficient, while \(h_{1}\) and \(h_{2}\) are independent random processes that model the impact of fog in the TX-HRIS and HRIS-RX links, respectively. The transmitted signal is denoted by \(s\), whereas \(n_{t}\) and \(n_{r}\) stands for the distortion noises due to hardware imperfections at the TX and RX, respectively. The additive white Gaussian noise is modeled by the zero-mean complex Gaussian random process \(n\) of variance \(\sigma_{n}^{2}\). The end-to-end geometric gain channel coefficient can be analyzed as \[h_{g}=h_{g,f}\,h_{g,m}\,h_{f}, \tag{2}\] where \(h_{g,f}\) is the free space geometric gain channel coefficient that can be expressed as [44, eq. (9)] \[h_{g,f}=\frac{c\,\sqrt{G_{t}\,G_{r}}\,l_{h}\,l_{v}}{4\pi\,f\,d_{1}\,d_{2}}\cos \left(\psi\right), \tag{3}\] with \(c\) being the speed of light, \(G_{t}\) and \(G_{r}\) standing for the transmission and reception antenna gains, respectively, while \(l_{h}\) and \(l_{v}\) representing the length and width of the HRIS. Also, \(f\) is the transmission frequency, whereas \(d_{1}\) and \(d_{2}\) are the TX-HRIS and HRIS-RX distances, respectively. Finally, \(\psi\) denotes the beam incident angle at the HRIS. Additionally, \(h_{g,m}\) is the molecular absorption gain coefficient that can be obtained as \[h_{g,m}=\exp\left(\frac{1}{2}\,\kappa_{m}\left(f,T,\phi,p\right)\,(d_{1}+d_{2} )\right), \tag{4}\] where \(\kappa_{m}\) is the molecular absorption coefficient, \(T\) is the atmospheric temperature, \(\phi\) stands for the relative humidity, and \(p\) represents the atmospheric pressure. Based on [7], the molecular absorption coefficient can be written as \[\kappa_{m}\left(f,T,\phi,p\right)=\sum_{k,l}\kappa_{k,l}^{g}\left(f,T,\phi,p \right), \tag{5}\] where \(\kappa_{k,l}^{g}\left(f,T,\phi,p\right)\) represents the molecular absorption coefficient for the isotoplogue \(k\) of the \(l-\)th gas and can be expressed as \[\kappa_{k,l}^{g}\left(f,T,\phi,p\right)=\frac{p}{p_{o}}\,\frac{T_{o}}{T}Q_{k,l}^{g}\,\sigma_{k,l}^{g}\left(f,T,\phi,p\right), \tag{6}\] with \(p_{o}\) and \(T_{o}\) denoting the standard pressure and temperature respectively, while \(Q_{k,l}^{g}\) and \(\sigma_{k,l}^{g}\left(f,T,\phi,p\right)\) respectively being Fig. 1: Typical model of an HRIS empowered THz communication system. the molecular volumetric density and the absorption cross section for the isotopologue \(k\) of the \(l-\)th gas. The molecular volumetric density can be evaluated as \[Q_{k,l}^{g}=\frac{p}{R\,T}\,q_{k,l}\,N_{A}, \tag{7}\] where \(R\) is the gas constant, \(N_{A}\) stands for the Avogadro constant, and \(q_{k,l}\) represents the mixing ratio for the isotopologue \(k\) of the \(l-\)th gas. Note that in the high resolution transmission (HITRAN) database, each isologue contribution is scaled based on its natural abundance in the medium. As a consequence, instead of \(q_{k,l}\), the mixing ratio of the specific gas is employed. 
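As a quick numerical illustration of (7) (the input values below are our own examples, not taken from the setup of Section IV), the volumetric density of water vapour with a \(1\%\) mixing ratio at standard conditions can be evaluated in a few lines of Python:

```python
from scipy.constants import R, N_A  # gas constant [J/(mol K)] and Avogadro constant [1/mol]

p, T, q_h2o = 101325.0, 296.0, 0.01   # pressure [Pa], temperature [K], example mixing ratio
Q_h2o = p / (R * T) * q_h2o * N_A     # eq. (7): number of absorbing molecules per m^3
print(f"Q = {Q_h2o:.3e} molecules/m^3")  # about 2.5e23 for these inputs
```

Densities of this order, weighted by the per-molecule cross sections introduced next, are what produce the pronounced water-resonance peaks observed later in Section IV.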
The absorption cross section for the isotopologue \(k\) of the \(l-\)th gas can be calculated as \[\sigma_{k,l}^{g}\left(f,T,\phi,p\right)=S_{k,l}\,G_{k,l}\left(f,T,\phi,p\right), \tag{8}\] where \(S_{k,l}\) is the absorption strength of a specific type of molecule and can be directly extracted from the HITRAN database, while \(G_{k,l}\left(f,T,\phi,p\right)\) is the spectral line shape for the isotopologue \(k\) of the \(l-\)th gas that can be evaluated as in [45] \[G_{k,l}\left(f,T,\phi,p\right)=\frac{f}{f_{k,l}}\,\frac{\tanh\left(\frac{h\,c \,f}{2\,k_{b}\,T}\right)}{\tanh\left(\frac{h\,c\,f_{k,l}}{2\,k_{b}\,T}\right) }\,F_{k,l}\left(f\right), \tag{9}\] with \(h\) and \(k_{b}\) respectively being the Plank and Bolzmann constants, \(f_{k,l}\) representing the resonant frequency for the isotopologue \(k\) of the \(l-\)th gas, and \(F_{k,l}\left(f\right)\) denoting the Van Vleck-Weisskopf asymmetric line shape that can be calculated as in [46] \[F_{k,l}\left(f\right)=100\,c\frac{\alpha_{k,l}}{\pi}\,\frac{f}{f _{k,l}}\] \[\times\left(\frac{1}{\left(f-f_{k,l}\right)^{2}+\alpha_{k,l}^{2} }+\frac{1}{\left(f+f_{k,l}\right)^{2}+\alpha_{k,l}^{2}}\right), \tag{10}\] where \(\alpha_{k,l}\) is the Lorentz half-width that can be expressed as \[\alpha_{k,l}=\left(\left(1-q_{k,l}\right)\alpha_{o}^{a}+q_{k,l}\,a_{k,l}^{o} \right)\frac{p}{p_{o}}\left(\frac{T_{o}}{T}\right)^{t}, \tag{11}\] with \(t\) being the temperature broadening coefficient, while \(\alpha_{o}^{a}\) and \(a_{k,l}^{o}\) respectively denoting Lorentz half-width of air and the reference value of the Lorentz half-width for the isotopologue \(k\) of the \(l-\)th gas. Note that \(t\), \(\alpha_{o}^{a}\), and \(a_{k,l}^{o}\) can be directly extracted from HITRAN. Note that except from the aforementioned model for the molecular absorption coefficient, a number of approximations including [47, 10, 48] have been published. Although these approximations are very accurate and have been widely used, since their application is in the area of \(100-500\,\mathrm{GHz}\), in this paper, we decided to use a more general model that can cover all the THz band. Finally, \(h_{f}\) denotes the fog gain, which based on [49], can be analyzed as \[h_{f}=10^{-\frac{1}{2}\kappa_{f}\,M\,\frac{d_{1}+d_{2}}{1000}}, \tag{12}\] where \(M\) is the liquid water density in the fog and \(\kappa_{f}\) is the fog attenuation coefficient, which can be obtained as \[\kappa_{f}=\frac{0.819\,f}{\epsilon_{i}\left(1+\eta^{2}\right)}, \tag{13}\] with \[\eta=\frac{2+\epsilon_{i}}{\epsilon_{r}}. \tag{14}\] In (14), \(\epsilon_{r}\) and \(\epsilon_{i}\) are the real and imaginary part of the dielectric permittivity of water, which can be expressed as \[\epsilon_{r}=\frac{\epsilon_{0}-\epsilon_{1}}{1+\left(\frac{f}{f_{p}}\right)^ {2}}+\frac{\epsilon_{1}-\epsilon_{2}}{1+\left(\frac{f}{f_{s}}\right)^{2}}+ \epsilon_{2} \tag{15}\] and \[\epsilon_{i}=\frac{f}{f_{p}}\frac{\epsilon_{0}-\epsilon_{1}}{1+\left(\frac{f} {f_{p}}\right)^{2}}+\frac{f}{f_{s}}\frac{\epsilon_{1}-\epsilon_{2}}{1+\left( \frac{f}{f_{s}}\right)^{2}}, \tag{16}\] respectively. 
The parameters \(f_{p}\) and \(f_{s}\) denote the principal and secondary relaxation frequencies, which can be evaluated as \[f_{p}= 20.2\times 10^{-9}-146\times 10^{-9}\left(\theta-1\right)\] \[+316\times 10^{-9}\left(\theta-1\right)^{2} \tag{17}\] and \[f_{s}=39.9\,f_{p}, \tag{18}\] while \[\epsilon_{0} =77.66+103.3\,\left(\theta-1\right), \tag{19}\] \[\epsilon_{1} =0.0671\,\epsilon_{0},\] (20) \[\epsilon_{2} =3.52 \tag{21}\] and \[\theta=\frac{300}{T}. \tag{22}\] As discussed in [50], \(h_{i}\) with \(i=1,2\), follows a logarithmic distribution with probability density function (PDF) and cumulative distribution function (CDF) that can be, respectively, obtained as \[f_{h_{i}}(x)=\frac{\zeta_{i}^{k_{i}}}{\Gamma(k_{i})}x^{k_{i}-1}\left(\ln\left( \frac{1}{x}\right)\right)^{k_{i}-1},\,\text{with }0<x\leq 1 \tag{23}\] and \[F_{h_{i}}(x)=\frac{\Gamma\left(k_{i},\zeta_{i}\,\ln\left(\frac{1}{x}\right) \right)}{\Gamma(k_{i})},\qquad\qquad\text{with }0<x\leq 1, \tag{24}\] where \[\zeta_{i}=\frac{4.343}{\beta_{i}\,d_{i}}. \tag{25}\] In (23)-(25), \(k_{1}\) and \(\beta_{1}\) stands for the foggy conditions of the TX-HRIS link, while \(k_{2}\) and \(\beta_{2}\) stands for the foggy conditions of the HRIS-RX link. For instance, \((k_{i},\beta_{i})\), with \(i\in\{1,2\}\), refers to light fog, if \((2.32,13.12)\), to moderate fog, if \((5.49,12.06)\), to thick fog, if \((6,23)\), and to dense fog, if \((36.06,11.91)\). According to [51], the distortion noise due to the TX hardware imperfections can be modeled as a zero-mean complex Gaussian random process with variance that can be obtained as \[\sigma_{n_{t}}^{2}=\kappa_{t}^{2}\,P_{s}, \tag{26}\] where \(\kappa_{t}\) is the TX error vector magnitude (EVM) and \(P_{s}\) is the transmission power. Similarly, for a given channel realization, the distortion noise due to the RX hardware imperfections can be modeled as a zero-mean complex Gaussian random process with variance that can be expressed as \[\sigma_{n_{r}}^{2}=\kappa_{r}^{2}\,h_{g}^{2}\,h_{1}^{2}\,h_{2}^{2}\,P_{s}, \tag{27}\] where \(\kappa_{r}\) is the RX EVM. Of note, the EVM in THz wireless systems is in the range of \([0.07,0.4]\)[52, 53, 54, 55, 18]. ## III Performance Analysis This section is devoted to analyze the performance of HRIS-empowered THz wireless systems. In this direction, Section III-A provides a closed-form expression for the HRIS-empowered THz wireless system geometric losses. Section III-B presents the statistical characterization of the end-to-end channel in terms of PDF and CDF. The outage probability of the system is presented in Section III-C, while the throughput is given in Section III-D. ### _Geometric Losses_ From (2), it becomes evident that the geometric losses can be obtained as \[P_{L}=\frac{1}{h_{g}^{2}}, \tag{28}\] or equivalently \[P_{L}=\frac{1}{h_{g,f}^{2}}\,\frac{1}{h_{g,m}^{2}}\,\frac{1}{h_{f}^{2}}. \tag{29}\] Thus, from (3), (4), and (12), we observe that the end-to-end geometric loss depends on the relative position of the HRIS in respect to the TX, the HRIS size, the TX and RX antenna gains, the transmission frequency, as well as the atmospheric conditions, i.e. temperature and liquid water density. ### _Statistical Characterization of the End-to-End Channel_ Let \[A=h_{1}\,h_{2}, \tag{30}\] the following theorem returns the PDF of \(A\). 
**Theorem 1**.: _The PDF and CDF of \(A\) can be, respectively, expressed as_ \[f_{A}(x) =\frac{\zeta_{1}^{k_{1}}\,\zeta_{2}^{k_{2}}}{\Gamma\left(k_{1}+k_ {2}\right)}\,x^{\zeta_{2}-1}\,\left(-\ln\left(x\right)\right)^{k_{1}+k_{2}-1}\] \[\times\,_{1}F_{1}\left(k_{1};k_{1}+k_{2};(\zeta_{1}-\zeta_{2})\ln (x)\right)\] (31) _and_ \[F_{A}(x)=(-1)^{k_{1}+k_{2}=1}\frac{\zeta_{1}^{k_{1}}\,\zeta_{2}^{k_ {2}}}{k_{1}+k_{2}}\,\frac{\left(\ln\left(x\right)\right)^{k_{1}+k_{2}}}{\Gamma \left(k_{1}\right)\Gamma\left(k_{2}\right)}\] \[\times\mathrm{H}_{\mathrm{A}}\left(\mathrm{k}_{1}+\mathrm{k}_{2}, \mathrm{k}_{1};\mathrm{k}_{1}+\mathrm{k}_{2}+1;(\zeta_{1}-\zeta_{2})\ln( \mathrm{x}),\zeta_{2}\ln(\mathrm{x})\right), \tag{32}\] _with \(x\in(0,1]\)._ Proof.: For brevity, the proof of Theorem 1 is given in Appendix A. The PDF of the end-to-end channel coefficient \[A_{t}=h_{g}\,A, \tag{33}\] can be evaluated as \[f_{A_{t}}(x)=\frac{1}{h_{g}}\,f_{A}\left(\frac{x}{h_{g}}\right). \tag{34}\] By applying (31) to (34), we obtain \[f_{A_{t}}(x) =\frac{\zeta_{1}^{k_{1}}\,\zeta_{2}^{k_{2}}}{h_{g}\Gamma\left(k_ {1}+k_{2}\right)}\,\left(\frac{x}{h_{g}}\right)^{\zeta_{2}-1}\,\left(-\ln \left(\frac{x}{h_{g}}\right)\right)^{k_{1}+k_{2}-1}\] \[\times\,_{1}F_{1}\left(k_{1};k_{1}+k_{2};(\zeta_{1}-\zeta_{2})\ln (\frac{x}{h_{g}})\right). \tag{35}\] Additionally, the CDF of \(A_{t}\) can be expressed as \[F_{A_{t}}(x)=F_{A}\left(\frac{x}{h_{g}}\right), \tag{36}\] which, by applying (32), yields \[F_{A_{t}}(x)=(-1)^{k_{1}+k_{2}=1}\frac{\zeta_{1}^{k_{1}}\,\zeta _{2}^{k_{2}}}{k_{1}+k_{2}}\,\frac{\left(\ln\left(\frac{x}{h_{g}}\right)\right) ^{k_{1}+k_{2}}}{\Gamma\left(k_{1}\right)\Gamma\left(k_{2}\right)}\] \[\times\mathrm{H}_{\mathrm{A}}\left(\mathrm{k}_{1}+\mathrm{k}_{2}, \mathrm{k}_{1};\mathrm{k}_{1}+\mathrm{k}_{2}+1;(\zeta_{1}-\zeta_{2})\ln\left( \frac{\mathrm{x}}{\mathrm{h}_{g}}\right),\right.\] \[\left.\zeta_{2}\ln\left(\frac{x}{h_{g}}\right)\right). \tag{37}\] ### _Outage Probability_ From (1), the signal-to-distortion-plus-noise-ratio (SDNR) can be obtained as \[\gamma=\frac{h_{g}^{2}\,A^{2}\,P_{s}}{h_{g}^{2}\,A^{2}\left(\kappa_{t}^{2}+ \kappa_{r}^{2}\right)\,P_{s}+\sigma_{n}^{2}}, \tag{38}\] or equivalently \[\gamma=\frac{A^{2}}{A^{2}\left(\kappa_{t}^{2}+\kappa_{r}^{2}\right)+\frac{1}{ \rho}}, \tag{39}\] where \[\rho=\frac{h_{g}^{2}\,P_{s}}{\sigma_{n}^{2}}. \tag{40}\] The outage probability is defined as \[P_{o}\left(\gamma_{\mathrm{th}}\right)=\Pr\left(\gamma\leq\gamma_{\mathrm{th}} \right), \tag{41}\] where \(\gamma_{\mathrm{th}}\) stands for the SNR threshold. The following proposition returns a closed-form expression for the outage probability. **Proposition 1**.: _The outage probability can be evaluated as in (42), given at the top of the next page. In (42), the following condition should hold:_ \[\gamma_{\mathrm{th}}\leq\min\left(\frac{1}{\kappa_{t}^{2}+\kappa_{r}^{2}}, \frac{\rho}{\left(\kappa_{t}^{2}+\kappa_{r}^{2}\right)\rho+1}\right). \tag{43}\] _Otherwise, \(P_{o}\left(\gamma_{\mathrm{th}}\right)=1\)._ Proof.: For brevity, the proof of Theorem 1 is given in Appendix B. ### _Throughput_ The throughput is defined as \[D=W\,r_{t}\,\left(1-P_{o}\left(\gamma_{\rm th}\right)\right), \tag{44}\] where \(W\) and \(r_{t}\) are respectively the bandwidth and the spectral efficiency of the modulation and coding scheme that is used. Note that \(r_{t}\) and \(\gamma_{\rm th}\) are connected through \[r_{t}=\log_{2}\left(1+\gamma_{\rm th}\right). 
\tag{45}\] As a result, (44) can be rewritten as \[D=W\,r_{t}\,\left(1-P_{o}\left(2^{r_{t}}-1\right)\right). \tag{46}\] By applying (42) to (46), we obtain (47), given at the top of the next page. Notice that (47) is valid, if and only if (43) is satisfied. Or equivalently, if \[r_{\rm th}\leq \min\left(\log_{2}\left(\frac{1}{\kappa_{t}^{2}+\kappa_{r}^{2}}+1 \right),\right.\] \[\left.\log_{2}\left(1+\frac{\rho}{\left(\kappa_{t}^{2}+\kappa_{r }^{2}\right)\rho+1}\right)\right). \tag{48}\] Otherwise, the throughput is equal to \(0\). ## IV Results & Discussion This section focuses on verifying the theoretical framework by means of Monte Carlo simulations and extracting engineering insights. In what follows, lines stand for analytical results, while markers are used for simulations. Unless otherwise stated, black, red, orange, and blue colors are used for light, moderate, thick, and dense fog, respectively. Fig. 2 depicts the geometric loss as a function of \(f\) for different values of \(l_{h}=l_{v}\), assuming \(G_{t}=G_{r}=50\,\mathrm{dBi}\), \(\psi=\pi/4\), \(T=20^{\circ}\mathrm{C}\), \(P=101300.0\,\mathrm{Pa}\), \(l_{1}=d_{2}=50\,\mathrm{m}\) and water vapour density equals to \(7.5\,\mathrm{g}/\mathrm{m}^{3}\). Notice that a water vapour density that is equal to \(7.5\,\mathrm{g}/\mathrm{m}^{2}\) corresponds to moderate fog. From this figure, we observe that in the range of \(100\) to \(1000\,\mathrm{GHz}\), \(10\) transmission windows exist. In more detail, the first transmission windows is from \(100\) to \(180\,\mathrm{GHz}\); there, the available bandwidth is equal to \(80\,\mathrm{GHz}\). The second transmission window is from \(192\) to \(322\,\mathrm{GHz}\). The available bandwidth of the second transmission window is \(130\,\mathrm{GHz}\). Notice that although the available bandwidth of the second transmission window is greater than the one of the first transmission window, the average geometric loss in the second transmission window is also higher than the one of the first transmission window. The range of the third transmission window is \(335-373\,\mathrm{GHz}\). As a consequence, the available bandwidth of the third transmission window is \(38\,\mathrm{GHz}\). The forth transmission window is from \(390\) to approximately \(443\,\mathrm{GHz}\). Thus, the available bandwidth of the forth transmission window is \(53\,\mathrm{GHz}\). The fifth transmission window is from \(456\) to \(472\,\mathrm{GHz}\), while the sixth transmission window is from \(477\) to \(487\,\mathrm{GHz}\). The seventh transmission window is from \(587\) to \(616\,\mathrm{GHz}\). The range of the eighth transmission window is from \(630\) to \(715\,\mathrm{GHz}\). The ninth transmission window is from \(790\) to \(890\,\mathrm{GHz}\), while the tenth is from \(950\) to \(970\,\mathrm{GHz}\). Finally, from a given \(f\), as \(l_{h}=l_{v}\) increases, i.e., as the size of the HRIS increases, the geometric loss decreases. Fig. 3 demonstrates the geometric loss as a function of \(d_{1}\) for different values of \(f\), assuming \(d_{1}+d_{2}=100\,\mathrm{m}\), \(G_{t}=G_{r}=50\,\mathrm{dBi}\), \(l_{h}=l_{v}=1\,\mathrm{m}\), \(\psi=\pi/4\), \(T=20^{\circ}\mathrm{C}\), \(P=101300.0\,\mathrm{Pa}\), and water vapour density equals to \(7.5\,\mathrm{g}/\mathrm{m}^{3}\). Note that the selected frequencies are within the transmission windows. 
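As a side note, the qualitative behaviour of the end-to-end geometric loss in (29) is easy to reproduce numerically. The sketch below is ours and purely illustrative: it combines (3), (4), and (12), and treats the molecular absorption coefficient \(\kappa_{m}\) and the fog attenuation coefficient \(\kappa_{f}\) as inputs, since their evaluation through (5)-(11) and (13)-(22) requires HITRAN data and the water permittivity model. The molecular term is implemented as an attenuation over \(d_{1}+d_{2}\).

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def geometric_loss_db(f, d1, d2, Gt_dBi=50.0, Gr_dBi=50.0, lh=1.0, lv=1.0,
                      psi=np.pi / 4, kappa_m=0.0, kappa_f=0.0, M=0.05):
    """Illustrative end-to-end geometric loss P_L = 1/h_g^2 in dB, cf. (29).

    kappa_m : molecular absorption coefficient [1/m], e.g. obtained from (5)-(11)
    kappa_f : fog attenuation coefficient [(dB/km)/(g/m^3)], cf. (13)
    M       : liquid water density [g/m^3]
    """
    Gt, Gr = 10.0 ** (Gt_dBi / 10.0), 10.0 ** (Gr_dBi / 10.0)
    # eq. (3): free-space geometric gain of the TX-HRIS-RX cascade
    h_gf = C * np.sqrt(Gt * Gr) * lh * lv * np.cos(psi) / (4.0 * np.pi * f * d1 * d2)
    # eq. (4): molecular absorption, taken here as an attenuation over d1 + d2
    h_gm = np.exp(-0.5 * kappa_m * (d1 + d2))
    # eq. (12): fog gain
    h_f = 10.0 ** (-0.5 * kappa_f * M * (d1 + d2) / 1000.0)
    return -20.0 * np.log10(h_gf * h_gm * h_f)

# sweep the HRIS position for a 100 m end-to-end hop at 100 GHz (cf. Fig. 3)
for d1 in (1.0, 25.0, 50.0, 75.0, 99.0):
    print(f"d1 = {d1:5.1f} m -> P_L = {geometric_loss_db(100e9, d1, 100.0 - d1):.2f} dB")
```

Since \(d_{1}d_{2}\) is maximized when the HRIS sits at the midpoint of the hop, the free-space term alone already produces a loss curve that is symmetric in \(d_{1}\) and peaks at \(d_{1}=(d_{1}+d_{2})/2\), which is the shape of Fig. 3 discussed next.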
From this figure, we observe that for fixed frequency and \(d_{1}<\frac{d_{1}+d_{2}}{2}\), as \(d_{1}\) increases, the geometric loss also increases, while for \(d_{1}>\frac{d_{1}+d_{2}}{2}\), as \(d_{1}\) increases, the geometric loss decreases. Likewise, for a given \(f\), the maximum geometric loss is at \(d_{1}=\frac{d_{1}+d_{2}}{2}\). For example, for \(f=100\,\mathrm{GHz}\), as \(d_{1}\) increases from \(1\) to \(10\,\mathrm{m}\), the geometric loss increases from \(21.72\) to \(40.89\,\mathrm{dB}\), while, for the same \(f\), as \(d_{1}\) increases from \(90\) to \(99\,\mathrm{m}\), the geometric loss increases from \(40.89\) to \(21.72\,\mathrm{dB}\). From this example, it becomes evident that for a given frequency, the same geometric loss is achieved for \(d_{1}\) and \(d_{t}-d_{1}\), where \(d_{t}\) is the total distance. Moreover, we observe that based on the geometric loss, the optimal position of the HRIS is as near to the TX or the RX as possible. Finally, for a given \(d_{1}\), as \(f\) increases, the geometric loss increases. In Fig. 4, the outage probability is plotted as a function of \(f\), for different values of \(\frac{P_{r}}{\sigma_{t}^{2}}\), assuming \(G_{t}=G_{r}=50\,\mathrm{dBi}\), \(\psi=\pi/4\), \(T=20^{\circ}\mathrm{C}\), \(P=101300.0\,\mathrm{Pa}\), \(d_{1}=d_{2}=50\,\mathrm{m}\), \(l_{h}=l_{v}=1\,\mathrm{m}\), and ideal RF front-end at both the TX and the RX. Moreover, the water vapour density is set to \(7.5\,\mathrm{g}/\mathrm{m}^{3}\). Notice that these conditions corresponds to moderate fog; thus, \(k_{1}=k_{2}=5.49\) and \(\beta_{1}=\beta+2=12.06\). As expected, for a given \(f\), as \(\frac{P_{r}}{\sigma_{t}^{2}}\) increases, the outage performance improves. For example, for \(f=200\,\mathrm{GHz}\), the outage probability decreases from \(1\) to \(0.36\), as \(\frac{P_{r}}{\sigma_{t}^{2}}\) increases from \(60\) to \(80\,\mathrm{dB}\). Additionally, for a fixed \(\frac{P_{r}}{\sigma_{t}^{2}}\), a local maximum in the outage probability is observed, at the water resonating frequencies. Likewise, from this figure, we observe that based on the application reliability requirement, i.e., the maximum allowed outage probability, as well as the required energy consumption, which is translated to \(\frac{P_{r}}{\sigma_{t}^{2}}\), a different range of frequencies can be employed. For instance, if the maximum \[D=W\,r_{t} \left(1-(-1)^{k_{1}+k_{2}=1}\frac{\zeta_{1}^{k_{1}}\zeta_{2}^{k_{2}} }{k_{1}+k_{2}}\frac{\left(\ln\left(\sqrt{\frac{2^{r_{t}-1}}{\rho}}\,\frac{1}{ \sqrt{1-(2^{r_{t}}-1)(\kappa_{t}^{2}+\kappa_{t}^{2})}}\right)\right)^{k_{1}+k_{ 2}}}{\Gamma\left(k_{1}\right)\Gamma\left(k_{2}\right)}\right.\] \[\times\mathrm{H_{A}}\left(\mathrm{k_{1}+k_{2},k_{1};k_{1}+k_{2}+ 1;(\zeta_{1}-\zeta_{2})\ln(\sqrt{\frac{(2^{r_{t}}-1)}{\rho}}\,\frac{1}{\sqrt{1 -(2^{r_{t}}-1)\left(\kappa_{t}^{2}+\kappa_{t}^{2}\right)}}}),\right.\] \[\left.\zeta_{2}\ln(\sqrt{\frac{(2^{r_{t}}-1)}{\rho}}\,\frac{1}{ \sqrt{1-(2^{r_{t}}-1)\left(\kappa_{t}^{2}+\kappa_{r}^{2}\right)}})\right) \tag{47}\] allowed outage probability is set to \(10^{-6}\), which is a realistic value for backhauling scenarios, and the transmission SNR is set to \(100\,\mathrm{dB}\), the transmission frequency should be in the range of \([100,151.5\,\mathrm{GHz}]\). Note that this frequency range is a subset of the first transmission frequency window, which is defined in Fig. 2. 
For the same reliability requirement, and for a transmission SNR equal to \(120\,\mathrm{dB}\), we observe that the range of the transmission frequency increases to \([100,309.5\,\mathrm{GHz}]\). In other words, as the transmission SNR increases, the range of the transmission frequencies that can be used to support the system increases. As discussed in [1], this highlights the importance of adopting new types of transceivers in THz wireless systems that minimize their noise figure. Fig. 5 illustrates the outage probability as a function of \(\frac{\rho}{\gamma_{\mathrm{th}}}\) for different values of \(d_{1}=d_{2}=d\) and fog conditions. Moreover, we assume that the transceivers are equipped with ideal RF front-end, i.e. \(\kappa_{t}=\kappa_{r}=0\). As expected, for given fog conditions and \(d\), as \(\frac{\rho}{\gamma_{\mathrm{th}}}\) increases, the outage performance improves. For example, for light fog and \(d=30\,\mathrm{m}\), as \(\frac{\rho}{\gamma_{\mathrm{th}}}\) increases from \(5\) to \(10\,\mathrm{dB}\), the outage probability decreases for about \(2\) orders of magnitude. Moreover, for fixed fog conditions and \(\frac{\rho}{\gamma_{\mathrm{th}}}\), as \(d\) increases, the outage probability also increases. For instance, for light fog and \(\frac{\rho}{\gamma_{\mathrm{th}}}=15\,\mathrm{dB}\), the outage probability increases from \(2.08\times 10^{-8}\) to \(7.63\times 10^{-3}\), as \(d\) increases from \(30\) to \(50\,\mathrm{m}\). Finally, for given \(\frac{\rho}{\gamma_{\mathrm{th}}}\) and \(d\), as the fog becomes more severe, the outage performance degrades. For example, for \(\frac{\rho}{\gamma_{\mathrm{th}}}=15\,\mathrm{dB}\) and \(d=30\,\mathrm{m}\), as the fog conditions change from light to moderate, the outage probability increases from \(2.08\times 10^{-5}\) to \(7.15\times 10^{-3}\). For the same \(\frac{\rho}{\gamma_{\mathrm{th}}}\) and \(d\), the system under thick fog achieves an outage probability that is equal to \(5.9\times 10^{-1}\), while, under dense fog, it is equal to \(1\). These examples indicate the importance of accounting for the fog impact on the performance of HRIS-empowered THz wireless systems.

Fig. 2: Geometric loss as a function of \(f\) for different values of \(l_{h}=l_{v}\).

Fig. 3: Geometric loss vs \(d_{1}\) for different values of \(f\).

Fig. 6 depicts the outage probability as a function of \(d_{1}\) for different values of \(\frac{\rho}{\gamma_{\mathrm{th}}}\) and fog conditions, assuming \(d_{1}+d_{2}=100\,\mathrm{m}\) and ideal RF front-end. From this figure, we observe that for given \(\frac{\rho}{\gamma_{\mathrm{th}}}\) and fog conditions, for \(d_{1}<\frac{d_{1}+d_{2}}{2}\), as \(d_{1}\) increases, the outage probability decreases. For example, for \(\frac{\rho}{\gamma_{\mathrm{th}}}=40\,\mathrm{dB}\) and light fog, as \(d_{1}\) increases from \(10\) to \(20\,\mathrm{m}\), the outage probability decreases from \(2.17\times 10^{-6}\) to \(4.35\times 10^{-7}\). On the other hand, for fixed \(\frac{\rho}{\gamma_{\mathrm{th}}}\) and fog conditions, for \(d_{1}>\frac{d_{1}+d_{2}}{2}\), as \(d_{1}\) increases, the outage probability also increases. For instance, for \(\frac{\rho}{\gamma_{\mathrm{th}}}=40\,\mathrm{dB}\) and light fog, as \(d_{1}\) increases from \(80\) to \(90\,\mathrm{m}\), the outage probability increases from \(4.35\times 10^{-7}\) to \(2.17\times 10^{-6}\). For given fog conditions and \(\frac{\rho}{\gamma_{\mathrm{th}}}\), the outage probability achieves the minimum value for \(d_{1}=\frac{d_{1}+d_{2}}{2}\).
In other words, the optimal place for the HRIS is at \(\frac{d_{1}+d_{2}}{2}\). Moreover, we observe that the same outage performance is achieved for \(d_{1}\) and \(d_{t}-d_{1}\), where \(d_{t}=d_{1}+d_{2}\). For example, for \(\frac{\rho}{\gamma_{\mathrm{th}}}=40\,\mathrm{dB}\) and light fog, for both \(d_{1}=10\,\mathrm{m}\) and \(d_{1}=90\,\mathrm{m}\), the outage probability is equal to \(2.17\times 10^{-6}\). Additionally, for given \(d\) and fog conditions, as \(\frac{\rho}{\gamma_{\mathrm{th}}}\) increases, the outage probability decreases. For instance, for \(d=10\,\mathrm{m}\) and light fog, as \(\frac{\rho}{\gamma_{\mathrm{th}}}\) changes from \(20\) to \(40\,\mathrm{dB}\), the outage probability decreases by approximately \(5\) orders of magnitude. Finally, for fixed \(d\) and \(\frac{\rho}{\gamma_{\mathrm{th}}}\), as the fog density increases, the outage performance of the system degrades.

Fig. 7 demonstrates the impact of transceiver hardware imperfections on the outage performance of HRIS-empowered THz wireless systems. In more detail, the outage probability is plotted as a function of \(\kappa=\sqrt{\kappa_{t}^{2}+\kappa_{r}^{2}}\) for different values of \(\gamma_{\mathrm{th}}\), \(\rho\), as well as fog conditions, assuming \(d_{1}=d_{2}=50\,\mathrm{m}\). As expected, for given \(\gamma_{\mathrm{th}}\), \(\rho\) and fog conditions, as \(\kappa\) increases, the impact of transceiver hardware imperfections increases; thus, the outage probability increases. For example, for \(\gamma_{\mathrm{th}}=10\,\mathrm{dB}\), \(\rho=30\,\mathrm{dB}\), and light fog, the outage probability increases by more than \(2\) orders of magnitude, as \(\kappa\) increases from \(0.1\) to \(0.3\). This indicates the importance of taking into account the impact of transceiver hardware imperfections, when designing modulation and coding schemes. From this figure, it becomes evident that, for given \(\kappa\), \(\rho\), and fog conditions, as \(\gamma_{\mathrm{th}}\) increases, the impact of hardware imperfections becomes more severe; as a consequence, an outage performance degradation is observed. For instance, for \(\kappa=0.2\), \(\rho=30\,\mathrm{dB}\), and light fog, the outage probability increases by more than \(1\) order of magnitude, as \(\gamma_{\mathrm{th}}\) increases from \(5\) to \(10\,\mathrm{dB}\). Moreover, for given \(\kappa\), \(\rho\), and \(\gamma_{\mathrm{th}}\), as the density of fog increases, the outage probability also increases. For example, for \(\kappa=0.2\), \(\rho=30\,\mathrm{dB}\), and \(\gamma_{\mathrm{th}}=5\,\mathrm{dB}\), as the fog conditions change from light to moderate, the outage probability increases by more than \(2\) orders of magnitude. Finally, for given \(\kappa\), \(\gamma_{\mathrm{th}}\), and fog conditions, as \(\rho\) increases, the outage probability decreases.

Fig. 8 presents the throughput to bandwidth ratio as a function of \(r_{t}\) for different values of \(\rho\) and fog conditions, assuming ideal RF front-end and \(d_{1}=d_{2}=50\,\mathrm{m}\). From this figure, we observe that for given \(\rho\) and fog conditions, an optimal spectral efficiency of the modulation and coding transmission scheme, \(r_{t}^{o}\), exists that maximizes the throughput to bandwidth ratio. For \(r_{t}<r_{t}^{o}\), as \(r_{t}\) increases, \(\frac{D}{W}\) also increases. On the other hand, for \(r_{t}>r_{t}^{o}\), as \(r_{t}\) increases, \(\frac{D}{W}\) decreases.
Moreover, for fixed \(r_{t}\) and fog conditions, as \(\rho\) increases, \(\frac{D}{W}\) also increases. For example, under light fog, for \(r_{t}=8\,\mathrm{bits/s/Hz}\), the \(D/W\) increases from \(4.32\) to \(7.96\,\mathrm{bits}/\mathrm{s}/\mathrm{Hz}\), as \(\rho\) increases from \(30\) to \(40\,\mathrm{dB}\). Finally, for given \(r_{t}\) and \(\rho\), as the fog density increases, \(D/W\) decreases.

Fig. 5: Outage probability vs \(\frac{\rho}{\gamma_{\mathrm{th}}}\) for different values of \(d_{1}=d_{2}=d\), assuming light (black), moderate (red), thick (orange), and dense (blue) fog.

Fig. 6: Outage probability vs \(d_{1}\) for different values of \(\frac{\rho}{\gamma_{\mathrm{th}}}\), assuming light (black), moderate (red), thick (orange) fog and \(d_{1}+d_{2}=100\,\mathrm{m}\).

Fig. 9 presents the throughput to bandwidth ratio as a function of the transmission scheme spectral efficiency for different values of \(\kappa\) and under different fog conditions. As a benchmark, the case of ideal RF front-end is plotted. As expected, an optimal transmission spectral efficiency, \(r_{t}^{o}\), exists, for which the throughput to bandwidth ratio is maximized. For given \(\kappa\), fog density, and \(r_{t}<r_{t}^{o}\), as \(r_{t}\) increases, \(D/W\) also increases. For example, for thick fog and \(\kappa=0.07\), the optimal transmission spectral efficiency is equal to \(5.5\,\mathrm{bits}/\mathrm{s}/\mathrm{Hz}\), where the achievable throughput to bandwidth ratio is \(4.125\,\mathrm{bits}/\mathrm{s}/\mathrm{Hz}\). Moreover, as the transmission spectral efficiency increases from \(4\) to \(5\,\mathrm{bits}/\mathrm{s}/\mathrm{Hz}\), the throughput to bandwidth ratio increases from \(3.58\) to \(4.058\,\mathrm{bits}/\mathrm{s}/\mathrm{Hz}\). On the other hand, for \(r_{t}>r_{t}^{o}\), as the transmission spectral efficiency increases, the throughput to bandwidth ratio decreases. For instance, for thick fog and \(\kappa=0.07\), as the transmission spectral efficiency increases from \(6\) to \(7\,\mathrm{bits}/\mathrm{s}/\mathrm{Hz}\), the throughput to bandwidth ratio decreases from \(4.01\) to \(2.74\,\mathrm{bits}/\mathrm{s}/\mathrm{Hz}\). Likewise, for given fog density and transmission spectral efficiency, as \(\kappa\) increases, the throughput to bandwidth ratio decreases. For example, for thick fog and transmission spectral efficiency equal to \(6\,\mathrm{bits}/\mathrm{s}/\mathrm{Hz}\), the throughput to bandwidth ratio decreases from \(4.01\) to \(3.26\,\mathrm{bits}/\mathrm{s}/\mathrm{Hz}\), as \(\kappa\) increases from \(0.07\) to \(0.1\). Additionally, for fixed transmission spectral efficiency and \(\kappa\), as the fog density increases, the throughput to bandwidth ratio decreases. For instance, for \(r_{t}=6\,\mathrm{bits}/\mathrm{s}/\mathrm{Hz}\) and \(\kappa=0.07\), if the fog condition changes from light to thick, the throughput to bandwidth ratio will decrease from \(6.0\) to \(3.26\,\mathrm{bits}/\mathrm{s}/\mathrm{Hz}\). This indicates the importance of accurately modeling the fog conditions, when estimating the throughput performance of HRIS-empowered wireless THz systems.

Fig. 7: Outage probability vs \(\kappa\) for different values of \(\gamma_{\mathrm{th}}\), assuming light (black), moderate (red), thick (orange) fog and \(\rho=30\,\mathrm{dB}\) (a) and \(\rho=40\,\mathrm{dB}\) (b).

Fig. 8: \(D/W\) vs \(r_{t}\) for different values of \(\rho\), assuming light (black), moderate (red), thick (orange) fog and \(d_{1}=d_{2}=50\,\mathrm{m}\).
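As an aside, the optimal operating point \(r_{t}^{o}\) described above can be located numerically once an outage curve is available: by (47) and (75), the threshold associated with a transmission scheme of spectral efficiency \(r_{t}\) is \(\gamma_{\mathrm{th}}=2^{r_{t}}-1\), so that \(D/W=r_{t}\,(1-P_{o}(2^{r_{t}}-1))\). The minimal Python sketch below scans \(r_{t}\) using a hypothetical placeholder outage function (not the closed-form expression of the paper); the values of \(\rho\) and \(\kappa\) are assumptions chosen only for illustration. It also reports the hard ceiling \(r_{t}<\log_{2}(1+1/\kappa^{2})\) implied by the feasibility condition (73), which is discussed further below.

```python
import numpy as np

# Illustrative search for the throughput-optimal spectral efficiency r_t^o.
# The outage model below is a hypothetical placeholder, NOT the closed-form
# expression of the paper; rho (linear SNR) and kappa are assumed values.

RHO = 1e4            # transmission SNR (40 dB), assumed
KAPPA2 = 0.07 ** 2   # kappa_t^2 + kappa_r^2, assumed

def outage_placeholder(gamma_th, rho=RHO, kappa2=KAPPA2):
    """Monotone stand-in for P_o(gamma_th); replace with (42)/(75) in practice."""
    feas = 1.0 - gamma_th * kappa2
    if feas <= 0.0:                 # threshold infeasible, cf. (73)
        return 1.0
    arg = np.sqrt(gamma_th / rho) / np.sqrt(feas)
    return min(1.0, arg ** 2)       # stand-in for the CDF F_A(arg)

r_grid = np.linspace(0.1, 10.0, 1000)
dw = np.array([r * (1.0 - outage_placeholder(2.0 ** r - 1.0)) for r in r_grid])
r_opt = r_grid[int(np.argmax(dw))]

r_ceiling = np.log2(1.0 + 1.0 / KAPPA2)   # r_t beyond which D/W = 0, cf. (73)
print(f"r_t^o ~ {r_opt:.2f} bits/s/Hz,  D/W ~ {dw.max():.2f},  ceiling ~ {r_ceiling:.2f}")
```

The same loop applies verbatim with the exact outage expression substituted for the placeholder; only the shape of the resulting curve, and hence the location of \(r_{t}^{o}\), changes.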
Finally, from this figure, it becomes apparent that a maximum transmission spectral efficiency exists beyond which the throughput to bandwidth ratio becomes equal to \(0\). The maximum transmission spectral efficiency depends on the level of transceiver hardware imperfections. For example, for \(\kappa=0.07\), it is equal to \(7.68\,\mathrm{bits/s/Hz}\), while, for \(\kappa=0.1\), it is equal to \(6.66\,\mathrm{bits/s/Hz}\). Note that the maximum transmission spectral efficiency is independent of the fog conditions. Also, we observe that the maximum transmission spectral efficiency is lower than the optimal transmission spectral efficiency for the case of ideal RF front-end. This highlights the importance of accounting for the impact of hardware imperfections, when selecting the transmission scheme. In Fig. 10, \(D/W\) is illustrated as a function of \(\psi\) for different levels of transceiver hardware imperfections and frequencies, assuming \(\frac{P_{r}}{\sigma_{k}^{2}}=120\,\mathrm{dB}\), \(r_{t}=5\,\mathrm{bits/s/Hz}\), \(G_{t}=G_{r}=50\,\mathrm{dB}\), \(\psi=\pi/4\), \(T=20^{o}\)C, \(P=101300.0\,\mathrm{Pa}\), \(d_{1}=d_{2}=50\,\mathrm{m}\), \(l_{h}=l_{v}=1\,\mathrm{m}\), and water vapor equal to \(7.5\,\mathrm{g/m^{3}}\), which corresponds to moderate fog conditions. As a benchmark, the ideal RF front-end scenario is also plotted. From this figure, we observe that for given \(\kappa_{t}=\kappa_{r}\) and \(f\), as \(\psi\) increases, \(D/W\) decreases. For example, for \(\kappa_{t}=\kappa_{r}=0\) and \(f=370\,\mathrm{GHz}\), as \(\psi\) increases from \(10^{o}\) to \(45^{o}\), \(D/W\) decreases from \(4.55\) to \(3.84\,\mathrm{bits/s/Hz}\). This indicates that not only the relative distance from the TX and the RX, but also the orientation of the HRIS plays a pivotal role on the throughput performance of HRIS-empowered wireless THz systems. Additionally, it is become apparent that, for fixed \(\psi\) and \(\kappa_{t}=\kappa_{r}\), as the geometric loss increases through a frequency increase, \(D/W\) decreases. For instance, for \(\psi=45^{o}\) and \(\kappa_{t}=\kappa_{r}=0\), \(D/W\) decreases from \(5.0\) to \(3.84\,\mathrm{bits/s/Hz}\), as \(f\) increases from \(100\) to \(370\,\mathrm{GHz}\). This example reveals the importance of appropriately allocating the frequencies in order to make the most out of the available bandwidth. Finally, for given \(f\) and \(\psi\), as the level of transceiver hardware imperfections increase, \(D/W\) decreases. Fig. 11 depicts \(D/W\) as a function of \(d_{1}\) for different values of \(\psi\) and levels of hardware imperfections, assuming moderate fog conditions, \(d_{1}+d_{2}=100\,\mathrm{m}\), \(\frac{P_{r}}{\sigma_{k}^{2}}=80\,\mathrm{dB}\), \(f=100\,\mathrm{GHz}\), \(r_{t}=5\,\mathrm{bits/s/Hz}\), \(G_{t}=G_{r}=50\,\mathrm{dB}\), \(\psi=\pi/4\), \(T=20^{o}\)C, \(P=101300.0\,\mathrm{Pa}\), and \(l_{h}=l_{v}=1\,\mathrm{m}\). The water vapor is set to \(7.5\,\mathrm{g/m^{3}}\). From this figure, we observe that for given \(\psi\), \(\kappa_{t}=\kappa_{r}\) and \(d_{1}<\frac{d_{1}+d_{2}}{2}\), as \(d_{1}\) increases, the outage probability also increases; thus, \(D/W\) decreases. On the other hand, for fixed \(\psi\), \(\kappa_{t}=\kappa_{r}\) and \(d_{1}>\frac{d_{1}+d_{2}}{2}\), as \(d_{1}\) increases, the outage probability decreases; hence, \(D/W\) increases. For \(d_{1}=\frac{d_{1}+d_{2}}{2}\), \(D/W\) is minimized. Notice that this observation contradicts Fig. 6. This is because in Fig. 6, the impact of geometric losses was neglected. 
This indicates the importance of taking into account both the deterministic and stochastic phenomena that affect the performance of the HRIS-empowered wireless THz system. Moreover, it becomes evident that the maximum \(D/W\) is achieved for \(d_{1}\to 0\) and \(d_{1}\to d_{1}+d_{2}\). In other words, the optimal placement of the HRIS is as near either the TX or the RX as possible. Of note, this observation is in line with the results of [57]. Likewise, we observe that for fixed \(d_{1}\) and \(\kappa_{t}=\kappa_{r}\), as \(\psi\) increases, \(D/W\) decreases. For example, for \(d_{1}=50\,\mathrm{m}\) and \(\kappa_{t}=\kappa_{r}=0.1\), as \(\psi\) increases from \(45^{o}\) to \(75^{o}\), \(D/W\) decreases about two orders of magnitude. Finally, the deterministic impact of hardware imperfections is revealed by this figure. In more detail, for given \(d_{1}\) and \(\psi\), as the level of transceiver hardware imperfections increases, \(D/W\) decreases. ## V Conclusion In this paper, we have investigated the outage and throughput performance of HRIS-empowered THz wireless systems that suffer from the joint impact of transceiver hardware imperfections and fog. In more detail, after providing the methodology to evaluate the end-to-end geometric losses and characterizing the stochastic nature of the channel in terms of PDF and CDF, we have extracted novel closed-form expressions for the system's outage probability and throughput. These expressions have revealed a number of engineering insights, such as the existence of transmission windows, the optimum placement of the HRIS, the need to design transceivers with low noise figure, as well as the severe impact of transceiver hardware imperfections and fog. Therefore, they are expected to play a key role in the design of such systems. ## Appendix A Proof of Theorem 1 Since \(h_{1}\) and \(h_{2}\) are independent random variables (RVs), the PDF of \(A\) can be expressed as \[f_{A}(x)=\int_{x}^{1}\frac{1}{y}f_{h_{1}}(y)\,f_{h_{2}}\left(\frac{x}{y}\right) \,\mathrm{d}y. \tag{49}\] By applying (23) into (49), we obtain \[f_{A}(x)=\frac{\zeta_{1}^{k_{1}}\,\zeta_{2}^{k_{2}}}{\Gamma(k_{1 })\,\Gamma(k_{2})}x^{\zeta_{2}-1} \int_{x}^{1}y^{\zeta_{1}-\zeta_{2}-1}\left(\ln\left(\frac{1}{y} \right)\right)^{k_{1}-1}\] \[\times\left(\ln\left(\frac{y}{x}\right)\right)^{k_{2}-1}\, \mathrm{d}y, \tag{50}\] or equivalently \[f_{A}(x)=\frac{\zeta_{1}^{k_{1}}\,\zeta_{2}^{k_{2}}}{\Gamma(k_{1 })\,\Gamma(k_{2})}x^{\zeta_{2}-1} \int_{x}^{1}y^{\zeta_{1}-\zeta_{2}-1}\left(-\ln\left(y\right) \right)^{k_{1}-1}\] \[\times\left(\ln\left(y\right)-\ln\left(x\right)\right)^{k_{2}-1} \,\mathrm{d}y. \tag{51}\] By setting \(\ln(y)=t\), (51) can be rewritten as \[f_{A}(x)=\frac{\zeta_{1}^{k_{1}}\,\zeta_{2}^{k_{2}}}{\Gamma(k_{1 })\,\Gamma(k_{2})}x^{\zeta_{2}-1} \int_{\ln(x)}^{0}(-t)^{k_{1}-1}\,\left(\exp(t)\right)^{\zeta_{1}- \zeta_{2}}\] \[\times\left(t-\ln(x)\right)^{k_{2}-1}\,\mathrm{d}t, \tag{52}\] which, by applying [42, eq. (3.383/1)], yields (31). The CDF of \(A\) can be evaluated as \[F_{A}(x)=\int_{0}^{x}f_{A}(x)\,\mathrm{d}x, \tag{53}\] which, by applying (31), can be rewritten as \[F_{A}(x)=(-1)^{k_{1}+k_{2}=1}\frac{\zeta_{1}^{k_{1}}\,\zeta_{2}^{k_{2}}}{ \Gamma\left(k_{1}\right)\Gamma\left(k_{2}\right)}\,\mathcal{J}, \tag{54}\] where \[\mathcal{J}=\int_{0}^{x} y^{\zeta_{2}-1}\left(\ln(y)\right)^{k_{1}+k_{2}-1}\] \[\times\,_{1}F_{1}\left(k_{1};k_{1}+k_{2};(\zeta_{1}-\zeta_{2})\, \ln(y)\right)\,\mathrm{d}y. 
\tag{55}\] By setting \(\ln(y)=t\), (55) can be rewritten as \[\mathcal{J}=-\int_{\ln(y)}^{\infty} \,t^{k_{1}+k_{2}-1}\,\exp\left(\zeta_{2}\,t\right)\] \[\times\,_{1}F_{1}\left(k_{1};k_{1}+k_{2};(\zeta_{1}-\zeta_{2})\, \,t\right)\,\mathrm{d}t. \tag{56}\] With the aid of [42, eq. (9.14/1)], (56) can be equivalently expressed as \[\mathcal{J}=-\sum_{n=0}^{\infty}\frac{\left(k_{1}\right)_{n}\left(\zeta_{1}- \zeta_{2}\right)^{n}}{\left(k_{1}+k_{2}\right)_{n}n!}\mathcal{K}, \tag{57}\] where \[\mathcal{K}=\int_{\ln(x)}^{\infty}t^{k_{1}+k_{2}+n-1}\,\exp\left(\zeta_{2}\,t \right)\,\mathrm{d}t. \tag{58}\] Next, by employing [42, eq. (1.211/1)], (58) can be written as \[\mathcal{K}=\sum_{m=0}^{\infty}\frac{\zeta_{2}^{m}}{m!}\int_{\ln(y)}^{\infty} t^{k_{1}+k_{2}+n+m-1}\,\mathrm{d}t, \tag{59}\] which, by applying [42, eq. (3.191/1], can be rewritten as \[\mathcal{K}=-\sum_{m=0}^{\infty}\frac{\zeta_{2}^{m}}{m!}\,\frac{1}{k_{1}+k_{2} +m+n}\,\left(\ln(x)\right)^{k_{1}+k_{2}+m+n}. \tag{60}\] By applying (60) into (57), we obtain \[\mathcal{J}=\left(\ln\left(x\right)\right)^{k_{1}+k_{2}}\,\mathcal{L}, \tag{61}\] where \[\mathcal{L}=\sum_{n=0}^{\infty}\sum_{m=0}^{\infty} \frac{1}{k_{1}+k_{2}+m+n}\,\frac{(k_{1})_{n}}{\left(k_{1}+k_{2}+m+n \right)!}\frac{1}{n!}\frac{1}{m!}\] \[\times\left((\zeta_{1}-\zeta_{2})\ln(x)\right)^{n}\,\left(\zeta_{2} \ln(x)\right)^{m}. \tag{62}\] However, \[\frac{1}{k_{1}+k_{2}+m+n}=\frac{(k_{1}+k_{2}+m+n-1)!}{(k_{1}+k_{2}+m+n)!}, \tag{63}\] or equivalently \[\frac{1}{k_{1}+k_{2}+m+n}=\frac{\Gamma\left(k_{1}+k_{2}+m+n-1\right)}{\Gamma \left(k_{1}+k_{2}+m+n\right)}, \tag{64}\] or \[\frac{1}{k_{1}+k_{2}+m+n}=\frac{(k_{1}+k_{2})_{m+n}\Gamma(m+n)}{(k_{1}+k_{2}+1)_ {m+n}\Gamma(m+n+1)}, \tag{65}\] which can be rewritten as \[\frac{1}{k_{1}+k_{2}+m+n}=\frac{(k_{1}+k_{2})_{m+n}}{(k_{1}+k_{2}+1)_{m+n}(k_{1 }+k_{2})_{1})}. \tag{66}\] Moreover, by accounting for the fact that \((k_{1}+k_{2})_{1}=k_{1}+k_{2}\), (66) can be written as \[\frac{1}{k_{1}+k_{2}+m+n}=\frac{1}{k_{1}+k_{2}}\frac{(k_{1}+k_{2})_{m+n}}{(k_{1 }+k_{2}+1)_{m+n}}. \tag{67}\] By applying (67) to (62), we get \[\mathcal{L}=\frac{1}{k_{1}+k_{2}}\sum_{n=0}^{\infty} \sum_{m=0}^{\infty}\frac{(k_{1}+k_{2})_{m+n}}{(k_{1}+k_{2}+1)_{m+n}}\frac{(k_{1 })_{n}}{(k_{1}+k_{2})_{n}}\frac{1}{n!}\frac{1}{m!}\] \[\times\left((\zeta_{1}-\zeta_{2})\ln(x)\right)^{n}\,\left(\zeta_{2} \ln(x)\right)^{m}. \tag{68}\] or equivalently \[\mathcal{L}=\frac{\mathrm{H}_{\mathrm{A}}\left(k_{1}+k_{2},k_{1};k_{1}+k_{2}+1; (\zeta_{1}-\zeta_{2})\ln(x),\zeta_{2}\ln(x)\right)}{k_{1}+k_{2}}. \tag{69}\] By applying (69) to (61), we obtain \[\mathcal{J}=\frac{\left(\ln\left(x\right)\right)^{k_{1}+k_{2}}}{k_ {1}+k_{2}}\] \[\times\mathrm{H}_{\mathrm{A}}\left(k_{1}+k_{2},k_{1};k_{1}+k_{2}+1; (\zeta_{1}-\zeta_{2})\ln(x),\zeta_{2}\ln(x)\right), \tag{70}\] Finally, by employing (70) to (54), we obtain (32). This concludes the proof. ## Appendix B Proof of Proposition 1 By applying (39) to (41), we obtain \[P_{o}\left(\gamma_{\mathrm{th}}\right)=\Pr\left(\frac{A^{2}}{A^{2}\left(\kappa_{ t}^{2}+\kappa_{r}^{2}\right)+\frac{1}{\rho}}\leq\gamma_{\mathrm{th}}\right), \tag{71}\] or equivalently \[P_{o}\left(\gamma_{\mathrm{th}}\right)=\Pr\left(A^{2}\left(1-\gamma_{\mathrm{ th}}\left(\kappa_{t}^{2}+\kappa_{r}^{2}\right)\right)\leq\frac{\gamma_{\mathrm{ th}}}{\rho}\right). 
\tag{72}\] By assuming that \(1-\gamma_{\mathrm{th}}\left(\kappa_{t}^{2}+\kappa_{r}^{2}\right)>0\) or equivalently \[\gamma_{\mathrm{th}}<\frac{1}{\kappa_{t}^{2}+\kappa_{r}^{2}}, \tag{73}\] (72) can be rewritten as \[P_{o}\left(\gamma_{\mathrm{th}}\right)=\Pr\left(A\leq\sqrt{\frac{\gamma_{ \mathrm{th}}}{\rho}}\frac{1}{\sqrt{1-\gamma_{\mathrm{th}}\left(\kappa_{t}^{2}+ \kappa_{r}^{2}\right)}}\right), \tag{74}\] or \[P_{o}\left(\gamma_{\mathrm{th}}\right)=F_{A}\left(\sqrt{\frac{\gamma_{\mathrm{ th}}}{\rho}}\frac{1}{\sqrt{1-\gamma_{\mathrm{th}}\left(\kappa_{t}^{2}+\kappa_{r}^{2} \right)}}\right), \tag{75}\] which, by applying (32), returns (42). From (32), the following inequality should hold: \[0<\sqrt{\frac{\gamma_{\mathrm{th}}}{\rho}}\frac{1}{\sqrt{1-\gamma_{\mathrm{th }}\left(\kappa_{t}^{2}+\kappa_{r}^{2}\right)}}\leq 1 \tag{76}\] or equivalently \[0<\frac{\gamma_{\mathrm{th}}}{\rho}\frac{1}{1-\gamma_{\mathrm{th}}\left( \kappa_{t}^{2}+\kappa_{r}^{2}\right)}\leq 1 \tag{77}\] From (73), \(\frac{\gamma_{\mathrm{th}}}{\rho}\frac{1}{1-\gamma_{\mathrm{th}}\left(\kappa_{ t}^{2}+\kappa_{r}^{2}\right)}>0\) is always true. Likewise, in order for \(\frac{\gamma_{\mathrm{th}}}{\rho}\frac{1}{1-\gamma_{\mathrm{th}}\left(\kappa_{ r}^{2}+\kappa_{r}^{2}\right)}\leq 1\) to be true, the following condition should be satisfied: \[\gamma_{\mathrm{th}}\leq\frac{\rho}{\left(\kappa_{t}^{2}+\kappa_{r}^{2}\right) \rho+1}. \tag{78}\] By combining (73) and (78), we obtain (43). This concludes the proof.
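As a cross-check on the closed-form expressions above, the same quantities can be obtained by direct numerical integration. The following Python sketch (an illustration under assumed parameter values, with scipy assumed available) evaluates the PDF in (50), the CDF in (53), and the outage probability in (75), including the feasibility condition (73).

```python
import numpy as np
from math import gamma as Gamma
from scipy.integrate import quad

# Minimal numerical sketch: evaluate the PDF of (50), the CDF of (53) and the
# outage probability of (75) by direct numerical integration. All parameter
# values in the example call are assumptions used purely for illustration.

def f_A(x, k1, k2, z1, z2):
    """PDF of the cascaded gain A, via the single integral in (50), for 0 < x < 1."""
    c = (z1 ** k1) * (z2 ** k2) / (Gamma(k1) * Gamma(k2))
    integrand = lambda y: (y ** (z1 - z2 - 1)
                           * np.log(1.0 / y) ** (k1 - 1)
                           * np.log(y / x) ** (k2 - 1))
    val, _ = quad(integrand, x, 1.0, limit=200)
    return c * x ** (z2 - 1) * val

def F_A(x, k1, k2, z1, z2):
    """CDF of A obtained by integrating the PDF, cf. (53)."""
    val, _ = quad(lambda u: f_A(u, k1, k2, z1, z2), 0.0, x, limit=100)
    return val

def outage(gamma_th, rho, kappa_t, kappa_r, k1, k2, z1, z2):
    """Outage probability via (75), subject to the feasibility condition (73)."""
    denom = 1.0 - gamma_th * (kappa_t ** 2 + kappa_r ** 2)
    if denom <= 0.0:
        return 1.0
    arg = np.sqrt(gamma_th / rho) / np.sqrt(denom)
    return 1.0 if arg >= 1.0 else F_A(arg, k1, k2, z1, z2)

# Example with assumed (purely illustrative) parameter values:
print(outage(gamma_th=10.0, rho=1e4, kappa_t=0.1, kappa_r=0.1,
             k1=2.0, k2=3.0, z1=4.0, z2=5.0))
```

Agreement between this numerical evaluation and (42) provides a simple sanity check on the series manipulations of Appendix A.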
2310.07369
A differential Harnack inequality for noncompact evolving hypersurfaces
We prove a differential Harnack inequality for noncompact convex hypersurfaces flowing with normal speed equal to a symmetric function of their principal curvatures. This extends a result of Andrews for compact hypersurfaces. We assume that the speed of motion is one-homogeneous, uniformly elliptic, and suitably 'uniformly' inverse-concave as a function of the principal curvatures. In addition, we assume the hypersurfaces satisfy pointwise scaling-invariant gradient estimates for the second fundamental form. For many natural flows all of these hypotheses are met by any ancient solution which arises as a blow-up of a singularity.
Stephen Lynch
2023-10-11T10:39:41Z
http://arxiv.org/abs/2310.07369v1
# A differential Harnack inequality for noncompact evolving hypersurfaces ###### Abstract. We prove a differential Harnack inequality for noncompact convex hypersurfaces flowing with normal speed equal to a symmetric function of their principal curvatures. This extends a result of Andrews for compact hypersurfaces. We assume that the speed of motion is one-homogeneous, uniformly elliptic, and suitably 'uniformly' inverse-concave as a function of the principal curvatures. In addition, we assume the hypersurfaces satisfy pointwise scaling-invariant gradient estimates for the second fundamental form. For many natural flows all of these hypotheses are met by any ancient solution which arises as a blow-up of a singularity. ## 1. Introduction Differential Harnack inequalities for solutions to parabolic PDE were introduced by Li and Yau in their seminal paper [1]. Let \(u:M\times(0,T]\to\mathbb{R}\) denote a bounded positive solution to the heat equation, where \((M,g)\) is a compact Riemannian manifold with nonnegative Ricci curvature. The Li-Yau inequality asserts that \[\partial_{t}u-\frac{|\nabla u|^{2}}{u}+\frac{n}{2t}u\geq 0.\] This inequality is saturated by the Euclidean heat kernel. Upon integration over an appropriate path in spacetime, it recovers the classical Harnack inequality for solutions to the heat equation: for \(0<t_{0}<t_{1}\leq T\) we have \[u(x_{1},t_{1})\geq\left(\frac{t_{0}}{t_{1}}\right)^{n/2}\exp\bigg{(}-\frac{d_ {g}(x_{0},x_{1})^{2}}{4(t_{1}-t_{0})}\bigg{)}\,u(x_{0},t_{0}).\] Analogues of the Li-Yau differential Harnack inequality have since been found for many other equations, including geometric flows. Hamilton found remarkable Harnack inequalities for the curvature of solutions to the Ricci flow with nonnegative curvature operator [1] (see also [1]), and for weakly convex solutions of the mean curvature flow [1]. These inequalities play a fundamental role in understanding singularity formation, and have therefore had profound implications in geometry. Chow proved the analogue of Hamilton's Harnack inequality for compact, strictly convex hypersurfaces flowing by powers of their Gauss curvature [1]. Andrews generalised this result to a large class of fully nonlinear flows [2]. Solutions to these flows move with speed equal to a general symmetric homogeneous function of their principal curvatures. Andrews' estimate applies, in particular, when the speed function is homogeneous of degree one and inverse-concave. Solutions to curvature flows which arise as dilations of singularities are often noncompact, and it is desirable to have a Harnack inequality which applies to these. In the present paper we extend Andrews' differential Harnack inequality to noncompact convex hypersurfaces, provided the speed of motion is homogeneous of degree one, uniformly elliptic, and suitably 'uniformly' inverse-concave. In addition, we need to assume the hypersurfaces satisfy pointwise scaling-invariant gradient estimates. For certain flows all of these hypotheses are known to be satisfied by any ancient solution which arises as a blow-up of a singularity. For example, our Harnack inequality applies to blow-ups of compact embedded solutions to the flows introduced in [1] and [11]. Let us further discuss the flow introduced in [1]. 
There Brendle and Huisken studied domains in a Riemannian \((n+1)\)-manifold whose boundaries move inward with speed equal to \[\gamma(\lambda)=\bigg{(}\sum_{i<j}\frac{1}{\lambda_{i}+\lambda_{j}}\bigg{)}^{- 1},\] where the \(\lambda_{i}\) are the principal curvatures. By implementing a surgery procedure for this flow, they were able to classify compact Riemannian manifolds with strictly two-convex boundary and nonnegative curvature in the sense that \(R_{ikik}+R_{jkjk}\geq 0\). Namely, these spaces are all diffeomorphic to a standard ball or \(1\)-handlebody. In forthcoming work with Cogo and Vicanek Martinez we completely classify the ancient solutions which can arise as blow-ups at a singularity of this flow--these are the shrinking round sphere \(S^{n}\), the shrinking round cylinder \(\mathbb{R}\times S^{n-1}\), and the unique rotationally symmetric translating soliton. The corresponding result for mean curvature flow is due to Brendle and Choi [1, 1]. The differential Harnack inequality proven in this paper is an important ingredient in our proof. ### Main results Let \(\Gamma\subset\mathbb{R}^{n}\) denote an open, symmetric (under permutations), convex cone. We assume that \(\Gamma\) contains the positive cone \(\mathbb{R}^{n}_{+}\). Fix a function \(\gamma:\Gamma\to\mathbb{R}\) which is smooth, positive, symmetric, strictly increasing in each argument, and homogeneous of degree one. We consider evolving immersions \(F:M\times I\to\mathbb{R}^{n+1}\), \(I\subset\mathbb{R}\), which satisfy the evolution equation \[\partial_{t}F(x,t)=-G(x,t)\nu(x,t), \tag{1}\] for every \((x,t)\in M\times I\), where \(G(x,t)=\gamma(\lambda(x,t))\) and \(\nu(x,t)\) is the outward unit normal. We write \(\lambda(x,t)\) for the principal curvatures of \(F\), i.e. the eigenvalues of the Weingarten map \[A(X)=D_{X}\nu,\] at \((x,t)\). These will be labeled so that \(\lambda_{1}\leq\dots\leq\lambda_{n}\). A solution to (1) is called uniformly \(k\)-convex if \[\inf_{M\times I}\frac{\lambda_{1}+\dots+\lambda_{k}}{H}>0,\] where \(H\) is the mean curvature. The metric induced on \(M\) by the immersion \(F(\cdot,t)\) will be denoted \(g=g(t)\). The function \(\gamma\) gives rise to a smooth, \(O(n)\)-invariant function on the space of symmetric matrices with eigenvalues in \(\Gamma\). We also denote this function \(\gamma\). We say that \(\gamma\) is (strictly) inverse-concave if \(\lambda\mapsto-\gamma(\lambda^{-1})\) is (strictly) concave on \(\mathbb{R}^{n}_{+}\) or, equivalently, if \(A\mapsto-\gamma(A^{-1})\) is (strictly) concave on the space of positive-definite symmetric matrices. Inverse-concavity is equivalent to the pointwise inequality \[\bigg{(}\frac{\partial^{2}\gamma}{\partial A_{ij}\partial A_{kl}}(A)+2\frac{ \partial\gamma}{\partial A_{ik}}(A)A_{jl}^{-1}\bigg{)}S_{ij}S_{kl}\geq 0, \tag{2}\] for positive-definite symmetric \(A\) and symmetric \(S\). We now state the Harnack inequality. As mentioned above, this is new when \(M\) is noncompact. For compact \(M\), the result was proven in [1] under more general hypotheses--when \(M\) is compact (3), (4) and (5) are unnecessary. **Theorem 1.1**.: _Suppose \(\gamma\) is inverse-concave. Let \(F:M\times[0,T]\to\mathbb{R}^{n+1}\) be a complete solution to (1) which satisfies \(A\geq 0\). We assume there are positive constants \(C\) and \(\varepsilon\) such that at each point in \(M\times[0,T]\), with respect to an orthonormal frame, we have_ \[C^{-1}g_{ij}\leq\frac{\partial\gamma}{\partial A_{ij}}(A)\leq Cg_{ij}. 
\tag{3}\] _and_ \[\bigg{(}\frac{\partial^{2}\gamma}{\partial A_{ij}\partial A_{kl}}(A)+2\frac{ \partial\gamma}{\partial A_{ik}}(A)(A+\varepsilon Gg)_{jl}^{-1}\bigg{)}S_{ij} S_{kl}\geq 0 \tag{4}\] _for every symmetric \(S\). In addition, we assume bounded curvature_ \[\sup_{M\times[0,T]}G<\infty\] _and the gradient estimates_ \[\sup_{M\times[0,T]}G^{-2}|\nabla A|+G^{-3}|\nabla^{2}A|<\infty. \tag{5}\] _Then, for every \((x,t)\in M\times(0,T]\) and \(V\in T_{x}M\), we have_ \[\partial_{t}G+2\langle\nabla G,V\rangle+A(V,V)+\frac{G}{2t}\geq 0. \tag{6}\] _If \(\gamma\) is convex then (4) is unnecessary and instead of (5) it suffices to assume_ \[\sup_{M\times[0,T]}|\nabla G|+|\partial_{t}G|<\infty.\] Let us comment on the hypotheses (3), (4) and (5). The evolution of the Harnack quantity contains terms involving the Hessian of \(\gamma\), which need to be overcome in order to establish (6) using the maximum principle. If the solution is noncompact, we also need to be able to localise by introducing an auxhiliary function which grows at infinity. It is (3) and (4), and the gradient estimates (5), which let us achieve both of these things simultaneously. We refer to Remark 4.1 for further discussion. As stated in Theorem 1.1, if \(\gamma\) is convex then the assumption (4) is unnecessary and (5) can be weakened substantially. In fact, in this case the terms in the evolution of the Harnack quantity depending on the Hessian of \(\gamma\) have a favourable sign. This means Hamilton's proof for the mean curvature flow applies almost verbatim. We now introduce natural conditions under which (3), (4) and (5) are met. This leads to Theorem 1.2 below. _Pointwise gradient estimates for ancient solutions._ A solution to (1) is called ancient if it exists for all \(t\in(-\infty,T]\). Ancient solutions are of great interest, since they arise as models for singularity formation via rescaling. When \(\gamma\) is convex or concave, by work of Brendle and Huisken [1], a pointwise gradient estimate of the form \[G^{-2}|\nabla A|+G^{-3}|\nabla^{2}A|\leq\Lambda(n,\gamma,C,\alpha)\] holds for convex ancient solutions of (1) which satisfy (3) and are also \(\alpha\)-noncollapsing. The (interior) \(\alpha\)-noncollapsing property for solutions to (1) is a form of quantitative embeddedness. Let \(F(M,t)=M_{t}\) and suppose \(M_{t}=\partial\Omega_{t}\), where \(\Omega_{t}\) is an open subset of \(\mathbb{R}^{n+1}\). We say the solution \(M_{t}\) is \(\alpha\)-noncollapsing if there is a time-independent constant \(\alpha>0\) such that \(\Omega_{t}\) admits an inscribed ball of radius \(\alpha G(x,t)^{-1}\) for each \(x\in M_{t}\). In [1] it was shown that for compact solutions and \(\gamma\) concave, the noncollapsing property is preserved forward in time. When we do not need to refer to \(\alpha\) we simply say that \(M_{t}\) is noncollapsing. _Uniform inverse-concavity._ For each integer \(0\leq m\leq n\), we define \[\Gamma_{+}^{m}=\{\sigma(\lambda)\in\mathbb{R}^{n}:\lambda=(0,\lambda_{1}, \ldots,\lambda_{m}),\;\min_{i}\lambda_{i}>0,\;\sigma\in P_{n}\},\] where \(P_{n}\) is the group of permutations on \(n\) elements. For \(1\leq m\leq n-1\), \(\Gamma_{+}^{m}\) is the union of all of the \(m\)-dimensional facets of \(\partial\Gamma_{+}^{n}\). Each connected component of \(\Gamma_{+}^{m}\) can be identified with \(\mathbb{R}_{+}^{m}\). 
Since \(\Gamma\) is open, convex and contains \(\Gamma_{+}^{n}\), for each \(1\leq m\leq n\) we either have \(\Gamma_{+}^{m}\subset\Gamma\) or else \(\Gamma_{+}^{m}\cap\Gamma=\emptyset\). Let \(m_{*}\) denote the least integer such that \(\Gamma_{+}^{m}\subset\Gamma\). For each \(m_{*}\leq m\leq n\) we write \(\gamma_{m}:\mathbb{R}_{+}^{m}\to\mathbb{R}\) for the function \[\gamma_{m}(\lambda_{1},\ldots,\lambda_{m}):=\gamma(0,\lambda_{1},\ldots, \lambda_{m}).\] When \(\gamma\) is strictly inverse-concave we define \(m_{\rm IC}\geq m_{*}\) to be the least integer such that \(\gamma_{m}\) is strictly inverse-concave for every \(m_{\rm IC}\leq m\leq n\). In Section 2 we demonstrate that if the eigenvalues of \(A\geq 0\) satisfy \[\min_{0\leq m<m_{\rm IC}}\operatorname{dist}(\tfrac{\lambda}{|\lambda|}, \Gamma_{+}^{m})\geq\delta,\] then (3) and (4) hold for some positive \(C=C(n,\gamma,\delta)\) and \(\varepsilon=\varepsilon(n,\gamma,\delta)\). When \(A\geq 0\), this is equivalent to assuming that \(\lambda\) is uniformly \(k\)-positive with \(k=n-m_{\rm IC}+1\). Therefore, (3) and (4) hold on a solution to (1) which is uniformly \(k\)-convex with \(k\leq n-m_{\rm IC}+1\). As a result of all of this discussion, we have the following consequence of Theorem 1.1. **Theorem 1.2**.: _Suppose \(\gamma\) is convex, or concave and strictly inverse-concave. Let \(F:M\times(-\infty,0]\to\mathbb{R}^{n+1}\) be an ancient solution to (1). We assume the hypersurfaces \(M_{t}=F(M,t)\) each bound an open convex subset \(\Omega_{t}\). In addition, for each \(T<\infty\), we assume \(M_{t}\) is noncollapsing and uniformly \(k\)-convex on the time interval \([-T,0]\), where \(k\leq n-m_{\rm IC}+1\). Finally, we assume that_ \[\sup_{M\times[-T,0]}G<\infty\] _for each \(T<\infty\). Then, for every \((x,t)\in M\times(-\infty,0]\) and \(V\in T_{x}M\), we have_ \[\partial_{t}G+2\langle\nabla G,V\rangle+A(V,V)\geq 0. \tag{7}\] ### Examples Consider the concave speeds (cf. [1]) given by \[\gamma(\lambda)=\bigg{(}\sum_{i_{1}<\cdots<i_{k}}\frac{1}{\lambda_{i_{1}}+ \cdots+\lambda_{i_{k}}}\bigg{)}^{-1},\] for \(k\geq 2\) and \(n\geq k+1\). In this case we may take \[\Gamma=\Big{\{}\lambda:\min_{i_{1}<\cdots<i_{k}}\lambda_{i_{1}}+\cdots+ \lambda_{i_{k}}>0\Big{\}}.\] We then have \(m_{\rm IC}=m_{*}=n-k+1\), so Theorem 1.2 applies to convex ancient solutions of (1) which are noncollapsing and uniformly \(k\)-convex. Consider the ratios of elementary symmetric polynomials \(\gamma=\sigma_{k}/\sigma_{k-1}\) for \(k\geq 2\) and \(n\geq k+1\). We may take \(\Gamma\) to be the cone where \(\sigma_{k}>0\), in which case \(m_{*}=k\) and \(m_{\rm IC}=k+1\) (the function \(\gamma_{k}\) is the harmonic mean, which is inverse-concave but not strictly inverse-concave). So Theorem 1.2 applies to convex ancient solutions which are noncollapsing and uniformly \((n-k)\)-convex. If \(\gamma:\Gamma\to\mathbb{R}\) is strictly inverse-concave, \(\beta:\Gamma\to\mathbb{R}\) is inverse-concave, and \(h:\mathbb{R}_{+}^{2}\to\mathbb{R}\) is inverse-concave, then the composition \[\lambda\mapsto h(\gamma(\lambda),\beta(\lambda))\] is strictly inverse-concave. Using this observation, one finds that Theorem 1.2 applies to convex, noncollapsing, uniformly \(k\)-convex ancient solutions to the flows introduced in [10]. This class includes all blow-up limits at a singularity of a compact embedded solution. ### Translating solitons Harnack inequalities are closely related to solitons. 
A solution to (1) is called a translating soliton if there is a constant vector \(\xi\) on \(\mathbb{R}^{n+1}\) such that the hypersurfaces \(M_{t}=F(M,t)\) satisfy \(M_{t}=M_{0}+t\xi\). Translating solitons are characterised by the identity \[G=-\langle\xi,\nu\rangle.\] Notice that when \(A>0\) we have \[2\langle\nabla G,V\rangle+A(V,V)\geq-A^{-1}(\nabla G,\nabla G),\] with equality for \(V=-A^{-1}(\nabla G)\). In this case (7) becomes \[\partial_{t}G-A^{-1}(\nabla G,\nabla G)\geq 0.\] If equality is attained here, and \(\gamma\) is strictly inverse-concave, then the solution is a translating soliton. **Corollary 1.3**.: _Suppose \(\gamma\) is strictly inverse-concave. Let \(F:M\times(-\infty,0]\to\mathbb{R}^{n+1}\) be an ancient solution to (1) such that \(A>0\). Suppose the Harnack inequality_ \[\partial_{t}G-A^{-1}(\nabla G,\nabla G)\geq 0\] _holds at each point in spacetime. In addition, we assume bounded curvature,_ \[\sup_{M\times[-T,0]}G<\infty,\] _and the uniform ellipticity condition (3) on \([-T,0]\) for each \(T<\infty\). If there is a point in spacetime at which_ \[\partial_{t}G-A^{-1}(\nabla G,\nabla G)=0,\] _then \(F\) is a translating soliton._ ### Pointwise Harnack estimate When \(A>0\), (6) implies a pointwise estimate comparing \(G\) at different points in spacetime. Indeed, assuming \(A>0\), the inequality (6) may be restated as \[\partial_{t}G-A^{-1}(\nabla G,\nabla G)+\frac{G}{2t}\geq 0.\] Given times \(0<t_{0}<t_{1}\leq T\), integration of this inequality yields \[G(x_{1},t_{1})\geq\left(\frac{t_{0}}{t_{1}}\right)^{\frac{1}{2}}\exp\bigg{(}-\frac{1}{4}\inf_{\gamma}\int_{t_{0}}^{t_{1}}\frac{A(\dot{\gamma},\dot{\gamma})}{G}\,dt\bigg{)}\,G(x_{0},t_{0}),\] where the infimum is over smooth paths \(\gamma:[t_{0},t_{1}]\to M\) satisfying \(\gamma(t_{0})=x_{0}\) and \(\gamma(t_{1})=x_{1}\). ### Acknowledgements The author is grateful to M. Langford for his valuable comments. ## 2. Uniform inverse-concavity In this section we establish conditions under which (3) and (4) hold. We begin by introducing some notation. We write \(\dot{\gamma}^{i}(\lambda)\) and \(\ddot{\gamma}^{ij}(\lambda)\) for derivatives with respect to eigenvalues, so that \[\frac{d}{ds}\bigg{|}_{s=0}\gamma(\lambda+s\mu)=\dot{\gamma}^{i}(\lambda)\mu_{i},\qquad\left.\frac{d}{ds}\right|_{s=0}\dot{\gamma}^{i}(\lambda+s\mu)=\ddot{\gamma}^{ij}(\lambda)\mu_{j},\] and write \(\dot{\gamma}^{ij}(A)\) and \(\ddot{\gamma}^{ij,kl}(A)\) for derivatives with respect to matrix entries, so that \[\left.\frac{d}{ds}\right|_{s=0}\gamma(A+sB)=\dot{\gamma}^{ij}(A)B_{ij},\qquad\left.\frac{d}{ds}\right|_{s=0}\dot{\gamma}^{ij}(A+sB)=\ddot{\gamma}^{ij,kl}(A)B_{kl}.\] When \(A\) is a diagonal matrix with entries \(\lambda\), \(\dot{\gamma}^{ij}(A)\) is also diagonal, with entries \(\dot{\gamma}^{i}(\lambda)\). Let \(\mathrm{Sym}(n)\) denote the space of symmetric \(n\times n\)-matrices. We assume that \(\gamma\) is strictly inverse-concave. That is, the function \(A\mapsto-\gamma(A^{-1})\) is strictly concave for positive-definite \(A\in\mathrm{Sym}(n)\). In terms of derivatives this means that \[(\ddot{\gamma}^{ij,kl}(A)+2\dot{\gamma}^{ik}(A)A_{jl}^{-1})S_{ij}S_{kl}>0\] for every positive-definite \(A\in\mathrm{Sym}(n)\) and every nonzero \(S\in\mathrm{Sym}(n)\). Since \(\gamma\) is homogeneous of degree one, its strict inverse-concavity is also equivalent to strict concavity of the function \(A\mapsto\gamma(A^{-1})^{-1}\) in non-radial directions.
In terms of derivatives, \[(\ddot{\gamma}^{ij,kl}(A)+2\dot{\gamma}^{ik}(A)A_{jl}^{-1}-2\gamma(A)^{-1} \dot{\gamma}^{ij}(A)\dot{\gamma}^{kl}(A))S_{ij}S_{kl}>0\] for every positive-definite \(A\in\mathrm{Sym}(n)\) and every \(S\in\mathrm{Sym}(n)\) which is not a multiple of \(A\). Let \(\Gamma^{\prime}\) be a closed, symmetric, convex cone which is contained in the closure of \(\Gamma_{+}^{n}\). We are interested in consequences of the property \[\min_{0\leq m<m_{\mathrm{IC}}}\left(\inf_{\lambda\in\Gamma^{\prime}}\mathrm{ dist}(\tfrac{\lambda}{|\lambda|},\Gamma_{+}^{m})\right)>0. \tag{8}\] **Lemma 2.1**.: _Suppose \(\Gamma^{\prime}\) satisfies (8). We then have_ \[\inf_{\lambda\in\Gamma^{\prime}}\mathrm{dist}(\tfrac{\lambda}{|\lambda|}, \partial\Gamma)>0.\] Proof.: By definition, \(\Gamma_{+}^{m}\cap\Gamma=\emptyset\) for \(m<m_{*}\), and \(\Gamma_{+}^{m}\subset\Gamma\) for \(m\geq m_{*}\). It follows that \[\partial\Gamma\cap\bar{\Gamma}_{+}^{n}=\bigcup_{0\leq m<m_{*}}\Gamma_{+}^{m}. \tag{9}\] The property (8) implies \[\min_{0\leq m<m_{*}}\left(\inf_{\lambda\in\Gamma^{\prime}}\operatorname{dist}( \tfrac{\lambda}{|\lambda|},\Gamma_{+}^{m})\right)>0. \tag{10}\] Indeed, we have \(m_{*}\leq m_{\operatorname{IC}}\) by definition. Define \[\delta:=\inf_{\lambda\in\Gamma^{\prime}}\operatorname{dist}(\tfrac{\lambda}{| \lambda|},\partial\Gamma).\] The claim is that \(\delta>0\). There is a sequence \(\lambda^{(k)}\in\Gamma^{\prime}\) such that \(|\lambda^{(k)}|=1\) and \(\operatorname{dist}(\lambda^{(k)},\partial\Gamma)\to\delta\) as \(k\to\infty\). Since \(\Gamma^{\prime}\) is closed, we may assume \(\lambda^{(k)}\) converges to some \(\lambda\in\Gamma^{\prime}\). Consider the possibility that \(\lambda\in\partial\Gamma\). In this case (9) implies \(\lambda\in\Gamma_{+}^{m}\) for some \(1\leq m<m_{*}\). But due to (10) this is impossible. So we must have \(\lambda\in\Gamma\), and hence \(\delta>0\). Given a symmetric matrix \(A\), we define \(\lambda(A)\) to be the eigenvalues of \(A\). We define \(\lambda^{-1}(\Gamma^{\prime})\) to be the space of symmetric matrices with eigenvalues in \(\Gamma^{\prime}\). In addition, \(\lambda^{-1}(\Gamma^{\prime}\cap\Gamma_{+}^{n})\) will denote the space of positive-definite symmetric matrices with eigenvalues in \(\Gamma^{\prime}\). Lemma 2.1 shows that if (8) holds then \(\{\lambda\in\Gamma^{\prime}:|\lambda|=1\}\) is a compact subset of \(\Gamma\). Since \(\gamma\) is homogeneous of degree one we conclude that if (8) holds then there is a constant \(C=C(n,\gamma,\Gamma^{\prime})\) such that \[C^{-1}|\xi|^{2}\leq\dot{\gamma}^{ij}(A)\xi_{i}\xi_{j}\leq C|\xi|^{2},\qquad \gamma(A)\ddot{\gamma}^{ij,kl}(A)S_{ij}S_{kl}\geq-C|S|^{2}\] for every \(A\in\lambda^{-1}(\Gamma^{\prime})\), \(\xi\in\mathbb{R}^{n}\) and \(S\in\operatorname{Sym}(n)\). **Lemma 2.2**.: _Suppose (8) holds. There is then a positive constant \(\kappa=\kappa(n,\gamma,\Gamma^{\prime})\) such that_ \[\gamma(A)(\ddot{\gamma}^{ij,kl}(A)+2\dot{\gamma}^{ik}(A)A_{jl}^{-1})S_{ij}S_{ kl}\geq\kappa|S|^{2} \tag{11}\] _for every \(A\in\lambda^{-1}(\Gamma^{\prime}\cap\Gamma_{+}^{n})\) and \(S\in\operatorname{Sym}(n)\)._ Proof.: Let us define \[\kappa:=\inf_{A\in\lambda^{-1}(\Gamma^{\prime}\cap\Gamma_{+}^{n}),\,S\in \operatorname{Sym}(n)}|S|^{-2}\gamma(A)(\ddot{\gamma}^{ij,pq}(A)+2\dot{\gamma} ^{ip}(A)A_{jq}^{-1})S_{ij}S_{pq}.\] We claim that \(\kappa\) is positive. 
Let \(A^{(k)}\in\lambda^{-1}(\Gamma^{\prime}\cap\Gamma_{+}^{n})\) and \(S^{(k)}\in\operatorname{Sym}(n)\) be sequences such that \[|S^{(k)}|^{-2}\gamma(A^{(k)})(\ddot{\gamma}^{ij,pq}(A^{(k)})+2\dot{\gamma}^{ip }(A^{(k)})(A^{(k)})_{jq}^{-1})S_{ij}^{(k)}S_{pq}^{(k)}\to\kappa\] as \(k\to\infty\). Since \(\gamma\) is homogeneous of degree one, we may assume without loss of generality that \(|A^{(k)}|=1\) and \(|S^{(k)}|=1\). Since \(\gamma\) is \(O(n)\)-invariant we may also assume \(A^{(k)}\) is diagonal. By passing to a subsequence we can arrange that \(A^{(k)}\) converges to some \(\tilde{A}\), and that \(S^{(k)}\) converges to some \(\tilde{S}\). If \(\lambda(\tilde{A})\in\Gamma_{+}^{n}\) then we are done--since \(\gamma\) is strictly inverse-concave, it then follows that \(\kappa>0\). Suppose instead that \(\lambda(A)\in\Gamma_{+}^{m}\) for some \(1\leq m\leq n-1\). Since \(A^{(k)}\) in \(\lambda^{-1}(\Gamma^{\prime})\), (8) implies \(m\geq m_{\operatorname{IC}}\). Therefore, the function \(\gamma_{m}\) is strictly inverse-concave. To ease notation, let us write \(A=A^{(k)}\) and \(S=S^{(k)}\). Let \(X\) be the \(n\times n\)-matrix whose entries \(X_{ij}=S_{ij}\) when \(i,j\leq n-m\) and vanish otherwise. Let \(Z\) be the \(n\times n\)-matrix whose entries \(Z_{ij}=S_{ij}\) when \(i,j>n-m\) and vanish otherwise. We then define \(Y:=S-X-Z\). By Lemma 2.1 there is a constant \(C\) depending only on \(n\), \(\gamma\) and \(\Gamma^{\prime}\) such that \[\ddot{\gamma}^{ij,kl}(A)S_{ij}S_{pq}\geq-C.\] In addition, writing \(\lambda=\lambda(A)\), we have \[2\gamma^{ip}(A)A_{jq}^{-1}S_{ij}S_{pq} =2\sum_{i,j}\frac{\dot{\gamma}^{i}(\lambda)}{\lambda_{j}}|S_{ij} |^{2}\] \[=2\sum_{i,j\leq n-m}\frac{\dot{\gamma}^{i}(\lambda)}{\lambda_{j} }|X_{ij}|^{2}+2\sum_{i\leq n-m,j>n-m}\frac{\dot{\gamma}^{i}(\lambda)}{\lambda_ {j}}|Y_{ij}|^{2}\] \[+2\sum_{i>n-m,j\leq n-m}\frac{\dot{\gamma}^{i}(\lambda)}{\lambda_ {j}}|Y_{ij}|^{2}+2\sum_{i,j>n-m}\frac{\dot{\gamma}^{i}(\lambda)}{\lambda_{j}}| Z_{ij}|^{2}.\] Since \(\lambda_{i}\to 0\) for every \(1\leq i\leq n-m\), and \(\kappa<\infty\), we conclude that \(|X|\to 0\) and \(|Y|\to 0\) as \(k\to\infty\). In particular, \(\tilde{S}_{ij}=0\) unless \(i,j>n-m\). To finish we combine the inequalities \[\ddot{\gamma}^{ij,pq}(A)S_{ij}S_{pq}\geq\ddot{\gamma}^{ij,pq}(A)Z_{ij}Z_{pq}-C (|X|^{2}+|Y|^{2}+|X||Y|+|X||Z|+|Y||Z|)\] and \[2\gamma^{ip}(A)A_{jq}^{-1}S_{ij}S_{pq}\geq 2\sum_{i,j>n-m}\frac{\dot{\gamma}^{i }(\lambda)}{\lambda_{j}}|Z_{ij}|^{2}=2\dot{\gamma}^{ip}(A)A_{jq}^{-1}Z_{ij}Z_ {pq}\] in order to obtain \[(\ddot{\gamma}^{ij,pq}(A)+2\gamma^{ip}(A)A_{jq}^{-1}) S_{ij}S_{pq}\] \[\geq(\ddot{\gamma}^{ij,pq}(A)+2\gamma^{ip}(A)A_{jq}^{-1})Z_{ij}Z_ {pq}\] \[-C(|X|^{2}+|Y|^{2}+|X||Y|+|X||Z|+C|Y||Z|).\] Let us write \(\hat{A}\) and \(\hat{S}\) for the \(m\times m\)-matrices which coincide with the lower-right \(m\times m\)-blocks of \(\tilde{A}\) and \(\tilde{S}\), respectively. Sending \(k\to\infty\) in the last inequality then yields \[\kappa=(\dot{\gamma}^{ij,pq}_{m}(\hat{A})+2\gamma^{ip}_{m}(\hat{A})\hat{A}_{jq }^{-1})\hat{S}_{ij}\hat{S}_{pq}.\] Since \(\gamma_{m}\) is strictly inverse-concave and \(|\hat{S}|=1\) we conclude that \(\kappa>0\). Next we use Lemma 2.2 to show that (8) implies (4). **Lemma 2.3**.: _Suppose (8) holds. 
There is then a positive constant \(\varepsilon=\varepsilon(n,\gamma,\Gamma^{\prime})\) such that_ \[\gamma(A)(\ddot{\gamma}^{ij,kl}(A)+2\dot{\gamma}^{ik}(A)(A+\varepsilon\gamma( A)I)_{jl}^{-1})S_{ij}S_{kl}\geq 0 \tag{12}\] _for every \(A\in\lambda^{-1}(\Gamma^{\prime})\) and symmetric matrix \(S\)._ Proof.: Consider an arbitrary \(A\in\lambda^{-1}(\Gamma^{\prime})\). Set \(\lambda=\lambda(A)\). Let \(\theta\) be a small positive constant whose value will be fixed later, and denote by \(\ell\) the integer such that \(\lambda_{i}\leq\theta\gamma(\lambda)\) for \(1\leq i\leq\ell\) and \(\lambda_{i}>\theta\gamma(\lambda)\) for \(\ell+1\leq i\leq n\). Fix an arbitrary \(S\in\operatorname{Sym}(n)\) and write \(S=X+Y+Z\), where \(X_{ij}=S_{ij}\) for \(i,j\leq\ell\) and \(X_{ij}=0\) otherwise, and \(Z_{ij}=S_{ij}\) for \(i,j>\ell\) and \(Z_{ij}=0\) otherwise. We may assume without loss of generality that \(A\) is diagonal and \(\gamma(A)=1\). We then have \[2\dot{\gamma}^{ip}(A)(A+\varepsilon I)^{-1}_{jq}S_{ij}S_{pq}\] \[=2\sum_{i,j}\frac{\dot{\gamma}^{i}(\lambda)}{\lambda_{j}+ \varepsilon}|S_{ij}|^{2}\] \[=\sum_{i,j\leq\ell}\frac{\dot{\gamma}^{i}(\lambda)}{\lambda_{j}+ \varepsilon}|X_{ij}|^{2}+2\sum_{i\leq\ell,\,j>\ell}\frac{\dot{\gamma}^{i}( \lambda)}{\lambda_{j}+\varepsilon}|Y_{ij}|^{2}\] \[+2\sum_{i>\ell,\,j\leq\ell}\frac{\dot{\gamma}^{i}(\lambda)}{ \lambda_{j}+\varepsilon}|Y_{ij}|^{2}+2\sum_{i,j>\ell}\frac{\dot{\gamma}^{i}( \lambda)}{\lambda_{j}+\varepsilon}|Z_{ij}|^{2}.\] By Lemma 2.1 we have \(\dot{\gamma}^{i}(\lambda)\geq C^{-1}\) for some positive \(C=C(n,\gamma,\Gamma^{\prime})\). Therefore, \[2\sum_{i,j>\ell}\frac{\dot{\gamma}^{i}(\lambda)}{\lambda_{j}+ \varepsilon}|Z_{ij}|^{2} =2\sum_{i,j>\ell}\frac{\dot{\gamma}^{i}(\lambda)}{\lambda_{j}}|Z _{ij}|^{2}-2\varepsilon\sum_{i,j>\ell}\frac{\dot{\gamma}^{i}(\lambda)}{\lambda _{j}(\lambda_{j}+\varepsilon)}|Z_{ij}|^{2}\] \[\geq 2\dot{\gamma}^{ip}(A)(A)^{-1}_{jq}Z_{ij}Z_{pq}-C\varepsilon \theta^{-1}(\theta+\varepsilon)^{-1}|Z|^{2},\] and hence \[2\dot{\gamma}^{ip}(A) (A+\varepsilon I)^{-1}_{jq}S_{ij}S_{pq}\] \[\geq 2\dot{\gamma}^{ip}(A)(A)^{-1}_{jq}Z_{ij}Z_{pq}+C^{-1}( \theta+\varepsilon)^{-1}(|X|^{2}+|Y|^{2})\] \[-C\varepsilon\theta^{-1}(\theta+\varepsilon)^{-1}|Z|^{2}.\] Combining this inequality with \[\ddot{\gamma}^{ij,pq}(A)S_{ij}S_{pq}\geq\dot{\gamma}^{ij,pq}(A)Z_{ij}Z_{pq}-C( |X|^{2}+|Y|^{2}),\] we obtain \[(\ddot{\gamma}^{ij,pq}(A)+2\dot{\gamma}^{ip}(A)(A+\varepsilon I) ^{-1}_{jq})S_{ij}S_{pq}\] \[\geq(\ddot{\gamma}^{ij,pq}(A)+2\dot{\gamma}^{ip}(A)(A)^{-1}_{jq}) Z_{ij}Z_{pq}-C\varepsilon\theta^{-1}(\theta+\varepsilon)^{-1}|Z|^{2}\] \[+(C^{-1}(\theta+\varepsilon)^{-1}-C)(|X|^{2}+|Y|^{2}).\] Lemma 2.2 now gives \[(\ddot{\gamma}^{ij,pq}(A)+2\dot{\gamma}^{ip}(A)(A+\varepsilon I) ^{-1}_{jq})S_{ij}S_{pq}\] \[\geq(\kappa-C\varepsilon\theta^{-1}(\theta+\varepsilon)^{-1})|Z| ^{2}+(C^{-1}(\theta+\varepsilon)^{-1}-C)(|X|^{2}+|Y|^{2}).\] The right-hand side is nonnegative for \(\theta<C^{-2}/2\) and \(\varepsilon<\min\{C^{-1}\kappa\theta^{2},\theta\}\). To conclude this section we show that for \(k=n-m_{\mathrm{IC}}+1\), uniform \(k\)-positivity implies (8). The converse is also true. We write \[\operatorname{tr}(\lambda)=\lambda_{1}+\dots+\lambda_{n}.\] **Lemma 2.4**.: _Let \(k=n-m_{\mathrm{IC}}+1\). The condition (8) holds if and only if_ \[\inf_{\lambda\in\Gamma}\bigg{(}\min_{i_{1}<\dots<i_{k}}\frac{\lambda_{i_{1}}+ \dots+\lambda_{i_{k}}}{\operatorname{tr}(\lambda)}\bigg{)}>0. \tag{13}\] Proof.: Suppose first that (13) holds. 
Let us define \[\delta=\inf_{0\leq m<m_{\rm IC}}\bigg{(}\inf_{\lambda\in\Gamma^{\prime}}\text{ dist}(\tfrac{\lambda}{|\lambda|},\Gamma^{m}_{+})\bigg{)}.\] The claim is that \(\delta>0\). If, to the contrary, \(\delta=0\), then there is a point \(\lambda\in\Gamma^{\prime}\) such that \(|\lambda|=1\) and \(\lambda\in\Gamma^{m}_{+}\) for some \(1\leq m<m_{\rm IC}\). We may assume \(\lambda_{1}\leq\cdots\leq\lambda_{n}\), in which case \(\lambda\in\Gamma^{m}_{+}\) implies \[0=\lambda_{1}=\cdots=\lambda_{n-m},\qquad 0<\lambda_{n-m+1}\leq\cdots\leq \lambda_{n}.\] On the other hand, by (13), we have \[\lambda_{1}+\cdots+\lambda_{k}>0.\] Since \(k=n-m_{\rm IC}+1<n-m+1\), this is a contradiction. Therefore, we must have \(\delta>0\). Now suppose (8) holds. Let us define \[\rho=\inf_{\lambda\in\Gamma^{\prime}}\bigg{(}\min_{i_{1}<\cdots<i_{k}}\frac{ \lambda_{i_{1}}+\cdots+\lambda_{i_{k}}}{\text{tr}(\lambda)}\bigg{)}.\] The claim is that \(\rho>0\). Assume to the contrary \(\rho=0\). There is then a point \(\lambda\in\Gamma^{\prime}\) such that \(\text{tr}(\lambda)=1\), \(\lambda_{1}\leq\cdots\leq\lambda_{n}\), and \[0=\lambda_{1}=\cdots=\lambda_{k}.\] Using \(k=n-m_{\rm IC}+1\) we conclude that \(\lambda\in\Gamma^{m}_{+}\) for some \(m<m_{\rm IC}\). This contradicts (8), so we must have \(\rho>0\). ## 3. Evolution of the Harnack quantity Let \(F:M\times(0,T]\to\mathbb{R}^{n+1}\) a solution to (1). Let \(g=g(t)\) denote the metric on \(M\) induced by \(F(\cdot,t)\). We define a time derivative acting on vector fields by \[\nabla_{t}X^{i}:=\partial_{t}X^{i}-GA^{i}_{j}X^{j},\] and extend to tensors via the usual Leibniz rule. This time derivative has the property that \(\nabla_{t}g=0\). Given an orthonormal frame \(\{e_{i}\}\) of tangent vectors to \(M\), we write \[\Delta_{\gamma}=\dot{\gamma}^{ij}\nabla_{i}\nabla_{j},\qquad|A|^{2}_{\gamma}= \dot{\gamma}^{ij}A^{2}_{ij}=\dot{\gamma}^{ij}A_{ik}A_{kj}.\] The first variation formula for the second fundamental form asserts that \[\nabla_{t}A_{kl}=\nabla_{k}\nabla_{l}G+GA^{2}_{kl}.\] Simons' identity then yields the parabolic equation \[(\nabla_{t}-\Delta_{\gamma})A_{kl}=|A|^{2}_{\gamma}A_{kl}+\ddot{\gamma}^{ij, pq}\nabla_{k}A_{ij}\nabla_{l}A_{pq}.\] Since \(\gamma\) is homogeneous of degree one, tracing this formula with respect to \(\dot{\gamma}^{kl}\) gives \[(\partial_{t}-\Delta_{\gamma})G=|A|^{2}_{\gamma}G.\] The above evolution equations, together with the Gauss and Codazzi relations, imply the following commutation identities \[\Delta_{\gamma}\nabla_{k}f=\nabla_{k}\Delta_{\gamma}f+GA_{kl}\nabla^{l}f- \dot{\gamma}^{ij}A_{ik}A_{jl}\nabla_{l}f-\ddot{\gamma}^{ij,pq}\nabla_{i}\nabla _{j}f\nabla_{k}A_{pq}\] \[\partial_{t}\Delta_{\gamma}f =\Delta_{\gamma}\partial_{t}f+2G\dot{\gamma}^{ij}A_{ik}\nabla_{k} \nabla_{j}f+2\dot{\gamma}^{ij}A_{jk}\nabla_{i}G\nabla_{k}f\] \[+\ddot{\gamma}^{ij,kl}\nabla_{i}\nabla_{j}f\nabla_{t}A_{kl}.\] Using these, straightforward computations yield \[(\partial_{t}-\Delta_{\gamma})\partial_{t}G =|A|_{\gamma}^{2}\partial_{t}G+2G\dot{\gamma}^{ij}A_{ik}(2\nabla _{t}A_{kj}-GA_{jk}^{2}) \tag{14}\] \[+2\dot{\gamma}^{ij}A_{jk}\nabla_{i}G\nabla_{k}G+\ddot{\gamma}^{ ij,pq}\nabla_{t}A_{ij}\nabla_{t}A_{pq}\] and \[(\nabla_{t}-\Delta_{\gamma})\nabla_{k}G =|A|_{\gamma}^{2}\nabla_{k}G+\dot{\gamma}^{ij}A_{ik}A_{jl}\nabla _{l}G+2\dot{\gamma}^{ij}A_{il}\nabla_{k}A_{lj}G \tag{15}\] \[+\ddot{\gamma}^{ij,pq}\nabla_{t}A_{ij}\nabla_{k}A_{pq}.\] In case \(A>0\), one finds that \[(\nabla_{t}-\Delta_{\gamma})A_{kl}^{-1} =-|A|_{\gamma}^{2}A_{kl}^{-1}\] 
\[-(\ddot{\gamma}^{ij,pq}+2\dot{\gamma}^{ip}A_{jq}^{-1})A^{-1}(e_{k },\nabla A_{ij})A^{-1}(e_{l},\nabla A_{pq}).\] We combine this formula with (15) and the identity \[-4\dot{\gamma}^{ij}\nabla_{i}A^{-1}(\nabla G,\nabla_{j}\nabla G) =4\dot{\gamma}^{ip}A_{jq}^{-1}\nabla_{t}A_{ij}A^{-1}(\nabla G, \nabla A_{pq})\] \[-4G\dot{\gamma}^{ij}A_{il}A^{-1}(\nabla G,\nabla A_{lj}),\] in order to derive \[(\partial_{t} -\Delta_{\gamma})(A^{-1}(\nabla G,\nabla G))\] \[=|A|_{\gamma}^{2}A^{-1}(\nabla G,\nabla G)-2\dot{\gamma}^{ij}A^{- 1}(\nabla_{i}\nabla G,\nabla_{j}\nabla G)+2\dot{\gamma}^{ij}A_{jk}\nabla_{i}G \nabla_{k}G\] \[+2(\ddot{\gamma}^{ij,pq}+2\dot{\gamma}^{ip}A_{jq}^{-1})\nabla_{t} A_{ij}A^{-1}(\nabla G,\nabla A_{pq})\] \[-(\ddot{\gamma}^{ij,pq}+2\dot{\gamma}^{ip}A_{jq}^{-1})A^{-1}( \nabla G,\nabla A_{ij})A^{-1}(\nabla G,\nabla A_{pq}).\] Combining this with (14) and simplifying, we obtain \[(\partial_{t}-\Delta_{\gamma}) (\partial_{t}G-A^{-1}(\nabla G,\nabla G))\] \[=|A|_{\gamma}^{2}(\partial_{t}G-A^{-1}(\nabla G,\nabla G))\] \[+(\ddot{\gamma}^{ij,pq}+2\dot{\gamma}^{ip}A_{jq}^{-1})(\nabla_{t} A_{ij}-A^{-1}(\nabla G,\nabla A_{ij}))(\nabla_{t}A_{pq}-A^{-1}(\nabla G, \nabla A_{pq})).\] Consequently, \[(\partial_{t}-\Delta_{\gamma}) \bigg{(}\partial_{t}G-A^{-1}(\nabla G,\nabla G)+\frac{1}{2t}G \bigg{)}\] \[=\bigg{(}|A|_{\gamma}^{2}-\frac{2}{t}\bigg{)}\bigg{(}\partial_{t} G-A^{-1}(\nabla G,\nabla G)+\frac{1}{2t}G\bigg{)}\] \[+\frac{2}{t}(\partial_{t}G-A^{-1}(\nabla G,\nabla G))+\frac{1}{2t ^{2}}G\] \[+(\ddot{\gamma}^{ij,pq}+2\dot{\gamma}^{ip}A_{jq}^{-1})(\nabla_{t} A_{ij}-A^{-1}(\nabla G,\nabla A_{ij}))(\nabla_{t}A_{pq}-A^{-1}(\nabla G,\nabla A_{ pq})).\] In case \(A>0\) and \(\gamma\) is inverse-concave, we have \[(\ddot{\gamma}^{ij,pq}+2\dot{\gamma}^{ip}A_{jq}^{-1})(\nabla_{t}A_{ ij}-A^{-1}(\nabla G,\nabla A_{ij}))(\nabla_{t}A_{pq}-A^{-1}(\nabla G,\nabla A_{pq}))\] \[\geq\frac{2}{G}(\partial_{t}G-A^{-1}(\nabla G,\nabla G))^{2}.\] Inserting this above yields \[(\partial_{t}-\Delta_{\gamma})\biggl{(} \partial_{t}G-A^{-1}(\nabla G,\nabla G)+\frac{1}{2t}G\biggr{)}\] \[\geq\biggl{(}|A|_{\gamma}^{2}-\frac{2}{t}\biggr{)}\biggl{(} \partial_{t}G-A^{-1}(\nabla G,\nabla G)+\frac{1}{2t}G\biggr{)} \tag{16}\] \[+\frac{2}{G}\biggl{(}\partial_{t}G-A^{-1}(\nabla G,\nabla G)+ \frac{1}{2t}G\biggr{)}^{2}.\] Andrews found this inequality in [1] by working in the Gauss map parameterization (this simplifies the computation considerably). By applying the parabolic maximum to (16), Andrews could prove the following Harnack estimate for compact, strictly convex solutions of (1): at each point in spacetime one has \[\partial_{t}G-A^{-1}(\nabla G,\nabla G)+\frac{1}{2t}G\geq 0.\] This implies (6). The inequality (16) does not seem to be suitable for proving a Harnack inequality for noncompact solutions--various difficulties arise from the fact that \(A^{-1}(\nabla G,\nabla G)\) might be unbounded. To prove Theorem 1.1, we modify Hamilton's approach to the Harnack estimate for mean curvature flow [10]. That is, we define a form \(Q\) acting on tangent vectors by \[Q(V):=\partial_{t}G+2\langle\nabla G,V\rangle+A(V,V)+\frac{1}{2t}G,\] and use the maximum principle to conclude that \(Q\) is nonnegative. The key advantage of this is that at a spacetime minimum of \(Q\), given any minimizing vector \(V\), there is some freedom in how we extend \(V\) to a neighbourhood. This freedom can be exploited to introduce terms which are favourable, but suitably controlled--see Remark 4.1. 
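Note that, when \(A>0\), nonnegativity of \(Q(V)\) for every \(V\) is equivalent to nonnegativity of the quantity appearing in (16): completing the square gives
\[2\langle\nabla G,V\rangle+A(V,V)=A\big{(}V+A^{-1}(\nabla G),V+A^{-1}(\nabla G)\big{)}-A^{-1}(\nabla G,\nabla G)\geq-A^{-1}(\nabla G,\nabla G),\]
with equality for \(V=-A^{-1}(\nabla G)\), so that
\[\inf_{V}Q(V)=\partial_{t}G-A^{-1}(\nabla G,\nabla G)+\frac{1}{2t}G.\]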
The rest of the computations in this section apply to a general solution of (1). Let \(V\) be a time-dependent field of tangent vectors on \(M\). Following [10], we introduce tensors \[X_{i} =\nabla_{i}G+A_{ij}V^{j}\] \[Y_{ij} =\nabla_{i}V_{j}-GA_{ij}-\frac{1}{2t}g_{ij}\] \[W_{ij} =\nabla_{t}A_{ij}+V^{k}\nabla_{k}A_{ij}+\frac{1}{2t}A_{ij}\] \[U^{i} =(\nabla_{t}-\Delta_{\gamma})V^{i}+\dot{\gamma}^{ij}A_{jk}\nabla_ {k}G+\frac{1}{t}V^{i}.\] We compute at a point \((x_{0},t_{0})\) in spacetime with respect to a local orthonormal frame for which \(\nabla_{i}e_{j}=0\) and \(\nabla_{t}e_{i}=0\) at \((x_{0},t_{0})\). **Proposition 3.1**.: _At the point \((x_{0},t_{0})\) we have_ \[(\partial_{t} -\Delta_{\gamma})(Q(V))\] \[=\bigg{(}|A|_{\gamma}^{2}-\frac{2}{t}\bigg{)}Q(V)+2X_{i}U_{i}-4 \dot{\gamma}^{ij}Y_{ik}W_{jk}-2\dot{\gamma}^{ij}Y_{ik}A_{kl}Y_{jl} \tag{17}\] \[+\ddot{\gamma}^{ij,pq}(\nabla_{t}A_{ij}+V_{k}\nabla_{k}A_{ij})( \nabla_{t}A_{pq}+V_{l}\nabla_{l}A_{pq}).\] Proof.: A nice account of Hamilton's computation for the mean curvature flow is given in [1, Proposition 10.7]. We follow the exposition there. Note that \[Q(V)=\dot{\gamma}^{ij}W_{ij}+X_{i}V_{i}.\] Using (15) and the first variation formula \[\nabla_{t}A_{ij}=\nabla_{i}\nabla_{j}G+GA_{ij}^{2},\] we compute \[(\partial_{t}-\Delta_{\gamma})(X_{k}V_{k}) =|A|_{\gamma}^{2}X_{k}V_{k}+(X_{k}+A_{ik}V_{i})(\nabla_{t}- \Delta_{\gamma})V_{k}+2G\dot{\gamma}^{ij}A_{il}\nabla_{k}A_{lj}V_{k}\] \[+\dot{\gamma}^{ij}A_{il}A_{jk}V_{k}\nabla_{l}G-2\dot{\gamma}^{ij }(\nabla_{i}X_{k}+V_{l}\nabla_{i}A_{kl})\nabla_{j}V_{k}\] \[+\ddot{\gamma}^{ij,pq}V_{k}\nabla_{k}A_{ij}V_{l}\nabla_{l}A_{pq}+ \ddot{\gamma}^{ij,pq}\nabla_{t}A_{ij}V_{k}\nabla_{k}A_{pq}.\] Next we use (14) to derive \[(\partial_{t}-\Delta_{\gamma})(\dot{\gamma}^{ij}W_{ij}) =|A|_{\gamma}^{2}\dot{\gamma}^{ij}W_{ij}+\nabla_{i}G(\nabla_{t}- \Delta_{\gamma})V_{i}+\dot{\gamma}^{ij}A_{il}A_{jk}\nabla_{l}GV_{k}\] \[+2G\dot{\gamma}^{ij}A_{ik}(2\nabla_{t}A_{jk}-GA_{jk}^{2}+V_{l} \nabla_{l}A_{jk})\] \[-2\dot{\gamma}^{ij}\nabla_{i}V_{k}\nabla_{j}\nabla_{k}G+2\dot{ \gamma}^{ij}A_{jk}\nabla_{i}G\nabla_{k}G-\frac{1}{2t^{2}}G\] \[+\ddot{\gamma}^{ij,pq}\nabla_{t}A_{ij}\nabla_{t}A_{pq}+\dot{ \gamma}^{ij,pq}V_{k}\nabla_{k}A_{ij}\nabla_{t}A_{pq}.\] Summing these two formulae yields \[(\partial_{t}-\Delta_{\gamma})(Q(V)) =|A|_{\gamma}^{2}Q(V)+2X_{i}(\nabla_{t}-\Delta_{\gamma})V_{i}\] \[+2G\dot{\gamma}^{ij}A_{ik}(2\nabla_{t}A_{jk}-GA_{jk}^{2}+2V_{l} \nabla_{l}A_{jk})\] \[-2\dot{\gamma}^{ij}\nabla_{i}V_{k}(2\nabla_{t}A_{jk}-2GA_{jk}^{2 }+2\nabla_{k}A_{jl}V_{l}+A_{kl}\nabla_{j}V_{l})\] \[+2\dot{\gamma}^{ij}A_{il}A_{jk}V_{k}\nabla_{l}G+2\dot{\gamma}^{ij }A_{jk}\nabla_{i}G\nabla_{k}G-\frac{1}{2t^{2}}G\] \[+\ddot{\gamma}^{ij,pq}(\nabla_{t}A_{ij}+V_{k}\nabla_{k}A_{ij})( \nabla_{t}A_{pq}+V_{l}\nabla_{l}A_{pq}).\] We may rewrite this as \[(\partial_{t} -\Delta_{\gamma})(Q(V))\] \[=\bigg{(}|A|_{\gamma}^{2}-\frac{2}{t}\bigg{)}Q(V)+2X_{i}U_{i}-4 \dot{\gamma}^{ij}Y_{ik}W_{jk}-2\dot{\gamma}^{ij}\nabla_{i}V_{k}A_{kl}Y_{jl}\] \[-2G\dot{\gamma}^{ij}\bigg{(}\frac{1}{2t}A_{jk}+GA_{jk}^{2}\bigg{)} +2\dot{\gamma}^{ij}\nabla_{i}V_{k}\bigg{(}\frac{1}{2t}A_{jk}+GA_{jk}^{2}\bigg{)}\] \[+2\dot{\gamma}^{ij}A_{ik}\nabla_{k}G(A_{jl}V_{l}+\nabla_{j}G-X_{j })-\frac{1}{t}\dot{\gamma}^{ij}\bigg{(}\frac{1}{2t}A_{ij}+GA_{ij}^{2}\bigg{)}\] \[+\ddot{\gamma}^{ij,pq}(\nabla_{t}A_{ij}+V_{k}\nabla_{k}A_{ij})( \nabla_{t}A_{pq}+V_{l}\nabla_{l}A_{pq}).\] The first term on the penultimate line is zero. Collecting the remaining terms, the claim follows. 
Notice that if \(A>0\) then (16) is recovered when we set \(V=-A^{-1}(\nabla G)\) in (17). ## 4. Application of the maximum principle We now prove Theorem 1.1. The function \(\gamma\) is assumed to be inverse-concave (when \(\gamma\) is convex the proof is analogous but much simpler). Let \(F:M\times[0,T]\to\mathbb{R}^{n+1}\) be a complete solution to (1) such that \(A\geq 0\). We assume that (3) and (4) hold on \(M\times[0,T]\) for some positive constants \(C\) and \(\varepsilon\). That is, with respect to any orthonormal frame, \[C^{-1}g_{ij}\leq\dot{\gamma}^{ij}\leq Cg_{ij},\] and \[(\ddot{\gamma}^{ij,kl}+2\dot{\gamma}^{ik}(A+\varepsilon Gg)_{jl}^{-1})S_{ij} S_{kl}\geq 0\] for every symmetric \(S\). We also assume bounded curvature, \[\Xi:=\sup_{M\times[0,T]}G<\infty,\] and pointwise gradient estimates of the form (5), \[\Lambda:=\sup_{M\times[0,T]}G^{-2}|\nabla A|+G^{-3}|\nabla^{2}A|<\infty.\] We define \[\hat{Q}(V)=Q(V)+\varphi+\psi|V|^{2}\] on \(M\times(0,T]\), where \(\varphi(x,t)\) and \(\psi(t)\) are functions to be chosen later. For now we only assume that \(\varphi\) and \(\psi\) are bounded from below by positive constants, and that \(\varphi\) grows at spatial infinity in the following sense: for some \(p_{0}\in M\), \[\lim_{R\to\infty}\inf_{(x,t)\in M\setminus B_{g(0)}(p_{0},R)\times(0,T]} \varphi(x,t)=\infty. \tag{18}\] Since \(A\geq 0\) and the remaining terms in \(Q\) are bounded, these assumptions ensure that \(\hat{Q}>0\) on \(M\) at times close to zero. Suppose, with the aim of deriving a contradiction, that \(\hat{Q}\) fails to be positive on \(M\times[0,T]\). Then there exists a spacetime point \((x_{0},t_{0})\) such that \(\hat{Q}>0\) for \(0<t<t_{0}\) and \(\hat{Q}(V)=0\) at \((x_{0},t_{0})\) for some \(V\in T_{x_{0}}M\). For any extension of \(V\), Proposition 3.1 shows that at \((x_{0},t_{0})\) we have \[0 \geq-4\dot{\gamma}^{ij}Y_{ik}W_{jk}-2\dot{\gamma}^{ij}Y_{ik}A_{kl }Y_{jl}\] \[+\ddot{\gamma}^{ij,pq}(\nabla_{t}A_{ij}+V_{k}\nabla_{k}A_{ij})( \nabla_{t}A_{pq}+V_{l}\nabla_{l}A_{pq})\] \[+(\partial_{t}-\Delta_{\gamma})\varphi+\partial_{t}\psi|V|^{2}+2 \psi V_{i}(\nabla_{t}-\Delta_{\gamma})V_{i}\] \[-2\psi\dot{\gamma}^{ij}\nabla_{i}V_{k}\nabla_{j}V_{k}+\bigg{(} \frac{2}{t}-|A|_{\gamma}^{2}\bigg{)}(\varphi+\psi|V|^{2}).\] **Remark 4.1**.: _To derive a contradiction we would like to choose \(\varphi\) and \(\psi\), and an extension of \(V\), so that the right-hand side of the last inequality is positive. Given any matrix \(B\), we are free to extend \(V\) so that, at the point \((x_{0},t_{0})\),_ \[Y_{ij}=B_{ij},\qquad U=0.\] _In case \(\gamma\) is convex it is sufficient to choose \(B=0\). If \(A>0\) and \(\gamma\) is inverse-concave, choosing \(B_{ij}=-W_{ik}A_{kj}^{-1}\) results in a positive term which overcomes the term involving second derivatives of \(\gamma\). But when the solution is noncompact, \(A^{-1}\) is unbounded, and this introduces errors which cannot be absorbed using \(\varphi\) and \(\psi\). This is why we assume (3), (4) and (5). We extend \(V\) so that_ \[Y_{ij}=-W_{ik}(A+\varepsilon Gg)_{kj}^{-1} \tag{19}\] _at \((x_{0},t_{0})\), where \(\varepsilon\) is the constant in (4). This is sufficient to overcome the term involving second derivatives of \(\gamma\), and only introduces errors which are bounded in terms of \(C\), \(\Xi\), \(\Lambda\) and \(\varepsilon\)._ In what follows \(\Theta\) is a large constant which may depend on \(n\), \(\gamma\), \(C\), \(\Xi\) and \(\Lambda\). The value of \(\Theta\) may change from line to line. 
Inserting (19), we obtain \[-4\dot{\gamma}^{ij}Y_{ik}W_{jk}-2\dot{\gamma}^{ij}Y_{ik}A_{kl}Y_{jl}=2\dot{ \gamma}^{ik}(A+\varepsilon Gg)_{jl}^{-1}W_{ij}W_{kl}+2\varepsilon G\dot{ \gamma}^{ij}Y_{ik}Y_{jk}.\] Since \(\gamma\) is inverse-concave and homogeneous of degree one, if \(A\) is positive-definite at \((x_{0},t_{0})\) then we have \[\ddot{\gamma}^{ij,pq}A_{ij}S_{pq}=-(2\dot{\gamma}^{ip}A_{jq}^{-1}-2G^{-1}\dot{ \gamma}^{ij}\dot{\gamma}^{pq})A_{ij}S_{pq}=0\] for every symmetric matrix \(S\). Actually the same holds even if \(A\) is only nonnegative--to see this, approximate \(A_{ij}\) by a sequence of positive-definite symmetric matrices and pass to the limit. It follows that \[\ddot{\gamma}^{ij,pq}(\nabla_{t}A_{ij}+V_{k}\nabla_{k}A_{ij})(\nabla_{t}A_{pq }+V_{l}\nabla_{l}A_{pq})=\ddot{\gamma}^{ij,pq}W_{ij}W_{pq},\] and hence \[\ddot{\gamma}^{ij,pq} (\nabla_{t}A_{ij}+V_{k}\nabla_{k}A_{ij})(\nabla_{t}A_{pq}+V_{l} \nabla_{l}A_{pq})-4\dot{\gamma}^{ij}Y_{ik}W_{jk}-2\dot{\gamma}^{ij}A_{jl}Y_{ ki}Y_{kl}\] \[=(\ddot{\gamma}^{ij,pq}+2\dot{\gamma}^{ip}(A+\varepsilon Gg)_{jq }^{-1})W_{ij}W_{pq}+2\varepsilon G\dot{\gamma}^{ij}Y_{ik}Y_{jk}.\] The right-hand side is nonnegative by hypothesis. Moreover, if we extend \(V\) so that \(U=0\) at \((x_{0},t_{0})\), then \[2\psi V_{i}(\nabla_{t}-\Delta_{\gamma})V_{i}\geq-\Theta\psi|V|-\frac{2}{t} \psi|V|^{2}.\] Combining these facts we find that \[0\geq(\partial_{t}-\Delta_{\gamma})\varphi+(\partial_{t}\psi-\Theta\psi)|V|^{2 }-2\psi\dot{\gamma}^{ij}\nabla_{i}V_{k}\nabla_{j}V_{k}+\bigg{(}\frac{2}{t}-|A| _{\gamma}^{2}\bigg{)}\varphi-\Theta\psi\] at \((x_{0},t_{0})\). We may assume without loss of generality \(\varepsilon<1\). Since \[G|(A+\varepsilon Gg)^{-1}|\leq\Theta\varepsilon^{-1},\] we have \[\dot{\gamma}^{ij}\nabla_{i}V_{k} \nabla_{j}V_{k}\] \[\leq\Theta(|Y|^{2}+G^{2}|A|^{2}+t^{-2})\] \[\leq\Theta\varepsilon^{-2}G^{-2}|W|^{2}+\Theta(t^{-2}+1)\] \[\leq\Theta\varepsilon^{-2}(G^{-2}|\nabla^{2}A|^{2}+G^{-2}|\nabla A |^{2}|V|^{2}+G^{-2}|A|^{6}+t^{-2}G^{-2}|A|^{2})\] \[+\Theta(t^{-2}+1)\] \[\leq\Theta\varepsilon^{-2}(G^{-6}|\nabla^{2}A|^{2}+G^{-4}|\nabla A |^{2}|V|^{2}+t^{-2}+1).\] As a consequence of the pointwise gradient estimates, \[-2\psi\dot{\gamma}^{ij}\nabla_{i}V_{k}\nabla_{j}V_{k}\geq-\Theta\varepsilon^{ -2}\psi|V|^{2}-\Theta\varepsilon^{-2}\bigg{(}1+\frac{1}{t^{2}}\bigg{)}\psi.\] Therefore, at \((x_{0},t_{0})\) we have \[0\geq(\partial_{t}-\Delta_{\gamma})\varphi+(\partial_{t}\psi-\Theta \varepsilon^{-2}\psi)|V|^{2}+\bigg{(}\frac{2}{t}-\Theta\bigg{)}\varphi- \Theta\varepsilon^{-2}\bigg{(}1+\frac{1}{t^{2}}\bigg{)}\psi.\] Let \(b\), \(K\) and \(L\) be positive constants. For \(t\in(0,T]\) set \[\psi(t):=be^{Kt},\qquad\varphi(x,t):=bt^{-3/2}e^{Lt}f(x,t),\] where \(f(x,t)\geq 1\) is such that (18) holds and we have \[(\partial_{t}-\Delta_{\gamma})f\geq 2Lf.\] The construction of a function \(f\) with these properties is standard--see eg. [CCG\({}^{+}\)08, Lemma 12.7]. Inserting these definitions gives \[0\geq\bigg{(}\frac{1}{2t}+2L-\Theta\bigg{)}\frac{be^{Lt}}{t^{3/2}}f-\Theta \varepsilon^{-2}\bigg{(}1+\frac{1}{t^{2}}\bigg{)}be^{Kt}+(K-\Theta\varepsilon ^{-2})\psi|V|^{2}\] at \((x_{0},t_{0})\). For \(K>\Theta\varepsilon^{-2}\) and \(L>\max\{\Theta/2,K\}\) we conclude that \[0\geq\bigg{(}\frac{1}{2t}+2L-\Theta\bigg{)}\frac{1}{t^{3/2}}-\Theta\varepsilon ^{-2}\bigg{(}1+\frac{1}{t^{2}}\bigg{)}.\] at \(t=t_{0}\). But if \(L\) is sufficiently large the quantity on the right is positive for every \(t\in(0,T]\). This is a contradiction. 
We thus conclude that \(\hat{Q}(V)>0\) for every point \((x,t)\in M\times(0,T]\) and tangent vector \(V\in T_{x}M\). Sending \(b\to 0\), we obtain \(Q(V)\geq 0\). With this the proof of Theorem 1.1 is complete. Theorem 1.2 is proven as follows. Here we are assuming \(\gamma\) is convex, or concave and strictly inverse-concave. Suppose \(F\) is a convex ancient solution satisfying the hypotheses of the theorem. Fix a \(T<\infty\). Since \(F\) is assumed to be uniformly \(k\)-convex on \([-T,0]\), with \(k\leq n-m_{\rm IC}+1\), Lemma 2.1 implies there is a constant \(C\) such that (3) holds for \(t\in[-T,0]\). Moreover, Lemma 2.3 implies there is a constant \(\varepsilon\) such that (4) holds for \(t\in[-T,0]\). Since we are assuming convexity and noncollapsing, Corollary 5.2 in [BH17] (that result is for a specific flow, but the proof generalises--see [Lyn20, Theorem 4.14] for the general case) yields the pointwise gradient estimates \[G^{-2}|\nabla A|+G^{-3}|\nabla^{2}A|\leq\Lambda\] for some \(\Lambda<\infty\). This \(\Lambda\) depends only on \(n\), \(\gamma\), the ellipticity constant \(C\) and the noncollapsing constant. Therefore, \(F\) satisfies all of the hypotheses of Theorem 1.1 on \([-T,0]\). Consequently, we have \[\partial_{t}G+2\langle\nabla G,V\rangle+A(V,V)+\frac{G}{2(t+T)}\geq 0\] at every point \((x,t)\in M\times(-T,0]\) and for every \(V\in T_{x}M\). Since this is true for every \(T<\infty\), we may send \(T\to\infty\) to obtain \[\partial_{t}G+2\langle\nabla G,V\rangle+A(V,V)\geq 0.\] This completes the proof of Theorem 1.2. ## 5. The equality case In this section we prove Theorem 1.3. Suppose \(\gamma\) is strictly inverse-concave. Let \(F:M\times(-\infty,0]\to\mathbb{R}^{n+1}\) be an ancient solution to (1) which satisfies \(A>0\) and the Harnack inequality \[\partial_{t}G-A^{-1}(\nabla G,\nabla G)\geq 0\] at every point in spacetime. In addition, we assume bounded curvature \[\sup_{M\times[-T,0]}G<\infty\] and the uniform ellipticity condition (3) in \([-T,0]\) for each \(T<\infty\). Suppose there is some \((x_{0},t_{0})\in M\times(-\infty,0]\) at which \[\partial_{t}G-A^{-1}(\nabla G,\nabla G)=0.\] The computations at the beginning of Section 3 show that \[(\partial_{t} -\Delta_{\gamma})(\partial_{t}G-A^{-1}(\nabla G,\nabla G))\] \[\geq|A|^{2}_{\gamma}(\partial_{t}G-A^{-1}(\nabla G,\nabla G))\] \[+(\tilde{\gamma}^{ij,kl}+2\dot{\gamma}^{ik}A^{-1}_{jl})(\nabla_{t }A_{ij}-A^{-1}(\nabla G,\nabla A_{ij}))(\nabla_{t}A_{kl}-A^{-1}(\nabla G, \nabla A_{kl}))\] on \(M\times(-\infty,0]\). The right-hand side is nonnegative, so the strong maximum principle implies \[\partial_{t}G-A^{-1}(\nabla G,\nabla G)=0\] for \(t\leq t_{0}\). Since \(\gamma\) is assumed to be strictly inverse-concave, we conclude that \[\nabla_{t}A_{ij}-A^{-1}(\nabla G,\nabla A_{ij})=0,\] for \(t\leq t_{0}\). Define \(\xi=-G\nu-A^{-1}(\nabla G)\). Using the Codazzi equations we compute \[\langle D_{i}\xi,e_{j}\rangle =A^{-1}_{jk}A^{-1}(\nabla G,\nabla A_{ik})-GA_{ij}-A^{-1}_{jk} \nabla_{i}\nabla_{k}G\] \[=A^{-1}_{jk}A^{-1}(\nabla G,\nabla A_{ik})-A^{-1}_{jk}\nabla_{t} A_{ik}\] \[=0\] for \(t\leq t_{0}\), where \(D\) is the Euclidean connection on \(\mathbb{R}^{n+1}\). It follows that \(\xi\) can be extended to a constant vector field \(\xi\) on \(\mathbb{R}^{n+1}\) and, by definition, \[G=-\langle\xi,\nu\rangle.\] That is, \(F\) is a translating soliton with velocity \(\xi\) for \(t\leq t_{0}\). It is now straightforward to show that \(F\) is a translating soliton with velocity \(\xi\) for \(t\leq 0\). 
Since \(\xi\) is constant on \(\mathbb{R}^{n+1}\), the quantity \(u:=G+\langle\xi,\nu\rangle\) satisfies \[\partial_{t}u=\dot{\gamma}^{ij}\nabla_{i}\nabla_{j}u+\dot{\gamma}^{ij}A_{ik}A_{ kj}u.\] Moreover, because we have bounded curvature and uniform ellipticity, the function \(h(x,t)=e^{Kt}(|x|^{2}+1)\) satisfies \[\partial_{t}h>\dot{\gamma}^{ij}\nabla_{i}\nabla_{j}h+\dot{\gamma}^{ij}A_{ik}A_ {kj}h\] for \(t\geq t_{0}\) if the constant \(K\) is sufficiently large. By the maximum principle, \(\sup_{M_{t}}\frac{|u|}{h}\) is nonincreasing for \(t_{0}\leq t\leq 0\). Since \(u\) vanishes at \(t=t_{0}\), we have \(G=-\langle\xi,\nu\rangle\) for all \(t\leq 0\).
2303.07251
Topological phase transitions generated by the order from quantum disorder
The order from quantum disorder (OFQD) phenomenon was first discovered in quantum spin systems in geometric frustrated lattice. Similar phenomenon was also discovered in interacting bosonic systems or quantum spin systems with spin-orbit coupling in a bipartite lattice. Here we show that the OFQD also leads to a topological phase transition. We demonstrate this new connection in the experimentally realized weakly interacting Quantum Anomalous Hall system of spinor bosons in an optical lattice. There are two classes of topological phenomena: the first class is a perturbative one smoothly connected to the non-interacting limit. The second one is a non-perturbative one which has no analog in the non-interacting limit. Their experimental detections are also discussed.
Fadi Sun, Jinwu Ye
2023-03-13T16:27:01Z
http://arxiv.org/abs/2303.07251v1
# Topological phase transitions generated by order from quantum disorder ###### Abstract The order from quantum disorder (OFQD) phenomenon was first discovered in quantum spin systems in geometric frustrated lattice. Similar phenomenon was also discovered in interacting bosonic systems or quantum spin systems with spin-orbit coupling in a bipartite lattice. Here we show that the OFQD also leads to a topological phase transition. We demonstrate this new connection in the experimentally realized weakly interacting Quantum Anomalous Hall system of spinor bosons in an optical lattice. There are two classes of topological phenomena: the first class is a perturbative one smoothly connected to the non-interacting limit. The second one is a non-perturbative one which has no analog in the non-interacting limit. Their experimental detections are also discussed. pacs: 74.20.-b _1._ Introduction. -- Searching for new topological phases and topological phase transitions in various materials or artificial systems is a fascinating frontier of modern condensed matter physics [1; 2]. The quantum anomalous Hall (QAH) state is the simplest topological phase with a non-vanishing Chern number [3]. The non-interacting fermionic QAH has been experimentally realized in Cr doped Bi(Sb)\({}_{2}\)Te\({}_{3}\) thin films [4; 3]. On the other hand, the weakly interacting bosonic analogue of the QAH model has also been successfully realized via spinor \({}^{87}\)Rb bosons [5]. It is crucial to study the topological properties of the bosonic QAH model, and to find possible deep connections between the fermionic QAH model and the bosonic QAH model. It is well-known that the fermionic QAH model has two bands carrying opposite Chern numbers. In the non-interacting limit, when \(|h/t|<4\), the corresponding Chern numbers of the lower and upper band are \(C_{1}=+{\rm sgn}(h)\) and \(C_{2}=-{\rm sgn}(h)\), respectively; when \(|h/t|>4\), the corresponding Chern numbers of the lower and upper bands are \(C_{1}=C_{2}=0\). However, the topological properties of the weakly interacting bosonic quantum anomalous Hall model may be much more involved. Due to its bosonic nature, an interaction must be considered at the very beginning. Unlike the fermionic QAH model, the weakly interacting bosonic QAH model is always in some spin-orbital superfluid phase with spontaneous U(1) symmetry breaking. At the quadratic level, there are always Bogoliubov quasi-particle excitations above these exotic superfluid phases. Neglecting the cubic and quartic interactions between them, which is justified near any quantum phase transition (QPT), one may ask what the topology of these Bogoliubov quasi-particle bands is and what the possible TPTs are. In this Letter, we map out the Chern numbers of the bosonic Bogoliubov bands at the quadratic level. In Ref.[6], we worked out the quantum phases and quantum phase transitions of this model. It is the order from quantum disorder (OFQD) phenomenon which leads to the quantum ground states at \(h=0\). It is the nearly order from quantum disorder (NOFQD) phenomenon which leads to the quantum phase transition at \(h=h_{c}\). In this work, we focus on the topological aspects of the same system, so the results to be achieved are complementary to those achieved in [6]. We find that there are two kinds of topological phases and TPTs. The first can be considered as a remnant of the non-interacting fermion limit, so it reduces to this limit smoothly as the interaction gets very small. It can also be called the perturbative regime.
The second has no analog or counterpart in the non-interacting fermion limit, so it is a completely new feature due to the interaction. We will show that it is completely induced by the non-perturbative OFQD phenomenon at \(h=0\) discovered in [6]. It may also be called the non-perturbative regime. In addition to the OFQD at \(h=0\) and the QPT at \(h=h_{c}\) induced by the NOFQD discovered in [6], we find two critical fields \(h_{2}>h_{1}>h_{c}>0\). At the upper critical field \(h=h_{2}\), there is a conic band touching at the \(R=(\pi,\pi)\) point of the Brillouin zone (BZ), so the lower- and upper-band Chern numbers change from \((C_{1},C_{2})=(+1,-1)\) below \(h_{2}\) to \((C_{1},C_{2})=(0,0)\) above \(h_{2}\); at the lower critical field \(h_{1}<h_{2}\), there is a conic band touching at the \(X=(\pi,0)\) and \(Y=(0,\pi)\) points of the BZ, so \((C_{1},C_{2})\) changes from \((-1,+1)\) below \(h_{1}\) to \((+1,-1)\) above \(h_{1}\). Both \(h_{1}\) and \(h_{2}\) depend only on \(t\) and \(U\), are independent of the SOC \(t_{s}\), and approach the corresponding non-interacting values \(h_{20}=4t\) and \(h_{10}=0\), respectively. We conclude that the topology in the regime \(|h|>|h_{1}|\) belongs to the first, perturbative class. We show that \(h_{1}>h_{c}\), where \(h_{c}\) is the critical field due to the NOFQD [6]. When \(h<h_{c}\), the ground state becomes an XY-CAFM SF phase which breaks \([C_{4}\times C_{4}]_{D}\to 1\) and leads to 4 bosonic Bogoliubov bands in the reduced BZ. We find that there is always a gap between the second and the third band and calculate the combined Chern numbers of the two lower bands and of the two upper bands to be \((C_{1+2},C_{3+4})=(-1,+1)\). So there is no change in the topological band structure across the QPT at \(h=h_{c}\). However, the change comes from \(h=0\), where the OFQD leads to two Dirac points at in-commensurate momenta. Their positions depend not only on \(t,U\), but also on the SOC \(t_{s}\), and are related by the remaining two mirror symmetries. As the three parameters change, the two Dirac points approach each other, then collide at a degenerate momentum \(\Lambda=(\pi/2,\pi/2)\), then bounce off along the normal direction. Any \(h\neq 0\) opens a gap at the two Dirac points, so there is a TPT from \(h<0\) with the combined Chern numbers \((C_{1+2},C_{3+4})=(1,-1)\) to \(h>0\) with \((C_{1+2},C_{3+4})=(-1,1)\), which is completely induced by the OFQD at \(h=0\). We construct an effective action to study the TPT and find that it always contains an exotic Doppler shift term. The topology in the regime \(|h|<|h_{1}|\) belongs to the second, non-perturbative class. We also critically comment on the common conceptual mistakes made in the previous theoretical and experimental literature in attempting to evaluate edge modes within the bulk gaps associated with the bosonic bulk Chern numbers. We also elucidate the physical meanings of the combined Chern numbers. Finally, we discuss the experimental detection of these topological phenomena, especially the ones induced by the OFQD near \(h=0\). 2. The Hamiltonian, quantum phases and quantum phase transitions (QPT). -- The recently experimentally realized two-component spinor Bose-Hubbard Hamiltonian with a spin-orbit coupling [5] can be written as \[\mathcal{H}=-\sum_{i}[a_{i}^{\dagger}(t\sigma_{z}\!-\!it_{\rm s}\sigma_{x})a_{i+x}\!+\!a_{i}^{\dagger}(t\sigma_{z}\!-\!it_{\rm s}\sigma_{y})a_{i+y}\!+\!h.c.]\] \[\quad-h\sum_{i}(n_{i\uparrow}-n_{i\downarrow})+U\sum_{i}n_{i}^{2}-\mu\sum_{i}n_{i}\;.
\tag{1}\] where \(\sigma=\uparrow\) or \(\downarrow\) denotes the \({}^{87}\)Rb atoms in the state \(|1,m_{F}=0\rangle\) or \(|1,m_{F}=-1\rangle\), respectively. Since the experiment can only achieve relatively weak spin-orbit coupling, our discussion will focus on regime \(|t_{s}|<2|t|\). The quantum phase diagram of Eq.(1) was studied in Ref.[6]. Especially, we found a new phenomenon we named Nearly order from quantum disorder (NOFQD) which captures the delicate competition between the effective potential generated by the order from quantum disorder (OFQD) and the Zeeman field. It is this competition which splits a putative first order quantum phase transition (QPT) at \(h=0\) into two second order QPTs at \(h=\pm h_{c}\). Here we briefly summarize the main results: when \(h>h_{c}\), it is in the Z-FM superfluid with the condensate wavefunction \(\Psi\propto\chi_{\uparrow}\); when \(h<h_{c}\), it is the XY-CAFM superfluid with the condensate wavefunction \(\Psi\propto\cos(\theta/2)\chi_{\uparrow}+e^{i({\bf Q}\cdot{\bf r}+\phi)}\sin( \theta/2)\chi_{\downarrow}\), where \({\bf Q}=(\pi,\pi)\) is the orbital ordering wave-vector, \(\theta=\arccos(h/h_{c})\), and \(\phi=\pm\pi/4,\pm 3\pi/4\). The \(h_{c}\) was estimated as \(h_{c}\sim n_{0}U^{2}t_{s}^{2}/t^{3}\). However, Ref.[6] fucus only on the quantum phases and QPTs, ignored the topological features of the Bogoliubov bands, also did not touch what are the connections of these quantum phases and QPTs to the topological nature of the QAH Hamiltonian. This work will address these outstanding open problems. Without loss of generality, we can set \(h,t,t_{\rm s}>0\) and then discuss the topology of the Bogoliubov excitation bands at \(h>h_{c}\) and \(h<h_{c}\) separately. 3. Evaluating the Chern number of the bosonic quasiparticle band from the lattice theory. -- To exam the topology of the Bogoliubov excitation bands, we need not only the eigenvalues but also the eigenvectors, thus one need also to pay attention to the Bogoliubov transformations. After replacing the bosonic operator by its average plus a quantum fluctuation, we obtain a quadratic Bogoliubov Hamiltonian via expanding the Hamiltonian to the second order in the quantum fluctuations. Due to the spontaneous U(1) symmetry breaking, the quadratic Hamiltonian in \({\bf k}\)-space, \(H_{\bf k}\), is a \(2N\times 2N\) matrix, \[\mathcal{H}=E_{0}+\frac{1}{2}\sum_{\bf k}(\psi_{\bf k}^{\dagger},\psi_{-\bf k })H_{\bf k}\begin{pmatrix}\psi_{\bf k}\\ \psi_{-\bf k}^{\dagger}\end{pmatrix}\;, \tag{2}\] where \(\psi_{\bf k}^{\dagger}=(\psi_{1,{\bf k}}^{\dagger},\cdots,\psi_{N,\bf k}^{ \dagger})\). The \(1,\cdots,N\) indices count the spin degrees of freedom and also the momentum resulting from the spontaneous translational symmetry breaking i9n the ground state. As shown in [6], when \(h>h_{c}\), the ground state is the Z-FM phase, we have \(N=2\) and \(\psi_{\bf k}^{\dagger}=(\psi_{\bf k\uparrow}^{\dagger},\psi_{\bf k\downarrow} ^{\dagger})\); when \(h<h_{c}\), the ground state is the XY-AFM phase, we have \(N=4\) and \(\psi_{\bf k}^{\dagger}=(\psi_{\bf k\uparrow}^{\dagger},\psi_{\bf k\downarrow} ^{\dagger},\psi_{\bf k\downarrow}^{\dagger},\psi_{\bf k+Q\uparrow}^{\dagger}, \psi_{\bf k+Q\downarrow}^{\dagger})\). where \(\vec{Q}=(\pi,\pi)\). 
Diagonalizing Eq.(2) by a \(2N\times 2N\) Bogoliubov transformation matrix \(T_{\bf k}\), so that \(T_{\bf k}^{\dagger}H_{\bf k}T_{\bf k}=\mathrm{diag}(\omega_{1,{\bf k}},\cdots, \omega_{N,{\bf k}},\omega_{1,-{\bf k}},\cdots,\omega_{N,-{\bf k}})\) where \(\mathrm{diag}(\cdots)\) means a diagonal matrix with diagonal elements \(\cdots\), and \((\begin{smallmatrix}\psi_{\bf k}\\ \psi_{-\bf k}^{\dagger}\end{smallmatrix})=T_{\bf k}(\begin{smallmatrix}\alpha_{ \bf k}\\ \alpha_{-\bf k}\end{smallmatrix})\), we obtain \[\mathcal{H}=E_{0}+\sum_{n,{\bf k}}\omega_{n,{\bf k}}(\alpha_{n,{\bf k}}^{ \dagger}\alpha_{n,{\bf k}}+1/2), \tag{3}\] where \({\bf k}\) sums over the corresponding Brillouin Zone (BZ), \(\omega_{n,\pm{\bf k}}>0\), \(n=1,\cdots,N\) are the \(N\) Bogoliubov excitation bands. In contrast to the fermionic cases, to keep the bosonic commutation relations, the \(T_{\bf k}\) is required to be a parity matrix instead of a unitary one, which means \(T_{\bf k}^{\dagger}\gamma_{3}T_{\bf k}=T_{\bf k}\gamma_{3}T_{\bf k}^{\dagger}=\gamma _{3}\) and \(\gamma_{3}=\sigma_{z}\otimes I_{N\times N}\). The Berry curvature of the \(n\)-th Bogoliubov excitation band can be calculated via \(F_{n}({\bf k})=i[\epsilon_{\mu\nu}\tau_{3}\partial_{\mu}T_{\bf k}^{\dagger} \tau_{3}\partial_{\nu}T_{\bf k}]_{nn}\), where \(\mu,\nu=k_{x},k_{y}\). The Chern number of the \(n\)-th Bogoliubov excitation band can be evaluated via a integral \[C_{n}=\frac{1}{2\pi}\int_{\rm BZ}d^{2}{\bf k}\,F_{n}({\bf k})\;, \tag{4}\] where the integral is over the corresponding BZ. 4. Evaluating the Chern number of the bosonic quasiparticle band from the continuum theory. -- Since the band closing is the signature of a topological phase transition (TPT), the change of band Chern number may also be computed in terms of a continuum theory. Near the band touching point, the typical effective Hamiltonian can be expressed in terms of the polar coordinate \((k,\xi)\) with \(k_{x}=k\cos\xi\) and \(k_{y}=k\sin\xi\), \[H_{\bf k}=v_{x}k^{n}\cos(n\xi)\sigma_{x}+v_{y}k^{n}\sin(n\xi)\sigma_{y}-\Delta \sigma_{z} \tag{5}\] where \(|v_{x}|=|v_{y}|=v\) and \(n=1,2,3,\cdots\) stands for the order ( or charge ) of the touching point. The corresponding dispersion relation is \(\epsilon({\bf k})=\pm\sqrt{\Delta^{2}+v^{2}k^{2n}}\), and \(\epsilon({\bf k})\sim k^{n}\) as \(\Delta\to 0\) approaching the TPT. The Berry curvature of the lower branch is \(F_{-}({\bf k})=-v_{x}v_{y}\Delta k^{-2+2n}n^{2}/|\epsilon(k)|^{3}\), thus its Chern number is \[C_{-}=\frac{1}{2\pi}\int d^{2}{\bf k}F_{-}({\bf k})=-\frac{n}{2}{\rm sgn}(v_{x }v_{y}\Delta)\:. \tag{6}\] where the \(sgn\) indicates the chirality of the Dirac boson. Therefore the band touching through the changing sign of \(\Delta\) from positive to negative across the TPT leads to a change of Chern number \(\Delta C_{=}-n\,{\rm sgn}(v_{x}v_{y})\). When there are multiple band touching points, the total change of Chern number needs to sum up all the contributions from all the band touching points. In the superfluid phases, due to the BEC at \(k=0\), the Bogoliubov transformation matrix \(T_{k}\) diverges at \(k=0\), the Chern number of the lowest band \(C_{1}\) may not be well-defined, but those of the higher bands \(C_{n>1}\) are usually well-defined, and quantized to be integers. [7]. In this Letter, we are still able to calculate \(C_{1}\) by excluding the momentum \(k\) where the BEC resides. The results are summarized in Fig. 1. 
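For readers who want to reproduce the band topology summarized in Fig. 1, the following is a minimal Python sketch of how Eqs. (3)-(4) are typically evaluated numerically. It assumes a user-supplied callable `Hk(kx, ky)` returning the \(2N\times 2N\) Bogoliubov block of Eq. (2) (whose explicit matrix elements are not reproduced here), performs the paraunitary diagonalization with the metric \(\gamma_{3}\), and accumulates the band Chern number from \(\gamma_{3}\)-weighted overlaps on a discretized BZ (a link-variable version of Eq. (4)). The function names, grid size and the small momentum offset that keeps the grid away from the condensate at \(k=0\) are illustrative choices, not the authors' code; the same link-variable construction with ordinary overlaps reproduces the non-interacting fermionic Chern numbers of Fig. 1a.

```python
import numpy as np

def paraunitary_modes(Hk, N):
    """Diagonalize a 2N x 2N Bogoliubov block Hk (Eq. (2)).

    Returns the N positive Bogoliubov frequencies and the corresponding
    columns of T_k, normalized so that v^dagger gamma_3 v = +1 on the
    particle branch (the paraunitary condition of the text)."""
    g3 = np.diag(np.r_[np.ones(N), -np.ones(N)])
    w, V = np.linalg.eig(g3 @ Hk)
    w = w.real
    order = np.argsort(w)
    w, V = w[order], V[:, order]
    keep = w > 0                                   # particle branch
    w, V = w[keep], V[:, keep]
    for n in range(V.shape[1]):
        nrm = np.real(V[:, n].conj() @ g3 @ V[:, n])
        V[:, n] = V[:, n] / np.sqrt(abs(nrm))      # diverges at the condensate momentum
    return w, V

def chern_number(Hk_of_k, N, band, L=80, shift=1e-3):
    """Discretized Eq. (4): Berry flux through each plaquette of an L x L
    grid of the BZ, using gamma_3-weighted overlaps (link variables)."""
    g3 = np.diag(np.r_[np.ones(N), -np.ones(N)])
    ks = np.linspace(-np.pi, np.pi, L, endpoint=False) + shift   # stay off k = 0
    vecs = np.empty((L, L, 2 * N), dtype=complex)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, V = paraunitary_modes(Hk_of_k(kx, ky), N)
            vecs[i, j] = V[:, band]

    def link(a, b):
        z = a.conj() @ g3 @ b
        return z / abs(z)

    flux = 0.0
    for i in range(L):
        for j in range(L):
            ip, jp = (i + 1) % L, (j + 1) % L
            plaq = (link(vecs[i, j], vecs[ip, j]) * link(vecs[ip, j], vecs[ip, jp])
                    * link(vecs[ip, jp], vecs[i, jp]) * link(vecs[i, jp], vecs[i, j]))
            flux += np.angle(plaq)
    return flux / (2 * np.pi)
```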
When comparing the non-interacting fermionic QAH model with the interacting bosonic QAH model, we find that the bosonic interaction leads to new and additional patterns of topological bands. In the following sections, we present the details of calculations leading to Fig.1. 5. The topology of the Bogoliubov band when \(h>h_{c}\).-- In the \(h>h_{c}\) case, the ground state is the Z-FM phase, there are \(N=2\) energy bands, the Bogoliubov transformation matrix \(T_{k}\) is a \(4\times 4\) matrix. The corresponding band structure is plotted in Fig.2 where \[h_{2}=\sqrt{4t(4t+n_{0}U)}\] \[h_{1}=\sqrt{2t(2t+n_{0}U)}-2t \tag{7}\] When \(h=h_{2}\), the two bands conically touch at \(k_{R}=(\pi,\pi)\); when \(h=h_{1}\), they conically touch at \(k_{X}=(\pi,0)\) and \(k_{Y}=(0,\pi)\). A direct evaluation of Eq.(4) on the lattice shows that: when \(h>h_{2}\), the lower band and the upper band Chern number are \(C_{1}=C_{2}=0\) respectively; when \(h_{1}<h<h_{2}\), \((C_{1},C_{2})=(1,-1)\); when \(h<h_{1}\), \((C_{1},C_{2})=(-1,1)\). Below, we construct continuum theories to confirm the change of the band Chern number. At \(h=h_{2}\), one needs to expand the Hamiltonian around \(k_{R}=(\pi,\pi)\), the \(T_{k}\) at \(k_{R}\) tells the eigenmodes are \(\alpha_{1,R}=(1/\sqrt{8})[2(4+n_{0}U/t)^{-1/4}+(4+n_{0}U/t)^{1/4}]\psi_{R\uparrow }-(1/\sqrt{8})[2(4+n_{0}U/t)^{-1/4}-(4+n_{0}U/t)^{1/4}]\psi_{R\uparrow}^{\dagger}\) and \(\alpha_{2,R}=\psi_{R\downarrow}\), with the eigen-energy \(\omega_{h2}\!=\!2\sqrt{4t(4t\!+\!n_{0}U)}=2h_{2}\) When \(h\) deviates slightly from \(h_{2}\), defining \(k=k_{R}+q\) and projecting the original Hamiltonian onto these eigenmodes lead to the effective Hamiltonian: \[H_{R}=\omega_{h2}-vq_{x}\sigma_{x}-vq_{y}\sigma_{y}-\Delta\sigma_{z} \tag{8}\] where \(omega_{h2}=2h_{2},\Delta=2(h-h_{2})\), \(v=t_{\rm s}\sqrt{2[1\!+\!(8t\!+\!n_{0}U)/\omega_{h\!\!2}]}\). Note that the constant term \(omega_{h2}\) shows it is an excited energy which has a direct experimental consequence [13]. Then the dispersion takes the form \[\omega_{1,2}(q)=\omega_{h2}\mp\sqrt{\Delta^{2}+v^{2}(q_{x}^{2}+q_{y}^{2})}\:. \tag{9}\] Thus the change of band Chern number is \(\Delta C_{1}=-{\rm sgn}(v^{2})=-1\). This analysis is consistent with the numerical result \(C_{1}=0\) at \(h>h_{2}\) and \(C_{1}=+1\) at \(h<h_{2}\), therefore \(0-(+1)=-1\). At \(h=h_{1}\), one needs to expand the Hamiltonian around \(k_{X}=(\pi,0)\) and \(k_{Y}=(0,\pi)\), the \(T_{k}\) at \(k_{X}\) and \(k_{Y}\) tells the eigenmodes are \(\alpha_{1,X(Y)}=(1/\sqrt{8})[2(2+n_{0}U/t)^{-1/4}+(2+n_{0}U/t)^{1/4}]\psi_{X(Y) \uparrow}-(1/\sqrt{8})[2(2+n_{0}U/t)^{-1/4}-(2+n_{0}U/t)^{1/4}]\psi_{X(Y) \uparrow}^{\dagger}\) and \(\alpha_{2,X(Y)}=\psi_{X(Y)\downarrow}\), with the eigen-energy \(\omega_{h1}=2\sqrt{2t(2t\!+\!n_{0}U)}=2h_{1}+4t\). When \(h\) deviates slightly from \(h_{1}\), defining \(k=k_{X(Y)}+q\) and projecting the original Hamiltonian onto these eigenmodes lead to the effective Hamiltonian \[H_{X} = \omega_{h1}-vq_{x}\sigma_{x}+vq_{y}\sigma_{y}-\Delta\sigma_{z}\] \[H_{Y} = \omega_{h1}+vq_{x}\sigma_{x}-vq_{y}\sigma_{y}-\Delta\sigma_{z} \tag{10}\] where \(\omega_{h1}=2h_{1}+4t,\Delta=2(h-h_{1})\), \(v=t_{\rm s}\sqrt{2[1\!+\!(4t\!+\!n_{0}U)/\omega_{h\!\!1}]}\). Again, the constant term \(omega_{h2}\) shows it is an excited energy which has a direct experimental consequence [13]. Figure 1: The band Chern numbers of (a) The non-interacting fermionic QAH, and (b) The weak-interacting bosonic QAH. 
The topologically non-trivial regions (\(C_{i}\neq 0\)) in (b) are slightly larger than in (a). The \(|h|>h_{1}\) region is smoothly connected to the non-interacting limit in (a), so it can be called the perturbative region. The entire \(|h|<h_{1}\) region, in contrast, is induced by the non-perturbative OFQD phenomenon at \(h=0\). It has no non-interacting analog, so it may be called the non-perturbative region; it shrinks to zero as \(U\) goes to 0. Then the dispersion of \(H_{X}\) or \(H_{Y}\) takes the same form \[\omega_{1,2}(q)=\omega_{h1}\mp\sqrt{\Delta^{2}+v^{2}(q_{x}^{2}+q_{y}^{2})}\,. \tag{11}\] Thus the change of band Chern number is \(\Delta C_{1}=2{\rm sgn}(v^{2})=2\). This analysis is consistent with the numerical result \(C_{1}=+1\) at \(h>h_{1}\) and \(C_{1}=-1\) at \(h<h_{1}\), therefore \(+1-(-1)=2\). Since \(\lim_{U\to 0}h_{2}=4t\) and \(\lim_{U\to 0}h_{1}=0\), this result suggests that the topology of the Bogoliubov excitation bands at \(h>h_{c}\) is smoothly connected to that of the non-interacting limit. Besides, the fact that \(h_{2}=\sqrt{4t(4t+n_{0}U)}>4t\) at any \(U>0\) suggests that the region displaying a non-trivial topology (with a non-zero Chern number) is enlarged with increasing \(U>0\). One can also determine the relation between \(h_{1}\) and \(h_{c}\) when \(U/t\) is small. When \(U/t\) is small, \(h_{1}=\sqrt{2t(2t+n_{0}U)}-2t\sim n_{0}U/2\), while \(h_{c}\sim n_{0}U^{2}t_{s}^{2}/t^{3}\ll h_{1}\). Thus in the weak coupling limit \(U/t\ll 1\), there is always a window \(h_{c}<h<h_{1}\), as shown in Fig.1b. 6. The topology of the Bogoliubov band when \(0<h<h_{c}\) and the absence of the TPT across the QPT at \(h=h_{c}\).-- In the \(h<h_{c}\) case, this corresponds to Eq.(9) and Eq.(10) in Ref.[6]. Because of spontaneous translational symmetry breaking, we have \(N=4\) energy bands and \(T_{k}\) is an \(8\times 8\) matrix. From the NOFQD analysis in Ref.[6], we know that the \(h=0\) ground state requires \(\theta=\pi/2\), and the \(0<h<h_{c}\) ground state requires \(\theta_{h}=\arccos(h/h_{c})\). We substitute \(\theta=\theta_{h}\) back into Eq.(2), then calculate the band Chern numbers via Eq.(4). In principle, there should be an additional term due to the nearly order-from-quantum-disorder (NOFQD) correction [6], but we expect this correction does not change the topology of the Bogoliubov bands. From the band structure shown in Fig.3, we find that the two lower bands touch, the two upper bands also touch, and there is always a band gap between the two groups when decreasing \(\theta_{h}\) from \(\pi/2\) to \(0\) (or equivalently, increasing \(h\) from \(0\) to \(h_{c}\)). Note that the two lower/upper bands touch at the X point \(k_{X}=(\pi,0)\), which is a momentum far away from \(k=0\), thus the NOFQD effects ignored so far should not change the band touching behaviors. Due to the band touching, we need to consider the combined Chern numbers \(C_{1+2}\) and \(C_{3+4}\) when \(h<h_{c}\). Our numerical evaluation of the integral Eq.(4) gives \((C_{1+2},C_{3+4})=(-1,+1)\). Recall that the result for \(h\) just above \(h_{c}\) obtained in the last subsection also gives \((C_{1},C_{2})=(-1,+1)\), which is the same pattern as \((C_{1+2},C_{3+4})=(-1,+1)\). So we conclude that there is no change in the topology across the quantum phase transition at \(h=h_{c}\). 7. The TPT at \(h=0\) and the absence of any QPT at \(h=0\).
-- As shown in Fig.2, the topology changes at \(h=h_{1}\) with the conic band crossings at \(k_{X}=(\pi,0)\) and Figure 3: The Non-perturbative band structure of Bogoliubov excitation bands at (a) \(h>h_{c}\), (b) \(h=h_{c}\), (c) \(h=h_{c}/\sqrt{2}\), (d) \(h=0\). The parameters are \(t=1\), \(t_{s}=1/2\), \(n_{0}U=1\). It has no analog in the non-interacting limit. When \(h>0\), there is always a band gap between two upper bands and two lower bands. (a) is just a re-plot of the bottom subfigure of Fig.2(b) in the reduced Brillouin zone (BZ), so the 2 bands in the BZ changes to 4 bands in the reduced BZ. As shown in Sec.C in the SM, two lower (upper) bands also touch along the whole line from \((\pi,0)\) to \((0,-\pi)\) As shown in (a) and (b), when \(h\geq h_{c}\), they also touch along the whole line from \(\Lambda\)-point to the X-point along the boundary of the RBZ. \(\omega_{1}\) is the linear gapless Goldstone mode due to the \(U(1)\) symmetry breaking. Note that due to the dropping of the NOFQD effects [6], the roton mode \(\omega_{1}\) is still a spurious quadratic gapless mode at \(k=0\). Incorporating them back will open a small roton gap at \(k=0\), but will not change the Chern numbers of topology of the band. When \(h=0\) in (d), there is a Dirac conical band touch at \(K_{1}=(\pi-k_{0},k_{0})\) of Brillouin zone; and the mirror symmetry \(M_{1}\) with respect to \(k_{x}=k_{y}\) tells there is also a Dirac conical band touch at \(K_{2}=(k_{0},\pi-k_{0})\). Figure 2: The perturbative topological band structure of Bogoliubov excitation bands at \(h>h_{c}\) and near (a) \(h_{2}\) and (b) \(h_{1}\). It is smoothly connected to the non-interacting limit. The parameters are \(t=1\), \(t_{s}=1/2\), \(n_{0}U=1\), thus \(h_{2}\approx 4.47\) and \(h_{1}\approx 0.48\). When \(h=h_{2}\), there is a Dirac conical band touch at the R point \(k_{R}=(\pi,\pi)\) of the Brillouin zone (BZ). When \(h=h_{1}\), there is a Dirac conical band touch at the X point \(k_{X}=(\pi,0)\) of the BZ. The \([C_{4}\times C_{4}]_{\rm diag}\) symmetry tells there is also a Dirac conical band touch at the Y point \(k_{Y}=(0,\pi)\) of the BZ. \(k_{Y}=(0,\pi)\), also at \(h=h_{2}\) with the conic band crossing at \(k_{R}=(\pi,\pi)\). The origin of these TPTs can be traced back and mapped to the corresponding non-interacting limit of fermions. In this section, we show that there is also a conic band crossing at \(h=0\) in Fig.3(d) at some in-commensurate momentum \((k_{0x},k_{0y})\) between the \(\Lambda\)-point and X-point, which is not any high symmetry points. As shown in Fig.S2, there are two such Dirac points in the RBZ. The positions of the two Dirac points are in-commensurate and also depend on \(t,t_{s}\) and \(U\), so it comes from the interaction induced OFQD [6]. In contrast to the TPTs at \(h=h_{1}\) and \(h=h_{2}\), the TPT at \(h=0\) does not have any non-interacting analog. sl This is also the central result achieved in this manuscript. At \(h=0\), the ground state wavefunction is the XY-AFM with \(\theta=0\) and \(\phi=-\pi/4\). 
Solving the band-touching condition \(\omega_{2}(k)=\omega_{3}(k)\) leads to the following three cases: (I) when \(n_{0}U/t<8t_{s}/(\sqrt{2}t-t_{s})\), two solutions are \(K_{1}=(\pi-k_{0},k_{0})\) and \(K_{2}=(k_{0},\pi-k_{0})\), with \(k_{0}=\arcsin[\sqrt{2}tn_{0}U/(t_{s}(8t+n_{0}U))]<\pi/2\); (II) when \(n_{0}U/t>8t_{s}/(\sqrt{2}t-t_{s})\), two solutions are \(P_{1,2}=\pm(p_{0},p_{0})\), with \(p_{0}=\arcsin[(4t/(8t+n_{0}U))\sqrt{4t(4t+n_{0}U)/(4t^{2}-2t_{s}^{2})}]<\pi/2\); (III) when \(n_{0}U/t=8t_{s}/(\sqrt{2}t-t_{s})\), the two solutions merge into just one at \(\Lambda=(\pi/2,\pi/2)\). Intuitively, as shown in Fig.S2, when keeping \(h=0\), increasing \(n_{0}U/t\) from \(0^{+}\) to \(8t_{s}/(\sqrt{2}t-t_{s})\), the \(\omega_{2,3}\) conically touch at \(K_{1,2}\); then at \(n_{0}U/t=8t_{s}/(\sqrt{2}t-t_{s})\), \(K_{1}\) and \(K_{2}\) collide at \(\Lambda\)-point; further increasing \(n_{0}U/t\), they bounce off along the two opposite directions along the perpendicular direction [12], so \(\omega_{2,3}\) conic touches at \(P_{1,2}\). Around \(h=0\), defining \(k=K_{1,2}+q\) or \(k=P_{1,2}+q\), we can similarly expand the Hamiltonian around the two minima of \(\omega_{2,3}\) to find the effective Hamiltonian: \[H_{1}=\omega_{0}+c_{\parallel}q_{\parallel}+v_{\parallel}q_{ \parallel}\sigma_{x}+v_{\perp}q_{\perp}\sigma_{y}-\Delta\sigma_{z}\,,\] \[H_{2}=\omega_{0}-c_{\parallel}q_{\parallel}-v_{\parallel}q_{ \parallel}\sigma_{x}-v_{\perp}q_{\perp}\sigma_{y}-\Delta\sigma_{z}\,, \tag{12}\] where \(\omega_{0}=8t(4t+n_{0}U)/(8t+n_{0}U)\), \(q_{\parallel}=(q_{x}-q_{y})/\sqrt{2},q_{\perp}=(q_{x}+q_{y})/\sqrt{2}\), and \(\Delta=hn_{0}U\sqrt{t/(4t+n_{0}U)}\) for regime (I), \(q_{\parallel}=(q_{x}+q_{y})/\sqrt{2},q_{\perp}=(q_{x}-q_{y})/\sqrt{2}\), and \(\Delta=4ht_{s}/\sqrt{2t^{2}-t_{s}^{2}}\) for regime (II). The effective velocities \(v_{\parallel},v_{\perp},c_{\parallel}>0\) are proportional to \(\cos(k_{0})\) or \(\cos(p_{0})\) for regime (I) and regime (II) respectively and are listed in the appendix B. The dispersion of \(H_{i},i=1,2\) takes the form \[\omega_{2,3}(q)=\omega_{0}-(-1)^{i}c_{\parallel}q_{\parallel}\mp\sqrt{\Delta^ {2}+v_{\parallel}^{2}q_{\parallel}^{2}+v_{\perp}^{2}q_{\perp}^{2}}\,. \tag{13}\] It is necessary to stress that there exists a Doppler shift term, the \(c\) term, in \(H_{1,2}\), which is the salient feature unique to the OFQD induced TPT, not shared by any non-interacting counter-part. Of course, it does not affect the value of band Chern numbers. From the effective Hamiltonian Eq.(12), Eq.(6) tells the change of band Chern number is \(\Delta C=-2\text{sgn}(v_{\parallel}v_{\perp})=-2\) when \(\Delta\) changes from positive to negative. This result is consistent with the numerical evaluation via Eq.(4), which gives \(C_{1+2}=-1\) at \(h>0\) and \(C_{1+2}=+1\) at \(h<0\), therefore \(-1-(+1)=-2\). At \(h=0\), in the colliding point \(n_{0}U/t=8t_{s}/(\sqrt{2}t-t_{s})\), the \(K_{1}\) and \(K_{2}\) merge at \(\Lambda=(\pi/2,\pi/2)\)-point. Expanding around the \(\Lambda\)-point leads to the dispersion: \[\omega_{2,3}(q)=\omega_{0}(q)\mp\sqrt{u_{1}^{2}q_{\perp}^{4}+u_{2}q_{ \parallel}^{4}+wq_{\parallel}^{2}q_{\perp}^{2}}, \tag{14}\] where \(u_{1}=t^{2}/(\sqrt{2}t_{s})\), \(u_{2}=t^{2}t_{s}/[\sqrt{2}(2t^{2}-t_{s}^{2})]\), \(w=t^{2}(t^{2}-t_{s}^{2})/(2t^{2}-t_{s}^{2})\), and \(\omega_{0}(q)=4t+2\sqrt{2}t_{s}+DS\) where one can see the Doppler shift term \(DS=[t_{s}/\sqrt{2}][-q_{\parallel}^{2}+q_{\perp}^{2}t_{s}^{2}/(2t^{2}-t_{s}^{ 2})]\) also gets to the quadratic order. 
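The change of the combined Chern number quoted below Eq. (13) can be checked directly against the continuum formula Eq. (6). The short sketch below is illustrative only (the velocities and gap are placeholder numbers rather than the expressions listed in Appendix B): it integrates the lower-band Berry curvature of a single anisotropic Dirac cone \(v_{\parallel}q_{\parallel}\sigma_{x}+v_{\perp}q_{\perp}\sigma_{y}-\Delta\sigma_{z}\); the Doppler term \(c_{\parallel}q_{\parallel}\) is proportional to the identity and therefore drops out of the curvature. Each of the two Dirac points then contributes approximately \(-\tfrac{1}{2}\,{\rm sgn}(v_{\parallel}v_{\perp}\Delta)\), so flipping the sign of \(\Delta\propto h\) changes \(C_{1+2}\) by \(-2\), as stated in the text.

```python
import numpy as np

def dirac_point_charge(v_par, v_perp, delta, cutoff=16.0, n=1601):
    """Integrate the lower-band Berry curvature of
    H = v_par*q_par*sigma_x + v_perp*q_perp*sigma_y - delta*sigma_z
    over a large momentum patch; the result tends to -sgn(v_par*v_perp*delta)/2."""
    q = np.linspace(-cutoff, cutoff, n)
    dq = q[1] - q[0]
    QPAR, QPERP = np.meshgrid(q, q, indexing="ij")
    d2 = (v_par * QPAR) ** 2 + (v_perp * QPERP) ** 2 + delta ** 2
    # closed-form lower-band curvature of a two-band d(q).sigma Hamiltonian
    F = -v_par * v_perp * delta / (2.0 * d2 ** 1.5)
    return F.sum() * dq * dq / (2.0 * np.pi)

print(dirac_point_charge(1.0, 0.8, +0.3))   # ~ -0.5, i.e. h > 0
print(dirac_point_charge(1.0, 0.8, -0.3))   # ~ +0.5, i.e. h < 0
```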
Returning to the quadratic touching point of Eq.(14), note that \(4u_{1}^{2}u_{2}^{2}>w^{2}\) is ensured by the condition \(0<t_{s}<\sqrt{2}t\), so the quadratic form under the square root is always positive definite. Since the \(w\)-term cannot close the gap, one can ignore it and rescale \(q\) to make \(\omega_{2,3}(q)\) isotropic. The behavior \(\omega_{2,3}(q)\sim q^{2}\) is consistent with the fact that \(v_{\parallel}\), \(v_{\perp}\), \(c_{\parallel}\) in Eq.(12) vanish as \(k_{0}\) or \(p_{0}\) approaches \(\pi/2\). When \(h\) deviates from \(0\), the effective Hamiltonian belongs to the \(n=2\) case of Eq.(5). Thus, although there is only one band touching point with the monopole charge \(n=2\), the change of Chern number is still \(-2\). In short, regardless of the value of \(n_{0}U\), the change of the Chern number from \(h=0^{+}\) to \(h=0^{-}\) is always \(-2\). So the scattering process in Fig. S1 is not a TPT. 8. Dramatic differences between the non-interacting fermionic band structure and the bosonic Bogoliubov band structure at the quadratic level.-- It is important to stress some crucial differences between the fermionic band structure and the bosonic Bogoliubov band structure at the quadratic level, and also to point out the common mistakes made in the previous literature when calculating the edge modes associated with the bosonic Bogoliubov band structure: In the fermionic or bosonic QAH problem, time reversal symmetry is broken, so the Chern number is not protected by any symmetry and its definition involves no symmetry requirement. The former is truly non-interacting, while the bosonic Bogoliubov band is non-interacting only at the quadratic level, where the cubic, quartic and all higher order interactions have been dropped. These interactions are unimportant except near the QPT at \(h=h_{c}\). In fact, as shown by the RG analysis, near the QPT, the two quartic interactions in Eq.M32 are marginally irrelevant. But both are relevant inside the XY-AFM phase and, in fact, lead to the symmetry breaking inside the phase. For the fermionic problem, there are always edge states associated with the topological phases. This is justified, because a single fermion operator never condenses. One may be tempted to also calculate the edge modes within the band gaps in Fig.2 and Fig.3 associated with the bosonic Bogoliubov bands. Unfortunately, this kind of calculation has mathematical meaning, but makes no physical sense: this is because Eq.M3 is based on the BEC at \(k=0\) in the Z-FM phase, and Eq.M9 with the parametrization Eq.M11 is based on the BEC at both \(k=0\) and \(Q=(\pi,\pi)\) in the XY-AFM phase. Unfortunately, these are ill-defined in a strip geometry. Ref.[7] and many other previous works just copied the same method from its fermionic counterpart to evaluate edge modes associated with the bosonic Bogoliubov bands: they simply transfer the quadratic Bogoliubov band in momentum space to real space, then solve the edge modes in the strip geometry by imposing a periodic boundary condition along one direction and an open boundary condition along the other. Unfortunately, this approach is well defined mathematically, but is not self-consistent physically, because it ignores the underlying BEC which leads to the quadratic bands in the first place. As stressed above, the BEC is ill-defined in the strip geometry. So it makes no physical sense to talk about the edge modes. The claims made in some experimental works to measure the edge modes have no physical grounds.
9. Experimental detections. -- So far, the experiment [5] has detected the non-interacting fermionic Chern numbers of Fig.1a using highly excited bosonic spinors, so it just mimics the single-particle properties of the Hamiltonian Eq.1 using spinor bosons with SOC. Its main purpose is to demonstrate a possible realization of the bosonic Hamiltonian Eq.1 using cold atoms. The common drawback of most cold atom experiments is that they only demonstrate the possibility of simulating a Hamiltonian which has been well studied in materials and claim the ability to tune the parameters. But in reality, it is rare to go beyond a mere simulation to demonstrate new many-body or topological phenomena due to many-body interactions. Here we showed that there are two kinds of topological band structures: (1) The one in the regime \(|h|>|h_{1}|\) is smoothly connected to the non-interacting fermionic one, so it can also be called the perturbative regime. Even so, the two critical fields \(h_{1,2}\) listed in Eq.7 still depend on the interaction \(n_{0}U\). All the parameters in the effective Hamiltonians Eq.8 and Eq.10 also depend on the interaction \(n_{0}U\). (2) The one in the regime \(-h_{1}<h<h_{1}\) has no analog in the non-interacting counterpart, so it is completely due to the many-body interaction, more specifically, due to the non-perturbative OFQD. So it can also be called the non-perturbative regime. It is also an experimentally easily accessible regime in cold atom systems. It is extremely important to push the already 6-year-old experiment [5] beyond the single-particle picture: to detect these novel, purely interaction-induced topological phenomena. In contrast to the detections suggested in [6], which are all equilibrium properties of the ground states and low energy excitations, here one needs the excited states near the high energies \(\omega_{h2},\omega_{h1}\) and \(\omega_{0}\) in Eqs.8, 10 and 12, near the high momenta \(R=(\pi,\pi)\) or \(X=(\pi,0),Y=(0,\pi)\) and the tunable in-commensurate momenta \(K_{1,2}\) or \(P_{1,2}\), respectively [13]. We suggest also measuring the scattering process of the two bosonic Dirac points at \(h=0\) displayed in Fig.S1 by Bragg spectroscopy [8; 9; 10] or momentum-resolved interband transitions [11] adapted to the excited states. 10. Conclusions.-- For any interacting Hamiltonian with a non-trivial topology in the non-interacting limit, such as the QAH one of Eq.1, there are always two aspects: quantum and topological. The former focuses on the ground states and quantum phase transitions, which may only depend on the low energy excitations around the band minima in the weak interaction limit. OFQD is needed even to determine the ground state. NOFQD is needed to determine the QPT at \(h=h_{c}\). The latter focuses on the global structure of the bands, and therefore also on the excited states, to capture the global topology [13]. However, the Bogoliubov quasi-particle band picture breaks down near the QPT at \(h=h_{c}\), where an RG analysis is needed to capture the physics well beyond the quadratic band picture. However, as classified in [6], there are two kinds of OFQD: the first responds trivially to a deformation, with no NOFQD phenomenon emerging from the OFQD; the second responds highly non-trivially to a deformation and leads to the NOFQD phenomenon. It would be interesting to see if there are also two classes here: the first leads to a TPT, as is the case presented in this work; the second does not lead to any TPT. The two criteria coincide in the present case.
It is also worthwhile to see if the two criteria coincide in other cases. The OFQD and topological invariants such as the Chern number seem to be two very different concepts. The first is a completely many-body quantum phenomenon, while the latter is mainly a non-interacting topological phenomenon. Here we show that there are deep connections between the two in the context of the experimentally realized weakly interacting Quantum Anomalous Hall system of spinor bosons in an optical lattice. We expect that this new effect could also be realized in frustrated quantum spin systems, where it would lead to topological phase transitions of magnons. **Acknowledgements** J. Ye thanks Prof. Gang Tian for the hospitality during his visit at the School of Science of Great Bay University.
2306.05298
Habits of Mind: Reusing Action Sequences for Efficient Planning
When we exercise sequences of actions, their execution becomes more fluent and precise. Here, we consider the possibility that exercised action sequences can also be used to make planning faster and more accurate by focusing expansion of the search tree on paths that have been frequently used in the past, and by reducing deep planning problems to shallow ones via multi-step jumps in the tree. To capture such sequences, we use a flexible Bayesian action chunking mechanism which finds and exploits statistically reliable structure at different scales. This gives rise to shorter or longer routines that can be embedded into a Monte-Carlo tree search planner. We show the benefits of this scheme using a physical construction task patterned after tangrams.
Noémi Éltető, Peter Dayan
2023-06-08T15:42:56Z
http://arxiv.org/abs/2306.05298v1
# Habits of Mind: Reusing Action Sequences for Efficient Planning ###### Abstract When we exercise sequences of actions, their execution becomes more fluent and precise. Here, we consider the possibility that exercised action sequences can also be used to make planning faster and more accurate by focusing expansion of the search tree on paths that have been frequently used in the past, and by reducing deep planning problems to shallow ones via multi-step jumps in the tree. To capture such sequences, we use a flexible Bayesian action chunking mechanism which finds and exploits statistically reliable structure at different scales. This gives rise to shorter or longer routines that can be embedded into a Monte-Carlo tree search planner. We show the benefits of this scheme using a physical construction task patterned after tangrams. **Keywords:** Planning; Sequence models; Bayesian nonparametrics; Action chunking; ## Introduction The _law of exercise_ holds that animals have a tendency to repeat or perseverate on frequent choices, even when this reduces reward [13, 14]. This leveraging of structure in behaviour complements the leveraging of structure in the environment. Gershman (2020) suggested that perseveration arises from a trade-off between reward maximization and policy complexity, with repetition making policies more compact. This theory was focused on isolated actions; however, animals also pick up on _sequences_ of action. The action chunking process that underlies skill or habit learning [12] has also been observed in multi-step decision-making tasks, for instance with rats repeating whole trajectories of choices [10]. A hallmark of such action chunks is that they are performed quickly and uninterrupted by state assessment, that is, open-loop. The law of exercise may apply not only to the _execution_ of actions in multi-step problems, but also to the _planning_ thereof. Huys et al. (2015) tested how human participants simplified a deep planning problem. One of the three heuristics they found was stochastic memoization, in which participants became increasingly prone to repeat previous action paths in their entirety or chunks of them, even though these might not be optimal. In their interpretation, this marks a transfer from flexible but costly computation to the less flexible but cheaper reliance on past experience. A similar observation was made in a task where participants planned and constructed complex structures that were required to be stable in a virtual environment simulating rigid-body physics [24]. During training, participants' construction procedures became increasingly stereotyped. The trouble with sequences of actions is that their number tends to grow exponentially with the length of the sequence. This makes methods that try to be comprehensive [15] computationally infeasible. Indeed, we know of no scalable or cognitively plausible model of action chunk reuse in planning. Here, in keeping with a general trend towards internalized decision-making [12], we examine the benefits of using a non-parametric Bayesian sequence model that can infer sparse chunks of various lengths, depending on the statistics of the sequences executed in past episodes. We use this model to augment Monte-Carlo Tree Search (MCTS) in two ways: using single actions that it predicts as being likely, to bias one-step tree expansions, and proposing popular multi-step expansions. We test our model on a physical construction task called the Sticky Tangram.
## MCTS-with-HABITS To examine the benefits of biasing planning towards previously learned action sequences, we used a Bayesian infinite sequence model (a hierarchical Dirichlet process) to augment Monte Carlo Tree Search with open-loop, multi-action, node expansions. We call the new planning algorithm MCTS-with-HABITS (Figure 1). Consider the search tree in Figure 1, representing a deterministic planning problem in which nodes correspond to states and the edges correspond to actions. In Monte Carlo Tree Search (MCTS; Kocsis and Szepesvari (2006)), the tree is traversed from the root node to a leaf by repeated action choices based on their simulated expected value. This decision mechanism is called the tree policy. A leaf is any node that has a potential child whose value has not been estimated. Once a leaf is reached, it is expanded by considering the actions that are available in that state and choosing one of them. Then, the value of the node is estimated by simulating a random rollout using a uniform random policy. The reward of the rollout (in our case of a puzzle, 1 for a correct solution; 0 otherwise) is backpropagated to the selected node and its ancestors. The four ingredients - selection, expansion, simulation, and backpropagation - constitute one step of the MCTS algorithm. Since it estimates the value of nodes via simula tion, MCTS focuses the search on promising paths without the need for handcrafted distance heuristics as in A*. In order to balance this exploitative focus (from the fraction \(\frac{w_{t}}{n_{t}}\) of wins \(w_{t}\) from \(n_{t}\) tries from a node on the \(t^{\text{th}}\) MCTS step) with exploration of potentially even better states, a bonus is added to actions leading to lesser tried nodes (\(\sqrt{\frac{lnN_{t}}{n_{t}}}\), where \(N_{t}\) is the total number of simulations run from the parent node; as in the UCT algorithm). To this conventional tree policy, we add a habit value term, that is, the scaled likelihood of the action \(a\) planned to get to a node, making its total propensity be: \[Q(a)=\frac{w_{t}}{n_{t}}+c*\sqrt{\frac{lnN_{t}}{n_{t}}}+h*p_{t}(a_{i}=a|a_{0}:a _{i-1}) \tag{1}\] where \(c\) is the exploration coefficient, here fixed to 1; \(h\) is the habit coefficient; \(a_{i}\) is the action directly leading to the node; and \(a_{0}:a_{i-1}\) is the path of all preceding planned actions from the root, where \(i\) indicates the current depth in the tree. Note that the win proportion and exploration terms are node-dependent, while the habit term is node-independent. Probability \(p_{t}(a_{i}=a|a_{0}:a_{i-1})\) comes from a Bayesian nonparametric sequence model that flexibly combines the predictive power of dependencies at variable depths (Teh, 2006). The model was recently used to explain motor skill learning in humans (Etteo, Nemeth, Janacsek, & Dayan, 2022)). Here, we adapt it to the task of hierarchical state augmentation (HABITS) in which it provides a flexible window onto the action path leading to a state, for determining the likelihood of the next action in the search (upper right box of Figure 1). Its predictions are based on previous action sequence outputs of the planner. 
Formally (temporarily dropping the iteration index \(t\)), the model is an hierarchical Dirichlet process (HDP): \[a_{i}\sim\mathbf{G_{u_{i}}}\sim HDP(\alpha,\mathbf{G_{\pi(u_{i})}}) \tag{2}\] where \(\mathbf{G_{u_{i}}}\) is the vector of action probabilities given the context \(\mathbf{u_{i}}\) of some previous actions before the \(i\)-th action; \(\alpha\) is a strength parameter, controlling the resemblance to the base distribution and, therefore, the speed of sequence learning; and the base distribution \(\mathbf{G_{\pi(u_{i})}}\), which is the probability distribution over actions given the truncated context \(\pi(\mathbf{u_{i}})\) containing all but the earliest action. Applying Equation 2 recursively performs a weighted smoothing across the action probabilities conditioned on preceding action chunks of shrinking sizes. For more details and algorithms, see Eletto et al. (2022). In sum, the sequence model flexibly augments the states with a _weighted_ window onto the previous actions, efficiently utilizing the ones that have predictive power. Assume that the HABITS module has been trained on action sequence outputs of the planner from previous episodes (irrespectively of the quality of those plans). Then, the HABITS module interacts with the planner in two ways. It allows for biasing the one-step search towards predictable action sequence paths through the habit term in Equation 1. Moreover, by unrolling the sequence model, we also extend the node's action repertoire by action chunks (lower right box in Figure 1). Action chunks were created by sampling and attaching successor actions while the conditional entropy of the distribution over the next action was lower than the open-loop threshold parameter \(\omega\). When the action chunks were considered by the tree policy, their habit value was determined by Figure 1: Schematic of the MCTS-with-HABITS model. The planner builds a search tree with nodes representing states and edges representing different possible actions, marked by colors (left box). The planner traverses the tree by choosing actions that ultimately lead to more simulated wins and that are more predictable by the HABITS sequence model (upper right box). At the leaf state (marked by the bold node) several potential actions (marked by dashed edges) are considered that lead to states (marked by dashed nodes) that have not been added to the tree yet. The _red_ and the _green_ actions are primitive actions whose winning values (marked as pies) are the same. Yet, the _green_ action has a higher likelihood under the sequence model, when conditioned on the action trace from the root to the leaf node. Therefore, the _green_ action will be preferred over _red_. The _green_ action is also predictably followed, given the context of the action trace, by the _blue_ action. Therefore, the chunk generator (lower right box) appends the _blue_ action to the _green_ one. Conditioned on the previous action trace _and_ the _blue_ action, no further action is strongly predicted and the chunk generation is terminated. Thus we extend the available action set by the _green-blue_ chunk. Note that the primitive _green_ action, from which we unrolled a chunk, is also kept as an alternative. If the action chunk is selected by the planner then it jumps over the state marked by a grey node, not considering the available actions from that state, also marked by grey. 
This effective stunting results in a different value estimate for the chunk _green-blue_ (marked by the larger pie in the node) compared to the primitive action _green_; in this example, the chunk will be preferred by the tree policy. By choosing an action chunk, the agent jumped deeper in the tree without adding the intermediate node(s) to the tree and evaluating them, effectively stunting all the other branches that would have grown from the intervening nodes. As such, the MCTS-with-HABITS can be viewed as a hybrid between open- and closed-loop planning. The complexity of the chunks proposed by the HABITS module scaled with the complexity of the sequences that the agent executed in past episodes, opportunistically saving as much planning cost as possible. Our experiments tested the unique benefits of the two separate mechanisms: the sequence bias in the one-step search, realized by the habit term in the tree policy, and the open-loop action chunks, realized by unrolling the sequence model until the open-loop entropy threshold \(\omega\) is reached. Therefore, we compared three variants of our model: MCTS-with-HABITSfull, utilizing both one-step biases and open-loop action chunks; MCTS-with-HABITSopen-loop, utilizing no one-step bias for the actions but having the alternative of unrolling primitive actions into chunks; and MCTS-with-HABITSone-step, having no action chunk alternatives but utilizing sequence predictions to bias its action choices; along with the control model, MCTSvanilla. For brevity, we will refer to MCTS-with-HABITSfull as MCTS-with-HABITS. Within the scope of this paper, we did not optimize the model parameters, but selected values for demonstration (Table 1). Apart from the labelled additions to the tree policy, the MCTS algorithm was used conventionally, looping through node selection, expansion, rollout with a uniform random policy, and backpropagation of a binary value (success or failure) to the selected node and its ancestors.

## Results

### Task

We tested our model on a physical construction task inspired by the classical tangram puzzle, versions of which were recently employed by Bapst et al. (2019) and W. P. McCarthy, Mattar, Kirsh, and Fan (2021). Our version is called the Sticky Tangram (Figure 2A). The agent had to plan a construction that matched a target silhouette, using the seven building blocks in the inventory. The blocks could be placed into a grid space which discretized and constrained the state space. The coarse discretization is essential in order to keep the puzzle in a moderate complexity regime. The blocks had to be placed in such a sequence that they touched each other, i.e. floating blocks were not allowed. A state included the target silhouette and the action history of the relative positions of the previously placed building blocks (Figure 2B). Adding a block to the construction constituted a primitive action; adding several blocks at once constituted an action chunk. In any state, only valid actions were considered - that is, block placements that were inside the borders of the target silhouette and that resulted in no disjoint parts. Note that this applied to action chunks as well, meaning that actions that were unrolled into chunks that would have violated the rules of the game were discarded.
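The open-loop chunk generation described above can be sketched as follows; this is an illustrative outline rather than the code used in our simulations, and the `next_action_dist` predictor (standing in for the HDP module) and the `is_valid` rule check for the Sticky Tangram are assumed interfaces.

```python
import math
import random

def entropy(dist):
    """Shannon entropy (in nats) of a dict {action: probability}."""
    return -sum(p * math.log(p) for p in dist.values() if p > 0)

def unroll_chunk(first_action, trace, next_action_dist, is_valid,
                 omega=1.5, max_len=10):
    """Grow an open-loop action chunk from `first_action`.

    next_action_dist(context) -> {action: prob}  (assumed sequence-model interface)
    is_valid(chunk)           -> bool            (assumed task-rule filter)
    Successor actions are sampled and appended while the conditional entropy of
    the next-action distribution stays below the open-loop threshold omega;
    chunks that would violate the task rules are discarded (return None).
    """
    chunk = [first_action]
    context = list(trace) + chunk
    while len(chunk) < max_len:
        dist = next_action_dist(context)
        if not dist or entropy(dist) >= omega:
            break                             # continuation no longer predictable
        nxt = random.choices(list(dist), weights=list(dist.values()))[0]
        chunk.append(nxt)
        context.append(nxt)
    if len(chunk) < 2 or not is_valid(chunk):
        return None
    return chunk

# Toy example mirroring Figure 1: after "green", "blue" is highly predictable,
# but after "green"-"blue" nothing is, so the chunk stops at length two.
dists = {("green",): {"blue": 0.95, "red": 0.05},
         ("green", "blue"): {"red": 0.5, "yellow": 0.5}}
chunk = unroll_chunk("green", [], lambda ctx: dists.get(tuple(ctx[-2:]), {}),
                     is_valid=lambda c: True, omega=0.5)
print(chunk)  # most likely ["green", "blue"]
```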
\begin{table}
\begin{tabular}{l c c c c c} \hline
Model name & \(b\) & \(c\) & \(\alpha\) & \(h\) & \(\omega\) \\ \hline
MCTS-with-HABITSfull & 50 & 1 & 1 & 5 & 1.5 \\
MCTS-with-HABITSopen-loop & 50 & 1 & 1 & 0 & 1.5 \\
MCTS-with-HABITSone-step & 50 & 1 & 1 & 5 & 0 \\
MCTSvanilla & 50 & 1 & - & 0 & 0 \\ \hline
\end{tabular}
\end{table} Table 1: Parameter settings for four alternative model variants. The node budget \(b\), exploration coefficient \(c\) and sequence learning parameter \(\alpha\) were fixed across the model variants, while we varied the habit coefficient \(h\) and the open-loop threshold \(\omega\). Figure 2: (A) Schematic of an example trial of the Sticky Tangram task. The silhouette (grey area) is the target that should be built up using the building blocks (shown in different colors). The first chosen block has to be placed on the floor (yellow line); the following blocks have to be placed such that they touch at least a unit segment of a previous block’s edge (hence the ’stickiness’). In the duplet condition, two blocks always occurred together, in the same relative position; in the triplet condition, three blocks formed such a chunk. Throughout the paper, these two particular chunks will be used as examples. (B) The full search tree of the example problem in (A). The leaf node in red brackets is the only solution, the other leaf nodes represent dead ends.

### Experiments

### Reusing variable-size chunks

We randomly generated silhouettes that were composed of four building blocks (Figure 2A). In the duplet condition, two building blocks formed a chunk such that they would always occur together, in the same relative position, and had to be placed in a fixed order. In the triplet condition, three formed such a chunk. The unchunked blocks occurred in unpredictable relative positions. 'Chunky silhouettes' comprised the duplet and two random unchunked building blocks in the duplet condition, or the triplet and one random unchunked building block in the triplet condition. In the chunky silhouettes, the marginal probabilities of the chunked blocks were 1, while those of the unchunked blocks were \(<1\). That is because the elements of the chunks were _always_ present in the chunky silhouettes, by definition, but the other blocks were not. Since we wanted to study the advantage of _sequence structure_ in planning problems, we ensured that the overall marginal probabilities of the blocks were equal, and a chunked block was more predictable only conditioned on the other elements of the chunk. Therefore, we introduced 'random silhouettes' comprising only unchunked blocks. A 4:3 ratio of the chunky and random silhouettes in the training set ensured that the marginal probability distribution of the blocks was uniform - that is, the agent used all the blocks equally frequently for the correct solutions. Since the chunky silhouettes were generated using more constraints than the random ones (i.e. the chunk constraint), on average, the full search trees of chunky silhouettes were less complex. We quantified the problem complexity as the median number of nodes that the MCTSvanilla evaluated until finding the solution. Then we sampled sets of the chunky and random silhouettes such that they were matched by their complexity. We restricted the problem complexity to be \(\leq 50\). The example silhouette shown in Figure 2A and B has a full search tree of 18 nodes and a low complexity of 12. We trained four MCTS variants on 19 trials (a randomized sequence of chunky and random silhouettes).
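As an illustration of the complexity measure used above (the median number of nodes that MCTSvanilla evaluates before finding the solution), a small sketch follows; `run_vanilla_mcts` is a hypothetical stand-in for the planner and is not part of the original pipeline.

```python
from statistics import median

def problem_complexity(silhouette, run_vanilla_mcts, n_runs=32):
    """Median number of nodes evaluated by vanilla MCTS before solving the puzzle.

    run_vanilla_mcts(silhouette, seed) is assumed to return the node count of
    one planning run; the median over runs gives the complexity estimate.
    """
    return median(run_vanilla_mcts(silhouette, seed) for seed in range(n_runs))

def within_budget(silhouettes, complexity, max_complexity=50):
    """Keep only puzzles whose estimated complexity is at most the cap used here."""
    return [s for s in silhouettes if complexity(s) <= max_complexity]
```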
During training, all model variants used a budget of 50 nodes, enabling nearly perfect performance due to the silhouettes having complexity values \(\leq 50\) (Figure 3A and E). In the case of the three MCTS-with-HABITS model versions, the sequence module was trained on the action sequence output of the planner. We ran 32 simulations for both the duplet and triplet conditions, from different random seeds, and averaged the results. All model variants showed nearly perfect performance given a node budget sufficient for the problems (\(>96\%\) in the duplet condition and \(>94\%\) in the triplet condition; Figure 3A and E), albeit with different patterns of chunk use. The sequence model of the MCTS-with-HABITS model versions gradually extracted the structure in the action sequences generated by the planner, that is, the action duplets and triplets in the respective conditions. The MCTS-with-HABITS and MCTS-with-HABITSopen-loop models began to propose action chunks and these chunks were used more often in the correct solutions throughout the training, both in the duplet (effect of _trial:_\(F(1,696)=293.70\), \(p<.001\); Figure 3B) and in the triplet conditions (effect of _trial:_\(F(1,712)=29.51\), \(p<.001\); Figure 3F). In both conditions, the MCTS-with-HABITS was more disposed to include chunks in its plans (_model_ effect: \(F(1,696)=207.58\), \(p<.001\) for the duplet condition; \(F(1,712)=150.39\), \(p<.001\) for the triplet condition). Problems that were solved in four primitive steps by the MCTSvanilla (Figure 3C and G) were solved by the MCTS-with-HABITS in only three steps in the duplet condition (Figure 3D) and in two steps in the triplet condition (Figure 3 H). Note that action chunks were reused in spite of the same states never reoccurring across episodes since the sequence model proposes chunks in a state-independent manner. In sum, the MCTS-with-HABITS model found and utilized the largest exploitable chunks that reoccurred in planned action sequences. ### Solving high complexity problems with less compute After the training trials, we gradually reduced the node budget from the initial value of 50 to 12, 8, 5, and 1. In order to compare performance across these node budgets, we froze Figure 3: Emergence of action sequence reuse in the duplet condition (A-B) and the triplet condition (C-D). Error bars and bands indicate the 95% CI. (A)(E) All model variants had nearly perfect performance both in the duplet and triplet conditions. (B)(F) The two model versions that were permitted to use open-loop action chunks gradually picked up on the structure in the action sequences and became more likely to use an action chunk for the solution. In the triplet condition (F), chunk use emerged even faster than in the duplet condition (B). (C)(G) Examples of a correct solutions by the MCTSvanilla. (D)(H) Examples of solutions by the MCTS-with-HABITS model, using an action duplet (D) and an action triplet in (H). The black polygons mark chunks. the sequence model at the test trials. At each reduced node budget value, the agents solved four silhouettes. We ran 32 simulations for both the duplet and triplet conditions separately, from different random seeds, and averaged the results. When solving random silhouettes, all model variants' performance dropped to similar degrees as a function of node budget restriction (effect of _budget_: \(F(1,1872)=124.58\), \(p<.001\); effect of _model_: \(F(1,1872)=0.42\), \(p<.73\); Figure 4A and C). 
In the case of the chunky silhouettes, when previously learned action sequences were appropriately reusable, the resource-constrained model performance depended on whether the model was allowed to use one-step biases, action chunks, or both; this held both in the duplet (effect of _model_: \(F(1,1244)=17.03\), \(p<.001\); Figure 4B) and triplet conditions (effect of _model_: \(F(1,1188)=57.16\), \(p<.001\); Figure 4D). We then analyzed the planning performance in the resource-restricted regime of 12, 8, and 5 nodes, as a function of problem complexity. In the case of the random silhouettes, the different model variants' performance scaled with the problem complexity similarly, both in the duplet (_model*complexity_ interaction: \(F(6,384)=1.72\), \(p=.11\); Figure 5A) and triplet conditions (_model*complexity_ interaction: \(F(6,364)=.52\), \(p=.78\); Figure 5C). However, performance on the chunky silhouettes scaled differently with complexity across the model variants, in both conditions (_model*complexity_ interaction in duplet condition: \(F(6,744)=1.89\), \(p=.07\); Figure 5B; _model*complexity_ interaction in triplet condition: \(F(6,764)=4.07\), \(p<.001\); Figure 5D). This suggests that the _effective_ complexity of problems was reduced by relying on past action sequence statistics. ### Solving ambiguous problems We trained the model variants on chunky silhouettes that enforced an action chunk - that is, the chunked blocks had to be placed in fixed relative positions and order for the correct solution. Here, we test the model variants on so-called'sequence-ambiguous silhouettes' that have multiple solutions allowing for placing the chunked blocks in different orders (Figure 6A and E). Crucially, such sequence-breaking was unlikely under the sequence model, making the sequence-breaking alternative a lower-value plan. Indeed, solutions containing chunks were preferred by both MCTS-with-HABITS and MCTS-with-HABITS\({}_{\text{open-loop}}\) models (Figure 6C and G). We allowed for a flexible node budget and measured the number of node evaluations Figure 4: Model performance in the face of node budget restriction in the test phase of Experiment 1, in the duplet condition (A-B) and in the triplet condition (C-D). Error bars indicate the 95% CI. (A)(C) Performance on random silhouettes decreased for all model variants to similar degrees due to the node budget restriction. (B)(D) For chunky silhouettes, the performance of the MCTS-with-HABITS\({}_{\text{open-loop}}\) and MCTS-with-HABITS\({}_{\text{one-step}}\) was more resilient to the node budget restriction than that of the MCTS\({}_{\text{vanilla}}\). The full MCTS-with-HABITS model showed the best performance in the face of node budget restriction. Figure 5: Model performance as a function of problem complexity in the test phase of Experiment 1, in the duplet condition (A-B) and in the triplet condition (C-D). Error bars indicate the 95% CI. Color coding is identical to that of Figure 4. performed until the solution was reached (Figure 6D and H). In both the duplet and triplet conditions, the number of evaluated nodes was significantly different between the model variants, and the lowest in the case of the MCTS-with-HABITS (_model_ effect: \(F(3,156)=25.86\), \(p<.001\) and _model_ effect: \(F(3,156)=42.99\), \(p<.001\). To conclude, our model prefers to reuse past action sequences even when other alternatives are viable and this confers it the computational benefit of planning with smaller trees. 
## Discussion Discovering the building blocks of a problem is key to its efficient solution. The behavior of animals suggest that they form action chunks that they flexibly employ for solving novel tasks. We proposed that such actions chunks can be usefully integrated into planning as well, effectively reducing the depth of problems. We introduced the MCTS-with-HABITS, a model that finds and leverages action sequence patterns at variable scales and integrates them into the planning process in order to save computational costs. We tested the model on a physical construction task that was constrained such that certain blocks were placed in predictable positions relative to each other. Our method used a flexible Bayesian sequence model, which can find predictable action sequences of different sizes in past episodes, as induced by the task constraints, which it can then reuse for the planning problem at hand. We allowed the sequence model to influence the one-step search inherent to MCTS, and also to propose multi-step node expansions to the planner. Both mechanisms gave the model an edge in performance over the baseline model when tested under resource-constraints. We showed that this was related to the model being more economical with its node budget and searching deeper on predictable paths via the multi-step expansions. Various extensions to the method are possible. For instance, in our current algorithms, the learned action sequences only influenced MCTS' tree policy, that is, the search among the nodes whose values were evaluated. One could also use the sequence model to inform the rollout policy, potentially improving the simulation step. Equally, it would be possible to store information about the complexity of subsequent planning with nodes, and use this to adjust the use of the sequence model depending on overall demands of planning space or time. In the current work, the construction task that our model was tested on was more complex than typical psychology experiments but less complex than the ones many real-life applications pose. Testing the model on more complex problems and problems with partially transferable structure will be revealing about the scalability of our approach. It would be particularly interesting to look at stochastic problems, and those with intermediate rewards, which exploit more of the power of MCTS. Our model extracts and reuses action patterns in past behavior. Another class of approaches focuses on past state visitation patterns. McGovern and Barto (2001) proposed that so-called bottleneck states can be identified at rare state transitions and that they are useful subgoals that link subproblems. Several methods based on the successor representation (Machado et al., 2017; Machado, Barreto, Precup, & Bowling, 2021) use the eigendecomposition of state succession patterns to carve the environment into subproblems. However, these approaches require the agent to have explored the entire state space extensively, a requirement that often cannot be met in machine learning and robotics. Applying the law of exercise to the realm of planning was inspired by the cognitive science literature. How model-free (Daw, Niv, & Dayan, 2005) or value-free (Miller et al., 2019) statistics interact with model-based reasoning has been a long-standing interest in the field. Here we proposed a hybrid model that flexibly adjusts the weighting of value-based and value-free assessments by relying on predictably repeated chunks of behavior. 
Such sequence reuse yields a solution for optimizing the direction and depth of search (Sezener, Dezfouli, & Keramati, 2019; Kuperwajs & Ma, 2021), reminiscent of humans' resource-rational strategies (Callaway et al., 2022). Future experiments should investigate whether humans indeed use habits for planning, whether they utilize such habits of mind adaptively in order to save planning costs, and, if so, whether their planning is well described by the model presented here. Figure 6: Action chunk reuse in sequence-ambiguous problems in the duplet (A-C) and triplet conditions (D-F). Error bars indicate the 95% CI. (A)(D) Example silhouettes whose correct solutions allowed for flexible ordering of the chunk elements. The digits indicate the possible serial positions of blocks in different solutions. The blue digits indicate the serial positions of the blocks in solutions that preserved the chunked order (same example chunks as in Figure 3). (B)(E) Solutions containing chunks were preferred by both model variants that were enabled to generate chunks. (C)(F) The MCTS-with-HABITSfull model evaluated the least number of nodes until finding a solution. ## Acknowledgments NE and PD were funded by the Max Planck Society. PD was also funded by the Alexander von Humboldt Foundation. The authors thank Tankred Saanum for helpful conversations.
2306.03831
GEO-Bench: Toward Foundation Models for Earth Monitoring
Recent progress in self-supervision has shown that pre-training large neural networks on vast amounts of unsupervised data can lead to substantial increases in generalization to downstream tasks. Such models, recently coined foundation models, have been transformational to the field of natural language processing. Variants have also been proposed for image data, but their applicability to remote sensing tasks is limited. To stimulate the development of foundation models for Earth monitoring, we propose a benchmark comprised of six classification and six segmentation tasks, which were carefully curated and adapted to be both relevant to the field and well-suited for model evaluation. We accompany this benchmark with a robust methodology for evaluating models and reporting aggregated results to enable a reliable assessment of progress. Finally, we report results for 20 baselines to gain information about the performance of existing models. We believe that this benchmark will be a driver of progress across a variety of Earth monitoring tasks.
Alexandre Lacoste, Nils Lehmann, Pau Rodriguez, Evan David Sherwin, Hannah Kerner, Björn Lütjens, Jeremy Andrew Irvin, David Dao, Hamed Alemohammad, Alexandre Drouin, Mehmet Gunturkun, Gabriel Huang, David Vazquez, Dava Newman, Yoshua Bengio, Stefano Ermon, Xiao Xiang Zhu
2023-06-06T16:16:05Z
http://arxiv.org/abs/2306.03831v2
# GEO-Bench: ###### Abstract Recent progress in self-supervision has shown that pre-training large neural networks on vast amounts of unsupervised data can lead to substantial increases in generalization to downstream tasks. Such models, recently coined _foundation models_, have been transformational to the field of natural language processing. Variants have also been proposed for image data, but their applicability to remote sensing tasks is limited. To stimulate the development of foundation models for Earth monitoring, we propose a benchmark comprised of six classification and six segmentation tasks, which were carefully curated and adapted to be both relevant to the field and well-suited for model evaluation. We accompany this benchmark with a robust methodology for evaluating models and reporting aggregated results to enable a reliable assessment of progress. Finally, we report results for 20 baselines to gain information about the performance of existing models. We believe that this benchmark will be a driver of progress across a variety of Earth monitoring tasks. ## 1 Introduction Earth monitoring with machine learning-based methods plays an increasing role in climate change mitigation and adaptation as well as climate science [58]. Related applications include methane source detection [62; 16], forest carbon quantification [44], extreme weather prediction [50], and crop monitoring [34; 14]. Across many of these applications, pre-trained models (e.g., a ResNet trained on ImageNet) have been used to increase generalisation performance. Improvement of the pre-trained models has been shown to reduce the need for large labelled datasets in some contexts [11] and can improve model generalisation outside of the training distribution [27]. Recent studies exploring the scaling of such pre-trained models found that increasing the size of an unsupervised (or weakly supervised) dataset as well as properly scaling the model led to an even greater increase in performance under various metrics [33; 56]. While the training of such large-scale models is usually reserved for industrial research groups with very large computer clusters, the publication of pre-trained models creates vast opportunities for the entire research and technology community (including communities of domain experts outside of machine learning). These large pre-trained models were recently coined as _foundation models_[6] as they might serve as foundations for sub-fields of machine learning. Specifically, the publication of large pre-trained models like BERT [15] and GPT-3 [7] led to a paradigm shift in the field of natural language processing (NLP). This inspired a similar shift in the field of computer vision with the release of models like CLIP [56] and DINO [9]. While CLIP performs well on various types of vision tasks, it still under-performs on Earth monitoring tasks [56]. This is not surprising as it is trained mainly on RGB images taken from a ground perspective at a single point in time. While there are many similarities between Earth observation datasets and typical ML image datasets, there are also many important differences to consider when designing effective ML models. Earth observation images are taken from an overhead rather than ground perspective, usually from a fixed distance from the Earth's surface (defined by a satellite's orbit). 
The satellite revisits provide a temporal axis that is sometimes irregular (e.g., a few times per year) or regular (e.g., every five days) with cloud coverage causing spurious occlusions. Images are acquired with sensors containing multiple spectral bands (e.g., thirteen for Sentinel-2), or even with different kinds of sensors, e.g., synthetic aperture radar (SAR), which can penetrate cloud coverage. Moreover, the GPS coordinates and timestamp of each acquisition offer the opportunity to combine data from multiple sources, e.g., weather data, semantic maps, and elevation. This leads to a rich multi-modal signal with potentially missing information that can be inferred from other elements of the signal. There are currently petabytes of accessible satellite datasets containing images of the Earth under various modalities from the present day to as far back as the 1960s. Distilling this large amount of information into pre-trained models of various sizes offers the opportunity to redistribute this information and make it accessible to various labs for increasing the performances on a large range of downstream tasks. The fundamental goal of these large pre-trained models is to improve generalization performance on downstream tasks. Hence, to support the machine learning community in producing better pre-trained models, it is crucial to provide a benchmark with a wide variety of downstream tasks, covering a range of modalities and dataset shapes that are likely to be encountered in practice. At the moment, existing works on pre-training models from earth observations e.g., [13; 47; 70], evaluate on different sets of downstream tasks, making it impossible to directly compare performance. Moreover, the set of tasks is often narrow in terms of diversity and the statistical methodologies do not adequately report the uncertainties in the evaluation. The present work aims to fill this void by providing a wide range of tasks across various countries with various modalities of sensors. Also, the transformed versions of the datasets are smaller than their original form, and all results can be replicated on single GPUs. This increases accessibility to research labs with limited resources and reduces overall energy consumption. Our proposed benchmark, GEO-Bench1, is composed of six image classification and six semantic segmentation tasks, which were curated by domain experts to ensure their diversity and relevance toward sustainable development. We expect this contribution to: Footnote 1: [https://zenodo.org/communities/geo-bench](https://zenodo.org/communities/geo-bench) * Stimulate and facilitate the development of foundation models for Earth monitoring * Provide a systematic way of measuring the quality of models for better scientific progress * Provide insights into which pre-trained models work best * Potentially reduces negative impacts of foundation models through an open evaluation procedure. In what follows, we start by discussing sources of data that can serve to train foundation models for earth monitoring (Sec. 2). We then present the details of GEO-Bench (Sec. 3) and how it can be used for the evaluation of foundation models (Sec. 4). Further, we review existing benchmark datasets for earth monitoring and discuss why GEO-Bench is complementary (Sec. 5). Finally, we present an extensive set of experiments, showing the performance of 20 state-of-the-art models on the benchmark to lay down reference points and to gain valuable information on existing pre-trained models (Sec. 6). 
## 2 Remote sensing data for self-supervision The development of foundation models does not typically rely on a specific dataset for the pre-training phase. The choice of data source is part of the design of the model, e.g., a very large corpus of text from the internet [51] or pairs of text associated with images from the web [56]. As such, we do not provide data for training foundation models with this benchmark. However, for completeness, we outline potential sources of Earth observation data that could be used for pre-training foundation models. Multispectral images with revisits Satellite data sources such as Sentinel-2 [19; 22] and Landsat 8 [67] provide images in multiple spectral bands with periodic revisits. This yields a four-dimensional array of structured data (longitude, latitude, wavelength, time) which can be used to perform various forms of self-supervision, e.g., predicting adjacent tiles [30] or contrasting the different seasons for the same region [47]. Other sensors Synthetic Aperture Radar (SAR) and terrain elevation are also frequently available and can be matched to other sources of data through geolocalisation [55]. Such data are complementary to optical spectral bands and may encourage the model to learn higher-level semantic representations. Semantic data Through georeferencing, text-based data such as Wikipedia articles can be linked to satellite images [68]. It is also possible to join content from non-image data layers like OpenStreetMap [39]. By predicting or contrasting information from these sources, the model may learn useful and transferable semantic representations. ## 3 GEO-Bench GEO-Bench is composed of 6 classification tasks and 6 segmentation tasks. Detailed characteristics are presented in Table 1, examples are depicted in Figure 4 and 3, and the spatial coverage on the world map is presented in Figure 8 (supplementary material). In what follows, we describe the procedure for collecting and transforming the datasets. ### Design Principles GEO-Bench was established by modifying and gathering geospatial datasets, adhering to principles that secure accessibility, usability, and effective model performance assessment across tasks. Ease of Use A fundamental goal was to create an accessible, simple-to-use benchmark, and a compact dataset assortment with code for loading the data in a consistent schema. A key aim was to harmonize data to reduce the engineering work needed to tailor pre-trained architectures, while maintaining sensor type and resolution diversity. Sector Experts and Steering Committee To align GEO-Bench with practical use-cases, we assembled a team of six sector experts from fields such as forestry and climate science. A steering committee of respected scientists guides high-level benchmark decisions, assuring relevance and impact. Diversity of Modalities The objective is to evaluate model adaptability to varied geospatial sensors. Thus, the benchmark encompasses multispectral, SAR, hyperspectral, elevation, and cloud probability modalities, with spatial resolutions from 0.1 to 30 m/pixel. Diversity of Tasks We ventured beyond image classification, incorporating object detection and semantic segmentation. To maintain _ease of use_, detection and counting tasks were transformed into semantic segmentation. This led to two task sets: six image classification tasks, and six semantic segmentation tasks [24, 38]. 
Original Train, Validation, and Test Splits Original dataset splits were preserved when available; otherwise, we generated validation and test sets from the train set while ensuring no spatial overlap. Permissive License Most datasets needed to be adapted from their original form to satisfy the above criteria and be included in the benchmark. Hence, we include only datasets with permissive licenses. Figure 1: Foundation models encapsulate multimodal data streams through self-supervised training. The trained models can then be fine-tuned for a variety of climate-related remote sensing tasks. Image sources: quantification [44], detection [32], generation [45], counting [36], segmentation [76], and multi-class classification [52]. ### Dataset Transformations To produce a benchmark that complies with the design choices of Section 3.1, we applied the following transformations to each dataset. The procedure that was used to download and transform each dataset is fully documented and open-sourced in the GEO-Bench GitHub repository2. Figure 3: Representative samples of the **segmentation benchmark**. Figure 2: Representative samples of the **classification benchmark**. **Subsampling Large Datasets** To be more representative of typical downstream tasks, where data is usually scarce, datasets larger than \(20000\) samples were randomly subsampled. Avoiding large downstream tasks also comes with other benefits: * In Appendix A, we show that larger downstream datasets can decrease the ability to discriminate between two models that are similar in performance. * Downstream tasks with very large training sets will not usually benefit from pre-training3. Hence they are less useful for our evaluation purpose. Footnote 3: From Bayes rule, we know that the influence of the prior (pre-trained model) decreases as the size of the training data increases. * A smaller benchmark is faster to download, yields results quicker and requires less energy for computation. * We can increase the variety of experiments and the number of seeds to improve the knowledge gained from experiments. **Removing Class Imbalance** We randomly subsampled large classes to have near-uniform class sizes across datasets. This was done to prevent users of the benchmark from increasing their score by using clever class imbalance techniques instead of making progress on better pre-trained models. While good performance on highly imbalanced (long tail of classes) datasets would be a desired property of a pre-trained model, we have not found a good dataset containing a large amount of classes. **Maximum Image Shape** Datasets with images larger than 256 \(\times\) 256 pixels were center cropped. This was done to minimize the trade-off between batch size and memory size4. For applications working with larger images, it is possible to tile different predictions of the trained model according to [28]. Footnote 4: In conventional vision applications, it is common to resize the image to keep the same high-level scene. On the other hand, with remote sensing data, resizing images would reduce the spatial resolution, which may discard important information and also change the distribution of patterns. ## 4 Using The Benchmark **Fine Tuning** In the self-supervised learning literature, it is common to use the pre-trained model to encode a fixed representation of each image in the dataset and learn to classify images based on this representation [30]. 
While this works relatively well, this method highly depends on the pre-training task as it may not learn to encode information that is important for the downstream task [66; 54]. In practice, fine-tuning the pre-trained model often mitigates this issue and is known to frequently yield a much higher generalization performance than a model trained from random weights [47; 11]. Since this is more representative of practical usage, we encourage users of the benchmark to report the results of fine-tuned models. On the other hand, we do not discourage users from also reporting results with fixed backbones (pre-trained weights) as this can provide valuable information about the pre-trained model. **Hyperparameter Tuning** Deep learning algorithms often require the adjustment of hyperparameters, especially when an architecture is fine-tuned on a small dataset. For this reason, we recommend adjusting hyperparameters, but within a maximum budget of 16 trials per task5. Early stopping based on validation metrics is also recommended. Footnote 5: While 16 is fairly small, we believe it’s enough to adjust sensitive hyperparameters such as learning rate. Also, this favours models that are less sensitive to hyperparameter tuning. **Data Augmentation** Data augmentation plays a crucial role in the training of deep learning models, especially with small training datasets. Hence, we consider it to be part of the fine-tuning process. As a guideline, we propose limiting the augmentations to \(90^{\circ}\) rotations and vertical and horizontal flips6. On the other hand, we also encourage users to study what are the best data augmentations for remote sensing as this could lead to useful findings for practitioners and the benchmark is well-suited for evaluating such findings. Footnote 6: Random crop and resize are also common in vision, but in remote sensing, this reduces the spatial resolution, which is often crucial for high performances. **Toolbox** To facilitate the usage of the benchmark, we provide a collection of tools for various parts of the experimental pipeline as part of the open-sourced codebase7. This includes tools for loading datasets and visualising results. We also provide tools based on PyTorch-Lightning [23] to facilitate model training. ### Reporting Results For reliable and comparable results across different publications, we recommend that users follow this procedure to report results. The aim is to report results on individual tasks as well as aggregated across all tasks, with reliable confidence intervals (inspired by [2]). Code is provided to generate figures based on raw results. Random SeedsAs demonstrated in [2], 3-5 seeds are not enough to obtain reliable confidence intervals. Since pre-training and hyperparameter search are usually the computational bottlenecks, we recommend retraining the selected hyperparameter configuration for at least 10 different seeds. Interquartile Mean (IQM)We recommend using IQM. This metric removes the outliers by trimming the 25% highest values as well as the 25% lowest value and computing the average of the remaining values. The resulting finite sample estimator is less biased than the median and has less variance than the mean, often resulting in smaller confidence intervals [2]. Normalising ResultsTo aggregate performance metrics across multiple tasks, one must first normalise their values. A common approach consists of applying a linear transformation based on reference points [4]. 
As such, we propose to use the lowest and highest metric values achieved by a set of strong baselines (see Sec. 6) as _official reference points_. For each individual task, we scale the results such that the maximum score is 1 and the lowest one is 0. Hence, if a future model were to achieve a score superior to 1, it would imply that progress is being made on the benchmark. All reference points will be published alongside the benchmark. BootstrappingTo quantify uncertainty over observed IQMs, we use bootstrapping [20]. That is, we sample \(n\) times, with replacement, the results from training with \(n\) different seeds, and we compute IQM. Repeating this procedure \(n\!=\!1000\) times provides a distribution over IQM results, from which confidence intervals can be extracted. Aggregated ResultsAfter normalizing the results we simply compute IQM across all datasets and all results of a given model. For confidence intervals, we use _stratified bootstrap_, where seeds are sampled with replacement _individually_ for each dataset, but IQM is computed across all datasets. Displaying the resultsIn Figure 4, we show how to compactly display results from a wide range of baselines across the benchmark as well as aggregated results and statistical uncertainties. In Figure 5, we display the results for a growing training set size (with fixed validation and test set). This compactly reports the results of thousands of experiments. sensing datasets. Using meta-information such as the number of samples, the size of each sample, and the type of annotations, they analyse the correlation between each dataset and identify a variety of clusters. Based on this analysis, they recommend two classification, two segmentation, and two detection datasets for benchmarking. In contrast, we provide a collection of 12 datasets and we propose a robust methodology for aggregating results and reporting statistical uncertainties of the evaluation process. ## 6 Experiments In this section, we provide a range of baselines for the classification and segmentation benchmarks. These will serve as reference points for future evaluation8. We also seek to answer the following questions: Figure 4: **Classification Benchmark RGB Only:** Normalised accuracies of various baselines (higher is better). Violin plots are obtained from bootstrap samples of normalized IQM (Section 4.1). The left plot reports the average across all tasks. * Which new architecture performs best on remote sensing data (Section 6.2.2)? * What is the effect of training set size on the performance of each model (Section 6.2.3)? * Can we leverage multispectral channels to improve performance (Section 6.2.4)? * Are smaller datasets better at discriminating the performance of different models (Section A.5)? ### Protocol For each model, we replaced the last layer with a randomly initialised layer of the appropriate shape for the task at hand. We use different learning rates for the last layer (which starts from random weights) and for the backbone (which starts from pre-trained weights). The best learning rates were selected using the highest accuracy or Intersection over Union (IoU) on the validation set over 16 trials 9. After choosing the hyperparameters, we repeated the training for 10 seeds. To minimize overfitting, we selected the best time step using accuracy (or IoU) on the validation set and we reported the test metrics at the chosen time step. We use AdamW [43] to train convolution architectures and SGD to train transformer architectures. 
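Returning to the reporting methodology of Section 4.1, the following NumPy sketch (not the official GEO-Bench toolbox) illustrates per-task normalisation against reference points, the interquartile mean over seeds, and stratified bootstrap confidence intervals for the aggregated score; the toy accuracies and reference points are made up for illustration.

```python
import numpy as np

def normalize(scores, low, high):
    """Linearly rescale raw metrics so the reference baselines span [0, 1]."""
    return (np.asarray(scores, dtype=float) - low) / (high - low)

def iqm(values):
    """Interquartile mean: drop the lowest 25% and highest 25%, average the rest."""
    v = np.sort(np.asarray(values, dtype=float))
    k = len(v) // 4
    return v[k:len(v) - k].mean()

def aggregated_iqm(per_task_scores):
    """IQM computed across all tasks and all seeds (scores already normalised)."""
    return iqm(np.concatenate([np.asarray(s, dtype=float) for s in per_task_scores]))

def stratified_bootstrap_ci(per_task_scores, n_boot=1000, alpha=0.05, seed=0):
    """Resample seeds with replacement within each task, recompute aggregated IQM."""
    rng = np.random.default_rng(seed)
    stats = [aggregated_iqm([rng.choice(np.asarray(s), size=len(s), replace=True)
                             for s in per_task_scores])
             for _ in range(n_boot)]
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

# Toy usage: one model, two tasks, 10 seeds each, with hypothetical reference points.
task_a = normalize([.91, .93, .92, .95, .90, .94, .92, .93, .91, .96], low=.85, high=.99)
task_b = normalize([.71, .74, .70, .73, .72, .69, .75, .72, .71, .74], low=.60, high=.80)
print(aggregated_iqm([task_a, task_b]), stratified_bootstrap_ci([task_a, task_b]))
```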
Footnote 9: The range of selected learning rates is different for each model and is selected based on early experiments, see appendix for details. ### Classification #### 6.2.1 Baselines Naming Schema Each baseline name starts with the corresponding architecture: **ResNet18 and ResNet50:** standard ResNet architectures [25]; **ConvNeXt-B:** the base architecture of ConvNeXt [41]; **ViT-T and ViT-S:** ViT architectures [18] of size tiny and small respectively; **SwinV2-T:** a SwinV2-tiny architecture [40]; **Million-AID ResNet50:** a ResNet architecture with pre-trained weights [70] on Million-AID [42], a remote sensing dataset with a size comparable to imagenet (only RGB). Then, keywords provide details about the training procedure: **MoCo-S2 and DINO-S2:** model trained with self-supervision on Sentinel data [71] (RGB and Multispectral pre-trained weights); **Rnd:** weights are randomly initialised; **timm:** pre-trained weights are obtained from the timm library, usually from training on ImageNet; **+R-Multi:** we manually augment an RGB architecture by randomly initialising the weights of the missing channels in the 1st layer; **multi:** the pre-trained model has multispectral channels. #### 6.2.2 Comparing Baselines on RGB only In Figure 4, we report bootstrapped IQM of the normalized accuracy (Sec 4.1) for the six datasets of the classification benchmark, as well as aggregated results10. In this first experiment, all models can only see the RGB channels. Footnote 10: We note that the variance of the results represents the uncertainty of the mean (IQM) which is significantly smaller than the variance of the raw seeds presented in Figure 9 in Appendix. These results offer valuable information across 10 common baselines in the literature. We denote the outstanding performance of SwinV2 compared to other models. It is by a large margin the best model in aggregated results and almost systematically outperforms all models on all datasets. We can also observe the large difference between Scratch ResNet18 and ResNet18 on all datasets. This highlights the importance of using a pre-trained model. Also, perhaps disappointingly, the existing model pre-trained on remote sensing data does not exhibit any improvement compared to their timm pre-trained weights, i.e., ResNet18-MoCo-S2, ResNet50-MoCo-S2, and ResNet50-Million-AID are all comparable to ResNet18 on the aggregated performance. On the other hand, in Section 6.2.4, we see that ResNet50-MoCo-S2-multi can leverage multispectral data to significantly outperform ResNet50-timm. Another insight that can be gained from these results is how useful a dataset is at discriminating baselines, i.e., a dataset where most baselines perform equally would have limited utility in our benchmark. To this end, we had to discard GeoLifeClef 2022 [12] as all models were performing equally badly11. m-eurosat also offers limited discriminativity as most models obtain close to 99% accuracy (see Figure 9). However, we can still observe that SwinV2 outperforms several other baselines on m-eurosat. Also, Figure 5 and Figure 14 show that, when decreasing the training set size, m-eurosat becomes more discriminative. Footnote 11: We suspect this dataset to have high aleatoric uncertainties. #### 6.2.3 Accuracy vs training set size As part of the benchmark, we also provide official subsets of the training sets with train ratios of (0.01, 0.02, 0.05, 0.1, 0.2, 0.5, 1) 12. Figure 5 depicts a different perspective on the models. 
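The +R-Multi initialisation described in the baseline naming schema above (keeping the pretrained RGB filters of the first layer and randomly initialising the weights of the additional spectral channels) can be sketched as follows in PyTorch; this is one plausible way to do it, not necessarily the exact procedure behind the reported numbers, and the torchvision weights stand in for the timm checkpoints used in the experiments.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

def inflate_first_conv(model, in_channels):
    """Replace conv1 so it accepts `in_channels` bands: copy the pretrained RGB
    filters, leave the remaining input channels at their random initialisation."""
    old = model.conv1
    new = nn.Conv2d(in_channels, old.out_channels, kernel_size=old.kernel_size,
                    stride=old.stride, padding=old.padding,
                    bias=old.bias is not None)
    with torch.no_grad():
        new.weight[:, :3] = old.weight     # pretrained RGB filters
        # channels 3..in_channels-1 keep nn.Conv2d's default random init
    model.conv1 = new
    return model

model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)  # ImageNet RGB weights
model = inflate_first_conv(model, in_channels=13)         # e.g. 13 Sentinel-2 bands
x = torch.randn(2, 13, 224, 224)
print(model(x).shape)  # torch.Size([2, 1000]); replace model.fc for a downstream task
```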
Footnote 12: Reporting results on all 7 subsets increases the number of experiments by 7x. However, in Figure 13 (see Appendix), we show that the convergence time is proportional to the training set size. This means that training on all seven subsets takes on average about 1.88 times longer than just training on the full training set. First, we can observe the noise due to the hyperparameter selection process that is not accounted for by repeating 10 seeds with fixed hyperparameters. Also, we see that ConvNeXt becomes progressively better than SwinV2 as the training set decreases. This coincides with the common observations that transformer architectures tend to be more data-hungry, but also tend to outperform convolution architectures in the high data regime [17]. We note also, that ConvNeXt-B-timm only requires 2% of the training set to obtain aggregated performances comparable to that of ResNet18-Rnd. This impressive factor of 50x on data efficiency highlights the importance of developing new architectures and new pre-training methods. Finally, we can observe an increase in the discriminativity of the datasets as the training set decreases, specifically for m-eurosat, when the task becomes more difficult, the strong baselines stand out even more. The discriminativity of datasets is further studied in Section A.5. #### 6.2.4 Leveraging Multispectral Information We now study the effect of leveraging multispectral information during the pre-training phase and during the fine-tuning phase. We do so by fixing the backbone to either ResNet50 (Fig. 6) or ViT-S (Fig. 7) and exploring various weight initialisation schema. Since we could only find pre-trained models for Sentinel-2, we limit this experiment to the four datasets satisfying this criterion. We found that using a model pre-trained on RGB-only (timm pre-trained) and augmenting the architecture by randomly initialising the weights of the missing channels in the first layer (+RMulti) does not lead to overall improved performance except on some task such as m-so2sat. Moreover, the fine-tuning time is largely extended since we have to wait until the newly initialised weights on the first layer fully converge. On the other hand the ResNet50 pre-trained on Sentinel-2 using MoCo [71] leads to an impressive performance increase across many tasks. ### Segmentation We defer experiments on the Segmentation benchmark to Appendix A.3, where we provide experiments on six baselines (ResNet18, ResNet50, ResNet101) \(\times\) (U-Net, DeepLabV3) with pre-trained weights provided by the timm library. While ResNet101-DeepLabV3 performs best in aggregate, it still underperforms on some datasets. ### Resource Usage See Appendix A.6 for detailed resource usage of each algorithm evaluated in this section. We report the number of parameters, memory usage, the time required for a forward pass, and the convergence time for fine-tuning on downstream tasks. While memory footprint can increase by a factor of 4x for a model like SwinV2 and ConvNeXt-B compared to ResNet50, their forward pass is only twice as slow. ## 7 Conclusion We developed a new benchmark for evaluating pre-trained models on remote sensing downstream tasks. This involves adapting a variety of remote sensing datasets to a more conventional machine learning pipeline and providing code for fine-tuning and evaluating on individual tasks. 
We expect that this benchmark will stimulate the development of new foundation models that could lead to better generalization on a variety of earth monitoring downstream tasks and could open up opportunities for new applications. **Limitations** Our benchmark does not extensively evaluate all desired features of a pre-trained model for earth monitoring. For example, it does not evaluate the ability to fine-tune on temporal data or to perform fusion with other types of data such as text or weather. Also, as pre-trained models become stronger, they will get closer to the theoretical limit of generalization performance, i.e. approaching the aleatoric uncertainty of the dataset. Under such a regime, we expect a bigger overlap between error bars when comparing two different models.
2301.01059
A refined multiplication formula for cluster characters
We obtain a multiplication formula for cluster characters on (stably) 2-Calabi-Yau (Frobenius or) triangulated categories. This formula generalizes those known for arbitrary pairs of objects and for Auslander-Reiten triangles. As an application, we show that for cluster algebras of acyclic types, specialization of a cluster variable to 1 sends all cluster variables to elements of a cluster algebra of smaller rank. We also obtain application to the reduction of friezes of acyclic type.
Bernhard Keller, Pierre-Guy Plamondon, Fan Qin
2023-01-03T11:59:51Z
http://arxiv.org/abs/2301.01059v2
# A refined multiplication formula for cluster characters

###### Abstract.

We obtain a multiplication formula for cluster characters on (stably) 2-Calabi-Yau (Frobenius or) triangulated categories. This formula generalizes those known for arbitrary pairs of objects and for Auslander-Reiten triangles. As an application, we show that for cluster algebras of acyclic types, specialization of a cluster variable to 1 sends all cluster variables to elements of a cluster algebra of smaller rank. We also obtain applications to the reduction of friezes of acyclic type.

###### Contents

* 1 Introduction
* 2 Refined multiplication formula: triangulated case
* 2.1 Recollections on 2-Calabi-Yau triangulated categories
* 2.2 The refined multiplication formula
* 3 Refined multiplication formula: Frobenius case
* 3.1 Recollections on Frobenius categories
* 3.2 The formula
* 4 Applications
* 4.1 Specialization of cluster variables in cluster algebras
* 4.2 Reduction of friezes
* 4.3 A formula for Auslander-Reiten triangles
* 4.4 Another restricted formula

## 1. Introduction

The additive categorification of cluster algebras has been an important tool in their study almost from their inception (see for instance the survey papers [10, 11, 12]). Such a categorification is given by a category \(\mathcal{C}\) (usually triangulated or exact) and a _cluster character_ sending objects of \(\mathcal{C}\) to Laurent polynomials in several variables so that suitable objects of \(\mathcal{C}\) are sent to cluster variables in a cluster algebra. The key property that a cluster character satisfies is a _multiplication formula_ which recovers the exchange relations in a cluster algebra. Such formulas at various levels of generality have been obtained in a number of previous works. The main result of this paper is a multiplication formula generalizing most previously known ones in the following context. Let \(\mathcal{C}\) be a small Hom-finite Krull-Schmidt \(2\)-Calabi-Yau triangulated category over \(\mathbb{C}\), together with a basic cluster tilting object \(T\) (definitions are recalled in Section 2.1.1). Let \[CC_{T}:Obj(\mathcal{C})\to\mathbb{Z}[x_{1}^{\pm 1},\dots,x_{n}^{\pm 1}]\] be the corresponding cluster character (Definition 2.6). For any objects \(L\) and \(M\) of \(\mathcal{C}\), let \[\beta_{L,M}:\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)\times\operatorname{Hom}_{\mathcal{C}}(M,\Sigma L)\longrightarrow k\] be the non-degenerate bifunctorial bilinear form conferring to \(\mathcal{C}\) its \(2\)-Calabi-Yau structure. For an object \(Y\), let \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)_{\langle Y\rangle}\) be the set of those morphisms \(\varepsilon:L\to\Sigma M\) such that, if we have a triangle \[M\to Y^{\prime}\to L\xrightarrow{\varepsilon}\Sigma M,\] the objects \(Y\) and \(Y^{\prime}\) have the same index (see Definition 2.5) and for each dimension vector \(\mathbf{e}\), the submodule Grassmannians \(\operatorname{Gr}_{\mathbf{e}}(\operatorname{Hom}_{\mathcal{C}}(T,Y))\) and \(\operatorname{Gr}_{\mathbf{e}}(\operatorname{Hom}_{\mathcal{C}}(T,Y^{\prime}))\) have the same Euler characteristic. It is easy to check that this set is invariant under multiplication by a non-zero scalar. For a subset \(X\) of \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)\), let \(X_{\langle Y\rangle}\) be the intersection of \(X\) with \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)_{\langle Y\rangle}\). Let \(\mathcal{Y}_{L,M}\) be a set of representatives of equivalence classes for the equivalence relation defined by \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)_{\langle Y\rangle}=\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)_{\langle Y^{\prime}\rangle}\). Our main result is the following refined multiplication formula. **Theorem** (2.10).: _Let \(\mathcal{C}\) be a small \(\operatorname{Hom}\)-finite Krull-Schmidt \(2\)-Calabi-Yau triangulated category over \(\mathbb{C}\) with constructible cones (see Section 2.1.3) together with a basic cluster tilting object \(T\).
Let \(L\) and \(M\) be objects of \(\mathcal{C}\) such that \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)\) is non-zero. Finally, let \(V\) be a non-zero vector subspace of \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)\). Then_ \[\chi(\mathbb{P}V)CC_{T}(L)CC_{T}(M)=\sum_{Y\in\mathcal{Y}_{L,M}}\chi(\mathbb{ P}V_{\langle Y\rangle})CC_{T}(Y)+\sum_{Y\in\mathcal{Y}_{M,L}}\chi(\mathcal{R}_{ \langle Y\rangle})CC_{T}(Y),\] _where \(\mathcal{R}=\mathbb{P}\operatorname{Hom}_{\mathcal{C}}(M,\Sigma L)\setminus \mathbb{P}\mathrm{Ker}\,\beta_{L,M}(V,\mathcal{?})\)._ If \(V\) is the full space, then this formula specializes to the one proved in [10, Theorem 1.1]. Our main result also has a counterpart for exact categories. **Theorem** (3.6).: _Let \(\mathcal{E}\) be an \(\operatorname{Ext}\)-finite 2-Calabi-Yau Frobenius category with a cluster tilting object \(T\). Assume that the triangulated category \(\underline{\mathcal{E}}\) has constructible cones. Let \(L\) and \(M\) be two objects of \(\mathcal{E}\) such that \(\operatorname{Ext}^{1}_{\mathcal{E}}(L,M)\) is non-zero, and let \(V\) be a non-zero vector subspace of \(\operatorname{Ext}^{1}_{\mathcal{E}}(L,M)\). Then_ \[\chi(\mathbb{P}V)CC_{T}(L)CC_{T}(M)=\sum_{Y\in\mathcal{Y}_{L,M}}\chi(\mathbb{ P}V_{\langle Y\rangle})CC_{T}(Y)+\sum_{Y\in\mathcal{Y}_{M,L}}\chi(\mathcal{R}_{ \langle Y\rangle})CC_{T}(Y).\] This generalizes a result of [11]. We expect that the refined multiplication formula generalizes to the setting of suitable extriangulated categories such as the Higgs category of [20], in which the classical multiplication formula can be proved [12]. We apply our main results to the specialization of cluster variables to \(1\). Let \(Q\) be finite quiver without loops or \(2\)-cycles and let \(Q^{\prime}\) be the quiver obtained from \(Q\) by removing a vertex \(i\). Let \(\sigma\) be the specialization of \(x_{i}\) at \(1\). In the case where \(Q\) is mutation-equivalent to an acyclic quiver, it was proved in [1] that the image of the cluster algebra \(\mathcal{A}_{Q}\) by \(\sigma\) is contained in \(\mathcal{A}_{Q^{\prime}}\otimes_{\mathbb{Z}}\mathbb{Q}\). Using our refined multiplication formula, we can improve on this result. **Corollary** (4.4).: _Assume that \(Q\) is mutation-equivalent to an acyclic quiver. Then the image of the cluster algebra \(\mathcal{A}_{Q}\) by \(\sigma\) is \(\mathcal{A}_{Q^{\prime}}\)._ More generally, we have the following results. **Corollary** (4.3).: _Assume that the quiver \(Q\) admits a non-degenerate Jacobi-finite potential. If the upper cluster algebra \(\mathcal{U}_{Q^{\prime}}\) is equal to the cluster algebra \(\mathcal{A}_{Q^{\prime}}\), then \(\sigma(\mathcal{A}_{Q})=\mathcal{A}_{Q^{\prime}}\)._ **Corollary** (4.5).: _Assume that the quiver \(Q\) admits a non-degenerate Jacobi-finite potential. If the upper cluster algebra \(\mathcal{U}_{Q^{\prime}}\) is spanned by the cluster characters of objects of the associated generalized cluster category \(\mathcal{C}\), then \(\sigma(\mathcal{U}_{Q})=\mathcal{U}_{Q^{\prime}}\)._ Note that in the above results the variable that gets specialized to \(1\) is not frozen. Our formula also finds applications in the reduction of friezes. A _frieze_ is ring morphism \(f:\mathcal{A}_{Q}\to\mathbb{Z}\) that sends all cluster variables to positive integers. Friezes originated in work of Conway and Coxeter [13, 14], but have been vastly generalized using cluster algebras, see for instance [1, 1, 15] and the survey paper [16]. Our result on friezes is the following. 
**Corollary** (4.6).: _Let \(Q\) be an acyclic quiver without loops or \(2\)-cycles, let \(Q^{\prime}\) be the quiver obtained by removing the vertex \(i\) in \(Q\), and let \(\sigma:\mathcal{A}_{Q}\to\mathcal{A}_{Q^{\prime}}\) be the specialization of \(x_{i}\) to \(1\). Let \(f^{\prime}:\mathcal{A}_{Q^{\prime}}\to\mathbb{Z}\) be a frieze. Then there exists a unique frieze \(f:\mathcal{A}_{Q}\to\mathbb{Z}\) such that \(f^{\prime}\circ\sigma=f\)._ The non-trivial part of the above result is the existence. Finally, in Section 4.3, we give a new proof of a multiplication formula for Auslander-Reiten triangles first obtained in [15], and in Section 4.4, we obtain a formula reminiscent of the one stated in [DX]. ## 2. Refined multiplication formula: triangulated case ### Recollections on \(2\)-Calabi-Yau triangulated categories The setting in which the multiplication formula holds is that of Hom-finite, Krull-Schmidt, triangulated, \(2\)-Calabi-Yau categories with a cluster tilting object and constructible cones. The aim of this section is to recall the main definitions and properties of this setting. #### 2.1.1. \(2\)-Calabi-Yau categories Let \(\mathcal{C}\) be a small Hom-finite triangulated category over a field \(k\), with suspension functor \(\Sigma\). **Definition 2.1**.: The category \(\mathcal{C}\) is \(2\)_-Calabi-Yau_ if, for any objects \(L\) and \(M\) of \(\mathcal{C}\), it is equipped with a bilinear form \[\beta_{L,M}:\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)\times\operatorname{ Hom}_{\mathcal{C}}(M,\Sigma L)\longrightarrow k\] which is non-degenerate and bifunctorial. Here, bifunctorial means that if \(L\), \(M\), \(N\) and \(P\) are objects of \(\mathcal{C}\), and if \(\varepsilon\in\operatorname{Hom}_{\mathcal{C}}(M,\Sigma N)\), \(\eta\in\operatorname{Hom}_{\mathcal{C}}(N,\Sigma L)\), \(\delta\in\operatorname{Hom}_{\mathcal{C}}(P,\Sigma M)\), \(f\in\operatorname{Hom}_{\mathcal{C}}(L,M)\) and \(g\in\operatorname{Hom}_{\mathcal{C}}(N,P)\), then \[\beta_{L,N}(\varepsilon\circ f,\eta) = \beta_{M,N}(\varepsilon,\Sigma f\circ\eta)\quad\text{and}\] \[\beta_{M,P}(\Sigma g\circ\varepsilon,\delta) = \beta_{M,N}(\varepsilon,\delta\circ g).\] Equivalently, \(\mathcal{C}\) is \(2\)-Calabi-Yau if it is equipped with an isomorphism of bifunctors \[\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)\longrightarrow D\operatorname{Hom}_{ \mathcal{C}}(M,\Sigma L),\] where \(D=\operatorname{Hom}_{k}(?,k)\) is the usual duality for vector spaces. #### 2.1.2. Cluster-tilting objects and associated cluster characters Let \(\mathcal{C}\) be a Hom-finite \(2\)-Calabi-Yau triangulated category. **Definition 2.2** ([2]).: An object \(T\) of \(\mathcal{C}\) is a _cluster-tilting object_ if the following hold: 1. \(T\) is _rigid_, that is, the space \(\operatorname{Hom}_{\mathcal{C}}(T,\Sigma T)\) vanishes; 2. for any object \(X\) of \(\mathcal{C}\), if \(\operatorname{Hom}_{\mathcal{C}}(T,\Sigma X)\) vanishes, then \(X\) lies in \(\operatorname{add}T\) (that is, \(X\) is a direct factor of a direct sum of copies of \(T\)). We will usually assume that \(T\) is _basic_, and write \(T=T_{1}\oplus\ldots\oplus T_{n}\), where the \(T_{i}\)'s are pairwise non-isomorphic indecomposable objects. **Examples 2.3**.: 1. The cluster categories of [2] are triangulated Hom-finite \(2\)-Calabi-Yau categories with a cluster-tilting object. 2. The generalized cluster categories of [1] also have these properties. 3. The stable categories of all the Frobenius categories of Example 3.3 also have these properties. 
Cluster-tilting objects are essential in the categorification of cluster algebras via triangulated categories. This is done via _cluster characters_, whose definition we recall in Definition 2.6. **Proposition 2.4** ([11]).: _Let \(T\) be a basic cluster-tilting object of \(\mathcal{C}\)._ 1. _The functor_ \(F=\operatorname{Hom}_{\mathcal{C}}(T,\Sigma?)\) _induces an equivalence of categories_ \[\mathcal{C}/(T)\xrightarrow{F}\operatorname{mod}\,\operatorname{End}_{\mathcal{C}}(T),\] _where_ \((T)\) _is the ideal of all morphisms factoring through an object of_ \(\operatorname{add}T\)_._ 2. _Any object_ \(X\) _of_ \(\mathcal{C}\) _sits in a triangle_ \[T_{1}^{X}\to T_{0}^{X}\to X\to\Sigma T_{1}^{X},\] _where_ \(T_{1}^{X}\) _and_ \(T_{0}^{X}\) _lie in_ \(\operatorname{add}T\)_._ **Definition 2.5** ([11]).: Let \(T\) be a basic cluster-tilting object of \(\mathcal{C}\). The _index_ of an object \(X\) of \(\mathcal{C}\) _with respect to \(T\)_ is the element of the Grothendieck group \(K_{0}(\operatorname{add}T)\) defined by \[\operatorname{ind}_{T}X=[T_{0}^{X}]-[T_{1}^{X}],\] where \(T_{0}^{X}\) and \(T_{1}^{X}\) are as in Proposition 2.4(2). Note that, while the triangle in Proposition 2.4(2) is not unique, the index of \(X\) does not depend on the one we choose [10, Lemma 2.1]. Moreover, it was shown in [10] (in the proof of Lemma 1.3) that for any object \(X\) of \(\mathcal{C}\), the value of \(\operatorname{ind}_{T}\Sigma X+\operatorname{ind}_{T}X\) only depends on the dimension vector \(e\) of \(FX\). We will denote this value by \(\iota(e)\). Note that \(\iota\) extends to a linear map defined on all of \(\mathbb{Z}^{n}\). Let us now assume that the field \(k\) is the field \(\mathbb{C}\) of complex numbers. **Definition 2.6** ([14][21]).: Let \(T\) be a basic cluster-tilting object of \(\mathcal{C}\). The _cluster character associated with \(T\)_ is the map \[CC_{T}:Obj(\mathcal{C})\longrightarrow\mathbb{Q}(x_{1},\ldots,x_{n})\] defined by \[CC_{T}(M)=x^{\operatorname{ind}_{T}M}\sum_{e\in\mathbb{N}^{n}}\chi\Big{(}\operatorname{Gr}_{e}\bigl{(}FM\bigr{)}\Big{)}x^{-\iota(e)},\] where * \(n\) is the number of indecomposable direct factors of \(T\) in a decomposition \(T=\bigoplus_{i=1}^{n}T_{i}\); * \(x^{a}=x_{1}^{a_{1}}\cdots x_{n}^{a_{n}}\), for any \(a=\sum_{i=1}^{n}a_{i}[T_{i}]\in K_{0}(\operatorname{add}T)\); * \(\chi\) is the Euler characteristic for topological spaces; * \(FM=\operatorname{Hom}_{\mathcal{C}}(T,\Sigma M)\) is considered as a right module over \(\operatorname{End}_{\mathcal{C}}(T)\); * for any module \(R\), \(\operatorname{Gr}_{e}(R)\) is the submodule Grassmannian [14, Section 2.3], a projective variety whose points parametrize the submodules of \(R\) of dimension vector \(e\); * \(\iota(e)\) is as defined below Definition 2.5. #### 2.1.3. Constructible cones The coefficients in the multiplication formula are Euler characteristics of subsets of certain algebraic varieties. For the formula to be well-defined, we must ensure that the Euler characteristics of these subsets are well-defined integers. In [21], this is done by proving that the subsets in question are constructible. In order to do so, we need to assume that the category \(\mathcal{C}\)_has constructible cones_. Although we will not recall the definition of a category with constructible cones (and simply refer to [21, Section 1.3]), we will list the properties of such categories that we will need. 
Let \(\mathcal{C}\) be a \(\operatorname{Hom}\)-finite triangulated category with a basic cluster-tilting object \(T\). Fix two objects \(L\) and \(M\). For any object \(Y\) of \(\mathcal{C}\), let \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)_{\langle Y\rangle}\) be the subset of \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)\) of all morphisms \(\varepsilon\) such that if \[M\to Y^{\prime}\to L\stackrel{{\varepsilon}}{{\to}}\Sigma M\] is a triangle, then * \(\operatorname{ind}_{T}Y^{\prime}=\operatorname{ind}_{T}Y\), and * for all dimension vectors \(e\), we have \(\chi(\operatorname{Gr}_{e}(FY))=\chi(\operatorname{Gr}_{e}(FY^{\prime}))\). For any subset \(V\) of \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)\), let \(V_{\langle Y\rangle}\) be the intersection of \(V\) with \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)_{\langle Y\rangle}\). Note that the condition \[\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)_{\langle Y\rangle}=\operatorname{ Hom}_{\mathcal{C}}(L,\Sigma M)_{\langle Y^{\prime}\rangle}\] induces an equivalence relation on the set of objects of \(\mathcal{C}\). Let \(\mathcal{Y}_{L,M}\) be a set of representatives for this equivalence relation. **Proposition 2.7** (Proposition 2.8 of [21]).: _If \(\mathcal{C}\) has constructible cones, then_ \[\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)=\coprod_{Y\in\mathcal{Y}_{L,M}} \operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)_{\langle Y\rangle}\] _is a partition of \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)\) into a finite number of constructible subsets._ **Corollary 2.8**.: _If \(\mathcal{C}\) has constructible cones, and if \(V\) is a constructible subset of \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)\), then_ \[V=\coprod_{Y\in\mathcal{Y}_{L,M}}V_{\langle Y\rangle}\] _is a decomposition of \(V\) into a finite number of pairwise disjoint constructible subsets._ **Example 2.9**.: All the triangulated categories mentioned in this paper have constructible cones, thanks to these two facts proved in [11, Sections 2.4-2.5]: stable categories of \(\operatorname{Hom}\)-finite Frobenius categories and the generalized cluster categories of [1] have constructible cones. ### The refined multiplication formula This section is devoted to the proof of the refined multiplication formula. The proof follows the lines of [11] and relies heavily on results obtained there. For any vector space \(E\) and for any subset \(U\) which is stable by scalar multiplication, we denote by \(\mathbb{P}U\) the subset of the projective space \(\mathbb{P}E\) consisting of elements \([u]\) with \(u\in U\), where \([u]\) denotes the class of \(u\) in \(\mathbb{P}E\). **Theorem 2.10**.: _Let \(\mathcal{C}\) be a \(\operatorname{Hom}\)-finite Krull-Schmidt \(2\)-Calabi-Yau triangulated category over \(\mathbb{C}\) with constructible cones and admitting a basic cluster-tilting object \(T\). Let \(L\) and \(M\) be two objects of \(\mathcal{C}\), and let \(V\) be a non-zero vector subspace of \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)\). Then the following equality holds:_ \[\chi(\mathbb{P}V)CC_{T}(L)CC_{T}(M)=\sum_{Y\in\mathcal{Y}_{L,M}}\chi(\mathbb{P }V_{\langle Y\rangle})CC_{T}(Y)+\sum_{Y\in\mathcal{Y}_{M,L}}\chi(\mathcal{R}_{ \langle Y\rangle})CC_{T}(Y),\] _where \(\mathcal{R}=\mathbb{P}\operatorname{Hom}_{\mathcal{C}}(M,\Sigma L)\setminus \mathbb{P}\mathrm{Ker}\,\beta_{L,M}(V,?)\)._ **Remark 2.11**.: If \(V\) is the whole space \(\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)\), then the formula recovers that of Y. Palu [11, Theorem 1.1]. 
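At the other extreme, if \(V\) is one-dimensional, spanned by a morphism \(\varepsilon\) sitting in a triangle \(M\to E\to L\xrightarrow{\varepsilon}\Sigma M\), then \(\mathbb{P}V\) is a single point contained in \(\mathbb{P}V_{\langle E\rangle}\), so that \(\chi(\mathbb{P}V)=1\) and the formula reduces to \[CC_{T}(L)CC_{T}(M)=CC_{T}(E)+\sum_{Y\in\mathcal{Y}_{M,L}}\chi(\mathcal{R}_{\langle Y\rangle})CC_{T}(Y);\] this one-dimensional case is the form that is used in the proof of Theorem 4.1 below.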
We assume for the rest of this section that \(\mathcal{C}\) has constructible cones. The first step in proving the formula is to replace \(CC_{T}(L)\) and \(CC_{T}(M)\) by their definitions in the left-hand side of the formula. Doing this, we get \[\chi(\mathbb{P}V)CC_{T}(L)CC_{T}(M)\] \[=x^{\operatorname{ind}_{T}(L\oplus M)}\sum_{e,f}\chi\big{(}\mathbb{P}V\times\mathbb{G}_{e}(FL)\times\mathbb{G}_{f}(FM)\big{)}x^{-\iota(e+f)}.\] We will refine this sum by replacing \(\mathbb{P}V\times\mathbb{G}_{e}(FL)\times\mathbb{G}_{f}(FM)\) by another constructible set with the same Euler characteristic. Let us construct this set. Define \(W^{V}_{L,M}\) to be the subset of \(\mathbb{P}V\times\coprod_{d,g}\prod_{i=1}^{n}\operatorname{Gr}_{g_{i}}(\mathbb{C}^{d_{i}})\) consisting of pairs \(([\varepsilon],E)\) where \(E\) is a subrepresentation of \(FY\), where \(Y\) is the middle term of a triangle \[M\xrightarrow{i}Y\xrightarrow{p}L\xrightarrow{\varepsilon}\Sigma M.\] Furthermore, define \[W^{V}_{L,M}(e,f,g)=\{([\varepsilon],E)\in W^{V}_{L,M}\ |\ \underline{\dim}\,E=g,\underline{\dim}\,Fp(E)=e,\underline{\dim}\,Fi^{-1}(E)=f\};\] \[W^{V}_{L,M}(e,f)=\{([\varepsilon],E)\in W^{V}_{L,M}\ |\ \underline{\dim}\,Fp(E)=e,\underline{\dim}\,Fi^{-1}(E)=f\};\] \[W^{V,Y}_{L,M}(e,f,g)=\{([\varepsilon],E)\in W^{V}_{L,M}\ |\ \varepsilon\in\mathbb{P}V_{\langle Y\rangle},\underline{\dim}\,E=g,\underline{\dim}\,Fp(E)=e,\underline{\dim}\,Fi^{-1}(E)=f\};\] \[W^{V,Y}_{L,M}(e,f)=\{([\varepsilon],E)\in W^{V}_{L,M}\ |\ \varepsilon\in\mathbb{P}V_{\langle Y\rangle},\underline{\dim}\,Fp(E)=e,\underline{\dim}\,Fi^{-1}(E)=f\}.\] Then \(W^{V}_{L,M}\) and all the sets defined above are finite disjoint unions of subsets of the form \(W^{V,Y}_{L,M}(e,f,g)\). Moreover, since we assumed that \(\mathcal{C}\) has constructible cones, then the results of Y. Palu give us the following. **Lemma 2.12** (Lemma 3.1 of [10]).: _The sets \(W^{V,Y}_{L,M}(e,f,g)\), \(W^{V,Y}_{L,M}(e,f)\), \(W^{V}_{L,M}(e,f)\), \(W^{V}_{L,M}(e,f,g)\) and \(W^{V}_{L,M}\) are constructible._ Proof. In [10, Lemma 3.1], it is shown that certain sets \(W^{Y}_{LM}(e,f,g)\) are constructible. Our sets \(W^{V,Y}_{L,M}(e,f,g)\) are the intersection of these \(W^{Y}_{LM}(e,f,g)\) with \(\mathbb{P}V\times\coprod_{d,g}\prod_{i=1}^{n}\mathrm{Gr}_{g_{i}}(\mathbb{C}^{d_{i}})\); thus they are constructible. Since all the other sets are finite unions of sets of the form \(W^{V,Y}_{L,M}(e,f,g)\), they must also be constructible. Now, consider the constructible map \[\Psi_{L,M}(e,f):W^{V}_{L,M}(e,f) \longrightarrow \mathbb{P}V\times\mathrm{Gr}_{e}(FL)\times\mathrm{Gr}_{f}(FM)\] \[([\varepsilon],E) \longmapsto ([\varepsilon],Fp(E),Fi^{-1}(E)).\] Let \(L^{V}_{1}(e,f)\) be the image of this map; let \(L^{V}_{2}(e,f)\) be the complement of the image. Then \(\chi(\mathbb{P}V\times\mathrm{Gr}_{e}(FL)\times\mathrm{Gr}_{f}(FM))=\chi(L^{V}_{1}(e,f))+\chi(L^{V}_{2}(e,f))\). Therefore our equation becomes \[(\star) \chi(\mathbb{P}V)CC_{T}(L)CC_{T}(M) = x^{\mathrm{ind}_{T}(L\oplus M)}\sum_{e,f}\chi\big{(}L^{V}_{1}(e,f)\big{)}x^{-\iota(e+f)}\] \[+x^{\mathrm{ind}_{T}(L\oplus M)}\sum_{e,f}\chi\big{(}L^{V}_{2}(e,f)\big{)}x^{-\iota(e+f)}.\] We will now study the two terms of the right-hand side of \((\star)\). _The first term of the RHS of (\(\star\))._ **Lemma 2.13**.: _We have an equality_ \[x^{\mathrm{ind}_{T}(L\oplus M)}\sum_{e,f}\chi\big{(}L^{V}_{1}(e,f)\big{)}x^{-\iota(e+f)}=\sum_{Y\in\mathcal{Y}_{L,M}}\chi(\mathbb{P}V_{\langle Y\rangle})CC_{T}(Y).\] Proof. 
It is proved in [11] (see also Section 3 of [10]) that the fibers of \(\Psi_{L,M}(e,f)\) are affine spaces. As a consequence, we have that \(\chi(L^{V}_{1}(e,f))=\chi(W^{V}_{L,M}(e,f))\). Thus \[x^{\mathrm{ind}_{T}(L\oplus M)}\sum_{e,f}\chi\big{(}L^{V}_{1}(e, f)\big{)}x^{-\iota(e+f)} = x^{\mathrm{ind}_{T}(L\oplus M)}\sum_{e,f}\chi\big{(}W^{V}_{L,M}( e,f)\big{)}x^{-\iota(e+f)}\] \[= \sum_{e,f,g,\langle Y\rangle}\chi\big{(}W^{V,Y}_{L,M}(e,f,g)\big{)} x^{-\iota(e+f)+\mathrm{ind}_{T}(L\oplus M)}.\] Now, by [12, Lemma 5.1], if \(([\varepsilon],E)\) lies in \(W^{V,Y}_{L,M}(e,f,g)\), then it implies that \(\operatorname{ind}_{T}(L\oplus M)-\iota(e+f)=\operatorname{ind}_{T}(Y)-\iota(g)\). Moreover, for a fixed \(g\), consider the map \[\coprod_{e,f}W^{V,Y}_{L,M}(e,f,g)\longrightarrow\mathbb{P}V_{\langle Y\rangle}\] sending a pair \(([\varepsilon],E)\) to \([\varepsilon]\). This map is obviously surjective if the left-hand side is non-empty. Moreover, the preimage of any \([\varepsilon^{\prime}]\) is isomorphic to \(\{[\varepsilon^{\prime}]\}\times\operatorname{Gr}_{g}(FY^{\prime})\), where \(Y^{\prime}\) sits in a triangle \(M\to Y^{\prime}\to L\xrightarrow{\varepsilon^{\prime}}\Sigma M\). By definition of \(\mathbb{P}V_{\langle Y\rangle}\), the Euler characteristic of all the fibers is the same and is equal to \(\chi(\operatorname{Gr}_{g}(FY))\). Thus \[\chi(\coprod_{e,f}W^{V,Y}_{L,M}(e,f,g))=\chi(\operatorname{Gr}_{g}(FY))\chi( \mathbb{P}V_{\langle Y\rangle}).\] So the sum becomes \[... = \sum_{e,f,g,Y}\chi\big{(}W^{V,Y}_{L,M}(e,f,g)\big{)}x^{-\iota(g)+ \operatorname{ind}_{T}(Y)}\] \[= \sum_{g,Y}\chi(\coprod_{e,f}W^{V,Y}_{L,M}(e,f,g)\big{)}x^{-\iota (g)+\operatorname{ind}_{T}(Y)}\] \[= \sum_{g,Y}\chi(\operatorname{Gr}_{g}(FY))\chi(\mathbb{P}V_{ \langle Y\rangle})x^{-\iota(g)+\operatorname{ind}_{T}(Y)}\] \[= \sum_{Y\in\mathcal{Y}_{L,M}}\chi(\mathbb{P}V_{\langle Y\rangle}) CC_{T}(Y).\] This finishes the proof of the lemma. _The second term of the RHS of (\(\star\))._ **Lemma 2.14**.: _We have an equality_ \[x^{\operatorname{ind}_{T}(L\oplus M)}\sum_{e,f}\chi\big{(}L_{2}^{V}(e,f)\big{)} x^{-\iota(e+f)}=\sum_{Y\in\mathcal{Y}_{M,L}}\chi(\mathcal{R}_{\langle Y\rangle}) CC_{T}(Y).\] Proof.: Recall that \[\mathcal{R}=\{[\eta]\in\mathbb{P}\operatorname{Hom}_{\mathcal{C}}(M,\Sigma L) \ |\ \exists\varepsilon\in V\ \text{with}\ \beta_{L,M}(\varepsilon,\eta)\neq 0\}.\] Define \(W^{\mathbb{P}\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M),Y}_{M,L}(f,e,g)\) as before Lemma 2.12 and let \(W^{\mathcal{R},Y}_{M,L}(f,e,g)\) be the constructible subset of all pairs \(([\eta],E)\) with \(\eta\in\mathcal{R}\). For fixed \(e\), \(f\) and \(g\), let \(C^{\mathcal{R},Y}_{L,M}(e,f,g)\) be the subset of \(L_{2}^{V}(e,f)\times W^{\mathcal{R},Y}_{M,L}(f,e,g)\) consisting of pairs \(\big{(}([\varepsilon],R,S),([\eta],E)\big{)}\) such that \(\beta_{L,M}(\varepsilon,\eta)\neq 0\), \(Fi^{-1}(E)=S\) and \(Fp(E)=R\). Finally, let \(C^{\mathcal{R}}_{L,M}(e,f)=\coprod_{g,Y\in\mathcal{Y}_{M,L}}C^{\mathcal{R},Y}_{ L,M}(e,f,g)\). Consider the two projections \[C^{\mathcal{R}}_{L,M}(e,f) \xrightarrow{p_{1}} L^{V}_{2}(e,f)\] \[C^{\mathcal{R},Y}_{L,M}(e,f,g) \xrightarrow{p_{2}} W^{\mathcal{R},Y}_{M,L}(f,e,g).\] By [12, Proposition 3.3], \(p_{1}\) and \(p_{2}\) are surjective. Moreover, by [12, Proposition 3.4], the fibers of \(p_{1}\) are extensions of affine spaces, and those of \(p_{2}\) are affine spaces. 
Therefore \(\chi(C^{\mathcal{R}}_{L,M}(e,f))=\chi(L^{V}_{2}(e,f))\) and \(\chi(C^{\mathcal{R},Y}_{L,M}(e,f,g))=\chi(W^{\mathcal{R},Y}_{M,L}(f,e,g))\). Thus the left-hand side in the statement is equal to \[... = x^{\operatorname{ind}_{T}(L\oplus M)}\sum_{e,f}\chi(L^{V}_{2}(e,f))x^{-\iota(e+f)}\] \[= \sum_{e,f}\chi(C^{\mathcal{R}}_{L,M}(e,f))x^{\operatorname{ind}_{T}(L\oplus M)-\iota(e+f)}\] \[= \sum_{e,f,g,Y}\chi(C^{\mathcal{R},Y}_{L,M}(e,f,g))x^{\operatorname{ind}_{T}(L\oplus M)-\iota(e+f)}\] \[= \sum_{e,f,g,Y}\chi(W^{\mathcal{R},Y}_{M,L}(f,e,g))x^{\operatorname{ind}_{T}(L\oplus M)-\iota(e+f)}.\] Again, by [10, Lemma 5.1], we have that if \(([\varepsilon],E)\) lies in \(W^{\mathcal{R},Y}_{M,L}(f,e,g)\), then \(\operatorname{ind}_{T}(L\oplus M)-\iota(e+f)=\operatorname{ind}_{T}(Y)-\iota(g)\). Moreover, the map \[\coprod_{e,f}W^{\mathcal{R},Y}_{M,L}(f,e,g)\longrightarrow\mathcal{R}_{\langle Y\rangle}\] sending \(([\varepsilon],E)\) to \([\varepsilon]\) is surjective (if the left-hand side is non-empty), and its fibers have the form \(\{[\varepsilon^{\prime}]\}\times\operatorname{Gr}_{g}(FY^{\prime})\), where \(Y^{\prime}\) sits in a triangle \(L\to Y^{\prime}\to M\xrightarrow{\varepsilon^{\prime}}\Sigma L\). Thus \[\chi(\coprod_{e,f}W^{\mathcal{R},Y}_{M,L}(f,e,g))=\chi(\mathcal{R}_{\langle Y\rangle})\chi(\operatorname{Gr}_{g}(FY)).\] Therefore the above sequence of equalities continues: \[\ldots = \sum_{e,f,g,Y}\chi(W^{\mathcal{R},Y}_{M,L}(f,e,g))x^{\operatorname{ind}_{T}Y-\iota(g)}\] \[= \sum_{g,Y}\chi(\coprod_{e,f}W^{\mathcal{R},Y}_{M,L}(f,e,g))x^{\operatorname{ind}_{T}Y-\iota(g)}\] \[= \sum_{g,Y}\chi(\mathcal{R}_{\langle Y\rangle})\chi(\operatorname{Gr}_{g}(FY))x^{\operatorname{ind}_{T}Y-\iota(g)}\] \[= \sum_{Y\in\mathcal{Y}_{M,L}}\chi(\mathcal{R}_{\langle Y\rangle})CC_{T}(Y).\] This finishes the proof. Theorem 2.10 then follows directly from Lemma 2.13 and Lemma 2.14. ## 3. Refined multiplication formula: Frobenius case We will now follow the ideas of [10] (see also [10, Section 4]). The main difference will be that our Frobenius categories can be Hom-infinite; we will only assume that they are Ext-finite. We will also assume that they are Krull-Schmidt, and that their stable categories have constructible cones. Since the proofs are very similar to the ones in the triangulated case, in this section we only provide a detailed outline for the Frobenius case. ### Recollections on Frobenius categories #### 3.1.1. 2-Calabi-Yau Frobenius categories A _Frobenius category_ is an exact category \(\mathcal{E}\) in the sense of Quillen with enough projectives and enough injectives, in which projectives and injectives coincide. It is _\(\operatorname{Ext}\)-finite_ if for any objects \(X\) and \(Y\) of \(\mathcal{E}\), the space \(\operatorname{Ext}^{1}_{\mathcal{E}}(X,Y)\) is finite-dimensional. It was proved in [10, Theorem 9.4] that if \(\mathcal{E}\) is a Frobenius category, then its stable category \(\underline{\mathcal{E}}\) is triangulated (\(\underline{\mathcal{E}}\) is the quotient of \(\mathcal{E}\) by the ideal of all morphisms factoring through a projective-injective object). Note that if \(\mathcal{E}\) is \(\operatorname{Ext}\)-finite, then \(\underline{\mathcal{E}}\) is Hom-finite. 
**Definition 3.1** (Section 2.7 of [10]).: An \(\operatorname{Ext}\)-finite Frobenius category is _2-Calabi-Yau_ if its stable category is 2-Calabi-Yau as a triangulated category. #### 3.1.2. Cluster-tilting objects Let \(\mathcal{E}\) be an \(\operatorname{Ext}\)-finite Krull-Schmidt 2-Calabi-Yau Frobenius category. **Definition 3.2** (Section 2.7 of [10]).: An object \(T\) of \(\mathcal{E}\) is a _cluster-tilting object_ if 1. \(T\) is rigid, that is, the space \(\operatorname{Ext}^{1}_{\mathcal{E}}(T,T)\) vanishes; 2. for any object \(X\) of \(\mathcal{E}\), if \(\operatorname{Ext}^{1}_{\mathcal{E}}(T,X)\) vanishes, then \(X\in\operatorname{add}T\); and 3. each object \(X\) of \(\mathcal{E}\) admits a right \(\operatorname{add}T\)-approximation \(T^{X}\to X\) and a left \(\operatorname{add}T\)-approximation \(X\to T_{X}\) (in other words, the functors \(\operatorname{Hom}_{\mathcal{E}}(X,?)|_{\operatorname{add}T}\) and \(\operatorname{Hom}_{\mathcal{E}}(?,X)|_{(\operatorname{add}T)^{op}}\) are finitely generated). Note that if \(T\) is a cluster-tilting object, then every indecomposable projective-injective object is isomorphic to a direct summand of \(T\). **Examples 3.3**.: 1. The module category of a preprojective algebra of Dynkin type is a Hom-finite stably 2-Calabi-Yau Frobenius category with a cluster tilting object. It was used in [11] in the categorification of cluster algebras. Its stable category has constructible cones by [13, Section 2.4]. 2. More generally, subcategories \(\mathcal{C}_{w}\) of modules over preprojective algebras were used in [1, 12] to categorify cluster algebras. These categories have the same properties as those of the previous example. 3. Let \(0<k<n\) be integers, and put \(\hat{R}=\mathbb{C}[[x,y]]/(x^{k}-y^{n-k})\). The group \(G=\langle\zeta\rangle\) of \(n\)-th roots of unity acts on \(\hat{R}\) by \(\zeta.x=\zeta x\) and \(\zeta.y=\zeta^{-1}y\). Then the category \(\mathcal{E}=CM_{G}(\hat{R})\) of \(G\)-equivariant Cohen-Macaulay \(\hat{R}\)-modules is a (not necessarily Hom-finite) \(\operatorname{Ext}\)-finite Frobenius category with a cluster tilting object. Its stable category has constructible cones, since it is equivalent to categories from the previous example. This was used in [12] to give a categorification of the cluster algebra structure of the homogeneous coordinate ring \(\mathbb{C}[G_{k,n}]\) of the Grassmannian \(G_{k,n}\). Let \(T\) be a basic cluster-tilting object of \(\mathcal{E}\). Write \(T=T_{1}\oplus\ldots\oplus T_{n}\), where each \(T_{i}\) is indecomposable, and let \(C=\operatorname{End}_{\mathcal{E}}(T)\). Since \(T\) is cluster-tilting, we have a functor \[F=\operatorname{Hom}_{\mathcal{E}}(T,?):\mathcal{E}\longrightarrow\operatorname{f.g.mod}C,\] where \(\operatorname{f.g.mod}C\) is the category of finitely generated right \(C\)-modules. For any finitely generated \(C\)-modules \(L\) and \(N\) such that \(N\) is finite-dimensional, define \[\langle L,N\rangle_{\tau}=\dim\operatorname{Hom}_{C}(L,N)-\dim\operatorname{Ext}^{1}_{C}(L,N),\] 
As in [10, Section 4] and [10, Section 3], we view \(\underline{C}\)-modules as \(C\)-modules with no composition factors isomorphic to the simple modules corresponding to the projective-injective direct summands of \(T\). **Proposition 3.4** (Proposition 3.2 of [10]).: _If \(L\) and \(N\) are finite-dimensional \(\underline{C}\)-modules of the same dimension vector, then for any finite-dimensional \(C\)-module \(Y\), we have that_ \[\langle L,Y\rangle_{3}=\langle N,Y\rangle_{3}.\] In view of this proposition, if \(e\) is a dimension vector, we can write \(\langle e,Y\rangle_{3}\) for the value of \(\langle L,Y\rangle_{3}\) for any \(\underline{C}\)-module \(L\) of dimension vector \(e\). **Definition 3.5** ([10]).: The _cluster character associated with \(T\)_ is the map \[CC_{T}:Obj(\mathcal{E})\longrightarrow\mathbb{Q}(x_{1},\dots,x_{n})\] defined by \[CC_{T}(M)=\prod_{i=1}^{n}x_{i}^{\langle FM,S_{i}\rangle_{\tau}}\sum_{e}\chi \big{(}\mathrm{Gr}_{e}(\operatorname{Ext}_{\underline{\mathcal{E}}}^{1}(T,M)) \big{)}\prod_{i=1}^{n}x_{i}^{\langle e,S_{i}\rangle_{3}}.\] ### The formula **Theorem 3.6**.: _Let \(\mathcal{E}\) be a \(\operatorname{Hom}\)-finite 2-Calabi-Yau Frobenius category with a cluster tilting object \(T\). Assume that the triangulated category \(\underline{\mathcal{E}}\) has constructible cones. Let \(L\) and \(M\) be two objects of \(\mathcal{E}\), and let \(V\) be a vector subspace of \(\operatorname{Ext}_{\mathcal{E}}^{1}(L,M)\). Then_ \[\chi(\mathbb{P}V)CC_{T}(L)CC_{T}(M)=\sum_{Y\in\mathcal{Y}_{L,M}}\chi(\mathbb{ P}V_{\langle Y\rangle})CC_{T}(Y)+\sum_{Y\in\mathcal{Y}_{M,L}}\chi(\mathcal{R} _{\langle Y\rangle})CC_{T}(Y).\] The proof follows the lines of that of [11, Theorem 4.1]; it is similar to that of Theorem 2.10, but uses [10, Lemma 3.4] instead of [11, Lemma 5.1]. ## 4. Applications ### Specialization of cluster variables in cluster algebras Let \(\mathcal{C}\) be a Hom-finite 2-Calabi-Yau triangulated category with a basic cluster tilting object \(T=\bigoplus_{i=1}^{n}T_{i}\). Following [11], let the _Caldero-Chapoton algebra_\(\mathcal{A}_{\mathcal{C}}\) be the subring of the ring \(\mathbb{Z}[x_{1}^{\pm 1},\dots,x_{n}^{\pm 1}]\) generated by the set of all \(CC_{T}(X)\), as \(X\) spans all objects of \(\mathcal{C}\). Motivated by the reduction of friezes (see Section 4.2) and the study of morphisms of rooted cluster algebras of [1], we wish to study the algebra obtained from \(\mathcal{A}_{\mathcal{C}}\) by specializing \(x_{n}\) to \(1\). To fix notation, let \[\sigma:\mathbb{Z}[x_{1}^{\pm 1},\dots,x_{n}^{\pm 1}]\to\mathbb{Z}[x_{1}^{\pm 1 },\dots,x_{n-1}^{\pm 1}]\] be the morphism sending each of \(x_{1},\dots,x_{n-1}\) to itself and sending \(x_{n}\) to \(1\). The main result of this section is stated in terms of Calabi-Yau reduction: it was proved in [10] that the category \[\mathcal{C}^{\prime}=\left(\Sigma^{-1}T_{n}\right)^{\perp}/\left(T_{n}\right)\] is a Hom-finite 2-Calabi-Yau triangulated category with a basic cluster tilting object \(T^{\prime}\), where \(T^{\prime}\) is the image of \(T\) under the projection functor. **Theorem 4.1**.: _Let \(\mathcal{C}\) be a Hom-finite 2-Calabi-Yau triangulated category with constructible cones and a basic cluster tilting object \(T=\bigoplus_{i=1}^{n}T_{i}\). Then \(\sigma(\mathcal{A}_{\mathcal{C}})=\mathcal{A}_{\mathcal{C}^{\prime}}\)._ **Remark 4.2**.: The proof of [1, Theorem 6.13] shows that \(\sigma(\mathcal{A}_{\mathcal{C}})\subseteq\mathcal{A}_{\mathcal{C}^{\prime}} \otimes_{\mathbb{Z}}\mathbb{Q}\). 
It uses the multiplication formula of Palu [11]. Our proof of Theorem 4.1 follows the same lines using our refined multiplication formula (Theorem 2.10) instead. We nonetheless include the complete argument below. Proof.: (of Theorem 4.1). It suffices to prove that \(\sigma\left(CC_{T}(X)\right)\in\mathcal{A}_{\mathcal{C}^{\prime}}\) for all objects \(X\) of \(\mathcal{C}\). The proof is by induction on the dimension of the space \(\operatorname{Hom}_{\mathcal{C}}(T_{n},\Sigma X)\). Assume first that \(\dim\operatorname{Hom}_{\mathcal{C}}(T_{n},\Sigma X)=0\). Then \(X\in(\Sigma^{-1}T_{n})^{\perp}\). Denote by \(\pi\) the projection \[\pi:\left(\Sigma^{-1}T_{n}\right)^{\perp}\to\mathcal{C}^{\prime}.\] Then \(\sigma\left(CC_{T}(X)\right)=CC_{T^{\prime}}(\pi X)\in\mathcal{A}_{\mathcal{C}^{\prime}}\). Note that this implies that \(\mathcal{A}_{\mathcal{C}^{\prime}}\subseteq\sigma(\mathcal{A}_{\mathcal{C}})\), since all objects of \(\mathcal{C}^{\prime}\) have the form \(\pi(X)\) with \(X\) an object of \(\mathcal{C}\). Assume now that \(\dim\operatorname{Hom}_{\mathcal{C}}(T_{n},\Sigma X)=d>0\). Choose any non-split triangle \[X\xrightarrow{}E\xrightarrow{}T_{n}\xrightarrow{\xi}\Sigma X\] and let \(V\) be the span of \(\xi\) in \(\operatorname{Hom}_{\mathcal{C}}(T_{n},\Sigma X)\). Applying Theorem 2.10, we get that \[CC_{T}(X)CC_{T}(T_{n})=CC_{T}(E)+\sum_{Y\in\mathcal{Y}_{X,T_{n}}}\chi\big{(}\mathcal{R}_{\langle Y\rangle}\big{)}CC_{T}(Y).\] Since \(CC_{T}(T_{n})=x_{n}\), applying the specialization \(\sigma\) to the left-hand side yields \(\sigma\left(CC_{T}(X)\right)\). Since all \(\chi\big{(}\mathcal{R}_{\langle Y\rangle}\big{)}\) are integers, it thus suffices to prove that \(CC_{T}(E)\) and all \(CC_{T}(Y)\) on the right-hand side are in \(\mathcal{A}_{\mathcal{C}^{\prime}}\). We do this by showing that the dimensions of \(\operatorname{Hom}_{\mathcal{C}}(T_{n},\Sigma E)\) and \(\operatorname{Hom}_{\mathcal{C}}(T_{n},\Sigma Y)\) are strictly smaller than \(d\) and by applying induction. To see this, first apply the functor \(\operatorname{Hom}_{\mathcal{C}}(T_{n},?)\) to the triangle defined by \(\xi\). We obtain an exact sequence \[(T_{n},T_{n})\xrightarrow{\xi_{*}}(T_{n},\Sigma X)\xrightarrow{f}(T_{n},\Sigma E)\xrightarrow{}(T_{n},\Sigma T_{n}),\] where we write \((U,V)\) instead of \(\operatorname{Hom}_{\mathcal{C}}(U,V)\) to save space. Since \(T_{n}\) is rigid, \((T_{n},\Sigma T_{n})\) vanishes, so \(f\) is surjective; thus, \[(T_{n},\Sigma E)\cong(T_{n},\Sigma X)/\xi_{*}\big{(}(T_{n},T_{n})\big{)}.\] Lastly, \(\xi_{*}\big{(}(T_{n},T_{n})\big{)}\) is non-zero, since it contains \(\xi_{*}(id_{T_{n}})=\xi\). Therefore, \[\dim(T_{n},\Sigma E)<\dim(T_{n},\Sigma X)=d,\] and by induction, \(\sigma\left(CC_{T}(E)\right)\in\mathcal{A}_{\mathcal{C}^{\prime}}\). Now let \(Y\in\mathcal{Y}_{T_{n},X}\). By definition, there exists a non-split triangle \[T_{n}\xrightarrow{}Y\xrightarrow{}X\xrightarrow{\delta}\Sigma T_{n}.\] Applying the functor \(\operatorname{Hom}_{\mathcal{C}}(?,T)\) and repeating the above argument, we get that \[\dim(Y,\Sigma T_{n})<\dim(X,\Sigma T_{n})=d.\] By the \(2\)-Calabi-Yau property, \(\dim(Y,\Sigma T_{n})=\dim(T_{n},\Sigma Y)\). Thus, by induction, we also have that \(\sigma\left(CC_{T}(Y)\right)\in\mathcal{A}_{\mathcal{C}^{\prime}}\). This finishes the proof. Theorem 4.1 has an interesting application to cluster algebras. 
**Corollary 4.3**.: _Let \(Q\) be a quiver without loops or \(2\)-cycles, \(i\) be a vertex of \(Q\) and \(Q^{\prime}\) be the quiver obtained from \(Q\) by removing the vertex \(i\). Assume that there exists a non-degenerate potential \(W\) such that the generalized cluster category \(\mathcal{C}_{Q,W}\) is \(\operatorname{Hom}\)-finite._ _If the cluster algebra \(\mathcal{A}_{Q^{\prime}}\) is equal to its upper cluster algebra \(\mathcal{U}_{Q^{\prime}}\) (see [1] for details), then the specialization \(\sigma\) sending \(x_{i}\) to \(1\) satisfies \(\sigma(\mathcal{A}_{Q})=\mathcal{A}_{Q^{\prime}}\)._ Proof.: Let \(\mathcal{C}=\mathcal{C}_{Q,W}\) and \(\mathcal{C}^{\prime}=\mathcal{C}_{Q^{\prime},W^{\prime}}\), where \(W^{\prime}\) is the potential obtained by removing the terms of \(W\) involving the vertex \(i\). We know from [10] that \(\mathcal{A}_{Q}\subset\mathcal{A}_{\mathcal{C}}\), and from [10, Corollary 4.14] that \(\mathcal{A}_{\mathcal{C}}\subset\mathcal{U}_{Q}\). The same is true if we replace \(Q\) with \(Q^{\prime}\); thus, by our assumption, \(\mathcal{A}_{Q^{\prime}}=\mathcal{A}_{\mathcal{C}^{\prime}}=\mathcal{U}_{Q^{\prime}}\). Applying \(\sigma\), we get that \(\mathcal{A}_{Q^{\prime}}\subset\sigma(\mathcal{A}_{Q})\subset\sigma(\mathcal{A}_{\mathcal{C}})\), and this last set is \(\mathcal{A}_{\mathcal{C}^{\prime}}\) by Theorem 4.1. This finishes the proof. **Corollary 4.4**.: _If \(Q\) is mutation-equivalent to an acyclic quiver, then we have that \(\sigma(\mathcal{A}_{Q})=\mathcal{A}_{Q^{\prime}}\)._ **Corollary 4.5**.: _Assume that the quiver \(Q\) admits a non-degenerate Jacobi-finite potential, and let \(\mathcal{C}\) be its generalized cluster category. If the upper cluster algebra \(\mathcal{U}_{Q^{\prime}}\) is spanned by the cluster characters of some objects in \(\mathcal{C}\), then we have that \(\sigma(\mathcal{U}_{Q})=\mathcal{U}_{Q^{\prime}}\)._ Proof.: Notice that we have \(\mathcal{A}_{\mathcal{C}^{\prime}}\subset\mathcal{U}_{Q^{\prime}}\) and \(\mathcal{A}_{\mathcal{C}}\subset\mathcal{U}_{Q}\) by the universal Laurent property of cluster characters. Since \(\mathcal{U}_{Q^{\prime}}\) is spanned by some cluster characters, we have \(\mathcal{U}_{Q^{\prime}}\subset\mathcal{A}_{\mathcal{C}^{\prime}}\). Consequently, Theorem 4.1 implies \(\mathcal{U}_{Q^{\prime}}=\mathcal{A}_{\mathcal{C}^{\prime}}=\sigma(\mathcal{A}_{\mathcal{C}})\subset\sigma(\mathcal{U}_{Q})\). Notice that every cluster for \(Q^{\prime}\) (see [1]) is the image of some cluster for \(Q\) under \(\sigma\). So we have \(\sigma(\mathcal{U}_{Q})\subset\mathcal{U}_{Q^{\prime}}\). The desired claim follows. Many upper cluster algebras are known to possess a generic basis, see [12, 13, 14]. They satisfy the assumption in Corollary 4.5. ### Reduction of friezes Let \(Q\) be a quiver without loops or \(2\)-cycles. A _frieze_ is a morphism of rings \(f:\mathcal{A}_{Q}\to\mathbb{Z}\) sending every cluster variable of \(\mathcal{A}_{Q}\) to a positive integer. This definition generalizes the original one of Conway and Coxeter [11], and has been an area of active interest in recent years. In [1, Section 5], an operation of reduction on friezes is considered. The purpose of this section is to show that this reduction operation can be "reversed" by adding a \(1\) in a frieze. 
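Before the precise statement, the reduction can be made concrete in type \(A_{2}\) with a minimal computational sketch (an illustration only, not taken from the text; it assumes Python with sympy). The five cluster variables are generated by the recurrence \(f_{n+1}=(1+f_{n})/f_{n-1}\); specializing \(x_{2}\) to \(1\) sends them into the cluster algebra of the quiver with that vertex removed, and specializing both initial variables to \(1\) produces the positive integers \(1,1,2,3,2\) of a Conway-Coxeter frieze.

```python
# Minimal sketch (illustration only): type A_2 cluster variables and the
# specialization sigma at x2 = 1, cf. Corollaries 4.4 and 4.6.
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)

# The recurrence f_{n+1} = (1 + f_n)/f_{n-1} generates all five cluster
# variables of type A_2 starting from the initial cluster (x1, x2).
f = [x1, x2]
for _ in range(3):
    f.append(sp.cancel((1 + f[-1]) / f[-2]))
print(f)
# [x1, x2, (x2 + 1)/x1, (x1 + x2 + 1)/(x1*x2), (x1 + 1)/x2]

# sigma: specialize x2 -> 1; every image lies in Z[x1, 2/x1], the cluster
# algebra of the quiver with vertex 2 removed, consistent with Corollary 4.4.
sigma_f = [sp.cancel(v.subs(x2, 1)) for v in f]
print(sigma_f)
# [x1, 1, 2/x1, (x1 + 2)/x1, x1 + 1]

# Specializing the remaining variable to 1 as well gives a frieze of positive
# integers, as in Corollary 4.6.
print([v.subs(x1, 1) for v in sigma_f])
# [1, 1, 2, 3, 2]
```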
**Corollary 4.6**.: _Let \(Q\) be an acyclic quiver without loops or \(2\)-cycles, let \(Q^{\prime}\) be the quiver obtained by removing the vertex \(i\) in \(Q\), and let \(\sigma:\mathcal{A}_{Q}\to\mathcal{A}_{Q^{\prime}}\) be the specialization of \(x_{i}\) to \(1\) (this is well-defined thanks to Corollary 4.4). Let \(f^{\prime}:\mathcal{A}_{Q^{\prime}}\to\mathbb{Z}\) be a frieze. Then there exists a unique frieze \(f:\mathcal{A}_{Q}\to\mathbb{Z}\) such that \(f^{\prime}\circ\sigma=f\)._ Proof.: If \(f\) exists, then it is unique, since it is determined by its action on the initial cluster variables of \(\mathcal{A}_{Q}\). Let us prove that such an \(f\) exists for any frieze \(f^{\prime}\). By Corollary 4.4, \(f=f^{\prime}\circ\sigma\) is a well-defined morphism of rings from \(\mathcal{A}_{Q}\) to \(\mathbb{Z}\). We only need to check that all cluster variables of \(\mathcal{A}_{Q}\) are sent to positive values by \(f\); this follows from the positivity theorem [10]: any cluster variable of \(\mathcal{A}_{Q}\) is a Laurent polynomial with nonnegative coefficients in the initial cluster variables, and these are sent to positive values by \(f\). **Remark 4.7**.: Corollary 4.6 can be deduced for friezes where \(Q\) is of type \(A_{n}\) from the results of [11], and was shown to be true in types \(A_{n},D_{n}\) and \(E_{6}\) in [1, Section 5], where it was also observed to be true for all known friezes of types \(E_{7}\) and \(E_{8}\) by a direct check. The total number of possible friezes in these types is still unknown and was conjectured in [12]. ### A formula for Auslander-Reiten triangles In this section, we will show how Theorem 2.10 allows for a new proof of the following formula of S. Dominguez and C. Geiss when \(\mathcal{C}\) has constructible cones. **Theorem 4.8** (Theorem 1 of [1]).: _Let \(\mathcal{C}\) be a \(\operatorname{Hom}\)-finite 2-Calabi-Yau category with constructible cones and a cluster tilting object \(T\). Let \(Z\) be an indecomposable object of \(\mathcal{C}\), and assume that it sits in an Auslander-Reiten triangle_ \[\Sigma Z\xrightarrow{\alpha}Y\xrightarrow{\beta}Z\xrightarrow{\varepsilon}\Sigma^{2}Z.\] _Then_ \[CC_{T}(Z)CC_{T}(\Sigma Z)=CC_{T}(Y)+1.\] We will give a proof of this theorem under the additional assumption that \(\mathcal{C}\)_has constructible cones_. Let \(V\) be the one-dimensional subspace of \(\operatorname{Hom}_{\mathcal{C}}(Z,\Sigma^{2}Z)\) generated by \(\varepsilon\). Applying Theorem 2.10, we get \[CC_{T}(Z)CC_{T}(\Sigma Z)=CC_{T}(Y)+\sum_{E\in\mathcal{Y}_{\Sigma Z,Z}}\chi(\mathcal{R}_{\langle E\rangle})CC_{T}(E).\] Here \(\mathcal{R}=\{[\eta]\in\mathbb{P}\operatorname{Hom}_{\mathcal{C}}(\Sigma Z,\Sigma Z)\mid\beta_{Z,\Sigma Z}(\varepsilon,\eta)\neq 0\}\). Let us show that \([\eta]\) lies in \(\mathcal{R}\) if and only if \(\eta\) is an isomorphism. Since \(Z\) is indecomposable, \(\operatorname{Hom}_{\mathcal{C}}(\Sigma Z,\Sigma Z)\) is a local ring. Hence \(\eta\) is an isomorphism if and only if it does not lie in the radical of \(\operatorname{Hom}_{\mathcal{C}}(\Sigma Z,\Sigma Z)\). Thus we need to show that \(\eta\) lies in the radical of \(\operatorname{Hom}_{\mathcal{C}}(\Sigma Z,\Sigma Z)\) if and only if \(\beta_{Z,\Sigma Z}(\varepsilon,\eta)=0\). Assume that \(\eta\) is in the radical (assume \(\eta\neq 0\); the case \(\eta=0\) is trivial). Then it is not an isomorphism, and since \(Z\) is indecomposable, it is not a retraction. 
Then, by definition of an Auslander-Reiten triangle, we must have that there exists a morphism \(f:Z\to Y\) such that \(\Sigma^{-1}\eta=\beta f\). But then \[\beta_{Z,\Sigma Z}(\varepsilon,\eta) = \beta_{Z,\Sigma Z}(\varepsilon,\Sigma\beta\Sigma f)\] \[= \beta_{Y,\Sigma Z}(\varepsilon\beta,\Sigma f)\] \[= \beta_{Y,\Sigma Z}(0,\Sigma f)\] \[= 0.\] Assume next that \(\eta\) is an isomorphism. Then \(\eta\) and \(\operatorname{rad}\operatorname{Hom}_{\mathcal{C}}(\Sigma Z,\Sigma Z)\) generate \(\operatorname{Hom}_{\mathcal{C}}(\Sigma Z,\Sigma Z)\) as a vector space. If \(\beta_{Z,\Sigma Z}(\varepsilon,\eta)\) were to vanish, it would thus vanish for any \(\eta^{\prime}\) in \(\operatorname{Hom}_{\mathcal{C}}(\Sigma Z,\Sigma Z)\), contradicting the fact that \(\beta_{Z,\Sigma Z}\) is non-degenerate. Thus \(\beta_{Z,\Sigma Z}(\varepsilon,\eta)\neq 0\). This proves that \(\mathcal{R}\) is the set of \([\eta]\), with \(\eta\) an isomorphism. But then \(\mathcal{R}=\mathcal{R}_{\langle 0\rangle}\) (since the middle term of a triangle associated with an isomorphism is \(0\)). Moreover, \(\mathcal{R}=\mathbb{P}\operatorname{Hom}_{\mathcal{C}}(\Sigma Z,\Sigma Z)\setminus\mathbb{P}\) rad \(\operatorname{Hom}_{\mathcal{C}}(\Sigma Z,\Sigma Z)\) is an affine space, since rad \(\operatorname{Hom}_{\mathcal{C}}(\Sigma Z,\Sigma Z)\) is a hyperplane in \(\operatorname{Hom}_{\mathcal{C}}(\Sigma Z,\Sigma Z)\). Thus \(\chi(\mathcal{R})=1\). Therefore \[CC_{T}(Z)CC_{T}(\Sigma Z) = CC_{T}(Y)+\sum_{E\in\mathcal{Y}_{Z,\Sigma Z}}\chi(\mathcal{R}_{\langle E\rangle})CC_{T}(E)\] \[= CC_{T}(Y)+\chi(\mathcal{R}_{\langle 0\rangle})CC_{T}(0)\] \[= CC_{T}(Y)+1.\] This finishes the proof. ### Another restricted formula Theorem 2.10 allows us to obtain (always assuming constructibility of cones) the following formula, reminiscent of the one stated in [DX]. For two objects \(L\) and \(M\) of \(\mathcal{C}\), let \((T)(L,M)\) be the space of morphisms from \(L\) to \(M\) factoring through an object of \(\operatorname{add}T\). **Proposition 4.9**.: _Under the hypotheses of Theorem 2.10, we have that_ \[\chi\big{(}\mathbb{P}(T)(L,\Sigma M)\big{)}CC_{T}(L)CC_{T}(M)= \sum_{Y\in\mathcal{Y}_{L,M}}\chi\big{(}\mathbb{P}(T)(L,\Sigma M)_{\langle Y\rangle}\big{)}CC_{T}(Y)\] \[+\ \sum_{Y\in\mathcal{Y}_{M,L}}\chi\big{(}\mathbb{P}\operatorname{Hom}_{\mathcal{C}}(M,\Sigma L)_{\langle Y\rangle}\setminus\mathbb{P}(T)(M,\Sigma L)_{\langle Y\rangle}\big{)}CC_{T}(Y).\] Proof.: This follows from Theorem 2.10 by taking \(V=(T)(L,\Sigma M)\). To see this, we only need to prove that \(\operatorname{Ker}\beta_{L,M}(V,?)=(T)(M,\Sigma L)\). Notice first that \((T)(M,\Sigma L)\) is contained in \(\operatorname{Ker}\beta_{L,M}(V,?)\); indeed, if \(f\in V\) and \(g\in(T)(M,\Sigma L)\), then \(\Sigma g\circ f=0\) (since \(T\) is rigid), so \[\beta_{L,M}(f,g)=\beta_{L,\Sigma L}(\Sigma g\circ f,id_{\Sigma L})=0.\] Moreover, \(\dim\operatorname{Ker}\beta_{L,M}(V,?)=\dim\operatorname{Hom}_{\mathcal{C}}(L,\Sigma M)/V\), and this last vector space is isomorphic to the dual of \((T)(M,\Sigma L)\) thanks to [10, Lemma 3.3]. Thus \(\operatorname{Ker}\beta_{L,M}(V,?)\) and \((T)(M,\Sigma L)\) have the same (finite) dimension, and so they are equal. ## Acknowledgements The first and second authors were supported by the French ANR grant CHARMS (ANR-19-CE40-0017-02). The second author was supported by the Institut Universitaire de France (IUF). The third author was supported by the National Natural Science Foundation of China (Grant No. 12271347). 
The final stages of this project were completed while the first and second author were participating in a trimester programme at the Isaac Newton Institute. The authors would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme _Cluster algebras and representation theory_ where work on this paper was undertaken. This work was supported by EPSRC grant no EP/R014604/1. We would like to thank Karin Baur, Eleonore Faber, Ana Garcia Elsener, Alastair King, Matthew Pressland and Khrystyna Serhiyenko for discussions about applications to cluster algebras and friezes.
2303.17042
Simultaneous activity and attenuation estimation in TOF-PET with TV-constrained nonconvex optimization
An alternating direction method of multipliers (ADMM) framework is developed for nonsmooth biconvex optimization for inverse problems in imaging. In particular, the simultaneous estimation of activity and attenuation (SAA) problem in time-of-flight positron emission tomography (TOF-PET) has such a structure when maximum likelihood estimation (MLE) is employed. The ADMM framework is applied to MLE for SAA in TOF-PET, resulting in the ADMM-SAA algorithm. This algorithm is extended by imposing total variation (TV) constraints on both the activity and attenuation map, resulting in the ADMM-TVSAA algorithm. The performance of this algorithm is illustrated using the penalized maximum likelihood activity and attenuation estimation (P-MLAA) algorithm as a reference. Additional results on step-size tuning and on the use of unconstrained ADMM-SAA are presented in the previous arXiv submission: arXiv:2303.17042v1.
Zhimei Ren, Emil Y. Sidky, Rina Foygel Barber, Chien-Min Kao, Xiaochuan Pan
2023-03-29T22:04:36Z
http://arxiv.org/abs/2303.17042v2
Simultaneous activity and attenuation estimation in TOF-PET with TV-constrained nonconvex optimization ###### Abstract An alternating direction method of multipliers (ADMM) framework is developed for nonsmooth biconvex optimization for inverse problems in imaging. In particular, the simultaneous estimation of activity and attenuation (SAA) problem in time-of-flight positron emission tomography (TOF-PET) has such a structure when maximum likelihood estimation (MLE) is employed. The ADMM framework is applied to MLE for SAA in TOF-PET, resulting in the ADMM-SAA algorithm. This algorithm is extended by imposing total variation (TV) constraints on both the activity and attenuation map, resulting in the ADMM-TVSAA algorithm. The performance of these algorithms is illustrated using the standard maximum likelihood activity and attenuation estimation (MLAA) algorithm as a reference. Image reconstruction in TOF-PET, simultaneous activity/attenuation estimation, large-scale nonconvex optimization, alternating direction method of multipliers ## 1 Introduction Nuclear medicine imaging modalities such as single-photon emission computed tomography (SPECT) and positron emission tomography (PET) require the input of a gamma ray attenuation map for quantitatively accurate imaging. The combination of nuclear medicine imaging with other image modalities such as X-ray computed tomography (CT) [1, 2] or magnetic resonance imaging (MRI) [3] provides a means for estimating the necessary attenuation map. There are, however, challenges in the separate attenuation map estimation. Use of CT-based attenuation maps requires extrapolation of the photon attenuation map from the diagnostic X-ray energy range to 511 keV and registration of the PET and CT imaging, which can be particularly difficult in the presence of motion [4]. The use of MRI to estimate a synthetic CT image is further complicated by the fact that bone and air have similar gray values in MRI while bone has a significantly higher attenuation coefficient for gamma rays. To avoid a separate measurement for obtaining the gamma ray attenuation map, a long-standing inverse problem of interest has been to simultaneously estimate the attenuation and activity distributions from emission data alone [5, 6]. To address simultaneous activity and attenuation (SAA) estimation, Nuyts _et al._[6] use maximum likelihood to invert the algebraic SAA model, and they find that accurate activity distributions can be recovered by appropriately regularizing the attenuation map. The regularization involves the use of Gibbs and intensity priors on the attenuation distribution that encourage local smoothness and clustering of values around known attenuation values for tissues in the scanned subject. Another interesting result for the SAA problem is obtained in considering time-of-flight positron emission tomography (TOF-PET) [7]. Defrise _et al._[8] exploit an analytic range condition [9, 10] for the continuous TOF-PET model and obtain a uniqueness result that the attenuation factor and activity can be determined up to a multiplicative constant. Returning to the SAA algebraic model for TOF-PET, a comprehensive study of this inverse problem using maximum likelihood estimation is presented in Rezaei _et al._[11], where it is found that the activity and attenuation maps can be recovered if the timing resolution of the TOF measurements is sufficiently high and if support constraints are exploited. 
We note an intriguing extension of the SAA problem where the background radiation from Lutetium-176, present in PET scintillators composed of either lutetium oxyorthosilicate (LSO) or lutetium-yttrium orthosilicate (LYSO), is exploited to provide additional information on the subject's attenuation map without the need for a separate scan [12]. In this work, we seek to build off of Ref. [11] and develop an image reconstruction framework for the SAA problem in TOF-PET that can incorporate nonsmooth, convex constraints in the maximum likelihood estimation. Such constraints can help to stabilize the image reconstruction when noise is present and can possibly extend the range of activity and attenuation factor recovery, possibly relaxing the requirements on the coincidence timing resolution and knowledge of the activity and attenuation distribution support. Of particular interest, here, is the use of total variation (TV) constraints on both activity and attenuation distributions. We have previously exploited such constraints in the context of nuclear medicine imaging; in Refs. [13] and [14] TV constraints are exploited to enable sparse-data sampling configurations in SPECT and PET, respectively. In Ref. [15], a similar methodology is used for image reconstruction in low-count list-mode TOF-PET. The image reconstruction algorithms developed in Refs. [13; 14; 15] are all instances of a general primal-dual (PD) solver for nonsmooth convex optimization developed by Chambolle and Pock [16; 17]. The optimization problem posed by applying TV-constraints to the SAA estimation problem, however, is nonsmooth and nonconvex. In our recent work, we develop a framework for such problems in imaging, where the optimization can be split into convex terms plus differentiable terms that are possibly nonconvex [18]. This framework is based on the alternating direction method of multipliers (ADMM) [19] in a way that is closely related to the PD algorithm. This framework has been successfully applied to the nonsmooth and nonconvex optimization problem that arises in spectral computed tomography (CT) when the spectral response of the measurement is included in the data model [20]. Here, we modify this framework to address biconvex optimization and apply it to the SAA estimation problem with convex constraints. The SAA data model and imaging problem are specified in Sec. II, where we then develop an ADMM algorithm to solve the associated optimization problem. As the focus of this work is mainly on the SAA inverse problem, we conduct a number of studies on noiseless TOF-PET data in Sec. III that explore the range of TOF-PET parameters that allow exact recovery of activity and attenuation factors. Also presented in this section are results with noisy data that demonstrate the stability of the proposed algorithm. In Sec. IV the results are discussed and the conclusions of the work are given. ## II Image reconstruction model and algorithms In presenting the SAA algorithm for TOF-PET, we consider a two dimensional (2D) simulation where the lines-of-response (LORs) are organized in parallel-ray fashion and are specified in the same way that the 2D Radon transform is parameterized. For the TOF-PET model, the Radon transform is modified by including weighted line-integration that accounts for TOF information that helps to localize the positron-electron annihilation along a given LOR. After specifying the TOF-PET data model, the MLAA algorithm from Rezaei _et al._[11] is briefly summarized. 
We then present the nonconvex ADMM algorithm that performs SAA estimation with nonsmooth convex constraints. ### TOF-PET modelling The measurement model for the mean data in TOF-PET is \[c_{i\ell}=\exp\left[-P_{\ell}^{\top}\mu\right]\cdot T_{i\ell}^{\top}\lambda \tag{1}\] where \(\lambda\) and \(\mu\) are the unknown activity and attenuation maps, respectively; \(T_{i\ell k}\) is the TOF sensitivity matrix element for TOF window \(i\), LOR \(\ell\), and image pixel \(k\); \(P_{\ell k}\) is the X-ray projection matrix element for LOR \(\ell\) and pixel \(k\). For defining the TOF projection matrix \(T\), the TOF window sensitivity along the LOR is specified as \[w_{i}(t)=\exp[-(t-t_{i})^{2}/(2\sigma_{\text{TOF}}^{2})],\] where the sampling along the LOR is half of the full-width-half-maximum (FWHM) of this Gaussian distribution \[\Delta t=t_{i+1}-t_{i}=\text{FWHM}/2=\sqrt{2\log 2}\cdot\sigma_{\text{TOF}}.\] For this work, scatter coincidences and random events are not considered. ### Imaging model based on nonconvex optimization We consider performing SAA using likelihood maximization, where the measured coincidence count data are assumed to follow a multivariate Poisson distribution \[C_{i\ell}\sim\text{Poisson}(c_{i\ell}).\] Equivalently, this estimation is performed by minimization of the negative log-likelihood, \[l(\lambda,\mu)=\sum_{i\ell}\left\{c_{i\ell}-C_{i\ell}\cdot\log c_{i\ell}\right\}=\sum_{i\ell}\left\{\exp(-P_{\ell}^{\top}\mu)\cdot T_{i\ell}^{\top}\lambda-C_{i\ell}\cdot(-P_{\ell}^{\top}\mu+\log(T_{i\ell}^{\top}\lambda))\right\}. \tag{2}\] The optimization problem of interest is \[\lambda,\mu=\operatorname*{arg\,min}_{\lambda,\mu}\left\{l(\lambda,\mu)\ \mid\ \mathbf{1}^{\top}\lambda=N_{\text{total}}\right\}, \tag{3}\] where \(l\) is the negative log-likelihood in Eq. (2); \(\mathbf{1}\) is a vector of the size of \(\lambda\) with unit entries so that \(\mathbf{1}^{\top}\lambda\) is equivalent to summation over \(\lambda\); and \(N_{\text{total}}\) is the total number of annihilations. The constraint on the total number of annihilations is used to overcome the constant ambiguity in the SAA estimation problem [8]. This constraint is enforced in this work instead of the object support constraint investigated in Rezaei _et al._[11]. ### Summary of MLAA To solve this imaging model, Rezaei _et al._[11] developed the MLAA algorithm. For completeness, we write the MLAA update steps including a minor modification in Eq. (6) that accommodates the constraint on the total number of annihilations: \[a_{\ell}=\exp\left[-\sum_{k}P_{\ell k}\mu_{k}\right]\quad\forall\ell, \tag{4}\] \[\lambda_{k}\leftarrow\frac{\lambda_{k}}{\sum_{i\ell}a_{\ell}T_{i\ell k}}\sum_{i\ell}\left\{T_{i\ell k}\left(\frac{C_{i\ell}}{\sum_{k^{\prime}}T_{i\ell k^{\prime}}\lambda_{k^{\prime}}}\right)\right\}\quad\forall k, \tag{5}\] \[\lambda\leftarrow\lambda\left(\frac{N_{\text{total}}}{\sum_{k}\lambda_{k}}\right), \tag{6}\] \[\mu_{k}\leftarrow\mu_{k}+\frac{\sum_{i\ell k^{\prime}}P_{\ell k}\left(a_{\ell}T_{i\ell k^{\prime}}\lambda_{k^{\prime}}-C_{i\ell}\right)}{\sum_{i\ell k^{\prime}}P_{\ell k^{\prime}}P_{\ell k}a_{\ell}\sum_{k^{\prime\prime}}T_{i\ell k^{\prime\prime}}\lambda_{k^{\prime\prime}}}\quad\forall k. \tag{7}\] The MLAA algorithm essentially alternates between updating \(\lambda\) with a Poisson likelihood EM step and \(\mu\) with a Poisson transmission likelihood optimization step. In this MLAA implementation the extra update step in Eq. (6) enforces the constraint on the total number of annihilations. 
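The update steps can be made concrete with a minimal dense-array sketch (an illustration only, not the implementation used in this work). It assumes \(T\) is stored as a numpy array of shape (TOF windows, LORs, pixels) and \(P\) of shape (LORs, pixels), and it reads the sums over \(k^{\prime}\) and \(k^{\prime\prime}\) in Eq. (7) as the forward projection \(\sum_{k^{\prime}}T_{i\ell k^{\prime}}\lambda_{k^{\prime}}\).

```python
# Minimal numpy sketch of one MLAA pass, Eqs. (4)-(7) (illustration only).
# Shapes assumed: T (n_tof, n_lor, n_pix), P (n_lor, n_pix), C (n_tof, n_lor),
# lam and mu (n_pix,).  A small eps guards against division by zero.
import numpy as np

def mlaa_iteration(lam, mu, C, T, P, N_total, eps=1e-12):
    a = np.exp(-P @ mu)                               # Eq. (4): LOR attenuation factors
    Tlam = np.einsum('ilk,k->il', T, lam)             # TOF forward projection of lam
    sens = np.einsum('ilk,l->k', T, a)                # sum_{i,l} a_l T_{ilk}
    ratio = C / (Tlam + eps)
    lam = lam / (sens + eps) * np.einsum('ilk,il->k', T, ratio)   # Eq. (5): EM step
    lam = lam * (N_total / lam.sum())                 # Eq. (6): rescale to N_total
    resid = a[None, :] * Tlam - C                     # a_l (T lam)_{il} - C_{il}
    num = P.T @ resid.sum(axis=0)                     # Eq. (7) numerator
    den = P.T @ (P.sum(axis=1) * a * Tlam.sum(axis=0))   # Eq. (7) denominator
    mu = mu + num / (den + eps)                       # transmission-type update of mu
    return lam, mu
```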
For MLAA the activity \(\lambda\) should have a strictly positive initialization, and this quantity will remain non-negative during the iteration. The attenuation map \(\mu\) can be negative unless a non-negativity constraint is included. Early stopping of the iteration is the primary means of performing regularization with MLAA, but explicit regularization can also be included with the use of Gibbs smoothing [21; 22]. In this work, we develop a framework for SAA which can include nonsmooth regularization. ### ADMM for nonsmooth and biconvex optimization The general convex optimization problem that ADMM solves takes the form \[\min_{x,y}\left\{f(x)+g(y)\;\mid\;Ax+By=c\right\},\] where \(f\) and \(g\) are convex and possibly non-smooth functions; \(A\) and \(B\) are linear operators; \(x\), \(y\) and \(c\) are vectors. The steps of the ADMM algorithm are \[x\leftarrow\operatorname*{arg\,min}_{x^{\prime}}\Bigl{\{}f(x^{\prime})+u^{\top}Ax^{\prime}+\tfrac{1}{2}\|Ax^{\prime}+By-c\|_{\Sigma}^{2}+\tfrac{1}{2}\|x^{\prime}-x\|_{H_{f}}^{2}\Bigr{\}} \tag{8}\] \[y\leftarrow\operatorname*{arg\,min}_{y^{\prime}}\Bigl{\{}g(y^{\prime})+u^{\top}By^{\prime}+\tfrac{1}{2}\|Ax+By^{\prime}-c\|_{\Sigma}^{2}+\tfrac{1}{2}\|y^{\prime}-y\|_{H_{g}}^{2}\Bigr{\}} \tag{9}\] \[u\gets u+\Sigma(Ax+By-c), \tag{10}\] where \(\Sigma\), \(H_{f}\), and \(H_{g}\) are symmetric positive definite, and \(\|v\|_{M}^{2}\equiv v^{\top}Mv\) for any symmetric positive definite matrix \(M\). Because optimizing the TOF-PET likelihood is a non-convex optimization, the ADMM algorithm does not directly apply. The TOF-PET likelihood function, however, is biconvex; i.e. fixing either \(\lambda\) or \(\mu\), the likelihood is a convex function in the other variable. The ADMM algorithm can be modified to accommodate a biconvex function, and we consider the case that \(g\) is a biconvex function \[g(y)=g(y_{1},y_{2}),\] where \(y\) is the concatenation of \(y_{1}\) and \(y_{2}\); and \(g(y_{1},\cdot)\) and \(g(\cdot,y_{2})\) are convex functions for fixed \(y_{1}\) and \(y_{2}\), respectively. To accommodate the biconvexity of \(g\), the second update equation, Eq. (9), is replaced by an inner iteration with the following update equations \[y_{1}\leftarrow\operatorname*{arg\,min}_{y_{1}^{\prime}}\Bigl{\{}g(y_{1}^{\prime},y_{2})+u^{\top}B\left(y_{1}^{\prime},y_{2}\right)+\tfrac{1}{2}\|Ax+B\left(y_{1}^{\prime},y_{2}\right)-c\|_{\Sigma}^{2}+\tfrac{1}{2}\|(y_{1}^{\prime},y_{2})-(y_{1},y_{2})\|_{H_{g}}^{2}\Bigr{\}} \tag{11}\] \[y_{2}\leftarrow\operatorname*{arg\,min}_{y_{2}^{\prime}}\Bigl{\{}g(y_{1},y_{2}^{\prime})+u^{\top}B\left(y_{1},y_{2}^{\prime}\right)+\tfrac{1}{2}\|Ax+B\left(y_{1},y_{2}^{\prime}\right)-c\|_{\Sigma}^{2}+\tfrac{1}{2}\|(y_{1},y_{2}^{\prime})-(y_{1},y_{2})\|_{H_{g}}^{2}\Bigr{\}}. \tag{12}\] The inner loop consists of alternating between Eqs. (11) and (12) for a predetermined number of iterations \(N_{y}\), where \(N_{y}\geq 1\). After the inner loop is completed, the ADMM iteration continues with Eq. (10) after the following assignment \[y=(y_{1},\;y_{2}).\] This inner loop, specified in Eqs. (11) and (12), is computationally efficient if multiplication by the matrix \(B\) is efficient; this is the case in our application because we consider \(B=I\) where \(I\) is the identity matrix. Note that multiplication by \(A\) is not performed within this inner iteration because the matrix \(A\) only appears in the term \(Ax\) which is computed before entering the inner loop. 
### ADMM for large-scale tomographic image reconstruction For the large-scale optimization problems that arise in tomographic image reconstruction, the update step in Eq. (8) can be problematic because of the term \(Ax\), which appears in the minimization over \(x\). The matrix \(A\) usually contains the system matrix for the imaging model, and computation of \(Ax\) can be expensive particularly for 3D imaging; thus numerical solution of Eq. (8) may not be feasible. This "expensive inner loop" problem can be circumvented by linearization, i.e. by including the additional term \(\tfrac{1}{2}\|x^{\prime}-x\|_{H_{f}}^{2}\) in Eq. (8) [18; 23], resulting in an algorithm closely related to the primal-dual (PD) algorithm of Chambolle and Pock [16; 17]. Considering only scalar step-size parameters, i.e. \[\Sigma=\sigma I,\] the metric \(H_{f}\) in Eq. (8) is set to \[H_{f}=I/\tau-\sigma A^{\top}A. \tag{13}\] This choice cancels the \(Ax\) term in Eq. (2), and the requirement that \(H_{f}\) be positive definite yields a constraint on the step-sizes \(\sigma\) and \(\tau\). In the context of the image reconstruction problem, we also have \[H_{g}=0;\;\;B=-I;\;\;c=0.\] The ADMM generic optimization problem becomes \[\min_{x,y}\left\{f(x)+g(y)\right\}\;\mid Ax-y=0, \tag{14}\] and the algorithm for convex optimization is then specified by the following update equations \[x\leftarrow\operatorname*{arg\,min}_{x^{\prime}}\Bigl{\{}f(x^{ \prime})+x^{\prime\top}A^{\top}(u+\sigma(Ax-y))\] \[+\tfrac{1}{2\tau}\|x^{\prime}-x\|^{2}\Bigr{\}} \tag{15}\] \[y\leftarrow\operatorname*{arg\,min}_{y^{\prime}}\Bigl{\{}g(y^{ \prime})-u^{\top}y^{\prime}+\tfrac{\sigma}{2}\|Ax-y^{\prime}\|^{2}\Bigr{\}}\] (16) \[u\gets u+\sigma(Ax-y). \tag{17}\] Aside from minor details, this set of update equations is equivalent to the PD algorithm, but as a starting point to modify the update steps for non-convex optimization, this form is more convenient because both \(f\) and \(g\) functions appear directly in the updates. In contrast, the PD algorithm dualizes \(g\) and the convex conjugate \(g^{\star}\) is needed. If it is desired to apply PD to non-convex \(g\), figuring out what to put in place of \(g^{\star}\), while possible [24], adds another layer of complication to the algorithm development. The modification of the linearized ADMM updates for addressing the case where \(g\) is biconvex replaces Eq. (16) with inner loop update equations \[y_{1} =\underset{y^{\prime}_{1}}{\arg\min}\Big{\{}g(y^{\prime}_{1},y_{2} )-u^{\top}(y^{\prime}_{1},y_{2}) \tag{18}\] \[+\tfrac{\alpha}{2}\|Ax-(y^{\prime}_{1},y_{2})\|^{2}\Big{\}}\] \[y_{2} =\underset{y^{\prime}_{2}}{\arg\min}\Big{\{}g(y_{1},y^{\prime}_{ 2})-u^{\top}(y_{1},y^{\prime}_{2})\] (19) \[+\tfrac{\alpha}{2}\|Ax-(y_{1},y^{\prime}_{2})\|^{2}\Big{\}}.\] Convergence of this modified ADMM algorithm for biconvex functions is not theoretically guaranteed and thus convergence is demonstrated empirically. ### ADMM for SAA in TOF-PET The instantiation of ADMM for SAA estimation by minimization of the negative log-likelihood is covered here in detail. The optimization problem of interest is \[\lambda,\mu=\underset{\lambda,\mu}{\arg\min}\left\{l(\lambda,\mu)\;\mid\; \mathbf{1}^{\top}\lambda=N_{\text{total}},\;\;P\mu\geq 0\right\}, \tag{20}\] which is essentially the same as the optimization problem in Eq. (3). The only difference is that an additional non-negativity constraint is introduced on the sinogram of the attenuation map. To map the optimization problem in Eq. 
(20) onto the generic ADMM optimization in Eq. (14), the primal, splitting, and dual variables \(x\), \(y\), and \(u\), are respectively assigned as \[x=\left(\begin{array}{c}\lambda\\ \mu\end{array}\right),\;\;y=\left(\begin{array}{c}y_{\lambda}\\ y_{\mu}\end{array}\right),\;\;u=\left(\begin{array}{c}u_{\lambda}\\ u_{\mu}\end{array}\right).\] The linear system \(A\) is assigned as \[A=\left(\begin{array}{cc}T&0\\ 0&P\end{array}\right).\] The convex function \(f\) is used to represent the constraint on the total number of annihilations by setting \[f(\lambda,\mu)=\delta(\mathbf{1}^{\top}\lambda=N_{\text{total}}), \tag{21}\] where \(\delta\) is the convex indicator function, which is zero if the conditional argument is true and infinity otherwise. The biconvex function \(g\) accounts for the remaining terms in Eq. (20) \[g(y_{\lambda},y_{\mu}) =L(y_{\lambda},y_{\mu})+\delta(y_{\mu}\geq 0), \tag{22}\] \[L(y_{\lambda},y_{\mu}) =\sum_{i\ell}\Bigl{\{}\exp(-y_{\mu,\,\ell})\cdot y_{\lambda,\,i\ell}\] \[-C_{i\ell}\cdot(-y_{\mu,\,\ell}+\log(y_{\lambda,\,i\ell})) \Bigr{\}},\] where \[l(\lambda,\mu)=L(T\lambda,P\mu).\] Parametrization of the step-sizesStep-size selection is a critical issue for first-order, large-scale optimization algorithms. There can be much flexibility in the step-size selection, and it is important to select a minimal set of free parameters that are effective for algorithm efficiency but not too cumbersome in the tuning procedure. Because the system matrix \(A\) for SAA is block-diagonal, a slight generalization of the ADMM linearization is considered. The metric \(H_{f}\) is written as \[H_{f}=\left(\begin{array}{cc}H_{\lambda}&0\\ 0&H_{\mu}\end{array}\right),\] \[H_{\lambda}=\frac{I}{\tau_{\lambda}}-\sigma_{\lambda}T^{\top}T,\] \[H_{\mu}=\frac{I}{\tau_{\mu}}-\sigma_{\mu}P^{\top}P,\] and the step-size parameters are chosen according to \[\sigma_{\lambda}\tau_{\lambda}=1/\|T\|_{2}^{2},\;\;\sigma_{\mu}\tau_{\mu}=1/ \|P\|_{2}^{2},\] where \(\|M\|_{2}\) is the largest singular value of the matrix \(M\). With four step-size parameters and two equality constraints, there are two free step-size parameters. Specifically, the step-size ratios, \(\rho_{\lambda}\) and \(\rho_{\mu}\), are chosen to be the free parameters that need to be tuned: \[\sigma_{\lambda}= \rho_{\lambda}/\|T\|_{2},\;\tau_{\lambda}=1/(\rho_{\lambda}\|T\| _{2}), \tag{23}\] \[\sigma_{\mu}= \rho_{\mu}/\|P\|_{2},\;\tau_{\mu}=1/(\rho_{\mu}\|P\|_{2}). \tag{24}\] Tuning of \(\rho_{\lambda}\) and \(\rho_{\mu}\) is a necessary step any time the \(T\) or \(P\) matrices are changed due to, for example, a change in scan configuration or sampling pattern. The \(\times\)-updateFor the SAA problem in TOF-PET the \(x\)-update in Eq. (15) splits into two optimization problems \[\lambda\leftarrow\underset{\lambda^{\prime}}{\arg\min}\Bigl{\{} \lambda^{\prime}}^{\top}T^{\top}(u_{\lambda}+\sigma_{\lambda}(T\lambda-y_{ \lambda})) \tag{25}\] \[+\tfrac{1}{2\tau_{\lambda}}\|\lambda^{\prime}-\lambda\|^{2}\;\mid \mathbf{1}^{\top}\lambda^{\prime}=N_{\text{total}}\Bigr{\}},\] \[\mu\leftarrow\underset{\mu^{\prime}}{\arg\min}\Bigl{\{}\mu^{ \prime}}^{\top}P^{\top}(u_{\mu}+\sigma_{\mu}(P\mu-y_{\mu}))\] (26) \[+\tfrac{1}{2\tau_{\mu}}\|\mu^{\prime}-\mu\|^{2}\Bigr{\}}, \tag{27}\] where the convex function \(f\) from Eq. (21) is incorporated in the \(\lambda\)-update equation. The optimization problem for the \(\mu\)-update in Eq. 
(26) is solved by setting the gradient of the objective function to zero and solving for \(\mu^{\prime}\) \[\mu\leftarrow\mu-\tau_{\mu}P^{\top}\bar{u}_{\mu}, \tag{28}\] \[\bar{u}_{\mu}=u_{\mu}+\sigma_{\mu}(P\mu-y_{\mu}).\] For the \(\lambda\) optimization problem in Eq. (25), the total annihilation count equality constraint is accounted for using the technique of Lagrange multipliers. The objective function is augmented introducing the scalar Lagrange multiplier \(\nu\), becoming \[\phi(\nu,\lambda^{\prime}) =\nu(\mathbf{1}^{\top}\lambda^{\prime}-N_{\text{total}})+\] \[\lambda^{\prime\top}T^{\top}(u_{\lambda}+\sigma_{\lambda}(T \lambda-y_{\lambda}))+\frac{1}{2\tau_{\lambda}}\|\lambda^{\prime}-\lambda\|^{2}.\] The augmented objective function is minimized by taking its gradient, setting it to zero, and solving for \(\nu\) and \(\lambda^{\prime}\). The derivative with respect to \(\nu\) gives back the total annihilation count constraint equation, and the gradient with respect to \(\lambda^{\prime}\) is \[\frac{\partial\phi(\nu,\lambda^{\prime})}{\partial\lambda^{\prime}}=\nu\mathbf{1 }+T^{\top}(u_{\lambda}+\sigma_{\lambda}(T\lambda-y_{\lambda}))+\frac{1}{\tau_ {\lambda}}(\lambda^{\prime}-\lambda)\] Setting this gradient to zero yields \[0 =\nu\mathbf{1}+T^{\top}\bar{u}_{\lambda}+\frac{1}{\tau_{\lambda} }(\lambda^{\prime}-\lambda), \tag{29}\] \[\bar{u}_{\lambda} =u_{\lambda}+\sigma_{\lambda}(T\lambda-y_{\lambda}).\] Both \(\nu\) and \(\lambda^{\prime}\) are unknown in Eq. (29), but this problem can be resolved by invoking the constraint equation \(\mathbf{1}^{\top}\lambda^{\prime}=N_{\text{total}}\) by multiplying Eq. (29) through by \(\mathbf{1}^{\top}\): \[0=\nu N_{\text{pix}}+\mathbf{1}^{\top}T^{\top}\bar{u}_{\lambda}+\frac{1}{\tau _{\lambda}}(N_{\text{total}}-\mathbf{1}^{\top}\lambda),\] and noting that \(\mathbf{1}^{\top}\mathbf{1}=N_{\text{pix}}\), where \(N_{\text{pix}}\) is the total number of pixels in the activity map. Solving for \(\nu\) and substituting back into Eq. (29) yields the \(\lambda\)-update \[\lambda \leftarrow\lambda-\tau_{\lambda}(\nu\mathbf{1}+T^{\top}\bar{u}_ {\lambda}), \tag{30}\] \[\nu =\frac{1}{\tau_{\lambda}N_{\text{pix}}}(\mathbf{1}^{\top}\lambda- N_{\text{total}}-\tau_{\lambda}\mathbf{1}^{\top}T^{\top}\bar{u}_{\lambda}).\] Computationally, the updates in \(\lambda\) and \(\mu\) are the most expensive steps in the ADMM algorithm because they involve forward- and back-projection of \(\mu\) and \(\bar{u}_{\mu}\), respectively, in addition to TOF forward- and back-projection of \(\lambda\) and \(\bar{u}_{\lambda}\), respectively. The biconvex \(y\)-updates:The \(g\) function in Eq. (22) is biconvex in that it is convex in \(y_{\lambda}\) if \(y_{\mu}\) is fixed and _vice versa_. Splitting up the \(g\) function over the two update equations in Eqs. 
(18) and (19) yields \[y_{\lambda} =\operatorname*{arg\,min}_{y^{\prime}_{\lambda}}\Bigl{\{}\sum_{i \ell}\left(\exp(-y_{\mu,\,\ell})\cdot y^{\prime}_{\lambda,\,i\ell}-C_{i\ell} \cdot\log(y^{\prime}_{\lambda,\,i\ell})\right)\] \[-u^{\top}_{\lambda}y^{\prime}_{\lambda}+\frac{\sigma_{\lambda}}{ 2}\|y^{\prime}_{\lambda}-T\lambda\|^{2}\Bigr{\}}, \tag{31}\] and \[y_{\mu} =\operatorname*{arg\,min}_{y^{\prime}_{\mu}}\Bigl{\{}\sum_{i \ell}\left(\exp(-y^{\prime}_{\mu,\,\ell})\cdot y_{\lambda,\,i\ell}+C_{i\ell} \cdot y^{\prime}_{\mu,\,\ell}\right)\] \[-u^{\top}_{\mu}y^{\prime}_{\mu}+\frac{\sigma_{\mu}}{2}\|y^{ \prime}_{\mu}-P\mu\|^{2}\,\,\mid\,\,y^{\prime}_{\mu}\geq 0\Bigr{\}}, \tag{32}\] noting that the \(\exp(-y_{\mu})\cdot y_{\lambda}\) term is the only one that mixes the \(y_{\lambda}\) and \(y_{\mu}\) variables and is therefore common to both minimization problems. The minimization problems for the \(y\)-update are both separable over the components of \(y^{\prime}_{\lambda}\) and \(y^{\prime}_{\mu}\). The minimization over \(y^{\prime}_{\lambda}\) in Eq. (31) is solved analytically by setting the gradient of the objective function to zero, yielding a quadratic equation. The resulting update equation is \[y_{\lambda,\,i\ell} =\left(b_{i\ell}+\sqrt{b^{2}_{i\ell}+4\sigma_{\lambda}C_{i\ell}} \right)/(2\sigma_{\lambda}) \tag{33}\] \[b_{i\ell} =u_{\lambda,\,i\ell}+\sigma_{\lambda}T^{\top}_{i\ell}\lambda-\exp (-y_{\mu,\,\ell}).\] In solving the quadratic equation the positive root is chosen because it results in physical non-negative values of \(y_{\lambda,\,i\ell}\). Solving the minimization over \(y^{\prime}_{\mu}\) in Eq. (32) is more involved because setting the gradient of the objective function to zero results in a transcendental equation, which requires the use of a numerical solver. The objective function is convex in \(y^{\prime}_{\mu}\) and its derivatives are easily computed analytically. Thus Newton's algorithm can be applied to obtain an efficient and accurate solution to Eq. (32). Both the first and second derivatives of the objective function are needed for Newton's algorithm. Defining \(\psi\) to be the objective function of Eq. (32) \[\psi(y^{\prime}_{\mu})=\sum_{i\ell}\left(\exp(-y^{\prime}_{\mu, \,\ell})\cdot y_{\lambda,\,i\ell}+C_{i\ell}\cdot y^{\prime}_{\mu,\,\ell}\right) \\ -u^{\top}_{\mu}y^{\prime}_{\mu}+\frac{\sigma_{\mu}}{2}\|y^{\prime }_{\mu}-P\mu\|^{2},\] the first derivative of \(\psi\) is \[\frac{\partial\psi(y^{\prime}_{\mu})}{\partial y^{\prime}_{\mu, \,\ell}}=-\exp(-y^{\prime}_{\mu,\,\ell})\cdot y_{\lambda,\,\ell}\\ +C_{\ell}-u_{\mu}+\sigma_{\mu}(y^{\prime}_{\mu,\,\ell}-P^{\top}_{ \ell}\mu), \tag{34}\] where \[y_{\lambda,\,\ell}=\sum_{i}y_{\lambda,\,i\ell}\,,\,\,\,C_{\ell}=\sum_{i}C_{i \ell}\,.\] The second derivative of \(\psi\) is \[\frac{\partial^{2}\psi(y^{\prime}_{\mu})}{\partial y^{\prime}_{\mu,\,\ell}}= \exp(-y^{\prime}_{\mu,\,\ell})\cdot y_{\lambda,\,\ell}+\sigma_{\mu}\,, \tag{35}\] which is strictly positive. Thus Newton's algorithm can be applied without any difficulties with the following update equation \[y^{\prime}_{\mu,\,\ell}\gets y^{\prime}_{\mu,\,\ell}-\frac{\partial\psi(y^{ \prime}_{\mu,\,\ell})}{\partial y^{\prime}_{\mu,\,\ell}}\left(\frac{\partial^{2} \psi(y^{\prime}_{\mu})}{\partial y^{\prime}_{\mu,\,\ell}}\right)^{-1}. \tag{36}\] There is also the non-negativity constraint in Eq. (32), and this can be accounted for by thresholding negative values of \(y^{\prime}_{\mu,\,\ell}\) to zero after the Newton iteration is completed. 
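A NumPy sketch of these two inner updates is given below: the closed-form positive root of Eq. (33) for \(y_{\lambda}\) and the Newton iteration of Eqs. (34)-(36) for \(y_{\mu}\). The precomputed projections `lam_proj = T @ lam` and `mu_proj = P @ mu`, the loop counts, and the array shapes are assumptions made for the sketch.

```python
import numpy as np

def y_update(y_mu, lam_proj, mu_proj, C, u_lam, u_mu,
             sigma_lam, sigma_mu, n_alt=100, n_newton=10):
    """Biconvex y-update: Eq. (33) for y_lam, Newton (Eqs. (34)-(36)) for y_mu.

    lam_proj : (n_tof, n_lor)  TOF projection of the activity, T @ lam
    mu_proj  : (n_lor,)        projection of the attenuation map, P @ mu
    """
    for _ in range(n_alt):
        # Eq. (33): per-element positive root of a quadratic
        b = u_lam + sigma_lam * lam_proj - np.exp(-y_mu)[None, :]
        y_lam = (b + np.sqrt(b ** 2 + 4.0 * sigma_lam * C)) / (2.0 * sigma_lam)
        # Newton iteration for y_mu (Eqs. (34)-(36)), initialized at zero
        y_lam_sum, C_sum = y_lam.sum(axis=0), C.sum(axis=0)
        y = np.zeros_like(y_mu)
        for _ in range(n_newton):
            g = -np.exp(-y) * y_lam_sum + C_sum - u_mu + sigma_mu * (y - mu_proj)
            h = np.exp(-y) * y_lam_sum + sigma_mu
            y = y - g / h
        y_mu = np.maximum(y, 0.0)   # non-negativity constraint of Eq. (32)
    return y_lam, y_mu
```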
The proposed \(y\)-update involves two additional levels of iteration. The first additional level of iteration involves alternative between solving Eqs. (31) and (32). In the second additional level of iteration Eq. (32) is solved with the Newton iteration in Eq. (36). Nevertheless, these additional nested iterations do not negatively impact the efficiency of the overall algorithm because all of the iterations for the \(y\)-update separate over the components of \(y\). The complete \(y\)-update computation takes less effort than computing, \(T\lambda\), the TOF data of an estimate of the activity map, \(\lambda\). This is one of the useful aspects of the powerful splitting technique that ADMM exploits. The u-update:The final set of ADMM update equations involve updating the \(u\) variables. For SAA in TOF-PET, Eq. (17) becomes \[u_{\lambda} \gets u_{\lambda}+\sigma_{\lambda}(T\lambda-y_{\lambda}), \tag{37}\] \[u_{\mu} \gets u_{\mu}+\sigma_{\mu}(P\mu-y_{\mu}). \tag{38}\] ``` 1:for\(k\gets 1\), \(N_{\text{iter}}\)do 2:\(\tilde{\lambda}=T^{\top}(u_{\lambda}+\sigma_{\lambda}(\bar{y}_{\lambda}-y_{ \lambda}))\) 3:\(\nu=\frac{1}{\tau_{\lambda}N_{\text{iter}}}\left(\mathbf{1}^{\top}\lambda-N_{ \text{total}}-\tau_{\lambda}\mathbf{1}^{\top}\tilde{\lambda}\right)\) 4:\(\lambda\leftarrow\lambda-\tau_{\lambda}(\nu\mathbf{1}+\tilde{\lambda})\) 5:\(\bar{y}_{\lambda}=T\lambda\) 6:\(\bar{\mu}=P^{\top}(u_{\mu}+\sigma_{\mu}(\bar{y}_{\mu}-y_{\mu}))\) 7:\(\mu\leftarrow\mu-\tau_{\mu}\bar{\mu}\) 8:\(\bar{y}_{\mu}=P\mu\) 9:\(\mu_{\mu},\ell=\sum_{\lambda}C_{i\ell}-\left(u_{\mu,\ell}-\sigma_{\mu}\bar{y }_{\mu,\ell}\right)\ \forall\ell\) 10:for\(k^{\prime}\gets 1\), \(N_{\text{iter}}\)do\(\triangleright\) Bicconvex alternation loop 11:\(b_{i\ell}=u_{\lambda,i\ell}+\sigma_{\lambda}\bar{y}_{\lambda,i\ell}-\exp(-y_{ \mu},\ell)\ \ \forall i,\ell\) 12:\(y_{\lambda,i\ell}=\left(b_{i\ell}+\sqrt{b_{i\ell}^{2}+4\sigma_{\lambda}C_{i \ell}}\right)/(2\sigma_{\lambda})\ \ \forall i,\ell\) 13:\(y_{\lambda,\ell}=\sum_{i}y_{\lambda,i\ell}\ \ \forall\ell\) 14:\(y_{\mu,\ell}^{\prime}=0\ \forall\ell\)\(\triangleright\) Initialize Newton iteration 15:for\(k^{\prime\prime}\gets 1\), \(N_{\text{new}}\)do\(\triangleright\) Loop for solving Eq. (32) 16:\(\psi_{\ell}^{(1)}=-\exp(-y_{\mu,\ell}^{\prime})\cdot y_{\lambda,\ell}+\sigma_{ \mu}y_{\mu,\ell}^{\prime}+v_{\mu,\ell}\) 17:\(\psi_{\ell}^{(2)}=-\exp(-y_{\mu,\ell}^{\prime})\cdot y_{\lambda,\ell}+\sigma_{ \mu}\ \forall\ell\) 18:\(y_{\mu,\ell}^{\prime}\gets y_{\mu,\,\ell}^{\prime}-\psi_{\ell}^{(1)}\cdot \left(\psi_{\ell}^{(2)}\right)^{-1}\ \forall\ell\) 19:\(y_{\mu,\,\ell}^{\prime}\leftarrow\text{pos}(y_{\mu,\,\ell}^{\prime})\ \ \forall\ell\)\(\triangleright\) Nonneg., Eq. (32) 20:endfor 21:\(y_{\mu,\,\ell}=y_{\mu,\,\ell}^{\prime}\ \forall\ell\) 22:endfor 23:\(u_{\lambda}\leftarrow u_{\lambda}+\sigma_{\lambda}(\bar{y}_{\lambda}-y_{ \lambda})\) 24:\(u_{\mu}\gets u_{\mu}+\sigma_{\mu}(\bar{y}_{\mu}-y_{\mu})\) 25:endfor ``` **Algorithm 1** ADMM pseudocode for SAA estimation with biconvex optimization. Variables \(\lambda\), \(\mu\), \(y_{\lambda}\), \(y_{\mu}\), \(\bar{y}_{\lambda}\), \(\bar{y}_{\mu}\), \(u_{\lambda}\), and \(u_{\mu}\) are initialized to zero. Step size ratio parameters \(\rho_{\lambda}\) and \(\rho_{\mu}\) are chosen, and step size parameters \(\sigma_{\lambda}\), \(\sigma_{\mu}\), \(\tau_{\lambda}\), and \(\tau_{\mu}\) are determined according to Eqs. (23) and (24). 
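Lines 2-8 of Algorithm 1, i.e. the \(x\)-update of Eqs. (28) and (30), might look as follows in NumPy; the dense `einsum`-based stand-ins for the TOF and X-ray projectors are assumptions of the sketch.

```python
import numpy as np

def x_update(lam, mu, y_lam, y_mu, u_lam, u_mu, T, P,
             sigma_lam, tau_lam, sigma_mu, tau_mu, N_total):
    """x-update of Eqs. (28) and (30) with dense T (n_tof, n_lor, n_pix)
    and P (n_lor, n_pix)."""
    n_pix = lam.size
    # Eq. (30): activity update with the scalar Lagrange multiplier nu
    ubar_lam = u_lam + sigma_lam * (np.einsum('ilk,k->il', T, lam) - y_lam)
    t_back = np.einsum('ilk,il->k', T, ubar_lam)          # T^T ubar_lam
    nu = (lam.sum() - N_total - tau_lam * t_back.sum()) / (tau_lam * n_pix)
    lam = lam - tau_lam * (nu + t_back)
    # Eq. (28): attenuation update
    ubar_mu = u_mu + sigma_mu * (P @ mu - y_mu)
    mu = mu - tau_mu * (P.T @ ubar_mu)
    return lam, mu
```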
#### 3.2.2 ADMM pseudocode for SAA estimation The \(x\)-, \(y\)-, and \(u\)-update equations are assembled into a complete pseudocode given in Algorithm 1. The expensive projection and back-projection computations are collected in as few lines as possible, and their results stored, to avoid unnecessary repetition of these burdensome operations. The first derivative computation from Eq. (34) is performed at lines 9 and 16, where line 9 collects all terms that are not dependent on \(y_{\lambda}\) or \(y_{\mu}^{\prime}\). The function \(\text{pos}(\cdot)\) in line 19 returns the argument if it is non-negative, otherwise it returns zero. For the results presented in this work, we only consider zero initialization for all of the algorithm variables. The choice of step-size ratios \(\rho_{\lambda}\) and \(\rho_{\mu}\) will impact the convergence rate of the algorithm, and these parameters must be tuned for optimal performance. ### ADMM for TV-constrained SAA in TOF-PET The proposed ADMM framework for solving SAA estimation in TOF-PET allows for great flexibility in imposing convex constraints in the imaging optimization problem. Accordingly, we augment the total annihilation count and attenuation sinogram nonnegativity constraints in Eq. (20) with additional total variation constraints on the activity and attenuation maps \[\lambda,\mu=\underset{\lambda,\mu}{\arg\min}\Big{\{}l(\lambda,\mu)\ |\ \|\lambda\|_{\text{TV}}\leq\gamma_{\lambda},\ \ \|\mu\|_{\text{TV}}\leq\gamma_{\mu},\\ \mathbf{1}^{\top}\lambda=N_{\text{total}},\ \ P\mu\geq 0,\Big{\}} \tag{39}\] where \(\|\cdot\|_{\text{TV}}\) is the isotropic TV seminorm; \(\gamma_{\lambda}\) and \(\gamma_{\mu}\) are the TV constraint values for the activity and attenuation maps, respectively. The additional TV constraints exploit gradient sparsity in in both the activity and attenuation that potentially improves accurate estimation of their corresponding images. Because the novel aspect of this work is the treatment of the biconvex log-likelihood term, which is explained in detail in Sec. 2.2-F, the ADMM instance for this optimization problem is covered in the Supplemental Document. The ADMM algorithm for TV-constrained SAA estimation (ADMM-TVSAA) is also designed so that it makes use of the same step size ratio parameters as discussed for Algorithm 1. Because of the additional constraints, the TV constraint values \(\gamma_{\lambda}\) and \(\gamma_{\mu}\) become additional parameters of the algorithm. ## 3 Results with a 2D TOF-PET simulation The results demonstrating the ADMM-SAA algorithm are all derived from a 2D simulation using the digital reference object (DRO) shown in Fig. 1[25]. This digital phantom is binned down to 128x128 image array with physical dimension 30x30 cm\({}^{2}\). The LORs are arranged in a 2D parallel-beam geometry with 128 views covering a \(\pi\) radian arc, and 128 parallel rays being measured per view with a spacing of 0.234 cm (30/128 \(\approx\) 0.234). The TOF FWHM is taken to be 9 cm, which corresponds to a timing resolution of approximately 600 picoseconds. The spacing between TOF window samples is 4.5 cm, and a total of ten TOF samples are taken per LOR. For the image reconstruction, both the attenuation and activity images are represented on a 128x128 grid. The purpose of the presented results is to demonstrate usage of the ADMM-SAA algorithm and the impact of the TV constraints on the reconstruction of the activity and attenuation. 
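The TV seminorms appearing in the constraints of Eq. (39) can be evaluated, for example, with forward differences; the Neumann-type boundary handling below is a choice made for this sketch.

```python
import numpy as np

def tv_isotropic(img):
    """Isotropic TV seminorm of a 2-D image (as used in Eq. (39))."""
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    return np.sqrt(dx ** 2 + dy ** 2).sum()

def tv_anisotropic(img):
    """Anisotropic variant ||D img||_1 used in the Supplemental Document."""
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    return np.abs(dx).sum() + np.abs(dy).sum()
```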
For the following results, the biconvex alternation loop at line 10 of Algorithm 1 is run for \(N_{y}=100\) iterations, and the Newton solver at line 15 is run for \(N_{\text{new}}=10\) iterations. With both of these loop settings, Eq. (22) is solved Figure 1: (Left) Slice number 40 from the University of Washington Digital Reference Object: activity image in arbitrary units, and (Right) attenuation map displayed in the gray scale window \(\left[\mathbf{0.075,0.115}\right]\) cm\({}^{-1}\). accurately in a numerical sense. Even with the inner loops being executed with such high iteration numbers, the efficiency of the whole biconvex alternation loop is still high, because all of the computations separate across the vector components. The computational effort for the biconvex alternation loop is \(O(N_{y}\cdot N_{\text{TOF}}\cdot N_{\text{views}}\cdot\sqrt{N_{\text{pix}}})\) (the Newton loop does not increase the order of this loop because it involves the attenuation sinogram only), and by comparison, computing TOF projection, \(T\lambda\), is \(O(N_{\text{TOF}}\cdot N_{\text{views}}\cdot N_{\text{pix}})\), where \(N_{\text{pix}}\) is the total number of pixels and \(N_{\text{views}}\) is the number of projection angles. For the small 128x128 images of this study the biconvex loop is of the same order at TOF projection because \(N_{y}\approx\sqrt{N_{\text{pix}}}\), but as the data and image size increase, TOF projection becomes the more burdensome computation. It is also possible, in practice, to reduce \(N_{y}\) and \(N_{\text{newt}}\) and work with inexact solution of Eq. (22) but we do not investigate this option in this work. ### SAA from noiseless data Image reconstruction is performed on noiseless data using the mean counts as the measured data using Algorithm 1, which aims to solve the optimization problem in Eq. (20). Executing Algorithm 1 requires that the step size parameters be specified, which is done uniquely by setting values for the step size ratios \(\rho_{\lambda}\) and \(\rho_{\mu}\) from Eqs. (23) and (24), respectively. Tuning the values for \(\rho_{\lambda}\) and \(\rho_{\mu}\) is crucial for optimizing the algorithm efficiency, and this is one of the purposes of starting with noiseless data. A grid search is performed on the step size ratios, executing 100 iterations for each setting. Because noiseless consistent data are used, the discrepancy between the estimated and input data is zero at convergence. Accordingly, the data root mean square error (RMSE) as a measure of the progress toward the optimization solution, and we determine the step size ratios as the ones that minimize data RMSE after 100 iterations. Two grid searches are performed, over a coarse and fine grid, and the results of this search are shown in Fig. 2. The utility of the step size parameter tuning and comparison with MLAA on data RMSE convergence is shown in Fig. 3. If the step size parameters are chosen so that \(\sigma\) and \(\tau\) are equal, i.e. \(\rho=1\), it is clear from the plotted data RMSE that convergence can be quite sub-optimal. Tuning \(\rho_{\lambda}\) and \(\rho_{\mu}\) results in two orders of magnitude smaller data RMSE after running ADMM-SAA for 1000 iterations. For reference, the convergence of MLAA on data RMSE is also included, and it can be seen that ADMM-SAA can be tuned so that it is more efficient than MLAA for the particular conditions of the 2D TOF-PET simulation. We note that both ADMM-SAA and MLAA are optimizing the same objective function. 
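The step-size selection of Eqs. (23)-(24) and the grid search described above can be sketched as follows; the power-iteration estimate of the spectral norms and the abstract `run_recon` callable (assumed to run ADMM-SAA for a given ratio pair and return the data RMSE) are placeholders introduced only for this illustration.

```python
import numpy as np

def spectral_norm(M, n_iter=50, seed=0):
    """Estimate ||M||_2 (largest singular value) by power iteration on M^T M."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(M.shape[1])
    for _ in range(n_iter):
        v = M.T @ (M @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(M @ v)

def step_sizes(norm_T, norm_P, rho_lam, rho_mu):
    """Eqs. (23)-(24): sigma * tau = 1 / ||.||_2^2 with the ratio rho left free."""
    return (rho_lam / norm_T, 1.0 / (rho_lam * norm_T),
            rho_mu / norm_P, 1.0 / (rho_mu * norm_P))

def tune_step_ratios(run_recon, rho_lam_grid, rho_mu_grid, n_iter=100):
    """Grid search: keep the (rho_lam, rho_mu) minimizing data RMSE after n_iter iterations."""
    best = (None, None, np.inf)
    for rho_lam in rho_lam_grid:
        for rho_mu in rho_mu_grid:
            rmse = run_recon(rho_lam, rho_mu, n_iter)
            if rmse < best[2]:
                best = (rho_lam, rho_mu, rmse)
    return best
```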
There is some variability in the presented technique for tuning \(\rho_{\lambda}\) and \(\rho_{\mu}\). Small variation in the results are expected depending on the scanned object, the number of ADMM-SAA iterations, the metric used to quantify convergence, and discretization in the grid search. This variability, however, does not have a large impact on the algorithm efficiency as can be seen in the broad minimum of the fine grid search in Fig. 3. In the following studies, using various forms of ADMM-TVSAA, the same step size ratio parameter settings are used. For the next set of results with noiseless, consistent TOF-PET data, the recovery of the activity and attenuation images is investigated. Specifically, ADMM-TVSAA is employed to study the effectiveness for solving the associated inverse problem by imposing TV constraints on the activity alone, the attenuation alone, and both activity and attenuation. For this study, when TV constraints are imposed, the ground truth values are used as the corresponding constraint values. This tests the ability of ADMM-TVSAA to recover the ground truth images under ideal conditions. Turning off a TV constraint is achieved by setting the corresponding constraint value \(\gamma\) to a value much larger than the ground truth. The results for the use of ADMM-TVSAA with different Figure 3: Comparison of data RMSE convergence for MLAA, ADMM-SAA with non-optimal settings \(\rho_{\lambda}=\rho_{\mu}=1\), and ADMM-SAA with the tuned values \(\rho_{\lambda}=0.01\) and \(\rho_{\mu}=100\). TV constraints active are shown in Figs. 4, 5, 6, and 7, which show RMSE measures for the TOF-PET data, activity, attenuation factor \(\exp[-P\mu]\), and attenuation map, respectively. The reason why the attenuation factor RMSE is plotted is that it is the activity image that is the desired quantity that needs to be recovered be recovered. The attenuation map only needs to be recovered to the extent that the attenuation factor in the measurement model in Eq. (1) is accurate. The RMSE curves in Figs. 4-6 all show a downward trend over the 1000 iterations of ADMM-TVSAA. The result corresponding to the use of both activity and attenuation TV constraints converges more quickly than the other conditions; nevertheless all of the cases converge the activity image RMSE to better than \(10^{-2}\). At this RMSE value the reconstructed activity is not visually distinguishable from the true activity. The results for the attenuation RMSE convergence tell a different story. For this metric, use of both constraints yields the fastest convergence, and use of a TV constraint on the attenuation map results in faster convergence than the remaining two cases. The attenuation maps after 1000 iterations of ADMM-TVSAA are shown in Fig. 8, where it is seen that the use of both TV constraints yields an accurate estimate of the ground truth attenuation map. From these results, however, we cannot rule out that the other constraint combinations will yield an accurate attenuation map estimate; all of the RMSE curves do show a downward trend and it may be that after sufficient iteration numbers are reached that the attenuation map is indeed recovered. ### SAA from noisy data The next set of studies focus on SAA with noisy data. Noise realizations are obtained by scaling the mean TOF-PET data so that the total number of measured coincidences Figure 4: Comparison of TV-constraints on normalized data RMSE convergence. Figure 5: Comparison of TV-constraints on the normalized activity RMSE convergence. 
Figure 8: Comparison of TV-constraints on reconstructed attenuation maps: no TV constraints (Upper Left), attenuation TV constraint only (Upper Right), activity TV constraint only (Lower Left), and attenuation and activity TV constraints (Lower Right). Gray scale window is [0.075, 0.115] cm\({}^{-1}\). Figure 6: Comparison of TV-constraints on normalized attenuation factor RMSE convergence. Figure 7: Comparison of TV-constraints on normalized attenuation RMSE convergence. is \(10^{6}\); the realization is then obtained by selecting a number of detected coincidences for each LOR drawn from a Poisson distribution. To demonstrate the use of ADMM-TVSAA, the MLAA algorithm is used as a reference and the activity image estimates are displayed as a function of iteration number. The results for SAA from a single noise realization are shown in Fig. 9, where the TV constraint values are given in terms of the ground truth values and activity estimates are shown for iteration numbers between 10 and 100. When ground truth TV constraint values are not available, a validation technique can be exploited to discover the subject's TV values as discussed in Ref. [26]. Visually, the images in Fig. 9 show low bias recovery of the activity distribution by iteration number 100 with a much smaller noise amplitude for ADMM-TVSAA as compared to the MLAA result. For a quantitative bias-variance analysis, MLAA and ADMM-TVSAA are used to perform SAA on an ensemble of 100 noise realizations of TOF-PET data. The mean and pixel standard deviation for both MLAA and ADMM-TVSAA are computed and plotted in Fig. 10 as a function of iteration number. The use of the TV-constraints, allows ADMM-TVSAA to achieve activity estimates with low bias and variance as compared with basic maximum-likelihood estimation as implemented with MLAA. Again, the MLAA result is used only as a reference since it is the algorithm standard for SAA in TOF-PET. For the final study, bias-variance analysis is performed to gain an understanding of the impact of the activity and Figure 11: Normalized standard deviation (std.) versus normalized bias as a function of iteration number computed empirically from 100 noise realizations for SAA with different TV constraint combinations. The dots indicate the iteration numbers 10, 20, 50, and 100 for the respective algorithm curves. In each case, the constraint value is set to the ground truth TV value. Figure 10: Normalized standard deviation (std.) versus normalized bias as a function of iteration number computed empirically from 100 noise realizations for MLAA and TV-constrained SAA. The labeled dots indicate the iteration numbers for the respective algorithm curves. For TV-constrained ADMM-SAA, curves are shown for activity and attenuation TV constraints set to \(\gamma_{\lambda}=\mathbf{1.0}\) and \(\gamma_{\mu}=\mathbf{1.0}\), respectively. The constraint values are given as a fraction of the ground truth TV values. Figure 9: Reconstructed images for (left column) MLAA and (right column) TV-constrained ADMM-SAA (\(\gamma_{\lambda}=\mathbf{1.0}\), \(\gamma_{\mu}=\mathbf{1.0}\)) from noisy data. The rows correspond to the images at 10, 20, 50, and 100 iterations going from top to bottom. The TOF-PET simulation assumes Poisson distributed noise with a total of \(10^{6}\) measured coincidence counts. attenuation TV constraints. The image bias and pixel standard deviation are plotted in Fig. 11 as a function of iteration number for various combinations of imposing TV constraints. 
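The noise realizations and the empirical bias/variance summaries described here might be generated as in the sketch below; the exact normalization used in Figs. 10 and 11 is not restated in the text, so the definitions in `bias_std` are assumptions of the sketch.

```python
import numpy as np

def poisson_realizations(c_mean, n_counts=1_000_000, n_real=100, seed=0):
    """Scale the mean TOF data to the target total count and draw Poisson counts."""
    rng = np.random.default_rng(seed)
    c_scaled = c_mean * (n_counts / c_mean.sum())
    return rng.poisson(c_scaled, size=(n_real,) + c_mean.shape)

def bias_std(recons, truth):
    """One possible normalized bias / pixel standard deviation over an ensemble
    of reconstructions (recons has shape (n_real, n_pix))."""
    mean_img = recons.mean(axis=0)
    bias = np.linalg.norm(mean_img - truth) / np.linalg.norm(truth)
    std = recons.std(axis=0).mean() / truth.mean()
    return bias, std
```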
The curve for the use of a TV constraint on the activity alone shows the effectiveness of this constraint in controlling the image variance. The use of the TV constraint on the attenuation map alone results in increasing image variance as a function of iteration number, but there is a reduction in the image bias as compared with use of an activity TV constraint alone. Imposing both activity and attenuation constraints yields benefit in reducing image variance and bias as compared with the other two cases. ## 4 Discussion and Conclusion In this work, an ADMM framework is developed that can be applied to nonsmooth and nonconvex optimization problems that arise in imaging. The particular form of nonconvexity addressed is when the optimization problem has a biconvex structure. The imaging problem posed by simultaneous estimation of the activity and attenuation (SAA) in time-of-flight positron emission tomography (TOF-PET) has such a structure. Using this ADMM framework, a limited study on the impact of total variation (TV) constraints on the activity and attenuation for the SAA problem is presented. The use of both of these constraint is seen to help stabilize the SAA inverse problem as demonstrated by the noiseless results. When using noisy data, the TV constraints help to reduce image bias and variance as compared with use of maximum likelihood estimation alone. The shown results are intended to show the potential in solving the SAA inverse problem with the use of TV constraints on the activity and attenuation and to demonstrate the ADMM-TVSAA algorithm. While we have shown results only for ADMM-TVSAA, the ADMM framework is easily extended to include other nonsmooth, convex terms. Further study varying the test phantom and TOF-PET setup are needed to obtain a more comprehensive picture of the SAA inverse problem. ## Data Availability The implementation of the algorithms, which are presented in this article, and the code, which generates the figures, are available at: [https://github.com/zhimeir/saa_admm_paper](https://github.com/zhimeir/saa_admm_paper). ## References * [1] P. E. Kinnah, D. W. Townsend, T. Beyer, and D. Sashin, "Attenuation correction for a combined 3D PET/CT scanner," _Med. Phys._, vol. 25, pp. 2046-2053, 1998. * [2] T. Xia, A. M. Alessio, B. De Man, R. Manjeshwar, E. Asma, and P. E. Kinnah, "Ultra-two dose CT attenuation correction for PET/CT," _Phys. Med. Biol._, vol. 57, pp. 309-328, 2011. * [3] N. Burgos, M. J. Cardoso, K. Thielemans, M. Modat, S. Pedemonte, J. Dickson, A. Barnes, R. Ahmed, C. J.Mahoney, J. M. Schott, J. S. Duncan, D. Atkinson, S. R. Arridge, B. F. Hutton, and S. Ourselin, "Auttenuation correction synthesis for hybrid PET-MT scanners: application to brain studies," _IEEE Trans. Med. Imaging_, vol. 33, pp. 2332-2341, 2014. * [4] M. M. Osman, C. Cohade, Y. Nakamoto, and R. L. Wahl, "Respiratory motion artifacts on PET emission images obtained using CT attenuation correction on PET-CT," _Euro. J. Nuc. Med. Mol. Imaging_, vol. 30, pp. 603-606, 2003. * [5] F. Natterer and H. Herzog, "Attenuation correction in positron emission tomography," _Math. Method. Appl. Sci._, vol. 15, pp. 321-330, 1992. * [6] J. Nuyts, P. Dupont, S. Stroobants, R. Benninck, L. Mortelmans, and P. Suetens, "Simultaneous maximum a posteriori reconstruction of attenuation and activity distributions from emission sinograms," _IEEE Trans. Med. Imaging_, vol. 18, pp. 393-403, 1999. * [7] T. K. Lewellen, "Time-of-flight PET," in _Semin. Nucl. Med._, 1998, vol. 28, pp. 268-275. * [8] M. 
Defrise, A. Rezaei, and J. Nuyts, "Time-of-flight PET data determine the attenuation sinogram up to a constant," _Phys. Med. Biol._, vol. 57, pp. 885-899, 2012. * [9] M. Defrise, V. Panin, C. Michel, and M. E. Casey, "Continuous and discrete data rebinning in time-of-flight PET," _IEEE Trans. Med. Imaging_, vol. 27, pp. 1310-1322, 2008. * [10] V. Y. Panin, M. Defrise, and M. E. Casey, "Restoration of fine azimuthal sampling of measured TOF projection data," in _2010 IEEE Nucl. Sci. Symp. Med. Imaging Conf._, 2011, pp. 3079-3084. * [11] A. Rezaei, M. Defrise, G. Bal, C. Michel, M. Conti, C. Watson, and J. Nuyts, "Simultaneous reconstruction of activity and attenuation in time-of-flight PET," _IEEE Trans Med. Imaging_, vol. 31, pp. 2224-2233, 2012. * [12] L. Cheng, T. Ma, X. Zhang, Q. Peng, Y. Liu, and J. Qi, "Maximum likelihood activity and attenuation estimation using both emission and transmission data with application to utilization of L=176 background radiation in TOF PET," _Med. Phys._, vol. 47, pp. 1067-1082, 2020. * [13] P. A. Wolf, J. S. Jorgensen, T. G. Schmidt, and E. Y. Sidky, "Few-view single photon emission computed tomography (SPECT) reconstruction based on a blurred piecewise constant object model," _Phys. Med. Biol._, vol. 58, pp. 5629-5652, 2013. * [14] Z. Zhang, J. Ye, B. Chen, A. E. Perkins, S. Rose, E. Y. Sidky, C.-M. Kao, D. Xia, C.-H. Tung, and X. Pan, "Investigation of optimization-based reconstruction with an image-total-variation constraint in PET," _Phys. Med. Biol._, vol. 61, pp. 6055-6084, 2016. * [15] Z. Zhang, S. Rose, J. Ye, A. E. Perkins, B. Chen, C.-M. Kao, E. Y. Sidky, C.-H. Tung, and X. Pan, "Optimization-based image reconstruction from low-count, list-mode TOF-PET data," _IEEE Trans. Biomed. Eng._, vol. 65, pp. 936-946, 2018. * [16] A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," _J. Math. Imaging Vis._, vol. 40, pp. 120-145, 2011. * [17] E. Y. Sidky, J. H. Jorgensen, and X. Pan, "Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm," _Phys. Med. Biol._, vol. 57, pp. 3065-3091, 2012. * [18] R. F. Barber and E. Y. Sidky, "Convergence for nonconvex ADMM, with applications to CT imaging," _arXiv preprint arXiv:2006.07278_, 2020. * [19] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," _Found. Trends. Mach. Learn._, vol. 3, pp. 1-122, 2011. * [20] T. G. Schmidt, B. A. Sammut, R. F. Barber, X. Pan, and E. Y. Sidky, "Addressing CT metal artifacts using photon-counting detectors and one-step spectral CT image reconstruction," _Med. Phys._, vol. 49, pp. 3021-3040, 2022. * [21] K. Lange, "Convergence of EM image reconstruction algorithms with Gibbs smoothing," _IEEE Trans. Med. Imaging_, vol. 9, pp. 439-446, 1990. * [22] T. Heufeler, C. M. Rank, Y. Berker, M. T. Freitag, and M. Kachelriess, "MLAA-based attenuation correction of flexible hardware components in hybrid PET/MR imaging," _EJINMIMI physics_, vol. 4, pp. 1-23, 2017. * [23] H. Nien and J. A. Fessler, "Fast X-ray CT image reconstruction using a linearized augmented Lagrangian method with ordered subsets," _IEEE Trans. Med. Imaging_, vol. 34, pp. 388-399, 2014. * [24] R. F. Barber, E. Y. Sidky, T. G. Schmidt, and X. Pan, "An algorithm for constrained one-step inversion of spectral CT data," _Phys. Med. Biol._, vol. 61, pp. 3784-3818, 2016. * [25] L. A. Pierce, B. 
F. Elston, D. A. Clunie, D. Nelson, and P. E. Kinnah, "A digital reference object to analyze calculation accuracy of PET standardized uptake value," _Radiology_, vol. 277, pp. 538-545, 2015. * [26] E. Y. Sidky, J.-P. Phillips, W. Zhou, G. Ongie, J. P. Cruz-Bustida, I. S. Reiser, M. A. Anastasio, and X. Pan, "A signal detection model for quantifying overregularization in nonlinear image reconstruction," _Med. Phys._, vol. 48, pp. 6312-6323, 2021. Simultaneous activity and attenuation estimation in TOF-PET with TV-constrained nonconvex optimization: Supplemental material ### ADMM for TVSAA in TOF-PET In this Supplement, the details for the instantiation of ADMM for TV-constrained SAA estimation (ADMM-TVSAA) are explained. The optimization problem of interest is Eq. (39) from the main text, which we restate here \[\lambda,\mu=\operatorname*{arg\,min}_{\lambda,\mu}\Bigl{\{}l( \lambda,\mu)\mid\ \|D\lambda\|_{1}\leq\gamma_{\lambda},\ \|D\mu\|_{1}\leq\gamma_{\mu},\] \[\mathbf{1}^{\top}\lambda=N_{\text{total}},\ \ P\mu\geq 0\Bigr{\}},\] (S1) where \(D\) is the discretization of the spatial gradient operator, and \(\|Dx\|_{1}\) is the anisotropic TV of \(x\). To map the optimization problem onto the generic ADMM optimization in Eq. (14), the primal, splitting, and dual variables \(x\), \(y\), and \(u\), are respectively assigned as \[x=\left(\begin{array}{c}\lambda\\ \mu\end{array}\right),\ \ y=\left(\begin{array}{c}y_{\lambda}\\ z_{\lambda}\\ y_{\mu}\\ z_{\mu}\end{array}\right),\ \ u=\left(\begin{array}{c}u_{\lambda}\\ v_{\lambda}\\ u_{\mu}\\ v_{\mu}\end{array}\right),\] The linear system \(A\) is assigned as \[A=\left(\begin{array}{cc}T&0\\ \nu_{\lambda}D&0\\ 0&P\\ 0&\nu_{\mu}D\end{array}\right),\] where \[\nu_{\lambda}=\|T\|_{2}/\|D\|_{2},\ \ \nu_{\mu}=\|P\|_{2}/\|D\|_{2},\] (S2) are constants that normalize the gradient matrices to the projection matrices. As with the SAA problem, the convex function \(f\) is used to represent the constraint on the total number of annihilations by setting \[f(\lambda,\mu)=\delta(\mathbf{1}^{\top}\lambda=N_{\text{total}}).\] The biconvex function \(g\) accounts for the remaining terms in Eq. (S1) \[g(y_{\lambda},y_{\mu})=L(y_{\lambda},y_{\mu})+\delta(y_{\mu}\geq 0)+ \\ \delta(\|z_{\lambda}\|_{1}\leq\nu_{\lambda}\gamma_{\lambda})+\delta(\|z_{ \mu}\|_{1}\leq\nu_{\mu}\gamma_{\mu}),\] (S3) where the biconvex function \(L(y_{\lambda},y_{\mu})\) is defined in Eq. (2) and the TV constraint values have also been scaled to reflect the normalization of \(D\). #### Parametrization of the step-sizes With the modified system matrix \(A\), the metric from Eq. (13) becomes \[H_{f}= \left(\begin{array}{cc}H_{\lambda}&0\\ 0&H_{\mu}\end{array}\right),\] \[H_{\lambda}= \frac{I}{\tau_{\lambda}}-\sigma_{\lambda}(T^{\top}T+\nu_{\lambda }^{2}D^{\top}D),\] \[H_{\mu}= \frac{I}{\tau_{\mu}}-\sigma_{\mu}(P^{\top}P+\nu_{\mu}^{2}D^{\top }D),\] and the step-size parameters are chosen so that \[\sigma_{\lambda}\tau_{\lambda}=(\|T\|_{2}^{2}+\nu_{\lambda}^{2}\|D\|_{2}^{2}) ^{-1},\ \ \sigma_{\mu}\tau_{\mu}=(\|P\|_{2}^{2}+\nu_{\mu}^{2}\|D\|_{2}^{2})^{-1}.\] We have found empirically that the step size ratios, \(\rho_{\lambda}=0.01\) and \(\rho_{\mu}=100\), determined for ADMM-SAA in Sec. IIIA provide sufficient efficiency for ADMM-TVSAA when the parameters \(\nu_{\lambda}\) and \(\nu_{\mu}\) are determined by Eq. (S2). 
Accordingly, the step size parameters for ADMM-TVSAA are \[\sigma_{\lambda}=\rho_{\lambda}(\|T\|_{2}^{2}+\nu_{\lambda}^{2}\|D \|_{2}^{2})^{-1},\] \[\tau_{\lambda}=\rho_{\lambda}^{-1}(\|T\|_{2}^{2}+\nu_{\lambda}^{2} \|D\|_{2}^{2})^{-1},\] \[\sigma_{\mu}=\rho_{\mu}(\|P\|_{2}^{2}+\nu_{\mu}^{2}\|D\|_{2}^{2}) ^{-1},\] \[\tau_{\mu}=\rho_{\mu}^{-1}(\|P\|_{2}^{2}+\nu_{\mu}^{2}\|D\|_{2}^{2 })^{-1}.\] #### The \(\times\)-update For the TVSAA problem in TOF-PET the instantiation of the \(x\)-update in Eq. (15) is a straightforward generalization of the presentation in Sec. IIF, and we only state the final update equations. The \(\mu\)-update of ADMM-SAA in Eq. (28) becomes \[\mu \leftarrow\mu-\tau_{\mu}(P^{\top}\bar{u}_{\mu}+\nu_{\mu}D^{\top} \bar{v}_{\mu}),\] (S4) \[\bar{u}_{\mu} =u_{\mu}+\sigma_{\mu}(P\mu-y_{\mu}),\] \[\bar{v}_{\mu} =v_{\mu}+\sigma_{\mu}(\nu_{\mu}D\mu-z_{\mu}),\] for ADMM-TVSAA. Likewise, the \(\lambda\)-update of ADMM-SAA in Eq. (30) becomes \[\lambda\leftarrow\lambda-\tau_{\lambda}(\nu\mathbf{1}+T^{\top} \bar{u}_{\lambda}+\nu_{\lambda}D^{\top}\bar{v}_{\lambda}),\] (S5) \[\nu=\frac{1}{\tau_{\lambda}N_{\text{pix}}}\left(\mathbf{1}^{\top }\lambda-N_{\text{total}}-\tau_{\lambda}\mathbf{1}^{\top}(T^{\top}\bar{u}_{ \lambda}+\nu_{\lambda}D^{\top}\bar{v}_{\lambda})\right).\] #### The \(\boldsymbol{y}\)-update The \(g\) function in Eq. (S3) separates into a biconvex function in \(y_{\lambda}\) and \(y_{\mu}\), and convex functions in \(z_{\lambda}\) and \(z_{\mu}\). The biconvex terms are treated in exactly the same way as the ADMM-SAA presentation in Sec. IIF. Focusing on the last two convex terms in Eq. (S3) yields the optimization problems \[z_{\lambda}=\] (S6) \[\operatorname*{arg\,min}_{z^{\prime}_{\lambda}}\Bigl{\{}\delta(\|z ^{\prime}_{\lambda}\|_{1}\leq\nu_{\lambda}\gamma_{\lambda})-v_{\lambda}^{\top}z^{ \prime}_{\lambda}+\frac{\sigma_{\lambda}}{2}\|z^{\prime}_{\lambda}-\nu_{\lambda}D \lambda\|^{2}\Bigr{\}},\] and \[z_{\mu}=\] (S7) \[\operatorname*{arg\,min}_{z^{\prime}_{\mu}}\Bigl{\{}\delta(\|z^{ \prime}_{\mu}\|_{1}\leq\nu_{\mu}\gamma_{\mu})-v_{\mu}^{\top}z^{\prime}_{\mu}+ \frac{\sigma_{\mu}}{2}\|z^{\prime}_{\mu}-\nu_{\mu}D\mu\|^{2}\Bigr{\}}.\] Because both of these problems are identical, we focus on Eq. (S6). The objective function consists of a quadratic function and an indicator function that enforces the \(\ell_{1}\) constraint on \(z^{\prime}_{\lambda}\). Furthermore, the quadratic function has uniform curvature, i.e. a Hessian matrix that is proportional to the identity matrix. For this special case, the solution to Eq. (S6) is obtained in a two-step process that involves finding the minimizer of the unconstrained quadratic function, then projecting the result to the closest \(z^{\prime}_{\lambda}\) that satisfies the \(\ell_{1}\) constraint. The unconstrained minimizer is given by \[z^{\prime\prime}_{\lambda}=v_{\lambda}/\sigma_{\lambda}+\nu_{\lambda}D\lambda,\] and the solution to Eq. (S6) becomes \[z^{\prime}_{\lambda}=\mathcal{P}_{L1(\nu_{\lambda}\gamma_{\lambda})}(z^{\prime \prime}_{\lambda}),\ \ L1(r)=\{z\mid\|z\|_{1}\leq r\},\] where \(\mathcal{P}_{L1(\nu_{\lambda}\gamma_{\lambda})}(\cdot)\) denotes projection onto the \(\ell_{1}\)-ball of "radius" \(\nu_{\lambda}\gamma_{\lambda}\). An efficient algorithm for performing this projection is presented in Ref. [1], or it can also be accomplished by vector shrinkage and use of a root finding algorithm to determine the shrinkage parameter to attain an \(\ell_{1}\)-norm of \(\nu_{\lambda}\gamma_{\lambda}\). 
The update equations for \(z_{\lambda}\) and \(z_{\mu}\) are \[z_{\lambda} \leftarrow\mathcal{P}_{L1(\nu_{\lambda}\gamma_{\lambda})}(v_{ \lambda}/\sigma_{\lambda}+\nu_{\lambda}D\lambda),\] \[z_{\mu} \leftarrow\mathcal{P}_{L1(\nu_{\mu}\gamma_{\mu})}(v_{\mu}/\sigma_ {\mu}+\nu_{\mu}D\mu).\] The \(u\)-updateFor ADMM-TVSAA, Eq. (17) becomes \[u_{\lambda} \gets u_{\lambda}+\sigma_{\lambda}(T\lambda-y_{\lambda}),\] \[u_{\mu} \gets u_{\mu}+\sigma_{\mu}(P\mu-y_{\mu}),\] \[v_{\lambda} \gets v_{\lambda}+\sigma_{\lambda}(\nu_{\lambda}D\lambda-z_{ \lambda}),\] \[v_{\mu} \gets v_{\mu}+\sigma_{\mu}(\nu_{\mu}D\mu-z_{\mu}).\] ``` 1:for\(k\gets 1\), \(N_{\text{iter}}\)do 2:\(\bar{\lambda}_{1}=T^{\top}(u_{\lambda}+\sigma_{\lambda}(\bar{y}_{\lambda}-y_ {\lambda}))\) 3:\(\bar{\lambda}_{2}=\nu_{\lambda}D^{\top}(v_{\lambda}+\sigma_{\lambda}(\bar{z}_{ \lambda}-z_{\lambda}))\) 4:\(\nu=\frac{1}{\tau_{\lambda}N_{\text{iter}}}\left(\mathbf{1}^{\top}\bar{\lambda} -N_{\text{total}}-\tau_{\lambda}\mathbf{1}^{\top}(\bar{\lambda}_{1}+\bar{ \lambda}_{2})\right)\) 5:\(\lambda\leftarrow\lambda-\tau_{\lambda}(\nu\mathbf{1}+\bar{\lambda}_{1}+\bar{ \lambda}_{2})\) 6:\(\bar{y}_{\lambda}=T\lambda\) 7:\(\bar{z}_{\lambda}=\nu_{\lambda}D\lambda\) 8:\(\bar{\mu}_{1}=P^{\top}(u_{\mu}+\sigma_{\mu}(\bar{y}_{\mu}-y_{\mu}))\) 9:\(\bar{\mu}_{2}=\nu_{\mu}D^{\top}(v_{\mu}+\sigma_{\mu}(\bar{v}_{\mu}-z_{\mu}))\) 10:\(\mu\leftarrow\mu-\tau_{\mu}(\bar{\mu}_{1}+\bar{\mu}_{2})\) 11:\(\bar{y}_{\mu}=P\mu\) 12:\(\bar{z}_{\mu}=\nu_{\mu}D\mu\) 13:\(v_{\mu,\,\ell}=\sum_{i}C_{i\ell}-(u_{\mu,\,\ell}-\sigma_{\mu}\bar{y}_{\mu,\, \ell})\;\;\forall\ell\) 14:for\(k^{\prime}\gets 1\), \(N_{\text{iter}}\)do 15:\(b_{i\ell}=u_{\lambda,\,i\ell}+\sigma_{\lambda}\bar{y}_{\lambda,\,i\ell}- \exp(-y_{\mu,\,\ell})\;\;\forall i,\ell\) 16:\(y_{\lambda,\,i\ell}=\left(b_{i\ell}+\sqrt{b_{i\ell}^{2}+4\sigma_{\lambda}C_{ i\ell}}\right)/(2\sigma_{\lambda})\;\;\forall i,\ell\) 17:\(y_{\lambda,\,\ell}=\sum_{i}y_{\lambda,\,i\ell}\;\;\forall\ell\) 18:\(y_{\mu,\,\ell}^{\prime}=0\;\;\forall\ell\) 19:for\(k^{\prime\prime}\gets 1\), \(N_{\text{iter}}\)do 20:\(\psi_{\ell}^{(1)}=-\exp(-y_{\mu,\,\ell}^{\prime})\cdot y_{\lambda,\,\ell}+ \sigma_{\mu}y_{\mu,\,\ell}^{\prime}+v_{\mu,\,\ell}\) 21:\(\psi_{\ell}^{(2)}=-\exp(-y_{\mu,\,\ell}^{\prime})\cdot y_{\lambda,\,\ell}+ \sigma_{\mu}\;\;\forall\ell\) 22:\(y_{\mu,\,\ell}^{\prime}\gets y_{\mu,\,\ell}^{\prime}-\psi_{\ell}^{(1)} \cdot\left(\psi_{\ell}^{(2)}\right)^{-1}\;\;\forall\ell\) 23:\(y_{\mu,\,\ell}^{\prime}\leftarrow\cos(y_{\mu,\,\ell}^{\prime})\;\;\forall\ell\) 24:endfor 25:\(y_{\mu,\,\ell}=y_{\mu,\,\ell}^{\prime}\;\;\forall\ell\) 26:endfor 27:\(z_{\lambda}\leftarrow\mathcal{P}_{L1(\nu_{\lambda}\gamma_{\lambda})}(v_{\lambda}/ \sigma_{\lambda}+\bar{z}_{\lambda})\) 28:\(z_{\mu}\leftarrow\mathcal{P}_{L1(\nu_{\lambda}\gamma_{\lambda})}(v_{\mu}/ \sigma_{\mu}+\bar{z}_{\mu})\) 29:\(u_{\lambda}\gets u_{\lambda}+\sigma_{\lambda}(\bar{y}_{\lambda}-y_{\lambda})\) 30:\(u_{\mu}\gets u_{\mu}+\sigma_{\mu}(\bar{y}_{\mu}-y_{\mu})\) 31:\(v_{\lambda}\gets v_{\lambda}+\sigma_{\lambda}(\bar{z}_{\lambda}-z_{\lambda})\) 32:\(v_{\mu}\gets v_{\mu}+\sigma_{\mu}(\bar{z}_{\mu}-z_{\mu})\) 33:endfor ``` **Algorithm 2** ADMM pseudocode for TV-constrained SAA estimation. Variables \(\lambda\), \(\mu\), \(y_{\lambda}\), \(y_{\mu}\), \(z_{\lambda}\), \(z_{\mu}\),\(\bar{y}_{\lambda}\), \(\bar{y}_{\mu}\), \(\bar{z}_{\lambda}\), \(\bar{z}_{\mu}\), \(u_{\lambda}\), \(u_{\mu}\), \(v_{\lambda}\), and \(v_{\mu}\) are initialized to zero. 
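The \(z\)-updates above require a Euclidean projection onto an \(\ell_{1}\) ball; one standard sort-based implementation is sketched below (the specific algorithm of Ref. [1] of this supplement is not reproduced here, and the function name is introduced only for the sketch).

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto {z : ||z||_1 <= radius}."""
    v = np.asarray(v, dtype=float)
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                 # sorted magnitudes, descending
    cssv = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, v.size + 1) > (cssv - radius))[0][-1]
    theta = (cssv[k] - radius) / (k + 1.0)       # exact soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# z_lam <- project_l1_ball(v_lam / sigma_lam + nu_lam * (D @ lam), nu_lam * gamma_lam)
# z_mu  <- project_l1_ball(v_mu  / sigma_mu  + nu_mu  * (D @ mu),  nu_mu  * gamma_mu)
```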
#### ADMM pseudocode for TV-constrained SAA estimation The derivations in this appendix yield the additional steps that are needed to formulate the pseudocode shown in Algorithm S1 from the ADMM-SAA pseudocode in Algorithm 1. The step size ratio parameters \(\rho_{\lambda}\) and \(\rho_{\mu}\) are determined by performing a small grid search centered on the values obtained for the grid search described in Sec. IIIA. The values used in generating the results for ADMM-TVSAA are as follows: when both \(\lambda\)- and \(\mu\)-TV constraints are active, \(\sigma_{\mu}=50\) and \(\sigma_{\lambda}=0.01\); when only the \(\mu\)-TV constraint is active, \(\sigma_{\mu}=100\) and \(\sigma_{\lambda}=0.01\); and when only the \(\lambda\)-TV constraint is active, \(\sigma_{\mu}=200\) and \(\sigma_{\lambda}=0.01\).
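For completeness, a forward-difference discretization of the gradient operator \(D\) and the normalization constants of Eq. (S2) can be sketched as below; the boundary handling and the use of the standard bound \(\|D\|_{2}^{2}\leq 8\) for this discretization (in place of a power-iteration estimate) are choices made only for the sketch.

```python
import numpy as np

def grad2d(img):
    """Forward-difference gradient D of a 2-D image, stacked as (2, n1, n2)."""
    dx = np.diff(img, axis=0, append=img[-1:, :])
    dy = np.diff(img, axis=1, append=img[:, -1:])
    return np.stack([dx, dy])

NORM_D = np.sqrt(8.0)   # standard upper bound on ||D||_2 for this discretization

def normalizations(norm_T, norm_P, norm_D=NORM_D):
    """Eq. (S2): nu_lam = ||T||_2 / ||D||_2 and nu_mu = ||P||_2 / ||D||_2."""
    return norm_T / norm_D, norm_P / norm_D
```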
2309.02372
Ascent and descent of Gorenstein homological properties
Let $\varphi\colon R\rightarrow A$ be a ring homomorphism, where $R$ is a commutative noetherian ring and $A$ is a finite $R$-algebra. We provide criteria for detecting the ascent and descent of Gorenstein homological properties. We observe that the ascent and descent of Gorenstein homological property can detect the Gorenstein properties of rings along $\varphi$. Finally, we describe when $\varphi$ induces a triangle equivalence between the stable categories of finitely generated Gorenstein projective modules over $R$ and $A$.
Jian Liu, Wei Ren
2023-09-05T16:35:02Z
http://arxiv.org/abs/2309.02372v3
# Ascent and descent of Gorenstein homological properties ###### Abstract. Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(R\) is a commutative noetherian ring and \(A\) is a finite \(R\)-algebra. We give criteria for detecting the ascent and descent of Gorenstein homological properties. As an application, we get a result that supports a question of Avramov and Foxby. We observe that the ascent and descent of Gorenstein homological property can detect the Gorenstein properties of rings along \(\varphi\). Finally, we describe when \(\varphi\) induces a triangle equivalence between the stable categories of finitely generated Gorenstein projective modules. Key words and phrases:ascent and descent, Gorenstein projective module, Gorenstein dimension, semi-dualizing complex, perfect complex. 2020 Mathematics Subject Classification: 18G20 (primary); 16E65, 18G10, 18G80 (secondary) ## 1. Introduction The study of the Gorenstein homological algebra can be traced back to the 1960s. Auslander and Bridger [1] introduced the notion of the Gorenstein projective modules under the name "totally reflexive modules", and they generalized the well-known Auslander-Buchsbaum formula to modules of finite Gorenstein dimensions. For an Iwanaga-Gorenstein ring, Buchweitz [10] established a triangle equivalence between the stable category of finitely generated Gorenstein projective modules and the singularity category; this celebrated theorem highlighted the theory of the Gorenstein homological algebra. As stated in [5], a systematic emphasis on the study of morphisms was an innovative aspect of Grothendieck's approach to algebraic geometry and commutative algebra. For a surjective ring homomorphism \(\varphi\colon R\to A\) of commutative noetherian local rings, if the kernel of \(\varphi\) is generated by a regular sequence, then it is well-known that \(A\) is Cohen-Macaulay (resp. Gorenstein, complete intersection) if and only if \(R\) is. Moreover, such ascent and descent of ring properties along a ring homomorphism can be determined by certain homological properties; see for example [4, 6, 19]. Let \(\varphi\colon R\to A\) be a ring homomorphism. We say \(\varphi\) has _ascent and descent of Gorenstein projective property_ if each finitely generated left or right \(A\)-module is Gorenstein projective if and only if the underlying \(R\)-module is Gorenstein projective. Buchweitz [10, 8.2] observed that for the integral group ring extension \(\mathbb{Z}\to\mathbb{Z}G\) of a finite group \(G\), a finitely generated \(\mathbb{Z}G\)-module, or equivalently an integral representation of \(G\), is Gorenstein projective if and only if the underlying \(\mathbb{Z}\)-module is Gorenstein projective. That is, the above ring extension satisfies the ascent and descent of Gorenstein projective property. Notice that the above \(\mathbb{Z}\to\mathbb{Z}G\) is a classical example of Forbenius extension [27]. Inspired by this fact, Chen [11] introduced the totally reflexive extension of rings and proved that such extension has ascent and descent of Gorenstein projective property. This motivates the subsequent works of Ren [31] and Zhao [33] on the ascent and descent of Gorenstein projective property for (not necessarily finitely generated) modules along Frobenius extension of rings. However, there are ring homomorphisms that satisfy the ascent and descent of Gorenstein projective property, but may not be a Frobenius extension of rings (see Examples 4.8 and 4.9). 
Inspired by the aforementioned facts, it is natural to ask how a ring homomorphism might behave if it has ascent and descent of Gorenstein projective properties. The second motivation for this work is a question raised by Avramov and Foxby in [4, Section 4]. Let \(\varphi\colon R\to A\) be a finite ring homomorphism of commutative noetherian local rings. They asked for a finitely generated \(A\)-module \(M\), if \(A\) has finite Gorenstein dimension over \(R\) and \(M\) has finite Gorenstein dimension over \(A\), then does \(M\) have finite Gorenstein dimension over \(R\)? Before studying the ascent and descent of Gorenstein projective property, we study the ring homomorphism \(\varphi\) which has _ascent and descent of finite Gorenstein dimension property_; see Definition 3.1. It plays an essential role in this article. In Section 3, we provide the following characterization of this property. **Theorem 1.1**.: _(See 3.12) Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(R\) is a commutative noetherian ring and \(A\) is a finite \(R\)-algebra. Consider the following two conditions_:__ 1. _A has finite Gorenstein dimension over_ \(R\) _and_ \(\mathsf{RHom}_{R}(A,R)\) _is perfect over_ \(A\) _on both sides._ 2. \(\varphi\) _has ascent and descent of finite Gorenstein dimension property._ _Then (1) implies (2). The converse holds if, in addition, \(R\) has finite Krull dimension._ This characterization yields a well-known fact that any complete intersection map satisfies this property; see Corollary 3.15. As an application of Theorem 1.1, we get the following result that supports Avramov and Foxby's question. **Corollary 1.2**.: _(See 3.14) Let \(\varphi\colon R\to A\) be a finite ring homomorphism of commutative noetherian rings. Assume \(\mathsf{RHom}_{R}(A,R)\) is perfect over \(A\). Then Avramov and Foxby's question is true._ Then, in Section 4 we get the following which concerns our question. **Theorem 1.3**.: _(See 4.3) Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(R\) is a commutative noetherian ring and \(A\) is a finite \(R\)-algebra. The following two conditions are equivalent_:__ 1. \(A\) _is Gorenstein projective over_ \(R\) _and_ \(\operatorname{Hom}_{R}(A,R)\) _is projective over_ \(A\) _on both sides._ 2. \(\varphi\) _has ascent and descent of Gorenstein projective property._ By making use of Theorem 1.3, we prove that the ascent and descent of Gorenstein projective property is a local property; see Corollary 4.6. It is worth to notice that the ascent and descent of Gorenstein projective property implies the ascent and descent of finite Gorenstein dimension property by characterizations. In Section 5, we study how the Gorenstein properties of rings behave along the ring homomorphism \(\varphi\colon R\to A\). The main result in this section is Theorem 1.4. Combining this result with Theorem 1.3, one can get a result of Avramov and Foxby; see Corollary 5.2. **Theorem 1.4**.: _(See 5.1) Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(R\) is a commutative noetherian ring of finite Krull dimension and \(A\) is a finite \(R\)-algebra. Assume \(\varphi\) has ascent and descent of finite Gorenstein dimension property. If \(R\) is an Iwanaga-Gorenstein ring, then so is \(A\). 
The converse holds if, in addition, the fibre \(A\otimes_{R}R/\mathfrak{m}\) is nonzero for each maximal ideal \(\mathfrak{m}\) of \(R\)._ For a noetherian ring \(A\), the category of finitely generated Gorenstein projective left \(A\)-modules, denoted by \(A\)-Gproj, is a Frobenius category. Hence, its stable category, denoted by \(A\)-Gproj, is a triangulated category; see [21]. In Section 6, we study when there is a triangle equivalence between the stable categories of finitely generated Gorenstein projective modules. The following is inspired by a recent work of Chen and Ren [12, Proposition 4.2], but there are some new ingredients; see details in Remark 6.5. **Theorem 1.5**.: _(See 6.4) Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(R\) is a commutative noetherian ring and \(A\) is a finite \(R\)-algebra. Assume that \(A\) is a projective generator as an \(R\)-module and \(\omega=\operatorname{Hom}_{R}(A,R)\) is projective as a right \(A\)-module. Then the following are equivalent_:__ 1. _The induced adjoint pair yields mutually inverse equivalences_ \[F=\omega\otimes_{A}-\colon A\text{-}\mathrm{Gproj}\rightleftarrows R\text{-} \mathrm{Gproj}\colon G=\operatorname{Hom}_{R}(\omega,-).\] 2. _For any_ \(M\in A\)_-_\(\mathrm{Gproj}\) _and_ \(N\in R\)_-_\(\mathrm{Gproj}\)_, the projective dimensions of_ \(\operatorname{Coker}(\eta_{M})\) _and_ \(\operatorname{Ker}(\varepsilon_{N})\) _are finite, where_ \(\eta\colon\operatorname{Id}\to GF\) _and_ \(\varepsilon\colon FG\to\operatorname{Id}\) _denote the unit and the counit of the adjoint pair_ \((F,G)\)_, respectively._ ## 2. Preliminaries Throughout \(A\) will be a two-sided noetherian ring, that is, \(A\) is noetherian as a left and as a right \(A\)-module. In what follows \(A\)-modules will mean left \(A\)-modules, and \(A^{\mathrm{op}}\)-modules are identified with right \(A\)-modules. The category of \(A\)-modules and its full subcategory consisting of finitely generated modules will be denoted by \(A\)-Mod and \(A\)-mod, respectively. ### Gorenstein projective modules An unbounded acyclic complex of projective left \(A\)-modules \[\mathbf{P}=\cdots\longrightarrow P_{1}\stackrel{{\partial_{1}}} {{\longrightarrow}}P_{0}\stackrel{{\partial_{0}}}{{ \longrightarrow}}P_{-1}\longrightarrow\cdots\] is called _totally acyclic_ provided that \(\operatorname{Hom}_{A}(\mathbf{P},Q)\) is still acyclic for any projective left \(A\)-module \(Q\). A left \(A\)-module \(M\) is _Gorenstein projective_ if there is a totally acyclic complex \(\mathbf{P}\) such that \(M\) is isomorphic to the image of \(\partial_{0}\). Any projective \(A\)-module is Gorenstein projective. For each totally acyclic complex \(\mathbf{P}\), the image of \(\partial_{i}\), denoted by \(\operatorname{Im}(\partial_{i})\), is Gorenstein projective for each \(i\in\mathbb{Z}\). The following characterization is well-known; see [1, Proposition 3.8]. Let \(M\) be a finitely generated left \(A\)-module. Then \(M\) is Gorenstein projective if and only if \(\operatorname{Ext}_{A}^{i}(M,A)=0=\operatorname{Ext}_{A^{\mathrm{op}}}^{i}( \operatorname{Hom}_{R}(M,A),A)=0\) for all \(i>0\), and the evaluation homomorphism \(\mathrm{e}_{M,A}\colon M\to\operatorname{Hom}_{A^{\mathrm{op}}}( \operatorname{Hom}_{A}(M,A),A)\) is an isomorphism. Thus, the finitely generated Gorenstein projective module \(M\) is also called a _totally reflexive module_; see for example [1]. ### Derived categories Let \(\mathsf{D}(A)\) denote the derived category of complexes of left \(R\)-modules. 
It is a triangulated category with the suspension functor [1]; for each complex \(X\), \(X[1]_{i}:=X_{i-1}\), and \(\partial_{X[1]}:=-\partial_{X}\). Its full subcategory consisting of complexes with finitely generated total homology will be denoted by \(\mathsf{D}_{b}^{f}(A)\). More precisely, for each \(X\in\mathsf{D}(A)\), it is in \(\mathsf{D}_{b}^{f}(A)\) if and only if \(\mathrm{H}_{i}(X)\) is finitely generated for all \(i\) and \(\mathrm{H}_{i}(X)=0\) for all \(|i|\gg 0\). The category \(\mathsf{D}_{b}^{f}(A)\) inherits the structure of the triangulated category from \(\mathsf{D}(A)\). A complex \(X\) is said to be _homotopy projective_ (resp. _homotopy injective_) provided that \(\mathrm{Hom}_{A}(X,-)\) (resp. \(\mathrm{Hom}_{A}(-,X)\)) preserves acyclic complexes. See [30, Section 3] for the existence of the homotopy projective resolution and the homotopy injective resolution of complexes. A complex \(X\) is said to be _bounded below_ (resp. _bounded above_) if \(X_{i}=0\) for \(i\ll 0\) (resp. \(i\gg 0\)). Every bounded below (resp. bounded above) complex of projective (resp. injective) modules is homotopy projective (resp. homotopy injective). Let \(\mathsf{RHom}_{A}(-,-)\colon\mathsf{D}(A)^{\mathrm{op}}\times\mathsf{D}(A) \to\mathsf{D}(\mathbb{Z})\) denote the right derived functor of \(\mathrm{Hom}_{A}(-,-)\). For each \(M,N\) in \(\mathsf{D}(A)\), the complex \(\mathsf{RHom}_{A}(M,N)\) can be represented by either \(\mathrm{Hom}_{A}(P,N)\) or \(\mathrm{Hom}_{A}(M,I)\), where \(P\xrightarrow{\simeq}M\) is a homotopy projective resolution and \(N\xrightarrow{\simeq}I\) is a homotopy injective resolution. Let \(-\otimes_{A}^{\mathbb{L}}-\colon\mathsf{D}(A^{\mathrm{op}})\times\mathsf{D}(A )\to\mathsf{D}(\mathbb{Z})\) denote the left derived functor of \(-\otimes_{A}-\). For each \(M\) in \(\mathsf{D}(A^{\mathrm{op}})\) and \(N\) in \(\mathsf{D}(A)\), the complex \(M\otimes_{A}^{\mathbb{L}}N\) can be represented by either \(P\otimes_{A}N\) or \(M\otimes_{A}Q\), where \(P\xrightarrow{\simeq}M\) and \(Q\xrightarrow{\simeq}N\) are homotopy projective resolutions over \(A^{\mathrm{op}}\) and \(A\), respectively. ### Gorenstein dimensions Let \(M\) be a complex in \(\mathsf{D}_{b}^{f}(A)\). A quasi-isomorphism \(G\xrightarrow{\simeq}M\) is a _Gorenstein projective resolution_ of \(M\) provided that \(G_{i}\) is a Gorenstein projective module for each \(i\in\mathbb{Z}\) and \(G_{i}=0\) for \(i\ll 0\). The _Gorenstein dimension_ of \(M\), denoted by \(\mathrm{G}\mbox{-dim}_{A}(M)\), is the smallest integer \(n\) such that there is a Gorenstein projective resolution \(G\xrightarrow{\simeq}M\) such that \(G_{i}=0\) for \(i>n\) and \(G_{n}\neq 0\); see for example [13, Definition 2.3.2]. For Gorenstein dimension of modules, we consider any module as a stalk complex concentrated in degree zero. Then, for each \(A\)-module \(M\), \(\mathrm{G}\mbox{-dim}_{A}(M)\) is precisely the Gorenstein projective dimension of \(M\); see for example [22, Definition 2.8]. For each \(M\in A\)-mod, it is clear that \(\mathrm{G}\mbox{-dim}_{A}(M)\leq\mathrm{pd}_{A}(M)\), where \(\mathrm{pd}_{A}(M)\) is the projective dimension of \(M\) over \(A\); the equality holds if the latter is finite. ### Iwanaga-Gorenstein rings A noetherian ring \(A\) is said to be _Iwanaga-Gorenstein_ provided that \(A\) has finite injective dimension as both a left module and a right module. It follows from [32, Lemma A] that if \(A\) is Iwanaga-Gorenstein, then \(\mathrm{id}_{A}({}_{A}A)=\mathrm{id}_{A}(A_{A})<\infty\). 
In this case, we simply write this as \(\mathrm{id}_{A}(A)\). Let \(A\) be an Iwanaga-Gorenstein ring. For each finitely generated left \(A\)-module \(M\), \(\operatorname{G-dim}_{A}(M)\leq\mathrm{id}_{A}(A)\); see [15, Proposition 11.5.7]. In this case, each complex in \(\mathsf{D}_{b}^{f}(A)\) has finite Gorenstein dimension. ### Perfect complexes Let \(A\) be a left noetherian ring. A complex \(X\) in \(\mathsf{D}(A)\) is said to be _perfect_ provided that it is isomorphic in \(\mathsf{D}(A)\) to a bounded complex of finitely generated projective \(A\)-modules; equivalently, \(X\) is compact as an object in \(\mathsf{D}(A)\). See [10, Chapter 1] and [28] for more details about perfect complexes. The full subcategory of \(\mathsf{D}(A)\) consisting of perfect complexes is precisely the smallest triangulated subcategory of \(\mathsf{D}(A)\) which contains \(A\) and is closed under direct summands. ### Semi-dualizing complexes Let \(D\) be a complex of \(A\)-\(A\)-bimodules. \(D\) is a _semi-dualizing complex_ over \(A\) if the following conditions are satisfied: (1) The total homology \(\operatorname{H}(D)\) is finitely generated over \(A\) and \(A^{\operatorname{op}}\); (2) \(D_{i}=0\) for \(i\gg 0\) and each \(D_{i}\) is injective over \(A\) and \(A^{\operatorname{op}}\); (3) The homothety morphisms \[\operatorname{m}_{D}\colon A\to\operatorname{Hom}_{A^{\operatorname{op}}}(D,D);\ a\mapsto(x\mapsto ax)\] and \[\operatorname{m}_{D}^{\prime}\colon A^{\operatorname{op}}\to\operatorname{Hom}_{A}(D,D);\ a\mapsto(x\mapsto xa)\] are quasi-isomorphisms. If, in addition, \(D\) is a bounded complex of injective modules over \(A\) and \(A^{\operatorname{op}}\), then \(D\) is called a _dualizing complex_. When \(A\) is commutative, the above definition of the semi-dualizing complex coincides with the definition in [14, Definition 2.1]. ## 3. Ascent and descent of finite Gorenstein dimension properties The main result of this section is Theorem 1.1 from the introduction which provides a description of ascent and descent of finite Gorenstein dimension property. As a consequence, we get a result that supports a question of Avramov and Foxby; see Corollary 3.14. For a ring homomorphism \(\varphi\colon R\to A\), the map \(\varphi\) is said to be _finite_ provided that \(A\) is finitely generated both as a left \(R\)-module and as a right \(R\)-module. **Definition 3.1**.: Let \(\varphi\colon R\to A\) be a finite ring homomorphism between noetherian rings. We say \(\varphi\) has _ascent and descent of finite Gorenstein dimension property_ if the following two conditions are satisfied: 1. For each complex in \(\mathsf{D}_{b}^{f}(A)\), it has finite Gorenstein dimension over \(A\) if and only if it has finite Gorenstein dimension over \(R\); 2. For each complex in \(\mathsf{D}_{b}^{f}(A^{\operatorname{op}})\), it has finite Gorenstein dimension over \(A^{\operatorname{op}}\) if and only if it has finite Gorenstein dimension over \(R^{\operatorname{op}}\). Since each complex over an Iwanaga-Gorenstein ring has finite Gorenstein dimension, it is clear that if \(\varphi\colon R\to A\) is a finite ring homomorphism between Iwanaga-Gorenstein rings, then \(\varphi\) has ascent and descent of finite Gorenstein dimension property. Conversely, for a morphism \(\varphi\) with ascent and descent of finite Gorenstein dimension property, Theorem 5.1 shows how the Iwanaga-Gorenstein property is transferred between \(R\) and \(A\). 
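For orientation, we include a standard example (over a field \(k\), and not needed in what follows) showing that finite Gorenstein dimension is strictly weaker than finite projective dimension. Over \(A=k[x]/(x^{2})\) the complex \[\cdots\xrightarrow{\;x\;}A\xrightarrow{\;x\;}A\xrightarrow{\;x\;}A\xrightarrow{\;x\;}\cdots\] is totally acyclic, since applying \(\operatorname{Hom}_{A}(-,A)\) returns a complex of the same form. Hence the residue field \(k\cong\operatorname{Im}(x)\) is a Gorenstein projective \(A\)-module, so \(\operatorname{G-dim}_{A}(k)=0\), whereas \(\operatorname{pd}_{A}(k)=\infty\); in particular, \(A\) is an Iwanaga-Gorenstein ring of infinite global dimension.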
In the following, we abbreviate \(\operatorname{Hom}_{A}(-,A)\) and \(\operatorname{Hom}_{A^{\operatorname{op}}}(-,A)\) as \((-)^{*}\); there will be no confusion. Let \(M\) be a complex in \(\mathsf{D}_{b}^{f}(A)\). One can choose a homotopy projective resolution \(\pi_{M}\colon P\xrightarrow{\simeq}M\), where \(P\) is a bounded below complex of finitely generated projective left \(A\)-modules. Let \(\iota\colon A\xrightarrow{\simeq}I\) be an injective resolution of the right \(A\)-module \(A\). Denote by \(\eta_{M}\) the composition of the following morphisms \[\eta_{M}\colon P\xrightarrow[\ \cong\ ]{\ \operatorname{e}_{P,A}\ }\operatorname{Hom}_{A^{\operatorname{op}}}(P^{*},A)\xrightarrow{\ \iota_{*}\ }\operatorname{Hom}_{A^{\operatorname{op}}}(P^{*},I).\] Then, the _biduality morphism_ \(\delta_{M}\colon M\to\mathsf{RHom}_{A^{\mathrm{op}}}(\mathsf{RHom}_{A}(M,A),A)\) of \(M\), as a morphism in the derived category \(\mathsf{D}(\mathbb{Z})\), can be defined as the right fraction \[\eta_{M}/\pi_{M}\colon M\to\operatorname{Hom}_{A^{\mathrm{op}}}(P^{*},I).\] The following is included in [13, Corollary 2.3.8] when \(A\) is commutative. The same result holds for non-commutative noetherian rings; we omit the proof. **Lemma 3.2**.: _Let \(M\) be a complex in \(\mathsf{D}^{f}_{b}(A)\). Then \(\operatorname{G-dim}_{A}(M)\) is finite if and only if the following two conditions are satisfied:_ 1. \(\mathsf{RHom}_{A}(M,A)\) _is in_ \(\mathsf{D}^{f}_{b}(A^{\mathrm{op}})\)_;_ 2. _The biduality morphism_ \(\delta_{M}\colon M\to\mathsf{RHom}_{A^{\mathrm{op}}}(\mathsf{RHom}_{A}(M,A),A)\) _is an isomorphism in_ \(\mathsf{D}(\mathbb{Z})\)_._ **Remark 3.3**.: (1) Let \(M\) be a complex in \(\mathsf{D}^{f}_{b}(A)\). If \(\operatorname{G-dim}_{A}(M)\) is finite, then it follows from Lemma 3.2 that \(\operatorname{G-dim}_{A^{\mathrm{op}}}(\mathsf{RHom}_{A}(M,A))\) is also finite. (2) Note that one cannot represent \(\mathsf{RHom}_{A}(M,A)\) by \(\operatorname{Hom}_{A}(M,I)\) since \(I\) is a complex of right \(A\)-modules. When \(A\) is commutative, the above biduality morphism coincides with the map \(\operatorname{e}_{M,I}\colon M\to\operatorname{Hom}_{A}(\operatorname{Hom}_{A}(M,I),I)\); see [13, Section A.8]. **Lemma 3.4**.: _Keep the same notations as above. Let \(\pi\colon Q\xrightarrow{\simeq}P^{*}\) be a homotopy projective resolution of \(P^{*}\) over \(A^{\mathrm{op}}\). Then_ 1. _The biduality morphism_ \(\delta_{M}\) _is an isomorphism in_ \(\mathsf{D}(\mathbb{Z})\) _if and only if the chain map_ \(\pi^{*}\colon P^{**}\to Q^{*}\) _is a quasi-isomorphism._ 2. _For each perfect complex_ \(X\) _in_ \(\mathsf{D}^{f}_{b}(A)\)_, there is a quasi-isomorphism_ \[\pi\otimes X\colon Q\otimes_{A}X\xrightarrow{\simeq}P^{*}\otimes_{A}X.\] Proof.: (1) Let \(I\) be an injective resolution of the right \(A\)-module \(A\). Since \(I\) is homotopy injective and \(Q\) is homotopy projective, there is a commutative diagram \[\begin{CD}P^{**}@>{\pi^{*}}>>Q^{*}\\ @V{\iota_{*}}VV@VV{\simeq}V\\ \operatorname{Hom}_{A^{\mathrm{op}}}(P^{*},I)@>{\simeq}>>\operatorname{Hom}_{A^{\mathrm{op}}}(Q,I),\end{CD}\] where the two unlabeled maps are induced by \(\iota\) and \(\pi\), respectively. Note that \(\delta_{M}\) is an isomorphism in \(\mathsf{D}(\mathbb{Z})\) if and only if \(\iota_{*}\) is a quasi-isomorphism. By the above diagram, this is equivalent to \(\pi^{*}\) being a quasi-isomorphism. 
(2) By assumption, there exists a quasi-isomorphism \(\pi^{\prime}\colon F\xrightarrow{\simeq}X\), where \(F\) is a bounded complex of finitely generated projective left \(A\)-modules; this can be deduced by combining [10, Lemma 1.2.1] with [16, Theorem 6.6]. Consider the commutative diagram \[\begin{CD}Q\otimes_{A}F@>{\pi\otimes F}>>P^{*}\otimes_{A}F@>{\epsilon_{P,F}}>>\operatorname{Hom}_{A}(P,F)\\ @V{Q\otimes\pi^{\prime}}VV@V{P^{*}\otimes\pi^{\prime}}VV@VV{\pi^{\prime}_{*}}V\\ Q\otimes_{A}X@>{\pi\otimes X}>>P^{*}\otimes_{A}X@>{\epsilon_{P,X}}>>\operatorname{Hom}_{A}(P,X),\end{CD}\] where \(\epsilon_{P,F}\) and \(\epsilon_{P,X}\) are the tensor evaluation isomorphisms; see for example [13, A.2.10]. Note that \(\pi\otimes F\) and \(Q\otimes\pi^{\prime}\) are quasi-isomorphisms; see [30, Proposition 5.8]. It is clear that \(\pi^{\prime}_{*}\) is a quasi-isomorphism. Therefore, the above diagram yields that \(P^{*}\otimes\pi^{\prime}\), and then \(\pi\otimes X\), are quasi-isomorphisms. If \(X\) is a bounded complex of finitely generated projective modules, then it is clear that the functor \(-\otimes X\) preserves quasi-isomorphisms. However, Lemma 3.4 (2) is not trivial in general. In what follows, let \(R\) be a commutative noetherian ring. The ring \(A\) is said to be a _finite \(R\)-algebra_ if there is a ring homomorphism \(\varphi\colon R\to A\) such that the image of \(\varphi\) is in the center of \(A\) and \(A\) is finitely generated as an \(R\)-module. **Proposition 3.5**.: _Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(A\) is a finite \(R\)-algebra and \(\operatorname{G-dim}_{R}(A)\) is finite. Let \(R\xrightarrow{\simeq}I\) be an injective resolution of \(R\) and \(D=\operatorname{Hom}_{R}(A,I)\). Then_ 1. \(D\) _is a semi-dualizing complex over_ \(A\)_._ 2. _If_ \(D\) _is a perfect complex of left_ \(A\)_-modules, then for each complex_ \(M\in\operatorname{\mathsf{D}}_{b}^{f}(A)\)_, there is a right_ \(A\)_-linear quasi-isomorphism_ \[\theta_{M}\colon Q\otimes_{A}D\xrightarrow{\simeq}\operatorname{Hom}_{R}(P,I),\] _where_ \(P\) _is a bounded below complex of finitely generated projective_ \(A\)_-modules such that_ \(P\xrightarrow{\simeq}M\) _is a homotopy projective resolution of_ \(M\) _over_ \(A\)_, and_ \(\pi\colon Q\xrightarrow{\simeq}\operatorname{Hom}_{A}(P,A)\) _is a homotopy projective resolution of_ \(\operatorname{Hom}_{A}(P,A)\) _over_ \(A^{\operatorname{op}}\)_._ Proof.: (1) First, we can check directly that \(D=\operatorname{Hom}_{R}(A,I)\) is a bounded above complex of modules which are injective over \(A\) and \(A^{\operatorname{op}}\); see also [9, Lemma 3.1.6]. Since \(\operatorname{G-dim}_{R}(A)<\infty\), it follows from (1) of Lemma 3.2 that the total homology \(\operatorname{H}(D)\) of \(D\) is finitely generated over \(R\). Moreover, we infer that \(\operatorname{H}(D)\) is finitely generated over \(A\) and \(A^{\operatorname{op}}\). 
Consider the commutative diagram \[\begin{CD}A@>{\operatorname{m}_{D}}>>\operatorname{Hom}_{A^{\operatorname{op}}}(D,D)\\ @V{\operatorname{e}_{A,I}}V{\simeq}V@VV{\cong}V\\ \operatorname{Hom}_{R}(\operatorname{Hom}_{R}(A,I),I)@=\operatorname{Hom}_{R}(D,I),\end{CD}\] where the right-hand vertical isomorphism is induced by the adjunction \((\operatorname{Res},\operatorname{Hom}_{R}(A,-))\). Since \(\operatorname{G-dim}_{R}(A)<\infty\), Lemma 3.2 yields that \(\operatorname{e}_{A,I}\) is a quasi-isomorphism, and hence so is \(\operatorname{m}_{D}\). The same argument shows that \(\operatorname{m}_{D}^{\prime}\) is a quasi-isomorphism. Therefore, \(D\) is a semi-dualizing complex over \(A\). (2) Since \(P\) is a bounded below complex of finitely generated projective left \(A\)-modules, the tensor evaluation \(\epsilon_{P,D}\colon\operatorname{Hom}_{A}(P,A)\otimes_{A}D\to\operatorname{Hom}_{A}(P,D)\) is an isomorphism, and \(\pi\otimes D\colon Q\otimes_{A}D\to\operatorname{Hom}_{A}(P,A)\otimes_{A}D\) is a quasi-isomorphism by Lemma 3.4 (2), since \(D\) is perfect over \(A\). Composing these morphisms with the adjunction isomorphism \(\operatorname{Hom}_{A}(P,D)=\operatorname{Hom}_{A}(P,\operatorname{Hom}_{R}(A,I))\cong\operatorname{Hom}_{R}(P,I)\) yields the desired right \(A\)-linear quasi-isomorphism \(\theta_{M}\). **Example 3.6**.: Assume that \(R\) has finite Krull dimension, \(A\) is projective as an \(R\)-module and \(\operatorname{Hom}_{R}(A,R)\cong A\) as \(A\)-bimodules; this happens, for instance, for the trivial extension \(A=R[x]/(x^{2})\). Then \(D=\operatorname{Hom}_{R}(A,I)\) is an injective resolution of \(A\) over \(A\) and \(A^{\operatorname{op}}\), and it is a semi-dualizing complex which is perfect over \(A\) on both sides. However, \(\operatorname{Hom}_{R}(A,I)\) cannot be isomorphic to a dualizing complex in \(\mathsf{D}_{b}^{f}(A)\) if \(R\) is not Iwanaga-Gorenstein. Indeed, if \(R\) is not Iwanaga-Gorenstein, then Theorem 3.12 and Theorem 5.1 will imply that neither is \(A\). This yields that the injective resolution of \(A\) cannot be bounded. Since \(\operatorname{Hom}_{R}(A,I)\) is an injective resolution of \(A\), we get that \(\operatorname{Hom}_{R}(A,I)\) is not dualizing. **Proposition 3.7**.: _Let \(A\) be a noetherian ring and \(D\) be a semi-dualizing complex. 
Assume \(D\) is a perfect complex of left \(A\)-modules. For each \(M\) in \(\mathsf{D}_{b}^{f}(A)\) with finite Gorenstein dimension, the evaluation morphism_ \[\operatorname{e}_{M,D}\colon M\to\operatorname{Hom}_{A^{\operatorname{op}}}(\operatorname{Hom}_{A}(M,D),D)\] _is a left \(A\)-linear quasi-isomorphism._ Proof.: One can check directly that \(\operatorname{e}_{M,D}\) is left \(A\)-linear. Note that \(D\) is homotopy injective over \(A\) and \(A^{\operatorname{op}}\). By taking a projective resolution of \(M\), we may assume \(M=P\) to be a bounded below complex of finitely generated projective left \(A\)-modules with finite Gorenstein dimension. Keep the notations as above. Choose a homotopy projective resolution \(\pi\colon Q\xrightarrow{\simeq}P^{*}\). Since \(\operatorname{G-dim}_{A}(P)<\infty\), the biduality morphism \(\delta_{P}\) is a quasi-isomorphism. It follows from Lemma 3.4 that \(\pi^{*}\colon P^{**}\to Q^{*}\) is a quasi-isomorphism. Consider the commutative diagram \[\begin{array}{ccc}P&\xrightarrow{\ \operatorname{e}_{P,D}\ }&\operatorname{Hom}_{A^{\operatorname{op}}}(\operatorname{Hom}_{A}(P,D),D)\\ {\scriptstyle\operatorname{e}_{P,A}}\downarrow{\scriptstyle\cong}&&\downarrow{\scriptstyle(\epsilon_{P,D})^{*}}\\ \operatorname{Hom}_{A^{\operatorname{op}}}(P^{*},A)&&\operatorname{Hom}_{A^{\operatorname{op}}}(P^{*}\otimes_{A}D,D)\\ {\scriptstyle\pi^{*}}\downarrow{\scriptstyle\simeq}&&\downarrow{\scriptstyle(\pi\otimes D)^{*}}\\ \operatorname{Hom}_{A^{\operatorname{op}}}(Q,A)&&\operatorname{Hom}_{A^{\operatorname{op}}}(Q\otimes_{A}D,D)\\ \Big\|&&\downarrow{\scriptstyle\gamma}\,{\scriptstyle\cong}\\ \operatorname{Hom}_{A^{\operatorname{op}}}(Q,A)&\xrightarrow{\ (\operatorname{m}_{D})_{*}\ }&\operatorname{Hom}_{A^{\operatorname{op}}}(Q,\operatorname{Hom}_{A^{\operatorname{op}}}(D,D)),\end{array}\] where the quasi-isomorphism \(\pi^{*}\) is from Lemma 3.4, and \(\gamma\) is induced from the adjunction. Since \(D\) is homotopy injective and perfect over \(A\), it follows from Lemma 3.4 that \((\pi\otimes D)^{*}\) is a quasi-isomorphism, and \((\epsilon_{P,D})^{*}\) is an isomorphism by the tensor evaluation. By assumption, \(\operatorname{m}_{D}\) is a quasi-isomorphism. Then so is \((\operatorname{m}_{D})_{*}\) since \(Q\) is homotopy projective. Therefore, the above diagram yields that \(\operatorname{e}_{P,D}\) is a quasi-isomorphism. This finishes the proof. **Remark 3.8**.: If \(A\) is a noetherian ring with a dualizing complex \(D\), then there is a quasi-isomorphism \(\operatorname{e}_{M,D}\colon M\xrightarrow{\simeq}\operatorname{Hom}_{A^{\operatorname{op}}}(\operatorname{Hom}_{A}(M,D),D)\) for any complex \(M\) in \(\mathsf{D}_{b}^{f}(A)\); see [23, Proposition 3.4]. It is worth remarking that Proposition 3.7 is quite different from the above result. Indeed, there exists a local ring with a semi-dualizing and perfect complex \(D\), but \(D\) is not a dualizing complex; see Example 3.6. On the other hand, there exists a local ring with a dualizing complex \(D\) which is not a perfect complex. For example, for a commutative artinian local ring \(A\), the injective envelope \(E\) of the residue field of \(A\) is a dualizing complex, but \(E\) is not a perfect complex if \(A\) is not self-injective; see [17, Corollary 4.4]. We can deduce by Proposition 3.7 the following known result. **Corollary 3.9**.: _Let \(A\) be a commutative noetherian ring. For each \(M\) in \(\mathsf{D}^{f}_{b}(A)\) with finite Gorenstein dimension, the biduality morphism_ \[\delta_{M}\colon M\to\mathsf{RHom}_{A}(\mathsf{RHom}_{A}(M,A),A)\] _is an isomorphism in \(\mathsf{D}^{f}_{b}(A)\)._ Proof.: Take an injective resolution \(A\xrightarrow{\simeq}I\) over \(A\). 
Then \(I\) is a semi-dualizing complex and it is perfect. Combining with \(\mathsf{RHom}_{A}(-,A)\simeq\operatorname{Hom}_{A}(-,I)\), the desired result now follows immediately from Proposition 3.7. **Lemma 3.10**.: _Let \(A\) be a noetherian ring and \(N\) be a Gorenstein projective left \(A\)-module. If there exists a positive integer \(n\) such that \(\operatorname{Ext}^{n}_{A}(M,N)=0\) for all Gorenstein projective left \(A\)-modules \(M\), then \(N\) is projective._ Proof.: Since \(N\) is Gorenstein projective, there exists a long exact sequence \[0\to N\to P_{0}\xrightarrow{\partial_{0}}P_{-1}\to\cdots\xrightarrow{ \partial_{-(n-2)}}P_{-(n-1)}\to C\to 0,\] where \(P_{i}\) is a finitely generated projective left \(A\)-module for each \(i\), and \(C\) is Gorenstein projective. If \(n=1\), then the short exact sequence \(0\to N\to P_{0}\to C\to 0\) is split, and \(N\) is projective. Assume \(n>1\). Applying \(\operatorname{Hom}_{A}(C,-)\) to the above exact sequence, we have \[0=\operatorname{Ext}^{n}_{A}(C,N)\cong\operatorname{Ext}^{n-1}_{A}(C, \operatorname{Im}(\partial_{0}))\cong\cdots\cong\operatorname{Ext}^{1}_{A}(C, \operatorname{Im}(\partial_{-(n-2)})).\] We infer from this that \(C\) is projective, and hence all \(\operatorname{Im}(\partial_{i})\) and \(N\) are projective. **Lemma 3.11**.: _Let \(A\) be a noetherian ring and \(M\) be a complex in \(\mathsf{D}^{f}_{b}(A)\). If \(\operatorname{G-dim}_{A}(M)=n\) is finite, then there exists an exact triangle_ \[K[n{-}1]\to P\to M\to K[n]\] _in \(\mathsf{D}^{f}_{b}(A)\), where \(P\) is a bounded complex of finitely generated projective left \(A\)-modules with \(P_{i}=0\) for \(i\geq n\) and \(K\) is a Gorenstein projective left \(A\)-module._ Proof.: We can take a homotopy projective resolution \(F\xrightarrow{\simeq}M\) such that \(F_{i}\) is a finitely generated projective left \(A\)-module for each \(i\in\mathbb{Z}\) and \(F_{i}=0\) for \(i\ll 0\). Since \(\operatorname{G-dim}_{A}(M)=n\), \(K=\operatorname{Im}(\partial^{F}_{n})\) is Gorenstein projective and the brutal truncation \(F_{\geq n}\) is its projective resolution; see [13, Theorem 2.3.7]. Then there is an exact triangle \[F_{\geq n}[-1]\to F_{<n}\to F\to F_{\geq n}\] in \(\mathsf{D}^{f}_{b}(A)\), where \(F_{\geq n}\simeq K[n]\) and \(F\simeq M\). This completes the proof. Now, we are in a position to state the main result of this section. **Theorem 3.12**.: _Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(R\) is a commutative noetherian ring and \(A\) is a finite \(R\)-algebra. Consider the following two conditions_:__ 1. \(\operatorname{G-dim}_{R}(A)<\infty\) _and_ \(\mathsf{RHom}_{R}(A,R)\) _is perfect over_ \(A\) _on both sides._ 2. \(\varphi\) _has ascent and descent of finite Gorenstein dimension property._ _Then (1) implies (2). The converse holds if, in addition, \(R\) has finite Krull dimension._ Proof.: Set \(D=\mathsf{RHom}_{R}(A,R)\), which can be represented by \(\operatorname{Hom}_{R}(A,I)\), where \(I\) is an injective resolution of \(R\). In what follows, we identify \(\mathsf{RHom}_{R}(-,R)\) with \(\operatorname{Hom}_{R}(-,I)\). \((1)\Rightarrow(2)\). Let \(M\) be a complex in \(\mathsf{D}_{b}^{f}(A)\). Next, we show \(M\) has finite Gorenstein projective dimension over \(A\) if and only if it has finite Gorenstein projective dimension over \(R\). The similar result holds for complex in \(\mathsf{D}_{b}^{f}(A^{\operatorname{op}})\). First, assume \(\operatorname{G-dim}_{A}(M)<\infty\). 
Since \(D\) is perfect over \(A\), it follows from Lemma 3.2 that \(\mathsf{RHom}_{A}(M,D)\) has finitely many non-zero homologies. Then, we infer that \(\mathsf{RHom}_{R}(M,R)\) is in \(\mathsf{D}_{b}^{f}(R)\) by \(\mathsf{RHom}_{A}(M,D)\cong\mathsf{RHom}_{R}(M,R)\). Consider the commutative diagram \[\begin{CD}M@>{\operatorname{e}_{M,I}}>>\operatorname{Hom}_{R}(\operatorname{Hom}_{R}(M,I),I)\\ @V{\operatorname{e}_{M,D}}VV@VV{\cong}V\\ \operatorname{Hom}_{A^{\operatorname{op}}}(\operatorname{Hom}_{A}(M,D),D)@>{\cong}>>\operatorname{Hom}_{A^{\operatorname{op}}}(\operatorname{Hom}_{R}(M,I),D),\end{CD}\] where the unlabeled isomorphisms are due to the adjunction \((\operatorname{Res},\operatorname{Hom}_{R}(A,-))\). It follows from Propositions 3.5 and 3.7 that \(\operatorname{e}_{M,D}\) is a quasi-isomorphism, and hence so is \(\operatorname{e}_{M,I}\). Combining this with \(\mathsf{RHom}_{R}(M,R)\in\mathsf{D}_{b}^{f}(R)\), we conclude by Lemma 3.2 that \(\operatorname{G-dim}_{R}(M)<\infty\). That is, \(\varphi\) has descent of finite Gorenstein dimension property. Now assume \(\operatorname{G-dim}_{R}(M)<\infty\). Since \(D\) is a perfect complex of right \(A\)-modules, one has \(\operatorname{G-dim}_{R}(D\otimes_{A}^{\operatorname{L}}M)<\infty\). Then, by Lemma 3.2 we infer that \[\mathsf{RHom}_{A}(M,A)\simeq\mathsf{RHom}_{A}(M,\mathsf{RHom}_{R}(D,R))\simeq\mathsf{RHom}_{R}(D\otimes_{A}^{\operatorname{L}}M,R)\] is in \(\mathsf{D}_{b}^{f}(A^{\operatorname{op}})\). Note that the first quasi-isomorphism holds since \[\operatorname{e}_{A,I}\colon A\xrightarrow{\simeq}\operatorname{Hom}_{R}(\operatorname{Hom}_{R}(A,I),I)=\mathsf{RHom}_{R}(D,R),\] which is due to \(\operatorname{G-dim}_{R}(A)<\infty\), and is also left and right \(A\)-linear. For the complex \(M\in\mathsf{D}_{b}^{f}(A)\), choose \(P\xrightarrow{\simeq}M\) as above, where \(P\) is a bounded below complex of finitely generated projective left \(A\)-modules. Let \(\pi\colon Q\xrightarrow{\simeq}\operatorname{Hom}_{A}(P,A)\) be a homotopy projective resolution. Consider the commutative diagram \[\begin{CD}P@>{\operatorname{e}_{P,A}}>{\cong}>\operatorname{Hom}_{A^{\operatorname{op}}}(\operatorname{Hom}_{A}(P,A),A)\\ @V{\operatorname{e}_{P,I}}V{\simeq}V@VV{\pi^{*}}V\\ \operatorname{Hom}_{R}(\operatorname{Hom}_{R}(P,I),I)@.\operatorname{Hom}_{A^{\operatorname{op}}}(Q,A)\\ @V{(\theta_{M})^{*}}VV@VV{(\operatorname{e}_{A,I})_{*}}V\\ \operatorname{Hom}_{R}(Q\otimes_{A}D,I)@>{\cong}>>\operatorname{Hom}_{A^{\operatorname{op}}}(Q,\operatorname{Hom}_{R}(D,I)),\end{CD}\] where the quasi-isomorphism \(\operatorname{e}_{P,I}\) is from Lemma 3.2, and the unlabeled isomorphism is due to the adjunction \((-\otimes_{A}D,\operatorname{Hom}_{R}(D,-))\). Since \(Q\) is homotopy projective and \(\operatorname{e}_{A,I}\) is a quasi-isomorphism, one gets that \((\operatorname{e}_{A,I})_{*}\) is also a quasi-isomorphism. It follows immediately from Lemma 3.4 that \((\theta_{M})^{*}\) is a quasi-isomorphism as \(I\) is homotopy injective. Thus, we conclude from the above diagram that \(\pi^{*}\) is a quasi-isomorphism. Note that \(\mathsf{RHom}_{A}(M,A)\in\mathsf{D}_{b}^{f}(A^{\operatorname{op}})\). Hence, \(\operatorname{G-dim}_{A}(M)<\infty\) by Lemmas 3.2 and 3.4. This yields that \(\varphi\) has ascent of finite Gorenstein dimension property. 
Next, we prove (2) \(\Rightarrow\) (1) under the hypothesis that the Krull dimension of \(R\) is finite. Since \(\varphi\) has descent of the finite Gorenstein dimension property, one has \(\operatorname{G-dim}_{R}(A)<\infty\). This yields that \(\operatorname{G-dim}_{R}(D)<\infty\); see Remark 3.3. Combining this with the fact that \(\varphi\) has ascent of the finite Gorenstein dimension property, we see that \(D\) has finite Gorenstein dimension over \(A\) on both sides. It suffices to show that \(D\) is perfect as a complex of left \(A\)-modules; the same argument holds for \(D\) as a complex of right modules. By Lemma 3.11, there exists an exact triangle in \(\mathsf{D}^{f}_{b}(A)\): \[K[n{-}1]\to P\to D\to K[n],\tag{\(\dagger\)}\] where \(n=\operatorname{G-dim}_{A}(D)\), \(P\) is a perfect complex and \(K\) is Gorenstein projective over \(A\). Since \(P\) is perfect, there exists \(l>0\) such that \(\operatorname{Ext}_{A}^{>l}(M,P)=0\) for all Gorenstein projective \(A\)-modules \(M\). Set \(d=\dim(R)\). Note that \(\operatorname{G-dim}_{R}(M)<\infty\) for any Gorenstein projective \(A\)-module \(M\) by the hypothesis, and moreover, it follows from [1, Theorem 4.13 and Corollary 4.15] that \(\operatorname{G-dim}_{R}(M)\leq d\) and \(\operatorname{Ext}_{R}^{>d}(M,R)=0\). Then \(\operatorname{Ext}_{A}^{>d}(M,D)\cong\operatorname{Ext}_{R}^{>d}(M,R)=0\) for all Gorenstein projective \(A\)-modules \(M\). Applying \(\mathsf{RHom}_{A}(M,-)\) to (\(\dagger\)), we get an exact triangle \[\mathsf{RHom}_{A}(M,K)[n{-}1]\to\mathsf{RHom}_{A}(M,P)\to\mathsf{RHom}_{A}(M,D)\to\mathsf{RHom}_{A}(M,K)[n]\] in \(\mathsf{D}(\mathbb{Z})\). Thus, we conclude that there exists an integer \(m>\max\{l,d\}+n\) such that \(\operatorname{Ext}_{A}^{>m}(M,K)=0\) for all Gorenstein projective \(A\)-modules \(M\). By Lemma 3.10, \(K\) is projective, and hence \(D\) is perfect over \(A\) by (\(\dagger\)). This completes the proof. **Remark 3.13**.: Keep the assumption as Theorem 3.12. It is natural to ask whether (2) implies (1) without assuming the finiteness of the Krull dimension of \(R\). Condition (1) is equivalent to the condition that \(\operatorname{G-dim}_{R_{\mathfrak{p}}}(A_{\mathfrak{p}})<\infty\) and \(\operatorname{Hom}_{R_{\mathfrak{p}}}(A_{\mathfrak{p}},R_{\mathfrak{p}})\) is perfect over \(A_{\mathfrak{p}}\) on both sides for each prime ideal \(\mathfrak{p}\) of \(R\); see [3, Corollary 6.3.4] and [7, Proposition III 6.6]. Since any local ring has finite Krull dimension, it follows from Theorem 3.12 that (1) is equivalent to the condition that \(\varphi_{\mathfrak{p}}\) has ascent and descent of finite Gorenstein dimension property for each prime ideal \(\mathfrak{p}\) of \(R\). Thus, the above question is equivalent to the following: Assume that \(\varphi\colon R\to A\) has ascent and descent of finite Gorenstein dimension property. Does \(\varphi_{\mathfrak{p}}\) have ascent and descent of finite Gorenstein dimension property for each prime ideal \(\mathfrak{p}\)? In [4, Section 4], Avramov and Foxby raised a question: Let \(\varphi\colon R\to A\) be a finite ring homomorphism of commutative noetherian local rings. For a finitely generated \(A\)-module \(M\), if both \(\operatorname{G-dim}_{R}(A)\) and \(\operatorname{G-dim}_{A}(M)\) are finite, then is \(\operatorname{G-dim}_{R}(M)\) finite? By Theorem 3.12, we can immediately get the following result which supports the above question of Avramov and Foxby. 
**Corollary 3.14**.: _Let \(\varphi\colon R\to A\) be a finite ring homomorphism of commutative noetherian rings. Assume \(\mathsf{RHom}_{R}(A,R)\) is perfect over \(A\). For a finitely generated \(A\)-module \(M\), if both \(\mathrm{G}\)-\(\mathrm{dim}_{R}(A)\) and \(\mathrm{G}\)-\(\mathrm{dim}_{A}(M)\) are finite, then \(\mathrm{G}\)-\(\mathrm{dim}_{R}(M)\) is finite._ A surjective ring homomorphism \(\pi\colon R\to A\) of commutative noetherian rings is said to be a _complete intersection_ if the kernel of \(\pi\) is generated by a regular sequence of \(R\). Theorem 3.12 yields the following well-known result; see [13, Theorem 2.3.12]. **Corollary 3.15**.: _Any complete intersection map has ascent and descent of finite Gorenstein dimension property._ Proof.: Let \(\pi\colon R\to A=R/(x_{1},\ldots,x_{n})\) be a complete intersection map, where \(x_{1},\ldots,x_{n}\) is a regular sequence of \(R\). By [9, Proposition 1.6.10 and Corollary 1.6.14], the projective dimension of \(A\) over \(R\) is finite and \(\mathsf{RHom}_{R}(A,R)\simeq A[-n]\). Then, it follows from Theorem 3.12 that \(\pi\) has ascent and descent of finite Gorenstein dimension property. ## 4. Ascent and descent of Gorenstein projective properties In this section, we give a characterization of the ascent and descent of Gorenstein projective property; see Theorem 4.3. This characterization yields that the ascent and descent of Gorenstein projective property is a local property; see Corollary 4.6. **Definition 4.1**.: Let \(\varphi\colon R\to A\) be a ring homomorphism between noetherian rings. We say \(\varphi\) has _ascent and descent of Gorenstein projective property_ if the following two conditions are satisfied : 1. For each finitely generated left \(A\)-module, it is Gorenstein projective over \(A\) if and only if it is Gorenstein projective over \(R\); 2. For each finitely generated right \(A\)-module, it is Gorenstein projective over \(A^{\mathrm{op}}\) if and only if it is Gorenstein projective over \(R^{\mathrm{op}}\). The following is known; see [13, Corollary 2.3.8] for the commutative case. We give a new argument here by using Lemma 3.11; compare with [24, Lemma 6.2]. **Lemma 4.2**.: _Let \(A\) be a noetherian ring. If \(M\) is a finitely generated left \(A\)-module with \(\mathrm{G}\)-\(\dim_{A}(M)=n<\infty\) and \(\mathrm{Ext}^{i}_{A}(M,A)=0\) for all \(i>0\), then \(M\) is Gorenstein projective._ Proof.: We abbreviate \(\mathsf{RHom}_{A}(-,A)\) and \(\mathsf{RHom}_{A^{\mathrm{op}}}(-,A)\) as \((-)^{*}\). Applying the functor \(\mathsf{RHom}_{A^{\mathrm{op}}}(\mathsf{RHom}_{A}(-,A),A)\) to the exact triangle in Lemma 3.11, one has an exact triangle \[K^{**}[n-1]\longrightarrow P^{**}\longrightarrow M^{**}=\mathsf{RHom}_{A^{ \mathrm{op}}}(M^{*},A)\longrightarrow K^{**}[n]\] in \(\mathsf{D}(\mathbb{Z})\). Note that \(K^{**}\simeq K\) and \(P^{**}\simeq P\). Comparing with the exact triangle in Lemma 3.11, we have \(M\xrightarrow{\cong}M^{**}\) and \(\mathrm{Ext}^{i}_{A^{\mathrm{op}}}(M^{*},A)=0\). Thus, \(M\) is Gorenstein projective over \(A\). **Theorem 4.3**.: _Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(R\) is a commutative noetherian ring and \(A\) is a finite \(R\)-algebra. The following two conditions are equivalent:_ 1. \(A\) _is Gorenstein projective over_ \(R\) _and_ \(\mathrm{Hom}_{R}(A,R)\) _is projective over_ \(A\) _on both sides._ 2. 
\(\varphi\) _has ascent and descent of Gorenstein projective property._ _Moreover, if \(\varphi\) has ascent and descent of Gorenstein projective property, then for each finitely generated left \(A\)-module \(M\), \(\mathrm{G}\)-\(\dim_{R}(M)=\mathrm{G}\)-\(\dim_{A}(M)\)._ Proof.: It suffices to prove the first statement since the second one can be checked directly. \((1)\Rightarrow(2)\). It follows from Theorem 3.12 that \(\varphi\) has ascent and descent of finite Gorenstein dimension property. It suffices to prove that for any finitely generated left \(A\)-module \(M\), \(\mathrm{G}\)-\(\mathrm{dim}_{A}(M)=0\) if and only if \(\mathrm{G}\)-\(\mathrm{dim}_{R}(M)=0\). By an analogous argument, the assertion for finitely generated right \(A\)-modules holds. Assume \(\mathrm{G}\)-\(\mathrm{dim}_{A}(M)=0\). Since \(\varphi\) has descent of finite Gorenstein dimension property, \(\mathrm{G}\)-\(\mathrm{dim}_{R}(M)<\infty\). Moreover, by the hypothesis that \(\mathrm{Hom}_{R}(A,R)\) is a projective left \(A\)-module, we infer that for all \(i>0\), \[\mathrm{Ext}^{i}_{R}(M,R)\cong\mathrm{Ext}^{i}_{A}(M,\mathrm{Hom}_{R}(A,R))=0.\] It follows immediately from Lemma 4.2 that \(\mathrm{G}\)-\(\mathrm{dim}_{R}(M)=0\). Hence, \(\varphi\) has descent of Gorenstein projective property. Now assume \(\mathrm{G}\)-\(\mathrm{dim}_{R}(M)=0\). Since \(\mathrm{Hom}_{R}(A,R)\) is projective over \(A^{\mathrm{op}}\) and \(A\) is a Gorenstein projective \(R\)-module, we infer that \(\mathrm{Hom}_{R}(A,R)\otimes_{A}M\) is also a Gorenstein projective \(R\)-module. Moreover, for any \(i>0\) we conclude that \[\mathrm{Ext}^{i}_{A}(M,A) \cong\mathrm{Ext}^{i}_{A}(M,\mathrm{Hom}_{R}(\mathrm{Hom}_{R}(A, R),R))\] \[\cong\mathrm{Ext}^{i}_{R}(\mathrm{Hom}_{R}(A,R)\otimes_{A}M,R)=0\] Note that \(\mathrm{G}\)-\(\mathrm{dim}_{A}(M)<\infty\) as \(\varphi\) has ascent of finite Gorenstein dimension property. Then, it follows from Lemma 4.2 that \(M\) is a Gorenstein projective \(A\)-module. Hence, \(\varphi\) has ascent of Gorenstein projective property. \((2)\Rightarrow(1)\). Since \(\varphi\) has descent of Gorenstein projective property, \(A\) is a Gorenstein projective \(R\)-module. This yields that \(\mathrm{Hom}_{R}(A,R)\) is also a Gorenstein projective \(R\)-module, and then \(\mathrm{Hom}_{R}(A,R)\) is Gorenstein projective over \(A\) on both sides as \(\varphi\) has ascent of Gorenstein projective property. There is a short exact sequence \[0\rightarrow\mathrm{Hom}_{R}(A,R)\to F\to C\to 0\] in \(A\)-mod, where \(F\) is finitely generated projective and \(C\) is Gorenstein projective over \(A\). Then the hypothesis yield that \(C\) is also Gorenstein projective over \(R\). We infer that the sequence is split from \[\mathrm{Ext}^{1}_{A}(C,\mathrm{Hom}_{R}(A,R))\cong\mathrm{Ext}^{1}_{R}(C,R)=0;\] the isomorphism here follows from \(\mathsf{RHom}_{R}(A,R)\simeq\mathrm{Hom}_{R}(A,R)\) and the adjunction \((\mathrm{Res},\mathsf{RHom}_{R}(A,-))\). Hence, \(\mathrm{Hom}_{R}(A,R)\) is a projective left \(A\)-module. The same argument shows that \(\mathrm{Hom}_{R}(A,R)\) is projective as a right \(A\)-module. This completes the proof. From the above result and \((1)\Rightarrow(2)\) in Theorem 3.12, we get a natural observation: if \(\varphi\) has ascent and descent of Gorenstein projective property, then \(\varphi\) has ascent and descent of finite Gorenstein dimension property. The following is immediate from [9, Theorem 3.3.7] and Theorem 4.3. 
**Corollary 4.4**.: _Let \(\varphi\colon(R,\mathfrak{m})\rightarrow(A,\mathfrak{n})\) be a finite local homomorphism of Gorenstein local rings with the same Krull dimension. Then \(\varphi\) has ascent and descent of Gorenstein projective property._ The following is clear, which can be proved by using the characterization of Gorenstein projective modules; see 2.1. **Lemma 4.5**.: _Let \(R\) be a commutative noetherian ring and \(A\) a finite \(R\)-algebra. For any finitely generated left \(A\)-module \(M\), the following are equivalent\(:\)_ 1. \(M\) _is Gorenstein projective over_ \(A\)_._ 2. \(M_{\mathfrak{p}}\) _is Gorenstein projective over_ \(A_{\mathfrak{p}}\) _for each prime ideal_ \(\mathfrak{p}\) _of_ \(R\)_._ 3. \(M_{\mathfrak{m}}\) _is Gorenstein projective over_ \(A_{\mathfrak{m}}\) _for each maximal ideal_ \(\mathfrak{m}\) _of_ \(R\) If we replace "Gorenstein projective" in Lemma 4.5 with "projective", the above statement still holds. That is, being projective is a local property. Combining this with Theorem 4.3 and Lemma 4.5, we can get the next result, showing that the ascent and descent of Gorenstein projective property is a local property; compare this with the question in Remark 3.13. **Corollary 4.6**.: _Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(R\) is a commutative noetherian ring and \(A\) is a finite \(R\)-algebra. The following are equivalent_:__ 1. \(\varphi\colon R\to A\) _has ascent and descent of Gorenstein projective property._ 2. \(\varphi_{\mathfrak{p}}\colon R_{\mathfrak{p}}\to A_{\mathfrak{p}}\) _has ascent and descent of Gorenstein projective property for each prime ideal_ \(\mathfrak{p}\) _of_ \(R\)_._ 3. \(\varphi_{\mathfrak{m}}\colon R_{\mathfrak{m}}\to A_{\mathfrak{m}}\) _has ascent and descent of Gorenstein projective property for each maximal ideal_ \(\mathfrak{m}\) _of_ \(R\)_._ The notion of Frobenius extension of rings is a generalization of Frobenius algebra [27], which includes many interesting examples; see for example [26]. A ring extension \(S\subseteq A\) is called a _Frobenius extension_ if the following equivalent conditions hold: (1) \(A\) is finitely generated projective as a left \(S\)-module and there is an isomorphism \({}_{A}A_{S}\cong\operatorname{Hom}_{S}({}_{S}A_{A},S)\) as \(A\)-\(S\) bimodules. (2) \(A\) is finitely generated projective as a right \(S\)-module and there is an isomorphism \({}_{S}A_{A}\cong\operatorname{Hom}_{S^{\operatorname{op}}}({}_{A}A_{S},S)\) as \(S\)-\(A\) bimodules. Using Theorem 4.3, one can immediately get the following. Indeed, a strong result that any Frobenius extension has ascent and descent of Gorenstein projective property was established in [11, 31, 33]. However, as shown in Example 4.8, the converse does not hold in general. **Corollary 4.7**.: _Let \(\varphi\colon R\to A\) be a ring extension, where \(R\) is a commutative noetherian ring and \(A\) is a finite \(R\)-algebra. If it is a Frobenius extension, then \(\varphi\) has ascent and descent of Gorenstein projective property._ The next example shows that the converse of Corollary 4.7 is not true. **Example 4.8**.: Consider the injection map which maps \(t\) to \(x^{2}\): \[R=k[\![t]\!]/(t^{2})\hookrightarrow A=k[\![x,y,z]\!]/(x^{2}-y^{2},x^{2}-z^{2},xy,xz,yz).\] Both \(R\) and \(A\) are Gorenstein local noetherian rings; see [9, Example 3.2.11]. This map has ascent and descent of Gorenstein projective property, but it is not a Frobenius extension. 
Since \(\dim_{k}(R)=2\) and \(\dim_{k}(A)=5\), \(A\) cannot be free over \(R\). This implies that \(A\) is not projective over \(R\) as \(R\) is local. We end this section with an example that has ascent and descent of Gorenstein projective property, but that is not a Frobenius extension, not a ring homomorphism between Iwanaga-Gorenstein rings, and not a complete intersection map. **Example 4.9**.: Let \(S\) be a commutative noetherian ring which is not Iwanaga-Gorenstein. Consider the canonical surjection \[\pi\colon R=S[\![x]\!]/(x^{2})\twoheadrightarrow S.\] One can check directly that \(\operatorname{Hom}_{R}(S,R)\cong S\) as \(S\)-modules, and \(S\) is Gorenstein projective as an \(R\)-module. It follows from Theorem 4.3 that \(\pi\) has ascent and descent of Gorenstein projective property. ## 5. Testing Iwanaga-Gorenstein rings The main result of this section is Theorem 5.1. As a consequence, one can get a result of Avramov and Foxby; see Corollary 5.2. **Theorem 5.1**.: _Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(R\) is a commutative noetherian ring of finite Krull dimension and \(A\) is a finite \(R\)-algebra. Assume \(\varphi\) has ascent and descent of finite Gorenstein dimension property. If \(R\) is an Iwanaga-Gorenstein ring, then so is \(A\). The converse holds if, in addition, the fibre \(A\otimes_{R}R/\mathfrak{m}\) is nonzero for each maximal ideal \(\mathfrak{m}\) of \(R\)._ Proof.: Let \(R\) be an Iwanaga-Gorenstein ring. Take a minimal injective resolution \(R\xrightarrow{\simeq}I\) over \(R\). Note that for each injective \(R\)-module \(E\), \(\operatorname{Hom}_{R}(A,E)\) is injective both as a left \(A\)-module and as a right \(A\)-module. Combining with \(\operatorname{id}_{R}(R)<\infty\), then \(D:=\operatorname{Hom}_{R}(A,I)\) is a bounded complex of \(A\)-\(A\) bimodules and each term \(D_{i}\) is injective over \(A\) and \(A^{\operatorname{op}}\). Since \(\varphi\) has ascent and descent of finite Gorenstein dimension property, Theorem 3.12 yields that \(D\) is perfect over \(A\) on both sides. Then one can choose a projective resolution \(\pi\colon Q\xrightarrow{\simeq}D\) over \(A^{\operatorname{op}}\) such that \(Q\) is a bounded complex of finitely generated projective right \(A\)-modules. For any finitely generated projective right \(A\)-module \(P\), \(\operatorname{Hom}_{A^{\operatorname{op}}}(P,A)\) is a projective left \(A\)-module. For any \(A\)-\(A\) bimodule \(E\), if \(E\) is injective as a left \(A\)-module, then it follows from [15, Theorem 3.2.16] that as a left \(A\)-module, \(\operatorname{Hom}_{A^{\operatorname{op}}}(P,E)\cong E\otimes_{A} \operatorname{Hom}_{A^{\operatorname{op}}}(P,A)\) is also injective. Hence, \(\operatorname{Hom}_{A^{\operatorname{op}}}(Q,D)\) is a bounded complex of injective left \(A\)-modules. Consider the left \(A\)-linear morphisms \[A\xrightarrow{\simeq}\operatorname{Hom}_{A^{\operatorname{op}}}(D,D) \xrightarrow{\pi^{*}}\operatorname{Hom}_{A^{\operatorname{op}}}(Q,D),\] where the first quasi-isomorphism is from Proposition 3.5. Since \(D\) is homotopy injective, \(\pi^{*}\) is also a quasi-isomorphism. Then, we conclude \(\operatorname{id}_{A}(_{A}A)<\infty\). The same argument will show \(\operatorname{id}_{A}(A_{A})<\infty\). Therefore, \(A\) is an Iwanaga-Gorenstein ring. Conversely, we assume \(A\) is an Iwanaga-Gorenstein ring and \(A\otimes_{R}R/\mathfrak{m}\neq 0\) for each maximal ideal \(\mathfrak{m}\) of \(R\). 
For each \(i\) and each left \(A\)-module \(M\), there is an isomorphism \[\operatorname{Ext}_{R}^{i}(M,R)\cong\operatorname{Ext}_{A}^{i}(M,D).\] Note that \(D\) is perfect over \(A\) on both sides by Theorem 3.12. Combining this with \(\operatorname{id}_{A}(A)<\infty\), we conclude that there exists a positive integer \(j\) such that \(\operatorname{Ext}_{R}^{>j}(M,R)=0\) for all \(M\in A\)-Mod. For each maximal ideal \(\mathfrak{m}\) of \(R\), by assumption \(A\otimes_{R}R/\mathfrak{m}\) is not zero, and then is a direct sum of some copies of \(R/\mathfrak{m}\) over \(R\). If we choose \(M\) to be \(A\otimes_{R}R/\mathfrak{m}\), then \(\operatorname{Ext}_{R}^{>j}(R/\mathfrak{m},R)=0\). In particular, we have \[\operatorname{Ext}_{R_{\mathfrak{m}}}^{>j}(R_{\mathfrak{m}}/\mathfrak{m}R_{\mathfrak{m}},R_{\mathfrak{m}})=0.\] This implies that \(\operatorname{id}_{R_{\mathfrak{m}}}(R_{\mathfrak{m}})\leq j\); see [9, Proposition 3.1.14]. Since \(\mathfrak{m}\) is arbitrary, \(\operatorname{id}_{R}(R)\leq j\). Hence, \(R\) is an Iwanaga-Gorenstein ring. This completes the proof. Following [9, Definition 3.1.18], a commutative noetherian ring \(A\) is said to be _Gorenstein_ provided that \(A_{\mathfrak{p}}\) is Iwanaga-Gorenstein for each prime ideal \(\mathfrak{p}\) of \(A\). A commutative Gorenstein ring need not be Iwanaga-Gorenstein. However, a commutative Gorenstein ring with finite Krull dimension must be Iwanaga-Gorenstein; see for example [10, Theorem 4.1.1]. As a consequence of Theorem 3.12 and Theorem 5.1, one can get the following result which is due to Avramov and Foxby [4, 4.4.4 and 7.7.2]; see also [25, Theorem 6.2]. **Corollary 5.2**.: _Let \(\varphi\colon(R,\mathfrak{m},k)\to(A,\mathfrak{n},l)\) be a finite local ring homomorphism. If \(A\) is Gorenstein, then \(R\) is Gorenstein if and only if \(\operatorname{G-dim}_{R}(A)\) is finite._ Proof.: The "only if" part is clear since \(R\) is a Gorenstein ring and \(A\) is a finitely generated \(R\)-module. For the "if" part, assume \(\operatorname{G-dim}_{R}(A)\) is finite. It follows from Lemma 3.2 that there is an \(A\)-linear quasi-isomorphism \[A\xrightarrow{\simeq}\mathsf{RHom}_{R}(\mathsf{RHom}_{R}(A,R),R).\] This yields the following: \[\mathsf{RHom}_{A}(l,A)\simeq\mathsf{RHom}_{A}(l,\mathsf{RHom}_{R}(\mathsf{RHom}_{R}(A,R),R))\simeq\mathsf{RHom}_{R}(l\otimes^{\mathrm{L}}_{A}\mathsf{RHom}_{R}(A,R),R).\] Since \(\operatorname{G-dim}_{R}(A)<\infty\), \(\mathsf{RHom}_{R}(A,R)\) is in \(\mathsf{D}^{f}_{b}(R)\). This implies that \(\mathsf{RHom}_{R}(A,R)\) is in \(\mathsf{D}^{f}_{b}(A)\). Thus, there is a minimal resolution \(F\xrightarrow{\simeq}\mathsf{RHom}_{R}(A,R)\), that is, \(F\) is a bounded below complex of finitely generated free left \(A\)-modules and \(\partial(F)\subseteq\mathfrak{n}F\); see [18, (2.3.c)] for the existence of minimal free resolutions. Then \[l\otimes^{\mathrm{L}}_{A}\mathsf{RHom}_{R}(A,R)\simeq l\otimes_{A}F=\coprod_{i\in\mathbb{Z}}l^{\beta_{i}}[i],\] where \(\beta_{i}=\operatorname{rank}_{A}(F_{i})\). Since \(A\) is Gorenstein, \(\mathsf{RHom}_{A}(l,A)\simeq l[-d]\), where \(d\) is the Krull dimension of \(A\). We conclude by the above that \[\mathsf{RHom}_{R}(\coprod_{i\in\mathbb{Z}}l^{\beta_{i}}[i],R)\simeq l[-d].\] Then, there exists an index \(j\) such that \(\beta_{j}=1\) and \(\beta_{i}=0\) for \(i\neq j\); if not, the homology of \(\mathsf{RHom}_{R}(\coprod_{i\in\mathbb{Z}}l^{\beta_{i}}[i],R)\) would not be concentrated in a single degree. Thus, \(\mathsf{RHom}_{R}(A,R)\simeq A[j]\). 
In particular, \(\mathsf{RHom}_{R}(A,R)\) is perfect over \(A\). Combining this with the assumption that \(\operatorname{G-dim}_{R}(A)\) is finite, Theorem 3.12 yields that \(\varphi\) has ascent and descent of finite Gorenstein dimension property. Thus, \(R\) is Gorenstein by Theorem 5.1. ## 6. Triangle equivalence of stable categories Let \(A\) be a noetherian ring. Since the category of finitely generated Gorenstein projective left \(A\)-modules \(A\)-Gproj is closed under extensions, it is naturally an exact category in the sense of Quillen. Moreover, it is a Frobenius category, whose projective-injective objects are precisely the projective modules in \(A\)-mod. Therefore, by the general result in [21, I.2], its stable category \(A\text{-}\underline{\mathrm{Gproj}}\) is naturally a triangulated category. In this section, we study when the stable categories of finitely generated Gorenstein projective modules along a ring homomorphism are triangle equivalent; see Theorem 6.4. To prove the theorem, we need some preparations. Let \(\varphi\colon R\to A\) be a ring homomorphism. In this section, we consider \(A\) as an \(A\)-\(R\)-bimodule, and \(\omega=\operatorname{Hom}_{R}(A,R)\) as an \(R\)-\(A\)-bimodule. There is an adjoint pair \[F=\omega\otimes_{A}-\colon A\text{-mod}\rightleftarrows R\text{-mod}\colon G=\operatorname{Hom}_{R}(\omega,-).\] We denote the unit by \(\eta\colon\operatorname{Id}\to GF\), and the counit by \(\varepsilon\colon FG\to\operatorname{Id}\). **Lemma 6.1**.: _Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(R\) is a commutative noetherian ring and \(A\) is a finite \(R\)-algebra. Assume that \(A\) is a projective \(R\)-module and \(\omega=\operatorname{Hom}_{R}(A,R)\) is a projective right \(A\)-module. Then there is an adjoint pair_ \[F\colon A\text{-Gproj}\rightleftarrows R\text{-Gproj}\colon G\] _between the subcategories of Gorenstein projective modules._ Proof.: The hypothesis yields that \(F=\omega\otimes_{A}-\) preserves finitely generated projective modules. For each \(M\in A\)-Gproj, choose a totally acyclic complex of finitely generated projective left \(A\)-modules \(\mathbf{P}\) such that \(M\cong\operatorname{Im}(\partial_{0}^{\mathbf{P}})\). Note that \(\omega\otimes_{A}\mathbf{P}\) is an acyclic complex of finitely generated projective \(R\)-modules. Moreover, we conclude that \(\omega\otimes_{A}\mathbf{P}\) is totally acyclic by the following isomorphisms \[\operatorname{Hom}_{R}(\omega\otimes_{A}\mathbf{P},R)\cong\operatorname{Hom}_{A}(\mathbf{P},\operatorname{Hom}_{R}(\omega,R))\cong\operatorname{Hom}_{A}(\mathbf{P},A).\] This implies that \(\omega\otimes_{A}M\in R\)-Gproj. Thus, \(F\) preserves finitely generated Gorenstein projective modules. Since \(A\) is a finitely generated projective \(R\)-module, for any \(R\)-module \(X\), by the Hom-evaluation we have an isomorphism of abelian groups \[\theta_{X}\colon A\otimes_{R}X=A\otimes_{R}\operatorname{Hom}_{R}(R,X)\to\operatorname{Hom}_{R}(\omega,X)\] which is given by \(a\otimes x\mapsto(f\mapsto f(a)x)\); moreover, it is straightforward to check that \(\theta_{X}\) is a homomorphism of left \(A\)-modules. Hence, \(G\cong A\otimes_{R}-\). Let \(N\) be a finitely generated Gorenstein projective \(R\)-module. There is a totally acyclic complex of finitely generated projective \(R\)-modules \(\mathbf{P}^{\prime}\) such that \(N\cong\operatorname{Im}(\partial_{0}^{\mathbf{P}^{\prime}})\). 
It is clear that \(G(\mathbf{P}^{\prime})\cong A\otimes_{R}\mathbf{P}^{\prime}\) is an acyclic complex of projective left \(A\)-modules. Since \(\operatorname{Im}(\partial_{i}^{\mathbf{P}^{\prime}})\) are Gorenstein projective modules over \(R\) for all \(i\in\mathbb{Z}\), we have \[\operatorname{Ext}_{A}^{j}(A\otimes_{R}\operatorname{Im}(\partial_{i}^{\mathbf{P}^{\prime}}),A)\cong\operatorname{Ext}_{R}^{j}(\operatorname{Im}(\partial_{i}^{\mathbf{P}^{\prime}}),A)=0\] for any \(j\geq 1\). This yields that \(G(\mathbf{P}^{\prime})\cong A\otimes_{R}\mathbf{P}^{\prime}\) is indeed a totally acyclic complex of projective left \(A\)-modules, and then \(G(N)\) is a Gorenstein projective left \(A\)-module. Consequently, we get a restricted adjoint pair \[F\colon A\text{-Gproj}\rightleftarrows R\text{-Gproj}\colon G\] between subcategories of Gorenstein projective modules. **Lemma 6.2**.: _Keep the conditions as above. The functor \(F\colon A\text{-Gproj}\to R\text{-Gproj}\) is faithful. Moreover, \(\eta_{M}\colon M\to GF(M)\) is a monomorphism for each \(M\in A\text{-Gproj}\)._ Proof.: For each \(M\in A\text{-Gproj}\), consider the exact sequence \[0\to K\xrightarrow{\iota}M\xrightarrow{\eta_{M}}GF(M)\] in \(A\text{-mod}\). We claim that \(K\) is zero. This is equivalent to \(\iota\) being the zero map. Note that \(F(\eta_{M})\) is injective by the identity \(\operatorname{id}_{F(M)}=\varepsilon_{F(M)}\circ F(\eta_{M})\). By applying the exact functor \(F=\omega\otimes_{A}-\) to the above sequence, we infer that \(F(\iota)=0\). We have the following commutative diagram \[\begin{CD}\operatorname{Hom}_{R}(F(M),R)@>{\operatorname{Hom}_{R}(F(\iota),R)}>>\operatorname{Hom}_{R}(F(K),R)\\ @V{\cong}VV@VV{\cong}V\\ \operatorname{Hom}_{A}(M,A)@>{\iota^{*}}>>\operatorname{Hom}_{A}(K,A),\end{CD}\] where the vertical isomorphisms are from the adjunction \((F,G)\) and the \(A\)-linear isomorphism \(\operatorname{Hom}_{R}(\omega,R)\cong A\). Thus, \(\iota^{*}=\operatorname{Hom}_{A}(\iota,A)=0\). Now, consider the following commutative diagram \[\begin{CD}K@>{\iota}>>M\\ @V{\operatorname{e}_{K,A}}VV@VV{\operatorname{e}_{M,A}}V\\ \operatorname{Hom}_{A^{\operatorname{op}}}(\operatorname{Hom}_{A}(K,A),A)@>{\iota^{**}}>>\operatorname{Hom}_{A^{\operatorname{op}}}(\operatorname{Hom}_{A}(M,A),A),\end{CD}\] where the isomorphism \(\operatorname{e}_{M,A}\) is from the assumption that \(M\in A\)-Gproj. Hence, we infer from \(\iota^{**}=\operatorname{Hom}_{A^{\operatorname{op}}}(\iota^{*},A)=0\) that \(\iota=0\), as desired. Assume that \(f\colon X\to Y\) is a morphism in \(A\)-Gproj such that \(F(f)=0\). Analogously to the above, we can prove that \(f=0\). Hence, the functor \[F\colon A\text{-Gproj}\to R\text{-Gproj}\] is faithful. This completes the proof. **Lemma 6.3**.: _Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(R\) is a commutative noetherian ring and \(A\) is a finite \(R\)-algebra. If \(A\) is a projective generator in \(R\)-mod, then the functor \(G\colon R\text{-mod}\to A\text{-mod}\) is faithful. Moreover, for any \(N\in R\)-mod, \(\varepsilon_{N}\colon FG(N)\to N\) is an epimorphism._ Proof.: As an \(R\)-module, \(A\) is a projective generator if and only if it is faithfully flat; hence \(G\simeq A\otimes_{R}-\) is a faithful functor. Moreover, the proof of [12, Lemma 2.1 (2)] implies that for any \(N\in R\text{-mod}\), \(\varepsilon_{N}\colon FG(N)\to N\) is an epimorphism. **Theorem 6.4**.: _Let \(\varphi\colon R\to A\) be a ring homomorphism, where \(R\) is a commutative noetherian ring and \(A\) is a finite \(R\)-algebra. 
Assume that \(A\) is a projective generator as an \(R\)-module and \(\omega=\operatorname{Hom}_{R}(A,R)\) is projective as a right \(A\)-module. Then the following are equivalent_:__ 1. _The induced adjoint pair yields mutually inverse equivalences_ \[F\colon A\text{-}\underline{\operatorname{Gproj}}\rightleftarrows R\text{-}\underline{\operatorname{Gproj}}\colon G.\] 2. _For any_ \(M\in A\text{-}\mathrm{Gproj}\) _and_ \(N\in R\text{-}\mathrm{Gproj}\)_, the projective dimensions of_ \(\operatorname{Coker}(\eta_{M})\) _and_ \(\operatorname{Ker}(\varepsilon_{N})\) _are finite._ Proof.: It follows immediately from Lemma 6.1 that there is an induced adjoint pair of stable categories \[F\colon A\text{-}\underline{\operatorname{Gproj}}\rightleftarrows R\text{-}\underline{\operatorname{Gproj}}\colon G.\] Condition (1) is equivalent to the statement that, for each \(M\in A\text{-}\mathrm{Gproj}\) and \(N\in R\text{-}\mathrm{Gproj}\), the morphisms \(\eta_{M}\) and \(\varepsilon_{N}\) become isomorphisms in \(A\text{-}\underline{\mathrm{Gproj}}\) and \(R\text{-}\underline{\mathrm{Gproj}}\), respectively. For each \(M\in A\)-Gproj, by Lemma 6.2 we have a short exact sequence of left \(A\)-modules \[0\to M\xrightarrow{\eta_{M}}GF(M)\longrightarrow\operatorname{Coker}(\eta_{M})\to 0,\] which implies that \(\operatorname{G-dim}_{A}(\operatorname{Coker}(\eta_{M}))\leq 1\). There is a canonical fully faithful functor \[\operatorname{\mathsf{can}}\colon A\text{-}\underline{\text{Gproj}}\to\mathsf{D}_{sg}(A)=\mathsf{D}_{b}^{f}(A)/\mathsf{perf}(A)\] which sends a module to the corresponding stalk complex, where \(\mathsf{perf}(A)\) is the full subcategory of \(\mathsf{D}_{b}^{f}(A)\) consisting of perfect complexes, and \(\mathsf{D}_{sg}(A)\) stands for the singularity category; see for example [8, Theorem 3.1]. From the above embedding, we conclude that \(\eta_{M}\) becomes an isomorphism in \(A\text{-}\underline{\text{Gproj}}\) if and only if \(\operatorname{Coker}(\eta_{M})\) has finite projective dimension over \(A\). For each \(N\in R\)-Gproj, we infer from Lemma 6.3 that there is a short exact sequence \[0\to\operatorname{Ker}(\varepsilon_{N})\longrightarrow FG(N)\xrightarrow{\varepsilon_{N}}N\to 0,\] which also implies that \(\operatorname{Ker}(\varepsilon_{N})\in R\)-Gproj; see [2, Proposition 5.1]. Moreover, the above short exact sequence yields an exact triangle \[\operatorname{Ker}(\varepsilon_{N})\longrightarrow FG(N)\xrightarrow{\varepsilon_{N}}N\to\Sigma\operatorname{Ker}(\varepsilon_{N})\] in the stable category \(R\text{-}\underline{\text{Gproj}}\), where \(\Sigma\) is the suspension functor; see [21, Chapter 1] for more details on stable categories of Frobenius categories. Hence, combining this with the well-known fact that the projective dimension of any Gorenstein projective module is either zero or infinite ([15, Proposition 10.2.3]), we infer that \(\varepsilon_{N}\) becomes an isomorphism in \(R\text{-}\underline{\text{Gproj}}\) if and only if \(\operatorname{Ker}(\varepsilon_{N})\) has finite projective dimension. **Remark 6.5**.: (1) Theorem 6.4 is inspired by [12, Proposition 4.2] for a Frobenius pair of faithful functors. However, our new ingredient is that we do not require \(F\colon A\text{-}\mathrm{mod}\to R\text{-}\mathrm{mod}\) to be a faithful functor; see Lemma 6.2. Moreover, \((F,G)\) is not required to be a Frobenius pair. If \(\varphi\colon R\to A\) is a Frobenius extension, then \(A\) is a finitely generated projective \(R\)-module and \(\omega=\operatorname{Hom}_{R}(A,R)\) is isomorphic to \(A\) as an \(R\)-\(A\)-bimodule. 
In this case, \(F=\omega\otimes_{A}-\) is precisely the restriction functor, and \(F\colon A\text{-}\mathrm{mod}\rightleftarrows R\text{-}\mathrm{mod}\colon G\) is a Frobenius pair of functors. (2) We recall from [24, Definition 1.1] that an \(R\)-algebra \(A\) is _Gorenstein_ if the \(R\)-module \(A\) is finitely generated and projective, and for each prime ideal \(\mathfrak{p}\) of \(R\) with \(A_{\mathfrak{p}}\neq 0\), the ring \(A_{\mathfrak{p}}\) is Iwanaga-Gorenstein. For the Gorenstein \(R\)-algebra \(A\), it follows from [24, Proposition 6.7] that the composition of \(\omega\otimes_{A}-\) and the Gorenstein projective approximation functor induces an equivalence \(A\text{-}\underline{\text{Gproj}}\to A\text{-}\underline{\text{Gproj}}\) of the stable category, where \(\omega=\operatorname{Hom}_{R}(A,R)\) is an \(A\)-bimodule. If \(A\) is a finite projective \(R\)-algebra over a commutative Gorenstein ring \(R\), then \(A\) is a Gorenstein \(R\)-algebra if and only if the \(A\)-bimodule \(\omega\) is perfect on both sides; see [24, Theorem 4.6] or [20, Theorem 6.7]. ### Acknowledgements The authors are grateful to X.-W. Chen and S.B. Iyengar for their helpful comments and suggestions. After we put our preprint on arXiv, C. Psaroudakis kindly pointed out some relations between Theorem 1.1, 1.5 and Theorem I of the paper [29]. We are grateful to him for the reference and helpful comments. This work is supported by the National Natural Science Foundation of China (No. 11871125).
2304.12189
Machine Learning-based Methods for Joint Detection-Channel Estimation in OFDM Systems
In this work, two machine learning (ML)-based structures for joint detection-channel estimation in OFDM systems are proposed and extensively characterized. Both ML architectures, namely Deep Neural Network (DNN) and Extreme Learning Machine (ELM), are developed to provide improved data detection performance and compared with the conventional matched filter (MF) detector equipped with the minimum mean square error (MMSE) and least square (LS) channel estimators. The bit-error-rate (BER) performance vs. computational complexity trade-off is analyzed, demonstrating the superiority of the proposed DNN-OFDM and ELM-OFDM detector methodologies.
Wilson de Souza Junior, Taufik Abrao
2023-04-08T19:30:23Z
http://arxiv.org/abs/2304.12189v1
# Machine Learning-based Methods for Joint Detection-Channel Estimation in OFDM Systems ###### Abstract In this work, two machine learning (ML)-based structures for joint detection-channel estimation in OFDM systems are proposed and extensively characterized. Both ML architectures, namely Deep Neural Network (DNN) and Extreme Learning Machine (ELM), are developed to provide improved data detection performance and compared with the conventional matched filter (MF) detector equipped with the minimum mean square error (MMSE) and least square (LS) channel estimators. The bit-error-rate (BER) performance _vs_ computational complexity trade-off is analyzed, demonstrating the superiority of the proposed DNN-OFDM and ELM-OFDM detectors methodologies. Machine learning, neural networks, OFDM, detection, deep learning, DNN, ELM, MMSE, LS, BER. ## I Introduction The conventional orthogonal frequency-division multiplexing (OFDM) system is a multi-carrier scheme widely utilized in communication systems due to its capacity to combat frequency-selective fading in wireless channels. Besides, Artificial intelligence (AI) and machine learning (ML) are relevant approaches in the current complex, highly demanded radio access scenarios, combined with the spectrum scarceness. The ML resources and techniques can be applied to improve the performance-complexity trade-off of OFDM systems, specifically on the receiver side. In this work, two AI-based methods, specifically a DNN-based and an ELM-based jointly symbol detection and pilot-assisted channel estimation are investigated; both techniques are compared with the conventional linear estimation methods, such as least square (LS) and minimum mean square error (MMSE) [1, 2]. ML techniques have been widely used in different telecommunication applications as a satisfactory predictor in OFDM system [3, 4, 5], as a near-optimal signal detection in OFDM with index modulation (OFDM-IM) [6], and as a channel estimator for massive MIMO [7]. In [3], the authors discuss the deep learning (DL) applicability for channel estimation and signal detection in OFDM systems. The DL-based prediction technique is explored to implicitly estimate the channel state information (CSI) and then detect the transmitted symbols using the estimated CSI. For that, the deep learning model is first trained offline using the data generated by simulation based on channel statistics and then used for recovering the online transmitted data directly. Deep neural network (DNN) is a type of artificial neural network presenting a large number of hidden layers and hyper-parameters into its composition, _i.e._, DNN implies high computational operations in contrast to Extreme Learning Machine (ELM) that has simply one hidden layer [8]. In intricate telecommunication scenarios, specifically in the 5G and beyond systems, authors in [4] propose a different architecture for the OFDM receiver aided by ELM technique. Besides, a multi-ELM, _i.e._, a parallel multiple split complex ELM structure is proposed in [5]. In [6], a DL-based detector structure for OFDM with index modulation (OFDM-IM) is proposed, termed DeepIM. The authors deploy a deep neural network with fully connected layers to recover data. Aiming to enhance the DeepIM performance, the received signal and channel vectors are pre-processed based on the domain knowledge before entering the network. Data sets available by simulations are deployed to train offline the DeepIM aiming at optimizing the bit error rate (BER) performance. 
After that, the trained model is deployed for the online signal detection. _Contributions_. We propose and analyze the deployment of promising ML tools applied through jointly detection and channel estimation in OFDM systems. **i)** First, we have adopted and analyzed two ML-based topologies for OFDM data detection and channel estimation: a DNN-based and ELM-based OFDM joint detector and channel estimator. **ii)** We have deployed and characterized existing models by applying them to more complex scenar ios, including multi-user systems, realistic path-loss, and short-term fading wireless channel configurations. **iii**) Extensive numerical results characterizing the performance-complexity trade-off for both ML-based OFDM detectors, demonstrating that such an approach is quite competitive. _Notations_. Italic lowercase or capital letters are scalars, boldface capital letters denote the frequency-domain vectors meanwhile boldface lowercase letters are vectors in the time domain. Operators \(\mathbb{E}[\cdot]\), \((\cdot)^{T}\), \((\cdot)^{H}\) and \((\cdot)^{\dagger}\) denote the statistical expectation, a vector or matrix transpose, Hermitian and Moore-Penrose pseudo-inverse, respectively; \(|\mathcal{A}|\) holds for the cardinality of the set \(\mathcal{A}\), \(\odot\) denotes the element-wise multiplication, \(\oslash\) denotes the element-wise division, and represents the convolution operation. ## II System Model Assuming an OFDM system with a set \(\mathcal{U}=\{1,\ldots,U\}\) users, a number of \(N_{c}\) sub-carriers and the duration of the cyclic prefix \(T_{g}\), a transmitted signal in frequency domain can be defined as \(\textbf{X}=[X_{1},\ldots,X_{N_{c}}]^{T}\), leading to a received signal \(\textbf{Y}=[Y_{1},\ldots,Y_{N_{c}}]^{T}\), with multi-path channel \(\textbf{H}=[H_{1},\ldots,H_{N_{c}}]^{T}\), and zero-mean Gaussian noise samples \(\textbf{Z}=[Z_{1},\ldots,Z_{N_{c}}]^{T}\) described by complex random variables \(Z_{i}\sim\mathcal{CN}(0,\sigma^{2})\), where \(\sigma^{2}\) is the noise power in each OFDM sub-channel. The received signal in the _frequency_ and _time domain_ can be written, respectively: \[\textbf{Y}=\textbf{X}\odot\textbf{H}+\textbf{Z},\qquad\text{and}\qquad\textbf{ y}=\textbf{x}\vartriangle\textbf{h}+\textbf{z} \tag{1}\] where **y**, **x**, **h** and **z** means IDFT of **Y**, **X**, **H** and **Z** respectively. In the considered path-loss model the received signal power decays according to \(d_{k}^{\eta}\), where \(d_{k}\) is the distance between BS and the \(k\)th user, while \(\eta\) represents the path-loss exponent. Hence, the transmitted power per sub-carrier and the average receiver power per subcarrier (\(P\)) are related by: \(P=d_{u}^{-\eta}\cdot\frac{P_{\text{r}}}{N_{c}}\), where \(P_{\text{r}}\) is the total power available at transmitter side. Besides, since more than one user sharing the same sub-channel is admitted, resulting in an OFDM system operating under inter-user interference (IuI), in Section IV we proceed with sub-carrier selection to know what sub-carriers sub-set results in smaller IuI, aiming to maximize the SINR in the \(k\)th sub-carrier. **LS OFDM Channel Estimation**. As aforementioned, firstly we assume that an OFDM pilot symbol is transmitted (channel estimation mode) and then OFDM data symbols can be transmitted (data mode) inside the channel coherence time \((\Delta t)_{\text{c}}\) interval; this composition form an OFDM frame. 
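For concreteness, the per-sub-carrier model of Eq. (1), together with the path-loss and equal-power relation described above, can be simulated with a few lines of NumPy. The sketch below is illustrative rather than the full Section V setup: the distance, SNR, and constellation are placeholder values, and the sign handling of the path-loss exponent follows the statement that the received power decays as \(d_k^{\eta}\).

```python
import numpy as np

# Illustrative parameters (placeholders, not the paper's full simulation setup)
Nc = 64           # number of sub-carriers
Pt = 1e-3         # total transmit power [W]
d = 100.0         # BS-user distance [m]
eta = -3          # path-loss exponent: received power decays as d**eta
snr_db = 15.0     # average per-sub-carrier SNR at the receiver [dB]

P = d**eta * Pt / Nc                  # average received power per sub-carrier
sigma2 = P / 10**(snr_db / 10)        # noise power per sub-carrier

# 4-QAM (QPSK) symbols on every sub-carrier, scaled to power P
bits = np.random.randint(0, 2, (Nc, 2))
X = np.sqrt(P / 2) * ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1))

# i.i.d. Rayleigh channel and AWGN, Eq. (1): Y = X (element-wise) H + Z
H = (np.random.randn(Nc) + 1j * np.random.randn(Nc)) / np.sqrt(2)
Z = np.sqrt(sigma2 / 2) * (np.random.randn(Nc) + 1j * np.random.randn(Nc))
Y = X * H + Z
```

An OFDM frame is then obtained by stacking one such pilot symbol and the data symbols transmitted within the same channel coherence time.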
Inside an OFDM frame, the channel state information (CSI) is unchangeable but it changes from one frame to another. One common technique for OFDM channel estimation is the _least-squares_ (LS) method [1]. This technique is the simplest way to estimate the state of the channel; as a result, it is possible to estimate the OFDM symbols inside the same \((\Delta t)_{c}\) time interval. Once \(\mathbf{X}_{p}=[X_{1},\ldots,X_{N^{\mathrm{ilot}}}]^{T}\) and \(\mathbf{Y}_{p}=[Y_{1},\ldots,Y_{N^{\mathrm{ilot}}}]^{T}\) are the transmitted and received OFDM pilot vectors, respectively, with \(N^{\mathrm{ilot}}_{c}\) the number of sub-carriers reserved to the pilots in the OFDM frame, then the LS _channel estimation_ in the pilot OFDM sub-channels, and the _data detection_ based on LS channel estimator are obtained, respectively, by: \[\tilde{\mathbf{H}}^{\mathrm{LS}}=\mathbf{Y}_{p}\oslash\mathbf{X}_{p},\qquad \text{and}\qquad\tilde{\mathbf{X}}_{d}=\mathbf{Y}_{d}\oslash\tilde{\mathbf{H}} ^{\mathrm{LS}}, \tag{2}\] where \(N^{\mathrm{data}}_{c}\) is the number of sub-carriers destined to data symbol in the OFDM frame; \(\tilde{\mathbf{X}}_{d}\) and \(\mathbf{Y}_{d}\) are the recovery data and received data vectors, respectively. **MMSE OFDM Channel Estimation.** The MMSE channel estimator is considered a better linear solution than the aforementioned LS channel estimation due to the weight (regularization) channel matrix inversion, which is optimized in the same way as the LS solution according to the minimum mean square error problem. However, the development of the MMSE solution requires the knowledge of the signal-to-noise ratio (SNR), being the channel estimate obtained from the LS solution as [1]: \[\tilde{\mathbf{H}}^{\mathrm{MMSE}}=\mathbf{R}_{\mathbf{H}\tilde{\mathbf{H}}^{ \mathrm{LS}}}\left[\mathbf{R}_{\mathbf{H}\mathbf{H}}+\mathbf{I}_{\overline{ \gamma}}^{\frac{1}{2}}\right]^{-1}\tilde{\mathbf{H}}^{\mathrm{LS}}, \tag{3}\] where \(\mathbf{R}_{\mathbf{A}\mathbf{B}}\) denotes the cross-correlation matrix between matrices \(\mathbf{A}\) and \(\mathbf{B}\), \(i.e.\), \(\mathbf{R}_{\mathbf{A}\mathbf{B}}=\mathbb{E}[\mathbf{A}\mathbf{B}^{H}]\); the pre-processing SNR at the receiver side is defined as \(\bar{\gamma}\triangleq\frac{P}{\sigma^{2}}\), with \(P\) the average power per sub-channel at receiver side. ## III ML-based OFDM Detection Schemes In the context of machine learning, the training occurs by generating random data communicating across a channel that arrives at the receiver, and then that data is part of a data set containing labels and features. This paper presents an analysis and comparison of two different ML-based detectors that are promising for realistic OFDM system scenarios. The deployed OFDM system model is depicted in Fig. 1; the DNN and ELM architectures are described in the following. **DNN-based Detection**. The DNN is an architecture composed by \(2\cdot N_{c}\) input nodes being a real and imaginary part of OFDM frame, where \(N_{c}=N^{\mathrm{pilot}}_{c}+N^{\mathrm{data}}_{c}\). This model has exclusively 3 hidden layers, Fig. 2, in which each layer is composed of 500, 250, and 120 neurons, respectively, in the same way as adopted in [3]. The proposed DNN-based OFDM detector is exclusively inspired in offline training strategy, _i.e._, for the training stage, the DNN inputs (features) are composed by the real and imaginary part of OFDM symbols that arrives at the receiver, while the outputs (labels) are estimates for the transmitted bits. 
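The architecture and training choices just described (three hidden layers with 500, 250 and 120 ReLU neurons, a sigmoid output layer, the Adam optimizer and the MSE loss; see also Table I) can be realized, for instance, with the following Keras sketch. The choice of framework is ours, since the paper does not specify one, and the training arrays are placeholders for the simulated data set described in the text.

```python
import tensorflow as tf

n_in, n_out = 256, 64   # input/output sizes from Table I:
                        # 256 = real and imaginary parts of the pilot and data symbols of one frame

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_in,)),
    tf.keras.layers.Dense(500, activation="relu"),
    tf.keras.layers.Dense(250, activation="relu"),
    tf.keras.layers.Dense(120, activation="relu"),
    tf.keras.layers.Dense(n_out, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")

# Offline training on a simulated data set (placeholder array names):
# features = [Re(Y); Im(Y)] of received frames, labels = transmitted bits.
# model.fit(Y_train, b_train, epochs=1000, batch_size=250)
# Online detection (test stage): hard decisions on the sigmoid outputs.
# b_hat = (model.predict(Y_test) > 0.5).astype(int)
```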
The DNN outputs are obtained from a non-linear function of input nodes: \[\mathbf{\hat{X}}_{\text{\tiny DNN}}=f(\mathbf{Y},\theta)=f^{L-1}(f^{L-2}(...f^ {1}(\mathbf{Y}))) \tag{4}\] where \(\theta\) is the set of bias and weights and \(L\) means the number of layers. The bias and weight coefficients are optimized in the training stage. The DNN model has the goal of minimizing the mean squared error (MSE) loss function, defined by: \[\mathcal{F}_{\text{loss}}=\frac{1}{L}\sum_{k=1}^{L}\left[\mathbf{\hat{X}}(k)- \mathbf{X}(k)\right]^{2} \tag{5}\] where \(\mathbf{\hat{X}}(k)\) denotes the predictions, \(\mathbf{X}(k)\) the data symbol, and \(L\) is the number of data samples in the estimation data set. _DNN Training_. The training step is responsible for the DNN to learn the channel characteristics; hence the data set must be known and sufficiently large, beyond it should be transmitted in a fraction of the channel coherence time \((\Delta t)_{\mathrm{c}}\) interval to allow the system attains suitable accuracy in the channel estimate process. Once trained, the network may be utilized to decode data to any online transmission scheme (_test stage_), also assuming the same parameters utilized previously at the training stage. **Extreme Learning Machine (ELM) based Detection**. The ELM network has other important feature that differs from DNN, its architecture is subdivided into \(N_{c}\) sub-networks, _i.e._, in the OFDM context, a sub-network is deployed to treat the signal of each sub-carrier. Fig. 3 depicts an ELM topology for each OFDM sub-carrier. The parameters of hidden nodes of ELM must be randomly generated, and afterward, they should be fixed to determine the output layer weights according to [4, 5]. ELM architecture has 2 input nodes, the real and imaginary parts of the \(n\)th sub-channel. In OFDM systems, the ELM architecture assumes that the data and pilots are time-division multiplexed, _i.e._, inside a channel coherence time \((\Delta t)_{\mathrm{c}}\) interval, there are \(I\) transmitted pilots and \(K\) transmitted data symbols. This is a primordial feature once the channel can be assumed invariant into the \((\Delta t)_{\mathrm{c}}\) interval. Still, the noise and possibly co-channel interference at the receiver side are variant in time. The data set provided by \(I\) pilots is helpful for the network to learn about statistics from noise plus interference. The hidden layer from ELM has an activation function applied to the data from the input layer, Fig. 3, where the output of \(\ell\)th hidden node is given by: \[\mathbf{o}_{i,L}=g(\mathbf{a}_{i}^{T}\cdot\mathbf{Y}_{i}+b_{L}) \tag{6}\] where \(L\) means the number of hidden neurons, \(\mathbf{a}_{i}\) is a column vector of weights with dimension 2 \(\times\) 1, while \(b_{L}\) means the bias from \(\ell\)th hidden node and \(\mathbf{Y}_{i}\in\mathbb{R}^{2\times 1}\) is the transmitted data. The \(j\)th hidden layer matrix \(\mathbf{O}\in\mathbb{R}^{I\times L}\) is given by: \[\mathbf{O}_{j}=\begin{bmatrix}g(\mathbf{a}_{1}^{T}\cdot\mathbf{Y}_{1}+b_{1}) \cdots g(\mathbf{a}_{L}^{T}\cdot\mathbf{Y}_{1}+b_{L})\\ \vdots\cdots\vdots\\ g(\mathbf{a}_{1}^{T}\cdot\mathbf{Y}_{I}+b_{1})\cdots g(\mathbf{a}_{L}^{T}\cdot \mathbf{Y}_{I}+b_{L})\end{bmatrix} \tag{7}\] Figure 1: OFDM & DNN training. Figure 4: BER for DNN, LS and MMSE CE under 4-QAM: (a) 64 pilots and (b) Reducing #pilots Figure 3: ELM architecture. Figure 2: DNN architecture. 
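A single ELM sub-network (one sub-carrier) is small enough to sketch directly in NumPy: the hidden-layer matrix of Eqs. (6)-(7) is built from fixed random weights, and the output weights are then obtained non-iteratively from the pilot block by the pseudo-inverse step of Eq. (8) given just below, followed by data detection as in Eq. (9). The hidden-layer size \(L=50\), the radial-basis activation, and the pilot/data block lengths follow Table I; the received arrays are placeholders.

```python
import numpy as np

L, I, K = 50, 100, 400   # hidden neurons, pilot symbols, data symbols (Table I)

def radbas(x):
    """Radial-basis activation of Table I: radbas(x) = exp(-x**2)."""
    return np.exp(-x**2)

# Random, fixed hidden-layer parameters (a_l in R^2, b_l scalar), Eq. (6)
A = np.random.randn(2, L)
b = np.random.randn(L)

def hidden_matrix(Y):
    """Hidden-layer matrix O of Eq. (7); Y holds [Re, Im] of received samples, shape (n, 2)."""
    return radbas(Y @ A + b)

# --- training on I pilot symbols of one sub-carrier, Eq. (8) ---
Y_pilot = np.random.randn(I, 2)                         # placeholder received pilots [Re, Im]
X_pilot = np.random.choice([-1.0, 1.0], size=(I, 2))    # placeholder transmitted pilots
B = np.linalg.pinv(hidden_matrix(Y_pilot)) @ X_pilot    # output weight matrix, L x 2

# --- detection of K data symbols, Eq. (9) ---
Y_data = np.random.randn(K, 2)                          # placeholder received data
X_hat = hidden_matrix(Y_data) @ B                       # estimated [Re, Im] of transmitted symbols
```

Because the hidden-layer parameters stay fixed, the only computation in training is one \(I\times L\) pseudo-inverse per sub-carrier, which is what keeps the ELM detector lightweight compared with the DNN.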
Different from DNN, the ELM topology can be trained in a non-iterative mode in order to minimize a training error function: \[\hat{\mathbf{B}}=\min_{\mathbf{B}\in\mathbb{R}^{L\times 2}}\left\|\mathbf{O} \mathbf{B}-\mathbf{X}_{\mathrm{pilot}}\right\|\ =\ \mathbf{O}^{\dagger}\mathbf{X}_{\mathrm{pilot}} \tag{8}\] where \(\hat{\mathbf{B}}\) is a \(L\times 2\) dimensional matrix denominated _output weight matrix_. Once the training stage is completed for each sub-network (sub-carrier), then ELM can operate, _i.e._ the ELM-OFDM detector can estimate the data by: \[\hat{\mathbf{X}}_{\mathrm{ELM}}=\mathbf{O}\hat{\mathbf{B}} \tag{9}\] Eqs. (4) and (9) explicitly provide estimates of the transmitted signal (OFDM data detection). Although the OFDM channel estimation intrinsically occurs inside the ML detection step, explicitly, we do not proceed with the channel estimation step. In this sense, the proposed ML-based techniques are also suitable for non-coherent signal detection since such schemes explicitly neither require channel state information knowledge at the transmitter nor the receiver. ## IV Sub-Carrier Selection for maximizing SINR We describe a method for sub-carrier selection and allocation aiming at maximizing the signal-to-interference plus noise ratio (SINR) across the users. Hence, in this work, we fix a pre-defined value for the maximum number of sub-carriers per user, \(N_{\mathrm{cpu}}\), and the number of sub-carriers that can support two OFDM users simultaneously subject to equal interference level in those subcarriers. Notice that in the adopted system model, we have admitted more than one user sharing the same sub-channel, resulting in an OFDM system operating under inter-user interference. Also, to facilitate the analysis, but without loss of generality, we have defined and selected an equal number of sub-carriers per user that generate the lesser interference over other users, \(N_{\mathrm{cpu}}^{\mathrm{eq}}\). Our principal goal is to allocate as best as possible the sub-carriers among the users in such a way that maximizes the SINR (optimization metric) in the \(k\)th sub-carrier subject to interference in a given OFDM frame, which can be written as: \[\mathrm{SINR}_{u}(k)=\frac{P_{u}(k)|H_{u}(k)|^{2}}{P_{j}(k)|H_{j}(k)|^{2}+ \sigma^{2}},\qquad j\neq u;\qquad\forall k=1,\ldots,N_{c}^{\mathrm{data}}, \quad\forall u\in\mathcal{U}; \tag{10}\] \[\text{with}\quad P_{u}(k)=\frac{P_{\mathrm{\tau}}}{N_{c}^{\mathrm{data}}},\ \text{ and }\ P_{j}(k)=\begin{cases}\frac{P_{\mathrm{\tau}}}{N_{c}^{\mathrm{data}}},& \text{if }\ j\in\mathcal{J}_{u}\\ 0&\text{otherwise}\end{cases}\] where \(\mathcal{J}_{u}\) represents the sub-set of users interfering in the \(k\)th subcarrier of user \(u\). Notice that \(P_{u}(k)=\frac{P_{t}}{N_{\rm cpu}^{\rm data}}\) indicates equal power allocation (EPA) policy across the users, where \(P_{j}(k)=\frac{P_{t}}{N_{\rm data}^{\rm data}}\) means the EPA policy also for the \(j\)th interfering user in the \(k\)th sub-carrier; \(H_{u}(k),H_{j}(k)\) mean the channel response for the \(u\)th and \(j\)th user in the \(k\)th sub-carrier, respectively. A pseudo-code for subcarriers selection aiming at maximizing the SINR, eq. (10), is depicted in Algorithm 1. \(\mathcal{U}\setminus\mathcal{J}_{u}\) is the set difference; _i.e._, it is the set of all those elements that are in \(\mathcal{U}\) but not in \(\mathcal{J}_{u}\). 
``` 1:for\(u=1,2,\ldots,|\mathcal{U}\setminus\mathcal{J}_{u}|\)do 2:for\(k\)= \(N_{\rm cpu}(u-1)+1:\)\(N_{\rm cpu}(u-1)+N_{\rm cpu}\)do 3: Evaluate the \(k\)th SINR for \(u\)th user as in (10); 4:endfor; 5: Sort all SINR's for \(u\)th user according to the descending order; 6: Select the first \(N_{\rm cpu}^{\rm eq}\) SINR's; 7:endfor ``` **Algorithm 1** Sub-carrier Selection for SINR maximization ## V Simulation Results Numerical results in terms of performance _vs_ complexity trade-off for DNN and ELM are analyzed. The parameters for learning DNN-OFDM and ELM-OFDM architectures are summarized as follows, Fig. I. **OFDM System** - Transmitter Antenna: \(n_{T}=1\); Receiver Antenna: \(n_{R}=1\); Modulation Order (\(M\)-QAM): \(M=4,16,32\); Number of Users: \(U=4\); Sub-carriers: \(N_{c}=64\); Pilots Number: \(N_{\rm pilot}=64,32,16\) and \(8\); Cyclic Prefix: \(T_{g}\) = 25% and 0%, (Fig. 6); Estimation Methods: LS e MMSE. **Sub-carrier Interference & Power Allocation** - Equal Power Allocation (EPA): \(P_{u}(k)=\frac{P_{t}}{N_{\rm c}}\); Max. #sub-carriers/user: \(N_{\rm cpu}=\) 16; Interfering sub-carriers/user: \(N_{\rm cpu}^{\rm eq}\) = 4. **NLoS Channel** - Cell radius: 500 m; Total Power: \(P_{\tau}=1\) mW; SNR range: \(\bar{\gamma}\in[5;~{}25]\) dB; LoS Channel Model: Rayleigh; Path Loss coefficient: \(\eta=-3\); Path Loss model: \(d_{u}^{-\eta}\); Coherence Time: \((\Delta t)_{\rm c}\) = 5 ms; **DNN Architecture** - Hidden Layer: 3; Input Neuron: 256; Hidden Neurons: 500, 250 and 120; Output Neurons: 64; Activation Function: 3 Relu and 1 Sigmoid; Optimizer: Adam; Loss Function: MSE; Epochs: \(10^{3}\); MCS realizations: \(10^{4}\); **ELM Architecture** - Hidden Layer: 1; Input Neurons: \(2\) ; Hidden Neurons: \(L=50\); Output Neurons: 2; Sub-Network: 64; Activation Function: Radbas; Pilots symbols: \(I=50,100,200\); Data symbols: \(K=400\); and MCS realizations: \(\mathcal{T}=10^{3}\). We analyze the influence of several parameters, such as the number of pilots, the number of users, and the SNR training on the BER performance is analyzed. The analytical average BER performance is given by [9]: \(\text{BER}^{\text{theo}}=\frac{\alpha_{\text{M}}}{2}\left[1-\sqrt{\frac{0.55 \beta_{\text{M}}\bar{\gamma}}{1+0.5\beta_{\text{M}}\bar{\gamma}}}\right],\) where \(\alpha_{\text{M}}\) and \(\beta_{\text{M}}\) are constants that depend on the modulation type, _i.e._\(\alpha_{\text{M}}\) is the number of nearest neighbors to a constellation at the minimum distance, and \(\beta_{\text{M}}\) is a factor relating the minimum distance to the average symbol energy [9]. For 4-QAM results \(\text{BER}^{\text{theo}}=\frac{1}{2}\left[1-\sqrt{\frac{\gamma_{\text{b}}}{1+ \gamma_{\text{b}}}}\right]\), where \(\gamma_{\text{b}}=\frac{\bar{\gamma}}{2}\). **Channel Estimation Task**. 
In this sub-section, we have analyzed the numerical results related to both channel estimation methods through the normalized mean square error (NMSE) metric, which can be defined as \[\mathrm{NMSE}=\frac{\sum_{i=1}^{S}\left|\mathbf{H}(i)-\tilde{\mathbf{H}}(i) \right|^{2}}{S\,\mathbf{H}_{\text{v}}\mathbf{H}_{\text{v}}^{H}} \tag{11}\] where \(\mathbf{H}_{\text{v}}=\mathrm{vec}\left(\left[\mathbf{H}(1)\,\mathbf{H}(2)\, \dots\,\mathbf{H}(S)\right]\right)\), with \(\mathrm{vec}(\cdot)\) operator indicating the vectorization of a matrix, which converts the \((N_{c}\times S)\) OFDM channel samples matrix into the \(\mathbf{H}_{\text{v}}\) channel frequency domain column vector (\(N_{c}S\times 1\)); finally, \(S\) is the number of channel estimate samples performed in the channel coherence time interval. Under medium to high SNR, both linear channel estimators methods are suitable, _e.g._, for \(\bar{\gamma}=20\) dB, the \(\text{NMSE}_{\text{LS}}=52\times 10^{-4}\), with an order of magnitude in favor or the MMSE method. **Number of Pilots**. The DNN architecture has the advantage of attaining a good performance when compared with the classical channel estimators, such as MMSE and LS, the DNN make-up for the information lack made by interpolation in LS and MMSE methods as depicted in Fig. 4. The attainable BER for the DNN-OFDM detector is comparable to the BER from MMSE and LS for all SNR regions. Such a BER is similar to that achieved by the MMSE receiver assuming the same block-type size and arrangement, evidencing that the DNN approach is promising. Also, Fig. 4.(b) reveals that DNN-detector under a reduced number of pilots (8-pilots for DNN _vs._ 16-pilots MMSE _vs._ 8-pilots LS) outperforms the conventional estimation methods, _i.e._, the DNN-OFDM detector presents desirable robustness against the incomplete pilot's size. **Impact of the Number of Users**. So far, we have considered 4 users in the cell; hence, we equally spread \(N_{\mathrm{cpu}}=N_{c}/U\) sub-carriers per user. However, this model does not challenge the DNN, since, in this scenario, there is no inter-carrier interference. In this way, we have increased the number of active users inside the cell, aiming at verifying how DNN deals with the interference _i.e.,_ when two users share the same subcarrier. Fig. 5 depicts the BER for a cell with \(|\mathcal{U}|=5,\ldots,8\) users, where eight users mean the maximum co-subchannel interference in a system with 64 sub-carriers where for each user it has been allocated 16 sub-carriers, which results in a maximal number of user per sub-carrier equal to 2. **Cyclic Prefix Influence on the Estimation Quality**. In OFDM systems, the cyclic prefix (CP) is paramount to mitigate inter-symbol interference (ISI); however, it has a cost in terms of power, spectrum, and time. As depicted in Fig. 6, the DNN-based detector holds certain robustness against the absence of CP in low and medium SNR regions due to better BER performance compared to the linear MMSE and LS detectors with no use of CP. This indicates that the analyzed DNN architecture can learn the characteristics of channels and tends to better estimate the channel w.r.t. channel inversion strategy implemented in the LS and MMSE. The non-use of cyclic prefix is a great advantage offered by ML-based OFDM detectors, resulting in a substantial increment in the overall energy and spectral efficiencies. 
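To make the two baseline estimators and the error metric concrete, the sketch below evaluates the LS estimate of Eq. (2), the MMSE estimate of Eq. (3), and the NMSE of Eq. (11) over a set of simulated Rayleigh channel realizations. Approximating the cross-correlation \(\mathbf{R}_{\mathbf{H}\tilde{\mathbf{H}}^{\rm LS}}\) by \(\mathbf{R}_{\mathbf{H}\mathbf{H}}\) and estimating \(\mathbf{R}_{\mathbf{H}\mathbf{H}}\) empirically from the realizations are our assumptions; the pilot sequence and SNR are placeholders.

```python
import numpy as np

Nc, S = 64, 1000                 # sub-carriers, channel realizations
snr = 10**(20 / 10)              # pre-processing SNR (20 dB, illustrative)

# Placeholder unit-power pilots, Rayleigh channels and noise
X_p = np.exp(1j * np.pi / 4) * np.ones(Nc)
H = (np.random.randn(S, Nc) + 1j * np.random.randn(S, Nc)) / np.sqrt(2)
Z = (np.random.randn(S, Nc) + 1j * np.random.randn(S, Nc)) / np.sqrt(2 * snr)
Y_p = X_p * H + Z

# LS estimate, Eq. (2): element-wise division of received by transmitted pilots
H_ls = Y_p / X_p

# MMSE estimate, Eq. (3), with R_HH estimated empirically (assumption)
R_HH = (H.T @ H.conj()) / S                               # Nc x Nc channel correlation
W = R_HH @ np.linalg.inv(R_HH + np.eye(Nc) / snr)         # MMSE filtering matrix
H_mmse = H_ls @ W.T

# NMSE, Eq. (11)
def nmse(H_hat, H_true):
    return np.sum(np.abs(H_true - H_hat)**2) / np.sum(np.abs(H_true)**2)

print(nmse(H_ls, H), nmse(H_mmse, H))
```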
\begin{table} \begin{tabular}{l l} \hline \hline **Parameter** & **Value** \\ \hline \hline \multicolumn{3}{c}{**OFDM System**} \\ \hline \# Transmitter Antenna & \(n_{T}=1\) \\ \# Receiver Antenna & \(n_{R}=1\) \\ \# Modulation Order (\(M\)-QAM) & \(M=4,16,32\) \\ \# Number of Users & \(U=4\) \\ \# Sub-carriers & \(N_{c}=64\) \\ \# Pilots Number & \(N_{\text{pluct}}=64,32,16\) and 8 \\ \# Cyclic Prefix & \(T_{g}\) = 25\% \& 0\%, (Fig. 6) \\ Estimation Methods & LS e MMSE \\ \hline \multicolumn{3}{c}{**Sub-carrier Interference \& Power Allocation**} \\ \hline Equal Power Allocation (EPA) & \(P_{u}(k)=\frac{P_{t}}{N_{c}}\) \\ Max. \# sub-carriers/user & \(N_{\text{cpu}}=16\) \\ \# Interfering sub-carriers/user & \(N_{\text{cpu}}^{\text{eq}}\) = 4 \\ \hline \multicolumn{3}{c}{**NLoS Channel**} \\ \hline \# Cell radius & 500 m \\ Total Power & \(P_{t}=1\) mW \\ SNR range & \(\bar{\gamma}\in[5;~{}25]\) dB \\ Channel Model & Rayleigh \\ Path Loss coef. & \(\eta=-3\) \\ Path Loss model & \(d_{u}^{-\eta}\) \\ Coherence Time & \((\Delta t)_{c}\) = 5 ms \\ \hline \hline \end{tabular} \begin{tabular}{l l} \hline \hline **Parameter** & **Value** \\ \hline \multicolumn{3}{c}{**DNN Architecture**} \\ \hline \# Hidden Layer & 3 \\ \# Input Neuron & 256 \\ \# Hidden Neurons & 500, 250 and 120 \\ \# Output Neurons & 64 \\ Activation Function & 3 Relu and 1 Sigmoid \\ Optimizer & Adam \\ Loss Function & MSE \\ \# Epochs & \(10^{3}\) \\ \# MCS realizations & \(10^{4}\) \\ \hline \multicolumn{3}{c}{**ELM Architecture**} \\ \hline \# Hidden Layer & 1 \\ \# Input Neurons & 2 \\ \# Hidden Neurons & \(L=50\) \\ \# Output Neurons & 2 \\ \# Sub-Network & 64 \\ Activation Function & Radbas \\ \# Pilots symbols & \(I=50,100,200\) \\ \# Data symbols & \(K=400\) \\ \# MCS realizations & \(\mathcal{T}=10^{3}\) \\ \hline \hline \end{tabular} \end{table} Table I: Simulation Parameters – Channel, System and DNN and ELM parameters. **The SNR\({}_{\rm train}\) Effect on the BER Performance**. the SNR was set for the training stage, SNR\({}_{\rm train}\) influences notably the BER performance, Fig. 8; _e.g._, at the low SNR data transmission regime, \(\bar{\gamma}=5\) dB, the DNN-OFDM detector trained under SNR\({}_{\rm train}=5\) dB attains better performance; similarly, at high SNR regime (\(\bar{\gamma}=25\) dB), the better performance is reached by the DNN topology trained under SNR\({}_{\rm train}=25\) dB. **Pilot, Data Set and Modulation sizes**. In the training phase, the DNN deals with the batch size (\(\Psi\)), and epochs (\(\Omega\)); such influence on the BER performance is shown in Fig. 8.a), while the training time is depicted in Fig. 8.b). The better performance is obtained with \(\Psi=10000\) and \(\Omega=500\), although this demands a high time for training. It is notable that under a low SNR\({}_{\rm train}\), the DNN-OFDM detector trained under SNR\({}_{\rm train}=25\) dB attains better performance; similarly, at high SNR regime (\(\bar{\gamma}=25\) dB), the better performance is reached by the DNN topology trained under SNR\({}_{\rm train}=25\) dB. number of batch sizes and epochs the performance is suitable yet, _e.g._, for \(\Psi=250\) and \(\Omega=1000\), result in a reasonable performance with low training time. **Testing Time**. In contrast with the training time of Fig. 8.b), the _testing time_, \(i.e.\), the time necessary for each ML-based OFDM detector operate in real scenarios for realizing all detection steps completion. 
The _operation time_ for both detectors have resulted in \(T_{\textsc{Dnn}}=4.7\) ms and \(T_{\textsc{elm}}=4.2\) ms. Such an operation time has been measured based on the transmission of a single OFDM frame for DNN, while for ELM, we have considered \(I\) = 100 pilots and \(K\) = 1 symbol, for all the 64 sub-carriers. As expected, the ELM-OFDM detector presents a lower execution time to perform all steps, since its architecture comprises just one single layer. **Computational Complexity** for LS-OFDM, MMSE-OFDM, and ELM-OFDM detectors, in terms of the number of operations parameterized on the number of subcarrier \(N_{c}\), pilots \(I\), and neurons \(L\) is given by \(\mathcal{O}(I\cdot N_{c})\), \(\mathcal{O}(4\cdot I\cdot N_{c}^{3})\) and \(\mathcal{O}(2\cdot N_{c}\cdot L^{3})\) respectively. The number of _flops_ can be computed directly from the implemented code in Matlab using [10]. ## VI Conclusions Two ML-based topologies of OFDM detectors have been extensively analyzed and compared with the conventional LS and MMSE detectors, evidencing that such topologies can be more advantageous in terms of performance-complexity trade-off perspective. The DNN-based OFDM detector presented a) a promising BER performance when compared to the classical linear high-complexity inversion matrix-based OFDM detection techniques; b) robustness against the incomplete data training, implying in several pilots reduction; and c) absence of cyclic prefix requirement, increasing the energy and spectral efficiencies of the OFDM system. However, the DNN-based detector architecture results in relatively high computational complexity. The ELM-based OFDM detector has been extensively characterized to overcome this limitation, presenting a superior performance-complexity tradeoff.
2310.19143
Quantum-Acoustical Drude Peak Shift
Quantum acoustics -- a recently developed framework parallel to quantum optics -- establishes a nonperturbative and coherent treatment of the electron-phonon interaction in real space. The quantum-acoustical representation reveals a displaced Drude peak hiding in plain sight within the venerable Fr\"ohlich model: the optical conductivity exhibits a finite-frequency maximum in the far-infrared range and the d.c. conductivity is suppressed. Our results elucidate the origin of the high-temperature absorption peaks in strange or bad metals, revealing that dynamical lattice disorder steers the system towards non-Drude behavior.
J. Keski-Rahkonen, X. -Y. Ouyang, S. Yuan, A. M. Graf, A. Aydin, E. J. Heller
2023-10-29T20:33:24Z
http://arxiv.org/abs/2310.19143v3
# Quantum-Acoustical Drude Peak Shift ###### Abstract Quantum acoustics - a recently developed framework parallel to quantum optics - establishes a nonperturbative and coherent treatment of the electron-phonon interaction in real space. The quantum-acoustical representation reveals a displaced Drude peak hid ing in plain sight within the venerable Frohlich model: the optical conductivity exhibits a finite frequency maximum in the far-infrared range and the d.c. conductivity is suppressed. Our results elucidate the origin of the high-temperature absorption peaks in strange or bad metals, revealing that dynamical lattice disorder steers the system towards a non-Drude behavior. + Footnote †: preprint: APS/123-QED Stretching over four decades, an intensive theoretical pursuit has concentrated on finding an all-embracing explanation for a plethora of puzzling phenomena which have been colloquially labeled as "bad" or "strange". These kinds of "bizarre" materials seem to defy the traditional paradigms for electron behavior [1] in metals. Mysteries abound, such as high-temperature superconductivity beyond the grasp of the BCS theory [2; 3], the paradoxical existence of pseudogaps [4; 5; 6; 7; 8] and charge density waves [9; 10; 11; 12], the violation of the Mott-Ioffle-Regel (MIR) limit [13; 14], not to mention the major dilemma of linear-in-temperature resistivity over a wide temperature range (see, e.g., Refs. [15; 16; 17; 18; 19; 20; 21]) at the mysterious but ubiquitous Planckian bound [22]. This list of theoretical challenges also includes the elusive emergence of displaced Drude peaks (DDP) [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37]: a prominent absorption peak located typically in the infrared range, signaling a breakdown of the conventional Drude picture. This Letter reveals the origin of this phenomenon as the result of strong electron-phonon interaction if treated correctly as nonperturbative and coherent. A putative culprit for the observed Drude shift in the optical conductivity peak has been suggested to be some as yet unidentified dynamical disorder that generates evanescent localization, thus hampering but not precluding charge carrier diffusion [38; 39; 40; 41; 42]; Consequently, the zero-frequency conductivity does not vanish completely, but it is strongly suppressed, favoring the DDP phenomenology. This scenario contrasts with alternative points of view, like the common arguments resorting to collective modes [43; 44] or to strong electron-electron correlations [45; 46]. Here we show that a morphing potential landscape of hills and valleys stemming from thermal lattice vibrations by the Frohlich Hamiltonian (see Ref. [47]), as illustrated in Fig. 1 is the sought-after sea of "slowly moving bosonic impurities" [41], or the cryptic "self-induced randomness" [42; 48]. In a broader milieu, the subtle interplay between the Anderson localization and lattice vibrations has been encountered in a wide class of random metal alloys and other degenerate disordered systems. In fact, the intricate game of being localized or not was identified early on by Gogolin et al. [49; 50] and Thouless [51], even pondered by Anderson himself [52; 53]. The random fluctuations introduced by lattice motion slowly but surely scramble the quantum interference required for localization of the electronic state, resulting in _transient_ localization (for capturing the essential aspects of this phenomenon, see, e.g., Refs. 
[54]), which has lately been of interest in the context of crystalline organic semiconductors [55; 56] and halide perovskites [57]. Figure 1: Quantum acoustics. Illustration of the coherent state lattice vibrations at a certain temperature. Electrons experience a spatially continuous internal field formed by the thermal acoustic distortions. While undergoing quasi-elastic scattering events, like due to impurities, electrons can also be incipiently trapped by valleys of slowly undulating and propagating deformation potential when their kinetic energy is comparable to the fluctuations of the deformation potential. The quantum-acoustic route to linear and universal resistivity in strange metals [58] has opened up a new path unrelated to quantum criticality [59; 60], and not relying on (strong) electron-electron interaction [48; 61], instead starting with the standard Frolich Hamiltonian. Following the path paved in Refs. [47; 58], here we demonstrate the formation of a DDP due to the electrons interacting with fluctuating lattice degrees of freedom. We go further by showing that this mechanism gives rise to a temperature dependence of spectral features in agreement with experimental DDP observations in strange metals. More specifically, we consider the following Frohlich Hamiltonian [62; 63] describing the lowest-order (linear) lattice-electron coupling [64]: \[\mathcal{H}_{\text{F}}=\sum_{\mathbf{p}}\varepsilon_{\mathbf{p}}c_{\mathbf{p} }c_{\mathbf{p}}^{\dagger}+\sum_{\mathbf{q}}\hbar\omega_{\mathbf{q}}a_{\mathbf{ q}}^{\dagger}a_{\mathbf{q}}+\sum_{\mathbf{p}\mathbf{q}}g_{\mathbf{q}}c_{\mathbf{p} +\mathbf{q}}^{\dagger}c_{\mathbf{p}}\Big{(}a_{\mathbf{q}}+a_{-\mathbf{q}}^{ \dagger}\Big{)} \tag{1}\] where \(c_{\mathbf{p}}\) (\(c_{\mathbf{p}}^{\dagger}\)) is the creation (annihilation) operator for electrons with momentum \(\mathbf{p}\) and energy \(\varepsilon_{\mathbf{p}}\); whereas \(a_{\mathbf{q}}\) (\(a_{\mathbf{q}}^{\dagger}\)) is the creation (annihilation) operator for longitudinal acoustic phonons of wave vector \(\mathbf{q}\) and energy \(\hbar\omega_{\mathbf{q}}\), respectively. The electron-phonon interaction is defined by its Fourier components \(g_{\mathbf{q}}\). By following the steps of the recently established coherent state formalism in Ref. [47], the Hamiltonian gives rise to an undulating and propagating potential landscape (see Appendix A): \[V_{D}(\mathbf{r},t)=\sum_{\mathbf{q}}^{|\mathbf{q}|\leq qp}g_{\mathbf{q}} \sqrt{\langle n_{\mathbf{q}}\rangle_{\text{th}}}\cos(\mathbf{q}\cdot\mathbf{r }-\omega_{\mathbf{q}}t+\varphi_{\mathbf{q}}) \tag{2}\] where \(q_{D}\) is Debye wavenumber, \(\mathbf{r}\) is continuous position, \(\varphi_{\mathbf{q}}=\arg(\alpha_{\mathbf{q}})\) is the (random) phase of a coherent state \(|\alpha_{\mathbf{q}}\rangle\), and the mode population is determined by \(\langle n_{\mathbf{q}}\rangle_{\text{th}}\). The coherent state picture developed here is the dual partner of the traditional number state description of electron-lattice dynamics, so widely successful for describing electron resistivity [65]. In addition to recovering the results of the conventional Bloch-Gruniesen theory [66; 67], the coherent state representation extends beyond perturbation theory (see Ref. [47] for a more detailed discussion). The coherent state limit of quantum acoustics reveals a real-space, time-dependent description of electron-lattice interaction. 
A very similar notion was introduced in 1957 by Hanbury Brown and Twiss for the vector potential of a blackbody field [68], with the essential difference of missing the ultraviolet cut-off, i.e., the Debye wavenumber in the definition of our deformation potential originating from the minimal lattice spacing. This follows the analogous quantum optics pioneered by Glauber [69], a neglected wave perspective for lattice vibrations - _quantum acoustics_. Bardeen and Shockley, in the 1950's, regarded dynamical lattice distortions in nonpolar semiconductors [70; 71], and it seemed they would have been happy with a coherent state description, but the theory was subsumed by a number state perspective. Within the present deformation potential framework, an electron undergoes quasielastic, coherence-preserving scattering events when roaming through the slowly altering potential landscape of hills and valleys. In this work, we focus on three prototypical compounds classified as strange/bad metals, namely LSCO, Bi2212 and Sr3Ru2O7 whose material properties are given in Appendix B. However, we want to stress that the physics we find below transcends the material-specific constraints: In general, dynamical disorder, caused by lattice vibrations here, temporarily confines electrons to nest in its instantaneous potential wells (see Fig. 1), hallmarking of transient localization dynamics, and consequently results in the buildup of a DDP. It appears that, regarding photoabsorption, electrons are insensitive to the lattice dynamics when \(\omega\gg w_{D}\). In other words, the deformation landscape appears as if it is stationary for an electron, which is confirmed by our results below. Motivated by this, we initially freeze the potential and study its transitory electronic eigenstates. Since the allowed transitions take place near the Fermi level, we are allowed to focus on the states lying within the stack of \(\varepsilon_{F}\pm 3k_{\text{B}}T\) (see Appendix C.1). Fig 2 shows examples of eigenstates for the three materials at different temperatures along with the profile of the deformation potential. Despite the diversity among the physical settings, all three materials share similar qualitative characteristics: Instead of being spatially extended, as observed at lower temperatures (\(E_{F}/V_{\text{rms}}\gtrsim 1\)), the relevant states appear to be localized in the dips of the potential at temperatures \(E_{F}/V_{\text{rms}}\lesssim 1\) where the potential fluctuation \(V_{\text{rms}}\) is comparable to Fermi energy \(E_{F}\). These eigenstates of the frozen potential relate to the localized states associ Figure 2: Transient localized states. A selection of eigenstates (red color scheme) near the Fermi level is shown for the considered prototype materials at four temperatures, where the gray scale represents the corresponding frozen deformation potential. An increasing temperature leads to more spatially confined states that are linked to fugitive, Anderson-localized states in transient dynamics. ated with the transient dynamics. Subsequently, we compute the corresponding optical conductivity \(\sigma(\hbar\omega)\) within the conventional Kubo formalism employing the numerically resolved eigenstates, detailed in Appendix C.1. In the upper panel of Fig 3, we present the optical conductivity at various temperatures for the three chosen materials, averaged over an ensemble of 100 random realizations of the deformation potential. 
With increasing temperature, the optical conductivity evolves from the Drude-peak behavior of having a sharp maximum value located at \(\omega=0\) into a displaced peak: the maximum conductivity point steadily shifts towards higher energies \(\hbar\omega\) and the conductivity peak profile broadens. In addition, we show the temperature dependence of the peak locations and their width in Fig. 4. We determine the peak location \(\hbar\omega_{\rm p}\) as the energy at which the optical conductivity \(\sigma(\hbar\omega)\) reaches its maximum. The peak width \(\hbar\Delta\omega_{\rm p}\) is then defined in a similar manner as in Ref. [72]: the distance between the maximum and the optical conductivity point in the high-energy tail where the height of the maximum is dropped by 50%. Fig. 4 further confirms and quantifies the migration and broadening of the DDP with increasing temperature present in Fig. 3. To further validate the frozen potential results above, we expand our DDP analysis by computing the optical conductivity while considering the temporal evolution of the deformation potential. Nonetheless, we can still determine the conductivity \(\sigma(\hbar\omega)\) by utilizing the Kubo formalism, as explicated in Appendix C.2. In short, we take advantage of the already defined frozen deformation potentials as initial conditions, and let it thereafter unfold according to Eq. 2. The lower panel of Fig. 3 shows the optical conductivity spectrum of the three materials at different temperatures in the case of a dynamical potential field, averaged over 10 realizations of the distorted potential landscape. As expected, the dynamical conductivity spectrum deviates from the frozen potential prediction when \(\omega\lesssim\omega_{D}\) is indicated by the black dash line in Fig. 3. Instead of strong suppression near to the d.c. conductivity as in the static landscape situation, we observe a saturation of the optical conductivity within an energy window of the order of \(0.1\,\mathrm{eV}\) near the zero frequency. This can be interpreted as the reversed adiabatic approximation where the external electric field of frequency \(\omega\ll\omega_{D}\) is a slow varying degree of freedom and thus is roughly static compared to the fluctuations of the lattice, yielding virtually the same conductivity as the d.c. conductivity and manifesting as a conductivity plateau below the Debye frequency. Nevertheless, there is still a generic trend Figure 3: Frozen versus dynamical quantum acoustic vibration field. The upper and lower panels display the optical conductivity for the three materials at different temperatures, resolved within the static and dynamical potential landscapes averaged over 100 and 10 realizations, respectively. The back dash line marks the Debye frequency of the given material below which the frozen potential assumption breaks down. A key dynamical effect is the saturation of conductivity in the regime \(\omega\lesssim\omega_{D}\), instead of the suppression evident in the static case. However, regardless of the deformation potential dynamics and the chosen material, the optical conductivity peak shifts from the Drude-peak edict of situating at \(\omega=0\) to higher energies and broadens as the temperature increases. similar to the frozen potential approximation: the higher temperature yields a more substantial DDP, suggesting that the transiently localized states are at play in both cases. This tendency is also evident in Fig. 
4 where the increase in temperature moves the DDP to higher absorption frequencies while broadening the peak at the same time. The physical picture behind the observed DDP evolution is that the increase in temperature has a two-fold effect. First, it yields stronger spatially localized, transient electronic states at the Fermi energy (shift to higher frequencies); electrons either residing in local potential wells (frozen) or nesting in instantaneous potential pockets (dynamic). These local wells or nests become more energetically confining as the deformation potential strengthens with the rising temperature. As a result, the location of the DDP roughly scales like \(\hbar\omega_{\rm p}\sim V_{\rm rms}\), which defines the fitting illustrated by the colored dashed curves in the upper panel of Fig. 4. In general, the peak location migrates like \(\hbar\omega_{\rm p}\sim(k_{B}T)^{3/2}\) at low temperatures \(T\ll T_{D}\) and \(\hbar\omega_{\rm p}\sim(k_{B}T)^{1/2}\) at high temperatures \(T\gg T_{D}\). In addition, the presence of eigenstate localization caused by the frozen lattice disorder is known to result in band tails in the density of states [47] that has later shown to persist even under quantum-acoustical lattice dynamics [73]. On the other hand, a higher temperature permits a wider energy window for electronic transitions to occur (the broadening of the peak). Whereas the transition element between the (momentarily) localized states dictates the location of DDP, the width is instead determined by the broadening function characterizing the energetically allowed transition. As indicated by the black dashed line in the lower panel of Fig. 4, the widths of the DDP behave roughly as \(\hbar\Delta\omega_{\rm p}\sim k_{\rm B}T\) that is interestingly more accurate in the case of the dynamical potential landscape. The quantum-acoustical DDP is thus intimately connected to the ambiguous Planckian timescale \(\hbar/k_{\rm B}T\) that underpins the linear-in-temperature resistivity exhibited by numerous families of bad and strange metals (for comparison, see Ref. [72]). In addition, this observed correlation between the width of the DDP and Planckian behavior supports the prospect of the near-universal transport by transient dynamics reported in Ref. [47; 58]. In the quantum-acoustic DDP scheme, there are no extrinsic sources, such as defects or impurities, which could also generate or enhance a shift in optical conductivity (see, e.g., Refs. [74; 75; 76]). In other words, the disorder at the origin of our DDP is _self-generated_, arising from the existence of thermally fluctuating lattice degrees of freedom that significantly affect the charge carrier dynamics. In particular, our DDP gives a unique temperature-dependent fingerprint, clearly distinguishing it from an impurity-induced DDP. Furthermore, the transient dynamics driving the birth of an acoustical DPP resides within a dynamical regime of nonperturbative and coherent electron-lattice motion, thus lying outside the reach of the conventional perturbative or Boltzmann transport methods (see Ref. [47]). In fact, many bad and strange metals are on the verge of a transient localization and/or have strong electron-phonon coupling [1; 46]. Moreover, alongside the DDP formation and Planckian resistivity, the deformation potential perspective offers a natural pathway for charge carriers in strange metals to cross the MIR bound with impunity at high temperatures [13; 14], as already asserted in Ref. [47]. 
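The quoted temperature dependence of the peak position can be checked directly from the closed-form \(V_{\rm rms}\) expression given in Appendix A. The sketch below evaluates that integral by numerical quadrature with the LSCO constants of Table 1; the cut-off is taken from the Debye temperature through \(\hbar v_{s}q_{D}=k_{B}T_{D}\), which is an assumption about the convention, so only the limiting power laws, not the absolute scale, should be read off.

```python
import numpy as np
from scipy.integrate import quad

hbar, kB, e = 1.054571817e-34, 1.380649e-23, 1.602176634e-19
# LSCO parameters from Table 1 (Appendix B)
E_d, v_s, rho, T_D = 20 * e, 6.0e3, 3.6e-6, 427.0
q_D = kB * T_D / (hbar * v_s)       # cut-off from the Debye temperature (assumption)

def V_rms(T):
    """Numerical quadrature of the V_rms expression of Appendix A."""
    integrand = lambda q: q**2 / np.expm1(hbar * v_s * q / (kB * T))
    val, _ = quad(integrand, 0.0, q_D)
    return np.sqrt(2 * E_d**2 * hbar / (np.pi * rho * v_s) * val)

for T in (50, 100, 200, 400, 800):
    print(T, "K :", V_rms(T) / e, "eV")

# For T << T_D the Bose factor cuts the integrand and V_rms ~ T**(3/2);
# for T >> T_D the occupation ~ kB*T/(hbar*v_s*q) and V_rms ~ T**(1/2),
# consistent with the quoted migration of the peak position hbar*omega_p ~ V_rms.
```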
We explicitly demonstrated the violation of MIR limit with quantum acoustics in another paper [58]. Our approach also carries the potential to enlighten the perplexing phenomenon of pseudogaps [4; 5; 6; 7; 8] and charge density waves [9; 10; 11; 12]. In particular, we see that when electron motion strongly couples and synchronizes with low-energy lattice vibration modes, it creates a favorable environment for incommensurate charge density order. Likewise, a similar resonance promoted by the deformation potential could result in temperature-dependent pseudogaps, i.e., a substantial suppression in the density of the low-energy excitations, which eventually melt away leaving the pseudogap phase regime. We aim to study these considerations in future research. In conclusion, we have introduced the phenomenon of quantum-acoustical Drude peak displacement, which involves the temperature-dependent shift and broaden Figure 4: Position and width of the Drude peak. The panels show the mean location (left) and width (right) of the optical conductivity peak as a function of temperature for the studied materials, averaged over 100 (static) and 10 (dynamic) deformation potential realizations, with error bars representing the standard deviation within the given ensemble. Specifically, as the temperature increases, there is a generic upward shift in the Drude peaks towards higher energies (upper panel), accompanied by a broadening of the peaks (lower panel). Colored dashed curves show the fittings for the DDP location estimated with the assumption that it is mostly determined by the strength of the deformation potential (\(\hbar\omega_{\rm p}\propto V_{\rm rms}\)). Furthermore, the width of the DDPs live near to Planckian bound (\(\hbar\Delta\omega_{\rm p}\sim k_{B}T\)), which is indicated by the black dashed line. ing of the optical conductivity peak to finite frequencies, demonstrated here for three archetypal strange metals. Overall, the coherent state picture of lattice vibrations, which has always been at one's disposal but not utilized, provides a fresh perspective on the investigation of the mysteries of bad and strange metals. The manifested shift in perspective simply comes from the coherent state limit of quantum acoustics. ###### Acknowledgements. We are thankful for the useful discussions with D. Kim, B. Halperin, S. Das Sarma, J. H. Miller Jr, R. L. Greene, R. Lobo and A. P. Mackenzie. Furthermore, J.K.-R. thanks the Emil Aaltonen Foundation, Vaisala Foundation, and the Oskar Huttunen Foundation, and A. M. G. thanks the Harvard Quantum Initiative for financial support. ## Appendix A Deformation potential Here we provide a brief overview of the coherent state formalism and the origin of the deformation potential; for a comprehensive discussion about the topic, see Ref. [47]. In general, a lattice deformation is treated in terms of a quantum longitudinal displacement field \(\hat{\mathbf{u}}(\mathbf{r},t)\). The longitudinal displacement of an atom at a position \(\mathbf{r}\) from its equilibrium position is \[\hat{\mathbf{u}}(\mathbf{r},t)=-i\sum_{\mathbf{q}}\hat{\mathbf{q}} \sqrt{\frac{\hbar}{2\rho\mathcal{V}\omega_{\mathbf{q}}}}\\ \times\left(a_{\mathbf{q}}e^{-i\omega_{\mathbf{q}}t}-a_{- \mathbf{q}}^{\dagger}e^{i\omega_{\mathbf{q}}t}\right)e^{i\mathbf{q}\cdot \mathbf{r}},\] where \(\mathbf{q}\) is wave vector of a normal mode, \(\omega_{\mathbf{q}}\) is phonon frequency, \(\rho\) is mass density of the lattice, \(\mathcal{V}\) is volume (area in 2D), and \(t\) is time. 
We omit writing the polarization vector and phonon branch indices in the subscripts since we will only deal with the longitudinal acoustic modes of the lattice vibrations, the main scattering mechanism for the charge carriers. Subsequently, we define the deformation potential as the first-order correction in the expansion of band energy due to atomic displacements in the following way \[\hat{V}_{D}(\mathbf{r},t)= E_{d}\nabla\cdot\hat{\mathbf{u}}(\mathbf{r},t)\] \[= \sum_{\mathbf{q}}^{|\mathbf{q}|\leq q_{D}}E_{d}\sqrt{\frac{\hbar} {2\rho\mathcal{V}\omega_{\mathbf{q}}}}|\mathbf{q}|(a_{\mathbf{q}}+a_{- \mathbf{q}}^{\dagger})e^{i\mathbf{q}\cdot\mathbf{r}}\] \[= \sum_{\mathbf{q}}^{|\mathbf{q}|\leq q_{D}}g_{\mathbf{q}}(a_{ \mathbf{q}}+a_{-\mathbf{q}}^{\dagger})e^{i\mathbf{q}\cdot\mathbf{r}},\] where the deformation potential constant \(E_{d}\) characterizes the coupling between electrons and the lattice, and the mode summation is restricted by the Debye wavenumber \(q_{D}\). For convenience, we convert the material constant into the mode-depend co-factor of \[g_{\mathbf{q}}=E_{d}\sqrt{\frac{\hbar}{2\rho\mathcal{V}\omega_{\mathbf{q}}}}| \mathbf{q}|=E_{d}\sqrt{\frac{2\hbar|\mathbf{q}|}{\rho\mathcal{V}v_{s}}},\] where the latter form is achieved by assuming the linear dispersion \(\omega_{\mathbf{q}}=v_{s}|\mathbf{q}|\), coupling given by the speed of sound \(v_{s}\). Thus, the electron-phonon interaction term corresponds to the Frohlich Hamiltonian in the main text. The next ingredient we introduce is the coherent state picture. Within this framework, each normal mode of lattice vibration with a wave vector \(\mathbf{q}\) is associated with a coherent state \(|\alpha_{\mathbf{q}}\rangle\). By employing the independence of normal modes, entire lattice vibrations can be expressed as the product state of the coherent states \(|\alpha_{\mathbf{q}}\rangle\) as a single multimode coherent state of \[|\mathbf{\alpha}\rangle=\prod_{\mathbf{q}}|\alpha_{\mathbf{q}}\rangle,\] whose expectation value defines the deformation potential \[V_{D}(\mathbf{r},t) =\langle\mathbf{\alpha}|\hat{V}_{D}(\mathbf{r},t)|\mathbf{\alpha}\rangle\] \[=\sum_{\mathbf{q}}^{|\mathbf{q}|\leq q_{D}}g_{\mathbf{q}}(\alpha _{\mathbf{q}}e^{-i\omega_{\mathbf{q}}t}+\alpha_{-\mathbf{q}}^{*}e^{i\omega_{ \mathbf{q}}t})e^{i\mathbf{q}\cdot\mathbf{r}}.\] This quasi-classical field of lattice vibrations corresponds to the deformation potential presented in the main text by considering a thermal coherent state \[\alpha_{\mathbf{q}}=\sqrt{\langle n_{\mathbf{q}}\rangle_{\mathrm{th}}}\exp(i \varphi_{\mathbf{q}}),\] where \(\sqrt{\langle n_{\mathbf{q}}\rangle_{\mathrm{th}}}=|\alpha_{\mathbf{q}}|\) is the thermal amplitude given by the Bose-Einstein distribution and \(\varphi_{\mathbf{q}}=\arg(\alpha_{\mathbf{q}})\) is the random phase of the coherent state determining the initial conditions. The deformation potential in itself is a curious mathematical object. First, it is homogeneously random in space and in time, meaning the probability distribution of deformation potential is independent on a position \(\mathbf{r}\) or time t given that the phases \(\varphi_{\mathbf{q}}\) are random variables. Thus, each reasonably large spatiotemporal section of the potential is statistically indistinguishable from another. 
Second, although the deformation potential averages to zero, its root-mean-square identifies the strength of lattice disorder, growing in temperature as \[V_{\mathrm{rms}}^{2}=\frac{2E_{d}^{2}\hbar}{\pi\rho v_{s}}\int_{0}^{q_{D}} \frac{q^{2}\mathrm{d}q}{e^{\hbar v_{s}q/k_{B}T}-1}.\] In particular, we employ the expression above to analyze the evolution of the peak location of a displaced Drude peak as a function of temperature. Regarding electron dynamics, we focus on the quantum dynamics of an electron under the following time-dependent Hamiltonian: \[\mathcal{H}_{0}=\frac{|\mathbf{p}|^{2}}{2m}+V_{D}(\mathbf{r},t),\] where \(m\) is the effective (band) mass of the electron. This effective Hamiltonian is the electron part of the considered Frohlich Hamiltonian described within the effective mass approximation. A comparison of electrons lying in the neighborhood of the Fermi energy with the strength of the deformation potential determines whether the scattering can be treated perturbatively or not [58]. Furthermore, quantum coherence of electrons is important in scattering when the wavelength of the Fermi wavelength \(\lambda_{F}\) is not much less than twice the lattice constant \(a\). It should be emphasized that the term "coherence" is employed here to describe the spatial phase coherence of the electronic wavefunction, and should not be conflated with the "coherent-versus-incoherent-metals" nomenclature, which pertains to the breakdown of the quasi-particle paradigm. In summary, dynamics is coarsely classified as \[\frac{E_{F}}{V_{\text{rms}}}=\begin{cases}\bar{E}\gg 1&\rightarrow\text{ Perturbative}\\ \bar{E}\sim 1\text{ or }\bar{E}\ll 1&\rightarrow\text{ Nonperturbative},\end{cases}\] and \[\frac{\lambda_{F}}{2a}=\begin{cases}\bar{\lambda}\ll 1&\rightarrow\text{ Incoherent}\\ \bar{\lambda}\sim 1\text{ or }\bar{\lambda}>1&\rightarrow\text{ Coherent}.\end{cases}\] ## Appendix B Material parameters Table 1 contains the relevant material parameters for determining the optical conductivity within the deformation potential scheme. The strange metals in the Table below possess two characteristic attributes: relatively high deformation potential constant and low Fermi energy compared to normal metals. However, we want to point out the robustness of the results reported in the main text against the reasonable range of material parameters, in a similar manner as analyzed in Ref. [58]. ## Appendix C Optical conductivity Here we discuss the linear response theory behind our optical conductivity results. In general, the (complex) conductivity tensor \(\sigma\) relates the current density \(j_{x}(t)\) to the applied electric field \(\text{Re}[E_{x}\exp(i\omega t)]\). Within the Kubo formalism, if the system is in thermal equilibrium with a heat reservoir with temperature \(T\), the conductivity tensor for non-interacting particles is \[\sigma(\omega)=\lim_{\eta\to 0^{+}}\frac{\mathcal{V}}{\hbar\omega} \int_{0}^{\infty}\mathrm{d}t\,\Big{\langle}\hat{j}_{x}(t),\hat{j}_{x}(0)| \Big{\rangle}e^{i(\omega+i\eta)t}+\frac{ie^{2}n}{m\omega},\] \(\mathcal{V}\) is the volume of the system, \(m\) is the effective mass of the electron, \(n\) represents the number density of electrons and \(\langle\cdots\rangle\) refers to the averaging over equilibrium thermal ensemble. We also include an infinitesimal increment \(i\eta\) to ensure that the integrand vanishes exponentially as \(t\rightarrow\infty\), and thus the integral is well-defined. 
### Frozen approximation (Adiabatic limit) In the regime of \(\omega\gg\omega_{D}\), the time scale of the perturbing electric field on the system is much smaller than the timescale describing the motion of the lattice. Therefore, a good first approximation is to assume that the lattice is motionless. In fact, this frozen-lattice approximation relates to the adiabatic approximation, which neglects the dynamics of the slow degrees of freedom when treating the fast ones. When we freeze the deformation potential, we first solve for the electronic states \(\ket{n}\) and corresponding energies \(\varepsilon_{n}\) determined by the equation \[\left(\frac{\mathbf{p}^{2}}{2m^{*}}+V_{D}(\mathbf{r})\right)\ket{n}=\varepsilon_{n}\ket{n}.\] Then, by utilizing this eigenbasis, the Kubo formula for the optical conductivity takes the form \[\text{Re}[\sigma(\omega)]=-2\lim_{\eta\to 0^{+}}\frac{e^{2}\hbar}{\mathcal{V}}\sum_{n,m}\frac{f(\varepsilon_{n})-f(\varepsilon_{m})}{\varepsilon_{n}-\varepsilon_{m}}\,\frac{|\bra{n}\hat{v}_{x}\ket{m}|^{2}}{(\hbar\omega+\varepsilon_{n}-\varepsilon_{m})^{2}+\eta^{2}}\,\eta,\] where the factor of two represents the spin degree of freedom. Theoretically, \(\eta\) should be infinitesimally small, but in practice we choose the parameter \(\eta\) to be small enough not to impact the results. More specifically, the value of \(\eta\) is taken roughly to be the energy gap between adjacent energy levels, thus guaranteeing that the fluctuations of the optical conductivity will not be too large. \begin{table} \begin{tabular}{c|c c c} & LSCO & Bi2212 & Sr\({}_{3}\)Ru\({}_{2}\)O\({}_{7}\) \\ \hline \hline \(n\) [\(10^{27}\,\text{m}^{-3}\)] & 7.8 & 6.8 & 0.5 \\ \(m^{*}\) [\((m_{e})\)] & 9.8 & 8.4 & 6.8 \\ \(v_{s}\) [m/s] & 6000 & 2460 & 5850 \\ \(E_{d}\) [eV] & 20 & 10 & 20 \\ \(\rho\) [\(10^{-6}\,\text{kg}/\text{m}^{2}\)] & 3.6 & 5.2 & 8.9 \\ \(E_{F}\) [eV] & 0.12 & 0.15 & 0.03 \\ \(a\) [Å] & 3.8 & 5.4 & 3.9 \\ \(T_{D}\) [K] & 427 & 123 & 406 \\ \end{tabular} \end{table} Table 1: Material parameters that are used for three different strange metals. ### Dynamical field (Diabatic limit) On the other hand, when \(\omega\lesssim\omega_{D}\), the frozen-potential approximation is not valid. However, we can still determine the optical conductivity by employing the Kubo formalism in the following manner, \[\mathrm{Re}[\sigma(\omega)]=-2\lim_{\eta\to 0^{+}}\frac{e^{2}}{\mathcal{V}}\sum_{n,m}\frac{f(\varepsilon_{n})-f(\varepsilon_{m})}{\varepsilon_{n}-\varepsilon_{m}}\int_{0}^{\infty}\mathrm{d}t\,\left\langle n\right|\hat{v}_{x}(t)\left|m\right\rangle\left\langle m\right|\hat{v}_{x}\left|n\right\rangle e^{i(\omega+i\eta)t},\] where \(\hat{v}_{x}(t)\) is the velocity operator in the Heisenberg picture, and all the eigenstates and eigenvalues are taken to be those of the initial condition. This dynamical formulation reduces to the Kubo formula above when the deformation potential is treated as static, but it is also well-defined for a dynamical deformation potential.
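As a schematic illustration of the frozen-lattice formula, the sketch below evaluates the double sum with Lorentzian broadening for a toy spectrum. The eigenvalues and velocity matrix elements are random placeholders standing in for a diagonalized single-particle problem, and all prefactors and units are schematic rather than those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder spectrum and velocity matrix elements for a toy N-level system
N = 200
eps = np.sort(rng.uniform(0.0, 1.0, N))             # eigenvalues eps_n (arbitrary units)
vx = rng.normal(size=(N, N)); vx = (vx + vx.T) / 2  # Hermitian <n|v_x|m> (toy values)

kT, mu = 0.02, 0.5                                   # temperature and chemical potential
f = 1.0 / (np.exp((eps - mu) / kT) + 1.0)            # Fermi-Dirac occupations

def re_sigma(omega, eta):
    """Frozen-lattice Re[sigma(omega)], up to overall constants (hbar set to 1)."""
    de = eps[:, None] - eps[None, :]                 # eps_n - eps_m
    df = f[:, None] - f[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        thermal = np.where(np.abs(de) > 1e-12, df / de, 0.0)
    lorentz = eta / ((omega + de) ** 2 + eta ** 2)
    return -2.0 * np.sum(thermal * np.abs(vx) ** 2 * lorentz)

omegas = np.linspace(0.01, 1.0, 50)
eta = np.mean(np.diff(eps))                          # ~ mean level spacing, as described above
spectrum = [re_sigma(w, eta) for w in omegas]
print(spectrum[:5])
```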
2309.03078
Political Context of the European Vaccine Debate on Twitter
At the beginning of the COVID-19 pandemic, fears grew that making vaccination a political (instead of public health) issue may impact the efficacy of this life-saving intervention, spurring the spread of vaccine-hesitant content. In this study, we examine whether there is a relationship between the political interest of social media users and their exposure to vaccine-hesitant content on Twitter. We focus on 17 European countries using a multilingual, longitudinal dataset of tweets spanning the period before COVID, up to the vaccine roll-out. We find that, in most countries, users' endorsement of vaccine-hesitant content is the highest in the early months of the pandemic, around the time of greatest scientific uncertainty. Further, users who follow politicians from right-wing parties, and those associated with authoritarian or anti-EU stances are more likely to endorse vaccine-hesitant content, whereas those following left-wing politicians, more pro-EU or liberal parties, are less likely. Somewhat surprisingly, politicians did not play an outsized role in the vaccine debates of their countries, receiving a similar number of retweets as other similarly popular users. This systematic, multi-country, longitudinal investigation of the connection of politics with vaccine hesitancy has important implications for public health policy and communication.
Giordano Paoletti, Lorenzo Dall'Amico, Kyriaki Kalimeri, Jacopo Lenti, Yelena Mejova, Daniela Paolotti, Michele Starnini, Michele Tizzani
2023-09-06T15:26:40Z
http://arxiv.org/abs/2309.03078v2
# Political Issue or Public Health: the Vaccination Debate on Twitter in Europe ###### Abstract At the beginning of the COVID-19 pandemic, fears grew that making vaccination a political (instead of public health) issue may impact the efficacy of this life-saving intervention, spurring the spread of vaccine-hesitant content. In this study, we examine whether there is a relationship between the political interest of social media users and their exposure to vaccine-hesitant content on Twitter. We focus on 17 European countries using a multilingual, longitudinal dataset of tweets spanning the period before COVID, up to the vaccine roll-out. We find that, in most countries, users' exposure to vaccine-hesitant content is the highest in the early months of the pandemic, around the time of greatest scientific uncertainty. Further, users who follow politicians from right-wing parties, and those associated with authoritarian or anti-EU stances are more likely to be exposed to vaccine-hesitant content, whereas those following left-wing politicians, more pro-EU or liberal parties, are less likely to encounter it. Somewhat surprisingly, politicians did not play an outsized role in the vaccine debates of their countries, receiving a similar number of retweets as other similarly popular users. This systematic, multi-country, longitudinal investigation of the connection of politics with vaccine hesitancy has important implications for public health policy and communication. ## Introduction Despite the success of vaccination in reducing mortality and eradicating diseases like smallpox [1], skepticism about vaccine safety and efficacy has persisted throughout history. The rapid development and global distribution of the COVID-19 vaccines spurred renewed apprehension. In February 2022, a Eurobarometer survey found that while most EU citizens supported vaccination, concerns about unknown long-term side effects of COVID-19 vaccines ranged from 47% to 70% across different countries, with those who were altogether against vaccination reaching 29% in Bulgaria, followed by 24% in Slovakia and 21% in Slovenia [2]. Even before COVID-19, the World Health Organization has indicated vaccine hesitancy as one of the top 10 threats to global health in 2019 [3]. Vaccine hesitancy was found to correlate with a complex combination of psychological and sociological factors. Recent literature has linked such attitudes with alternative health practices [4], science denial [5], and conspiratorial thinking [6]. However, this must be contextualized in the personal experiences and beliefs, and in the broader societal, public health and communication environments [7]. A concerning trend around web-based communication channels, including social networks and those integrating recommendation systems, is the possible formation of echo-chambers, both at national [8, 9] and global scales [10], as the tightly-knit, homogeneous communities in such echo-chambers provide a fertile ground for fringe narratives that oppose the mainstream [11]. These narratives often exclude traditional and authoritative sources [12] and instead are supported by low-quality information and misinformation [13], which undermines the trust in the public health authorities. As governments around the world rushed to confront the COVID-19 pandemic, various political actors joined the discussion. 
At the same time, the increasing adoption of social media has coincided with its use by populist and anti-establishment politicians [14], who took advantage of the context collapse around bite-size units of communication to promote skepticism of authority [15]. Worldwide, vaccine hesitancy has been linked to political beliefs, including in France and Italy [16, 17], where those backing right-wing parties had a higher unvaccinated rate. In Poland, around August 2021 vaccination rates were highest in the areas supporting the politician opposing the ruling conservative Law and Justice (PiS) party - a party that has been accused of "flirting" with "anti-vaxxers" [18]. In the U.S., by May 2022 Republicans and Republican-leaning independents (60%) were less likely than Democrats and Democratic leaners (85%) to be fully vaccinated [19]. Overall, studies find vaccine hesitancy and political populism are positively associated across Europe [20, 21]. Although the connection between the use of social media and vaccine hesitancy has been documented [13, 22], little attention has been paid to the connection between the politicized communication around vaccination on social media and vaccine hesitancy. As politicians take actions impacting the practice of medicine, including communicating to their constituents on the matters of personal health [23], it is urgent to understand the interplay between such political actors and their audience in the context of vaccination. To address this research gap, we turn to one of the most popular social media platforms - Twitter - to gauge the relationship between political actors, political interest, and the consumption of vaccine-hesitant content in the period immediately before and during the onset of the COVID-19 epidemic. We propose a custom network analysis pipeline to identify users likely to be exposed to vaccine-hesitant content in 17 European countries using a multilingual, longitudinal dataset. Our approach extends previous research [24] that assumes a two-sided controversy by also handling multiple "pockets" of opinion stances, making it generalizable to multi-sided discourse around the world. Using these tools, we answer the following research questions: 1. [label=**RQ0**] 2. Are those more exposed to vaccine-hesitant content interested in specific political parties? 3. Is a user's politicization (_i.e._ the extent of the interest in politics and the focus on few parties) related to their exposure to vaccine-hesitant content? 4. Are political actors more influential in the vaccination debate compared to non-political users? Crucially, our multilingual dataset allows us to examine the national discussions in the native languages, as out of the 17 countries considered, 16 do not have English as an official language (unlike in previous studies that use English-only queries to analyze "worldwide" vaccine sentiment [25, 26]). Further, we perform an extensive mapping between the Twitter accounts in these countries and their respective political figures, which we in turn enrich using _ParlGov_[27], a political science resource that provides detailed information about political parties, elections, cabinets, and governments in parliamentary democracies worldwide. The combination of these resources has allowed us to complete the first quantitative investigation of the relationship between political interest on social media and vaccine hesitancy on a European scale, as detailed below. 
## Results In this study, we focus on the Twitter debates around vaccination immediately before and during the COVID-19 pandemic in 17 European countries, in their official languages. We begin by assigning each user in the vaccine debate a score that we title Vaccine Hesitancy Exposure (VHE), which spans from 0 to 1 and captures how likely it is that the user sees vaccine-hesitant rather than pro-vaccine content on Twitter. Firstly, we examine the distributions of the VHE score for each country and one of four periods (1 before and 2-4 during COVID-19), as shown in Figure 1. Note that some country-period pairs were excluded due to data sparsity. One can see that in most countries and periods, VHE distributions are closer to zero (pro-vaccine) than 1 (vaccine-hesitant), indicating that the majority of users are more likely to see pro-vaccine content. Notable exceptions are France, with VHE distributions centered around 0.5 in all periods, the Netherlands, with broad VHE distributions, and Poland, where the majority of users have a VHE score below 0.5, but there exists a non-negligible minority of users likely to be exposed to anti-vaccine content. Figure 1: VHE score distribution across countries and periods. Dashed grey lines indicate scores where a user’s exposure has equal shares of vaccine-hesitant and pro-vaccine content. Score near 1 indicates more hesitant, and 0 – more pro-vaccine. Moreover, we can find interesting trends within each country, across time periods. For instance, the distribution of VHE scores in Italy begins with most users seeing roughly the same amount of vaccine-hesitant content as the pro-vaccine one. This trend then shifts towards 0, where more pro-vaccine content is easier to encounter for most users. On the other hand, Germany starts out with most users having VHE scores close to zero, and over time develops a minority of users with much higher scores. Yet in other cases, such as in France, the peak of the overall distribution remains stable over time. In aggregate, however, the peak of the VHE score comes in period 2, during the early days of the pandemic, with a macro-average of 0.39; the VHE score goes down to 0.30 by period 4, during the vaccine rollout. **RQ1**. _Are those more exposed to hesitant content interested in specific political parties?_ To answer this question, recall that we model the vaccination debate in each country as a country-specific retweet network, one for each of the four time periods; for this analysis we choose only those that have at least 300 users (61 in total). For each country/period combination, we then perform an OLS regression to model a user's Vaccine Hesitancy Exposure (VHE) score using the user's followership of politicians in different parties as predictors (as well as some control variables, see Methods). Out of these, 72% (44) had an Adjusted R\({}^{2}\) score greater than or equal to 0.1, which we select for further analysis. In these models, 266 coefficients of parties (51.6%) were significant: 126 positive (having a positive relationship to the VHE score) and 140 negative, where positive (negative) coefficients indicate that users who follow these parties are more (less) likely to see vaccine-hesitant rather than pro content on Twitter. For example, the most positive coefficient was found for _Alternative for Germany_ in period 3, whereas the most negative was for _Civic Platform_ in Poland (period 1) (for full listing of coefficients for each party across periods, see Data availability section). 
Thus, we find mixed results - the relationship between party interest and VHE score may vary between party, country, and period. To see if these findings generalize across countries, we group the parties according to the ParlGov classification (an extensive resource on political parties in parliamentary democracies [27]). Figure 2 shows the distribution of coefficients for user interest in parties grouped in families using ParlGov with the accompanying 99% bootstrapped confidence intervals. We find that the parties identified as Right-wing have the strongest, and most positive, relationship with the VHE score. On the other hand, following parties in the Social democracy and Liberal families have a negative relationship with users seeing such vaccine-hesitant content. We find no other statistically robust relationships for other party groups. Figure 2: Distribution (boxplots) of OLS coefficients modeling users’ VHE score by their interest in parties, grouped by families using ParlGov. Accompanying points and whiskers indicate a 99% bootstrapped confidence interval. Numbers indicate how many parties are in each group. Figure 3: Significant OLS coefficients (at \(p<0.01\) with Bonferroni correction) for user interest in parties, grouped in families using ParlGov, and their 99% confidence intervals. Showing countries having sufficient model fit over 4 time periods. To check how consistent these results are over time, in Figure 3 we plot the significant coefficients and their confidence intervals for five countries that have models with Adjusted \(R^{2}>0.1\) for all four periods. We find that the sign of the coefficients rarely flips. The coefficients for the Right-wing family of parties remain positive, and that for Social Democracy remains negative. However, we find country-specific peculiarities: Liberal parties are associated positively with the VHE score in Spain (and not in France or Italy), and the Green parties are associated more negatively in Germany. Finally, we investigate the relationship between the VHE score and the four party characteristics as defined by ParlGov, namely Left vs. Right, Liberty vs. Authority, Anti vs. Pro-EU, and State vs. Market. Figure 4 shows the distribution of coefficients for parties in different quintiles of characteristics, accompanying bootstrapped confidence intervals, and horizontal brackets indicating statistical comparison using the one-sided Mann-Whitney U test. For the Left vs. Right, Liberty vs. Authority, and Anti vs. Pro-EU dimensions, we find statistically robust differences between parties in the first and last quintiles, and sometimes with the middle quintile as well. Users following Left-leaning politicians are less likely to encounter vaccine-hesitant content, whereas those following Right-wing politicians are much more likely to. Similarly, those following parties closer to the Liberty characteristics - those promoting expanded personal freedoms such as abortion, same-sex marriage, or greater democratic participation - are less likely to encounter vaccine-hesitant content, and the opposite holds for the parties closer to the Authority side - those promoting authoritarian ideals of order, tradition, and stability. Interestingly, even the party's stance on the European Union correlates with the VHE score of users following them: those opposing the European Union are more likely to have a higher VHE score. On the other hand, the trend for the economic dimension of State (State-controlled economy) vs. 
Market (free-market economy) displays a peak in association with the VHE score in the third quintile. **RQ2**. _Is a user's politicization related to their exposure to hesitant content?_ We address this question by defining two measures of politicization for each user: _political interest_ (the proportion of all accounts a user follows that are politicians) and _political focus_ (the share of politicians in the user's most followed party). Figure 5 shows the Spearman rank correlation coefficient between these two measures and the VHE score, for each country/period (those that have fewer than 300 users are greyed out). First, note that the two measures tend to produce similar results for the same countries. Second, the relationship may be different for different countries. For some, there is a positive relationship between both measures and VHE, especially Spain, indicating that (in the case of interest) the greater the share of politicians that a user follows, the more likely they are to see vaccine-hesitant content in their timeline. On the other hand, other countries display a negative correlation, especially Greece, suggesting that a greater political interest corresponds to less vaccine-hesitant content. However, this relationship tends to remain constant over time, though it can sometimes change, as in the case of Poland when considering political focus. Thus we conclude that the relationship between politicization and VHE is not straightforward, but specific to a particular country and its political situation (echoing our findings in RQ1). **RQ3**. _Are political actors more influential in the vaccination debate compared to non-political users?_ Finally, we compare the engagement with content posted by political actors --the Twitter accounts of prominent politicians-- to that of other users in that country. We match politicians with other users on the number of followers, followees, and daily posting rate in order to achieve a fair comparison. We then consider country/period networks in which there are at least 10 politicians involved in the retweet network, resulting in 44 experiments. Figure 4: Distribution (boxplots) of OLS coefficients modeling users’ VHE score by their interest in parties having one of four dimensions defined by ParlGov, grouped in quintiles. Accompanying points and whiskers indicate a 99% bootstrapped confidence interval. Horizontal brackets on top indicate the comparisons among quintiles 1, 3 and 5; * signifies whether one distribution is statistically greater than the other (one-sided Mann-Whitney U test at \(p<0.01\) with Bonferroni correction). First, when comparing the number of retweets these users have received, we find that 40 out of 44 comparisons (91%) do not show a significant difference (using one-sided Wilcoxon test, with Bonferroni correction); only in 4 cases political accounts had more retweets. The result is the same if we consider the number of unique retweeters. Second, measuring the PageRank centrality of both groups, we find only 2 cases when they are significantly different (politicians having higher centrality). Third, considering mentions, we find that in 10 cases (less than a quarter of all cases) politicians are mentioned significantly more than others, but there are no cases of the reverse. In summary, we do not find evidence of political Twitter users being much more influential than other users who have a similar social profile, using the metrics above. 
## Discussion A connection between far-right and anti-vaccination movements has been seen on the streets of many European countries, with documented "anti-vax" protests of white supremacists in Britain [28] and arrests of Italy's far-right New Force during COVID-19 riots [29]. However, our study shows that the connection between interest in right-wing political actors and vaccination-hesitant content is not limited to the extreme cases on the streets, but can be found on one of the largest social media websites. We notice that, according to ParlGov, during the study period no right-wing government was in power in the considered countries (with the exception of North League from February 2021 in Italy), thus they were in an opposition role, which may have shaped their audience. Our findings are further supported by survey evidence. A large survey conducted from December 2020 to January 2022 in Spain has shown that far-right supporters were almost twice as likely to be vaccine-hesitant than the overall population - a consistent trend with a brief lessening around October 2021 [30]. Elsewhere, a survey of the residents of Norway showed that the refusal to vaccinate is associated with right-wing ideological constraint, with the authors concluding that "vaccine refusal is partly an act of political protest and defiance which attaches itself to consistent right-wing attitudes" [31]. However, a survey of French respondents found that those both on the right and left extremes were more likely to refuse vaccination [16]. A more refined look at the personal right-wing ideology was used in a survey of respondents in Germany, Poland, and the United Kingdom. When separated into two distinct right-wing dimensions, high "social dominance orientation" (SDO) is associated with higher vaccine hesitancy, while high "right-wing authoritarianism" (RWA) is associated with lower hesitancy, pointing to additional nuance within political ideologies [32]. As we show, political situations and their association with vaccination hesitancy vary greatly amongst countries, more work is necessary to understand the local political incentives around public health communication. These findings, however, also present an opportunity. The fact that those who are likely to find vaccine-hesitant information on social media are likely to follow particular political actors points to a possible direct way to communicate with them. If such actors can be persuaded to team up with the public health authorities to promote scientifically-grounded information, their audience would receive such messages from sources they already trust. For instance, the partisan divide in terms of COVID vaccination in the UK has been shown to be less drastic than in the US, likely to the greater pro-vaccination position of the Conservative UK government [33]. Additionally, public health messaging and interventions could be tailored to specific political affiliations or beliefs to better address vaccine hesitancy. Note that, even though we did not explicitly label content for misinformation, labelers have encountered many instances of possible misleading or erroneous information. Further, we acknowledge that our study does not include bot detection, as bots may play an important role in information propagation [34]. A recent study has shown there is a global proliferation of low-quality information within and across national borders [10]. 
Whether this information is propagated by political actors is an important question in terms of accountability and responsibility of those representing a public office. For instance, in early 2023, the UK Conservatives suspend a lawmaker for posting vaccine misinformation on Twitter [35]. Figure 5: Spearman correlation between political interest and VHE score (left) and political focus and VHE score (right) (see RQ2 Methods). Grey cells indicate country/periods having fewer than 300 users. White cells indicate a non-significant correlation. This study has certain notable limitations. Although the focus of this paper is Europe, by far not all countries were included, either due to data sparsity (despite the multi-lingual queries) or the lack of external resources (such as ParlGov). Twitter users are not representative of the larger populations and tend to be more political [36], potentially overestimating political engagement in the whole country. Although we find some relationships which are stable over time, both public health and political spheres are highly dynamic, limiting our findings to the unique (and unprecedented) time of the rollout of COVID-19 vaccines. Further, keyword-based data collection is always bound to miss content that is phrased differently from the query, with additional challenges in a multilingual setting. However, our decision to keep the keyword set consistent across countries and time was to conserve comparability between countries and to avoid topic drift (for instance, discussion around vaccine-related "green passes" during the vaccination period). The methodology used in this study also favors users who are active and engage in retweeting, excluding those who engage with the topic rarely (previous work shows it is difficult to ascertain the opinions of such users even for human annotators [8]). Observational studies also fail to capture the impact of merely seeing information, that is, whether encountering vaccine-hesitant content actually changes people's minds, although previous studies have found a link between social media use and COVID-19 vaccine hesitancy [37, 38]. Complementary methodologies including surveys may be needed to further understand cognitive and psychological nuances of the interaction between social media use, political interest, and vaccine stance deliberation. Finally, as this study deals with healthcare decisions that may be highly personal in nature, privacy is an important consideration of this work. Because the Streaming API was used to collect this data, it is possible that some of the content was removed either by the user or by Twitter by the time of the analysis. However, besides annotating select tweets, throughout the paper we deal with the structural properties of the RT network and followership of popular political accounts, and present our findings in an aggregated fashion. Further, our methodology identifies pockets of users agreeing on a topic, which could be misused to target or harass people because of their opinions. Any interventions, therefore, possibly based on this analysis must comply with the ethical standards of public communication, and further research possibly dealing directly with individuals must follow the human research guidelines of their institution [39]. 
## Methods Below, we describe the initial data collection using the Twitter Streaming API, the selection of country-specific content using user geo-location, and the construction of retweet networks that represent one user's endorsement of another's content. We then enrich this data from two sides: by estimating the vaccination stance of the tweets users are likely to encounter in these networks, and by estimating the political interest of these users. Finally, we compare these two sides to answer our research questions. ### Data Collection First, in 2019 we compiled a catalog of terms related to vaccines that are translated into 18 different languages (using 2-letter ISO: _bg, cz, fi, de, el, en, es, fi, fr, hu, it, nl, pl, pt, ro, ru, sv, tr_) starting from the list created for the research of tracking vaccination discussion on Twitter [8]. Using Twitter search, we iteratively added keywords to the list and searched again until no new keywords could be found. Then, these keywords were translated by native speakers with the task of including several common grammatical variations or relevant local keywords. This resulted in a collection of 459 keywords, including _vaccine_, _novax_, _measles_, _MMR_, _vaccinated_, and others (for full listing, see Data availability). Once the collection began, we did not modify the list to incorporate new vaccine-related words to keep the selection process consistent and the data comparable across time. This methodological choice likely led to some exclusion of the tweets mentioning new vaccines that did not also use the above keywords (see sec. Limitations). The final collection, spanning 2019/10/01 - 2021/3/31, is divided into four three-month periods: (1) pre-COVID (October 2019 - December 2019), (2) pre-vaccine (July 2020 - September 2020), (3) vaccine development (October 2020 - December 2020), and (4) vaccine rollout (January 2021 - March 2021) periods (see Supplementary Figure S6 for a volume graph). The time periods were chosen based on the international news about the vaccine development, though we acknowledge that they fit loosely the vaccine rollout schedule in each individual country. For instance, in period 2 the Sputnik V vaccine was announced (2020/08/11), in period 3 Pfizer-BioNTech (2020/11/09) and Moderna (2020/12/18) vaccines were announced, and at the beginning of period 4, the first AstraZeneca vaccine was administered (2021/01/04). Note that we exclude the period from January to June 2020, when the COVID pandemic first begins. This results in a dataset of 319\(M\) of tweets. The distribution across various languages exhibits a marked heterogeneity, with English accounting for 49.0% of the tweets, followed by Spanish at 24.8%, Portuguese at 13.7%, and French at 4.7%. Other languages, including Czech, Finnish, Danish, Romanian, Bulgarian, and Hungarian, each represent a negligible portion of the traffic, constituting less than 0.1% of the total (note that they were queried separately, so the large volume of popular languages should not have affected their collection). #### Geo-localization Next, we assigned to each tweet a location using the self-reported location field provided by users in their description, by matching it to GeoNames [40], a large geographical database of locations. We manually eliminated over 500 words most commonly associated with non-locations for false matches. Using this approach, we were able to geolocate more than 49% of the users. Since some users may provide a fake position or matching may fail (e.g. 
in the case of homonym places), to reinforce proper geo-localisation, for each country we filtered out tweets written in a language other than the official ones of their country. We tested this approach using tweets that come with geographic coordinates in their metadata as the gold standard, resulting in 93.7% (95% CI [83.2, 100.]) accuracy (which is on average more than twice as accurate as localizing users writing in other languages). See Supplementary Table S1 for accuracy estimates by country by language. Finally, we constrained our analysis to the European countries appearing in the ParlGov dataset (more on this below in the Political Analysis section). #### Debate as networks Following previous literature [8, 9, 10], we represent the vaccine debate in each country using directed, weighted graphs that capture retweet interactions among users. It has been widely shown (e.g. [41]) that retweet networks have high homophily concerning intrinsic characteristics of the nodes, such that users who retweet each other are likely to share the same opinions and are exposed to the same type of content. For each country and period, we built a retweet (RT) network where the nodes are the users in the dataset and the weight on edges from node \(i\) to \(j\) is the number of times user \(i\) retweeted user \(j\)'s tweets. For each network, we kept the nodes in the biggest _weakly connected component_ (WCC), which on average corresponds to 93% of the total (while the size of the second largest WCC is usually less than 1%.). In the following analysis, we considered RT networks with at least 300 nodes from the following 17 countries: Austria, Belgium, Czech Republic, Denmark, Finland, France, Germany, Greece, Ireland, Italy, Netherlands, Poland, Portugal, Spain, Sweden, Switzerland, and United Kingdom. This resulted in 61 networks (17 countries \(\times\) 4 periods), as 7 networks were removed from period 1 due to the size threshold. In the Supplementary Table S2 the sizes of the WCCs in each country by period are listed. ### Vaccination Stance Labeling Next, we turn to the labeling of the tweets in terms of vaccination stance (and measuring users' exposure to these stances). To this end, we create a dataset of manually annotated samples of the content shared on the RT networks. To avoid the over-representation of some portions of the social network, we performed a stratified sampling of the tweets first dividing the networks into 15 communities and then selecting 6 tweets per community with the largest difference between the fraction of internal and external retweeters (i.e. the fraction of users from inside a community retweeting the post, minus a fraction of users retweeting it outside). The chosen number of communities and tweets per community are a trade-off between an accurate representation and our capability of manual annotation. Notably, in this step, community detection serves to perform a stratification, this is why the number of communities is the same for all networks. This strategy allowed us to identify tweets that are at the same time popular and representative of a network portion. In this way, we were able to achieve a high coverage ranging between 37.5% (1st period) and 4.5% (4th period) of the total number of tweets produced by users in the RT network and an average fraction of users covered (i.e. for whom we noted at least one of their posts) between 61.8% to 38.5%. 
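As a rough illustration of this construction, the following sketch builds a weighted, directed retweet graph and keeps its largest weakly connected component, as described above; the edge list and column names are stand-ins for the real data, not the authors' pipeline.

```python
import pandas as pd
import networkx as nx

# Hypothetical retweet records: (retweeting user, original author), one row per retweet
retweets = pd.DataFrame(
    [("u1", "u2"), ("u1", "u2"), ("u3", "u2"), ("u4", "u5")],
    columns=["retweeter", "author"],
)

# Edge weight from i to j = number of times user i retweeted user j
weights = retweets.value_counts(["retweeter", "author"]).reset_index(name="weight")

G = nx.DiGraph()
G.add_weighted_edges_from(weights.itertuples(index=False, name=None))

# Keep only the largest weakly connected component (WCC), as in the paper
largest_wcc = max(nx.weakly_connected_components(G), key=len)
G_wcc = G.subgraph(largest_wcc).copy()
print(G_wcc.number_of_nodes(), "users,", G_wcc.size(weight="weight"), "retweets")
```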
The selected tweets were then labeled by 13 expert annotators with a background in the vaccine debate and as far as possible proficient in the original language of the tweets. Nevertheless, we translated all tweets into English using Google Translate for cross-checking. The classification task consisted in deciding whether a tweet was _pro-vaccine_, _vaccine-hesitant_, or _other_. In particular, vaccine-hesitant tweets included those stating directly the user will not vaccinate, questioning their efficacy or safety, or espousing conspiratorial views around their creation or distribution. The annotators were encouraged to take into account all information available (images, videos, URLs, etc.) if the tweet was still available online. Annotator agreement, measured using _Cohen's kappa_ of \(k=0.47\) showed the task to be moderately complex. However, when the label _other_ is excluded, the agreement is \(k=0.78\), with an overlap in labels of 92.3%. We make the labeled dataset of 5667 tweets in 13 languages available to the research community (see Data availability). With a partial annotation of the tweets at hand, we aim at assigning to each user a score, which we dub _Vaccine Hesitancy Exposure_ (VHE) score, capturing the stance of the content they might be exposed to but not necessarily their own stance on vaccination. To accomplish this task, we begin by propagating the labels of the manually annotated data to other users who have retweeted (but not "quoted") them. Next, we apply a procedure summarized in Algorithm 1. In simple words, we randomize the network, perform community detection and assign a score in [0,1] to each individual to be exposed to vaccine-hesitant content, given its class affiliation. The process is repeated 100 times, and the VHE score is obtained by averaging the result over all trials. We now describe in detail the steps in the scheme of the algorithm. Perturb the network:We generate 100 versions of the network using perturbation methods described in [42]. In this way, we attempt to mitigate weaker clustering signals that may be an artifact of the particular sample of the RT network. We sample randomly 15% of the retweets and change their target, selecting the new one according to the _weighted-in-degree_ distribution of the nodes, such that the account popularity information is preserved. We empirically choose a fraction of 15% as the maximum amount of noise we can introduce before the smallest networks' community structure is unrecoverable, while still allowing us to have significant randomization effects on the denser networks of the largest countries. Community detection (CD):Both for the stratified sampling and for the VHE definition, we perform community detection with the spectral clustering algorithm for weighted networks introduced in [43], applied to a symmetrized version of our graph. We adopt the "spin-glass" version of the regularized Laplacian matrix and extend its use to more than two communities as per [44]. We choose this algorithm for its speed, its efficiency on weighted, and sparse graphs, but also because it is one of the few known methods to estimate the number of communities in graphs. For interpretation purposes, we split the graph into \(k=15\) communities, whenever the estimated number exceeded this threshold. 
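The rewiring step used to perturb the networks can be sketched as follows: 15% of retweet events are sampled and their targets are redrawn in proportion to nodes' weighted in-degree, so that account popularity is preserved on average. This is a simplified rendering under stated assumptions (weighted edges are expanded into unit retweet events, accidental self-loops are dropped), not the authors' code.

```python
import numpy as np
import networkx as nx

def perturb_retweet_network(G: nx.DiGraph, fraction=0.15, seed=None):
    """Return a copy of G with `fraction` of retweets redirected to new targets
    drawn from the weighted in-degree distribution (popularity-preserving)."""
    rng = np.random.default_rng(seed)
    # Expand weighted edges into individual retweet events
    events = [(u, v) for u, v, w in G.edges(data="weight", default=1) for _ in range(int(w))]
    nodes = list(G.nodes())
    in_w = np.array([G.in_degree(n, weight="weight") for n in nodes], dtype=float)
    p = in_w / in_w.sum()

    n_swap = int(fraction * len(events))
    swap_idx = rng.choice(len(events), size=n_swap, replace=False)
    for i in swap_idx:
        u, _ = events[i]
        events[i] = (u, nodes[rng.choice(len(nodes), p=p)])

    H = nx.DiGraph()
    for u, v in events:
        if u != v:  # drop accidental self-loops created by the rewiring
            H.add_edge(u, v, weight=H.get_edge_data(u, v, {"weight": 0})["weight"] + 1)
    return H
```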
As a robustness check, we compared the VHE scores created using the Louvain algorithm [45], as well as the results for RQs 1 & 2, and obtained generally consistent results to those using spectral clustering (for more on VHE score robustness checks, see Supplementary Section S1.4 and additional results in Supplementary Figures S8-S11). Exposure within communities:In each community \(c\) detected above, we define a measure \(\chi^{(c)}_{VH}\) that captures the intensity of its members' exposure to hesitant content, defined as the difference between the fraction of hesitant and pro tweets: \[\chi^{(c)}_{VH}=\frac{1}{2}\left[\frac{N_{VH}-N_{Pro}}{N_{VH}+N_{Pro}+N_{other} }+1\right]\] where \(N_{VH}\), \(N_{Pro}\), \(N_{other}\) is the number of vaccine-hesitant, pro-vaccine, and other tweets in the community. The value \(\chi^{(c)}_{VH}\) spans between 0 and 1 and captures how much likely it is that a user in the community \(c\) reads a hesitant tweet instead of a pro-vaccine. From community-level to user-level scores:Finally, for each user \(u\) we compute their \(\mathit{VHE}_{u}\) score as the mean on the set of 100 values of \(\chi^{(c)}_{VH}\) where the \(c\)'s are the communities to which \(u\) was assigned. The process results in a score for each user from 0 to 1, where 0 means that the user is exposed to only pro-vax tweets, and 1 only vaccine-hesitant tweets. Intuitively, having a value greater than 0.5 means that on average the user was sorted into communities where more hesitant tweets were shared than pro-vaccine. ### Political Analysis In order to study the connections between VHE score and politics, we identified and characterized the Twitter accounts in our dataset linked to politicians from each respective country. Identifying Politicians & Parties.We began by building an extensive list of politicians (parliamentarians and parties' leaders) who are associated with a personal Twitter account using _Politicians on Social Media_[46] and _Twitter Parliament_[47] datasets. Then, we annotated them with the party affiliations in the target period (October 2019 - March 2021) by first matching them to _WikiData_[48] and then manually annotating those not found in WikiData using all available resources. Since parties may be dynamic (dissolution, merging, renaming, etc.) we took as gold standard those parties which are present in the national Parliament during the last government session up to December 2021 provided by the _ParIGov_ dataset (the _Parliamentary Governments Database_[27] is an extensive and widely-utilized resource providing detailed information about political parties, elections, cabinets, and governments in parliamentary democracies worldwide). For those parties not in _ParIGov_ but which were active in the target period, we either labeled them with a larger party in _ParIGov_ that they have joined, or as _Other_ if a larger party could not be found. Indeed, it is important to take smaller parties into account when analyzing social media data, since their supporters can be vocal on Twitter [49]. A link to the list of all political users annotated is provided in the Data availability section. Comparison of User Interest in Political Parties to the VHE Score (RQ1).To compare the user's political interests to their VHE score, we employ an OLS regression model that predicts a user's VHE score using the fraction of politicians followed by them in each party, along with a set of confounding variables. 
These confounding variables include the number of followers and followees, daily posting rate, weighted in-degree and weighted out-degree (number of retweets they had, and number of retweets they made in the vaccine debate, respectively), and the proportion of followed users who are politicians (political interest, defined above). These variables were selected using the Variance Inflation Factor (VIF), to make sure they do not introduce multicollinearity. We then standardized all features within each country, including the target variable. For each country/period network, we run OLS and note the model fit (Adjusted \(R^{2}\)) and the coefficients of the politically-related variables. We compute 61 such models, one for each network except for the 7 networks that have fewer than 300 users in the WCC. Further, we only consider the models which have a fit of Adjusted \(R^{2}>0.1\)[50]. To alleviate the multiple hypothesis testing problems, we apply the Bonferroni correction to the \(p\)-values of the variable coefficients, selecting those significant at \(p<0.01\), with the correction. Finally, we report aggregated results in terms of the proportion of significant coefficients, their direction (positive vs. negative), and the magnitude and stability of coefficients for select parties. We perform a similar analysis by aggregating the parties by classification assigned by ParlGov in terms of the political family (conservative, social democrats, liberal, green, etc.), and four dimensions: left/right, State/market (economic policy), liberty/authority (personal freedom), pro/anti-EU. For instance, when aggregating per political family, we run a model for each network that models the VHE score by considering the proportion of politicians followed by the user in a particular political family, along with a set of confounding variables. In the case of the four dimensions, each dimension is run as a separate model, with the numerical score of the dimension binned into quintiles (as well as a "none" score when a party does not have a score in that dimension). For each dimension, the quintile bins were computed on the whole dataset, before being applied to the parties in each country. Similar Adjusted \(R^{2}\) and Bonferroni-corrected \(p-\)value filters were applied to these models. When comparing the average coefficients of the variables in different groups, we compute a confidence interval using bootstrapping (\(n=1000\)). #### Comparison of User Politicization to the VHE Score (RQ2). To assess the politicization of the users in our data, we collected the followers of the politician identified above using the Twitter Followers API. The share of users who follow at least one politician varies on average between 54% and 88% except for Portugal at only 22%. Note that, as the API does not provide historical knowledge of the follower relationship, here we assume that the followership does not drastically change over time (between the start of our data on October 2019 and the followership collection in December 2022). We then define two measures of user politicization by each user: _political interest_ is the proportion of all accounts a user follows that are politicians, and _political focus_ is the share of politicians in the user's most followed party (for example, if a user follows 5 politicians in party A, 3 in party B, and 2 in party C, the political focus is 5/10 = 0.5). Finally, we compute the Spearman correlation between each of these two measures and the VHE score. 
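A schematic version of this per-network regression step, with standardized variables, an adjusted-\(R^2\) filter, and Bonferroni-corrected coefficient \(p\)-values, might look like the following; the data frame, column names, and thresholds are assumptions for illustration rather than the authors' implementation.

```python
import pandas as pd
import statsmodels.api as sm

def fit_vhe_model(df: pd.DataFrame, party_cols, control_cols,
                  r2_threshold=0.1, alpha=0.01):
    """OLS of the (standardized) VHE score on party-followership shares and controls.
    Returns significant, Bonferroni-corrected party coefficients, or None if fit is poor."""
    cols = list(party_cols) + list(control_cols) + ["vhe"]
    z = (df[cols] - df[cols].mean()) / df[cols].std(ddof=0)   # standardize within country

    X = sm.add_constant(z[list(party_cols) + list(control_cols)])
    model = sm.OLS(z["vhe"], X).fit()
    if model.rsquared_adj < r2_threshold:
        return None                                           # discard poorly fitting networks

    n_tests = len(party_cols)                                  # Bonferroni over party coefficients
    pvals = model.pvalues[list(party_cols)] * n_tests
    coefs = model.params[list(party_cols)]
    return coefs[pvals < alpha]
```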
#### Comparison of Politicians to Others (RQ3). Finally, to ascertain the importance of politicians in the vaccination debates in each country, we compare the politicians and their posts to a matched set of other users in terms of retweets and mentions, and their PageRank in the RT network (using a one-sided Wilcoxon signed-rank test). We also considered the unique sets of retweeters and those mentioning the politicians, getting the same results. To perform a fair comparison, we match each political account with another on the number of followers, followees, and the daily posting rate using the Euclidean distance with the standardization of all variables. The distributions of these variables with matches were checked with paired t-tests to make sure the political accounts were indeed close to the matched baseline.
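The matching-and-comparison step can be sketched as follows: each politician is paired with the non-politician closest in standardized (followers, followees, daily posting rate) space, and the paired samples are compared with a one-sided Wilcoxon signed-rank test. Variable names and the retweet-count column are illustrative assumptions, and the greedy nearest-neighbour matching with replacement is a simplification.

```python
import pandas as pd
from scipy.spatial.distance import cdist
from scipy.stats import wilcoxon

def match_and_compare(politicians: pd.DataFrame, others: pd.DataFrame,
                      features=("followers", "followees", "daily_posts"),
                      outcome="n_retweets"):
    """Match each politician to its nearest non-politician in standardized feature
    space, then test whether politicians receive more retweets (one-sided Wilcoxon)."""
    feats = list(features)
    pooled = pd.concat([politicians[feats], others[feats]])
    mu, sd = pooled.mean(), pooled.std(ddof=0)

    zp = (politicians[feats] - mu) / sd
    zo = (others[feats] - mu) / sd
    nearest = cdist(zp.values, zo.values).argmin(axis=1)      # greedy 1-NN matching

    x = politicians[outcome].to_numpy()
    y = others.iloc[nearest][outcome].to_numpy()
    stat, p = wilcoxon(x, y, alternative="greater")           # H1: politicians > matched users
    return nearest, p
```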
2308.13142
A Survey of Diffusion Based Image Generation Models: Issues and Their Solutions
Recently, there has been significant progress in the development of large models. Following the success of ChatGPT, numerous language models have been introduced, demonstrating remarkable performance. Similar advancements have also been observed in image generation models, such as Google's Imagen model, OpenAI's DALL-E 2, and stable diffusion models, which have exhibited impressive capabilities in generating images. However, similar to large language models, these models still encounter unresolved challenges. Fortunately, the availability of open-source stable diffusion models and their underlying mathematical principles has enabled the academic community to extensively analyze the performance of current image generation models and make improvements based on this stable diffusion framework. This survey aims to examine the existing issues and the current solutions pertaining to image generation models.
Tianyi Zhang, Zheng Wang, Jing Huang, Mohiuddin Muhammad Tasnim, Wei Shi
2023-08-25T02:35:54Z
http://arxiv.org/abs/2308.13142v1
# A Survey of Diffusion Based Image Generation Models: Issues and Their Solutions ###### Abstract. Recently, there has been significant progress in the development of large models. Following the success of ChatGPT, numerous language models have been introduced, demonstrating remarkable performance. Similar advancements have also been observed in image generation models, such as Google's Imagen model, OpenAI's DALL-E 2, and stable diffusion models, which have exhibited impressive capabilities in generating images. However, similar to large language models, these models still encounter unresolved challenges. Fortunately, the availability of open-source stable diffusion models and their underlying mathematical principles has enabled the academic community to extensively analyze the performance of current image generation models and make improvements based on this stable diffusion framework. This survey aims to examine the existing issues and the current solutions pertaining to image generation models. Keywords: diffusion models, image generation, issues and solutions 
Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: 
journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision and Pattern Recognition + Footnote †: journal: Computer Vision 
For **C2**: techniques such as retrieval-based generation and subject-driven generation help the model handle rare or novel concepts, in some cases without extensive test-time training (Section 4). For **C3**: Improvements in diffusion models have been achieved through various means. Enhancing the text encoder has been shown to be crucial for text-to-image generation, and works like "Imagen" have studied the impact of different text encoders. "Mixture of experts" has been used to leverage different models for different stages of the generation process. "Instruction tuning for human preference" employs reinforcement learning with a reward model to optimize image quality based on human preferences. Additionally, "Sampling quality improvement" and "Rewrite the prompt" have been introduced as techniques to enhance image generation quality (Section 5). In general, diffusion models have made significant progress in generating high-quality images under diverse conditions. Researchers continue to explore novel techniques and improvements to further advance the capabilities of diffusion models in image generation tasks. This survey differs from existing surveys in several aspects.
(1) Application Focus: Unlike the other surveys (Friedman et al., 2016; Goyal et al., 2017; Goyal et al., 2018) that provide an overview of diffusion models and their mathematical foundations, our survey focuses on the application of diffusion models in image generation. It summarizes the methods and techniques used to enhance image generation quality, which is essential for many practical applications. (2) Emphasis on Limitations and Solutions: While existing surveys (Friedman et al., 2016; Goyal et al., 2018; Goyal et al., 2018; Goyal et al., 2018) mostly introduce diffusion models and their applications, this survey takes a more targeted approach by identifying the current limitations of diffusion models in image generation. It then highlights individual research papers and their contributions in addressing these limitations. This enables readers to understand how various works have tackled specific challenges related to image generation using diffusion models. (3) Complementary to Other Modalities: Unlike the existing surveys that focus on the applications of diffusion models in other modalities, such as audio (Goyal et al., 2018), graph (Goyal et al., 2018), and medical images (Goyal et al., 2018), this survey complements them by providing a comprehensive exploration of image generation using diffusion models. It consolidates the advancements made in this field, offering insights for researchers and practitioners interested in image synthesis. ## 2. Preliminaries of Diffusion Models Prior to the emergence of diffusion models (Srivastava et al., 2015), GAN-based (Garjani et al., 2016) or VAE-based (Goyal et al., 2018) methods all tried to reconstruct images from sampled white noise. Building upon this concept, diffusion models (Srivastava et al., 2015) took inspiration from thermodynamics and proposed a different approach. They reasoned that if an image is gradually corrupted with noise until it is completely degraded, then the reverse of this noising process aligns with the fundamental concept of image generation. If a deep neural network can effectively capture and model this reversed process, it becomes possible to iteratively remove the noise, eventually revealing a clear image. This forms the fundamental idea behind diffusion models. Mathematically, this process can be modeled as a Markov process. Fig. 1, taken from the DDPM paper (Dempel et al., 2017), illustrates this process. \(X_{0}\) represents the original image, and \(X_{1}\),..., \(X_{t-1}\), \(X_{t}\),..., \(X_{T}\) represent the results after adding noise at each step. From fig 1, we can see that, at step \(t-1\), \(X_{t-1}\) is already a mixture of noise and image. After multiple rounds of injecting noise, at step \(T\), the image \(X_{T}\) has become so noisy that nothing can be recognized in it. The process of adding noise step by step from \(X_{0}\) to \(X_{T}\) is called the "forward process" or "diffusion process"; in fig 1, \(q(X_{t}|X_{t-1})\) refers to this process. Conversely, starting from \(X_{T}\), the process of iteratively removing the noise until a clear image is obtained is called the "reverse process"; in fig 1, \(p_{\theta}(X_{t-1}|X_{t})\) refers to this reverse process. In the forward process, the addition of noise is precisely described by equation 1. The variable \(\epsilon_{t-1}\) represents the added noise. This noise is introduced through a weighted summation, where the weight at each step is denoted \(\beta_{t}\) and referred to as the diffusion rate.
The diffusion rate is determined in advance by a scheduler. \[x_{t}=\sqrt{1-\beta_{t}}\cdot x_{t-1}+\sqrt{\beta_{t}}\cdot\epsilon_{t-1} \tag{1}\] \begin{table} \begin{tabular}{l|l|l} \hline Challenges & Techniques & Models \\ \hline \multirow{4}{*}{Multi-object generation} & \multirow{4}{*}{Adding more controls} & ReCo (Goyal et al., 2018), GLIGEN (Goyal et al., 2018), SceneComposer (Goyal et al., 2018), SpaText (Friedman et al., 2016), \\ & & ControlNet (Goyal et al., 2018), UniControl (Goyal et al., 2018), Uni-ControlNet (Goyal et al., 2018), \\ & & T2I-adapter (Goyal et al., 2018), Universal Guidance (Garjani et al., 2016), VPGen (Goyal et al., 2018) \\ \cline{2-3} & \multirow{4}{*}{Working on attention maps} & SynGen (Goyal et al., 2018), Attend-and-Excite (Goyal et al., 2018), \\ & & CIIS (Srivastava et al., 2015), Detector Guidance (Garjani et al., 2016), \\ & & TFLA (Garjani et al., 2016), Attention-Refocusing (Garjani et al., 2016) \\ \hline Generating rare & Retrieval-based methods & RDM (Dempel et al., 2017), kNN-Diffusion (Dempel et al., 2017), Re-imagen (Friedman et al., 2016) \\ \cline{2-3} or novel concepts & \multirow{4}{*}{Subject-driven generation} & Textual Inversion (Dempel et al., 2017), ELITE (Srivastava et al., 2015), Custom Diffusion (Garjani et al., 2016), \\ & & DreamBooth (Goyal et al., 2018), E4T (Goyal et al., 2018), InstantBooth (Garjani et al., 2016), \\ & & FastComposer (Goyal et al., 2018), Blip-Diffusion (Garjani et al., 2016) \\ \hline \multirow{4}{*}{Quality improvement of generated images} & Text encoder improvement & Imagen (Goyal et al., 2018), GlueGen (Goyal et al., 2018), eDiff-i (Friedman et al., 2016), StructureDiffusion (Goyal et al., 2018) \\ \cline{2-3} & Mixture of experts & eDiff-i (Friedman et al., 2016), ERNIE-ViLG (Garjani et al., 2016) \\ \cline{1-1} \cline{2-3} & Instruction tuning for human preference & HPS (Goyal et al., 2018), ImageReward (Goyal et al., 2018) \\ \cline{1-1} \cline{2-3} & Sampling quality improvement & Shifted Diffusion (Goyal et al., 2018) \\ \cline{1-1} \cline{2-3} & Self-attention guidance & SAG (Garjani et al., 2016) \\ \cline{1-1} \cline{2-3} & Prompt rewriting & Promptist (Garjani et al., 2016) \\ \hline \end{tabular} \end{table} Table 1. Summary of existing diffusion models in image generation tasks. Given this definition, we can mathematically derive equation 2, where \(\widetilde{\mu}_{t}(x_{t},x_{0})\) is given in equation 3. All the \(\alpha\) terms can be calculated from \(\beta\), with \(\alpha_{t}=1-\beta_{t}\) and \(\widetilde{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}\) (the detailed derivation can be found in [26]). The conclusion is that, given the original image \(x_{0}\) and the noisy image \(x_{t}\), we can use equation 2 to sample \(x_{t-1}\). If we then have a neural network \(\mu_{\theta}(x_{t},t)\) that mimics this function, we can use the loss function in equation 5. This is the original idea behind diffusion models. \[q(X_{t-1}|X_{t},X_{0})=\mathcal{N}(X_{t-1};\widetilde{\mu}_{t}(x_{t},x_{0}),\widetilde{\beta}_{t}I) \tag{2}\] \[\widetilde{\mu}_{t}(x_{t},x_{0})=\frac{\sqrt{\widetilde{\alpha}_{t-1}}\cdot\beta_{t}}{1-\widetilde{\alpha}_{t}}\cdot x_{0}+\frac{\sqrt{\alpha_{t}}\cdot(1-\widetilde{\alpha}_{t-1})}{1-\widetilde{\alpha}_{t}}\cdot x_{t} \tag{3}\] \[\widetilde{\beta}_{t}=\frac{1-\widetilde{\alpha}_{t-1}}{1-\widetilde{\alpha}_{t}}\cdot\beta_{t} \tag{4}\] \[L=\|\widetilde{\mu}_{t}(x_{t},x_{0})-\mu_{\theta}(x_{t},t)\|^{2} \tag{5}\]
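To make these quantities concrete, the following minimal NumPy sketch (not taken from any of the surveyed papers; the linear \(\beta\) schedule and the stand-in for \(\mu_{\theta}\) are illustrative assumptions) computes \(x_{t}\), the posterior mean of equation 3, and the loss of equation 5 for a toy one-dimensional "image".

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # assumed linear schedule beta_t
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)          # cumulative product, written as alpha-tilde in the text

def q_sample(x0, t, rng):
    """Forward process: sample x_t directly from x_0 (closed form of repeatedly applying eq. 1)."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return x_t, eps

def posterior_mean(x_t, x0, t):
    """mu_tilde_t(x_t, x_0) from eq. 3: the mean of q(x_{t-1} | x_t, x_0)."""
    coef_x0 = np.sqrt(alpha_bar[t - 1]) * betas[t] / (1.0 - alpha_bar[t])
    coef_xt = np.sqrt(alphas[t]) * (1.0 - alpha_bar[t - 1]) / (1.0 - alpha_bar[t])
    return coef_x0 * x0 + coef_xt * x_t

rng = np.random.default_rng(0)
x0 = rng.standard_normal(64)            # toy 1-D "image"
t = 500
x_t, eps = q_sample(x0, t, rng)
mu_target = posterior_mean(x_t, x0, t)
mu_pred = 0.9 * x_t                     # hypothetical stand-in for a network mu_theta(x_t, t)
loss = np.mean((mu_target - mu_pred) ** 2)   # eq. 5: regress the prediction onto mu_tilde
print(loss)
```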
Denoising Diffusion Probabilistic Models (DDPM) [26] further improved this idea. They proposed to directly predict the noise added at each step, \(\epsilon_{t-1}\); \(X_{t-1}\) can then be easily obtained by removing \(\epsilon_{t-1}\) from \(X_{t}\). The proposed loss is shown in equation 6. DDPM [26] was the first diffusion model to successfully generate high-quality images. \[L=\|\epsilon-\epsilon_{\theta}(x_{t},t)\|^{2} \tag{6}\] DDPM originally focused on image generation from noise; however, real-world scenarios often require image generation conditioned on additional information. To address this need, Classifier-free guidance (CFG) [27] was proposed as a framework for effectively incorporating conditional inputs. This framework allows precise control over the conditioning information by changing the guidance rate. Since its introduction, the CFG framework has gained widespread adoption and serves as the foundational basis for numerous contemporary diffusion models, including Imagen [59], DALLE 2 [54], and Stable Diffusion [56]. Now we can summarize the basic components of a diffusion model: * **Noise prediction module**: This is the core component of the diffusion model, taking \(X_{t}\), possibly together with other conditions, as input; the output is the added noise. Due to the success of U-Net [57] in previous computer vision tasks, diffusion models normally also use this structure as their noise prediction model. [56] also added attention layers [65] to incorporate other conditions such as text or bounding boxes. Surprisingly, a very important property has been observed in this attention U-Net: the cross-attention maps in the U-Net directly affect the layout and geometry of the generated image [25]. This property is used by many researchers to improve generation quality. Recently, some works have also replaced this U-Net with a pure transformer structure [4]. The main purpose of this component is to take some conditioning information and the previous state as input, and output the next state with the same dimension as the previous one. * **Condition encoder**: Currently, most works do not just generate images from sampled noise, but also condition the generation on something, such as text. Text-to-image generation is the most common case. As with other large pretrained multi-modal models, using a pretrained encoder and freezing it during training is a good strategy to save time, and current diffusion models adopt this methodology. Normally, a T5-series encoder [53] or the CLIP text encoder [51] is used in most current works. * **Super resolution module**: Most generation models typically operate on lower resolutions such as \(512\times 512\) or \(256\times 256\). However, real-world applications often require higher-resolution images. Take the example of wallpapers, which are commonly expected to be in 4K resolution. To address this limitation, a common approach is to incorporate super-resolution models after the initial generation process. For instance, DALLE 2 [54] employs two super-resolution models in its pipeline. Initially, it generates an image of dimension \(64\times 64\). Subsequently, the first super-resolution model upscales the image to \(256\times 256\), and the second super-resolution model further enhances the image to \(1024\times 1024\). The inclusion of super-resolution modules not only increases the image size but also improves the image quality by adding more intricate details. As a result, features like facial pores and wrinkles may appear in the generated face after super-resolution, contributing to a more realistic image appearance.
* **Dimension reduction module**: To save computational resources, diffusion models often operate not directly on the pixel space of images, but rather on a lower-dimensional latent space representation. For instance, SD model [56] utilizes VAE [31] to compress the image into a latent space of dimension \(4\times 64\times 64\). On the other hand, DALLE 2 [54] takes a different approach by performing the diffusion process directly on the embedding space of [51]. In this case, both the text encoder and image encoder of CLIP [51] are components integrated into the DALLE 2 model. * **Scheduler**: as mentioned previously, this module will define the diffusion rate at each step, \(\beta_{t}\). Figure 1. The directed graphical model considered in DDPM [26]. ## 3. Multi-Objects Generation When generating images with multiple objects, various issues often arise. In Figure 2, we present four images generated by SD v1.5, illustrating the challenges encountered. In the upper two images, the prompt used was "a yellow dog and a black cat." It is evident that both images have failed to accurately represent the intended objects. In the upper-left image, the dog exhibits some black coloration, while the upper-right image shows a yellow creature with mixed characteristics of both a dog and a cat. The bottom two images were generated with a more challenging prompt, "a yellow dog, a black cat, and a girl." In both images, it is evident that the generated features do not align with the specified objects, and the bottom-right image completely lacks the presence of a girl. These examples exemplify common failure cases characterized by attributes mismatching, attributes mixing, and objects missing. Diffusion models can also encounter difficulties in accurately representing positional information. In fig 3, we present examples of this failure. The prompt used in all three images is "a red cube on top of a blue cube." The left image is generated using SD v1.5, the middle image using the open-sourced unCLIP structure model Karlo (2016), and the right image using DeepFloyd IF (IF), which is an open-sourced model based on Imagen (2017) and regarded as one of the best open-sourced text-to-image models available. In the first two images, the issue of objects missing persists, as the red cube is not present. In the IF-generated image, both the blue and red cubes are successfully generated. However, the positional information is incorrect, as the cubes are not in the intended arrangement. This showcases a common failure case in diffusion models, where accurately representing the correct positional information remains a challenge. Based on these findings, the following part of this section will introduce the proposed solutions. ### Adding More Control One of the primary approaches to address this issue involves incorporating layout information into the model. This layout information can take the form of bounding boxes for individual objects or segmentation maps. In original Latent Diffusion Model (LDM) paper (Zhu et al., 2017), the conditioning was not restricted to text, making it a versatile framework applicable to various domains. Nevertheless, training such models from scratch can be computationally intensive. As a result, many recent works have adopted fine-tuning techniques to augment the model with additional conditioning based on pretrained models, often utilizing SD (Zhu et al., 2017) as a base. ReCo (2017) and GLIGEN (2017) both incorporating bounding box information into the model. 
ReCo (2017) involves extensive fine-tuning on SD 1.4, introducing extra position token embeddings into the original CLIP (Zhu et al., 2017) text encoder to encode the bounding box coordinates. Both the text encoder and the diffusion module are fine-tuned in this approach. On the other hand, GLIGEN (2017) adopts common fine-tuning method from natural language processing (NLP), incorporating an adaptor layer between the self-attention and cross-attention layers. The utilization of segmentation maps has proven to be beneficial. Two notable works, SceneComposer (2017) and SpaText (2017) concentrate on leveraging segmentation maps for image synthesis. SceneComposes (2017) introduces an encoding method that accommodates imprecise segmentation maps, enabling image generation with more flexibility. On the other hand, SpaText (2017) employs a specifically designed encoding mechanism to effectively capture and represent segmentation information. ControlNet (2017) stands as a prominent work in this domain. It introduces a parallel network alongside the unet (2017) architecture of SD (Zhu et al., 2017), serving as a plug-in that imparts remarkable flexibility to the system. This structure has gained widespread adoption in the open-source community. Apart from segmentation maps, ControlNet (2017) can also incorporate other types of input, such as depth maps, normal maps, canny maps, and MLSD straight lines. Several other works, including UnitControlNet (2017), UniControl (2017), T2I-Adapter (2017) have followed Figure 4. Bounding box to image and segmentation map to image (2017). Figure 3. Failed cases for multiple objects generation: wrong position. Figure 2. Failed cases for multiple objects generation. suit by integrating various conditional inputs and incorporating additional layers to facilitate seamless connectivity between the conditioning information. Instead of the directly using the layout info as model input, Universal Guidance (Bradner et al., 2016) took the idea of classifier guidance (Han et al., 2017), extended the condition to all the other cases during the inference process. The integration of layout information has demonstrated notable improvements in generating multiple objects in image synthesis, however, these are all based on the premise we already have the layout info. A significant challenge lies in generating the layout information itself. Several approaches have been explored to address this issue. One straightforward method involves employing large language models (LLMs) to generate bounding boxes (Han et al., 2017; Li et al., 2018; Li et al., 2019). This approach directly prompts the LLMs to generate the coordinates for the bounding boxes, albeit requiring careful engineering of the prompts. One such approach, VPGen (Han et al., 2017) took the advantage of LLMs. VPGen is short form for visual programming. It capitalizes on LLMs to accomplish this task. Rather than directly generating the image, VPGen employs an LLM to analyze the prompt. It first determines the number of objects to be included in the image, then proceeds to generate coordinates for each object. Finally, it feeds all the layout information to GLIGEN (Zhou et al., 2017) to generate the final image. Another method to generate layout information is by training a separate model dedicated to bounding box generation (Li et al., 2018; Li et al., 2019). Overall, methods mentioned in this section may require retraining specific parts or the entire model. This process can be costly and time-consuming. 
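As a rough illustration of how layout information could be serialized into the text condition, the following sketch quantizes normalized bounding-box coordinates into discrete position tokens and interleaves them with the corresponding object phrases. The `<pos_i>` token format and the prompt layout are hypothetical choices made for illustration (in the spirit of ReCo's position tokens), not the actual vocabulary or format of any of the cited models.

```python
def box_to_tokens(box, num_bins=1000):
    """Quantize a normalized (x0, y0, x1, y1) box into discrete position tokens.

    The "<pos_i>" naming is a hypothetical choice used only for this sketch.
    """
    return [f"<pos_{int(round(c * (num_bins - 1)))}>" for c in box]

def build_layout_prompt(caption, phrases_and_boxes):
    """Append '<phrase> <pos_..> ...' segments after the global caption."""
    parts = [caption]
    for phrase, box in phrases_and_boxes:
        parts.append(phrase + " " + " ".join(box_to_tokens(box)))
    return " ; ".join(parts)

prompt = build_layout_prompt(
    "a yellow dog and a black cat",
    [("a yellow dog", (0.05, 0.30, 0.45, 0.95)),
     ("a black cat", (0.55, 0.35, 0.95, 0.95))],
)
print(prompt)
```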
### Working on the Attention Maps In 2022, Google Research published "Prompt-to-prompt" (Han et al., 2017). In this paper, they reported one very important observation: the spatial layout and geometry of the generated image depend on the U-Net's cross-attention maps. Fig 5 illustrates this observation. The top part of the figure demonstrates how the attention map corresponding to the "bear" token captures the location and shape of the final generated bear in the image. Moreover, the bottom part of the figure reveals that the structural characteristics of the image are already determined in the early stages of the diffusion process. Subsequent work, "Attend-and-Excite" (Bradner et al., 2016), has further emphasized the importance of generating accurate cross-attention maps during the inference phase. These studies have established a crucial finding: if the model fails to generate appropriate cross-attention maps, the resulting image will likely be of poor quality. Fig 6 illustrates how erroneous attention maps can mislead the image generation process. In the top two rows of fig 6, "tiger" and "leopard" share similar attention maps, resulting in an image with mixed attributes: the object on the right exhibits the head of a tiger but has the spotted fur texture associated with a leopard. Building upon this observation, researchers have proposed methods that directly manipulate the attention maps to generate higher-quality images. Figure 6 also illustrates this concept, where the corrected attention maps for "tiger" and "leopard" exhibit distinct hotspots. By correcting the attention maps, the resulting image aligns with the intended prompt. The "SynGen" model (Zhou et al., 2017) works on the cross-attention maps to overcome the attribute-mixing issue. It states that an object and its attribute should share the same hot area in their attention maps. For example, for the prompt "a red apple and a yellow banana", the attention maps of the tokens "red" and "apple" should look similar, and so should those of "yellow" and "banana". This work proposed to use the similarity score between the attention maps of an object and its attribute as the optimization objective, forcing the object and its attribute to share the same hot area in the attention maps. Layout information can also help in this case. Several recent works (Han et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019) have focused on enhancing cross-attention maps by incorporating information from provided bounding boxes. Their primary objective is to augment attention values within the specified bounding boxes while suppressing attention values outside of them. For example, as shown in Figure 6, when the prompt "A striped tiger and a spotted leopard" is considered, the attention map for the "tiger" token exhibits increased intensity within the "tiger" region, while the initial prominent area corresponding to the "leopard" region diminishes. These methodologies have exhibited significant improvements in generation quality compared to the original SD model. Instead of directly modifying the attention maps, Attend-and-Excite (Bradner et al., 2016) proposed to iteratively modify the latent to achieve better attention maps, which they call "On the Fly Optimization". Intuitively, each subject should have an area of high attention value on its corresponding attention map, and the optimization objective encourages this. The loss function is designed as: \[\mathcal{L}=\max_{s\in S}\mathcal{L}_{s}\quad\text{where}\quad\mathcal{L}_{s}=1-\max(A_{t}^{s}) \tag{7}\] This loss function provides the direction in which to shift the latent \(z_{t}\); the latent update rule is \(z_{t}^{\prime}\gets z_{t}-\alpha_{t}\nabla_{z_{t}}\mathcal{L}\). After several iterations of this update, the latent should be shifted to one with the desired attention maps, i.e., at least one hot area should appear in each subject token's attention map. This \(z_{t}^{\prime}\) is then used as the input of the noise prediction U-Net to predict \(z_{t-1}\). So although this is called optimization, there is no fine-tuning of the model. Compared with the methods of the previous section, the methods of this section are normally cheap because they are training-free.
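As an illustration of this on-the-fly optimization, the following PyTorch sketch shows one latent update step. The `toy_attention_maps` function is a differentiable stand-in for the U-Net cross-attention maps (a real pipeline would read these out of the attention layers), so only the update logic of equation 7 and \(z_{t}^{\prime}\gets z_{t}-\alpha_{t}\nabla_{z_{t}}\mathcal{L}\) should be taken literally.

```python
import torch

def toy_attention_maps(z, token_embs):
    # Stand-in for the U-Net cross-attention maps: one spatial map per subject token.
    scores = token_embs @ z.reshape(z.shape[0], -1)   # (num_tokens, H*W)
    return torch.softmax(scores, dim=-1)

def attend_and_excite_step(z, token_embs, step_size=0.1):
    """One 'on the fly' latent update: z' = z - step_size * dL/dz, with L from eq. 7."""
    z = z.clone().requires_grad_(True)
    maps = toy_attention_maps(z, token_embs)
    per_token_loss = 1.0 - maps.max(dim=-1).values    # L_s = 1 - max(A_t^s)
    loss = per_token_loss.max()                       # L = max_s L_s
    loss.backward()
    with torch.no_grad():
        z_new = z - step_size * z.grad
    return z_new.detach(), loss.item()

z_t = torch.randn(4, 8, 8)           # toy latent
subject_tokens = torch.randn(2, 4)   # toy embeddings for two subject tokens
z_t, loss = attend_and_excite_step(z_t, subject_tokens)
print(loss)
```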
## 4. Rare Object or Unseen Object Generation Similar to LLMs, image generation models also encounter challenges when generating rare or unseen objects. To address this issue, a common approach is to leverage search engines. In the context of LLMs, if the model is asked about the final score of a recently completed basketball game, it typically resorts to a search engine, retrieves the relevant information online, and subsequently paraphrases it to generate the final answer. Similarly, image generation models can derive advantages from retrieved information to enhance their generation capabilities. ### Retrieval Based Methods The RDM model, as described in (Bradner et al., 2016), capitalizes on the capabilities of CLIP (Zhou et al., 2017). In this approach, the condition model utilizes CLIP (Zhou et al., 2017) encoders. During the training process, CLIP (Srivastava et al., 2015) is employed to retrieve neighboring images from a database. Subsequently, the CLIP embeddings of these retrieved images are used as condition inputs for the diffusion model. In the inference phase, the input is highly flexible. It is assumed that the text embedding should match the corresponding image embedding. Consequently, during inference, retrieval can be employed to obtain similar images, or the text embedding directly derived from the CLIP encoder can be used as input. Another related work, kNN-Diffusion (Srivastava et al., 2015), shares similarities with RDM. In this work, text is utilized to retrieve the top k nearest neighbor images from the database. These retrieved images, along with the text itself, are used as input during both the training and inference stages. Both RDM (Srivastava et al., 2015) and kNN-Diffusion (Srivastava et al., 2015) utilize retrieved images as condition inputs during their respective processes. On the other hand, Re-imagen (Krizhevsky et al., 2014) takes a different approach by conditioning its generation on both retrieved texts and images. For instance, as shown in fig 7, when generating based on the prompt "Two Chortai are running on the field," the retrieval of text is also taken into consideration. As a result, one of the top-n retrieved conditions could consist of both the text "Chortai is a breed of dog" and its corresponding image. This joint conditioning on both text and image retrieval allows Re-imagen to incorporate additional context and information, potentially leading to more contextually relevant and accurate generations. ### Subject Driven Generation The retrieval based methods discussed earlier have demonstrated the utilization of retrieved images as inputs to introduce novel concepts.
This approach can be generalized and extended to address another task known as subject-driven image generation, which is also referred to as concept customization or personalized generation. In subject-driven image generation, the main objective is to present an image or a set of images that represent a particular concept, and then generate new images based on that specific concept. Figure. 8 illustrates this concept, where four images of a single dog are provided as input, and the generated images are all centered around the concept represented by that particular dog. Subject-driven image generation allows for a more interactive and personalized image generation process, where users can input specific images that encapsulate the desired concept, leading to image generation that aligns closely with their intended subject or theme. This approach opens up exciting possibilities for customizing image generation outputs according to user-defined criteria and preferences. To incorporate subject-driven image generation into the existing framework, a feasible solution is to introduce a special token, often referred to as a unique identifier. This special token can be used in various ways, such as fine-tuning the embedding of the token on the given target concept images or part of the diffusion model. By doing so, the model can be trained to specialize or overfit on the provided concept, effectively embedding the concept's representation into this special identifier. Figure 5. Cross-attention maps of a text-conditioned diffusion image generation (Zhu et al., 2018). Figure 6. The Cross attention maps at different time steps, with and without correction. Prompt is “A striped tiger and a spotted leopard” (Zhu et al., 2018). Several methods have adopted this approach, including Textual Inversion (Wang et al., 2017), ELITE (Wang et al., 2017), Custom Diffusion (Wang et al., 2017), DreamBooth (Deng et al., 2017), E4T (Tang et al., 2017). They leverage fine-tuning or specialized training techniques to embed the target concept within the special token, allowing the model to generate images closely aligned with the given subject or theme. Imagic (Imai et al., 2019) approach utilizes a more intricate methodology but achieved better result. But still the model is finetuned. Indeed, many of the methods discussed earlier require fine-tuning the model based on the given images, which can be time-consuming and computationally expensive. However, there are alternative approaches that aim to avoid test-time training and streamline the generation process. InstantBooth (Wang et al., 2017) and FastComposer (Wang et al., 2018) utilize a different image encoder to encode the concept information, which is then combined with the text embedding. Subsequently, the diffusion model is fine-tuned to accommodate this new condition, allowing for concept-specific image generation without the need for extensive test-time training. FastComposer (Wang et al., 2018) takes this concept-driven generation a step further by enabling image generation based on multiple concepts and also learning their spatial locations. Blip-diffusion (Wang et al., 2017) is another work of blip-series (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), which is well-known for its "Q-former" connector used to align text and images. Blip-diffusion also employs this Q-former to transfer concept images to the text domain. 
The advantage of Blip-diffusion lies in its ability to perform "zero-shot" generation, as well as "few-shot" generation with minimal fine-tuning. By incorporating these innovative approaches, subject-driven image generation can be achieved efficiently and effectively, enabling the generation of images based on specified concepts without the need for extensive model training at test time. ## 5. Quality Improvement of Generated Images Current generated images need general improvement. Generating high-quality and photorealistic images remains a challenge, and it often necessitates long and specialized prompts to achieve satisfactory results. Moreover, generated human faces may exhibit distortions or possess an oil-painting-like quality, indicating the need for further advancements in the image generation process. While larger model sizes and datasets can indeed improve performance, researchers have explored alternative approaches beyond simply scaling up these factors to enhance the existing framework in a more effective and efficient manner. ### Improve from the Text Encoder The research conducted by Imagen (Imagen, 2017) emphasizes the significant role of the text encoder in text-to-image generation models. According to their findings, enhancing the performance of the text encoder or increasing its size holds greater importance than optimizing the U-net component. In their study, Imagen (Imagen, 2017) explored various sizes and types of text encoders, and the results indicated that the T5-XXL text encoder (Wang et al., 2017) outperformed smaller T5 text encoders and similar-sized CLIP text encoders. This improvement in performance was evident in the transition from SD 1.5 to SD 2.1, and also supported by experiments conducted on the GlueGen (Imai et al., 2019). These findings highlight the crucial influence of the text encoder in enhancing the overall performance and quality of text-to-image generation models. In the landscape of current multi-modal models, two prominent categories of text encoders are widely used: CLIP (Wang et al., 2017) based text encoder and pure language pretrained text encoder like Bert (Bert, 2016), Roberta (Roy et al., 2017), t5 (Wang et al., 2017). CLIP (Wang et al., 2017) CLIP is specifically trained to maximize the mutual information between text and images, endowing its text encoder with a distinct advantage in text and image-related tasks. However, it appears that this alignment training is the only advantage of CLIP. Works like (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), all mentioned that the text encoder and the image encoder alone did not have any advantage compared with single modal models. In fact, straightforward speaking, Roberta (Roy et al., 2017) may serve as a more effective text encoder, providing richer semantic information than CLIP (Wang et al., 2017). An alternative approach was taken by eDiff-i (Brock et al., 2018), which pursued a different path by employing not just one, but two text encoders: T5-XXL (Wang et al., 2017). Figure 8. Subject driven image generation (Deng et al., 2017). Figure 7. Pipeline of Re-imagen (Brock et al., 2018). and CLIP (Wang et al., 2017). Remarkably, this hybrid text encoder configuration achieved the best results, showcasing the potential benefits of incorporating multiple text encoders for improved performance in multi-modal tasks. StructureDiffusion (Han et al., 2017) takes a different direction from the pursuit of a better text encoder. 
Their methodology centers on analyzing the generated text embedding, under the assumption that the self-attention mechanism can introduce perturbations to the text embedding. Specifically, they identify the occurrence of attribute mixing, where certain attributes, such as colors, may inadvertently be associated with incorrect objects in the sentence, leading to inaccurate image generation. To address this issue, StructureDiffusion analyzes the sentence structure and isolates object-attribute pairs. These pairs are then passed through the same text encoder, and the corresponding embeddings in the original sentence are replaced with the uncorrupted embeddings from the isolated pairs. For example, in the sentence "a yellow apple and a green banana," the pairs "yellow apple" and "green banana" would be processed by the text encoder separately. The uncorrupted embedding from "yellow apple" would replace the original "yellow" attribute in the sentence, avoiding the mixing with "green." The experiments conducted by StructureDiffusion demonstrate a reduction in attribute mixing cases, highlighting the effectiveness of their approach in enhancing the quality and accuracy of generated images by addressing potential perturbations in the text embeddings. ### Mixture of Experts Mixture of experts (MOE) (Zhu et al., 2017) is a technique that leverages the strengths of different models, and it has been adapted for use in diffusion models to optimize their performance. The generation process in a diffusion model involves thousands of steps, and these steps vary in nature. In the initial steps, the generation progresses from noise to form a rough image, while in the later steps, details are added to refine the image. ERNIE-ViLG (Zhu et al., 2017) and eDiff-i (Beng et al., 2017) are two models that proposed to employ different models for different stages of the generation process. ERNIE-ViLG (Zhu et al., 2017) uniformly divided the whole process into several distinct stages, with each stage being associated with a specific model. On the other hand, eDiff-i (Beng et al., 2017) calculated thresholds to separate the entire process into three stages. The experiments conducted with ERNIE-ViLG (Zhu et al., 2017) indicate that increasing the number of separated stages leads to improvements in performance. This highlights the effectiveness of employing a mixture of experts approach in diffusion models, allowing for better utilization of different models at various stages of the generation process, and ultimately enhancing the quality of the generated images.
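A schematic sketch of such stage-based expert routing is given below; the three-stage uniform split and the expert names are illustrative assumptions, not the actual configuration of ERNIE-ViLG or eDiff-i.

```python
T = 1000  # total number of denoising steps

# Hypothetical stage-specific denoisers; a real system would hold separate U-Net weights per stage.
experts = {
    0: "detail-refinement expert (low-noise steps)",
    1: "mid-stage expert",
    2: "coarse-structure expert (high-noise steps)",
}

def route(t, num_stages=3):
    """Uniformly split the T steps into stages and pick the expert for step t."""
    stage = min(t * num_stages // T, num_stages - 1)
    return stage, experts[stage]

for t in (999, 500, 10):
    print(t, route(t))
```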
### Instruction Tuning for Human Preference The success of ChatGPT has sparked significant interest in instruction tuning (Han et al., 2017; Zhang et al., 2017), a methodology that naturally extends to image generation tasks due to the Markov-process nature of diffusion models. In the context of image generation, the core concept of instruction tuning involves the following steps: * Human-labeled data collection: The process begins with the collection of human-labeled data, forming a training set that reflects human preferences and quality assessments of generated images. This data serves as a crucial foundation for guiding the image generation process. * Reward model training: A reward model is then trained using the collected human-labeled data. This model aims to capture and represent the preferences and evaluation criteria of humans regarding image quality. It serves as a reference for assessing the desirability of generated images. * Reinforcement Learning (RL): Leveraging reinforcement learning techniques, the image generation process is guided by the reward model. The RL framework optimizes the overall preference and quality of the generated images by directing the generation process to maximize the rewards provided by the trained reward model. By following these steps, instruction tuning can effectively enhance the quality of generated images, aligning the output with human preferences and quality standards, and thus providing a more refined and satisfying image generation experience. Indeed, several works such as (Zhu et al., 2017; Zhu et al., 2017; Zhu et al., 2017) have trained their own reward models to assess human preference on images. Additionally, the aesthetic score predictor model released by LAION can serve as a reward model, as demonstrated in (Beng et al., 2017). Because the denoising process is already modeled as a Markov process, it is easy to fit it into an RL framework. The denoising process predicts \(x_{t-1}\) conditioned on the current \(x_{t}\), the time step \(t\), and the condition \(c\), i.e., \(p_{\theta}(X_{t-1}|X_{t},t,c)\); the Markov Decision Process can be modeled as: * State: \(X_{t}\), \(t\), \(c\) * Action: \(X_{t-1}\) or the noise added. * Reward: the output of the predefined reward model * Policy: \(p_{\theta}(X_{t-1}|X_{t},t,c)\) Given this formulation, any RL algorithm can be easily applied here.
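The sketch below illustrates this formulation with a REINFORCE-style update over a toy denoising trajectory; the reward model, the linear "policy network", and the fixed-variance Gaussian transition are placeholders for illustration, not the actual training setup of the cited works.

```python
import torch

def reward_model(x0, prompt):
    # Placeholder for a learned human-preference reward model (e.g. an aesthetic scorer).
    return torch.tensor(0.5)

policy_net = torch.nn.Linear(5, 4)  # toy stand-in for the denoising network

def policy_mean(x_t, t):
    return policy_net(torch.cat([x_t, t.view(1)]))

def sample_trajectory(prompt, T=10):
    """Roll out the MDP: state (x_t, t, c); action x_{t-1} ~ p_theta(. | x_t, t, c)."""
    x_t = torch.randn(4)  # toy "latent image"
    log_probs = []
    for t in reversed(range(1, T + 1)):
        dist = torch.distributions.Normal(policy_mean(x_t, torch.tensor(float(t))), 1.0)
        x_prev = dist.sample()                        # action: the next (less noisy) state
        log_probs.append(dist.log_prob(x_prev).sum())
        x_t = x_prev
    return x_t, torch.stack(log_probs)

opt = torch.optim.Adam(policy_net.parameters(), lr=1e-3)
x0, log_probs = sample_trajectory(prompt="a cat")
loss = -(reward_model(x0, "a cat") * log_probs.sum())  # REINFORCE: favor high-reward trajectories
opt.zero_grad()
loss.backward()
opt.step()
```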
### Sampling Quality Improvement Shifted Diffusion (Zhu et al., 2017) proposed another approach. Through a comprehensive study, they found that the distribution of the image latent domain is only a very small subset of the whole embedding space. They recognized that, in addition to noise removal, effectively guiding the samples towards this small subset of the image latent space is crucial for achieving high-quality image generation. To address this, they proposed the introduction of a shift term in the reverse process, giving rise to the method's distinctive name. After retraining the pipeline incorporating this modification, the experimental results demonstrated notable enhancements in the generated image quality. ### Self-Attention Guidance SAG (Zhu et al., 2017) also proposed an idea to improve sample quality. Their methodology involves a thorough reexamination of the DDPM (Zhu et al., 2017) process, leading to the introduction of a novel concept termed "general diffusion guidance." Even in unconditioned image generation scenarios, they propose that the input white noise inherently carries valuable information that can guide the generation process, potentially resulting in the appearance of specific objects like an apple in the generated image. In their framework, they apply Gaussian blur to certain areas of the prediction, selected according to the self-attention map, to extract this condition, which is then used to guide the image generation process effectively. Their research reveals a strong correlation between the self-attention map and the detailed parts of the image. Consequently, this method enables the generation of images with heightened detail, contributing to a notable improvement in sample quality. ### Rewrite the Prompt From the "Civitai" community, a very interesting conclusion is presented: if we write a very detailed prompt, the generated image quality can be improved dramatically. Detailed prompts here not only refer to a more detailed description of the objects, but also include image-quality related words like "4K" and "High resolution". People from the community have even systematically summarized how to write a good prompt. They suggest that a good prompt, for example to generate an image of a girl, should have the following components: * the subject: such as "a girl" * detailed description: such as the hair style, hair color, skin color, posture, clothing * the environment or background: such as "in a coffee shop" * the lighting: "warm sunshine" * image-quality related terms: "4k", "master piece" So, overall, the prompt becomes "a girl, with long straight hair, black hair, nike t-shirt, holding coffee in coffee shop, warm sunshine, 4k, master piece, detailed face, detailed hands". However, it is very hard for normal users to write such long and detailed prompts. Promptist (Krishnan et al., 2018) proposed to finetune a language model, GPT2 (Gordner et al., 2019) in this case, to rewrite the prompt and achieve better results, as illustrated in Figure 9. ## 6. Conclusion and Future Directions In this survey, we briefly introduce diffusion models, their current issues, and the proposed solutions. We organize the survey according to the three main challenges and solutions in text-to-image generation: multi-object generation, rare and unseen cases, and general quality improvement. However, it is important to note that image generation encompasses a broader scope beyond text-to-image generation. Some of the additional topics in image generation include: (1) Image editing: This task is straightforward. To modify an image, we can directly provide a prompt as an instruction (Dosov et al., 2017), such as changing the scene to nighttime. We can also interact directly with the image, for example by dragging, as shown in DragGAN (Zhu et al., 2018; Zhang et al., 2020) or Google's magic edit. Editing with a generative model is more natural and more efficient compared with traditional methods. (2) Inpainting: In this task, an image is given with part of it masked, and the goal is to generate the masked part conditioned on the given prompt and the surrounding image. Adobe's Generative Fill is the most famous work in this area; it is integrated into Photoshop. Future research in diffusion models should prioritize addressing the existing challenges and advancing the image generation process. While progress has been made, there are still gaps to be filled in order to achieve perfection in addressing the issues mentioned above. Inference time remains a significant concern when comparing diffusion models to GANs. Currently, few studies focus on tackling the challenges related to positional generation within the context of diffusion models. Concept customization quality is still not good enough; generally, to obtain a satisfactory image we still need many attempts, and the results are often not photorealistic. Additionally, preventing the generation of potentially discriminatory, toxic, or illegal content poses a considerable challenge. These areas necessitate the dedicated efforts of researchers to develop solutions and safeguards.
2310.09350
Unsupervised Domain Adaption for Neural Information Retrieval
Neural information retrieval requires costly annotated data for each target domain to be competitive. Synthetic annotation by query generation using Large Language Models or rule-based string manipulation has been proposed as an alternative, but their relative merits have not been analysed. In this paper, we compare both methods head-to-head using the same neural IR architecture. We focus on the BEIR benchmark, which includes test datasets from several domains with no training data, and explore two scenarios: zero-shot, where the supervised system is trained in a large out-of-domain dataset (MS-MARCO); and unsupervised domain adaptation, where, in addition to MS-MARCO, the system is fine-tuned in synthetic data from the target domain. Our results indicate that Large Language Models outperform rule-based methods in all scenarios by a large margin, and, more importantly, that unsupervised domain adaptation is effective compared to applying a supervised IR system in a zero-shot fashion. In addition we explore several sizes of open Large Language Models to generate synthetic data and find that a medium-sized model suffices. Code and models are publicly available for reproducibility.
Carlos Dominguez, Jon Ander Campos, Eneko Agirre, Gorka Azkune
2023-10-13T18:27:33Z
http://arxiv.org/abs/2310.09350v1
# Unsupervised Domain Adaption for Neural Information Retrieval ###### Abstract Neural information retrieval requires costly annotated data for each target domain to be competitive. Synthetic annotation by query generation using Large Language Models or rule-based string manipulation has been proposed as an alternative, but their relative merits have not been analysed. In this paper, we compare both methods head-to-head using the same neural IR architecture. We focus on the BEIR benchmark, which includes test datasets from several domains with no training data, and explore two scenarios: zero-shot, where the supervised system is trained in a large out-of-domain dataset (MS-MARCO); and unsupervised domain adaptation, where, in addition to MS-MARCO, the system is fine-tuned in synthetic data from the target domain. Our results indicate that Large Language Models outperform rule-based methods in all scenarios by a large margin, and, more importantly, that unsupervised domain adaptation is effective compared to applying a supervised IR system in a zero-shot fashion. In addition we explore several sizes of open Large Language Models to generate synthetic data and find that a medium-sized model suffices. Code and models are publicly available for reproducibility. ## 1 Introduction In recent years, there has been significant advancement in Information Retrieval (IR). IR is the process of retrieving relevant information from a collection of unstructured or semi-structured text data in response to a user's information need. With the explosion of digital content in recent years, IR has become an increasingly important field of research with applications in search engines Kobayashi and Takeda (2000), recommendation systems Gulzar et al. (2018), chatbots Yan et al. (2016), and more. Traditionally, IR systems used statistical models, such as BM25 Robertson and Zaragoza (2009), to represent documents and queries and measure their similarity. However, these models have limitations, such as the inability to capture semantic relationships between words, the requirement of exact matches between query and documents tokens - vocabulary mismatch Furnas et al. (1987)), and the lack of interpretability. In recent years, deep learning techniques, such as dual encoders, have emerged as a promising approach for IR. Dual encoders use two neural networks, one to encode the query and the other to encode the document, into high-dimensional vector representations. The similarity between the two vectors is then computed to rank the documents based on their relevance to the query. Dual encoders have several advantages over tra Figure 1: Experimental design: (left) a supervised retriever is trained with manual annotations from MS-MARCO; (middle) an unsupervised retriever is trained with automatically generated queries for MS-MARCO documents; (right) an unsupervised domain adaptation retriever is trained with both MS-MARCO manual annotations and automatically generated queries in-domain BEIR dataset documents. Evaluation is performed in BEIR producing two scenarios: zero-shot (left and middle retrievers); unsupervised domain adaptation (right retriever). ditional IR models. They can capture complex semantic relationships between words and phrases and learn from large-scale data, making them more adaptable to different domains. Additionally, they can be fine-tuned for specific tasks and easily integrated into existing systems. 
Recent studies have shown the effectiveness of dual encoders in various IR tasks, including passage retrieval Karpukhin et al. (2020), question answering Izacard and Grave (2021), and conversational search Wu et al. (2022). For example, the Dense Passage Retrieval (DPR) Karpukhin et al. (2020), one of the most common systems, uses dual encoders based on the BERT architecture Devlin et al. (2019). On the downside, neural IR models need to be trained on relatively large and costly datasets which include query and document pairs, e.g. MS-MARCO Nguyen et al. (2016). In many real-world applications, the performance of machine learning models degrade when deployed in a new environment or when applied to data that differ from the training data. This phenomenon, known as domain shift, can lead to poor generalization and decreased effectiveness. Domain adaptation (DA) techniques adapt a model that has been trained on one domain (or dataset) to work effectively on a different but related domain. DA is effective whenever there is a mismatch between the distribution of data on which a model was trained and the distribution of data on which it will be used in practice. In information retrieval, out-of-domain datasets usually lack the necessary annotations to support supervised domain adaptation. This makes it challenging to train models that can be effectively deployed in new domains. To overcome these challenges, unsupervised techniques to generate synthetic training data have become increasingly popular in recent years. The main idea is to take unlabeled documents from the domain and generate a query (or queries) for each document, resulting in an automatically annotated dataset. The most used techniques range from rule-based systems such as the Contriever cropping system Izacard et al. (2022) and the Inverse Cloze Task from LaPraDoR Xu et al. (2022), or leveraging generative Large Language Models (LLMs) to generate queries, such as Promptagator Dai et al. (2023) and InPars Bonifacio et al. (2022). Table 1 shows two examples of the synthetic data generated by the a LLM (left part) and a cropping rule-based method (right part). Although different question generation have been proposed, they have not been evaluated head-to-head, that is, we do not know which one is the best. In this paper, we propose a controlled evaluation setting to compare unsupervised query generation systems on equal terms, specially focused on a domain adaptation scenario. Our evaluation set-up is based on a well-known neural IR system (SBERT Reimers and Gurevych (2019)), which is kept as a fixed variable in all the experiments, and focuses solely on the impact of query generation systems. We follow standard practice and use MS-MARCO to train the base IR system Nguyen et al. (2016), as well as the BEIR benchmark for evaluationThakur et al. (2021). The methodological design (see Figure 1) has the following steps: (i) use a large annotated dataset (MS-MARCO) to train a base IR system; (ii) select a collection of out-of-domain IR datasets (BEIR); (iii) generate queries for the out-of-domain documents using different unsupervised approaches; (iv) fine-tune the base IR system with the synthetic queries and their corresponding documents for domain adaptation; (v) compare the performance using well-established metrics on BEIR. Our experiments show the following: * LLM outperforms rule-based query generation in terms of retrieval performance, indicating the superior capabilities of LLM in generating effective queries. 
* Unsupervised domain adaptation performance is improved by using LLM for generating queries, while the rule-based independent cropping system from Contriever fails. * In addition, we analyse the effect of increasing the amount of parameters of the query generation LLM for the IR module performance. ## 2 Related Work In this section, we will briefly review current IR systems and query generation methods. ### Retrieval models In the field of IR, documents and queries are typically represented as sparse vectors, with each element corresponding to a term in the vocabulary. BM25 Robertson and Zaragoza (2009) is a well-known ranking function that ranks documents based on query terms within a document, without considering the relationship between query terms. BM25 is a family of scoring functions, with different components and parameters. On the other hand, Dense Passage Retrieval (DPR) Karpukhin et al. (2020) and Sentence-BERT (SBERT) Reimers and Gurevych (2019) are retrieval methods that use a two-tower model architecture. The first encoder builds an index of all text passages, while the second encoder maps the input question to a vector and retrieves the top \(k\) passages with the closest vectors. The similarity of the vectors is calculated by using the dot product or cosine similarity. Moreover, they optimize the negative log-likelihood loss function to create a vector space where relevant pairs of questions and passages have higher similarity than irrelevant ones, using in-batch negatives as negative passages. The two most important differences are: (i) SBERT uses tied encoders (shared weights), whereas DPR uses two independent encoders; (ii) SBERT uses mean pooling to obtain the final vector, while DPR makes use of the \([CLS]\) token. Modern IR models allow fine-grained token-level interaction to improve the performance but with higher inference cost. Two of such models are ColBERT Khattab and Zaharia (2020) and SPLADE Formal et al. (2021). The main difference between DPR and ColBERT is in their approach to encode the document and query representations. ColBERT uses a joint space approach and a late interaction strategy, while DPR uses a dual-encoder architecture and a dense retrieval approach. ### Query generation systems Query generation is a crucial aspect of IR that aims to generate high-quality queries automatically for synthetic training data generation. Among the various approaches, Large Language Models as used by Promptagator Dai et al. (2023) and the rule-based independent cropping system from Contriever Izacard et al. (2022) are very effective widely used. Promptagator Dai et al. (2023) utilizes LLMs to generates task-specific queries by combining a prompt with a few examples from the domain (few-shot domain adaptation). The approach consists of two components: prompt-based query generation and consistency filtering. Prompt-based query generation is performed using FLAN, a LLM with 137 billion parameters not publicly available. The generated queries are then filtered based on consistency, that is, a query should be answered by the passage from which the query was generated. Another system that makes use of LLMs is InPars Bonifacio et al. (2022). This paper makes use of BM25 to retrieve the top 1000 candidate documents, and then a monoT5 Nogueira et al. (2020) neural reranker is used to reorder the documents. Reranking is performed using unsupervised generated queries by GPT-3 Brown et al. (2020). In contrast, Contriever Izacard et al. 
(2022) proposes a rule-based method that uses independent cropping as a data augmentation method for text. In addition, it also applies word deletion, replacement, or masking. From now on, we are going to \begin{table} \begin{tabular}{p{142.3pt} p{142.3pt}|p{142.3pt} p{142.3pt} p{142.3pt}} \hline \hline **Original Document** & **LLM Query** & \begin{tabular}{c} **Rule-based string manipulation** \\ (random cropping) \\ \end{tabular} & \begin{tabular}{c} **Final Document** \\ (random cropping + \\ words deletion) \\ \end{tabular} & \begin{tabular}{c} **Query** \\ (random cropping) \\ \end{tabular} & \begin{tabular}{c} **Final Query** \\ (random cropping + \\ words deletion) \\ \end{tabular} \\ \hline Color hex is a easy to use tool to get the color codes information including color models (RGB_HJLS_HSV and CMYK), cs and html color codes. & get the color codes informationcolor codes (RGB_HJLS_HSV and CMYK) & color codes informationcolor codes (RGB_HJLS_HSV and CMYK) & Color hex is a easy to use tool to use tool & Color hex tool to use tool \\ \hline Although the European powers did make military interventions in Latin America from time to time after the Monroe Doctrine was announced, the Americans did not look for war. They did, however, use the doctrine as justification for taking Texas in 1842 under President John Tyler. & The Americans did not look for war. the doctrine justification taking Texas & Americans look for war. the doctrine justification taking to time after the Monroe Doctrine & military interventions in Latin after Monroe \\ \hline \hline \end{tabular} \end{table} Table 1: Synthetic data (document and query) produced using two methods for two sample documents. The query generated by the LLM (OPT-2.7B) from each document is paired with the original document, i.e. the synthetic dataset comprises the two leftmost columns. The rule-based method generates both synthetic document and query for each original document in two steps: randomly crop the input and then delete random words. In this case, the synthetic dataset comprises the fourth and sixth columns. refer to the independent cropping from Contriever as _independent cropping_ or _cropping_ for short. There exist several more techniques, such as using the Inverse Cloze Task (ICT) Lee et al. (2019), used by LaPraDoR Xu et al. (2022), where a passage is broken down into multiple sentences and one sentence is used as a query while the remaining ones are joined and used as the document. On the other hand, GenQ Thakur et al. (2021) trains a T5 Raffel et al. (2020) model on MS-MARCO for 2 epochs and then generates 5 queries per document using different sampling methods. These two methods are not utilized in our study. GenQ is a supervised approach, and therefore, does not fall under the scope of our investigation. On the other hand, ICT has exhibited lower performance compared to independent cropping, making it unsuitable for our purposes. The two highly promising techniques, Large Language Models (LLMs) and rule-based methods, have been extensively employed either independently or in combination. Nevertheless, the question of which approach is more effective remains unresolved as no direct head-to-head comparisons have been conducted to date. ## 3 Dataset Description and Generation In this section, we discuss how we applied independent cropping and LLMs to generate questions for a given collection of unlabelled documents. Contriever Izacard et al. 
(2022) proposes to use independent cropping when unlabelled data is supplied as training data. Independent cropping is a common independent data augmentation used for images, where views are generated independently by cropping the input. In natural language processing, it would be equivalent to sampling a span of tokens. Furthermore, in addition to independent cropping, they also consider different data augmentation methods such as word deletion, replacement, or masking, with word deletion being the one that works best. Following their experiments, we apply independent cropping and word deletion to obtain query-document pairs. On the other hand, LLMs can be used to generate questions in an unsupervised way (zero-shot question generation). In order to generate questions, we need to make use of a prompt. We selected the vanilla version of the prompts used by InPars Bonifacio et al. (2022). To include diversity in the generated questions, we apply top-\(p\) (nucleus) sampling, that is, only the smallest set of most probable tokens whose probabilities add up to \(p\) or higher is kept for generation. We set \(p\) to 0.9 based on prior work and did not try any other value. The Open Pretrained Transformer (OPT) Zhang et al. (2022) is used as the model in this paper for several reasons. Firstly, it is an open-source model. Secondly, it has a variety of checkpoints available, ranging from small (125 million parameters) to large models (175 billion parameters), enabling us to measure how the quality of generated queries scales with the number of parameters. Lastly, our experimental results showed that OPT performed as well as other models such as Bloom Scao et al. (2023), GPT-neoX Black et al. (2022), and GPT-neo Black et al. (2021), making it a viable option for our study. Further details about the use of Large Language Models for query generation can be found in Appendix A. Examples of queries generated using LLMs and independent cropping generation methods can be found in Table 1. The table shows how query-document pairs are generated by each method. LLMs produce queries that are more akin to human language and are therefore simpler to comprehend. In contrast, independent cropping generates strings based on the content of the document. Based on our analysis, we believe that LLMs produce higher quality document-query pairs overall. ## 4 Experimental setting In this section, we explain the models used for the experiments, the metrics used to measure the performance, and the benchmark used to compare the performance of the systems. ### Models: IR model Recent papers make use of bi-encoder architectures to train their systems. Two such techniques are SBERT and DPR, which have shown promising results. We experimented with both, and found that SBERT outperforms DPR. Experiments comparing SBERT and DPR can be found in Appendix B.1. Following this trend, we train a two-tower architecture based on SBERT that makes use of a neural network to obtain embedding representations. This is done in two steps: first the neural network maps query features \(x_{query}\) to query embeddings \(\psi(x_{query})\), and then it maps context features \(x_{context}\) to context embeddings \(\psi(x_{context})\). The output of the model is the cosine similarity of \(<\psi(x_{query}),\psi(x_{context})>\), which can be seen as the similarity between the query and the context. Following the SBERT model, we also use DistilBERT Sanh et al. (2019) as the base model. 
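To make this two-tower scoring concrete, the following is a minimal sketch using the sentence-transformers library. The model name, example strings, and the reliance on the library's default mean-pooling wrapper are illustrative assumptions and do not reproduce the exact training configuration described in this paper.

```python
from sentence_transformers import SentenceTransformer, util

# A tied (shared-weight) bi-encoder: the same network embeds queries and documents.
# Passing a plain Hugging Face model name wraps it with a pooling layer (mean pooling).
model = SentenceTransformer("distilbert-base-uncased")

query = "how are hex color codes defined"  # placeholder query
documents = [
    "Color hex is an easy to use tool to get the color codes information.",
    "The Monroe Doctrine was announced in 1823.",
]

# Encode both sides with the same encoder and rank documents by cosine similarity.
q_emb = model.encode(query, convert_to_tensor=True)
d_emb = model.encode(documents, convert_to_tensor=True)
scores = util.cos_sim(q_emb, d_emb)        # shape: (1, number of documents)
best = scores.argmax().item()
print(documents[best], scores[0, best].item())
```

At training time the same encoder is fine-tuned contrastively on query-document pairs, as described next.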
Furthermore, we employ mean pooling to obtain the final vector representation, which has been shown to outperform the use of the \([CLS]\) token on different Natural Language Processing downstream tasks Choi et al. (2021). Bi-encoder models are trained contrastively using in-batch negatives, that is, for each question in the batch, we will consider the positive passages of the other questions as the negative passages. For a batch of size \(B\), we will have for each question 1 positive passage and \(B-1\) negative passages. To ensure reproducibility, the following hyper-parameters were used to train the model: _distilbert-base-uncased_ was chosen as the base model instead of BERT due to its similar results in preliminary experiments and faster training and evaluation. The batch size was set to 256 and the number of epochs to 10. For model selection, the epoch with the highest nDCG@10 metric in the development dataset was chosen. In unsupervised DA, an additional 1000 examples were generated to create the development dataset. ### BEIR Benchmark In this paper, we make use of **B**enchmarking-**IR** (BEIR). BEIR is a robust and heterogeneous evaluation benchmark for IR, comprising 18 retrieval datasets for comparison and evaluation of model generalization. BEIR is focused on diversity, that is, the benchmark includes nine different retrieval tasks: fact checking, citation prediction, duplicate question retrieval, argument retrieval, news retrieval, question answering, tweet retrieval, biomedical IR, and entity retrieval. Furthermore, datasets from diverse domains are included to cover broad topics like Wikipedia, specialized topics like COVID-19, different text types like news topics and tweets, and datasets with different sizes, query lengths, and document lengths. This benchmark is used to measure the performance of our systems and to obtain comparable results. Additionally, the benchmark can be utilized in two distinct manners: zero-shot evaluation, which involves training a system on a specific dataset (typically MS-MARCO) and evaluating its performance using all the BEIR test datasets; and unsupervised domain adaptation, where we use the documents of one of the datasets to generate queries and create training and development splits, then we train an IR model, and finally we evaluate it using the BEIR test dataset. In this paper, we will apply unsupervised domain adaptation to all the datasets on the BEIR benchmark independently. In the case of the CQADupStack benchmark Hoogeveen et al. (2015), we train a model for each dataset as the StackExchange community covers a wide variety of topics such as android, english, gaming, programming, and so on. ### Metrics Typical classification and regression metrics measure whether the predicted value is close to the actual value. Unfortunately, these metrics do not take into consideration the order of prediction, which is important in IR, as it is not the same to find the answer in the first document as in the last one. To evaluate an IR system we need to measure how relevant the results are and how good the ordering is. The most widely used metric in IR systems, also used in BEIR, is the nDCG@K metric, more specifically nDCG@10 (K=10). Izacard et al. (2022) claim that nDCG@K is good at evaluating rankings returned to humans, for example in a search engine, but that Recall@100 is relevant to evaluate retrievers that are used in machine learning systems, such as question answering. 
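To make the two metrics concrete, the following is a minimal sketch of how nDCG@k and Recall@k can be computed for a single query with binary relevance judgments. This is a simplified illustration of the standard definitions; the scores reported in this paper come from the official BEIR evaluation tooling.

```python
import math

def dcg_at_k(relevances, k):
    # relevances: binary relevance of the ranked documents, best-first
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, num_relevant, k=10):
    ideal = [1] * min(num_relevant, k)  # an ideal ranking puts all relevant documents first
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(ranked_relevances, k) / idcg if idcg > 0 else 0.0

def recall_at_k(ranked_relevances, num_relevant, k=100):
    return sum(ranked_relevances[:k]) / num_relevant if num_relevant else 0.0

# Example: 3 relevant documents exist; the system ranks one at position 1 and one at position 4.
ranking = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]
print(ndcg_at_k(ranking, num_relevant=3))    # ~0.67
print(recall_at_k(ranking, num_relevant=3))  # ~0.67
```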
These two metrics are important as humans only check the first results provided, meaning that the order in which the documents are returned is important (nDCG@10 metric). On the other hand, question answering systems do not consider the number or order of the documents passed, making Recall@100 more informative. In this paper, we provide both metrics. ## 5 Experiments and Results We train three different types of systems, namely Marco-Supervised, Marco-Unsupervised, and Marco-with-Unsupervised-DA. For the unsupervised setting we have two variants, LLM and independent cropping, depending on the method employed for query generation. We follow usual practice and train a supervised version of the system using the MS-MARCO dataset, named Marco-Supervised, which will be used as the base system for unsupervised DA. We leverage this supervised version as a starting point because, for real-world applications, this dataset is publicly available and yields strong performance (Nguyen et al., 2016). Therefore, we refrain from conducting experiments involving completely unsupervised methodologies, such as Marco-Unsupervised + Unsupervised Domain Adaptation, or training only on BEIR datasets. Two unsupervised DA systems are trained, named \(DA_{LLM}\) for queries generated using Large Language Models and \(DA_{Cropping}\) for queries generated using the independent cropping generation method. Both of these systems fine-tune the Marco-Supervised system. For independent cropping, the queries are generated using the default parameters found in the official implementation. In the case of LLM we need to evaluate which checkpoint is the best one. **Choosing the best checkpoint for zero-shot query generation using LLM** In order to choose which OPT checkpoint needs to be used for query generation, we perform experiments on all the available checkpoints. Figure 2 compares the average performance of the different OPT checkpoints on the BEIR benchmark in a zero-shot setting. According to the figure, the average scores for the 2.7B, 6.7B, and 13B checkpoints are similar, with the 6.7B checkpoint performing the best, followed by the 2.7B checkpoint with a difference of only 0.131 points. However, the scores reach a plateau after the 2.7B checkpoint. Considering that (i) the difference between the 2.7B and 6.7B checkpoints is negligible and (ii) the time required to generate the datasets is almost halved when using the 2.7B checkpoint (see Figure 4), we decide to use the 2.7B checkpoint for DA. The scores obtained for each dataset can be found in Appendix A.3. **Results** The results comparing the zero-shot and unsupervised domain adaptation performance can be found in Table 2. The results for the Recall@100 metric can be found in Appendix C.1. **Overall results for zero-shot performance** The strong results of the unsupervised system trained using the queries generated by the LLM are noteworthy, as they are less than a point below those of the supervised system trained on MS-MARCO. This confirms the validity of using queries generated by LLMs. On the other hand, the results for the unsupervised system trained with cropping are much worse. **Overall results for unsupervised domain adaptation** Regarding unsupervised domain adaptation, the \(DA_{LLM}\) method outperforms the Marco-Supervised approach on average by 3.356 points in the nDCG@10 metric. In the case of \(DA_{Cropping}\), the system performs 13.782 points worse, but it is still better than the Marco-Unsupervised-Cropping approach. 
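A minimal sketch of the zero-shot query generation step described in Section 3, using the Hugging Face transformers interface to an OPT checkpoint, is shown below. The prompt is a simplified illustration rather than the exact vanilla InPars prompt used in this work, and decoding parameters other than top-p are assumptions.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "facebook/opt-2.7b"  # the checkpoint selected for domain adaptation
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

document = "Color hex is an easy to use tool to get the color codes information."
# Simplified illustrative prompt; the paper uses the vanilla InPars prompt with examples.
prompt = f"Document: {document}\nRelevant question:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,   # nucleus (top-p) sampling to diversify the generated queries
    top_p=0.9,        # value used in the paper
    max_new_tokens=32,
)
# Keep only the newly generated tokens, i.e. drop the prompt.
query = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(query.strip())
```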
The experiments above demonstrate that unsupervised domain adaptation can enhance performance, especially when the LLM method is used. To summarize, we conclude that LLM works better than independent cropping, and the difference between the Marco-Supervised and Marco-Unsupervised approaches is negligible. However, when unsupervised domain adaptation is used with an LLM, the performance can be improved by 3 extra points. In addition to providing the average metric, we also report the _wins_ metric to better evaluate the performance of the system. The average metric may not be the most appropriate in certain cases, as datasets can vary in size and domain, and one dataset with exceptional performance can heavily influence the overall average, even if other datasets perform poorly. Figure 2: Results on BEIR of the unsupervised LLM system for the zero-shot scenario. The average performance of OPT LLMs of different sizes is shown. Our evaluation reveals that \(DA_{LLM}\) outperforms the other systems in 9 out of 14 datasets and ranks as the second best in the remaining 5 datasets. In contrast, the systems using independent cropping fail to achieve a win in any of the datasets. ## 6 Analysis of query generation In this section, we analyze the queries generated by both independent cropping and LLM. ### Questions lost during generation Figure 3 shows the queries lost during generation for the MS-MARCO dataset. We postprocess the dataset taking into account empty queries (the system is not able to generate a query) and same prompt queries (the system generates the same query provided in one of the examples of the prompt). In the figure, we can see that the number of lost questions decreases as the number of parameters increases. Nevertheless, from the 2.7B checkpoint onwards, we can see that the number of lost questions increases, mainly because no query is generated, but also because the same query is generated as one of the examples of the prompt. \begin{table} \begin{tabular}{c|c c c|c c} \hline \hline & \multicolumn{3}{c|}{**Zero-shot**} & \multicolumn{2}{c}{**Domain Adaptation**} \\ & **Marco- Supervised** & \multicolumn{2}{c|}{**Marco-Unsupervised**} & \multicolumn{2}{c}{**Unsupervised DA**} \\ & **Supervised** & **LLM** & **Cropping** & **LLM** & **Cropping** \\ \hline Wins & 3 & 2 & 0 & **9** & 0 \\ Average & 37.31 & 36.37 & 18.98 & **40.47** & 23.53 \\ \hline TREC-Covid & 45.51 & 41.01 & 15.09 & **56.75** & 38.27 \\ NFCorpus & 27.09 & **32.24** & 21.94 & 28.34 & 23.99 \\ Natural Questions & **34.36** & 26.00 & 10.04 & 27.87 & 11.99 \\ HotpotQA & 46.33 & 45.61 & 9.61 & **48.19** & 22.10 \\ FiQA & 22.31 & 24.70 & 7.65 & **25.93** & 10.45 \\ ArguAna & 42.20 & 45.02 & 33.77 & **53.96** & 41.63 \\ Touche-2020 & 13.87 & 11.01 & 2.82 & **21.26** & 2.61 \\ CQAdupstack & 27.62 & 26.66 & 17.81 & **32.17** & 18.53 \\ Quora & 82.90 & 80.37 & 72.00 & **84.60** & 66.37 \\ DBpedia & 30.77 & 30.56 & 10.77 & **32.51** & 14.04 \\ Scidocs & 12.97 & 13.51 & 7.32 & **15.20** & 8.49 \\ Fever & **63.37** & 50.73 & 7.05 & 60.47 & 8.75 \\ Climate-Fever & **21.64** & 21.33 & 2.83 & 21.48 & 10.13 \\ Scifact & 51.42 & **60.39** & 47.02 & 57.80 & 52.04 \\ \hline \hline \end{tabular} \end{table} Table 2: Results for all five systems on each dataset of BEIR for the two scenarios. Three systems (one supervised and two unsupervised) in the zero-shot scenario, and two unsupervised DA systems (DA scenario). Number of wins in each dataset and average performance are also shown. 
In each row, bold for best system, underline for second best. Figure 3: Percentage of questions lost during inference from a corpus of 8,841,661 documents (MS-MARCO). In the case of independent cropping, we see that a small fraction of queries (0.02%) is lost, probably because short documents are used and random word deletion removes the remaining words. ### Top 10 starting words In English, wh- words are used the most for question formation. This is because they are interrogative pronouns or adverbs that are used to ask for information. The wh- words include what, who, whom, whose, which, when, where, why and how. They are versatile and can be used to ask about a wide range of information, such as the identity of a person, the location of an event, the reason for an action, or the manner in which something is done. As a result, wh- words are essential for forming questions and seeking knowledge in the English language. In Table 3, we can see the top 10 starting words for all the OPT checkpoints. Overall, we can see that the same starting words are used in all the checkpoints, with the word "what" being used the most and its frequency increasing with the number of parameters. ### Question generation time The time required to generate a question using large language models (LLMs) typically increases with the number of parameters in the model. This is because LLMs with more parameters require more computational power to generate each query, and therefore take longer to generate questions. This can be a bottleneck for real-world applications that require fast and efficient question generation, especially when dealing with large datasets or when generating a large number of queries. Thus, finding ways to balance the trade-off between model size and computational efficiency is an important consideration when using LLMs for question generation. In this section, we measure the time required to generate questions using all the OPT checkpoints from 125 million to 13 billion parameters. Figure 4 shows the time required to generate each dataset using 8 Nvidia A100 GPUs with 80 GB of VRAM. ## 7 Conclusions and Future Work In this paper, we have explored the use of Large Language Models (LLMs) for synthetic query generation and compared them to the rule-based independent cropping method for unsupervised domain adaptation. Our results show that LLM-based methods outperform independent cropping in all scenarios by a significant margin. Although LLMs require more time, the benefits they offer in terms of performance make them a viable alternative for query generation. We also studied the impact of the number of parameters on the performance of our system when an LLM is used for generating queries. Our experiments indicate that beyond a certain point, increasing the number of parameters in our models did not lead to any significant increase in performance. Specifically, we found that the performance of our models did not improve beyond 2.7B parameters. Furthermore, we conducted our experiments using open-source LLMs, which allowed us to compare our results with those of other researchers and ensure reproducibility. Finally, we demonstrated that unsupervised domain adaptation is an effective approach for improving the performance of neural IR systems, as compared to zero-shot learning. Our experiments showed that fine-tuning a supervised system on synthetic data from the target domain leads to significant performance improvements. 
For future work, we plan to explore the use of other sampling techniques and strategies for query generation using LLMs. Additionally, we will investigate different query generation techniques to further improve our system's performance. We also plan to study the impact of other types of sentence similarities on the performance of our models. Furthermore, we will examine the use of retrievers with more parameters and the use of rerankers to improve performance. Figure 4: Graph showing the time required for generating the dataset using different OPT checkpoints. To generate the dataset, we are making use of 8 Nvidia A100 GPUs with 80 GB of VRAM each. ## Acknowledgements Carlos is funded by a PhD grant from the Spanish Government (FPU21/02867). This work is partially supported by the Ministry of Science and Innovation of the Spanish Government (AWARE project TED2021-131617B-I00, DeepKnowledge project PID2021-12777OB-C21), and the Basque Government (IXA excellence research group IT1570-22).
2301.10140
The Semantic Scholar Open Data Platform
The volume of scientific output is creating an urgent need for automated tools to help scientists keep up with developments in their field. Semantic Scholar (S2) is an open data platform and website aimed at accelerating science by helping scholars discover and understand scientific literature. We combine public and proprietary data sources using state-of-the-art techniques for scholarly PDF content extraction and automatic knowledge graph construction to build the Semantic Scholar Academic Graph, the largest open scientific literature graph to-date, with 200M+ papers, 80M+ authors, 550M+ paper-authorship edges, and 2.4B+ citation edges. The graph includes advanced semantic features such as structurally parsed text, natural language summaries, and vector embeddings. In this paper, we describe the components of the S2 data processing pipeline and the associated APIs offered by the platform. We will update this living document to reflect changes as we add new data offerings and improve existing services.
Rodney Kinney, Chloe Anastasiades, Russell Authur, Iz Beltagy, Jonathan Bragg, Alexandra Buraczynski, Isabel Cachola, Stefan Candra, Yoganand Chandrasekhar, Arman Cohan, Miles Crawford, Doug Downey, Jason Dunkelberger, Oren Etzioni, Rob Evans, Sergey Feldman, Joseph Gorney, David Graham, Fangzhou Hu, Regan Huff, Daniel King, Sebastian Kohlmeier, Bailey Kuehl, Michael Langan, Daniel Lin, Haokun Liu, Kyle Lo, Jaron Lochner, Kelsey MacMillan, Tyler Murray, Chris Newell, Smita Rao, Shaurya Rohatgi, Paul Sayre, Zejiang Shen, Amanpreet Singh, Luca Soldaini, Shivashankar Subramanian, Amber Tanaka, Alex D. Wade, Linda Wagner, Lucy Lu Wang, Chris Wilhelm, Caroline Wu, Jiangjiang Yang, Angele Zamarron, Madeleine Van Zuylen, Daniel S. Weld
2023-01-24T17:13:08Z
http://arxiv.org/abs/2301.10140v1
# The Semantic Scholar Open Data Platform ###### Abstract The volume of scientific output is creating an urgent need for automated tools to help scientists keep up with developments in their field. Semantic Scholar (S2) is an open data platform and website aimed at accelerating science by helping scholars discover and understand scientific literature. We combine public and proprietary data sources using state-of-the-art techniques for scholarly PDF content extraction and automatic knowledge graph construction to build the Semantic Scholar Academic Graph, the largest open scientific literature graph to-date, with 200M+ papers, 80M+ authors, 550M+ paper-authorship edges, and 2.4B+ citation edges. The graph includes advanced semantic features such as structurally parsed text, natural language summaries, and vector embeddings. In this paper, we describe the components of the S2 data processing pipeline and the associated APIs offered by the platform. We will update this living document to reflect changes as we add new data offerings and improve existing services. ## 1 Introduction Semantic Scholar1 (S2), was launched in 2015 by the Allen Institute for Artificial Intelligence2 (AI2) to help scholars combat information overload and more efficiently discover and understand the most relevant research literature. Through a growing number of partnerships with scientific publishers and preprint services, Semantic Scholar has built a comprehensive and open corpus of scientific publications as a public service. While the Semantic Scholar website provides many features, such as automatically-generated author pages, personalized libraries, and paper recommendations, most of the site's data and functionality is also available via data download, open-source libraries, and API services. Footnote 1: [https://semanticscholar.org](https://semanticscholar.org) Footnote 2: [https://allenai.org](https://allenai.org) In this paper, we overview the primary technology used to build the corpus, and the API services and downloads we provide to access it. We hope that the resources described in this article can further accelerate a variety of work that depends critically on high-quality scholarly data. Research into scientific NLP and science of science benefit directly from such resources. More generally, we are enabling the development of new applications that help scientists discover and understand the literature of their field. The need for timely and comprehensive scholarly data has become more imperative since the 2021 sunsetting of the Microsoft Academic Graph (MAG) [2], which was long a standard source for scholarly data in the community. The remainder of the paper proceeds as follows. In Section 2, we give an overview of the data and services that comprise the platform. In Section 3, we describe the platform's data processing pipeline. In Section 4, we describe our public APIs and downloadable datasets. In Section 5, we discuss related work. In Section 6, we conclude and discuss future work. ## 2 Platform Overview The purpose of the Semantic Scholar Open Data Platform is to build and distribute the Semantic Scholar Academic Graph, or S2AG (pronounced "stag"). S2AG is a disambiguated, high-quality, bibliographic knowledge graph. The nodes of S2AG represent papers, authors, venues, and academic institutions. The edges represent papers written by an author, papers cited by another paper, papers published in a venue, and authors affiliated with an institution. 
S2AG is built by ingesting a variety of data sources into our data processing pipeline, and is distributed via a variety of publicly-available APIs and datasets, as illustrated in Figure 1. The work we describe here is an update of a previous version of the Semantic Scholar data pipeline described in Ammar et al. (2018). The core structure of the graph (authors, papers and their relationships) is the same, although the components that build the knowledge graph have been refined. The entity extraction and linking described in Ammar et al. (2018) have been deprecated in favor of the semantic features described below. Table 1 summarizes the overall size of S2AG as of January 2023.3 S2AG covers many natural, physical, and social sciences. Table 2 summarizes the number of paper records in different academic fields of study. Some papers are assigned to multiple fields, whereas many are unclassified ("n/a"), due to lack of sufficient information. Footnote 3: We are populating the author-affiliation links at the time of writing, so we have given the expected eventual number. ## 3 Data Processing Pipeline At the core of Semantic Scholar lies a sophisticated data processing pipeline that continually ingests documents and metadata from numerous sources, extracting full text and metadata from PDFs, normalizing and disambiguating authors, institutions, and venues, classifying each paper's field of study, generating a textual summary of its key results, and more. The pipeline builds S2AG by ingesting academic paper metadata and PDF content from a variety of _Data Sources_. A _PDF Content Extraction_ system extracts structured data from unstructured PDFs. The extracted content, along with structured metadata from the input sources, is processed by a series of _Knowledge Graph Construction_ systems that build S2AG. A set of models add _Semantic Features_ to the graph, such as paper summarization and vector embeddings. Many components of our pipeline are available as open software or models. Table 3 provides a high-level summary alongside links to publicly available code, models and datasets, where applicable. Many of these also have an associated published research article, to which we refer readers seeking further details. ### Data Sources The pipeline has more than 50 input sources. They include non-profit organizations such as Crossref, preprint servers such as arXiv, academic publishers through negotiated agreements, and our own internet web crawler. Sources may provide metadata in the JATS4 format, or a variety of proprietary formats. Data may be pushed or pulled via FTP, fetched via an API, or downloaded in bulk via HTTP. Most sources are updated daily. An important additional source is human-created data. Users of the web site can claim the identity of an author in our corpus and manually curate the papers associated with that author. Our team also makes manual corrections in response to email requests from users. The first task of the pipeline is to fetch the latest data from each source and parse it into a normalized format. Sources typically provide limited information about a paper in structured form: the title, author names, venue, and date, often linked to a PDF file. Footnote 4: [https://jats.nlm.nih.gov/](https://jats.nlm.nih.gov/) ### PDF Content Extraction We augment the structured metadata by parsing the unstructured information in a paper's PDF into structured form, resulting in fine-grained information about the complete text. 
A critical output of our PDF content extraction is a structured bibliography from which we can construct the citation graph. Figure 1: Illustration of the Semantic Scholar platform Other important outputs include section headers, paragraphs, figures and tables, and inline references to figures, tables, and bibliography entries. PDFs are fundamentally a print-format specification, and extracting structured information from them is difficult and subject to errors. Nevertheless, they are the de-facto representation of a paper's official content, and we have invested heavily in our extraction technology. There are three steps to PDF content extraction: _Text Extraction_, _Visual Region Annotation_, and _Text Span Annotation_. _Text Extraction_ is the process of converting the set of commands that indicate where characters should appear on the page into plain text, i.e., inferring word boundaries and word order. For this, we use existing open-source toolkits, including pdfalto,5 PDF-Plumber,6 and/or PDFMiner.7 Footnote 5: [https://github.com/kermitt2/pdfalto](https://github.com/kermitt2/pdfalto) Footnote 6: [https://github.com/jsvine/pdfplumber](https://github.com/jsvine/pdfplumber) Footnote 7: [https://github.com/pdfminer/pdfminer.six](https://github.com/pdfminer/pdfminer.six) Footnote 8: [https://github.com/kermitt2/grobid](https://github.com/kermitt2/grobid) Footnote 9: [https://github.com/allenai/science-parse](https://github.com/allenai/science-parse) It is possible to extract decent-quality content using only the plain text from the Text Extraction step. For this, we have used Grobid8 as well as our own extractor called ScienceParse.9,10 Our latest pipeline uses visual recognition, described below, to improve the quality of the extraction. Footnote 10: [https://github.com/allenai/spv2](https://github.com/allenai/spv2) Footnote 11: [https://poppler.freedesktop.org/](https://poppler.freedesktop.org/) The _Visual Region Annotation_ step first generates a visual image for each page, using poppler.11 Within each page, we run an object detection model12 from the LayoutParser (Shen et al., 2021) library, which identifies bounding boxes and labels each one with visual layout categories such as figure, table, paragraph. The code, model, and data are available at [https://github.com/Layout-Parser/layout-parser](https://github.com/Layout-Parser/layout-parser). Footnote 12: EfficientDet (Tan et al., 2020) model trained on PubLayNet (Zhong et al., 2019) _Text Span Annotation_ is the process of giving semantic labels to tokens from the Text Extraction step. For this, we use the I-VILA model trained on the S2-VL dataset, both introduced in Shen et al. (2022), to tag tokens with categories such as title, author name, section header, body text, figure caption, bibliography, etc. The code, model, and data are available at [https://github.com/allenai/VILA](https://github.com/allenai/VILA). Visual Region and Text Span annotations are converted into structured data (e.g., section headers are associated with section content, figures are associated with their captions, and bibliography entries are decomposed into their constituent elements) using the PaperMage library, which makes its code, models and data available at [https://github.com/allenai/papermage](https://github.com/allenai/papermage). The final output of the PDF extraction pipeline is a structured data object suitable for encoding in JSON format. 
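For illustration only, the sketch below shows the general shape such a structured object might take, written here as a Python dictionary. The field names are hypothetical and do not reflect the exact S2ORC or PaperMage schema.

```python
# Hypothetical field names, for illustration only; not the actual S2ORC/PaperMage schema.
extracted_paper = {
    "title": "The Semantic Scholar Open Data Platform",
    "authors": [{"name": "Rodney Kinney"}, {"name": "Chloe Anastasiades"}],
    "abstract": "The volume of scientific output is creating an urgent need ...",
    "sections": [
        {
            "header": "Introduction",
            "paragraphs": [
                {
                    "text": "Semantic Scholar (S2) was launched in 2015 ...",
                    "citation_mentions": [{"char_span": [0, 24], "bib_entry_id": "b12"}],
                }
            ],
        }
    ],
    "figures": [{"caption": "Illustration of the Semantic Scholar platform"}],
    "bibliography": [
        {"id": "b12", "title": "Construction of the Literature Graph in Semantic Scholar",
         "authors": ["Waleed Ammar"], "year": 2018}
    ],
}
```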
This object includes core metadata such as title, authors, and abstract, as well as the full body text with detailed structural information about the text. ### Knowledge Graph Construction The output of the PDF processing above, like the structured data in our input sources, consists of plain-string names for real-world entities. Those entities include authors, the institutions with which those authors are affiliated, the venues in which the papers were published, and of course, the papers themselves. To build S2AG, we must assign an ID to each real-world entity and associate each plain-string name with the appropriate ID. _Paper Deduplication_ is necessary because we process data from many independent sources. Paper titles are not unique, and can vary in how they are expressed. Authors can release updated versions of a paper with a slightly modified, or almost completely different, title. \begin{table} \begin{tabular}{r r} \hline \hline \multicolumn{1}{c}{**Nodes**} \\ \hline paper & 205M \\ author & 80M \\ publication venue & 550k \\ academic institution & *100k \\ \hline \hline \multicolumn{1}{c}{**Edges**} \\ \hline citation & 2.4B \\ paper-author & 580M \\ paper-venue & 40M \\ author-affiliation & *100M \\ \hline \hline \multicolumn{1}{c}{**\(*\)** = anticipated in 2023} \\ \hline \hline \end{tabular} \end{table} Table 1: Approximate size of S2AG. \begin{table} \begin{tabular}{r|r} \hline \hline Field of Study & Count \\ \hline n/a & 60.0M \\ Medicine & 31.8M \\ Biology & 20.4M \\ Physics & 11.6M \\ Engineering & 10.2M \\ Computer Science & 9.7M \\ Chemistry & 9.1M \\ Education & 7.4M \\ Materials Science & 7.4M \\ Environmental Science & 7.0M \\ Economics & 6.2M \\ Psychology & 6.2M \\ Agricultural and Food Sciences & 5.9M \\ Business & 5.6M \\ Mathematics & 3.7M \\ History & 3.4M \\ Political Science & 2.9M \\ Art & 2.8M \\ Geology & 2.6M \\ Sociology & 1.4M \\ Philosophy & 1.4M \\ Law & 1.1M \\ Linguistics & 1.1M \\ Geography & 350k \\ \hline \hline \end{tabular} \end{table} Table 2: Paper records in S2AG for different academic fields Our latest paper deduplication model is named S2APLER. It works by grouping papers into blocks using title, such that papers with similar but non-identical titles end up in the same block, then scoring the pairwise similarity of papers in each block with a trained model. The model uses string-similarity features for title, abstract, author names, venue name, etc. The synthetic training dataset uses paper data from authoritative sources that provide either a PDF or a DOI.13 Paper pairs with matching DOI/PDFs from different sources are used as positive training examples. Pairs with similar titles but non-matching DOI/PDFs are used as negative examples. We make S2APLER available at [https://github.com/allenai/S2APLER](https://github.com/allenai/S2APLER). Footnote 13: [https://www.doi.org](https://www.doi.org) _Citation Linking_ is the system for finding references to one paper in another paper's bibliography. We also use the Text Span Annotation output to associate each citation link with the text of the sentence containing the citation. Citation Linking is a very similar problem to paper deduplication, except that instead of scoring the similarity between two papers, we score the similarity between a paper and a bibliography entry produced by the PDF Extraction system. We are using fuzzy text-matching heuristics on title and authors for citation linking, but anticipate that the S2APLER model can eventually be adapted to this problem. 
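A highly simplified sketch of the blocking-and-pairwise-scoring pattern described above is given below. The real S2APLER model uses many more features and a trained classifier; the block key, feature weights, and threshold here are all illustrative assumptions.

```python
from collections import defaultdict
from difflib import SequenceMatcher
from itertools import combinations

def block_key(paper):
    # Group papers whose normalized titles share the same first few words.
    words = "".join(c for c in paper["title"].lower() if c.isalnum() or c.isspace()).split()
    return " ".join(words[:4])

def pair_score(a, b):
    # Stand-in for the trained pairwise model: string similarity of title and author names.
    t = SequenceMatcher(None, a["title"].lower(), b["title"].lower()).ratio()
    au = SequenceMatcher(None, " ".join(a["authors"]).lower(), " ".join(b["authors"]).lower()).ratio()
    return 0.7 * t + 0.3 * au

def find_duplicates(papers, threshold=0.9):
    blocks = defaultdict(list)
    for p in papers:
        blocks[block_key(p)].append(p)
    duplicates = []
    for block in blocks.values():
        for a, b in combinations(block, 2):   # only score pairs within the same block
            if pair_score(a, b) >= threshold:
                duplicates.append((a["id"], b["id"]))
    return duplicates
```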
For _Publication Venue Normalization_, we combine data from Fatcat14 and MAG to build a comprehensive set of normalized venues. To match normalized venues, we index all known variant titles for the venue, including ISO-4 normalization,15 in a direct lookup table. We apply regular-expression-based rules to the unnormalized venue strings we obtain from extracted or input sources and look for exact matches in the knowledge base. Footnote 14: [https://fatcat.wiki/](https://fatcat.wiki/) Footnote 15: [https://en.wikipedia.org/wiki/ISO_4](https://en.wikipedia.org/wiki/ISO_4) In _Author Disambiguation_, we use a system named S2AND, introduced in Subramanian et al. (2021), to assign an ID to each author mention (a name of an author appearing in a particular paper). S2AND operates in three stages: (1) Grouping author mentions into candidate blocks, (2) Scoring similarity between records within a block using a LightGBM model (Ke et al., 2017), and (3) Clustering mentions within a block. The similarity scoring is trained on a large dataset for author disambiguation also introduced in Subramanian et al. (2021). The code, model and dataset for S2AND are available at [https://github.com/allenai/S2AND](https://github.com/allenai/S2AND). For _Author Affiliation Normalization_, we link to ROR,16 a registry of persistent identifiers for research organizations. Our linking model is named S2AFF. It first parses unnormalized affiliation strings with a trained NER model into main institute, child institute, and address components. It then fetches the top 100 candidates from a Jaccard-overlap retrieval index and ranks them using a pairwise LightGBM model (Ke et al., 2017) trained using internally-gathered human annotations. \begin{table} \begin{tabular}{c c c c} \hline \hline **Task** & **Model** & **Datasets** & **GitHub** \\ \hline Visual Region & EfficientDet (Tan et al., 2020) via & PubLayNet & \\ Annotation & LayoutParser (Shen et al., 2021) & (Zhong et al., 2019) & Layout-Parser/layout-parser \\ \hline Text Span & I-VILA & S2-VL & \\ Annotation & (Shen et al., 2022) & (Shen et al., 2022) & allenai/VILA \\ \hline Paper & & S2APLER & S2APLER & allenai/S2APLER \\ \hline Author Disambiguation & S2AND & S2AND & \\ & (Subramanian et al., 2021) & (Subramanian et al., 2021) & allenai/S2AND \\ \hline Affiliation & & S2AFF & S2AFF & allenai/S2AFF \\ Normalization & & & \\ \hline TLDR & BART (Lewis et al., 2019) with & SciTLDR & \\ Summarization & CATTS (Cachola et al., 2020) & (Cachola et al., 2020) & allenai/SciTLDR \\ \hline Citation Intent & & SciCite & \\ Classification & & (Cohan et al., 2019) & allenai/SciCite \\ \hline Field of Study & & & \\ Classification & & & \\ \hline Influential Citation & & & \\ Classification & & & \\ \hline Paper & SPECTER & SciDocs (Cohan et al., 2020) \& \\ Embedding & (Cohan et al., 2020) & SciRepEval (Singh et al., 2022) & allenai/SciRepEval \\ \hline \hline \end{tabular} \end{table} Table 3: Selected models and datasets used by the pipeline. We make the code and data for S2AFF available at [https://github.com/allenai/S2AFF](https://github.com/allenai/S2AFF). We expect the API to include normalized affiliation links in 2023. ### Semantic Features We now turn to describing the models that provide semantic features on top of the knowledge graph. #### 3.4.1 TLDR Summarization To facilitate faster understanding and decision making when scanning lists of papers, we distribute short summaries of scientific papers, or TLDRs, as introduced in Cachola et al. (2020). 
Generating TLDRs of scientific papers can be a challenging task that involves high source compression and requires domain-specific expertise. We use a BART Lewis et al. (2019) model trained with CATTS Cachola et al. (2020), training on paper titles as a scaffolding task to overcome the problem of limited annotated training data. We trained this model on a combined dataset consisting of examples from SciTLDR Cachola et al. (2020) and a separately collected set of summaries for biomedical papers from the Semantic Scholar corpus. The code, model, and data are available at [https://github.com/allenai/SciTLDR](https://github.com/allenai/SciTLDR). #### 3.4.2 Citation Intent and Influence Classification Citations play a critical role in scientific papers, and understanding the intent of a citation is helpful for automated analysis of scholarly literature. We use the model and dataset called SciCite, introduced in Cohan et al. (2019), to classify each citation into one of three categories: _background information_, _use of methods_, or _comparing results_. The code, model, and data for SciCite are available at [https://github.com/allenai/SciCite](https://github.com/allenai/SciCite). We also classify whether each citation is _Highly Influential_. Based on the dataset and findings from Valenzuela et al. (2015),17 we use a feature-based heuristic: (1) Only citations between papers with no overlapping authors are considered eligible, (2) A citation is classified as Highly Influential if it appears at least three times in a sentence in which no other papers are cited, if the citing sentence contains terms such as "build upon", "following", or "inspired by", or if the citing sentence has references to tables or figures (indicating a direct comparison with the cited work). Footnote 17: [https://allenai.org/data/meaningful-citations](https://allenai.org/data/meaningful-citations) #### 3.4.3 Fields-of-Study Classification Prior to the discontinuation of MAG, Semantic Scholar made use of the fields-of-study classifications that MAG provided, using their level 0 taxonomy. After MAG's deprecation, we deployed our own classification model, adding Education, Law, and Linguistics to the existing MAG list. These additions were based on user feedback and comparison to other popular academic data sources such as Dimensions. We trained our own fields-of-study classifier, named S2FOS,18 using a multilabel linear SVM over character n-gram TF-IDF representations (the 300k most common character unigrams to 5-grams). For training data, we manually labeled a number of publication venues, and then propagated those labels to all papers published in their respective venues. The code, model, and data are available at [https://github.com/allenai/s2_fos](https://github.com/allenai/s2_fos) Footnote 18: [https://blog.allenai.org/9d2f641949e5](https://blog.allenai.org/9d2f641949e5) #### 3.4.4 Paper Embeddings Vector representations (embeddings) of papers can be useful in a variety of downstream applications. Our pipeline uses them for author disambiguation and recommendations, and we publish embeddings so that others may use them in their own applications. We produce embeddings using SPECTER Cohan et al. (2020), which generates document-level embeddings from SciBERT Beltagy et al. (2019). SPECTER takes the paper title and abstract as input, and is trained to minimize a triplet margin loss that encourages paper pairs with a citation relationship to have more similar embeddings than those without. 
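A sketch of this triplet margin loss in PyTorch is shown below, where the query paper, a cited (positive) paper, and an uncited (negative) paper are each embedded from their title plus abstract. The embedding dimension, batch size, and margin value are illustrative.

```python
import torch
import torch.nn.functional as F

def triplet_margin_loss(query_emb, pos_emb, neg_emb, margin=1.0):
    """L = max(0, d(q, p+) - d(q, p-) + margin), with Euclidean distance d."""
    d_pos = F.pairwise_distance(query_emb, pos_emb)   # distance to a cited paper
    d_neg = F.pairwise_distance(query_emb, neg_emb)   # distance to a non-cited paper
    return F.relu(d_pos - d_neg + margin).mean()

# Random tensors standing in for encoder outputs (batch of 8 papers, 768-dim embeddings).
q, p, n = torch.randn(8, 768), torch.randn(8, 768), torch.randn(8, 768)
loss = triplet_margin_loss(q, p, n)
```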
We evaluated on the SciDocs benchmark, also introduced in Cohan et al. (2020), as well as a newer benchmark SciRepEval Singh et al. (2022), which increases the number and difficulty of tasks. The code, models, and data for SPECTER are available at [https://github.com/allenai/SPECTER](https://github.com/allenai/SPECTER) and for SciRepEval at [https://github.com/allenai/SciRepEval](https://github.com/allenai/SciRepEval). #### 3.4.5 Recent Paper Recommendations We dynamically train recommendation models to surface relevant new papers to users. Our recommender takes a set of positively- and negatively-annotated papers, and outputs a ranked list of recommended papers. The model generates recommendations in three steps: Ranker Training, Candidate Selection, and Candidate Ranking. Ranker Training is an extension of Arxiv Sanity.19,20 From each user's positive/negative annotations, it trains two linear Support Vector Machine models: one from the TF-IDF representations of the annotated papers and one from the SPECTER embeddings. We augment negative user annotations with randomly selected negative examples, the latter having less weight. During Candidate Selection, we use FAISS21 to search an approximate k-nearest neighbor index of the SPECTER embeddings of \(\sim\)1M papers published in the last 60 days (refreshed nightly). Finally, for Candidate Ranking we select \(\sim\)500 papers nearest the centroid of the positively-annotated papers and rank them using the average of the two model scores. Footnote 19: [https://arxiv-sanity-lite.com/](https://arxiv-sanity-lite.com/) Footnote 20: [https://github.com/karpathy/arxiv-sanity-lite](https://github.com/karpathy/arxiv-sanity-lite) Footnote 21: [https://github.com/facebookresearch/faiss](https://github.com/facebookresearch/faiss) ## 4 APIs and Datasets The outputs of our data processing pipeline and semantic models are made available through a suite of APIs and datasets described below. Because we develop and refine our models over time, the data served by the APIs may shift, or it may come from a mixture of models as we migrate from one system to its successor. Where appropriate, we will update our live documentation22 to reflect any changes. Footnote 22: [https://api.semanticscholar.org/api-docs](https://api.semanticscholar.org/api-docs) For unauthenticated users, we offer a low volume of API requests and samples of the datasets. For full datasets and high request volumes, we ask that users obtain an authentication key, at no charge, subject to terms of use.23 To date, we have issued over 700 authentication keys to various partners, for uses varying from student research projects to non-profit organizations to commercial products, serving close to 150 million requests in December 2022. Footnote 23: [https://www.semanticscholar.org/product/api#](https://www.semanticscholar.org/product/api#) Partner-Form ### Graph The Graph API24 provides the most current data from the Semantic Scholar Academic Graph. Papers can be retrieved by our internal ID, or by using identifiers from arXiv, PubMed, DOI, and others. Papers can also be retrieved via their bidirectional citation relationship to other papers, or by author. Keyword search, with some filtering options, is available for both papers and authors. We place size restrictions on the number of search results and encourage users to use the bulk snapshots when retrieving large volumes of data. 
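As an example of how the Graph API can be consumed, the sketch below issues two requests with the Python requests library. The endpoint paths and field names follow the public documentation at the time of writing and should be checked against the live docs; the API key header is only needed for higher request volumes.

```python
import requests

BASE = "https://api.semanticscholar.org/graph/v1"
headers = {"x-api-key": "YOUR_KEY"}   # optional for low request volumes

# Look up a paper by its arXiv ID and request a few fields.
resp = requests.get(
    f"{BASE}/paper/arXiv:2301.10140",
    params={"fields": "title,year,citationCount,authors.name"},
    headers=headers,
)
paper = resp.json()
print(paper["title"], paper["citationCount"])

# Keyword search over papers.
resp = requests.get(f"{BASE}/paper/search",
                    params={"query": "scientific literature graph", "limit": 5})
for hit in resp.json().get("data", []):
    print(hit["paperId"], hit["title"])
```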
Footnote 24: [https://api.semanticscholar.org/api-docs](https://api.semanticscholar.org/api-docs) ### Datasets Download links to monthly snapshots of our knowledge graph can be obtained via our Datasets API.25 Each dataset is a collection of gzipped JSON files, where records in one dataset refer to records in other datasets by ID. The datasets are: Footnote 25: [https://api.semanticscholar.org/api-docs/datasets](https://api.semanticscholar.org/api-docs/datasets) * papers: Core metadata of papers * abstracts: Abstract text for papers, where allowed by licensing * authors: Core metadata of authors * citations: Citation links between papers, with citation context and intent and influential classifications * embeddings: SPECTER embeddings of papers * paper-ids: Mapping between different IDs used to identify a paper. Useful for tracking deduplication between releases. * tldrs: TLDRs for papers * publication-venues: Core metadata for publication venues * S2ORC: Introduced in Lo et al. (2020), S2ORC is the largest publicly-available collection of full text for open-access scientific papers. S2ORC's full text is annotated with automatically-identified structural and semantic elements of the paper. section headings, paragraphs, bibliography entries, inline citation mentions, table/figure references, etc. Table 4 shows the number of open-access full-text papers broken down by academic field. Since its original release as a static collection, S2ORC has grown in size and is being kept up-to-date as part of our PDF processing pipeline. For further details on S2ORC, we refer the reader to Lo et al. (2020) and documentation at [https://github.com/allenai/s2orc](https://github.com/allenai/s2orc). ### Recommendations The Recommendations API26 generates recommendations, selected from papers published within the past 60 days, based on positive/negative paper annotations. The caller provides at least one paper ID as a positive example, and any number of paper IDs as negative examples. The response is a relevance-ordered list of recently-published papers and their metadata. Footnote 26: [https://api.semanticscholar.org/api-docs/peer-review](https://api.semanticscholar.org/api-docs/peer-review) ### Peer Review The Peer Review API27 supports the process of peer reviewer matching and conflict of interest detection.28 \begin{table} \begin{tabular}{r|c} \hline \hline Field of Study & Count \\ \hline Medicine & 2.9M \\ Biology & 2.2M \\ Physics & 1.2M \\ Computer Science & 810k \\ Mathematics & 580k \\ Psychology & 540k \\ Chemistry & 430k \\ Materials Science & 400k \\ Environmental Science & 400k \\ Engineering & 390k \\ Agricultural And Food Sciences & 380k \\ Education & 260k \\ Business & 230k \\ Economics & 210k \\ Political Science & 150k \\ Geology & 110k \\ Art & 50k \\ Sociology & 50k \\ History & 40k \\ Linguistics & 30k \\ Philosophy & 30k \\ Law & 20k \\ Geography & 20k \\ \hline \hline \end{tabular} \end{table} Table 4: Full-text availability of papers in S2ORC for different academic fields Users, such as journal editors or conference organizers, can upload information about potential reviewers (their Semantic Scholar author ID) and paper submissions (title, abstract, and list of authors, including Semantic Scholar author ID). The Peer Review API returns conflict-of-interest (COI) and matching scores for all reviewer-submission pairs. The COI score is a binary indicator saying whether the reviewer has co-authored with any of the submission's authors in the past. 
The reviewer match score is the average SPECTER distance between the submission and the three most similar papers written by the reviewer. ## 5 Related Work Table 5 summarizes major providers of scholarly data along three key dimensions: comprehensiveness, access, and services offered. Some providers, such as Google Scholar, do not offer any programmatic services at all. Others, such as MAG, have been discontinued. The major open provider of parsed content, PubMed Central, is not cross-disciplinary. Other providers require a subscription. Semantic Scholar is unique in providing a comprehensive and open knowledge base with the widest array of services. ## 6 Conclusion and Future Work We have described the Semantic Scholar data platform, which offers code bases, data sets, and APIs covering scientific literature. The Semantic Scholar Academic Graph (S2AG) consists of hundreds of millions of papers and billions of citation links, created by a state-of-the-art PDF extraction and knowledge graph normalization pipeline described in this paper. The platform also offers semantic features such as summarization, vector embeddings and recommendations. In the future, we hope to expose selected semantic features as services, for users to apply to their own data. We hope to add richer semantic labels to our full-text annotations. We plan to add more personalized functionality, such as access to library content and reading history. We will expand our tools for collecting human data corrections, and possibly collect automated annotations from external collaborators. Of course, we will continue to improve our existing knowledge graph construction and semantic feature models. We hope that providing these resources will enable application development and research using scholarly data to promote the advancement of science globally. ## Acknowledgements The Semantic Scholar Open Data Platform, including S2AG and various dataset and API offerings, is the product of years of work by members of the Semantic Scholar team. The authors of this paper have all contributed directly to the creation and continued maintenance of this platform, including software and model development, data curation, evaluation, design, product management, and more. This work was supported in part by NSF Grant CNS-2213656. \begin{table} \begin{tabular}{l l c c c} \hline \hline **Resource** & **URL** & **Article Count** & **Access** & **Services** \\ \hline \hline Aminer & aminer.org & 321.5M & open & D\({}^{*}\) \\ \hline arXiv & arxiv.org & 2M & open & D\({}^{**}\),F,S \\ \hline BASE & base-search.net & 180.5M & open & S \\ \hline CORE & core.ac.uk & 207.3M & open & D\({}^{*}\), S \\ \hline Dimensions & app.dimensions.ai & 123.8M & subscription & D, F, M, S \\ \hline Google Scholar & scholar.google.com &? 
& - & - \\ \hline The Lens & lens.org & 240.4M & subscription & D, M, S \\ \hline Meta & - & - & terminated 3/31/22 & - \\ \hline Microsoft Academic & - & - & terminated 12/31/21 & - \\ \hline OpenAlex & openalex.org & 205.2M & open & D, F, M \\ \hline PubMed Central & ncbi.nlm.nih.gov/pmc/ & 7.5M & open & D\({}^{**}\),F,P,S \\ \hline ResearchGate & researchgate.net & 135.0M & - & - \\ \hline Scopus & scopus.com & 84.0M & subscription & F, M, S \\ \hline **Semantic Scholar** & **semanticscholar.org** & **205M** & **open** & **D, F, M, P, S, T** \\ \hline Web of Science Core & webofknowledge.com & 83.2M & subscription & F, M, S \\ \hline \hline Key: & D=data download; F=field-of-study classification; M=advanced metadata; & & \\ P=semantically parsed text; S=title and abstract search; T=natural language summarization & *=data more than a year stale; **=restricted fields of study & \\ \multicolumn{4}{l}{Article count does not include patents or datasets.} \\ \end{tabular} \end{table} Table 5: Comparison of leading scholarly data providers
2304.04933
Reinforcement Learning Tutor Better Supported Lower Performers in a Math Task
Resource limitations make it hard to provide all students with one of the most effective educational interventions: personalized instruction. Reinforcement learning could be a key tool to reduce the development cost and improve the effectiveness of intelligent tutoring software that aims to provide the right support, at the right time, to a student. Here we illustrate that deep reinforcement learning can be used to provide adaptive pedagogical support to students learning about the concept of volume in a narrative storyline software. Using explainable artificial intelligence tools, we extracted interpretable insights about the pedagogical policy learned and demonstrated that the resulting policy had similar performance in a different student population. Most importantly, in both studies, the reinforcement-learning narrative system had the largest benefit for those students with the lowest initial pretest scores, suggesting the opportunity for AI to adapt and provide support for those most in need.
Sherry Ruan, Allen Nie, William Steenbergen, Jiayu He, JQ Zhang, Meng Guo, Yao Liu, Kyle Dang Nguyen, Catherine Y Wang, Rui Ying, James A Landay, Emma Brunskill
2023-04-11T02:11:24Z
http://arxiv.org/abs/2304.04933v2
# Reinforcement Learning Tutor Better Supported Lower Performers in a Math Task ###### Abstract Resource limitations make it hard to provide all students with one of the most effective educational interventions: personalized instruction. Reinforcement learning could be a key tool to reduce the development cost and improve the effectiveness of intelligent tutoring software that aims to provide the right support, at the right time, to a student. Here we illustrate that deep reinforcement learning can be used to provide adaptive pedagogical support to students learning about the concept of volume in a narrative storyline software. Using explainable artificial intelligence tools, we extracted interpretable insights about the pedagogical policy learned and demonstrated that the resulting policy had similar performance in a different student population. Most importantly, in both studies, the reinforcement-learning narrative system had the largest benefit for those students with the lowest initial pretest scores, suggesting the opportunity for AI to adapt and provide support for those most in need. **Keywords:** reinforcement learning, education, children, artificial intelligence ## 1 Introduction Many children fail basic reading and math standards, and the number of such students has greatly increased during the COVID-19 pandemic. One-on-one human tutoring can be highly effective [1], in part because it enables students to receive personalized, differentiated instruction, but it is often prohibitively expensive. Educational software aims to provide some of this personalized instruction at scale, but can still be costly and slow to build. Reinforcement learning (RL) could reduce the cost of developing effective learning technology by automating the process of specifying how best to support a student through their learning journey. RL algorithms learn from data to choose an intervention (such as a hint), given the current context (such as an estimate of a student's knowledge) to maximize the expected value of some desirable outcome, such as test scores. Preliminary work on using RL for improving educational software has enabled encouraging gains on learning outcomes [2, 3, 4, 5] or student persistence [2, 5]. Such systems have been limited to selecting among practice items. It is unknown if reinforcement learning could be used to automatically tune and optimize broader types of learning systems, such as the pedagogical feedback provided in a narrative environment, and do so in a way that is interpretable and robust. To address this, we created a narrative-based adaptive pedagogical-supported educational software to support math concept learning for students roughly ages 9-12 and used reinforcement learning to adaptively (machine) learn the responses to provide support for student learning. Recent advances in explainability methods for deep neural networks have made it possible to use advanced tools for modeling without sacrificing interpretability. We used these methods to help understand if and how the system is learning to differentiate in order to optimize desired outcomes. An additional key consideration is whether the learned pedagogical support would generalize to a different student community, as all schools may not be able to support online adaptive RL systems. We tested if the decision policies learned in the first study could be used in a different population of students that was a more geographically diverse population with a lower household income distribution. 
In both studies, students with the lowest pretest scores improved using our RL-powered narrative AI system, and more than compared to students using a baseline system. This highlights the potential for reinforcement learning to tune educational software parameters to enhance effectiveness, in a way that is interpretable, transfers to other populations, and can help those most in need of support. ## 2 Related Work: Reinforcement Learning for Student Learning Reinforcement learning has seen impressive successes in areas like robotics [6] and game playing [7]. The goal of a reinforcement learning algorithm is to compute a strategy (referred to as a "policy") that specifies the intervention (such as a pedagogical activity) to choose in a particular context (e.g., a learner's knowledge state and frustration level), in a way that is expected to maximize desired outcomes (e.g., test scores, engagement, retention). A key challenge is that the parameters governing the process by which contexts evolve, and outcomes occur, are unknown in advance. Instead, an algorithm must learn from experience by analyzing actual decisions made and their outcomes, a strategy with high expected outcomes. In the context of education, there have been some promising results that reinforcement learning can improve word acquisition of preschoolers interacting with a social robot [4], the persistence of learners during a fractions game [2], the performance of college students learning introductory physics [3], undergraduates learning discrete mathematics [8], and the outcomes and efficiency of working adults learning linear algebra [5]. However, in other settings, there has been little benefit over a reasonable control condition [9, 10]. More broadly, work on intelligent tutoring systems and computer-assisted learning suggests that personalized feedback and support in educational software can be an effective way to support student learning [11, 12, 13], but most prior work has focused on software designed to be used in the classroom where there are additional mechanisms to keep students' attention. We hypothesize that reinforcement learning may be particularly beneficial when learning is happening out of the classroom, or motivation and engagement are particularly critical, or in less traditional curricula that move towards different forms of instruction rather than lecture and practice. Learning sciences offer less guidance about how to best support students in these settings. Yet, such educational settings are likely to be increasingly important in the future, both due to immediate challenges due to the covid-19 pandemic and aftermath, as well as due to the types of skills needed for success in the 21st century. Reinforcement learning may inform data-driven instruction for such settings, and we focus our attention on learners outside the classroom in this work. As another contrast between our focus and prior related work, in the context of education, it is both important and of interest to understand what the algorithm learns to do: what personalized decisions are made for different contexts and individuals, and who is most helped by the algorithm. Such issues have been historically largely unstudied in the reinforcement learning research community, with some notable exceptions (e.g. [14, 15]), but are an important part of our current work. ## 3 Interface Design Learning science principles can often be too broad to inform the specific design decisions needed to create engaging, effective educational software. 
For example, a narrative-based, chatbot-supported1 educational interface can lead to significant learning and engagement gains over a no-narrative, no-chatbot variant [16], but doing so well is subtle. Here the effective chat-based tutoring system actually used humans to act as chatbots, in a wizard-of-oz style study. In contrast, a different narrative-based system with standard step-by-step hints (which are common in intelligent tutoring systems) provided no benefit over the no-narrative, no-hint control condition [16]. Footnote 1: Note that our work was conducted before the launch of ChatGPT in November 2022. RL has the potential to be particularly helpful in such situations where personalization may be key. In this work, we used an informal online learning environment to teach students about the concept of volume. Learning tasks in this system are embedded in a narrative storyline. In response to student input, a companion AI tutor selects among four common pedagogical strategies: providing direct hints, generic encouragement, and guided prompts that scaffold the student (e.g., "Have you heard of a unit cube?"), or passive positive acknowledgment (emoticon smiley face). Figure 1 shows a screenshot of the software used. ## 4 Approach ### Feature Space Due to past success in RL systems for adult learning [3, 5], we use a small set of features, specifically an eight-dimensional state space, described in detail below. The observation vectors were normalized element-wise before being used for training and prediction. Grade, pre-score, and anxiety score are static variables. Other variables are affected by the actions the policy takes and change as the child is solving each step of the task. * Grade: the elementary school grade a child is in, ranging from 3-5. * Pre-score: the score a child receives for the pre-test, ranging from 0-8. * Step: the step of the task a child is in, ranging from 1-6. * Failed attempts: the number of failed attempts made by the child in the current step. It is a non-negative integer. * NLP positive score: a score that reflects the positive sentiment in the last sentence mentioned by the child. It is a float ranging from 0-1. An automatic sentiment analysis tool from NLTK [17] is used to calculate this. Figure 1: **Tutoring AI Guide Interface**: A child solves a math problem while interacting with the AI-driven tutoring guide. The child can click on the “helpful?” button if they consider the AI tutor’s response to be helpful. The child can also click on “I want to stop playing” to quit the activity at any time. * NLP negative score: a score that reflects the negative sentiment in the last sentence mentioned by the child. It is a float ranging from 0-1. An automatic sentiment analysis tool from NLTK [17] is used to calculate this. * NLP help score: a score that reflects the extent to which the child asks for help in the message sent. It is a float ranging from 0-1 and calculated as the semantic similarity between the child's message and "help". * Anxiety score: the score of the math anxiety test [18] that the child takes prior to beginning the activity. ### AI Guide RL Policy Learning #### The simulation phase RL algorithms were run on a simulator before any real-world experiments were done to get an initial estimate of the performance and test the algorithm's potential. 
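Before turning to the simulator, the observation vector described above can be sketched in code. This is a minimal illustration rather than the authors' implementation: the helper name, the min-max normalization ranges, and the use of NLTK's VADER analyzer for the sentiment scores are assumptions based on the feature descriptions.

```python
import numpy as np
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

_sia = SentimentIntensityAnalyzer()  # VADER; the paper only states that an NLTK tool is used

def build_observation(grade, pre_score, step, failed_attempts,
                      last_message, help_score, anxiety_score):
    """Assemble and normalize the 8-dimensional state described above.

    help_score is passed in directly because the text does not specify which
    semantic-similarity model computes it; the ranges below are assumptions.
    """
    sentiment = _sia.polarity_scores(last_message)  # dict with 'neg', 'neu', 'pos', 'compound'
    raw = np.array([
        grade,             # 3-5
        pre_score,         # 0-8
        step,              # 1-6
        failed_attempts,   # non-negative integer, capped at 10 here for scaling
        sentiment["pos"],  # already in [0, 1]
        sentiment["neg"],  # already in [0, 1]
        help_score,        # already in [0, 1]
        anxiety_score,     # 9-45, inferred from the quartiles reported later
    ], dtype=float)
    lo = np.array([3, 0, 1, 0, 0, 0, 0, 9], dtype=float)
    hi = np.array([5, 8, 6, 10, 1, 1, 1, 45], dtype=float)
    return np.clip((raw - lo) / (hi - lo), 0.0, 1.0)  # element-wise min-max normalization
```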
The simulator models children with various characteristics and their interactions with the math problem and agent, and is built with transition matrices of much higher dimensionality than the state space passed into the algorithms, to ensure that the simulator is challenging. We select the hyperparameters of our RL policy based on this simulator. These early simulations informed our choice of a small function model for use in our later experiments. For example, we explored various multiple policy architectures and converged on 2 hidden layers, since in our simulations the parameters for a small instructional model could be learned within a couple of hundred simulated students. #### Online learning Phase Throughout the math-learning activity, children have access to an AI guide on a side panel that provides encouragement, hints, and companionship. The goal is for the AI guide to provide additional engagement with the math activity and provide adaptive support that facilitates learning gains. The AI guide takes on the persona of the monster that children select in the fantasy-based narrative. Before entering the math learning activity, children are brought through a short tutorial in which they communicate with the AI guide, which introduces itself and asks about the children. This tutorial serves to familiarize the children with the AI guide interface and build social rapport between the AI guide and the children. We provide a workflow in Figure 2. The RL decision policy takes in a vector describing features of the learner state and outputs a particular support type (of the 4 options) to provide. The RL algorithm aims to learn an automated decision policy to maximize the expected reward function, which should capture the key desired outcomes. We specify the reward when a student \(j\) finishes as: \[R_{j}=\sum_{i=1}^{8}[\max(0,post_{ij}-pre_{ij})]-\lambda*n_{hj}+\beta n_{uj}+ \mathbb{1}(quit_{j}),\] (\(\lambda=0.013,\beta=0.1\)), where the first term is the sum over items of the \(j\)-th student's clipped learning gain from pretest to post-test on item \(i\) of the assessment, the second term is a tiny penalty on the number of hints \(n_{hj}\) given by the system to the student (since too many hints may reduce learning), the third term provides a small bonus for the number of times \(n_{uj}\) child \(j\) marked an AI guide reply as helpful, and the last term \(\mathbb{1}\left(quit_{j}\right)=-8\) is a penalty if the learner quits before completing the task. The proximal policy optimization (PPO) algorithm [19] was used to learn the decision policy to optimize the expected reward. The policy architecture is stochastic. The hyper-parameter used in the online study was \(\epsilon=0.2\). Both the policy neural network and value function neural network had two hidden layers with 16 nodes and a tanh activation function. We used an Adam optimizer with a learning rate of 0.0025 for both. The RL policy is implemented with the RLGraph package for this [20]. This optimization method was chosen as it has shown potential in similar situations, for example, in [5]. ### Offline Reinforcement Learning We also performed offline reinforcement learning to extract another policy for use in a subsequent experiment. We did this for multiple reasons. First, as described later, during online reinforcement learning, the policy had not yet converged by the end of study 1, and we wanted to compare a static learned policy to a control, where the differences might be clearer. 
Second, we were curious whether we might extract a higher-performing decision policy using offline learning. Third, in most experimental sciences, research is hoped to provide findings that generalize beyond the specific research setting. Such generalizability is also of key interest in machine learning. Therefore an important open issue is whether automated pedagogical strategies obtained using reinforcement learning in one setting will transfer to similar settings. We used offline reinforcement learning policy evaluation to select among potential new automated instructional policies using the data gathered from online reinforcement learning (in our study 1, as we will shortly describe). We considered two **Fig. 2**: The interaction between user and RL AI guide. The RL AI guide selects one of four actions and replies to the user once a message is received. The reward function is updated both during the interaction and after the child completes the post-quiz. Rewards 1–4 correspond to the reward functions described in Section 4.2 (Reward Function). The RL AI guide performs an update after every five children. sets of algorithms for training potential instructional policies. The first is behavior cloning [21, 22], a popular method for leveraging offline data to train an automated policy. Behavior cloning trains the model to imitate the probability distribution of actions that are outputted by the online policy. Note that this procedure does not return a policy that is exactly like our online policy because our online policy updates itself - therefore, this objective trains a new policy that tries to output the probabilities outputted by an ensemble of online models at different checkpoints. Behavior cloning minimizes the following loss: \[\mathcal{L}_{\text{BC}}(\theta,\mathcal{D})=\mathbb{E}_{(s,a,s^{\prime}) \sim\mathcal{D}}[D_{\text{KL}}(\pi_{\theta}(s)||p(a|s))]\] In our setting, this can be viewed as distilling the average policy over online reinforcement learning. The second style of algorithms we explored was offline policy gradient on the estimated performance of the trained instructional policy. This method has been used in several other offline RL optimization papers (see e.g. [23, 24]). Here we used a weighted importance sampling (WIS) estimator to estimate the value of the policy, \[\mathcal{L}_{\text{WIS}}(\theta,\mathcal{D}) =\frac{1}{\sum_{i=1}^{|\mathcal{D}|}\big{(}\prod_{t=1}^{L}\frac{ \pi_{\theta}(a_{t}|s_{t})}{p(a_{t}|s_{t})}\big{)}}\sum_{i=1}^{|\mathcal{D}|} \Big{(}\prod_{t=1}^{L}\frac{\pi_{\theta}(a_{t}|s_{t})}{p(a_{t}|s_{t})}\Big{)}R _{i} \tag{1}\] \[+\eta\cdot\frac{1}{\sum_{i=1}^{|\mathcal{D}|}\big{(}\prod_{t=1}^ {L}\frac{\pi_{\theta}(a_{t}|s_{t})}{p(a_{t}|s_{t})}\big{)}^{2}} \tag{2}\] where \(R_{i}\) is the total reward for student \(i\). This is called policy gradient via importance sampling (POIS). We also explored whether adding an effective sample size (ESS) penalty with hyperparameter \(\eta\) would help - ESS regularizes the difference between the learned policy \(\pi_{\theta}\) and the behavior policy \(p\). We considered multiple hyperparameters for each of the two algorithm procedures (see Table 1). There are 108 hyperparameter combinations to learn our policy. We use an algorithm evaluation procedure where we partition the collected dataset into a train and validation set by randomly allocating 50% of students into one group and the rest into another. We repeat this strategy 10 times. 
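For concreteness, the plain weighted importance sampling estimate of Eq. (1), without the effective-sample-size regularizer, can be sketched as follows. The trajectory format and function names are illustrative assumptions rather than the study's logging schema, and the per-student return is the total reward defined earlier.

```python
import numpy as np

def wis_value(trajectories, target_policy):
    """Weighted importance sampling estimate of a candidate policy's value.

    trajectories: list of (steps, total_reward) pairs, where each step is a
    (state, action, behavior_prob) tuple logged during online RL; this format
    is an assumed stand-in for the actual study logs.
    target_policy: callable mapping a state to a vector of action probabilities.
    """
    weights, returns = [], []
    for steps, total_reward in trajectories:
        w = 1.0
        for state, action, behavior_prob in steps:
            # ratio of the candidate policy's probability to the logged one
            w *= target_policy(state)[action] / behavior_prob
        weights.append(w)
        returns.append(total_reward)
    weights, returns = np.asarray(weights), np.asarray(returns)
    return float(np.sum(weights * returns) / np.sum(weights))
```

Each candidate policy would be scored this way on the held-out half of a split, and the average across the ten splits used for selection.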
We use this split dataset to choose the best model architecture, hyperparameters, and learning objectives, similar to what has been proposed in [25]. We train our model on the training split and use weighted importance sampling (WIS) to evaluate the performance of this policy on the validation set. We apply the same learning procedure across all 10 splits and compute the average of the performances. We choose the best algorithm from the highest average performance on the validation set. We then apply this algorithm to train a policy that learns from the entire dataset. In our evaluation, the behavior cloned policy was estimated to outperform the online policy in the majority of splits. Also, a small 1-layer fully connected neural network with 4-dimensional hidden state and Gaussian error linear unit [26] activation function outperformed other model architectures. Therefore we used the distilled, behavior cloned policy in our second experiment. ## 5 Experimental Setups As a control condition, the interface included the mathematics task but had no narration and no adaptive support; similar to a mastery-style approach, students had to successfully complete one subpart before advancing. While this may seem like a weak control, a past study [16] on teaching an elementary school mathematics task had found that a similar control condition had performed similarly to a control condition with a narrative storyline, and slightly better than a control condition with a narrative storyline and step-wise hints (which are common in tutoring software). In study 1, we examined the speed and effectiveness of using reinforcement learning to adapt the type of AI guide feedback given to learners. Due to COVID-19 pandemic restrictions, all experiments were completed online. Subjects were randomly assigned to each condition, but with an unequal allocation: more students were assigned to the RL condition than the control condition. In total 269 elementary school students used the reinforcement learning-narrative educational software (RL). 70 students were in the control condition. Subjects completed an 8-item assessment and a math anxiety survey [27], then used the volume education software, and then completed another assessment (identical up to numerical values, and cross-randomized across students), and an engagement measure designed for studies with children [28]. In study 2 we were interested to see if the distilled behavior cloned policy learned from the online RL process (Section 4.2) would transfer to a new population of subjects. We then conducted study 2 with a new set of subjects (37 participants used for analysis): subjects were randomized into the same control condition as study 1, or using the single distilled RL policy. In study 2, we recruited a broader population more similar to that of the U.S.A. For the original study, 113 participants out of 203 provided home zip codes. For the follow-up study, 16 participants out of 30 provided home zip codes. For those that did not provide their home zip code, we use their school zip code. Using these zip codes, we obtained the median housing price and mean annual household income from the fifth American Community Survey (in 2020), accessible through an API provided by the United States Census Bureau. Figure 3 shows the difference between the student groups in study 1 and study 2. We conduct the Kolmogorov-Smirnov 2-sided test between the student populations of the two studies on these variables. 
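Such a two-sample comparison can be reproduced generically with SciPy; the arrays below are placeholders standing in for the per-subject census values, not the study's data.

```python
import numpy as np
from scipy.stats import ks_2samp

# Placeholder samples sized like the reported zip-code counts (113 and 16);
# in the actual analysis these would be the census-derived income values.
incomes_study1 = np.random.default_rng(0).normal(120_000, 30_000, size=113)
incomes_study2 = np.random.default_rng(1).normal(90_000, 35_000, size=16)

statistic, p_value = ks_2samp(incomes_study1, incomes_study2)  # two-sided by default
print(f"KS statistic = {statistic:.3f}, p = {p_value:.4f}")
```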
For both mean \begin{table} \begin{tabular}{c c} \hline \hline & Hyperparameter Range \\ \hline Training algorithm & BC, POIS \\ \hline Policy network dimension & [4], [8], [16], [4, 4], [8, 8], [16, 16] \\ \hline Training Epochs & [1, 5, 10] \\ \hline ESS Penalty \(\eta\) & [0, 0.01, 0.05] \\ \hline \hline \end{tabular} \end{table} Table 1: Hyperparameters considered during offline batch reinforcement learning. annual household income (\(Pr(F(x)=G(x))=0.02<0.05\)) and median housing price (\(Pr(F(x)=G(x))=0.0005<0.01\)), we found a significant difference between two populations. In addition, subjects were more geographically and racially diverse (see Appendix). In addition, study 1 was done when many more U.S.A. children attended school remotely. Thus, study 2 offers a chance to examine the generalizability of learned RL policies. ## 6 Results Aggregate summaries are shown in Table 2. Some subjects completed the pretest or posttest twice due to a limitation in the system. We excluded these subjects from the results presented. There was no significant difference in the amount of improvement (post-test - pretest score) between the RL narrative condition and control condition (study 1: Wilcoxon rank test \(W=9632.5\), \(p=0.2\), study 2: Wilcoxon rank test \(W=185.5\), \(p=0.281\)). \begin{table} \begin{tabular}{l l l|l l} \hline & \multicolumn{2}{c|}{Online RL Study 1} & \multicolumn{2}{c}{Distilled Policy Study 2} \\ & _Control_ & _Narrative AI_ & _Control_ & _Narrative AI_ \\ \hline **Number** & 68 & 258 & 18 & 17 \\ **Pretest** & 5.46 (2.73) & 4.87 (2.38) & 4.06 (2.58) & 3.41 (2.31) \\ **Posttest** & 5.89 (2.57) & 5.72 (2.31) & 3.83 (2.53) & 4.12 (1.87) \\ **Improvement** & 0.44 (1.26) & 0.84 (1.89) & -0.22 (1.7) & 0.7 (2.33) \\ \hline **Engagement** & 3.16 (0.62) & 3.4 (0.52) & 3.17 (0.52) & 3.28 (0.57) \\ **Completion** & 92.6\% & 94.5\% & 100\% & 100\% \\ \hline \end{tabular} \end{table} Table 2: Mean (std. dev) results of children in both studies. Figure 3: Distribution of household income (left) or median housing price (right) in the zip codes provided by subjects in study 1 and study 2. There significant difference in the subject pools between the two studies. However, encouragingly, in both studies, there was a trend for subjects with a low initial pretest score (0-2) to have a much larger improvement between the pretest and post-test in the RL narrative condition (Figure 4, top row). The average improvement for these students was 2.02 in study 1 (N=41), and 2.29 in study 2 (N=7), out of a total score range was (0-8). There was a significant difference in the change in scores between the RL condition and control condition in study 2 for those with low pretest scores (0-2) (Wilcoxon rank test \(W=2,p=0.013\)), though this difference does not persist after correcting for multiple-hypothesis testing, and all other differences for studies and pretest groups were not statistically significant under the same test. Engagement scores range from 1 to 4 and subjects with low initial pretest scores (0-2) also trended to having much higher engagement in the RL AI guide condition (study 1 mean engagement score 3.29 (N=40), study 2, mean engagement score 3.28 (N=7)) than in the control condition (study 1 mean engagement score 2.7 (N=14), Figure 4: Top row, Post-test - Pretest (y-axis), Bottom row, Normalized learning gain (NLG) \(\frac{Posttest-Pretest}{MaxScore-Pretest}\) (y-axis). Scores are clustered by subjects with low (0-2), medium (3-5), and high (6-8) initial pretest scores. 
Error bars show standard errors. Note the NLG (bottom row) calculations exclude students who scored 100% on the Pretest since the NLG is not well defined. study 2, mean engagement score 2.7 (N=5)). Prior work suggests interpreting scores below 3.0 as low engagement and 3.0-3.6 as moderate engagement [28]. The assessment used may be subject to ceiling effects, as a number of students did receive the maximum score (8) on either the pretest or the post-test. Though the pretest scores did not significantly differ between the two conditions, in either study, since the control pretest scores were slightly higher, ceiling effects may have impacted the control condition more. To address this, we also repeated our analysis using normalized learning gains (NLG), \(\frac{Posttest-Pretest}{Maximumscore-Pretest}\), which represent the fraction of improvement made by subjects, relative to the possible improvement. Note this excludes any subjects who scored the maximum score on the pretest since the NLG is not well-defined for such students. There was no significant difference between the RL narrative condition and control condition for NLG in either case (study 1, W = 4394.5, p-value = 0.6978; study 2, W = 104.5, p-value = 0.3819). Like for posttest - pretest, we observe larger normalized learning gains for the RL narrative condition than the control condition for initially lower performing students, in both studies (Figure 4, bottom row). The NLG performance for students with medium pretest scores is similar in both conditions, as was also seen for such subjects' posttest minus pretest scores. The pattern for the highest performing students is slightly different than for the post-test - pretest scores but should be taken lightly: as stated, the NLG analysis ignores all students with maximum pretest scores. Note that an NLG of 75% for the initially high-performing student group would be at most a \(2*0.75=1.5\) post-test - pretest improvement (since 2 is the largest possible gain, if the student scored 6 on the pretest, and it is lower if the student scored 7), whereas a 30% improvement for the initially low performing student group is at least a gain of \(6*0.3=1.8\) on the post-test - pretest (since \(MaxScore-Pretest\geq 6\) for such subjects). Together these analyses encouragingly suggest that the RL narrative condition trends to provide a bigger benefit to initially lower-performing students than the control condition. We now provide some additional analyses into the RL process and the potential mechanisms underlying this difference. ### RL Online Learning In study 1, the RL agent updated the AI guide pedagogical policy over subjects, but during the 28 policy updates (after 10 subjects each), we observed significant variability, and the performance had not converged. We hypothesize this may be due to several factors. Likely most importantly, we saw a significant variation in the pretest scores of subjects over time. This may be in part because we performed rolling recruitment, adding additional recruitment sources during the study, which likely caused some shift in the distribution of the underlying students. In addition, the natural variation across third to fifth-graders and student background skills means that across small sets (such as the 10 trajectories used each round for PPO), it is quite possible to have a substantial difference in the pretest scores of those subjects. 
If any of the students are already at or near the ceiling of the pretest scores, there will be almost no potential room for improvement for the RL policy. Indeed there may be some natural regression to the mean, which means that an RL policy that looked promising in prior rounds for related states, may now look worse (depending on the particular generalization). Even without this potentially shifting population, ten trajectories (subjects) is a small size to average over when performing policy updates, so the gradient may be quite noisy. This suggests that performing stratification and trying to ensure a stable distribution of initial start states over participants might lead to faster convergence and better results. However, despite this, through training, subjects in the AI guide condition consistently match or exceed the average performance of those in the control condition. ### Investigating Other Explanations for the Benefit to Low Pretest Subjects A natural question is what is the mechanism behind the improved performance of subjects in the RL narrative condition over those in the control condition, for subjects with initially low pretest scores, and whether this could be due to factors beyond the RL-narration itself. One potential hypothesis is that there were additional differences between the two conditions. Indeed, on average, subjects spend longer on the RL narrative condition task than in the control condition. As Figure 5 shows2, this was consistent for students across all three groups of pretest performance, and the difference in time spent between the two conditions was largely similar for all three groups. However, only the students in the low pretest group seemed to have a significant benefit from the RL condition. It seems unlikely that time on task is the primary reason for improved performance in the RL narrative condition. Footnote 2: We excluded individuals who took longer than 90 minutes on the task in this figure, since such subjects are likely to have taken breaks. All individuals who took at least 90 minutes took over 2 hours, and there were 8 such individuals excluded using this restriction. The study was conducted remotely, and a prescreening call was done with a guardian of each child participating to discuss the study, emphasize the child should do the task without assistance, and verify the child would be participating. However, Figure 5: Time on task (sec) (y-axis) by low (0-2), medium (3-5), and high (6-8) initial pretest scores. Error bars show standard errors. Students whose time on task exceeded 90 minutes (8 students) were excluded from the analysis since it was likely such students might have taken significant breaks. it is still possible that guardians helped the children in some cases. It seems unlikely that for children with low pretest scores, guardians helped them more if the child was in the RL condition than if they were in the control condition. Indeed the control condition offered less support and hints than the RL narrative condition, so the opposite seems more likely to be true. One potential exception is that the RL narrative condition involved a storyline, and while unlikely, depending on the subject's reading skills, it is possible that the guardian would have helped the subject to understand the text. ### Integrated Gradient Analysis of Policy on Feature Space A natural question is whether benefits to subjects with low pretest scores may derive from the personalization capacity of the RL instructional policy. 
Indeed a key benefit of using RL to select activities is its potential to differentiate instruction if doing so is estimated to improve outcomes. Therefore it is of interest to evaluate what differentiation, if any, is done by the RL AI guide policy. However, most popular RL algorithms, including PPO, which we use here, use complex function approximators that are hard to interpret. Therefore we use a method in explainable machine learning, integrated gradient [29], to decompose the multi-decision output of the RL policy used in study 2 into a linear additive sum of attribution for each input context feature. Table 3 shows that the feature importances computed for the policy selected from offline RL and deployed in the RL condition. Recall there are three primary categories of features used to select pedagogical strategies: static features of the learner, features about the stage of the learning activity, and features about the learner's interaction and performance during learning. This analysis selected student's pretest score and their math anxiety score as the most influential contextual features on the AI guide's chosen response. Other student features had little to no effect. Figure 6 shows the probability of assigning actions for students from our distilled policy. Figure 6: The y-axis shows the probability of choosing the first action for each group of subjects, based on their pretest scores (Bottom (0-2), Top (6-8)), and math anxiety level (Low (9-13, corresponding to the bottom 25% percentile), and High (22-45, corresponding to the top 25% percentile)). Error bar shows 95% CI. Students with higher pretest scores were more likely to receive direct hints: such students may require less of the productive struggle needed to learn new mathematics. Students with lower pretest scores may need more engaged practice, but those with high math anxiety may also perceive math as more effortful [30]. Increasing the use of guided prompts may help support such students, as we observe in the policy instructional selections for low-performing, higher math anxiety students. These observed interactions between the multiple features describing student and context, and pedagogy choices, could inform expert analysis and support future hypothesis generation for learning sciences. ## 7 Discussion Our work offers cautionary optimism on the potential role of reinforcement learning in optimizing pedagogical instructional policies. The personalized narrative AI guide may benefit students with the lowest pretest performance, without harming the performance of other learners. Indeed the average gain in scores for subjects with low (0-2) pretest scores was over 2 in both studies in the RL condition, which means the mean scores for such students at least doubled, in an assessment with 8 total points. Our results do not provide a definitive mechanism for this result, though the engagement scores suggest that the control condition was not engaging for subjects with low pretest scores. For such students, the RL narrative AI guide condition yielded higher engagement, similar to those with higher pretest scores. This is likely due to the RL AI guide, not the narrative, since prior work found narrative alone, with hints, yielded no benefit over no narrative and no AI guide in a volume learning task [16]. 
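For readers unfamiliar with the attribution method used above, an integrated-gradients computation can be sketched schematically as follows. This is a generic PyTorch illustration with a zero baseline and a straight-line path, not the authors' implementation; the policy network and observation vector are assumed inputs.

```python
import torch

def integrated_gradients(policy_net, x, action, baseline=None, steps=50):
    """Approximate feature attributions for one action's probability.

    x: a (1, n_features) observation; policy_net maps it to action logits.
    """
    if baseline is None:
        baseline = torch.zeros_like(x)         # all-zeros baseline, a common default
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)
    path = baseline + alphas * (x - baseline)  # straight-line path from baseline to x
    path.requires_grad_(True)
    probs = torch.softmax(policy_net(path), dim=-1)[:, action]
    grads = torch.autograd.grad(probs.sum(), path)[0]
    # Riemann approximation of the path integral, scaled by the input difference
    return ((x - baseline) * grads.mean(dim=0)).squeeze(0)
```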
Our encouraging result is consistent with limited prior work that personalized computer-assisted learning software may sometimes be similarly or only slightly more effective on average but may particularly benefit students who start with lower scores or take longer to complete problems (e.g. [14, 31]). Since the RL algorithm we used aims to maximize expected (test) outcomes, if differentiation within the available pedagogical supports can increase the outcome of any subgroups (without harming the outcomes of other subgroups), the algorithm should learn from data to provide such personalization. Our analysis did not find a significant benefit of RL over the control condition at the population level, though it is possible an effect would be observed with a larger sample size, or with different state feature representations, network architectures, or RL algorithms. \begin{table} \begin{tabular}{l c c c} \hline \hline **RL Policy Action** & _Pre-test_ & _Math_ & _Other_ \\ & _Score_ & _Anxiety_ & _Features_ \\ \hline **Pr(Direct Hint)** & +7.5\% & +8.8\% & -0.4\% \\ **Pr(Acknowledgment)** & -7.5\% & -7.4\% & +0.6\% \\ **Pr(Encouragement)** & +4.1\% & -8.2\% & +0.4\% \\ **Pr(Guided Prompt)** & -4.2\% & +6.8\% & -0.7\% \\ \hline \hline \end{tabular} \end{table} Table 3: Feature importance calculated by the integrated gradient method. Numbers represent how, on average, the feature (with its original value) will positively or negatively contribute to how our RL policy decides to increase or decrease the probability of choosing an action for the current student. Across studies 1 and 2, the comparison between the narrative RL condition and control conditions appears largely stable (Figure 4), with a trend for the RL condition benefiting those with low pretest scores. This suggests an RL decision policy learned on one population can sometimes benefit other populations. ## 8 Conclusion Our work was conducted on around 400 students, which is typically less than the number of third to fifth-graders in a school district, suggesting the feasibility of using this approach to quickly optimize digital learning environments. By combining reinforcement learning with explainable AI, this approach can provide new insights into the interaction of context and student learning that may prompt new research in learning sciences and has a high potential to help quickly identify and scale effective learning practices. ## Appendix A Real Conversation Example To provide an indication of how well the RL AI guide performs after about 200 rounds of iterations, we present below a real conversation snippet between a user and the RL AI guide. **Child**: Let's keep thinking. **AI guide**: [Direct Hint] You can fit 3 chocolates along the height, 5 chocolates along the width, and 6 chocolates along the length of the box. **Child**: Ok. **AI guide**: [Guided Prompt] Hm... I suggest thinking about the concept of a unit cube and the volume of a unit cube. It may remind you of what you need in order to calculate the volume of the box! _Child inputs 90 (correct answer) for the volume question._ **Child**: We got another one right! **AI guide**: [Acknowledgement] :) **Child**: Do you think we multiply or divide. **AI guide**: [Encouragement] I think I have some ideas! But I won't spoil it. Just let me know if you need a hint. _Child inputs 180 (correct answer) for the weight question._ **Child**: Also we got another one right! 
**AI guide**: [Acknowledgement] :) As can be seen from this example, the AI guide adopted all four types of teaching strategies: direct hint, guided prompt, encouragement, and acknowledgment. In particular, when the child asked "Do you think we multiply or divide.", instead of providing any meaningful help, the AI guide simply encouraged the child. This turns out to be an effective teaching strategy since the child successfully solved the problem without needing specific math-related help from the AI guide. ### The math learning activity and narrative The central task of the learning platform is the completion of an online math learning activity focused on teaching 2 concepts related to volume and its knowledge components (measurement and multiplication). Similar to the task in Ruan et al. [16], our math learning activity presented children with 6 tasks in order of increasing difficulty. All of these tasks were subparts of one overall problem (calculating if a box of chocolates weighed \(>\) 320 oz) and results from prior tasks were used in later ones. The first three tasks ask children to measure the length, width, and height of an on-screen box by dragging a bar of single-unit square chocolate along its edges. For the fourth task, children are asked to use these measurements to calculate the total number of chocolates that can fit in the box. In the fifth task, children are told that each chocolate weighs 0.5oz., the information they are required to use to help them calculate the total weight of the box. Finally, the sixth task asks children to determine if the box can be safely transported by a boat with a weight limit of 320oz. Our AI guide support component replaces the remote human feedback support component used by Ruan et. al [16]. In addition, due to the constraints of the covid-19 pandemic situation at the time, children completed our math learning activity remotely through an online web app as opposed to in a physical lab setting. This means children complete the online activity asynchronously without the observation or interference of a researcher. We conducted a 10-minute video call with each guardianchild to confirm there was a child learner who intended to complete the task. During this video call, we emphasized to the subjects that they should complete the activity without the help of any outside resources, and guardians were asked to ensure their children completed the task without outside resources. ### AI guide support During the math learning activity, each time the AI guide is sent a message, it can take one of several actions. 1) Provide an instructional hint. Hints are specific to the task the child is currently working on and are provided in a fixed order. Each time this action is taken, the next hint is provided, and when no hints are left for the current task, the AI guide sends an appropriate message. 2) Send acknowledgment. In this case, the agent decides that no action is appropriate; the AI guide acknowledges the child's message but otherwise provides no assistance or encouragement (":)"). 3) Send encouragement. A random encouraging message from a predetermined list is sent to the user (for example, "You're doing a great job. If we keep working like this, we'll be done in no time!"). These messages were written to promote a growth mindset and excitement about the challenge of the problem without giving help to the problem itself. 4) Guided Prompt. As with normal hints, guided prompts are specific to the current task and are provided in a fixed order. 
In contrast to normal hints, the goal of guided prompts is to provide some assistance to children who do not need as much help as a standard hint provides (for example, "Try thinking about the concept of volume to solve this problem."). The AI guide only responds when spoken to with the exception of periodic "reminder" messages which remind the children that the AI guide is there (for example, "I think you've got this. But if you need help, just let me know!"). These messages are chosen randomly from a predetermined list. The goal is to provide children with social support as well as remind children to use the AI guide as a helpful resource if they become stuck. These reminders are sent every 120 seconds after user inactivity (including both speaking to the AI guide and interacting with the software). Additionally, the AI guide has a list of predicted responses that it ignores (such as "Okay") or acknowledges with "You're welcome!" (such as "Thanks") to reduce noise from natural language responses that do not require one of the above actions. In contrast to the experimental condition, there was no AI guide and no hint system present in the control condition. The AI guide responds to input from the learner. There was an automated reminder for the child to engage if no prior interactions had happened during the 120 seconds. The automated instructional policy was trained using reinforcement learning. For the first phase, the reward model uses \(\alpha=0.01,\beta=0.1,\gamma=0.3\), based on the hyperparameter choices of prior work [5] and our earlier simulations simulation. All the hints and message templates were written and uploaded through an easy-to-use teacher-facing dashboard (see Figure 10) by educators and designers without prior background in machine learning. Figure 10: Interface for teachers to write hints and prompts. ### Distribution of Grade and Pre-test Scores in Treatment and Control **In study 1**, 339 participants in grades 3-5 were recruited through Twitter, NextDoor, userinterview.com, school mailing lists, and word of mouth. Children came from 263 different schools. Of 339 participating children, 172 were boys and 167 were girls. 114 were in grade 3, 114 were in grade 4, and 111 were in grade 5. Children were randomly assigned to one of the two systems based on a predetermined ratio: 70 children used the control system and 269 children used the system with RL AI guide-mediated guidance. Gender and grade were balanced across the two conditions. There is no significant difference between the treatment and control group in study 1 on pre-test score (Cohen's d \(-0.235\), two-sample Wilcoxon rank test \(W=10782\), p-value \(=0.058\)) as well as grade (Cohen's d 0.008, two-sample Wilcoxon rank test \(W=9371.5\), p-value \(=0.9502\)). **In study 2**, 35 participants were recruited using userinterview.com and childrenhelpingscience.com. There is no significant difference between the treatment and control group in study 2 on pre-test score (Cohen's d \(-0.262\), two-sample Wilcoxon rank test \(W=175.5\), p-value \(=0.4634\)), as well as grade (Cohen's d 0.282, two-sample Wilcoxon rank test \(W=136.5\), p-value \(=0.5665\)). ### Details on Repeated Post-test Taking in Logged Data The software did not explicitly check for students repeating the pretest or post-test, and in our post-analysis, we found a few students took either the pretest or post-test test multiple times. The logging software only recorded the score of the final time the student took the test. 
For this reason, we only analyzed students who took the pretest and post-test once. In study 1, this resulted in 68 (out of 70) students in the control condition being kept in the analysis (only 2 students took either the pretest or posttest twice) and 258 (out of 269) students in the RL condition. In study 2, 18 (out of 19) students in the control condition, and 17 (out of 18) students in the RL condition were included in the analysis. We computed our results after removing these duplicate entries. ### Report on Time Spent Between Control and RL Condition On average students do often spend longer3 on the RL narrative condition task than in the control condition: Figure 5. This was consistent for students across all three groups of pretest performance, and the difference in time spent between the two conditions was largely similar for all three groups. As it was only students in the low pretest group that seem to have a significant benefit from the RL condition, it seems unlikely that time on task is the primary reason for improved performance in the RL narrative condition. Footnote 3: We excluded individuals who took longer than 90 minutes on the task in this figure since such subjects are likely to have taken breaks. All individuals who took at least 90 minutes took over 2 hours, and there were 8 such individuals excluded using this restriction. We report the time spent on the pretest, task, and post-test (assessment), in each control and experiment, in both study 1 and study 2 (see Table A1). We conduct a two-sample Wilcoxon rank test on all pairs (between study 1 and study 2). We find no significance between the two studies. ### Engagement In study 1, students with low pretest scores (scores 0-2) had an average engagement score of 2.67 (N=14, standard error = 0.23) in the control condition and an average engagement score of 3.29 (N=40, standard error = 0.11) in the RL narrative AI guide condition. In study 1, students with medium pretest scores (scores 3-5) had an average engagement score of 3.43 (N=15, standard error = 0.12) in the control condition, and an average engagement score of 3.36 (N=105, standard error = 0.05) in the RL narrative AI guide condition. In study 1, students with high pretest scores (scores 6-8) had an average engagement score of 3.24 (N=38, standard error = 0.07) in the control condition, and an average engagement score of 3.48 (N=108, standard error = 0.04) in the RL narrative AI guide condition. Three subjects in study 1 did not complete the engagement survey. In study 2, students with low pretest scores (scores 0-2) had an average engagement score of 2.71 (N=5, standard error = 0.12) in the control condition and an average engagement score of 3.28 (N=7, standard error = 0.23) in the RL narrative AI guide condition. In study 2, students with medium pretest scores (scores 3-5) had an average engagement score of 3.21 (N=6, standard error = 0.23) in the control condition, and an average engagement score of 3.28 (N=6, standard error = 0.28) in the RL narrative AI guide condition. In study 2, students with high pretest scores (scores 6-8) had an average engagement score of 3.45 (N=7, standard error = 0.17) in the control condition, and an average engagement score of 3.29 (N=4, standard error = 0.23) in the RL narrative AI guide condition. ### Implementation Details The platform consists of three major parts: a user-facing interactive website (Figure A2 for control and Figure 1 for AI guide), an admin dashboard (Figure A1), and an AI guide server. 
Both the website and the dashboard were created using Web technologies, including ReactJS [32] and TypeScript [33]. The Python-based AI guide was hosted on an AWS server and used Flask [34] as its API gateway to expose essential functions. The interactive website communicated with a GraphQL [35] API endpoint backed by Hasura Engine [35] and PostgreSQL [36]. The stored user conversation data was reflected in real-time on the admin console, where researchers could view the chat history and modify message templates. All user data was uploaded to the backend by Google App Script upon the completion of the user's session. Questionnaires and quizzes were created using Google Forms, and we used HTML iframe to embed Google Forms into the website to automatically process the form responses so as to enable real-time RL. When users interact with the AI guide, the observation space is calculated in real-time, and the AI guide performs action selection to reply to users. When users completed the post-quiz, their answers were converted to vector inputs and fed into the RL AI guide in real-time, which triggered a webhook to request the AI guide server to update its model accordingly. A complete diagram showing the interaction between the user and the RL agent is displayed in Figure 2. ### Authors' contributions S.R., A.N., W.S., J.H. J.Z., M.G., Y.L, K N. C.W., R.Y. J.L, and E.B. conducted research. S.R., A.N., W.S.,Y.L., J.L., and E.B. designed research. S.R., A.N.,W.S., and E.B. performed the analysis. S.R., W.S., A.N., and E.B. wrote the manuscript. S.R.(Author One) contributed equally to this work with A.N. (Author Two). E.B. is to whom correspondence should be addressed. E-mail: [email protected]
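Purely as an illustration of the Flask "API gateway" role described above, a minimal action-selection endpoint might look like the sketch below; the route name, payload fields, and the select_action stub are hypothetical and do not reflect the project's actual API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
ACTIONS = ["direct_hint", "acknowledgment", "encouragement", "guided_prompt"]

def select_action(observation):
    # Stand-in for the trained policy's forward pass; a real deployment would
    # load the network and sample an action from its output distribution.
    return 0

@app.route("/ai-guide/act", methods=["POST"])  # hypothetical route
def act():
    payload = request.get_json()
    observation = payload["observation"]       # the normalized state vector
    return jsonify({"action": ACTIONS[select_action(observation)]})

if __name__ == "__main__":
    app.run(port=5000)
```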
2304.03084
Dislocation density transients and saturation in irradiated zirconium
Zirconium alloys are widely used as the fuel cladding material in pressurised water reactors, accumulating a significant population of defects and dislocations from exposure to neutrons. We present and interpret synchrotron microbeam X-ray diffraction measurements of proton-irradiated Zircaloy-4, where we identify a transient peak and the subsequent saturation of dislocation density as a function of exposure. This is explained by direct atomistic simulations showing that the observed variation of dislocation density as a function of dose is a natural result of the evolution of the dense defect and dislocation microstructure driven by the concurrent generation of defects and their subsequent stress-driven relaxation. In the dynamic equilibrium state of the material developing in the high dose limit, the defect content distribution of the population of dislocation loops, coexisting with the dislocation network, follows a power law with exponent $\alpha \approx 2.2$. This corresponds to the power law exponent of $\beta \approx 3.4$ for the distribution of loops as a function of their diameter that compares favourably with the experimentally measured values of $\beta$ in the range $ 3 \leq \beta \leq 4$.
Andrew R. Warwick, Rhys Thomas, Max Boleininger, Ömer Koç, Gyula Zilahi, Gabor Ribárik, Zoltan Hegedues, Ulrich Lienert, Tamas Ungar, Chris Race, Michael Preuss, Philipp Frankel, Sergei L. Dudarev
2023-04-06T14:10:06Z
http://arxiv.org/abs/2304.03084v1
# Dislocation density transients and saturation in irradiated zirconium Andrew R. Warwick Corresponding author Rhys Thomas M. Boleininger O. Koc G. Zilahi G. Ribarik Z. Hegedues U. Lienert T. Ungar C. Race M. Preuss P. Frankel S. L. Dudarev UK Atomic Energy Authority, Culham Science Centre, Abingdon, OX14 3DB, UK Department of Materials, University of Manchester, Manchester, M13 9PL, UK Department of Materials Physics, Eötvös Loránd University, PO Box 32, H-1518, Budapest, Hungary Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607, Hamburg, Germany Monash University, Clayton, VIC 3800, Australia ###### Abstract Zirconium alloys are widely used as the fuel cladding material in pressurised water reactors, accumulating a significant population of defects and dislocations from exposure to neutrons. We present and interpret synchrotron microbeam X-ray diffraction measurements of proton-irradiated Zircaloy-4, where we identify a transient peak and the subsequent saturation of dislocation density as a function of exposure. This is explained by direct atomistic simulations showing that the observed variation of dislocation density as a function of dose is a natural result of the evolution of the dense defect and dislocation microstructure driven by the concurrent generation of defects and their subsequent stress-driven relaxation. In the dynamic equilibrium state of the material developing in the high dose limit, the defect content distribution of the population of dislocation loops, coexisting with the dislocation network, follows a power law with exponent \(\alpha\approx 2.2\). This corresponds to the power law exponent of \(\beta\approx 3.4\) for the distribution of loops as a function of their diameter that compares favourably with the experimentally measured values of \(\beta\) in the range \(3\leq\beta\leq 4\). ## 1 Introduction In the core of modern boiling (BWR) or pressurized (PWR) water reactors, the uranium dioxide fuel assemblies are immersed in circulating pressurised water and thus it is critical that only the heat produced by the fission reactions is transported by the coolant and there is no contamination of the coolant from the radioactive fuel itself. Hence, the fuel is cladded to protect the reactor environment from contamination, be that during reactor operation or in transit. From the design choices that date back over fifty years (Rickover et al., 1975), zirconium alloys are currently employed as the uranium dioxide fuel cladding in water-cooled reactors. Containing more than 95 wt\(\%\) Zr, these alloys are mostly pure zirconium, chosen for its low neutron absorption cross section (Pomerance, 1951). Small amounts of Sn, Nb, Fe and/or Cr in the alloys help protect against corrosion and improve structural integrity (Lemaignan, 2012; Onimus and Bechade, 2012). The elastic neutron scattering cross-section for a Zr nucleus is, however, similar to that of other elements in the Periodic table (Sears, 2006), and a prolonged exposure to neutron irradiation results in the accumulation of a considerable amount of microscopic radiation defects generated from the atomic recoils initiated by collisions with neutrons. This gives rise to the deterioration of mechanical and physical properties and stimulates dimensional changes (Holt, 1988; Onimus and Bechade, 2012). 
The high energy >1 MeV neutrons produced by the fissile uranium oxide fuel (Nicodemus and Staub, 1953) collide with Zr nuclei, initiating collision cascades that displace atoms from their lattice sites, rearrange the crystal structure, and generate crystal lattice defects (Domain and Legris, 2005). Defects accumulate with increasing exposure to neutron irradiation, in the form of pairs of self-interstitial and vacancy defects (Frenkel pairs) as well as in the form of clusters of defects that eventually coalesce into large-scale defects, such as dislocation loops and dislocations (Warwick et al., 2021). At temperatures above 300-350 \({}^{\circ}\)C where point defect diffusion occurs at an appreciable rate comparable or faster rates than the dose rate, it is important to consider random Brownian motion of defects to interfaces, grain boundaries, dislocations or other point defect clusters, whereas at lower temperatures the microstructure of accumulating defects is dominated by other factors. Given the significance of the life-limiting effect of structural changes on the properties of zirconium cladding, there is significant research effort aimed at improving the performance of structural zirconium alloys in the operating environment of a fission reactor (Adamson et al., 2019; Zinkle and Was, 2013). The exposure of a material to energetic particles is often quantified by the notion of _dose_, typically expressed in the units of 'displacement per atom' (dpa). Dpa is a simple measure of exposure of a material to radiation and represents the average number of times each atom in the material has been a part of a Frenkel self-interstitial-vacancy defect pair. Typically, a zirconium alloy cladding is exposed to \(\sim\)15 dpa over the five years of service (Zinkle and Was, 2013). Producing an accurate safety case involves the identification of the microstructure formed at a given dose, temperature, and externally applied load. In particular, the formation of dislocation loops and dislocations is known to play a critical role in the resulting deterioration of the cladding's structural and mechanical properties. For instance, in the large dose limit, high densities of defects and dislocations accumulate in the cladding, causing embrittlement. Another important degradation mode is the so-called 'irradiation-induced growth' (IIG) that arises from the anisotropy of the hexagonal close packed (hcp) crystal structure of zirconium (\(\alpha\)-Zr) and zirconium alloys (Griffiths, 2020; Onimus et al., 2022). This crystal structure is stable up to and beyond the reactor core's operating temperature range of 280 \({}^{\circ}\)C to 350 \({}^{\circ}\)C, exhibiting an hcp-bcc instability only at significantly higher temperatures above 860 \({}^{\circ}\)C (Willaime and Massobrio, 1989). With increasing dose, the interstitial and vacancy-type dislocation loops with the \(\frac{1}{3}\)(2110) Burgers vectors, inhabiting the prismatic crystallographic planes, form a population of the so-called '\(a\) loops'. At relatively low temperatures, this strongly correlates with observed elongation along the '\(a\)' (2110) and contraction along the '\(c\)' (0001) crystallographic directions, saturating at doses larger than 1 dpa (Holt, 1988). At temperatures above \(\sim\)300 \({}^{\circ}\)C and doses exceeding \(\sim\)5 dpa, large vacancy-type '\(c\) loops' with Burgers vectors \(\frac{1}{2}\)(0001) or \(\frac{1}{6}\)(2023) appear in the basal crystallographic planes. 
Accompanying this onset of formation of the \(c\)-type vacancy loops, the magnitudes of \(a\) and \(c\) strains increase linearly with dose in a phenomenon called 'the breakaway growth' (Choi and Kim, 2013). Whilst it is known that dislocations have substantial elastic relaxation volume (Dudarev and Ma, 2018; Boleininger et al., 2022) giving rise to significant dimensional changes (Christensen et al., 2020), the present understanding of this IIG is mostly phenomenological and existing models are unable to predict, from first principles, the variation of the dislocation content consistent with, or at least verified by, the experimental data.

Figure 1: Dislocation density as a function of dose observed experimentally using microbeam synchrotron X-ray diffraction (XRD) measurements of proton-irradiated Zircaloy-4, compared to predictions derived from simulations of pure \(\alpha\)-Zr performed using the creation relaxation algorithm technique. Four experimental samples with nominal doses of 0.1, 0.5, 1.0 and 2.0 dpa were scanned across the variable dose regime. The simulation results are scaled by a factor of 0.1 and averaged over three different interatomic potentials (see text).

Below we present experimental observations and predictions derived from simulations, showing that the density of dislocations saturates in the temperature and dose range where dimensional changes exhibit saturation. This is summarised in Figure 1 illustrating the dislocation densities experimentally measured in proton-irradiated Zircaloy-4 (Zr-1.5%Sn-0.2%Fe-0.1%Cr) together with the simulation data for irradiated \(\alpha\)-Zr plotted as a function of dose. In agreement with observations performed using ion irradiation (Yu et al., 2017), we find that the density of dislocations evolves through a transient at doses \(<\)1 dpa before saturating at larger doses. The microstructures produced by our simulations indicate that at very low doses \(\ll\)1 dpa, small dislocation loops form, which subsequently grow and coalesce into a complex interconnected dislocation network developing at moderate doses \(<\)1 dpa. The dislocation network eventually forms complete crystallographic planes (Mason et al., 2020; Boleininger et al., 2023) and partly dissociates into large dislocation loops at high doses \(\gg\)1 dpa. The formation of dislocation loops and dislocations is principally driven by the stress-mediated clustering of self-interstitial atoms, where in the high dose limit the self-interstitial cluster population size distribution follows a power law \(p(N)\propto 1/N^{2.2}\) developing in the saturation regime corresponding to high neutron exposure. Below, we show that this fact, also confirmed by experimental observations (Ungar et al., 2021), has important implications for the interpretation of experimental observations of dislocation loops and for our understanding of the dynamics of microstructural evolution in the irradiated cladding. Our manuscript is structured as follows: we detail our methods in §2 and present our results in §3 before summarising key conclusions in §4. The concept and relevant definitions of dislocation density are discussed in §2.1. Details of our experimental set-up and an overview of the cmwp line profile analysis software are given in §2.2, followed by an explanation of our simulation method, its range of validity and our choice of settings in §2.3. 
We show how the simulated microstructure correlates to the dislocation density profile shown in Figure 1 in §3.1, in addition to characterising the evolution of the power law distribution of dislocation sizes. Finally, the stored energy associated with dislocation content that drives the changing microstructure as a function of dose is investigated in §3.2.

## 2 Theory and Methodology

### Dislocation density

There are multiple ways to define the density of dislocation lines \(\rho\) in a deformed or irradiated material. For all the measurements and computations in this study, \(\rho\) is defined as a _scalar_ ratio of the total dislocation line length to volume \(V\) containing the dislocations, namely (Hull and Bacon, 2011) \[\rho:=\frac{1}{V}\int_{\perp\in V}|\mathrm{d}\mathbf{l}|. \tag{1}\] Here \(\mathbf{l}\) is a position vector on a dislocation line such that \(\mathrm{d}\mathbf{l}=\boldsymbol{\xi}\,\mathrm{d}s\) for unit tangent vector \(\boldsymbol{\xi}\) and arc length \(s\), and the integration is performed with respect to the arc length over all the dislocation lines \(\perp\) in \(V\). The choice of volume is somewhat arbitrary but, for a given resolution, is expected to reflect the average amount of dislocations present. For example, in a TEM micrograph this can be chosen to be a region contained in the image and in a molecular dynamics simulation one may use the entire simulation cell. Another possible definition is the areal density \(\rho_{A}\) that measures the number of dislocation lines crossing an open surface as (Hull and Bacon, 2011) \[\rho_{A}:=\frac{1}{A}\int_{\perp\in A}|\mathrm{d}\mathbf{S}\cdot\boldsymbol{\xi}|, \tag{2}\] where \(\mathrm{d}\mathbf{S}\) is a vector area element of \(A\) with direction normal to the surface and, similar to Equation 1, there is a dependence on the choice of the surface. If all the dislocations are co-linear and perpendicular to the chosen surface, Equations 1 and 2 produce the same value. Whilst for an arbitrary distribution of dislocations this is often not the case, generally both measures do not differ by much more than a factor of two (Schoeck, 1962). Needless to say, a dislocation is not solely defined by its tangent vector and it is noteworthy that neither Equation 1 nor Equation 2 contain any information about the Burgers vector \(\mathbf{b}\) of the dislocation. Significant physical quantities such as dislocation energy and the Peach-Koehler force both depend on \(\mathbf{b}\). The Nye tensor \(\boldsymbol{\alpha}\) is a _tensorial_ measure of dislocation density that is a linear function of lattice curvature (Nye, 1953) and is also a function of position \(\mathbf{x}\) such that (Jones et al., 2016) \[\boldsymbol{\alpha}(\mathbf{x}):=\int_{\perp}\delta\left(\mathbf{x}-\mathbf{l}\right)\mathbf{b}(\mathbf{l})\otimes\mathrm{d}\mathbf{l}, \tag{3}\] where \(\otimes\) denotes the tensor product, integration is performed over all of the dislocation lines in the system, and \(\delta\left(\mathbf{x}\right)\) is the Dirac delta distribution defined by the property \[\int\mathrm{d}^{3}\mathbf{x}^{\prime}\,\delta(\mathbf{x}^{\prime}-\mathbf{x})f(\mathbf{x}^{\prime})=f(\mathbf{x}), \tag{4}\] for an arbitrary well-behaved function \(f(\mathbf{x})\). Whilst full information about the dislocation content is contained in Equation 3, attempting to average \(\boldsymbol{\alpha}\) over a volume \(V\) can be problematic. 
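In practice, the scalar measure of Equation 1 is straightforward to evaluate for a discretised set of dislocation lines, as the short sketch below illustrates; the segment coordinates and box volume are hypothetical placeholders rather than data from this study.

```python
import numpy as np

def dislocation_density(lines, volume):
    """Scalar dislocation density (Eq. 1): total line length / volume.

    `lines` is a list of polylines, each an (n, 3) array of points (in m)
    tracing one dislocation line inside the analysis volume `volume` (in m^3).
    """
    total_length = 0.0
    for pts in lines:
        segments = np.diff(pts, axis=0)  # vectors between consecutive nodes
        total_length += np.linalg.norm(segments, axis=1).sum()
    return total_length / volume

# Hypothetical example: two short dislocation lines in a (100 nm)^3 box.
box_volume = (100e-9) ** 3
lines = [
    np.array([[0.0, 0.0, 0.0], [50e-9, 0.0, 0.0]]),                        # straight 50 nm segment
    np.array([[0.0, 0.0, 0.0], [20e-9, 20e-9, 0.0], [40e-9, 0.0, 0.0]]),   # kinked line
]
print(f"rho = {dislocation_density(lines, box_volume):.3e} m^-2")
```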
Essentially, this stems from the fact that the integral of \(\mathrm{d}\mathbf{l}\) along a dislocation segment contained in \(V\) that starts and ends at \(\mathbf{x}_{a}\) and \(\mathbf{x}_{b}\) respectively is \(\mathbf{x}_{b}-\mathbf{x}_{a}\). Thus, any information pertaining to curvature of a dislocation line is lost and closed paths in particular, _i.e._ dislocation loops, provide no contribution to the volume average (Arsenlis and Parks, 1999; Mandadapu et al., 2014). The dislocation sections that integrate to zero are referred to as Statistically Stored Dislocations (SSD) and the surviving contributions are the Geometrically Necessary Dislocations (GND). Experimental techniques that infer GND content, such as micro-beam Laue measurements and high resolution electron back-scattered diffraction, implicitly make use of Equation 3 (Das et al., 2018). As it is difficult to characterise the entire population of dislocations using Equation 3, throughout this manuscript we have chosen to use the scalar measure given by \(\rho\) in Equation 1, which is the same definition as that employed in our X-ray line profile analysis.

### Experiment

Dislocation densities were measured in four \(3\,\mathrm{mm}\times 1\,\mathrm{mm}\times 0.5\,\mathrm{mm}\) Zircaloy-4 samples (composition Zr-0.17Fe-1.24Sn-0.10Cr) proton-irradiated to different doses. The samples possessed a recrystallised equiaxed microstructure with a low dislocation density of \(<1\times 10^{14}\,\mathrm{m}^{-2}\) and a characteristic 'split-basal' texture due to processing, where the basal poles are aligned along the normal direction (ND), with a \(\pm\)30 degree tilt towards the transverse direction. Irradiation induced growth strains in similarly textured Zircaloys are known to saturate at doses below \(\sim\)10 dpa at 320 \({}^{\circ}\)C (Adamson et al., 2019) and thus we may expect dislocation densities to display a similar pattern of evolution in these samples. The ND face of each sample was proton irradiated with 2 MeV protons at 350 \({}^{\circ}\)C at the University of Manchester's Dalton Cumbrian Facility, UK. The temperature of the samples during irradiation was monitored _via_ a thermal imaging camera in order to hold it within \(\pm\)10 \({}^{\circ}\)C of the target temperature. Unlike neutrons, the Coulomb interaction between the protons and the target material results in the shallow penetration of protons into the material. The resulting radiation exposure, quantified by the dose and dose rate, varies significantly as a function of depth, with the dose rate being of the order of 10\({}^{-5}\) dpa/s. The dose profile was calculated using the quick Kinchin-Pease setting in srim (Ziegler et al., 2010) with the lattice binding energy and threshold displacement energy set to 0 eV and 40 eV respectively (Stoller et al., 2013). A typical dose _vs._ depth profile in one of our Zircaloy-4 samples consists of a plateau region extending \(\sim\)10 \(\mu\)m from the surface where the dose and dose rate are approximately constant before sharply rising and falling to zero at a region corresponding to protons coming to rest in the material, called the Bragg peak and located at \(\sim\)30 \(\mu\)m. 
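Relating a quantity measured at a given depth to the local dose amounts to interpolating the calculated dose-depth profile at the measurement positions. The sketch below illustrates this mapping with entirely hypothetical srim-style profile values, not the actual profile used in this work.

```python
import numpy as np

# Hypothetical dose-vs-depth table (depth in um, dose in dpa), mimicking the
# shape of a srim profile: a plateau followed by a Bragg peak and a cut-off.
depth_um = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 28.0, 30.0, 32.0])
dose_dpa = np.array([0.60, 0.62, 0.65, 0.75, 0.95, 1.60, 4.00, 5.00, 0.00])

def dose_at(depths_um):
    """Linearly interpolate the local dose at the requested depths."""
    return np.interp(depths_um, depth_um, dose_dpa)

# Map measurement positions (e.g. 2 um scan steps) onto local dose values,
# which is how a density-vs-depth scan becomes a density-vs-dose curve.
scan_positions = np.arange(0.0, 30.0, 2.0)
print(np.round(dose_at(scan_positions), 2))
```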
The samples were irradiated such that the doses at 60% of the Bragg peak depth from the surface, termed 'nominal doses', were 0.1, 0.5, 1 and 2 dpa respectively. Within the first 30 \(\mu\)m of each sample from the irradiated surface, the calculated dose and dose rate vary from their nominal values by factors of 0.6 to 8.5, thus allowing us to measure data spanning over a wide range of irradiation exposures. Using a small X-ray beam of \(2\,\mu\mathrm{m}\times 100\,\mu\mathrm{m}\) cross section, the samples were scanned in cross-section from the surface to a depth of 50 \(\mu\)m within the sample at 2 \(\mu\)m increments at the P21.2 beamline at the PETRA III synchrotron facility at DESY in Hamburg, Germany. The samples were translated perpendicular to the scanning direction by 200 \(\mu\)m during each scan to improve grain statistics and reduce the spottiness of the pattern. The set-up of the diffraction experiment and sample geometry is shown in Figure 2.

Figure 2: (a): Experimental geometry for measuring line profiles from Zircaloy-4 samples on the P21.2 beamline at the DESY synchrotron, Hamburg, Germany. (b): Variation of dislocation density and dose as a function of depth for the nominal 0.1 dpa sample.

Dislocation densities were extracted from the line profiles using the Convolutional Multiple Whole Profile (cmwp) software (Ribarik et al., 2020). The cmwp software models the line profile intensity \(I(q)\), where \(q\) is the wavevector magnitude, as a convolution of intensities arising from instrumental effects, size broadening, and, in particular, strain broadening due to dislocations. Instrumental effects were determined using a \(\mathrm{LaB}_{6}\) standard specimen, and the size broadening was determined assuming a log-normal size distribution of coherently scattering grains (Ribarik et al., 2020). Here, we outline the model underpinning the cmwp software, whereas for a broader context we refer an interested reader to reviews of the method (Wilkens, 1970; Ungar et al., 1999; Ribarik et al., 2020). In the theory of X-ray line profile analysis, the Fourier components of the broadened intensity peak profile corresponding to a reciprocal lattice vector \(G\), denoted \(I_{G}^{D}\), are related to the strain distribution by \[\mathcal{F}\left\{I_{G}^{D}(q)\right\}(L)=\exp\left[-2\pi^{2}G^{2}L^{2}\langle e_{G,L}^{2}\rangle\right], \tag{5}\] where \(L\) is the Fourier variable and \(\langle e_{G,L}^{2}\rangle\) is the mean-square strain. Wilkens (1970) derived an expression for \(\langle e_{G,L}^{2}\rangle\) by numerical methods arising from the 'restrictedly random distribution' concept of dislocations. Dislocation lines are assumed to be parallel in the sub-areas of equal size \(A\) perpendicular to the line direction. A number \(N_{\perp}\) of dislocation lines with equal numbers of positive and negative Burgers vectors occupy random positions in a plane normal to the dislocation lines. The characteristic linear size of the sub-areas is chosen to be proportional to a parameter termed the effective outer cut-off radius \(R_{e}\). The dislocation density \(\rho\) is then defined by Wilkens (1970) as the areal density of dislocations given by Equation 2 that, as discussed in §2.1, is equivalent to the volume density of dislocations defined by Equation 1 for this specific dislocation configuration. 
The dipole character of the distribution is determined by the arrangement parameter \(M=R_{e}\sqrt{\rho}\). Whilst \(R_{e}\) is one of the fitting parameters in the cmwp software, thus affecting the value of \(M\), this article is principally concerned with the determination of \(\rho\) and thus we do not discuss the arrangement parameter further. An expression for the mean-square strain for a restricted random distribution of dislocations was derived by Wilkens (1970) \[\langle\epsilon_{G,L}^{2}\rangle=\frac{\rho Cb^{2}}{4\pi}f(\eta), \tag{6}\] where \(C\) is a parameter that is refined by the profile fitting cmwp algorithm, termed the dislocation contrast factor. The value of \(C\) was evaluated for dislocations in \(\alpha\)-Zr by Balogh et al. (2016). The Wilkens function \(f(\eta)\), where \(\eta=L/R_{e}\), has the following asymptotic forms in the limit of small and large \(\eta\), respectively \[f(\eta)\sim\begin{cases}\ln\eta&,\ \eta\to 0\\ \frac{1}{\eta}&,\ \eta\to\infty.\end{cases} \tag{7}\] The numerically obtained formula for \(f(\eta)\) may be found in Wilkens (1970). Although the Wilkens model is formally derived assuming that a dislocation configuration is composed of straight lines, the model is in fact able to mimic the statistical properties of distributions of curved dislocations (Kamminga and Delhez, 2000; Groma and Borbely, 2004). When compared to transmission electron microscope (TEM) measurements of irradiated Zircaloy-2, the cmwp software was able to accurately follow the dislocation density evolution as a function of dose (Seymour et al., 2017), and the cmwp approach has now become an accepted tool for determining dislocation densities (Ungar et al., 2021; Topping et al., 2018) as well as other microstructural features (Ungar et al., 2021) in irradiated Zircaloy. The cmwp software evaluates parameters describing effects of both the specimen size and dislocation broadening of diffraction intensity peaks by first employing a statistical Monte Carlo optimisation followed by the Marquardt-Levenberg non-linear least squares algorithm, see Ribarik et al. (2020). All the peaks in the interval \(0\,\mathrm{nm}^{-1}<q<13\,\mathrm{nm}^{-1}\) were included in the fitting procedure and the uncertainty in \(\rho\) was quantified according to the quality of fit as described by Ribarik et al. (2020). The variation of dislocation density with depth calculated by cmwp is shown in Figure 2b which, when mapped to the dose calculated by srim at a given depth, enables plotting the dislocation density as a function of dose as illustrated in Figure 1.

### Simulation

The accumulation of defects and microstructural evolution as a function of dose was simulated using the Creation Relaxation Algorithm (CRA) (Derlet and Dudarev, 2020). The CRA exploits the separation of timescales associated with relatively fast stress driven and comparatively slow thermally activated evolution of defect microstructure. This results in a simple algorithm where, starting with a perfect crystal structure of \(\alpha\)-Zr, the Frenkel pairs of defects are created at random and the microstructure is subsequently relaxed _via_ direct energy minimisation such that the system evolves purely through the action of internal stresses arising from the generation of defects. The dose is measured in units of 'canonical displacement per atom' (cdpa) computed as the ratio of the total number of Frenkel pairs generated by the algorithm to the number of atoms in the system. 
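The CRA loop itself is simple enough to sketch in pseudocode form. The fragment below is only a schematic of the procedure described above, written against a hypothetical `Crystal` object with placeholder `insert_frenkel_pair` and `minimise_energy` helpers rather than the interface of any specific molecular dynamics code.

```python
def creation_relaxation(crystal, target_cdpa, pairs_per_step=1):
    """Schematic Creation Relaxation Algorithm (CRA) driver.

    `crystal` is assumed to expose:
      - n_atoms               : number of atoms in the cell
      - insert_frenkel_pair() : move a randomly chosen atom to a random
                                interstitial position, leaving a vacancy behind
      - minimise_energy()     : relax all atomic positions at fixed cell
    These are placeholders standing in for calls to an MD code such as lammps.
    """
    n_pairs = 0
    cdpa = 0.0
    while cdpa < target_cdpa:
        for _ in range(pairs_per_step):
            crystal.insert_frenkel_pair()   # athermal defect creation
            n_pairs += 1
        crystal.minimise_energy()           # stress-driven relaxation only
        cdpa = n_pairs / crystal.n_atoms    # canonical displacements per atom
    return cdpa
```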
CRA simulations assume that vacancies remain effectively immobile, in turn also immobilising the dislocation part of the microstructure (Arakawa et al., 2020), leaving the internal fluctuating stresses as the only remaining factor driving the migration and clustering of self-interstitial atom defects. In (Warwick et al., 2021) we identified the approximate temperature and dose rate range where the simulation method retains its validity when applied to \(\alpha\)-Zr. The IIG strains observed in neutron irradiated zirconium alloys with initially low dislocation densities tend to saturate with increasing dose at temperatures less than \(\sim\)300 \({}^{\circ}\)C, see Adamson et al. (2019). At temperatures above \(\sim\)300 \({}^{\circ}\)C, saturation persists over shorter intervals of dose before the strain magnitudes start increasing linearly as a part of the breakaway growth phenomenon (Holt, 1988). A significant change of pattern of thermal evolution has also been found above \(\sim\)300 \({}^{\circ}\)C in proton irradiated Zircaloy-2 when samples irradiated to 2 dpa were annealed for 1 h at various temperatures (Topping et al., 2018). The X-ray line profile measurements performed by Topping et al. (2018) showed that the \(a\)-loop density significantly decreased only at temperatures above \(\sim\)300 \({}^{\circ}\)C. These data offer a valuable insight into the timescales on which thermally activated processes, including vacancy migration, drive the evolution of _heavily_ irradiated zirconium. Earlier (Warwick et al., 2021), noting that the rates of thermally activated processes follow the Arrhenius law (Vineyard, 1957; Landauer and Swanson, 1961; Allnatt and Lidiard, 1993), we showed that the annealing experiment data imply that the characteristic activation energy \(E_{a}\) for the processes primarily responsible for the observed thermally activated behaviour must be close to \(\sim\)2 eV. Also, as described in §2.2, the proton-irradiation defect production dose rate \(\dot{\phi}\) at all depths in our experiments is high and close to \(10^{-5}\) dpa s\({}^{-1}\). Given this high dose rate, we can estimate an upper bound \(\tilde{T}\) on the range of temperatures where the rate of migration of defects stimulated directly by irradiation is higher than the rate of thermally activated migration of defects. For a given activation energy \(E_{a}\), using the dose rate model by Nordlund et al. (2018), we find that the two rates are comparable if \[\dot{\phi}\left(\frac{2E_{d}}{E_{a}}\right)\approx\nu\exp\left(-\frac{E_{a}}{k_{B}\tilde{T}}\right), \tag{8}\] where \(E_{d}\) is the threshold displacement energy required for forming a defect, the attempt frequency is \(\nu\approx\omega_{D}/2\pi=5.84\times 10^{12}\) s\({}^{-1}\), given the Debye frequency \(\omega_{D}=3.67\times 10^{13}\) s\({}^{-1}\) (Zarestky, 1979) and \(k_{B}=0.861\times 10^{-4}\) eV/K is the Boltzmann constant. Taking \(E_{a}=2\) eV and \(E_{d}=40\) eV, and solving equation (8) for \(\tilde{T}\), we find \(\tilde{T}=625\) K \(\approx\) 350 \({}^{\circ}\)C. Below this temperature, the rate of thermal relaxation of microstructure is lower than the rate at which defects are driven by irradiation. Hence, at temperatures below \(\tilde{T}\), the defect structures generated by irradiation evolve predominantly through fast athermal stress relaxation (Derlet and Dudarev, 2020). The above estimate of \(\tilde{T}\sim 350\) \({}^{\circ}\)C is close to the temperature at which Topping et al. 
(2018) observed the occurrence of a significant change in the thermal response of microstructure during annealing. Notably, the change of pattern of breakaway growth at \(\sim\)300 \({}^{\circ}\)C noted by Holt (1988) was observed at significantly lower dose rates than those characterising our proton irradiation experiments. Hence, the above temperature must reflect the fundamental scale of activation energies associated with microstructural evolution of Zircaloy under irradiation. The migration energy of individual vacancies in pure elemental \(\alpha\)-Zr (Varvenne et al., 2014) of \(\sim\)0.5 eV is too low to account for the observed behaviour, and while the thermal diffusion of vacancies and other point defects affects microstructural evolution, reducing the overall concentration of defects noted in our analysis, experimental observations indicate the presence of a rate-limiting process with a higher activation energy that stabilises the observed dense defect microstructures at temperatures as high as 300 \({}^{\circ}\)C. High activation energies are known to be associated with the formation of immobile vacancy-impurity clusters involving carbon or nitrogen (Fu et al., 2008; Terentyev et al., 2014; Theodorou et al., 2022). In bcc iron, the _effective_ migration energy of vacancies is defined by the energy of dissociation of a cluster involving a vacancy and a carbon dimer (Paxton, 2014), and this dissociation energy can be as high as 2.22 eV (Kabir et al., 2010), far higher than the activation energy of 0.55 eV characterising vacancy migration in pure elemental Fe (Fu et al., 2005). Given that the characteristic formation and migration energies of defects in Zr and Fe are nearly the same (Dudarev, 2013), the high effective activation energy of the order of 2 eV seen in experiments on Zr likely results from an impurity effect similar to that found in Fe. As noted by Kabir et al. (2010), at relatively low temperatures the vacancy-carbon dimer complexes are immobile, making the dissociation temperature of these complexes one of the key parameters determining the response of a material to radiation exposure. CRA simulations (Derlet and Dudarev, 2020) or the simulations involving the production of defects by successive collision cascade events (Mason et al., 2021; Granberg et al., 2023; Boleininger et al., 2023) do not imply the absence of mobility of defects. Self-interstitial atom defects exhibit non-Arrhenius mobility (Dudarev, 2008) and their motion is strongly affected by elastic strain fields (Dudarev et al., 2010), resulting in the rapid clustering of these defects into interstitial dislocation loops and, subsequently, into a dense entangled network of dislocations (Derlet and Dudarev, 2020; Boleininger et al., 2022). The latter forms spontaneously at doses above approximately 0.3 dpa (Mason et al., 2020). The clustering of self-interstitial defects into dislocation loops and dislocations stems from the fact that this is a highly energetically favourable process, releasing up to \(E_{SIA}^{f}\approx 3\) eV per self-interstitial coalescence event (Domain and Legris, 2005; Dudarev, 2013). The fact that it is the SIA formation energy, fundamentally related to the strong elastic interaction between the self-interstitial defects, that drives the evolution of microstructure at relatively low temperatures, rather than the diffusion of self-interstitials _per se_, is confirmed by finite-temperature simulations by Chartier and Marinica (2019). 
The simulations were performed at 300 K and hence included the thermal diffusion of self-interstitial atom defects, but still exhibited the same pattern of evolution as that predicted by the CRA simulations (Derlet and Dudarev, 2020). This is confirmed by experimental observations by Wang et al. (2023) showing trends similar to those found in simulations, even though in tungsten the diffusion of self-interstitial defects occurs at temperatures as low as 27 K (Ehrhart et al., 1991; Ma and Dudarev, 2019). The above analysis of _ab initio_ data and experimental information shows that, in zirconium alloys, the temperature interval over which the dynamics of microstructural evolution changes from the low-temperature mode dominated by microscopic stress fluctuations (Derlet and Dudarev, 2020) to the high-temperature mode dominated by Arrhenius thermally activated diffusion (Allnatt and Lidiard, 1993) spans approximately 300 \({}^{\circ}\)C to 400 \({}^{\circ}\)C, as illustrated particularly well by Fig. 8 from Topping et al. (2018). The qualitative picture of microstructural evolution of zirconium irradiated by high-energy protons can now be summarised as follows. Proton irradiation produces relatively low energy recoils, generating defects in the form of Frenkel pairs or small defect clusters (Boleininger et al., 2023). Self-interstitial atom defects coalesce into dislocation loops and a dislocation network, whereas vacancies either diffuse and recombine with the interstitial dislocation loops or extended dislocation structures, or form immobile vacancy-impurity clusters (Kabir et al., 2010). These vacancy-impurity clusters immobilise and stabilise the dislocation microstructure (Arakawa et al., 2020), but dissociate in the temperature interval from 300 \({}^{\circ}\)C to 400 \({}^{\circ}\)C. Over this temperature interval, the mode of microstructural evolution changes from that dominated by stress fluctuations and coalescence of self-interstitial defects to the mode dominated by vacancy diffusion. Over the same interval of temperatures, the IIG changes from a mode exhibiting saturation to that of runaway growth. Our observations exhibit the formation of a dense dislocation network, suggesting that the experimental conditions correspond to the low-temperature rather than the high-temperature mode of microstructural evolution. The selection of a simulation approach below reflects and recognises this fact. An algorithm for modelling microstructural evolution at higher temperatures has to include the treatment of microscopic stress fluctuations as well as diffusion and interaction of vacancies and impurities. The development of such an algorithm remains a challenge for future studies. The CRA was implemented in the molecular dynamics program lammps (Plimpton, 1995)\({}^{1}\). For this study, unless stated otherwise, we present results averaged across all three Embedded Atom Method (EAM) potentials developed in Ref. (Mendelev and Ackland, 2007), as is the case in Figure 1. Whilst there are variations between the potentials with respect to their predicted formation energies and elastic properties of self-interstitials and vacancies (Mendelev and Ackland, 2007; Varvenne et al., 2014; Varvenne and Clouet, 2017), we have found that all three potentials qualitatively produce the same macroscopic dimensional changes and microstructural evolution under the CRA (Warwick et al., 2021). 
Furthermore, a similar study also employed the CRA on Zr and predicted the same trends with a different potential (Tian et al., 2021).

Footnote 1: [https://lammps.sandia.gov](https://lammps.sandia.gov), 3 Mar and 29 Oct 2020 stable builds.

Our simulations employ periodic boundary conditions and supercells containing \(\sim\)2M and \(\sim\)10M atoms, with the cell edges parallel to the [2\(\bar{1}\bar{1}0\)], [\(\bar{1}\bar{2}\bar{1}0\)] and [0001] directions. Energy minimisation was performed using a combination of the conjugate gradient and FIRE algorithms (Bitzek et al., 2006) such that the relaxed force on any atom was smaller than 1 meV \(\mathrm{\AA}^{-1}\). In the interest of computational efficiency, the simulation cell shape and size were kept fixed during relaxation. Whilst these boundary conditions result in a macroscopic internal stress of \(\sim\)1 GPa, the dimensional changes that would occur if the cell shape relaxed may nevertheless be accurately computed using linear elasticity theory; furthermore, the microstructure resulting from relaxing under zero pressure is similar to that under zero strain (Warwick et al., 2021; Tian et al., 2021). Dislocations were identified directly from atomic positions using the Dislocation eXtraction Algorithm (DXA) (Stukowski et al., 2012). This is achieved by assigning crystal structure types to each atom using common neighbour analysis (Faken and Jonsson, 1994). Given the crystal structure of the hcp reference crystal, Burgers circuits are drawn around regions containing atoms assigned to non-hcp crystal structure in order to compute Burgers vectors and dislocation lines. Dislocation densities are computed according to Equation 1. In order to enable a closer comparison between our simulations and line profile analysis experiments, we simulated the intensity profile of powder diffraction patterns for all of the CRA microstructures using the Debye equation (Debye, 1915) where the intensity of scattered X-rays is proportional to \[I(q)=\sum_{i,j}\text{sinc}\left(2\pi qr_{ij}\right), \tag{9}\] for wavevector magnitude \(q\), and the sum runs over all the pairs of atoms positioned at \(\mathbf{r}_{i}\) and \(\mathbf{r}_{j}\) separated by distance \(r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|\). Furthermore, in Equation 9 it is assumed that the atomic form factor is the same for all atoms in the system. The time required for computing all the pairwise distances \(r_{ij}\) for a system of \(N\) atoms scales unfavourably as \(N^{2}\) and thus we parallelised the task. The line profile was calculated without using periodic boundary conditions and thus the powder was treated as if it were composed of randomly oriented nano-grains as large as the simulation box. \(I(q)\) was computed over the domain \(3\,\mathrm{nm}^{-1}\leq q\leq 13\,\mathrm{nm}^{-1}\) spanning all peaks up to the \(\{22\bar{4}0\}\) reflections and the wavenumbers were sampled every \(2\times 10^{-3}\,\mathrm{nm}^{-1}\). Data were visualised and processed with the ovito (Stukowski, 2010) and paraview (Ahrens et al., 2005) software packages. For more details, please refer to our recent publication (Warwick et al., 2021).

## 3 Results and Discussion

### Dislocation structure and distribution

The peak and saturation of dislocation density shown in Figure 1 can be readily understood by inspecting the spatial configuration of dislocations generated by our simulations. 
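As a reference point for Equation 9 above, a direct (unoptimised) evaluation of the Debye scattering sum for a small set of atomic coordinates can be written in a few lines; the coordinates below are placeholders and the atomic form factor is taken as unity, as assumed in the text.

```python
import numpy as np

def debye_intensity(positions, q_values):
    """Debye scattering equation (Eq. 9): I(q) = sum_ij sinc(2*pi*q*r_ij).

    `positions` is an (N, 3) array of atomic coordinates in nm and `q_values`
    an array of wavevector magnitudes in nm^-1.  np.sinc(x) = sin(pi*x)/(pi*x),
    so sinc(2*pi*q*r) is obtained by passing 2*q*r to np.sinc.
    """
    diffs = positions[:, None, :] - positions[None, :, :]
    r_ij = np.linalg.norm(diffs, axis=-1)               # pairwise distances (N, N)
    q = np.asarray(q_values)[:, None, None]
    return np.sinc(2.0 * q * r_ij).sum(axis=(1, 2))     # includes the i == j terms

# Placeholder cluster of atoms and a coarse q grid over 3-13 nm^-1 as in the text.
rng = np.random.default_rng(0)
atoms = rng.uniform(0.0, 2.0, size=(50, 3))
q_grid = np.linspace(3.0, 13.0, 201)
intensity = debye_intensity(atoms, q_grid)
```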
The nature of defects evolving through internal stresses in the CRA results in dislocations being almost exclusively formed by the agglomeration of self-interstitial atoms whilst vacancies remain immobile and generally form small clusters containing \(\mathcal{O}(10)\) vacancies that are not large enough to relax into dislocation loops. We do find a small number of much larger clusters containing \(\mathcal{O}(100)\) vacancies. However, these are fundamentally interstitial in origin as the large vacancy clusters are nothing but vacant spaces in the crystallographic planes formed by the self-interstitial defects, see Mason et al. (2020); Boleininger et al. (2023).

Figure 3: Dislocation microstructures calculated by the CRA simulations using the MA1 potential with 10M atoms. Green, orange and red dislocations indicate \(\langle 2\bar{1}\bar{1}0\rangle,\langle 1\bar{1}00\rangle\) and unusual Burgers vectors, respectively. Dots are self-interstitial atoms identified with the Wigner-Seitz analysis and boundaries of interstitial clusters are rendered as transparent surfaces. 3a: Self interstitial point defects and small dislocation loops at very low dose 0.02 cdpa. 3b: Dense dislocation network at low dose 0.13 cdpa. 3c: Extended dislocation network at high dose 2.00 cdpa with percolating interstitial cluster.

The renders shown in Figure 3 suggest that the dislocation structure evolves in three stages. At low dose (Figure 3a), small loops form before coalescing into a dislocation network (Figure 3b) at which point the dislocation density saturates. At high doses (Figure 3c), full interstitial-type atomic planes form and the dislocation network fragments into loops, resulting in a drop in the dislocation density. The visualised microstructures were rendered from simulations employing the MA1 potential. When comparing the interatomic potentials, we discovered that the MA2 and MA3 potentials produce large populations of twinned regions (Warwick et al., 2021). It seems likely that these are artefacts of the potentials since such defects are not commonly observed in experiment. As was noted by Warwick et al. (2021), for the MA3 potential in particular, a large proportion of dislocations coalesce into these twinned regions whose volume fraction also features a transient peak followed by saturation. The twinned regions are composed of dense arrays of dislocations and thus the pattern of evolution of dislocation structures and saturation in density is common to all three potentials. Thus, the microstructures derived from MA1 simulations are presented in order to avoid needlessly complicating our discussion. CRA simulations tend to overestimate the defect content in the high dose limit (Boleininger et al., 2023), however the qualitative trends observed under a variety of conditions are predicted accurately (Mason et al., 2020, 2021; Warwick et al., 2021). This overestimation arises from the lack of re-crystallisation induced by collision cascades. Producing the primary knock-on atoms with kinetic energies close to the threshold displacement energy results in defect densities very close to those predicted by the CRA whilst much larger recoil energies result in defect densities lower by approximately a factor of 10, see Boleininger et al. (2023). Thus, in agreement with the analysis by Mason et al. (2020, 2021); Boleininger et al. 
(2023) where the defect content was independently assessed using the experimentally measured deuterium retention, we have scaled down the dislocation density in Figure 1 by a factor of 10 to enable a direct comparison of our experimental and simulation data. After applying this scaling it can be seen that the CRA indeed predicts a qualitatively accurate variation of the dislocation density profile. The occurrence of a transient peak of dislocation density at a moderate dose was also noted earlier in simulations of iron and tungsten (Chartier and Marinica, 2019; Derlet and Dudarev, 2020). In order to enable a closer comparison with experiment we simulated a powder diffraction line profile for all of our CRA microstructures as described in §2.3. Figure 4 illustrates the evolution of the \(\{1\bar{1}02\}\) diffraction peak profile as a function of dose, reflecting the eventual saturation of the microstructure in the peak profile seen in the limit of high dose. We observe that in the transient regime, the peak intensity drops before rising and settling at a steady value. Furthermore, the peak broadens at high dose and shifts to higher scattering angles, indicating lattice compression. The formation of extra atomic planes due to the coalescence of dislocation loops does not result in the volumetric expansion but instead causes lattice compression because the simulations are performed at constant cell shape and size. Under zero pressure boundary conditions, we expect the peak centre to shift to lower wave-numbers instead.

Figure 4: Simulated \(\{1\bar{1}02\}\) X-ray diffraction peak profile as a function of dose computed from the MA1 microstructures. The variation in the intensity of the peak and its centre shift is highlighted by the red path. Application of the cmwp software to our simulated line profiles also shows that the dislocation density saturates as a function of dose, approaching a limiting value of \(\sim 10^{15}\) m\({}^{-2}\).

Saturation of the peak profile has been quantified by extracting the dislocation density from the data using the cmwp software. cmwp is known to infer dislocation densities that are larger than those determined from TEM images and this has been attributed to the ability of X-ray line profile analysis to resolve small loops in power-law distributed populations of dislocation loops (Ungar et al., 2021; Ungar et al., 2021). Interestingly, we find that the dislocation densities computed by cmwp and DXA nevertheless differ by approximately an order of magnitude although the character of variation of the observed and simulated dislocation densities as functions of dose is the same. We note that the difference between the cmwp software and DXA results brings attention to an important question: at what size is a dislocation loop too small to be counted as a dislocation object? Usually, dislocations are considered to be the sources of long-range strain fluctuations responsible for the X-ray peak line profile broadening detected in irradiated materials (Wilkens, 1970). On the other hand, small dislocation loops produce shorter-range strains and in the far-field limit they are equivalent to point defects, with the associated strain fields resulting in the Huang diffuse scattering, producing a relatively uniform increase in the scattered intensity in X-ray diffraction patterns (Simmons and Baluffi, 1958). 
Furthermore, when a dislocation loop is so small that the loop diameter is comparable to the core width of a dislocation, then the loop is mostly composed of core atoms and its structure can no longer be described using conventional elasticity (Boleininger et al., 2018; Boleininger and Dudarev, 2019). Thus, one would expect the resulting strains to be unlike those associated with linear elastic fields of dislocation loops and the core effects to be significant. Modelling the strain broadening effects associated with small dislocation loops requires further analysis and we defer it to a future study. A large proportion of the dislocation objects found in the simulated microstructures are so small that they should be treated as point defects by CMWP and would also not be easily detected in TEM images (Zhou et al., 2006). Determining the nature of such defects could be highly relevant to identifying mechanisms that cause complex high dose phenomena such as breakaway growth. Size distributions of dislocation loops are often investigated in experiment (Yi et al., 2015) and so, for the purposes of comparison and characterising the spatial arrangement of dislocations, we examined the distribution of defect cluster (dislocation loop) sizes. As the dislocations in this simulation are mostly of interstitial type, we can gain insight into the statistics of dislocation structures by examining the statistics of interstitial defect clusters. Figure 5a shows the frequency distribution of clusters containing \(N\) interstitials as calculated by ovito for the doses corresponding to the renders in Figure 3. For clusters with population sizes \(N<10\), the bin widths of the histogram are equal to 1 whereas at larger cluster population sizes the bin widths increase logarithmically (Milojevic, 2010). The raw data for the cluster population size frequencies were averaged over the bin widths to produce the step chart shown. Visually, the histograms appear to follow a straight line on a log-log scale, suggesting a power law distribution.

Figure 5: (a): Histogram of self-interstitial cluster population sizes detected in MA1 microstructures containing 10M atoms, corresponding to Figure 3. The population sizes of clusters, \(N\), follow a power law distribution. \(N_{\text{SIA}}\) denotes the total number of self-interstitial atoms detected by the Wigner-Seitz algorithm. The data are presented on a log-log scale. Lines of best fit were calculated using the maximum likelihood estimates (MLE) for the exponent in a power law distribution stated in Eq. 10. (b): Evolution of the MLEs for the power law exponent \(\alpha\) as a function of dose averaged over all three interatomic MA potentials. Shaded regions represent confidence intervals of the MLE. Data for the doses lower than 1 cdpa are presented on a log-linear scale whereas in the saturation regime \(>\)1 cdpa a linear-linear scale is employed. \(\alpha\) was averaged over doses larger than 1 cdpa to arrive at the value \(\langle\alpha\rangle=2.28\pm 0.01\) indicated by the horizontal dashed blue line for a 2M cell, and \(\langle\alpha\rangle=2.23\pm 0.01\) (dashed green line) for a 10M cell.

Furthermore, 
simulations employing the CRA have shown evidence for self-organised critical behaviour (Derlet and Dudarev, 2020) and thus it is reasonable to test the cluster population size power law distribution hypothesis such that the probability mass function for clusters containing \(N\) interstitials is given by \[p(N)=\frac{1}{\zeta(\alpha)N^{\alpha}} \tag{10}\] where the Riemann zeta function (Heynsworth and Goldberg, 1965) is defined as \(\zeta(\alpha)=\sum_{k=1}^{\infty}1/k^{\alpha}\) for exponent \(\alpha>1\). We may define an exponent that best fits the data shown in Figure 5a _via_ a maximum likelihood estimation. The likelihood function \[\mathcal{L}\left(\{N_{i}|i\in[1..N_{\mathrm{tot}}]\}|p(N|\alpha)\right)=\prod_{i=1}^{N_{\mathrm{tot}}}p(N_{i}|\alpha) \tag{11}\] returns the probability of observing the \(N_{\mathrm{tot}}\) measured data points \(\{N_{i}|i\in[1..N_{\mathrm{tot}}]\}\) if they were produced from a distribution \(p(N|\alpha)\) of given parameter(s) \(\alpha\). The MLE is produced by determining the value of \(\alpha_{\mathrm{MLE}}\) maximising \(\ln\mathcal{L}\) such that: \[\frac{\partial\ln\mathcal{L}}{\partial\alpha}\bigg{|}_{\alpha=\alpha_{\mathrm{MLE}}}=0. \tag{12}\] Equation 12 was solved numerically using the python package powerlaw (Alstott et al., 2014) and, as shown in Figure 5b, the MLEs and associated standard error for \(\alpha\) exhibit transients over a range of doses corresponding to the formation of a dislocation network by the coalescence of loops. The minimum of this transient appears to be correlated with the peak in dislocation density before saturating to a constant value at high doses. The twinned regions that emerge when employing the MA2 and MA3 potentials, as described at the beginning of §3.1, are also interstitial defect clusters. Therefore they are included in this statistical analysis and we find the same trend and remarkably similar values of \(\alpha\) across all three MA potentials. Averaging across the potentials for doses larger than 1 cdpa in the saturation regime, we find the MLE for exponent \(\alpha=2.28\pm 0.01\) for a 2M simulation cell and \(\alpha=2.23\pm 0.01\) for a 10M simulation cell. These values are close to but slightly higher than the exponents found in simulations and observations of collision cascades in tungsten in the limit of low dose (Sand et al., 2013; Yi et al., 2015). An immediate observation and consequence of the power law statistics is that the overwhelming majority of clusters are small. Thus, even the order of magnitude of the calculated dislocation density is sensitive to the choice one makes for a minimum detectable size of a dislocation loop. To show this, assume that clusters with cluster population sizes \(N\) lying in a range \(N_{\mathrm{min}}<N<N_{\mathrm{max}}\) correspond to a circular platelet of interstitial atoms _i.e._ a dislocation loop (Gilbert et al., 2008). The \(N_{\mathrm{min}}\) and \(N_{\mathrm{max}}\) thresholds correspond to cluster population sizes that are either sufficiently small to be treated as point defects, or are large enough to be close to the threshold for forming a percolating atomic plane consisting of interconnected dislocation loops.

Figure 6: Estimation of dislocation density from interstitial cluster perimeter density. \(N\) corresponds to the number of atoms in the cluster and the perimeter of a given cluster is estimated according to Equation 13.
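The maximum likelihood estimate defined by Equations 10-12 can also be reproduced directly with a few lines of numerical code. The sketch below maximises the log-likelihood using scipy rather than the powerlaw package mentioned above, and the cluster-size sample is a hypothetical stand-in for the simulation data.

```python
import numpy as np
from scipy.special import zeta
from scipy.optimize import minimize_scalar

def mle_power_law_exponent(sizes):
    """Maximum likelihood estimate of alpha for p(N) = N^-alpha / zeta(alpha).

    `sizes` are observed cluster population sizes N_i (integers >= 1).  The
    log-likelihood is -N_tot*ln(zeta(alpha)) - alpha*sum(ln N_i), which follows
    from Equations 10 and 11; we minimise its negative over alpha.
    """
    sizes = np.asarray(sizes, dtype=float)
    log_sum = np.log(sizes).sum()

    def neg_log_likelihood(alpha):
        return len(sizes) * np.log(zeta(alpha)) + alpha * log_sum

    result = minimize_scalar(neg_log_likelihood, bounds=(1.01, 6.0), method="bounded")
    return result.x

# Hypothetical cluster sizes drawn from a discrete power-law (Zipf) distribution.
rng = np.random.default_rng(1)
observed = rng.zipf(2.2, size=5000)
print(f"alpha_MLE ~ {mle_power_law_exponent(observed):.2f}")
```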
Assume that each platelet contains one extra atom per atomic string in the direction of the Burgers vector (Boleininger et al., 2018; Boleininger and Dudarev, 2019), and involves \(N\) atoms with volume \(\Omega\) equal to the atomic volume of \(\alpha\)-Zr. Using the formula \((\mathbf{b}\cdot\mathbf{A})\) for the volume of a dislocation loop, its perimeter can then be estimated as \[P_{S}=2\pi\sqrt{\frac{N\Omega}{\pi b}}, \tag{13}\] where \(b\) is the length of the Burgers vector along the normal to the loop habit plane. Here, we assume that the platelets form \(a\)-type dislocation loops such that \(b\) is equal to the \(a\) lattice parameter. Summing the perimeters corresponding to cluster population size distributions between various choices of \(N_{\text{min}}\) and \(N_{\text{max}}\) provides a measure of the difference in dislocation density due to different choices of thresholds. Typically, a dislocation core extends over about five interatomic distances (Boleininger et al., 2018; Boleininger and Dudarev, 2019) and thus one would reasonably argue that \(N_{\text{min}}\) should at least be greater than five. Furthermore, we should not include the fully formed planes corresponding to a cluster percolating through the periodic boundaries, thus setting \(N_{\text{max}}=10^{3}\). In Figure 6 we observe an order of magnitude difference between the results involving the counting of all the viable clusters smaller than the population size of the percolating cluster (\(5<N<10^{3}\)), and the values found by counting all the clusters but increasing the lower threshold to exclude clusters containing fewer than 100 self-interstitial atoms. This example illustrates how the power law distribution of defect sizes in highly irradiated microstructures can profoundly affect how dislocations are counted and their density quantified. We may also employ Equation 13 to estimate the power law exponent for the distribution of loops with respect to loop diameter \(D\) where it is apparent that, assuming that all the loops have the same Burgers vector \(b\), \(N\propto D^{2}\). Treating \(N\) as a continuous variable, we derive the probability density function as a function of loop diameter \[p_{D}(D)=\left|\frac{\text{d}N}{\text{d}D}\right|p(N(D))\propto\frac{1}{D^{2\alpha-1}}, \tag{14}\] where \(\alpha\) is the exponent entering Equation 10. Hence the diameter of circular loops is expected to be power law distributed with exponent \(\beta=2\alpha-1\). Using the data derived from the CRA simulations in the high dose limit illustrated in Figure 5b for a 10M atom cell, we find that \(\beta=3.46\). In experiments performed by Ungar et al. (2021), dislocation diameter data measured by TEM were combined with an XRD line profile measurement of the total dislocation density. When fitting the dislocation size distribution to a power law the exponent was found to lie in the range \(3\leq\beta\leq 4\), which agrees well with the values derived from our simulations. Concluding this section, we note that power law exponent values \(\alpha=2\) and \(\beta=3\) appear to represent natural lower limits characterising the power law statistics of populations of dislocation loops in a heavily irradiated material. 
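The threshold sensitivity discussed around Equation 13 and Figure 6 is easy to reproduce numerically. The sketch below sums loop perimeters between two choices of \(N_{\mathrm{min}}\); the cluster sizes are hypothetical, and the lattice parameter and atomic volume of \(\alpha\)-Zr are nominal values quoted only for illustration.

```python
import numpy as np

A_LATTICE = 3.23e-10   # nominal alpha-Zr a lattice parameter (m), illustrative
OMEGA = 2.33e-29       # nominal atomic volume of alpha-Zr (m^3), illustrative

def loop_perimeter(n_atoms, b=A_LATTICE, omega=OMEGA):
    """Perimeter of a circular interstitial platelet of N atoms (Eq. 13)."""
    return 2.0 * np.pi * np.sqrt(n_atoms * omega / (np.pi * b))

def density_from_clusters(cluster_sizes, volume, n_min, n_max):
    """Dislocation density estimated by summing loop perimeters (cf. Figure 6)."""
    sizes = np.asarray(cluster_sizes)
    kept = sizes[(sizes > n_min) & (sizes < n_max)]
    return loop_perimeter(kept).sum() / volume

# Hypothetical power-law distributed cluster sizes in a (50 nm)^3 cell.
rng = np.random.default_rng(2)
clusters = rng.zipf(2.2, size=2000)
cell_volume = (50e-9) ** 3
for n_min in (5, 100):
    rho = density_from_clusters(clusters, cell_volume, n_min, 1_000)
    print(f"N_min = {n_min:>3d}:  rho ~ {rho:.2e} m^-2")
```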
Indeed, any power law distribution with \(\alpha<2\) would imply a divergent total count of interstitial defects contained in the loops, a paradox that can only be resolved by recognising that loops of very large size are nothing but extra crystallographic atomic planes, which through elastic interactions would tend to modify the finite-size part of the dislocation loop distribution so as to make it normalisable, in this way steering the value of exponent \(\alpha\) towards and above the limiting value of 2. ### Stored energy Whilst analysing the configuration of atoms helps to describe how the microstructure evolves, on its own this approach provides limited insight into why the observed processes take place. Evidently, the repeated creation and relaxation of defects forces the microstructure into a state that has the energy higher than that of a single crystal. By examining the excess energy \(E_{exc}\) that the system is driven to, we may determine how the accumulation of point defects produces the observed dislocation density. For an irradiated microstructure at dose \(\phi\) containing \(N\) atoms with total energy \(E_{total}(\phi)\) we may measure the excess (stored) energy as \[E_{exc}(\phi)\,:=\,E_{total}(\phi)-N\,E_{coh}, \tag{15}\] where \(E_{coh}\) is the cohesive energy of \(\alpha\)-Zr. Our simulations are carried out under zero global strain boundary conditions. Whilst this is computationally convenient, in reality specimens are often irradiated under zero applied stress, allowing the body as a whole to undergo strain. Assuming linear elasticity, we may correct for this and remove the stored elastic energy \(E_{el}\) from our simulation results. The defects produced by radiation damage induce eigenstrains, also known as residual strains, that act as sources of elastic strain. Let \(\epsilon_{ij}^{0}\) denote the elastic strain that would come about under zero applied stress boundary conditions. In this case, the potential energy of the body is lowered by doing work equal to \(\frac{1}{2}\sigma_{ij}\epsilon_{ij}^{0}\) where Hooke's law determines the stress \(\sigma_{ij}=C_{ijkl}\epsilon_{kl}^{0}\) and the elastic constants tensor is \(C_{ijkl}\). Enforcing zero global strain requires that the integral of the strain \(\epsilon_{ij}\) over the body with volume \(V\) is zero or, equivalently, that one has zero volume averaged strain \[\langle\epsilon_{ij}\rangle:=\frac{1}{V}\int_{V}\mathrm{d}^{3}\mathbf{x}^{ \prime}\;\epsilon_{ij}(\mathbf{x}^{\prime})=0. \tag{16}\] This boundary condition is satisfied by the strain \[\epsilon_{ij}(\mathbf{x})=\epsilon_{ij}^{0}(\mathbf{x})-\langle\epsilon_{ij} ^{0}\rangle. \tag{17}\] Applying Hooke's law to Equation 17, we observe that the system is under a state of global stress \(\langle\sigma_{ij}\rangle=C_{ijkl}\langle\epsilon_{kl}^{0}\rangle\) and we may treat this as the stress that develops in our simulations. Therefore, we can compute the stored elastic energy \[E_{el}=\frac{1}{2}\langle\sigma_{ij}\rangle S_{ijkl}\langle\sigma_{kl}\rangle, \tag{18}\] where \(S_{ijkl}\) is the elastic compliance tensor (Warwick et al., 2021) related to \(C_{ijkl}\) by \(S_{ijpq}C_{pqkl}=\frac{1}{2}\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)\). We find across all the employed potentials that \(E_{el}\) accounts only for a small fraction \(<10\%\) of the stored energy, meaning that the vast majority of stored energy is contained in the point defect centres, dislocation cores and fluctuations arising from the elastic fields of these defects. 
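To make the correction in Equations 16-18 explicit, the fragment below evaluates the global stress and the stored elastic energy density from a volume-averaged eigenstrain using Voigt notation; the averaged strain and the hexagonal elastic constants are placeholder numbers, not values fitted to the potentials used here.

```python
import numpy as np

# Placeholder hexagonal elastic constants in Voigt notation (GPa); illustrative only.
C11, C12, C13, C33, C44 = 144.0, 74.0, 67.0, 166.0, 33.0
C66 = 0.5 * (C11 - C12)
C = np.array([
    [C11, C12, C13, 0.0, 0.0, 0.0],
    [C12, C11, C13, 0.0, 0.0, 0.0],
    [C13, C13, C33, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, C44, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, C44, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, C66],
]) * 1e9                                   # convert to Pa
S = np.linalg.inv(C)                       # compliance matrix

# Placeholder volume-averaged eigenstrain <eps0> (Voigt order: xx, yy, zz, yz, xz, xy).
eps0 = np.array([2e-3, 2e-3, 1e-3, 0.0, 0.0, 0.0])

sigma = C @ eps0                           # global stress via Hooke's law (cf. Eq. 17)
e_el_density = 0.5 * sigma @ S @ sigma     # stored elastic energy per unit volume (Eq. 18)
print(f"<sigma_xx> = {sigma[0] / 1e9:.2f} GPa, "
      f"elastic energy density = {e_el_density / 1e6:.2f} MJ/m^3")
```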
Indeed, when a high dose snapshot was explicitly relaxed under zero pressure, we found that the resulting change in excess energy is of the same order of magnitude as our estimate of \(E_{el}\). Furthermore, the microstructure did not change significantly, providing further indication of the minor role of the global elastic energy. The remaining stored energy is confined to the two classes of defects associated with vacancies and interstitials. Given that vacancies do not cluster significantly, we may estimate their energetic contribution as \[E_{vac}(\phi)=N_{vac}(\phi)E_{1vac}^{f}, \tag{19}\] where, for dose \(\phi\), the number of vacancies is denoted \(N_{vac}\) and \(E_{1vac}^{f}\) is the formation energy of a single vacancy. Values of \(E_{1vac}^{f}\) for each of the potentials used in this study may be found in Mendelev and Ackland (2007). To check the validity of Equation 19 we isolated the \(N_{vac}\) vacancies identified by Wigner-Seitz analysis and subsequently relaxed an \(N\)-atom supercell of pristine \(\alpha\)-Zr containing the same number of vacancies in the same positions. The simulation cell was relaxed and the formation energy was computed as \[E_{vac}=E_{vac}^{total}-(N-N_{vac})E_{coh}, \tag{20}\] where \(E_{vac}^{total}\) is the resulting total energy. Figure 7a summarises this process and shows the resulting formation energy for these arrangements of vacancies. Thus we observe that Equation 19 is a good approximation, providing further evidence that the majority of \(E_{vac}\) is due to isolated vacancies. The contribution to \(E_{exc}\) from the formation energy of small interstitial clusters and dislocation content may now be calculated as \[E_{int}=E_{exc}-E_{el}-E_{vac}. \tag{21}\] In Figure 7b we show the relative proportion of elastic, interstitial and vacancy contributions to the total excess energy where it is evident that the elastic contribution is in the minority, vacancy contributions dominate and the interstitial contribution follows a similar trend to the dislocation density profile shown in Figure 1. The profile of \(E_{int}\) is shown in Figure 7c together with the interstitial concentration and dislocation density in order to highlight their correlation with each other. Recently, Differential Scanning Calorimetry (DSC) experiments were performed to measure the stored energy of irradiated titanium as a means of inferring the number of defects present (Hirst et al., 2022). The measurements indicated stored energies associated with irradiation induced defects to be on the order of \(0.1\,\mathrm{Jg^{-1}}\). Their analysis also allowed the authors to infer the presence of defects that are invisible to TEM imaging. At \(0.07\,\mathrm{cdpa}\) in Figure 7, we find that the peak in specific energy associated with \(E_{int}\) is \(25\,\mathrm{Jg^{-1}}\), which subsequently drops and plateaus at \(12\,\mathrm{Jg^{-1}}\) in the high dose limit. At elevated temperatures, the defect content is likely to be \(\sim 10\%\) of that calculated in our simulations cf. (Mason et al., 2020, 2021) and thus we expect the associated stored energy in Zr to be comparable to \(1\,\mathrm{Jg^{-1}}\).

## 4 Conclusions

In summary, we have performed experiments and simulations showing that the dislocation density in irradiated zirconium and zircaloys exhibits a peak at a moderate dose and then saturates at doses greater than 1 dpa. Simulations indicate that this occurs in a regime of dose rate and temperature where microstructural evolution is predominantly driven by stress relaxation. 
The material enters a critical state at \(\sim\)1 dpa, where interstitial clusters grow to a sufficient size to percolate the volume of the material. At high dose, the population of smaller clusters and dislocation loops is distributed as a function of cluster defect content according to power law statistics with an exponent close to \(\alpha\approx 2.2\). As a function of defect diameter, this results in a power law distribution of defect clusters with an exponent of \(\beta\approx 3.5\), which compares favourably with the range of values \(3\leq\beta\leq 4\) derived from experimental observations (Ungar et al., 2021). The analysis highlights the significance of a precise definition of the defect sizes included in the measured dislocation densities. Irrespective of the statistics of dislocation structures, the trend in the dislocation density evolution in zirconium irradiated at temperatures below \(\sim 350\,^{\circ}\)C is clear: the dislocation density saturates as a function of dose.

Figure 7: Contributions of microstructural defects to the excess energy. Results shown are for the MA1 potential using 10 million atoms. 7a: Comparison of vacancy energy \(E_{vac}\) estimated by assuming all \(N_{V}\) vacancies are isolated with formation energy \(E_{1vac}^{f}\) (blue curve) with \(E_{vac}\) calculated _via_ explicitly relaxing pristine \(\alpha\)-Zr containing the same number and arrangement of vacancies. 7b: Breakdown of total excess energy \(E_{exc}\) into elastic \(E_{el}\), interstitial \(E_{int}\) and vacancy \(E_{vac}\) contributions. 7c: Comparison of interstitial excess energy, concentration and dislocation density that exhibits the similarity in profile between all three quantities. A peak occurs in all three profiles near \(0.1\,\mathrm{cdpa}\).

## 5 Acknowledgements

This work received funding from the RCUK Energy Programme Grant No. EP/W006839/1 and MIDAS EPSRC Grant No. EP/S01702X/1, and was partially carried out within the framework of the EUROfusion Consortium, funded by the European Union via the Euratom Research and Training Programme (Grant Agreement No 101052200 -- EUROfusion). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the European Commission can be held responsible for them. We acknowledge DESY (Hamburg, Germany), a member of the Helmholtz Association HGF, for the provision of experimental facilities. We gratefully acknowledge the use of the high-performance computing facility MARCONI (Bologna, Italy) provided by EUROfusion, and computing resources supplied by the IRIS (STFC) Consortium. This work also received support from the EPSRC Access to HPC Programme on the ARCHER2 UK National Supercomputing Service ([http://www.archer2.ac.uk](http://www.archer2.ac.uk)).
2305.05904
Revisiting Fully Homomorphic Encryption Schemes
Homomorphic encryption is a sophisticated encryption technique that allows computations on encrypted data to be done without the requirement for decryption. This trait makes homomorphic encryption appropriate for safe computation in sensitive data scenarios, such as cloud computing, medical data exchange, and financial transactions. The data is encrypted using a public key in homomorphic encryption, and the calculation is conducted on the encrypted data using an algorithm that retains the encryption. The computed result is then decrypted with a private key to acquire the final output. This abstract notion protects data while allowing complicated computations to be done on the encrypted data, resulting in a secure and efficient approach to analysing sensitive information. This article is intended to give a clear idea about the various fully Homomorphic Encryption Schemes present in the literature and analyse and compare the results of each of these schemes. Further, we also provide applications and open-source tools of homomorphic encryption schemes.
Nimish Jain, Aswani Kumar Cherukuri
2023-05-10T05:12:54Z
http://arxiv.org/abs/2305.05904v1
Revisiting Fully Homomorphic Encryption Schemes ## Abstract Homomorphic encryption is a sophisticated encryption technique that allows computations on encrypted data to be done without the requirement for decryption. This trait makes homomorphic encryption appropriate for safe computation in scenarios involving sensitive data, such as cloud computing, medical data exchange, and financial transactions. The data is encrypted using a public key in homomorphic encryption, and the calculation is conducted on the encrypted data using an algorithm that retains the encryption. The computed result is then decrypted with a private key to acquire the final output. This abstract notion protects data while allowing complicated computations to be done on the encrypted data, resulting in a secure and efficient approach to analysing sensitive information. This article is intended to give a clear idea about the various fully Homomorphic Encryption Schemes present in the literature, as well as analysing and comparing the results of each of these schemes. Further we also provide applications and open source tools of homomorphic encryption schemes. ## Keywords Fully Homomorphic Encryption, FHE, FHE Schemes, Homomorphism, Homomorphic Encryption ## 1 Introduction The term "homomorphic" is derived from two Greek roots: "homo," which means "same," and "morph," which means "shape." The term homomorphism in mathematics refers to a structure-preserving map between two algebraic systems whose operations are the same or similar. The phrase "homomorphic encryption" refers to how this encryption approach allows computations to be conducted on encrypted data while preserving the data's structure, allowing the same computations to be performed on encrypted data as on unencrypted data. Homomorphic Encryption (HE) is a kind of encryption scheme that allows a third party (e.g., cloud, service provider) to perform certain computable functions on the encrypted data while preserving the features of the function and format of the encrypted data. Encryption methods like RSA, AES, and DES are not homomorphic, meaning that they require the data to be decrypted before any computation can be performed. This makes it challenging to use these encryption methods in situations where data privacy is a critical concern, such as cloud computing and data analytics. In contrast, homomorphic encryption enables computations to be performed directly on encrypted data, without the need for decryption. This has significant implications for privacy-preserving technologies, as it allows for secure outsourcing of computation to untrusted servers, while maintaining the confidentiality of the data. Moreover, RSA, AES, and DES can be used for other aspects of security, such as key management and message authentication. Unlike other review papers [22] in the literature, which typically provide a high-level overview of different FHE schemes, this paper goes a step further by providing a simple and step-by-step algorithm for each of the FHE schemes discussed. This makes the FHE schemes more accessible to a wider audience, including those who may not have a deep background in cryptography or computer science. By breaking down the algorithms into simple steps, readers can follow along and understand how the scheme works at a more fundamental level. 
In addition to providing a simple and step-by-step algorithm for each FHE scheme, this paper also presents a comprehensive comparison of different FHE schemes and open-source libraries in tabular form in Table 1 and Table 2, respectively. The table includes important features and characteristics of each FHE scheme, such as security assumptions, key sizes, supported operations, computational complexity, and limitations. For example, if a user needs an FHE scheme that can perform addition and multiplication operations, they can easily filter the table to identify which schemes meet these criteria. Additionally, if a user is concerned about the computational complexity of the FHE scheme, they can compare the performance metrics of different schemes side-by-side. The rest of the article is structured in the following way. Section 2 explains the mathematical concept behind homomorphism, followed by its classification and applications. Section 3 analyses the various FHE schemes. Section 4 presents five open-source libraries implemented in Python and C++. ## 2 Background This section provides the mathematical concepts and background of homomorphic encryption. ### Homomorphism Let us take an example to understand this better. Consider two messages \(m_{1}\) and \(m_{2}\) and their encrypted ciphertexts \(c_{1}=E(m_{1})\) and \(c_{2}=E(m_{2})\). If the function \(E\) is homomorphic, then one can obtain the value of \(E(m_{1}+m_{2})\) by using \(c_{1}\) and \(c_{2}\) without knowing the values of \(m_{1}\) and \(m_{2}\). \[E(m_{1}+m_{2})=c_{1}+c_{2}=E(m_{1})+E(m_{2})\] Generalising the above scenario to any operation "\(\star\)", we can define an encryption scheme \(E\) as homomorphic if it satisfies the following equation: \[E(m_{1}\star m_{2})=E(m_{1})\star E(m_{2}),\forall m_{1},m_{2}\in M\] where \(M\) is the set of all possible messages. In abstract algebra, a homomorphism is a structure-preserving map between two algebraic structures such as groups. Let us take a set \(S\) and an operation "\(\star\)" that combines any two elements \(a,b\in S\) to form another element, denoted \(a\star b\). The requirements for a set and operation \((S,\star)\) to be a group are as follows: * Closure property: For all \(a,b\in S\), the result \(a\star b\) is also in \(S\). * Associativity property: For all \(a,b,c\in S\), \((a\star b)\star c=a\star(b\star c)\). * Identity element: There exists an element \(e\in S\) such that \(e\star a=a\star e=a\) for all \(a\in S\). Here \(e\) is the identity element of the set \(S\). * Inverse element: For every \(a\in S\) there exists an element \(b\in S\) such that \(a\star b=b\star a=e\), where \(e\) is the identity element. Note: * The identity element \(e\in S\) is often written as 1. * The result of the operation may depend on the order of the operands; in general, \(a\star b\) need not equal \(b\star a\). A group homomorphism from a group \((G,\star)\) to a group \((H,\star)\) is a map \(f\colon G\to H\) such that \[f(g\star g^{\prime})=f(g)\star f(g^{\prime}),\forall g,g^{\prime}\in G\] Group homomorphisms come into play when testing whether an encryption scheme is homomorphic. Assume an encryption scheme \((P,C,K,E,D)\) where * \(P=\) plaintext space * \(C=\) ciphertext space * \(K=\) key space * \(E=\) encryption algorithm * \(D=\) decryption algorithm and \((P,\star)\) and \((C,\star)\) are groups of plaintexts and ciphertexts, respectively. 
The encryption algorithm, which maps the plaintext group \((P)\) to the ciphertext group \((C)\) using a key \(k\) from the key space \((K)\), is homomorphic for an operation "\(\star\)" if \[E_{k}(a)\star E_{k}(b)=E_{k}(a\star b),\forall a,b\ \in\ P\ \&\ k\ \in\ K\] Here, \(k\) can be a symmetric key or a public key, depending on the encryption algorithm used. Using the above equation, let us prove that RSA is homomorphic for "\(\bullet\)", i.e. modular multiplication. The plaintext group and ciphertext group, respectively, are \((P,\bullet)\) and \((C,\bullet)\). For any two plaintexts \(p_{1},p_{2}\in P\), and public key \(k=(n,e)\), \[E(p_{1},k)\ =\ p_{1}^{e}(mod\ n)\] \[E(p_{2},k)\ =\ p_{2}^{e}(mod\ n)\] \[E(p_{1},k)\bullet E(p_{2},k)=\ p_{1}^{e}\ \bullet\ p_{2}^{e}(mod\ n)=(p_{1} \bullet p_{2})^{e}(mod\ n)=E(p_{1}\bullet p_{2},k)\] Therefore, RSA is homomorphic for the modular multiplication operation \((\bullet)\). ### Classification of Homomorphic Algorithms There are limitations to homomorphic algorithms. Existing encryption schemes may not satisfy homomorphism for all kinds of operations or for any number of operations. Some encryption algorithms are homomorphic for a single type of operation only; some support an unlimited number of additions but only a limited number of multiplications, etc. Hence, they are classified into three classes of homomorphic encryption schemes: #### 2.2.1 Partially Homomorphic Encryption \((\mathrm{PHE})\) PHE allows calculations on encrypted data, but only for one type of operation, typically either addition or multiplication. PHE is less computationally costly than other types of homomorphic encryption, but its utility is limited. Some examples include RSA [3], Goldwasser-Micali [4], El-Gamal [5], Benaloh [6], Paillier [7], Okamoto-Uchiyama [8], etc. #### 2.2.2 Somewhat Homomorphic Encryption (SWHE) SWHE enables more complicated operations on encrypted data, such as the evaluation of low-degree polynomials, but only for a limited number of operations. SWHE is more computationally demanding than PHE and offers more capabilities. Some examples include BGN [9], the Polly Cracker scheme [10], etc. #### 2.2.3 Fully Homomorphic Encryption (FHE) FHE supports arbitrary calculations on encrypted data, such as conditional operations, branching, and looping. FHE is the most computationally costly type of homomorphic encryption, but it also offers the most functionality [23]. These algorithms mostly make use of techniques like bootstrapping to maintain homomorphism. Some examples include Ideal Lattice-Based [2], FHE Schemes over Integers [11], LWE-Based [12], NTRU-Like [13], etc. ### Applications All three classes of homomorphic encryption, i.e. PHE, SWHE and FHE, are useful in any field that deals with data processing. They can be utilized in the outsourced storage and computation sector, where the customer can share data with the outsourcing corporation without disclosing its sensitive contents, while still allowing the company to perform operations on the data. [1] offers a system architecture capable of performing biometric identification in the encrypted domain, gives and analyses an implementation based on two current homomorphic encryption techniques, and examines the technological aspects and challenges in this environment. Consider a sample client-server interaction scenario: the client needs to send some sensitive data to the server, and the server returns the data after performing some operations on it. This can be achieved with or without using HE. 
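Before walking through those two client-server flows, the multiplicative homomorphism of RSA derived above can be checked numerically. The sketch below is illustrative only, with toy, insecure parameters (the primes, exponent, and messages are arbitrary choices made here for demonstration, not part of the original article):

```python
# Numeric check of textbook RSA's multiplicative homomorphism (toy parameters).
p, q = 61, 53
n = p * q                              # public modulus
e = 17                                 # public exponent
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (requires Python 3.8+)

def enc(m):  # E(m, k) = m^e mod n
    return pow(m, e, n)

def dec(c):  # D(c) = c^d mod n
    return pow(c, d, n)

m1, m2 = 42, 55
c1, c2 = enc(m1), enc(m2)
# E(m1) * E(m2) mod n == E(m1 * m2 mod n), so the product of ciphertexts decrypts correctly.
assert (c1 * c2) % n == enc((m1 * m2) % n)
assert dec((c1 * c2) % n) == (m1 * m2) % n
print("decrypted homomorphic product:", dec((c1 * c2) % n))
```

Textbook RSA is homomorphic only for multiplication; adding two ciphertexts has no well-defined effect on the underlying plaintexts, which is why RSA is listed under partially homomorphic encryption in Section 2.2.1.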
Both methods are demonstrated below: #### 2.3.1 Without Homomorphic Encryption The client (C) has an asymmetric key pair \((pu_{C},pr_{C})\) and a message \(M\) that must be sent to the server. Similarly, the server (S) has its asymmetric key pair \((pu_{S},pr_{S})\) and a function \(f\) which will be applied to the client's message. To maintain confidentiality, the client encrypts \(M\) using the server's public key to form \(E(M,pu_{S})\), which is then sent to the server. The server decrypts \(E(M,pu_{S})\) using its private key \(pr_{S}\) to get \(M\) and applies the function \(f\) to \(M\) to get \(f(M)\). \(f(M)\) is encrypted using the client's public key \(pu_{C}\) before being sent to the client. The client receives \(E(f(M),pu_{C})\) and decrypts it using its private key to get \(f(M)\). In this scenario, since the server can see the message, it may pose a huge security threat to the client. When dealing with sensitive data, there should be a way to prevent the server from viewing the raw sensitive data. #### 2.3.2 With Homomorphic Encryption The client (C) has a homomorphic encryption function \(He\), its corresponding decryption function \(Hd\), and a message \(M\) that must be processed by the server. The server (S) has the function \(f\) to be applied to the client's data. The client encrypts the message \(M\) using homomorphic encryption to get \(He(M)\) and sends it to the server for computation. The server performs the operation \(f\) homomorphically to get \(He\big{(}f(M)\big{)}\) and sends it back to the client. The client decrypts the data using \(Hd\big{(}He\big{(}f(M)\big{)}\big{)}\) to get \(f(M)\). Unlike in 2.3.1, here the server performs its operation blindfolded since it cannot see the original message. The confidentiality of the client's message is preserved from both the public and the server. ## 3 Fully Homomorphic Encryption To qualify as an FHE scheme, an encryption algorithm should allow an unlimited number of operations on the encrypted data while keeping the corresponding original data recoverable. The first feasible FHE construction, Gentry's ideal lattice-based FHE scheme, was published in 2009 [2]. His work is promising but computationally intensive: the scheme was impractical for real-world scenarios since the cost of each operation was very high and it was complex to implement. Figure 1: A client-server scenario without \(HE\) Figure 2: A client-server scenario with \(HE\) Many new schemes and optimizations based on Gentry's work followed, all sharing a lattice-based approach that is hard to implement; to name a few, FHE Schemes over Integers [11], LWE-Based [12], NTRU-Like [13], etc. We will discuss these schemes in detail below. ### Ideal Lattice-based FHE Scheme The first ever FHE scheme was proposed in Gentry's thesis in 2009 and was based on a GGH-type encryption system [14]. The Goldreich-Goldwasser-Halevi (GGH) cryptosystem is an asymmetric lattice-based cryptosystem that takes advantage of the fact that the nearest vector problem might be difficult. A lattice \(L\) with basis \(b_{1},b_{2},b_{3},...,b_{n}\) is formulated as follows: \[L=\sum_{i=1}^{n}\overrightarrow{b_{i}}*v_{i},\quad v_{i}\in\mathbb{Z}\] The basis of a lattice is not unique. 
There are infinitely many bases for a given lattice. A basis is called "good" if the basis vectors are almost orthogonal; otherwise, it is called "bad" basis of the lattice [16]. We know that the amount of noise in a cipher text shouldn't cross a threshold beyond which the plain text cannot be recovered. Therefore, from time-to-time the noisy cipher text must be processed to either eliminate or decrease the noise level. Ideal Lattice-based approach uses methods like squashing and bootstrapping, to reduce noise. This enables us to perform infinite number of operations on cipher text without losing the integrity of plain text. Before understanding Ideal Lattice-based FHE Scheme by Gentry, you should understand the meaning of Ideal of Ring Theory. An ideal of a ring is a special subset of its elements. For example, even numbers for a set of Integers. Addition and subtraction of even numbers preserves evenness and multiplying an even number by any integer (even or odd) results in an even number; these closure and absorption properties are the defining properties of an ideal. Gentry's SWHE scheme using ideals and rings is described below: **Key Generation Algorithm:** 1. Choose two prime numbers \(p\) and \(q\) such that \(q\) divides \(p-1\). 2. Choose a generator \(g\) of the subgroup of \(F_{p}*\) of order \(q\). 3. Choose two elements \(a,b\) uniformly at random from \(F_{q}\) and compute \(f=g^{a}*f^{b}\), where \(f\) is a randomly chosen element of \(F_{p}\). 4. The secret key is \((a,b)\) and the public key is \((h,p,q,g)\). **Encryption:** 1. Let \(m\) be the plaintext message. 2. Choose a small integer \(t\) and \(a\) random element \(e\) in \(F_{p}\). 3. Compute the ciphertext as \(c=m*h^{t}*g^{e}\ mod\ p\). **Decryption:** 1. Compute \(s=at+be\ mod\ q\). 2. Compute \(m^{\prime}=c*g^{-s}\ mod\ p\). 3. Round \(m^{\prime}\) to the nearest integer to obtain the decrypted plaintext message. #### 3.2.1 Homomorphic Addition: To add two ciphertexts c1 and c2, compute c1*c2 mod p. #### 3.2.2 Homomorphic Multiplication: To multiply two ciphertexts \(c_{1}\) and \(c_{2}\), first compute \(c^{\prime}=c_{1}*c_{2}\ mod\ p\). Then, given a plaintext message \(m\) with a small integer representation, compute \((c^{\prime})^{m}\ mod\ p\) to obtain the ciphertext of the product. The security of the SWHE scheme is based on the hardness of the learning with errors (LWE) problem, which involves the estimation of a random linear function of a noisy sample. The scheme provides limited homomorphic computations, such as addition and multiplication of encrypted messages, without revealing the underlying plaintext. Gentry's FHE (Fully Homomorphic Encryption) approach employs a "squashing" strategy to deal with the noise that builds in ciphertexts after many homomorphic operations. The approach also contains a "bootstrapping" technique for refreshing the noise in ciphertexts, allowing for limitless homomorphic computations. Gentry's FHE system has the following procedure for squashing and bootstrapping. #### 3.2.3 Squashing: 1. Choose a small positive integer \(L\). 2. Compute the function \(f(x)=x^{L}-1\). 3. Evaluate the function on the ciphertext \(c\) to obtain a new ciphertext \(c^{\prime}=f(c)\ mod\ p\). 4. The squashing step reduces the noise in the ciphertext c by a factor of \(L\), at the cost of losing information about the plaintext message. However, the information can be recovered using bootstrapping. #### 3.2.4 Bootstrapping: 1. 
Choose a fresh key pair \((a^{\prime},s^{\prime})\) using the same key generation algorithm as before. 2. Evaluate the decryption circuit on the squashed ciphertext \(c^{\prime}\) to obtain a new ciphertext \(c^{\prime\prime}\) that encrypts the same plaintext message as \(c\), but with noise reduced to a negligible level. 3. Compute a new public key \(h^{\prime}=(g,h^{\prime}_{1},\dots,h^{\prime}_{n})\) using the same method as before, where \(h^{\prime}_{l}=g^{a^{\prime}t}*f^{s^{\prime}t}\) for a randomly chosen \(f\) in \(F_{p}\). 4. Compute a new ciphertext \(c^{\prime\prime\prime}\) that encrypts the same plaintext message as \(c\), but with the new public key \(h^{\prime}\) and a much smaller level of noise. 5. The ciphertext \(c^{\prime\prime\prime}\) can now be used as input for further homomorphic computations. The bootstrapping step involves two decryption and encryption operations, and a new public key is needed for each bootstrapping operation. Therefore, bootstrapping is computationally expensive and limits the practicality of the FHE scheme for large-scale computations. However, it allows for unlimited homomorphic computations on encrypted data, making it a powerful tool for privacy-preserving data analysis and machine learning. ### FHE Schemes over Integers A new fully homomorphic encryption scheme was proposed in [11] that was based on the Approximate-Greatest Common Divisor (AGCD) problems. AGCD problems try to recover \(p\) from the given set of \(x_{l}=pq_{l}+r_{l}\). The primary motivation behind the scheme is its conceptual simplicity. A symmetric version of the scheme is probably one of the simplest schemes. The proposed symmetric SWHE scheme is described as follows: **Key Generation Algorithm:** 1. Given a security parameter \(\lambda\). 2. A random odd integer \(p\) of bit length \(\eta\) is generated. This will be treated as a private key. 3. Choose a random large number \(q\). 4. Choose a small number \(r\) such that \(r\ll p\). **Encryption Algorithm** The message m \(\in\)\(\{0\), \(1\}\) is encrypted by using the following: \[c=E(m)=m+2r+pq\] **Decryption Algorithm** The following formula can be used for decryption: \[m=D(c)=(c\ mod\ p)\ mod\ 2\] **Homomorphism over Addition** \[E(m_{1})+E(m_{2}) =m_{1}+2r_{1}+pq_{1}+m_{2}+2r_{2}+pq_{2}\] \[=(m_{1}+m_{2})+2(r_{1}+r_{2})+(q_{1}+q_{2})q\] The output clearly falls within the ciphertext space and can be decrypted if the noise \(|(m_{1}+m_{2})+2(r_{1}+r_{2})\mid\ <\ ^{p}\big{/}_{2}\). Since \(r_{1},r_{2}\ <<\ p\), a various number of additions can still be performed on ciphertext before noise exceeds \(\ ^{p}\big{/}_{2}\). **Homomorphism over Multiplication** \[E(m_{1})E(m_{2}) =(m_{1}+2r_{1}+pq_{1})(m_{2}+2r_{2}+pq_{2})\] \[=m_{1}m_{2}+2(m_{1}r_{2}+m_{2}r_{1}+2r_{1}r_{2})+kp\] \(N=m_{1}m_{2}+2(m_{1}r_{2}+m_{2}r_{1}+2r_{1}r_{2})\) The encrypted data can be decrypted if the noise is smaller than half of the private key. \(N\ <\ ^{p}\big{/}_{2}\). \(N\) grows exponentially with the multiplication operation. This puts more restriction over the homomorphic multiplication operation than addition. ### LWE-Based FHE Scheme Brakerski and Vaikuntanathan's LWE-based fully homomorphic encryption (FHE) scheme [12] is a lattice-based cryptosystem that builds upon the Learning with Errors (LWE) problem. Here is a high-level description of the scheme: **Key Generation** 1. Choose a prime number \(p\) and an integer \(q\) such that \(q~{}>~{}p^{2}\). 2. 
Let \(n\) be a positive integer and choose a random matrix \(A\in Zq^{(n\times m)}\) and random vectors \(s\), \(e\), and \(u~{}\in~{}Zq^{n}\). 3. Compute \(b~{}=~{}(A,As~{}+~{}e)\) and the matrix \(B~{}=~{}(b~{}|~{}u)\). 4. The public key is \((A,B)\), and the private key is kept secret. **Encryption Algorithm** To encrypt a binary message \(m\in\{0,1\}\), generate a random vector \(r\in Zq^{n}\) and a small noise vector \(e^{\prime}~{}\in~{}Zq^{n}\). Then, compute the ciphertext c as follows: \[c~{}=~{}Ar~{}+~{}m*p~{}+~{}e^{\prime}\] The ciphertext \(c\) is then sent to the recipient. **Homomorphic Operations** The scheme allows for homomorphic addition and multiplication of ciphertexts. Given two ciphertexts \(c_{1}\) and \(c_{2}\), the following operations can be performed: * **Addition:**\(c_{1}+c_{2}~{}=c_{1}+c_{2}\) * **Multiplication:**\(c_{1}*c_{2}=(A~{}*~{}B^{\prime})r+m_{1}m_{2}p~{}+~{}e_{1}*B^{\prime}r~{}+~{}e_{2} Ar~{}+~{}e_{1}e_{2}\) Here, \(B^{\prime}\) is the transpose of \(B\), \(m_{1}\) and \(m_{2}\) are the plaintexts corresponding to \(c_{1}\) and \(c_{2}\), and \(e_{1}\) and \(e_{2}\) are the corresponding noise vectors. **Decryption** To decrypt a ciphertext \(c\), compute the inner product of c with the private key vector \(s\) modulo \(p\), and then round to the nearest integer modulo 2: \[m~{}=~{}round((c~{}*~{}s)/p)~{}mod~{}2\] This recovers the original message \(m\). Brakerski and Vaikuntanathan's FHE scheme improves upon Regev's scheme by allowing for more homomorphic operations before the noise in the ciphertexts grows too large. However, the scheme is still limited by the size of the noise in the ciphertexts, which ultimately limits the number of homomorphic operations that can be performed. ### NTRU-Like FHE Scheme NTRU (N-th degree TRUncated polynomial) is a public key cryptosystem based on the shortest vector problem (SVP) in a lattice. In recent years, NTRU has been used as a basis for constructing fully homomorphic encryption (FHE) schemes. The basic idea behind NTRU-Like FHE schemes is to use a variant of the NTRU public key cryptosystem as the underlying encryption scheme. The encryption process involves encoding the message into a polynomial and then adding noise to it. The decryption process involves recovering the message by finding the closest polynomial to the ciphertext polynomial in a certain norm. **Key Generation:** 1. Choose integers \(N\), \(p\), and \(q\), where \(p\) and \(q\) are large primes congruent to \(1\) modulo \(2N\), and \(p\) divides \(q-1\). 2. Generate a random polynomial \(f(x)\) of degree \(N-1\) with coefficients in \(\{-1,0,1\}\). 3. Compute the inverse polynomial \(f^{-1}(x)\ mod\ q\). 4. Choose a small integer \(\mathrm{e}\) and compute \(g(x)\ =\ (1\ +\ f(x))^{e}\ mod\ p\). 5. Public key is \((p,q,g(x))\) and private key is \((f(x),f^{-1}(x))\). **Encryption:** 1. Encode the message \(\mathrm{m}\) into a polynomial \(m(x)\) of degree \(N-1\) with coefficients in \(\{-1,0,1\}\). 2. Choose a small integer \(\mathrm{r}\) and compute \(h(x)\ =\ rg(x)+m(x)\ mod\ q\). 3. Send the ciphertext (\(h(x)\)). **Decryption:** 1. Compute \(c(x)=h(x)*f^{-1}(x)\ mod\ q\). 2. Compute \(m(x)=\round(c(x)\ mod\ p)\ mod\ 2\), where \(round()\) is the function that rounds to the nearest integer. 3. The decrypted message is the polynomial \(m(x)\). **Homomorphic Addition:** 1. Given two ciphertexts \(h_{1}(x)\) and \(h_{2}(x)\), compute \(h_{3}(x)=h_{1}(x)+h_{2}(x)\ mod\ q\). 2. Send the ciphertext (\(h_{3}(x)\)). 
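The key generation, encryption, and homomorphic addition steps above all reduce to arithmetic on polynomials with coefficients taken modulo \(q\). As a small illustration, the sketch below implements coefficient-wise addition and multiplication by cyclic convolution; it assumes the classic NTRU convolution ring \(Z_{q}[x]/(x^{N}-1)\), since the exact quotient ring is not spelled out above:

```python
def poly_add(a, b, q):
    """Coefficient-wise addition of two length-N coefficient lists modulo q."""
    return [(x + y) % q for x, y in zip(a, b)]

def poly_mul(a, b, N, q):
    """Multiplication in Z_q[x]/(x^N - 1), i.e. cyclic convolution of coefficients."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % N] = (c[(i + j) % N] + ai * bj) % q
    return c

# Example with N = 4, q = 97: (1 + x)(1 + x^3) = 1 + x + x^3 + x^4 = 2 + x + x^3 (mod x^4 - 1)
N, q = 4, 97
print(poly_mul([1, 1, 0, 0], [1, 0, 0, 1], N, q))   # -> [2, 1, 0, 1]
```

In this setting, homomorphic addition of two ciphertexts \(h_{1}(x)\) and \(h_{2}(x)\) is exactly the coefficient-wise addition above; since ciphertext multiplication is not directly available, the scheme resorts to the bootstrapping procedure described next.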
**Bootstrapping (Homomorphic Multiplication):** 1. Decrypt the ciphertext \(h(x)\) to obtain \(c(x)=h(x)*f^{-1}(x)\ mod\ q\). 2. Choose a random element a from \(Z_{p}\). 3. Compute \(c^{\prime}(x)=(c(x)+a*f(x))\ mod\ q\). 4. Compute \(m^{\prime}(x)=round(c^{\prime}(x)\ mod\ p)\ mod\ 2\). 5. Compute \(h^{\prime}(x)=2r*(g(x)^{a})\ mod\ q\). 6. Compute \(h^{\prime\prime}(x)=h(x)-h^{\prime}(x)\ mod\ q\). 7. Compute \(h3(x)=h^{\prime\prime}(x)+h^{\prime}(x)*m^{\prime}(x)\ mod\ q\). 8. The encrypted result is the ciphertext (\(h3(x)\)). The above bootstrapping step can be repeated multiple times to enable deeper homomorphic circuits. The fully homomorphic aspect of these schemes comes from the fact that the NTRU cryptosystem is somewhat homomorphic. This means that the addition of two ciphertexts results in a ciphertext that can be decrypted to the sum of the corresponding plaintexts. Multiplication of ciphertexts, however, is not directly possible in the NTRU cryptosystem. Therefore, NTRU-Like FHE schemes use a technique called "bootstrapping" to perform homomorphic multiplication. ### TFHE Scheme TFHE is an open-source library for fully homomorphic encryption, distributed under the terms of the Apache 2.0 license [17]. The TFHE algorithm is a homomorphic encryption scheme that supports both addition and multiplication operations. It was proposed in 2018 and is designed to be more efficient than other FHE schemes, such as the BGV and FV schemes. It also implements a dedicated Fast Fourier Transformation for the anticyclic ring \(\mathbb{R}[X]/(X^{N}+1)\), and uses AVX assembly vectorization instructions. The default parameter set achieves a 110-bit cryptographic security, based on ideal lattice assumptions. From the user point of view, the library can evaluate a net-list of binary gates homomorphically at a rate of about 50 gates per second per core, without decrypting its input. It suffices to provide the sequence of gates, as well as ciphertexts of the input bits. And the library computes ciphertexts of the output bits. The key features of the TFHE algorithm include: 1. **Encryption:** The plaintext message is first encrypted using a symmetric key encryption scheme such as AES (Advanced Encryption Standard). This produces a ciphertext that is a stream of bits. 2. **Encoding:** The bits of the ciphertext are then encoded into a Torus, which is a mathematical structure that allows for efficient manipulation of the ciphertext using homomorphic operations. 3. **Homomorphic operations:** The TFHE algorithm supports both addition and multiplication operations on the encrypted data. These operations are performed using the encoded Torus values. 4. **Decryption:** The homomorphic result is then decoded back into a stream of bits and decrypted using the same symmetric key encryption scheme used in the encryption step. The TFHE algorithm is designed to be efficient in terms of computational cost, memory usage, and ciphertext size. It achieves this by using a combination of symmetric key encryption and bitwise operations. Additionally, it supports rotation operations, which allows for efficient evaluation of circuits with loops or variable-length operations. 
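Before the side-by-side comparison of these schemes in Table 1, it may help to see the simplest of them end to end. The following is a minimal toy sketch of the symmetric scheme over the integers described earlier in this section; the parameter sizes are far too small to be secure and are chosen purely to make the noise behaviour visible:

```python
import random

# Toy sketch of the symmetric FHE-over-the-integers scheme (insecure, illustration-only sizes).
p = random.randrange(2**14, 2**15) | 1          # secret key: a random odd integer

def encrypt(m):                                  # m is a single bit, 0 or 1
    q = random.randrange(2**20, 2**21)           # large random multiplier
    r = random.randrange(0, 16)                  # small noise, 2r << p
    return m + 2 * r + p * q

def decrypt(c):
    return (c % p) % 2

m1, m2 = 1, 0
c1, c2 = encrypt(m1), encrypt(m2)
assert decrypt(c1 + c2) == (m1 + m2) % 2         # homomorphic addition  (XOR of the bits)
assert decrypt(c1 * c2) == (m1 * m2) % 2         # homomorphic multiplication (AND of the bits)
print("XOR:", decrypt(c1 + c2), "AND:", decrypt(c1 * c2))
```

Each homomorphic operation grows the noise term carried inside the ciphertext; once the accumulated noise becomes comparable to the secret key \(p\), decryption fails, which is exactly the limitation that squashing and bootstrapping are meant to work around.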
\begin{table} \begin{tabular}{|l|c|c|c|c|c|c|} \hline **Scheme** & **Year** & \begin{tabular}{c} **Security** \\ **Level** \\ \end{tabular} & \begin{tabular}{c} **Key** \\ **Size** \\ \end{tabular} & \begin{tabular}{c} **Homomorphic** \\ **Operations** \\ \end{tabular} & **Implementation** & **Limitations** \\ \hline \multirow{2}{*}{Gentry’s FHE} & \multirow{2}{*}{2009} & IND- & \multirow{2}{*}{2\({}^{\wedge}\)80} & Addition, & \multirow{2}{*}{Software} & Slow evaluation \\ & & CPA & & Multiplication & & speed \\ \hline \multirow{2}{*}{\begin{tabular}{l} Brakerski- \\ Gentry- \\ Vaikuntanathan (BGV) \\ \end{tabular} } & \multirow{2}{*}{2011} & IND- & \multirow{2}{*}{2\({}^{\wedge}\)80} & Addition, & \multirow{2}{*}{Software, } & Slow evaluation \\ & & CPA & & Multiplication, & Hardware & speed, large \\ \hline \multirow{2}{*}{ \begin{tabular}{l} Fan- \\ Vercauteren (FV) \\ \end{tabular} } & \multirow{2}{*}{2012} & IND- & \multirow{2}{*}{2\({}^{\wedge}\)40} & Addition, & \multirow{2}{*}{Software, } & Smaller \\ & & CPA & & Multiplication, & Hardware & ciphertext \\ \hline \end{tabular} \end{table} Table 1: FHE Schemes ## 4 Open-SOURCE LIBARARIES To make FHE more accessible and easier to use, several open-source libraries for FHE have been developed. These libraries provide a set of pre-built functions and APIs that allow developers to easily implement FHE in their applications without needing to understand the underlying mathematics and algorithms. Some of the popular open-source FHE libraries include HElib, SEAL, OpenFHE, TFHE, and HEAAN. These libraries support different FHE schemes and have varying levels of complexity and efficiency. Developers can choose the library that best suits their needs based on factors such as performance, ease of use, and compatibility with their existing systems. Open-source FHE libraries are especially valuable for researchers and developers who are working on FHE-based applications but may not have the resources or expertise to develop their own implementation from scratch. By leveraging the work of others, they can quickly and easily incorporate FHE into their applications and advance the state of the art in secure computation. ### HElib HElib [18] is a C\(++\) library for homomorphic encryption, supporting both the BGV and GSW schemes. It provides a simple and efficient API for performing homomorphic operations on encrypted data. HElib has been used in several real-world applications, including secure machine learning and privacy-preserving data analysis. ### Seal SEAL (Simple Encrypted Arithmetic Library) [19] is a C\(++\) library for homomorphic encryption, supporting the CKKS and BFV schemes. It provides a simple and user-friendly API for performing homomorphic operations on encrypted data. SEAL has been used in several real-world applications, including secure machine learning, privacy-preserving data analysis, and secure cloud computing. ### OpenFHE OpenFHE [20] is a C\(++\) library for homomorphic encryption, supporting several schemes including BGV, BFV, CKKS, DM, and CGGI. It provides a flexible and modular API for performing homomorphic operations on encrypted data. Palisade has been used in several real-world applications, including secure machine learning, privacy-preserving data analysis, and secure cloud computing. ### Tfhe TFHE (Fully Homomorphic Encryption over the Torus) [17] is a C\(++\) library for homomorphic encryption, supporting the TFHE scheme. 
It is designed to be more efficient than other FHE schemes, such as the BGV and FV schemes. TFHE has been used in several real-world applications, including secure machine learning and privacy-preserving data analysis. ### Heaan HEAAN (Homomorphic Encryption for Arithmetic of Approximate Numbers) [21] is a C\(++\) library for homomorphic encryption, supporting the HEAAN scheme. It is designed for efficient computation on encrypted data using approximate numbers. HEAAN has been used in several real-world applications, including secure machine learning and privacy-preserving data analysis. ## Conclusion In conclusion, Fully Homomorphic Encryption (FHE) schemes allow computation to be performed directly on encrypted data, without requiring decryption. This is a powerful tool for preserving data privacy, as it enables secure outsourcing of computation to untrusted servers, while maintaining the confidentiality of the data. FHE schemes are based on a variety of mathematical problems, including lattice-based problems, AGCD problems, and NTRU-like problems. Each scheme has its own strengths and weaknesses, and the choice of scheme depends on the specific application and security \begin{table} \begin{tabular}{|l|l|l|l|c|} \hline \multirow{2}{*}{**Library**} & **Supported** & **Programming** & \multirow{2}{*}{**API**} & **License** \\ & **Schemes** & **Language** & & \\ \hline HElib [18] & BGV, GSW & C\(++\) & Simple and efficient & BSD 3-Clause \\ \hline SEAL [19] & CKKS, BFV & C\(++\) & Simple and user-friendly & MIT \\ \hline OpenFHE [20] & BGV, BFV, CKKS, DM, CGGI & C\(++\) & Flexible and modular & Apache 2.0 \\ \hline TFHE [17] & TFHE & C\(++\) & Efficient & LGPLv3 \\ \hline HEAAN [21] & HEAAN & C\(++\) & Efficient computation on approximate numbers & MIT \\ \hline \end{tabular} \end{table} Table 2: Open-source libraries for fully homomorphic encryption (FHE) requirements. Python libraries like Pyfhel and C\(++\) libraries like OpenFHE provide an easy-to-use interface for implementing FHE schemes, enabling developers to experiment and prototype new applications. As FHE continues to advance, it has the potential to enable new privacy-preserving technologies and applications in fields such as healthcare, finance, and data analytics.
2306.03825
Interest-disclosing Mechanisms for Advertising are Privacy-Exposing (not Preserving)
Today, targeted online advertising relies on unique identifiers assigned to users through third-party cookies--a practice at odds with user privacy. While the web and advertising communities have proposed solutions that we refer to as interest-disclosing mechanisms, including Google's Topics API, an independent analysis of these proposals in realistic scenarios has yet to be performed. In this paper, we attempt to validate the privacy (i.e., preventing unique identification) and utility (i.e., enabling ad targeting) claims of Google's Topics proposal in the context of realistic user behavior. Through new statistical models of the distribution of user behaviors and resulting targeting topics, we analyze the capabilities of malicious advertisers observing users over time and colluding with other third parties. Our analysis shows that even in the best case, individual users' identification across sites is possible, as 0.4% of the 250k users we simulate are re-identified. These guarantees weaken further over time and when advertisers collude: 57% of users with stable interests are uniquely re-identified when their browsing activity has been observed for 15 epochs, increasing to 75% after 30 epochs. While measuring that the Topics API provides moderate utility, we also find that advertisers and publishers can abuse the Topics API to potentially assign unique identifiers to users, defeating the desired privacy guarantees. As a result, the inherent diversity of users' interests on the web is directly at odds with the privacy objectives of interest-disclosing mechanisms; we discuss how any replacement of third-party cookies may have to seek other avenues to achieve privacy for the web.
Yohan Beugin, Patrick McDaniel
2023-06-06T16:13:25Z
http://arxiv.org/abs/2306.03825v2
# Interest-disclosing Mechanisms for Advertising are Privacy-_Exposing_ (not Preserving) ###### Abstract. Today, targeted online advertising relies on unique identifiers assigned to users through third-party cookies, a practice at odds with user privacy. While the web and advertising communities have proposed interest-disclosing mechanisms, including Google's Topics API, as solutions, an independent analysis of these proposals in realistic scenarios has yet to be performed. In this paper, we attempt to validate the privacy (i.e., preventing unique identification) and utility (i.e., enabling ad targeting) claims of Google's Topics proposal in the context of realistic user behavior. Through new statistical models of the distribution of user behaviors and resulting targeting topics, we analyze the capabilities of malicious advertisers observing users over time and colluding with other third parties. Our analysis shows that even in the best case, individual users' identification across sites is possible, as 0.4% of the 250k users we simulate are re-identified. These guarantees weaken further over time and when advertisers collude: 57% of users are uniquely re-identified after 15 weeks of browsing, increasing to 75% after 30 weeks. While measuring that the Topics API provides moderate utility, we also find that advertisers and publishers can abuse the Topics API to potentially assign unique identifiers to users, defeating the desired privacy guarantees. As a result, the inherent diversity of users' interests on the web is directly at odds with the privacy objectives of interest-disclosing mechanisms; we discuss how any replacement of third-party cookies may have to seek other avenues to achieve privacy for the web. Targeted advertising, interest-disclosing mechanisms, Privacy Sandbox, Topics API, cross-site tracking, third-party cookies
In our evaluation, we show that an adversary can identify (1) about 25% of the noisy topics on single websites in _one-shot_ scenarios (wherein only one result from the API is observed). As user behaviors are stable across epochs and advertisers record the results returned by the API, we find that (2) the noise removal increases to 49% for 15 epochs and to 94% when 30 epochs are observed in _multi-shot_ scenarios. The identification of the genuine topics of each user lays the foundation for the re-identification of cross-site visits. We find that, contrary to the goal of preventing re-identification entirely, (3) 0.4% of the 250k users we simulate are re-identified by 2 advertisers colluding across websites in one-shot scenarios, and that 17% of them can be re-identified with higher likelihood than just randomly. In multi-shot scenarios, (4) 57% of the users are uniquely re-identified and an additional 38% are matched better than just randomly in 15 epochs, while 75% are uniquely re-identified in 30 epochs and an additional 25% with a higher likelihood. This fact directly violates the privacy goals proposed by Google with Topics over TPCs. On the utility side, we see that (5) Topics is quite useful to advertisers. On average, the Topics API returns at least 1 true topic aligned with user interests in about 60% of cases, assuming the API is used faithfully. We further demonstrate (6) how carefully crafted subdomains can alter this accuracy and be abused to potentially assign unique identifiers to users. This paper shows that the privacy-preserving claims of Topics are directly at odds with user behaviors on the Internet. Other approaches may need to be explored to develop a truly privacy-preserving alternative to TPCs. We make the following key contributions: * We show how natural properties of user interests can break Topics's privacy claims of non-re-identification. Specifically, users with stable interests are as uniquely cross-site trackable with Topics as with TPCs. * We find that Topics does not meaningfully lower the utility provided to advertisers compared to TPCs. We also identify ways to impact Topics's privacy and utility if the API is not used faithfully. * We provide partial mitigations for Topics and discuss how approaches other than interest-disclosing mechanisms may have to be sought for privacy-preserving online advertising. ## 2. 
Background & Related Work **Third Party Cookies & Cross-site Tracking.** Web cookies, which offer websites the ability to record site-specific data in a user's browser, are routinely abused to track users online. With third party cookies (TPCs), advertisers and adversaries alike can assign unique identifiers to web users, track them across different websites, and obtain users' browsing histories. This is used to infer user interests for targeted advertising (Garay et al., 2016; Garay et al., 2017; Garay et al., 2018; Garay et al., 2018). As a result, TPCs have been deprecated by different web actors (the Tor Browser (Torba et al., 2018), Safari WebKit (Safari et al., 2018), Brave Browser (Browser et al., 2018), or Mozilla Firefox (Mozilla Firefox et al., 2018)) while others such as Google Chrome have announced their intention to do so in the near future (Krishnan et al., 2018; Krishnan et al., 2018). **Alternatives for Privacy-Preserving Advertising.** Deprecating TPCs all together without offering any replacement would disrupt how the ad-funded web presently operates. As a result, different organizations are developing privacy-preserving alternatives for personalized advertising. The focus of this paper as well as the majority of these proposals are based on _interest-disclosing mechanisms_. Generally, these solutions compute user categories or assign each user to their interests. When advertisers and publishers want to display an ad to users, that information is used to determine which ad to show. The FLoC proposal (Krishnan et al., 2018; Krishnan et al., 2018; Garay et al., 2018) and later the Topics API (Krishnan et al., 2018; Krishnan et al., 2018), made by Google as part of The Privacy Sandbox (Krishnan et al., 2018; Krishnan et al., 2018; Krishnan et al., 2018), assign users to a group of interests or classify user web histories into topics categories, and then release those categories to advertisers through a web API call. Two other proposals, SPARROW from Criteo (Criteo, 2018) and PARAKET from Microsoft (Mozilla et al., 2018), introduce a trusted third party-respectively, a gatekeeper and an anonymization service-to which user data is disclosed to perform the ad selection process. On the other hand, a different type of proposals for online advertising, like the FLEDGE API (Krishnan et al., 2018) from Google, considers that user data should not leave the browser and so executes the ad auction directly on users' devices. **Federated Learning of Cohorts (FLoC).** With FLoC, an alternative to TPCs developed by Google, participating web browsers weekly compute the interest group (or cohort) their users belong to, based on their browsing histories. Through a reporting mechanism to a central server, Google ensures that the computed cohorts are either composed of enough users or merged with other cohorts in order to provide some \(k\)-anonymity. Advertisers embedded on visited webpages can observe user cohort IDs (Krishnan et al., 2018; Krishnan et al., 2018; Garay et al., 2018). Analysis of FLoC revealed a variety of privacy concerns: (1) requirement in trusting a single actor to maintain adequate \(k\)-anonymity, (2) concern that cohort IDs could create or be linked to fingerprinting techniques, and (3) risk of re-identifying users by tracking their cohort IDs over time and by isolating them into specific cohorts through Sybil attacks (Garay et al., 2016; Krishnan et al., 2018; Krishnan et al., 2018). 
While some parameters and details of FLoC were still unclear, advertisers also had concerns about how to interpret the cohort ID for utility. Google eventually dropped the FLoC proposal and replaced it with the Topics API. **Topics API.** The Topics API aims to replace TPCs for personalized advertising. With the Topics API, the web browser collects and classifies the domains visited by users into topics of interest. The top visited \begin{table} \begin{tabular}{|c||c|c|} \hline \begin{tabular}{c} \begin{tabular}{c} \begin{tabular} \end{tabular} \\ \end{tabular} & \begin{tabular}{c} One-shot \\ \end{tabular} & \begin{tabular}{c} Multi-shot \\ (15-30 epochs) \\ \end{tabular} \\ \hline \hline \multirow{3}{*}{None} & **Noise removal** & **Noise removal** \\ & 25\% of noisy topics & 49-94\% of noisy \\ & removed & topics removed \\ \hline \multirow{4}{*}{ \begin{tabular}{c} Across \\ 2 websites \\ \end{tabular} } & **Cross-site tracking** & **Cross-site tracking** \\ & 0.4\% of users & 57-75\% of users \\ \cline{1-1} & re-identified & re-identified \\ \cline{1-1} & 17\% with higher & 38-25\% with higher \\ \cline{1-1} & likelihood than & likelihood than \\ \cline{1-1} & just randomly & just randomly \\ \hline \end{tabular} \end{table} Table 1. Overview of the risks of Topics’s information disclosure for different scenarios and cases of collusion with a summary of the empirical results obtained in Section 4. topics are updated once per epoch and they are observed by advertisers to select which ad to display to users visiting a website they are embedded on (Zhou et al., 2018; Wang et al., 2019; Wang et al., 2019). We provide more details about the Topics API and how it works in Section 3. **Topics Analyses**(Zhou et al., 2018; Wang et al., 2019). Along with the Topics proposal, Google released a white paper analyzing the risk of third parties re-identifying users visiting different websites (Zhou et al., 2018). First, an analytical evaluation is carried out to compute the aggregate information leakage of Topics for two scenarios (per single and longitudinal leakage) followed by an empirical experiment on a private dataset of synced Chrome users browsing histories. The reported results show that the information learned by a third party is somewhat limited compared to the worst case scenario identified. This analysis is important for the discussion around The Privacy Sandbox proposals, but it also has limitations (some explicitly mentioned by the authors): for instance, (1) it considers that no actor is colluding with each other when in practice advertisers could easily have such incentive, (2) some uniform assumptions about the distribution and observations of topics are made, (3) results are reported in aggregate potentially hiding risks for specific users, (4) the noise in the mechanism is very briefly discussed, and (5) only 2 epochs were considered in the empirical evaluation. Our analysis of Topics explicitly addresses these limitations through a realistic threat model, a more thorough analysis of user topics over time (30 epochs), and a focus on the privacy consequences of the diverse nature of user interests. Following an inquiry from Google on their position about the adoption of the Topics API (Zhou et al., 2018), Mozilla has released a privacy analysis (Wang et al., 2019) that points at shortcomings of Topics and of the re-identification evaluation of Google's white paper. 
Thomson, the author of this analysis, crafts a specific example of one user exhibiting a unique interest among a population, to show the risk of being re-identified through the Topics API. As proposed, a population as small as 70 users would readily leak more information than the upper bound computed by Google. Thomson additionally critiques the use of aggregate statistics, highlighting that privacy guarantees must not only be considered on average across web users but also for individuals. In this paper, we analytically and empirically demonstrate the actual consequences on the Topics API of the diverse nature of user interests. While Thomson explains that the "_users' distribution across topics does not result in actionable information about the privacy properties of the design_", we find that the distribution of topics among the top 1M most visited websites is highly skewed, and use this information to identify some of the noisy topics returned by Topics. Additionally, by simulating 250k users across 30 epochs, we demonstrate that the risks identified in our analysis exist in practice and we can quantify them. Finally, we measure the utility of the proposal for advertisers, a missing aspect from all previous analyses on Topics. **A World Wide View of the Web**(Zhou et al., 2018). Ruth et al., collaborated with Google to access a private dataset about real browsing histories of Chrome users worldwide (Zhou et al., 2018). They were able to extract interesting statistics and details about user browsing behaviors (some previously conjectured by the community) that we use in our empirical analysis. Their results show that web users always visit the same small number of websites (25% of page loads in their dataset come from only six websites with 17% from one website only) and spend most of their time on very few of them (10 websites capture half of users' time spent online). They find that the top 10k and top 1M most visited websites capture respectively from 70% to 95% of user traffic, which justify using these rankings as a proxy to study users web browsing, even though a lot of websites are visited relatively little which skews the analysis towards the tail. If their results show that browsing behaviors tend to be similar across regions for top use cases: users visit websites of similar categories (search engine, video platforms, social networks, pornography, etc.), they also explain that smaller populations and individuals exhibit different and sometimes unique behaviors. Indeed, geographic, cultural, and linguistic differences are for instance observed, and so every unique user web behavior may not be represented through global ranking lists. **Users Have Stable (and Unique) Web Behaviors**(Bahdan et al., 2016; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019; Wang et al., 2019). Multiple studies have been carried out since the early Internet area to measure and evaluate user web behaviors. In the early 2000s, analyzes identify how users revisit websites (Wang et al., 2019) and what browsing trends and patterns they exhibit (Wang et al., 2019). These early studies already find that user browsing behaviors and interests are stable over time, subsequent studies come to the same observation. For instance, Yahoo! shows that webpages of certain types and categories are revisited by the same user over time (Wang et al., 2019; Wang et al., 2019). 
Studying search logs from Bing, Microsoft finds that users exhibit consistent and stable domain preferences over time, even during periods that would disrupt users' daily life (like vacations), and that third parties have the ability to observe these preferences (Wang et al., 2019). Diary studies such as Google's on the use of tablet and smartphone devices also highlight that users have a diverse and yet fixed set of activities they tend to perform repeatedly (Zhou et al., 2018). On top of being stable, user browsing behaviors have also been demonstrated to be unique: communities of interests are used to re-identify users (Bahdan et al., 2016; Wang et al., 2019), and browsing histories are shown to be unique in the original paper from Olejnik et al. (Olejnik et al., 2019) and in the replication study performed by Mozilla a few years after (Wang et al., 2019). ## 3. Exploring Topics In this paper, we use Topics as a canonical example of an interest-disclosing mechanism, as it is currently the most mature proposal to replace TPCs. Our goal is to analyze interest-disclosing proposals in terms of their privacy protections for users and their utility for ad-funded websites. A summary of our notations and values from the proposal are available in Appendix E. ### Topics in Detail With Topics, at the end of an _epoch_\(\epsilon_{0}\) (of size 1 week in the current proposal) the browser-which globally tracks user's history-classifies visited hostnames in order to compute the _top_\(\top=5\)_most visited topics_, which represents user interests. The initial _taxonomy_ is composed of \(\Omega=349\) different topics. To compute topics, the browser first checks if the domain name is present in a _static mapping_ of \(\sim\)10\(k\) most visited websites manually assigned to topics (if any). If not, a _machine learning classifier_ is used (see Figure 1). Hostnames are assigned from zero3 to several topics (one most of the time). Additionally, not all visited websites are taken into account when computing a user's topics of interests in a given epoch \(e_{0}\): only the hostnames of the web pages that opted in to Topics and made a call to the API are. Footnote 3: For some hostnames, no topic is returned because the classification is unknown or the website corresponds to sensitive categories (ethnicity, sexual orientation, etc.). **API CaII.** During epoch \(e_{0}\), when publishers or advertisers embedded on a web page call the Topics API, the browser will return them an _array of maximum \(\tau=3\) topics_: one per epoch before the current epoch. For each epoch, the topic that is returned is either, with probability \(p=0.05\), a _noisy topic_ picked uniformly randomly from the taxonomy or, with probability \(1-p\), a _genuine topic_ picked randomly from the user's top \(\top\) most visited topics for that epoch. These noisy topics are intended to provide plausible deniability to users and ensure that a minimum number of users is assigned to each topic (k-anonymity) [37]. Topics also have a few other requirements: for a genuine topic to be returned to an advertiser, they must have already seen that topic on another website visited by the user in the previous \(\tau\) epochs. If not, they may be able to observe the parent topic of the genuine one in the taxonomy, but only if they have also seen it in the past. Google explains that this _witness requirement_ ensures that the Topics API does not disclose more information than TPCs are able to. 
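A simplified model of this per-epoch selection is sketched below; it is illustrative only and deliberately ignores the witness requirement described above and the per-site caching rule discussed next. The constants are the ones given in the proposal (\(\Omega=349\), \(\top=5\), \(\tau=3\), \(p=0.05\)):

```python
import random

TAXONOMY_SIZE = 349   # Omega: topics in the initial taxonomy
TOP = 5               # T: top topics retained per user per epoch
TAU = 3               # number of past epochs covered by one API call
P_NOISE = 0.05        # probability of returning a uniformly random (noisy) topic

def per_epoch_topic(top_topics):
    """Topic disclosed for one epoch: noisy with probability p, genuine otherwise."""
    if random.random() < P_NOISE:
        return random.randrange(TAXONOMY_SIZE)    # noisy topic, uniform over the taxonomy
    return random.choice(top_topics)               # genuine topic from the user's top T

def topics_api_call(per_epoch_top_topics):
    """Simplified result of one API call: one topic for each of the last TAU epochs."""
    return [per_epoch_topic(tops) for tops in per_epoch_top_topics[-TAU:]]

# Example: a user whose top-5 topics were stable over the last three epochs.
stable_user = [[10, 57, 102, 200, 311]] * TAU
assert len(stable_user[0]) == TOP
print(topics_api_call(stable_user))
```

Because a caller sees a genuine topic with probability \(1-p=0.95\) for each epoch, repeated observations of the same user quickly expose the stable portion of their top topics; this is the intuition behind the denoising and re-identification results summarised in Table 1.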
Also, if a topic is returned for a given epoch on a website, any other subsequent call to the Topics API on that same website by any caller will return the same topic for that epoch. Finally, advertisers may not receive any topic; a user could have opted out of Topics, their web browser does not support the API, they are in incognito mode, etc. **Initial Taxonomy.** Google has released Topics with an initial taxonomy of \(\Omega=349\) topics [46], seemingly curated from the taxonomy of _Content Categories_ of the Google Natural Language Processing API [35]. These topics are alphabetically ordered and divided under 24 parent categories, e.g., the _/Business & Industrial_ topic is a parent of _/Advertising & Marketing_, that is itself a parent of _/Sales_. Parent categories have a minimum, median, and maximum of respectively 3, 11, and 56 topics under them (see Appendix A), with some divided further in subcategories as shown in the previous example. Additionally, Google has removed topics that could be deemed sensitive (ethnicity, sexual orientation, etc.). **Static Mapping.** Google has released a list of manually annotated topics for \(\sim\)10\(k\) domain names [23] that we refer to as the _static mapping_. Consisting of 9254 domains, Table 2 shows the distribution of topics per individual domain. The majority of these domains are assigned very few topics (the median is 1 topic) and 1344 of them do not get assigned any topic from the taxonomy at all, but instead the _Unknown_ topic (likely because their content is considered sensitive). **Model Classifier.** Hostnames that do not appear in the static mapping are classified through the use of a model that has been trained by Google. This classifier is released publicly in the beta version of Google Chrome as a TensorFlow Lite _fflite_ model [40]. From it, we can infer that it is a Bert classifier [17] that accepts as input a string of maximum 128 characters that has been tokenized and padded with spaces if necessary. The output of the classifier is a vector of 350 confidence scores: one for each of the 349 topics of Google's taxonomy to which an additional _Unknown_ topic has been added. Google filters the output of the model according to the algorithm explained in Appendix C. **Proposal Versions.** In this paper, we use the latest version of the individual draft proposal of Topics from May 30, 2023 available on GitHub of short sha commit 24c8789, the taxonomy _v1_ of 349 topics, and the model classifier of version \(1\) (used to be labeled _2206021246_). This corresponds to Topics version _chrome.1:1:2_ being tested in Google Chrome beta at the time of submission of this paper (May 2023). Google plans to start deploying the Topics API gradually with Chrome 115 expected for July 2023 [54]. ### Topics's Threat Model Here, we present a realistic threat model for the Topics API (see Figure 2). This model considers _users_ accessing the content of a _publisher_'s website through their web _browser_. On the website along with the publisher's content, _advertisers_ embed code and scripts to display ads to users according to the result of an ad auction run by an _adtech platform_. Under the Topics approach, advertisers can no longer use third party cookies, but they can call the Topics API to obtain user topics of interest (recall, users are opted-in by default). 
Table 2. Number of topics per individual domain in the static mapping of ~10k domain names annotated by Google.

| Topic(s) per domain | Domains count |
|---|---|
| 0 | 1344 |
| 1 | 4135 |
| 2 | 2350 |
| 3 | 1073 |
| 4 | 270 |
| 5 | 59 |
| 6 | 20 |
| 7 | 3 |

Figure 1. Overview of Topics.

While users and web browsers can be trusted in that they faithfully follow the Topics protocol, a fundamental risk with third parties is that they attempt to re-identify users across websites. The threats are as follows: (1) advertisers will collude (there are strong incentives for them to do so: better targeting users, improving their ad selection, etc.), and (2) third parties (advertisers and publishers) will also try to abuse the API, for instance by tricking users into revealing certain topics by clicking on specific URLs. Finally, even though we do not consider _extensions_ in this paper, as the current Topics proposal is unclear on their role and access to the API, we acknowledge that they may be part of the threat model to consider in future work once Topics is deployed.

### Topics's Privacy and Utility Goals

The Topics proposal describes four goals across _privacy_, _utility_, and _usability_. We next briefly discuss these goals and our corresponding evaluation.

**(G1)**: _"It must be difficult to reidentify significant numbers of users across sites using just the API."_ This is a _privacy_ goal; with Topics it should not be possible for websites to identify that the same user visited them, as this would enable cross-site tracking (Beng et al., 2015; Beng et al., 2015; Beng et al., 2015; Beng et al., 2015). The phrasing used is ambiguous; it is not clear what _"difficult"_ and _"significant"_ precisely mean in that context. To perform our analysis, we define the _difficulty_ of breaking user privacy as the number of websites that the API caller needs to be present on or collude with, the number of topics that they need to observe, and the number of epochs that must be observed. For _significance_, we measure the proportion of \(n\) users re-identified; ideally, to be truly private, the Topics API should make this impossible for _any_ single user.

**(G2)**: _"The API should provide a subset of the capabilities of third-party cookies."_ This is the _utility_ goal of Topics: the API should allow publishers and advertisers to display targeted ads to the right users based on the returned users' topics of interest. Thus, we evaluate how accurately browsing histories map onto topics of interest.

**(G3)**: _"The topics revealed by the API should be less personally sensitive about a user than what could be derived using today's tracking methods."_ This other _privacy-related_ goal states that Topics's privacy disclosure should leak less information about users than what could be inferred from TPCs today. We analyze whether advertisers can denoise the output of the API and re-identify users across websites.

**(G4)**: _"Users should be able to understand the API, recognize what is being communicated about them, and have clear controls.
This is largely a UX responsibility but it does require that the API be designed in a way such that the UX is feasible."_ The last goal mentioned by Google is about _usability_; although it is very important and should be taken into account when developing such an API, especially if it were to be deployed to billions of internet users, we do not consider this aspect in the rest of this paper. The reason is that a totally different set of tools and expertise (e.g., user studies, surveys, and interviews) would be required than the ones we focus on to evaluate the privacy and utility goals. We defer this usability evaluation to future work. The rest of the paper evaluates Topics according to its privacy and utility goals.

### Information Disclosure

By returning user top interests, Topics discloses user information to advertisers and the like. We now analyze the risks associated with this information disclosure (see Table 1) for different cases of collusion between third parties (none, and between advertisers across websites) and scenarios within which the API was called: one-shot (wherein only one epoch is observed per user) and multi-shot (several epochs observed per user). Recall (Section 3.1) that the disclosure of user interests by the Topics API is _limited, noisy_, and its content _differs_ across websites. However, users have stable web behaviors and interests over time (see Section 2), further amplified for their top \(\top=5\) topics collected by their browser in the Topics API. As a result, we must study the consequences of the stability of user interests on Topics's privacy claims.

Figure 2. The different actors for Topics on a website's visit.

**No Collusion - Noise Removal.** Consider the no collusion case: an advertiser is embedded on a website and receives the topics of interest of the users visiting it. A maximum of \(\tau=3\) topics are observed per call. With a probability \(p=0.05\), each topic may be a noisy one picked from the taxonomy instead of being one of the user's genuine interests. This mechanism guarantees that for \(n\) users visiting a website once, an advertiser can expect to observe each topic in the taxonomy a minimum of \(\frac{n\cdot p\cdot\tau}{\Omega}\) times. Now, consider \(N\) and \(G\), the random variables that count the number of noisy and genuine topics in an array of \(\tau\) topics; they follow the binomial distributions \(N\sim\mathcal{B}(\tau,p)\) and \(G\sim\mathcal{B}(\tau,q=1-p)\). With the values from the current proposal, advertisers can expect to get at least 2 genuine topics in 99.275% of the results that they observe in one-shot scenarios (where only one epoch is observed per user). However, from just the outcome of this probabilistic experiment, advertisers cannot determine exactly which topics may be genuine or noisy. Yet, they have a direct incentive to do so, for instance, to better select which ad to display to the user. This raises the question: _can third parties remove the noisy topics returned by the API?_ First, noisy topics are returned whether advertisers have observed them or not for that user in the past epochs, i.e., the witness requirement does not apply. Advertisers who track the topics assigned to websites they are embedded on can therefore easily flag noisy topics they do not have third-party coverage of.
Although we can expect in practice that advertisers will be embedded on a large set of websites, the distribution of topics on the most visited websites could inform advertisers about which topics appear more often because they are noisy than because they are genuine. Indeed, we show in Section 4.2 that not all the topics from the initial taxonomy are in practice observed on the most visited websites, and build a binary classifier to identify the noisy or genuine nature of topics. Second, if a topic is repeated in the array of \(\tau=3\) topics, advertisers can distinguish between noisy and genuine topics. A topic that repeats \(x\) times would be noisy all these times with a probability of \(\left(\frac{p}{\Omega}\right)^{x}\). By the opposite event rule, we have: a topic that appears \(2\leq x\leq\tau\) times is genuine at least once with probability \(1-\left(\frac{p}{\Omega}\right)^{x}\), i.e., more than 99.99% for \(x\geq 2\). Users who have stable interests across epochs have a higher chance of returning repeated topics during a _one-shot_ scenario (i.e., a single call to the API). Similarly, advertisers in a _multi-shot_ scenario (i.e., several calls to the API are observed) have an incentive to collect users' interests across time. Doing so, advertisers amass more information than through a single API call, and can identify, for instance, a user's genuine topics when these repeat over non-contiguous epochs or epochs separated by at least \(\tau\) other epochs. As users have stable topics (see Section 2), their Topics API results across epochs on a website can be seen as a variant of the Coupon Collector's Problem (Copper and Goyal, 2017), and 11 epochs would be necessary in expectation to see each one of the user's genuine topics once (see Appendix D for proof). So, the more an advertiser observes a user across epochs, the more confident it becomes in which topics are truly genuine or noisy; we further study and quantify these risks in our empirical evaluation in Section 4.2.

**Collusion - Cross-site Tracking.** We just saw that advertisers risk being able to remove the noise added by the Topics API, especially for users exhibiting stable interests. But, _can Topics be used to cross-site track users?_ During a given epoch, the Topics API returns a maximum of the same \(\tau=3\) topics to any caller embedded on a given website. For each consecutive epoch, third parties that regularly call the Topics API are returned at most 1 new topic per epoch. This effectively limits Topics's privacy disclosure; specifically, if user interests and their nature were uniform enough, advertisers would not be able to track users across websites. However, if we assume that the set of top \(\top=5\) topics for some user is stable, i.e., remains the same across epochs, a third party could potentially observe all top \(\top=5\) topics of these users in as little as \(\top-\tau+1=3\) epochs. Third parties on other websites can do the same, and collude to re-identify users. The initial taxonomy is composed of \(\Omega=349\) topics, which leaves us with a total of \(\binom{\Omega}{\top}\approx 42\) _billion_ combinations of unique top \(\top=5\) topics. Thus, if some users also exhibit unique interests, they risk being re-identified across websites in both _one-shot_ and _multi-shot_ scenarios.
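The figures quoted above can be checked with a few lines of Python. This is our own back-of-the-envelope verification under the stated parameters (τ = 3, p = 0.05, T = 5, Ω = 349), not code from the proposal.

```python
from math import comb

tau, p, T, OMEGA = 3, 0.05, 5, 349

# P(at least 2 of the 3 returned topics are genuine), G ~ Binomial(tau, 1 - p)
p_at_least_2_genuine = sum(comb(tau, k) * (1 - p)**k * p**(tau - k) for k in (2, 3))
print(p_at_least_2_genuine)            # 0.99275

# A specific topic repeated x = 2 times: prior probability both occurrences came from noise
p_all_noise = (p / OMEGA) ** 2
print(1 - p_all_noise)                 # > 0.9999, i.e. almost surely genuine at least once

# Coupon collector: expected number of genuine draws to see all T top topics once
expected_epochs = T * sum(1 / k for k in range(1, T + 1))
print(expected_epochs)                 # ~11.4 epochs (ignoring the 5% noise)

# Number of distinct top-5 sets over the 349-topic taxonomy
print(comb(OMEGA, T))                  # ~4.2e10, i.e. about 42 billion
```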
Here, it is important to point out that even if users are sharing common interests with others, there exists an arbitrary number of techniques that can be used on top of Topics to further discriminate users into smaller and distinct populations (Beng et al., 2017; Chen et al., 2018; Chen et al., 2019), making the risk of being re-identified real for everyone. In a given epoch, for a same user, calls to the Topics API that originate from different websites do not return the same results every time. This is an attempt at directly making it harder for advertisers that are colluding to re-identify users across websites through one-shot scenarios. However, in multi-shot scenarios where advertisers record topics returned for each user across epochs, more information is accumulated. This directly paves the way for re-identification attacks grouping users by their top \(\top\) topics as demonstrated in Section4.3. The natural diversity and stability of users interests here again conflicts with the privacy guarantees that Topics intends to provide. ## 4. Privacy Evaluation In Section3.4, we find that when no collusion is considered, an advertiser can identify and remove the noisy topics that are returned by Topics in one-shot and multi-shot scenarios. We also discuss that if third parties are colluding, users risk being tracked across websites. We now seek to empirically demonstrate and evaluate these risks: **Q1: To what extent can third parties identify noisy topics? Q2: To what degree can users be tracked across websites?** ### Challenges & Test Bed An empirical evaluation to answer these questions would require some user browsing histories. Unfortunately, no recent and large dataset of such browsing histories is publicly available to evaluate Topics. In the past, researchers have been able to obtain some browsing histories through user surveys, but the sizes of the samples were quite limited (Zhu et al., 2018; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). On the other hand, web actors, like Google, collect such data through opt-in telemetry and reporting programs, however, the resulting dataset is kept private (Zhu et al., 2018). Another option would be to buy data sold by online data brokers, but, for privacy and ethical reasons about the unclear and vague collection methodology of these datasets we immediately discard this possibility (see ethics statement in Section6.4). As a result, we decide to create a test bed, that we plan to release as an _open source artifact_4, to generate synthetic traces of browsing histories of users that participate in the Topics API. Its design is motivated by results published in recent works that studied real user browsing behaviors: in the first one, Mozilla replicated a study on the uniqueness of browsing histories (Mozilla, 2018), while in the second, Ruth et al., partnered with Google and Cloudflare to perform a large-scale measurement of real user browsing patterns (Zhu et al., 2018; Wang et al., 2019). To determine how many unique domains are visited by each user in an epoch, we use the distribution of users per number of unique domains visited reported by Mozilla (2017). To figure out exactly which domains are visited, we use the global distribution of web traffic reported by Ruth et al. (2019) on the top \(1\)M most visited websites from the CrUX top list\(-\)generated by the same research group (Ruth et al., 2019; Ruth et al., 2020). 
CrUX bins the domains by top rank (top 1k, 5k, 10k, 50k, 100k, 500k, and 1M) and to set a relative order within each bin, we use the rank in Tranco (2017) of the _eTLD+1_ of each website in CrUX. Ruth et al., report that they see _"Google, Youtube, Facebook, WhatsApp, Robox, and Amazon within the top six sites for at least ten countries"_(Ruth et al., 2019), we use the main FQDN of these organizations5 for the top 6. Additionally, for the top 100, we leverage the ordered list of the top 100 _eTLD+1_ globally returned by Cloudflare Radar's Domain Ranking API (Dian et al., 2019). We can now sample browsing histories for each user according to their number of unique visited domains. These histories are then classified into the topics that would be returned by the Topics classifier. As we are missing frequency information about the number of visits to each unique website, we sample sets of possible top \(\mathsf{T}=5\) topics of interests among the topics observed. Finally, advertisers in our evaluation are considered to have a large third party coverage of the web, as a result, they can observe any topic from the taxonomy for every user. This effectively removes the witness requirement of Topics but also considers a worse case threat model that is also more realistic; today some advertisers have already an important third party coverage of the Internet as demonstrated by several studies (Brandes et al., 2015; Brandes et al., 2015; Brandes et al., 2015; Ruth et al., 2019; Ruth et al., 2020; Ruth et al., 2020). Footnote 5: [www.subdomain.][organization’s name]. [com global top level domain] ### No Collusion and Noise Removal In this section, we analyze the possibility for advertisers to identify and remove the noisy topics returned by the API. Recall that advertisers have an incentive in doing so to improve, for instance, the selection of relevant ads to display to users. As discussed in Section3.4, if the distribution of topics on the most visited websites is non-uniform, advertisers can use it as prior information to discriminate topics on their genuine or noisy nature. So, we ask here: _what is the distribution of topics on the most visited websites on the Internet?_ and _how can we leverage that information in the context of the Topics API?_ **Topics Distribution as a Prior.** On Figure3, we plot the histogram and the empirical cumulative distribution of how many individual topics are observed per number of classified domains for (a) the static mapping, (b) the top \(1\)M most visited websites6 from CrUX (Ruth et al., 2019; Ruth et al., 2020; Ruth et al., 2020), and (c) the top \(1\)M most visited _eTLD+1_ from Tranco (2017) classified with the Topics API. Footnote 6: CrUX provides a list of the top \(1\)M most visited origins which represents 991 656 unique domains in practice (some appear twice depending on the _http_(s) protocol used) The results show that the distributions of observations of each topic is very non-uniform. First, the number of unique domains on which each topic is observed tends to be rather small: the median is of only 3, 66, and 189 unique domains per topic respectively for the classifications of the static mapping, CrUX, and Tranco. Then, we observe that a moderate number of topics from the initial taxonomy are never observed at all: 95, 42, and 38 topics respectively. 
Finally, a few topics are seen on a significant number of domains: 3 topics are seen on more than 10% of the static mapping, while 1 topic in particular (_Arts & Entertainment_) is seen in the classification of 187 278 domains on CrUX (18.8%) and of 176 204 domains on Tranco (17.6%). Given that the list of top \(1\)M most visited websites represents 95% of all page loads on the internet (Ruth et al., 2020; Ruth et al., 2020), the distribution of topics among users is highly skewed as well. Knowing this distribution, third parties can easily build a binary classifier to distinguish between the noisy and genuine nature of topics. The first approach that can be taken is to set once a global minimum number of websites, among the top most visited websites, on which topics must appear to be considered genuine. As our goal is only to demonstrate the capability of using this distribution as a prior, we do not explore more advanced options. But we can expect that advertisers could adapt their strategies to the website they are embedded on, the population likely observed, other biases, etc. Back to our simple classifier; to determine which global threshold on CrUX to use to identify topics that never or very rarely appear on the Internet genuinely, we generate synthetic browsing histories for a population of 52k users, simulate the Topics computation for 1 epoch, and record the performance of our classifier. We present in Table 3 the raw results of the classifier for the following thresholds: 0, 1, 2, 5, 10, 20, 50, 100, 500, and 1k websites; the positive class of our classifier corresponds to the noisy nature of the topic observed, while the negative one to genuine. From these results, we identify and set a global minimum threshold of websites from CrUX on which topics must be observed; we pick 10 websites for the rest of our analysis as, for this threshold, our classifier still has a very high true positive rate (TPR) for a better false positive rate (FPR) than smaller thresholds in our simulation on 52k users.

Figure 3. Distribution of the observations of each individual topic on the domains for each corresponding top list.

**One-shot Scenario.** In the one-shot scenario, advertisers only observe 1 result returned by the API in a given epoch, i.e., \(\tau=3\) topics per user. Even though this information disclosure is very limited, a topic that repeats in the returned array of a user is far more likely to be genuine than noisy, as explained in Section 3.4. We use this fact and our binary classifier to measure how many of the noisy topics observed through the API can be flagged as such by advertisers. For that, we generate a new population of 250k users that have stable interests across epochs. We simulate an advertiser that observes the results returned by only one call to the Topics API for each user, i.e., a total of 750k topics are observed, of which 37.5k are expected to be noisy. The results of our noise removal mechanism are as follows: an accuracy of 96.1%, with a precision of 24.7% (higher than the 95% accuracy and 0% precision a naive classifier always outputting genuine would have achieved), a true positive rate (TPR) of 93.9%, and a false positive rate (FPR) of 3.8%. Thus, we find that our classifier successfully identifies about 25% of the noisy topics in one-shot scenarios.

**Multi-shot Scenario.** Here, we are interested in the multi-shot scenario wherein advertisers record across epochs the topics they observe for every user.
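Before turning to those multi-shot results, the denoising heuristics described above can be sketched as follows. This is our own simplified rendition, not the exact code used in the evaluation: the CrUX-based counts are a placeholder input, and the multi-epoch rule reflects one reading of the repetition criterion described in this section.

```python
def flag_noisy(observed, crux_domain_counts, threshold=10):
    """One-shot heuristic: a topic observed on fewer than `threshold` domains of
    the top 1M (CrUX) classification is flagged as noisy, unless it repeats
    inside the same 3-topic array (in which case it is very likely genuine)."""
    repeated = {t for t in observed if observed.count(t) > 1}
    genuine, noisy = set(), set()
    for t in set(observed):
        if t in repeated or crux_domain_counts.get(t, 0) >= threshold:
            genuine.add(t)
        else:
            noisy.add(t)
    return genuine, noisy

def genuine_across_epochs(history, tau=3):
    """Multi-shot heuristic: topics that repeat over epochs separated by at
    least `tau` other epochs are kept as genuine."""
    genuine = set()
    for i, topics_i in enumerate(history):
        for j in range(i + tau + 1, len(history)):
            genuine |= set(topics_i) & set(history[j])
    return genuine
```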
We use the same population of 250k users with stable interests as previously, and now simulate a call to the Topics API at every epoch for 30 epochs. We still use the binary classifier built previously to determine topics that are noisy, but when more than 1 epoch is observed, we use the observation made earlier in Section 3.4; if topics repeat over non-consecutive epochs or epochs separated by at least \(\tau\) other epochs, we consider them genuine. So, in the multi-shot scenario, we first attempt to identify genuine topics; if we are able to recover the full top \(\tau=5\) stable topics for each user, we then mark all other topics as noisy. In the case where not enough repetitions have been observed, we use the same binary classifier as in the one-shot scenario7. We report next the results of this denoising process for each number of observed epochs from 1 to 30 epochs. On Figure 4, we respectively plot the evolution across epochs of the accuracy, precision, TPR, and FPR of our noise classifier as well as the median number of genuine topics among the 250k users an advertiser can retrieve. As expected; the more epochs an advertiser observes results from the Topics API, the more confident it becomes in which topics correspond to genuine or noisy ones, ultimately recovering all the top \(\tau=5\) genuine topics of stable users. Note that advertisers being able to do so, prevents users from claiming plausible deniability of being interested in some topics, which was what Google expected to provide by adding these noisy topics as per their privacy goal (**G3**). Footnote 7: Note that when only 1 epoch is observed this multi-shot noise removal is equivalent to the one-shot noise removal presented before. ### Collusion and Cross-site Tracking One of Topics's privacy goals is to prevent third parties from being able to re-identify users across websites (**G1**). Here, we consider that 2 advertisers \(A\) and \(B\) are colluding and sharing the topics they observe on 2 websites \(w_{A}\) and \(w_{B}\) they are respectively embedded on. \(n\) users visit these 2 websites at every epoch and the advertisers are trying to re-identify a portion of or all users across these two websites. We ask: _how many users are they able to re-identify?_ We empirically measure this cross-site tracking risk by using the 250k simulated users we have and generating, for each, the views across 30 epochs that these 2 advertisers calling the Topics API would observe. Note that this is the same setting considered by Google in their white paper released along with Topics (Krishnan et al., 2018): advertiser \(A\) gets access to all the topics observed for each user \(\{u_{i,B}\mid i\in[1,n]\cap\mathbb{N}\}\) by advertiser \(B\) and advertiser \(A\) attempts to match the user \(u_{j,A}\) for some given \(j\in[1,n]\cap\mathbb{N}\) they have observed on website \(w_{A}\) with the correct \(u_{j,B}\) seen on \(w_{B}\). In practice, if Topics's goal (**G1**) is respected, advertiser \(A\) should not be successful with a probability higher than \(\frac{1}{n}\) for each user, i.e., no better than a random guess. We consider that a given user observed on \(w_{A}\) is _uniquely_ re-identified when it is exactly matched to its correct identity on the other website \(w_{B}\). We say that a user has a _higher likelihood_ of being re-identified than randomly if advertiser \(A\) identifies a group of users seen by \(B\) of size \(k\) that contains the target user and such that \(\frac{1}{n}<\frac{1}{k}\). 
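The matching step itself can be sketched as follows; this is an illustrative simplification with names of our own, while the exact scoring used in our evaluation is described in the next paragraph. Advertiser \(A\) simply ranks \(B\)'s users by topic overlap and keeps the best-scoring group.

```python
def reidentify(topics_A, users_B):
    """Map a user profile observed on w_A (a set of denoised topics) to the
    group of users on w_B sharing the largest topic overlap with it."""
    best_score, best_group = -1, []
    for user_id, topics_B in users_B.items():
        score = len(topics_A & topics_B)
        if score > best_score:
            best_score, best_group = score, [user_id]
        elif score == best_score:
            best_group.append(user_id)
    return best_group   # a group of size k == 1 means the user is uniquely re-identified
```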
Applying the techniques presented in the previous section on noise removal, each advertiser simulated in our evaluation filters the noise from the observations of topics, keeps only the observed topics deemed not noisy, and comes up with the topics that are the most likely to be in the top \(\top=5\) of each user. Each user that visited website \(w_{A}\) is then mapped to the user(s) on website \(w_{B}\) that have the most topics in common between the sets of genuine topics (i.e., those each advertiser thinks are part of the top \(\top=5\) of that user) and of observed topics not deemed noisy by the classifier built in previous sections. Note that the roles of advertisers and websites \(A\) and \(B\) are interchangeable in this generic setting of the cross-site re-identification experiment.

Table 3. Results of our classifier for different thresholds. The threshold of 10 (in bold) is the one used in the rest of the paper.

| Threshold | Accuracy | Precision | TPR | FPR |
|---|---|---|---|---|
| 0 | 0.956 | 0.130 | 0.938 | 0.044 |
| 1 | 0.957 | 0.144 | 0.915 | 0.043 |
| 2 | 0.959 | 0.173 | 0.905 | 0.041 |
| 5 | 0.959 | 0.207 | 0.894 | 0.040 |
| **10** | **0.961** | **0.246** | **0.900** | **0.038** |
| 20 | 0.956 | 0.284 | 0.681 | 0.038 |
| 50 | 0.958 | 0.364 | 0.658 | 0.033 |
| 100 | 0.959 | 0.442 | 0.619 | 0.028 |
| 500 | 0.912 | 0.644 | 0.315 | 0.020 |
| 1000 | 0.872 | 0.752 | 0.246 | 0.015 |

**One-shot and Multi-shot Results.** In the one-shot scenario, we find that 0.4% of the 250k users we simulate are uniquely re-identified and that 17% of them can be re-identified with a higher likelihood than just randomly by 2 websites that are colluding. Note that while the numbers obtained in the one-shot scenario are low, they are not null and some users (a total of 17.4%) can be re-identified across websites, which already violates Topics's goal (**G1**). In multi-shot scenarios, the violation grows larger the more epochs are observed: 57% of the users are uniquely re-identified and an additional 38% with a higher likelihood than just randomly when 15 epochs are observed (for a total of 95% of the users), while 75% and an additional 25% of the users are respectively re-identified uniquely and with a higher likelihood for 30 epochs (total of 100% of the users). These results across epochs are aligned with the ones obtained when removing the noisy topics in the previous section; recall that the more epochs an advertiser observes, the more genuine topics among users' stable top \(\top=5\) they retrieved (Figure 3(c)), and as a result the more users are re-identified, defying Topics's goals (**G1**) and (**G3**). On Figure 5, we now plot the cumulative distribution function for different epochs of the proportion of users observed by advertiser \(A\) across each group size \(k\) of re-identified users observed by advertiser \(B\). As shown, the proportion of uniquely re-identified users can be obtained for \(k=1\), but this graph also illustrates the evolution of the level of \(k\)-anonymity (directly related to the size of the re-identified group) that a user in our simulated population of 250k users can expect across epochs. These results directly inform us on how "_difficult_" it is to re-identify "_significant numbers of users across sites_" (**G1**); for instance, for 15 epochs, over 60% of the users cannot be guaranteed strictly more than 10-anonymity in our evaluation.

Figure 4. Multi-shot noise removal results for 250k stable users simulated across 30 epochs.

Figure 5. Distribution of the size \(k\) of the re-identified groups of users for different epochs.

## 5. Utility Evaluation

In Section 4, we evaluate the privacy claims of the Topics API.
We now do the same for the utility claim. Advertisers and publishers are interested in serving ads that correspond to user interests. This is to maximize the outcome of the ad impressions and get conversions (click, visit, order, etc.). Thus, we ask: **Q3: How accurate is the mapping of the Topics API from domains visited by users onto topics of interest?** We answer this by comparing the classifier accuracy from different approaches: first, we measure the performance of the Topics model by comparing classification results with the static mapping published by Google. While not entirely confirmed, this static mapping likely constitutes a part of the training or fine-tuning dataset for the Topics model. We then extend this evaluation to the top 1M most visited websites using publicly available data on site content. Finally, we look at the Topics model's resistance to manipulation, evaluating the ability to craft subdomains that are misclassified by the model.

### Static Mapping Reclassification

As explained in Section 4.2, domains to be classified are first checked against a static mapping of 9254 domains. If the domain is not present in this static mapping, it is classified by the Topics model. We first ask: _does the model reflect human decisions?_ We evaluate this question by measuring the accuracy of the classifier on the static mapping, which was manually annotated by Google and so can be considered a form of ground truth. After reclassifying these 9254 domains, we compute inferred topics for each site using two different filtering techniques. First, we apply the same filtering used by the Chrome browser (_Chrome filtering_, see Appendix C). This filtering outputs a maximum of 3 topics per domain, but the ground truth dataset has anywhere between 0 and 7 topics associated with each hostname. To allow for a best-case characterization of Topics's utility, we introduce a second filtering step that is more conservative: _top filtering_ retains the same number of topics as seen in the ground truth. In this way, Topics is given the best chance possible of matching topics in the ground truth dataset. For each filtering strategy, we obtain two topic sets: a set of \(\{predicted\}\) and \(\{actual\}\) topics that we compare with different metrics reported in Table 4 (see Appendix E for formula details). Note here that the difference we observe between accuracy and balanced accuracy can be explained by the fact that the most frequent classes contribute more to accuracy than to balanced accuracy, where each individual class contributes equally through its recall (Krizhevsky et al., 2017). From the results we can make several observations. First, we focus on the proportion of sets where all, some (Jaccard index, Dice coefficient, and Overlap coefficient averages), or at least one predicted topic are correct. These metrics show that at its best, the Topics model outputs at least one topic in common with the ground truth on 65% of the domains of the static mapping. Note that Google did not disclose if this static mapping was used to train or fine-tune Topics's model classifier, though our results would be broadly consistent with this. However, to understand how the Topics model generalizes beyond potential training data, we next explore the classifier's performance on a broader set of domains.
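The set-comparison metrics reported in Table 4 are standard overlap measures; as a reference, a minimal helper of our own is given below. The exact formulas used in the paper are in its Appendix E, so the definitions here, in particular the reading of "all topics correct", are our own assumptions.

```python
def set_metrics(predicted, actual):
    """Overlap metrics between a predicted and an actual (ground-truth) topic set."""
    inter = len(predicted & actual)
    return {
        # one reading of "all topics correct": every predicted topic is in the ground truth
        "all_correct": predicted <= actual and len(predicted) > 0,
        "at_least_one": inter > 0,
        "jaccard": inter / len(predicted | actual) if predicted | actual else 1.0,
        "dice": 2 * inter / (len(predicted) + len(actual)) if predicted or actual else 1.0,
        "overlap": inter / min(len(predicted), len(actual)) if predicted and actual else 0.0,
    }

print(set_metrics({"/Arts & Entertainment", "/Games"}, {"/Games"}))
```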
### Top 1M Most Visited Websites To evaluate the model classifier of Topics more systematically, we ask _what would be the accuracy of the classifier on the most visited websites?_ Thus, we classify the top 1M most visited websites as reported by the CrtUX top list. We first manually verify a subsample of the classification and then introduce a more systematic way to perform the comparison using one of Cloudflare's APIs. **Manual Verification.** Following a conservative sampling approach, we extract a uniform sample of 385 domains and the top topic they are classified into with highest confidence. The manual verification is done by pulling the meta description of each website and displaying it along with the domain and classified topic. If the meta description can not be obtained for various reasons (website unreachable, script blocking, etc.), we manually get it from a Google search. If the description is not in one of the different languages spoken by the authors, we translate it. Finally, we judge that labels returned by Topics are correct in 38% of the cases, somewhat related in 9%, and incorrect in 53% with a 95% confidence and a margin of error of 5%. **Cloudflare Categorization.** Building on this manual verification, we automatically compare the Topics classification to the content categories returned by the Cloudflare Domain Intelligence API (Krizhevsky et al., 2017; Krizhevsky et al., 2017). This service is provided through Cloudflare Radar; different data sources (website content, manual classification, etc.) are used to determine these categories, and domains that are miscategorized can be reported along with suggestions for domains not classified yet (Krizhevsky et al., 2017). These categories are used in some of Cloudflare products to filter or block traffic based on certain categories (security risks, adult themes, etc.). In order to compare Google's Topics classification to this categorization, we manually map the Topics's taxonomy to Cloudflare's 150 content categories. First, we assign sensitive categories (Adult Themes, Drugs, Religion, etc.) from the Cloudflare list to the _Unknown_ interest from Topics, we then assign each of the remaining 349 topics to every content category it could be mapped into. We further refine our assignment by looking at the domains that do not correctly get their topics mapped to content categories from the static mapping (which explains the high performance results on the static mapping). Then, we categorize the top 1M most visited websites from the CrUX top list with the Cloudflare API, and we keep only the domains for which some content categories are returned. For each domain, we end up mapping these categories to a set of topics that we can compare to the ones predicted by Topics. Table 5 shows for different ranks of top most visited websites the overlap coefficient average and the proportion of domains for which there is at least one topic in common between both classifications, using the remapped Cloudflare's output as ground truth. These results show that the Topics's classification is also quite accurate beyond the \(\sim 10k\) websites from the static mapping; indeed on the domains from the top 1M that are categorized by the Cloudflare API, at least 1 topic is output in common by Topics on 57% of the cases. We conclude that this matching implies moderate utility of the Topics model for targeted advertising (**G2**). 
In practice, user interests are tagged heuristically based on visits to many sites, and the aggregation of these visits would reduce classification mistakes and improve accuracy over these per-site results.

Table 4. Model performance on static mapping.

| Metric | Top filtering | Chrome filtering |
|---|---|---|
| Accuracy | 0.55 | 0.48 |
| Balanced accuracy | 0.24 | 0.23 |
| All topics correct (ratio) | 0.46 | 0.34 |
| Jaccard index (average) | 0.53 | 0.48 |
| Dice coeff. (average) | 0.56 | 0.52 |
| Overlap coeff. (average) | 0.56 | 0.61 |
| At least one correct (ratio) | 0.65 | 0.63 |

### Crafting Subdomains

Motivated by a discussion on the proposal (Krizhevsky et al., 2017), we study the privacy and utility trade-off of not allowing advertisers to set their own topics. At present, the Topics model only uses the domain names of the websites as input; this is limited information to work with compared to potentially having access to some content of the website (such as the meta description, for instance) or having publishers provide their own topics or classification hints. The purpose of this limitation is ostensibly to reduce the ability of website operators to influence their site tagging (and thereby reduce API abuse to assign unique identifiers to users). However, this also reduces the utility of the system because many domain names are incorrectly classified. In this section, we evaluate if this utility trade-off actually provides defense against manipulation: _can publishers influence the classification through the use of carefully crafted subdomains?_ To demonstrate the extent to which this is possible, we carry out the following experiment, where we craft subdomains for each of the top 10k most visited domains (the procedure is sketched below). As a preliminary step, we classify with Topics the full English dictionary provided by WordNet (Vaswani et al., 2017). Then, for each domain in the top 10k, we craft 350 subdomains by prepending to it the English word output with the highest confidence for each topic. We classify with Topics this total of 3.5M new domains and interpret the results through the following two types of misclassifications: _untargeted_ and _targeted_. An untargeted misclassification would be used to eliminate an undesirable topic association from a site, whereas a targeted misclassification could introduce a new, desirable topic association. This could be done to improve the mapping, help publishers or advertisers observe topics from websites that they would not be embedded on otherwise, and to some extent alter the topics of some users (with a potential privacy risk of assigning unique interests). Figure 6 shows the cumulative distribution for the original domains of the number of crafted subdomains that were respectively misclassified. For untargeted, we look at whether the classification of the newly crafted domain changed at all from the classification of the initial domain; this happens for almost all the crafted subdomains for any domain in the top 10k. We are also interested in targeting a specific misclassification; in this case, we are successful on more than 114 crafted subdomains for half of the initial 10k domains, and at most we are able to successfully craft a total of 235 targeted subdomains. These results show that the model classifier is quite susceptible to changes to the initial domain, as untargeted misclassification is successful in almost all the cases.
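The crafting procedure referenced above can be outlined as follows. This is our own sketch: `classify_topics` and `best_word_for_topic` are hypothetical stand-ins for running the released TFLite classifier on a hostname string and for looking up the per-topic highest-confidence WordNet word, neither of which we reproduce here.

```python
def craft_subdomains(domain, taxonomy, best_word_for_topic, classify_topics):
    """For one domain, prepend the highest-confidence English word for each
    topic and count untargeted / targeted misclassifications."""
    original = classify_topics(domain)
    untargeted, targeted = 0, 0
    for topic in taxonomy:
        crafted = f"{best_word_for_topic[topic]}.{domain}"
        predicted = classify_topics(crafted)
        if predicted != original:
            untargeted += 1            # classification changed at all
        if topic in predicted and topic not in original:
            targeted += 1              # the intended topic was introduced
    return untargeted, targeted
```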
For targeted, while our approach is quite simple, it is sufficient to target a fair number of topics. We are only interested in showing that capability here, but one could improve these results by applying for instances techniques from adversarial machine learning (Beng et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019). Our results demonstrate that publishers can craft specific subdomains to set their own topic(s). They can implement this in practice by redirecting users to them to influence the Topics computation. However, this is contradictory to the security goal that domain-based classification gives up utility to achieve. Using more site data to determine classification would further exacerbate this issue. As is, Topics does not achieve an optimal trade-off between security and utility, where restricting classification further (such as basing only on _eTLD+1_) would provide more security with minimal utility trade-off. As the current system effectively allows sites to set their own topic, we argue that such a feature should either be made openly available to publishers in an accessible and easy-to-audit way-incentivizing honest participation-or further restricted. ## 6. Discussion The privacy risks we observe with Topics arise from the API relaying the underlying distribution of user interests. Users have diverse web behaviors that are unlikely to change and so natural properties of their interests, such as heterogeneity, stability, and uniqueness, are reflected through the API results. These can be exploited as shown in Section 4, but parts of the system can also be modified to try to make the Topics observations less skewed and more uniform. Based on this, we next propose two partial mitigations, and then discuss how other types of approaches that may hold part of the solution or at least ideas to enable privacy-preserving online advertising must also be evaluated. ### Changes to the Taxonomy and Classifier Recall that in Section 4, we observe that the distribution of topics is highly skewed on the top 1M most visited websites. A direct consequence is that some topics are more likely to be noisy when observed by advertisers than genuine. The first type of mitigations consists in modifying this distribution by designing a new taxonomy and/or classifier so that all the topics appear more uniformly than they currently do. Without more details about how Google trained and fine-tuned this model classifier, it is difficult to suggest specific changes other than considering: (1) a larger training dataset with observations of all the classes, (2) extending the static mapping with observations for every topic, (3) providing more information to the classifier than just the hostname of the website (although this also introduces accuracy and privacy issues), and (4) ensuring \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Metric & Static Mapping & Top 1k & Top 5k & Top 10k & Top 50k & Top 100k & Top 500k & Top 1M \\ \hline Number of websites compared & 2500 & 423 & 1977 & 3880 & 18160 & 35466 & 172529 & 347686 \\ Overlap coefficient (average) & 0.88 & 0.76 & 0.75 & 0.72 & 0.62 & 0.59 & 0.55 & 0.53 \\ At least one topic correct (ratio) & 0.94 & 0.81 & 0.80 & 0.76 & 0.66 & 0.63 & 0.59 & 0.57 \\ \hline \hline \end{tabular} \end{table} Table 5. Model performance when using Cloudflare Domain Intelligence API returned categories as ground truth Figure 6. 
Cumulative distribution of the number of successful targeted and untargeted misclassifications per domain. that every topic from the taxonomy appears genuinely on the most visited websites. Modifications to the classifier can also be accompanied by changes to the taxonomy of topics. For instance, we can imagine (1) removing altogether topics that appear very little in practice (although this impacts accuracy for users and specific websites from these categories), or (2) splitting topics that appear a lot in subtopics and merging the ones that appear less under their parent category or with others. However, to do so, a crucial hypothesis is made about which domain the observations are made on. For instance, fixing the classifier and/or the taxonomy so that every topic on the top 1M most visited websites appears a minimum of times, does not imply that this would also be the case on the top 10k, or 100k, or for the visitors of a website about some given subject. This is only a partial mitigation, as it does not address the underlying diverse nature of all user interests. Moreover, this can directly impact the accuracy and level of utility of the API. ### Introducing Noise Adaptively For multi-shot scenarios, we show in Section 4 that third parties are able to remove noisy topics and recover users' genuine top interests. At the moment, the noise that is added by Topics does not have a real impact in this scenario; this is because the stability of user interests across epochs nullifies the assumption of independence between epochs. Instead, we could envision entrusting web browsers to adaptively adjust the noise being output based on what advertisers have already been able to observe about users. Web browsers have a unique vantage point on the global computation for every user: they know the websites they visit, user's top topics of interest, if they are stable or not, and they also already keep track of who calls the Topics API, on which websites, etc. What is missing is a mechanism in web browsers that: (a) models different scenarios based on the nature of user behaviors, (b) keeps track of the state of their user's privacy disclosure, and (c) blocks further leakage if advertisers know too much already. This mitigation would require modeling out the amount of disclosure users are comfortable with (likely through a choice of different options that correspond to different privacy and utility trade-offs). In that respect, techniques based on differential privacy (Bauer et al., 2017; Bauer et al., 2017) could maybe be adapted to Topics. Such a technique would need to be executed quickly by web browsers, i.e., in less time that it takes to load a web page. The complexity of such a mechanism and its impact on utility would need to be further analyzed. A caveat to this potential mitigation is that even though what is returned by the API to different advertisers can be recorded by the web browser, it remains unclear how to account for data sharing and collusion between advertisers outside the context of the Topics API. ### Other Avenues & Related Work By design, interest-disclosing mechanisms report user information to third parties, other avenues or ideas to replace TPCs with a truly private solution may be found in additional proposals of different nature. Other alternatives, such as the FLEDGE(Feldberg et al., 2017) and TURLEDOVE (Krishnan et al., 2017) proposals, assume a different setting wherein ad selection is done locally in web browsers without user data ever leaving their devices. 
However, more work remains to be done to evaluate these proposals. Additionally, building a more private web goes beyond the replacement of just TPCs. Other open challenges include how to perform, record, and communicate conversion and impression metrics in a private way that still lets advertisers get the utility they would like. Google is for instance leading the development of The Privacy Sandbox initiative that, _"aims to create technologies that both protect people's privacy online and give companies and developers tools to build thriving digital businesses"_(Krishnan et al., 2017). For the Web, Google's proposals also aim at preventing fraud and spam (Private Stakes Tokens API (Krishnan et al., 2017)), measuring ads conversion (Attribution reporting API (Krishnan et al., 2017)), and reducing cross-site privacy exposure (First Party Sets (Krishnan et al., 2017), Shared Storage API (Krishnan et al., 2017), CHIPS (Krishnan et al., 2017), etc.). For the Android mobile OS, the goals are similar: reducing user tracking by deprecating access to cross-app identifiers such as Advertising ID, and limiting the scope that third party libraries in applications can access (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). Other organizations contribute to the Privacy Sandbox or to their own projects, for instance: Apple with Private Click Measurement (Feldberg et al., 2017), Brave with Brave Private Search Ads (Bauer et al., 2017), or Meta and Mozilla with Interoperable Private Attribution (Krishnan et al., 2017; Krishnan et al., 2017). Work remains to be done to seek and evaluate these other different avenues to a truly private web. ### Ethics Statement In this paper, we chose for ethical reasons and privacy concerns not to collect or acquire any dataset of real user browsing histories. Instead, we use publicly available aggregated ranked lists of top visited websites (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017) and rely on recently published research papers with results about web user browsing behaviors (Krishnan et al., 2017; Krishnan et al., 2017) to generate synthetic traces. We find it to be the only way to pursue and evaluate these proposals' claims without having access to real and recent browsing history datasets (i.e., without being one of the large web actors) and without sustaining the business model of data brokers. We hope that by releasing our code as an open-source artifact, we will inspire other studies and researchers to adopt a similar approach. ## 7. Conclusion Several privacy-preserving alternatives like the ones from The Privacy Sandbox are being developed at the moment by web and online advertising actors. With the deployment of these alternatives, the modifications being introduced could impact billions of users and lead to a better web ecosystem-some proposals even aim at improving user privacy beyond the web. However, this could also very well be for the worst, if we replicate similar errors to the ones that were made in the past with the same technologies that we are trying to replace today. As a result, it is of the upmost importance to pay attention to the changes being proposed, analyze and evaluate them, and attempt to foresee their potential consequences in the context of realistic user behaviors. 
In this paper, we have taken on this endeavor for interest-disclosing alternative mechanisms for online advertising, such as Google's Topics API (currently aimed to be gradually deployed to Chrome users starting July 2023 with Chrome 115), and we have shown how their privacy objectives and design are directly at odds with the natural diversity of user behaviors.

## Acknowledgments

We would like to thank Cloudflare for allowing us to classify the top 1M most visited domains with their Domain Intelligence API by increasing the API rate limits of our account. The authors also thank Eric Pauley, Ryan Sheatsley, Kunyang Li, Quinn Burke, Rachel King, and Blaine Hoak for their feedback on initial versions of this paper. **Funding acknowledgment:** This material is based upon work supported by the National Science Foundation under Grant No. 1900873. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
2306.06785
Estimating the error term in the Trapezium Rule using a Runge-Kutta method
We show how the error term for the Trapezium Rule can be estimated, by solving an initial value problem using a Runge-Kutta method. The error term can then be added to the Trapezium approximation, yielding a much more accurate result. We also show how the risk of singularities in the relevant initial value problem can be mitigated.
J. S. C. Prentice
2023-06-11T21:53:56Z
http://arxiv.org/abs/2306.06785v1
# Estimating the error term in the Trapezium Rule using a Runge-Kutta method ###### Abstract We show how the error term for the Trapezium Rule can be estimated, by solving an initial value problem using a Runge-Kutta method. The error term can then be added to the Trapezium approximation, yielding a much more accurate result. We also show how the risk of singularities in the relevant initial value problem can be mitigated. ## 1 Introduction Students of applied mathematics, and numerical analysis in particular, have no doubt encountered the Trapezium Rule [1][2] and Runge-Kutta (RK) methods [3]. They will very likely have met the error term for the Trapezium Rule, and they may even know how to estimate an integral using a RK method. However, they may not know that the Trapezium Rule's error term can be estimated by solving a suitable initial value problem using, of course, a RK method. Once the error term is known, the Trapezium approximation can be considerably improved. This interesting and potentially useful device is the topic of this paper. ## 2 Theory We have the equation \[\int\limits_{a}^{x}f\left(x\right)dx = \frac{\left(x-a\right)}{2}\left(f\left(a\right)+f\left(x\right) \right)-\frac{\left(x-a\right)^{3}}{12}f^{\prime\prime}\left(\xi\left(x\right)\right) \tag{1}\] \[\xi \in \left(a,x\right).\] Here, \(f:\mathbb{R}\rightarrow\mathbb{R}\) is assumed to be Riemann integrable, the upper limit in the integral is variable, the first term on the RHS is the Trapezium Rule written in terms of \(x,\) and the second term on the RHS is the error in the Trapezium Rule, wherein we assume that \(\xi\) is a function of \(x.\) Differentiating wrt \(x\) gives \[f\left(x\right)= \frac{1}{2}\left(f\left(a\right)+f\left(x\right)\right)+\frac{ \left(x-a\right)f^{\prime}\left(x\right)}{2}\] \[-\frac{3\left(x-a\right)^{2}}{12}f^{\prime\prime}\left(\xi \right)-\frac{\left(x-a\right)^{3}}{12}\frac{df^{\prime\prime}\left(\xi\right) }{d\xi}\frac{d\xi}{dx}\] which yields the differential equation (DE) \[\frac{d\xi}{dx}=\frac{-18f\left(x\right)+6f\left(a\right)+6\left(x-a\right)f^ {\prime}\left(x\right)-3\left(x-a\right)^{2}f^{\prime\prime}\left(\xi\right) }{\left(x-a\right)^{3}f^{\prime\prime\prime}\left(\xi\right)}. \tag{2}\] If an initial value \(\xi\left(x_{0}\right)\) is provided, this DE can be solved using a Runge-Kutta method to yield \(\xi\left(x\right).\) This allows the error term to be computed, which can then be added to the Trapezium term to obtain a more accurate approximation for the integral. Note that we must necessarily assume that \(f\) and its first three derivatives all exist on \(\left[a,x\right].\) ## 3 Example To illustrate this method, we consider the simple example \[\int\limits_{1}^{x}\sin xdx.\] This gives \[\frac{d\xi}{dx}=\frac{-18\sin x+6\sin\left(1\right)+6\left(x-1\right)\cos x+3 \left(x-1\right)^{2}\sin\xi}{-\left(x-1\right)^{3}\cos\xi}. \tag{3}\] We will consider an upper limit of \(x=10\) here. To find an initial value, we choose \(x_{0}\in\left(1,10\right),\) such that \(x_{0}\) is not too close to \(1\) (to avoid potential problems in the denominator in (3)). We substitute \(x_{0}\) into (1) to get \[\frac{\left(x_{0}-1\right)}{2}\left(\sin\left(1\right)+\sin x_{0}\right)+ \frac{\left(x_{0}-1\right)^{3}}{12}\sin\left(\xi\left(x_{0}\right)\right)- \int\limits_{1}^{x_{0}}\sin xdx=0\] which we solve for \(\xi\left(x_{0}\right)\) using a numerical method such as Newton's Method or the Bisection Method. 
Of course, we will have to invest some effort in obtaining an accurate value for \(\int_{1}^{x_{0}}\sin xdx,\) which can be achieved using the _composite_ Trapezium Rule, for example. This effort will be well rewarded. Once \(\xi_{0}=\xi\left(x_{0}\right)\) has been determined, we solve (3) subject to the initial value \(\left(x_{0},\xi_{0}\right),\) from \(x_{0}\) up to \(10,\) and then from \(x_{0}\) down to \(1,\) using a suitable RK method. We use a seventh-order method (RK7) [3] in this work. We provide an insight into the reverse RK process from \(x_{0}\) down to \(1\) in the Appendix. We choose \(x_{0}=5.\) We find, using composite Trapezium quadrature [2], \(\int_{1}^{5}\sin xdx=0.256640120404911,\) accurate to \(\sim 10^{-15}.\) Hence, we find \(\xi_{0}=3.049296665128674\) using the Bisection Method [1], also to an accuracy of \(\sim 10^{-15}.\) Using RK7 to solve the DE gives the results shown in the figures. In Figure (a), we show the true value of the integral and the Trapezium approximation (dotted line), vs \(x\). Clearly, the Trapezium Rule is inaccurate. In Figure (b), we show the error in the Trapezium Rule vs \(x,\) where we see that the error is, in places, larger than the integral itself. In Figure (c), we show \(\xi\left(x\right).\) In Figure (d), we show the quantity \[\left|\int\limits_{1}^{x}\sin xdx-\left(\underbrace{\frac{\left(x-1\right)}{2 }\left(\sin\left(1\right)+\sin x\right)}_{\text{Trapezium Rule}}+\underbrace{ \frac{\left(x-1\right)^{3}}{12}\sin\left(\xi\left(x\right)\right)}_{\text{ Error term}}\right)\right|.\] It is clear that adding the error term to the Trapezium approximation yields values that are considerably more accurate. In a sense, we have used the error term as a _correction_ term. Considering the more exotic integrand \(f\left(x\right)=x^{2}\left(\sin x\ln\left(2+x\right)-100x\right),\) which is not analytically integrable, and using the same limits and the same \(x_{0}\) as above, we find the error in the Trapezium Rule is as high as \(\sim 2\times 10^{5}.\) However, when we add the correction term the error is reduced to \(\sim 10^{-10},\) a remarkable improvement indeed. ## 4 Conclusion We have shown how the error term for the Trapezium Rule can be estimated, by solving an appropriate initial value problem using a Runge-Kutta method. Adding the error term to the Trapezium approximation yields significantly more accurate results. Naturally, the accuracy that is attained is dependent on the accuracy of the Runge-Kutta solution. In our example, we used a small enough stepsize so that we achieved a high degree of accuracy. Ordinarily, we would not know the required stepsize beforehand, and we would implement the Runge-Kutta method with some form of error control - although not doing that here does not detract from our central result.
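As a concrete illustration of the procedure in the example above, the following Python sketch reproduces the main steps for \(\int_{1}^{x}\sin x\,dx\) with \(x_{0}=5\). It is our own code, not the author's: it uses SciPy's brentq root finder for the initial value and the RK45 integrator from solve_ivp rather than the seventh-order Runge-Kutta method used in the paper, and it integrates only forward on \([5,10]\) to stay away from the \((x-a)^{3}\) denominator near \(x=1\).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

a, x0, x_end = 1.0, 5.0, 10.0
f, fp, fpp, fppp = np.sin, np.cos, lambda x: -np.sin(x), lambda x: -np.cos(x)

def trapezium(x):                       # (x - a)/2 * (f(a) + f(x))
    return (x - a) / 2 * (f(a) + f(x))

# Initial value xi(x0): solve  trapezium(x0) + (x0-a)^3/12 * sin(xi) = cos(a) - cos(x0)
exact_x0 = np.cos(a) - np.cos(x0)
g = lambda xi: trapezium(x0) + (x0 - a) ** 3 / 12 * np.sin(xi) - exact_x0
xi0 = brentq(g, 2.5, 3.1)               # ~3.0493, the root lying in (a, x0)

# dxi/dx from equation (2), specialised to f(x) = sin x
def rhs(x, xi):
    num = -18 * f(x) + 6 * f(a) + 6 * (x - a) * fp(x) - 3 * (x - a) ** 2 * fpp(xi)
    return num / ((x - a) ** 3 * fppp(xi))

sol = solve_ivp(rhs, (x0, x_end), [xi0], rtol=1e-10, atol=1e-12, dense_output=True)

x = x_end
xi = sol.sol(x)[0]
corrected = trapezium(x) - (x - a) ** 3 / 12 * fpp(xi)   # Trapezium value plus error term
print(corrected, np.cos(a) - np.cos(x))                  # the two values should agree closely
```

With a tight tolerance on the Runge-Kutta solver, the corrected value at \(x=10\) should track the exact integral \(\cos(1)-\cos(10)\) far more closely than the bare Trapezium term, mirroring the improvement reported in the paper.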
2308.01546
MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies
Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation. However, generating music, as a special type of audio, presents unique challenges due to limited availability of music data and sensitive issues related to copyright and plagiarism. In this paper, to tackle these challenges, we first construct a state-of-the-art text-to-music model, MusicLDM, that adapts Stable Diffusion and AudioLDM architectures to the music domain. We achieve this by retraining the contrastive language-audio pretraining model (CLAP) and the Hifi-GAN vocoder, as components of MusicLDM, on a collection of music data samples. Then, to address the limitations of training data and to avoid plagiarism, we leverage a beat tracking model and propose two different mixup strategies for data augmentation: beat-synchronous audio mixup and beat-synchronous latent mixup, which recombine training audio directly or via a latent embeddings space, respectively. Such mixup strategies encourage the model to interpolate between musical training samples and generate new music within the convex hull of the training data, making the generated music more diverse while still staying faithful to the corresponding style. In addition to popular evaluation metrics, we design several new evaluation metrics based on CLAP score to demonstrate that our proposed MusicLDM and beat-synchronous mixup strategies improve both the quality and novelty of generated music, as well as the correspondence between input text and generated music.
Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov
2023-08-03T05:35:37Z
http://arxiv.org/abs/2308.01546v1
# MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies ###### Abstract Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation.. However, generating music, as a special type of audio, presents unique challenges due to limited availability of music data and sensitive issues related to copyright and plagiarism. In this paper, to tackle these challenges, we first construct a state-of-the-art text-to-music model, MusicLDM, that adapts Stable Diffusion and AudioLDM architectures to the music domain. We achieve this by retraining the contrastive language-audio pretraining model (CLAP) and the Hifi-GAN vocoder, as components of MusicLDM, on a collection of music data samples. Then, to address the limitations of training data and to avoid plagiarism, we leverage a beat tracking model and propose two different mixup strategies for data augmentation: beat-synchronous audio mixup and beat-synchronous latent mixup, which recombine training audio directly or via a latent embeddings space, respectively. Such mixup strategies encourage the model to interpolate between musical training samples and generate new music within the convex hull of the training data, making the generated music more diverse while still staying faithful to the corresponding style. In addition to popular evaluation metrics, we design several new evaluation metrics based on CLAP score to demonstrate that our proposed MusicLDM and beat-synchronous mixup strategies improve both the quality and novelty of generated music, as well as the correspondence between input text and generated music. ## 1 Introduction Text-guided generation tasks have gained increasing attention in recent years and have been applied to various modalities, including text-to-image, text-to-video, and text-to-audio generation. Text-to-image generation has been used to create both realistic and stylized images based on textual descriptions, which can be useful in various scenarios including graphic design. Text-to-audio generation is a relatively new, but rapidly growing area, where the goal is to generate audio pieces, such as sound events, sound effects, and music, based on textual descriptions. Diffusion models have shown superior performance in these types of cross-modal generation tasks, including systems like DALLE-2 [29] and Stable Diffusion [31] for text-to-image; and AudioGen [22], AudioLDM [24], and Make-an-Audio [16] for text-to-audio. As a special type of audio generation, _text-to-music_ generation has many practical applications [2; 7]. For instance, musicians could use text-to-music generation to quickly build samples based on specific themes or moods and speed up their creative process. Amateur music lovers could leverage generated pieces to learn and practice for the purpose of musical education. However, text-to-music generation presents several specific challenges. One of the main concerns is the limited availability of text-music parallel training data [1]. Compared to other modalities such as text-to-image, there are relatively few text-music pairs available, making it difficult to train a high-quality conditional model. Large and diverse training sets may be particularly imperative for music generation, which involves many nuanced musical concepts, including melody, harmony, rhythm, and timbre. Further, the effectiveness of diffusion models trained on more modest training sets has not been fully explored. 
Finally, a related concern in text-to-music generation is the risk of plagiarism or lack of novelty in generated outputs [1]. Music is often protected by copyright laws, and generating new music that sounds too similar to existing music can lead to legal issues. Therefore, it is important to develop text-to-music models that can generate novel and diverse music while avoiding plagiarism, even when trained on relatively small training datasets. In this paper, we focus on both these challenges: we develop a new model and training strategy for learning to generate novel text-conditioned musical audio from limited parallel training data. Currently, since there is no open-source model for text-to-music generation, we first construct a state-of-the-art text-to-music generation model, MusicLDM, which adapts the Stable Diffusion [31] and AudioLDM [24] architectures to the music domain. Next, to address the limited availability of training data and to encourage novel generations, we adapt an idea from past work in other modalities: mixup [41], which has been applied to computer vision and audio retrieval tasks, augments training data by recombining existing training points through linear interpolation. This type of augmentation encourages models to interpolate between training data rather than simply memorizing individual training examples, and thus may be useful in addressing data limitations and plagiarism in music generation. However, for music _generation_, the naive application of mixup is problematic. Simply combining waveforms from two distinct musical pieces leads unnatural and ill-formed music: tempos and beats (as well as other musical elements) are unlikely to match. Thus, we propose two novel mixup strategies, specifically designed for music generation: beat-synchronous audio mixup (BAM) and beat-synchronous latent mixup (BLM), which first analyze and beat-align training samples before interpolating between audio samples directly or encoding and then interpolating in a latent space, respectively. We design new metrics that leverage a pretrained text and audio encoder (CLAP) to test for plagiarism and novelty in text-to-music generation. In experiments, we find that our new beat-synchronous mixup augmentation strategies, by encouraging the model to generate new music within the convex hull of the training data, substantially reduce the amount of copying in generated outputs. Further, our new model, MusicLDM, in combination with mixup, achieves better overall musical audio quality as well as better correspondence between output audio and input text. In both automatic evaluations and human listening tests, MusicLDM outperforms state-of-the-art models at the task of text-to-music generation while only being trained on 9K text-music sample pairs. Music samples and qualitative results are available at musicldm.github.io. Code and models are available at [https://github.com/RetroCircree/MusicLDM/](https://github.com/RetroCircree/MusicLDM/). ## 2 Related Work ### Text-to-Audio Generation Text-to-audio generation (TTA) [24; 22; 40] is a type of generative task that involves creating audio content from textual input. In years past, text-to-speech (TTS) [30; 35] achieved far better performance than other types of audio generation. However, with the introduction of diffusion models, superior performance in various generation tasks became more feasible. 
Recent work has focused on text-guided generation in general audio, with models such as Diffsound [40], AudioGen [22], AudioLDM [24], and Make-an-Audio [16] showing impressive results. In the domain of music, text-to-music models include the retrieval-based MuBERT [26], language-model-based MusicLM [1], diffusion-based Riffusion [8] and Noise2Music [15]. However, a common issue with most recent text-to-audio/music models is the lack of open-source training code. Additionally, music models often requires large amounts of privately-owned music data that are inaccessible to the public, which makes it difficult for researchers to reproduce and build upon their work. Among all these models, AudioLDM is based on open-source Stable Diffusion [31], CLAP [39], and HiFi-GAN [19] architectures. Therefore, we base our text-to-music generation model on the AudioLDM architecture, to create MusicLDM for our experiments. ### Plagiarism on Diffusion Models Diffusion models have been shown to be highly effective at generating high-quality and diverse samples for text-to-image tasks. However, a potential issue with these models is the risk of plagiarism [33; 3], or the generation novelty. As stated by [33], diffusion models are capable of memorizing and combining different image objects from training images to create replicas, which can lead to highly similar or even identical samples to the training data. [3] explores different methods that could extract the training data with a generate-and-filter pipeline, showing that new advances in privacy-preserving training of diffusion models are required. Such issues are especially concerning in the domain of music, where copyright laws are heavily enforced and violations can result in significant legal and financial consequences. Therefore, there is a need to develop strategies to mitigate the risk of plagiarism in text-to-music generation using diffusion models. ### Mixup on Data Augmentation Mixup [41] is a widely used data augmentation technique that has shown remarkable success in improving model generalization and mitigating overfitting. The basic principle of mixup is to linearly combine pairs of training samples to effectively construct new samples that lie on the line connecting the original samples in the feature space, encouraging the model to learn a more continuous and robust decision boundary. In this paper, we explore the mixup technique in the context of text-to-music generation based on latent diffusion models. Different from the mixup in other modalities, music mixup involves a delicate balance of musical elements to prevent the mixed music from being chaotic noise. Moreover, in diffusion models, mixup also can refer to the combination of latent features, rather than music signals. We propose two mixup strategies tailored for music latent diffusion models and explore their potential benefits for data augmentation and generation performance. ## 3 Methodology ### MusicLDM As illustrated in Figure 1, MusicLDM has similar architecture as AudioLDM: a contrastive language-audio pretraining (CLAP) model [39], an audio latent diffusion model [24] with a pretrained variational auto-encoder (VAE) [18], and a Hifi-GAN neural vocoder [19]. 
Formally, given an audio waveform \(\mathbf{x}\in\mathbb{R}^{T}\) and its corresponding text, where \(T\) is the length of samples, we feed the data into three modules: Figure 1: The architecture of MusicLDM, which contains a contrastive language-audio pretraining (CLAP) model, an audio latent diffusion model with VAE, and a Hifi-GAN neural vocoder. 1. We pass \(\mathbf{x}\) through the audio encoder [5] of CLAP \(f_{audio}(\cdot)\), to obtain the semantic audio embedding \(\mathbf{E}_{x}^{a}\in\mathbb{R}^{D}\), where \(D\) is the embedding dimension. 2. We pass the text of \(x\) through the text encoder [25] of CLAP \(f_{text}(\cdot)\), to obtain the semantic text embedding \(\mathbf{E}_{x}^{t}\in\mathbb{R}^{D}\). 3. We transform \(\mathbf{x}\) into the mel-spectrogram \(\mathbf{x}_{mel}\in\mathbb{R}^{T\times F}\). Then we pass \(\mathbf{x}_{mel}\) into the VAE encoder, to obtain an audio latent representation \(\mathbf{y}\in\mathbb{R}^{C\times\frac{T}{P}\times\frac{F}{P}}\), where \(T\) is the mel-spectrogram frame size, \(F\) is the number of mel bins, \(C\) is the latent channel size of the VAE, and \(P\) is the downsampling rate of the VAE. The VAE is pretrained to learn to encode and decode the mel-spectrogram of music data. In MusicLDM, the latent diffusion model has a UNet architecture where each encoder or decoder block is composed of a ResNet layer [11] and a spatial transformer layer [31]. During training, the semantic embedding of the input audio \(\mathbf{E}_{x}\) is concatenated with the latent feature of each UNet encoder and decoder block by the FiLM mechanism [27]. The output of the diffusion model is the estimated noise \(\mathbf{\epsilon}_{\theta}(\mathbf{z}_{n},n,\mathbf{E}_{x})\) from time step \(n\) to \((n-1)\) in the reverse process, where \(\theta\) is the parameter group of the diffusion model, and \(\mathbf{z}_{n}\) is the \(n\)-step feature generated by the forward process. We adopt the training objective [13; 21] as the mean square error (MSE) loss function: \[L_{n}(\theta)=\mathbb{E}_{\mathbf{z}_{0},\mathbf{\epsilon},n}||\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\mathbf{z}_{n},n,\mathbf{E}_{x})||_{2}^{2} \tag{1}\] where \(\mathbf{z}_{0}=\mathbf{y}\) is the audio latent representation from the VAE (i.e., the groundtruth), and \(\mathbf{\epsilon}\) is the target noise for training. More details regarding the training and the architecture of the latent diffusion model can be found in Appendix A. For MusicLDM, we make two changes from the original AudioLDM to enhance its performance on text-to-music generation. First, since the original contrastive language-audio pretraining (CLAP) model is pretrained on text-audio pair datasets dominated by sound events, sound effects and natural sounds, we retrained the CLAP on text-music pair datasets (details in Appendix B) to improve its understanding of music data and corresponding texts. We also retrained the Hifi-GAN vocoder on music data to ensure high-quality transforms from mel-spectrograms to music waveforms. Second, in the original AudioLDM, the model is only fed with audio embeddings as the condition during the training process, i.e., \(\mathbf{E}_{x}=\mathbf{E}_{x}^{a}\); and it is fed with text embeddings to perform the text-to-audio generation, i.e., \(\mathbf{E}_{x}=\mathbf{E}_{x}^{t}\). This approach leverages the alignment of text and audio embeddings inside CLAP to train the latent diffusion model with more audio data without texts.
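To make the objective in Eq. (1) concrete, a PyTorch-style sketch of one training step under the audio-embedding conditioning is given below. The `unet`, `vae_encoder` and `clap_audio_encoder` handles stand in for the components described above, and the linear noise schedule is an assumption made purely for illustration; the actual schedule and step count are those specified in Appendix A of the paper.

```python
import torch
import torch.nn.functional as F

# Illustrative linear beta schedule (an assumption, not the paper's exact configuration).
N_STEPS = 1000
betas = torch.linspace(1e-4, 2e-2, N_STEPS)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def training_step(unet, vae_encoder, clap_audio_encoder, x_mel, x_wav):
    """One optimisation step of the objective in Eq. (1): predict the injected noise."""
    with torch.no_grad():
        z0 = vae_encoder(x_mel)            # ground-truth latent y, shape (B, C, T/P, F/P)
        cond = clap_audio_encoder(x_wav)   # CLAP audio embedding E_x^a (swapped for E_x^t at inference)

    n = torch.randint(0, N_STEPS, (z0.shape[0],), device=z0.device)   # random diffusion step per sample
    eps = torch.randn_like(z0)                                         # target noise
    ab = alpha_bar.to(z0.device)[n].view(-1, 1, 1, 1)
    z_n = ab.sqrt() * z0 + (1.0 - ab).sqrt() * eps                     # forward process q(z_n | z_0)

    eps_pred = unet(z_n, n, cond)          # FiLM-conditioned UNet estimate eps_theta(z_n, n, E_x)
    return F.mse_loss(eps_pred, eps)       # Eq. (1)
```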
However, this audio-to-audio training \(\mathbf{\epsilon}_{\theta}(\mathbf{z}_{n},n,\mathbf{E}_{x}^{a})\) is essentially an approximation of the text-to-audio generation \(\mathbf{\epsilon}_{\theta}(\mathbf{z}_{n},n,\mathbf{E}_{x}^{t})\). Although CLAP is trained to learn joint embeddings for text and audio, it does not explicitly enforce the embeddings to be distributed similarly in the latent space, which can make it challenging for the model to generate coherent text-to-audio outputs solely with audio-to-audio training. This problem becomes more severe when the available text-music pair data is limited. Moreover, relying solely on audio embeddings ignores the available text data, which means that we are not leveraging the full potential of our dataset. Consequently, generating accurate and realistic text-to-audio generations may not be effective. To further investigate this task, we introduce two additional training approaches for comparison: 1. Train the MusicLDM directly using the text embedding as the condition, i.e., \(\mathbf{\epsilon}_{\theta}(\mathbf{z}_{n},n,\mathbf{E}_{x}^{t})\) 2. Train the MusicLDM using the audio embedding as the condition, then finetune it with text embedding, i.e.,\(\mathbf{\epsilon}_{\theta}(\mathbf{z}_{n},n,\mathbf{E}_{x}^{a})\rightarrow\mathbf{\epsilon}_{ \theta}(\mathbf{z}_{n},n,\mathbf{E}_{x}^{t})\) The first approach follows the original target of text-to-audio, serving as a comparison with audio-to-audio training. The second approach is proposed as an improvement on audio-to-audio generation. as we shift the condition distribution from the audio embedding back to the text embedding during the training of the diffusion model. In section 4.2, we compared the above two approaches with the original audio-to-audio training approaches to determine the best approach for generating high-quality and highly correlated text-to-music outputs. ### Beat-Synchronous Mixup As shown in Figure 2, we propose two mixup strategies to augment the data during the MusicLDM training: Beat-Synchronous Audio Mixup (BAM) and Beat-Synchronous Latent Mixup (BLM). Beat-tracking via Beat TransformerMusical compositions are made up of several elements, such as chord progressions, timbre, and beats. Of these, beats play a crucial role in determining the musical structure and alignment. In most audio retrieval tasks, mixup is a popular technique that involves randomly mixing pairs of audio data to augment the training data. However, when mixing two music samples that have different tempos (beats per minute), the mixture can be chaotic and unappealing. To avoid this, we use a state-of-the-art beat tracking model, Beat Transformer [42], to extract the tempo and downbeat map of each music track, as shown in the left of Figure 2. We categorize each music track into different tempo groups and during training, we only mixed tracks within the same tempo group to ensure the tracks were in similar tempos. Furthermore, we align the tracks by comparing their downbeat maps and selecting a certain downbeat to serve as the starting position for the mixup track. This preprocessing approach allows us to better select the music data available for mixup, resulting in mixup tracks that are neatly ordered in terms of tempo and downbeats. 
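A rough sketch of how this tempo grouping and downbeat alignment can be used to pick and crop a pair of mixup candidates is given below; the bin width, the random choice of starting downbeat and the helper names are illustrative, with the tempo and downbeat times assumed to come from the beat tracking model.

```python
import numpy as np

def tempo_group(tempo_bpm, bin_width=4.0):
    """Assign a track to a tempo bin; only tracks that share a bin are considered for mixing."""
    return int(tempo_bpm // bin_width)

def align_by_downbeat(wave_a, downbeats_a, wave_b, downbeats_b, sr=16000, clip_seconds=10.24):
    """Crop two same-tempo tracks so that each clip starts on a downbeat.

    downbeats_* are downbeat times in seconds, e.g. as produced by the beat tracking model.
    """
    n = int(clip_seconds * sr)
    starts = []
    for wave, downbeats in ((wave_a, downbeats_a), (wave_b, downbeats_b)):
        # keep only downbeats that leave room for a full clip; fall back to the track start
        candidates = [int(t * sr) for t in downbeats if int(t * sr) + n <= len(wave)]
        starts.append(int(np.random.choice(candidates)) if candidates else 0)
    return wave_a[starts[0]:starts[0] + n], wave_b[starts[1]:starts[1] + n]
```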
Beat-Synchronous Audio Mixup As depicted in the upper part of Figure 2, once we select two aligned music tracks \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), we mix them by randomly selecting a mixing ratio from the beta distribution \(\lambda\sim\mathcal{B}(5,5)\), as: \[\mathbf{x}=\lambda\mathbf{x}_{1}+(1-\lambda)\mathbf{x}_{2} \tag{2}\] We then use the mixed data \(\mathbf{x}\) to obtain the CLAP embedding \(\mathbf{E}_{x}\) and the audio latent variable \(\mathbf{y}\). We train the latent diffusion model using the standard pipeline. This beat-synchronous audio mixup strategy is referred to as BAM. Beat-Synchronous Latent Mixup As depicted in the lower part of Figure 2, in the latent diffusion model, the mixup process can also be applied to the latent variables, referred to as beat-synchronous latent mixup (BLM). After selecting two aligned music tracks \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), we feed them into the VAE encoder to obtain the latent variables \(\mathbf{y}_{1}\) and \(\mathbf{y}_{2}\). We then apply the mixup operation to the latent variables: \[\mathbf{y}=\lambda\mathbf{y}_{1}+(1-\lambda)\mathbf{y}_{2} \tag{3}\] In contrast to BAM, BLM applies the mixup operation in the latent space of audio, where we cannot ensure that the mixture of the latent variables corresponds to the actual mixture of the music features at the signal level. Therefore, we first generate a mixed mel-spectrogram \(\mathbf{x}_{mel}\) by feeding the mixed latent variable \(\mathbf{y}\) into the VAE decoder. Then, we feed \(\mathbf{x}_{mel}\) to the Hifi-GAN vocoder to obtain the mixed audio \(\mathbf{x}\) as the input music. With \(\mathbf{x}\) and \(\mathbf{y}\), we follow the pipeline to train MusicLDM. What are BAM and BLM doing? As shown in the right of Figure 2, we demonstrate the interpolation in the feature space of audio when using BAM and BLM. In the feature space of audio signals, the "\(\mathbf{\bullet}\)" represents the feature point of music data, while the "\(\triangle\)" denotes the feature point of other audio signals, such as natural sound, audio activity, and noise. During the pretraining process of the VAE, a latent space is constructed for encoding and decoding the music data. The VAE aims to learn the distribution of the latent variables that can best represent the original data and transform the original feature space into a lower-dimensional manifold. This manifold is designed to capture the underlying structure of the music data. Figure 2: Mixup strategies. Left: tempo grouping and downbeat alignment via Beat Transformer. Middle: BAM and BLM mixup strategies. Right: How BAM and BLM are applied in the feature space of audio signals and audio latent variables. Therefore, any feature point within this manifold is considered to be a valid representation of music. BAM and BLM are concerned with augmenting the data at different levels of the feature space. As shown in the right of Figure 2, BAM linearly combines two points in audio space to form a new point on the red line. BLM, represented by the blue line, performs a similar operation, but results in a new point in the VAE-transformed latent space, which will be decoded back onto the music manifold of audio space. Both BAM and BLM offer unique advantages and disadvantages. BAM applies mixup in the original feature space, resulting in a smooth interpolation between feature points. However, BAM cannot ensure a reasonable music sample that lies within the music manifold.
This issue is more problematic when using the simple audio mixup strategy without tempo and downbeat alignments. BLM, conversely, augments within the music manifold, fostering robust and diverse latent representations. However, BLM is computationally more expensive, as it requires decoding the latent feature back to audio via the VAE decoder and Hifi-GAN. Furthermore, when the VAE latent space is ill-defined or collapsed, BLM may lose its effectiveness. Both BAM and BLM are effective data augmentation techniques that encourage the model to learn a more continuous and robust decision boundary on the audio feature space, or implicitly from the latent space to the audio space, which can improve the model's generalization performance and mitigate overfitting. In the context of text-to-music generation, these mixup strategies have the potential to mitigate the limitations of data size and help avoid plagiarism issues. By introducing small variations through mixup, the model can explore a richer space of music data and generate music samples that are correlated with the texts but differ from the original training data. In Section 4.2, we evaluated whether these strategies mitigate the data limitation and plagiarism issues. ## 4 Experiments In this section, we conducted four experiments on our proposed methods. First, we retrained a new CLAP model to provide the condition embedding for MusicLDM. Second, we trained MusicLDM with different mixup strategies and compared them with available baselines. Third, we evaluated MusicLDM in terms of text-music relevance, novelty and plagiarism risk via metrics based on CLAP scores. Finally, we conducted a subjective listening test to give an additional evaluation. ### 4.1 Experimental Setup Dataset The original CLAP model was trained mostly on acoustic event and sound effect datasets. In this work, we trained a CLAP model on music datasets in addition to its original training data, allowing it to better understand the relation between music and textual descriptions. The new CLAP model is trained on a dataset of 2.8 million text-audio pairs, with an approximate total duration of \(20\,000\) hours. Compared to the previous CLAP model, the newly trained CLAP model performs well in zero-shot classification for both acoustic events and music. Please refer to Appendix B for further details on the training and performance of the new CLAP. For MusicLDM, we used the Audiostock dataset for training, along with the VAE and Hifi-GAN. Specifically, the Audiostock dataset contains \(9000\) music tracks for training and \(1000\) tracks for testing. The total duration is \(455.6\) hours. It provides a correct textual description of each music track. Although CLAP is trained on more text-music data pairs, a large number of them are pseudo-captions composed primarily of non-specific metadata, such as author, song title, and album information (e.g., [a song by author A from the album B]). These captions do not align with our specific objective of text-to-music generation. Hyperparameters and Training Details We trained all MusicLDM modules with music clips of 10.24 seconds at \(16\,\mathrm{kHz}\) sampling rate. In both the VAE and diffusion model, music clips are represented as mel-spectrograms with \(T=1024\) frames and \(F=128\) mel-bins. Unlike AudioLDM, MusicLDM's VAE utilizes a downsampling rate of \(P=8\) and a latent dimension of \(C=16\). The architecture of MusicLDM's latent diffusion model follows that of AudioLDM. The training process of MusicLDM aligns with AudioLDM's approach.
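For orientation, the shapes implied by these hyperparameters can be checked as follows; the hop length is our inference from the stated clip length and frame count rather than a value quoted in the paper.

```python
# Consistency check of the shapes implied by the stated hyperparameters (illustrative only).
sr, seconds = 16000, 10.24
T_frames, F_mels = 1024, 128       # mel-spectrogram frames and mel bins
P, C = 8, 16                       # VAE downsampling rate and latent channels

n_samples = int(sr * seconds)                     # 163840 waveform samples per clip
hop = n_samples // T_frames                       # ~160 samples per frame (inferred)
latent_shape = (C, T_frames // P, F_mels // P)    # (16, 128, 16) latent passed to the UNet
print(n_samples, hop, latent_shape)
```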
For additional hyperparameters and training details, refer to Appendix A. ### 4.2 MusicLDM Performance #### 4.2.1 Generation Quality Following evaluation techniques used in past work on audio generation [24], we use the Fréchet distance (FD), Inception Score (IS), and Kullback-Leibler (KL) divergence to evaluate the quality of generated musical audio outputs. The Fréchet distance evaluates the audio quality by using an audio embedding model to measure the similarity between the embedding space of generations and that of targets. In this paper, we use two standard audio embedding models: VGGish [12] and PANN [20]. We denote the resulting distances as \(FD_{vgg}\) and \(FD_{pann}\), respectively. The Inception Score measures the diversity and the quality of the full set of audio outputs, while the KL divergence is measured on individual pairs of generated and groundtruth audio samples and averaged. We use the audioldm_eval library1 to evaluate all the metrics mentioned above, comparing the groundtruth audio from the Audiostock 1000-track test set with the 1000 tracks of music generated by each system based on the corresponding textual descriptions. Footnote 1: [https://github.com/haoheliu/audioldm_eval](https://github.com/haoheliu/audioldm_eval) Table 1 presents the FD, IS, and KL results for our models in comparison with baseline models. In the first section of Table 1, we utilized textual descriptions from the test set, sending them to the official APIs of Riffusion and MuBERT to generate corresponding results. Both Riffusion and MuBERT were unable to achieve results comparable to the remaining models. Upon reviewing the generated music from the two systems, we found that the sub-optimal performance of Riffusion resulted from poor music generation quality, with many samples either inaudible or outside the desired musical range. MuBERT, while generating high-quality pieces from real music sample libraries, fell short in replicating the distribution of the Audiostock dataset. Due to the unavailability of their detailed architectures, training scripts, and data, Riffusion and MuBERT's evaluations offered only partial comparisons. We also retrained the original AudioLDM model on the Audiostock dataset, comparing it to MusicLDM variants. The distinction between AudioLDM and MusicLDM lies in the different CLAP models used for condition embeddings. Our comparison revealed that MusicLDM outperforms AudioLDM in terms of \(FD_{pann}\), \(FD_{vgg}\), and IS. This underscores the efficacy of the novel CLAP model pretrained for music, providing more suitable music embeddings as conditioning information. Comparing MusicLDM's performance with audio-to-audio training (\(\epsilon_{\theta}(\mathbf{z}_{n},n,\mathbf{E}_{x}^{a})\)) and text-to-audio training (\(\epsilon_{\theta}(\mathbf{z}_{n},n,\mathbf{E}_{x}^{t})\)), denoted by "MusicLDM (Only TA-Training)", we note inferior results in the latter approach. This may suggest that a gap exists between the distributions of the text and audio embeddings, making it challenging for the diffusion model to generate high-quality audio solely from the text embedding. In contrast, CLAP's audio embedding may leak low-level audio information to the diffusion model during initial training stages, hurting the model's ability to generalize to text embedding inputs. This hypothesis is further supported by the results of MusicLDM with combined audio-to-audio training and text-to-audio fine-tuning.
We observe a significant decrease in \(FDvgg\) with small changes in \(FDpann\) and IS, indicating a substantial improvement in music generation quality, driven by leveraging both audio and text embeddings during training. The former facilitates good audio reconstruction during early training, while the latter shifts the distribution from audio to text to align with the eventual test-time task of text-to-music generation. \begin{table} \begin{tabular}{l c c|c c c c} \hline \hline Model & AA-Train. & TA-Train. & FD\({}_{pann}\downarrow\) & FD\({}_{vgg}\downarrow\) & Inception Score \(\uparrow\) & KL Div. \(\downarrow\) \\ \hline Riffusion [8] & ✗ & ✓ & 68.95 & 10.77 & 1.34 & 5.00 \\ MuBERT [26] & — & — & 31.70 & 19.04 & 1.51 & 4.69 \\ AudioLDM & ✓ & ✗ & 38.92 & 3.08 & 1.67 & 3.65 \\ \hline MusicLDM & ✓ & ✗ & 26.67 & 2.40 & **1.81** & 3.80 \\ MusicLDM (Only TA-Training) & ✗ & ✓ & 32.40 & 2.51 & 1.49 & 3.96 \\ MusicLDM w/. mixup & ✓ & ✗ & 30.15 & 2.84 & 1.51 & 3.74 \\ MusicLDM w/. BAM & ✓ & ✗ & 28.54 & **2.26** & 1.56 & 3.50 \\ MusicLDM w/. BLM & ✓ & ✗ & **24.95** & 2.31 & 1.79 & **3.40** \\ \hline MusicalLDM w/. Text-Finetune & ✓ & ✓ & 27.81 & 1.75 & 1.76 & 3.60 \\ MusicLDM w/. BAM \& Text-Finetune & ✓ & ✓ & 28.22 & 1.81 & 1.61 & 3.61 \\ MusicLDM w/. BLM \& Text-Finetune & ✓ & ✓ & **26.34** & **1.68** & **1.82** & **3.47** \\ \hline \hline \end{tabular} \end{table} Table 1: The evaluation of generation quality among MusicLDMs and baselines. AA-Train. and TA-Train. refer to the audio-audio training scheme and the text-audio training scheme. Last, we compared MusicLDM with different mixup strategies, namely simple mixup [41], BAM, and BLM. The comparison reveals the negative impact of the simple mixup on all metrics. This degradation in generated sample quality, characterized by instrumental interference and noise, is attributed to the simple mixup's inability to guarantee musicality in the mix. Similar observations are evident in the BAM results, indicated by a drop in \(FD_{pann}\) and IS. However, BAM's tempo and downbeat alignment, along with the original mixup benefits, counterbalance this defect to a certain extent, enhancing the model's generalization ability and improving certain metrics. BLM, as a latent space mixing method, aligns with our hypothesis in Section 3.2 that latent space mixup yield audio closely resembling music. This technique allows us to largely bypass the potential confusion issues tied to audio mixing, thus capitalizing on mixup's ability to drive generalization and prevent copying via data augmentation. Furthermore, incorporating text-finetuning results in a comprehensive improvement of music generation quality, solidifying BLM as the most effective strategy. #### 4.2.2 Text-Audio Relevance, Novelty and Plagiarism We proposed two metric groups, **text-audio similarity** and **nearest-neighbor audio similarity ratio** to assess text-audio relevance, novelty, and plagiarism risk in various models. First, text-audio similarity measures the relevance between the text and the audio. It is defined as the dot product between the groundtruth text embedding \(\mathbf{E}_{gd}^{t}\) from the test set and the audio embedding \(\mathbf{E}^{a}\) from music generated by models, i.e., \(\mathbf{E}_{gd}^{t}\cdot\mathbf{E}^{a}\). The embeddings from both text and audio are normalized in CLAP model, thus the dot product computes the cosine similarity between text and audio embeddings. 
Second, we would also like to measure the extent to which models are directly copying samples from the training set. We verify this by first computing the dot products between the audio embedding of each generated music output to all audio embeddings from the Audiostock training set and returning the maximum - i.e., the similarity of the nearest-neighbor in the training set. Then, we compute the fraction of generated outputs whose nearest-neighbors are above a threshold similarity. We refer this as the nearest-neighbor audio similarity ratio, providing \(SIM_{AA}@90\) where the threshold is 0.9, and \(SIM_{AA}@95\) with 0.95. The lower this fraction, the lower the risk of plagiarism - i.e., fewer samples have very close training neighbors. In the Appendix D, we show pairs of examples with both high and low similarity scores to give further intuition for this metric. \begin{table} \begin{tabular}{l|c|c c|c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c|}{Objective Metrics} & \multicolumn{4}{c}{Subjective Listening Test} \\ \cline{2-7} & Relevance & Novelty and Plagiarism Risk & & & \\ \cline{2-7} & Text-Audio Similarity\(\uparrow\) & \(SIM_{AA}@90\downarrow\) & \(SIM_{AA}@95\downarrow\) & Quality\(\uparrow\) & Relevance\(\uparrow\) & Musicality\(\uparrow\) \\ \hline Test Set (Ref.) & 0.325 & — & — & — & — & — \\ Retrieval Max (Ref.) & 0.423 & — & — & — & — & — \\ \hline MuBERT & 0.131 & 0.107 & 0 & 2.02 & 1.50 & **2.33** \\ MusicalLDM (original) & **0.281** & 0.430 & 0.047 & 1.98 & 2.17 & 2.19 \\ MusicLDM w/. mixup & 0.234 & **0.391** & 0.028 & — & — & — \\ MusicLDM w/. BAM & 0.266 & 0.402 & 0.027 & 2.04 & 2.21 & 2.01 \\ MusicLDM w/. BLM & 0.268 & 0.401 & **0.020** & **2.13** & **2.31** & 2.07 \\ \hline \hline \end{tabular} \end{table} Table 2: The objective metrics to measure the relevance and novelty (plagiarism). And the subjective listening test to evaluate the quality, relevance, and musicality. Figure 3: The violin plot of the audio-audio similarity, and the text-to-audio similarity. As shown in the left and middle column of Table 2, we present the average text-audio similarity and nearest-neighbor audio similarity ratios for two thresholds on the 1000 tracks in the Audiostock test set and the generated music from MuBERT and different variants of MusicLDM. We also provide two reference points for text-audio similarity: "Test Set" and "Retrieval Max". Specifically, "Test Set" refers to computing the cosine similarity between the groundtruth text embedding and the groudtruth audio embedding. And "Retrieval Max" refers to first computing the cosine similarities between each text embedding from the test set to all audio embeddings from the training set, then picking the highest score as the score of this text, and taking the average over all text scores. We can observe that the original MusicLDM without mixup achieves the highest text-audio relevance with an average score of 0.281, but also the highest (worst) nearest-neighbor audio similarity ratio. MusicLDM with the simple mixup strategy achieves the lowest \(SIM_{AA}@90\) ratio while sacrificing a lot in the relevance of the generation. The MusicLDM with BAM and BLM achieve a balance between the audio similarity ratios and the text-to-audio similarity. In combination with the quality evaluation results in Table 1, we can conclude that all mixup strategies are effective as a data augmentation techniques to improve generalization of the model to generate more novel music. 
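For concreteness, both metrics reduce to a few lines of array arithmetic once the CLAP embeddings are available; a sketch is given below, where the array names are illustrative and the embeddings are assumed to be the L2-normalised CLAP outputs.

```python
import numpy as np

def text_audio_similarity(text_emb, audio_emb):
    """Dot product of L2-normalised CLAP embeddings, i.e. the cosine similarity."""
    return float(np.dot(text_emb, audio_emb))

def nearest_neighbour_ratio(gen_embs, train_embs, threshold=0.9):
    """SIM_AA@threshold: fraction of generations whose nearest training-set
    neighbour exceeds the similarity threshold (0.9 or 0.95 above)."""
    sims = gen_embs @ train_embs.T          # (n_generated, n_train) pairwise similarities
    nearest = sims.max(axis=1)              # similarity of each generation's nearest neighbour
    return float((nearest > threshold).mean())
```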
However, simple mixup degrades the generation quality, which affects the relevance score between audio and text, and thus also makes the generations less similar to the tracks in the training set. BAM and BLM apply tempo and downbeat filtering to the music pairs to be mixed, allowing the model to maintain superior generation quality (Table 1) and text-audio relevance (Table 2), while still utilizing the benefit of the mixup technique to make the generations more novel (less plagiarism). Among the objective metrics, BLM is the best mixup strategy in terms of quality, relevance and novelty of the generated audio. This indicates that mixing in the latent space is more efficient than mixing directly in audio space, perhaps because the latent embedding approach implicitly projects the mixture to the learned manifold of well-formed music. We show the detailed distribution of these metrics over 1000 generated tracks in Figure 3, where, for example, audio-audio similarity denotes the individual scores used to calculate the average \(SIM_{AA}\). We find that the original MusicLDM without mixup has more samples with high training similarity than other models, which further reflects that it is more prone to copying. ### Subjective Listening Test As shown in the right of Table 2, we conduct the subjective listening test on four models, namely MuBERT, the original MusicLDM, and MusicLDM with the BAM or BLM strategy, to further evaluate the actual listening experience of the generated music. We do not include the simple-mixup MusicLDM because its generations are of low quality, and we wish to avoid confusing subjects with too many models at the same time. We invite 15 subjects to listen to 6 groups of the generations randomly selected from the test set. Each group contains four generations from the four models and the corresponding text description. The subjects are required to rate the music in terms of quality, relevance, and musicality (detailed in Appendix E). We observe that the samples of MusicLDM with the BAM or BLM mixup strategy achieve better relevance and quality than those of MuBERT and the original MusicLDM, which strengthens our analysis above. The MuBERT samples achieve the best musicality, because its generations are assembled from real music samples. Combined with the objective metrics, beat-synchronous latent mixup stands out as the most effective method for enhancing text-to-music generation in terms of quality, text-music relevance and novelty (i.e., reducing the risk of plagiarism). ## 5 Limitations In this section we outline the recognized limitations of our study, serving as a roadmap for future improvements. Firstly, MusicLDM is trained on music data at a sampling rate of \(16\,\mathrm{kHz}\), while most standard music productions use \(44.1\,\mathrm{kHz}\). This constraint, tied to the Hifi-GAN vocoder's subpar performance at high sampling rates, impedes practical text-to-music application and necessitates further improvements. Secondly, resource constraints such as limited real text-music data and GPU processing power prevent us from scaling up MusicLDM's training. We are unable to determine if the mixup strategies would yield similar trends to those observed with the Audiostock dataset. This issue exists in the image generation task as well. Lastly, while we recognize beat information as crucial for music alignment, there is scope for exploring other synchronization techniques like key signature and instrument alignment.
We also intend to investigate the application of different audio space filters to select suitable music pairs for mixing. ## 6 Conclusion In this paper, we introduce MusicLDM, a text-to-music generation model that incorporates CLAP, VAE, Hifi-GAN, and latent diffusion models. We enhance MusicLDM by proposing two efficient mixup strategies: beat-synchronous audio mixup (BAM) and beat-synchronous latent mixup (BLM), integrated into its training process. We conduct comprehensive evaluations on different variants of MusicLDM using objective and subjective metrics, assessing quality, text-music relevance, and novelty. The experimental results demonstrate the effectiveness of BLM as a standout mixup strategy for text-to-music generation. ## 7 Acknowledgments We would like to thank the Institute for Research and Coordination in Acoustics and Music (IRCAM) and Project REACH: Raising Co-creativity in Cyber-Human Musicianship for supporting this project. This project has received funding from the European Research Council (ERC REACH) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement #883313). We would also like to thank LAION for their support with computation infrastructure.
2301.12757
Spin $ 2 $ spectrum for marginal deformations of 4d $ \mathcal{N}=2 $ SCFTs
We compute spin $ 2 $ spectrum associated with massive graviton fluctuations in $\gamma$-deformed Gaiotto-Maldacena background those are holographically dual to marginal deformations of $\mathcal{N}=2$ SCFTs in four dimensions. Under the special circumstances, we analytically estimate the spectra both for the $ \gamma $- deformed Abelian T dual (ATD) as well as the non-Abelian T dual (NATD) cases where we retain ourselves upto leading order in the deformation parameter. Our analysis reveals a continuous spectra which is associated with the breaking of the $ U(1) $ isometry (along the directions of the internal manifold) in the presence of the $ \gamma $- deformation. We also comment on the effects of adding flavour branes into the picture and the nature of the associated spin $ 2 $ operators in the dual $ \mathcal{N}=1 $ SCFTs.
Sourav Roychowdhury, Dibakar Roychowdhury
2023-01-30T10:05:07Z
http://arxiv.org/abs/2301.12757v2
# Spin 2 spectrum for marginal deformations of 4d \(\mathcal{N}=2\) SCFTs ###### Abstract We compute spin 2 spectrum associated with massive graviton fluctuations in \(\gamma\)-deformed Gaiotto-Maldacena background those are holographically dual to marginal deformations of \(\mathcal{N}=2\) SCFTs in four dimensions. Under the special circumstances, we analytically estimate the spectra both for the \(\gamma\)- deformed Abelian T dual (ATD) as well as the non-Abelian T dual (NATD) cases where we retain ourselves upto leading order in the deformation parameter. Our analysis reveals a continuous spectra which is associated with the breaking of the \(U(1)\) isometry (along the directions of the internal manifold) in the presence of the \(\gamma\)- deformation. We also comment on the effects of adding flavour branes into the picture and the nature of the associated spin 2 operators in the dual \(\mathcal{N}=1\) SCFTs. ## 1 Introduction and summary \(\mathcal{N}=2\) SCFTs [1] those live in four dimensions and their realization as dual supergravity solutions [2]-[4] has witnessed as steady progress over the past one and half decade. In particular, the realization of some of these Type IIA geometries [5]-[6] (also known as the Gaiotto-Maldacena background) as Abelian T dual (ATD) as well as non-Abelian T dual (NATD) solutions of \(AdS_{5}\times S^{5}\) has generated renewed interest in the recent years [7]-[13]. In a typical Hanany-Witten set up [14], the \(\mathcal{N}=2\) dualities can be realised as an intersection of NS5-D4-D6 branes whose low energy (IR) description is characterised by a quiver gauge theory containing both color (\(N_{c}\)) as well as flavour (\(N_{f}\)) nodes. The color degrees of freedom are sourced due to D4 branes while the flavour modes come through D6s. In the IR, the quiver is represented by a (super)conformal fixed point which preserves a global \(SO(2,4)\times SU(2)_{\chi,\xi}\times U(1)_{\beta}\) symmetry. The holographic limit is that for which all gauge nodes become large, \(N_{c}\to\infty\). This is the limit in which the dual supergravity description makes sense. In the present paper, however, we are interested in certain holographic aspects associated to _marginal_ deformations of \(\mathcal{N}=2\) SCFTs those were introduced recently by authors in [12]. The corresponding quiver is conjectured to be describing a superconformal fixed point with \(\mathcal{N}=1\) supersymmetries. The dual supergravity (\(\gamma\)- deformed) background can be obtained by applying an \(SL(3,R)\) transformation [15]-[17] in the eleven dimensional description and thereby following a Type IIA reduction. The purpose of the present paper is to explore the properties of spin 2 operators associated to \(\mathcal{N}=1\) superconformal quivers by probing the structure of linearised gravitational perturbations in the dual Type IIA set up. The spin 2 spectrum is what is identified through the massive graviton equation [18]-[26] in the dual supergravity description. The linearised perturbation equation is in principle difficult to solve analytically, except for certain special choices for the associated quantum numbers. In fact, we are able to solve these equations for a particular choice of the quantum number \(m=0\) that is associated with the \(U(1)_{\beta}\) isometrty of the parent Type IIA background. It turns out that the corresponding graviton equation is analytically tractable which yields a regular solution. 
On the other hand, for \(m\neq 0\), the corresponding graviton equation results into a divergent solution which is therefore discarded. Upon fixing the quantum number \(m=0\), we solve the massive graviton equation considering the effects of the \(\gamma\)- deformation upto leading order in the perturbative expansion. Contrary to the undeformed case [23], we notice that the effect of the \(\gamma\)- deformation is to change the spectrum from discrete to continuous. We identify this as an artefact of the breaking of the spherical symmetry \(S^{2}(\chi,\xi)\) in the presence of the \(\gamma\)- deformation. These modes are associated to the \(\psi\)- direction of the internal manifold which could be recast as an isometric direction for the \(\gamma\)-deformed ATD 1. However, one looses this isometry for the \(\gamma\)- deformed NATD. One might expect a different scenario once the flavour D6 branes are placed so that the \(\psi\)- direction is bounded. However, as we outline in the Appendix C, the corresponding massive graviton equations are quite nontrivial to deal with and specially in the presence of the \(\gamma\)- deformation. Footnote 1: One way to realise this isometry for the ATD in Type IIA is through a suitable choice of gauge. We elaborate more on this as we progress in section 4 (see Footnote 3). The rest of the paper is organised as follows. We briefly outline the \(\gamma\)- deformed set up in Section 2. In Section 3, we work out the detailed structure of the linearised perturbation equations. In Section 4, we compute the spin 2 spectrum taking specific examples of \(\gamma\)-deformed ATD as well as NATD models. Finally, we draw our conclusion in Section 5, where we outline the possible structure of the spin 2 operator in the dual field theory. ## 2 Deformed Gaiotto-Maldacena background in Type IIA To begin with, we review in detail the marginal deformations of 4d \(\mathcal{N}=2\) SCFTs and their holographic duals in Type IIA. The marginal deformation of these theories leads towards a new class of \(\mathcal{N}=1\) SCFTs in 4d whose holographic dual (in Type IIA) has been obtained by authors in [12]. These class of geometries are termed as \(\gamma\)- deformed Gaiotto-Maldacena (GM) backgrounds those are constructed following an algorithm developed in [15]-[17]. The seed solution is considered to be an eleven dimensional background in M theory. Typically, these class of geometries are obtained by applying an \(SL(3,R)\) transformation (along with a deformation parameter \(\gamma\)) in M theory. Finally, the ten dimensional (\(\gamma\)-deformed) Type IIA background is constructed following a circle (\(S^{1}\)) reduction2[12] Footnote 2: See [12] for details. \[d\hat{s}_{10}^{2}=e^{-\frac{\hat{\Phi}}{2}}f_{1}\Bigg{[} ds_{AdS_{5}}^{2}+\frac{f_{2}}{4f_{1}}\Big{(}d\sigma^{2}+d\eta^{2} \Big{)}+\frac{f_{3}}{4f_{1}}d\chi^{2}+\frac{f_{3}\sin^{2}\chi}{4f_{1}\big{(}1+ \gamma^{2}f_{3}f_{4}\sin^{2}\chi\big{)}}d\xi^{2}\] \[+\ \frac{f_{4}}{4f_{1}\big{(}1+\gamma^{2}f_{3}f_{4}\sin^{2}\chi \big{)}}\Big{(}d\beta-\gamma f_{5}\sin\chi d\chi\Big{)}^{2}\Bigg{]}, \tag{1}\] where \(\gamma\) is the deformation parameter such that the solution maps to the standard \(\mathcal{N}=2\) supersymmetric Gaiotto-Maldacena background [2; 5; 6; 8; 12] in the limit of the vanishing deformation, \(\gamma=0\). 
The \(AdS_{5}\) line element could be expressed as \[ds_{AdS_{5}}^{2}=-\cosh^{2}rdt^{2}+dr^{2}+\sinh^{2}rd\Omega_{3}^{2}, \tag{2}\] together with the metric functions \(f_{i}(\sigma,\eta)\) \[f_{1}=\left(\frac{2\dot{V}-\ddot{V}}{V^{\prime\prime}}\right)^{ \frac{1}{2}}\ ;\ f_{2}=f_{1}\frac{2V^{\prime\prime}}{\dot{V}}\ ;\ f_{3}=f_{1}\frac{2V^{\prime\prime}\dot{V}}{ \Delta}\ ;\ f_{4}=f_{1}\frac{4V^{\prime\prime}}{2\dot{V}-\ddot{V}}\sigma^{2} \tag{3}\] \[f_{5}=2\Big{(}\frac{\dot{V}\dot{V}^{\prime}}{\Delta}-\eta\Big{)} \ ;\ f_{6}=\frac{2\dot{V}\dot{V}^{\prime}}{2\dot{V}-\ddot{V}}\ ;\ f_{7}=-\frac{4\dot{V}^{2}V^{ \prime\prime}}{\Delta}\ ;\ f_{8}=\Bigg{[}(2)^{12}\ \frac{4\big{(}2\dot{V}-\ddot{V} \big{)}^{3}}{V^{\prime\prime}\dot{V}^{2}\Delta^{2}}\Bigg{]}^{\frac{1}{2}}. \tag{4}\] The dot and the prime of the potential function \(V(\sigma,\eta)\) are denoted as \[\dot{V}=\sigma\partial_{\sigma}V\,\ V^{\prime}=\partial_{\eta}V\, \ \ddot{V}=\sigma\partial_{\sigma}\dot{V}\,\ V^{\prime\prime}=\partial_{\eta}^{2}V\,\ \dot{V}^{\prime}=\sigma \partial_{\sigma}V^{\prime} \tag{5}\] \[\Delta=\big{(}2\dot{V}-\ddot{V}\big{)}V^{\prime\prime}+\big{(} \dot{V}^{\prime}\big{)}^{2}. \tag{6}\] The background NS plus RR fluxes together with the dilaton could be expressed as [12] \[\hat{B}_{2} = \frac{1}{4}\ \frac{1}{1+\gamma^{2}f_{3}f_{4}\sin^{2}\chi}\bigg{(}f_{5 }d\Omega_{2}(\chi,\xi)-\gamma f_{3}f_{4}\sin^{2}\chi d\xi\wedge d\beta\bigg{)}, \tag{7}\] \[\hat{C}_{1} = \frac{1}{16}\ \bigg{(}f_{6}d\beta+\gamma\big{(}f_{7}-f_{5}f_{6} \big{)}\sin\chi d\chi\bigg{)},\] (8) \[\hat{C}_{3} = \frac{1}{64}\ \frac{1}{1+\gamma^{2}f_{3}f_{4}\sin^{2}\chi}f_{7}d\beta \wedge d\Omega_{2}(\chi,\xi),\] (9) \[e^{2\hat{\Phi}} = \frac{f_{8}}{1+\gamma^{2}f_{3}f_{4}\sin^{2}\chi}. \tag{10}\] ## 3 Perturbations Given the above set up (1)-(10), we are now in a position to examine the spin-2 spectrum for the \(\gamma\)-deformed Gaiotto-Maldacena background those correspond to marginal deformations of \(\mathcal{N}=2\) SCFTs in four dimensions. Following the algorithm as in [23], we express the \(\gamma\)- deformed metric (1) (apart from the usual conformal factor) as sum of the AdS\({}_{5}\) factor and a five-dimensional internal space \(\mathcal{M}_{5}^{\gamma}\) in the Einstein frame as \[ds_{10}^{2}=ds_{AdS_{5}}^{2}+ds_{\mathcal{M}_{5}^{\gamma}}^{2}. \tag{10}\] We set the notation below, where we use the indices \(M\) to denote the coordinates of the full ten-dimensional background. On the other hand, we use the indices \(x^{\mu}\) to label the AdS\({}_{5}\) directions and \(y^{m}\) for the \(\gamma\)- deformed five-dimensional internal space \(\mathcal{M}_{5}^{\gamma}\). This results into a ten dimensional metric of the following form \[ds_{10}^{2}=\tilde{g}_{MN}dx^{M}dx^{N}=\tilde{g}_{\mu\nu}(x)dx^{\mu}dx^{\nu}+ \tilde{g}_{mn}(y)dy^{m}dy^{n}, \tag{11}\] where one can split the full ten dimensional metric as \[\tilde{g}_{MN}(x,y)=\begin{bmatrix}\tilde{g}_{\mu\nu}(x)&0\\ 0&\tilde{g}_{mn}(y)\end{bmatrix}. \tag{12}\] Next, we turn on the metric fluctuations along the AdS\({}_{5}\) \[\delta g_{\mu\nu}=e^{2A}h_{\mu\nu}, \tag{13}\] while keeping the other background fluctuations to zero. This allows us to express the metric schematically as \[ds_{E}^{2}=e^{2A}\Bigg{[}\Big{(}\tilde{g}_{\mu\nu}(x)+h_{\mu\nu}\Big{)}dx^{\mu }dx^{\nu}+\tilde{g}_{mn}(y)dy^{m}dy^{n}\Bigg{]}. \tag{14}\] A straightforward comparison between equations (14) and (1) yields \[A=-\frac{\hat{\Phi}(\eta,\sigma,\chi)}{4}+\frac{1}{2}\ln f_{1}(\eta,\sigma). 
\tag{15}\] Next, we decompose the metric perturbation \[h_{\mu\nu}(x,y)=\mathfrak{h}_{\mu\nu}^{[tt]}(x)F(y), \tag{16}\] along with the _transverse_ as well as the _traceless_ gauge \(\tilde{\nabla}^{\mu}\mathfrak{h}_{\mu\nu}^{[tt]}(x)=0=\tilde{g}^{\mu\nu} \mathfrak{h}_{\mu\nu}^{[tt]}(x)\). Notice that, since the dilaton and the background fluxes do not change under metric perturbations, therefore the corresponding equations in (10) are trivially satisfied. On the other hand, the metric (\(R_{MN}\)) equation in (10) could be rephrased as \[R_{MN}-\frac{1}{2}\partial_{M}\Phi\partial_{N}\Phi-\frac{1}{2}\sum_{p=2}^{4} \gamma_{p}e^{\alpha_{p}\Phi}\Bigg{[}\Big{(}\mathcal{A}_{p}^{2}\Big{)}_{MN}- \beta_{p}g_{MN}\mathcal{A}_{p}^{2}\Bigg{]}=0, \tag{17}\] where we denote the remaining fields collectively along with other notations \[\mathcal{A}_{p} :=\{F_{2},H_{3},F_{4}\}\ ;\ \Big{(}\mathcal{A}_{p}^{2}\Big{)}_{MN}= \mathcal{A}_{MM_{1}\cdot M_{p-1}}\mathcal{A}_{N}^{M_{1}\cdot M_{p-1}} \tag{18}\] \[\alpha_{p} :=\Bigg{(}\frac{3}{2},-1,\frac{1}{2}\Bigg{)}\ ;\ \beta_{p}:= \Bigg{(}\frac{1}{16},\frac{1}{12},\frac{3}{32}\Bigg{)}\ ;\ \gamma_{p}:= \Bigg{(}1,\frac{1}{2},\frac{1}{6}\Bigg{)}. \tag{19}\] Next, we use (144) in order to simplify (145) while we simultaneously impose the following conditions on the background fields: \(\bullet\) The only non zero part of \(h_{MN}\) is \(h_{\mu\nu}\) which means we set \(h_{\mu m}=h_{m\mu}=h_{mn}=0\). \(\bullet\)\(h_{\mu\nu}\) is transverse and traceless. \(\bullet\)\(A\) depends only on the internal coordinates \(y^{m}\) namely, \(A=A(y^{m})\). \(\bullet\) Finally, the collective background fields (\(\tilde{\mathcal{A}}_{p}\)) also depend on the internal coordinates \(y^{m}\). Considering all the above points, one could finally express (145) as \[\left(\tilde{g}^{\rho\sigma}\tilde{R}_{\sigma\mu}h_{\rho\nu}- \tilde{g}^{\kappa\sigma}\tilde{R}^{\rho}_{\nu\kappa\mu}h_{\sigma\rho}\right)+ \left(\tilde{g}^{\rho\sigma}\tilde{R}_{\sigma\nu}h_{\rho\mu}-\tilde{g}^{\kappa \sigma}\tilde{R}^{\rho}_{\mu\kappa\nu}h_{\sigma\rho}\right)-\tilde{\nabla}^{2 }h_{\mu\nu}-8\tilde{\nabla}^{P}A\tilde{\nabla}_{P}h_{\mu\nu}\] \[-2h_{\mu\nu}\tilde{\nabla}^{2}A-16h_{\mu\nu}\big{(}\tilde{ \nabla}A\big{)}^{2}+h_{\mu\nu}\sum_{p=2}^{4}\beta_{p}\gamma_{p}e^{2(1-p)A+ \alpha_{p}\Phi}\tilde{\mathcal{A}_{p}}^{2}=0. \tag{146}\] Next, considering \(AdS_{5}\) to be of unit radius and using the fact \[\tilde{g}^{\rho\sigma}\tilde{R}_{\sigma\mu}h_{\rho\nu}-\tilde{g}^ {\kappa\sigma}\tilde{R}^{\rho}_{\nu\kappa\mu}h_{\sigma\rho} = -4h_{\mu\nu}-h_{\mu\nu}=-5h_{\mu\nu} \tag{147}\] \[\tilde{g}^{\rho\sigma}\tilde{R}_{\sigma\nu}h_{\rho\mu}-\tilde{g}^ {\kappa\sigma}\tilde{R}^{\rho}_{\mu\kappa\nu}h_{\sigma\rho} = -4h_{\nu\mu}-h_{\nu\mu}=-5h_{\mu\nu}, \tag{148}\] one could further simplify (146) as \[\tilde{\nabla}^{\alpha}\tilde{\nabla}_{\alpha}h_{\mu\nu}+\tilde{\nabla}^{m} \tilde{\nabla}_{m}h_{\mu\nu}+8\tilde{\nabla}^{m}A\tilde{\nabla}_{m}h_{\mu\nu} +\mathcal{S}h_{\mu\nu}=0, \tag{149}\] where we denote the above function as \[\mathcal{S}=10+2\tilde{\nabla}^{2}A+16\big{(}\tilde{\nabla}A\big{)}^{2}-\sum_ {p=2}^{4}\beta_{p}\gamma_{p}e^{2(1-p)A+\alpha_{p}\Phi}\tilde{\mathcal{A}_{p}} ^{2}. 
\tag{150}\] The second and the third term of (149) could be further simplified as \[\tilde{\nabla}^{m}\tilde{\nabla}_{m}h_{\mu\nu}+8\tilde{\nabla}^{m}A\tilde{ \nabla}_{m}h_{\mu\nu}=e^{-8A}\tilde{\nabla}^{m}\Big{[}e^{8A}\tilde{\nabla}_{m }h_{\mu\nu}\Big{]}\equiv\mathcal{L}^{(1)}\Big{[}h_{\mu\nu}\Big{]}, \tag{151}\] which reveals (149) in its final form \[\tilde{\nabla}^{\alpha}\tilde{\nabla}_{\alpha}h_{\mu\nu}+\mathcal{L}^{(1)} \Big{[}h_{\mu\nu}\Big{]}+\mathcal{S}h_{\mu\nu}=0. \tag{152}\] Here, the operator \(\mathcal{L}^{(k)}\) is defined with respect to the internal coordinates (\(y^{m}\)) as [18] \[\mathcal{L}^{(k)}:=e^{-8kA}\tilde{\nabla}^{m}\Big{[}e^{8kA}\tilde{\nabla}_{m} \Big{]}. \tag{153}\] For a massive graviton of mass \(M\) propagating in \(AdS_{5}\), one satisfies the _Bachas Estes_ equation [18; 19; 20; 21; 22; 23; 24; 25] of the form \[\Bigg{[}\tilde{\nabla}^{\alpha}\tilde{\nabla}_{\alpha}+\Big{(}2-M^{2}\Big{)} \Bigg{]}h_{\mu\nu}=0, \tag{154}\] which further simplifies (152) by replacing AdS\({}_{5}\) derivatives (\(\tilde{\nabla}^{\alpha}\tilde{\nabla}_{\alpha}\)) \[\Bigg{[}\mathcal{L}^{(1)}+\mathcal{S}+\Big{(}M^{2}-2\Big{)}\Bigg{]}F(y)=0, \tag{155}\] which is entirely defined with respect to internal directions \((y^{m})\) where we define \[\mathcal{L}^{(1)}F=\frac{1}{\sqrt{|\tilde{g}_{\mathcal{M}_{5}^{\gamma}}|}}\partial _{m}\Big{(}\sqrt{|\tilde{g}_{\mathcal{M}_{5}^{\gamma}}|}\;\tilde{g}^{mn} \partial_{n}F\Big{)}+8\tilde{g}^{mn}\partial_{m}A\partial_{n}F, \tag{3.21}\] where \(\tilde{g}_{\mathcal{M}_{5}^{\gamma}}\) is the determinant of the \(\gamma\)-deformed five dimensional metric associated with the internal space (2.1) \[d\hat{s}_{\mathcal{M}_{5}^{\gamma}}^{2} = \frac{f_{2}}{4f_{1}}\Big{(}d\sigma^{2}+d\eta^{2}\Big{)}+\frac{f_{ 3}}{4f_{1}}d\chi^{2}+\frac{f_{3}\sin^{2}\chi}{4f_{1}\big{(}1+\gamma^{2}f_{3}f_ {4}\sin^{2}\chi\big{)}}d\xi^{2} \tag{3.22}\] \[+\;\frac{f_{4}}{4f_{1}\big{(}1+\gamma^{2}f_{3}f_{4}\sin^{2}\chi \big{)}}\Big{(}d\beta-\gamma f_{5}\sin\chi d\chi\Big{)}^{2}.\] Using (3.22) together with (3.6) one can finally express (3.21) as \[\mathcal{L}^{(1)}F = \frac{2}{f_{2}}\Bigg{[}5\Big{(}\partial_{\eta}f_{1}\partial_{ \eta}F+\partial_{\sigma}f_{1}\partial_{\sigma}F\Big{)}+\frac{1}{f_{3}f_{4}( \gamma^{2}f_{3}f_{4}+\csc^{2}\chi)}f_{1}\Big{(}2f_{2}\Big{(}\csc^{2}\chi f_{3} \partial_{\beta}^{2}F \tag{3.23}\] \[\gamma^{4}f_{3}^{2}f_{4}^{3}\partial_{\xi}^{2}F+f_{4}\Big{(}\csc ^{2}\chi\Big{(}\Big{(}\cot\chi-2\partial_{\chi}\hat{\Phi}\Big{)}\partial_{ \chi}F+\partial_{\chi}^{2}F+\csc^{2}\chi\partial_{\xi}^{2}F\Big{)}+\] \[2\gamma^{2}f_{3}^{2}\partial_{\beta}^{2}F+\gamma^{2}f_{5}^{2} \partial_{\beta}^{2}F+2\gamma\csc\chi f_{5}\Big{(}\Big{(}\cot\chi-\partial_{ \chi}\hat{\Phi}\Big{)}\partial_{\beta}F+\partial_{\chi}\partial_{\beta}F\Big{)} \Big{)}+\] \[\gamma^{2}f_{3}f_{4}^{2}\Big{(}-\Big{(}2\partial_{\chi}\hat{\Phi} +\cot\chi\Big{)}\partial_{\chi}F+\partial_{\chi}^{2}F+2\csc^{2}\chi\partial_{ \xi}^{2}F+\gamma^{2}\sin^{2}\chi f_{3}^{2}\partial_{\beta}^{2}F+\] \[\gamma^{2}\sin^{2}\chi f_{5}^{2}\partial_{\beta}^{2}F+2\gamma\sin \chi f_{5}\Big{(}\partial_{\chi}\partial_{\beta}F-\partial_{\chi}\hat{\Phi} \partial_{\beta}F\Big{)}\Big{)}\Big{)}+\csc^{2}\chi\Big{(}2f_{4}\Big{(} \partial_{\eta}f_{3}\] \[\partial_{\eta}F+\partial_{\sigma}f_{3}\partial_{\sigma}F+f_{3} \Big{(}-2\partial_{\eta}\hat{\Phi}\partial_{\eta}F-2\partial_{\sigma}\hat{ \Phi}\partial_{\sigma}F+\partial_{\eta}^{2}F+\partial_{\sigma}^{2}F\Big{)} \Big{)}\] 
\[+f_{3}\Big{(}\partial_{\eta}f_{4}\partial_{\eta}F+\partial_{\sigma }f_{4}\partial_{\sigma}F\Big{)}\Big{)}-\gamma^{2}f_{3}^{2}f_{4}\Big{(} \partial_{\eta}f_{4}\partial_{\eta}F+\partial_{\sigma}f_{4}\partial_{\sigma}F+ f_{4}\Big{(}4\partial_{\eta}\hat{\Phi}\] \[\partial_{\eta}F-2\Big{(}-2\partial_{\sigma}\hat{\Phi}\partial_{ \sigma}F+\partial_{\eta}^{2}F+\partial_{\sigma}^{2}F\Big{)}\Big{)}\Big{)} \Big{)}\Bigg{]}.\] It is quite trivial to notice that, in the limit \(\gamma\to 0\), together with proper redefinition of \(f_{i}\)'s, the expression (3.23) boils down into the original Gaiotto-Maldacena background [23]. For the generic class of Gaiotto-Maldacena geometries characterised by the functions \(f_{i}(\sigma,\eta)\) the universal solution of (3.20) is discussed in [24]. However, in the presence of \(\gamma\)-deformation the universal solution turns out to be highly non-trivial. Therefore, in present paper we concern two specific examples namely \(\gamma\)-deformed Abelian and non-Abelian T-dual backgrounds. ## 4 Spin \(2\) spectrum Below, we discuss in detail the spectrum associated with massive graviton fluctuations taking two specific examples within the deformed Gaiotto-Maldacena class of geometries. These are the \(\gamma\)- deformed Abelian T dual (ATD) and non-Abelian T dual (NATD) solutions those were obtained by authors in [12]. The addition of flavour branes complicates the scenario and the corresponding equations of motion are hardly tractable analytically. We outline these issues in the Appendix C. ### \(\gamma\)- deformed ATD The potential function for the ATD case takes the form [8] \[V_{\rm ATD}(\sigma,\eta)=\ln\sigma-\frac{1}{2}\sigma^{2}+\eta^{2}. \tag{10}\] The associated functions \(f_{i}(\sigma,\eta)\) in (3) turn out to be 3 Footnote 3: There is a shift symmetry in the expression (10) namely \(V_{\rm ATD}(\sigma,\eta)\to V_{\rm ATD}(\sigma,\eta)+A\eta\) ; where \(A=\) const. This shift symmetry can be interpreted as a residual diffeomorphism/ gauge that preserves the metric as well as the background fluxes. By using this symmetry, one can therefore gauge away \(f_{5}\) as \(\tilde{f}_{5}(=f_{5}+2\eta)\), such that \(\tilde{f}_{5}\) vanishes for the potential function (10). This restores the \(\psi\)-isometry for the (\(\gamma\)- deformed) ATD solution. This gauge redundancy is absent for the non-Abelian T-dual solution. \[f_{1}=1\ ;\ f_{2}=\frac{4}{1-\sigma^{2}}\ ;\ f_{3}=1-\sigma^{2}\ ;\ f_{4}=4 \sigma^{2} \tag{11}\] \[f_{5}=-2\eta\ ;\ f_{6}=0\ ;\ f_{7}=-2\big{(}1-\sigma^{2}\big{)} ^{2}\ ;\ f_{8}=\frac{64}{1-\sigma^{2}}. \tag{12}\] Using (10), the expression in (3.2) takes the form \[{\cal L}^{(1)}F = \frac{1}{\sigma^{2}\big{(}\sigma^{2}-1\big{)}}\Bigg{[}\Big{(}-4 \gamma^{2}\sigma^{2}\Big{(}4\eta^{2}+\Big{(}\sigma^{2}-1\Big{)}^{2}\Big{)} \sin^{2}\chi+\sigma^{2}-1\Big{)}\partial_{\beta}^{2}F \tag{13}\] \[\sigma\Big{(}4\sigma\Big{(}\Big{(}4\gamma^{2}\sigma^{2}\Big{(} \sigma^{2}-1\Big{)}-\csc^{2}\chi\Big{)}\partial_{\xi}^{2}F+4\gamma\eta\sin \chi\partial_{\chi}\partial_{\beta}F-\partial_{\chi}^{2}F\] \[-\cot\chi\partial_{\chi}F\Big{)}-\sigma\Big{(}\sigma^{2}-1\Big{)} ^{2}\partial_{\eta}^{2}F-\sigma\Big{(}\sigma^{2}-1\Big{)}^{2}\partial_{\sigma }^{2}F+\Big{(}-5\sigma^{4}+6\sigma^{2}-1\Big{)}\] \[\partial_{\sigma}F\Big{)}+16\gamma\eta\sigma^{2}\cos\chi \partial_{\beta}F\Bigg{]}.\] Using the change of variables \[\eta=2\psi\ ;\ \sigma=\sin\alpha \tag{14}\] and expanding the L.H.S. 
of (3.2) upto \({\cal O}(\gamma)\) we find \[\partial_{\alpha}^{2}F+\Big{(}\cot\alpha-3\tan\alpha\Big{)} \partial_{\alpha}F+\frac{1}{\sin^{2}\alpha}\partial_{\beta}^{2}F+\frac{\cos^{ 2}\alpha}{4}\partial_{\psi}^{2}F+\frac{4}{\cos^{2}\alpha}\nabla_{(2)}^{2}(\chi,\xi)F\] \[-32\frac{\psi\gamma}{\cos^{2}\alpha}\sin\chi\Big{(}\partial_{ \chi}\partial_{\beta}F+\cot\chi\partial_{\beta}F\Big{)}+M^{2}F=0, \tag{15}\] where \(\nabla_{(2)}^{2}(\chi,\xi)=\partial_{\chi}^{2}+\cot\chi\partial_{\chi}+\csc^ {2}\chi\partial_{\xi}^{2}\), is the Laplace operator corresponding to \(S^{2}(\chi,\xi)\). Next, we propose an ansatertz of the form \[F=e^{im\beta}e^{iq\xi}\ \tilde{F}(\alpha,\chi,\psi), \tag{16}\] where \(\beta\) and \(\xi\) are the isometry directions of the internal manifold and \(\{m,q\}\) are respectively the associated quantum numbers. This further simplifies (15) as \[\partial_{\alpha}^{2}\tilde{F}+\Big{(}\cot\alpha-3\tan\alpha\Big{)}\partial_ {\alpha}\tilde{F}-\frac{1}{\sin^{2}\alpha}m^{2}\tilde{F}+\frac{\cos^{2}\alpha }{4}\partial_{\psi}^{2}\tilde{F}+\frac{4}{\cos^{2}\alpha}\] \[\Big{(}\partial_{\chi}^{2}\tilde{F}+\cot\chi\partial_{\chi}\tilde{F}-q ^{2}\csc^{2}\chi\tilde{F}\Big{)}+M^{2}\tilde{F}\] \[-32(im)\frac{\psi\gamma}{\cos^{2}\alpha}\sin\chi\Big{(}\partial_{ \chi}\tilde{F}+\cot\chi\tilde{F}\Big{)}=0. \tag{10}\] Interestingly, for \(m=0\) mode the imaginary part in (10) identically vanishes and we are left only with the real part. On the other hand, for \(m\neq 0\) mode, the solution corresponding to the imaginary part (\(\tilde{F}_{\rm imaginary}\sim\csc\chi\)) yields a diverging contribution near \(\chi=\{0,\pi\}\). As a result, the corresponding graviton fluctuation (11) blows up. Therefore, these nonzero (\(m\neq 0\)) modes are discarded for the purpose of the present analysis. ### \(m=0\) mode As mentioned previously, for \(m=0\) mode, the imaginary part of (10) vanishes and the equation for \(\tilde{F}\) takes the following form \[\partial_{\alpha}^{2}\tilde{F}+\Big{(}\cot\alpha-3\tan\alpha\Big{)} \partial_{\alpha}\tilde{F}+\frac{\cos^{2}\alpha}{4}\partial_{\psi}^{2}\tilde{ F}+\frac{4}{\cos^{2}\alpha}\] \[\Big{(}\partial_{\chi}^{2}\tilde{F}+\cot\chi\partial_{\chi} \tilde{F}-q^{2}\csc^{2}\chi\tilde{F}\Big{)}+M^{2}\tilde{F}=0. \tag{11}\] In order to proceed further, we propose a separation of variable of the form \[\tilde{F}(\alpha,\chi,\psi)=F_{\alpha}(\alpha)F_{\psi}(\psi)F_{\chi}(\chi). \tag{12}\] Plugging (12) in (11) we obtain \[\partial_{\alpha}^{2}F_{\alpha}(\alpha)+\Big{(}\cot\alpha-3\tan \alpha\Big{)}\partial_{\alpha}F_{\alpha}(\alpha)+\Bigg{[}M^{2}+\frac{\cos^{2} \alpha}{4}\frac{1}{F_{\psi}(\psi)}\partial_{\psi}^{2}F_{\psi}(\psi)\] \[+\frac{4}{\cos^{2}\alpha}\frac{1}{F_{\chi}(\chi)}\Big{(}\partial_ {\chi}^{2}F_{\chi}(\chi)+\cot\chi\partial_{\chi}F_{\chi}(\chi)-q^{2}\csc^{2} \chi F_{\chi}(\chi)\Big{)}\Bigg{]}F_{\alpha}(\alpha)=0, \tag{13}\] where the \(\psi\)- equation has a solution of the form \[F_{\psi}(\psi)=A_{1}\sin(2n\psi)+A_{2}\cos(2n\psi). \tag{14}\] Here, \(\psi\) takes values within the interval \([0,\frac{\pi}{n}]\) where \(n\) is an integer. As a result, the corresponding function \(F_{\psi}(\psi)\) is also periodic in the given interval. 
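Although the \(\psi\)-equation is not displayed explicitly, the quoted solution above corresponds to the harmonic equation \(\partial_{\psi}^{2}F_{\psi}+4n^{2}F_{\psi}=0\); the separation constant \(-4n^{2}\) can be read off from the \(-n^{2}\cos^{2}\alpha\) term in (4.15) below. A short sympy sketch (an addition for illustration, not part of the original analysis) confirming this reading:

```python
# A short symbolic check (an addition for illustration).  It verifies that the
# quoted psi-solution satisfies F'' + 4 n^2 F = 0 and is periodic with period
# pi/n on the stated interval.
import sympy as sp

psi, n = sp.symbols('psi n', positive=True)
A1, A2 = sp.symbols('A1 A2')
F_psi = A1 * sp.sin(2 * n * psi) + A2 * sp.cos(2 * n * psi)

print(sp.simplify(sp.diff(F_psi, psi, 2) + 4 * n**2 * F_psi))   # expected: 0
print(sp.simplify(F_psi.subs(psi, psi + sp.pi / n) - F_psi))    # expected: 0
```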
Similarly the equation for \(F_{\chi}(\chi)\) in (13) takes the form \[\partial_{\chi}^{2}F_{\chi}(\chi)+\cot\chi\partial_{\chi}F_{\chi}(\chi)+\Big{(} K^{2}-q^{2}\csc^{2}\chi\Big{)}F_{\chi}(\chi)=0, \tag{15}\] and yields a solution in terms of the Legendre functions \(P,Q\)[27] \[F_{\chi}(\chi)=B_{1}P\Bigg{[}m,q,\cos\chi\Bigg{]}+B_{2}Q\Bigg{[}n,q,\cos\chi \Bigg{]}, \tag{16}\] where \(m=\frac{1}{2}\Big{(}\sqrt{4K^{2}+1}-1\Big{)}\) and \(n=\frac{1}{2}\Big{(}\sqrt{4K^{2}+1}-1\Big{)}\) are integers such that \(F_{\chi}(\chi)\) is smooth within the interval \(0\leq\chi\leq\pi\), together with \(B_{1,2}\) as constants. Finally, by plugging (4.12)-(4.14) into (4.11) we find \[\partial_{\alpha}^{2}F_{\alpha}(\alpha)+\Big{(}\cot\alpha-3\tan\alpha\Big{)} \partial_{\alpha}F_{\alpha}(\alpha)+\Bigg{[}M^{2}-\frac{4K^{2}}{\cos^{2}\alpha} -n^{2}\cos^{2}\alpha\Bigg{]}F_{\alpha}(\alpha)=0. \tag{4.15}\] Next, we implement the change in variable \(z=\sin^{2}\alpha\) which finally yields \[z\big{(}1-z\big{)}\partial_{z}^{2}F_{z}(z)+\big{(}1-3z\big{)}\partial_{z}F_{z}( z)+\Bigg{[}\frac{M^{2}}{4}-\frac{K^{2}}{1-z}-\frac{n^{2}\big{(}1-z\big{)}}{4} \Bigg{]}F_{z}(z)=0. \tag{4.16}\] Notice that, the above equation clearly differs from [23] as we do not have the contribution (\(\sim\ell(\ell+1)\) ) due to the \(S^{2}(\chi,\xi)\) as well as contribution (\(m\)) due to \(S^{1}(\beta)\) of the internal manifold. This stems from the fact that the original \(SU(2)\) symmetries of \(S^{2}(\chi,\xi)\) are broken down to a single \(U(1)\) isometry (corresponding to the \(\xi\) direction (see (3.22)) in the presence of the \(\gamma\)- deformation and we fixed \(m=0\) in our analysis. ### Zero modes: \(n=0\) Setting \(n=0\) for zero modes, one finds \[z\big{(}1-z\big{)}\partial_{z}^{2}F_{z}(z)+\big{(}1-3z\big{)}\partial_{z}F_{ z}(z)+\Bigg{[}\frac{M^{2}}{4}-\frac{K^{2}}{1-z}\Bigg{]}F_{z}(z)=0. \tag{4.17}\] A particular solution of (4.17) exists only for \(K=0\) mode and in the small \(z\sim 0\) limit which is equivalent to the \(\sigma\sim 0\) limit. A closer look reveals that the corresponding solution can be expressed in terms of the Bessel function and Neumann function of the form \[F(z)_{z\sim 0}=B_{3}J_{0}\big{(}M\sqrt{z}\big{)}+B_{4}Y_{0}\big{(}M\sqrt{z} \big{)}, \tag{4.18}\] where \(M\)(mass of the graviton)\(>0\) and \(B_{3,4}\) are constants. On the other hand, the function \(Y_{0}\big{(}M\sqrt{z}\big{)}\) diverges like \(\log z\) close to \(z\sim 0\), therefore we set the constant, \(B_{4}=0\) in (4.18). Combining all these facts together, the final solution takes the form \[F(z)_{z\sim 0}=B_{3}J_{0}\big{(}M\sqrt{z}\big{)}, \tag{4.19}\] where \(M>0\). The above analysis shows that \(F_{z}(z)\) in (4.19) is indeed a smooth function in the small \(z\) limit without imposing any further constraints. Contrary to the undeformed case [23], here we obtain a continuous spectrum for the massive graviton. As mentioned previously, this is an artefact of the absence of the spherical symmetry in the presence of the \(\gamma\)- deformation. #### 4.3.1 \(n\neq 0\) case The \(n\neq 0\) case can be dealt through the WKB method of [28]-[29] where the graviton mass (\(M\)) is considered to be very large. Introducing the following change in coordinates \[r=\frac{z}{1-z}\ ;\ r\in[0,+\infty) \tag{4.20}\] the graviton equation (4.16) could be rephrased as4 Footnote 4: Here we replace \(F_{z}(z)\) by \(\Psi(r)\) in the transformed coordinates. 
\[\partial_{r}\Bigg{[}p(r)\partial_{r}\Psi(r)\Bigg{]}+\Bigg{[}M^{2}w(r)+q(r) \Bigg{]}\Psi(r)=0, \tag{4.21}\] where we define the above functions as \[p(r)=\frac{r}{1+r}\ ;\ w(r)=\frac{1}{4\big{(}1+r\big{)}^{3}}\ ;\ q(r)=- \Bigg{[}\frac{K^{2}}{\big{(}1+r\big{)}^{2}}+\frac{n^{2}}{\big{(}1+r\big{)}^{4} }\Bigg{]}. \tag{4.22}\] Expanding the above functions (4.22) near \(r\sim 0\) one finds \[p(r) \approx r+\mathcal{O}(r^{2}), \tag{4.23}\] \[w(r) \approx \frac{1}{4}+\mathcal{O}(r),\] (4.24) \[q(r) \approx -\Big{(}K^{2}+\frac{n^{2}}{4}\Big{)}+\Big{(}2K^{2}+n^{2}\Big{)} r+\mathcal{O}(r^{2}), \tag{4.25}\] which can be further compared to yield the associated exponents as \[p(r) \approx p_{1}r^{s_{1}}\ ;\ p_{1}=s_{1}=1, \tag{4.26}\] \[w(r) \approx w_{1}r^{s_{2}}\ ;\ w_{1}=\frac{1}{4}\,\ s_{2}=0,\] (4.27) \[q(r) \approx q_{1}r^{s_{3}}\ ;\ q_{1}=-\Big{(}K^{2}+\frac{n^{2}}{4}\Big{)}\,\ s_{3}=0. \tag{4.28}\] On a similar note, an expansion near \(r\to\infty\) reveals \[p(r) \approx 1+\mathcal{O}(r^{-1}), \tag{4.29}\] \[w(r) \approx \frac{1}{4r^{3}}+\mathcal{O}(r^{-4}),\] (4.30) \[q(r) \approx -\frac{K^{2}}{r^{2}}+\frac{2K^{2}}{r^{3}}-\frac{1}{r^{4}}\Big{(}3 K^{2}-\frac{n^{2}}{4}\Big{)}+\mathcal{O}(r^{-5}), \tag{4.31}\] which can be further compared to decode the associated exponents as \[p(r) \approx p_{2}r^{t_{1}}\ ;\ p_{2}=1\,\ t_{1}=0\, \tag{4.32}\] \[w(r) \approx w_{2}r^{t_{2}}\ ;\ w_{2}=\frac{1}{4}\,\ t_{2}=-3\,\] (4.33) \[q(r) \approx q_{2}r^{t_{3}}\ ;\ q_{2}=-K^{2}\,\ t_{3}=-2. \tag{4.34}\] Finally, we use the above information in order to express the graviton mass [28]-[29] \[M^{2}=\frac{\pi^{2}}{\zeta^{2}}\lambda\Bigg{[}\lambda-1+\frac{\alpha_{2}}{ \alpha_{1}}+\frac{\beta_{2}}{\beta_{1}}\Bigg{]}, \tag{4.35}\] where \(\lambda(\geq 1)\) is a positive continuous parameter. Here, we define the function \(\zeta\) through the following integral \[\zeta=\int_{0}^{\infty}dr\sqrt{\frac{w}{p}}=\frac{1}{2}\int_{0}^{\infty}dr\frac{1 }{\sqrt{r}(1+r)}=\frac{\pi}{2}, \tag{4.36}\] together with the other parameters as \[\alpha_{1} = s_{2}-s_{1}+2=1,\] \[\alpha_{2} = |s_{1}-1|=0,\] \[\beta_{1} = t_{1}-t_{2}-2=1,\] \[\beta_{2} = \sqrt{\big{(}t_{1}-1\big{)}^{2}-4\frac{q_{2}}{p_{2}}}=\sqrt{1+4K^ {2}}. \tag{4.37}\] Plugging (4.37) into (4.35) we finally obtain \[M^{2}=4\lambda\Big{(}\lambda-1+\sqrt{1+4K^{2}}\Big{)}. \tag{4.38}\] Here, couple of points are to be noticed. First of all, the angular momentum quantum number (\(\sim\ell\)) does not appear in (4.38) since the symmetries of the two sphere \(S^{2}(\chi,\xi)\) is broken (as a result of the \(\gamma\)- deformation) therefore it is not a conserved quantity anymore. Secondly, we notice that the spectrum does not contain any information about the quantum number \(n\) as also shown previously by the authors in [23]. This is due to the fact that the \(n^{2}\) term is suppressed in the expression (4.37) therefore it does not show up in (4.38). ### \(\gamma\)- deformed NATD Technically, the analysis for the \(\gamma\)- deformed NATD is quite similar in spirit to that with the \(\gamma\)- deformed ATD example. 
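Before turning to the NATD case in detail, note that every ingredient of the WKB estimate above is explicit and can be sanity-checked numerically. The short Python sketch below is an addition for illustration, not part of the derivation; it assumes only \(p(r)\), \(w(r)\) of (4.22) and the formula (4.38), reproduces \(\zeta=\pi/2\) of (4.36), and tabulates a few representative masses.

```python
# A short numerical sanity check (an addition for illustration, not part of the
# derivation).  It assumes only p(r), w(r) from (4.22) and the WKB formula
# (4.38), reproducing zeta = pi/2 of (4.36) and tabulating a few masses.
import numpy as np
from scipy.integrate import quad

p = lambda r: r / (1.0 + r)                 # p(r) of (4.22)
w = lambda r: 1.0 / (4.0 * (1.0 + r)**3)    # w(r) of (4.22)

integrand = lambda r: np.sqrt(w(r) / p(r))  # = 1/(2 sqrt(r) (1+r))
zeta = quad(integrand, 0.0, 1.0)[0] + quad(integrand, 1.0, np.inf)[0]
print(f"zeta = {zeta:.10f}   (pi/2 = {np.pi / 2:.10f})")

# WKB estimate (4.38): M^2 = 4 lam (lam - 1 + sqrt(1 + 4 K^2)), lam >= 1 continuous
M2 = lambda lam, K: 4.0 * lam * (lam - 1.0 + np.sqrt(1.0 + 4.0 * K**2))
for K in (0, 1, 2):
    print(f"K = {K}:", [round(M2(lam, K), 3) for lam in (1.0, 1.5, 2.0, 3.0)])
```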
The corresponding potential function is given by [8] \[V_{\rm NATD}(\sigma,\eta)=\eta\Big{(}\ln\sigma-\frac{1}{2}\sigma^{2}\Big{)}+ \frac{1}{3}\eta^{3}, \tag{4.39}\] together with the associated metric functions \(f_{i}(\sigma,\eta)\) \[f_{1}=1\ ;\ f_{2}=\frac{4}{1-\sigma^{2}}\ ;\ f_{3}=\frac{4 \eta^{2}(1-\sigma^{2})}{4\eta^{2}+(1-\sigma^{2})^{2}}\ ;\ f_{4}=4\sigma^{2}, \tag{4.40}\] \[f_{5}=-\frac{8\eta^{3}}{4\eta^{2}+(1-\sigma^{2})^{2}}\ ;\ f_{6}= \big{(}1-\sigma^{2}\big{)}^{2}\ ;\ f_{7}=-\frac{8\eta^{3}(1-\sigma^{2})^{2}}{4 \eta^{2}+(1-\sigma^{2})^{2}},\] (4.41) \[f_{8}=\frac{256}{(1-\sigma^{2})(4\eta^{2}+(1-\sigma^{2})^{2})}. \tag{4.42}\] Then corresponding expression in (3.23) takes the form \[\mathcal{L}^{(1)}F = -\frac{1}{\eta^{2}\sigma^{2}(\sigma^{2}-1)}\Bigg{[}\eta^{2}\Big{(} 16\gamma^{2}\eta^{2}\sigma^{2}\sin^{2}\chi-\sigma^{2}+1\Big{)}\partial_{\beta} ^{2}F-\sigma^{2}\csc^{2}\chi\] \[\Big{(}16\gamma^{2}\eta^{2}\sigma^{2}\Big{(}\sigma^{2}-1\Big{)} \sin^{2}\chi-4\eta^{2}-\Big{(}\sigma^{2}-1\Big{)}^{2}\Big{)}\partial_{\xi}^{2 }F-16\gamma\eta^{3}\sigma^{2}\sin\chi\partial_{\chi}\partial_{\beta}F\] \[-16\gamma\eta^{3}\sigma^{2}\cos\chi\partial_{\beta}F+\eta^{2} \sigma^{6}\partial_{\eta}^{2}F+5\eta^{2}\sigma^{5}\partial_{\sigma}F-2\eta^{2 }\sigma^{4}\partial_{\eta}^{2}F+6\eta^{2}\sigma^{3}\] \[\partial_{\sigma}F+4\eta^{2}\sigma^{2}\partial_{\chi}^{2}F+\eta ^{2}\sigma^{2}\partial_{\eta}^{2}F+\eta^{2}\Big{(}\sigma^{3}-\sigma\Big{)}^{2 }\partial_{\sigma}^{2}F+\sigma^{2}\Big{(}4\eta^{2}+\sigma^{4}-2\sigma^{2}+1 \Big{)}\] \[\cot\chi\partial_{\chi}F+\eta^{2}\sigma\partial_{\sigma}F+\sigma^{6} \partial_{\chi}^{2}F+2\eta\sigma^{6}\partial_{\eta}F-2\sigma^{4}\partial_{\chi }^{2}F-4\eta\sigma^{4}\partial_{\eta}F\] \[+\sigma^{2}\partial_{\chi}^{2}F+2\eta\sigma^{2}\partial_{\eta}F \Bigg{]}. \tag{4.43}\] Using the following change in variables \[\eta=2\psi\ ;\ \sigma=\sin\alpha, \tag{4.44}\] and retaining terms upto leading order in the deformation parameter one finds \[\partial_{\alpha}^{2}F+\Big{(}\cot\alpha-3\tan\alpha\Big{)} \partial_{\alpha}F+\frac{1}{\sin^{2}\alpha}\partial_{\beta}^{2}F\] \[+\frac{\cos^{2}\alpha}{4}\bigg{(}\partial_{\psi}^{2}F+\frac{2}{ \psi}\partial_{\psi}F+\frac{1}{\psi^{2}}\nabla_{(2)}^{2}(\chi,\xi)F\bigg{)}+ \frac{4}{\cos^{2}\alpha}\nabla_{(2)}^{2}(\chi,\xi)F\] \[-32\frac{\psi\gamma}{\cos^{2}\alpha}\sin\chi\Big{(}\partial_{ \chi}\partial_{\beta}F+\cot\chi\partial_{\beta}F\Big{)}+M^{2}F=0. \tag{4.45}\] Like before, we use an ansatz of the following form \[F=e^{im\beta}e^{iq\xi}\ \tilde{F}(\alpha,\chi,\psi), \tag{4.46}\] which splits the equation in (4.45) into the form \[\partial_{\alpha}^{2}\tilde{F}+\Big{(}\cot\alpha-3\tan\alpha\Big{)} \partial_{\alpha}\tilde{F}-\frac{1}{\sin^{2}\alpha}m^{2}\tilde{F}+\frac{\cos ^{2}\alpha}{4}\] \[\Bigg{[}\partial_{\psi}^{2}\tilde{F}+\frac{2}{\psi}\partial_{ \psi}\tilde{F}+\frac{1}{\psi^{2}}\Big{(}\partial_{\chi}^{2}\tilde{F}+\cot\chi \partial_{\chi}\tilde{F}-q^{2}\csc^{2}\chi\tilde{F}\Big{)}\Bigg{]}+\] \[\frac{4}{\cos^{2}\alpha}\Big{(}\partial_{\chi}^{2}\tilde{F}+\cot \chi\partial_{\chi}\tilde{F}-q^{2}\csc^{2}\chi\tilde{F}\Big{)}\] \[-32(im)\frac{\psi\gamma}{\cos^{2}\alpha}\sin\chi\Big{(}\partial_ {\chi}\tilde{F}+\cot\chi\tilde{F}\Big{)}+M^{2}\tilde{F}=0. 
\tag{4.47}\] The next step would be to use the following ansatz \[\tilde{F}(\alpha,\chi,\psi)=F_{\alpha}(\alpha)F_{\chi}(\chi)F_{\psi}(\psi), \tag{4.48}\] and thereby considering only on \(m=0\) mode we obtain \[\partial_{\alpha}^{2}F_{\alpha}(\alpha)+\Big{(}\cot\alpha-3\tan \alpha\Big{)}\partial_{\alpha}F_{\alpha}(\alpha)+\Bigg{[}M^{2}+\frac{\cos^{2} \alpha}{4}\frac{1}{F_{\psi}(\psi)}\bigg{(}\partial_{\psi}^{2}F_{\psi}(\psi)\] \[+\frac{2}{\psi}\partial_{\psi}F_{\psi}(\psi)+\frac{1}{\psi^{2}} \frac{1}{F_{\chi}(\chi)}\Big{(}\partial_{\chi}^{2}F_{\chi}(\chi)+\cot\chi \partial_{\chi}F_{\chi}(\chi)-q^{2}\csc^{2}\chi F_{\chi}(\chi)\Big{)}\bigg{)}\] \[+\frac{4}{\cos^{2}\alpha}\frac{1}{F_{\chi}(\chi)}\Big{(}\partial_ {\chi}^{2}F_{\chi}(\chi)+\cot\chi\partial_{\chi}F_{\chi}(\chi)-q^{2}\csc^{2} \chi F_{\chi}(\chi)\Big{)}\Bigg{]}F_{\alpha}(\alpha)=0, \tag{4.49}\] Like before, we obtain the same solution for the \(F_{\chi}(\chi)\) as in the Abelian T-dual case in (4.14). However, in this case solution for the \(\psi\)- equation takes the form \[F_{\psi}(\psi)=\frac{e^{-2n\psi}}{\psi}-\frac{K^{2}}{4n^{2}\psi}. \tag{4.50}\] Finally, using (4.50) and introducing the new coordinate \(z=\sin^{2}\alpha\), one arrives at an equation \[z\big{(}1-z\big{)}\partial_{z}^{2}F_{z}(z)+\big{(}1-3z\big{)}\partial_{z}F_{z}(z) +\Bigg{[}\frac{M^{2}}{4}-\frac{K^{2}}{1-z}+\frac{n^{2}\big{(}1-z\big{)}}{4} \Bigg{]}F_{z}(z)=0, \tag{4.51}\] which is almost identical to that of the \(\gamma\)- deformed ATD (4.16) except for the sign in front of the \(n^{2}\) term. Notice that, the case with the zero (\(K=0\) and \(n=0\)) modes is exactly identical to what we have found for the deformed ATD case. On the other hand, for the non zero modes, one can show that the \(n^{2}\) term is largely suppressed both in the \(r\sim 0\) and \(r\to\infty\) limits which reveals an identical spectra as in the deformed ATD case. ## 5 Concluding remarks Before we conclude, a couple of remarks are in order. First of all, it is worthwhile to mention that the graviton spectrum does not receive any correction due to \(\mathcal{S}=2+\mathcal{O}(\gamma^{2})\) at leading order in the deformation (see (3.20)). Therefore, at leading order, we have a nontrivial cancellation \(\mathcal{S}-2\sim 0\) like in the undeformed case [23]. This plays a crucial role while solving the graviton spectrum at leading order in the \(\gamma\)- deformation. In the literature we analytically solve the graviton spectrum in the presence of \(\gamma\)-deformation under some specific choice of the quantum number \(m=0\). Finally, we would like comment on the dual spin 2 operator associated to the \(\mathcal{N}=1\) superconformal quiver. As mentioned previously, the \(SU(2)_{\chi,\xi}\) charges of the original Gaiotto-Maldacena background are broken down to a single \(U(1)_{\xi}\) charge (\(q\)) as a result of the \(\gamma\)- deformation. The remaining \(U(1)_{\beta}\) charge (\(m\)) is set to be zero. Notice that, the spin 2 operator that we are going to propose is associated to a dual superconformal quiver that is _unbounded_ unless we add flavour degrees of freedom. The spin 2 operator has a continuous spectrum whose dimension could be typically expressed in the form, \(\Delta=4f(\lambda,K)+q\), where \(\lambda\) is some continuous parameter. 
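This continuity of the spectrum is already visible in the zero-mode sector: for \(K=0\) and small \(z\), (4.17) reduces to \(zF^{\prime\prime}+F^{\prime}+\frac{M^{2}}{4}F=0\), which is solved by \(F(z)=J_{0}(M\sqrt{z})\) of (4.19) for every \(M>0\), i.e. with no quantization condition. A brief symbolic sketch of this check (an addition for illustration; it assumes only the small-\(z\) limit just stated):

```python
# A brief symbolic check (an addition for illustration).  In the small-z, K = 0
# limit, (4.17) reduces to z F'' + F' + (M^2/4) F = 0; we verify that
# F(z) = J_0(M sqrt(z)) of (4.19) solves it for arbitrary M > 0, consistent
# with a continuous spectrum.
import sympy as sp

z, M = sp.symbols('z M', positive=True)
F = sp.besselj(0, M * sp.sqrt(z))
residual = z * sp.diff(F, z, 2) + sp.diff(F, z) + sp.Rational(1, 4) * M**2 * F

print(sp.simplify(residual))   # should reduce to 0 (the numerical check below confirms it)
for Mval in (0.5, 1.0, 3.7):
    print([abs(residual.subs({z: zi, M: Mval}).evalf()) for zi in (0.001, 0.01, 0.1)])
```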
The dual spin 2 operator can be constructed taking scalars (\(\phi^{k}\)) from tensor multiplets (those also connect different vector multiplets and thereby carry color indices (\(k\))) and the gauge fields (\(A^{I}\)) of the vector multiplet those are sourced due to the color D4 branes. The resulting operator could be schematically expressed as \[\mathcal{O}_{\mu\nu}\sim\text{Tr}(\phi^{k_{i}}A^{I_{i}})T_{\mu\nu}\ ;\ \sum_{i}k_{i}+\sum_{i}I_{i}=q \tag{5.1}\] where the operator carries a \(R\)-charge of the form (\(q,m=0\)). However, the explicit expression of these dual operators is left for future investigations. It would be indeed an interesting project to repeat the above analysis in the presence of flavours and construct the dual operator spectrum using a holographic set up. However, as we outline in the Appendix C, the corresponding massive graviton equations are quite involved and therefore special numeric techniques must be adopted to solve the spectrum. ###### Acknowledgements. Its a pleasure to thank G. Itsios for clarifying some issues. One of the authors DR is indebted to the authorities of IIT Roorkee for their unconditional support towards researches in basic sciences. DR also acknowledges The Royal Society, UK for financial support. ## Appendix A Type-IIA supergravity equations In this Appendix, we briefly discuss the Type-IIA supergravity solutions of [18]. The field strengths of massive Type-IIA supergravity can be expressed as \[H_{3}=dB_{2}\ ;\ F_{2}=dC_{1}+F_{0}B_{2}\ ;\ F_{4}=dC_{3}-H_{3}\wedge C_{1}+ \frac{1}{2}F_{0}B_{2}\wedge B_{2}, \tag{10}\] together with the Bianchi identities which take the following form \[dH_{3}=0\ ;\ dF_{2}=F_{0}H_{3}\ ;\ dF_{4}=H_{3}\wedge F_{2}. \tag{11}\] The corresponding equations of motion are given by \[0 = R_{MN}-\frac{1}{2}\partial_{M}\Phi\partial_{N}\Phi-\frac{1}{16}F _{0}^{2}e^{\frac{5\Phi}{2}}g_{MN}-\frac{1}{2}e^{\frac{3\Phi}{2}}\Bigg{(}F_{MP} F_{N}^{P}-\frac{1}{16}g_{MN}\big{(}F_{2}\big{)}^{2}\Bigg{)}\] \[-\frac{1}{12}e^{\frac{\Phi}{2}}\Bigg{(}F_{MPQR}F_{N}^{PQR}-\frac {3}{32}g_{MN}\big{(}F_{4}\big{)}^{2}\Bigg{)}-\frac{1}{4}e^{-\Phi}\Bigg{(}H_{ MPQ}H_{N}^{PQ}-\frac{1}{12}g_{MN}\big{(}H_{3}\big{)}^{2}\Bigg{)}\,\] \[0 = \nabla^{M}\nabla_{M}\Phi-\frac{5}{4}F_{0}^{2}e^{\frac{5\Phi}{2}} -\frac{3}{8}e^{\frac{3\Phi}{2}}\big{(}F_{2}\big{)}^{2}-\frac{1}{96}e^{\frac{ \Phi}{2}}\big{(}F_{4}\big{)}^{2}+\frac{1}{12}e^{-\Phi}\big{(}H_{3}\big{)}^{2}\,\] \[0 = \nabla^{M}\big{(}e^{-\Phi}H_{MNP}\big{)}-F_{0}e^{\frac{3\Phi}{2}} F_{NP}-\frac{1}{2}e^{\frac{\Phi}{2}}F_{NPQR}F^{QR}+\frac{1}{2.44!4}\epsilon_{M_{1} \ldots M_{8}NP}F^{M_{1}\ldots M_{4}}F^{M_{5}\ldots M_{8}}\,\] \[0 = \nabla^{M}\big{(}e^{\frac{3\Phi}{2}}F_{MN}\big{)}+\frac{1}{6}e^{ \frac{\Phi}{2}}F_{PQRN}H^{PQR}\,\] \[0 = \nabla^{M}\big{(}e^{\frac{\Phi}{2}}F_{MNPQ}\big{)}-\frac{1}{144} \epsilon_{M_{1}\ldots M_{7}NPQ}F^{M_{1}\ldots M_{4}}H^{M_{5}\ldots M_{8}}\, \tag{12}\] where \(R_{MN}\) is the Ricci tensor and \(\epsilon_{M_{1}\ldots M_{10}}\) is totally antisymmetric tensor. Here, \(\alpha_{p}\) is a \(p\)-form flux with \(\alpha_{p}^{2}=\alpha_{M_{1}\ldots M_{p}}\alpha^{M_{1}\ldots M_{p}}\). Taking the trace of the first equation in (12) and by plugging it into the second equation we have the simplified dilaton equation of the form \[\nabla^{M}\nabla_{M}\Phi-2R+g^{MN}\partial_{M}\Phi\partial_{N}\Phi+\frac{1}{6 }e^{-\Phi}\big{(}H_{3}\big{)}^{2}=0, \tag{13}\] where \(R\) is the Ricci scalar. 
In addition to the bosonic sector, the massive Type-IIA supergravity also contains the gravitino \(\Psi_{M}\) as well as the dilatino \(\Lambda\) sector along with 32-component Majorana spinors. Their corresponding equations of motion are expressed as \[0 = \Gamma^{MNP}D_{N}\Psi_{P}-\frac{1}{4}d\Phi\.\ \Gamma^{M}\Lambda+ \frac{1}{4}F_{0}e^{\frac{5\Phi}{4}}\Gamma^{MN}\Psi_{N}+\frac{5}{16}F_{0}e^{ \frac{5\Phi}{4}}\Gamma^{M}\Lambda \tag{14}\] \[-\frac{1}{8}e^{\frac{3\Phi}{4}}\Bigg{(}2\Gamma^{[M]}F_{2}\.\ \Gamma^{[N]} \Gamma_{11}\Psi_{N}-\frac{3}{2}F_{2}\.\ \Gamma^{M}\Gamma_{11}\Lambda\Bigg{)}\] \[-\frac{1}{8}e^{-\frac{\Phi}{2}}\Bigg{(}2\Gamma^{[M]}H_{3}\.\ \Gamma^{[N]} \Gamma_{11}\Psi_{N}-H_{3}\.\ \Gamma^{M}\Gamma_{11}\Lambda\Bigg{)}\] \[+\frac{1}{8}e^{\frac{\Phi}{4}}\Bigg{(}2\Gamma^{[M]}F_{4}\.\ \Gamma^{[N]} \Psi_{N}+\frac{1}{2}F_{4}\.\ \Gamma^{M}\Lambda\Bigg{)}, \tag{10}\] \[0 = \Gamma^{M}\nabla_{M}\Lambda-\frac{5}{16}e^{\frac{2\Phi}{4}}F_{2} \.\ \Gamma_{11}\Lambda+\frac{3}{8}e^{\frac{2\Phi}{4}}\Gamma^{M}F_{2}\.\ \Gamma_{11}\Psi_{M}\] (11) \[+\frac{1}{4}e^{-\frac{\Phi}{2}}\Gamma^{M}H_{3}\.\ \Gamma_{11} \Psi_{M}+\frac{3}{16}e^{\frac{\Phi}{4}}F_{4}\.\ \Lambda-\frac{1}{8}e^{\frac{\Phi}{4}} \Gamma^{M}F_{4}\.\ \Psi_{M}\] \[-\frac{1}{2}\Gamma^{M}d\Phi\.\ \Psi_{M}-\frac{21}{16}F_{0}e^{\frac{5 \Phi}{4}}\Lambda-\frac{5}{8}F_{0}e^{\frac{5\Phi}{4}}\Gamma^{M}\Psi_{M}.\] Here \(\nabla_{M}\) is the spin-covariant derivative that acts on spinors and \(.\) denotes Clifford product \(\alpha_{p}\.\ \Lambda=\frac{1}{pl}\alpha_{M_{1}..M_{p}}\Gamma^{M_{1}..M_{p}}\Lambda\). The matrices \(\Gamma_{M}\) generate the Clifford algebra \(CL(1,9)\) with \(\{\Gamma^{M},\Gamma^{N}\}=2g^{MN}\). The chirality operator is defined as \(\Gamma_{11}=\Gamma_{0}...\Gamma_{9}\). ## Appendix B Fluctuations of the Type IIA background Below, we consider the fluctuations of the supergravity fields as \[g_{MN} = \tilde{g}_{MN}+h_{MN}, \tag{12}\] \[\Phi = \bar{\Phi}+\varphi,\] (13) \[H_{3} = \tilde{H}_{3}+\delta H_{3},\] (14) \[F_{2} = \bar{F}_{2}+\delta F_{2},\] (15) \[F_{4} = \bar{F}_{4}+\delta F_{4}, \tag{16}\] which yields the \(R_{MN}\) equation at leading order as \[0 = \frac{1}{2}\tilde{\nabla}^{B}\tilde{\nabla}_{M}h_{BN}+\frac{1}{ 2}\tilde{\nabla}^{B}\tilde{\nabla}_{N}h_{BM}-\frac{1}{2}\tilde{\nabla}^{2}h_{ MN}-\frac{1}{2}\tilde{\nabla}_{N}\tilde{\nabla}_{M}\tilde{h}+4\tilde{\nabla}^{B}A \tilde{\nabla}_{M}h_{BN} \tag{17}\] \[+4\tilde{\nabla}^{B}A\tilde{\nabla}_{N}h_{BM}-h_{MN}\tilde{\nabla }^{2}A-8h_{MN}\big{(}\tilde{\nabla}A\big{)}^{2}-4\tilde{\nabla}^{P}A\tilde{ \nabla}_{P}h_{MN}-\frac{1}{2}\partial_{M}\varphi\partial_{N}\bar{\Phi}\] \[-\beta_{p}h_{MN}\tilde{\mathcal{A}}_{p}^{2}-\big{(}p-1\big{)}h_{ PK}\tilde{\mathcal{A}}_{MA_{1}..A_{p-2}}^{P}\tilde{\mathcal{A}}_{N}^{A_{1}..A_{p-2}K} \Bigg{]}-\frac{\varphi}{2}\sum_{p=2}^{4}\alpha_{p}\gamma_{p}e^{2(1-p)A+\alpha_ {p}\tilde{\Phi}}\big{(}\tilde{\mathcal{A}}_{p}^{2}\big{)}_{MN}\] \[+t\tilde{g}_{MN}.\] Here, we define the above function as \[t = \tilde{\nabla}^{B}h_{BP}\tilde{\nabla}^{P}A+h_{BP}\tilde{\nabla} ^{B}\tilde{\nabla}^{P}A-\frac{1}{2}\tilde{\nabla}_{\Lambda}\tilde{h}\tilde{ \nabla}^{\Lambda}A+8h_{PB}\tilde{\nabla}^{P}A\tilde{\nabla}^{B}A \tag{18}\] \[+\frac{\varphi}{2}\sum_{p=2}^{4}\alpha_{p}\beta_{p}\gamma_{p}e^{ 2(1-p)A+\alpha_{p}\bar{\Phi}}\tilde{\mathcal{A}}_{p}^{2}+\frac{1}{2}\sum_{p=2}^ {4}\beta_{p}\gamma_{p}e^{2(1-p)A+\alpha_{p}\bar{\Phi}}\] \[\left[2\big{(}\delta\mathcal{A}_{p}\big{)}\.\ \big{(}\tilde{ 
\mathcal{A}}_{p}\big{)}-ph_{PK}\tilde{\mathcal{A}}_{A_{1}..A_{p-1}}^{P}\tilde{ \mathcal{A}}_{N}^{A_{1}..A_{p-1}K}\right]\,\] where \(\tilde{h}=\tilde{g}^{MN}h_{MN}\). Adding flavour branes In this Appendix, we discuss effects of adding flavour D6 branes into the above picture and the associated massive graviton equations. As we show below, these equations are quite involved and almost impossible to solve analytically. The purpose of the section is to outline the basic structure of these equations which may be useful for future investigations. We begin by considering the single kink profile whose corresponding potential function is expressed as [13] \[V(\sigma\sim 0,\eta)=\eta N_{6}\ln\sigma+\frac{\eta N_{6}\sigma^{2}}{4} \Lambda_{k}(\eta,P)-\frac{\eta N_{6}\sigma^{2}}{4}\frac{P+1}{P^{2}-\eta^{2}}, \tag{103}\] where we define \[\Lambda_{k}(\eta,P) = \big{(}P+1\big{)}\sum_{m=1}^{k}\Bigg{[}\frac{1}{(2m+(2m-1)P)^{2}- \eta^{2}}-\frac{1}{(2m+(2m+1)P)^{2}-\eta^{2}}\Bigg{]} \tag{104}\] \[+\frac{P}{(2k+1)^{2}(1+P)^{2}-\eta^{2}}.\] The associated metric functions can be expressed as \[f_{1}(\sigma\sim 0,\eta) \sim \frac{4}{\sigma}\frac{1}{g(\eta)}, \tag{105}\] \[f_{2}(\sigma\sim 0,\eta) \sim \sigma g(\eta),\] (106) \[f_{3}(\sigma\sim 0,\eta) \sim \eta^{2}\sigma g(\eta),\] (107) \[f_{4}(\sigma\sim 0,\eta) \sim 0, \tag{108}\] along with the function \[g(\eta)=\frac{\sqrt{2}}{\sqrt{\eta}(P^{2}-\eta^{2})^{\frac{3}{2}}}\Bigg{[} \big{(}P^{2}-n^{2}\big{)}^{3}\big{(}\eta\partial_{\eta}^{2}\Lambda+2\partial_ {\eta}\Lambda\big{)}-2\eta\big{(}P+1\big{)}\big{(}\eta^{2}+3P^{2}\big{)}\Bigg{]} ^{\frac{1}{2}}. \tag{109}\] Using the above information (105)-(109), it is straightforward to compute \[\mathcal{L}^{(1)}F = \frac{1}{\eta^{2}\sigma^{3}g(\eta)^{3}}16\Bigg{[}g(\eta)\Big{(} \sigma\Big{(}\eta\Big{(}-2\eta\partial_{\sigma}\Phi\partial_{\sigma}F-2\Big{(} \eta\partial_{\eta}\Phi-1\Big{)}\partial_{\eta}F+\eta\partial_{\sigma}^{2}F \tag{110}\] \[+\eta\partial_{\eta}^{2}F\Big{)}+\partial_{\chi}F\Big{(}\cot\chi -2\partial_{\chi}\Phi\Big{)}+\csc^{2}\chi\Big{(}\partial_{\xi}^{2}F-2 \partial_{\xi}\Phi\partial_{\xi}F\Big{)}+\partial_{\chi}^{2}F\Big{)}\] \[-2\eta^{2}\partial_{\sigma}F\Big{)}-2\eta^{2}\sigma\partial_{ \eta}g(\eta)\partial_{\eta}F\Bigg{]}.\] Next, we consider the Uluru profile, for which the potential function is expressed as [13] \[V(\sigma\sim 0,\eta)=-\eta N_{6}\ln\sigma+\frac{\eta N_{6}\sigma^{2}}{4} \Lambda_{u}(\eta,K,P)+\frac{\eta N_{6}\sigma^{2}}{4(P^{2}-\eta^{2})}, \tag{111}\] where we define \[\Lambda_{u}(\eta,K,P)=\sum_{n=1}^{u}(-1)^{n+1}\Bigg{[}\frac{1}{(nK+(2n-1)P)^{2}- \eta^{2}}-\frac{1}{(nK+(2n+1)P)^{2}-\eta^{2}}\Bigg{]}. \tag{111}\] This results in a similar equation (110) together with a function \[g(\eta)=\frac{\sqrt{2}}{\sqrt{\eta}(\eta^{2}-P^{2})^{\frac{3}{2}}}\Bigg{[}2 \eta^{3}+\big{(}P^{2}-n^{2}\big{)}^{3}\big{(}\eta\partial_{\eta}^{2}\Lambda_{u }+2\partial_{\eta}\Lambda_{u}\big{)}+6\eta P^{2}\Bigg{]}^{\frac{1}{2}}. \tag{112}\]
2305.11971
Condition Number of Random Tridiagonal Toeplitz Matrix
In this manuscript we consider the eigenvalues $\lambda_j$ of a random tridiagonal Toeplitz matrix $T$. We study the asymptotic behavior of the joint distribution of $(|\lambda|_{\min},|\lambda|_{\max})$. From this, we obtain the asymptotic distribution of the condition number when $T$ is symmetric. In the non-symmetric case, we characterize the singularity of the matrix and give good estimates of its condition number. It is remarkable that in both cases only two or three random variables need to be considered; this simplicity is only apparent, however, since the structure of the tridiagonal Toeplitz matrix imposes non-trivial relations between them and forces the asymptotic behavior to be completely determined by these input random variables. We also remark that our results hold under mild conditions on the random variables.
Paulo Manrique-Mirón
2023-05-19T19:43:23Z
http://arxiv.org/abs/2305.11971v1
# Condition number of random tridiagonal Toeplitz matrix ###### Abstract. In this manuscript we consider the eigenvalues \(\lambda_{j}\) of a random tridiagonal Toeplitz matrix \(T\). We study the asymptotic behavior of the joint distribution of \(\left(\left|\lambda\right|_{\min},\left|\lambda\right|_{\max}\right)\). From this, we obtain the asymptotic distribution of the condition number when \(T\) is symmetric. In the non-symmetric case, we characterize the singularity of the matrix and give good estimates of its condition number. It is remarkable that in both cases only two or three random variables need to be considered; this simplicity is only apparent, however, since the structure of the tridiagonal Toeplitz matrix imposes non-trivial relations between them and forces the asymptotic behavior to be completely determined by these input random variables. We also remark that our results hold under mild conditions on the random variables. ### Introduction The tridiagonal Toeplitz matrix is a common and useful structured matrix which appears in many theoretical and numerical problems; see, for example, [3] and the references therein. Thanks to its structure it is possible to obtain explicit formulas for quantities of interest such as eigenvalues, eigenvectors, and the determinant. When the matrix is symmetric, it is possible to give an explicit expression for its condition number. The condition number of a matrix is a useful quantity for understanding the numerical stability of an algorithm that uses the matrix [5], [6]. This manuscript is dedicated to analyzing the condition number of a random tridiagonal Toeplitz matrix. A tridiagonal Toeplitz matrix is a matrix of the form \[\left[\begin{array}{ccccccc}\delta&\tau&&&&&\\ \sigma&\delta&\tau&&&0&\\ &\sigma&\delta&\tau&&&\\ &&\ddots&\ddots&\ddots&&\\ &&&\sigma&\delta&\tau&\\ 0&&&&\sigma&\delta&\tau\\ &&&&&\sigma&\delta\end{array}\right]\] where \(\delta,\tau,\sigma\in\mathbb{C}\). When \(\delta=X\), \(\tau=Y\), and \(\sigma=Z\), where \(X,Y,Z\) are random variables, we say the tridiagonal Toeplitz matrix is random. Let \(\mathcal{A}\in\mathbb{C}^{n\times m}\) be a matrix of dimension \(n\times m\). We denote the singular values of \(\mathcal{A}\) in non-decreasing order by \(0\leq\sigma_{1}^{(n,m)}(\mathcal{A})\leq\cdots\leq\sigma_{n}^{(n,m)}(\mathcal{A})\). That is to say, they are the square roots of the eigenvalues of the matrix \(\mathcal{A}^{*}\mathcal{A}\), where \(\mathcal{A}^{*}\) denotes the conjugate transpose of \(\mathcal{A}\). The condition number of \(\mathcal{A}\), \(\kappa(\mathcal{A})\), is defined as \[\kappa(\mathcal{A}):=\frac{\sigma_{n}^{(n,m)}(\mathcal{A})}{\sigma_{1}^{(n,m)}(\mathcal{A})}\quad\text{ whenever }\quad\sigma_{1}^{(n,m)}(\mathcal{A})>0. \tag{1.1}\] The condition number is usually hard to compute. When one considers a random matrix whose entries are independent Gaussian random variables (rv), the matrix is invertible with probability one and the limit distribution of its condition number is well understood [1], [2], [4]. In this manuscript we study the condition number when \(X,Y,Z\) are real rv. Two remarkable aspects of our result are that a random tridiagonal Toeplitz matrix can be singular with positive probability for either discrete or continuous random entries, and that it is not necessary to impose strong conditions on \(X,Y,Z\). This manuscript is organized as follows. Section 1.2 gives the statement of our main result.
In Section 1.3 we develop the proof of our main result, and in Section 1.4 we give some examples of random tridiagonal Toeplitz matrices with Rademacher, Cauchy, and Gaussian random entries. Before starting, we use \(\left|\cdot\right|\) as the norm of a complex number or the absolute value of a real number, depending on the context. ### Main Result **Theorem 1.1**.: _Let \(\mathcal{T}_{n}^{(3)}\) be a random symmetric tridiagonal Toeplitz matrix of order \(n\) associated with the rv \(X,Y\). For \(x,y\in\mathbb{R}\), the joint distribution of the singular values \(\sigma_{\min}\left(\mathcal{T}_{n}^{(3)}\right)\), \(\sigma_{\max}\left(\mathcal{T}_{n}^{(3)}\right)\) satisfies:_ \[\lim_{n\to\infty}\mathbb{P}\left(\sigma_{\min}\left(\mathcal{T}_{n}^{(3)}\right)\leq x,\sigma_{\max}\left(\mathcal{T}_{n}^{(3)}\right)\leq y\right)=\] \[\mathbb{P}\left(M\leq y\right)-\mathbb{P}\left(x<0\leq M\leq y,-\frac{X}{\left|Y\right|}\in[-2,2]\right)-\mathbb{P}\left(x<m\leq M\leq y,-\frac{X}{\left|Y\right|}\not\in[-2,2]\right),\] _where \(M=\max\left\{\left|X-2\left|Y\right|\right|,\left|X+2\left|Y\right|\right|\right\}\), \(m=\min\left\{\left|X-2\left|Y\right|\right|,\left|X+2\left|Y\right|\right|\right\}\)._ _Let \(T_{n}^{(3)}\) be a random (non-symmetric) tridiagonal Toeplitz matrix of order \(n\) associated with the rv \(X,Y,Z\). For \(x,y\in\mathbb{R}\), the joint distribution of the moduli of the eigenvalues \(\left|\lambda\right|_{\min}\left(T_{n}^{(3)}\right)\), \(\left|\lambda\right|_{\max}\left(T_{n}^{(3)}\right)\) satisfies:_ \[\lim_{n\to\infty}\mathbb{P}\left(\left|\lambda\right|_{\min}\left(T_{n}^{(3)}\right)\leq x,\left|\lambda\right|_{\max}\left(T_{n}^{(3)}\right)\leq y\right)=\] \[\mathbb{P}\left(\mathrm{M}\leq y\right)-\mathbb{P}\left(YZ=0,x<\left|X\right|\leq y\right)-\mathbb{P}\left(YZ>0,-\frac{X}{2\sqrt{YZ}}\in[-1,1],x<0\leq\mathrm{M}\leq y\right)\] \[-\mathbb{P}\left(YZ>0,-\frac{X}{2\sqrt{YZ}}\not\in[-1,1],x<\mu\leq\mathrm{M}\leq y\right)-\mathbb{P}\left(YZ<0,x<\left|X\right|\leq\mathrm{M}\leq y\right),\] _where \(\mu:=\min\left\{\left|X-2\sqrt{YZ}\right|,\left|X+2\sqrt{YZ}\right|\right\}\), \(\mathrm{M}:=\max\left\{\left|X-2\sqrt{YZ}\right|,\left|X+2\sqrt{YZ}\right|\right\}\)._ Note that the first part of Theorem 1.1 is a consequence of the second part, taking \(Z\) equal to \(Y\). However, the proof of the second part is based on the symmetric case, where it is possible to work directly with singular values, whereas in the non-symmetric case we can only consider the eigenvalues. Nevertheless, if \(A\) is an \(n\times n\) matrix, then we can see directly that \[\sigma_{\min}(A)\leq\left|\lambda_{j}(A)\right|\leq\sigma_{\max}(A),\ \ j=1,\ldots,n,\] where \(\lambda_{j}(A)\) is an eigenvalue of \(A\). Thus, the second part gives a lower bound for the distribution of the condition number of \(T_{n}^{(3)}\), i.e., \[\mathbb{P}\left(\frac{\left|\lambda\right|_{\max}\left(T_{n}^{(3)}\right)}{\left|\lambda\right|_{\min}\left(T_{n}^{(3)}\right)}\geq z\right)\leq\mathbb{P}\left(\frac{\sigma_{\max}\left(T_{n}^{(3)}\right)}{\sigma_{\min}\left(T_{n}^{(3)}\right)}\geq z\right), \tag{1.2}\] which is useful when the quotient on the left-hand side is large with positive probability.
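Before turning to the proof, a concrete numerical illustration may be helpful. The following Python sketch (an addition, not part of the argument) builds a random tridiagonal Toeplitz matrix, compares its computed spectrum with the closed-form expression \(\lambda_{j}=X+2\sqrt{YZ}\cos\left(\frac{\pi j}{n+1}\right)\) recalled in the proof below, and checks the bound behind (1.2):

```python
# A small numerical illustration (an addition, not part of the argument).
# It builds a random tridiagonal Toeplitz matrix with X on the diagonal, Y on
# the superdiagonal and Z on the subdiagonal, compares the computed spectrum
# with lambda_j = X + 2 sqrt(YZ) cos(pi j/(n+1)), and checks the bound (1.2).
# n is kept moderate: for large n the non-symmetric matrix is strongly
# non-normal and its numerically computed eigenvalues become very sensitive.
import numpy as np

rng = np.random.default_rng(0)
n = 12
X, Y, Z = rng.standard_normal(3)

T = (np.diag(np.full(n, X))
     + np.diag(np.full(n - 1, Y), 1)
     + np.diag(np.full(n - 1, Z), -1))

j = np.arange(1, n + 1)
lam_formula = X + 2.0 * np.sqrt(complex(Y * Z)) * np.cos(np.pi * j / (n + 1))
lam_numeric = np.linalg.eigvals(T)

err = np.max(np.abs(np.sort(np.abs(lam_formula)) - np.sort(np.abs(lam_numeric))))
print("max discrepancy of the sorted moduli:", err)

s = np.linalg.svd(T, compute_uv=False)          # singular values
mods = np.abs(lam_numeric)
print("sigma_min <= |lambda|_min:", s.min() <= mods.min() + 1e-12)
print("|lambda|_max <= sigma_max:", mods.max() <= s.max() + 1e-12)
print("|lambda|_max/|lambda|_min =", mods.max() / mods.min())
print("kappa = sigma_max/sigma_min =", s.max() / s.min())
```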
Moreover, the singularity of \(T_{n}^{(3)}\) is good understanding due to \[\mathbb{P}\left(\sigma_{\min}\left(T_{n}^{(3)}\right)=0\right)=\mathbb{P}\left( \left|\lambda\right|_{\min}\left(T_{n}^{(3)}\right)=0\right).\] In Section 1.4, we can observe that \(T_{n}^{(3)}\) has a positive probability of being singular when \(n\) is sufficiently larger for some discrete or continuous distribution, which is different from the case when a random Toeplitz matrix with its all diagonal being continuous and independent rv is invertible with probability one. In the case that \(Y\) and \(Z\) are discrete rv with \(\left|Y\right|=\left|Z\right|\) as in the case they follow a Rademacher distribution, \(T_{n}^{(3)}\) is a normal matrix [3, Theorem 3.1] and 1.2 is an equality. Also, it is necessary to notice that the random variables in Theorem 1.1 do not have any strong restriction on their joint distribution, i.e., they can be dependent rv or do not have any moment. Finally, the structure of the tridiagonal Toeplitz matrix induces that the asymptotic behavior of the condition number depends entirely on the input random variables. ### Proof We start with a useful lemma, which is the main tool to prove our result. **Lemma 1.2**.: _Let \(X,Y\) be two non-degenerated random variables and we define from these \(m:=\min\left\{\left|X-Y\right|,\left|X+Y\right|\right\}\) and \(M:=\max\left\{\left|X-Y\right|,\left|X+Y\right|\right\}\). For \(x,y\in\mathbb{R}\),_ \[\mathbb{P}\left(x<\left|X+\alpha Y\right|\leq y\text{ for all } \alpha\in[-1,1]\right)\] \[=\mathbb{P}\left(x<0\leq M\leq y,-\frac{X}{Y}\in[-1,1]\right)+ \mathbb{P}\left(x<m\leq M\leq y,-\frac{X}{Y}\not\in[-1,1]\right).\] Proof.: Let \(f(\alpha)=X+\alpha Y\) for \(\alpha\in[-1,1]\). Note that \(f(\alpha)\) is a line whose only zero is at \(-X/Y\), assuming that \(Y\neq 0\). If \(\mathcal{A}:=\{x<\left|X+\alpha Y\right|\leq y\text{ for all }\alpha\in[-1,1]\}\), we have \[\mathbb{P}\left(\mathcal{A}\right) =\mathbb{P}\left(\mathcal{A},-\frac{X}{Y}\in[-1,1]\right)+ \mathbb{P}\left(\mathcal{A},-\frac{X}{Y}\not\in[-1,1]\right)\] \[=\mathbb{P}\left(x<0\leq M\leq y,-\frac{X}{Y}\in[-1,1]\right)+ \mathbb{P}\left(x<m\leq M\leq y,-\frac{X}{Y}\not\in[-1,1]\right).\] Let \(A_{n}=\{\alpha_{1},\ldots,\alpha_{n}\}\subset[-1,1]\) for each \(n\in\mathbb{N}\) such that \(\lim_{n\to\infty}A_{n}\) is a dense set of \([-1,1]\) with \(d_{n}:=\max\left\{\left|\alpha_{j}-\alpha_{j}\right|:\text{ for all }i\neq j\right\}\) such that \(\{d_{n}\}\) is a decreasing sequence. For \(x,y\in\mathbb{R}\) and \(n\in\mathbb{N}\), we define the event \(\mathcal{A}_{n}\) as \[\mathcal{A}_{n}:=\left\{x<\left|X+\alpha Y\right|\leq y\text{ for all }\alpha\in A_{n}\right\}.\] Considering the event \(\mathcal{A}\) as in the proof of Lemma 1.2. Note the event \(\mathcal{A}\) implies the event \(\mathcal{A}_{n}\) for all \(n\in\mathbb{N}\). From this observation, we have \[\left|\mathbb{P}\left(\mathcal{A}\right)-\mathbb{P}\left(\mathcal{A}_{n} \right)\right|\leq\mathbb{P}\left(\mathcal{A}_{n}\setminus\mathcal{A}\right).\] The event \(\mathcal{A}_{n}\setminus\mathcal{A}\) implies that two possible scenarios: * \(\mathcal{E}_{n}^{(1)}\): there exists an interval \(I_{n}\subset[-1,1]\) such that there is \(\beta_{n}^{*}\in I_{n}\) with \(\left|X+\beta_{n}^{*}Y\right|\leq x\). Observe that \(\left|f(\alpha)\right|=\left|X+\alpha Y\right|\) for \(\alpha\in[-1,1]\) takes its absolute minimum value at \(-1\), \(1\), or \(-X/Y\), then \(I_{n}\) should contain one of them. 
Additionally, we have that one extreme of \(I_{n}\) should be open, its length \(\left|I_{n}\right|\) goes to zero as \(n\to\infty\), and \(I_{n+1}\subset I_{n}\) for all \(n\in\mathbb{N}\). * \(\mathcal{E}_{n}^{(2)}\): there exists interval \(J_{n}\subset[-1,1]\) such that there is \(\beta_{n}^{**}\in J_{n}\) satisfies \(\left|X+\beta_{n}^{**}Y\right|>y\). As \(\left|f(\alpha)\right|\) takes its absolute maximum value at \(-1\) or \(1\), \(J_{n}\) should contain one of them. Additionally, one extreme of \(J_{n}\) is open, \(\left|J_{n}\right|\) goes to zero as \(n\to\infty\), and \(J_{n+1}\subset J_{n}\) for all \(n\in\mathbb{N}\). From the above observations, we have \[\lim_{n\to\infty}\left|\mathbb{P}\left(\mathcal{A}\right)-\mathbb{ P}\left(\mathcal{A}_{n}\right)\right| \leq\lim_{n\to\infty}\mathbb{P}\left(\mathcal{A}_{n}\setminus \mathcal{A}\right)\] \[\leq\lim_{n\to\infty}\left(\mathbb{P}\left(\mathcal{E}_{n}^{(1)} \right)+\mathbb{P}\left(\mathcal{E}_{n}^{(2)}\right)\right)=0.\] Thus, we can conclude that \[\lim_{n\to\infty}\mathbb{P}\left(\mathcal{A}_{n}\right)=\mathbb{P}\left( \mathcal{A}\right). \tag{1.3}\] \(\square\)\(\square\) The singular values of a random symmetric tridiagonal Toeplitz matrix \(\mathcal{T}_{n}^{(3)}\) of order \(n\) are \(\left|X+2\left|Y\right|\cos\left(\frac{\pi}{n+1}j\right)\right|\) for \(j=1,2,\ldots,n\), see [3, Section 2]. Thus \[\sigma_{\min}\left(\mathcal{T}_{n}^{(3)}\right) =\min_{j=1,2,\ldots,n}\left\{\left|X+2\left|Y\right|\cos\left( \frac{\pi}{n+1}j\right)\right|\right\},\] \[\sigma_{\max}\left(\mathcal{T}_{n}^{(3)}\right) =\max_{j=1,2,\ldots,n}\left\{\left|X+2\left|Y\right|\cos\left( \frac{\pi}{n+1}j\right)\right|\right\}.\] Note \(\left\{\cos\left(\frac{\pi}{n+1}j\right):j=1,2,\ldots,n\right\}\) approaches to a dense set in \([-1,1]\) as \(n\) goes to infinity. Taking \(\alpha_{j}=\cos\left(\frac{\pi}{n+1}j\right)\) for \(j=1,2,\ldots,n\), and \(x,y\in\mathbb{R}\), we have from the previous discussion \[\mathbb{P}\left(\sigma_{\min}\left(\mathcal{T}_{n}^{(3)}\right) \leq x,\sigma_{\max}\left(\mathcal{T}_{n}^{(3)}\right)\leq y\right)=\] \[\quad\mathbb{P}\left(\left|X+\alpha_{j}(2\left|Y\right|)\right| \leq y\text{ for all }j=1,\ldots,n\right)-\mathbb{P}\left(x<\left|X+\alpha_{j}(2\left|Y \right|)\right|\leq y\text{ for all }j=1,\ldots,n\right)\] \[\quad\longrightarrow\] \[\quad\mathbb{P}\left(\left|X+\alpha(2\left|Y\right|)\right|\leq y \text{ for all }\alpha\in[-1,1]\right)-\mathbb{P}\left(x<\left|X+\alpha(2\left|Y\right|) \right|\leq y\text{ for all }\alpha\in[-1,1]\right)\] as \(n\) goes to infinity. Thus, from Lemma 1.2 is obtained \[\lim_{n\to\infty}\mathbb{P}\left(\sigma_{\min}\left(\mathcal{T}_{n}^{(3)} \right)\leq x,\sigma_{\max}\left(\mathcal{T}_{n}^{(3)}\right)\leq y\right)=\] \[\quad\mathbb{P}\left(M\leq y\right)-\mathbb{P}\left(x<0\leq M\leq y,-\frac{X}{\left|Y\right|}\in[-2,2]\right)-\mathbb{P}\left(x<m\leq M\leq y,- \frac{X}{\left|Y\right|}\not\in[-2,2]\right),\] where \(M=\max\left\{\left|X-2\left|Y\right|\right|,\left|X+2\left|Y\right|\right|\right\}\), \(m=\min\left\{\left|X-2\left|Y\right|\right|,\left|X+2\left|Y\right|\right|\right\}\). 
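A quick numerical illustration of this symmetric-case limit (an addition; the seed and matrix size are arbitrary): for large \(n\), \(\sigma_{\max}\left(\mathcal{T}_{n}^{(3)}\right)\) approaches \(M\), while \(\sigma_{\min}\left(\mathcal{T}_{n}^{(3)}\right)\) approaches \(0\) when \(-X/\left|Y\right|\in[-2,2]\) and \(m\) otherwise.

```python
# A quick numerical illustration (an addition) of the symmetric-case limit,
# using the explicit singular values |X + 2|Y| cos(pi j/(n+1))| quoted above.
import numpy as np

rng = np.random.default_rng(2023)
n = 4000
theta = np.pi * np.arange(1, n + 1) / (n + 1)
for _ in range(3):
    X, Y = rng.standard_normal(2)
    sv = np.abs(X + 2.0 * np.abs(Y) * np.cos(theta))
    m, M = sorted([abs(X - 2 * abs(Y)), abs(X + 2 * abs(Y))])
    limit_min = 0.0 if abs(X) <= 2 * abs(Y) else m   # -X/|Y| in [-2,2] or not
    print(f"sigma_min = {sv.min():.4f} (limit {limit_min:.4f}),  "
          f"sigma_max = {sv.max():.4f} (limit {M:.4f})")
```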
By Continuous Mapping Theorem, the asymptotic distribution of \(\kappa_{n}=\frac{\sigma_{\max}\left(\mathcal{T}_{n}^{(3)}\right)}{\sigma_{\min }\left(\mathcal{T}_{n}^{(3)}\right)}\) converges to \(\kappa=\frac{V}{W}\), where \((W,V)\) is distributed as \[\mathbb{P}\left(W\leq x,V\leq y\right)=\] \[\quad\mathbb{P}\left(M\leq y\right)-\mathbb{P}\left(x<0\leq M \leq y,-\frac{X}{\left|Y\right|}\in[-2,2]\right)-\mathbb{P}\left(x<m\leq M \leq y,-\frac{X}{\left|Y\right|}\not\in[-2,2]\right), \tag{1.4}\] and from this, we follow that \[\mathbb{P}\left(V\leq y\right) =\mathbb{P}\left(M\leq y\right), \tag{1.6}\] \[\mathbb{P}\left(W\leq x\right) =1-\mathbb{P}\left(x<0,-\frac{X}{|Y|}\in[-2,2]\right)-\mathbb{P} \left(x<m,-\frac{X}{|Y|}\not\in[-2,2]\right). \tag{1.5}\] The eigenvalues of a random (non-symmetric) tridiagonal Toeplitz matrix \(T_{n}^{(3)}\) of order \(n\) are \(\lambda_{j}=X+2\sqrt{YZ}\cos\left(\frac{\pi j}{n+1}\right)\) for \(j=1,2,\ldots,n\), see [3, Section 2]. Note that \(\lambda_{j}\) can be complex number. Thus \[\left|\lambda\right|_{\min}\left(T_{n}^{(3)}\right) =\min_{j=1,2,\ldots,n}\left\{\left|X+2\sqrt{YZ}\cos\left(\frac{ \pi}{n+1}j\right)\right|\right\},\] \[\left|\lambda\right|_{\max}\left(T_{n}^{(3)}\right) =\max_{j=1,2,\ldots,n}\left\{\left|X+2\sqrt{YZ}\cos\left(\frac{ \pi}{n+1}j\right)\right|\right\}.\] In the following, we assume that \(X,Y,Z\in\mathbb{R}\) with probability \(1\). Using similar arguments from the symmetric case, we follow that \[\lim_{n\to\infty}\mathbb{P}\left(\left|\lambda\right|_{\min}\left(T_{n}^{(3) }\right)\leq x,\left|\lambda\right|_{\max}\left(T_{n}^{(3)}\right)\leq y \right)=\mathbb{P}\left(\mathcal{W}\leq x,\mathcal{V}\leq y\right),\] where \((\mathcal{W},\mathcal{V})\) is distributed as \[\mathbb{P}\left(\mathcal{W}\leq x,\mathcal{V}\leq y\right)=\] \[-\mathbb{P}\left(YZ>0,-\frac{X}{2\sqrt{YZ}}\not\in[-1,1],x<\mu \leq\mathrm{M}\leq y\right)-\mathbb{P}\left(YZ<0,x<\left|X\right|\leq\mathrm{ M}\leq y\right), \tag{1.7}\] where \(\mu:=\min\left\{\left|X-2\sqrt{YZ}\right|,\left|X+2\sqrt{YZ}\right|\right\}\), \(\mathrm{M}:=\max\left\{\left|X-2\sqrt{YZ}\right|,\left|X+2\sqrt{YZ}\right|\right\}\). Additionally, \[\mathbb{P}\left(\mathcal{V}\leq y\right) =\mathbb{P}\left(\mathrm{M}\leq y\right), \tag{1.9}\] \[\mathbb{P}\left(\mathcal{W}\leq x\right) =1-\mathbb{P}\left(YZ=0,x<\left|X\right|\right)-\mathbb{P}\left( YZ>0,-\frac{X}{2\sqrt{YZ}}\in[-1,1],x<0\right)\] (1.10) \[\qquad-\mathbb{P}\left(YZ>0,-\frac{X}{2\sqrt{YZ}}\not\in[-1,1],x< \mu\right)-\mathbb{P}\left(YZ<0,x<\left|X\right|\right), \tag{1.8}\] ### Examples In the following we give some particular cases of our main result. We assume that the random variables \(X,Y,Z\) are independent. 1. \(X,Y\) have Rademacher distribution. In the symmetric case, by a direct computation is obtained \(m=1\) and \(M=3\) with probability \(1\), and \(\lim_{n\to\infty}\mathbb{P}\left(\sigma_{\min}\left(\mathcal{T}_{n}^{(3)} \right)=0\right)=1\), i.e., \(\kappa=+\infty\) with probability \(1\). In other words, a random triangular symmetric Toeplitz matrix is ill-conditioning with high probability when \(n\) is larger. 2. \(X,Y\) have Standard Cauchy distribution. 
In the symmetric case, we have (1.11) \[\begin{split}\mathbb{P}\left(V\leq y\right)&=\mathbb{P} \left(M\leq y\right)=2\int_{0}^{\infty}\mathbb{P}\left(|X-2v|\leq y,|X+2v|\leq y \right)\frac{1}{\pi(1+v^{2})}\mathrm{d}v\\ &=\frac{4}{\pi^{2}}\mathds{1}_{y\geq 0}\int_{0}^{y/2}\int_{0}^{y-2v }\frac{1}{(1+t^{2})(1+v^{2})}\mathrm{d}t\mathrm{d}v.\end{split}\] Note (1.12) \[c_{2}:=\mathbb{P}\left(W=0\right)=\frac{4}{\pi^{2}}\int_{0}^{\infty}\frac{ \arctan(2v)}{1+v^{2}}\mathrm{d}v\approx 0.636834,\] i.e., a random tridiagonal Toeplitz matrix with Standard Cauchy distribution is singular with high probability when its dimension is larger. By Leibniz integral rule we have that density of \(W\mathds{1}_{W>0}\) is (1.13) \[f_{W\mathds{1}_{W>0}}(w)=\frac{4}{\pi^{2}(1-c_{2})}\mathds{1}_{w>0}\int_{0}^{ \infty}\frac{1}{(1+v^{2})(1+(w+2v)^{2})}\mathrm{d}v\] Now, for \(z\geq 0\), the distribution of \(\kappa\) when the matrix is not singular is \[\begin{split}\mathbb{P}&\left(\frac{V}{W\mathds{1} _{W>0}}\leq z\right)=\int_{-\infty}^{\infty}\mathbb{P}\left(V\leq zw|W \mathds{1}_{W>0}=w\right)f_{W\mathds{1}_{W>0}}(w)dw\\ &=\frac{16}{\pi^{4}(1-c_{2})}\int_{0}^{\infty}\left[\int_{0}^{ zw/2}\int_{0}^{zw-2v}\frac{1}{(1+t^{2})(1+u^{2})}\mathrm{d}t\mathrm{d}u \right]\left[\int_{0}^{\infty}\frac{1}{(1+v^{2})(1+(w+2v)^{2})}\mathrm{d}v \right]\mathrm{d}w.\end{split}\] 3. \(X,Y\) have Standard Normal distribution. In the symmetric case we have the following. Let \(\Phi(t)\) and \(\phi(t)\) be the distribution and density, respectively, of Standard Normal rv. Then (1.14) \[\begin{split}\mathbb{P}\left(V\leq y\right)&=\mathbb{ P}\left(M\leq y\right)=2\int_{0}^{\infty}\mathbb{P}\left(|X-2v|\leq y,|X+2v|\leq y \right)\phi(v)\mathrm{d}v\\ &=2\mathds{1}_{y\geq 0}\int_{0}^{\infty}\mathds{1}_{y\geq 2v} \left[\Phi(y-2v)-\Phi(-y+2v)\right]\phi(v)\mathrm{d}v\\ &=2\mathds{1}_{y\geq 0}\int_{0}^{y/2}\left[\Phi(y-2v)-\Phi(-y+2v) \right]\phi(v)\mathrm{d}v.\end{split}\] Note (1.15) \[c_{0}:=\mathbb{P}\left(W=0\right)=4\int_{0}^{\infty}\left[\Phi(2v)-\Phi(0) \right]\phi(v)\mathrm{d}v=\frac{\pi+\tan^{-1}\left(\frac{24}{7}\right)}{2\pi }\approx 0.704832,\] i.e., \(\kappa=+\infty\) with probability bigger than \(0.7\), in other words, a random tridiagonal symmetric Toeplitz matrix with Standard Normal distribution is ill-conditioning with high probability. By Leibniz integral rule we have that density of \(W\mathds{1}_{W>0}\) is (1.16) \[f_{W\mathds{1}_{W>0}}(w)=\frac{4}{1-c_{0}}\mathds{1}_{w>0}\int_{0}^{\infty} \phi(w+2v)\phi(v)\mathrm{d}v.\] Now, for \(z\geq 0\), the distribution of \(\kappa\) when it is finite is \[\mathbb{P}\left(\frac{V}{W\mathds{1}_{W>0}}\leq z\right)=\int_{- \infty}^{\infty}\mathbb{P}\left(V\leq zw|W\mathds{1}_{W>0}=w\right)f_{W\mathds{1 }_{W>0}}(w)dw\] \[\quad=\int_{-\infty}^{\infty}\left[2\mathds{1}_{zw\geq 0}\int_{0}^{ zw/2}\left[\Phi(zw-2t)-\Phi(-zw+2t)\right]\phi(t)\mathrm{d}t\right]\left[\frac{4}{1-c_ {0}}\mathds{1}_{w>0}\int_{0}^{\infty}\phi(w+2v)\phi(v)\mathrm{d}v\right]dw\] \[\quad=\frac{8}{1-c_{0}}\int_{0}^{\infty}\left[\int_{0}^{zw/2} \left[\Phi(zw-2t)-\Phi(-zw+2t)\right]\phi(t)\mathrm{d}t\right]\left[\int_{0}^ {\infty}\phi(w+v)\phi(v)dv\right]dw.\] 4. \(X,Y,Z\) have Rademacher distribution. In the non-symmetric case, we have that \(\mu\in\left\{1,\sqrt{5}\right\}\) and \(\mathrm{M}\in\left\{3,\sqrt{5}\right\}\) with probability \(1\). 
Moreover, \[\mathbb{P}\left(\mathcal{W}=0\right)=1-\mathbb{P}\left(YZ>0,\frac{X}{\sqrt{YZ }}\not\in[-2,2]\right)-\mathbb{P}\left(YZ<0\right)=\frac{1}{2},\] i.e., a random non-symmetric tridiagonal Toeplitz matrix \(T_{n}^{(3)}\) of order \(n\) is singular with probability approximately \(0.5\) when \(n\) is larger, in other words, it is ill-conditionally half the time. 5. \(X,Y,Z\) have Standard Normal distribution. In the non-symmetric case we have the following. The distribution of \(\mathcal{V}\) is (1.17) \[\mathbb{P}\left(\mathcal{V}\leq y\right) =\mathbb{P}\left(\mathrm{M}\leq y\right)=2\int_{0}^{\infty} \mathbb{P}\left(\left|v-2\sqrt{YZ}\right|\leq y,\left|v+2\sqrt{YZ}\right|\leq y \right)\phi(v)\mathrm{d}v\] \[=2\mathds{1}_{y\geq 0}\int_{0}^{y}\left[\int_{0}^{\left(\frac{y-v }{2}\right)^{2}}f_{YZ}(t)\mathrm{d}t+\int_{0}^{\left(y^{2}-v^{2}\right)/4}f_{ YZ}(t)\mathrm{d}t\right]\phi(v)\mathrm{d}v,\] where (1.18) \[f_{YZ}(t)=\frac{1}{\pi}\int_{0}^{\infty}\frac{1}{s}\exp\left(-\frac{1}{2}\left( s^{2}+\frac{t^{2}}{s^{2}}\right)\right)\mathrm{d}s.\] Note (1.19) \[c_{1}:=\mathbb{P}\left(W=0\right)=2\int_{0}^{\infty}\Phi(2\sqrt{t})f_{YZ}(t) \mathrm{d}t-\frac{1}{2}\approx 0.351488.\] i.e, a random tridiagonal matrix with Standard Normal distribution is ill-conditioning with probability approximately \(0.351488\) when its order is larger. By Leibniz integral rule we have that density of \(\mathcal{W}\mathds{1}_{\mathcal{W}>0}\) is (1.20) \[f_{\mathcal{W}\mathds{1}_{\mathcal{W}>0}}(w)=\frac{1}{1-c_{1}}\mathds{1}_{w>0} \left[\phi(w)+2\int_{0}^{\infty}\phi(w+2\sqrt{t})f_{YZ}(t)\mathrm{d}t\right].\] Now, for \(z\geq 0\), the distribution of \(\mathcal{V}/\mathcal{W}\) when it is finite is \[\mathbb{P}\left(\frac{\mathcal{V}}{\mathcal{W}\mathds{1}_{\mathcal{W}>0}}\leq z \right)=\int_{-\infty}^{\infty}\mathbb{P}\left(\mathcal{V}\leq zw|\mathcal{W} \mathds{1}_{\mathcal{W}>0}=w\right)f_{\mathcal{W}\mathds{1}_{\mathcal{W}>0}}(w )\mathrm{d}w,\] where it is needed to replace the expressions 1.17, 1.20 appropriately.
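The constants appearing in these examples are easy to confirm by simulation: evaluating (1.6) and (1.10) at \(x=0\) shows that \(c_{0}\) and \(c_{2}\) equal \(\mathbb{P}\left(\left|X\right|\leq 2\left|Y\right|\right)\) for Gaussian and Cauchy entries respectively, while \(c_{1}=\mathbb{P}\left(YZ>0,\left|X\right|\leq 2\sqrt{YZ}\right)\). A rough Monte Carlo sketch (an addition; sample size and seed are arbitrary) reproduces the numerical values quoted in (1.15), (1.12), and (1.19):

```python
# A rough Monte Carlo sketch (an addition; sample size and seed are arbitrary)
# confirming the singularity probabilities quoted above:
#   c_0 = P(|X| <= 2|Y|),               X, Y iid standard normal      (1.15)
#   c_2 = P(|X| <= 2|Y|),               X, Y iid standard Cauchy      (1.12)
#   c_1 = P(YZ > 0, |X| <= 2 sqrt(YZ)), X, Y, Z iid standard normal   (1.19)
import numpy as np

rng = np.random.default_rng(1)
N = 5_000_000

Xn, Yn, Zn = rng.standard_normal((3, N))
c0_hat = np.mean(np.abs(Xn) <= 2 * np.abs(Yn))
c1_hat = np.mean((Yn * Zn > 0) & (np.abs(Xn) <= 2 * np.sqrt(np.abs(Yn * Zn))))

Xc, Yc = rng.standard_cauchy((2, N))
c2_hat = np.mean(np.abs(Xc) <= 2 * np.abs(Yc))

print(f"c_0 ~ {c0_hat:.4f}   (reported 0.704832)")
print(f"c_2 ~ {c2_hat:.4f}   (reported 0.636834)")
print(f"c_1 ~ {c1_hat:.4f}   (reported 0.351488)")
```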